Dotscience Releases New Advancements
October 30, 2019

Dotscience announced new platform advancements that streamline deploying and monitoring ML models on Kubernetes clusters, making Kubernetes simple and accessible to data scientists.

New Dotscience Deploy and Monitor features dramatically simplify deploying ML models to Kubernetes and setting up monitoring dashboards for the deployed models with the cloud-native tools Prometheus and Grafana, reducing the time spent on these tasks from weeks to seconds.

Dotscience now also enables hybrid and multi-cloud scenarios where, for example, model training can happen on-prem using an attached Dotscience runner, and models can then be easily deployed to a Kubernetes cluster in the cloud for inference using a Dotscience Kubernetes deployer. Dotscience also announced a joint effort with S&P Global to develop best practices for collaborative, end-to-end ML data and model management that ensure the delivery of business value from AI.

“While there are visionaries like S&P in the market who also recognize the need for reproducibility, provenance and enhanced collaboration in the model development phase of the lifecycle, our push to simplify deployment and monitoring of AI/ML is based on the market insight that many businesses are still struggling with deploying their ML models, blocking any business value from AI/ML initiatives,” said Luke Marsden, CEO and founder of Dotscience. “In addition, monitoring models in ML-specific ways is not obvious to software-focused DevOps teams. By dramatically simplifying deployment and monitoring of models, Dotscience is making MLOps accessible to every data scientist without forcing them to set up and configure complex and powerful tools like Kubernetes, Prometheus and Grafana from scratch.”

Dotscience enables data science and ML teams to own and control the entire model development and operations process, from data ingestion, through training and testing, to deploying straight into a Kubernetes cluster, and monitoring that model in production to understand its behavior as new data flows in. Furthermore, alongside the built-in Jupyter environment, Dotscience users can now work from any development environment they like via the Dotscience Python library.
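The announcement does not include a code sample, but a minimal sketch of what driving Dotscience from an arbitrary Python environment might look like is shown below. Only ds.publish() is named in this announcement; the other helper calls (ds.start, ds.parameter, ds.summary) are assumed names modelled on typical experiment-tracking libraries, not confirmed Dotscience API.

```python
# Hypothetical sketch: tracking a training run with the Dotscience Python
# library from any editor or IDE. Only ds.publish() is named in the
# announcement; ds.start, ds.parameter and ds.summary are assumed helper
# names, not confirmed API.
import dotscience as ds

ds.start()                            # begin tracking a run (assumed helper)
ds.parameter("learning_rate", 0.01)   # record a hyperparameter (assumed helper)

# ... train and evaluate the model with any framework here ...
accuracy = 0.93

ds.summary("accuracy", accuracy)      # record a result metric (assumed helper)
ds.publish("trained baseline model")  # push the tracked run back to Dotscience Hub
```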

“This allows teams to work faster, at scale and with confidence that their development process is fully accountable. Also, by enabling hybrid and multi-cloud scenarios, where training happens on-prem where the data is, and the deployment to production and inference happens in the cloud where Kubernetes is easy to set up, we enable flexible use of on-prem infrastructure along with easy access to harness the power of Kubernetes in the cloud,” continued Marsden.

Data science and ML teams can use Dotscience to ingest data, perform data engineering, train and test models, and then deploy them to CI for further testing before final deployment to production with a single click, command or API call; the deployed models can then be statistically monitored.

Dotscience’s Deploy gives users the ability to:

- Handle both building the ML model into a Docker image and deploying it to a Kubernetes cluster

- Hand the entire CI/CD responsibility over to existing infrastructure, if preferred, or use lightweight built-ins

- Track the deployment of the ML model back to the provenance of the model and the data it was trained on to maintain accountability across the entire ML lifecycle

“In keeping with Dotscience’s product philosophy of maximizing interoperability, users can choose to deploy their models from Dotscience through the CI tool of their choice, including using Dotscience’s built-in CI step if they don’t have or want to have their own,” said Mark Coleman, VP of Product and Marketing at Dotscience. “If a problem is reported with a deployed ML model, it is simple to trace back from the model running in production to the full provenance of data, code and hyperparameters that created the model in development, making debugging and auditing intuitive and fast.”

Users can deploy their models in three main ways:

- UI deployments - After defining parameters in the UI, users can deploy straight from within the Dotscience Hub interface

- CLI deployments - The Dotscience CLI tool ‘ds’ can be used to deploy an ML model, with command-line parameters defining the exact details

- From the Python library - Deploy directly from the Python library with ds.publish(deploy=True), which also automatically sets up a statistical monitoring dashboard (a minimal sketch follows below)
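For the third option, the only call named in this announcement is ds.publish(deploy=True); the surrounding code below, including the description string, is an assumption for illustration rather than confirmed Dotscience API.

```python
# Hypothetical sketch of the Python-library deployment path. Only the
# ds.publish(deploy=True) call is named in the announcement; the
# description string and surrounding code are assumptions.
import dotscience as ds

# ... model training tracked as in the earlier sketch ...

# Publish the run and deploy the resulting model to the attached
# Kubernetes cluster; per the announcement this also sets up a
# statistical monitoring dashboard automatically.
ds.publish("deploy candidate model", deploy=True)
```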

Dotscience’s statistical monitoring feature allows ML teams to define which metrics they would like to monitor on their deployed models and then bring those metrics straight back into the Dotscience Hub interface where the team first developed the model. This allows ML teams to “own” the health of the model throughout the entire development lifecycle and avoids integrations with other monitoring solutions and costly handovers between teams. By enabling data science teams to own the monitoring of their models, Dotscience brings the notion of integrated DevOps teams to ML, eliminating silos, maximizing productivity and minimizing mean time to recovery (MTTR) if there are issues with a model.

“Often there’s a disconnect between the type of monitoring performed by operations teams, such as error rates and request latency, and the type of monitoring that machine learning teams need to do on their models when deployed to production, such as looking at the statistical distribution of predicted categories,” said Marsden. “With Dotscience, ML teams have insight into the context-specific monitoring information about their model, which better positions them to understand why an error occurred and respond to it, rather than putting this onus on a central operations team.”
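The announcement names Prometheus and Grafana as the underlying tools but does not show how an ML-specific metric such as the distribution of predicted categories would be exposed. The sketch below is a generic illustration using the open-source prometheus_client library, not Dotscience’s own implementation: it counts predictions by category so a dashboard can graph their distribution over time.

```python
# Generic illustration of the ML-specific monitoring described above:
# counting predictions by category so their distribution can be graphed
# in Prometheus/Grafana. This is not Dotscience's implementation; it uses
# the open-source prometheus_client library directly.
from prometheus_client import Counter, start_http_server

# A single counter, labelled by predicted class, for Prometheus to scrape.
PREDICTIONS = Counter(
    "model_predictions_total",
    "Number of predictions served, by predicted category",
    ["category"],
)

def record_prediction(category: str) -> None:
    """Call from the model-serving code after each prediction."""
    PREDICTIONS.labels(category=category).inc()

if __name__ == "__main__":
    start_http_server(8000)    # expose /metrics for Prometheus to scrape
    record_prediction("spam")  # example; real serving code would call this per request
```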
