LaunchDarkly announced the private preview of Warehouse Native Experimentation, its Snowflake Native App for running experimentation natively in the data warehouse.
Launching an airplane from an aircraft carrier is a systematic, well-coordinated process that involves reliable systems, high-performance catapults, precise navigation systems, and above all, a specialized crew with distinct roles and responsibilities for managing air operations. This crew, known as the flight deck crew, wears colored jerseys to visually distinguish its functions; everyone on the flight deck has a specific job. By analogy, launching machine learning (ML) models into production is not entirely different, except that instead of launching a 45,000-pound plane into the air, ML teams are launching trained ML models into production to serve predictions.
Several categorizations attempt to define this function of taking trained ML models and launching them into production. One of them is MLOps engineering, which can be defined as the technical systems and processes associated with the stages of the ML lifecycle (also referred to as the MLOps cycle), from data preparation and model building through production deployment and management.
While MLOps engineering entails the provisioning, deployment, and management of infrastructure that enables model building, data labeling, and model inference, it can go much deeper than that. MLOps engineering can entail developing algorithms too.
Mature IT functions like data engineering, data preparation, and data quality all have corresponding personas that perform specific tasks, or in the frequently mentioned parlance, "Jobs to Be Done."
ML engineering also has a specific persona, and that is the MLOps Engineer. What do MLOps Engineers do?
For the sake of simplicity, MLOps Engineers design, deploy, and operate the underlying systems (infrastructure) that allow data science teams to do their jobs, which include feature engineering, model training, model validation, and model refinement, to name a few. MLOps Engineers also automate the processes around those specific needs so that the work involved in launching ML models into production is streamlined, simplified, and instrumented.
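To make that automation concrete, here is a minimal, self-contained sketch of such a pipeline: toy feature engineering, a deliberately trivial threshold "model," and a validation gate that blocks promotion when accuracy falls below a cutoff. The step names and the 0.8 threshold are illustrative assumptions, not a standard MLOps API.

```python
# A minimal sketch of the kind of pipeline automation an MLOps Engineer
# might build around a data science team's workflow. Everything here is
# a toy stand-in for real feature engineering, training, and validation.

def engineer_features(raw):
    # toy feature engineering: min-max normalize values to [0, 1]
    # (assumes the inputs are not all identical)
    lo, hi = min(raw), max(raw)
    return [(x - lo) / (hi - lo) for x in raw]


def train(features, labels):
    # toy "model": a single cutoff chosen to maximize training accuracy;
    # predict 1 when the feature exceeds the cutoff
    best_cut, best_acc = 0.0, 0.0
    for cut in features:
        acc = sum((f > cut) == bool(y) for f, y in zip(features, labels)) / len(labels)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut


def validate(cutoff, features, labels, threshold=0.8):
    # validation gate: only promote models that clear the accuracy bar
    acc = sum((f > cutoff) == bool(y) for f, y in zip(features, labels)) / len(labels)
    return acc >= threshold, acc


def run_pipeline(raw, labels):
    # the automated sequence: features -> training -> validation gate
    feats = engineer_features(raw)
    model = train(feats, labels)
    ok, acc = validate(model, feats, labels)
    if not ok:
        raise RuntimeError(f"validation gate failed: accuracy={acc:.2f}")
    return model, acc
```

A production pipeline would add held-out validation data, experiment tracking, and a deployment step after the gate, but the pattern of chaining stages behind an automated quality check is the essence of what is being streamlined and instrumented.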
Just like any other IT role, there is a broad spectrum of functional tasks MLOps Engineers can undertake. Fundamentally, an MLOps Engineer fuses software engineering expertise with knowledge of machine learning.
While the number of tools, frameworks, and approaches continues to expand and evolve, certain skill sets transcend the specific tools and frameworks, which is why it's important to ground the discussion in first principles. There is a core list of skill sets an MLOps Engineer needs to carry out specific tasks, and while not all are required, the tasks an MLOps Engineer undertakes are a function of the existing composition, size, and maturity of the broader ML team.
Some of these first principles or core skill sets entail:
1. Programming experience
2. Data science knowledge
3. Familiarity with math and statistics
4. Problem-solving skills
5. Proficiency with machine learning and deep learning frameworks
6. Hands-on experience with prototyping
Related to these core skill sets are knowledge of and experience with programming languages, DevOps tools, and databases (relational, data warehousing, in-memory, etc.). A variety of online resources unpack the details of these skill sets, and they continue to evolve as more companies mainstream ML across their teams.
While definitions are important, the industry is still early in defining MLOps engineering and in characterizing the roles and responsibilities of an MLOps Engineer. On the journey toward understanding this domain, and the associated education and learning paths to becoming an MLOps Engineer, it's important not to be too dogmatic across the board. By focusing on the Jobs to Be Done, and applying that lens to the context of the project, company processes, and maturity of teams, companies can better structure and define the MLOps engineering crew that launches ML models into production.
Industry News
SingleStore announced the launch of SingleStore Flow, a no-code solution designed to greatly simplify data migration and Change Data Capture (CDC).
ActiveState launched its Vulnerability Management as a Service (VMaas) offering to help organizations manage open source and accelerate secure software delivery.
Genkit for Node.js is now at version 1.0 and ready for production use.
JFrog signed a strategic collaboration agreement (SCA) with Amazon Web Services (AWS).
mabl launched two new innovations, mabl Tools for Playwright and mabl GenAI Test Creation, expanding testing capabilities beyond the bounds of traditional QA teams.
Check Point® Software Technologies Ltd. announced a strategic partnership with leading cloud security provider Wiz to address the growing challenges enterprises face securing hybrid cloud environments.
Jitterbit announced its latest AI-infused capabilities within the Harmony platform, advancing AI from low-code development to natural language processing (NLP).
Rancher Government Solutions (RGS) and Sequoia Holdings announced a strategic partnership to enhance software supply chain security, classified workload deployments, and Kubernetes management for the Department of Defense (DOD), Intelligence Community (IC), and federal civilian agencies.
Harness and Traceable have entered into a definitive merger agreement, creating an advanced AI-native DevSecOps platform.
Endor Labs announced a partnership with GitHub that makes it easier than ever for application security teams and developers to accurately identify and remediate the most serious security vulnerabilities—all without leaving GitHub.
Are you using OpenTelemetry? Are you planning to use it? Click here to take the OpenTelemetry survey.
GitHub announced a wave of new features and enhancements to GitHub Copilot to streamline coding tasks based on an organization’s specific ways of working.
Mirantis launched k0rdent, an open-source Distributed Container Management Environment (DCME) that provides a single control point for cloud native applications – on-premises, on public clouds, at the edge – on any infrastructure, anywhere.
Hitachi Vantara announced a new co-engineered solution with Cisco designed for Red Hat OpenShift, a hybrid cloud application platform powered by Kubernetes.