First Principles for the MLOps Engineer
June 27, 2022

Taimur Rashid
Redis

Launching an airplane from an aircraft carrier is a systematic and well-coordinated process that involves reliable systems, high-performance catapults, precise navigation systems, and above all, a specialized crew with distinct roles and responsibilities for managing air operations. This crew, known as the flight deck crew, wears colored jerseys to visually distinguish its functions. Everyone on the flight deck has a specific job. Launching machine learning (ML) models into production is not entirely different, except that instead of launching a 45,000-pound plane into the air, ML teams are launching trained ML models into production to serve predictions.

Several categorizations define this function of taking trained ML models and launching them into production. One of them is MLOps engineering, which can be defined as the technical systems and processes associated with the stages of the ML lifecycle (also referred to as the MLOps cycle), from data preparation and model building to production deployment and management.
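The stages of the MLOps cycle named above can be sketched as a minimal pipeline. This is an illustrative toy, not a reference implementation: the function names, the dictionary-based "registry," and the trivial mean-predictor model are all assumptions made for the sake of a self-contained example.

```python
# A minimal sketch of the MLOps cycle stages:
# data preparation -> model building -> production deployment.

def prepare_data(raw):
    """Data preparation: filter out records with missing values."""
    return [r for r in raw if r is not None]

def build_model(train):
    """Model building: a toy 'model' that always predicts the training mean."""
    mean = sum(train) / len(train)
    return lambda _features: mean

def deploy(model, registry):
    """Production deployment: register the model so a serving layer can load it."""
    registry["production"] = model
    return registry

registry = {}
data = prepare_data([3.0, None, 5.0, 4.0])
model = build_model(data)
deploy(model, registry)
print(registry["production"]("any input"))  # -> 4.0
```

In a real system each stage would be a separate, instrumented service or job; the point here is only the hand-off structure between stages.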

While MLOps engineering entails the provisioning, deployment, and management of infrastructure that enables model building, data labeling, and model inference, it can go much deeper than that: MLOps engineering can also entail developing algorithms.

Mature IT functions like data engineering, data preparation, and data quality all have corresponding personas that perform specific tasks, or in the frequently mentioned parlance, "Jobs to Be Done."

ML engineering also has a specific persona, and that is the MLOps Engineer. What do MLOps Engineers do?

For the sake of simplicity: MLOps Engineers design, deploy, and operate the underlying systems (infrastructure) that allow data science teams to do their jobs, which include feature engineering, model training, model validation, and model refinement, to name a few. MLOps Engineers also automate the processes around those specific needs so that the work involved in launching ML models into production is streamlined, simplified, and instrumented.
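One concrete form that automation takes is a promotion gate: a check that only allows a newly trained model into production if it measurably beats the current one. The sketch below is a hedged illustration under assumed names and thresholds, not a prescribed workflow.

```python
# A toy promotion gate: part of automating the launch process described above.
# The metric, threshold, and function name are illustrative assumptions.

def promote_if_better(candidate_score, production_score, min_improvement=0.01):
    """Return True only if the candidate model's validation score beats the
    current production model's score by at least min_improvement."""
    return candidate_score >= production_score + min_improvement

print(promote_if_better(0.91, 0.88))  # candidate clearly better -> True
print(promote_if_better(0.88, 0.88))  # no improvement -> False
```

In practice a gate like this would run inside a CI/CD pipeline, pulling scores from an experiment tracker rather than taking them as literals.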

Just like any other IT role, there is a broad spectrum of functional tasks MLOps Engineers can undertake. Fundamentally, an MLOps Engineer fuses software engineering expertise with knowledge of machine learning.

While the number of tools, frameworks, and approaches continues to expand and evolve, certain skill sets are needed that transcend the specific tools and frameworks. That’s why it’s important to ground the discussion in first principles. There is a core list of skill sets an MLOps Engineer needs to carry out these tasks, and while not all are required, the tasks an MLOps Engineer undertakes are a function of the existing composition, size, and maturity of the broader ML team.

Some of these first principles or core skill sets entail:

1. Programming experience

2. Data science knowledge

3. Familiarity with math and statistics

4. Problem-solving skills

5. Proficiency with machine learning and deep learning frameworks

6. Hands-on experience with prototyping

Related to these core skill sets are knowledge of and experience with programming languages, DevOps tools, and databases (relational, data warehousing, in-memory, etc.). A variety of online resources unpack the details of these skill sets, and the list continues to evolve as more companies mainstream ML across their teams.

While definitions are important, the industry is still early in defining MLOps engineering and in characterizing the roles and responsibilities of an MLOps Engineer. In the journey toward understanding this domain, and the education and learning paths to becoming an MLOps Engineer, it’s important not to be too dogmatic across the board. By focusing on the Jobs to Be Done, and applying that to the context of the project, company processes, and maturity of teams, companies can better structure and define the MLOps engineering crew that can launch ML models into production.

Taimur Rashid is Chief Business Development Officer at Redis