Enterprise Readiness of Cloud MLOps: A GigaOm Benchmark Report, v1.0

Azure Machine Learning, Amazon SageMaker, and Google Vertex AI

1. Executive Summary

MLOps is a practice for collaboration between data science and operations teams to manage production machine learning (ML) lifecycles. As an amalgamation of machine learning and operations, MLOps applies DevOps principles to ML delivery, enabling ML-based innovation at scale and resulting in:

  • Faster time to market for ML-based solutions.
  • More rapid rate of experimentation, driving innovation.
  • Assurance of quality, trustworthiness, and ethical AI.

MLOps is essential for scaling ML. Without it, enterprises risk struggling with costly overhead and stalled progress. Several vendors support MLOps: the major offerings are Microsoft Azure ML, Google Vertex AI, and Amazon SageMaker. We looked at these offerings from the perspective of enterprise features and time to value.

For the analysis, we used two categories: time to value and enterprise capabilities. As shown in Table 1, our assessment resulted in a score of 2.95 (out of 3) for Azure ML using managed endpoints, 2.83 for Amazon SageMaker, and 2.12 for Google Vertex AI. Higher scores are better; the scoring rubric and methodology are detailed in the appendix to this report.

Table 1. Overall Assessment Scores

Vendor             Time to Value & Enterprise Capabilities
Azure ML           2.95 (out of 3)
Amazon SageMaker   2.83 (out of 3)
Google Vertex AI   2.12 (out of 3)
Source: GigaOm 2022

We hope this report is informative and helpful in uncovering some of the challenges and nuances of platform selection; we leave the issue of fairness for the reader to determine. We strongly encourage you to look past marketing messages and discern for yourself what is valuable in technical terms, according to your goals. We also encourage you to compile your own representative MLOps use cases and workflows and to review these platforms against your own requirements.

2. MLOps Enterprise Readiness

The MLOps process primarily revolves around data scientists, ML engineers, and application developers creating, training, and deploying models on prepared data sets. Once trained and validated, models are deployed into an application environment that can deal with large quantities of (often streamed) data, enabling users to derive insights.

For modern enterprises, use of ML goes to the heart of digital transformation, enabling organizations to harness the power of their data and deliver new and differentiated services to their customers. Achieving this goal is predicated on three pillars:

  • Development of such models requires an iterative approach so the domain can be better understood and the models improved over time, as new insights emerge from data and inference.
  • Automated tools and repositories are needed to store and track models, code, and data lineage, and to provide a target environment for deploying ML-enabled applications at speed without undermining governance.
  • Developers and data scientists need to work collaboratively to ensure ML initiatives are aligned with broader software delivery and, more broadly still, with IT-business priorities.

To a large extent, these goals can be addressed using MLOps platforms. A lack of functionality (and performance) in a chosen MLOps solution can lead to workarounds, more personnel effort, and missed opportunities. Our assessment shows two main ways the platforms can be differentiated: time to value and enterprise readiness capabilities.

  • Time-to-value dimensions are ease of setup and use, and MLOps workflow.
  • Enterprise capability dimensions are security, governance, and automation.

Enterprise Time to Value

Time to value measures how much the cloud MLOps platform can shorten the time from installation to when the machine learning lifecycle is at full daily operating capacity and delivering value to the business.

This is a critical factor in the race to leverage machine learning to improve business outcomes. Enterprises are clamoring to employ machine learning and artificial intelligence to make their businesses more efficient, customer-oriented, profitable, and competitive. The less time it takes to get a machine learning platform up and running and operating effectively, the sooner a company can go to market with its newfound insights and capabilities.

Enabling this are:

  • Ease of setup and use. This considerably shortens the time to value, especially when an organization is just beginning its machine learning journey on a fully managed cloud service. Ease of setup and use has a multiplicative effect. As new data science teams are onboarded, a platform that is easy to use shortens the time it takes to set up their environment, establish workflows, and adopt and tailor the platform to meet specific requirements.
  • MLOps Workflow. Assessing the MLOps workflow capabilities of each platform measures how well the cloud platform improves the efficiency of ongoing, day-to-day operations as machine learning models traverse a workflow through orchestration.

Enterprise Capability Assessment

The features enabled by the cloud platform determine its enterprise readiness and contribute to a robust, feature-rich environment that is easier to manage and control.

These features ensure the MLOps platform service can operate, and be secured, governed, and monitored, within a modern IT infrastructure and governance/compliance structure. They also include automation and integration capabilities, such as code management and continuous integration and continuous delivery (CI/CD) tasks.

These features drive security, governance, and automation as follows:

Security
In any IT discipline, a best practice is to restrict access to the users who need it. Each cloud platform differs in the authentication and authorization model used by the service. Some organizations might also want to restrict network access or securely join resources in their on-premises network with the cloud. Data encryption is also vital, both at rest and while data moves between services. Finally, you need to be able to monitor the service and produce an audit log of activity.

To fully secure the MLOps platform, network perimeters must be in place to guard against potential attackers and data exfiltration. IT administrators need to configure the platform and other services, like storage, key vault, container registry, and compute resources (virtual machines), in a network-secure way, such as using virtual networks to enable end-to-end machine learning lifecycle security. A virtual network acts as a security boundary, isolating your resources from the public internet. You must also be able to join a cloud virtual network to an on-premises network. By joining networks, you can securely train models and access deployed models for inference.
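To make this concrete, the following is a minimal sketch of how a network-restricted workspace might be created on one of the evaluated platforms (Azure ML, using the Python SDK v2). It is a hedged illustration rather than the exact configuration we tested; it assumes the azure-ai-ml and azure-identity packages, the subscription, resource group, and workspace names are hypothetical, and a production deployment would also place storage, key vault, and container registry behind private endpoints.

    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import Workspace

    # Hypothetical subscription and resource group, for illustration only.
    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="mlops-rg",
    )

    # Create a workspace that refuses traffic from the public internet; access
    # then flows through private endpoints inside the virtual network.
    workspace = Workspace(
        name="secure-mlops-ws",
        location="eastus",
        public_network_access="Disabled",
    )
    ml_client.workspaces.begin_create(workspace).result()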

Governance
This can be a critical requirement and should not be overlooked. IT administration, data governance, and corporate compliance must ensure users are creating machine learning workspaces and other services that remain compliant with corporate standards or government regulations. Governance capabilities of the MLOps platform should allow users to set up network and data protection policies that, for example, ensure users cannot create workspaces with public IPs or without customer-managed keys.

Additionally, monitoring is key to maintaining effective governance. The cloud platform should offer full-stack monitoring, whether it’s embedded in the MLOps platform GUI or part of the cloud vendor’s overall monitoring toolset.

Automation
This is a key differentiator in MLOps platforms because of the efficiency gains and saved effort achieved through well-developed features. Generally speaking, automation falls into the following categories:

  • Automated experiments (for example, the ability to automatically pick an algorithm and generate a deployment-ready model)
  • Automated workflows (such as the ability to automate workflows by automating time-consuming and iterative tasks)
  • Code, application, and CI/CD orchestration (using GitHub or Team Foundation Server for versioning, approvals, gate phasing, and deployments)
  • Event-driven workflows (the ability to trigger a workflow activity when a specified event occurs, such as a failed run or the arrival of new data)

3. Competitive Platforms

Azure Machine Learning
Azure Machine Learning (Azure ML) is a fully managed platform as a service. Azure ML allows developers and data scientists to build, train, and deploy machine learning (ML) models, and accelerate time to value with end-to-end, fully featured MLOps.

Azure ML provides end-to-end lifecycle management, keeping track of all experiments. It stores the code, settings, and environment details to facilitate experiment replication. Models can be packaged into containers and deployed like any other container running on Kubernetes.

There are several ways to carry out machine learning on Microsoft Azure’s cloud computing platform. A popular choice is to leverage the Azure Machine Learning service, a collaborative environment that enables developers and data scientists to rapidly build, train, deploy, and manage machine learning models. Figure 1 shows the Azure solution architecture.

Figure 1. Azure Solution Architecture

  • Compute instance is used as a managed workstation by data scientists to build models. An IT admin can create a compute instance behind a VNet if there are restrictions on using a public IP.
  • Compute cluster is used as training compute to train ML models. An IT admin (not shown) can create a compute cluster behind a VNet, or enable a private link, if there are restrictions in place against using a public IP.
  • Once a model is created, it can be deployed on Azure Kubernetes Service (AKS), Azure Container Instances (ACI), or managed endpoints for both online and batch inference. A private AKS cluster with no public IP can be attached to the AML workspace, and an internal load balancer can be used so that the deployed scoring endpoint is not visible outside the virtual network. All the scoring requests to the deployed model are made over TLS/SSL.

With flexible compute options in Azure, Azure ML makes it easy to start locally and scale as needed.
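As an illustration of the deployment options above, the sketch below deploys a model to an Azure ML managed online endpoint using the Python SDK v2. It is a simplified example rather than the exact workflow we used in testing; the endpoint, workspace, and model names are hypothetical, and it assumes an MLflow-format model so no scoring script is required.

    from azure.identity import DefaultAzureCredential
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import ManagedOnlineEndpoint, ManagedOnlineDeployment, Model

    ml_client = MLClient(
        credential=DefaultAzureCredential(),
        subscription_id="<subscription-id>",
        resource_group_name="mlops-rg",
        workspace_name="mlops-ws",
    )

    # Create the endpoint, then attach a deployment that serves the model.
    endpoint = ManagedOnlineEndpoint(name="attrition-endpoint", auth_mode="key")
    ml_client.online_endpoints.begin_create_or_update(endpoint).result()

    deployment = ManagedOnlineDeployment(
        name="blue",
        endpoint_name="attrition-endpoint",
        model=Model(path="./model", type="mlflow_model"),  # MLflow model: no scoring script needed
        instance_type="Standard_DS3_v2",
        instance_count=1,
    )
    ml_client.online_deployments.begin_create_or_update(deployment).result()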

Amazon SageMaker
Amazon SageMaker is an accessible Amazon Web Services offering for fully managed machine learning as a service.

SageMaker is a machine learning environment that provides tools such as Jupyter for model building and deployment. SageMaker comes with an impressive set of algorithms. These include Linear Learner (a supervised method for classification and regression), Latent Dirichlet Allocation (an unsupervised method for discovering document categories), and many more.

As depicted in Figure 2, SageMaker comprises many services connected via an API that coordinates the ML lifecycle. SageMaker uses Docker to execute ML logic. You can download a library that lets you easily create Docker images. SageMaker retrieves a specific Docker image from ECR and then uses this image to run containers to execute the job.

Figure 2. AWS Solution Architecture

The artifacts from the model training are stored in S3. SageMaker launches EC2 instances to perform the work whenever developers create a job. SageMaker relies on Identity and Access Management (IAM) users for authentication and access control, and on HTTP requests to the API.

SageMaker works extensively with the Python SDK open-source library for model training using prebuilt algorithms and Docker images, as well as to deploy custom models and code.
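To illustrate this workflow, the sketch below trains and deploys a model with the prebuilt Linear Learner algorithm using the SageMaker Python SDK. It is a hedged, simplified example rather than the exact code we ran; the IAM role ARN, S3 bucket, and feature count are hypothetical placeholders.

    import sagemaker
    from sagemaker import image_uris
    from sagemaker.estimator import Estimator
    from sagemaker.inputs import TrainingInput

    session = sagemaker.Session()
    role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # hypothetical execution role

    # Pull the prebuilt Linear Learner container image from ECR for the current region.
    image_uri = image_uris.retrieve("linear-learner", region=session.boto_region_name)

    estimator = Estimator(
        image_uri=image_uri,
        role=role,
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://example-bucket/attrition/output",  # hypothetical bucket
        sagemaker_session=session,
    )
    # feature_dim must match the number of input features in the training data.
    estimator.set_hyperparameters(predictor_type="binary_classifier", feature_dim=34)

    # SageMaker launches managed EC2 instances, runs the container, and writes artifacts to S3.
    train_input = TrainingInput("s3://example-bucket/attrition/train", content_type="text/csv")
    estimator.fit({"train": train_input})

    # deploy() stands up a real-time HTTPS inference endpoint.
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")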

You can also add your own methods and run models, leveraging SageMaker’s deployment mechanisms, or integrate SageMaker with a machine learning library. SageMaker’s design supports ML applications from model data creation to model execution. The solution architecture also makes it versatile and modular. You can use SageMaker for only model construction, training, or deployment.

Vertex AI
Google Cloud (GCP) offers Vertex AI as a fully managed, end-to-end platform for data science and machine learning. Figure 3 provides an overview.

Figure 3. Google Cloud Solution Architecture

Vertex AI is designed to make Google AI accessible to enterprise ML workflows. Vertex AI has many services for MLOps pipelines that provide support for creating ML pipelines. Continuous evaluation helps you monitor model performance, and Deep Learning Containers provide preconfigured and optimized containers for deep-learning environments.

Vertex AI is a suite of services on Google Cloud specifically targeted at building, deploying, and managing ML models in the cloud. It is used with AutoML (Google’s automated model selection and training engine) and with models built in TensorFlow and scikit-learn.

Vertex AI offers a suite of services designed to support the activities seen in a typical ML workflow: prepare, build, train, validate, and deploy. AutoML provides a GUI for model selection for faster performance and more accurate predictions, while Vertex AI Vizier offers a complete ML black-box optimization service.

In addition, Vertex AI provides data labeling for tasks such as classification, object detection, and entity extraction.
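As a hedged illustration of this workflow, the sketch below creates a tabular dataset, runs an AutoML classification job, and deploys the resulting model with the google-cloud-aiplatform SDK. The project ID, bucket, and target column are hypothetical, and the example simplifies the options Vertex AI exposes.

    from google.cloud import aiplatform

    # Hypothetical project and region.
    aiplatform.init(project="example-project", location="us-central1")

    # Register the attrition CSV (already in Cloud Storage) as a managed tabular dataset.
    dataset = aiplatform.TabularDataset.create(
        display_name="employee-attrition",
        gcs_source=["gs://example-bucket/attrition.csv"],
    )

    # Let AutoML pick and tune a classification model against the target column.
    job = aiplatform.AutoMLTabularTrainingJob(
        display_name="attrition-automl",
        optimization_prediction_type="classification",
    )
    model = job.run(
        dataset=dataset,
        target_column="Attrition",
        budget_milli_node_hours=1000,
    )

    # Deploy the trained model to a managed online prediction endpoint.
    endpoint = model.deploy(machine_type="n1-standard-4")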

4. Field Test Setup

The field test was designed to assess the three platforms’ capabilities, features, ease of use, and documentation. We strove to eliminate as much subjectivity as possible from the test plan, methodology, and measurement. However, we concede that assessing MLOps for enterprise readiness in the cloud is challenging. Certain use cases may favor one vendor over another in terms of feature availability, environments, established workflows, and requirements. Our assessment demonstrates a slice of potential configurations and workloads.

GigaOm partnered with Microsoft, the report’s sponsor, to select competitive platforms that offer comparable features and capabilities to address organizations’ MLOps use cases. GigaOm selected the test scenario, methodology, and configuration of the environments. The parameters used to replicate this test are provided throughout this document. We have provided enough information in the report for anyone to reproduce this test.

Test Scenario
For the MLOps platforms, we selected a straightforward but very common use case for our testing. A company has an attrition dataset and would like to build a model to uncover the factors that lead to employee attrition and explore important questions such as:

  • Show a breakdown of distance from home by job role and attrition.
  • Compare average monthly income by education and attrition.

The dataset is in CSV format, which we uploaded to the respective cloud storage for each platform. We then used each platform to build, train, and deploy the model.
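As one example of the first step, on Azure ML the uploaded CSV can be registered as a versioned data asset so later training jobs can reference it. This is a minimal sketch assuming an authenticated MLClient (ml_client); the file path and asset name are hypothetical.

    from azure.ai.ml.entities import Data
    from azure.ai.ml.constants import AssetTypes

    # Register the local attrition CSV as a versioned data asset in the workspace.
    attrition_data = Data(
        path="./data/attrition.csv",
        type=AssetTypes.URI_FILE,
        name="employee-attrition",
        description="Employee attrition dataset used for the field test",
    )
    ml_client.data.create_or_update(attrition_data)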

The field test then consists of five separate tests:

  • Test 1 – Ease of setup and use
  • Test 2 – MLOps workflow
  • Test 3 – Security
  • Test 4 – Governance
  • Test 5 – Automation

We document our approach to each test and how we scored each set of steps in the appendix below.

5. Field Test Results

This section analyzes the levels of differentiation between the three MLOps cloud vendor platforms described in the previous section.

Test 1: Ease of Setup and Use

The overall results are as shown in Table 2.

Table 2. Ease of Setup and Use Overall Results


Vendor             Create Workspace   Create and Manage Resources   Set Up IDEs   TOTAL
Azure ML           3                  3                             3             3
Amazon SageMaker   3                  3                             3             3
Google Vertex AI   2                  3                             3             2.67
Source: GigaOm 2022

Beginning with the initial setup and creation of the ML workspace, all three platforms have an intuitive point-and-click interface. The documentation supports the activity, but it was barely needed as the interface walked us through the environment’s configuration, networking, security, and storage. In most cases, we had a working environment ready within minutes.

Next, we tested the creation of compute resources—both a single compute instance (or virtual machine) and a production-grade cluster. All three platforms offered an easy-to-use point-and-click interface.

Third, we set up our integrated development environment. All three platforms offered simple and easy methods for creating notebooks, installing additional software (e.g., Python packages), accessing file stores (e.g., Azure Blob Storage, AWS S3), and integrating with GitHub for CI/CD processes.

Test 2: MLOps Workflow

The overall results for MLOps Workflow are shown in Table 3.

Table 3. MLOps Workflow Overall Results


Vendor             Model Orchestration   Data Orchestration   Pipeline Orchestration   TOTAL
Azure ML           3                     3                    3                        3
Amazon SageMaker   3                     3                    3                        3
Google Vertex AI   3                     1                    3                        2.33
Source: GigaOm 2022

To assess these capabilities, we compared both model and data orchestration across the three platforms. We also evaluated the ability to distribute and reuse the models by sharing them with our team members. Model orchestration involves facilitating the different phases through a well-defined and repeatable workflow to improve productivity in machine learning model development, promoting a normalized and structured codebase, and creating a way to systematically reproduce model development steps. For our model development, we used the UI as much as possible for our orchestration tests; otherwise, we used the command line interface (CLI).

For data orchestration, we took each platform through its paces: importing new data, configuring the data source, defining its schema, validating, cleansing, transforming, normalizing, and finally staging the data set for further integrations and downstream use. This was all a straightforward process using the import data module in Azure ML. We appreciated the ability to view descriptive statistics for columns of data, as well as the handy “clean missing data” module. Transformations, normalizations, and all modules were drag-and-drop and code-free. SageMaker was nearly as robust and intuitive, offering all the same functionality. Unfortunately, during our evaluation, Google Vertex AI was painfully behind in this arena: these data orchestration operations required multistep, code-heavy tasks without a visual, drag-and-drop interface.

To assess pipeline orchestration, we evaluated the ability of each platform to cache stages on subsequent runs. All three platforms were successful in doing so. Like model orchestration, pipeline orchestration is about improving productivity in pipeline development, as well as being able to systematically reproduce model pipelines and their components. For our pipeline setup and tests, we used the UI as much as possible; otherwise, we used the command line interface (CLI). A sketch of this stage-caching behavior appears below.
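The sketch shows what stage caching can look like on one of the platforms (Azure ML, Python SDK v1): setting allow_reuse=True on a pipeline step lets subsequent runs skip the step when its code and inputs are unchanged. The script names, compute target, and experiment name are hypothetical, and the equivalent behavior on the other platforms is configured differently.

    from azureml.core import Workspace, Experiment
    from azureml.pipeline.core import Pipeline
    from azureml.pipeline.steps import PythonScriptStep

    ws = Workspace.from_config()

    # Data preparation step; cached on reruns when its code and inputs are unchanged.
    prep_step = PythonScriptStep(
        name="prep-data",
        script_name="prep.py",
        source_directory="./src",
        compute_target="cpu-cluster",
        allow_reuse=True,
    )

    # Training step, which runs after data preparation.
    train_step = PythonScriptStep(
        name="train-model",
        script_name="train.py",
        source_directory="./src",
        compute_target="cpu-cluster",
        allow_reuse=True,
    )
    train_step.run_after(prep_step)

    pipeline = Pipeline(workspace=ws, steps=[train_step])
    Experiment(ws, "attrition-pipeline").submit(pipeline)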

Test 3: Security

The overall results for Security testing are as shown in Table 4.

Table 4. Security Overall Results


Vendor             Network Security   User Security   Data Security   TOTAL
Azure ML           3                  3               3               3
Amazon SageMaker   3                  3               2.67            2.89
Google Vertex AI   0.67               2.67            3               2.11
Source: GigaOm 2022

Both Azure ML and Amazon SageMaker offered the fully realized security features we sought by isolating workspaces and training environments within a virtual network or virtual private cloud (VPC). Google Vertex AI missed on these requirements by not offering the ability to isolate either workspaces or the training environment out of the box, which left our team accessing resources over the public internet. We gave it partial credit because there is a method in beta to create a service perimeter and place resources behind it.

In addition to network security, user security is extremely important. Identity and access management (IAM) determines who should have access to the service and what operations they are authorized to perform. The cloud MLOps platform should provide built-in role-based access controls (RBAC) for common management scenarios. Azure ML received full marks due to its built-in RBAC for common management scenarios. Azure Active Directory (Azure AD) can assign these RBAC roles to users, groups, service principals, or managed identities to grant or deny access to resources and operations. Those familiar with Active Directory and Microsoft Single Sign-On (SSO) will appreciate this.

Amazon SageMaker likewise showed its ability to set up user security through its uniform IAM service, and it can also handle managed identity access through AWS Single Sign-On. The only downside is that it would require some adoption and migration work unless your enterprise already manages infrastructure security under AWS IAM and SSO. Google Vertex AI satisfied our requirements to a lesser degree with its IAM service. Also, we were unable to find managed identity support, either natively or through a third-party solution.

Finally, an MLOps platform requires security mechanisms that companies can leverage to protect data and maintain confidentiality and integrity. A fully capable platform must support encryption at rest, which includes encrypting data residing in persistent storage on physical media in any digital format. The platform should also support encryption using cloud vendor-managed keys or customer-managed keys, as well as encryption in transit using the transport layer security (TLS) protocol to protect data traveling between the service and on-premises or remote users. All three platforms offered these capabilities in our tests, with Amazon SageMaker scoring slightly lower on data security (see Table 4).

Test 4: Governance

The overall results for Governance are shown in Table 5.

Table 5. Governance Overall Results


Vendor             Monitoring   Control   TOTAL
Azure ML           3            3         3
Amazon SageMaker   3            2         2.5
Google Vertex AI   2            0         1
Source: GigaOm 2022

Only Azure ML allowed users to set up network and data protection policies that, for example, will make sure users cannot create workspaces with public IPs or without customer-managed keys.

In terms of monitoring, Azure ML has a built-in monitoring capability (Azure Monitor) that allowed us to track key pipeline metrics. For AWS, Model Monitor continuously monitored the quality of our SageMaker machine learning models in production. However, it lacked a built-in overall monitoring capability, and we had to rely on Amazon CloudWatch logs for our other metrics and monitoring requirements. Google Vertex AI has improved in this area over the former AI Platform. In addition to its audit log monitoring, we could now track pipeline metrics and job activity in the interface.

For Governance Controls, Azure ML is able to enforce compliance with corporate standards through its security controls policies. AWS offers a number of tools and resources for compliance standards; unfortunately, they are not fully integrated and require the customer to stitch them together themselves. This feature is still missing in Vertex AI.

Test 5: Automation

The overall results for Automation are as shown in Table 6.

Table 6. Automation Overall Results


Vendor             Experiments   Workflows   Orchestration   Event-Driven   TOTAL
Azure ML           3             3           2               3              2.75
Amazon SageMaker   3             3           2               3              2.75
Google Vertex AI   3             2           2               3              2.5
Source: GigaOm 2022

Azure ML offered Automated ML to perform automated experiments, which iterated over many combinations of algorithms and hyperparameters and helped us find the best model based on a success metric of our choosing. SageMaker had Autopilot, which explored our data, selected the algorithms relevant to our problem type, and prepared the data to facilitate model training and tuning. Google Vertex AI offers the black-box service Vertex Vizier to perform automated optimizations.
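To give a sense of what this looks like in code, the sketch below submits an automated ML classification experiment with the Azure ML Python SDK v1 and retrieves the best model. It is a simplified, hedged example; the dataset, compute target, and experiment names are hypothetical.

    from azureml.core import Workspace, Experiment, Dataset
    from azureml.train.automl import AutoMLConfig

    ws = Workspace.from_config()
    training_data = Dataset.get_by_name(ws, "employee-attrition")

    # Automated ML iterates over algorithms and hyperparameters, ranking runs
    # by the chosen success metric.
    automl_config = AutoMLConfig(
        task="classification",
        primary_metric="AUC_weighted",
        training_data=training_data,
        label_column_name="Attrition",
        compute_target="cpu-cluster",
        n_cross_validations=5,
    )

    run = Experiment(ws, "attrition-automl").submit(automl_config)
    run.wait_for_completion(show_output=True)
    best_run, fitted_model = run.get_output()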

We also tested workflow automation features. For example, Azure ML allowed us to join a data preparation step to an automated ML step. SageMaker provided some MLOps templates that automated some of the model building and deployment pipelines. Google Vertex AI offered some no-code capabilities with its built-in algorithms. We used it by submitting training data, selecting an algorithm, and allowing Google Vertex AI Training to handle the preprocessing and training.

The code and application orchestration tests also revealed some differences in the platforms. Azure ML offered versioning through GitHub Actions and direct deployment; however, we found the approvals and gate-phasing options through GitHub Environments not yet fully capable. Azure, however, offers a mature capability in Azure DevOps that could be leveraged for release pipelines with stages and approvals. SageMaker allowed us to register a model by creating a model version that specifies the model group to which it belongs. We performed approvals and gate-phasing in the SageMaker UI and published from Jupyter, which we also found to be partially capable. Google Vertex AI allowed us to version our code using its REST API (projects.models.version) and publish from the GCP Console, but we found no approval or gate-phasing capabilities.

Finally, Azure allowed us to create an event-driven application based on Azure ML events, such as failure notification emails or pipeline runs, when Azure Event Grid detected certain conditions. With SageMaker, we created actions on rules using CloudWatch and AWS Lambda. We also set up S3 Bucket event notifications. With Google Vertex AI, it was a bit harder. We created a workflow using the SDK with Cloud Functions to enable event-triggered Pipeline calls. Unfortunately, this required some additional setup and Kubeflow, which complicated the solution.
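As an example of the SageMaker pattern, the sketch below is an AWS Lambda handler that starts a SageMaker pipeline run when an S3 event notification reports newly arrived training data. The bucket wiring and pipeline name are hypothetical; the handler only illustrates the event-driven shape of the workflow.

    import uuid
    import boto3

    sagemaker_client = boto3.client("sagemaker")

    def lambda_handler(event, context):
        # Invoked by an S3 event notification; each record describes an object that changed.
        for record in event.get("Records", []):
            key = record["s3"]["object"]["key"]
            if key.endswith(".csv"):
                # Kick off the (hypothetical) model-building pipeline for the new data.
                sagemaker_client.start_pipeline_execution(
                    PipelineName="attrition-pipeline",
                    ClientRequestToken=str(uuid.uuid4()),
                )
        return {"status": "ok"}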

Overall Scoring Rubric

Table 7 presents the aggregate scoring for the solutions in our assessment tests.

Table 7. Cloud MLOps Scoring

Test                     Azure ML   Amazon SageMaker   Google Vertex AI
Setup Test Score         3          3                  2.67
MLOps Test Score         3          3                  2.33
Security Test Score      3          2.89               2.11
Governance Test Score    3          2.5                1
Automation Test Score    2.75       2.75               2.5
OVERALL SCORE            2.95       2.83               2.12
Source: GigaOm 2022

6. Conclusion

Overall, we found Azure ML the easiest platform to set up and test, to orchestrate models and data on, and to secure.

Azure ML shined in governance monitoring and control, a critical requirement that should not be overlooked. IT administration, data governance, and corporate compliance need the capabilities to make sure that data scientists and other users are creating machine learning workspaces and other services that remain compliant with corporate standards or government regulations. Governance capabilities of the MLOps platform should allow users to set up network and data protection policies that, for example, will make sure users are not able to create workspaces with public IPs or without customer-managed keys. Only Azure ML offers the ability to do this.

Azure ML and SageMaker were the best in our judgment for automation.

While the MLOps selection will generally coordinate with the cloud platform decision, it is important to know the relative strengths of the various MLOps solutions and incorporate MLOps into plans for delivering the benefits of ML to the organization. Once you have made some discrete progress and need to consolidate and coordinate ML efforts at scale, you are ready for MLOps. The opportunity exists to drive MLOps as a practice, assuring a framework of governance and tooling that can minimize bottlenecks as efforts progress.

7. Appendix: Assessment Methodology and Scoring

This appendix explains the assessment methodology we employed and how we scored each of the respective cloud vendor platforms. The methodology and scoring used a rubric that we developed to test the key criteria for MLOps enterprise readiness. The time-to-value and enterprise capability dimensions are covered in Section 4 of the above discussion.

To score these features, we used the following scale. Each component is given a score, and the results are totaled and averaged.

  • Fully capable or fully integrated with another tool* – 3
  • Partially capable** – 2
  • Capable only with do-it-yourself external tool*** – 1
  • Missing – 0

*Examples include AutoML or a complementary integrated cloud service, like Google Dataproc, Amazon S3, Azure Data Factory, etc.

**Features in Beta or Preview were given a score of 2.

***Do-it-yourself requires a third-party or open-source tool that requires the customer to do their own integration.

Test 1 – Ease of Setup and Use

To assess ease of setup and use, we performed the following tests by simulating the setup of our development environment as if we were operations and data science teams using the platform for the first time.

Test 1a – Create ML workspace
We assessed how quickly and easily a data persona can create, configure (networking and security), and connect to a workspace using the vendor portal. We looked at policies, templates, and cloud adoption frameworks, e.g., Terraform Registry.

Test 1b – Create and manage compute resources
We assessed how quickly and easily a data persona can deploy and attach compute resources to a workspace. In this case, we created a compute instance/cluster with startup scripts and an auto-shutdown policy, provisioned by an admin persona but assigned to a data scientist, and placed it behind a VNet with no public IP. We deployed and exercised the following resource types and operations:

  • Single instance
  • Production-grade cluster
  • Start-stop-delete resource
  • Access the instance
  • Install extensions, packages

Test 1c – Set up development environments
We assessed how quickly/easily a data persona can set up development environments, including:

  • Notebooks (Jupyter, etc.)
  • Software (Python, Docker, etc.)
  • File stores (input and output folders)
  • Code management (Git, Team Foundation Server, etc.)

Test 2 – MLOps Workflow

To assess MLOps workflow, we performed the following tests by simulating the same operations in our development environment as if we were carrying out the operations on a day-to-day basis.

Test 2a – Model orchestration
We assessed how quickly and easily a data scientist persona can perform the following:

  • Build models, i.e., the one-click ability for a data scientist to launch a Jupyter, RStudio, or terminal interface to build models
  • Distribute models
  • Reuse models

Test 2b – Data orchestration
We assessed how quickly and easily a data engineer could perform the following data operations for both model development and automated ML:

  • Import new data
  • Validate and cleanse data
  • Transform (data munging) and normalize data
  • Stage data

Test 3 – Security

To assess security as an enterprise readiness capability, we performed the following tests:

Test 3a – Network security
We assessed the ability to put network perimeters in place to guard against potential attackers and data exfiltration, including isolating resources in a virtual network, such as:

  • Workspaces
  • Training environment
  • Policies and templates

Test 3b – User security
We assessed the ability to determine which users should have access to resources and what operations they are authorized to perform, including:

  • Identity and access management (IAM)
  • Managed identities—assess the ability to use managed identities to access resources without embedding credentials inside code
  • Personas

Test 3c – Data security
We assessed the availability of mechanisms to protect data, maintaining its confidentiality and integrity, including:

  • Encryption of data at rest and managed keys
  • Encryption of data in transit (TLS)
  • Policy inheritance

Test 4 – Governance

To assess governance as an enterprise readiness capability, we performed the following tests:

Test 4a – Monitoring
We assessed the capabilities to monitor logs and activities and track metrics from pipeline experiments.

Test 4b – Control
We assessed the ability to enforce compliance with corporate standards.

Test 5 – Automation

To assess automation as an enterprise readiness capability, we performed the following tests:

Test 5a – Experiments
We assessed the ability to automatically pick an algorithm and generate a deployment-ready model.

Test 5b – Workflows
We assessed the ability to automate workflows by automating time-consuming and iterative tasks.

Test 5c – Code and app orchestration
We assessed how quickly and easily an MLOps persona could support the following CI/CD activities (e.g., we used GitHub Actions for these tasks in Azure ML):

  • Versioning
  • Approvals
  • Gate phasing
  • Deploying/publishing

Test 5d – Event-driven
We assessed the ability to trigger an event-based workflow.

8. About William McKnight

William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.

Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.

9. About Jake Dolezal

Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.

10. About GigaOm

GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.

GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.

GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.

11. Copyright

© Knowingly, Inc. 2022 "Enterprise Readiness of Cloud MLOps: A GigaOm Benchmark Report" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.