Scaling AI While Navigating the Current Uncertainty - Dice Insights

The uncertainty and complexity that recent economic difficulties have introduced into the business landscape have left many businesses reeling. While adjusting to the new normal, businesses are under pressure to find new efficiencies and discover previously untapped sources of economic opportunity, making A.I. and machine learning models more important than ever for critical and often time-sensitive business decisions.

The time for A.I. experimentation is over. We have arrived at the point where A.I. has to produce results and drive real revenue, while safeguarding the business from the potential risks that can jeopardize the bottom line. This expectation only becomes more challenging at a time when data is changing by the hour and historical patterns are no longer reliable. The complexities compound further as businesses decide to rely more on A.I. in these trying times as a way to stay ahead of the competition.

Scale Your A.I. with MLOps

Newly emerging best practices, commonly referred to as MLOps (ML Operations), underpinned by a new layer of technologies with the same name, are the missing piece of the puzzle for many organizations looking to fast-track and scale their A.I. capabilities without putting their businesses at risk during this time of economic uncertainty. With MLOps technology and practices in place, businesses can bridge the inherent gap between data and operations teams, and get a scalable and governed means to deploy and manage A.I. applications in real-world production environments.

MLOps can be broken down into four key areas of the process required to derive value from machine learning, each of which must be well-resourced and well-understood to work in your business:

  • Deployment 
  • Monitoring 
  • Lifecycle Management
  • Governance

Production Model Deployment

With MLOps, the goal is to make model deployment easy, regardless of which platform or language those models were created in, or where they need to eventually be deployed. MLOps essentially serves as an automation and abstraction layer to which data teams point their models, and from which MLOps or Ops teams can manage them, while providing role-based visibility and actionability based on the needs of your organization.

Removing ownership of production environments from the data teams, while still providing them with the required visibility, takes a great deal of work off their plates and frees them up to do their actual jobs: solving complex business problems with data.
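To make the hand-off concrete, here is a minimal sketch of what such a framework-agnostic deployment abstraction could look like. The names (ModelDeployer, register, deploy) are illustrative assumptions, not the API of any specific MLOps product; the only requirement placed on the model is that it exposes a predict method.

```python
# Hypothetical sketch of a framework-agnostic deployment abstraction.
# ModelDeployer and its methods are illustrative, not a real MLOps API.
import pickle
from pathlib import Path


class ModelDeployer:
    """Accepts serialized models from any framework and exposes one predict interface."""

    def __init__(self, artifact_dir: str = "model_artifacts"):
        self.artifact_dir = Path(artifact_dir)
        self.artifact_dir.mkdir(exist_ok=True)
        self.registry = {}  # model name -> metadata visible to data and ops teams

    def register(self, name: str, model, owner: str, framework: str):
        """Data team hands off a trained model; ownership passes to the ops side."""
        path = self.artifact_dir / f"{name}.pkl"
        with open(path, "wb") as f:
            pickle.dump(model, f)
        self.registry[name] = {"path": path, "owner": owner,
                               "framework": framework, "status": "registered"}

    def deploy(self, name: str):
        """Ops team promotes a registered model to production status."""
        self.registry[name]["status"] = "production"

    def predict(self, name: str, features):
        """Serve predictions without the caller knowing which framework built the model."""
        with open(self.registry[name]["path"], "rb") as f:
            model = pickle.load(f)
        return model.predict(features)
```

In this sketch the data team only calls register, while deploy and everything after it belongs to the ops side, which mirrors the ownership split described above.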

Production Model Monitoring

To ensure visibility and remove the unnecessary risk of models going haywire, MLOps solutions need to provide monitoring that is designed from the ground up for ML models. Such monitoring includes data drift, concept drift, feature importance, model accuracy, and overall service health, coupled with proactive alerting sent to various stakeholders over channels such as email, Slack and PagerDuty (based on severity and role). With MLOps monitoring in place, teams can deploy and manage thousands of models, and businesses will be ready to scale production A.I.
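As one illustration of the kind of check involved, the following sketch flags data drift on a single feature with a two-sample Kolmogorov-Smirnov test and routes an alert. The threshold, the notify() hook, and the feature name are assumptions for the example, not any particular product's behavior; scipy and numpy are assumed to be available.

```python
# Illustrative data drift check; thresholds and the notify() hook are placeholders.
import numpy as np
from scipy import stats


def check_data_drift(training_feature: np.ndarray, live_feature: np.ndarray,
                     p_threshold: float = 0.01) -> bool:
    """Compare the live feature distribution against the training distribution;
    a small p-value from the KS test indicates the distributions have drifted."""
    _, p_value = stats.ks_2samp(training_feature, live_feature)
    return p_value < p_threshold


def notify(channel: str, message: str):
    # Placeholder for routing alerts by severity/role (email, Slack, PagerDuty).
    print(f"[{channel}] {message}")


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 5_000)   # distribution the model was trained on
    live = rng.normal(0.4, 1.0, 5_000)    # shifted distribution seen in production
    if check_data_drift(train, live):
        notify("pagerduty", "Data drift detected on feature 'income'; review model.")
```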

Model Lifecycle Management

MLOps recognizes that models need to be updated frequently and seamlessly. Model lifecycle management supports the testing and warm-up of replacement models, A/B testing of new models against older versions, seamless rollout of updates, failover procedures, and full version control for simple rollback to prior model versions, all wrapped in configurable approval workflows.
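A stripped-down champion/challenger sketch below shows how A/B testing, promotion, and rollback can fit together. The class name, traffic split, and in-memory version history are assumptions made purely for illustration; a real lifecycle tool would persist versions and wrap each transition in the approval workflows mentioned above.

```python
# Minimal champion/challenger sketch; routing and versioning are illustrative only.
import random


class ModelLifecycle:
    def __init__(self):
        self.versions = []        # ordered history of prior champions, for rollback
        self.champion = None      # current production model
        self.challenger = None    # candidate being A/B tested
        self.challenger_traffic = 0.1

    def start_ab_test(self, candidate, traffic_share: float = 0.1):
        """Warm up a replacement model on a small share of live traffic."""
        self.challenger = candidate
        self.challenger_traffic = traffic_share

    def predict(self, features):
        """Route a share of requests to the challenger during the A/B test."""
        if self.challenger and random.random() < self.challenger_traffic:
            return self.challenger.predict(features)
        return self.champion.predict(features)

    def release(self):
        """Promote the challenger to champion, keeping history for rollback."""
        self.versions.append(self.champion)
        self.champion, self.challenger = self.challenger, None

    def rollback(self):
        """Fail over to the previous version if the new champion misbehaves."""
        if self.versions:
            self.champion = self.versions.pop()
```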

Production Model Governance

MLOps provides the integrations and capabilities you need to ensure consistent, repeatable and reportable processes for your models in production. Key capabilities include access control for production models and systems, such as integration with LDAP and role-based access control (RBAC) systems, as well as approval flows, logging, storage of every version of every model, and traceability of results for legal and regulatory compliance.
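The sketch below shows one way role-based permission checks and an audit trail might wrap model actions. The role names, permission table, and decorator are assumptions for illustration; in practice the roles would typically come from LDAP or an existing RBAC system rather than a hard-coded dictionary.

```python
# Sketch of role-based access control plus an audit trail for model actions.
import logging
from functools import wraps

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("model_audit")

# Hypothetical permission table; real deployments would source this from LDAP/RBAC.
PERMISSIONS = {
    "data_scientist": {"register_model", "view_metrics"},
    "mlops_engineer": {"register_model", "deploy_model", "rollback_model", "view_metrics"},
    "auditor": {"view_metrics", "view_audit_trail"},
}


def requires_permission(action: str):
    def decorator(func):
        @wraps(func)
        def wrapper(user: str, role: str, *args, **kwargs):
            if action not in PERMISSIONS.get(role, set()):
                audit_log.warning("DENIED %s by %s (%s)", action, user, role)
                raise PermissionError(f"{role} may not perform {action}")
            audit_log.info("ALLOWED %s by %s (%s)", action, user, role)
            return func(user, role, *args, **kwargs)
        return wrapper
    return decorator


@requires_permission("deploy_model")
def deploy_model(user: str, role: str, model_name: str, version: str):
    # Every deployment is logged, giving traceability for compliance reviews.
    audit_log.info("Deploying %s v%s", model_name, version)


deploy_model("ana", "mlops_engineer", "churn_model", "1.3.0")
```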

With the right processes, tools and training in place, businesses will be able to reap many benefits from MLOps. It provides insight into areas where the data might be skewed. One of the many frustrating parts of running A.I. models, especially right now, is that the data is constantly shifting. With MLOps, businesses can quickly identify and act on that information in order to retrain production models on newer data, using the same data pipeline, algorithms, and code that were leveraged to create the original.
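As a simple illustration of that retraining step, the sketch below reuses a single pipeline definition to refit on fresh data. scikit-learn, the pipeline steps, and the synthetic data are assumptions chosen for the example, not a prescription of how any particular team builds its models.

```python
# Hedged sketch: retrain on newer data using the same pipeline code as the original model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def build_pipeline() -> Pipeline:
    """The same pipeline definition used for the original model, reused for retraining."""
    return Pipeline([("scale", StandardScaler()),
                     ("clf", LogisticRegression(max_iter=1000))])


def retrain_on_new_data(X_new: np.ndarray, y_new: np.ndarray) -> Pipeline:
    model = build_pipeline()
    model.fit(X_new, y_new)   # fit on the newer, shifted data
    return model


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_new = rng.normal(size=(1_000, 5))
    y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(int)
    refreshed_model = retrain_on_new_data(X_new, y_new)
    print("Accuracy on new data:", refreshed_model.score(X_new, y_new))
```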

Users can also scale production while minimizing risk. Scaling A.I. across the enterprise is easier said than done: numerous roadblocks can stand in the way, such as a lack of communication between the IT and data science teams, or a lack of visibility into A.I. outcomes. With MLOps, you can support multiple types of machine learning models created by different tools, as well as the software dependencies those models need.
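One lightweight way to track those per-model dependencies is a manifest stored alongside each artifact, as in the sketch below. The field names and values are purely illustrative assumptions; the point is that ops can rebuild each model's environment regardless of which tool created it.

```python
# Illustrative model manifest; field names and values are assumptions, not a standard.
model_manifest = {
    "name": "churn_model",
    "version": "1.3.0",
    "framework": "scikit-learn",          # each model can declare a different tool
    "created_by": "data-science-team",
    "dependencies": ["scikit-learn==1.4.2", "numpy==1.26.4"],
    "artifact": "model_artifacts/churn_model.pkl",
}
```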

Sivan Metzger is Managing Director, MLOps and Governance at DataRobot.
