AI Assurance - Facing the Challenges of Day 2 With Your Models in Production

By Pearl Lieberman
September 6, 2020

AI is everywhere. 

Businesses from all verticals are rapidly adopting AI algorithms to automate some of their most important decisions: from approving transactions, to diagnosing cancer, to granting credit, and so much more.

As the AI race booms, more organizations are stepping into “Day 2”: the day their models move out of the realm of research and training and into production. And this is when the picture starts to crack.

Once they move to production, maintaining the models is a whole new story: they become subject to drift, develop biases, or simply suffer from low-quality data. On “Day 2”, no one other than the data scientists who created the model really trusts it or understands how well it's doing. And sometimes even they feel they've lost control once it's in production!

Operating ML models today essentially means operating in the dark, without clear standards for what it takes to ensure models make the impact they were designed for: what metrics should you look at? At what granularity? And, most importantly, with what resources, when your team needs to focus on creating future models rather than troubleshooting existing ones?
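To make the metrics question concrete, here is a minimal sketch (in Python, with illustrative names and thresholds) of one widely used drift metric, the Population Stability Index (PSI), which compares how a feature or score is distributed in production against its distribution at training time:

```python
import numpy as np

def psi(baseline: np.ndarray, live: np.ndarray, n_bins: int = 10) -> float:
    """Population Stability Index between a baseline and a live sample.

    Bin edges come from baseline quantiles so each bin holds roughly
    equal mass; a small epsilon guards against empty bins. A common
    rule of thumb treats PSI > 0.2 as meaningful drift, but thresholds
    are use-case specific.
    """
    eps = 1e-6
    edges = np.quantile(baseline, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the training range
    base_frac = np.histogram(baseline, bins=edges)[0] / len(baseline) + eps
    live_frac = np.histogram(live, bins=edges)[0] / len(live) + eps
    return float(np.sum((live_frac - base_frac) * np.log(live_frac / base_frac)))
```

The same metric also hints at the granularity question: it can be tracked per feature, on the model's output score, or per business segment, over daily or weekly windows.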

And this is the great paradox of AI in production: what it can do is great, but if the natural degradation of models over time cannot be controlled, we remain blind to biases, drift, and compliance risks, with no way to realize the full business value of machine learning. In other words, we're headed for trouble.

So what's the deal? How can we scale AI efforts while fostering trust, without losing sight of our models?

Mind the Gaps of Day 2!

The way we see it, there are two main gaps today that prevent organizations from stepping into “Day 2” with confidence:

Lack of Real World Assurance - There is a lack of established best practices and capabilities for assuring the health of models in production. As we evolve into a more mature use of AI, practitioners are starting to take monitoring more seriously, but the field and its literature are still in their infancy. Data scientists across all verticals reach out to us as they turn away from homegrown solutions that lack an all-encompassing view and often drain the resources of teams that are already spread thin. They need solutions that deliver the right insights at the right time to make them more efficient: to know if there is an issue before the business is impacted, when and whether to retrain the model, and what data should be used to do so. And all of this should be accomplished without creating unnecessary noise.
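As a sketch of what such a check might look like in its simplest form (the threshold and the `notify` callback are placeholders, not a real alerting API), one assumption-light option is a two-sample Kolmogorov-Smirnov test on each input feature:

```python
from scipy.stats import ks_2samp

DRIFT_P_VALUE = 0.01  # illustrative; tune to keep alert noise down

def drift_check(baseline: dict, live: dict, notify) -> list:
    """Flag features whose live distribution has drifted from training.

    `baseline` and `live` map feature names to 1-D numeric arrays;
    `notify` stands in for whatever alerting channel you use (Slack,
    email, ...). A drifted input is an early signal to investigate -
    and possibly retrain - before accuracy visibly degrades.
    """
    drifted = [name for name in baseline
               if ks_2samp(baseline[name], live[name]).pvalue < DRIFT_P_VALUE]
    if drifted:
        notify(f"Input drift detected on: {', '.join(drifted)}")
    return drifted
```

On large production volumes the KS test flags even tiny shifts, which is exactly the "unnecessary noise" problem: a production-grade system needs smarter thresholds than this sketch.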

Lack of Ownership - Models are created by data scientists, but their predictions are used by operational teams.

These users are the ones most at risk of being impacted by wrong predictions. Take, for example, marketing analysts who use machine learning to predict users' lifetime value (LTV): these teams are measured by the success of activities that depend on AI predictions... and when those activities don't yield the expected results, they are the ones losing out - and so is the whole business.

Operational teams need to become independent and gain visibility into what makes their models tick. More than that, they should be able to put the models to work for them and get key insights for their business: are there biases? Are there missed sub-populations?
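To illustrate the kind of visibility that answers those questions, here is a minimal sketch with pandas; the `y_true`, `y_pred`, and segment column names are assumptions, not a prescribed schema:

```python
import pandas as pd

def segment_report(df: pd.DataFrame, segment_col: str) -> pd.DataFrame:
    """Break prediction quality down by sub-population.

    Assumes `df` has columns `y_true` (actual outcome) and `y_pred`
    (model prediction); both names are illustrative. Large gaps in
    accuracy or volume across segments can point to biases or to
    sub-populations the model effectively misses.
    """
    out = df.assign(correct=df["y_true"] == df["y_pred"])
    report = out.groupby(segment_col).agg(
        volume=("correct", "size"),
        accuracy=("correct", "mean"),
    )
    report["share"] = report["volume"] / report["volume"].sum()
    return report.sort_values("accuracy")
```

For the LTV example above, something like `segment_report(df, "acquisition_channel")` (a hypothetical column) would show at a glance whether the model underperforms on a channel the marketing team depends on.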

For our users, the ability to gain independence and to access the model-health information that matters for their business is crucial. More than that, once they understand that the models should work for them, the models become their favourite resource!

AI Assurance as the Necessary Leap to Success

At Superwise, we get it. With years of experience in building AI solutions and supporting organizations through their digitalization initiatives, we deeply understand both the benefits and the blind spots of AI. We know that performant AI models can empower decision-makers, giving them the confidence to run free with their models, to innovate, and to drive efficiency.

But as incredibly powerful as AI is, it requires a leap--one that is both technological and organizational. It needs Assurance.

AI Assurance gives you the visibility and control needed to build trust and confidence, enabling you to scale the use of AI across the enterprise. With AI Assurance, you’ll be prepared for Day 2, when your models meet real life.

What every organization wants is to be in control of their models, even once they've been let out into the real world. AI Assurance not only delivers the practical tools to make this possible, it empowers you, the user, to use your AI models to their fullest extent with confidence. And this is what assurance is all about--providing the right metrics and the right insights to enable real-world success and independence with AI models.

To support this leap, we deliver an AI-for-AI solution: we learn from your models what their normal behaviour can and should be, and we help you face the challenges of bias and concept drift.

To illustrate, we recently helped one of our customers reduce their time to detect and fix concept drift by 95%!

It’s not only the wealth of out-of-the-box metrics that makes this possible--it’s our ability to give you the framework from which you can understand your models and the tools to gain independence and control. Our solution grants you the right insights at the right time, so you know how your models are doing, get alerted when they go south, and can take the right corrective action before there’s any business impact.


Want to take the leap? Schedule a demo today

