Modern machine learning models still require maintenance to perform at their best. Models can degrade over time for many reasons, and developers must identify, measure, and respond to AI drift regularly to keep pace with changing conditions in the real world.
What Is Data Drift in Machine Learning?
AI developers train models using historical data. After deployment, a model’s performance generally holds steady as long as the nature of the input data does not change.
However, data drift occurs when the statistical distribution of the input data diverges from that of the historical data. Alternatively, the relationship between the input data and the target result may change. In both instances, the model no longer produces accurate predictions if it does not adapt to the changing environment.
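To make the two failure modes concrete, here is a minimal sketch using NumPy and scikit-learn (both assumed available; the sine-wave relationship and every number are invented for illustration). A linear model fit where inputs historically clustered loses accuracy both when the input distribution moves and when the relationship itself changes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def true_signal(x):
    # The real-world relationship the model is trying to approximate.
    return np.sin(x)

# Historical data: inputs cluster around 0, where sin(x) is roughly
# linear, so a linear model fits well.
X_train = rng.normal(0.0, 0.5, size=(2000, 1))
y_train = true_signal(X_train[:, 0]) + rng.normal(0.0, 0.05, 2000)
model = LinearRegression().fit(X_train, y_train)

# Data drift: the input distribution moves to a region (around pi)
# where the line learned near 0 no longer describes the signal.
X_drift = rng.normal(np.pi, 0.5, size=(2000, 1))
y_drift = true_signal(X_drift[:, 0]) + rng.normal(0.0, 0.05, 2000)

# Concept drift: inputs look the same as before, but the underlying
# relationship itself has changed (here, the signal flips sign).
X_same = rng.normal(0.0, 0.5, size=(2000, 1))
y_flipped = -true_signal(X_same[:, 0]) + rng.normal(0.0, 0.05, 2000)

print("MSE on historical data: ", mean_squared_error(y_train, model.predict(X_train)))
print("MSE under data drift:   ", mean_squared_error(y_drift, model.predict(X_drift)))
print("MSE under concept drift:", mean_squared_error(y_flipped, model.predict(X_same)))
```

In the first scenario the model is asked about inputs it never saw; in the second, the same inputs now mean something different. Either way, accuracy suffers.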
For example, many banks use artificial intelligence to make business decisions. In the wake of COVID-19, a third of banks in the United Kingdom reported significantly reduced performance from their machine learning models due to unexpected market fluctuations brought on by the pandemic.
Even if developers design models to handle data shifts, no machine learning implementation is entirely impervious to drift. That’s why data drift detection will remain a necessary responsibility for businesses that rely on AI.
Concept Drift vs. Data Drift
When an AI model suffers reduced performance after deployment, the cause is typically one of two types of drift.
Concept Drift
Concept drift happens when the task the model aims to perform changes over time. In other words, the statistical relationship between the input data and the target variable shifts, and the algorithm consequently can no longer make accurate predictions.
Concept drift may be sudden, like the stock market at the onset of COVID-19, or gradual, like the value of currency during periods of inflation. It can also be recurring as the data shifts to a new concept temporarily before returning to the old one, such as the sales of a seasonal product throughout the year.
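These three patterns are easy to picture with synthetic data. The toy sketch below (NumPy assumed; all numbers invented) generates a year of daily observations under each pattern:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(365)  # one simulated year of daily observations

# Sudden drift: the mean jumps abruptly (e.g., a market shock).
sudden = np.where(t < 180, 10.0, 25.0) + rng.normal(0, 1, t.size)

# Gradual drift: the mean creeps upward (e.g., steady inflation).
gradual = 10.0 + 0.03 * t + rng.normal(0, 1, t.size)

# Recurring drift: the mean cycles away and back (e.g., seasonal sales).
recurring = 10.0 + 5.0 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 1, t.size)

for name, series in [("sudden", sudden), ("gradual", gradual), ("recurring", recurring)]:
    print(f"{name}: first-half mean = {series[:180].mean():.1f}, "
          f"second-half mean = {series[180:].mean():.1f}")
```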
Data Drift
Some AI models lose accuracy because the input data itself changes in distribution. A sales department might find success with a model until a new product enters the market, shifting consumer demand and changing the input data.
How To Measure Data Drift
Protecting your business from the effects of data drift often involves continuously evaluating the input data and adjusting the model accordingly. Data scientists have to retrain a model on new data during periods of change, but they first have to know when data drift has occurred. Otherwise, they are updating their models blindly and potentially wasting computational resources.
Data drift monitoring is available through a variety of methods and algorithms, several of which are sketched in code after this list.
- The Kolmogorov-Smirnov (K-S) test – Compares two separate data sets and determines whether they share the same distribution. If not, drift has likely occurred between them.
- Kullback-Leibler Divergence – Compares the probability distributions of the same variable between two sets.
- The Page-Hinkley method – Looks at averages and trends in the data, making it useful for identifying changes in the mean of a data set over time. Data scientists must select the detection threshold carefully, one that catches real drift while ignoring inconsequential changes.
- The Population Stability Index (PSI) – Looks at a single variable’s distribution within two data sets and determines whether it has changed between them. A significant PSI value is a sign of drift.
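As a rough illustration of the K-S, KL-divergence, and PSI checks above, the sketch below compares a historical feature sample against a shifted "production" sample using NumPy and SciPy. The data, the bin counts, and the PSI > 0.2 rule of thumb are illustrative choices rather than universal settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
reference = rng.normal(0.0, 1.0, 10_000)  # feature values in the training data
current = rng.normal(0.4, 1.2, 10_000)    # the same feature in production

# Kolmogorov-Smirnov: do the two samples share a distribution?
ks_stat, p_value = stats.ks_2samp(reference, current)
print(f"K-S statistic = {ks_stat:.3f}, p-value = {p_value:.3g}")

# Kullback-Leibler divergence between histogram estimates of the two
# distributions (the small constant avoids division by zero).
edges = np.histogram_bin_edges(reference, bins=20)
p = np.histogram(reference, bins=edges)[0] + 1e-6
q = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)[0] + 1e-6
print(f"KL divergence = {stats.entropy(p, q):.3f}")

# Population Stability Index over quantile bins of the reference data.
qedges = np.quantile(reference, np.linspace(0, 1, 11))
ref_pct = np.histogram(reference, bins=qedges)[0] / reference.size + 1e-6
cur_pct = np.histogram(np.clip(current, qedges[0], qedges[-1]),
                       bins=qedges)[0] / current.size + 1e-6
psi = np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct))
print(f"PSI = {psi:.3f}  (a common rule of thumb treats PSI > 0.2 as significant drift)")
```

Note that KL divergence and PSI both depend on how the data is binned, so the bin edges should be fixed from the reference data and reused for every comparison.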
Developers must take a dynamic approach to implementing these algorithms, as they can only detect drift after it has already happened. The business consequences of drift may also surface on a delay, so detecting and responding to drift needs to be proactive and continuous.
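That need for continuous monitoring is where streaming detectors shine, since they process one observation at a time. Below is one common textbook formulation of the Page-Hinkley method from the list above; the delta and lambda values are placeholders that would need tuning on real data:

```python
import numpy as np

class PageHinkley:
    """Textbook Page-Hinkley detector for an upward shift in a stream's mean.

    delta is the magnitude of change to tolerate; lambda_ is the alarm
    threshold. Both are illustrative here and must be tuned: too low and
    noise triggers false alarms, too high and real drift goes unnoticed.
    """

    def __init__(self, delta=0.1, lambda_=15.0):
        self.delta = delta
        self.lambda_ = lambda_
        self.count = 0
        self.mean = 0.0
        self.cum_sum = 0.0
        self.min_sum = 0.0

    def update(self, x):
        """Feed one observation; return True when drift is flagged."""
        self.count += 1
        self.mean += (x - self.mean) / self.count    # running mean
        self.cum_sum += x - self.mean - self.delta   # cumulative deviation
        self.min_sum = min(self.min_sum, self.cum_sum)
        return self.cum_sum - self.min_sum > self.lambda_

# Toy stream: mean 0 for 500 observations, then the mean shifts to 1.
rng = np.random.default_rng(3)
stream = np.concatenate([rng.normal(0, 1, 500), rng.normal(1, 1, 500)])

detector = PageHinkley()
for i, x in enumerate(stream):
    if detector.update(x):
        print(f"Drift flagged at observation {i}")
        break
```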
Businesses often use a framework for drift detection. They first retrieve data and extract key information from it, including the variables that most affect model performance when they change. Developers then test for drift and determine whether it is statistically significant enough to warrant retraining the model.
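Tied together, such a framework can be as simple as the following sketch, which runs a K-S test per monitored feature and triggers retraining only when one of them drifts. The feature names, threshold, data, and the retrain_fn hook are all hypothetical stand-ins for a real pipeline:

```python
import numpy as np
from scipy import stats

DRIFT_P_VALUE = 0.01  # significance threshold (a judgment call, not a standard)

def check_for_drift(reference, current, features):
    """Run a K-S test per monitored feature; return the ones that drifted."""
    drifted = []
    for name in features:
        _, p = stats.ks_2samp(reference[name], current[name])
        if p < DRIFT_P_VALUE:
            drifted.append(name)
    return drifted

def monitoring_step(reference, current, features, retrain_fn):
    """One pass of the framework: test for drift, retrain only if warranted."""
    drifted = check_for_drift(reference, current, features)
    if drifted:
        print(f"Drift detected in {drifted}; retraining on fresh data.")
        retrain_fn()
    else:
        print("No significant drift; keeping the current model.")

# Demo with invented data: 'income' shifts, 'age' does not.
rng = np.random.default_rng(4)
reference = {"age": rng.normal(40, 10, 5000), "income": rng.normal(50, 15, 5000)}
current = {"age": rng.normal(40, 10, 5000), "income": rng.normal(65, 15, 5000)}
monitoring_step(reference, current, ["age", "income"], retrain_fn=lambda: None)
```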