01.03.22

Data distribution shifts and monitoring

One of the many ways your machine learning model can go wrong

Building and deploying a machine learning model can be a daunting task. But even once that is done, the model can still go wrong. If the world changes in some significant way between deploying a trained model and using it, the predictions that looked good during training can suddenly become worthless.

To mitigate this risk we should monitor whether our data has changed. One way is through feedback: checking whether the model is still predicting well. Another is by looking at data distribution shifts. Chip Huyen has written a very useful guide to this often overlooked topic.
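As a minimal sketch of the second approach (not taken from the guide itself), the snippet below compares a training-time sample of a feature against a recent production sample using a two-sample Kolmogorov–Smirnov test from scipy. The feature, the synthetic data, and the significance threshold are all illustrative; in practice you would run a check like this per feature on real data and alert when it fires.

```python
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(train_values, live_values, alpha=0.01):
    """Flag a feature as drifted if a two-sample KS test rejects the
    hypothesis that both samples come from the same distribution."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, p_value


# Illustrative data: a training sample vs. a shifted production sample.
rng = np.random.default_rng(42)
train_age = rng.normal(loc=35, scale=10, size=5_000)
live_age = rng.normal(loc=42, scale=10, size=1_000)  # the world has changed

drifted, p = detect_drift(train_age, live_age)
print(f"drift detected: {drifted} (p-value={p:.2e})")
```

A statistical test like this only tells you that the inputs have moved; combining it with feedback on prediction quality, where labels are available, gives a fuller picture.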
