[MLOPS] From experiment management to model serving and back. A complete use case, step by step!
The recording of our talk at the MLOps World summit. This talk walks through a complete example: starting from experiment management and data versioning, building up to a pipeline, and finally deploying with ClearML Serving and drift monitoring. We then induce artificial drift to trigger the monitoring alerts and go back down the chain to quickly retrain a model and roll it out with a canary deployment.