In this talk we share our experience with the MLOps platform we have built for our customers and how it boosts the way data scientists work. We show how ML model training projects can go from zero to production in a much shorter time, while achieving superior performance, high code quality, training repeatability, and governance. From the deployment perspective, the platform mixes best-of-breed cloud managed services with a small number of powerful open-source components (e.g. Kedro, MLflow, Seldon) to provide the extra functionality that data scientists and their ML models need. The talk should be interesting to anyone from data scientists to platform builders and architects.
- Architecture of a production cloud-managed MLOps platform, based on lessons learned from real-world projects.
- What data scientists can do on such an MLOps platform, how it boosts their work and shortens a project's time to production.
- What open-source tools we use and what extensions we develop to make that happen and fill the gaps.
- How this platform can be deployed in the cloud or even on-premises, allowing you to mix and match managed and open-source components.