Session Outline
Inflexible infrastructure, wasted work, and operationalization pitfalls have all been key obstacles preventing organizations from adopting a model-driven strategy. MLOps platforms have been rapidly gaining popularity as more and more businesses realize how critical it is to have a reliable platform for accelerating research and operationalization. But a state-of-the-art MLOps platform should be more than just a tool for spinning up JupyterLab in the cloud. In this talk, we peek into the future and answer the “What’s next?” question in the context of MLOps. We’ll talk about the role of hybrid and multi-cloud architectures, GPU use for model inference, automatic cost control, integration with open source platforms like Feast and MLflow and why it matters, transitioning towards distributed compute options like Ray and OpenMPI, and more.
Key Takeaways
– What are the key elements of a state-of-the-art MLOps platform and where is the space headed?
– How do hybrid and multi-cloud solutions address the growing requirements around data sovereignty?
– How are GPUs increasingly being used outside their standard “model training” role?
– Why is openness one of the most important traits of a great MLOps platform, and why do feature stores and experiment managers matter?