MLOps: DevOps for Machine Learning
MLOps is the application of DevOps practices and principles to the machine learning (ML) process. In this DevOps for AI and Machine Learning workshop, DevOps engineers and testers learn the foundational knowledge and practical skills needed to integrate machine learning operations (MLOps) into existing AI model development workflows.
This workshop covers the entire MLOps lifecycle, focusing on essential concepts, tools, and methodologies required to deploy and maintain machine learning models within DevOps environments.
Key takeaways from this class include:
- Gaining an in-depth understanding of MLOps principles and their integration with DevOps practices.
- Learning to set up automated data engineering pipelines.
- Tracking model experiments effectively.
- Implementing CI/CD pipelines for machine learning models.
- Establishing robust monitoring and retraining strategies.
- Navigating security and compliance considerations in MLOps.
Throughout this workshop, students gain real-world context through hands-on exercises, such as setting up feature stores, implementing CI/CD processes for ML models, and deploying and monitoring models.
Who Should Attend
This workshop is ideal for DevOps engineers, software testers, and operations personnel looking to expand their skill set into MLOps. Professionals in software development, deployment, infrastructure management, quality assurance, or operations who want to understand the unique challenges and best practices of deploying and maintaining machine learning models will benefit. The course is designed for individuals with a technical background in DevOps practices but limited exposure to machine learning, and it aims to bridge the gap between traditional DevOps workflows and the specialized requirements of MLOps.
MLOps During Model Inference and Monitoring
- Prediction serving process
- Model monitoring
- Types of model monitoring
- Dealing with model decay
- Exercise #7: Identify types of model decay
- Model retraining
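Data drift is one common driver of the model decay covered above. As a minimal sketch (not taken from the course materials; the function name and the 0.2 threshold convention are illustrative), the population stability index compares a model's training-time input distribution against the live distribution to flag when retraining may be warranted:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two distributions of a feature or score.

    By common convention, PSI > 0.2 is treated as a signal of
    significant drift worth investigating.
    """
    # Bin edges come from the reference (training-time) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor empty bins with a small epsilon to avoid log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 5000)  # distribution at training time
live_scores = rng.normal(0.5, 1.0, 5000)   # shifted live distribution
print(population_stability_index(train_scores, live_scores))
```

A monitoring job would typically run a check like this on a schedule and raise an alert (or trigger a retraining pipeline) when the index crosses the chosen threshold.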
Session 5: MLOps for Dataset and Feature Engineering
- What are features?
- Defining and managing features
- Dataset management
- Using feature stores
- Exercise #8: Identifying steps in the dataset and feature management process
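To make the feature-store idea concrete, here is a minimal in-memory sketch (illustrative only; real systems such as Feast or Tecton add versioning, point-in-time correctness, and offline/online storage). It shows the core contract: features are written once by pipelines and read back as a vector per entity at serving time:

```python
from dataclasses import dataclass, field

@dataclass
class FeatureStore:
    # feature name -> {entity_id -> latest value}
    _features: dict = field(default_factory=dict)

    def write(self, feature: str, entity_id: str, value) -> None:
        """Called by a data-engineering pipeline to publish a feature value."""
        self._features.setdefault(feature, {})[entity_id] = value

    def read(self, entity_id: str, features: list) -> dict:
        """Assemble a feature vector for one entity, as a serving layer would."""
        return {f: self._features.get(f, {}).get(entity_id) for f in features}

store = FeatureStore()
store.write("avg_order_value", "user_42", 37.50)
store.write("days_since_last_login", "user_42", 3)
print(store.read("user_42", ["avg_order_value", "days_since_last_login"]))
# -> {'avg_order_value': 37.5, 'days_since_last_login': 3}
```

The key design point is that training pipelines and prediction services read the same definitions, which prevents training/serving skew.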
Session 6: Model Governance and Compliance
- Model governance vs. compliance
- Types of model governance
- Explainability
- Fairness and bias
- Data security and privacy
- Model compliance standards
- Exercise #9: Identifying types of governance
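Fairness checks like those above can be automated as part of governance gates. As a simple illustrative sketch (not the course's prescribed method), demographic parity difference measures the gap in positive-prediction rates between groups:

```python
def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rate between any two groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        hits, total = rates.get(group, (0, 0))
        rates[group] = (hits + pred, total + 1)
    positive_rates = [hits / total for hits, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
# Group "a" positive rate is 0.75, group "b" is 0.25 -> difference 0.5
print(demographic_parity_difference(preds, groups))
```

In a governance pipeline, a check like this would run against each candidate model and block promotion when the gap exceeds a policy-defined tolerance.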
Putting It All Together and Next Steps
- Comprehensive ML workflow
- Summary and wrap-up of the course
- References
- Q&A session to address participant queries
Sign-In/Registration 7:30 - 8:30 a.m.
Morning Session 8:30 a.m. - 12:00 p.m.
Lunch 12:00 - 1:00 p.m.
Afternoon Session 1:00 - 5:00 p.m.
Times represent the typical daily schedule. Please confirm your schedule at registration.
- Digital course materials
- Continental breakfasts and refreshment breaks
- Lunches
Tom Stiehm has been developing applications and managing software development teams for over twenty years. As CTO of Coveros, he is responsible for the oversight of all technical projects and for integrating new technologies and testing practices into software development projects. Recently, Tom has focused on incorporating DevSecOps and agile best practices into projects and on balancing team productivity and cost while mitigating project risks. One of the best risk mitigation techniques Tom has found is applying DevSecOps and agile testing practices across all aspects of a project. Previously, as a managing architect at Digital Focus, Tom worked in agile development and found that agile is the only methodology that makes the business reality of constant change central to the process.