It's no secret that machine learning workflows are awkward to deploy, hard to maintain, and often cause friction with engineering/IT teams. Frequently, work done by data scientists and machine learning researchers is wasted because it never escapes their laptops or cannot be scaled to larger datasets.
In this course, we will learn how to easily deploy and scale ML workflows on any infrastructure using Kubernetes, the container orchestration engine used by top technology companies including Google, Amazon, and Microsoft. Kubernetes was built from the ground up to run and manage highly distributed workloads on large clusters, and thus it provides a solid foundation for model development. We will learn how to containerize and deploy model training and inference on Kubernetes using popular open source tools like Kubeflow, Pachyderm, and Seldon. We will also discuss how to move data in and out of the cluster, version models, utilize GPUs, and track and evaluate models.
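To give a flavor of what deploying a workload on Kubernetes looks like, here is a minimal sketch of a Kubernetes Job manifest for a containerized training run. The image name, command, and resource values are placeholders for illustration, not part of any specific course material:

```yaml
# A hypothetical Job that runs one containerized model training run to completion.
apiVersion: batch/v1
kind: Job
metadata:
  name: model-training          # placeholder name
spec:
  template:
    spec:
      containers:
      - name: trainer
        image: registry.example.com/my-team/trainer:latest  # placeholder image
        command: ["python", "train.py"]                     # placeholder entrypoint
        resources:
          limits:
            nvidia.com/gpu: 1   # request a GPU (requires a GPU-enabled cluster)
      restartPolicy: Never      # do not restart the pod if training fails
  backoffLimit: 2               # retry the Job at most twice
```

Such a manifest would typically be applied with `kubectl apply -f job.yaml`; tools like Kubeflow build higher-level abstractions (pipelines, distributed training operators) on top of primitives like this.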