Session: Serving TensorFlow models with Kubernetes
The rapid growth of machine learning for modeling and prediction raises the need for a monitored pipeline for serving ML models. In this talk, you will gain a better understanding of best practices for building and architecting a TensorFlow model-serving pipeline, and the production aspects of building it, including deploying the serving application on Kubernetes and monitoring its performance, response time, and health.
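As one concrete illustration of the deployment side of such a pipeline, a TensorFlow Serving container can be run on Kubernetes as a standard Deployment, with a readiness probe against the model status endpoint providing the health monitoring the talk mentions. This is a minimal sketch only; the names, image tag, and paths (`tf-serving`, `my_model`, `/models/my_model`) are illustrative assumptions, not details from the session:

```yaml
# Minimal Kubernetes Deployment for TensorFlow Serving (illustrative sketch).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tf-serving
spec:
  replicas: 2
  selector:
    matchLabels:
      app: tf-serving
  template:
    metadata:
      labels:
        app: tf-serving
    spec:
      containers:
      - name: tf-serving
        image: tensorflow/serving:latest
        args:
        - "--model_name=my_model"            # hypothetical model name
        - "--model_base_path=/models/my_model"
        ports:
        - containerPort: 8500                # gRPC endpoint
        - containerPort: 8501                # REST endpoint
        readinessProbe:                      # health check on the model status API
          httpGet:
            path: /v1/models/my_model
            port: 8501
```

A Service and an ingress in front of these replicas, plus scraping of response-time metrics, would complete the monitored setup the abstract describes.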
I will also demonstrate the advantages of this architecture: easier management of model versions and improved collaboration between engineering and research teams.
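On the version-management point: TensorFlow Serving watches the model base path for numbered version subdirectories and, by default, serves the highest version it finds, so publishing a new model is just copying a new SavedModel directory. The layout below is a sketch with an assumed model name and path:

```
/models/my_model/
├── 1/                  # previous version, kept available for rollback
│   ├── saved_model.pb
│   └── variables/
└── 2/                  # highest version number: served by default
    ├── saved_model.pb
    └── variables/
```

This convention is part of what decouples research (producing new SavedModel versions) from engineering (operating the serving deployment).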
- Building a monitored pipeline for serving ML models
- Best practices for building and architecting a TensorFlow model-serving pipeline
Sarit is a senior software engineer and a leader in the Israeli tech industry. She is a top backend tech lead, a specialist in cloud and microservices infrastructure, and holds a BSc and an MBA from The Hebrew University of Jerusalem, both with honors. She has won several international hackathons and the 2020 WTGA. Sarit has also taken part in numerous exclusive programs, including Microsoft Women of Excellence and Skills by Intel.