Session: On-device ML: Artificial Intelligence in your pocket
AI is moving from servers to our pockets, making the technology on our phones faster and more private.
But how do you go from deploying a single machine learning model on the cloud to deploying millions of models around the world on every device in our customers' hands?
Mobile devices come with tightly constrained resources, and bringing the technology to our phones means running a triathlon of memory, speed, and privacy.
Understand how to navigate this challenging journey and take the leap to on-device ML - because the future is at our fingertips.
- Learn how to accelerate the journey from server to mobile for deploying deep learning neural networks
- Understand the risks and challenges that come with deploying neural networks on-device
- Explore the security implications of deploying neural networks on-device, and the solutions and strategies to adopt to protect your intellectual property
- Understand the fabric of mobile deployment platforms for both Android and iOS
- Push the boundaries of machine learning on mobile
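To make the memory constraints above concrete: one of the standard levers for shrinking a model before it ships to a phone is post-training quantization. The sketch below is a minimal, framework-free illustration of symmetric int8 weight quantization (the function names and the guard constant are illustrative, not from any particular deployment toolchain):

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization of float32 weights to int8.

    Returns the int8 tensor plus the scale needed to dequantize.
    The 1e-12 floor is just a guard against all-zero weights.
    """
    scale = max(np.abs(weights).max() / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover an approximation of the original float32 weights.
    return q.astype(np.float32) * scale

# A toy weight matrix: int8 storage is 4x smaller than float32,
# at the cost of a bounded per-weight rounding error (<= scale / 2).
w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
```

Real mobile toolchains layer much more on top (per-channel scales, calibration data, quantization-aware training), but the 4x size reduction and the accuracy-vs-footprint trade-off shown here are the core of the memory leg of the triathlon.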
I am a deep learning engineer at Adobe working on next-generation AI innovation for mobile devices.
I am passionate about building neural networks targeted for on-device and edge deployment. For the past 5 years, I have been applying deep learning to model training, compression, optimization, IP protection, and mobile deployment in the domain of document image analysis and detection tasks.
Prior to joining Adobe, I earned my Master's degree from Carnegie Mellon University with a specialization in mobile and pervasive computing.