Session: Hardware Infrastructure: The Backbone of AI Workloads
In this session, we will explore the critical role of hardware infrastructure in powering the next generation of AI workloads. AI models, particularly those driving advances in machine learning, large language models, and high-performance computing, demand highly specialized compute resources, high-speed networking, and scalable storage. This talk will provide a comprehensive overview of the hardware backbone that supports these cutting-edge workloads and examine how it enables real-time processing, scalability, and efficiency across global data centers.
Drawing from my hands-on experience scaling hardware across 60+ global regions, we will delve into the practical challenges and successes of deploying and optimizing hardware infrastructure at scale, with a focus on hardware deployment strategies.
We will also discuss how next-gen AI hardware, such as GPUs, TPUs, and custom AI accelerators, is transforming AI performance and shaping the future of data center workloads. The session will offer actionable insights into how organizations can utilize and scale existing AI infrastructure to meet the ever-growing demands of advanced AI applications.
Attendees will leave with a clear understanding of the critical role hardware infrastructure plays in AI success and how to navigate the challenges of deploying and optimizing AI workloads in a secure, cost-effective, and efficient manner.
Bio
Hema is an Engineering Leader in Cloud and AI at Microsoft, where she drives large-scale hardware deployments and infrastructure projects across global data centers. She has a successful track record of delivering technology projects at multinational corporations in the USA, Japan, and India. Her educational and professional background is a unique combination of engineering, business, and technology leadership.