Wi-Fi and AI: Bridging the Gap to a Connected Future
Sreeja Nair
Director, Product Management
Transforming Connectivity: The Future of AI-Enabled Home Networks
Good morning, good afternoon, good evening! Regardless of where you are in the world, welcome to an enlightening discussion on how artificial intelligence (AI) is reshaping our connectivity landscape. I am Sreeja Nair, the Director of Product Management at Qualcomm, and today, we're diving into the fascinating realm of AI-enabled home networks and their transformative potential.
The Evolution of Connectivity
At Qualcomm, we are known for our significant contributions to cellular networking and 5G communications. However, we are also leaders in the Wi-Fi connectivity and networking space. In today's digital age, Wi-Fi has become as essential as water and electricity, and it’s undergoing a revolutionary transformation thanks to advancements in AI.
The AI Revolution: Opportunities and Challenges
Artificial intelligence is increasingly integrated into all aspects of our daily lives, from search engines and chatbots to voice assistants and beyond. With this revolutionary potential come immense challenges:
- Cost per Inference: This refers to the recurring expense of serving each AI query, a cost that grows with every application that integrates AI.
- Scalability: The ability to efficiently deploy AI at scale presents significant challenges, especially as billions of users adopt AI-enabled devices.
- Privacy Concerns: As AI processes vast amounts of data, maintaining privacy remains a crucial consideration.
To effectively navigate these challenges, a hybrid AI model needs to emerge, distributing computational tasks across on-device, edge, and cloud environments. This balance not only enhances efficiency but also addresses privacy concerns, as data processing can occur closer to where the data is generated.
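To make the hybrid split concrete, here is a minimal sketch of how a request might be routed across the three tiers. The tiers, thresholds, and request fields below are hypothetical illustrations, not a real routing policy or Qualcomm API:

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    model_size_mb: int       # footprint of the model the task needs
    privacy_sensitive: bool  # e.g. camera frames or voice audio
    max_latency_ms: int      # deadline the experience can tolerate

def choose_tier(req: InferenceRequest) -> str:
    """Route a request to on-device, edge, or cloud compute.

    Illustrative policy: small models stay on the device;
    private or latency-critical work stays on the edge gateway;
    everything else may take the round trip to the cloud.
    """
    if req.model_size_mb <= 100:
        return "on-device"
    if req.privacy_sensitive or req.max_latency_ms < 200:
        return "edge"
    return "cloud"

# A doorbell face-detection frame is private and latency-critical:
print(choose_tier(InferenceRequest(800, True, 100)))     # edge
# A bulk document summarization can tolerate the round trip:
print(choose_tier(InferenceRequest(8000, False, 5000)))  # cloud
```

The point of the sketch is that privacy and latency, not just model size, decide where data processing should occur.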
The Role of Edge AI in Home Networks
In the modern home, many smart devices—such as cameras, speakers, and thermostats—often depend on cloud connectivity for their intelligent features. However, this reliance introduces several challenges:
- Delays in processing data due to reliance on the cloud.
- Privacy concerns with personal data transmitted offsite.
- Lack of integration across devices, which limits functionality.
To address these issues, AI gateways represent the next generation of networking hardware. By integrating edge AI capabilities directly into home broadband gateways, we can offload AI computing needs, allowing for:
- Instant Responses: Edge AI enables real-time processing and response.
- Increased Reliability: Reducing dependency on the cloud improves device performance.
- Enhanced Privacy: Data remains within the home, safeguarding personal information.
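The offload pattern behind these benefits can be sketched as follows. This is a toy example assuming a device that prefers its local gateway and falls back to the cloud only when the gateway is unreachable; the backend callables are stand-ins, not a real device API:

```python
def run_inference(frame: bytes, gateway_infer, cloud_infer):
    """Try the local AI gateway first; only fall back to the cloud.

    `gateway_infer` and `cloud_infer` are placeholders for whatever
    transport the device actually uses. Keeping the first attempt
    local is what gives the instant-response, reliability, and
    privacy benefits listed above.
    """
    try:
        return {"source": "edge", "result": gateway_infer(frame)}
    except ConnectionError:
        # Only when the gateway is unreachable does data leave the home.
        return {"source": "cloud", "result": cloud_infer(frame)}

# Simulated backends for illustration:
def gateway_up(frame):
    return "person_detected"

def gateway_down(frame):
    raise ConnectionError("gateway unreachable")

print(run_inference(b"frame", gateway_up, gateway_up))    # source: edge
print(run_inference(b"frame", gateway_down, gateway_up))  # source: cloud
```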
Use Cases for AI in Home Networks
Let’s explore some practical applications of AI gateways:
- Smart Doorbells: Imagine a scenario where a child rings the doorbell, and motion sensing and face detection occur locally. This would eliminate delays and keep personal data safe.
- Home Health Monitoring: AI can help monitor seniors, providing alerts for falls or unusual breathing patterns, offering peace of mind to families.
- Virtual Assistants: By running assistants like Siri or Alexa locally, we can improve response times and user satisfaction without compromising privacy.
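For the doorbell scenario, local face matching could look something like this sketch: an embedding produced by an on-gateway vision model is compared against household members enrolled on the gateway. The vectors, names, and threshold here are toy stand-ins for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(embedding, enrolled, threshold=0.8):
    """Return the best-matching household member, or None for a stranger.

    `enrolled` maps names to reference embeddings stored on the
    gateway, so no image or identity ever leaves the home.
    """
    best_name, best_score = None, threshold
    for name, ref in enrolled.items():
        score = cosine(embedding, ref)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

household = {"child": [0.9, 0.1, 0.3], "parent": [0.2, 0.8, 0.5]}
print(identify([0.88, 0.12, 0.31], household))  # child
print(identify([0.0, 0.0, 1.0], household))     # None (unknown visitor)
```

Because both the embeddings and the match decision stay on the gateway, the cloud round trip, and its delay and privacy exposure, disappears entirely.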
Conclusion: A Seamless, Intelligent Future
In conclusion, Edge AI refers to the deployment of AI capabilities within devices at the edge of the network to improve efficiency, reduce latency, and enhance user privacy. The advance of AI in home networks is not merely about improved connectivity; it’s about creating an intelligent ecosystem that understands and adapts to unique user needs.
As Qualcomm continues to innovate with AI gateways, we envision a future where connectivity enhances everyday life—transforming ordinary homes into intelligent living spaces. Thank you for exploring this transformative journey with me, and let's look forward to a seamless, connected future!
Video Transcription
So good morning, good afternoon, good evening, wherever you are in the world, and I hope you've had a splendid conference so far. I am Sreeja Nair, director of product management in the wireless infrastructure and networking division at Qualcomm. My team is focused on building technologies for wired and wireless connectivity. Really, think of your home, your enterprise, your service providers, all those access points, routers, and all the networking equipment that goes along with it. We build the chipsets that go into it. Now before I begin, many of you are familiar with Qualcomm for our work in cellular networking and 5G communications technology. But you may not know this: Qualcomm is also a leader in the Wi-Fi connectivity and networking space.
And as many of you will agree, Wi-Fi today is an integral part of our lives, right next to water and electricity. And as with every other facet of our life, Wi-Fi is also being revolutionized by artificial intelligence, or AI, as we call it. And today, I'm here to discuss all the fascinating work that is happening in the connectivity world using AI to enable a seamlessly connected world for all of us. So let's delve right in. Ilham, I really appreciate all the engagement you have in the chat. Thank you so much. Please keep that coming. Now as I explained, AI is really in every aspect of our life today, from searches to plug-ins to agents, and it's continuously growing. The predictions of what AI is really going to do in terms of usefulness and revenue are just mind-blowing.
And with revenue estimates suggesting that it's going to generate trillions and trillions of dollars, clearly the AI revolution is right here, right now. And it's evolving at such a rapid pace that it's really hard to keep yourself updated. Generative AI was first, and now it's agentic AI. And who knows what's coming next? It really represents a massive wave of innovation and a massive wave of opportunity. It's revolutionizing industries from healthcare to entertainment, not just automating tasks but really enhancing creative processes and decision making. Foundational models, whether you call it ChatGPT, Gemini, or others, are really driving innovations, whether it's across text, image, video, 3D, speech, or audio creation.
In every facet of life, AI is touching our lives. Enterprise and consumer applications are really seeing a lot of push from generative AI, and they are offering capabilities that are beyond our current imagination. I didn't imagine a world where I could just give some context and get my legal agreement easily edited, or just read and summarized to me. I didn't expect that world, but it is happening today. However, this disruption isn't assured. There are hurdles to cross, and we must navigate these challenges to really realize the full potential of generative AI. As we delve deeper into the AI revolution, we first have to acknowledge the challenges that we are encountering today and the ones that lie ahead of us.
And one of the biggest challenges is what we call cost per inference. It really refers to the cost of exercising AI in our daily lives. And this includes the cost of integrating AI plug-ins into various applications, from simple voice assistants and chatbots to facial recognition. For all these applications that you eventually see working for you, there's a lot that happens behind the scenes, and many of you are technologists here, so I'm sure you're familiar with the amount of effort that goes in to really make this work. So there are challenges. And as billions of use cases emerge with more and more people adopting these devices, there's a challenge of scaling AI. And as with any other technology, scale is a challenge.
So we talk about the usefulness of AI applications, but the scale at which AI is going to be deployed is going to be just phenomenal. And this scale will have significant implications, not only on the usefulness, but also on cost structures, on energy consumption, and the overall user experience. And generative AI is really seeing a multiplier effect of sorts, with a growing number of applications leveraging billions of users. And, of course, the other angle that this brings about is sustainability. So the question that we need to address is: how do we ensure AI growth without limiting access and scalability across various dimensions and sectors? How do we democratize AI to make it accessible and sustainable for everybody?
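A back-of-envelope calculation shows why cost per inference dominates at this scale. Every number below is a hypothetical assumption chosen purely for illustration, not a Qualcomm figure:

```python
# All figures below are hypothetical assumptions for illustration.
cloud_cost_per_query_usd = 0.002   # assumed cloud cost of one inference
queries_per_user_per_day = 50      # assumed daily AI-assistant usage
users = 1_000_000_000              # "billions of users" at full scale

annual_cloud_cost = (cloud_cost_per_query_usd
                     * queries_per_user_per_day * users * 365)
print(f"All-cloud: ~${annual_cloud_cost / 1e9:.1f}B per year")

# If 80% of inferences move to edge or on-device hardware the user
# already owns, only the remaining 20% incur per-query cloud cost:
hybrid_cost = annual_cloud_cost * 0.2
print(f"Hybrid:    ~${hybrid_cost / 1e9:.1f}B per year")
```

Even with modest per-query costs, multiplying by billions of users lands in the tens of billions of dollars per year, which is why shifting the bulk of inference toward the edge matters.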
And if you look at what is happening to scale AI today, really the center of gravity is in the data center, in the cloud. Everything that we're doing is up in the cloud. But to scale AI effectively, a hybrid model has to emerge. And this means really splitting the computation capabilities between the data center and the edge, and obviously the edge cloud as well, for many good reasons. And for this, efficiency is a major factor. Not everything needs to go to the data center or the cloud. Many computational workloads, especially small language models, and some large language models too, can be run effectively at the edge. And this approach not only enhances efficiency, but it can also address the privacy concerns that you have.
A lot of AI applications, as you know, can violate privacy. And keeping these within the consumer's equipment, or at the edge, really ensures a higher level of safety and security for AI users. When AI processing is done within the premises of the consumer's edge, it becomes very useful and encourages more usage of AI. That multiplier effect just becomes so much better. Moreover, running more applications in the cloud means you're utilizing the network more and more as transactions happen. This translates into bandwidth and latency issues, as many of you have seen. Now as AI scales, how quickly are these benefits being realized, given these challenges? If you have to go to the cloud for every inference, it can slow down the process. Therefore, bringing AI to the edge is critical and essential.
And that's why we are talking about what we call the hybrid AI model, as we show here. It involves distributing your computational loads across on-device, edge cloud, and central cloud environments, as shown here. On-device AI, of course, offers benefits such as performance, efficiency, privacy, security, and reliability. The central cloud is noted for ease of deployment, training capabilities, and aggregation. And the edge cloud, on-prem as we call it, provides a balance between these two extremes. So in summary, really what you're looking at is a hybrid model for scaling AI efficiently, and for privacy, security, bandwidth, latency, and the overall user experience. Now at Qualcomm, we are really transforming the landscape of AI processing by driving the shift towards on-device AI.
By really leveraging energy-efficient computing directly on the device, at the source of data generation, we achieve more cost-effective and energy-efficient AI processing. And this transition not only makes AI more sustainable in terms of cloud resources, but it also enables a more personalized and private experience, keeping the data and the queries on the device. This has helped us generate a whole new class of AI devices, as we call it, be it the phone or be it the PCs. We call them the Copilot PCs, you know, the first generation of Windows Copilot PCs that were supported by our Snapdragon processors. And along with this, what we are doing is we're enabling intelligent computing everywhere, and that's how we're taking this vision to the home. And that's what I'm going to talk to you about today. We've done it on the device.
We've done it with connectivity, or Wi-Fi, on these devices. But how are we bringing it inside your house? And for that, it's important to understand how the home looks today and what the challenges in the home are today. The latest PCs and phones, as I explained, truly benefit from this on-device AI. But many connected smart devices in our homes, like cameras, smart speakers, and thermostats, depend primarily on cloud connectivity for their intelligence. These devices lack the processing power or storage capacity to support those AI-enhanced features independently. So you may have the fanciest devices, but there are always these laggards, you know, on the innovation curve, as we call it.
So this reliance that you have on the cloud really introduces many challenges. Of course, I spoke about the privacy concerns, where smart devices such as the home security camera shown here send video to cloud servers for facial recognition. And this introduces delays and raises privacy issues due to the personal data that you are transmitting off-site. You're going to the cloud and coming back. And then voice assistants, you know, all the Alexas and the Siris, you're always skeptical about all the data that they are collecting. They are going back to the cloud to relay your queries to remote servers for interpretation, for your response. So we are seeing really siloed systems, unable to share and enhance each other's data effectively.
And as you know, data is the core of AI networks. So this lack of integration across devices, across what we're doing in AI, limits the potential of the devices to work together seamlessly across the network. There's so much more that you could do when they work seamlessly and interconnectedly. So this lack of integration kind of limits us in many ways. And some legacy devices, the ones that I was talking about at the beginning, also get left behind due to cost constraints or just technology limitations, which prevents them from leveraging the power of the modern-day AI network. So those are the challenges we have today as we advance further in AI. This is what today's challenge looks like. And here is where the next-generation AI gateway comes into the picture: AI routers, or AI gateways.
The integration of edge AI into the broadband gateway is really critical to transforming the user experience. By allowing the devices that are on your network to offload their AI computing needs to the gateway, we are enabling exceptional edge intelligence that provides users with instant responses, increased reliability, and privacy. What that means is having a significant ability to post-process and infer in the gateway itself, which you already have in your home, upgrading it to have this ability to post-process the data, to gather all the data across these devices and process it together. You can combine the multisensory information that you're getting from these devices. For example, your camera sends some kind of information, there are other signals that are processed by other devices, and there will be audio that's processed. You could technically bring this all together at the gateway and make decisions.
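The multisensory combination described here can be sketched as a simple fusion rule over events the gateway sees in the same time window. The device names, event types, and rules below are illustrative, not a real gateway API:

```python
def fuse_events(events):
    """Turn raw per-device events from one time window into a decision.

    The event types and rules are illustrative; a real gateway would
    configure or learn these, but the shape of the logic is the same:
    signals that are ambiguous alone become meaningful together.
    """
    kinds = {e["type"] for e in events}
    if {"doorbell_press", "face_known"} <= kinds:
        return "announce_family_member"
    if {"motion", "glass_break_audio"} <= kinds:
        return "raise_security_alert"
    if "motion" in kinds:
        return "log_motion"
    return "ignore"

# Motion alone would just be logged; motion plus a glass-break sound
# from a different device escalates to an alert:
window = [
    {"device": "camera", "type": "motion"},
    {"device": "mic",    "type": "glass_break_audio"},
]
print(fuse_events(window))  # raise_security_alert
```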
And in the following slides, I'll talk about some use cases that we are envisioning for this edge AI gateway. And this shift is really not just about efficiency. It's about driving personalized experiences. Generative AI can be used to predict user patterns, provide custom responses, and recommend solutions based on your behavior. So, you know, you're walking into a home and your home speaks with you and you talk to the home. These are tailored experiences that really enhance your daily life. And that's how we see the access point, or the gateway, in your house evolving and bringing all this together. So you're talking about an AI gateway that's really going beyond connecting devices. We are transforming this landscape by making these devices intelligent, capable, personal, and secure.
This provides a highly contextualized understanding of the local data, specific to each user and your environment, all while keeping that sensitive data private within your house. And this innovation means that even legacy devices can benefit from this advanced AI without needing hardware upgrades, because every time you want to go upgrade, there's a cost associated. So just by bringing that advanced gateway capability in, all these devices and their intelligence can be brought together. So with the AI gateway, we are not just connecting devices, we're creating an intelligent ecosystem that understands and adapts to your unique needs, ensuring privacy and delivering a seamless, enriched experience. What it means really is that it can be divided into two classes. First, on the left side, as you see, you are elevating the user experience with networking AI, which means your current experience with networks and network management, where you're optimizing your traffic for applications like streaming and gaming.
It is enhancing signal coverage by optimizing transmission and boosting operational efficiency. And then the second is about creating new possibilities with generative AI. And this is where you're talking about your LLMs, LBMs, multimodal AI, real-time video analysis, person detection, and many more use cases. So there's the elevate aspect, and then there's the create aspect. Quickly, since I realize we are well over time and I know folks want to attend other sessions, I'll quickly go into a few use cases that we envision and wrap up. One of the biggest use cases we see: imagine your child walks up to your door today and rings the doorbell. Anything that happens there really goes back to the cloud, sends you a notification, and there's all that processing that's involved.
Now imagine if we had this AI gateway scenario, where we had these AI capabilities built into the gateway. This wouldn't have to be sent to the cloud. We could do all that motion sensing, face detection, and the announcement welcoming the child locally at the gateway itself, keeping your privacy, your picture, your data, and everything safe. This is one use case, and we demonstrated some part of this at Network X and MWC this year using our gateways. So it's happening today. We will see some gateways deploying this in the near future. Other examples, whether it's a hospital where critical patient data or locating patients is critical, or enterprises where you're trying to do badge detection or locating people, are also interesting use cases that we could develop using these.
The other one that we demonstrated recently at MWC is where you have your VR glasses and you have an assistant really running on the gateway itself, without having to go back to the cloud. Any questions that you ask are answered by the gateway, when your gateway talks back to you using multimodal large language and vision assistants built in. And another use case, last but not least, is really about taking care of elders and making sure that if something happens to them, there's fall detection, and breathing and everything is monitored, so you can take care of parents wherever you are located.
Like my aging parents are located very far from me, and something like this will really keep me assured that I know they're okay without monitoring them day in, day out. Just if something goes wrong is when I'm informed, and I could look in or ask them if everything is okay. So those are some of the use cases that we are envisioning for AI in the gateway today, and we do believe this is the next generation of innovation with AI that's going to happen in your access points and routers. So wrapping up, edge AI really refers to the deployment of AI compute capabilities within devices at the edge of the network, whether it's your smartphones, your PCs, or your gateways and local infrastructure. And AI, particularly generative AI, with large language models, is driving an unprecedented wave of innovation that is entering the home gateways and the enterprise gateways as well.
And we are seeing a surge in these applications as we work with customers to really bring this forward into your homes to make your lives seamless and easier. Edge AI, of course, makes this efficient, lowers the latency, enhances the privacy, and adds a lot of personalization to these use cases that we are talking about. Finally, edge AI in Wi-Fi gateways is a logical step in the evolution of the gateway. As the central point where we connect everything, all the devices, it offers a lot of opportunities for everybody, devices, access points, the ecosystem, to come together and enable an enhanced, personalized experience. And I really look forward to these AI-powered gateways transforming our connectivity experience in the near future. So thank you for joining me, everybody.