Session: The Geopatriation Era: Why Data Sovereignty Will Force You to Rethink Your AI Architecture
We are entering the Geopatriation Era, where data, algorithms, and AI models are being brought home due to tightening national controls. Just as manufacturing was reshored for security, AI pipelines are now being re-anchored to domestic or trusted ecosystems.
The evidence is everywhere. US tech giants consolidate data centres and embed US norms into globally deployed LLMs, designing through a purely American lens. Meanwhile, Tilde AI in the Baltics builds language-specific models to preserve what dominant models erase. As governments implement sovereignty requirements (the EU's GDPR, China's CSL, India's localisation mandates, Australia's critical data laws), the cloud-first architectures that defined the last decade are becoming strategic liabilities.
Yet most tech leaders remain unprepared, conflating political mandates with actual sovereignty.
This talk explores three layers where sovereignty will force fundamental architectural change: the data layer, the cultural layer, and the institutional layer. Through Australia's distinctive lens, that of a mid-sized nation navigating between tech superpowers whilst holding responsibility for First Nations peoples' 60,000 years of irreplaceable knowledge, attendees will see what strategic sovereignty looks like when a nation prioritises holistic benefit over political posturing.
You'll learn how to separate political theatre from strategic necessity, audit your AI dependencies for real sovereignty risks, and judge when an organisation needs to build capability in-house versus when diversifying vendors suffices. You'll also learn how to stay grounded in your organisation's actual needs rather than getting swept up in sovereignty rhetoric. What problem are you solving: compliance, vendor lock-in, or strategic control? The answer determines your architecture.
Most critically, you'll discover why AI teams now need policy fluency alongside technical fluency.
The era of borderless AI is over. What you decide today determines whether you retain strategic control or become a permanent consumer of others' architectures.
Bio
Kenea Dhillon is Director of AI Technology & Delivery at Victoria University, Melbourne, Australia, where she leads enterprise-wide responsible AI adoption and navigates the intersection of AI innovation, data sovereignty, and organisational transformation.
A thought leader and speaker on AI governance and cultural preservation, Kenea works across AI strategy, capability building, and advising senior leadership on responsible AI investment in higher education and financial services. She serves on the AI Leadership Exchange (Bevington Group), CAUDIT's HE AI Executives & Directors Special Interest Group, and AALIA's Enterprise Information Management Special Interest Group (SIG) Executive Committee.
A strong believer that AI can be a great equaliser when used with intention, she brings a unique Australian perspective on how mid-sized nations can build AI sovereignty whilst competing with tech superpowers—always asking: what problem are we solving, and for whom?