Session: Neurosymbolic AI: a 'best of both worlds' approach to scalable and trustworthy Artificial Intelligence
Artificial Intelligence is an exciting field with a long history of researchers and developers experimenting with different approaches to building intelligent machines. From the study of logic and formal reasoning in antiquity, through the first neural networks, to the rise of voice assistants, a lot of progress has been made in our pursuit of Artificial Intelligence – but each step forward has also brought with it a new set of challenges and limitations.
Recently, Large Language Models have represented a huge breakthrough in machine learning and have revolutionised the way we interact with computers. They are enormously powerful, can learn from vast amounts of complex data, and work with the nuance of natural language in ways never seen before. But they also make errors, are prone to hallucination, and lack explainability and visibility into how they arrived at a given output.
These issues hamper adoption in industry, where many use cases have little tolerance for error, and new legislation like the EU AI Act mandates transparency and accuracy for AI systems.
On the other hand, we are used to more conventional software like spreadsheets, databases and rule-based systems that are reliable and deterministically produce a correct answer – we just don't think of them as AI anymore, as the difficulty of scaling them led to an increased focus on machine learning methods. While these symbolic methods are accurate and human-interpretable, they are labour-intensive to scale to more complex behaviours, and they can be brittle in the face of changes in the data.
What if we combine neural and symbolic methods to get the best of both worlds?
At UnlikelyAI, we are at the forefront of neurosymbolic AI: an approach that integrates neural network-based methods with symbolic knowledge, in a way that builds on the strengths of each.
In this talk, we will dive into what that means:
We will learn about the history of AI, how different methods for pursuing Artificial Intelligence have been developed, and what blockers they hit. As part of this, we will shine a spotlight on symbolic reasoning, its uses, and its limitations. We will also discuss recent developments in Large Language Models, their applications, and the challenges they bring.
Armed with this background, we can dive into neurosymbolic AI and learn how it combines the capabilities of deep learning with the accuracy and explainability of symbolic reasoning. We will see how this approach addresses the weaknesses, and builds on the strengths, of both neural and symbolic methods to deliver trustworthy and scalable AI.
Bio
Deirdre is a Staff Engineer at UnlikelyAI, where she’s helping shape the future of trustworthy, explainable artificial intelligence. With a strong foundation in computer engineering and a career that spans fintech and cutting-edge startups, Deirdre brings almost a decade of experience developing complex, data-driven systems.
Her work at UnlikelyAI centres on Neurosymbolic AI, a novel approach that combines the power of deep learning with the transparency of symbolic reasoning. Drawing on her experience in both conventional software engineering and modern machine learning, she's passionate about building scalable, interpretable AI that meets real-world demands for accuracy and accountability.
Deirdre brings not just deep technical knowledge but also a commitment to fostering inclusive innovation. Her talk will explore the evolution of AI, the challenges of current models, and how hybrid, neurosymbolic approaches can help us build more responsible AI systems.