Session: So, what’s AI anyway? Non-expert users in the face of disruptive technology
Artificial intelligence has become ever more pervasive in our daily lives and is being applied to a growing number of fields, such as banking, healthcare, and art. A variety of tools that automate tasks such as text-to-image creation, disease detection, or even creative writing are now available to the general public for free. However, people’s understanding of AI is still heavily influenced by science fiction, and AI agents are often humanised. Moreover, the term “artificial intelligence” can be misleading, as the concept of “intelligence” is typically associated with the ability of sentient beings to elaborate thoughts. In this light, “artificial intelligence” may be misconstrued as a machine’s ability to think and be aware of its own thoughts.
Such widespread misconceptions are partly the result of a community of tech experts that is not invested enough in popularising complex concepts and issues, nor in empowering non-expert users to interact with AI in an informed and conscious way. The risks of releasing disruptive technology without properly educating the public on its usage and its ethical implications are manifold. For example, information provided by systems such as ChatGPT may be inaccurate, biased, or even blatantly wrong, and should be verified before being used to make decisions. Large language models (LLMs) have a considerable environmental impact and don’t represent all languages and cultures equally, thus threatening to perpetuate the stereotypes and worldviews of only part of the population.
This talk will focus on outlining the potential societal risks of the uninformed use of AI-powered tools by non-expert users and will discuss how the contribution of experts to the popularisation of this field can help.
- The AI-powered technology that has recently been released to the public can be extremely useful, but it also has environmental, social, and ethical implications that may not be apparent to non-expert users.
- The corpora used to build LLMs (large language models) are not curated carefully enough to ensure that they don't perpetuate harmful stereotypes, and they are not sufficiently representative of minorities and their views.
- It is the responsibility of AI practitioners to dispel false myths about AI and raise people's awareness of both its potential and its risks, so that people can make informed and conscious use of it.
Raffaella Panizzon has been working in the field of NLP since 2014, first as a research assistant at the University of Padua and then as a Language Engineer for tech companies, with a focus on voice assistants. Her academic and professional path has led her to understand the power of combining linguistics with AI, and how that combination can positively impact the lives of millions of people.