AI in Everyday Life: Friend or Enemy? Can We Coexist and Thrive?

Candyce Costa
CEO and Founder
Ina Toncheva
AI for Content Marketing Trainer
Pooja Jain
AI Trainer & Consultant, Founder
Carnellia Ajasin
CEO and founder

Automatic Summary

Hello, everyone! Welcome to the blog where we explore the intriguing intersection of **artificial intelligence (AI)** and the role of women in technology. Recently, we had an engaging panel discussion during the Women in Tech Global Conference 2025, focusing on AI's influence in our daily lives. The central theme was whether AI is a friend or an enemy. Today, we're diving deeper into the insights shared by our phenomenal panelists, Candyce Costa, Ina Toncheva, Carnellia Ajasin, and Pooja Jain, on navigating the complexities of AI in our modern world.

The Reality of AI: Friend or Enemy?

AI has emerged as a pressing topic in our society, and understanding its dual nature is essential. The panelists provided compelling insights into how AI can be both beneficial and potentially harmful.

Insights from the Panelists

  • Ina Toncheva emphasized that AI is neutral and depends on human intention. It has transformative potential in work and communication but can also pose risks if we outsource critical thinking to machines. AI literacy is becoming a vital skill for professionals today.
  • Carnellia Ajasin shared her perspective on AI being a mirror of our intentions. She advocated for responsible AI development that prioritizes equity, empathy, and community involvement to design systems that serve humanity rather than exploit it.
  • Pooja Jain echoed similar sentiments, acknowledging AI's power but cautioning against over-reliance. She pointed out the need for human judgment to complement AI's capabilities, thus advocating for a model of informed skepticism.

Embracing AI: Coexistence and Literacy

As AI continues to evolve, it is vital for professionals to embrace it while recognizing their unique human capabilities.

  • Building AI Literacy: Understanding AI's strengths and limitations will allow professionals to harness its benefits without losing their human touch. This understanding is crucial for industries where creativity and critical thinking are paramount.
  • Coexistence Strategies: Companies can foster an environment where AI is treated as a collaborator rather than a replacement. This involves training teams to work alongside AI, nurturing a culture of adaptability and continuous learning.

Ethical Considerations in AI Development

With the rise of AI, ethical challenges are at the forefront of discussions around technology.

  • Regenerative Intelligence: Carnellia introduced the concept of regenerative intelligence—an approach centered on care, equity, and sustainability. It emphasizes how AI should be developed with consent-driven data and community input, moving beyond mere profit motives.
  • Responsibility in AI: Each stakeholder—developers, investors, and policymakers—must share in the responsibility of creating ethical AI systems. Ethical infrastructure should be woven into AI development from inception to deployment.

Addressing Misconceptions About AI

The panelists noted that misconceptions about AI's capabilities can lead to fear and resistance in adopting new technologies.

  • Balancing Fear and Overhype: Both extremes—the fear of AI replacing jobs and the hype of AI as a miracle solution—misrepresent AI's reality. Emphasizing informed skepticism helps professionals leverage AI effectively without losing sight of their essential skills.

Key Skills for AI-Literate Professionals

In a rapidly evolving landscape, certain skills are becoming increasingly crucial:

  1. Curiosity and Adaptability: Professionals must remain open to learning and adapting as AI technology evolves.
  2. Critical Thinking: Human judgment is irreplaceable; professionals should apply their critical thinking skills to validate AI outputs.
  3. Ethical Awareness: Understanding data privacy and ethical considerations of AI applications is essential for responsible usage.
  4. Creative Problem Solving: This unique human capability sets us apart from AI, making it a vital skill in any profession.
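One way to picture the critical thinking and informed skepticism the panelists describe is a small human-in-the-loop gate. The sketch below is purely illustrative: the `ReviewGate` class and every name in it are invented for this post, not a real library. The idea is simply that an AI draft cannot be finalized until a person has checked off every claim it makes.

```python
# Illustrative human-in-the-loop sketch (hypothetical names, not a real
# library): an AI draft is held back until every claim it contains has
# been explicitly verified by a person.

class ReviewGate:
    def __init__(self, draft, claims):
        self.draft = draft
        self.claims = claims                          # statements needing a check
        self.verified = {c: False for c in claims}    # nothing trusted by default

    def verify(self, claim):
        """A human marks one claim as checked against a primary source."""
        if claim not in self.verified:
            raise KeyError(f"Unknown claim: {claim!r}")
        self.verified[claim] = True

    def finalize(self):
        """Release the draft only when no claim is left unverified."""
        unverified = [c for c, ok in self.verified.items() if not ok]
        if unverified:
            raise RuntimeError(f"{len(unverified)} claim(s) still unverified")
        return self.draft

gate = ReviewGate("Q3 market summary", ["revenue figure", "cited case law"])
gate.verify("revenue figure")
gate.verify("cited case law")
print(gate.finalize())  # prints the draft, reachable only after every check
```

This mirrors the legal-team example discussed in the panel: the assistant produces the initial research, but human judgment signs off on each claim before anything ships.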

Your Next Steps in the AI Era

As we delve into the future shaped by AI, consider the following:

  • Continue Learning: Explore courses on AI to enhance your literacy and competence.
  • Join the Community: Engage with platforms and networks focused on women in tech to share insights and experiences.
  • Stay Informed: Keep up with the latest trends and best practices in AI to remain competitive in your field.

Final Thoughts

AI isn't going away; instead, it is becoming part of our everyday reality. The path forward is coexistence: building AI literacy, applying human judgment to what AI produces, and insisting on ethical development, so that we can thrive alongside this technology rather than be diminished by it.


Video Transcription

Hello, everybody. I'm so happy to be joining the Women in Tech Global Conference for another year, in 2025. Today, we are going to have a panel where we discuss artificial intelligence. Everybody is talking about AI, and I think everybody who has been involved with AI in the past two years has been a little bit overwhelmed, like me. But it is a very pressing topic that we have to discuss, especially taking into consideration the presence of women in AI. So our session today is about AI in everyday life: friend or enemy, and can we coexist and thrive? If you don't know me, my name is Candyce Costa. I am the founder of Female Tech Leaders.

It's a community that has supported tech women for the past eight years. I'm also the founder of AI Make Easy, a new company that I'm starting next month, where we are going to help entrepreneurs and digital creators with AI. So everybody, you can see my LinkedIn in the chat box, if you want to connect. We have Carnellia's, if you want to connect, and Ina is going to add her information as well. As usual, I will ask our panelists to talk about themselves a little bit. So, Ina, can you introduce yourself, please?

Sure. Hi, everyone. My name is Ina Toncheva. I'm a marketing consultant and strategist for B2B and tech companies, and I've been doing this for a long time, helping companies of different industries and sizes scale their marketing. Two years ago, I realized that AI is going to change everything we do: the way we do it, the way we communicate, work, and basically everything. So I immersed myself in it. Today, I'm focused on helping marketers and marketing teams implement AI in a meaningful way, moving from scattered prompts to workflows that bring results and make them feel confident. Because I think that building our AI fluency and literacy is one of the most important things we have to do for ourselves and our businesses.

I'll just paste my LinkedIn profile in the chat now.

Yeah. Thank you. So, Carnellia, can you introduce yourself now, please?

Sure. Hello. My name is Carnellia Ajasin, and I'm a venture builder. I'm an investor and the founder of Futures Inc and cofounder of Frequency Capital. My work lives where responsible AI meets capital design, so that we don't just think about intelligence, but about who's building it, who's keeping the technology and AI accountable, and what systems are powering AI to be in the world. I'm honored to be here today. Typically, there's a lot of talk about the exposure of AI being harmful: surveillance, structural biases embedded in AI. I stand alongside those pioneers who speak on that. My focus is on venture pipelines and governance around AI, as well as the capital structures around AI and what funds AI into existence.

So I call my stance around this regenerative intelligence, and it's about building ventures that repair rather than extract from systems, and that embed care, intergenerational responsibility, and justice at the root of technology and AI. So it's not just that AI works, but that AI is worthy of our trust. It's not enough to mitigate harm within AI; we should be looking at doing good things with AI. That's what I'm here to build, and I believe that women will lead the way in this movement. So thank you so much for having me be a part of this important conversation.

Thank you so much. And now, Pooja, please, can you introduce yourself? We just started, so please talk a little bit about yourself and the work that you do in AI. I cannot hear you. Your mic is... oh, yes. I can hear you now. Yes.

Yeah. Hi, Candyce. Hi, everyone. And sorry, I got stuck trying to join. My name is Pooja, and I am the founder of Power Up AI. It is an AI training and AI coaching company specifically focused on senior executives and small and medium businesses. A bit of my background: I started this company last year, but prior to that, I spent almost a decade in corporate, and I have been working in AI and automation for eight years now. In all my years of experience, I have seen this big gap between businesses and technology. Businesses speak one language and technologists speak another. So with my company, I am trying to bridge that gap. I am helping senior leaders and business owners understand technology from a very nontechnical point of view, focused on business results. Thank you.

I will share my LinkedIn in the chat.

Yes, please. Share your LinkedIn, and of course, everybody is very welcome to send a request to connect. So right now, I'm going to start with my first question. In your perspective, do you believe that AI is more a friend or an enemy in our daily routines, our business routines, and our life routines as well? Let's start with Ina, please.

Okay. I think it is both. AI has the potential to do incredible good for the world, and just the same potential to do very much harm. Inherently, the technology is neutral. It all depends on the people behind it, or even the technology behind it. Because what makes AI different from previous technologies is that it can interact, learn, and even talk to other AI systems. This makes it incredibly powerful, as I said, and potentially unpredictable and dangerous. To me, it's a friend when we use it to stretch our thinking, to spot our blind spots, to improve processes, to make discoveries. And don't get me wrong: I'm a big proponent of AI.

I've spent the last years really working with the technology, learning it, and learning its use cases in my profession. But at the same time, I do see a potential danger if some people decide to outsource their thinking, their critical thinking, to AI, and even more dangerous uses beyond my profession, which I'm not going to talk about today. At the same time, AI is part of our reality today. That is a fact, whether we like it or not. So becoming AI literate is, in my view, the only way forward, and not just literate, but fluent. Not just to avoid becoming irrelevant as professionals, but also to be able to at least understand what's going on around us. Maybe not all of it, but at least some.

And for those of you who were on the job market in the early two thousands, I don't know if you remember, but people would write things like "Microsoft Office proficient" on their CVs. "AI literate" or "AI fluent" is what we'll have on our CVs in the months to come. And I think it's even going to be implied that we are fluent.

Yeah. Carnellia, what about you? What's your perspective on it?

Sure. Excuse me. I think my answer isn't binary. AI is neither inherently a friend nor an enemy. It's a mirror, and more importantly, an amplifier of whatever skills, systems, and intentions we feed it. My journey as a venture builder in technology and AI in particular, and also as an investor in AI, has taught me that technology alone doesn't just liberate or oppress; it's shaped by the choices that we make. I've seen brilliant, value-driven founders sidelined for building solutions rooted in data care, reciprocity, healing, and justice around AI, rather than speed and scale. I've also watched funding, as it relates to AI in particular, chase velocity over vision, or traction over trust. But I've also seen the possibilities of AI if we build it differently.

I've seen AI empower local educators to personalize learning in underserved communities, adapting content through human insight, not just algorithms. I've seen startups design platforms that prioritize protecting data over exploiting it. And I've seen AI deployed not just as a disruptor or for profit, but to reinforce trust, agency, and belonging, reminding us that AI is really about serving people, not just markets. So to me, the question isn't simply whether AI is good or bad. It's really about what systems we are building around it and who is going to benefit from those systems. Are we embedding empathy? Are we centering dignity and decency as design principles in AI?

And so my experience has shaped me to believe that AI becomes more of a companion, a friend to human potential, only when we move beyond metrics and toward meaning, toward meaningful AI.

Yeah, and I agree with you. Pooja, what is your individual perspective on AI being a friend or an enemy?

So I would just like to build on what Ina and Carnellia have already mentioned. AI is a technology, and technology in itself is neither good nor bad. It is neutral; it depends on how we use it, and the same goes for AI. The only thing with AI is that, compared to any other technology we have had before, it is very powerful, in that the impact, whether good or bad, is amplified. So we have to be very careful about what we do with it. In my experience as a trainer for senior executives and as an adviser, I have had cases where C-suite leaders have actually increased their productivity by almost two times. They have been using AI really as a collaborator, but I do not see them being dependent on AI, because their role is such that they cannot really outsource everything to AI.

So that, I think, plays a big part: how you use AI and how you integrate it into your daily workflows. And to support that, I think education, AI literacy, plays a big, big part. We have been reading about this in newspapers daily. For example, recently I read in the newspaper that ChatGPT actually advised a woman to file for divorce from her husband. These kinds of cases can happen and will happen; that's the nature of the technology. But this underlines why we need the literacy, why it is so important that we know where we can trust AI and where we still need our human judgment.

Yeah, I agree. I think the flaws of the machine, or the system, or the technology in general, are always a reflection of the humans working with it, aren't they? So, just before we move to the second question, I wanted to tell everybody who is watching us: please say hello. Leave your LinkedIn there. Ask questions. We will try to answer a few questions at the very end. We don't have a moderator, so I have to do everything by myself, but I will do my best. So please enjoy, but also engage with us. The community is here, and I think the networking is an amazing opportunity for us, and of course for our audience to network between themselves as well.

OK? So leave the link and say hi, ask a question. Let's move to the second question, and I wanted to start with Carnellia. You mentioned regenerative intelligence. Could you explain a little bit more about the concept and, of course, what it means for AI in today's systems and the daily routines of people and businesses, and its impact?

Sure. Yes, thank you so much for that. Regenerative intelligence is a model that I use to describe AI that's built not only around responsibility but also intentionality, in terms of how we design, repair, invest, reinvest, and restore. It centers on equity, equality, sustainability, and care, not just in outcomes but from the first decision. Who's building this AI product? Whose data is being used in it? And who is going to be the beneficiary of its outcomes and data? Traditional AI systems often operate in a way that is extractive: scraping data without consent, optimizing for speed over safety, scaling without accountability. These systems mirror, and often magnify, existing inequalities. Regenerative intelligence challenges that.

It asks the question: what if AI were built in partnership with communities, using co-creation or consent-driven data? What if our funding models rewarded trust and intergenerational benefits, not just short-term returns? We're really looking at examples across this whole ecosystem around AI. Some builders are choosing not to rely on off-the-shelf datasets riddled with bias. Instead, they're taking a slower, intentional approach, engaging users in shaping data and reinvesting in communities as part of those business models. Regenerative intelligence, for me, is AI that's rooted in respect, reciprocity, and repair. The work I focus on is also about redesigning the systems that fund, scale, and govern AI.

Because I feel this moment is really important: we don't just need AI to work well, we need AI to work in a way that's fair to everyone. That's the future of regenerative AI for me, and I think it's something we can all build together.

Yeah, thank you. And it's very important. The ethics of AI is, for me as well, a very pressing topic, because AI is starting to get a lot of traction. This is the time, at the very beginning, when we need to start having these kinds of conversations. Thank you, Carnellia, for that. So let's go to Ina. You work as a marketing consultant and strategist. What do you think of the social and economic shifts that we have already seen with AI adoption?

Yes, we are already seeing some big shifts. On the business side, the major one is speed. AI is compressing timelines across the board. Teams can go from an idea to execution in a fraction of the time, and this changes how businesses plan, execute, and communicate. However, this is not applicable to all businesses. On one hand, there's a growing gap in AI literacy, both between team members within a company and from one company to another. Teams that are AI savvy are pulling ahead fast and developing this capability, while those who aren't are basically falling behind. And this brings in another big change that I see, in team dynamics and in how people feel about AI in general.

In any organization, we always have the people who are on the front line, who are interested in innovation and in bringing it to the company. On the other end of the spectrum, we have the people who are afraid, resistant, or indifferent, who fear the change and what's going to happen with their job. And then there's also this shift in tone, because of the hype, because of this skill gap, because of this gap in understanding that AI is here to stay and we have to embrace it if we want to stay relevant in our profession. And because we're seeing AI being driven top-down: we saw the memos from the founders of Duolingo, Shopify, and other companies, who basically

said now?

Yeah. Who basically said: you start working with AI now. It's going to be taken into consideration in your assessment, and you're basically supposed to know what to do right now, yesterday. Because of all these forces coming into play, there's a lot of skepticism and a lot of fog, I would say. People feel forced to pretend they're competent, so I think right now we are in a very messy situation.

I agree with you. But now, Pooja, please: from your experience training professionals, especially executives, what do you think are the biggest misconceptions we have at the moment?

So I think the biggest misconception I'm seeing right now is these two schools of thought at polar opposite ends. One is the school of thought that AI is going to take over our jobs; there is fear, and then there is resistance to adoption. And then there is another school of thought that says AI is there for everything. There is hype, over-hype, that it can do whatever: AI agents that can take over 100% of your job while you just sit, relax, and work, I don't know, one hour a day. These are the two biggest misconceptions I'm seeing during my interactions with companies and senior executives.

Honestly, the reality is somewhere in the middle. AI tools are powerful, we all know that, but they are imperfect collaborators. The professionals adapting most successfully to these AI tools take an approach I call informed skepticism. They leverage the strengths of these AI tools to the best of their capability, but they still apply their human judgment and actively manage the tools' limitations. Let me give you an example. Recently, I worked with a legal team that implemented a research AI assistant. After the initial excitement, they saw that the assistant was hallucinating a lot.

It was producing output very confidently, which looked wonderful, but when you started going into the details, there was a lot of hallucination. This is often what we see happening across teams and organizations; that's the reality of it. So they did not abandon the project, but they shifted their approach. Instead of just relying on the output, they started using the assistant for initial research, and then they applied their human judgment and their expertise to produce the final output. That was still very helpful, because the AI was searching the Internet for them, gathering reliable resources, and creating an initial report with cited sources, which was a big help for them.

So, yeah, for me, as I said, the reality is somewhere in between: we should start using AI, but with this informed skepticism.

I agree. I think, as with anything that is just starting out as a new technology, a lot of early adopters are still grasping what they can do with it, given their own skills. Then we have a number of people who are scratching the surface and selling themselves as experts. And that is when we see the problems, when something goes wrong, and we blame the AI. We say, okay, this is not going to work, and then people get scared: am I going to lose my job, et cetera. So let's talk a little bit about ethical challenges, and I wanted to start with Ina.

How are different industries addressing this disruption, and which approaches, in your personal opinion, are the most promising?

Yes. Different industries are tackling this in very different ways, depending on how directly AI impacts their workflows. In fast-moving industries like marketing and media, the ones I'm in direct contact with, we're seeing what I call a retrain-and-reframe approach. Companies are not cutting roles, from what I see for now; they're reshaping them. Writers, designers, and strategists use AI to do better, faster, and sometimes even more creative work. These companies focus on upskilling teams to work with AI. In more traditional sectors, I think the emphasis is still more on automation and cost cutting. But we're also seeing a trend, and someone mentioned this in the chat, so I'd like to touch upon it.

We do see a significant slowdown in hiring in the tech industry, and I think this creates some fear in people. But at the same time, we see AI-native companies on the rise. We see teams of 10 to 50 people generating tens of millions in ARR. I think that's a huge shift in how we think about scale and efficiency. And the industries doing the best job, I think, are the ones that see AI as a catalyst for rethinking how they get work done, and who gets to do it.

Yeah, that's interesting. Pooja, in your opinion, what are the specific skills that are most crucial for professionals to develop at the moment in order to remain competitive in the market?

So here I would say it's not just one skill. It's a very fast-changing world that we are seeing with AI, so there are a couple of skills I would list that I think are super helpful, regardless of your profession or your seniority. First and foremost is curiosity and adaptability. We are seeing the pace at which AI is changing the way we work and the systems we work with, so to keep up with that, adaptability is essential; it's the core of it. Next, something that's going to play a big role is human judgment and critical thinking. AI tools and AI chatbots are very good with all the mundane and repetitive tasks, but they are still not that good at critical thinking.

So any process, any business, any profession that involves critical thinking will still need a human to approve what AI is doing, to correct AI, to train AI. We should really be honing our critical-thinking skills to be the best in our professions. And speaking of ethics and transparency, this is also something we should be more aware of now that we are working with AI, because AI in general hallucinates a lot. There is a lot of bias in the output, and a lot of lack of transparency. Plus, you should be aware of data safety: for any tool or platform you're using, you should know where your data is being stored, how it is being processed, and what is happening with it. You do not have to be a CISO or something for this, but really start going into the privacy policies.

Start understanding the data-privacy part of it. I think that is very critical. And last, I would mention the irreplaceable human capabilities of creative problem solving and emotional intelligence, because AI is still not very good at them. It can fake emotional intelligence, but we all know that's not the real skill. So these are the skills that we as humans should be honing.

Yeah.

And, of course, AI, like any other complex skill we have trained in, needs a lot of practice, refinement, and a willingness to evolve our approach.

Yeah. Thank you. Carnellia, back to ethics, especially in your case, since you are working very much in this area: what does ethical infrastructure for AI look like in practice? When we think about ethics, what do we as humans need to be aware of?

Yes, thank you for that. Ethical infrastructure, for me, isn't just a checklist. It's the invisible scaffolding that determines what gets built, who gets funded, whose data is being used, and ultimately who benefits. And it must be embedded at every level, not bolted on after the fact, which happens quite frequently. In practice, it looks like datasets that are co-created with communities, not scraped from them. It looks like AI teams that include ethicists, community activists, and socio-technical historians, not just engineers and PMs. And it means funding models that reward harm prevention around AI, not just go-to-market strategies. This isn't just a theory; it's more of an architecture, is what I'm thinking. We don't just need to slow down the damage around AI; we need to design systems that regenerate people and planet. That means shifting incentives from building fast to building wisely, from short-term gains to long-term care. And again, decency and sustainability, that's a really big piece for me. On cap tables, that includes not just profiting for profit's sake, but also making sure that there's an impact of some sort.

Procurement standards that demand audits of efficacy. Accelerators that vet governance, not just growth. But here's the truth: no single actor owns this responsibility. Founders must build ethical structures into the blueprint of the AI products they're building. Investors must stop rewarding speed over safety. Policymakers need to craft and reinforce guardrails around AI. And educational institutions must equip the next generation to lead with moral clarity, not just technical fluency.


So as stewards of ethical AI, I think we have to interrogate those harms and try to prevent them by reimagining how ventures are funded, governed, and scaled before the first line of code is even written.

Yeah, totally agree. And I think that is the ethical bit of AI: having humans more involved and taking responsibility. Not only the companies, the startups, and the whole ecosystem, but also governments and education. Everybody in society needs to jump in, try to understand what's happening, and then create an action plan. So what I'm going to do now, as I mentioned to my guests, is keep it casual. I'm going to jump to the chat and pick up some questions. I think I will pick three questions.

I will ask one question of each of you because, of course, we are already halfway through our panel, so we have to go a little bit faster or they're going to cut us off and just log us out. First question. It's from Hashithia Kumar; I'm so sorry if I mispronounce your name, I'm doing my best. She says that we are getting more dependent on AI, and she asks: could the panel share tips on how not to overuse AI and still rely on human intelligence as the first point of reliance? I'm going to ask Puja to answer that, because you've been training people for a couple of years. So, one or two tips on how not to overuse AI?

The first thing I want to mention here is that there are certain areas of our lives where we are going to rely on AI. Right? Take the example of Google Maps. Since Google Maps came into our lives, who has even opened a physical map? We have lost that skill, and that is eventually going to happen elsewhere; there are certain skills, certain areas, that we are going to completely outsource. Having said that, I think it all comes down to the conscious use of AI. It's like social media: when you start going deeper into it, you can spend hours on it, but at some point you have to be very conscious of what you are consuming and what you are doing with that information. The same goes for AI. Yes.

I mean, tasks like email writing and copywriting will, to some extent, go to AI. But if you still want to keep a skill, for example, I love writing, so I do not outsource that to AI completely. I write the first draft and use AI only for editing, making it better, putting it in the proper format. That is what we have to keep in mind. And again, I would insist that AI literacy plays a big part, because once you start understanding where AI can help, where it cannot, and where it can go wrong, you become more conscious of your usage as well.

Yeah, thank you. So the next question is from Rupaul Gupta. He's asking: how can we coexist? Give some suggestions, please. Let's go with Carnellia. How can we coexist with AI, Carnellia? Give one or two suggestions for how we can do that.

Yes. So I think, in terms of how we can coexist, we should look at AI not as a savior and not as a threat, but see the relationship for what it is at its source. AI is not just about technical brilliance; it's also about moral clarity. So coexisting is really about normalizing dignity as a metric alongside KPIs, and measuring impact not only in revenue but in restoration, in how we utilize AI. AI systems should be judged not just by their efficiency, but by the way they expand access, restore trust, and serve generations beyond our own.

Coexisting is also a matter of looking at how we work together. As Puja said earlier, using AI as a companion, as opposed to relying on AI to be the be-all and end-all for the data we're extracting from it.

So I think those are some of the things we can actually do to establish coexistence with AI.

Perfect, thank you so much. So now, Ina, we have Nandini Tayo, and she's asking: in a world where AI can mimic human behavior and even emotions, how do we protect authenticity and trust in human interactions?

Yes. I've been thinking about it; I saw this question.

Yeah, I've been thinking about it too. Actually, I will say something after your answer.

Okay.

But please, go ahead.

So I think, when it comes to human interaction, whether the humans we're interacting with are authentic is up to them; we cannot influence that. What we can influence, though, is whether we choose to interact with humans or with machines more and more. Because we now take it for granted, as normal, to talk to ChatGPT about everything work-related, to talk to Claude for content, to talk to another AI agent for data analysis, et cetera. Sometimes we use our favorite LLM for a quick session instead of going to a psychologist. However, it's when we start reaching out to AI to replace human interaction that it matters. A client of mine who does content moderation, which is an extremely important topic, was approached by a company that is renting out robots as friends: you talk to them and it charges you something like $2 per minute. Now, I think these are the important choices that depend on us.

How often do we reach out to the AI, to the robot, instead of to the people in our lives? I think it's a slippery slope. It's not that I'm saying it's wrong to do that; I can imagine it happening. But it is within our control.

Yeah. For me, it's about humans learning to recognize what is AI and what is human. You know? AI can mimic us, but it's not perfect. It's getting much better, and it will keep getting better. But there are differences, and when you are educated and understand how these systems work, you can tell. For example, on Instagram there are a lot of AI influencers, and you see people engaging with them without knowing they're not human beings. It's a bot answering, with images generated by one of these AI tools. So I think it comes back to education again. For example, I love deepfakes, in the sense of listening, learning, and researching about them.

So I think it's very important for humans to learn how to look something in the eye, so to speak, and tell whether it's a machine, an AI, or a human being. Someone asked about courses and training sessions, so I put a link in the chat. If you go there, it's my blog, and I talk a lot about courses, paid and free, that you can take. And I've done all of them, so I know whether the courses are good; if they're not good, I don't talk about them. There are lists there for everybody interested in studying, learning, and knowing more. Of course, Ina, Puja, and I also have training sessions, free books, free programs, everything like that.

So if you follow us on LinkedIn, you're going to have access to all of that. And if you are in the US, you have Carnellia there; Carnellia is based in the US. A lot of people in the chat are saying thank you because we answered their questions. Sometimes, when the chat is quiet, I like to ask my own questions, but when we have interaction, I like to ask theirs. We are very close to the end, so let's do just one more. Dani is asking which AI tools we would recommend. Each of us will give one tool we cannot live without. Ina, what is the AI tool you recommend?

It's going to sound very banal and cliché, but it's just ChatGPT, because you can use it for everything. And my favorite use in my profession is all the custom GPTs you can create, which teams can use internally to make sure content is on brand. Another great use I see: many companies use these custom GPTs as an amazing new marketing channel, where they serve discovery and bring people to their brands.
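The "on brand" custom GPTs Ina describes are configured through the ChatGPT interface, but the underlying idea can be sketched in code: pin the brand guidelines into a system message so every generation is constrained by them. This is only a minimal sketch; the guideline text, function name, and model name are illustrative assumptions, not from the panel, and the request payload is built but never sent to any API.

```python
# Sketch: keep generated content "on brand" by pinning brand rules
# into a system message. Guideline text and names are illustrative.

BRAND_GUIDELINES = (
    "Voice: warm, direct, jargon-free. "
    "Always write 'AI literacy', never 'AI skills'. "
    "No exclamation marks."
)

def build_on_brand_request(user_prompt: str, model: str = "gpt-4o-mini") -> dict:
    """Build a chat-completion payload with the brand rules riding first."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": BRAND_GUIDELINES},
            {"role": "user", "content": user_prompt},
        ],
    }

req = build_on_brand_request("Draft a two-line LinkedIn post about our AI course.")
print(req["messages"][0]["role"])  # the brand rules are always the first message
```

The point of the sketch is the shape of the payload: whatever the user asks, the system message with the brand rules is prepended, which is essentially what a custom GPT's instructions field does for a whole team.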

So, Carnellia, which is your favorite? Yeah.

I'd have to say ChatGPT as well, because I always want to make sure that whenever I'm speaking about something, it stays within the context of my platform.

Puja, what is your favorite, the one you recommend?

Since we have had ChatGPT twice, I would say: for AI agents, start exploring platforms like Zapier and Make, any time. I think that is the next step, from chatbots to automations.
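Puja's step "from chatbots to automations" can be pictured as a trigger followed by a chain of actions, which is the model Zapier and Make let you wire up without code. Here is a minimal plain-Python sketch of that idea; the step names and the event fields are invented for illustration, and the LLM and Slack steps are stubbed rather than real integrations.

```python
# Sketch of a Zapier/Make-style automation: one trigger event flows
# through a chain of action steps. All names here are illustrative.

def extract_email(event: dict) -> dict:
    """Trigger step: pull the fields we care about from a raw event."""
    return {"sender": event["from"], "subject": event["subject"]}

def summarize(task: dict) -> dict:
    """Action step: a real flow might call an LLM here; we stub it."""
    task["summary"] = f"Mail from {task['sender']}: {task['subject']}"
    return task

def notify(task: dict) -> dict:
    """Action step: a real flow might post to Slack; we just record it."""
    task["notified"] = True
    return task

def run_flow(event: dict, steps) -> dict:
    """Run the event through each step in order, like one Zap or scenario."""
    data = event
    for step in steps:
        data = step(data)
    return data

result = run_flow(
    {"from": "dani@example.com", "subject": "AI tools?"},
    [extract_email, summarize, notify],
)
print(result["summary"])  # → Mail from dani@example.com: AI tools?
```

The design point is that each step only sees the output of the previous one, so steps can be reordered or swapped, which is exactly the flexibility that makes these platforms a step up from a single chatbot conversation.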

Yeah. And I'm going to go a little different, because of course I wanted to give something different. If you love images and want something very, very easy, go to Leonardo AI. It's very easy to use, a lot of fun for images, super cool, not as difficult as the other ones, and the output is really, really good. I also love Gemini, because I connect it with my Gmail, my calendars, and everything. And I cannot live without Notion, or without Tactiq, my AI notetaker. Oh my goodness, I have so much stuff; I live with AI. So, Dani, can I write them down? Yes, I'm going to write really quickly: Gemini. Puja, if you don't mind, can you write the names of your recommendations?

I did. I did.

Okay, cool, thank you. Notion, yes. Carnellia, thank you. And the other one I said was Leonardo; Leonardo is the one for images. Sorry, I'm trying to speed things up now because I know they are going to finish our