You, AI, and the Future We’re Building Together by Lasma Alberte
The Future of AI: Amplifying Human Intelligence and Its Impact on Society
In today's rapidly evolving world, the future is being shaped significantly by Artificial Intelligence (AI). It has become crucial to understand how AI will affect our lives, jobs, and society as a whole. In this blog post, we will explore the positive aspects of AI, the skills needed for success in an AI-driven world, and how organizations like Kyndryl are adapting to this transformation.
AI as a Tool to Amplify Human Intelligence
AI presents a unique opportunity to enhance human intelligence. By leveraging AI, we can overcome traditional limitations that have historically hindered our intellectual capabilities. Here are some of the areas where AI is making a significant impact:
- Information Accessibility: Unlike the past, where knowledge was limited to what we studied, now we can instantly access vast amounts of information through AI.
- Time Efficiency: AI tools can perform calculations and analyses faster than humans, enabling us to focus on critical tasks.
- Error Reduction: AI minimizes human error by providing systematic tools and reducing the chances of mistakes during data processing.
Skills Required in the AI Era
As we embrace AI in our daily lives and work, the question arises: What skills do we need to thrive in this new landscape? The answer involves both new and traditional skill sets:
- Technical Skills: Familiarity with coding and software engineering will remain essential, even as AI automates some coding tasks.
- Critical Thinking: The ability to assess AI outputs critically is vital. This involves asking questions about the validity and relevance of the information provided.
- Creativity and Strategic Thinking: AI should free us from mundane tasks, allowing us to focus on creative and strategic endeavors that drive impact.
While AI tools can generate code or assist in research, having a strong foundation in these skills allows professionals to discern high-quality output from AI and apply it effectively in various fields.
Embracing AI in the Workplace
At Kyndryl, we believe in utilizing AI to address customer needs without compromising quality or compliance. Our approach involves actively listening to clients and customizing solutions that suit their unique circumstances. Here are some key initiatives we have undertaken:
- Development of the Agent AI Framework: This framework allows for flexible, adaptable AI solutions that are not locked into specific technologies or platforms.
- Policy as Code: This initiative focuses on embedding compliance and governance into AI workflows, ensuring adherence to internal and external regulations.
- Customer Consultation: We work collaboratively with clients, helping them navigate the complexities of integrating AI into their operations.
Preparing for the Transformation Ahead
As AI-driven transitions become more prevalent, businesses need to adapt to remain competitive. The adoption of AI is still on the rise, particularly among organizations that were not built around AI from the start. However, the transformation is inevitable, and the journey presents an exciting opportunity to redefine industries.
In conclusion, while AI will undoubtedly change the landscape of work and human interaction, it is crucial to remember that humans will always retain the essence of creativity and originality. Embracing this technology, developing skill sets around it, and focusing efforts on enhancing our unique human capacities can lead to a more productive and innovative future.
Whether you're considering a career shift or looking to leverage AI within your current role, now is the time to delve into this transformative technology and harness its potential.
Get Involved
Are you ready to embrace the future of AI? Share your thoughts on how AI is transforming your industry or reach out to explore how Kyndryl can assist you in navigating this exciting landscape!
Video Transcription
So today, we want to talk about the future, how it's being shaped by AI, and how it will impact all of us together. I'll start on a positive note by saying that, in general, I view AI as an opportunity for amplifying human intelligence, with a big emphasis on human, and we genuinely believe that it can bring our intelligence to the next level. If I drill into this statement a little bit: the scientific method, which is the foundation of accumulating our understanding and knowledge as a society, has always been very strict in what it means. It should be data driven. It should be reproducible, so if you make an experiment, you are able to repeat it, and it should be falsifiable and unbiased. Using this method, we have accumulated a body of knowledge that we can all collectively use, and that is the basis of human intelligence.
However, human intelligence itself, the skills we have, and the way we operate and do our daily business are in some sense incomplete; there are known weak points of our intelligence. Most of all, and for most of history, I think the biggest drawback was incomplete information. You would only know what you had studied, and you would have limited access to other knowledge. Not like these days, where, when you don't know something, you don't even Google it anymore; you ask your favorite LLM and it gives you the answer right away. This incomplete information also worked together with time constraints: you wouldn't have all the time in the world to do the investigations or calculations required to discover something new. And there's always human error in the loop, because we do make mistakes.
So in some sense, when I phrase it this way, humans are quite vulnerable. When I say that we can amplify intelligence with AI, I mean that it takes away a lot of these weak points we have as people, because you can develop systematic tools. These tools can be scalable and reusable, and you can achieve more by using them. We should not be afraid of it, and we should not deny it. We should embrace it and use it to help us become better in everything that we do, whether that's in an enterprise, in how children study at school, or in how we ourselves enhance our skill set as adults. So, talking about the skill set and what AI will mean for us, for myself, for you, the question often arises: what skills do I need today?
Especially in the area where I am working, because I'm a technical person: I lead the development team working on Kyndryl's agentic framework and policy as code, which I'll talk about in more detail a little bit later. What I see most are the changes in software engineering, in the way people write code and approach it. If I open my LinkedIn on a daily basis, there are statements that are sometimes a little bit dismissive, saying who needs software engineers these days, and asking whether one should even pursue a computer science degree, and so on.
And I genuinely believe that's completely unfounded, because the only thing that has changed is that people don't really need to write the code letter by letter themselves. However, they still need to instruct the LLMs to write the code for them. LLMs are designed to understand language and create new material in language, so writing code is obviously the first thing one could practically try to make them do. These days you can see people who never used to be technical vibe code a small hobby project or create a website for themselves with minimal effort, just by laying out their intents to an LLM and then watching the task get completed in front of their eyes. However, to be able to actually assess the quality of the output, you have to be able, to some extent, to know what good looks like.
And you don't know that if you don't study. So the skills of actually writing and understanding code still need to be gained, and also valued a lot. It's not that you can completely trust the AI writing your code, because without having seen, say, production-grade code, you can't ask the LLM to write it. Without having spent hours debugging, you will not be able to debug the errors that will come when, say, Copilot changes your code in 1,000 places across your repository and you can't understand what's happening. And without having tried out various solutions to arrive at the best one, it's also just impossible to trust the output generated by AI if you don't know the subject yourself. So when people ask, do I need new skills to operate in the AI era?
Yes, you do need some new skills, but I would say you also really need the same skills that you already have, for example as a software engineer, because you just need to apply them differently. It's almost like coaching a junior team member. And in general, the same applies anywhere else. I picked software engineering because it's just what I do, but the same is true about other things, say, academic research. There are now papers coming out whose results have been derived using AI tools, and somehow the credit gets given to the AI for having done that. For example, GPT-5.2 was said to have derived results for some amplitudes in particle physics. However, since that was my field, I know the names of the authors of those papers, and they were already highly accomplished in their field before AI.
And the fact that with this new tool they could derive new results just reiterates the same statement: we amplify our intelligence. But you have to be an expert in the field to get to that point in the first place. So in general, I would say that these days we have to be more creative, more strategic, and more impactful. I genuinely believe that using AI tools should free humanity from the more mundane tasks so we can instead excel at what humans do best: be creative, be original, exhibit great strategic thinking, and apply our sometimes gut feeling or instinct about the next most impactful thing to achieve, because maybe you now have a little more time for that instead of doing some of the things you didn't like that much anyway.
I will say it again and again, and maybe I'll be wrong one day, but I think that originality will stay with humans for a while. Maybe that's even a bit more of a philosophical thing to say, but imagine creating something really creative, such as films or music. I think a lot of that comes not just from the practical side of it, but from you as a person. Let's not forget that. But then, what will differentiate professionals in an AI-enabled world? By differentiate, I mean that since we all have more or less the same AI tools, how will we make the difference?
If we all ask exactly the same things of an LLM, most of the time the answer will be very similar, and I think people have started to be more aware of what AI-written text looks like. So how will we differentiate and excel in that world? I want to put the emphasis on intent, precision, focus, and critical thinking. What I mean by that is: everything starts with the intent. The LLM, or GenAI in general, will only do what you tell it to do.
So the intent stays with you, on the strategic goals and what you want to achieve. Then precision will also be very important, because most of the time you have to be able to communicate what it is that you want to achieve. Intent is not enough; you have to communicate it, whether to your coworkers, your customers, your investors, or the LLMs themselves, in a clear and eloquent way. And focus. Focus is really needed because there's a lot of information out there, and it grows every day. There's something new every day; someone releases a new feature or anything. These are distractions. If you forget the intents you initially had and just let yourself be pulled in the direction of the newest hype, that's not always helpful, because you have to remember where you're going and not let yourself be distracted from it.
And critical thinking, I think, is key. Asking questions such as: is this a valid output? Does this make sense? Does this solve my problem? Is this aligned with my initial intent? These will be the estimators for success. So, what does AI look like at Kyndryl? We largely listen to our customers, because we are not a product company; we are a services company, so we serve the customers. We listen to them and try to understand what they need from AI and to what level they need AI tools. And when they don't need them, we also tell them that for this use case it's actually not required, because not everything needs to become agentic just because it's the hype of the moment. We have turned the boldest of our ideas into universal and custom solutions that satisfy the needs of our customers without locking them into a particular tool or platform.
Because we have so many customers, they are all different, even in terms of, say, the cloud provider they have chosen, the hyperscaler they're working with. So our solutions always have to be very agnostic to the technology and rather have to act as a guiding principle for them. As such, we have developed our own AI framework, largely because the existing frameworks are still quite restrictive; at least, that's what we find. There's a bit of a lingering black-box effect, even in frameworks where you are allowed to customize quite a lot of what the agents are doing, how you prompt them, or how you evaluate them; there's always still something happening behind the scenes. And many of our customers are in very regulated industries. Being regulated, they actually need to be able to audit every single step in what the agent does: what decisions were made, for what reasons, what the evaluation strategies for the agents are, and so on.
Also, whenever you have a production-grade service, you have to make sure it is up and running all the time, because when it's not, it's in the news, especially if you're a tech company. Another thing that was important for us when we were developing this agentic framework was separating the agents from the tools. I wouldn't call it confusion; it's rather that these concepts are put together so much that the agent almost becomes the agent-with-the-tools without anyone even thinking about it. But how we think about the tools is that the tools are the deterministic things. Tools are the databases you have access to, and the computations and calculations you do on those databases. That is not agentic; it's not the LLM. It's fully deterministic, up to the extent that you have made your tool deterministic.
If it's a predictive model from data science, it's still not fully deterministic, but you know the accuracy, the metrics, and the benchmarks with which you created it. So we wanted to be clear that these are separate. All our agents, we call them minions; they are just like small minions. You give them the tools: these are the tools, these are the tasks, go and execute them. They execute the task using, let's say, the standard ReAct loop: you look at the task, you execute, you evaluate, and then repeat if needed. Another thing we have been doing in my team is the policy as code engine, or framework. Policy as code came to be as our response to compliance in enterprise AI.
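The separation described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not Kyndryl's actual framework: the class and function names, the example tool, and the loop structure are all hypothetical. The point is that tools are plain deterministic functions, while the "minion" agent is handed those tools and runs a simple ReAct-style act-evaluate-repeat loop around them.

```python
from typing import Callable, Dict

# A tool is fully deterministic: same input, same output, no LLM involved.
def lookup_balance(account_id: str) -> float:
    accounts = {"acc-1": 250.0, "acc-2": -40.0}  # stand-in for a real database
    return accounts[account_id]

class Minion:
    """A small agent that is given tools; it does not own or define them."""

    def __init__(self, tools: Dict[str, Callable], max_steps: int = 3):
        self.tools = tools
        self.max_steps = max_steps

    def run(self, tool_name: str, arg: str, goal: Callable[[float], bool]) -> float:
        """ReAct-style loop: act with a tool, evaluate the result, repeat if needed."""
        result = None
        for _ in range(self.max_steps):
            result = self.tools[tool_name](arg)  # act: call the deterministic tool
            if goal(result):                     # evaluate: is the task done?
                break                            # goal met, stop early
        return result

minion = Minion(tools={"lookup_balance": lookup_balance})
balance = minion.run("lookup_balance", "acc-1", goal=lambda r: r >= 0)
print(balance)  # 250.0
```

In a real agentic framework, the "act" step would be an LLM deciding which tool to call and with what arguments; the sketch keeps that part deterministic so the tool/agent boundary stays visible.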
Because in enterprise AI, as I already mentioned, compliance with policies, whether internal or external, is very important. If it's a financial institution or a government, there's very little room for error, and you have to be able to put guardrails around the agents: what they are allowed to do and what they are not. Policy as code is all of this together. It starts with ingesting the policies; these can be any documents you have that describe the guardrails you have to comply with and the processes in your organization. We ingest these, we extract the policies, we extract the process flows, and we extract the rules.
When we say rules, we mean the machine-readable rules around the policies, and we focus mostly on the rules that determine whether you are allowed to progress from one step to another in that process. An example of such a process would be an application process for anything, really: say, opening a bank account, or applying for citizenship if you like. There are eligibility criteria, and there are risk score calculations described in that policy, defining what is considered risky. What we imagine is that at each step, the agent executes a small task, for example calculating your risk score. But then, to be able to progress to the next step, it has to validate the transition against what we call a decision engine, which is like a small database onto which we have uploaded all the rules that apply to this workflow. The agents are obliged to go and check before they can progress to the next step.
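A toy version of such a decision engine might look like the following. This is an illustrative sketch only: the rule table, step names, and the risk threshold are invented for the example and are not the real product's rules. The idea is that rules extracted from policy documents become machine-readable checks keyed by workflow transitions, and an agent must pass the check before moving on.

```python
# Machine-readable rules keyed by (current_step, next_step).
# In practice these would be extracted from ingested policy documents;
# the concrete rules below are made up for illustration.
RULES = {
    ("eligibility", "risk_scoring"): lambda ctx: ctx["age"] >= 18,
    ("risk_scoring", "approval"):    lambda ctx: ctx["risk_score"] < 0.7,
}

def may_transition(current: str, nxt: str, ctx: dict) -> bool:
    """Agents must call this check before progressing to the next step."""
    rule = RULES.get((current, nxt))
    if rule is None:
        return False          # no rule registered for this transition: deny by default
    return rule(ctx)          # evaluate the policy rule against the applicant context

applicant = {"age": 30, "risk_score": 0.4}
print(may_transition("eligibility", "risk_scoring", applicant))  # True
print(may_transition("risk_scoring", "approval", applicant))     # True
print(may_transition("approval", "done", applicant))             # False: no rule exists
```

Denying by default when no rule exists mirrors the audit requirement described above: every transition an agent takes is backed by an explicit, inspectable rule rather than implicit behavior.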
So, yes, Kyndryl's agentic AI framework and policy as code are something we are excited about and talk about a lot with our customers. However, AI-enabled consulting in general is quite new for everyone, and we see a lot of customers who were not AI native to begin with. Now AI is everywhere and everyone needs to have it, and you can sometimes see that they are not certain where to begin or how to even approach this transition, and that's where we can come in and help. That's part of our daily work: it's not just development work, it's really also talking, understanding, and showcasing what we can do and what can be beneficial for them.
Sometimes they want to try our agentic framework, and sometimes they don't. It really varies from customer to customer, but we are always very open minded with them, and it's always in their best interests; we are just there to consult. To conclude, this is the beginning of the transformation. What we do see is that the adoption of AI in enterprises is still quite low, at least among the ones that were not intrinsically AI native, because they don't know where to start, and sometimes there's a significant upfront investment required before it becomes tangible, before you see some profit from AI. So there's a lot to do, and it will happen, because we can't undo it; this transformation will take place.
And we will see enterprises becoming more and more AI native. So it's a very interesting time to, say, switch careers or adapt within your current role.