AI Unveiled: Balancing the Benefits and Burdens with Legal Perspectives

Automatic Summary

Today, we delve into the world of Artificial Intelligence (AI), examining its wealth of opportunities and complexities from a legal viewpoint. As a tech and business attorney, my specialty is supporting high-tech businesses, including those in software, SaaS, AI, and digital health spaces.

The Potential and the Challenges of AI

The advent of AI brings a plethora of benefits, significantly impacting various businesses. These include the automation of mundane tasks for greater efficiency, the replacement of human labor in risky jobs, faster content generation, and improved customer support. AI also holds the potential to redefine our positions in the workplace and society.

However, along with the extensive benefits, come challenges that we must address as a society and individuals. Let's explore a few key areas: social and ethical considerations, economic issues, intellectual property considerations, privacy and confidentiality matters, and liability issues.

Crucial Areas for Consideration in AI

1. Social and Ethical Considerations

AI enters the realm of ethics in various forms, including potential biases in its use. AI can perpetuate biases embedded in our systems, leading to potentially harmful outcomes, e.g., biased sentencing in criminal cases or misinformation and social manipulation via bots. There have also been concerns about technology personification, where people misunderstand AI as a sentient being, leading to risky decisions.

2. Economic Impacts

AI is experiencing exponential growth. Economists predict that AI-induced job losses may drastically outpace AI job creation. Hence, we must consider potential remedial measures, such as universal basic income or universal health care to protect the workforce.

3. Intellectual Property

Intellectual property arises in AI in two key ways: potential copyright protection for AI-created works and the use of AI for intellectual property administration. The question of intellectual property rights and eligibility for AI-generated output is still under robust discussion in courtrooms and legislative bodies.

4. Privacy and Confidentiality

With AI becoming increasingly data-driven, concerns about privacy and confidentiality rise. Organizations that use AI platforms should ensure they handle sensitive data carefully to avoid violating any privacy rules and proprietary rights.

5. Liability Issues

Identifying who is responsible for losses or damages caused by AI errors can be complex. With self-executing machine code, the line of causation becomes ambiguous. For instance, in the case of an autonomous vehicle error causing damage, multiple parties could potentially be liable, from the developer to the manufacturer to the owner. Hence, there is a need for clarity in agreements and the proper allocation of liability.

In summary, despite the fascinating benefits AI brings, balancing its benefits and burdens poses a challenge. Social, ethical, economic, intellectual property, privacy, and liability issues are significant considerations that governments, businesses, and individuals alike must tackle to ensure the responsible and beneficial use of AI technology.

Video Transcription

So today, we're gonna be talking about AI unveiled: balancing the benefits and burdens with legal perspectives. As my background, I am a business and tech attorney. I work with small and midsize businesses and help them negotiate and draft strong contracts so they can really focus on growth and not on disputes and ending up in court. My specialty is working with high-tech businesses, such as those in the software, SaaS, AI, and digital health spaces. So one of the interesting things with AI is the opportunities it brings to a lot of businesses. Examples of this include increased automation of simple tasks, which creates more efficiency; replacement of human labor for more dangerous and intensive tasks, which reduces injuries; the ability to generate content quicker; and the potential to improve customer support and service by having support and service available 24/7.

There is a potential redefining of our basic positions in the workplace and society. So there's a whole host of positives and potential opportunities that come with the technology. However, there are also challenges that we face with this technology, as a society and as individuals. And there are a couple of particular areas for consideration. One is social and ethical considerations. There are also economic issues, intellectual property considerations, privacy and confidentiality considerations, and liability issues. And we're gonna take the time to talk about each of these different areas briefly today. Social and ethical considerations come into the technology and its use cases in various forms. Specific examples are really geared towards the issue of bias, for example, which is one area.

And there are different ways that bias comes in. It comes in from developers, from the data, and from how the technology is used. For example, with facial ID, there are high error rates in identifying minorities and women, often because of biased data sets. We also have concerns about uses of the technology that can perpetuate biases. For example, there are already certain situations where judges are using AI; there is some technology that uses algorithms for sentencing and sentencing guidelines, which can be very problematic because the criminal justice system already has biases built in. Now you have an algorithm that can perpetuate that. Other ethical considerations and social issues come in, for example, with increasing problems with misinformation and bots that are leading to social and political manipulation in many contexts.

We also have, for example, the situation where some people are unable to clearly differentiate the technology as a tool and sometimes personify it instead. For example, in the past year or two, there was someone who was engaging with one of these bots, and they got into a discussion about climate change. They somehow got into a bargain with the bot that if they killed themselves, then the bot would save the world from climate change, and they had personified it.

So they thought that was an actual valid thing that would happen, and they ended up hurting themselves. So it's a situation where people have to really be educated about this, understand the technology, and understand the ethical parameters of how it's being used. Another area where this comes in a lot is the economic impacts. There is really exponential growth of AI happening. And a lot of economists say that the job loss from AI will significantly outpace the jobs that are created by AI, and governments are not really prepared to deal with that. So as a society, we need to consider what kind of remedial measures may be appropriate to address this transition period. Right? At some point, people will be retrained, and there are more and more new jobs coming, but there is this concern about there being a period of time where job loss just outpaces job creation.

So there are things to consider, such as universal basic income or universal health care, so that people have some kind of coverage even if they're not, you know, full-time employees. Intellectual property is really another area of interest. Intellectual property comes in with AI in a couple of different ways. One is the frontier of AI-created works. This means works that are created by AI: for example, AI-generated photos and pictures, AI-generated music, and AI-generated code. And the question is, what kind of intellectual property can lie in these elements, who owns it, and how can it be protected? What we're seeing right now, at least, is that the courts are very clear, and the Copyright Office is very clear, that something that is fully generated by an AI is not protectable, for example, by copyright. The Copyright Office and US district courts have held that there has to be an element of human creativity for something to be protectable as intellectual property.

However, this is not, you know, a surprising outcome. The more interesting question here is what will be the dividing line between something being fully AI-generated versus fully human-generated. I think a lot of what is starting to be created, and will be increasingly created, is a mix of both. Right? There will be AI-generated content that will then be adjusted and revised and mixed with human-generated elements, and what kind of protection is given to these kinds of mixed elements? Where is the dividing line at which what the human has done to the AI output is sufficient to allow it to be protected by intellectual property? Courts have not quite addressed this yet. It's still very early stage. We're likely to see more cases on this.

However, we're starting to see some movement; the Copyright Office has put some rulings out on this. Again, those are very simple examples and not really complicated in terms of how the mix of human-generated and AI-generated images is handled. For example, there was one this past year with a graphic novel, where the photos were AI-generated, but the storyline was by the author. So the Copyright Office said, well, the actual storyline, the text, that's protectable, but the photos aren't, because those are AI-generated. That's a pretty clean situation. It'll be interesting to see what happens where there was significant work also done to the photos. Some work is not sufficient. We have already seen, for example, one case where there was a photo that won an art award and was AI-generated, and the author had applied for copyright protection, saying that they had run about 400 prompts and then done some Photoshop work and some cleanup to adjust the image.

And that was held by the Copyright Office not to be sufficient for copyright protection; just doing some prompts and then making slight adjustments, running it through some other technology, isn't sufficient to give something protection. So there has to be something more than that, but we don't know where that line quite is yet. And we will see that be developed more in the coming years. We also have the issue of AI being used for intellectual property administration, which is a great benefit. Intellectual property laws are very different among different jurisdictions, and that's a complication for businesses that are trying to operate globally. So having some kind of algorithm to help them stay on top of their intellectual property and make sure they're timely registering and renewing registrations is something that is very beneficial and creates a lot of efficiency for businesses in being able to effectively govern and control their intellectual property portfolio.
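The renewal-tracking side of IP administration described above can be reduced to a simple deadline check. As a minimal, hypothetical sketch (the field names and portfolio data here are invented for illustration, not taken from any real docketing system):

```python
from datetime import date, timedelta

def renewals_due(registrations, today, window_days=90):
    """Return registrations whose renewal falls within the next window_days,
    soonest first, so counsel can act before a deadline lapses."""
    cutoff = today + timedelta(days=window_days)
    return sorted(
        (r for r in registrations if today <= r["renewal_date"] <= cutoff),
        key=lambda r: r["renewal_date"],
    )

# Hypothetical two-jurisdiction portfolio for the same mark.
portfolio = [
    {"mark": "ACME", "jurisdiction": "US", "renewal_date": date(2024, 3, 1)},
    {"mark": "ACME", "jurisdiction": "EU", "renewal_date": date(2025, 1, 1)},
]

due = renewals_due(portfolio, today=date(2024, 1, 15))
```

With the sample data, only the US registration falls inside the 90-day window. A real docketing tool would layer jurisdiction-specific renewal rules on top of this kind of check.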

The other area with intellectual property is the issue of infringement, where output is basically created autonomously and that output turns out to be infringing. We are already seeing some cases in this realm, particularly since some of the different AI platforms have used data that they scraped from the internet to train their algorithms. And this has created an issue where basically they didn't get permission or rights from the owners of that underlying data, and have now just been using it without permission. That's led to, for example, Stability AI, the maker of Stable Diffusion, being sued by Getty for having taken their images to train their platform. It will be interesting to see how courts come out on the use of data that's been scraped for training. Some platforms are trying to make sure that their output does not result in infringing content, and that there are no claims against users around that, by making sure that they actually own the data they're training on or have properly licensed that data.

For example, Adobe, with their Generative Fill, has said they've only used data that they own or have properly licensed, so that users can feel more comfortable using that output without having to worry about individually being hit with an infringement suit.

Similarly, Microsoft, with Copilot, has offered an indemnity to users, basically saying, hey, if you're hit with an infringement suit for output, we will cover the cost of that. So that is to give users some comfort in being able to move forward with the technology. As we talked about, these technologies have been trained on large data sets belonging to others, which could result in infringing content and in claims that rights holders can assert against users. The additional area that comes up with AI is concerns about privacy and confidentiality, as more and more data is fed into AI algorithms and platforms. This is a real issue. AI algorithms are data heavy. And for them to be adaptive, they continue to need more and more new data.

Otherwise, they don't improve and get better. But ownership of data is important, and there are a lot of laws and regulations that govern the use of data, including, for example, the California Consumer Privacy Act, the GDPR in Europe, and HITECH and HIPAA in the medical space here in the US. Data that's input into platforms such as ChatGPT becomes part of the collective data set that's used to train, as does the output. So, particularly if you're using this for business reasons, you need to be very careful about what information you're inputting there. Inserting private or confidential data into the platform actually results in sharing that data. So now you've breached any confidentiality provisions that you may have agreed to, or privacy rules that you may be subject to, and businesses have gotten in trouble here because of their employees.

Right? And unintentionally. For example, last year, within a span of a couple of months, Samsung had three privacy breaches because their employees had used ChatGPT and inserted proprietary source code, as well as proprietary meeting minutes. And that's something that can now show up, for example, when a competitor is querying the platform. So it is very important for businesses to understand that regardless of whether or not they are affirmatively using the technology, their employees still may be using it, because it is something that is now widely available. So all businesses really should be creating AI policies that educate their employees about what are and are not allowable uses of the technology, what kind of information is appropriate to upload to a platform, and what kinds of interactions can be had with the platform versus what is not allowed.
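The kind of pre-submission screening that such an AI-use policy might mandate can be sketched in a few lines. This is a minimal, hypothetical illustration (the pattern names, masking scheme, and function name are invented for the example, and a real deployment would need far more robust detection than these regexes):

```python
import re

# Toy patterns standing in for whatever a policy defines as sensitive.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(?:sk|key)[-_][A-Za-z0-9]{16,}\b"),
}

def screen_prompt(text):
    """Mask sensitive matches before text leaves the organization;
    return the masked text plus the names of the rules that fired."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            hits.append(name)
            text = pattern.sub(f"[REDACTED {name.upper()}]", text)
    return text, hits

masked, hits = screen_prompt("Contact jane.doe@example.com, SSN 123-45-6789.")
```

Here both the email rule and the SSN rule fire, and the masked text is what would actually be sent to an external platform. The point is architectural rather than the specific regexes: the check runs before data crosses the organizational boundary.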

Additionally, some businesses are going the extra step of creating their own generative AI chatbots to use in their businesses, trained on their own data and accessible only by their employees, as a way to preserve the privacy and confidentiality of their business and proprietary information, but also to ensure they're not violating any laws when they may be dealing with sensitive information such as personal information or protected health information.

In general, it's also important to realize that there's been a decrease in individual privacy, and AI is contributing to that. People are generally unaware of how much data is being tracked, how easy it is to use AI to identify, track, and monitor individuals, and how it can even be used to de-anonymize personal data that's been anonymized, by combining data from different IoT devices and other personal devices that people may have and be using.

There are other interesting legal implications of this, for example, the issue of how it may be easier to circumvent legal procedure with the use of AI technology. With voice and face recognition, it's potentially a lot easier to unlock a device without consent than with, for example, fingerprints or codes, where someone would have to force you to put down your finger or put in a code, which is illegal. Right? So it might be easier to get around some of the legal protections. Another area which is very, very interesting, and on which states are already trying to consider or pass legislation, is profiling. Some businesses are already starting to try to use algorithms to profile people based on their data, and to limit people's access to key areas such as, you know, employment, housing, and social services.

For example, some insurance companies are already being sued for using algorithms to deny coverage of health claims, and we will see how that pans out; those claims are in their initial stages right now. They are based on preexisting laws that are not necessarily the cleanest fit, but there are specific laws around profiling that are being explored and will likely be passed in the near future. Data in general is a valuable resource, and everyone is, you know, continuing to monetize it. So privacy and confidentiality are just going to become increasingly important issues. Liability is the other issue that comes up with AI technology. There are questions about who is responsible for losses or damages caused by AI error. In a traditional situation, if something goes wrong, there is some kind of person or legal entity, which is also a person (for example, a company is considered a legal person), who is liable for the thing that has gone wrong or the damage that has been caused.

But in a situation where you have self-executing machine code, that line of causation is more ambiguous, and you have multiple parties potentially at fault. So I think the example of the autonomous vehicle is interesting. In that situation, if there is an error and it injures someone other than the passengers, is it, for example, the developer of the software who can potentially be liable for bad coding? Is the manufacturer of the vehicle the one who's gonna be liable? Is it the owner? Because traditionally, if there's a vehicle issue, it's the owner who's been liable, right, if they get into an accident. So I think initially, in the courts, everyone's gonna be dragged in, and the courts will have to establish where the liability really lies and what percentage of liability may lie with different parties.

What businesses can really do to protect themselves here is to put some forethought into how to allocate liability up front. This means taking steps on the front end to make sure agreements are well drafted and liability is specifically addressed. If you as a business are using algorithms belonging to other vendors, for example, make sure that you clearly delineate how you're using the technology, that you're using it appropriately, that you're not being negligent in how you're relying on these outputs, and that you are taking the additional step of actually doing due diligence to verify them. All the evidence you can establish that you are being reasonable, and that you've put strong agreements in place, are ways to protect yourself. Additionally, I think there are different ways to think about potentially controlling for this. Some governments have talked about fining or criminalizing tech executives, for example, for the spread of misinformation on social media.

I think some of that is not really realistic, because then you get a situation where some of these businesses become too aggressive in policing what is said, and that is also an outcome that we don't really want. This is my contact information. I know we had a short amount of time, so I was going through it really fast. So, yeah, if anyone wants to reach out to me, this is my information. You can reach out to me here or contact me on LinkedIn as well. Thank you.