AI Unveiled: Balancing the Benefits and Burdens with Legal Perspectives by Morvareed Salehpour
Morvareed Salehpour
Managing Attorney
The Evolving Landscape of AI: Opportunities, Challenges, and Ethical Considerations
Artificial Intelligence (AI) is rapidly transforming various aspects of our lives, from the workplace to transportation. However, with these advancements come significant challenges, particularly in the realms of social, ethical, economic, and legal considerations. In this article, we'll explore the multifaceted implications of AI adoption and the responsibility we hold toward shaping its future.
Opportunities and Challenges in AI Adoption
The integration of AI into our daily lives presents several opportunities:
- Improved productivity in the workplace
- Enhanced efficiency in transportation systems
- New avenues for creative expression and innovation
However, these advancements also bring forth challenges that must be addressed:
- Social and ethical implications, including bias and discrimination
- Economic impacts related to job displacement
- Legal issues surrounding intellectual property and data privacy
Social and Ethical Considerations
One key area of concern is the **bias** that can be ingrained in AI systems. For instance, facial recognition technology has been shown to have significantly higher error rates when identifying women and people of color. This bias originates from various sources, including:
- The data on which the AI is trained
- The developers' biases in creating the algorithms
- User biases in interpreting the results
Moreover, the rise of misinformation facilitated by AI bots raises alarm, especially in sensitive fields like finance, healthcare, and legal advice. Decision-makers, including judges, may inadvertently perpetuate existing inequalities by relying on biased AI recommendations, particularly in areas such as sentencing guidelines.
Economic Impacts of AI
As AI technology grows exponentially, many economists predict that job losses will outpace job creation during the transition period. This shift calls for urgent discussions at the societal and governmental levels regarding solutions like universal basic income and healthcare, ensuring support for those who may find themselves unemployed.
Legal and Intellectual Property Challenges
The complexity of AI-related intellectual property (IP) issues requires careful navigation. Traditional IP laws must adapt to accommodate AI-generated content, which includes music, images, and code. Courts and the Copyright Office have established that human involvement is necessary for IP protection, leading to confusion about the boundaries of creativity when combining human and AI contributions.
For example, a photo generated through extensive AI prompts and minimal human touch was ruled not eligible for copyright protection due to insufficient human involvement. As AI continues to produce a mashup of human and AI-created content, the legal landscape will need to clarify these distinctions.
Additionally, concerns about liability arise as AI technologies become self-executing. In cases involving automated vehicles, for example, multiple parties may share fault, complicating the legal landscape. Businesses must adopt strong agreements and policies to mitigate the risks associated with AI.
Privacy and Confidentiality Issues
The integration of AI also raises significant **privacy and confidentiality** concerns. With laws varying widely by jurisdiction, such as the CCPA and GDPR, businesses must remain vigilant in protecting sensitive data. Inputting confidential information into AI platforms can lead to unintended disclosure, and with it breaches of confidentiality agreements and data protection obligations.
Moreover, the potential for AI to re-identify individuals from de-identified data adds another layer of privacy risk. This risk has prompted calls for additional regulations to protect individuals from profiling in sensitive areas like employment and housing.
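To make the re-identification risk concrete, here is a minimal sketch in Python, using entirely hypothetical data, of how a few innocuous attributes left in a "de-identified" dataset can still single out individuals. Linking such unique combinations to an outside data source is exactly how re-identification works in practice:

```python
# Minimal sketch with hypothetical data: even after names are stripped,
# combinations of "quasi-identifiers" can uniquely identify individuals.
from collections import Counter

# A "de-identified" dataset: no names, but ZIP code, birth year,
# and gender remain.
records = [
    {"zip": "90210", "birth_year": 1985, "gender": "F"},
    {"zip": "90210", "birth_year": 1985, "gender": "F"},  # shares a profile
    {"zip": "90211", "birth_year": 1990, "gender": "F"},  # unique profile
    {"zip": "90210", "birth_year": 1985, "gender": "M"},  # unique profile
    {"zip": "90212", "birth_year": 1978, "gender": "M"},  # unique profile
]

# Count how many records share each quasi-identifier combination.
profiles = Counter((r["zip"], r["birth_year"], r["gender"]) for r in records)

# Any combination that appears exactly once points to a single person;
# joining it against an outside dataset (voter rolls, social media
# profiles) can then restore the identity.
unique = [p for p, count in profiles.items() if count == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

AI systems amplify this risk because they can perform that kind of linkage across many datasets at scale.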
The Path Forward: Strategies for Responsible AI Use
Given these challenges, businesses must take proactive steps to ensure responsible AI use:
- Implement comprehensive AI policies and guidelines for employees
- Provide training on the ethical use of AI technologies
- Establish robust agreements with vendors to address liability issues
Companies like Samsung have even opted to create internal AI systems to prevent data leaks, highlighting the importance of safeguarding proprietary information.
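Short of building an internal system, a lightweight technical guardrail can back up written policies. The sketch below is a hypothetical illustration in Python (the patterns and function names are illustrative only, not a substitute for a real data loss prevention tool or legal review) of screening prompts for obviously sensitive content before they reach an external AI platform:

```python
# Hypothetical guardrail: screen text for sensitive patterns before it
# is sent to an external AI platform. Patterns are illustrative only.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidentiality_marker": re.compile(
        r"(?i)\b(confidential|trade secret|internal only)\b"
    ),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

prompt = "Summarize this CONFIDENTIAL memo for jane.doe@example.com"
findings = screen_prompt(prompt)
if findings:
    # Block the request, or route it for human review, rather than
    # forwarding it to the vendor's API.
    print("Blocked: prompt contains " + ", ".join(findings))
```

Rules like these are easy to circumvent, which is why they work best alongside training and clear policies rather than in place of them.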
The Future of AI: Navigating Ethical and Legal Considerations
As AI continues to expand across industries, the ethical challenges related to bias, accountability, and privacy will remain at the forefront. Transparency and human oversight will be crucial in ensuring that AI technologies are used responsibly and ethically. Collaboration between developers, businesses, and legislators will be essential to navigate this complex landscape effectively.
If you'd like to discuss how to navigate the evolving AI landscape and implement effective strategies in your organization, feel free to reach out to me.
By addressing these critical issues now, we can harness the incredible potential of AI while safeguarding our values and principles.
Video Transcription
So there's, you know, different types of AI. We talked about the opportunities. So we're gonna see changes in how we interact with the workplace and how the transportation industry works. There are lots of different kinda issues that come from adoption of this technology, but there are different challenges that come from it as well. We have social and ethical considerations, economic impacts, and then, particularly on the legal side, issues around intellectual property, privacy and confidentiality, and liability. Social and ethical considerations come up in a number of different ways. For example, facial recognition software, as it currently exists, has high error rates with respect to identification of women and people of color. So that creates issues.
There are biases built into the AI from a couple of different perspectives: from the data itself, from developer bias, as well as from user bias. So the use of the technology can exacerbate some issues around bias. We have misinformation that's being spread by bots. We have people who are relying on chatbots even for sensitive areas, where they often hallucinate and say things that are incorrect, but in a very confident manner, particularly in areas like finance, medical, and legal advice, where it's not a great idea to rely on these things.
We have key decision makers, such as judges, who, if they rely on this and make incorrect decisions, can create serious and life-changing consequences for many people and can perpetuate existing inequalities. For example, there is AI that's been used for some years to make recommendations around sentencing guidelines. If you have an algorithm that's trained on data in the criminal justice system that's already biased, you're helping effectuate that bias further when you're making decisions based on existing biased data. Right? We also have economic impacts from all of this. AI is really growing exponentially, and many economists talk about how job loss from AI will significantly outpace jobs created during that transition period. So we're having to face complications at a societal and governmental level about how we address these issues.
You know, there has to be a thought towards universal basic income and universal health care, because if people are gonna be unemployed, they no longer get health insurance from their employers. So there need to be some adjustments that we as a society have to think about in how we interact with all of this. Intellectual property issues come up in several different ways with AI. One, there is intellectual property that comes in in a traditional way for the platforms and services that are creating this technology. They have to make sure that they are appropriately protecting the proprietary parts of their technology. So it's important for these companies to have well-drafted agreements in place with their customers, their vendors, and their strategic partners to ensure that the intellectual property they have created is protected in their various transactions.
We also have a new frontier of AI-created inventions, basically. So for example, AI-generated music, AI-generated code, images, all of that. And that is a kinda complex new world. Some people are trying to get protection for this AI-generated content, but the courts and the Copyright Office have reiterated that there has to be an element of human design and creation for something to actually be protectable intellectual property. They haven't quite drawn where the line is on that yet. They have basically said that if there's no human involvement, of course there's no protection, and if there's only some minimal level of involvement, there's no protection for that as well. For example, there was an image that won a contest just a couple of years ago where someone had sent about three or four hundred prompts into one of these chatbots to create the image, and then done some Photoshop and a little bit of additional work on top of that.
And that was held not to be sufficient to make it a design element or creative enough to get protection. Additionally, someone else had at one point tried to get protection for a graphic novel where they created the text, but all the images were AI-generated. The Copyright Office said the whole thing is not protectable; only the text is protectable, because that's the element that had human creation. So I think this is gonna be really complex as we move forward, because more and more content is going to be a mix of AI- and human-generated, not purely AI or purely human. So the courts in the coming years are gonna have to draw that line about what degree of change is needed on top of an AI output to make it something that is potentially protectable. That is something we're gonna be seeing more and more cases about, and that's how decisions and law get established in the US in particular.
We also see AI being used for IP administration, or this is a potential use of it. Intellectual property laws are jurisdictional; they're different across different countries in the world. So AI can be used to create more efficiency for businesses to kinda stay on top of their intellectual property portfolios. We also have this other additional issue of AI infringement. A lot of these platforms have been trained on large datasets belonging to others; they've scraped the Internet, particularly for these generative AI offerings. And so the output could result in potentially infringing content if it's somehow modifying or reproducing someone else's work, someone else's intellectual property, without permission. And that's why there are many lawsuits pending right now against many of these platforms for infringement or failure to attribute properly.
We will see how courts kinda come down on this. There are some early decisions that have come down. For example, one of the cases alleged that there wasn't proper attribution to the parties who had created all the data. The court said there wasn't really harm from the attribution perspective, because the data wasn't itself being published, but it didn't address the infringement issue. And there are more claims that are still out there on the infringement side. For example, Getty has sued Stability AI, the maker of Stable Diffusion, and there was another case recently in which the decision found the potential for infringement by one of these companies, which had scraped information from a particular platform, notes that the platform had particularly made.
So that was considered proprietary information that was scraped, and so it was infringing. We're gonna see more and more developments around this. One of the interesting things right now is that some of these platforms are offering protections to users to make them feel more comfortable about using output, because if you, as a user, are using output that is infringing, you're potentially taking on that liability as well.
And a lot of these platforms, if you look at their terms of service, say they're just giving you whatever rights they may have. So if they have no rights in the output, you're not actually getting any rights to the output. This is popping up often in the business use case. So for example, Adobe, when it came out with its generative fill feature, said that it only used its own content and properly licensed content to reduce the risk of infringement claims. Microsoft has offered limited indemnity to some of its users for this. So some of these businesses are taking proactive steps to help business users feel more comfortable using the technology while this open issue of liability around infringement continues.
We also have privacy and confidentiality issues that arise with the use of this technology. There are lots of different data privacy laws that come into effect: there are state laws like the CCPA, there are regional laws like the GDPR, and depending on what data is at issue, there are additional laws like HIPAA and HITECH if medical information is involved. Part of the concern is that when data is input into a platform, it often becomes part of the training set, and it can show up in some manner in resulting output to other users. So businesses have to be very careful about inserting private or confidential data into these platforms, because that results in sharing of the data. That's basically a breach of confidentiality, or a data breach, if they have confidentiality obligations and data protection obligations under applicable laws.
Additionally, if they have proprietary information that one of their personnel puts into this, that's now a breach of their proprietary information, so it's no longer secret. That's not only bad from a competitive point of view, as a competitor can potentially find that information out through output that they may trigger with a chatbot, but additionally, since you've waived the secrecy, it's no longer a trade secret, so it's no longer protectable as a trade secret. These are real issues for businesses. So a lot of businesses really need to be thinking about how they're using AI, or how their personnel are using AI, and putting together AI policies. Something I do for clients is help them figure that out, put together AI policies, and train their personnel so that there really is a thoughtful approach to the use of AI in the business, and there are no unintentional privacy and confidentiality breaches or sharing of proprietary information.
Even big companies have had issues. For example, last year Samsung had a situation where, within a month, three employees ended up sharing proprietary and confidential information. So they ended up barring their employees from being able to use these platforms and built their own version instead. Businesses are having to approach it in different ways. Larger businesses can afford to build their own internal version, but as a smaller business, it's important to have, again, the AI policies in place and to train personnel. Another kinda issue that comes up with privacy and confidentiality is that, in general, AI is being used to gather a lot of information about people, and it can often even be used to re-identify individuals from de-identified data.
One area that's really coming more to the forefront: it's been going on for a while, but AI was happening more at the enterprise level, and consumers weren't really familiar with the technology or didn't understand that it was being used to make decisions about them. That's something that they now understand. So there's been more of a push to address issues around profiling of individuals in particularly sensitive areas such as employment, housing, insurance, finances, and social services. We're starting to see various legislative and regulatory action around all this. At the federal level, I think it's really hard to get action because of where our politics are at, so we haven't really seen much being done at that scale, even just from a data privacy perspective.
The prior administration did have some push around transparency and accountability in the use of AI, but it doesn't look like that's going to move forward in the current framework. So states are taking their own action to institute protections for their residents. For example, last spring Colorado adopted the first kind of comprehensive AI legislation, which is going to take effect early next year. It's really focused on this profiling and algorithmic decision making in these sensitive areas, and it puts obligations both on people who are developing AI technology and on businesses that are not techie but are just using the technology and deploying it.
One of the requirements, for example, is that if a business is making decisions about individuals in these sensitive areas, it needs to disclose that it's using the technology and needs to give those individuals an opportunity to challenge those decisions. So it kinda goes back to that issue of transparency and accountability, and human review is really key for a lot of these considerations around sensitive uses of AI to make decisions about people and deny or grant them access to certain things in these sensitive areas.
We also had the EEOC issue guidance emphasizing that employers must ensure these tools are being used appropriately and are not resulting in discriminatory practices. So there's been more recognition that there is bias that can get perpetuated by using these tools, particularly for sensitive decision making. We additionally have issues around liability. Traditionally, when an issue or dispute arises, there's some kind of human operator or some legal entity that would be liable. But if it's a self-executing machine or code, it's a little bit harder to establish that line of fault. So I think in initial cases, everyone who can potentially be involved is gonna be drawn into court: the developers, the manufacturers, the owners of the smart devices.
So for example, if it's an automated AI vehicle, it could be the company who manufactured the vehicle; if there's a separate developer who wrote the algorithm, the company who did that; and then, if the vehicle's owned by someone, the vehicle owner. And then I think multiple considerations come into play with all of this, particularly in that situation. For example, a company may, because of profit and revenue considerations, have a different goal for its algorithm. Right? Instead of wanting it to be fair to passengers and pedestrians, the goal of the algorithm may be to keep the passengers most safe, because that's how you're gonna get more profit: people are gonna wanna buy vehicles where the passengers are the safest.
Right? They're not gonna wanna buy vehicles where they might not be the safest. So I think there are different kinds of considerations that come into effect unless we have frameworks, on a regulatory basis, about what is appropriate or what can be prioritized or things like that. So I think we're gonna be challenged in the coming years around all of this. I think it's also important for businesses to think about how to address this kinda on the front end, and it's important to have strong agreements in place with all their suppliers and vendors to make sure that they are addressing these liability issues upfront, so that if an issue arises, they have strong positions in court.
And, again, they need to kinda be able to establish that they weren't negligent in relying on these algorithms or tools for decisions if something was wrong. So, for example, in a health care use case, you would want the doctors to be reviewing any recommendation from an algorithm and documenting that they reviewed it and analyzed it and that it really was a good decision, versus just automatically effectuating something without review. Again, it goes to that human review and transparency and accountability around the technology. Where is this all headed? I think AI is gonna continue to expand in all sectors. We're gonna have greater integration of AI and human collaboration, and we're going to see general AI at some point too. But these ethical challenges around bias and accountability and privacy will continue to be the focus for the coming years. This is my contact information.
Feel free to reach out.