
Navigating the legal maze of AI 


In the rapidly changing landscape of generative artificial intelligence (AI), concerns are rising about the legal implications of the spontaneous ways we use it in the search for greater productivity. While generative AI shows a promising future, many are still scrambling to find its boundaries.

OpenAI and Microsoft are facing their second class action lawsuit for alleged copyright infringement while developing ChatGPT and other generative AI systems.

The US Copyright Office is studying the copyright law and policy issues raised by AI systems. The first question copyright experts are grappling with is how AI models should be allowed to use copyrighted data in training. Another concern it wants to iron out is whether AI-generated material can be copyrighted without human involvement.

Red flags around AI in the workplace

Speaking to Locate2u, technology lawyer Amir Kashdaran says there are various legal issues that AI companies are currently dealing with. Lawsuits have been filed in different jurisdictions based on copyright infringement claims, defamation claims, consumer protection laws, and privacy violations.

“Much like the early days of the Internet, it will take time for society and the legal system to grapple with these complexities. However, what remains certain is that generative AI technology is here to stay. It offers unparalleled opportunities for innovation and advancement in various fields,” says Kashdaran. 

Finding the right balance is important, says Kashdaran, between safeguarding individual rights and fostering innovation. “Striking this balance may involve the development of new laws and regulations specifically tailored to AI technologies. It may also require AI companies to adopt responsible practices prioritizing ethical and legal considerations.”

Bloomberg reported earlier this year that Samsung told its workers to “be careful” when using ChatGPT. The concerns are around “personal details or private company information” being fed into the chatbot. At the same time, Business Insider reports that Amazon placed limits on how employees can use ChatGPT after the chatbot reportedly spat out responses that “mirror the retail giant’s internal data.”

There are two groups: those embracing generative AI’s rapid infiltration into companies and those standing on the sidelines. “It will take some time for society and the legal system to sort out the potential adverse consequences that may arise. What is certain is that the generative AI technology is here to stay, and we must find an adequate balance between the protection of individual rights and innovation,” warns Kashdaran.

There is no reason to be nervous 

Microsoft now has a legal policy, the Copilot Copyright Commitment, to protect customers who are sued for copyright infringement over content generated by its AI systems. Microsoft will take responsibility for the potential legal risks involved if anyone sues a user on these grounds.

The technology company acknowledges that while AI tools open doors to new possibilities, they also raise further questions. “Some customers are concerned about the risk of IP infringement claims if they use the output produced by generative AI. This is understandable, given recent public inquiries by authors and artists regarding how their work is used in conjunction with AI models and services.”

In the most recent AI court case, two engineers sued OpenAI for using their personal information to train generative artificial intelligence tools. They claim OpenAI used their personal data from social media and appropriated their expertise, which they fear could ultimately erode their professional relevance.

Kashdaran believes the lawsuit has some merit; however, a plausible defense against it could be that the personal information was anonymized. For instance, they could argue that “the data was collected through a technological means, without human intervention, and eventually aggregated or anonymized in such a way that a person or individual could no longer be identified. However, this is just speculation for the time being. The courts will need to assess the merits of the case based on the applicable privacy laws and the evidence presented by the plaintiffs as to collection, handling, and use of personal information.” 

Nonetheless, the lawsuit appears to be quite complex, argues Kashdaran: “The plaintiffs argue that the defendant’s use of consumer data far exceeds industry standards and their own representations.”  As such, the courts will need to consider several key questions, such as: “What [are] the industry standard[s] applicable to training AI models, what are the boundaries to these industry standards, what representations [have been made], are these representations legally binding?”

The challenge is that data collected over the Internet “may come from many jurisdictions outside of California and subject to different laws,” says Kashdaran. So the question is whether all the allegations will have a legal leg to stand on under the personal jurisdiction of the California courts.

Clarity in the pool of AI confusion

As AI technology has gained momentum over the last year or so, finding clarity about the legal boundaries for developing and using AI technologies is paramount. However, many unsettled questions still need to be addressed. “Questions about the application of our current laws to AI technologies, such as in the areas of privacy, copyright, liability, discrimination, cybersecurity, and more. Unsettled law means more risk of good faith actions being legally challenged down the line,” says Kashdaran.


About the author

Mia Lindeque

Mia is a multi-award-winning journalist with more than 14 years of experience in mainstream media. She has covered many historic moments in Africa and internationally, with a strong focus on human interest stories that bring her readers and viewers closer to the topics at hand.
