Contentions of The Times
The New York Times has filed a copyright infringement lawsuit against OpenAI and Microsoft. According to the Times, millions of its stories were used without permission to train automated chatbots, notably ChatGPT. The newspaper claims that these chatbots now compete with The Times as a source of information, potentially harming its business.
Demands Made by The Times
The complaint makes no specific monetary demand but asserts that the defendants should be held liable for “billions of dollars in statutory and actual damages.” The Times requests that any chatbot models and training data created with its copyrighted material be destroyed. The publication also urges OpenAI and Microsoft to agree to a resolution comprising a commercial arrangement and technological guardrails for generative AI products.
History
In April, the Times allegedly approached Microsoft and OpenAI to voice its concerns over the unauthorised use of its intellectual property. The newspaper sought an amicable resolution, which might have included a commercial arrangement and technological limitations. The discussions, however, did not produce a settlement, prompting The Times to file suit.
Reaction of OpenAI and Microsoft
The action surprised and disappointed OpenAI, which emphasised its commitment to protecting the rights of content providers. The company added that it had been making positive progress in discussions with The Times and intended to establish a mutually beneficial way to collaborate. Microsoft declined to comment on the matter.
Why AI Needs Tough Regulations
The complaint raises concerns about AI systems’ uncompensated use of intellectual property. The technology’s ability to mimic natural language and generate sophisticated content presents legal and ethical questions. The court dispute may test the evolving legal contours of generative AI technology, with repercussions for the journalism business.
Some in the technology sector worry that subjecting AI businesses to copyright liability would be detrimental to innovation and competitiveness.
The complaint also illustrates the potential harm to The Times’s reputation from so-called A.I. “hallucinations,” which occur when chatbots produce erroneous material that is then incorrectly attributed to a source. The developing landscape of AI, and its use of copyrighted content, highlights the need for explicit laws that address legal, financial, and journalistic concerns.
Conclusion
OpenAI is dealing with internal issues as well as numerous lawsuits alleging copyright infringement, including those filed by Julian Sancton, John Grisham, and Sarah Silverman. A group of software developers has also filed a lawsuit alleging unauthorised use of their code, and Getty Images has sued Stability AI.
CEO Sam Altman was briefly ousted and then reinstated, and OpenAI has offered to cover legal expenses for customers facing copyright complaints over its products.
This lawsuit has piqued the interest of specialists in the artificial intelligence and legal industries, and it underscores the need for strict legislation to govern AI. OpenAI and Microsoft contend that using copyrighted works to train AI systems is “fair use”, a legal doctrine governing the unlicensed use of copyrighted material.
According to the US Copyright Office’s website, “transformative” uses provide “something new, with a further purpose or character” and are “more likely to be considered fair.” Whether using copyrighted material to train AI models violates the law, however, remains an unsettled question.
This lawsuit has the potential to create unprecedented common law for regulating AI models in the future.
Author: Harmanpreet Kaur
Fourth-Year Student at the Army Institute of Law (AIL), Mohali & Legal Intern, Data Privacy and Digital Law, EU Digital Partners