ChatGPT without copyright infringement would just be an interesting experiment

  • OpenAI has offered up some interesting comments in response to the House of Lords Communications and Digital Select Committee’s inquiry into large language models in the UK.
  • Among the comments, OpenAI states that without access to copyrighted material for training, AI would simply be an interesting experiment.
  • How lawmakers in the UK will respond to these comments will prove interesting for the future of AI development.

Billionaires and those in the technology sector are often described as being out of touch with what the average person wants or needs in their day-to-day life. Nothing has highlighted this more than the push to popularise large language models incorporated into chatbots.

Throughout 2023 we saw how artificial intelligence became not only the latest tech craze, but a danger to jobs across a multitude of sectors. Writers and actors in Hollywood protested the proposed use of AI to replace them and won, but many more jobs are now being culled as decision makers try to keep profits up.

As is tradition, lawmakers are now examining the popularity and burgeoning purse strings of firms such as OpenAI long after the technology has established a foothold in the market. Case in point, the House of Lords Communications and Digital Select Committee’s inquiry into large language models in the UK.

The inquiry was launched in July 2023 and aims to bring a deeper level of understanding to how large language models can be used and abused. This specific inquiry wants to determine what must happen in the next one to three years for the UK to respond effectively to this emerging technology.

This week, OpenAI submitted written evidence and there are some interesting statements to be found in that submission.

For starters, when addressing the question of whether regulation would stifle innovation, OpenAI states that regulation should incentivise AI safety.

“We believe it is essential to develop regulations that incentivize AI safety while ensuring that people are able to access the technology’s many benefits. Given the emergence of increasingly powerful AI systems, the stakes for global cooperation have never been higher. While we know different countries will make different choices on some aspects of regulating AI, we think it’s important that these efforts are as coordinated as possible so that we can fully realize the benefits of AI,” OpenAI wrote.

The aspect of co-ordination is an interesting thought but in reality, it doesn’t seem achievable given that countries have yet to agree on how to regulate cryptocurrency, years after that technology faded in popularity.

There’s also the matter of how AI impacts different economies. For example, Duolingo just laid off a large number of its contractors, replacing them with AI solutions. That sort of move would be disastrous in a country with high unemployment such as South Africa.

Think of the people, won’t you!

The most interesting revelation from OpenAI’s submission, however, concerns copyrighted content. Strap in because this was a wild ride.

“We respect the rights of content creators and owners, and look forward to continuing to work with them to expand their creative opportunities,” writes OpenAI.

“Creative professionals around the world use ChatGPT as a part of their creative process, and we have actively sought their feedback on our tools from day one. By democratizing the capacity to create, AI tools will expand the quantity, diversity, and quality of creative works, in both the commercial and noncommercial spheres. This will invigorate all creators, including those employed by the existing copyright industries, as these tools increase worker productivity, lower the costs of production, and stimulate creativity by making it easier to brainstorm, prototype, iterate, and share ideas,” the firm adds.

OpenAI argues that in order for its product to be effective, it must have access to copyrighted materials.

“Limiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today’s citizens,” writes the AI firm.

Call us crazy, but if a company’s product relies on improperly using protected content, then that product is built on theft. It would be similar to Hypertext scraping a competitor’s website for content, posting it and then saying, “Well, without this content we wouldn’t exist, so you have to let us steal it.” It’s a truly bizarre statement for a billion-dollar company to make.

It gets crazier, though, because in the space of two sentences, OpenAI argues that it complies with copyright law, but then says the law is vague in this regard and that it believes training doesn’t infringe on copyright. This thought is currently being tested by the New York Times, which is suing OpenAI and Microsoft for using its content to train their large language models.

OpenAI is effectively saying that because it believes it’s not infringing on copyright, it isn’t infringing on copyright. That’s an interesting defence and of course, OpenAI and its legal team can make that defence until courts decide otherwise.

There is also the matter of AI platforms being used by creative types. In most instances, use of AI for content creation is met with criticism and venom. Just look at what happened to Wizards of the Coast recently when a contracted artist used AI to “improve” an image. Even the use of AI for “stock images” in videos is met with criticism and calls for creators to pay artists rather than using a bot that creates derivatives of other artists’ work.

We’re also curious about “AI systems that meet the needs of today’s citizens”. This reads like a statement plucked right out of an investor presentation and OpenAI never actually outlines how its models benefit the ordinary person. Sure, corporations can use these models to improve business processes but even in those instances, you need a human to check that what ChatGPT is saying, doing or suggesting is accurate. In an age where folks get duped by a misleading headline, should a tool that gets so much wrong with such conviction be trusted?

OpenAI goes on to say that artists and creators can opt out of having their content used for training. The trouble is that one has to add directives to the robots.txt file of a website or go through an “enraging” process – as described by Business Insider – to have art removed from DALL-E’s training corpus.
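For those curious, the website-side opt-out relies on the standard robots.txt convention. Per OpenAI’s published crawler guidance, its web crawler identifies itself as GPTBot, and a site can refuse it with two lines in the robots.txt file at the site’s root:

```text
# Tell OpenAI's GPTBot crawler not to access any part of this site
User-agent: GPTBot
Disallow: /
```

Of course, this only asks well-behaved crawlers to stay away going forward; it does nothing about content that has already been scraped into a training corpus.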

We’re interested to see how the committee responds to this statement made by OpenAI and if said committee fully grasps what AI means for our society. That in itself feels like a fool’s errand considering that lawmakers are often older individuals who don’t fully grasp how technology impacts people.

You can read OpenAI’s full response to the House of Lords Communications and Digital Select Committee below.
