Lensa devs go on the defensive as critics slam AI art

  • Lensa is the latest bot using AI to generate art that has come under fire.
  • While developer Prisma Labs has assured users their data is safe, it’s the datasets used that are causing concern.
  • While datasets that include data scraped from the internet aren’t illegal, the matter of ethics is at the centre of the debate.

Artificial intelligence making art is a novel concept to some and a danger to others.

As regards the latter, artists have been highly critical of applications that use AI to generate art with the latest target being Lensa.

Developed by Prisma Labs, Lensa started life in 2018 as an image editor but has recently climbed the ranks of app store charts for its Magic Avatars feature. Here, users are prompted to upload 10 selfies from various angles, which are then used to generate artistic impressions of the user.

Prisma Labs employs the Stable Diffusion deep learning model to generate its images.

“Lensa uses a copy of the Stable Diffusion model that, by default, generates a random person if one is mentioned in the prompt. To personalize the output images in each particular case, we need 10-20 pictures uploaded to re-train the copy of the model,” reads a tweet from a thread published by the developer.

“It takes at least 10 minutes for the machine to make approx. 120 million billion (yes, it’s not a typo) mathematical operations every time, meaning we have a separate model for each user, not a one-size-fits-all monstrous neural network trained to reproduce any face. As soon as the avatars are generated, the user’s photos and the associated model are erased permanently from our servers. And the process would start over again for the next request,” added Prisma Labs.
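Prisma Labs hasn’t published its training code, but the process it describes resembles the per-user fine-tuning that has become common around Stable Diffusion. The sketch below shows roughly what that could look like using Hugging Face’s diffusers library; the model ID, hyperparameters, placeholder prompt and selfie folder are our own assumptions for illustration, not details confirmed by the developer.

# Hypothetical sketch: fine-tune a copy of Stable Diffusion's UNet on one
# user's selfies so that prompts mentioning a person render that user
# rather than a random face. Model ID, prompt token and hyperparameters
# are illustrative assumptions, not Prisma Labs' actual pipeline.
import torch
import torch.nn.functional as F
from pathlib import Path
from PIL import Image
from torchvision import transforms
from diffusers import StableDiffusionPipeline, DDPMScheduler

device = "cuda" if torch.cuda.is_available() else "cpu"

# Each user gets a fresh copy of the base model.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5").to(device)
unet, vae, text_encoder, tokenizer = pipe.unet, pipe.vae, pipe.text_encoder, pipe.tokenizer
noise_scheduler = DDPMScheduler.from_config(pipe.scheduler.config)

# Load the 10-20 uploaded selfies and normalise them for the VAE.
preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.CenterCrop(512),
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])
selfies = torch.stack([
    preprocess(Image.open(p).convert("RGB"))
    for p in Path("user_selfies").glob("*.jpg")  # assumed upload folder
]).to(device)

# Caption every selfie with a placeholder identifier the avatar prompts can reference.
prompt_ids = tokenizer(
    ["a photo of sks person"] * len(selfies),
    padding="max_length", max_length=tokenizer.model_max_length,
    truncation=True, return_tensors="pt",
).input_ids.to(device)

optimizer = torch.optim.AdamW(unet.parameters(), lr=5e-6)
unet.train()

for step in range(400):  # a few hundred steps is typical for this kind of tuning
    with torch.no_grad():
        latents = vae.encode(selfies).latent_dist.sample() * 0.18215
        text_embeddings = text_encoder(prompt_ids)[0]

    # Standard denoising objective: add noise to the latents, train the UNet to predict it back.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                              (latents.shape[0],), device=device).long()
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeddings).sample
    loss = F.mse_loss(noise_pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Per Prisma Labs' description, the per-user weights and uploaded photos
# would be deleted once the avatars have been generated.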

However, the issue at hand is not necessarily user data being hoovered up and used for training Stable Diffusion; rather, it’s the dataset the model uses that has raised concerns.

In order to train an AI model, one needs to feed it a wealth of data, and that data needs to come from somewhere. In the case of Stable Diffusion, the model makes use of the LAION-Aesthetics dataset, which forms part of LAION-5B. This dataset boasts “5.85 billion pairs of image URLs and the corresponding metadata”, and it may not have obtained that data with permission.

As Ars Technica reported earlier this year, artists have had to develop tools in order to track whether their artwork is being used to train AI bots. While it may not be illegal, whether it’s ethical is the topic of conversation at the moment.
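LAION itself doesn’t distribute the images, only metadata: parquet files of image URLs and their captions. A rough sketch of how one might inspect a downloaded metadata shard with pandas, and crudely check whether a portfolio domain shows up, assuming a file name and column names along the lines of LAION’s public releases:

# Hypothetical sketch: poke at one shard of LAION metadata. The shard file
# name, the domain being searched for, and the column names ("URL", "TEXT")
# are assumptions for illustration.
import pandas as pd

shard = pd.read_parquet("laion2B-en-part-00000.parquet")
print(shard.columns.tolist())              # e.g. URL, TEXT, WIDTH, HEIGHT, ...
print(len(shard), "image-text pairs in this shard")

# A crude way an artist might check whether their site appears in this shard:
mine = shard[shard["URL"].str.contains("myportfolio.example", na=False)]
print(mine[["URL", "TEXT"]].head())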

Artists are understandably upset that their art is being used to train a model that may replace them, while developers such as Prisma Labs see this art as data they have every right to make use of.

What we do think is worthy of further investigation is how Prisma Labs profits off of this.

The Magic Avatars aren’t free; users need to fork out R78.99 for 50 unique avatars. Given the app’s rise in popularity, we wouldn’t be surprised if artists who know their work has been scraped into LAION-5B launch a class action lawsuit against the developer for damages.

[Image – CC 0 Pixabay]
