
The AI horse has escaped the stable and we’re doomed to learn some tough lessons

  • Lewd images of Taylor Swift created using AI spread like wildfire online last week.
  • At the same time, lawyers were prepping a case against podcasters who created a fake performance of George Carlin.
  • Despite AI firms claiming myriad protections, AI is constantly being used to spread misinformation, create deepfakes and more, with no countermeasures or legal recourse for victims.

We now live in an era where every image, body of text, audio clip and video has the potential to be generated by artificial intelligence (AI) models. What was touted as a way to make work more efficient has quickly descended into a chaotic minefield.

Last week, popstar Taylor Swift grabbed headlines after deepfake images of the singer went viral on X, formerly Twitter. The fakes depicted the popstar nude and in lewd positions, among other things.

The images were reportedly made on a website where one can generate fake nudes using a celebrity’s likeness. However, it’s not just celebrities who can be faked.

As NBC News reports, high-school-aged girls in the US have reportedly been victimised by deepfakes and, per the outlet, no law governs the creation and spread of these images. While Swift may have an army of fans to come to her protection, a girl at a high school in the sticks of the US may not.

The fact that there are no laws around deepfakes, nor seemingly any effort to formulate them, is concerning. The Federal Bureau of Investigation (FBI) warned in June last year that cybercriminals were using deepfakes to extort people.

“The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content. The photos or videos are then publicly circulated on social media or pornographic websites, for the purpose of harassing victims or sextortion schemes,” the Bureau said in 2023.

The threat of AI being used to weaponise your likeness is as real for you as it is for Swift.

But AI fakery doesn’t stop there. Recently, comedian Will Sasso and author Chad Kultgen, hosts of a podcast called Dudesy, released a fake George Carlin sketch. Together, the pair trained an AI model to emulate Carlin using the hours of content and material the comedian produced in his lifetime.

Reactions to a “new”, hour-long “performance” generated using AI were mixed, ranging from mild irritation to outright rage.

The estate of the late comic has since filed a lawsuit against Sasso and Kultgen.

It gets worse though.

At the weekend it was reported that a deepfake audio clip mimicking US President Joe Biden’s voice was being circulated. The clip told New Hampshire residents not to vote in the state’s primary election. It was reportedly created using a tool from ElevenLabs, which has since suspended the user responsible.

But suspending a user after they have created a deepfake and spread it around, likely fooling many people in the process, is too little, too late. Research has shown that correcting misinformation is perilous in and of itself: one study found that simply repeating a claim in order to debunk it can reinforce the misinformation.

Anecdotally speaking, it’s already tough to convince people that what they read was fake; convincing them that what they heard and saw with their own eyes and ears is fake is an even harder ask.

And yet, despite all of these threats to our social fabric, AI companies continue to attract investment by the billions. The law is seemingly secondary to growing their bottom lines, and ethics be damned. Just this month, OpenAI admitted that without copyright infringement, ChatGPT would simply be an interesting experiment.

In the case of the Biden deepfake, ElevenLabs’ terms and conditions forbid the use of a politician’s voice outside of parody, satire or caricature. How that is policed is unclear, but given that a clip outright telling people not to vote was created and posted online, we’d argue that protections against that sort of use are weak to non-existent.

Unfortunately, AI is a threat to society. Not in the Terminator sense, but in that fake art, video, images and audio are slowly driving rifts between us all. Is the fact that a few comedians can emulate the voice of Carlin impressive? Yes, it is. Should it have been done without the comedian’s family’s permission? No, it should not, not only because it’s creepy, but also because it’s incredibly unethical.

However, ethics seem to be the last thing AI companies are thinking about. Lawmakers have floated the idea of regulating AI, but those processes move far too slowly to keep pace with AI development.

We’re not sure what the future holds for AI or society, but better protections need to be put in place, not only for celebrities and politicians but for us regular folk as well.

[Image – Markus Spiske on Unsplash]
