Microsoft creates AI tool it knows will be abused

  • Microsoft Research Asia has shown off VASA-1, an AI tool that can turn still images into convincing videos.
  • The tech can produce synchronized lip movements, blinking, and an array of facial nuances and head motions.
  • Microsoft acknowledges the tool’s potential for abuse and won’t release it to the public just yet.

The fact that Microsoft is concerned about its new artificial intelligence tool being abused, and yet still went through with showing it off to the world, should highlight where we are in the AI space at present: companies will create even the most unnerving and dangerous tools so long as there is potential for profit somewhere along the line.

The tool in question is called VASA-1 and it “is capable of not only producing lip movements that are exquisitely synchronized with the audio, but also capturing a large spectrum of facial nuances and natural head motions that contribute to the perception of authenticity and liveliness.”

It’s best to see the tech in action to get a real sense of just how dangerous it could be.

There are some peculiarities in the video above. For instance, the blinking can seem a bit off, and while we aren’t dentists, we’re pretty sure teeth aren’t supposed to move that way. The images used by Microsoft aren’t of real people, either; they are portraits generated by AI.

The team behind this tool at Microsoft Research Asia says the tech can be applied to any still image. Generated videos are currently limited to a 512×512 resolution with a maximum frame rate of 40fps. The output can also be adjusted to suit your needs; the gaze direction, for example, can be changed to make the person appear to be looking anywhere.

It’s all rather interesting and perhaps a bit too convincing.

Deepfakes of people are already being used for illicit activities, including sextortion. Young women and girls in high school are often the targets of extortionists, and VASA-1 looks set to aggravate that problem further.

What’s worse is that Microsoft Research Asia is aware of the potential for abuse, and yet forged ahead with VASA-1’s creation anyway.

“Our research focuses on generating visual affective skills for virtual AI avatars, aiming for positive applications. It is not intended to create content that is used to mislead or deceive. However, like other related content generation techniques, it could still potentially be misused for impersonating humans. We are opposed to any behavior to create misleading or harmful contents of real persons, and are interested in applying our technique for advancing forgery detection,” the researchers wrote.

It’s one thing to create something that is potentially dangerous, but it’s another entirely to forge ahead with it under the guise of it maybe helping others someday. On that note, Microsoft says VASA-1 could be used to enhance educational equity, improve communication for those with communication challenges, and offer companionship or therapeutic support. The firm doesn’t go into more detail here but adds that, “We are dedicated to developing AI responsibly, with the goal of advancing human well-being.”

The good news is that Microsoft isn’t releasing VASA-1 to the public, at least not yet. The firm says it will only release the tech once it’s comfortable that it will be used responsibly and “in accordance with proper regulations”.

Those regulations are in the making, but it will still be a while before governments catch up with the advancements in AI.

Here’s hoping Microsoft keeps VASA-1 off the open web until the necessary protections are in place. However, even as we type that, we know this won’t be the case and that, eventually, this tool will find its way into the hands of the worst people.

But hey, we can now make the Mona Lisa rap, and that’s something, we guess.
