
Adobe shows off generative AI by using music from well-known opera

  • Adobe Research has revealed a work-in-progress generative AI that can create music.
  • Users will reportedly be able to fine-tune the tempo, intensity and more in tracks they create with Project Music GenAI Control.
  • There’s no word on when or even if Project Music GenAI Control will eventually become part of Adobe’s suite of products.

So far, the musical ability of artificial intelligence has largely been limited to mimicking existing musicians, including Drake and The Weeknd. But now Adobe Research reckons it has a potential original hitmaker on its hands.

This week Adobe Research announced that it was working on Project Music GenAI Control. Described as “an early-stage generative AI music generation and editing tool”, it promises to help folks with little or no musical ability to turn text prompts into music.

“With Project Music GenAI Control, generative AI becomes your co-creator. It helps people craft music for their projects, whether they’re broadcasters, or podcasters, or anyone else who needs audio that’s just the right mood, tone, and length,” Nicholas Bryan, senior research scientist at Adobe Research, said in a blog post published this week.

The research arm of Adobe also showcased this AI in a video.

If the tune used in the video sounds familiar, it did to us as well. It appears to be derived from the “Habanera” aria from Georges Bizet’s opera Carmen, which is old enough to be in the public domain.

Things get murky later on in the blog post, though, as Adobe Research states that “users could transform their generated audio based on a reference melody”. This opens the door for users to create copyright headaches not only for themselves, but for Adobe as well. The firm mentions that Firefly – Adobe’s generative AI for images – uses Content Credentials, which allow you to see how an image was created. It’s not clear whether Project Music GenAI Control, or whatever its more Adobe-fied name ends up being, will take this approach as well.

Whichever way one arrives at a piece of AI-generated music, Adobe says that the platform’s user interface will allow users to adjust tempo and intensity, create repeatable loops, and more.

“One of the exciting things about these new tools is that they aren’t just about generating audio—they’re taking it to the level of Photoshop by giving creatives the same kind of deep control to shape, tweak, and edit their audio. It’s a kind of pixel-level control for music,” says Bryan.

Adobe Research hasn’t indicated when this piece of software will be made available, or even if it intends to make it part of its suite of products at all.

We’ll speak plainly: Adobe has either a lot of courage or a lot of hubris to be considering a jump into AI-generated music.

Music copyright has historically been a minefield. In 2022, Dua Lipa was sued over similarities between her song Levitating and Artikal Sound System’s Live Your Life. That case was ultimately dismissed, as Artikal Sound System reportedly couldn’t prove that Levitating’s writers had ever encountered the reggae band’s song before writing Dua Lipa’s hit single.

Whether Adobe and its users will be able to avoid copyright infringement, and any ensuing lawsuits, with Project Music GenAI Control remains to be seen. We can’t wait to hear an AI interpretation of Wonderwall.
