AI-Generated Songs That Sound Like Kanye And Drake Are Going Viral On TikTok

AI-generated songs are exploding on TikTok like a SpaceX rocket, and while no one seems quite sure of the legality of the trend, the ability to mash two contrasting elements together has, once again, proved to be a popular use of the technology.

Generative AI tools trained to mimic the voices of popular artists can’t quite replicate the nuances of a real singer; at best, they sound like a lethargic performance from a burnt-out star, leaning too heavily on autotune.

But that’s good enough for novelty, and right now, Ye, formerly known as Kanye West, has proved a popular voice for AI hobbyists.

TikTok users have gleefully inserted AI-generated Ye into, well, the whitest songs imaginable: whiny British ballads from Coldplay and Adele, "Summertime Sadness" by Lana Del Rey, "Hey There Delilah" by Plain White T's, a wide range of country music, and even songs written by Ye's old "frenemy," Taylor Swift.

On Instagram, one can stumble upon a clip of Ye, Drake, and Kendrick Lamar singing the closing song of the popular anime series Rascal Does Not Dream of Bunny Girl Senpai.

At the moment, Ye seems to be the most commonly used meme template for AI-generated songs, but other popular voices include Ariana Grande, Harry Styles, Rihanna, Drake, The Weeknd, and Britney Spears. Many of these AI-generated songs are not memes at all; they're experiments, pairing artists together in a non-existent duet, or having them sing K-pop.

Drake has spoken out against the trend in a since-deleted Instagram story in which he shared an AI version of himself covering Ice Spice’s song “Munch,” writing: “This is the final straw AI.”

“Heart on My Sleeve,” a song created by TikTok creator ghostwriter that combines the AI-generated voices of Drake and The Weeknd, went viral, accumulating millions of plays across Spotify, TikTok, and YouTube before being taken down. The song wasn’t a joke, and its success shone a spotlight on the blurry legality of the trend and the ethical concerns surrounding AI-generated voices.

Since then, Universal Music Group (UMG) has reportedly asked streaming platforms like Spotify and Apple Music to block AI developers from using UMG artists’ music to train their software.

“We have a moral and commercial responsibility to our artists to work to prevent the unauthorized use of their music and to stop platforms from ingesting content that violates the rights of artists and other creators,” a UMG spokesperson told the Financial Times.

“We expect our platform partners will want to prevent their services from being used in ways that harm artists.”

Ethically, the trend is already uncomfortable: dead artists who cannot consent are being “resurrected” by AI, while living artists must now contend with AI-generated copies of their voices being used as a tool, potentially to say something offensive.

Concerns about the data used to train generative AI models have dogged the technology from the outset, as many working artists are fiercely opposed to their work being absorbed by the machine, potentially mimicking their style and devaluing their labor. Meanwhile, image-creation tools like Midjourney threaten to warp our shared sense of reality by flooding the internet with fake images that, at first glance, look like real photographs.

All of these concerns are embodied in the rise of AI-generated voices and songs, and the floodgates are already open; as the success of “Heart on My Sleeve” proved, there’s more potential for this technology than mere memes.

Commenting under “Heart on My Sleeve” on YouTube, ghostwriter wrote, rather ominously: “This is just the beginning.”

Source: https://www.forbes.com/sites/danidiplacido/2023/04/24/ai-generated-songs-that-sound-like-kanye-and-drake-are-going-viral-on-tiktok/