Music has forever been moved by technology — from the invention of the phonograph, to Bob Dylan pivoting from acoustic to electric guitar, to the ubiquity of streaming platforms and, most recently, an ambitious attempt at crossing AI with commercial music.
FN Meka, introduced in 2021 as a “virtual” rapper whose lyrics and beats were constructed with “proprietary AI technology,” had a promising rise.
But just days after he signed on with Capitol Records — the label that carried The Beatles, Nat King Cole and The Beach Boys — and released his debut track “Florida Water,” the record company dropped him. His pink slip was a response in part to fans and activists widely criticizing his image — a digital avatar with face tattoos, green braids and a golden grill — and decrying his blend of stereotypes and slur-infused lyrics.
The AI artist, voiced by a real person and created by a company called Factory New, was not, technologically, a groundbreaking experiment. But it moved the needle on a discussion already underway within the industry: how AI will continue to shape the way we experience music.
The music/AI partnership: crucial to diverse listening experiences
In 1984, classical trombonist George Lewis used three Apple II computers to program Yamaha digital synthesizers to “improvise” along with a live quartet. The resulting record — a syrupy and spacey co-creation of computer and human musicians — was titled “Rainbow Family” and is considered by many to be the first instance of artificially intelligent music.
In the years since, advances in mixing boards popularized the practice of sampling and interpolation — igniting debates about remixing old songs to make new ones (art form or cheap trick?) — and Auto-Tune became a central tool in singers’ recorded and onstage performances.
FN Meka isn’t the only AI artist out there. Some have been introduced, and lasted, with less commercial backing. YONA, a “virtual singer-songwriter and AI poet” made by Ash Koosha, has performed live at music festivals around the globe, including MUTEK in Montreal, Rewire in the Netherlands and Barbican in the U.K.
In fact, the most crucial and successful partnerships between AI and music have been “under the hood,” said Patricia Alessandrini, a composer, sound artist and researcher at Stanford University’s Center for Computer Research in Music and Acoustics.
During the pandemic, the music world leaned heavily on digital tools to overcome challenges of sharing and playing music while remote, Alessandrini said. JackTrip Virtual Studio, for example, was an online platform used to teach university music lessons while students were remote. It minimized time delay, making audiovisual synchronicity much easier, and was born from machine learning sound research.
And for producers who deal with large music files and digital compression, AI can play a role in signal processing, Alessandrini said. This is important for sound engineers and musicians alike, saving time and helping them more smoothly create, or export, big records.
The intersection of technology and music also has beneficial applications for accessibility, she said. Instruments have been built with AI to require less strength or pressure to generate sound, for example — allowing people with injuries or disabilities to play, in some cases with eye movements alone.
Alessandrini’s own projects include the Piano Machine — which uses computers and voltages as “fingers” to create new sounds — and Harp Fingers, a technology that allows users to play a harp without physically touching it.
On a meta level, algorithms are the ubiquitous drivers of online streaming platforms — Spotify, Apple Music, SoundCloud, YouTube and others are constantly using machine learning, in less transparent ways, to personalize playlists, releases, lists of nearby concerts and music recommendations.
Up for discussion: the concept of an AI artist itself
Less agreed upon is the concept of an AI artist itself. Reactions have been split: some remain loyal to the humanity of art; some argue that if certain artists are indistinguishable from AI, then they deserve to be replaced; others welcome the novelty; and many fall somewhere in between.
“With any cultural form, part of what you’re dealing with are people’s expectations for ‘what things sound like or what an artist looks like,’” Oliver Wang, a music writer and sociology professor at California State University, Long Beach, told Grid.
Some experts argue that those questions leave out a critical point: Whatever the technology, there is always a human behind the work — and that should count.
“Sometimes people don’t know or see how much human work is behind artificial intelligence,” said Adriana Amaral, a professor at UNISINOS in Brazil and expert in pop culture, influencers and fan studies. “It’s a team of people — developers, programmers, designers, people from production and marketing.”
But this misunderstanding isn’t always the fault of the public, said Alessandrini. It often comes down to marketing. “It’s more exciting to say that something’s made entirely by AI,” Alessandrini said. This was how FN Meka was marketed and promoted online — as an AI artist. But while his lyrics, sound and beats were AI-generated, they then were performed by a human and animated, cartoon-style.
If it sounds strange that one would become a dedicated fan of a virtual persona, it shouldn’t, Amaral said. The world of competitive video gaming, which is nothing without its on-screen characters, is a multibillion-dollar industry that sells out arenas worldwide.
Still, music purists and audiophiles — and any person who appreciates music as an experience, rather than just entertainment — may very well resist AI musicians. In particular, Alessandrini said, AI is better at generating content quickly and copying genres, but unable to innovate new ones — a result of its models being trained, largely, on music that already exists.
“When a rap artist has these different influences and their own specific cultural experience, then that’s the kind of magical thing that they use to create,” Alessandrini said. “You can say that Bobby Shmurda is one of the first Brooklyn drill artists because of a particular song. So that’s a [distinctly] human capacity, compared to AI.”
Alessandrini likens this artistic experience to the advancements of AI in medicine — the applications of robotic technologies used during surgeries that are more efficient and mitigate the risk of human error. But, she said, there are some things that humans do better — caring for a patient, understanding their suffering.
It’s hard to imagine AI vocals ever reaching the emotional and beautifully human depths, say, of a Nina Simone or Ann Peebles; or channeling the authentic camaraderie and bounce of a group like OutKast.
What’s next for artist personas built using AI?
In 2017, the French government commissioned mathematician and politician Cédric Villani to lay ambitious groundwork for the country’s artificial intelligence (AI) future.
His strategy, which considered economics, ethics and education, foremost straddled the thinning line between creation and consumption.
“The division between the noncreative machine and the creative human is ever less clear-cut,” he wrote. Creativity, he went on to say, was no longer just an artist’s skill — it was a necessary tool for a world of co-inhabitance, machine and human together.
Is that what is happening?
One can’t talk about music on grand scales without also talking about money. Though FN Meka was a failure, AI has strong ties to the music sphere that won’t be broken because one AI rapper got cut from a label. And it feels inevitable that another big record company or music festival will give it a go.
Why? It might all come down to cost, say experts and music listeners who run the cynicism gamut.
Wang said he has a sneaking suspicion that record companies and executives see AI musicians as a way to save money on royalty payments and travel costs moving forward.
Beyond the money-hungry music industry, there is also room for a lot of good moving forward with AI, said Amaral. She hopes FN Meka’s image, and how he was received, was a wake-up call for whatever AI artist inevitably comes next. She also mentioned YONA, whom she saw in concert in Japan, as a thin, white, able-bodied pop star — not unlike many who dominate the music scene today.
“We have all the technological tools to make someone who could be green, or fat or any way we like, and we still are stuck on these patterns,” she said.
“What will the landscape look like five or 10 or 15 years from now?” Wang asked. “Pop music, despite people’s cynicism, rarely stays static. It’s constantly changing, and perhaps these computer-based attempts at ‘creating’ artists will be part of that change.”
Thanks to Dave Tepps for copy editing this article.