AIhub.org
 

AI can now generate entire songs on demand. What does this mean for music as we know it?

30 May 2024




By Oliver Bown, UNSW Sydney

In March, we saw the launch of a “ChatGPT for music” called Suno, which uses generative AI to produce realistic songs on demand from short text prompts. A few weeks later, a similar competitor – Udio – arrived on the scene.

I’ve been working with various creative computational tools for the past 15 years, both as a researcher and a producer, and the recent pace of change has floored me. As I’ve argued elsewhere, the view that AI systems will never make “real” music like humans do should be understood more as a claim about social context than technical capability.

The argument “sure, it can make expressive, complex-structured, natural-sounding, virtuosic, original music which can stir human emotions, but AI can’t make proper music” can easily begin to sound like something from a Monty Python sketch.

After playing with Suno and Udio, I’ve been thinking about what it is exactly they change – and what they might mean not only for the way professionals and amateur artists create music, but the way all of us consume it.

Expressing emotion without feeling it

Generating audio from text prompts in itself is nothing new. However, Suno and Udio have made an obvious development: from a simple text prompt, they generate song lyrics (using a ChatGPT-like text generator), feed them into a generative voice model, and integrate the “vocals” with generated music to produce a coherent song segment.

This integration is a small but remarkable feat. The systems are very good at making up coherent songs that sound expressively “sung” (there I go anthropomorphising).

The effect can be uncanny. I know it’s AI, but the voice can still cut through with emotional impact. When the music performs a perfectly executed end-of-bar pirouette into a new section, my brain gets some of those little sparks of pattern-processing joy that I might get listening to a great band.

To me this highlights something sometimes missed about musical expression: AI doesn’t need to experience emotions and life events to successfully express them in music that resonates with people.

Music as an everyday language

Like other generative AI products, Suno and Udio were trained on vast amounts of existing work by real humans – and there is much debate about those humans’ intellectual property rights.

Nevertheless, these tools may mark the dawn of mainstream AI music culture. They offer new forms of musical engagement that people will just want to use, to explore, to play with and actually listen to for their own enjoyment.

AI capable of “end to end” music creation is arguably not technology for makers of music, but for consumers of music. For now it remains unclear whether users of Udio and Suno are creators or consumers – or whether the distinction is even useful.

A long-observed phenomenon in creative technologies is that as something becomes easier and cheaper to produce, it is used for more casual expression. As a result, the medium goes from an exclusive high art form to more of an everyday language – think what smartphones have done to photography.

So imagine you could send your father a professionally produced song all about him for his birthday, with minimal cost and effort, in a style of his preference – a modern-day birthday card. Researchers have long considered this eventuality, and now we can do it. Happy birthday, dad!

Mr Bown’s Blues.
Generated by Oliver Bown using Udio.

Can you create without control?

Whatever these systems have achieved and may achieve in the near future, they face a glaring limitation: the lack of control.

Text prompts are often not much good as precise instructions, especially in music. So these tools are fit for blind search – a kind of wandering through the space of possibilities – but not for accurate control. (That’s not to diminish their value. Blind search can be a powerful creative force.)

Viewing these tools as a practising music producer, things look very different. Although Udio’s about page says “anyone with a tune, some lyrics, or a funny idea can now express themselves in music”, I don’t feel I have enough control to express myself with these tools.

I can see them being useful to seed raw materials for manipulation, much like samples and field recordings. But when I’m seeking to express myself, I need control.

Using Suno, I had some fun finding the most gnarly dark techno grooves I could get out of it. The result was something I would absolutely use in a track.

Cheese Lovers’ Anthem.
Generated by Oliver Bown using Suno.

But I found I could also just gladly listen. I felt no compulsion to add anything or manipulate the result to add my mark.

And many jurisdictions have declared that you won’t be awarded copyright for something just because you prompted it into existence with AI.

For a start, the output depends just as much on everything that went into the AI – including the creative work of millions of other artists. Arguably, you didn’t do the work of creation. You simply requested it.

New musical experiences in the no-man’s land between production and consumption

So Udio’s declaration that anyone can express themselves in music is an interesting provocation. People who use tools like Suno and Udio may be better understood as consumers of music AI experiences than as creators of music AI works – or, as with many technological shifts, we may need new concepts for what they’re doing.

A shift to generative music may draw attention away from current forms of musical culture, just as the era of recorded music saw the diminishing (but not death) of orchestral music, which was once the only way to hear complex, timbrally rich and loud music. If engagement in these new types of music culture and exchange explodes, we may see reduced engagement in the traditional music consumption of artists, bands, radio and playlists.

While it is too early to tell what the impact will be, we should be attentive. The effort to defend existing creators’ intellectual property protections, a significant moral rights issue, is part of this equation.

But even if that effort succeeds, I believe it won’t fundamentally address this potentially explosive cultural shift. Historically, claims that a new kind of music is inferior have done little to halt cultural change – as with techno, or jazz long before it. Government AI policies may need to look beyond these issues to understand how music works socially, and to ensure that our musical cultures are vibrant, sustainable, enriching and meaningful for both individuals and communities.

Oliver Bown, Associate Professor, UNSW Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.




The Conversation is an independent source of news and views, sourced from the academic and research community and delivered direct to the public.















©2024 - Association for the Understanding of Artificial Intelligence


 











