Google has taken artificial intelligence a step further with its text-to-music AI, MusicLM. Announced in January, the model is now available to the public, though you have to sign up for a waitlist first.

Google research scientist Neil Zeghidour says, "MusicLM builds on top of AudioLM, which is a broader audio generation algorithm. And what MusicLM does is take this very powerful audio generation algorithm, train it on music data, and add specific controls for music creation, such as text conditioning, melody conditioning, and so on."
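To make that description a bit more concrete, here is a rough sketch of the staged pipeline the MusicLM paper describes: a text prompt is embedded, then turned into high-level "semantic" tokens, then into "acoustic" tokens, and finally decoded to audio. All of the names below are hypothetical stand-ins I made up for illustration, not a real API:

```python
# Hypothetical sketch of a MusicLM-style pipeline. Function names are
# invented stand-ins; the real components in the paper are MuLan (joint
# music/text embedding), AudioLM-style token stages, and SoundStream.

from typing import List, Optional


def embed_text(prompt: str) -> List[float]:
    """Stand-in for a joint music/text embedding (MuLan in the paper)."""
    return [0.0] * 128  # dummy embedding


def generate_semantic_tokens(conditioning: List[float],
                             melody: Optional[List[int]] = None) -> List[int]:
    """Stand-in for the high-level stage, conditioned on the text
    embedding and, optionally, a melody (e.g. a hummed tune)."""
    return [0] * 250  # dummy token sequence


def generate_acoustic_tokens(semantic: List[int]) -> List[int]:
    """Stand-in for the stage that fills in fine acoustic detail."""
    return [0] * 1000


def decode_to_audio(acoustic: List[int]) -> bytes:
    """Stand-in for a neural codec decoder (SoundStream in the paper)."""
    return bytes(len(acoustic))


def text_to_music(prompt: str, melody: Optional[List[int]] = None) -> bytes:
    """End-to-end: text prompt (plus optional melody) to raw audio."""
    conditioning = embed_text(prompt)
    semantic = generate_semantic_tokens(conditioning, melody)
    acoustic = generate_acoustic_tokens(semantic)
    return decode_to_audio(acoustic)


audio = text_to_music("slow tempo, bass-and-drums-led reggae song")
```

The point of the staging is exactly what Zeghidour describes: the general audio generation machinery stays the same, and music-specific controls like text and melody conditioning are layered on top.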

Image from Mike Russell on YouTube

Here are some examples, taken from Google's GitHub page:

"The main soundtrack of an arcade game. It is fast-paced and upbeat, with a catchy electric guitar riff. The music is repetitive and easy to remember, but with unexpected sounds, like cymbal crashes or drum rolls."

[Audio: Example 1, 0:30]

"Slow tempo, bass-and-drums-led reggae song. Sustained electric guitar. High-pitched bongos with ringing tones. Vocals are relaxed with a laid-back feel, very expressive."

[Audio: Example 2, 0:30]

MusicLM can also produce vocals:

"This is an r&b/hip-hop music piece. There is a male vocal rapping and a female vocal singing in a rap-like manner. The beat is comprised of a piano playing the chords of the tune with an electronic drum backing. The atmosphere of the piece is playful and energetic. This piece could be used in the soundtrack of a high school drama movie/TV show. It could also be played at birthday parties or beach parties."

[Audio: Example 3, 0:30]

Whenever you give MusicLM a prompt, it generates two versions of what you've asked for. You can award a "trophy" to the one you like better, which helps improve the model, and then download the track.

"You can listen to both and give a trophy to the track that you like better, which will help improve the model." Image from Google Blog

Will It Take Musicians' Jobs?

It's not very likely to replace musicians; it's better suited to filling gaps where hiring one isn't practical. For example, a YouTuber who needs some background music for a video could make use of it.

In fact, MusicLM was made with musicians in mind from the start. Google partnered with artists like Dan Deacon to improve the model and see how it could empower the creative process, and set up workshops at its Arts & Culture Lab in Paris to experiment with it.

Zeghidour also said the goal is for artists to use it in ways that are meaningful to them, essentially as a tool for exploring new ways of creating music.

Another use case was mentioned by Simon Doury, a creative coder at the Google Arts & Culture Lab: "For example, if you are a drummer and you want something to play with, you can just type, and you have the music to play with."

It's always good to see technology progress, and I'm excited to see what else MusicLM will be capable of.

You can sign up on Google's AI Test Kitchen to try out MusicLM.