So what's it sound like now? It must still be modelling the sound on something.
The instruments sound newer, like studio musicians. All the output has a same-y quality instead of the distinctive sounds of the original model, and the melodies are jerky, incoherent, one-note and one-dimensional. It's just horrible! I guess they got some studio musicians to replace the original training material.
It's still possible to get a snotty-sounding, punk-ish vocal. But the melodies are hopeless, so it's useless.
I did some deep research on how to train AI music models, but it's not really practical with the current open-source models (MusicGen or Jukebox). MusicGen can do the music, but not the vocals. Jukebox has ultra-low fidelity and a 12-second time limit, and it's old and primitive at this point. What's needed is an updated version of MusicGen that allows properly sung lyrics, not just grunts and yelps.
How the models work: after downloading and installing MusicGen or Jukebox, you make a music file containing one song, a part of a song, or a song split into separate instrument tracks. You can do any or all of these variations, and it can be your own music or someone else's. You name the file song_001.WAV (or whatever). Then you write a short description of what the music sounds like, for example "60s garage, fuzz guitar, high energy, rocker", and you give that description file the same name as the corresponding sound file: song_001.TXT. After making thousands of WAV files and TXT files, you process them in a training session that lasts hours or even days, using an NVIDIA RTX 4090 GPU or better.
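As a minimal sketch of that pairing step (the folder name and file names here are just illustrative assumptions, not anything MusicGen requires), a few lines of Python can confirm every WAV has a matching TXT caption before you burn hours on a training run:

    from pathlib import Path

    # Hypothetical folder of paired files: song_001.WAV + song_001.TXT
    DATA_DIR = Path("training_data")

    # Index files by their base name, accepting either extension case
    wavs = {p.stem: p for pat in ("*.wav", "*.WAV") for p in DATA_DIR.glob(pat)}
    txts = {p.stem: p for pat in ("*.txt", "*.TXT") for p in DATA_DIR.glob(pat)}

    # Every audio clip needs a caption, and every caption needs audio
    for stem in sorted(set(wavs) - set(txts)):
        print(f"no caption for {wavs[stem].name}")
    for stem in sorted(set(txts) - set(wavs)):
        print(f"no audio for {txts[stem].name}")

    pairs = [(wavs[s], txts[s]) for s in sorted(set(wavs) & set(txts))]
    print(f"{len(pairs)} usable WAV/TXT pairs")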
Then when you ask for fuzz guitar, it knows what you mean. It doesn't directly use the songs it trained on, but it does use its understanding of fuzz guitar, built from all the examples you gave it. It makes up something original based on what it learned in training, basically the same way a human does after listening to other people's music.
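For the prompting side, the open-source MusicGen release (Meta's audiocraft package) already works roughly like this out of the box. A minimal generation script looks something like the sketch below; the model size and duration are arbitrary choices, and this uses Meta's stock pretrained checkpoint rather than anything custom-trained:

    from audiocraft.models import MusicGen
    from audiocraft.data.audio import audio_write

    # Load a pretrained checkpoint (small is fastest; medium/large sound better)
    model = MusicGen.get_pretrained("facebook/musicgen-small")
    model.set_generation_params(duration=10)  # seconds of audio per clip

    # The text prompt plays the same role as the TXT captions used in training
    wav = model.generate(["60s garage, fuzz guitar, high energy, rocker"])

    # Write each generated clip out as a loudness-normalized WAV file
    for idx, one_wav in enumerate(wav):
        audio_write(f"fuzz_{idx}", one_wav.cpu(), model.sample_rate,
                    strategy="loudness")

It will produce an instrumental in the ballpark of the prompt, which is exactly the fuzz-guitar point above; what it won't do, as noted, is sing actual lyrics.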