For as long as there have been computers, people have believed there are certain tasks a computer will "never" be able to do.
I don't like the word "never." What if you could go back in time and tell Isaac Newton that we'd send a horseless (and humanless) carriage to Mars to collect and analyze soil samples?
He'd think you were mad!
But we've done that twice now. Science progresses fast.
One task people thought a computer would never be able to do is generate creative works. Whether it's because you think creativity pours out of your soul and your deepest emotions, or you just think it's too computationally sophisticated, this seems to be outside the realm of beeps and boops.
Well, it turns out computers are starting to break this barrier—and in spectacular ways. But before we get to that, let's look at a bit of the history that got us here.
My creative journey
When I was a kid, I played a computer game called ELIZA[1]. ELIZA was pretty good at “faking” creativity. In this program (calling it a “game” is actually a bit of a stretch), you chat with a digital psychotherapist. It mostly just parrots back what you say to it, with a few parlor tricks coded in to give it a “memory” and make it seem more alive. Here’s a sample conversation, as I remember these exchanges going:
ELIZA: Welcome. What is your name?
Me: Gerbert Dishwater.
ELIZA: Hello, Gerbert. What brings you here today?
Me: My mom. She drove me here.
ELIZA: How do you feel about that?
Me: Angry. I want to be at home playing video games.
ELIZA: I see.
Me: What are your qualifications, anyway?
ELIZA: I’m not sure I understand you fully.
ELIZA: Let us not forget about your video games, Gerbert.
This might seem rather unremarkable in the age of Siri, but it was downright magical to a 10-year-old kid in the 1990s! (Also, I was a goofy kid, so my conversations with ELIZA were typically psychotic and entertaining.)
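Under the hood, ELIZA’s “parlor tricks” boil down to pattern matching plus a small memory of past topics. Here’s a minimal sketch of that idea in Python (the rules and canned replies are my own illustrative ones, not Weizenbaum’s original script):

```python
import random
import re

# Illustrative ELIZA-style rules: a regex pattern and a reply template.
RULES = [
    (r"my (mother|mom|father|dad)\b", "Tell me more about your family."),
    (r"\bi want (.+)", "Why do you want {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\bangry\b", "How do you feel about that?"),
]
FALLBACKS = ["I see.", "Please go on.", "I'm not sure I understand you fully."]

memory = []  # topics to bring back later -- the "parlor trick"

def respond(text):
    lower = text.lower()
    # Remember phrases after "my" so we can circle back to them later.
    m = re.search(r"\bmy ([a-z ]+)", lower)
    if m:
        memory.append(m.group(1).strip())
    # Try each rule in order; fill the template with the captured text.
    for pattern, template in RULES:
        m = re.search(pattern, lower)
        if m:
            return template.format(*m.groups())
    # Occasionally resurface a remembered topic, otherwise deflect.
    if memory and random.random() < 0.3:
        return f"Let us not forget about your {memory.pop(0)}."
    return random.choice(FALLBACKS)
```

With rules like these, `respond("I want to be at home playing video games")` produces the templated “Why do you want…” reply, and the memory list is what lets the program later say “Let us not forget about your video games.”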
For the next 20 years or so, I had a digital creativity dry spell. Researchers were surely hard at work, but there weren’t many toys for the average person to play with. The only tools I can think of that would qualify are Photoshop filters. Back before “filter” meant your phone turning you into a cat, it referred to image processing algorithms inside a photo editor.
Here’s an example:
Again, this isn’t too impressive by modern standards, but it’s a lot of fun when you’re a teen and this is considered cutting-edge technology. (Also, I learned a lot about digital painting by applying filters and then re-creating what they did!)
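Many of those classic filters are just a small matrix of weights swept across the image. As a toy illustration (a tiny grayscale image as nested lists, nothing like Photoshop’s actual code), here’s a 3×3 box blur:

```python
# A 3x3 box blur: replace each interior pixel with the average of
# itself and its eight neighbors. Edge pixels are left untouched
# to keep the sketch simple.

def box_blur(image):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total = sum(image[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = total // 9
    return out

# A 4x4 grayscale image: a bright square on a dark background.
img = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
blurred = box_blur(img)
```

Sharpen, emboss, and edge-detect filters work the same way; only the kernel weights change.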
The next breakthrough I learned about was the work of Dr. David Cope. Cope used his computer to analyze classical music, find patterns in the composition, and then generate new pieces that follow the same rules. Music generation has progressed at a breakneck pace since then—and uses much more sophisticated math now. I don’t know exactly how Dr. Cope generated his pieces, but my guess is that he used Markov chains (which I learned about in my undergrad finite math class, so the math isn’t too difficult to grasp if you’re motivated).
In any case, he called this project EMI—Experiments in Musical Intelligence—and created the persona “Emily Howell” to release this music under. It’s quite beautiful when performed by humans. Here’s my favorite, an AI-generated Bach chorale:
As soon as I heard this, I was hooked! I knew I wanted to explore this space myself (as an artist moreso than a researcher), so I patiently waited over the next several years for the technology to make its way out of the lab.
The state of AI creativity today
In 2019, OpenAI introduced MuseNet, an artificial intelligence for generating music in several different genres.[2] The basic idea behind it is the same as David Cope’s work: analyze lots and lots of music, and then find patterns in it. The math, however, is much more sophisticated—it involves a lot of calculus and linear algebra that I don’t fully understand myself. (But that’s okay! I just want to make music with it!)
MuseNet allows you to take a few introductory notes of a song, choose an artist or genre, and then generate a “continuation”: an improvisation that you might hear if that artist or genre kept playing in that exact style. You can even specify what instruments to use!
The implementation of this AI isn’t perfect. These improvisations often fall into one of two extremes:
Too repetitive. Sometimes it’ll just keep repeating what it’s seen so far, with only minimal changes in each loop.
Too “noodly.” Other times the song will just wander aimlessly with no underlying structure.
Here’s the good news: I said it’s the implementation that’s imperfect, not the technology. I know just enough about the underlying code to know this particular problem can be fixed with just a few tweaks. The tech will undoubtedly improve too, so I’m excited to see what the future holds.
Strictly speaking, you could just save MuseNet’s outputs to MP3, and then dust off your hands and say, “See! A computer can be creative!” And you wouldn’t be wrong. But I think the technology really shines when you, as a human, remain a part of the equation.
That’s what I’ve been doing for the past two years. MuseNet is an integral part of my creative process—but I use it as a tool, much in the way a mathematician uses a calculator.
To reiterate, MuseNet’s greatest weakness (in my opinion) is understanding structure. In fairness, it doesn’t utterly fail at this every single time—but if you want your song to have an intro, a verse, a chorus, and a bridge, it’s easiest if you remain part of the creative process and lean on MuseNet for idea generation. That’s exactly what I’ve been doing. My process looks something like this:
Have MuseNet generate a few dozen song fragments until I finally hear something that catches my ear.
Download the output (as MIDI), and clean it up/improve it a bit.
Add my own touches (for my synthwave music, this includes drums/arpeggios/etc.)
Feed my work back into MuseNet, generate a few dozen continuations, and repeat the process.
Structure the song (verse, chorus, bridge, etc.)
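To make the “clean it up” step concrete, here’s a sketch of one typical MIDI cleanup: snapping note onsets to a 16th-note grid. (The tick values and note format here are illustrative; real cleanup involves much more than quantization.)

```python
# Notes as (start_tick, pitch) pairs. At a common MIDI resolution of
# 480 ticks per quarter note, a 16th-note grid is 120 ticks.
GRID = 120

def quantize(notes, grid=GRID):
    # Snap each note's start time to the nearest grid line.
    return [(round(start / grid) * grid, pitch) for start, pitch in notes]

# Slightly off-grid notes, as generated output often is.
raw = [(5, 60), (118, 64), (250, 67), (355, 72)]
clean = quantize(raw)
```

After quantization the notes land exactly on the grid, which makes the fragment much easier to edit, layer drums over, and feed back into the generator.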
If I can toot my (our?) own horn a bit, I think the music comes out quite good. Here are two of my favorite works:
Wheels of Time
Bird Don’t Fly
This is pretty representative of my music right now: split between piano/orchestral pieces and video-gamey synthwave. If you have a keen ear, you might have noticed that I fully embraced MuseNet’s noodliness on Bird Don’t Fly. The melody evolves every 8 bars or so, and there isn’t a clear verse or chorus in the song. This approach usually doesn’t work—but it worked beautifully this time, at least!
The future of AI art
Let me be clear: those two songs would not exist without AI. Although I worked on each of them for 10+ hours to make them fully “my own,” the melodies and motifs were mostly written by silicon transistors, not biological neurons.
Like I said: MuseNet is my (musical) calculator. It saves me time. But for now, it’s just a tool, and music still needs a human touch if you want it to touch your heart or get you dancing.
It won’t be this way for much longer. You might think there’s this certain je ne sais quoi that makes certain music beautiful—the human pianist (in this case, the wonderful Holly Mead) knowing when to slow down her playing for a certain passage, or use dynamics to make certain parts loud and commanding, and other parts quiet and gentle.
But the truth is that’s all math too—and it, too, can be modeled with linear algebra and calculus. We just haven’t done it yet.
I said I don’t like the word “never,” but I don’t think this will put human musicians like me or Holly out of a job. It will, however, change the nature of our work. Music will become more interactive and rely even more deeply on human connection. Live performances and music lessons, for example, will continue to have their place. (Don’t be surprised if I’m doing a world tour of MuseNet’s Greatest Hits in 2035.) But the work of session musicians (who just show up in the studio and play something) stands a good chance of being automated in the next decade or so. AI will be able to do that. Soon.
So, what can computers and AI not do? I’m willing to go out on a limb and say empathy is going to remain in the domain of humans for the longest. If you want rock-solid job security, become a therapist. ELIZA isn’t going to replace you any time soon.
Tell your grandchildren, however, to not be surprised if sentient AIs learn to empathize better than humans over the next few centuries. Science progresses fast.
[1] It’s ancient by digital standards, by the way. It was developed in the 1960s! But I was enjoying a Mac port of it in the 1990s.
[2] There’s still no program to download or install, but the web demo is very usable and I’ve been using it ever since its introduction.