Will our “robot overlords” someday become creative enough to make music and art humans appreciate? The folks from Google Brain’s Magenta project, speaking at Moogfest, say that what they’re really up to is using machine learning to create new tools artists can use to enhance their own expression.

They hope to engage artists, software developers, and machine learning researchers in using their shared open source code base built on TensorFlow (www.tensorflow.org). They want Magenta to give rise to a larger community of researchers and creators working on new ways to generate art and music with computer technology.

Douglas Eck, a research scientist working at the intersection of music and machine learning, presented a half-hour lecture on what Magenta is up to at a Sunday afternoon Moogfest event. His talk was followed by a panel discussion, whose participants included:

Adam Florin, a creative technologist exploring languages and computer systems for storytelling and expression. He is the creator of Patter, a real-time/freeform generative music system for Ableton Live.

Adam Roberts, who earned his PhD in Computer Science from UC Berkeley with an emphasis in Computational and Genomic Biology. He has focused on combining music and technology as a software engineer at Google, where he helped to organize the world’s music knowledge for Google Play and is now applying deep learning to the generation of music and art for Google Brain.

Duke University professor Tobias Overath, who investigates how the brain processes sound—from very basic sound attributes such as pitch or timbre, to more complex signals such as speech—using a combination of behavioral, electrophysiological (M/EEG) and hemodynamic (fMRI) methods.

Much to learn

One of the things that became very clear early in Eck’s outline of the Magenta project is just how difficult image recognition and making human-sounding music are for computers. Machine learning is vastly improving both, but there’s still a long way to go.

Computers making music still do things no human musician would, such as holding a single note for up to half a minute. And slides showing how researchers teach a computer to recognize images made it clear just how complicated that process can be.

He noted, however, that there have been “amazing advances” in areas such as speech recognition, where machine learning has greatly reduced error rates and improved word recognition.

Music is rather more complex than speech for computers, the panelists pointed out. Among the ideas discussed:

  • We don’t yet have any idea how the brain interprets pitch code, Overath said. “We don’t know what’s going on in the brain when it hears music.” It could be 20 or 30 years before we really know how the brain decodes music, he added.
  • Eck noted that people have been brain-scanned while listening to music, and when they like it, it lights up pleasure centers also activated by drugs, sex, and food. “But we don’t know what makes us like a certain piece of music. It’s individual. It’s different for everyone.”
  • Overath said that to a certain extent the kinds of music we like are cultural, “but we don’t know what makes the money note. If we did, we’d get rich pretty quickly.”
  • Eck suggested it would be great if we could pinpoint exactly what gives us that much-desired “music chill” as we’re enjoying it. Unfortunately, he added, “we would need to be hooked up to scanners.”
  • In building these machine learning systems, the programmers have to direct the computer’s attention. For instance, for an image recognition system to recognize and describe a picture of a giraffe in the woods, the machine first needs to isolate the foreground image and then focus on the background (see the sketch after this list).
  • Another factor programmers have to consider in devising their algorithms to make music or art is surprise.
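To give a rough sense of what “directing the computer’s attention” can mean in practice, here is a minimal sketch of soft attention over image regions, written in Python with NumPy. The region features, decoder state, and dimensions are hypothetical stand-ins for what a real captioning model would compute; the sketch only illustrates how attention weights let a model favor one part of a scene (say, the giraffe) over another (the woods) when generating a description.

```python
import numpy as np

def softmax(scores):
    """Convert raw scores into attention weights that sum to 1."""
    exps = np.exp(scores - scores.max())
    return exps / exps.sum()

# Hypothetical feature vectors for regions of an image (e.g., a giraffe
# in the foreground, trees in the background). In a real captioning
# model these would come from a convolutional network, not random data.
rng = np.random.default_rng(0)
region_features = rng.normal(size=(4, 8))   # 4 regions, 8-dim features each
decoder_state   = rng.normal(size=(8,))     # current state of the caption generator

# Score each region against the decoder state, then normalize the scores.
scores  = region_features @ decoder_state
weights = softmax(scores)

# The context vector is a weighted blend of the regions; the highest-weighted
# region is the one the model "attends" to when producing the next word.
context = weights @ region_features
print("attention weights:", np.round(weights, 3))
```

In a full model, these weights are recomputed at every step of the description, so the system can attend to the giraffe for one word and the woods for the next.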

Eck said he “doesn’t know exactly where Magenta is going.”

It is, however, launching on GitHub, starting with music and later adding video and visual art, to engage people outside Google in using, testing, enhancing, and eventually even contributing code to the project.