Google’s computers are creating songs. Making music may never be the same.
Even before people recognize the need for a new tool, technologists often already have it in mind.
Artificial intelligence “is today’s tech just like the piano was today’s tech at the time when it came about. Composers started to write with this crazy new loud machine called the piano,” said Peter Swendsen, an Oberlin professor of computer music and digital arts. “Now we just sort of take it for granted as part of the musical landscape.” (L. Todd Spencer/Virginian-Pilot via AP)
Google has launched a project to use artificial intelligence to create compelling art and music, offering a reminder of how technology is rapidly changing what it means to be a musician, and what makes us distinctly human.
Google’s Project Magenta, announced Wednesday, aims to push the state of the art in machine intelligence that’s used to generate music and art.
“We don’t know what artists and musicians will do with these new tools, but we’re excited to find out,” said Douglas Eck, the project’s leader, in a blog post. “Daguerre and later Eastman didn’t imagine what Annie Leibovitz or Richard Avedon would accomplish in photography. Surely Rickenbacker and Gibson didn’t have Jimi Hendrix or St. Vincent in mind.”
Google has already released a song demonstrating the technology. The song was created with a neural network — a computer system loosely modeled on the human brain — that was trained on recordings of many songs. After exposure to enough examples, the neural network learns to predict which note is likely to come next in a sequence. Eventually it learns enough to generate entire songs of its own.
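The core idea — learn which note tends to follow which, then sample next notes one at a time — can be illustrated with a toy model. The sketch below uses a simple first-order Markov chain rather than a neural network (and invented three-note melodies as training data), but the generate-by-predicting-the-next-note loop is the same in spirit.

```python
import random
from collections import Counter, defaultdict

def train_transitions(songs):
    """Count how often each note is followed by each other note."""
    transitions = defaultdict(Counter)
    for song in songs:
        for current, nxt in zip(song, song[1:]):
            transitions[current][nxt] += 1
    return transitions

def generate(transitions, start, length, seed=0):
    """Grow a melody by repeatedly sampling a likely next note."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        counts = transitions.get(melody[-1])
        if not counts:  # no continuation seen in training data
            break
        notes, weights = zip(*counts.items())
        melody.append(rng.choices(notes, weights=weights)[0])
    return melody

# Hypothetical training melodies, written as note names.
songs = [
    ["C", "E", "G", "E", "C"],
    ["C", "E", "G", "C", "E"],
    ["E", "G", "E", "C", "G"],
]
model = train_transitions(songs)
print(generate(model, "C", 8))
```

A real system like Magenta's replaces the transition table with a neural network that can condition on much longer context than just the previous note, which is what lets it capture melody and structure rather than only local note pairs.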
The project has just begun, so the only tools available now are aimed at musicians with machine-learning expertise. Google hopes to produce — along with contributors from outside Google — more tools that will …