Art, to many, is a distinctly human expression of creativity. So what role – if any – can machines play in the creation of music, fashion and other creative work?
Artificial intelligence is already being used to create different types of art. Most of these projects are run by researchers testing the limits of the technology in this field, but they have produced interesting results nonetheless.
Sony CSL Research Laboratory is planning to release a whole album of songs written by artificial intelligence. Sony’s researchers have built an AI system called FlowMachines, which analyzes a database of songs and then creates compositions following a particular musical style. One example, “Daddy’s Car,” is a poppy tune in the style of The Beatles.
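FlowMachines’ actual models are far more sophisticated, but the basic idea of analyzing a corpus and generating new material in its style can be sketched with a toy first-order Markov chain over notes. The note sequences below are invented for illustration; nothing here comes from Sony’s system.

```python
import random

# Toy corpus: note sequences standing in for a database of songs in one style.
corpus = [
    ["C", "E", "G", "E", "C", "D", "E", "C"],
    ["C", "D", "E", "G", "E", "D", "C", "C"],
    ["E", "G", "A", "G", "E", "D", "C", "D"],
]

# "Analyze" the corpus: count which note tends to follow which.
transitions = {}
for song in corpus:
    for a, b in zip(song, song[1:]):
        transitions.setdefault(a, []).append(b)

def generate(start="C", length=8, seed=0):
    """Generate a new melody that statistically resembles the corpus."""
    rng = random.Random(seed)
    melody = [start]
    for _ in range(length - 1):
        # Pick a successor with the same frequencies seen in the corpus.
        melody.append(rng.choice(transitions[melody[-1]]))
    return melody

print(generate())
```

Because the successor of each note is drawn from the corpus statistics, the output sounds like the training material without copying any one song, which is the same trade-off a style-imitating system has to strike at much larger scale.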
The same team previously produced a jazz bot capable of producing jazz music in a specific style. Another project, which came out of the University of Toronto’s computer science lab, features an algorithm which can turn images into songs.
Sony’s FlowMachines isn’t capable of doing all the work itself, however. For “Daddy’s Car,” French composer Benoît Carré arranged the song and wrote the lyrics.
For anybody scared that humans will be replaced by robots, and that computers will eclipse the artistic accomplishments of humans, this is a key point. The AI in the Sony case analyzed a huge number of songs, but the final touch was applied by a human. This, according to Francesca Rossi, a research scientist at the IBM T.J. Watson Research Center in New York, is how artificial intelligence and human creativity will mix: with AI augmenting the efforts of humans.
“In general, I think that AI can really help humans enhance their creative capabilities, because even our creative capabilities don’t come out of the spur of the moment, they come out of analyzing the world around us, a lot of data around us,” Rossi says. “So, the more data we can analyze with the help of AI systems, the more our creativity is enhanced.”
IBM’s Watson has dabbled in creativity on several occasions, each time producing impressive results. Watson was used to bring a flourish to the dress of model Karolina Kurkova in 2016, when she attended the Met Gala in New York. Watson analyzed the social media accounts of Marchesa, the high-fashion label that designed the dress, and changed the color palette of the lights on Kurkova’s dress based on that analysis.
Chef Watson also helps to bring creativity to those getting inventive with food. Users enter ingredient choices into the program, and Chef Watson suggests the best accompanying ingredients and recipes that can be cooked up. In both of these examples, the machine is helping the human rather than replacing them. One of the reasons machines aren’t able to go that extra step is that they have trouble understanding and invoking emotions.
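Chef Watson draws on flavor chemistry and a large recipe corpus, none of which is reproduced here; but the simplest version of “suggest accompanying ingredients” can be sketched as co-occurrence counting over recipes. The recipe list and ingredient names below are invented for illustration.

```python
# Toy recipe corpus; every recipe is just a set of ingredient names.
recipes = [
    {"tomato", "basil", "mozzarella", "olive oil"},
    {"tomato", "garlic", "olive oil", "pasta"},
    {"basil", "garlic", "olive oil", "pine nuts"},
    {"chocolate", "chili", "cream"},
]

def suggest(chosen, top=3):
    """Rank ingredients by how often they co-occur with the user's choices."""
    scores = {}
    for recipe in recipes:
        overlap = len(chosen & recipe)
        if overlap == 0:
            continue  # recipe shares nothing with the user's picks
        for ingredient in recipe - chosen:
            # Weight each candidate by how strongly its recipe matches.
            scores[ingredient] = scores.get(ingredient, 0) + overlap
    return sorted(scores, key=lambda i: (-scores[i], i))[:top]

print(suggest({"tomato", "basil"}))
```

Even this toy version shows the division of labor the article describes: the program surfaces statistically plausible pairings, and the human cook decides what to actually make.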
“These creative activities have to do with emotions because when you look at a painting emotions come out,” says Rossi. “To be creative it means you generate emotions into people, that’s something machines need to be able to understand, how to generate emotions in people, because then you’ll be creative.”
AI systems have made a lot of progress in recent years, and the likes of Amazon’s Alexa, Apple’s Siri, and IBM’s Watson are already a part of our everyday lives. But to go from useful and analytical to creative and spontaneous is a huge step.
“I think there is still a lot of work to do because creativity is a convergence of many different activities in our brain. For example, a caption contest, it’s very difficult because you need to understand what is in the picture, and then understand what the picture means in the mind of people who see the picture, historical and cultural context and so on, and then come up with a caption that is funny for those people that read that,” Rossi explains.
Google is another major player in this field, and its Magenta program has two goals. The first is to develop algorithms that can learn how to make art and music. The Magenta team is also aiming to build a community of artists, coders, and machine learning researchers. “We don’t know what artists and musicians will do with these new tools, but we’re excited to find out,” the Google Magenta welcome page reads.
Despite this, artificial intelligence is often seen as an existential threat to humans, either poised to steal all our jobs or to rise up and destroy humanity by force. Much of this fear comes from Hollywood, where movies such as The Terminator have implanted a seed of distrust in any notion of a computer that can think.
Google also created an art “generator” called DeepDream. The program runs the same neural networks normally used to recognize images in reverse, so that DeepDream generates images instead. Users can upload an image to DeepDream; the network looks for familiar patterns, enhances them, and then repeats the process on the resulting image. “This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird,” Google said in a blog post when it revealed the project. “This in turn will make the network recognize the bird even more strongly on the next pass and so forth, until a highly detailed bird appears, seemingly out of nowhere.”
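The feedback loop Google describes is gradient ascent: repeatedly nudge the input in whatever direction makes a recognizer respond more strongly, so anything that faintly resembles a learned pattern gets amplified. Real DeepDream does this through a deep convolutional network; the one-dimensional “image” and single linear “detector” below are invented to show just the loop itself.

```python
# A toy "neuron" that responds to one fixed pattern.
pattern = [1.0, -1.0, 1.0, -1.0]

def response(image):
    """Activation of the toy detector: dot product with its pattern."""
    return sum(p * x for p, x in zip(pattern, image))

def dream(image, steps=10, rate=0.1):
    """Gradient ascent on the activation.

    For a linear detector, d(response)/d(image) is simply the pattern,
    so each step pushes the image a little further toward what the
    detector already "sees" in it -- the feedback loop.
    """
    image = list(image)
    for _ in range(steps):
        image = [x + rate * p for x, p in zip(image, pattern)]
    return image

faint = [0.1, -0.05, 0.0, 0.02]  # barely resembles the pattern
print(response(faint), response(dream(faint)))
```

After a few iterations the detector’s response grows sharply, which is the one-dimensional analogue of a vague cloud turning into a highly detailed bird.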
“I think you have to be more clear about what the goal is,” Rossi says. “The goal is to help humans be more creative, and not just to replace painters or songwriters or whatever. I think that’s usually the way it’s perceived so that’s why you have this resistance. So maybe the systems should be put in a package that shows clearly that you want to help people be more creative.”
Art of any kind has always been greatly influenced by advances in technology; breakthroughs such as the printing press, photography, and the computer all spring to mind. Artificial intelligence has the potential to be an equally important agent of change in the creative world, if it is embraced rather than unjustly feared.