Coding as Poetry

Generative music is probably best known through Brian Eno’s piece Generative Music 1. Since Eno’s piece was released in 1996 and sold on a floppy disc, many software programs have been written to create ever-changing sounds, with new variations and improvisations, generated by a system within a given frame of parameters.
For his web residency (call no. 2 2017 by Solitude & ZKM), visual artist and composer Jim Rolland wanted to examine the relationship between image and sound through generative patterns based on algorithms. In this interview, he provides insight into how to create generative artworks, the beauty of code, and the impact composing and performing with machines has on the notion of authorship. Find the project page here.

Clara Herrmann: For your project, you work with generative music, which is probably best known through Brian Eno’s piece Generative Music 1. What has happened in this specific field since the piece was released in 1996, and how and why did your interest in working with generative music start?

Jim Rolland: Actually, about 20 years have passed since Brian Eno’s Generative Music 1, but from what I’ve heard, most of the experiments made so far focus on sounds, not on melodies. The sounds I use are very simple, which is intentional. For the web residency, I decided to use the same 60 piano samples for each of my episodes to draw attention to the patterns rather than the sound texture. And of course, as a visual artist interested in the relationship between image and sound, my aim is to build audiovisual pieces in which image and sound are generated by the same algorithms. My first visual generative music piece, Disobedience, was created out of the desire to produce a new audiovisual reality coming out of nowhere, after having created many audiovisual pieces captured from real life.

[Video: Disobedience]

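To make the idea concrete, here is a minimal generative-pattern sketch in Processing, the environment Rolland mentions later in the interview. It is an illustration only, not code from the project: a sine oscillator stands in for the 60 piano samples, the five-note scale is an arbitrary choice, and the same random draw that picks a note also places a mark on screen, so image and sound come from one algorithm.

```
// Illustrative sketch only: a sine oscillator replaces the piano samples
// so the example runs without any sound files.
import processing.sound.*;

SinOsc osc;                          // stands in for one of the piano samples
int[] notes = {60, 62, 64, 67, 69};  // a pentatonic scale, written as MIDI note numbers
int x = 0;                           // horizontal position of the next mark

void setup() {
  size(600, 200);
  background(255);
  osc = new SinOsc(this);
  osc.amp(0.3);
  osc.play();
  frameRate(4);                      // four notes per second
}

void draw() {
  // The same random choice drives both the sound and the image.
  int note = notes[int(random(notes.length))];
  float freq = 440 * pow(2, (note - 69) / 12.0);  // convert MIDI note to frequency
  osc.freq(freq);

  // One mark per note: the pitch sets the height, time sets the position.
  float y = map(note, 60, 72, height - 20, 20);
  noStroke();
  fill(0, 100);
  ellipse(x, y, 10, 10);
  x = (x + 10) % width;
}
```
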
CH: For your web residency, you try to create as many different patterns as possible to make the music sound different. How does this work?

JR: During this web residency, I tried to be as didactic as possible: I began with very simple operations and increased their complexity as I improved the visuals. But when I create generative music outside of any frame, I spend a lot of time trying different algorithms and listening to how they make the music sound. The patterns created are very important, but the rhythm is also a major point. The result must be unexpected but melodious, and coherent without repeating itself. Just before releasing a new piece, I always spend a few hours listening to it to be sure of its coherence and interest. But there are other ways to generate music. See for example Brain Music, a performance in which the notes are generated by my brain waves:

[Video: Brain Music]

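Rhythm can be handled in the same spirit. The sketch below, again in Processing and with made-up values, shows one common approach rather than the one necessarily used in the residency pieces: note lengths are drawn from a weighted set, so the pulse stays coherent even though the exact sequence never repeats.

```
// Illustrative rhythm sketch: durations are picked from a weighted set.
int[] durations = {250, 250, 250, 500, 500, 1000};  // milliseconds; short values appear more often
int nextNoteAt = 0;

void setup() {
  size(400, 100);
  background(255);
}

void draw() {
  if (millis() >= nextNoteAt) {
    // Draw a duration from the weighted set and schedule the next event.
    int dur = durations[int(random(durations.length))];
    nextNoteAt = millis() + dur;

    // A real piece would trigger a sample here; this sketch just shows
    // the chosen duration as a bar whose width grows with its length.
    background(255);
    fill(0);
    rect(20, 40, map(dur, 250, 1000, 40, 360), 20);
  }
}
```
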
CH: Concerning your work for the web residencies, one could say that the machine is the composer. What is the notion of authorship your work is based on and where do you position yourself as an artist here?

JR: In my opinion, the machine is not the composer because it’s bound by my code, while a composer writes music with total freedom. But I’m definitely not the author of the music produced, for two main reasons. First, I don’t directly compose any musical piece, and I’m unable to write the sheet music of a melody that’s ever-changing. Second, when I’m dead, my generative works will still exist and play music that’s never been heard before, even by me. Actually, as a computer can’t be considered a legal person, the music I produce has no author. So, if a musician is inspired by a pattern coming from one of my artworks, he’s absolutely free to use it to create his own music. However, I’m the author of the code that enables the music and the visuals. Moreover, we have to keep in mind that generative art is played live by the machine, and each time it’s activated the result is different. So I think that any kind of generative art should be considered a performance where the computer is the performer, but where the creator, the author of the performance, is the artist.

CH: When did you start working with code as an artist?

JR: First I have to say that my artistic life is very complicated. I’ve been a painter and a musician for a long time, and in 2009, as I was experiencing another creative crisis, I decided to make my two passions meet in the digital arts. I began producing video art pieces in which sound and image were entangled, and some of my videos have been quite well received. Neons Melody was created in 2011 and is my most awarded video:

[Video: Neons Melody]

I then began to produce performances in which the audiovisual material was edited live. For these performances, I used software called Isadora, which is perfect for playing back several videos at the same time in front of an audience. The Unknown Skater has been performed many times in France, Italy, and Germany:

[Video: The Unknown Skater]

But in 2014, I wanted to produce interactive installations using a Kinect, so I began to learn to code in Processing (free software designed for artists by two MIT researchers, based on a simplified Java syntax). After a few months, I was able to produce my first generative artwork, Biosphere. Since then, I’ve produced many different works with this software: interactive VJ-ing, where I project abstract visuals directly related to the sound of the musicians I’m playing with, and interactive installations like Body Instrument, where dancers generate their own music according to their movements:

[Video: Body Instrument]

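The basic mechanism behind such motion-driven pieces is a mapping from position to sound parameters. The Processing sketch below is only a stand-in to show the principle: the mouse replaces the Kinect’s joint data, and a triangle oscillator replaces the installation’s actual sound material.

```
// Illustrative motion-to-sound sketch: position is mapped to pitch and loudness.
import processing.sound.*;

TriOsc osc;

void setup() {
  size(400, 400);
  osc = new TriOsc(this);
  osc.amp(0.2);
  osc.play();
}

void draw() {
  background(20);
  // Horizontal position chooses the pitch, vertical position the loudness,
  // roughly the way a tracked body position could be used in an installation.
  float freq = map(mouseX, 0, width, 110, 880);
  float amp  = map(mouseY, 0, height, 0.5, 0.0);
  osc.freq(freq);
  osc.amp(amp);
  fill(255);
  ellipse(mouseX, mouseY, 20, 20);
}
```
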
CH: On your project page it says that you now understand what computer nerds mean when they talk about the beauty of code. In what way?

JR: I must confess that I consider coding a kind of poetry. At first, when I was learning Java, I was amazed several times by the beauty that visually emerged from this barbaric language. But later on, after two or three years, I discovered smarter ways of writing things, and now, even if I know this must sound weird, I really think the beauty doesn’t only lie in the audiovisual result but also in the code itself.

CH: What did you discover during the working process? What new observations or ideas did you gain?

JR: This web residency was really useful because it gave me a clear objective and made me focus on simple patterns, creating by going back to basics. It led me to clarify my thoughts on the subject and to create three new pieces, a trilogy (In the Forest, In the City, and In the Nebula), which I published in the last episode of my blog.

CH: As it is an interactive animation piece that the user can start and stop, what would you like your audience to experience?

JR: I think that one really needs a cultural background to fully appreciate an artwork, and the digital arts suffer from a lack of such knowledge. So I just hope that the people who saw my work in progress were curious enough to trigger the animations a few times and discover the different patterns allowed by the code, because I’m sure that such an interested audience will be better able to appreciate generative music and generative art in general. That’s why I conceived my program for this web residency as a kind of tutorial, in order to explain what generative art is.

CH: How is this work related to your broader work and themes as an artist and composer?

JR: This work is important within my overall activities because, as I explained earlier, I’m very concerned with the relationship between image and sound. And here, the notion of pattern that was the subject of this web residency is a kind of key to unlock the door that separates the instantaneity of an image from the continuity of a sound. The drawn pattern is there first to visualise a musical pattern, like a sheet of music, and in a second iteration it can evolve and become more aesthetic in order to invite contemplation.