Episode Transcript
Aaron Nathans:
Picture a violin without any strings, or for that matter, a body. You have a bow and you're playing with all the feeling and physical movement and presence with which you'd play a real violin, but instead of strings, you're running the bow along an array of digital sensors. And what you hear coming out isn't a traditional violin sound, but something more like a voice you'd associate with a robot. <<MUSIC>>
Aaron Nathans:
But there's something else really interesting about this device. Instead of one or two speakers, this home-brewed instrument sits on top of what is effectively a ball of speakers pointing in every direction, mimicking the 360-degree sound you'd get from an acoustic instrument. This device was the creation of Princeton music grad student Dan Trueman, along with his advisor, Perry Cook, and Trueman's dad. You can still see a 1999 YouTube video of a young Trueman playing it. It was called the BOSSA, which stood for Bowed Sensor Speaker Array. The idea was to combine human physicality with electronic sound. They concluded their research paper on this invention with what seemed like blue-sky dreaming at the time: "It is easy to imagine an enormous family of instruments like BOSSA, instruments with input interfaces inspired by every known existing instrument." But soon Trueman was off to Colgate to join the faculty. Several years later, however, Princeton hired him as an assistant professor. And when he returned, Trueman approached his old advisor: "Remember that thing I wrote in my thesis, what if we had an entire ensemble of these? Let's do that." And so they did. <<THEME MUSIC>>
Aaron Nathans:
From the School of Engineering and Applied Science at Princeton University, this is Composers and Computers, a podcast about the amazing things that can happen when artists and engineers collaborate. I'm Aaron Nathans. Part five, Laptop Orchestra.
Aaron Nathans:
It's ironic that just as computers were becoming so powerful, many of the early Princeton electronic and computer musicians simply stopped making computer music, opting to return to more traditional instruments. Milton Babbitt, who made such innovative use of the RCA Mark II synthesizer in the 1950s and ‘60s, abandoned the machine in the mid 1970s. He never lost his taste for strict serialism, but he preferred it played on a good old-fashioned piano. He died in 2011 at the age of 94. Jim Randall, whose work in the 1960s thrilled his students by showing what kind of innovative, evocative music a computer was capable of making, walked away from computer music not even a decade later. Again, he preferred to work with more traditional instruments. He died in 2014 at the age of 84. Even Paul Lansky, the master of computer music, who carried the art so far forward, eventually turned back to acoustic instruments as well, before his retirement from the Princeton faculty in 2014.
Aaron Nathans:
It was as if computers had, by the dawning of a new millennium, become so firmly integrated into the creation of music, the electronic sound so well accepted, that there was no longer a dividing line between computer music and traditional music. There was only music. Maybe they figured it had all been done. It was Babbitt, after all, who's famous for having said, "Nothing gets old faster than a new sound." But not everyone thought that computer music was at an end, because music is a way of expressing what it means to be human, and a computer, of course, is anything but human. So would it be possible for a computer to help us explore that humanity, to not just build artificial intelligence, but to create digital authenticity?
Aaron Nathans:
You'll remember, if you listened to episode two, when composer Toby Robison talked about the fellow who, in roughly 1966, was trying to get the room-sized IBM to make music that emoted, or when Paul Lansky experimented with using random numbers to try and imbue his electronic music with the thrill of live performance. Starting in the 1990s, a trio of Princeton faculty set their minds to figuring out how to make our digital music tools more human.
Aaron Nathans:
Perry Cook was born in Missouri in 1955. He began singing in the church choir at age three and took piano lessons as a child. He later played the trombone and the euphonium. He majored in music at the University of Missouri, Kansas City, where he discovered an electronic music studio. An alumnus had given the studio a Moog Mark IV analog synthesizer.
Aaron Nathans:
You'll recall the Moog from our first episode. Its founder, Robert Moog, had done his initial research on the instrument at the Columbia-Princeton Electronic Music Center in the 1960s. This is Wendy Carlos playing the Moog on the famous pathbreaking 1968 album, “Switched-On Bach.” <<MUSIC>> Cook was hired as a student to maintain the machines and record concerts, which only whetted his appetite for electronic music. Later, as a graduate student at Stanford, he joined the Center for Computer Research in Music and Acoustics. He studied with Julius Smith and John Chowning, who invented the FM synthesis technology behind Yamaha digital keyboards. One of Cook's advisors at Stanford was the late John Pierce, who had worked at Bell Labs in Murray Hill, New Jersey in the 1950s and ‘60s, and empowered his employee, Max Mathews, to take the time to create the first music made by a computer. By 1987, Max Mathews was at Stanford too.
Aaron Nathans:
In such a fertile environment, Cook was free to explore the science of sound on his own terms. His work included human-computer interfaces, using sensors to do real-time captures of gestures so computers could respond to human movements. He did physical modeling of the voice, capturing the physics of nature in digital form. And he created instruments. Around 1988, while still at Stanford, Cook built a MIDI trumpet for budding jazz legend Wynton Marsalis, alongside Professor Dexter Morrill of Colgate.
Perry Cook:
We took it to New York, went to his apartment. And he tried it out and he loved it. And the first thing he said, which makes me cringe still, is "Wow, I could fire the band," because basically you can control everything from your trumpet, because it had sensors on it and a computer to augment every gesture you did. But he decided that computer music was the realm of the predominantly white European compositional world. And he had just won the Grammy that year, two Grammys, one for jazz and one for classical. And he had made his decision to not do classical music anymore. He was going to emphasize jazz because that's the indigenous African American art form. And so he counted computer music as classical music. And so he said, "This is great, but I'm not going to play classical music anymore."
Aaron Nathans:
In 1996, Princeton hired Cook away from Stanford. Arriving in the deep snows of January that year, Cook became the first-ever formal cross appointment between engineering and music at Princeton. He set up his Sound Lab in the Computer Science building. Cook brought east with him his research into the use of sensors to make and shape sound. At the time, there were no iPhones with their tilt sensors. Cars had just begun to use acceleration sensors to deploy airbags. Microchips that could sense acceleration, pressure, force, and tilt were just becoming available. Cook figured that he could repurpose those chips to musical effect, building controllers to make digital musical instruments more expressive. By this point, the age of the personal computer was well underway. Intel had begun developing high-powered microprocessors. And in accordance with Moore's Law, computing capacity really began to snowball. Once consumers began to buy computers, it became big business. Floppy disk drives gave way to CD readers. So by the turn of the millennium, computing had become decentralized on Princeton's campus.
Aaron Nathans:
In the 1960s, there was a single centralized campus computer center in the Engineering Quadrangle for everyone to use. But by the 1990s, composers were using a set of NeXT brand computers installed in a lab at the Woolworth Building, with built-in digital-to-analog converters, so the musicians could hear what they were creating in real time. Dan Trueman says he remembered the tail end of that period, the camaraderie of the composers using those machines, side by side, hearing each other's works. But soon after, their work moved onto PCs and laptops, allowing the composers to take their work home. Suddenly, there was no need for a lab. So what the composers lost in camaraderie, they gained in raw computing power. And within a year of Cook's arrival at Princeton, he was working with Dan Trueman, who was a graduate student at the time, to put that computing power to creative use.
Aaron Nathans:
Daniel Trueman was born in 1968 on Long Island. He started playing the violin at age four, and later fell in love with the Norwegian Hardanger fiddle. He got his bachelor's degree in physics from Carleton College, then pivoted to music, receiving his master's from the University of Cincinnati College-Conservatory of Music. Cook and Trueman saw the further possibilities for imbuing computers with human expression. That really started with the BOSSA, the digital device played with a violin bow that we talked about at the start of this episode.
Dan Trueman:
That speaker sat in my lap, literally sits in my lap, and then there's a kind of an abstraction of a violin fingerboard with a sensor on it. It's literally an ebony fingerboard that's attached to it that I can finger. And then there's a violin bow that has sensors on it. And I literally bow the speaker, and then the sound... The sensors go to the computer, and buh, buh, buh, stuff happens and the sound comes out the speaker in my lap. So it's this kind of almost this weird cello. So to me, this was awesome, and it's still awesome actually. It put the sound in my hands and in my body. When I play the fiddle, sound is coming through my jaw, right? It's coming into my body directly. It's not just coming through my ears. And with that, it's not coming through my jaw, but it's coming through my legs. I can feel the whole thing vibrating.
Perry Cook:
He poses an interesting question. His thesis asks, what if we had an ensemble of such instruments? And so he basically invented the laptop orchestra with that question. And so when we hired him back, after he went away for a while to Colgate to be on their music faculty, one of the things he did when he arrived was say, "Remember that thing in my thesis where I pose, what if we had an ensemble of these? Let's do that." And so basically, the laptop orchestra was born.
Aaron Nathans:
The Princeton Laptop Orchestra. By now, it's pretty well known, and it's been imitated countless times. But back then, it was just an idea. By then, Dan Trueman had more experience building digital instruments. On a 2001 album created with fellow Princeton music grad Curtis Bahn, under the name Interface Duo, the pair conjured up a flurry of cutting-edge digital sound. <<MUSIC>> On his 2004 album of chamber music called Machine Language, Trueman uses computer applications to transform instrumental sound. This is “Traps,” played on electric violin, run through a laptop. You can hear the emotion in this piece. It was written in the opening days of the Iraq War in 2003. <<MUSIC>> Before they could realize their vision for a full laptop orchestra, they needed funding. And they had some from Cook's lab, the computer science department, and the music department now that Trueman was on the faculty. But in order to create the size of ensemble they had in mind, eight to 10 players, they needed additional funding from the School of Engineering and Applied Science.
Dan Trueman:
I wanted to get funding to build six of these spherical speakers so that I could have a small class of maybe a dozen students where we could have groups of six playing with these spherical speakers, and we could just figure out how to make music together, and have the students writing code and building instruments and so on, but having this idea of the sound being localized near the player and being able to play and hear where the sound's coming from. That was one of the problems of electronic music. We'd all plug into a PA, and you'd have no idea who's making what sound. Whereas this is... You can actually hear where the sound is coming from. You can hear and see where it's coming from. And so I wanted to make this class, I need a little bit of funding to do it. And so Perry and I went to Maria Klawe, who was the dean of SEAS at the time. And I was an early junior faculty member at the time, and so I made this presentation to Maria saying, "Oh, I'd love to do this.”
Dan Trueman:
We need such and such money in order to buy and fabricate these things. And Maria listens to us. And Maria was wonderful. She was just such, I feel like, a real visionary. And really, she just wanted to find connections and build connections, within SEAS and outside of SEAS. And she said, "This is great. I would love to fund your project, but I will only fund it if you make it four times as big, and I will give you four times as much money as you asked for." Maybe not a uniquely Princeton thing, but it felt like... I was like, "Oh, I see." And so I, of course, said yes to that, but it, of course, also ended up turning into a much bigger project than I ever intended. It consumed me for a number of years.
Perry Cook:
Originally, we had the idea of eight or 10. So we went to her saying, "Here's what we want to do. And here's how it's going to work, and here's our vision for it." And she said, "I will give you money only if you make it 15." So I believe she pinned the number at 15, where we were only in a single-digit sort of range before that. And so that's my recollection.
Aaron Nathans:
And so rather than a small beta band, for lack of a better term, the Princeton Laptop Orchestra started big. They had enough money to buy 15 sets of laptops, hemispherical speakers, and racks with sensor capture and processing, as well as other equipment. Trueman and Cook offered the first Princeton Laptop Orchestra class in the fall of 2005 as a first-year seminar. The first two graduate teaching assistants were Ge Wang, a computer science student with musical experience, and Scott Smallwood, a graduate composer with computer experience. The laptop they would use would be the Apple 12-inch, 1.5 gigahertz PowerBook G4. Each player in the laptop orchestra sat on a meditation pillow, and either held the laptop literally on their lap or placed it on a rack to the right, and instead held some interface to the laptop. The ball-shaped set of speakers sat directly in front of each performer. So each instrument was entirely self-contained.
Ge Wang:
This speaker array that's part of BOSSA, it's this kind of... It looks like something that lands on Mars, this kind of multi-sided hemisphere, sometimes this sphere, but it's got multiple speakers in it. And the reason that's done is part of the series of research that Dan and Perry were doing in the late nineties, trying to look at how instruments emanate sound physically, and how to capture that and work with that in computer music instruments. So basically, because you have all these speakers pointing out in a spherical or hemispherical pattern, it actually approximates a point source. If I were to play a ukulele in the room with you, without amplification, the sound naturally comes from the ukulele and not from speakers around the room.
Ge Wang:
So most naturally, we're hearing the sound from the object, the physical object making the sound. That's kind of the BOSSA, and that's kind of one of the fundamental tenets of the laptop orchestra: not only do we have a group of people making music together with computers, we're also going to be very mindful about how sound is actually propagated. So each computer and each person, each station, had a hemispherical speaker array, a six-channel speaker array that made sound proximal to each laptop. Taken in the aggregate, it really is meant to create the sense that there is a sound stage, that there are different points of sound. I think Dan's term for this is electronic chamber music. It has the sonic intimacy of a chamber ensemble. And so you're fairly close, and you can basically pick out, in space, where each instrument is coming from.
Aaron Nathans:
There's a 2005 video of the first time that the Princeton Laptop Orchestra made music together. Taking place in that first-year seminar, the sounds are made from recordings of students saying each letter, processed through a comb filter and set in three-part harmony. <<MUSIC>> There's also a video from that year of another early Princeton Laptop Orchestra creation. “On the Floor” by Scott Smallwood has the class recreate the soothing sound of an Atlantic City casino, with a cascade of C-major slot machine sounds echoing from around the auditorium. Each player replays a slot machine game on their laptop until they're out of virtual money. And as their money dwindles, the sounds become more and more abstract.
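A quick aside on that comb filter, for the curious: it sums a signal with a delayed copy of itself, reinforcing frequencies whose periods line up with the delay and cancelling others, which is what gives a recorded voice that pitched, robotic ring. Here is a minimal sketch of the feedforward form in Python (illustrative only; the PLOrk pieces themselves were built with the ensemble's own tools, not this code):

```python
import numpy as np

def feedforward_comb(x, delay_samples, gain=0.7):
    """Feedforward comb filter: y[n] = x[n] + gain * x[n - delay].

    Adding a delayed copy of the signal to itself carves evenly
    spaced peaks and notches into the spectrum.
    """
    y = x.astype(float)  # work on a copy, leave the input untouched
    y[delay_samples:] += gain * x[:-delay_samples]
    return y

# A delay of sample_rate / 440 samples puts the first spectral
# peak near 440 Hz, lending even plain noise a pitched quality.
sample_rate = 44100
noise = np.random.randn(sample_rate)  # one second of noise
out = feedforward_comb(noise, sample_rate // 440)
```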
Aaron Nathans:
The next year, the Laptop Orchestra was offered as a junior- or senior-level course in the music department, cross-listed in computer science and engineering so engineering students could take it for their degree. The laptop orchestra started playing gigs around campus, giving its first concert in January 2006. For those of you familiar with Princeton's campus, they did a show that year encircling the balcony of the Chancellor Green rotunda. They opened the new Genomics Center when it was built. They would play at the EQuad. And then they took it on the road, taking the orchestra to Dartmouth.
Perry Cook:
At that time, laptops still had hard drives with spinning discs in them, so there were tilt sensors, which were primarily there to pause the hard drive in case the laptop got bumped or dropped, but we used them to turn the laptop itself into a controller. So if you pick up the laptop and lean left and lean right and lean forward, it is a sensor. It's a gestural controller, just like we were building by hand, and eventually the iPhone. There are instruments, Smule instruments. Some of the very early ones responded to tilt. So you could do a whammy bar on the guitar model by leaning left as you strummed on the phone. And so that's when the Laptop Orchestra really kicked in.
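To make that concrete: the sudden-motion sensor reports acceleration, lean becomes a continuous control signal, and that signal gets mapped onto a synthesis parameter. Below is a minimal Python sketch of the whammy-bar mapping Cook describes, with a hypothetical read_tilt() standing in for the platform-specific sensor API (the real APIs varied by machine and era):

```python
def read_tilt():
    """Hypothetical stand-in for the laptop's tilt sensor; assume it
    returns the left/right lean in degrees, negative for left."""
    raise NotImplementedError("platform-specific sensor read goes here")

def tilt_to_pitch_bend(lean_degrees, max_bend_semitones=2.0):
    """Map lean to a whammy-bar-style pitch bend: flat means no bend,
    a 45-degree lean means a full bend in that direction."""
    clamped = max(-45.0, min(45.0, lean_degrees))
    return (clamped / 45.0) * max_bend_semitones

# In a performance loop, the bend would scale whatever note is
# sounding, e.g. frequency * 2 ** (bend_in_semitones / 12).
```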
Aaron Nathans:
Additional funding from the MacArthur Digital Learning Initiative later helped them triple the project from 15 to 45 units. In 2007, Wang left Princeton to join the faculty at Stanford, and there, he founded the second laptop orchestra, called, appropriately, the Stanford Laptop Orchestra, or SLOrk. Other universities followed suit, and soon there were dozens of laptop orchestras. A 2012 symposium on laptop ensembles and orchestras in Baton Rouge, Louisiana drew 30 such ensembles. Trueman said the laptop orchestra was born of the collaborative vibe at Princeton. In a way, the electronic music pioneered by Milton Babbitt, which was born of the need to avoid the cost of hiring an orchestra and was made in isolation, had given way to electronic music as a group experience.
Dan Trueman:
It was a product of us not wanting them separate. It was a product of us wanting to explore all of these new tools and techniques and to build them, but in the context of actually making music with each other, with other musicians. And that's... If you make a center and it's separate, that's just not as likely to happen, because your priorities are now more on research, more on research in the sense of, "Oh, I'm going to develop an algorithm. I'm going to publish it," as opposed to, "Oh, I'm going to write an algorithm, and I'm going to go jam with it. And I'm going to figure out how I can make a track with this." And with us, with Perry here, we were having all those conversations at once. Perry was writing papers about new synthesis techniques at the same time that he, and we, were using those techniques with the laptop orchestra to make new pieces and to play with people who... Some of them had no idea what was going on under the hood.
Aaron Nathans:
Here's Seth Cluett of Columbia University, who was an early co-director of the laptop orchestra at Princeton during his time as a grad student here.
Seth Cluett:
…which, like the RCA synthesizer, was a collaboration between engineers and musicians, to think about what music could do to push the boundaries of technology, and to think about what technology can do to push the limits of music.
Jeff Snyder:
The worst laptop orchestra piece would be something where, well, it could be played by one computer if somebody pushed the space bar, but for some reason we have a bunch of other people sitting on stage letting the computer do its job, right? Which you could do. I mean, we could just... Everybody could be checking their email. You wouldn't know. That's the issue with computer music. But I think the more that we move away from that and make things that really need the player and take advantage of what humans can do that computers can't, while also taking advantage of what computers can do that humans can't, we can get a really exciting mix in there.
Ge Wang:
The laptop orchestra was, and maybe still is, such a new thing: a new way to make music, a new way to think about instrument design, a new way to think about writing for this medium, and a new way to teach. It's like a whole new classroom. So I was... Yeah, it was quite the experience.
Aaron Nathans:
Trueman stepped away from day-to-day administration of the laptop orchestra several years ago. It's now run by his colleague, music senior lecturer Jeff Snyder, who has introduced new electronic instruments of his own invention. There's now an analog-digital brass instrumental group as well. The most recent performance, Mirror Displays, had a strong visual component. The program included electronically-processed tap dance, complete with tap-controlled lights, a piece of music that was also a video game, and a piece where the audience's cell phones were part of the musical soundscape.
Jeff Snyder:
In a way, it's the biggest... It's a giant move away from the Babbitt model. So Babbitt's thing was, "Okay, I'm one person in a studio trying to make my creative vision perfect by having the computer be the performer." And that was a model that tons of other people followed over the years. And when you go see a techno electronic dance music artist, there's one person, or two people on stage at most, right? And they're playing back music that's mostly prerecorded with a little bit of control, because the computer can do so much. It's like, I don't know. You could play electronic music by pushing the space bar with iTunes open. And I don't know, it's electronic music. The computer made it. But do you do anything? Not really. You can't actually... You can't fail. You can't... It's like you're not really doing much live, right? And the idea of laptop orchestra was, "Well, what if we wanted to make live electronic music together as more than one or two people?"
Aaron Nathans:
But in order to make the orchestra go, they would need more than just laptops, omnidirectional speakers, and groovy digital instruments. They'd also need software. And in true Princeton fashion, rather than rely on someone else to create that software, the Princeton crew made it themselves. That is what we'll talk about after the break.
Aaron Nathans:
If you're enjoying this podcast, you might want to check out our other podcast, which also deals with technology. “Cookies: Tech Security & Privacy” explores the many ways technology finds its way into our lives, in ways we notice, and in ways we might not. If you're looking to shore up the security of your personal data and communication, you'll find some great tips from some of the best-informed people in the business. You can find Cookies in your favorite podcast app, or on our website, engineering.princeton.edu. That's engineering.princeton.edu.
Aaron Nathans:
We're halfway through the fifth episode of this podcast, “Composers & Computers.” Now, I know I've been telling you there are five episodes of this podcast. Well, as it turns out, there's just too much good stuff to fit into five episodes, so we'll do another. We can call it an epilogue, if you will, to discuss the bigger meaning of everything we've been discussing up to this point. And we'll talk about some of the present-day visual-art collaborations with Princeton engineers. But let's not get ahead of ourselves. Here's the second half of part five of Composers and Computers.
Aaron Nathans:
Ge Wang was born in 1977 in Beijing. He grew up listening to cassettes of classical music at his grandparents' home. He moved to the United States at age nine. And for his 13th birthday, he got an electric guitar, but he also loved video games and the sounds that they made. He received his undergraduate degree at Duke, and came to Princeton to pursue a doctorate in computer science. In one of his books, he described rock music as his "gateway drug into music making," and his time at Duke as his passage into programming. "Although I couldn't quite articulate it at the time, I was drawn to the elegance of certain features in programming languages, and aspired to create things, programmable software things, that empower people to make music, but in a way that was aesthetically nuanced and fun. I wanted to rock, and help others rock, with the computer."
Aaron Nathans:
He also wrote that people tend to do their best work when they follow their true interests. So he was drawn to create the programming language ChucK because he wanted such a composer-friendly language to exist. An online community sprang up in 2003 and 2004 to help work out the bugs in the program, but it really started to take off when he got involved with the laptop orchestra at its inception, and it was decided that ChucK would be its primary programming and teaching tool. In a research paper, Wang described ChucK as "an ongoing open-source research experiment in designing a computer music language completely from the ground up." And so, as students in the laptop orchestra were motivated to make music, they were also motivated to learn programming so they could make full use of this tool. They also learned how to use another, more established language, Max/MSP, named in honor of Max Mathews, which used some of the original concepts Mathews built into his MUSIC-N series of sound synthesis software more than 60 years ago.
Ge Wang:
For me, this was like the first real testing ground for ChucK, because Dan and Perry were like, "We're basically going to use, as our primary teaching tools, both of these programming languages, Max/MSP and ChucK." So we're teaching students both of these languages, which is a really nice pedagogical thing, by the way, because then you can compare and contrast these tools, and you can also get a sense of the importance of trying to choose the right tool for the task at hand. In any case, as a TA, I was helping students with programming, sound design, live performance, instrument design. While at the same time, I was also maintaining ChucK, as I realized, "Oh, uh-oh. This part isn't..."
Ge Wang:
This is where the hole filling comes in. This is where you realize maybe the language wasn't designed to fill a hole. But then when people start using it, they start running into holes or potholes. And you're like, "Oh man, whoops, I'm so sorry. Let me go fix that for you," or "Let me add this feature." So that was really the first real-world context in which still the fledgling ChucK, as a language, was really being used.
Aaron Nathans:
Dozens of laptop orchestras have now produced hundreds of musical pieces using ChucK. Here's Continuum, which Wang wrote with Madeline Huberth. ChucK helped power his next creation, which came amid the advent of the app-based smartphone revolution of the mid-2000s. Designed for the iPhone, it was named the Ocarina, after a real handheld instrument that you blow into. It sounds like a pan flute.
Ge Wang:
The way that the Ocarina works is you blow into the microphone at the bottom of the phone. That sound signal goes to a ChucK program that's running within the app, which tracks the strength of how hard you blow into the microphone. That's then used to modulate the amplitude of the Ocarina sound that's generated within the app. Tilting the phone controls vibrato, and then the multi-touch is used to control the pitch. There are these four onscreen finger holes, and the different combinations produce different pitches. If you'd like, can I play you a bit of the Ocarina? Let's see. Hopefully it will work. So here, I'm just going to play the scales. I'm blowing into the phone, but also using multi-touch on the screen to control pitch. Vibrato is controlled by tilt, so I have the phone flat, and it sounds like this. As I tilt it down, it adds more and more vibrato. Excuse me. <<MUSIC>>
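Wang's description amounts to a clean control recipe: breath strength sets loudness, tilt sets vibrato depth, and the finger-hole combination picks the pitch. The shipping app runs an actual ChucK program; what follows is only a Python sketch of that mapping, with the sensor values passed in as plain numbers rather than read from real hardware:

```python
import math

def ocarina_control(breath, tilt, base_freq_hz, t):
    """One control-rate step of an Ocarina-like mapping (a sketch,
    not Smule's code).

    breath:       0..1 strength of blowing into the microphone
    tilt:         0..1 how far the phone is tilted from flat
    base_freq_hz: pitch chosen by the onscreen finger holes
    t:            elapsed time in seconds
    """
    vibrato_rate_hz = 5.0            # gentle, flute-like wobble
    vibrato_depth = 0.03 * tilt      # more tilt, deeper vibrato
    wobble = vibrato_depth * math.sin(2 * math.pi * vibrato_rate_hz * t)
    freq = base_freq_hz * (1.0 + wobble)  # multi-touch sets base pitch
    amp = breath                          # blow harder, play louder
    return freq, amp

# Example: mid-strength breath, phone tilted halfway, fingering A5.
print(ocarina_control(breath=0.5, tilt=0.5, base_freq_hz=880.0, t=0.1))
```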
Aaron Nathans:
The Ocarina was a product of the company Wang had co-founded with Jeff Smith in 2008. Cook himself was involved with the founding of the company too. Spencer Salazar, a 2006 computer science grad from Princeton, was a software engineer there. Rebecca Fiebrink, one of Wang's fellow doctoral computer science students at Princeton, would work there as well. They called it Smule, a shortening of “sonic mule,” an Isaac Asimov reference. Smule became quite successful, and it was profiled in the New York Times Magazine. It offered a variety of apps, all aimed at the creation of musical tools that anyone could use. Wang worked two jobs, his academic position at Stanford and his spinoff job at Smule, until 2013, when he stepped down from his role at Smule to focus on his job at Stanford. Wang is now an associate professor at Stanford University, at its computer music research center, CCRMA.
Aaron Nathans:
In 2011, Perry Cook retired from Princeton at the age of 55 to join his wife, who had been living in California, in a shared home in Oregon, as well as to write, sing, record, and, in his own words, farm sunlight. He continued his work at Smule and other private ventures started by his students, as well as serving as a visiting professor or artist at various institutions, including Stanford and the California Institute of the Arts. That year, another one of Cook's students was hired to fill the computer music vacuum he left in the Princeton computer science department. As a child growing up in central Ohio, Rebecca Fiebrink had learned piano using the Suzuki method from the age of three, and flute starting in middle school. She got her undergraduate degree from Ohio State, studying engineering and music. And during that time, she learned interesting ways to combine computers and music, something she pursued in a master's program at McGill. She was attracted to Princeton because not only were there people here passionate about computers and their applications to music, but there was also a growing strength in machine learning as well.
Rebecca Fiebrink:
PLOrk, the Princeton Laptop Orchestra, had just started I think the previous year. So the idea of having this sort of lab, this experimental lab of students and faculty all working together to, again, kind of explore these questions of what are computers good for in musical performance, and what happens if we do this at a different scale and with a different perspective than what's really been done before. And one of the things that I really liked was, there's just so many applications at the intersection of computer science and music.
Rebecca Fiebrink:
Yeah, part of it is making weird sounds and synthesizing new sounds that you can use in experimental performance, and I like that, but that was one piece of a much bigger puzzle about the work that we can do, as researchers, to understand human perception, the work that we can do to deepen our understanding of performance practices, the work that we can do to help people learn music or share music with each other or find music. And so in 2004, people were really starting to think seriously about applications of machine learning to music. I would say that's the place that I still largely work today in 2022.
Ge Wang:
She is the world's expert at the intersection of human-computer interaction, artificial intelligence, and computer music. And I think her work is extremely exciting when it comes to AI, because she is always asking this question, kind of this humanistic question of, what is, for example, the right amount of human interaction with computer automation? I think that's probably one of the most important questions we need to answer for ourselves, not just in music, but just in life, in society.
Aaron Nathans:
She had recently received her doctoral degree at Princeton. Cook was her advisor. Her work at the time was about using machine learning techniques to allow composers to build new expressive instruments. Active in the laptop orchestra, Fiebrink became its co-director, first with Seth Cluett, then with Dan Trueman. Fiebrink created her own music creation software, which she dubbed the Wekinator. This is composer and sound artist Laetitia Sonami, using the Wekinator to create a circular instrument with springs that she manipulates by hand. <<SOUND>>
Rebecca Fiebrink:
A piece of software that I started working on during my PhD, and it started with this question of what might creators do with machine learning. And I had hunches about that. In particular, I was coming from a music information retrieval background from my master’s, where we were very focused on audio labeling. And that has applications in real-time performance when you think about score following or collaborative improvisation, where you want the computer to listen to you and follow along or play along. And so that's kind of what I had in mind. And when I started working with Dan Trueman and a whole bunch of the Ph.D. student composers in the music department, that just wasn't the thing that they grabbed onto. They immediately were much more excited about its potential to build new musical instruments and new gestural controllers. And so it wasn't necessarily that people were sitting around saying, "Oh, this building of gestural controllers is hard. We should make a thing to make it easier." But people were sitting around, building a lot of new instruments.
Rebecca Fiebrink:
They knew intimately what is fun about it, what is satisfying about it, what is hard about it. And then I walked into this space with a thing, and said, "Hey, what's this thing good for?" And they were able to recognize, oh yeah, that can fit into my practice in this particular way.
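The workflow Fiebrink describes is supervised learning in miniature: the performer demonstrates a handful of gesture-to-sound pairings, a model is fit to them, and the model then interpolates during performance. Here is a minimal sketch of that train-then-play loop using scikit-learn (the actual Wekinator is its own application, which talks to instruments over Open Sound Control; the numbers here are made up):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Demonstrations: each gesture (two made-up sensor readings) is
# paired with the synth settings the performer wants for it.
gestures = np.array([[0.1, 0.2],
                     [0.9, 0.1],
                     [0.5, 0.8]])
synth_params = np.array([[220.0, 0.2],   # frequency (Hz), gain
                         [880.0, 0.9],
                         [440.0, 0.5]])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(gestures, synth_params)

# Performance time: a new gesture is mapped to interpolated synth
# controls, which would then be sent on to the instrument.
print(model.predict([[0.7, 0.4]]))
```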
Aaron Nathans:
With music as her foundation, Fiebrink has broadened her scope of research in recent years into video games, virtual reality experiences, and working with visual artists and historians to make tools appropriate for museums. She still does some music-specific work as well. She does not, however, do it at Princeton. In 2013, Fiebrink left the faculty at Princeton to join the University of the Arts London, where she is a professor today. Dan Trueman and Jeff Snyder have continued to create innovative new digital instruments in the music department. After Fiebrink left, they no longer had a counterpart at the engineering school. A saga of engineering-music collaboration that began the moment Ken Steiglitz wandered into the EQuad in 1963 and bumped into Godfrey Winham and Jim Randall ended, in a sense, after Fiebrink departed for England and the computer science department decided not to replace her with a computer music engineer.
Perry Cook:
My hire was experimental for the university, being the only joint appointment between music and engineering in the history of Princeton. And it worked out, the things that I was able to do and publish and the students that I had and things. I got tenured. We were able to build amazing things like the laptop orchestra. In replacing that with Rebecca, that was wonderful. We were all very excited. But she was poached away by another university in jolly old England. And I think by that time, that slot was just not pinned to music or art or computer music.
Aaron Nathans:
Cook says computer science and engineering departments would benefit from having a faculty slot dedicated to music.
Perry Cook:
With multimedia audio, what it is, with the iPhone, the iPad, multimedia on the desktop and laptop, with VR and AR, even if it's for assistive technology, all of those things need audio, and they need good audio, and they need people who understand that. And so I don't even limit it to music, but I always mean music when I say audio, unless I'm writing a grant. Sometimes I just say audio, but yes, I think departments would benefit worldwide from recognizing that the application area of sound, music, audio is as important as the application area of graphics and visual art.
Aaron Nathans:
In the meantime, for the Princeton Laptop Orchestra, the beat goes on. Jeff Snyder, its present-day leader, is interested in building digital instruments that can withstand changes in computers. He builds instruments that are digital, but people who are trained in the acoustic versions can play them too. As for Dan Trueman, he spent several years working on the bitKlavier, a way to digitize keyboard playing beyond what's been done before. On the project's website, it's described as "an instrument that pushes back."
Dan Trueman:
The thing that the computer does that is categorically different in the thousands of years of history of instrument building is that it severs the body from sound. Until the computer came along, the body was always connected to sound in some kind of physical acoustic way. And even with analog synthesizers, you've got knobs that are connected to oscillators that are actually doing what... that are constrained and can only do certain things. With a computer, I can get data from a keyboard or from an accelerometer or whatever game controller. It's just data, and it can be whatever. So we have to invent that connection. And that is... We are still very much at the beginning of that process.
Ge Wang:
And some could say that's what art is. It's kind of... It's our earnest effort to try and mostly fail to try to understand ourselves. Some could say that's what art is. And in that sense, the computer and computer music is this very unique tool, partly because it's maybe the first tool we've ever had, well, that's programmable, so which means you can make more tools with this tool, right? And it's not an end point. It's kind of a beginning and kind of a vehicle for new tools, that lead to more tools, or more art or more whatever.
Dan Trueman:
I think this border, it's a fungible border, between engineering and music making has always been incredibly rich and productive, where one is inspiring the other, one is enabling the other. That is very much continuing. And I do feel like Princeton has been really good in that regard over the decades.
Aaron Nathans:
Trueman says that the story of computer music at Princeton has been the story of people inventing algorithms and software for the benefit of their own creative process, and, in the process, creating something that the wider community wants to use.
Dan Trueman:
There are tools out there that are pre-made. Those shape the kind of music that we make. And that's great. And in commercial music, that world, they use these tools, for the most part, but a big part of this is wanting to be able to make our own tools and make them, maybe not in our own image, but sort of reflecting our own priorities about what our musical interests are, and to be able to build a workbench that is rich and variegated and has a variety of things on it. I work with ChucK, I work with Max, I work with Pro Tools and Logic and commercial tools. At the same time, I'm working with my fiddles, my hundred-year-old fiddles hanging on the wall, and they're all part of the same jumble that is me. And I love commercial tools, and we want to use off-the-shelf tools when they're right for what we want to do. We also want to recognize when those tools are framing how we... They're framing our imagination.
Dan Trueman:
We're talking about any sound you can imagine. It's like, well, yeah, what can I imagine with Logic, or with such and such a plugin or whatever. That sets the frame. If you want to push the boundaries of that, you’ve got to dig in and make your own things.
Aaron Nathans:
In our epilogue episode, we'll tie things together. We'll talk about the entrepreneurial spirit that undergirded the six-decade collaboration between engineers and musicians at Princeton. We'll look at how that spirit of interdisciplinary exploration between engineers and artists at Princeton has kept going, branching out into different media well beyond just music. And we'll probe a question I've wondered a lot about: for such a great story, why has so much of this history been so hidden?
Aaron Nathans:
This has been Composers and Computers, a production of Princeton University's School of Engineering and Applied Science. I'm Aaron Nathans, your host and producer of this podcast. I conducted all the interviews. Our podcast assistant is Mirabelle Weinbach. Thanks to Dan Kearns for helping us out with audio engineering. Thanks to Dan Gallagher and the folks at the Mendel Music Library for collecting music for this podcast. Graphics are by Ashley Butera. Steve Schultz is the director of communications at Princeton Engineering. Thanks to Scott Lyon and C. Nathans. This podcast is available on iTunes, Spotify, Google Podcasts, Stitcher, and other platforms. Show notes, including a listing of the music heard on this episode, sources, and an audio recording of this podcast, are available at our website, engineering.princeton.edu. If you get a chance, please leave a review. It helps.
C. Nathans:
The views expressed on this podcast do not necessarily reflect those of Princeton University. Our next episode should be in your feed soon. Talk with you soon. Peace.
Aaron Nathans:
Thanks, buddy.