Episode Transcript
Aaron Nathans:
When it comes to making music today, computers are ever present. Chances are that the device you're playing this podcast on right now doubles as a digital music synthesizer and recording studio. But many of the amazing sounds that have been made with computers are possible because someone once had a sound in mind and couldn't make it with the tools that already existed. So rather than subjugate their ideas to the existing tools, they made new tools. And a lot of those tools were made at Princeton.
Aaron Nathans:
This is the story of a crew of young, math-loving classical music composers, and what happened when they learned that a computer center was opening in the brand new engineering quadrangle at Princeton at the end of 1962. They had the idea, unthinkable at the time, that this hulking, room-sized mainframe had the capacity to not just compute, but to surprise and delight. This is the story of a collaborative journey of music-making. Princeton engineers have helped musicians get the sounds out of their minds and into their ears by helping them build the digital tools they need. Together, they have changed the sound of music.
Aaron Nathans:
And just when you think it's all been done, Princeton computer musicians, faculty and alumni, say they're just getting started tackling the most difficult challenge of all: making our digital music tools more human.
Aaron Nathans:
From the School of Engineering and Applied Science at Princeton University, this is Composers and Computers, a podcast about the amazing things that can happen when artists and engineers collaborate. I'm Aaron Nathans. <<THEME MUSIC>>
Aaron Nathans:
Part one, “Serial... ism.” To understand what drew these composers to the computer center and what they were initially trying to accomplish with that beast of a machine, you have to know their mentor and what made him tick. Meet Milton Babbitt. Upon the centennial of his birth in 2016, the New York Times wrote, "This composer's name continues to strike fear into the hearts of audiences." Yes, the amazing, endless creative possibilities of music made by a computer have their roots in some of the strictest, most methodical, often challenging-to-listen-to music ever made. Babbitt, who was a professor in the music department at Princeton, is perhaps best known, quite unfairly, for a headline an editor put on one of his journal pieces: "Who cares if you listen?" Babbitt cared a lot about his preferred style of music, serialism, 12-tone music. But the editor had made his point. This music was not exactly easy on the ears.
Aaron Nathans:
Unless you're a music student steeped in theory, what you're about to hear may sound like a mad jumble. And this is a good time for me to beg your indulgence, because a lot of the music on this podcast may not be your cup of tea, but I hope you'll agree it's fascinating. So please put your thinking brain on as we listen to a short clip of Milton Babbitt's 1964 piece, “Ensembles for Synthesizer”... <<MUSIC>>
Aaron Nathans:
First, let's talk about what you just heard, and then we'll talk about the machine that he composed it on. This music might have rubbed you the wrong way because most music in the Western world tends to be tonal. It starts on one note and it finishes there, too... Each octave contains seven notes, but there are more than seven keys within an octave: 12, to be exact. But in a Western scale, we only play seven of them.
Aaron Nathans:
The kind of music that Milton Babbitt stood for, serialism, is atonal. It emphasizes mathematical patterns. It's called 12-tone music because it uses all 12 tones... Though not necessarily in that order. But all 12 keys must be played before you can repeat one. Serialism has a lot of rules, but within those rules, there are a lot of possibilities... 479 million possibilities, more or less, because with more keys, there are more combinations. And there are more combinations than a mere mortal could fathom.
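The arithmetic behind that "479 million" figure is just a permutation count: a tone row orders all 12 chromatic pitches, each used exactly once, so there are 12 factorial possible rows. A quick sketch in Python:

```python
import math

# A serial "tone row" is an ordering of all 12 chromatic pitches,
# each used exactly once, so the number of possible rows is 12!.
rows = math.factorial(12)
print(rows)  # 479001600 -- the "479 million possibilities, more or less"
```

(Many of those rows are related to one another by the transformations serial composers used, but as raw orderings, that's the count.)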
Aaron Nathans:
It's not surprising that Milton Babbitt was drawn to music with a mathematical bent. Babbitt, who was born in 1918, went to college at the University of Pennsylvania, not for music, but instead for mathematics. He later studied music at New York University and was attracted to the work of the Viennese composer Arnold Schoenberg, who invented serialism and whose name became associated with atonal music... <<MUSIC>>
Aaron Nathans:
This was a relatively new phenomenon, austere music for an austere time. Schoenberg, who was Jewish, fled Nazi Germany in 1933 and became an American citizen in 1941. Here's Seth Cluett, who received his Ph.D. from Princeton in composition in 2012. He's now the assistant director of the Computer Music Center at Columbia University.
Seth Cluett:
Serialism comes along and it's a bit of a rupture because Schoenberg was basically like, the hierarchy of the Western scale where it's like, do, re, mi, fa, so, la, ti, do and the idea that when you get to “do,” you're home, seemed a bit hierarchical to him. So the idea was okay, "Well, what can I do with music if the problem is one of, I have to use every note in the 12 notes available to me in the chromatic keyboard on the piano before I reuse a note?" So to the modern ear, that sounds a little bit like musical Sudoku, but the result is that composers solved problems with musical material, and this was just a new way to solve those problems.
Aaron Nathans:
Jeff Snyder is the director of electronic music at the Princeton University Department of Music.
Jeff Snyder:
Part of the idea is that old classical music structures, you have a home key, the tonic; you have what's called the dominant, which is the sort of opposing chord that wants to lead back to the tonic. And then all the other chords have their own role in this hierarchy. So it's like we have the main chord and then these other chords and the main note and these other notes...
Jeff Snyder:
So part of serialism was, "Well, what if we took that away and had this more sort of equality-based idea of the notes of the scale?"... So instead, okay, well in tonality you have 12 notes of the scale, but then, "Oh, let's get rid of five of them. Now we only have seven." Each of those has their own importance and they're rated, right? So instead with serialism, let's say they're all good. They all actually have to be used equally. So then how do you enforce that? The way he did it was he put them in an order, and then you can't use one of the notes again until you've used all the other ones.
Aaron Nathans:
But Babbitt went further. Beyond new configurations of notes, he also wanted to organize rhythmic patterns in a new way and patterns of dynamics as well. He called it total serialism... Now you're really pushing the envelope of what a human being, even a highly-trained concert pianist, is capable of doing.
Jeff Snyder:
It's really hard for people to play it because suddenly it sort of doesn't make any traditional sense with how the rhythms are structured and oh, now the rhythms are like soup. There's a really short note and then, oh, now it's supposed to be a septuplet followed by what, you know? They get really complicated and the players weren't really able to execute it.
Aaron Nathans:
Dan Trueman is a professor of music at Princeton.
Dan Trueman:
At Princeton, I have the sense of sort of mid-20th century in the music world, there being a kind of science envy that I think was reflected in the work of Milton and some of his compatriots. This sense that Einstein was here and all these incredible people are here and they're figuring out the fundamentals of how the world works and changing the world, and we should be able to do that with music as well. Music does tend to, in certain ways, lend itself to being objectified in mathematical ways, even though in my view, whenever you do that, you miss 99% of what music is about.
Jeff Snyder:
Babbitt was part of this sort of mid-century modernism movement of, you can think of the same thing with visual art and move toward abstraction. There's this idea that they wanted to throw away, especially post-World War II. They were kind of like, okay, the old world, the old way of doing things got us into a lot of trouble. Let's try something totally new. So they were really trying to say, "What can we do if we're not using the materials that have been handed down as the acceptable ways to make music?" So it was a lot of exploring and saying, "Okay, well, if I can't use a major scale, what can I do? If I can't use the major and minor scales and all this traditional theory."
Jeff Snyder:
So Babbitt's one of the people who's exploring those possibilities and saying, "Okay, well, what if instead of organizing based on these traditional forms and dance forms that are coming from Western classical music, what if we organized it based on random number generation or based on some kind of we just pick a series of numbers and we repeat and turn it upside down, move it backwards, that kind of stuff?" He was curious whether that would be something that we could perceive as listeners and whether it would have an interesting effect on us.
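The row operations Snyder mentions, playing a series backwards ("move it backwards") and upside down ("turn it upside down"), are the classic retrograde and inversion transformations of serial theory. A minimal sketch in Python, using pitch classes 0 through 11; the example row is arbitrary, not a specific Babbitt row:

```python
# Classic serial transformations of a 12-tone row, with pitches
# represented as pitch classes 0-11 (0 = C, 1 = C#, and so on).

def retrograde(row):
    """Play the row backwards."""
    return row[::-1]

def inversion(row):
    """Flip each interval upside down around the first pitch."""
    first = row[0]
    return [(first - p) % 12 for p in row]

def transpose(row, n):
    """Shift every pitch up by n semitones, wrapping around the octave."""
    return [(p + n) % 12 for p in row]

row = [0, 11, 7, 8, 3, 1, 2, 10, 6, 5, 4, 9]  # an arbitrary example row
assert sorted(row) == list(range(12))          # uses all 12 tones exactly once
print(retrograde(row))
print(inversion(row))
```

Note that each transformation still uses every one of the 12 tones exactly once, which is what keeps the result a valid row under the serial rules described above.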
Aaron Nathans:
Dan Trueman says 12-tone music has more or less vanished from contemporary music.
Dan Trueman:
Which again, by the time I came here in the mid-90s was already history. There was some legacy of it and even today, there's some legacy of it, but it wasn't the way it was in the '60s and '70s, where that was what you did. That's my understanding.
Aaron Nathans:
With so much challenging music to play, Milton Babbitt was looking for a player that was up to the challenge. The thing about performers, of course, is that they are only human. Sometimes they make mistakes, but even the best of them often have a hard time executing what a composer has in mind. And yes, you have to pay them. So Babbitt thought, "What if there was a way to produce the music exactly the way that composer envisioned it, right on, every time, and you don't have to pay them?" A possibility like that was awaiting Babbitt just across town, and he would play a part in its development.
Aaron Nathans:
The Radio Corporation of America, RCA, had its laboratories in Princeton in the 1950s. New Jersey at the time had already etched an indelible mark in the field of electrical engineering, notably with Thomas Edison's invention factory in Menlo Park, as well as some of the early computer work done here at Princeton and the nearby Institute for Advanced Study. RCA audio researcher Harry Olson, an accomplished microphone creator, was building a machine that engineers there figured could produce any sound you could imagine. Studying the characteristics of each note, in 1955 Olson and audio engineer Herbert Belar built the RCA Mark I, one of the earliest analog music synthesizers.
Aaron Nathans:
The Mark I contained a bank of 12 oscillator circuits, which used electron tubes. An oscillator creates a sound wave form, much like a violin string vibrates to produce its own sound... These oscillators generated the 12 tones, there's that term again, of a musical scale. These tones could be shaped in all kinds of ways as they passed through filters and other electronic circuits.
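The 12 tones of an equal-tempered scale follow a simple rule: each semitone step multiplies the frequency by the 12th root of 2, so 12 steps exactly double it. A short sketch in Python (the `tone` helper and the A = 440 Hz reference are illustrative conventions, not details of the Mark I's circuitry):

```python
# In 12-tone equal temperament, each semitone multiplies frequency
# by 2^(1/12). Starting from A = 440 Hz, one octave's 12 tones:
SEMITONE = 2 ** (1 / 12)

def tone(n, base=440.0):
    """Frequency n semitones above the base pitch."""
    return base * SEMITONE ** n

octave = [tone(n) for n in range(12)]
print(round(octave[0]))  # 440 (A)
print(round(tone(12)))   # 880 -- one octave up doubles the frequency
```

The Mark I's oscillators produced these 12 base tones electronically; the filters and other circuits downstream then shaped their timbre.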
Aaron Nathans:
RCA built a second-generation synthesizer with twice as many oscillators and considerably more composing flexibility in 1957. This was the Mark II, and Milton Babbitt was hired by RCA as a consultant on the product. It being so expensive, Princeton and Columbia Universities decided to pool their resources to afford the synthesizer, which they bought in 1959 with the help of funding from the Rockefeller Foundation. It was housed in Upper Manhattan at Columbia at what was dubbed the Columbia-Princeton Electronic Music Center. Babbitt became the co-director of the center along with his Princeton colleague Roger Sessions, as well as Vladimir Ussachevsky and Otto Luening of Columbia.
Aaron Nathans:
Jeff Snyder points out that whether it was the RCA synthesizer or a mainframe computer, so many of the early advances in electronic music were made at universities like Princeton because they could afford the equipment. Others outside of academia may have had good ideas, but Princeton and Columbia had the resources and the connections.
Aaron Nathans:
Babbitt appreciated the Mark II for the way it could create effects like glissandi, adjusting the amplitude and duration of a sound and controlling the frequency... You could play a melodic or rhythmic series backwards or forwards, slow or fast. But the device was created as more than just a mode of creative expression. Seth Cluett.
Seth Cluett:
One of our alums refers to it as the Great Grift, but it is the idea that the RCA synthesizer could replace the expensive union musicians for commercials and for radio spots and for incidental music for television, that you could save money by not having to pay humans by creating a thing that could synthesize sound for you.
Aaron Nathans:
Milton Babbitt, however, had other considerations in mind that drew him to the device. Seth Cluett said of Babbitt...
Seth Cluett:
When RCA wanted the synthesizer and he was in the process of developing music that was more complicated than human beings could play, the idea that you could program a synthesizer to play things humans can't was of very significant interest to him and the other composers who were interested in this discipline.
Aaron Nathans:
Now it's important to note that the synthesizer was not a computer, although the RCA Mark II did have a digital component in the literal sense. That's because instructions were typed in as ones and zeros, which were punched into paper rolls as holes. These holes gave the electronics their instructions. However, despite the fact that it ran on electricity, the Mark II was largely analog. Its output went to a record lathe, which cut the sound directly to disc.
Aaron Nathans:
The Mark II at Columbia was the only one of its kind, and there was no Mark III. It weighs, yes, present tense, three tons and contains 7,000 vacuum tubes, which produced unwelcome background noise. It broke down a lot and they'd need to spend days fixing it. The device wouldn't move easily, and not surprisingly, it's still there today, though it hasn't been used much in the last 25 years. They're having it cleaned to remove the dust from the tubes in hopes that they can get sounds out of it in the future. Now the Mark II was not the first synthesizer. That distinction goes to Hugh Le Caine's Electronic Sackbut, built in 1945.
Radio Host:
The title is “The Sackbut Blues...” <<MUSIC>>
Aaron Nathans:
There was the theremin, named for the Russian scientist who created it in 1920. It had two antennas, one for volume, one for pitch, and you moved your hands to change the sound... <<MUSIC>>… You might remember it from the Beach Boys' “Good Vibrations”... <<MUSIC>>
Aaron Nathans:
The telharmonium, made by Thaddeus Cahill in 1906, was the first instrument to create sound with electricity. It used metal cogs to generate musical frequencies. Elisha Gray, who has a competing claim to Alexander Graham Bell to have invented the telephone, also created the musical telegraph in 1876. The oscillations of steel reeds were transmitted over a telephone line.
Seth Cluett:
But the RCA Mark I and Mark II represent the first musically controllable and playable synthesizers. So this is essentially the first thing that you could compose music on...
Aaron Nathans:
Jeff Snyder notes the RCA devices were the first synthesizers that allowed playback of composed music.
Jeff Snyder:
It had more in common with a player piano in a way. So you could type in on these little, these rolls, you could type in a four bit number to represent pitch, rhythm, things like that, and which octave you're in and stuff for each note. Then it would roll it. You'd type it in on this little input keyboard, and then you'd feed in the roll, and then the synthesizer would play that.
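Snyder's "four bit number to represent pitch" is easy to picture: four binary holes per field give 16 possible values, enough to label the 12 pitch classes of an octave. The Mark II's actual roll format isn't detailed here, so this is only a hypothetical sketch of the general idea:

```python
# Hypothetical sketch: a 4-bit field (four punch positions on the roll)
# can encode 16 values, enough for the 12 pitch classes of an octave.
# The Mark II's real roll format was more involved; this shows the idea.
def to_bits(pitch):
    """Render a pitch-class number as a 4-bit binary string."""
    assert 0 <= pitch < 16
    return format(pitch, "04b")

print(to_bits(11))  # '1011' -- pitch class 11, i.e. B
print(to_bits(0))   # '0000' -- pitch class 0, i.e. C
```

Each such field, punched as holes in the roll, told the synthesizer's circuits which oscillator settings to use for the next note, much as a player-piano roll selects keys.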
<<THEME MUSIC>>
Aaron Nathans:
We'll be right back with more of “Composers and Computers.” <<INTERMISSION>>
Aaron Nathans:
At its heart, this podcast is a story about interdisciplinary research. And here at the Princeton University School of Engineering and Applied Science, interdisciplinary work is part of who we are. We have a wide array of initiatives that cut across disciplines, including bioengineering, quantum computing, robotics, smart cities, data science, and yes, engineering and the arts. You can keep track of all the exciting things here by subscribing to our newsletter, visit us at engineering.princeton.edu, and scroll to the bottom of the page to sign up for our mailing list. That's engineering.princeton.edu.
Aaron Nathans:
We're halfway through the first of five episodes of this podcast, Composers and Computers. On our next episode, we'll move into the mid-1960s and watch as Princeton engineers and composers put their heads together to try to coax musical sounds out of an early model IBM. The results are pretty exciting, and the music's pretty cool too. But let's not get ahead of ourselves. Here's the second half of part one of “Composers and Computers.” <<END INTERMISSION>>
Aaron Nathans:
The synthesizer mirrored the times. Men were going into space and breaking the sound barrier. Computers were coming into people's consciousness. Science fiction was all the rage. The counterculture was growing.
Seth Cluett:
They were still making some pretty strong statements about a need to break from the past. So this was about that kind of like, well, we are no longer part of the long history. World War II was a rupture and it said things get to be different now. Then the sort of self-reflection or lack of self-reflection around the Vietnam War, these things were reckoning moments for people who are creative, trying to think what's the role of me in society? What's the role of music in society? Where do I sit in the academy? How am I related to my history? Those things were all sort of in the ether at the time.
Aaron Nathans:
The Columbia-Princeton Center's heaviest user was Milton Babbitt, but he certainly was not its only user. On the Columbia side was Otto Luening, who was very different in his orientation from Babbitt and used tape as his main material rather than the synthesizer. Think of the Beatles as they used tape to put together their more avant-garde, complex pieces. Seth Cluett says Luening was all about experimentation.
Seth Cluett:
He was like, okay, what happens if I record a flute and I slow it down 20 times? What happens if I layer a flute on top of a flute over and over again? Now we think about that and that's something people do on their iPads on the train. This was the first time it had ever been done. And so a lot of his pieces, there's a very famous one you can find on Spotify called “Low Speed.” It's very accessible. Sounds environmental. It sounds like a cue from a horror movie. It's got references that are outside of music. And so it's trying to leave music behind and it just is about texture and sound and think landscape photography instead of portraiture, right? It's creating an environment to be listening in rather than an object to see, to scrutinize... <<MUSIC>>
Aaron Nathans:
Babbitt's masterpiece on the Mark II was “Philomel,” a piece for synthesizer and voice.
Seth Cluett:
That's the one that people reference in textbooks related to the RCA Mark II synthesizer. Like, where he was very creative and innovative was in rhythmic gesture. So he was making one thing lead to another thing with an incredible physics, dynamism, and energy. I think that's a quite special thing... <<MUSIC>>
Aaron Nathans:
Babbitt's collaborator on that recording was soprano and Princeton resident Bethany Beardslee. She was known as a composer's singer for her devotion to working to execute composers' visions in often far-flung areas of music. Her autobiography, “I Sang the Unsingable,” hints at the challenges that composers like Milton Babbitt put in her path and the faith that Babbitt had in her ability to deliver on his vision with the help of what she called Babbitt's “robot orchestra.” Here's Mark Zuckerman, a computer musician who received his doctorate from Princeton in 1976.
Mark Zuckerman:
The piece that I would recommend, if you wanted to get involved in listening to some of the most masterful use of an electronic instrument in a live performance would be Bethany Beardslee's recording of “Philomel.” That, I think, is a tour de force, bar none.
Aaron Nathans:
It's notable that Beardslee married one of Babbitt's proteges, a Princeton graduate student from the United Kingdom who was commuting into the city to work with the Mark II. His name was Godfrey Winham. In 1964, he became the first person at Princeton to receive a doctorate in music composition. We'll be hearing a lot about Godfrey Winham in later episodes of this series.
Aaron Nathans:
Cluett is enthusiastic about Milton Babbitt's “Ensembles for Synthesizer,” which we heard at the top of this episode...
Seth Cluett:
I think “Ensembles for Synthesizer” is another example of Babbitt's incredible kind of timbral and rhythmic ingenuity. The amount of time it would've taken to program the musical material for that piece would've been astonishing. It must have been hours, or he instructed graduate students to work hours to punch in all of the rhythms. It is incredibly dense. It has a great deal of information. And it's timbrally just really interesting, sort of pushing the boundaries of what the simplicity of the synthesizer was capable of.
Aaron Nathans:
Lots of music was made on the Mark II by other composers. One piece, Charles Wuorinen's “Time's Encomium,” won the Pulitzer Prize, the first piece of electronic music ever to win that honor... <<MUSIC>> Cluett says he has a lot of respect for the work, but he doesn't enjoy listening to it. It's a little aggressive for his taste.
Seth Cluett:
I just, I can't get my head around it. It's one of those things where I am an open-minded person. I will listen to it. I taught it. I teach it. But it's difficult music to listen to, and that's okay because some things are just difficult to listen to. It was a difficult time.
Aaron Nathans:
Dan Trueman agrees. The electronic serial music of this era, so adventurous in its approach, often eludes what we tend to appreciate about music today.
Dan Trueman:
There were a number of things that that music avoided, and that was generally it avoided groove and engaging the body in a way that we're very much used to now, and actually I think all of music for the most part in the world engages the body in certain ways, and that music really tried to avoid that. It also avoided melody and conventional harmony, things that sound certain ways that we don't necessarily understand why, and instead there were these systems that were put in place to try to... kind of generative process to try to generate new relationships.
Aaron Nathans:
Yeah [crosstalk 00:28:09]
Mark Zuckerman:
The critics of that time used to call it the Columbia-Princeton axis, as if it were some World War II force that was fighting against nature somehow. They considered it to be a battle and they triumphed over it. So you don't find very many people who would say that their main influence today is, say, Milton Babbitt.
Aaron Nathans:
But the Mark II, revolutionary for its time, was quickly being surpassed. One of the students at Columbia doing research at the center at the time was an undergraduate named Robert Moog who studied electrical engineering. He went on to Cornell and to create the famous Moog synthesizer. Here's a bit of music from Martin Denny's 1969 album, “Exotic Moog...” <<MUSIC>>
Aaron Nathans:
Moog synthesizers started to get picked up by members of popular bands. Another analog synthesizer, the Buchla, meanwhile, was developed in California. The Columbia-Princeton Center purchased a few in the late 1960s. The Grateful Dead started to incorporate the Buchla into their music. The legacy of the Mark II was that it set a template of how art and science could work together, setting the stage for what would come next.
Seth Cluett:
Right? Because engineering gets pushed forward when people who have creative minds think past what we can do now, ask questions that engineers haven't yet asked. So there's a great quote from Laurie Spiegel, who's a founding pioneer of our field, she was a researcher at Bell Labs. And she said, "What has no possible, conceivable use is the answer to a problem that does not yet need to be solved." So we make a technology; we don't know what it's going to do. And it creates something new.
Aaron Nathans:
Milton Babbitt mentored the next generation of composers at Princeton, and naturally they were steeped in serialism, using that as their starting point. But of that group, only Godfrey Winham was making regular trips into New York to work with the synthesizer. But in the mid-1960s, Winham turned his attention to a different piece of equipment, one right there on the Princeton campus, albeit on the outer edge. The School of Engineering had outgrown its home, Green Hall, and on the site of Old University Field was rising the sprawling new engineering quadrangle.
Aaron Nathans:
Also outgrowing its tiny home at the Gauss House on Nassau Street was the first general purpose computing center on the Princeton campus, which housed the early IBM device called the 650. Princeton had just acquired IBM's next generation computer, the IBM 7090. This is the same model of computer made famous in the movie “Hidden Figures.” The machine and all of its ancillary equipment would take up several rooms. It needed the space that the new EQuad would afford. The creators of the first official computer center at Princeton University knew it would be popular and room was left for offices, and there was plenty of space for people to gather and learn about the new machine.
Aaron Nathans:
The new device was intriguing to Winham. He knew of some early experiments to coax sounds out of an IBM 7090 at Bell Labs in Murray Hill, New Jersey. Winham knew the device on the third floor of the EQuad had the potential not just to create electronic music closer to home than Upper Manhattan; it also held the promise of breaking free of the limitations of an analog device. While the Mark II could do an awful lot, what an analog synthesizer could do was finite. But echoing the space race, with a computer, the sky was the limit. Jeff Snyder said there was another limitation to analog synthesizers.
Jeff Snyder:
What was difficult about it was making the same sound twice because the analog electronics were noisy. Once you patched something together, you'd have to unpatch it, and then you'd have to somehow get it back into the same state. There were no presets. There was no way to... You really couldn't do the same sound again. You'd have to record it when you got it and hope that... And that was that. There wasn't a lot of control in a way. The control was very, very messy in some ways. It had control of, you could turn knobs, you could push little sensors, things like that. Part of the move towards doing more computer-based music was this move towards control of like, well, what if we designed something that could make exactly the same sound again, like a very complex sound that we could control all the parameters of, but then it would just do it again. We tweak a little and have it come out with that tweak, but exactly what we told it.
Aaron Nathans:
Here's Mark Zuckerman speaking about Milton Babbitt.
Mark Zuckerman:
We tried to interest him in doing computer music because it seemed to be logical for him to do that. But he was happy putting things on punch paper rolls like a player piano up at Prentis Hall at Columbia.
Aaron Nathans:
In her memoir, Bethany Beardslee recalled her husband Godfrey Winham trying to sell his mentor on the virtues and possibilities of making music with a computer. Winham tried to lure Milton Babbitt into the EQuad. But as innovative as Babbitt was, he was unwilling to go down that path. She wrote, "But Milton protested, 'You can't teach an old dog new tricks!'"
Aaron Nathans:
In our next episode, we'll explore how the new dogs, Milton Babbitt's students, used the computer in the EQuad to pick up where Babbitt essentially left off with electronic music, helping to create a world of new sounds and pushing forward a new way of making and hearing music.
Aaron Nathans:
This has been Composers and Computers, a production of the Princeton University School of Engineering and Applied Science. I'm Aaron Nathans.
Aaron Nathans:
I conducted all the interviews and produced the podcast. Our podcast assistant is Mirabelle Weinbach. Our audio engineer is Dan Kearns. Thanks to Dan Gallagher and the folks at the Mendel Music Library for collecting music for this podcast. That was Jeff Snyder at the piano. Thanks, Jeff. Graphics are by Ashley Butera, and Steve Schultz is the director of communications at Princeton Engineering.
Aaron Nathans:
This podcast is available on iTunes, Spotify, Google Podcasts, Stitcher and other platforms. Show notes, including a listing of music heard on this episode, sources, and an audio recording of this podcast, are available at our website, engineering.princeton.edu. If you get a chance, please leave a review. It helps.
Aaron Nathans:
The views expressed on this podcast do not necessarily reflect those of Princeton University. Part two of this podcast should already be in your feed. The story's just getting started. Talk with you soon. Peace.