Orchestrating the Chimera: Musical Hybrids, Technology and the Development of a "Maximalist" Musical Style

Leonardo Music Journal (vol. 5, 1996)



This article describes the "maximalist" approach I take in my musical composition. This approach embraces heterogeneity and allows for complex systems of juxtapositions and collisions, in which all outside influences are viewed as potential raw material. I focus here on the notion of hybridization, in which two or more sharply-defined and highly-contrasting aspects of experience are combined to produce something that is both alien and strangely familiar. Recent technological advances have allowed hybridization to extend into the realms of the synthesis of sound itself, the ensemble relationship between musical lines and the connection between performer and instrument.


An important aspect of much of my compositional work over the last twenty years--and one that has been gaining momentum in contemporary art in general--has been what could be called a "maximalist" (or, perhaps, "inclusive") attitude toward material. Rather than seeking a rarefied pure style free of outside references, I embrace heterogeneity, with all its contradictions, and view all outside influences as potential raw material from which something idiosyncratic and sharply-defined may be created. My works in this vein are based on interrelationships, juxtapositions and collisions and draw upon material as diverse as folk musics from around the world, popular music, jazz and historical Western concert music styles from many periods, as well as ideas derived from such far-flung fields as literature, visual art, science and philosophy. Because of the extreme diversity of the pool of potential raw material, such compositions tend to sound quite different from one another on the surface. Nevertheless, a common theme runs through all of them: the merging of fundamentally incompatible material and the working out of the dynamic this process sets into motion. The final effect expresses something of the fragmented and hybrid nature of contemporary identity.

The choice of basic material is a key pre-compositional decision with a generative influence as pervasive as the choice of a 12-tone row was for Schoenberg and his followers. It sets up oppositions and parallels, which give rise to possibilities for development and structure. The materials themselves may be coarse-grained, such as entire musical styles, or fine-grained slices cut at some very particular angle. Examples I have used include the patterned melodic structure of bluegrass banjo melodies, the abortive fracturing of forward motion in the last movement of the Debussy String Quartet, the relationship of solo parts to an underlying implied basic rhythm in Afro-Cuban percussion and the rapid fragmented orchestration of Strauss's Rosenkavalier. By using these elements as breeding stock, it is possible to create, in effect, hybrid musical styles.

The results are even more startling when this approach is extended to the world of computer music, where sounds and instruments themselves can be hybridized, and the very connection between performer and instrument can be challenged. The resulting creatures, compounded as they are from fundamentally incongruous parts, can be of frighteningly-emphatic character. Like the primordial fire-breathing Chimera, with her lion's head, goat's body, and serpent's tail, they conjure a wealth of conflicting associations and provide a glimpse into the realm of the mystical, in the form of the reconciliation of the irreconcilable.


If challenged to imagine a sound you have never heard, chances are you will make reference to something familiar in an attempt to come up with something unfamiliar. You may say to yourself, "it has the strength of the ocean surf combined with the singing quality of a violin." This is the hybrid solution to coming up with a new sound. Digital computers make it possible to create just such hybrids at the basic level of sound production itself. This is done by selecting one or more idiosyncratic aspects of one sound source and combining them with different but equally idiosyncratic aspects of another source, producing a new synthetic source with the fully-formed aspects of its two parents. The result may be a simple extension of a known family (such as a contra-contra-bass flute) or it may be a bizarre deviation.

Impossible Animals

In 1985, I was commissioned to compose a piece for chorus and synthesized voices called Impossible Animals [1]. With the notion of creating a fanciful exploration of the boundary between human and animal behavior (as well as between nature and imagination), I wrote a text about imaginary animals seen while looking at clouds. The text describes several composite animals ("a llama with a llama belly...his neck is made of an opossum carrying another opossum in his mouth..."), concluding with a description of a more familiar---though no less unlikely---beast ("Has an upright posture, has an opposable thumb, has the ability to communicate by means of organized speech and a variety of symbolic systems.") To complement this text, I searched for a computer technique that could create a hybrid between human and animal sounds.

Figure 1: Block diagram showing the process of transforming the song of a Winter Wren into a half-human/half-bird hybrid, as used in the composition Impossible Animals. Dark boxes show source material. Circles show computer processing. Lines with arrows show data paths.

Figure 1 illustrates the process that created the hybrid. I began with a birdsong recording of a Winter Wren (Troglodytes troglodytes), chosen for its remarkable length and variety. With the aid of a Fourier-based analysis program called "PARSHL" [2], I split the birdsong into basic sinusoidal components and tracked each one through time. The result of this process was a set of extremely finely-resolved contours of frequency and amplitude.

Armed with this mass of analysis data, I began the task of shaping it to the musical situation at hand. First I slowed the birdsong by a large factor, but without changing its pitch. Then I lowered the pitch by a different factor, bringing it into the range of the human voice. (These operations were made possible by the fact that the analysis program provides a parametric representation of the sound.) As the Wren trills were slowed and transposed downward in pitch, they became repeated phrases. I was shocked to hear the Wren seem to turn into a Northern Mockingbird (Mimus polyglottos)!
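Because the analysis is parametric, the time axis and the frequency axis can be scaled independently, which is impossible with a simple tape-speed change. The following sketch illustrates the idea; the track layout (lists of time/frequency/amplitude breakpoints) is a hypothetical simplification of PARSHL output, and the scaling factors are illustrative, not the ones used in the piece.

```python
# Sketch of independent time-stretching and pitch-shifting on a
# parametric sinusoidal-track analysis. Each track is assumed to be
# a list of (time_sec, freq_hz, amplitude) breakpoints.

def stretch_and_transpose(tracks, time_factor, pitch_factor):
    """Scale the time axis and frequency axis independently.

    time_factor > 1 slows the sound; pitch_factor < 1 lowers it.
    The two operations do not interact, because the representation
    is parametric rather than a raw waveform.
    """
    return [
        [(t * time_factor, f * pitch_factor, a) for (t, f, a) in track]
        for track in tracks
    ]

# A trill near 4000 Hz, slowed 8x and dropped two octaves toward vocal range:
wren = [[(0.00, 4000.0, 0.9), (0.05, 4400.0, 0.8), (0.10, 4000.0, 0.9)]]
voice = stretch_and_transpose(wren, time_factor=8.0, pitch_factor=0.25)
```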

My eventual goal was to create a synthesized vocal solo that would soar above the live chorus. To do this, I needed to tune the pitch of the bird solo to that of the chorus, but it was not clear how to tune material that is trilling or sliding in pitch. So I wrote a series of computer programs: The first searches through the analysis data and looks for quiet spots between phrases. When it finds such a location, it marks it as a new segment. The result is a set of separate mini-phrases or "chirps." The second program takes this data and shapes the pitch range dynamically, as specified in the musical score, providing a kind of meta-control over the dramatic contour of the bird song. The third program looks for places where the pitch is relatively constant. When it finds such a pitch plateau, it re-tunes the segment to be in harmony with a series of jazz-style chords that the live chorus sings. Since sliding pitches are not perceived to have a particular pitch, the program leaves them alone.
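The first of these programs amounts to a silence-based segmenter. The sketch below suggests how such a scan might work on a per-frame amplitude envelope; the frame representation, threshold and gap length are my own assumptions, not the original program's values.

```python
# Sketch of the first program: scan an amplitude envelope for quiet
# spots and split the data into "chirps." A segment ends when at
# least `min_gap` consecutive frames fall below `threshold`.

def segment_chirps(amps, threshold=0.05, min_gap=3):
    """Return (start, end) frame-index pairs, end exclusive."""
    segments, start, quiet = [], None, 0
    for i, a in enumerate(amps):
        if a >= threshold:
            if start is None:
                start = i          # a new chirp begins
            quiet = 0
        else:
            quiet += 1
            if start is not None and quiet >= min_gap:
                # close the chirp at the first quiet frame
                segments.append((start, i - quiet + 1))
                start = None
    if start is not None:
        segments.append((start, len(amps)))
    return segments
```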

The final step was the most revealing, the rendering of the bird song by a synthetic human vocalist. This was achieved with the aid of the Chant synthesis system [3], which represents human singing by a series of glottal pulses, resonated according to vocal tract impulse responses. This technique allows transitions between the notes of a musical phrase to be synthesized by simply interpolating from one pulse resonance shape to another. In this manner, it can synthesize not just a series of separate isolated notes (as is too often the vocabulary provided by MIDI instruments [4]), but fluid phrases with vowels that change gradually and pitches that slide from one note to another. Parameters of the Chant system include the vowel to be sung and various aspects of vocal performance, such as vocal effort.

Using yet another home-brew program, I applied the Winter Wren-derived pitch contour to the transitions between a series of human vowels. For example, the lowest pitch might correspond to "Ah," the highest pitch to "Ee" and the pitches between might correspond to "Oo," "Uh" and "I." As the pitch glides up, the vocal synthesis gradually shifts from vowel to vowel, producing "Ah-Oo-Uh-I-Ee," as well as the transitions between each adjacent pair of vowels. Because this pitch-to-vowel mapping is constantly in effect, even during extremely rapid trills, this technique renders ornaments with a natural, un-mechanistic sound. For additional variety, I change the set of vowels as the music progresses, as if the imaginary animal were pronouncing different words in some unknown language.
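The mapping described above can be sketched as a simple interpolation: the pitch's normalized position within its range selects a point along the ordered vowel sequence, and neighboring vowels are cross-faded. The vowel names come from the text; the linear normalization scheme is my own assumption about how such a program might work.

```python
# Sketch of a pitch-to-vowel mapping with cross-fading between
# adjacent vowels. Returns which two vowels to blend and how much.

def pitch_to_vowel(pitch, lo, hi, vowels=("Ah", "Oo", "Uh", "I", "Ee")):
    """Return (vowel_a, vowel_b, mix), where mix is the cross-fade
    amount from vowel_a toward vowel_b (0.0 = pure vowel_a)."""
    # normalized position along the vowel sequence
    pos = (pitch - lo) / (hi - lo) * (len(vowels) - 1)
    pos = max(0.0, min(pos, len(vowels) - 1))   # clamp out-of-range pitches
    idx = min(int(pos), len(vowels) - 2)
    return vowels[idx], vowels[idx + 1], pos - idx
```

A glide from `lo` to `hi` thus passes continuously through "Ah-Oo-Uh-I-Ee", and the mapping holds even during rapid trills, since it is purely a function of instantaneous pitch.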

Finally, I re-synthesized the data into a new and greatly-transformed rendition of the original birdsong. The result is a true hybrid, as if the brain of a Winter Wren had been transplanted inside a wildly-gifted human singer. Yet there is nothing "intermediate" about it; it does not sound like a compromise between a bird and a voice. Rather, it is a peculiar mutation of bird and voice, with its own powerful identity. When this solo is combined with the live chorus, the effect is weirdly evocative, a solo vocalise of utterances full of emotion, but from a species that has never existed.

The combination of such highly-contrasting components in a single musical context results in a music that is at once both radically challenging and strangely reminiscent of past experience. As in a cubist painting, a nose may be sideways, sticking out from the wrong side of the head, but its identification as a nose gives it an expressive power that an abstract shape would not have, while simultaneously setting up a rich network of associations with everyday life. The result can be something quite alien, but with a strong hauntingly-familiar identity, like a vision from a long-forgotten childhood dream.

Abstractions of Familiar Sounds

One hybridization technique that I have found particularly effective is abstraction. Beginning with what may be called "concrete" stylistic elements---those with a high degree of predictability and familiarity, strong tonality, regular rhythm, etc.---I introduce elements of irregularity and unpredictability. A familiar style may emerge from a primarily abstract context, as in the fourth movement of Three Musicians, for viola and guitar [5], in which a pointillistic pizzicato duet is gradually transformed into a jazz duo. Alternatively, a familiar style itself may be abstracted, as in the third movement of Three Musicians. Here, bluegrass music idioms are fractured and distorted so as to break up the regularity of the material, freeing it from its conventional formal context and allowing it to be "deconstructed," reinterpreted and used as fodder for my own processes of organization.

Like hybridization, abstraction can be extended to the domain of sound synthesis. A particularly effective technique begins with a computer simulation of an actual physical sound-producing mechanism and extends this model along physical dimensions, such as string thickness, stiffness and tightness. This technique is known as physical modeling (or virtual acoustics or waveguide synthesis) [6][7], in contrast with conventional waveform synthesis techniques, which provide no clear path to extending the sound in physically-meaningful ways [8].

My experience with physical modeling dates back to 1980, when I was commissioned to write May All Your Children Be Acrobats, for guitar ensemble, mezzo-soprano and tape [9]. I was searching for a synthesis method to complement the sound of the acoustic guitars and had the good fortune to find myself in a chamber music ensemble with violist Alex Strong, who had just discovered a revolutionary plucked string synthesis technique [10]. I began applying his technique in a musical context and found that, while it produced quite convincing string sounds, its expressivity was seriously limited. To solve this problem, I teamed up with Julius Smith, who had been working on similar approaches to bowed string synthesis. He recognized the technique as a simplified physical model and together we applied signal processing techniques to it. The resulting synthesis technique [11], shown in figure 2, is composed of a network of processing modules, each indicated in the figure by a box. Thick lines from one box to another represent signal paths computed at the sound sampling rate. Thin lines and arrows show control parameters that change irregularly, possibly as a result of asynchronous input, to create expressive variation. The computation done by each module is indicated by text inside its box; its role in the physical model is given in parentheses. For example, the string is modeled as a "delay line" that feeds its signal to a low-pass filter that models the frequency-dependent dissipation of energy along the string. "String length" is a parameter that affects the string's pitch, as is the case with a real string. Other terms used in figure 2 are defined in the glossary. Recent developments [12] have shown that any pizzicato instrument can be simulated by exciting the string with the resonances of the body of the instrument.
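The core of the plucked-string loop is compact enough to sketch directly: a delay line (the "string") is filled with noise (the "pluck") and recirculated through an averaging low-pass filter (the frequency-dependent loss). The decay-rate scaling and pick-position comb filter below suggest two of the extensions in the spirit of [11]; the parameter values are illustrative assumptions, not the published coefficients.

```python
import random
from collections import deque

# Minimal sketch of a Karplus-Strong-style plucked string with two
# extension-style knobs: a pick-position comb filter on the
# excitation and a decay-rate scaling inside the feedback loop.

def pluck(freq, duration, sr=44100, decay=0.996, pick=0.2):
    n = int(sr / freq)                    # delay-line length sets the pitch
    noise = [random.uniform(-1.0, 1.0) for _ in range(n)]
    # Comb-filter the excitation, imitating a pluck at a fraction
    # `pick` of the string's length.
    d = max(1, int(pick * n))
    string = deque(noise[i] - noise[i - d] for i in range(n))
    out = []
    for _ in range(int(sr * duration)):
        s = string.popleft()
        out.append(s)
        # Two-point average models frequency-dependent loss along the
        # string; `decay` scales the overall decay time.
        string.append(decay * 0.5 * (s + string[0]))
    return out
```

Because the loop models the string itself, changing `pick`, `decay` or the delay length mid-performance produces physically plausible variation rather than arbitrary waveform edits.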

Figure 2: Signal processing block diagram of the Karplus/Strong plucked string with Jaffe/Smith extensions. Dark lines and arrows indicate signal paths, light lines and arrows indicate parameters available for expressive variation.

This technique, being a physical model, has a number of features that make it well-suited to the creation of sound abstractions. First, its parameters not only mimic the behavior of a real string, but actually control the synthetic simulation in the same manner as its real-world counterpart. Rapid changes in these control parameters produce complex dynamic effects in the sound synthesis, which may even exceed our understanding. That is, we need not know what happens in the acoustic waveform in a particular musical situation; the synthesis technique automatically provides the correct natural response. Parameters include those traditionally in the domain of the performer, such as pick position and dynamics, as well as those historically crafted by the instrument maker, such as string flexibility, string thickness, resonant qualities of the body, and decay characteristics. All of these are now available as time-varying performance parameters.

Another advantage of the technique is its ability to retain its identity, even in the context of great variation. Unlike a sampled sound, it can be stretched and warped beyond the bounds of ordinary physical experience, while never leaving the realm of string-like identity. For example, in Silicon Valley Breakdown [13], I created a string with colossal thickness, with a sound like you would get if you plucked a cable of the Golden Gate Bridge (13:30'). Each note of this mega-string consists of 20 parallel strings, all slightly mistuned, and each with its own decay characteristic and location in quadraphonic space. The complex beating pattern this sets up actually leaps around the space, shifting over time as the various strings decay. (This effect is, unfortunately, only suggested in the stereo version of the piece.)

Later in the piece, I exploit the similarity between string resonance and room reverberation to create hybrids that flip from one identity to another. Near the climax (12:09'), a series of chords strum, building up a great deal of resonance in the strings. I then redefine the string physical model as a reverberation system and play contrasting material through it (12:34'). This material, a rising glissando with a rich harmonic spectrum, beats in a complex manner as the harmonics move in and out of the resonant peaks of the strings. The situation has a direct physical analog---it is like playing an instrument into a piano with the pedal down---but differs from the physical scenario in that none of the direct energy is heard; the entire effect is created by the resonance of the string model. I then (13:04') reverse this process, going from a reverberation effect to one of string resonance, by performing a glissando on the reverberating strings, using discrete steps. Each of these steps introduces some high-frequency energy into the string, producing a "pluck" sound, instantly transforming the resonant room back into a string. Such sleight of hand is made possible by the immutable string-like identity of the physical modeling technique, which allows great variation in the context of a well-defined timbral identity.


Silicon Valley Breakdown deals with hybridization on higher levels as well. Its entire formal structure grows out of an opposition between concrete bluegrass music and a much more abstract atonal music, the latter based on a twelve-tone row. These styles begin in stark opposition and gradually hybridize as the piece progresses, exchanging attributes and eventually merging into a single synthetic style (15:05'). The hybridization process proceeds in a multitude of ways. One especially compelling transformation involves abstracting the ensemble relationship between the synthetic performers.

I began with a desire to abstract and distort the familiar ensemble relationship of simulated performers in a bluegrass music ensemble. However, I did not want to sacrifice precise control over where the points of coincidence occur. To reconcile these competing needs, I developed a new technique called the time map [14], which describes transformations from bar time (time as traditionally represented on the musical page) to clock time (actual times at which the musical events are performed). This technique can support wild independent tempo deviations for different parts, with no sacrifice of the ability to exactly predict when and where coincidences happen.

Figure 3: Example time maps. A is the identity time map. B is a time map with two intersecting trajectories, each composed of tempos that change in discrete steps. C is a time map employing two trajectories with continuously varying tempos.

A number of time maps are shown in figure 3. The linear time map with slope 1.0 (figure 3A) is simply the identity mapping. Figure 3B shows two trajectories: Trajectory 1 begins quickly, with a slope greater than 1.0, then abruptly changes to a slower tempo, with slope less than 1.0, while trajectory 2 begins slowly, then abruptly changes to a faster tempo. The point at which the two trajectories cross corresponds to a simultaneity---that is, both parts are at the same point in the score, indicated by the value on the x axis, at the same time, indicated by the value on the y axis. Curves produce tempos that continuously speed up or slow down. Figure 3C shows two time maps that begin together, gradually diverge, then gradually converge, and so on. This is a simplified version of the map used in Silicon Valley Breakdown, in which I distorted bluegrass music using several sinusoidal deviations from the linear time map, applied to multiple simultaneous instances of a melody, each with a different amplitude of deviation. The ensemble begins together (4:43), but then the parts deviate in complex ways, gradually returning to perfect synchrony at a precisely-defined point in the score. The pseudo-banjo ensemble has its own time map, distinct from that of the pseudo-bass ensemble. These two maps have contrasting synchronization points---as the banjos are coming into synchrony, the basses are at their point of maximal deviation from unison ensemble. The effect is like a camera going out of focus, with near objects quickly losing their focus and distant objects gradually brought into focus. By varying the degree of deviation from the linear (i.e. the sinusoidal amplitude), the ensemble effect moves from unison, to heterophony and finally to polyphony.
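The guarantee of exact coincidence falls out of the map's construction. In the sketch below, each voice maps bar time to clock time as t + A sin(2πt/T): every voice's deviation vanishes at t = 0, T/2, T and so on, so the ensemble realigns at those score points no matter how large the individual amplitudes A are. The amplitudes and period here are illustrative, not those of the piece.

```python
import math

# Sketch of a sinusoidal time map: bar (score) time -> clock time,
# with a deviation that vanishes at every multiple of period/2.

def time_map(bar_time, amplitude, period):
    return bar_time + amplitude * math.sin(2 * math.pi * bar_time / period)

# Three voices with different deviation amplitudes diverge mid-phrase...
mid = [time_map(1.0, a, 4.0) for a in (0.0, 0.15, 0.30)]
# ...yet arrive at essentially the same clock time at bar time 4.0:
sync = [time_map(4.0, a, 4.0) for a in (0.0, 0.15, 0.30)]
```

Varying the amplitudes over the course of a passage moves the ensemble from unison through heterophony toward polyphony, exactly as described above.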


A different approach to hybrid ensemble relationships becomes available when live performers join their computer counterparts. Modern performance sensors, sometimes called virtual instruments or hyper-instruments, allow a performer's gestures to be de-coupled from the physical sound-producing mechanism of the musical instrument. The sensor measures the performer's actions and describes this information to a computer, which then translates these actions into any desired musical result. This transformation enables a new breed of hybrids concerned with performance control itself.

Softening the Boundaries Between Instruments

Figure 4: Hardware configuration of the Wildlife improvisational duo. The dark lines and arrows indicate stereo audio signal paths, the light lines and arrows indicate control signal paths (such as MIDI).

In order to explore the softening of the boundaries that normally distinguish one instrument from another, I teamed up with percussionist/composer Andrew Schloss in an improvisational duo called Wildlife [15][16]. The hardware configuration is shown in figure 4. I play the Zeta violin, a hybrid instrument that is both amplified violin and MIDI performance sensor, while Schloss plays the Mathews/Boie Radio-Drum [17][18], a three-dimensional sensor with two independent mallets that provides six distinct degrees of freedom. (The name "Radio-Drum" stems from the use of radio frequencies to distinguish the two mallets.) The gestural output of these two virtual instruments is processed by two computers in series, a Macintosh computer running Max software [19] and a NeXT computer running Music Kit [20] and Ensemble [21] software. Sound is synthesized by a Yamaha TG77 synthesizer, the NeXT's DSP and a Macintosh Sample Cell card. This arrangement allows sounding notes and timbres to depend on both of our actions, as when a violin glissando changes the pitch of chords played on the Radio-Drum. Additionally, we allow the computers themselves a degree of autonomy so they too become active participants in the musical discourse. Our performance actions cause the computers to spawn independent processes, autonomous artificial "lifeforms" that breed, propagate, compete and interact with us in a manner that may be symbiotic, parasitic or benign.

There is an inherent danger in this domain. It is easy for the performers, the computers and the audience to lose the connection between the gesture and the musical result, leading to a state of meaningless chaos. Much of the work in developing Wildlife involved finding interaction schemes that inspired us as performers. The schemes that work best give us a great deal of freedom, but provide mechanisms for reining in the complexity. Two examples may help clarify the sense of such a "computer-extended ensemble."

The first movement begins with a simple interaction scheme, allowing the audience to clearly perceive the causality between a performed action and the resulting sound [22]. The violin produces synthesized piano-like sounds via a set of chord mappings, each of which specifies a chord produced by the computer when the violin plays a particular pitch. The Radio-Drum performer chooses which of several sets of such mappings is active. For example, one set produces chords derived from chromatic tone clusters while another produces a different octave-displacement for each pitch. The Radio-Drum is partitioned in half, allowing the performer to play either single notes or chords, with the Radio-Drum's x axis controlling register, its y axis duration and its z axis loudness. Overlaying this partition is a grid that the performer activates with a foot-pedal to select the active chord mapping set. When the grid is active, the familiar gesture of striking the drum is abstracted from its normal function and has the drastic result of changing the harmonization of the violinist's melody.
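The chord-mapping scheme can be sketched as a lookup: the violin's played pitch indexes into whichever mapping set the Radio-Drum performer has made active. The two example sets below (chromatic clusters and per-pitch octave displacement) follow the text, but the specific intervals and the pitch-class rule are illustrative assumptions.

```python
# Sketch of the chord-mapping scheme: each mapping set is a function
# from a violin pitch (MIDI number) to the chord the computer plays.

def cluster_map(pitch):
    """Chromatic tone cluster built on the played pitch."""
    return [pitch, pitch + 1, pitch + 2]

def octave_map(pitch):
    """A different octave displacement for each pitch class
    (hypothetical rule: shift by -1, 0 or +1 octaves)."""
    return [pitch + 12 * ((pitch % 12) % 3 - 1)]

mapping_sets = {"clusters": cluster_map, "octaves": octave_map}
active = "clusters"          # selected by the Radio-Drum's grid + pedal

def harmonize(violin_pitch):
    return mapping_sets[active](violin_pitch)
```

Striking the drum with the grid active would simply rebind `active`, so the same violin melody is suddenly reharmonized.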

The second movement introduces a further degree of interdependence between the violinist and the Radio-Drum performer. As the violinist improvises a slow sustained melody, the computer listens and remembers the pitches most recently played. The Radio-Drum performer can then replay these pitches in any order, with any rhythm and with various instrumental timbres. Moving left along the Radio-Drum corresponds to a move further back in time, while moving to the far right gives access to pitches played most recently. Thus, both players direct the flow of the music, with the violinist continuously specifying the available pitches and the Radio-Drum performer affecting the degree of harmonic contrast.
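The pitch-memory idea amounts to a bounded buffer with position-indexed recall. In this sketch, the drum's x position (0.0 = far left, 1.0 = far right) reaches further back in time the further left the mallet strikes; the buffer length and the linear position-to-index rule are my own assumptions.

```python
from collections import deque

# Sketch of the second movement's pitch memory: the computer retains
# the violinist's most recent pitches, and a drum strike's x position
# selects how far back in time to reach.

class PitchMemory:
    def __init__(self, size=16):
        self.recent = deque(maxlen=size)     # oldest .. newest

    def hear(self, pitch):
        """The violinist plays a note; oldest pitches fall away."""
        self.recent.append(pitch)

    def strike(self, x):
        """Drum strike at normalized x: 0.0 = oldest, 1.0 = newest."""
        if not self.recent:
            return None
        idx = round(x * (len(self.recent) - 1))
        return self.recent[idx]

memory = PitchMemory(size=4)
for p in (60, 62, 64, 65):                   # violinist's recent melody
    memory.hear(p)
```

Both players thus steer the music: the violinist continuously refreshes the available pitches, while the drummer chooses their order, rhythm and timbre.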

Cross-Instrument Hybrids

An entirely different approach to instrument hybridization is used in The Seven Wonders of the Ancient World [23], an expansive seventy-minute piano concerto in seven movements. The piece is scored for Yamaha Disklavier grand piano, which is controlled by the Radio-Drum and accompanied by an ensemble of eight plucked string and percussion instruments. This is a purely acoustic computer piece---there are no loudspeakers. Similarly, nobody sits at the piano. Instead, the piano plays itself in response to the physical gestures of the Radio-Drum performer, as interpreted by a computer. This configuration allows the idioms, cliches and expressive vocabulary of one instrumental tradition to be transplanted onto the physical sound producing mechanism of another. The language of percussion music, with its rolls, bounces, and flams, speaks through the voice of the piano, with its own deep and vastly different tradition [24].

The Seven Wonders motif was suggested by the grand, monumental, yet very un-pianistic, sounds that the Radio-Drum-to-Disklavier mapping makes possible. The motif also fits well with my interest in great contrasts, as the Wonders reveal a crosshatch of parallels and oppositions. Two deal with death---the Pyramids and the Mausoleum. The Hanging Gardens glorify cultivated nature, while Artemis is the goddess of wilderness. The two statues invoke the heavens---Zeus, the god of thunder and rain; and the sun god of the Colossus of Rhodes.

The primary challenge in the creation of this piece was to find ways of using the Drum-Piano hybrid idiomatically, and to exploit each component's strengths, while avoiding its weaknesses. The Disklavier restrictions include a maximum of sixteen simultaneous notes and a limited repetition rate for any one note. The Radio-Drum has its own limitations. While it is excellent for virtuosic rapid passages and simultaneous control of multiple independent variables, it is poor at picking out a particular value from a large set, such as the keys of a piano. So, whenever possible, I structured the piece around improvisational scenarios that allow the soloist a great deal of freedom while constraining him, through the software programming, to a particular sound vocabulary. In such a context, I leave behind the role of the traditional composer, who specifies what will be played. Instead, I circumscribe an area that defines what can be played. The challenge becomes to allow the soloist sufficient flexibility to freely express himself, while providing sufficient constraints so that anything that he can possibly express will be meaningful in the context of the composed piece.

This solution works well for improvised cadenzas, but is inadequate for situations in which exact pitches are required. One alternative is the sequential drum approach [25], in which notes or phrases of a pre-stored sequence are played successively each time the soloist strikes the Radio-Drum surface. Downward-facing arrows in the score indicate the Radio-Drum rhythm and the conventional notation shows the resultant pitches. For example, in The Statue of Zeus, the Radio-Drum plays for one measure in sequential mode, then moves into a quasi-improvisational gesture.

An interesting variant of this approach is used in the second movement ("The Hanging Garden of Babylon"). Here, a pre-composed melody plays, duplicated at several transposition levels, while the soloist controls the drifting of the voices with respect to one another. This is done using a real-time performance time map. The Drum's x axis determines how far the voices move out of synchronization. As the mallet moves to the right, the voices move from unison rhythm to a kind of heterophony, and eventually to a canonic texture.

Throughout the piece, the Drum-Piano hybrid plays music that would be awkward or impossible if played by a conventional pianist. For example, the first movement ("The Pyramids") focuses on ponderous massive blocks of sound, each comprising all 88 notes of the piano. I devised a mapping of the Drum in which the computer plays all 88 notes of the piano each time the soloist strikes the Drum surface. The speed of these notes corresponds to the location along the y axis, while their loudness is determined by how hard the soloist strikes the Drum. Meanwhile, the x axis controls the correlation of the notes. The far right edge gives the most correlated effect, a single 88-note chromatic glissando, while the far left edge provides the least correlated effect, 88 notes in random order. Between these extremes are such combinations as 4 groups of 22-note glissandi, 8 groups of 11-note glissandi, etc. Multiple instances of this process can be active at once, allowing for a great deal of improvisational variety, despite the severe constraint that each Drum stroke produces 88 notes.
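The correlation control can be sketched as a choice of note ordering: one glissando, several interleaved sub-glissandi, or a random shuffle. How the x-axis value quantizes to a group count, and the interleaving order itself, are illustrative assumptions.

```python
import random

# Sketch of the 88-note mapping from "The Pyramids": one drum stroke
# triggers all 88 piano keys (MIDI 21..108); the group count, derived
# from the drum's x axis, selects how correlated their ordering is.

def stroke_notes(groups):
    """Return the 88 keys in the order implied by `groups` interleaved
    glissandi; `groups` must divide 88 evenly.
    groups=1 -> a single chromatic glissando; groups>=88 -> random order."""
    keys = list(range(21, 109))
    if groups >= 88:
        random.shuffle(keys)                  # least correlated extreme
        return keys
    span = 88 // groups
    # Interleave the sub-glissandi: first note of each group,
    # then the second note of each group, and so on.
    return [keys[g * span + s] for s in range(span) for g in range(groups)]
```

With `groups=4`, for instance, the stroke launches four simultaneous 22-note glissandi a minor fourteenth apart, partway between pure glissando and pure noise.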


The maximalist attitude, in which any aspect of experience may become basic material, opens expressive vistas of great expanse. It also enables a particular brand of radicalism that stems from a willingness to embrace the strange and unfamiliar, such as the Chimera-like hybrids presented here. In my application of the hybridization procedure to such diverse domains as musical style, sound production, ensemble synchronization, and the coupling between a performer's gestures and instrument, I have found it to be a fertile area of exploration that has the potential for compelling artistic statement.


Many of the ideas presented here were inspired by the music of Henry Brant and Charles Ives. Thanks also to composers Joel Chadabe, John Chowning, Karel Husa and Marta Ptazynska, and to my collaborators, Andrew Schloss and Julius Smith. The Radio-Drum part in The Seven Wonders of the Ancient World was developed in collaboration with Andrew Schloss, using his extensive work with the Radio-Drum as a foundation. This piece was supported in part by a Collaborative Composer Fellowship from the National Endowment for the Arts, a federal agency. Impossible Animals was commissioned by the Hamilton College Chorus, while May All Your Children Be Acrobats was commissioned by David Starobin for the Purchase Guitar Ensemble. Finally, thanks to Carol Adee, Kirsten Spalding and Robert Cowart for suggestions on this manuscript.


1. D. Jaffe, Impossible Animals, for chorus and synthesized voices. Also versions for SATB soloists and synthesized voices, violin and synthesized voices, oboe and synthesized voices, and five winds and synthesized voices (Berkeley, CA: Well-Tempered Productions, 1986)---10'.

2. J. O. Smith and X. Serra, "PARSHL: A Program for the Analysis/Synthesis of Inharmonic Sounds Based on a Sinusoidal Representation", Proceedings of the International Computer Music Conference at Urbana, Illinois (1987).

3. X. O. Rodet, "Time Domain Formant Wave Function Synthesis", Computer Music Journal, Vol. 8, No. 3, pp. 9--14 (1984).

4. MIDI Manufacturers Association, MIDI 1.0 Detailed Specification, The International MIDI Association, Los Angeles, CA (1988).

5. D. Jaffe, Three Musicians, for viola and guitar (Berkeley, CA: Well-Tempered Productions, 1986).

6. J. O. Smith, "Physical Modeling using Digital Waveguides", Computer Music Journal, Vol. 16, No. 4, pp. 74--87 (1992).

7. M. E. McIntyre, R. T. Schumacher and J. Woodhouse, "On the Oscillations of Musical Instruments", Journal of the Acoustical Society of America, Vol. 74, No. 5, pp. 1325--1345 (1983).

8. J. O. Smith, "Viewpoints on the History of Digital Synthesis", Proceedings of the International Computer Music Conference, Montreal, pp. 1--10 (1991).

9. D. Jaffe, May All Your Children Be Acrobats, for mezzo-soprano, eight guitars and computer-generated tape (Berkeley, CA: Well-Tempered Productions, 1980)---16'.

10. K. Karplus and A. Strong, "Digital Synthesis of Plucked-String and Drum Timbres", Computer Music Journal, Vol. 7, No. 2, pp. 43--55 (1983). Reprinted in C. Roads, ed., The Music Machine (Cambridge, MA: MIT Press, 1989).

11. D. Jaffe and J. O. Smith, "Extensions of the Karplus-Strong Plucked String Algorithm", Computer Music Journal, Vol. 7, No. 2, pp. 56--69 (1983). Reprinted in C. Roads, ed., The Music Machine (Cambridge, MA: MIT Press, 1989).

12. J. O. Smith, "Efficient Simulation of Stringed Musical Instruments", Proceedings of the International Computer Music Conference, Tokyo, pp. 64--71 (1993).

13. D. Jaffe, Silicon Valley Breakdown, for stereo computer-generated tape (Mainz, Germany: B. Schott's Söhne International, 1988)---20'. Also, version for four-channel computer-generated tape (Berkeley, CA: Well-Tempered Productions, 1982)---20'.

14. D. Jaffe, "Ensemble Timing in Computer Music", Computer Music Journal, Vol. 9, No. 4, pp. 38--48 (1985).

15. D. Jaffe and A. Schloss, Wildlife (Berkeley, CA: Well-Tempered Productions, 1992)---22'30".

16. D. Jaffe and A. Schloss, "The Computer-Extended Ensemble", Computer Music Journal, Vol. 18, No. 2, pp. 78--86 (1994).

17. R.A. Boie, L.W. Ruedisueli and E.R. Wagner, "Gesture Sensing via Capacitive Moments", Work Project No 311401-(2099,2399) AT&T Bell Laboratories (1989).

18. M. V. Mathews and A. Schloss, "The Radio Drum as a Synthesizer Controller", Proceedings of the International Computer Music Conference (1989).

19. M. Puckette and D. Zicarelli, "Max", Opcode Systems, Palo Alto, CA (1989).

20. J. O. Smith, D. A. Jaffe and L. Boynton, "Music System Architecture on the NeXT Computer", Proceedings of the Audio Engineering Society Conference, Los Angeles (1989).

21. M. McNabb, "Ensemble: An Extensible Real-Time Performance Environment", Proceedings of the 89th Audio Engineering Society Convention, Los Angeles, CA (1990).

22. A. Schloss and D. Jaffe, "Intelligent Musical Instruments: The Future of Musical Performance or the Demise of the Performer?", Interface: Journal of New Music Research, Netherlands, Vol. 22, No. 3, pp. 183--193 (1993).

23. D. Jaffe, The Seven Wonders of the Ancient World, for Radio-Drum-controlled Disklavier Grand Piano, mandolin, guitar, harp, harpsichord, harmonium, bass, and 2 percussionists (Berkeley, CA: Well-Tempered Productions, 1995)---70'. Radio-Drum part developed in collaboration with Andrew Schloss.

24. D. Jaffe and A. Schloss, "A Virtual Piano Concerto---Coupling of the Mathews/Boie Radio-Drum and the Yamaha Disklavier Grand Piano in 'The Seven Wonders of the Ancient World'", Proceedings of the International Computer Music Conference (1994).

25. M. V. Mathews, "The Conductor Program and Mechanical Baton", in Current Directions in Computer Music (Cambridge, MA: MIT Press, 1988).


adder--A processing module that adds its inputs to produce its output. Used in the plucked-string model to simulate the bridge.

all pass filter--A processing module that modifies the phase of its input without boosting or attenuating any frequencies. Used in the plucked-string model to mimic the effect of string stiffness.

comb filter--A processing module that produces as output the sum of its input and a delayed version of its input. Used in the plucked-string model to mimic the effect of pick position.

delay line--A processing module that delays its input by a certain number of samples, known as the "length" of the delay line. In the plucked-string model, this corresponds to the length of the simulated string.

Fourier-based analysis--A method of analyzing sound that uses a mathematical technique to split a sound into coefficients of a set of pure tones, each with its own frequency. Similar to the way a prism breaks white light into a rainbow of pure colors.

low pass filter--A processing module that attenuates high frequencies more than low frequencies (or boosts low frequencies more than high frequencies).

parametric representation--A description of a sound in terms of intuitively-meaningful variables, allowing it to be more easily manipulated.

resonant filter--A processing module that boosts certain regions of the sound spectrum, producing resonances.

scaler--A processing module that multiplies its input by a parameter.

sinusoidal components--Pure tones (i.e. sounds with no harmonics). Fourier-based analysis techniques split a complex sound into a set of sinusoidal components.
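
Several of the modules defined above--the delay line, low pass filter, scaler and adder--are the building blocks of the Karplus-Strong plucked-string algorithm cited in [10] and [11]. The following sketch shows how they combine in a minimal feedback loop; it is an illustration of the basic technique only, and the function name and parameter values are my own, not taken from the cited papers:

```python
from collections import deque
import random

def pluck(frequency, sample_rate=44100, duration=1.0, decay=0.996):
    """Minimal Karplus-Strong plucked-string sketch.

    A delay line whose length corresponds to the length of the
    simulated string is initialized with noise (the "pluck").
    Each sample that leaves the delay line passes through a
    two-point average (a simple low pass filter, modeling string
    damping), a scaler (controlling decay), and is fed back in.
    """
    n = int(sample_rate / frequency)            # delay-line length sets the pitch
    line = deque(random.uniform(-1.0, 1.0) for _ in range(n))
    out = []
    for _ in range(int(sample_rate * duration)):
        x = line.popleft()
        out.append(x)
        # low pass filter (two-point average) + scaler (decay),
        # summed back into the loop by the delay line's feedback
        line.append(decay * 0.5 * (x + line[0]))
    return out

samples = pluck(440.0, duration=0.5)            # half a second of a plucked A440
```

Because the averaging filter attenuates high frequencies on every pass through the loop, the initial noise burst rapidly settles into a periodic, string-like tone whose brightness fades as it decays--precisely the behavior the glossary modules were designed to produce.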