Live-Electronic Music
During the twentieth century, electronic technology enabled the explosive development of new tools for the production, performance, dissemination and conservation of music. The era of the mechanical reproduction of music has, rather ironically, opened up new perspectives, which have contributed to the revitalisation of the performer's role and the concept of music as performance. This book examines questions related to music that cannot be set in conventional notation, reporting and reflecting on current research and creative practice primarily in live electronic music. It studies compositions for which the musical text is problematic, that is, nonexistent, incomplete, insufficiently precise or transmitted in a nontraditional format. Thus, at the core of this project is an absence. The objects of study lack a reliably precise graphical representation of the work as the composer or the composer/performer conceived or imagined it. How do we compose, perform and study music that cannot be set in conventional notation? The authors of this book examine this problem from the complementary perspectives of the composer, the performer, the musical assistant, the audio engineer, the computer scientist and the musicologist.

Friedemann Sallis is Professor and Director of Graduate Studies in the Music Department at the University of Calgary, Canada.

Valentina Bertolani is currently pursuing a PhD in musicology at the University of Calgary, Canada.

Jan Burle is a scientist at the Jülich Centre for Neutron Science, Forschungszentrum Jülich GmbH, Outstation at MLZ in Garching, Germany.

Laura Zattra is a research fellow at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris, France.
Routledge Research in Music
This series is our home for cutting-edge, upper-level scholarly studies and edited collections. Considering music performance, theory, and culture alongside topics such as gender, race, ecology, film, religion, politics, and science, titles are characterized by dynamic interventions into established subjects and innovative studies on emerging topics.

Current Directions in Ecomusicology: Music, Culture, Nature
Edited by Aaron S. Allen and Kevin Dawe

Liveness in Modern Music: Musicians, Technology, and the Perception of Performance
Paul Sanden

Vocal Music and Contemporary Identities: Unlimited Voices in East Asia and the West
Edited by Christian Utz and Frederick Lau

Music Video After MTV: Audiovisual Studies, New Media, and Popular Music
Mathias Bonde Korsgaard

Masculinity in Opera
Philip Purvis

Music, Performance, and the Realities of Film: Shared Concert Experiences in Screen Fiction
Ben Winters

Live-Electronic Music: Composition, Performance, Study
Edited by Friedemann Sallis, Valentina Bertolani, Jan Burle and Laura Zattra
Live-Electronic Music: Composition, Performance, Study
Edited by Friedemann Sallis, Valentina Bertolani, Jan Burle and Laura Zattra
First published 2018 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
and by Routledge
711 Third Avenue, New York, NY 10017

Routledge is an imprint of the Taylor & Francis Group, an informa business

© 2018 selection and editorial matter, Friedemann Sallis, Valentina Bertolani, Jan Burle, and Laura Zattra; individual chapters, the contributors

The right of Friedemann Sallis, Valentina Bertolani, Jan Burle, and Laura Zattra to be identified as the authors of the editorial material, and of the authors for their individual chapters, has been asserted in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging-in-Publication Data
Names: Sallis, Friedemann. | Bertolani, Valentina. | Burle, Jan. | Zattra, Laura.
Title: Live-electronic music: composition, performance, study / edited by Friedemann Sallis, Valentina Bertolani, Jan Burle, and Laura Zattra.
Description: Abingdon, Oxon; New York, NY: Routledge, 2018. | Includes bibliographical references and index.
Identifiers: LCCN 2017024915 | ISBN 9781138022607 (hardback) | ISBN 9781315776989 (ebook)
Subjects: LCSH: Electronic music—History and criticism.
Classification: LCC ML1380 .L6 2018 | DDC 786.7—dc23
LC record available at https://lccn.loc.gov/2017024915

ISBN: 978-1-138-02260-7 (hbk)
ISBN: 978-1-315-77698-9 (ebk)

Typeset in Times New Roman by codeMantra

Every effort has been made to contact copyright-holders. Please advise the publisher of any errors or omissions, and these will be corrected in subsequent editions.
Contents

List of figures viii
List of contributors xiii
Acknowledgements xvii

Introduction 1
Friedemann Sallis, Valentina Bertolani, Jan Burle and Laura Zattra

Part I: Composition 15

1 Dwelling in a field of sonic relationships: 'instrument' and 'listening' in an ecosystemic view of live electronics performance 17
Agostino Di Scipio

2 (The) speaking of characters, musically speaking 46
Chris Chafe

3 Collaborating on composition: the role of the musical assistant at IRCAM, CCRMA and CSC 59
Laura Zattra

Part II: Performance 81

4 Alvise Vidolin interviewed by Laura Zattra: the role of the computer music designers in composition and performance 83
Laura Zattra

5 Instrumentalists on solo works with live electronics: towards a contemporary form of chamber music? 101
François-Xavier Féron and Guillaume Boutard

6 Approaches to notation in music for piano and live electronics: the performer's perspective 131
Xenia Pestova

7 Encounterpoint: the ungainly instrument as co-performer 160
John Granzow

8 Robotic musicianship in live improvisation involving humans and machines 172
George Tzanetakis

Part III: Study 193

9 Authorship and performance tradition in the age of technology (with examples from the performance history of works by Luigi Nono, Luciano Berio and Karlheinz Stockhausen) 195
Angela Ida De Benedictis

10 (Absent) authors, texts and technologies: ethnographic pathways and compositional practices 217
Nicola Scaldaferri

11 Computer-supported analysis of religious chant 230
Dániel Péter Biró and George Tzanetakis

12 Fixing the fugitive: a case study in spectral transcription of Luigi Nono's A Pierre. Dell'azzurro silenzio, inquietum. A più cori for contrabass flute in G, contrabass clarinet in B flat and live electronics (1985) 253
Jan Burle

13 A spectral examination of Luigi Nono's A Pierre. Dell'azzurro silenzio, inquietum (1985) 275
Friedemann Sallis

14 Experiencing music as strong works or as games: the examination of learning processes in the production and reception of live electronic music 290
Vincent Tiffon

Bibliography 305
Index 331
Figures
1.1 Agostino Di Scipio, Two Pieces of Listening and Surveillance, diagram of the complete performance infrastructure 21
1.2 Agostino Di Scipio, Two Pieces of Listening and Surveillance, sketch of the complete process 24
1.3 Agostino Di Scipio, Two Pieces of Listening and Surveillance, graphic score for flute action (excerpt) 27
1.4 Agostino Di Scipio, Two Pieces of Listening and Surveillance (score excerpt), signal flow chart describing some of the digital signal processing 32
2.1 The Animal algorithm comprises two parallel resonators with the logistic map in their feedback path 49
2.2 Amplitude and spectrogram display of two seconds of sound from ramping up ratios of resonator delay lengths from 1.04 to 8.0 53
2.3 Amplitude and spectrogram display of two seconds of sound from ramping up feedback gain to both resonators from 0.0 to 1.0 53
2.4 Amplitude and spectrogram display of two seconds of sound from changing the balance between resonators 54
2.5 Amplitude and spectrogram display of two seconds of sound from ramping up the lowpass frequency from 550 to 9000 Hz 54
2.6 Amplitude and spectrogram display of two seconds of sound from ramping up ratios of resonator lowpass frequencies from 1.003 to 4.0 54
2.7 Amplitude and spectrogram display of two seconds of sound from ramping up the parameter r of the logistic map 55
3.1 Pierre Boulez at a desk working on Répons at IRCAM, 1984 (IRCAM, Paris, Espace de projection) 65
3.2 1975: Pierre Boulez brought an IRCAM team to CCRMA for a two-week course in computer music 70
3.3 Richard Teitelbaum (standing) and, from left to right, Joel Chadabe and musical assistants Mauro Graziani and Alvise Vidolin in 1983, Venice Biennale, Festival 'La scelta trasgressiva' 74
5.1 Population distribution in terms of their first experience in musique mixte 103
5.2 Schematic depiction of the social interaction in musique mixte 104
6.1 Jonty Harrison, Some of its Parts, page 3 (excerpt) 135
6.2 Heather Frasch, Frozen Transitions, page 2 (excerpt) 137
6.3 Lou Bunk, Being and Becoming, bars 58–60 of full score 139
6.4 Lou Bunk, Being and Becoming, bars 58–60 of performance score 140
6.5 Denis Smalley, Piano Nets, page 11 (excerpt) 140
6.6 Elainie Lillios, Nostalgic Visions, page 2 (excerpt) 141
6.7 Juraj Kojs, Three Movements, page 2 (excerpt) 143
6.8 Juraj Kojs, All Forgotten, page 14 (excerpt) 144
6.9 Per Bloland, Of Dust and Sand, bars 73–75 (piano part) 145
6.10 Larry Austin, Accidents Two, Event 36 1/2 146
6.11 Dominic Thibault, Igaluk: To Scare the Moon with its Own Shadow, bars 213–15 147
6.12 Hans Tutschku, Zellen-Linien, page 1 (excerpt) 148
6.13 Bryan Jacobs, Song from the Moment, bars 84–92 149
6.14 Scott Wilson, On the Impossibility of Reflection, bars 1–4 150
6.15 Alistair Zaldua, Contrejours, page 3 (excerpt) 152
6.16 Karlheinz Essl and Gerhard Eckel, Con una Certa Espressione Parlante, page 6 (excerpt) 154
6.17 Karlheinz Essl and Gerhard Eckel, Con una Certa Espressione Parlante, page 9 (excerpt) 155
6.18 (a) The author with The Rulers, image by Vanessa Yaremchuk. (b) Detail from Figure 6.18a 156
6.19 D. Andrew Stewart, Sounds between Our Minds, page 4, full score (excerpt). The Rulers notation is shown on the two bottom staves 156
7.1 A partially 3D-printed version of Hans Reichel's daxophone constructed by the author, with the 'dax' resting on tongue 162
8.1 Mahadevibot robotic percussion instruments designed by Ajay Kapur 174
8.2 Early robotic idiophones by Trimpin 176
8.3 Percussion robots with microphone for self-listening 179
8.4 Velocity calibration based on loudness and timbre: (a) MFCC values, (b) MFCC inverse mapping, (c) PCA values, (d) calibrated PCA 182
8.5 Pattern recognition – average precision for different gestures on the radiodrum and vibraphone. The mean average precisions (MAP) are 0.931 and 0.799 185
8.6 Kinect sensing of free-space mallet gestures above a vibraphone 187
8.7 Virtual vibraphone bar faders 188
8.8 Trimpin next to one of the robotically actuated piano boards developed for Canon X + 4:33 = 100 189
9.1 Charles Rodrigues, 'And now, electronic music of Stockhausen…', Stereo Review (November 1980) 195
9.2 (a) Luciano Berio, Sequenza I (Milan: Edizioni Suvini Zerboni, n.d.), p. [1] (© 1958), S. 5531 Z. (b) Luciano Berio, Sequenza I (Milan: Edizioni Suvini Zerboni, n.d.), performance notes, p. [1] (© 1958), S. 5531 Z 199
9.3 Luciano Berio, Sequenza I (Vienna: Universal Edition, n.d.), p. [1] (© 1998), UE 19 957 201
9.4 Karlheinz Stockhausen, Kreuzspiel, Kontra-Punkte, Zeitmaße, Adieu, The London Sinfonietta, conductor: Karlheinz Stockhausen, LP, Hamburg: Polydor, 1974; dust jacket, LP Deutsche Grammophon (2530 443) 203
9.5 (a) Karlheinz Stockhausen, Kreuzspiel (Vienna: Universal Edition, n.d.), performance notes, n.p. (© 1960), UE 13 117. (b) Karlheinz Stockhausen, Kreuzspiel, rev. 4th edn. (Vienna: Universal, 1990), performance notes, n.p. (UE 13 117) 204
9.6 Luciano Berio, Sequenza III (London: Universal, n.d.), p. [1] (© 1968), UE 13 723 206
9.7 Luciano Berio, handwritten page from the electronic score of Ofaním, cue clarinet (Luciano Berio Collection, Paul Sacher Foundation) 208
10.1 Simha Arom, analysis of the music of Banda Linda as found among Luciano Berio's sketches for Coro (Scherzinger 2012, 412) 221
10.2 Steven Feld, wearing DSM microphones, records canti a zampogna (voice: Giuseppe Rocco, zampogna: Nicola Scaldaferri), Accettura (Matera, Italy), 14 May 2005 (Scaldaferri and Feld 2012, 84) 226
11.1 Qur'an sura Al-Qadr recited by Sheikh Mahmûd Khalîl al-Husarî, pitch (top, MIDI units) and energy (bottom, decibels) contours 232
11.2 Qur'an sura Al-Qadr recited by Sheikh Mahmûd Khalîl al-Husarî, recording-specific scale derivation 233
11.3 Screenshot of interface: paradigmatic analysis of neume types in Graduale Triplex 398 as they relate to melodic gesture 234
11.4 Béla Bartók, transcription of Mrs. János Péntek (#17b) from 1937 238
11.5 Density plot of the recording of Mrs. János Péntek 239
11.6 Density plot transcription of the recording of Mrs. János Péntek 240
11.7 Pitches, based on the density plot, ordered in terms of their density 240
11.8 Pitches, based on the density plot, ordered in terms of scale degree 240
11.9 Bartók's original transcription, juxtaposed with the version with scales derived from the density plot 241
11.10 Bartók's original transcription, juxtaposed with the version with scales derived from the density plot; primary pitches have note heads marked by an 'x', secondary pitches by a triangle and tertiary pitches by a diagonal line through the note head 241
11.11 Sirató, paradigmatic analysis of text/melody relationship as displayed in the cantillion interface 242
11.12 Pitch histograms of Genesis chapters 1–4 (a) and Genesis chapter 5 (b) as read in The Hague by Amir Na'amani in November 2011. Recorded by Dániel Péter Biró and pitch histogram created by Peter van Kranenburg 244
11.13 (a) Distribution of distances between unrelated segments. (b) Distributions of distances between sof pasuq renditions in Italian (a) and Moroccan (b) renditions as exemplified by Peter van Kranenburg and Dániel Péter Biró 246
11.14 (a) Density plots of frequencies occurring in Indonesian (a) and Dutch (b) recitation of sura al-Qadr. (b) Scale degrees derived from Indonesian (solid) and Dutch (dashed) pitch density plots for sura al-Qadr. (c) Contours of the same cadence as sung by Dutch (a) and Indonesian (b) reciters quantised according to the derived scale degrees 247
12.1 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, diagrams of the position of the loudspeakers (left) and the live electronic configuration with line recordings identified (right) (Nono 1996, xv) 259
12.2 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, unprocessed spectrogram of a performance recorded on 28 February 2009 262
12.3 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, spectrogram of the contrabass clarinet sound recorded on 28 February 2009 262
12.4 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, spectrogram of the contrabass flute part, bars 4–7, recorded on 28 February 2009 263
12.5 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, manual transcription of sound of the flute and clarinet, bars 1–9 264
12.6 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, bars 15–31 268
12.7 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, contrabass clarinet part, bars 24–25, (a–c) present stages of the transcription process 270
12.8 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, contrabass flute part, bars 24–25, (a–c) present stages of the transcription process 271
12.9 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, contrabass flute and contrabass clarinet parts, bars 17–29, (a) Loris analysis, (b) final transcription 272
13.1 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, transcription of the entire performance, recorded in Banff on 28 February 2009 281
13.2 Luigi Nono, A Pierre. Dell'azzurro silenzio, inquietum, transcription of sounds produced by the contrabass flute and the contrabass clarinet directly, bars 17–29 282
13.3 Luigi Nono, A Pierre, transcription of sounds produced by the harmonisers and filter 3, bars 17–29 283
13.4 Luigi Nono, A Pierre, amalgamation of Figures 13.2 and 13.3, bars 17–29 284
13.5 Luigi Nono, notes for a lecture 'Altre possibilità di ascolto' presented during August 1985 at the Fondazione Cini 288
14.1 Marco Stroppa, …of Silence…, photo of the 'acoustic totem' 293
14.2 Marco Stroppa, …of Silence…, diagram of the audio setup 294
14.3 XY installation, diagram of the audio device and capture 295
14.4 XY installation, technical schemata 295
Contributors
Valentina Bertolani is a PhD candidate at the University of Calgary. Her dissertation focuses on the relationships among American, Canadian and Italian avant-garde collectives of composers/performers in the 1960s and 1970s, focusing on their aesthetic principles and improvising procedures. She holds a Masters in Musicology from the University of Pavia. She has presented her work at society meetings and international conferences in Canada, the UK, France, Italy and Japan. Valentina has been the recipient of several awards, and in 2016 she received an Izaak Walton Killam pre-doctoral scholarship.

Dániel Péter Biró is Associate Professor of Composition and Music Theory at the University of Victoria, BC, Canada. After studying in Hungary, Germany and Austria, he completed his PhD in composition at Princeton University in 2004. He was Visiting Professor at Utrecht University in 2011 and Fellow at the Radcliffe Institute for Advanced Study, Harvard University, in 2014–2015. In 2015, he was elected to the College of New Scholars, Artists and Scientists of the Royal Society of Canada.

Guillaume Boutard is Assistant Professor in the École de bibliothéconomie et des sciences de l'information at the Université de Montréal. His research interests include digital curation and creative process documentation methodologies. He holds a PhD in Information Studies (McGill University), an MSc in Computer Science (Université Pierre et Marie Curie, Paris VI) and an MSc in Geophysics (Université Pierre et Marie Curie, Paris VI), and conducted two years of postdoctoral research in the Faculté de musique at the Université de Montréal. He previously worked at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) as an engineer from 2001 to 2009.

Jan Burle currently develops scientific software at the Jülich Centre for Neutron Science in Garching bei München, Germany. Before that, he was Assistant Professor in the Music Department at the University of Lethbridge, Canada. His main research interest is the general application of computing to musical sound and music: analysis, transcription, microtonal aspects, performance and reception.

Chris Chafe is a composer, improviser and cellist, developing much of his music alongside computer-based research. He is Director of Stanford University's Center for Computer Research in Music and Acoustics (CCRMA). Computer synthesis of novel sounds and music has remained an interest ever since his first exposure to the work of John Chowning, William Gardner Schottstaedt and David Wessel as a student at the Center in the 1970s and 1980s.

Angela Ida De Benedictis is a scholarly staff member and curator at the Paul Sacher Foundation. Previously, she was Assistant Professor at the University of Pavia (Cremona), and she has taught at the Universities of Padova, Salerno, Parma and Berne. Among her scholarly interests are the Italian postwar avant-garde, radiophonic music, music theatre, the study of the creative process and electronic music. Her publications include editions of the writings of Luigi Nono (Ricordi 2000 and il Saggiatore 2007) and Luciano Berio (Einaudi 2013); Imagination at Play: The Prix Italia and Radiophonic Experimentation (RAI/Die Schachtel 2012); Radiodramma e arte radiofonica (EDT 2004); New Music on the Radio (ERI-RAI 2000); critical editions of Maderna's, Nono's and Togni's works (for Suvini Zerboni and Schott); and other books and essays of theory and analysis mainly featuring twentieth-century music.

Agostino Di Scipio is a composer, sound artist and scholar. As a scholar, he is interested in the cognitive and political implications of music technologies and in systemic notions of sound and auditory experience. As a composer, he is well known for performance and installation works based on man-machine-environment networks. A thematic issue of Contemporary Music Review documents his efforts in this direction. He was a DAAD artist-in-residence in Berlin (2004–2005) and Edgard Varèse Professor at the Technische Universität Berlin (2007–2008). He was Full Professor of Electroacoustic Composition at the Conservatory of Naples (2001–2013) and has held the same position at the Conservatory of L'Aquila since 2013.

François-Xavier Féron holds a Master's degree in musical acoustics (University of Paris VI) and a PhD in musicology (University of Paris IV). After teaching at the University of Nantes (2006–2007), he was a postdoctoral researcher at the Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT, Montreal, 2008–2009) and then at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM, Paris, 2009–2013). Since 2013, he has been a tenured researcher at the French National Centre for Scientific Research (CNRS) and works at the LaBRI (Laboratoire Bordelais de Recherche en Informatique). His research focuses on contemporary musical practices, the perception of auditory trajectories and, more broadly, the interactions between art, science and technology.

John Granzow is Assistant Professor of Performing Arts Technology at the University of Michigan. He teaches musical acoustics, sound synthesis, performance systems and digital fabrication. He initiated the 3D Printing for Acoustics workshop at the Center for Computer Research in Music and Acoustics at Stanford. His instruments and installations leverage found objects, iterative CAD design, additive manufacturing and embedded sound synthesis.

Xenia Pestova's performances and recordings have earned her a reputation as a leading interpreter of the uncompromising piano repertoire of her generation. Her commitment and dedication to the promotion of music by living composers have led her to commission dozens of new works and collaborate with major innovators in contemporary music. Her widely acclaimed recordings of core piano duo works of the twentieth century by John Cage and Karlheinz Stockhausen are available on four CDs for Naxos Records. Her evocative solo debut of premiere recordings for piano and toy piano with electronics on the Innova label, titled Shadow Piano, was described as a 'terrific album of dark, probing music' by the Chicago Reader. She is the Director of Performance at the University of Nottingham. www.xeniapestova.com

Friedemann Sallis is Professor at the School of Creative and Performing Arts of the University of Calgary. He is an established scholar with an international reputation in the field of sketch studies and archival research in music. His research interests include the study of music that escapes conventional notation (such as live electronic music) and of how music relates to place. Recent publications include Music Sketches (Cambridge University Press, 2015) and Centre and Periphery, Roots and Exile: Interpreting the Music of István Anhalt, György Kurtág and Sándor Veress (Wilfrid Laurier University Press, 2011), as well as numerous articles on twentieth-century music. Over the past twenty years, he has received six standard research grants from the Social Sciences and Humanities Research Council of Canada.

Nicola Scaldaferri is Associate Professor of Ethnomusicology at the University of Milan, where he is the director of the LEAV (Laboratory of Ethnomusicology and Visual Anthropology). He received his PhD in Musicology from the University of Bologna and his degree in Composition from the Conservatory of Parma; he was a Fulbright scholar at Harvard University and a visiting professor at St. Petersburg State University. His interests include twentieth-century music and technology, Balkan epics, Italian folk music and instruments from West Africa. His recent publications include When the Trees Resound: Collaborative Media Research on an Italian Festival (2017, edited with Steven Feld).

Vincent Tiffon is Professor of Musicology at the University of Lille, a researcher in the CEAC research centre and co-director of the EDESAC research team. He is also an associate researcher at IRCAM in Paris. Tiffon's research addresses the history, analysis and aesthetics of electroacoustic music and musique mixte and takes special interest in analysing the creative process in music and musical mediology. His work has been published in journals including Acoustic Arts & Artifacts/Technology, Aesthetics, Communication, Analyse musicale, Les Cahiers du Cirem, Les Cahiers de Médiologie, Contemporary Music Review, DEMéter, Filigrane, LIEN, Medium, Médiation et communication, Musurgia, NUNC, Revue de musicologie and Circuit.

George Tzanetakis is Professor in the Department of Computer Science at the University of Victoria, BC, Canada. He holds cross-listed appointments in the School of Music and the Department of Electrical and Computer Engineering. He received his PhD from Princeton University in 2002. In 2011, he was a visiting scientist at Google Research in Mountain View, California. Since 2010, he has been a Canada Research Chair (Tier II) in the computer analysis of music and audio.

Laura Zattra obtained her PhD from the Sorbonne (Paris IV) and Trento University. She collaborates with research centres, archives and universities (Padova, De Montfort, Calgary, Sorbonne). She is a Research Associate at the Analysis of Musical Practices Research Group, IRCAM-CNRS (Paris), and at IreMus (Paris-Sorbonne). Her research interests cover twentieth- and twenty-first-century music, especially the interaction of music and technology, collaborative artistic creativity, the analysis of the compositional process, and women's studies and music. She is currently lecturing at the University of Padova, as well as at the Parma and Rovigo conservatoires (Italy).
Acknowledgements
The editors would like to heartily thank Heidi Bishop and Annie Vaughan for patiently shepherding us through the publication process. Their kind advice was much appreciated. We would also like to thank Elizabeth Levine for her help in getting this project up and running. We are grateful to the following people and institutions for allowing us to publish material for which they hold copyright: John Chowning and the Center for Computer Research in Music and Acoustics (CCRMA) of Stanford University, Marion Kalter, Marco Mazzolini (Casa Ricordi), Nuria Schoenberg Nono and the Archivio Luigi Nono, Alvise Vidolin and the Centro di Sonologia Computazionale (CSC) of the Università di Padova, as well as Edizioni Suvini Zerboni, the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), the Paul Sacher Foundation and Universal Edition.
Introduction

Friedemann Sallis, Valentina Bertolani, Jan Burle and Laura Zattra
This book examines aspects of live electronic music from the overlapping perspectives of composition, performance and study. It presents neither a history nor a theory of this music, though we believe that it can contribute to both. It also does not endeavour to cover the topic comprehensively. Given the vast array of innovative musical practices that have been and continue to be associated with this term, no book could possibly undertake a comprehensive overview. The chapters should thus be seen as snapshots of a rapidly evolving object of study. They present an array of musicological research, in which some authors report on recent achievements while others contemplate unresolved problems that have arisen over the past half century. The book reflects on current practice and how we got where we are.
Evolving definitions

The concept of live electronic music preceded the term. In 1959, Karlheinz Stockhausen (with typical clairvoyance) juxtaposed the unlimited repeatability of electronic music composed using machines with instrumental music that appeals directly to the creative, ever-variable capacities of the musician, 'enabling multifarious production and unrepeatability from performance to performance' (2004, 379). He then predicted that the combination of electronic and instrumental music would move beyond the stage of simple juxtaposition in order to explore 'the higher, inherent laws of a bond' (2004, 380). John Cage, surely one of the English language's most important wordsmiths with regard to new music, has been credited with coining the term (Supper 2016, 221). In 1962, Cage presented the two goals he had pursued in composing Cartridge Music (1960). The first was to render performance indeterminate, and the second was 'to make electronic music live' (Cage 1970, 145). The term 'live electronic music' began to be used regularly by groups of young composer-performers devoted to concert presentations of electronic music in the early 1960s. David H. Cope (1976, 97; see also Manning 2013, 161–66; Deliège 2011, 415–18; Collins 2007, 41–43) mentioned the Sonic Arts Group (inaugurated at Brandeis University in 1966, later the Sonic Arts Union) and Musica Elettronica Viva (Rome 1966).1 Other groups he could
have named include the ONCE Group (Ann Arbor, active from the late 1950s), the AMM (London 1965) and Gentle Fire (York 1968). The latter explored the potential of new electronic media using a heterogeneous mix of traditional and newly invented instruments to present music that blurred the line between the avant-garde and progressive rock of the day (H. Davies 2001, 55–56). In 1967, the University of California at Davis in collaboration with the Mills College Tape Center (Oakland) organised the First Festival of Live-Electronic Music, the first time the term was used prominently in a public event. According to a reviewer, the Festival presented a radical shift in the way composer-performers approached the sound world of the concert stage (Johnson 2011, 116–24). By the 1970s, 'live electronic music' was widely used, though not consistently. The second edition of the venerable Harvard Dictionary of Music presents articles on electronic instruments and electronic music. The latter contains no mention of the concept or the term, even though the author was likely aware of both (Boucourechliev 1972, 285–86). What does 'live electronic music' mean? The definition has always been troublesome, with differences cropping up depending on what is being described. In English, the term can be defined in (at least) two ways. On the one hand, it is an umbrella term under which we find a wide range of musical practices, styles, techniques and technologies that stage the dichotomy embedded in it: live (= human) vs. electronic (= sound generated by some sort of electrically powered device). In this sense, live electronic music was and continues to be used as a broad oppositional category to acousmatic music: i.e. music prepared in a studio and fixed on some medium in advance of being 'played back', normally without 'performers' in the traditional sense of the term.2 The origin of this binary construction can be traced back to the 1930s, when the adjective 'live' began to be used to qualify music performance in response to a crisis caused by the broadcast of recorded music on the radio. Recording technology had existed since the beginning of the century. However, by removing the sound source from the listener's perspective, radio obscured the difference between live and recorded sound, motivating the use of the term. 'The word live was pressed into service as part of a vocabulary designed to contain the crisis by describing it and reinstating the former distinction discursively even if it could no longer be sustained experientially' (Auslander 2002, 17). The binary constructions (human-machine, live-recorded, art-technology) embedded in the term 'live electronic music' are typical of discourse about art music in the twentieth century.3 According to Sanden (2013, 18–43), these binaries evoke a technophobia prevalent in this discourse, which remains alive and well to this day. On the other hand, 'live electronic music' can be used more narrowly to underscore the fact that the electronic sound production is taking place on the stage in real time. In this case, the adjective 'live' directly qualifies the electronic devices or methods used to modify or produce sound, giving rise to the term 'live electronics'. Rather than implying the binary opposition
presented above, the second meaning focuses on some kind of interactive use of the electronic devices. Simon Emmerson has observed that the self-declared 'live electronic' ensembles of the 1960s and 1970s tended to use the descriptive label 'live electronic' freely, applying it to music that was 'produced and performed through real-time electroacoustic activity' or was the result of a combination of 'live performers and fixed electroacoustic sound' (2007a, 104). This terminological ambiguity has remained embedded in English usage to this day, resulting in a plurality of hazy definitions that are typical of electronic music in general and are becoming increasingly problematic (Peters et al. 2012, 3–4). Currently, when used in the narrower sense, live electronic music usually refers to works involving the digital management or manipulation of sound, placing it firmly in the era of personal computing that emerged in the last decades of the twentieth century. Ironically, this leads to the rather odd relegation of earlier examples based on analogue technologies to the prehistory of live electronic music, even though these earlier examples generated the term in the first place. These diverse perspectives result in strikingly different ways of explaining what live electronic music is and how it developed. For example, Peter Manning (2013, 157–67) uses the term live electronic music to discuss the period from the 1950s to the digital revolution of the 1980s. By contrast, Angela Ida De Benedictis, one of our authors, focuses resolutely on the period from the 1980s to the present. Though acknowledging an earlier period of live electronic music that produced numerous masterpieces (Stockhausen's Mikrophonie I and II, and Mantra), she divides her period of study into two phases: the first is designated the 'historical phase' of live electronic music (the 1980s and early 1990s), followed by the current phase 'characterised by the hybridisation of live electronics with computer music' (De Benedictis 2015, 301–2).
Three concepts and three phases of live electronic music

Clearly, numerous kinds of live electronic music have arisen over the past half century, characterised by different aesthetic goals and technologies. Elena Ungeheuer's attempt to systematically capture the development and ramification of live electronic music in three concepts provides a helpful synthesis of the problems presented above and enables us to begin to make sense of the many different threads embedded in this story. Her first concept is marked by compositions that stage the human-machine opposition described above. Like many others (Emmerson 2007a, 89; Cope 1976, 92), she cites Bruno Maderna's Musica su due dimensioni for flute, cymbals and tape (1952) as characteristic of this first period, in which compositions for traditional instruments and music prerecorded on tape proliferated (2013, 1368–69). Ungeheuer's second concept focuses on compositions in which technology allowed the music to transgress the traditional temporal and physical limitations of instrumental performance (2013, 1369–71). The dramaturgical
confrontation of the first concept, intended for the eye as much as for the ear, is here intentionally erased. In other words, the music of this concept moves towards a more homogeneously integrated environment dominated by listening. This shift is underscored thematically in the subtitle of the work Ungeheuer uses to illustrate her concept: Luigi Nono's Prometeo. Tragedia dell'ascolto [a tragedy of listening] for soloists, choir, orchestra and live electronics (1981–84, rev. 1985). The third concept is defined by the enhancement of real-time interactivity between the performing agents (be they humans or machines) that provided live electronic music with its 'lettres de noblesse' (2013, 1372–73).4 Ungeheuer cites Répons for chamber ensemble and live electronics (1981–84) by Pierre Boulez as an early example of music that enabled a new, more intensive interaction between the human performer and the machine. Rather than having to mechanically follow a prerecorded tape (concept one) or be subjected to a preprogrammed sound production scheme (concept two), the performer was now able to interact directly with the sound-generating devices, as though playing a traditional acoustic instrument. In the late 1980s, Philippe Manoury composed a series of four compositions entitled Sonus ex machina (1987–91) based on the possibilities offered by a new programming language called Max (now Max/MSP), developed at IRCAM by Miller Puckette in the mid-1980s. Ungeheuer's concepts imply a rough chronological frame: concept one precedes concepts two and three, while two and three tend to overlap in her presentation. In an effort to flesh out the history of live electronic music, Emmerson has identified a series of small but significant technological revolutions, which he calls the three paradigms of development:

- Paradigm 1 (ca. 1950–80): the steady miniaturisation of circuits following the adoption of the transistor, and the subsequent development of voltage-controlled synthesis and processing in the mid-1960s, resulting in (a) the signal processing of a live instrument or voice and (b) the combination of this processed sound with prerecorded material;
- Paradigm 2 (the 1980s): the revolution of the personal computer and the invention of the Musical Instrument Digital Interface (MIDI) protocol, which enabled event processing in so-called real time;
- Paradigm 3 (the 1990s going forward): the quantum leap in processing power of personal computers and the emergence of the laptop, which allowed real-time signal processing, as well as the absorption of most aspects of studio and performance systems. (Emmerson 2007a, 115–16)
Emmerson’s paradigms map on to Ungeheuer’s concepts very well, effectively sharpening the chronological articulation of her categories. Thus, an initial period of development in which analogue technology dominated (ca. 1950–80) was followed by a period in which new digital tools and personal computing replaced earlier equipment (ca. 1990 to the present).
Between these two periods, we have a transition phase (ca. the late 1970s to the early 1990s) in which new digital technology was combined with older analogue equipment.5 Of course, the progressive nature of the story is destabilised, because older concepts and paradigms do not conveniently disappear when new ones arise. 'At each juncture the previous archaeological layer has not peacefully given way to the next, but has carried on, adapting and upgrading its technology' (Emmerson 2007a, 116). For example, after completing his monumental Prometeo, featuring the innovative use of newly developed digital technology, Nono wrote La lontananza nostalgica utopica futura, a 'madrigal for several travellers' with Gidon Kremer, for solo violin, eight magnetic tapes and eight to ten music stands (1988–89). Even though it employs older technology, it would be wrong to understand this impressive work, written in the last years of the composer's life, as somehow going back to an earlier aesthetic. On the contrary, Nono's use of magnetic tape was informed by his recent achievements. Thus, today we are confronted with a complex assortment of live electronic practices that have arisen over the past half century and continue to cohabit and intersect.
Musique mixte – mixed music

If the terms and concepts currently associated with live electronic music have resulted in ambiguous and contradictory discourse, the situation with regard to 'mixed music', the English translation of musique mixte, is even worse.6 Vincent Tiffon, one of the authors in this book, has defined musique mixte as concert music that associates acoustic instrumental music and sounds generated electronically, the latter being produced either in real time during the concert event or prerecorded and projected via loudspeaker during the concert.7 This distinction between the real-time manipulation of electronically generated sounds in concert (temps réel) and sounds fixed on some medium in advance (temps différé) has been consistently present in discourse about musique mixte since the term emerged in the 1960s (Emmerson 2007a, 104). In 1972, Fernand Vandenborgaerde published a short text that elaborated on this distinction and announced that Karlheinz Stockhausen's Mixtur for five orchestral groups, four sine-wave generators and four ring modulators (1964) was the first significant example of live electroacoustic manipulation of sound. He explained that Stockhausen's achievement constituted a response to a problem that had plagued musique mixte from the beginning, i.e. the stark, unrelenting contrast between the acoustic and electronic sound sources that characterised the early works of the 1950s (Vandenborgaerde 1972, 44–45).8 In identifying the problem, he cited a text published three years earlier by his former teacher, Jean-Étienne Marie, who had attended the first performance of Edgard Varèse's Déserts for wind instruments, piano, percussion and tape (1949–54) at the Théâtre des Champs-Elysées in 1954. In Marie's view, the two distinct sound worlds of Déserts were merely
juxtaposed, resulting in nothing more than the 'confused, timid stammering of children', though he hastened to add that the work nevertheless identified 'the path to the future' (Marie 1969, 130–31). During the 1970s, the meaning of musique mixte evolved, and the term began to be used to differentiate new forms of interaction between human performers and electronically powered devices (enabled by digital tools then being developed at IRCAM and other centres) from the older works for performer and tape of the previous generation.9 Marco Stroppa's Traiettoria for piano and electronics (1982–88) and numerous works by Jean-Claude Risset are often cited as new, innovative examples of musique mixte. According to Tiffon (2005b, 27), during this period musique mixte moved away from the aesthetics of collage (confrontation and juxtaposition) that characterised the early works towards one of dialogue. The older distinction between temps réel and temps différé continued to be used discursively. However, by the turn of the twenty-first century, the explosive development of new digital tools rendered it obsolete (Tiffon 2013, 1300). This brief examination of musique mixte suggests that the term is in fact the French expression of what in English has been and continues to be known as live electronic music. The terms are different, but the story is the same. Ungeheuer's concepts and Emmerson's phases of live electronic music easily coincide with the different categories of musique mixte and their historical development.10 Indeed, even the scholarly examination of the respective terms, which took place independently, shows a remarkable parallelism. Vincent Tiffon submitted his PhD dissertation, entitled 'Recherches sur la musique mixte', in 1994, the same year in which Simon Emmerson published '"Live" versus "Real-Time"'. Since then, both have gone on to establish themselves as authorities with regard to the meaning and development of musique mixte (Tiffon 2013; 2005b; 2004 among others) and live electronic music (Emmerson 2012; 2009; 2007a among others). The subtle differences one finds in the respective discussions have more to do with the cultural backgrounds and contexts of the authors than with the music and technology the terms are intended to describe.11 If live electronic music and musique mixte do indeed designate the same music, then the English translation of musique mixte is unnecessary and ought to be abandoned, because it has generated and continues to generate confusion. The term mixed music appears to have emerged in the last decades of the twentieth century, thanks in part to the international success of IRCAM. However, its reception has been 'mixed', to say the least. While some have ignored it (Manning 2013; Collins 2007, 38–54), others have embraced the term and attempted to explain the difference between it and live electronic music (Landy 2007, 154–55; Emmerson 2007, 104–8). Recently, Nicolas Collins et al. have addressed both terms in two separate sections of Electronic Music, implicitly suggesting that the terms denote different categories (2013, 133–34 and 180–91, respectively). Concerning mixed music,
the authors note that though works for instrument and tape constitute the classical model, today the term is also used to designate:

1 the live processing of instrumental sound;
2 music produced by meta-instruments;
3 music using software to provide more flexible playback of prerecorded material;
4 live performance of electroacoustic instruments;
5 machine listening or interactivity;
6 computer-assisted instrumental composition. (Collins et al. 2013, 133–34)

Thus, with the exception of compositions for tape, mixed music can now include any music involving some kind of electronically generated sound, creating a yawning catch-all category that approaches the universal fallacy: a term, a category, a concept or a theory that purports to explain everything explains nothing. Consequently, the editors have advised the authors of this book to use the original term 'musique mixte' and not the English translation. Why do we choose to use a foreign term when plain English is readily available? English authors have been borrowing terms from Italy, France and Germany to discuss music for centuries. The problem with the English equivalents of 'bel canto' and 'Sturm und Drang' is that they erase the cultural and historical connotations of the terms, which are far more important than the definitions of specific words. An example of the problems that arise when literal translations are applied too liberally is the unfortunate decision by Christine North and John Dack to translate musique concrète as concrete music (Schaeffer 2012). No English reader can possibly know what concrete music means, unless he or she is already familiar with Schaeffer's definition of musique concrète, in which case the English translation is utterly useless. In this case as well, we have advised our authors to stick with the original French term.
Live electronic music as performance

For the purposes of this book, we will define live electronic music as performance in which the electronic part has an impact on or is influenced by the performers in some interactive way (Bertolani and Sallis 2016). Donin and Traube (2016, 283) have recently observed that the scholarly examination of musical performance has become one of the most rapidly growing subfields of the study of music and particularly of the creative process. Our book contributes to this literature. Rather than attempting to understand live electronic music as a compositional category, as has often been done in the past, we believe it is best to approach it as a performance practice. Why insist on this distinction? After all, composers usually consider the constraints of an eventual performance when they create their work. By focusing on performance rather than compositional techniques or strategies, we obtain
a more comprehensive and coherent understanding of live electronic music. A performance of Symphonie pour un homme seul for tape (1950) by Pierre Schaeffer and Pierre Henry at the Salle de l'Empire in Paris on 6 July 1951 provides a good example. During the early 1950s, Schaeffer and Henry experimented with a gestural controller that allowed a performer to modify the amplitude of individual loudspeakers in real time from a central position on stage and, in so doing, articulate the performance from a spatial perspective. According to Schaeffer, the goal was to associate musical form with a three-dimensional spatial form, whether static or cinematic.12 He called the apparatus that enabled this a 'pupitre potentiométrique de relief': roughly, a potentiometric desk that enables an acoustic articulation of space (Schaeffer, cited in Gayou 2007, 413). Designed and built by Jacques Poullin in 1951, the apparatus consisted of circular electromagnets placed perpendicularly, between which a performer (in this case Pierre Henry) would move an activating device in and out of the circles. The gestures allowed him to control the sound intensity of the speakers placed around the audience (Teruggi 2007, 218); a schematic sketch of this distance-to-gain mapping is given at the end of this section. In 1953, an astonished correspondent for The New York Times reported sitting in a small studio equipped with four loudspeakers (two in front, one behind and one suspended above the audience) and listening to a performer articulate space:

In the front center were four large loops and an 'executant' moving a small magnetic unit through the air. The four loops controlled the four speakers, and while all four were giving off sounds all the time, the distance of the unit from the loops determined the volume sent out from each. The music thus came to one at varying intensity from various points in the room, and this 'spatial projection' gave new sense to the rather abstract sequence of sound originally recorded. (Cited in Ungeheuer 1992, 152)

The example is pertinent here for two reasons. First, compositional strategies and techniques are not an inherent feature of live electronic music. Symphonie pour un homme seul, a classic piece of musique concrète, was not composed with the pupitre potentiométrique de relief in mind and was initially performed without it. This work is not usually listed as an example of live electronic music, and yet when performed under the circumstances described above, that is precisely what it became for the duration of the performance. Second, live electronic music should not be associated with specific types of technology. According to Johannes Goebel, new digital tools developed in the last two decades of the twentieth century opened up a demarcation between a 'pre-interactive' period of live electronic music (corresponding with Ungeheuer's first concept) and the digital era, which enabled true interaction between performing agents (Goebel 1994, 3–4). Goebel's notion of true interaction and its implicit value judgement constitute a form of 'flat-earth' thinking. As our example clearly shows, musicians did not
wait for the emergence of digital technology to engage interactively with sound in real time. To be sure, the horizon of expectation with regard to live interactivity has changed considerably over the past half century. A reconstruction of the 1951 performance of Symphonie pour un homme seul would no doubt appear quaint to audiences familiar with the complex and polished capability of current digital technology, but to judge an event of fifty years ago by today's standards misreads the past and ultimately hinders our ability to understand the present. Since the nineteenth century, composers and their acolytes have regularly misread and reinterpreted the past consciously, semi-consciously and unconsciously. Richard Wagner's reinterpretation of Beethoven's achievement, through his introduction of the term 'absolute Musik', is only one of a long series of such endeavours. The frequency and regularity with which this takes place does not justify the practice. Thus, live electronic music is not a subgenre of electronic music, nor does it rely on specific technology. The term does not define a compositional type or category; rather, it designates the performance of music using some kind of electronic technology and covers a continuum of practice, 'from the single press of a button to initiate playback, to in-the-moment fine control of all aspects of the music' (Collins et al. 2013, 180). Consequently, to examine live electronic music is to look 'over the whole history of electronic music, since the drive to take such music live has been ever present' (188). In his seminal article entitled 'Live-Electronic Music' (published almost forty years earlier), Gordon Mumma came to the same conclusion, stating that the 'history of electronic music begins with live-electronic music', which for him meant the end of the nineteenth century (Mumma 1975, 287). As the reader will have noted, this timeframe differs sharply from the accounts of most authors, who normally place the beginning of electronic music in the years following World War II. This stark discrepancy reflects the fact that the history of Western music is usually written from the perspective of the composer and rarely from that of the performer. Compositional outcomes have been the backbone of music historiography since it began in the nineteenth century. Consequently, most authors in search of a terminus post quem for live electronic music have inevitably chosen the mid-twentieth century, when the radio stations in Paris and Cologne began using magnetic tape as a reliable storage medium. Composers quickly realised that the medium could be edited, allowing them to intervene creatively with recorded sound. This perspective conveniently ignores the fact that music using electronic devices had been made and performed for a half century already: see, for example, Thaddeus Cahill's Telharmonium, as well as the Theremin, the Hammond organ and the Ondes Martenot, to name but a few (Mumma 1975, 287–91).
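The distance-to-gain mapping of the pupitre, as the 1953 report describes it, can be made concrete with a minimal sketch in Python. This is emphatically not a reconstruction of Poullin's apparatus: the loop positions, the inverse-power roll-off and all function names are invented for illustration, and the historical device coupled the hand-held unit to the loops electromagnetically rather than computationally.

```python
import math

# Hypothetical stage plan (metres): one induction loop per loudspeaker,
# matching the 1953 report of two front speakers, one rear, one overhead.
LOOPS = {
    "front-left":  (-1.0,  0.0),
    "front-right": ( 1.0,  0.0),
    "rear":        ( 0.0, -3.0),
    "overhead":    ( 0.0,  1.5),   # projected onto the horizontal plane
}

def speaker_gains(unit_xy, rolloff=1.5):
    """Map the position of the hand-held unit to a gain per loudspeaker.

    The closer the unit comes to a loop, the louder that loop's speaker;
    a simple inverse-power law stands in for the (undocumented)
    electromagnetic coupling of the original apparatus.
    """
    raw = {}
    for name, loop_xy in LOOPS.items():
        distance = math.dist(unit_xy, loop_xy)
        raw[name] = 1.0 / (1.0 + distance) ** rolloff
    total = sum(raw.values())                 # normalise so gains sum to 1
    return {name: g / total for name, g in raw.items()}

# Moving the unit towards the rear loop shifts the sound image backwards.
print(speaker_gains((0.0, -2.0)))
```

Note that, as the correspondent observed, all four speakers keep sounding at all times; only the balance between them changes with the performer's gesture.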
Examining music that escapes conventional notation

Our definition of live electronic music puts performance at the centre of this book (both physically and metaphorically), which is not to say that we
ignore composers or their perspective. On the contrary, the book begins with chapters by Agostino Di Scipio and Chris Chafe, two very different composers, whose music shares (at least) one important characteristic. Both compose music conceived as performance events rather than as ideal aesthetic objects consigned to paper or some other fixed medium. Indeed, Di Scipio (2011a, 106) has categorically denied that he composes idealised sound objects at all. He presents Two Pieces of Listening and Surveillance (2009–10) by giving a detailed account of a performance, an unusual approach for a composer, but one appropriate for this book. Chafe presents Animal, an algorithm he created to react unpredictably in two pieces of computer music: an interactive installation entitled Tomato Quintet (2007) and Phasor for contrabass and computer (2011). Whereas the algorithm is activated in the former by ripening tomatoes and the presence of visitors, in the latter Animal has to be coaxed into unexpected sound outcomes by the contrabassist.
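Chafe's chapter details Animal's inner mechanism, which the figure captions of Chapter 2 describe as two parallel resonators with the logistic map in their feedback path. As a taste of the kind of nonlinearity involved, the following minimal sketch, which is illustrative only and not Chafe's implementation, iterates the map to produce an erratic control sequence; the function names are invented.

```python
def logistic(x, r):
    """One step of the logistic map x -> r*x*(1-x): chaotic for r near 4."""
    return r * x * (1.0 - x)

def feedback_gain_series(r=3.9, x0=0.5, steps=8):
    """Iterate the map to produce an erratic gain sequence, of the kind
    that could modulate a resonator's feedback."""
    xs = [x0]
    for _ in range(steps - 1):
        xs.append(logistic(xs[-1], r))
    return xs

# r = 2.8 settles to a fixed point; r = 3.9 never settles:
print(feedback_gain_series(r=2.8))
print(feedback_gain_series(r=3.9))
```

Below roughly r = 3.57 the sequence settles into a fixed point or cycle; above it, tiny differences in the starting value diverge rapidly, which suggests how such a map can yield unpredictable, unrepeatable sonic behaviour.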
supposed to write the second part of a co-authored chapter with Chris Chafe. In the end, the authors submitted independent chapters, which can be read as one extended text. Whereas Chafe explores the internal mechanisms of his algorithm, Granzow examines Animal from the perspective of organology and performance. He compares it to the daxophone, a bowed electronic instrument designed, like Animal, to produce surprising and unexpected sounds. Following a brief overview of the field, George Tzanetakis examines the technical challenges of robotic performance. His research, carried out with a team of specialists, aims to produce automated instruments that can function as partners in improvisation with human musicians. His chapter reports on progress, as well as on the challenges that remain. In order to test techniques and methods embedded in his research project, members of the research team composed pieces, two of which are presented at the end of the chapter: Red + Blue = Purple for tenor saxophone and robotic piano (2012) by Tzanetakis and Prepared Dimensions for performer and robotic piano (2012) by Gabrielle Odowichuk and David Parfit. In both works, the creators (who function as researchers, composers and performers) experimented with automated improvisation.

This book was first proposed under the working title: 'Seizing the Ephemeral: The Composition, Performance and Study of Music that Escapes Conventional Notation'. The title was rejected because, given current search-engine technology, potential readers would probably not be able to find the book. It is worth citing here because it constitutes an important thread, linking the last six chapters of the book in which the authors examine authorship, reception, notation, transcription and the study of performance. All of these topics are related to the problematic fact that live electronic music, like much of the world's music, cannot be set in Western staff notation.

Chapters 9 and 10 by Angela Ida De Benedictis and Nicola Scaldaferri, respectively, were initially intended to make up one co-authored text, like those by Chafe and Granzow cited above. In her chapter, De Benedictis looks at how the performance practice of music by Luciano Berio, Stockhausen and Nono has generated problems of authorship and work identity. In the following chapter, Nicola Scaldaferri examines similar questions, but from the perspective of contemporary ethnomusicology. He is particularly sensitive to the impact of recording technology that objectifies performances of traditional music. By fixing this music on stable platforms, the technology provides opportunities to better understand the practice. However, it also raises questions about authorship and work identity that are foreign to the source culture. In Chapter 11, Dániel Biró and George Tzanetakis apply computational tools to recordings of Hungarian laments, Jewish Torah cantillation and Qur'an recitation to identify the pitch content of this music, which cannot be accurately set in conventional staff notation. As such, this chapter can be understood as a bridge. On the one hand, it continues the examination of traditional music that we encountered in Chapter 10. On the other, it presents
automated methods developed to identify pitch content similar to those in Chapter 12. In the next two chapters, Jan Burle and Friedemann Sallis focus on the same object of study: a recorded performance of Luigi Nono's A Pierre. Dell'azzurro silenzio, inquietum for contrabass flute in G, contrabass clarinet in B flat and live electronics (1985). Burle reports on the transcription of the recorded audio data captured on a spectrogram in Chapter 12. Using a combination of automated pitch analysis and close human listening, Burle and student assistants were able to identify the salient musical events of the performance, articulated by both real (the two performers) and virtual (the delay, the harmonisers and band pass filters) voices. The data collection and transcription methods developed for this project present examples of how the study of music is adapting to the new technological environment. Building on the outcome of the previous chapter, Sallis presents an interpretation of the performance of A Pierre in Chapter 13. As De Benedictis pointed out in Chapter 9, Nono's late work is elusive because large portions of it cannot be set in conventional notation, and performances will change from site to site depending on room acoustics. Consequently, one of the only means we have to study this music in its entirety is through an examination of recordings of specific performances.

In the final chapter of this book, Vincent Tiffon examines the performance experience of two very different pieces of live electronic music: …Of Silence… for alto saxophone and chamber electronics (2007) by Marco Stroppa and the digital installation entitled the XY Project (2007…) created by Tiffon and a team of researchers. Tiffon compares the traditional concert presentation of Stroppa's piece with the experience of visitors engaging with the XY Project. The latter is made up of a graduated series of game-like events that force visitors to actively listen and act on what they hear. In this process, and regardless of their previous training, engaged visitors are transformed into 'musicants' (i.e. musical participants).
The companion website: live-electronic-music.com

Discussing music without hearing it is like looking at paintings with closed eyes. A book, as a printed medium, can contain only text and static images. The number of images, their size and resolution are limited, and they can be printed only in shades of grey, lest the book become too expensive to print. CDs or DVDs used to be included with books that needed multimedia: the audio was played in a CD player, the colour images and videos watched on a computer screen. But CD and DVD players are disappearing, both as portable devices and as parts of personal computers, replaced by media streamed over the Internet. Personal computers are being replaced by tablets and smartphones with an Internet connection, available anywhere and anytime. Therefore, we did not produce a CD or DVD as a multimedia companion to the book, but rather created a companion website. The web address
of the website complements the title of this book and thus is easy to remember: live-electronic-music.com. On the website, the interested reader will find, for selected chapters, playable audio tracks of musical works discussed in the text, higher-resolution full-colour versions of printed images and additional material, such as images that for space reasons could not be included in the printed book. The companion website also contains an errata and corrigenda page, where mistakes will be rectified as we become aware of them, and gives the reader the option to send an electronic message to the editors.
Notes
1 The Group's name was clearly an effort to translate 'live electronic music' into Italian.
2 See, for example, Richard Toop's discussion of electroacoustic music, which he subdivides into two categories: tape music and live electronics (2004, 464–66).
3 The same terminological anxiety does not appear in discourse about popular music. From the crooners of the 1930s who used microphones to transform vocal technique, through the electric guitar and the emergence of turntablism, electronic technology has been part and parcel of the worldwide development of commercial popular music.
4 In Chapter 4, Laura Zattra addresses these interactions and presents the different types of agents, notably the computer music designer, that intervene in the composition and performance of live electronic music (see also Zattra and Donin 2016).
5 Hans Peter Haller (1995) presents an excellent overview of this transition period seen from the perspective of the Heinrich Strobel Stiftung in Freiburg.
6 Musique mixte has been translated into Italian (Zattra 2008), but, to the best of our knowledge, not into German. The item entitled 'Elektronische Musik/Elektroakustische Musik/Computermusik' in the Lexikon Neue Musik contains a section on Live Elektronik Musik, but no mention of musique mixte (Supper 2016, 218–26).
7 Musique mixte is 'concert music that combines musical instruments of acoustic origin with sounds of electronic origin, the latter produced in real time – during the concert – or fixed on an electronic medium and projected through loudspeakers at the moment of the concert' (Tiffon 2005b, 23).
8 Echoes of this problem can be found reverberating through the literature down to the present day (Tiffon 2013, 1303–4).
9 A significant part of this shift is no doubt the rise of the spectral composers in France; notably, Gérard Grisey and Tristan Murail had a strong impact on how the composition and performance of music involving electronically generated sound was defined and discussed.
10 Discussions of the terms often cite the same exemplary works; see, for example, Tiffon's list of 111 works (2005b, 40–4).
11 Without indulging in cultural clichés, it is difficult not to notice the Latin desire for clarity and order in Tiffon's finely drawn typologies and precise categories of compositional strategies. By the same token, Emmerson's analysis of 'liveness' as a performance event displays the traditional English preference for pragmatic explanation.
12 'To associate musical form properly speaking with a spatial form, static or kinematic – such is the goal of this first attempt at projection in full relief, that is, in three dimensions' (Schaeffer, cited in Gayou 2007, 413).
Part I
Composition
1 Dwelling in a field of sonic relationships
'Instrument' and 'listening' in an ecosystemic view of live electronics performance1
Agostino Di Scipio

Premises and methodological position

At an early stage in the planning of the present book, the editors circulated a document among the invited contributors. It described the scope of the publication as 'electroacoustic music involving a strong performance component with some kind of human/machine interaction (what used to be known as "live electronic music" and may now fall under the rubric of musique mixte)'. I am happy and sincerely grateful that they considered such subject matter, too often neglected in scholarly musical investigations. Yet, upon reading that particular line, I was partially taken aback. I find misleading the idea that 'live electronics' can be considered today under the rubric of 'musique mixte'. I am inclined to see the two terms as expressive of different conceptual frameworks and different empirical attitudes. In my view, musique mixte implies perhaps a narrower focus, one that cuts down the multiplicity and diversity of performance practices and musical repertoires and keeps itself mainly (or exclusively) to a range of musical works meant for concert presentation with instrument(s) plus electroacoustic sound heard through speakers. In a typical setting, the electronics is experienced as an extension of the instrument aimed at enlarging its possibilities. In comparison, 'live electronics' (albeit admittedly often an abused term) seems to provide a view inclusive of a wider range of performance practices, including some in which music instruments are not necessarily involved and only analogue and/or digital devices are managed 'live' on stage. In this regard, musique mixte may be taken to define a particular category within a much larger context. In addition, live electronics may represent a more appropriate framework for weaving together issues of musicological relevance, a competent understanding of the technologies involved and matters of concern in a broader perspective of cultural studies. Interdisciplinary topics such as 'performance', 'presence', 'liveness' and others – to name only a few subjects I see preliminarily investigated in recent publications (Sanden 2013; Peters
et al. 2012) – may bring the work of interested scholars and practitioners to fruitfully intersect the research agenda of 'sound studies', with its inherent challenge 'to think conjuncturally about sound and culture' (Sterne 2012, 3). Finally, whereas a rationale of 'mixed media' ultimately mirrors the venerable McLuhanian notion of electronic media as a prosthesis to the human body and an empowerment of its performances, live electronics better conveys the fact that electronic media have (for a long time now) become a life environment for humans and thus probably better connects with much broader issues and questions of technology we face daily as citizens committed to music making and music-related research (Di Scipio 1998).

A discussion of live electronics performance practices would demand an insight into several artistic endeavours, and a careful examination of earlier historical, music-analytical and sociological studies on the subject (e.g. Nelson and Montague 1991; Battier 1999; Cremaschi and Giomi 2005; Emmerson 2007a). However, as a premise to the present discussion, I will instead give priority to a perspective that looks at artistic efforts in this area as instances of empirical interdisciplinary research in music performance in general (examples I have in mind include, among others, Impett 1998; Waters 2013; Green 2013 and several authors in Waters 2011). In such a perspective, first-person reports of practical experiences are seen as crucial to illuminating the many facets of live sound and music making. In general, scholarly approaches to matters of electroacoustic music would greatly benefit from a higher 'ethnomusicological awareness' (Di Scipio 1995), meaning a more direct and involved commitment to – or a 'field analysis' of – the practices, strategies and designs under scrutiny, a competent and participatory observation of the productive strategies and their technologies, as well as a competent examination of final products. That seems to some extent inevitable, if we agree that 'the situated act of making music [is] a fertile site for thinking music' (Green 2013, 25).

Task 1 (the instrument)

The bulk of my personal artistic efforts includes, next to performance works and sound installations using live electronic means only, several concert pieces with one or several instruments involved alongside electroacoustic and computer resources. In this chapter, I would like to focus on a work of the latter kind, Two Pieces of Listening and Surveillance (2009–10). On the one hand, the particular work follows smoothly from the 'solo live electronics' of the Audible Ecosystemics series that has already been discussed in other publications (Di Scipio 2003; 2008; 2011a; Solomos 2014). On the other, in many respects it stands on its own. The fact that a musical instrument (a flute) is utilised does not define Two Pieces of Listening and Surveillance necessarily as following the instrument 'plus' electronics paradigm of musique mixte. The instrument acts here as one of the many components of a larger 'performance ecosystem' (Waters 2007). In other words, it works
as a functional element in a network of relationships and interdependencies among several sound-related resources, including electroacoustic transducers, analogue or computer-operated signal processing methods and the surrounding physical space. The notion of 'performance ecosystem' is especially useful 'for the investigation of the complex relationship between performer, instrument, and environment' and provides 'an analytic framework within which to fruitfully refuse easy distinctions between these three apparently distinct categories' (Waters 2013, 122). Therefore, it could be helpful in the examination of music-making practices that, as is typical of inventive approaches to live electronics, imply a reconsideration of the hierarchical duality of 'instrument(s) plus electronics' and its implicit division of labour. (Two Pieces of Listening and Surveillance may serve as an example of this kind.) Moreover, the notion of 'performance ecosystem' could be related to the more comprehensive concept of 'media ecology' (Strate 2006), an interdisciplinary research area that may help us address 'the link between our biological history and our cultural history' (Hallowell 2009, 155).

Task 2 (listening)

Parallel to a discussion of the 'instrument' as a particular site of agency within a more comprehensive ecosystem, I would also like to consider the apparently unrelated issue of 'listening'. Listening is of course structural to any music performing activity; a musician listens to the sounds emitted by the instrument s/he plays and carefully listens to other musicians and sound sources in order to coordinate with them; experienced musicians are attentive to the sounds in the surrounding space and to the resonances of the space's own physical structure. However, I will also consider a different perspective. Listening has a primary role in the embodiment and situatedness of a music performance; it is an actual engagement in the world that is 'productive' (not only receptive) of sound. Moreover, I am especially interested in the fact that the activity of listening is in itself generative of sound, through specific 'tools for listening' and in manners connoted by bodily attitudes and social conventions. Even listening in silence is not entirely without sound. We should consider that the audience itself, as a small community of individuals sharing a time and a place in attendance at a performance, has its own sonic presence.

The intertwining of 'instrument' and 'listening' seems to me crucial for an understanding of performance as a domain of experience bound up in the action-perception cycle – in other words, as a domain of embodied cognition or enaction. Enactivism describes cognition as emerging in the lived interaction between an organism and its environment (Varela et al. 1993). This embodied enactment is lively and indeed performant in a live music performance, the interplay of action and perception mediated by particular tools and coupled to the environment via specific terminals. In live electronic
performance, the sites of mediation may include not (only) separate and independent tools – such as musical instruments – but a larger infrastructure coupled to the environment via terminals such as electroacoustic transducers, besides mechanical devices and body organs, of course. In such a context, listening is perhaps better characterised as both 'a method of exploration' – borrowing the terminology of Voegelin (2010, 4) – and an embodied process generative of the listened-to.
Two Pieces of Listening and Surveillance: Belfast, Sonic Lab (Sonic Arts Research Centre), 26 April 2013, 8:30 pm

A flute lies on a dimly lit small table, in front of the audience, roughly in the centre stage area. A thin cable finds its way through the embouchure hole, ending in a small-capsule microphone deep inside the head joint. No one is nearby. After a lengthy silence, a faint prolonged sonority is heard from two speakers a little removed in the back, a 'hiss' reminiscent of air passing through a small tube. In a few seconds, other flute-like sounds overlap, prolonged but sparse and soft, at different but seemingly recurrent pitches. Eventually they break apart, each in its own turn, giving way to a texture of small sonic droplets that first grows quickly denser and then empties out slowly (it does 'empty out' or 'crumble', not 'fade out'). Before vanishing entirely, the by-now sparser droplets are joined by a new sustained hiss, steady, yet intermittent at times. Then silence. Then the prolonged hiss comes back, with rarer crackles and puffs of tiny sonic droplets scratching its surface from time to time. The counterpoint of these events goes on, with overt or elusive variations, punctuated by silent pauses.

For the audience, this is how Two Pieces of Listening and Surveillance begins. Someone shortly will enter the scene and take hold of the flute, but for the time being the music comes into existence apparently out of nothing as the performance process develops autonomously. How does it make sound in the first place, given that nobody is playing the flute?

Sound (synchronic emergence)

In acoustic terms, the flute is a strong mechanical resonator, a kind of filter that has an amplifying effect in certain frequency ranges. At the beginning of Two Pieces, what resonates and gets amplified inside the nearly cylindrical pipe is only the external noise, i.e. the ambient sound as it enters the flute. Normally such resonance is barely audible. The miniature microphone picks up the internal pressure wave and brings it to a computer. The sound is converted to a digital signal; with a delay of fifteen seconds, it is eventually converted back to the analogue domain and channelled to the speakers. From there, the sound spreads around and also recirculates in the flute, where it is reinforced and filtered again by the flute resonance structure. Then of course it is picked up again by the microphone, converted and
delayed again and eventually diffused through the speakers again; it recirculates across these various stages again and again.

Figure 1.1 Agostino Di Scipio, Two Pieces of Listening and Surveillance. Diagram of the complete performance infrastructure: ambient sound enters the flute, gets picked up by a miniature mic, sent to a computer, processed, and heard through the room, whence it enters the flute again.…

Figure 1.1 illustrates a feedback loop mediated by two acoustic transfer spaces – one internal (the flute's pipe, a very small niche) and one external (the room, or at least the room area where flute and equipment are positioned) – as well as by analogue transducers (microphone and speakers, with their somewhat nonlinear transfer functions) and a computer-operated delay line (a linear, time-invariant system). I should observe that, with a fifteen-second delay time, the sound born of the feedback loop will not present overtly repetitive patterns.2 In its timbre qualities, the sound is heavily connoted by the flute's resonant frequencies, much stronger than those in the surrounding room. However, several details, and ultimately the pitch itself in the sound events thus generated, may also depend on the microphone-to-speakers distance as well as on their orientation. The overall room acoustics have an important role here, in that they may reinforce or decrease the wave transfer from the speakers to the flute. The flute-like quality remains, notwithstanding the fact that no airflow is fed into the instrument save the minimal air tremblings of the ambient and background noise (an alternative title for this piece would be Background Noise Study, with Flute). Perhaps there seems to be more 'breath' or 'hiss' than in a regular flute note, yet this could be the psychological side-effect of not seeing a flutist playing. Noise components we usually denote as 'hiss' are actually very present in typical flute sounds (they are tantamount to their 'fluteness'). Experimental styles of flute playing displace them from the periphery of auditory perception to a more central place, so to speak.

In order for the delayed feedback loop to generate sound, the feedback gain level is kept rather high. However, the actual gain is dynamically managed in the computer in such a way that the sound level cannot increase beyond a given threshold. This is achieved by gradually scaling down the output level as the input level increases. The continuing gain adjustment results
of course in amplitude changes, with either smoother or more accentuated envelope profiles. In addition, it is also used to drive a signal decorrelation between the two speakers, causing slight modifications in phase relations and frequency contents heard as subtle timbre fluctuations. In essence, such a control mechanism implements a kind of time-varying compensation function and thus acts as an adaptive self-regulation at the heart of the sound-making process (Di Scipio 2003). Thanks to it, the delayed feedback loop can incessantly nurture other operations in the sound-generating process (see below), allowing for the emergence of sound events from the material conditions defined by the equipment and the physical environment hosting the performance. This ensures that, even when silence eventually takes over, sound may resurface and keep feeding the overall performance process. Only in exceptional circumstances will the loop be blocked altogether, in order to avoid inflating the overall sound texture to excessively dense or intense levels (see subheading Security measures).

Support and development – diachronic emergence/1

The sound born in the delayed feedback loop is diffused by the speakers as is, but occasionally it is also replicated by the computer at pitches higher or lower than the original. That, however, only happens as long as the amplitude level is less than a stipulated threshold. The pitch-shifted sounds follow from very basic signal processing methods, namely simple resampling mechanisms.3 Resampling implies changes in frequency, but also corresponding and inversely proportional changes in duration. (There are other minor by-products, too, but within limits they do not affect the flute-like appearance of the resampled materials.) These changes in duration allow the pattern of resampled materials to be audibly disconnected from the repetitive frame of the delayed feedback loop and thus articulate a more composite contrapuntal structure. I should observe, in this regard, that no formal or symbolic representations of musical patterns are involved: the computer is not utilised as a sequencer or as a scheduler of different processes in time, but merely as a digital signal processor. All short- and long-term articulations, contrapuntal or other, are born in real time from continuing and concurrent signal-level processes. I should also observe that the computer operations are not the only source of sonic transformations and timbre fluctuations, as the analogue transducers and the mechanical parts certainly play a primary role across the whole sound-generating process. From the speakers, the resampled material spreads around and enters the flute's pipe. There it becomes mechanically amplified and filtered and then picked up by the microphone and diffused again through the speakers. If the total feedback gain is high enough, it will in turn be resampled and frequency scaled. This is a 'recursive' sound signal transformation (recursive processes are ubiquitous in my work).
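The interplay just described – a long delay line kept alive by a rather high feedback gain, tamed by a compensation function that scales the output down as the input level rises, and fed into simple resampling for pitch shifting – can be suggested in a short sketch. What follows is emphatically not the processing specified in Di Scipio's score: the sample rate, block size, linear compensation law, feedback coefficient and all names are assumptions made here purely for illustration.

```python
import numpy as np

SR, BLOCK = 44100, 512                # assumed sample rate and block size
DELAY = (15 * SR // BLOCK) * BLOCK    # the fifteen-second delay line, in samples

def adaptive_gain(block, threshold=0.05, max_gain=4.0):
    # Crude stand-in for the time-varying compensation function: the louder
    # the input, the more the recirculated output is scaled down. Since the
    # microphone already hears the speakers, the input level reflects the
    # loop's own energy, so the loop sustains itself without running away.
    rms = np.sqrt(np.mean(block ** 2))
    return max_gain * max(0.0, 1.0 - rms / threshold)

def resample(block, ratio):
    # Naive resampling: pitch is multiplied by `ratio` while duration is
    # divided by it -- the inversely proportional change noted in the text.
    positions = np.arange(0, len(block) - 1, ratio)
    return np.interp(positions, np.arange(len(block)), block)

delay_line = np.zeros(DELAY)
pos = 0

def process(mic_block):
    # One block of the loop: read the signal written fifteen seconds ago,
    # weight it by the self-regulating gain, write the live microphone
    # signal (plus feedback) back into the line, and return the delayed
    # signal for diffusion to the speakers.
    global pos
    out = delay_line[pos:pos + BLOCK] * adaptive_gain(mic_block)
    delay_line[pos:pos + BLOCK] = mic_block + 0.5 * out
    pos = (pos + BLOCK) % DELAY
    return out
```

In the piece, material of the kind produced by `resample` would be layered into the diffusion chain only while the loop level remains below the stipulated threshold, and it too recirculates through room and flute.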
Deflation and decay – diachronic emergence/2

The outcomes of resampling are not only diffused through the speakers but are also subjected to a further digital signal-processing step. However, the resultant sound is only heard when the amplitude level of the delayed feedback loop sound exceeds a given threshold. The further digital signal-processing step comprises another resampling mechanism, but of a more complicated kind, denoted as 'granular processing' in computer music jargon. It basically involves (1) segmenting the sound signal into extremely short 'grains' or 'sonic quanta' (durations in the order of a few centiseconds, down to milliseconds sometimes) and (2) reassembling myriads of these grains in sonic layers that can be perceived either as very similar or totally unrelated to the original. Given the limits of this chapter, we will leave technical details aside. Also, we cannot discuss the theoretical implications of quantum-oriented representations of sound (Gabor 1947) and their potential for electroacoustic music composition – first explored by Iannis Xenakis in the late 1950s and then by computer music composers and researchers; see Roads (2001) for a comprehensive historical and technical account. Suffice it to say that such granular representations allow unique methods of sound synthesis and processing, particularly in that they can be made to operate across several time scales in the sound signal, including very small ones. In my earlier work, I experimented with granular methods to shape textures of different densities and varying degrees of porosity and granularity, working in the time domain of the sound signal only (as opposed to the frequency domain of the spectrum) (Di Scipio 1994).

In Two Pieces, granular processing transformations are intended to degrade sound: they let it evaporate or tear apart and unravel into sonic dust and gravel of isolated, sparse pulses (these metaphors evidently strive to denote forms of decrease in acoustic energy, which are phenomenologically different from a decrease in amplitude level). In other words, granular processing acts as a 'decay operator'. When the sound of the delayed feedback loop gets thicker and louder, granular processes are activated to 'deflate' it. Their function contrasts with and compensates for the resampling methods, as these are in fact expected to reinforce, sustain and articulate the sound emerging in the feedback loop as long as its level remains below a given threshold. (Notice that in this context signal processing transformations have a truly systemic role to play and are not at all used for their aesthetic potential alone.)

Clearly, as the grainy sound material is diffused by the speakers, it also recirculates into the flute. In so doing, it takes on the spectral colorations of the flute resonance structure and is then picked up by the microphone and sent to the speakers as well as to the computer. In the computer, it is delayed, resampled and frequency scaled, and eventually it is submitted to granular processing. The complete process is both recursive and 'cascaded' (see Figure 1.2). Processes of this kind are ubiquitous in my work.
Figure 1.2 Agostino Di Scipio, Two Pieces of Listening and Surveillance, sketch of the complete process, including the main components of the computer processing (grey-shaded area: bold lines are signal connections, dotted lines are live-generated control signals). Notice the several recursive and cascading sound signal paths.
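Purely as an illustration of the granular 'decay operator' described above – grain sizes, envelopes and the thinning law here are arbitrary choices, not those of the score – a minimal time-domain granulator might look as follows. Dropping a fraction of the grains and scattering the survivors makes a texture 'crumble' into sparse droplets rather than fade out smoothly.

```python
import numpy as np

SR = 44100  # assumed sample rate

def granular_decay(signal, grain_ms=30, keep=0.5, rng=None):
    # Segment `signal` into short windowed grains (here ~30 ms, i.e. a few
    # centiseconds), keep only a fraction of them, and reassemble the
    # survivors at slightly scattered positions: acoustic energy decreases
    # through thinning and porosity, not through a smooth amplitude fade.
    rng = rng or np.random.default_rng(0)
    glen = int(SR * grain_ms / 1000)
    window = np.hanning(glen)
    out = np.zeros(len(signal))
    for start in range(0, len(signal) - glen, glen // 2):  # 50% overlap
        if rng.random() > keep:          # drop this grain: porosity
            continue
        grain = signal[start:start + glen] * window
        shift = int(rng.integers(-glen, glen))
        p = min(max(start + shift, 0), len(out) - glen)
        out[p:p + glen] += grain
    return out

# Applied in cascade with a lower `keep` at each pass, the texture
# gradually empties out: granular_decay(granular_decay(x, keep=0.6), keep=0.3)
```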
Similar technical arrangements typically make it difficult for anyone (including myself) to tell 'input' from 'output', and to separate 'cause' from 'effect'. It is difficult, if not impossible, to parse and dissect its audible manifestations in terms of discrete items and separate units. Sounds emerge apparently out of nothing, and their articulation in longer sonic shapes and gestures, albeit self-consistent and musically coherent to the ear, defies a clear separation and categorisation of constructive items, leaving analysts who use traditional approaches puzzled.4

Network of interactions: autonomous and emergent behaviour

With reference to Figure 1.2, it is important to observe that the granular and resampling algorithms are written to dynamically change in time, as they are constantly updated during the performance. However, no one in the performance manipulates knobs or level faders, nor are there automated (prescheduled, predetermined) time functions driving the parameters. Rather, the computer is instructed to (1) 'follow up' and 'describe' certain time-varying features in the sound picked up by the microphone, especially in terms of energy (root-mean-square measures of the sound signal) and temporal density (number of onset events in a time unit) averaged over shorter and longer stretches of time, and to (2) turn these sound descriptors into controls driving the granular and resampling transformations (in Figure 1.2, control signals are denoted with dotted-line connections, while sound signals are denoted with continuous lines). In this way, sound drives its own transformations and brings forth its own detailed articulation. Indeed, sound is considered here not simply 'raw material' to be manipulated according to independent plans, but rather an audible event that carries information useful for its development in a musical structure (Di Scipio 2003).
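The two-step arrangement just described might be sketched as follows: block-wise descriptors of the incoming sound (root-mean-square energy, onsets per unit of time) are averaged over a shorter and a longer time scale and would then be mapped onto parameters of the granular and resampling stages. The descriptor set, the onset heuristic and the smoothing times below are placeholders, not those of the actual patch.

```python
import numpy as np

SR, BLOCK = 44100, 512  # assumed sample rate and block size

class Followers:
    # (1) 'describe' the microphone signal block by block;
    # (2) average the descriptions over short and long stretches of time,
    #     yielding slowly varying control signals for the transformations.
    def __init__(self, short_s=0.5, long_s=8.0):
        self.a_short = np.exp(-BLOCK / (short_s * SR))  # one-pole smoothing
        self.a_long = np.exp(-BLOCK / (long_s * SR))
        self.rms_short = self.rms_long = self.density = self.prev_rms = 0.0

    def update(self, block):
        rms = np.sqrt(np.mean(block ** 2))
        # toy onset detector: a sudden jump in level counts as one event
        onset = 1.0 if rms > 2.0 * self.prev_rms + 1e-4 else 0.0
        self.prev_rms = rms
        self.rms_short = self.a_short * self.rms_short + (1 - self.a_short) * rms
        self.rms_long = self.a_long * self.rms_long + (1 - self.a_long) * rms
        self.density = self.a_long * self.density + (1 - self.a_long) * onset
        # these descriptors would drive, e.g., grain density, resampling
        # ratios or feedback gain -- sound steering its own transformation
        return self.rms_short, self.rms_long, self.density
```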
This technical arrangement sets in motion another kind of feedback, in the 'control' rather than the audio domain, and acts as a primary source of shorter- and longer-term articulation. A permanent dynamic correlation is established among perceptually relevant sonic properties, which represents in actuality a veritable 'syntax', i.e. a set of inside-time relationships. Music emerges from the interactions of mutually affecting sound-related agencies, sonically coupled with the environment. The complete network therefore implements a kind of 'autonomous' but open-ended and room-dependent sound-generating system. All interactions among component parts can evolve based on features of the sound materials it generates, in a kind of self-organising manner.

To what extent music-generating systems can be said to have 'autonomy' is of course a difficult question to tackle (Bown and Martin 2012). My perspective would be roughly that of the biological and ecological sciences. A living system is 'autonomous' (Greek for 'self-ruling') only through permanent exposure to the noise of heteronomous forces and agencies in the environment. It can only work out its autonomy and its systemic closure (its identity, its Self) via a permanent structural coupling with the environment (a permanent openness to the non-Self), actually elaborating the coupling itself as something not pre-established and fixed but dynamically negotiated and, ultimately, construed or built (Maturana and Varela 1980). An ecosystem is 'a network of interactions between components' (Golley 2000, 31) able 'to retain structure and function under continually changing environmental conditions' (22). The coming into sonic existence and the morphological development of such situated processes take place neither in purely deterministic nor in purely nondeterministic ways: they happen through emergence, i.e. through a complex bundle of phenomena manifesting emergent properties. Ecosystems, and living beings in general, exhibit 'emergent behaviours' to the extent that the dynamics of their network of interactions cannot be understood in terms of the activity of the single parts.5

Once set up and started, the technical infrastructure of Two Pieces manifests itself as a sufficiently dynamical and unsupervised process that is both autonomous enough and heteronomous enough (room-specific, and thus 'situated') to effectively be used as the possible basis for site-specific sound installations. However, it became a performance piece when I decided to use the flute as an artefact offering special physical affordances for a performer to enter the autonomous process and find his/her way through it.
Two Pieces of Listening and Surveillance (same place and date), 8:33 pm

Two or three minutes later, as the autonomous sound-generating process keeps going, a person discreetly enters the scene. Up to this point, s/he has remained in the position of a listener but now resolves to take action. With extreme caution, s/he takes the flute in her/his hands, lifts it – not up to the mouth, but at chest level – and stays as still as possible for a while. A little later, s/he depresses the key closest to the barrel. After some fifteen seconds, s/he depresses the next
key, always very softly. The total sonority seems to change slightly, taking on new colourations that are nevertheless not at all unrelated to what we have heard so far. The action continues with the performer (flutist?) either depressing or lifting the next keys, one by one, every fifteen seconds or so, occasionally inclining the flute a bit, sometimes slightly shifting her-/himself laterally. By the end of the first of the Two Pieces, all keys are down and the tube's length is at its maximum. The sound materials accumulate in the feedback loop as the flute resonance changes because of the subsequent key configurations, smoothing out some frequencies, reinforcing others. A thicker and deeper quasi-harmonic sound field gradually forms as the pipe length is increased. You may think of the flute's pipe as a filter, like the room in Alvin Lucier's I Am Sitting in a Room, except that (1) the filter frequency response changes in time following the subsequent configurations of keys; (2) the filter phase response also changes in time, as the flute is manipulated and imperceptibly moved in space, altering the distance relative to the speakers; (3) sound recirculating in the flute first traverses a portion of the larger room space and (4) there is nothing in particular that must be filtered: nothing excites the filter save the ambient noise and, at later stages, the sound events that originated in the performance itself.

Silent actions and sonic residues

Consider, too, the fact that any deliberate or involuntary contact with the keys and the other flute parts will inevitably have some acoustic effect, albeit a small one. Improperly delivered actions (key clicks, small frictions or tiny impacts against the pipe's external surface, etc.) will resonate in the pipe, louder or much louder than the average ambient noise. The threshold logics I have mentioned above will be particularly sensitive to these tiny residual sounds. In regular flute playing, these sonic residues are in fact always there, but they are usually so much softer than the flute sound 'proper' that they can hardly be heard. In Two Pieces, they are amplified and taken into the overall chain of recursive and cascaded signal transformations. Dusty sonic strias and more abrasive materials will possibly arise from the processing of these leftovers and overlap with the more resonant texture generated by the fundamental delayed feedback loop.

A score exists for Two Pieces. As well as numerous instructions and some schematics aimed at illustrating the particular performative praxis, it also presents a page of graphic notation (Figure 1.3), illustrating the sequence of flute keys to be depressed (black circles) or lifted (white circles). The 'two pieces' correspond to two subsequent readings or scans of this page.6 For the first piece, the key sequence alone is considered. For the second, the key sequence is repeated, but some extra remarks and symbols featured on the page are also taken into account (we will consider these below under the subheading 'Extreme boundary conditions'). The idea is to go twice across the space of sonic relationships in place, but with different boundary conditions and with possibly different results in terms of system behaviours (one may think of it as a form of variation).7
Figure 1.3 Agostino Di Scipio, Two Pieces of Listening and Surveillance, graphic score for flute action (excerpt).
In later sections, we will consider the complete score and its meaning in this context. In the meantime, it is useful to observe that the score says nothing concerning the sonic residues of the key-lowering and releasing action. In a sense, these leftovers are essential to the piece, yet they are 'expected' and not 'prescribed'. They are allowed to happen as casual residues that are structural to the kind of action requested. The score shows a task to be pursued, the pursuit of which demands actions that are typically very quiet if not entirely silent. The imperfections of this 'mute playing' feed the main feedback loop and hence the overall performance process.8
Two Pieces of Listening and Surveillance (same place and date), 8:37 pm

The sound texture is now more intense and more densely articulated, punctuated at times by louder gestures. The flow seems to develop in a random, nonoriented fashion, very fast, not exactly too loud but slightly aggressive. The texture inflates; the overall process inexorably reinforces itself. The flutist seems helpless. This is what the score calls an 'emergency situation'.

Emergency situations

The score instructions inform the flutist that, in a performance, s/he might face one or more 'emergency situations'. An emergency is defined as an excessive accumulation of materials in the feedback loop that irremediably results in a sound fabric that is 'too dense' or 'too chaotic'. 'Too dense' or 'too chaotic' imply of course a subjective and qualitative judgement. More particularly, they refer to sound that has become so dense and randomly articulated as to mask, to the ear, the flute resonance structure as heard through the delayed feedback loop. The implication is that the overall system behaviour is perceived as shifting out of control, adrift and independent of the flute action. Or, conversely, that the actions exerted at some earlier moment were taken without a clear perception of the particular context and broke the delicate acoustic balance between the smaller niche (flute) and the larger one (room). In these circumstances, the flutist has to resort to the 'security measures' the score offers in order to cope with the emergency.

Security measures

The flutist temporarily suspends the sequence of key actions and switches to any of the security measures described in the score. These include blowing into the flute and hitting hard against the keys. In either case, strong pressure waves are created inside the flute. Thanks to the self-regulating amplitude mechanism mentioned earlier, such actions have the consequence of blocking the delayed feedback loop: in a short period of time, no further material will recirculate from the speakers into the flute and the microphone,
and thus no more sound will be let into the signal-processing transformations. Consequently, the chaotic sound texture gradually smooths out, and a better balance between the flute niche and the external room is recovered. If the flutist persists with security measures, the whole sound-generating process will eventually be reduced to silence (a numeric illustration of this blocking effect follows at the end of this section). Perhaps some waste sonic materials will still scratch the surface of silence, but with no further consequences. A few seconds later, the flutist can return to his/her main task and proceed with the sequence of depressed and lifted keys. The piece seems to start from the beginning, but the working conditions have meanwhile changed: a different key configuration is now in place; the flute and/or the flutist is most probably not in the exact same position; subtle changes have meanwhile occurred in the ambient noise.9

Extreme boundary conditions

In the second of the two pieces, the graphic score invites the flutist to repeat the entire key sequence, but this time additional instructions require that s/he also play a few 'trills' (with keys only, of course) and temporarily bring his/her mouth close to the embouchure, blocking it with lips or tongue (no blowing, of course). The latter action modifies the flute resonance structure, introducing new spectral colorations and nuances of dynamics. The former potentially introduces some extra key clicks that, though delivered shyly and with circumspection, will inevitably result in sounds louder than the background noise and in stronger finger impacts. This creates a context of more extreme boundary conditions for the whole performance ecosystem process, rather like instigating unbalanced behaviours, and thus maximising the need to watch over and to guard against possible drifts. Accordingly, the second of the Two Pieces features a higher variety of sound, but at the cost of a greater risk of emergency situations.
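Assuming the normalised form of the compensation law sketched earlier (an assumption made for illustration, not the law actually implemented in the piece), the effect of a security measure is easy to see numerically: a strong pressure wave at the microphone drives the loop gain to zero, so nothing further recirculates and the texture decays toward silence.

```python
def loop_gain(rms, threshold=0.05):
    # normalised form of the hypothetical compensation law sketched earlier
    return max(0.0, 1.0 - rms / threshold)

for label, rms in [('quiet ambient noise', 0.005),
                   ('dense accumulated texture', 0.03),
                   ('blowing into the flute', 0.5)]:
    print(f'{label:26s} rms = {rms:<6} loop gain = {loop_gain(rms):.2f}')
# -> gains of 0.90, 0.40 and 0.00: the loud action blocks the loop entirely.
```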
Two Pieces of Listening and Surveillance (same place and date), 8:40 pm

Approximately nine or ten minutes after the beginning, the flutist suddenly turns the instrument around and blocks both the embouchure hole and the hole at the opposite end with the fingers or the palms. Then s/he stays still for a while. The granular textures that have formed through earlier stages, and all pulse-like or intermittent residues that have appeared, recirculate in the flute again and again, slowly turning into a more continuous and smoother harmonic field. The physical structure determined by blocking the pipe's main apertures reveals deeper resonances, so the total sound gradually takes on larger spectral colourations and a kind of reverberant character. There is a lot of space in the sound now. A variety of sound events have certainly taken place at earlier stages, but now it is space that seems to 'take sound', albeit from within such a small niche.
Like the audience, the flutist now waits and listens to her/his own (non)actions as these let the sound happen and become more and more spacious. After one or two minutes of this, s/he brings the mouth close to one of the key holes and breathes normally, or perhaps just a bit more intensely than usual. This action gradually results in the same effect as a security measure: the sound texture first hesitates, then vanishes, letting the breathing remain the only audible event (only direct sound, not amplified). After a longer breath, the performance ends.
The conceptual tangle of performative praxis, instrument and listening

The score to Two Pieces of Listening and Surveillance is primarily a 'verbal score', i.e. a text with detailed instructions relative not just to the flute techniques but to the entire electroacoustic setup and the computer programming, together with a few diagrams and schematics illustrating technical details. However, as we have seen, it also includes a 'graphic score' (see Figure 1.3), a single page meant to specify a line of conduct for the flutist (the flutist can be either a professional or an amateur and may be the same person who is in charge of the electroacoustics). The verbal instructions mainly explain and illustrate the competences implicated in the particular performance praxis, i.e. in a particular 'way of playing'. The graphic score presents a specially designed action plan that encourages a flexible time frame for action. The latter is somewhat subordinate to the former, however, in that it proposes a sequence of actions the pursuit of which is only meaningful in the particular context described in the instructions. Indeed, it works essentially as a device that varies some of the boundary conditions of the larger ecosystem process, namely, those set by the flute's physical structure.

The graphic score implies a temporal arrangement, a timeline, yet the actual timing remains largely underdetermined. Indeed, (1) the demanded actions can be suspended at any moment because of unexpected emergency situations, only to be then reintegrated a little later, after the application of security measures; (2) at particular spots (such as the beginning, the passage from the end of the first piece to the beginning of the second, and the ending of the complete performance) time is de facto suspended, as graphically indicated by longer fermata signs; in such moments, the flutist attends to the events, listening carefully and anticipating the right moment to take action again; (3) as the instructions make clear, the recurrent frame of fifteen seconds represents a very flexible frame – the 'breath' and pace of the performance will inevitably and considerably vary, especially with the emergence of sound materials that one can hardly hear at earlier stages in the performance. All of this requires of the flutist a sense of timing, a sense for the 'opportune moment', which is different from quantifiable, 'chronological'
time (ancient Greek uses the term Kairòs for the former, as distinct from the latter, Chronos). This aspect of the work also requires a sense of the margin of manoeuvre that is revealed as the actions proceed. These are embodied, performative inclinations that cannot be denoted in words or symbols. The pace of the action cannot be described or prescribed. However, we may consider it to some extent 'inscribed' in the interactional dynamics of the performance ecosystem. We can say it is enacted by the flutist's physical involvement with the material conditions set in place. At more advanced stages of studying and practising, as s/he develops a deeper familiarity with the scope of her/his actions, the flutist can find out more personal, alternative or even improvisatory paths through the sequence of depressed and lifted keys. The graphic score essentially provides a basic, introductory guide useful for carefully approaching and learning the role s/he plays in the particular performance ecosystem. It provides a practical means of learning 'a way of playing', i.e. of developing the competences described in the verbal score and turning them into effective actions.
A sound relationship of man, machine and environment

The normal function of musical scores is to incite a sequence of sound-making actions, however randomly or deterministically delivered, and to delimit a range within which actions may vary in ways that remain consistent with the intended result. In those works where I use a more-or-less conventionally notated score, I typically consider an additional function and think of the score as a possible source of controls over computer operations and related threshold logics. I set aside screen-based interaction and the direct manipulation of computer controllers and let the sound events resulting from instrumental action drive their own computer-operated transformations. Following the same strategy (described above under the subheading Network of interactions: autonomous and emergent behaviour), one first needs to turn the instrument's sound into low-frequency signals, via feature-extraction methods and other signal processing operations, and then to apply these as control signals driving the computer processing of that sound.10 Figure 1.4 shows a page of the score of Two Pieces illustrating some of the computer operations meant to generate several control signals out of the microphone input.11 From a device meant to make sounds, the instrument becomes a device whose sounding outcomes drive their own computer transformation. The function of a time-based score changes accordingly, becoming a source of time-varying controls. Provided the particular signal processing methods yield time-scale changes in the signal (as is the case with the methods described above under the subheadings Support and development – diachronic emergence/1 and Deflation and decay – diachronic emergence/2), the actions demanded by the score can then result in more extended gestural controls and affect developments and articulations over longer stretches of time.12
Figure 1.4 Agostino Di Scipio, Two Pieces of Listening and Surveillance (score excerpt), signal flow chart describing some of the digital signal processing (particularly, the control signal generation).
The reason behind this idea is twofold. In the first place, in its engagement with the instrument, our body is capable of incredibly refined and detailed gestures, usually much more interesting and varied than those allowed by screen-based operations and interactive computer devices. After all, instrument playing has to do not only with sound but also with the body and the extremely complex and finely controlled movements of which its organs are capable. The instrument is adopted, then, for its 'capacity to amplify the detail of tactility, to disclose in full the often unconscious non-linearities of musculature and of the impulses behind them' (Waters 2013, 129). In the second place, the approach invites performers to find a more personal way to engage in the performance, enlarging the scope of the actions for which they are usually responsible. In Two Pieces, as we have seen, the graphic score offers primarily a flexible guide for entering the performance process and finding one's own way through it. Now we can add that the particular way in which the flutist exerts the actions expected of her/him may eventually affect long-term behaviours in the process. As a control device, hence, the score remains far from having a deterministic function, as actual control signals will only materialise through reinterpretations, personal adaptations and bodily inclinations, projecting the details of the flutist's actions into more extended developments in the performance. I often discover new expressive nuances and new structural reorientations when pieces like this are presented by committed performers (see e.g. Bittencourt 2014). Personal attitudes and subtleties in instrument playing leave behind audible traces. These corporeal traces intertwine with the traces left by the site hosting the performance as well as by the technical infrastructure. In an ecosystemic approach to music making, the ultimate goal is to audibly reveal a sound relationship among human beings, machines and the immediate environment (Di Scipio 2014b).
Dwelling and wayfinding

All actions done by the flutist across the performance of Two Pieces remain subject to last-minute changes and subtle variations in scope and intent. Each next move is made whilst listening carefully to the current sounding context and is therefore mediated by one's own subjective perception of past and recent developments. While that is clear from the designed interplay of 'emergency situations' and 'security measures', it actually represents a more general attitude, as the sound events emerging in the process will always influence the flutist's next actions in scope and intent. A larger systemic loop is in place, a 'control feedback' loop: the score demands of the flutist actions that have a role in a larger sound-making process, and the emergent sonorities define the context of the flutist's subsequent actions, possibly inducing a rearrangement or a reconsideration of the potential moves.13 In most cases, the flutist will not really 'lead' the process and will rather act as a system component, albeit a quite important one. At the same time, s/he
has a significant responsibility: everything in her/his actions has an impact (directly or indirectly) on the whole process, often unpredictably because of the nonlinear dynamics of the overall performance ecosystem. Each little action and movement, wanted or unwanted, inevitably enters a larger ecology of sonic interdependencies, with consequences largely unknown in advance. This may evoke the peculiarities of 'chaotic systems' (nonlinear dynamical systems, as found in either natural or cultural phenomena). But more importantly, in my mind, it evokes Edgar Morin's notion of 'ecology of action' (1990) and related issues of human responsibility in the performance momentum.

With Two Pieces, we are plunged into the contingencies of a real-time and real-space network of interactions where everything affects everything else in the medium of sound, in the here and now. Because of such intrinsic 'circular causality', such performative circumstances may be considered problematic – but only from a certain perspective. They might be considered problematic in the sense that the involved relationships of power (what drives what? who is responsible for what?) are negotiated across the performance itself and are not deterministically stipulated beforehand. Each element taking part in the performance ecology actively contributes to the whole, but the constraints and limits of its contribution are also negotiated in the process. In system-theoretical terms, this implies a notion of 'downward causation': emergent behaviours reveal a novel potential in the interplay of system component parts, but also bind and limit the freedom of action specific to the single parts (e.g. Chalmers 2006). However, given the space actually left to personalisation and reinterpretation, and given the essential open-endedness of the process, such circumstances are ultimately less problematic than constitutive of the piece and intrinsic to a more exploratory attitude vis-à-vis the agencies and forces one deals with in the performance.

In the context of Two Pieces, the instrument provides us with ways of dwelling in a field of sonic relationships rather than with ways to reach prescribed goals. Here the music is in the making of a path – not in the final destination or in a sonorous representation of the path, but in the 'wayfinding', to paraphrase Ingold (2000). Each move is made rather rapidly and often just on the basis of a partial and intuitive grasp of the current context, listening to emergent shapes of sound and trying to come to terms with the scenario they open. Yet each move might have important long-term consequences, due to the interconnection and interdependence of all sound-related system components. A politics of risk management – or maybe a 'workmanship of risk' (Ingold 2011, 59) – needs to be considered, perhaps together with a precautionary principle and a balance between innovation (unconcealment of emergent behaviours) and prudence (contemplation of consolidated behaviours).

As the title emphasises, an attitude of 'listening' is expected of the flutist in the first place. During the performance, that may eventually mix with
or degenerate into an attitude of ‘surveillance’, which indeed materialises when things go awry and the implementation of emergency tactics becomes desirable or appropriate. By turning such attitudes into active forces in the lived performance experience, Two Pieces is less ‘about’ listening and surveillance and more ‘of’ listening and surveillance. The performance of Two Pieces enacts the activities that listening and surveillance signify: it does not represent them on an ideal, representational plane but makes them happen in real experience and makes them work as generators of sound and music.
Seizing the ephemeral
Failure is a concrete risk, of course. In principle, failure should not be understood as a catastrophe to avoid at all costs, because the very way it happens may be significant and may contribute to the meaning of what has taken place prior to its occurrence (resonances of Samuel Beckett might be heard here). Coping with the materialisation of failure has a peculiar sounding counterpart. (This was the case a few days before I wrote these very lines, when Two Pieces was presented in a reverberant exhibition space in Naples: I was aware that the whole thing was not going well, but the precise way in which it went wrong was peculiar and apparently moved the audience. I could never have designed such faltering fluctuations and tremors of sound, but in retrospect I have to acknowledge them as belonging to the relational dynamics I set in place). The fragile and fleeting character of my live electronic music ultimately manifests itself as a delicate and sometimes intermittent flow of sound powders and other ephemeral sonic residues – ‘toujours le fruit fragile et éphémère d’une multitude de conditions fluctuantes’ (‘always the fragile and ephemeral fruit of a multitude of fluctuating conditions’; Meric 2008, 208) – because one hears the whole sound fabric strive to come into existence and to keep itself in existence (resisting, however briefly, potentially destructive or dysfunctional factors in the environment and in the technical components involved). One hears that existence is achieved and renewed in the palpable absence of principles of efficiency and in facing relatively safe boundary conditions. The physical environment acts as a source of life-bearing energy and as a site of unexpected perturbations and intolerable pressure. ‘Being situated’ – or ‘being structurally coupled’ with an environment – is essential for a sound-generating ‘system’ to turn into an ‘ecosystem’, but this very ‘being-there’ keeps everything on the verge of vanishing. This reveals a more general character of the situatedness of performance: ‘performance’s being […] becomes itself through disappearance’ (Phelan 1993, 146). The environment is much more than a neutral box in which the music is presented and is rather experienced for what it is: a medium of existence. Living beings are born and supported by that which consumes them and gives them death (at the same time, the environment is transformed and possibly supported by human activities that also consume it and pollute it).
Mutatis mutandis, a similar dialectic holds for other components in the performance ecosystem. Consider analogue transducers and other technical elements: their congenital nonlinearity leaves audible imprints in the resultant sound and can represent either an operator of complexity and a bearer of form, or an unsurpassable limitation, perhaps concealing or banning a wider range of possibilities. Or – to come back to Two Pieces – just think of an old or badly maintained flute, which certainly contributes in ways other than a new, expensive instrument, probably for the worse and for the better. All this makes the ephemeral ‘particularly hard to seize’, so to say, but nonetheless sets the conditions for the ‘live’ attribute of live electronics to attain a more tangible presence and truth. Music materialises as the audible traces of the ‘being-there-ness’ of a lived, embodied relationship to the environment and to the technologies involved in that relationship.
The instrument and the performance ecosystem
In paradigmatic examples of live electronics, performative roles and functions are often reconsidered and reshaped, in ways often independent of pre-established performance norms and the related division of labour. Called into question is the ‘traditional separation of materials, interface and performance’ (Emmerson 2007b), as well as a number of apparently neatly distinct and hierarchically arranged roles – such as ‘composer vs. performer’, ‘score vs. performance’, ‘instrument’ (means for sound-making actions) vs. ‘listening’ (dispositions for welcoming actions and sounds) and even ‘musician vs. instrument builder’. A consideration of relevant live electronic music approaches – from David Tudor and Alvin Lucier to early efforts by Franco Evangelisti (Spazio a 5, 1959–1961), from the late Luigi Nono to Toshimaru Nakamura’s no-input mixer and more current trends in either academic or nonacademic experimental circles – may suggest that what a music instrument is should be defined in actual experience by the functions it has in a larger context of interconnected parts, whose global behaviour both exceeds and binds the behaviour of the separate component parts. In some cases, music instruments may act as extensions to the electroacoustic equipment – perhaps a significant inversion of roles, in consideration of more typical approaches of musique mixte! This is the case with Two Pieces, for example, where the flute provides a special site of access to a larger performance ecosystem with its own autonomous sound-generating possibilities. In this particular case, we also see a quite refined mechanical artefact with its own cultural connotations (a music instrument belonging to the European concert music tradition) largely reduced to the materiality of its mere physical structure. Compared with the predominant interest in electronically ‘augmented’ instruments today, one may argue that this is rather an example of a functionally reduced and deliberately constrained, ‘diminished’ instrument. Other authors work with ‘infra-instruments’ (Bowers and Archer 2005; Green 2013), and
‘found objects’ electronically transduced and turned into music instruments (Delle Monache et al. 2008), in a line of experimentation historically pioneered by David Tudor and others. Traditional instruments are sophisticated technical objects and value-laden artefacts. In a way, they are evidence of a general tenet in the philosophy of technology according to which, far from being neutral, all technical tools are charged with cultural values (e.g. Feenberg 1991). Music instruments are ‘stories’ – they reproduce and transmit human visions, ideals and knowledge, as Berio (2006) used to point out. However, just as a music instrument is not a neutral tool of sound production, neither are the devices and artefacts involved in a composed, studied performance setup. In paradigmatic examples of live electronics, the performance ecosystem is indeed something composed, specially designed and crafted (in some cases, it can even define the identity of a specific work).14 The general idea here is that technologies exert agency and act as scripts for action (Green 2013, 69). Inventive approaches of live electronics seem to acknowledge this notion (and to open our ears to its extra-musical implications) by associating it not only with traditional music instruments (mechanical technology), but with the larger range of technological mediations involved in the performance ecosystem. I call ‘audible ecosystemics’ a perspective of artistic research that consists of designing, experimenting and elaborating the mutual exchanges of sonic energy and sound-related information between and across components of a specially designed hybrid performance infrastructure (‘hybrid’ as it typically includes bodily, mechanical, analogue and digital sites of agency). In the making of Two Pieces, not unlike better-known examples of live electronics, a well-designed composition of reciprocal influences and desirable interactions of system parts is more important than the potential and scope of the single component parts. This is not at all ‘interactive music’ (a term that is often used to mean ‘interactive computer music’). Nor is it music in which separate resources ‘mix’ together, preserving their individual connotations and possibilities. It is instead a music made by ‘composing the interactions’ (Di Scipio 2003; 2008), that is, by creatively addressing the cooperation among the component parts but also their frictions or mutual resistance. It works only in the performance momentum by having all involved parts ‘coalesce’ and become one larger performance ecosystem. Each part trades its individual possibilities for an emergent whole that is more (and often less) than the parts. I subscribe to the idea according to which, in real-world complex systems (either biological or social), the whole is ‘more’ and, at the same time, ‘less’ than the sum of the parts (Morin 1992, 132–133). In an ecologically informed epistemology, while the whole can only be studied as emerging from the interaction of the component parts, each part can only be studied in the context of the emergence of the whole. Taken by itself, a part could present a much richer potential than the one it expresses in a context of mutually constraining and binding interactions. At stake here is ‘not’ a straightforward
opposition between ‘holist’ and ‘reductionistic’ views, but rather a view of ‘complexity’ that largely overcomes that opposition (Varela 1986). I find this view very close to an understanding of what it means ‘to compose’. The compositional elaboration of Two Pieces could not have advanced in abstract terms and ideal conditions; instead, it went through tests and experiments in real-world conditions (including, among others, several ‘pre-premiere’ public presentations). It was largely a question of trying it out in different environments and sharing it with competent interlocutors (Roels 2014). It benefitted from the collaboration of various flutists (professionals and amateurs, myself included among the latter), keen to explore the performance ecosystem and to find their position within it. The situatedness and ‘being-there-ness’ that makes the performance ‘live’ is only achieved by ‘being with’ or ‘living with’ the complete performance ecosystem for some time, working it out while listening to it, making it the place of one’s own embodied auditory and sound-making experience.
Listening
It seems appropriate to conclude this chapter with some remarks on ‘listening’. This is not only because of the centrality it may presumably have in a work bearing the title Two Pieces of Listening and Surveillance, but also for reasons of possible relevance in a broader context. Just as an instrument can be considered a site for the clash or the encounter of different systemic functions and different values and traditions, listening can be considered the ‘locus’ where an entire ecology of perception and action is mediated and overdetermined by diverse cultural and ideological constructions. I would like briefly to consider two perspectives. The first accords listening an ‘instrumental’ function, meaning an active and creative role in the process of sound and music making. The other, in a more materialistic view, considers listening an activity that, carried out in either individual or social situations, has its own sonic reality. It can be experienced as a method of exploration that, in its process, is generative of sound.
Ascoltando
Listening is evidently paramount to instrument playing and to performing music in general. Performers listen to the sounds occasioned by their actions on an instrument and adjust their playing accordingly. They listen to the broader context where such sounds and such actions take place. Everyone listens to the sound or voice of other performers in order to coordinate with them, as well as to the sounds in the surrounding environment. In so-called ‘acoustic rehearsals’, musicians listen to the room’s acoustics and maybe change their playing to let the music better match the reverberant character or other sound qualities of the particular concert venue. These are lines of behaviour at the heart of any serious involvement with artistic
sound-making practices and are denotative of music in general as a domain of embodied and situated cognition. However, and more emphatically, musicians also listen ‘through’ the instrument. They play ascoltando; they play ‘by listening’ (Nancy 2001). In many languages, the word for ‘listening’ also means ‘welcoming’ and ‘accepting’. Listening implies attending, staying in attendance and caring for the events attended to. It is a method of exploration that works by turning faint tremblings of air moving the ears into the phenomenon of some ‘listened-to’ (sometimes the term ‘akoumenon’ has been used instead of ‘phenomenon’, to eschew the dominance of the visual in thinking of sound).15 Musicians have special tools to help them in the process of listening; they experience the instrument as ‘a tool for listening’. What the performer of Two Pieces actually does is to make her/his listening audible to us, and s/he does so by availing her/himself of a flute, now revealed as a tool for listening that produces or affects sound in the process. S/he is constantly engaged in attending to the unravelling of the flow of sonic relationships that s/he takes part in, but in order to have a ‘perception’ of such relationships s/he can only take ‘action’ and do something in and to the medium of those relationships, i.e. sound – hopefully acting at the right moment and with adequate determination. While s/he plays ‘by listening’, s/he also listens ‘by playing’. The two are closely intertwined. In this enactive engagement, action and perception are reciprocally involved and structurally coupled (in fact, cognitive scientists speak of the coupling and the reciprocal determination of the human body’s sensory system and motor system). Of course, this type of dynamics is proper to any performance and also represents a general aspect of what it means to play an instrument. What is particular to Two Pieces and other examples of live electronic performance is that the interplay of action and perception is coupled with the physical environment (a small niche in the larger space) neither directly nor through a specific interface or instrument, but rather through the designed mediation of a larger technological network (a minimal infrastructure in the larger, overly technologised world in which we live). In the ‘situatedness’ of performance, the embodied tangle of ‘instrument’ and ‘listening’ reveals dimensions of ecological relevance as dimensions of social relevance, and vice versa, because both the physical space and the technological environment are culturally connoted. A notion of sound seems inherent, here, that defines it as a medium in which we possibly make distinctions and grasp relationships in the surrounding space as these become audibly manifest to us (i.e. as ‘we’ turn them into ‘akoumena’). This constructivist notion indeed implies an ‘ecological’ dimension (the relationship between a subject and its physical environment) and a ‘social’ dimension: in the medium of sound, we can grasp the cultural and technological conditions by which the sound event is born to perception. In turn, the latter opens to a ‘political’ dimension: we act (or avoid acting) in the medium of sound in order to reinforce or modify the relationships
by which the sound event is born to perception. If listening is a method of exploration, it is a method of exploring and disclosing the margin of manoeuvre in our way through both domains of human experience – the one ‘situated’ in the physical environment and the other ‘situated’ in the technological environment, i.e. in a vast array of social bonds. If sound is not ‘what we hear’ but ‘what we hear in’ (Ingold 2011, 138), then the question is: what do we hear ‘in’ it? In sound, we hear the mesh of relationships and the reciprocal determinations of agencies and forces by which sonic events are born and channelled. Sound is a cognitive medium in which the (audible) traces are inscribed that are left by the power relationships of which sound events are born. More particularly, sound is a medium in which the material and ideological conditions are inscribed that negotiate the lived experience of music. One can consider such observations as the subject matter of a possible ‘biopolitics of music’ (Di Scipio 2012; 2014a). An ecosystemic perspective on music making brings us to disengage our ear from the current pervasive ideology that considers sounds as separate, object-like entities. It deconstructs the attitude of listening to ‘sound as such’ and reinforces instead our grasp of sound’s ‘intrinsically and unignorably relational’ nature (LaBelle 2006, ix). It reveals the reifying and antiecological element in the pervasive notion of the objet sonore (Kane 2007; Di Scipio 2014a) and emphasises in contrast that ‘sound is never about the relationship between things, but is the relationship heard’ (Voegelin 2010, 49). It bears the traces of the material mediations of which it is born, including culturally determined mediations such as technological ones. In this field of audibly experienced relationships, we should count our own intimate, bodily relationship to sound.
Listening to listening
As a composer and sound artist, I am becoming increasingly interested in providing experiential analogues for the fact that listening is an embodied activity that determines the listened to while being determined by it (Di Scipio 2011a). Sound only exists as a function of our bodily presence to it. The activity of listening embeds us in a web of mediations and relationships that brings forth sound as some listened-to event. Indeed, the listener is always part of the listened to, although his/her cognitive dispositions always efficiently distinguish his/her own audible traces from those left behind by other sounding bodies. ‘World and perceiver specify each other’ (Varela et al. 1993, 172). A dynamics of Self and non-Self is in play here, which opens up a constructivist epistemology of listening that I am trying to address in my current artistic efforts. In sound-installation contexts, such dynamics reflect the position of the visitor/listener vis-à-vis the adaptive or sensitized sonorous environment s/he explores. In ecosystemic installations like Untitled (2005), Stanze Private (‘private rooms’, 2008) or Condotte Pubbliche (‘public conducts’, 2010–2011),
one cannot typically experience the sound work ‘as such’, because one’s own physical audible presence alters the sound-generating process that implements the work. One can only experience the work as affected or biased by one’s own presence (Di Scipio 2011b). No specific interaction is expected, no interface device is there to be directly manipulated or played with and the visitor/listener simply experiences his/her own sonic presence in the installation ecosystem through the traces left by his/her being there and being part of it. Listeners experience the listened to as the latter results from their listening dispositions and ultimately attend to their own sonic presence through the particular environment. That is actually what normally happens (or should happen) in anyone’s auditory experience of the surrounding world. Elsewhere (Di Scipio 2011a; Mazzoli 2011) I have described this line of research as creating auditory analogues of the very general fact that all living systems build up and develop their identities through a permanent exposure to the environment, as the epitome of the Other. The same discourse returns as relative to the position of the performers vis-à-vis the dynamic field of sonic relationships of which they are part in performative contexts.16 In current efforts, I also focus on the fact that, in its own right, listening sounds – i.e. it never comes without its own noises, usually connoted in individual bodily attitudes and social conventions. Even silent listening is not entirely without sound. In Audible Ecosystemics n.3a (Background Noise Study) the ‘amount of quiet’ in the space hosting the performance is a very important element (Green 2013, 36). In Audible Ecosystemics n.3c (Background Noise Study, with own sounds), the incidental sounds feeding the performance ecosystem include very reduced but inevitable noises caused by the electronic performer as s/he supervises the performance and manages the equipment. In a recently started project, Koinoi Topoi (‘common places’), the only sound source is the feeble whirring or buzzing produced by the earbuds worn by performers as they listen to selected music tracks from their MP3 or other portable players. In the latter case, ‘performing’ consists literally of music listening (through very cheap and heavily connoted tools for listening). Hence, the audience listens to people listening. But already in the performance of Two Pieces, as we have seen, what the audience listens to is the flutist’s own listening experience as s/he explores the ecosystem s/he is part of. Any action of listening makes its own sound; different manners of listening make different sounds and come with different tools for listening.
Conclusions: an intermediate field between music and sound ecology
In the preparation of the present chapter, my initial plan was to sketch an overview of general aspects of my work as a composer and sound artist. Eventually, I resolved to focus on a particular piece in order to provide a more detailed discussion of performance practices. The choice of the particular example followed from the idea that it could be interesting, for the
purposes of the present publication, to consider a work where a traditional instrument is utilised, along with electroacoustic and computational resources. Central to Two Pieces of Listening and Surveillance is a network of sonic interactions among parts of a larger and hybrid technological infrastructure (i.e. including mechanical, analogue and digital means) structurally coupled with the surrounding room environment. In this particular case, a flute is involved as a structural component and a site of agency within the overall network process. Upon consideration of this ‘performance ecosystem’ and the role played by the instrument within it, we have seen that the particular approach defies many of the characteristics expected of an example of musique mixte. The whole technical infrastructure works as a largely autonomous sound-generating ecosystem, prior to any action on the flute. As a part of the whole, the flute seems almost reduced to an extension of the electronics and is largely misused or at least underutilised, with regard to the multiplicity of conventional and ‘extended’ playing techniques we know. The flutist’s actions are bound to deviate constantly from the main track illustrated in the score. Principles of efficient routine and programmed action are replaced by principles of emergency and risk management. The whole performance develops from ambient noise (accumulated in a delayed feedback loop) and from fleeting sonic residues of (silent) actions exerted on the flute. A work like this simply does not match the operative prerequisites of the framework of ‘musique mixte’ and indeed undermines the apparently obvious division of labour inherent in the related performative practice. A more generic or less partisan framework of ‘live electronics’ appears more appropriate to elucidating the ecosystemic dynamics implemented and the emergence phenomena such dynamics bring forth. An accent on ‘practices’ usually reflects an intent to relativise a strong concept of ‘the work’ and the role of prescriptive documents such as ‘the score’. Yet, as we have seen, Two Pieces does have a score, consisting mainly of a list of detailed instructions (a ‘recipe’ if you like), accompanied by diagrams and technical schemes. It defines and describes the complete process and the performative praxis involved, independent of any specific arrangement of events on a timeline. A graphically notated plan is also included, illustrating a sequence of actions to be exerted on the flute but not their sounding manifestation or their precise timing. The function of this graphic score appears subordinate. Demanded of the flutist is in fact an exploratory attitude in pursuing his/her task and a sense for timely action that can hardly be notated or verbally illustrated. Whereas the pace of the performance can be neither prescribed nor described, we may consider it to some extent ‘inscribed’ or ‘implicitly coded’ in the performance ecosystem as a composed, carefully designed infrastructure with its own time-varying emergent behaviours. In sum, the performance ecosystem in its own way acts as a script and captures the dynamical, often uncertain (weak?) identity of the work – the fact that it is Two Pieces of Listening and Surveillance and not
any other work – to a much greater extent than a time-based score could do. There we see a general feature of ‘live electronic’ approaches, as essentially different from musique mixte. Already in Tudor’s Bandoneon! (1966), for example, ‘the electronic components […] create a configuration that imposes a set of material constraints on a musical performance’, so much so that ‘the defining concepts [of the work] are lodged more and more “inside electronics”’ (Kuivila 2001, 2 and 7; emphasis mine). But that is perhaps what live electronic music tells us of music in general: a musical work is never of a thing-like or substantial nature, but is relational and dynamical, and its identity is specified to a large extent by creatively designed and crafted configurations of media and practices. The performer’s task – which I have characterised as one of ‘dwelling in a field of sonic relationships’ – represents neither an improvisatory task (one cannot act ‘without thinking’) nor a deterministic task (to a very large extent, one cannot truly follow a given path, let alone predetermine a particular final goal). It is rather like finding one’s place and one’s way upon attentive consideration of the traces resulting from one’s own interaction with other agencies, possibly catching the opportunities such traces provide (i.e. the opportunities one actually provides him-/herself with, interpreting the traces of action and the environmental perturbations). In Two Pieces, each move on the performer’s part enters a whole ecology of action, with short- or long-term consequences, often unexpected. In such circumstances, criteria of ‘facture’, while maybe not given up entirely, are heavily mitigated by criteria of responsibility. The action-perception feedback cycle is constantly negotiated under real-time and real-space performance constraints. I have insisted (under the subheadings Ascoltando and Listening to listening) on the intertwining and even the tangle of ‘instrument’ and ‘listening’, viewed as an embodied and situated experience of the physical and cultural relationships inscribed in the medium of sound. The situatedness of any embodied experience entails ‘a more encompassing biological, psychological and cultural context’ (Varela et al. 1993, 173, emphasis mine). We may perhaps think of the ‘ecosystemic perspective’ I am illustrating here as a ‘champs intermédiaire entre musique et écologie sonore’ – an intermediate field between music and sound ecology (Solomos 2012). Indeed, beyond personal aesthetic inclinations, the definition could refer to a broader range of sound-making practices and related research where we – as practitioners and researchers – try to interrogate and possibly overcome the entrenched division between the physiobiological and the sociocultural environments and to address and appreciate their intimate communion.
Notes
1 Thanks to Simon Waters, composer and flutist, for setting the conditions for the Belfast concert performance mentioned in this chapter (where I myself performed the flute part…). Thanks to flutists Manuel Zurria (who premiered Two Pieces in the living room of the Fondazione Scelsi, Rome, 2012), Tommaso Rossi (who loaned me a flute of his for years) and Gianni Trovalusci (who got me
started on Two Pieces, in 2009, and provided his precious collaboration in the making of a related sound installation work, Condotte Pubbliche).
2 When I make use of delay lines as structural building blocks, I typically set rather long delay times in order to avoid inducing auditory groupings of simplistic regular rhythms as much as possible. Following early research work in the perception of rhythm (Fraisse 1974), the minimum delay should be longer than the so-called ‘length of the present’ – i.e. it should be at least ten or twelve seconds.
3 None of the signal-processing methods involved here is particularly elaborate or demanding in terms of computational load. The ways by which they connect to each other (and to other component parts of the overall process) are certainly more elaborate.
4 In a brilliant discussion of the Audible Ecosystemics series of works, Meric and Solomos (2011) carefully considered the difficulties such works raise for most common music-analysis approaches and related music-theoretical implications. The same holds for the discussion presented in Meric (2008), particularly focused on Audible Ecosystemics n.3a (Background Noise Study) (solo live electronics, 2005) and the sound installation Untitled 2005.
5 Here I’m using ‘emergence’ not only as a term of common language, but also as referred to research in nonlinear dynamical systems and related approaches in the biology of cognition (Maturana and Varela 1980; Varela et al. 1993). In Di Scipio (2008), various emergence phenomena are described as they take place in the performance of Audible Ecosystemics n.2a (Feedback Study) (solo live electronics, 2003) and other works. Solomos (2010) has discussed ‘emergence’ in my work.
6 I say ‘two readings or scans of the page’, yet I should also note that this graphic score is so simple that a performer can and should easily memorise it, avoiding score gazing during the performance.
7 The same concept is used in Audible Ecosystemics n.3b (Background Noise Study, in the Vocal Tract) (one or more ‘mouth performers’ with miniature microphone and electronics, 2005), and 3 pezzi muti (piano and electronics, 2005–2007).
8 Strategies of ‘mute playing’ are central in 3 pezzi muti, Texture Residue (ensemble and electronics, 2006), Two Sound Pieces with Repertoire String Music (string ensemble and electronics, 2012) and 3 stille stücke (string quartet with optional amplification, 2005–2009).
9 I have elaborated a similar interplay of ‘emergency’ and ‘security measures’ in other works, such as those cited in note 7. An interesting discussion of the particular subject is in Schröder (2011), where the distinction between ‘emergence’ and ‘emergency’ is also addressed.
10 References to ‘feature-extraction methods’ and ‘timbre descriptors’ are innumerable in the literature of computer music and digital audio engineering. However, the transformation of features extracted from a sound into control signals driving the processing of the same sound is very rarely addressed (a small exception is in Zölzer 2002, 476–478). Some strategies are presented in my written publications (Di Scipio 2003; 2008).
11 The computer-processing schematics I use in my scores are generic and ‘machine-independent’, i.e. they do not refer to any particular programming language or system. They include basic signal-processing algorithms that can be implemented and ported across a variety of suitable computer programming languages.
12 The idea was worked out in a rather deterministic fashion in the composition of more conventionally notated works such as Book of Flute Dynamics (flute and digital signal processing, 2000) and Due di Uno (violin, recorder flute and digital signal processing, 2002–2003). In Pulse Code (2002–2004), pulsed sequences played by a percussionist are interpreted as binary words (‘ons’ and ‘offs’) and thus made to control computer programming operations.
13 I have developed this notion of ‘control feedback’ since the mid-1990s, particularly for compositions for small ensemble and electronics, e.g. Texture Multiple (3–6 instruments and room-dependent signal processing, 1993) and 5 difference-sensitive circular interactions (string quartet and room-dependent signal processing, 1998). The idea returns in more recent works, e.g. Modes of interference n.1 (‘audio feedback system with trumpet and electronics’, 2006) and n.2 (‘audio feedback system with saxophone and electronics’, 2007).
14 In David Tudor’s live electronic music, as is known, each work was identified with a specially designed electroacoustic setup, independent of any particular manifestation of its sonic potential and independent of any particular sequence of sound events. In his case, diagrams illustrating the electroacoustic devices and their connections took on the connotation of an autograph score: ‘the circuit – whether built from scratch, a customized commercial device, or store-bought and scrutinized to death – became the score’ (Collins 2004, 1).
15 The term ‘akoumenon’ (Greek for ‘the thing heard’, the audible appearance) is used in Derrida (1967), but only in passing, and returns in a more articulated way in the context of the phenomenology of sound sketched in Smith (1968). It is found again in a number of more recent contributions at the border between philosophical discourse and more theoretically inclined sound studies, yet no author seems to turn it into a stable substitute for ‘phenomenon’ (‘the thing seen’, the visible appearance).
16 I am assuming the distinction of ‘performance’ and ‘installation’ as an obvious one, but only for the sake of brevity. An interesting discussion can be found in Davis (2008), where the main focus is on Alvin Lucier’s work. In Schröder (2008), examples of sound art ‘by composers’ are discussed as presumably different from sound art ‘by sound artists’.
2 (The) speaking of characters, musically speaking
Chris Chafe
If you’ve ever sat in a forest or a garden and sensed the plants breathing, you’ll appreciate how the exhibit heightens and celebrates this sensation. (LaTempa 2007)

The computerized sounds were spacey and sometimes menacing, sounding at times like Chafe was trying to tame an evil subterranean beast. (Ying 2011)
The pair of reviews above caught the moods and temperaments of a custom-designed computer music synthesis algorithm, Animal. The works described capitalise on Animal’s great expressive range and use it to give voice to musical characters. The former refers to an interactive music installation, Tomato Quintet (2007), and the latter to Phasor (2011) for contrabass and computer. This chapter will discuss how Animal’s moods and temperaments arise from its dynamics and dynamical response in performance. It will also situate these pieces between poles of new and traditional media and compare how Animal has been adapted to each. ‘New media’ will be used here as a label for data-driven art and digitally produced works in the millennial period. The musical characters achieved in these two pieces are different faces of a single, identifiable instrument. In the following, we will examine their dichotomous personalities. In the installation piece, updates from environmental sensors near vats of tomatoes are mapped to Animal’s parameters so we can listen to the tomatoes ripening. For Phasor, signals from a sensor bow are used to ‘play’ the algorithm. Different strategies for performance and different roles for their audiences distinguish the two works, but manipulation of the Animal system is a central element in the construction of both. Tomato Quintet is performed by its tomatoes and by its audience, inviting interactive participation, which builds understanding through ‘hands-on’ manipulation, whereas the audiences of Phasor are observers and rely on the soloist to do the manipulations, coaxing the system and exploring its qualities. Tomato Quintet is an exhibit that foregrounds with a singular focus the ripening process of tomatoes. Gas sensors monitor this process and
computers translate the gas levels into sound and graphs. My collaborator, Greg Niemeyer, calls it a ‘new media still life’ since very little seems to change, at least when taken on the time scale of the gallery goers. The sensors pick up the ten-day increase and then decrease of carbon dioxide and ethylene as fresh-picked green tomatoes redden and die. Viewers can interact by blowing on the sensors and prodding the system into real-time reaction. However, the exhibited process is so slow that it is essentially imperceptible until the listener/viewer lets go of the ‘now’. The extreme mismatch between the process speed and human perception contrasts with a much faster-paced work, Oxygen Flute (2002), in which the life-giving exchange of carbon dioxide and oxygen between plants and humans is made perceptible. That work gives sound to respiration and photosynthesis in real time and makes the human element a central object (Chafe 2005, 220). Tomato Quintet II is a second version in which the initial form of the exhibit, which was largely about observation from outside and ‘slowing down’, is transformed into one enjoining participants to observe themselves from the inside (like Oxygen Flute). The human element is objectified by enclosing both the tomatoes and the participants in a five-armed tent in which tomatoes are set to ripen. As they ripen – or if visitors breathe on them – the tomatoes trigger CO2-sensitive sensors that cause salsa music to play and coloured lights to flash. The installation’s visitors dance to ‘ripening melodies’ from Animal’s gas-level sonifications and to rhythmic music synchronized with disco lights. The new version was featured in the San Jose Zero1 Biennial ‘Build Your Own World’ and again in the Beijing National Art Museum of China (NAMOC) Triennial ‘Translife: Media Art China 2011’, where it was shown in the context of a broad discussion around new media. If ‘Translife’ poses numerous tough, even uncomfortable, questions, its biggest challenge is perhaps to the notion of art itself. Fan Di’an, director of NAMOC, acknowledges that some see the show as an effort to popularise science and technology rather than as an art exhibition, but he disagrees with this view. ‘I think New Media as art is not really understood by the public’, he said. ‘This is scientific art and it is also artistic science’. Zhang Zikang, the curator, went further. ‘Art is at a crossroads’, he said. It has exhausted its possibilities and needs to expand. ‘Representational art is past’, he added. ‘Even the most avant-garde art is past. New media art is real-time art – it is not signifying something. The media itself is the content’ (Melvin 2011). Tomato Quintet’s ‘delicious reddish spheres’ and Phasor’s ‘evil subterranean beast’ are characters that get their voices from Animal; it is the medium through which they speak. Cast in the ‘post-human’ milieu of Zhang’s ‘Translife’ Exhibition, Animal as medium is manifested as the work’s content. On the other hand, by shifting the definition to a contrasting pole (one describing a more ‘human’ and less ‘post-human’ context) the primary content manifests itself as empathy with these beings. The tomatoes (wired
up and ‘singing’ for Tomato Quintet) and the evoked beast (a metaphorical invention of one reviewer hearing Phasor) are musical characters, foregrounded explicitly in the one case and implicitly in the other. Without humans in the loop, either as observers or as observed, such empathy cannot exist. At the end of this chapter, we will turn to another work in which the ‘human’ over ‘post-human’ dialectic will completely dissolve. Tomato Music (2008; derived from Tomato Quintet) fits ideally into Zhang’s ‘media is the content’ proposition. It is a data-driven concert work, bereft of agonistic character. At the conclusion of this chapter, I will examine the lack of ‘musically speaking’ characters in Tomato Music and ask: in the absence of characters (virtuosic or otherwise) can expression exist? Expression comes into the game when a musical voice communicates a musical ‘something’. This could be a melodic construction or gestural figure. It is a moment in which sound contacts our feelings. The communication takes place through a transmitting character, for example, a flautist (our agonist) plays a melody, which the composer has deftly assigned to her/him at a particular moment in a composition. Or perhaps the flautist works in an improvisational context in which one time and one time only s/he plays the most expressive accumulation of notes and articulations to bring the performance to a climax or conversely to quietly close it. These are hypothetical illustrations, but they symbolise expressive possibility in music. Overt communication of such emotional messaging has even been tied to evidence of physiological changes in the listener such as the ‘frisson response’ (Huron 2006, 282–283). Descending as far as we wish, dissecting performance to the most micro-time scale, expression might be found in a moment of felt emotion covertly evoked by one part of one note that is brilliantly changed, perhaps the most modest modification of the flautist’s comportment, but musically thrilling in ways difficult to put into words. No list of these expressive ‘somethings’, or even a list of the ‘types of somethings’, could ever be complete.
Easy ‘instrument-ness’
The physical components responsible for carrying musical ideas are of interest in studying Animal’s application in the two works. A subdivision into instrument and performer helps us approach the works’ systems and is in accord with a ubiquitous paradigm that has bisected computer music systems since the field’s earliest days: the instrument is what sounds when manipulated by a performer and the performer is responsible for communicating ideas (Mathews and Miller 1969, 35–36). By virtue of its digital signal-processing (DSP) properties Animal acquires ‘instrument-ness’ of a particular kind. The performers are tomatoes, gallery goers or a musician. The corresponding ideas are biological process, inquisitive manipulation or those with musical import. The DSP of Animal can be categorised as a physical modelling abstraction and as such it has an antecedent in a project integrating physical
Figure 2.1 The Animal algorithm comprises two parallel resonators with the logistic map in their feedback path.
model families – Perry Cook’s ‘meta-physical’ model (Cook 1992, 275) – and another in a musical work, which recombines physical model components in physically impossible ways (Burns 2003). DSP designs are open to inclusion of mathematical ‘parts’ from other domains. In Animal’s case, the logistic equation has been borrowed from population biology (May 1976, 460). Figure 2.1 shows the entire DSP algorithm for Animal. Algorithms using additive, subtractive or modulation-based synthesis (wave shaping and frequency modulation) can be factored into multiple instrument identities. For example, a given synthesis technique can be used for both percussion and woodwind simulation. A unique and specific ‘synth instrument’ using one of these general-purpose techniques represents a particular algorithm and set of algorithm parameter tunings endowed with an instrument identity. The Animal algorithm is not derived from a general-purpose technique, nor does it extend to more than a single identity. Its identity is intrinsic to its physical model technique. To take this a bit further for the sake of clarity, frequency modulation (FM) can be used in many algorithms (or ‘patches’). The timbre possibilities have produced a magnificent range of synthetic instruments across many families (from brass, winds, percussion and vocals to new identities). Tuning or ‘voicing’ a particular FM algorithm to match an identity is an art in its own right requiring experimentation, specialised knowledge, intuition and even some amount of luck. These ingredients are described in a primer written by Chowning (the inventor of FM) and Bristow (1986, 140–159), who produced banks of successful voicings for the Yamaha DX-7 FM keyboard.
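For readers unfamiliar with the technique, a minimal sketch may make the ‘patch’ idea concrete. The following Python fragment is the textbook two-oscillator FM voice, not a reconstruction of any DX-7 voicing; the carrier frequency, carrier-to-modulator ratio, modulation index and envelope are illustrative values chosen here, and ‘voicing’ amounts to hunting for parameter sets of this kind.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def fm_tone(dur=2.0, fc=440.0, ratio=1.0, index=2.0, amp=0.3):
        # Chowning-style FM: a sine carrier at fc whose phase is modulated
        # by a sine at fc * ratio. The ratio selects the spectral family
        # (harmonic vs. inharmonic partials); the index sets the brightness.
        t = np.arange(int(dur * SR)) / SR
        env = np.exp(-3.0 * t)  # simple exponential decay envelope
        mod = index * env * np.sin(2 * np.pi * (fc * ratio) * t)
        return amp * env * np.sin(2 * np.pi * fc * t + mod)

    # Integer ratios yield harmonic, instrument-like tones;
    # irrational ratios yield bell-like, inharmonic ones.
    brassy = fm_tone(ratio=1.0, index=5.0)
    bell = fm_tone(ratio=1.4142, index=8.0)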
It is often a twofold quest to create both a coarse identity (some kind of distinguishable instrument) and then a more sharply defined variant. Voicing real pianos is analogous to this final aspect. Piano technicians will adjust the felts and touch to achieve a capability for rendering subtle shades of expression. Luthiers make similar adjustments to stringed instruments. As a cellist, I recently had the experience of comparing a couple dozen cellos in a two-hour sitting; all were priced at the same mid-level. They were well-made, excellent-sounding instruments, and the experience impressed upon me the fine-grained differences underlying the unique personality that each possessed. Their differences resided in their respective timbres or in their responsiveness to my playing, affecting the ease with which I could evoke a full palette of expressive tonal qualities. Overall, an identifiable personality seems to be a complex mix of static qualities and dynamic responses. The latter aspect, which is exposed by parameter deflections in performance, is what makes Animal come alive.
Physical ingredients
Animal is a nonlinear difference equation solved in real time to produce a stream of audio samples. Computational studies of this kind (but not in real time) were extended to musical instrument acoustics in the early 1980s in the work of McIntyre, Schumacher and Woodhouse (1983), who showed that sustained oscillations of the edge-tone instruments (flutes), reed instruments and bowed strings were the result of negative feedback systems. The production of musical tones (and a variety of other sounds) was accomplished by setting up a model system of equations and iterating it one audio sample at a time. Like Animal, these models consist of a resonator (analogous to an air column or string) coupled with a nonlinear excitation mechanism (the mouthpiece or bow) through which the system can be driven by an external force (the player). The output of the nonlinear element feeds back to its input after passing through the resonator. During the same period that physical models of this kind were being studied for their resemblances to real-world instruments, two congruent projects emerged, which intersect in the genesis of Animal. The Karplus–Strong physical model synthesis technique was developed for its inherent guitar-like sound and its computational efficiency (Karplus and Strong 1983, 43–44). Like the models proposed by McIntyre, Schumacher and Woodhouse (1983), the plucked string algorithm has a resonator component, but it uses a more efficient computational method (lumped-circuit waveguide rather than convolution). Only simple plucks or strikes are possible, transient excitations, which are created from the initial condition of the waveguide. The basic synth instrument, which was originally intended for game sound effects, was adapted for high-quality musical use by adding several features including precise pitch tuning, a method for achieving a variety of pluck types (Jaffe and Smith 1983, 59–67) and guitar
body modelling (Smith 1997, 264–267). Extending the model to include the effect of guitar feedback through a guitar amp provides a self-oscillating capability (Sullivan 1990, 32–34). Animal’s double resonators employ waveguides with precise tuning. The self-oscillation method is used rather than an external driving force. The other domain studied concurrently was the existence of chaotic systems made of iterated nonlinear difference equations. Earlier work had discovered chaotic behaviour in systems of ordinary differential equations, which have no explicit temporal dimension and require three or more dimensions in the system of differential equations to exhibit chaos. Numerical solutions of such dynamical systems by computer evolved into a field in its own right (Lorenz 1963, 137). Simpler iterated difference equations were subsequently found that can also exhibit chaos. One of the first examples was the logistic map from biology (May 1976, 460), a nonlinear feedback system that iterates generation by generation. Its single state variable models a population in which the magnitude of each subsequent generation depends on the previous magnitude. Depending on the value of the equation’s tuning parameter, the state will either remain constant (at a fixed point), vary periodically (in a limit cycle) or behave unpredictably (exhibiting chaos). Generating a sequence of states in a computer program tuned for chaos demonstrates the butterfly effect, wherein slightly different initial states yield sequences that diverge further and further from one another. Animal inherits these dynamics through its inclusion of the logistic map as its nonlinear excitation component. Chaotic dynamics can involve a ‘basin of attraction’ with the right parameter tunings of the map equation. States that lie outside the basin will gravitate towards states within it as the map is iterated. Once a sequence is trapped, subsequent behaviour will oscillate inside the basin but never exactly periodically. Using such a system to produce a stream of audio samples creates a timbre ‘basin of attraction’ and a quasi-periodic waveform pattern. A fixed-media piece, Vanishing Point (1989), used the same dynamics to create oscillatory rhythmic patterns by iterating the system much more slowly, once for each note, and triggering percussion samples. Rhythmic ‘basins of attraction’ were created that had qualities of predictability (because of the bounded oscillation), variety (because states never exactly repeated themselves) and transient behaviour (because the system could be ‘kicked’ outside the basin momentarily and then gravitate back in). Animal’s parallel resonators are delay-line and low-pass filter units with delay times whose periods create frequencies in the pitch range. First-order Butterworth low-pass filters are used in series with the delays to attenuate higher harmonics. The logistic map is applied to the sum of the resonator outputs, and its output is fed back to their inputs. A DC-blocking filter is applied to the entire circuit’s output. A tiny DC source biases the system to kick-start it and to avoid the computing of subnormal
numbers (i.e. very small values near zero resulting from numerical rounding errors). The algorithm is ‘self-excited’ as in Sullivan’s guitar feedback rather than excited via the MWS-style (McIntyre, Schumacher and Woodhouse) external energy source. The use of dual resonators in feedback through a potentially chaotic system produces acoustical behaviours including mode quenching and beating that produces amplitude modulation (AM). This results in dual sidebands, period doubling, multimodal regimes and various distortions. The parameters available are the resonator gains, the resonator lengths, the filter cut-offs and r, the logistic map’s tuning parameter, shown in the following equation: x(n+1) = r x(n) (1 − x(n)). The algorithm does not intentionally mimic any particular physical instrument, though at times it has a clarinet-like or brassy tone, depending on parameter values. It produces a palette of sounds whose time-varying transitions are rich in the timbre features of familiar instruments.
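Since the chapter gives the circuit only as a block diagram (Figure 2.1), a schematic Python rendering of the description above may be useful: two parallel delay-line resonators, each followed by a first-order low-pass (a one-pole smoother stands in here for the Butterworth stage), their summed and gain-scaled output passed through the logistic map and fed back to both inputs, with a tiny DC bias and a DC-blocking filter on the output. The scaling that folds the feedback signal into the map’s (0, 1) domain and recentres it afterwards is my assumption, as are all parameter values; Chafe’s actual couplings and ranges are not given in the chapter.

    import numpy as np

    SR = 44100  # sample rate in Hz

    def animal_sketch(n_samples, d1=110, d2=114, g=0.99,
                      lp1=3000.0, lp2=3000.0, r=3.6, dc_bias=1e-4):
        # One-pole low-pass coefficients (standing in for first-order Butterworth).
        a1 = np.exp(-2 * np.pi * lp1 / SR)
        a2 = np.exp(-2 * np.pi * lp2 / SR)
        r_arr = np.broadcast_to(np.asarray(r, dtype=float), (n_samples,))
        buf1, buf2 = np.zeros(d1), np.zeros(d2)  # delay lines set the two pitches
        s1 = s2 = 0.0                            # low-pass filter states
        fb_prev = dc_state = 0.0                 # DC-blocker memory
        out = np.zeros(n_samples)
        for n in range(n_samples):
            i1, i2 = n % d1, n % d2
            s1 = (1 - a1) * buf1[i1] + a1 * s1   # resonator 1 output, low-passed
            s2 = (1 - a2) * buf2[i2] + a2 * s2   # resonator 2 output, low-passed
            drive = g * (s1 + s2) + dc_bias      # tiny DC source biases the loop
            u = 0.5 + 0.5 * np.tanh(drive)       # fold into (0, 1) -- assumed scaling
            x = r_arr[n] * u * (1.0 - u)         # logistic map x(n+1) = r x(n) (1 - x(n))
            fb = 2.0 * x - 1.0                   # recentre before feedback -- assumed
            buf1[i1] = buf2[i2] = fb             # feed back to both delay lines
            out[n] = fb - fb_prev + 0.995 * dc_state  # DC-blocking filter
            fb_prev, dc_state = fb, out[n]
        return out

The delay-plus-low-pass loop here is the same waveguide resonator discussed above for Karplus–Strong synthesis, self-oscillating in Sullivan’s manner rather than plucked; varying r moves the map between fixed-point, limit-cycle and chaotic regimes.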
From ‘instrument-ness’ to refining character through ‘timbre moves’
What does it mean to say that creating character is up to the performer? First, it requires that the identity of the instrument type be stable. Alternatively, if it is unstable, then the choice of identity is made by the performer. Either way, the choice of instrumental source is bound and controlled by the performer. A melodic figure with a persistent ‘croaky’ timbre at its most lethargic, and a sharp, crisp, rippling, piercing quality when awakened would constitute a recognisable character. As a thought experiment, we shall call this one Animal ‘A’ and imagine an Animal ‘B’ with a contrasting set of characteristics. Animal ‘B’ might simply be a stutterer that tries to hit pitches and only sporadically succeeds. Both ‘A’ and ‘B’ are recognisable instances of Animal and are constructed from the set of sonic ‘moves’ afforded by Animal’s identity. Consequently, they share an identity but differ in character. As musical voices, they are separable and could play contrasting roles. Or these characters could hold forth in tandem: the resulting duo could start badly and end happily, etc. – all of this would depend on the musical ideas being constructed. Can there be musical expression without character? In simplest terms, no, because we suppose that expression is the planting of ideas from one consciousness into another. We also need to remember that, as listeners, we relentlessly try to identify the source communicating to us. We will even infer or construct a plausible source model in the absence of a recognisable source. Ascribing character is the essence of this tendency. We conjure the performer whether the provenance of the music is human or mechanical. A robot is a valid character, as are the ‘actors’ in the recorded sound of a tropical rain forest. Once we accept a rain forest as music that is ‘communicating to us’, the music’s source entities are immediately endowed with what
can be called ‘character’. If that fails, does the music fail? Can the rainforest itself be communicating? Yes, music sometimes fails, and yes we can hear music in many ways and things. Figures 2.2–2.7 illustrate ‘timbre moves’ that are available to the performer constructing a character with Animal. In a lab experiment, all but one of Animal’s seven parameters are held at their medium value while the independent parameter is varied in a linear ramp from a low to a high value (a code sketch of this ramping experiment follows the figure captions below). Many acoustical features present themselves in the isolated examples illustrated below: envelopes shaping amplitude; spectral evolution; pitches supported by harmonic and subharmonic series, possibly multiple series at the same time; effects similar to overblowing, sul ponticello, sul tasto and ‘creaky voice’; and nonharmonic sideband modulation and distortion.
Figure 2.2 Amplitude and spectrogram display of two seconds of sound from ramping up ratios of resonator delay lengths from 1.04 to 8.0. One resonator delay length is held constant while the other’s length is shortened. Varying ratios create a variety of pitches similar to overblowing or sul ponticello effects.
Figure 2.3 Amplitude and spectrogram display of two seconds of sound from ramping up feedback gain to both resonators from 0.0 to 1.0. Animal is self-excited by the slight DC bias injection, which is constantly present. The algorithm will not speak with a feedback gain of 0.0. Increasing feedback gain energizes the system and tonal quality traverses from muted to brilliant, eventually hitting modes that are gravelly and forceful.
Figure 2.4 Amplitude and spectrogram display of two seconds of sound from changing the balance between resonators. With resonators holding noncoincidental tunings of delay length and/or lowpass frequency, effects can be derived from altering their relative contribution. The figure shows three pitch regimes obtained, including subharmonics.
Figure 2.5 Amplitude and spectrogram display of two seconds of sound from ramping up the lowpass frequency from 550 to 9000 Hz. Akin to increasing feedback gain, but without the gravelly sound in Figure 2.3, the higher lowpass cut-off frequency towards the end of the sound creates a brightness effect.
Figure 2.6 Amplitude and spectrogram display of two seconds of sound from ramping up ratios of resonator lowpass frequencies from 1.003 to 4.0. Almost undetectable in the sonogram, but visible in a zoomed spectral slice, the ratio of resonator lowpass frequencies creates a quality shift by traversing a region with sidebands. The overall percept is strongly pitched. An inflection in tone is caused by sidebands growing and diminishing in strength.
Figure 2.7 Amplitude and spectrogram display of two seconds of sound from ramping up the parameter r of the logistic map. Increasing r grows the second (octave lower) subharmonic, as seen at the end of the figure.
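A minimal Python sketch of this kind of one-parameter experiment follows. It is not Chafe’s implementation: the overall structure (two delay-line resonators, each with its own one-pole lowpass filter, fed back through a logistic-map nonlinearity and self-excited by a constant DC bias) follows the description given in this chapter, but every name, constant and filter choice below is an illustrative assumption.

import numpy as np

SR = 44100  # sample rate in Hz (assumed)

def animal_like(dur=0.25, delay1=200, delay_ratio=2.0, feedback=0.95,
                lp1=2000.0, lp_ratio=1.5, balance=0.5, r=3.6, dc_bias=1e-3):
    # Seven parameters, echoing the seven controls varied in Figures 2.2-2.7.
    n = int(dur * SR)
    d2 = max(1, int(delay1 / delay_ratio))       # second resonator's delay
    buf1, buf2 = np.zeros(delay1), np.zeros(d2)  # the two delay lines
    y1 = y2 = 0.0                                # lowpass filter states
    a1 = np.exp(-2 * np.pi * lp1 / SR)           # one-pole coefficients
    a2 = np.exp(-2 * np.pi * lp1 * lp_ratio / SR)
    out = np.zeros(n)
    i1 = i2 = 0
    for i in range(n):
        # The balance control mixes the two resonator outputs.
        mix = balance * buf1[i1] + (1.0 - balance) * buf2[i2]
        # Squash into (0, 1) and apply the logistic map; this bounded
        # nonlinearity is what yields the subharmonics of Figure 2.7.
        x = 0.5 + 0.5 * np.tanh(mix)
        drive = feedback * (r * x * (1.0 - x) - 0.5) + dc_bias
        # Feed the drive back through each resonator's lowpass filter.
        y1 = (1.0 - a1) * drive + a1 * y1
        y2 = (1.0 - a2) * drive + a2 * y2
        buf1[i1], buf2[i2] = y1, y2
        i1, i2 = (i1 + 1) % delay1, (i2 + 1) % d2
        out[i] = mix
    return out

# A stepped stand-in for the feedback-gain ramp of Figure 2.3: with a gain
# of 0.0 the algorithm does not speak; higher gains energise the system.
ramp = np.concatenate([animal_like(feedback=g) for g in np.linspace(0.0, 1.0, 8)])

Varying r, the delay ratio or the lowpass ratio in the same way reproduces, in rough outline, the remaining ‘timbre moves’ illustrated above.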
What Animal is and is not To summarise, Animal is a synthesis technique manifesting a single instrument identity. Its performer can construct personalities of different character, suited to the intended musical role. Using its possible ‘timbre moves’, the performer is free to construct characters that are as convincing as the music itself. The application of this technique in Tomato Quintet stretches the notion of character to its limit. Transference is the goal. Listeners ascribe to the tomatoes a sonic character that they infer from the music. The music contrasts slow time-scale material (the tomatoes) with much faster time scales (corresponding to human activities). Animal provides a voice for both. The ambient ‘tomato character’ consists of pitched material driven by slowly changing signals from carbon dioxide sensors tracking the ripening of the fruit (other ambient layers use other algorithms that produce sounds of wind and transient, percussive sounds reminiscent of hail on a roof). Faster figurations create the human-related musical characters, which are energised by the motion of accelerometers when visitors touch the sensor systems. Phasor employs a chorus of Animals to achieve a pitched texture performed directly by bowing gestures. The bassist uses the cello model of the K-Bow, which tracks several factors that contribute to the sounds a string instrument will produce, using a three-axis accelerometer, grip strength sensor, tilt sensor and hair tension sensor. The system also tracks the bow’s position relative to the bridge and across the strings. The K-Bow provides gesture signals to the accompanying sound-generating computer via Bluetooth.1 The character evoked is one of intricate pitch structures whose modulations are interspersed with abrupt rhythmic surprises and textural intrusions. Animal, as used in these pieces, does not conform to rigid scales or categories of familiar tonal qualities (the backbone of more traditional music). The surprises it creates in pitch and timbre are a part of its identity. It is capable of large shifts and small in-between shades as control values are traversed. Its two resonators with their separate gains and filters make it difficult to tune precisely or predictably, because it tends to jump between states and create parasitic tones. In ‘On the Oscillations of Musical Instruments’, McIntyre et al. (1983) conclude with a description of the acoustical qualities of model
systems that resemble Animal. They refer to ‘playing’ numerical solutions via computer programs started with different parameter values. A little experience with this soon reminds one of a well-known property of non-linear phenomena, namely their non-uniqueness. Several different regimes may be possible for the same final set of parameter values. One soon learns how to encourage a given type of oscillation during the initial transient, a matter in which musicians develop superlative skill. One is also reminded of the rich variety of periodic and aperiodic behaviour, which may be exhibited by even the simplest nonlinear oscillators (see Appendix A Relation to the Theory of Iterated Maps). The question of which behaviours are physically realistic for musical-acoustical purposes, and which result from too unrealistic a choice of model characteristics, has yet to be studied systematically, although instructive examples regarding stable versus unstable behaviour were encountered in foregoing sections.2 (McIntyre et al. 1983, 1339) Tomato music The Tomato Quintet installation spawned the concert piece Tomato Music, composed from data collected during the first exhibition. The two compositions are worth examining in light of Zhang’s comments on new media noted above: ‘New media art is real-time art’ and ‘When we talk about time, it is multiple times now.’ Tomato Music is purely sonified data. Gas-level recordings from one ten-day run of Tomato Quintet are compressed into ten minutes of music. The gas-level readings are mapped to parameters playing fifteen synthesised slide-flute-like instruments (the parameters are air pressure, tube length, portamento and embouchure). Tomato Music is primarily a process work – much like Alvin Lucier’s I Am Sitting in a Room – in which a fixed procedure is applied to a given input. The algorithmic machinery in Tomato Music elicits a rigidly occurring interruption of texture every forty-nine seconds by updating its data-to-instrument mapping to a new scheme (a process sketched in code below). Though not interactive (it is a fixed-media piece) and not ‘real time’ (because its data is compressed in time), Tomato Music does create its own time scape. Works that engage a process as a primary component and make time malleable are species of new media. Tomato Music engenders music devoid of character (in the sense described above), landing musically closer to Zhang’s ‘The media itself is the content’. The instrument in Tomato Music is a physical model, not Animal but one that began with attempts to simulate the pipes of the ancient Greek hydraulis or water organ.3 The model produces tones, which can be shrill and austere but also can emit rumbling subharmonics or quiet ‘hissing’ sounds, qualities reminiscent of György Ligeti’s organ works, Volumina and Harmonies.
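To make the process concrete, the sonification pipeline can be sketched as follows. Only the outline of the mapping is given above (ten days compressed into ten minutes, four parameters for each of fifteen voices, a new scheme every forty-nine seconds), so the control rate, the scaling scheme and all names below are invented for illustration.

import numpy as np

CTRL_RATE = 100                       # control samples per second (assumed)
N_OUT = 10 * 60 * CTRL_RATE           # ten minutes of control data
N_VOICES = 15                         # fifteen slide-flute-like instruments
PARAMS = ('air_pressure', 'tube_length', 'portamento', 'embouchure')
BLOCK = 49 * CTRL_RATE                # remap every forty-nine seconds

def compress(readings):
    # Resample the ten-day sensor curve onto the ten-minute time axis.
    src = np.linspace(0.0, 1.0, len(readings))
    dst = np.linspace(0.0, 1.0, N_OUT)
    return np.interp(dst, src, readings)

def sonify(co2_readings, seed=0):
    # Returns {voice: {parameter: control stream}}. Every BLOCK samples the
    # data-to-parameter scaling is re-dealt, producing the rigidly timed
    # interruptions of texture described above.
    curve = compress(np.asarray(co2_readings, dtype=float))
    rng = np.random.default_rng(seed)
    streams = {v: {p: np.empty(N_OUT) for p in PARAMS} for v in range(N_VOICES)}
    for start in range(0, N_OUT, BLOCK):
        stop = min(start + BLOCK, N_OUT)
        seg = curve[start:stop]
        span = (seg.max() - seg.min()) or 1.0
        norm = (seg - seg.min()) / span           # local 0..1 data shape
        for v in range(N_VOICES):
            for p in PARAMS:
                lo, hi = sorted(rng.uniform(0.0, 1.0, size=2))  # new scheme
                streams[v][p][start:stop] = lo + (hi - lo) * norm
    return streams

# e.g. ten days of CO2 readings sampled once per minute:
# controls = sonify(np.loadtxt('co2_run1.txt'))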
As opposed to a music of discrete pitches (which was probably what the hydraulis mechanism played), the ‘medium’ of Tomato Music is a data set of smooth changes that occur during ripening. As a final nod to the polemics around new media, we can also say that the ripening of tomatoes is the content. The long ten-day arc is inscribed with shorter-spanned curves from daily temperature and light variation. These curves violate the fixed-pitch structure of the hydraulis simulation. Making them speak meant replacing the hydraulis with a bank of slide flutes capable of continuous pitch. This type of modification is something only possible in software and adheres to a commonplace practice in which physics can be violated on a whim. To finalise the transformation from simulation to new musical instrument, the organ, with its polyphonic manual controlling an ensemble of pipes, was replaced with a software design unconstrained by physics. In our inhabited world, could we ever attempt or achieve an ensemble of slide flutes synchronised this tightly? Or do media of this kind take us into a realm which, from our immediate vantage point in time, we should call ‘new’? In an honorific for Roger Reynolds’s seventieth birthday, I wrote: …the set of norms and institutions is plastic too, a result of so many individuals’ gifts back to culture. Music produces virtuosi in continuous streams. The sequences of teachers and students who become teachers form braided, merging and diverging schools, worldwide. Master musicians cross tens of generations when charting, for example, the gharana of sarod or tabla on the Indian subcontinent. Such histories emerge from deep time and are continuously evolving. Passed on from the teacher is both craft and a way of communicating meaning. Added by each individual is new meaning to be folded into the musical style. The folding-in is at the crux of virtuosity. (Chafe 2004) The question left hanging in the air at this point relates to musical expression: without characters (virtuosic or otherwise), does expression exist? For many listeners experiencing Tomato Music, it seems that it does. In this case, expression is not a product of direct human manipulation. The ideas to be expressed in Tomato Music probably exist only at the outermost layer, as an element of design. What a character is and what it speaks, musically speaking, must be things conjured entirely in the minds of the receivers.
Notes 1 The K-Bow was produced between 2007 and 2014 by Keith McMillen Instruments, Berkeley, California. For more information, see www.electronista.com/articles/08/11/07/mcmillen.string.interfaces/. 2 The article’s Appendix A expands on a ‘Relation to the Theory of Iterated Maps’ and is recommended for further reading.
3 ‘Ctesibius of Alexandrea, who lived about B.C. 200, took the idea of his organ from the Syrinx or Pandean pipes, a musical instrument of the highest antiquity among the Greeks. His object being to employ a row of pipes of great size and capable of emitting the most powerful as well as the softest sounds, he contrived the means of adapting keys with levers (agkoniskoi), and with perforated sliders (pomata) to open and shut the mouths of the pipes (glossokoma), a supply of wind being obtained, without intermission, by bellows, in which the pressure of water performed the same part which is fulfilled in the modern organ by a weight.’ (Smith 1874, 422–423).
3 Collaborating on composition
The role of the musical assistant at IRCAM, CCRMA and CSC
Laura Zattra
A ping-pong match: this metaphor neatly sums up the very close cooperation between a composer and a musical assistant on a computer-based artistic project.1 It served as the headline of an article by Pierre Gervasoni, ‘Le ping-pong de Pierre Boulez’, discussing the collaboration between Boulez and Andrew Gerzso. Boulez declared that as I do not make daily visits to the studio [IRCAM – Institut de Recherche et Coordination Acoustique/Musique in Paris], we discuss the project at length. Not in the abstract, but starting from my previous works. I come up with some musical proposals, which Andrew Gerzso, musician, comes to understand. He seeks and provides solutions, which I evaluate in order to check whether this corresponds to my objectives or still needs to be expanded. And so on […]. Foresight should always alternate with the control of real possibilities.2 The last sentence highlights the kind of situations and dynamics that come into play in this collaboration: a path of endless adjustment in the dialogue between the artistic vision and the scientific visionaries. And yet, the idea of role-play and game contained in Gervasoni’s provocative title assumes there is a winner and a loser in this collaboration. Speaking at a conference held at IRCAM in 2007 on the role of the computer music designer, Gerzso, who had been collaborating with Boulez since the creation of Répons (1980), described the role and the profession of the musical assistant in these words:3 The emergence of the profession of Computer Music Designer (previously called musical assistant) at IRCAM at the beginning of the 1980s came about in response to a specific need: freeing researchers from an excessively exclusive relation with the composer coupled with the need to translate from the world of music to the world of science and vice versa. With the increase in the number of production projects in the 1990s, the musical assistant’s responsibilities increased. He had to take charge of the composer, manage the production projects, and
carry out musical work in collaboration with the sound engineer and the composer. Gerzso then asked: Are these needs still pertinent today [in 2007]? Is the Computer Music Designer specific to IRCAM? Probably not, since today everyplace where artists work with new technology in the fields of sound or music – dance, theater, computer graphics, video, fine arts, music – one finds professionals who master similar concepts, techniques, and practices although they may be called by a different title (e.g. sound designer, Foley artist, etc.). However, today there is no shared professional identity, no public recognition of the profession, and no related training program guaranteeing the acquisition of the technical and musical competences necessary to practice this relatively recent profession.4 Gerzso’s words still apply today. Ten years after that conference, the debate is still open. Back in 1988, an article co-written by Boulez and Gerzso stressed that exploring possible musical relations between computers and traditional instruments requires much communication [emphasis added] between composers and those who design computer hardware and software. Through such collaboration, electronic devices can be constructed that serve the composer’s immediate purpose while preserving enough generality and flexibility for future musical exploration – a task complicated by the fact that the composition’s musical complexity is usually not commensurate with the technical complexity needed for its realization. What appears to be a simple musical problem often defies an easy technological solution. Perhaps for the first time in history a composer has to explain and formalize the way he or she develops and manipulates concepts, themes and relations in a musical context in order for technicians (who may have little musical training) to bring them into existence. (Boulez and Gerzso 1988) These introductory quotations are intended to acquaint the reader with the themes of this chapter: the art-science collaboration, the emergence of a profession and the traces remaining from the habitually wordless communication between a composer and an assistant in the early era of computer music. The chapter covers the period that runs from the early computer programs until the first real-time experiments (ca. 1960–80). The end of this period is marked by: (1) the 4X digital workstation programmed by Giuseppe Di Giugno at IRCAM (a project under development from 1976 that culminated in the creation of this powerful real-time audio hardware, which was used in
Répons by Pierre Boulez), and (2) the era of the microprocessors. Computer music here means music produced and performed either in deferred time (computer-generated ‘acousmatic’ music, or music that combines live musicians and fixed computer-generated sounds) or in real time. In this context, terminology will also have to be taken into consideration. The emerging profession presented in this chapter has been described and defined in different ways over the years: musical assistant, technician, tutor, computer music designer, music mediator (Zattra 2013), Klangregisseur, live electronics musician, digital audio processing performer (Plessas and Boutard 2015).
Who is the musical assistant? The term musical assistant has been loosely applied over the course of music history to a musician, a translator or an interpreter of musical ideas (copyist, amanuensis, transcriber, etc.), who works alongside the composer. Collaboration can occur in a number of different work phases: the transcription of working documents (e.g. a manuscript score can be cleaned and transformed into a fair copy), the arrangement of a musical piece, the development of rough ideas or sketches provided by the composer, assistance in the direction of a performance. (Consider, for example, the working relationships of Joseph Joachim Raff and Franz Liszt, Imogen Holst and Benjamin Britten, Alex Weston and Philip Glass.) The same term can be applied conveniently to other less defined and more complex relationships: Ernst Krenek at the Staatstheater Kassel, where he assisted Paul Bekker during the 1920s; Robert Craft, musical assistant of Igor Stravinsky, defined by Richard Taruskin (1995, 362) as an ‘interlocutor, ghost-writer, musical assistant and executor’; Joseph Joachim, who collaborated with Johannes Brahms as an assistant and a performer while the latter wrote his Violin Concerto (Schwarz 1983). The history of music shows that the term ‘musical assistant’ has taken on different meanings, based on several applications of its original etymology: help (someone), typically by doing a share of the work … from Latin assistere: ‘take one’s stand by’ (ad: ‘to, at’ + sistere: ‘take one’s stand’) (The New Oxford Dictionary). The revolution of sound recording, synthesis and transformation (musique concrète 1948, electronic music 1950), followed by the birth of computer music (1957), caused the natural emergence of a new professional profile – someone who could work in research, writing, the creation of new instruments, and recording and performance on electronic devices during concerts. The composition of music had gone from a paradigm based on ‘writing, score, performance, listening’ to one based on ‘writing, notation, projection, listening’ (Tiffon 2002) or, more often, ‘technological research, writing, control-evaluation-implementation, new writing, control’ and so on. From the early days, laboratories and electronic music studios have normally involved the presence of different individuals with diverse but
intertwined competencies. This is true for the centres in Milan, Cologne, Paris and San Francisco during the first analogue generation and has continued with the digital revolution (at CCRMA in Stanford and other centres in the United States, in France, Italy, Great Britain, Germany and East Asia, to name a few). Yet the existence of the musical assistant has often been unreasonably neglected, both in the literature and by audiences. As one frustrated French musical assistant acknowledged: ‘the fact is, by and large the public ignores the implications of a musical assistant for the creation of contemporary music’ (Poletti et al. 2002, 243). The musical assistant is responsible for the technical setup from the early experimentation phases until the concert production. S/he explains to the composer the possibilities of the various instruments and applications, as well as the potential sound effects.5 The musical assistant also explains the most recent results in musical research and translates artistic ideas into programming languages. Finally, s/he transforms those ideas into a score or a computer program and is often involved in performing the musical piece in concerts. Unfortunately, in the musical score, the program notes and other published sources, the presence of the musical assistant remains hidden most of the time. I shall therefore focus on primary and secondary archival sources and administrative documents, conserved at three computer music centres: IRCAM in Paris, the Centre for Computer Research in Music and Acoustics (CCRMA) at Stanford University and the Centro di Sonologia Computazionale (CSC) at the Università degli Studi di Padova. My analysis will examine two points:
1 Institutionalisation and recognition: I will investigate the presence, absence or understatement (as the case may be) of an expressed concern for collaboration and the role of the musical assistant.
2 Source information: I will describe the ways in which this collaboration was undertaken between musical assistant and composer.
In addition to research grounded in material sources, further investigation has been conducted through oral communication. As Bennett (1995, n.p.) once stressed: ‘Electroacoustic music […] is an almost exclusively oral culture. There is very little written documentation of compositional practice (as opposed to technical practice)’. The choice of these three centres is motivated by the close historical, musical, organisational, scientific and technological connections and the numerous technical, cultural and scientific exchanges among them. IRCAM may also be considered the first facility to officially recognise the musical assistant as a professional. However, in an unpublished source written in 1977, John Chowning indicated that CCRMA provided software and human resources to IRCAM.6 According to Laurent Bayle (director of IRCAM from 1992 to 2001), ‘the musical assistant was historically born with a composer profile;
rather young and freshly initiated to new technologies, normally within the framework of a residence in the USA’ (cited in Gervasoni 2000e, 20). For example, James (Andy) Moorer (co-founder of CCRMA) was the Scientific and Technical Advisor at IRCAM and came highly recommended by John Chowning.7 The CSC was founded on the same model as CCRMA (a musical centre located within a university structure) and quickly recognised the presence of the musical assistant. French and American composers and researchers from IRCAM and CCRMA worked at the CSC.
IRCAM, Centre Pompidou, Paris IRCAM was created under the leadership of Pierre Boulez in 1974 and 1975, as the Centre Georges Pompidou was being conceived. The presence of the assistant was acknowledged at the official opening of the institution, which occurred in the last months of 1976.8 In fact, IRCAM seems to be the first institution to have professionalised this activity and defined the assistant’s specific function within its organisational charter. Archival documents show that the assistant’s identity was based on a model of collaboration.9 Jean-Claude Risset (head of the Computer Department for four years) recalls that in 1970 Boulez explained to him his desire to create an institution based on the idea that a ‘collaborative research project was necessary in order to solve some problems composers came to face […]. [His] ambitious project – calling into question the [traditional] context of musical creation in a collective approach – was obviously exciting’.10 The naming of the assistant’s occupation followed a tangled path, reflecting the emergence of this career (Zattra 2013). During the 1970s, names such as scientist, researcher, engineer and technician were used interchangeably.11 The designations of tutor and musical assistant emerged during the 1980s.12 Finally, in 2007, to achieve a more effective and stable appellation, the title Réalisateur en Informatique Musicale (RIM) was chosen, which is usually translated as computer music designer (Zattra 2013, 118). Initially, researchers and engineers could choose to voluntarily share their experience and help composers, but in an ‘unofficial’ capacity (Born 1995, 332–63).13 Therefore, they remained scientists, researchers, engineers and technicians. Under Boulez’s leadership, the heads of departments were all composers. This was done to ensure that aesthetic issues would take precedence over technical issues. According to Risset (2014, 14), this was important but inevitably established a hierarchy, if not a subordination, of researchers to composers. An IRCAM activity report from 1978 documented the interlocking nature of these musical collaborations, noting that the programming for Wellenspiele by Balz Truempy (commissioned by IRCAM) was done by Giuseppe Di Giugno and Jean Kott. Instrument design was done by Neil B. Rolnick together with Truempy. Rolnick also participated in the performance.14 Moreover, reports of the activity carried out at IRCAM in 1979 mention several works
realised by researchers, engineers and technicians.15 The state of technology and its limitations led to increased cooperation. In an interview with Andrew J. Nelson, researcher Xavier Rodet observed: All the main things were done on the PDP-10 (computer). What was very interesting was the sharing of the digital to analogue converter. That was a very complicated and costly and difficult piece of hardware at the time. So there was one, essentially, attached to the PDP-10. Everyone would work on the PDP-10 and send the sounds to the converter. Then, the sounds were distributed in all the rooms by analogue lines, which was very interesting because it means that we were hearing the sounds done by all the others. That was fascinating because, you would hear something [and think], ‘Wow, this sound has something.’ So, you would go to the computer and ask the guy [who made the sound], ‘What are you doing? What is this you have been doing?’ It became an excellent exchange of knowledge. I found several of my collaborators by hearing them doing that. (cited in Nelson 2015, 64) At IRCAM, the necessity of defining a role for the musical assistant grew over the years; it is exemplified in a number of internal documents. On 15 October 1982, Pierre Boulez stated that ‘tutors will be regularly summoned in the artistic committee, in order to report on the state of the projects where they are responsible and to make any suggestions they might think advantageous for the performance of their work’.16 The earliest documentary evidence of the term ‘tutor’, as a professional designation, is dated 3 March 1983. The tutor was, on the one hand, to ensure teaching and guidance and, on the other, to be active in musical research and related documentation.17 He embodied the connection between the research and its application to pedagogy and musical production.18 During this period, the activity of assisting the composer started to separate from the others within the Institution, hence the idea of a veritable profession (‘poste de tuteur’). When a member of the steering committee asked the musical direction of IRCAM to equip each national conservatory of music with the 4X System (20 May 1983), another member outlined the problem of pedagogy. Boulez then stated that this problem arose every year at IRCAM and that IRCAM called for the establishment of positions for recognised tutors.19 Other documentary sources similarly refer to ‘contracts for supplementary tutors’.20 In a meeting of the board of directors in 1988, the first item of the agenda reads: The problem of tutors: the question has been with us for many years, and we have certainly not come up with a solution, not even in terms of statute, time management, or the distinction between their job as a tutor and their will to compose.21
For the first time, the importance of a sideline compositional activity was acknowledged, which was necessary ‘in order to understand composers’.22 During this meeting, Gerzso defined tutors as ‘instrument players, instrument virtuosos (Synthesiser, computer…), with deep technical know-how’. The tutor’s mission was now clear: to realise a composer’s idea – to teach composers how to use technology, to organise the schedule of the studio, to follow the musical work process, to prepare musical documentation, to teach a wider audience (e.g. by presenting workshops on computer music) and to undertake administrative tasks.23 Within the context of this discussion, the term ‘musical assistant’ began to appear in 1989, in parallel with ‘tutor’.24 A report, edited by Marc Battier in 1989, envisioned the assistant’s activity in three distinct phases (Figure 3.1):
1 The composer explains the ideas and vision to the assistant. They work together to formalise these ideas (experiments, testing, software adaptation or writing). The project and the technical environment are adapted into a quasi-definitive form.
2 The composer begins to work independently. During this phase, the assistant’s intervention is moderate, while the composer writes the score.
3 The project is completed at the institute, where the tutor’s role is crucial.25
Figure 3.1 Pierre Boulez at a desk working on Répons at IRCAM, 1984 (IRCAM, Paris, Espace de projection). Seated left to right, Denis Lorrain, Andrew Gerzso, Pierre Boulez; standing, left to right, Emmanuel Favreau and Giuseppe Di Giugno; sitting in the back of the room, unknown. Source: Courtesy ©Marion Kalter.
The designation musical assistant lasted for about fifteen years, until the 2000s.26 However, unpublished documents show that IRCAM members still felt somewhat uneasy with the term and its functions. During an administrative meeting in 2001, Boulez asked ‘…where are we? Are composers advanced enough to act on their own without the help of musical assistants?’ Bernard Stiegler (who became the new director a few weeks later, following Laurent Bayle) responded that composers would always need a musical assistant to realise a musical research project that involved technology and would need to come to IRCAM to finalise this.27 However, the problem regarding copyright, recognition and authorship remained. During the 2000s, IRCAM officially adopted the designation RIM (Réalisateur en Informatique Musicale), computer music designer in English.28
CCRMA, Stanford University Originally located at the Stanford Artificial Intelligence Laboratory (SAIL) during the 1960s, CCRMA (pronounced ‘karma’) was officially founded in June 1975 by John Chowning, a professor, researcher and musician. Chowning spent his career synthesising sound fields; he is the father of FM synthesis technology. Until then, research in the analysis, synthesis and psychology of sound perception was undertaken through the largely unsupported work of professors, graduate students and staff members. In two famous publications, Chowning presented his research on the control and movement of synthesised sounds in an illusory acoustical space (1971) and on frequency modulation synthesis (1973), a technique widely used in computer music installations around the world.29 Early work from those pre-CCRMA days includes the compositions Sabelithe (1971) and Turenas (1972) by Chowning, Rondino by Leland Smith (1968) and a realisation of John Erikson’s Loops by John Grey (1974), as well as Leland Smith’s SCORE, ‘a computer programme written in FORTRAN which enable[d] composers to synthesize and compose pieces using the DEC PDP-6 and later the PDP-10’, and other important contributions to the field of computer music made by James (Andy) Moorer, Loren Rush, John Grey and F. Richard Moore (Serra-Wood 1988). CCRMA quickly established a reputation as a major research centre for computer music, a multidisciplinary facility where researchers and composers worked together to create computer-based technology and digital audio as artistic media and research tools. Numerous authors (Chadabe 1997; Collins 2007; Dean 2009; Manning 2013; Nelson 2015) and essays dedicated to the history of CCRMA agree on the interdisciplinary nature of the facility. ‘Collaboration’, ‘working together’ and ‘cooperation’ are common terms used to describe this approach, and yet, to the best of my knowledge, all the sources – published and unpublished – clearly show that there was no intentional division of labour within the centre.30
The question of interdisciplinarity is mentioned in several texts, such as the one dated 13 June 1977, in which Chowning wrote: the extraordinary results already obtained have occurred in those few instances where scientists and musicians have taken the opportunity to bring their respective skills to bear on problems of common interest in a rich interdisciplinary environment. It is an example of cooperation, but more, an expression of the freedom of intellect and invention, where creative minds from diverse disciplines have joined in a common goal to produce fundamental knowledge which must be the source for new music, and to produce works of art which reflect the scientific-technological riches of the present.31 From the beginning, synergy was the keyword of CCRMA policy. The whole is greater than the sum of its parts, and the essence is to value differences (‘rich interdisciplinary environment’, ‘diverse disciplines’) and to work together (‘cooperation’) through the creative process for a common benefit (‘common interest’, ‘common goal’). ‘At Stanford University’, the text continues, ‘such cooperation has been commonplace over the past ten years’.32 CCRMA did not formally acknowledge the institutional role of assistants, because strictly speaking there were no musical assistants. Anyone could be either composer or assistant, and all participants were scientists, engineers or researchers, who could also become artists or composers. Aspects of Chowning’s thinking are very helpful in this regard. He stressed that as far as I remember there was no idea of Musical Assistant in the sense that existed at IRCAM, e.g. Jonathan Harvey/Stanley Haynes. However, there were collaborations that were extremely effective and came about informally – mutual interests or a question that resulted in a longer interchange and eventual collaboration, e.g. David Jaffe/Julius Smith. At CCRMA, collaborations involved people who had at least programming skills as a common language and, of course, music.33 During the 1980s, the collaboration between Jaffe and Smith enabled the two computer musicians to discover a mutual interest in physical modelling and the Systems Concepts Digital Synthesizer at CCRMA (Jaffe and Smith 1983).34 One result of this collaboration was Silicon Valley Breakdown by David Jaffe, premiered at the Venice Biennale in 1983.35 Jaffe and Smith went on to work on the seminal NeXT computer, on which future Apple products would be based.36
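For readers unfamiliar with this line of work, the Jaffe and Smith paper extended the Karplus–Strong plucked-string algorithm, in which a delay line initialised with noise and fed back through a mild lowpass filter behaves like a decaying vibrating string. The following is a minimal sketch of the basic, unextended algorithm; the sample rate and parameter values are illustrative assumptions.

import numpy as np
from collections import deque

def pluck(freq=110.0, dur=2.0, sr=44100):
    n = int(sr / freq)                       # delay length sets the pitch
    rng = np.random.default_rng(0)
    buf = deque(rng.uniform(-1.0, 1.0, n))   # a burst of noise: the 'pluck'
    out = np.empty(int(dur * sr))
    for i in range(out.size):
        first = buf.popleft()
        out[i] = first
        # The two-point average is a gentle lowpass filter in the feedback
        # loop; high frequencies decay faster, giving a natural string tone.
        buf.append(0.5 * (first + buf[0]))
    return out

Jaffe and Smith’s extensions (fine tuning, decay control and related refinements) turned this bare loop into a playable model, which is the sense in which physical properties of instruments are ‘represented as computer algorithms that can be manipulated’ (see note 34).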
Starting in 1977, Moorer introduced Chowning to the Stanford Artificial Intelligence Language (SAIL) as Chowning composed his influential piece Stria (Zattra 2007). Moorer – also a scientist/composer – was working at IRCAM when Chowning came to give his first performance of the piece.37 Moorer helped him mix the sections of Stria into the complete piece at IRCAM. As Chowning recalls, ‘no IRCAM technician was involved in the production, except for Andy Moorer, who had worked at Stanford and was temporarily there at IRCAM, and simply helped in the starting and stopping of tape recorders to make the final tape’.38 As the reader will note, the term ‘technician’ appears here, just as it was used at IRCAM at that time. As was the case at IRCAM, shared equipment was crucial in shaping the collaborative environment at CCRMA. SAIL Laboratory participants shared the same computer. Bill Schottstaedt, a composer and computer scientist who worked at CCRMA for 36 years, recalled that during the 1970s, [w]e had people, parties and things were going on all the time. You could come in at any time day or night and there was always the same number of people doing things, they never slowed down… In those days there was one [computer for music]. If you wanted to do it [work with the computer], you had to be at that place. (Schottstaedt, cited in Nelson 2015, 32) This was the era of the mainframe computer: big, high-performance machines used for large-scale computing purposes. They operated in ‘time-sharing’ mode, so that all users (working at terminals) could operate simultaneously, alongside batch processing. According to Moorer, you all came together round the computer […] and everybody was together in these rooms with the consoles or with the terminals, so sharing of what you were doing was pretty common. You’re walking around seeing what was on the screen of the person next to you: a very, very intense, collaborative, open atmosphere. (Moorer, cited in Nelson 2015, 32) The examples of collaboration I mention in these paragraphs are some of the highlights of the sources I have sifted through over the years (files are stored within the CCRMA Saildart Archive). By themselves, however, they do not reveal the full extent of the paths articulating the process of collaboration and the relationships linking the persons involved in this creative cooperation. It is more difficult to find passages describing how the actual collaboration took place within the research and musical projects. Information on this can be deduced from programme notes on
works produced at CCRMA. For example, Fred Malouf (Chromatonal, 1985) writes that ‘the creation of this piece would not have been possible without the cooperation of the staff at CCRMA […] My sincere gratitude goes out to those people’.39 Michael McNabb (Invisible Cities, 1987) is similarly grateful to ‘CCRMA, its staff, and others there whose help in this effort was invaluable’.40 In another document compiled by Richard Karpen, we read that ‘CCRMA has a long tradition of accommodating composers with diverse views about what music is and how to go about making it’.41 The terms ‘accommodate’ and ‘diverse views about…’ denote once again the strongly collaborative character of the centre: researchers and musicians stepped forward with different views, but they let the composer develop his or her own technological project. The openness of the SAIL/CCRMA space, rooms and technological environment also helped this cooperation. As composer Michael McNabb stresses: ‘people listened to everybody else. You never know when you might hear some interesting sound that piques your interest and you think “That would fit in the piece that I’m working on”’ (McNabb, cited in Nelson 2015, 33). Thus, CCRMA policy did not seek to establish a clear division of labour. Chowning has recently confirmed to me that there was never a policy regarding visiting researchers and composers being assigned someone to help them, number of hours/day, etc. It was assumed that the visitor would audit classes, read documents and ask questions of anyone – faculty, staff, or students – to acquire the means to pursue their project. That is still the way that it works as far as I know.42 Exchanges, cooperation and transfer of expertise also occurred between CCRMA and IRCAM, well before the official opening of IRCAM. John Chowning described the future French laboratory in a message sent on 20 June 1977: ‘the general conception of IRCAM as a structured research environment where scientists and musicians will interact in pursuit of problems of common interest belongs to Pierre Boulez who will serve as director of the institute’.43 The identity of the future French institution is seen through the eyes of the CCRMA director as a place – mirroring CCRMA policy – where interaction is pursued for the benefit of all. From the same text, we know that two years earlier, in August 1975, a team from the future IRCAM had attended a ten-day intensive seminar. The team included Pierre Boulez, Luciano Berio and Jean-Claude Risset, who would each become the head of one of IRCAM’s departments in 1976.44 ‘Each visiting member (including Pierre Boulez) made extensive use of the computer in a “hands-on” environment. Each attendant was instructed in the usage of the computer and encouraged to experiment with synthesis techniques’ (Figure 3.2).45
Figure 3.2 1975: Pierre Boulez brought an IRCAM team to CCRMA for a two-week course in computer music. Seated by the computer (left to right) Pierre Boulez and Steve Martin (graduate student); standing (left to right) James (Andy) Moorer, John Chowning, Max Mathews. Photo by José Mercado. Source: Courtesy Stanford University.
CSC (Centro di Sonologia Computazionale dell’Università di Padova) As with CCRMA, activity in the field of computer music in Padova started within the research activities of the University and within the same technological environment. The Institute of Electrical Engineering (today the School of Engineering) provided computer facilities and space.46 Early activities started well before the official opening of the CSC in 1979. Giovanni Battista Debiasi, professor at the Faculty of Engineering, had worked on vocal synthesis since the late 1950s. He attracted the interest of his sound engineering students – Giovanni De Poli, Alvise Vidolin and Graziano Tisato – as well as the American composer James Dashow. Together they founded the Computer Music Group in 1974, which was the first name of the CSC.47 Dashow installed the Music 4BF program on the IBM System/370 and composed Effetti Collaterali, for clarinet and synthesised sounds on tape (1976), the first computer music piece in Padova. De Poli had worked at IRCAM from December 1975 to May 1976 in the Department directed by Risset, before returning to Italy (De Poli, cited in Zattra 2000, 129). During his stay at IRCAM, De Poli presented MUSICA, one of the first software programs for music notation, which he and Debiasi had developed in Italy (Debiasi and De Poli 1974; 1986).
Collaborating on composition 71 When I came back from France, I brought with me the software MUSIC 5, I was given by IRCAM friends; at the CSC [then still called Computer Music Group] we already had the MUSIC 4BF thanks to James Dashow. From then on, we could work with composers. (De Poli, cited in Zattra 2000, 129) Composers and students, both Italian and foreign, began to collaborate. For example, Richard Karpen, later associated with CCRMA, worked at CSC in 1984–85. Reminiscing on his pre-Italian studies and how he got to Padova, he remarked: I met James Dashow in New York in 1983. I was studying composition and computer music with Charles Dodge at Brooklyn College at that time. James came to present his work there. He told me about CSC and suggested that I could perhaps visit. So I applied for a fellowship to Padova University. My application was successful and I arrived at CSC in September 1984.48 From the beginning, the purpose of the work undertaken at the Università di Padova was to create an interdisciplinary space where scientific and musical expertise could meet so as to achieve a constant application of theoretical research to the production of music with computer equipment [and] to encourage scientists to investigate and formalise together with the creative utopias of the composers. (Marinelli 1995, 95) The statement certainly recalls passages from CCRMA documentation. However, unlike the members of CCRMA – who could be composers and researchers at the same time – the CSC founders and associates were either composers or researchers. The engineers presented the latest results of their research to the composers and the composers submitted their requests to the engineers (Tisato [1999], Vidolin [1999], both cited in Zattra 2000, 133–34 and 138–42, respectively). These roles, established by training, remained clearly identified, and the role of the engineer had the characteristics of a musical assistant. However, this differentiation did not impede collaboration, and it did not stop some individuals from acquiring skills in both domains. Colleagues who were both composers and researchers were not common: James Dashow, Mauro Graziani, Daniele Torresan and Marco Stroppa.49 The CSC’s visionary programme attached great importance to investment in both high-level scientific research and high-quality musical production: a musical composition project and a scientific publication were equally significant (Vidolin [1999], cited in Zattra 2000, 45–111). In both cases, serious investigation, professionalism and progress in terms of results had to
be guaranteed. ‘This is a result of the presence, in the organising committee, of pure engineers, who are interested in the advances of technological and musical research, rather than the affirmation or the continuation of the work of one single Maestro’ (Vidolin 1988a; 1988b; 1988c; 1989b; 1990). Edgar Varèse had always believed that collaboration would lead to the ‘liberation of sound’ (Chadabe 1997, 3). According to Wladimiro Dorigo, ‘what [Varèse] wished for in 1922: “the composer and the technician will have to work together” finally came true [at the CSC]’ (Dorigo 1977). Unlike their counterparts at IRCAM, CSC members never intended to create a specific ‘school’ or aesthetic; for this reason, a highly diversified production followed. According to CSC protocols, engineers (who could be compared to the musical assistants at IRCAM) were fully at the disposal of composers whenever and for as long as the composers wished. This collaboration was nonetheless precisely scheduled, to allow researchers enough time to undertake their own research activities. However, as Vidolin recalls, it did happen that some composers asked us to find and make artistic choices, but we never accepted this. Whenever this act of delegation took place, we insisted on pushing composers to give voice to their own personal artistic approach so that they could contribute valuable ideas to research projects. They would have to be research composers, not composers requesting a provision of services. (Vidolin [1999], cited in Zattra 2000, 50) Vidolin’s statement testifies to the richness of musical and artistic ideas and to the research capability at the CSC, to which both the composers and the engineers contributed. Each musical project illustrated a different way in which computer software or results of sound research could be used. In this way, engineers and composers were able to work together to produce works with sound synthesis or computer-assisted composition, acoustic sound processing, acousmatic pieces, live electronics or open works using IBM, NeXT, Atari, DOS, Windows 9x, Macintosh and PowerMac systems (Zattra et al. 2001).50 Conditions for the use of CSC machines by composers were defined by the founding board of directors. The applicant had to provide a detailed work sheet (a form provided by the centre, including a description of the work, scheduling, etc.) at least one month before the beginning of the creative process.51 The CSC founders did not entertain any formalisation of the division of competences. ‘There was a great deal of willingness to cooperate. We had the entire personnel from the Centro di Calcolo [University Computer Centre], who helped us in every way. If, for example, we had to archive our data, we could always find someone willing to do that, even if it was not in his job description. This resulted in a lively production of music and research’ (Tisato [1999], cited in Zattra 2000, 41).
Collaborating on composition 73 In my archival research, I was unable to find minutes from meetings containing any real discussions of the practical actions of cooperation, collaborative creation or related problems.52 However, in musical programs, musical working sheets and scientific articles, it is possible to find trace evidence of this collaboration. Teresa Rampazzi completed Fluxus in 1979 in collaboration with musical assistants Mauro Graziani and Gianantonio Patella. The work was made using the Interactive Computer Music System (ICMS) software, realised by engineer Graziano Tisato specifically to help less expert composers with sound synthesis, voice synthesis, processing, interpolation and mixing, including in real time.53 Many other pieces were composed using ICMS, most probably the very first ‘user-friendly’ computer programme ever made, allowing composers to bypass the difficulties inherent in the alphanumeric MUSIC series software.54 In 1983, another occasion of collaboration occurred, when three American composers (David Behrman, Joel Chadabe and Richard Teitelbaum) were asked to compose a piece to be presented at the Venice Biennale. They used the system 4i, implemented on a 4i processor given by IRCAM derived from the 4X (the ‘I’ stood for Italy!). They had only a week to work in close collaboration with the CSC musical assistants. On that occasion, Behrman wrote that as anyone who has worked with computer music system may have noticed, composition will need months to get all processes done and working. The fact that we were given only few days to complete the commission, that meant we had to make decisions quickly and make mutual helping and concessions trusting in everyone’s competence and intuitions […]. 4i system turned out to be extremely flexible and potent […]. We found out that we could complete a piece within a matter of days, by grace of CSC members’ skills. (Behrman et al. 1984, 86) Behrman realised Oracolo, for 4i real-time system (voice synthesis and transformation, keyboard connected with the 4i system and a videogame) in collaboration with Graziano Tisato. Richard Teitelbaum created Barcarola, for 4i real-time system, based on sound synthesis to simulate sea waves and wind. Joel Chadabe composed Canzona veneziana, with Frequency Modulation synthesis to simulate drum sounds, manipulated to become an imaginary bell (Figure 3.3). One of the most important collaborative projects at CSC was Prometeo. Tragedia dell’Ascolto by Luigi Nono (1984–85). Nono recalled that ‘with Alvise Vidolin, Sylviane Sapir and Mauro Graziani, we worked as follows: first of all, we agreed on the use of some type of sound material I have been interested in; they provided me with a sort of sound catalogue, which became a starting point; from here we began testing and discussing’ (cited in Tamburini 1985).
Figure 3.3 Richard Teitelbaum (standing) and, from left to right, Joel Chadabe and musical assistants Mauro Graziani and Alvise Vidolin, in 1983 at the Venice Biennale, Festival ‘La scelta trasgressiva’. Source: Courtesy Padova University, CSC – Sound and Music Computing Group.
Conclusions Let us cast our minds back to the ping-pong match I discussed at the beginning. Is there a winner within the process of collaboration? As Richard Sennett pointed out: Natural cooperation […] begins with the fact that we can’t survive alone. The division of labour helps us multiply our insufficient powers, but this division works best when it is supple, because the environment itself is in a constant process of change. (Sennett 2012, 73) According to Sennett, the spectrum of the give-and-take exchange can be defined as follows:
1 altruistic exchange, which entails self-sacrifice;
2 win–win exchange (both parties are equal and benefit from the cooperation);
3 differentiating exchange (parties are aware of their respective differences);
4 zero-sum exchange (one party prevails);
5 winner-takes-all exchange (one party completely defeats and wipes out the other). (Sennett 2012, 72)
We may then try to evaluate the experiences of cooperation at IRCAM, CCRMA and CSC and position each in relation to the above points. The CSC experience – with its non-aesthetic approach – was oriented towards a kind of altruistic exchange, within which a win–win exchange allowed both parties (composers and engineers) to pursue interesting musical research and therefore to maintain their respective differences (differentiating exchange). Cooperation at CCRMA was, according to the sources, also a win–win exchange, in an environment once described by Chowning as similar to the ‘Socratean abode’ (Moorer, cited in Nelson 2015, 32; also cited in Roads 1982, 13; Markoff 2005). At CSC, the roles of composers and engineers (who also functioned as musical assistants) remained differentiated; at CCRMA, all participants were researchers who could also become artists or composers, or vice versa. The history of collaboration at IRCAM reflects the specific character of the institution and the position of its founder and first director. In the conclusion to his now-famous article, ‘Technology and the Composer’ (originally published in 1977), Boulez wrote: ‘Research/Invention, individual/collective, the multiple resources of this double dialectic are capable of engendering infinite possibilities. That invention is marked more particularly by the imprint of an individual goes without saying’ (1984, 494). IRCAM, perhaps by nature of its structure – notably the ‘art-science’ dichotomy – helped to establish a ‘musical assistant’ culture in the 1980s. The composer was the ‘grand architect’ of the electronic and live electronic aspects of a work; the musical assistant was responsible for the realisation of the composer’s idea. The danger is that the “musical assistant” culture [helps] composers to work more effectively by removing the requirement for expert knowledge in electronics technology, but [has] the side-effect of distancing the technology from the creative process. It ultimately [creates] a culture of dependence – perhaps even subservience. (Bullock 2008, 204–5) Risset observed that unfortunately not all composers are worried about the collaborative issues to the same extent. On the one hand, a composer is often in a hurry, or even not fully interested in research, and he may restrict collaboration to a simple provision of service. On the other hand, real innovations always fall outside the boundaries of expectation and prediction. Research does not have the same timing as creation: research puts urgency between brackets. Composition has to be done quickly; by its nature, research never ends. Victor Hugo once said, ‘Science seeks perpetual movement. It has found it; it is itself perpetual motion’. (Risset 2014, 14)55 These quotations suggest a zero-sum or a winner-takes-all exchange at IRCAM. For this reason, Risset’s claim that ‘every creative artist is also a researcher’
(2014, 14) is all the more important, and not only for the specific case of IRCAM. His words, those of a researcher and composer himself, point to a vision of collaboration from the very early computer music era, when computer music centres such as CSC or CCRMA asked every composer to be a composer/researcher and every scientist a musician/researcher. Within his/her respective role, each member of this dichotomy had a responsibility to uphold professional ethics: each creation should reflect profound artistic and personal research. By its very nature, computer music requires comparable amounts of pure creativity and research, characteristic of this ‘community of practice’ (Lave and Wenger 1991; see also Wenger 1998). Thus if, according to Risset, ‘every creative artist is also a researcher’, then every musical assistant is also an artist.
Notes 1 This chapter provides results of an individual research project initially conducted at IRCAM funded by the CNRS (Centre National de la Recherche Scientifique; invited researcher, CNRS INS2I, June–October 2012) and by Padova University (Research Grant, project ‘COEM – Cooperative Electroacoustic Music’, 2011–12). This research is currently in progress and is designed to assess the network of agents and processes involved in music making with new media, the implications of musical mediation and music’s changing ontology. 2 ‘Comme je ne vais pas journellement en studio, nous parlons longuement du projet. Pas dans l’abstrait mais à partir de mes réalisations antérieures. Je fais des propositions musicales qu’Andrew Gerzso, musicien, comprend. Il cherche et me propose une solution que j’étudie pour voir si elle correspond à ce que je veux ou s’il faut encore l’élargir. Et ainsi de suite […]. Il faut donc toujours alterner prévision et contrôle des possibilités réelles’ (Gervasoni 2000a, 20). This special issue of Le Monde dedicated to the Festival Agora also included the articles Gervasoni (2000b, 2000c, 2000d, 2000e). 3 Gerzso spoke at the first conference devoted to the profession of the computer music designer. The conference, which he organised, was held at IRCAM on 22–23 June 2007. 4 Gerzso concluded by saying that ‘the ambition of this meeting is to sketch the contours of this new profession in its different forms and elicit the best type of training programs’. Andrew Gerzso, presentation text (brochure) for the conference on the profession of the computer music designer, IRCAM, 22–23 June 2007, p. 5. 5 Until the 1990s and the development of the first user-friendly software, such as Max/MSP, very few composers were able to generate computer music pieces autonomously, from the first conception and synthesis to the diffusion of sound. We can cite John Chowning, Jean-Claude Risset and James Tenney among the rare composers who were at the same time composers, researchers and computer programmers (Kahn 2012, 131–46). 6 ‘As a prototype for other such systems, the software produced here over the years has been exported internationally. At least one system of IRCAM in the Centre Georges Pompidou in Paris, was patterned entirely after the Stanford system, even to the type of computer used. They are currently running a large fraction of the Stanford programs and will soon be running the entire Stanford program library.’ John Chowning, ‘A Brief History of the Stanford Computer Music Project’, 13 June 1977, unpublished, CCRMA, Saildart Archive, classified as ‘1977-06-13 15:12 APPA .PUB [TXT, JC]’.
7 ‘I request that James A. Moorer be granted leave from June 14 1977 until 1 September 1978. During this period Moorer will act as scientific adviser for the IRCAM in Paris. His responsibilities at IRCAM will include the development of A.I. Lab type software on the PDP-10 system at IRCAM in addition to providing technical and scientific advice’. John Chowning, unpublished digital letter, 25 May 1977, CCRMA, Saildart Archive, classified as ‘ANDY[TXT,JC]: 1977-05-25)’. 8 My investigation has, to a very large extent, revolved around unpublished archival sources at the Centre de Ressources de l’IRCAM (CRI). 9 An unpublished archival source reads: ‘L’I.R.C.A.M. pourquoi?/Depuis une dizaine d’années, d’importantes découvertes dans les domaines de l’électroacoustique et de l’informatique ont profondément modifié la fonction des compositeurs de musique; […] Cette révolution, dont les conséquences sont encore embryonnaires mais n’ont pas fini de s’étendre, doit être maîtrisée. Tel est l’objet de l’I.R.C.A.M. [sic], qui se propose: – d’inventorier systématiquement les possibilités nouvelles qu’offrent aux compositeurs et interprètes les techniques scientifiques récentes de production de sons nouveaux; – de mettre les compositeurs, que leur formation n’a pas préparés à utiliser ces nouvelles ressources, en mesure d’appréhender la démarche des scientifiques qui en assurent le maniement, et par un travail en commun de l’influencer en vue d’en tirer le meilleur profit pour la création musicale; de diffuser, dans un public de spécialistes et de non-spécialistes….’ (emphasis added). ‘L’I.R.C.A.M. pourquoi?’, unpublished typed document (9 pages), IRCAM Archives, 7 October 1976; the same text had been sent to the Minister of Interior in June 1977 entitled ‘L’I.R.C.A.M. Ses objectifs – son statut – ses activités’, IRCAM Archives, 1977. 10 ‘Boulez expliquait qu’une recherche en collaboration était nécessaire pour résoudre certains problèmes se posant aux compositeurs […]. Et l’ambitieux projet de remettre en question le contexte de la création musicale dans une démarche collective était évidemment enthousiasmant’ (Risset 2014, 13). 11 A report on research mentions ‘chercheurs, ingénieurs et techniciens de l’IRCAM’. ‘La recherche à l’Ircam en 1979’. Rapports IRCAM 29/80, Paris, Centre Georges Pompidou, 1980, 1. 12 With regard to the term ‘tutor’, see ‘Diffusion générale’, unpublished typed document, 15 October 1982, IRCAM Archives. For ‘musical assistant’, see ‘Le tutorat à l’IRCAM’, unpublished document, probably the late 1980s, IRCAM Archives. Other documentary sources are quoted in Zattra (2013). 13 Personal communications from Serge Lemouton, 27 June 2012, and Andrew Gerzso, 19 October 2012. 14 Gerald Bennett, ‘Research at IRCAM in 1978’, Rapports Ircam 19/79, Paris, Centre Georges Pompidou, 1979. 15 ‘La recherche à l’Ircam en 1979’, Rapports IRCAM 29/80, Paris, Centre Georges Pompidou, 1980. 16 ‘Diffusion générale’, unpublished typed document, 15 October 1982, IRCAM Archives. 17 ‘L’IRCAM – Bilan et perspectives’, unpublished document, 3 March 1983, IRCAM Archives. 18 IRCAM, Administrative meeting (Minutes), unpublished document, 3, sections b/c, 25 April 1984, IRCAM Archives. In French texts of the day, pronouns would all be in the masculine. My research has shown that there were no female assistants or tutors. Today, IRCAM employs one female computer music designer. 19 IRCAM, Administrative meeting (Minutes), unpublished document, 20 May 1983, IRCAM Archives.
20 Structures/Création bureau de Production Juillet, unpublished document (3 pages), in ‘Diffusion générale’, unpublished document signed by Boulez, 5 July 1983, IRCAM Archives.
21 IRCAM, Coordination Committee (Agenda and Minutes of the Meeting), unpublished document, 13 April 1988, IRCAM Archives. 22 Ibid. 23 Ibid. 24 See ‘Le tutorat à l’IRCAM’, unpublished document, probably the late 1980s, IRCAM Archives, and IRCAM, Administrative meeting (Minutes), unpublished document, 9 January 1990, IRCAM Archives. 25 Marc Battier ed., ‘Rapport d’activité 1989’, Paris, Centre Georges Pompidou, 1990. 26 In the mid-1990s, musical assistants at IRCAM were: Pierre Charvet, Eric Daubresse, Christophe de Coudenhove, Thomas Hummel, Serge Lemouton, Cort Lippe, Leslie Stuck (Rapport d’activité 1991, Paris, Centre Georges Pompidou, 1992, and dossier Ircam Conseil d’administration du 25 juin 1992 + PV signé, Ircam archives). 27 IRCAM, Administrative meeting (Minutes), unpublished document, 11 December 2001, IRCAM Archives, 8–9. 28 The term ‘Conseilleur et Réalisateur de l’Informatique Musicale’ was first mentioned during an informal meeting in 1997. I am grateful to Serge Lemouton, RIM at IRCAM, for showing me the email from Leslie Stuck, musical assistant, in which the term was used (Serge Lemouton, personal archive). 29 A brief history of CCRMA can be found in Xavier Serra and Patte Wood eds., ‘Overview. Center for Computer Research in Music and Acoustics (Recent Work)’, report number STAN-M-44, March 1988, available at https://ccrma.stanford.edu/files/papers/stanm44.pdf and Nelson (2015). 30 I had the opportunity to access the files stored within the CCRMA SAILDART computer archive, a facility created by Bruce Baumgart that preserves most of the records (fewer than a million files) of the Stanford Artificial Intelligence Lab from the 1970s and 1980s (part of these records are public and accessible on www.saildart.org). SAILDART also preserves a sort of internal messenger service, used by the members to communicate with each other. I was interested particularly in files written by John Chowning (founding director of the CCRMA from 1975 to his retirement in 1996). I am grateful to Bruce Baumgart and John Chowning for letting me access this incredible archive. 31 John Chowning, ‘A Brief History of the Stanford Computer Music Project’, unpublished, 13 June 1977, CCRMA, Saildart Archive, classified as ‘1977-06-13 15:12 APPA .PUB [TXT, JC]’. 32 Ibid. 33 Email from John Chowning to the author, 22 March 2015. 34 They prototyped a method of digital sound processing in which physical properties of acoustical instruments (or voice or natural sounds) are represented as computer algorithms that can be manipulated. 35 YouTube can be an important source for oral history testimonies. In the documentary from the late 1980s ‘High Tech Heroes #6: Julius O. Smith & David A. Jaffe’ (probably 1988) (www.youtube.com/watch?v=15jG1zfx-IM), we can listen to the explanations of the two computer musicians (these are rough takes from the broadcast documentary). The video shows excerpts of Smith’s music. 36 Other examples of synergy at CCRMA include the collaboration of Chowning, Gareth Loy and Moorer. On 14 February 1978, Chowning thanked Loy and Moorer for the recursive entry feature, as well as Leland for the SCORE feature. He then went on to explain, in a very informal style, the syntax used in the Stanford Artificial Intelligence Language (SAIL): ‘this file will run as is… (remember a blank column 1 is not read by SCORE)[…]. To use SCORE for the sambox, first type yup… you guessed it… 999 and then the file name as per usual. 
SCORE will then use the same instrument name for conditions of overlap’. John Chowning, digital message from CCRMA, 14 February 1978, Saildart Archive, classified as ‘1978-02-14 13:49 PTS. [SAM, JC]’.
37 Among Moorer’s works: We Stopped at Perfect Days, Stanford, 1977; Lions Are Growing, Stanford/IRCAM, 1978; THX Logo Theme, Lucasfilm Ltd., 1985 (www.jamminpower.com/jam.html). 38 Personal communications from Chowning in 2004 and 2007, cited in Zattra (2007). 39 Fred Malouf, cited in Xavier Serra and Patte Wood eds., ‘Overview. Center for Computer Research in Music and Acoustics (Recent Work)’, report number STAN-M-44, March 1988, available at https://ccrma.stanford.edu/files/papers/stanm44.pdf. 40 Ibid., 58. 41 Ibid., 39. 42 Email from John Chowning to the author, 22 March 2015. 43 John Chowning, unpublished digital letter, 20 June 1977, CCRMA, Saildart Archive, classified as ‘1977-06-20 23:55 BLURB, [TXT, JC]’. 44 Four departments formed the original IRCAM charter: instruments and voice (head: Vinko Globokar), electroacoustics (Luciano Berio), computer (Jean-Claude Risset) and a coordinating department known as the département diagonal (Gerald Bennett); these were followed by a fifth department devoted to teaching (pédagogie: Michel Decoust). 45 John Chowning, unpublished digital letter, 20 June 1977, CCRMA, Saildart Archive, classified as ‘1977-06-20 23:55 BLURB, [TXT, JC]’. Because of their growing reputation, members of the pre-CCRMA computer music group were asked to participate in the planning of the future IRCAM as early as 1973. Xavier Serra and Patte Wood eds., ‘Overview. Center for Computer Research in Music and Acoustics (Recent Work)’, report number STAN-M-44, March 1988, available online: https://ccrma.stanford.edu/files/papers/stanm44.pdf. 46 The history of CSC can be found in Zattra (2000; 2002) and Canazza et al. (2012; 2013). 47 A reference to the year 1974 appears in the founding CSC Statute (6 July 1979). The group members presented their activity to an international audience for the first time, and defined themselves as members of the ‘Computer Music Group’, at the third International Computer Music Conference (ICMC), held at the Massachusetts Institute of Technology in Cambridge, Massachusetts. 48 Email from Richard Karpen to the author, 10 July 2000. 49 While working at CSC, Marco Stroppa attended the IRCAM summer courses for young composers in 1982. It was proposed that he continue working and collaborating with the Paris centre, staying on after the course to work as an assistant. After having discussed the matter with Pierre Boulez, he became an assistant to Tod Machover (then the head of the Research Department at IRCAM), who had been commissioned by the Venice Biennale to compose the piece Fusione Fugace. Stroppa, together with Emanuel Favreau, assisted Machover: ‘This piece was one of the first pieces entirely performed live by three performers, [Machover, Stroppa and Favreau]. Tod played the keyboard’ (Stroppa [1998], cited in Zattra 2000, 81). During that time, Stroppa regularly travelled to Padova and worked on his own piece Dialoghi, the second movement of the cycle Traiettoria, for piano and electronics (1982–1984). In 1985, Luigi Nono wrote that Marco Stroppa was an ‘unusual example of a person who has mastered the capabilities of the composer and the technician’ (Nono, cited in Tamburini 1985, 11). According to Stroppa, the technician/composer is comparable to the fusion of a composer and an orchestral conductor, who knows a work to the last detail and can therefore decide very thoroughly on the performance ([1998], cited in Zattra 2000, 82). 
50 Since James Dashow’s 1976 piece Effetti Collaterali, a hundred works have been realised; among them are works by Claudio Ambrosini, Guido Baggiani, Giorgio Battistelli, David Behrman, Anselmo Cananzi, Joel Chadabe, Aldo Clementi, Wolfango Dalla Vecchia, James Dashow, Agostino Di Scipio, Roberto Doati, Franco Donatoni, Mauro Graziani, Hubert Howe Jr., Richard Karpen,
Jonathan Impett, Albert Mayr, John Melby, Wolfgang Motz, Luigi Nono, Corrado Pasquotti, Teresa Rampazzi, Fausto Razzi, Salvatore Sciarrino, Marco Stroppa, Richard Teitelbaum, Adriano Guarnieri (Zattra 2000). A complete list of composers may be consulted at: smc.dei.unipd.it/production.html. 51 ‘Regolamento per l’utilizzazione delle risorse del C.S.C.’, Centro di sonologia computazionale. Informazioni su scopi e attività, Bollettino notiziario dell’Università degli studi di Padova, n. 19, giugno 1981, anno XXX, a.a. 1980–81, pp. 7–8. 52 I was able to analyse several sets of minutes of the board of founders from the 1970s and 1980s. My first archival research dates back to 1999–2000 (Zattra 2000, 166–170; 2002). CSC was located in the same building from the 1970s up to the early 2000s, but its archive was not organised in a formal, structured manner. The centre has relocated three times since then. The repository contains records of invoices, minutes of committee meetings from the 1970s, 1980s and 1990s, drafts and records of composers’ works, and scientific papers. 53 Instructions in ICMS were entered by means of a light pen: the software showed a list of functions and parameters to select from, instead of requiring single instructions to be typed in, as in MUSIC 5 or similar computer programs. Graziano Tisato, ‘ICMS: manuale d’impiego’, Rapporto interno Centro di Calcolo, Università di Padova, 1978. See also Tisato (1976, 1977a, 1977b). 54 The first version was presented successfully at the ICMC – International Computer Music Conference, held at the Massachusetts Institute of Technology in Cambridge, Massachusetts, in 1976 (Tisato 1976). 55 Victor Hugo’s quote is taken from his essay Shakespeare (1864).
Part II
Performance
4 Alvise Vidolin interviewed by Laura Zattra The role of the computer music designer in composition and performance Laura Zattra Alvise Vidolin: My original training was characterized by a rational mental attitude. The world of electroacoustic music, and even more so that of computer music, is however an abstract one. It both requires and allows (almost a contradiction) a rigorous evasion. An engineer has to sort things out for operational purposes. In this field, on the other hand, you have the satisfaction of obtaining results based on consistent and precise models, for the sheer pleasure of the human intellect. I am fascinated by this construction of the absurd and the utopian. (interviewed by Laura Zattra on 27 July 1999)
With these words, Alvise Vidolin (born 1949 in Padova) describes his profession, marked by an entanglement of discipline and creativity. He is a co-founder, member and researcher of the Centro di Sonologia Computazionale (CSC – University of Padova). He is a sound engineer, a live electronics performer, a researcher on computer music and a pioneer in his field. Since the foundation of the CSC during the 1970s, Vidolin has worked closely with many Italian composers including Claudio Ambrosini, Giorgio Battistelli, Luciano Berio, Aldo Clementi, Wolfango Dalla Vecchia, Franco Donatoni, Adriano Guarnieri, Luigi Nono and Salvatore Sciarrino. He has assisted them during the creative process and has worked as a performer in first and subsequent performances of their compositions. He has consistently taken great care to document and preserve information pertaining to his work, particularly with regard to the upgrading of technology. Vidolin is what we call a computer music designer, one of the pioneers of this profession. The term, increasingly used by members of the electroacoustic music scene, implies a multitude of different functions in both composition and performance. (Computer music designers are often called musical assistants.) On innumerable occasions and in hundreds of publications, Vidolin has contributed to the discussion of themes such as cooperative creation, electroacoustic music notation, implications of musical
mediation, preservation of electroacoustic music and (as a researcher and sound engineer) the computing of sound and music.1 In 1980, he helped to found and later direct the Laboratorio permanente per l’Informatica Musicale della Biennale (LIMB), an institution meant to create a permanent and operational link between CSC and the Music Sector of the Venice Biennale.2 Within this framework, Vidolin promoted research, studies, workshops, publications, concerts and commissions for computer music pieces. In 1982, he was one of the organisers of the first European edition of ICMC (International Computer Music Conference) in Venice and was later the curator of the exhibition Nuova Atlantide. Il Continente della Musica Elettronica (Venice 1986) and co-editor of the catalogue of the same name (Doati and Vidolin 1986). The exhibit enabled visitors to experience types of electroacoustic music that have been produced in different parts of the world since 1900. The book grouped together a selection of essays as reference points on the historical, technological and sociological aspects of the electroacoustic music scene. The second part of the book provided profiles of centres and descriptions of electroacoustic instruments, a bibliography and a discography (Doati and Vidolin 1986). As a musicologist and historian, one of the first contributions made by Vidolin in the area of electroacoustic music studies was a workshop titled Music/Synthesis. Electronic Music, Electroacoustic Music, Computer Music, followed by a short book with the same name published soon afterwards (Vidolin 1977). This project marked the beginning of his decade-long collaboration with Venetian composer Luigi Nono (1924–90). Vidolin describes himself as an ‘interpreter of electronic music instruments’, meaning by that a professional capable of combining musical skills with sonological and signal processing know-how. According to him, this type of musical performer ‘not only “plays” during a concert but also designs the performance environment for the piece, and acts as an interface between the composer’s musical idea, and its transformation into sound’ (Vidolin 1997, 439). The roles and competencies of these professionals have not yet been well defined (see the previous chapter by the same author and Davies 2001), because they ‘range from the players of synthesizers to signal processing researchers, with many intermediate levels of specialization’ (Vidolin 1997, 440). Vidolin defines this professional as someone who ‘does not simply translate a score into a sound, but transforms the composer’s abstract musical project into an operative fact, making use of digital technology and new developments in the synthesis and signal processing’ (Vidolin 1997, 440; see also Vidolin 1993). Consequently, Vidolin distinguishes between the interpreter who works with the composer in the studio and the interpreter who is a performer of the composition in concert, i.e. the before and the after phases of musical creation. These two dimensions of the computer music designer may either
co-exist within the same individual, as is the case with Vidolin, or in specialists in one domain or the other (Vidolin 1997, 441–43). Attempting to understand this collaboration with a composer is challenging because the interchange between the two actors is typically hidden, unrecorded and in most cases takes place orally. The present chapter investigates Vidolin’s vision of the role of the computer music designer through a study of his collaboration on two compositions: Luigi Nono’s Prometeo. Tragedia dell’Ascolto (1981–84)3 and Salvatore Sciarrino’s Perseo e Andromeda (1991). The chapter is based on a series of interviews and discussions I had with him between 1998 and 2015, complemented by research based on sources and archival documents. Knowledge of the collaboration can provide crucially important information about compositional choices within the creative process, structural patterns and aesthetic solutions within the completed work. Much of this information would remain unknown to the analyst, who focuses primarily on the composer’s contribution.
Post-industrial fascinations and creative interpretation The birth of a profession can rarely be precisely dated. It starts when someone, in the course of a spontaneous activity, decides to focus entirely on that activity. This signifies a change in identity and status of the practitioner. Instead of simply being a skill for which one is paid, the activity becomes part of a network of several people with similar skills. The key feature is the emergence of professional autonomy. Once members of a rising profession have acquired the experience and competence to judge the quality of similar work performed by other individuals, then the basis for defining the profession has been achieved (Becker 2009, 10). A standard framework of tasks, know-how and objectives is beginning to take shape in computer music: the status of the computer music designer has to be seen in this context and is currently at this stage (Zattra and Donin 2016). Alvise Vidolin and I took these thoughts as our point of departure in April 2013, in order to discuss whether this profession has reached a recognised status or if it still is a spontaneous activity. AV: I think my position, as I guess is true for my colleagues, is somewhere in between an orchestra conductor ‘from a post-industrial world’ and a musical interpreter. Conductors lead musicians, human beings; they have a crucial role in shaping, understanding and interpreting the whole musical work, and it is their responsibility to create a performance anew every night. Nowadays post-industrial conductors lead automata. So I see myself as a designer of computer programs who then ‘leads’ those programs, which brings along the aforementioned responsibilities. At the same time, I see myself as an interpreter, in terms of what Joseph Joachim had been for Johannes Brahms or Roberto Fabbriciani,
Ciro Scarponi and Giancarlo Schiaffini for Luigi Nono. I represent both these roles, depending on whether I act before (or during) the composer’s creative process or after the composer’s creative process. However, it is unfortunately true that musicians – at least in the case of Western music or avant-garde music – are still regarded as pure executors of the composer’s will, with a quite limited freedom, or rather, even in the most open works, with a ‘directed’ freedom (though of course other forms of music making are much less bothered by this). Musicians/interpreters like Joachim or Fabbriciani are not yet fully appreciated; their completeness is not yet fully recognised. (12 April 2013) During the same interview, I asked if his profession is comparable with other collaborative practices in the art world, such as film production. AV: If I consider these issues, surely I cannot compare them with cinema. Cinema is more like acousmatic music: collaboration happens ‘before’, and the result is irrevocably fixed. Projectionists cannot do anything but get the projection started. The same is, or at least was, true with acousmatic pieces: during concerts there was no or very little interpretation. But of course acousmatic music has evolved. Today acousmatic music is beyond that: the interpreter has regained a crucial role and needs to ‘re-shape’ the room acoustics response at every concert. Many musical ‘scores’ give this possibility today (think, for example, of the French school with the Acousmonium, or many electroacoustic music pieces realized at Bourges, or many pieces by Annette Vande Gorne, to name a few). It is true that technicians from the first analogue era of electroacoustic music (therefore acousmatic), like Marino Zuccheri at the Studio di Fonologia della RAI in Milan, can be seen more as cinematographers or directors of photography (as I also state in my article, Vidolin 2013). These technicians had a crucial role to play, just as technicians did in film production. However, acousmatic music was never acousmatic stricto sensu! Lately I have been very committed to an educational and historical project, consisting of studying, reconstructing and playing some particular works of musical theatre. Among them, there are many ‘acousmatic’ pieces by Luigi Nono. Some of them are completely acousmatic works such as Ricorda cosa ti hanno fatto in Auschwitz (1966), but I am also working on acousmatic versions of pieces that were originally composed for tape and soloists, for example, A floresta é jovem e cheja de vida (1965–66). I sort of ‘re-compose’ these pieces sonically at every concert, which was Nono’s intention. In A floresta I use the original two quadraphonic
tapes, a tape for 8 channels (digitized version), and the multi-track recordings that were made for the official disc (the only version approved by Nono), with the soloists playing their parts. My role now is to play those pieces and perpetuate Nono’s compositional wishes. Again, it is a question of ‘re-shaping’ this music, following Nono’s aesthetic logic, according to [the acoustics of] every concert hall. The reason behind these acousmatic renditions […] is that I want to make authentic renditions of Nono’s pieces, with sound recordings of the original interpreters who were close collaborators and knew precisely how to play Nono’s music. Mind you, it is not my intention to eliminate the original version of the piece [A floresta] with real human performers (which moreover is still what I do on other occasions), but to be open to new possibilities of listening (a word so important for Luigi Nono) to historical acousmatic documents. In the end, if we want to find an equivalent among other collaborations in the art world, I think we should consider the world of industry, again. Not the old-fashioned industry, with a factory boss and workers; I am thinking more of an up-to-date idea of industry, current companies based on teamwork, where everyone works together to ensure positive results. (12 April 2013)4 How, then, is this collaboration being carried out in practice in such ‘creative factories’? (An electroacoustic music work can involve dozens of people). Is there a protocol with rules or a consensus on how to behave? Are there typical collaboration methods between composers and computer music designers? AV: In my experience, a great deal of oral communication and planning is the key to a successful collaboration. In my work, I feel I have two basic tasks: the first is to understand the composer’s vision. This is possible only through dialogue, empathy and even imagination: as in any relationship, it is not always easy to decipher others’ minds and intentions. Planning is the second important tool and the key to a positive experience. By planning, I mean taking the time to organize, reflect after meetings, submit my ideas, solutions, creations. After that, I leave composers the time to evaluate and discuss again and again every step of the creative process, in order to deliver on time something that really satisfies them, represents them, but still is something I am happy about. This also applies to my role as interpreter. Once the work is completed following those premises, I am in a position to perform the musical work in the correct way, or at least to try to reflect the composer’s intention. In practical terms, today online communication (written or oral) is really useful. Before the Internet era, I used to meet composers,
call them by telephone, write them letters, sketches, notes and anything else I needed to do to favour dialogue and cooperation. (12 April 2013)
The computer music designer in the studio In presenting Vidolin’s experience and practical activity, it is crucial to differentiate between the so-called poietic and aesthetic sides of his work, i.e. between creating electronics in collaboration with a composer and performing or re-performing a completed work.5 Vidolin calls the two sides of his activity the before and the after.6 The necessity of separating the role of the electroacoustic music ‘composer’ from that of the electroacoustic (or electronic) ‘interpreter’, which Vidolin (1997, 441) observes, was already apparent as far back as the 1950s at the Cologne Studio for Electronic Music. The young Gottfried Michael Koenig (who later became a composer/researcher) had the task of transforming graphic scores produced by invited composers into real electronic sounds generated via analogue instruments. With the development of computer music, this ‘interpreter’ gradually became the computer music designer.7
Alvise Vidolin and Luigi Nono According to Vidolin, the studio interpreter of the 1950–70s was in charge of ‘bringing to light’ the composer’s embryonic technological ideas. One of Vidolin’s first collaborations was with Luigi Nono. They met in 1977 during the organisation of the workshop Music/Synthesis. Electronic Music, Electroacoustic Music, Computer Music. AV: Nono had expressed his interest in getting to know me better and possibly in collaborating. He wanted to study computer music and knew I was collaborating with the University of Padova [at CSC], and this could be the starting point for him to develop new musical research and ideas. This was about the time that Luigi Nono was abandoning the Studio di Fonologia della RAI di Milano, when the Studio was in decline. He was eager to start new collaborations, with new performers and institutions. He had the idea of starting a laboratory in Venice with the Giorgio Cini Foundation or in the music conservatory, but those projects remained unfulfilled. (27 July 1999) After …sofferte onde serene… (1976, for piano and magnetic tape), he realized that the Studio di Fonologia in Milan was insufficient for his compositional needs. For the piece Con Luigi Dallapiccola (1979, for 6 percussionists and live electronics), Nono asked technician Giovanni Belletti from Milan to build three ring modulators in order to transform
some sounds. This was his first attempt to work outside the Studio di Fonologia. (25 September 2015) Nono was eagerly looking for new paths in his personal research in the late 1970s. He was trying to explore new ‘anti-academic’ sounds that only a heterogeneous circle of collaborators devoted to his work could give him; the group included Vidolin.8 This research took place at the CSC and at the Experimental Studio of the Heinrich-Strobel-Stiftung (hereafter the Strobel Foundation) of the Südwestrundfunk (SWR) at Freiburg im Breisgau.9 After their first meeting, Nono and Vidolin started collaborating on the review Laboratorio Musica (1980–81), of which Nono was director, and on several musical projects during the period known as ‘Verso Prometeo’ (ca. 1980–84, the years leading up to the first performance of Prometeo. Tragedia dell’ascolto). Vidolin recalls that Nono ‘had a strong need to have new ears’ and that therefore he was asked by the composer to experiment and let him hear the technological possibilities; his presence was related to the verification of sounds, the sound diffusion and spatialisation (De Pirro 1993, 13). During the period called ‘Verso Prometeo’, Vidolin worked with Nono, but only as an interpreter, on the production of Io, frammento dal Prometeo (1981), Quando stanno morendo. Diario polacco n. 2 (1982), Omaggio a György Kurtág (1983–86) and Guai ai gelidi mostri (1983). These experiences were very important for Vidolin’s developing career as a computer music designer. Meanwhile, they started developing the project Prometeo. CSC members received visits from the composer, during which they discussed and made him listen to their first sound experiments (27 July 1999). The following is a description by Nono of how they initially organised their collaboration. In an interview with Alessandro Tamburini, immediately following the first performance, Nono said: ‘first of all, we agreed on the use of some types of sound material I was interested in; they provided me with a sort of sound catalogue, which became a starting point; from there on, we started to do some tests and discuss’ (Nono, cited in Tamburini 1985, 11). Vidolin was responsible for a small group of researchers, which included French engineer/researcher Sylviane Sapir and Italian researcher/composer Mauro Graziani. Nono made frequent trips from Venice (where he lived) to Padova in order to experiment and comprehend real-time possibilities and synthesis, and occupied a ‘quite big place’ within the CSC studio (27 July 1999). On the other hand, he was not fully satisfied with the computer programs CSC used at that time (notably the MUSIC 5 software) because they worked in deferred time (25 September 2015). They therefore turned to a real-time digital sound processor, the 4i system, originally conceived at IRCAM in Paris by Giuseppe Di Giugno, which was capable of synthesising sounds in real time (Azzolini and Sapir 1984; Debiasi 1984; Debiasi et al. 1984; Di Giugno 1984;
Sapir 1984; Sapir and Vidolin 1985). The sounds produced by the electronic devices had to be adapted to the space of the original performance, which was held in Chiesa di San Lorenzo, a disused church in Venice, during four days in late September 1984. The performers (and also the audience) were placed inside a large, high wooden structure designed by the architect Renzo Piano. The structure resembled an ark (i.e. a big unfinished boat) or the interior of a violin (performers were placed on balconies on three levels). Thus, the work was performed in an unconventional concert space; the church was almost filled with a new wooden structure. The sound would first resonate inside the wooden structure, then inside the church and would ultimately circulate everywhere inside the space. AV: After a few sessions, we started elaborating a few sounds. We also made several trips to Venice. We used to walk through the districts of Venice, where he made me listen to typical Venetian sounds. I kept a diary of those meetings. He made me listen to some sounds with glass bells. Someone had built them for him. Sylviane [Sapir] used her knowledge of nonlinear distortion to create the application Inter2. (1 June 2009) One of the first musical paths was intended to create sounds for the simulation of breaths and blowing; these could be transformed from feeble zephyrs into tornadoes, in constant flux. But this instrument turned out to be excessively automatic, not very musical. (27 July 1999) So we decided not to use it. Nono thought these sound structures were too precomposed. Why use a precomposed structure when one has the possibility to use the 4i system, a real-time digital sound processor? That was what he told us. (1 June 2009) We decided to investigate two main groups of sounds in the extreme range of human hearing: bands of sinusoids in the very low range and very high frequencies (this research on the most extreme bands of frequencies was part of his aesthetics, as one can hear in works such as Como una ola de fuerza y luz (1972) or 1° Caminantes…..Ayacucho (1987)). We also designed sounds to evoke glass bell resonances, wind instruments, distant echoes. (27 July 1999) We synthesized those sounds, and then during his second visit to CSC, we made him listen to those experiments.10 He was very happy with them and decided to insert those sounds into the overall score.11 Those sounds were supposed to complement the sounds produced by the acoustic
instruments, modify them and enlarge them. What we designed in the end was a real-time synthesis environment that could synthesize a single sound developing in large micro-intervals in the form of Nono’s preferred chords: chords of fifths and tritones. Our 4i digital sound processing system made those sounds that the Freiburg live electronics instruments were incapable of making (at that time). Our system was engineered to intervene in the Prologue (the opening of Prometeo) and in the Isole (Islands), particularly in the first Isola. (27 July 1999) We then created the computer application PEATA (from the name of a boat!), based on the principle of frequency modulation. (1 June 2009) This was not a compositional system; it was more a gestural environment (the performance gesture worked via potentiometers). Sound was changed thanks to movements we made with our hands: we operated 6 potentiometers and keys on the computer keyboard. The overall sound effect sounded like a choir (24 ‘voices’ controlled in pitch, timbre, and their micro-intervals). (25 September 2015) The performance environment for Prometeo was designed to guarantee maximum liberty and to adapt in the most appropriate manner to the sounds produced by instrumentalists and singers (Sapir and Vidolin 1985). In Isola I, sound production used the principles of granular synthesis. AV: Granular synthesis was not meant to generate sound pointillism. It was rather used to provide continuity to sound in constant evolution (Nono’s famous concept of Suono Mobile-Mobile Sound).12 Grains – each one was different from every other one – were seamlessly chain-linked (you don’t perceive them as separate). We used 24 ‘voices’, and they really evoked human voices, like a choir. The overall system was based on frequency modulation, which means it was versatile and open. So it could also have generated other, different resonances. Anyhow, we used small indexes of frequency modulation, near the sinusoid, and that provoked beats, which sounded like a choir singing in unison. We also used fourths and minor seconds to simulate what the real choir was doing. (25 September 2015)13 We used the instrument PEATA for the Prologue and Isola I. In the 1984 Venice creation of Prometeo, Luigi Nono intended to evoke the opening chord from Mahler’s First Symphony. So the 4i real-time
processor could create a 116.5 Hz B-flat, projected from the loudspeaker under the wooden structure, then it would open up over seven octaves and then transform into the sound of a distant chorus. PEATA always proceeded in conjunction with string parts (soloists) marked in the score. In Interludio 2, Nono’s desire was to create a ‘sonic silence’. This time, we had to ‘play’ in conjunction with some glass percussion instruments specifically built for the occasion: this was the task with which Nono had entrusted us (GLASS: in the score, they were 2 glass bells, with sound transformed by the Freiburg Strobel Foundation technology). The 4i system interacted with glass instruments by generating sound synthesis (and with the live electronics of the Freiburg assistants) and was used in the original performance in Chiesa di San Lorenzo, as well as in Milan the year after. (22 February 2011) In his logbook, Alvise Vidolin took note of ideas (some of them aborted), discussions, computer sketches and schemata. AV: In my diary of the time, I’ve written ‘snaps’ [he shows me the page]. I remember Nono came one day to CSC. He told us he had just listened to some ‘snaps’ while he was sailing in his motorboat from Venice towards Padova. It was a window of the motorboat banging in the wind. That sound fascinated him; it was rough, violent and noisy. He wanted us to synthesize some ‘snaps’. We did that, but in the end, we did not use them in the Venice version, only in Milan (the second version) in 1985. (1 June 2009) In that case, Vidolin and other musical assistants synthesised precise sounds according to Nono’s specific requests. Among the tasks of the musical assistant, as for any luthier, programming open and versatile environments is one of the most important. A computer music designer always tries to create complex instruments (much like the makers of acoustic instruments), capable of adaptation to any musical and compositional situation. The 4i system was an open system; that was its greatest feature. This means that you did not have to program the machine each time for a new musical work. I called this feature performance environment. When Luigi Nono visited CSC, we had to be flexible enough to change our system and results immediately. We had to show him different possibilities in a short time, let him choose, refine the one he eventually chose. We had to foresee what he liked and what he could not like, so to speak. (1 June 2009)
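The beating, near-unison ‘choir’ that Vidolin describes can be illustrated in a few lines of code. The following is a minimal numpy sketch of small-index frequency modulation across slightly detuned voices; it is an illustration only, not the 4i/PEATA implementation: the detuning range and modulation indices are invented values, and only the voice count (24) and the 116.5 Hz B-flat come from the interview.

```python
import numpy as np

SR = 44100           # sample rate (Hz)
DUR = 4.0            # seconds
N_VOICES = 24        # the interview mentions 24 'voices'
BASE_F = 116.5       # Hz, the B-flat cited for the Prologue

t = np.arange(int(SR * DUR)) / SR
rng = np.random.default_rng(0)
mix = np.zeros_like(t)

for _ in range(N_VOICES):
    fc = BASE_F * (1 + rng.uniform(-0.004, 0.004))   # slight detuning -> slow beats
    index = rng.uniform(0.05, 0.3)                   # small index: nearly sinusoidal
    # classic FM: the carrier phase is modulated by a sinusoid at the same frequency
    mix += np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fc * t))

mix /= N_VOICES   # the near-unison detunings make the summed voices beat like a choir
```

Because the modulation index stays small, each voice remains close to a pure sinusoid; it is the slight mutual detuning that produces the unison-choir beating Vidolin mentions.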
There were also difficulties, or at least complexities, to overcome, especially at the level of communication and language: the mindset of computer music designers is different from the mindset and vision of composers. AV: During this long phase of planning and realization of the technological system, the greatest difficulty for me as a researcher (with Sylviane Sapir and Mauro Graziani) lay in the coordination between Nono’s perfect knowledge of the architectural environment, our knowledge of the problems related to the installation of the entire system in such a place (the 4i system and the PDP-11/34 computer), and our attempts to imagine the hypothetical computer sound result. Our previous experience, my own included, was with ‘musical research’ developed in a laboratory. So it was such an enormous change and challenge for us. Prometeo was a demanding musical work, long and complex, and our technology had to be part of a bigger musical machine. It was not easy. Prometeo was a significant step for CSC toward greater musical/technological outcomes, for new solutions, new paths. (27 July 1999) Alvise Vidolin and Salvatore Sciarrino The second important collaboration for Alvise Vidolin as a computer music designer is the one with Sicilian composer Salvatore Sciarrino. Vidolin and Sciarrino met for the first time in 1981 during the Opera Prima workshop at La Fenice Theatre in Venice. They have so far collaborated on Perseo e Andromeda (1991), Nom des Airs (1994), Cantare con silenzio (1999) and Lohengrin 2 (2004). In the next paragraphs, I shall concentrate on their first collaboration. AV: My collaboration with Salvatore Sciarrino began in an unusual way. During the 1960s, Sciarrino had already realized an electronic piece at the Studio di Fonologia della RAI di Milano, but that was more a learning exercise. In the following years, he remained quite uninterested in electronic technology. I think the reason was that he could already produce such beautiful and almost ‘electronic’ sounds with his compositional ability applied to traditional musical instruments. Then, in 1989, he received an important commission from the Stuttgart Theatre. (27 July 1999) Perseo e Andromeda is an unusual music theatre work (Zattra 2006b). The overall background sound-space is made by digital synthesis. Sounds are programmed by computer music designers and diffused by live electronic interpreters during concerts. However, if the soundscape had been formed by an imitation of real orchestral sounds, it would have been a completely
traditional opera, with four voices, ‘instrumental’ parts and conductor. Instead, synthetic sounds are programmed to recreate the Island of Andromeda’s soundscape, according to Sciarrino’s musical rationale. Sound synthesis in Perseo e Andromeda suggests winds, the sea, the seagulls, the horizon, the pebbles, drops of water. … AV: Sciarrino is a unique and courageous composer who rows against the mainstream. At the time, digital synthesis was beginning to be supplanted by real-time techniques, and he decided on the contrary to use only pure synthesis, as a sort of limitation, a constraint. This opera is a traditional one, from a structural point of view, for 4 voices, but he wanted to replace the entire orchestra with a synthetic one. Synthetic sounds should not imitate real sounds. Digital synthesis attracted him because it could give a totally new, different and suggestive sound-scape to the Island. (27 July 1999) Sciarrino’s basic idea was to start from sounds to create the illusion of a wave, and this was the essence of the piece. AV: We basically used subtractive synthesis: white noise filtered through a second-order resonant low-pass filter, to create sounds that go from almost sinusoidal to more complex. We did not sample natural sounds, which seemed the most obvious solution. That way it would have been an example of musique concrète, and he did not want that. He also did not want us to analyse and re-synthesize those sounds, as a sort of imitation; it would have been a hyper-realistic work. What he wanted was an extremely musical synthesis. (27 July 1999) Collaboration with Sciarrino was carried out in a totally different manner from that with Nono. AV: This was completely new for us (at CSC). We met with Sciarrino for a long period of solfège (à la Pierre Schaeffer), a phase of familiarization at CSC with the machine. He had submitted his idea of suggesting and recreating waves. So, starting from this, I made him listen to different synthesized sounds.14 He listened and learned. All the solfège was made with the 4i system because he wanted to learn, understand, and above all interact, change and vary any sounds he needed in the same rehearsal session. (25 September 2015) He watched and listened to the series of different timbres that I could synthesize with the 4i. He wanted to be completely familiar with the
‘new orchestra’. Only after that process did he feel confident to ‘compose’ for the new orchestra, with total independence. He became truly knowledgeable about the machine and would decide on a symbol for every sound we could synthesize. This was his personal way of notating different timbres. So he was able to foresee everything he needed. He wrote the complete score once he got home. That was amazing. It was a graphical score, with the voices and the synthetic part, with a quantitative description of the filter resonances, dynamic ranges and spatialization. This is very rare. He was not the type of composer with an intuition, to whom we have to adapt in order to translate his idea into a language for the machine and proceed with several steps of adjustments. He wanted to learn and become completely confident with our instrumentarium. (27 July 1999) Once he came back with the complete graphical score, it was our turn to make things possible. This was the second phase of our collaboration, and I worked in collaboration with Paolo Zavagna. Sciarrino had planned to use a large number of voices (which means a complex polyphony of waves, of synthetic sounds). We agreed that we could not synthesize all sounds in real time; technology at the time was not advanced enough (there was no commercial synthesizer capable of synthesizing the entire synthetic part). I made an instrument with MUSIC 5 software15 in order to create most of the sounds, starting from an instrument that I had previously created for the 4i digital sound processor. MUSIC 5 permitted the synthesis of the same type of timbres. But it was not equal to the 4i and did not have the same ‘logic’. We had to simulate with MUSIC 5 what we had made with the 4i. It was quite a struggle. We had to program a filter ‘from scratch’, so to speak. During concerts, we used a play-list of those MUSIC 5 pre-calculated synthesized parts. We kept the 4i for those parts whose duration was not foreseeable. There were parts in the score the conductor could lengthen or shorten, following his sensibility, dramaturgy or on-stage movements. (25 September 2015)
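The filtering technique Vidolin describes, white noise through a second-order resonant low-pass filter, is easy to sketch. The following is a minimal scipy illustration, not the MUSIC 5 or 4i instrument: the biquad coefficients follow the standard RBJ cookbook formulation, and the cutoff frequencies and Q values are invented for the example. A low Q yields a broad, breathy band of noise, while a very high Q rings almost like a sinusoid, matching the range of sounds described above.

```python
import numpy as np
from scipy.signal import lfilter

SR = 44100

def resonant_lowpass(x, f0, q, sr=SR):
    """Second-order (biquad) resonant low-pass, RBJ cookbook formulation."""
    w0 = 2 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2 * q)
    b = np.array([(1 - np.cos(w0)) / 2, 1 - np.cos(w0), (1 - np.cos(w0)) / 2])
    a = np.array([1 + alpha, -2 * np.cos(w0), 1 - alpha])
    return lfilter(b / a[0], a / a[0], x)

rng = np.random.default_rng(0)
noise = rng.standard_normal(3 * SR)             # 3 seconds of white noise
breath = resonant_lowpass(noise, 800.0, 2.0)    # low Q: broad, breathy band
whistle = resonant_lowpass(noise, 800.0, 80.0)  # high Q: rings almost like a sinusoid
```

Sweeping f0 and q over time, per voice, would then give the kind of evolving wave-like polyphony the score notates quantitatively.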
The computer music designer as a performer of live electronic music According to Vidolin, in performance the computer music designer ‘must realize the processing system and plan the performance environment’ for the concert situation (1997, 442). Sound and space are the two dimensions at stake. Accordingly, it is clear that a large part of this work commences at an earlier stage, during the studio activity.
AV: Actually, the interpreter’s work begins during the studio work, because planning is quite central. After all, during concerts the live electronic interpreter ‘just’ makes sure that technology works properly, meaning the overall complex system of algorithms, patches, dynamic levels, spatial projection, etc. (12 April 2013)16 Alvise Vidolin and Luigi Nono With Nono, Vidolin started as a live electronic musician; he was not involved in the compositional process but collaborated in the performance phase of many pieces. Through this experience, he became acquainted with Nono’s artistic world. AV: My first work with Nono was Io, frammento dal Prometeo (1981). We made the first performance at the Venice Biennale (in the sports hall). The whole time he told me to ‘listen’ (‘Ascolta!’ was Luigi Nono’s motto during the 1980s). He wanted me to listen to the space of the sports hall and to report what I heard, because he could not be in all the corners of the place at the same time. Then we made Quando stanno morendo. Diario polacco n. 2 (1982), in which the whole central part is made up of delays. I worked with him on Omaggio a György Kurtág (1983–86). The first performance was a sort of improvisation at the Maggio Musicale Fiorentino (10 June 1983) with Hans-Peter Haller from the Strobel Foundation and me explaining to Nono the theoretical principles involved in the piece. I assisted at the first performance of Guai ai gelidi mostri (in Cologne, 23 October 1983) and helped him in the choice of harmonizers. Quando stanno morendo. Diario polacco n. 2 was a remarkable experience, in the San Rocco School hall. Luigi Nono came to me and said: ‘you’ll perform the delays’, even if that was the first time I saw the score (he had just finished writing it). But we had already spoken and discussed in depth, at the Strobel Foundation, his expectations of the live electronics technology. So, even if I had not participated in the compositional phase, I knew his compositional aesthetic. In this piece, the live electronic part must pick up sounds from the acoustic instruments and multiply them with delays (2 seconds for the flute and 5 seconds for the cello). This produced an accumulation of sounds. I hesitated at first […]. But slowly I started to feel confident and allowed the system to reinforce the sounds. I looked at him trying to receive some feedback, and he waved his hand again and again. I began to feel as if transported by the sound towards the climax of the piece, with sounds circulating all over the room at full volume. Without knowing it, I had caught Nono’s vision; he was very happy, and the tension preceding each rehearsal vanished all of a sudden. (25 September 2015)17
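The delay-and-accumulate process described here is simple to sketch. Below is a minimal Python illustration of a feedback delay line; the 2-second and 5-second delay times come from the interview, but the feedback gain and the input signals are invented placeholders, not the actual settings used in performance.

```python
import numpy as np

SR = 44100

def accumulate(x, delay_s, feedback=0.7, sr=SR):
    """Feedback delay line: y[n] = x[n] + feedback * y[n - d]."""
    d = int(delay_s * sr)
    y = x.astype(float).copy()
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]  # each pass re-injects the delayed output
    return y

# e.g. the flute signal through a 2-second delay, the cello through a 5-second one:
# flute_out = accumulate(flute_in, 2.0)
# cello_out = accumulate(cello_in, 5.0)
```

With a feedback gain below 1 the repetitions decay; pushing it towards 1, as the performer gains confidence, lets the layers pile up towards the climax Vidolin recalls.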
Prometeo by Luigi Nono was the first truly complex collaboration between Nono and Vidolin, during both the compositional phase and the performance. Real-time sounds produced with CSC’s technology had to fit with the live electronics transformations realised at the Strobel Foundation (Haller 1985). AV: When working on Prometeo during the compositional/research process, we decided to leave the performance environment open, so that we could test it and adapt it during the one-month period of rehearsal in Venice, in San Lorenzo church (1984). We had a series of difficulties, not the least of which was the very high humidity in San Lorenzo and the sudden, drastic temperature changes, which were not good for our computers. Other difficulties were related to live electronic interactions with singers and performers. We must of course not forget that Prometeo, like many other pieces by Nono, was a work in progress. Nono made different versions at every concert. We had to adapt to this fluidity and make changes very quickly. (27 July 1999) Alvise Vidolin and Salvatore Sciarrino Perseo e Andromeda by Sciarrino was performed using the 4i system in January 1990 at the Staatstheater in Stuttgart, Germany. From the second performance on, the 4i system was replaced by another digital workstation known as the Musical Audio Research Station (MARS) designed at the Istituto di Ricerca per l’Industria dello Spettacolo (IRIS). It is important to note that Sylviane Sapir had a crucial role in the development of both the 4i system and the MARS workstation. AV: In Perseo e Andromeda, we could have recorded the entire synthetic part on tape and simply reproduced it during concerts. But we were not interested in this ‘simplicity’. Moreover, this type of solution would have resulted in a rigid and even dangerous performance for the singers. On the other hand, we could not synthesize completely in real time, because no technological equipment of the time permitted that. So we decided on a batch system and used a general-purpose signal processor (the 4i and later the MARS system). In the end, we used 4 computers: the 4i system (and afterwards the MARS) and two computers with play-lists (of MUSIC 5 pre-synthesized parts); a fourth computer was used for the spatialization. (12 April 2013) This system was played via gestural control, connected to two personal computers: each sound was triggered by an independent play-list of pre-calculated audio files (Provaglio et al. 1991; Vidolin 1991; 1997, 456).
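The cue-triggering logic of such a play-list can be sketched in a few lines. The following Python fragment is a hypothetical illustration of the idea, not the Perseo e Andromeda software: the file names and the print placeholder stand in for real cues and a real audio playback call.

```python
from collections import deque

class PlayList:
    """A queue of pre-calculated cues: one trigger plays one cue, in score order."""
    def __init__(self, files):
        self.cues = deque(files)

    def trigger(self):
        if not self.cues:
            return None
        cue = self.cues.popleft()
        print(f"playing {cue}")   # stand-in for a real audio playback call
        return cue

# two independent play-lists, in the spirit of the setup described above
waves = PlayList(["wave_01.aiff", "wave_02.aiff"])
gulls = PlayList(["gull_01.aiff"])
waves.trigger()   # in performance, a gestural control would advance each list
```

Keeping each play-list independent lets the performer follow the conductor on one stream without disturbing the timing of the others, which is the flexibility the tape solution would have sacrificed.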
AV: Spatialization in Perseo e Andromeda was also very interesting. Sciarrino wanted a precise ambience. Sounds should pass over the listeners’ heads or completely encircle them. What is noteworthy is that he notated all those movements on the score. However, movements were so numerous that we had to use another computer completely dedicated to this task. (12 April 2013)18 It is also important to note that Vidolin and Sciarrino decided to publish the complete computer synthesis of the opera. The Ricordi score is in fact an excellent example of preservation; it presents the complete distribution of material used for the creation of the piece (Sciarrino 1992). The edition (214 pages) includes the traditional score for four voices along with tape, as well as the complete computer score made using MUSIC 5. The synthetic sounds have been written in traditional notation by indicating the approximate intonation of objects made with filtered white noise. This was based on Sciarrino’s diagrams made after his period of study at the CSC. It was he who notated every parameter of the filters. This kind of publication ensures the conservation of data, which is intended not only for future performances of the work using new software, but also for possible analytical work.
Conclusions Alvise Vidolin’s experience sheds light on the emergence of the computer music designer. He was the first to articulate the roles and responsibilities of this professional musician in both the compositional and performance phases of the creative process. Vidolin has always taken good care to preserve and transmit his work. His numerous articles and workshops dedicated to Nono and his music, and his contribution to the edition of Sciarrino’s Perseo e Andromeda score, are two examples. As a performer, he suggests that computer music designers must carefully consider the succession of several performance environments during a concert: This must be taken into account when choosing the equipment and when planning the environments. For example, the transition from one environment to another must be instantaneous and should not cause any disturbance, and the performer controls must be organized in such a way that the change is reduced to a minimum. (Vidolin 1997, 444) According to him, the training of a computer music designer demands a high degree of personal engagement and must be rigorous. Learning cannot be by intuition or imitation, as is the usual practice. The interpreter must, instead, develop a learning capacity based on the analytical nature of the technologies and signal processing techniques and must
be able to pass rapidly from a variety of situations and configurations to others that are very different, which may also occur even in a single piece. (Vidolin 1997, 445) Performing abilities must match technological and analytical capabilities. Furthermore, the computer music designer must remain alert to the operational differences between the studio and live performance contexts (Vidolin 1997, 447). To that end, he has drawn up five tables that relate sound-processing techniques (with operational information) to perceptual and performative results, with detailed suggestions for performers (Vidolin 1997, 446–49). At the time (1997), the diagrams could be seen as a kind of ‘basic training’ for computer music designers, and to this day the tables remain valuable information for scholars interested in digital music at the turn of the century. This emerging profession must also take into account the problem of obsolescence. ‘Performance environments, unlike musical scores, are subject to periodic changes in that many devices become obsolete and must be replaced by new equipment that is conceptually similar to the previous system but which is operated differently’ (Vidolin 1997, 445). Computer music designers should therefore be active writers, not only to document and keep track of their work in papers, interviews and essays, but more importantly to provide records of procedures and outcomes as technology continues to migrate.
Alvise Vidolin interviewed by Laura Zattra
• 27 July 1999, CSC in Padova, Italy (transcribed in L. Zattra, Da Teresa Rampazzi al Centro di Sonologia Computazionale (CSC), Master’s Degree, Università degli studi di Padova, 2000, 137–43).
• 1 June 2009, his home, Padova.
• 22 February 2011, Conservatorio Cesare Pollini, Padova.
• 12 April 2013, CSC-Sound and Music Computing Group, Department of Information Engineering, Università degli studi di Padova.
• 25 September 2015, via Skype.
Notes 1 For an annotated bibliography of Vidolin’s writings from 1975 up to 2008, see Zattra (2009). 2 Strongly supported by Mario Messinis, director of the Music Sector of the Venice Biennale at the time, the LIMB, headed by Alvise Vidolin and from 1983 co-led by Roberto Doati, operated for six years, from 1980 to 1986, with other scattered activities in 1989 (concerts and workshops with Sylvano Bussotti and Walter Prati), 1993 (workshop on the musical praxis of Luigi Nono) and 1995 (concerts ‘L'ora di là dal tempo’ at the 46th Art Exhibition for the 100th anniversary of the Biennale). 3 Tragedia dell’ascolto means ‘the tragedy of listening’ and refers to the Greek notion of tragedy, the fate of Prometheus, as well as the small flame of faith in learning and listening that Nono believed in. Ascolta! (Listen!) is the ultimate and only hope for understanding.
4 In April 2015 Vidolin and musicologist Veniero Rizzardi presented the complete rendition of Luigi Nono’s pieces realised at the Studio di Fonologia della RAI di Milano during the 1960s, in collaboration with the centre of musical research Angelica in Bologna (www.aaa-angelica.com/aaa/angelica-progetto-nono). The aesthetic and historical research on Nono’s works by Rizzardi and Vidolin began in 2011 with the performance of A floresta é jovem e cheja de vida (1965–66) for the 55th Edition of the Festival Internazionale di Musica Contemporanea of the Venice Biennale. 5 The concepts of the poietic and aesthetic sides of the work are borrowed from the well-known musical semiology developed by Nattiez (1975). 6 Plessas and Boutard (2015) have recently defined the computer music designer as the person who assists in the compositional process and the live electronics musician as the person who assists with the performance of the completed composition. 7 Indeed, the very name of this emerging profession is far from settled. IRCAM in Paris is the first institution to reflect on both its specific function and naming (Zattra 2013). 8 This group started forming in the late 1970s and consisted of numerous musicians (Roberto Fabbriciani, flute; Ciro Scarponi, clarinet; Giancarlo Schiaffini, tuba; Susanne Otto, contralto; among others), technicians (Bernd Noll, Andreas Breitscheid) and sound engineers (Hans Peter Haller, Rudolf Strauss and Alvise Vidolin) (Zattra et al., 2011). 9 Vidolin’s testimony and historical documents confirm that Nono had visited John Chowning at the Center for Computer Research in Music and Acoustics in Stanford and had met Charles Dodge; he liked what computer music offered, as well as its infinite possibilities, and contemplated a period of study at CCRMA. In the end, he decided to go to the Strobel Foundation instead, for he could work there with delays too, with live electronics techniques and new sounds. (1 June 2009; Nono 1987c, 550). 10 Archivio Luigi Nono in Venice preserves several tapes (now digitised) with those audio sketches. 11 I have looked at two handwritten scores of Prometeo for this research. The first is held at Archivio Luigi Nono in Venice; it shows the part of the 4i sounds at the bottom of the score, but the staff is empty (this is the score of the 1985 version, Milan). The second is held at the Archivio Storico delle Arti Contemporanee (ASAC) in Venice; it shows some details about the use of the 4i, but it is also incomplete. I intend to undertake further research at the Paul Sacher Foundation in Basel, where another handwritten score is preserved. I hope this will allow me to get more information on the use of the real-time digital-sound processor. 12 For more on this, see articles by Vidolin (1992) and Cecchinato (1998). 13 Similar concepts are evoked in Vidolin (1997, 454). 14 Usually, computer music centres show composers the state of musical research and what computer technology is capable of, before starting the real creative process. Sciarrino went to the CSC with a very precise compositional concept, and the creative process started there. 15 MUSIC 5 was a program developed by Max Mathews at Bell Telephone Laboratories in the early 1960s. CSC had a copy of the program and made a personalised version. 16 The same concepts are echoed in Vidolin (1997, 442). 17 Vidolin narrated the same story in a brief but very informative article (Vidolin 2002). 18 The same concepts are echoed in Vidolin (1997, 456).
5 Instrumentalists on solo works with live electronics: Towards a contemporary form of chamber music?
François-Xavier Féron and Guillaume Boutard
Introduction
Musique mixte has spawned multiple conceptual and aesthetic forms. The first version of Musica su due dimensioni (1952) by Bruno Maderna, usually considered the first example of this type of music, confronted two instrumentalists (a flutist and a percussionist) with a tape. This work introduced a new paradigm into contemporary composition, one that has since developed over the years and is now widespread. Our intention is to question the nature of the confrontation between instrumentalists and electronics within the technological and social context of performance. Tiffon (2005b) distinguishes three potential configurations: in the first, the electronic segment is produced before the performance (and fixed on a medium); in the second, the electronic segment is created in real time during the performance; and in the third, the previous two configurations are combined. Nowadays, the last configuration is the most common. Our study engages with live electronics and, specifically, solo works.1 Our goal is to investigate the conceptualisation of this interaction from the point of view of professional instrumentalists. This conceptualisation relates not to specific works but to a broader view, grounded in the experience these experts have accumulated over their careers. Thus, we aim to provide a better understanding of the theoretical and practical impact of this repertoire on the musical practice of instrumentalists, from a piece's preparation to its performance in concert. A previously unheeded question emerged from our study: are solo pieces with live electronics a new archetypal form of chamber music?
Interviews and participants
Using semi-structured face-to-face interviews, we systematically collected the discourse of instrumentalists pertaining to selected topics. Semi-structured interviews require a set of predetermined questions. Nonetheless, the interviewer may choose to further explore points raised by the interviewee, taking particular care not to influence her/him (see Kvale and Brinkmann 2009 for considerations on the interview as a data collection
technique). In a different context and with a different purpose, Rink (2011, 436) stated that,
for too long there has been an implicit assumption within musicology that scholars have the upper hand in matters of knowledge and judgement, and that performers who do not seek out and eagerly assimilate the findings of scholarship in their interpretations run the risk of shallow, meaningless music-making which serves them as individuals rather than some higher ideal. Such a view is untenable and should be laid to rest once and for all.
Similarly, we argue that the performers' ability to verbalise their own expertise in the context of music with live electronics is truly relevant. Clarke (2004) details evaluative and qualitative methods, stemming from psychotherapy, for the study of performance, in which performers are able to discuss their practice. In this context, the data collected comprise performance recordings as well as a posteriori comments. We have sought to detach our research from any specific work in order to question instrumentalists' practice in relation to their perspectives on musical genre. Our specific goal was to collect their statements, 'asking questions and encouraging them to tell their own stories of their lived world' (Kvale and Brinkmann 2009, 48). The output of this method is an analytical grid grounded in the systematically collected data, as opposed to a use of the collected statements to validate an existing theory. The methodological background and epistemological assumptions are detailed in Boutard and Féron (2017). The analytical process in grounded theory (see Strauss and Corbin 1998 for a thorough explanation of the analysis process) is long and meticulous. It relies primarily on a continuous process and on the diversity of the data. Through this process, the conceptual framework emerges and stabilises. This research is qualitative rather than quantitative in nature (i.e. based on stabilisation rather than on the number of word occurrences). We conducted interviews with twelve instrumentalists from multiple backgrounds. They are identified with the code I-xx (where xx is the identification number). The interviewees included two saxophonists (I-03 and I-09), two flutists (I-01 and I-08), one trombonist (I-02), one trumpeter (I-04), two cellists (I-05 and I-12), three clarinettists (I-07, I-10 and I-11) and one percussionist (I-06). Several instrumentalists have played music with live electronics since the end of the 1970s. Some younger participants with relevant professional experience were also included in the survey, thus providing greater data variety in terms of practice and social context. Figure 5.1 shows the broad distribution of experience among the interviewees, most of whom (eleven out of twelve) also have teaching experience. Seven out of twelve began before the year 2000. Five interviews took place in Quebec and seven in France; all of them were conducted in French, the researchers' native language.
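As a brief illustrative aside (not part of the original study), the bookkeeping behind this kind of grounded-theory analysis can be pictured in a few lines of Python: segmented quotes are tagged with an interviewee code and with emergent categories, then grouped to observe how categories stabilise. All identifiers and assignments below are invented placeholders; the category labels anticipate the three main categories presented in the next section.

```python
from collections import defaultdict

# Invented placeholder records: each segmented quote carries an
# interviewee code (I-xx), a quote number (Q-xx) and the categories
# assigned to it during coding. The assignments are illustrative only.
quotes = [
    {"interviewee": "I-01", "quote_id": "Q-001",
     "categories": ["production context"]},
    {"interviewee": "I-02", "quote_id": "Q-002",
     "categories": ["organisation of musical practice"]},
    {"interviewee": "I-01", "quote_id": "Q-003",
     "categories": ["production context", "musique mixte conceptualisation"]},
]

# Group quote numbers by category: the kind of view an analyst consults
# to watch a category stabilise as more interviews are coded.
by_category = defaultdict(list)
for quote in quotes:
    for category in quote["categories"]:
        by_category[category].append(quote["quote_id"])

print(dict(by_category))
```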
Figure 5.1 Distribution of the interviewees in terms of their first experience of musique mixte.
The questions, originally in French, were the following:
1 I would like to ask you to describe the similarities and differences that arise in preparing a musique mixte piece with live electronics, on the one hand, and a purely instrumental piece, on the other.
2 Over the course of your experience with this repertoire, I would like you to describe the systematic work methods you have adopted.
3 In the preparation stages, how invested are you in the electronic component (in terms of time and sound exploration)?
4 In the best-case scenario, what type of documentation pertaining to the technological setup and electronic effects do you have access to?
5 Globally, with regard to the available documentation, what do you feel might be lacking, and what elements do you feel it would be wise to include?
6 What evidence of the electronic segment is visible in the score?
7 As an instrumental performer, does preparing musique mixte pieces with live electronics require any skills that might not be implied in instrumental music?
8 When preparing a piece, what type of outside experts do you call upon?
9 I would like you to describe any further problems you might have faced in preparing musique mixte pieces that have thus far gone unmentioned.
10 Do you believe that the performance of musique mixte has the potential to become a field of study in professional musical scholarship?
Categories
We collected a large amount of data from these interviews: approximately 68,000 words, which we then segmented into roughly 1,300 quotes.2 Before we begin a broad presentation of the categories that emerged from our analysis, we must define a few terms used in this chapter. 'Agents' represent the broad range of human and technological entities participating in an activity, that is to say, in our context, the production and creation of musique mixte. The range of agents in this definition matches the use of the term in Laura Zattra's (2006a) chapter on the context of electroacoustic music. 'Actors' are defined as both human and institutional 'agents'. They include composers, universities, music schools and performers but exclude technological entities. 'Performers' are human 'actors' participating in the live production of works. Performers are engaged in musical practice according to their domain of expertise. Musical practice 'consists of a variety of different but interrelated activities including memorisation, the development of technical expertise and, ultimately, the formulation of interpretations' (Reid 2002, 102). Performers include instrumentalists as well as sound engineers and live electronic musicians. The term 'instrumentalist' refers to the human agent who plays a 'traditional' musical instrument.3 The term 'live electronic musician' designates the human agent in charge of electronics during the performance (Plessas and Boutard 2015).4 Figure 5.2 presents a broad description of the relations among the various human 'actors' in the context of musique mixte, including the listener. The analysis of the data collected provides us with a conceptual framework consisting of three broad categories divided into several levels of subcategories (for an example of such a framework, see Boutard and Guastavino 2012). A detailed presentation of these categories (grounded in the verbalisations of the participants) would be too lengthy a task for this chapter; we will present only the highest levels of the conceptual framework, those that support the discussion. The three main categories to emerge from the analysis are the following: production context, organisation of musical practice and musique mixte conceptualisation.
Figure 5.2 Schematic depiction of the social interaction in musique mixte. Source: Adapted from Boutard and Féron, 2017.
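For readers who find a schematic summary useful, the containment relations among the terms defined above can be sketched as a minimal type hierarchy. The Python rendering below is purely illustrative (the rendering, not the terminology, is ours), a paraphrase of the definitions rather than part of the analytical framework itself.

```python
class Agent:
    """Human or technological entity participating in the production
    and creation of musique mixte (e.g. a composer, or a microphone)."""

class Actor(Agent):
    """Human or institutional agent: composers, universities, music
    schools and performers; technological entities are excluded."""

class Performer(Actor):
    """Human actor participating in the live production of works."""

class Instrumentalist(Performer):
    """Plays a 'traditional' musical instrument."""

class LiveElectronicMusician(Performer):
    """In charge of the electronics during the performance."""

class SoundEngineer(Performer):
    """Manages amplification, balance and sound projection."""
```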
Production context
The production context includes all factors related to the management of both human and technological resources at any given time during the production process, from the commission to the concert. To be specific, several agents are necessary to production, spanning from the broadest (e.g. institutions such as music research centres) to the smallest units (e.g. microphones and speakers) via human actors. For example, I-01 emphasises the importance of the microphone: 'I think that the most important thing […] is to learn a few things about microphones and amplification […] If we have good knowledge of microphones, specifically our own, […] it becomes part of our instrument' (Q-107). As a matter of fact, Haller (1999b, 12) states that 'the function of the microphone in real time electronic sound transformation is completely different from that of normal studio operation: it represents a quasi-instrument of electronic sound extension, with which the interpreter can play'. The complexity of agencies in the context of music production with technologies has long been emphasised in the literature, for example, in the work of Menger and Cullinane (1989). The production context of musique mixte also encompasses the management of its various stages. As I-11 states:
often, in a piece without electronics, we come in when the piece is finished. That is, when the composer is done writing, at which time he brings us the finished product [the score]. However, when we work with an electronic component, the writing takes place in stages. (Q-1097)
This specific context impacts the work strategies described in the following section on the organisation of musical practice. Additionally, as many instrumentalists highlighted, it affects the involvement of actors and the musical and technological skills required. 'Sometimes, in order to handle the difficult cues, you need to develop a specific work technique' (Q-166), recalls I-02. Some instrumentalists improve their technological knowledge; for instance, I-01 explains: 'Since I have become more familiar with Max/MSP, I feel a lot more comfortable; when something is not working, or if I have questions, I can take a look for myself' (Q-110). Not all participants share this situation: 'personally', says I-12, 'I have no idea how a Max patch works. I know more or less how it works but I am totally incapable of running it' (Q-1256). Another important aspect of this context is the legibility and comprehensiveness of the documentation pertaining to the electronics (for the record, this point is explicitly raised in the questions). This topic, especially as it relates to the score, has long been discussed in the literature (see, for example, Stroppa 1984; Manoury 1999; Emmerson 2006b). Nonetheless, from the instrumentalists' point of view, opinions on this topic are manifold. Most interviewees pointed to potential deficiencies in the electronic score. I-12 states that 'sometimes, but not always, there is a line that represents the electronic segment, but it is rarely very illustrative' (Q-1233), further describing
the electronic notation as often 'exotic'. The same participant nevertheless suggests that this is not a priority among the inherent difficulties of playing with live electronics.
Organisation of musical practice
In the following statement, Dunsby (2002, 233) accurately describes the complexity of the organisation of musical practice:
actual performance is the tip of an iceberg of performer's practice and rehearsal, which in countless different ways is the "analytical" level of music-making, the time when everything is put in place mentally and physically for the on-stage "calculation" that has but one opportunity to be right.
In our analysis, the organisation of musical practice relates to the description of all activities necessary to the production of the works and, at the collective level, to the multiple interactions between the various actors, most especially those between performers, which include collaboration, supervision and delegation. I-02 mentions that 'a skilful person is needed at the computer, to go either forward or backwards, if something has gone wrong' (Q-216). Activities that pertain to the transfer of knowledge, at multiple levels and among multiple actors, also belong to this category. For example, I-11 is often sought out for his expertise on a specific work that he premiered and recorded. He further states: 'somehow we are the works' ambassadors' (Q-1075). An important aspect that I-11 brought up, in terms of knowledge transfer, relates to annotation: 'we note the electronic elements on our own score as we go along because they are not in there' (Q-1123). Some participants place the emphasis on oral transmission, which is not limited to the relationship between the composer and the instrumentalist. As Berweck (2012, 209) noted, 'performers must acquire the skills to communicate their needs to the colleagues with whom they perform – the stage technicians and the sound engineers'. An important part of this category is the conceptualisation of the multiple appropriation strategies developed by instrumentalists in the context of solo works with live electronics. Two subcategories emerged. The first relates to various kinds of adaptation, including the aforementioned annotation activity but also adaptations to the production context, especially with regard to the technology. An amusing example is provided by I-12, concerning a piece requiring many pedal cues: 'I remember that I worked at home with a hole punch that I put on the floor, just to have the feeling of pressing something with my foot at the right time' (Q-1195). The second relates to investigation strategies, including sound investigation, which featured prominently in the instrumentalists' statements. I-07 describes his strategy in relation to live electronics: 'often, we refine the dynamics, especially in the pianissimo range, to see what happens when we strengthen them' (Q-751). Beyond sound perception, these investigations are of particular importance in the process of apperception, that is to say, according to Vermersch
(2011, 188), the 'mental act through which the person becomes aware of his/her own representations'. It is important to figure out the piece's original sound intention, or aesthetic. As I-03 underscores, 'the instrumentalist needs […] to be aware of the type of transformation at that specific point' (Q-279).
Musique mixte: conceptualisation
This category brings together all of the theoretical reflections on musique mixte from the instrumentalists' point of view. The interviews spawned more abstract and context-independent enquiries about the nature of musique mixte and performance. These reflections bring into question the relation to electronics and the specificities of the repertoire. I-02 highlights the causal link: 'the processing is live, the result depends on what I do' (Q-157). I-05 describes this relationship as a symbiosis between instrumental sound and live electronics. He states that, in some cases, 'one does not exist without the other […] This is true musique mixte, it is almost hybrid' (Q-591). How one relates to space, as well as to the instrument, is greatly modified. I-07, for instance, tries to convey to students that 'the fact that we work with speakers, with live sound processing, all of a sudden, […] the room disappears, everything is transformed' (Q-753). Similarly, I-05 describes the implications of live processing for the virtual extension of instrumental capabilities:
The electronic element supplements the instrumentalist such that he becomes a super-performer. It allows him to play faster, higher, lower, all those things he cannot do because no matter what, he is limited, even if he is the best in the world. (Q-586)
On a larger scale, the different perspectives of these reflections match Ungeheuer's conceptualisation of live electronic music, which she organises into three stages or types: (1) the opposition between human and machine actions (which relates, historically, to fixed media) as a dramaturgic concept; (2) the instrument's time-space transgression, in which 'the scope of the possibilities of action of the instrumentalist transgresses the usual frames of time and space' (2013, 1369) and (3) the interaction, which 'starts where the gesture of the instrumentalist ceases to trigger mechanical processes' (2013, 1372). The latter relates to the notion of chamber music and to the fluid conceptualisation of the partnership in the context of solo works with live electronics, which is the focus of the next section.
A contemporary form of chamber music: primary characteristics
In order to establish a meaningful relationship with electronics, the instrumentalists must collaborate closely with their partners, who need to have
great expertise in the fields of computer music and sound engineering. Most of the instrumentalists we interviewed spontaneously described solo pieces with live electronics as chamber music. An important issue that emerged in this study has to do with the question of whether this description involves human collaboration or human-machine interaction. According to I-08, chamber music relies on the interaction 'with both the electronic segment and the person with whom we work' (Q-900). Once you have discovered the instrumental score, 'it is useless to work alone', explains I-03 (Q-273). I-02 also underlines the need to 'rehearse with electronics as you [would] rehearse with a piano, just like chamber music' (Q-199). According to I-07, 'you are not working on a solo piece, you are really working in a chamber music context' (Q-738). I-12 goes so far as to compare the situation to that of a string quartet, usually considered the paragon of chamber music. This participant argues that, during rehearsals or concerts, 'we will never play the same way twice and we will all interact with each other' (Q-1222). Playing in a string quartet and playing a solo piece with live electronics both require collaboration between actors, but in very different production contexts. A string quartet is a well-defined and fixed ensemble of four instrumentalists who share a common background in terms of education and profession. They are literally colleagues, since they play instruments belonging to the same family. They also share the same vocabulary and face similar issues. Solo pieces with live electronics raise several questions: how many partners are required? Does this number include the live electronic musician, the sound engineer and so on? What is expected of them, and how do they interact? Do instrumentalists have shared expectations concerning the role of their partners? Before getting to the heart of the matter and discussing the relevance of the term 'chamber music', we will provide a brief overview of the literature on chamber music. In current usage, a chamber music piece is defined as a composition traditionally written for a small instrumental ensemble with no more than one player to a part; it is historically intended for performance in a private room or small concert hall before an audience of limited size (Bashford 2001). Three main aspects seem to determine what a chamber music piece is: the number of musicians, the notion of interaction between the instrumental parts and, finally, the types of room where this repertoire should be performed. The usual definition of chamber music, in terms of the number of musicians, is relatively constant and stable but remains arbitrary. Traditionally, chamber music is performed by a small group of players, between two and ten in number (Tranchefort 1989, vii). From our perspective, the maximum number of musicians is not the issue. In fact, the question is rather: may a solo piece (with live electronics or not) be considered a chamber music piece? Several authors have examined this issue, and answers tend to differ. Although it is often defined as involving more than two players, 'much solo repertory such as Renaissance lute music, Bach's violin sonatas and partitas and cello suites and several of Beethoven's piano sonatas fulfils many of the
functions and conditions of chamber music' (Bashford 2001, 434). Tranchefort (1989) ignores the repertoire for solo piano but includes all pieces written for solo strings or winds, explaining that this seemingly arbitrary choice is in fact a historical artefact. McCalla (1996) notes that chamber music is performed by a group of players normally numbering between two and nine. Nevertheless, he includes solo percussion pieces such as Stockhausen's Zyklus and mentions a few examples of solo pieces with tape. As for Baron (2010, xv), he considers chamber music to be classical European instrumental ensemble music for anywhere from two to approximately twelve performers. He does not include 'music for two or more keyboard instruments without additional, non-keyboard instruments, and percussion music'. From these multiple and conflicting points of view emerges a subjective definition of chamber music, which may or may not include solo pieces. From this perspective, it is legitimate to consider a solo piece with live electronics as belonging to this genre. Another aspect that usually characterises a chamber music piece is interaction. 'In essence, the term implies intimate, carefully constructed music, written and played for its own sake; and one of the most important elements in chamber music is the social and musical pleasure for musicians of playing together' (Bashford 2001, 434). In a solo instrumental piece, such as one of Bach's violin sonatas, the instrumentalist need not interact with other performers. S/he is responsible for the sound production as a whole. In a solo piece with live electronics, the instrumentalist must work with an individual or a group of people in charge of diffusing the electronic segment. There is an interaction in terms of sound between the instrumental and electronic segments; consequently, there is also a social and musical interaction with other agents. Finally, the term 'chamber music' indicates the type of room where this music should be played. This repertoire emphasises subtlety and intimacy: it was not originally meant to be performed in symphony halls or opera houses but rather in 'a domestic environment with or without listeners, or in public in a small concert hall before an audience of limited size' (Bashford 2001, 434).5 This architectural aspect is also present in the case of solo works with live electronics. Such pieces require the installation of a substantial technical setup in the concert room and may transform the relation to space. The electronics aim to create virtual spaces that have to be perceived homogeneously throughout the room; in most cases, however, the solo instrument is an acoustic one. In order to respect the natural instrumental sound and the localisation of the source, engineers try not to amplify it too much. For these different reasons, this repertoire should be played in rooms of limited size. How do the instrumentalists we interviewed approach these three aspects? What arguments come up when they spontaneously describe the performance of solo pieces with live electronics in terms of chamber music? What is the best framework for practising and performing such a repertoire?
Sound interaction
Before focusing on the number, skills or roles of the partners working with instrumentalists during rehearsals and concerts, let us examine the interaction between the instrumental and electronic segments. How do the instrumentalists understand this interaction? What are the similarities and differences in comparison to a traditional piece of chamber music? Even though it is now common to consider the instrumental and electronic segments as two parts of the same whole, this is not necessarily reflected in the preparation of the performance. It is not always possible for the instrumentalists to work with the electronics right from the beginning. I-11 underlines the importance of understanding the musical language of the instrumental segment by deciphering the score and also states 'if you are lucky enough to work directly with electronics, it is better'. The correlation between the instrumental and electronic segments implies that 'there is a constant back and forth between what I put forth and what the electronics do', continues I-11 (Q-1106). I-03 holds a similar view. In any kind of musique mixte (with tape or live processing), the instrumentalist must adapt to 'working with another partner, with a sound generator other than a traditional music instrument' (Q-377). Musique mixte is often described in terms of dialogue, confrontation or symbiosis between the instrumental segment on the one hand and the electronic segment on the other. I-11 likens the electronic segment to a second person:
Thus, it is similar to two strangers. They will get to know each other; it is once we are properly acquainted that the piece is ready to be performed in public. It is exactly the same as any relationship with another instrumentalist. (Q-1121)
The same participant also argues that this is a form of chamber music, since there is a need to listen carefully to the sound diffused by the speakers. This is not another building block – 'a second Lego' (Q-1122) – added by the composer to the instrumental segment. I-07 compares the context to the performance of a piano trio, in which you have 'to find a sound space' (Q-735). For most of the instrumentalists, it is critical to really play with the electronics, to develop, according to I-09, a 'form of complicity with the speakers […] as it is with a pianist' (Q-955). I-07 points out the importance of modulating the performance, 'to vary the intensities, the articulation and the tempo' (Q-748), and of observing the outcome with respect to the electronics. In comparison to traditional chamber pieces, this causal relationship is often stronger. A musical pattern can be memorised by the computer and replayed with a delay in order to build a canon with the live instrumentalist (a minimal sketch of this delay principle follows at the end of this section). In this common case, preparatory tasks are similar to the work done with another musician, but
'to play with oneself […] is really a new chamber music experience' (Q-1005), specifies I-10. 'You have to work as if you were in the presence of other instrumentalists', insists I-12, before concluding: 'It is the rehearsal process of chamber music, but with electronics' (Q-1211). The symbiosis with the electronics is fundamental to most pieces of musique mixte. Nevertheless, the instrumentalist must sometimes be content to focus on the instrumental segment and leave the control of the electronics to the other human actors of the performance. 'In some cases, it is better to play almost as though one were autistic, paying no mind to the electronics', explains I-12, who makes an explicit comparison to an orchestral ensemble: 'we play the score, we follow the lead of the conductor, we trust the conductor' (Q-1218). I-05 remarks that, for pieces belonging to certain specific aesthetic categories, such as Boulez's music, 'the role of the instrumentalist is strictly defined, or even confined to a precise definition' (Q-585). In these particular cases, the degree of interaction is limited. In general, instrumentalists underscore the interaction they need to have with the electronics and compare this experience to that of chamber music. Nevertheless, the identification of the partners varies significantly from one instrumentalist to another and from one context to another. The partner may be associated with the electronic segment (in terms of sound), with devices (such as the laptop, speakers, microphones and so on) or with the human actors in charge of electronics during the performance.
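As promised above, here is a deliberately simplified sketch of the delay-based canon, given in Python under our own assumptions (offline array processing; the function and parameter names are invented for illustration). In actual performance, such effects run in real-time environments such as Max/MSP rather than on pre-recorded arrays; the sketch shows only the core idea of mixing the live signal with a delayed copy of itself.

```python
import numpy as np

def delay_canon(live, sample_rate=44100, delay_seconds=2.0, wet=0.7):
    """Mix a mono signal with a delayed copy of itself: the simplest
    form of the 'canon with oneself' effect described above."""
    delay_samples = int(delay_seconds * sample_rate)
    delayed = np.zeros_like(live)
    if delay_samples < len(live):
        # Shift the signal forward in time by the delay.
        delayed[delay_samples:] = live[:-delay_samples]
    return live + wet * delayed

# A two-second sine-tone 'phrase' followed by silence, so the delayed
# entry is heard imitating the live one, as in a canon.
sr = 44100
t = np.arange(4 * sr) / sr
phrase = np.where(t < 2.0, 0.3 * np.sin(2 * np.pi * 440.0 * t), 0.0)
output = delay_canon(phrase, sample_rate=sr, delay_seconds=2.0)
```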
Social interaction
In this section, we will discuss the social interactions taking place during the multiple phases of the production of this repertoire. How many partners does the instrumentalist need to perform a solo piece with live electronics? What role does each human actor play? Can these relationships be compared to those in the context of instrumental chamber music?
Confidence among performers
We are alone on stage, but in the case of a piece with electronics, we are not even alone. At the very least, we work as a duo and I find that very pleasant. According to me, it is similar to chamber music. (Q-1284)
These words, coming from I-12, are quite explicit: although the word 'solo' is generally used to describe a piece for one instrument with live electronics, it is obviously not a solo piece. Even if the instrumentalist is alone on stage, s/he is not single-handedly responsible for the sound. S/he performs the music alongside her/his partners in charge of electronics. I-07 insists that when performing musique mixte 'you are not alone'; the people 'who are playing with us, are generally present in the performance hall' (Q-761 and 762).
Performing a solo piece with live electronics implies working with other actors you can rely on. 'Confidence' is a word instrumentalists often employ to describe their relationship with the other performers. If possible, I-12 prefers to work with the same people at every concert: 'I know them; they know how I work. The partnership, the trust, is the same as with an instrumentalist' (Q-1290). During their education, musicians learn to be part of a group, whether it is an orchestra or a chamber ensemble. As a consequence, they learn to rely upon each other, observes I-04. In musique mixte, the same situation applies, but the configuration is quite different: the instrumentalist, who is on stage, often has to work with partners positioned behind a console placed in the audience. I-07 acknowledges that the instrumentalist is usually 'totally incompetent when it comes to operating the technical equipment' and must therefore trust the partners (Q-784). I-12 delegates all technical aspects to them (Q-1204). Furthermore, the instrumentalist cannot control the sound balance in the audience: 'You have to relinquish the idea of absolute control over the music you are producing', I-03 understandably remarks (Q-247). The performers in charge of electronics have to make sure that the technical setup works properly. They need to constantly adjust the electronic segment with regard to the instrumental sound. The instrumentalist then 'begins to understand the electroacoustic phenomena that occur around him and it will completely change his work', explains I-04 (Q-403). According to this participant, this specificity of musique mixte may seem slightly disturbing at the beginning: 'You must trust the sound engineer. Quite simply, you cannot manage your own sound in the mix with a real-time treatment. You are compelled to place your entire trust in the person who will be in charge of this balance' (Q-394 and 395). I-07 observes the same thing, explaining that the instrumentalist needs people in the audience to provide some of the feedback to her/him (Q-761).
Team configuration
In Figure 5.2, we have simplified the social interaction between the human actors during the composition and performance of a solo piece with live electronics. This schematic representation is not static. The instrumentalist may collaborate with the composer and the computer music designer during the creative process: as noted above, I-11 observes that when you work on the premiere of a piece with live electronics, as opposed to an instrumental piece, you are not in the presence of the final product (Q-1097). Most of the instrumentalists we interviewed have been involved in premieres and are accustomed to working with the composer. In the case of a solo instrumental piece, the only interaction is between the instrumentalist and the composer. In the case of a solo musique mixte piece, the electronic segment is generally supervised by the composer, the computer music designer – who
puts on the live electronic musician's hat – and the sound engineer. It is not a 'dialogue' but a 'quadrilogue' (Q-1137), insists I-11, before concluding: 'according to me, it is a quartet, even if I am alone on stage. Nevertheless, I carry approximately twenty-five per cent of the work' (Q-1134). I-12 considers the composer the main piece of the puzzle: s/he should have a powerful creative idea and make use of the skills of the instrumentalist, the computer music designer and the sound engineer (Q-1269). Ideally, s/he should always be present to avoid deviations during the performance, explains I-03 (Q-340). In fact, 'it is quite rare that composers are not present' in the specific case of this repertoire (Q-276). Once the piece has been performed several times in concert, the composer's presence is no longer required, but the patch must be available and up to date. Thus, the team may be reduced. According to I-07 and I-02, you have to work with a sound engineer who is responsible for the entire audio setup (speakers, pedals…) and a person 'who follows the music (the computer music designer or the composer) at the very least' (Q-169). The social context has been profoundly transformed by the democratisation of technology and by training in the use of audio and real-time software in music schools. Indeed, many composers create the electronic segment themselves and may appear to be the instrumentalist's primary, even sole, partner during a performance. André Richard observes that, for the performance, composers come with the patch and are in charge of the electronics (Boutard 2013). If they are not on site, the piece cannot be performed. However, depending on the availability and reliability of the patch, it can happen that the instrumentalist works with just a live electronic musician, who is not necessarily the composer, notes I-12 (Q-1271).
Performers' roles and skills
Playing musique mixte requires a specific framework consisting of a complex technical setup managed by multiple human actors. I-03 compares this framework to the situation in dance and theatre productions: the rehearsal is not strictly musical in these cases, since 'you have to take into account a specific environment and its technical contingencies' (Q-263). During the first steps of the preparation, the instrumentalist and the technical staff each have to work on their respective parts. The tasks are clearly separated. I-05 insists: 'you need to know that each person has their own specific role' (Q-662). Each actor has to resolve technical issues before they can work together. There is no real collaboration during this stage, explains I-12:
just as the musical assistant or the composer does not come to look over my shoulder when I am playing scales, I am not behind his/her laptop when s/he is opening a file that has not been read in four years. (Q-1285)
Instrumentalists do not generally know much about software such as Max/MSP, which is often used in pieces with live electronics. This aspect is left to competent individuals. In addition, during the performance, the instrumentalist on stage cannot worry about the correct processing of the electronic segment, explains I-05: 'We are on stage. The show must go on. But it is very complicated' (Q-662). As previously mentioned, it sometimes happens that a single person with multiple skills oversees both the computer and the audio aspects (Q-482, Q-825 & 826, Q-1271). I-12 has often played solo pieces with live electronics with the same partner and underlines: 'I am playing as a duo in the same way I would play with a pianist', since his partner 'behaves like a musician, that is to say: who listens' (Q-1275). I-12 also notes that, when the patch is healthy and stable, the partners must above all be highly competent in sound engineering. They must be able to 'produce a specific sound in the room' (Q-1274). This consideration is shared by I-04, who explains that 'the sound engineer plays a very important part, almost as important as that of the performer' (Q-407). Instrumentalists insist on the fact that at least one of the partners must be a trained musician. 'You have to trust and respect him/her', explains I-03, before adding:
We are in the presence of a real musician and, if it is not the case, it can be dramatic. S/he should not limit her/himself to mere sound projection. S/he must perform. In the end, the context is similar to that of chamber music. (Q-248 & 249)
I-05 underlines the collective nature of the work and explains that 'there is a chamber music approach, in the classical sense of the term, that emerges among the partners' (Q-572). According to I-04, it is essential to know the partners well: 'For example, I cannot imagine playing a piece of musique mixte without knowing who will be carrying out the mixing' (Q-405). I-03 explains that one of the partners must act like a performer, choosing the microphones, positioning the speakers in the room and adjusting the sound in relation to the acoustics. If the electronic performer does not deliver a real interpretation, 'loyal to the score, but supplemented with an artistic sensibility', then the instrumentalist 'is screwed' (Q-353). Instrumentalists claim that pieces with live electronics require the expertise of a music technology partner, who would interpret the electronic segment just as the performer of an acoustic instrument interprets his or her part. Höller and Richard both share this type of assessment. According to the former, in the case of musique mixte with tape, instrumentalists have to familiarise themselves with the instrumental score and the tape simultaneously. If the composer is not present, 'a qualified musician must pilot the balance between the orchestra and the sounds on the tape' (Höller 26). The latter experienced surprising issues during the performance of an instrumental piece with live
electronics when the technicians or developers were not accomplished musicians. When he became artistic director of the Experimental Studio of the Heinrich-Strobel-Stiftung of the Südwestfunk (SWR, Freiburg im Breisgau) in 1989, Richard aimed to create a sort of small troupe to perform works with live electronics, 'like a string quartet'. This company was made up of very close collaborators with expert knowledge in the use of new media who were also musicians. 'According to me, these technicians were not mere technicians. They had strong technology skills but also had a very good grounding in music' (Boutard 2013, 28).
Room and set-up characteristics
Chamber music pieces were historically destined for performance in domestic environments, private in nature; solo pieces with live electronics also require appropriate rooms, but for different reasons. When preparing live electronic works, instrumentalists face a number of unconventional situations. Specifically, they must deal with both a technological and a social context that lie outside the normal scope of an acoustic music performance. This repertoire 'requires a certain constellation [in terms of equipment and human actors] we do not always have easy access to', says I-02 (Q-147), be it during the rehearsal process or the actual performance. According to I-11, many students would like to access this 'constellation' but cannot, due to its high cost (Q-1131). Before incorporating a piece of musique mixte into a recital, instrumentalists such as I-10 or I-12 first find out about the room where the concert will take place. I-12 includes these pieces whenever the technology allows it:
If I have the means to play pieces of musique mixte, if it is possible to have access to a sound engineer, a musical assistant and all the required set-up to make things work? Well, then I will do it. (Q-1168)
The room has to be configured to accommodate the audio setup, and this is not always the case. If it is possible to install the speakers and audio equipment, explains I-10, 'I try to add a piece [of musique mixte]' (Q-991). The technical environment means that this repertoire often relies on specific studios or concert halls. During the rehearsal process, instrumentalists must have access to audio-equipped studios, which are generally located in research centres or music schools with a department dedicated to acousmatic and electroacoustic music. Consequently, this repertoire is often closely tied to institutions that can provide instrumentalists with both technical and human logistics. For example, I-04 has worked mainly at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), where a sound engineer was always present to manage the setup (Q-431). I-08, who has worked at the Centre for Interdisciplinary Research
in Music Media and Technology (CIRMMT, McGill University), explains that musicians 'are pampered when they practice there' (Q-902). There are increasing numbers of music centres around the world with the relevant infrastructure and staff to perform music with live electronics, but instrumentalists do not yet have easy access to them. Working in such centres remains a privilege, explains I-11 (Q-1131). Unfortunately, I-02 notes, conditions in music schools are not yet optimal, as specific courses dedicated to musique mixte rarely exist (Q-221). I-08 regrets that students in sound engineering are trained to record concerts, not to perform musique mixte, 'which is completely different' (Q-911). Playing solo pieces with live electronics 'is like playing with another instrument', explains I-09,
except that we have more common references when we are working with a pianist, for example. When we are playing a piece with electronics, we usually begin work on this segment later in the process. There are many things to assimilate in order to be familiar with electronic effects, just as there are with a pianist, a flutist or anything else. (Q-925)
Access to a well-equipped studio and working with good partners are the first issues instrumentalists face. Rehearsing in the concert hall is another problem. 'The reality is that most concerts are held in a setting where the musician on stage will meet the house technician on the day of the concert' (Berweck 2012, 187). This assessment does not entirely apply when instrumentalists have the opportunity to collaborate with institutions devoted to the dissemination of musique mixte, though, as mentioned above, this situation remains an exception. Even when instrumentalists do have access to these agents, the amount of time spent rehearsing in the concert hall matters greatly. It is only in situ that the instrumentalist 'can truly grasp the sound aspect of the electronic segment', notes I-05 (Q-603). This repertoire 'truly requires rehearsals in the [concert] room, even if only for positioning microphones or testing speakers, both of which are very important', indicates I-09 (Q-936). According to I-07, the reverberation and foldback sensations on stage differ entirely from the sensations in the audience (Q-780). Generally, there are not enough practice sessions available in situ to properly adjust the interpretation to the room acoustics. Moreover, such sessions generally occur too late in the practice schedule. In the studio, the artistic conditions are similar to those of a concert, but on a different scale. 'We can recreate live conditions', explains I-09, 'but on a smaller scale. It is like playing in a fish bowl' (Q-934). Working in the studio is fundamental to becoming familiar with both the technological and the human agents. Once the technology is functioning, I-02 sometimes works alone in the studio (Q-207). A few of the musicians we interviewed try to have their own studio at home, where they can rehearse pieces of musique mixte autonomously.
However, most of them cannot work at home, as they do not possess the necessary equipment, notes I-04 (Q-432). I-11, an exception in this regard, has set up a studio at home but remains restricted by the cost of the equipment:
I have created my own studio at home […] I prepared it carefully because ideally I would like a tiny IRCAM at home, but it is not that easy. It requires at least one mixing desk with a dedicated laptop, if not two, because the files are very large. You need two or even four loudspeakers. This equipment has a price, but it is possible. […] I am still hoping to make it happen. (Q-1155–1156)
If we set the issue of money aside, it is possible today to recreate work situations at home similar to those of the studio, underlines I-05, but 'it is still difficult to rehearse alone at home with adequate listening conditions if nobody is there to help' (Q-543). The question of the musical partner remains relevant.
Conclusion
In this chapter, we have investigated the expertise of instrumentalists who perform musique mixte as it relates to acts of apperception, appropriation and interaction with live electronics. Ravet (2005, 5) reminds us that 'if interpretation is considered performance, in that it is a process of (re)creation in acts that is supported/initiated/driven by musical bodies, then it would seem necessary to embrace sociological and musicological perspectives. […] Both of the interested disciplines, musicology and sociology, are examined here with regard to their methods and potential points of convergence and cooperation'. With a qualitative analysis grounded in the instrumentalists' discourse, we have provided a conceptualisation of the multiple factors in play during the production process of solo works of musique mixte with live electronics. Several notions that help us understand performance from a musicological perspective have emerged from this conceptualisation. The complexity of the relationship between partners as it relates to the act of appropriation, and the convergence of this type of contemporary music with chamber music, are two of the notions that broaden our understanding of the genre. A chamber music piece is defined as a composition written for a small instrumental ensemble with one player to each part. Interaction, in sonic and human terms, is one of its main characteristics. Although solo instrumental pieces do not involve the same interaction present in traditional chamber music, they might nevertheless be considered as belonging to the chamber music repertoire. Combining acoustic instruments with tape or with live electronics has taken this repertoire in a new direction. Such pieces are decidedly not viewed as solo pieces. They invite instrumentalists to work with
different partners who will be in charge of the electronics. The instrumentalists we interviewed seem unanimous in comparing the performance of a solo piece with live electronics to that of an instrumental chamber music piece. They must develop a close relationship with the electronic segment in order to refine the sound interaction between the electronics and the instrumental part. This requires collaborating with at least one partner, who will have his/her own musical sensibility and knowledge. The main differences from instrumental chamber music pieces lie in the production context. Until the 1970s, few concert halls had adequate playback and amplification equipment to perform music with tape. 'A performer would find it difficult to cart around equipment of his or her own, and the cost of renting it for a single performance might be prohibitive' (McCalla 1996, 138). These particular difficulties can be mitigated today, but the technological environment is still an issue for the practice of musique mixte. This environment is itself responsible for critical modifications of the rehearsal process and of relationships with partners. First, instrumentalists preparing new music with live electronics face new problems because of the various profiles of their partners, who are no longer considered mere technicians. These partners, who range in number from one (a live electronic musician only) to three (including the composer and a sound engineer), must manage a complex technical environment in order to ensure that the electronic segment is properly projected and interacts correctly with the instrumental sound. These partners do not share the stage with the instrumentalist but act as his/her ears in the audience. Second, the notation of the electronics in the score is often approximate, sometimes even nonexistent. Apperception of the electronics is only possible through essentially ear-based work. Thus, it is essential for instrumentalists to rehearse sufficiently with their partners, in an adequate studio and in the concert hall. In France, the computer music designer (réalisateur en informatique musicale) is officially recognised as a profession, whose evolution Zattra (2013) has traced. Specific academic courses are emerging in music schools and universities to promote this professional activity.6 In the case of a piece of musique mixte, computer music designers assist the composers in their work with the technological medium. They also take part in the premiere and consequently appear as live electronic musicians. Many pieces are performed only a few times before fading from memory or becoming unplayable due to the obsolescence of technological tools. Others, on the other hand, become classics, performed around the world. Neither the composer nor the computer music designer can be present each time the piece is performed. We may wonder who will be entrusted with the electronic segment. As Richard underlines, the instrumentalists' partners in charge of electronics should not be viewed as technicians but as 'performers of the electronic medium' (Boutard 2013, 34). Plessas and Boutard (2015) define the role of this performer in comparison to that of computer music designers. Unfortunately, this new kind of performer is not yet widespread. 'If we take a look at classically trained instrumentalists in their entirety, we find that relatively few
musicians specialize in the performance of music with electronics' (Berweck 2012, 190). This situation is destined to change in the coming years. With luck, we will witness the advent of new courses in interpretation in schools of music and in cultural and academic institutions, dedicated to training electronic performers who will then progress to become multiskilled actors:
1 They would possess significant knowledge of the repertoire involving live electronics.
2 They would be classically trained musicians, in order to fully understand the instrumental score and develop a true complicity with the instrumentalist.
3 They would have great expertise in sound technology.
4 They would have the organisational skills to manage the necessary equipment and coordinate rehearsals in studios and concert halls.
Beyond the complex interactions among the multiple performers, this study revealed a broad range of work strategies that instrumentalists implement in their appropriation of this repertoire. These strategies can scarcely be generalised; nevertheless, they are significantly different from those of traditional instrumental music. Both the technological and the human requirements specific to this repertoire critically transform the production context. While this chapter has focused on the notion of chamber music, the study also pointed to several recurring issues that may impede both the practice and the dissemination of the repertoire. Compositions with live electronics are developing a new paradigm for chamber music and shaking up the codes of instrumental practice. In doing so, works for solo instrumentalist and live electronics are participating in the redefinition of chamber music, as well as its transformation.7
Annex: Statements of interviewees quoted in English in this chapter
Q-107
Q-110
Je pense que la chose la plus importante que je conseille toujours aux gens qui me demandent « qu’est ce que je dois faire en premier? »: c’est apprendre quelque chose sur les microphones et sur le renforcement sonore. Ça, c’est très important puis c’est pas… c’est impossible de juste laisser ça à quelqu’un d’autre parce que si on a une bonne connaissance des microphones, au moins de son propre microphone, c’est… ça devient une partie de son instrument. Alors ça c’est extrêmement important. […] c’est sur que depuis que je connais un peu mieux Max/MSP, tout ça est devenu… je suis beaucoup plus confortable parce que s’il y a quelque chose qui marche pas ou si j’ai des questions, je peux regarder. (Continued)
Instrumentalist Quotes Quotation number I-02
Q-147
Q-157 Q-166 Q-169
Q-199 Q-207 Q-216 Q-221
I-03
Q-247
Q-248
Ce répertoire dans la totalité de mon répertoire soliste, c’est quand même un peu limité. Par le nombre, et par la difficulté d’exécution qui demande toujours une installation. L’ingénieur du son, l’informaticien, qu’ils soient dans la salle ou derrière la console. Ça nécessite a priori… une certaine constellation qu’on n’a pas toujours facilement. C’est du traitement en temps réel, le résultat dépend de ce que je sors moi. Parfois il y a des déclenchements très difficiles à faire qui nécessitent de développer une certaine technique de travail pour justement gérer ça. Comme je le disais tout à l’heure, la répétition nécessite au moins deux personnes, voire trois personnes supplémentaires, une installation dans une salle même, et même si ce n’est pas la salle de concert, une petite salle de répétition, avec un retour suffisant, avec un micro. C’est toujours compliqué. De bien répéter avec électronique, comme on répète avec le piano, comme on répète avec la musique de chambre. En répétition, quelqu’un peut me mettre en studio, tout brancher et me laisser tranquille pendant une heure ou deux pour que je travaille. Et donc il faut aussi une personne compétente pour faire avancer ou reculer la machine si jamais il y a des choses qui n’ont pas marché. Normalement dans les conservatoires, déjà beaucoup de disciplines ne font pas travailler la musique actuelle, alors on ne va pas compliquer et demander aussi de la musique mixte… Ça ne fait pas partie du cursus. Et là, il y a nécessairement un abandon d’une partie de son plein pouvoir quant au rayonnement de la musique qu’on produit. Parce que il y a quelqu’un qui travaille en lien avec le compositeur et l’interprète, une interface, et cela, évidemment, si on n’accepte pas de rentrer dans ce jeu-là, ça peut être très frustrant. Parce qu’on ne maîtrise plus les choses en fait. Et là pour le coup, à la charge du compositeur ou de l’assistant de dire « attention, on sort totalement des clous » en termes de dynamique. Évidemment ce tiers joue un rôle. Il faut une confiance absolue. Une confiance, un respect et, comment dire, il faut savoir qu’on a affaire à un vrai musicien et si ce n’est pas le cas, c’est dramatique.
Instrumentalist Quotes Quotation number Q-249
Q-263
Q-273 Q-276
Q-279
Q-377 I-04
Q-340 Q-353
Q-394
Q-395
Ce n’est pas seulement quelqu’un qui diffuse. Il faut qu’il ait, lui aussi, un rapport d’interprétation. Et/ou de fond, on est dans une situation un peu comme dans une musique de chambre. Au fond, ce n’est pas tellement différent de l’expérience de travailler avec des danseurs ou avec des gens du théâtre ou une répétition, au lieu d’être purement musicale, est à 10 ou 20 ou 30 ou 50% maximum musicale. Et l’on va perdre, entre guillemets, beaucoup de temps parce qu’il faut tenir compte d’un environnement, de contingences techniques qui ne sont pas de notre fait et cela, c’est le cas avec électronique. Alors qu’avec l’électronique, je dirais qu’au fond, une fois l’étape « une » de découverte du matériau, ça ne sert à rien de travailler seul. Ça ne sert à rien. C’est que, quand en général on joue ces œuvres, c’est assez rare que ce soient des œuvres du répertoire avec des compositeurs qui ne sont pas là. En général on travaille avec les compositeurs. Il faut que l’instrumentiste […] ait conscience du type de transformation à cet endroit-là, quels sont les sons fixés qui interviennent, car il y a souvent une part de sons fixés. Tout n’est pas transformé en temps réel, enfin ca peut arriver mais c’est relativement rare. Il faut donc prendre conscience tout de suite de la dimension du contexte. Mais déjà l’habitude de travailler avec un autre partenaire, avec un générateur de son autre qu’un instrument traditionnel. Il faut que le compositeur soit là. C’est vraiment l’idéal. Ça évite de dévier. C’est quand même lui qui a conçu la pièce. S’il ne fait pas une vraie interprétation avec cette proximité avec le texte et en même temps de sens artistique, on va dire tout bêtement, l’interprète tout seul est cuit. L’autre aspect qui est au départ un petit peu dérangeant: on est obligé de faire confiance à l’ingénieur du son. Tout simplement parce qu’on ne gère pas la place qu’on prend dans le volume acoustique que peut présenter une transformation temps réel. Et donc on est obligé un moment de faire complètement confiance à l’ingénieur du son, ou en tout cas à celui qui va gérer cet équilibre-là. (Continued)
Instrumentalist I-05 (quotations Q-403, Q-405, Q-407, Q-431, Q-432, Q-482, Q-543, Q-572):
In my opinion, what an instrumentalist must be capable of in these pieces, in any case, is being able to disconnect from all that, to say 'I play my part', with the nuances I want to give, while trusting, as I said a moment ago, what will be mixed behind me. That is the small difficulty. That is to say, the musical assistant will adapt, or adapt the electronics to, the situation with the instrument and so on, and the performer will understand, or at least begin to understand, the electroacoustic phenomena that are going to occur around him. And that will completely change his work. Oh yes. Oh yes. That seems essential to me. I can hardly see myself playing a piece of mixed music alone, for example, without knowing who is going to mix. That is something that could bother me a little. In this mixed music, in my opinion, the sound engineer has a very important role, very, very important, almost at least as important as that of the performer. Yes. Yes. Every time, I have worked with IRCAM. There was always a sound engineer for that. Obviously, at home I do not have the equipment. But if one really wants to work in a more open way on the question of sonority, that is, of sound processing, of truly electroacoustic languages, one has to work with the composer, a technician, a musical assistant or someone of the musical-assistant type, who may in fact most often be the same person. And then, it is a little less the case now, but still: in order to rehearse a mixed work in a real or near-real situation, one must be able to do so with someone's help, if possible in a studio. It is, after all, difficult to rehearse at home on one's own with decent monitoring if there is no one there to help. That is to say that it is in fact a collective effort. The musical assistant, or the person who will take care of the diffusion of the work, is ultimately in the same situation as the performer who has not yet fully completed his learning of the score, and a chamber-music kind of work, chamber music to use the classical term, will establish itself between the partners.
Instrumentalist I-07 (quotations Q-585, Q-586, Q-591, Q-603, Q-662, Q-735, Q-738):
But there we are not in a situation very favourable to the full flowering of a symbiosis between the instrument and the electronics. It so happens that the EIC [Ensemble intercontemporain] generally has the best instrumentalists in the world, and that they play the score they are given extremely well. That works when one is within a particular aesthetic such as that of Pierre Boulez, where the role of the performer is strictly defined, even confined to a precise definition. And it is that precise definition, with respect to Boulez's work – this is my view – with respect to those works, that justifies the electronics. The electronics is added to the performer to allow him to become a hyper-performer, that is, to play faster, higher, lower, everything the performer cannot do, limited as he is, poor performer, even if he is the best in the world. It marries with him. There is a genuine work of sonic symbiosis in which the one does not exist without the other. It is not the limitation of the instrumentalist that justifies the use of the electronics; it is a blend of the two. There we are in a real mixity, almost a hybrid. One can have good monitoring at home. It is still electroacoustic music. It is a part that must be taken into account as something essential. Electroacoustic music can only sound if the diffusion is up to the task; that is, one only really grasps the sonic dimension of the electroacoustic part in the actual situation. One has to know that each person has his role, that that particular anxiety is not one's own but belongs to the person behind the console. We, on stage, are on stage. 'The show must go on.' But it is very complicated. And so for me it was really about everything that concerns chamber music, the work on, shall I say, 'finding a sound space', which is exactly the same as when one coaches a piano trio, and there you are. What already seems interesting to me is that when one works on an instrumental piece, one has all the technical data to address. I will perhaps speak of an instrumental work with other instruments, since for me this is not really pure solo repertoire: when one works on a piece of mixed music, one is not working on a solo piece, one is really working within more of a chamber-music framework.
Quotations Q-748, Q-751, Q-753, Q-761, Q-762, Q-780, Q-784, Q-825, Q-826:
In fact, it is about helping them [the students] to be able to modulate all this work, I was going to say. Well, it is the same thing for me when I say that, because it allowed me to step back a little, but it is the same work I do with myself: when we modulate our own way of playing, seeing how that modulates the other's part much more strongly, I find, than in a written chamber-music part. […] What I find interesting is precisely dosing the work on dynamics, articulation and tempo. […] we will see what happens if we define, if we refine the dynamics, among other things in everything that is pianissimo; seeing, if we reinforce them, what that can produce. What I find really very interesting, and what I try to pass on to the students, is that usually when we play, we have an acoustic with four walls, a ceiling, a floor, and the sound is diffused coherently in relation to that. And there, the fact that we are working with loudspeakers, a real-time transformation of the sound, all of a sudden this notion of cubic acoustics – I am no specialist –… the hall disappears and everything is transformed. On the other hand, there has to be someone in the hall, who is often the composer or the RIM [réalisateur en informatique musicale], or whoever, people who give us feedback. But as soon as there is mixed music, one is not alone. And the people who play with us, so to speak, are on the whole out in the hall. What comes into play is the resonance of the hall and above all the musician's sensation of monitoring. When one is in the hall, there is not necessarily an enormous difference for the listener, but there is a big difference in sensation on stage. And then the reference points are not the same. Beyond that, I am completely incompetent at using the equipment. And I do not necessarily see how to organise it and all that. That is why I say I trust the others. In terms of people, that implies, I was going to say, someone who is a sound engineer for everything that needs microphones, pedals, diffusion through the loudspeakers, and then someone who follows the music – it can be the same person – but the one who follows the music can be a RIM, the composer. In fact I have the impression that at least two people are needed, and even if it is not always two people, one person with the double competence is needed, in addition to the performer.
Instrumentalists I-08 and I-09 (quotations Q-900, Q-902, Q-911, Q-925, Q-934):
It can be compared, I find, to chamber music as a whole. Especially with real time, because real time reacts… it is chamber music with… both with the electronic part and with the person one is working with. At the beginning, when I started my doctorate, I had a vague idea of developing my computer skills, those kinds of things, then I quickly realised that it did not interest me all that much in the end. I was not going to pretend that, like, I was going to be independent on stage with my own computer to trigger my stuff. It did not take long before I left that to others. And I find it quite satisfying this way, when I have really wonderful collaborations with people at CIRMMT. We are really spoiled when we play there. But the equipment and the human resources should be easily available. As much as having a pianist. That means that, to my mind, the ideal is for there to be sound engineering classes. I think they have that at McGill, if I am not mistaken. Here we do not have that. So student technicians are hired, and it becomes perhaps a little more complicated because they are not necessarily trained for this, some are, some are not; they are rather… their main mandate is usually rather to record the concerts. It is not at all the same thing. In a certain way it is like working with another instrument, except that we have many more reference points when working with a pianist, for example, than when working with electronics, because in general we start doing this later, so there are many things to relearn in order to become familiar, to have the same level of familiarity with electronic effects that we would have with a pianist or a flautist or whatever. Here at CIRMMT we are lucky, or at the DCS, which is another studio, here at McGill, in the other building, but I do not know whether they are connected or not, Digital computer… something like that [Digital Composition Studios]. We can recreate live conditions but much smaller; it is like working in a fishbowl. So I can get an idea of what it will be like live. Because there will be the same diffusion system, for example in 7.1 or 5.1.
Instrumentalists I-10 and I-11 (quotations Q-936, Q-955, Q-991, Q-1005, Q-1075, Q-1097, Q-1106, Q-1121, Q-1122):
It is a repertoire that really demands preparation in the hall. If only for microphone placement or testing the loudspeakers, all of that is extremely important. For me it is just as important to have as much rapport with a loudspeaker, or a box of whatever kind, as with a pianist. […] so I am quite willing… every time I have a concert that is somewhat, either in a different space, where there is a way to have the loudspeakers, the equipment, I try to add a piece, even alongside classical music, alongside twentieth-century music or something. […] really playing with electronics is something… it really is a new experience, of chamber music, of playing with yourself, we, of playing with ourselves, the performers, or the loudspeakers […] and of understanding, because we are on stage and we will not understand. In a way, we are the transmitters of this trademark, so to speak. […] often, in a piece without electronics, we arrive at the end of the writing of the piece. That is, the composer has already written everything. And then afterwards he brings us the finished product, whereas when one works with electronics, it is step by step […] There are two ways of doing it. The methodology one can have is to say: I have a score; first I must understand the language of this score as acoustic music. And then if one has the chance to work directly with the electronics, it is better. Because there is a correlation between the two. There is, how shall I put it, a constant back-and-forth between what I propose, what the electronics proposes to me, and so on. So it is like two people who do not know each other: they are going to get to know each other. But it is at the moment when we know each other well that the piece can be presented to the public. It is exactly the same relationship as with another instrumentalist. For me it should be that. For me it is chamber music. It lies in the listening. It is not an instrumentalist with a second Lego brick added on top, making mixed music. It is constant: we pass from one to the other.
Instrumentalist I-12:
Q-1123: As the work progresses, we annotate elements of the electronics on our own score, because they are not there.
Q-1131: So yes, there are instrumentalists who absolutely want to. Even today, to play, whether at IRCAM or elsewhere; but to be an instrumentalist and to play pieces with IRCAM is a privilege, and it is not given to everyone, whereas many clarinettists, even students, would like to be able to have access to that but do not have that access. Why? Because there is a very significant cost.
Q-1134: Yes, for me it is a quartet. Even if I am the only one on stage. All alone on stage. But I am doing only 25% of the work.
Q-1137: Yesterday evening I premiered a piece for solo clarinet. It is not at all the same relationship. It is a dialogue. Whereas when you add electronics it is no longer a dialogue, it is a four-way conversation.
Q-1155: It was my idea. I set up a studio for myself at home. But for lack of means…
Q-1156: No, no. I prepared it meticulously, because I wanted to have a little IRCAM at home, but it is complex. Because it means having at least a mixing desk, at least one computer, or even two dedicated to it, because the files are very long, very heavy. You need at least two or four loudspeakers. All of that adds up. So it is possible. Picture my studio: there is the table, I put four power sockets in the centre, I installed eight points to take the loudspeakers, but for the moment I have not had the money to buy them. The idea is there, in my home.
Q-1168: If I have the means to give a recital with pieces with electronics, if I have the possibility of having a sound engineer, a musical assistant or all the equipment that goes with it so as to do things properly, I will do it, but it will not necessarily be exclusively mixed pieces, and as far as possible I will look into the possibility of including mixed pieces as long as we have the possibility of doing so technically.
Q-1195: I remember, I worked on it with a hole punch on the floor when I was practising at home, simply to have the sensation of pressing on something with my foot at the right moment.
Q-1204: For me there is no real difference between preparing an acoustic piece and a piece with electronics. I rely a great deal on the musical assistant or the musician or the people who are behind the console.
Q-1211: In that case it amounts to something like chamber-music work with another musician. […] There are discrepancies of ensemble; we work exactly as if we were with other instrumentalists. It is chamber-music work with the electronics.
Q-1218: In certain pieces it is better to play almost in one's own world, without worrying about the electronics, trusting the people behind the console and pressing on, doing one's job, rather as, I would say, in certain orchestral pieces: when you play your part, you follow the conductor, you trust the conductor, and you do not ask yourself whether the horn section is not a little too loud on the other side of the orchestra. There, you are obliged to trust the conductor and you cannot manage that yourself.
Q-1222: And it seems to me to be exactly the same as in a string quartet: we will not play the same way twice, and we will interact with one another in the moment, just as in rehearsal.
Q-1233: And then, indeed, on the score there is sometimes, but not always, a line that is supposed to represent the electronics. In general it is not very representative.
Q-1256: Me, I have absolutely no idea how a Max patch works. I know in broad outline how it works, but I am absolutely incapable of operating it.
Q-1269: The composer nonetheless remains at the top of the pyramid – or at its base, depending on how you look at things – and if the composer does not have a strong musical idea, it cannot work. The extension of that, then, is to have assistants, whether the cellist on one side or the musical assistant and the sound engineer on the other, who have highly developed skills, so that he can rely on them.
Q-1271: Once the premiere has taken place, if everyone has done his job very well and everything works, if the written score is realisable, if it can be played again without the composer's presence, [if] the patch is sound and works. There you are. That, moreover, is why the skills of the musical assistant and of the sound engineer, who can sometimes be the same person, are particularly necessary.
Q-1274: That is why this person's skills as a sound engineer seem to me more important than his skills as a musical assistant once the premiere stage is past, if the piece is sound, if the patch is sound and works. Obviously it is essential to have someone who really has ears, who can go out into the hall and make a sound, create a sound in the hall.
Q-1275: It is very pleasant to play with someone like that, because I play in duo with him exactly as I would play in duo with a pianist. He manages both sides very well. We did the Saariaho pieces with him, and indeed Saariaho generally gives him carte blanche, because she trusts his listening, his skills and his sensibility as a musician. Whether she is there or not, she lets him get on with it, because it goes very well. He knows. He truly has the attitude of a musician who listens.
Q-1284: One is alone on stage, but in a piece with electronics one is nevertheless not alone. There is this work, at least in duo, which I find very pleasant and which for me is comparable to chamber-music work.
Q-1285: Just as the musical assistant or the composer does not lean over my shoulder when I practise my scales, so I do not go and stand behind his computer when he opens files that have not been opened for four years… to see how it works… whether it still works.
Q-1290: [speaking of two of these partners] These are people I have worked with for years, whether solo or in ensemble. I know them, they know me, they know how I work. There really is the same kind of partnership of trust as there is with an instrumentalist.
Notes
1 In this chapter, we use the term 'live electronics' to describe any type of electronic action that occurs during the performance (sound processing or synthesis, cue management, etc.).
2 All quotes, identified as Q-xxx and provided in the annex, are translated into English. Unless otherwise indicated, all French quotes have been translated into English by the authors.
3 The notion of a 'musical instrument' became very controversial with the development of new musical technologies. In this chapter, the term 'instrumentalists'
refers to traditional 'classical' musicians and includes both singers and musical instrument players. Instrumentalists read music and play an instrument that is generally taught in music schools. The instrument can be acoustic (e.g. violin, piano, saxophone…) or electronic (e.g. ondes Martenot, electronic keyboards…). We do not consider a composer or improviser playing his own patch or some kind of homemade analogue and/or digital device to be an instrumentalist, but rather a live electronic musician. The participants we interviewed were what we call instrumentalists.
4 We distinguish live electronic musicians from computer music designers, who do not necessarily participate in the live performance after the premiere.
5 We may observe that today this is not always the case. Famous string quartets, for example, are often invited to play in large concert halls.
6 A graduate degree entitled 'Master arts parcours Réalisateur en informatique musicale' was created in 2011 at the Université Jean Monnet, Saint-Étienne, France.
7 We would like to express our gratitude to all the instrumentalists who participated in this research. We would also like to thank Karen Brunel-Lafargue and the editors for their attentive proofreading.
6 Approaches to notation in music for piano and live electronics
The performer's perspective
Xenia Pestova

This chapter examines elements of notation in music for piano and live electronics. The author introduces examples from the repertoire and discusses different notational approaches. These are grouped into three main categories.
1 Graphic notations and visual representations or descriptive scores (Boorman 2001) are commonly used in repertoire with fixed media to simplify synchronisation and can also be found in more recent repertoire with interactive live electronics.
2 Tablature-style notations, or prescriptive scores (Boorman 2001), including notation as documentation, offer a different and potentially complementary approach.
3 Hybrid combinations of approaches and notations, including semi-improvised pieces and notation for new instruments, show new perspectives in the field.
The author addresses performance practice challenges and draws conclusions from that perspective.
Introduction: raising questions
Musicians are often faced with notational challenges when interpreting twentieth-century and contemporary repertoire (Fox 2014, 7). These issues become more apparent as composers depart from traditional milieus and performing situations in instrumentation and intent. In the context of repertoire with electronics, the lack of unified notation has been identified as a major issue. Gregorio Garcia Karman writes:
Like other forms of writing, a score for electroacoustic music is not a neutral means of representation but the expression of a system of relations; you have to understand the language to be able to read the text. But the notation of electroacoustic music is not based on a widely accepted system of signs; there are a number of dialects. (Karman 2013, 154)
Thus, the performer is effectively required to learn a new dialect almost every time he or she learns a new piece. In notated music for instruments with live electronics, the problems of notation and communication multiply further. Complex set-ups often result in the lack of rehearsal time, at times requiring performers to internalise their parts without even hearing the electronic processing (Tutschku 2011, 395). The role of the sound projectionist or computer performer can become more demanding, in many cases calling for duo or chamber-like interactive relationships (Pestova 2009, 119).1 At times, the score contains more information than reasonable for page turns, requiring reduction, as we will see below. In addition to instructions for spatialisation and synchronisation that can be found in fixed media pieces, musicians might be required to execute complex trigger points, for example with a MIDI pedal. At the same time, notation of the electronics often remains sparse or even nonexistent, partly due to the complexity or unpredictability of live electronic behaviour, as well as the lack of accepted conventions (Nicolls 2010, 38). These issues raise further questions as soon as we introduce elements of improvisation and indeterminacy into performance. This chapter explores several approaches to notation in live electronic music for piano and other keyboard instruments through a survey of a selection of repertoire in this genre. Discussion is grouped into three broad categories: abstractions and graphic representations of electronic sound, tablature-style notational systems and hybrid approaches that include combinations of these categories, improvisation and new or modified instruments.
On terminology
Currently, there is no unified terminology in the field of live electronic music. For the purposes of this discussion, pieces for 'piano and tape', 'piano and playback', 'piano and CD', 'piano and soundtrack', 'piano and electroacoustic sound', etc. will be termed simply 'piano and fixed media'. Similarly, the term 'live electronics' will refer to both real-time processing of instrumental sound and real-time triggering of previously recorded and processed sound (as opposed to playing with a fixed recording). This will include pieces for 'piano and computer', 'piano and interactive electronics' and 'piano and interactive electroacoustics'.
Background: the score in electronic music
Composers have struggled with the notation of electronic music since the early studio experiments. Pierre Schaeffer's dilemmas are well documented in his research on musique concrète. While working from sketches to record material for Étude aux tourniquets in 1949, Schaeffer laments the inadequacy of Western notation and the constrictive nature of imposing bar lines on the
freer instrumental gestures (Schaeffer 1952, 25). Schaeffer goes on to identify ‘two problems of principle: tablature and notation technique’, drawing comparisons to eighteenth-century musical tablature notation and elaborating on the distinction between a ‘partition causale’ and a ‘partition des effets’ in electronic music. The former is the tablature required to obtain the effects that are currently impossible to represent accurately with appropriate symbols, while the latter focuses on graphic representation of the sounding result (Schaeffer 1952, 86). Much of the notation in repertoire for piano and electronics fits into these two categories. I will discuss abstract and coded visual representations of electronic sound and then tablature notation in the following sections. Descriptive scores in electronic music can be traced back to the tradition of a so-called diffusion score. While a piece of acousmatic music might be fully realised by the composer in the studio, interpretation can still play a vital role. Even without instrumental performers, live diffusion or articulating the composition spatially in the concert hall through movement on a loudspeaker array is arguably a performance art. As Jonty Harrison explains, ‘within the acousmatic tradition, descended from musique concrète, composition and performance are inextricably linked – diffusion being, in effect, a continuation of the compositional process’ (1999, 1). Harrison goes on to state that ‘in performing this music, therefore, it is appropriate that the same type of “physical” gestures that were used to shape material during the process of composition should be used again in performance to reinforce that shape in the audience’s perception and to enhance further the articulation of the work’s sonic fabric and structure’ (1999, 3). Again, Schaeffer was the first to discuss tentative experiments with diffusion as interpretation or as performance art: […] we occupied, fairly recklessly, the magic circle where the usual sight is strings vibrating, bows susurrating, reeds palpitating under the inspired baton of the conductor. The audience had to be content with an infinitely more disappointing sight: turntables and potentiometers, cables and loudspeakers […] I had indeed to be there, and, to however small an extent, (apparently), interpret […] with no other means of expression than imperceptible hand movements that added to or reduced the general sound level by a few decibels. (Schaeffer 2012, 61) Performance and interpretation in this context require a visual reduction of the musical events. While approaches to producing diffusion scores vary, one common tactic is to visually represent the outline of the waveform against a horizontal time grid in order to show the timing and dynamic envelope of the events in the piece.2 Documentary information such as sound levels or indications of spatialisation3 can also be present, although this is likely not to be a tablature-style score showing execution of events,
but rather an abstraction of the sonic result, to be studied closely through repeated listening prior to performance. This approach to notation is often used in electronic music with a live pianist, as discussed below. Karlheinz Stockhausen took a different approach to Schaeffer not only compositionally by working with purely synthesised sounds but also notationally. In Studie II (1954), notation doubles as technical information and documentation, showing meticulously how the work was constructed, and in theory – but not necessarily in practice – making it possible to realise the score.4 Simon Emmerson terms this type of notation the ‘realisation score’ (Emmerson 2006a, 4), comparable to Schaeffer’s ‘partition causale’ as described above, but perhaps even more detailed. In the case of Studie II, the composer thus has complete control over the finished product, arguably leaving little scope for interpretation due to the highly prescriptive and detailed notation (an approach Stockhausen was also to lean towards in instrumental music in his later years, with rigid and crystallised performance practice, leaving little or no scope for interpretation). As we shall see below, this approach to tablature-style notation-as-documentation also surfaces in later works with live electronics. It is worth mentioning that specifically designed or transcribed ‘analyses’ (Emmerson 2006a, 4) or listening scores can also be produced for the purposes of listening to and following the structural development of a piece of electronic music (Haus 1983). This approach offers an interesting point of departure in complex scores for instruments and electronics. Some works may require a reduction for performance but can still benefit from a detailed version of the ‘full score’ to aid study while learning the behaviour of the electronic part.
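The principle behind such waveform representations can be stated concretely: the dynamic envelope that a diffusion score plots against its time grid is, in essence, the peak amplitude of successive analysis windows of the fixed-media file. The following minimal Python sketch is a hedged illustration only, assuming the numpy and soundfile libraries and a hypothetical file fixed_media.wav; it is not a tool used by any composer discussed here:

    import numpy as np
    import soundfile as sf  # any audio file reader would serve equally well

    audio, rate = sf.read("fixed_media.wav")   # hypothetical fixed-media part
    if audio.ndim > 1:
        audio = audio.mean(axis=1)             # mix to mono for envelope purposes

    window = rate // 2                         # half-second analysis windows
    for start in range(0, len(audio), window):
        peak = float(np.abs(audio[start:start + window]).max())
        print(f"{start / rate:6.1f} s |" + "#" * int(peak * 50))

Read top to bottom, the printout mirrors what a waveform score conveys: where the salient attacks fall and how strong they are, with no information about timbre.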
Abstractions and codes: visual representations of electronic sound This section explores repertoire that uses concepts similar to Schaeffer’s ‘partitions des effets’ or descriptive scores as a point of departure. As mentioned in the previous section, traditional acousmatic approaches often use a waveform score to aid diffusion. This approach is also used in works with live electronics. The waveform is placed against the notated instrumental part to help with synchronisation (as shown in the next section, this technique is fundamentally different from a tablature-style notation that has technical information or physical actions as the main focus). We can compare this approach to a visual abstraction of the resulting sound, akin to graphic notation, although it does not necessarily provide information on the sonic texture, being mostly focused on timing as indicated by volume contours.5 Jonty Harrison’s piano version of Some of its Parts (2014) provides an example of this type of notation in a fixed media context (see Figure 6.1). This piece is a mobile construction for piano, violin, percussion and a fixed soundtrack that can be performed as three solos, three duos or a full trio with the same fixed media part. The waveform of the electronic part is represented visually against a time count, with instrumental events aligned proportionally with the salient attacks in the electronic part.
Figure 6.1 Jonty Harrison, Some of its Parts, page 3 (excerpt).
Harrison foregoes traditional pitch/rhythm relationships due to the somewhat ‘unconventional’ nature of the material: instead of playing on the keyboard, the pianist acts more as a percussionist, ‘colouring’ the electronic part and focusing on timbre. The inside of the instrument is divided into five regions by the struts of the piano frame, struck by the pianist with mallets and other implements and represented with a five-line staff used to notate percussive events and occasional pitches. Spaces between the lines indicate events on the tuning pegs (high, medium or low) and strings. At times, the pedal is depressed to enhance or manipulate resonance and blend with the more reverberant events in the fixed electronic part. For the pianist, the difficulty of this work lies not only in the physical stamina required to stand up and lean into the piano for the duration of the piece, but also in the synchronisation points. At times, strong simultaneous attacks occur after nonpulsed silences. While the performers should aim to internalise the electronic part in rehearsal and learn the timings through repeated listening, accuracy presents a challenge in concert.6 Adrenalin and varying hall acoustics can influence the perception of time in performance in contrast with the inherent inflexibility of fixed media (Pestova 2008, 10–11). It may be helpful for the performer to incorporate more visual cues that provide information on the timbral qualities of the electronic part in this context. Heather Frasch takes a somewhat different approach in notating electronics and extended techniques in Frozen Transitions (2014). Scored for flute, piano and live electronics, the piece utilises Frasch’s own notations for the nonstandard instrumental performance techniques, devised following consultation with the original performers of the piece and at times visually suggesting the texture of the resulting sounds (see Figure 6.2). In the piano part, different note heads combined with precisely notated rhythms are displayed spatially to show different registers of the instrument. The note heads indicate the types of sounds the pianist generates: sliding on the strings with gloved hands, running drum-kit brushes or chopsticks along the tuning pegs and the keys and plucking strings behind the bridge to create ‘untuned’ sounds. Frasch outlines the gestural contours of movements that generate the sound world of the piece and builds the flexibility of timing and natural physicality of gestural transitions into the fabric of the work. While the performers do not necessarily synchronise with the electronic part directly – the composer or a computer performer cues different sections, which are subtle and textural, lacking strong rhythmic attacks – it is nonetheless important for the performer to have a visual representation of the computer part in order to blend and interact with it. In this context, it is helpful to consider the computer (and also the computer performer) as a chamber music partner and study the full score in order to enhance the performer’s understanding, as in traditional ensemble settings (Fox 2014, 8). Frasch uses two systems in the electronic part: real-time amplification of the (mostly extremely quiet) instrumental parts and playback of samples. The playback staff features numbered trigger cues with a verbal description of what is audible as well as symbolic graphic notation of the sound types and shapes.
Figure 6.2 Heather Frasch, Frozen Transitions, page 2 (excerpt).
Another type of abstraction or code used to depict resulting sound is the use of Western musical notation (perhaps with some modifications) to show the electronic part. The obvious advantage in this situation is that standard pitch and rhythm notation is a code that is universally understood by trained musicians. However, some limitations of this approach to notation include the lack of existing symbolic depictions for the rich and varied timbral capabilities made available by electronic processing. One classic example from fixed media repertoire is Tombeau de Messiaen (1994) by Jonathan Harvey. Harvey uses standard pitch and rhythm notation to show the main features of the electronic part in order to aid synchronisation. The composer is careful to state that the pitches written are approximate only, due to the microtonal nature of the material. A more detailed approach is taken by Lou Bunk in Being and Becoming (2010) for toy piano and live electronics, part of a series of works with the same title featuring different solo instruments. The electronic material consists of sections of previously recorded and manipulated material triggered in real time by the performer, creating flexibility with timing and movement between the different sections of the work. Bunk uses a separate staff to meticulously show the pitch and dynamic envelope of each line in the electronic part. Quartertone notation is used to show the pitch on the staff along with the frequency in hertz within brackets, resulting in up to ten different staves in addition to the toy piano part (see Figure 6.3). This kind of approach can be very useful for an ‘analysis’ or study score (Emmerson 2006a, 4) but makes performance impossible due to the number of page turns required. Following the performer’s request, the composer made a performance score that shows the main lines, enough to give a representation of the electronics without cluttering the page. This is comparable to a concerto part with a piano reduction as opposed to playing from the full orchestral score. Bunk includes a separate staff to rhythmically indicate the placement of trigger cues that advance the computer onto the next section (see Figure 6.4). Combination scores featuring standard notation alongside specially devised symbols also fit into the category of abstractions. Returning to the fixed medium once again, we can find a classic example of this kind of notation in Denis Smalley’s Piano Nets (1990) (see Figure 6.5). Traditionally notated pitches and rhythmic elements alternate with and morph into abstract shapes and graphics in a temporally aligned space above the piano staff. As an example for piano with live electronics, Nostalgic Visions (2009) by Elainie Lillios shows specifically devised nonstandard notation. At times, the pianist is required to improvise on given material, and the composer uses verbal descriptions to provide guidance.7 In addition to showing when
Figure 6.3 Lou Bunk, Being and Becoming, bars 58–60 of full score.
Figure 6.4 Lou Bunk, Being and Becoming, bars 58–60 of performance score.
Figure 6.5 Denis Smalley, Piano Nets, page 11 (excerpt).
to advance between numbered cues (either with a MIDI trigger pedal or with a second computer performer at the laptop), the electronic part is notated using graphic symbols that reflect the texture and trajectory of the sounds without reference to standard notation (see Figure 6.6). Although it is not possible to imagine or hear the sonic events in one’s mind from simply looking at the notation, the graphics still provide an idea of the properties of the sounds and simplify the learning process, allowing the pianist to rely less on repeated listening to the electronic part prior to rehearsal.
Figure 6.6 Elainie Lillios, Nostalgic Visions, page 2 (excerpt).
Tablatures and notation as documentation
The second category for discussion covers prescriptive instruction-based tablature notations (‘causal’ scores). These notational systems place emphasis on the actions required to produce the sound and/or the technical information needed to realise the score. This style of notation is particularly well suited to ‘action music’ (such as extended techniques) that is led by physical gesture rather than information on pitch, timbre or rhythm (Kojs 2011). Juraj Kojs investigates instructive, as opposed to depictive, notation, based on traditional tablature techniques: ‘Tablature systems […] preserved the focus on the physicality of the music-making […] with information about the placement of the fingers on particular strings or keys, rather than conveying the desired pitch or interval’ (Kojs 2011, 66). Kojs uses tablature notation in his works for piano and live electronics, Three Movements (2004) and All Forgotten (2006–13). Both pieces offer interesting glimpses into an approach to music as gesture and action. The two pieces make extensive use of extended techniques. Three Movements separates the inside of the piano into three regions (not dissimilar to Harrison’s Some of its Parts and Frasch’s Frozen Transitions), using three lines to represent the high, medium and low strings. Different note heads and verbal instructions are used to indicate actions such as striking or rubbing the strings or sliding palms along the keys and are set against minute and second timings. The computer part is colour coded, denoting differences between elements such as the fixed electronic part, physical string model sounds, piano samples and real-time processing. At times, a waveform of the rhythmic pattern of the fixed electronic part is also given in addition to dynamic shapes and graphic representations of the sonic events (see Figure 6.7). However, as Kojs writes, ‘the graphic representation of the computer part is purely referential. The outlined gestures, in particular those of the live-electronics, change from performance to performance’ (Kojs n.d.).8 While the detailed instructions and explanations of the electronics come close to documentation scores such as Stockhausen’s Studie II, there is still not enough visual information to recreate the piece without a copy of the composer’s Max patch. In this instance, the software effectively becomes a part of the ‘notation’ required for performance.9 While we are seeing an action-based notation in the piano part, it is in fact coupled with a primarily abstraction-based notation in the electronic part. All Forgotten is similar in providing the performer with detailed instructions and diagrams on the execution of string glissandi. The pianist rubs resin on his or her fingers and slides along the strings, the pitches marked with conventional musical notation. The computer part consists of verbal descriptions of the events taking place and of pitch content with rhythmical cues. As before, the computer part does not require specific ‘execution’ by the pianist or a second performer, reacting to the actions of the pianist instead (see Figure 6.8). Kojs writes: ‘Both pieces are based on the piano providing the energy input to motivate a layer of live electronics:
Figure 6.7 Juraj Kojs, Three Movements, page 2 (excerpt).
Figure 6.8 Juraj Kojs, All Forgotten, page 14 (excerpt).
for example, through amplitude tracking to excite virtual strings (Three Movements) and virtual marimbas (All Forgotten) designed by Stefania Serafin and implemented as external objects in Max.’10
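The ‘energy input’ principle is simple to state in signal-processing terms: track the amplitude of the incoming piano signal and use the tracked level to drive the excitation of the physical model. Kojs’s actual implementation lives in a Max patch with Serafin’s externals; the Python sketch below is only a hedged illustration of amplitude tracking, with the audio source standing in as a dummy generator and the string model reduced to a printed gain value:

    import numpy as np

    def follow_amplitude(block, env, attack=0.5, release=0.05):
        """One-pole envelope follower: rise quickly on attacks, decay slowly."""
        rms = float(np.sqrt(np.mean(block ** 2)))   # energy of this piano block
        coeff = attack if rms > env else release
        return env + coeff * (rms - env)

    def piano_blocks():
        """Stand-in for blocks of live piano audio; real code would read the ADC."""
        rng = np.random.default_rng(0)
        for _ in range(8):
            yield rng.normal(0.0, 0.1, 512)         # noise bursts as dummy input

    env = 0.0
    for block in piano_blocks():
        env = follow_amplitude(block, env)
        excitation = max(0.0, env - 0.02)           # gate out background noise
        print(f"string excitation gain: {excitation:.3f}")  # would drive the model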
Another example of tablature notation is given in Of Dust and Sand (2010) by Per Bloland for alto saxophone and piano with electromagnets positioned over the strings. This work follows on from Elsewhere Is a Negative Mirror (2005) for piano and electromagnets. Both pieces require a Max/MSP patch that controls the magnets’ vibration. Two grand staves are provided for the piano. The lower two staves indicate the pitches generated by the electromagnets, while the upper two staves (labelled Finger Tablature) indicate the actual actions to be undertaken by the performer.
The top seven electromagnets are for the most part ‘on’ throughout the piece, attempting to excite their respective strings. The performer at the piano must lean over the keyboard and press down directly on these seven strings, thus damping their vibration. When a note is called for in the score, the appropriate finger is lifted off the string, allowing it to vibrate. A note on the tablature staff thus indicates removal of the given finger. (Bloland 2010)
The tablature contains three lines for the right hand and four for the left. It is complemented by a second staff below indicating the resultant pitches (effectively, a combination of tablature and abstraction scores, see Figure 6.9). While no other electronic sound is executed or notated, this is a curious and imaginative use of tablature-style notation for a ‘hyper-instrument’, an electromagnetically prepared piano.11 At times, tablature notation can bear little or no relation to the audible result and require an accompanying set of instructions and a detailed technical rider in order to interpret the work.
Figure 6.9 Per Bloland, Of Dust and Sand, bars 73–75 (piano part).
Larry Austin’s Accidents Two for piano and sound projection (1992) is a classic example of an intriguing and mysterious score that would be impossible to realise without detailed study of the instructions. In Austin’s case, the pianist and the sound projectionist read the same page simultaneously, working from a single ‘full’ score that serves as both a technical and a performance score, rather than from separate ‘parts’. The pianist reads a graphic representation of a waveform, interpreting the outline as relative register positions on the keyboard and attempting to press the keys without activating the strings (the resulting sonic ‘accidents’ provide the pitch material of the work). The sound projectionist interprets numbers to trigger prerecorded events, arrows to spatialise amplified piano sound and colour bars to initiate sound processing such as modulation, pitch change, distortion, compression, gating and comb filtering (see Figure 6.10). Dominic Thibault’s Igaluk: To Scare the Moon with its Own Shadow (2012) for keyboard and MIDI controllers with live electronics is another interesting example. The score shows pitches on the keyboard that trigger different sound events, with a continuous curve on the bottom staff depicting a volume pedal that controls filtering and distortion (see Figure 6.11). The score has up to five staves, to be read simultaneously, which can be challenging to follow because the ‘heard’ events do not necessarily correspond to the notated pitches.12 The detailed technical rider forms an integral part of this example of notation as documentation. This approach harks back to the classic ‘sampler’ writing in works such as Related Rocks (1997) for two pianists and two percussionists by Magnus Lindberg, with the pianists doubling on Yamaha DX7 keyboards with a bank of samples.
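Reduced to its mapping layer, a set-up like Igaluk combines two kinds of control data: discrete (keys that launch sound events) and continuous (a pedal curve that drives filtering and distortion). The sketch below is a hedged illustration of such a mapping, in which every note number, file name and parameter range is invented rather than taken from Thibault’s patch; the playback and processing engines are reduced to print statements:

    # Hypothetical mapping: MIDI note numbers -> sound events to launch.
    NOTE_TO_EVENT = {60: "event_A.wav", 62: "event_B.wav", 64: "event_C.wav"}

    def play_event(name):
        print("launch sound event:", name)            # placeholder for playback

    def set_processing(cutoff_hz, distortion):
        print(f"filter {cutoff_hz:.0f} Hz, distortion {distortion:.2f}")  # placeholder

    def handle_message(msg):
        """Route one incoming MIDI message (mido-style attributes assumed)."""
        if msg.type == "note_on" and msg.note in NOTE_TO_EVENT:
            play_event(NOTE_TO_EVENT[msg.note])       # discrete trigger from a key
        elif msg.type == "control_change" and msg.control == 11:
            amount = msg.value / 127.0                # continuous pedal curve
            set_processing(200 + amount * 8000, amount)  # map pedal onto processing

The point is only that the score’s five simultaneous staves correspond to several independent control streams of this kind, which is precisely why the notated pitches and the heard events can diverge.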
Figure 6.10 Larry Austin, Accidents Two, Event 36 1/2.
Figure 6.11 Dominic Thibault, Igaluk: To Scare the Moon with its Own Shadow, bars 213–15.
Cue-based notations in live electronic repertoire provide further examples of tablatures. These are instructional scores that place emphasis on the actions required to trigger a sample or start and stop sound processing rather than describing the nature of the sounds produced. The most common approach in music for piano and live electronics is to use a MIDI trigger foot pedal. Zellen-Linien (2007) by Hans Tutschku shows an example of notation that is primarily focused on the piano, while MIDI pedal trigger points are shown on a separate staff underneath the piano staves with rhythmic cues. Occasionally, the heard samples are shown in musical notation for synchronisation purposes, but most of the electronic behaviour is only referred to in shorthand, in boxes next to the cues (see Figure 6.12). According to Sebastian Berweck (2012, 102), ‘[…] in Zellen-Linien it is unclear to the pianist what the computer will do […] the computer remains a black box and the player has to learn the reactions from the computer by playing the piece often’. While shorthand notation can be a useful aide-mémoire to the composer, the performer would benefit from a more detailed verbal or graphic description in order to simplify the learning process, although this may in turn make page turns problematic. In this situation, composers can consider providing a separate study score with detailed information in addition to the performance score (see Bunk Being and Becoming).
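Under the notation, a cue-based patch of this kind is essentially an ordered list with a pointer that each pedal press advances. A minimal sketch using the Python mido library (the port name is machine-specific, the number of cues is arbitrary, and trigger_cue is a placeholder for whatever samples or processes the patch actually fires; trigger pedals are often, though not always, wired to send controller 64, the sustain-pedal controller, which is what is assumed here):

    import mido

    cues = [f"cue {n}" for n in range(1, 41)]     # arbitrary ordered cue list

    def trigger_cue(cue):
        print("firing", cue)                      # placeholder for the electronics

    index = 0
    with mido.open_input("MIDI Pedal") as port:   # hypothetical port name
        for msg in port:
            if msg.type == "control_change" and msg.control == 64 and msg.value >= 64:
                if index < len(cues):
                    trigger_cue(cues[index])      # advance the pointer on each press
                    index += 1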
Figure 6.12 Hans Tutschku, Zellen-Linien, page 1 (excerpt).
Figure 6.13 Bryan Jacobs, Song from the Moment, bars 84–92.
Working with a MIDI pedal can present challenges and coordination issues for the performer, which can in turn impact composition and notation (Pestova 2008, 20). The nature of the MIDI pedal is different from the regular piano pedals, functioning as an on/off mechanism with no gradations in between (Pestova 2008, 61–62). Furthermore, due to the lack of vibro-tactile feedback and the nature of some patches, there are often no immediate results following the pressing of the pedal (Berweck 2012, 103). This introduces the risk of missing a cue or playing one too many, causing the pianist to unintentionally skip a section of the piece or to start a section early. The sheer number and placement of such pedal cues can also present challenges of reading and execution. Song from the Moment (2008) by Bryan Jacobs contains 108 MIDI pedal cues over the 14 minutes of the piece, an average of nearly eight per minute. The pianist presses the MIDI pedal simultaneously with many right-pedal changes, which can also happen independently. The MIDI pedal is notated graphically with triangular wedge shapes and numbers indicating cue marks. Given the frequent rhythmical nature of these events, it might be more useful for the performer to have this information in rhythmic notation on a separate staff (as in Tutschku’s Zellen-Linien), although this space is already taken up with intermittent click track notation (see Figure 6.13; note the attacks in the electronic part, shown graphically, unlike in the Tutschku). Performance practice difficulties related to multiple MIDI pedal cues are partially resolved in On the Impossibility of Reflection (2010) by Scott Wilson. The piece requires a second musician (to date, the composer) to control the electronic part, which runs in SuperCollider software. Events and processing are triggered from the laptop, except where tight synchronisation is required. Following the initial rehearsal period with the composer, it was decided that during instances where piano attacks are meant to coincide with attacks in the electronic part, the pianist would trigger the events with a MIDI pedal, resulting in a ‘division of labour’. This is a simple and elegant solution. Wilson notates the pedal cues rhythmically and shows the rhythmic patterns of the responses in the electronic part (see Figure 6.14).
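One pragmatic software guard against playing one cue too many is to discard pedal presses that arrive implausibly close together, a sketch that could extend the pedal loop above (the 0.3-second window is an arbitrary illustration, not a value drawn from any of the pieces discussed):

    import time

    MIN_GAP = 0.3          # seconds; presses closer together are treated as accidental
    last_press = 0.0

    def accept_press():
        """Return True only if enough time has passed since the last pedal press."""
        global last_press
        now = time.monotonic()
        if now - last_press < MIN_GAP:
            return False   # suspected double press: do not advance the cue
        last_press = now
        return True

A guard like this only addresses the extra press; a missed cue still needs a human who can step the patch backwards, which is part of what makes the division of labour in Wilson’s piece attractive.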
Figure 6.14 Scott Wilson, On the Impossibility of Reflection, bars 1–4.
Hybrids, improvisations, future directions
Hybrid notations are interesting to consider when searching for solutions in this context. One classic example of a hybrid notation in piano music is György Kurtág’s intuitive nonrhythmical notation for his series of progressive pedagogical pieces, Játékok (1973 – ongoing). While remaining natural in its symbolic depiction of the physical gestures required to produce sounds at the keyboard and including ‘extended’ techniques such as black and white note glissandi and clusters, the notation attempts to reflect the ‘sounding’ gesture. For example, the use of colour coding might draw the pupil’s attention to different elements such as dynamics or clusters (big ‘blobs’ of sound marked in red), while glissandi are represented by wavy lines going across the range of the keyboard, getting the child used to spatial and physical orientation at the piano and exploring musical movement (Junttu 2008). An example of hybrid notation for piano and live electronics can be found in Mantra (1970) by Karlheinz Stockhausen. Scored for two pianists who also play percussion and control ring modulation (originally analogue, now digitised; see Pestova et al. 2008), the piece mostly uses conventional notation. However, changes between ring modulator pitches are shown below the piano staff with glissandi, reflecting the direct physical movement of the performers, at times requiring drastic and theatrical gestures when the dials are turned through the whole range. In some instances, the modulating frequency is also indicated in hertz. The pianists form a ‘super-instrument’ together with the modulation and the percussion, and the notation is a combination of tablature and visual abstraction/graphic representation. This is due to the fact that while the modulating sine tone frequencies are notated, they are never heard directly. Instead, what the performers and the audience hear is a combination of the direct piano sound with the ring-modulated piano sound.
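One reason ring modulation survived digitisation so cleanly is that the effect is mathematically trivial to state: multiplying the piano signal by a sine at frequency f replaces each piano partial at frequency p with the sum and difference frequencies p + f and p − f, so that neither the sine tone nor the unprocessed input appears in the modulator’s output, only their combination. A short numpy sketch of the digitised effect (the frequencies here are arbitrary test values; in Mantra the modulating frequencies are the ones notated in the score):

    import numpy as np

    def ring_modulate(piano, rate, freq=440.0):
        """Multiply the signal by a sine: partials become sum and difference tones."""
        t = np.arange(len(piano)) / rate
        return piano * np.sin(2 * np.pi * freq * t)

    # e.g. modulate one second of a 220 Hz test tone at 44.1 kHz:
    rate = 44100
    tone = np.sin(2 * np.pi * 220 * np.arange(rate) / rate)
    modulated = ring_modulate(tone, rate, freq=100.0)  # energy at 120 Hz and 320 Hz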
Figure 6.15 Alistair Zaldua, Contrejours, page 3 (excerpt).
A different type of hybrid notation is used in Alistair Zaldua’s Contrejours (2011–12). The pianist plays harmonics by dampening the strings while striking keys, with both the actual and the resulting pitches shown on the two lines of the piano staff. This triggers prerecorded ‘harmonics’ and resonances in the computer part, shown on the ‘electronics’ staff (see Figure 6.15). Just as in Mantra, this model is comparable to a hyper-instrument, with the computer literally extending the piano: the pianist activates the electronic part through the keyboard, and the two sound actors blend and morph into one. At times, notation takes a secondary role or is bypassed altogether. This can be the case in semi-improvised and collaborative pieces. Largely improvised works for piano and fixed media in the repertoire include Faisceaux (1985) by Annette Vande Gorne and Figures de Rhétorique (1997) by Robert Normandeau. The pianist performs alongside a fixed soundtrack, but the events in the piano part are notated with a degree of freedom and verbal instructions13 to improvise: different types of attacks on repeated notes in the case of Vande Gorne, and pitch collections with a free cadenza-like section in Normandeau. The electronic parts are notated with waveform/attack representations of salient events against a time frame in the style of a diffusion score, similar to Harrison’s Some of its Parts. Karlheinz Essl and Gerhard Eckel use structured improvisation in combination with striking and original hybrid notations in Con una Certa Espressione Parlante (1985) for pianist and tape machine operator. The pianist performs with a range of extended techniques inside the instrument and on the keys (also using props such as mallets and a bottle). At times, the pianist works from traditionally notated pitches and rhythms, shown in the top part of the score (see Figure 6.16), while synchronising with material recorded live (see Figure 6.17). The tape machine operator is required to record and play back parts of the pianist’s live performance, as notated in the bottom section. The tape machine operator also ‘scratches’ the tape in real time – a pioneering technique (see Figure 6.17) developed especially for the piece by Gerhard Eckel (Fuchs 1986). This imaginative notational approach comes with a set of detailed instructions for decoding the symbols. Composer and installation artist Patricia Alessandrini takes this approach even further in Schattengewächse (2013–14) for toy piano and live electronics. In this piece, the instrument is physically modified in order to limit its possibilities and bypass the need for notation altogether, inviting the performer to explore and interact with the sounds directly. Transducer speakers are placed on the body of the toy piano in order to play sound files as well as to excite vibration and create feedback loops through contact microphones inside. The keys are prepared and blocked with a metal ruler, creating various ‘bouncing’ and ‘buzzing’ sounds due to the hammers being in constant proximity to the metal rods inside. The performer is able to modify and shape the resulting resonances and key bounces by shaking the instrument, pressing on the keys, dampening the instrument with their body and eventually removing the blocking ruler to ‘liberate’ the keys. The electronic part is semi-improvised as well, with the second performer controlling the order and rate of change between prerecorded sound files and the levels of amplification for feedback, based on the pianist’s response. In this situation, the instrument itself becomes the score through the limiting, modifying and shaping of its sonic possibilities. This creates an inclusive and interactive performance situation by breaking down potential barriers presented by sign-based notations (Alessandrini and Pestova 2014). New keyboard instruments can also offer tantalising possibilities in terms of developing new notations. One such instrument is The Rulers, designed and built at McGill University by David Birnbaum and Steven Sinclair as part of the Digital Orchestra Project (see Figure 6.18a,b).14 The instrument consists of metal tines that are manipulated by the performer, their position captured by infrared sensors that communicate the information to the computer-based synthesis engine (Pestova et al. 2009). In addition to striking the tines in a piano-like fashion to initiate sounds, the performer is able to move them in
Figure 6.16 Karlheinz Essl and Gerhard Eckel, Con una Certa Espressione Parlante, page 6 (excerpt).
Figure 6.17 Karlheinz Essl and Gerhard Eckel, Con una Certa Espressione Parlante, page 9 (excerpt).
Figure 6.18 (a) The author with The Rulers, image by Vanessa Yaremchuk. (b) Detail from Figure 6.18a.
Figure 6.19 D. Andrew Stewart, Sounds between Our Minds, page 4, full score (excerpt). The Rulers notation is shown on the two bottom staves.
order to shape the sounds following the attack. D. Andrew Stewart developed a special notation in his piece Sounds between Our Minds (2008) for The Rulers and two other digital musical instruments. Stewart's notation is a hybrid combining traditional symbols with shapes reflecting tine movement graphically, as well as tablature-style staves showing which tine should be activated (see Figure 6.19). The Max/MSP patch for the piece also includes a graphical user interface showing which tines are active. The challenge for this project was to develop new notational methods that complement the nature of the instrument while drawing on traditional notations to minimise learning time, allowing performers to build on existing motor skills even if the interface does not closely resemble existing instrumental models.15
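The sensor-to-synthesis chain of an instrument like The Rulers can be made concrete with a small sketch. The following assumes, purely for illustration, that each infrared reading arrives as an OSC message of the form /tine <index> <position>; the Digital Orchestra Project's actual mapping software is not documented here. Using the python-osc library, it maps tine position onto a synthesis parameter and tracks which tines are active, analogous to the patch's interface display.

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

NUM_TINES = 8                      # hypothetical tine count
active = [False] * NUM_TINES       # which tines are currently displaced
cutoff = [500.0] * NUM_TINES       # one synthesis parameter per tine (Hz)

def on_tine(address, index, position):
    """Map a sensor reading (0.0-1.0) onto a filter cutoff and track activity."""
    index = int(index)
    active[index] = position > 0.05            # threshold for 'tine is active'
    cutoff[index] = 200.0 + 4000.0 * position  # linear position-to-cutoff mapping
    print(address, index, round(cutoff[index], 1), active)

dispatcher = Dispatcher()
dispatcher.map("/tine", on_tine)
BlockingOSCUDPServer(("127.0.0.1", 5005), dispatcher).serve_forever()
```

The linear mapping is the simplest possible choice; in practice a designer would shape it to the motor behaviour of the tines themselves.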
Conclusions: the performance perspective

This brief overview of notation in the repertoire for piano and electronics shows the lack of a unified notation protocol and terminology, which continues to be problematic. While there have been some attempts to create or identify unified notations in the past,16 this may prove difficult in practice, and it would be naïve to suggest that all composers should focus on one approach, given the varied and personal nature of each piece. However, some conclusions can be drawn from the above observations.

It is clear that detailed visual representations of the electronic part are useful for the performer in order to facilitate learning and rehearsal, and composers should strive to include this information. Performer–composer collaboration should likewise not be overlooked when creating new repertoire, as it can inform and enrich notational approaches, as in Being and Becoming by Bunk, Frozen Transitions by Frasch and On the Impossibility of Reflection by Wilson. Graphic representations and visual abstractions depicting electronic sound are helpful to the performer for the purposes of coordination, as well as providing an aide-mémoire for the behaviour of the electronic chamber music partner. Following in the tradition of acousmatic diffusion scores, waveform representations can give an idea of the dynamic envelope and salient attacks and are useful in music with fixed media, although they do not tend to provide specific information on timbre or texture (Harrison Some of its Parts). Similarly, using standard notation to show electronic sound is helpful in some instances (Harvey Tombeau de Messiaen, Bunk Being and Becoming), but may not always be practical, depending on the nature of the timbral events in the work in question. Special symbols devised by the composer can show the performer the approximate timbral qualities of sonic events and their placement in time (Lillios Nostalgic Visions), but may not be sufficient to get an idea of the piece without repeated listening to the electronic part. In these cases, a combination of verbal descriptions and graphics can be helpful (Frasch Frozen Transitions). Tablature notations can be used to show the actions required to perform the instrumental and electronic parts (for example, Kojs Three Movements and
Bloland Of Dust and Sand). These can also take the form of notation-as-documentation and technical scores needed to realise the work (Austin Accidents Two). Tablature scores can be particularly challenging for the performer when the notation does not reflect the resultant sound (Thibault Igaluk: to Scare the Moon with Its own Shadow). For such cases, combinations of tablature and more abstract visual notation can be more successful (Bloland Of Dust and Sand and Zaldua Contrejours). I believe that notating MIDI pedal cues rhythmically (Tutschku Zellen-Linien) rather than graphically (Jacobs Song from the Moment) is more useful for performance, while dividing cues between the pianist and the computer performer provides a practical solution in order to minimise the number of notated triggers for each performer and simplify coordination (Wilson On the Impossibility of Reflection).

Hybrid notations, semi-improvised scores, modification of existing instruments and working with new instruments all show further directions to consider (for example, Essl and Eckel Con una Certa Espressione Parlante, Stewart Sounds between Our Minds and Alessandrini Schattengewächse). I hope that we will see further developments of detailed notations in new repertoire that combine the above methods and draw on successful approaches from the past, such as the use of intuitive gestural notation and colour coding (Kurtág Játékok, Kojs Three Movements).

In addition to showing instrumental parts in a transparent and practical way, the notation of live electronic repertoire should include clear documentation of the electronic parts and the actions required to realise them, such as triggering sound files and processing. At the same time, it should give an indication of the resultant electronic sound through a combination of standard notation, verbal descriptions and graphic symbols. Returning to Pierre Schaeffer once again, we can refer to his suggestion of a 'variable' principle of notation, with some elements notated 'with the greatest precision' and others 'with an approximate outline', while difficult-to-analyse sound complexes are to be represented by symbols (Schaeffer 2012, 71). If this amount of information proves to be impractical for a performance score, an alternative solution is to create a special study score for reference, with a performance reduction to minimise page turns and facilitate reading (Bunk Being and Becoming). These approaches enhance visual cues and can greatly aid the performer's learning process and minimise rehearsal time in repertoire with live electronics, which may otherwise demand repeated listening and playing through just to familiarise oneself with the score. Detailed and thorough documentation will also aid the preservation and study of this emerging repertoire in the future.
Notes

1 For examples of duo-type notation systems for piano and an electronics performer, see Austin (Accidents Two), Essl and Eckel (Con una Certa Espressione Parlante) and Montague (Tongues of Fire). Each work calls for actions such as live processing, spatialisation or real-time recording and playback to be carried out by the second musician.
2 Diffusion Score, http://ears.pierrecouprie.fr/spip.php?rubrique168.
3 For attempts at formalising spatialisation notation, see http://blog.zhdk.ch/ssmn/about/.
4 Simon Emmerson argues that precise recreation of classic electronic repertoire is not possible as it is unrealistic to replicate exactly the equipment used (Emmerson 2006a, 5). Similar approaches to producing notation-as-documentation scores can be found in works of other composers such as Luciano Berio (Giomi et al. 2003).
5 Pianist Sarah Nicolls writes: 'These representations are… in practical terms not much use to the performer and simply take up precious space whilst providing no really useful information, so I therefore often cut them out of scores, replacing them with more descriptive words, or a mixture of normal notational devices and graphic shapes' (Nicolls 2010, 38).
This is similar to performers inserting traditional notation in order to simplify the reading of a graphic score, as described by pianist Aloys Kontarsky (Kontarsky and Martin 1972, 73).
6 Pianist Philip Mead discusses the learning process in music for piano and fixed media in an interview with the author (Pestova 2011).
7 Verbal performance instructions or verbal descriptions of electronic sound are often found in scores for piano and live or fixed electronics, at times approaching text score concepts (Lely and Saunders 2012).
8 Kojs, cited in Juraj Kojs, n.d., 'Three Movements (2004) for unprepared piano and electronics', http://kojs.net/3MVTS.html, accessed 15 August 2014.
9 Pianist Sebastian Berweck makes a study of the way notation, documentation and software patches need to be provided together and combined in order to make a performance of a live electronic work possible: 'The material with which a performer works [should] consist of the score, one or more patches and instructions' (Berweck 2012, 36).
10 Juraj Kojs, email correspondence with the author, 7 July 2014.
11 An excellent and more recent example of tablature-style notation can also be found in 'Astraglossa, or First Steps in Celestial Syntax' by Helga Arias Parra (2016).
12 This is comparable to performing the prepared piano works of John Cage. In these pieces, written pitches do not correspond to the percussive sounds activated by the keyboard. In pieces with electronics, one solution would be to notate the heard samples on a separate staff (although this may be impractical in a work that already uses multiple staves). An example of such notation is given in Nicolls (2010, 20) in a work for piano and keyboard activating samples by Larry Goves entitled My name is Peter Stillman. That is not my real name (2007).
13 See also Lillios (Nostalgic Visions).
14 See www.idmil.org/projects/digital_orchestra.
15 For more on gestural notation in electronic music, see Tormey (2011).
16 See Patton (2007) for examples of morphological notation in music for instruments and live electronics that can potentially be adapted to other repertoire. In related work, we attempted to initiate discussion on creating a unified system, or a 'gestural lexicon of mixed music' (Lewis and Pestova 2012). While we felt this to be useful for establishing terminology for gestural types found in live electronic music, no unity was found in existing notational approaches.
7 Encounterpoint
The ungainly instrument as co-performer
John Granzow
The musical idea comes first, but I think almost simultaneously of instruments.
Harry Partch, The Music Magazine (Jacobi 1962)

In music the instrument often predates the expression it authorizes, which explains why a new invention has the nature of noise.
Jacques Attali, Noise (1985, 35)

Any history of instruments must also account for their changing forms of agency and visibility.
John Tresch and Emily Dolan, Toward a New Organology (2013, 289)
Introduction

Musical instrument taxonomies achieve generality when they classify instruments by their material make-up rather than their variable musical context. In this way, the horn, named for its brass and not its sound, undergoes no identity crisis when at one turn it solos in Tchaikovsky's Fifth and at the next lands in the electroacoustic experiments of Gordon Mumma. This is no different from preferring to name a bird by its feather rather than by the tree in which it alights, so that it flies with its name. This scientific mode of classification excludes contextual factors in order to generalise across them. The advantages are indisputable: stable and reliable categories and common signifiers for our musical tools.

Yet we may still ask: what is the cost of bracketing out the musical site in which the instrument sounds? Do such nomenclatures forestall inquiry into how interaction with an instrument in a given musical practice might alter its character? Beyond material configurations, and the related need for common parlance, how do we experience the instrument in diverging modes of musical production? Is the horn in fact changed by how it is played and what music it serves?

For decades, critique of classification systems that ignore the cultural and performance context of the instrument has been growing. Sue Carole DeVale (1990) proposes the split image of a hologram as an apt metaphor for instruments that are at once material things
as well as objects replete with musical, social and historical dimensions. John Tresch and Emily Dolan (2013) broach the topic with the deceptively simple question: 'what must stay the same for an instrument to remain the same kind of thing over time?' Paul Théberge (1997) would answer that even performance practice can change the instrument, and he illustrates the point with the violin and fiddle, physically identical instruments diverging in character through different musical contexts.

In this chapter, I will take the reverse path by examining physically disparate instruments that converge in their character through the improvisational practice in which they are deployed. The two instruments are Hans Reichel's daxophone, a bowed plank of wood amplified through a contact microphone, and Chris Chafe's Animal, an algorithmic instrument that harvests its parameter values from environmental sensors or human gestures. These instruments have come together in free improvisation concerts at Stanford University, in the wider San Francisco Bay Area, and at a recent concert at the University of Guanajuato with electronic music composer Roberto Morales and percussionist Ivan Manzanilla.1 I begin with the more conventional material description and then discuss how particular interactions with these instruments infuse them with a kind of musical agency and character.

This move from generality (instrument material) to specificity (musical context) can be taken so far that the instrument becomes indistinguishable from the musical event and takes on as many guises as there are pieces written for it or instances of those pieces. A move towards such specificity risks producing arbitrary descriptions reminiscent of Jorge Luis Borges' (1999) 'Celestial Emporium of Benevolent Knowledge', where animals fall into such categories as 'those that have just broken the flower vase' and 'those that, at a distance, resemble flies'. Yet between general material descriptions and site-specific relational ones, we may find a space where the instrument is both a collection of distinguishing materials, dimensions and excitations and an object subjected to local interactions that become integral to a shifting instrumental character.
The daxophone

Musician and designer Hans Reichel discovered strange voice-like timbres when he bowed strips of thin wood suspended off a table. He amplified these strips (called tongues) with contact microphones to foreground otherwise inaudible micro-vibrations. To control the pitch, a wooden block was moved over the surface to change the length of the vibrating portion. He named this instrument the daxophone; 'dax' refers to the wood block that alters the tongue length (Figure 7.1).

When bowing these tongues, the inverse relationship between length and resulting frequency is often interrupted by transient squawks, squeaks and brassy diphthongs evocative of awkward orchestral misfortunes. The tone breaks into strident squalls and hockets between registers. Sounds leap into upper resonant modes as if the boundary of falsetto creeps away from its last known location. Bowing techniques such as spiccato cause the daxophone to jitter as noisily as stridulating
Figure 7.1 A partially 3D-printed version of Hans Reichel's daxophone constructed by the author, with the 'dax' resting on the tongue.
insects, or, as its name promises, the stuttering growl of a badger ('dax' is derived from 'Dachs', German for badger). Changing the position, angle, pressure and speed of both the bow and the dax produces multiple transformations of timbre. The structural simplicity of the daxophone is surprising, given its uncanny variety of sounds.

In the Hornbostel–Sachs system of instrument classification, the daxophone is considered a friction idiophone (Hornbostel and Sachs 1914). Idiophones are plucked, bowed or struck directly to produce vibration, rather than relying on an attached excitation source such as a string or reed. Friction refers to the stick-and-slip motion imparted by the bow. Adnan Akay describes the dynamics of such bowed idiophones, which often resist human control:

When sliding takes place under strong contact conditions, the influence of the contact force reaches beyond the interface and friction; the friction pair becomes a coupled system and produces a more complex and often nonlinear response. Under such conditions, instabilities develop and frequently lead to a condition called mode lock-in, where the coupled system responds at one of its fundamental frequencies and its harmonics. Development of mode lock-in and the selection of the mode at which the system responds depends on the normal load, sliding velocity, and the contact geometry. (Akay 2002, 1527)
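Stick-slip friction of this kind is easy to caricature numerically. The sketch below is a toy model, not Akay's analysis: a mass-spring 'tongue' is driven by a bow moving at constant speed, with static friction holding the mass until the spring force exceeds a threshold and kinetic friction taking over. Varying the bow speed or normal force shifts the oscillation between regimes, a faint echo of the mode lock-in described above. All parameter values are arbitrary.

```python
import numpy as np

# Toy stick-slip model: a mass-spring 'tongue' excited by a moving bow.
m, k = 0.01, 400.0              # mass (kg), spring stiffness (N/m) - assumed
mu_s, mu_k, N = 0.8, 0.5, 2.0   # static/kinetic friction coefficients, normal force
v_bow, dt = 0.1, 1e-4           # bow speed (m/s), time step (s)

x, v, stuck = 0.0, v_bow, True
positions = []
for _ in range(20000):
    if stuck:
        v = v_bow                        # sticking: mass travels with the bow
        if abs(k * x) > mu_s * N:        # spring force breaks static friction
            stuck = False
    else:
        f = -k * x - mu_k * N * np.sign(v - v_bow)   # slipping: kinetic friction
        v_new = v + (f / m) * dt
        if (v - v_bow) * (v_new - v_bow) <= 0:       # relative velocity crosses zero:
            v, stuck = v_bow, True                   # the bow recaptures the mass
        else:
            v = v_new
    x += v * dt
    positions.append(x)   # sawtooth-like displacement characteristic of stick-slip
```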
Stable pitch is achieved through sustained periodic motion between the tongue and the bow. The control of this periodicity proves elusive and gives rise to an exploratory approach; indeed, failing to sustain or replicate a timbre or pitch brings forth interesting novel sound qualities. The instrument seems to insist on improvisation through these moments of dialectic surprise.

Designing an electroacoustic instrument involves negotiating the relationship of the performer's gesture to the electronic reinforcement or processing. The daxophone is amplified using contact microphones that bring micro-vibrations in the tongue to the audible foreground. This amplification of low-amplitude changes serves to perceptually disjoin the sound from its associated human gesture. The sound is often heard as arising outside of the human–instrument feedback path, through a loudspeaker that is open to the interjection of sampling or synthesis manipulations. Simon Emmerson (2009) calls such use of amplification 'aspiring to the condition of the acousmatic'. I have observed a related need to confirm to listeners that the daxophone has no internal microprocessor, because there is a frequent assumption that the strange sounds must arise from a more complex signal chain. When unplugged, the sonic discontinuities are not as prevalent, and the anticipated relationship between tongue length and pitch restores the sensation of gestural control. Yet the daxophone has no acoustic resonator and is therefore very quiet when unplugged. Indeed, like other electroacoustic instruments, its 'character' is inseparable from its mode of amplification.

It is likely that some readers will not have heard of the daxophone, a lack of notoriety due partly to its short history and partly to the fact that it mostly eludes conventional instrumental goals of control. As composer Michael Berger once said, 'it seems to do everything an instrument should not'. Such negative attributions can lead to entire research projects that aim to mitigate the very character that is described here (for an example see Zhang et al. 2014). Diverging musical aims pull the instrumental character in opposing directions. I have described the daxophone as containing a chaotic component that disrupts the perceived continuity between gesture and resultant sound, a quality that is sought in improvisational settings. This does not prevent increasing control through a prolonged effort to tame this ungainly response. Yet, in a free improvisational context where contingencies are welcome, the surprising output of the daxophone is embraced. I observe a similarly ruptured control in Chris Chafe's computer-based resonator, Animal.
Animal

Animal is an algorithmic instrument deployed in both performance settings and sound installations. It exemplifies a long tradition of driving computer-hosted physical models with a variety of controllers, including acoustic instruments. The use of this agent in Chafe's piece Phasor for contrabass and
electronic bow illustrates an alternate path to a nondeterministic component similar to that of the daxophone, but through a very different process.2 Animal is hosted on a laptop and responds to gestures captured by an accelerometer embedded in the heel of a cello bow. Gestures from the contrabass player propagate in the parameter space of the computer-hosted algorithm. Movement, converted into data, becomes subject to numerical manipulations implemented in the design phase and tailored to this quest for musical surprise. The computer-hosted logistic map and the embedded delay lines intervene at times with chaotic behaviour, breaking the continuity between motoric gestures and resulting sound. What is inherent in the daxophone's idiophonic nonlinear response is here simulated in software through a set of equations devised to model dynamical systems.

Constructing an instrument outward from the synthesis scheme to an interface with the performer provides many possibilities for integrating instrumental autonomy. Animal, for example, will not speak until a DC offset is overcome, a process that can arise independently of the human performer's input. What in the daxophone is only a projection of agency onto an unpredictable response becomes in Phasor the capacity to voice the opening gambit of a given performance. Implementing this wilful musical motion pushes the instrument further into the space of co-performer.

The presence of the contrabass in Phasor has its own way of wresting material into its ambit as a sound source. An emphatic bowing gesture can enable the performer to appropriate the unfurling electronic sound, claiming its digital extensions through the timing of transients in relation to subsequent synthesised materials. This sonic coupling occurs, for example, when the contrabass lends the onset transients of its string to the unfurling of synthesised envelopes. Tim Perkis articulates this phenomenon, where pitched material becomes associated with the transient cues of usefully underdetermined sound sources:

When a very cheesy synthesized violin sound plays in counterpoint with a real violin, it can quite convincingly seem as if two violins are playing. It's as if all the cues we use to define the reality of the violin sound—little chiffs and squeaks and timbral irregularities—don't adhere all that strongly to one pitch stream or another. They both benefit from these features. … (Perkis 2009, 164)

Such benefits are at times present in Phasor, where Animal seems tightly coupled to bow gestures and the correlated accelerometer data. High valence (actively combining) transients from the real string perceptually claim the resulting envelopes. The performance becomes an act of negotiating how such cues are sustained and broken. The computer-hosted oscillators and their numerical transformations can easily generate multiple auditory streams, which do not necessarily imply
an equal number of heard agencies of the kind that can be negotiated as co-performer. Perceived agency in the instrument is not achieved simply by introducing sounds that emerge outside of gestural control, but rather through voices that seem to wrest the musical material from the performer through dialectic play. Agencies in the instrument are acutely felt because their imposition is negotiated, not just multiplied. What the daxophone gets for free in its physically bounded single resonator is designed into the numerical underpinnings of Animal.

In the foregoing discussion, I propose that the daxophone and Animal are deployed to be complicit in the trajectories of our free improvisation through the foregrounding of a chaotic component. This idea can be stated more generally in terms of our broad reliance on instruments as feedback systems. I have so far described this feedback as either conforming to or breaking with a sense of authorial control of sound. Yet if we might seek out the co-performing instrument, what does it mean to do the opposite, to seek out an instrument that seamlessly gives voice to the imagination? The following section explores this perhaps more common notion of the instrument as a means of transmitting the musical imagination to the air.
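Chafe's implementation is not reproduced in this chapter, but the kind of chaotic mapping it describes can be gestured at in a few lines. The sketch below is an assumption-laden illustration, not Animal itself: gesture energy (standing in for the accelerometer data) steers the r parameter of a logistic map, whose output feeds a delay line with feedback. In some regions of r the response tracks the gesture; in others it breaks into aperiodic behaviour.

```python
import numpy as np

def animal_like(gesture, delay=120, feedback=0.6):
    """Illustrative only: drive a logistic map with gesture energy.

    gesture: array of accelerometer magnitudes scaled to 0..1.
    Low energy keeps r in a stable regime; high energy pushes the
    map towards chaos, decoupling output from input.
    """
    out = np.zeros(len(gesture))
    buf = np.zeros(delay)            # simple delay line with feedback
    x = 0.5                          # logistic-map state
    for n, g in enumerate(gesture):
        r = 2.8 + 1.18 * g           # map gesture onto r in [2.8, 3.98]
        x = r * x * (1.0 - x)        # logistic map: x <- r x (1 - x)
        y = x - 0.5                  # centre around zero before the delay
        out[n] = y + feedback * buf[n % delay]
        buf[n % delay] = out[n]
    return out
```

The point of the sketch is the design stance it dramatises: the same gesture can produce tracking or rupture depending on where it lands in the map's parameter space, which is one way to build 'wilful' behaviour into an instrument.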
The necessary instrument

Paul Hegarty (2008, 27) describes the musical instrument as both a 'vehicular machine for human ideas or feelings' and a 'physical machine for displacing air … via a huge variety of material forms'. The instrument as a 'physical machine' is one perspective of the instrument builder, who inherits techniques to reproduce mechanical or electrical properties for physical or virtual acoustics. These techniques include computer-controlled milling to produce nearly identical material forms, as well as embedding integrated circuits into the instrument for signal processing; with the latter integration of synthesis, the character of the instrument gives way to the parameter manipulations and sound design of the composer or the performer.

In the second part of Hegarty's definition, the physical machine serves the 'vehicular' function of transmitting ideas and feelings. This metaphor of the transmitter might imply the framework of telecommunications, where signals are always subject to varying degrees of loss in transit. Under this view, the agency of the instrument is diminished, as it is only a channel for transmission of varying quality, not itself a source of ideas. I am reminded of Glenn Gould, who famously dismissed the piano as a machine of various compromises in its voicing of his musical ideas (Payzant 2008, 95). Improving the transmitter would involve reducing its loss, with the ideal being an interface that carried uncorrupted imagined sound. In the mid-twentieth century, this fantasy of an instrument of vanishing mediation was vested in electronic and computer-generated music. I have chosen below two quotations from the history of computer music that exemplify this dream of a seamless interface with the imagination. Yet I also perceive in these words
a tribute to the instrument as an agent complicit in musical trajectories. The following ideas therefore seem at once to celebrate and critique the idea of reduced mediation between mind and sound.

Take, for example, Edgar Varèse's well-known statement: 'I dream of instruments obedient to my thought and which, with their contribution of a whole new world of unsuspected sounds, will lend themselves to the exigencies of my inner rhythm' (1962, 1). Dissatisfied with the futurists and their attempts to liberate sound through electronics, Varèse imagined a more concise reckoning of his mind. Yet in stating this, he attributes to this 'obedient' instrument the ability to produce 'unsuspected' sounds. He has imagined an instrument that both voices the mind and surprises. Even when conceived as a conduit for ideas, surprise and discovery still play a role in characterising the instrument.

David Cope suggests a similar desire in the introduction to his book on algorithmic composition: 'The dream continues of a digital extension of myself that provides music from which I can extract at will' (2000, ix). In addition to this muse-to-digital converter, Cope's dream implies an 'extension' of himself that in turn is a reservoir of selection; it is as if only through an external manifestation can the compositional process of considered selection take place. The dream seems less the reification of imagined sound, and more the conversion of the imagination into musical materials that are then worked with as a secondary process.

In imagining such vectors from mind to sound, these composers pay tribute to an instrumental mediation that I claim is often necessary for the creative process, and of course undeniably so in improvisational contexts. This idea is reminiscent of Pierre Schaeffer's claim that we 'did not perceive music until it had passed onto an instrument, even if that was a stone, or a skin stretched on a gourd. Probably man needed to go outside of himself, to have another object: an instrument, a machine' (Schaeffer, cited in Nelson 2011, 109; for the original text see Schaeffer 1971, 56). Although the sentiments of Varèse and Cope venerate interior, imagined sound, they still also invoke systems of mediated feedback. If the implied apparatus seamlessly extends the imagination, it also functions as itself a source of those ideas.
Extension and break

Claude Cadoz defines the instrumental gesture as 'the set of actions applied to the instrument, a portion of which constitute the final outcome of the task' (cited in Sève 2013, 65).3 This definition evokes the analogue condition of acoustic instruments, where bodily exertions are conducted through physical materials. This transmission is bidirectional, with actions producing vibrotactile feedback critical for instrumental control. Andy Clark, borrowing a well-known example from Maurice Merleau-Ponty, illustrates this extension more generally by asking us to imagine dragging a stick along the ground, through which the ground, not the stick, is felt between one's
fingers (2008, 31). In similar fashion, sound vibrations in an instrument conduct through tactile feedback channels such that we feel direct contact with the projected sound. The tactile becomes another channel through which to understand and manipulate what is heard. This connection may be important for the performer's sense of gesture in continuity with sound, but it is only one part of the design goals underlying the development of the daxophone and Animal.

The congruency of intent, gesture and sound comes under scrutiny in the work of many writers, including theorists who investigate improvisation as a process of decentring authorial roles: see Wilf (2013) for a review of contingency in improvisation. If the instrument seems to extend motoric gesture into sound, then the perceptual threshold where that control is lost can be an opportunity to imagine the musical idea as arising from the instrument itself. This threshold where control is disrupted or relinquished is conceived differently across musical and cultural practices. In Western repertoire-based music, this boundary is often heard as the failed instrument. The instrument can 'appear' to exhibit consciousness at the occasion of its failure.

The duck-call produced by a clarinet, the violin string that breaks during a concert, these are ontological revelations that wrest the instrument into transparency. Can we speak of the involuntary aesthetic of the instrument presented as itself? Or perhaps it is a garish appearance of what is better left hidden. When such things appear, the work is disturbed (if the violin string breaks, the concert is interrupted; if the oboe honks the concert continues but under a certain musical instability). (Sève 2013, 119)

Implied here is that the instrument can vanish in the service of organised sound, the way the strings of a marionette seem to vanish under the skilled manipulations of the puppeteer. The instrument becomes to the performer what the performer was to the composer under the nineteenth-century concept of the strong work: an agent of execution, not interpretation.4 Conversely, opacity or failure arises when the instrument is revealed as a noisy mechanism, a box flexing under tension, an ungainly object with noisy strings attached. Yet to Sève's question of the 'aesthetic of the instrument presented as itself', the daxophone and Animal answer in the affirmative: this opacity is in part the desired result.

The Portsmouth Sinfonia famously worked with this 'appearing' instrument by tackling difficult works with musicians who had little experience on the instruments they were given. Gavin Bryars conceived of the group to foreground such instability. As the notes of venerated musical works are fumbled, the sounds lay bare their material provenance in the instrument, now a noisemaker in garish relief. It is interesting to imagine how a change of genre here would produce a complete transformation of instrumental
character. Leveraged for free improvisation, those same sounds would be heard and deployed very differently. Performers would now explore a very narrow range of sounds on their respective instruments, slowly venturing out to build coordinated interactions. Although this would necessitate the 'appearance' of the instrument, the sounds would be unhinged from any known work with all its auditory expectations; the appearances could then be considered instrumental agencies, co-performers in now more populated fields of influence.

This motivation can define the instrument even in the design phase. Bart Hopkin ends his practical book on instrument design with a call to listen to the sound of materials, without polishing their character away:

It will be tempting to turn each sound generating system into a pitch selection and rhythm-control device—after all, that is what most musical instruments are, and what most instrumental music is composed for. But it may be that to do so is to impoverish the sound. Try to let the instrument and its sound suggest their own music. (1996, 148)

This 'control' associated with most instrumental music may be a historical exception defining modern orchestras. Alexandra Hui reminds us, 'the world of sound in the second half of the nineteenth century was highly unstable. New tuning systems, new tones, new music and the fledgling discipline of musicology all jostled to establish position' (2013, 121). Hopkin is implicitly suggesting a return to such diversity through the inclusion of found objects and electronics in custom instrument design. His own instruments are fantastic manifestations of his practice of meeting the resonator halfway.

As with the daxophone, this noninvasive craft reflects a use of materials that come physically bounded with inherent nonlinearities to be explored and teased out. Conversely, a top-down approach is required for Animal, tweaking the parameters of the algorithm to integrate chaos-based instabilities that can be regulated only imprecisely. These two paths to contingency (bottom-up through found materials and top-down through manipulating otherwise deterministic computer code) are manifest respectively in the daxophone and Animal.
Improvisation

The daxophone and Animal are conceived as complicit in the trajectory of the musical imagination as it unfolds in free improvisation. They are appendages without which our musical travels are hypothetically less constrained (free to imagine any music) and yet strangely feel much more so. A rhetorical analogy comes to mind: to converse is to relinquish some control over the tenor of thought (one is no longer able to follow one's own stream of consciousness). Yet when conversation is engaging, we experience a vastly
extended mind that strings our thoughts along into surprising mental spaces and insights. Can we construct the instrument for a similar feeling of dialectical and musical expansion? The rhetorical metaphor breaks down, because this invoked conversation between performer and instrument is a form of ventriloquism, quickly understood as such. It is important to note here that there is nothing essential about this mode of constructing, hearing and playing instruments. For it to bear fruit, a suspension of knowledge of the system is required. It is an interpretive act to hear such agencies and to try to relay to the listener this imaginary sonic other.

Jeff Pressing proposes what might comprise a knowledge base for improvisation: 'musical materials, excerpts, repertoire, subskills, perceptual strategies, problem solving routines, hierarchical memory structures and schemas, generalized motor programs, and more' (1998, 53). Some of these things do not apply to free improvisation. We forego, for example, moving through key structures or modulations. Nonetheless, the instrument as a compositional platform can stand in as a memory schema and dictate psychological strategies. In prototyping it, we learn and remember its variable output, a ratio of gestural extension and perceived instrumental autonomy that we intend to engage with as we might other living performers. Rather than a score or tonal idiom, we acquire a set of heuristics for negotiating agencies as they proliferate and fuse in the instrumental response. So often conceived as a tool of control, the instrument is now designed to dramatise a framework for distributed musical decision-making.
Ontology

The comparison between an unruly friction idiophone and an algorithm downstream from a contrabass is based on subjective attributions of character. Their affinities do not correspond to material taxonomies, but rather to ontologies of musical performance. The instruments' identities are implied in their colloquial names: Dachs (badger) and Animal. Perceived at times as autonomous, the sound is associated with other life forms. This sonic metonymy may also inform the performer's psychological stance, producing what George Lewis would call a 'technology mediated animism' (2000, 37).

Treating the instrument as though it were alive is also an example of cross-domain mapping (Zbikowski 2002). The mapping emerges as a theory in which an observer compares domain A (target) with domain B (source) to test for potential isomorphism. The motivating question is whether the heuristics induced through one domain are also apposite to the other. Ample theoretical work outlines the operation of such mappings, from the mental models first proposed by Kenneth Craik (1967) to the now much-cited image schema hypothesis of George Lakoff and Mark Johnson (1980). This vast literature provides suggestive examples of how conceptual maps can be lifted from one domain and applied to another.
Perceived as an autonomous agent negotiating adjacent actors, our instruments can be conceived as though they were alive. The analogy maps the unpredictable encounter with other life forms onto the sonic output, a stance promoted through our chosen mode of design, performance and listening. A critique arises where these instruments are 'animal' only insofar as they seem to engage in sonic motion decoupled from human energy inputs. This zoomorphism (or rather a veiled anthropomorphism) truncates the vast diversity of life with a few narrow clichés. Animal is flattened to merely stand in for our ruptured control: see Lorraine Daston and Gregg Mitman (2005) for a fascinating discussion of how human animals think with other animals. Jacques Derrida (2008) rejected the word 'animal' as generalising across riotously diverse life forms and serving the Enlightenment project of keeping humans separate from them.

Yet Animal's abstraction may still be commensurate with the experience of listening to or performing improvised experimental music. Rudimentary distinctions of agency, otherwise resolved quickly when the senses combine, can be intentionally foregone in an art form concerned primarily with listening and the imagination. The question of 'who is there' is protracted through what might be called expanded causal listening. Causal listening is to decipher what swims under the sonic surface and makes it ripple. It is the inference of plausible cause. Yet this causal listening remains open to the imagination. Simon Emmerson (2007a, 169) talks of the hyperreality of sound samples recognised from the source process (i.e. what was recorded), but also evoking totally alternative images in his mind. This kind of imaginative listening forgoes local knowledge of the scene for a wider field of candidate sources. Some might close their eyes to impose such expansions, as Stockhausen suggested (Wörner 1973, 155). As an instrument builder and performer, I am aware of the technical correlates of the sound produced. Yet I may suspend this knowledge in order to let analogies from wider experiences seep in and activate a metaphorical hearing and playing. The names daxophone (badger) and Animal suggest these expanded sources of musical sound in the biosphere, and our experiences with other life.
Conclusion

I have compared two recently constructed instruments deployed in free improvisational concerts. Their identities converge through musical practice rather than material makeup. They are designed to foreground a nondeterministic component that imbues them with an authorial role in free improvisation. They are both means of gestural extension and co-performing agents. In the case of the daxophone, this agency was discussed as arising from momentary violations of an otherwise linear pitch/length relationship. This was due in part to its mode of amplification as well as the nonlinearities inherent in bowed idiophones. With Chafe's algorithmic extension, a similar agency is crafted through the coding of feedback paths in digital delay lines.
The move is to see these instruments not as unruly, but as otherwise ruled, and thus wakefully negotiated through animal analogies. To combine these instruments in improvisation is to sonically prolong the question: what musical agents populate and interact in this space? We play to make the count exceed the human.
Notes

1 An excerpt from the concert can be found at www.youtube.com/watch?v=PtGceG66-Vs.
2 The K-bow is produced by Keith McMillen.
3 Unless otherwise stated, all translations are by the author.
4 Stravinsky said that music should be executed by performers, not interpreted (cited in Cook 2003, 204).
8 Robotic musicianship in live improvisation involving humans and machines1
George Tzanetakis
The term music robot is typically used to refer to a device that is digitally controlled but produces sound acoustically. For example, a percussion robot might use solenoids (electromagnetic devices that convert electrical energy into linear motion) attached to sticks to hit a drum membrane for sound production. In some ways, these devices can be thought of as the modern counterparts of mechanical pianos, utilising computers for control instead of rolls of paper with holes that encode the music.

In this chapter, the challenges and opportunities of creating improvised music that combines human and robotic musicians are discussed. The importance of robots being able to understand musically both their own output and the collective output of the group is emphasised. This is illustrated through specific case studies of how techniques of automatic music analysis can be used in the context of live performance. For example, we describe how to build robotic instruments that can correctly identify the instruments they are actuating without explicit mapping, calibrate their dynamics automatically and self-tune. We also discuss how to model higher-level activities, such as recognising the gestures of a performer and rhythmic patterns independently of tempo. In order to put these techniques into a specific context, we discuss their use in creating two music pieces for the interactive Trimpin installation Canon X + 4:33 = 100.2 The installation consisted of reconstructed acoustic pianos made into automatons using a variety of uncommon actuators. The exhibition title paid homage to the work of two significant and influential twentieth-century composers: John Cage and Conlon Nancarrow.
Introduction

Mechanical devices that generate acoustic sounds without direct human interaction have a long history, from mechanical singing birds in antiquity to sophisticated player pianos in the late nineteenth century that performed arbitrary scores written in piano-roll notation. Using computers to control such devices has opened up new possibilities in terms of flexibility and expression while retaining the richness of the acoustic sound associated with actual musical instruments. The terms music robots or robotic musical instruments have been used to describe such devices (Kapur 2005). They are
a fascinating hybrid of traditional acoustic performance and live electronic music. While musical robotics may seem a niche and rather esoteric activity (Burtner 2004), music robots could become part of the regular fabric of music performance. Historic innovations, such as the transition from monophonic to polyphonic music, the electrical amplification of the guitar and the use of computers in the recording studio, were all regarded with scepticism but eventually became mainstay practices. In the last few years, the design and development of music robots have accelerated due to factors such as the availability of well-supported embedded computing ecosystems such as the Arduino and the use of 3D-printing techniques to create reusable and easy-to-replicate designs (Trail et al. 2013). In this chapter, I provide a brief overview of the history and current state of the art in music robotics.

The term musicianship refers to a broad set of abilities required for performing music, typically related to listening while playing. As any musician can attest, it is important to listen both to your own playing and to what the other musicians in the group are doing. The term music robot typically refers either to the electromechanical sound-producing apparatus alone or to this apparatus together with the associated controlling software. Although such music robots have been used in performances of both composed and improvised music, with or without human performers sharing the stage, they are essentially passive output devices that receive control messages and in response actuate sound-producing mechanisms. Their control is typically handled by software written specifically for each piece by the composer and/or programmer. In the majority of cases to date, the robot is effectively 'deaf' and has no ability to perceive its own sound or the sound of the other musicians. For example, a percussion robot will happily continue actuating its solenoid and the attached drumstick even if the underlying drum is removed and no sound is produced, something highly unlikely to happen with a human musician.

In his excellent book, Robert Rowe (2004) describes how musicianship can be modelled computationally, mostly in the symbolic domain, and includes information about how to implement musical processes such as segmentation, pattern recognition and interactive improvisation in computer programs. In this chapter, I describe how techniques of Music Information Retrieval (Orio 2006) can be adapted to the context of live music robotic performance to model musicianship in the audio domain. The chapter concludes with a case study of how some of these ideas were employed to create two music pieces for the interactive Trimpin installation Canon X + 4:33 = 100. The installation consisted of reconstructed acoustic pianos made into automatons using a variety of uncommon actuators.
Background

Robotic music instruments, like their acoustic counterparts, come in a variety of forms and sound-producing mechanisms. An early example of an automated, programmable musical instrument ensemble was described by
Ismā'īl ibn al-Razzāz Jazarī (1136–1206), a Kurdish scholar, inventor, artist and mathematician who lived towards the end of the Islamic Golden Age (ca. 650–1250). His automaton was a fountain on a boat featuring four automatic musicians that floated on a lake to entertain guests at royal drinking parties. It had a programmable drum machine with pegs that connected to little levers, which operated the percussion. The drummer could be made to play different rhythms and different drum patterns if the pegs were moved around, performing more than fifty facial and body actions during each musical selection. This was achieved through the innovative use of hydraulic switching (Jazarī 1974, 42–50).

The player piano is a more recent example of an automatic, mechanically played musical instrument. Over time, its hammer action has been powered by foot pedals, hand cranks and electricity. Compositions and performances were originally encoded on piano rolls (sheets of paper with holes) and more recently have been stored digitally. Today, automated pianos controlled using MIDI (the Musical Instrument Digital Interface) are commercially available (for example, the well-known Disklavier system by the Yamaha Corporation). A modern example of a robotic musical ensemble is guitarist Pat Metheny's Orchestrion, which was specifically influenced by the player piano. Metheny cites his grandfather's player piano as the catalyst for his interest in orchestrions, which are programmable machines that play music acoustically and are designed to sound like an orchestra or band (Metheny 2009).
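Driving a MIDI-controlled player piano of the Disklavier type takes only a few lines of code. The sketch below uses the mido library; the port name is an assumption for the example, and any MIDI-capable piano would behave similarly.

```python
import time
import mido

# Open the instrument's MIDI output port (the name is installation-specific).
out = mido.open_output("Disklavier")  # hypothetical port name

# Play a C major triad: note-on messages, hold for one second, then note-offs.
for note in (60, 64, 67):
    out.send(mido.Message("note_on", note=note, velocity=80))
time.sleep(1.0)
for note in (60, 64, 67):
    out.send(mido.Message("note_off", note=note))
```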
Figure 8.1 Mahadevibot robotic percussion instruments designed by Ajay Kapur.
Robotic percussion instruments are arguably the most diverse and most common type of music robot. They generate sound by striking different sound-producing objects. A common approach is to use a motor or solenoid system that strikes a membrane (or other object) with a stick. As an example, Figure 8.1 shows the Mahadevibot, a robotic percussion instrument created by Ajay Kapur et al. (2007) for live North Indian music performance. Researchers at Harvard University designed a system using pneumatic actuators with variable passive impedance that was capable of executing drum rolls at speeds comparable to human drumming (40–160 milliseconds between bounces) (Hajian et al. 1997). It is also possible to utilise existing humanoid robots: for example, researchers at the Massachusetts Institute of Technology (MIT) used oscillators to drive either the wrist or the elbow of a humanoid robot (named Cog) to strike a drum membrane with a stick (Williamson 1999).

More recently, there have been attempts to create percussion robots that show musicianship and go beyond being simple controlled output devices. Haile is a robotic percussionist that listens to live human players, analyses perceptual aspects of their playing in real time and, informed by this analysis, can play along in a collaborative and improvisatory manner (Weinberg and Driscoll 2006). The robot is designed to combine the benefits of computational software, perceptual modelling and the principles of algorithmic music with the richness, visual interactivity and expression of acoustic playing. The combination of machine listening, improvisational algorithms and mechanical operations with human creativity and expression can lead to novel musical experiences.

Understanding what human musicians are playing is an important part of robotic musicianship. Using techniques from digital signal processing and machine learning, it is possible to extract information about what a performer is playing by processing audio signals captured by one or more microphones. Information about the playing gestures of a performer can also be extracted using digital sensors attached to acoustic instruments. Such acoustic instruments, retrofitted with digital sensing technology, are called hyper-instruments. An example of this approach is the Mahadevibot (shown in Figure 8.1): a percussion robot designed to improvise North Indian classical music by responding to a human playing an E-Sitar, a hyper-instrument (Kapur 2007).

Another interesting family of percussion music robots is made up of idiophones, which include pitched percussion instruments. Seattle artist Trimpin designed early examples of such mechanical instruments in the 1970s (see Figure 8.2). In the 'CeLL' project, Miles van Dorssen created a number of robotic percussion instruments, including an eight-octave xylophone, a bamboo rattle, a high hat, a gong, jingle bells and tubular bells (Dorssen n.d.). Shimon, developed at the Georgia Institute of Technology, is an anthropomorphic robot and associated improvisation system (Hoffman and Weinberg 2010a) whose gesture-based framework recognises that musicianship is not limited to the production of notes, but includes the intentional communication among musicians (Hoffman and Weinberg 2010b). Shimon can perform music on any standard acoustic marimba and provides visual cues by moving with the rhythm.
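Many of the percussion mechanisms surveyed above share a minimal control interface: a trigger with a strike strength. The sketch below is entirely hypothetical, not any of the systems cited: it assumes a microcontroller listening on a serial port (via the pyserial library) for a one-byte command whose value sets the solenoid pulse width, a common way of controlling strike loudness, and paces an accelerating roll within the 40–160 millisecond range mentioned above.

```python
import time
import serial  # pyserial

# Assumed protocol: one byte per strike, value = solenoid pulse width in
# milliseconds; a longer pulse drives the stick harder (a louder hit).
port = serial.Serial("/dev/ttyUSB0", 115200)   # hypothetical controller link

def strike(velocity):
    """velocity in 0.0-1.0, mapped to a 2-20 ms solenoid pulse."""
    pulse_ms = int(2 + 18 * max(0.0, min(1.0, velocity)))
    port.write(bytes([pulse_ms]))

# A simple accelerating roll, from 160 ms down to 40 ms between strikes.
interval = 0.16
while interval > 0.04:
    strike(0.7)
    time.sleep(interval)
    interval *= 0.93
```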
Figure 8.2 Early robotic idiophones by Trimpin.
Another important family of robotic instruments actuates strings; it can be divided into two categories based on the method of actuation: struck and bowed. In the early 1990s, Trimpin created a collection of 12 robotic guitar-like instruments in an installation called Kranktkontrol. Each guitar had a motor-powered plucking mechanism, while fretting (including a damper) was accomplished using solenoids. A robotic slide guitar named Aglaopheme was developed by N. A. Baginsky in the 1990s (n.d.). The six-stringed instrument has a set of solenoids for plucking and damping each string, and a motor that positions the bridge for pitch manipulation. In 1997, Sergi Jordà (2002) created Afasia, whose robotic electric guitar has a 72-finger left hand, with 12 hammer-fingers for each of the six strings. The Guitarbot by Eric Singer is a further example of a novel, electronically controlled acoustic instrument, built using old dot-matrix printer components (Singer et al. 2003). Probably the most well-known and large-scale robotic guitar project is a permanent installation: If VI was IX by Trimpin at the Experience Music Project in Seattle, consisting of over 500 guitars, many of them with self-tuning mechanisms, plucking actuators and pitch manipulation. Murphy et al. (2015) provide a thorough overview of the history and technologies behind robotic guitar-like instruments.

Bowed robotic instruments also have a long tradition. In 1920, C. V. Raman designed an automatic mechanical violin in order to conduct detailed studies of its acoustics and performance (Raman 1920). The Mubot, designed in Japan in 1989, was an anthropomorphic robot that could bow actual acoustic violins and cellos (Kajitani 1989; 1992). In the Afasia project, a violin robot with one string is actuated with an Ebow (a handheld electronic bow for guitars, www.ebow.com/home.php) and fretted with a glissando finger, controlled by a step motor, that can slide up and down the string (Jordà 2002).

Wind robotic music instruments are another category, with examples from both woodwinds and brass. They tend to be more expensive and complicated, as they require air pumps for blowing. Over a period of ten years, a team at Waseda University in Japan has been developing an anthropomorphic robot that can hold and play a real flute (Takanishi and Maeda 1998;
Solis et al. 2004). Robotic bagpipes have also been constructed. Ohta et al. (1993) designed one of the earliest examples, in which a custom-constructed chamber was fitted to traditional pipes and controlled by a belt-driven finger mechanism. The Afasia project also had a three-bagpipe robot on which each hole was closed by a dedicated finger controlled by a valve (Jordà 2002). McBlare is a more recent robotic bagpipe player that performs on a traditional set of bagpipes (Dannenberg et al. 2005). The chanter and automatic fingers are powered by a custom-made air compressor.

Large collections of robots that are all controlled centrally have also been designed, such as the Modbots by Eric Singer et al. (2004). These modular robots can be attached to almost any object or surface. They include beaters, wineglass-effect resonators, scrapers, pullers, shakers, bowers and pluckers.

Arguably the most interesting application of robotic music instruments is in ensembles that combine human and computer musicians, especially in a live, improvisatory context. In recent years, such ensembles have been created and used to explore new forms of live electronic music. These projects include the Machine Orchestra at the California Institute of the Arts (Kapur et al. 2011), the work of Gil Weinberg and his group at the Georgia Institute of Technology (Weinberg et al. 2009) and the Logos Foundation Robotic Ensemble (Maes et al. 2011).
Robotic musicianship

Work is now being done to leverage techniques from Music Information Retrieval (MIR) (Orio 2006) to extract information from audio signals for the development of basic musicianship abilities in a robotic context. In general, this is more difficult than dealing with symbolic information. MIR techniques cover a wide variety of tasks related to the automatic extraction of information from music in either symbolic or audio form. Techniques that have been explored in MIR include classification into genres/styles, mood and emotion detection, segmentation, detection of musical structure, automatic chord recognition, melody extraction, similarity retrieval, score following and automatic music transcription (one of the biggest challenges in the field and still far from being solved). The majority of existing work focuses on the analysis of audio recordings and digital music scores. More recent work deals with the challenging task of processing music performed live. For example, in a musical score each notated event is associated with an instrument; an audio signal, by contrast, first needs to be analysed using digital signal-processing techniques and machine learning to figure out even simple things, such as which instruments are present or what chord they are playing.

Some of the musicianship abilities described in this section are relatively simple and basic, while others are more complicated. Though some of these abilities can easily be managed by beginner musicians, examining them in the framework of robotic music making can be useful, because a better understanding of these abilities will open up new creative possibilities
for live performance in which not every single detail of the robotic performance will have to be predetermined, as has commonly been the case until now. Explaining the techniques of digital signal processing and machine learning that are needed to accomplish these tasks automatically is beyond the scope of this chapter. Instead, a high-level overview is provided, accompanied by references to literature in which the interested reader can find more information. Specifically, this chapter will focus on timbre identification and proprioception, timbre-adaptive velocity calibration, self-tuning, pattern and loop detection and general aspects of gesture control. One of the most important preparatory events for any musical performance is the sound check and final rehearsal that take place before a concert in a particular venue. During this time, the musicians set up their instruments, adjust the sound levels of each instrument and negotiate information specific to the performance, such as positioning, sequencing and cues. A similar activity takes place in performances involving robotic acoustic instruments, in which the robots are set up, their acoustic output is calibrated to the particular venue and mappings between controls and gestures are adjusted. This process is often tedious and typically requires extensive manual intervention. Using robotic musicianship techniques, it can be simplified and automated. Issues such as velocity calibration or control mapping, described below, can be quite difficult when dealing with real instruments. We believe that the ability of a robotic instrument to perceive at some level its own functioning is important in order to create robust, adaptive systems that do not require regular human intervention to function properly. In my research group, we refer to this ability as 'proprioception', which in its original meaning refers to the ability of an organism to perceive its own status, especially in terms of how the body is situated in space. A well-known example is the sobriety test in which the subject is required to touch his or her nose with eyes closed. That is normally an easy task, but not when proprioception is impaired due to the consumption of alcohol. In the typical architecture of interactive music robots, the control software receives symbolic messages based on what the other performers (robotic or human) are playing, as well as messages from some kind of score. It then sends control messages to the robot in order to trigger the actuators generating the acoustic sound. In some cases, the audio output of the other performers is automatically analysed to generate control messages. For example, audio beat tracking can be used to adapt to the tempo played. The ability to actually listen to the audio signal produced by the robot itself is also important. To summarise, one can view the described efforts to model musicianship abilities as attempts to create different aspects of a 'virtual' musical ear, while at the same time addressing practical issues that arise in live performance involving both humans and robots. The techniques described below are explained through example systems for illustration purposes, but the associated concepts can be applied more generally.
Instrument/timbre identification and proprioception
As noted above, the concept of proprioception refers to the sense of one's own body. In music performance (both human and robotic), the instrument can be considered part of the body. The intimate relationship to and complex control of sound afforded by instruments to a trained musician support this extended definition of proprioception. The main idea is a type of self-awareness in which robots adapt their behaviour based on understanding the connection between their actions and sound production through machine listening. Self-listening is a critical part of musicianship, as anyone who has struggled to play amplified music on a stage without a proper monitor setup has experienced. However, this ability is conspicuously absent in existing music robots. One could remove the acoustic drum actuated by a solenoid so that no sound is produced, and the robotic percussionist would continue 'blissfully' playing along. This work was motivated by practical problems experienced in a variety of performances involving robotic percussion instruments. Figure 8.3 shows our experimental setup, in which solenoid actuators are used to excite different types of frame drums.3 We used audio signal processing and machine-learning techniques to create robotic musical instruments that 'listen' to themselves using a single centrally located microphone. Setting up robotic instruments in different venues is a time-consuming and challenging process. One issue is mapping: which signal sent from the
Figure 8.3 Percussion robots with microphone for self-listening.
computer maps to which robotic instrument. As the number of drums grows, the management of the cables and connections between the controlling computer and the robotic instruments becomes more challenging. The system we propose correctly performs timbre classification of the incoming audio, automatically mapping solenoids in real time to the MIDI or OSC (Open Sound Control) note messages sent to the drum with the proper timbre. For example, rather than sending an arbitrary control message to actuator 40, the control message is addressed to the bass drum and will be routed to the correct actuator by simply 'listening' to what each actuator is playing during a sound-check stage. Actuators can be moved or replaced easily, even during the performance, without changes in the control software. The same approach is used to detect broken or malfunctioning actuators that do not produce sound. To perform automatic mapping, it is possible to utilise audio feature extraction followed by standard supervised classification. For example, the well-known Mel-Frequency Cepstral Coefficients (MFCC) (Li et al. 2011), which capture different characteristics of the timbre, can be used as audio features for this purpose. In supervised classification, a training set of labelled drum sounds is used to create an algorithm (a classifier) that takes audio features (numerical values characterising the associated audio) as input and predicts the type of drum sound for an unlabelled audio recording. Experiments show classification accuracies between 90 and 100 percent for six different drum sounds. In this approach, each solenoid is actuated in turn, the generated audio signal is captured by the microphone, analysed with audio feature extraction and the associated sound classified. Rather than using absolute solenoid controller numbers, the controlling patch refers to the instruments, and the correct mapping is inferred by the classification process (Ness et al. 2011).
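For illustration, the following is a minimal sketch of this feature-extraction-plus-classification step, assuming Python with the librosa and scikit-learn libraries; the file names, drum labels and choice of a nearest-neighbour classifier are illustrative assumptions, not the exact implementation of Ness et al. (2011).

```python
# A minimal sketch of the timbre-identification step: summarise each
# recorded drum strike as an MFCC vector, then classify with a
# supervised learner. File names and drum labels are placeholders.
import librosa
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path):
    """Summarise a recorded drum strike as the mean MFCC vector."""
    y, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # shape (13, frames)
    return mfcc.mean(axis=1)                            # shape (13,)

# Training set: labelled recordings captured during the sound check,
# one strike per drum type (paths are hypothetical).
training = [
    ("bass_drum.wav", "bass"),
    ("frame_drum.wav", "frame"),
    ("snare.wav", "snare"),
]
X = np.array([mfcc_features(path) for path, _ in training])
y = [label for _, label in training]
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

# At mapping time: actuate each solenoid in turn, record the result
# and classify it; the control patch can then address drums by timbre
# ("bass") instead of by absolute actuator number.
print(clf.predict([mfcc_features("unknown_actuator.wav")]))
```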
Timbre-adaptive velocity calibration
When working with mechanical instruments, there is a great deal of nonlinearity and physical complexity that makes the situation fundamentally different from working with electronic sound, which is entirely 'virtual' (or at least not physical) until it comes out of the speakers. The moving parts of the actuators have momentum, and changes of direction are not instantaneous. Gravity may also play a part, and friction must be overcome. Frequently, actuators are powered separately, which can result in inconsistencies in the voltage. The compositional process, rehearsal and performance of The Space between Us by David A. Jaffe, in which Andrew Schloss performed as a soloist on robotic percussion, involved hand-calibrating every note of the robotic chimes, xylophone and glockenspiel.4 This required 18 + 23 + 35 = 76 separate hand calibrations and took up valuable rehearsal time. Motivated by this experience, we developed a method for velocity calibration, that is, for determining what voltage should be sent to a solenoid to generate a desired volume and timbre from an instrument.
Due to the mechanical properties of solenoids and drums, a small movement in the relative position of either can lead to a significant change in sound output. A dramatic example of this occurs when a drum moves out of place during performance, and the voltage that at the start of the performance had allowed the drum to be hit now fails to make the drum sound. Depending on the musical context, this can be disastrous in a performance. Good velocity scaling is essential in enabling a percussion instrument to give a naturally graduated response to subtle changes in gesture; e.g. a slight increase in the strength (velocity) of a stroke should not result in a sudden increase in the loudness of the sound. Human drummers use their ears to adjust to the relative response of the drums they are using. Our approach can be viewed as an attempt to create a 'virtual' ear for this specific purpose. The acoustic response of a drum in terms of both perceived loudness and timbral quality is nonlinear with respect to linear increases in voltage as well as to the distance of the solenoid from the vibrating surface. In most existing systems, calibration is typically performed manually by listening to the output and adjusting the mapping of input velocities to voltage until smooth changes in loudness and timbre are heard. In this section, we describe how to derive an automatic data-driven mapping that is specific to the particular drum. Our first objective is to achieve a linear increase in loudness with increasing MIDI velocity for a given fixed distance between a beater and the drumhead. However, in practice, the beater may be mounted on a stand and placed next to the drumhead mounted on a different stand. Thus the distance between the beater and the drumhead will vary depending on the setup and may even change during a performance. A second objective is to achieve a similar loudness for the same MIDI velocity curve (corresponding to voltage) over a range of distances between the beater and drumhead. To achieve these objectives we collected audio for all velocity values and three distance configurations (near – 1 cm, medium – 2 cm, far – 3 cm). The loudness and timbre variation were captured by computing MFCC for each strike. More specifically, for each velocity value and a particular distance, we obtained a vector of MFCC values. The frequency of beating was kept constant at eight strikes per second. The first MFCC coefficient (MFCC0) at the time of onset is used to approximate loudness. Plots of MFCC0 for the distance configurations are shown in Figure 8.4a. In order to capture some of the timbral variation in addition to the loudness variation, the MFCC vectors are projected to a single dimension (the first principal component) using Principal Component Analysis (PCA) (Jolliffe 2002). As can be seen in Figure 8.4c, the PCA0 values closely follow the loudness curve. This is expected, as loudness is the primary characteristic that changes with increasing velocity. However, some information about timbre is also present, as can be seen in the 'near' plot, which has higher variance in PCA0 than in MFCC0.
Figure 8.4 Velocity calibration based on loudness and timbre: (a) MFCC values, (b) MFCC inverse mapping.
Figure 8.4 (c) PCA values, (d) calibrated PCA.
Our goal is to obtain a mapping from user-input calibrated velocity to output driving velocity such that linear changes in input (MIDI velocity) will yield approximately linear changes in the perceived loudness and timbre as expressed in PCA0. We utilise data from all three distance configurations for the PCA computation to allow the timbre space to be shared. Even though we obtain separate calibration mappings for each distance configuration, in each of these three mappings the same calibrated input value will generate the same output in terms of loudness and timbre, independent of distance. The technical details of how this inverse mapping is obtained can be found in Ness et al. (2011). Figures 8.4b and 8.4d show how changing the calibrated input velocity linearly results in a linearised progression through the timbre space (PCA0) and loudness (MFCC0). These graphs show the results of this calibration, but it is also possible to fit lines to them. In either case (direct calculated mapping or line fit), the calibrated output changes sound more smoothly than the original output.5
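As a rough illustration of this data-driven inverse mapping, the following sketch assumes the per-strike MFCC vectors for one distance configuration have already been measured; the placeholder data and the simple monotonic-interpolation inversion stand in for the actual procedure detailed in Ness et al. (2011).

```python
# A minimal sketch of the inverse-mapping idea: project each strike
# onto the first principal component (PCA0), then invert the
# velocity-to-PCA0 curve so that linear input yields linear PCA0.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
velocities = np.arange(128)
# Placeholder measurements: 128 strikes x 13 MFCCs; in practice these
# come from the microphone recordings of the calibration strikes.
mfcc = np.cumsum(rng.normal(0.2, 0.1, size=(128, 13)), axis=0)

# PCA0 tracks loudness plus some timbral variation.
pca0 = PCA(n_components=1).fit_transform(mfcc).ravel()

# Force monotonicity so the curve can be inverted, then build the
# calibration: for each linearly spaced PCA0 target, look up the
# driving velocity that produces it.
pca0_mono = np.maximum.accumulate(pca0)
targets = np.linspace(pca0_mono[0], pca0_mono[-1], 128)
calibrated_to_driving = np.interp(targets, pca0_mono, velocities)

# calibrated_to_driving[v] is the raw velocity to send so that input
# velocity v yields an approximately linear change in PCA0.
print(np.round(calibrated_to_driving[:8]).astype(int))
```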
Self-tuning
The ritual of tuning an instrument is familiar to concertgoers. For several minutes the musicians alternate between playing and adjusting the pitch until they are satisfied that they are in tune with themselves and the other musicians in the ensemble. With a robotic music instrument, automatic tuning requires two components: (1) a pitch-detection method, in either software or hardware, and (2) an electromechanical means to adjust the produced pitch. For example, a microphone can be used to digitise the sound output. Then automatic pitch-detection algorithms such as the YIN algorithm (De Cheveigné and Kawahara 2002) or the SWIPE algorithm (Camacho and Harris 2008) can be used to measure pitch. An alternative would be to use embedded chips that perform frequency analysis. As an example of pitch adjustment for guitar-like string instruments, stepper motors can be used to tighten the tuning pegs (Trail et al. 2013). Self-tuning mechanisms are used in the guitars of the massive permanent installation If VI was IX by Trimpin at the Experience Music Project (EMP) Museum in Seattle.6
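A minimal sketch of such a self-tuning loop might look as follows, assuming Python with librosa's YIN implementation; the steps-per-cent constant and the stepper-motor interface are hypothetical placeholders, not the mechanism used by Trail et al. (2013).

```python
# A minimal self-tuning sketch: estimate pitch with YIN and convert
# the deviation from a target frequency into stepper-motor steps.
import numpy as np
import librosa

STEPS_PER_CENT = 0.5   # assumed gear ratio of the peg mechanism (hypothetical)

def tuning_correction(path, target_hz):
    """Return the number of motor steps needed to reach target_hz."""
    y, sr = librosa.load(path, sr=None, mono=True)
    f0 = librosa.yin(y, fmin=60, fmax=1000, sr=sr)
    measured = np.median(f0)                       # robust pitch estimate
    cents = 1200.0 * np.log2(measured / target_hz) # +ve means string is sharp
    return int(round(-cents * STEPS_PER_CENT))     # sign convention assumed

# e.g. retune an open A string (110 Hz):
steps = tuning_correction("string_recording.wav", target_hz=110.0)
print(f"advance stepper by {steps} steps")
```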
Rhythm and melodic pattern detection for cueing
Collaborating musicians frequently utilise high-level cues to communicate with each other, especially in improvisations. For example, a jazz ensemble might agree to switch to a different section/rhythm when the saxophone player plays a particular melodic pattern during a solo. This type of communication through high-level cues is difficult to achieve when performing with robotic music instruments. In performances involving my research group, we have utilised a variety of less flexible communication strategies including
preprogrammed output (the simplest), direct mapping of sensors on a performer to robotic actions and indirect mapping through automatic beat tracking. We have conducted experiments showing that high-level gestures can be correctly identified, in a way that is robust to changes in tempo and pitch contour, and used as cues. Our system is flexible and can accept input from a wide variety of input devices. We have shown experimental results for gesture recognition on the radiodrum (Mathews and Schloss 1989), as well as for melodic patterns played on a vibraphone (Ness et al. 2011). Considerable work has been done in the area of dynamic time warping (DTW) for gesture recognition (Akl and Valaee 2010; Liu et al. 2009). For the first experiment, we used the most recent iteration of the radiodrum system, a new instrument designed by Bob Boie that dramatically outperforms the original radiodrum in terms of both data rate and accuracy (Mathews and Schloss 1989). We instructed a professional musician to generate eight different instances of each of the following five gesture types: an open stroke roll, a sweep of the stick through the air, a pinching gesture (similar to the pinch-to-zoom metaphor on touchscreens), a circle in the air and a buzz roll. We collected (x, y, z) triplets of data from the sensor at a sampling rate of 44,100 Hz and then downsampled these data to 120 Hz, which allowed us to compare gestures that were on average one or two seconds in length while remaining within the memory limits of our computer system. We empirically determined that this rate captured most of the information relevant to gesture recognition. For the second experiment, different melodic patterns were played on a vibraphone at varying tempi and transcribed into a symbolic representation using automatic pitch detection. From these data, the similarity matrix of each gesture to all other gestures was computed. DTW (Sakoe and Chiba 1978) was used to compute an alignment score for each pair of gestures that corresponds to how similar they are. For each query gesture, we created a ranked list based on the alignment score and calculated the average precision for each gesture. Precision at 1 refers to how many times the gesture returned as closest to the query belongs to the same type. As can be seen in Figure 8.5, gesture identification is quite reliable in both cases.

Radiodrum gesture    AP       P@1
roll                 0.866    1.0
sweep                0.980    1.0
pinch                0.837    1.0
circle               1.000    1.0
buzz                 0.978    1.0
MAP                  0.931    1.0

Vibraphone gesture   AP       P@1
pattern1             0.914    1.0
pattern2             0.812    0.9
pattern3             0.771    0.9
pattern4             0.882    1.0
pattern5             0.616    0.9
MAP                  0.799    0.94

Figure 8.5 Pattern recognition – average precision (AP) and precision at 1 (P@1) for different gestures on the radiodrum and vibraphone. The mean average precisions (MAP) are 0.931 and 0.799.
A similar approach can be used to recognise percussive patterns from a user-defined set of patterns in order to initiate computer-controlled musical events. Each pattern can be modelled as a sequence of vectors of numbers over time. If a pattern is played exactly the same way but slower, it can be matched to the original sequence through interpolation. To deal with possible differences, cross-correlation of the two sequences (interpolated so that they have the same length) can be used to calculate their similarity. Alternatively, a single vector of numbers characterising the overall pattern can be calculated. Compared to these simpler approaches, DTW has the advantage of being able to model nonlinear variations in timing: for example, when part of the pattern is slightly faster and part of it is slightly slower. Typically, in live performance of electroacoustic music, the performers (or the composer) have to rely on external tactile controls such as MIDI sliders and knobs to trigger and control different algorithmic processes. In contrast, the proposed gesture-detection ability enables the performer to influence the computer/robot part of the ensemble with musical cues while playing their acoustic instruments. When used for cueing, a set of previously recorded patterns is used as reference templates. When a new gesture is detected, it is compared for similarity with the prerecorded templates, and if a match is found, a control signal is generated that can be used to trigger new processes.
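To make the matching step concrete, the following self-contained sketch computes a DTW alignment cost between an incoming gesture (a sequence of sensor vectors) and prerecorded templates; the distance measure, normalisation and threshold are illustrative choices, not the exact configuration used in the experiments described above.

```python
# A minimal DTW sketch for matching an incoming gesture against
# prerecorded cue templates. Template data and threshold are toy values.
import numpy as np

def dtw_cost(a, b):
    """DTW alignment cost between sequences of feature vectors
    a (n, d) and b (m, d); lower means more similar."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)          # normalise by a path-length bound

def match_cue(gesture, templates, threshold=1.0):
    """Return the name of the best-matching template, or None."""
    scores = {name: dtw_cost(gesture, t) for name, t in templates.items()}
    best = min(scores, key=scores.get)
    return best if scores[best] < threshold else None

# Toy example: a 'circle' template matched against a slower replay.
t = np.linspace(0, 2 * np.pi, 120)
circle = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
slow = circle[::2]                    # same shape in space, half the rate
print(match_cue(slow, {"circle": circle}))   # -> "circle"
```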
Gesture control
Robots can be controlled in many different ways. The most common are (1) a predefined symbolic score or (2) a direct mapping from performer data to robotic actions. The robotic musicianship techniques described in this chapter enable more musical control. The development of novel, commercially available sensing technologies opens up another exciting possibility in which the gestures of a performer are used for control. These can be free gestures somewhat similar to what a conductor does, or they can be constrained and adapted to the playing of a specific acoustic instrument. New sensing devices such as the Microsoft Kinect enable new possibilities of interaction. The Kinect uses structured infrared light and a camera that can detect not only x and y coordinates but also depth. The resulting data can then be used to perform human skeleton tracking. This tracking is used to produce estimated positions of the performer's limbs, which can then be mapped to musical parameters. We have used the tracking data to create free body control gestures, as well as to integrate the sensing with the playing of an acoustic instrument, in our case, a vibraphone or marimba. For example, each Cartesian axis of motion can be mapped to filter parameters that modify live and sampled audio from the vibraphone, creating a control volume accessed through the mallets in the space over the keys (Odowichuk et al. 2011). Figure 8.6 shows how this mapping works. In this way, the performer does not have to stop holding the mallets and can seamlessly play the instrument and provide control information to the computer/robotic processes.
Figure 8.6 Kinect sensing of free-space mallet gestures above a vibraphone.
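The sketch below illustrates one way such an axis-to-parameter mapping could be wired up, assuming skeleton-tracking coordinates are already available and that control messages are sent over OSC with the python-osc library; the ranges, OSC addresses and receiving sound engine are assumptions, not the actual setup of Odowichuk et al. (2011).

```python
# A minimal sketch of mapping tracked hand position to filter
# parameters via OSC. Addresses and ranges are hypothetical.
import numpy as np
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)   # sound engine address (assumed)

def scale(v, lo, hi, out_lo, out_hi):
    """Linearly map v from [lo, hi] to [out_lo, out_hi], clamped."""
    v = np.clip((v - lo) / (hi - lo), 0.0, 1.0)
    return float(out_lo + v * (out_hi - out_lo))

def on_hand_position(x, y, z):
    # x, y, z in metres relative to the sensor (from skeleton tracking);
    # each Cartesian axis drives a different audio parameter.
    client.send_message("/filter/cutoff", scale(y, 0.5, 2.0, 200.0, 8000.0))
    client.send_message("/filter/resonance", scale(x, -1.0, 1.0, 0.1, 0.9))
    client.send_message("/delay/mix", scale(z, 0.8, 3.0, 0.0, 1.0))

on_hand_position(0.2, 1.4, 1.5)   # one tracked frame
```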
A new extension of this work introduces a form of augmented reality to the vibraphone. Using the Kinect web camera and computer vision libraries, we are able to detect the position of the percussionist’s mallet tips. This tracking is used to augment each bar of the vibraphone with the functionality of a fader (Trail et al. 2012). Using this technique on a 37-bar vibraphone, it is possible to provide the functionality of 37 virtual faders that may be used as traditional controllers. This augmentation, illustrated in Figure 8.7, provides an intuitive platform that allows the performer to control a large number of sliders without turning away from the instrument. Currently, we track mallet tips based on their colour. In order to detect the positions of the mallet tips, the colour image from the video camera is transformed into hue, saturation and value (brightness) (HSV). Each of these signals is thresholded to filter out unwanted colours. The resulting signals are combined, and a contour detection algorithm is executed. This process yields bounding rectangles that identify the mallet tips. The centroid of the bounding rectangle is assumed to be the position of the mallets in the virtual representation. Determining the position of the mallet tips in terms of the vibraphone bars requires the creation of a virtual representation of the instrument. This representation was created using explicit measurements of the vibraphone bars. These measurements were used to create a model consisting of the set of bars, each with a corresponding placement, size, pitch and an associated control
Figure 8.7 Virtual vibraphone bar faders.
value. The algorithm supplies the mallet positions relative to the camera, but we actually want the positions in the virtual space. Effectively mapping the tracked mallets involves several transformations and requires a calibration phase in which the position of the vibraphone with respect to the camera is also recorded. Once we have obtained the position of the mallets in the same space as the vibraphone, the system yields information on which bar is currently covered by the mallet and a fader value associated with that bar. A delay-based activation time was added to the algorithm, so that the performer must pause on each bar for a predefined time before the sliders start to change. Figure 8.7 shows the virtual bar faders and the corresponding actual vibraphone bars as seen through the Kinect web camera.
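The colour-tracking pipeline just described (HSV conversion, thresholding, contour detection, bounding-rectangle centroid) can be sketched with OpenCV roughly as follows; the HSV range for the mallet colour and the noise threshold are placeholders that would be set during calibration.

```python
# A minimal OpenCV sketch of colour-based mallet-tip tracking.
import cv2
import numpy as np

LOWER = np.array([35, 80, 80])      # assumed HSV range for the mallet colour
UPPER = np.array([85, 255, 255])

def mallet_positions(frame_bgr):
    """Return pixel coordinates of detected mallet tips in a frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)           # keep only mallet colour
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    positions = []
    for c in contours:
        if cv2.contourArea(c) < 50:                 # ignore speckle noise
            continue
        x, y, w, h = cv2.boundingRect(c)
        positions.append((x + w // 2, y + h // 2))  # rectangle centroid
    return positions

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(mallet_positions(frame))
cap.release()
```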
Automatic loop detection
In live electronic music, and especially in music with strong repeating rhythms, looping mechanisms are common. The performer indicates the start and end of a loop by pressing a button. The sound between the beginning and end of the loop is captured and repeated. This opens up the possibility of creating multiple layers through overdubbing. Pressing a button at the start and end of the loop sounds simple, but as any practitioner can attest, it is not trivial, and if it is not done at exactly the right time the loop does not work. It is also a distraction from the actual playing. Using signal-processing methods, it is possible to avoid this step and detect the loop points automatically. This can be viewed as a type of musicianship, as recognising the repetition of a pattern, whether melodic or rhythmic, is something most listeners can do. This ability is related to the pattern-detection process described above but focuses on repetition and continuation. Our method is capable of forecasting the continuation of a given data array x[n], where x[n] is the value of the n-th reading of a certain sensor (Trail et al. 2012). The forecasting algorithm does not require any prior training on templates, which means that the musician is free to improvise, generate creative soundscapes and define the appropriate gestures
while performing. A gesture is identified simply by being repeated, without requiring a preset looping duration. When forecasting, the system provides data intended to be equivalent to the sensor data that would result if the musician continued repeating the previous gesture. This allows the performer to continue playing while still getting the soundscape variations derived from the sensor data. The forecasting method is based on evaluating which time lag, in samples, is most likely to be the fundamental period of the received data. This means that although the motion may be freely composed and improvised, only repetitive motions can be predicted by the system.
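A minimal sketch of this period-estimation idea follows, using autocorrelation to find the most likely fundamental period of the sensor stream and then repeating the last period as the forecast; this is a simplified stand-in for the method of Trail et al. (2012), with illustrative window sizes.

```python
# A minimal loop-detection sketch: estimate the fundamental period of
# a repeated gesture from sensor data, then forecast its continuation.
import numpy as np

def estimate_period(x, min_lag=10, max_lag=500):
    """Return the lag (in samples) most likely to be the fundamental
    period of the sensor sequence x."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]   # non-negative lags
    return min_lag + int(np.argmax(ac[min_lag:max_lag]))

def forecast(x, n_future):
    """Continue x for n_future samples by repeating its last period."""
    p = estimate_period(x)
    last = np.asarray(x[-p:])
    reps = int(np.ceil(n_future / p))
    return np.tile(last, reps)[:n_future]

# Toy example: a repeated gesture with period 120 samples.
t = np.arange(1200)
sensor = np.sin(2 * np.pi * t / 120) + 0.05 * np.random.randn(1200)
print(estimate_period(sensor))       # approximately 120
print(forecast(sensor, 60).shape)    # (60,)
```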
A case study of robotic musicianship in live performance
These basic robotic musicianship abilities were employed in the creation of two compositions: Red + Blue = Purple for tenor saxophone and automated piano (2012) by this author and Prepared Dimensions for performer and robotic piano (2012) by Gabrielle Odowichuk and David Parfit. Both pieces were composed for the robotic music installation Canon X + 4:33 = 100, designed and built by Trimpin. The robotic installation was on display in the Open Space Gallery in Victoria, performing preset music pieces. In addition, several composers were invited to write pieces for the installation that were then performed in concert. These pieces employed the more traditional approaches
Figure 8.8 Trimpin next to one of the robotically actuated piano boards developed for Canon X + 4:33 = 100.
such as a specific static symbolic score or direct mapping of the performer's gestures to robotic actions. Combining ancient concepts and methods with the latest in digital technology, Trimpin gave new life to an array of abandoned pianos by constructing visually dynamic and aurally stunning acoustic sculptures and automatons out of their carcasses. A wide variety of actuators was utilised, ranging from striking solenoids to electromagnets that can cause the strings to vibrate without an attack. The actuators replace the hammer mechanism and associated piano keys, which were removed from the pianos. Figure 8.8 shows Trimpin next to one of these pianos. As noted above, Red + Blue = Purple was created by this author, who also played the tenor saxophone part. The work has a simple ABA form. Part A consists primarily of sustained tones resonating with chords from the red piano, and part B features more rhythmic, percussive playing on the saxophone and an evolving gamelan-like texture created by the blue piano. The piece utilised some of the robotic musicianship capabilities described in this chapter. In performance, much of the piece, and especially the saxophone part, was improvised. Three different mappings (corresponding to compositional units of the piece) specified how the robotically actuated pianos would respond to the saxophone. My goal was not to use a prepared score. I also wanted to achieve a more subtle relationship between what the saxophone played and how the robotically actuated pianos responded than could be obtained through direct mapping. In addition, I did not want to have any type of external MIDI controller. Therefore, all triggering and communication had to be accomplished by analysing the sound of the saxophone in real time. I wanted to give the impression that the saxophone, which starts by playing solo, simply wakes up the pianos and that they respond musically to it, echoing and resonating with the playing. The piece utilises two of the pianos: one painted red and one blue. In the red piano, the strings are vibrated by electromagnets similar to the Ebow device used in electric guitars. The resulting sound is very smooth with no attack, creating long, resonant, sustaining textures. The blue piano contains various solenoids and scrapers that hit the strings percussively, creating a gamelan-like texture. The hardware and software were set up to have individual MIDI notes corresponding to each string, and MIDI velocity was used to control the amplitude of the vibration. Through experimentation, I discovered that the same velocity values for different strings produced different amounts of perceived loudness and that lower notes took longer to start vibrating. For the first problem, a variant of the automatic dynamic velocity/timbre calibration was used, so that sending the same velocity value to multiple strings would result in a balanced, blended chord sound. The sound of the saxophone was captured by a microphone, and automatic pitch detection was performed on it. The automatically detected pitch was used to guide automatically generated chords that were consonant and resonated during the part of the piece that consisted of slow, long notes. The accompanying chords were intentionally designed to take into account the second problem of the different delays in
triggering the notes, which was dealt with compositionally. The automatically detected pitch output from the saxophone was also used to trigger different sections/mappings of the piece by melodic and rhythmic pattern detection. For example, the transitions from part A to part B and from part B back to part A were accomplished that way. The rhythmic component of part B consisted of various rhythmic cells that were selected by matching their rhythm to the rhythm of the saxophone part, which introduced a certain degree of randomness. It is hard to know to what extent the audience perceived the use of these robotic musicianship techniques. From a performer/composer perspective, I found them very useful for creating a piece that was to a large extent improvised. At the same time, this piece has a more complex musical behaviour than direct mapping, which in its simplicity can feel constraining and even comical. In Prepared Dimensions by Odowichuk and Parfit, the free hand and body gestures of a performer (Odowichuk) are captured by a Microsoft Kinect device that uses structured infrared light to perform skeleton tracking. Different aspects of the complex, evolving texture created by the robotically actuated piano were controlled by these gestures. To the audience it looked as if Odowichuk were moving in tight synchronisation with the music, which most of them thought was precomposed. In fact, she controlled the production of the resulting music. Although this use of robotic musicianship (namely gesture control) was not apparent to the audience, it creates a completely different feeling of freedom for the human performer, enabling improvisation while retaining control. The use of free body gestures and the distributed acoustic output of the pianos create a unique performance experience that is very different from typical live electronic music, in which the sound emanates from loudspeakers controlled by the musicians performing on laptops and keyboards.
Conclusions and future work
Digitally controlled devices that generate sound acoustically are called robotic music instruments or music robots. Although they are not widely used, they provide a fascinating alternative to live electronic music in which all the computer-controlled and/or generated sounds come from loudspeakers. A wide variety of such robotic music instruments has been proposed and integrated into ensembles combining human and robotic performers. However, in most cases their ability to listen to the sound they and the surrounding musicians produce is limited or nonexistent. Robotic musicianship refers to techniques that can enable music robots to listen 'musically' and utilise the extracted information to adapt how they are playing. I have described basic robotic musicianship tasks that we have found useful in our work with music robots in live performance. These tasks are instrument identification for automatic mapping, timbre-based dynamics calibration, melodic and rhythmic pattern detection for cueing and gesture control. In order to
perform these tasks automatically, advanced techniques from digital signal processing and machine learning are utilised. It is humbling to realise that all of these tasks are accomplished effortlessly by any young child learning a musical instrument. Traditionally, performances involving robotic music instruments are completely specified and scripted through some kind of symbolic score or directly map performer information to robotic actions. Robotic musicianship enables new possibilities for live performances involving both humans and music robots. In this chapter, the use of these techniques for the creation of two pieces for Trimpin's robotic piano installation Canon X + 4:33 = 100 was described as an example of the potential of robotic musicianship for live performance. Obviously, this is just the beginning, and we are still a long way from the complexity, expressivity and sensitivity human musicians demonstrate when performing live. Advances in music information retrieval will enable the extraction of more sophisticated types of musical information from audio signals than the ones described in this chapter. Work in automatic rhythm analysis, chord recognition, structure detection, instrument identification and sound source separation is ongoing and could be incorporated in a robotic context. Another interesting direction for future work is the development of techniques tailored to the playing technique of the particular instrument the robot is modelling. For example, adjusting the pitch on a robotic guitar is relatively simple compared to a violin, where complex interactions between bow force, velocity and position must be coordinated. There is a long tradition of work in machine musicianship using symbolic data that can possibly be leveraged by better automatic transcription (Klapuri and Davy 2006) and score alignment techniques (Dannenberg and Raphael 2006). It is my hope that in the future music robots will become as common as loudspeakers and mixing boards in live electronic music.
Notes
1 The author would like to thank the Natural Sciences and Engineering Research Council (NSERC) and Social Sciences and Humanities Research Council (SSHRC) of Canada for their financial support. Ajay Kapur, Adam Tindale, Steven Ness, Gabrielle Odowichuk, Tiago Tavares, Shawn Trail, Peter Driessen, Andrew Schloss and Trimpin have contributed in many ways to the development of the ideas and associated robot implementations described in this chapter.
2 Internationally celebrated sound sculptor/composer/inventor Trimpin extended his ongoing exploration of sound, vision and movement in an interactive installation at Open Space (Victoria, BC, Canada), which ran from 16 March to 28 April 2012 (www.openspace.ca/Trimpin_2012).
3 The solenoid actuators were supplied by Karmetik LLC (http://karmetik.com).
4 Personal communication with Andrew Schloss, March 2011.
5 Murphy et al. (2012) describe an alternative approach to velocity calibration for solenoids based on search and optimisation.
6 More recently, Gibson has included robotic self-tuning systems in a new line of electric guitars (http://en.wikipedia.org/wiki/Gibson_Robot_Guitar).
Part III
Study
9 Authorship and performance tradition in the age of technology (with examples from the performance history of works by Luigi Nono, Luciano Berio and Karlheinz Stockhausen)1
Angela Ida De Benedictis
A famous cartoon, published in the American trade journal Stereo Review in 1980, shows a man comfortably seated in front of a well-equipped stereo system about to listen in amazement to 'electronic music of Stockhausen, played on the original transistors, capacitors, and potentiometers' (see Figure 9.1). Though meant to provoke a smile, it also gives us pause for some serious thoughts. Here the notion of an original or 'period' instrument
Figure 9.1 Charles Rodrigues, ‘And now, electronic music of Stockhausen…’, Stereo Review (November 1980). Source: Reproduced with the kind permission of the publishers.
is transferred from early music to electronic equipment. In addition to the instruments used to produce and capture the sounds, the reference to an electronic repertoire seems designed to conjure up an authorial and authentic performance tradition in the reader's mind. That Stockhausen happens to be the butt of this satire is unsurprising: after all, it was he who coined the term 'originelle Technik' for the analogue and predigital sound generators he employed in his own music (Stockhausen 1998, 576). The same Stockhausen, as we all know, gave performers and musicologists a multitude of essays designed to precisely and definitively codify and theoretically underpin a correct and unequivocal performance tradition for his own music. These introductory thoughts define the framework of my chapter: I will deal with a few problematic areas in the performance tradition of certain musical repertoires from the latter half of the twentieth century, including those works realised with the aid of live electronics, i.e. with the transformation of acoustic data in real time. Before delving into specific cases, I would like to give some thought to the concept of 'authorship'. Here I will address questions that have long been debated in literature and philosophy. I will also build on Hermann Danuser's discussion about authorial intention and the authorial performance tradition (Danuser 1997, 27–34). When applied to a literary text, authorship is a term that, to quote Michel Foucault, refers to the 'relationship that holds between an author and a text, the manner in which a text apparently points to this figure who is outside and precedes it' ([1969] 1977, 115; italics added). Here I especially single out the word 'text' because it will become the crux of our line of questioning. Applied to performance practice, 'authorship' must necessarily refer to the relationship that holds between an author and 'a performance', that is, to the manner in which 'a performance' points to this figure. Or to put it another way, the manner in which a performance tradition can explicitly and faithfully reflect the 'creative and authoritative grasp of the author'.2 The transferral of this term to music is, however, fraught with dangers and inevitably leads us back to semiotics: all cultured musical traditions draw on 'signs' that must be interpreted on the basis of a 'text', whether oral, written or electronic. This text in turn leads us back to the 'creative and authoritative grasp of the author', who, though frequently pronounced dead, continues to flourish in the pink of health in the repertoire and the examples I intend to discuss. What I wish to emphasise is that, in speaking of an 'authorial performance tradition' and going further to ask whether 'authorial' is always synonymous with 'authentic' in musical performance, we could end up discussing a dual authorship for one and the same work at a given moment: one for the text and another for the performance. Often enough we are confronted with cases in which authors seem virtually at loggerheads with themselves – namely, where their instructions for a particular performance are obviously self-contradictory or remote from what they have set down in their own texts. We have been assured from various quarters that the 'authorial intention remains an unsolved problem in literary [and musical] studies' (Farrell 2005, 98).3
Equally nebulous is everything associated with 'performative freedom'.4 Although 'authoriality' and 'freedom of interpretation in performance' could be seen as oppositional concepts, an analysis of performance traditions in the latter half of the twentieth century suggests that the two terms cannot be treated simply as antitheses: as we shall see, it is frequently the unquestionably authorial performances – those that establish or prolong a tradition – that prove to be 'freer' than their nonauthorial counterparts. The disclaimer commonly raised by performers and exegetes – that 'we cannot prove […] that an interpretation is consistent or inconsistent with an author's intentions' (Farrell 2005, 100) – dilutes not so much the concept of authorship as that of freedom, which alone certifies whether a performance is obviously wrongheaded or 'unauthentic'. These observations ultimately lead us to reconsider the hermeneutic circle of author, text and performance. If this model proves only partly valid for the classical-romantic repertoire (Della Seta 2010), it ignites a full-fledged crisis when applied to certain forms of musical expression in the latter half of the twentieth century. The principal cause of this crisis is not unlike the one that undermined the validity of the 'author-text-work' trichotomy in the preceding century, and it is related to the fact that a performance practice or tradition is never unalterable. However, in the twentieth century, the gradual (and inexorable) diachronous motion associated with the flow of history (with its many changes of stylistic and aesthetic paradigms) intersects in addition with a synchronous motion caused by the local variants to which a work is soon subjected on the performance level by its own author. Hermann Danuser has already established that the 'auktoriale Aufführungstradition' (authorial performance tradition) is an aporetische Kategorie, a questionable concept (Danuser 1997, 30). For my part, I would like to propose that it is more akin to a 'utopian concept' and to pursue this idea through an examination of several examples from the music of Luciano Berio, Luigi Nono and Karlheinz Stockhausen, composers whose performance practice was at once characteristic and, in different ways, problematical. When we think of these three composers, their artistic careers and their greater or lesser interest in giving their works a performance tradition capable of being considered authorial, we immediately note substantial differences in their particular relation towards codifying their own musical thought. By this I do not mean differences of a technical or aesthetic nature in matters of musical composition, but rather of technology and the practical procedures they employed to capture their own works and to hand them down to succeeding generations and thus to posterity. In the initial phases of their careers, they seem to have had no noticeable differences in this respect: all three, without exception, entrusted their musical ideas to performers whose concern it was to translate the author's intentions, much as in the classical-romantic repertoire. Until the mid-1950s, composing generally meant setting down a written text – a score – that, unlike the explicit or implicit poetics pervading it, was invariably committed to paper. It was only when the composers encountered technology while working
in electronic studios with magnetic tape (and later with live electronics and computer programs) that changes began to occur, step by step, in their creative practice and in the relevant procedures needed to capture and transmit their thoughts. One of the most obvious effects that came about when technology impinged on twentieth-century music was the redefinition of the relation of author, work and performance. The working methods typical of composing in the electronic studio, based on repeated listenings and continual reworking of the sonic results, were evidently also transferred to the processes of performance. Since the 1960s, it has become apparent that composers such as Nono, Berio and Stockhausen not infrequently viewed a performance as a step in the 'exploration' (Nono), 'perfectioning' (Berio) or 'normative systematisation' (Stockhausen) of a text (that is, a work) still in search of its definitive form and codification. Nonetheless, each of these steps is 'authorial' and contributes to the creation of a living and vibrant performance tradition. In schematic terms, one might say that, in creating performance traditions for their own music, Berio and Stockhausen placed their 'authorial stamp' in 'writing' and 'rewriting', whereas Nono increasingly did so in 'non-writing'. Still, regardless of its presence or absence, the text (in the broadest sense) conditions the birth and evolution of an authorial performance tradition and continues to determine its fate after the author's death. It should be pointed out that, as far as 'writing' and 'rewriting' are concerned, I am not referring to Berio's typical process of determining, step by step, the work's final form. Often enough this process encompasses the work's initial performances, which must be viewed as the actual anchor points of the creative process: these 'rewritings' are part of the work's gestation. Rather, my attention applies to those 'rewritings' that, over the years, intervened in the performance tradition of some pieces, occasioning alterations, and that resulted from the author's changing relation to his own text. Let us take, as an example, one of Berio's best-known compositions: Sequenza I, dedicated to Severino Gazzelloni (see Figure 9.2a). This piece, published by Suvini Zerboni in 1958, is one of the musical manifestos of the Opera Aperta (open form) in music. It initiated the well-known series of like-named pieces and constituted a sort of tribute to the freedom of the performer who, as we are told in the concise performance notes (see Figure 9.2b), is granted a certain leeway on the level of rhythm and tempo, above all owing to the proportional notation. But precisely this freedom of execution – a freedom written 'structurally' into the score as an enhancement of the conventional view of interpretation – gradually became a source of great annoyance to the author. In 1981, after innumerable performances over twenty-three years, Berio finally affirmed that the 'margin of flexibility' inherent in the notation of Sequenza I was intended to give the performer
Figure 9.2 (a) Luciano Berio, Sequenza I (Milan: Edizioni Suvini Zerboni, n.d.), p. [1] (© 1958), S. 5531 Z. Source: Reproduced with the kind permission of the publishers.
Figure 9.2 (b) Luciano Berio, Sequenza I (Milan: Edizioni Suvini Zerboni, n.d.), performance notes, p. [1] (© 1958), S. 5531 Z. Source: Reproduced with the kind permission of the publishers.
the freedom – psychological rather than musical – to adapt the piece here and there to his technical stature. But instead, this notation has allowed many players […] to perpetrate adaptations that were little short of piratical. In fact, I hope to rewrite Sequenza I in rhythmic notation: maybe it will be less 'open' and more authoritarian, but at least it will be reliable. (Berio [1981] 1985, 9)
Another eighteen years were to pass during which the piece's performance tradition continued to consolidate. Finally, in 1998, the new version appeared in print (see Figure 9.3).5 As the example shows, it is in completely closed notation, written out with great rhythmic exactitude and entirely free of introductory notes. It is as if Berio wanted to decree imperiously, 'This is how it must be played'. Many performers have perceived this rewriting as a genuine betrayal of the author. It is equally true that the best performances, whether live or recorded, are still based on the old edition of 1958. Bereft of an important interpretative point of reference (i.e. the original freedom), the readings of the 1998 version usually sound as if 'cast in bronze'. The question arises, of course, whether the performer's 'free' choice to use the first edition is itself a 'betrayal' of the author's final wishes and whether in consequence all performances based on this edition should no longer be seen
Figure 9.3 Luciano Berio, Sequenza I (Vienna: Universal Edition, n.d.), p. [1] (© 1998), UE 19 957. Source: Reproduced with the kind permission of the publishers.
as 'authorial'. This leads me to raise the question of whether Berio should rather have codified 'his' ideal performance of Sequenza I without encapsulating it in a new version, for example, by releasing a model recording. In the final analysis, he would have thereby chosen an option already pursued by other musicians, including Karlheinz Stockhausen.6 Figure 9.4 shows the back of the dustjacket for Stockhausen's Kreuzspiel, recorded in 1974 along with Kontra-Punkte, Zeitmaße and Adieu. Composed in 1951, the piece had to wait until 1960 before being published by Universal Edition with a single page of notes on the instruments and their placement on the concert stage (see Figure 9.5a). Ever since its Darmstadt première in 1952, the balance of dynamic levels had caused various problems that were solved from one performance to the next by means of microphones and careful amplification (see Stockhausen 1998, 552–5). The use of microphones became obligatory in the piece's performance tradition, 'imposed' by the author, even though nothing is said about them in the performance notes of the first edition. This gap was closed by the recording released in 1974, where Stockhausen clarified the matter in his liner notes:
These performances and recordings should be regarded as extensions of the scores. In KREUZSPIEL, for example, all public performances used one microphone each for the oboe and bass clarinet as well as a (highly directional) microphone to the left beneath the piano for the bass register and a contact microphone to the right beneath the piano (attached with beeswax) for the altissimo descant register. […] The recording thus reproduces the sound of our public performances! (Liner notes in Stockhausen 1974, back of dustjacket, see Fig. 9.4)
In later years, Stockhausen incorporated these and other aspects of the 'sole correct' performance of Kreuzspiel into the preface of his revised fourth edition of the score (1990), thereby enlarging the preface from one to six pages (see Figure 9.5b).7 We are thus dealing here with one of the many cases in which authorial performances influenced a piece's editorial history. Let us return to Berio and try to answer the question of whether he might have found a different solution to the problems of 'performative freedom' in Sequenza I. Perhaps, like Stockhausen, he would have arrived at a modified edition of the same score, but not necessarily a new version. Browsing through the performance history of other Berio pieces, we find Sequenza III for voice, published by Universal Edition in 1968, which seems at first glance to offer another answer to our question. This piece is a prime example of the problems involved in the creative interaction between author and performer that conditions the definitive form of a music composition. Figure 9.6 shows the first page of the edition in the form in which it was published by Universal Edition in 1968. Once again, it is written in a 'free' notation that, however, is regulated by precise tempo indications in seconds.
Figure 9.4 Karlheinz Stockhausen, Kreuzspiel, Kontra-Punkte, Zeitmaße, Adieu, The London Sinfonietta, conducted by Karlheinz Stockhausen, LP (Hamburg: Polydor, 1974), dustjacket; LP Deutsche Grammophon (2530 443).
Figure 9.5 (a) Karlheinz Stockhausen, Kreuzspiel (Vienna: Universal Edition, n.d.), performance notes, n.p. (© 1960); UE 13 117. Source: Reproduced with the kind permission of the publishers.
Figure 9.5 (b) Karlheinz Stockhausen, Kreuzspiel, rev. 4th edn. (Vienna: Universal, 1990), performance notes, n.p. (UE 13 117). Source: Reproduced with the kind permission of the publishers.
Figure 9.6 Luciano Berio, Sequenza III (London: Universal, n.d.), p. [1] (© 1968), UE 13 723. Source: Reproduced with the kind permission of the publishers.
Yet these pages would never have been published in this form without the presence and collaboration of the performer, Cathy Berberian. In its present configuration, the work is a sort of compromise between what Berio wrote in his first version of 1965–66 and what Berberian considered feasible in performance. The graphic notation of the score passed through three stages. The first performance took place at Radio Bremen on 5 May 1966. Evidently, Berberian received the manuscript only five days before the premiere, and there were several problems in her part.8 After performing the piece, she proposed several changes that Berio incorporated in his score in a second version, which was then performed more than a month later on BBC London (June) and again at Brown University-Pembroke College in Providence, Rhode Island (9 October). After these performances, Berberian made her first recording of Sequenza III for Wergo in 1967 under the composer's supervision (Berio 1967). Her interpretation is remarkably close to the temporal proportions depicted in the printed edition of 1968 (see Figure 9.6). If we compare this recording to the score, it seems as if Berberian almost sang with a stopwatch in hand and followed the author's instructions with painstaking accuracy. In reality, however, the process was exactly the opposite: the third (and final) version of the score, which ultimately appeared in print, was prepared only 'after' the Wergo recording. The composer drew on this recording as a model (in many respects) in order to reach a publication that 'also' reflected an incontestably authorial performance. One could say that a prescriptive and a descriptive function converge on the pages of this score, interwoven in consequence of a creative process resembling that of works composed in the electronic studio, and unfolding in successive stages, from writing, listening (performance), elaborating, recording, rewriting and so forth until the work finally reached its definitive publication. This example shows the strength of Berio's desire (and ability) to codify his musical ideas in conclusive form on a medium – in this case, in a score already reflecting experiences gained from previous performances. At the same time, it allows us to conclude that, for Berio, no recording could ever have compensated for the freedom performers enjoy in Sequenza I. His final wishes, as we also know from other cases, were transmitted to his performers by means of a 'decodifiable' medium (paper, magnetic tape), leaving aside the dialectics (and distortions) resulting from the interpretation of these codes. Berio's ability to control and codify his own thoughts (in my estimation, perhaps in a superior and more efficient manner than Stockhausen's) remained virtually intact even in his works with live electronics,9 despite his quest for sonic mobility and its projection onto spatial trajectories realised ad hoc for each performance venue, which invariably caused his pieces to continuously change from one performance to the next.10 In works such as Ofaním and Altra voce, for instance, the use of electronics is 'deeply ingrained in the writing'.11 Though handled differently from one work to the next, the live electronics reinforce and highlight Berio's distinctive writing style and are treated on a par with the other instruments in the preparatory materials.
Figure 9.7 Luciano Berio, handwritten page from the electronic score of Ofaním, cue clarinet (Luciano Berio Collection, Paul Sacher Foundation). Source: Reproduced with the kind permission of the Paul Sacher Foundation and the Berio heirs.
The written codification of the sonic processes is controlled with great exactitude, and every electronic intervention is clearly and precisely defined. In some cases, Berio even provided for electronic scores to be integrated with the vocal/instrumental scores and performed by the sound designer (see Figure 9.7). Almost every printed edition of music with live electronics that Berio released during his lifetime has these same features: unlike Stockhausen’s editions, almost none is prefaced with introductory notes. The author seems to have wanted to entrust his scores primarily to those performers and sound designers who had already worked with him or were at least familiar with his aesthetic and interpretative horizons. Over the years, however, these works had to face not only the modification of initial aesthetic ideals, but also the steady advance of technology. For this reason, what is set down in those scores, no matter how detailed, will not satisfy performers today. New editions edited by those very performers, or ‘authorial witnesses’, can help to
improve the presentation of this 'electronic thought' and to clarify technological and performative aspects by providing additional notes to the performer.12 In the future, the inclusion of experiences from the performers in new editions will perhaps guide and alter the performance tradition of the music, enriching it with identity-forming additional value from 'historical' performers. Despite all the verbal improvements or refinements and technological advances, these interventions cannot alter a work's musical 'skeleton' as set down by its author. Nevertheless, this obviously quite idealised notion begins to totter when faced with a practical observation: although Berio essentially defined everything, he rethought each performance involving electronics and often revised things he had previously worked out. In this respect, each new performance (e.g. of Ofaním or Altra voce) was inevitably influenced by experiences gained from previous performances. The dialectic of authorship and freedom leads in this way to a paradox that becomes still more pronounced in the case of Nono: in these works, the 'freedom of interpretation' of the performer seems indeed to become diminished after the death of their author. It is safe to assume that Berio would have continued to experiment and change things with each new performance, whereas the performer, lacking the composer's guidance, obviously has to freeze the interpretation of a piece at a particular stage on the continuum of its potential performances. Without the composer's sanction, every further movement along this continuum will always be to a certain extent arbitrary, even if the composer himself might have condoned much more radical interpretative freedoms. These thoughts lead us directly to the problematical topic of the performance tradition surrounding some of Nono's compositions, for which the genesis of a work often is congruent with the history of its performance and interpretation. Ever since La fabbrica illuminata (1964) and A floresta é jovem e cheja de vida (1965–66), Nono's creative process evolved in ever closer contact with his performers, who frequently functioned as '"living materials" in whom the work is fixed by means of their own specific oral tradition' (Rizzardi 1998, xxix). Owing to the choice of performers and their collaboration with Nono during the composition phase, the author's will was often not definitively set down in a printed edition. Instead, it could become manifest in instructions, notes or sketches, i.e. in a form encompassing instructions to the performer, whose memory coincides wholly or in part with the work's text. The composition in which this tendency first appeared in its full problematic nature was A floresta é jovem e cheja de vida, for which Nono never produced a score at all (indeed, it was not established until 1998, several years after his death). Note that despite the absence of a score, it was referred to as a closed and finished work that had its own authorial performance history, at least until the late 1970s. The musical text of A floresta was defined step by step in parallel with a performance practice that departed completely from conventional usage: it was learned by rote by the performers, who always played it with the author, and was thus codified in a manner typical of oral traditions. In the 1970s, Nono, now en route to new horizons,
no longer felt the need to perform A floresta, and the piece remained frozen at the year 1976 in its performance tradition.13 Given the absence of a finished written configuration, performances were now impossible without the author's participation, and Nono began to turn down requests to perform it:

I've been asked many times to do it [A floresta] again, but I've said No because I'd have to look for voices again, work for at least a month, and discover new possibilities… and I prefer to write a new work. […] In any case, there remains a recording, and that's enough, even if it only offers 10 percent of the reality. (Nono 1987, in Albèra 1997, 97)

The recording in question was made with the historical performers (Nono 1967). In the absence of a score, over the years it came to represent 'the' (reproducible) text of A floresta. The difficulties of codifying sonic events in a definitive form continued to intensify in the course of the 1970s and reached a climax with the advent of the age of live electronics. The resultant change in the relations between theory and practice, the direct contact with a permanent team of performers, and the experimentation with sound in ever-new auditoriums led Nono to cease committing his ideas to paper. In the end, he began to revise the concept of 'tradition' and to proclaim, from time to time, that he 'no longer attached any importance to the permanence of his works' (see published statements in Albèra 1997; Nono [1987] 2001, vol. II, 447) but was fascinated by the 'impossibility of repetitive reproducibility' (Nono [1988] 2001, vol. II, 465). What he wanted, he claimed, was to 'focus and transform' on a continual basis. He even sought 'betrayal':

To speak of tradition is to speak of something being tradiert, or handed down. Tradimento… It's wonderful in Italian: tra-dizione – tra-dire. And tradire is the word for 'betrayal'. I believe that when we think we're handing down a tradition we betray it over and over again. (Nono 1984, 6)

In this way, the flexibility that had already marked the performances of Nono's works in the 1960s and 1970s (his …..sofferte onde serene… of 1976 is a paradigmatic case) became an immutable feature of his performance practice. This feature distinguishes all his works written before, around and beyond Prometeo. Pieces such as Das atmende Klarsein (1981), Io, frammento dal Prometeo (1981), Quando stanno morendo: Diario polacco n. 2 (1982), Omaggio a György Kurtág (1983), A Pierre, dell'azzurro silenzio, inquietum (1985), Risonanze erranti: Liederzyklus a Massimo Cacciari (1986) and others, up to and including his final works of the 1980s, are aligned on a 'performative uniqueness' that runs contrary to the concept of tradition: each work changes with every new performance and can be renewed over
and over again in new auditoriums and concert spaces. In an extreme case, the work will even verge on directed improvisation, as in the first version of Omaggio a Kurtág of 1983 and in such problematic pieces as Découvrir la subversion: Hommage à Edmond Jabès of 1987, posthumously unearthed by the Editorial Committee for the Works of Luigi Nono (Haller, Richard, Stenzl et al. 1993, esp. 21). Some of the above-mentioned compositions were published by Nono himself in noncomprehensive editions that often present nothing more than the handwritten vocal and/or instrumental parts. These editions were long used almost exclusively by his preferred performers, all of whom filled in the missing aspects of the score themselves (in the best cases, they made use of earlier instructions from the composer, who usually took part in the performances as sound designer). Unlike those of Berio or Stockhausen, none of the editions of Nono's works with live electronics published in his lifetime contains a protocol of the electronic dimension or a reference to an electronic score. The patches were stored directly in the equipment, and each new performance tried out something different. The handling of acoustics and dynamics was defined from one time to the next 'with' the performers, who used the score as a sort of 'sketch' to be fleshed out and completed in consultation with the composer.14 It might thus be said that the sum total of Nono's own performances, though mobile and variable, helped to create a species of tradition that was both sui generis and 'freely authorial'. The question remains, however, whether this species of tradition can be prolonged today for works conceived in this fashion and especially whether we can continue to call it 'authorial' at all. Indeed, in these works, though based on close interaction between author and performer at the time of their creation, the role of the author is not only incontestable; it also extends into the performative sphere. On closer inspection, it is perhaps the presumed ambiguity of his role in the creative sphere that both reinforces and consolidates his 'authority' in the field of performance. Here we seem to confront a sort of fresh subdivision of the authorial function. Given the above-mentioned dual typology of authorship presupposed for a work of music (one on the level of text, another on the level of performance), one might claim in these cases that the significance and function of the author has continuously shifted from the field of composition to that of performance (see De Benedictis 2015). In Nono's case, it could be said that precisely this shift along the authorial axis is one of the cardinal reasons for the constant mobility of his works in performance: each work, in written form, is akin to a 'sketch' of a compositional idea that is only brought to fruition by the composer himself in the act of its performance, based on a wide array of acoustical conditions. From the performers' viewpoint, however, 'manipulations' of this sort, though typical of oral traditions, sometimes blur the concepts of authorship and freedom as well as their roles in the creative process.15 This obfuscation continued to thicken after the composer's death. In his absence, the role of the music's author or 'generator' usually passes to its historical performers, as repositories of interpretative authenticity of performances, to
performance courses or to new editions that all, however, in the words of Nicholas Cook, are 'borrowed, a reflection of the composer's authority' (1998, 24; see also 89–90). In the publication history of several Nono works (here I am thinking of various recent new editions issued after his death, often edited by historical performers or based on their testimony), the will to clarify and consolidate the conditions underlying a correct and 'authentic' performance practice, i.e. one expressly consistent with the composer's wishes, causes such new editions and the performance tradition founded upon them to enter the realm of 'meta-authorship'.16 As just one of many examples, allow me to choose the most recent edition of Das atmende Klarsein for small chorus, bass flute, live electronics and tape (2005). This edition, accompanied by a 'didactic DVD' with testimony from historical performers and various explanations regarding the performance, calls itself 'definitive' compared to an earlier edition (likewise called 'definitive') that Nono (1987b) himself published. Without any doubt, the new edition contains many improvements in the electronic part (which was missing entirely from the 1987 edition) and others in problematical passages, the flute improvisation, and general dynamics. But there is no overlooking the normative spirit with which the composer's authority is invoked in order to justify editorial decisions, usually accompanied by such statements as 'Nono requires' ['Nono richiede'], 'Nono prescribes' ['Nono prescrive'], 'by the explicit wish of the composer' ['per esplicita volontà dell'autore'], and so forth (Richard and Mazzolini, cited in Nono 2005, xiii [in English] and vii [in Italian]). This is by no means to belittle the work of the editors, and it is my firm belief that this edition must be the benchmark for all new performances of Das atmende Klarsein. Nonetheless, I wish to return to the question previously posed regarding Berio's Sequenza I: Does this edition entirely supersede and annul, on the level of authorship, the 'versione definitiva' published during Nono's lifetime in 1987? I deliberately pose this question in a provocative spirit, thinking in particular of the statement published in the 1987 edition: 'A performance with bass flute and live electronics is permissible'.17 This was replaced in the 'definitive' edition of 2005 by:

Luigi Nono authorised the flautist Roberto Fabbriciani to play just the parts for flute, called Das atmende Klarsein – Fragmente; these require special electronic treatment, different to that used for this score. (Richard and Mazzolini, cited in Nono 2005, xviii [English] and xii [Italian])

Can Fabbriciani still claim today to be authorised to play these pages in a manner that conveys only half an impression of this work? Evidently, the author allowed just this, and the answer, on the basis of the first edition of 1987, would seem to be 'Yes'. But, according to his later (presumed) wishes,
communicated by historical performers, and from the vantage point of the 2005 edition, the answer is 'No'. This example might open up further problems in the performance tradition that even transcend 'meta-authorship'. However, to pursue them here would go beyond the limits of this chapter. Thus, I would like to reaffirm that, for many of Nono's works, there is no doubt as to the need to codify compositions in the form of a text capable of being handed down in order to prevent their becoming lost. But I have serious doubts as to the 'definitive' tag attached to scores that, unlike all the standard labels, are, I feel, neither 'critical' nor 'definitive', but at best 'authentic' in the sense of being faithful to the spirit (or a spirit) of the author. With regard to their alleged 'authoritativeness,' I can only second the words of Nicholas Cook: the authorship ascribed to such editions is unrealisable because '[s]ituations like this are not hard to resolve, they are impossible' (1998, 89).18 As time passes and the performance tradition based on these new editions solidifies, we must replace the concept of 'authorial' performance with that of a more or less 'authentic' tradition. Today it is already clear that the newly edited works and performances by historical performers will give rise to and recall performance conditions that reflect a particular stage in a work's history, conditions that the author, were he still alive, could have changed and developed further. Historical performers usually speak of their work as a sort of 'testamentary performance'. To decide when this testament is betrayed is difficult enough today. Tomorrow it may be well-nigh impossible. In the confines of this chapter, the figure of Stockhausen has apparently received short shrift. Nonetheless, we have always had him in mind ex negativo when speaking of the problems associated with Nono and more specifically when discussing Berio's practice of writing and rewriting. When musicologists approach the early and more recent editions of Stockhausen's works and writings, they generally tend to feel that they are dealing with an author to whom the concept of authorial performance tradition precisely applies. On the other hand, many performers who have played from scores largely set down and redefined by the author himself (such as the last edition of Mikrophonie I [Stockhausen 1979; see also 2008]) feel that the appearances deceive. With the disappearance of the author, problems of 'freedom' and the salvaging of an 'authentic' performance have arisen even with these works.19 As with Nono, the way to preserve a performance tradition of Stockhausen's music worthy of being called 'authorial' seems to reside in courses with historical performers – an undertaking launched in Kürten during Stockhausen's lifetime and since then repeated year after year. But unlike Nono and the 'meta-authorship' tradition in which some of his works are apparently handed down, I wonder whether it would not be appropriate in Stockhausen's case to coin the term 'post-authorship' …
Let me leave this concept in rough outline and end my chapter with what has been for years perhaps the most frequently debated question in both literature and music. Are we really sure that 'the meaning of a text is to be identified with or found in the intention of its author'?20 Berio, in one of his Norton Lectures of 1994, seems to have given an initial, partial and seemingly dialectical answer to this question, albeit one clearly slanted in favour of the author:

A musical text, in the mind of its composer, may take the shape of a perfectly closed and conceptually sealed entity. To an interpreter, the same text may on the contrary appear open-ended and fraught with structurally significant alternatives. But a text may also appear open-ended to its composer and closed to its interpreter. Over and above the author's intentions and the listener's a prioris, the performer's intentions and a prioris also converge in the music. They are the most relevant, but, as we know all too well, the performer is the not always legitimate heir to a terribly complex and burdensome history. (Berio [1993–94] 2006, 92)

Berio's words, like the examples shown above, confirm what reception theory determined a long time ago: the meaning of a text (or in our case a composition), far from being unique, is open to infinite interpretations on the parts of the author, the performers and the listeners. Added to this complexity, for some of the works I have mentioned, is the need to overcome an attitude that views an edition, whether critical or only faithful to the author's intentions, as fixed for all time and thus immutable. Until a new editorial paradigm appears that can follow, reflect, restore and convey the mutability of a text in its many authorial variants (and the increasing integration of technical equipment in compositional reality), the question of the relative subservience of the concept and significance of authorship will remain open for part of the twentieth-century repertoire.
Notes
1 This chapter is closely connected with that by Nicola Scaldaferri. Initially we planned to write one chapter in two sections. My contribution would have made up the first section. In the end, we have produced two chapters that examine questions of authorship from different but related perspectives. First published under the title Auktoriale versus freie Aufführungstradition. Zur Interpretationsgeschichte bei Nono und Berio (… und Stockhausen ist auch dabei), in Wessen Klänge? Über Autorschaft in neuer Musik, ed. H. Danuser and M. Kassel, Veröffentlichung der Paul Sacher Stiftung, 2017, pp. 47–67. Translated from German by J. Brad Robinson. To Fabrizio Della Seta on his 65th birthday.
2 '…piglio inventivo e autorevole dell'autore' ['…the inventive and authoritative manner of the author'] (Devoto and Oli 2009, s.v. 'Autorialità').
3 See also Livingston (2009); Caduff and Wälchli (2007) and Bruns (1980).
4 Freedom belongs to the process of interpretation. It is an added value that allows the (musical) work of art to live in time and from one performance to the next, albeit always with due regard to authorial intentionality, understood not so much as an interpretative norm, but as an 'invitation' to a practical and cognitive discipline of interpretation. See inter alia Shusterman (1988), esp. p. 402.
5 This new version was written in 1992 and was performed a number of times before appearing in print.
6 Hermann Danuser (1997, 27) sees in 'reproduction of sound recordings' one of the 'four means of forming an authorial tradition'. The problems involved are highly precarious and stand in need of further reflection. Here it should be recalled that to view a composer's own recording of one of his works as a means of codifying an 'authorially definitive' interpretation has not only been called into question by composers since the 1920s (Schoenberg 1926, 71–5) but also championed by Stravinsky (Fels 1928, 11). In his Norton Lectures (1993–94), Berio declared himself in favour of recordings as a means of safeguarding the 'authority' and 'authenticity' of works of music; see Berio [1993–94] 2006, 66.
7 In 1986, Stockhausen announced that ever since Kreuzspiel, he viewed each instrumental work as 'a special art form' precisely described in the preface to the relevant score. He then added: 'I even had the size of the concert stage printed in the preface to the new edition of Kreuzspiel and precisely described the placement of the microphones in my Texte zur Musik' (Frisius 1996, 240f.).
8 This state of affairs was reported by several scholars and verified by Berberian herself in an unpublished interview conducted with Silvana Ottieri in 1981 (transcribed and read by Cristina Berio at the International ASCA Conference, University of Amsterdam, 27–28 April 2006). I wish to thank Cristina Berio for allowing me to read the transcript of the Berberian–Ottieri interview (currently preserved in the Cathy Berberian Collection at the Paul Sacher Foundation).
9 Ofaním (1988–92), Outis (1995–96), Cronaca del Luogo (1998–99) and Altra voce (1999).
10 From his very first piece with live electronics, Berio was aware that 'the acoustic strategy of Ofaním (namely the software that determines its acoustic profile) has to be modified with each new performance and consequently several aspects of the work are "recomposed"' (Berio n.d.).
11 'In Berio il live electronics è profondamente connaturato con la scrittura.' ['In Berio, live electronics is profoundly bound up with the writing itself.'] Francesco Giomi in conversation with the author on 25 February 2011.
12 Note, for example, the new edition of Altra voce, edited by Francesco Giomi, Damiano Meacci, and Kilian Schwoon (Universal Edition, UE 31 492), which also includes a 'Technical Manual' for the electronic part (UE 31 492b).
13 Nono and the performers he trusted played A floresta several times between 1966 and 1976. The next performance had to wait until after his death, when it was given in Stuttgart in 1992 (by a new set of performers and the 'historical' sound technician Marino Zuccheri).
14 Information kindly supplied by Alvise Vidolin and André Richard in conversation with the present author on 2 and 20 March 2011.
15 We need only recall a well-known letter to Nono from Carla Henius, the historical performer of La fabbrica illuminata, who faulted the composer for not seeking her permission before performing the piece with a different soprano (Henius 1995, 62 [letter 19 March 1966] and 63 [Nono's forthright reply, undated]).
16 See inter alia Nono (2010; 2005; 1999; 1996; 1992).
17 'È possibile l'esecuzione per flauto solo e live electronic' (Nono 1987b, 'Indicazioni per il flauto basso', n.p.).
18 In their original context, Cook's words relate to the authenticity of historical performers.
19 For example, see Antony Pay's account of his experiences with Stockhausen (Bailey 1980, 90–2).
20 'One of the most salient and powerful trends in the last few decades of literary theory has been the attempt to discredit and displace the traditional project of intentional interpretation, the idea that the meaning of a text is to be identified with or found in the intention of its author' (Shusterman 1988, 399). For an 'ideal' response from Stockhausen, see the video recording of some rehearsals for Refrain (1959), made several years after the work's origin (Stockhausen [1997?]).
10 (Absent) authors, texts and technologies
Ethnographic pathways and compositional practices1
Nicola Scaldaferri

If the question of the relationship of authorial intention, text and performance – and of the crucial role played by technology in all these processes – lies at the heart of Western art music and its practices, it is also true that the same question touches, albeit in different ways, many other musical realms, be they nonliterate musical traditions within the West or non-Western sonic and musical practices. In this chapter, I will broaden the discussion of authorship in the previous chapter to include musical examples that are foreign – at least in their original guise – to the tradition of Western art music but have been brought into increasingly frequent interaction with Western compositional practices. Once we discard the myth (or perhaps illusion) of 'pure' and 'authentic', we find that the interaction between languages and traditions is not only a widely recognised phenomenon in our day, but also a trait discernible – to the knowing eye – in a variety of cases across the span of Western music history. These cases might consist of specific elements explicitly drawn into a composer's activity. Such elements may involve an authorial appropriation and reinscription of musical materials traditionally untraceable to a Western 'composer' figure or instead take the form of more widespread stylistic watermarks incorporated into the composer's idiom. Let us recall only a few famous examples in music history: Zuccalmaglio's transcriptions of folk songs that Brahms used to compose his lieder; Stravinsky's captivating (and often adapted and reinvented) references to his cultural origins during the Russian period; Bartók's phonograph recordings and musical transcriptions of Hungarian materials, carried out in parallel with his compositional activity and eventually fully incorporated into it. These phenomena intensified over time, thanks especially to the development of sound recording technology, and led to the formation of new modes of consumption for music, modes that have been the subject of some theoretical discussion.2 The term 'technology' is intended in the broader, anthropological sense of an extension and intensification of human capabilities and thus includes writing as well as sound recording and its many developments in the digital era. In this way, technology plays a fundamental role in giving us access to musical phenomena belonging to cultures distant from our own, and therefore in providing the conditions for a heightened
consciousness of others and ultimately of ourselves. Indeed, although there have been (with increasing frequency across Western history) direct contacts between Western composers and the music of non-Western cultures, textual forms of documentation – obtained by way of a technological device – have been the chief means of transmission of musical performances. These forms include transcriptions of traditional songs onto the musical staff, sound recordings and finally audiovisual recordings and contemporary modes of dissemination such as the Internet. Technology is indeed – in most cases – more than a mere go-between, a means to gain knowledge of phenomena otherwise unavailable to our experience; it is also the very ground upon which the interaction between the composer's toil and distant musical activities concretely takes place thanks to acts of reinscription – that is, through new forms of textualisation. These already occurred within more traditional technologies such as writing – this is the case with transcriptions and literary elaborations on oral cultures – and are discernible nowadays in musical activities that involve the interaction of live performances with sound recordings or live electronics.3 These latter activities often produce situations that are not codifiable according to the consolidated categories of musical exegesis, and we will refer to some of these situations in the course of the first part of this chapter. Encounters between different musical practices thus often take the form of a genuine dialogue between musical systems, an interaction generated during the process of textualisation. Consider – to stay within the domain of previously discussed case studies – the example of Luciano Berio. A careful study has yet to be produced of the profound encounters that Berio had with other musical traditions and of his ability to reinvent these musical traditions from within his own musical language. Indeed, charting the complex pathways of Berio's compositional appropriations would offer new insight towards a fuller understanding of his work as a composer. The modus operandi of a composer like Berio often provides problematic case studies with regard to the concepts treated here. Consider, for instance, Naturale, a ballet completed in 1985 and based on a previous composition, Voci (1984). The original instrumental forces for Naturale were listed as 'viola, marimba, tam-tam, and magnetic tape'4 but were eventually modified into 'viola, percussion and recorded voice' for the final version of the piece, which was conceived for concert performance. The change from 'magnetic tape' to 'recorded voice' in the piece's scoring might seem like a minor detail but is in fact symptomatic of a subtle shift. What one hears during a performance of Naturale is a (recorded) voice interacting with the instrumental ensemble; this voice is a live recording of cantastorie (singer/storyteller) Peppino Celano, intent on performing traditional Sicilian abbagnate. In the final scoring of Naturale, Berio thus appears to be correctly shifting the emphasis away from the technological support upon which sound is inscribed and onto the presence of voice. Indeed, not just a voice but a particular voice: the recorded and 'disembodied' singing of Celano,
captured by a live recording of one of his performances.5 The (de)contextualisation of a performance by means of sound recording, and the subsequent insertion of it into the creative process of the composer, is a problematic gesture: if, on the one hand, it turns the recorded performance into a new object more widely available for artistic elaboration, it also raises, on the other, the question of whether the performance is incorporated as mere sonic 'material', free of authorial connotations. Indeed, the process of capturing and decontextualisation involved in sound recording results in the production of new forms of authorship, forms that are often discussed nowadays in relation to sound archives and matters of privacy. These new forms of authorship now extend to those who have made a particular recording, or the owners of a particular tape, and end up compounding the more traditional forms of authorship such as the ownership of the voice – that is, of the subject expressed through a voice – that has been recorded.6 In the pages that follow, we will treat two case studies that are foreign – at least in their original guise – to the compositional ethos of Western art music but grow to interact more and more closely with it over time. These case studies will be compared in order to highlight the importance of reassessing the explanatory categories that have emerged in the previous chapter. A key element in both case studies will be the encounter between ethnographic practices and artistic practices, an encounter that occurs with increasing frequency both at the level of scholarly reflection and at the level of concrete musical practice (Schneider and Wright 2010; Marano 2013).7
The polyphony and polyrhythms of Central African music

The first example concerns a famous non-Western musical tradition: the polyphony and polyrhythms of Central African music. The research and analysis of the French-Israeli scholar Simha Arom were a key contribution to our knowledge and systematic study of these practices (Arom 1985, in English Arom 1991).8 Arom's work became not only one of the pillars of Africanist ethnomusicology, but also a powerful means of connecting this music to the Western sonic imagination. Thanks to Arom, Central African polyphony has repeatedly intersected with the work of contemporary musicians and generated a reflection on the similarities of these polyphonic practices with certain aspects of the literate European musical tradition, especially during the medieval period. The preface to the English translation of the monumental collected volume of Arom's research was penned by György Ligeti. Ligeti's essay was both concise and strikingly accurate in its discernment of the music's characteristic traits and displayed a fine understanding of Arom's work. Berio – who had also encountered Arom's analyses – was quite taken by these same musical practices. Indeed, he not only made frequent mention of Arom's work in public presentations and in his own writings, but also famously drew key compositional stimuli from it
while composing Coro (Berio 2013, 294, 450; Berio 2006, 31–61; on Coro, see Agamennone 2012; and Scherzinger 2012). Musical phenomena such as those of Central Africa provide an exemplary case study for a discussion of concepts such as text, author and performance and the role of technology as a mediator among the three. These musical phenomena are not only fascinating and highly complex; they also do not feature any individual assuming the function of the composer or author as we understand it in Western cultural practice. Among these phenomena, the most celebrated case is the horn music of the Banda Linda tribe. The Banda Linda horn music is known to us nowadays both thanks to its circulation as a recording and because the musicians themselves gave concert performances – coordinated by Arom himself – at several prestigious European music festivals. The performances consist of a number of musicians playing horns of varying dimensions, shaped and cut so as to cover a range of registers, but each only capable of producing a single pitch. The musicians alternate their pitches according to intricate rhythmic patterns; the hocket-like overlaps among the melodic lines allow for the production of sonic architectures that are highly captivating to the ear (see Figure 10.1). Indeed, the music is especially fascinating to Western ears by virtue of the fact that it is produced without an individual will assigning and prescribing the musical roles in advance. The result of this 'extemporaneous' performance is a highly rigorous sonic text, audibly recognisable and – as Arom's detailed analyses show – capable of being codified and reproduced at will in its original cultural context. The musicians' performance, however, relies on structures acquired and internalised by the musicians thanks to cultural mechanisms that shape cognitive and motor skills. In the original context of this performance, the sonic 'result' can only be obtained thanks to a long cognitive and behavioural process. John Blacking once famously defined music as 'humanly organised sound' (1973). This definition holds for our musical example here: we are in the presence of a series of musical norms resulting from complex individual interactions with social behaviours and ritual practices. Given that no composer is establishing the rules necessary to the construction of the sonic architecture, the collective performance coincides with the creative process. To recall a phrase coined by Albert Bates Lord with regard to the performative-creative processes of orally transmitted epic songs, we are in the presence of 'composing in performance' according to cognitive and behavioural models acquired by the performers through their ethnic and cultural background (Lord 1991; 2000). The lack of an author – that is, of an individual who performs authorship as a socially recognised role – does not, evidently, exclude the possibility of a musical text bearing recognisable traits, capable of being repeated, and characterised – by our own parameters – by extremely complex technical elements. Indeed, the Banda Linda horn music constitutes a text that, if technologically mediated or simply decontextualised by removing the
Figure 10.1 Simha Arom, analysis of the music of Banda Linda as found among Luciano Berio’s sketches for Coro (Scherzinger 2012, 412). Source: Courtesy of Paul Sacher Foundation, Basel.
musicians from Central Africa to a European concert hall, ends up gaining an aesthetic function for a public foreign to the ritual aspects of the original performance. The presence of an author is not indispensable for the existence of a text, just as the possibility of aesthetic consumption does not need to be part of a text's original purpose. If anything, the opposite is true: only a few texts can be traced back to a specific authorial intention. It's worth remembering here that the origin of the debate on authorship in Western culture is related to Homeric texts; their objective existence proves the fact that complex narrative texts do not need an author and that this absence of authorship lies at the heart of one of Western culture's foundational texts. It has since been demonstrated – thanks to the experimental research with recording technology led by Milman Parry on active epic traditions – that it is possible, thanks to the use of formulaic structures, to achieve a collective creative effort. The result of such an effort is considered a fully identifiable and recognisable text within the original cultural tradition.9 Doing away with the notion of a binding individual intention – an intention that must be referred to during the re-enactment of a text – also means doing away with the necessity of a performer of that particular intention, namely the very idea that a text has to be 'interpreted'. Without a primary authorship, even a 'double', or delegated authorship transferred onto the performer, as is often the case in the Western art tradition, is not possible. In the act of composing in performance, the performer carries out the role of author and interpreter at the same time. Her role is evaluated according to technical ability and with reference to aesthetic canons and most importantly on the basis of her adherence to a model that must always be recognisable and shared by the other performers (if there are any) and by the audience. Whenever forms of personal agency and deviations from the shared model occur, they must never compromise the model's recognisability. Without this fundamental recognition, the music cannot be shared, nor can a relationship be established with the other performers or with the listeners at whom the performance is aimed. One of the key operations carried out by Arom on the music of Banda Linda consists of working out the models and structures used by the musicians and then tracing them back to Western categories. Indeed, for Arom it was almost as if the musicians were performing from a sort of imaginary score. In order to obtain this result, Arom made use of a complex technological apparatus – extremely innovative and advanced considering Arom was working in the 1970s – that allowed him to record the musicians' parts individually. The complexity of the technological apparatus was necessary because the musical practice in question did not involve the possibility of musicians playing as soloists; they could only play their parts in relation to the other musicians, as a group.10 Arom employed recording techniques that are commonly used in recording studios – such as rerecording and playback. He thus set up collective recording sessions and then asked individual musicians to perform their own part again while listening with headphones to the
previously recorded collective performance. This way, he was able to record the individual parts separately. These operations were all carried out in the field, employing technologies that in those years proved rather cumbersome. The recording of individual parts allowed for subsequent transcription and analysis; most importantly, it provided the basis for a reconstructed ideal score in which all the parts could be visualised and the rhythmic processes and modes of interaction between musicians explicitly represented.11 The recording of the individual parts as separate tracks could be achieved with far greater ease using contemporary sound recording technology: recordings could be made directly in the field with multitrack recording devices, with no need for the complex intermediate phase of playback and rerecording. Yet, from a methodological point of view, the intricate procedure devised by Arom is a key aspect of his achievement. Arom's mode of working produced a profound change in the relationship between the scholar and the musicians involved in his research. The research process was not simply a matter of recording (or documenting) a piece in order to analyse it at a later time. On the contrary, the performance itself was carried out in a way that was functional to the possibility of recording it on multiple separate tracks and then accurately rendering it as text. The musicians thus became collaborators and played an active and conscious role within an experiment whose aim was to 'fix' a text endowed with specific technical characteristics. The final goal was not the recording, but rather the drafting of a 'score', where score stands for a text conceptually similar to what a Western art music composer would have written down as a binding script for the purposes of performance.12 Arom's work is something quite obviously distant from the traditional notions – or concepts – of 'documentation' that sometimes underlie musical ethnography. On the one hand, the decisional and subjective elements were introduced to recording technology, thus lending an authorial dimension to the work of whoever 'textualises' the performance by recording it onto an external support. On the other hand, this same use of technology gives life to an interaction between scholar and musicians whose common purpose is to capture a performance in such a way that it may be subsequently repeated in mediatised form. This interaction almost seems to mimic the kind of collaborations that occur between Western composers and their trusted performers. What was once considered – according to traditional ethnomusicological research – a 'document' becomes here the final stage of a process planned and shared by both the scholar and the musician. If this shift was already evident in Arom's work, it has only received scholarly consideration in recent years. Of key importance in this respect is the work of Steven Feld, whose research is founded upon a dialogic and collaborative approach to ethnographic research.13 The textualisation of a performance and its preservation onto an external support that allows it to be played back – whether as an ensemble piece or
as detachable analytical components – introduces a new form of authorship, namely that of the 'researcher' (Arom in this case), who reifies and mediatises a particular performance with the aid of certain technological devices, thus rendering it into an object that is circumscribed and reproducible. The mediatised performance will in turn constitute the point of departure for further analysis leading – according to Arom's methodology – to the transcription of individual parts and the reconstruction of a 'score' in line with the compositional conventions and analytical discourse of Western musical culture. It is hardly a coincidence that Berio's attention was captured by the transcriptions, as Arom himself recalls.14 The translation of the performance into visual terms intelligible to and comprehensible by Western musicians allowed this musical tradition to enter into dialogue with Western compositional processes. The composer becomes, in this case, someone who 'retextualises' a phenomenon previously known only in mediatised form, thus creating new relational units for it.
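Purely by way of illustration, the playback/rerecording logic that Arom improvised in the field maps directly onto present-day tools. The following minimal sketch – assuming the widely used Python packages sounddevice and soundfile, with placeholder file names rather than Arom's actual materials – plays a previously recorded ensemble mix over headphones while capturing one musician's part on its own track:

```python
# A sketch of the playback/rerecording idea with modern tools, not a
# reconstruction of Arom's 1970s apparatus. File names are placeholders.
import sounddevice as sd
import soundfile as sf

# Load the collective performance that the musician hears on headphones.
mix, rate = sf.read("ensemble_mix.wav", dtype="float32")

# Play the mix and record the soloist on one microphone channel at the same
# time; playback and capture share a clock, so the isolated part remains
# time-aligned with the ensemble recording for later transcription.
solo = sd.playrec(mix, samplerate=rate, channels=1)
sd.wait()  # block until the pass is complete

sf.write("isolated_part.wav", solo, rate)
```

With a multitrack field recorder the same result now takes a single collective pass, as noted above; the sketch simply makes explicit the alignment problem that Arom's playback procedure was designed to solve.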
Soundscape compositions

The second case study we will examine here concerns the practice of soundscape compositions, with an emphasis on the definition and methodology lent to this practice by the work of Steven Feld.15 If the example of Arom's work with the Banda Linda allowed us to explore new nuances of the concepts of text and authorship, Feld's work lends new connotations to the very idea of a musical composition. The concept of 'soundscape', which nowadays enjoys widespread usage and development in a variety of sectors, found its origin and initial development as a compositional practice made possible by sound recording technology. A key moment of articulation is discernible in the compositional and intellectual work of R. Murray Schafer (1977).16 Environmental sounds and sounds produced by humans (including musical performances) are recorded, re-elaborated, manipulated live or assembled in the studio into creations whose functions vary from ambience sound to works with a specifically artistic purpose. One can obviously note a far older lineage to soundscape – from Italian futurism to French musique concrète, from Luigi Russolo to Pierre Schaeffer by way of Walter Ruttmann. There are also less apparent links, links that have yet to receive extensive scholarly treatment and deserve a more detailed account. I am referring to the radio documentary Ritratto di città (1955) – music by Luciano Berio and Bruno Maderna and text by Roberto Leydi. Although the conditions behind the composition of Ritratto di città involved a highly experimental use of sound technology for pioneering purposes, it played a key role as the opening gambit for the research activity of Milan's Studio di Fonologia. Another interesting example is Fontana Mix (1959) by John Cage, also assembled at Milan's Studio di Fonologia, and a piece that makes use of ambience sounds recorded by Cage during a walk on the streets of Milan with microphone and magnetophone.
Cage deconstructed and arranged his recorded sound materials, finally offering – through a series of characteristic compositional processes – a creative reconstruction of his aural experience (Rizzardi and De Benedictis 2000; De Benedictis 2004).17 In Feld's case, compositional discourse takes on connotations that point towards new perspectives. Feld, who is active as a performer (as a jazz trombonist and as a player of the ashiwa, an African thumb piano, with the trio Accra Trane Station), has been producing scholarship capable of redefining the contours of ethnomusicological research. He has lent the practice of musical fieldwork an intense dialogical dimension and has made theoretical contributions towards the establishment of a true 'anthropology of sound'. Feld's scholarly focus is thus aimed not only at the musical practices that constitute the object of traditional musical ethnography (his work provides intense documentation of such practices) but also at the relationship among sounds in a broader sense. Feld draws attention to the relationship between sounds produced by humans (such as musical performances) and their surrounding sonic environment. He innovatively provides an extensive documentation of both human and environmental sounds, thus inviting reflections on how past ethnographic sound recordings have often been extrapolations from a much richer sonic context. A famous example of Feld's ethnographic practice is provided by his recordings of Kaluli women whose song interacts with birdsong and other natural sounds, alongside a score of recordings of environmental sounds that betray a special attention towards ecological issues.18 Feld's work is made possible by his high competence in the use of sound recording and sound editing technologies, including devices such as the DSM microphones, which he has used regularly in recent years. DSM microphones are stereo-dynamic microphones in the shape of headphones; they allow the wearer to record 'with her body', thus accounting for the recordist's relationship with the surrounding sonic environment.19 Feld is also interested in audio-visual technological devices, an interest that has led him to produce important documentaries on musical and artistic aspects of Ghana's cultural production, as well as theoretical insights for the scholarly community.20 Soundscape composition, according to Feld, is rather close to electroacoustic composition; indeed, there is an artistic bent to Feld's intense collaborations with musicians, photographers and visual artists, often resulting in multimedia installations.21 Although based on materials personally gathered by the researcher in the field – with the aid of specific sound recording devices – a soundscape composition is far from a live recording. Essential to its creation is the editing process carried out in the studio. Feld's particular sound recording processes are therefore not aimed at the so-called 'objectivity' that traditionally applies to sound documentation techniques. They are instead closer to the work of a composer. This is not only due to the kind of technological devices employed and to the expertise required to handle them, but also because these devices involve and
document Feld's direct participation in the recorded event; the recorded materials are thus mediated through Feld's personal perception and edited according to his mnemonic processes. Indeed, as Feld repeatedly illustrated in some detail, the memory of one's participation in the event during which the original sound materials were recorded plays an important role in the process of selection and assemblage in the studio. This form of authorship is therefore traceable not only to Feld's expressed desire to create a particular sonic text, but is also shaped by his physical participation in the event from which the recorded materials originate. Listening to a soundscape composition involves finding one's path across the author's experience of the event from which she gathered the raw recorded materials. This is the case with the CD Voices of the Rainforest, in which a day's worth of sound in the Kaluli community is enclosed and summed up in a one-hour track of sonorous storytelling. This same process is all the more intense in the vivid and complex sonic ritual of the Maggio in Accettura, Southern Italy, 'narrated' in first person by Feld, who thus evokes the sonic memory of his participation (see Figure 10.2). The result is a true 'symphony' of sounds, whose primary function is to reflect Feld's own active participation in the event (Scaldaferri and Feld 2012).22
Figure 10.2 Steven Feld, wearing DSM microphones, records canti a zampogna (voice: Giuseppe Rocco, zampogna: Nicola Scaldaferri), Accettura (Matera, Italy), 14 May 2005 (Scaldaferri and Feld 2012, 84). Source: Photograph by Lorenzo Ferrarini.
However, if the processes described here closely resemble the practice of electroacoustic composition – a resemblance that has significant consequences with regard to performance, particularly if we are dealing with installations that require an interpretive effort – the purpose of a soundscape composition presents us with far more distinctive and innovative traits. According to Feld, listening is a profound cognitive experience, as well as a means of aesthetic enjoyment; in his theoretical writings, he articulated this idea through the concept of 'acoustemology', a word that welds together acoustics and epistemology (2015a). A soundscape composition ends up constituting a means towards the knowledge of phenomena narrated in sonorous form. The operative paradigm here is predominantly aural and can complement or even substitute for the visual paradigm that is always implicitly at work in our culture. A soundscape composition can thus become more effective than ethnographic inquiry or scholarly discourse in presenting knowledge of certain phenomena.23 In this respect, the work of the composer – precisely by virtue of the subjective and authorial character of this activity – takes on important epistemological implications. Soundscape composition not only enriches the framework of the relations of author, text and technology, but also introduces an understanding of the composer's work that is radically new for the Western tradition. This new form of research and composition demonstrates how, once again, the roles, categories and conceptual apparatuses employed by Western musical discourse constantly require new definitions to reflect the shifts of concrete musical practice.24
Notes
1 This chapter is the result of discussion with Angela Ida De Benedictis. Initially we planned to write one chapter in two sections. My contribution would have made up the second section. In the end, we have produced two chapters that examine questions of authorship from different, but related, perspectives.
2 Béla Bartók had already mused upon this matter at the beginning of the 1930s, when he sketched a framework of possible uses of folkloric musical materials (and specifically traditional Hungarian materials) within one's compositional practice (Bartók 1976, 340–35). A recent redefinition of the forms of interaction between traditional materials and art music composition is found in Maurizio Agamennone (2012).
3 The author remembers a concert organised by the CERM (Center for Musical Research and Acoustic Experimentation) of Sassari (Sardinia), during the Darmstadt Ferienkurse of 1992, in which Luigi Nono's La fabbrica illuminata (1964) was played alongside something strikingly unusual: recordings of traditional Sardinian music, live singing by the canto a tenore (traditional male Sardinian polyphonic ensemble) and live electronic elaborations of a virtuosic live performance on the launeddas (a triple-piped reed instrument typical of Southern Sardinia). On the launeddas that evening was Luigi Lai, the most celebrated and accomplished player of this instrument at the time; the live electronic music was by the composer Franco Oppo. For more details, see Roberto Favaro (1994).
4 This information, along with ensuing details regarding Berio's compositions, is drawn from the composer's programme notes to his own compositions, accessible on the website of the Centro Studi Luciano Berio: www.lucianoberio.org.
5 On the mediatised voice and its implications, see Scaldaferri (2014a, 2014b).
6 Although this is not the place to enter into this debate, it's worth noting how Berio registered a similar concern when – in a piece like Visage – he added the subtitle 'for electronic sound and the voice of Cathy Berberian on magnetic tape'. It is important to emphasise that Berberian's voice was, on that occasion, recorded for the purposes of that specific composition. The 'mediatised' form of Sicilian abbagnate, on the other hand, textualises a performance whose original context of performance is very different from the aesthetic-artistic purposes of concert music.
7 For a few concrete cases of the encounter between ethnomusicological research and compositional activity, see Scaldaferri (2015).
8 Also notable is the Italian publication of Arom's collected writings, which has the merit of including Arom's audio and video field recordings (Arom 2013).
9 According to Parry (1987), this occurs especially in contexts that involve the encounter of different languages and cultures, which we would now describe as phenomena of contamination (Scaldaferri 2012). For an introduction to the Homeric question, see Nagy (1996).
10 Ligeti makes the following observation in his preface to Arom's writings: 'As the parts function only in terms of the collective structure and have in themselves no autonomous "meaning", the performers have neither learned nor practiced their parts individually and are virtually unable to play them without hearing the complete ensemble. Arom elegantly circumvents this problem by providing each musician in turn with headphones through which he hears a recording of the complete ensemble. This allows the musician to play his part "alone" and for it to be separately recorded and later transcribed.' (Arom 1991, xvii–xviii; see also Ligeti 2007, 509–10).
11 A complete understanding of these operations is largely possible thanks to the recent publication of the audio and visual materials of Arom's research in the CD and DVD attached to the aforementioned Italian translation of Arom's writings.
12 On the rich debate regarding the concept of musical text and its configurations during the course of the twentieth century, see De Benedictis et al. (2009). For a discussion of the function of musical transcription, see Scaldaferri (2005).
13 'The method described profoundly changes the usual relationship between the ethnomusicologist and the musicians during fieldwork: the musicologist is no longer on one side, with his informants on the other. On the contrary, its application requires very active participation from the musicians; they become true scientific collaborators. It is necessary that they not only understand what they are asked to do in the experimental conditions, but also that they assume the determination of the successive stages of the experimental work.' (Arom 1976, 483–519; see also Feld 2015b).
14 '…in 1975 a colloquium, entitled Musique et linguistique, was held at IRCAM. I was asked to deliver a paper and I spoke on the music of the Banda-Linda horn orchestras, basing my analysis on principles used in linguistics. The audience listened to my musical examples and I circulated copies of score examples that I had prepared. Berio was very impressed, very enthusiastic. A few months later, he contacted me because he wanted to obtain the scores. We met and discussed this at length – this was when Coro began to enter the Black continent…' (Stoianova 1985, 194).
15 Among Feld's publications are Feld (2012a, 2012b). Both books come with abundant sound and audio-visual materials. For a complete overview of Feld's research, see www.stevenfeld.net.
16 It is important to remember that the term 'soundscape' can also have different meanings and usages from Schafer's. One notable example is Kay Kaufman Shelemay (2015), who offers something close to a musical version of the notion of ethnoscape, first formulated by anthropologist Arjun Appadurai (1996) in the context of the study of globalisation. There are further strong connections with ecological issues, especially in scholarly debates of recent years.
17 Indeed, Cage's activity touches many of the fundamental compositional issues surrounding soundscapes, although his work can only be treated in passing here. Suffice it to remind our readers of works like Imaginary Landscapes (1939–1952) or Il treno di John Cage. Alla ricerca del silenzio perduto (1978).
18 See the CDs Voices of the Rainforest (1991) and Rainforest Soundwalks: Ambiences of Bosavi (2001).
19 On DSM microphones, see the website www.sonicstudios.com. For a theoretical discussion, see Lorenzo Ferrarini (2009).
20 Aside from Feld's seminal 'Ethnomusicology and Visual Communication' (1976), it is also important to recall the English translation and edition of the writings of Jean Rouch, edited by Feld (2003).
21 Here it is important to remember Feld's collaboration with the Australian artist Virginia Ryan in the project entitled Castaways, for which Feld created the video and audio components; the project was presented during the Festival of Spoleto in 2007. Also worth mentioning is the fact that Jean Schwarz, one of the composers once involved with Pierre Schaeffer's GRM (Groupe de Recherches Musicales), edited a CD recorded by Feld with Accra Trane Station. See Meditations for John Coltrane (VoxLox 107, 2006). Schwarz also curated the editing of the field recordings in the prestigious ethnomusicological collection Les Chants du Monde. This last detail is a reminder of how technological manipulations form the common ground – in the electro-acoustic studio and through the editing practices of recordings – between artistic and ethnographic musical activities.
22 This text reports on research undertaken in the context of the Maggio of Accettura. Here Feld's soundscape composition is combined with other modes of narration and documentation of the event. The theoretical implications of this combinatorial process are discussed by the two editors in a dialogue published in the volume (Scaldaferri and Feld 2012, 74–91; see also Scaldaferri 2015).
23 On this same topic, see the section of Scaldaferri (2015) in which the author discusses the analytic value taken on by soundscape compositions in relation to the Maggio of Accettura.
24 Translated by Delia Casadei.
11 Computer-supported analysis of religious chant
Dániel Péter Biró and George Tzanetakis
This chapter presents an overview of transcription methods via computational means based on research conducted from 2007 to 2015 at the University of Victoria, Utrecht University and the Meertens Institute in Amsterdam. In particular, we have developed new computational models for analysing religious chant in order to continue the project of folk music transcription, initiated by Béla Bartók (1881–1945), by incorporating twenty-first century technology. We have applied our analytical and computational tools to examples of Hungarian laments, Jewish Torah cantillation and Qur’an recitation. In analysing relationships among the parameters of pitch, melodic gesture and melodic scale with computational tools we have also created a new paradigm for chant transcription.1
Preamble
In transcribing indigenous and world music, ethnomusicologists have to deal not only with subjective hearing, imagination and technological issues but also with the history or histories of transcription. In recent years, we have set out to re-evaluate the traditions of folk music transcription and to extend them into the future with the help of recent technical advances. In this way, we not only connect to previous, historical traditions of transcription, but also forge a way in which the diversity of cultures in the world can be, in a sense, ‘rediscovered’ in terms of their material complexity. One important aspect of computational transcription is the creation of new visualisation platforms to assist ethnomusicological research. Since 2008, we have explored ways to create new platforms for musical transcription of various types of monotheistic and folk chant traditions. While each of these chant traditions is governed by a historical trajectory and specific performance parameters, our research intends to shed light on salient musical features specific to and shared among various types of chant practices.
Formula, gesture and syntax
As various types of chant employ melodic formulae, figures that define certain melodic identities that help to define syntax, pronunciation and
expression, one of our first goals was to create visualisation tools for these formulae. As the melodic framework of each tradition is governed by the particular religious context for performance, we have created computational tools and notation software that enable detailed culturally specific as well as cross-cultural analysis of these chant traditions.

We started our investigation with the sirató, a ritual lament from Hungary, one of the oldest forms of Hungarian folksong that goes back at least to the Middle Ages.2 This improvised song type is integral to our study, as it exemplifies inherent relationships between speech and singing while demonstrating stable melodic formulae within an oral and aural ritual context. In addition, we used this lament type to compare computational analyses to earlier manual transcriptions done by Béla Bartók in the 1930s.

Jewish Torah trope is ‘read’ using the twenty-two cantillation signs of the ta’amei hamikra,3 developed by the Masorete rabbis between the sixth and the ninth centuries.4 The melodic formulae of Torah trope govern syntax, pronunciation and meaning. While the written te’amim have not changed since the tenth century C.E., their corresponding melodic formulae are determined not only by the Jewish tradition of cantillation but also by the melodic framework of their surrounding musical environment.

The performance framework for Qur’an recitation is not determined by text or by notation but by rules of recitation that are primarily handed down orally.5 Here the hierarchy of spoken syntax, expression and pronunciation plays a major role in determining the vocal styles of Tajwīd6 and Tartīl.7 The resulting melodic phrases, performed not as ‘song’ but as ‘recitation’, are, like those of Torah trope, determined by both the religious and larger musical cultural contexts.

The early plainchant neumes came from a logo-genic culture that was based on textual memorisation; the singing of memorised chants was central to the preservation of a tradition that developed over centuries.8 Already in the ninth century, the technology of writing was advanced enough to allow for new degrees of textual nuance. Here the ability of formulae to transcend textual syntax is at hand, pointing to the possibility of melodic autonomy from text.9 Chant scholars have investigated historical and phenomenological aspects of chant formulae to discover how improvised melodies might have developed to become stable melodic entities, paving the way for the development of notation.10 A main aspect of such investigations has been to explore the ways in which melodic contour defines melodic identities (see Karp 1998). Our computational tools allow for new possibilities for paradigmatic and syntagmatic chant analysis in both culturally defined and cross-cultural contexts. These tools and platforms have been developed to give scholars a better sense of the role of melodic gesture in melodic formulae and possibly a new understanding of the evolution from improvised to notation-based singing in and amongst these divergent chant traditions.
Figure 11.1 Qur’an sura, Al-Qadr recited by Sheikh Mahmûd Khalîl al-Husarî, pitch (top, MIDI units) and energy (bottom, decibels) contours.
Melodic contour analysis tool
One of the first analysis tools we developed processes a digitised monophonic or heterophonic recording and produces a series of successively more refined and abstract representations of the melodic contours. This type of computational analysis has also been used by a group in Belgium to examine African music (Six and Cornelis 2011). The tool first estimates the fundamental frequency ‘f0’ (in this case equivalent to pitch) and signal energy (related to loudness) as functions of time. We use the SWIPEP fundamental frequency estimator with all default parameters except for upper and lower frequency bounds hand-tuned for each example (Camacho 2007). For signal energy, we simply take the sum of squares of signal values in each nonoverlapping 10-ms rectangular window. The next step is to identify pauses between phrases, so that we can discard the meaningless f0 estimates that vary wildly in these regions because of background noise. We define an energy threshold, generally 40 decibels below the maximum of each recording. If the signal energy stays below this threshold for at least 100 ms, the quiet region is treated as silence and its f0 estimates are ignored. Figure 11.1 shows the f0 and energy curves for an excerpt from the Qur’an sura (‘section’) Al-Qadr (‘destiny’) recited by the renowned Sheikh Mahmûd Khalîl al-Husarî from Egypt. The next step is pitch quantisation. Rather than externally imposing a particular set of pitches, such as an equally tempered chromatic or diatonic scale, we have developed a novel method for extracting a scale from an f0 envelope that is continuous (or at least very densely sampled) in both time and pitch. Our method is inspired by Krumhansl’s time-on-pitch histograms, where the total time spent on each pitch is added up (Krumhansl 1990). We require a pitch resolution of one cent,11 so we cannot use a simple histogram.12 Instead, we use a statistical technique known as nonparametric
kernel density estimation, with a Gaussian kernel.13 The resulting curve is a density estimate; like a histogram, it can be interpreted as the relative probability of each pitch appearing at any given point in time. Figure 11.2 shows a density estimate using this method applied to the f0 curve from Figure 11.1. We interpret each peak in the density estimate as a note of the scale. We restrict the minimum interval between scale pitches (currently 80 cents by default) by choosing only the higher or highest peak when there are two or more very close peaks. The free parameter of this method is the standard deviation of the Gaussian kernel, which provides an adjustable level of smoothness to our density estimate; we have obtained good results with a standard deviation of 30 cents. Note that this method does not assume octaves as stable periodic entities; it just considers the dominant peaks in the histogram without any folding of octaves. Once we determine the scale, pitch quantisation is a trivial task of rounding each f0 estimate to the nearest note of the scale. In our opinion, these derived scales are more true to the actual nature of pitch-contour relationships within oral/aural and semi-notated musical traditions. Instead of viewing these pitches as deviations from pre-existing normalised scales, our method defines a more differentiated scale from the outset. With our approach, the scale tones do not require normalisation and thereby exist in an autonomous microtonal environment defined solely by the statistical occurrence of each pitch within a temporal unfolding of the given melodic context.

Figure 11.2 Qur’an sura, Al-Qadr recited by Sheikh Mahmûd Khalîl al-Husarî, recording-specific scale derivation.
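For technically inclined readers, the derivation just described can be summarised in a minimal sketch. The Python fragment below is an illustrative reconstruction, not the project’s actual code; the function and array names are hypothetical, while the 10-ms frames, the 40-dB/100-ms silence rule and the 30-cent kernel with 80-cent minimum peak spacing follow the description above. Inputs are assumed to be NumPy arrays: f0 in cents and energy in decibels, one value per 10-ms frame.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def derive_scale(f0_cents, energy_db, kernel_sd=30, min_interval=80,
                 silence_db=40, min_silence_frames=10):
    """Recording-specific scale (in cents) from an f0 contour."""
    # Runs of at least 100 ms (10 frames) staying 40 dB below the
    # recording's maximum are treated as pauses; their f0 is ignored.
    quiet = energy_db < (energy_db.max() - silence_db)
    keep = np.ones(len(f0_cents), dtype=bool)
    start = None
    for i, q in enumerate(np.append(quiet, False)):
        if q and start is None:
            start = i
        elif not q and start is not None:
            if i - start >= min_silence_frames:
                keep[start:i] = False
            start = None
    voiced = f0_cents[keep]

    # Kernel density estimate at 1-cent resolution: a fine histogram
    # smoothed by a Gaussian kernel (standard deviation 30 cents).
    grid = np.arange(int(voiced.min()) - 200, int(voiced.max()) + 200)
    hist, _ = np.histogram(voiced, bins=np.append(grid, grid[-1] + 1))
    density = gaussian_filter1d(hist.astype(float), sigma=kernel_sd)

    # Every density peak is a scale note; of peaks closer than
    # 80 cents, only the higher one survives.
    peaks, _ = find_peaks(density, distance=min_interval)
    return grid[peaks]

def quantise(f0_cents, scale):
    """Round each f0 estimate to the nearest note of the derived scale."""
    idx = np.abs(scale[None, :] - f0_cents[:, None]).argmin(axis=1)
    return scale[idx]
```

Note that, as in the text, no octave folding takes place: the scale is whatever set of peaks the density estimate produces.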
Interactive web-based visualisation and exploration of melodic contours: paradigmatic analyses of chant
We have developed a browsing interface that allows researchers to organise and analyse chant segments in a variety of ways, thereby allowing for new types of paradigmatic analysis. The user manually segments each recording into appropriate units for each chant type (such as trope signs, neumes, semantic units or words). The pitch contours of these segments can be viewed at
different levels of detail and smoothness using a density-plot-based method. The segments can also be rearranged in a variety of ways, both manually and automatically. That way one can compare the beginning and ending pitches of any Torah trope sign, plainchant neume or word of a given chant, as well as the relationships of one neume or trope sign to its neighbours.

Figure 11.3 Screen-shot of interface: paradigmatic analysis of neume types in Graduale Triplex 398 as they relate to melodic gesture.

In the past few years we have researched the stability of melodic gesture and pitch content in a variety of contexts, both within a given chant tradition and across chant traditions. We have also explored relationships between chant texts and textual syntax with computational analysis. Our initial user interface has been designed to assist and support the analysis conducted by expert musicologists without trying to impose a specific approach. Being able to categorise melodic formulae in a variety of ways enables the establishment of a larger database of gestural identities, allowing for the analysis of their functionality to parse syntax and of their regional traits and relations. A better understanding of how pitch and contour help to create gesture in chant might allow for a more comprehensive view of the role of gesture in improvised, semi-improvised and notated chant examples. We have implemented the interface as a web-based Flash program, which can be accessed at http://cantillion.sness.net. Web interfaces can increase the accessibility and usability of a program, make it easier to provide updates and enhance collaboration between colleagues by providing functionality that facilitates communication of results among researchers. The interface (shown in Figure 11.3) has four main sections: a sound player, a main window to display the pitch contours, a control window and a histogram window.
The sound player window displays a spectrogram representation of the sound file, with shuttle controls to let the user choose the current playback position in the sound file. It also provides controls to start and pause playback of the sound and to change the sound volume.

The main window shows all the pitch contours for the song as icons that can be repositioned automatically based on a variety of sorting criteria, or alternatively positioned manually by the user. The name of each segment (from the initial segmentation step) appears above its f0 contour. The shuttle control of the main sound player is linked to the shuttle controls in each of these icons, allowing the user to set the current playback state by clicking on the sound player window or directly in the icon of interest. When the user hovers over these icons with the cursor, additional salient data about the sign is displayed at the bottom of the screen.

The control window has a variety of buttons that control the sorting order of the icons in the main f0 display window. A user can sort the icons in playback order, alphabetical order, length order and by the beginning, ending, highest and lowest f0. The user can also display the sounds in an X–Y graph, with the x-axis representing the highest f0 minus the lowest f0, and the y-axis showing the ending f0 pitch minus the beginning f0 pitch. This section also provides controls that can be toggled to hear individual sounds, and controls to hide the pitch contour window, leaving just the label. Other buttons allow the user to hear the original sound file, the f0 curve applied to a sine wave or the quantised f0 curve applied to a sine wave.14

When an icon in the main f0 display window is clicked, the histogram window shows the density plot of the distribution of quantised pitches in the selected sign. Below this window is a slider used for choosing how many of the largest histogram bins will be used to generate the simplified contour representation of the f0 curve. In the extreme case of selecting all histogram bins, the reduced curve is exactly the quantised f0 curve. At lower values, only the histogram bins with the most items are used to draw the reduced curve, which has the effect of reducing the impact of outlier values and providing a smoother ‘abstract’ contour. Shift-clicking selects multiple signs; in this case the histogram window includes the data from all the selected signs. We often select all segments with the same word, trope sign or neume; this causes the simplified contour representation to be calculated using the sum of all the pitches found in that particular sign, enhancing the quality of the simplified contour representation. Below the histogram window is a window that shows a zoomed-in graph of the selected f0 contours. When more than one f0 contour is selected, the lines in the graph are colour coded to make it easy to distinguish the different selected signs.

The development of these computational tools for transcription has allowed us to re-evaluate previous, ‘pre-computational’ traditions of transcription. Through the case studies presented below, we show how to establish
a direct interaction between automatically derived scales and traditional practices of transcription, thereby enriching the arsenal of methods ethnomusicologists have at their disposal.
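The slider-driven contour simplification can likewise be sketched in a few lines. The behaviour below is inferred from the description above rather than taken from the interface’s source code: keep only the n most-populated scale pitches and snap the quantised contour onto them, so that outlier values collapse onto nearby frequent pitches (the input is assumed to be a NumPy array of quantised pitches).

```python
import numpy as np

def simplified_contour(quantised_f0, n_bins):
    """Reduced contour drawn from the n largest histogram bins only."""
    pitches, counts = np.unique(quantised_f0, return_counts=True)
    kept = np.sort(pitches[np.argsort(counts)[::-1][:n_bins]])
    idx = np.abs(kept[None, :] - quantised_f0[:, None]).argmin(axis=1)
    return kept[idx]
```

With all bins selected, the result is exactly the quantised curve, matching the extreme case described above.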
A computational re-evaluation of Bartók’s transcription methods
In one particular study, we have re-examined Bartók’s methods of transcription by testing them with technology recently developed as part of our ongoing research project. In comparing Bartók’s transcriptions of Hungarian laments (in Hungarian: siratók) with those done with the help of computer technology, we are able to reassess Bartók’s production process and methodology, as well as to see whether and to what extent his transcription method can be applied, re-evaluated and continued.

The sirató dates back to the Middle Ages, and lamenting by women was common already in biblical times.15 This song-type exemplifies an inherent relationship among speech, singing, noise and pattern. The sirató, or crying song, is a lament performed by women mourning a deceased relative. This performance usually occurs either just before or immediately after a funeral but can also be re-enacted years after a loved one has passed away.16 The performance of a sirató connects the life of the singer not only with her ancestral past but also with the larger community. It is an integral part of the performer’s life: the enactment of the sirató most often has no clear beginning or ending. Although the sirató is improvised, each time exhibiting a personal melodic expression, the song-type is clearly discernible and exhibits a remarkable consistency of textual and musical form. Elements of both formal semblance and improvisational variability can be observed among the various examples of the sirató, proving its mythical nature. The song is not determined individually but collectively, as the boundaries of its enactment are explicit enough to be reconstructed by the individual and recognised by the village collective.17 The sirató is most often improvised in a recitative manner. In its unfolding, melody and text function symbiotically, as gestures of improvised speaking are applied to the melodic and rhythmic domain.18 Kodály described such a dichotomy between singing and speaking:

This is the only type of musical prose of this kind and can only be done spontaneously […]. Musical prose, on the border of music and speaking […]. The rhythm therein is no different from the rhythm of spoken speech […] the sections between the rests are not the same.19

The musical vocabulary of the sirató is comprised of the same cadential formulae and modality found in other types of Hungarian folksong and chant. The sirató sung by Mrs. János Péntek was recorded in Körösfő on 14 December 1937 and transcribed by Bartók in 1937–1938 (Somfai 1981). Sometimes a lament is sung by a so-called ‘professional’, and it seems that Mrs. Péntek was indeed such a ‘professional’, who would do this type of lament
for a relative or someone else in her village if requested. Here the lament is for a deceased mother, using the language common in siratók, which can be found in renditions done by a variety of performers across large territories. Let us first examine the text in Hungarian and then in English. The sound of the text and the repetitions therein play a major role in the formation of melodic contour, melodic syntax and rhythm within the performance.

Jaj, kedves idëssanyám!
Jaj, eljött az utolsó óra, kedves öreg idëssanyám!
Jaj, el këll mënë abba a hideg fekete sütét födbe.
Kedves jó öreg idëssanyám,
Jö’ön, búcsúzzon el mind a kilenc szërëncsétlen árvájától,
Jajon, búcsúzzon el utóljára
Kedves jó idëssanyám!
Jaj, kilenc szërëncsétlen árva,
Kilenc nagy bánat, kedves jó idëssanyám!
Jaj mer a kilenc kilencféle fáradság s kilencféle gond.
Jaj, kedves idëssanyám,
De még elgondolni is nagy dolog, hogy
Kilenc gyermëkët fölnevelni, kedves jó idëssanyám!
Jaj, a kilenc felé küszködött, kedves jó idëssanyám!

Alas, my dear sweet mother!
Alas, the last hour has come, my dear, old sweet mother!
Alas, one has to descend into that cold, black, dark earth.
My dear, old sweet mother,
Come and say goodbye to all your nine orphans,
Alas, say goodbye for the last time,
My dear, good sweet mother!
Alas, nine wretched orphans,
Nine great sorrows, my dear, good sweet mother!
Alas, because nine is nine kinds of weariness and nine kinds of worry.
Alas, my dear sweet mother,
It is a great deed to think that you raised your nine children, my dear, good sweet mother!
Alas, the nine, you suffered nine times, my dear good sweet mother!
(English translation by Dániel Péter Biró)

One can imagine how Bartók the composer might have been intrigued by this text, by the performance and by the resulting musical structure, for him both archaic and modernist. Bartók often carried a very heavy cylinder recorder to record his subjects, as this was the most sophisticated way to document folk music in his day. In transcribing the recordings to paper, he would often slow down the recordings, allowing him to achieve detailed transcription of ornaments. Bartók always transcribed the recordings with g’ as the tonus finalis. This was done to better compare the tonal language of large quantities of transcriptions. In this way, a tonal ‘reduction’ or transposition served to allow for easier analysis within and across folk music traditions. This is also the case in his transcription of Mrs. Péntek shown in Figure 11.4.
Figure 11.4 Béla Bartók, transcription of Mrs. János Péntek (#17b) from 1937.
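The effect of Bartók’s common tonus finalis is easy to state in code. The toy function below is an illustration of the idea added here, not Bartók’s own procedure; pitches are expressed in cents, with g’ (MIDI note 67) sitting at 6700 cents.

```python
def transpose_to_common_finalis(pitches_cents, finalis_cents, target=6700):
    """Shift a melody or scale so that its final tone falls on g' (6700 cents)."""
    shift = target - finalis_cents
    return [p + shift for p in pitches_cents]

# A lament ending on a (6900 cents) is shifted down a whole tone:
transpose_to_common_finalis([5700, 5900, 6200, 6400, 6900], 6900)
# -> [5500, 5700, 6000, 6200, 6700]
```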
A product of his education and European musical culture, Bartók employs the five-line stave in his transcriptions. He was nonetheless very aware of tuning and of the differences in tuning within folk music. In Figure 11.5, we have taken the recording Bartók worked from and found the most prevalent pitches in its scale. This allows us to reinterpret Bartók’s transcription using a scale derived by detecting the peaks in a nonparametric density estimation of the pitch distribution using a Gaussian kernel. These density-plot-based scales can be compared in terms of their melodic contour and pitch identity, and such comparison helps to demonstrate salient structural features of oral transmission. Figure 11.6 shows how the scale pitches derived from the pitch histogram are interpreted over time in a graphical manner. Figure 11.7 presents these pitches as they are ordered in terms of their density. Figure 11.8 displays their ordering in terms of scale tones from lowest to highest. The recording-specific scale derived from the pitch histogram presents a series of pitches determined by the frequency of use in a particular recording. Figure 11.9 presents Bartók’s transcription on top and a transcription of the same music on the bottom with cent deviations based on the scale analysis. The comparison allows us to see how Bartók perceived certain microtonal deviations and integrated them into a conventional tonal framework. In employing these derived scales, it is possible to examine levels of pitch hierarchy.
Figure 11.5 Density plot of the recording of Mrs. János Péntek.
Figure 11.6 Density plot transcription of the recording of Mrs. János Péntek.
Figure 11.7 Pitches, based on the density plot, ordered in terms of their density.
Figure 11.8 Pitches, based on the density plot, ordered in terms of scale degree.
Figure 11.9 Bartók’s original transcription, juxtaposed with the version with scales derived from density plot.
Figure 11.10 Bartók’s original transcription, juxtaposed with the version with scales derived from the density plot; primary pitches have note heads marked by an ‘x’, secondary pitches by a triangle and tertiary pitches by a diagonal line through the note head.
Figure 11.11 Sirató, paradigmatic analysis of text/melody relationship as displayed in the cantillion interface.

Figure 11.10 shows the most prevalent pitches and where they occur in the transcription. In this way, we are able to see a kind of foreground, middle ground and background of pitch hierarchy.20 We are also better able to appreciate the diversity of scale species present in these recordings. Figure 11.11 presents the recording of the sirató divided into segments based on the words of the text. We can order these segments alphabetically to see that certain motives, contours and pitch hierarchies attach themselves to words of the text (for instance, idësanyám = ‘my dear mother’). Here we can validate Benjámin Rajeczky’s theories of motivic repetition and connection to text in the sirató (Kiss and Rajeczky 1966). We are also able to sort the sung words by their beginning or ending pitch, by the length of the segment or by pitch distance. We have created this interface as a tool for the exploration of chant in which various parameters may be better perceived. It is meant not to substitute for transcription but to allow one to ‘test the ear’ and extend the possibilities for understanding the sound object, including its relationship to text and form, from a variety of perspectives.
Computational analysis of Jewish and Islamic chant
We have developed a computational approach to explore new possibilities for paradigmatic and syntagmatic analysis of cadences in various chant types. In particular, the question of stability in scale, melodic contour and melodic outline is investigated. Observing the function of melodic cadences in these chant types, we investigate aspects of stability and variation within and across various chant communities. Specifically, the stability and variation in scales derived from pitch histograms, melodic contours and melodic outlines are examined in recorded examples from these various traditions. This might give us a better sense of the relationship between melodic gesture and melodic formulae within these chant practices.
Data
The melodic formulae of Torah trope govern syntax, pronunciation and meaning, and their clearly identifiable melodic design, determined by their larger musical environment, is produced in a cultural realm that combines melodic improvisation and fixed melodic reproduction within a static system of notation. The te’amim consist of thirty graphic signs.21 Each sign, placed above or below the text, acts as a ‘melodic idea’ that either melodically connects or divides words in order to make the text understandable by clarifying syntax. The signs serve to indicate the melodic shape and movement of a given melody. Even though the notation of the te’amim is constant, their pitches are variable. Although the thirty signs of the te’amim are employed in a consistent manner throughout the Hebrew Bible, their interpretation is flexible: the modal structure and melodic gesture of each sign are determined by the text portion, the liturgy, prescribed regional traditions and improvisatory elements incorporated by a given ‘reader’. This study employs archival recordings from the Feher Jewish Music Center in Tel Aviv as well as field recordings made by Dániel Péter Biró in the Netherlands in 2011.

In ‘correct’ Qur’an recitation, improvisation and repetition exist in conjunction. Such a relationship becomes increasingly complex within immigrant communities that strive to retain a tradition of recitation, as found in the Indonesian Muslim community in the Netherlands. Comparing recorded examples of sura readings from this community with those from Indonesia, one can observe how melodic contour plays a role in defining the identity of cadence functionality. The recordings of Qur’an recitation were made by Dániel Péter Biró in the fall of 2011 in the Netherlands and comprise both plain murattal and embellished mujawwad readings.

For this study, we collect and compare data from field recordings made in the Netherlands, Indonesia, Israel and the United States. These recordings have been segmented manually: the recordings of Torah trope were segmented into the individual te’amim, just as the recordings of Qur’an recitation were segmented in terms of syntactical units corresponding to a given sura. Each Qur’an and Torah recording has been converted to a sequence of frequency values, again using the SWIPEP fundamental frequency estimator, which establishes the fundamental frequency in nonoverlapping time windows of 10 ms. The Dutch recordings have been converted with the YIN algorithm, which appeared to be better able to cope with the typical kinds of distortion in those recordings (De Cheveigné and Kawahara 2002). The frequency sequences have been converted to sequences of real-valued MIDI pitches with a precision of 1 cent (see above). In order to visualise and navigate through the data, consisting of annotated segments and the frequency estimation results of the pitch extraction, we have developed a web-based interactive interface. This interface
combines both visual and auditory modalities, allowing the researcher to see and listen to the results of some of the algorithms we use (see Figure 11.12).

Figure 11.12 Pitch-histograms of Genesis chapters 1–4 (a) and Genesis chapter 5 (b) as read in The Hague by Amir Na’amani in November 2011. Recorded by Dániel Péter Biró and pitch histogram created by Peter van Kranenburg.
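For reference, the conversion from frequency estimates to real-valued MIDI pitch mentioned above follows the standard formula midi = 69 + 12·log2(f/440). The small helper below is added here for illustration (it is not the project’s code) and rounds the result to a precision of 1 cent:

```python
import numpy as np

def hz_to_midi(freq_hz):
    """Frequency in Hz to real-valued MIDI pitch, rounded to 1 cent."""
    midi = 69.0 + 12.0 * np.log2(np.asarray(freq_hz) / 440.0)
    return np.round(midi, 2)          # 1 cent = 1/100 of a semitone

hz_to_midi([220.0, 440.0, 466.16])    # -> array([57., 69., 70.])
```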
Pitch histogram and melodic scale
Instead of analysing pitch contours in terms of predefined melodic scales, we derive a reference scale for the analysis of pitch automatically from the audio recording. First, we construct a pitch histogram, showing for each pitch the relative frequency of occurrence of that pitch during the recording. Since the melodies show some pitch deviation, we use the kernel density estimation procedure, determining for each recording the optimal parameter values by aurally and visually comparing the histograms and scales for various settings of the parameters. Figure 11.12 shows two sections of the beginning of the Hebrew Bible as read by Amir Na’amani in November 2011. Graph (a) shows the pitches employed for the reading of Chapters 1–4, and graph (b) shows the pitches employed for the reading of Chapter 5. In comparing the data of the two graphs, it is clear that there was a high degree of pitch stability in his reading of these chapters.
Methods for Torah trope
For each of the Torah recordings, we derive a melodic scale from a pitch histogram. Of the resulting scale degrees, we choose the two that occur most frequently and use them to scale the pitches in the nonquantised contours. As a result, different trope performances, sung at different absolute pitch heights, become comparable. On the scaled pitch contours thus acquired, we apply an alignment algorithm, interpreting the alignment score as a measure of similarity. The better the alignment succeeds, the more similar the
contours, and the higher the resulting alignment score (for more information see Kranenburg et al. 2011). Since each audio segment represents the rendition of a ta’am, we employ the similarity values of pairs of segments to assess the stability in performance. For that, we use standard evaluation measures from the field of Information Retrieval. We take each segment as a query, and for each of the query segments we construct a ranked list of all other segments by ordering them by their alignment score with the query segment. All renditions of the ta’am that correspond to the same query segment are considered relevant items. Next, for each ta’am we compute the mean average precision (MAP), which is the average precision of all relevant items for all queries. The MAP value reflects to what extent all relevant items are at the top positions of the ranked lists. The more similar all renditions of the same ta’am, the higher the MAP value. Thus, the MAP value can be interpreted as an indicator of stability in performance of the te’amim.
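The MAP computation itself is standard and can be sketched compactly. The fragment below assumes a precomputed matrix sim of pairwise alignment scores and a list labels giving the ta’am of each segment (both names are hypothetical; the alignment procedure is the one described in Kranenburg et al. 2011, not reproduced here).

```python
import numpy as np

def mean_average_precision(sim, labels, taam):
    """MAP over all query segments that are renditions of the given ta'am."""
    ap_values = []
    for q in range(len(labels)):
        if labels[q] != taam:
            continue
        order = np.argsort(sim[q])[::-1]      # rank by alignment score
        order = order[order != q]             # a query is not its own hit
        rel = np.array([labels[i] == taam for i in order])
        if not rel.any():
            continue                          # a unique rendition has no peers
        ranks = np.flatnonzero(rel) + 1       # 1-based ranks of relevant items
        precision_at_hit = np.cumsum(rel)[rel] / ranks
        ap_values.append(precision_at_hit.mean())
    return float(np.mean(ap_values))
```

The higher and earlier the same-ta’am renditions appear in each ranked list, the closer the value comes to 1.0, matching the interpretation given above.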
Results for Torah trope
In comparing an Italian to a Moroccan reading of the first verses of the book Shir Ha-Shirim (Song of Songs), the obtained mean average precisions are 0.656 for the Italian rendition and 0.299 for the Moroccan, indicating a much higher stability of rendition in the Italian reading. These findings are particularly interesting when observed in connection with musicological and music-historical studies of Torah trope. It has long been known that the variety of melodic formulae in Ashkenazi trope exceeded that of Sephardic trope renditions. At the same time, one can see how the Ashkenazi trope melodies show a definite melodic stability. Observing the trope melodies for sof pasuq and tipha in the Italian tradition, one can surmise that they exhibit a definite melodic stability. For the sof pasuq we obtain a mean average precision as high as 0.996 and for the tipha 0.649 (for comparison, the figures for the Moroccan performance are 0.554 and 0.296, respectively). This indicates that the 17 sof pasuqs in the Italian rendition are at once similar to each other and distinct from all other te’amim. The same applies, to a somewhat lesser extent, to the 24 tiphas. The same can be observed by inspecting the distribution of distances between sof pasuqs in both readings, as depicted in Figure 11.13b. For comparison, the distribution of distances between all unrelated segments is shown in Figure 11.13a. Clearly, the sof pasuqs in the Italian rendition are more similar to each other. Such melodic stability might have been due to the influence of Christian chant on Jewish communities in Europe, as is the thesis of Hanoch Avenary.22 At the same time, our approach using two structurally important pitches also corresponds to the possible influence of the recitation tone and final tone as primary tonal indicators within Ashkenazi chant practice, thereby allowing for a greater melodic stability per trope sign than in Sephardic chant.
Figure 11.13 (a) Distribution of distances between unrelated segments. (b) Distributions of distances between sof pasuq renditions in the Italian and Moroccan readings, as exemplified by Peter van Kranenburg and Dániel Péter Biró.
Computational transcription methods for Qur’an recitation
In comparing an Indonesian version of the sura al Qadr with versions performed by Indonesian immigrants in the Netherlands, we have found similarities in terms of scale and contour stability. Dániel Péter Biró made recordings of a prominent reciter in Indonesia (Hajja Maria Ulfa) in 2011 and compared the scale content of her recitation to that of reciters in a predominantly Indonesian mosque in The Hague. We then investigated relationships between scales derived from the pitch histogram, melodic contour and melodic outline in this chant tradition. We set out to investigate how such a melodic outline is employed in recorded examples of the more embellished mujawwad readings, as performed in Indonesia and the Netherlands. Using archival and field recordings, we are presently comparing density-plot scales and melodic contours of mujawwad readings to one another and to examples of murattal readings, thereby investigating how stable and variational melodic outlines come to be. By comparing examples of Indonesian with Indonesian–Dutch Qur’an recitation, we intend to ascertain how such a culture of recitation develops within a new cultural framework in the Netherlands, in which the reading is performed predominantly by and for an immigrant population.

Figure 11.14 (a) Density plots of frequencies occurring in Indonesian (a) and Dutch (b) recitation of sura al Qadr. (b) Scale degrees derived from Indonesian (solid) and Dutch (dashed) pitch density plots for sura al Qadr (axes: scale degrees against pitch in MIDI encoding; reciters: Maria Ulfa and Taty Abbas).
Results for Qur’an recitation
Figure 11.14a shows the similar, although not identical, content of the density plots derived from the recordings of Hajja Maria Ulfa (Indonesia) and Taty Abbas (Netherlands). Figure 11.14b shows the corresponding melodic scale degrees. In comparing the two recordings for the final line of sura al Qadr, we can also observe salient aspects of stability and variation in regard to melodic contour and scale employment. This is interesting, in that
these characteristics point to how melodic cadences come into being in an aural and oral text-based recitation tradition. In analysing the recording of Qur’an recitation by Hajja Maria Ulfa, we discovered that hers had more melodic embellishment, while Taty Abbas performed a similar melodic phrase without the embellishment. In this way, one can observe how the salient melodic identity is carried forward by melodic contour, the scale pitches acting as structural pillars for a similar melodic outline. Employing such a melodic outline, the reciters can integrate elements of melodic improvisation and embellishment within their reading (Figure 11.14c).

Figure 11.14 (c) Contours of the same cadence as sung by Dutch (a) and Indonesian (b) reciters, quantised according to the derived scale degrees.
Conclusions: transcription and perception
In terms of the analysis of Hungarian siratók, Jewish Torah trope and Islamic Qur’an recitation, a number of important questions arise. Are there melodic, durational, contour similarities in cadences within individual
chant recitations, within chant types and between chant types? What are the geographical and historical determinants for such similarity or variation? How does the analysis of melodic formulae in cadences contribute to or make possible the development of new hypotheses about borrowings among and between traditions? Within designated communities, how stable are melodic formulae as they exist in Qur’an recitation as practiced in their local framework? How might the performance practice of melodic formulae in Qur’an recitation remain stable or transform within a globalised oral-culture framework? By developing computational models for analysing these three chant types, we are developing a methodology to test stability and variation in terms of melodic scale and melodic contour. Such cross-cultural comparative studies underline the relationship between orality and fixed rules of recitation.23 The continuation of this computational research might also help to shed light on the early history of plainchant, as it demonstrates how melodic outlines become formed in a text-based chant tradition without pitch notation.24 By extending musical transcription via the employment of a new computational platform, we will also test the subjective bias of the ethnomusicologist as well as test previous theories of the historical development of chant. This type of computational analysis presents new means to re-examine variation and stability within melodic formulae of these chant traditions. Such methods of computational analysis allow the scholar to leave the world of one’s imagination and to test her or his hearing with the ‘external’ computational ‘ear’. This is in no way a replacement of the ethnomusicologist’s own hearing, but rather exists as a way to juxtapose the results of an acculturated hearing with results derived by a computer. While patterns of perception and analysis are developed through acculturation, transcription becomes a way for the ethnomusicologist to reflect on these patterns and to ‘jump over one’s own shadow’. Like Bartók, we chose to use the latest technology, not in order to replace the tools of the past but rather to augment these tools in order to allow for a more critical and ethical musical hearing. In the end, such transcription methods might allow scholars to discover their own creativity in dialogue with the creativity of others and to allow a greater knowledge of and respect for the diversity of musical species.
Notes
1 This chapter presents outcomes of an interdisciplinary research project, appropriately named ‘Computational Ethnomusicology’, undertaken from 2007 to 2017. It brought together scholars in the fields of musicology and computer science, including Dr. Peter van Kranenburg, Meertens Institute; Prof. Andy Schloss, University of Victoria; Dr. Steven Ness, (then) University of Victoria; Dr. Matthew Wright, (then) University of Victoria; Prof. Anja Volk, Utrecht University; and Prof. Frans Wiering, Utrecht University. The authors are grateful for their continuing involvement in this research collaboration. Similarity measurements for melodies from oral tradition
especially designed for the monophonic chant repertoires were developed in coordination with the Department of Computer Science and the School of Music of the University of Victoria, the Department for Information and Computer Science of the University of Utrecht and the Meertens Institute, Amsterdam. This research has resulted in a series of joint journal papers (Biró et al. 2008; Ness et al. 2010; Biró et al. 2011; Kranenburg et al. 2012; Biró and Kranenburg 2014). This chapter presents an overview of the research contained in the above-mentioned papers.
2 Lamenting by women was common already in biblical times: ‘Mourning songs for the dead also go back to primitive times. Although every religion and secular form of legislation… has endeavoured to control mourning practices, they are still customary even today’ (Kodály 1960, 38–39).
3 The term ‘ta’amei hamikra’ means literally ‘the meaning of the reading’.
4 ‘Originally, the biblical books were written as continuous strings of letters, without breaks between words. This led to great confusion in the understanding of the text. To ensure the accuracy of the text, there arose a number of scholars known as the Masoretes in the sixth century CE, whose work continued into the tenth century’ (Wigoder 1989, 468).
5 ‘Like the Hebrew miqra’ the word “Qur’an” is derived from the root q-r, i.e. “reading”. However, the visual significance of the text is not implied with this root. Rather the concepts ‘pronounce, calling, reciting are expressed with the word, so that an adequate translation of Qur’an (Qur’ān) could be “the recited”’ (Zimmermann 2000, 27, English translation by Dániel Péter Biró).
6 ‘Tajwīd [is] the system of rules regulating the correct oral rendition of the Qur’an. The importance of Tajwīd to any study of the Qur’an cannot be overestimated: Tajwīd preserves the nature of a revelation whose meaning is expressed as much as by its sound as by its content and expression, and guards it from distortion by a comprehensive set of regulations which govern many of the parameters of the sound production, such as duration of syllable, vocal timbre and pronunciation’ (Nelson 1985, 14).
7 ‘Tartīl, another term for recitation, especially implies slow deliberate attention to meaning, for contemplation’ (Neubauer and Doubleday).
8 ‘The rise of music-writing is associated with the normalization of the Latin language and its script, with the spread of writing and literacy, and with language pedagogy … The strongest factors (in neume-origins) relate to the development of language in speech and writing and to the theory and pedagogy of language’ (Treitler 1984, 206–207).
9 ‘The Gregorian Chant tradition was, in its early centuries, an oral performance practice… The oral tradition was translated after the ninth century into writing. But the evolution from a performance practice represented in writing, to a tradition of composing, transmission, and reading, took place over a span of centuries’ (Treitler 1982, 237).
10 ‘The church musicians who opted for the inexact aides-mémoire of staffless neumes – for skeletal notations that ignored exact pitch-heights and bypassed many nuances – were content with incomplete representations of musical substance because the full substance seemed safely logged in memory’ (Levy 1998, 137).
11 One cent is 1/100 of a semitone, corresponding to a frequency increase of about 0.06%.
12 f0 envelopes of singing generally vary by much more than one cent even within a steadily held note, even if there is no vibrato. Another way of thinking about the problem is that there is not enough data for so many histogram bins: if a 10-second phrase spans an octave (1200 cents) and the f0 envelope is sampled at 100 Hz, then we have an average of less than one item per histogram bin.
13 Thinking statistically, our scale is related to a distribution giving the relative probability of each possible pitch. We can think of each f0 estimate (i.e. each sampled value of the f0 envelope) as a sample drawn from this unknown distribution, so the problem becomes one of estimating the unknown distribution given the observations.
14 The sine wave amplitudes follow the computed energy curves.
15 Gisela Sulitieanu has written about laments performed by women in ancient Israel, citing from the book of Genesis. ‘Sara died at Kiriath-Arba, namely Hebron in the land of Canaan and Abraham came to lament over Sara and to cry for her’ (Gen. 23: 2), constitutes the first evidence of funeral songs among the Jewish people. The phrase “and Abraham came to lament (lipsod) over Sara and to cry for her (velivkotah)”, indicates with precision that there existed at the time in the course of the funeral ceremony two kinds of manifestations: the “crying” and the “lament”. Thus we can interpret the “crying” as an individual and personal manifestation and the “lament” as a manifestation already ceremonial according to certain rules prescribed by the community’ (Suliteanu 1972, 292).
16 A similar type of song, the keserves (song of sorrow), is also sung by men and women.
17 ‘In non-textual communication, what is not met with immediate acceptance by the general public, can not survive in the moment of its rendition. Conformity of worldview is built in to the attribute of the preserved formal unit. This is regulated through ‘preventive censorship.’ Such censorship does not even start to allow for forms to take hold, which, risking to be labelled a-topos (absurd; actually without place), do not have a secure function within the context of cultural memory’ (Assmann and Assmann 1998, 17 and 31, English translation by Biró).
18 Although the sirató is most often sung by a female relative of the deceased, a ‘professional’ or a designated singer from the local vicinity will often sing the sirató; this type is termed parodia. Even in this form the song type remains intact as the ‘professional’ singer takes the place of the mourner and improvises the typical expressions of mourning: ‘My dearest mother, why have you left me?’ ‘What will I do without you?’ ‘You were so good to us.’ etc.
19 ‘This folksong is the only example of prosaic recitation that employs a unique type of improvisation, existing as musical prose at the border between music and speech. […] Rhythms in this folksong are none other than the rhythms of speaking, […] its phrases being of unequal durations.’ (Kodály 1952, 38–9, English translation by Biró).
20 It is clear that the investigation of pitch hierarchies was a prime concern to Bartók. While Bartók’s aim was to present a detailed transcription, his use of a single tonus finalis in transcription was intended to better investigate tonal hierarchies in examples of folk song within and across cultures. ‘In principle, melodies ought to be published in the original pitch as sung or played by the performers. In practice, however, we have to make a compromise in order to attain certain goals. One of these goals is to make the survey of the material as easy as possible. The most suitable method by which to attain this is to transpose all the melodies to one pitch, giving the melodies a common “tonus finalis”; to place them, so to speak, over a “common denominator”.
In collections where this method is used, one glance at a melody is frequently sufficient to determine its relationship to others. If we indicate, in addition, the original pitch of the tonus finalis by some procedure (see explanation of signs, p. 91), we fully comply with the requirements for recording the original pitch’ (Bartók 1951, 13).
21 The te’amim actually entail more symbols than necessary for syntactical divisions. Therefore, it is clear that part of their original function was representational. Such qualities might have been lost or homogenised by later generations, especially in Sephardic communities, in which many of the te’amim are identical in their melodic structure. The Talmud shows that singing was important in the study of the Torah (Talmud Sanhedrin, 99a–99b): ‘R’ Yehoshua ben Karchach says: whoever learns torah but does not review his learning is like a person who plants but does not harvest. R’ Yehoshua says: Whoever learns Torah and forgets his learning is like a woman who gives birth and buries the child. R’ Akiva says: Sing (“zamer”) every day, sing every day’ (Zimmermann 2000, 95). Similarly, the signs for such ‘singing’ were also represented by cheironomy, described as ‘a doctrine of hand signs: a form of conducting whereby the leading musician indicates melodic curves and ornaments by means of a system of spatial signs’ (Gerson-Kiwi and Hiley). While cheironomy is rare among contemporary Ashkenazi Jews, Uri Sharvit has mentioned the use of cheironomy by Yemenite Jews. Because the scroll does not contain the vowels or accents, cheironomy is used to help a reader remember the te’amim. ‘This phenomenon may be observed during the synagogal services when the performer of the Pentateuch recitation does not remember the cantillation symbols of the text. When this happens, he is helped by a person who stands to his left, who, looking at a printed Bible showing the te’amim, moves his right hand in a certain manner, using a little pointer or his index finger’ (Scharvit 1982, 26).
22 ‘Ever since the days of the Romans, there had been a Jewish settlement in the province of Gaul and on the Western bank of the river Rhine. Archeological findings to this effect go back to the 1st century C.E., and documentary evidence goes back to the 4th century C.E.. The Jewish settlement in the city of Köln (“Colonia” of the Romans) was officially licensed in the years 123 and 331. We know of the existence of synagogues in Paris and Orléans in the 6th century. The ancestors of the Ashkenazi community engaged in agriculture (as vinters and grape-growers), and in various branches of handicraft and trade: there were among them also seafareres engaged mainly in import. The Merovingian and Carolingians kings carefully observed the Jewish rights and protected them against the attacks of the clergy. However, the situation of the Jews worsened as of the 11th century, reaching a crisis with the persecutions of 1096, the year of the first crusade. And yet, already in the Carolingians period the monks opposed the integration of the Jews in the life of the land – and it is from their pamphlets that we learn of cases of conversion to Judaism, of joint festivities and of other instances of good relations (Idelson 1926: 451–452). These conditions facilitated the acceptance of the neighbors’ musical idiom by Askenazi Jewry – though not yet in respect to the sacred song’ (Avenary 1978, 70–2).
23 ‘The introduction of a system of notation, as presented in the Masoretic accents, can be placed within the subject heading that Aleida Assmann and Jan Assmann describe in cultural-historical terms as a measure, which exists for a group ‘that serves recitation in order to transmit knowledge that safeguards identity,’ belonging within the functional category of “cultural memory”.
Because the object notation – the cantillation of the text – does not allow itself to be split into an abstract parameter, even this culture of text, existing in the “functional realm of tradition,” can remain oral’ (Zimmermann 2000, 144, English translation by Biró).
24 ‘The fact that the Gregorian Chant tradition was, in its early centuries, an oral performance practice … The oral tradition was translated after the ninth century into writing. But the evolution from a performance practice represented in writing, to a tradition of composing, transmission, and reading, took place over a span of centuries’ (Treitler 1982, 237).
12 Fixing the fugitive
A case study in spectral transcription of Luigi Nono’s A Pierre. Dell’azzurro silenzio, inquietum. A più cori for contrabass flute in G, contrabass clarinet in B flat and live electronics (1985)
Jan Burle

Introduction: concepts and terminology
Music is a temporal art form. It is made of sound (mechanical vibrations sensed through the organs of hearing) and thus is, by the very nature of sound, transient. Listening to music and participating in music performance bring aesthetic experience. It can be argued that, for an attentive listener, this is related to expectations and predictions about the music and the realisation of whether those are fulfilled or forestalled by musical development as it unfolds in time.1 Analysis of music and explanation of its aesthetics may be attempted from many aspects; mapping the sonic structure of a musical piece is one of the possible points of departure. In this project, the author took stock of elementary sounds that made up one performance of a live-electronic music work, acknowledging upfront that their mere account (which would be a reductionist approach) is not sufficient for understanding the structure and architecture of the work but that without this accounting, the understanding we seek is near impossible. The approach to the reduction was technical; every effort was made to follow a straightforward, experimental yet rigorous series of steps that could eventually become the basis of a method that could be similarly applied to the analysis of other musical works. The main premise was the assumption that most music is made of and can be therefore segmented into a set of individual sound units of a suitable size. A pragmatic description of such a sound unit would be close to Pierre Schaeffer’s concept of sound object: ‘[…] every sound phenomenon and event perceived as a whole, a coherent entity, and heard by means of reduced listening […] independently of its origin or its meaning’ (Chion 2009; see also Landy et al. n.d.). It would also be related to Curtis Roads’ concept of sound object on the fifth level of his nine time scales of music as ‘a basic unit
of musical structure, generalizing the traditional concept of note to include complex and mutating sound events […] ranging from a fraction of a second to several seconds’ (2004, 3). For the purpose of describing the reduction process, in this text the term sound element shall signify a concept of sound unit that in many (but not in all) aspects corresponds to Schaeffer’s and Roads’ sound objects. The term element is to be understood as ‘a member of a set’, and set as ‘a group of things that belong together’, as is common in mathematics, although in this text they are used in a much more relaxed sense.

Definitions
• Sound element is a coherent sound unit with a distinct beginning and ending that limit its existence in time. The duration of a sound element – the time from its beginning to its ending – is typically from a fraction of a second to several seconds.
• Hierarchy of sound elements: Several related sound elements can be grouped together and in that way form a single compound sound element; vice versa, a compound sound element can be split into smaller, related but distinct subordinate sound elements. An element that does not apparently further consist of such smaller elements is a simple sound element. Smaller sound elements that are grouped into a larger, compound sound element are its inner elements, and the compound element is their outer element. At what level a descent into inner elements stops depends mostly on whether the inner structure is audibly apparent, and on the musical context and goals of the analysis. For example, a sequence of short tones, each a distinct sound element, that occur immediately one after another are inner elements of a single compound sound element that has its pitch varying in time. Vice versa, a single tone of a rich timbre (an outer sound element) can be split into its individual harmonics (inner elements), especially if the harmonic content of the tone evolves in time and the harmonics thus stand out. The reverse process of grouping simpler sound elements into larger compound elements could, in theory, extend all the way to the point at which the whole musical piece becomes a single compound sound element.2
• Timing of sound elements: A sound element can be investigated individually, in isolation from its context. In that case, only its duration is known, not the specific beginning and ending times. When a sound element is related to a group of surrounding sound elements, the beginning and ending times are fixed relative to their outer compound elements, and the sound element becomes a sound event. The timings of sound events create the rhythmic and other time-related structures of music.
• Envelopes: A simple sound element – the result of mechanical vibrations – is determined by their amplitude (related to the perceived loudness of the sound), their frequency (that corresponds to musical pitch) and, optionally, noise bandwidth. The evolution of the amplitude, frequency and noise bandwidth of a sound event in time is captured by its amplitude, frequency and noise bandwidth envelopes. The so-called bandwidth-enhanced model (Fitz and Haken n.d.) represents not only musical sounds with a distinct pitch (tones), but also breath sounds, percussive sounds and other noises.
• Noise, in the context of this analysis, is not ‘an unwanted or unpleasant sound’, but rather sound of a nonpitched character that adds sonic qualities to music, namely sound colour, ambience and a sense of environment. Indeed, the sound character of most traditional musical instruments contains many ‘transients’ and other noise components. The distinction between tones and noises is not strict; there is a full continuum of sounds from tones to noises. Tones may have a noisy character, and noises may have their sonic energy concentrated in a certain frequency band and thus suggest a pitch. (A schematic sketch of these definitions follows the list.)
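For readers who find a schematic helpful, the definitions above can be paraphrased as a data structure. The sketch below is a hypothetical illustration (it is not part of the analysis software described in this chapter); the three envelopes follow the bandwidth-enhanced model’s parameters.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Envelope = List[Tuple[float, float]]        # (time in s, value) breakpoints

@dataclass
class SimpleSoundElement:
    duration: float                          # beginning to ending, in seconds
    amplitude: Envelope                      # perceived loudness over time
    frequency: Envelope                      # pitch over time
    noise_bandwidth: Envelope = field(default_factory=list)  # noisiness

@dataclass
class CompoundSoundElement:
    # Inner elements with onsets relative to this outer element; fixing
    # an onset turns a sound element into a sound event.
    inner: List[Tuple[float, SimpleSoundElement]] = field(default_factory=list)
```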
Perception of music
Time is fugitive: it is always fleeting, never to return; sound, as a transient temporal phenomenon, flees along with it. Perceiving a sequence of sound events as music depends on musical memory: the ability to remember the transpired sound events, their temporal and pitch relations and the musical structures that they thus form. Short-term musical memory that lasts only a few seconds allows us to recall precise details of musical motifs and phrases quite distinctly.3 Long-term musical memory retains fewer details that must be reinforced through repeated listening, but it allows us to observe longer and larger musical structures, such as themes, their repetitions and variations and the overall architecture of a composition. Composing music, music performance and musicological studies require the long-term perspective combined with access and attention to short-term details. But since time flies and memories fade rather quickly, most of us need a memory aid of music fixed indirectly in a non-temporal medium, typically in a visual form of a graphical representation, such as simple sketches of melodic and rhythmic shapes, text annotated by neumes, tablatures or chord symbols, or an elaborate representation of musical events written in a symbolic, detailed music notation. Such notation can prescribe how to perform a piece of music or describe how it sounds. Prescription and description in music notation are not two completely mutually exclusive concepts; some notational systems are more prescriptive, some are appropriate for description and many have aspects of both (Battier 2015, 60).4 The ability of prescriptive notation to describe the outcome of a performance largely depends on the musical context: the style, performance practice, how much improvisation and other indeterminacy is present, etc. If the context is well known (such as ‘classical string quartet’) and if all voices (sound events that occur in a performance) are evidently notated in the score, then the score serves both a prescriptive and a descriptive function. It indicates what is to be done to perform the musical piece and with reasonable accuracy predicts what the sound result of the performance will be.
On the other hand, if the context is not sufficiently understood, if the musical character of the voices is unknown (due to the contribution of electronic audio processing), if additional voices are created during a performance (by live electronics) or if the performance includes indeterminate events, then the score cannot give an accurate description of the musical event. Live electronic music is a musical context that has evolved very rapidly during the past several decades and as such is still relatively new and not always well understood. For the uninitiated, it can sound unusual and often surprising: listeners are confronted with synthesised sound generated by electronic instruments, the sound of traditional instruments modified by electronic processing, spatial sound diffusion and other types of sound manipulation. Much of this music is experimental; many compositions are quite unique, and their singular compositional and performance techniques extend beyond what can be captured by Western staff notation. However, since most contemporary composers and performers are skilled in using staff notation – indeed, common musical training is based on staff notation – the use of specialised notational systems that composers and performers would have to master for each new work can be problematic (see Chapter 6). Furthermore, due to the physics of sound and psychoacoustics, there is an inescapable pull towards tonality, even in live electronic and other experimental music; a tendency to play the harmonic series and consonant intervals can be observed whenever musicians can, with sufficient precision, control and sustain the pitch of tones produced by their instruments.5 Alternative notation systems have been proposed. These systems, often based on a twelve-tone chromatic model, may have certain advantages, namely: notated intervals are related to the visual distance of note symbols on the pitch axis, octave equivalence is clearly indicated and there are no accidental signs or enharmonic substitutions (Parncutt 1999). And so it seems that for live electronic music it is practical to have a pair of scores: a prescriptive performance score in staff notation to explain what must be done to perform the work and a descriptive score for the purpose of keeping a permanent record of the outcome of a given performance. A scholarly study of this music requires both. The descriptive score – a transcription of the transpired sound events – is in a way equivalent to a sound recording, albeit in graphical form. It should be based on a chromatic, rather than diatonic, model or, better still, its pitch axis should allow a pitch continuum that can be overlaid by various pitch grids – the twelve-tone tempered chromatic division, diatonic scales or microtonal divisions – as the sketch below illustrates. The transcription process that creates a descriptive score is best undertaken from a sound recording that has been made for this purpose.
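As a minimal illustration of such overlays, the position of any frequency on an arbitrary equal division of the octave can be computed directly. The function below is our own sketch; the reference pitch and the grid sizes are illustrative choices, not values prescribed by any of the works discussed here.

```python
import math

def grid_position(freq_hz: float, steps_per_octave: int = 12,
                  ref_hz: float = 261.626) -> float:
    """Position of a frequency on an equal division of the octave,
    measured in grid steps above the reference pitch (here middle C)."""
    return steps_per_octave * math.log2(freq_hz / ref_hz)

# The same point on the pitch continuum overlaid by three grids:
# semitones, quarter-tones and twelfth-tones.
for steps in (12, 24, 72):
    print(f"{steps:2d}-per-octave grid: A4 (440 Hz) lies at step "
          f"{grid_position(440.0, steps):.2f}")
```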
Sound spectrograms6
A graphical account of frequencies and intensities, using a spectrogram of a sound recording as a source of data, is a straightforward and quite intuitive form of descriptive transcription that is not limited by traditional tonality. The sound spectrogram is a graph or, more generally, an image that shows how sound evolves in time: time is shown on the horizontal axis and musical pitch on the vertical axis. Sound elements are shown as recognisable graphical shapes – glyphs.7 The horizontal position and size of each glyph are related to the time and duration of the respective sound element, the vertical position and shape to its pitch envelope. The hue of its colour indicates sound intensity. Spectrogram data are calculated from an audio signal by the application of the Fast Fourier Transform (FFT), a digital implementation of the short-time Fourier transform (STFT). The origins of modern-day spectrograms can be traced to a seminal paper entitled ‘Acoustical Quanta and the Theory of Hearing’, written by Dennis Gabor (1947).8 The content of Gabor’s work can be summarised as follows:
• An ideal impulse signal is represented on a spectrogram by a vertical line, indicating that all frequencies are present for an infinitely short time.9 An ideal simple harmonic oscillation is shown as a horizontal line, indicating a single frequency present for unlimited time.
• A real signal, however, corresponds to a time–frequency ‘cell’ with an ‘effective duration’ and an ‘effective frequency width’ (as Gabor calls them). The ‘mean epoch’ and the ‘mean frequency’ are the coordinates of the ‘centre of gravity’ of the ‘energy distribution’ in a cell. The area of the cell must be large enough to represent at least a ‘quantum’ (the minimal amount) of sound energy that is required to register as a sound. Consequently, there is an inherent uncertainty between time and frequency: a longer effective duration is required for more precise frequency determination and vice versa.
• The same time–frequency uncertainty applies to human hearing: for a sound to be heard as a musical tone, it must be longer than about ten milliseconds; conversely, with longer durations frequency determination improves.
Gabor’s uncertainty principle is closely related to the length of the FFT time window and the size of its frequency bins: increasing the frequency resolution requires a longer time window, and vice versa. The two-dimensional time–pitch grid of spectrograms bears a close resemblance to the paper rolls of mechanical player pianos and the ‘piano rolls’ of MIDI sequencers.10 Nevertheless, since a spectrogram is not limited to a discrete set of pitches (in the way that any form of staff notation is) but rather displays pitch envelopes, it is particularly well suited to showing the pitch content of sounds that are manipulated microtonally.
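The window/bin trade-off is easy to make concrete. The following sketch is ours; the sample rate, test tone and window lengths are illustrative and do not correspond to settings used in the study reported below. It prints the frequency-bin width and time step that result from three different STFT window sizes.

```python
import numpy as np
from scipy import signal

sr = 48_000                        # sample rate in Hz (illustrative)
t = np.arange(0, 2.0, 1 / sr)
x = np.sin(2 * np.pi * 55.0 * t)   # a low tone, roughly contrabass register

for nperseg in (1024, 4096, 16384):
    freqs, times, _ = signal.stft(x, fs=sr, nperseg=nperseg)
    # The bin width shrinks as the window grows; the time step grows with it.
    print(f"window {nperseg:5d} samples: "
          f"bin width {freqs[1] - freqs[0]:7.2f} Hz, "
          f"time step {(times[1] - times[0]) * 1000:6.1f} ms")
```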
A transcription of a musical performance based on spectrogram technology will necessarily differ from a purely technical spectrogram. It should be (a) cleaned of sound traces that are less audible and therefore less musically significant, (b) simplified to a form that is more symbolic than a technical spectrogram and (c) extended by various annotations, such as relevant time scales, bar lines, indication of musical pitch, division of the octave, colour-coding of distinct voices, etc. In this way, a spectrogram-like transcription can serve as an acceptable, intuitive record of transpired sound events and satisfy the requirements of descriptive notation. The methodology presented here is only one possible solution to the problem of creating a representative transcription of a musical performance, but it can be an adequate starting point for musicological analysis and perhaps for a subsequent, further-reduced transcription into staff notation.11
One fundamental question is how to deal with the harmonic frequencies (overtones) of complex musical sounds. Harmonics are the ‘inner sound elements’ (displayed as separate glyphs on a spectrogram) of sounds that have a rich timbre. In the case of conventional tones produced by instruments with a known and recognisable sound quality (clarinet, flute), all glyphs representing harmonics except the fundamental frequency should be identified and removed from the transcription. If the sound colour of the tones is unusual (for example, if they were processed by electronics), or in places where the production of overtones was an important part of the performance, the choice could be to keep the stronger higher harmonics in the transcription.
Transcription of A Pierre
We set out to create a spectrogram-like transcription of Luigi Nono’s composition A Pierre. Dell’azzurro silenzio, inquietum (1985) a più cori for contrabass flute in G, contrabass clarinet in B flat and live electronics.12 The first step was to make a representative audio recording that would be suitable for (a) a faithful playback of the performance, including its spatial audio aspects, so that the sound field of the concert experience could be re-created, and (b) making a spectrogram-like transcription. Future musicological studies of the work would draw both on listening to the recording and on reading the transcription. For a piece as complex as A Pierre, visually guided listening is especially helpful. The initial study of the work was based on two sources: (a) the authorised performance score (Nono 1996) and (b) the first stereo CD recording, made by the performers Roberto Fabbriciani (contrabass flute) and Ciro Scarponi (contrabass clarinet), who collaborated with the composer in the composition of A Pierre, and Alvise Vidolin (sound direction) (Nono 1991). The work is sixty bars long in common time (the time signature is not shown in the score), in a tempo of thirty beats per minute; sixty bars of four beats at thirty beats per minute make 480 seconds, so, taking the numerous fermatas into account, a typical performance should last just over eight minutes. The performance notes in the score provide instructions for setting up the live electronics and for the physical arrangement of the two instrumentalists and the four loudspeakers (see Figure 12.1). The instrumentalists are seated in front of the audience, and the loudspeakers are positioned in the four corners of the room to surround the instrumentalists and the audience. A pair of microphones (to accommodate the physical size of each contrabass instrument) picks up the sound of each of the instruments.
Figure 12.1 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, diagrams of the position of the loudspeakers (left) and the live electronic configuration with line recordings identified (right) (Nono 1996, xv).
The sound of the flute, whose player is seated on the left, is directly amplified by the right front loudspeaker, and the sound of the clarinet, whose player is seated on the right, by the left front loudspeaker. This results in an ambiguous relationship between sounds and sound sources, which is fundamental to the idea of the work (for more on this, see the following chapter by Friedemann Sallis). The audio signals from both instruments are then mixed together and processed by the live electronics: the sound is (a) independently harmonised by two harmonisers connected in parallel, (b) delayed by twelve seconds and filtered by a bank of three band-pass filters and (c) delayed by twenty-four seconds, as sketched in the code after this paragraph. The mix of all three signals is processed by a reverberation unit and diffused through the loudspeakers. The technician controls the loudness of the front and back pairs of loudspeakers following the instructions in the score. When one tries to follow the score while listening to the recording, it soon becomes apparent that it is quite difficult to relate the recorded sound to the notation. Besides ordinary instrumental sounds, the authorised score requires that the performers use a variety of extended techniques: ‘con soffio’, in which varying degrees of breath noise and tone are required; ‘suono ombra’ or shadow sound, in which the performer vacillates between the fundamental and upper harmonics; whistle-tones; and various types of multiphonics (Nono 1996, xi–xiii). This sound production is then subjected to electronically produced transformations, which are superimposed on the directly amplified sounds. Consequently, the editors, André Richard and Marco Mazzolini, readily admit that ‘the acoustic and dynamic result will not correspond to the graphic notation’ (Nono 1996, xiv).
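The routing just described can be summarised in a few lines of code. The sketch below is ours, not a reconstruction of the Max/MSP patch used in performance; the filter bandwidths and the harmoniser transpositions (a minor seventh and a tritone downwards) are those reported from the performance score in this and the following chapter, while the filter design and order and the pitch-shifting algorithm are illustrative assumptions.

```python
import numpy as np
from scipy import signal
import librosa  # its pitch shifter stands in for the harmonisers

def delayed(x: np.ndarray, seconds: float, sr: int) -> np.ndarray:
    """Delay a signal by prepending silence, keeping the original length."""
    pad = np.zeros(int(seconds * sr))
    return np.concatenate([pad, x])[: len(x)]

def bandpass(x: np.ndarray, lo: float, hi: float, sr: int) -> np.ndarray:
    sos = signal.butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    return signal.sosfilt(sos, x)

def a_pierre_chain(mix: np.ndarray, sr: int) -> np.ndarray:
    """mix: the summed flute and clarinet microphone signals."""
    # (a) two harmonisers in parallel: a minor seventh (-10 semitones)
    #     and a tritone (-6 semitones) below the input
    harmonised = (librosa.effects.pitch_shift(mix, sr=sr, n_steps=-10)
                  + librosa.effects.pitch_shift(mix, sr=sr, n_steps=-6))
    # (b) twelve-second delay into a bank of three band-pass filters
    d12 = delayed(mix, 12.0, sr)
    filtered = sum(bandpass(d12, lo, hi, sr)
                   for lo, hi in [(40, 300), (675, 1012), (2278, 3417)])
    # (c) twenty-four-second delay, unprocessed
    d24 = delayed(mix, 24.0, sr)
    # The summed result is then reverberated and diffused over the loudspeakers.
    return harmonised + filtered + d24
```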
Banff recording sessions
During the week of 23–28 February 2009, rehearsals, a concert and a post-concert recording session of A Pierre took place at the Banff Centre (Canada).
Great care was taken to ensure that the performance was a correct interpretation of the work (according to the performers’ understanding of the composer’s intentions) and that the work was recorded in a way that would allow for (a) a reliable playback and (b) a detailed transcription of the work for the purpose of scholarly study.13 The recorded tracks were conceptually organised in four groups: (a) the original instrumental sound, (b) intermediate steps in the sound manipulation by the live electronics, (c) the four loudspeakers and (d) the final, composite sonic result with the reverberation and other characteristic contributions of the concert hall. The sound of each of the contrabass instruments was recorded by a pair of small-diaphragm condenser microphones.14 On the flute, one microphone was placed near the lip plate (referred to hereafter as flute high) and one near the foot of the instrument (flute low). On the clarinet, which has a paperclip shape, one microphone was placed near the mouthpiece and the folds of the body (clarinet high) and the other at the bell (clarinet low). This direct audio signal of the instruments was used in performance and simultaneously stored for analysis and transcription. Eight separate signals were ‘tapped’ at various points in the live electronics (realised in Max/MSP), namely: the output signal of each of the two harmonisers, of each of the three band-pass filters and of the complete filter bank, and the combined processed signal before and after the reverberation unit. Only the signal after the reverberation unit was used in performance; the rest were recorded for analysis and transcription. In order to re-create the concert experience through spatial playback, three different techniques for spatial sound recording were employed. (a) In addition to the direct sound of the two instruments, the signal of each of the four loudspeakers was recorded separately. In playback, two loudspeakers replaced the two instruments, and together the sound of the six loudspeakers created an experience similar to that of the concert, provided that the room had acoustic qualities similar to those of the concert hall. (b) A square configuration of four omnidirectional microphones was used to record the surround sound using a standard sound production technique. (c) A first-order ambisonic signal was recorded with the ambisonic Soundfield MKV microphone. The microphone was placed near the centre of the listening space, in the ‘sweet spot’ where the sound of all six sources – the two instruments and the four loudspeakers – was balanced and where listeners presumably had the best listening experience. During subsequent playback and listening tests, the ambisonic recording proved to be the best of the three methods, efficiently capturing the final sonic result, including the characteristics of the concert hall.15 This recording has since been used a number of times to reconstruct the concert experience, using standard hexagon, octagon and ‘5.0’ configurations of loudspeakers. The recording was also examined in a dedicated periphonic listening space with loudspeakers both above and below the listener’s position on 17 May 2010 at the Center for Computer Research in Music and Acoustics (CCRMA) of Stanford University.
Ambisonic recordings are, by their nature, transparent and truthful and are thus well suited to the critical listening required for a detailed study of a musical performance. Altogether, nine takes of the complete A Pierre performance were recorded: five during rehearsals, one during the concert with the audience present and three in a recording session on the morning after the performance. Musicians are often at their best following a major performance: their minds are still focused on the music, yet the anxiety is over. It was therefore no surprise that the first take on the morning after the concert turned out to be the best.16
Manual transcription17
Following the recording session, the next step was to become familiar with the recordings (the temporal and tonal structure of the piece and the nature of the recorded sound), to experiment with methods of spatial playback and to demonstrate that the concept of making a transcription based on spectrogram technology was feasible and practical. The first examination confirmed that the instrumentalists had tuned the instruments to the standard 440 Hz concert pitch and that the tuning had remained stable throughout the recording. It also established that – since the performance of A Pierre includes overtones of the harmonic series throughout the piece – the musicians played in just intonation. This fact alone justified the decision to create a spectrogram-like transcription (which captures a pitch continuum), rather than to work in staff notation. Some microphone cross-talk was found in the recorded direct sound of the instruments: one could clearly hear, albeit at a lower intensity, the sound of the other instrument played by the front loudspeaker (the loudspeaker amplifying the flute sound being close to the clarinet and vice versa) and also, at a significantly lower intensity, other sounds from the room. Thus, care had to be taken not to confuse the instruments. After careful listening to all takes, the first post-concert recording session was selected as the object of the analysis. Using audio-editing software, the audio files were trimmed, a few sounds that were clear artefacts of the performance and recording (such as noises before the beginning and after the ending of the piece) were removed, and a slight, intermittent feedback ‘ringing’ was cut out using a notch filter (sketched below). Other than that, the recording was kept raw, i.e. as truthful as possible. A set of spectrograms then provided the first overview of the whole performance. Figure 12.2 shows a full, unprocessed spectrogram of the sound heard in the concert hall. Figure 12.3 presents the spectrogram of the clarinet part. The next step was to find the time values of each of the sixty bars. Although the performers generally maintained a steady tempo, determining the exact times was not always possible: there is no clear, audible pulse, and the numerous fermatas make the task even more difficult, so in the end we had to label some of the times only approximately.
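For readers unfamiliar with the operation, a notch filter removes a narrow frequency band while leaving the rest of the spectrum intact. A minimal sketch follows; the centre frequency, quality factor and sample rate are hypothetical placeholders, since the chapter does not state the frequency of the feedback ‘ringing’ or the tool used.

```python
import numpy as np
from scipy import signal

sr = 48_000        # sample rate (assumed)
ring_hz = 1_200.0  # hypothetical frequency of the feedback 'ringing'
quality = 30.0     # higher Q gives a narrower notch

b, a = signal.iirnotch(w0=ring_hz, Q=quality, fs=sr)

# x would be the recorded audio; here, two seconds of noise as a stand-in.
x = np.random.default_rng(0).standard_normal(2 * sr)
y = signal.filtfilt(b, a, x)  # zero-phase filtering avoids phase distortion
```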
Figure 12.2 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, unprocessed spectrogram of a performance recorded on 28 February 2009. For a colour image of the spectrogram accompanied by an audio file, see the companion website http://live-electronic-music.com.
Figure 12.3 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, spectrogram of the contrabass clarinet sound recorded on 28 February 2009. For colour images of the spectrograms of the flute and clarinet parts with some crosstalk from the other instrument accompanied by audio files, see Figures 12.3a and b on the companion website http://live-electronic-music.com.
Once the start time for each bar was established, an attempt was made to create an initial transcription of the flute part. The intention was to produce a transcription of each sound source that would be reasonably detailed, yet not overly complicated. Figure 12.4 shows the spectrogram of the first nine bars of the flute part.
Figure 12.4 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, spectrogram of the contrabass flute part, bars 4–7, recorded on 28 February 2009. For colour images of spectrograms of the flute and clarinet parts, bars 1–9 accompanied by audio files, see Figures 12.4a and b on the companion website http://live-electronic-music.com.
By careful, repeated listening, with frequent reference to the performance score, the time and frequency values of the relevant ‘glyphs’ in the spectrogram were entered into a table and imported into a custom program that displayed them as a spectrogram-like score. Several types of glyphs were differentiated: (a) simple separate glyphs representing a single steady pitch, (b) glyphs that were short slides between two pitches, (c) glyphs that visually spanned a pitch range (‘fat tones’) and (d) glyphs that were ‘discontinuous’, that is, they contained a number of pitch ‘breakpoints’ and resembled a segmented line (one possible encoding is sketched below). Figure 12.5 shows the final manual spectrogram-like transcription of bars 4 to 7; the flute glyphs are shown in black and the clarinet glyphs in grey.
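The following is one possible encoding of these four glyph types, written by us purely for illustration; the transcription table described above was not necessarily structured this way, and the example values are invented.

```python
from dataclasses import dataclass
from enum import Enum, auto

class GlyphKind(Enum):
    STEADY = auto()      # (a) a single steady pitch
    SLIDE = auto()       # (b) a short slide between two pitches
    FAT = auto()         # (c) a glyph spanning a pitch range ('fat tone')
    SEGMENTED = auto()   # (d) a segmented line of pitch 'breakpoints'

@dataclass
class Glyph:
    kind: GlyphKind
    onset: float           # seconds from the start of the performance
    duration: float        # seconds
    pitches: list[float]   # Hz: one value (a), two (b, c), or several (d)

flute_glyph = Glyph(GlyphKind.SLIDE, onset=24.0, duration=1.5,
                    pitches=[349.2, 392.0])  # invented example values
```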
Limitations of manual transcription and machine assistance
The manual transcription of the flute and clarinet spectrograms was reasonably successful and served very well both as a visual guide for listening and as a fixed form for analysis. However, it took much tedious work to create the transcription of the two instruments. It also became clear that it would be impossible to manually transcribe the sound processed by the filters and harmonisers without machine assistance in interpreting the spectrograms. A spectrogram created by the application of the short-time Fourier transform is inherently subject to a time–frequency uncertainty. Improving its focus in frequency resolution by increasing the size of the FFT time window causes greater smearing in time.
Figure 12.5 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, manual transcription of sound of the flute and clarinet, bars 1–9. For the complete manual transcription of the flute and clarinet parts, see Figure 12.5b on the companion website http://live-electronic-music.com.
If the time window is shortened to improve time resolution, the frequency bins become larger and more uncertain. For general, arbitrary signals, this uncertainty cannot be avoided. However, subtle music consists mainly of spectrally and temporally sparse signals: pitched tones concentrate sonic energy in a relatively low number of isolated, harmonically related frequencies, rather than spreading it over a frequency continuum, while percussive sounds with a broad frequency spectrum have their energy concentrated within short time intervals. The resolution of spectrograms of signals that are spectrally and temporally sparse can be significantly improved in both frequency and time by the application of the time–frequency reassignment method (Fitz and Fulop 2009). Time–frequency reassignment is a technique in which the coordinates of each point at ‘mean epoch’ and ‘mean frequency’ (the geometrical centre of a Gabor cell) are reassigned (mapped) to a point that is closer to the ‘centre of gravity’ of the cell, by examining the phase component of the Fourier transform values. The reassignment works well either for signals that are concentrated in frequency (musical tones, that is, signals with near-sinusoidal components) or for impulsive signals (signals with energy concentrated in a short time interval, such as percussive sounds and the transients of musical tones). Reassigned spectrograms are noisy, with a number of ‘random speckles’ – reassigned data points that are not apparently related to any sinusoidal or impulse component. The reassigned bandwidth-enhanced additive sound model represents sound as a collection of ‘partials’ (closely corresponding to our definition of sound elements) (Fitz et al. 2003). Each partial is a coherent unit of elementary sound that has both sinusoidal and noise characteristics, the frequency and amplitude of which are represented as ‘breakpoint’ envelopes. The partials are constructed from a reassigned spectrogram by following the ‘ridges’ of high amplitude. The noise components (the ‘random speckles’) are pruned and their energy assigned as a varying noise bandwidth to the nearby partials. The phase information, although not essential for displaying the shape and position of the partial ‘glyphs’, is also stored, and the complete data can be used to control bandwidth-enhanced oscillators with sinusoidal and noise components to resynthesise the analysed sound. Fitz and Haken implemented the reassigned bandwidth-enhanced additive sound model in the open-source software library Loris for the purpose of sound-morphing.18 Through a few experiments, we confirmed that it is well suited to our purpose, and we proceeded to adapt it for a machine-assisted transcription of A Pierre.19
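The data model just described can be pictured concretely. The sketch below is our own illustration of a bandwidth-enhanced partial as a sequence of breakpoints; the field names are ours and are not the identifiers used in the Loris library.

```python
from dataclasses import dataclass, field

@dataclass
class Breakpoint:
    time: float        # seconds
    frequency: float   # Hz
    amplitude: float   # linear amplitude
    bandwidth: float   # fraction of the energy that is noise, 0.0-1.0
    phase: float       # radians, kept for resynthesis

@dataclass
class Partial:
    breakpoints: list[Breakpoint] = field(default_factory=list)

    @property
    def duration(self) -> float:
        return self.breakpoints[-1].time - self.breakpoints[0].time

    @property
    def peak_amplitude(self) -> float:
        return max(bp.amplitude for bp in self.breakpoints)
```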
Machine-assisted transcription
The author has written several software tools that process the data obtained from Loris. The main functions of the software are to import, edit and present the data obtained as a list of partials from the reassigned spectrogram created by Loris.
266 Jan Burle edit and present data obtained as a list of partials from the reassigned spectrogram created by Loris. Since the goal was to create a simplified graphical representation (transcription) of the musically salient aspects of A Pierre, the imported data (the amplitude, frequency and noise bandwidth envelopes of partials) were immediately reduced by eliminating details that fell below certain thresholds. The reduced partials can be simply displayed as glyphs in time–pitch coordinates, which is, in principle, a simple, straightforward process. We attempted, however, to design the software so that the display is ‘dynamic’ and allows a variety of views: it can show a global overview of several minutes and the full pitch range of music in its entirety or zoom in on minute details; it can display data filtered by various criteria (duration and amplitude range of partials, the noise bandwidth, etc.) on a variety of tempered and microtonal pitch coordinate grids. We took care that the parameters can be easily changed by the user and that the system responds efficiently even if the data contain information about many thousands of partials. The software allows the user to delete selected glyphs that represent partials not required in a reduced transcription. It assists in selecting partials that ought to be removed, such as partials that are shorter or softer than a given threshold. Audio can be resynthesised from the remaining partials, and the user can thus assess the impact and confirm that salient elements were not discarded. Ordinary scores are organised in staves laid out on a page one below the other; in our software, the score is organised in overlapping ‘layers’. The layers can be made visible or hidden and reorganised in order of display preference. Glyphs in different layers are colour-coded and thus can be clearly distinguished. With the experience gained in the manual transcription of the flute and clarinet tracks, we were ready to attempt a machine-assisted spectrogramlike transcription of A Pierre.20 We chose the same post-c oncert recording that was used for the manual transcription and transcribed the following tracks: flute high, flute low, clarinet high, clarinet low, the filter bank and the two harmonisers. The transcription began by using Loris to convert recorded audio into the reassigned bandwidth-enhanced partials. We listened critically to each track, independently identifying sections with a distinct character or loudness that were separated by silence and made an overall time map of the tracks and their sections. Each recorded track lasts eight and one half minutes each. So as to obtain shorter audio files that could be more easily processed, each track was divided into sections, from five to twelve depending on its structure. For each section (stored in an individual audio file) we found the optimal value of amplitude threshold and the size of the FFT frequency bin. If the amplitude threshold is set too low, the imported data contains too many
Next, the data had to be cleaned and reduced. That happened in four steps: (a) we filtered out partials that were shorter or quieter than heuristically selected thresholds, (b) we manually eliminated various ‘debris’ (random ‘speckles’) that remained, (c) we eliminated partials that resulted from cross-talk, mainly in the flute and clarinet tracks and (d) we eliminated nonessential partials that did not significantly contribute to the character of the instrument sound (a naive code sketch of this last step follows below). This last step was perhaps the most subjective manipulation of the data: in passages where the instruments played conventional tones in their usual register and with their usual technique, we identified the fundamental partial and removed all the others, which represented overtones; but in passages where the instrument sound was not typical, we kept the more characteristic higher partials. By listening carefully to the resynthesised sound and comparing it with the recorded original, we verified that the transcription was reliably close to the original performance. An example of how the process was applied, and of its results, can be seen in the following figures. Figure 12.6 shows an excerpt (bars 15–31) from the authorised performance score. Figures 12.7a–c document the process of transcription of bars 24–25 of the ‘clarinet high’ track. Figure 12.7a shows the data as analysed by Loris. It contains several salient partials (the thickness of the glyphs indicates their amplitude envelope), a number of less significant partials, some cross-talk from the flute and some ‘debris’. Figure 12.7b shows the same section after the short or quiet partials were filtered out and the debris eliminated. In this reduced form, it was then quite possible to determine which glyphs represent clarinet partials and which are due to cross-talk from the flute. Figure 12.7c shows the final version with all nonsalient partials and the flute cross-talk removed. Figures 12.8a–c show the same process applied to the combination of both flute and clarinet tracks. Finally, Figures 12.9a and b show the full bars 17–29, as analysed by Loris and in the processed version, respectively, of the combined flute and clarinet parts. The cleaned and reduced individual parts were then assembled into the aggregate full score, and the score was independently verified and corrected. We believe that it now represents a good transcription of the performance of A Pierre given at the Banff Centre.
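Step (d) can also be pictured in code. The naive heuristic below is entirely ours – the actual selection was made by ear and eye – and merely keeps the lowest partial of a group while flagging near-integer multiples of its mean frequency as overtone candidates.

```python
def mean_freq(p):
    """Mean frequency of a partial given as (time, frequency, amplitude) triples."""
    return sum(f for _, f, _ in p) / len(p)

def split_overtones(partials, tolerance=0.05):
    """Assume the lowest partial is the fundamental; flag partials whose mean
    frequency is a near-integer multiple of it as overtone candidates."""
    ordered = sorted(partials, key=mean_freq)
    f0 = mean_freq(ordered[0])
    keep, overtones = [ordered[0]], []
    for p in ordered[1:]:
        ratio = mean_freq(p) / f0
        if ratio >= 1.5 and abs(ratio - round(ratio)) < tolerance:
            overtones.append(p)
        else:
            keep.append(p)
    return keep, overtones
```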
Figure 12.6 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, bars 15–31 (in two parts, a and b).
Figure 12.7 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, contrabass clarinet part, bars 24–25, (a–c) present stages of the transcription process. For colour images of spectrograms of the clarinet part, bars 24–25 with an accompanying audio file, see Figures 12.7a–c on the companion website http://live-electronic-music.com.
Figure 12.8 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, contrabass flute part, bars 24–25, (a–c) present stages of the transcription process. For colour images of spectrograms of the flute part, bars 24–25 with an accompanying audio file, see Figures 12.8a–c on the companion website http://live-electronic-music.com.
Figure 12.9 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, contrabass flute and contrabass clarinet parts, bars 17–29, (a) Loris analysis, (b) final transcription. For colour images of spectrograms of the contrabass flute part and the contrabass clarinet part, bars 17–29 with an accompanying audio file, see Figures 12.9a and b on the companion website http://live-electronic-music.com.
Notes
1 Private conversations with Thomas Staples (see also Reimer 1972).
2 These ‘micro’ and ‘macro’ concepts of music assume that silences have the same importance as sound. For more on this, see Karlheinz Stockhausen’s British Lectures (1972–73).
3 Hans Peter Haller described an experiment done while working with Luigi Nono: I wondered […] what is the time limit of a repetition of a canonical form after which I do not fully recall the original of a repeated audio signal […] The result of our experiment: with a delay of 24 seconds the time limit was reached. The repeated sequence of tones was recognized only partially, more as a new sonority and not as a repetition […] Further tests found that this psychoacoustic process depends on the timbre, rhythmic forms and the sound location […] for example, altering the sound by band-pass filters and adding room reverberation gave a surprising result: this time limit could be shortened by about a half, that is, to 12 seconds. (Haller 1999a)
4 Charles Seeger (1958) appears to have introduced the terms prescriptive and descriptive notation.
5 Helmholtz, On the Sensations of Tone as a Physiological Basis for the Theory of Music, p. 47.
6 ‘Spectrogram’ or ‘sound spectrogram’ is a synonym of the (less frequently used) term ‘sonogram’. ‘Spectrogram’ without the qualifier ‘sound’ may refer to any signal and not only to sound. However, in the context of spectral analysis of music, ‘spectrogram’ unambiguously denotes the evolution of a sound spectrum over time.
7 We borrow the term glyph from typography, meaning a ‘graphical unit of a specific shape’.
8 At the same time that Gabor was writing his theory of hearing, a ‘sound portrayal’ or ‘sound spectrography’ method was being developed at the Bell Telephone Laboratories (Potter 1946).
9 In Gabor’s original text, the spectrogram is named ‘information diagram’; its time axis is vertical and its frequency axis horizontal.
10 Joseph Schillinger (1946) proposed a graphical notation system in the form of shapes drawn in two-dimensional time–pitch coordinates on graph paper.
11 The precise microtonal, just intonation and other spectral aspects of the analysed work would necessarily be lost during the reduction of a spectrogram-like score into staff notation.
12 For a more detailed description of the plans that preceded the event, see Burleigh and Sallis (2008); for preliminary results, see Zattra et al. (2011).
13 The sessions took place in the Rolston Recital Hall of the Banff Centre. The musicians were Marieke Franssen (contrabass flute), Carlos Noain Maura (contrabass clarinet) and Juan Parra Cancino (live electronics). Rehearsals, the concert performance and the post-concert session were recorded by a team of Banff Centre recording engineers, under the direction of John D. S. Adams.
14 Alvise Vidolin suggests that using dynamic microphones would lower the possibility of feedback and would therefore be a better choice in a live concert setting. Conversation with Vidolin, Cambridge MA, March 2016.
15 The reverberation and sound colouration of the concert hall is present in the ambisonic recording. Therefore, it is best to play it back in a ‘dry’ listening room.
16 During the post-concert session, takes numbered 8, 9 and 10 were recorded. Take 8 was almost flawless and very close in character to the concert take; there were performance issues and other imperfections in takes 9 and 10, due to intermittent feedback in the electronics and the room, and possibly because a complete, uninterrupted, strenuous eight-minute performance drains the instrumentalists’ energy.
17 Most of the work on manual transcription was done by Evan Rothery, an undergraduate research assistant at the University of Calgary. Sonograms were created using the software Sonic Visualiser, http://sonicvisualiser.org.
18 Loris is an open-source C++ library for audio analysis, manipulation of partials and their resynthesis into modified sound. See Fitz and Haken, Loris Software for Sound Modeling, Morphing, and Manipulation.
19 We used Loris to analyse a number of selected audio samples of recorded and digitally synthesised music and critically assessed the computed partials and the resynthesised sound; we found that the time, pitch and temporal envelope data were sufficiently sharp, robust and representative of the original sound.
20 The author is grateful to Mr. Yangke Li, a master’s student of audio engineering at the University of Lethbridge, for the golden ears, nimble mind and endless patience that he applied to this work.
13 A spectral examination of Luigi Nono’s A Pierre. Dell’azzurro silenzio, inquietum (1985)1
Friedemann Sallis
Au lieu d’organiser des sons entre eux, on tire une organisation du sonore lui-même. [Instead of organising sounds in relation to one another, an organisation is drawn from the sonorous material itself.]
Hugues Dufourt (1982, cited in Grabócz 2013, 22)
Spectrograms and spectralism
This text presents the outcomes of a ‘spectral examination’ of a performance of Luigi Nono’s A Pierre. Dell’azzurro silenzio, inquietum (1985) a più cori for contrabass flute, contrabass clarinet and live electronics. The term refers to the fact that our examination is based primarily on a spectrogram or sonogram of audio data recorded during a performance of A Pierre at The Banff Centre (Alberta, Canada) on 28 February 2009. For more on the data collection and transcription of this performance, see the preceding chapter by Jan Burle. The term ‘spectral’ is rarely if ever associated with Nono’s late work (ca. 1980–90). Indeed, the connection implied by the term can seem counter-intuitive. After all, spectral music, as defined by its practitioners, is usually presented as an oppositional alternative to the music of the so-called Darmstadt generation, namely Nono, Bruno Maderna, Pierre Boulez, Karlheinz Stockhausen et al. For example, in his recent La musique spectrale: Une révolution épistémologique, Hugues Dufourt mentions Nono only twice, in lists of names intended to evoke backgrounds against which the characteristics of spectral music are foregrounded (Dufourt 2014, 252 and 345). By the same token, Nono’s published writings contain almost no mention of spectral music or the composers associated with it. Of course, most of his documents were written before the term ‘spectral music’ emerged in a series of texts by Dufourt, Gérard Grisey and Tristan Murail between 1978 and 1982 (Drott 2009, 40). Nevertheless, Nono’s writings include wide-ranging interviews and conferences given during the 1980s in which contemporary developments and tendencies in new music are consistently evoked. The index of the Scritti e colloqui contains no mention of Murail, Michaël Lévinas or Roger Tessier; Dufourt and Grisey are mentioned once, but not by Nono.
In an interview given in Berlin in March 1987, Enzo Restagno asked Nono to comment on claims made by Dufourt and Grisey that a new era had arrived thanks to the perspectives and possibilities provided by new technology, notably the spectrogram. Nono responded dismissively. Referring to research undertaken at Darmstadt in the 1950s, he observed that the spectralists had simply (re)discovered hot water.2 One is left with the impression that Nono and the spectralist composers studiously avoided any mention of each other’s music. In the following, the use of the term should not be understood as an indirect attempt to classify one of Nono’s remarkable late works as a form of spectral music. By the same token, this does not mean that useful relationships cannot be drawn between some of the compositions of Nono’s last decade and the music of the spectral composers. Indeed, an examination of Nono’s compositions from this perspective provides a rich context that enables a better understanding of their specificities. One commonality that A Pierre does share with spectral music is the so-called ‘acoustic glow’ that one finds in works such as Partiels (1975) for eighteen musicians, the third piece of Grisey’s cycle Les espaces acoustiques (1974–85). The unrelenting focus on the harmonic structure of sound that we find in this music, as well as in that of Murail, Dufourt, Lévinas and Tessier, results in luminous masses of sound surging through time and space (Auner 2013, 243). Comparable qualities can be observed in many of Nono’s compositions of the 1980s: Io, frammento da Prometeo (1981) for three sopranos, chamber choir, bass flute, contrabass clarinet and live electronics; Prometeo. Tragedia dell’ascolto (1984, revised 1985) for soloists, choir, orchestra and live electronics; Post-prae-ludium per Donau (1987) for tuba and live electronics. Of course, to say that Nono’s music shares some of the same sound qualities as that of the spectral composers does not mean that he adopted the aesthetic orientations of that group. Nonetheless, one does find discursive evidence that Nono’s musical thought during his last years appears to converge with general tendencies in the discourse of the spectralist composers. In a public lecture entitled ‘Altre possibilità di ascolto’ [Other possibilities of listening], presented in August 1985 (just six months after he had completed A Pierre), Nono called for a new way of thinking about sound. After stating that the ‘acoustic field of known sounds’ (of which art music had previously been composed) had been completely used up, Nono launches into a series of rhetorical statements questioning the nature of sound and particularly the relationship between sound and space. The English translation of this passage (prepared by this author) does not come close to capturing the energy of the original text; for this reason, the Italian text is given as well.
One starts with sound. What is sound? How does one ‘qualify’ sound? How does sound reverberate? How does space ‘compose’ sound? How does space intervene in the transformation of sound? How does sound occur in space, in just the right dimension of combinations of varied sounds according to the various typologies of the space?
[Si parte dal suono. Da cosa è il suono. Da come si « qualifica » il suono. Da come riverbera il suono. Da come lo spazio « compone » il suono. Da come lo spazio interviene nella trasformazione del suono. Da come nello spazio avviene, il suono, proprio nella dimensione combinatoria di vari suoni secondo la varia tipologia dello spazio.] (Nono 2001, Vol. I, 526–27)3
The emphatic flourish of Nono’s words reminds us of Grisey’s oft-cited plea for a new focus on sound, made in 1978: ‘We are musicians and our model is sound, and not literature; sound, and not mathematics; sound and not theatre, the plastic arts, quantum physics, geology, astrology or acupuncture!’ (Grisey, cited in Drott 2009, 39–40). There is of course no evidence to suggest that Nono had read or was even aware of Grisey’s statement, initially published in the Darmstädter Beiträge (1982, 16–23). Nevertheless, the content and tone of these two passages suggest that both Grisey and Nono were clearly dissatisfied with conventional discourse about new music and were moving along parallel trajectories.
The limits of conventional notation and problems of authorship
Another interesting point of comparison between Nono and the spectralist composers concerns the limits of conventional notation, as well as the weakening of the strong work concept and related issues of authorship. In this case, however, the comparison highlights difference rather than commonality. In Western art music, sign and sound are traditionally considered poles of a dialectic, which can be modified over time but not suspended, because the music is both text, fixed by the medium of writing so that it can be passed on, and sound event, momentary and ephemeral (Borio 2012, 1). Throughout its history, staff notation has been the locus of a tension between symbol and practice. This is, at least in part, a story of innovation in which established notational systems are forced to adapt to evolving practice. Indeed, since at least the fourteenth century, paradigmatic change concerning the status and function of Western music has usually been accompanied by changes in the way it is notated, because staff notation has traditionally been a visual representation of the theoretical concepts that produced and continue to sustain the music of this tradition, as well as an efficient means of conserving and transmitting specific pieces (Duchez 1983, 29–30). The gradual emergence of the strong work concept and the rise of Romantic aesthetics at the end of the eighteenth century tended to reinforce and destabilise conventional notation at the same time. On the one hand, the score came to be understood as the repository of the ‘opus perfectum et absolutum’. In other words, by the beginning of the nineteenth century the notated artefact was no longer a mere proposal or prescription for performance, but could be seen as the work itself (Dahlhaus 1982, 10–11). On the other hand, the principle according to which composers aspiring to greatness were required to produce unique and original works of musical art had an increasingly strong impact on the notational means by which the score was produced.
Evidence of this can be found in the early editions of music by Franz Liszt, who ‘advocated inconsistent, but purposeful notation, always anticipating nascent technical developments and ultimately establishing the physical strategies of modern-day piano technique’ (Mueller 2011, 195). The numerous, albeit unsuccessful, attempts to reform staff notation brought forward during the twentieth century bear witness to both the resilience and the limitations of our current system (Busoni 1910; Schoenberg 1924; Karkoschka 1965; Smalley 1997; Roy 2003, among others). With the advent of electronic music following World War II, these limitations became blatantly obvious. Nonetheless, while work involving sound modulation, filters, oscillators, etc., greatly expanded the idea of what music could possibly be, many composers continued to transmit their musical thought via staff notation. For example, György Ligeti took great pains to transfer sound processes with which he had experimented in his unfinished Pièce électronique no. 3 to the orchestral score of Atmosphères (Iverson 2011, 67–75).4 With regard to staff notation as the primary vehicle for the transmission of musical works of art, a tipping point seems to have been reached in France in the 1970s.5 Working together under the title Itinéraire, the group of young composer-performers who would eventually engender spectral music (Dufourt, Grisey, Lévinas, Murail, Tessier et al.) used technology to explore the possibility of creating music understood as sound fields (periods of evolving sonic energy) rather than composing music via a system of symbols consigned to staff paper. In their view, conventional notation was not merely a constraint but an obstacle, and the result was an ambivalent attitude towards writing music and authorship. On the one hand, staff notation was portrayed as a reifying agent. In the early 1980s, Murail famously asked
why we must always speak of music in terms of dots and lines on pieces of paper. Our conception of music is held prisoner by tradition and by our education. … The sound has been confused with its representations, and we work with these, with symbols. Since these symbols are limited in number, we quickly come up against the wall…. (Murail 1984, 157–58)
On the other hand, Murail warned against abandoning what he called ‘écriture’.
Electronics opened our ears. But electronic music often suffers from an opposite excess: a lack of formalization, of ‘écriture’ or writing in the largest sense, of structuring the sonic universes that it discovers. (Murail 1984, 159)
Dufourt agreed. In his Manifesto announcing the Collectif de Recherche Instrumentale et de Synthèse Sonore (CRISS) [Collective of instrumental research and of sound synthesis], founded by Alain Bancquart, Dufourt and Murail in 1978, Dufourt alluded to the ‘discipline d’écriture’ [the discipline of writing] that was the price to be paid in order to turn mere dreams into creative outcomes (cited in Castanet 1998, 28).
A spectral examination of A Pierre 279 research and of sound synthesis] founded by Alan Bancquart, Dufourt and Murail in 1978, Dufourt alluded to the ‘discipline d’écriture’ [the discipline of writing] that was the price to be paid in order to turn mere dreams into creative outcomes (cited in Castanet 1998, 28). What Dufourt and Murail are referring to with terms like ‘écriture in the largest sense’ or the ‘discipline of writing’ is authorship. They were concerned that changes, not merely in the way music is notated, but more importantly how it is conceived and produced, would undermine their authorial roles as composers in the traditional sense of the term. With regard to authorship, Nono took a rather different approach. His interview with Philippe Albèra, given in French in 1987 (first published in the Programme of the Festival d’automne of that year), expresses a more relaxed attitude towards the limits of conventional notation, stating that he and his colleagues did the best they could to fix his musical ideas graphically. However, he was not particularly concerned about the limits because he had lost interest in the concept of ‘écriture’ [‘je ne tiens pas au concept de l’écriture’]. This is certainly not a position that Nono took lightly. He was keenly aware that he was letting go of a concept that had been fundamental to the art music culture of Europe for centuries. In response, Albèra observed that Nono’s recent work suggested that he was more interested in working with sound than in setting his musical ideas into fixed forms. Nono agreed, stating that he was far more interested in the creative process than in a final outcome. When Albèra asked what would happen to his work when he was no longer here, Nono shrugged, exclaiming, ‘Other musicians will make other music!’ [‘D’autres musiciens feront d’autre musiques!] (Albèra 1997, 19). To be sure, Nono’s interview with Albèra does not constitute a sustained theoretical reflection on the limits of conventional notation. Rather it is a form of declaratory discourse with little theoretical elaboration. Since the mid-nineteenth century, many composers (including the spectralists cited above) have indulged in this type of communication (consider the writings of Robert Schumann, Richard Wagner or Claude Debussy). In these texts, composers delineate horizons of expectation, define aesthetic trajectories and stake out territory in relation to colleagues and competitors. Though they lack elaboration, are often imprecise and may include polemics and even vicious attacks, they need to be given the attention they deserve.
The study of music that escapes conventional notation and the question of authorship
Nono’s A Pierre is one of a number of late works that pose a challenge to those who wish to examine this music (performers, musicologists, critics, informed listeners, etc.). As in many of the compositions for chamber ensembles and soloists produced during the 1980s, the score provides the musicians with a reliable source for the realisation of a performance, but it does not give a symbolic rendering of the musical outcome.
To test this claim, one need only try to follow any recording of A Pierre with the authorised performance score. With the exception of those few musicians who have performed this music on numerous occasions, readers will be lost within a few bars, for two reasons (the latter being far more significant than the former). First, the score is written for two different transposing instruments: contrabass flute in G (sounding an octave and a fourth lower than written) and contrabass clarinet in B flat (sounding two octaves and a major second lower when written in the treble clef and a major ninth lower when written in the bass clef). Consequently, the intervals that one hears do not match the written intervals. (A transcription of the performance score in sounding pitch does make following a recording somewhat easier; the interval arithmetic is sketched below.) Second, once the live electronic manipulation of sound becomes audible, which generally occurs around bar 8, electronically processed sounds overwhelm those produced by the two instrumentalists. Within a few bars of the beginning, the listener is quickly confronted with music that escapes conventional notation. This ambiguity is intentional and central to the aesthetics of A Pierre. André Richard and Marco Mazzolini, the editors of the authorised score, note that in performance the instrumentalists should not remove the mouthpieces from their lips even in the longest rests and pauses (Richard and Mazzolini, cited in Nono 1996, xvi). The reason is that Nono wants the predominantly soft sounds to float through the time and space of the performance. Attentive listeners (even those who have heard the work on numerous occasions) should not be able to tell whether a sound was produced directly by an instrumentalist or whether one of the loudspeaker pairs reproduced it using the delay function. Figure 13.1 presents a visual representation of a performance of A Pierre recorded at the Banff Centre (Alberta, Canada) on 28 February 2009 (see Chapter 12 for a thorough description of the data capture and transcription technique that enabled this visualisation).6 The image shows a performance outcome. It does not represent the work per se, but rather one among an infinite number of possible instantiations. If applied to another performance by the same musicians, these data capture and transcription techniques would yield different images of A Pierre, though we expect that ‘good’ performances would produce comparable images. Our transcription seizes the ephemeral aspects of performances that escape conventional notation. Figures 13.1–13.4 record the minute agogic and frequency fluctuations that occur in any performance. As such, these traces are not unrelated to the flexible repertoire of signs found in neumatic notation. The grid of the diagram presents duration (x axis) and pitch (y axis). The horizontal lines represent octaves on C; C4 is equivalent to the middle C of the piano keyboard. For the purposes of this text, whenever register needs specification, the octave number of the grid will be placed as a subscript next to the note name. The vertical lines indicate bar lines, which are numbered at the bottom of the page. Note that the space between bar lines changes.
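The interval arithmetic behind these transpositions can be made explicit. The helper below is ours, purely illustrative, and uses MIDI note numbers; it is not part of the transcription software described in Chapter 12.

```python
# Sounding pitch from written pitch, as MIDI note numbers; the semitone
# offsets follow the intervals stated above.
TRANSPOSITION = {
    "contrabass flute in G": -17,              # an octave and a fourth lower
    "contrabass clarinet (treble clef)": -26,  # two octaves and a major second
    "contrabass clarinet (bass clef)": -14,    # a major ninth lower
}

def sounding(written_midi: int, instrument: str) -> int:
    return written_midi + TRANSPOSITION[instrument]

print(sounding(70, "contrabass flute in G"))  # written B flat4 sounds F3 (53)
```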
Figure 13.1 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, transcription of the entire performance, recorded in Banff on 28 February 2009. For a colour image, see Figure 13.1 on the companion website http://live-electronic-music.com.
These fluctuations reflect the fact that the performers sometimes took more time in some bars than in others.7 Figure 12.6 presents bars 15–31 from the performance score. Nono placed a fermata over a double bar at the end of bar 16, which is why the performers took more time to complete this bar. On the whole, the performers were remarkably successful in maintaining the slow tempo given at the beginning of the performance score (crotchet = 30), which is why most bars have a similar length. The signs inscribed on this grid present audio information of the performance harvested from a spectrogram. Though presented in black and white here, our method colour-codes these signs to identify the source of the sound: green for the flute, blue for the clarinet, orange for the harmonisers and red for the filters. A cursory examination of Figure 13.1 reveals a dense web of acoustic events, all of which derive from the sounds produced by the contrabass flute and contrabass clarinet in concert. (A Pierre contains no prerecorded material.) Compared to the rich texture of the performance outcome, the performance score (Figure 12.6) seems, at first glance, surprisingly bare, with its slow succession of relatively long notes. The impression is deceiving. First, the intensity of almost every sound emitted by the two instrumentalists is constantly fluctuating at a very low dynamic level. In Figure 12.6, dynamic markings vary between pppp and mf. This dynamic range is maintained throughout the work except in bars 35–36, where the two instrumentalists both briefly crescendo to forte.
Figure 13.2 Luigi Nono, A Pierre. Dell’azzurro silenzio, inquietum, transcription of sounds produced by the contrabass flute and the contrabass clarinet directly, bars 17–29. For a colour image, please see Figure 13.2 on the companion website http://live-electronic-music.com.
Second, almost every bar is ‘animated’ with extended instrumental techniques (Nono 1996, xi–xiii). Figure 13.2 presents an excerpt from Figure 13.1 in which bars 17–29 have been enlarged, permitting a more detailed examination. In this excerpt, we see the sounds produced directly by the flutist and clarinettist. This unmodified sound is also subjected to a delay of 24 seconds, which can be seen in Figure 13.2. Finally, in this work Nono frequently asked both players to perform in such a way that their sounds emerge out of ‘breath noise’ at a very low dynamic level and then transition back into breath at the end. Nono notated this technique directly: empty triangular note-heads without stems indicate breath only; a triangle added to the stem of a normal note means that the instrumentalist should produce a mixture of breath and pitch; normally written notes should sound the pitch with no breath. In the clarinet part, this technique is also indicated with the words ‘con soffio’ (with breath) (see Figure 12.6, bars 17–19). The technique results in light grey ‘plumes’ of sound that rise in the background of the transcription like cumulus clouds on the horizon. The breath sounds produced by the contrabass flute and the contrabass clarinet were colour-coded green and blue respectively, a distinction that is invisible in Figures 13.1, 13.2 and 13.4. However, whereas the breath sounds produced by the contrabass flute rise in thin columns as high as octave C10, those produced by the contrabass clarinet are largely contained in the register range between C6 and C8.
Figure 13.3 Luigi Nono, A Pierre, transcription of sounds produced by the harmonisers and filter 3, bars 17–29. For a colour image, please see Figure 13.3 on the companion website http://live-electronic-music.com.
Our transcription also takes account of the numerous ways in which the sounds produced by the two instrumentalists are modified in real time during the performance. As noted by Jan Burle in Chapter 12, we not only captured the musical outcome of these modifications but also recorded each stage of the modification process. Figure 12.1 shows the diagram of the live-electronic apparatus. The arrows indicate the points in the apparatus where line recordings were made. As we see, the modifications include reverberation, two harmonisers (one transposes the sound a minor seventh lower and the other a tritone lower) and two delays. The 24-second delay, mentioned above, transmits unmodified sounds directly. The 12-second delay subjects the sounds to three band-pass filters. In the score, the bandwidths are set as follows: filter 1 (40–300 Hz), filter 2 (675–1012 Hz) and filter 3 (2278–3417 Hz) (Nono 1996, xiv). Figure 13.3 shows the sounds emitted by the harmonisers and the filters. The sounds produced by the harmonisers appear primarily as vertical blocks or bursts of sound. Their duration can vary from less than one crotchet (see Figure 13.3, bar 29) to approximately a bar and a half (see Figure 13.4, bars 18–19, 24–25 and 27–28). These columns of sound occur regularly throughout the piece and have the effect of blurring the textures in which they are heard. By contrast, in this performance, the sounds subjected to band-pass filters occur only just above and below the seventh octave. In Figure 13.3, traces encased in boxes represent the filtered sound, which corresponds with the bandwidth of filter number 3. The band-pass filters 1 and 2 were turned off in this recording because they produced a
Figure 13.4 Luigi Nono, A Pierre, amalgamation of Figures 13.2 and 13.3, bars 17–29. For a colour image, please see Figure 13.4 on the companion website http://live-electronic-music.com.
ringing feedback sound that had no place in the work. Thus, our transcription also reports on unpredictable aspects of the performance that were not explicitly prompted by the score.
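The routing just described can also be summarised schematically. The sketch below is a purely illustrative reconstruction, not Haller’s apparatus or the software used in Banff: it assumes generic fourth-order Butterworth band-pass filters, substitutes a naive resampling transposition for the harmonisers (a real harmoniser preserves duration) and omits reverberation and spatialisation; all function names are our own.

import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100                                       # assumed sample rate
BANDS = [(40, 300), (675, 1012), (2278, 3417)]   # filters 1-3 (Nono 1996, xiv)
MINOR_SEVENTH_DOWN = 2 ** (-10 / 12)             # harmoniser 1 ratio
TRITONE_DOWN = 2 ** (-6 / 12)                    # harmoniser 2 ratio

def bandpass(x, lo, hi, fs=FS):
    sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
    return sosfilt(sos, x)

def delay(x, seconds, fs=FS):
    return np.concatenate([np.zeros(int(seconds * fs)), x])

def transpose(x, ratio):
    # Toy transposition by resampling; it also stretches duration,
    # which a real harmoniser would avoid (phase vocoder, PSOLA).
    positions = np.arange(0, len(x) - 1, ratio)
    return np.interp(positions, np.arange(len(x)), x)

def mix(*signals):
    out = np.zeros(max(len(s) for s in signals))
    for s in signals:
        out[:len(s)] += s
    return out

def a_pierre_chain(x):
    direct = delay(x, 24)                         # unmodified 24-second delay
    filtered = delay(mix(*(bandpass(x, lo, hi) for lo, hi in BANDS)), 12)
    harmonised = mix(transpose(x, MINOR_SEVENTH_DOWN), transpose(x, TRITONE_DOWN))
    return mix(x, direct, filtered, harmonised)   # reverberation omitted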
Figure 13.4, which amalgamates Figures 13.2 and 13.3, shows an interesting example of how the flutist interpreted and implemented Nono’s notated instructions. Bars 23–24 show the flutist reaching very high pitches in the eighth and ninth octaves. In the performance score, Nono instructed the flutist to perform a dyad (see Figure 12.6): a B flat6 as a breve tied to a crotchet, followed by an E flat7 as a semi-breve tied to a breve. This is an example of what André Richard and Marco Mazzolini (the editors of the performance score) call a multiple sound (suoni multipli in the original Italian text), which should be performed so that the upper partials of each pitch alternate in an attempt to have both pitches sound simultaneously (Richard and Mazzolini, cited in Nono 1996, xvi). In Figure 13.4, we see that the flutist correctly played the B flat6, which sounds as F5, at the beginning of bar 23. The second pitch of the dyad, E flat7 sounding B flat5, was only achieved at about the second beat of bar 24, three beats too late according to the performance score. However, during those three beats she successfully sounded two upper partials of the dyad; note the B flat8 and F9 embedded in breath noise in bars 23–24. Consequently, if she did not respect the letter of Nono’s score in terms of duration, she most certainly respected the spirit of his intentions.
This moment in the performance is significant for a number of reasons. First, the flutist’s gesture resulted in the highest sounding distinct pitches in this performance (see Figure 13.1). Second, the intensity of the gesture was musically justified, because it effectively ushered in the strong glissandi performed by the clarinettist in bar 24, which constitute the primary musical event in the first half of the piece. Third, it raises the question of authorship, which is not as banal as it might appear at first blush. Traditionally, we would say that by following the score and instructions the performers are simply executing the will of the composer, who remains the author of the work. This is the position adopted by Hans Peter Haller, who worked closely with Nono at the Heinrich Strobel Foundation in Freiburg (Haller 1995, vol. 2, 117). However, in the excerpt presented in Figure 13.4, this interpretation is problematic. The highest notated pitch in the performance score is E flat7. It occurs in the contrabass flute part at bars 23–24 and again just before the end at bars 57–59. The written pitch sounds B flat5, which constitutes a notational ceiling in A Pierre. In other words, the music that takes place above the sixth octave is largely in the hands of the instrumentalists, and, as Figure 13.1 clearly shows, much of the music of this work occurs above B flat5.
The sceptical reader could object that the same phenomenon occurs in all European art music. The notated score does not (and was never intended to) account for the sound field generated by a Romantic symphony. One might wish to designate this acoustic resonance as a collateral aspect of music in the common practice period. Composers, performers and attentive listeners were doubtless aware of the acoustic properties of certain pieces performed in specific locations. However, these properties were not considered part of the work’s compositional parameters. The same cannot be said of Nono’s music, and especially of the works composed during his last decade at the Strobel Foundation. In these works, microtonal pitch and acoustic space were appropriated as compositional parameters. Indeed, Nono endeavoured to make the concert space sing and alluded to this in his last lecture, given at the Centre Acanthes on 16 July 1989 during the Festival d’Avignon. Referring to Spem in alium, Nono observed that although the harmonic content was extremely limited (because the work is written for forty voices), Thomas Tallis nevertheless succeeded in exploring sonic space and, in so doing, brought it alive and even allowed it to sing (Nono 1993, 275).8 Finally, the editors of the performance score insist that the performers should consistently strive to produce sounds that are constantly mobile (Richard and Mazzolini, cited in Nono 1996, xvi). This sonic mobility was not and cannot be notated because it is dependent on the specific acoustics of each concert space. Thus, the composer’s instructions, which the flutist scrupulously followed, forced her to assume the role of an enhanced collaborator. In fact, this is precisely what Nono expected of his performers, and not only during this period in his career. In a program note for a performance of Post-praeludium per Donau for tuba and live electronics, written in German by Haller after a conversation with Nono, we read that though the evolution or development of the work in time is fixed in a detailed way by the composer, the notation is merely a point of departure. The performer should take this
initial indication and use it to continuously search for new outcomes (Nono 2001, vol. I, 505). This statement enables a better understanding of what Nono meant when he claimed he was no longer interested in the concept of ‘écriture’. Nono was not abandoning his responsibility for the work, but he was letting go of the ideology of the composer as a solitary author, i.e. as a kind of master puppeteer, manipulating everyone and everything beneath him. Though no one in his entourage contested his leadership, he was doubtless well aware of the numerous contributions made by the performers, technicians and assistants who made up the team that he led. If Nono’s music is to continue to resonate in the twenty-first century, performers will have to be prepared to assume this responsibility of enhanced collaboration.
The content of A Pierre
However well crafted and performed, Nono’s music is more than the frequencies, durations, proportions and intensities that make it up. His art reaches out to extra-musical dimensions, lending depth to the brilliant surface of the sound. As indicated by its title, A Pierre is dedicated to Pierre Boulez and was written in the first two months of 1985 in honour of that composer’s sixtieth birthday.9 As important as the dedication is, the real content of the work lies elsewhere. Nono provided two texts as program notes for performances of A Pierre, both of which are published. The first is a text in French that served as program notes for the presentation at the Festival d’automne in Paris on 5 October 1987 (Nono 1987a, 203; see also Nono 1993, 476–77). Nono evoked Paris and the rumours of revolutionary change that have resonated in that city for the past two centuries. In these notes, which tend to intellectualise the content of the work, Nono alludes to the French Revolution, Boulez’s apartment in the 1950s and the founding of IRCAM in the 1970s. The second is a short poetic elaboration written in Italian for the liner notes of an LP recording (Edition RZ 1004) of A Pierre, released in 1990. In this case, it is worth citing the original Italian text and its translation.
A Pierre, dell’azzurro silenzio, inquietum
Più cori continuamente cangianti per formanti di voci – timbri – spazi interdinamizzati e alcune possibilità di Trasformazione del live electronics. (Nono 2001, 495)
Diverse choirs continuously changing for formants of voices – timbres – interdynamic spaces and some possibilities of transformation through live electronics.
In both texts, Nono focuses on the idea of fleeting fragments of voices that inhabit spaces, rendering them dynamic. Like the program notes for the Paris concert, this text also evokes the possibility of transformational change.
I believe this short text comes close to circumscribing the content of this music. To reinforce my claim, I would like to add a third text by Nono, which does not mention A Pierre explicitly. This final text brings us back to the lecture entitled ‘Altre possibilità di ascolto’ cited above. In it, Nono described the sounds of Venice on a Friday evening around 19:00. Once again, I include the Italian original because my translation does not capture the poetic flair of the author.
Venice, from the perspective of the Giudecca, from San Giorgio, across the mirror of the San Marco basin, around 7 PM on a Friday, is a beautiful soundscape, a truly magical experience: when, from the bell towers, the bells ring out some ancient religious signal (Vespers, Angelus), the reverberations and echoes overlap those sounds, such that one can no longer tell from which tower the first sound came, or how and where the exchanges of sounds multiply in all directions on the reflecting surface of the water.
[Venezia, dalla parte della Giudecca, da San Giorgio, dallo specchio d’acqua del bacino di San Marco, verso le sette del venerdì, è una bellissima scena sonora, una vera magia: quando suonano dai campanili le campane per battere qualche antico segnale religioso (vespri, Angelus) a quei suoni si sovrappongono i riverberi, gli echi, così che non si capisce più da quale campanile giunga il primo suono, come e dove si infittiscano gli scambi dei suoni in tutte le direzioni sulla superficie riflettente dell’acqua.] (Nono 2001, vol. I, 536)
As noted, the published version of this text makes no explicit mention of A Pierre. However, in Figure 13.5, we see the first page of a series of notes that Nono drew up in preparation for this lecture: notice the apparent references to A Pierre near the top of the page. That Nono should have considered referring to this composition, which he had recently completed, and to others he was then working on is hardly surprising. Furthermore, the fact that A Pierre is alluded to in the notes does not necessarily mean that Nono was thinking of this work when he described the Venetian soundscape later in the lecture. Nonetheless, the same words Nono used to describe the Venetian soundscape can easily be used to describe the experience of listening to A Pierre. Throughout this piece, we hear wisps and snippets of sound that define a perceptible acoustic space. These sounds are not simply delivered to that space from the playing area in front of the audience. Like the sounds of bells, waterfowl, vaporetti and people going about their business that occupy and define Venetian soundscapes, the acoustic space or sound field created in a good performance of A Pierre is also filled with acoustic fragments coming at us from all directions. To be sure, these sounds do not evoke Venice on a Friday evening. However, Nono’s lyrical description of that soundscape remains an effective way of metaphorically capturing some of the more ephemeral aspects of this remarkable music.
Figure 13.5 Luigi Nono, notes for the lecture ‘Altre possibilità di ascolto’ presented in August 1985 at the Fondazione Giorgio Cini. Source: Published with permission of the Archivio Luigi Nono, Venice, © Eredi Luigi Nono.
Notes
1 This chapter reports on a research project undertaken with financial support from the Social Sciences and Humanities Research Council of Canada (SSHRC), for which we are grateful.
2 Restagno: … Questa visione analitica in movimento, che è così tipica del nostro tempo, mi fa venire in mente quella musica recentissima che si fa in Francia da qualche tempo e che viene definita «musica spettrale», grazie alla sua nuova conoscenza dello spettro acustico. La praticano, secondo me, molto bene compositori come Hugues Dufourt o Gérard Grisey, i quali partono dal presupposto che oggi è iniziata nella conoscenza dello spettro acustico una nuova era; come se avessimo appena inventato il microscopio per guardare dentro alla musica. Nono: I compositori che hai citato scoprono un po’ l’acqua calda. Pensa alle analisi, discussioni, ricerche degli anni Cinquanta a Darmstadt! (Nono 2001, Vol. II, p. 535) [Restagno: … This analytical vision in movement, which is so typical of our time, reminds me of that very recent music made in France for some time now and termed ‘spectral music’ on account of its new knowledge of the acoustic spectrum. In my opinion, composers such as Hugues Dufourt or Gérard Grisey practise it very well; they start from the premise that a new era in the knowledge of the acoustic spectrum has begun, as if we had just invented the microscope to look inside music. Nono: The composers you have cited are more or less reinventing the wheel. Think of the analyses, discussions and research of the 1950s at Darmstadt!] This extensive interview was initially published under the title ‘Un’autobiografia dell’autore raccontata da Enzo Restagno’ (Nono 1987c, 3–73).
3 Nono’s text was initially presented as a lecture as part of a course entitled ‘Corso di alta cultura’ at the Fondazione Giorgio Cini in Venice on 30–31 August 1985. The text was later published in a collection of essays entitled L’Europa Musicale. Un nuovo Rinascimento: La Civiltà dell’Ascolto (1988, 107–24).
4 Interestingly, Pièce électronique no. 3 initially bore the title Atmosphères.
5 With the benefit of hindsight, Dufourt suggests that nothing less than a revolution took place in the field of cognitive psychology, particularly with regard
to music (2014, 23). A thorough critique of this claim is not possible within the confines of this chapter. Suffice it to say that political, social and technological change was in the air and was having a strong impact, not only on artistic production but also on its critical reception.
6 Jan Burle and I would like to thank Evan Rothery and Yangke Li for their help with this research project.
7 Our transcription method does not allow for the automated placement of bar lines. Their place on the grid was established through careful, repeated listening by the transcribers.
8 ‘Je vais terminer avec Spem in alium de Tallis, qui est l’exemple type d’une pièce spatiale. … Avec 8 chœurs, on ne peut utiliser que la tonique et la dominante. Mais ce qui est extraordinaire, c’est qu’avec cette soi-disant réduction des possibilités, Tallis explore l’espace, fait vivre l’espace, fait que l’espace même devient à chanter’ (Nono 1993, 275).
9 The fact that the piece is exactly sixty bars long is doubtless related, anecdotally, to the dedication.
14 Experiencing music as strong works or as games
The examination of learning processes in the production and reception of live electronic music
Vincent Tiffon
Live electronic music poses challenges for traditional musicologists and music theorists. Among these is the fact that this music can rarely be captured in its entirety by conventional notation. As a result, the so-called neutral level (Nattiez 1975, 54–55), usually represented by the full score, is absent. To suggest that recordings of live electronic music can replace the score and be treated as a substitute or surrogate for the traditional full score is problematic, because recorded performances are always one instantiation of the work and not the work itself. To get around this problem, I propose an examination of the creative process, which, in the case of live electronic music, will necessarily go well beyond the work of the composer, first by including an examination of the collaborative efforts of the different actors: the assistant or Réalisateur en Informatique Musicale (RIM, in English ‘computer music designer’) and the performers, who are often engaged in the initial creative development of the work. Second, I will examine how active listeners engage with this music. Indeed, we will see that in one case the line between the active listener and the performer is blurred. My intention is to examine works in which listeners (musicants) are involved in shaping the performance.1 To that end, I will compare two compositions: one in which listeners are subjected to a traditional performance experience and a second in which musicants are actively engaged with the performance outcome. This active involvement is where the ethos of games enters the discussion.
In order to better understand the collective creative process and how it is received, we need to analyse the learning procedures present throughout the conception and reception of the work. I argue that the learning dimension is not only essential for the performer’s role in actualising the work, but also makes that role crucial at the level of conception. Often, at the very beginning of the process, the composer and musical assistant observe the performer while configuring a given device (such as the score and live electronics). The assistant then adjusts this device by rewriting elements of the patch, while the composer reworks his score, taking into account new possibilities, etc. Therefore, examining this collective learning process is crucial to our methods for analysing live electronic music.
My primary objects of study are two contemporary pieces from opposite ends of the live-electronic music spectrum: first, Marco Stroppa’s …of Silence… (2007) for saxophone and chamber electronics, aided by the software Antescofo (designed by Arshia Cont), which is capable of learning the behaviour of the performer; second, a sonic installation entitled XY (2007…) produced by the Équipe Dispositifs, Expérimentations, Situations en Art Contemporain (hereafter EDESAC) of the Université de Lille (France), in which visitors learn to play the device through PureData programming. Access to sources charting the creative processes of these works is essential to the success of our task at hand. We are fortunate to have such access to documents pertaining to the composition of …of Silence… and the XY project: I often collaborated with Stroppa at IRCAM, and I was the EDESAC team director during the conception of the XY project and its realisation as an interactive device.
Focusing on these two works forces us to expand the scope of the category live-electronic music. Doing so requires an account of various interactive features and different types of sound installations in which learning is essential to being able to play. Our task here is not merely to examine the influence of technologies on musical creation, or inversely, how the desires of composers work to shape technological innovations. This study moves beyond the assumption that all technologies are in part tied to creative processes and focuses instead on the way in which these reciprocal influences manifest themselves. Our principal hypothesis is that learning and appropriating new technologies created by, or available to, a network of creators (composers, engineers, technicians, computer music designers, musicians, visitors present at a sound installation, etc.) is central to the ever-shifting relationship between artistic concepts and the instruments of their actualisation.
The interaction between technological innovations and musical invention constitutes the very definition of mediology (Tiffon 2005a). Musical mediology takes as its primary object of study processes of musical transmission throughout history and examines the technologies that mediate this transmission (Debray 1997). The traditional interpretive approaches to the study of music focus on the ‘what’ and the ‘why’. By contrast, mediology seeks to explain ‘how’ a work exists and is conserved. Mediology observes technical and institutional aspects in their most material components to better understand how they transform the ways in which music is conceived, conserved and transmitted. In so doing, this perspective approaches a semeiology (the study of symptoms) rather than a semiology (the study of symbols). Put another way, mediology concerns itself with the power of signs rather than a given sign’s own ontology. Through careful observation of each technical medium (media storage, symbolisation processes, diffusion devices) and institutional media (social codes of communication, organising structures, training sites), a mediological perspective examines how these media manifest themselves in specific spheres: orality (logosphere), printing (graphosphere), audio-video recording (audiovisual sphere) and computing
(digital cybersphere). Mediology allows us to apply the principle of causality to the study of music: we might, for example, consider how a simple technology can engender a large musical effect (to reference the ‘butterfly effect’ from chaos theory). Researching the symptoms of the propagation of musical ideas, or the ‘reverberations’ (Meyer 1967, 67) of a musical practice, is related to what Nattiez (2003, 51) calls the ‘intrigue’ of musical mediology.
Works that engage musical interactivity using digital tools from the hypersphere bring musical and scientific research together. At the nexus of science and art, these two works – like many pieces since the 1980s – combine the efforts of artists and researchers, both of whom hope to have a lasting impact in their respective fields. For …of Silence…, the Stroppa/Cont team developed a new score-following program (Antescofo), which they claim will significantly change common practices in this field. The team responsible for XY framed their project as a demonstration of Gilbert Simondon’s concept of the process of psychic and collective individuation, by a performance of ‘conjectures and refutations’ (Popper 1963).2
To support our central claim, we have chosen to examine these two contrasting examples that highlight creative processes and artistic practices regarding interpretation and the way in which processes of learning manifest themselves. The two works in question are deliberately close in certain methodological dimensions – assuring a certain objectivity in our comparative analysis – but opposed in terms of their conception, thus giving our discussion a dialectical structure. We will first outline these two works in terms of their technical feasibility and the strategic choices made by the creators of the works and the creators of the devices required to perform them. In so doing, I wish to suggest avenues for further reflection rather than draw definitive conclusions.
Description of the repertoire
Marco Stroppa’s …of Silence…3 for alto saxophone and chamber electronics (Stroppa’s expression) holds an established place in the repertoire of contemporary music. Born in Italy, Stroppa currently lives in France and in Stuttgart (Germany), where he is a professor at the Musikhochschule. He studied piano and composition at the conservatories of Verona, Milan and Venice, as well as electronic music with Alvise Vidolin – notably the Music V program – at the University of Padova’s Centro di Sonologia Computazionale (CSC) between 1980 and 1984. His musical and computing acumen led him to IRCAM (1982) and then to MIT (1984–86) to work on cognitive psychology and artificial intelligence. He then returned to IRCAM (1987) as director of musical research. In 1990, he left this position to concentrate on his own works and on teaching at the Musikhochschule in Stuttgart. Though still close to IRCAM, Stroppa is one of the Institute’s harshest critics regarding the immoderate use of real-time technology. His conviction echoes that of his predecessor Jean-Claude Risset, who also left IRCAM. In his compositions, Stroppa stubbornly refuses live electronics as long as the tools lack
a sufficient degree of flexibility. For Stroppa, live electronics must be used only if they can reproduce a performance interactivity akin to traditional chamber music playing. Before participating actively in the creation of Antescofo (2007), he created a series of works for chamber electronics that investigate questions of space. His original position on live electronics has not changed, and despite the apparent differences between Traiettoria for piano and computer music (1982–84, rev. 1988) and …of Silence… (2007), Stroppa remains one of the rare composers who does not rely on computer music designers, staying faithful to his critical position while remaining proactive in the technological application of music.
In composing …of Silence…, Stroppa worked with Arshia Cont, a computer scientist at IRCAM who led the team that designed the synchronous reactive language at the heart of the project. Cont and Stroppa collaborated closely during the creative process and realisation of this work. Cont’s team (drawn from across IRCAM) created the technology necessary for the work’s production, a software system entitled Antescofo. Situated at the heart of this project, Cont’s activities go beyond the traditional role of a computer music designer through the design of a software program meant to advance the field of human-machine interactivity. In this regard, …of Silence… is quite traditional. An interface detects the movements of an instrumentalist on stage, allowing the software to respond with the shortest possible delay as it carries out programs written by the composer and computer music designer. The computer’s response is diffused by a group of speakers called la Timée.4 In the second version of the work, Stroppa replaced la Timée with an acoustic totem, a column of five speakers: four directed toward the cardinal points of the room (front/back/left/right), with an additional speaker oriented towards the ceiling (Figures 14.1 and 14.2).5
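The logic of this exchange can be reduced to a toy example. Antescofo itself couples a probabilistic ‘listening machine’ with a synchronous reactive programming language; the Python fragment below is emphatically not Antescofo’s design or interface, but a minimal illustration of the underlying idea – matching incoming pitch estimates against an ordered list of expected events and handing the attached electronic action to the machine – with every name invented for the occasion.

from dataclasses import dataclass

@dataclass
class Event:
    pitch: float   # expected fundamental frequency in Hz
    action: str    # label of the electronic action to trigger

class ScoreFollower:
    def __init__(self, score, tolerance=0.03):
        self.score = score          # ordered list of Event objects
        self.position = 0           # index of the next expected event
        self.tolerance = tolerance  # relative tolerance (~half a semitone)

    def on_pitch(self, f0):
        # Called for every pitch estimate from the analysis front end;
        # returns the action to trigger, or None while waiting.
        if self.position >= len(self.score):
            return None
        expected = self.score[self.position]
        if abs(f0 - expected.pitch) <= self.tolerance * expected.pitch:
            self.position += 1
            return expected.action
        return None

follower = ScoreFollower([Event(440.0, 'open_filter'), Event(466.2, 'start_delay')])
print(follower.on_pitch(441.0))   # -> 'open_filter'
print(follower.on_pitch(300.0))   # -> None: not the expected pitch

A real follower must of course tolerate wrong notes, omissions and tempo fluctuation, which is precisely where Antescofo’s probabilistic machinery goes beyond a naive sketch of this kind.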
Figure 14.1 Marco Stroppa, …of Silence…, photo of the ‘acoustic totem’. Source: With permission of IRCAM.
Figure 14.2 Marco Stroppa, …of Silence…, diagram of the audio setup.
The acoustic totem (as Stroppa refers to it) provides a visible counterpart to a soundscape produced from necessarily invisible sources and creates possibilities for sound diffusion without electronic spatialisation in the hall. One can thus consider …of Silence… a ‘classic’ in the repertoire of live electronic music, as it shares many common traits with ‘typical’ music produced at IRCAM.6 However, Stroppa approaches interaction quite differently than the majority of IRCAM productions, particularly in his insistence on an equal balance between the instrumental and electronic elements of his work.
The XY project, an immersive interactive sound installation, sits at the other end of the live electronic music spectrum.7 The project was undertaken by a team of academic researchers (EDESAC/CEAC/Université de Lille) consisting of professors, post-docs and graduate students, assisted by a part-time computer music designer. The resulting work (XY) features an interactive device. In 2007, the XY project was installed in a dark room of 100 square metres equipped with a sound-diffusion device (four speakers and a subwoofer) as well as an infrared camera. Upon entering the dark room, each visitor put on a hat, baseball cap or bracelets with LED lights attached to them. The camera then captured the diodes’ signals to determine the position of each participant (movements, speed, points of rest) and charted this orthonormal data on X and Y axes (see Figures 14.3 and 14.4). The PureData program transmitted the data in real time, responding to the visitors’ movements following scripts preprogrammed by the device’s designers. Multiple musical scenarios emerged as the program performed actions ranging from simple tasks, like triggering certain physical areas, to more complex ones that adapt to the number of participants and the speed
Figure 14.3 XY installation, diagram of the audio device and capture.
Figure 14.4 XY installation, technical schema.
with which they learn to use the device. Diffused in stereo, the resulting sound was then simultaneously recorded and stored on a website accessible both to the visitors present in the space and to interested Internet users.
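The capture-to-sound chain can likewise be reduced to a toy example. The installation itself runs scenarios programmed in PureData; the Python fragment below, whose zone names and functions are invented for this illustration, merely shows the two families of mapping involved: discrete triggering zones and continuous control of a collective sound.

# Hypothetical zone layout: (name, x_min, x_max, y_min, y_max, sample).
ZONES = [('bells', 0.0, 0.5, 0.0, 0.5, 'bells.wav'),
         ('rain',  0.5, 1.0, 0.5, 1.0, 'rain.wav')]

def triggered_samples(positions):
    # positions: dict mapping a participant id to the normalised (x, y)
    # coordinates derived from one infrared camera frame.
    hits = []
    for pid, (x, y) in positions.items():
        for name, x0, x1, y0, y1, sample in ZONES:
            if x0 <= x < x1 and y0 <= y < y1:
                hits.append((pid, name, sample))
    return hits

def collective_spectrum(positions, f0=110.0):
    # A continuous scenario: each participant voices one partial of a
    # shared spectrum, the y coordinate setting that partial's amplitude.
    return [(k * f0, min(max(y, 0.0), 1.0))
            for k, (pid, (x, y)) in enumerate(sorted(positions.items()), start=1)]

positions = {'cap1': (0.2, 0.8), 'cap2': (0.7, 0.6)}
print(triggered_samples(positions))    # each cap falls in one zone
print(collective_spectrum(positions))  # [(110.0, 0.8), (220.0, 0.6)]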
In the following comparison of …of Silence… (hereafter OS) and the XY project (hereafter XY), my intention is not to promote one of these artistic offerings over the other, but rather to understand more fully how learning processes function in two different, even conflicting, musical outcomes. To do so, we will consider points in common and significant differences.
Comparing OS and XY
Firmly situated in the realm of interactivity, the technology used in both OS and XY is cutting edge. Whereas IRCAM boasts technological
resources and personnel of the highest international calibre, the democratisation of contemporary technologies has enabled relatively small teams, like EDESAC, to access technology that is both efficient and reliable. Like the majority of musical creations that use digital technology today, both projects employ (though not necessarily to the same degree) distinct modes of electronic media: composition in delayed time and interpretation in real time (Stroppa 1991).
The first major difference between these two works is the way in which the music is received. OS is presented by a composer to a listening public; XY, on the other hand, offers multiple performance situations to an audience that participates in the creation of the sounds they hear. Thus, they present two different models for the relationship between a musical work and its audience, and these models coincide with the works’ opposing approaches to spatial projection. On the one hand, OS is a finished work, a prescription meant to be actualised through its mediated performance by a saxophonist. OS is presented to its listeners in an ad hoc concert hall space, which by definition implies a symbolic divide between a seated public and the performers onstage. The natural sound of the saxophone on stage, combined with both its amplification and the diffusion of electronic sounds from the acoustic totem, allows for a spatial dramaturgy. Stroppa initially envisioned OS with a single point of projection made up of three dipoles (producing a sort of cardioid effect): one lateral dipole oriented from left to right, another from front to back, and a vertical dipole from bottom to top. This configuration makes possible a veritable division of space transmitted from a single source. By carefully controlling the instrumentalist’s position in relation to the totem (and by extension to the position of the public), Stroppa purposely conceals certain elements. The saxophonist plays in various locations – hidden behind the totem, stage right or left, in front of the public or centre stage with his or her back to the audience – creating a ‘conch shell effect’ from the sounds heard without direct mediation from the speakers. In this work, Stroppa strategically investigates various listening potentials in terms of spatial diffusion; rather than immersing the audience in the sound, listeners are encouraged to become more aware of how sound articulates space.
On the other hand, the XY project privileges ambiophonic diffusion, thus immersing visitors in its soundscape.8 XY presents a ‘musical situation’ in which visitors are encouraged to interact with their environment, transforming them into ‘musicants’, or musician-participants (Vinet 2005; Tiffon 2012). The term ‘visitors’ works better than ‘listeners’ here, since their function in this performance goes beyond listening. Rather than passive wanderers, visitors are active explorers; their movements trigger sound events, with which the visitors – six participants at most – interact according to their perception of musical experiences framed by the way in which specific scenarios have been programmed.
Experiencing music as strong works 297 XY’s creators have implicitly created a range of programmed scenarios that frame visitor participation that can be classified in three types. 1 The scenarios of the first type are called ‘ambiophonic scenarios’, the titles of which are La Ville, Espace, (Pro)pulsions, Tropiques and Canons rythmiques. These scenarios immerse visitors in a virtual sonic space by interacting with their physical placement and movements and are conceived primarily for children. Visitors are asked to search for zones of detection, which is a very simple goal: the immersive sound enables a didactic target. These scenarios enable visitors of any age to learn how to use the device. In fact, visitors should forget that their bodies are separate from the experience: an iconic experience rather than a symbolic experience.9 2 The second type is labelled ‘action and reaction scenarios’, the titles of which are Zen, Jamais deux sans trois, Tête de lecture and Concrète. These scenarios are deemed intermediary in terms of difficulty. All include situations in which immediate actions by a participant produce an organised sound. Zen, for example, invites visitors to construct a complex sound, or a spectrum, in which each participant becomes one of the partials of the whole. Contrary to the type 1 scenarios, individual participation here actively influences the sound generated by all of the visitors. 3 The third type is called ‘trial and memory scenarios’, the titles of which are Theremin, Clavier Ionien, SynthéXY and SyntheAléatoire. These scenarios include situations that require skill to operate the installation, almost like a musical instrument, and thus necessitate the participation of multiple visitors in a collective learning process. A solo musicant could not perform alone, not only because the scenarios are not technically designed for this, but also because the philosophical principle behind the concept relies on the collective experience. These scenarios function through collective repetition, and the musical acumen of the participants (however rudimentary) is assumed. Listening then becomes one of multiple creative functions in this acoustic experience. XY promotes a shift from a listener to a visitor-musicant: both an actor and a receiver of music in the centre of a rather simple ambiophonic sound system. The visitor-musicant (or user of the sonic installation) is not obliged to follow a complex operation manual in order to understand the operating principles of the PureData software. Instead, he or she is brought into the technical dimension of instrumental practice, like a beginner pianist practicing scales and exercises to acquire a minimum of virtuosity, which allows the performer to read or play a more demanding repertoire. The visitor, as musicant, must therefore acquire a technical mastery of the device, the game scenario, so as to become better at making music, like a musician with
his instrument. As instrumental mastery of the XY device is gained, the musicant will derive greater satisfaction from the sounds achieved. Feedback collected since 2010 shows that regular practice of the same scenario (more than fifteen sessions) allows all musicants to acquire sufficient instrumental mastery. Therefore, though it uses a gaming ethos, XY is much more than a simple entertaining game. By entering into a sound-creation process that includes aesthetic, philosophical and political ramifications, musicants (individually or collectively) become creative musicians. The XY project is not designed as a single-use installation arousing only occasional interest. On the contrary, it is first and foremost a musical device that aims to train the participant in the production and organisation of sounds. Furthermore, the visitor-musicant is not merely an interpreter of a new work; indeed, XY is not a traditionally conceived work of musical art. Leaving aside the question of authorship, the visitor-musicant is a co-creator of sounds in an open musical environment that resembles a game, rather than an ‘open work’ (Eco 1989). With this description, we can see how these two devices highlight opposing conceptions of the creative process, which then result in two very different musical experiences for their respective publics.
The second major rift between these two musical experiences rests in the choice of whether or not to use haptic technologies. Deemed too unwieldy and imprecise for the interaction between man and machine, nonhaptic devices were not a part of OS’s initial plans. Considering the project’s goal of achieving the most accurate synchronisation possible – within the incompressible limits of computation time – the choice to limit the use of nonhaptic processes that would slow down the system’s reaction time makes sense. Even today (2014), with such fast-acting technologies as RFID (Radio Frequency Identification), the detection phase via a nonhaptic device generates latencies incompatible with the requirements of musicians, for whom the processes of anticipation – in the context of chamber music, for example – allow for perfect synchronisation. The choice to use haptic technologies for OS, made at the project’s outset, was then reinforced through successive actualisations of the score-following program (Antescofo) and is currently still the work’s preferred mode of performance.10
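The orders of magnitude at stake can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions, not measurements taken from either project.

SAMPLE_RATE = 44100   # Hz, assumed audio interface rate
BUFFER_SIZE = 256     # samples per audio callback, a common setting
CAMERA_FPS = 30       # assumed frame rate of a vision-based capture stage

audio_latency_ms = 1000 * BUFFER_SIZE / SAMPLE_RATE   # ~5.8 ms per buffer
camera_latency_ms = 1000 / CAMERA_FPS                 # up to ~33.3 ms per frame

print(f'audio buffer latency:  {audio_latency_ms:.1f} ms')
print(f'camera frame interval: {camera_latency_ms:.1f} ms')
# Chamber musicians perceive asynchronies of roughly 20 ms and less; a
# nonhaptic detection stage can exhaust that budget before any analysis runs.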
By contrast, the XY project’s choice to embrace a nonhaptic interface freed the device’s users (or visitor-musicants) from any requirement of skill, allowing them to focus on listening and on interacting with the system. The musicants can then concentrate on how the device responds to their behaviour and on the sounds they produce. These antinomic choices reveal two differing conceptions of instrumental ability. On the one hand (OS), the mastery of an instrument in the Western tradition has, since the nineteenth century, become increasingly specialised, reflecting a shift from the notion of a musician as both composer and performer (cf. the Baroque period) to one that strongly differentiates these two skill sets and, following the nineteenth century, places the composer in a superior position. On the other hand (XY), the mastery of an instrument is seen as part of a process of appropriating musical works, or more generally, an aesthetic experience. An interactive installation cannot succeed in presenting a sensory experience for its visitors if the mastery of its instrument requires virtuosic technique.
The writing process for OS is intimately linked to the machine that produces sound, whether acoustic or electronic instrument. The acoustic totem is designed as an onstage performance partner. All of these dimensions converge in Stroppa’s decision to cast the instrument and totem in the role of mediators of a listener’s aesthetic experience. For example, consider Stroppa’s choice to use an electronic tool to create a spontaneous wah-wah effect over the saxophone part. The score follower and its ability to control aspects of a performance note by note respond to the wishes of the composer, who counts on a listener’s ability to internalise this aesthetic experience. OS does include nominal elements of indeterminism – to avoid overly predictable responses – that contribute, together with the instrumentalist, to the creation of a unique interpretation of a recognisable work at each performance. The XY program, by contrast, was not written to reproduce a given sonic result. On the contrary, the programming aims to create optimal conditions for varied performance situations and to facilitate the musicants’ interaction with the interface. Although a common feature in both OS and XY, musical interaction originates from different poetic notions and even from different aesthetic foundations.
By comparing OS and XY, and their similar use of digital technology – even if the Antescofo software program is particularly sophisticated – we can better understand how very different musical outcomes and scientific presuppositions are delivered and translated into sounding results. Fully understanding the frames within which these devices work is crucial to this goal. The creative processes of both OS and XY move beyond a chronicled list of steps and must be understood within a network of actors, including the musicians, engineers, computer music designers and other participants in the elaboration of a work and its system. This study highlights the way in which these actors appropriate the devices they are in the process of creating and how learning processes inform choices over time, thus engaging in the very evolution of technological and musical creation. This feedback loop, as a cognitive principle, is central to artistic practices and to the emotive potential of a work. The discrete technologies that serve to create works are indeed also implicated in aesthetic expectations. Our framework for understanding the interaction between an instrument (OS) or interface (XY) and the machine reflects how the relationship operates between the listener of OS or the musicant in XY and the music that is produced.
Learning in the process of technological elaboration
In the creative processes of the two musical situations at hand, both collaborative teams first confronted the question of how to think about interaction: interaction for whom and with what? Should it serve synchronisation, or could chance also generate processes of interaction?
We might think generally of interaction in music as consisting of mutual listening between two performance participants. In OS, the participants are the saxophonist and the electronics; in XY, the musicants and the electronics. Since the 2000s, the tendency to treat these two aspects as mutual partners has been tied to the revival of the human in electronic music, through the capture of gesture and the ability to follow human activity through synchronisation between the instrument and the machine, or more broadly, through the mastery of synchronous time. Technological questions regarding gesture and its capture are outside the scope of our subject here. That said, in considering what initial choices were made in OS and XY and to what extent they are opposed – the use of haptic versus nonhaptic capture, for example – it is important to note that these choices were conditioned by aesthetic assumptions that influenced the concrete possibilities for dialogue between the two realms. The efficiency sought by the haptic capture in OS imposes and reinforces the notion that the availability of diffusion in a performance setting stands against the relatively static nature of the stage. For XY, the conduit that frees users from the necessary mastery of a traditional instrument involves a relatively imprecise control of the electronics; both the programmers who write the scenarios and the musicants whose interactions are shaped by the material’s functions must account for these technological constraints.
In the case of both works (as in all contemporary musical situations that feature a dialogue between man and machine), technical mastery is of the utmost importance to render effective the synchronisation between two staged protagonists. Analogous to a musical score facilitating synchronisation between a pianist and a piano (if one considers a piano as a mechanical and not an automatic machine), the interactive device as a whole, from detection to automatic reaction, has the same objective: to facilitate synchronisation between the musician (OS) or user (XY) and the instrument at an ‘instance of calculation’. In OS, this synchronisation, which is not systematically synchronous, is indispensable for the musician, who must actualise the score while indirectly piloting the machine that produces the electronic sounds. In XY, the visitors must be able to pilot the device themselves.
Learning in the process of diffusion
However close the gap between these two works in terms of creative processes, there is a significant difference in the listeners’ awareness of the synchronisation that is taking place. Once again, two situations are at play. In traditional performance, for example in the case of chamber music, the public is often expected to experience synchronisation coming from an imagined interaction. For example, consider Stroppa’s Traiettoria (1984) for piano and synthetic orchestra (using the Music V software); a given listener could reasonably assume that the work is generated by an interactive system in real time, even though the electronic sounds were fixed beforehand, either on tape or with digital support. Conversely, the subtle interactions in a
number of live electronic pieces are perceivable by the designers (composers, computer music designers, sound engineers, etc.), but rarely by the audience. In XY, on the other hand, the public perceives the interaction, since it is only through this process that the music sounds. We can then consider these two strategies in opposition. In OS, the composer and the computer program interact primarily for the performer, and this interaction is not perceivable by the audience. The interaction in XY assumes the device’s efficacy; the musicants’ ability to interact with the device is a prerequisite not only for sound production, but also for the aesthetic experience. XY therefore aims to place musicants in a self-learning process, shaping the interaction between themselves and the device. A feedback loop is established; the more efficient the learning process becomes, the more adept musicants become at navigating the demands of their instrument. By focusing only on listening, we might define this type of device as a sonic one, even though other installations use the same theoretical basis to put in play other capabilities, such as vision, touch, etc. The chain of causality of cognitive activities situates listening as a site of knowledge production through the mastery of the device’s interactive properties. We can then consider XY a system that constructs the process of learning an instrument, which consists of finding the most suitable gestures for producing a given sonic result. The aesthetic experience is then not one of listening passively to a result, but one of active interaction between that result and its originating gesture. This fact reveals the need for a typology of gestures and their products that accounts for the learning process (Bricout 2009).
In OS, the performer, an expert in the mastery of his or her instrument, does not have to learn how to use the interactive device to produce sounds. However, learning how the device functions – since its reactions are not necessarily expected – is part of the rehearsal process, recalling again our chamber music example. This concern is of primary importance to designers Stroppa and Cont; the Antescofo software grants greater control to the composer. Composers can then turn back to writing, their primary activity, engaging in what Varèse termed the organisation of sound. Like most of the ‘classic’ pieces for live electronics, OS resituates authorial writing at the centre of musical activity, affirming a link between the composer’s intent and the listener’s perception. Writing is certainly present in XY, but it is in service of the visitors’ experience of becoming musicants as they learn to play the device as an instrument. In considering XY not as a technological system but rather as an instrument available to its visitors who become musicants, the installation can be understood as an aesthetic environment rather than as a work.
These two attitudes have in common an attempt to challenge a tendency towards disempowering performers and audiences in both musical creation and multimedia art installations since the 1980s. Before Antescofo, score followers tended to disempower musicians, who were assured that the device would follow them, whatever their tempo choices or nuances, sometimes
through manual intervention when the software malfunctioned. In a certain sense, this process reverses an earlier practice of an instrumentalist faithfully following the fixed progression of a tape track, before triggering or treating sound in real time was a possibility. Antescofo re-establishes an interactive balance by allowing the instrumentalist to listen to the machine-produced sounds and to become aware of the extent to which these sounds result from his or her performance. Adhering to a chamber-music configuration, sometimes the instrumentalist leads the performance, and in other moments, he or she follows another member of the ensemble, in this case the machine. XY empowers listeners, reinforcing Leroi-Gourhan’s claim that one must participate to feel (1964, 201). According to Stiegler, who recontextualises Leroi-Gourhan in visual and digital cultures: ‘The loss of aesthetic individuation strikes the consumer; unable to participate in aesthetics, he loses his sensitivity. He sinks into anaesthesia, indifference and apathy. Symbolic misery at the loss of aesthetic participation becomes psychological and libidinal misery’ (Stiegler 2005, 50–51).11
Learning in the creative process
The creative process for works with live electronics is closely tied to computing tools. Chosen by designers to meet the requirements outlined above, computing tools are themselves subject to learning processes as they are appropriated, rejected or adapted by the various actors in the creative process. Antescofo saw major revisions between its 2007 beta version and 2014, but the software’s initial objective has not changed: to emulate as closely as possible a chamber music experience. Like many composers working in a variety of aesthetics, Stroppa has benefitted from technological advancements since 2007. The advancements responsible for Antescofo’s current version better suit the project’s initial intent. In this regard, writing practices approach one another regardless of their aesthetic orientations. As musical styles become standardised through their appropriation and transformation in the hands of their practitioners (composers), this type of AI software approaches standardisation by successive additions made by composers without affecting its initial logic. What follows is a point essential to our argument: this standardisation is only technological and not aesthetic, contrary to certain cases in the entertainment or cultural industries. Technological innovation here results in standardisation while serving multiple aesthetics. In 2014, Stroppa rewrote OS to account for new innovations and their potential. While Stroppa was once constrained by the limitations of Antescofo’s 2007 version, he was now able to rewrite OS in the way that he first envisioned it.12
The way in which composers learn to integrate computing software – Stroppa excels in this as a composer literate in programming – and the
way in which technicians learn to integrate computer logic with the aesthetic desires of composers reveal a double feedback loop in the learning process. The development of the far more rudimentary PureData patches in XY offers potential programmers (of performances or sound installations) a model for creating other scenarios. XY represents not only a different way of thinking about the performer-musicant than OS, but also a different way of approaching an artistic product. Playing through already existing scenarios is certainly possible, but the system strongly encourages programmers to develop new patches and thus new situations in which musicants can perform. Since programming skills are necessary in the case of PureData – just as mastery of musical language is necessary to write instrumental music – the challenge here is to expand the economy of knowledge in computer programming (generally the domain of research centres) to encourage new forms of contemporary artistic expression. This type of work can be understood in the broader context of cognitive activities based on a collaborative economy of knowledge that rethinks the terms of the learning process. We are more likely here to think of reactivation than creation, since the economy of knowledge writ large is widely mediated through research institutions that recycle new knowledge and skills (two different notions not to be conflated) in the learning process. The direct link between research and teaching at the heart of research institutions and universities across the globe already reveals this so-called new economy of knowledge.
In conclusion, I argue that the essential difference between OS and XY in terms of their creative processes, sketched roughly here, bridges the existing rift in our understanding of performance devices meant for use in a concert space and of devices used in installations. They are indeed two seemingly contradictory manifestations that share closely related intents. The sensory experience of listening passes through the practitioner’s work in the manufacturing of sound. OS adheres to a more stringent and personal perspective regarding the difference between the role of the creator who configures a work and that of the performer charged with its actualisation. This double articulation responds to the challenge of incorporating sensitivity. XY addresses sensitivity by privileging participation on the part of programmers and musicants as a necessary (though not necessarily sufficient) condition. What remains, then, is the question of institutions as mediators of these challenges. OS and other works of this type necessarily rely on art schools, universities and other institutions that use graphospheric media and other means of transmission to assure pedagogical standards towards a fuller appropriation of sensitivity. XY and other works of its type rely on the university and especially on new projects mediated by institutions that continue to invent in the nascent digital cybersphere (Tiffon 2005a). In this way, OS and XY are truly works and devices through which art, science and technology interact.13
Notes
1 For the purposes of this chapter, a ‘musicant’ is defined as a listener who is encouraged to interact actively with the environment within which the music is being performed. This interaction transforms listeners into musician-participants or ‘musicants’.
2 When put in dialogue with individual and collective learning processes, this interaction can then approach what Gilbert Simondon (1964) identified as a process of psychic (or individual) and collective individuation. Simondon’s work focuses on the capacity of certain pieces and/or artistic devices to last over time, or to pass beyond an artwork that communicates an emotion towards one that transfers ideas by the reactivation of sensitivity (Stiegler 2005). From this angle, art remains a type of transmission and is thus part of an anthropological construction of culture, exposing the role of consumer objects when cultural industries attempt to use them.
3 The fourth piece of a cycle that sets the poetry of E.E. Cummings. OS premiered in Paris on 23 November 2007.
4 The first version of …of Silence… used ‘la Timée’, a group of 14 speakers that controlled 14 different sources.
5 Stroppa tends to create multiple versions of his works, not out of a concept of the work in progress (like Boulez), but rather because of his own difficulty completing, amending and amplifying certain key moments in his music; the composer is a notorious perfectionist, particularly regarding the electronic aspects of his works.
6 Pierre Boulez, founder of IRCAM, has always defended the idea that electronics should be secondary to the instrument and the instrumentalist. This is why the scientifico-artistic research at IRCAM has always privileged new modes of sonic representation (writing computer programs), developments in instrument building (extended techniques, multiphonics, etc.) and electronic instruments in service of real-time instrument-machine interactivity. IRCAM envisions musique mixte as instrumental music enhanced by electronics; a balance of power between actors is thus not a priority. This common approach shared by much of the musical output at IRCAM encourages the notion of a ‘house aesthetic’, particularly present in the institution’s first two decades, when shared strategies pervaded musical works in a compositional process organised by the institution. Stroppa’s Traiettoria constitutes an exception in this context. Since the 1990s, however, IRCAM has opened its doors to a wide variety of aesthetics – including spectralism – and the idea of a recognisable ‘IRCAM aesthetic’, if it still exists at all, has become less tenable.
7 Hereafter, I will refer to …of Silence… as OS, and to the XY project as XY.
8 Ambiophonic diffusion means a diffusion in which the listener is completely surrounded by sound, at least in the horizontal plane. A subwoofer reinforces the listener’s impression of being enveloped by sound, almost embedded in it.
9 Icon, index and symbol refer to Charles Sanders Peirce’s trichotomy, from his theory of signs.
10 Based on personal correspondence with Marco Stroppa (7 March 2014; 13 August 2014; 4 September 2014) and Arshia Cont (12 February 2014).
11 Translated from the French: ‘La perte d’individuation esthétique frappe le consommateur : privé de la possibilité de participer au fait esthétique, il perd sa sensibilité. Il sombre dans l’anesthésie, l’indifférence et l’apathie. La misère symbolique, comme perte de participation esthétique, engendre à son tour une misère psychologique et libidinale’.
12 A new version of OS was performed in Sydney, Australia, on 8 November 2014.
13 My sincere thanks to Alexander Stalarow for this translation, and to Friedemann Sallis for the revision.
Bibliography
Agamennone, Maurizio. 2012. 'Di tanti transiti: il dialogo interculturale nella musica di Luciano Berio'. In Luciano Berio: Nuove Prospettive – New Perspectives, Angela Ida De Benedictis (ed.), 359–97. Florence: Olschki.
Akay, Adnan. 2002. 'Acoustics of Friction', Journal of the Acoustical Society of America 111/4: 1525–48.
Akl, Ahmad and Shahrokh Valaee. 2010. 'Accelerometer-Based Gesture Recognition Via Dynamic-Time Warping, Affinity Propagation, and Compressive Sensing'. In International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2270–73.
Albèra, Philippe. 1997. 'Entretien avec Luigi Nono'. In Musique en création, Philippe Albèra (ed.), 87–101. Geneva: Contrechamps.
Alessandrini, Patricia and Xenia Pestova. 2014. 'Creating Music for Bodies, Instruments, and Objects: Live-Generated Scoring for Inclusive Interactive Performance'. Presented at Notation in Contemporary Music: Composition, Performance, Improvisation, Goldsmiths College, University of London.
Appadurai, Arjun. 1996. Modernity at Large: Cultural Dimensions of Globalization. Minneapolis: University of Minnesota Press.
Arom, Simha. 1976. 'The Use of Play-Back Techniques in the Study of Oral Polyphonies', Ethnomusicology 20/3: 483–519.
———. 1985. Polyphonies et Polyrythmies Instrumentales d'Afrique Centrale: Structures et Méthodologie. 2 vols. Paris: Selaf.
———. 1991. African Polyphony and Polyrhythm: Musical Structures and Methodology. London: Cambridge University Press.
———. 2013. Le Ragioni Della Musica. Maurizio Agamennone and Serena Facci (eds.). Lucca: LIM.
Assmann, Aleida and Jan Assmann. 1998. 'Schrift, Tradition und Kultur'. In Zwischen Festtag und Alltag: Zehn Beiträge zum Thema 'Mündlichkeit und Schriftlichkeit', Wolfgang Raible (ed.), 25–49. Tübingen: Gunter Narr Verlag.
Attali, Jacques. 1985. Noise: The Political Economy of Music. Brian Massumi (trans.). Manchester: Manchester University Press.
Auner, Joseph. 2013. Music in the Twentieth and Twenty-First Centuries. New York and London: W. W. Norton.
Auslander, Philip. 2002. 'Live from Cyberspace: Or I Was Sitting at My Computer This Guy Appeared He Thought I Was a Bot', PAJ: A Journal of Performance and Art 24/1: 16–21.
Austin, Larry. 1992. Accidents Two (score). Denton, TX: Larry Austin Music.
Avenary, Hanoch. 1978. The Ashkenazi Tradition of Biblical Chant Between 1500 and 1900. Jerusalem: The World Congress on Jewish Music.
Azzolini, Franco and Sylviane Sapir. 1984. 'Score and/or Gesture – The System RTI4i for Real Time Control of the Digital Processor 4i'. Proceedings of the 1984 International Computer Music Conference, 25–34. San Francisco: Computer Music Association.
Baginsky, Nicolas Anatol. n.d. 'The Three Sirens'. www.baginsky.de/.
Bailey, Derek. 1980. Improvisation: Its Nature and Practice in Music. Ashbourne: Moorland.
Baron, John. 2010. Chamber Music: A Research and Information Guide. New York and London: Routledge.
Bartók, Béla. 1951. Serbo-Croatian Folk Songs: Texts and Transcriptions of Seventy-Five Folk Songs from the Milman Parry Collection and a Morphology of Serbo-Croatian Folk Melodies. New York: Columbia University Press.
———. 1976. 'The Influence of Peasant Music on Modern Music'. In Béla Bartók Essays, Benjamin Suchoff (ed.), 340–45. New York: St. Martin's Press.
Bashford, Christina. 2001. 'Chamber Music'. In The New Grove Dictionary of Music and Musicians, Stanley Sadie and John Tyrrell (eds.), Vol. 5: 434–48. London: Macmillan.
Battier, Marc, ed. 1990. Rapport D'activité 1989. Paris: Centre Georges Pompidou.
———, ed. 1999. 'Aesthetics of Live Electronic Music'. Special issue of Contemporary Music Review 18/3.
———. 2015. 'Describe, Transcribe, Notate: Prospects and Problems Facing Electroacoustic Music', Organised Sound 20/1: 60–67.
Becker, Howard Saul. 2009. 'Préface'. In Sociologie des Groupes Professionnels: Acquis Récents et Nouveaux Défis, Didier Demazière and Charles Gadéa (eds.), 9–12. Paris: La Découverte.
Behrman, David, Joel Chadabe, Mauro Graziani, Sylviane Sapir, Richard Teitelbaum and Alvise Vidolin. 1984. 'Rapporto sul laboratorio. Il sistema 4i ed il tempo reale', Quaderno LIMB (Laboratorio permanente per l'Informatica Musicale della Biennale) 4: 85–90.
Bennett, Gerald. 1995. 'Thoughts on the Oral Culture of Electroacoustic Music'. Aesthetics and Electroacoustic Music: Proceedings of the International Academy of Electroacoustic Music, 20–25. www.gdbennett.net/texts/Thoughts_on_oral_culture.pdf.
Berio, Luciano. n.d. 'Ofaním (author's note)'. Centro Studi Luciano Berio. www.lucianoberio.org/node/1516?152413916=1.
———. 1967. Circles, Sequenza I, Sequenza III, Sequenza V. Wergo, WER 60021, LP.
———. 1985. Two Interviews with Rossana Dalmonte and Bálint András Varga. David Osmond-Smith (ed. and trans.). New York and London: Boyars.
———. 2006. Remembering the Future. Cambridge, MA: Harvard University Press.
———. 2013. Scritti Sulla Musica. Angela Ida De Benedictis (ed.). Turin: Einaudi.
Bertolani, Valentina and Friedemann Sallis. 2016. 'Live Electronic Music'. The Routledge Encyclopedia of Modernism. www.rem.routledge.com/articles/live-electronic-music.
Berweck, Sebastian. 2012. 'It Worked Yesterday: On (Re-)Performing Electroacoustic Music'. PhD Thesis, University of Huddersfield.
Biró, Dániel P. and Peter van Kranenburg. 2014. 'A Computational Re-Examination of Béla Bartók's Transcription Methods as Exemplified by His Sirató Transcriptions of 1937/1938 and their Relevance for Contemporary Methods of Computational Transcription of Qur'an Recitation'. Proceedings of the Fourth International Workshop on Folk Music Analysis (FMA2014), Andre Holzapfel (ed.), 70–77. Istanbul: Boğaziçi University.
Biró, Dániel P., Peter van Kranenburg, Steven R. Ness, George Tzanetakis and Anja Volk. 2011. 'A Computational Investigation of Melodic Contour Stability in Jewish Torah Trope Performance Traditions'. Proceedings of the International Society on Music Information Retrieval (ISMIR 2011) Conference, 163–8. Miami: [s.n.].
Biró, Dániel P., Steven R. Ness, Andrew Schloss, George Tzanetakis and Matthew Wright. 2008. 'Decoding the Song: Histogram-Based Paradigmatic and Syntagmatic Analysis of Melodic Formulae in Hungarian Laments, Torah Trope, Tenth Century Plainchant and Qur'an Recitation'. Proceedings of the Agora Expressivity in Music and Speech (EMUS), IRCAM–Institut de Recherche et Coordination Acoustique/Musique. http://recherche.ircam.fr/equipes/analysesynthese/EMUS/AGORA/abstract_poster/Biro_poster_EMUS_AGORA_abstract_poster.pdf.
Bittencourt, Pedro. 2014. 'The Performance of Agostino Di Scipio's Modes of Interference n. 2: A Collaborative Balance', Contemporary Music Review 33/1: 46–58.
Blacking, John. 1973. How Musical Is Man? Seattle: University of Washington Press.
Bloland, Per. 2005. Elsewhere is a Negative Mirror (score). Self-published.
———. 2010. Of Dust and Sand (score). Self-published.
Boorman, Stanley. 2001. 'The Musical Text'. In Rethinking Music, Nicholas Cook and Mark Everist (eds.), 403–22. Oxford: Oxford University Press.
Borges, Jorge Luis. 1999. 'John Wilkins' Analytical Language'. In Selected Nonfictions, Eliot Weinberger (trans.), 231–4. New York: Penguin.
Borio, Gianmario. 2012. 'The Relationship between Musical Notation and Performance after 1950: Historical Survey and Theoretical Considerations'. Conference paper presented at McGill University, 21 September 2012.
Born, Georgina. 1995. Rationalizing Culture: IRCAM, Boulez and the Institutionalization of the Musical Avant-Garde. Berkeley: University of California Press.
Bossis, Bruno. 2005. 'Introduction à l'histoire et à l'esthétique des musiques électroacoustiques'. UNESCO Document. http://ntemusique.free.fr/infomus/sonmusique6.pdf.
Boucourechliev, André. 1972. 'Electronic Music'. In Harvard Dictionary of Music, second edition, Willi Apel (ed.), 285–6. Cambridge, MA: The Belknap Press of Harvard University Press.
Boulez, Pierre. 1986. 'Technology and the Composer (1977)'. In Orientations: Collected Writings, Jean-Jacques Nattiez (ed.), 486–94. Cambridge, MA: Harvard University Press.
Boulez, Pierre and Andrew Gerzso. 1988. 'Computers in Music', Scientific American 258/4. http://articles.ircam.fr/textes/Boulez88c/.
Boutard, Guillaume. 2013. 'Derrière les Potentiomètres, les Musiciens de l'Experimentalstudio – Entretien avec André Richard', Circuit – Musiques Contemporaines 23/2: 25–35.
Boutard, Guillaume and Catherine Guastavino. 2012. 'Following Gesture Following: Grounding the Documentation of a Multi-Agent Creation Process', Computer Music Journal 36/4: 59–80.
Boutard, Guillaume and François-Xavier Féron. 2017. 'La pratique interprétative des musiques mixtes avec électronique temps réel: positionnement et méthodologie pour étudier le travail des instrumentistes'. In Analyser la musique mixte, Alain Bonardi, Bruno Bossis, Pierre Couprie and Vincent Tiffon (eds.), 39–60. Sampzon: Éditions Delatour.
Bowers, John and Phil Archer. 2005. 'Not Hyper, not Meta, not Cyber but Infra-Instruments'. Proceedings of the 2005 International Conference on New Interfaces for Musical Expression (NIME05), 5–10.
Bown, Oliver and Aengus Martin. 2012. 'Autonomy in Music-Generating Systems'. Musical Metacreation: Papers from the 2012 Conference on Artificial Intelligence and Interactive Digital Entertainment (Association for the Advancement of Artificial Intelligence, Technical Report WS-12-16). www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/viewFile/5492/5740.
Bricout, Romain. 2009. 'Les Enjeux de la Lutherie Électronique: De L'influence des Outils Musicaux sur la Création et la Réception des Musiques Électroacoustiques'. PhD dissertation, Université de Lille.
Bruns, Gerald L. 1980. 'Intention, Authority, and Meaning', Critical Inquiry 7/2: 297–309.
Bullock, Jamie. 2008. 'Implementing Audio Feature Extraction in Live Electronic Music'. PhD dissertation, Birmingham Conservatoire–Birmingham City University.
Bunk, Lou. 2010. Being and Becoming (score). Self-published.
Burleigh, Ian and Friedemann Sallis. 2008. 'Seizing the Ephemeral: Recording Luigi Nono's A Pierre. Dell'azzurro Silenzio, Inquietum. A più cori (1985) at the Banff Centre'. Musique Concrète — 60 Years Later, EMS08 Conference, Paris, June.
Burns, Christopher. 2003. Hero and Leander. Composition for eight-channel audio. [For a complete list of Burns's electroacoustic works: http://sfsound.org/~cburns/works-electroacoustic.html].
Burtner, Matthew. 2004. 'A Theory of Modulated Objects for New Shamanic Controller Design'. Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 193–6.
Busoni, Ferruccio. 1910. Versuch Einer Organischen Klavier-Notenschrift, Praktisch Erprobt an Joh. Seb. Bachs Chromatischer Phantasie in D moll. Leipzig: Breitkopf und Härtel.
Cadoz, Claude. 1999. 'Musique, Geste, Technologie'. In Les Nouveaux Gestes de la Musique, Hughes Genevois and Raphaël de Vivo (eds.), 47–92. Marseille: Éditions Parenthèses.
Caduff, Corinna and Tan Wälchli, eds. 2007. Autorschaft in den Künsten: Konzepte – Praktiken – Medien. Zurich: Zürcher Hochschule der Künste.
Cage, John. 1970. '[Cartridge Music]'. In John Cage, Richard Kostelanetz (ed.), 144–5. New York and Washington: Praeger Publishers.
Camacho, Arturo. 2007. 'SWIPE: A Sawtooth Waveform Inspired Pitch Estimator for Speech and Music'. PhD Dissertation, University of Florida.
Camacho, Arturo and John G. Harris. 2008. 'A Sawtooth Waveform Inspired Pitch Estimator for Speech and Music', Journal of the Acoustical Society of America 124/3: 1638–52.
Canazza, Sergio, Giovanni De Poli and Alvise Vidolin. 2012. 'Visioni del Suono. Il Centro di Sonologia Computazionale Dalla Musica Elettronica al Sound and Music Computing', Atti e Memorie Dell'Accademia Galileiana di Scienze, Lettere ed Arti in Padova: Memorie Della Classe di Scienze Matematiche Fisiche e Naturali 2: 119–64.
———. 2013. 'Visions of Sound: The Centro di Sonologia Computazionale, From Computer Music to Sound and Music Computing'. In Proceedings of the Sound and Music Computing Conference 2013, SMC 2013, R. Bresin (ed.), 639–45. Stockholm, Sweden.
Castanet, Pierre Albert. 1998. 'Hugues Dufourt: Les Années de Compagnonnage avec l'Itinéraire 1976–1982'. In Vingt-cinq Ans de Création Musicale Contemporaine: L'Itinéraire en Temps Réel, Danielle Cohen-Levinas (ed.), 15–40. Paris: L'Harmattan.
Cecchinato, Manuel. 1998. 'Il Suono Mobile. La Mobilità Interna ed Esterna dei Suoni'. In La Nuova Ricerca Sull'opera di Luigi Nono, Gianmario Borio, Giovanni Morelli and Veniero Rizzardi (eds.), 135–53. Florence: Olschki.
Centro di Sonologia Computazionale. 1981. 'Regolamento per l'utilizzazione Delle Risorse del C.S.C.', Bollettino Notiziario Dell'Università Degli Studi di Padova 19: 7–8.
Chadabe, Joel. 1997. Electric Sound: The Past and Promise of Electronic Music. Upper Saddle River, NJ: Prentice Hall.
Chafe, Chris. 2004. 'Autonomous Virtuosity'. Occasional text for Roger Reynolds's 70th birthday. https://ccrma.stanford.edu/~cc/pub/pdf/autovirt.pdf.
———. 2005. 'Oxygen Flute: A Computer Music Instrument that Grows', Journal of New Music Research 34/3: 219–26.
Chalmers, David J. 2006. 'Strong and Weak Emergence'. In The Re-Emergence of Emergence: The Emergentist Hypothesis from Science to Religion, Philip Clayton and Paul Davies (eds.), 244–54. Oxford: Oxford University Press.
Chion, Michel. 2009. Guide to Sound Objects. John Dack and Christine North (trans.). Self-published. www.scribd.com/doc/19239704/Chion-Michel-Guide-to-Sound-Objects.
Chowning, John. 1971. 'The Simulation of Moving Sound Sources', Journal of the Audio Engineering Society 19: 2–6.
———. 1973. 'The Synthesis of Complex Audio Spectra by Means of Frequency Modulation', Journal of the Audio Engineering Society 21: 526–34.
Chowning, John and David Bristow. 1986. FM Theory & Applications — By Musicians For Musicians. Tokyo: Yamaha Music Foundation.
Clark, Andy. 2008. Supersizing the Mind: Embodiment, Action and Cognitive Extension. Oxford: Oxford University Press.
Clarke, Eric. 2004. 'Empirical Methods in the Study of Performance'. In Empirical Musicology: Aims, Methods, Prospects, Eric Clarke and Nicholas Cook (eds.), 77–102. Oxford: Oxford University Press.
Collins, Nicolas. 2004. 'Introduction'. In Composers Inside Electronics: Music After David Tudor. Special issue of Leonardo Music Journal 14: 1–3.
———. 2007. 'Live Electronic Music'. In The Cambridge Companion to Electronic Music, Nick Collins and Julio d'Escriván (eds.). Cambridge: Cambridge University Press.
Collins, Nick and Julio d'Escrivan Rincon, eds. 2007. The Cambridge Companion to Electronic Music. Cambridge: Cambridge University Press.
Collins, Nick, Margaret Schedel and Scott Wilson. 2013. Electronic Music. Cambridge: Cambridge University Press.
Cont, Arshia. 2008. 'ANTESCOFO: Anticipatory Synchronization and Control of Interactive Parameters in Computer Music'. Proceedings of the International Computer Music Conference, Belfast, UK.
———. 2012. 'Synchronisme Musical et Musiques Mixtes: Du Temps Écrit au Temps Produit', Circuit 22/1: 9–24.
Cook, Nicholas. 1998. Music: A Very Short Introduction. Oxford: Oxford University Press.
———. 2003. 'Music as Performance'. In The Cultural Study of Music, Martin Clayton, Trevor Herbert and Richard Middleton (eds.), 204–14. New York and London: Routledge.
Cook, Perry. 1992. 'A Meta-Wind-Instrument Physical Model, and a Meta-Controller for Real Time Performance Control'. In Proceedings of the International Computer Music Conference, 273–76. http://hdl.handle.net/2027/spo.bbp2372.1992.072.
Cope, David. 1976. New Directions in Music, second edition. Dubuque, IA: Wm. C. Brown Company.
———. 2000. The Algorithmic Composer. Madison, WI: A-R Editions.
Craik, Kenneth. 1967. The Nature of Explanation. Cambridge: Cambridge University Press.
Cremaschi, Giorgio and Francesco Giomi, eds. 2005. 'Il Suono Trasparente: Analisi di Opere con Live Electronics'. Special issue of Rivista di Analisi e Teoria Musicale 9/2.
Dahlhaus, Carl. 1982. Esthetics of Music. William Austin (trans.). Cambridge: Cambridge University Press.
Dannenberg, Roger B., Ben Brown, Garth Zeglin and Ron Lupish. 2005. 'McBlare: A Robotic Bagpipe Player'. In Proceedings of the International Conference on New Interfaces for Musical Expression, 80–84.
Dannenberg, Roger B. and Christopher Raphael. 2006. 'Music Score Alignment and Computer Accompaniment', Communications of the Association for Computing Machinery (ACM) 49/8: 38–43.
Danuser, Hermann. 1997. 'Autorintention und Auktoriale Aufführungstradition'. In Musikalische Interpretation, Hermann Danuser (ed.), 27–34. Laaber: Laaber.
Daston, Lorraine and Gregg Mitman. 2005. 'The How and Why of Thinking with Animals'. In Thinking with Animals: New Perspectives on Anthropomorphism, Lorraine Daston and Gregg Mitman (eds.), 1–14. New York: Columbia University Press.
Davies, Hugh. 2001. 'Gentle Fire: An Early Approach to Live Electronic Music', Leonardo Music Journal 11: 53–60.
———. 2001. 'Electronic Instruments'. In The New Grove Dictionary of Music and Musicians, Stanley Sadie and John Tyrrell (eds.), Vol. 8: 67–107. London: Macmillan.
Davis, Randal. 2008. '…And What They Do As They're Going…: Sounding Space in the Work of Alvin Lucier', Organised Sound 8/2: 205–12.
Dean, Roger, ed. 2009. The Oxford Handbook of Computer Music. New York: Oxford University Press.
De Benedictis, Angela Ida. 2004. Radiodramma e Arte Radiofonica: Storia e Funzioni Della Musica per Radio in Italia. Turin: Edt-De Sono.
———. 2015. '"Live is Dead?" Some Remarks about Live Electronics Practices and Listening'. In Musical Listening in the Age of Technological Reproduction, Gianmario Borio (ed.), Music Cultures of the Twentieth Century vol. 1, 301–21. Farnham and Burlington: Ashgate.
———. 2017. 'Auktoriale versus freie Aufführungstradition. Zur Interpretationsgeschichte bei Nono und Berio (… und Stockhausen ist auch dabei)'. In Wessen Klänge? Über Autorschaft in neuer Musik, Hermann Danuser and Matthias Kassel (eds.), 47–67. Basel: Paul Sacher Stiftung.
De Benedictis, Angela Ida and Nicola Scaldaferri. 2009. 'Le Nuove Testualità Musicali'. In La Filologia Musicale, Maria Caraci Vela (ed.), Vol. 2: 71–116. Lucca: LIM.
Debiasi, Giovanni Battista. 1984. 'Sistema di Comando Gestuale per il Processore 4i', Bollettino LIMB 4: 29–32.
Debiasi, Giovanni Battista and Giovanni De Poli. 1974. 'Linguaggio di Trascrizione di Testi Musicali per Elaboratori Elettronici', Supplemento 1, Atti del IV Seminario di Studi e Ricerche Sul Linguaggio Musicale, Istituto Musicale F. Canneti, Vicenza and Padua: n.p.
———. 1986. 'MUSICA, A Musical Text Coding Language for Computers'. In Proceedings of the First International Computer Music Conference (ICMC), MIT, Cambridge, MA, n.p.
Debiasi, Giovanni Battista, Giovanni De Poli, Graziano Tisato and Alvise Vidolin. 1984. 'Centro Di Sonologia Computazionale C.S.C., University of Padova'. In Proceedings of the 1984 International Computer Music Conference, 287–97. San Francisco: International Computer Music Association.
Debray, Régis. 1997. Transmettre. Paris: Odile Jacob.
de Cheveigné, Alain and Hideki Kawahara. 2002. 'YIN, a Fundamental Frequency Estimator for Speech and Music', Journal of the Acoustical Society of America 111: 1917–30.
Deliège, Célestin. 2011. Cinquante Ans de Modernité Musicale: De Darmstadt à l'IRCAM. Contribution Historiographique à une Musicologie Critique, second edition. Irène Deliège-Smismans (ed.). Wavre: Mardaga.
Della Seta, Fabrizio. 2010. 'Idea – Testo – Esecuzione'. In Musicologia Come Pretesto: Scritti in Memoria di Emilia Zanetti, Tiziana Affortunato (ed.), 137–46. Rome: Istituto Italiano per la Storia della Musica.
Delle Monache, Stefano, Paolo Polotti, Stefano Papetti and Davide Rocchesso. 2008. 'Sonically Augmented Found Objects'. In Proceedings of the 2008 International Conference on New Interfaces for Musical Expression (NIME08), 154–7.
De Pirro, Carlo. 1993. 'Intervista ad Alvise Vidolin', Diastema, Rivista di Cultura e Informazione Musicale 5: 11–15.
Derrida, Jacques. 1967. La Voix et le Phénomène. Paris: Presses Universitaires de France.
———. 2008. The Animal That Therefore I Am. New York: Fordham University Press.
DeVale, Sue Carole. 1990. 'Organizing Organology'. In Selected Reports in Ethnomusicology, Volume 8: Issues in Organology, Sue Carole DeVale (ed.), 1–28. Los Angeles: UCLA Press.
Di Giugno, Giuseppe. 1984. 'Il processore 4i', Bollettino LIMB 4: 25–7.
Di Scipio, Agostino. 1994. 'Micro-Time Sonic Design and the Formation of Timbre', Contemporary Music Review 10/2: 135–48.
———. 1995. 'Centrality of Téchne for an Aesthetic Approach on Electroacoustic Music', Journal of New Music Research 24/4: 369–83.
———. 1998. 'Questions Concerning (Music) Technology', Angelaki: Journal of the Theoretical Humanities 3/2: 31–40.
———. 2003. 'Sound is the Interface: From Interactive to Ecosystemic Signal Processing', Organised Sound 8/3: 269–77.
———. 2008. 'Émergence du Son, Son D'émergence: Essai D'épistémologie Expérimentale par un Compositeur', Intellectica 48–49: 221–49.
———. 2011a. 'Listening to Yourself through the Otherself: On Background Noise Study and Other Works', Organised Sound 16/2: 97–108.
———. 2011b. 'Untitled Public Lecture. Sound Installation as Ecosystemic Construction'. In Proceedings of the International Conference 'Sound – System – Theory. Agostino Di Scipio's Work between Composition and Sound Installation', J.H. Schröder (ed.), special issue of Auditive Perspektiven 3. www.kunsttexte.de/index.php?id=721&ausgabe=38247&zu=907&L=0.
———. 2012. 'Ascoltare L'evento del Suono. Note per una Biopolitica della Musica'. In Musica & Architettura, Alessandro Capanna, Fabio Cifariello Ciardi, Anna Irene Del Monaco, Maurizio Gabrieli, Luca Ribichini and Gianni Trovalusci (eds.), 63–70. Rome: Edizioni Nuova Cultura.
———. 2014a. 'Sound Object? Sound Event! Ideologies of Sound and the Biopolitics of Music'. In Music and Ecologies of Sound, Kostas Paparrigopoulos and Makis Solomos (eds.), special issue of Soundscape: The Journal of Acoustic Ecology 13/1: 10–14.
———. 2014b. 'The Place and Meaning of Computing in a Sound Relationship of Man, Machines, and Environment (ICMC 2013 keynote speech)', Array: The Journal of the International Computer Music Association: 37–52.
Doati, Roberto and Alvise Vidolin, eds. 1986. Nuova Atlantide: Il Continente della Musica Elettronica. Venice: La Biennale.
Donin, Nicolas and Caroline Traube. 2016. 'Tracking the Creative Process in Music: New Issues, New Methods', Musicæ Scientiæ 20/3: 283–6.
Dorigo, Wladimiro. 1977. 'Presentazione'. In Musica/Sintesi: Musica Elettronica, Elettroacustica, per Computer, Alvise Vidolin (ed.), 5–6. Venice: La Biennale–ASAC Archivio Storico delle Arti Contemporanee.
Dorssen, Miles van. n.d. 'CeLL', Robocross Machines. www.robocross.de/page19.html.
Drott, Eric. 2009. 'Spectralism, Politics and the Post-Industrial Imagination'. In The Modernist Legacy: Essays on New Music, Björn Heile (ed.), 39–60. Farnham and Burlington: Ashgate.
Duchez, Marie-Elisabeth. 1983. 'Des Neumes à la Portée. Élaboration et Organisation Rationnelles de la Discontinuité Musicale et de sa Représentation Graphique, de la Formule Mélodique à l'échelle Monocordale', Canadian University Music Review/Revue de Musique des Universités Canadiennes 4: 22–65.
Dufourt, Hugues. 2014. La Musique Spectrale: Une Révolution Épistémologique. Sampzon: Éditions Delatour France.
Dunsby, Jonathan. 2002. 'Performers on Performance'. In Musical Performance: A Guide to Understanding, John Rink (ed.), 225–36. Cambridge: Cambridge University Press.
Eco, Umberto. 1989. The Open Work. Anna Cancogni (trans.). Cambridge, MA: Harvard University Press.
Emmerson, Simon. 1994. '"Live" versus "Real-Time"', Contemporary Music Review 10/2: 95–101.
———. 2006a. 'Appropriation, Exchange, Understanding'. In Proceedings of the Electroacoustic Music Studies Network Conference, Beijing.
———. 2006b. 'In What Form Can "Live Electronic Music" Live On?', Organised Sound 11/3: 209–19.
———. 2007a. Living Electronic Music. London: Ashgate.
———. 2007b. 'Where Next? New Music, New Musicology'. In Proceedings of the 2007 Electroacoustic Music Studies Network Conference. www.ems-network.org/spip.php?article293.
———. 2009. 'Combining the Acoustic and the Digital: Music for Instruments and Computers or Prerecorded Sound'. In The Oxford Handbook of Computer Music, Roger T. Dean (ed.), 167–88. Oxford: Oxford University Press.
———. 2012. 'Live Electronic Music or Living Electronic Music?' In Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity, Deniz Peters, Gerhard Eckel and Andreas Dorschel (eds.), 152–62. New York and London: Routledge.
Essl, Karlheinz and Gerhard Eckel. 1985. Con una Certa Espressione Parlante (score). Self-published. www.essl.at/works/con-una.html.
Farrell, Joseph. 2005. 'Intention and Intertext', Phoenix 59/1–2: 98–111.
Favaro, Roberto, ed. 1994. 'Suono e Cultura: CERM, Materiali di Ricerca 1990–92'. Special issue of Quaderni di musica/realtà 31.
Feenberg, Andrew. 1991. Critical Theory of Technology. New York and Oxford: Oxford University Press.
Feld, Steven. 1976. 'Ethnomusicology and Visual Communication', Ethnomusicology 20/2: 293–325.
———. 2012a. Sound and Sentiment: Birds, Weeping, Poetics, and Song in Kaluli Expression. Durham: Duke University Press.
———. 2012b. Jazz Cosmopolitanism in Accra: Five Musical Years in Ghana. Durham: Duke University Press.
———. 2015a. 'Acoustemology'. In Keywords in Sound, David Novak and Matt Sakakeeny (eds.), 12–21. Durham: Duke University Press.
———. 2015b. 'Listening to Histories of Listening: Collaborative Experiments in Acoustemology with Nii Otoo Annan'. In Musical Listening in the Age of Technological Reproduction, Gianmario Borio (ed.), 91–103. London: Ashgate.
Fels, Florent. 1928. 'Un Entretien avec Igor Stravinsky à Propos de l'enregistrement de Pétrouchka', Les Nouvelles littéraires (8 December), 11.
Ferrari, Giordano. 2008. 'Parcours d'une œuvre: Marco Stroppa', Brahms. http://brahms.ircam.fr/composers/composer/3074/#parcours.
Ferrarini, Lorenzo. 2009. 'Registrare con il Corpo: Dalla Riflessione Fenomenologica Alle Metodologie Audio-visuali di Jean Rouch e Steven Feld'. In Pratiche Artistiche e Pratiche Etnografiche, Maria Carmela Stella (ed.), 125–48. Milan: Cuem. http://lorenzoferrarini.com/portfolio/recording-with-the-body/.
———. 2017. 'Embodied Representation: Audiovisual Media and Sensory Ethnography', Anthrovision 5/1. https://anthrovision.revues.org/.
Fitz, Kelly and Sean Fulop. 2009. 'A Unified Theory of Time-Frequency Reassignment'. http://arxiv.org/abs/0903.3080.
Fitz, Kelly and Lippold Haken. n.d. 'Loris Software for Sound Modeling, Morphing, and Manipulation'. Last modified: March 23, 2010. www.cerlsoundgroup.org/Loris.
———. 2002. 'On the Use of Time-Frequency Reassignment in Additive Sound Modeling', Journal of the Audio Engineering Society 50/11: 879–93.
Fitz, Kelly, Lippold Haken, Susanne Lefvert, et al. 2003. 'Cell-Utes and Flutter-Tongued Cats: Sound Morphing Using Loris and the Reassigned Bandwidth-Enhanced Model', Computer Music Journal 27/3: 44–65.
Foucault, Michel. 1977. 'What is an Author?' In Language, Counter-Memory, Practice: Selected Essays and Interviews by Michel Foucault, Donald F. Bouchard (ed.), 113–38. Ithaca, NY: Cornell University Press.
Fox, Christopher. 2014. 'Opening Offer or Contractual Obligation? On the Prescriptive Function of Notation in Music Today', Tempo 68/269: 6–19.
Fraisse, Paul. 1974. La Psychologie du Rythme. Paris: Presses Universitaires de France.
Frasch, Heather. 2014. Frozen Transitions (score). Self-published.
Frisius, Rudolf. 1996. Karlheinz Stockhausen 1: Einführung in das Gesamtwerk: Gespräche mit Karlheinz Stockhausen. Mainz: Schott.
Fuchs, Mathias. 1986. Gehörte Musik. [Review of] Con una Certa Espressione Parlante von Karlheinz Essl & Gerhard Eckel. www.essl.at/bibliogr/fuchs.html.
Gabor, Dennis. 1947. 'Acoustical Quanta and the Theory of Hearing', Nature 159: 591–4.
Gayou, Évelyne. 2007. GRM Le Groupe de Recherches Musicales: Cinquante ans d'histoire. Paris: Fayard.
Georgia Tech Center for Music Technology. 2015. 'Shimon'. www.gtcmt.gatech.edu/projects/shimon.
Gerson-Kiwi, Edith and David Hiley. 'Cheironomy'. Grove Music Online. Oxford Music Online. www.oxfordmusiconline.com.
Gervasoni, Pierre. 2000a. 'Le ping-pong de Pierre Boulez'. 'Festival Agora 2000', special issue of Le Monde, 5 June.
———. 2000b. 'L'assistant ne Saurait être Considéré Comme un Simple Technicien [Interview with Eric Daubresse]'. 'Festival Agora 2000', special issue of Le Monde, 5 June.
———. 2000c. 'Un Métier qui se Pratique Aussi en Free-lance'. 'Festival Agora 2000', special issue of Le Monde, 5 June.
———. 2000d. 'L'assistant Musical: Trait d'union Entre Recherche et Création'. 'Festival Agora 2000', special issue of Le Monde, 5 June.
———. 2000e. 'L'assistant Musical à la Recherche de Son Statut'. 'Festival Agora 2000', special issue of Le Monde, 5 June.
Giomi, Francesco, Damiano Meacci and Kilian Schwoon. 2003. 'Live Electronics in Luciano Berio's Music', Computer Music Journal 27/2: 30–46.
Goebel, Johannes. 1994. 'Interaktion und Musik', Positionen 21, Interaktive Musik: 2–5.
Golley, Frank. 2000. 'Ecosystem Structure'. In Handbook of Ecosystem Theories and Management, Sven Erik Jørgensen and Felix Müller (eds.). London: Lewis Publishers.
Grabócz, Márta. 2013. Entre Naturalisme Sonore et Synthèse en Temps Réel: Images et Formes Expressives Dans la Musique Contemporaine. Paris: Éditions des Archives Contemporaines.
Green, Owen. 2013. 'User Serviceable Parts: Practice, Technology, Sociality and Method in Live Electronic Musicking'. PhD Thesis, City University, London.
Grisey, Gérard. 1982. 'Musique, le Devenir des Sons', Darmstädter Beiträge zur Neuen Musik 21: 16–23.
Hajian, Aram Z., Daniel S. Sanchez and Robert D. Howe. 1997. 'Drum Roll: Increasing Bandwidth through Passive Impedance Modulation'. Proceedings of the IEEE Robotics and Automation Conference, 35–6.
Haller, Hans Peter. 1985. 'Prometeo e il Trattamento Elettronico del Suono', Bollettino LIMB 5: 21–4.
———. 1995. Das Experimentalstudio der Heinrich-Strobel-Stiftung des Südwestfunks Freiburg 1971–1989. Die Erforschung der Elektronischen Klangumformung und ihre Geschichte. Baden-Baden: Nomos.
———. 1999a. A Pierre/Omaggio a György Kurtág. Lecture given at the Archivio Luigi Nono, Venice, December. www.hp-haller.homepage.t-online.de/venedig.html.
———. 1999b. 'Nono in the Studio — Nono in Concert — Nono and the Interpreters', Contemporary Music Review 18/2: 11–18.
Haller, Hans Peter, André Richard, Jürg Stenzl et al. 1993. A Proposito di 'Découvrir la Subversion: Hommage à Edmond Jabès' e 'Post-Prae-Ludium n. 3 Baab-arr' di Luigi Nono. Milan: Ricordi.
Hallowell, Ronan. 2009. 'Humberto Maturana and Francisco Varela's Contribution to Media Ecology'. In Proceedings of the Media Ecology Association 10, Paul A. Soukup (ed.), 144–58. http://media-ecology.org/publications/MEA_proceedings/v10/13_varela_maturanda.pdf.
Harrison, Jonty. 1999. 'Diffusion: Theories and Practices, with Particular Reference to the BEAST System', eContact 2/4. http://econtact.ca/2_4/Beast.htm.
———. 2014. Some of its Parts (score). Self-published.
Harvey, Jonathan. 1994. Tombeau de Messiaen (score). London: Faber Music.
Haus, Goffredo. 1983. 'A System for Graphic Transcription of Electronic Music Scores', Computer Music Journal 7/3: 31–6.
Hegarty, Paul. 2008. Noise/Music: A History. New York and London: Continuum.
Helmholtz, Hermann. 1885. On the Sensations of Tone as a Physiological Basis for the Theory of Music. Alexander J. Ellis (trans.). Second English edition. London: Longmans, Green, and Co. This English edition conforms to Hermann Helmholtz. 1877. Die Lehre von den Tonempfindungen. Fourth edition. Braunschweig: Friedrich Vieweg.
Henius, Carla. 1995. Carla Carissima: Briefe, Tagebücher, Notizen. Jürg Stenzl (ed.). Hamburg: Europäische Verlagsanstalt.
Hoffman, Guy and Gil Weinberg. 2010a. 'Gesture-Based Human-Robot Jazz Improvisation'. International Conference on Robotics and Automation (ICRA), 582–7.
———. 2010b. 'Shimon: An Interactive Improvisational Robotic Marimba Player'. CHI Extended Abstracts, 3097–102.
Höller, York. 1988. 'La Situation Présente de la Musique Électronique', Entretemps 6: 17–26.
Hornbostel, Erich Moritz von and Curt Sachs. 1914. 'Systematik der Musikinstrumente: Ein Versuch', Zeitschrift für Ethnologie 46/4–5: 553–90.
Hui, Alexandra. 2013. 'Changeable Ears: Ernst Mach's and Max Planck's Studies of Accommodation in Hearing'. In Music, Sound and the Laboratory from 1750–1980, Alexandra Hui, Julia Kursell and Myles Jackson (eds.), 119–45. Chicago: University of Chicago Press.
Huron, David. 2006. Sweet Anticipation: Music and the Psychology of Expectation. Cambridge, MA: MIT Press.
Impett, Jonathan. 1998. 'The Identification and Transposition of Authentic Instruments: Musical Practice and Technology'. 'Ghosts and Monsters: Technology and Personality in Contemporary Music', special issue of Leonardo Music Journal 8: 21–6.
Ingold, Tim. 2000. The Perception of the Environment: Essays on Livelihood, Dwelling and Skill. London and New York: Routledge.
———. 2011. Being Alive: Essays on Movement, Knowledge and Description. London and New York: Routledge.
Iverson, Jennifer. 2011. 'The Emergence of Timbre: Ligeti's Synthesis of Electronic and Acoustic Music in Atmosphères', Twentieth-Century Music 7/1: 61–89.
Jacobi, Peter. 1962. 'The Strange Realm of Harry Partch', Music Magazine 164, July: 14–16.
Jacobs, Bryan. 2008. Song from the Moment (score). Self-published.
Jaffe, David and Julius O. Smith. 1983. 'Extensions of the Karplus-Strong Plucked-String Algorithm', Computer Music Journal 7/2: 56–69.
Jazarī, Ismāʻīl ibn al-Razzāz. 1974. Book of Knowledge of Ingenious Mechanical Devices. Donald R. Hill (trans.). Dordrecht and Boston: D. Reidel.
Johnson, Will. 2011. 'First Festival of Live-Electronic Music 1967'. In Source: Music of the Avant-garde, 1966–1973, Larry Austin, Douglas Kahn and Nilendra Gurusinghe (eds.), 116–24. Berkeley: University of California Press.
Jordà, Sergi. 2002. 'Afasia: The Ultimate Homeric One-Man Multimedia Band'. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 1–6.
Junttu, Kristiina. 2008. 'György Kurtág's Játékok Brings the Body to the Centre of Learning Piano', Finnish Journal of Music Education 11/1–2: 97–106.
Kahn, Douglas. 2012. 'James Tenney at Bell Labs'. In Mainframe Experimentalism: Early Computing and the Foundations of the Digital Arts, Hannah B. Higgins and Douglas Kahn (eds.), 131–46. Berkeley, CA: University of California Press.
Kajitani, Makoto. 1989. 'Development of Musician Robots', Journal of Robotics and Mechatronics 1: 6–9.
———. 1992. 'Simulation of Musical Performance', Journal of Robotics and Mechatronics 4/6: 462–5.
Kane, Brian. 2007. 'L'objet Sonore Maintenant: Pierre Schaeffer, Sound Objects and the Phenomenological Reduction', Organised Sound 12/1: 15–24.
Kapur, Ajay. 2005. 'A History of Robotic Musical Instruments'. In Proceedings of the International Computer Music Conference (ICMC), 21–8.
Kapur, Ajay, Michael Darling, Dimitri Diakopoulos, Jim W. Murphy, Jordan Hochenbaum, Owen Vallis and Curtis Bahn. 2011. 'The Machine Orchestra: An Ensemble of Human Laptop Performers and Robotic Musical Instruments', Computer Music Journal 35/4: 49–63.
Kapur, Ajay, Eric Singer, Manjinder S. Benning, George Tzanetakis and Trimpin. 2007. 'Integrating Hyperinstruments, Musical Robots and Machine Musicianship for North Indian Classical Music'. In Proceedings of the 2007 Conference on New Interfaces for Musical Expression (NIME), 238–41.
Karkoschka, Erhard. 1965. Das Schriftbild der Neuen Musik. Celle: Hermann Moeck.
Karman, Gregorio Garcia. 2013. 'Closing the Gap Between Sound and Score in the Performance of Electroacoustic Music'. In Sound & Score: Essays on Sound, Score and Notation, Paulo de Assis, William Brooks and Kathleen Coessens (eds.), 143–64. Leuven: Leuven University Press.
Karp, Theodore. 1998. Aspects of Orality and Formularity in Gregorian Chant. Evanston, IL: Northwestern University Press.
Karplus, Kevin and Alex Strong. 1983. 'Digital Synthesis of Plucked-String and Drum Timbres', Computer Music Journal 7/2: 43–55.
Kiss, Lajos and Benjámin Rajeczky, eds. 1966. Siratók. Budapest: Akadémiai Kiadó.
Klapuri, Anssi and Manuel Davy. 2006. Signal Processing Methods for Music Transcription. New York: Springer.
Kodály, Zoltán. 1952. A Magyar Népzene. Budapest: Editio Musica.
———. 1960. Folk Music of Hungary. Budapest: Corvina Press.
Kojs, Juraj. 2004. Three Movements (score). Self-published.
———. 2006. All Forgotten (score). Self-published.
———. 2011. 'Notating Action-Based Music', Leonardo Music Journal 21: 65–72.
Kontarsky, Aloys and Vernon Martin. 1972. 'Notation for Piano', Perspectives of New Music 10/2: 72–91.
Kranenburg, Peter van, Dániel P. Biró and Steven R. Ness. 2011. 'A Computational Investigation of Melodic Contour Stability in Jewish Torah Trope Performance Traditions'. In Proceedings of the International Society on Music Information Retrieval Conference, 163–68.
Kranenburg, Peter van, Dániel P. Biró, Steven R. Ness and George Tzanetakis. 2012. 'Stability and Variation in Cadence Formulas in Oral and Semi-Oral Chant Traditions: A Computational Approach'. Proceedings of the 12th International Conference on Music Perception and Cognition and the 8th Triennial Conference of the European Society for the Cognitive Sciences of Music, E. Cambouropoulos, C. Tsougras, P. Mavromatis and K. Pastiadis (eds.), 98–105. Thessaloniki: School of Music Studies, Aristotle University of Thessaloniki.
Krumhansl, Carol L. 1990. Cognitive Foundations of Musical Pitch. New York: Oxford University Press.
Kuivila, Ronald. 2001. 'Open Sources: Words, Circuits, and the Notation/Realization Relation in the Music of David Tudor'. Paper presented at the Getty Research Institute Symposium The Art of David Tudor. www.getty.edu/research/exhibitions_events/events/david_tudor_symposium/.
Kurtág, György. 1973. Játékok (scores). Budapest: Editio Musica.
Kvale, Steinar and Svend Brinkmann. 2009. InterViews: Learning the Craft of Qualitative Research Interviewing. Thousand Oaks, CA: SAGE.
LaBelle, Brandon. 2006. Background Noise: Perspectives on Sound Art. London and New York: Continuum.
Lakoff, George and Mark Johnson. 1980. Metaphors We Live By. Chicago: University of Chicago Press.
Landy, Leigh. 2007. Understanding the Art of Sound Organization. Cambridge, MA: The MIT Press.
Landy, Leigh, Pierre Couprie, Rob Weale and Simon Atkinson. n.d. 'The ElectroAcoustic Resource Site (EARS)'. Latest update: November 4, 2014. http://ears.dmu.ac.uk/.
LaTempa, Susan. 2007. 'Music of the (Delicious Reddish) Spheres'. Los Angeles Times, August 29. http://latimesblogs.latimes.com/dailydish/2007/08/music-of-the-de.html.
Lave, Jean and Etienne Wenger. 1991. Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
Lely, John and James Saunders. 2012. Word Events: Perspectives on Verbal Notation. London, New Delhi, New York, Sydney: Bloomsbury Publishing.
Lerdahl, Fred. 1992. 'Cognitive Constraints on Compositional Systems', Contemporary Music Review 6/2: 97–121.
Leroi-Gourhan, André. 1964. La Mémoire et Les Rythmes. Vol. 1: Le geste et la parole. Paris: Albin Michel.
Levy, Kenneth. 1998. Gregorian Chant and the Carolingians. Princeton: Princeton University Press.
Lewis, Andrew and Xenia Pestova. 2012. 'The Audible and the Physical: A Gestural Typology for "Mixed" Electronic Music'. In Proceedings of the Electroacoustic Music Studies Network Conference, Stockholm.
Lewis, George E. 2000. 'Too Many Notes: Computers, Complexity and Culture in Voyager', Leonardo Music Journal 10: 33–9.
Li, Tao, Mitsunori Ogihara and George Tzanetakis, eds. 2011. Music Data Mining. Boca Raton, London and New York: CRC Press.
Ligeti, György. 2007. Gesammelte Schriften. 2 vols. Monika Lichtenfeld (ed.). Mainz: Schott.
Lillios, Elainie. 2009. Nostalgic Visions (score). Self-published.
Lindberg, Magnus. 1997. Related Rocks (score). London: Chester Music.
Liu, Jiayang, Zhen Wang, Lin Zhong, Jehan Wickramasuriya and Venu Vasudevan. 2009. 'uWave: Accelerometer-Based Personalized Gesture Recognition and Its Applications', Pervasive Computing and Communications 5/6: 657–75.
Livingston, Paisley. 2009. Art and Intention: A Philosophical Study. Oxford: Clarendon.
Lord, Albert Bates. 1991. Epic Singers and Oral Tradition: Myth and Poetics. Ithaca: Cornell University Press.
———. 2000. The Singer of Tales. Cambridge, MA: Harvard University Press.
Lorenz, Edward. 1963. 'Deterministic Nonperiodic Flow', Journal of the Atmospheric Sciences 20/2: 130–41.
Maes, Laura, Godfried-Willem Raes and Troy Rogers. 2011. 'The Man and Machine Robot Orchestra at Logos', Computer Music Journal 35/4: 28–48.
Manning, Peter. 2013. Electronic and Computer Music. Oxford: Oxford University Press.
Marano, Francesco. 2013. L'etnografo Come Artista: Intrecci fra Antropologia e Arte. Rome: CISU.
Marie, Jean Étienne. 1969. 'De Quelques Expériences d'électro-acoustique Musicale', La Revue Musicale 265–266, Varèse, Xenakis, Berio, Pierre Henry: 129–49.
Marinelli, Elisabetta, ed. 1995. Il Complesso di Elettra: Mappa Ragionata dei Centri di Ricerca e Produzione Musicale in Italia. Rome: CIDIM.
Markoff, John. 2006. What the Dormouse Said: How the Sixties Counterculture Shaped the Personal Computer Industry. New York: Penguin Books.
Mathews, Max and Joan Miller. 1969. The Technology of Computer Music. Cambridge, MA: MIT Press.
Mathews, Max and Andrew Schloss. 1989. 'The Radio Drum as a Synthesizer Controller'. In Proceedings of the International Computer Music Conference, 42–5.
Maturana, Humberto R. and Francisco J. Varela. 1980. Autopoiesis and Cognition: The Realization of the Living. Dordrecht: Reidel.
May, Robert. 1976. 'Simple Mathematical Models with Very Complicated Dynamics', Nature 261: 459–67.
Mazzoli, Mario. 2011. Catalog of the Exhibition 'Sound. Self. Other. Five New Works' by Agostino Di Scipio. Berlin: Galerie Mario Mazzoli.
McCalla, James. 1996. Twentieth-Century Chamber Music. New York: Schirmer Books.
McIntyre, Michael, Robert Schumacher and James Woodhouse. 1983. 'On the Oscillations of Musical Instruments', Journal of the Acoustical Society of America 74/5: 1325–45.
Melvin, Sheila. 2011. 'A Beijing Exhibition on Art for the "Post-Human Era"', New York Times, August 11. www.nytimes.com/2011/08/12/arts/12iht-translife12.html?_r=0.
Menger, Pierre-Michel and Dianne Cullinane. 1989. 'Technological Innovations in Contemporary Music', Journal of the Royal Musical Association 114/1: 92–101.
Meric, Renaud. 2008. 'Le Bruit de Fond Est-il un Son? À Propos d'Écosystèmes Audibles 3a d'Agostino Di Scipio', Filigrane 7: 197–213.
Meric, Renaud and Makis Solomos. 2011. 'Écosystèmes Audibles et Structures Sonores Émergentes Dans la Musique d'Agostino Di Scipio. Une Collaboration Entre Philosophie de la Musique et Analyse Musicale', Musurgia: Analyse et Pratique Musicales 18/3: 39–56.
Metheny, Pat. 2009. 'About Orchestrion'. www.patmetheny.com/orchestrioninfo.
Meyer, Leonard B. 1967. Music, the Arts and Ideas. Chicago: University of Chicago Press.
Montague, Stephen. 1983–93. Tongues of Fire (score). Bury St Edmunds, Suffolk: United Music Publishers.
Morin, Edgar. 1990. Science Avec Conscience. Paris: Seuil.
———. 1992. 'The Concept of System and the Paradigm of Complexity'. In Context and Complexity: Cultivating Contextual Understanding, Magoroh Maruyama (ed.), 125–37. Amsterdam: Springer Verlag.
Mueller, Rena Charnin. 2011. '"Form Aus Jeder Note": Liszt's Intentions – The Devil's in the Details'. Program and Abstracts of Papers Read at the American Musicological Society, 195. Brunswick, ME: American Musicological Society Inc.
Mumma, Gordon. 1975. 'Live-Electronic Music'. In The Development and Practice of Electronic Music, Jon H. Appleton and Ronald C. Perera (eds.), 286–335. Englewood Cliffs: Prentice-Hall.
Murail, Tristan. 1984. 'Spectra and Pixels', Contemporary Music Review 1: 157–70.
Murphy, Jim W., Ajay Kapur and Dale Carnegie. 2012. 'Better Drumming through Calibration: Techniques for Pre-Performance Robotic Percussion Optimization'. In Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), n.p.
Murphy, Jim, James McVay, Paul Mathews, Dale A. Carnegie and Ajay Kapur. 2015. 'Expressive Robotic Guitars: Developments in Musical Robotics for Chordophones', Computer Music Journal 39/1: 59–73.
Nagy, Gregory. 1996. Homeric Questions. Austin: University of Texas Press.
Nancy, Jean-Luc. 2001. 'Ascoltando'. In Écoute: Une Histoire de Nos Oreilles, Peter Szendy and Jean-Luc Nancy (eds.), 5–12. Paris: Les Éditions de Minuit.
Nattiez, Jean-Jacques. 1975. Fondements d'une Sémiologie de la Musique. Paris: Union Générale d'Éditions.
———. 2003. 'Comment Raconter le XXe siècle?' In Musiques, une Encyclopédie Pour le XXIe Siècle, Vol. 1: Musiques du XXe Siècle, Jean-Jacques Nattiez (ed.). Arles: Actes Sud.
Nelson, Andrew J. 2015. The Sound of Innovation: Stanford and the Computer Music Revolution. Cambridge, MA, and London: The MIT Press.
Nelson, Kristina. 1985. The Art of Reciting the Qur'an. Austin: University of Texas Press.
Nelson, Peter. 2011. 'Cohabiting in Time: Towards an Ecology of Rhythm', Organised Sound 16/2: 109–14.
Nelson, Peter and Stephen Montague, eds. 1991. 'Live Electronics'. Special issue of Contemporary Music Review 6/1.
Ness, Steven R., Dániel P. Biró and George Tzanetakis. 2010. 'Computer-Assisted Cantillation and Chant Research Using Content-Aware Web Visualization Tools', Multimedia Tools and Applications 48/1: 207–24.
Ness, Steven R., Shawn Trail, Peter F. Driessen, W. Andrew Schloss and George Tzanetakis. 2011. 'Music Information Robotics: Coping Strategies for Musically Challenged Robots'. In Proceedings of the International Conference of the Society for Music Information Retrieval (ISMIR), 567–72.
Neubauer, Eckhard and Veronica Doubleday. 'Islamic Religious Music'. Grove Music Online. www.oxfordmusiconline.com.
Nicolls, Sarah. 2010. 'Interacting with the Piano. Absorbing Technology into Piano Technique and Collaborative Composition: The Creation of "Performance Environments", Pieces and a Piano'. PhD Thesis, School of Arts, Brunel University.
Nono, Luigi. 1967. A Floresta é Jovem e Cheja de Vida. With Liliana Poli, Kadigia Bove, Franca Piacentini, Elena Vicini, Living Theatre, William O. Smith and Bruno Canino. LP, Arcophon AC 6811. Studio engineer: Marino Zuccheri; sound designer: Luigi Nono.
———. 1984. 'Komponieren heute, Gespräch mit Wilfried Gruhn', Zeitschrift für Musikpädagogik 9/27: 3–13.
———. 1987a. 'A Pierre'. In Luigi Nono. Festival d'automne à Paris, 203. Geneva: Contrechamps and L'Âge de l'Homme.
———. 1987b. Das Atmende Klarsein. Full score. Plate number D20288. Milan: Ricordi.
———. 1987c. 'Un'autobiografia dell'autore Raccontata da Enzo Restagno'. In Nono, Enzo Restagno (ed.), 3–73. Turin: Edizioni di Torino.
———. 1988. 'Altre Possibilità di Ascolto'. In L'Europa Musicale. Un nuovo Rinascimento: La Civiltà dell'Ascolto, Anna Laura Bellina and Giovanni Morelli (eds.), 107–24. Florence: Vallecchi Editore.
———. 1991. A Pierre. Dell'azzurro Silenzio, Inquietum. Roberto Fabbriciani, flute; Ciro Scarponi, clarinet; Alvise Vidolin, live electronics. Dischi Ricordi CRMCD 1003.
———. 1992. …Sofferte Onde Serene…. Alvise Vidolin (ed.). Plate number 132564. Milan: Ricordi BMG.
———. 1993. Écrits. Laurent Feneyrou (ed.). Paris: Christian Bourgois.
———. 1996. A Pierre. Dell'azzurro Silenzio, Inquietum. André Richard and Marco Mazzolini (eds.). Milan: Ricordi.
———. 1999. Quando Stanno Morendo: Diario Polacco n. 2. André Richard and Marco Mazzolini (eds.). Plate number 133462. Milan: Ricordi BMG.
———. 2001. Scritti e Colloqui. Angela Ida De Benedictis and Veniero Rizzardi (eds.). Lucca and Milan: LIM and Ricordi.
———. 2005. Das Atmende Klarsein. André Richard and Marco Mazzolini (eds.). Plate number 139378. Milan: Ricordi BMG.
———. 2010. La Fabbrica Illuminata. Luca Cossettini (ed.). Plate number 139738. Milan: Ricordi BMG.
Normandeau, Robert. 1997. Figures de Rhétorique (score). Self-published.
Odowichuk, Gabrielle, Shawn Trail, Peter Driessen, Wendy Nie and Wyatt Page. 2011. 'Sensor Fusion: Towards a Fully Expressive 3D Music Control Interface'. In Communications, Computers and Signal Processing, IEEE Pacific Rim Conference, 836–41.
Ohta, Hiroshisa, Hiroshi Akita, Motomu Ohtani, Satoshi Ishikado and Masami Yamane. 1993. 'The Development of an Automatic Bagpipe Playing Device'. In Proceedings of the International Computer Music Conference, 430–31.
Orio, Nicola. 2006. Music Retrieval: A Tutorial and Review. Hanover, MA: Now Publishers Inc.
Parncutt, Richard. 1999. 'Psychological Evaluation of Nonconventional Notations and Keyboard Tablatures'. In Music and Signs, Ioannis Zannos (ed.), 146–74. Bratislava: ASCO Art and Science.
Parry, Milman. 1987. The Making of Homeric Verse: The Collected Papers of Milman Parry. Adam Parry (ed.). New York and Oxford: Oxford University Press.
Patton, Kevin. 2007. 'Morphological Notation for Interactive Electroacoustic Music', Organised Sound 12: 123–8.
Payzant, Geoffrey. 2008. Glenn Gould: Music and Mind. Toronto: Key Porter.
Perkis, Tim. 2009. 'Some Notes on my Electronic Improvisation Practice'. In The Oxford Handbook of Computer Music, Roger T. Dean (ed.), 161–5. Oxford: Oxford University Press.
Pestova, Xenia. 2008. 'Models of Interaction in Works for Piano and Live Electronics'. PhD Paper, McGill University, Schulich School of Music. www.xeniapestova.com/thesis.pdf.
———. 2009. 'Models of Interaction: Performance Strategies in Works for Piano and Live Electronics', Journal of Music, Technology and Education 2/2–3: 113–26.
———. 2011. 'Performing Music for Piano and Electronics: Interview with British Pianist Philip Mead', eContact 14/4. http://econtact.ca/14_4/pestova_mead.html.
Pestova, Xenia, Erika Donald, Heather Hindman, Joseph Malloch, Mark T. Marshall, Fernando Rocha, Stephen Sinclair, D. Andrew Stewart, Marcelo M. Wanderley and Sean Ferguson. 2009. 'The CIRMMT/McGill Digital Orchestra Project'. In Proceedings of the International Computer Music Conference, Montreal. www.idmil.org/projects/digital_orchestra.
Pestova, Xenia, Mark Marshall and Jacob Sudol. 2008. 'Analogue to Digital: Authenticity Vs. Sustainability in Stockhausen's MANTRA (1970)'. In Proceedings of the International Computer Music Conference, Belfast, 201–204. http://xeniapestova.com/icmc2008.html.
Peters, Deniz. 2012. 'Introduction'. In Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity, Deniz Peters, Gerhard Eckel and Andreas Dorschel (eds.), 1–14. New York and London: Routledge.
Peters, Deniz, Gerhard Eckel and Andreas Dorschel, eds. 2012. Bodily Expression in Electronic Music: Perspectives on Reclaiming Performativity. New York and London: Routledge.
Phelan, Peggy. 1993. Unmarked: The Politics of Performance. London and New York: Routledge.
Plessas, Peter and Guillaume Boutard. 2015. 'Transmission et Interprétation de l'instrument Électronique Composé'. In Proceedings JIM (Journées d'informatique musicale), Université de Montréal. jim2015.oicrm.org/#actes.
Poletti, Manuel, Tom Mays and Carl Faia. 2002. 'Assistant Musical ou Producteur? Esquisse d'un Nouveau Métier'. Journées d'Informatique Musicale, Marseille, 241–6.
Popper, Karl. 1963. Conjectures and Refutations: The Growth of Scientific Knowledge. London: Routledge.
Potter, Ralph K. 1946. 'Introduction to Technical Discussions of Sound Portrayal', Journal of the Acoustical Society of America 18/1: 1–3.
Poullin, Jacques. 1954. 'L'apport Des Techniques d'enregistrement Dans la Fabrication de Matières et Formes Musicales Nouvelles. Applications à la Musique Concrète', L'Onde Électrique 34/324: 282–91.
Pressing, Jeff. 1998. 'Psychological Constraints on Improvisational Expertise and Communication'. In In the Course of Performance: Studies in the World of Musical Improvisation, Bruno Nettl and Melinda Russell (eds.), 47–67. Chicago: University of Chicago Press.
Provaglio, Andrea, Salvatore Sciarrino, Alvise Vidolin and Paolo Zavagna. 1991. 'Perseo e Andromeda, Composizione, Realizzazione ed Esecuzione'. In Atti del IX Colloquio di Informatica Musicale, Genova, 324–30.
Raman, Chandrasekhara Venkata. 1920. 'Experiments with Mechanically-Played Violins'. Proceedings of the Indian Association for the Cultivation of Science 6: 19–36.
Ravet, Hyacinthe. 2005. 'L'interprétation Musicale Comme Performance: Interrogations Croisées de Musicologie et de Sociologie', Musurgia 12/4: 5–18.
Reid, Stefan. 2002. 'Preparing for Performance'. In Musical Performance: A Guide to Understanding, John Rink (ed.), 102–12. Cambridge: Cambridge University Press.
Reimer, Bennett. 1972. The Experience of Music. Englewood Cliffs, NJ: Prentice Hall.
Rink, John. 2011. 'In Respect of Performance: The View from Musicology'. In Ereignis und Exegese. Musikalische Interpretation – Interpretation der Musik: Festschrift für Hermann Danuser zum 65. Geburtstag, Camilla Bork (ed.), 433–45. Schliengen: Edition Argus.
Risset, Jean-Claude. 2014. 'A la Recherche du Temps Réel – Pierre Boulez et la Fondation de l'Ircam. Entretien Avec Jean-Claude Risset [Jean-Claude Risset in Interview with Gabriel Leroux and Frank Madlener]', L'étincelle: Journal de la création à l'Ircam 12: 12–15.
Rizzardi, Veniero. 1998. 'The Score of A Floresta é Jovem e Cheja de Vida'. In Luigi Nono, A Floresta é Jovem e Cheja de Vida (score), Veniero Rizzardi and Maurizio Pisati (eds.), xxix–xxxiii. Milan: Ricordi.
Rizzardi, Veniero and Angela Ida De Benedictis, eds. 2000. Nuova Musica Alla Radio: Esperienze Allo Studio di Fonologia della RAI di Milano 1954–1959 — New Music on the Radio: Experiences at the Studio di Fonologia of the RAI, Milan 1954–1959. Rome: RAI-CIDIM.
Roads, Curtis. 1982. 'A Conversation with James A[ndy] Moorer', Computer Music Journal 6/4: 13–24.
———. 2004. Microsound. Cambridge, MA: MIT Press.
Roels, Hans. 2014. 'Interview with Agostino Di Scipio'. In Artistic Experimentation in Music, Darla Crispin and Bob Gilmore (eds.), 315–22. Leuven: Leuven University Press.
Rouch, Jean and Steven Feld. 2003. Ciné-Ethnography. Minneapolis: University of Minnesota Press.
Rowe, Robert. 1992. Interactive Music Systems. Cambridge, MA: MIT Press.
———. 2004. Machine Musicianship. Cambridge, MA and London: MIT Press.
Roy, Stéphane. 2003. L'analyse des Musiques Électroacoustiques: Modèles et Propositions. Paris: L'Harmattan.
Sakoe, Hiroaki and Seibi Chiba. 1978. 'Dynamic Programming Algorithm Optimization for Spoken Word Recognition', IEEE Transactions on Acoustics, Speech and Signal Processing 26/1: 43–9.
Sanden, Paul. 2013. Liveness in Modern Music: Musicians, Technology and the Perception of Performance. New York and London: Routledge.
Sapir, Sylviane. 1984. 'Il Sistema 4i', Bollettino LIMB 4: 15–24.
Sapir, Sylviane and Alvise Vidolin. 1985. 'Interazioni fra tempo e gesto. Note tecniche alla realizzazione informatica di Prometeo', Bollettino LIMB 5: 25–33.
Scaldaferri, Nicola. 2005. 'Perché Scrivere le Musiche non Scritte? Tracce per Un'antropologia Della Scrittura Musicale', Enciclopedia Della Musica 5: 499–536.
———. 2012. 'Remapping Songs in the Balkans: Bilingual Albanian Singers in the Milman Parry Collection'. In Balkan Epic: Song, History, Modernity, Philip V. Bohlman and Nada Petković-Djordjević (eds.), 203–23. Lanham, Toronto, Plymouth: The Scarecrow Press.
———. 2014a. 'Voice, Body, Technologies: Tales from an Arbëresh Village', TRANS – Revista Transcultural de Música/Transcultural Music Review 18. www.sibetrans.com/trans.
———. 2014b. 'The Voice and the Tape: Aesthetic and Technological Interactions in European Studios during the 1950s'. In Crosscurrents: American and European Music in Interaction, 1900–2000, Felix Meyer, Carol J. Oja, Wolfgang Rathert and Anne C. Shreffler (eds.), 335–49. Basel and Martlesham: Paul Sacher Foundation and The Boydell Press.
———. 2015. 'Audiovisual Ethnography: New Paths for Research in Ethnomusicology'. In Musical Listening in the Age of Technological Reproduction, Gianmario Borio (ed.), 373–92. London: Ashgate.
Scaldaferri, Nicola and Steven Feld, eds. 2012. I Suoni Dell'Albero: Il Maggio di S. Giuliano ad Accettura. Udine: Nota.
———, eds. Forthcoming. When the Trees Resound: Collaborative Media Research on an Italian Festival. Udine: Nota.
Schaeffer, Pierre. 1971. 'À Propos des Ordinateurs', La Revue Musicale 214/15: 56–7.
———. 2012. In Search of a Concrete Music. Christine North and John Dack (trans.). Berkeley and Los Angeles: University of California Press.
Schafer, Raymond Murray. 1977. The Tuning of the World. New York: Knopf.
Scharvit, Uri. 1982. 'The Musical Realization of Biblical Cantillation Symbols (te'amim) in the Jewish Yemenite Tradition'. Yuval: Studies of the Jewish Music Research Center, 179–210. Jerusalem: The Magnes Press.
Scherzinger, Martin. 2012. 'Luciano Berio's Coro: Nexus between African Music and Political Multitude'. In Luciano Berio: Nuove Prospettive — New Perspectives, Angela Ida De Benedictis (ed.), 399–430. Florence: Olschki.
Schillinger, Joseph. 1946. The Schillinger System of Musical Composition. New York: C. Fischer, Inc.
Schneider, Arnd and Christopher Wright, eds. 2010. Between Art and Anthropology: Contemporary Ethnographic Practice. London and New York: Bloomsbury.
Schoenberg, Arnold. 1926. 'Mechanische Musikinstrumente', Pult und Taktstock 3/3–4: 71–5.
———. 1975 [1924]. 'A New Twelve-Tone Notation'. In Style and Idea: Selected Writings of Arnold Schoenberg, Leonard Stein (ed.), Leo Black (trans.), 354–62. Berkeley and Los Angeles: University of California Press.
Schröder, Julia H. 2008. 'Klangkunst von Komponisten: Emergente und performative Aspekte', Musik-Konzepte 11: 99–114.
———. 2011. 'Emergence and Emergency. Theoretical and Practical Considerations in Agostino Di Scipio's Works'. In Proceedings of the International Conference 'Sound – System – Theory. Agostino Di Scipio's Work between Composition and Sound Installation', J.H. Schröder (ed.). Special issue of Auditive Perspektiven 3, n.p. www.kunsttexte.de/auditive_perspektiven.
Schwarz, Boris. 1983. 'Joseph Joachim and the Genesis of Brahms's Violin Concerto', The Musical Quarterly 69/4: 503–26.
Sciarrino, Salvatore. 1992. Perseo e Andromeda. Plate number 135358. Milan: Ricordi.
Seeger, Charles. 1958. 'Prescriptive and Descriptive Music-Writing', The Musical Quarterly 44/2: 185–7.
Sennett, Richard. 2012. Together: The Rituals, Pleasures and Politics of Cooperation. New Haven and London: Yale University Press.
Sève, Bernard. 2013. L'instrument de Musique: Une Étude Philosophique. Paris: Éditions du Seuil.
Shelemay, Kay Kaufman. 2015. Soundscapes: Exploring Music in a Changing World, 3rd ed. New York and London: Norton.
Shusterman, Richard. 1988. 'Interpretation, Intention, and Truth', Journal of Aesthetics and Art Criticism 46/3: 399–411.
Simondon, Gilbert. 1964. L'individuation Psychique et Collective. Paris: Aubier.
Singer, Eric, Jeff Feddersen, Chad Redmon and Bil Bowen. 2004. 'LEMUR's Musical Robots'. In Proceedings of the Conference on New Interfaces for Musical Expression, 181–4.
Singer, Eric, Kevin Larke and David Bianciardi. 2003. 'LEMUR GuitarBot: MIDI Robotic String Instrument'. In Proceedings of the Conference on New Interfaces for Musical Expression, 188–91.
Six, Joren and Olmo Cornelis. 2011. 'Tarsos: A Platform to Explore Pitch Scales in Non-Western and Western Music'. In Proceedings of the International Conference of the Society for Music Information Retrieval, 169–74.
Smalley, Denis. 1997. 'Spectromorphology: Explaining Sound-Shapes', Organised Sound 2/2: 107–26.
———. 1999. Piano Nets (score). Self-published.
Smith, F. J. 1968. 'Vers une Phénoménologie du Son', Revue de Métaphysique et de Morale 73/3: 328–43.
Smith III, Julius O. 1997. 'Nonlinear Commuted Synthesis of Bowed Strings'. In Proceedings of the International Computer Music Conference (ICMC), 264–7. http://hdl.handle.net/2027/spo.bbp2372.1997.071.
Smith, William. 1874. A Dictionary of Greek and Roman Antiquities. New York: Harper.
Solis, Jorge, Massimo Bergamasco, Shuzo Isoda, Keisuke Chida and Atsuo Takanishi. 2004. 'Learning to Play the Flute with an Anthropomorphic Robot'. In Proceedings of the International Computer Music Conference (ICMC), n.p.
Solomos, Makis. 2010. 'Notes sur la Notion d'émergence et sur Agostino Di Scipio'. In Manières de Faire Des Sons: Musique-philosophie, Antonia Soulez, Horacio Vaggione, Guilherme Carvalho and Anne Sedes (eds.), 83–100. Paris: L'Harmattan.
———. 2012. 'Entre Musique et Écologie Sonore: Quelques Exemples', Sonorités 7: 167–86.
———, ed. 2014. 'Agostino Di Scipio: Audible Ecosystems'. Special issue of Contemporary Music Review 33/1.
Somfai, László, ed. 1981. Hungarian Folk Music: Gramophone Records with Béla Bartók's Transcriptions. Budapest: Hungaroton.
Sterne, Jonathan. 2012. 'Sonic Imaginations'. In The Sound Studies Reader, Jonathan Sterne (ed.), 1–17. London and New York: Routledge.
Stewart, D. Andrew. 2008. Sounds Between Our Minds (score). Self-published.
Stiegler, Bernard. 2005. De la Misère Symbolique. Vol. 2: La Catastrophe du Sensible. Paris: Galilée.
Stockhausen, Karlheinz. 1954. Studie II (score). Kürten: Stockhausen-Verlag.
———. 1970. Mantra (score). Kürten: Stockhausen-Verlag.
———. 1972–3. British Lectures. 7 lectures given at the Institute of Contemporary Arts, London, filmed by Allied Artists. http://karlheinzstockhausen.org.
———. 1974. Kreuzspiel, Kontra-Punkte, Zeitmaße, Adieu. The London Sinfonietta. Karlheinz Stockhausen. Polydor – Deutsche Grammophon 2530 443, LP.
———. 1979. Mikrophonie I. UE 15138. Vienna: Universal.
———. [1997]. 'Stockhausen Interview with Lawrence Pollard', The Culture Show with Lawrence Pollard (BBC2). www.youtube.com/watch?v=mrzi4YNhvig.
———. 1998. 'Elektroakustische Aufführungspraxis'. In Texte zur Musik 1984–1991. Vol. 8, Dienstag aus Licht/Elektronische Musik: 549–85. Kürten: Stockhausen-Verlag.
———. 2004. 'Electronic and Instrumental Music', Jerome Kohl (trans.). In Audio Culture: Readings in Modern Music, Christoph Cox and Daniel Warner (eds.), 370–80. New York and London: Continuum.
———. 2008. Kontakte. Realisationspartitur. Kürten: Stockhausen-Verlag.
Stoianova, Ivanka. 1985. 'Luciano Berio: Chemins en musique', La Revue Musicale 375–7.
Strate, Lance. 2006. Echoes and Reflections: On Media Ecology as a Field of Study. New York: Hampton Press.
Strauss, Anselm L. and Juliet M. Corbin. 1998. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage Publications.
Stroppa, Marco. 1984. 'The Analysis of Electronic Music', Contemporary Music Review 1/1: 175–80.
———. 1999. 'Live Electronics or… Live Music? Towards a Critique of Interaction', Contemporary Music Review 18/3: 41–77.
Stroppa, Marco and Jacques Duthen. 1990. 'Une Représentation des Structures Temporelles par Synchronisation de Pivot'. In Musique et Assistance Informatique, 305–22. Marseille: Laboratoire Musique et Informatique de Marseille.
Suliteanu, Gisela. 1972. 'The Traditional System of Melopeic Prose of the Funeral Songs Recited by the Jewish Women of the Socialist Republic of Rumania', Folklore Research Center Studies 3: 291–351.
Sullivan, Charles. 1990. 'Extending the Karplus-Strong Algorithm to Synthesize Electric Guitar Timbres with Distortion and Feedback', Computer Music Journal 14/3: 26–37.
Supper, Martin. 2016. 'Elektronische Musik/Elektroakustische Musik/Computermusik'. In Lexikon Neue Musik, Jörn Peter Hiekel and Christian Utz (eds.), 218–26. Stuttgart and Kassel: Metzler and Bärenreiter.
Takanishi, Atsuo and Manabu Maeda. 1998. 'Development of Anthropomorphic Flutist Robot WF-3RIV'. In Proceedings of the International Computer Music Conference, n.p.
Tamburini, Alessandro. 1985. 'Dopo Prometeo. Incontro con Luigi Nono', Quaderno LIMB (Laboratorio Permanente per l'Informatica Musicale della Biennale) 5: 11–14.
Taruskin, Richard. 1995. Text and Act: Essays on Music and Performance. New York: Oxford University Press.
Teruggi, Daniel. 2007. 'Technology and Musique Concrète: The Technical Developments of the Groupe de Recherches Musicales and Their Implication in Musical Composition', Organised Sound 12/3: 213–31.
Théberge, Paul. 1997. Any Sound You Can Imagine: Making Music/Consuming Technology. Middletown, CT: Wesleyan University Press.
Thibault, Dominic. 2012. Igaluk: To Scare the Moon with its Own Shadow (score). Self-published.
Tiffon, Vincent. 1994. 'Recherches Sur les Musiques Mixtes'. PhD Dissertation, Université d'Aix-Marseille 1.
———. 2002. 'L'interprétation des Enregistrements et l'enregistrement des Interprétations: Approche Médiologique', Revue DEMéter. http://demeter.revue.univ-lille3.fr/interpretation/tiffon.pdf.
———. 2004. 'Qu'est-ce Que la Musique Mixte?', Les Cahiers de Médiologie 18: 132–41.
———. 2005a. 'Pour Une Médiologie Musicale Comme Mode Original de Connaissance en Musicologie', Filigrane 1: 115–39.
———. 2005b. 'Les Musiques Mixtes: Entre Pérennité et Obsolescence', Musurgia 12/3: 23–45.
———. 2012. 'Le Public Comme "Musiquant": Exemple de l'installation Immersive et Interactive XY'. In L'ère Post-Média: Humanités Digitales et Cultures Numériques, Jean-Paul Fourmentraux (ed.), 163–80. Paris: Hermann.
———. 2013. 'Musique Mixte'. In Théories de la Composition Musicale au XXe Siècle, Nicolas Donin and Laurent Feneyrou (eds.), 1297–314. Lyon: Symétrie.
Tisato, Graziano. 1976. 'An Interactive Software System for Real-Time Sound Synthesis'. In Proceedings of the International Computer Music Conference (ICMC), 135–43.
———. 1977a. 'An Interactive Software System for Real-Time Sound Synthesis'. In Proceedings of the Secondo Colloquio di Informatica Musicale, Milan, 68–80.
———. 1977b. 'Un Sistema Interattivo per la Sintesi dei Suoni e la Loro Analisi Mediante Elaboratore'. In Proceedings of the Secondo Colloquio di Informatica Musicale, Milan, 112–24.
———. 1978. 'ICMS (Interactive Computer Music System): Manuale d'impiego'. Rapporto Interno, Centro di Calcolo, Università di Padova.
Toop, Richard. 2004. 'Expanding Horizons: The International Avant-Garde, 1962–75'. In The Cambridge History of Twentieth-Century Music, Nicholas Cook and Anthony Pople (eds.), 453–77. Cambridge: Cambridge University Press.
Tormey, Alan. 2011. 'Developing Music Notation for the Live Performance of Electronic Music'. Paper presented at the Concordia Live and Interactive Electroacoustic Colloquium, Montreal.
Trail, Shawn, Michael Dean, Tiago F. Tavares, Gabrielle Odowichuk, Peter Driessen, W. Andrew Schloss and George Tzanetakis. 2012. 'Non-Invasive Sensing and Gesture Control for Pitched Percussion Hyper-Instruments using the Kinect'. In Proceedings of the New Interfaces for Musical Expression (NIME), n.p.
Trail, Shawn, Leonardo Jenkins, Duncan MacConnell, George Tzanetakis, Mantis Cheng and Peter Driessen. 2013. 'STARI: A Self Tuning Auto-Monochord Robotic Instrument'. In Communications, Computers and Signal Processing, IEEE Pacific Rim Conference, 405–9.
Tranchefort, François-René, ed. 1989. Guide de la Musique de Chambre. Paris: Fayard.
Treitler, Leo. 1982. 'The Early History of Music Writing in the West', Journal of the American Musicological Society 35: 237–79.
———. 1984. 'Reading and Singing: On the Genesis of Occidental Music-Writing', Early Music History 4: 135–208.
Tresch, John and Emily I. Dolan. 2013. 'Toward a New Organology: Instruments of Music and Science'. In Music, Sound and the Laboratory from 1750–1980, Alexandra Hui, Julia Kursell and Myles Jackson (eds.), 278–97. Chicago: University of Chicago Press.
Tutschku, Hans. 2007. Zellen-Linien (score). Paris: BabelScores.
———. 2011. 'Using the iPhone for Live Electronics in My Composition Irrgärten for Two Pianos'. In Proceedings of the International Computer Music Conference (ICMC), Huddersfield, 395–98.
Ungeheuer, Elena. 1992. Wie Die Elektronische Musik 'Erfunden' Wurde: Quellenstudien zu Werner Meyer-Epplers Entwurf zwischen 1949 und 1953. Mainz: Schott.
———. 2013. 'L'électronique Live: Vers une Topologie de l'interaction Interprète-machine', Pascal Decroupet (trans.). In Théories de la Composition Musicale au XXe Siècle, Nicolas Donin and Laurent Feneyrou (eds.), 1367–86. Lyon: Symétrie.
Vande Gorne, Annette. 1985. Faisceaux (score). Self-published.
Vandenborgaerde, Fernand. 1972. 'Des Musiques Mixtes Aux Dispositifs Électro-Acoustiques Manipulés en Direct', Musique en jeu 8 (September): 44–9.
Varela, Francisco J. 1986. 'Experimental Epistemology: Background and Future', 'Cognition et Complexité', special issue of Cahiers du CREA 9: 107–23.
Varela, Francisco J., Evan Thompson and Eleanor Rosch. 1993. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, MA: MIT Press.
Varèse, Edgard and Chou Wen-chung. 1966. 'The Liberation of Sound', Perspectives of New Music 5/1: 11–19.
Vermersch, Pierre. 2011. L'entretien d'explicitation. Issy-les-Moulineaux: ESF.
Vidolin, Alvise. 1977. Musica/Sintesi. Musica Elettronica, Elettroacustica, per Computer. (Proceedings of the Incontro/Seminario di Istituti e Studi Europei di Musica Elettronica, Elettroacustica, per Computer. Archivio Storico Delle Arti Contemporanee). Venice: Conservatorio 'B. Marcello', La Biennale di Venezia, Archivio Storico delle Arti Contemporanee.
———. 1988a. La Musica Elettroacustica in Italia (Relazione Audiovisivi Presentata al Res International Electro Acoustic Music Festival), The Walters Art Gallery, Baltimore, MD, 1988.
———. 1988b. 'Un Modello per Creare Modelli', Annuario Musicale Italiano: 348–9.
———. 1988c. 'Sulla Musica Elettronica'. In Veneto in Musica. Dati e Riflessioni Sugli Anni Ottanta, Francesco Dalla Libera and Gianguido Palumbo (eds.), 145–6. Venice: CIDIM–Marsilio.
———. 1989a. 'Contatti Elettronici. La Linea Veneta Nella Musica Della Nuova Avanguardia', Venezia Arti, Bollettino del Dipartimento di Storia e Critica Delle Arti dell'Università di Venezia 3: 97–107.
———. 1989b. 'Problemi e Prospettive Dell'interprete Della Musica d'oggi: l'impatto Con le Nuove Tecnologie'. In Atti della European Conference of Promoters and Organizers of New Music, Brescia.
———. 1990. 'Ricognizione sui Centri Italiani di Informatica Musicale'. In L'arte Nella Transizione Verso il 2000, Quaderni di Tempo Presente, Pamini et al. (eds.), 321–5. Tivoli: Casa Sella Stampa.
———. 1991. 'I Suoni di Sintesi di Perseo e Andromeda'. In Orestiadi di Gibellina, Roberto Doati (ed.), 102–4. Milan: Ricordi.
———. 1992. 'Il Suono Mobile'. In Con Luigi Nono, Roberto Doati (ed.), 42–7. Venice/Milan: Festival Internazionale di Musica Contemporanea/Ricordi.
———. 1993. 'Problematiche e Prospettive Dell'esecuzione Musicale Con il Mezzo Elettronico'. In Suono e Cultura. CERM – Materiali di Ricerca 1990–92, Roberto Favaro (ed.), Quaderni di Musica/Realtà 31: 45–166.
———. 1996. 'L'interprete Elettronico'. In Atti del Convegno Conservatori e Nuove Professionalità, Conservatorio di Bologna.
———. 1997. 'Musical Interpretation and Signal Processing'. In Musical Signal Processing, Curtis Roads, Stephen Travis Pope, Aldo Piccialli and Giovanni De Poli (eds.), 439–59. Lisse: Swets & Zeitlinger.
———. 2002. 'Diario Polacco 2. E Nono mi Mise Alle Linee di Ritardo', Alias, weekly supplement to Il Manifesto, 2 February: 21.
———. 2008. 'I Documenti Sonori Della Musica Elettronica', Musica/Tecnologia 2: 49–65. www.fupress.net/index.php/mt/article/viewFile/3207/2816.
———. 2013. 'Les Studios d'électro-acoustique, Outils Collectifs et Traditions Nationales'. In Théories de la Composition Musicale au XXe Siècle, Nicolas Donin and Laurent Feneyrou (eds.), Vol. 1: 671–88. Lyon: Symétrie.
Vinet, Hugues. 2005. 'Les Nouveaux Musiquants', L'Inouï 1: 48–58.
Voegelin, Salomé. 2010. Listening to Noise and Silence: Towards a Philosophy of Sound Art. London and New York: Continuum.
Waters, Simon. 2007. 'Performance Ecosystems: Ecological Approaches to Musical Interaction'. In Proceedings of the 2007 Electroacoustic Music Studies Network Conference. www.ems-network.org/spip.php?article278.
———. 2013. 'Tactility, Proxemics and the Development of a Hybrid Virtual/Physical Performance System', Contemporary Music Review 32/2–3: 119–34.
Waters, Simon, ed. 2011. 'Performance Ecosystems'. Special issue of Organised Sound 16/2.
Weinberg, Gil, Brian Blosser, Trishul Mallikarjuna and Aparna Raman. 2009. 'The Creation of a Multi-Human, Multi-Robot Interactive Jam Session'. In Proceedings of the New Interfaces for Musical Expression, 70–73.
Weinberg, Gil and Scott Driscoll. 2006. 'Toward Robotic Musicianship', Computer Music Journal 30/4: 28–45.
Wenger, Etienne. 1998. Communities of Practice: Learning, Meaning, and Identity. Cambridge: Cambridge University Press.
Wigoder, Geoffrey. 1989. 'Masora'. In The Encyclopedia of Judaism, 468. New York: Macmillan.
Wilf, Eitan. 2013. 'Contingency as a Cultural Resource for Negotiating Problems of Intentionality', American Ethnologist 40/4: 605–18.
Williamson, Matthew Murray. 1999. 'Robot Arm Control Exploiting Natural Dynamics'. PhD Dissertation, Massachusetts Institute of Technology.
Wilson, Scott. 2010. On the Impossibility of Reflection (score). Self-published.
Wörner, Karl. 1973. Stockhausen: Life and Work. Berkeley: University of California Press.
Ying, Hao. 2011. 'Aural Challenge', Global Times, 25 May. www.globaltimes.cn/content/658840.shtml.
Zaldua, Alistair. 2011–12. Contrejours (score). Self-published.
Zattra, Laura. 2000. 'Da Teresa Rampazzi al Centro di Sonologia Computazionale (CSC)'. MA Thesis, Università di Padova.
———. 2002. 'Storia, Documenti, Testimonianze'. In Vent'anni di Musica Elettronica all'Università di Padova: il Centro di Sonologia Computazionale, Sergio Durante and Laura Zattra (eds.), 13–102. Padova: CLEUP / Palermo: CIMS, Archivio Musiche del XX Secolo.
———. 2006a. 'The Identity of the Work: Agents and Processes of Electroacoustic Music', Organised Sound 11/2: 113–18.
———. 2006b. 'La "Drammaturgia" Del Suono Elettronico nel Perseo e Andromeda di Salvatore Sciarrino'. In La Musica Sulla Scena. Lo Spettacolo Musicale e il Pubblico, Alessandro Rigolli (ed.), 41–58. Torino-Parma: EDT-La Casa della Musica.
———. 2007. 'The Assembling of Stria by John Chowning: A Philological Investigation', Computer Music Journal 31/3: 38–64.
———. 2008. 'Tipologie di Scrittura nella Musica Informatica Mista', Musica/Realtà 29/87: 89–103.
———. 2009. 'Introduzione Alla Bibliografia di Alvise Vidolin'. In 60 dB. La Scuola Veneziana di Musica Elettronica. Omaggio ad Alvise Vidolin, Paolo Zavagna (ed.), 163–80. Firenze: Olschki.
———. 2013. 'Les Origines du Nom de RIM (Réalisateur en Informatique Musicale)'. In Proceedings of the Journées d'Informatique Musicale (JIM 2013), 113–20. Université Paris VIII, Saint-Denis. www.mshparisnord.fr/JIM2013/actes/jim2013_14.pdf.
Zattra, Laura, Ian Burleigh and Friedemann Sallis. 2011. 'Studying Luigi Nono's A Pierre. Dell'azzurro Silenzio, Inquietum (1985) as a Performance Event', Contemporary Music Review 30/5: 411–37.
Zattra, Laura, Giovanni De Poli and Alvise Vidolin. 2001. 'Yesterday Sounds Tomorrow. Preservation at CSC', Journal of New Music Research 30/4: 407–12.
Zattra, Laura and Nicolas Donin. 2016. 'A Questionnaire-based Investigation of the Skills and Roles of Computer Music Designers', Musicae Scientiae 20/3: 436–56.
Zbikowski, Lawrence Michael. 2002. Conceptualizing Music: Cognitive Structure, Theory, and Analysis. Oxford: Oxford University Press.
Zhang, Mingfeng, John Granzow, Gang Ren and Mark Bocko. 2014. 'Timbre Imitation and Adaptation for Experimental Music Instruments: An Interactive Approach Using Real-Time Digital Signal Processing Framework'. In Proceedings of the Audio Engineering Society 137, 1–8.
Zimmermann, Heidi. 2000. Tora und Shira: Untersuchungen zur Musikauffassung des Rabbinischen Judentums. Bern: Peter Lang.
Zölzer, Udo, ed. 2002. Digital Audio Effects. Chichester: Wiley & Sons.
Index
4i digital work station 73, 89–95, 97, 100n11
4X digital work station 60, 64, 73
Abbas, Taty 247–248
absolute Musik 9
accelerometer 55, 164
Adams, John D. S. 273n13
aesthetic 3, 5, 23, 43, 63, 72, 75, 85, 87, 88, 90, 96, 100n5, 101, 107, 111, 167, 197, 208, 222, 227, 228n6, 253, 276, 279, 298–302, 304n6; of collage 6; Romantic 277
Agamennone, Maurizio 227n2
akoumenon 39, 45n15
Albèra, Philippe 279
Alessandrini, Patricia, Schattengewächse 153, 158
al-Husarî, Mahmûd Khalîl 232–233
Ambrosini, Claudio 79n50, 83
AMM 2
analysis 12, 13n11, 18, 44n4, 62, 66, 102, 104, 106, 138, 172, 175, 177, 184, 192, 197, 219, 221, 223–224, 228n14, 230–252, 253–255, 258, 260–261, 263, 272, 274n18, 292; computational (of vocal music/chants) 230, 242–249; qualitative 102–104, 117; quantitative 95, 102; spectral (of music) 273n6
Antescofo (software) 291–293, 298, 299, 301–302
Appadurai, Arjun 229n16
Archivio Luigi Nono (Venice) 100n10, 100n11
Archivio Storico delle Arti Contemporanee (ASAC) 100n11
Arduino 173
Arom, Simha 219–224, 228n8, 228n10, 228n11; recording techniques developed by 222–224
Assmann, Aleida and Jan 252n23
Austin, Larry, Accidents Two 144–146, 158, 158n1
authorship 11, 66, 195–214, 214n1, 217, 219, 220–222, 224, 226, 227n1, 277–279, 285, 298; absence of 217–229; and the concept of écriture 278–279, 286; hermeneutic circles involving 196–198, 217; meta- 212–213; post- 213
Bach, Johann Sebastian 108, 109
Baggiani, Guido 79n50
Banff Centre (Canada) 259–261, 267, 273n13, 275, 280–281
Bancquart, Alain 279
Baron, John 109
Bartók, Béla 217, 227n2, 230–231, 249, 251n20; transcription (sirató) 231, 236–242
Battier, Marc 65
Battistelli, Giorgio 79n50, 83
Baumgart, Bruce 78n30
Bayle, Laurent 62, 66
Beethoven, Ludwig van 9, 108
Behrman, David 73, 79n50; Oracolo 73
Bekker, Paul 61
Bell Telephone Laboratories 100n15, 273
Belletti, Giovanni 88
Bennett, Gerald 62
Berberian, Cathy 207, 215n8, 228n6
Berger, Michael 163
Berio, Cristina 215n8
Berio, Luciano 11, 37, 69, 79n44, 83, 159n4, 197–202, 206–209, 211, 212, 213, 214, 215n6, 215n8, 215n11, 218, 219, 221, 224, 227n4, 228n6, 228n14; Altra voce 207, 209, 215n9, 215n12; Coro 220–222, 228n14; Cronaca del Luogo 215n9; Naturale 218; Ofaním 207–209, 215n9; Outis 215n9; Ritratto di città (with Bruno Maderna) 224; Sequenza I 198–202, 207, 212; Sequenza III 202, 206–207; Visage 228n6; Voci 218
Berweck, Sebastian 106, 147, 159n9
Birnbaum, David 153
Biró, Dániel Péter 243, 246
Blacking, John 220
Bloland, Per 157–158; Elsewhere Is a Negative Mirror 144; Of Dust and Sand 144–145, 158
Boie, Bob, Radiodrum 185
Borges, Jorge Luis, Celestial Emporium of Benevolent Knowledge 161
Boulez, Pierre 4, 59–61, 63–66, 69–70, 75, 77n10, 77n20, 79n49, 111, 123, 275, 286, 304n5, 304n6; Répons 4, 59–61, 65
Boutard, Guillaume 100n6, 102, 118
Brahms, Johannes 61, 85, 219; Violin Concerto 61
Brandeis University 1
Breitscheid, Johannes 100n8
Bristow, David 49
British Broadcasting Corporation (BBC) 207
Britten, Benjamin 61
Brooklyn College 71
Brown University-Pembroke College 207
Bryars, Gavin 167
Bunk, Lou 138; Being and Becoming 138–140, 157
Bussotti, Sylvano 99n2
Cadoz, Claude 166
Cage, John 1, 159n11, 172, 224–225, 229n17; Cartridge Music 1; Fontana Mix 224; Il treno di John Cage. Alla ricerca del silenzio perduto 229n17; Imaginary Landscapes 229n17
Cahill, Thaddeus, Telharmonium 9
California Institute of the Arts, Machine Orchestra 177
Cananzi, Anselmo 79n50
Cancino, Juan Parra 273n13
Celano, Peppino 218–219
Center for Computer Research in Music and Acoustics (CCRMA) 10, 62–63, 66–70, 71, 75, 76, 78n29, 78n30, 78n36, 79n45, 100n9, 260
Centre for Interdisciplinary Research in Music, Media and Technology (CIRMMT) 116, 125
Centre Georges Pompidou 63, 76n6
Centro di Sonologia Computazionale (CSC) 10, 62–63, 70–74, 75, 76, 79n46, 79n47, 80n52, 83, 84, 88–90, 92–94, 97, 98, 99, 100n14, 100n15, 292
Centro Ricerche Musica e Sperimentazione Acustica (CERM Sassari) 227n3
Centro Studi Luciano Berio (Florence) 227n4
Chadabe, Joel 66, 73–74, 79n50; Canzona veneziana 73
Chafe, Chris 46, 161, 163, 170; Animal (algorithm) 10, 11, 46–56, 161, 163–171; Oxygen Flute 47; Phasor 10, 46–48, 55, 163–164; Tomato Music 48, 56–57; Tomato Quintet 10, 46–48, 55, 56; Tomato Quintet II 47; Vanishing Point 51
chaos 51, 168; theory (butterfly effect) 292
Charvet, Pierre 78n26
Chiesa di San Lorenzo (Venice) 90, 92
Chiesa di San Rocco (Venice) 96
Chowning, John 49, 62–63, 66–70, 75, 76n5, 78n30, 78n36, 100n9; Sabelithe 66; Turenas 66
Clark, Andy 166
Clarke, Eric 102
Clementi, Aldo 79n50, 83
Collins, Nicolas 6
composition 1, 3–4, 7–9, 13n4, 13n9, 13n11, 23, 38, 44n12, 45n13, 48, 56, 83–85, 88, 91–93, 96, 98, 100n14, 101, 108, 112, 117, 119, 125, 133–134, 150, 166, 169, 174, 180, 189, 191, 197, 198, 202, 209, 211, 213, 214, 217–219, 224–225, 227, 227n2, 227n4, 228n6, 228n7, 229n17, 229n22, 229n23, 255–256, 258, 276, 279, 285, 287, 290–293, 296, 304n6; algorithmic 160, 166, 170, 175; as an 'opus perfectum et absolutum' 277; by a single composer 286; collaborative 10, 59–80, 151, 175, 290, 299, 303; soundscape 224–227, 287; writing and rewriting 198
computer music designer (réalisateur en informatique musicale, RIM) 10, 13n4, 59–61, 63, 66, 76n3, 77n18, 83–85, 87, 88, 89, 92–93, 95, 98–99, 100n6, 112, 113, 118, 130n4, 130n6, 290, 291, 293–294, 299, 301
computer vision 187
Collectif de Recherche Instrumentale et de Synthèse Sonore (CRISS) 278–279
Conservatorio Cesare Pollini (Padova) 99
Cont, Arshia 291–293, 301, 304n10
Cook, Nicholas 212, 213, 216n8
Cook, Perry 45
Cope, David H. 1, 166
Craft, Robert 61
Craik, Kenneth 169
creative process 7, 10, 67, 72, 75, 83, 85, 86, 87, 98, 100n14, 112, 166, 198, 207, 209, 211, 219, 220, 279, 290–292, 293, 298, 299, 300, 302–303
Ctesibius of Alexandria 58n3
Cullinane, Diane 105
Cummings, E. E. 304n3
Dack, John 7
Dalla Vecchia, Wolfango 79n50, 83
Danuser, Hermann 196, 197, 215n6
Darmstädter Beiträge 277
Dashow, James 70, 71, 79n50; Effetti Collaterali 70, 79n50
Daston, Lorraine 170
Daubresse, Eric 78n26
daxophone 11, 161–165, 167, 168, 170; bowing techniques 55, 161, 164
De Benedictis, Angela Ida 3, 11, 12
Debiasi, Giovanni Battista 70
Debussy, Claude 279
de Coudenhove, Christophe 78n26
Decoust, Michel 79n44
delay line 21, 44n2, 164, 170
De Poli, Giovanni 70
Derrida, Jacques 45n15, 170
DeVale, Sue Carole 160
Di'an, Fan 47
Di Giugno, Giuseppe 60, 63, 65, 89
Di Scipio, Agostino 10, 79n50; enactivism 19; [Works]; 3 pezzi muti 44n7, 44n8; 3 stille stücke 44n8; 3 difference-sensitive circular interactions 45n13; Audible Ecosystemics 18, 37; Audible Ecosystemics n. 2a (Feedback Study) 44n5; Audible Ecosystemics n. 3a (Background Noise Study) 41, 44n4; Audible Ecosystemics n. 3b (Background Noise Study, in the Vocal Tract) 44n7; Audible Ecosystemics n. 3c (Background Noise Study, with own sounds) 41; Book of Flute Dynamics 44n12; Condotte Pubbliche 40, 44n1; Due di Uno 44n12; Koinoi Topoi 41; Modes of interference n. 1 45n13; Modes of interference n. 2 45n13; Pulse Code 44n12; Stanze Private 40; Texture Residue 44n8; Texture Multiple 45n13; Two Pieces of Listening and Surveillance 10, 18, 19, 20–39, 41, 42–43, 43n1; Two Sound Pieces with Repertoire String Music 44n8; Untitled 2005 44n4
Disklavier (Yamaha) 49, 145, 174
Doati, Roberto 79n50
Dodge, Charles 71, 100n9
Dolan, Emily 161
Donatoni, Franco 79n50
Donin, Nicolas 7
Dorigo, Wladimiro 72
Dorssen, Miles van 175
Driessen, Peter 192n1
Dufourt, Hugues 275–276, 278–279, 288n2, 288n5
Dunsby, Jonathan 106
dynamical system 34, 44n5, 51, 164
Eckel, Gerhard, Con una Certa Espressione Parlante (with Karlheinz Essl) 153–155, 158, 158n1
Eco, Umberto, Opera Aperta and the concept of the open work 198, 298
Emmerson, Simon 3, 4–5, 6, 13n11, 134, 159n4, 163, 170
Équipe 'dispositifs, expérimentations, situations en art contemporain' (EDESAC) 291, 296
Erikson, John, Loops 66
Essl, Karlheinz, Con una Certa Espressione Parlante (with Gerhard Eckel) 153–155, 158, 158n1
ethnomusicology 11, 219, 223, 249n1; computational 249n1
Evangelisti, Franco, Spazio a 5 36
Fabbriciani, Roberto 85–86, 100n8, 212, 258
Favreau, Emmanuel 65, 79n49
feature-extraction methods 31, 44n10
feedback 21–23, 25–26, 28, 33, 42, 43, 45n13, 49, 50–52, 53–54, 97, 153, 166, 170, 261, 273n14, 273n16, 284; human-instrument 150, 163, 165, 166–167, 298–300, 303
Feher Jewish Music Center (Tel Aviv) 243
Feld, Steven 223, 224–227, 228n15, 229n20, 229n21, 229n22; recording techniques developed by 225–226
Féron, François-Xavier 102
Festival Agora (Paris) 76n2
Festival d'automne (Paris) 279, 286
Festival d'Avignon 285
First Festival of Live Electronic Music 2
Fitz, Kelly 265
Fondazione Giorgio Cini 88, 288n3
Fondazione Isabella Scelsi 43n1
FORTRAN 66
Foucault, Michel 196
Fourier transform 257, 263, 265, 266
Franssen, Marieke 273n13
Frasch, Heather 136; Frozen Transitions 136–137, 142, 157
fundamental frequency (f0) 232–233, 235, 243, 250n12, 251n13, 258
futurism (Italian) 224
Gabor, Dennis 257, 265, 273n8, 273n9
Gaussian kernel 233, 239
Gazzelloni, Severino 198
Gentle Fire 2
Georgia Institute of Technology, Shimon 177
Gervasoni, Pierre 59, 76n2
Gerzso, Andrew 59–60, 65, 76n2, 76n3, 76n4
Giomi, Francesco 215n11, 215n12
Glass, Philip 61
Globokar, Vinko 79n44
Goebel, Johannes 8
Gould, Glenn 165
Goves, Larry, My name is Peter Stillman. That is not my real name 159n11
Graziani, Mauro 71, 73–74, 79n50, 89, 93
Gregorian chant 250n9, 252n24
Grey, John 66
Grisey, Gérard 13n9, 275–276, 277, 278, 288n2; Les espaces acoustiques, Partiels 276
Guarnieri, Adriano 80n50, 83
Haken, Lippold 265
Haller, Hans Peter 13n5, 96, 100n8, 105, 273n3, 285
Hammond organ 9
harmonics 51, 54, 56, 151, 254, 258, 259
Harrison, Jonty 133, 136; Some of its Parts 134–135, 142, 153
Harvey, Jonathan 67, 138; Tombeau de Messiaen 138, 157
Harvard Dictionary of Music 2
Haynes, Stanley 67
Heinrich-Strobel-Stiftung (Freiburg) 13n5, 89, 91, 92, 96–97, 100n9, 115, 285
Henius, Carla 215n15
Henry, Pierre 8
Höller, York 114
Holst, Imogen 61
Hopkins, Bart 168
Howe, Hubert Jr 79n50
Hugo, Victor 75, 80n55
Hui, Alexandra 168
Hummel, Thomas 78n26
IBM System 370 (hardware) 70
idiophone 162, 164, 169, 170, 176
Impett, Jonathan 80n50
improvisation 11, 48, 96, 132, 153, 161, 163, 165–171, 172–192, 211, 212, 236, 243, 248, 251n19, 255; algorithmic 175; automated 11, 168–169, 189–191; collective 1–2, 11, 96, 175, 184, 211; free 161, 163, 165, 168, 169
installations 10, 12, 18, 25, 40–41, 44n1, 44n4, 45n16, 46, 47, 56, 66, 93, 110, 120, 153, 163, 172, 173, 176, 184, 189, 192, 192n2, 225, 227, 291, 294–295, 297–299, 301, 303
Institut de Recherche et Coordination Acoustique/Musique (IRCAM) 4, 6, 10, 59–60, 62, 63–66, 68–69, 70–71, 72, 73, 75–76, 76n3, 76n6, 77n7, 77n8, 77n9, 77n11, 78n26, 78n28, 79n44, 79n45, 79n49, 89, 11n7, 115, 117, 122, 127, 228n14, 286, 291, 292–294, 295, 304n6
instrument (musical) 18, 20, 50, 55–56, 57, 58n3, 93, 104, 130n3, 165–168, 172–174, 179, 192, 255, 297; algorithmic 161, 163; as a physical machine 165; autonomous 164, 169–170; definition of 104; design 11, 63, 168, 185; digital 157; extended technique 282; hyper- 151, 175; meta- 7; organology 11; robotic 173–177, 179–180, 184, 189–192; taxonomies of 160–161, 169
interactivity 3, 4, 8–9, 175, 292–293, 295, 304n6; Interactive Computer Music System (ICMS) 73, 80n53
International Computer Music Conference (ICMC) 79n48, 80n54, 84
Istituto di Ricerca per l'Industria dello Spettacolo (IRIS) 97
Itinéraire 278
Jacobs, Brian, Song from the Moment 149–150, 158
Jaffe, David A. 67; Silicon Valley Breakdown 67; The Space between Us 180
Jazarī, Ismāʻīl ibn al-Razzāz 174
Joachim, Joseph 61, 85, 86
Johnson, Mark 169
Jordà, Sergi, Afasia 176
Kapur, Ajay 192n1; MahaDeviBot 174–175
Karchach, Yehoshua ben 252n21
Karman, Gregorio Garcia 131
Karpen, Richard 69, 71, 79n50
Kinect (Microsoft) 186–189, 191
Koenig, Gottfried Michael 88
Kojs, Juraj 142–144, 159n10; All Forgotten 142, 144; Three Movements 142, 143, 144, 157, 158
Kontarsky, Alois 159n5
Kott, Jean 63
Kranenburg, Peter van 249n1
Kremer, Gidon 5
Krenek, Ernst 61
Kurtág, György, Játékok 151, 158
La Biennale (Venice) 67, 73–74, 79n49, 84, 96, 99n2, 100n4
Laboratorio Permanente per l'Informatica Musicale della Biennale di Venezia (LIMB) 84, 99n2
Lakoff, George 169
laments, Hungarian and Jewish 11, 230, 231, 236–237, 250n2, 251n15
Lemouton, Serge 78n26, 78n28
Leroi-Gourhan, André 302
Lévinas, Michaël 275–276, 278
Lewis, George 169
Leydi, Roberto 224
Li, Yangke 274n20
Ligeti, György 219, 228n10, 278; Atmosphères 278, 288n4; Harmonies 56; Pièce électronique no. 3 288n4; Volumina 56
Lillios, Elainie, Nostalgic Visions 138, 141, 157, 159n12
Lindberg, Magnus, Related Rocks 145
Lippe, Cort 78n26
Liszt, Franz 61, 278
live electronic music; agents (human and machine) and agency 4, 8, 13n4, 19, 37, 42, 76n1, 104–105, 109, 116, 161, 163–165, 166, 167, 170–171, 222; and listening 4, 12, 19–20, 30, 33–34, 36, 38–43, 61, 87, 99n3, 117, 134, 140, 157, 158, 170, 173, 175, 179–180, 181, 198, 207, 222, 226, 227, 253, 255, 258–262, 263, 267, 276, 287, 289n7, 296–298, 300–301, 303; as a binary 'human vs machine' concept 2, 3–4, 6, 17–18, 33, 104, 105, 107, 108, 111, 112, 115, 116, 119, 161–164, 165, 217, 224, 293, 300; as a game-like event 12, 50, 290, 297–298; as chamber music 101, 107–109, 110–111, 114, 115, 117–119, 136, 157, 293, 298, 300–302; as commercial popular music 13n3; definitions of 1–3, 9, 17–18, 79, 132; history of 3, 4, 9, 101; piano and 6, 11, 44n7, 79n49, 88, 131–159, 172–174, 189–192, 202, 293, 300; produced by autonomous sound-generating systems 25, 31, 36, 42; three concepts of 3–4, 107; three paradigms of technological development of 4–5; using original electronic equipment 195–196
live electronics 2, 17–18, 19, 36–37, 42, 61, 72, 83, 91–92, 96, 97, 100n6, 100n9, 101–129, 131–159, 196, 198, 207, 208, 210–212, 215n10, 215n11, 218, 256, 258, 259, 260, 273n13, 286, 290, 292–293, 301, 302
liveness (definitions) 2, 6, 13n11, 17, 36; real time 2, 4, 5, 8–9, 22, 47, 50, 56, 61, 73, 90, 95, 97, 105, 138, 153, 175, 190, 196, 283, 294, 296, 300, 302
logistic map 49, 51–52, 55
loop detection 178, 188–189
Lord, Albert Bates 220
Loris (audio analysis library and software) 265–267, 272, 274n18, 274n19
Lorrain, Denis 65
Loy, Gareth 78n36
Lucier, Alvin 36, 45n16; I am Sitting in a Room 26, 56
Machover, Tod, Fusione Fugace 79n49
Maderna, Bruno 224, 275; Musica su due dimensioni 3, 101; Ritratto di città (with Luciano Berio) 224
Maggio di Accettura 226, 229n22, 229n23
Maggio Musicale Fiorentino 96
Mahler, Gustav, Symphony No. 1 91
Malouf, Fred, Chromatonal 69
Manning, Peter 3
Manoury, Philippe, Sonus ex machina 4
Manzanilla, Ivan 161
Marie, Jean-Étienne 5–6
Martin, Steve 70
Massachusetts Institute of Technology (MIT) 79n47, 80n54, 175, 292
Mathews, Max 70, 100n15
Maura, Carlos Noain 273n13
Max/MSP 4, 76n5, 105, 114, 119, 128, 142, 144, 157, 260
Mayr, Albert 80n50
Mazzolini, Marco 258, 280, 284
McCalla, James 103
McIntyre, Michael 50, 55
McLuhan, Marshall 18
McMillen, Keith 57n1, 171n2; K-bow 55, 57n1, 171n2
McNabb, Michael, Invisible cities 69
Meacci, Damiano 215n12
Mead, Philip 159n6
mediology 291–292
Meertens Institute (Amsterdam) 249n1
Melby, John 80n50
Mel-Frequency Cepstral Coefficient (MFCC) 180
melody; contour 231, 232–236, 237, 242, 243, 246–248; formula and gesture 230–232, 234, 242, 243, 245, 249
Menger, Pierre-Michel 105
Mercato, José 70
Meric, Renaud 44n4
Merleau-Ponty, Maurice 166
Messinis, Mario 99n2
Metheny, Pat, Orchestrion 174
microphone 13n3, 20–24, 28, 31, 44n7, 105, 111, 114, 116, 119, 175, 179–180, 184, 190, 215n7, 224, 258, 260, 261; ambisonic (Soundfield MKV) 260, 273n15; contact 153, 161, 163; dynamic (Dimensional Stereo Microphone DSM) 225–226, 229n9, 273n14; small diaphragm condenser 260
Milan 62, 86, 88, 92, 93, 100n4, 224, 292
Mills College Tape Center 2
Mittman, Greg 170
Montague, Stephen, Tongues of Fire 158n1
Moore, F. Richard 66
Moorer, James (Andy) 63, 66, 68, 70, 77n7, 78n36; Lions are Growing 79n37; THX Logo Theme 79n37; We Stopped At Perfect Days 79n37
Morales, Roberto 161
Motz, Wolfgang 80n50
Mumma, Gordon 9, 160
Murail, Tristan 13n9, 275–276, 278–279
music (see also live electronic music); acousmatic 2, 61, 72, 86–87, 115, 133, 134, 157, 163; as an ideal aesthetic object (the strong work concept) 10, 167, 277; conceived as a game 12, 290, 297–298; conceived as sound 276–277, 297; as text (written, oral or electronic) 30, 131, 159n7, 196–200, 209–211, 213–214, 216n20, 217–229, 231, 234, 248, 249, 252n21, 252n23, 255, 277; computer 3, 10, 13n4, 23, 37, 44n10, 46, 48, 60–62, 65, 66, 70–71, 73, 76, 76n5, 78n35, 79n45, 79n47, 80n54, 83–85, 100n9, 100n14, 108, 165, 177, 293–295, 299–303; electroacoustic 3, 5, 7, 13n2, 17–18, 23, 36, 42, 45n14, 62, 79n44, 83–84, 86, 87, 88, 104, 112, 115, 131, 160, 163, 186, 225, 227; memory of 255; of the Banda Linda 220–222, 224; of the Kaluli women 225; perception of 19, 21, 28, 33, 38, 39–40, 43, 44n2, 47, 66, 106, 133, 136, 226, 248–249, 255–256, 296, 301; Sardinian 227n3; spectral 13n9, 275–277, 278, 304n6; technophobia in discourse about 2; textualisation of 218–219, 223–224, 228n5; traditional 11, 37, 55, 93, 110, 225, 255
Music 4BF (software) 70–71
MUSIC 5 (software) 71, 80n53, 89, 95, 96, 98, 100n15, 300
MUSICA (software) 70
Musica Elettronica Viva 1, 13n1
musical assistant 10, 59–80, 83, 92, 113, 115, 290
Musical Audio Research Station (MARS) 97
Musical Instrument Digital Interface (MIDI) 4, 97, 132, 137, 140, 145, 147, 150, 158, 174, 180, 181, 184, 186, 190, 232, 243, 247, 257
musicants (audience participants of musical events) 12, 290, 296–302, 304n1
music history; as defined by compositional outcomes 9; by composers 279; misread by composers 9
Music Information Retrieval (MIR) 177, 192
musique concrète 7, 8, 61, 94, 132, 133, 224
musique mixte (mixed music) 5–7, 13n6, 13n7, 17–18, 36, 42–43, 101, 103–105, 107, 110–111, 112–118, 120, 122, 123, 124, 126, 159n15, 304n6; temps réel-temps différé 5–6, 13n7
Na'amani, Amir 244
Nakamura, Toshimaru, no-input mixer 36
National Art Museum of China (NAMOC) (Beijing) 47
Nattiez, Jean-Jacques 110n5, 292
Natural Sciences and Engineering Research Council (NSERC Canada) 192n1
Nelson, Andrew J. 64
Ness, Steven 192n1
New York Times, The 8
NeXT computer 67
Nicolls, Sarah 159n5, 159n11
Niemeyer, Greg 47
Noll, Bernd 100n8
non-linearity 21, 33–34, 36, 44n5, 50–51, 56, 90, 162, 164, 168, 170, 181, 186
Nono, Luigi 5, 10, 11, 12, 36, 73, 79n49, 80n50, 83, 84, 86–87, 88–93, 94, 96–97, 98, 99n2, 99n3, 100n4, 100n9, 197–198, 209–213, 215n13, 215n15, 273n3, 275–289; 'Altre possibilità di ascolto' 276, 287–288; Scritti e colloqui (writings) 275; suono mobile 91; [Works]; 1º Caminantes….. Ayacucho 90; A floresta é jovem e cheja de vida 86–87, 100n4, 209–210, 215n13; A Pierre. Dell'azzurro silenzio, inquietum 12, 210, 258–273, 275–276, 279–287; Como una ola de fuerza y luz 90; Con Luigi Dallapiccola 88; Das atmende Klarsein 210, 212; Das atmende Klarsein-Fragmente 212; Découvrir la subversion: Hommage à Edmond de Jabès 211; Guai ai gelidi mostri 89, 96; Io, frammento dal Prometeo 89, 96, 210, 276; La fabbrica illuminata 209, 215n15, 227n3; La lontananza nostalgica utopica futura 5; Omaggio a György Kurtág 89, 96, 210, 211; Post-prae-ludium per Donau 276, 285; Prometeo. Tragedia dell'ascolto 4, 5, 10, 73, 85, 89–93, 96–97, 100n11, 210, 276; Quando stanno morendo. Diario polacco n. 2 89, 96, 210; Ricorda cosa ti hanno fatto in Auschwitz 86; Risonanze erranti: Liederzyklus a Massimo Cacciari 210; …..sofferte onde serene… 88, 210
Normandeau, Robert 151, 153; Figures de rhétorique 151
North, Christine 7
Norton Lectures (Harvard University) 214, 215n6
notation 61, 98, 105, 131–159, 231, 243, 250n10, 252n23, 255–256, 257, 258, 259, 261, 273n11, 277–279, 290; conventional staff 11–12, 256, 257–258, 261, 273n11, 277–280, 290; for chant (text based) 249; graphic (descriptive) 26–28, 30, 31, 134–141, 157, 207, 255–256, 258, 259, 273n4, 273n10; hybrid 131, 151–157, 158; morphological 159n15; music that escapes conventional Western staff 9–12, 279–286; neumatic (neume) 231, 233–235, 250n8, 250n10, 255, 280; of electroacoustic/electronic music 83–84, 106, 118, 132–141; of sound spatialisation 158n1, 159n3; performance score 138, 145, 147, 158, 256, 263, 267, 280–281, 284–285; piano roll 172, 257; proportional 198, 200; quartertone 138; Schaefferian categories (partition causale vs. partition des effets) 132–134; software for 70–71, 231; study score 138, 147, 158; tablature (prescriptive/realization score) 31, 42, 131–134, 142–150, 151, 157–158, 207, 255–256, 273n4; The Rulers 153, 156, 157; verbal (score) 30
Nuova Atlantide. Il Continente della Musica Elettronica 84
Odowichuk, Gabrielle 192n1; Prepared Dimensions (with David Parfit) 11, 189, 191
ONCE Group 2
Ondes Martenot 9, 130n3
Open Sound Control (OSC) 180
Open Space Arts Society (Victoria, Canada) 192n2
Oppo, Franco 227n3
Otto, Susanne 100n8
Parfit, David, Prepared Dimensions (with Gabrielle Odowichuk) 11, 189, 191
Parra, Helga Arias, Astraglossa, or First Steps in Celestial Syntax 159n11
Parry, Milman, on Homeric texts 222, 228n9
Partch, Harry 160
Pattella, Gianantonio 73
Patton, Kevin 159n15
Paul Sacher Foundation 100n11, 215n8
Pays, Anthony 216n19
PDP system 64, 66, 77n7, 93
PEATA (computer application) 91–92
Peirce, Charles Sanders 304n9
Péntek, Mrs. János 236–240
performance (see also improvisation) 1, 2–4, 7–12, 13n4, 13n9, 19, 20–22, 24–25, 26, 28, 33, 39, 41, 43n1, 44n5, 44n6, 48, 61, 63, 64, 68, 79n49, 83–85, 89–92, 95–99, 100n4, 100n6, 101–104, 106, 108–109, 110–119, 129n1, 130n4, 131–159, 159n9, 160–161, 215n4, 217, 220, 225, 227, 253, 255–256, 257–258, 260–261, 267, 273n13, 275, 277, 279–287, 290, 292, 293, 296, 298–300, 302, 303; and authorial intention 196–197, 202, 215n4, 216n20, 217, 222, 285–286; and recording 11, 218–219, 222–224, 227n3, 228n6, 261–263, 273n16, 274n16, 290; as an ecosystem 18–19, 29–31, 33–38, 41–43; automated software and robotic 11, 16, 46, 50, 163–164, 172–192, 298–299; collective 220, 223; indeterminacy in 2, 132, 134–141, 255–256; ontologies of 169–170; traditions of 11, 196–202, 209–213
performers 1–2, 3, 6, 7, 10–12, 33, 38, 41, 44n7, 79n49, 87, 88, 90, 97, 99, 102, 104, 106–109, 112, 118–119, 132, 133, 136, 151, 157, 159n5, 171n4, 186, 196–197, 200, 208–214, 215n13, 216n18, 220, 222, 228n10, 232, 237, 251n20, 256, 258, 259, 260, 261, 278, 279–281, 285–286, 290, 296, 301; access to technology 103, 115–116, 296; definition of 104; gestures of 8, 24, 28, 33, 55, 91, 107, 133, 142, 151, 161, 163–164, 166–167, 172, 175, 178, 181, 185–191, 219, 230–231, 234, 236, 242–243, 284–285, 300–301; human vs. non-human 48, 168–169, 173, 178, 191; interactions among 97, 106–109, 112, 113–114, 145, 158n1; live electronic 104, 113, 118, 130n3, 130n4; rehearsal strategies 38, 94, 106, 108, 110–111, 113, 115, 118–119, 132, 136, 140, 150, 157, 158, 178, 180, 216n20, 259, 261, 273n13, 301; technology and 17–45, 113, 114, 118, 132, 133
Piano, Renzo 90
pitch; histogram 232–235, 239, 242, 244, 246, 250n12; quantisation 232–233
Plessas, Peter 100n6, 118
Portsmouth Sinfonia 167
Poullin, Jacques 8
Prati, Walter 99n2
Pressing, Jeff 169
Puckette, Miller 4
PureData (software) 291, 294, 297, 303
Qur'an, recitation of 11, 230, 231–232, 243, 246–249, 250n5, 250n6
Radio Bremen 227
Raff, Joseph Joachim 61
Raman, Chandrasekhara Venkata 175
Rampazzi, Teresa 80n50; Fluxus 73
Razzi, Fausto 80n50
Reichel, Hans (see also daxophone) 161–162
Restagno, Enzo 275, 288n2
Reynolds, Roger 57
Richard, André 113, 114–115, 118, 215n14, 259, 280, 284
Rink, John 102
Risset, Jean-Claude 6, 63, 69, 70, 75–76, 76n5, 79n44, 292
Rizzardi, Veniero 100n4
Roads, Curtis 23, 253–254
robotics (music and musicianship) 52, 172–192; and performance 10–11, 189–192; detection of musical parameters through 179–189; gesture control 178, 186–189, 191; humanoid musician (Cog) 175; proprioception (self-awareness) 178–179
Rodet, Xavier 64
Rolnick, Neil B. 63
Rossi, Tommaso 43n1
Rothery, Evan 274n17
Rouch, Jean 229n20
Rowe, Robert 173
Rush, Loren 66
Russolo, Luigi 224
Ruttmann, Walter 224
Ryan, Virginia 229n21
Stanford Artificial Intelligence Language (SAIL) 66, 68–69, 78n36
Sanden, Paul 2
Sapir, Sylviane 73, 89–90, 91, 93, 97
Scaldaferri, Nicola 214n1
Scarponi, Ciro 86, 100n8, 285
Schaeffer, Pierre 7–8, 94, 132–134, 158, 166, 224, 229n21, 253–254; pupitre potentiométrique de relief (potentiometric desk) 8; [Works]; Étude aux tourniquets 132; Symphonie pour un homme seul 8–9
Schafer, R. Murray 224, 228n16
Schiaffini, Giancarlo 86, 100n8
Schillinger, Joseph 273n10
Schloss, Andrew 180, 192n1, 249n1
Schumann, Robert 279
Schwoon, Kilian 215n12
Sciarrino, Salvatore 10, 80n50, 83, 93–95, 97–98, 100n14; Cantare con silenzio 93; Lohengrin 2 93; Nom des Airs 93; Perseo e Andromeda 10, 85, 93, 97–98
Seeger, Charles 273n4
self-tuning 184, 192n6
Sennett, Richard 74
Shelemay, Kay Kaufman 218–219n16
signal processing 4, 84; digital (DSP) 4, 7, 19, 22–23, 24, 29, 31–32, 44n3, 44n11, 44n12, 44n13, 45n13, 48–49, 98, 165, 175, 177–178, 179, 188, 192
Simondon, Gilbert 292, 304n2
Sinclair, Steven 153
Singer, Eric, Guitarbot 176; Modbot 177
Smalley, Denis, Piano Nets 138, 140
Smith, F. J. 45n15
Smith, Julius O. 67, 78n35
Smith, Leland 66; SCORE 66, 78n36; Rondino 66
Social Sciences and Humanities Research Council (SSHRC Canada) 192n1, 288n1
Solomos, Makis 44n4, 44n5
Sonic Arts Research Centre (Sonic Lab, Belfast) 20
Sonic Arts Union (originally Sonic Arts Group) 1
Sonic Visualiser (software) 274n17
sound (defined as an); element 254, 257, 265; event 21, 22, 26, 31, 33, 39–40, 45n14, 145, 254–256, 277; object 10, 242, 253
space (musical); acoustic phenomenon 3, 8, 19, 21, 34, 35, 39, 41, 43, 69, 90, 95, 96, 107, 109, 110, 123, 126, 178, 260, 276, 285, 287, 293, 295, 296, 303; compositional category (spatialisation) 26, 29, 34, 66, 89, 93, 132, 133, 145, 158n1, 164, 171, 276–277, 280, 285, 286, 289n8, 294, 296, 297
spectrogram (sonogram) 12, 53–55, 235, 256–259, 261–272, 273n6, 273n9, 273n11, 274n17, 275–276
Staatliche Hochschule für Musik und Darstellende Kunst Stuttgart 292
Staatstheater Stuttgart 93, 97
Stanford University 10, 62, 66–70, 75n6, 78n30, 100n9, 161, 260
Staples, Thomas 273n1
Stewart, D. Andrew, Sounds between Our Minds 156–158
Stiegler, Bernard 66, 302
Stockhausen, Karlheinz 1, 5, 11, 134, 170, 195–198, 202, 206–208, 211, 213, 215n7, 216n19, 216n20, 273n2, 275; Texte zur Musik 215n7; [Works]; Adieu 202; Kreuzspiel 202–205, 215n7; Kontra-Punkte 202; Mantra 3, 151; Mikrophonie I 3, 213; Mikrophonie II 3; Mixtur 5; Refrain 216n20; Studie II 134, 142; Zeitmaße 202; Zyklus 109
Stravinsky, Igor 61, 171n4, 215n6, 217
Stroppa, Marco 12, 71, 79n49, 80n50, 291–296, 298, 301, 302, 304n4; La Timée (acoustic totem made of loudspeakers) 293–294, 296, 304n4; [Works]; Dialoghi 79n49; …Of Silence… 12, 291, 292–294, 295–296, 298–303, 304n4, 305n5; Traiettoria 6, 79n49, 293, 300, 304n6
Stuck, Leslie 78n26, 78n28
Studio di Fonologia della RAI in Milan 86, 88–89, 93, 100n4, 224
Suliteanu, Gisela 251n15
Sullivan, Charles 52
Suvini Zerboni 198
synthesis (sound) 4, 23, 46, 49, 55, 56, 61, 66, 69, 70, 72–73, 76n5, 84, 91, 92, 93–95, 98, 129n1, 134, 153, 163–164, 165, 256, 265–267, 274n18, 274n19; additive 49, 265; amplitude modulation (AM) 52; frequency modulation (FM) 49, 66, 73, 91; granular 23–24, 29, 92; real time 47, 89, 91; subtractive 49, 94
Tamburini, Alessandro 89
Taruskin, Richard 61
Tavares, Tiago 192n1
Tchaikovsky, Pyotr Ilyich, Fifth Symphony 160
Teatro La Fenice 93
Teitelbaum, Richard 73–74, 80n50; Barcarola 73
Tenney, James 78n5
Tessier, Roger 275–276, 278
Théâtre des Champs-Elysées 5
Theremin 9
Thibault, Dominic, Igaluk: to Scare the Moon with its own Shadow 145, 147, 158
Tiffon, Vincent 5, 6, 13n10, 13n11, 101; XY Project 12, 291–292, 294–303, 304n7
timbre 21–22, 44n10, 49–52, 53, 55, 91, 94–95, 136, 142, 157, 161–162, 163, 178, 179–184, 190, 191, 250n6, 254, 258, 273n3, 286
Tindale, Adam 192n1
Tisato, Graziano 70, 73
Toop, Richard 13n2
Torah, cantillation (trope) 11, 230, 231, 234, 243, 244–245, 248, 252n21
Torresan, Daniele 71
traditional music 11, 37, 55, 93, 110, 225, 255
Trail, Shawn 192n1
transcription 11, 12, 177, 192, 217–218, 223, 224, 230–252, 256–272, 275, 280–283; and perception 248–249; as an act of reinscription 218; computational tools for 235–242, 246–248; function of 228n12; machine-assisted vs manual 261–272, 274n17, 289n7; of a performance 258–272, 279–286; of non-Western music 217–229
Traube, Caroline 7
Tresch, John 161
Triennial 'Translife: Media Art China 2011' (Beijing) 47
Trimpin, Gerhard 175, 190, 192n1, 192n2; Canon X + 4'33" = 100 172, 173, 189, 192; If VI was IX 176, 184; Kranktkontrol 176
Trovalusci, Gianni 43n1
Truempy, Balz, Wellenspiele 63
Tudor, David 36–37, 45n14; Bandonion! 43
Tutschku, Hans, Zellen-Linien 147–148, 150, 158
Tzanetakis, George, Red + Blue = Purple 11, 189–190
Ulfa, Hajja Maria 246–248
Ungeheuer, Elena 3–5, 6, 8, 107
Universal Edition 202
Università di Padova 10, 70–71
Université de Lille 291, 294
Universiteit Utrecht 249n1
University of Calgary 274n17
University of California (Davis) 2
University of Lethbridge 274n20
University of Victoria 249n1
Vande Gorne, Annette 86, 151; Faisceaux 151
Vandenborgaerde, Fernand 5
Varèse, Edgard 72, 166, 301; Déserts 5
velocity calibration 178, 180–182, 192n5
Vermersch, Pierre 106–107
Vidolin, Alvise 10, 70–72, 73–74, 83–100, 258, 273n4, 292
Volk, Anja 249n1
Wagner, Richard 9, 279
Waseda University (Tokyo) 176
Waters, Simon 43n1
Wergo (label) 207
Weston, Alex 61
Wiering, Frans 249n1
Wilson, Scott, On the Impossibility of Reflection 150, 157, 158
Woodhouse, James 50
Wright, Matthew 249n1
Xenakis, Iannis 23
Zaldua, Alistair, Contrejours 151–152, 158
Zattra, Laura 13n4, 104, 118
Zavagna, Paolo 95
Zero1 Biennial 'Build Your Own World' (San Jose) 47
Zikang, Zhang 47–48, 56
Zuccalmaglio, Anton Wilhelm Florentin von 217
Zuccheri, Marino 86, 215n13
Zurria, Manuel 43n1