An Approach to a Theory of Musical Attractions

  

                 (On the project for piano and electronics for Mr. Daniel Barenboïm)

                                                                                                By Philippe Manoury

I will soon begin a new work for piano and live electronics that Mr. Daniel Barenboïm asked me to compose for him. This piece, which I imagine lasting about 20 minutes, will be premiered in the Boulez Saal in Berlin during the 2021/2022 season. It could be considered a chamber piece in which the piano holds a dialogue with an electronic discourse that is not totally fixed in advance, as a sort of open form. In fact, many aspects of the electronic music will come from the piano itself and the way it sounds. The computer will analyze the sound of the piano during the performance, and this analysis will be transmitted to sound generators in order to determine several categories of the electronic music, such as tempi, transpositions, spatialization, etc. This is what I call “Virtual Scores”: part of the electronic composition depends on the performer. I will begin by presenting a few basic ideas of this project.

When the audience enters the hall, the electronic music is already playing, in evolving sonic forms and movements. This will be an automatic generative process that produces musical textures independently. When Mr. Barenboïm begins to play, the music he plays, and even more the way he plays it, will orient the electronic music in a coherent direction by reproducing or imitating some of the structures coming from the piano. In other words, the piano will affect the evolution of the electronic music around it. We will hear harmonies, pitches and figures in the electronic music that are “inspired” by the music coming from the piano. Of course, a computer cannot have any genuine “inspiration” at all, but we have models and techniques that allow piano music to be analyzed in real time and enable us to reproduce parts of it in the electronic music. The piano will be connected to the computer that produces the real-time electronic music, thanks to microphones that feed the sound of the piano into the computer. When Mr. Barenboïm stops playing, in other words when the piano becomes silent, the electronic music will tend to regain its autonomy. To summarize, this piece will “talk” about “influences” and “attractions” and tell the story of a human being influencing the world around him, despite resistance, inertia and, sometimes, enmity. That is the basic idea for the piece. Of course, this view is not a complete description of the whole piece. It is just a basis and a direction for composing. The main ideas, as ever, will arise during the process of composition itself.

How could birds be an inspiration for a composer?

Many composers have drawn on bird songs to create their own music; Olivier Messiaen is the best known of them. However, it is not the songs of birds that fascinate me, but the wonderful figures they create, which we can observe in the sky when certain species (starlings, notably) fly in groups or flocks of thousands[1]. This is a very complex phenomenon in which the forms are granular, that is, organized by thousands of individual birds that together create real forms we can describe in simple terms: circles, lines, cones, curves, clouds, etc. These forms never become chaotic, and no bird ever collides with another in these very energetic, very fast flights! Everyone has seen these wonderful figures created by groups of birds, visual forms in constant evolution and metamorphosis. They will serve as models for the electronic music as it creates its own evolutions before the true beginning of the piece.


Studland starlings, Tanya Hart CC 2.0, 2017

Owen Humphreys/ Press Association, 2013             

This phenomenon is called “flocking”. Much research has been done to explain the rules driving this kind of behavior, but I would like to highlight just a few of them:

a) In a flock there is no central leader: no single bird guides the entire group all the time. Different birds can each become a temporary leader in turn (change of leadership).

b) A flock (or a cloud, or a chattering) can fragment at any moment into several flocks and then recompose into a single one.

c) The basic rule describing this phenomenon is the long-range attraction of the birds toward a single bird, whose direction they follow.
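As an illustration only (this is my own toy model, not part of the piece), the three rules can be sketched with a handful of point agents in two dimensions, a temporary leader re-elected at intervals, and a long-range pull toward it:

```python
import random

# Toy sketch of the three flocking rules above (a hypothetical model,
# not an actual starling simulation): N agents on a plane, a temporary
# leader re-elected at intervals (rule a), and long-range attraction of
# every other agent toward the leader's position (rule c). Fragmentation
# (rule b) could be modeled by electing several leaders at once.

N = 12            # number of agents (real flocks have thousands)
STEPS = 50
PULL = 0.1        # strength of the long-range attraction

agents = [[random.uniform(0, 100), random.uniform(0, 100)] for _ in range(N)]
initial = [a[:] for a in agents]
leader = random.randrange(N)

for step in range(STEPS):
    if step % 10 == 0:                 # rule a: leadership changes in turn
        leader = random.randrange(N)
    lx, ly = agents[leader]
    for i, (x, y) in enumerate(agents):
        if i != leader:                # rule c: drift toward the leader
            agents[i] = [x + PULL * (lx - x), y + PULL * (ly - y)]
```

Each update moves a follower a fixed fraction of the way toward the current leader, so the group contracts around one bird while it leads, then re-forms around the next.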

Starting from these simple points, my intention is not to apply the rules of bird flocking rigorously to sounds and music, but to create a musical “behavior” that has a similar effect. Several points explain why.

First: in a natural flocking system, the flock is constantly composed of thousands of birds. It makes no sense to speak of “thousands” of sounds in music, because the lifetime of a sound is very short. Sounds are “born” when they begin and they “die” when they stop. They have no permanence unless aggregated into a monstrous cluster in which it quickly becomes impossible to discern the smallest detail.

Second: in the case of birds, each agent (each bird) communicates with a small number of neighbors. Such a constraint would make no sense in music. A cloud of sounds can easily be created by a succession of single sounds in very fast motion; beyond a certain threshold of speed, perception can no longer discern whether it is hearing a monophonic or a polyphonic line. For that reason I decided to use four flows of sounds, each organized as a very fast, random sequence of successive sounds. It therefore makes no sense to speak of the number of neighboring sounds communicating with each other.

Third: it is always perilous to transpose a visual phenomenon directly into a sonic one. Our perceptual system does not react in the same way to optical and auditory stimuli; the limits of perceiving very fast motion differ between the two. Having said that, it is always possible to carry one of these domains over into the other by adapting the effects.

One word in the points briefly described above is well known to musicians: attraction. Every musician understands what this word means in music. In tonal harmony, the role of resolutions and cadences is an example. In melodic lines, and in some rhythmic patterns, we can also easily perceive poles of attraction. In a non-tonal system, the role of attraction is not diminished at all, but it takes on different aspects. The main effect of this attraction on the way we listen to music, whatever its style, is that it creates the possibility of anticipating what will come next. Sometimes we know in advance what the next note, the next chord or the next rhythm will be. Because I consider this perceptual faculty one of the most important in our way of listening to music, my idea is to generalize this concept of “attraction” to several musical parameters: pitches, spectral components, temporal behavior, positions and directions in physical space; in short, to all the perceptual aspects that define the sounds in a given system of synthetic sounds. This will be a first attempt at what I imagine as a sort of System of Musical Attractions.

How can sound qualities be modified with a system of attractions?

The best way to work in this direction is to reconsider the status of sound itself. A sound was historically represented by four basic components: pitch, duration, intensity and timbre. This conception is now very old-fashioned; today we know that a sound is composed of a very large range of attributes: pitch, spectral components, duration, attack, sustain, release, behavior in time, degree of harmonicity and/or inharmonicity, noisy components, brightness, roughness, etc. The list of these attributes keeps getting longer. One particularly important aspect of the way I would like to develop my ideas differs from the traditional manner. In the traditional conception, sounds were organized by external rules, conscious or unconscious, explicit or implicit. Sounds were thus considered as individual particles obeying general laws that indicated their positions, successions or superimpositions. I would now like to consider sounds not as particles, but as “agents” that can “talk” to one another and exchange information. In other words, I would like sounds to influence other sounds: in the way they behave in time, in the direction they travel through physical space, and in the structure of their internal components. Here one can perhaps glimpse a first relationship with the image of “flocking”, in which one bird temporarily influences a group of birds in the direction it is flying. This conception requires increasing the number of parameters that define a sound profile; the list of new parameters will include the position of the sound in physical space, its trajectory and its speed. Let me give an example of such an organization.

I imagine having a collection of four sonic flows. By “sonic flow” I mean a fast, aleatory generation of sounds realized through a Markov-chain model, in which the speed of succession of each new sound is high enough to create the illusion of a cluster of sounds while it is actually just a succession of single, individual sounds, one after another. Each flow follows a specific spatial trajectory (determined or random) in the hall. We therefore have four trajectories, totally independent in their directions and speeds. We can decide, by an aleatory choice, which of these four flows will lead the other three. Let us say that flow 3 is chosen. Flow 3 will follow its own trajectory and, because it is the leader, will influence the other three flows, which begin to follow its trajectory, with some delay and some degree of inertia, both of which can also be controlled. All the sounds will then go in the same direction until… we randomly choose another flow as leader, which will in turn influence the other flows to follow it, and so on. The position of a sound belonging to the leading flow becomes a target for the other sound flows, which progressively approach the current leader until a new one is chosen.
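A minimal sketch of this leader-follower mechanism for the four flows' spatial trajectories might look as follows. The circular leader path, the inertia value and the leadership period are my own illustrative assumptions, not values from the piece:

```python
import math
import random

# Hypothetical sketch: four sound flows, each with a position in the hall
# (2D here for simplicity). One flow is periodically elected leader at
# random; it traces its own trajectory, and the other three drift toward
# its position with a controllable inertia (the "delay" described above).

NUM_FLOWS = 4
INERTIA = 0.9       # closer to 1 = lazier, more delayed following

positions = [[random.uniform(-1, 1), random.uniform(-1, 1)]
             for _ in range(NUM_FLOWS)]
leader = random.randrange(NUM_FLOWS)

def step(t):
    """Advance the four flows by one time step (t in arbitrary units)."""
    global leader
    if t % 40 == 0:                       # periodically elect a new leader
        leader = random.randrange(NUM_FLOWS)
    # the leader follows its own (here circular) trajectory in the hall
    positions[leader] = [math.cos(t * 0.05), math.sin(t * 0.05)]
    lx, ly = positions[leader]
    for i in range(NUM_FLOWS):
        if i != leader:
            x, y = positions[i]
            # followers approach the leader's position, delayed by inertia
            positions[i] = [INERTIA * x + (1 - INERTIA) * lx,
                            INERTIA * y + (1 - INERTIA) * ly]

for t in range(400):
    step(t)
```

Each election makes the current leader's position the target of the other three flows; the inertia factor controls how reluctantly they converge on it.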

This attraction need not be confined to the field of spatialization; it should be extended to all the other important parameters that construct the sounds. Here is a description of such a process.

To give an overview, I will simply describe a model of a synthetic sound. A program that generates synthetic sounds is built from several parameters in charge of the diverse attributes of the sound: pitch, duration, spectral components, attack duration, dynamic evolution in time, etc. These parameters receive different values, which modify the sound's qualities over time. In this regard, a single model of synthetic sound can produce a large variety of different sonic expressions simply by modifying the values sent to its parameters. In order to differentiate the four flows, it will be necessary to give them different sound qualities: bright/flat, very short/longer, resonant/dry, harmonic/inharmonic, etc. All these sonic attributes will be generated by specific parameter values. Now, take the same situation as described above, with the third flow chosen as leader of the other three. To explain the system simply, I will consider the evolution of only one of these parameters, and the simplest one: the duration of the sounds. Suppose the sounds produced by the leading flow (the third) have durations between 30 and 80 milliseconds, meaning they are very short (like pizzicati, for example), while the sounds from the other flows have longer durations. Since the leading flow produces “pizzicato” sounds, all the sounds coming from flows 1, 2 and 4 will be attracted to it and will tend to imitate the behavior of flow 3 over time, producing in their turn “pizzicato” sounds… until another flow is chosen as leader. Suppose flow 1 is then chosen. In this case the durations of the sounds produced by flow 1 will immediately return to their original values, those they had before moving toward the “pizzicato” sounds, and, flow 1 now being the leader, the durations of all the other flows will tend to imitate it.
At this point, we can imagine that all the other parameters defining the characteristics of the sounds (harmonicity, tuning, brightness, etc.) are subjected to the same evolving processes. If the process of attraction is total, all four flows will share the same spatial trajectory, the same durations, the same family of spectra, the same behavior in time, etc.; but since each flow is generated by an independent Markov process, the transpositions and melodic movements will all differ.
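To make the duration example concrete, here is a hypothetical code sketch. The flow ranges, the pull factor and the function names are all my own illustrative choices, not values from the piece:

```python
import random

# Hypothetical sketch of "parameter attraction" for a single parameter,
# duration. Each flow owns a native duration range in milliseconds
# (illustrative values); followers drift toward the leader's range and
# snap back to their native range when they themselves become leader.

native_ranges = {
    1: (400, 900),    # flow 1: long, resonant sounds
    2: (150, 400),    # flow 2: medium durations
    3: (30, 80),      # flow 3: very short "pizzicato" sounds
    4: (200, 600),    # flow 4: medium-long sounds
}
current_ranges = dict(native_ranges)

def elect_leader(flow_id, pull=0.2, steps=30):
    """Make `flow_id` the leader: it recovers its native range, and the
    other flows' ranges are pulled toward it, step by step."""
    current_ranges[flow_id] = native_ranges[flow_id]   # leader resets itself
    lo_t, hi_t = current_ranges[flow_id]
    for _ in range(steps):
        for f in current_ranges:
            if f == flow_id:
                continue
            lo, hi = current_ranges[f]
            # each step, a follower's range moves a fraction `pull`
            # of the way toward the leader's range (the "attraction")
            current_ranges[f] = (lo + pull * (lo_t - lo),
                                 hi + pull * (hi_t - hi))

def sample_duration(flow_id):
    """Draw one sound duration (ms) from a flow's current range."""
    lo, hi = current_ranges[flow_id]
    return random.uniform(lo, hi)

elect_leader(3)   # flow 3 leads: every flow drifts toward "pizzicati"
```

When the attraction runs long enough, all flows converge on the leader's range; since each flow would still draw its actual pitch successions from an independent Markov process, only this shared parameter behavior is unified.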

The piano as a big attractor.

The kind of process described above will occur mainly when the electronic music is autonomous, when the piano is not playing, as at the beginning of the piece. But let us now see what happens when the piano plays. During these moments, the piano becomes the main attractor for all the flows; the four flows stop interacting with each other in order to interact with the piano. By using different processes of analysis of the piano sound (pitch detection, spectral analysis, score following, etc.), the electronic musical textures will be transformed so as to reproduce part of the musical structures played by the soloist. It is important that the piano and the electronic music share a common structure, even if in other respects each keeps its own independent personality. There are two different approaches to this, which can be used simultaneously. The first is to construct an electronic structure by composing a sort of variation, or complement, of the structure played by the piano. This mainly concerns the pitch domain, because on a piano the pitches are not subject to interpretation; the only uncertainty is the exact moment at which a pitch (or a group of pitches) will be produced. To resolve this important question, we have a very interesting tool called a “score follower”[2], which makes it possible to synchronize the electronic events automatically with the performer. The second approach is subtler. It concerns not only the notes that will be played, but the manner in which they will be played: in other words, not only the score but also the interpretation of the score. Integrating the interpreter, as far as possible, into the real-time generation of the electronic music has been a large part of my activity as a composer of real-time electronics[3].
The pianist will be able to control the spatialization of the electronic music, as well as several other aspects, simply through his touch and the way he plays on the keyboard.

I intend to work on this piece, with the assistance of Gilbert Nouno for the realization of the electronic part, between March and the end of June 2020.

                                                                                    Strasbourg, 12-16-2019


[1] These flocks have fancy names, such as “murmurations”, “chatterings”, “constellations” or “exaltations”, depending on the species of birds that produce them. See https://www.thespruce.com/flock-names-of-groups-of-birds-386827.

[2] I will work with the software “Antescofo” by Arshia Cont.

See https://forum.ircam.fr/projects/detail/antescofo

[3] My work Pluton, for piano and electronics (1987), was entirely composed in this direction.

