Composition diary for “Quasar” #1

Quasar is meant to be a 20-minute piece for sinfonietta (12 musicians) and electronics. The piece is inspired by the science fiction and cosmology books I have been reading quite a bit of recently. I had been thinking about this piece for quite a long time before actually sitting down to write it. Like several of my earlier pieces, I had drawn out the general form, along with some general parameters. This can be seen in the picture below:

Then I had a private lesson with the fantastic Michael Obst, in which we discussed this piece: how to plan it further, and how to work on it in an effective manner. I also explained the electronics (I will come back to this). What he told me was often simple, nothing very complicated, but he simply sees composition in a different manner than I do, which helped me greatly in planning the piece, and I'm incredibly thankful for his advice.

Now, before discussing the piece further, it is worth discussing the electronics. The piece is meant to be in ambisonics around the audience, with an extra tower in the middle of the audience, in the same way Marco Stroppa uses speaker totems. However, after discussing this with one of my thesis advisors and testing it out, it doesn't work exactly as I want. Another option is now a hemispherical speaker, which I'm soon to test; otherwise there might be some discrete speakers in front of some of the musicians. It's important to have these two distinct "systems" of speakers and use them spatially throughout the composition. The speakers around the audience are in ambisonics, making it much easier to move the piece to different rooms as well as to affect their spatial field in interesting ways.

The electronics are to be controlled by several different elements. A score follower will be used in several sections, at least where it makes sense. To help with the other sections, the conductor will have a MIDI pedal, which also gives a bit of control back to the conductor and permits him/her to control a few more processes. A third element will be NGIMU sensors, used to get more data from the real world and influence some of the sound synthesis in real time.

The electronic processes planned so far are quite different from one another. The first one is a physical model of a bowed plate. This is done using Modalys, and the sound is meant to simulate one of the opening sounds of the piece, the quasar itself, in many ways a theme. Using Modalys is rather difficult, as the tutorials are lacking, to put it mildly. Problems generally go unanswered online, meaning one is left to one's own devices. However, I did find the idea of working with acoustic/physical parameters to be exciting in a completely different way than, for example, normal synthesis. This was inspiring, although difficult, and it did take me a rather long time to come close to the sound I was hearing in my head.

There are other, more conventional electronic processes such as ring modulation, spatial decorrelation, granular synthesis, spectral delays, etc. While writing the piece, I have also programmed several of these effects to really be able to hear them, going back and forth between paper and programming. I have also been drawing a lot of the effects and how I want them to sound, a type of visualization of the sound.

Having planned out most of the composition, I started working on the different sections up to E at the same time. Several of these sections share material, sometimes only in the acoustic world, but sometimes with certain acoustic sections being "re-interpreted" by the electronics. For each section, I have been drawing how it sounds, but also making graphs of different parameters and ideas, varying from how the electronics sound to timbral areas, melodies, etc. All of the sections also have clearly defined interval sets, lengths, ideas, etc.

The A1 section was very easy to score as it was already so clear in my mind. At this point the work is mainly to program and test all of the electronics. There are also some notational issues I will have to work on to make the score clear and concise for the musicians.

The B1 section is electronics only, and it is all planned out on a timeline, timbrally and melodically as well. I will have to work more on finishing this section, but it will also use, or hint at, further developments in sections C1 and D1, which are not finished yet.

The C1 section has been very problematic so far, and I have started it and deleted it at least five times. I have yet to find a way to organize it that feels organic and natural, yet still breathes the way I hear the piece. This will now be my main focus.

The D1 section has been evolving very well. My drawing for it was very complete and specific, so composing the skeleton has been rather quick. I have now been concentrating more on developing its different ideas, and on its orchestration, to make it as effective as possible. Another problem that has arisen in this section is that what I want is truly too much for the allotted time. If I make the section longer, I feel it will throw off the balance of the piece. Therefore I have been thinking about cutting one of the subsections in D1, and I have also been testing how to let the ideas breathe more and not make the music too claustrophobic.

At this point it is also not yet possible to test the whole orchestration with the electronics, which makes things quite difficult. How to synchronize certain events and ideas is also something I will have to start deciding very soon. I'm thinking of writing the next diary entry mostly about these two issues.


Life in the Anthropocene composition diary #4

I finished the piece about three weeks ago, and had to put it away for a little while to reflect on it. As I was putting the finishing touches on the piece, a more direct conceptual idea of how to describe it arose, and it is now written in the program notes at the beginning of the score.

While finishing the edits, the main change was to condense some of the material and make it clearer what the music should be. The only large edit was removing the battuto section mentioned previously. This changed the proportions of the piece somewhat, so I had to change the ending a bit, but it works much better musically. I feel that the proportions of the piece are now nice and "even", not in the sense of seconds per section, but in its dramaturgy.

I now also have the electronics for the piece thought out and done, AFTER having written everything. This was a quite different experience for me. The electronics are much looser than what I would normally do, but this also fits the concept of the composition. Is there a 100% correlation with my having done the electronics after the fact? I don't think so, but it definitely has an influence. Both musicians will have NGIMU sensors on their bowing wrist. The sensor data, as well as audio descriptors analysing the sound, will trigger sound files of natural sounds (mainly glaciers crashing) when certain conditions are met. The sounds should not melt into the acoustic sound either; they should disrupt it completely. There is also some granular synthesis going on here and there, but once again always in a way that disrupts the natural flow of acoustic contemporary music. It's a bit of a clash between genres, in a way.
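As a loose illustration of the triggering idea, the condition can be sketched as a simple conjunction of thresholds. The function name, parameter names and threshold values below are all invented for the example; the actual piece uses a MaxMSP patch, not Python:

```python
# Hypothetical sketch: a fast bowing gesture (from the wrist sensor)
# coinciding with a loud attack (from the audio analysis) fires a
# disruptive sound file. All thresholds here are invented.
def should_trigger(gyro_magnitude, amplitude,
                   gyro_thresh=250.0, amp_thresh=0.6):
    """Return True when both sensor motion and input level exceed their thresholds."""
    return gyro_magnitude > gyro_thresh and amplitude > amp_thresh

# A calm passage does not trigger; a violent gesture with a loud attack does.
calm = should_trigger(gyro_magnitude=80.0, amplitude=0.3)
violent = should_trigger(gyro_magnitude=310.0, amplitude=0.9)
```

The point of keeping the condition this simple is exactly the disruptive aesthetic described above: the samples should cut in abruptly rather than follow the acoustic sound smoothly.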

I also showed the piece to my mentor through the composers' guild in Norway, and it still needs a bit of editing, mainly in notation. After having it on the side for a few weeks, there are also a few things I will have to change before I can call the piece "finished": mainly a few rhythmical things that could be more interesting, some changes in dynamics, and the notation of techniques.

The piece will be part of a reading session at Mixtur festival next month in Barcelona.


Composition diary for “Anthropocene” #3

Last week I spent a few hours with a good friend and colleague, Hilmar Thordarson. It is often incredibly positive to have another composer to bounce ideas off when working on a new composition. I had sent him the notation as well as these blog posts before meeting up. I find it useful to have another composer check whether certain elements stand out or function, without necessarily understanding all of the "insides" of the organization. Hilmar was incredibly helpful as always, and he helped confirm several things I had been thinking about. The first was having to rewrite several bars, especially in the cello, to get the amount of tension I want in the music (specifically in section A1, which I'll come back to). The second was that my ideas for the form and structure of the piece work and feel balanced. Thirdly, my organization of the pitch material does not feel too random or too structured, but has a nice balance. This aspect is especially important to me for this piece; there is a very fine line between too much and too little within post-tonal music. He also gave many other suggestions and tips which will be mentioned throughout. Fourthly, he had a few comments on certain elements in the composition that I had not been aware of, in terms of compositional parameters, "themes" and such.

Currently, most of the composition is finished. The structure is done and feels finely balanced. Both B sections are completely scored. A1 is almost completely finished. A large chunk of the C section is also written out. The second A section is planned although not written out. The ending/coda of the piece has also been written. The sections that aren't finished yet are generally very well planned, with many sketches to support their elaboration. Having a second composer look at some of my sketches to get a second opinion on the structure was also useful. My ideas for holding the whole piece together make much more sense now that I have explained several of them to someone else. Sometimes all it takes is having to explain your ideas out loud.

As mentioned before, the piece starts rather fragmented. I had recently been fighting the idea of having to extend certain phrases to generate an answer. However, I have now confirmed that I should NOT do this, not only for musical reasons, but also for extra-musical reasons related to the theme of the piece. At a certain point in this section, the cello voice quickly drifted into being more of a bass-like part. Having someone else mention this helped me realize how quickly that had happened, so that I could rewrite the section into what I actually want it to be. So, several bars of the cello part in A1 were rewritten. The section is meant to be rather fragmented, not chaotic, but such that certain musical elements are never truly developed and don't exactly fit together in time.

The C section is also coming along very nicely. I'm still slightly uncertain about its start, but I'm currently working on it. Having tried out several ideas which I was not completely satisfied with pitch-wise, I'm still trying out a few concepts. I still feel that its rhythmical aspect is quite important as a transition between B and C. The middle of C is not complete yet, but it is planned. It is much more rhythmically intense than the rest of the piece, never letting the listener completely find his or her bearings. All of these aspects are also used earlier in the piece, in the different divisions mentioned in earlier posts with 6-4-7 and 3-5-2.

The second A section is still not written but is quite planned out. I now often find myself using a printed version of my Sibelius score, highlighting or circling certain musical motifs I have been using in the piece, and then developing them further or finding different ways to play with them in A2.

The coda has been a part of the composition that I have been thinking about for a long time. This weekend I finally wrote it out, and it sounds good. It is a rather short coda, but I feel that it rounds off the composition nicely and helps balance certain ratios within it. It brings back certain elements, yet never in an abundantly clear or pedantic manner.

I have also often been correcting small parts of the composition, making it tighter and denser in the way I want it to sound. This is often the most time-consuming and demanding part of the work: editing the piece and molding it into something good, and hopefully very good. It takes time to make solid links between different sections and to mold the musical elements into a meaningful composition.

By the end of the week I'm hoping to be completely finished with A1 and C, leaving only the A2 section to be completed. This would also give me more time to correct the notation of the piece into something clearer. Until the end of the process, I find that writing in a notation I understand, one that lets me work clearly and quickly without hindering my creative impulses, is more important than correct notation. It is only at the end that I feel it is important to make my intentions as clear as possible for anyone else to read. Otherwise, I often feel that I lose too much time to such issues. It is also one of the reasons I often like to write by hand; the computer can quickly make us lose time on "how do I do Y?".


Composition diary for “Anthropocene” #2

Work continues on the new duo… The A1 section is now more fleshed out than it was, but I am still unsatisfied with how the material develops, as it doesn't feel natural enough. This is holding back the composition; I have not written a single note of A2 because of it. I feel that I have clear enough compositional parameters, but that I am not developing them enough, or perhaps throwing too many things in the air at the same time. Would an audience (even an expert audience) be able to follow the logic in it? A lot of the different processes also never completely line up, making it sometimes feel too scrambled. This always brings me back to Webern and Boulez… How did they know when to stop a certain process in their compositions? Both composers often write in such a clear and defined manner, yet the result is still chaotic and feels intuitive. I have also been reading a bit about Jonathan Harvey's compositions and compositional processes. His music always sits exactly on that line between the chaotic, the systematic and the intuitive. I feel the A section isn't there yet, although it is slowly getting there.

Both B sections are finished and add a nice contrast to the A1 section, as well as to the start of the C section. In the end, I found the use of Markov chains in B1-2 to be interesting, although I have intuitively changed certain aspects of the results. The main difference between the two sections is that B2 has a higher statistical chance of getting different articulations. This felt important, to have a slight development in what is being played while remaining clearly related.
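To illustrate the kind of difference described above, one can compare two weighted draws. The articulation names and weight values here are invented for the example, not the actual tables used in the piece:

```python
import random

# Invented weight tables: B2 spreads more probability mass onto the
# less common articulations than B1 does, so its surface varies more.
B1_WEIGHTS = {"ordinario": 0.7, "pizzicato": 0.2, "col legno": 0.1}
B2_WEIGHTS = {"ordinario": 0.4, "pizzicato": 0.3, "col legno": 0.3}

def draw_articulations(weights, n, seed=7):
    """Draw n articulations independently according to the given weight table."""
    rng = random.Random(seed)
    return rng.choices(list(weights), weights=list(weights.values()), k=n)

b1_line = draw_articulations(B1_WEIGHTS, 20)
b2_line = draw_articulations(B2_WEIGHTS, 20)
```

The two lines remain clearly related (same vocabulary, same process), while B2 statistically departs from ordinario more often, which is exactly the kind of "slight development" intended.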

The C section starts with an interpolation between the different rhythmic "characters" of the piece, creating a sort of interlude. Both its pitch material and rhythmic material make it related to what comes before and after. The section ends with a rather chaotic and symmetrical passage based on the four different harmonic environments of the piece. The C section has always been planned as rather chaotic, and this ending fits in nicely before B2. However, the journey between the start and end of C is still rather uncertain. I have been experimenting with different developments of the harmonic and melodic environments of the piece, but I am still not completely certain. The rhythmical aspect also has to be quite present, as it is at both the end and the start. The original plan also shows the C section as rather chaotic. I am currently testing out some almost Sacre-like rhythms that are quite fun, shifting between the different harmonic environments. This feels chaotic enough, but does it fit? Can I make the different characters and elements clear enough within this idea?

The ending of the piece is slowly becoming clearer in my mind as well, although I have yet to set it down on paper or computer. It still needs some development, but I do think it is a fitting ending, or in reality a short coda. It seems so far that the piece will run 7-8 minutes, which is shorter than the planned 10, but these things can quickly change once one plays around with the material and listens to it many times.

While writing this, I'm also thinking about electronics… At this point, I hear the composition as too "full" to have any electronics. Perhaps this is a reflex, since I have not had electronics in mind since the first compositional sketches? Either way, it is interesting for the research questions that are part of my doctoral work. If the piece is to have electronics, they would have to be rather subtle and only influence or colour certain musical elements, or perhaps provide amplification for the two B sections. As part of a paper I am currently writing on string quartets with electronics, I have come to hear many different possibilities for electronics with string instruments. Sometimes the subtlest methods are those that best fit the écriture of the piece.


Composition diary for “Anthropocene” #1

In December, when I got my acceptance letter from Mixtur, I already started slowly – but surely! – planning the composition. A title that kept recurring in my mind was "Life in the Anthropocene". During the autumn, I read a lot about the Anthropocene, as well as articles from the Dark Mountain Project (see here). Seeing David Attenborough speak at Davos in the news today (article here), it is easy to see that this theme is incredibly relevant. In this blog I will not go over the environmental crisis we are living through, but it has been a big part of the inspiration for this piece, which is a duo for violin and cello.

For Mixtur, we are only allowed acoustic compositions. I have been wondering if, as an exercise in compositional process, I should only do the electronics AFTER the full composition is finished, to see how that affects the electronic processes. Having already written quite a bit of the work, I'm uncertain whether I will even add electronics. It feels slightly disingenuous considering the theme of the piece.

Now for the composition… After using several weeks to think about the form, I came up with the following drawing:

Basic form graphically. First sketch of the whole piece

Sorry for my very poor drawing skills. The form is mainly A1-B1-C-B2-A2-D, taken down to its essence, with D being a sort of coda to finish the piece. The important aspects are the differences between A, B and C, which I would like to discuss. My main idea for A and C was a deeply polyphonic and chaotic type of material. B, on the other hand, is a type of convergence where the different voices come together, at least seemingly.

Originally, the idea of using fractals such as Barnsley's fern for sections A and C seemed interesting. But having played around with the idea and created some simple prototypes, it wasn't really the type of music I wanted. Layering several of these over each other also didn't appeal to me, because of the composition's theme. Therefore, the only algorithmic section is the B section, which will be explored a bit more later on.
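For reference, the kind of prototype described above could look like the classic iterated-function-system formulation of Barnsley's fern. The fern coefficients below are the standard published ones; the musical mapping suggested at the end is purely hypothetical:

```python
import random

# The four affine maps of Barnsley's fern, as (a, b, c, d, e, f, probability):
#   x' = a*x + b*y + e ;  y' = c*x + d*y + f
MAPS = [
    (0.0,   0.0,   0.0,  0.16, 0.0, 0.0,  0.01),  # stem
    (0.85,  0.04, -0.04, 0.85, 0.0, 1.6,  0.85),  # successively smaller leaflets
    (0.2,  -0.26,  0.23, 0.22, 0.0, 1.6,  0.07),  # largest left-hand leaflet
    (-0.15, 0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right-hand leaflet
]

def fern_points(n, seed=1):
    """Iterate the fern IFS and return n (x, y) points."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        r, acc = rng.random(), 0.0
        for a, b, c, d, e, f, p in MAPS:
            acc += p
            if r <= acc:
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points

# One possible (hypothetical) musical mapping: y -> pitch, x -> pan or timing.
pts = fern_points(2000)
```

As the entry says, the raw output of such a process tends to be too self-similar to carry the kind of material wanted here, which is why it was abandoned for A and C.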

In the last few days I have also been rewriting the first A section several times. Here's an example taken from the first few bars (the notation is rough while I'm still writing). This:

Turned into, then followed by the pizzicato line which is also extended:

Ignore the lower G in bar 5, it’s from a previous idea

In my sketches I have several small "themes" and different parameters to play around with throughout the composition. At this point I'm still not completely satisfied with the A section. I feel it needs to be more chaotic in a sense, and many of the parameters are perhaps still not clear enough to me in how they should be changed and organized. Although I have sketches for each section (such as the drawing further up), I never tend to follow them slavishly. They really are just a springboard for exploration.

For the two B sections, I created an algorithm (which I might post if there is interest) loosely based on Barnsley's fern, which can in a way be seen as a mix between a Lindenmayer and a Markov system. The algorithmic sections only calculate groupings and articulations; the pitches are already set within my general form. They also follow a rhythmic "grid" which is interpolated in the C section. Once again, the algorithm is not followed slavishly, but used as a reservoir of possibilities to be mutated into something that fits my compositional desire.
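The actual hybrid algorithm is not reproduced here, but its Markov side can be sketched in a few lines. The articulation states and transition weights below are invented for the example:

```python
import random

# Invented first-order transition table over articulation states.
# The real algorithm mixes this kind of chain with L-system-like rewriting.
TRANSITIONS = {
    "ord":     {"ord": 0.5, "pizz": 0.3, "tremolo": 0.2},
    "pizz":    {"ord": 0.6, "pizz": 0.2, "tremolo": 0.2},
    "tremolo": {"ord": 0.7, "pizz": 0.1, "tremolo": 0.2},
}

def generate_articulations(n, state="ord", seed=42):
    """Walk the chain for n steps, returning the visited articulation states."""
    rng = random.Random(seed)
    out = [state]
    for _ in range(n - 1):
        weights = TRANSITIONS[state]
        state = rng.choices(list(weights), weights=list(weights.values()))[0]
        out.append(state)
    return out

line = generate_articulations(16)
```

Treating the output as a "reservoir of possibilities" then simply means generating several such lines and freely editing or mutating whichever one is closest to the music wanted.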

I only have a very rough sketch of the start of the C section at this point. As mentioned earlier, some of the rhythmic ideas of the B section are interpolated so that they never exactly fit together. Throughout the composition these small rhythmical fragments come back again and again. They come together, and move away. This is a direct inspiration from the idea of the Anthropocene and what we have done to the planet. An example of the interpolation grid can be seen here:

Rhythmic idea
Interpolation between A and B

For the rest of the week I'm hoping to fully flesh out and become satisfied with the A section, as well as finish a very rough sketch of the rest. I tend to have bits and pieces of the whole composition, and never compose linearly. While writing this, I've also come to realize how much writing on paper, and not only on the computer, affects my ideas. I started writing on paper because of a professor I had in counterpoint, and it just became a habit. I move quickly between paper and computer, each giving me different feedback and a different way of visualizing what I need for the composition at hand.


Composition diary for “Anomie” #4

It's been a long time since the last update, although not because of laziness. The premiere has been pushed back to the 23rd of March in Trondheim, Norway. The piece will still be played by Bahareh Ahmadi from Sweden.

Because of the short deadline this autumn, I finished the written score before the electronics. All of the sketches for what exactly the electronics would do, and how they should sound, were however already done. I have therefore programmed the electronics afterwards. Some ideas have slightly changed, such as where a certain process starts, but what the electronics do has not changed at all. The concept is exactly the same, only more detailed and thought through, though a few details about the sound have changed.

An example of this is that I realized certain FM sounds would contrast too much with the acoustic sound of the piano. Although the electronics are not played through a traditional PA, it still felt too cold or mechanical in a sense. I have therefore used some convolution to help loosen up the sounds, and it has done a rather good job. I also thought that some of the piano sounds could be a bit more "blurry" or "busy", so I added a simple spectral delay that is barely heard.
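The principle behind using convolution to soften a synthetic sound can be shown directly. This is a generic sketch, not the actual patch: a toy exponential decay stands in for a real impulse response, and a real implementation would use FFT-based (partitioned) convolution for speed:

```python
import math

def convolve(x, h):
    """Direct (slow) convolution: y[n] = sum_k x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(max(0, n - len(h) + 1), min(n + 1, len(x))):
            y[n] += x[k] * h[n - k]
    return y

sr = 8000
# A bare sine stands in for a "cold" FM tone.
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr // 10)]
# Toy "room" impulse response: a short exponential decay smears each sample.
ir = [math.exp(-t / 50.0) for t in range(200)]
wet = convolve(tone, ir)
```

Convolving with any room-like response smears the attack and adds a tail, which is exactly the "loosening up" effect described above.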

A lot of time was spent on implementing these different systems in an effective manner, so that they work simply by listening to the performer. It is still an important aspect of this piece that the performer does NOT need an extra person running the electronics. The programming therefore has to be incredibly solid and function algorithmically. It must also be possible to export the Max patch to something usable by the performer (more on this later).

The synchronization method also had to be tightly connected to the performer. It is still based only on a MIDI pedal. The pedal triggers different scenes, and within each scene things happen automatically. However, the triggering is also used to approximate tempo in a few sections. The electronics aren't incredibly striated (to use Boulez's terminology), but a bit of extra information still helps keep things together.
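How pedal triggers can approximate tempo is easy to sketch: average the intervals between the last few presses. This is a generic illustration, not the actual patch logic:

```python
def estimate_bpm(tap_times, window=4):
    """Estimate tempo (BPM) from the last few pedal-trigger timestamps, in seconds.

    Returns None until at least two taps have been received.
    """
    taps = tap_times[-window:]
    if len(taps) < 2:
        return None
    intervals = [b - a for a, b in zip(taps, taps[1:])]
    mean = sum(intervals) / len(intervals)
    return 60.0 / mean

# Pedal presses every 0.5 s correspond to 120 BPM.
bpm = estimate_bpm([0.0, 0.5, 1.0, 1.5])
```

Inside a scene, such an estimate can then drive the rate of automatic processes, which is the "bit of extra information" mentioned above.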

The other aspect is how the program "listens". Originally, I had planned to use Øyvind Brandtsegg's tools from the cross-adaptive processing project. However, his tools, both as a VST and as a Csound script, caused many problems once exported from MaxMSP. I couldn't find the reason after a lot of testing, so I have opted for the zsa.descriptors by Mikhail Malt and Emmanuel Jourdan. There are somewhat fewer parameters to analyse, but for this piece that is not an issue, as the parameters analysed are rather general. In certain sections, the analysis of these parameters is used to set amplitude, how much of a certain effect is present, etc.
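Mapping an analysed parameter onto a control value usually comes down to clipping and rescaling. A generic sketch follows; the choice of descriptor (spectral centroid) and the ranges are hypothetical, not taken from the piece:

```python
def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Clip an analysis value to [in_lo, in_hi] and rescale it linearly."""
    v = max(in_lo, min(in_hi, value))
    t = (v - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mapping: spectral centroid (Hz) -> effect wet amount (0..1).
wet = scale(2500.0, 500.0, 4500.0, 0.0, 1.0)
```

The clipping matters in practice: descriptor streams are noisy, and without it an outlier frame would throw the effect amount far outside its musical range.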

Although many of these parameters are rather "abstract" for most composers, as they come from the world of acoustics, they can definitely be used. As part of Philippe Manoury's stay at the Collège de France, Mikhail Malt gave a very interesting presentation on exactly this, with many examples from compositions. Sadly, the presentation is only in French and without subtitles.

In the literature, it is often mentioned that the acoustic music suffers for the sake of the electronics, but I have been very careful about this, sometimes sacrificing something very exact for something less exact, to make it fit musically within the context. I find this a difficult balance. In a solo piece it is perhaps much easier than in a complex ensemble piece. A typical problem I have often heard of is that musical gesture X does not trigger electronic process Y because a sensor does not get enough information, etc. This composition avoids such problems through its simplicity. I am still rather uncertain how to handle them in more complicated music, and they are definitely something I can still struggle with. It does however make clear to me that there is an inherent relationship between how we synchronize, the electronic processes and the compositional processes.

Another issue has been making the electronics available to the performer. Originally the plan was to have the electronics on an iPad, much as Hans Tutschku has done for several of his compositions. I had foolishly believed that these apps were created in MaxMSP and then ported to the iPad, which is not the case. There are two apps for his compositions: one for MaxMSP on the computer, and one for the iPad (in cases like Zellen-Linien at least). It seems impossible to directly port a full MaxMSP patch to the iPad at this point, and possibly ever. I have found certain solutions, such as Swift or C++ directly, but at this point I do not have the time or resources to do so before the premiere. It is perhaps a project to undertake afterwards, which would also make the piece more available to the public, if relevant. It is especially demanding when one uses many externals in MaxMSP, which would be difficult to reproduce alone. An easy example of this would be a score follower. However, one could also argue that there is little point in having a musician play a piece alone with a score follower; there would be too many possibilities for mistakes. These are issues that I should definitely explore more.

Therefore, the only solution is to create an application from MaxMSP. This has its own challenges in making everything run properly with many different externals. Luckily, Cycling '74 has a pretty good series of tutorials that goes over many of the concepts. Max is incredibly picky about where files are placed, even when relative paths are used in the patch. This is an aspect of development that takes quite a bit of time, even though it feels like very little gets done.

Another problem is still getting performers more comfortable with electronics. A big part of this is that electronics are often not even mentioned in their conservatory studies. In many countries, even at the master's level, it is normal for a performer never to have performed with electronics. This issue is being worked on, especially in France and Canada, where more and more children are being exposed to electroacoustic and mixed music at a young age, as shown by the composer Grégoire Lorieux's efforts as well as Suzu Enns's research. I have been writing long instructions, but this is perhaps not enough. We will soon be having a Skype meeting to discuss some of these issues. Rehearsing in person several times before the premiere will also help quite a bit, but I am still quite concerned by how many musicians seem almost scared of electronics.

On a slightly more light-hearted note… we composers and programmers really need to get better at making GUIs. Mine is currently incredibly ugly, and most of the ones I have seen are pretty bad. Having an effective, easy-to-understand GUI is perhaps the first order of business in making these electronic processes slightly more approachable for performers?

More coming soon as I continue to test the piece and make the code more compact, coherent and bug-free.


Composition diary for “Anomie” #3

The last few days have been rather busy with other projects, but this composition has always been at the back of my mind. I can hear the piece in my mind much better at this point, but I must develop the ideas more, as well as work on its structure in a more substantial way. I believe I have been thinking about its structure and formalism a bit too much, trying to include too many ideas in a short piece. I have therefore been wondering how to peel back several layers and concentrate on making it a good and interesting piece. Listening to Chaya Czernowin's music, specifically her two CDs Shifting Gravity / Wintersongs III and Hidden, has also been a catalyst for rethinking the structure of the piece (as a side note: fantastic music, which rethinks how to work with dramaturgy). I was hearing something less structured and formal, yet trying to write in a very formalistic way, which did not match my mental "image" (for lack of a better word; sound, perhaps?).


This rethinking of how I had started to write the piece has only solidified my ideas for the electronics. It has also made me go for the more pragmatic solution for synchronization: a MIDI pedal for cues. This will also make the piece easier to learn for the pianist. However, I will still be using Antescofo to manage certain processes, as it is a very convenient programming language where one can describe events in musical time instead of absolute time. This gives me the musical and temporal flexibility I want, yet remains practical for the musician. The original idea of using two smaller speakers inside the piano will also be kept for the sake of simplicity, and it will force me to think about the mix of the electronics in a different way. Most of the small speakers found in conventional stores here in Norway tend to be Bluetooth. This can affect the electronics quite a bit, as Bluetooth introduces quite a bit of latency in my experience. Although the electronics will not be tightly synchronized to what the pianist is playing, this could still be slightly problematic for a few of the cues I have thought of.
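The appeal of musical time can be shown with a trivial sketch: cue times written in beats only become absolute once a tempo is supplied, so the same score of events adapts to any performance tempo. The cue names below are invented, and this is of course not Antescofo itself:

```python
def beats_to_seconds(events, bpm):
    """Convert (beat, action) pairs into (seconds, action) pairs at a fixed tempo."""
    seconds_per_beat = 60.0 / bpm
    return [(beat * seconds_per_beat, action) for beat, action in events]

# Hypothetical cue list written in musical time (beats).
cues = [(0.0, "scene 1"), (4.0, "granular on"), (6.5, "spectral delay")]

# At 80 BPM, beat 4 falls at 3.0 seconds.
schedule = beats_to_seconds(cues, 80.0)
```

Antescofo generalizes this far beyond a fixed tempo (following the live tempo of the player), but the basic convenience is the same: the composer writes beats, not seconds.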


The signal chain so far would be thus:

Although it is a relatively simple set-up, it still offers many musical and poetic possibilities.


As I sit here writing this diary entry, I am also working between the electronics and the notation, trying to structure them together and give them meaning. I will be using several electronic "modules" that I have programmed before, slightly modified to fit the poetic language of this piece. The main module to be re-used is the FM synthesis system I built for the piece "North Star" for solo flute and electronics. Between the MaxMSP code and the Antescofo code, the synthesis notes will come more as waves crashing into each other. I am also experimenting with more complex FM synthesis using more intricate ratios: for example, starting with three different overtones of a single note, where each overtone has a slightly different ratio that varies in time.
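A minimal sketch of that last idea follows. All parameter values are invented, and the real module is a MaxMSP patch, not Python; the point is only the structure: each "overtone" is an FM voice whose carrier-to-modulator ratio glides over time, slightly differently per voice:

```python
import math

def fm_partial(fc, ratio_start, ratio_end, index, dur, sr=8000):
    """One FM voice whose modulation ratio glides linearly from start to end.

    fc: carrier frequency (Hz); index: modulation index; dur: seconds.
    """
    n = int(dur * sr)
    out = []
    mod_phase = 0.0
    for i in range(n):
        t = i / sr
        ratio = ratio_start + (ratio_end - ratio_start) * (t / dur)
        mod_phase += 2 * math.pi * fc * ratio / sr  # integrate the gliding modulator
        out.append(math.sin(2 * math.pi * fc * t + index * math.sin(mod_phase)))
    return out

# Three "overtones" of a 110 Hz fundamental, each with a slightly offset,
# time-varying ratio (values invented for the sketch).
partials = [fm_partial(110 * h, 1.0 + 0.01 * h, 1.4 + 0.01 * h, 2.0, 0.25)
            for h in (1, 2, 3)]
mix = [sum(samples) / 3 for samples in zip(*partials)]
```

Because the three ratios drift apart over the note, the sidebands of each voice slide against the others, which gives the "waves crashing into each other" quality rather than a static spectrum.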


The next few days will bring more writing of this piece.


Composition diary for “Anomie” #1

After reading many different methodologies of practice-based research, I have concluded that it could be useful to start recording my compositional process while writing pieces that are relevant to my research areas. This will not be as thorough as, for example, Philippe Leroux’s documentation of his process for the composition “Voi(rex)” (2002), but it should still give a clear view of my compositional process and its relationship with the use of electronics, especially synchronization methods.


This piece will be the result of Bahareh Ahmadi asking me to write a short piece with electronics for one of her concerts. She has been a good friend of mine for several years, and has gotten curious about the use of electronics in contemporary classical music after hearing a few of my own compositions. We have sent each other much music back and forth by composers like Rolf Wallin, Emmanuel Nunes, Pascal Dusapin, etc.


The limitations I have been given are that the piece should not be too difficult, nor much longer than about eight minutes. Out of pragmatism, the use of electronics should also be relatively simple and easy to set up. This is partly inspired by reading and listening to Hans Tutschku and Pierre Alexandre Tremblay. As much as I would like to always have complex systems such as those found in Manoury’s “Tensio” (2010), time and money will rarely allow it. Discussing these issues with several musicians, as well as reading articles by performers, has made me want to try to make something more pragmatic. Therefore I am also considering the possibility of having only two small speakers inside the piano for the electronics, instead of a PA.


The first idea that came to mind was inspired by reading about technical failures in mixed music. A composer (whose name I sadly cannot remember right now) wanted to make a piece in which the piano plays with white noise. As the pianist presses different keys, the area around the played pitch is filtered out of the white noise. For some reason, this became technically difficult to realize at the concert, so they shifted to using political speeches instead, which in turn completely changed the message and concept of the composition.
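The concept itself is easy to prototype: carve a notch out of white noise around the frequency of the played key. The sketch below is my own minimal stand-in, not that composer’s patch: a standard RBJ-cookbook biquad notch in pure Python, with the pitch, Q, and sample rate chosen purely for illustration.

```python
import math
import random

def midi_to_hz(note):
    # Equal-tempered frequency of a MIDI note (A4 = note 69 = 440 Hz).
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

def notch_filter(signal, f0, fs, q=8.0):
    # Biquad notch (RBJ Audio EQ Cookbook): attenuates a narrow band around f0.
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1.0, -2.0 * math.cos(w0), 1.0
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in signal:  # direct-form I difference equation
        y = (b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# One second of white noise with the area around a played A4 carved out.
fs = 44100
noise = [random.uniform(-1.0, 1.0) for _ in range(fs)]
filtered = notch_filter(noise, midi_to_hz(69), fs)
```

In a concert situation the same idea would of course run as a real-time filter bank driven by incoming MIDI or pitch tracking, with one notch per sounding key.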


My original idea was in many ways to reverse this concept. What if the pianist slowly adds overtones, pitches, etc., to form something in the background? After listening to a lot of Thinking Plague recently, I settled on the idea of forming some polytonal chords. A technique I sometimes use for fun is to “hide” relatively banal tonal elements in a post-tonal context. So here, for example, an element of the electronics that the piano will activate is the progression I-vi-ii-V7, but polytonal and reversed, essentially creating: I/V7-vi/ii-ii/vi-V7/I. From this idea of symmetry, I decided that the two tonalities should be related by a tritone. This relatively “tonal process” happens in the background, while the foreground is closer to traditional post-tonal writing (although with allusions to the polytonality). The main figure I have settled on so far is the cell [0-3-11-8] and all of its permutations.
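To check the mirrored progression, it helps to write it out as pitch-class sets. The sketch below is a hypothetical illustration: it builds I-vi-ii-V7 in one key, the same progression reversed in the key a tritone away, and superimposes them step by step (the chord spellings are my assumption of standard diatonic voicings).

```python
# Diatonic chords as pitch classes relative to the tonic (0 = tonic).
CHORDS = {
    "I":  [0, 4, 7],
    "vi": [9, 0, 4],
    "ii": [2, 5, 9],
    "V7": [7, 11, 2, 5],
}

def transpose(chord, tonic):
    return sorted((pc + tonic) % 12 for pc in chord)

def mirrored_polytonal(progression, tonic_a=0, tonic_b=6):
    # Superimpose the progression in key A with the SAME progression
    # reversed in the key a tritone away (tonic_b = tonic_a + 6).
    reversed_prog = list(reversed(progression))
    combined = []
    for name_a, name_b in zip(progression, reversed_prog):
        pcs = set(transpose(CHORDS[name_a], tonic_a))
        pcs |= set(transpose(CHORDS[name_b], tonic_b))
        combined.append(sorted(pcs))
    return combined

# I/V7 - vi/ii - ii/vi - V7/I across C and F sharp.
chords = mirrored_polytonal(["I", "vi", "ii", "V7"])
```

A pleasant side effect of the tritone choice is that the whole structure becomes palindromic: the last combined chord is the first one transposed by a tritone, which confirms the symmetry the progression was built around.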


I have also thought of the dynamic form of the piece and have drawn it.


The electronics drawn out of the piano pitches should sound like waves that do NOT fall completely in sync with the pianist’s playing, essentially creating a vibrating sound mass that moves. I also plan to use other electronic processes, such as reverb and delays, to highlight certain passages.


As for the synchronization… The piano is often a difficult instrument because, although it is ONE instrument, it is generally played polyphonically. Although I have had some success using Antescofo (score following) in a piano piece, it is often slightly less reliable, especially if the music is very polyphonic. Therefore, this synchronization method is less desirable here. Using it would possibly confine my writing to being either less polyphonic or more based on block chords that the program manages to follow. Once again on this subject I must refer to Hans Tutschku’s ideas of triggering events in his piece Zellen-Linien. An envelope follower combined with a MIDI pedal could be a good combination that gives me the compositional flexibility I want, yet is precise enough for what I want the electronics to do. Since I doubt any of the electronics will be precise rhythmic figures, short delays and errors are not necessarily noticeable. Because of this, tape could also be used, but that would make the electronics mostly non-responsive to real-time changes. The pedal combined with, for example, an envelope follower still allows me to extract acoustic features of how the pianist is playing and use those to affect the electronics.
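A minimal sketch of the envelope-follower side of that idea, assuming a simple one-pole attack/release follower plus a threshold with hysteresis for firing cues. The time constants and thresholds are invented for illustration, and in practice this would live inside the MaxMSP patch rather than Python:

```python
import math

def envelope_follower(signal, fs, attack_ms=5.0, release_ms=100.0):
    # One-pole amplitude follower with separate attack/release times:
    # fast coefficient when the input rises, slow one when it falls.
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    env, out = 0.0, []
    for x in signal:
        level = abs(x)
        coeff = a_att if level > env else a_rel
        env = coeff * env + (1.0 - coeff) * level
        out.append(env)
    return out

def cue_triggers(envelope, on=0.5, off=0.2):
    # Rising-edge detection with hysteresis: one trigger per attack,
    # re-armed only after the envelope has fallen below `off`.
    armed, triggers = True, []
    for i, e in enumerate(envelope):
        if armed and e > on:
            triggers.append(i)
            armed = False
        elif not armed and e < off:
            armed = True
    return triggers
```

The MIDI pedal then complements this: the pedal arms or advances cue sections, while the follower reacts to how the pianist actually plays within them.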


These relationships are to be sketched out more today…


Bird flocking mechanics

I was recently starting to plan a new composition when I came upon the idea of using bird-flocking mechanics to determine the harmony of a piece. I had just come across Daniele Ghisi’s research, which includes the Dada external package for MaxMSP, containing exactly the tool to recreate this.


This is the beauty of sonification. Any data can be used and mapped onto musical parameters. Many of my students seem to find this idea slightly bizarre, but it has become part of contemporary composition. It is also incredibly popular in sound art. Much like Brian Eno’s cards, I find that sonification can give me an extra push to try something interesting. The meaning of the data you use also becomes an extra layer in the work on a composition. There really is no bad data, only bad ways to implement the data. Daniel Shiffman’s The Nature of Code has also influenced my view in this sense (although in the field of the visual arts). We as composers control the data, and it can be manipulated into anything we want, from completely diatonic harmony to the densest twelve-tone or aleatoric composition you can imagine.

In the picture, I added 600 birds to the swarm, although the code is really meant to work with six; I put in 600 just to show the possibilities in a more visual light. The X axis represents pitch, where negative and positive values represent the same note. The Y axis represents octaves. This data is sent to a bach.roll and then exported as a MIDI file to be manipulated further in Sibelius. Although the material may not seem interesting at first, it’s really about manipulating it in different ways with different methods. In this case, the variables of alignment and avoidance between the birds can be used throughout the composition as an extra-musical parameter. Although these might not necessarily be something the audience can hear, if it helps you structure your work, why not use it?
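For readers curious how such a mapping can work, here is a deliberately simplified stand-in, not Ghisi’s Dada library: a toy Reynolds-style flocking step in pure Python, with X folded into a pitch class (sign ignored, as described above) and Y mapped to octaves. All parameter values and the octave range are invented for the example.

```python
import random

def boids_step(positions, velocities, align=0.05, avoid=0.1, cohere=0.01, radius=2.0):
    # One simplified 2D flocking update: each bird steers by
    # alignment (match neighbors' velocity), separation (avoidance),
    # and cohesion (move toward the neighbors' center).
    new_vel = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        ax = ay = vx = vy = cx = cy = 0.0
        n = 0
        for j, (q, w) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            dx, dy = q[0] - p[0], q[1] - p[1]
            if dx * dx + dy * dy < radius * radius:
                n += 1
                vx += w[0]; vy += w[1]   # alignment
                ax -= dx;  ay -= dy      # separation (push away)
                cx += q[0]; cy += q[1]   # cohesion
        if n:
            v = (v[0] + align * (vx / n - v[0]) + avoid * ax + cohere * (cx / n - p[0]),
                 v[1] + align * (vy / n - v[1]) + avoid * ay + cohere * (cy / n - p[1]))
        new_vel.append(v)
    new_pos = [(p[0] + v[0], p[1] + v[1]) for p, v in zip(positions, new_vel)]
    return new_pos, new_vel

def to_midi(positions, base_octave=4):
    # X folded to a pitch class (negative and positive give the same
    # note), Y wrapped into a four-octave band above base_octave.
    notes = []
    for x, y in positions:
        pitch_class = int(abs(x)) % 12
        octave = base_octave + int(y) % 4
        notes.append(12 * (octave + 1) + pitch_class)
    return notes
```

Varying the `align` and `avoid` weights over the course of a piece is exactly the kind of extra-musical parameter mentioned above: tighter flocks yield clustered harmony, stronger avoidance spreads the pitches out.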