It’s been a long time since the last update, although not because of laziness. The premiere has been pushed back to the 23rd of March in Trondheim, Norway. The piece will still be played by Bahareh Ahmadi from Sweden.
Because of the tight deadline this autumn, I finished the written score before the electronics. However, all of the sketches specifying what the electronics would do, and how they should sound, were already done, so I then programmed the electronics afterwards. Some ideas have shifted slightly, such as where a certain process starts, but what the electronics do has not changed at all. The concept is exactly the same, only more detailed and thought through. A few details about the sound have changed, though.
An example of this is that I realized certain FM sounds contrasted too much with the acoustic sound of the piano. Although the electronics do not go through a traditional PA, they still felt too cold or mechanical in a sense. I have therefore used some convolution to loosen up the sounds, and it has done a rather good job. I also thought that some of the piano sounds could be a bit more “blurry” or “busy”, so I added a simple spectral delay that is barely audible.
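As an aside, the softening effect of convolution is easy to illustrate outside Max. The sketch below is plain Python with NumPy; the FM parameters and the decaying-noise impulse response are invented for illustration and are not taken from the piece. Convolving a synthetic FM tone with a short IR smears its mechanical attack, much as a small-room impulse response would:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def fm_tone(carrier_hz, mod_hz, index, dur_s):
    """Basic FM synthesis: a sine carrier whose phase is modulated by a sine."""
    t = np.arange(int(SR * dur_s)) / SR
    return np.sin(2 * np.pi * carrier_hz * t + index * np.sin(2 * np.pi * mod_hz * t))

# Stand-in impulse response: 100 ms of exponentially decaying noise,
# a crude approximation of a small room.
rng = np.random.default_rng(0)
n_ir = int(SR * 0.1)
ir = rng.standard_normal(n_ir) * np.exp(-np.linspace(0.0, 8.0, n_ir))

dry = fm_tone(440.0, 220.0, 3.0, 0.5)
wet = np.convolve(dry, ir)           # convolution "smears" the dry signal in time
wet /= np.max(np.abs(wet))           # normalise to avoid clipping

# The wet signal is longer than the dry one by len(ir) - 1 samples.
print(len(dry), len(wet))
```

In the actual patch this happens in real time, of course; the point here is only that convolution with even a short, noisy IR diffuses the overly clean FM spectrum.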
A lot of time was spent implementing these different systems so that they work reliably by listening to the performer. It remains an important aspect of this piece that the performer does NOT need an extra person operating the electronics. The programming therefore has to be incredibly solid and function algorithmically. It must also be possible to export the Max patch as something usable by the performer (more on this later on).
The synchronization method also had to be tightly connected to the performer. It is still based only on a MIDI pedal. The pedal triggers different scenes, and within each scene things happen automatically. However, the triggers are also used to approximate tempo in a few sections. The electronics aren’t incredibly striated (to use Boulez’s terminology), but a bit of extra information still helps keep things together.
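The tempo approximation can be sketched in a few lines. This is a hypothetical Python illustration, not the actual patch logic: each pedal press contributes a timestamp, and the median inter-onset interval gives a tempo estimate that is robust against a single hesitant or early press:

```python
def estimate_bpm(trigger_times):
    """Estimate tempo from pedal press times (seconds, in order).

    Uses the median inter-onset interval so that one outlier press
    does not throw off the estimate. Returns None until there are
    at least two presses.
    """
    if len(trigger_times) < 2:
        return None
    intervals = sorted(b - a for a, b in zip(trigger_times, trigger_times[1:]))
    mid = len(intervals) // 2
    if len(intervals) % 2:
        ioi = intervals[mid]
    else:
        ioi = (intervals[mid - 1] + intervals[mid]) / 2
    return 60.0 / ioi  # seconds per beat -> beats per minute

# Presses roughly every half second -> roughly 120 BPM.
print(estimate_bpm([0.0, 0.52, 1.01, 1.49, 2.02]))
```

The same idea transfers directly to a Max patch built from timing objects, with the estimate driving whatever sections need a pulse.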
The other aspect is how the program “listens”. Originally, I had planned to use Øyvind Brandtsegg’s tools from the cross-adaptive processing project. However, his tools, both as a VST and as a Csound script, caused many problems once exported from MaxMSP. After a lot of testing I couldn’t find the reason, so I have opted to use the zsa.descriptors by Mikhail Malt and Emmanuel Jourdan instead. There are a few fewer parameters to analyse, but for this piece that is not an issue, as the parameters analysed are rather general. In certain sections, the analysis of these parameters is used to set amplitude, how much of a certain effect is present, and so on.
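The mapping from analysis to control is conceptually simple: scale a descriptor into a usable range, clip it, and smooth it so the effect amount does not jump audibly. The Python sketch below illustrates the idea; the descriptor ranges and smoothing coefficient are invented for illustration, not values from the piece:

```python
def scale_clip(value, in_lo, in_hi, out_lo=0.0, out_hi=1.0):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clipped."""
    t = (value - in_lo) / (in_hi - in_lo)
    t = min(max(t, 0.0), 1.0)
    return out_lo + t * (out_hi - out_lo)

class SmoothedControl:
    """One-pole low-pass on a control value, so effect amounts glide
    rather than jump from one analysis frame to the next."""
    def __init__(self, coeff=0.9):
        self.coeff = coeff   # closer to 1.0 = slower, smoother response
        self.state = 0.0

    def step(self, target):
        self.state = self.coeff * self.state + (1.0 - self.coeff) * target
        return self.state

# Example: drive a wet/dry amount from a brightness-like descriptor.
wet = SmoothedControl()
for centroid_hz in [500.0, 1500.0, 3000.0, 4500.0]:   # fake analysis frames
    amount = scale_clip(centroid_hz, 1000.0, 4000.0)  # brighter -> more effect
    print(round(wet.step(amount), 3))
```

In Max the same chain would typically be a `scale`/`clip` pair followed by a line or filter object; the sketch just makes the arithmetic explicit.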
Although many of these parameters are rather “abstract” for most composers, since they come from the world of acoustics, they can definitely be used. During Philippe Manoury’s stay at the Collège de France, Mikhail Malt gave a very interesting presentation on exactly this topic, with many examples from compositions. Sadly, the presentation is only in French and without subtitles.
In the literature, it is often mentioned that the acoustic music suffers for the sake of the electronics, and I have been very careful about this, sometimes sacrificing something very exact for something less exact that fits musically within the context. This is a difficult balance, I find. In a solo piece it is perhaps much easier than in a complex ensemble piece. A typical problem I have also often heard about is that musical gesture X does not trigger electronic process Y because a sensor does not get enough information, etc. This composition avoids these problems through its simplicity. At this point I am still rather uncertain how to handle them in more complicated music, and they are definitely something I still struggle with sometimes. It does, however, make clear to me that there is an inherent relationship between how we synchronize, the electronic processes, and the compositional processes.
Another issue has been making the electronics available to the performer. Originally the plan was to have the electronics on an iPad, much as Hans Tutschku has done for several of his compositions. I had foolishly believed that these apps were created in MaxMSP and then ported to the iPad, which is not the case. There are two apps for his compositions: one for MaxMSP on the computer and one for the iPad (in cases like Zellen-Linien, at least). It seems to be impossible to directly port a full MaxMSP patch to the iPad at this point, and possibly ever. I have found certain solutions, such as Swift or C++ directly, but at this point I do not have the time or resources to pursue them before the premiere. It is perhaps a project to undertake afterwards, which would also make the piece more available to the public, if relevant. It is especially demanding when one uses many externals in MaxMSP, which would be difficult to reproduce alone. An obvious example would be a score follower. However, one could also argue that there is little point in having a musician play a piece alone with a score follower; there would be too many possibilities for mistakes. These are issues that I should definitely explore more.
Therefore, the only solution is to create a standalone application from MaxMSP. This has its own challenges in getting everything to run properly with many different externals. Luckily, Cycling ’74 has a pretty good series of tutorials that covers many of the relevant concepts. Max is incredibly picky about where files are placed, even when the patch uses relative paths. This is an aspect of development that takes quite a bit of time, even though it feels like very little gets done.
Another problem is still getting performers more comfortable with electronics. A big part of this is that electronics are often not even mentioned in their conservatory studies. In many countries, even at the master’s level, it is normal for a performer never to have performed with electronics. This issue is being worked on, especially in France and Canada, where more and more children are being exposed to electroacoustic and mixed music at a young age, as shown by the composer Grégoire Lorieux’s efforts as well as Suzu Enns’s research. I have been writing long instructions, but this is perhaps not enough. We will soon have a Skype meeting to discuss some of these issues. Rehearsing in person several times before the premiere will also help quite a bit, but I am still quite concerned by how many musicians seem almost scared of electronics.
On a slightly more light-hearted note… we composers and programmers really need to get better at making GUIs. Mine is currently incredibly ugly, and most of the ones I’ve seen are pretty bad. Is an effective, easy-to-understand GUI perhaps the first order of business in making these electronic processes slightly more approachable for performers?
More coming soon as I continue to test the piece and make the code more compact, coherent, and free of bugs.