
EMS presentation published

The presentation I did at the Electroacoustic Music Studies Network conference in Italy in June 2018 has now been published in article form. It can be accessed here. Be sure to check out the other articles from the conference as well; many people had interesting perspectives and presentations.

Thanks to my thesis advisors and everyone who is part of the EMS team for their feedback on the presentation and the article.


Composition diary for “Anthropocene” #2

Work continues on the new duo… The A1 section is now more fleshed out than it was, but I am still unsatisfied with how the material develops; it doesn't feel natural enough. This is holding back the composition, as I have not written a single note of A2 because of it. I feel that my compositional parameters are clear enough, but that I am not developing them enough, or perhaps throwing too many things in the air at the same time. Would an audience (even an expert audience) be able to follow the logic in it? A lot of the different processes also never completely line up, which sometimes makes it feel too scrambled.

This always brings me back to Webern and Boulez… When did they understand it was time to stop a certain process in their compositions? Both composers often write in such a clear and defined manner, yet the result is still chaotic and feels intuitive. I have also been reading a bit about Jonathan Harvey's compositions and compositional processes. His music always sits exactly on that line between the chaotic, the systematic and the intuitive. The A section isn't up there yet, although it is slowly getting there.

Both B sections are finished and add a nice contrast to the A1 section, as well as to the start of the C section. In the end, I found the use of Markov chains in B1 and B2 to be interesting, although I have intuitively changed certain aspects of the result. The main difference between the two sections is that B2 has a higher statistical chance of producing different articulations. This felt important in order to have a slight development in what is being played while keeping the two sections clearly related.
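To make this concrete, here is a minimal Python sketch of such a chain (not my actual patch; the articulation labels and all probabilities are invented for illustration). The only structural difference between the two matrices is that the rows of B2 are flatter, so it changes articulation more often:

```python
import random

# Articulation states (hypothetical labels, one row of weights per state).
STATES = ["ord.", "pizz.", "col legno", "sul pont."]

# Transition matrices: row = current articulation, column = probability of
# each articulation coming next, in the order of STATES.
B1 = {
    "ord.":      [0.70, 0.15, 0.10, 0.05],
    "pizz.":     [0.60, 0.25, 0.10, 0.05],
    "col legno": [0.55, 0.15, 0.20, 0.10],
    "sul pont.": [0.50, 0.20, 0.10, 0.20],
}
# B2 uses flatter rows, raising the chance of changing articulation.
B2 = {
    "ord.":      [0.40, 0.25, 0.20, 0.15],
    "pizz.":     [0.35, 0.30, 0.20, 0.15],
    "col legno": [0.30, 0.25, 0.30, 0.15],
    "sul pont.": [0.30, 0.25, 0.15, 0.30],
}

def generate(matrix, start="ord.", length=16, seed=None):
    """Walk the chain and return a list of articulations."""
    rng = random.Random(seed)
    state, out = start, [start]
    for _ in range(length - 1):
        state = rng.choices(STATES, weights=matrix[state])[0]
        out.append(state)
    return out

print("B1:", generate(B1, seed=1))
print("B2:", generate(B2, seed=1))
```

As mentioned above, the output of a chain like this is then still edited intuitively; the chain only proposes.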

The C section starts with an interpolation between the different rhythmic "characters" of the piece to create a sort of interlude, with both the pitch material and the rhythmic material relating it to what comes before and after. The section ends with a rather chaotic and symmetrical passage based on the four different harmonic environments of the piece. The C section has always been planned as rather chaotic, and this ending fits in nicely before B2. However, the journey between the start and end of C is still rather uncertain. I've been experimenting with different developments of the harmonic and melodic environments of the piece, but I'm still not completely decided. The rhythmic aspect also has to be quite present, as it frames both the start and the end. I'm currently testing out some almost Sacre-like rhythms that are quite fun, shifting between the different harmonic environments. This feels chaotic enough, but does it fit? Can I make the different characters and elements clear enough within this idea?

The ending of the piece is slowly becoming clearer in my mind as well, although I have yet to set it down on paper or computer. It still needs some development, but I do think it is a fitting ending, or really a short coda. So far it seems the piece will run 7-8 minutes, which is shorter than the planned 10, but these things can change quickly once one plays around with the material and listens to it many times.

While writing this, I'm also thinking about electronics… At this point, I hear the composition as too "full" to have any electronics. Perhaps this is a reflex, since the piece has not had electronics since the first compositional sketches? Either way, it is interesting for the research questions that are part of my doctoral work. If the piece does get electronics, they will have to be rather subtle and only influence or colour certain musical elements, or perhaps just amplify the two B sections. As part of a paper I'm currently writing on string quartets with electronics, I have come to hear many different possibilities for electronics with string instruments. Sometimes the subtlest methods are those that fit best with the écriture of the piece.


Composition diary for “Anthropocene” #1


In December, when I got my acceptance letter for Mixtur, I already started slowly – but surely! – planning the composition. A title that kept recurring in my mind was "Life in the Anthropocene". During the autumn, I read a lot about the Anthropocene, as well as articles from the Dark Mountain Project (see here). With David Attenborough speaking at Davos in the news today (article here), it is easy to see how relevant this theme is. In this blog I will not go over the environmental crisis we are living through, but it has been a big part of the inspiration for this piece, a duo for violin and cello.

For Mixtur, we are only allowed acoustic compositions. I have been wondering if, as an exercise in compositional process, I should only do the electronics AFTER the full composition is finished, to see how that affects the electronic processes. Having already written quite a bit of the work, I'm uncertain whether I will add electronics at all. It feels slightly disingenuous considering the theme of the piece.

Now for the composition… After spending several weeks thinking about the form, I came up with the following drawing:

Basic form, graphically: first sketch of the whole piece

Sorry for my very poor drawing skills. The form, taken down to its essence, is A1-B1-C-B2-A2-D, with D being a sort of coda to finish the piece. The important aspects are the differences between A, B and C, which I would like to discuss. My main idea for A and C was a deeply polyphonic and chaotic material. B, on the other hand, is a type of convergence where the different voices come together, at least seemingly.

Originally, the idea of using fractals such as Barnsley's fern for sections A and C seemed interesting. But after playing around with the idea and creating some simple prototypes, it wasn't really the type of music I wanted. Layering several of these over each other didn't appeal to me either, because of the composition's theme. Therefore, the only algorithmic section is the B section, which will be explored a bit more later on.
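For context, Barnsley's fern is an iterated function system: a point is repeatedly passed through one of four affine maps, each chosen at random with a fixed weight. The simple prototypes I mention were along these lines, in a Python sketch (the pitch mapping at the end is purely illustrative, and exactly the kind of result that didn't convince me):

```python
import random

# Barnsley's fern: four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# each applied with a fixed probability p.
#          a      b      c     d     e    f     p
MAPS = [
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),  # stem
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),  # ever-smaller leaflets
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),  # largest left leaflet
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),  # largest right leaflet
]

def fern_points(n, seed=0):
    """Iterate the system from the origin and collect the points."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    points = []
    for _ in range(n):
        a, b, c, d, e, f, _p = rng.choices(MAPS, weights=[m[6] for m in MAPS])[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    return points

# A naive musical mapping: fold the fern's height (roughly 0 to 10) onto a
# MIDI pitch range, one note per iteration.
for _x, y in fern_points(12, seed=42):
    print(round(36 + y * 6))
```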

In the last few days I have also been rewriting the first A section several times. Here's an example taken from the first few bars (the notation is rough while I'm still writing). This:

Turned into this, then followed by the pizzicato line, which is also extended:

(Ignore the lower G in bar 5; it's from a previous idea.)

In my sketches I have several small "themes" and different parameters to play around with throughout the composition. At this point I'm still not completely satisfied with the A section. I feel it needs to be more chaotic in a sense, and many of the parameters are perhaps still not clear enough to me in terms of how they should be changed and organized. Although I have sketches for each section (such as the drawing further up), I never follow them slavishly. They really are just a springboard for exploration.

For the two B sections, I created an algorithm (which I might post if there is interest) loosely based on Barnsley's fern, which can in a way be seen as a mix between a Lindenmayer system and a Markov system. The algorithmic sections only calculate groupings and articulations; the pitches are already set within my general form. They also follow a rhythmic "grid", which is then interpolated in the C section. Once again, the algorithm is not followed slavishly; it really serves more as a reservoir of possibilities to be mutated into something that fits my compositional desires.
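Until I get around to posting the real thing, here is a rough Python sketch of the hybrid idea (the symbols and weights are invented for illustration): every symbol is rewritten in parallel each generation, as in a Lindenmayer system, but each rewrite is chosen by weighted chance, as in a Markov system:

```python
import random

# Stochastic rewriting rules: each symbol has several possible expansions
# with weights. Symbols are hypothetical: G is a grouping still to be
# expanded, and a/s/t stand for three articulations.
RULES = {
    "G": ([["G", "a"], ["a", "G", "s"], ["t"]], [0.5, 0.3, 0.2]),
    "a": ([["a"], ["s"]], [0.7, 0.3]),
    "s": ([["s"], ["t"]], [0.8, 0.2]),
    "t": ([["t"]], [1.0]),
}

def rewrite(symbols, rng):
    """One generation: rewrite every symbol in parallel (L-system style),
    picking each expansion by weight (Markov style)."""
    out = []
    for sym in symbols:
        expansions, weights = RULES[sym]
        out.extend(rng.choices(expansions, weights=weights)[0])
    return out

rng = random.Random(3)
sequence = ["G"]
for generation in range(5):
    sequence = rewrite(sequence, rng)
    print(generation, "".join(sequence))
```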

I only have a very rough sketch of the start of the C section at this point. As mentioned earlier, some of the rhythmic ideas of the B sections are interpolated so that they never exactly fit together. Throughout the composition these small rhythmic fragments come back again and again; they come together, and they move apart. This is a direct inspiration from the idea of the Anthropocene and what we have done to the planet. An example of the interpolation grid can be seen here:

Rhythmic idea
Interpolation between A and B
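In code terms, one way to think of such a grid (a simplified Python sketch, not my actual method) is as a stepwise weighted average between two rhythmic patterns; note how the intermediate grids line up with neither side:

```python
# Two rhythmic "characters" as durations in sixteenth notes
# (the patterns themselves are invented for illustration).
RHYTHM_A = [3, 3, 2, 4, 3, 1]
RHYTHM_B = [1, 2, 1, 1, 2, 1]

def interpolate(a, b, steps):
    """Return patterns moving gradually from a to b. Each intermediate
    duration is a rounded weighted average, so the grids drift and never
    quite fit together until the final step."""
    grids = []
    for i in range(steps + 1):
        t = i / steps
        grids.append([round((1 - t) * da + t * db) for da, db in zip(a, b)])
    return grids

for step, grid in enumerate(interpolate(RHYTHM_A, RHYTHM_B, 4)):
    print(step, grid, "total sixteenths:", sum(grid))
```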

For the rest of the week I'm hoping to fully flesh out, and become satisfied with, the A section, as well as to finish a very rough sketch of the rest. I tend to have bits and pieces of the whole composition and never compose linearly. While writing this, I've also come to realize how much writing on paper, and not only on the computer, affects my ideas. I started writing on paper because of a professor I had in counterpoint, and it simply became a habit. I move quickly between paper and computer, each giving me different feedback and a different way of visualizing that I need for the composition at hand.


Composition diary for “Anomie” #4

It's been a long time since the last update, although not because of laziness. The premiere has been pushed back to the 23rd of March in Trondheim, Norway. The piece will still be played by Bahareh Ahmadi from Sweden.

Because of the initially short deadline this autumn, I finished the written score before the electronics. However, all of the sketches specifying what the electronics would do and how they should sound were already done, which allowed me to program the electronics afterwards. Some ideas have changed slightly, for example where a certain process starts, but what the electronics do has not changed at all. The concept is exactly the same, only more detailed and thought through. A few details of the sound have changed, though.

An example of this is that I realized certain FM sounds would contrast too much with the acoustic sound of the piano. Although the electronics do not go through a traditional PA, it still felt too cold or mechanical in a sense. Therefore, I have used some convolution to help loosen up the sounds, and it has done a rather good job. I also thought that some of the piano sounds could be a bit more "blurry" or "busy", so I added a simple spectral delay that is barely heard.
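The principle, stripped down to a NumPy sketch rather than the Max patch itself (the impulse response here is synthetic noise; in practice one would use a recorded IR):

```python
import numpy as np

SR = 44100  # sample rate
t = np.linspace(0, 1.0, SR, endpoint=False)

# A plain FM tone: 440 Hz carrier, 2:1 modulator ratio, modulation index 3.
carrier, ratio, index = 440.0, 2.0, 3.0
fm = np.sin(2 * np.pi * carrier * t
            + index * np.sin(2 * np.pi * carrier * ratio * t))

# Stand-in impulse response: a short burst of exponentially decaying noise.
rng = np.random.default_rng(0)
ir = rng.standard_normal(2048) * np.exp(-np.linspace(0, 8, 2048))

# Convolving the cold FM tone with the IR smears its transients and
# spectrum, "loosening up" the sound so it blends with the acoustic piano.
softened = np.convolve(fm, ir)[: len(fm)]
softened /= np.max(np.abs(softened))  # normalize to avoid clipping
```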

A lot of time was spent implementing these different systems in an effective manner, so that they work simply by listening to the performer. An important aspect of this piece is still that the performer does NOT need an extra person running the electronics. The programming therefore has to be incredibly solid and function algorithmically. It must also be possible to export the Max patch to something usable by the performer (more on this later on).

The synchronization method also had to be tightly connected to the performer. It is still based solely on a MIDI pedal. The pedal triggers different scenes, and within each scene things happen automatically. However, the triggering is also used to approximate the tempo in a few sections. The electronics aren't incredibly striated (to use Boulez's terminology), but a bit of extra information still helps keep things together.
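Reduced to a Python sketch (in reality this lives in the Max patch, and the beat values are invented): each press advances a scene, and in scenes where the score fixes the number of beats between presses, the elapsed time also yields a rough tempo estimate:

```python
import time

class PedalSync:
    """Each pedal press advances a scene; scenes with a known number of
    beats between presses also produce a tempo estimate (0 marks a free,
    unstriated scene that carries no tempo information)."""

    def __init__(self, scene_beats):
        self.scene_beats = scene_beats
        self.scene = 0
        self.last_press = None

    def press(self):
        now = time.monotonic()
        if self.last_press is not None and self.scene < len(self.scene_beats):
            beats = self.scene_beats[self.scene]
            if beats:
                bpm = 60.0 * beats / (now - self.last_press)
                print(f"scene {self.scene}: ~{bpm:.0f} BPM")
        self.last_press = now
        self.scene += 1
        print(f"pedal -> scene {self.scene}")

sync = PedalSync(scene_beats=[4, 4, 0, 8])
sync.press()        # start of the piece -> scene 1
time.sleep(2.0)
sync.press()        # 4 beats in ~2 s -> prints roughly 120 BPM
```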

The other aspect is how the program "listens". Originally, I had planned to use Øyvind Brandtsegg's tools from the cross-adaptive processing project. However, his tools, both as a VST and as a Csound script, caused many problems once exported from MaxMSP. After a lot of testing I couldn't find the reason, so I have opted to use the zsa.descriptors library by Mikhail Malt and Emmanuel Jourdan instead. There are a few fewer parameters to analyse, but for this piece that is not an issue, as the parameters analysed are rather general. In certain sections, the analysis of these parameters is used to set the amplitude, how much of a certain effect is present, and so on.
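Conceptually, the mapping stage is little more than clipping and scaling each analysis value into a control range, as in this Python sketch (all ranges and mappings are invented for illustration; in the patch this is done with standard Max scaling objects):

```python
def scale(value, in_lo, in_hi, out_lo=0.0, out_hi=1.0):
    """Clip a value to [in_lo, in_hi], then map it linearly to [out_lo, out_hi]."""
    value = min(max(value, in_lo), in_hi)
    t = (value - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

# Hypothetical mappings: a brighter piano sound (higher spectral centroid)
# pushes more signal into the spectral delay, while louder playing raises
# the overall electronics level.
centroid_hz = 2800.0   # e.g. a spectral centroid from the analysis
loudness_db = -18.0    # e.g. an estimated loudness

delay_wet = scale(centroid_hz, 500.0, 5000.0)                 # 0..1
electronics_gain = scale(loudness_db, -40.0, -6.0, 0.2, 1.0)  # 0.2..1
print(f"delay wet: {delay_wet:.2f}, gain: {electronics_gain:.2f}")
```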

Although many of these parameters are rather "abstract" for most composers, as they come from the world of acoustics, they can definitely be used. As part of Philippe Manoury's stay at the Collège de France, Mikhail Malt gave a very interesting presentation on exactly this, with many examples from compositions. Sadly, the presentation is only in French and without subtitles.

In the literature, it is often mentioned that the acoustic music suffers for the sake of the electronics, and I have been very careful about this, sometimes sacrificing something very exact for something less exact so that it fits musically within the context. I find this a difficult balance. In a solo piece it is perhaps much easier than in a complex ensemble piece. A typical problem I have often heard about is that musical gesture X does not trigger electronic process Y because a sensor does not get enough information, etc. This composition avoids these problems through its simplicity. At this point I am still rather uncertain how to handle them in more complicated music, and they are definitely something I still struggle with sometimes. It does, however, make it clear to me that there is an inherent relationship between how we synchronize, the electronic processes and the compositional processes.

Another issue has been making the electronics available to the performer. Originally the plan was to have the electronics on an iPad, much as Hans Tutschku has done for several of his compositions. I had foolishly believed that these apps were created in MaxMSP and then ported to the iPad, which is not the case: there are two apps for his compositions, one for MaxMSP on the computer and one for the iPad (in cases like Zellen-Linien, at least). It seems to be impossible to directly port a full MaxMSP patch to the iPad at this point, and possibly ever. I have found certain solutions, such as rewriting in Swift or directly in C++, but at this point I do not have the time or resources to do so before the premiere. It is perhaps a project to undertake afterwards, which would also make the piece more available to the public, if relevant. It is especially demanding when one uses many externals in MaxMSP, which would be difficult to reproduce alone; an obvious example would be a score follower. However, one could also argue that there is little point in having a musician play a piece alone with a score follower, as there would be too many possibilities for mistakes. These are issues that I should definitely explore more.

Therefore, the only solution is to create a standalone application from MaxMSP. This has its own challenges in getting everything to run properly with many different externals. Luckily, Cycling '74 has a pretty good series of tutorials covering many of the concepts. Max is incredibly picky about where files are placed, even when the patch uses relative paths. This is an aspect of development that takes quite a bit of time, even though it feels like very little gets done.

Another problem is still getting performers more comfortable with electronics. A big part of this is that electronics are often not even mentioned in their conservatory studies; in many countries it is normal, even at the master's level, for a performer never to have performed with electronics. This issue is being worked on, especially in France and Canada, where more and more children are being exposed to electroacoustic and mixed music at a young age, as shown by the composer Grégoire Lorieux's efforts as well as Suzu Enns's research. I have been writing long instructions, but this is perhaps not enough. We will soon have a Skype meeting to discuss some of these issues. Rehearsing in person several times before the premiere will also help quite a bit, but I am still quite concerned by how many musicians seem almost scared of electronics.

On a slightly more light-hearted note… we composers and programmers really need to get better at making GUIs. Mine is currently incredibly ugly, and most of the ones I've seen are pretty bad. Having an effective, easy-to-understand GUI is perhaps the first order of business in making these electronic processes slightly more approachable for performers?

More coming soon as I continue to test the piece and make the code more compact, coherent and bug-free.