String quartet piece #1 – 60 Loops Update

In the weeks since the first string quartet rehearsal for Jodlowski’s 60 Loops, I have been working on the different versions of the electronics (see the rehearsal notes for more information).
The main issue has been programming the versions that use a score follower (in this case Antescofo). The reason for choosing 60 Loops as the first piece was the idea that it was “simple” from both a computational and a musical point of view. The emphasis on rhythm also makes it an ideal piece for building a certain sense of chemistry between the quartet players, as well as just having fun. Rhythmical illusions are often a blast to rehearse with other musicians, and in our case it hasn’t been uncommon to break out into laughter together. This is something I’ve also experienced myself as a drummer while rehearsing polyrhythms and syncopations. However, now that I have started programming the piece, it is turning out to be really quite far from simple. So far I have well over 3500 lines of code, and it is far from finished.
Although many of the lines of code are quite simple, since there is not that much going on at any one moment, it is demanding for the computer to understand exactly where we are. Note repetition is also an issue that score following has often struggled with. In the early days, some of the writing had to be adapted so that it could be followed at all (Puckette & Lippe, 1992). Stroppa’s (1999) critique has often been based on this point as well. In a case such as this one, it is not possible to change the music; the techniques must therefore be adapted to the music, which presents its own set of interesting challenges.
In the last post about 60 Loops, I defined two different methods using score following:
1. Score following + files
2. Score following + live generation (by physical modelling)
The first method is rather straightforward. The main issue will be to see how Antescofo reacts to such repetitive music, and whether the computer will be able to follow it. During the next rehearsal we should have enough time to try it a few times. Although Antescofo has problems with polyphony, I find that using a selector~ object to look at the correct voice in the quartet works well. At several points in the composition, which voice is analyzed shifts. This is a technique that I have used previously in my piece “Solace”, and it has been unproblematic. This was a question another student brought up during my Antescofo training at IRCAM. It would also be possible to run several instances of Antescofo, but that would use more CPU and generally doesn’t seem to be necessary for the electronics I have worked on so far.
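As an illustration, a minimal, hypothetical Antescofo score fragment could handle the voice switching like this. The pitches, labels and the voice_select receiver name are only placeholders, and the message is assumed to be routed in Max to the selector~ object that feeds antescofo~:

    ; hypothetical excerpt: change which voice selector~ passes to antescofo~
    BPM 132
    NOTE C4 0.5 bar1_beat1
        voice_select 1     ; follow violin 1 from here
    NOTE C4 0.5
    NOTE C4 0.5
    NOTE G3 2.0 section_B
        voice_select 3     ; at the section change, follow the viola instead

Placing the switch on a labelled event means it can happen exactly where the newly followed voice has something rhythmically clear to latch on to.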
The second method is proving to be slightly more difficult. The idea was to use physical modelling so as to give the music a more “live” feeling when it comes to tempo, but also velocity, etc. Ideally, this would bring the piece closer to Manoury’s idea of real-time music (Manoury, 1997). I’m starting to develop a system with the poly~ object that allocates the necessary information to the different instances, but in the worst case this could mean 40 × 4 instances running at the same time, which is perhaps not realistic CPU-wise at this point. This will have to be tested thoroughly, or perhaps an even smarter system will have to be created. Sending the necessary information to the different instances is rather easy with a sprintf object whose “%i_PARAMNAME” is filled in by the instance number from a thispoly~. However, finding a way to combine different instances to save CPU is probably more realistic. And then comes the question: how much time should one spend on the electronics of a single piece when it can be done easily with the tape?
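To sketch how this could be driven from the score, the actions below send parameters for one live-generated loop through a single hypothetical physmod receiver (the receiver name, parameter names and values are all placeholders). On the Max side, a sprintf could then format the instance number and parameter name into the “%i_PARAMNAME” receive that the corresponding poly~ voice builds from its thispoly~:

    ; hypothetical excerpt: parameters for one live-generated loop, addressed by instance number
    NOTE E4 1.0 loop3_cue
        physmod 3 pitch 64        ; instance number, parameter, value
        physmod 3 velocity 90
        physmod 3 trigger 1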
This has led me to think of a combination of the two methods. What if I combine live generation via physical modelling with looping? For example, five loops could be generated live, while the others are recorded and then looped automatically. This could combine some of the positive and negative aspects of all the synchronization methods discussed. However, it introduces one major problem: temporal discontinuity.
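A rough sketch of how such a hybrid cue list could look in the Antescofo score, with all receiver names, pitches and labels as placeholders:

    ; hypothetical excerpt: early loops generated live, later loops played back as recordings
    NOTE A3 1.0 loop1_cue
        physmod 1 start           ; loop 1 is synthesised live by the physical model
    NOTE B3 1.0 loop2_cue
        physmod 2 start
    NOTE D4 1.0 loop6_cue
        playback 6 start          ; from here on, a captured loop is played back instead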
Antescofo anticipates and follows the musicians, which is its greatest strength. This allows parts to be tightly synchronized with the performance at hand. However, in this case, if there is a larger tempo difference between a few sections (I’m thinking especially of the large section covered by the second sound file), there could be temporal discontinuities between the live generation and the loops. In the case of 100% live generation this is not an issue, since everything would be generated then and there at the current and correct tempo. However, since we now introduce loops again, small discrepancies could become larger over time. To take a made-up example: if a loop is captured while the quartet plays at 120 BPM but the tempo then settles around 126 BPM, a sixteen-beat loop comes out roughly 0.4 seconds too long on each repetition, and after ten repetitions the electronics would be trailing the quartet by close to four seconds.
A solution to this could be to make Antescofo stricter about the tempo of the electronics, and more separated from the musicians than one would normally want. However, what we should concentrate on is the effect this would have on the musicians. This is another aspect of the score following I will have to figure out and try out for this piece. If anything, it shows that there are always many ways to organize the electronics of a piece, and that they should be shaped around the composition at hand. There truly is no “tried and true” method, only what works for the music.
Bibliography
Manoury, P. (1997). Les partitions virtuelles. Retrieved from http://www.philippemanoury.com/?p=340
Puckette, M., & Lippe, C. (1992). Score following in practice. In Proceedings of the International Computer Music Conference (p. 182). International Computer Music Association. Retrieved from http://www.music.buffalo.edu/sites/www.music.buffalo.edu/files/pdfs/Lippe-SanJose.pdf
Stroppa, M. (1999). Live electronics or…live music? Towards a critique of interaction. Contemporary Music Review, 18(3), 41–77. https://doi.org/10.1080/07494469900640341