Categories
News Research

EMS presentation published

The presentation I gave at the Electroacoustic Music Studies Network conference in Italy in June 2018 has now been published in article form. It can be accessed here. Be sure to check out the other articles from the conference as well; many people had interesting opinions and presentations.

Thanks to my thesis advisors and everyone on the EMS team for their feedback on the presentation and the article.

Categories
News Research

Back home & grant awarded

Yesterday I came back home from the world’s first international conference on mixed music pedagogy at McGill University in Montreal, Canada. I presented some of my doctoral research as part of the conference and received a lot of praise and support.

Recently, I have also been awarded a very generous grant from the Norwegian Composers’ Fund. With this grant I will now be able to work on my new commissioned piece Quasar for 12 musicians and live electronics. The piece will be premiered in November 2019 by the Trondheim Sinfonietta, and the premiere will be accompanied by a workshop for musicians who want to learn more about playing with electronics. I will soon start posting about the composition process, as well as about work on other compositions.

Categories
Research

String Quartet Session #3 – 28th of May 2018

Session length: 2 hours

Location: Olavskvartalet Studio

Pieces: 60 Loops by Pierre Jodlowski

 

Between the last session and today, the two studio technicians and I have spent several hours troubleshooting what went wrong last session. This included four hours in the studio in which several routing issues were fixed, and we also found that a few lines carry extraneous noise. The only issue that presented itself during this session is that if the secondary Mac (used for processing) goes to sleep, the Focusrite software can change where the word clock comes from, so no sound is sent out. This cost me about 15 minutes, but the problem is now well understood and noted. The only other remarks are that one member arrived about 30 minutes late, and that we had to end 20 minutes early to go through a composition for a friend of one of the members. The session was recorded with both audio and video. The video was shot with one of NTNU’s older cameras and is therefore not of great quality, but it is still helpful.

 

An interesting turn of events is that two members of the quartet have mentioned wanting longer rehearsal times. This was my original wish, but it seemed impossible because of their schedules, so I find it a very positive development. As they have said themselves, they are more used to these types of longer rehearsals and feel we could master the material better and progress more quickly. They also mentioned this because they feel we have not yet mastered 60 Loops. Related to this, the question of efficiency in rehearsals was brought up, since there is no natural leader in the quartet (myself included). This is an interesting note, as several of them are used to playing in bigger ensembles where the conductor naturally leads. It is something to take into consideration later, and we have agreed we must discuss it.

 

Now, onto the music! The session once again concentrated only on 60 Loops with tape, as the quartet needed to be refreshed on it. Several members mentioned that they feel they cannot rehearse this piece without the full quartet and the electronics. This is an interesting observation, and it would also mean that Jodlowski has managed to create a piece in which the whole is greater than the sum of its parts. It does, however, also mean that the quartet is more dependent on these full rehearsals to get through the piece. Another observation made during the rehearsal was, once again, that when playing with headphones they feel much more as if they are alone and have difficulty building chemistry together. While discussing this point further, several members of the quartet mentioned that they would really like a monitor playing the tape in the room. This presents a few logistical problems in the studio (because of its set-up) and for the recording, but I am determined to try it out to see how they react to it compared to the headphones.

 

Another issue that was mentioned is the monitoring situation. In 60 Loops, the monitoring is done in two different stereo pairs: channels 3-4 carry the click track while channels 5-6 carry the stereo playback (channels 1-2 are the sum of what is in the control room, which is muted on their Avioms). This allows them to set up the levels as they wish. However, in both main sections of 60 Loops, the playback starts off at a very low amplitude and ends very loud, and the musicians feel they do not have time to change the settings on their Avioms along the way. During the session I rode the fader to give them more level. I would normally have simply added a compressor; however, latency is a big issue, so I will instead create another, heavily compressed version of the playback for the musicians. They made it clear that they do not need the dynamics of the playback, only its rhythm, to keep the correct beat and especially to know where the 1 is.
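Purely as an illustration (not the actual studio chain), the kind of offline “squash everything” pass I have in mind could look like the following Python sketch, using numpy and soundfile; the file names, the 50 ms window and the target level are made-up values.

    # Flatten the dynamics of the playback file so the rhythm is always audible on the Avioms.
    import numpy as np
    import soundfile as sf

    audio, sr = sf.read("playback.wav")          # hypothetical file name
    if audio.ndim == 1:
        audio = audio[:, None]                   # treat mono as (frames, 1)

    win = int(0.05 * sr)                         # 50 ms analysis window
    frames = len(audio) // win
    env = np.empty(frames)
    for i in range(frames):                      # RMS envelope per window, all channels together
        block = audio[i * win:(i + 1) * win]
        env[i] = np.sqrt(np.mean(block ** 2)) + 1e-9

    target = 0.25                                # constant loudness target
    gain = np.clip(target / env, 0.0, 8.0)       # cap the boost at roughly +18 dB
    gain = np.repeat(gain, win)                  # back to sample rate (coarse, stepwise)
    gain = np.concatenate([gain, np.full(len(audio) - len(gain), gain[-1])])

    out = np.clip(audio * gain[:, None], -1.0, 1.0)   # crude safety limiter
    sf.write("playback_comp.wav", out, sr)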

 

The section that was most practiced is the second one, in 5/8. They still find this section easier because of its groove. We practiced it both with and without a click track. They specifically asked me to put the click back on at bar 226 to make the end easier. One musician went as far as to say that it is not possible without a click track. This is quite interesting, since they have managed to play it before without one. I suspect that they are simply a bit rusty because it has been a long time since the last proper rehearsal. Although I will comply with their wishes at first, I suspect it will not be necessary in the long run.

 

It was also interesting to note that they at first had a lot of trouble at bar 219. Playing the tape to them from that section, with a click, about five times made them understand exactly where everything falls. Afterwards, the section was not problematic to play.

 

Afterwards, we rehearsed the first section for about 30 minutes. It is still this part that is the most difficult for the musicians. It is also clearly the section with the most rhythmical illusions, the ones that make the listener feel like the 1 has shifted place. In this section, they seemed to perform better without the click track; while playing with the click track, they would often fall a bit behind the beat. It was nevertheless clear that they weren’t comfortable with this section. They often rushed it a bit, but, as one musician mentioned, it is difficult to play the first section right after the second, since its tempo is much lower and more laid-back.

 

In the two days since the session, I have also been listening to parts of it with the filmed video synced to the Reaper session. Listening back, a few things strike me as challenges for recording this piece, compared to a concert. In a concert situation, the audience gets the immediate feeling of the quartet in front of them. They hear its energy, its acoustic sound and possibly its dry sound through the PA system as well, depending on the venue. Because of this energy, I am led to believe that the balance between the acoustic quartet and the tape part becomes slightly less important. In a studio recording, however, that balance is much more critical. The tape part of 60 Loops is heavily compressed and becomes incredibly busy, which forces an engineer to use techniques generally associated more with popular music than with classical recording.

 

Another aspect is having to match the reverb between the tape and the dry quartet. In a concert situation, most engineers would just send both to the same reverb unit even though the tape already has some reverb. In the studio, this balance is subtler, and I mainly want to match the reverb that is on the tape. To me it sounds like a small-ish hall with somewhat damped acoustics. I am currently trying to create a similar effect using the IRCAM SPAT. However, all the other aspects of the recording done in our studio also influence the sound. Therefore, it will never be completely possible to match them (and it is perhaps not artistically relevant either!).
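SPAT is an algorithmic room simulator, so the following is only a rough stand-in for the idea of matching the tape’s reverb: convolve the dry quartet with a small-hall impulse response and mix it in at a low level. Everything here (file names, the impulse response, the mix amount) is assumed purely for illustration.

    # Approximate the reverb already printed on the tape by convolving with a small-hall IR.
    import numpy as np
    import soundfile as sf
    from scipy.signal import fftconvolve

    dry, sr = sf.read("quartet_dry.wav")         # assumed mono for simplicity
    ir, sr_ir = sf.read("small_hall_ir.wav")
    assert sr == sr_ir, "resample the IR first if the rates differ"

    wet = fftconvolve(dry, ir)[:len(dry)]        # truncate the tail so lengths match
    wet = wet / (np.max(np.abs(wet)) + 1e-9) * np.max(np.abs(dry))   # match peak levels

    mix = 0.2                                    # low wet level: only matching the tape, not drowning it
    out = (1.0 - mix) * dry + mix * wet
    sf.write("quartet_matched.wav", out, sr)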

 

Both of these aspects are also very relevant when redesigning the electronics with different techniques, especially from the bottom up. When using physical modelling, should I also compress the sound heavily? Is that possible to do in real time alongside the generation without maxing out my CPU? These are very relevant questions that, in many ways, have more to do with the production side than the technical side. Another relevant question is whether the electronics should be designed for concert use or studio use.

Finally, this is a piece that I had only listened to on CD before playing it with the quartet; I have never seen a YouTube video or a live performance of it. Watching the video while listening, I felt a strong dissociation between what we hear and what we see. This is especially true in the sections where many loops are playing over each other: the quartet might be playing a single note, yet we hear many things in the background. Because the electronic sounds come from exactly the same kind of quartet and are not heavily processed, the listener can experience a certain amount of cognitive dissonance.

Categories
Research

String Quartet Session #2 – 11th of May 2018

Session length: 2 hours

Location: Olavskvartalet Studio

Pieces: 60 Loops by Pierre Jodlowski

 

This session was plagued with problems from its first moments, and after about two hours we decided simply to stop. Some of these issues were out of my control, while others were my own fault and will be fixed as promptly as possible.

 

There had already been some schedule changes, so getting the studio at all was a bit down to luck. Problems started as soon as I got into the studio: the cable linking my own laptop to the main system was missing, as was the main pre-amp. They had been moved a few days before for tests and not yet re-installed. The engineer also didn’t arrive until the agreed time slot, and even then a bit late. Having a full quartet wait 20 minutes while we set up is not a good start to any session.

 

The second problem was internal routing. The studio system is explained in another post, but for some reason we had trouble getting the routing right this time. Once it was set up, the main problem was monitoring. Some of the problems stemmed from the monitoring equipment, which is starting to get old: one of the Aviom units started making noises, so we had to split one signal between two quartet members. The next monitoring issue was that the channel numbering between the HDX system and the Avioms did not correspond. At first we tried to do the rehearsal with only the master channel, but this did not work very well for the quartet, as they couldn’t get the metronome as loud as they wanted relative to the tape parts.

 

The third problem was sync issues between the click track and the tape parts. The quartet were the first to hear this, and it was small, but after a while one could hear that the two weren’t completely in sync. I’m not completely sure why this problem arose, but I suspect it has something to do with differing sampling rates between some of the files. I will therefore be converting everything to consistent sampling rates (44.1 and 48 kHz) to avoid further issues. Because of all these problems, the general mood in the studio was rather negative, and that never helps.
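As a sketch of that clean-up step (assuming Python with the soundfile and resampy packages, a hypothetical tape_files folder, and 48 kHz as the example target rate), the batch conversion is no more than this:

    # Convert every tape/click file in a folder to one common sampling rate.
    import os
    import soundfile as sf
    import resampy

    TARGET_SR = 48000
    for name in os.listdir("tape_files"):
        if not name.lower().endswith(".wav"):
            continue
        path = os.path.join("tape_files", name)
        audio, sr = sf.read(path)
        if sr != TARGET_SR:
            audio = resampy.resample(audio, sr, TARGET_SR, axis=0)   # resample along the frame axis
            sf.write(path, audio, TARGET_SR)
            print(f"{name}: {sr} Hz -> {TARGET_SR} Hz")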

 

Out of the little playing we managed to do, a few interesting observations came from the quartet. The first was a comment on playing together. Monitoring was done over headphones, which made the quartet a bit more uncomfortable. Even disregarding the other issues, I could hear in the recordings that they weren’t as comfortable together, or as rhythmically tight as usual. One player described it as feeling that she was playing alone even though they were sitting beside each other, because she felt she could only listen to her headphones. The other players agreed unanimously. They felt it would be better once the monitoring issues were fixed, so that each of them could choose exactly what they wanted to hear. They did, however, say that playing with headphones will take some getting used to. I asked them whether they would actually prefer a PA system (or a typical concert monitoring setup), to which they answered that they would prefer learning to use the headphones. I’m slightly uncertain whether they meant that, or were perhaps just being shy.

 

Because of the problems, we had some discussions about which part of the composition to rehearse in order to optimize our time. This led us to discuss their own perception of how well the electronics are elaborated and how the piece is written. This is not meant as a critique of the piece, but as the musicians’ perception of how the writing and electronics influence them. They found that the first section in 7/8 is more difficult, especially because of how the electronics start on off-beats, while the 5/8 section starts off much more easily, with the first loop falling on the 1. At the same time, I understand that this playfulness is really the whole point of Jodlowski’s piece. They also felt that the 5/8 section is easier to play without any click track cues because of the inherent “groove” in what they are playing. In effect, they compared this last section to a groove quite close to some rock music, without even knowing of Jodlowski’s interest in rock music.

 

A camera had also been borrowed to film the session, but there was no point in using it since the session never really got going. However, the project now has the possibility of borrowing a camera to film future sessions for later study.

 

We also discussed a bit what changing the synchronization method would be like, and how to do it. I had expected to be able to test a first version of the score following, with Antescofo triggering the tape, but it wasn’t possible then and there because of the problems (although it will be possible to test it on some of the recordings). One player mentioned that it should at least be better than the current situation, referring to a third party triggering the tape. This was an interesting observation, which the other players did not directly respond to. It is an idea to bring up again after testing other synchronization methods, to see their reactions and what they personally prefer.

 

A final discussion took place on our plans for moving forward. The quartet thought it would be a good idea to start rehearsing a second piece already, so that we can perhaps soon book a few short concerts as tests for ourselves and have something to work towards. The idea was mainly to have a piece that contrasts with 60 Loops and its rhythmical focus. This also suits my own research, as I don’t need many different pieces testing the same concepts and techniques. It also gives me an opportunity to look into my database of compositions and choose a few that might be interesting before the quartet votes on what to play.

Categories
Research

A few technical aspects of the string quartet sessions

In a lot of the literature on mixed music, I have found the level of information on the technical and aesthetic aspects of recordings to be quite lacking. A lot of time is spent explaining why the electronics are on tape or live, etc., but very little about how to combine the two in, for example, a studio production. Having worked as a mixing engineer, FOH engineer and stage hand, this perplexes me quite a bit. Both researchers and composers have often left these aspects to the mercy of whatever house technician happens to be there, with greatly varying results.

 

Therefore, as part of my research I find it important to bring several aspects of production to the fore. There is of course a large discrepancy between the concert and the studio, but the practical and aesthetic aspects of the production itself still play a large role in the final product. This post does not go through all of the possible combinations or the aesthetic-versus-practical arguments. The point is simply to document the current set-up as it stands, for the sake of clarity but also to start addressing certain issues.

 

The studio at the university is becoming quite versatile, with a new patchable system as well as the possibility of connecting remotely. I won’t go into the details of the system, as they are not part of my research and I could not explain them as eloquently as many of my colleagues. An important aspect, however, is that it seamlessly allows me to patch the pre-amps both to my own laptop, which runs the electronics, and to the studio computer. This gives me the extra security of having the recording on a second computer, in case of jitter, for example. The set-up would also easily allow me to connect a further laptop for a back-up recording if wanted.

The current set-up is like this:

One note on the current diagram: 16 pre-amp channels will go through the DAD to my laptop, just in case I need more lines, and the same will be done with my computer’s outputs. I will also add a stereo patch from the Pro Tools system back to my computer, so that I can easily run processing tests then and there in the playback room. This can be especially useful for demonstrating an effect to the musicians, for example. In effect, I will have 16 ins and outs between my laptop and the Pro Tools system. Monitoring is also easily routed in software to the Aviom a16-ns, which lets each musician create their own mix if wanted. The musicians will not necessarily use monitoring, as that will be at their discretion; in my experience, it really depends on the piece for most musicians.

 

This system is flexible and easy to re-patch, since it is done with the Dante Virtual Soundcard software. It also allows me to export a stereo output, with additional “processing spots” if wanted; for example, I could send raw sound from the physical modelling software back to the studio computer to be mixed in later, or any other combination. This makes it robust, and useful both for research and for actual productions.

 

On the recording side, the system will be quite simple. The rehearsal room used is quite small and designed for anything from acoustic music (mainly jazz) to popular music, so it is relatively dry, though not dry to the point of being uncomfortable for most string players. The recording rig can essentially be separated into two entities: one to make a production, and one to be used for processing. There are several reasons for this, the first being that a closer sound is often better for processing, and much better for the signal analysis used in score following. The spot sound can also be used aesthetically for the processed sound, for example by changing where the spot mics sit on the instrument to get a different colour. Another reason is to minimize bleed between the processing mics; in a concert situation, this also helps minimize the risk of feedback. The typical microphone to use for this is a DPA 4009; however, the university currently only has one. At this point it is on my to-buy list if the quartet receives any grants. Otherwise, I will have to look into borrowing some for rehearsals.

 

The second rig is a proper recording rig. This will most probably be an AB pair with omni capsules to capture all four instruments well, as shown in King (2017). It will probably be a bit on the dry side, to hear the details of the performance, with reverberation added afterwards. The natural reverb of the room isn’t the most flattering for string instruments, but it isn’t disturbing either.

 

Additionally, I will try to film the sessions with a single camera on the string quartet to have a visual reference when thinking of which synchronization techniques make them most comfortable or uncomfortable. This will also give a clearer picture (no pun intended) of how the rehearsals are going.

 

Bibliography:

King, R. (2017). Recording orchestra and other classical music ensembles. New York, NY: Routledge.
Categories
Research

String quartet piece #1 – 60 Loops Update

In the few weeks since the first string quartet rehearsal of Jodlowski’s 60 Loops, I have been working on the different versions of the electronics (see the rehearsal notes for more information).

 

The main issue has been programming the versions with a score follower (in this case Antescofo). The reason for choosing 60 Loops as the first piece was the idea that it was “simple” both from a computational and a musical point of view. The emphasis on rhythm also makes it an ideal piece for building a certain chemistry between the quartet players, as well as just having fun. Rhythmical illusions are often a blast to rehearse with other musicians, and in our case it hasn’t been uncommon to break out into laughter together. This is something I’ve also experienced myself as a drummer while rehearsing polyrhythms and syncopations. However, after starting to program the piece, it turns out to be really quite far from simple. So far, I have well over 3500 lines of code, and it is far from finished.

 

Although many of the code lines are quite simple because there is not that much going on, it is demanding for the computer to understand exactly where we are. An issue score following has often struggled with is note repetition. In the early days, some of the writing had to be adapted in order to be followable (Puckette & Lippe, 1992), and Stroppa’s (1999) critique has often been based on this concept as well. In a case such as this one, it is not possible to change the music; the techniques must instead be adapted to the music, which presents its own set of interesting challenges.

In the last post about 60 Loops I had defined two different methods using score following:

  • Score following + files
  • Score following + live generation (by physical modelling)

The first method is rather straightforward. The main issue will be to see how Antescofo reacts to such repetitive music and whether the computer will be able to follow it. During the next rehearsal we should have enough time to try it a few times. Although Antescofo has problems with polyphony, I use a selector~ object to route the correct voice of the quartet to the follower; at several points in the composition, which voice is analyzed shifts. This is a technique I have used previously in my piece “Solace”, and it has been unproblematic. It was also a question brought up by another student during my training on Antescofo at IRCAM. It would also be possible to run several instances of Antescofo, but that would use more CPU and generally doesn’t seem necessary for the electronics I have worked on so far.
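Conceptually, the selector~ routing amounts to nothing more than a lookup from position in the score to the voice being followed. A trivial sketch of that idea (in Python, with invented bar numbers that are not the actual switch points in 60 Loops):

    # Which player Antescofo listens to, by bar; the bar numbers are placeholders.
    VOICE_SWITCHES = [
        (1,   "violin 1"),
        (52,  "cello"),
        (147, "viola"),
    ]

    def voice_to_follow(bar):
        current = VOICE_SWITCHES[0][1]
        for start_bar, voice in VOICE_SWITCHES:
            if bar >= start_bar:
                current = voice
        return current

    print(voice_to_follow(60))   # -> cello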

The second method is proving to be slightly more difficult. The idea was to use physical modelling so as to give the music a more “live” feeling when it comes to tempo, but also velocity, etc., and ideally to bring the piece closer to Manoury’s idea of real-time music (Manoury, 1997). I am starting to develop a system with poly~ to distribute the needed information across different instances, but this could mean 40 x 4 instances running at the same time in the worst case, which is perhaps not realistic CPU-wise at this point. This will have to be thoroughly tested, or an even smarter system created. Sending the needed information to the different instances is rather easy with a sprintf object building “%i_PARAMNAME” addresses from a thispoly~. However, finding a way to combine different instances to save CPU is probably more realistic. And then comes the question: how much time should one spend on the electronics of a single piece when it can be done easily with tape?
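I have not settled on how to do this in Max yet, but one common way of capping the cost, which may or may not be what I end up with, is a fixed-size voice pool with voice stealing: never run more than N instances, and reuse the oldest one when a new loop needs a voice. A language-agnostic sketch of that idea (written in Python, with made-up numbers):

    # Cap the number of simultaneously running synthesis instances.
    class VoicePool:
        def __init__(self, max_voices=16):
            self.max_voices = max_voices
            self.active = []                        # oldest first: (loop_id, params)

        def allocate(self, loop_id, params):
            if len(self.active) >= self.max_voices:
                stolen_id, _ = self.active.pop(0)   # steal the oldest voice
                print(f"stealing the voice of loop {stolen_id}")
            self.active.append((loop_id, params))

    pool = VoicePool(max_voices=16)
    for loop_id in range(160):                      # worst case: 40 loops x 4 voices
        pool.allocate(loop_id, {"bow_pressure": 0.5})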

 

This has led me to think of a combination of the two methods. What if I combine live generation via physical modelling with looping? For example, five different loops could be generated live, while any others are recorded and then looped automatically. This would combine some of the positive and negative aspects of all the synchronization methods discussed. However, it introduces one major problem: temporal discontinuity.

 

Antescofo anticipates and follows the musicians, which is its greatest strength. This allows parts to be tightly synchronized with the performance at hand. In this case, however, if there is a larger tempo difference between a few sections (I’m thinking especially of the large section on the second sound file), there could be temporal discontinuities between the live generation and the loops. With 100% live generation this is not an issue, since everything would be generated then and there at the current, correct tempo. But since we now reintroduce loops, small discrepancies could grow larger over time.
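A quick back-of-the-envelope calculation shows how fast this adds up. A loop that has been recorded as audio keeps its original duration, so any difference between the tempo it was captured at and the tempo the quartet is actually playing accumulates with every repetition. All the numbers below are invented for illustration, not the actual tempi of the piece:

    # Drift between a fixed-duration recorded loop and a live part whose tempo has moved.
    recorded_bpm   = 88.0      # tempo (eighth-note pulse) at which the loop was captured
    live_bpm       = 87.0      # tempo the quartet has settled into a little later
    beats_per_loop = 20        # e.g. four bars of 5/8 counted in eighths

    loop_duration = beats_per_loop * 60.0 / recorded_bpm   # fixed: it is just audio now
    live_duration = beats_per_loop * 60.0 / live_bpm       # what the quartet actually takes

    for repetition in range(1, 9):
        drift_ms = abs(loop_duration - live_duration) * repetition * 1000.0
        print(f"after {repetition} repetitions the loop is {drift_ms:.0f} ms out of phase")

With a one-bpm drift, the loop is already more than a second out of phase after eight repetitions.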

 

A solution to this could be to make Antescofo stricter with the tempo of the electronics, and more separated from the musicians than one would normally want. However, the effect this would have on the musicians is what we should concentrate on. This is another aspect of the score following I will have to figure out and try for this piece. If anything, it shows that there are always many ways to organize the electronics of a piece, and that they should work around the composition at hand. There truly is no “tried and true” method, only what works for the music.

 

Bibliography

Manoury, P. (1997). Les partitions virtuelles. Retrieved from http://www.philippemanoury.com/?p=340

Puckette, M., & Lippe, C. (1992). Score following in practice. In Proceedings of the International Computer Music Conference (pp. 182–182). International Computer Music Association. Retrieved from http://www.music.buffalo.edu/sites/www.music.buffalo.edu/files/pdfs/Lippe-SanJose.pdf

Stroppa, M. (1999). Live electronics or…live music? Towards a critique of interaction. Contemporary Music Review, 18(3), 41–77. https://doi.org/10.1080/07494469900640341

 

Categories
Research

String Quartet Session #1 – April 20th 2018

This was the first time the quartet played together as a quartet. Most of the players have played together before in other capacities, but never as a quartet. Since this was our first rehearsal, it was not held in a studio and was neither recorded nor filmed. The only piece rehearsed was Pierre Jodlowski’s 60 Loops.

 

Session length: 2 hours

 

We first tried to play the piece straight through with the tape, but the musicians found this too difficult, especially around bar 43 where they begin to play on off-beats. We therefore started by rehearsing the piece without any tape, using a metronome instead.

 

Rehearsing the piece part by part was much easier for the musicians. We often had to stop to discuss which sections are tutti and which are played against each other. After having rehearsed most of the section that uses the second tape file, we rehearsed it several times with the tape. It was easier for the musicians with the tape once they knew exactly what they were going to play, and many of their reactions pointed to their need to know the tape better. It was also commented several times that the musicians are in trouble if they miss a single beat. Nevertheless, in several playthroughs the musicians fell off for a few bars but often managed to come back on the beat, sometimes with my help (I would often mark the first beat of new sections) and sometimes by themselves.

 

The second section, where the third tape part starts, at first had to be played at a slower tempo, since the 88 bpm is quite quick. It was interesting to note that they really struggled with the metronome here, but one of them suggested that we try it with the tape more quickly, as “this section has a groove”. When we first played it with the tape, it actually went better than with the click track.

 

One of the interesting aspects of today’s rehearsal was the way we used the click track. At first I would count for them and show them where the 1 is, etc. This could quickly be dropped, although they still appreciated being shown where the 1 is at the start of new sections. For the second part of the tape, they often found it easier to play with the tape AND the metronome. However, in certain sections, especially later in the piece, they often found it difficult to hear the metronome. This is an issue that will be resolved with proper monitoring, since we weren’t in a proper studio for this rehearsal. For the third section of the tape, it was more difficult for them to play with both tape and click track, while the tape alone seemed easier for all of the members.

 

There was also a noted difference in the type of click track wanted: some members found it easier to have a higher pitch on the 1, while others preferred a lower pitch. This was quite an interesting aspect which I had not expected at all. To me, as a drummer and studio engineer, having a lower pitch on the 1 feels natural, as it is also often the default in DAWs like Reaper, Pro Tools, etc.
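To test both preferences quickly, I can simply render two versions of the same click track, one with a higher-pitched 1 and one with a lower-pitched 1. A small sketch (Python with numpy and soundfile; the meter, tempo and pitches are just example values, not those of 60 Loops):

    import numpy as np
    import soundfile as sf

    def click(freq, sr=48000, length=0.03):
        # A short, windowed sine tick.
        t = np.arange(int(length * sr)) / sr
        return np.sin(2 * np.pi * freq * t) * np.hanning(len(t))

    def click_track(beats=7, bpm=176, accent_high=True, bars=8, sr=48000):
        # The downbeat gets either a higher- or a lower-pitched tick than the other beats.
        accent, regular = (1500.0, 1000.0) if accent_high else (700.0, 1000.0)
        beat_len = int(60.0 / bpm * sr)
        out = np.zeros(beat_len * beats * bars)
        for b in range(beats * bars):
            tick = click(accent if b % beats == 0 else regular, sr)
            start = b * beat_len
            out[start:start + len(tick)] += tick
        return out

    sf.write("click_high_one.wav", click_track(accent_high=True), 48000)
    sf.write("click_low_one.wav", click_track(accent_high=False), 48000)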

 

Another aspect mentioned earlier is whether they wanted me to count or not. It was not clear whether it was my counting as such, or my counting in French, that disturbed them. I also have no training as a conductor, and therefore only accented where the 1 is, nothing else.

 

So far, the most problematic sections have been bars 52, 81, 164, 168, 215 and 219. There is a clear correlation with the two aspects the musicians have found difficult: off-beats and passages where they are not playing tutti.

 

During the session we also discussed a few of the other synchronisation methods I have planned. I do not think they understood specifically what score following involves. It also seems that although they see severe limitations to the tape, such as having to start from specific points and feeling as if you are on a train you cannot get off, they still like this synchronisation method. It seems to give them a certain assurance that whatever happens, it is on them, not the electronics: they have control of what they are doing, and the tape just continues no matter what.

 

The thought of one of them having to control a sustain pedal did seem a bit scary at first, but this might have been a false impression. We will test this out at the next rehearsal to see if it affects them.

 

We have also discussed making an app so that they can rehearse the composition with the tape at home, which makes things much easier. The first version of the app currently looks like this:

It allows them to practice at home and to trigger the tape either from the keyboard or with a MIDI sustain pedal. The GUI will eventually have to be improved.
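The app itself is a Max patch, but the core logic is tiny. Purely as an illustration (assuming the mido, soundfile and sounddevice Python packages, and hypothetical file names for the three tape files), the pedal-triggering part boils down to this:

    # Each press of the sustain pedal (CC 64) fires the next tape cue.
    import mido
    import soundfile as sf
    import sounddevice as sd

    CUES = ["60loops_tape1.wav", "60loops_tape2.wav", "60loops_tape3.wav"]   # hypothetical names
    cue_index = 0

    with mido.open_input() as port:              # first available MIDI input
        print("waiting for sustain pedal...")
        for msg in port:
            pedal_down = (msg.type == "control_change"
                          and msg.control == 64 and msg.value >= 64)
            if pedal_down and cue_index < len(CUES):
                audio, sr = sf.read(CUES[cue_index])
                sd.play(audio, sr)               # returns immediately, playback continues
                print(f"triggered cue {cue_index + 1}")
                cue_index += 1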

Categories
Research

String Quartet Piece #1 – Pierre Jodlowski’s 60 Loops

Pierre Jodlowski is a well-known young composer who has won several prizes, received important commissions, and been performed by prestigious ensembles such as the Ensemble Intercontemporain and Ensemble Les Éléments. He was one of Philippe Manoury’s students many years ago.

 

60 Loops is a piece he wrote in 2006 which was commissioned by Compagnie Myriam Naisy. He describes the piece as:

“the opportunity to approach the music world of Steve Reich and give to musical time a funny and relentless meaning. From Steve Reich, I borrowed the principle of repetition, but it is compounded by a stacking principle up to 40 quartets that play simultaneously.” (Jodlowski, n.d.)

The piece works on the concept that what is played by the physical string quartet is then looped by a prerecorded soundtrack. This results in a very playful piece with lots of rhythmical illusions and syncopation. Towards the climax, the texture created is incredibly complex and interesting.

 

In the context of this research, the interesting aspect is first to ask why Jodlowski made this piece with a soundtrack system. The electronics are separated into three files: the first is simply an introduction, the second is the main section of the piece, lasting from bar 11 to 147, and the third runs from bar 152 until the end of the piece.

 

What is the main reason for cutting the tape into three files? When trying to play the piece, it becomes quite apparent. The first file is used as an introduction, after which the second violinist starts the piece. The second file is played as soon as the first loop (bars 6 to 11) is finished, starting the looping process from there. Later, there is a break of three measures of 4/4 before a new set of loops starts in 5/8 at a higher tempo, and the third file is started after this first new loop series (bars 147 to 151) and runs until the end.

 

As I see it, there are two main reasons for separating this into three files. The first is that a single file would have made the piece even more difficult for the performers: the second file alone is over six minutes of playing perfectly in time, mainly in 7/8 (except for the last section, which varies between 3/8 and 6/8, though the tape does give a clear sense of the downbeats).

 

The second reason, I imagine, is minimization of risk. Because of the chosen synchronization method, there is a rather high risk of problems: if the musicians come out of time with the tape, there is nothing the sound designer can do then and there. Separating the tape into three files limits the risk to a certain extent, especially across the two main sections, which change from 7/8 to 5/8.

 

When I discussed this research project with Jodlowski, he was skeptical of possibilities other than the tape; he thinks other methods won’t work rhythmically with the ensemble. This is also exactly why I chose this piece as the first example for the string quartet. Very little of the repertoire for string quartet and electronics is so rhythmical in a Reich sort of way, yet still very interesting. It presents different challenges from many other pieces, which rely more on, for example, processing or loose temporal events and density. If something doesn’t work in this piece, everything falls apart rather quickly, since it is composed as two build-ups. That is exactly why I thought it would be a great piece to start the string quartet off with: it provides us with a completely different challenge from most other pieces, and it’s a very fun piece to play.

 

Way Forward / Synchronization Possibilities

At this point, I have thought of the following possibilities for this piece. Please note that this list is not exhaustive, nor do I believe that every one of them will work. However, it will be very interesting to explore which strategies make the musicians more comfortable and which work temporally. It could also be interesting to compare the final recordings across these techniques, as Wing et al. (2014) did with a short excerpt of a Haydn string quartet to evaluate asynchrony in the playing of two quartets.

  • Original method: tape separated into three files triggered by the sound engineer.
  • Tape + pedal: the same three files, but triggered via a MIDI pedal by one of the musicians. We could also switch who has the pedal; who leads the piece rhythmically also varies throughout the composition, making this method very interesting to try out.
  • Score following + files: the score following will be done with Antescofo and will follow only a single voice at a time, since its polyphonic capability is rather limited. However, switching which voice to follow throughout the piece is not a problem. With this method, the computer plays the files automatically at the correct point.
  • Score following + live generation: combining Antescofo with physical modelling software to generate the loops in real time. The software used is the SWAM engine created by Audio Modeling. This method also allows a greater number of parameters to change from performance to performance, such as the amount of pressure on the virtual bows.

 

The rehearsals with the string quartet will also be starting this week. It will be highly interesting to see which strategies make them more or less comfortable. These rehearsals will be written about in depth on this blog. Hopefully it will also be possible to post video and audio snippets.

 

I would also like to take the time to thank Pierre Jodlowski for being positive about this project. You can download the score and parts at his official website. 60 Loops is also available on the excellent album Cumulative Music (2011).

 

Bibliography

 

Wing, A. M., Endo, S., Bradbury, A., & Vorberg, D. (2014). Optimal feedback correction in string quartet synchronization. Journal of The Royal Society Interface, 11(93), 20131125–20131125. https://doi.org/10.1098/rsif.2013.1125

Categories
Research Tools

A special type of amplitude modulation combined with the Leslie effect

This is an effect I wanted to try out for a composition I am currently working on. A sample is recorded from whatever source you choose. This sample is then looped and has some AM applied to it. At the same time, it spins around, recreating a Leslie cabinet effect. The frequency of the AM gives different results: under 10 Hz you get a type of spinning tremolo, while over 10 Hz you get your original signal with added sidebands. The code also includes the possibility of making the loop move from one position to another.

 

This is a basic set-up which I have since modified for specific situations, but I thought it might be useful for some people.

The code, as always, is written in Antescofo but shouldn’t be too hard to port to another language. I have included a Max patch as well for the sake of completeness, but it requires both the Antescofo and SPAT externals from IRCAM to function. However, you could replace SPAT with any spatialization external you wish.

Code is here

MaxPatch is here
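For anyone without Max or the IRCAM externals, here is a rough offline sketch of the same idea in Python (numpy and soundfile): loop a sample, apply the AM, and pan it in a circle. Simple equal-power stereo panning stands in for SPAT, and there is no Doppler or filtering, so it is only an approximation; the file name and rates are example values.

    import numpy as np
    import soundfile as sf

    sample, sr = sf.read("source_sample.wav")    # hypothetical source sample
    if sample.ndim > 1:
        sample = sample.mean(axis=1)             # work in mono, spatialise afterwards

    loops = 8
    x = np.tile(sample, loops)                   # the looped sample
    t = np.arange(len(x)) / sr

    am_freq = 6.0                                # < 10 Hz: spinning tremolo; > 10 Hz: sidebands
    x = x * (0.5 + 0.5 * np.sin(2 * np.pi * am_freq * t))

    leslie_freq = 0.8                            # rotations per second
    angle = 2 * np.pi * leslie_freq * t
    pan = 0.5 * (1.0 + np.sin(angle))            # 0 = hard left, 1 = hard right
    left = np.sqrt(1.0 - pan) * x                # equal-power panning on a circular trajectory
    right = np.sqrt(pan) * x
    sf.write("am_leslie.wav", np.stack([left, right], axis=1), sr)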

Categories
Research Tools

A tool to calculate and see Nørgård’s infinity series

Per Nørgård is a Danish composer with an interesting catalogue. Part of his compositional style is defined by what he calls the infinity series, an integer sequence that he has used for intervals, rhythms, etc.

Here is the code for an implementation in Antescofo within Max/MSP. The code is rather simple and could probably be optimized. I’ve created two different “modes”, or methods of using the integer sequence. One thing I have added is a version that uses the Bach external (check it out), which lets us visualize the series better than the default Max objects do.

Another added element is the possibility of distorting the series by a specific factor, which creates interesting melodies.

With a few changes, the code would be easy to adapt to any other programming language, as well as to use in many different ways. If you find any bugs, just comment here and I’ll try to fix them.

Download the patch here: infinity_V1
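For reference outside Max, here is a minimal sketch of the series itself in Python. The recurrence is the standard one (s(0) = 0, s(2n) = -s(n), s(2n+1) = s(n) + 1); the distortion function is only my simplified reading of the idea in this post, scaling each value by a factor.

    def infinity_series(length):
        # Nørgård's infinity series: s(0)=0, s(2n)=-s(n), s(2n+1)=s(n)+1.
        s = [0]
        for n in range(1, length):
            s.append(-s[n // 2] if n % 2 == 0 else s[n // 2] + 1)
        return s

    def distort(series, factor):
        # A simplified "distortion": scale every value and round back to integers.
        return [round(value * factor) for value in series]

    print(infinity_series(16))   # [0, 1, -1, 2, 1, 0, -2, 3, -1, 2, 0, 1, 2, -1, -3, 4]
    print(distort(infinity_series(16), 1.5))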