From Mon 1.06.09 to Fri 31.07.09 (all day)

Mix[ing]redients

Mix[ing]redients is an interactive dance-music performance. The initial idea was developed during a residency together with media artist Maria Karagianni at Nadine in Brussels. Since then various versions have been performed at festivals in Zagreb, Karlstad, Prague and Rotterdam.

The audience can influence the performance with their laptops and smartphones. A web application enables them to control the light that is coming from dia projectors and the sound that is generated by a solar sound module which is attached to the body of the dancer.

Mix[ing]redients is a project by the open mode collective.




30th and 31st of July
Both at 20:00 at Nadine Plateau, Brussels

On the 30th and 31st of July we presented the current state of our project to a small audience. We consider these two presentations experiments that give an insight into the kind of interaction and control our project could accomplish.

On the 30th of July we placed the buttons at the sides of the audience: the “sound button” on the left side, the “light button” on the right side. The light button was a button on a connector box which would switch the light on or off. The sound button consisted of a mouse with which you had to click on a box on a computer screen. What the sound button actually did was switch between the two different modes of the piece: the “rigid time” and the “free time”. The rigid time contained a sound composition which lasted exactly 6 minutes, built up out of 30 sequences each lasting exactly 12 seconds. These sequences could, in turn, be divided into 6 little blocks, which are literally labelled in the sound, ranging from A to F. So we hear a voice say “A” when block A starts. A, B, D and E would each last one second, C and F lasted two seconds, together forming one full sequence. 4 smaller radio speakers and one portable radio were used to broadcast the composition. Only briefly, near the end of the composition, were the larger speakers in the back of the theatre used.

The piece began when the sound button was first pressed and rigid time started. The dancer had fixed, rigid movements made to fit the composition. When the sound button was pressed again, the piece would jump to the free mode, in which the solar sound modules attached to the shoulder and leg of the dancer influence the sounds, now reacting to the light which is switched on and off by the audience members.

We told the audience they were free to control the buttons: any member of the audience could press either of the two. In practice two persons gathered near the sound button and two near the light button. The other members of the audience remained passive and observed the resulting performance.

After the performance we had a discussion with the audience. It was mentioned that it could be helpful to make some kind of visual or sonic representation of the progression of time, because the “free time” actually functions as an interruption of the fixed “rigid time” piece. If someone started the piece and then did nothing anymore, the piece would last exactly 6 minutes. Every x seconds of “free time” added to the piece extends the whole piece by exactly those x seconds. So the “free time” functions both as a pause within the “rigid time” and as an extension of the whole piece. It could be helpful to make this more transparent to the audience, so they are more aware of the performance's conditions. This could raise more responsibility and consciousness on the side of the (controlling) audience.
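The timing rule above can be sketched in a few lines of Python (an illustrative snippet of ours, not part of the performance software): the rigid piece is fixed at 6 minutes, and every second of free time extends the total by exactly that second.

```python
def piece_duration(free_time_interruptions):
    """Total running time in seconds: the fixed 6-minute 'rigid time'
    composition plus every second of 'free time' the audience inserts."""
    rigid = 30 * 12  # 30 sequences of exactly 12 seconds each = 360 s
    return rigid + sum(free_time_interruptions)

print(piece_duration([]))        # nobody intervenes: exactly 6 minutes (360 s)
print(piece_duration([20, 45]))  # two interruptions extend the piece to 425 s
```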

It was also suggested that the audience could use flashlights to aim at the solar sound modules on the body of the dancer instead of controlling the spotlight above the dancer. Then they would have more control over the behaviour of the light in relation to the body of the dancer (moving it around instead of only switching it on/off). Another suggestion was that the audience could also tune the radios to different frequencies in order to look for different sounds, which may be broadcast by the authors or some external broadcaster.

After this first try-out we both felt confused. We had a thorough talk about the project. We were reminded of a remark made the night before by friends of Eleni, who said that the piece seemed pre-composed. The idea came up to hide the buttons and the people controlling them. This gave us new inspiration.

On the 31st of July we made quite a radical change to our concept by placing the two buttons behind the stage. Two members of the audience were invited before the performance to position themselves behind the curtain, behind the stage, not visible to the other members of the audience. We did not mention what they were going to do back there. A camera was placed on stage so these two could watch the performance on the computer's screen, from where they would also click the button for jumping from one mode to another. The rest of the audience had to try to figure out what was going on.

It took relatively long before the performance started. It was interesting to notice that, in the course of the performance, the two controllers started to understand what their control actually entailed. The first half of the performance was a very unpredictable performance of novice users, while halfway through they became intermediate users.

Some thought that the whole piece was fixed. Peter mentioned that he was very actively watching to understand what was happening and who was in control of what. Also, two persons were stuck in traffic and came in halfway through the performance. They had already been present during some rehearsals, so they knew more or less what the whole thing was about. But, because it had previously been visible who was controlling the buttons, they were now completely confused about who was in control and what was actually happening. Peter also asked about the relationship between the movements and the letters.
Femke spoke about discovering the tools with which they have to play, and about reverse engineering. Paul mentioned the control-room cabin from where they could manipulate things. Femke also spoke about interruption rather than interaction, and noted that while one button - the light switch - is very binary and physical, the other could be more subtle and difficult to explore in terms of interface design. It could also be helpful to have some feedback from the video indicating which state/mode we are in at any time.

Because of all these different points of view and forms of control, Femke mentioned the “diversity of roles” during this performance. Spectators controlling light, spectators controlling sound, spectators just passively watching from the beginning, spectators coming in later. It seemed like a starting point for distinguishing different audience roles and places of interaction, i.e. site-specific interaction and even remote online interaction.

 

Python code for the dance notation and triggering sound in SuperCollider

 

#!/usr/bin/python

""" Connecting remote machines, one running Python, the other SuperCollider, via the internet with the OSC protocol: each time a new letter-movement is displayed on the screen, an OSC message is sent to SuperCollider triggering the corresponding sound sample. """

import pygame
from pygame.locals import *
import glob, sys, osc, random, time, os 
##############################
#DATABASE OF THE MOVEMENTS 
##############################
LEFT = -1
RIGHT=1
Sec=[0,1,2,3,3,4,5,6,7,8,8,9,10,11,11]        
Keys=["k",  "f1", "y", "y", "f6", "b", "m", "s", "d", "d", "u", "f7", "j", "j" ,";"]
keyProps = {
    
    "f1":    ('b','left hand up', LEFT, -7), 
    
    "b":    ('e', 'two steps backwards', RIGHT, 0),
    
    "d":    ('h','change of weight', LEFT, -1),
    
    "f6":    ('d', 'folded', RIGHT, 2),
    
    "f7":    ('j', 'turn', RIGHT, 2),
    
    "k":    ('a', 'turn right', RIGHT, 0),
    
    "j":    ('k', 'squeeze', RIGHT, 0),
    
    "m":    ('f', 'unfold and small arch', RIGHT, 0),
    
    "s":    ('g', 'knee up', LEFT, -1),
    
    "u":    ('i', 'elbows', RIGHT, 0),
    
    "y":    ('c','head down', RIGHT, 7),
    
    ";":    ('l', 'bend legs', RIGHT, 0)

}
#######################################################
#CLASSES FOR DRAWING TEXT AND ITS POSITION ON THE PAGE#
#######################################################
class Text:
    """represents a text based score of a movement""" 
    def __init__(self, name):
        self.name=name
        
    def loadName(self):
        surf=font.render(str(self.name).strip("[]_\'"), True, (230, 230, 230))
        return surf

class Stamp:
    """represents a specific mark on the page/combination of text with the position, and page"""
    def __init__(self, text):
        self.text=text.loadName() #pygame surface
    
    def set_pos(self, col, row):
        #list of valid positions
        self.pagex=width/2 + col*IMG_WIDTH
        self.pagey=height-((row+2)*GRIDHEIGHT)
        
    def draw(self, screen):
        screen.blit(self.text, (self.pagex, self.pagey))

class Page:
    """list of stamps"""
    def __init__(self, stampsByPos=None):
        # avoid sharing a mutable default argument between pages
        self.stampsByPos = stampsByPos if stampsByPos is not None else {}
        self.cur_row = 0
        
    def draw(self, screen):
        for s in self.stampsByPos.values():
            s.draw(screen)

########################################
#FUNCTIONS FOR PYGAME DISPLAY ANIMATION
########################################
numberList=[]
keylist=[]
(cursor_col, cursor_row) = (0, -1)
symbolsByPos={}
pages=[]
page=Page(symbolsByPos)
count=0
filename = "dump.txt"
keydump = open(filename,"a")
keydump.write("----\n")

def onKey(key, page):
    """find the position on the page"""
    global cursor_row, cursor_col, symbolsByPos, keylist, pages, count
    if key in keyProps:
        cursor_col = keyProps[key][3]
        cursor_row += 1
        t = Text(keyProps[key][0])
        st = Stamp(t)
        print cursor_col, cursor_row
        symbolsByPos[(cursor_col, cursor_row)] = st
        st.set_pos(cursor_col, cursor_row)

def notate(name):    
    """making the notation on the screen"""
    global cursor_row, cursor_col, symbolsByPos, keylist, pages, count, page, numbers4buffers
    i = 0
    while 1:
        for k in Keys:
            keylist.append(k)
        while i < len(keylist)+1: #look up how many movements are in the list and run the code for all of them
            for event in pygame.event.get():
                if event.type == pygame.QUIT or \
                (event.type == KEYDOWN and event.key == K_ESCAPE):
                    sys.exit()
            if i==15: #15 is the number of seconds of the dance phrase
                i=0
            if keylist[i] == 'k':    #check whether i points to the first movement
                sendMsg(add, msg, ip, port)
                print keylist[i], msg, 'WOW'
                onKey(keylist[i], page)
            elif keylist[i]==keylist[i-1]:
                print keylist[i], 'the previous is: ', keylist[i-1], 'do nothing'
                pass                                 
            else:
                osc.sendMsg(add, msg, ip, port)
                print msg, 'WOW'
                onKey(keylist[i], page)    
            if cursor_row >=VAR:
                keylist=[]
                symbolsByPos={}
                pageFile = name + ('_%d.bmp' % count)
                pygame.image.save(screen, pageFile)
                pages.append(pageFile)
                page = Page(symbolsByPos)
                count+=1
                cursor_row=-1
            i+=1
            time.sleep(2)         
                #DRAW SCREEN
            screen.fill((100, 100, 100))
                #DRAW CURSOR
            cursor_width = IMG_WIDTH
            cx = width/2
            pygame.draw.rect(screen, (30, 47, 47), (cx + (cursor_col*IMG_WIDTH), stafftop, IMG_WIDTH, staffheight))
            pygame.draw.rect(screen, (30, 47, 47), (0, (staffbottom-((cursor_row+1)*GRIDHEIGHT)),  width, GRIDHEIGHT))
                #DRAW STUFF
            pygame.draw.rect(screen, (0, 0, 0), (cx-2*IMG_WIDTH, stafftop, 4*IMG_WIDTH, staffheight), 1) 
            pygame.draw.line(screen, (0, 0, 0), (cx, staffbottom), (cx, stafftop), 1)    
               #DRAW STUFF TIME FRAMES
            for row in range(0, 16, 1):
                pygame.draw.line(screen, (0, 0, 0), (cx+0.2*IMG_WIDTH, height-(row+1)*GRIDHEIGHT),( cx-0.2*IMG_WIDTH, height-(row+1)*GRIDHEIGHT), 1)
                #WRITE BODY INDICATIONS
            rside=fontSmall.render('right side', True, (255, 150, 50))
            screen.blit(rside, (cx+30, staffbottom+35))    
            
            lside=fontSmall.render('left side', True, (255, 0, 0))
            screen.blit(lside, (cx-2*IMG_WIDTH, staffbottom +35))
            
            lleg=fontSmall.render('leg', True, (255, 0, 10))
            screen.blit(lleg, (cx-1.6*IMG_WIDTH, staffbottom +3))
            
            lbody=fontSmall.render('body areas', True, (255, 0, 20))
            screen.blit(lbody, (cx-4*IMG_WIDTH, staffbottom +3))
            
            larm=fontSmall.render('arm', True, (255, 0, 30))
            screen.blit(larm, (cx-6*IMG_WIDTH, staffbottom +3))
            
            rleg=fontSmall.render('leg', True, (255, 160, 60))
            screen.blit(rleg, (cx+1.2*IMG_WIDTH, staffbottom +3))
        
            rbody=fontSmall.render('body areas', True, (255, 170, 70))
            screen.blit(rbody, (cx+2.5*IMG_WIDTH, staffbottom+3))
        
            rarm=fontSmall.render('arm', True, (255, 180, 80))
            screen.blit(rarm, (cx+5.5*IMG_WIDTH, staffbottom +3))
                #WRITE COPYRIGHT HOLDER NAME
            identity=fontSmall.render('Copyleft '+ name +' 2009-06', True, (30, 20, 30))
            screen.blit(identity, (10, 22))
            #DRAW CUR PAGE
            page.draw(screen)
            #refresh frames
            pygame.display.flip()
            clock.tick(FPS)

#########################################
#FUNCTIONS FOR OSC MESSAGE HANDLERS
#########################################
def listLet(*msg):
    """ split the list of the incoming message and retrieve the necessary data """
    i = 0
    print 'got', msg
    print len(msg[0][1])

    while i < len(msg[0][1]):
    #    print 'i is ', i, 'and typetag is ', msg[0][1][i]
        if msg[0][1][i] == 'f':
            print
            print 'so incoming data are ', int(msg[0][i+1]*10)
            #puredata=int(msg[0][i+1]*10)
            #numberList.append(puredata)
            print
        if msg[0][1][i] == 'i':
            print
            print 'so incoming data are ', msg[0][i+1]
            #puredata=msg[0][i+1]
            print
            #numberList.append(puredata)
        i += 1
        print numberList

def sendMsg(osc_add, msg, ip_add, port):
    """ sends an OSC message to the given ip address and port; the commented-out lines listen for incoming messages and split them into single characters, numbers or letters"""
    #osc.listen(ip_add, port)    #uncomment when sending to localhost or when receiving data from a remote computer
    #osc.bind(listLet, osc_add)
    print 'ip address is ', ip_add, ', port is ', port
    osc.sendMsg(osc_add, msg, ip_add, port)
    for event in pygame.event.get():
        if event.type == pygame.QUIT or \
        (event.type == KEYDOWN and event.key == K_ESCAPE):
            osc.dontListen()
            sys.exit()
        
def clearscreen():
    os.system("clear")

###################################
#GET PYGAME and OSC READY TO RUN 
###################################
osc.init()
pygame.init()
pygame.font.init() # we need fonts
clearscreen()        
###############################
#CONSTANTS-TWEAK ME-PYGAME
size = width, height = 1100, 765
FPS = 60
IMG_WIDTH=50
GRIDHEIGHT=50
VAR=int(height/GRIDHEIGHT)-1 #a whole number => divide the screen height by the grid size and subtract one grid row
staffheight=VAR*GRIDHEIGHT
staffbottom=height-GRIDHEIGHT
stafftop= staffbottom-staffheight    
pygame.display.set_caption('Labanotation')
fontpath='/Library/Fonts/liberation-fonts-0.2/LiberationSerif-Regular.ttf'
font=pygame.font.Font(fontpath, 25)
fontSmall=pygame.font.Font(fontpath, 18)
# open pygame window
screen = pygame.display.set_mode(size)# pygame.FULLSCREEN)
# make a pygame clock
clock = pygame.time.Clock()
#########################################################
#SET THE VAR AND CALL THE FUNCTIONS OF OSC AND PYGAME
numbers4buffers=[1000, 1001, 1002, 1003, 1003, 1004, 1005, 1006, 1007, 1008,1008,  1009, 1010, 1011, 1011]
(add, msg, ip, port) = ("/s_new", ["Player", numbers4buffers, 0,1, "bufnum", Sec], '10.0.1.5', 57110)
notate('test')

Documentation of rehearsal and notes

 

Video from the rehearsal on 22 July: http://pzwart2.wdka.hro.nl/~mkaragianni/sites/jw_flv_player/AIRvideos.html

Background

Laban notation is not comprehensible to a great many people, both dancers and non-dancers. Because of the limitation of using a specialized system of movement analysis such as the Laban notation system, the audience cannot really have control of, or understanding of, the connections between the notation symbols and the movements of the dancers. Aiming and focusing the eye and ear is the method the audience uses to perceive a performance. We give the audience control over what is visible or not during the dance. The audience member (or two at the same time?) gets a spotlight (or two) which s/he uses to switch the light on/off. This determines what we see of the choreography and what we hear of the sound.

The light would be used as a simple interface for influencing the score of the performance: a direct way of controlling the events of the performance. The rhythm of the time is based on a 1-second beat. There is a dance vocabulary of movements of 1-second duration each. The dance and sound loop every 7 or 15 seconds. The dancer executes the movements the first time in a very dry way, accompanied by a dry sound of letters (from A to L; each letter corresponds to a movement). Every time the loop is repeated, new small elements and details are introduced in both the sound and the movement design.
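As a small illustrative sketch, the 1-second vocabulary can be modelled as a simple mapping. The letter-to-movement pairs are copied from the keyProps table in the Python notation code above; the score_line helper is ours.

```python
# Letter-to-movement vocabulary, copied from the keyProps table in the
# notation code above; each movement fills one 1-second beat.
movements = {
    "a": "turn right", "b": "left hand up", "c": "head down",
    "d": "folded", "e": "two steps backwards", "f": "unfold and small arch",
    "g": "knee up", "h": "change of weight", "i": "elbows",
    "j": "turn", "k": "squeeze", "l": "bend legs",
}

def score_line(sequence):
    # Render one loop of the phrase as spoken-letter instructions.
    return [letter.upper() + ": " + movements[letter] for letter in sequence]

print(score_line(["a", "b", "c"]))
```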

DIY wireless microphone

 

Microphone and sound modules | Workshop with Ralf Schreiber | 29.06-01.07.

See more about solar sound modules -> Ralf Schreiber.
How to make your own cheap wireless microphone with a car fm-transmitter [normally used for connecting an mp3 player to a car radio].

We used the fm-transmitter mpman, which can be found cheaply on eBay and more expensively in media shops. The idea was to find a way to shift from a fixed, precomposed dance-sound module to a more open system controlled by both audience and dancer. Trying to find a way to combine sound from the computer with sounds generated by the solar modules built into the dancer's dress, we built a wireless microphone. In this way we can get the sound from the sound modules to a radio tuner and from there to the computer via an external sound card. In the computer we can analyze the sound coming from the sound modules, using SuperCollider, and trigger new sounds.

/// Monday
Open the fm-transmitter with a screwdriver and splice a wire onto the circuit board. This serves as an antenna, giving a better signal and less noise within a distance of 5 meters (pic 2). We still have to try out where on the transmitter an antenna can best be connected.

Two crocodile clips are spliced onto the coaxial input of the fm-transmitter and connected to the (-) output of the inverter of the sound module (see schema).

Set a frequency on the fm-transmitter and then tune in to the same frequency on the radio.

 

/// Tuesday
Testing the signal of the fm-transmitter with different possibilities: aluminum foil as an antenna extension for the fm-transmitter and also for the radio antenna (pic 3). We looked for ways to reduce noise from the radio, which would pick up the local radio station whenever the signal coming from the sound module was weak.

 

/// Wednesday
Buy a second-hand analogue radio tuner at the flea market. Test it with an extension antenna cable: cut the end, split the copper shield (metallic foil) from the insulating layer and connect it to ground. Connect the core wire to the right or left output of the tuner. The signal is far better and there is almost no noise. Build a few more sound modules with adjustable registers and other connections to try different low and high frequencies from the sound modules.

Work Process First Month

 

1st of June

 

Global Concept. Zoom out. Initial intentions. Order of movements by audience ===> Visibility ==> sound modules

 

2nd of June

 

Track the dancer! (working title)

 

Motivation: Laban notation is not comprehensible to a great many people, both dancers and non-dancers.

It therefore creates a constraint on comprehending the connection between the symbols of the notation and the corresponding movements. Although this adds to the absurdity of copyrighting movements which the author blindly designs by randomly typing keys on the keyboard, it makes the project less inviting for dancers who haven't worked with the Labanotation system. Therefore we would like to explore a more open system of interaction in the creation and structure of the dance performance, while still reflecting on established notions of authoring in platforms where collaboration and media interaction allow unpredictable elements in light, sound and movement design. So by exploring more direct, ON/OFF ways for audience members to participate, we give them control through a more binary approach to interaction.

 

  

Possible Experiment

 

Someone is singing a quiet song in the background, accompanying himself on guitar. Every time the dancer is visible (because she is in the spotlight), the song stops. It continues when the dancer can no longer be seen. When the dancer is visible, we hear the 'sound modules' attached to her suit. These modules react to light. They make squeaking, sawtooth-like sounds. We use the 'sound modules' as a trigger for other devices. When the 'sound modules' do not make sound, there is a 50% chance that the electronic, generative composition is triggered. In this composition there is also a significant amount of silence. If the generative composition is not triggered, the singer/songwriter starts playing his song. Every time the singer/songwriter has to play again, he starts from the point in the song where he was interrupted.
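The rules of this experiment can be sketched as a small decision function. This is purely illustrative: the function name and the rng parameter are ours, not part of any actual implementation.

```python
import random

def next_sound_source(modules_sounding, rng=random):
    """Decide what the audience hears next, following the experiment's rules.
    The function name and the rng parameter are ours, purely illustrative."""
    if modules_sounding:
        return "sound modules"           # dancer visible: modules always win
    if rng.random() < 0.5:               # modules silent: 50% chance of the
        return "generative composition"  # electronic, generative piece
    return "song"                        # otherwise the singer resumes his song

print(next_sound_source(True))
```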

 

3rd of June

 

/Sound Idea Sjoerd

 

Fibonacci series in guitar tuning. Sound modules influence the sound of the guitar. We map the data in such a way that we get usable data to affect the Laban notation and the sound of the guitar (which runs through SuperCollider). Fixed choreography and a fixed guitar piece. When there is no light everything runs as it is supposed to, but when the audience turns on the light everything gets altered by the data that comes from the squeaking sound modules.

 

5th of June

 

Instead of Laban notation, we could use short text instructions to describe a short sequence of movements. Five or six would be sufficient. This is projected on a screen, only when it's dark. The dancer keeps repeating this sequence when the light is on. After a while, when the light is off, the order of the sequence is altered.

 

11th of June

 

We have been trying to set up communication between our programming languages, namely Python and SuperCollider. We do this by using OSC (Open Sound Control). We conducted small experiments in which Sjoerd, for example, would play guitar. His guitar playing was not audible: it was only analyzed by an algorithm following the frequency and an algorithm following the amplitude. This data was used to control the sound of a bunch of synthesizers, crosswise: the frequency of the guitar would influence the amplitude of the synthesizers, and the amplitude of the guitar would influence the frequency of the synthesizers. At the same time the amplitude data was sent to Python, where we tried to use it to change the order of the notated movements.
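The swapped mapping can be sketched as a pure function. The scaling ranges below are illustrative assumptions, not the values actually used in the experiment:

```python
def cross_map(guitar_freq, guitar_amp):
    """Crossed control mapping as described above: the guitar's frequency
    drives the synths' amplitude and its amplitude drives their frequency.
    The scaling ranges are illustrative assumptions, not the actual values."""
    synth_amp = min(guitar_freq / 1000.0, 1.0)  # e.g. 0-1000 Hz -> 0.0-1.0
    synth_freq = 100.0 + guitar_amp * 900.0     # e.g. amp 0-1 -> 100-1000 Hz
    return synth_freq, synth_amp

print(cross_map(500.0, 0.5))  # -> (550.0, 0.5)
```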

 

During a conversation the idea arose to use the sound of a person who is just about to speak. The sound of

 

- Rhythmic patterns in Buffers;

 

- Voice says A, B, C, etc., which correspond to the movements;

 

- Sounds from just before someone utters something;

 

18th of June

 

- Simple, transparent relationships between the different elements (sound, movement, light).

 

- Development within the piece: changing relationships, order and character.

 

19th of June

 

We worked on our code and discussed new ideas.

 

While working with the FM-transmitter we came up with the idea for a radio piece. The audience participates by choosing frequencies on a radio. We broadcast different sound sources on different frequencies that they can find. Local radio stations still come through. What would happen if we sent sound from different sources with two FM-transmitters of equal strength to the same frequency?

 

- Working title: Piece for a dancer, a lamp and a transistor radio

 

- It could be an idea to keep it compact as an image. So on the stage there is a dancer, a lamp and a transistor radio (an old, crappy one!). The dancer has to tune the radio to the right frequency, on which Sjoerd transmits.

 

- Experiment with text instructions. Instructions in sound with A,B,C... etc. Distorted by sound modules.

 

- An experiment in which there is a very clear connection between sound and light, e.g. when the light is on, the sound is off, and vice versa. Possibly develop this with consequences for movement and/or quality of sound.

 

- An experiment in which we use pronounced letters in sound together with a short percussive motif, also made up of speech, to which they correspond. At a certain point the screen with text instructions turns black. The sound instructions continue, only with a different logic: if the light is on, one sound instruction loops constantly. When the light is off, different sound instructions are continuously picked.
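The two looping behaviours can be sketched as one picking function (a hypothetical snippet of ours; the names are not from any actual implementation):

```python
import random

def pick_instruction(light_on, current, pool, rng=random):
    # Light on: keep looping the same sound instruction.
    # Light off: continuously pick a different instruction from the pool.
    if light_on:
        return current
    return rng.choice([p for p in pool if p != current] or pool)

print(pick_instruction(True, "A", list("ABCDEFGHIJKL")))
```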

 

22nd of June

 

Rehearsal

 

Today we worked together with our dancer, Eleni. We tried out the instructions. For this occasion we hooked up the text instructions, which are visible on a screen, to the sound instructions, which are audible from the speakers. Later on we tried this out in combination with the Solar Sound Module, which we attached to Eleni's shoulder. The sound responded in a very direct way to Eleni's movements. The FM signal was a bit too weak, though; we should get an amplifier to make the radio signal stronger, as there was too much noise. Sjoerd fooled around with the sound and initiated a very brutal change (a sudden outburst of constantly generated sine waves, of which the frequency and amplitude were altered by the Solar Sound Module on Eleni's shoulder).

 

Link to Video

 

A few ideas arose during this session.

 

- We liked the fact that some local radio stations would occasionally come through when the sound module did not generate sound. An interesting, indeterminate aspect which we could possibly use to our benefit.

 

- The idea to gradually implement subtle changes in the blocks, both for sound and movement. To let them evolve in a transparent way.

 

- To develop the sound instructions in a musical way. The letters could be sung, in different modes (Dorian, Lydian, etc.). Different durations, rhythms, etc. Fill the buffers up with different musical data.

 

23rd of June

 

Sound Instructions

 

Idea

 

The dancer gets direct instructions from the sound about which movements she must dance. The instructions at first consist only of letters (A to L), but slowly they start to develop in a musical way: they morph into music, yet still remain instructions.

 

Today Sjoerd worked on a musical adaptation of the sound instructions for the dancer. The sound instructions consist of the letters A to L. Every letter corresponds with a certain movement. All letters last one second, except the letters C, H and K: they last two seconds. In the beginning only the letters can be heard. Then, after a while, percussive voice samples are added, and later on tonal voice samples. In the beginning the cells are executed in alphabetical order; later the letters are executed in random order. Below you can listen to a short example.
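As a sanity check on these durations (a small illustrative snippet of ours): nine one-second letters plus three two-second letters give the 15-second phrase length that also appears in the Python notation code above.

```python
# Durations per letter as described above: C, H and K last two seconds,
# every other letter of A-L lasts one second.
durations = {letter: (2 if letter in "CHK" else 1) for letter in "ABCDEFGHIJKL"}
total = sum(durations.values())
print(total)  # 9 * 1 + 3 * 2 = 15 seconds for one full phrase
```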

 

Link to Demo

 

Machine vs Person

 

Idea

 

During a discussion we concluded that the dancer was too constrained. She has to deal with quite a high tempo and an unpredictable order of movements. This prevents her from expressing her personal vocabulary. The idea arose to make a distinction between the dancer 'as a machine' and the dancer 'as a human'. In certain parts of the piece she has to follow the instructions quite directly, and we can see how the dancer moves when she is constrained by time and yet also has to be attentive, because the order of the movements is constantly changing. At other moments the instructions vanish. The dancer is now outside of this system. She has the control. She still performs the same movements, but now in her own time, relating them to the sounds she triggers with the 'solar sound module' (a light sensor with a sound-generating device). Now we can see how she interprets the movements herself.

Code Sound Instructions

 

(            // Sound Files

~a_files = "sounds/Letters/1stPercussion/*.wav".pathMatch;
~b_files = "sounds/Letters/2ndPercussion/*.wav".pathMatch;
~c_files = "sounds/Letters/1stTonal/*.wav".pathMatch;
~d_files = "sounds/Letters/1stTonalPercussion/*.wav".pathMatch;
~e_files = "sounds/Letters/2ndTonalPercussion/*.wav".pathMatch;
~f_files = "sounds/Tonen/DLydischLZ/*.wav".pathMatch;
~g_files = "sounds/Tonen/DLydischLZ2/*.wav".pathMatch;

)
(             // collect files in buffers

~a_buffer = ~a_files.collect{|item| Buffer.read(s, item) };
~b_buffer = ~b_files.collect{|item| Buffer.read(s, item) };
~c_buffer = ~c_files.collect{|item| Buffer.read(s, item) };
~d_buffer = ~d_files.collect{|item| Buffer.read(s, item) };
~e_buffer = ~e_files.collect{|item| Buffer.read(s, item) };
~f_buffer = ~f_files.collect{|item| Buffer.read(s, item) };
~g_buffer = ~g_files.collect{|item| Buffer.read(s, item) };
)
(
                    // Sample Player

SynthDef("Player", { arg out=0, bufnum;    
            var sig;
            sig = PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum), loop: 0);
            FreeSelfWhenDone.kr(sig);
            Out.ar(out, Pan2.ar(sig, 0));
                        }).send(s);
        
                    // the task which plays the samples
t = Task({
        // per-step waits of one 12-step sequence, in seconds
        ~waits = [1.0, 1.0, 2.0, 1.0, 1.0, 1.0, 1.0, 2.0, 1.0, 1.0, 2.0, 1.0];

        // play one sequence: layer the given buffers, 'repeats' times
        ~playSeq = { |buffers, repeats = 1|
            repeats.do({
                12.do({ arg i;
                    buffers.do({ |buf| Synth("Player", [\bufnum, buf.wrapAt(i)]) });
                    ~waits[i].wait;
                    i.postln;
                });
            });
        };

        ~playSeq.([~a_buffer]);
        ~playSeq.([~b_buffer]);
        ~playSeq.([~c_buffer]);
        ~playSeq.([~d_buffer]);
        ~playSeq.([~e_buffer]);
        ~playSeq.([~e_buffer, ~d_buffer]);
        ~playSeq.([~e_buffer, ~d_buffer, ~c_buffer], 2);
        ~playSeq.([~e_buffer, ~d_buffer, ~c_buffer, ~b_buffer], 4);
        ~playSeq.([~f_buffer, ~e_buffer, ~d_buffer, ~c_buffer, ~b_buffer], 2);
        ~playSeq.([~g_buffer, ~f_buffer, ~e_buffer, ~d_buffer, ~c_buffer, ~b_buffer], 4);
        });
)

t.start;                // start task
t.pause;                // pause task
t.resume;                // resume task
t.stop;                 // stop task
        

 

Light-computer interface + transmitters

 

About light
To switch the light on and off from sound feedback, and to connect the light with the sound and the movements through real-time processing, we used an Arduino to power the light spot's cable via computer messages.
I hooked a 500W/220V light spot up to the Arduino board using a 9VDC/10A relay. The tutorial for interfacing a 220V light with the Arduino board's 5V logic was taken from: http://www.glacialwanderer.com/hobbyrobotics/?p=9

Pockets for FM-transmitters
These are old-generation mobile-phone pouches, which happen to be a good size for the FM transmitters. They make the transmitters easier to carry on the dancer's costume.

Controlling Light From SC3 via Python to Arduino

 

Supercollider|Python|Arduino Code

Supercollider Code

// SWITCHING A LIGHT ON/OFF FROM SUPERCOLLIDER VIA PYTHON TO ARDUINO

b = NetAddr.new("10.0.1.12", 57110); // create the NetAddr on which the application you're sending to is listening

// we create a Task
t = Task({
inf.do({
b.sendMsg("/ard", "ON");    // via the arduino-board we turn the light on
"ON sent".postln;        
3.0.wait;                // we wait 3 seconds
b.sendMsg("/ard", "OFF");    // then we turn it off
"OFF sent".postln;
3.0.wait;                // and we wait another three seconds to turn it on again
        });
    });
    
t.start;                // start the Task
t.stop;                // stop the Task

 

Python Code

#!/usr/bin/python
import serial, osc, time
osc.init() #initiate the osc library
serArd = serial.Serial('/dev/tty.USB', 9600) # open the arduino port

def listLet(*msg):
    """function for reading the OSC message"""
    print 'got address', msg[0]
    print len(msg[0][1])
    print 'typetag is', msg[0][1][1]
    if msg[0][1][1] == 'f': # check if the osc message is a float number
        print
        print 'incoming data are', int(msg[0][2]) # turn it into an integer
        data = msg[0][2]
        print data
        print
    if msg[0][1][1] == 's': # check if the osc message is a string
        print
        #print 'incoming data are', msg[0][2]
        data = msg[0][2]
        print
        print "data are", data
        serArd.write(str(data)) # send data to the arduino
 
#serMouse = serial.Serial('/dev/tty.usb-AE105jB', 9600) # for mouse events
osc.listen('10.0.1.12', 57110)  # listen for OSC on this address/port
osc.bind(listLet, "/ard")       # call listLet for messages with the address "/ard"
while 1:
    time.sleep(1)
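The commands travel over the serial link as the literal strings "ON" and "OFF", which the microcontroller can only read one byte at a time. As a plain-Python sketch (not part of the performance code, names are ours), one way to recover the relay state from such a byte stream is to key on a distinguishing character: 'N' occurs only in "ON" and 'F' only in "OFF".

```python
# Sketch: recover the relay state from a byte stream carrying the
# literal commands "ON" and "OFF". 'N' appears only in "ON" and
# 'F' only in "OFF", so a single byte is enough to decide.
def relay_state(stream, state=0):
    """Feed bytes one at a time, as the Arduino's Serial.read() does."""
    for byte in stream:
        ch = chr(byte) if isinstance(byte, int) else byte
        if ch == 'N':
            state = 1   # finished reading "ON"
        elif ch == 'F':
            state = 0   # reading "OFF"
    return state

print(relay_state("ONOFFON"))   # the last command wins -> 1
```

The same idea carries over to the microcontroller side: switch on single characters rather than on whole strings.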

 

Arduino Code

int RELAY_PIN = 3;

void setup()
{
  pinMode(RELAY_PIN, OUTPUT);
  Serial.begin(9600); // open serial
  Serial.println("waiting for OSC message");
}

void loop()
{
  static int relayVal = 0;
  int cmd;

  while (Serial.available() > 0)
  {
    cmd = Serial.read();

    // Python sends the literal strings "ON" and "OFF", which arrive
    // one byte at a time. 'N' occurs only in "ON" and 'F' only in
    // "OFF", so a single character is enough to tell the commands
    // apart (a multi-character case label like 'ON' would never
    // match a single byte read). 'F' fires twice per "OFF", which
    // is harmless.
    switch (char(cmd))
    {
    case 'N': // end of "ON"
      {
        relayVal = 1;
        Serial.println("Relay on");
        break;
      }
    case 'F': // part of "OFF"
      {
        relayVal = 0;
        Serial.println("Relay off");
        break;
      }
    default: // echo anything else for debugging
      {
        Serial.print(char(cmd));
        Serial.print(relayVal);
      }
    }

    if (relayVal)
      digitalWrite(RELAY_PIN, HIGH);
    else
      digitalWrite(RELAY_PIN, LOW);
  }
}

Improved Code Sound Instructions & Slower Tempo

 

To make it easier for our dancer Eleni to carry out the movements, we decided that the tempo should be slower. Musically this also works better: there is more room for the instructions to "morph" into music and then develop. The order of the letters is always the same. Half of the sound instructions are pre-determined; the other half picks its bits randomly from the buffers. See the SuperCollider code below:
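At the slower tempo the six steps of one sequence wait 1.5 + 1.5 + 3 + 1.5 + 1.5 + 3 seconds, i.e. 12 seconds per sequence; counting the repeats in the Task gives 30 sequences, which works out to six minutes in total. A quick Python check of that arithmetic (variable names are ours, just for illustration):

```python
# Sketch: length of one sequence and of the whole "rigid time"
# at the slower tempo, derived from the SuperCollider code below.
waits = [1.5, 1.5, 3.0, 1.5, 1.5, 3.0]  # per-step waits of one sequence
sequence_len = sum(waits)                # seconds per sequence
n_sequences = 30                         # repeats counted in the Task

print(sequence_len)                  # 12.0
print(n_sequences * sequence_len)    # 360.0 seconds, i.e. six minutes
```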

 

(            // Sound Files
~a_files = "sounds/InstructionsAF/First/*.wav".pathMatch;
~b_files = "sounds/InstructionsAF/Second/*.wav".pathMatch;    // percussive
~c_files = "sounds/InstructionsAF/Third/*.wav".pathMatch;     // percussive
~d_files = "sounds/InstructionsAF/Fourth/*.wav".pathMatch;    // percussive
~e_files = "sounds/InstructionsAF/Fifth/*.wav".pathMatch;     // short tones
~f_files = "sounds/InstructionsAF/Sixth/*.wav".pathMatch;     // short tones
~g_files = "sounds/InstructionsAF/Seventh/*.wav".pathMatch;   // short tones
~h_files = "sounds/InstructionsAF/Eighth/*.wav".pathMatch;    // short tones
~i_files = "sounds/InstructionsAF/Ninth/*.wav".pathMatch;     // short tone + perc
~j_files = "sounds/InstructionsAF/Tenth/*.wav".pathMatch;     // short tone + perc
~k_files = "sounds/InstructionsAF/Eleventh/*.wav".pathMatch;   // short tone + perc
~l_files = "sounds/InstructionsAF/Twelfth/*.wav".pathMatch;    // long tones
~m_files = "sounds/InstructionsAF/Thirteenth/*.wav".pathMatch; // long + short
~n_files = "sounds/InstructionsAF/Fourteenth/*.wav".pathMatch; // long + short
~o_files = "sounds/InstructionsAF/Fifteenth/*.wav".pathMatch;  // long + short
~p_files = "sounds/InstructionsAF/Sixteenth/*.wav".pathMatch;  // long + short
~q_files = "sounds/InstructionsAF/Seventeenth/*.wav".pathMatch; // long + short
~r_files = "sounds/InstructionsAF/Eightteenth/*.wav".pathMatch; // long + short

             // collect files in buffers and make an array of all buffers
~buffers = [
    ~a_buffer = ~a_files.collect{|item| Buffer.read(s, item) },
    ~b_buffer = ~b_files.collect{|item| Buffer.read(s, item) },
    ~c_buffer = ~c_files.collect{|item| Buffer.read(s, item) },
    ~d_buffer = ~d_files.collect{|item| Buffer.read(s, item) },
    ~e_buffer = ~e_files.collect{|item| Buffer.read(s, item) },
    ~f_buffer = ~f_files.collect{|item| Buffer.read(s, item) },
    ~g_buffer = ~g_files.collect{|item| Buffer.read(s, item) },
    ~h_buffer = ~h_files.collect{|item| Buffer.read(s, item) },
    ~i_buffer = ~i_files.collect{|item| Buffer.read(s, item) },
    ~j_buffer = ~j_files.collect{|item| Buffer.read(s, item) },
    ~k_buffer = ~k_files.collect{|item| Buffer.read(s, item) },
    ~l_buffer = ~l_files.collect{|item| Buffer.read(s, item) },
    ~m_buffer = ~m_files.collect{|item| Buffer.read(s, item) },
    ~n_buffer = ~n_files.collect{|item| Buffer.read(s, item) },
    ~o_buffer = ~o_files.collect{|item| Buffer.read(s, item) },
    ~p_buffer = ~p_files.collect{|item| Buffer.read(s, item) },
    ~q_buffer = ~q_files.collect{|item| Buffer.read(s, item) },
    ~r_buffer = ~r_files.collect{|item| Buffer.read(s, item) }
            ];
           

)
(
                    // Sample Player
SynthDef("Player", { arg out=0, bufnum, pan=0;    
            var sig;
            sig = PlayBuf.ar(2, bufnum, BufRateScale.kr(bufnum), loop: 0);
            FreeSelfWhenDone.kr(sig);
            Out.ar(out, Pan2.ar(sig, pan, 0.3));
                        }).send(s);
                    // Reverb
SynthDef("Reverb", { arg out=0,mix=0.0,room=0.4, damp=0.5,amp=1.0;
                var sig; 
                sig = In.ar(out, 2);
                ReplaceOut.ar(0,
                            FreeVerb2.ar(
                            sig[0],
                            sig[1],
                            mix, room, damp, amp));
                            }).send(s);

(
                    // the task which plays the samples
t = Task({

        // per-step waits of one six-step sequence, in seconds
        ~waits = [1.5, 1.5, 3.0, 1.5, 1.5, 3.0];

        // play one sequence: 'layers' is a list of [buffer, pan] pairs,
        // played 'repeats' times with reverb mix 'mix'
        ~playSeq = { |layers, mix, repeats = 1|
            repeats.do({
                z.set(\mix, mix);
                6.do({ arg i;
                    layers.do({ |l| Synth("Player", [\bufnum, l[0].wrapAt(i), \pan, l[1]]) });
                    ~waits[i].wait;
                });
            });
        };

        ~playSeq.([[~a_buffer, 1.0]], 0.0);
        ~playSeq.([[~b_buffer, 0.0]], 0.05);
        ~playSeq.([[~c_buffer, 0.0]], 0.05);
        ~playSeq.([[~d_buffer, 0.0]], 0.05);
        ~playSeq.([[~e_buffer, 0.0]], 0.1);
        ~playSeq.([[~e_buffer, 1.0], [~f_buffer, -1.0]], 0.15);
        ~playSeq.([[~f_buffer, 1.0], [~g_buffer, -1.0]], 0.15);
        ~playSeq.([[~g_buffer, 1.0], [~h_buffer, -1.0]], 0.15);
        ~playSeq.([[~i_buffer, 0.0]], 0.2);
        ~playSeq.([[~j_buffer, 0.0]], 0.2);
        ~playSeq.([[~j_buffer, 1.0], [~k_buffer, -1.0]], 0.25);

        // two sequences pairing a random buffer from 0-5 with one from 6-10
        2.do({
            z.set(\mix, 0.25);
            6.do({ arg i;
                j = 6.rand;
                k = 6 + 5.rand;
                Synth("Player", [\bufnum, ~buffers[j].wrapAt(i), \pan, 1.0]);
                Synth("Player", [\bufnum, ~buffers[k].wrapAt(i), \pan, -1.0]);
                ~waits[i].wait;
            });
        });

        // two sequences: long tones against a random buffer from 0-9
        2.do({
            z.set(\mix, 0.3);
            6.do({ arg i;
                k = 10.rand;
                Synth("Player", [\bufnum, ~l_buffer.wrapAt(i), \pan, 1.0]);
                Synth("Player", [\bufnum, ~buffers[k].wrapAt(i), \pan, -1.0]);
                ~waits[i].wait;
            });
        });

        ~playSeq.([[~l_buffer, 1.0], [~m_buffer, -1.0]], 0.35);
        ~playSeq.([[~m_buffer, 1.0], [~n_buffer, -1.0]], 0.35);
        ~playSeq.([[~n_buffer, 1.0], [~o_buffer, -1.0]], 0.35);
        ~playSeq.([[~o_buffer, 1.0], [~p_buffer, -1.0]], 0.35);
        ~playSeq.([[~p_buffer, 1.0], [~q_buffer, -1.0]], 0.35);
        ~playSeq.([[~q_buffer, 1.0], [~r_buffer, -1.0]], 0.35);

        // four sequences pairing two fully random buffers
        4.do({
            z.set(\mix, 0.4);
            6.do({ arg i;
                j = 18.rand.postln;
                k = 18.rand.postln;
                Synth("Player", [\bufnum, ~buffers[j].wrapAt(i), \pan, 1.0]);
                Synth("Player", [\bufnum, ~buffers[k].wrapAt(i), \pan, -1.0]);
                ~waits[i].wait;
            });
        });

        // four sequences pairing a random buffer from 11-13 with one from 14-17
        4.do({
            z.set(\mix, 0.45);
            6.do({ arg i;
                j = 11 + 3.rand.postln;
                k = 14 + 4.rand.postln;
                Synth("Player", [\bufnum, ~buffers[j].wrapAt(i), \pan, 1.0]);
                Synth("Player", [\bufnum, ~buffers[k].wrapAt(i), \pan, -1.0]);
                ~waits[i].wait;
            });
        });

        ~playSeq.([[~a_buffer, 0.0]], 0.0);
        });
)
)
(
t.start;                // start task
z = Synth(\Reverb, addAction:\addToTail);    // put reverb on
)

Supercollider Code for "Free Time" + Sound Examples

 

Below you can see the code of four experimental "patches" for the sound in the "free time". In these patches the sound is either triggered and manipulated by the sound of the SSmodule, or the sound of the SSmodule itself is manipulated. The corresponding sound examples can be found below as well.
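All four patches lean on ring modulation (SuperCollider's ring1...ring4 operators are variants of multiplying two signals sample by sample). Purely as an illustration of that core operation, a minimal Python sketch, with a 440 Hz test tone standing in for the Solar Sound Module input:

```python
import math

# Sketch: plain ring modulation, i.e. multiplying the input signal
# sample-by-sample with a carrier sine wave.
def ring_modulate(signal, carrier_freq, sample_rate=44100):
    return [s * math.sin(2 * math.pi * carrier_freq * n / sample_rate)
            for n, s in enumerate(signal)]

# a 440 Hz test tone standing in for the Solar Sound Module input
tone = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(1024)]
out = ring_modulate(tone, 110)  # modulated output, same length as input
```

In the patches the carrier is not fixed like this: its frequency is driven by the tracked amplitude or pitch of the module, which is what makes the result follow the dancer.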

// 1 // Noisy

a = Buffer.read(s, "sounds/AF7.wav");        // load sample in buffer

SynthDef("Effect", { 
        var sig, sig2, amp;
        sig = AudioIn.ar([3,4]);                // Input Solar Sound Module
        amp = Amplitude.kr(AudioIn.ar([3,4])); // Track amp ssModule
        sig = sig ring3: SinOsc.ar(Lag.kr(amp*10000, FSinOsc.kr(2))); // ringmodulate with Sine of which
                                                                     // freq is modulated by amp ssModule
                                                  
        sig = DelayN.ar(sig, 3.0, amp);                       // delaytime modulated by amp
        sig2 = sig ring4: PlayBuf.ar(2, a.bufnum, 1.0, loop:1); // ringmodulate with sample
        Out.ar(0, Pan2.ar(sig+sig2, 0));                        // add the two signals
                    }).send(s);
                    
Synth(\Effect);                                 // Play Synth

// 2 // FM grain-synthesizer controlled by Solar Sound Module with ringmodulation modulated by Gingerbreadman Chaos and Convolution
//

SynthDef("Grains", { arg trigger,gate = 1;
        var sig, ampfol, freq, hasFreq;
        # freq, hasFreq = Pitch.kr(AudioIn.ar(3),ampThreshold:0.02); // follow frequency of module
        ampfol = Amplitude.kr(AudioIn.ar(3));                        // follow amp of module
        sig = FMGrain.ar(                                                // FM grains
                Impulse.ar(Lag.kr(ampfol*1000, 20)),                // trigger rate modulated by amp
                ampfol*5,                                          // grainlength mod by amp
                freq,                                            // carrier freq follows the freq of the module
                freq,                                            // same for the modulator freq
                LFNoise1.kr(1).range(1, 10),                     // index of modulation mod by noise
                EnvGen.kr(                                       // amp envelope
                    Env([0, 1, 0], [1, 1], \sin, 1),
                    gate,
                    levelScale: ampfol,
                    doneAction: 2)
                );
            sig = Convolution.ar(sig, AudioIn.ar([3,4]), 1024, 0.5); // convolved with module as kernel
            sig = sig ring4: Lag.ar(GbmanL.ar(freq/1000), 20);       // ringmodulated with gingerbreadman
                                                                     // chaos of which the iteration freq
                                                                     // is modulated by the freq of the module
            Out.ar(0, Pan2.ar(sig, 0.0,0.3));
            }).send(s);

Synth(\Grains);       // Play Synth

 

// 3 // FM grain-synthesizer controlled by Solar Sound Module with Lorenz and Gingerbreadman-chaos generators with convolution and ringmodulation //

SynthDef("Grains", { arg gate = 1;
        var sig, ampfol, freq, hasFreq;
        # freq, hasFreq = Pitch.kr(AudioIn.ar(3),ampThreshold:0.02); // follow frequency of module
        ampfol = Amplitude.kr(AudioIn.ar(3));                        // follow amp of module
        sig = FMGrain.ar(                                                 // FM grains                
                Impulse.ar(Lag.kr(ampfol*1000, 20)),                //trigger rate modulated by amp
                 ampfol*30,                                          //grain length modulated by amp
                 LorenzL.ar(freq,3e-3)*800+900,   //carfreq mod by Lorenz chaos of which the iteration freq
                                                      // is modulated by freq of module             
                 LorenzL.ar(freq,3e-3)*500+400,   // same for modfreq
                LFNoise1.kr(1).range(1, 10),     // index of modulation modulated by noise
                EnvGen.kr(                             // amp envelope
                        Env([0, 1, 0], [1, 1], \sin, 1),
                        gate,
                        levelScale: ampfol,
                        doneAction: 2)
                        );
            sig = Convolution.ar(sig, AudioIn.ar([3,4]), 1024, 0.5);  // convolved with module as kernel
            sig = sig ring4: Lag.ar(GbmanL.ar(freq/1000), 20);   // ringmodulated with Gingerbreadman of
                                                                 // which the iteration freq is modulated by
                                                                 // the freq of the module. Lag is used to smoothen.
            Out.ar(0, Pan2.ar(sig, 0.0, 0.3));
            }).send(s);

a = Synth(\Grains);                                   // start synth            
           

// 4 // Noisy with Chaos

SynthDef("NoisyChaos", { 
        var sig, sig2,sig3, amp, freq, hasFreq;
        sig = AudioIn.ar([3,4]);                // Input Solar Sound Module
        # freq, hasFreq = Pitch.kr(AudioIn.ar(3));
        amp = Amplitude.kr(AudioIn.ar([3,4])); // Track amp ssModule
        sig2 = sig ring3: SinOsc.ar(Lag.kr(amp*10000, FSinOsc.kr(2))); // ringmodulate with Sine of which
                                                                     // freq is modulated by amp ssModule
                                                  
        sig2 = DelayN.ar(sig2, 3.0, amp);  // delaytime modulated by amp
        sig3 = sig ring4: FBSineC.ar(freq/100, 
                            amp*10, amp, 1.005, 0.7) * 0.2; // ringmodulate with chaotic sine
        sig3 = Resonz.ar(sig3, freq,1.0);
        Out.ar(0, Pan2.ar(sig2+sig3, 0));                        // add the two signals
                    }).send(s);
                    
Synth(\NoisyChaos);                 // Play Synth