Welcome to the Bad Sekta website

Recent & forthcoming releases:

* Various - BADvinyl001 (50th release!) - 12" vinyl - 1 October 2013
* $p!tTiNg V!tRi[]L - 'Good Grief [echoes from the grunge-pit]' - BADmpfree032 - MPFree/FLAC - 23 March 2013
* Ascetic - 'Loss' - BADmpfree031 - MPFree/FLAC - 23 April 2013
* Phuq - '10,958 Days of Error' - BADmpfree030 - MPFree - 13 January 2013
* Various - 'The Cavity Church' - BADmpfree029 - MPFree - 23 November 2012
* FZV - 'antic.decay' - BADmpfree028 - MPFree - 23 December 2012


Media

Zeropointenergy - Movement & Gesture Interface (Brief)

This is an essay about the MIDI controller I built. In a nutshell, it sends out MIDI data derived from movements & light levels. If you don't want to read it off the screen, please feel free to print a copy. The diagrams are not included here, so if you would like a copy of them please contact me.

Ryan Jordan (Zeropointenergy), 2006.

Movement & Gesture Interface (MGI)

"That which is born of the flesh is flesh; & that which is born of the spirit is spirit"

Christ, in John 3:6.

Introduction

Working with laptops & computers can offer seemingly infinite possibilities in sound creation, but what about the computer as a performance tool? Again the possibilities for sound can be huge, & one could argue that the sound is all that matters. But what really involves & connects people, & what exactly is a performance? A performance can be seen as ritual, a ceremonial action in which alternative worlds can be suggested & temporarily made real for the audience.

So what does this have to do with computer sound performances? When creating sound worlds there are implied landscapes, meanings, emotions & so on within that world & only that sound world. There may be no visual stimuli for the audience, which isn't necessarily a bad thing; however, some people enjoy having visual as well as sonic stimuli, because it adds to the reality of this imaginary world. This paper will focus on the human connection to reality, imagination & environment through the use of technology in performance.

With reference to Wishart's Operational Fields, Stelarc's body-based performances, various MIDI controllers for the body & sociological views, the purpose & meaning of the Movement & Gesture Interface (MGI) shall be explained.

Natural Technology

Technology has always been part of human existence & is natural to us. We shape it, it shapes us, we shape nature to accommodate it, & nature shapes itself to accommodate us. Stelarc has been exploring the body, technology & culture since the 1970s & he makes a very similar statement:

Technology is what defines being human. It's not an antagonistic alien sort of object, it's part of our human nature. We shouldn't have a Frankensteinian fear of incorporating technology into the body.[1]

He has created many works on the body, including Amplified Body, in which he amplified body processes such as brain waves, muscles, pulse, blood flow, limb motion & body posture. Part of his philosophy states that the body is (for him) "an impersonal, evolutionary, objective structure"[2] & that we should focus more on altering this physical framework in order to access significantly different philosophies & thoughts about the world. His statement is well founded & interesting, but is the body really impersonal? If we each have our own bodies, then surely they are personal; ultimately they are our own, & without them we simply would not have contact with the world. We should not forget this, because if we begin to alter & modify our bodies too drastically they may cease to function, or we may become culturally & socially unacceptable & limit our lives as a result. Similarly, if we modify our existing bodies we alter our moral standards of what is acceptable, & if this spans too far we may get lost in a vain quest for selfish perfection. To conclude this section: we cannot achieve new philosophies & thoughts through our idea & perception of the body alone. It must go hand in hand with mental preparedness, understanding, & a rational reason for the need to do so.

Senses, Sensors & Receptors & the Environment

Our physical body is the direct link & connection to our understanding & interpretation of our world & our reality. Through our senses of taste, touch, smell, vision & sound (& the debatable sixth paranormal sense, which is probably more to do with perception) we can detect, locate & manoeuvre through the world. Our senses are mediated by receptors, which can be placed into five general groups[3]:

1. Chemoreceptors - stimulated by changes in the concentration of chemical substances, e.g. smell & taste.
2. Pain receptors (nociceptors) - stimulated when tissue is damaged; may be triggered by excessive exposure to mechanical, heat or chemical energy.
3. Thermoreceptors - sensitive to temperature changes.
4. Mechanoreceptors - detect changes that cause the receptors to become deformed; these sensory receptors are sensitive to mechanical forces, such as changes in pressure or the movement of fluids, e.g. proprioceptors are sensitive to changes in muscles & tendons.
5. Photoreceptors - the eyes, sensitive to changes in light.

Mapping & representing these sensors & receptors of our body via technology is fairly easy; after all, machines can do jobs our bodies already do, but more efficiently, & it must be remembered that technology is natural to us. What is more difficult is deciding how we use sensors & receptors, what their purpose is & why we use them. It may be a good idea to put an infrared distance sensor across a room & have it trigger sounds when a person breaks the beam, but if there is no thought or philosophy behind it, it is disposable.

Stelarc is by no means the only person who has experimented & researched using the human body & technology. In fact body & technology interaction is now very much inescapable, especially with mobile phones. A group called FoAM "transform entire environments into responsive spaces where people are inextricably part of influencing their environment by their presence, actions, & even intentions."[4] Using many sensors & switches, such as accelerometers, light emitting diodes (LEDs) & stretch sensors, FoAM works in public spaces & tries to create tension between the real & the imaginary, with an interest in systems that can sense rather than merely detect absence or presence. They also encourage users to engage with computational systems as they would a living entity, as the system should be capable of interpreting the input given by the performer into meaningful responses for the audience. This is extremely useful, because there is often an overwhelming sound presence in a computer performance, & with the visual & physical presence of someone moving the sound the audience will be in a better position to interpret the intent of the performer's actions.

Yoichi Nagashima has been developing & researching the use of sensors as interactive communication interfaces. Nagashima's research project PEGASUS (Performing Environment of Granulation, Automata, Succession, & Unified-Synchronism) has produced many systems with sensors such as[5]:

* Heartbeat sensor using optical information at the human earlobe.
* Electrostatic touch sensor with metal contacts.
* Single/dual channel electromyogram sensor with direct muscle noise signal (muscle sensor).
* Vocal breath sensor measuring expansion/contraction of the breast & stomach.
* SHO breath sensor (the SHO is a Japanese mouth organ) measuring breath pressure.
* Bio-feedback system used for detecting the performer's cues without the audience knowing, delicate control of graphics & sounds, etc.

These exploratory sensors are mainly used to control MIDI & offer the performer a new, or possibly old & forgotten, way to connect with the sound & the audience, & freedom over the performance. As well as helping us to challenge philosophies & thoughts about our bodies & how we interact with the environment, these extensions of control over the computer are a natural evolution of the traditional idea of a musical instrument.

A Musical Instrument or a Performance Tool?

What is the difference between a musical instrument & a performance tool? A musical instrument restricts the freedom of movement & theatrical expression; a performance tool enhances it, though it still has its own limitations & restrictions. A performance tool can be seen more accurately as an extension & progression of a musical instrument.

Musical instruments are mainly related to breathing, plucking, strumming or hitting, but what happens when we have "some way of mapping bodily or vocal gestures into the flow properties of a sound"[6]? Again we return to Stelarc: we have to change our thoughts & philosophies about musical instruments & musical performance. Now we are not only concerned with a traditional musical performance but also with a technological sound performance; a performance where the musical instrument is the human body.

Is a traditional performer more talented than a technological one? A technological performance employs physiological-intellectual behaviour just as a traditional one does, & by practising with his/her interface the performer will gain greater control over this new instrument, as in traditional practice. The main problem that arises between these two disciplines is a lack of understanding of each other. This is why it is important for the technological side to be open & demonstrate how & why things work, & for the traditional side to listen, understand & not dismiss the technological performance as a gimmick. This will be explored further through Wishart's idea of Operational Fields.

Operational Fields

We may group particular parameters & types of articulation into fields governed by rules. These operational fields may themselves be articulated through other inputs (physiological-intellectual performance behaviour or higher level rules) given by the composer.[7]

The operational fields for a traditional instrument such as a piano will include rules such as the material it is made from (i.e. wood), the ratio of tension between the strings, etc. for the parameter of sound; & the operational fields for performance on a piano will include rules such as play louder, softer, quicker, etc. for the physiological-intellectual interpretation of the piece.
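
To make the idea more concrete, an operational field can be written out as a simple collection of rules. The sketch below is purely illustrative Python, not something taken from Wishart or from the MGI itself; the names are invented & it only restates the piano example above, separating the rules that define the instrument from the rules that govern its performance.

piano_sound_field = {              # rules that define the instrument's sound
    "material": "wood",
    "string_tension": "ratio between strings fixed by construction",
}

piano_performance_field = {        # rules for the physiological-intellectual interpretation
    "dynamics": "play louder / softer",
    "tempo": "play quicker / slower",
}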

For the technological hardware & performance tool we shall look at my design for an interface, the Movement & Gesture Interface (MGI). Three light-dependent resistors (LDRs) are attached to the fingertips & an accelerometer is attached to the head; both send out MIDI data controlled by bodily movements. The operational fields for the hardware will include the following rules (a scaling sketch follows the list):

* Computer chip used & programming language (in this case an Atom chip programmed in Basic [see Appendix 1 for the Basic code & Appendix 2 for the flow chart]).
* The types of sensors used (three LDRs & an accelerometer):
  o Amount of light available for the LDRs.
  o Amount of x-y tilt for the accelerometer.
  o Data range for the LDRs & accelerometer.
* What it is controlling (Max/MSP via MIDI):
  o MIDI limitation (0-127).
  o Max/MSP limitation.
  o Virtual synthesiser limitation.
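
As a rough illustration of how these hardware rules combine, the sketch below rewrites the scaling from Appendix 1 in Python rather than the Atom's Basic. The /7, -3000 & /30 constants come from the Appendix; the assumption that the LDR readings arrive as 10-bit (0-1023) values, & the helper names, are mine.

MIDI_MAX = 127   # MIDI data bytes are limited to 0-127

def ldr_to_midi(adc_value):
    # mirrors "LDRn = LDRn / 7 MAX 127" in Appendix 1
    return min(adc_value // 7, MIDI_MAX)

def tilt_to_midi(pulse_width):
    # mirrors "x = x - 3000" then "x = x / 30 MAX 127", with a guard against negative values
    return min(max(pulse_width - 3000, 0) // 30, MIDI_MAX)

print(ldr_to_midi(900))    # bright light -> 127 (clamped)
print(tilt_to_midi(4500))  # moderate tilt -> 50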

For the physiological-intellectual performance, the operational field will include the following rules (a routing sketch follows the list):

* Personal perception & feedback of the sound in order to alter physical position.
* Limitation of finger, hand, neck & head movement.
* Performer interaction with the environment depending on light levels.
* Move the head forward/backward to control modulation.
* Move the head left/right to control pitch.
* Move the index finger on the right hand closer to/further from light to increase/decrease frequency on a filter.
* Move the middle finger on the right hand closer to/further from light to increase/decrease the Q level on a filter.
* Move the little finger on the right hand closer to/further from light to increase/decrease the gain on a filter.
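
On the receiving side (Max/MSP via MIDI, as listed above), each sensor arrives on its own MIDI channel, following the status bytes used in Appendix 1. The Python sketch below is a minimal illustration of that routing, not the actual Max patch; which sensor ends up driving which parameter is decided in the receiving software, so the assignments here are only examples.

# status bytes 144-148 are the note-on messages sent in Appendix 1;
# the parameter each one drives is an illustrative choice, not fixed by the hardware
PARAMETER_FOR_STATUS = {
    144: "filter frequency",   # e.g. index-finger LDR
    145: "filter Q",           # e.g. middle-finger LDR
    146: "filter gain",        # e.g. little-finger LDR
    147: "modulation",         # accelerometer forward/backward tilt
    148: "pitch",              # accelerometer left/right tilt
}

def route(status, data1):
    # turn one incoming MIDI message into a (parameter, value) pair
    parameter = PARAMETER_FOR_STATUS.get(status)
    if parameter is None:
        return None            # not one of the MGI's messages
    return parameter, data1    # the sensor value travels in the first data byte

print(route(147, 64))          # -> ('modulation', 64)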

As we can see, there are several different operational fields at work in both traditional & technological interfaces, which shows that both are equally valid instruments & controllers.

Although electronic & digital control of sound has advantages over a traditional instrument (reaching very high or low frequencies, playing at any speed, changing sounds, etc.), there is not much difference in the physical control of the sound. With the MGI, as with a traditional instrument, I must practise moving my body in particular ways, remember where to position myself for the best sound, & so on. The MGI may lack finer articulations; for example, I may not be able to physically play softly, but the computer programme being used (such as Max) can be fine-tuned to register velocity very precisely with one sensor, with a second sensor controlling pitch, so I can virtually play softly. Although very different, the two are very similar in their physical ways of movement & gesture.
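
The "virtually play softly" point can be made concrete: the receiving software can reshape the coarse 0-127 sensor value with a response curve, so that small physical movements give fine control at low velocities. The curve below is only a sketch of that idea in Python; the exponent is an arbitrary choice, not something fixed by the MGI.

def soften(sensor_value, curve=2.0):
    # map a 0-127 sensor value onto 0-127 velocity, biased towards quiet playing
    normalised = sensor_value / 127.0
    return round((normalised ** curve) * 127)

print(soften(32))    # -> 8   (a small movement stays very quiet)
print(soften(127))   # -> 127 (a full movement still reaches full velocity)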

Conclusion of the MGI

I developed the MGI to be used as a physical MIDI controller in live laptop performance, in order to map bodily movements to sound & to give the laptop performer more freedom of movement & physical expression. Using three LDRs attached to the fingers allows the performer greater arm movement, so they are not restricted to the keyboard. It also encourages interaction with light &, in the performance space, consideration of the actual lighting to be used. The physical movements of the LDRs when on the fingertips suggest a puppetry of light.

Originally designed to be placed on the back of the hand, the accelerometer is now positioned on top of the performer's head, measuring tilt forwards, backwards, left & right. Placing the accelerometer on top of the head encourages awareness of one's movements, because even just walking moves the head. With these sensors in place on the laptop performer, the MGI will hopefully encourage a more theatrical & environmentally aware performance, with interactions of light & space.

Figure 1: circuit diagram of the MGI [diagram not included in this online version].

The MGI is fairly robust, & the only major cause of malfunction would be damage to the wires & sensors. Still a prototype, the MGI will be developed to incorporate a small safety case & head strap for the accelerometer, & a finger glove for each LDR. The MGI is also very flexible in the performer's choice of control: because it uses MIDI data it can control anything that accepts MIDI.

The only limitation of this prototype is that the flow of data is continuous, so some switches need to be incorporated to allow greater performer control. For example, a simple on/off switch for each component would be useful so that individual movements can be isolated.
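
One way to try out these switches before adding any hardware is to gate each sensor's messages in the receiving software. The Python sketch below is a hypothetical stand-in for the proposed on/off switches, not part of the MGI as built; the status bytes follow Appendix 1 & the enable flags are imagined.

# hypothetical software stand-in for per-sensor on/off switches
enabled = {144: True, 145: True, 146: False, 147: True, 148: False}

def gate(message):
    # pass a (status, data1, data2) message through only if its sensor's switch is on
    status = message[0]
    return message if enabled.get(status, False) else None

print(gate((144, 60, 127)))   # -> (144, 60, 127)
print(gate((146, 60, 127)))   # -> None  (suppressed, isolating the other movements)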

The MGI has good potential as a controller not only for sound but for anything that can be controlled via MIDI. For example, a performance could be audio-visual, with one person controlling everything: images triggered by MIDI note numbers from LDR 1, a filter effect for the images controlled from LDR 2, zoom in/out on LDR 3, & pitch & modulation on a synthesiser controlled by the accelerometer. The MGI requires practice in order to control the output accurately, & fine-tuning of the software being controlled.

To summarise this paper: the MGI enables a human controller's bodily movements & gestures to be mapped & connected directly to a visual & sonic world. A greater & more believable reality is created because the controller has to move within his environment in order to get a sonic or visual response. He is connected to a virtual, imaginary sound world & brings it to life. The most important thing to remember, though, is that we are all already connected to a world & reality through our own natural senses; by adding new sensors to ourselves we become aware of the present environment & have to find new ways to move through it & interact with it.

Appendix 1

Basic code for MGI:

' variable declarations
LDR1 VAR word       ' raw reading from light dependent resistor 1
LDR2 VAR word       ' raw reading from light dependent resistor 2
LDR3 VAR word       ' raw reading from light dependent resistor 3
x VAR word          ' accelerometer x-axis pulse width
y VAR word          ' accelerometer y-axis pulse width
velocity VAR byte   ' scaled MIDI value from LDR2
pitch VAR byte      ' scaled MIDI value from LDR1
freq VAR byte       ' scaled MIDI value from LDR3
accx VAR byte       ' scaled MIDI value from the accelerometer x tilt
accy VAR byte       ' scaled MIDI value from the accelerometer y tilt

Start:

' read the accelerometer pulse widths on pins 1 & 2
PULSIN 1, 1, x
PULSIN 2, 1, y

' read the three LDRs on the analogue inputs
ADIN AX0, 2, AD_RON, LDR1
ADIN AX1, 2, AD_RON, LDR2
ADIN AX3, 2, AD_RON, LDR3

' scale the LDR readings down to the MIDI range (0-127)
LDR1 = LDR1 / 7 MAX 127
pitch = LDR1
LDR2 = LDR2 / 7 MAX 127
velocity = LDR2
LDR3 = LDR3 / 7 MAX 127
freq = LDR3

' scale the accelerometer pulse widths down to the MIDI range (0-127)
x = x - 3000
x = x / 30 MAX 127
accx = x
y = y - 3000
y = y / 30 MAX 127
accy = y

' print the scaled values to the debug terminal
DEBUG [dec LDR1, " ", dec LDR2, " ", dec LDR3, " ", 13]
DEBUG [dec x, " ", dec y, 13]

' send each value out of pin 15 as a MIDI note-on message
' (sensor value in the note byte, velocity 127), one status byte per sensor
SEROUT 15, $C01F, [144, pitch, 127]
PAUSE 20
SEROUT 15, $C01F, [145, velocity, 127]
PAUSE 20
SEROUT 15, $C01F, [146, freq, 127]
PAUSE 20
SEROUT 15, $C01F, [147, accx, 127]
PAUSE 20
SEROUT 15, $C01F, [148, accy, 127]
PAUSE 20

GOTO Start

Appendix 2

Flow chart for the MGI [diagram not included in this online version].

Bibliography

Books:

Bishop, O. 2002. Electronics: A First Course. Newnes/Butterworth-Heinemann Ltd.

Hole, J. W. Jr. & Coos, K. A. 1994. Human Anatomy; Second Edition. Wm. C. Brown Publishers.

Roads, C. 1996. The Computer Music Tutorial. Massachusetts: The MIT Press.

Wilson, S. 2002. Information Arts: Intersections of art, science, & technology. Massachusetts: The MIT Press.

Wishart, T. (S. Emmerson, ed.) 1996. On Sonic Art. Reading: Harwood Academic Publishers.

Journals:

Kuzmanovic, M. & Gaffney, N. (D. D. Seligmann, ed.) 2005. Human-Scale Systems in Responsive Environments. IEEE Multimedia 12(1): pp 8-13. Available from http://ieeexplore.ieee.org/xpl/tocresult.jsp?isYear=2005&isnumber=30053&Submit32=Go&To&Issue [Accessed 1 March 2006].

Nagashima, Y. 2002. Interactive Multi-Media Performance with Bio-Sensing & Bio-Feedback. Japan: Shizuoka University of Art & Culture. Available from http://nagasm.suac.net/ASL/paper/ICAD2002.pdf [Accessed 1 March 2006].

Wei, S. X., Serita, Y., Dow, S., Iachello, G., Fistre, J. 200?. Gestural Audio Software Instruments. USA: Georgia Institute of Technology. Available from www.scholar.google.com [Accessed 1 March 2006].

Manuals:

Basic Stamp Programming Manual Version 2.0c. 2000. Parallax, Inc.

Websites:

www.parallax.com

www.milinst.com

www.memsic.com

www.basicmicro.com

www.scholar.google.com

[1] Wilson, S. Information Arts: Intersections of art, science, & technology. Ch 2.5 Body & Medicine, p. 158.

[2] Wilson, S. Information Arts: Intersections of art, science, & technology. Ch 2.5 Body & Medicine, p. 159.

[3] Hole, J. W. Jr. & Coos, K. A. Human Anatomy; Second Edition. Ch 10 Somatic & Special Sensors, p. 331.

[4] Kuzmanovic, M. & Gaffney, N. Human-Scale Systems in Responsive Environments. Quote taken from the editor's note.

[5] Nagashima, Y. Interactive Multi-Media Performance with Bio-Sensing & Bio-Feedback.

[6] Wishart, T. On Sonic Art. Ch 16 Beyond the Instrument: Sound Models, p. 328.

[7] Wishart, T. On Sonic Art. Ch 16 Beyond the Instrument: Sound Models, p. 329.


© 2005-2013 Bad Sekta