
[News] Find out about two participatory workshops on MuseIT Pilot 2!

Updated: Jul 1

The terms “co-creation” and “co-design” entered the mainstream vocabulary of arts, science and technology only recently. But the practice of such collaborative work dates back to the very origins of human societies. These practices remain particularly important in the design of technology for people with disabilities. Initiatives for people with disabilities often begin with the best of intentions, but then frequently fail for lack of proper consultation with the people directly concerned. The Participatory Workshops are just one of the ways we are trying to make sure that proper consultation and co-design take place, and that power is handed to the people who are most competent to use it.

 

Conditions of lock-down were particularly problematic and limiting for people with disabilities - and in reality an unfortunate amplification of the profound problems people face in normal times. The idea of MuseIT was to help people visit museums and concerts online with much more mobility, flexibility and vivid emotional experience than is currently available on the usual platforms. So we are developing ways of helping users not only to visit art installations and events virtually, but also to navigate within and around them, and to “hear” and “touch” experiences in art, either directly or in multimodal ways.


A particular concern during lock-down was how people could make music and perform together. Standard platforms like Zoom suffer from long delays and poor sound, so MuseIT has been working with Stanford University in California to adapt their JackTrip system, which offers high-quality sound and very low latency.


The only thing missing is the physical and emotional contact musicians have when they are playing together in the same space. So we are working with sensors, haptics and avatars to help players communicate physically and emotionally and understand one another’s feelings. The same technology allows us to help people who have profound difficulties in communication to express themselves in music, by offering them, if they wish, control of the music of their own minds and bodies. As in the science fiction of Arthur C. Clarke, users will be able to “think, feel, hear and then communicate” their own inner creative musical sensations.


In the last Participatory Workshop, which took place in Göteborg in Sweden in March, a group of musicians with disabilities explored heart rate sensors and avatars. They made very clear recommendations and guidelines, which were then brought directly to the Edinburgh workshop. In turn, the recommendations and guidelines from the 24th April workshop will shape the direction of the next phase of the project.




Göteborg 11th-12th March


The objectives were:

  • to work with potential users of heart rate sensors to evaluate the extent to which heart rate signals can communicate states of mind and body between remote co-creators (for the purposes of this workshop, the co-creators were in proximity)

  • to work with users to evaluate the effectiveness and comfort of haptic signals in the communication of heart rate and other vibrational information

  • to begin the process of designing “avatars” which will be visual representations of human states of mind and body to assist in emotional communication within the process of remote co-creation

 

The audification and haptic technology, developed by XSL, uses a Python script to read live-streaming heart rate data from a Polar H10 sensor, transform the data to audio, and send and receive data over JackTrip. For haptic transmission, a Python script likewise transmits live heart rate data as haptic output, using the Actronika HSD mk.2 board and two HapCoil Plus actuators.
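
To give a concrete sense of the audification pipeline, here is a minimal Python sketch of the general approach: subscribing to the standard Bluetooth LE heart rate characteristic of a Polar H10 and sonifying each reading as a short tone. This is illustrative only, not XSL’s actual script; the device address is a placeholder, and routing the resulting audio into JackTrip (for example via a virtual audio device) is left out.

```python
import asyncio
import numpy as np
import sounddevice as sd
from bleak import BleakClient

# Standard Bluetooth LE Heart Rate Measurement characteristic.
HR_MEASUREMENT_UUID = "00002a37-0000-1000-8000-00805f9b34fb"
POLAR_ADDRESS = "XX:XX:XX:XX:XX:XX"  # placeholder: your Polar H10's address
SAMPLE_RATE = 44100

def parse_heart_rate(data: bytearray) -> int:
    # Byte 0 is a flags field; bit 0 says whether BPM is 8- or 16-bit.
    if data[0] & 0x01:
        return int.from_bytes(data[1:3], "little")
    return data[1]

def beat_tone(bpm: int) -> np.ndarray:
    # Map heart rate to pitch: 60 bpm sounds at 220 Hz, rising with the pulse.
    freq = 220.0 * 2 ** ((bpm - 60) / 60.0)
    t = np.linspace(0.0, 0.15, int(SAMPLE_RATE * 0.15), endpoint=False)
    envelope = np.exp(-18.0 * t)  # short percussive decay: one audible "beat"
    return (0.4 * envelope * np.sin(2 * np.pi * freq * t)).astype(np.float32)

def on_heart_rate(_, data: bytearray):
    sd.play(beat_tone(parse_heart_rate(data)), SAMPLE_RATE)

async def main():
    async with BleakClient(POLAR_ADDRESS) as client:
        await client.start_notify(HR_MEASUREMENT_UUID, on_heart_rate)
        await asyncio.sleep(60)  # stream for one minute

asyncio.run(main())
```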


There were a number of creative exercises, including improvisation with different heartbeats, recordings of different heartbeats, audifications of the participants’ own heartbeats, and haptic communication of participants’ heartbeats. Audifications of the heartbeats of those in the room produced very strong reactions.


The next stage was communicating the heartbeat through a single haptic actuator that could be held in the hand, pressed against chosen parts of the body or worn in a sleeve. The team hooked up two participants.


Passions were indeed high, and passing around the actuator seemed to create a less rhythmically obliging but even more emotionally informative experience for the participants. The group discussed avatars as a way of communicating states of mind and body from one remote co-creator to another. Various examples were projected on to a screen in the studio, ranging from realistic faces, to cartoon- or emoji-like images, to more abstract shapes. In general, participants responded most enthusiastically to expressive and richly colourful “painterly” images of human faces with aesthetic ambition, fantasy and elements of abstraction.

 

Edinburgh 24th April

 

The objectives of the Edinburgh workshop were:

  • to implement and explore sensor diagnostic technologies

  • to follow on from the Göteborg workshop by exploring colour

  • to continue exploring painterly figurative abstraction

  • to repeat and develop some of the heartbeat exercises

 

The sensor diagnostics included CERTH’s Affective Computing Framework and Catalink’s Mood Estimation algorithms. The Affective Computing Framework service for Music (ACF-Music) comprises a grouping of AI emotion recognition algorithms. Results are plotted on Russell’s two-dimensional valence-arousal space model - serendipitously exactly the same approach as X-System. An important sensor input is Galvanic Skin Response (GSR), measuring electrical conductance across the surface of the skin using, in the case of the Edinburgh workshop, an Empatica watch. For Mood Estimation the Catalink team chose to develop, train and validate a classification model based on Facial Emotion Recognition (FER), using cameras and associated software. Heart rate (HR) and heart rate variability (HRV) from a Polar H10 sensor are also used for stress estimation. We are still processing the results of the diagnostic sensor exploration, but using an interface with XSL’s colour circle of emotions we could, at the time, see significant changes in the emotional state of co-creators.
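
For readers unfamiliar with Russell’s model, the short Python sketch below shows how multimodal estimates can be placed on the circumplex: valence (unpleasant to pleasant) on one axis, arousal (calm to excited) on the other. The numbers and labels are illustrative assumptions, not workshop data or the project’s actual plotting code.

```python
import matplotlib.pyplot as plt

# Hypothetical per-modality estimates, each axis scaled to [-1, 1].
# Illustrative values only - not measurements from the workshop.
estimates = {
    "GSR (Empatica)": (0.1, 0.7),
    "FER (camera)": (0.6, 0.3),
    "HR/HRV (Polar H10)": (-0.2, 0.5),
}

fig, ax = plt.subplots(figsize=(5, 5))
ax.add_patch(plt.Circle((0, 0), 1.0, fill=False, linestyle="--"))
ax.axhline(0, linewidth=0.5)
ax.axvline(0, linewidth=0.5)
for label, (valence, arousal) in estimates.items():
    ax.plot(valence, arousal, "o")
    ax.annotate(label, (valence, arousal),
                textcoords="offset points", xytext=(6, 6))
ax.set_xlim(-1.1, 1.1)
ax.set_ylim(-1.1, 1.1)
ax.set_xlabel("valence (unpleasant to pleasant)")
ax.set_ylabel("arousal (calm to excited)")
ax.set_title("Estimates on Russell's valence-arousal space")
plt.show()
```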


On the whole, users were happy with the cameras, Empatica watch and Polar H10 sensors, although one member of the group found the Polar band too tight. The colour experiments, which projected the single octave of electromagnetic energy of coloured light directly onto an octave of the mechanical energy of sound and music, and further onto a model of tone in the autonomic nervous system, were very successful, both creatively and scientifically, and may well indicate a way forwards for encoding colours, emotions and pitches in avatars. As in Göteborg, co-creators responded most strongly to colourful, expressive and painterly avatars. The music resulting from one of the negative emotional images caused some distress, but this was associated with the quality and amplitude of the sound rather than its expressive content. The results with both audified and haptic heartbeats were much the same as in Göteborg. Again, the haptic heartbeat had the most powerful effect on both the ensemble and observers of the workshop.
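
One way to picture such a projection - assuming the simple octave-transposition reading of it, which may differ from the workshop’s actual mapping - is to shift light frequencies down by whole octaves until they land in the audible range. Since visible light spans roughly one octave (about 400–790 THz), the whole spectrum then maps onto a single musical octave; the autonomic nervous system model is left out of this sketch entirely.

```python
C = 299_792_458.0  # speed of light, m/s

def wavelength_to_pitch(wavelength_nm: float, octaves_down: int = 40) -> float:
    """Transpose a light wavelength (nm) down a whole number of octaves
    so that its frequency falls in the audible range (Hz)."""
    light_hz = C / (wavelength_nm * 1e-9)
    return light_hz / (2 ** octaves_down)

# Forty octaves down, the visible octave lands at roughly 340-680 Hz.
for name, nm in [("red", 700), ("green", 530), ("violet", 400)]:
    print(f"{name:6s} {nm} nm -> {wavelength_to_pitch(nm):6.1f} Hz")
```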

 

Much invention in the mainstream of our society is predictable and uninspiring, but at the margins of society interesting things are still happening. In MuseIT, the world of disability has offered an opportunity for mainstream invention in arts technology to take a leap forwards.



 
