The relationship between music and the mind fascinates scientists and composers alike. We picked the brains of musicians, data scientists and medical researchers exploring the potential of this ever-pulsating field.
Moises Horta Valenzua sits at a desk in the bedroom of his Prenzlauer Berg WG, wearing a flimsy plastic headset secured by a clip on his lower left earlobe. The noise of the busy street outside blends with a series of apparently random tones in complex, constantly shifting rhythms. In this makeshift studio, surrounded by homemade instruments constructed from manikin hands, fidget spinners and bedsprings, Valenzua is making music with his brainwaves.
The technology that detects brainwaves through electrodes – the electroencephalogram or EEG – was first utilised musically by Alvin Lucier in his 1965 piece “Music for Solo Performer”. A set of electrodes monitored Lucier’s alpha brain waves: oscillations from 8 to 12 Hz, created by neurons firing electric impulses. The signals passed through a number of loudspeakers attached to instruments. The result: a man in a headband sitting quite still, slowly opening and closing his eyes amidst a cacophony of percussive sounds.
Valenzua’s set-up works on a similar principle but with a different process. A single electrode detects a range of brainwave frequencies from his forehead. The data is fed into a program on his computer which, using machine-learning software, recognises set patterns in it. “Through neural network algorithms, it can detect specific reactions in the raw data,” he explains. “That means I can train the computer to recognise specific thoughts. I can try thinking about cats, for example, get the computer to learn the brainwave pattern that thought makes, and associate it with a specific sound or set of parameters.”
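The training loop Valenzua describes is, at heart, a standard supervised-classification pipeline: record labelled brainwave samples, fit a model, then map each recognised class to sound parameters. The sketch below is purely illustrative and is not his software – it uses a toy nearest-centroid classifier on synthetic “band power” features standing in for real EEG readings, and the labels and sound parameters are invented for the example.

```python
import random

# Toy stand-in for EEG band-power features (delta, theta, alpha, beta).
# In a real setup these would come from analysing the electrode signal.
def fake_eeg_sample(label):
    base = {"cats": [0.2, 0.5, 0.9, 0.3], "rest": [0.7, 0.4, 0.2, 0.1]}[label]
    return [x + random.uniform(-0.05, 0.05) for x in base]

def train_centroids(samples):
    """Average the feature vectors per label: a minimal classifier."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(v[i] for v in vecs) / n for i in range(4)]
    return centroids

def classify(centroids, vec):
    """Return the label whose centroid is nearest to vec."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], vec))

# Each recognised "thought" is associated with synth parameters.
SOUND_MAP = {"cats": {"pitch": 440, "filter": 0.8},
             "rest": {"pitch": 110, "filter": 0.2}}

random.seed(0)
training = {lab: [fake_eeg_sample(lab) for _ in range(20)]
            for lab in ("cats", "rest")}
model = train_centroids(training)
params = SOUND_MAP[classify(model, fake_eeg_sample("cats"))]
print(params)
```

The fragility Valenzua mentions – a model that “won’t work the next day” – follows from this design: the centroids are fitted to one session’s signal, and any drift in electrode contact or mental state shifts the features away from them.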
But it’s not quite that easy. As well as the background noise caused by electronic disturbance, and the amount of electrical signal lost in the skull and scalp before it can get to the electrode, there’s the fact that we don’t necessarily know what we’re thinking all the time. “You can take an algorithm that recognises specific thoughts, and use it straight away, and it will work,” says Valenzua, “but that model probably won’t work the next day.”
From data to aesthetics
While Valenzua’s pieces embrace the chaos of brainwave readings, for others it’s the process of filtering out the chaos that makes brainwave-generated music interesting. Data scientists Olivia Haas and Ulf Schöneberg of Charlottenburg’s The unbelievable Machine Company have built an 8-electrode EEG set using parts salvaged from Schöneberg’s father’s basement. “My father was a technician and when I was a kid, he made his own EEG,” he explains. Schöneberg decided to rebuild his father’s EEG device while recovering from a foot operation. “I had to lie around for a long time and I thought the EEG from my childhood was as big as a refrigerator; it must be easier now. I found that you could get what had been the size of a fridge as a small chip, 2mm by 2mm, for $400.”
What had started as a welcome distraction from his slow recovery from the operation became an exercise in data representation. Although not musical himself, Schöneberg was inspired by conversations with his musician son to see if the data could be turned into music. He created software that would generate music drawing on an algorithm built from Irish folk melodies. While the algorithm provides the musical material, the EEG data affects how it is generated. “It responds to different brainwave frequencies, so you can do some things to alter the music,” he explains. “Open or close your eyes, concentrate on something – this affects things like tempo, volume, and timbre.” The software also produces a real-time visualisation of the brainwave frequencies and locations: an aesthetic rendering of the data in criss-crossed lines and colours reminiscent of an animated Kandinsky painting.
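The mapping Schöneberg describes – eyes open or closed, concentration, feeding into tempo, volume and timbre – can be sketched as a simple function from relative band power to musical parameters. The specific bands, scalings and parameter names below are assumptions for illustration, not his actual rules; alpha power rising with closed eyes and beta with concentration are the standard textbook associations.

```python
def eeg_to_music_params(band_power):
    """Map relative EEG band power to musical parameters.

    band_power: dict of relative power per classic frequency band,
    assumed normalised so the values sum to 1. The mapping itself is
    an illustrative guess, not Schoeneberg's software.
    """
    alpha = band_power.get("alpha", 0.0)   # rises with eyes closed / relaxation
    beta = band_power.get("beta", 0.0)     # rises with concentration
    theta = band_power.get("theta", 0.0)   # rises with drowsiness

    return {
        "tempo_bpm": round(60 + 120 * beta),    # concentration speeds it up
        "volume": round(0.3 + 0.7 * alpha, 2),  # relaxation raises volume
        "brightness": round(1.0 - theta, 2),    # a stand-in for timbre
    }

relaxed = eeg_to_music_params({"alpha": 0.6, "beta": 0.1, "theta": 0.3})
print(relaxed)  # {'tempo_bpm': 72, 'volume': 0.72, 'brightness': 0.7}
```

A real implementation would smooth the band powers over time before mapping them, so the music changes gradually rather than jittering with every reading.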
Schöneberg looked to his colleague Haas – who has a PhD in neuroscience – to provide him with an understanding of the brain. “We work and play with data; that’s our actual job,” says Haas, referring to their work as data scientists at The unbelievable Machine Company. Fundamentally, Haas explains, she and Schöneberg are mathematicians – they are more interested in creating models to understand data than in creating tools for artists. As she puts it: “These brainwaves are also data, or data points, that one can utilise. The music is just one possible interpretation of the brain data. It’s the act of interpreting it that makes it beautiful.”
Interfacing the brain
“I’m very sceptical of approaches that try to enable people to consciously make music using brainwaves,” says Benjamin Blankertz. Blankertz chairs the Neurotechnology Group at Technische Universität Berlin and heads the Berlin Brain-Computer Interface (BBCI) project. Despite his scepticism, Blankertz’s team suggests that music itself could form the basis for a universally accessible brain-computer interface. “We used a version of Depeche Mode’s ‘Just Can’t Get Enough’, arranged for three instruments – the user would concentrate on one of those instruments, played in a loop. The loops contained tiny mistakes, occurring at different points in time. They were so slight that you could only realise the deviation if you were concentrating specifically on that instrument.” By choosing which instrument to concentrate on, users created a specific brain response that the BBCI can read.
So perhaps it’s in the power of the brain as a listener that new technologies can be developed. Rona Geffen, an Israeli producer based in Berlin, has been using EEG readings to develop her own therapeutic practice with sound. Geffen’s roots are in hard techno and rave, but a family tragedy led her to explore the healing aspects of her work. “My brother had a brain injury,” she explains. “They did invasive brain surgery after he had started to recover – a pressure wound from the surgery became infected, and the infection killed him. And I thought, what if there was a solution that could be used without the surgery? Because the brain is such a delicate organ, but it’s changing all the time. It can heal itself.”
A residency at the Spatial Sound Institute in Budapest gave Geffen the chance to experiment with 4DSOUND – an omnidirectional, fully immersive sound system that moves sound around and between audience members. During the residency, she became increasingly interested in 4DSOUND’s potential beyond a musical performance. She began studying sound therapy, in particular the use of certain resonant frequencies supposed to promote physical or mental wellbeing. Using a set of tuning forks and chimes calibrated to these frequencies, Geffen created pieces of music that, when played through the 4DSOUND system, would immerse her audience.
Geffen wanted to understand what effect this was having on her listeners. “I wanted to have numbers and scientific facts,” she explains. “So I invited a neurologist – a neurofeedback specialist – to take EEG readings from participants.” Neurofeedback analyses brainwave data to detect frequencies associated with certain mental states such as alertness, anxiety or panic. Geffen invited participants to an experiment at the MONOM 4DSOUND studio in Berlin’s Funkhaus. Using 19 electrodes, the neurologist monitored participants’ brain activity during the session. As publication is still pending, Geffen is keen to keep the results confidential. “I’m looking for a new, non-invasive type of medicine,” she says. So could this be a groundbreaking discovery?
While it has been shown that brainwave frequencies adapt in response to certain sound frequencies, neurofeedback is still something of a fringe science. But among more mainstream medical practitioners, too, the connective attributes of the listening brain present an interesting opportunity. Dr. Carsten Finke of Charité’s Zentrum für Musikermedizin has had astonishing results with amnesiacs. A few years ago, he worked with a cellist who had lost his memory due to brain lesions caused by herpes simplex encephalitis. The cellist could not only play pieces from memory, but also generate new musical memories. Finke took well-known pieces of music composed before the cellist fell ill, such as Vivaldi’s Four Seasons, and paired them with similar-sounding pieces composed afterward. Asked to say which he knew better, the cellist named the older scores 93 percent of the time. He also recognised 77 percent of pieces he had played earlier in the day, suggesting he had the capacity to learn new music. Similar tests of musical memory were also conducted on a non-musical patient, and proved equally successful.
Musical memory can trigger autobiographical and other types of memory – resulting in sometimes highly detailed recollections that would otherwise be inaccessible. Creating new musical associations with particular memories, and triggering these memories with music, could help ease the everyday challenges amnesiacs face. As well as providing a respite from the emotional stress of memory loss, a piece of music could, when played at certain times, trigger a memory to take medication, or be used to bring up memories associated with a specific person to compensate for forgotten faces and names.
Whether composed through EEG devices or used to develop new brain interface technology, or whether the basis of a new healing practice or a progressive aid in memory loss, music provides us with a unique way into the brain’s complexity. The practical applications might still be some way off – in medicine as well as in music – but the potential is fascinating.