The Brain-Body Augmented Reality Toolkit: Futuristic Food Experience

Chris Hill, Rishi Vanukuru, Amy Zhang, Jessie Lause

Final Project for ATLS 4519-5519: Brain-Body Music Interfaces, University of Colorado at Boulder, ATLAS Institute

Introduction:

This project is the result of the Fall-semester Brain-Body Music Interfaces class taught by Grace Leslie at the University of Colorado Boulder. The purpose of the class was to learn about the principles of assistive devices, interface design, and the history of brain-computer interface (BCI) systems. The final assignment was to design a new brain-body music interface and performance. This document reports on the process, inspiration, and design of our finished project: The Brain-Body Augmented Reality Toolkit: Futuristic Food Experience.

Performance Scenario

In the future, humans eat flavorless pellets and drink flavorless paste for their daily nutrients. While eating, users can opt in to augment the experience with an augmented reality overlay and sounds actuated by their band-passed biosignals, chewing, fork interactions, and straw interactions. Users who can afford the high cost can have their dining experiences overlaid with sounds and objects from their childhood. Those who cannot afford the premium experience can opt for an ad-supported one, in which they are overwhelmed with sounds and objects that promote a particular brand (in this case, Frosted Flakes). From the user's perspective, the system displays visuals through AR lenses on their eyes, plays sound through bone conduction headphones implanted in their skull, and reads their biosignals through a wearable device on their head. In the performance, an observer can view (through a phone and a headset) how this type of futuristic food experience plays out.

Photo from our performance

Initial Project Ideas

Our project is a combination of two ideas that emerged during our first brainstorming session for a new BCI/neurofeedback system:

Idea 1: The Sound of Taste is a BCI and biofeedback performance in which sounds are created and changed as the performers taste flavors chosen or altered by the audience. The performers start with paper strips in front of them, and the audience (or perhaps a composer) can add drops of sweet, sour, salty, bitter, and umami flavors to create different biological reactions in the performers and, in turn, different sounds. Some performers would wear a BCI, and the others would wear electrocardiogram (EKG) or galvanic skin response (GSR) sensors; these readings would then be transduced into musical notes.

Images generated by DALL-E 2, showing its interpretation of what The Sound of Taste performance might look like

Idea 2: Music from Memory Lane. Music evokes memories, but what if we could use memories to create music instead? This would be a compositional tool and performance piece in which musicians sift through a collection of “memories” (photographs, audio recordings, objects) to create a narrative of experienced emotions, which is then translated into music by operating on recorded electroencephalogram (EEG) signals. The audio could also be complemented by music that is consciously chosen or performed.

Image representing the various system components of Music from Memory Lane

CAD:

While designing the wearable for The Brain-Body Augmented Reality Toolkit: Futuristic Food Experience, we wanted a system that would both hold the electrodes in place throughout the entire performance and fit the futuristic aesthetic we were going for. All models were created in Fusion 360 and 3D printed in PETG filament.

The first iteration of the design was based on plastic head massagers, and is also similar in design to products such as the MindTooth [17] and EMOTIV EPOC X headsets. The issue with this design was that it did not have the flexibility we needed at this phase of our ideation process: we were still determining which electromyography (EMG) signals we wanted to process and where on the body we needed to attach the electrodes. Taking these considerations into account, we created a second iteration that allows electrodes to be attached dynamically as needed.

How the first iteration of the wearable is worn on the head

The design of the second iteration of the wearable was inspired by the helping hands used when soldering. The benefit of using helping hands as inspiration is that each electrode can be maneuvered to any position we want, the system can put tension on each electrode to help hold it in place, and any of our team members can wear the device. The ball connectors for the design were inspired by this open-source helping hands project. The OpenBCI Cyton board and battery sit in a holder that is secured to the substrate with 8 socket connectors.

Finished render of all components together

After finishing the design of the main unit that holds the Cyton board, we printed a couple dozen socket connectors to attach to the device. Once all the components were 3D printed and assembled, we tested the placement of all the electrodes.

Finished renders of the device

Unity (Led by Rishi Vanukuru):

In order to have more control over the audio-visual environment and incorporate augmented reality elements into the experience, we decided to move from Max/MSP to Unity [12].

Communicating between OpenBCI and Unity

Using the UnityOSC [13] package, we developed a server application that runs on a laptop computer and forwards OSC messages received from the OpenBCI GUI to any number of devices on the same local network. We built this because the OpenBCI GUI natively allows only 4 simultaneous OSC streams; since we planned to share both EEG and EMG signals (two streams per device), we could otherwise have communicated with at most two other devices. Through the server, we have tested that up to 4 devices (8 OSC streams) can function simultaneously, with a delay of about 1 second.
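Because OSC messages are carried as UDP datagrams, a forwarding server of this kind can simply relay the raw packets to each registered device without parsing their contents. The sketch below illustrates this idea using raw UDP sockets rather than the UnityOSC package we actually used; the listening port and client addresses are hypothetical placeholders.

```csharp
// Minimal sketch of an OSC forwarding server. OSC travels over UDP, so we
// relay the raw bytes from the OpenBCI GUI to every registered client
// without interpreting the OSC content. Ports and addresses are illustrative.
using System.Net;
using System.Net.Sockets;

class OscForwarder
{
    static void Main()
    {
        // Port the OpenBCI GUI streams OSC to (hypothetical value).
        var listener = new UdpClient(12345);

        // Devices on the local network that should receive the data
        // (hypothetical addresses).
        var clients = new[]
        {
            new IPEndPoint(IPAddress.Parse("192.168.1.21"), 9000),
            new IPEndPoint(IPAddress.Parse("192.168.1.22"), 9000),
        };

        var sender = new UdpClient();
        var source = new IPEndPoint(IPAddress.Any, 0);

        while (true)
        {
            // Block until the next OSC packet arrives, then relay it as-is.
            byte[] packet = listener.Receive(ref source);
            foreach (var client in clients)
            {
                sender.Send(packet, packet.Length, client);
            }
        }
    }
}
```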

Processing Toolkit in Unity

We then wrote scripts in Unity that receive the EEG and EMG values and scale them between 0 and 1 based on the minimum and maximum values observed over a configurable period of time. We also developed a mechanism through which these values can be plugged into other objects to influence visual properties such as size and color, and audio parameters such as volume, frequency filtering, chorus, and distortion effects.
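As a rough sketch of what such a script might look like (our own illustration, not the project's exact code), the component below tracks the minimum and maximum values observed over a configurable window and exposes a 0-to-1 result. The AddSample entry point and all names are hypothetical; raw values are assumed to arrive from an OSC receive callback.

```csharp
// Hypothetical normalization component: keeps a rolling window of samples
// and rescales the latest value to 0..1 using the window's min and max.
using System.Collections.Generic;
using UnityEngine;

public class SignalNormalizer : MonoBehaviour
{
    [Tooltip("Window (seconds) over which the min and max are tracked.")]
    public float windowSeconds = 10f;

    // Normalized 0..1 value that other components can read each frame.
    public float Normalized { get; private set; }

    private readonly Queue<(float time, float value)> samples =
        new Queue<(float time, float value)>();

    // Called by an OSC receive callback (assumed) with each raw EEG/EMG value.
    public void AddSample(float raw)
    {
        samples.Enqueue((Time.time, raw));

        // Drop samples that have fallen outside the window.
        while (samples.Count > 0 && Time.time - samples.Peek().time > windowSeconds)
            samples.Dequeue();

        float min = float.MaxValue, max = float.MinValue;
        foreach (var (_, v) in samples)
        {
            if (v < min) min = v;
            if (v > max) max = v;
        }

        // InverseLerp returns 0 when max == min, avoiding a divide-by-zero.
        Normalized = Mathf.InverseLerp(min, max, raw);
    }
}
```

Other components can then read Normalized each frame and map it onto properties such as transform.localScale, a material color, or the audio parameters described below.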

For audio generation, we experimented with both ChucK for Unity and FMOD, but decided to use pre-recorded audio clips and modulate other parameters, as that offered us more technical and creative control.
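As an example of a continuous mapping in this style (again a hedged sketch of our own, reusing the hypothetical SignalNormalizer above), the component below drives the volume and distortion level of a looping pre-recorded clip with the normalized signal, using Unity's built-in AudioSource and AudioDistortionFilter components.

```csharp
// Hypothetical continuous mapping: a looping pre-recorded clip whose volume
// and distortion follow the normalized biosignal.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(AudioDistortionFilter))]
public class ContinuousAudioMapper : MonoBehaviour
{
    public SignalNormalizer signal;   // assigned in the Inspector

    private AudioSource source;
    private AudioDistortionFilter distortion;

    void Start()
    {
        source = GetComponent<AudioSource>();
        distortion = GetComponent<AudioDistortionFilter>();
        source.loop = true;
        source.Play();   // pre-recorded clip assigned in the Inspector
    }

    void Update()
    {
        float t = signal.Normalized;
        source.volume = t;                      // 0..1 maps directly to volume
        distortion.distortionLevel = t * 0.8f;  // keep some headroom
    }
}
```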

The toolkit allows for both continuous mappings and thresholded, trigger-based relationships between the incoming data streams and the audio-visual output.
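A thresholded, trigger-based relationship can be sketched in the same way: the hypothetical component below plays a one-shot clip when the normalized signal rises past a threshold, and re-arms once the signal falls back below it.

```csharp
// Hypothetical trigger mapping: fire a one-shot sound on the rising edge of
// the normalized signal crossing a threshold.
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
public class ThresholdTrigger : MonoBehaviour
{
    public SignalNormalizer signal;      // assigned in the Inspector
    public AudioClip triggerClip;        // e.g. a chewing or fork sound
    [Range(0f, 1f)] public float threshold = 0.7f;

    private AudioSource source;
    private bool wasAbove;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    void Update()
    {
        bool isAbove = signal.Normalized > threshold;
        if (isAbove && !wasAbove)
        {
            // Rising edge: the signal just crossed the threshold.
            source.PlayOneShot(triggerClip);
        }
        wasAbove = isAbove;
    }
}
```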

The AR component was implemented using Unity’s AR Foundation package.

Photos of our Unity setup