A live audiovisual cinema translating comprovisational music into a dynamic field of procedurally animated dancers, lights, and Earth tremors.
2026. Live audiovisual performance. 30 minutes.
In collaboration with: composer, sound artist & performer Raven Tao.
Noma Tongues is a real-time cinematic audiovisual performance translating polyphonic musical energies into a dynamic field of procedurally animated dancers, lights, and Earth tremors. The music draws from traditional voice layers, electronica, and cinematic soundscapes to create psychedelic compositions that blend synthesized sounds with ancestral vocals. Electroacoustic audio tracks, analyzed and broken out into individual attributes, feed a transductive process inside an Unreal Engine-based world, where each layer (vocal and instrumental, from sub-bass to high frequencies) maps to changes in characters, objects, lighting, and post-processing, generated in real time.
In November 2025, Yiou Wang began researching and developing procedural dancer characters whose bone animations are individually mapped to audio analysis, blended with noise, in an obsession to recreate music-to-dance energy transduction. The procedural dancers expand in complexity and intrigue as MFCCs, onset and release detection, envelope following, peaks, and other audio attributes are individually picked up and mapped onto bone animation with the indeterminacy of added noise.
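The attribute-to-bone mapping described above can be illustrated with a minimal sketch. This is not the project's Unreal Engine pipeline; it is a toy Python/NumPy example, and the envelope coefficients, rotation gain, and noise amount are assumed values chosen for illustration:

```python
import numpy as np

def envelope_follow(signal, attack=0.1, release=0.01):
    """One-pole envelope follower: rises quickly on attacks,
    decays slowly on release (coefficients are per-sample)."""
    env = np.zeros_like(signal)
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else release
        level += coeff * (x - level)
        env[i] = level
    return env

def drive_bone(envelope, gain=45.0, noise_amount=5.0, seed=0):
    """Map an amplitude envelope to a bone rotation in degrees,
    blended with random noise for indeterminacy."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, len(envelope))
    return envelope * gain + noise_amount * noise

# Toy input: a 10 Hz burst for 0.3 s, then silence, at a 100 Hz "frame rate".
sr = 100
t = np.arange(sr) / sr
signal = np.where(t < 0.3, np.sin(2 * np.pi * 10 * t), 0.0)
angles = drive_bone(envelope_follow(signal))
```

In the same spirit, each analysis attribute (MFCC bands, onsets, peaks) would feed its own bone channel with its own gain and noise mix.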
The first test of this project was a remote collaboration: New York-based composer and electroacoustic artist Raven Tao sent the several tracks of a composition to Yiou Wang, then working on a different project in Taipei, and the result was a rendered video accompanying Raven Tao's performance at Neuromantics, organized by UAAD, on December 12, 2025. Following that experimental first test, and after Yiou returned to New York in January 2026, the two began working toward a version fine-tuned for live performance.

Drawing on the relation between real-world music and movement, from trance rituals to 1970s popping dance, Noma Tongues extends the concept of polyphony into worldbuilding: just as traditional song layers voices into collective spatial expression, the musician and visual performer dynamically shape a live polyphonic system in which musical structure becomes a worldbuilding instrument. The avatars (robotuals: robot × ritual) move in real time, different tracks of the music animating different bones with noise, so that dance is both technologized and ritualized as a reinvented trance technique.

The tongues are a polyphony of voices that translate into excitable bodies (avatars called robotuals), while energy translates from bodies (the performers) back into voices, like a two-way transducer.

Polyphony arises directly from embodied wearable sensors, called Antennae. Like insect antennae that capture and transmit signals and pheromones, the artists wear biomimetic headpieces with interactive sensors that deliver their involuntary gestures as procedurally generated sounds and visual earth tremors, and their conscious gestures as summoning protocols for the audio-driven dancing avatars. The media artist and the musician form a two-tongue polyphony on stage, creating an immersive, interactive soundscape in real time.
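The dual routing described here, involuntary gesture to tremor and sound parameters versus conscious gesture to summoning triggers, could be sketched as a threshold split on a sensor stream. The threshold value, names, and data below are purely illustrative assumptions, not the actual Antennae protocol:

```python
import numpy as np

def route_gesture(samples, threshold=0.5):
    """Split a sensor stream into involuntary micro-movement
    (below threshold -> tremor/sound parameters) and conscious
    gesture (above threshold -> summoning triggers).
    The threshold is a hypothetical, illustrative value."""
    samples = np.asarray(samples)
    tremor = samples[np.abs(samples) < threshold]    # drives earth tremors
    summons = samples[np.abs(samples) >= threshold]  # summoning protocol events
    return tremor, summons

tremor, summons = route_gesture([0.1, 0.9, 0.2, 0.7])
# tremor -> [0.1, 0.2], summons -> [0.9, 0.7]
```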



Live performance photography and recorded visual stills at Brooklyn Art Haus, Feb. 28, 2026
Photography courtesy of: Corvus Visio, Sawin, Gumi Lu, Linru Wang, Allan Haocheng Wang






























Stills from the recorded performance visual, Feb 2026
Live performance at New Media Caucus Art Festival, Enhanced Immersion Studio, ASU MIX Center, Mesa, Arizona
Photo courtesy of MIX Center.












