‘artificial otoacoustics’ is an investigation into a physical deep learning synthesiser that comes to understand itself and its surroundings, exploring how a machine learning system with an unconventional soundmaking apparatus learns to produce sounds, and to use them in dialogue with humans.

The body of the piece is a light-based synthesiser: spinning prisms, rotating mirrors and other objects refract a set of lights onto phototransistors, creating evolving, grainy, organic electronic sounds. Around this, a deep learning system takes control of the input parameters: the speed of motors, the intensity of lights, the positions of actuators and so on. It continually samples the output, slowly learning the ways that its parameters of action shape the sounds coming out. Over time, by making many, many sounds, the system learns both what sounds a given set of parameters is likely to make, and the best set of parameters to reproduce a sound from outside, opening up a terrain for sonic interaction.
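As a rough sketch only, and not the piece’s actual code, the loop below shows one way a system like this could couple a forward model (parameters to expected sound) with an inverse model (heard sound to parameters). Every name, network size and feature choice here is an assumption for illustration: it presumes the controls are a small vector of normalised values and that each recorded sound is summarised as a fixed-length spectral feature vector.

```python
# Illustrative sketch of the learning loop described above, not the real system.
# Assumes: controls normalised to [0, 1]; each sound reduced to a feature vector.
import torch
import torch.nn as nn

N_PARAMS = 8      # e.g. motor speeds, light intensities, actuator positions
N_FEATURES = 64   # e.g. an averaged spectrum of the recorded output

def mlp(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                         nn.Linear(128, 128), nn.ReLU(),
                         nn.Linear(128, n_out))

forward_model = mlp(N_PARAMS, N_FEATURES)   # params -> sound it expects to make
inverse_model = mlp(N_FEATURES, N_PARAMS)   # sound  -> params likely to make it
opt = torch.optim.Adam(list(forward_model.parameters()) +
                       list(inverse_model.parameters()), lr=1e-3)

def set_controls(params):
    """Placeholder: drive the motors, lights and actuators of the instrument."""
    ...

def record_features():
    """Placeholder: sample the audio output and compute its spectral features."""
    return torch.rand(N_FEATURES)

for step in range(10_000):
    # Explore: try a setting of the physical controls...
    params = torch.rand(N_PARAMS)
    set_controls(params)
    sound = record_features()

    # ...then update both directions of the mapping from that single example.
    loss = (nn.functional.mse_loss(forward_model(params), sound) +
            nn.functional.mse_loss(inverse_model(sound), params))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Later, to imitate a sound heard from outside:
#   target = record_features_of(external_sound)
#   set_controls(inverse_model(target).clamp(0, 1))
```

In this hypothetical framing, the forward model is the system’s self-knowledge (what will I sound like if I do this?) and the inverse model is its ear (what would I have to do to sound like that?); training both from the same stream of trial sounds is what the continual sampling in the text would amount to.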