EchoNet [IN PROGRESS]

Category
Product Design
Date
Jan 2024
Scope
MaxMSP, JavaScript, Machine Learning

A Machine Learning Approach to Creating Dynamic Visual Music Systems

Building artistic visual music systems currently requires a strong understanding of complex software such as MaxMSP or TouchDesigner. While there are many easy-to-use visual music devices already in existence, they are limited in artistic flexibility and personalization. EchoNet aims to enable musicians, particularly free improvising performers, to easily build personalized visual systems that accurately represent their music and artistic vision.

This is possible through an all-in-one Max for Live device that enables flexible mapping between audio features and visual parameters, as well as training ML models for complex mappings, all within the Max for Live UI. The graphics are generated from media files imported by the artist, which allows for high variation in the output. Additionally, the system can learn the nuances of a performer's unique musical language, which aids in developing a representative visual vocabulary.
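To illustrate the kind of feature-to-parameter mapping described above, here is a minimal JavaScript sketch. All names (`rms`, `centroid`, `brightness`, `hue`, the mapping table, and their ranges) are hypothetical placeholders, not EchoNet's actual API; a real device would receive feature values from Max's audio analysis objects and send the resulting parameters to the rendering engine.

```javascript
// Hypothetical sketch of a flexible audio-feature → visual-parameter mapper.
// Names and ranges are illustrative assumptions, not EchoNet's actual interface.

// Linearly rescale a feature value from its input range to a parameter range,
// clamping to the output bounds.
function scale(value, inMin, inMax, outMin, outMax) {
  const t = Math.min(Math.max((value - inMin) / (inMax - inMin), 0), 1);
  return outMin + t * (outMax - outMin);
}

// One possible user-editable mapping table: each entry ties an extracted
// audio feature to a visual parameter, with input and output ranges.
const mappings = [
  { feature: "rms",      param: "brightness", in: [0, 1],     out: [0.1, 1.0] },
  { feature: "centroid", param: "hue",        in: [20, 8000], out: [0, 360]   },
];

// Apply every mapping to one frame of extracted audio features,
// producing a dictionary of visual parameter values.
function mapFrame(features) {
  const params = {};
  for (const m of mappings) {
    params[m.param] = scale(features[m.feature], m.in[0], m.in[1], m.out[0], m.out[1]);
  }
  return params;
}
```

Because the table is just data, adding or re-routing a mapping amounts to editing an entry rather than rewiring a patch; an ML model for complex mappings could slot in as an alternative to the linear `scale` function.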

The system is on track to be completed by May 2024.

Full-Feature UI - Various Device Configurations

Mapping Diagram