Ever since the first days of synthesizers – then MIDI files and Pro Tools – musicians and producers dabbling in electronic instrumentation have had to battle pushback against the legitimacy of their music. For traditionalists, legitimate music had to be performed on something with an analog counterpart. The reality is far from being so black and white, but the perception nevertheless lingers. However, traditionalists on both sides of the spectrum may align in protest against Aphex Twin’s latest collaboration, the midimutant.
Aphex Twin collaborated with ex-Korg engineer Tatsuya Takahashi on the midimutant, which runs on a Raspberry Pi and uses artificial evolution to program synth patches automatically.
How it works: every sound in a population of initially random patches is sent and auditioned via SysEx MIDI messages, then sampled and checked for similarity to a target using MFCC (mel-frequency cepstral coefficient) analysis. The best patches are chosen to form the next generation, with their SysEx patch data serving as genetic material, and the population converges (most of the time) on similar sounds. Unlike a neural network or other machine-learning approaches, this artificial evolution does not need to model the underlying parameter space – i.e. how the synth internally functions to create sound. Midimutant can therefore be used on any synthesiser with a documented SysEx dump format.
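The evolutionary loop described above can be sketched in a few lines of Python. This is a toy illustration, not midimutant's actual code: the patch length, population size, and mutation rate are made up, and where the real system auditions each patch over SysEx and scores it with MFCC analysis, this sketch scores raw patch bytes against a target patch as a stand-in fitness function.

```python
import random

PATCH_LEN = 32        # bytes of SysEx patch data (toy size)
POP_SIZE = 30
GENERATIONS = 60
MUTATION_RATE = 0.02  # per-byte chance of randomisation

def random_patch():
    # SysEx data bytes are 7-bit, so values run 0-127.
    return [random.randrange(128) for _ in range(PATCH_LEN)]

TARGET = random_patch()  # stand-in for the target sound's features

def fitness(patch):
    # Stand-in for: send the patch via SysEx, sample the audio,
    # compare MFCCs to the target. Less distance = fitter patch.
    return -sum(abs(a - b) for a, b in zip(patch, TARGET))

def crossover(a, b):
    # Splice two parents' SysEx bytes at a random cut point.
    cut = random.randrange(1, PATCH_LEN)
    return a[:cut] + b[cut:]

def mutate(patch):
    # Occasionally randomise a byte of "genetic material".
    return [random.randrange(128) if random.random() < MUTATION_RATE else g
            for g in patch]

def evolve():
    pop = [random_patch() for _ in range(POP_SIZE)]
    start_best = max(pop, key=fitness)
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: POP_SIZE // 4]  # best patches become parents
        pop = elite + [
            mutate(crossover(random.choice(elite), random.choice(elite)))
            for _ in range(POP_SIZE - len(elite))
        ]
    return start_best, max(pop, key=fitness)

start_best, best = evolve()
```

Note that nothing here models how the synth turns bytes into sound: the SysEx data is treated as opaque genes, which is exactly why the approach transfers to any synth with a documented dump format.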
More information can be found here.
The main question, however, is where this frontier of artificial music programming will push music next. Artists are often accused of sounding too much like ten others; what happens when a program is in charge of the evolution?
It’s worth contemplating, but not fretting over. Artistic vision will always live on in music, and it isn’t going away anytime soon. Still, what are your thoughts? Sound off below.