Introducing the AI-backed touchscreen synth from Google

A research team at Google has developed NSynth Super, an experimental open source instrument that uses machine learning and neural networks to generate sounds.

The ongoing Magenta project has been set up to explore how machine learning tools can help people to create art and music in new ways; one of its earlier creations was the NSynth Neural Synthesizer. This uses a deep neural network to learn the characteristics of sounds, and then creates new sounds based on these characteristics.

As part of a bid to make this technology more accessible, the Magenta team has now developed NSynth Super in collaboration with Google Creative Lab. This open source hardware features a touchscreen interface and enables musicians to generate sounds from four different sources.

From 16 original source sounds across 15 pitches, it’s said to be possible to generate more than 100,000 new sounds. One source sound is assigned to each of the four dials, which musicians use to select the sounds they want to explore. Using the touchscreen, they can then navigate the space of new sounds that combine the acoustic qualities of the original four.
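One way to picture how a touchscreen position might blend four corner sounds is bilinear interpolation between four vectors, one per dial. This is only an illustrative sketch, not code from the NSynth Super repository: the function name, the toy three-dimensional "embeddings", and the corner layout are all assumptions made for the example (the real instrument interpolates learned NSynth latent representations and decodes them back to audio).

```python
import numpy as np

def blend_latents(nw, ne, sw, se, x, y):
    """Bilinearly mix four corner embeddings.

    x, y are touchscreen coordinates in [0, 1]:
    (0, 0) is the south-west corner, (1, 1) the north-east.
    """
    top = (1 - x) * nw + x * ne          # blend along the top edge
    bottom = (1 - x) * sw + x * se       # blend along the bottom edge
    return (1 - y) * bottom + y * top    # blend between the two edges

# Toy 3-dimensional vectors standing in for real learned sound embeddings.
nw = np.array([1.0, 0.0, 0.0])
ne = np.array([0.0, 1.0, 0.0])
sw = np.array([0.0, 0.0, 1.0])
se = np.array([1.0, 1.0, 1.0])

# Touching a corner reproduces that corner's sound exactly...
corner = blend_latents(nw, ne, sw, se, 0.0, 1.0)
# ...while the centre of the screen is an equal mix of all four.
centre = blend_latents(nw, ne, sw, se, 0.5, 0.5)
print(corner, centre)
```

In this picture, each of the four dials would swap a different embedding into one corner, and every touch position yields a distinct blend, which is how a small set of source sounds can fan out into a very large space of new ones.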

NSynth Super isn’t a commercial product, but you can download all the source code, schematics and design templates for the prototype on GitHub. Find out more on the NSynth Super website.

About the Author

Director and DJ Ian French (Naif) is passionate about every genre of music, from Breakbeat to Drum & Bass to Techno and House. If he were to describe his preferred style, he would probably call it simply electronic music. Besides his love for music and DJing, his other passions are travel, wine, and eating too much good food!