FEW things can move us quite like a maestro on the violin. Could a computer soon be able to twist our feelings in the same way, by learning musicians’ best tricks?
Prolody, a start-up based in the Netherlands, is pioneering a new approach to synthesised music that emulates the richness of analogue instruments and the sensitivity of human players. Unlike the flat jangle that often typifies synthetic tones, Prolody’s sounds are full and alive because they are built from human-produced notes. Its system is setting the scene for beautiful music played by a machine with its own aesthetic sense.
The team started with the violin – a notoriously difficult instrument to synthesise. They got a human violinist to play tens of thousands of notes and phrases in the studio, encompassing loud and soft, bright and mellow, trembling and majestic. The goal was to capture as much expressiveness as possible, for a computer to digest and process into a system capable of mimicking that expressiveness.
Creating libraries of sound samples in this way is not new, but Prolody has a twist. “We’re not just recording single notes, we’re paying attention to context,” says the firm’s co-founder, Dennis Braunsdorf. The company has built a machine-readable database from those thousands of samples, tagged with the musical context in which they were played, paying special attention to how notes sound in sequence.
When rendering music using these samples, the computer chooses the note or sound that best meshes with the rest of the piece. The goal is a rendition which sounds more natural than anything existing synthesisers can produce. Prolody is already in talks to license its system to a music software developer, and plans to repeat the process for other instruments.
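The selection step described above — picking the stored sample whose recorded context best matches the phrase being rendered — can be sketched roughly as follows. This is a minimal illustration only: the class names, tags and scoring scheme are assumptions for the sake of the example, not Prolody’s actual system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Sample:
    """A recorded note, tagged with the musical context it was played in."""
    pitch: int                    # MIDI note number of the recorded note
    dynamic: str                  # e.g. "soft", "loud"
    prev_pitch: Optional[int]     # pitch played just before this note, if any
    next_pitch: Optional[int]     # pitch played just after this note, if any

def context_score(sample: Sample, prev: Optional[int], nxt: Optional[int]) -> int:
    """Score how closely a sample's recorded context matches the target context."""
    score = 0
    if sample.prev_pitch == prev:
        score += 1
    if sample.next_pitch == nxt:
        score += 1
    return score

def choose_sample(library: list, pitch: int, dynamic: str,
                  prev: Optional[int], nxt: Optional[int]) -> Optional[Sample]:
    """Pick the right-pitch sample whose recorded context best fits the phrase."""
    candidates = [s for s in library
                  if s.pitch == pitch and s.dynamic == dynamic]
    if not candidates:
        return None
    return max(candidates, key=lambda s: context_score(s, prev, nxt))
```

For instance, when rendering a soft middle C that follows a B and leads into a D, the system would prefer a middle-C sample that was originally recorded in that same melodic neighbourhood over one recorded in isolation.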
The new sound impresses Julian Gregory, a first violinist with the BBC Philharmonic Orchestra in Salford, UK. Traditional synthesisers have trouble with smooth transitions between notes. “The connections between notes are really important and that’s vastly improved here,” Gregory says.
Prolody’s output still isn’t indistinguishable from a human performance, says Trevor Cox at the University of Salford. But with synthesised music already in use in theatrical shows and elsewhere, any improvement will enhance many performances and open new avenues for the technology, says Gregory. Cox points to corporate videos and video games as potential applications.
As well as teaching machines to make authentic sounds, Braunsdorf wants them to learn to perform. He is creating another database of the diverse ways in which musicians interpret a melody. He plans to apply machine-learning algorithms to this data so that a computer can acquire the ability to perform its own interpretation of a score.
Cox sees the potential for such a system. “A lot of pop acts play recorded material and perform live with it,” he says, but one problem is that the backing track can’t alter according to the audience’s reaction.
A gigging computer that could produce its own take on the score at every performance could make that a thing of the past. It looks like the days of soulless muzak are numbered.