Today we’re featuring Re-Compose, an Austrian company that creates software to analyze and re-synthesize digital music. Liquid Notes is the company’s first product. The company’s chief developer, Stefan Lattner, is here to chat.
What inspires you about your work?
I’m interested in both Artificial Intelligence and music, and that led me to bring these fields together to try to help computers better understand musical input.
What does this mean in the context of your product, Liquid Notes?
So, today, many people want to create their own music, and there are thousands of plug-ins to help them do it. Most of these plug-ins, however, deal with sound synthesis or signal manipulation. Very few allow you to create actual notes, the building blocks upon which tracks are built. So users with little composition experience are left on their own when it comes to designing a song from scratch. They can draw on rather static loops, but those don't allow for much personalization. We tried to offer a way for musicians to manipulate their pieces on a level between single notes and entire unchanging loops.
Can you explain how your program works in slightly more detail?
Sure. A musical piece opened and manipulated in Liquid Notes passes through three consecutive steps, all of which can be considered within the scope of Artificial Intelligence. First, the individual tracks of the input arrangement are classified into musically relevant classes like Melody, Harmony, Bass, or Drums. This step is necessary for both the subsequent harmonic analysis and the re-harmonization. Classification is based on properties like polyphonic density, average note length, and variance of pitch.
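As a rough illustration of that first step, here is a hypothetical sketch of feature-based track classification — invented for this article, not Re-Compose's actual code — assuming each track is a list of (start, duration, MIDI pitch) tuples:

```python
# Hypothetical sketch of feature-based track classification, illustrating
# the approach described above (NOT Re-Compose's actual algorithm).
# A track is a list of (start_beat, duration_beats, midi_pitch) tuples.
from statistics import mean, pvariance

def features(notes):
    """Compute the three features mentioned in the interview."""
    onsets = {start for start, _, _ in notes}
    poly_density = len(notes) / len(onsets)          # notes per distinct onset
    avg_note_len = mean(dur for _, dur, _ in notes)
    pitch_variance = pvariance([p for _, _, p in notes])
    return poly_density, avg_note_len, pitch_variance

def classify(notes):
    """Very rough rule-based classifier into the four classes."""
    density, avg_len, pitch_var = features(notes)
    if pitch_var < 2 and avg_len < 0.5:   # repetitive short hits
        return "Drums"
    if density > 2:                       # several notes sound together
        return "Harmony"
    if max(p for _, _, p in notes) < 48 and density <= 1.2:  # low, monophonic
        return "Bass"
    return "Melody"
```

A real classifier would of course learn thresholds from data rather than hard-coding them; the point is only that simple statistical features already separate the track types surprisingly well.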
The next step is the harmonic analysis, which was developed by one of our guys currently living in Hollywood. A detailed description of his work would probably be too involved, but suffice it to say it's a combination of looking up which notes are in the piece, weighting them for harmonic relevance, dividing the whole song into regions with valid chords, and throwing out ambiguous choices by comparing them against detected scales and probability tables (i.e., what is the probability of a certain chord following another).
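The chord-probability idea can be sketched as follows. This is a hypothetical illustration of disambiguating chord candidates with a transition-probability table; the table values and chord names are invented, and a real system would use a full dynamic-programming pass rather than this greedy left-to-right choice:

```python
# Invented transition-probability table: TRANSITIONS[(a, b)] is the
# (made-up) probability of chord b following chord a.
TRANSITIONS = {
    ("C", "F"): 0.3, ("C", "G"): 0.4, ("C", "Am"): 0.2,
    ("F", "G"): 0.5, ("F", "C"): 0.3,
    ("G", "C"): 0.6, ("G", "Am"): 0.2,
    ("Am", "F"): 0.4, ("Am", "G"): 0.3,
}

def best_sequence(candidates):
    """Given a list of candidate-chord lists (one list per song region),
    greedily pick, region by region, the candidate with the highest
    transition probability from the previously chosen chord."""
    chosen = [candidates[0][0]]   # assume the first region is unambiguous
    for options in candidates[1:]:
        prev = chosen[-1]
        chosen.append(max(options, key=lambda c: TRANSITIONS.get((prev, c), 0.0)))
    return chosen
```

For instance, if the second region is ambiguous between F and Am, the table prefers F after C (0.3 vs. 0.2), so `best_sequence([["C"], ["F", "Am"], ["G", "C"]])` resolves to C–F–G.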
The last step, re-harmonization, is a combinatorial problem with a large search space and sometimes more than one optimal solution. Heuristics are the type of algorithm best suited to such problems. They are quite convenient because you only need to define a fitness function that assesses different solution candidates; the optimal solution doesn't have to be known in advance. It is sufficient to know whether one solution candidate is better or worse than another.
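To make that concrete, here is a minimal hill-climbing sketch of heuristic search guided by a fitness function. Everything here — the chord pool and the toy fitness criteria — is an invented placeholder, not Re-Compose's actual scoring:

```python
# Minimal hill-climbing sketch of heuristic search with a fitness
# function. The chord pool and fitness criteria are invented toys.
import random

CHORDS = ["C", "Dm", "Em", "F", "G", "Am"]

def fitness(progression):
    """Toy fitness: reward ending on the tonic, penalize repeated chords."""
    score = 1.0 if progression[-1] == "C" else 0.0
    score -= sum(a == b for a, b in zip(progression, progression[1:]))
    return score

def hill_climb(progression, steps=200, seed=0):
    """Repeatedly mutate one chord at random, keeping the mutation
    only if the fitness function prefers the new candidate."""
    rng = random.Random(seed)
    best = list(progression)
    for _ in range(steps):
        candidate = list(best)
        candidate[rng.randrange(len(candidate))] = rng.choice(CHORDS)
        if fitness(candidate) > fitness(best):
            best = candidate
    return best
```

Note that `fitness` never constructs the optimal progression; it only compares candidates, which is exactly the property that makes heuristic search so convenient here.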
Is this the future of music then? Do you see it as being algorithm-driven?
With ever-increasing pressure on producers to deliver music faster and at much lower cost, algorithm-driven music production will play a very important role among composers for the media (TV, film, commercials, music libraries, computer games, etc.) as well as creators of electronic club music.
However, the big revolution in music is currently happening at the much lower end of the scale, namely with the iPad and other tablets and devices that enable a large portion of the population to make inroads into composing and music production. Algorithms will allow these people to get acquainted with the basics of music and then advance step by step. They might even use that kind of technology to reach a professional level, although it is too early to predict whether that can be accomplished through technology alone.
So, in parallel with catering to composers and producers of "traditional" digital music, we see our mission in kicking off an entirely new paradigm in music creation through our future technologies. We don't intend to take away the magic from well-established and time-proven methods of music making, but to extend the spectrum of creative possibilities beyond current limits.
At Re-Compose, we picture ourselves as technology suppliers for developers of end-customer applications. So our algorithms could be delivered in the form of an SDK, as a kind of "black-box" technology, or in parts even as open-source code to be integrated into software and hardware in need of music analysis and resynthesis. The span of conceivable applications would be limitless.