The global futurist music and technology festival MUTEK celebrated its 20th anniversary in Montreal this September.
From daytime workshops and panels to immersive experiences and late-night club excursions, MUTEK Montreal captured the collective imagination of innovators from a range of fields including algorithmic music, musical AI, and audiovisual experiences. While the festival’s scope extended well beyond these three topics, let’s focus on them to unpack some key takeaways for forward-thinking creators.
Algorithmic and generative music
Algorithmic music is a creative and technical process where music can make itself based on minimal human input. It’s the producer’s role to create the structure and the parameters of the algorithm, but once these are established, the music is free to grow and evolve in a generative fashion. This outcome has been dubbed generative music, and is embraced by producers such as Brian Eno. Generative music making can be likened to gardening; you plant a seed and create the conditions for the music to grow on its own.
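The gardening analogy can be made concrete with a toy sketch. The following Python example (illustrative only, not tied to any tool discussed here) shows the producer’s role reduced to three decisions: the scale, the seed, and the step rule. Once those are set, the melody grows on its own.

```python
import random

# Illustrative generative-music sketch: the producer "plants" a scale,
# a seed, and a step rule; the melody then evolves without further input.
C_MINOR_PENTATONIC = [60, 63, 65, 67, 70]  # MIDI note numbers

def generate_melody(length, seed=42):
    """Random-walk over a scale: each note steps a small distance from the last."""
    rng = random.Random(seed)  # a fixed seed makes the "garden" reproducible
    index = rng.randrange(len(C_MINOR_PENTATONIC))
    melody = []
    for _ in range(length):
        melody.append(C_MINOR_PENTATONIC[index])
        step = rng.choice([-1, 0, 1])  # the rule the producer chose
        index = max(0, min(len(C_MINOR_PENTATONIC) - 1, index + step))
    return melody

print(generate_melody(8))
```

The same seed always yields the same melody, while a new seed grows a new one from identical conditions, which is exactly the generative trade-off Eno describes.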
During MUTEK’s IMG program, which constitutes the festival’s daytime talks and workshops, a variety of tools for algorithmic and generative music making were discussed. Many of them come in the form of Max for Live patches, which work like plugins inside Ableton Live. The following patches can be found at maxforlive.com and range in cost from free to 12 USD.
1. Integral Clock Divider
The Integral Clock Divider is a useful tool for creating generative polyrhythmic sequences. It’s great for creating interesting drum patterns, slowly evolving melodies, and fast, complex arpeggiator-like effects.
2. Adlais IV
Adlais’ free-drawn 32-step pitch sequencer and multi-track Euclidean velocity pattern are driven by a powerful, versatile Master Clock capable of generating complex melodies and rhythmic patterns.
3. WurrmGen
WurrmGen is a simple MIDI device for creating generative music through randomization. It generates note information based on the input it receives.
4. ProbablyGEN
ProbablyGEN is a three-channel ‘timing’ generator MIDI effect built around a step-sequencing approach.
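Several of these patches, Adlais in particular, lean on Euclidean patterns: a fixed number of hits spread as evenly as possible across a fixed number of steps. One simple way to compute such a pattern in Python (a generic sketch, not the actual implementation of any patch above) is the ‘bucket’ method:

```python
def euclidean_rhythm(pulses, steps):
    """Spread `pulses` onsets as evenly as possible over `steps` slots.

    Returns a list of 1s (hits) and 0s (rests). This 'bucket' method
    yields a rotation of the classic Bjorklund/Euclidean pattern.
    """
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)  # bucket overflowed: place a hit
        else:
            pattern.append(0)  # no overflow: rest
    return pattern

# E(3, 8) yields a tresillo-like pattern familiar from countless dance tracks.
print(euclidean_rhythm(3, 8))
```

Running several of these patterns at different (pulses, steps) ratios against one clock is one way to get the evolving polyrhythms these devices are built around.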
Musical AI
Employing the concepts of algorithmic creation alongside machine learning, artificial intelligence holds potential in the creative space for autonomous music production and previously unimagined musical outcomes. Iranian music maker and technologist Ash Koosha is the founder of Auxuman, a startup that creates “virtual beings.” Among them is YONA, who has been trained on data sets of melody, rhythm, and the English language, and has been releasing music as an artist of her own volition since 2018.
Made in collaboration with digital creator Isabella Winthrop, YONA is an ambitious, CGI-enhanced AI pop idol whose lyrics, expressive voice, and chords are churned out via generative software. Perhaps most impressive is YONA’s ability to write compelling poetry, equal parts abstract and logical, rivaling the writing of many modern-day lyricists. One might think of YONA as an extension of Koosha’s and Winthrop’s own cognition; however, Koosha was quick to dismiss this notion, attributing all intellectual capability to YONA and raising fundamental questions about the nature and relevance of authorship itself.
Audiovisual experiences
Visuals and A/V productions are nothing new to the live music space in 2019, but the notion of composing sound and image in tandem is still developing and seldom seen. Ryoichi Kurokawa’s performance was a standout example of audiovisual synthesis, where sound and image were inextricably linked. Kurokawa built and traversed a virtual forest with musical gestures. In Kurokawa’s world, a low-end burst brought us to a new scene, while a rattling tone cluster expanded the framework of a tree’s root system.
Ryoichi Kurokawa. Photo by Bruno Destombes.
Among the leading tools to facilitate this kind of production is TouchDesigner, “a visual development platform that equips you with the tools you need to create stunning real-time projects.” While TouchDesigner is a virtually limitless but relatively complex creative environment, Max/MSP offers more user-friendly audiovisual options.
Video Rack is a patch (also available from maxforlive.com) that clones a drum rack for triggering video clips and still images. By triggering both a video rack and a drum rack with the same gestures, audiovisual arrangements can be composed in real time. The result could look something like stop-motion animation performed by drumming on a Push or MIDI drum pad.
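The underlying idea is easy to sketch outside Ableton. In this Python toy example (hypothetical file names, not Ableton’s or Video Rack’s actual API), each incoming MIDI note indexes both an audio sample and a video clip, so a single pad hit drives both racks:

```python
# Illustrative mapping: one MIDI note number triggers both a drum
# sample and a video clip. File names here are made up for the example.
DRUM_RACK = {36: "kick.wav", 38: "snare.wav", 42: "hat.wav"}
VIDEO_RACK = {36: "forest.mp4", 38: "ripple.mp4", 42: "spark.png"}

def on_midi_note(note):
    """Return the (audio, video) pair fired by a single pad hit."""
    return DRUM_RACK.get(note), VIDEO_RACK.get(note)

print(on_midi_note(36))
```

Because both lookups share one key, the drummer’s timing becomes the video edit: every rhythmic gesture is simultaneously a cut in the visual arrangement.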
Keeping it human
While the MUTEK program celebrated experiences with high production values and impressive technology, I was often struck most by the works and performances that incorporated subtle, low-tech gestures. Matmos, for instance, a Baltimore-based duo known for decades of experimentation and co-productions with visionaries such as Björk, live-sampled the popping of a plastic container, composing an entire track on the fly. Using the simplest of technology in 2019, a plastic jug and a microphone, Matmos brought us into their creative process and showed us how their work is created, making one of the strongest human connections at the festival.
No matter how radically we reinvent our creative realities and transcend the human experience as we know it, a friendly symbiosis of man and machine seems all the more inviting after the 20th anniversary of MUTEK Montreal.
September 4, 2019