Alejandro Koretzky wears many hats.
Known as a musician in his home country of Argentina, Ale is the Head of Machine Learning & Audio Science Innovation at Splice. An advocate for creators in tech, Ale has worked with The Roux Institute Techstars Accelerator and Cornell Tech’s Breakthrough AI program, and is an advisor to Anyone AI, connecting LATAM AI talent with the industry worldwide. In 2015 he invented tuneSplit, one of the first machine learning based products that allowed users to extract vocals, drums, and other parts from full stereo tracks in real time.
It’s fair to say he’s one of the world’s leading experts at the intersection of audio and machine learning, driving Splice’s innovation strategy and execution around intelligent sound discovery and the future of music creation surfaces.
What is CoSo?
CoSo—short for ‘complementary sounds’—is Splice’s revolutionary new technology that uses AI to quickly find sounds that work together. Drawing on Splice’s vast catalog, CoSo finds loops that complement one another for frictionless creativity. Starting as a mobile app, CoSo is built around Stacks—collections of up to eight looping Layers—that can contain anything from vocals, beats, basslines, guitar, and keys to FX, pads, and more. Swiping right on any Layer prompts CoSo to quickly find a compatible, complementary sound—one that fits not just that Layer, but the entire Stack.
The result is a fluid sound discovery and music-making experience. No search boxes, no auditioning sounds, no browsing—just using your ear to find what appeals to you. The magic happens behind the scenes, allowing you to stay focused on being creative, whether you’re an established artist or have never made music before.
“Historically, music creation has presented high barriers of friction, even to the most proficient songwriters, producers, and composers,” explains Koretzky. “From idea to completion, the process of finding, crafting, and combining the right sounds into something that sounds great is, in most cases, a fragmented and hard-to-replicate ritual.”
“When I joined Splice to build the Audio Science team, the first task was to figure out what machine learning could do for the business,” he continues. “We asked ourselves as creators, ‘What are the points of friction in digital music making today?’ It seemed that even with a vast catalog at your fingertips, finding the ‘right’ sound wasn’t always simple. That led us to the first user-facing AI-powered feature, Similar Sounds, which redefined the way users find samples on Splice.”
“The second point of friction is figuring out how and where to use a sound—often an even more challenging task,” continues Koretzky. “We asked, ‘What can we do that is universal, workflow-agnostic, and almost product-agnostic?’ That distilled into an even more fundamental question: ‘What makes two sounds work well together?’ But for as long as we kept investing in discovery only, or creation only, friction would remain. So two years ago, we redefined our strategy as discovery-driven creation. That became the north star that led us to CoSo.”
For Koretzky, CoSo isn’t just an app; it’s a whole new landscape for music making: “This can become a new paradigm that seamlessly integrates discovery, curation, and creation. To me, that is revolutionary. I believe we will hear a richer and more diverse selection of sounds bubbling up through this technology, and that we might surprise creatives with new ideas. We’re still discovering CoSo’s potential and will continue to develop and improve it. But if we can prove this new paradigm is of value and accelerates the process of finding inspiration and completing an idea, then that is a huge win for our users.”
While it’s possible to build a rule-based system that matches up sounds using annotated metadata like key, chords, or BPM, such approaches have significant limitations and don’t scale or generalize well. CoSo works at a much lower level and goes beyond metadata, matching sounds in real time—always listening, adapting, and shifting its results based on the sonic qualities of the Stack as a whole. CoSo has no prior knowledge of any of the sounds and relies on the audio data itself, leveraging only basic metadata about instruments to give users more creative choices across styles and genres. Given the millions of sounds in the Splice catalog and the up-to-eight Layers within a Stack, anyone using the app can generate billions of combinations as CoSo reacts to creative decisions in milliseconds. But how does it work?
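Splice hasn’t disclosed how CoSo matches sounds, but one common pattern for this kind of audio-first (rather than metadata-first) matching is embedding similarity: map each sound to a vector with a learned encoder, summarize the current Stack as a single vector, and pick the catalog sound closest to it. The sketch below illustrates that idea only; the `embed` function is a deterministic pseudo-random stand-in for a real neural audio encoder, and all sound names are hypothetical.

```python
import math
import random

DIM = 8  # toy embedding dimension; real systems use hundreds

def embed(sound_id: str) -> list[float]:
    """Stand-in for a learned audio embedding: a deterministic
    pseudo-random vector per sound. A real system would run the raw
    audio through a trained neural encoder instead."""
    rng = random.Random(sound_id)
    return [rng.uniform(-1.0, 1.0) for _ in range(DIM)]

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def stack_centroid(stack: list[str]) -> list[float]:
    """Summarize the whole Stack as the mean of its Layer embeddings,
    so a match is scored against the Stack as a whole, not one Layer."""
    vecs = [embed(s) for s in stack]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def complementary(stack: list[str], catalog: list[str]) -> str:
    """Return the catalog sound whose embedding best matches the Stack."""
    target = stack_centroid(stack)
    candidates = [s for s in catalog if s not in stack]
    return max(candidates, key=lambda s: cosine(embed(s), target))

stack = ["vocal_loop_01", "drum_loop_07"]
catalog = stack + ["bass_loop_03", "pad_loop_12", "fx_loop_05"]
print(complementary(stack, catalog))
```

At catalog scale, the brute-force `max` over candidates would be replaced by an approximate nearest-neighbor index, which is what makes millisecond responses over millions of sounds plausible.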
“The rules of music harmony took centuries to crystallize—but at their core, it comes down to the physics of sound,” says Koretzky. “Waves that are matching or not, that are consonant or dissonant, that create stable and coherent relationships. CoSo is modeling all of that and more, even accounting for rhythmic coherence. And because it works at such a primitive level, CoSo can scale to millions of sounds, match across keys and tempos, and generate arrangements that would be almost impossible to achieve using rules and heuristics. That’s as far as I can go in terms of revealing the tech.”
Secretive design aside, introducing AI in the studio can sometimes make creatives feel uncomfortable. Given the highly personal nature of music making, do we really want a machine telling us what sounds go ‘best’ together? Art and music are subjective things, and one person’s ‘best’ might be another person’s ‘boring.’ While math can predict what sounds match in theory, how can AI replicate a spark of creativity, a moment of madness in the studio, or a happy accident that leads to a unique and personal musical moment?
“Music will always be subjective,” says Koretzky. “No machine should ever tell you: ‘This is the best music you could aim for.’ The core principle for CoSo is a good starting point that delivers on the promise of ‘sounding good.’ We can all agree that something that sounds good might not necessarily be interesting. That’s why we are giving users a set of mechanics to navigate from that starting point and eventually make it whatever they want.”
Once you’re feeling your Stack and are ready to take things further, you can open each loop on Splice to download them for your DAW or music-making tools of choice. Soon, you’ll be able to export your Stack directly to Ableton Live, and Koretzky is excited about the potential for subscribers to capture their own audio for use in CoSo.
While the tech is impressive, it’s nothing without the creativity of its users. By offering up sounds that you might not even realize you’re looking for, CoSo can shake you out of your comfort zone, sparking new ideas from a more diverse range of sounds than you might have considered. The fluidity of CoSo means you can follow your ear without being pigeonholed into one genre, style, tempo, or key, contextualizing the Splice catalog in a way that wasn’t possible before. While many of us know what kind of sound we want, sticking to what’s familiar can lead to an echo chamber of ideas. CoSo is the perfect solution to writer’s block, or being stuck in a loop—something many creatives can relate to.
Using human-centered AI and CoSo to create ideas and break down barriers to entry for new music makers is undoubtedly an exciting new era for music production. CoSo is a revolutionary tool made possible by Splice’s vast, industry-leading catalog of sounds. For creators looking for quick results, it’s a groundbreaking app that can generate impressive arrangements in seconds, with no prior music knowledge required. For experienced producers and established artists, it’s a frictionless and fun sound discovery platform that bypasses traditional DAW and plugin interfaces to deliver high-quality building blocks of inspiration for your next track. For everyone, it’s a fun and fluid way to engage with music-making like never before.
For more information, and to download CoSo, head to tools.splice.com/coso.
May 2, 2022