AI Means Everyone Can Produce—But Should They?
- Sam Dubiner & Elijah Jones-Young

- Mar 31, 2024
New developments mean the barrier in music production isn’t one of entry anymore—it’s one of skill.

The decades-long streak of democratisation in the music industry has shown no signs of fading. Beginning in the 2000s with the adoption of Digital Audio Workstations (DAWs) by hobbyists and professionals, the push for accessibility in music creation has landed studio-grade software in the hands of virtually anyone with a passion for production.
Expanded by the transformative, volatile and exponentially growing wave of Artificial Intelligence (AI) production tools, a metaphoric sea of options has emerged. But below these trendy waters a big question lurks: are these new technologies a blessing or a curse for the future of music?
The idea of art being overshadowed by algorithms can be scary. Easily accessible AI music tools can be abused by those outside the audiosphere who view music as a commodity, with the end goal of creating viral TikTok sound bites, or by amateurs looking to project a higher level of skill than they actually have.
In the last few years, we’ve seen visual artists oppose generative AI image models trained on their stolen artwork. There has also been pushback against AI language models, like ChatGPT, becoming a mainstay in student work and amateur copywriting.
But in the music industry, AI isn’t so much a threat as it is a tool for new producers to enhance their work without needing a grounding in music theory or the technical side of production. In 2024, AI isn’t yet a replacement for human creativity, and while amateur producers may look to generative programs for creating music, skilled producers should still be able to maintain their positions because their experiences and styles cannot be emulated.
While generative music only really bridges a gap between the amateur and the mediocre, AI’s value universally shines through as a tool to ease the burden of musical labour by automating the tedious aspects of beatmaking.
For example, the handoff of more technical elements in production to a program like iZotope Ozone’s AI mastering feature would free up a producer’s time at the cost of a final mix that still needs some tuning by human ears. But because not everyone has access to an engineer or time to learn how to master their own tracks, programs like Ozone provide a stepping stone for amateur producers to clear up muddy mixes and work towards a better sounding song—even if they aren’t learning anything in the process.
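To give a concrete sense of the kind of work these tools take off a producer’s plate, here is a minimal sketch of one automated mastering step: measuring a mix’s loudness and normalising it to a common streaming target. This is not how Ozone works internally; it simply assumes the open-source pyloudnorm and soundfile libraries and a hypothetical “rough_mix.wav” file.

```python
# A toy illustration of one step AI-assisted mastering tools automate:
# measure integrated loudness, then normalise the mix to a streaming target.
# Not Ozone's actual processing; assumes pyloudnorm + soundfile and a
# hypothetical input file called "rough_mix.wav".
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("rough_mix.wav")        # load the unmastered mix
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # current loudness in LUFS

# Normalise to roughly -14 LUFS, a common streaming-platform target.
mastered = pyln.normalize.loudness(data, loudness, -14.0)
sf.write("rough_mix_normalised.wav", mastered, rate)
```

Even this tiny step saves time, but it also shows the trade-off: the numbers come out right whether or not the person running it understands why.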
Working without understanding has both upsides and downsides. On one hand, there’s room for innovation in being oblivious to the rules; on the other, innovation can come from knowing those rules well enough to deliberately challenge and break them. Take, for instance, the contrast between electronic music icon Aphex Twin, who openly admits to lacking formal music theory training, and someone like Jacob Collier, a highly proficient musician known for his harmonisation and intricate compositions. Aphex Twin’s approach has led to unconventional creativity, strangeness, and genre-defying tracks like those featured on Drukqs, while Collier’s expertise has drawn criticism for compositions seen as overly complex and less accessible.

Trends in AI Sampling
Right now, sampling is huge in the music industry. It seems like every few months David Guetta or Kygo drops a new rendition of an ’80s or ’90s dance hit, and every few weeks a tribute to Nujabes or J Dilla blows up on TikTok.
Crate-digging, the process of sourcing samples from record stores, thrift shops, and your grandparents’ house, is now a free, digitised activity done through archival YouTube channels or sites like Samplette.io, an AI-powered roulette that draws from a master list of older songs suitable for sampling.
Once a song has been found, paid, free, or DAW-integrated AI solutions can split a sample into several “stems.” These stems consist of the different layers of the song, like piano, drums, guitar, and vocals, all extracted at a quality only slightly worse than an actual studio recording.
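As a rough illustration of how simple this has become, the sketch below drives Deezer’s open-source Spleeter library, one of the free options in this space (paid and DAW-integrated tools wrap similar models behind friendlier interfaces). The file name is hypothetical.

```python
# A minimal stem-splitting sketch using Deezer's open-source Spleeter.
# "found_sample.mp3" is a hypothetical file; commercial tools work similarly.
from spleeter.separator import Separator

# The "4stems" model separates vocals, drums, bass, and everything else.
separator = Separator("spleeter:4stems")
separator.separate_to_file("found_sample.mp3", "stems/")
# Result: stems/found_sample/{vocals,drums,bass,other}.wav, ready to flip.
```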
Higher-end software like RipX uses AI to further break each stem into controllable notes (MIDI) that let producers manipulate a sample’s rhythm and pitch on a micro level. Similar stem-splitting features in DJ software like Serato and Rekordbox can also be used to create smoother mixes by isolating certain instrumental tracks.
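To give a flavour of that audio-to-notes step, the sketch below tracks the pitch of a melodic stem with librosa’s pYIN estimator and writes the detected notes to a MIDI file with pretty_midi. It is a simplified stand-in for what RipX and similar tools do, not their actual method, and the file paths are hypothetical.

```python
# A rough "stem to MIDI" sketch: pitch-track a melodic stem and export notes.
# Not RipX's algorithm; just librosa's pYIN tracker plus pretty_midi.
import librosa
import pretty_midi

y, sr = librosa.load("stems/found_sample/other.wav")   # hypothetical stem path
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
times = librosa.times_like(f0, sr=sr)

midi = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # program 0 = acoustic grand piano

# Merge consecutive voiced frames with the same rounded pitch into one note.
current_pitch, note_start = None, None
for t, hz, is_voiced in zip(times, f0, voiced):
    pitch = int(round(librosa.hz_to_midi(hz))) if is_voiced else None
    if pitch != current_pitch:
        if current_pitch is not None:
            piano.notes.append(pretty_midi.Note(100, current_pitch, note_start, t))
        current_pitch, note_start = pitch, t
if current_pitch is not None:
    piano.notes.append(pretty_midi.Note(100, current_pitch, note_start, times[-1]))

midi.instruments.append(piano)
midi.write("sample_melody.mid")
```

Once the melody exists as MIDI, its rhythm and pitch can be reshaped freely, which is exactly the kind of micro-level control described above.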
While flipping and mixing music is becoming easier than ever, the use of AI to hunt down hidden samples could mean fewer tracks getting an official release because of unlicensed song use.
In 2023, Danny Veekens wrote an article for Tracklib about Google Assistant’s sample-snitching capabilities, raising the prospect of putting AI with powers similar to YouTube’s (currently non-AI) Content ID system into the hands of the public. Reportedly able to detect samples less than a second long, Google Assistant identified the unlicensed sources of audio snippets in the work of notable samplers like Daft Punk, Nujabes and Madlib.
As “sample snitching” becomes easier and more automated, genres built on sampling like jungle and hip-hop could bear the brunt of an overactive content identification system powered by an AI model still in development.
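To make the idea concrete, here is a toy version of automated sample detection: sliding a known break’s chroma features across a suspect track and reporting the closest match. Real systems like Content ID rely on far more robust audio fingerprints, so treat this sketch (and its made-up file names) purely as an illustration.

```python
# A toy sample-detection sketch: slide a snippet's chroma features over a
# full track and report the best-matching position. Not how Content ID or
# Google Assistant actually work; file names are hypothetical.
import numpy as np
import librosa

track, sr = librosa.load("suspected_flip.mp3")
snippet, _ = librosa.load("original_break.wav", sr=sr)

hop = 512
chroma_track = librosa.feature.chroma_cqt(y=track, sr=sr, hop_length=hop)
chroma_snip = librosa.feature.chroma_cqt(y=snippet, sr=sr, hop_length=hop)

# Cosine similarity between the snippet and every window of the track.
n = chroma_snip.shape[1]
scores = []
for start in range(chroma_track.shape[1] - n):
    window = chroma_track[:, start:start + n]
    num = np.sum(window * chroma_snip)
    denom = np.linalg.norm(window) * np.linalg.norm(chroma_snip) + 1e-9
    scores.append(num / denom)

best = int(np.argmax(scores))
print(f"Best match near {best * hop / sr:.1f}s (similarity {scores[best]:.2f})")
```

A matcher this naive falls apart as soon as a sample is pitched, chopped, or filtered, which is exactly the gap that more capable, AI-driven detection looks set to close.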
Moving Forward with AI
As of early 2024, AI can’t replicate a human’s ear. While it’s good at handling the technical and theory-heavy aspects of production, there isn’t yet a replacement for the unique experiences and feelings behind a human being’s individual creative process. AI song generators like AudioCraft, AIVA, Mubert, Beatoven.ai, and Google’s MusicLM (among many others) have popped up in recent years as the easiest way for amateurs or businesses to create their own music without needing to go through an actual artist.
These song generators are the last stop, after melody, harmony, chord and arrangement AI, on the grand ride towards music as a commodity. By almost completely removing the human element from production and putting a computer centre stage, generative song programs wind up creating parodies of the genres they try to emulate rather than innovative works.
What AI is good at right now is supplementing a production, or a still-learning producer’s knowledge. Despite not being trained specifically for music, the language model ChatGPT is capable of teaching basic music theory and giving instrumentation, chord and progression breakdowns for different genres. It’s accessible, free and quick to get an answer from, at the cost of accuracy and humanisation, the latter of which is currently AI’s biggest hurdle.
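For example, a producer could script that kind of question directly, as in the sketch below using the OpenAI Python SDK. The model name and prompt are placeholders, and the answer still deserves a sanity check by ear.

```python
# A quick sketch of asking a general-purpose language model for music theory
# help via the OpenAI Python SDK. Model name and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; any chat model works
    messages=[
        {"role": "system", "content": "You are a patient music theory tutor."},
        {
            "role": "user",
            "content": (
                "Give me a common lo-fi hip-hop chord progression in C minor, "
                "with each chord spelled out note by note."
            ),
        },
    ],
)
print(response.choices[0].message.content)
```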
How does an AI, newly born with all the knowledge fed to it but no real, tangible experiences, rival the emotions of, say, Eric Clapton when he wrote “Tears in Heaven”? How can AI shake things up like Björk, Yellow Magic Orchestra, Kraftwerk, Frank Zappa and the countless other human creatives who have shaped different avenues in modern music?
Maybe we’re entering a post-postmodernist parody age of music, and maybe someday the formulaic Top 40 will truly be distilled down to an algorithm.
Until then, and in the foreseeable future, nothing can replace the human soul and human stories we share through music. It can be scary and it certainly can threaten to commercialise music even more. But at the end of the day, AI is a tool that isn’t going anywhere, so we may as well try to make the most of it.

Quick Facts
Most DAWs come with built-in sounds, synthesisers, and effects, without requiring the purchase of additional material or third-party plugins.
Before choosing a DAW, make sure it is compatible with your operating system: not every DAW runs on every platform, and some plugin formats, such as Audio Units (AU), are only usable on macOS.
AI audio plugins give users access to individual stems, sounds, and other elements of existing tracks that would previously only have been available with the original recordings, allowing users to freely manipulate the sounds of professional tracks.
Zynaptiq was one of the earliest adopters of AI plugins, producing the technology as early as 2012.
List of DAW/AI Plugins
AudioCipher: $30 (one time fee)
Splash Music: $10 (Monthly)
Riffusion 2.0: Free
Mubert: $19-$499 (Varies per track)
Chirp: Free
Google MusicLM: Free
Wavtool: $20 (Monthly)
Synplant 2: $149
iZotope Ozone 11: $199 (Standard), $399 (Advanced)
RipX DAW Pro: $198