
System-Building Syndrome

Synthesizer forums have a long-running joke about “Gear Acquisition Syndrome” that pictures the stereotypical hobbyist surrounded by Hainbach-endorsed gear, seemingly oblivious to the fact that it’s all intended for making music. Trite as the joke is, the culture that reduces synthesizers to boutique furniture deserves all the mockery it gets. I’m a bit annoyed that I don’t remember who said this: Eurorack is millennials’ equivalent of model trains.

A lot of the audience of this blog is electronic music DIYers and open source fanatics, and many of us have good reasons to sneer at this culture. I myself own very little music hardware and basically never use VSTs, and I admit I feel a little smug when I read about how much money people can blow on synths and even plugins. Then I remember my own history with electronic music.

When I was first getting into SuperCollider — my first ever experience making electronic music — I found my way to DSP conference papers and became totally obsessed. I read every DAFx paper that seemed remotely interesting to me, and I learned C++ so I could implement them in my own UGens. I was pretty certain that with enough effort I could replace any VST plugin I read about.

I thought I was being clever circumventing GAS by building everything myself, but what really happened is that I had fallen into my own version of GAS: I had System-Building Syndrome. System-building can be personally enriching, and no doubt more cost-effective than buying roomfuls of synths, but if your mission is to produce and release music then getting into engineer mode will actively hinder your efforts to do so. When I sat down to make music with all the nice fancy UGens I made, my music sucked. It discouraged me from making more music, so I’d spend more time building UGens since clearly I didn’t have enough of them.

Ultimately, it’s a success story, since that obsession led me to a career in signal processing and an audience for my writing. You learn a lot in the process of building systems, and it’s hard to see how building cool stuff can be a problem. But I’m one of many people who feel spiritually compelled to make music; if I don’t make it I will likely go insane, and as much as I enjoy programming and DSP they are not wired into me on that level. In my mid-20’s I slowly realized that the only way to make music I liked was to stop trying to engineer software, and simply write the most basic SuperCollider code that gets me sounds. At a high level, this is how you make music:

  1. Make music.

  2. Listen to and evaluate music.

  3. Improve music.

  4. Repeat.

Your music improves if and only if you’re spending time in the Core Loop. This is obvious enough, surely tautological for anyone who has learned an instrument, but System-Building Syndrome and its many comorbid afflictions make it easy to lose sight of this. Here are some activities that have nothing to do with the Core Loop:

  • Buying synths or plugins.

  • Watching synth reviews.

  • Watching documentaries on electronic music.

  • Watching interviews with your favorite musicians.

  • Promoting your work on social media.

  • Reading music theory textbooks.

  • Reading artist manifestos.

  • Reading books or papers on the philosophy of music composition.

  • Reading or writing about tuning systems.

  • Reading DSP papers.

  • Implementing DSP algorithms out of papers.

  • Building live performance systems.

  • Building algorithmic composition systems.

  • Reading ModWiggler, Gearspace, CDM, or Lines.

Most of these activities are benign; some are personally enriching or professionally valuable. If you’re an academic then I’m sure you have specific research topics in mind; if you’re a hobbyist and just want to putter around with tech, then obviously you can do whatever you want. But if you’re a serious musician and you’re committed to making the best music you can, then you have to internalize that the above pursuits are at best side projects or supplements to musical growth, and at worst tactics of procrastination. Unchecked, they can become ways to avoid confronting how your own music sounds.

People who joke about GAS are at least self-aware about its distant relationship to music, but DIY projects — audio effects, live performance systems — have a sneaky way of giving you the impression that they’re part of the Core Loop when they’re not. When I was busily coding up virtual analog filters in SC, I really did believe they would make my music cooler and more distinctive than if I were using the vanilla SuperCollider installation. These kinds of projects are especially tempting to musicians stuck in a creative rut or feeling dissatisfied with their own work. We have to understand that these activities will not dig us out of such situations. There is no “one weird tip” to music. You get into the Core Loop; that’s the only way to do it.

I want to elaborate particularly on algorithmic/procedural/generative composition, since it’s a useful case study and one that I have considerable experience with. [1] The biggest misconception about algorithmic composition is that it saves time compared to composing manually. The fantasy goes: I’ll just build this system to write melodies for me, perhaps even to write entire tracks autonomously, and I’ll never have to put in the effort for the little details again.

Of course, the reality is that after building what you envisioned, you’re still left with the work of actually setting your algorithm’s parameters, auditioning points in the parameter space by hand to find the most compelling music. We’ve arrived back at the Core Loop. You can’t avoid it. Regardless of the level of abstraction or autonomy your system has, you have to set aside ample time to experiment with the system that you’ve built. William Fields’ system allows him to randomly generate patches in a variety of musical styles, but he also spends a lot of time curating and tweaking its output; it’s nothing like the dream of “set it and forget it” that many people have when they start with algorithmic composition.

FieldsOS is a successful system by any measure, but here’s a scenario that I suspect is more common. Someone — perhaps a young SuperCollider user with a lot more confidence than production ability — starts building an elaborate algorithmic live performance system, finishes most of the features he wants two weeks before his undergrad final show, tries to make music with it, and belatedly realizes “well, damn, I still actually have to sit down and learn how to use this thing.” He has left himself too little time for making and evaluating music, he’s too exhausted by all the software engineering to focus on the art, and the results are audibly half-assed. The cliché of art being 1% inspiration, 99% perspiration rings true; if the inspiration eats the perspiration, then the art usually sucks.

Letting algorithmic composition predominate over actual composition is a trap that often catches musicians with a software background. I understand why. If I have dozens of files that I want to rename in a systematic manner, I’ll cook up a shell script to get it over with. It seems logical enough that if I want to make dozens of tunes, I just need to write the correct automated tool to take care of that. But this reasoning falls apart: creating music is nothing like renaming files. Art as a whole isn’t a computational process, even when it employs systems at the surface level. To think that I can somehow be more artistically prolific by putting in less effort is a contradiction and a programmer-brain delusion; in fact, algorithmic composition often seems to require more effort than manual composition of comparable quality. I’m down with algorithmic composition as much as anyone, and I use algorithmic techniques all the time in my work, but over time I’ve learned to keep realistic expectations: algorithms cannot reduce the amount of time you need to spend in the Core Loop, nor do they make the Core Loop more efficient. More often than not, they delay starting the Core Loop in the first place.

I focus on algorithmic composition because it’s a clear example, but so many hobbyhorses typical of “wonk” musicians are similar. You can obsess all you want over nonstandard synthesis or cybernetics or wavelet transforms, but none of them will do your sound design for you, and if you want good sounds you’re still tweaking envelopes and EQ settings for hours. You can build the ultimate live electronics performance system you’ve been dreaming of, but you’ll also have to spend a year learning how to use it. You can edit Xenharmonic Wiki all day, but tunings on paper bear little relation to tunings in tunes. I stress again that these kinds of diversions can be as gratifying as making music — and yes, they can impact your music, sometimes dramatically — but they should never be confused with actual music-making.

If your mission is to perform or release music, know what you’re getting into if you plan on adding side quests. The more ambitious and far-reaching a system-building project is, the more distant it is from the basic process of iterating on sounds. Overgeneralizing too early is a great way to ensure that this kind of project fizzles out. Successful system-building integrated into a musical practice is usually a highly incremental process, and one backed with experience; always build yourself tiny systems before you start building the big one you envision. (You may be surprised how musically powerful even the most elementary works of software can be.) And never, ever build systems with the expectation that they will develop your musicianship for you.

Some might find my assertions about how music should be made too reductive or binary. There are certainly musicians and artists throughout history whose systems seem more of a centerpiece than the actual sounds — I’m thinking in particular of serial composers, whose work I enjoy especially when I read about its construction. And maybe you find that you don’t want to make music anymore and you’re perfectly content in music tech; I have known many people who followed that trajectory. It’s your choice in the end; you have agency over how you allocate your time for creative and technical work, and it’s all cool. I’m just here to give practical advice: try taking the simplest path to making it sound good. Contain the distractions and big visions, keep asking yourself and others how it sounds, and you might get there.