Making Synthesized Sounds More Acoustic
I have been experimenting a lot with finding ways to get more acoustic sounds out of synthesizers. These sounds don’t need to be perfect recreations of any particular real instrument, but I want a piece of the complexity and depth that those have, and also to investigate “Pinocchio” synth patches that fall short of becoming a real boy in hopefully interesting ways.
Computer music types often jump to physical modeling, a field that I adore and have casually researched. But with the exception of modal synthesis, most physical modeling paradigms take considerable software engineering tenacity to get good results, especially the finite difference models, but waveguides too. I do intend to explore them further, but I also believe that some cool sounds can come out of chains of completely ordinary oscillators and effects. In my experiments, I’ve come across a bunch of little tricks that can help lend more realism to those kinds of synth patches. Many of these apply to sophisticated physical models too (after all, physical models can’t deliver you from having to do sound design).
In general, randomize everything a little bit and modulate everything with slow, smooth random LFOs.
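Here’s the kind of thing I mean, as a minimal SuperCollider sketch; the specific parameters and ranges are arbitrary placeholders, with LFNoise2 supplying the slow, smooth random LFO:

```
(
{
    // slow, smooth random drift on cutoff and pitch; ranges are arbitrary
    var cutoffLfo = LFNoise2.kr(0.3).range(800, 1200);
    var detuneLfo = LFNoise2.kr(0.2).range(-0.1, 0.1);   // roughly +/- 10 cents
    var sig = Saw.ar(220 * detuneLfo.midiratio);
    LPF.ar(sig, cutoffLfo) * 0.1 ! 2
}.play;
)
```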
Real percussive envelopes have a very tall initial spike. Inspecting the waveform of an unprocessed xylophone hit, I was surprised by how loud the transient of a typical percussive instrument is compared to its resonating tail.
The high dynamic range can make such sounds tough to bring up in the mix, and can often be addressed by adding clipping or similar distortion that only gets driven during the initial transient. This improves “bite” as well. However, acoustic sounds by nature have higher dynamic range, and clipping and compression can take away from that. Find a balance that works for you.
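One way to sketch that in SuperCollider: layer a very short, loud attack spike on top of a longer tail, and run the sum through a soft clipper so that only the spike really distorts. The envelope times and drive amounts below are just guesses.

```
(
{
    var spike = EnvGen.ar(Env.perc(0.001, 0.03, 4));    // short but very loud transient
    var tail  = EnvGen.ar(Env.perc(0.005, 1.5, 0.3), doneAction: 2);
    var sig = SinOsc.ar(800) * (spike + tail);
    // softclip mostly bites during the spike; the tail stays fairly clean
    sig.softclip * 0.2 ! 2
}.play;
)
```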
Key tracking (modifying synth parameters based on register) is essential. No acoustic instrument has the same physics in every register, and some have very limited pitch range in the first place. I usually at least key track amplitude and the cutoff of a lowpass filter. Don’t get discouraged if something sounds good in one octave but bad if you transpose it. You may even need entirely different patches for different octaves.
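A hedged sketch of what key tracking might look like, with amplitude and lowpass cutoff both following the note frequency; the exact curves are made up and will need tuning by ear.

```
(
SynthDef(\keyTracked, { |freq = 440, amp = 0.2|
    // amplitude tapers off gently with register; cutoff follows the fundamental
    var trackedAmp = amp * (440 / freq).pow(0.3);
    var cutoff = (freq * 8).clip(200, 12000);
    var sig = Saw.ar(freq) * EnvGen.ar(Env.perc(0.01, 1), doneAction: 2);
    Out.ar(0, Pan2.ar(LPF.ar(sig, cutoff), 0, trackedAmp));
}).add;
)
// try it in a couple of registers:
Synth(\keyTracked, [\freq, 220]);
Synth(\keyTracked, [\freq, 880]);
```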
In a tonal percussive synth, it’s essential that partials decay at different rates. A rule of thumb for damped physical resonators is that the decay time is roughly proportional to the inverse of the square of frequency. For example, going up an octave will multiply the decay time by about 0.25. This is not only true of partials within a note, but even of different keys of many mallet instruments. (In a piano, different registers have different physical constructions including different numbers of strings per key, which I believe is specifically compensating for this phenomenon.)
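Here is that rule of thumb as a quick decaying-sine sketch; the base decay time and the roughly bar-like partial ratios are made up for illustration.

```
(
{
    var baseFreq = 300, baseDecay = 2.0;
    var ratios = [1, 2.76, 5.40, 8.93];   // roughly free-bar-like inharmonic partials
    var sig = ratios.collect { |r|
        var freq = baseFreq * r;
        var decay = baseDecay * (freq / baseFreq).squared.reciprocal;   // decay ~ 1/f^2
        SinOsc.ar(freq) * EnvGen.ar(Env.perc(0.001, decay)) * r.reciprocal
    }.sum;
    DetectSilence.ar(sig, doneAction: 2);
    sig * 0.2 ! 2
}.play;
)
```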
Synthesized drums made using oscillators benefit from some subtle stereo detuning.
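For example (the detune ratio here is arbitrary, and the pitch sweep is just there to make it drum-like):

```
(
{
    var env = EnvGen.ar(Env.perc(0.001, 0.4), doneAction: 2);
    var freqs = [180, 180 * 1.003];   // left and right get slightly different pitches
    SinOsc.ar(freqs * XLine.kr(2, 1, 0.05)) * env * 0.3
}.play;
)
```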
You can spruce up standard ADSR or percussive envelopes by using multiple little spikes in sequence, a bit like an 808 clap envelope. These spikes can be obvious or subtle.
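A sketch of a multi-spike envelope built from explicit breakpoints; the spike spacing, levels, and curve are guesses in the general direction of an 808 clap.

```
(
{
    // three quick spikes followed by a longer decay
    var env = EnvGen.ar(
        Env(
            levels: [0, 1, 0.2, 0.9, 0.15, 0.8, 0],
            times:  [0.001, 0.01, 0.001, 0.01, 0.001, 0.25],
            curve: -4
        ),
        doneAction: 2
    );
    WhiteNoise.ar * env * 0.2 ! 2
}.play;
)
```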
Add little noise bursts and puffs to every sharp transient. Delay the noise bursts relative to each other a little bit and randomize all properties slightly. Even if the bursts are subtle, the effect will add up tremendously. Noise doesn’t need to be white or pink; crackly impulsive noise is fun too, and more metallic noise is possible using banks of bandpass filters or inharmonic FM.
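Something like this, say; each burst gets its own random delay, bandpass center, decay, and level (all ranges arbitrary), and the randomness is re-rolled every time the sketch is evaluated.

```
(
{
    var tone = SinOsc.ar(400) * EnvGen.ar(Env.perc(0.002, 0.8), doneAction: 2) * 0.3;
    var bursts = 4.collect {
        var burst = BPF.ar(WhiteNoise.ar, exprand(1000, 8000), 0.3)
            * EnvGen.ar(Env.perc(0.001, rrand(0.02, 0.08)));
        // stagger each burst slightly relative to the transient
        DelayN.ar(burst, 0.02, rrand(0.001, 0.015)) * rrand(0.05, 0.15)
    }.sum;
    (tone + bursts) ! 2
}.play;
)
```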
Adding a little puff of noise before the transient can sound really nice for reed instruments, and I’ve gotten a pretty decent thumb piano sound with it by simulating the thumb scraping against the key. Watch yourself, Four Tet.
Add “box tone” to every sound source using a bunch of random peaking filters that boost different bands by ±2 dB (maybe more if you’re feeling adventurous!). Wiggle the parameters around slowly if desired, which is pretty ad hoc but might mimic physical changes in temperature, posture, grip of the instrument, etc. Box tone is a good idea in general to compensate for the relative cleanliness of an all-digital signal path. You can even use multiple layers of wiggly EQ with gentle nonlinearities sandwiched between them.
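Here is a rough box-tone sketch: a serial chain of peaking filters with random centers and slowly wandering gains in the ±2 dB range. The band count and Q are arbitrary.

```
(
~boxTone = { |in, numBands = 8|
    numBands.do {
        var freq = exprand(150, 8000);
        var db = LFNoise2.kr(0.1).range(-2, 2);   // slowly wandering boost/cut
        in = MidEQ.ar(in, freq, 1.0, db);
    };
    in
};

{ ~boxTone.(Saw.ar(110) * 0.1) ! 2 }.play;
)
```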
This is an obvious one, but almost all acoustic instruments have resonating bodies, so reverb can make or break the realism of a sound. Use slightly different subtle reverbs on every instrument, and prefer “weird” and metallic reverbs with short decay times. I often just use a bank of parallel comb filters; you can also use short multitap delays. The lush Alesis/Lexicon sound has its place in mixing, but I find that sound a little too smooth to work as an instrument body. Obviously, reverb on master is pretty essential if your instruments are in a concert hall.
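As a sketch, here is the parallel comb bank idea as a crude instrument “body”; the delay times, decay times, and mix amount are all arbitrary.

```
(
~body = { |in, numCombs = 6, mix = 0.25|
    var wet = numCombs.collect {
        CombC.ar(in, 0.05, exprand(0.002, 0.02), rrand(0.1, 0.4))
    }.sum / numCombs;
    (in * (1 - mix)) + (wet * mix)
};

{
    // a repeating noise tap to excite the "body"
    var excite = Decay2.ar(Impulse.ar(1), 0.001, 0.05) * PinkNoise.ar;
    ~body.(excite) ! 2
}.play;
)
```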
Inharmonic modal synthesis (either with actual resonance or with decaying sine waves) can be enhanced with parallel ring modulation with one or more sine waves. This greatly multiplies the number of partials. I like to add a decay on the “modulator” sine waves. This works best for a grungy junk percussion sound, banging on pots and pans that aren’t carefully tuned by instrument builders.
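A sketch of the idea: some inharmonic decaying sines, plus a parallel copy ring modulated by a single decaying sine. Every frequency here is an arbitrary pots-and-pans guess.

```
(
{
    var freqs = [210, 347, 522, 876, 1340];
    var modal = freqs.collect { |f, i|
        SinOsc.ar(f) * EnvGen.ar(Env.perc(0.001, 1.5 / (i + 1)))
    }.sum * 0.2;
    // parallel ring mod with a decaying modulator multiplies the partial count
    var modulator = SinOsc.ar(613) * EnvGen.ar(Env.perc(0.001, 0.3));
    var ringed = modal * modulator;
    DetectSilence.ar(modal, doneAction: 2);
    (modal + (ringed * 0.7)) ! 2
}.play;
)
```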
It’s not just the patch, it’s the sequencing. Humanize velocity, but don’t just completely randomize it; make the velocity follow the musical phrasing. Also, louder playing is correlated with higher timing accuracy, and conversely softer playing benefits from humanization in arrival times.
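One way this could look in a Pbind, using the standard \timingOffset event key; the phrase, the jitter range, and the loud-equals-tight scaling are all arbitrary.

```
(
Pbind(
    \instrument, \default,
    \degree, Pseq([0, 2, 4, 7, 4, 2], inf),
    \dur, 0.25,
    // phrased velocities with a little randomness, not pure noise
    \amp, Pseq([0.7, 0.3, 0.4, 0.8, 0.4, 0.3], inf) * Pwhite(0.9, 1.0),
    // softer notes drift more in time than louder ones
    \timingOffset, Pkey(\amp).collect { |amp| rrand(0, 0.02) * (1 - amp) }
).play;
)
```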
String players performing in détaché style tend to crescendo a little bit when they anticipate the bow change for the next note. I’ve never played winds, but I wouldn’t be surprised if they did something similar.
For instruments that have a separate physical source for every pitch (pianos, mallet instruments, pipe organs, harmonicas), try detuning each key by a tiny, fixed amount to emulate imperfections in the instrument. You can use a lookup table, but my favorite approach is to use the pitch to seed a random number generator; I use the Hasher UGen in SuperCollider a lot for this. Timbral parameters can be similarly randomized in this manner.
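A sketch of the Hasher trick; the detune depth of a few cents is arbitrary, but the point is that a given note number always hashes to the same mistuning.

```
(
SynthDef(\detunedKey, { |note = 60, amp = 0.2|
    // Hasher maps each note number to a stable value in [-1, 1]
    var detune = Hasher.kr(note) * 0.07;   // up to roughly +/- 7 cents
    var freq = (note + detune).midicps;
    var sig = SinOsc.ar(freq) * EnvGen.ar(Env.perc(0.002, 1.2), doneAction: 2);
    Out.ar(0, Pan2.ar(sig, 0, amp));
}).add;
)
Synth(\detunedKey, [\note, 64]);   // every E above middle C gets the same detune
```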
Haas effect panning: delay one channel and put it through a random EQ that’s sloped for high-frequency loss.
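Something like the following, although I’ve simplified the “sloped random EQ” down to a plain lowpass; the delay time and cutoff are arbitrary.

```
(
~haas = { |in, delay = 0.012|
    var wide = DelayC.ar(in, 0.05, delay);
    wide = LPF.ar(wide, 5000);   // the delayed side loses some highs
    [in, wide]
};

{ ~haas.(Pulse.ar(220) * 0.1 * EnvGen.kr(Env.perc(0.01, 1), doneAction: 2)) }.play;
)
```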
SuperCollider’s PitchShift is really great for adding a weird metallic “splash” which I mix back into the signal, sometimes fairly subtly. In general, find weird effects and use them in parallel at a low volume to simulate little imperfections that add up.
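For instance, a parallel PitchShift with a little pitch and time dispersion, mixed back in quietly; the window size and dispersion amounts are arbitrary.

```
(
{
    var dry = SinOsc.ar(330) * EnvGen.ar(Env.perc(0.001, 1), doneAction: 2) * 0.3;
    // granular pitch shifter in parallel for a faint metallic splash
    var splash = PitchShift.ar(dry, 0.05, 1.5, 0.01, 0.01);
    dry + (splash * 0.15) ! 2
}.play;
)
```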
The notion that bass has to be mono is a total myth; it only matters today if your music is being pressed to vinyl. (Trust me, nobody is summing your stereo channels.) Low instruments can absolutely have a stereo image, and using that can make a mix sound less electronic.
If simulating an ensemble, some instruments will be further away than others, which can be mimicked with higher wet-dry ratio in a reverb and some high-frequency loss. This helps particularly for homogeneous ensembles like string orchestras.
Wind and string instruments require significant dexterity or physical exertion to reach the higher notes in their registers. To mimic this, random detuning should be more dramatic for higher notes.
Legato lines on winds and strings are typically synthesized with rapid portamento, which sounds fine. Realism can be further improved by briefly fading in some noise and/or high-passing the source, simulating instability as the instrument physically transitions between consecutive notes.
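Here is one possible sketch: the pitch glides through a short lag, and a crude “is the pitch moving?” detector fades in some breathy bandpassed noise during the transition. The thresholds and amounts are guesses.

```
(
SynthDef(\legatoish, { |freq = 440, gate = 1, amp = 0.15|
    var lagFreq = Lag.kr(freq, 0.08);                      // quick portamento
    // crude glide detector: control-rate slope above a small threshold
    var moving = (HPZ1.kr(lagFreq).abs > 0.1).lag(0.05);
    var tone = Saw.ar(lagFreq);
    var breath = BPF.ar(WhiteNoise.ar, lagFreq * 2, 0.5) * moving * 2;
    var env = EnvGen.kr(Env.asr(0.05, 1, 0.2), gate, doneAction: 2);
    Out.ar(0, Pan2.ar(LPF.ar(tone + breath, 4000) * env, 0, amp));
}).add;
)
// hold a note, then change its freq to hear the transition noise:
x = Synth(\legatoish, [\freq, 330]);
x.set(\freq, 392);
// x.set(\gate, 0);   // release
```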
Saw and pulse waves are pretty obviously synthetic and often need considerable massaging to sound remotely acoustic. Consider other sources if you aren’t having success with those. Breakpoint synthesis (GENDY) is a favorite of mine.
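For example, a lightly filtered Gendy1, which already sounds rough and organic before any further processing; the distribution choices and frequency range are arbitrary.

```
(
{
    var sig = Gendy1.ar(1, 1, 1.0, 1.0, minfreq: 220, maxfreq: 230, initCPs: 12);
    LPF.ar(sig, 3000) * 0.1 ! 2
}.play;
)
```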
Common mixing advice states that you should avoid putting effects or processing on anything unless necessary, as demonstrated by an A/B test. This is wise advice for recordings, but in mostly synthesized music, a long chain of subtle effects can create an “imperfection cascade” that helps get an extra 5% of realism. This only really helps if the sounds are already good enough to stand on their own.
Unisons of multiple instruments can sound more realistic as a whole than exposed instruments, since they can mask each other’s imperfections, especially if those instruments are very different from each other.
Slow parameter fluctuations happen not only for individual instruments, but for the entire ensemble, especially for a homogeneous group like a string quartet. Create one automation and map it to the intensity parameter of many instruments, each of which also fluctuates individually.
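A sketch of that with a shared control bus: one slow LFO stands in for the ensemble-wide automation, and every player multiplies it by its own individual wobble. All names and ranges here are placeholders.

```
(
// a shared "ensemble intensity" bus plus the SynthDefs
~ensemble = Bus.control(s, 1);

SynthDef(\ensembleSwell, { |out = 0|
    // one slow fluctuation for the whole group
    Out.kr(out, LFNoise2.kr(0.05).range(0.6, 1.0));
}).add;

SynthDef(\player, { |freq = 220, intensity = 1|
    // shared swell times this player's own individual wobble
    var personal = LFNoise2.kr(0.3).range(0.8, 1.0);
    var sig = Saw.ar(freq) * intensity * personal * 0.05;
    Out.ar(0, Pan2.ar(LPF.ar(sig, freq * 6), LFNoise2.kr(0.1)));
}).add;
)
// evaluate after the block above:
(
Synth(\ensembleSwell, [\out, ~ensemble.index]);
[220, 277, 330].do { |f|
    Synth(\player, [\freq, f]).map(\intensity, ~ensemble);
};
)
```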