
Secrets of making realistic orchestral MIDI sequences – Part I

The most prevalent criticism of MIDI sequencing is that the sequenced music sounds too machine-like. The problem is not with MIDI itself, but with the way MIDI sequencing is approached. In essence, the basis of a great recording is a great performance, and MIDI, if properly handled, gives you the luxury of crafting nothing but great performances, though it means taking the time to edit every note if necessary. That may sound daunting, but when you add up the time spent doing take after take to get the best performance, and factor in performer burnout, the total is often pretty much the same.

The trick is to observe and understand how the actual instrument a sample was taken from responds in reality. From there, we use the editing power and real-time control (continuous controllers) of MIDI to emulate that response, but that's only part of it. The secret to making orchestral MIDI sequencing sound real is not to use the same instrumentation guidelines you'd use for a real orchestra. This may sound a bit contradictory at first, but realize that samples don't respond in the context of a sequence the way the actual instruments do in the context of an orchestra in a live, reverberant space.

Attack and Sustain
Let’s start with the very first sound an instrument makes: the attack. An instrument’s attack holds a tremendous amount of information, including what type of instrument it is (in acoustic studies, subjects were unable to tell the difference between a sustained horn sound and a string sound once the instrument’s attack was removed). What gives away a sample-based orchestral score more than anything are the beginnings of phrases: the embouchure, or attack portion, of the sample. Brass, woodwind, and string attacks are always followed by a slight dip in volume before the sustained sound begins. Apparently, the inventors of synthesizers understood this as well: in an analog synthesizer, we have the means of electronically duplicating the envelope of an acoustic sound in the ADSR section (Attack, Decay, Sustain, Release). The decay is the change in volume between the attack and the sustain portion (the duration) of the sound, and the release determines how long it takes for the sound to die out completely. Keep in mind, though, that a preset decay built into a sample behaves exactly the same way on every note and will not sound real.
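To make the ADSR idea concrete, here is a minimal sketch in plain Python. The function name, timing values, and levels are illustrative assumptions rather than settings from any particular synthesizer; they simply trace how an attack-decay-sustain-release envelope shapes a note's level over time.

```python
def adsr_level(t, note_length, attack=0.05, decay=0.10, sustain=0.7, release=0.3):
    """Return an amplitude from 0.0 to 1.0 at time t (seconds) for a note
    held for note_length seconds. All times and levels are illustrative."""
    if t < 0:
        return 0.0
    if t < attack:                   # attack: rise from silence to the peak
        return t / attack
    if t < attack + decay:           # decay: dip from the peak down to the sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_length:              # sustain: hold until the note is released
        return sustain
    if t < note_length + release:    # release: fade to silence after the note ends
        frac = (t - note_length) / release
        return sustain * (1.0 - frac)
    return 0.0

# Example: sample the envelope every 50 ms across a half-second note
levels = [round(adsr_level(i * 0.05, note_length=0.5), 2) for i in range(20)]
print(levels)
```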

Performance
How does this apply to our purposes? The first step in creating a realistic orchestral sound is to use the expression pedal of your keyboard to emulate the slight dip in sound between attack and sustain, de-emphasizing the portion just after the initial attack of nearly every note. As you hit a note, pull back on the volume briefly to create the kind of attack that humans make with their wind or string instruments. Remember, a sound isn’t simply held at full volume: there’s the attack, followed by a sustain that starts a bit quieter than the initial attack. This can be performed along with the part, or recorded after the fact as MIDI data. The key is to have the variation that comes with performance. The same applies to the release of the note. Again, use the expression pedal to create a release that fits in context with the music. If you have a preset release that takes three beats to die out, you’ll have a contextual problem if it has to be gone in two beats.
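If you record that dip after the fact as MIDI data, the shape is roughly this: a quick pullback of expression (CC#11) right after the attack, then a partial recovery into the sustain. The sketch below is a rough illustration in plain Python; the controller values, curve shape, and tick math are assumptions to be reshaped by ear, not a prescription.

```python
def expression_dip(peak=110, dip=85, ticks_per_beat=480):
    """Return (tick, value) pairs for CC#11: start at the attack peak,
    pull back briefly, then recover toward the sustain level.
    All values are illustrative; shape them by ear for each sample."""
    events = [(0, peak)]                               # full expression at the attack
    for step in range(1, 5):                           # pull back over about a 16th note
        tick = step * ticks_per_beat // 16
        value = peak - (peak - dip) * step // 4
        events.append((tick, value))
    recover = dip + (peak - dip) // 2                  # settle partway back up
    for step in range(1, 5):                           # recover over the next 8th note
        tick = ticks_per_beat // 4 + step * ticks_per_beat // 8
        value = dip + (recover - dip) * step // 4
        events.append((tick, value))
    return events

# Each pair would be written into the track as a CC#11 message at that tick.
print(expression_dip())
```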

Quantizing
When dealing with marcato strings, for example, it’s necessary to understand the sample’s attack and deal with it accordingly. There is a time lag before the attack reaches its loudest peak. In this case, quantizing the strings would make them sound late, so it becomes necessary to anticipate the notes. When you perform these kinds of samples, you have to play ahead of the tick (metronome click) or behind it, depending on the sample. Listen to the part. Play according to what your ear tells you, not what your hand wants to do in relation to the sequencer’s clock. Quantize nothing when it comes to strings or winds. Drums are a different matter, but don’t fall into the trap of over-quantizing percussion either.
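If you end up nudging already-recorded notes rather than re-playing them, the idea looks like the sketch below: shift a track by a fixed per-patch offset instead of snapping notes to the grid. This is plain Python with an invented note format and made-up offset values, not any sequencer's actual feature; the numbers would be set by listening to each sample's attack.

```python
# Per-patch timing offsets in ticks (negative = play ahead of the click).
# These numbers are illustrative; set them by ear for each sample library.
PATCH_OFFSETS = {
    "marcato strings": -40,    # slow attack: anticipate so the peak lands on the beat
    "pizzicato strings": 0,    # immediate attack: leave it alone
    "french horns": -25,
}

def offset_track(notes, patch):
    """Shift every note's start tick by the patch's offset instead of quantizing.
    Each note is a (start_tick, duration_ticks, pitch, velocity) tuple."""
    shift = PATCH_OFFSETS.get(patch, 0)
    return [(max(0, start + shift), dur, pitch, vel)
            for start, dur, pitch, vel in notes]

# Example: a marcato string line played on the grid, pulled 40 ticks early
line = [(0, 480, 60, 90), (480, 480, 64, 92), (960, 960, 67, 95)]
print(offset_track(line, "marcato strings"))
```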

Multi-sample Patches
For most of your multi-sample patches, it’s advisable to have two or three velocity-split layers for different dynamic ranges, but be careful not to overdo it when you play.
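As a sketch of what a two- or three-way velocity split looks like in practice, the snippet below maps an incoming note-on velocity to a dynamic layer. The layer names and breakpoints are assumptions for illustration; every sampler handles this mapping its own way.

```python
# Illustrative velocity breakpoints for a three-layer string patch:
# soft (p), medium (mf), and loud (ff) samples of the same instrument.
LAYERS = [
    (0, 63, "violins_p"),
    (64, 100, "violins_mf"),
    (101, 127, "violins_ff"),
]

def layer_for_velocity(velocity):
    """Return the sample layer a note-on velocity falls into."""
    for low, high, name in LAYERS:
        if low <= velocity <= high:
            return name
    raise ValueError("velocity out of MIDI range (0-127)")

print(layer_for_velocity(72))   # -> 'violins_mf'
```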

Output Gain
Another trick is to lower the output gain on your samplers themselves. In soft passages where a sound is really exposed, it’s easy to hear the zippering in volume (the audible stepping as the level changes in discrete increments) as the part makes its entrance. With the output gain set low, the sensitive, quieter range where you’d hear zippering is disguised.

Sustain Pedal
The sustain pedal can lead to a lack of dynamics since you are just holding a note or a chord; there’s no shape or direction to the sound. Avoid the use of the sustain pedal.

To Be Continued…
Performance is just one part of using MIDI to create realistic orchestral renditions. Stay tuned for the next tech tip, where we will cover the instrument samples themselves and the point at which we abandon traditional orchestral guidelines.