29-Mar-2024

I have supported a range of live events, spanning stage sound and lighting design, rigging and microphone deployment, and front-of-house and monitor mixes, all within smaller spaces including social clubs, school halls and community halls.

The objective of live sound engineering is to provide sound reinforcement such that everyone in the audience shares the experience of being in the optimum location to hear the speaker, orchestra or live band. This differs from recorded sound: live sound is shaped by the room (or space) in which the performance is taking place, by any backline amplification, and by the original natural acoustic sound from singers and instruments - and sound reinforcement takes all of this into account. Where the performance is livestreamed on the Internet, the listener isn't in the same space, so ambient microphones must be mixed in to restore that sense of the room.
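As a rough illustration of that last point, here is a toy Python sketch of blending ambient room microphones under the front-of-house feed for a stream. It is my own simplification, not any particular desk's routing, and the 0.3 gain is a made-up starting value:

    import numpy as np

    def stream_mix(foh_feed, ambient_mics, ambient_gain=0.3):
        """Blend averaged ambient room mics under the FOH feed for a livestream.

        foh_feed: 1-D array of samples from the main mix
        ambient_mics: list of 1-D arrays, one per room microphone
        ambient_gain: how much "room" to add back in (illustrative value)
        """
        ambient = np.mean(ambient_mics, axis=0)   # average the room mics
        return foh_feed + ambient_gain * ambient  # ambience the remote listener is missing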

In a live music setting, the sound will first be converted to an electrical form by a microphone or an instrument pickup, or be generated directly by, say, an electronic keyboard or synthesiser. Other than for a large choir or orchestra, where one is only looking to capture "the sound of the room", in a live setting we will have a number of individual microphones or sources, each assigned to an instrument (or possibly shared), where we really want to isolate that instrument's sound so it contributes cleanly to the overall mix.

There are one or two instruments that will often be heard by every microphone in the room, so we seek to minimise the unwanted pickup: by choosing microphones with directional pickup patterns that focus on the sound source we want to hear, and by microphone placement - typically "up close and personal" with the instrument we do want to hear. Other techniques may include using acoustic shields to limit the spread of the problem instrument - a screen around the drums, or a shield by the trumpet, for example.

Microphone placement for live performance can be a compromise. In a perfect world, and in a recording studio setting (where each instrument is recorded separately and alone), the most authentic sound is heard at a distance from the instrument - the sound develops and the acoustics of the space shape it - which is why we can employ different and more sensitive microphones in recording studios. On stage, the size of the microphone can be dictated by the need to place it close to the instrument (for isolation, or to highlight the wanted sound), and we want rugged microphones that can survive handling on a live gig.
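To put some numbers on "directional pickup patterns": the textbook first-order patterns can all be written as sensitivity = A + B·cos(θ). Here is a small Python sketch (my own illustration, using the standard published pattern coefficients) of how much each pattern rejects a source arriving off-axis:

    import math

    # Standard first-order polar patterns: sensitivity(theta) = A + B*cos(theta)
    PATTERNS = {
        "omni":          (1.00, 0.00),
        "cardioid":      (0.50, 0.50),
        "hypercardioid": (0.25, 0.75),
    }

    def rejection_db(pattern, angle_deg):
        """Level of an off-axis source relative to on-axis, in dB (more negative = better isolation)."""
        a, b = PATTERNS[pattern]
        s = a + b * math.cos(math.radians(angle_deg))
        return 20 * math.log10(max(abs(s), 1e-6))  # floor avoids log(0) at a perfect null

    for p in ("omni", "cardioid", "hypercardioid"):
        print(p, round(rejection_db(p, 180), 1), "dB at 180 degrees")
    # omni 0.0, cardioid -120.0 (a perfect null in theory), hypercardioid -6.0

It also shows why placement matters as much as pattern: a cardioid's rejection is only that deep directly behind it, so angling the mic so the problem instrument sits in the null is half the battle.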

Whilst modern digital sound mixers pack in a variety of effects and tools, normally we are seeking to provide an authentic representation of the original source. Even so, I almost always have to apply some equalisation (EQ) adjustments to reduce or enhance certain frequency ranges of individual input channels (typically one channel is one microphone or individual sound source). An EQ cut may be necessary to tame the room's acoustics or to allow space in the mix for another instrument. For example, I may reduce the level of a guitar or piano playing the same notes / frequencies as those of the lead singer, to ensure that the singer can cut through the mix and be heard clearly. It's not a total cut, of course, just a taming of the competing sources. The most experienced musicians (keyboard players and guitarists, in particular) will ensure their accompaniment complements the leading line, rather than conflicting with it, by harmonising and by playing notes well above or below the lead or melody line, but surgery can be needed when working with less experienced groups!
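For the curious, that sort of cut is just a peaking filter under the hood. Here is a Python sketch of the standard peaking-EQ maths from Robert Bristow-Johnson's widely used Audio EQ Cookbook; the 3 kHz / -4 dB / Q 1.4 values at the end are made up for illustration, not a recipe:

    import math

    def peaking_eq_coeffs(fs, f0, gain_db, q):
        """Biquad (b, a) coefficients for a peaking EQ (RBJ Audio EQ Cookbook)."""
        a_lin = 10 ** (gain_db / 40)        # square root of the linear gain
        w0 = 2 * math.pi * f0 / fs          # centre frequency in radians/sample
        alpha = math.sin(w0) / (2 * q)      # bandwidth term
        b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
        a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
        return [bi / a[0] for bi in b], [ai / a[0] for ai in a]  # normalise a0 to 1

    # A gentle 4 dB cut at 3 kHz (Q of 1.4) on a 48 kHz guitar channel, to make
    # room for a vocal; apply with e.g. scipy.signal.lfilter(b, a, samples).
    b, a = peaking_eq_coeffs(fs=48000, f0=3000, gain_db=-4.0, q=1.4)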

Another feature of most mixers is a set of auxiliary (Aux) outputs. These allow you to route a copy of each input channel to a separate mix. Aux outputs are commonly used to provide monitoring for the band members to help them perform, by feeding just the instruments they need to hear into a loudspeaker or earphones for their own use. In this setting, the lead singer is likely to want a different mix from, say, the guitarist or drummer - the singer will need a lot of their own voice plus, say, the keyboard and the main rhythm instrument, whereas the guitarist will want to hear more of their own instrument - a different mix. Monitor mixes are generally based on "Pre-Fade" Aux settings. That is to say, the amount of each instrument in the Aux mix is independent of the channel fader used for the main (front-of-house) mix. Why? It means that what the band hear in their monitors will not change as you rebalance the mix for the audience throughout the show.
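The pre-fade / post-fade distinction is easy to see as signal flow. A toy Python model of a single channel (my own simplification of what a real desk does, with made-up levels):

    def channel_outputs(source_level, fader, aux_send, pre_fade=True):
        """Toy model of one channel: (contribution to main mix, contribution to aux mix)."""
        main = source_level * fader
        # Pre-fade: the aux send taps the signal before the fader.
        # Post-fade: the send follows the fader, so the monitor mix moves with FOH.
        aux = source_level * aux_send if pre_fade else main * aux_send
        return main, aux

    # Pull the vocal down in the FOH mix mid-show...
    print(channel_outputs(1.0, fader=0.8, aux_send=0.7))  # (0.8, 0.7)
    print(channel_outputs(1.0, fader=0.4, aux_send=0.7))  # (0.4, 0.7) - monitor unchanged

Post-fade sends still have their place - effects sends, for instance, where you do want the reverb to follow the fader - but for monitors, pre-fade keeps the band happy while you work the room.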