Computer Automation in sound-reinforcement consoles: Computer automation and live sound shouldn't be mutually exclusive. New technology can minimize uncertainty and save time and costs.
Aug 1, 1997 12:00 PM, Nick Franks, Geoff Muizr and Dave Lewty
It is often said that in live sound, there is no rehearsal and no second chance; each performance is considered to be a unique and unpredictable event. As a consequence, the myth of live sound mixing as potentially mortal combat with chaotic forces has been propagated for many years. It has been used by engineers to create their rock'n'roll, can-do image.
This article explains how computer automation of sound-reinforcement consoles can minimize the uncertainty and save time and costs by introducing repeatability and programmability into the equation, all without ruining any carefully nurtured reputations.
To begin with, let's take a look at where we are now. Here is the typical scenario for the vast majority of live sound engineers in the computer age: As you stand behind the console, hands and ears poised, you cast a glance at the automated lighting console. The pre-programmed lights dim, the video screens burst into life, the intro tape (doubtless mixed on a studio console with comprehensive computer automation) thunders through the speakers and then, in true 1960s style, you proceed to mix the whole show by hand with only experience, memory and split-second reaction to guide you.
But does it really need to be that way?
For years, lighting engineers have been able to program the cues for a complete show into the board, secure in the knowledge that even if everything goes very, very wrong - for example, the automation dies - they can at least get various washes up and limp through the show quite successfully.
Studio engineers have had some level of computer assistance since the mid-1970s; this technology has now become very sophisticated. The mix can be rehearsed until perfect and reproduced when required. If the artist returns after three months demanding a remix, everything can be recreated more or less exactly as it was.
Why, therefore, should the live sound engineer not share in the benefits of these technological developments as a matter of course?
The revolution starts in the theater

During the 1980s, the development of theater productions using extensive amounts of technology began to change the traditional situation with regard to computer automation of consoles. The necessity was for a consistent reproduction of audio night after night, often following complex scene changes. Thus the emergence of sound design as a new and separate discipline; the show's audio would be programmed during production rehearsals, but the equipment would be operated by an engineer required to accurately follow cues created by the designer. Without a computer assistant linked to a suitable console, these cues would become increasingly difficult to handle.
The real possibilities for sound-reinforcement console automation opened up with the simultaneous emergence of powerful and rugged portable computers and the availability of flexible and friendly software for studio consoles. Both of these key components were affordable. The question was no longer, "Is it possible?" but rather, "What are we waiting for?"
Repeatability

The essence of the matter is repeatability and resettability. Such facilities are currently available in studio consoles at many different levels of inclusiveness. Simple consoles provide storage of fader and mute information; complex consoles allow virtually every control to be reset on demand. The distinguishing factor in most cases is currently cost - audio hardware cost. Software development costs are high initially, but when amortized over a sufficient number of sales can be reduced to manageable proportions.
Various methods of repeatability have been introduced in the studio console. These should be examined in outline before we consider how these technologies can be applied in the real-time world of sound reinforcement.
The basic studio automation system stores fader movements and mute switch presses made in real time. These are typically synchronized to the source material - for example, the tape being mixed - via time code, the console running as a slave. Level settings are normally generated from VCAs or servo-assisted (moving) faders. When the tape is replayed, the automated controls will be dynamically controlled by the computer, recreating the mix exactly.
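As an illustration of the principle only (no particular console's implementation, and all names invented for the sketch), dynamic automation can be modeled as a list of time-stamped control events that is replayed in time-code order:

```python
# Sketch of dynamic automation: fader moves and mutes are captured as
# time-stamped events and replayed against time code. In real systems
# this runs in hardware against SMPTE time code from the tape.

class AutomationPass:
    def __init__(self):
        self.events = []  # (timecode_frames, channel, control, value)

    def record(self, timecode, channel, control, value):
        """Capture a control move made in real time."""
        self.events.append((timecode, channel, control, value))

    def replay(self, up_to_timecode):
        """Return the console state as of a given time-code position."""
        state = {}
        for tc, channel, control, value in sorted(self.events):
            if tc > up_to_timecode:
                break
            state[(channel, control)] = value  # later moves override earlier ones
        return state

mix = AutomationPass()
mix.record(0, 1, "fader", 0.0)     # vocal fader down at the top of the song
mix.record(120, 1, "fader", 0.8)   # pushed up at frame 120
mix.record(120, 2, "mute", False)  # piano unmuted at the same moment

print(mix.replay(150))  # {(1, 'fader'): 0.8, (2, 'mute'): False}
```

Replaying to any time-code position recreates the mix exactly as it was performed, which is the essence of the dynamic systems described above.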
In addition to dynamic automation, some systems provide snapshots of automated functions. Snapshots are freeze-frame images of fader and mute settings, which can be loaded either statically, at the operator's command, or dynamically, against time code.
At a more advanced level, the ability to store data generated from additional module switches, such as EQ in/out or aux on/off, can be incorporated in the system. This can be extended to include every console control, and settings can be stored and reloaded either dynamically or statically. Thus, for example, a fully dynamically automated console would replay adjustments to auxiliary or equalizer controls as they were made originally. As an auxiliary send was adjusted during the mix, so it would be adjusted by the computer.
Finally, there is a halfway system generally called recall. In recall systems, the positions of all console controls are stored in the computer but can only be reloaded manually, typically using graphics displayed on the computer monitor. Although entirely static, recall is nevertheless inexpensive, guarantees a high degree of accuracy when resetting the console and may be sufficient in many applications.
Some manufacturers have extended the software beyond console automation operations to include dynamics and outboard effects control via MIDI. Thus the automation system can provide a range of dynamics controllers, which can be assigned to the channels; settings can be stored with the mix data, and dynamics parameters can be adjusted during a mix. Such virtual systems save huge amounts of rack space. Effects control software allows storage, editing and loading of effects devices from the console via MIDI, thus freeing the engineer from the need to turn away from the console and attempt to manipulate complex hierarchical menus.
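Recalling a stored patch on an outboard effects unit is typically done with a MIDI Program Change message, which is only two bytes on the wire. The sketch below builds the raw message; actually delivering it to the device would of course require a MIDI interface:

```python
# A MIDI Program Change recalls a stored patch on an outboard effects
# unit. The message is two bytes: status 0xC0 ORed with the MIDI channel
# (0-15), followed by the program number (0-127).

def program_change(midi_channel, program):
    if not 0 <= midi_channel <= 15 or not 0 <= program <= 127:
        raise ValueError("channel must be 0-15, program 0-127")
    return bytes([0xC0 | midi_channel, program])

# Recall patch 12 on a reverb listening on MIDI channel 3 (zero-based 2):
msg = program_change(2, 12)
print(msg.hex())  # c20c
```

Storing such messages alongside the mix data is what lets the console reload a complete outboard setup without the engineer touching the effects racks.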
Various combinations of these automation methods are obviously possible, and the best interrelation of function and cost must be taken into consideration when designing a console automation system. A proper assessment of what is appropriate for the intended customer's application is inevitably necessary. Moreover, a thorough knowledge of sound-reinforcement mixing techniques is absolutely essential if a translation of existing studio automation technology into a form usable by live sound engineers is to take place.
On-line and off-line

The capture and replay of automation information may be considered the on-line aspect of console automation. Mixes are generated as required, then stored by the computer for retrieval later. However, it is quite common for a mix to be less than perfect or to require adjustments. Thus appears the need to edit the mix data off-line.
From the computing point of view, a mix is no more or less than a data file and, as with any other data, can be manipulated. A parallel example is word-processing, where the wording of a basic text can be worked on until it reads to the writer's total satisfaction. Other text files can be incorporated, different versions of the text can be merged together, sections can be deleted or extracted for use elsewhere and so on. All of these possibilities and more are inherent in console automation systems. As soon as mix files have been created, editing can begin.
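Treating the mix as plain data makes the word-processing parallel concrete. In this sketch (with an invented event format, not any real console's file layout), a mix is a list of time-stamped events from which sections can be extracted and into which later edits can be merged:

```python
# A mix as editable data: events are (timecode, channel, control, value)
# tuples. Sections can be extracted for use elsewhere, and a later edit
# can be merged over the original. The format is invented for this sketch.

def extract_section(events, start_tc, end_tc):
    """Pull one section of a mix out for use in another file."""
    return [e for e in events if start_tc <= e[0] < end_tc]

def merge_mixes(base, overlay):
    """Merge two versions; the overlay wins wherever the two versions
    touch the same control at the same time code."""
    keyed = {(tc, ch, ctl): val for tc, ch, ctl, val in base}
    keyed.update({(tc, ch, ctl): val for tc, ch, ctl, val in overlay})
    return sorted((tc, ch, ctl, val) for (tc, ch, ctl), val in keyed.items())

verse = [(0, 1, "fader", 0.5), (100, 2, "mute", False)]
remix = [(0, 1, "fader", 0.7)]  # a later, off-line edit of the vocal level

print(merge_mixes(verse, remix))
# [(0, 1, 'fader', 0.7), (100, 2, 'mute', False)]
```

Because the merge is non-destructive of the original file, the engineer can always return to the data as first recorded, which matters for the failsafe concerns discussed below.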
Furthermore, it becomes equally possible to create the mix, or cues in the mix, before the system has been set up. The sound designer or engineer can take the portable computer with him while travelling; instead of idling his time away in the hotel room spending money on pay TV movies or the minibar, he can more profitably occupy himself with programming cues off-line. Different versions can then be tried out in the concert hall or arena.
Failsafe, technofear and pilot error

One of the main concerns of the diligent engineer who is considering surrendering some of his power to a computer must be, what happens if the computer crashes? It is at this point that the immediate, real-time nature of live work casts its shadow over all aspects of automation. The answer naturally must be that it should be possible to run the console in a fully manual mode. Thus, the engineer will still need all his skills just in case. Automation is not going to make the engineer redundant. It will make his life easier, but it is not going to replace him.
On the other hand, technofear must not be underestimated. Computers in sound reinforcement are a new technology. They are notorious for obeying instructions to the letter: they do as they are programmed and, unlike humans, do not adapt to circumstances. At the present time, unlike humans (who prefer to think of themselves as intelligent), computers are not intelligent. The result is that the computer can only work in a certain way, and any unwillingness to respond is most often the result of human error, i.e., pilot error. This dictates that the successful live sound automation program must be easy to learn and use. Furthermore, it is essential that any editing is, so far as is possible, non-destructive, making it possible to return to the original data.
Given these considerations, the console itself must provide all the facilities that would normally be required of a high-quality sound-reinforcement board. Comprehensive equalization, audio and VCA subgrouping (or servo faders with similar facilities), multiple auxiliary sends and output matrices must all be standard and operable without the computer. So what can we do with the computer, given the function-cost equation mentioned above?
Snapshot automation and the cue list

The snapshot is a static picture of the settings of the console's automated controls. It can include levels, mutes, dynamics, outboard effects and any other automated functions. It can be loaded manually or to incoming time code. It should also, preferably, provide some means of triggering multiple external events; MIDI is the preferred method at the present time.
Snapshots may be created by capturing, on line, a particular console set-up or by programming the required configuration off-line. Either way, the data can be edited as required. Snapshots, which may also be called scenes, can be combined into a list of cues that apply to a particular piece of music, and the cue list can be saved as a performance. Because the scenes are independent of the cue list, different cue lists can be created out of the available scenes, allowing experimentation with different approaches to automation of the mix.
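The separation of scenes from cue lists can be sketched as a scene library plus ordered lists of scene names, so the same scenes can be recombined into different performances. All names here are illustrative:

```python
# Scenes are stored once in a library; a cue list ("performance") is just
# an ordered list of scene names, so different cue lists can be built from
# the same scenes. Control names and values are invented for the sketch.

scenes = {
    "intro":  {"ch1_fader": 0.8, "ch2_mute": True,  "vocal_reverb": "intimate"},
    "chorus": {"ch1_fader": 1.0, "ch2_mute": False, "vocal_reverb": "large hall"},
}

performance_a = ["intro", "chorus", "intro"]  # the song ends as it began
performance_b = ["chorus", "intro"]           # a different running order

def run(cue_list, library):
    """Step through the cues in order (manually or on incoming time code)."""
    for name in cue_list:
        yield name, library[name]  # one snapshot load resets everything

for cue, state in run(performance_a, scenes):
    print(cue, state)
```

Because the scenes themselves are never duplicated into the cue list, editing a scene once updates every performance that uses it, which is what makes experimenting with different approaches to the mix cheap.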
For example, a song can be broken up into sections. It may begin with voice and piano, with all other inputs muted, soft compression on the voice and an intimate reverb. As the song progresses through the verse and the stage illumination increases, channels can be unmuted, bringing rhythmic instruments into the mix and, at the same time, changing the vocal compression and reverb settings. By the time the chorus is reached the whole console can be opened up to bring in full drum kit, backing vocals, different effects and dynamics settings, and changes in level. If at some point the song reverts to simple voice and piano combination, all the engineer needs to have done is inserted the appropriate scene into the cue sequence and he will then revert to his opening position.
Following this approach, an entire concert or club set, theatrical show, foldback system or audio-visual presentation can be analyzed into sections and pre-programmed in minute detail, giving the engineer much greater freedom to concentrate on artistic matters. Furthermore, if the running order of a show is suddenly changed during rehearsal or a song is repeated as an encore, the engineer only has to load the respective performance and he will be ready to provide an accurate re-run of his mix. Time-code-synchronized loading of the cue list is also possible, which will certainly be of value in shows based on MIDI or other forms of sequencers.
Recall

Recall is a means of storing the positions of all non-automated controls on a console. Because recall dictates that settings must be reloaded manually, it is quite slow, typically 15 to 20 minutes to reconfigure a 56-channel console. Nevertheless, recall has certain powerful advantages.
The first of these is found where a number of acts are rotating through a stage on a tour. During the changeover, there will be time to make a full recall of console settings so that the basic mix parameters for each act will be in place by the time the artist takes the stage. In clubs or broadcast sound stages, where a number of artists or shows use the venue on a regular basis, settings can be stored away on hard or floppy disk and used when required. Data may even be transported from one console to another. Not insignificant is the fact that recall allows one console to be used where previously two or more might have been required, one for the support act and one for the main act. Even more important, in many stage monitor applications, there is insufficient space for more than one console anyway, so recall facilities can provide a much higher standard of foldback for all artists.
Finally, occasionally members of an audience or congregation are so impressed by the audio console that they try their own hand at tweaking the controls. As this doubtless innocent interference may take place when the engineer is away or, worse still, after the show has finished and the engineer has gone home, he or she may be unaware of any changes until the curtain goes up the next day on an audio horror show. A quick recall scan of the console every night before the show will do much to cure this problem because the recall system will quickly point to any fascinating but most likely unwanted settings.
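That nightly recall scan amounts to a diff between the stored settings and what is actually found on the board. A minimal sketch, with invented control names:

```python
# A "recall scan" modeled as a diff: compare the stored console settings
# against the positions found on the board and report every control that
# has wandered. All names are illustrative.

def recall_scan(stored, found):
    """Return {control: (stored_value, found_value)} for every control
    whose current position differs from the recall data."""
    return {ctl: (stored[ctl], found[ctl])
            for ctl in stored if found.get(ctl) != stored[ctl]}

stored = {"ch12_gain": 0.6, "ch12_hf_eq": 0.0, "master_fader": 0.9}
found  = {"ch12_gain": 0.6, "ch12_hf_eq": 0.4, "master_fader": 0.9}

print(recall_scan(stored, found))  # {'ch12_hf_eq': (0.0, 0.4)}
```

Only the controls that were tampered with are flagged, so the engineer can correct them in seconds rather than re-checking the whole console by eye.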
This brief overview of the origins and possibilities of automation in sound reinforcement gives an idea of the present position. This area is ripe for rapid development, and we are at the beginning of an era of far-reaching changes in live performance consoles. The speed of this change will mainly result from the translation of technology that has been laboriously developed in related areas of audio; technology that took 20 or more years to develop in the studio is ready to be applied to sound reinforcement almost immediately.
If this surmise is correct, then digital consoles are not far away in sound reinforcement, because they have already arrived in recording and broadcast. The digital console can be much smaller than its analog counterpart, with any input accessible immediately by selecting it to controls located in front of the engineer instead of seven feet away across the console. Furthermore, software routines can be developed that will constantly monitor and analyze the acoustic conditions of the hall and the signals from the stage. Automatic adjustment of room equalization, removal of feedback, optimization of microphone signals and so on could all be done within the console processing engine, giving the engineer the best possible mixing environment in which to work without him even having to think about the parameters. In fact only the control surface will be located front-of-house, because the console itself can be located stageside. Cable runs will be greatly reduced because signals will not have to flow down hundreds of feet of multicore.
Those who ignore the first steps in sound-reinforcement console automation do so at their peril. The future is arriving quickly, and failure to adapt will lead to redundancy. Those who embrace the new possibilities, however, will unleash higher levels of creativity for themselves and will enjoy their art even more than they do now. And the only way to tell what the computer did and what the engineer did will be by listening to the mixes of those who do not use automation.