Behind the Scenes with Stage Tec: Producing Live Multisite Operas, Part 2
Nov 23, 2010 10:30 AM, With Bennett Liles
Editor’s note: For your convenience, this transcription of the podcast includes timestamps. If you are listening to the podcast and reading its accompanying transcription, you can use the timestamps to jump to any part of the audio podcast by simply dragging the slider on the podcast to the time indicated in the transcription.
Taking opera to the streets, Stage Tec and Swiss TV took up the challenge with Aida on the Rhine, and Christian Fuchs of Stage Tec was right in the middle of the production manning an Aurus digital mixing console. He's here to wrap up his account of how it all came together over a Nexus network.
SVC: Christian, thanks for being back with me for part two on a talk about the Aida on the Rhine opera production, and this was a huge thing. It would have been a big enough job just doing all this in a theater, but this was taking the opera out to the audience through locations out in the town of Basel, Switzerland. I wanted to get into the details of the Nexus network and the advantages of maybe using this over possibly other types of audio networking.
Yeah, the heart of the Nexus system is the Star router, which handles 4,096 inputs by 4,096 outputs at a time. Each I/O unit, what we call a base device, carries 256 audio signals, and the same number of audio channels runs on one fiber-optic connection. We can cover really huge distances, up to 120km between the I/O units. We have very low latency all over the network: our True Match A/D conversion takes just 0.3 milliseconds, and every routing step from one I/O unit to the next adds just one sample of latency. The A/D conversion on the microphone inputs runs at 32 bits with 158dB of dynamic range, and all of the analog inputs and outputs handle 24dBu. We have all kinds of digital audio interfaces, starting with AES3, of course, plus AES42, MADI, and SDI, which is very important for HDTV now, 3G HD-SDI embedder/de-embedder cards, and Dolby E encoders and decoders. We can also transfer the digital AES signal plus the data stream for intercom systems through the Nexus network, so you don't have to run extra cables for your intercom and commentator systems. [Timestamp: 2:55]
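[Editor's note: The latency figures quoted above can be put into numbers with a quick sketch. The 48kHz sample rate is an assumption for illustration; the interview does not state the network's sample rate.]

```python
# Sketch of the Nexus latency arithmetic quoted above:
# 0.3 ms for True Match A/D conversion, plus one sample of
# latency per routing hop between I/O units.
# Assumption: a 48 kHz sample rate (not stated in the interview).
SAMPLE_RATE_HZ = 48_000
AD_CONVERSION_MS = 0.3
PER_HOP_MS = 1_000 / SAMPLE_RATE_HZ  # one sample, in milliseconds

def path_latency_ms(hops: int) -> float:
    """A/D conversion plus one sample of latency per I/O-unit hop."""
    return AD_CONVERSION_MS + hops * PER_HOP_MS

# Even a dozen routing hops stays well under a millisecond,
# consistent with the sub-1 ms in-ear monitoring path described later.
print(path_latency_ms(12))
```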
And that impressed me quite a bit—the ability to run audio, intercom, and everything you need over the same network, especially since you probably had to run this fiber-optic cable through some difficult areas while the people of the town and the traffic are going about as usual.
Yeah, of course, that's pretty usual when you go outside with a system like that: you have redundant cabling, and the redundant fiber optic on the Nexus takes over within one sample, so you just get an error message on your control computer; you don't hear that a fiber optic has failed. [Timestamp: 3:31]
And you had sort of a makeshift studio set up at the Grand Hotel?
We had an extra studio set up in the Grand Hotel; that was my studio. We set it up with a 48-fader Aurus console and five base devices. It was a very old conference room with really nice wooden walls all around and a big fluffy carpet on the floor, so it was easy to concentrate on classical music; it was a nice surrounding to work in. Everything else we had was a set of stereo monitor speakers and a separate PFL speaker. I just sorted out and mixed the different microphones coming in from the singers, and I didn't do any effects on it; I just did the premix for the OB truck doing the opera sound. [Timestamp: 4:29]
And you mentioned the Aurus console from Stage Tec. Are there specific features on the mixer that were, say, particularly handy for doing this show?
What was very useful for me was that I had direct access to all the parameters in every channel strip, so I was able to filter and compress a few channels at a time. Also, the constant latency of one sample through an audio channel was, of course, a big advantage: we didn't get any issues after sending the signals back through so many digital steps to the in-ear monitoring, so we always stayed under 1 millisecond of total latency. [Timestamp: 5:14]
I read you had some of the sound on MADI links; how were those used?
We had MADI links only between my studio, the two OB trucks, and the monitor desk; the connection from each OB truck's Nexus system to my studio's Nexus system was always over our Time Division Multiplex fiber optics. The MADI lines were there to interlink the different OB trucks, my studio, and the monitor desk, and of course there were the MADI feeds to the multitrack recording. [Timestamp: 5:45]
You mentioned in part one that you used a lot of wireless mics, I think with two on each principal performer. You were also using a lot of RF for communication behind the scenes, because on something like this, the cuing, getting everybody performing at the right time in all those locations, could be a tricky job with everybody spread out over such a wide area.
Yeah, we had 40 wireless microphones for the principals, and another special thing: we had four mobile ambience mics. There were sound assistants running around with these wireless ambience mics to pick up certain ambience, like the water splash when one of the actors fell into the Rhine River. It was actually a dummy falling into the Rhine, but they wanted to pick up the splash from the water, so we covered all these tiny ambience details with the few wireless ambience mics as well. And then we had 16 in-ear systems for the singers, also for the narrators and some of the production assistants. On top of that, there were 10 channels of communication radios on the set and four wireless HDTV cameras, so quite a lot of RF in the air. [Timestamp: 7:08]