
The ABCs of DSP

Nov 1, 2006 12:01 PM, By Gary Hardesty

Broad overview lets you brush up on the basics.




I can't believe I'm writing this article. Or, more correctly, I can't believe I am 56 years old and writing this article. After all, this year marks the 20th anniversary of a feature article on digital signal processing (DSP) for audio that I authored for this magazine. Obviously, time has flown by, and here we are again, talking about DSP basics.

Meyer Sound has entered the digital domain with the release of the Galileo loudspeaker management system.

The editor gave me an understandable space limitation for a subject normally covered in a few dozen books, so here it goes. I'll do my best. To my critics: I've exercised some journalistic license herein to oversimplify a few things, due to lack of space. After all, the purpose of this article is to cover the basics of DSP as applied to professional audio for all readers, not just DSP experts, and to go into some detail as a refresher course.

I am an engineer/musician who has always been passionate about good sound and the process of making everything from guitars to speaker systems more efficient, better sounding, and easier to program and use.

Twenty years ago, I'd done sound design/engineering work for many large rock-and-roll and broadcast events. In 1986, I was owner/president of Audio/Digital (ADI), a company deeply involved in various aspects of designing, manufacturing, marketing, and selling professional audio signal processing products, along with OEM work for other pro audio companies, and military DSP work.

We've come a long way in 20 years. DSP and digital audio are commonplace these days. The ability to design audio DSP systems and create software for them is within the grasp of even die-hard analog engineers. Let's take a look at what they are dealing with.

WHAT IS DSP?

As the term suggests, DSP is the processing of signals by digital means. The origins of signal processing were in electrical engineering, and a signal back then referred to an electrical signal carried by a wire or telephone line, or perhaps by a radio wave. More generally, a signal is a stream of information representing anything from audio to data from a remote-sensing satellite.

The term “digital” comes from “digit,” and literally means numerical. A digital signal consists of a stream of numbers, usually, but not necessarily, in binary form. A digital signal is therefore processed by performing numerical calculations.
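That idea can be shown in a few lines: a minimal sketch (with made-up sample values) in which a digital gain change is nothing more than arithmetic performed on the stream of numbers.

```python
# A digital signal is just a stream of numbers; processing it is arithmetic.
signal = [0.0, 0.25, 0.5, 0.25, 0.0, -0.25, -0.5, -0.25]   # made-up samples

gain = 10 ** (6.0 / 20.0)             # a 6 dB boost expressed as a linear factor
boosted = [gain * s for s in signal]  # the "processing": multiply every sample
```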

Way back when, simple DSP was done using vacuum tubes through early IBM and UNIVAC computers. When I started my audio-related DSP work (circa 1974), the only way to perform simple audio-related DSP was to use individual integrated circuits normally found in a computer system (gates, adders, multiplexers, etc.).

The programs the DSP circuits ran back then were largely hard-wired; there was no software involved by today's understanding of the word. A change required a modification to the hardware. Performing a DSP function thus required deep knowledge of the individual integrated circuits, circuit timing structure, and so forth. You really had to invent what the circuit was supposed to do from a purely hardware perspective. Semiconductor technology was still evolving, and you performed mathematical functions in the digital domain using many building blocks.

I recall 1986 was the year in which we started a slow transition to DSP for professional audio. In fact, that was the year I started design on what, arguably, was the world's first digital speaker processing system (crossover, EQ, etc.) using the then-new Motorola DSP56000 chip, which cost more than $400 per chip. We were so excited in those days about the fact that audio was coming in and going out, and we had the ability to modify it in between, that the ultimate quality of the audio was often secondary.

These days, DSP calculations are primarily handled by a single, dedicated integrated circuit, such as the popular Analog Devices line of DSP chips. An audio processor may use one DSP chip or many, depending on the amount and type of processing required. Today's DSP chip is really a highly complex, dedicated, high-speed computer capable of performing many millions of calculations per audio sample.

Technology development generally follows a simple formula: it begins with military and secret government work, then trickles down to high-quantity, more affordable consumer products. Professional markets (in this case, audio) reap the benefits of this process. Most of the semiconductor products we use today were designed originally for high-quantity consumer products. In fact, DSP and DSP chip technology are in everything from TVs and cars to the simplest, least expensive audio systems, both consumer and pro. We take it for granted. Many DSP chips exist, and simple ones can be purchased for less than $4 in 100-piece quantities.

DSP chips and their associated development software have allowed us to easily develop average DSP systems. Indeed, you can buy a ready-to-go development kit for less than $500 and be running your DSP program in a day.

But note the word “average.” Doing it correctly, for the sake of accurate and good-sounding audio — not to mention a good user interface — is not so simple. Doing something unique is even more complex. A significant number of the professional audio DSP-based products on the market today follow similar circuit and software topology. In other words, they are, dare I say, boring, and the sound quality is so-so, in my opinion.

In this article, we are going to focus primarily on processing speaker systems and routing signals.

We tend to use the term DSP generically, referring to an entire system, including audio-to-digital and digital-to-audio converters and everything in between. To be more correct, DSP actually refers to the numerical processing “engine” system and/or chip. Here are some useful words and definitions (adapted from Rane's Professional Audio Reference [www.rane.com/par-d.html]):

Digital audio: The use of sampling and quantization techniques to store or transmit audio information in binary form. The use of numbers (typically binary) to represent audio signals.

Digitization: Any conversion of audio information into a digital form.

Digital signal: Any signal that is quantized (i.e., limited to a distinct set of values) into digital words at discrete points in time. The accuracy of a digital value is dependent on the number of bits used to represent it.

For the sake of this article, we will use the term digital audio processor (DAP) to refer to audio-to-digital converters; DSP and host processing/display systems, including memory, user interfaces, displays; digital-to-audio converters; associated analog I/O systems; other associated I/O interfaces (Ethernet, etc.) for audio/control and transport; power supplies; miscellaneous components required to make the system work as a whole; and all of the above combined to make a functioning high-speed number-crunching tool, eventually taking in audio and outputting modified audio.

Note that audio is realtime — we can't wait (perceptibly) for the processor or the user to compute it, as you can with other types of digital technologies. Thus, DAPs require high-speed dedicated chips (DSP chips) to perform complex calculations in realtime.

[Editor's note: The following sections include material adapted from Analog Devices' website, www.analog.com.]

What does “realtime” actually mean? In an analog audio system, every task is performed in realtime: the signals are continuous, and so is the processing. After all, we can't stop the show while we wait for processing to complete.

In a digital signal processing (DSP) system, though, signals are represented with sets of samples, i.e., values at discrete points in time. Thus, the time required to process a given number of samples corresponds to a different span of realtime depending on the sampling rate.

To address this issue, we use the Nyquist Criterion. Simply put, the concept of sampling and the Nyquist Criterion require the sampling frequency to be at least twice the frequency of the highest frequency component of interest in the (audio, in this case) signal in realtime applications. That's the Nyquist rate. The time between samples is referred to as the sampling interval. To qualify a system as operating in realtime, all processing of a given set of data must be completed before new data arrives. This definition of realtime implies that, for a processor operating at a given clock rate, the speed and quantity of the input data determines how much processing can be applied to the data without falling behind the datastream.
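The relationships above reduce to simple arithmetic. Here is a sketch, assuming a 20kHz audio band and a 48kHz sample rate (both common figures, chosen here only for illustration):

```python
highest_freq = 20_000.0                 # Hz, assumed top of the audio band

nyquist_rate = 2 * highest_freq         # minimum sampling frequency (40 kHz)
sample_rate = 48_000.0                  # common pro-audio rate above the minimum
sampling_interval = 1.0 / sample_rate   # time between samples (~20.8 microseconds)
```

All processing for one sample must finish within that sampling interval, or the system falls behind the datastream.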

The idea of having a limited amount of time to handle data may seem odd to analog audio designers, because this concept does not have a parallel in strictly analog audio systems, which process signals continuously. One of the few penalties in a slow analog system is limited frequency response.

Figure 1. Analog is realtime audio; digital audio is not. Since there is a finite amount of time available to perform any given algorithm, managing time is a central part of DSP system software design.

By comparison, digital systems process parts of the signal, enough for very accurate approximations, but only within a limited block of time. Figure 1 shows a comparison. As it illustrates, realtime DSP can be limited by the amount of data or type of processing that can be completed within the algorithm's time budget. For example, a given DSP processor handling data values sampled at 96kHz has less time to process those data values, including execution of all necessary tasks, than the same DSP sampling 48kHz data.

Since there is a finite amount of time that can be budgeted to perform any given algorithm, managing time is a central part of DSP system software design. Time management strategy determines how the processor is notified about events, influences data handling, and shapes processor communications.

EVENT NOTIFICATIONS, INTERRUPTS, AND DATA I/O

You can program a DSP to process data using one of several strategies for handling the “event” — the arrival of data. One is to read a status bit or flag pin periodically to determine whether new data is available. But such “polling” wastes processor cycles: the data may arrive just after the last poll and be unable to make its presence known until the next poll, which makes it difficult to develop realtime systems.

The second strategy is for the data to “interrupt” the processor on arrival. Using interrupts to notify the processor is efficient, though not as easy to program — clock cycles can be wasted during the wait for an interrupt. Nevertheless, since event-driven interrupt programming is well suited to processing real-world signals promptly, most DSPs are designed to respond quickly to interrupts. A typical high-end DSP chip's response time to an interrupt is about three processor cycles (approximately 75ns).

In DSP systems, interrupts are typically generated by the arrival of data or the requirement to provide new output data. Interrupts may occur with each sample, or they may occur after a frame of data has been collected. The differences greatly influence how the DSP algorithm deals with data. Filters, reverberation systems, limiters, etc., are all mathematical algorithms used within a DAP.

Rane's reference defines an algorithm as “a structured set of instructions and operations tailored to accomplish a signal processing task. For example, a fast Fourier transform (FFT), [used in Meyer's SIM or SIA's Smaart systems], or a finite impulse response (FIR) filter are common DSP algorithms.” Here are definitions of two commonly used digital filters that use typical algorithms.

Finite Impulse Response (FIR) filter: Digitized samples of the audio signal serve as inputs, and each filtered output is computed from a weighted sum of a finite number of previous inputs. An FIR filter can be designed to have linear phase (i.e., constant time delay, regardless of frequency). FIR filters designed for frequencies much lower than the sample rate and/or with sharp transitions are computationally intensive, with large time delays. They are popularly used for adaptive filters.
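A minimal sketch of the FIR idea follows, computing each output as a weighted sum of a finite number of past inputs. (The 4-tap moving average is just an illustrative choice of coefficients; its symmetry is what gives linear phase.)

```python
def fir_filter(x, coeffs):
    """Each output is a weighted sum of a finite number of past inputs."""
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:            # skip taps that reach before the signal start
                acc += c * x[n - k]
        y.append(acc)
    return y

# A 4-tap moving average: symmetric coefficients, hence linear phase.
smoothed = fir_filter([1.0, 1.0, 1.0, 1.0, 0.0, 0.0], [0.25, 0.25, 0.25, 0.25])
```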

Infinite Impulse Response (IIR) filter: IIR's recursive structure accepts digitized samples of the audio signal as inputs, and then each output point is computed on the basis of a weighted sum of past output (feedback) terms, as well as past input values. An IIR filter is more efficient than an FIR filter, but poses more challenging design issues. Its strength is in not requiring as much DSP power as an FIR filter, while its weaknesses are possible instabilities and a lack of linear group delay. Its popular use is to emulate analog filters.
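A one-pole lowpass is about the simplest IIR sketch: each output feeds back into the next, so a single coefficient produces an impulse response that decays forever rather than stopping after a fixed number of taps.

```python
def one_pole_lowpass(x, a):
    """IIR: each output mixes the current input with the previous output (feedback)."""
    y = []
    prev = 0.0
    for sample in x:
        prev = (1.0 - a) * sample + a * prev   # 'a' near 1.0 means heavier smoothing
        y.append(prev)
    return y

# Feeding a step input: the output approaches 1.0 but never quite gets there.
stepped = one_pole_lowpass([1.0, 1.0, 1.0, 1.0], 0.5)
```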

Hundreds of algorithms are available to designers. They don't all sound the same, and they don't all sound good. The method used to create a rough equivalent of an analog Butterworth filter, for example, requires different coding (writing chip-specific software) for each brand of DSP chip. Several software packages exist that allow you to input the system topology on your PC as an analog drawing. The software actually creates the difficult and detailed machine code to make it all work. Oftentimes, as the designer of a DSP system, you may not be aware of the specific algorithm used to create your filter, limiter, etc.

For algorithms that operate on a sample-by-sample basis, DSP software may be required to handle each incoming and outgoing data value. Each DSP serial port incorporates two data I/O registers: a receive register (Rx) and a transmit register (Tx). Serial ports are used to receive and transmit digital audio information. In other words, the output of the audio-to-digital converter connects to the receive register and vice versa for the digital-to-audio converter. When a serial word is received, the port will typically generate a receive interrupt. The processor stops what it is doing, begins executing code at the interrupt vector location, reads the incoming value from the Rx register into a processor data register, and either operates on that data value or returns to its background task.

To transmit data, the serial port can generate a transmit interrupt, indicating that new data can be written to the Tx register. DSP can then begin code execution at the Serial Port Tx interrupt vector and typically transfer a value from a data register to the Serial Port Tx register. If data input and output are controlled by the same sampling clock, only one interrupt is necessary. For example, if a program segment is initiated by receive interrupt timing, new data would be read during the interrupt routine; then, either the previously computed result (which is being held in a register) would be transmitted, or a new result would be computed and immediately transmitted as the final step of the interrupt routine.
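A toy model of that receive-interrupt pattern can be written in ordinary Python standing in for interrupt-driven DSP code. (The 0.5 gain is a placeholder for the real per-sample algorithm; the lists stand in for the Rx and Tx registers.)

```python
rx_stream = [0.5, -0.25, 1.0, 0.0]   # samples "arriving" at the serial port
tx_stream = []                       # what the transmit register would send out

def on_receive_interrupt(rx_value):
    """Runs once per sample: read Rx, process, write the result to Tx."""
    processed = rx_value * 0.5       # placeholder for the real per-sample algorithm
    tx_stream.append(processed)

for sample in rx_stream:             # each loop iteration models one interrupt
    on_receive_interrupt(sample)
```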

All of these mechanisms help a DSP to emulate what an audio system does naturally — continuously process data in realtime — but with digital precision and flexibility.

DIGITAL FILTERS

Now that we're all experts in the processing of digital audio information, let's briefly examine how a basic digital filter works and why it may be better than an analog equivalent.

There are two main kinds of filters — analog and digital. They are quite different in their physical makeup and in how they work. An analog filter uses analog electronic circuits made up from components such as resistors, capacitors, and op amps to produce the required filtering effect. Such filter circuits are widely used in such applications as noise reduction, video signal enhancement, and graphic equalizers in hi-fi systems.

There are well-established standard techniques for designing an analog filter circuit for a given requirement. At all stages, the signal being filtered is an electrical voltage or current, which is the direct analog of the physical quantity involved (for example, a sound or video signal or transducer output). A digital filter, by contrast, uses a digital processor to perform numerical calculations on sampled values of the signal. The processor may be a general-purpose computer such as a PC, or a specialized DSP chip.

The analog input signal must first be sampled and digitized using an analog-to-digital converter (ADC). The resulting binary numbers, representing successive sampled values of the input signal, are transferred to the processor, which performs numerical calculations on them. These calculations typically involve multiplying the input values by constants and adding the products together. If necessary, the results of these calculations, which now represent sampled values of the filtered signal, are output through a digital-to-analog converter (DAC) to convert the signal back to analog form. Note that in a digital filter, the signal is represented by a sequence of numbers, rather than a voltage or current.

Digital filters offer several advantages over analog filters. First, a digital filter is programmable; its behavior can easily be changed in memory without affecting the circuitry (hardware). An analog filter can only be changed by redesigning the filter circuit. Digital filters are also easily designed, tested, and implemented on a general-purpose computer or workstation. Another advantage is stability: the characteristics of analog filter circuits (particularly those containing active components) are subject to drift and are dependent on temperature, while digital filters suffer from neither problem and are thus extremely stable with respect to both time and temperature. In addition, unlike their analog counterparts, digital filters can handle low-frequency signals accurately. And, as the speed of DSP technology continues to increase, digital filters are being applied to high-frequency signals in the RF domain, which, in the past, was the exclusive preserve of analog technology. Finally, digital filters are much more versatile in their ability to process signals, including the ability of some types of digital filters to adapt to changes in the characteristics of the signal.

Fast DSP processors can handle complex combinations of digital filters in parallel or cascade (series), making the hardware requirements relatively simple and compact in comparison with the equivalent analog circuitry.

CONVERSION TO DIGITAL

An ADI circuit board used as part of my popular TC-2 digital effects processor to convert audio to digital and digital to audio. Sound quality was equivalent to 20 bits. This early floating point delta modulation pro audio converter system still sounds great today.

The DAP requires a means of getting the audio into and out of the digital domain for DSP. Audio-to-digital and digital-to-audio converters perform this function. The audio signal consists of an infinite combination of amplitudes. The digital domain, using binary notation, represents a finite range, determined by the number of bits used. Eight bits would be able to represent only 256 steps, or digital levels — clearly too few to accurately reproduce an analog signal, which could be any conceivable voltage between the noise floor and maximum output.
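The step counts follow directly from the word length. A quick sketch (the 6.02dB-per-bit figure is the standard rule of thumb for quantization dynamic range):

```python
def quantization_steps(bits):
    """Number of distinct digital levels for a given converter word length."""
    return 2 ** bits

def dynamic_range_db(bits):
    """Rule of thumb: each bit buys roughly 6.02 dB of dynamic range."""
    return 6.02 * bits

steps_8 = quantization_steps(8)     # 256 levels, far too coarse for audio
steps_24 = quantization_steps(24)   # 16,777,216 levels
```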

Two methods for converting analog signals to digital have been around a long time:

Successive approximation: As the name implies (warning: simplistic explanation follows), a circuit successively converts the audio into finer and finer information (bits) to represent the audio signal in digital form. The more bits used, the finer the resolution and the better the representation of the audio signal, particularly low-level signals, in the digital domain.
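Stripped of the analog circuitry, successive approximation is a binary search: try the most significant bit first, keep it if the trial value still fits under the input, and work down one bit at a time. A sketch, assuming a hypothetical unipolar full scale of 1.0:

```python
def sar_convert(voltage, bits, full_scale=1.0):
    """Successive approximation: binary-search the input, one bit per step."""
    code = 0
    for bit in range(bits - 1, -1, -1):
        trial = code | (1 << bit)                    # tentatively set this bit
        if trial * full_scale / (2 ** bits) <= voltage:
            code = trial                             # keep the bit if still under
    return code

code = sar_convert(0.625, 4)   # 0.625 of full scale in a 4-bit converter
```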

Not long ago, 16-bit systems were commonplace. Now, 24-bit is rapidly becoming the norm.

I recall Lexicon digital delays of years past using 12-bit successive-approximation floating-point converters. (Floating point is a type of analog/digital compression and expansion to “fool” the converters into using a higher number of bits, creating a more accurate representation of the audio signal.)

Delta modulation: A single-bit coding technique in which a constant step size digitizes the input waveform. Past knowledge of the information permits encoding only the differences between consecutive values.

This technique has proven to be a good conversion system; virtually every modern DAP uses some form of delta modulation. Single chips can contain stereo pairs of A/D and D/A converters, making for quick, easy integration into the DAP and DSP system.
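The encode-only-the-differences idea can be sketched in a few lines. (The fixed step size of 0.1 and the input values are arbitrary illustrative choices.)

```python
def delta_encode(samples, step=0.1):
    """1-bit coding: transmit only whether the signal rose or fell."""
    bits, estimate = [], 0.0
    for s in samples:
        up = s >= estimate
        bits.append(1 if up else 0)
        estimate += step if up else -step   # the tracking "staircase"
    return bits

def delta_decode(bits, step=0.1):
    """Rebuild the staircase approximation from the 1-bit stream."""
    out, estimate = [], 0.0
    for b in bits:
        estimate += step if b else -step
        out.append(estimate)
    return out

bits = delta_encode([0.05, 0.15, 0.25, 0.2, 0.1])   # a slow rise, then a fall
```

Note the inherent limitation: a fixed step size cannot track a signal that changes faster than one step per sample, which is why delta modulation runs at high sample rates.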

Back in my ADI days (1980), I recall finding delta modulation dramatically superior to the successive approximation chips I was using. The only challenges were that no delta modulation audio converters existed as single chips to get the job done, and that no one was using delta modulation for high-quality digital audio at the time. To make it work, I had to “invent” a converter system using many discrete components and chips originally intended for other purposes.

These days, the method has evolved into delta-sigma modulation, which the Rane reference describes as, “An audio-to-digital conversion scheme rooted in a design originally proposed in 1946, but not made practical until 1974 by James C. Candy. … Characterized by oversampling and digital filtering to achieve high performance at low cost, a delta-sigma A/D thus consists of an audio modulator and a digital filter. The fundamental principle behind the modulator is that of a single-bit A/D converter embedded in an audio negative feedback loop with high open loop gain. The modulator loop oversamples and processes the audio input at a rate much higher than the bandwidth of interest. The modulator's output provides 1-bit information at a very high rate and in a format that a digital filter can process to extract higher resolution (such as 20 bits) at a lower rate.”
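The “single-bit A/D converter embedded in a negative feedback loop” can be sketched as a first-order modulator. This toy version omits the oversampling and decimation filter that a real converter relies on; it only shows the loop itself.

```python
def delta_sigma_modulate(samples):
    """First-order delta-sigma: a 1-bit quantizer inside a feedback loop.

    The integrator accumulates the error between the input and the
    fed-back 1-bit output (+1/-1), so the density of 1s in the output
    stream tracks the input level over time.
    """
    integrator, feedback, bits = 0.0, 0.0, []
    for s in samples:
        integrator += s - feedback          # error accumulates in the loop
        bit = 1 if integrator >= 0 else 0   # the single-bit A/D decision
        feedback = 1.0 if bit else -1.0
        bits.append(bit)
    return bits

# A DC input at half of full scale: about 75 percent of the output bits are 1s.
bits = delta_sigma_modulate([0.5] * 1000)
```

A digital filter downstream would average this dense 1-bit stream to extract higher-resolution samples at a lower rate, as the Rane description notes.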

In closing, here's some advice: Don't assume all DAPs are created equal. Listen to them. By this, I mean, gather several of your favorites, set everything flat, and listen to them and compare. You may be surprised to learn that they don't sound the same when set flat. Now do the same test with a filter or two added — might be interesting.

Finally, listen to music at the noise floor of the DAP. Hear anything unusual? You might — perhaps random, idle tones or other artifacts.

Thanks for listening, and I'll see you in another 20 years.


DAP discussion

I recently discussed Meyer Sound's new loudspeaker product, Galileo, with Perrin Meyer, Meyer's software R&D manager. Meyer Sound recently acquired Level Control Systems (LCS), long known for creative DAPs.

Here is a portion of our conversation:

SVC: Why did Meyer and LCS form a relationship, and what exactly is the relationship, why is it important, and how will both benefit?

Perrin Meyer: Meyer Sound and LCS Audio had a casual relationship going back a number of years that … became much closer when we began collaborating on the creation of the Galileo loudspeaker management system. The relationship was so successful that it seemed natural that we integrate LCS' extensive expertise in digital audio into our company, and leverage our decades of experience … on behalf of LCS Audio's existing products.

The other obvious reason to merge the two companies was that we shared many customers who had proven how effectively Meyer Sound and LCS Audio products worked together, especially in the theatrical world. Even the corporate cultures felt compatible. So Meyer Sound acquired LCS Audio last year and integrated the products as the LCS Series in Meyer Sound's product line.

SVC: Meyer has stayed away from digital technology in the past, so why Galileo, and why is this significant? Does this indicate a paradigm shift for Meyer?

Meyer: We approach new technology with the attitude that if we're going to utilize it, it must perform a function in a better way than the technology it replaces, or enable us to do something we were not able to do with previous technology. … using technology to follow a trend rarely, if ever, provides clear benefits to anyone. … It took some time before Meyer Sound felt that digital audio could satisfy our quality standards. By the time we felt it was practically possible to do truly high-fidelity audio in the digital domain, the need for large-scale loudspeaker management had emerged. The combination of these factors led us to create the Galileo system.

While Galileo certainly represents a shift for us, it's a shift toward utilizing new technology rather than toward a new philosophy. Our paradigm remains creating tools that provide effective solutions to real-world problems, and we employ any technology that can meet our standards in achieving that goal. Galileo completely embodies that paradigm. While it was our first digital audio product, its significance is in providing a great deal of processing in a comparatively small package, with high input/output count and uncompromised sound quality. That did not exist before Galileo.

SVC: What is the Meyer philosophy for DSP, and will DSP replace analog processing for Meyer in the near term?

Meyer: Our philosophy is overarching — we want to create products that provide solutions to the problems users face, are easy to use, sound excellent, and are reliable. Our position on DSP is that where it achieves those goals better than analog technology, we will use it. Loudspeaker management is an area where DSP enables us to do things we could not do in the analog domain, and we have received an enthusiastic response to it. On the other hand, the processing that is internal to our self-powered loudspeakers is an area where there is not a convincing argument to turn to DSP over our existing analog circuitry.

SVC: Meyer has long been known for fine analog processing. Why digital now? Is it because digital has arrived in terms of sound quality?

Meyer: Yes, that was a factor, but it is important to understand what it means to say that the sound quality “arrived.” Making a digital filter that sounds good takes a lot of care in software design and appreciable processing power. Only recently has that power been available in a practical sense, but making high-quality digital audio requires a lot more than powerful chips. It takes the right algorithms, and that, too, turns out to be much more involved than choosing a standard filter topology and implementing it.

When we implemented filters in Galileo that came from analog products of ours like the CP-10 and VX-1, they had to sound at least as good as the originals. That turned out to be a very complex task. … But we applied ourselves to careful measurement and fine detail and finally came up with filters that we felt sounded, and measured, right.

SVC: What is different about Galileo, and is it dedicated for use only with Meyer product?

Meyer: There are several features that set Galileo apart from other loudspeaker management systems. The most obvious thing when you look at the processor is the number of inputs and outputs, but under the hood is where the real distinctions are. Because the Galileo processor is based on a monolithic DSP architecture, we are able to guarantee fixed latency, regardless of the combination of processing you are using. We are also able to implement very high-quality algorithms, and we used them to create filtering that enables the user to treat response problems effectively while introducing the least possible amount of phase shift.

Galileo uses a client/server architecture, which makes it possible to update parameters from anywhere in a venue, and it can connect directly to a SIM analyzer to provide data from the signal path for analysis. Currently, Galileo is focused primarily on use with Meyer Sound loudspeakers. Of course, you could use it with other systems, but wouldn't you rather be using Meyer Sound loudspeakers anyway?


Gary Hardesty is corporate director for Sound.Media.Fusion. and HTG Motorsports.


Editor's note: This article has been designed to be an instructive overview of past, present, and likely future developments in the world of digital signal processing (DSP) from the point of view of industry expert Gary Hardesty. As part of that effort, the piece includes certain sections, as noted, that feature material borrowed with permission from an article originally created by Analog Devices (www.analog.com), and from a web glossary offered by Rane (www.rane.com). Those sections are noted below, along with links to the original articles from those two manufacturers. In some places, Hardesty has altered particular wording to better reflect his approach to the issues involved. We hope you find this DSP examination instructive.


