Considering the derivative as a filter, it’s an unusual one: it completely cancels the DC component and amplifies the higher frequencies. This amplification is the troublesome part in the practical world where engineers dwell. In this world we are going to work with band-limited signals, sampled at some frequency that we choose high enough to capture our signal completely, with some margin to spare. We are also going to have noise, and that noise is going to be ubiquitous. It’s going to look something like this in the frequency domain:
The important idea here is that the derivative of this signal is going to amplify by up to 5 dB all the noise that shows up at the higher frequencies, outside of our signal’s bandwidth, and it’s going to be amplified to a greater extent than our signal. Moreover, the lower frequency components of our signal are going to be attenuated, with its DC component completely removed.
We can intuitively see that the signal-to-noise ratio of the resulting derivative is going to suffer, and we can end up with something that looks very different from what we expected. The effect gets worse the bigger the noise power in our spare bandwidth and the higher the concentration of our signal’s power at the lower frequencies.
We can see that we need to be really careful here, but we haven’t reached the end of the story yet.
So far we have been talking about the derivative operation in its theoretical form. When it comes to its practical implementation we are going to be computing approximations to the derivative and we need to understand their properties and limitations.
Probably the first idea that comes to our minds when we are tasked with computing the derivative of a signal in the digital domain is to put together something like this:
\[\dot{s}(k) = s(k) - s(k-1)\]
We compute the derivative (or rather its approximation) as the difference between the current and the previous sample. This is called the first-difference differentiator, and exploring its frequency response with Octave is a one-liner:
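The original snippet isn’t reproduced here, but something along these lines, using `freqz` from Octave’s signal package, does the job:

```octave
pkg load signal;  % freqz lives in the signal package
freqz([1 -1]);    % frequency response of the first-difference differentiator
```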
The approximation holds for the lowest frequencies but it’s not able to sustain the gain for higher frequencies:
We can compare this differentiator with another well-known expression:
\[\dot{s}(k) = \frac{s(k+1) - s(k-1)}{2}\]
This is called the central-difference differentiator, and its frequency response can also be computed easily:
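Again a one-line sketch, with the same assumptions as before:

```octave
freqz([1 0 -1]/2);   % frequency response of the central-difference differentiator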
In this case we see how the frequency response drops significantly at the higher frequencies. It might sound surprising, but this is actually a desirable characteristic. Remember that for band-limited signals the advice was to remove as much of the out-of-band noise as possible. This differentiator offers both filtering and differentiation at the same time, in a very simple expression. As long as our signal is sampled at 8 times its bandwidth or more (so that we stay at \(f/f_s \lt 0.125\) in the chart), the central difference will give us an accurate estimate without amplifying the noise at higher frequencies.
This low-pass filtering effect of the central-difference formula is expected since its expression is just the average of two consecutive first differences and we know that low-pass filtering, smoothing and averaging are all synonyms:
\[\frac{(s(k+1) - s(k)) + (s(k) - s(k-1))}{2} = \frac{s(k+1) - s(k-1)}{2}\]
The theory of Savitzky-Golay filters gives us even more options. If we are willing to increase the complexity, we can go to lengths of 5, 7 or 9 points:
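The expressions themselves are shown as images in the post; for reference, these are the standard central coefficients of the quadratic Savitzky-Golay first-derivative filters at those lengths (assumed here to match the ones in the post):

```octave
% Savitzky-Golay first-derivative coefficients (quadratic fit, central point)
d5 = [-2 -1 0 1 2] / 10;            % 5-point
d7 = [-3 -2 -1 0 1 2 3] / 28;       % 7-point
d9 = [-4 -3 -2 -1 0 1 2 3 4] / 60;  % 9-point
```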
Let’s see what their frequency response looks like:
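For instance, reusing the coefficient vectors above:

```octave
% Overlay the magnitude responses of the three filters
[h5, w] = freqz(d5);  [h7, ~] = freqz(d7);  [h9, ~] = freqz(d9);
plot(w/(2*pi), abs([h5 h7 h9]));
xlabel("f/f_s");  legend("5-point", "7-point", "9-point");
```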
Notice how these higher order Savitzky-Golay filters just focus on smoothing the result, filtering out the higher frequency components with still relatively simple expressions.
Is this the best we can do? Do we always need to trade accuracy for frequency response? No. As long as we are ready to bring more computing power to the table, there is one more approach: design an ad-hoc differentiating filter that takes your exact signal sampling rate and bandwidth fully into consideration. To illustrate this, let’s follow a practical example. Given this signal:
which is a composition of 21 phasors that cover 1/10 of the sampling frequency:
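Since the exact definition appears only as a figure in the post, here is one hedged way to generate such a signal (equal amplitudes and zero phases are assumptions):

```octave
N = 1024;                     % number of samples (assumption)
n = (0:N-1)';
f = linspace(0, 0.1, 21);     % 21 frequencies covering up to fs/10 (cycles/sample)
x = sum(exp(2i*pi*n*f), 2);   % sum of the 21 phasors
```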
Computing the central-difference produces the following result:
So far so good. Let’s add some white noise and repeat the computation:
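Something like this, with the noise level picked arbitrarily:

```octave
sigma   = 0.5;                                             % noise level (assumption)
x_noisy = x + sigma*(randn(N,1) + 1i*randn(N,1))/sqrt(2);  % complex white noise
d_noisy = filter([1 0 -1]/2, 1, x_noisy);   % central difference (causal, 1-sample delay)
```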
the derivative is barely recognizable:
That’s a lot of noise, probably even enough to crash a rocket. Let’s now design our ideal derivative filter for this particular signal. It should follow the ideal derivative frequency response (a straight line with slope \(2\pi\)) up to the signal bandwidth (which is 1/10 of our sampling rate) and drop to flat zero from then on.
It took me a while to tinker around with Octave’s remez function; other DSP tools can probably produce better results. This is as far as I got:
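I can’t reproduce the exact call from the post, so this is only a sketch in that spirit (the filter order, band edges and weights are all assumptions):

```octave
pkg load signal;

n = 60;                        % filter order: 61 taps (the post mentions ~60)
f = [0 0.2 0.25 1];            % band edges, normalized to Nyquist (0.2 = fs/10)
a = [0 0.2*pi 0 0];            % ideal slope 2*pi*(f/fs) up to fs/10, then zero
b = remez(n, f, a, [1 1], "differentiator");
b = b(:).' .* hamming(numel(b)).';   % Hamming window to tame the ringing
freqz(b);                      % check the result
```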
The additional application of a Hamming window is a nice improvement to reduce the filter’s ringing, as Mr. Lyons explains in the aforementioned reference.
The frequency response looks like this:
Quite close. We could make it even closer by adding more taps, but I think 60 is already quite a lot. Let’s see how it behaves with our input signal:
Not bad, the rocket has a better chance now. Notice that the output has been aligned in the plot to account for the group delay that this filter introduces.
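For reference, the alignment can be done along these lines (variable names follow the earlier sketches):

```octave
d_hat = filter(b, 1, x_noisy);   % differentiate with the designed filter
gd    = (numel(b) - 1) / 2;      % group delay of a linear-phase FIR: 30 samples here
d_hat = d_hat(1+gd:end);         % discard the delay to align with the input
```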
With all these tools, it’s up to the particular scenario we face to choose the right one, making compromises where they can be safely made. Let a full understanding of our signal’s bandwidth and of the impact of the noise be our guide to making the right choice in each circumstance. And let’s keep those rockets steady, even when the bars are left out of the requirements!