On Measurements and Prior Knowledge

The other day, I found a fun little post in one of my favorite blogs (Nuit Blanche). Miki Lustig, a professor working on applying Compressed Sensing (CS) to fast MRI, drew a set of XKCD-like comics explaining the basic principles of CS in MRI. You can check out the comics here (the drawing of David Donoho is cute and funny, in my opinion).

I found one of the diagrams shown in the comics to be particularly interesting:
[Figure: prior knowledge vs. number of measurements]

This graph shows a concept that may seem remarkably obvious: the more you know about a system/signal/event, the fewer measurements you need to make. Isn’t that intuitive?

Let’s try to dig a little deeper here. To put things in perspective, there is a well-studied theorem in signal processing, the Shannon-Nyquist sampling criterion, which states that if a signal is bandlimited (it has finite support in the frequency domain), then it can be perfectly reconstructed from samples taken at a rate greater than twice its bandwidth [1]. As the diagram above points out, for lowpass signals we don’t need to sample any faster than twice the highest frequency (plus a safety margin). From the engineer’s point of view, this is great because you can tailor your measurement system to the signal of interest; however, there are cases in which Nyquist sampling rates are still too high or too expensive to implement.
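
To make the criterion a bit more concrete, here is a toy NumPy sketch (the 100 Hz sine, the sampling rates, and the one-second window are all numbers I made up for illustration): sampling above twice the highest frequency preserves the tone, while sampling below it makes the 100 Hz sine masquerade as a 50 Hz one.

    import numpy as np

    # A 100 Hz sine: bandlimited, with highest frequency f_max = 100 Hz, so the
    # Nyquist criterion asks for a sampling rate above 2 * f_max = 200 Hz.
    f_signal = 100.0  # Hz

    def dominant_frequency(fs, duration=1.0):
        """Sample the sine at rate fs for `duration` seconds and return the
        strongest frequency found in the sampled data via an FFT."""
        t = np.arange(0.0, duration, 1.0 / fs)
        x = np.sin(2 * np.pi * f_signal * t)
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spectrum)]

    print(dominant_frequency(fs=1000.0))  # well above Nyquist -> 100.0 Hz
    print(dominant_frequency(fs=250.0))   # just above Nyquist -> 100.0 Hz
    print(dominant_frequency(fs=150.0))   # below Nyquist: aliases to 50.0 Hz

The last call is the failure mode the theorem warns about: once you sample below the Nyquist rate, the measurements alone can no longer tell the true tone apart from its alias.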

This is where more knowledge comes into play. CS theory says that if the signal is compressible (as the majority of interesting signals are), then you can, in a sense, measure the compressed data directly, thus relaxing the acquisition requirements [2]. For instance, it has been shown that the data collection process in MRI can be significantly sped up without a loss in image quality if the sensing and reconstruction are performed with the tools developed by CS. This prior knowledge of compressibility (sparsity of the signal in some domain) allows us to reduce the number of measurements needed.
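
To give a flavor of what “measuring the compressed data directly” means, here is a small NumPy sketch of the idea with a toy problem of my own making. This is not how an MRI scanner works: real CS-MRI uses undersampled Fourier measurements, sparsity in a wavelet or finite-differences domain, and an l1-regularized reconstruction, whereas the sketch below uses a signal that is sparse in the identity basis and a greedy Orthogonal Matching Pursuit solver, just to keep it short.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy setup: a length-200 signal with only 5 nonzero entries, observed
    # through 60 random projections instead of 200 point-by-point samples.
    n, m, k = 200, 60, 5
    x_true = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)
    x_true[support] = rng.choice([-2.0, -1.0, 1.0, 2.0], size=k)

    A = rng.normal(size=(m, n)) / np.sqrt(m)  # random measurement matrix
    y = A @ x_true                            # the m "compressed" measurements

    def omp(A, y, n_nonzero):
        """Orthogonal Matching Pursuit: greedily pick the column of A most
        correlated with the residual, then least-squares refit on the picks."""
        residual, selected = y.copy(), []
        for _ in range(n_nonzero):
            correlations = np.abs(A.T @ residual)
            correlations[selected] = 0.0   # never pick the same column twice
            selected.append(int(np.argmax(correlations)))
            coef, *_ = np.linalg.lstsq(A[:, selected], y, rcond=None)
            residual = y - A[:, selected] @ coef
        x_hat = np.zeros(A.shape[1])
        x_hat[selected] = coef
        return x_hat

    x_hat = omp(A, y, n_nonzero=k)
    # Relative reconstruction error; with these sizes it is usually
    # numerically zero, i.e. the sparse signal is recovered exactly.
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

Sixty generic linear combinations carry enough information to pin down the five active entries, which is the sense in which prior knowledge (sparsity) substitutes for raw measurements.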

How do we acquire this magical prior knowledge? How do we know this knowledge is correct or even useful? I think that is the engineer’s job: to develop models and design systems in order to meet some requirements. Maybe the diagram needs another axis showing how wrong the knowledge is and how many extra measurements are needed to compensate for the mistake; however, two axes are enough to get the point across. As an engineer, you run into this prior-knowledge-and-modeling business very frequently. In Bayesian classification, for example, you can incorporate your knowledge in the form of a prior distribution over the classes to improve your performance metrics. In CT, it has been shown that model-based reconstruction approaches allow a reduction of the x-ray dose without sacrificing image quality. I could name many other examples, but I think you get my point: it is the engineer’s task to carefully incorporate knowledge into the system design. Neglecting this could lead to costly or maybe even unrealizable systems.
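
As a tiny illustration of the Bayesian point (again a made-up toy, with class means and a prior I chose just for the example), here is how a prior over the classes can change the decision for the exact same measurement:

    import numpy as np

    # Two 1-D classes, both Gaussian with unit variance: "healthy" readings
    # centered at 0 and "faulty" readings centered at 2. Suppose we also know
    # from experience that faults are rare: P(faulty) = 0.05.
    prior = {"healthy": 0.95, "faulty": 0.05}
    mean = {"healthy": 0.0, "faulty": 2.0}

    def gaussian_pdf(x, mu):
        return np.exp(-0.5 * (x - mu) ** 2) / np.sqrt(2.0 * np.pi)

    def classify(x, use_prior=True):
        """Pick the class with the largest likelihood (maximum likelihood) or
        the largest likelihood * prior (MAP), depending on use_prior."""
        scores = {label: gaussian_pdf(x, mean[label]) *
                         (prior[label] if use_prior else 1.0)
                  for label in mean}
        return max(scores, key=scores.get)

    x = 1.2  # a single measurement sitting between the two class means
    print(classify(x, use_prior=False))  # likelihood alone says "faulty"
    print(classify(x, use_prior=True))   # the prior tips it back to "healthy"

With more informative measurements, or simply more of them, the likelihood eventually overwhelms the prior, which is really the same trade-off the comic is drawing.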

This post might have felt almost like I was rambling without a particular direction, but I thought it would be cool to share this comic as well as my thoughts about it. I am sure that drawing this diagram on the whiteboard in the kitchen area of a research lab would spark interesting discussions at lunch. I realize this was a complicated post, so I promise that the next one won’t be engineering-related.

[1] This discovery made the digital revolution possible by allowing signal processing techniques to be implemented on computers and embedded devices.
[2] I am brushing the context and limitations of this theory aside for this post; in reality, the comparison is a bit unfair, as the assumptions behind Nyquist and compressive sampling are different. For example, Nyquist deals with infinitely long, continuous signals, whereas compressive sampling has been developed for finite-dimensional vectors.