Title:

Dictionary optimisation for representing and sensing sparse signals

Compressed sensing exploits the fact that most natural signals can be sparsely represented via linear transformations, so accurate reconstruction is possible from sub-Nyquist sampling. The properties of the measurement matrix directly affect the trade-off between the sampling rate and the distortion of the reconstruction. Researchers have sought either to design measurement matrices from signal statistics or to train them from large collections of similar signals, and the resulting techniques have become active research topics. This thesis discusses the impact of measurement matrices on representing and sensing sparse signals. The text is divided into four parts, presented in Chapters 2 to 5, respectively.

In Chapter 2 we focus on the dictionary update stage in dictionary learning. Given observations of sparse signals via an overcomplete measurement matrix, dictionary learning aims to find this measurement matrix, i.e., the dictionary, so that the sparse signals can be accurately reconstructed. A dictionary learning problem typically consists of two stages implemented iteratively: sparse coding and dictionary update. Sparse coding fixes the dictionary and updates the sparsity pattern of the estimated sparse signals; dictionary update fixes the sparsity pattern and updates the dictionary. We show that the failure of the update procedure to find a global optimum is caused not by convergence to local minima or saddle points but by convergence to singular points. To address this singularity issue, we revise the original objective function and propose a continuous counterpart. This modification is applied in the SimCO dictionary update framework, and we prove that, in the limit, the new objective function is the best possible lower semicontinuous approximation of the original.
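The two-stage iteration described above can be illustrated with a minimal sketch. This is not the SimCO framework of Chapter 2: for illustration it uses a crude thresholding-based sparse coder in place of a proper pursuit algorithm, and a MOD-style least-squares dictionary update; all dimensions and names are assumptions.

```python
import numpy as np

def sparse_code(D, Y, k):
    # Crude sparse coding (illustration only, not OMP): for each column of Y,
    # keep the k atoms with the largest correlations, then fit by least squares.
    X = np.zeros((D.shape[1], Y.shape[1]))
    for j in range(Y.shape[1]):
        c = D.T @ Y[:, j]
        idx = np.argsort(np.abs(c))[-k:]
        X[idx, j] = np.linalg.lstsq(D[:, idx], Y[:, j], rcond=None)[0]
    return X

def dictionary_update(Y, X):
    # MOD-style update: least-squares dictionary for fixed sparse codes,
    # followed by normalisation of the dictionary columns (atoms).
    D = Y @ np.linalg.pinv(X)
    return D / np.maximum(np.linalg.norm(D, axis=0), 1e-12)

# Synthetic data: sparse signals observed through a ground-truth dictionary.
rng = np.random.default_rng(0)
m, n, N, k = 16, 32, 200, 3          # assumed toy dimensions
D_true = rng.standard_normal((m, n))
D_true /= np.linalg.norm(D_true, axis=0)
X_true = np.zeros((n, N))
for j in range(N):
    X_true[rng.choice(n, k, replace=False), j] = rng.standard_normal(k)
Y = D_true @ X_true

# Alternate between the two stages, starting from a random dictionary.
D = rng.standard_normal((m, n))
D /= np.linalg.norm(D, axis=0)
err0 = np.linalg.norm(Y - D @ sparse_code(D, Y, k)) / np.linalg.norm(Y)
for _ in range(30):
    X = sparse_code(D, Y, k)   # stage 1: fix D, update sparse codes
    D = dictionary_update(Y, X)  # stage 2: fix sparsity pattern, update D
err = np.linalg.norm(Y - D @ sparse_code(D, Y, k)) / np.linalg.norm(Y)
```

In this sketch the relative residual `err` after the alternating iterations is smaller than the initial residual `err0`; the singularity issue studied in Chapter 2 concerns exactly the cases where such alternating updates stall short of the global optimum.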
In Chapter 3 we present a joint source separation and dictionary learning algorithm to separate noise-corrupted mixed sources. The underlying idea is that different target sources, such as images and audio, have different sparse representations. We consider the determined and overdetermined scenarios, where the number of mixtures is no less than the number of sources. The technique developed in Chapter 2 to alleviate the singularity issue is used in the algorithm, and we use examples to demonstrate its benefit.

In Chapter 4 we observe that, relying on the known prior statistics of the sparse signals, it is possible to allocate the sensing power accordingly to achieve the best possible performance. Given a nonuniform signal sparsity profile and a total power budget, we study how to optimally allocate the power across the columns of a Gaussian random measurement matrix so as to meet the reconstruction requirements. We revise the approximate message passing (AMP) algorithm and quantify the mean squared error (MSE) performance in the asymptotic regime. The closed form obtained for the optimal power allocation shows that, in the presence of measurement noise, uniform power allocation is not optimal for nonuniformly sparse signals.

In Chapter 5 we study the distributed compressed sensing problem. We consider scenarios where unequal numbers of measurements can be assigned to the signal blocks, and we seek the optimal measurement rate allocation for recovering sparse signals with common support. For simplicity we assume the signals follow a Bernoulli-Gaussian distribution, again use AMP for the analysis, and obtain the exact phase transition curve in the asymptotic regime. Interestingly, via the state evolution technique it can be shown that the rate region is concave, suggesting that the corner points on the curve are optimal operating points and that equal rate allocation is strictly suboptimal.
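The AMP-based analyses of Chapters 4 and 5 build on the basic AMP iteration for sparse recovery. The following is a minimal noiseless sketch with a soft-thresholding denoiser and the Onsager correction term; the threshold rule, dimensions, and stopping criterion are illustrative assumptions, not the revised algorithm of the thesis.

```python
import numpy as np

def soft(x, t):
    # Soft-thresholding denoiser.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def amp(A, y, iters=30, theta=1.0):
    # Basic AMP iteration: pseudo-data step, scalar denoiser, and the
    # Onsager correction that keeps the effective noise Gaussian-like.
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()
    for _ in range(iters):
        sigma = np.linalg.norm(z) / np.sqrt(m)  # effective noise estimate
        r = x + A.T @ z                         # pseudo-data
        x_new = soft(r, theta * sigma)          # denoise
        b = np.count_nonzero(x_new) / m         # Onsager coefficient
        z = y - A @ x_new + b * z               # corrected residual
        x = x_new
    return x

# Synthetic Bernoulli-Gaussian-style signal, noiseless Gaussian measurements.
rng = np.random.default_rng(1)
n, m, k = 400, 200, 20                 # assumed toy dimensions
A = rng.standard_normal((m, n)) / np.sqrt(m)  # roughly unit-norm columns
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x0

x_hat = amp(A, y)
mse = np.mean((x_hat - x0) ** 2)
```

At this undersampling ratio and sparsity level the pair sits inside the phase transition region, so the iteration drives the MSE well below the signal energy; the state evolution technique mentioned above tracks exactly how `sigma` evolves across iterations.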
Besides the rate allocation, we also numerically quantify how the expected reconstruction error is affected by an insufficient number of measurements, the presence of Gaussian noise, and the correlation across the signal blocks.
