Block Sparsity

This section covers algorithms for recovery of block sparse signals from compressive measurements.

Related Examples

  1. Block Sparse Bayesian Learning

Problem Formulations

The basic compressive sensing model is given by

(1)\[\by = \Phi \bx + \be\]

where \(\by\) is a known measurement vector, \(\Phi\) is a known sensing matrix and \(\bx\) is a sparse signal to be recovered from the measurements.

We introduce the block/group structure on \(\bx\) as

(2)\[\bx = \begin{pmatrix} \bx_1^T & \bx_2^T & \dots & \bx_g^T \end{pmatrix}^T\]

where each \(\bx_i\) is a block of \(b\) values. The signal \(\bx\) consists of \(g\) such blocks/groups. Under the block sparsity model, only a few (\(k \ll g\)) blocks are nonzero (active) in the signal \(\bx\); however, the locations of these active blocks are unknown.

We can rewrite the sensing equation as:

(3)\[\by = \sum_{i=1}^g \Phi_i \bx_i + \be\]

by splitting the sensing matrix into blocks of columns appropriately.
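The equivalence of (1) and (3) is easy to check numerically. The sketch below (plain NumPy, with arbitrarily chosen dimensions) builds a block sparse signal with a few active blocks and verifies that the block-wise sum of products reproduces the full matrix-vector product.

```python
import numpy as np

rng = np.random.default_rng(0)

m, b, g = 12, 4, 6            # measurements, block size, number of blocks
n = b * g                     # signal dimension
k = 2                         # number of active blocks

# Random sensing matrix and a k-block-sparse signal
Phi = rng.standard_normal((m, n))
x = np.zeros(n)
active = rng.choice(g, size=k, replace=False)
for i in active:
    x[i * b:(i + 1) * b] = rng.standard_normal(b)

e = 0.01 * rng.standard_normal(m)   # measurement noise
y = Phi @ x + e                     # equation (1)

# Equation (3): the same measurements as a sum over column blocks of Phi
y_blocks = sum(Phi[:, i * b:(i + 1) * b] @ x[i * b:(i + 1) * b]
               for i in range(g)) + e
assert np.allclose(y, y_blocks)
```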

The following possibilities arise:

Block Sizes:

  1. All blocks have the same size.

  2. Different blocks have different sizes.

  3. Block sizes are known.

  4. Block sizes are unknown.

Intra Block Correlation:

  1. Nonzero coefficients in each active block are independent of each other.

  2. Nonzero coefficients in each active block exhibit some correlation.

Measurements:

  1. Measurements are noiseless.

  2. Measurements contain a small amount of noise (high SNR).

  3. Measurements contain a large amount of noise (low SNR).

  4. Measurements are real valued.

  5. Measurements are quantized and hence integer valued, introducing quantization noise.

Different block sparse recovery algorithms differ in which of these settings they can handle.

Intra Block Correlation

We can model the correlation among the values within each active block as a first-order autoregressive (AR(1)) process. Under this assumption, the intra-block covariance matrices \(\bB_i\) take the form of a Toeplitz matrix

(4)\[\begin{split}\bB = \begin{bmatrix} 1 & r & \dots & r^{b-1}\\ r & 1 & \dots & r^{b-2}\\ \vdots & & \ddots & \vdots\\ r^{b-1} & r^{b-2} & \dots & 1 \end{bmatrix}\end{split}\]

where \(r\) is the AR(1) model coefficient.
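The Toeplitz structure in (4) follows directly from the AR(1) model: the covariance between coefficients \(i\) and \(j\) is \(r^{|i-j|}\). A small helper (plain NumPy, the name `ar1_covariance` is our own for illustration) constructs it:

```python
import numpy as np

def ar1_covariance(b, r):
    """Toeplitz intra-block covariance with entries r**|i - j|, as in equation (4)."""
    idx = np.arange(b)
    return r ** np.abs(idx[:, None] - idx[None, :])

B = ar1_covariance(4, 0.9)
```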

Block Sparse Bayesian Learning

The Block Sparse Bayesian Learning (BSBL) algorithm and its extensions are described in [ZR12, ZR13].

Our implementation is restricted to blocks of identical, a priori known size. This restriction lets us take advantage of the just-in-time (JIT) compilation capabilities of JAX, which require array sizes to be statically determined.

The following extensions of the BSBL algorithm have been implemented:

  • BSBL-EM (Expectation Maximization)

  • BSBL-BO (Bound Optimization)

Methods

bsbl_em

Reconstructs a block sparse signal using the BSBL-EM algorithm

bsbl_em_jit

Reconstructs a block sparse signal using the BSBL-EM algorithm (JIT-compiled version)

bsbl_bo

Reconstructs a block sparse signal using the BSBL-BO algorithm

bsbl_bo_jit

Reconstructs a block sparse signal using the BSBL-BO algorithm (JIT-compiled version)

bsbl_em_options

Helper function to initialize options for the BSBL-EM algorithm

bsbl_bo_options

Helper function to initialize options for the BSBL-BO algorithm

Data Types

BSBL_Options

Options for the BSBL algorithm

BSBL_State

Sparse Bayesian Learning algorithm state