# Greedy Sparse Recovery

## Algorithm versions

Several algorithms are available in multiple versions.

The library allows a dictionary or a sensing process to be represented as either:

• A matrix of real/complex values

• A linear operator with fast implementation (see cr.sparse.lop module)

For example, a partial Fourier sensing matrix can be implemented more efficiently as a linear operator that applies the Fourier transform and then selects a subset of the Fourier measurements.
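
The operator idea can be sketched in plain JAX. The function names below are illustrative, not the cr.sparse.lop API: the forward map applies an orthonormal FFT and keeps a subset of measurements, and the adjoint zero-fills the measurements and applies the inverse FFT.

```python
import jax.numpy as jnp

def partial_fourier(picks, n):
    """Illustrative partial-Fourier sensing operator (not the cr.sparse.lop API).

    times(x): FFT followed by selecting a subset of the measurements.
    trans(b): adjoint -- zero-fill the measurements, then inverse FFT.
    """
    # orthonormal DFT so that trans is the true adjoint of times
    times = lambda x: jnp.fft.fft(x, norm="ortho")[picks]
    trans = lambda b: jnp.fft.ifft(
        jnp.zeros(n, dtype=complex).at[picks].set(b), norm="ortho")
    return times, trans

n = 8
picks = jnp.array([0, 2, 5])
A, At = partial_fourier(picks, n)
x = jnp.arange(n, dtype=float)
b = A(x)   # 3 Fourier measurements instead of 8
```

No dense matrix is ever materialized; only FFTs and index selections are applied.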

• A prefix matrix_ indicates an implementation which accepts matrices as dictionaries or compressive sensors.

• A prefix operator_ indicates an implementation which accepts linear operators described in cr.sparse.lop module as dictionaries or compressive sensors.

• A suffix _jit indicates the JIT (just-in-time) compiled version of the original implementation.

• A suffix _multi indicates a version that can process multiple signals/measurement vectors simultaneously. The recovery problem $$y = \Phi x + e$$ is extended to $$Y = \Phi X + E$$ such that:

• Each column of $$Y$$ represents one signal/measurement vector

• Each column of $$X$$ represents one representation vector to be recovered

• Each column of $$E$$ represents the corresponding measurement error/noise.
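
The column-wise convention can be illustrated with a small sketch (the sizes and the shared support below are made up for illustration):

```python
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(0)
M, N, S = 8, 16, 3                  # measurements, ambient dimension, signal count
Phi = random.normal(key, (M, N)) / jnp.sqrt(M)
# three sparse representations sharing the same support (rows 1, 4, 9)
X = jnp.zeros((N, S)).at[jnp.array([1, 4, 9])].set(1.0)
Y = Phi @ X                          # column j of Y measures column j of X
```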

## Conditions on dictionaries/sensing matrices

Different algorithms have different requirements on the dictionaries or sensing matrices:

• Some algorithms accept overcomplete dictionaries/sensing matrices with unit norm columns

• Some algorithms accept overcomplete dictionaries/sensing matrices with orthogonal rows

• Some algorithms accept any overcomplete dictionary
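
When an algorithm expects unit norm columns, a dictionary can be normalized column-wise before use; a plain JAX sketch (the dictionary here is a random stand-in):

```python
import jax.numpy as jnp
from jax import random

key = random.PRNGKey(42)
Phi = random.normal(key, (8, 16))       # an unnormalized dictionary
norms = jnp.linalg.norm(Phi, axis=0)    # l2 norm of each column
Phi = Phi / norms                       # broadcasting divides each column by its norm
```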

## Basic Matching Pursuit Based Algorithms

### Matching Pursuit

• mp.solve(Phi, y[, max_iters, res_norm_rtol]): Solves the sparse recovery problem $$y = \Phi x + e$$ using the matching pursuit algorithm

• mp.matrix_solve(Phi, y[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using the matching pursuit algorithm
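
Matching pursuit greedily picks, at each iteration, the atom most correlated with the current residual and adds its contribution to the estimate. A minimal dense-matrix sketch (illustrative, not the mp.solve implementation):

```python
import jax.numpy as jnp

def mp_sketch(Phi, y, max_iters=20):
    """Minimal matching pursuit sketch (not cr.sparse's mp.solve).

    Assumes Phi has unit-norm columns.
    """
    x = jnp.zeros(Phi.shape[1])
    r = y
    for _ in range(max_iters):
        h = Phi.T @ r                  # correlations of atoms with the residual
        i = jnp.argmax(jnp.abs(h))     # best matching atom
        x = x.at[i].add(h[i])          # update its coefficient
        r = r - h[i] * Phi[:, i]       # subtract its contribution
    return x, r

# sanity check with an orthonormal dictionary (recovery is exact here)
Phi = jnp.eye(4)
y = jnp.array([0.0, 2.0, 0.0, -1.0])
x_hat, r = mp_sketch(Phi, y, max_iters=4)
```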

### Orthogonal Matching Pursuit

• omp.solve(Phi, y, max_iters[, max_res_norm]): Solves the recovery/approximation problem $$y = \Phi x + e$$ using Orthogonal Matching Pursuit

• omp.matrix_solve(Phi, y, max_iters[, …]): Solves the recovery/approximation problem $$y = \Phi x + e$$ using Orthogonal Matching Pursuit

• omp.matrix_solve_jit(Phi, y, max_iters[, …]): Solves the recovery/approximation problem $$y = \Phi x + e$$ using Orthogonal Matching Pursuit

• omp.matrix_solve_multi(Phi, y, max_iters[, …]): Solves the MMV recovery/approximation problem $$Y = \Phi X + E$$ using Orthogonal Matching Pursuit
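
OMP differs from plain matching pursuit in that, after each atom selection, the coefficients are refit by least squares over all selected atoms, so the residual stays orthogonal to their span. A minimal sketch (illustrative, not cr.sparse's omp.solve; the test problem is made up):

```python
import jax.numpy as jnp
from jax import random

def omp_sketch(Phi, y, max_iters):
    """Minimal orthogonal matching pursuit sketch (not cr.sparse's omp.solve)."""
    support = []
    r = y
    for _ in range(max_iters):
        i = int(jnp.argmax(jnp.abs(Phi.T @ r)))   # atom most correlated with residual
        support.append(i)
        sub = Phi[:, jnp.array(support)]
        # least-squares fit of y over the selected atoms
        coeffs, *_ = jnp.linalg.lstsq(sub, y)
        r = y - sub @ coeffs                       # residual orthogonal to their span
    x = jnp.zeros(Phi.shape[1]).at[jnp.array(support)].set(coeffs)
    return x, r

# a small random instance: 2-sparse signal, 6 measurements, 10 atoms
key = random.PRNGKey(1)
Phi = random.normal(key, (6, 10))
Phi = Phi / jnp.linalg.norm(Phi, axis=0)          # unit-norm columns
x_true = jnp.zeros(10).at[jnp.array([2, 7])].set(jnp.array([1.5, -1.0]))
y = Phi @ x_true
x_hat, r = omp_sketch(Phi, y, max_iters=2)
```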

## Compressive Sensing Matching Pursuit (CSMP) Algorithms

### Compressive Sampling Matching Pursuit

• cosamp.solve(Phi, y, K[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Compressive Sampling Matching Pursuit for linear operators

• cosamp.matrix_solve(Phi, y, K[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Compressive Sampling Matching Pursuit for matrices

• cosamp.matrix_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Compressive Sampling Matching Pursuit for matrices

• cosamp.operator_solve(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Compressive Sampling Matching Pursuit for linear operators

• cosamp.operator_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Compressive Sampling Matching Pursuit for linear operators
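
CoSaMP proceeds in rounds: identify the 2K atoms most correlated with the residual, merge them with the current support, solve least squares over the merged support, and prune back to the K largest coefficients. A minimal sketch (illustrative, not cr.sparse's cosamp implementation; the identity dictionary below is a trivial sanity check):

```python
import jax.numpy as jnp

def cosamp_sketch(Phi, y, K, iters=10):
    """Minimal CoSaMP sketch (not cr.sparse's cosamp.solve)."""
    N = Phi.shape[1]
    x = jnp.zeros(N)
    r = y
    for _ in range(iters):
        h = Phi.T @ r
        # merge the 2K strongest atoms with the current K-sparse support
        omega = jnp.argsort(jnp.abs(h))[-2 * K:]
        current = jnp.argsort(jnp.abs(x))[-K:]
        T = jnp.union1d(omega, current)
        # least-squares fit over the merged support
        b, *_ = jnp.linalg.lstsq(Phi[:, T], y)
        z = jnp.zeros(N).at[T].set(b)
        # prune: keep only the K largest coefficients
        keep = jnp.argsort(jnp.abs(z))[-K:]
        x = jnp.zeros(N).at[keep].set(z[keep])
        r = y - Phi @ x
    return x

# sanity check with the identity as a trivial dictionary
Phi = jnp.eye(6)
y = jnp.zeros(6).at[jnp.array([1, 3])].set(jnp.array([2.0, -1.0]))
x_hat = cosamp_sketch(Phi, y, K=2, iters=3)
```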

### Subspace Pursuit

• sp.solve(Phi, y, K[, max_iters, res_norm_rtol]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Subspace Pursuit for linear operators

• sp.matrix_solve(Phi, y, K[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Subspace Pursuit for matrices

• sp.matrix_solve_jit(Phi, y, K[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Subspace Pursuit for matrices

• sp.operator_solve(Phi, y, K[, max_iters, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Subspace Pursuit for linear operators

• sp.operator_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Subspace Pursuit for linear operators

## Hard Thresholding Based Algorithms

### Iterative Hard Thresholding

• iht.solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Iterative Hard Thresholding for linear operators

• iht.matrix_solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Iterative Hard Thresholding for matrices

• iht.matrix_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Iterative Hard Thresholding for matrices

• iht.operator_solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Iterative Hard Thresholding for linear operators

• iht.operator_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Iterative Hard Thresholding for linear operators
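
IHT alternates a gradient step on the least-squares objective with a hard-thresholding step that keeps only the K largest-magnitude entries: $$x \leftarrow H_K\left(x + \Phi^T (y - \Phi x)\right)$$. A minimal sketch (illustrative, not cr.sparse's iht implementation):

```python
import jax.numpy as jnp

def hard_threshold(x, K):
    """Keep the K largest-magnitude entries of x; zero out the rest."""
    idx = jnp.argsort(jnp.abs(x))[-K:]
    return jnp.zeros_like(x).at[idx].set(x[idx])

def iht_sketch(Phi, y, K, step=1.0, iters=50):
    """Minimal iterative hard thresholding sketch (not cr.sparse's iht.solve)."""
    x = jnp.zeros(Phi.shape[1])
    for _ in range(iters):
        # gradient step on ||y - Phi x||^2, then project onto K-sparse vectors
        x = hard_threshold(x + step * (Phi.T @ (y - Phi @ x)), K)
    return x

# sanity check with the identity as a trivial sensing matrix
Phi = jnp.eye(5)
y = jnp.zeros(5).at[jnp.array([1, 3])].set(jnp.array([2.0, -1.0]))
x_hat = iht_sketch(Phi, y, K=2, iters=5)
```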

### Hard Thresholding Pursuit

• htp.solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Hard Thresholding Pursuit for linear operators

• htp.matrix_solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Hard Thresholding Pursuit for matrices

• htp.matrix_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Hard Thresholding Pursuit for matrices

• htp.operator_solve(Phi, y, K[, normalized, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Hard Thresholding Pursuit for linear operators

• htp.operator_solve_jit(Phi, y, K[, …]): Solves the sparse recovery problem $$y = \Phi x + e$$ using Hard Thresholding Pursuit for linear operators

## Data Types

• RecoverySolution: Represents the solution of a sparse recovery problem

## Utilities

• Returns the index of the entry with the highest magnitude

• Incrementally updates the Cholesky factorization $$G = L L^T$$ where $$G = \Phi^T \Phi$$

## Using the greedy algorithms

These algorithms solve the inverse problem $$y = \Phi x + e$$ where $$\Phi$$ (an overcomplete dictionary or a sensing matrix) and $$y$$ are known and the sparse representation $$x$$ is to be recovered.

For sparse approximation problems, we require the following to invoke any of these algorithms:

• A sparsifying dictionary $$\Phi$$.

• A signal $$y$$ which is expected to have a sparse or compressible representation $$x$$ in $$\Phi$$.

For sparse recovery problems, we require the following to invoke any of these algorithms:

• A sensing matrix $$\Phi$$ with suitable RIP or other properties.

• A measurement vector $$y$$ generated by applying $$\Phi$$ to a sparse signal $$x$$.

### A synthetic example

Build a Gaussian dictionary/sensing matrix:

from jax import random
import cr.sparse.dict as crdict
M = 128
N = 256
key = random.PRNGKey(0)
Phi = crdict.gaussian_mtx(key, M, N)


Build a K-sparse signal with Gaussian non-zero entries:

import cr.sparse.data as crdata
import jax.numpy as jnp
K = 16
key, subkey = random.split(key)
x, omega = crdata.sparse_normal_representations(key, N, K, 1)
x = jnp.squeeze(x)


Build the measurement vector:

y = Phi @ x
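
This example is noiseless; the model $$y = \Phi x + e$$ also admits measurement noise. A self-contained sketch of building noisy measurements (the dictionary, sparse signal, and noise level sigma below are illustrative stand-ins, not part of the example above):

```python
import jax.numpy as jnp
from jax import random

# tiny stand-ins for Phi and the K-sparse x so the snippet runs on its own
key = random.PRNGKey(0)
M, N = 16, 32
Phi = random.normal(key, (M, N)) / jnp.sqrt(M)
x = jnp.zeros(N).at[jnp.array([3, 10, 20])].set(1.0)

key, subkey = random.split(key)
sigma = 0.01                       # illustrative noise level
e = sigma * random.normal(subkey, (M,))
y = Phi @ x + e                    # noisy measurements
```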


We have built the necessary inputs for a sparse recovery problem. It is time to run the solver.

Import a sparse recovery solver:

from cr.sparse.pursuit import cosamp


Solve the recovery problem:

solution = cosamp.matrix_solve(Phi, y, K)


You can choose any other solver.

The support for the non-zero entries in the solution is given by solution.I and the values for non-zero entries are given by solution.x_I. You can build the sparse representation as follows:

from cr.sparse import build_signal_from_indices_and_values
x_hat = build_signal_from_indices_and_values(N, solution.I, solution.x_I)


Finally, you can use the utility to evaluate the quality of reconstruction:

from cr.sparse.ef import RecoveryPerformance
rp = RecoveryPerformance(Phi, y, x, x_hat)
rp.print()


This would output something like:

M: 128, N: 256, K: 16
x_norm: 3.817, y_norm: 3.922
x_hat_norm: 3.817, h_norm: 1.55e-06, r_norm: 1.72e-06
recovery_snr: 127.83 dB, measurement_snr: 127.16 dB
T0: [ 27  63  79  85  88 111 112 124 131 137 160 200 230 234 235 250]
R0: [ 27  63  79  85  88 111 112 124 131 137 160 200 230 234 235 250]
Overlap: [ 27  63  79  85  88 111 112 124 131 137 160 200 230 234 235 250], Correct: 16
success: True