.. _api:pursuit:

Greedy Sparse Recovery
================================================

.. contents::
    :depth: 2
    :local:

.. rubric:: Algorithm versions

Several algorithms are available in multiple versions.
The library allows a dictionary or a sensing process to be represented as either:

* A matrix of real/complex values
* A linear operator with a fast implementation (see the ``cr.sparse.lop`` module).
  E.g., a partial sensing matrix can be implemented more efficiently as a linear
  operator that applies the Fourier transform and then selects a subset of the
  Fourier measurements.

The solver names follow these conventions:

* A prefix ``matrix_`` indicates an implementation which accepts matrices as
  dictionaries or compressive sensors.
* A prefix ``operator_`` indicates an implementation which accepts linear operators
  described in the ``cr.sparse.lop`` module as dictionaries or compressive sensors.
* A suffix ``_jit`` means that it is the JIT (Just In Time) compiled version of the
  original implementation.
* A suffix ``_multi`` means that it is the version of the implementation which can
  process multiple signals/measurement vectors simultaneously. The recovery problem
  :math:`y = \Phi x + e` is extended to :math:`Y = \Phi X + E` such that:

  * Each column of :math:`Y` represents one signal/measurement vector.
  * Each column of :math:`X` represents one representation vector to be recovered.
  * Each column of :math:`E` represents the corresponding measurement error/noise.

.. rubric:: Conditions on dictionaries/sensing matrices

Different algorithms have different requirements on the dictionaries or sensing matrices:

* Some algorithms accept overcomplete dictionaries/sensing matrices with unit norm columns.
* Some algorithms accept overcomplete dictionaries/sensing matrices with orthogonal rows.
* Some algorithms accept any overcomplete dictionary.

.. currentmodule:: cr.sparse.pursuit

.. _api:pursuit:matching:

Basic Matching Pursuit Based Algorithms
------------------------------------------

.. rubric:: Matching Pursuit

.. autosummary::
    :toctree: _autosummary

    mp.solve
    mp.matrix_solve

.. rubric:: Orthogonal Matching Pursuit

.. autosummary::
    :toctree: _autosummary

    omp.solve
    omp.operator_solve
    omp.matrix_solve
    omp.matrix_solve_jit
    omp.matrix_solve_multi

Compressive Sensing Matching Pursuit (CSMP) Algorithms
--------------------------------------------------------

.. rubric:: Compressive Sampling Matching Pursuit

.. autosummary::
    :toctree: _autosummary

    cosamp.solve
    cosamp.matrix_solve
    cosamp.matrix_solve_jit
    cosamp.operator_solve
    cosamp.operator_solve_jit

.. rubric:: Subspace Pursuit

.. autosummary::
    :toctree: _autosummary

    sp.solve
    sp.matrix_solve
    sp.matrix_solve_jit
    sp.operator_solve
    sp.operator_solve_jit

.. _api:pursuit:ht:

Hard Thresholding Based Algorithms
-----------------------------------------

.. rubric:: Iterative Hard Thresholding

.. autosummary::
    :toctree: _autosummary

    iht.solve
    iht.matrix_solve
    iht.matrix_solve_jit
    iht.operator_solve
    iht.operator_solve_jit

.. rubric:: Hard Thresholding Pursuit

.. autosummary::
    :toctree: _autosummary

    htp.solve
    htp.matrix_solve
    htp.matrix_solve_jit
    htp.operator_solve
    htp.operator_solve_jit

Data Types
-------------------------------

.. autosummary::
    :nosignatures:
    :toctree: _autosummary
    :template: namedtuple.rst

    RecoverySolution

Utilities
-------------------------------

.. autosummary::
    :toctree: _autosummary

    abs_max_idx
    gram_chol_update

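The utilities above are small building blocks used inside the pursuit iterations.
As a rough sketch (not the library's implementation, and the exact signature of
``abs_max_idx`` is an assumption), the atom selection step of matching pursuit picks
the dictionary column whose correlation with the current residual has the largest
magnitude; a helper like ``abs_max_idx`` is assumed to encapsulate the final
``argmax`` step::

    from jax import random
    import jax.numpy as jnp

    key = random.PRNGKey(0)
    Phi = random.normal(key, (8, 16))             # toy dictionary with 16 atoms
    r = random.normal(random.PRNGKey(1), (8,))    # toy residual vector
    h = Phi.T @ r                                 # correlation of the residual with each atom
    best_atom = jnp.argmax(jnp.abs(h))            # index of the largest-magnitude correlation
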
Using the greedy algorithms
-------------------------------

These algorithms solve the inverse problem :math:`y = \Phi x + e` where
:math:`\Phi` and :math:`y` are known and :math:`x` is desired, with
:math:`\Phi` being an overcomplete dictionary or a sensing matrix.

For sparse approximation problems, we require the following to invoke any of these algorithms:

* A sparsifying dictionary :math:`\Phi`.
* A signal :math:`y` which is expected to have a sparse or compressible
  representation :math:`x` in :math:`\Phi`.

For sparse recovery problems, we require the following to invoke any of these algorithms:

* A sensing matrix :math:`\Phi` with suitable RIP or other properties.
* A measurement vector :math:`y` generated by applying :math:`\Phi` to a sparse signal :math:`x`.

.. rubric:: A synthetic example

Build a Gaussian dictionary/sensing matrix::

    from jax import random
    import cr.sparse.dict as crdict
    M = 128
    N = 256
    key = random.PRNGKey(0)
    Phi = crdict.gaussian_mtx(key, M, N)

Build a K-sparse signal with Gaussian non-zero entries::

    import cr.sparse.data as crdata
    import jax.numpy as jnp
    K = 16
    key, subkey = random.split(key)
    x, omega = crdata.sparse_normal_representations(key, N, K, 1)
    x = jnp.squeeze(x)

Build the measurement vector::

    y = Phi @ x

We have built the necessary inputs for a sparse recovery problem.
It is time to run the solver.

Import a sparse recovery solver::

    from cr.sparse.pursuit import cosamp

Solve the recovery problem::

    solution = cosamp.matrix_solve(Phi, y, K)

You can choose any other solver. The support for the non-zero entries in the
solution is given by ``solution.I`` and the values of the non-zero entries are
given by ``solution.x_I``. You can build the sparse representation as follows::

    from cr.sparse import build_signal_from_indices_and_values
    x_hat = build_signal_from_indices_and_values(N, solution.I, solution.x_I)

Finally, you can use the utility to evaluate the quality of reconstruction::

    from cr.sparse.ef import RecoveryPerformance
    rp = RecoveryPerformance(Phi, y, x, x_hat)
    rp.print()

This would output something like::

    M: 128, N: 256, K: 16
    x_norm: 3.817, y_norm: 3.922
    x_hat_norm: 3.817, h_norm: 1.55e-06, r_norm: 1.72e-06
    recovery_snr: 127.83 dB, measurement_snr: 127.16 dB
    T0: [ 27 63 79 85 88 111 112 124 131 137 160 200 230 234 235 250]
    R0: [ 27 63 79 85 88 111 112 124 131 137 160 200 230 234 235 250]
    Overlap: [ 27 63 79 85 88 111 112 124 131 137 160 200 230 234 235 250], Correct: 16
    success: True

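The same workflow carries over to the other solver variants listed in the API
sections above. As a sketch that continues with the objects built in the example
(the exact signatures are assumptions and may differ slightly between solvers),
the ``_jit`` variant is invoked like the plain matrix version, and the ``_multi``
variants accept a matrix :math:`Y` whose columns are individual measurement vectors::

    # JIT-compiled variant of the same solver (assumed to share the signature)
    solution = cosamp.matrix_solve_jit(Phi, y, K)

    # Multi-signal recovery with OMP: each column of Y is one measurement vector.
    from cr.sparse.pursuit import omp
    S = 4                                                     # number of signals (illustrative)
    X, Omega = crdata.sparse_normal_representations(key, N, K, S)
    Y = Phi @ X
    solutions = omp.matrix_solve_multi(Phi, Y, K)             # signature assumed to mirror matrix_solve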