First Order Methods

This module aims to implement the first order methods for sparse signal recovery problems proposed in [BCandesG11]. The implementation is adapted from TFOCS [BCG12].

First order methods exploit information on the values and gradients/subgradients (but not Hessians) of the functions comprising the models under consideration [Bec17].

API

Solvers

fom(smooth_f, prox_h, A, b, x0[, options])

First order methods driver routine

scd(prox_f, conj_neg_h, A, b, mu, x0, z0[, …])

First order methods driver routine for smooth conic dual problems

l1rls(A, b, lambda_, x0[, options])

Solver for l1 regularized least squares problem

l1rls_jit(A, b, lambda_, x0[, options])

Solver for l1 regularized least squares problem

lasso(A, b, tau, x0[, options])

Solver for LASSO problem

lasso_jit(A, b, tau, x0[, options])

Solver for LASSO problem

owl1rls(A, b, lambda_, x0[, options])

Solver for ordered weighted l1 norm regularized least squares problem

owl1rls_jit(A, b, lambda_, x0[, options])

Solver for ordered weighted l1 norm regularized least squares problem

dantzig_scd(A, b, delta, mu, x0, z0[, options])

Solver for the (smoothed) Dantzig selector problem using smoothed conic dual formulation

Data types

FomOptions

Options for the first order methods (FOM) driver routine

FomState

State of the first order methods (FOM) solver

Utilities

matrix_affine_func([A, b])

Returns an affine function for a matrix A and vector b

First Order Methods

[BCandesG11] considers problems in the unconstrained form:

(1)\[\text{minimize } \phi(x) = g(x) + h(x)\]

where:

  • Both \(g, h: \RR^n \to (\RR \cup \infty)\) are convex functions.

  • \(g : \RR^n \to \RR\) is a smooth convex function.

  • \(h : \RR^n \to \RR\) is a non-smooth convex function.

Of particular interest are problems where \(g\) takes a specific form:

(2)\[g(x) = f( \AAA(x) + b)\]

where

  • \(\AAA\) is a linear operator from \(\RR^n \to \RR^m\)

  • \(b\) is a translation vector

  • \(f : \RR^m \to \RR\) is a smooth convex function

and the function \(h\) is a prox-capable convex function (to be described later).

We can then rewrite \(\phi\) as:

(3)\[\text{minimize } \phi(x) = f( \AAA(x) + b) + h(x)\]

For a smooth function, its gradient \(\nabla f\) must exist and be easy to compute.

For a prox-capable function, it should have an efficient proximal operator:

(4)\[p_f(x, t) = \underset{z \in \RR^n}{\text{arg} \min} \left ( f(z) + \frac{1}{2t} \| z - x \|_2^2 \right )\]

for any \(x \in \RR^n\) and the step size \(t > 0\).
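For example, with \(h(x) = \lambda \| x \|_1\), the minimization in (4) separates across coordinates and reduces to elementwise soft thresholding. A minimal sketch in JAX (the name prox_l1 is illustrative, not part of the library API):

    import jax.numpy as jnp

    def prox_l1(x, t, lam=1.0):
        """Proximal operator of h(z) = lam * ||z||_1 with step size t:
        argmin_z lam * ||z||_1 + (1/(2t)) * ||z - x||_2^2."""
        return jnp.sign(x) * jnp.maximum(jnp.abs(x) - t * lam, 0.0)

    print(prox_l1(jnp.array([3.0, -0.5, 1.2, 0.0]), t=1.0))
    # each entry shrinks towards zero by t * lam: approximately [2., 0., 0.2, 0.]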


The routine fom() provides the solver for the minimization problem described above.

  • Unconstrained smooth minimization problems can be handled by choosing \(h(x) = 0\). See cr.sparse.opt.prox_zero().

  • Convex constraints can be handled by adding their indicator functions as part of \(h\).

For solving the minimization problem, an initial solution \(x_0\) must be provided. If a better guess is not available, the zero vector \(0 \in \RR^n\) can be used as the initial solution.
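As a rough illustration of the iteration underlying such solvers, the sketch below implements a plain (non-accelerated) proximal gradient loop for the composite model (3). It is not the library's fom() implementation; the helper names and the toy problem are illustrative only:

    import jax.numpy as jnp

    def proximal_gradient(grad_g, prox_h, x0, step, num_iters=200):
        """Minimize phi(x) = g(x) + h(x): a gradient step on the smooth
        part g followed by a proximal step on the non-smooth part h."""
        x = x0
        for _ in range(num_iters):
            x = prox_h(x - step * grad_g(x), step)
        return x

    # toy instance: g(x) = 0.5 * ||A x - b||_2^2 and h(x) = lam * ||x||_1
    A = jnp.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    b = jnp.array([1.0, 2.0, 3.0])
    lam = 0.1
    grad_g = lambda x: A.T @ (A @ x - b)
    prox_h = lambda x, t: jnp.sign(x) * jnp.maximum(jnp.abs(x) - t * lam, 0.0)
    step = 1.0 / jnp.linalg.norm(A, ord=2) ** 2  # 1 / Lipschitz constant of grad g
    x_hat = proximal_gradient(grad_g, prox_h, jnp.zeros(2), step)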

Smooth Conic Dual Problems

A number of sparse signal recovery problems can be cast as conic optimization problems. Then efficient first order methods can be used to solve the problem. This procedure often involves the following steps:

  • Cast the sparse recovery problem as a conic optimization problem

  • Augment the objective function with a strongly convex term to make it smooth.

  • Formulate the dual of the smooth problem.

  • Solve the smooth conic dual problem using a first order method.

  • Apply continuation to mitigate the effect of the strongly convex term added to the original objective function.

Our goal is to solve the conic problems of the form [BCandesG11]

(5)\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & f(x) \\ & \text{subject to} & & \AAA(x) + b \in \KKK \end{aligned}\end{split}\]

where:

  • \(x \in \RR^n\) is the optimization variable

  • \(f :\RR^n \to \RR\) is a convex objective function which is possibly extended valued but not necessarily smooth

  • \(\AAA : \RR^n \to \RR^m\) is a linear operator

  • \(b \in \RR^m\) is a translation vector

  • \(\KKK \subseteq \RR^m\) is a closed, convex cone

Dual Problem

The Lagrangian associated with the conic problem is given by:

(6)\[\LLL(x, \lambda) = f(x) - \langle \lambda, \AAA(x) + b \rangle\]

where \(\lambda \in \RR^m\) is the Lagrange multiplier, which must lie in the dual cone \(\KKK^*\).

The dual function is:

(7)\[g(\lambda) = \underset{x \in \RR^n}{\inf} \LLL(x, \lambda) = - f^*(\AAA^*(\lambda)) - \langle b, \lambda \rangle\]
where:

  • \(\AAA^*: \RR^m \to \RR^n\) is the adjoint of the linear operator \(\AAA\)

  • \(f^* : \RR^n \to (\RR \cup \infty)\) is the convex conjugate of \(f\) defined by

(8)\[f^*(z) = \underset{x}{\sup} \left( \langle z, x \rangle - f(x) \right)\]

Thus, the dual problem is given by:

(9)\[\begin{split}\begin{aligned} & \underset{\lambda}{\text{maximize}} & & - f^*(\AAA^*(\lambda)) - \langle b, \lambda \rangle \\ & \text{subject to} & & \lambda \in \KKK^* \end{aligned}\end{split}\]

Smoothing

The dual function \(g\) may not be differentiable (or finite) on all of \(\KKK^*\).

We perturb the original problem as follows:

(10)\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & f_{\mu} (x) \triangleq f(x) + \mu d (x)\\ & \text{subject to} & & \AAA(x) + b \in \KKK \end{aligned}\end{split}\]
where:

  • \(\mu > 0\) is a fixed smoothing parameter

  • \(d : \RR^n \to \RR\) is a strongly convex function obeying

(11)\[d(x) \geq d(x_0) + \frac{1}{2} \| x - x_0 \|_2^2\]

A specific choice for \(d(x)\) is:

(12)\[d(x) = \frac{1}{2} \| x - x_0 \|_2^2\]

The new objective function \(f_{\mu}\) is strongly convex.

The Lagrangian becomes:

(13)\[\LLL_{\mu}(x, \lambda) = f(x) + \mu d(x) - \langle \lambda, \AAA(x) + b \rangle\]

The dual function becomes:

(14)\[g_{\mu}(\lambda) = \underset{x \in \RR^n}{\inf} \LLL_{\mu}(x, \lambda) = - (f+\mu d)^*(\AAA^*(\lambda)) - \langle b, \lambda \rangle\]

First order methods may be employed to solve this problem with provable performance.

The smoothed conic dual problem is then given by:

(15)\[\begin{split}\begin{aligned} & \underset{\lambda}{\text{maximize}} & & - (f+\mu d)^*(\AAA^*(\lambda)) - \langle b, \lambda \rangle \\ & \text{subject to} & & \lambda \in \KKK^* \end{aligned}\end{split}\]

Composite Forms

Often, it is convenient to split the dual variable \(\lambda \in \RR^m\) as \(\lambda = (z, \tau)\) where \(z \in \RR^{m-\bar{m}}\) and \(\tau \in \RR^{\bar{m}}\). For example, if \(\KKK^* = \LLL^n_1\), the \(\ell_1\) norm cone (i.e., the epigraph of the \(\ell_1\) norm function), and we split \(\lambda \in \RR^{n+1}\) as \(\lambda = (z, \tau)\) with \(z \in \RR^n\) and \(\tau \in \RR\), then:

(16)\[\lambda \in \LLL^n_{1} \iff \| z \|_1 \leq \tau\]

This is particularly useful if the smoothed dual function \(g_{\mu}(\lambda) = g_{\mu}(z, \tau)\) is linear in \(\tau\). We can then rewrite \(g_{\mu}\) as:

(17)\[g_{\mu}(\lambda) = - g_{\text{sm}} (z) - \langle \nu_{\mu}, \tau \rangle\]

Since \(g_{\mu}\) is a smooth concave function, \(g_{\text{sm}}\) will be a smooth convex function.

Problem Instantiations

In the rest of the document, we will discuss how specific sparse signal recovery problems can be modeled and solved using this module as either primal problems or a smooth conic dual problem.

L1 regularized least squares problem

We consider the problem:

(18)\[\underset{x \in \RR^n}{\text{minimize}} \frac{1}{2} \| \AAA x - b \|_2^2 + \lambda \| x \|_1\]

Choose:

  • \(f(v) = \frac{1}{2} \| v \|_2^2\) as the smooth function

  • the linear operator \(x \mapsto \AAA x\) with translation vector \(-b\), so that \(g(x) = f(\AAA x - b) = \frac{1}{2} \| \AAA x - b \|_2^2\)

  • \(h(x) = \lambda \| x \|_1\) as the prox-capable function (its proximal operator is elementwise soft thresholding)

With these choices, it is straightforward to use fom() to solve the L1RLS problem. This is implemented in the function l1rls().
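A hedged usage sketch follows. The import path cr.sparse.fom, the lop.matrix wrapper used to turn a plain matrix into a linear operator, and the nature of the returned value are assumptions about the package layout; the argument order (A, b, lambda_, x0) follows the API listing above:

    import jax.numpy as jnp
    from jax import random

    # assumed import paths for the cr-sparse package
    from cr.sparse import fom, lop

    key = random.PRNGKey(0)
    m, n = 20, 50
    A = random.normal(key, (m, n)) / jnp.sqrt(m)             # sensing matrix
    x_true = jnp.zeros(n).at[:3].set(jnp.array([1.0, -2.0, 1.5]))
    b = A @ x_true                                           # measurements

    T = lop.matrix(A)        # assumed: wrap the matrix as a linear operator
    sol = fom.l1rls(T, b, 0.01, jnp.zeros(n))                # (A, b, lambda_, x0)
    # sol is the solver state (see FomState) containing the recovered vector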

LASSO

LASSO (least absolute shrinkage and selection operator) is a regression analysis method that performs both variable selection and regularization in order to enhance the prediction accuracy and interpretability of the resulting statistical model.

We consider the problem:

(19)\[\begin{split}\begin{aligned} \underset{x}{\text{minimize}} \frac{1}{2} \| \AAA x - b \|_2^2\\ \text{subject to } \| x \|_1 \leq \tau \end{aligned}\end{split}\]

Choose:

  • \(f(v) = \frac{1}{2} \| v \|_2^2\) as the smooth function

  • the linear operator \(x \mapsto \AAA x\) with translation vector \(-b\), so that \(g(x) = \frac{1}{2} \| \AAA x - b \|_2^2\)

  • \(h\) as the indicator function of the set \(\{ x : \| x \|_1 \leq \tau \}\) (its proximal operator is the projection onto the \(\ell_1\) ball of radius \(\tau\))

With these choices, it is straightforward to use fom() to solve the LASSO problem. This is implemented in the function lasso().
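For LASSO, the prox-capable part is the indicator of the \(\ell_1\) ball, and its proximal operator is simply the Euclidean projection onto that ball (independent of the step size \(t\)). A minimal sketch of the standard sort-and-threshold projection (the name project_l1_ball is illustrative, not the library API):

    import jax.numpy as jnp

    def project_l1_ball(x, tau):
        """Euclidean projection of x onto the set {z : ||z||_1 <= tau}."""
        abs_x = jnp.abs(x)
        if jnp.sum(abs_x) <= tau:   # already feasible (plain Python branch; fine outside jit)
            return x
        # find the shrinkage level theta with sum(max(|x_i| - theta, 0)) = tau
        u = jnp.sort(abs_x)[::-1]   # absolute values in decreasing order
        css = jnp.cumsum(u)
        k = jnp.arange(1, x.size + 1)
        rho = jnp.max(jnp.where(u * k > css - tau, k, 0))
        theta = (css[rho - 1] - tau) / rho
        return jnp.sign(x) * jnp.maximum(abs_x - theta, 0.0)

    print(project_l1_ball(jnp.array([3.0, -1.0, 0.5]), tau=2.0))   # approximately [2., 0., 0.]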

Ordered weighted L1 regularized least squares problem

We consider the problem:

(20)\[\underset{x \in \RR^n}{\text{minimize}} \frac{1}{2} \| A x - b \|_2^2 + \sum_{i=1}^n \lambda_i | x |_{(i)}\]

described in [lBvdBSC13].

Choose:

  • \(f(v) = \frac{1}{2} \| v \|_2^2\) as the smooth function

  • the linear operator \(x \mapsto A x\) with translation vector \(-b\), so that \(g(x) = \frac{1}{2} \| A x - b \|_2^2\)

  • \(h(x) = \sum_{i=1}^n \lambda_i | x |_{(i)}\), the ordered weighted \(\ell_1\) norm, as the prox-capable function

With these choices, it is straightforward to use fom() to solve the ordered weighted L1 regularized least squares problem. This is implemented in the function owl1rls().
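The regularizer here is the ordered weighted \(\ell_1\) (OWL) norm, which pairs the largest weight with the largest absolute entry. Its proximal operator (used internally by owl1rls()) requires an isotonic regression step, but the norm itself is straightforward to evaluate. A small sketch, with owl_norm as an illustrative helper name:

    import jax.numpy as jnp

    def owl_norm(x, lam):
        """Ordered weighted l1 norm: sum_i lam_i * |x|_(i), where |x|_(i)
        are the absolute entries sorted in decreasing order and lam is a
        non-increasing weight vector."""
        return jnp.sum(lam * jnp.sort(jnp.abs(x))[::-1])

    x = jnp.array([0.5, -3.0, 1.0])
    lam = jnp.array([1.0, 0.5, 0.1])
    print(owl_norm(x, lam))   # 1.0*3.0 + 0.5*1.0 + 0.1*0.5 = 3.55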

The Dantzig selector

We wish to recover an unknown vector \(x_0 \in \RR^n\) from the data \(y \in \RR^m\) and the model:

(21)\[y = A x_0 + z\]

where

  • \(A\) is a known \((m, n)\) sized design matrix

  • \(z\) is the noise term

  • There are fewer observations/measurements than unknowns (\(m \lt n\))

We consider the optimization problem

(22)\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & \| x \|_1 \\ & \text{subject to} & & \| A^* (y - A x ) \|_{\infty} \leq \delta \end{aligned}\end{split}\]
  • We are minimizing the \(\ell_1\) norm of the solution (thus promoting sparsity)

  • We have an upper bound \(\delta\) on the correlation between the residual vector \(r = y - A x\) and the columns/atoms of \(A\).

This formulation is known as the Dantzig selector.

The conic form

We can rewrite the Dantzig selector problem as:

(23)\[\begin{split}\begin{aligned} & \underset{x}{\text{minimize}} & & \| x \|_1 \\ & \text{subject to} & & ( A^* (y - A x ), \delta) \in \LLL^n_{\infty} \end{aligned}\end{split}\]

where \(\LLL^n_{\infty}\) is the epigraph of the \(\ell_{\infty}\)-norm. It is a convex cone.

Then the Dantzig selector can be modeled as a standard conic form as follows:

  • \(f(x) = \| x \|_1\)

  • \(\AAA(x) = (-A^* A x, 0)\) is a mapping from \(\RR^n \to \RR^{n+1}\)

  • \(b = (A^* y, \delta)\); note that \(b \in \RR^{n+1}\)

  • \(\KKK = \LLL^n_{\infty}\)

Note carefully that:

(24)\[\AAA(x) + b = (-A^* A x, 0) + (A^* y, \delta) = (A^* (y - Ax), \delta)\]

Thus:

(25)\[\AAA(x) + b \in \KKK = \LLL^n_{\infty} \iff \| A^* (y - Ax) \|_{\infty} \leq \delta\]
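A tiny numerical sketch of this conic form (helper names illustrative): it evaluates \(\AAA(x) + b\) as in (24) and tests membership in \(\LLL^n_{\infty}\), which by (25) is exactly the \(\ell_{\infty}\) correlation constraint of the Dantzig selector:

    import jax.numpy as jnp

    def dantzig_affine(A, y, delta, x):
        """Evaluate AAA(x) + b = (A^T (y - A x), delta), as in (24)."""
        return A.T @ (y - A @ x), delta

    def in_linf_cone(z, tau):
        """Membership in the l-infinity norm cone: ||z||_inf <= tau."""
        return jnp.max(jnp.abs(z)) <= tau

    A = jnp.array([[1.0, 0.0, 2.0], [0.0, 1.0, -1.0]])
    y = jnp.array([1.0, 2.0])
    x = jnp.array([1.0, 2.0, 0.0])
    z, tau = dantzig_affine(A, y, delta=0.5, x=x)
    print(in_linf_cone(z, tau))   # True: here y = A x, so the residual is zero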

Dual problem

The dual of the \(\LLL^n_{\infty}\) cone is the \(\LLL^n_{1}\) cone, so the dual variable \(\lambda\) lies in \(\LLL^n_{1}\). It is easier to work with the splitting \(\lambda = (z, \tau)\), with \(z \in \RR^n\) and \(\tau \in \RR\), such that

(26)\[\lambda \in \LLL^n_{1} \iff \| z \|_1 \leq \tau\]

The convex conjugate of the l1 norm function \(f(x) = \| x \|_1\) is the indicator function for the \(\ell_{\infty}\) norm ball.

(27)\[\begin{split} f^*(x) = I_{\ell_{\infty}}(x) = \begin{cases} 0 & \text{if } \| x \|_{\infty} \leq 1 \\ \infty & \text{otherwise} \end{cases}\end{split}\]
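This follows directly from definition (8): by Hölder's inequality,

\[\langle z, x \rangle - \| x \|_1 \leq \left( \| z \|_{\infty} - 1 \right) \| x \|_1,\]

so for \(\| z \|_{\infty} \leq 1\) the supremum is \(0\) (attained at \(x = 0\)), while for \(\| z \|_{\infty} > 1\), taking \(x = t \, \text{sign}(z_i) e_i\) along a coordinate with \(| z_i | > 1\) and letting \(t \to \infty\) drives the supremum to \(+\infty\).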

The adjoint of the linear operator is given by:

(28)\[\AAA^*(\lambda) = \AAA^*((z, \tau)) = -A^* A z\]

Plugging into the dual problem definition:

(29)\[\begin{split}\begin{aligned} & \underset{\lambda}{\text{maximize}} & & - f^*(\AAA^*(\lambda)) - \langle b, \lambda \rangle \\ & \text{subject to} & & \lambda \in \KKK^* \end{aligned}\end{split}\]

We get:

(30)\[\begin{split}\begin{aligned} & \underset{z, \tau}{\text{maximize}} & & - I_{\ell_{\infty}}( -A^* A z) - \langle A^*y, z \rangle - \delta \tau \\ & \text{subject to} & & (z, \tau) \in \LLL^n_1 \end{aligned}\end{split}\]

For \(\delta > 0\), the solution \(\lambda\) will be on the boundary of the convex cone \(\LLL^n_1\), giving us:

(31)\[\|z \|_1 = \tau\]

Plugging this back, we get the unconstrained maximization problem:

(32)\[\underset{z}{\text{maximize}} - I_{\ell_{\infty}}( -A^* A z) - \langle A^*y, z \rangle - \delta \| z \|_1\]
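This smoothed dual problem is what dantzig_scd() solves, presumably via the scd() driver. A hedged usage sketch (the import path cr.sparse.fom, the lop.matrix operator wrapper, and the nature of the returned value are assumptions; the argument order (A, b, delta, mu, x0, z0) follows the API listing above):

    import jax.numpy as jnp
    from jax import random

    # assumed import paths for the cr-sparse package
    from cr.sparse import fom, lop

    key = random.PRNGKey(0)
    m, n = 30, 100
    A = random.normal(key, (m, n)) / jnp.sqrt(m)
    x_true = jnp.zeros(n).at[:4].set(jnp.array([2.0, -1.0, 1.0, 0.5]))
    b = A @ x_true                                    # noiseless measurements

    T = lop.matrix(A)        # assumed: wrap the matrix as a linear operator
    delta, mu = 0.05, 0.1    # l_inf correlation bound and smoothing parameter
    sol = fom.dantzig_scd(T, b, delta, mu, jnp.zeros(n), jnp.zeros(n))
    # argument order: (A, b, delta, mu, x0, z0); sol is the solver state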