ECE Seminar: Smoothed and Parallel Sparse Optimization

Presenter: Wotao Yin, Ph.D., Associate Professor, Department of Computational and Applied Mathematics, Rice University

Sparse optimization has found interesting applications in many areas such as machine learning, signal processing, compressive sensing, and medical imaging. Sparse optimization involves minimizing the l1-norm or l1-like functions, which are non-differentiable. In order to apply classic methods such as gradient descent and quasi-Newton, a traditional trick is to minimize smoothed approximations of the l1-norm, such as the Huber norm, the (1+eps)-norm, and the sum of sqrt(x_i^2 + eps), where eps > 0 is a small parameter. These approximations cause a (slight) loss in solution sparsity and often require tuning eps. This talk introduces an alternative "smoothing" approach that does not directly generate a smooth function but instead produces an unconstrained dual problem whose objective is differentiable and enjoys a "restricted" strong convexity property. Not only can one apply a rich set of classic techniques such as gradient descent, line search, and quasi-Newton methods to this dual problem, but exact sparse solutions and global linear convergence are also guaranteed. In addition, the algorithm becomes very easy to parallelize. Theoretically, this approach gives the first global linear convergence result among gradient-based algorithms for sparse optimization, including the recent algorithms based on operator splitting (ISTA/FISTA), variable splitting (Bregman/ADMM), and/or coordinate descent, which converge only sublinearly in the global sense. Numerical examples are presented on problems in compressive sensing.
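
For illustration, the classical surrogates named above replace each non-smooth term |x_i| with a differentiable approximation. The forms below are a standard sketch of those surrogates (the notation is ours, not taken from the talk):

  \sqrt{x_i^2 + \epsilon}
  \qquad \text{and} \qquad
  h_\epsilon(x_i) =
  \begin{cases}
    x_i^2 / (2\epsilon), & |x_i| \le \epsilon, \\
    |x_i| - \epsilon/2,  & |x_i| > \epsilon.
  \end{cases}

Both reduce to |x_i| as eps -> 0, but the first is strictly positive at x_i = 0 and the second is quadratic near zero, so minimizers are generally not exactly sparse; this is the (slight) loss in sparsity and the eps-tuning burden mentioned above.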

Date: Friday, Jan 25, 2013
Time: 1:00 pm - 2:00 pm
Where: Hudson Hall 115A
Contact: McLain, Paul
Phone: 660-5254
Email: paul.mclain@duke.edu