Smooth and Non-smooth Dynamics Perspectives on Accelerated Optimization

Summary
We discuss analogies between first-order optimization algorithms and smooth/non-smooth dynamical systems. These analogies not only help us understand the phenomenon of acceleration in both convex and nonconvex settings, but also allow us to derive a new class of algorithms for constrained optimization. An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions, which naturally leads to sparse, local, and convex approximations of the feasible set (even if the feasible set is nonconvex). As a result, the complexity tends to grow mildly in the number of decision variables and in the number of constraints, which makes the algorithms suitable for solving large-scale constrained optimization problems. We illustrate our algorithms on a compressed sensing and a sparse regression problem, showing that we can treat nonconvex $\ell^p$ constraints ($p < 1$) efficiently, while recovering state-of-the-art performance for $p = 1$.
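To make the idea of velocity-level constraints concrete, here is a hedged sketch (not the authors' algorithm): at an iterate $x$, a position constraint $g(x) \le 0$ can be linearized along the velocity $v$, giving $g(x) + \Delta t\, \nabla g(x)^\top v \le 0$, a halfspace onto which a desired velocity can be projected in closed form. The function names, the step size `dt`, and the unit-disk example below are all illustrative assumptions.

```python
import numpy as np

def velocity_projection(x, v_des, g, grad_g, dt=0.1):
    """Project a desired velocity onto the halfspace obtained by
    linearizing the constraint g(x + dt*v) <= 0 at x:
        g(x) + dt * grad_g(x) @ v <= 0.
    This is a local, convex approximation of the feasible set,
    even when g itself is nonconvex (illustrative sketch only)."""
    a = dt * grad_g(x)   # halfspace normal
    b = -g(x)            # halfspace offset: a @ v <= b
    if a @ v_des <= b:   # desired velocity already feasible
        return v_des
    # closed-form Euclidean projection onto the halfspace {v : a @ v <= b}
    return v_des - (a @ v_des - b) / (a @ a) * a

# Hypothetical example: remain inside the unit disk, g(x) = |x|^2 - 1.
g = lambda x: x @ x - 1.0
grad_g = lambda x: 2.0 * x
x = np.array([1.0, 0.0])                 # iterate on the boundary
v = velocity_projection(x, np.array([1.0, 1.0]), g, grad_g)
# The outward component of the velocity is removed: v = [0, 1]
```

The projection touches only the constraints whose linearizations are active at the current iterate, which is one way the abstract's claim of sparse, local approximations can be realized.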
Abstract ID: 107
