Friday, 15 November, 2019
Courtney Paquette of Google Brain in Montreal will present “Algorithms for Stochastic Nonconvex and Nonsmooth Optimization” at 4 p.m. Friday, Nov. 15, in Room 242 of Ritter Hall. Refreshments will be served beforehand in the Ritter Hall lobby.
Machine learning has introduced new optimization challenges through its use of nonconvex losses, noisy gradients, and statistical assumptions. While convergence guarantees in the deterministic, convex setting are well documented, algorithms for solving large-scale nonsmooth, nonconvex problems remain in their infancy.
Paquette will begin by isolating a class of nonsmooth, nonconvex functions that can be used to model a variety of statistical and signal processing tasks. Standard statistical assumptions on such inverse problems often endow the optimization formulation with an appealing regularity condition: the objective grows sharply away from the solution set. Paquette will show that under such regularity, a variety of simple algorithms converge rapidly when initialized within constant relative error of the optimal solution. She will illustrate the theory and algorithms on the real phase retrieval problem and survey a number of other applications, including blind deconvolution and covariance matrix estimation.
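As a concrete illustration of the setting, here is a minimal sketch of one such simple algorithm, the Polyak subgradient method, applied to the robust (absolute-value) real phase retrieval objective. The problem sizes, initialization scale, and step rule are illustrative assumptions, not details taken from the talk; the Polyak step assumes the minimal objective value is known (zero for noiseless measurements).

```python
import numpy as np

rng = np.random.default_rng(0)
m, d = 200, 10  # illustrative sizes: measurements and dimension
A = rng.standard_normal((m, d))
x_true = rng.standard_normal(d)
b = (A @ x_true) ** 2  # noiseless quadratic (phaseless) measurements

def loss(x):
    # Nonsmooth, nonconvex objective: mean absolute measurement misfit.
    # Under standard statistical assumptions it grows sharply away
    # from the solution set {x_true, -x_true}.
    return np.mean(np.abs((A @ x) ** 2 - b))

def subgrad(x):
    # A subgradient of the objective at x.
    r = (A @ x) ** 2 - b
    return (2.0 / m) * (A.T @ (np.sign(r) * (A @ x)))

# Initialize within constant relative error of a solution.
x = x_true + 0.2 * rng.standard_normal(d)

for _ in range(500):
    g = subgrad(x)
    gn = g @ g
    if gn == 0:
        break  # zero subgradient: stationary point reached
    # Polyak step length (loss(x) - min_value) / ||g||^2, with min_value = 0.
    x = x - (loss(x) / gn) * g
```

With sharp growth and a good initialization, this iteration converges rapidly toward one of the two signed solutions; the final loss is driven close to zero.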
One of the main advantages of smooth optimization over its nonsmooth counterpart is the potential to use a line search for improved numerical performance. A long-standing open question is how to design a line-search procedure in the stochastic setting. In the second part of the talk, Paquette will present a practical line-search method for smooth stochastic optimization that has rigorous convergence guarantees and requires only knowable quantities for implementation.
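The abstract does not specify the procedure, but the flavor of a stochastic line search can be sketched as follows: a backtracking Armijo search in which the sufficient-decrease condition is checked on the same minibatch used to compute the gradient, so that every quantity involved is computable from the sampled data. The least-squares model, minibatch size, and backtracking constants below are illustrative assumptions, not the method presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 5
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)  # noisy linear model

def batch_loss_grad(w, idx):
    # Minibatch least-squares loss and gradient: "knowable" quantities,
    # computable from the sampled data alone.
    r = X[idx] @ w - y[idx]
    return 0.5 * np.mean(r ** 2), X[idx].T @ r / len(idx)

w = np.zeros(d)
alpha, c, shrink = 1.0, 0.5, 0.8  # initial step, Armijo constant, backtrack factor
for _ in range(300):
    idx = rng.choice(n, size=32, replace=False)
    f, g = batch_loss_grad(w, idx)
    # Backtrack until the Armijo sufficient-decrease condition holds
    # on the SAME minibatch that produced the gradient.
    while batch_loss_grad(w - alpha * g, idx)[0] > f - c * alpha * (g @ g):
        alpha *= shrink
    w = w - alpha * g
    alpha = min(alpha / shrink, 10.0)  # let the step size grow back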