Electrical and Computer Engineering
Iowa State University
Fast(er) Algorithms for Machine Learning in High Dimensions
Dataset sizes continue to grow at an unprecedented rate, and rigorous methods for analyzing such data quickly become computationally cumbersome. In particular, several machine learning algorithms suffer from super-linear (sometimes quadratic; often much worse) running time in terms of the ambient dimension of the data. This challenge of high running time is acutely felt in a broad class of algorithms for sparse linear regression, graphical model estimation, and (provable) neural network learning.
In this talk, I will outline a general algorithmic approach that can considerably speed up the solution of key canonical learning problems. The high-level idea is to compose inexact sub-routines into classical statistical approaches for solving inverse problems. We will see that using such inexact algorithmic components in a careful manner considerably reduces running time, while incurring little to no loss in overall statistical accuracy.
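As one concrete illustration of this idea (a sketch for intuition, not the speaker's actual algorithm): in sparse linear regression, iterative hard thresholding alternates a gradient step with a projection onto sparse vectors, and the exact gradient can be swapped for an inexact, subsampled one. The function name, parameters, and subsampling scheme below are illustrative assumptions.

```python
import numpy as np

def inexact_iht(A, y, k, n_iters=100, step=None, grad_samples=None, seed=None):
    """Iterative hard thresholding for k-sparse regression.

    Illustrative sketch only: if grad_samples is set, the gradient is
    estimated from a random subsample of rows (an 'inexact sub-routine'),
    trading a little accuracy for a cheaper per-iteration cost.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
    x = np.zeros(n)
    for _ in range(n_iters):
        if grad_samples is not None and grad_samples < m:
            # Inexact gradient: subsample rows and rescale to stay unbiased.
            idx = rng.choice(m, size=grad_samples, replace=False)
            g = (m / grad_samples) * A[idx].T @ (A[idx] @ x - y[idx])
        else:
            g = A.T @ (A @ x - y)  # exact least-squares gradient
        x = x - step * g
        # Projection onto k-sparse vectors: keep the k largest entries.
        keep = np.argsort(np.abs(x))[-k:]
        mask = np.zeros(n, dtype=bool)
        mask[keep] = True
        x[~mask] = 0.0
    return x
```

With a well-conditioned design matrix, the exact variant recovers a sparse signal from noiseless measurements; the subsampled variant runs cheaper per iteration at some cost in accuracy, which is the trade-off the talk examines.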
I will then show how this approach can be used to accelerate several high-dimensional machine learning tasks, such as sparsity-based regression, low-rank matrix estimation, and provable learning of polynomial neural networks. In each case, the approach yields the first algorithms that are statistically as good as the best available, while running in quasi-linear time.
Refreshments at 3:45pm in Snedecor 2101.