Aaron Sidford
About Me:
I am an assistant professor in the Department of Management Science and Engineering and the Department of Computer Science at Stanford University. I received my PhD from the Department of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, where I was advised by Professor Jonathan Kelner.
Research Interests:
My research interests lie broadly in optimization, the theory of computation, and the design and analysis of algorithms. I am particularly interested in work at the intersection of continuous optimization, graph theory, numerical linear algebra, and data structures.
Email: sidford@stanford.edu
Conference Publications
-
Improved Lower Bounds for Submodular Function Minimization
With Deeparnab Chakrabarty, Andrei Graur, and Haotian Jiang.
To appear in Symposium on Foundations of Computer Science (FOCS 2022).
(arXiv)
-
RECAPP: Crafting a More Efficient Catalyst for Convex Optimization
With Yair Carmon, Arun Jambulapati, and Yujia Jin.
In International Conference on Machine Learning (ICML 2022).
(arXiv)
-
Efficient Convex Optimization Requires Superlinear Memory
With Annie Marsden, Vatsal Sharan, and Gregory Valiant.
In Conference on Learning Theory (COLT 2022).
Best Paper Award.
(arXiv)
-
Sharper Rates for Separable Minimax and Finite Sum Optimization via Primal-Dual Extragradient Method
With Yujia Jin and Kevin Tian.
In Conference on Learning Theory (COLT 2022).
(arXiv)
-
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales
With Jonathan A. Kelner, Annie Marsden, Vatsal Sharan, Gregory Valiant, and Honglin Yuan.
In Conference on Learning Theory (COLT 2022).
(arXiv)
-
Regularized Box-Simplex Games and Dynamic Decremental Bipartite Matching
With Arun Jambulapati, Yujia Jin, and Kevin Tian.
In International Colloquium on Automata, Languages and Programming (ICALP 2022).
(arXiv)
-
Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary
With Aaron Bernstein, Jan van den Brand, Maximilian Probst, Danupon Nanongkai, Thatchaphol Saranurak, and He Sun.
In International Colloquium on Automata, Languages and Programming (ICALP 2022).
(arXiv)
-
Faster Maxflow via Improved Dynamic Spectral Vertex Sparsifiers
With Jan van den Brand, Yu Gao, Arun Jambulapati, Yin Tat Lee, Yang P. Liu, and Richard Peng.
In Symposium on Theory of Computing (STOC 2022).
(arXiv)
-
Semi-Streaming Bipartite Matching in Fewer Passes and Optimal Space
With Sepehr Assadi, Arun Jambulapati, Yujia Jin, and Kevin Tian.
In Symposium on Discrete Algorithms (SODA 2022).
(arXiv)
-
Algorithmic trade-offs for girth approximation in undirected graphs
With Avi Kadria, Liam Roditty, Virginia Vassilevska Williams, and Uri Zwick.
In Symposium on Discrete Algorithms (SODA 2022).
-
Computing Lewis Weights to High Precision
With Maryam Fazel, Yin Tat Lee, and Swati Padmanabhan.
In Symposium on Discrete Algorithms (SODA 2022).
(arXiv)
-
Stochastic Bias-Reduced Gradient Methods
With Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin.
In Advances in Neural Information Processing Systems (NeurIPS 2021).
(arXiv)
-
Thinking Inside the Ball: Near-Optimal Minimization of the Maximal Loss
With Yair Carmon, Arun Jambulapati, and Yujia Jin.
In Conference on Learning Theory (COLT 2021).
(arXiv)
-
The Bethe and Sinkhorn Permanents of Low Rank Matrices and Implications for Profile Maximum Likelihood
With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
In Conference on Learning Theory (COLT 2021).
(arXiv)
-
Towards Tight Bounds on the Sample Complexity of Average-reward MDPs
With Yujia Jin.
In International Conference on Machine Learning (ICML 2021).
(arXiv)
-
Minimum cost flows, MDPs, and ℓ1-regression in nearly linear time for dense instances
With Jan van den Brand, Yin Tat Lee, Yang P. Liu, Thatchaphol Saranurak, Zhao Song, and Di Wang.
In Symposium on Theory of Computing (STOC 2021).
(arXiv)
-
Ultrasparse Ultrasparsifiers and Faster Laplacian System Solvers
With Arun Jambulapati.
In Symposium on Discrete Algorithms (SODA 2021).
(arXiv)
-
Relative Lipschitzness in Extragradient Methods and a Direct Recipe for Acceleration
With Michael B. Cohen and Kevin Tian.
In Innovations in Theoretical Computer Science (ITCS 2021).
(arXiv)
-
Acceleration with a Ball Optimization Oracle
With Yair Carmon, Arun Jambulapati, Qijia Jiang, Yujia Jin, Yin Tat Lee, and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2020).
Oral presentation
(arXiv)
-
Instance Based Approximations to Profile Maximum Likelihood
With Nima Anari, Moses Charikar, and Kirankumar Shiragur.
In Conference on Neural Information Processing Systems (NeurIPS 2020).
(arXiv)
-
Large-Scale Methods for Distributionally Robust Optimization
With Daniel Levy*, Yair Carmon*, and John C. Duchi (* denotes equal contribution).
In Conference on Neural Information Processing Systems (NeurIPS 2020).
(arXiv)
-
High-precision Estimation of Random Walks in Small Space
With AmirMahdi Ahmadinejad, Jonathan A. Kelner, Jack Murtagh, John Peebles, and Salil P. Vadhan.
In Symposium on Foundations of Computer Science (FOCS 2020).
(arXiv)
-
Bipartite Matching in Nearly-linear Time on Moderately Dense Graphs
With Jan van den Brand, Yin Tat Lee, Danupon Nanongkai, Richard Peng, Thatchaphol Saranurak, Zhao Song, and Di Wang.
In Symposium on Foundations of Computer Science (FOCS 2020).
Invited to the special issue.
(arXiv)
-
Coordinate Methods for Matrix Games
With Yair Carmon, Yujia Jin, and Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2020).
(arXiv)
-
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time
With Tarun Kathuria and Yang P. Liu.
In Symposium on Foundations of Computer Science (FOCS 2020).
Invited to the special issue.
(Faster Divergence Maximization for Faster Maximum Flow (arXiv before merge))
-
Solving Discounted Stochastic Two-Player Games with Near-Optimal Time and Sample Complexity
With Mengdi Wang, Lin Yang, and Yinyu Ye.
In International Conference on Artificial Intelligence and Statistics (AISTATS 2020).
(arXiv)
-
Efficiently Solving MDPs with Stochastic Mirror Descent
With Yujia Jin.
In International Conference on Machine Learning (ICML 2020).
(arXiv)
-
Near-Optimal Methods for Minimizing Star-Convex Functions and Beyond
With Oliver Hinder and Nimit Sharad Sohoni.
In Conference on Learning Theory (COLT 2020).
(arXiv)
-
Solving Tall Dense Linear Programs in Nearly Linear Time
With Jan van den Brand, Yin Tat Lee, and Zhao Song.
In Symposium on Theory of Computing (STOC 2020).
Invited to the special issue.
(arXiv)
-
Constant Girth Approximation for Directed Graphs in Subquadratic Time
With Shiri Chechik, Yang P. Liu, and Omer Rotem.
In Symposium on Theory of Computing (STOC 2020).
(arXiv)
-
Leverage Score Sampling for Faster Accelerated Regression and ERM
With Naman Agarwal, Sham Kakade, Rahul Kidambi, Yin Tat Lee, and Praneeth Netrapalli.
In International Conference on Algorithmic Learning Theory (ALT 2020).
(arXiv)
-
Near-optimal Approximate Discrete and Continuous Submodular Function Minimization
With Brian Axelrod and Yang P. Liu.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)
-
Fast and Space Efficient Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Aida Mousavifar, Cameron Musco, Christopher Musco, Navid Nouri, and Jakab Tardos.
In Symposium on Discrete Algorithms (SODA 2020).
(arXiv)
-
Variance Reduction for Matrix Games
With Yair Carmon, Yujia Jin, and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Oral presentation
(arXiv)
-
Complexity of Highly Parallel Non-Smooth Convex Optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)
-
Principal Component Projection and Regression in Nearly Linear Time through Asymmetric SVRG
With Yujia Jin.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
Spotlight presentation
(arXiv)
-
A Direct Õ(1/ϵ) Iteration Parallel Algorithm for Optimal Transport
With Arun Jambulapati and Kevin Tian.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
(arXiv)
-
A General Framework for Efficient Symmetric Property Estimation
With Moses Charikar and Kirankumar Shiragur.
In Conference on Neural Information Processing Systems (NeurIPS 2019).
-
Parallel Reachability in Almost Linear Work and Square Root Depth
With Arun Jambulapati and Yang P. Liu.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)
-
Faster Matroid Intersection
With Deeparnab Chakrabarty, Yin Tat Lee, Sahil Singla, and Sam Chiu-wai Wong.
In Symposium on Foundations of Computer Science (FOCS 2019).
(arXiv)
-
Deterministic Approximation of Random Walks in Small Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In International Workshop on Randomization and Computation (RANDOM 2019).
Invited to the special issue.
(arXiv)
-
A Rank-1 Sketch for Matrix Multiplicative Weights
With Yair Carmon, John C. Duchi, and Kevin Tian.
In Conference on Learning Theory (COLT 2019).
(arXiv)
-
Near-optimal method for highly smooth convex optimization
With Sébastien Bubeck, Qijia Jiang, Yin Tat Lee, and Yuanzhi Li.
In Conference on Learning Theory (COLT 2019).
(arXiv)
-
Efficient profile maximum likelihood for universal symmetric property estimation
With Moses Charikar and Kirankumar Shiragur.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)
-
Memory-sample tradeoffs for linear regression with small error
With Vatsal Sharan and Gregory Valiant.
In Symposium on Theory of Computing (STOC 2019).
(arXiv)
-
Perron-Frobenius Theory in Nearly Linear Time: Positive Eigenvectors, M-matrices, Graph Kernels, and Other Applications
With AmirMahdi Ahmadinejad, Arun Jambulapati, and Amin Saberi.
In Symposium on Discrete Algorithms (SODA 2019).
(arXiv)
-
Exploiting Numerical Sparsity for Efficient Learning: Faster Eigenvector Computation and Regression
With Neha Gupta.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)
-
Near-Optimal Time and Sample Complexities for Solving Discounted Markov Decision Process with a Generative Model
With Mengdi Wang, Xian Wu, Lin F. Yang, and Yinyu Ye.
In Conference on Neural Information Processing Systems (NeurIPS 2018).
(arXiv)
-
Coordinate Methods for Accelerating ℓ∞ Regression and Faster Approximate Maximum Flow
With Kevin Tian.
In Symposium on Foundations of Computer Science (FOCS 2018).
Invited to the special issue.
(arXiv)
-
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
With Michael B. Cohen, Jonathan A. Kelner, Rasmus Kyng, John Peebles, Richard Peng, and Anup B. Rao.
In Symposium on Foundations of Computer Science (FOCS 2018).
(arXiv)
-
Efficient Convex Optimization with Membership Oracles
With Yin Tat Lee and Santosh S. Vempala.
In Conference on Learning Theory (COLT 2018).
(arXiv)
-
Accelerating Stochastic Gradient Descent for Least Squares Regression
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
In Conference on Learning Theory (COLT 2018).
(arXiv)
-
Approximating Cycles in Directed Graphs: Fast Algorithms for Girth and Roundtrip Spanners
With Jakub Pachocki, Liam Roditty, Roei Tov, and Virginia Vassilevska Williams.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)
-
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
With Mengdi Wang, Xian Wu, and Yinyu Ye.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)
-
Efficient Õ(n/ϵ) Spectral Sketches for the Laplacian and its Pseudoinverse
With Arun Jambulapati.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)
-
Stability of the Lanczos Method for Matrix Function Approximation
With Cameron Musco and Christopher Musco.
In Symposium on Discrete Algorithms (SODA 2018).
(arXiv)
-
Spectrum Approximation Beyond Fast Matrix Multiplication: Algorithms and Hardness
With Cameron Musco, Praneeth Netrapalli, Shashanka Ubaru, and David P. Woodruff.
In Innovations in Theoretical Computer Science (ITCS 2018).
(arXiv)
-
Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
In Symposium on Foundations of Computer Science (FOCS 2017).
(arXiv)
-
"Convex Until Proven Guilty": Dimension-Free Acceleration of Gradient Descent on Non-Convex Functions
With Yair Carmon, John C. Duchi, and Oliver Hinder.
In International Conference on Machine Learning (ICML 2017).
(arXiv)
-
Almost-Linear-Time Algorithms for Markov Chains and New Spectral Primitives for Directed Graphs
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, Anup B. Rao, and Adrian Vladu.
In Symposium on Theory of Computing (STOC 2017).
Invited to the special issue.
(arXiv)
-
Subquadratic Submodular Function Minimization
With Deeparnab Chakrabarty, Yin Tat Lee, and Sam Chiu-wai Wong.
In Symposium on Theory of Computing (STOC 2017).
(arXiv)
-
Faster Algorithms for Computing the Stationary Distribution, Simulating Random Walks, and More
With Michael B. Cohen, Jonathan A. Kelner, John Peebles, Richard Peng, and Adrian Vladu.
In Symposium on Foundations of Computer Science (FOCS 2016).
(arXiv)
-
Geometric Median in Nearly Linear Time
With Michael B. Cohen, Yin Tat Lee, Gary L. Miller, and Jakub Pachocki.
In Symposium on Theory of Computing (STOC 2016).
(arXiv)
-
Routing Under Balance
With Alina Ene, Gary L. Miller, and Jakub Pachocki.
In Symposium on Theory of Computing (STOC 2016).
(arXiv)
-
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm
With Prateek Jain, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In Conference on Learning Theory (COLT 2016).
(arXiv)
-
Principal Component Projection Without Principal Component Analysis
With Roy Frostig, Cameron Musco, and Christopher Musco.
In International Conference on Machine Learning (ICML 2016).
(arXiv)
-
Faster Eigenvector Computation via Shift-and-Invert Preconditioning
With Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, and Praneeth Netrapalli.
In International Conference on Machine Learning (ICML 2016).
(arXiv)
-
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis
With Rong Ge, Chi Jin, Sham M. Kakade, and Praneeth Netrapalli.
In International Conference on Machine Learning (ICML 2016).
(arXiv)
-
A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization
With Yin Tat Lee and Sam Chiu-wai Wong.
In Symposium on Foundations of Computer Science (FOCS 2015).
Machtey Award for Best Student Paper.
(arXiv)
-
Efficient Inverse Maintenance and Faster Algorithms for Linear Programming
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2015).
(arXiv)
-
Competing with the Empirical Risk Minimizer in a Single Pass
With Roy Frostig, Rong Ge, and Sham Kakade.
In Conference on Learning Theory (COLT 2015).
(arXiv)
-
Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization
With Roy Frostig, Rong Ge, and Sham Kakade.
In International Conference on Machine Learning (ICML 2015).
(arXiv)
-
Uniform Sampling for Matrix Approximation
With Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, and Richard Peng.
In Innovations in Theoretical Computer Science (ITCS 2015).
(arXiv)
-
Path-Finding Methods for Linear Programming: Solving Linear Programs in Õ(√rank) Iterations and Faster Algorithms for Maximum Flow
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2014).
Best Paper Award and Machtey Award for Best Student Paper.
(arXiv Part I) and (arXiv Part II)
-
Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
In Symposium on Foundations of Computer Science (FOCS 2014).
Invited to the special issue.
(arXiv)
-
An Almost-Linear-Time Algorithm for Approximate Max Flow in Undirected Graphs, and its Multicommodity Generalizations
With Jonathan A. Kelner, Yin Tat Lee, and Lorenzo Orecchia.
In Symposium on Discrete Algorithms (SODA 2014).
Best Paper Award.
(arXiv)
-
Efficient Accelerated Coordinate Descent Methods and Faster Algorithms for Solving Linear Systems
With Yin Tat Lee.
In Symposium on Foundations of Computer Science (FOCS 2013).
(arXiv)
-
A Simple, Combinatorial Algorithm for Solving SDD Systems in Nearly-Linear Time
With Jonathan A. Kelner, Lorenzo Orecchia, and Zeyuan Allen-Zhu.
In Symposium on Theory of Computing (STOC 2013).
(arXiv)
Journal and Book Publications
-
Unit Capacity Maxflow in Almost $O(m^{4/3})$ Time
With Tarun Kathuria and Yang P. Liu.
SIAM Journal on Computing.
(Faster Divergence Maximization for Faster Maximum Flow (arXiv before merge))
-
Derandomization Beyond Connectivity: Undirected Laplacian Systems in Nearly Logarithmic Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
SIAM Journal on Computing, 2021.
(arXiv)
-
Deterministic Approximation of Random Walks in Small Space
With Jack Murtagh, Omer Reingold, and Salil P. Vadhan.
Theory of Computing, 2021.
(arXiv)
-
Variance Reduced Value Iteration and Faster Algorithms for Solving Markov Decision Processes
With Mengdi Wang, Xian Wu, and Yinyu Ye.
Naval Research Logistics, 2021.
(arXiv)
-
Efficient Convex Optimization with Membership Oracles
With Yin Tat Lee and Santosh S. Vempala.
Book chapter in Building Bridges II: Mathematics of László Lovász, 2020.
(arXiv)
-
Lower bounds for finding stationary points I
With Yair Carmon, John C. Duchi, and Oliver Hinder.
Mathematical Programming, 2019.
(arXiv)
-
Accelerated Methods for Non-Convex Optimization
With Yair Carmon, John C. Duchi, and Oliver Hinder.
SIAM Journal on Optimization, 28(2), pp 1751-1772, 2018.
(arXiv)
-
Parallelizing Stochastic Gradient Descent for Least Squares Regression: Mini-batching, Averaging, and Model Misspecification
With Prateek Jain, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli.
Journal of Machine Learning Research, 18, pp 223:1-223:42, 2017.
(arXiv)
-
Single Pass Spectral Sparsification in Dynamic Streams
With Michael Kapralov, Yin Tat Lee, Cameron Musco, and Christopher Musco.
SIAM Journal on Computing, 46(1), pp 456-477, 2017.
(arXiv)
Students
I have the great privilege and good fortune of advising the following PhD students:
I have also had the great privilege and good fortune of advising the following PhD students who have now graduated:
Courses