DISSERTATION

Frameworks for High Dimensional Convex Optimization

Abstract

We present novel, efficient algorithms for solving extremely large optimization problems. A significant bottleneck today is that, as dataset sizes grow, researchers across disciplines need to solve prohibitively massive optimization problems. In this thesis, we present methods to compress optimization problems. The general goal is to represent a huge problem as a smaller problem, or a set of smaller problems, while retaining enough information to ensure provable guarantees on solution quality and running time. We apply this approach in the following three settings.

First, we propose a framework for accelerating both linear program solvers and convex solvers for problems with linear constraints. Our focus is on a class of problems for which data is either very costly or hard to obtain. In these situations, the number of available data points, m, is much smaller than the number of variables, n. In a machine learning setting, this regime is increasingly prevalent, since it is often advantageous to consider larger and larger feature spaces while not necessarily obtaining proportionally more data. Analytically, we provide worst-case guarantees on both the runtime and the quality of the solution produced. Empirically, we show that our framework speeds up state-of-the-art commercial solvers by two orders of magnitude while maintaining a near-optimal solution.

Second, we propose a novel approach to distributed optimization that uses far fewer messages than existing methods. We consider a setting in which the problem data are distributed over the nodes. We provide worst-case guarantees on the amount of communication the algorithm requires and on the quality of the solution it produces. The algorithm uses O(log(n+m)) messages with high probability, an exponential reduction compared to the O(n) communication required in each round of traditional consensus-based approaches. In terms of solution quality, our algorithm produces a feasible, near-optimal solution. Numerical results demonstrate that its approximation error matches that of ADMM in many cases, while using orders of magnitude less communication.

Lastly, we propose and analyze a provably accurate long-step infeasible interior point method (IPM) for linear programming. The core computational bottleneck in IPMs is the need to solve a linear system of equations at each iteration. We employ sketching techniques to make this linear system computation lighter, addressing the well-known ill-conditioning problems that arise when iterative solvers are used in IPMs for LPs. In particular, we propose a preconditioned Conjugate Gradient iterative solver for the linear system, and our sketching strategy makes the condition number of the preconditioned system provably small. In practice, we demonstrate that our approach significantly reduces the condition number of the linear system and thus allows for more efficient solving on a range of benchmark datasets.
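To make the sketching-and-preconditioning idea in the third part concrete, the following is a minimal Python sketch of the general technique: draw a randomized sketch, take a QR factorization of the sketched matrix, then run preconditioned conjugate gradient on the normal equations. This illustrates the standard Blendenpik-style construction rather than the thesis's exact IPM algorithm; the matrix sizes, the Gaussian sketch, and the test data are all illustrative assumptions.

    import numpy as np
    from scipy.linalg import qr, solve_triangular
    from scipy.sparse.linalg import LinearOperator, cg

    rng = np.random.default_rng(0)

    # Illustrative tall system: A is N x d with N >> d, standing in for
    # the ill-conditioned matrix arising inside an IPM iteration.
    N, d = 5000, 100
    A = rng.standard_normal((N, d)) * np.logspace(0, 6, d)  # badly scaled columns
    b = rng.standard_normal(N)

    # Gaussian sketch S (s x N) with s a small multiple of d: S @ A is
    # cheap to factor yet preserves the column geometry of A with high
    # probability.
    s = 4 * d
    S = rng.standard_normal((s, N)) / np.sqrt(s)
    _, R = qr(S @ A, mode="economic")  # R is d x d upper triangular

    # Use M = (R^T R)^{-1} as a preconditioner for CG on A^T A x = A^T b,
    # applied via two triangular solves. Since R^T R = A^T S^T S A is
    # close to A^T A, the preconditioned system is well conditioned.
    def apply_M(v):
        y = solve_triangular(R, v, trans="T")
        return solve_triangular(R, y)

    M = LinearOperator((d, d), matvec=apply_M)
    AtA = LinearOperator((d, d), matvec=lambda v: A.T @ (A @ v))

    x, info = cg(AtA, A.T @ b, M=M, maxiter=200)
    res = np.linalg.norm(A.T @ (A @ x) - A.T @ b) / np.linalg.norm(A.T @ b)
    print("CG converged:", info == 0, " relative residual:", res)

The point the example demonstrates is the design choice described in the abstract: factoring the small sketched matrix S @ A is far cheaper than factoring A itself, yet R captures enough of the geometry of A that CG on the preconditioned system converges quickly even when A is badly conditioned.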

Keywords:
Convex optimization; Linear programming; Distributed optimization; Interior point methods; Sketching; Preconditioned conjugate gradient; Machine learning

Topics

Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Complexity and Algorithms in Graphs (Physical Sciences → Computer Science → Computational Theory and Mathematics)
Parallel Computing and Optimization Techniques (Physical Sciences → Computer Science → Hardware and Architecture)

Related Documents

JOURNAL ARTICLE

Frameworks for High Dimensional Convex Optimization

London, Palma Alise den Nijs

Journal: CaltechTHESIS (California Institute of Technology). Year: 2020.
JOURNAL ARTICLE

High-Dimensional Structured Regression Using Convex Optimization

Yu, Guo

Journal: eCommons (Cornell University). Year: 2018.
BOOK-CHAPTER

Large-Dimensional Convex Optimization

Romain Couillet, Zhenyu Liao

Cambridge University Press eBooks. Year: 2022. Pages: 313-336.
JOURNAL ARTICLE

Convex Clustering and Lyapunov Functional Minimization in High Dimensional Optimization

Rupesh Kumar, Dr. Brij Pal Singh

Journal: Zenodo (CERN European Organization for Nuclear Research). Year: 2025.