JOURNAL ARTICLE

ZEROTH-ORDER STOCHASTIC PROJECTED GRADIENT DESCENT FOR NONCONVEX OPTIMIZATION

Abstract

In this paper, we analyze the convergence of the zeroth-order stochastic projected gradient descent (ZO-SPGD) method for constrained convex and nonconvex optimization, in scenarios where only objective function values (not gradients) are directly available. We characterize the statistical properties of a new random gradient estimator, constructed from random direction samples drawn from a bounded uniform distribution. We prove that ZO-SPGD yields an O(d/(bq√T) + 1/√T) convergence rate for convex but non-smooth optimization, where d is the number of optimization variables, b is the minibatch size, q is the number of random direction samples used for gradient estimation, and T is the number of iterations. For nonconvex optimization, we show that ZO-SPGD achieves an O(1/√T) convergence rate but suffers an additional O((d + q)/(bq)) error. Our theoretical investigation of ZO-SPGD provides a general framework for studying the convergence rates of zeroth-order algorithms.
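To make the construction concrete, the Python sketch below implements one common variant of a random-direction zeroth-order gradient estimator inside a projected descent loop. It is a minimal illustration, not the paper's exact specification: the unit-sphere direction sampling, the ℓ2-ball constraint set, the function names, and the step-size and smoothing parameters (eta, mu) are all illustrative assumptions.

```python
import numpy as np

def zo_gradient_estimate(f, x, q=10, mu=1e-3):
    """Average q random-direction finite differences of f at x.

    A sketch of a two-point zeroth-order estimator: directions u are
    drawn uniformly from the unit sphere (an assumed choice), and the
    d/q scaling follows the standard random-direction construction.
    """
    d = x.size
    g = np.zeros(d)
    for _ in range(q):
        u = np.random.randn(d)
        u /= np.linalg.norm(u)               # uniform direction on the unit sphere
        g += (f(x + mu * u) - f(x)) / mu * u  # directional finite difference
    return (d / q) * g

def project_onto_ball(x, radius=1.0):
    """Euclidean projection onto an l2 ball, used here as an example constraint set."""
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else (radius / nrm) * x

def zo_spgd(f, x0, T=1000, eta=0.05, q=10, mu=1e-3):
    """Projected descent loop driven only by function evaluations of f."""
    x = x0.copy()
    for _ in range(T):
        g = zo_gradient_estimate(f, x, q=q, mu=mu)
        x = project_onto_ball(x - eta * g)    # gradient step, then projection
    return x
```

For instance, zo_spgd(lambda v: np.sum(v**2), np.ones(5)) drives the iterate toward the origin using only function evaluations, illustrating how larger q reduces the variance of the gradient estimate at the cost of more queries per iteration.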

Keywords:
Stochastic gradient descent, Rate of convergence, Gradient descent, Estimator, Mathematical optimization, Convex function, Convex optimization, Stochastic optimization, Optimization problem, Bounded function, Proximal gradient methods, Applied mathematics, Computer science, Mathematical analysis, Statistics

Metrics

Cited By: 41
FWCI (Field Weighted Citation Impact): 2.18
References: 39
Citation Normalized Percentile: 0.89

Topics

Stochastic Gradient Optimization Techniques
Physical Sciences → Computer Science → Artificial Intelligence
Sparse and Compressive Sensing Techniques
Physical Sciences → Engineering → Computational Mechanics
Domain Adaptation and Few-Shot Learning
Physical Sciences → Computer Science → Artificial Intelligence