JOURNAL ARTICLE

Variance Reduction Optimization Algorithm Based on Random Sampling

Abstract

Stochastic gradient descent (SGD) algorithms have been widely applied in machine learning and deep learning due to their efficiency. However, SGD uses the stochastic gradient of a single sample to approximate the full gradient over all samples, which introduces additional variance at each iteration. This variance makes the convergence curve of SGD oscillate or even diverge, so effectively reducing it has become a key challenge. To address this challenge, a variance reduction optimization algorithm based on mini-batch random sampling, DM-SRG (double mini-batch stochastic recursive gradient), is proposed and applied to solving convex and non-convex optimization problems. The algorithm's main feature is a double-loop structure with an inner and an outer loop: the outer loop uses mini-batch random samples to compute a gradient that approximates the full gradient, reducing the gradient computation cost; the inner loop also uses mini-batch random samples to compute gradients in place of single-sample stochastic gradients, improving the algorithm's convergence stability. In this paper, a sublinear convergence rate of the DM-SRG algorithm is theoretically guaranteed for both non-convex and convex objective functions. Furthermore, a dynamic sample-size adjustment strategy, based on a performance evaluation model of the computing units, is designed to improve training efficiency. The effectiveness of the algorithm is evaluated via numerical simulation experiments on real datasets of varying sizes. Experimental results show that the loss function of the DM-SRG algorithm is reduced by 18.1% and its average running time by 8.22%.
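The double-loop structure described in the abstract resembles a stochastic recursive gradient (SARAH-style) estimator with mini-batches in both loops. The sketch below is an illustrative reconstruction under that assumption only; the function name `dm_srg` and all parameters (batch sizes, step size, loop lengths) are placeholders, not the paper's actual settings, and the dynamic sample-size adjustment strategy is not modeled.

```python
import numpy as np

def dm_srg(grad_fn, w0, n, outer_iters=20, inner_iters=10,
           batch_outer=32, batch_inner=8, lr=0.05, rng=None):
    """Illustrative double mini-batch recursive gradient loop.

    grad_fn(w, idx) must return the average gradient over the samples
    indexed by idx. The outer loop forms a mini-batch estimate of the
    full gradient; the inner loop refreshes it recursively with a
    second, smaller mini-batch instead of a single sample.
    """
    rng = np.random.default_rng(rng)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(outer_iters):
        # Outer loop: mini-batch gradient approximating the full gradient
        idx = rng.choice(n, size=min(batch_outer, n), replace=False)
        v = grad_fn(w, idx)
        w_prev = w.copy()
        w = w - lr * v
        for _ in range(inner_iters):
            # Inner loop: recursive estimator v_t = g(w_t) - g(w_{t-1}) + v_{t-1},
            # with both gradients taken over the same fresh mini-batch
            idx = rng.choice(n, size=min(batch_inner, n), replace=False)
            v = grad_fn(w, idx) - grad_fn(w_prev, idx) + v
            w_prev = w.copy()
            w = w - lr * v
    return w
```

On a least-squares problem, for example, `grad_fn` would return `X[idx].T @ (X[idx] @ w - y[idx]) / len(idx)`; the recursive correction keeps the variance of the inner-loop estimator low without recomputing the full gradient.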

Keywords:
Variance reduction, Stochastic gradient descent, Convergence, Gradient descent, Stochastic optimization, Random optimization, Stochastic approximation, Stability (learning theory), Convex function

Metrics

Cited by: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 0
Citation Normalized Percentile: 0.25

Topics

Stochastic Gradient Optimization Techniques (Physical Sciences → Computer Science → Artificial Intelligence)
Metaheuristic Optimization Algorithms Research (Physical Sciences → Computer Science → Artificial Intelligence)
Advanced Multi-Objective Optimization Algorithms (Physical Sciences → Computer Science → Computational Theory and Mathematics)

Related Documents

JOURNAL ARTICLE

A Proximal Random Newton Algorithm Based on Variance Reduction

康乐 杜

Journal: Advances in Applied Mathematics, Year: 2022, Vol: 11 (07), Pages: 4708-4717
JOURNAL ARTICLE

Random sampling-based gradient descent method for optimal control problems with variance reduction

Jeongho Kim, Dongnam Ko, Chohong Min, Byungjoon Lee

Journal: Mathematical Models and Methods in Applied Sciences, Year: 2025, Vol: 35 (13), Pages: 2797-2829
BOOK-CHAPTER

Eigenvalues, Sampling, Variance Reduction

Gerhard Winkler

Year: 2003, Pages: 203-207