Discrete Forward-Backward Algorithms in Optimization: Advances and Applications
Abstract
This article explores the discrete forward-backward (FB) algorithm, a fundamental method in numerical optimization grounded in monotone operator theory, as introduced by Abbas and Attouch (2014). The FB algorithm addresses structured monotone inclusion problems in Hilbert spaces, specifically targeting systems of the form 0 ∈ A(x) + B(x), where A is a maximal monotone operator and B is a locally Lipschitz monotone operator. By discretizing continuous dynamical systems, the algorithm leverages Newton-like dynamics to generate iterative sequences that converge to solutions. Stable updates are ensured through the β-cocoercivity of B, which implies Lipschitz continuity with constant 1/β. This work builds on foundational studies by extending convergence results to nonconvex settings, introducing adaptive step sizes, and improving convergence rates. Key advancements include explicit convergence rates for nonconvex problems, as shown by Li and Pong (2021), and accelerated methods achieving O(1/k²) rates without strong convexity, as per Chambolle and Pock (2022). Additionally, Zhang et al. (2023) enhance robustness by addressing step-size sensitivity. The algorithm's applications span optimization, image processing, and distributed networks, all of which benefit from its ability to handle large-scale, complex datasets efficiently. Recent developments incorporate inexact methods and adaptive strategies. Future research directions include hybrid acceleration-adaptive methods and integration of the algorithm with quantum computing frameworks. Large-scale experiments are recommended to validate these advancements in real-world scenarios, underscoring the FB algorithm's versatility and potential in modern optimization challenges.
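To make the discretization concrete, the following is a minimal Python sketch of the standard forward-backward iteration x_{k+1} = prox_{γA}(x_k − γ B(x_k)) for the inclusion 0 ∈ A(x) + B(x), where B is the gradient of a smooth term and prox_{γA} is the resolvent of A. The ℓ1-regularized least-squares instance below, and all names in it, are illustrative assumptions rather than an example from the article:

```python
import numpy as np

def forward_backward(prox_A, B, x0, step, n_iter=1000):
    """Forward-backward iteration for 0 in A(x) + B(x):
    x_{k+1} = prox_{step*A}(x_k - step * B(x_k))."""
    x = x0
    for _ in range(n_iter):
        x = prox_A(x - step * B(x), step)  # forward (explicit) then backward (implicit) step
    return x

# Illustrative instance: minimize 0.5*||M x - b||^2 + lam*||x||_1,
# with B the gradient of the smooth term and prox_A soft-thresholding.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 5))
b = M @ np.array([1.0, 0.0, -2.0, 0.0, 0.5])
lam = 0.1

B = lambda x: M.T @ (M @ x - b)  # cocoercive gradient of the smooth term
soft = lambda x, s: np.sign(x) * np.maximum(np.abs(x) - lam * s, 0.0)  # prox of lam*||.||_1

L = np.linalg.norm(M.T @ M, 2)  # Lipschitz constant of B; step = 1/L ensures stability
x_star = forward_backward(soft, B, np.zeros(5), step=1.0 / L)
```

The step size 1/L mirrors the cocoercivity condition discussed above: cocoercivity of B bounds its Lipschitz constant, which in turn bounds the admissible step sizes for stable updates.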