Steepest Descent Method (最速下降法) Resources

Showing items tagged with "最速下降法" (Steepest Descent Method)

This guide covers MATLAB implementations for unconstrained optimization methods including Steepest Descent, Newton's Method, Conjugate Gradient, Variable Metric Methods (DFP and BFGS), and Nonlinear Least Squares. For constrained optimization, it explores Exterior Penalty Function and Generalized Multiplier Methods. The documentation includes detailed algorithm explanations, MATLAB code examples, and practical problem analysis with optimization techniques.

MATLAB · 251 views

The Conjugate Gradient (CG) method serves as an intermediate approach between Steepest Descent and Newton's Method. It leverages only first-order derivative information while overcoming the slow convergence of Steepest Descent and avoiding the computational burden of storing, computing, and inverting the Hessian matrix required by Newton's Method. The CG method is not only one of the most useful techniques for solving large linear systems but also stands as one of the most efficient algorithms for large-scale nonlinear optimization problems. In implementation, CG typically uses iterative updates with conjugate directions computed through recurrence relations rather than matrix operations.

MATLAB · 240 views
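The recurrence-based updates described above can be sketched in a few lines. The following is a minimal plain-Python Conjugate Gradient solver for a symmetric positive definite linear system; the tagged resources themselves are MATLAB code that is not shown here, so `cg_solve` and the example system are purely illustrative:

```python
# Conjugate Gradient sketch (illustrative, not the MATLAB code from the
# resource).  Solves A x = b for symmetric positive definite A using only
# matrix-vector products and scalar recurrences -- no Hessian storage or
# inversion, matching the description above.

def cg_solve(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    x = [0.0] * n
    r = b[:]                       # residual r = b - A x (x starts at 0)
    p = r[:]                       # first search direction is the residual
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))  # exact step
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        beta = rs_new / rs_old     # recurrence for the next conjugate direction
        p = [r[i] + beta * p[i] for i in range(n)]
        rs_old = rs_new
    return x

# Example: a 2x2 SPD system whose exact solution is [1, 2].
A = [[4.0, 1.0], [1.0, 3.0]]
b = [6.0, 7.0]
x = cg_solve(A, b)
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system, which is why it scales to the large problems mentioned above.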

Optimization algorithms including Conjugate Gradient, Newton's Method, Golden Section Search, and Steepest Descent methods with implementation insights

MATLAB · 242 views
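Of the methods listed, Golden Section Search is the simplest to sketch. A minimal plain-Python version (illustrative only; the resource's MATLAB implementation is not shown) narrows a bracketing interval around the minimizer of a unimodal function:

```python
# Golden Section Search sketch (illustrative).  The interval [a, b] is
# shrunk by the golden ratio each iteration, so only one new function
# evaluation is needed per step.

def golden_section(f, a, b, tol=1e-8):
    inv_phi = (5 ** 0.5 - 1) / 2        # 1/phi ~ 0.618
    c = b - inv_phi * (b - a)           # two interior probe points
    d = a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                     # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                           # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2

# Example: (x - 2)^2 + 1 is unimodal on [0, 5] with its minimum at x = 2.
x_min = golden_section(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
```

In practice this one-dimensional search often serves as the line-search subroutine inside the multivariable methods (Steepest Descent, CG, Newton) listed above.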

Unconstrained Optimization Method – The Steepest Descent Method (also called the Gradient Method) is one of the earliest approaches developed for solving extremum problems of multivariable functions. This iterative algorithm uses gradient information to locate local minima through directional updates, with implementations typically involving step-size selection and a convergence criterion.

MATLAB · 262 views
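The directional update, step-size selection, and convergence check described above can be sketched as follows. This is a plain-Python illustration with a backtracking (Armijo) line search, not the resource's MATLAB code; the quadratic example is likewise invented for demonstration:

```python
# Steepest Descent sketch (illustrative).  Iterates x <- x - t * grad f(x),
# choosing the step size t by backtracking until a sufficient-decrease
# condition holds, and stopping when the gradient norm is small.

def steepest_descent(f, grad, x0, tol=1e-8, max_iter=10000):
    x = list(x0)
    for _ in range(max_iter):
        g = grad(x)
        gg = sum(gi * gi for gi in g)
        if gg ** 0.5 < tol:                 # convergence criterion
            break
        t, fx = 1.0, f(x)
        # Backtracking: halve t until f decreases enough along -g.
        while f([x[i] - t * g[i] for i in range(len(x))]) > fx - 0.5 * t * gg:
            t *= 0.5
        x = [x[i] - t * g[i] for i in range(len(x))]
    return x

# Example: a quadratic with its minimum at (1, 2).
f = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] - 2.0) ** 2
grad = lambda x: [2.0 * (x[0] - 1.0), 8.0 * (x[1] - 2.0)]
x = steepest_descent(f, grad, [0.0, 0.0])
```

The zig-zagging of successive gradient directions on ill-conditioned problems is what makes convergence slow, which is the limitation the CG method above is designed to overcome.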

This source code package constitutes my major assignment for the Optimization Theory course, featuring self-implemented versions of the following prevalent optimization algorithms: Steepest Descent Method, Newton's Method, Nonlinear Least Squares Method, and DFP (Davidon-Fletcher-Powell) Method. The implementation includes two test functions, fun1 and fun2, designed to validate algorithm performance and convergence behavior across different optimization landscapes.

MATLAB · 224 views
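As a sketch of the DFP method mentioned above: the assignment's MATLAB code and its fun1/fun2 are not shown, so the quadratic test function here is illustrative, and the line search shown is exact only for quadratics (a general implementation would use, e.g., Wolfe conditions):

```python
# DFP (Davidon-Fletcher-Powell) sketch (illustrative).  Maintains an
# approximation H to the inverse Hessian and applies the DFP rank-two
# update  H <- H + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
# after each step, where s is the step and y the gradient change.

def dfp(grad, x0, tol=1e-8, max_iter=100):
    n = len(x0)
    x = list(x0)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H ~ inv Hessian
    g = grad(x)
    for _ in range(max_iter):
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        d = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        # Line search: exact for quadratic f, via one extra gradient call
        # (grad(x + d) - grad(x) equals A d when f is quadratic).
        g_probe = grad([x[i] + d[i] for i in range(n)])
        curv = sum((g_probe[i] - g[i]) * d[i] for i in range(n))
        t = -sum(g[i] * d[i] for i in range(n)) / curv
        s = [t * d[i] for i in range(n)]
        x_new = [x[i] + s[i] for i in range(n)]
        g_new = grad(x_new)
        y = [g_new[i] - g[i] for i in range(n)]
        sy = sum(s[i] * y[i] for i in range(n))
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        yHy = sum(y[i] * Hy[i] for i in range(n))
        if sy > 1e-12 and yHy > 1e-12:      # skip update if curvature fails
            for i in range(n):
                for j in range(n):
                    H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
    return x

# Example quadratic test function with its minimum at (3, -1).
grad = lambda x: [2.0 * (x[0] - 3.0), 4.0 * (x[1] + 1.0)]
x = dfp(grad, [0.0, 0.0])
```

With exact line searches on a quadratic, DFP generates conjugate directions and terminates in at most n iterations; here the 2-D example finishes in two steps.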

Traditional methods for path planning include intelligent search algorithms (A* and D*), the steepest descent method, the visibility graph approach, the artificial potential field method, cell decomposition, optimal control methods, simulated annealing, and genetic algorithms. These methods face challenges such as combinatorial explosion in high-dimensional spaces, local optima, high computational complexity, sensitivity to noise, and convergence issues. Modern approaches such as deep learning, swarm intelligence, and hybrid methods offer ways past these limitations by capturing complex data structures, simulating collective behaviors, and combining the strengths of multiple algorithms.

MATLAB · 209 views