Lecture

Trust region methods: framework & algorithms

Description

This lecture introduces trust-region methods, focusing on the overall framework and its algorithmic details. It covers the trust-region subproblem, variants of Newton's method, and Algorithm 6.1 for trust-region optimization. The goal is to establish simple rules for updating the trust-region radius that guarantee the iterates visit points with arbitrarily small gradients.
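As a rough illustration of the framework described above, the following sketch implements a basic trust-region loop in the style of Algorithm 6.1 (as given in standard references such as Nocedal & Wright): approximately solve the subproblem, compare actual to predicted reduction via the ratio ρ, and shrink or grow the radius accordingly. The subproblem is solved here with the Cauchy point for simplicity; the function names, parameter defaults, and constants (0.25, 0.75, the factor 2) are illustrative choices, not the lecture's exact algorithm.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimize the quadratic model along -g, clipped to the trust region."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    # If curvature along -g is non-positive, step to the boundary.
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, hess, x0, delta0=1.0, delta_max=10.0,
                 eta=0.15, tol=1e-8, max_iter=500):
    """Basic trust-region method (Algorithm 6.1 style) with Cauchy-point steps."""
    x, delta = np.asarray(x0, dtype=float), delta0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) < tol:          # stop at a near-stationary point
            break
        p = cauchy_point(g, B, delta)
        # Ratio of actual reduction to the reduction predicted by the model.
        predicted = -(g @ p + 0.5 * p @ B @ p)
        rho = (f(x) - f(x + p)) / predicted
        if rho < 0.25:                        # poor agreement: shrink the radius
            delta *= 0.25
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max) # good agreement at the boundary: grow
        if rho > eta:                         # accept the step only if it helps enough
            x = x + p
    return x
```

On a convex quadratic f(x) = ½xᵀAx − bᵀx, this loop drives the gradient toward zero and converges to the minimizer A⁻¹b, illustrating the "visit points with small gradients" guarantee of the radius-update rules.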

About this result
This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.
Related lectures (46)
Generalized Optimistic Methods for Convex-Concave Saddle Point Problems
Presents a generalized optimistic framework for solving convex-concave saddle point problems, extending to higher-order methods.
Optimisation in Energy Systems
Explores optimization in energy system modeling, covering decision variables, objective functions, and different strategies with their pros and cons.
Optimization Methods: Theory Discussion
Explores optimization methods, including unconstrained problems, linear programming, and heuristic approaches.
Richardson Method: Preconditioned Iterative Solvers
Covers the Richardson method for solving linear systems with preconditioned iterative solvers and introduces the gradient method.
Optimality of Convergence Rates: Accelerated Gradient Descent
Explores the optimality of convergence rates in convex optimization, focusing on accelerated gradient descent and adaptive methods.
