This lecture explores the optimality of splines both for solving inverse problems in imaging and for designing deep neural networks. It presents a recent representer theorem that links linear inverse problems with total-variation regularization to adaptive splines, and shows that the continuous-domain solutions are intrinsically sparse and compatible with compressed sensing. The theorem is then applied to the optimization of deep neural network activations, which yields deep-spline networks whose activations are piecewise-linear splines. Along the way, the lecture covers the connection between splines and operators, gTV regularization, and the variational formulation of inverse problems, and it concludes by highlighting the global optimality achieved with spline activations and the ability to control the complexity of deep-spline networks.
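
As a rough sketch (the notation below is assumed, following the standard form of such representer theorems rather than taken verbatim from the lecture), the continuous-domain recovery problem can be written as

\[
\min_{f} \; \sum_{m=1}^{M} E\bigl(y_m, \langle h_m, f \rangle\bigr) \;+\; \lambda \, \| \mathrm{L} f \|_{\mathcal{M}},
\]

where the h_m are the measurement functionals, L is the regularization operator, and \(\| \cdot \|_{\mathcal{M}}\) is the total-variation norm on measures (the gTV regularizer). Under this formulation, the theorem guarantees extreme-point solutions of the form

\[
f(x) \;=\; \sum_{k=1}^{K} a_k \, \rho_{\mathrm{L}}(x - x_k) \;+\; p(x), \qquad K \le M,
\]

that is, nonuniform L-splines with at most M adaptive knots, where \(\rho_{\mathrm{L}}\) is a Green's function of L and p lies in its finite-dimensional null space. This finite, sparse parameterization is what makes the continuous-domain solutions compatible with compressed-sensing-style recovery.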
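
On the deep-network side, a minimal sketch of a learnable piecewise-linear spline activation is given below, assuming a PyTorch setting; the class name, the fixed knot grid, and the penalty are illustrative choices, not the lecture's implementation. Each activation is expanded as an affine term plus a sum of shifted ReLUs, and an l1 penalty on the ReLU coefficients (a discrete surrogate for second-order total variation) keeps the number of active knots, and hence the network's complexity, under control.

import torch
import torch.nn as nn


class DeepSplineActivation(nn.Module):
    """Illustrative sketch of a learnable piecewise-linear activation:
    sigma(x) = b0 + b1 * x + sum_k a_k * ReLU(x - tau_k),
    with fixed knot locations tau_k and learned coefficients."""

    def __init__(self, num_knots=21, knot_range=3.0):
        super().__init__()
        # Fixed, uniformly spaced knots on [-knot_range, knot_range] (assumed setup)
        self.register_buffer("knots", torch.linspace(-knot_range, knot_range, num_knots))
        self.a = nn.Parameter(torch.zeros(num_knots))    # jump-in-slope (ReLU) coefficients
        self.b = nn.Parameter(torch.tensor([0.0, 1.0]))  # affine part: null space of d^2/dx^2

    def forward(self, x):
        # Applied elementwise; broadcasting gives one ReLU term per knot.
        relu_terms = torch.relu(x.unsqueeze(-1) - self.knots)   # shape (..., num_knots)
        return self.b[0] + self.b[1] * x + relu_terms @ self.a

    def tv2_penalty(self):
        # l1 norm of the slope jumps: a sparsity-promoting surrogate for TV(2)(sigma),
        # which is what limits the number of active knots (network complexity).
        return self.a.abs().sum()

During training, one would add a regularization term proportional to the sum of tv2_penalty() over all such activations to the data-fidelity loss, so that activations with few knots are favored.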