
Program optimization

Summary

In computer science, program optimization, code optimization, or software optimization, is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, or to make it capable of operating with less memory storage or other resources, or draw less power.
Although the word "optimization" shares the same root as "optimal", the process of optimization rarely produces a truly optimal system. A system can generally be made optimal not in absolute terms, but only with respect to a given quality metric, which may conflict with other possible metrics. As a result, the optimized system will typically only be optimal in one application or for one audience. One might reduce the amount of time a program takes to perform some task at the price of making it consume more memory; conversely, in an application where memory is at a premium, one might deliberately choose a slower algorithm in order to use less memory. Often there is no "one size fits all" design that works well in all cases, so engineers make trade-offs to optimize the attributes of greatest interest. Additionally, the effort required to make a piece of software completely optimal, incapable of any further improvement, is almost always more than is reasonable for the benefits that would accrue, so the process of optimization may be halted before a completely optimal solution has been reached. Fortunately, it is often the case that the greatest improvements come early in the process.
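The time-for-memory trade-off described above can be sketched in C. This is a minimal, hypothetical illustration (the function names `popcount_loop` and `popcount_table` are illustrative, not from any library): counting the set bits of a byte either by looping over every bit, which uses no extra memory, or via a precomputed 256-entry lookup table, which answers in one array access but permanently spends 256 bytes.

```c
#include <stdint.h>

static uint8_t table[256];       /* extra memory that buys speed */
static int table_ready = 0;

/* Slow path: no extra memory, up to 8 iterations per call. */
int popcount_loop(uint8_t x) {
    int n = 0;
    while (x) {
        n += x & 1;
        x >>= 1;
    }
    return n;
}

/* Fast path: one array load per call after the table is filled. */
int popcount_table(uint8_t x) {
    if (!table_ready) {
        for (int i = 0; i < 256; i++)
            table[i] = (uint8_t)popcount_loop((uint8_t)i);
        table_ready = 1;
    }
    return table[x];
}
```

Neither version is "the optimal one" in absolute terms: the better choice depends on whether the metric being optimized is speed or memory footprint.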
Even for a given quality metric (such as execution speed), most methods of optimization only improve the result; they have no pretense of producing optimal output. Superoptimization is the process of finding truly optimal output.
Optimization can occur at a number of levels. Typically, the higher levels have greater impact and are harder to revisit later in a project, when changing them may require significant rework or a complete rewrite.

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.

Related publications (10)

Related people (3)

Related units (2)

Related concepts (62)

Related courses (39)

Related MOOCs (20)

Related lectures (851)

Introduction to optimization on smooth manifolds: first order methods

Learn to optimize on smooth, nonlinear spaces: Join us to build your foundations (starting at "what is a manifold?") and confidently implement your first algorithm (Riemannian gradient descent).

Inline expansion

In computing, inline expansion, or inlining, is a manual or compiler optimization that replaces a function call site with the body of the called function. Inline expansion is similar to macro expansion, but occurs during compilation, without changing the source code (the text), while macro expansion occurs prior to compilation, and results in different text that is then processed by the compiler. Inlining is an important optimization, but has complicated effects on performance.
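A minimal sketch of inline expansion in C (the function names here are illustrative): `static inline` hints to the compiler that each call to `square` may be replaced with its body, and the second function shows by hand what the loop effectively becomes after inlining.

```c
/* `static inline` is a hint, not a guarantee: the compiler may
   still emit a real call, e.g. at -O0 or if the function is large. */
static inline int square(int x) {
    return x * x;
}

int sum_of_squares(int n) {
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += square(i);          /* candidate call site for inlining */
    return s;
}

/* What the loop body effectively becomes once square() is inlined:
   the call overhead disappears and `s += i * i;` can be optimized
   further in its surrounding context. */
int sum_of_squares_inlined(int n) {
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i * i;
    return s;
}
```

Both functions compute the same result; inlining changes only how the machine code is generated, which is why its performance effects (code size growth vs. removed call overhead) are complicated.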

Recursion (computer science)

In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science. The power of recursion evidently lies in the possibility of defining an infinite set of objects by a finite statement.
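The pattern above, a solution defined in terms of smaller instances of the same problem, can be shown with the classic factorial function, a finite definition that covers every non-negative input:

```c
/* Recursion sketch: factorial(n) is defined via the smaller
   instance factorial(n - 1), with a base case that stops the
   descent. */
unsigned long factorial(unsigned int n) {
    if (n <= 1)                      /* base case */
        return 1;
    return n * factorial(n - 1);     /* recursive case: smaller instance */
}
```

Note that every recursive definition needs such a base case; without it, the calls would never terminate. (Recursion is also a frequent target of the compiler optimizations discussed on this page, e.g. tail-call elimination.)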

Optimizing compiler

In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster.
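One such semantics-preserving transformation, loop-invariant code motion, can be sketched by applying it by hand (the function names are illustrative): both versions return the same result, but the second hoists the invariant product `k * m` out of the loop so it is computed once instead of on every iteration.

```c
/* Before: k * m does not depend on i, yet is recomputed each
   iteration. */
int scale_sum(const int *a, int n, int k, int m) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] * (k * m);
    return s;
}

/* After loop-invariant code motion: the invariant expression is
   hoisted out of the loop.  Semantically equivalent, fewer
   multiplications at run time. */
int scale_sum_opt(const int *a, int n, int k, int m) {
    int km = k * m;                  /* computed once */
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i] * km;
    return s;
}
```

In practice a compiler at a typical optimization level would perform this hoisting itself; writing it out makes visible what "a sequence of optimizing transformations" on an intermediate program looks like.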

The goal of the lecture is to present and apply techniques for the modelling and the thermo-economic optimisation of industrial process and energy systems. The lecture covers the problem statement, th

Students learn several implementation techniques for modern functional and object-oriented programming languages. They put some of them into practice by developing key parts of a compiler and run time

This course is an introduction to linear and discrete optimization.
Warning: This is a mathematics course! While much of the course will be algorithmic in nature, you will still need to be able to p

Energy optimization strategies (ME-454: Modelling and optimization of energy systems)

Covers brainstorming options for smart operation changes, heat recovery, and PV panel performance.

Electroacoustic Absorbers: Performance Optimization (EE-548: Audio engineering)

Covers the design and optimization of electroacoustic absorbers for room modal equalization.

Flotter for Better Swimming (ME-314: Concurrent engineering project)

Analyzes the trade-off between power consumption and speed in swimming.

The potential energy savings resulting from the cooperative management of community districts, also known as energy hubs, have been widely demonstrated throughout the literature. Various predictive control strategies have proven to generate remarkable gains by exploiting the shared energy generation, conversion and storage devices of building communities. However, one difficulty with these methods lies in integrating information about the long-term perturbations brought to the system. While most existing work considers prediction horizons ranging from day-ahead to weekly time frames, recent studies have explored the control of inter-seasonal storage units on a yearly basis through available historical data sets. This study focuses on testing and improving the existing methods while integrating combined and stand-alone wind generation into the system. In particular, the contribution of the seasonal-storage state bounds to value-function approximation and control has been investigated for several well-known methods, including the scenario approach, stochastic dual dynamic programming and adaptive dynamic programming. Solving the resulting multistage stochastic optimization problem yields very good results and demonstrates the contribution of the bounds to almost all value-function approximation methods. Finally, the integration of wind production into the system underlines the importance of seasonal trends for efficient optimization of long-term storage and suggests using tolerance margins around the bounds for systems particularly sensitive to perturbations.

2018

We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.

Anastasia Ailamaki, Bikash Chandra, Srinivas Karthik Venkatesh, Riccardo Mancini, Vasileios Mageirakos

Analytics on modern data analytic and data warehouse systems often need to run large, complex queries on increasingly complex database schemas. A lot of progress has been made on executing such complex queries using techniques like scale-out query processing, hardware accelerators such as GPUs, and code generation. However, optimization of such queries remains a challenge. Existing optimal solutions either cannot be effectively parallelized or are inefficient, doing a lot of unnecessary work. In this demonstration, we present our system, GPU-QO, which aims to demonstrate query optimization techniques for large analytical queries using GPUs. We first demonstrate Massively Parallel Dynamic Programming (MPDP), a novel query optimization technique that can run on GPUs to generate optimal plans in a (massively) parallel and efficient manner. We then showcase IDP2-MPDP and UnionDP, two heuristic techniques, again using GPUs, that can optimize even queries containing thousands of joins. Furthermore, we compare our techniques with current state-of-the-art solutions, and demonstrate how our techniques can reduce optimization time for optimal solutions by nearly two orders of magnitude and produce much better query plans for heuristics (up to 7x).