Program optimization: In computer science, program optimization, code optimization, or software optimization is the process of modifying a software system to make some aspect of it work more efficiently or use fewer resources. In general, a computer program may be optimized so that it executes more rapidly, operates with less memory or other resources, or draws less power. Although the word "optimization" shares the same root as "optimal", it is rare for the process of optimization to produce a truly optimal system.
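As a minimal illustration of the idea, the Python sketch below shows the same computation before and after one hand-applied optimization, hoisting a loop-invariant calculation out of a loop; the function names and data are invented for this example, not drawn from any particular system.

```python
import math

def norm_scaled_slow(values, x, y):
    # Unoptimized: the scale factor is recomputed on every iteration.
    return [v / math.sqrt(x * x + y * y) for v in values]

def norm_scaled_fast(values, x, y):
    # Optimized: the loop-invariant scale factor is computed once, up front.
    scale = math.sqrt(x * x + y * y)
    return [v / scale for v in values]

# Both versions produce the same result; only the amount of work differs.
assert norm_scaled_slow([1.0, 2.0], 3.0, 4.0) == norm_scaled_fast([1.0, 2.0], 3.0, 4.0)
```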
Combinatorial optimization: Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including the ones just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
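For concreteness, here is a minimal sketch of one of the named problems, the 0/1 knapsack, solved with a standard dynamic programming table rather than exhaustive search; the item data are invented for illustration.

```python
def knapsack(weights, values, capacity):
    """Return the maximum total value of items fitting within capacity (0/1 knapsack)."""
    # best[c] = best value achievable with capacity c using the items seen so far.
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Illustrative data: three items, knapsack capacity 5.
print(knapsack([2, 3, 4], [3, 4, 5], 5))  # -> 7 (take the items of weight 2 and 3)
```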
Interprocedural optimization: Interprocedural optimization (IPO) is a collection of compiler techniques used in computer programming to improve performance in programs containing many frequently used functions of small or medium length. IPO differs from other compiler optimizations by analyzing the entire program as opposed to a single function or block of code. IPO seeks to reduce or eliminate duplicate calculations and inefficient use of memory and to simplify iterative sequences such as loops.
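The hand-worked Python sketch below illustrates the kind of transformation IPO enables, inlining a small callee into its caller so the work can be simplified across the former function boundary; it is a made-up example, not the output of any particular compiler.

```python
# Before: the callee is a separate function, so each call repeats its work
# and the compiler, looking at one function at a time, cannot simplify further.
def area(radius):
    return 3.14159 * radius * radius

def ring_area_before(outer, inner):
    return area(outer) - area(inner)

# After (the effect an interprocedural optimizer can achieve automatically):
# the callee is inlined into the caller and the shared factor pulled out,
# removing the call overhead and one multiplication.
def ring_area_after(outer, inner):
    return 3.14159 * (outer * outer - inner * inner)

assert abs(ring_area_before(2.0, 1.0) - ring_area_after(2.0, 1.0)) < 1e-9
```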
Mathematical optimization: Mathematical optimization (alternatively spelled optimisation) or mathematical programming is the selection of a best element, with regard to some criterion, from some set of available alternatives. It is generally divided into two subfields: discrete optimization and continuous optimization. Optimization problems arise in all quantitative disciplines from computer science and engineering to operations research and economics, and the development of solution methods has been of interest in mathematics for centuries.
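In its most common abstract form, using standard notation rather than anything specific to this summary, an optimization problem asks for a minimizer of an objective function over a feasible set:

```latex
\min_{x \in A} f(x),
\qquad \text{where } f : A \to \mathbb{R} \text{ is the objective function and } A \text{ is the set of available alternatives.}
```

Maximization fits the same template, since maximizing f is equivalent to minimizing -f; discrete optimization restricts A to a discrete set, while continuous optimization allows A to be, for example, a region of real n-dimensional space.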
Optimal control: Optimal control theory is a branch of mathematical optimization that deals with finding a control for a dynamical system over a period of time such that an objective function is optimized. It has numerous applications in science, engineering and operations research. For example, the dynamical system might be a spacecraft with controls corresponding to rocket thrusters, and the objective might be to reach the Moon with minimum fuel expenditure.
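In standard continuous-time notation (not taken from this summary), a typical optimal control problem has the form:

```latex
\min_{u(\cdot)} \; J \;=\; \Phi\bigl(x(T)\bigr) + \int_{0}^{T} L\bigl(x(t), u(t), t\bigr)\, dt
\quad \text{subject to} \quad \dot{x}(t) = f\bigl(x(t), u(t), t\bigr), \qquad x(0) = x_0,
```

where x(t) is the state of the dynamical system (for instance the spacecraft's position and velocity), u(t) is the control (for instance thruster commands), and the objective J encodes the goal, such as total fuel expenditure.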
Plain bearing: A plain bearing, or more commonly sliding contact bearing and slide bearing (in railroading sometimes called a solid bearing, journal bearing, or friction bearing), is the simplest type of bearing, comprising just a bearing surface and no rolling elements. Therefore, the journal (i.e., the part of the shaft in contact with the bearing) slides over the bearing surface. The simplest example of a plain bearing is a shaft rotating in a hole. A simple linear bearing can be a pair of flat surfaces designed to allow motion.
Fluid bearing: Fluid bearings are bearings in which the load is supported by a thin layer of rapidly moving pressurized liquid or gas between the bearing surfaces. Since there is no contact between the moving parts, there is no sliding friction, allowing fluid bearings to have lower friction, wear and vibration than many other types of bearings. Thus, it is possible for some fluid bearings to have near-zero wear if operated correctly. They can be broadly classified into two types: fluid dynamic bearings (also known as hydrodynamic bearings) and hydrostatic bearings.
Rolling-element bearing: In mechanical engineering, a rolling-element bearing, also known as a rolling bearing, is a bearing which carries a load by placing rolling elements (such as balls or rollers) between two concentric, grooved rings called races. The relative motion of the races causes the rolling elements to roll with very little rolling resistance and with little sliding. One of the earliest and best-known rolling-element bearings is a set of logs laid on the ground with a large stone block on top.
Optimizing compiler: In computing, an optimizing compiler is a compiler that tries to minimize or maximize some attributes of an executable computer program. Common requirements are to minimize a program's execution time, memory footprint, storage size, and power consumption (the last three being popular for portable computers). Compiler optimization is generally implemented using a sequence of optimizing transformations, algorithms which take a program and transform it to produce a semantically equivalent output program that uses fewer resources or executes faster.
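As a toy illustration of one such optimizing transformation (not any real compiler's implementation), the Python sketch below applies constant folding to a tiny expression tree, producing a semantically equivalent but cheaper-to-evaluate fragment; all names are invented for this example.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Const:
    value: int

@dataclass
class Add:
    left: "Expr"
    right: "Expr"

@dataclass
class Var:
    name: str

Expr = Union[Const, Add, Var]

def fold_constants(e: Expr) -> Expr:
    """Constant folding: evaluate subtrees whose operands are all constants."""
    if isinstance(e, Add):
        left, right = fold_constants(e.left), fold_constants(e.right)
        if isinstance(left, Const) and isinstance(right, Const):
            return Const(left.value + right.value)  # computed at "compile time"
        return Add(left, right)
    return e  # constants and variables are already as simple as possible

# (2 + 3) + x  is rewritten as  5 + x
print(fold_constants(Add(Add(Const(2), Const(3)), Var("x"))))
```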
Multi-objective optimization: Multi-objective optimization or Pareto optimization (also known as multi-objective programming, vector optimization, multicriteria optimization, or multiattribute optimization) is an area of multiple-criteria decision making that is concerned with mathematical optimization problems involving more than one objective function to be optimized simultaneously. Multi-objective optimization is a type of vector optimization that has been applied in many fields of science, including engineering, economics and logistics, where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
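A minimal sketch of the central idea, Pareto dominance between candidate solutions, is given below; it assumes for illustration that all objectives are to be minimized, and the helper names and data are invented.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse in every objective
    and strictly better in at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions, i.e. the trade-off frontier."""
    return [s for s in solutions
            if not any(dominates(t, s) for t in solutions if t != s)]

# Two conflicting objectives, e.g. cost and delivery time.
candidates = [(1.0, 9.0), (2.0, 4.0), (3.0, 3.0), (4.0, 8.0)]
print(pareto_front(candidates))  # (4.0, 8.0) is dropped: it is dominated by (2.0, 4.0)
```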