
# Average cost

Summary

In economics, average cost or unit cost is equal to total cost (TC) divided by the number of units of a good produced (the output Q):

$$AC = \frac{TC}{Q}.$$

Average cost has strong implications for how firms choose to price their commodities: a firm's sales of a given commodity depend on the size of the relevant market and on how its rivals choose to act.
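The definition above can be illustrated with a short numerical sketch (the cost figures below are made up for the example):

```python
def average_cost(total_cost: float, quantity: float) -> float:
    """Return the average (unit) cost AC = TC / Q."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return total_cost / quantity

# Hypothetical firm: fixed cost 1000, variable cost 5 per unit.
fixed_cost = 1000.0
unit_variable_cost = 5.0
for q in (10, 100, 1000):
    tc = fixed_cost + unit_variable_cost * q
    print(f"Q = {q:>4}  TC = {tc:>7.1f}  AC = {average_cost(tc, q):.2f}")
```

Because the fixed cost is spread over more units as output grows, average cost falls from 105.0 at Q = 10 to 6.0 at Q = 1000 in this example.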
Short-run average cost
Short-run costs are those that vary with almost no time lag. Labour costs and the cost of raw materials are short-run costs, but the cost of physical capital is not.
An average cost curve can be plotted with cost on the vertical axis and quantity on the horizontal axis. Marginal costs are often also shown on these graphs, with marginal cost representing the cost of the last unit produced at each point; marginal costs in the short run are the slope of the variable cost curve (and hence the first derivative of variable cost).
A typical average cost curve has a U-shape, because average fixed cost falls as output rises, while diminishing marginal returns eventually push average variable cost back up.
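These relationships can be sketched numerically. The cubic total cost function below is a made-up example chosen to produce a U-shaped average cost curve; the sketch locates the minimum of AC on a grid and shows that marginal cost (the derivative of variable cost) equals average cost there:

```python
def total_cost(q: float) -> float:
    # Fixed cost 50 plus a cubic variable-cost term (hypothetical numbers).
    return 50 + q**3 - 6 * q**2 + 15 * q

def marginal_cost(q: float) -> float:
    # Derivative of the variable-cost part with respect to q.
    return 3 * q**2 - 12 * q + 15

def average_cost(q: float) -> float:
    return total_cost(q) / q

# Scan a grid of output levels and locate the minimum of AC.
grid = [q / 100 for q in range(50, 1001)]  # q from 0.5 to 10.0
q_min = min(grid, key=average_cost)
print(f"AC is minimised near q = {q_min:.2f}")
print(f"AC = {average_cost(q_min):.2f}, MC = {marginal_cost(q_min):.2f}")
```

At the grid minimum the two printed values nearly coincide, illustrating the standard result that the marginal cost curve crosses the average cost curve at the bottom of the U.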

Official source

This page is automatically generated and may contain information that is not correct, complete, up-to-date, or relevant to your search query. The same applies to every other page on this website. Please make sure to verify the information with EPFL's official sources.


Related publications (5)

Fabio Nobile, Erik Gustaf Bogislaw Von Schwerin

We propose a novel Continuation Multi Level Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending with the desired one. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a nontrivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator and justify in this way our error estimate that allows prescribing both required accuracy and confidence in the final result. Numerical examples substantiate the above results and illustrate the corresponding computational savings.

In this paper we consider the class of anti-uniform Huffman (AUH) codes and derive tight lower and upper bounds on the average length, entropy, and redundancy of such codes in terms of the alphabet size of the source. An upper bound on the entropy of AUH codes is also presented in terms of the average cost of the code. The Fibonacci distributions, which play a fundamental role in AUH codes, are introduced. It is shown that such distributions maximize the average length and the entropy of the code for a given alphabet size. Another previously known bound on the entropy for a given average length follows immediately from our results.

This thesis focuses on non-parametric covariance estimation for random surfaces, i.e. functional data on a two-dimensional domain. Non-parametric covariance estimation lies at the heart of functional data analysis, and considerations of statistical and computational efficiency often compel the use of separability of the covariance when working with random surfaces. We seek to provide efficient alternatives to this ambivalent assumption.

In Chapter 2, we study a setting where the covariance structure may fail to be separable locally, either due to noise contamination or due to the presence of a non-separable short-range dependent signal component. That is, the covariance is an additive perturbation of a separable component by a non-separable but banded component. We introduce non-parametric estimators hinging on shifted partial tracing, a novel concept enjoying strong denoising properties. We illustrate the usefulness of the proposed methodology on a data set of mortality surfaces.

In Chapter 3, we propose a distinctive decomposition of the covariance, which allows us to understand separability as an unconventional form of low-rankness. From this perspective, a separable covariance has rank one. Allowing for a higher rank suggests a structured class in which any covariance can be approximated up to an arbitrary precision. The key notion of the partial inner product allows us to generalize the power iteration method to general Hilbert spaces and estimate the aforementioned decomposition from data. Truncation and retention of the leading terms automatically induces a non-parametric estimator of the covariance, whose parsimony is dictated by the truncation level. Advantages of this approach, allowing for estimation beyond separability, are demonstrated on the task of classification of EEG signals.

While Chapters 2 and 3 propose several generalizations of separability in the densely sampled regime, Chapter 4 deals with the sparse regime, where the latent surfaces are observed only at a few irregular locations. Here, a separable covariance estimator based on local linear smoothers is proposed, which is the first non-parametric utilization of separability in the sparse regime. The assumption of separability reduces the intrinsically four-dimensional smoothing problem into several two-dimensional smoothers and allows the proposed estimator to retain the classical minimax-optimal convergence rate for two-dimensional smoothers. The proposed methodology is used for a qualitative analysis of implied volatility surfaces corresponding to call options, and for prediction of the latent surfaces based on information from the entire data set, allowing for uncertainty quantification. Our quantitative results show that the proposed methodology outperforms the common approach of pre-smoothing every implied volatility surface separately.

Throughout the thesis, we put emphasis on computational aspects, since those are the main reason behind the immense popularity of separability. We show that the covariance structures of Chapters 2 and 3 come with no (asymptotic) computational overhead relative to assuming separability. In fact, the proposed covariance structures can be estimated and manipulated with the same asymptotic costs as the separable model. In particular, we develop numerical algorithms that can be used for efficient inversion, as required e.g. for prediction. All the methods are implemented in R and available on GitHub.

Related courses (8)

MGT-454: Principles of microeconomics

The course allows students to become familiar with the basic tools and concepts of modern microeconomic analysis. Based on graphical reasoning and analytical calculus, it constantly links to real economic issues.

MGT-200: Economic thinking

This course introduces frameworks and tools for understanding the economic dimensions of the world we live in. The course includes applications to real-world situations and events. Assessment is through group projects. The course is divided into two parts: Microeconomics and Macroeconomics.

MGT-303: Economics of ideas

This class will provide students with an understanding of some real-world issues related to the "knowledge economy." Why should we innovate as a society? Why doesn't innovation just happen, and how can the government help firms innovate? We will answer these questions and others using economic tools.

Related concepts (12)

In economics, a cost curve is a graph of the costs of production as a function of total quantity produced. In a free market economy, productively efficient firms optimize their production process …

Economics is a social science that studies the production, distribution, and consumption of goods and services. Economics focuses on the behaviour and interactions of economic agents …

In economics, specifically general equilibrium theory, a perfect market, also known as an atomistic market, is defined by several idealizing conditions, collectively called perfect competition …

Related lectures (23)