In software development, effort estimation is the process of predicting the most realistic amount of effort (expressed in terms of person-hours or money) required to develop or maintain software based on incomplete, uncertain and noisy input. Effort estimates may be used as input to project plans, iteration plans, budgets, investment analyses, pricing processes and bidding rounds.
Published surveys on estimation practice suggest that expert estimation is the dominant strategy when estimating software development effort.
Typically, effort estimates are over-optimistic, and there is strong overconfidence in their accuracy. The mean effort overrun appears to be about 30% and has not decreased over time. However, the measurement of estimation error is itself problematic; see the discussion of assessing the accuracy of estimates.
The strong overconfidence in the accuracy of effort estimates is illustrated by the finding that, on average, when a software professional is 90% confident or "almost sure" that a minimum-maximum interval will contain the actual effort, the observed frequency with which the interval actually contains it is only 60-70%.
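As a minimal illustration of how such figures are computed, the sketch below measures the mean relative effort overrun and the hit rate of stated 90% minimum-maximum intervals. All project records and numbers here are invented for illustration, not data from any of the cited surveys.

```python
# Hypothetical project records: (estimated, actual, interval_low, interval_high),
# all in person-hours. Values are invented for illustration.
projects = [
    (100, 140, 80, 130),
    (200, 250, 150, 260),
    (50, 55, 45, 70),
]

# Relative overrun of actual effort versus the point estimate.
overruns = [(actual - est) / est for est, actual, _, _ in projects]
mean_overrun = sum(overruns) / len(overruns)

# Fraction of projects whose actual effort fell inside the stated interval.
hits = sum(lo <= actual <= hi for _, actual, lo, hi in projects)
hit_rate = hits / len(projects)

print(f"Mean effort overrun: {mean_overrun:.0%}")  # 25% for this toy data
print(f"Interval hit rate:   {hit_rate:.0%}")      # 67% here, well below the stated 90%
```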
Currently, the term "effort estimate" is used to denote different concepts, such as the most likely effort (modal value), the effort that corresponds to a 50% probability of not being exceeded (median), the planned effort, the budgeted effort, or the effort used to propose a bid or price to the client. This is believed to be unfortunate, because communication problems may occur and because the concepts serve different goals.
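To see why these concepts yield different numbers, consider a right-skewed effort distribution. The sketch below assumes a lognormal distribution with arbitrary parameters (both assumptions introduced here for illustration); the mode, median, and mean then all differ.

```python
import math

# Arbitrary log-scale parameters of an assumed lognormal effort distribution.
mu, sigma = math.log(500), 0.6

mode = math.exp(mu - sigma**2)       # most likely effort (modal value)
median = math.exp(mu)                # 50% probability of not being exceeded
mean = math.exp(mu + sigma**2 / 2)   # expected effort

print(f"mode   ~ {mode:.0f} person-hours")    # ~349
print(f"median ~ {median:.0f} person-hours")  # 500
print(f"mean   ~ {mean:.0f} person-hours")    # ~598
```

Quoting a single "effort estimate" without saying which of these quantities it refers to invites exactly the communication problems described above.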
Software researchers and practitioners have been addressing the problems of effort estimation for software development projects since at least the 1960s; see, e.g., work by Farr and Nelson.
Most of the research has focused on the construction of formal software effort estimation models. The early models were typically based on regression analysis or mathematically derived from theories from other domains.
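A minimal sketch of such a regression-based model follows, assuming the common power-law form effort = a * size^b and fitting it by ordinary least squares in log space. The project data is invented for illustration; it is not from any published model.

```python
import math

# Hypothetical historical projects: (size in KLOC, effort in person-months).
data = [(10, 24), (50, 150), (100, 350), (200, 800)]

# Fit log(effort) = log(a) + b * log(size) by ordinary least squares.
xs = [math.log(size) for size, _ in data]
ys = [math.log(effort) for _, effort in data]
n = len(data)
mx, my = sum(xs) / n, sum(ys) / n

b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
a = math.exp(my - b * mx)

def predict(kloc):
    """Predicted effort in person-months for a project of the given size."""
    return a * kloc ** b

print(f"effort ~ {a:.2f} * size^{b:.2f}")
print(f"predicted effort for 80 KLOC: {predict(80):.0f} person-months")
```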
In project management (e.g., for engineering), accurate estimates are the basis of sound project planning. Many processes have been developed to aid engineers in making accurate estimates, such as analogy-based estimation, compartmentalization (i.e., breakdown of tasks), cost estimates, the Delphi method, documenting estimation results, educated assumptions, estimating each task, examining historical data, identifying dependencies, parametric estimating, risk assessment, and structured planning.
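As a sketch combining two of these ideas, compartmentalization and parametric estimating, the code below applies PERT-style three-point estimates (optimistic, most likely, pessimistic) to a task breakdown. The task names and numbers are hypothetical, and PERT is one technique among many that fit this pattern.

```python
import math

# Hypothetical task breakdown: task -> (optimistic, most likely, pessimistic)
# effort in person-hours.
tasks = {
    "requirements": (20, 40, 90),
    "implementation": (100, 160, 300),
    "testing": (40, 60, 140),
}

total_expected = 0.0
total_variance = 0.0
for name, (opt, likely, pess) in tasks.items():
    expected = (opt + 4 * likely + pess) / 6  # PERT expected value
    stdev = (pess - opt) / 6                  # PERT standard deviation
    total_expected += expected
    total_variance += stdev ** 2              # variances add if tasks are independent

print(f"Expected total effort:   {total_expected:.0f} person-hours")
print(f"Approx. std deviation:   {math.sqrt(total_variance):.0f} person-hours")
```

Summing task-level variances assumes the tasks are statistically independent; correlated overruns (a common situation in practice) make the true uncertainty larger than this calculation suggests.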
Cost estimation in software engineering is typically concerned with the financial spend on the effort to develop and test the software; it can also include requirements review, maintenance, training, management, and the purchase of extra equipment, servers, and software. Many methods have been developed for estimating software costs for a given project.
The Constructive Cost Model (COCOMO) is a procedural software cost estimation model developed by Barry W. Boehm in the late 1970s and published in his 1981 book Software Engineering Economics as a model for estimating effort, cost, and schedule for software projects. Its parameters are derived from fitting a regression formula to data from historical projects (63 projects for COCOMO 81 and 163 projects for COCOMO II).
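Basic COCOMO, the simplest form of COCOMO 81, reduces to two equations with published coefficients per project class: effort = a * KLOC^b (person-months) and development time = c * effort^d (months). The sketch below implements those published equations; the 32 KLOC example input is arbitrary, and the fuller COCOMO variants add cost drivers not shown here.

```python
# Published Basic COCOMO 81 coefficients, per project class:
# mode -> (a, b, c, d)
COEFFICIENTS = {
    "organic":       (2.4, 1.05, 2.5, 0.38),
    "semi-detached": (3.0, 1.12, 2.5, 0.35),
    "embedded":      (3.6, 1.20, 2.5, 0.32),
}

def basic_cocomo(kloc, mode="organic"):
    """Return (effort in person-months, schedule in months, average staff)."""
    a, b, c, d = COEFFICIENTS[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    staff = effort / schedule     # average headcount
    return effort, schedule, staff

# Arbitrary example: a 32 KLOC organic-mode project.
effort, schedule, staff = basic_cocomo(32, "organic")
print(f"effort:   {effort:.1f} person-months")   # ~91
print(f"schedule: {schedule:.1f} months")        # ~14
print(f"staff:    {staff:.1f} people on average")
```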