
Publication

# Coverage Estimation in Floorplan Visual Sensor Networks

Abstract

The issue of coverage in visual sensor networks (VSNs) has attracted considerable attention because of the sensors' unique directional sensing characteristics. It answers the question of how well the target field is monitored by a network of sensors with video/image capturing capability. In the floorplan scenario, the network monitors a plane parallel to the sensors' deployment plane. Estimating the coverage probability from both sensor-related and network-related parameters is a fundamental issue in this field. For a large-scale application in which deployment is done by dropping sensors from an airplane, random sensor placement and orientation according to their respective distributions is a practical assumption. Although some studies exist on the coverage problem of floorplan VSNs, none of them has derived a closed-form solution for coverage estimation, which is the main contribution of this paper. Simulation results validate the proposed mathematical solution.
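The coverage setting the abstract describes can be illustrated with a small Monte Carlo simulation. The sketch below is not the paper's closed-form solution; it is an illustrative estimate of the covered fraction of a square target plane, assuming hypothetical parameters (sensing range `r`, angular field of view `fov`, a sector-shaped 2-D sensing footprint) and uniformly random sensor positions and orientations.

```python
import math
import random

def covered(px, py, sx, sy, theta, r=2.0, fov=math.radians(60)):
    """Return True if point (px, py) lies inside the sensing sector of a
    sensor at (sx, sy) facing direction theta, with range r and angular
    field of view fov. All parameter values are illustrative."""
    dx, dy = px - sx, py - sy
    if math.hypot(dx, dy) > r:
        return False
    ang = math.atan2(dy, dx)
    # Smallest absolute angular difference between ang and theta.
    diff = abs((ang - theta + math.pi) % (2 * math.pi) - math.pi)
    return diff <= fov / 2

def coverage_fraction(n_sensors=50, side=10.0, n_points=10_000, seed=0):
    """Monte Carlo estimate of the fraction of a side x side target plane
    covered by n_sensors with uniformly random positions and orientations."""
    rng = random.Random(seed)
    sensors = [(rng.uniform(0, side), rng.uniform(0, side),
                rng.uniform(0, 2 * math.pi)) for _ in range(n_sensors)]
    hits = 0
    for _ in range(n_points):
        px, py = rng.uniform(0, side), rng.uniform(0, side)
        if any(covered(px, py, sx, sy, th) for sx, sy, th in sensors):
            hits += 1
    return hits / n_points
```

A closed-form estimate, as derived in the paper, would replace the sampling loop with an expression in the same sensor and network parameters.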


Related concepts (31)

Coverage probability

In statistics, the coverage probability, or coverage for short, is the probability that a confidence interval or confidence region will include the true value (parameter) of interest. It can be defined as the proportion of instances where the interval surrounds the true value as assessed by long-run frequency. The fixed degree of certainty pre-specified by the analyst, referred to as the confidence level or confidence coefficient of the constructed interval, is effectively the nominal coverage probability of the procedure for constructing confidence intervals.

Interval estimation

In statistics, interval estimation is the use of sample data to estimate an interval of possible values of a parameter of interest. This is in contrast to point estimation, which gives a single value. The most prevalent forms of interval estimation are confidence intervals (a frequentist method) and credible intervals (a Bayesian method); less common forms include likelihood intervals and fiducial intervals.

Confidence interval

In frequentist statistics, a confidence interval (CI) is a range of estimates for an unknown parameter. A confidence interval is computed at a designated confidence level; the 95% confidence level is most common, but other levels, such as 90% or 99%, are sometimes used. The confidence level, degree of confidence or confidence coefficient represents the long-run proportion of CIs (at the given confidence level) that theoretically contain the true value of the parameter; this is tantamount to the nominal coverage probability.
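As a concrete example of constructing such an interval, the sketch below computes a normal-approximation confidence interval for a sample mean using the sample standard deviation; the function name and defaults are illustrative, and the approximation is only adequate for moderate sample sizes.

```python
import statistics

def mean_ci(data, confidence=0.95):
    """Normal-approximation confidence interval for the mean of data.
    Returns (lower, upper) at the given confidence level."""
    n = len(data)
    xbar = statistics.fmean(data)
    s = statistics.stdev(data)
    # Quantile of the standard normal for a two-sided interval.
    z = statistics.NormalDist().inv_cdf(0.5 + confidence / 2)
    half = z * s / n ** 0.5
    return xbar - half, xbar + half
```

Raising the confidence level widens the interval: a 99% interval from the same data is strictly wider than the 95% one.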

Related publications (32)

Related MOOCs (12)

Digital Signal Processing I

Basic signal processing concepts, Fourier analysis and filters. This module can
be used as a starting point or a basic refresher in elementary DSP.

Digital Signal Processing II

Adaptive signal processing, A/D and D/A. This module provides the basic
tools for adaptive filtering and a solid mathematical framework for sampling and
quantization.

Digital Signal Processing III

Advanced topics: this module covers real-time audio processing (with
examples on a hardware board), image processing and communication system design.

Anthony Christopher Davison, Igor Rodionov

We derive confidence intervals (CIs) and confidence sequences (CSs) for the classical problem of estimating a bounded mean. Our approach generalizes and improves on the celebrated Chernoff method, yielding the best closed-form "empirical-Bernstein" CSs and ...

Anthony Christopher Davison, Timmy Rong Tian Tse

Universal inference enables the construction of confidence intervals and tests without regularity conditions by splitting the data into two parts and appealing to Markov's inequality. Previous investigations have shown that the cost of this generality is a ...

Bolometry is an essential diagnostic for calculating the power balances and for the understanding of different physical aspects of tokamak experiments. The reconstruction method based on the Maximum Likelihood (ML) principle, developed initially for JET, h ...