
# Pattern Recognition in Non-uniformly Sampled Electrocardiogram Signal for Wearable Sensors

Abstract

In this thesis, we explore three main topics: non-uniform (in time) sub-sampling, QRS detection in an event-based sub-sampled ECG (electrocardiogram), and implementation on a low-power MCU (microcontroller unit). The main idea behind this work is to reduce the energy consumption of a QRS detection algorithm by adapting the sampling frequency to the local frequency of the signal, while maintaining the overall QRS detection performance without degradation. In particular, we focus on the understanding and re-adaptation of two popular QRS detection algorithms: Pan-Tompkins and gQRS. This choice was guided by constraints imposed by the event-based sub-sampling. The re-adaptation was performed in two steps: the first was to change the behavior of the algorithms so that they can operate in an event-based sampled domain. The results of this first step led to the selection of gQRS as the algorithm to undergo step two: parallelization and optimization for the chosen low-power device. The results achieved are comparable to those of the original version in the classical uniformly sampled domain. The selected low-power device is the PULP platform, an MCU composed of a single-core, low-power CPU and a cluster of 8 additional smaller cores.
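The abstract does not spell out the sub-sampling rule itself. Purely as a rough illustration of activity-dependent, non-uniform sampling (function name, rates and threshold are all hypothetical, not the thesis's actual scheme), one can keep a high sampling rate only where the local slope is large, e.g. around QRS complexes, and drop to a low rate elsewhere:

```python
import numpy as np

def event_based_subsample(signal, fs, low_rate, high_rate, threshold):
    """Keep samples at high_rate where the local slope exceeds a
    threshold (e.g. around QRS complexes) and at low_rate elsewhere.

    Illustrative sketch only; the thesis's event-based scheme may differ.
    signal: uniformly sampled input, fs: its sampling frequency (Hz).
    Returns the (non-uniform) kept timestamps and sample values.
    """
    slope = np.abs(np.gradient(signal))  # crude local-activity measure
    keep_times, keep_values = [], []
    i, n = 0, len(signal)
    while i < n:
        keep_times.append(i / fs)
        keep_values.append(signal[i])
        # advance by a small step in active regions, a large step otherwise
        rate = high_rate if slope[i] > threshold else low_rate
        i += max(1, int(round(fs / rate)))
    return np.array(keep_times), np.array(keep_values)
```

The resulting timestamps are non-uniformly spaced, which is why a detector such as Pan-Tompkins or gQRS must be re-adapted before it can consume such a stream.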




Related publications (8)



Machine Learning is a modern and actively developing field of computer science, devoted to extracting and estimating dependencies from empirical data. It combines fields such as statistics, optimization theory and artificial intelligence. In practical tasks, the general aim of Machine Learning is to construct algorithms able to generalize and predict in previously unseen situations based on some set of examples. Given some finite information, Machine Learning provides ways to extract knowledge from data and to describe, explain and predict from it. Kernel Methods are one of the most successful branches of Machine Learning. They allow applying linear algorithms, with well-founded properties such as generalization ability, to non-linear real-life problems. The Support Vector Machine is a well-known example of a kernel method that has found a wide range of applications in data analysis. In many practical applications, additional prior knowledge is often available. This can be knowledge about the data domain, invariant transformations, inner geometrical structures in the data, properties of the underlying process, etc. Used smartly, this information can significantly improve any data processing algorithm. Thus, it is important to develop methods for incorporating prior knowledge into data-dependent models. The main objective of this thesis is to investigate approaches to learning with kernel methods using prior knowledge. Invariant learning with kernel methods is considered in more detail. In the first part of the thesis, kernels are developed which incorporate prior knowledge about invariant transformations. They apply when the desired transformations produce an object around every example, under the assumption that all points in the given object share the same class. Different types of objects are considered, including hard geometrical objects and distributions. These kernels are then applied to image classification with Support Vector Machines.
Next, algorithms which explicitly include prior knowledge are considered. An algorithm was developed that linearly classifies distributions by their domain. It is constructed so that kernels can be applied to solve non-linear tasks; it thus combines the discriminative power of Support Vector Machines with the well-developed framework of generative models, and can be applied to a number of real-life tasks in which data are represented as distributions. In the last part of the thesis, the use of unlabelled data as a source of prior knowledge is considered. The technique of modelling the unlabelled data with a graph is taken as a baseline from semi-supervised manifold learning. For classification problems, we use this approach to build graph models of invariant manifolds. For regression problems, we use unlabelled data to take the inner geometry of the input space into account. To conclude, in this thesis we developed a number of approaches for incorporating prior knowledge into kernel methods. We proposed invariant kernels for existing algorithms, developed new algorithms and adapted a technique from semi-supervised learning for invariant learning. In all these cases, links with related state-of-the-art approaches were investigated. Several illustrative experiments were carried out on real data from optical character recognition, face image classification, and brain-computer interfaces, as well as a number of benchmark and synthetic datasets.
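The invariant kernels above are developed formally in the thesis. As a generic, textbook-style illustration only (not the thesis's actual construction), a "jittered" kernel scores two examples by the best match over a known set of invariant transformations, so that, e.g., a shifted pattern is considered identical to the original:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Standard Gaussian (RBF) kernel between two vectors."""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

def jittered_kernel(x, y, transforms, gamma=0.5):
    """Invariance by 'jittering': compare x against every transformed
    version of y and keep the best match. Illustrative sketch only;
    the kernels developed in the thesis differ."""
    return max(rbf(x, t(y), gamma) for t in transforms)
```

A caveat worth noting: such max-based constructions are not guaranteed to be positive semi-definite, which is one motivation for the more principled invariant kernels studied in work of this kind.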

In recent years, population aging and the consequent higher incidence of noncommunicable diseases have increased the need for long-term health monitoring. Moreover, as healthcare cost is projected to grow substantially by 2030 in OECD countries, the demand for portable, easy-to-use, low-cost and ultra-low-power means of monitoring, diagnosis and prevention rises. Wearable sensor technology for remote health and wellness monitoring is an optimal candidate to tackle this problem and has advanced drastically in recent years. However, wearable sensors impose several design constraints. They must process data in real time, provide highly accurate diagnoses, and adapt to different cases. At the same time, to perform long-term monitoring, they must maximize battery lifetime and, hence, usability. However, the need for more accurate algorithms and the need for energy-efficient implementations can work against each other. For this reason, enhancing the energy-accuracy trade-off is essential. Several works in the literature have addressed the energy-accuracy trade-off problem. The general approach is to first develop offline methodologies that maximize the algorithm's accuracy, using signal processing and machine learning. These methods are then optimized for online implementation on resource-constrained ultra-low-power platforms. However, different problems can occur in these two steps. Most methods use highly variable datasets (mainly having different subjects), while others use fixed parameters tailored to specific conditions, which overall decreases the robustness of the algorithm. In traditional single-core devices, some optimizations aimed at lowering the algorithms' complexity and computational burden, such as downsampling and feature reduction, lead to a loss in precision. With the advances of ultra-low-power platforms and new machine learning strategies, even more challenges arise.
However, these advances make it possible to exploit the growing capabilities of the platforms and to use innovative, more complex strategies that achieve high levels of accuracy, robustness and energy efficiency. In this thesis, I propose a set of adaptive strategies in the context of remote health and wellness monitoring for an enhanced energy-accuracy trade-off in wearable sensors. First, I present three methodologies for multi-biosignal monitoring and pathology detection, which adapt to specific physiological conditions by means of personalization to the subject and knowledge acquired from the signal. Second, in the context of modern heterogeneous wearable platforms, I propose a modular approach to software parallelization and hardware acceleration for biomedical applications to maximize the attainable speed-up and, therefore, minimize energy consumption. Moreover, I propose an approach to scale computing resources and independent memory banks based on the specific characteristics of the patient in modern wearable sensors. Finally, in the context of intensive physical exercise, I propose an online design that adapts to the sudden physiological changes occurring in the signal. This method combines a lightweight algorithm with a more robust, though more complex, one to reduce energy consumption while maintaining very high accuracy. Moreover, this adaptive strategy exploits the heterogeneity of modern platforms by matching the complexity of each algorithm to the capabilities of each core, which further enhances the energy-accuracy trade-off.
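The combined lightweight/robust strategy can be sketched as a simple dispatcher. The detector names and the variability criterion below are placeholders for illustration; the actual design in the thesis is more elaborate and additionally maps each detector onto a core matched to its complexity:

```python
import statistics

def adaptive_detect(window, simple_detector, robust_detector,
                    variability, threshold):
    """Send a signal window to the cheap detector by default, and to the
    robust (more expensive) one only when the window looks unstable,
    e.g. during sudden physiological changes in intensive exercise.

    Illustrative sketch; the criterion used in the thesis may differ."""
    if variability(window) > threshold:
        return robust_detector(window)
    return simple_detector(window)
```

Because the robust detector runs only on the fraction of windows that need it, average energy per window stays close to that of the lightweight detector while accuracy is preserved on the difficult windows.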

This paper presents insertions-only algorithms for maintaining the exact and/or approximate size of the minimum edge cut and the minimum vertex cut of a graph. The algorithms output the approximate or exact size k in time O(1) and a cut of size k in time linear in its size. For the minimum edge cut problem and for any 0 < ε

1997