Biometric authentication can be cast as a signal processing and statistical pattern recognition problem. As such, it relies on models of signal representations that can be used to discriminate between classes. One of the assumptions typically made by the practitioner is that the training set used to learn the parameters of the class-conditional likelihood functions is a representative sample of the unseen test set on which the system will be used. If the test data is distorted, this assumption no longer holds, and the Bayes decision rule or maximum likelihood rules are no longer optimal. In biometrics, distortions of the data come from two main sources: intra-user variability and changes in acquisition conditions. The aim of this thesis is to increase the robustness of biometric verification systems to these sources of variability.

Since the signals under consideration are stochastic rather than deterministic, steady-state signal analysis techniques are not adequate for modelling them. By using probabilistic methods instead, we can obtain models describing, amongst other properties, the spread of the random variables, meaning that we can take into account the uncertainty in the realisation of the random variables (features) due to intra-user variability. Furthermore, we posit that modelling information reflecting the acquisition conditions (signal quality measures) should help improve the robustness of biometric verification systems to mismatch between training and test conditions.

In this thesis, we use probabilistic approaches at all stages of the biometric authentication processing chain, while taking into account the quality of the signal being modelled. We use the theoretical framework of Bayesian networks, a family of graphical models offering considerable flexibility. We use them both for single-classifier systems (base classifier and reliability model) and for multiple-classifier systems (classifier combination with and without quality measures). In the single-classifier part, we propose a Bayesian network topology equivalent to a Gaussian mixture model for signature verification, and show that its performance is on par with state-of-the-art signature verification systems. Furthermore, the model can be used for speaker verification as well.

Quality measures are auxiliary information that can be used in both single-classifier and multi-classifier systems. We define precisely the concept of a quality measure and show the different potential types of quality measures. We propose new quality measures for both speech and signature, as well as the concept of a modality-independent quality measure, as an additional type of auxiliary information. We show that the effect of signal degradation can differ between impostor and client score distributions, an important effect to take into account when designing quality-based fusion models. We propose a principled evaluation methodology for quality measures. The use of reliability models to complement the base classifier in single-classifier systems is also investigated.
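To make the single-classifier setting concrete, the following is a minimal sketch, not the thesis implementation, of Gaussian-mixture-model-based verification scoring: a claimed identity is accepted if the average log-likelihood ratio between a client model and a world (background) model exceeds a threshold. The use of scikit-learn's GaussianMixture, the feature dimensionality, component counts, and threshold are all illustrative assumptions.

```python
# Sketch of GMM-based verification scoring (assumptions noted in comments).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic enrolment data for one client and a pooled "world" set
# (stand-ins for real signature or speech feature vectors).
client_feats = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
world_feats = rng.normal(loc=1.5, scale=2.0, size=(2000, 2))

# Class-conditional likelihood models p(x | client) and p(x | world);
# 4 diagonal-covariance components is an arbitrary illustrative choice.
client_gmm = GaussianMixture(n_components=4, covariance_type="diag",
                             random_state=0).fit(client_feats)
world_gmm = GaussianMixture(n_components=4, covariance_type="diag",
                            random_state=0).fit(world_feats)

def verification_score(test_feats: np.ndarray) -> float:
    """Average log-likelihood ratio: log p(x|client) - log p(x|world)."""
    return float(np.mean(client_gmm.score_samples(test_feats)
                         - world_gmm.score_samples(test_feats)))

THRESHOLD = 0.0  # illustrative; in practice tuned on development data
probe = rng.normal(loc=0.0, scale=1.0, size=(50, 2))
score = verification_score(probe)
print(f"score = {score:.3f} ->", "accept" if score > THRESHOLD else "reject")
```

Scoring against a world model rather than the client model alone normalises away effects common to all users, which is one reason the likelihood-ratio form is standard in verification.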
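For the multiple-classifier part, one common realisation of quality-based fusion is a trained fuser over the concatenation of per-classifier scores and quality measures. The sketch below uses a logistic-regression fuser as an illustrative assumption (the thesis itself works with Bayesian networks), with synthetic data whose degradation pattern mimics the observation that quality can affect the score distributions unevenly.

```python
# Sketch of quality-based score-level fusion (illustrative fuser and data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 1000

# Synthetic development set: two classifier scores per access attempt plus
# two quality measures (e.g. speech SNR, signature capture quality).
labels = rng.integers(0, 2, size=n)                      # 1 = client, 0 = impostor
scores = rng.normal(loc=labels[:, None] * 2.0, scale=1.0, size=(n, 2))
quality = rng.uniform(0.0, 1.0, size=(n, 2))

# Shrink classifier 0's score separation when its quality measure is low,
# so the fuser can learn to down-weight that classifier accordingly.
scores[:, 0] *= 0.3 + 0.7 * quality[:, 0]

# Fuser input: [score_1, score_2, quality_1, quality_2].
X = np.hstack([scores, quality])
fuser = LogisticRegression().fit(X, labels)

# Fused posterior for a new trial: scores (1.8, 0.4), qualities (0.9, 0.2).
trial = np.array([[1.8, 0.4, 0.9, 0.2]])
print("P(client | scores, quality) =", fuser.predict_proba(trial)[0, 1])
```

Feeding the quality measures to the fuser alongside the scores lets the combination rule adapt to acquisition conditions on a per-trial basis, instead of weighting the classifiers identically for clean and degraded signals.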