This thesis describes a novel digital background calibration scheme for pipelined ADCs with nonlinear interstage gain. Errors caused by the nonlinear gains are corrected in real time by adaptively post-processing the digital stage outputs. The goal of this digital error correction is to improve the power efficiency of high-precision analog-to-digital conversion by relaxing the linearity and matching constraints on the analog pipeline stages and compensating the resulting distortion through digital post-processing. This approach is motivated by the observation that technology scaling reduces the energy cost of digital signal processing while making high-precision analog signal processing harder because of reduced intrinsic device gain and reduced voltage headroom. In particular, the proposed calibration approach enables the use of power-efficient circuits in noise-limited, high-resolution, high-speed converters. Alternative stage circuit topologies that are more power-efficient than their traditional counterparts are typically too nonlinear and too sensitive to temperature and bias variations to be employed in the critical stages of such converters without adaptive error correction.

The proposed calibration scheme removes the effects of nonlinear interstage gain, sub-DAC nonlinearity, and mismatch between the reference voltages of different stages. Gain errors and reference voltage mismatch are continuously tracked during normal operation and may thus be time-varying; sub-DAC nonlinearity is assumed to be constant. A method is proposed to characterize the time-invariant non-ideal sub-DAC characteristics during an initial, one-time offline calibration phase. Because this method uses only the existing uncalibrated analog hardware, it can determine only the relative sizes of the DAC error terms. One or two scale factors per sub-DAC remain to be estimated by the adaptation algorithm that tracks the time-varying gain parameters.
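As a minimal illustration of how a gain error can be tracked in the background during normal operation, the sketch below uses pseudorandom-dither correlation, a standard technique from the calibration literature. It is shown only for concreteness: the adaptation method and every numerical value below are assumptions, not taken from this work.

```python
import numpy as np

# Illustrative sketch of background gain tracking by pseudorandom-dither
# correlation.  A known binary dither is added to the stage residue; because
# the input signal is uncorrelated with the dither, correlating the backend
# output with the dither sequence isolates the interstage gain.
rng = np.random.default_rng(0)
g_true = 3.9                      # actual interstage gain (nominal 4.0, assumed)
d_amp = 0.1                       # amplitude of the injected binary dither
N = 200_000                      # samples observed during normal operation

x = rng.uniform(-0.2, 0.2, N)     # stage residues of the normal input signal
t = rng.choice([-1.0, 1.0], N)    # known pseudorandom dither sequence
r = g_true * (x + t * d_amp)      # amplified residue seen by the backend ADC

# Correlate the backend output with the dither to estimate the gain.
g_est = np.mean(r * t) / d_amp

# Digital correction: divide by the tracked gain and remove the dither.
corrected = r / g_est - t * d_amp
```

Because the estimate is formed from the converter's normal output samples, it can be updated continuously and thus follows slow drifts in temperature and bias, which is the sense in which the gain parameters above are "tracked during normal operation".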
Because the scale factors are constant, they can be excluded from adaptation once their estimates have converged. This offline characterization of the sub-DACs ensures that the entire characteristic of every sub-DAC can be estimated and that calibration of DAC errors can be permanently turned off after initial convergence. Furthermore, it eliminates degrees of freedom in the error correction function and fixes the gain of the calibrated ADC.

The digital post-processor linearizes the ADC transfer characteristic by applying an adaptive inverse model of the analog signal path to the digital outputs of the pipeline stages. The model uses piecewise linear (PWL) functions to approximate the inverses of the nonlinear stage gains. Previously reported background calibration methods are limited to low-order polynomial gain models; the PWL model is more general, so the analog signal path can be optimized for power efficiency without any constraint on high-order distortion. The previously reported split-ADC
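The idea of a PWL inverse gain model can be sketched as follows. The compressive amplifier characteristic, the breakpoint count, and all numerical values are illustrative assumptions, and `np.interp` (a piecewise-linear interpolant) stands in for the adaptive PWL evaluation described above.

```python
import numpy as np

# Hypothetical compressive interstage amplifier: nominal gain 4 with
# third-order distortion (illustrative values, not from the thesis).
def stage_gain(x, g=4.0, a3=0.6):
    return g * x - a3 * x**3

# Build a PWL approximation of the inverse gain by tabulating the forward
# characteristic at a handful of breakpoints.  Swapping the roles of the
# sampled inputs and outputs in np.interp yields a PWL inverse model,
# since stage_gain is monotonic over this input range.
x_break = np.linspace(-0.5, 0.5, 17)      # breakpoints in the input range
y_break = stage_gain(x_break)             # forward characteristic samples

def pwl_inverse(y):
    return np.interp(y, y_break, x_break)  # PWL approximation of the inverse

# Applying the PWL inverse to the distorted residue recovers the input to
# within the PWL approximation error.
x = np.linspace(-0.45, 0.45, 1000)
err = pwl_inverse(stage_gain(x)) - x
print(np.max(np.abs(err)))
```

In a calibration context the breakpoint ordinates would be the adapted parameters rather than samples of a known characteristic; unlike a low-order polynomial, adding breakpoints lets the PWL function track high-order distortion without changing the model structure.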