Real-time calibration of gain and timing errors in two-channel time-interleaved A/D converters for SDR applications: Page 3 of 6

June 25, 2014 //By Djamel Haddadi, Integrated Device Technology, Inc.
The explosion of mobile data is driving new receiver architectures in communication infrastructure in order to provide higher capacity and more flexibility.
If x0 and x1 denote the outputs of the two sub-ADCs when the calibration signal is applied at the input, it can be shown from Equation 1 that the two signals are linked by the following expression (noise ignored):


x1[n] = h0·x0[n] + h1·x0[n−1] ....... (Equation 2)

The coefficients h0 and h1 of this linear filtering formula are related explicitly to the gain g and timing Δt errors by:

h0 = (1 + g)·sin(ωc(Ts + Δt)) / sin(2ωcTs)
h1 = (1 + g)·sin(ωc(Ts − Δt)) / sin(2ωcTs) ....... (Equation 3)

where ωc is the calibration angular frequency and Ts is the full-rate sampling period.

This nonlinear set of equations can be linearized and inverted using a first-order approximation, since the mismatch errors are kept small by design.
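As a numerical sanity check, the short sketch below builds the two sub-ADC tone sequences for an assumed set of parameters (calibration tone at fs/16, g = 2%, Δt = 1% of Ts — illustrative values, not from the article), verifies the 2-tap relation, and applies the first-order inversion. The closed-form expressions used for h0 and h1 are one consistent reconstruction (a delayed, scaled sinusoid rewritten as a two-tap combination of adjacent samples), so treat them as an assumption rather than the article's exact formulas.

```python
import numpy as np

# Illustrative parameters (assumptions, not the article's values):
# full-rate period Ts, calibration tone at fs/16, 2% gain error,
# timing error of 1% of Ts.
Ts = 1.0
wc = 2 * np.pi / 16          # wc*Ts = pi/8
g, dt = 0.02, 0.01 * Ts

# Sub-ADC outputs for a cosine calibration tone: channel 0 samples at
# 2n*Ts, channel 1 at (2n-1)*Ts + dt with gain (1+g).
n = np.arange(1, 4096)
x0 = np.cos(2 * wc * Ts * n)
x1 = (1 + g) * np.cos(wc * ((2 * n - 1) * Ts + dt))

# Two-tap coefficients: a delayed, scaled sinusoid is an exact linear
# combination of x0[n] and x0[n-1] (one consistent closed form).
h0 = (1 + g) * np.sin(wc * (Ts + dt)) / np.sin(2 * wc * Ts)
h1 = (1 + g) * np.sin(wc * (Ts - dt)) / np.sin(2 * wc * Ts)

# Equation 2: x1[n] = h0*x0[n] + h1*x0[n-1] holds to machine precision.
pred = h0 * x0[1:] + h1 * x0[:-1]
assert np.max(np.abs(x1[1:] - pred)) < 1e-12

# First-order inversion: for small g and dt,
#   h0 + h1 ~ (1 + g)/cos(wc*Ts)  and  h0 - h1 ~ wc*dt/sin(wc*Ts)
g_est = (h0 + h1) * np.cos(wc * Ts) - 1
dt_est = (h0 - h1) * np.sin(wc * Ts) / wc
print(g_est, dt_est)         # close to 0.02 and 0.01
```

The residual estimation error is second order in the mismatches (terms in g·Δt and Δt²), which is why keeping the errors small by design makes the linearized inversion accurate.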
The estimation algorithm consists of three steps:

  1. The calibration signal is extracted and cancelled from the output of the sub-ADCs using an LMS algorithm, yielding the discrete-time signals x0 and x1. This algorithm requires digital cosine and sine reference signals at the calibration frequency. The cosine signal is generated with a small look-up table (LUT) of size 4K (K < 64 in practice); the sine signal is derived from the cosine by a simple delay of K samples (one quarter period).
  2. The coefficients h0 and h1 are estimated adaptively from the extracted x0 and x1 signals using an LMS algorithm as shown in Figure 2.
  3. The gain and timing errors are then computed from the linearized set of equations as derived from Equation 3.

Figure 2: Background estimation of gain and timing errors through a 2-tap digital adaptive filter.
