Imec and Holst Centre have presented a solution that implements a low-power, fully automated background calibration entirely on chip. The calibration relies on a redundancy-facilitated error-detection and correction scheme.
Introducing redundancy in the analog-to-digital conversion process is another popular way to deal with errors. It differs from calibration in that the errors are neither measured nor corrected, but simply tolerated and rejected by the conversion algorithm. Combining calibration and redundancy is often required to make certain calibration techniques work. In our design, the redundancy not only facilitates the proposed background calibration; it also relaxes the DAC settling requirements and saves power by enabling a two-mode comparator.
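The error-tolerating effect of redundancy can be illustrated with a small simulation. The step weights and the comparator-error model below are assumptions chosen for the demo, not values from the actual design: a plain radix-2 weight set is compared against a sub-radix-2 (redundant) set, with the comparator seeing a small constant error during the early cycles.

```python
# Illustrative sketch of redundancy in a SAR conversion. The step
# weights and the comparator-error model are assumptions for this demo,
# not values from the actual design.

def sar_convert(vin, weights, err=0.0, err_cycles=0):
    """Greedy SAR conversion with arbitrary step weights.
    During the first `err_cycles` cycles the comparator sees the input
    shifted by `err`, modelling a coarse (low-power) decision."""
    dac = 0.0
    for i, w in enumerate(weights):
        seen = vin + (err if i < err_cycles else 0.0)
        dac += w if seen >= dac else -w   # feedback DAC update
    return dac                            # digital estimate of vin

BINARY    = [64, 32, 16, 8, 4, 2, 1]         # radix 2: no redundancy
REDUNDANT = [64, 36, 22, 14, 8, 5, 3, 2, 1]  # sub-radix-2: extra cycles

def worst_error(weights, err, err_cycles):
    """Worst-case conversion error over a fine input sweep."""
    return max(abs(sar_convert(-60 + 0.37 * k, weights, err, err_cycles)
                   - (-60 + 0.37 * k)) for k in range(324))

# With an ideal comparator, both weight sets settle to within 1 LSB.
print(worst_error(BINARY, 0.0, 0), worst_error(REDUNDANT, 0.0, 0))
# With a small comparator error in the first 4 cycles, only the
# redundant weights stay accurate: the later, smaller-than-binary
# steps can still absorb a wrong early decision.
print(worst_error(BINARY, 3.0, 4), worst_error(REDUNDANT, 3.0, 4))
```

The key property of the redundant set is that each weight is no larger than one plus the sum of all the weights after it, so a bounded early decision error leaves the correct result still reachable.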
The proposed ADC uses a total of 15 cycles to perform a 13-bit conversion. The two-mode comparator first works in low-power mode (mode 1) and switches to high-precision mode (mode 2) for the last 5 cycles, resulting in a two-fold energy reduction. However, two error sources are still present. First, the DAC matching is limited to less than 10 bits, caused by the small unit elements (0.3 fF) used in the DAC capacitor array to save area. Second, a dynamic offset occurs when the comparator switches from mode 1 to mode 2.
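The arithmetic behind the two-fold saving can be sketched with a back-of-the-envelope model. The per-mode energies below are illustrative assumptions (a mode-1 decision costing a quarter of a mode-2 decision), picked so the numbers come out at a factor of two; the design's actual per-cycle energies are not given here.

```python
# Back-of-the-envelope comparator energy model (illustrative numbers,
# not figures from the published design).
E_HP = 1.0          # energy per high-precision (mode 2) decision, normalized
E_LP = 0.25 * E_HP  # assumed cost of a low-power (mode 1) decision

cycles_total = 15
cycles_hp = 5                        # last 5 cycles run in mode 2
cycles_lp = cycles_total - cycles_hp # first 10 cycles run in mode 1

e_two_mode = cycles_lp * E_LP + cycles_hp * E_HP  # 10*0.25 + 5*1 = 7.5
e_all_hp   = cycles_total * E_HP                  # 15 * 1 = 15.0
print(e_all_hp / e_two_mode)                      # -> 2.0
```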
The automated background calibration successfully suppresses both errors, with negligible overhead in area and power. The calibration logic is enabled only for a limited set of SAR codes that are suitable for DAC or comparator calibration. As a result, the large initial DNL (differential non-linearity) errors caused by the dynamic comparator offset are effectively reduced, and the INL (integral non-linearity) errors due to DAC mismatch are suppressed.
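Why correcting the weights digitally suppresses the mismatch-induced INL can be shown with a deterministic sketch. This is not the chip's measurement loop; it only illustrates the end effect, using assumed mismatch values: the conversion always runs on the true (mismatched) analog weights, and the digital output is reconstructed either with the nominal weights (uncalibrated) or with the true weights (what an ideal calibration would converge to).

```python
# Illustration of DAC-mismatch INL and its digital correction
# (assumed weights and mismatch, not the actual device values).

NOMINAL = [64, 36, 22, 14, 8, 5, 3, 2, 1]
# Assumed capacitor mismatch on the largest elements:
ACTUAL  = [65.3, 35.2, 22.6, 14, 8, 5, 3, 2, 1]

def convert(vin):
    """Greedy redundant SAR using the mismatched analog DAC;
    returns the per-cycle comparator decisions as +1/-1."""
    dac, signs = 0.0, []
    for w in ACTUAL:
        s = 1 if vin >= dac else -1
        signs.append(s)
        dac += s * w
    return signs

def reconstruct(signs, weights):
    """Digital output: weighted sum of the raw decisions."""
    return sum(s * w for s, w in zip(signs, weights))

worst_raw = worst_cal = 0.0
for k in range(1200):
    vin = -60 + 0.1 * k
    signs = convert(vin)
    worst_raw = max(worst_raw, abs(reconstruct(signs, NOMINAL) - vin))
    worst_cal = max(worst_cal, abs(reconstruct(signs, ACTUAL) - vin))

# Reconstructing with the nominal weights leaves a multi-LSB INL error;
# reconstructing with the corrected weights stays at the quantization floor.
print(worst_raw, worst_cal)
```

Note that the redundancy is what makes this purely digital fix possible: the raw decisions remain valid despite the mismatch, so only the reconstruction weights need updating.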
Figure ADC architecture: The ADC architecture, including the comparator, the SAR logic, the feedback DAC and the calibration logic.
Figure ADC chip: Photo of the ADC chip.