There are two primary considerations when validating power integrity in embedded systems: the operation of the various power converters and the integrity of the power distribution system. The relative importance of those two considerations depends, in part, on the power system architecture used. In systems with a centralized multiple-output ac/dc power supply, power supply qualification is relatively more important. The previous FAQ in this series focused on “How to verify ac/dc power supply performance.” This FAQ focuses on factors affecting the power distribution system’s integrity, especially in systems using distributed power architectures (DPAs).
A relatively high voltage distribution bus is used in a DPA to deliver power to the various subsystems. The power distribution bus voltage can range from 12Vdc up to 720Vdc and higher, depending on the system’s size and functionality. DPAs have been used in computer servers, telecommunications, and other embedded equipment for many years. More recently, DPAs have found applications in transportation and robotics systems.
The figure above shows a mid-sized DPA that could be used in automotive or robotics applications. The 48Vdc main power bus can directly power motors for various functions, such as pumps and fans in automobiles or motive power in robots. The 48Vdc distribution voltage can be down-converted to voltages between 24Vdc and 44Vdc to drive LED lighting, cameras, LIDAR, and so on. It can also be converted down to 12Vdc to power an intermediate distribution bus for various digital and analog functions, with additional non-isolated point-of-load (PoL) dc/dc converters powering the functions in the “computer” section.
As the DPA voltages drop lower and lower, the challenges in ensuring power integrity (PI) grow in importance and difficulty. Particularly in the above diagram’s computer section, digital ICs such as ASICs and FPGAs can require supply voltages of 1Vdc or lower at currents of up to 100 amps. At those levels, PI is a serious challenge.
Power distribution network impedance
In embedded systems, a PI analysis is an important aspect of system design, and analyzing the power distribution network (PDN) impedance is a critical part of that analysis. A PI analysis determines if the needed voltage and current are accurately delivered to the load. PI involves several related domains, starting on the chip, moving to the chip package and the printed circuit board (PCB), and ending at the system level. Four PDN performance factors need to be optimized to ensure PI on the PCB:
- The voltage ripple at the power pins of the various chips must be within specification (usually under ±50 mV of variation around 1V)
- Ground bounce must be controlled. Ground bounce is sometimes referred to as simultaneous switching noise (SSN) or simultaneous switching output (SSO) noise.
- Maintaining electromagnetic compatibility (EMC) and controlling electromagnetic interference (EMI). The power distribution network is usually one of the largest sets of conductors on a PCB and can significantly impact EMC and EMI.
- Controlling and minimizing the dc voltage drops between the various PoLs and the loads. The power needs of ASICs, FPGAs, MCUs, and other large digital ICs can swing rapidly from 1 to 100+ amps at sub-1Vdc VDD levels with voltage margins of tens of millivolts.
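As a rough illustration of that last point, the dc drop between a PoL and its load is just Ohm’s law applied to the PDN path resistance. The path resistance, load current, and budget below are hypothetical values, not figures from the article:

```python
def ir_drop_mv(path_resistance_ohm: float, load_current_a: float) -> float:
    """DC drop across the PDN path in millivolts (V = I * R)."""
    return load_current_a * path_resistance_ohm * 1000.0

# Hypothetical numbers: a 0.5 milliohm plane/via path carrying a 100 A
# load, checked against a 30 mV budget on a sub-1V rail.
budget_mv = 30.0
drop = ir_drop_mv(0.0005, 100.0)
print(f"IR drop: {drop:.1f} mV, budget: {budget_mv:.0f} mV")
print("within budget" if drop <= budget_mv else "over budget")
```

Even half a milliohm of path resistance consumes more than the entire margin at 100 A, which is why copper weight, plane shapes, and via counts matter at these current levels.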
Controlling the various impedances in the PDN is critical to proper system operation. Dynamic current changes are an especially important consideration. The dynamic current change is often much greater than the PoL (referred to as a VRM, or voltage regulator module, in the figure below) can deliver quickly, pulling the rail voltage down or causing it to spike, which creates voltage ripple. Decoupling capacitors must be properly sized to handle the changing current demands. Designers should carefully review the maximum output capacitances specified for the various PoLs; too much capacitance can degrade the performance of power converters.
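The capacitor-sizing point can be sketched with the usual first-order estimate C ≥ ΔI·Δt/ΔV, where Δt is the time the decoupling must carry the load before the converter responds. All values below are illustrative assumptions:

```python
def min_bulk_capacitance_uF(delta_i_a: float,
                            response_time_s: float,
                            ripple_v: float) -> float:
    """First-order bulk capacitance needed to hold up the rail during
    a load step: C >= dI * dt / dV, returned in microfarads."""
    return delta_i_a * response_time_s / ripple_v * 1e6

# Illustrative: a 20 A load step, a VRM that takes 5 us to respond,
# and a 50 mV ripple allowance.
c = min_bulk_capacitance_uF(20.0, 5e-6, 0.050)
print(f"minimum bulk capacitance: {c:.0f} uF")  # prints 2000
```

This ignores capacitor ESR and ESL, so a real design needs margin beyond this first-order number.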
As illustrated in the diagram above, the frequency content of the voltage ripple changes at different points in the PDN. The highest frequencies are handled by decoupling on the die itself. Intermediate frequencies, up to roughly 100MHz, are often handled in the package. Frequencies below that are typically left to the PCB designer, who uses a combination of plane capacitance and decoupling capacitors.
Because of the range of frequencies found at various points in the PDN, different types and sizes of capacitors are used. Controlling (minimizing) the parasitic inductance of the decoupling capacitors is an important design consideration. The parasitic inductance can create impedance spikes at particular frequencies. Since the physical size of a capacitor affects its parasitic inductance, it is generally recommended that designers use smaller capacitors whenever possible. The smallest value capacitor should be placed directly adjacent to the leads of the IC.
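To see why parasitic inductance creates impedance spikes, a real capacitor can be modeled as a series R-L-C network: below its self-resonant frequency it looks capacitive, and above it the equivalent series inductance (ESL) dominates. A minimal sketch with assumed MLCC values:

```python
import math

def cap_impedance_ohm(f_hz: float, c_f: float,
                      esl_h: float, esr_ohm: float) -> float:
    """Impedance magnitude of a real capacitor modeled as a series
    R-L-C: Z = ESR + j(2*pi*f*ESL - 1/(2*pi*f*C))."""
    w = 2 * math.pi * f_hz
    return math.hypot(esr_ohm, w * esl_h - 1.0 / (w * c_f))

def self_resonant_freq_hz(c_f: float, esl_h: float) -> float:
    """Frequency where the capacitive and inductive reactances cancel,
    leaving only the ESR."""
    return 1.0 / (2 * math.pi * math.sqrt(esl_h * c_f))

# Assumed values: a 100 nF MLCC with 0.5 nH parasitic inductance
# and 10 milliohm ESR.
f_sr = self_resonant_freq_hz(100e-9, 0.5e-9)
print(f"self-resonant frequency: {f_sr / 1e6:.1f} MHz")
```

At resonance the impedance collapses to the ESR; a smaller package (lower ESL) pushes the resonance higher, which is the rationale for placing the smallest capacitors closest to the IC.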
There is an impedance target in each design at which the ripple created by the dynamic currents is within the required range. The target impedance is related to the various frequencies and the anticipated levels of the dynamic currents. When designing high-performance PDNs, simulation is often employed to ensure that the target impedance is reached. Depending on the circumstances, the simulation can be done with SPICE, EDA software tools, or even chip and/or capacitor vendor-supplied simulation tools.
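A common first-order formula divides the allowed ripple voltage by the worst-case dynamic current step: Z_target = (V_rail × ripple tolerance) / ΔI. The rail voltage, tolerance, and current step below are assumed values for illustration:

```python
def target_impedance_mohm(rail_v: float,
                          ripple_fraction: float,
                          transient_current_a: float) -> float:
    """Target PDN impedance: allowed ripple voltage divided by the
    worst-case dynamic current step, returned in milliohms."""
    return rail_v * ripple_fraction / transient_current_a * 1000.0

# Illustrative: a 1.0 V rail with a 5% ripple budget and a 50 A
# dynamic current step.
z = target_impedance_mohm(1.0, 0.05, 50.0)
print(f"target impedance: {z:.1f} mohm")  # prints 1.0
```

Sub-milliohm targets like this are why high-current rails need many capacitors in parallel and tightly coupled power/ground planes.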
Noise and jitter characterization
Today’s high-speed digital systems require low jitter oscillators and clocks. An important consideration is how those oscillators and clocks perform in the system’s non-ideal PDN. Jitter performance of 100 fs or better, specified for a 12kHz to 20MHz integration range, is often required. Unlike target impedances, which can be simulated during the design process, noise and jitter need to be measured. While an ideal PDN will deliver the best noise and jitter performance, measurement of actual system performance is necessary.
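For context on how a phase-noise measurement relates to an RMS jitter number like 100 fs, the phase noise can be integrated over the specified band and converted to time jitter. The sketch below assumes an idealized flat phase-noise floor, which real oscillators do not have:

```python
import math

def rms_jitter_fs(phase_noise_dbc_hz: float, f_low_hz: float,
                  f_high_hz: float, carrier_hz: float) -> float:
    """RMS jitter from a flat phase-noise floor L (dBc/Hz) integrated
    over [f_low, f_high]:
    jitter = sqrt(2 * 10^(L/10) * BW) / (2*pi*f_carrier),
    returned in femtoseconds."""
    integrated_rad2 = 10 ** (phase_noise_dbc_hz / 10.0) * (f_high_hz - f_low_hz)
    return math.sqrt(2.0 * integrated_rad2) / (2 * math.pi * carrier_hz) * 1e15

# Illustrative: a -160 dBc/Hz floor integrated from 12 kHz to 20 MHz
# on a 156.25 MHz clock comes out near the 100 fs class.
j = rms_jitter_fs(-160.0, 12e3, 20e6, 156.25e6)
print(f"RMS jitter: {j:.0f} fs")
```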
Power supply jitter rejection is typically measured over a frequency range from 50Hz to several MHz, covering both power line frequencies and dc/dc converter switching frequencies. Testing is done by superimposing a sinusoidal disturbance at a given frequency on the dc rail voltage. The level (in dBm) of the resulting phase-noise spur relative to the level (also in dBm) of the power rail disturbance is the power supply noise rejection (PSNR) at the disturbance frequency. Testing is performed at multiple frequencies to produce the PSNR curve over the required frequency range.
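As described above, the PSNR at a single frequency reduces to the difference between the injected disturbance level and the induced spur level (exact conventions vary between instruments). The levels below are illustrative:

```python
def psnr_db(disturbance_dbm: float, spur_dbm: float) -> float:
    """Power supply noise rejection at one disturbance frequency:
    the injected rail disturbance level minus the induced phase-noise
    spur level, both in dBm, giving a rejection in dB."""
    return disturbance_dbm - spur_dbm

# Illustrative: a -20 dBm injected tone on the rail producing a
# -95 dBm spur in the clock's phase-noise plot.
print(f"PSNR: {psnr_db(-20.0, -95.0):.0f} dB")  # prints 75
```

Sweeping the injected tone across the band and repeating this calculation yields the PSNR curve described above.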
Power rail sequencing
Most of today’s high-performance digital ICs use multiple tightly-regulated voltage rails. The various voltage rails are also required to power-up and power-down at certain rates and in certain sequences. Violation of the powering requirements can cause the IC to latch-up and/or degrade the device’s long-term reliability. Three of the most common implementations of power sequencing are Sequential, Ratiometric, and Simultaneous:
- Sequential sequencing powers each rail up and down in sequence. Often the various rails are allowed to reach their operating level before the next rail starts.
- Ratiometric sequencing energizes the various rails simultaneously, with each rail reaching regulation at the same time. To hold the differential voltage ratios constant, higher voltage rails have higher slew rates.
- Simultaneous sequencing energizes the rails simultaneously, and the rails have the same slew rate, minimizing the differential voltages between the various rails.
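The slew-rate relationship between ratiometric and simultaneous sequencing can be sketched as simple ramp profiles (sequential sequencing is the same ramp, offset in time per rail). The rail voltages and ramp time below are illustrative:

```python
def rail_voltage(scheme: str, v_final: float, v_max_rail: float,
                 t: float, t_ramp: float) -> float:
    """Voltage of one rail at time t during power-up.
    - ratiometric: every rail ramps over the same t_ramp, so higher
      rails slew faster and all reach regulation together.
    - simultaneous: every rail shares one slew rate (set by the
      highest rail), so lower rails reach regulation earlier while
      the rail-to-rail differential voltage stays minimized."""
    if scheme == "ratiometric":
        slew_v_per_s = v_final / t_ramp
    elif scheme == "simultaneous":
        slew_v_per_s = v_max_rail / t_ramp
    else:
        raise ValueError(f"unknown scheme: {scheme}")
    return min(v_final, slew_v_per_s * t)

# Illustrative rails: 3.3 V and 1.0 V, 1 ms ramp on the highest rail,
# sampled halfway through the ramp.
t = 0.5e-3
for v_final in (3.3, 1.0):
    print(v_final,
          rail_voltage("ratiometric", v_final, 3.3, t, 1e-3),
          rail_voltage("simultaneous", v_final, 3.3, t, 1e-3))
```

Halfway through the ramp, the ratiometric 1.0 V rail is at half its final value while the simultaneous 1.0 V rail is already in regulation, which illustrates the trade-off between constant voltage ratios and minimized differentials.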
In addition to voltage rail sequencing, some devices also require a monotonic rise in the voltages. For a monotonic rise, the voltage rises continuously (though not necessarily at a constant rate) and does not decrease at any point. A non-monotonic voltage increase can occur when the load pulls a current spike at certain points during startup. To prevent a voltage dip, the PoL must respond quickly to any current spikes demanded by the load.
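A monotonicity requirement is straightforward to check against a sampled startup waveform. The sample data below is illustrative, showing the kind of dip a startup current spike can cause:

```python
def is_monotonic_rise(samples: list, tolerance_v: float = 0.0) -> bool:
    """True if a sampled startup waveform never decreases by more than
    tolerance_v between consecutive samples (tolerance allows for
    measurement noise)."""
    return all(b >= a - tolerance_v for a, b in zip(samples, samples[1:]))

good = [0.0, 0.2, 0.45, 0.7, 0.9, 1.0]        # clean ramp to 1.0 V
bad = [0.0, 0.2, 0.45, 0.38, 0.7, 0.9, 1.0]   # dip from a current spike
print(is_monotonic_rise(good), is_monotonic_rise(bad))  # True False
```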
Average power tracking and envelope tracking
Wireless communications systems such as 5G have signals with high peak-to-average power ratios (PAPR). High PAPR values can result in significant reductions in power amplifier (PA) efficiency. Average power tracking (APT) and envelope tracking (ET) have been developed to increase PA power efficiency in a variety of radio applications ranging from base stations to handsets.
APT provides a simpler and coarser adjustment. APT changes the dc voltage of the PA based on relatively large time slices. The PA’s output is a function of the average power but is optimized to limit (or eliminate) clipping RF signal peaks.
ET is faster, more accurate, and more complex than APT. ET adjusts the PA dc voltage in real-time, based on the input signal to the PA. Implementation of ET requires a dedicated power control module. APT and ET both use a combination of hardware and software controls to modulate the PA’s voltage supply to reduce overall power consumption, thereby reducing heat generation and increasing operational efficiency.
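The difference between the two schemes can be caricatured in a few lines: ET computes a supply target per envelope sample, while APT picks one level per time slice, sized for that slice’s peak so that peaks are not clipped. The gain, headroom, and clamp values below are purely illustrative assumptions, not parameters of any real tracking modulator:

```python
def et_supply_v(envelope_v: float, gain: float = 1.2,
                headroom_v: float = 0.3, v_min: float = 0.5,
                v_max: float = 5.0) -> float:
    """Envelope-tracking supply target: follow the RF envelope with
    some gain and headroom, clamped to the modulator's output range."""
    return min(v_max, max(v_min, envelope_v * gain + headroom_v))

def apt_supply_v(envelopes: list, **kw) -> float:
    """Average-power-tracking counterpart: one supply level per time
    slice, sized for the peak envelope in that slice."""
    return et_supply_v(max(envelopes), **kw)

slice_env = [0.4, 1.1, 2.5, 0.8]            # envelope samples in one slice
print([et_supply_v(e) for e in slice_env])  # per-sample ET voltages
print(apt_supply_v(slice_env))              # single APT level for the slice
```

The ET supply dips toward the minimum for small envelope values, which is where its efficiency advantage over the single APT level comes from.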
Power system design is an important and complex part of system design. This FAQ series began by looking at ac power measurement basics and then considered how to verify ac/dc power supply performance. This concluding FAQ dug into some of the nuances to consider when validating power system integrity in embedded systems.