Power Electronic Tips

Power Electronic News, Editorial, Video and Resources


Vertical power delivery reduces losses in AI processor designs

February 4, 2026 By Bill Pelletier, TDK

GPUs, ASICs, and other AI processors require enormous amounts of power delivered at low voltages and high currents. Moving point-of-load DC-DC converters directly underneath those processors wastes less energy and delivers cleaner power than placing power components alongside the loads.

The semiconductor industry is approaching a transformative inflection point driven by the explosive growth of artificial intelligence (AI) and machine learning (ML) workloads. These applications are pushing silicon to unprecedented levels of performance and power consumption. The new breed of AI processors now demands core voltages supplied at current levels of 2,000 A to 5,000 A or more. With currents having already surpassed the 1,000 A barrier, the traditional approach to powering processors, namely multi-phase regulators mounted laterally, can no longer keep up.

To meet the high-current demand, the power electronics ecosystem is rapidly evolving toward a new architecture called Vertical Power Delivery (VPD). By rethinking the power supply’s physical placement and electrical path, VPD introduces a scalable, efficient, and performance-enhancing solution to one of AI hardware’s most pressing challenges.

Lateral power delivery limits

Historically, processors have been powered by multiphase voltage regulators positioned laterally around the package. While proven and mature, this design is constrained by basic physical limitations. As the processor current increases, so does the power loss due to resistive and inductive effects within the power-delivery network (PDN). Even with advanced board designs and power optimization techniques, lateral delivery introduces long, tortuous paths between the processor and its local power supplies. These paths introduce additional parasitics that reduce efficiency and degrade transient response, an especially critical performance factor for AI workloads with rapid and unpredictable current demands.
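The scale of these resistive losses is easy to estimate from the basic I²R relation. The sketch below uses hypothetical resistance values chosen only for illustration, not figures from this article:

```python
# Illustrative estimate of conduction loss in a power-delivery network (PDN).
# The resistance values below are hypothetical examples, not measured data.

def pdn_conduction_loss(current_a: float, resistance_ohm: float) -> float:
    """Return the I^2*R conduction loss in watts for a given load current."""
    return current_a ** 2 * resistance_ohm

# A lateral path with, say, 0.2 milliohms of trace/plane resistance:
lateral_loss = pdn_conduction_loss(1000.0, 0.0002)   # ~200 W at 1,000 A
# Halving the path resistance (e.g., via a shorter path) halves the loss:
shorter_loss = pdn_conduction_loss(1000.0, 0.0001)   # ~100 W at 1,000 A

print(f"Longer path:  {lateral_loss:.0f} W")
print(f"Shorter path: {shorter_loss:.0f} W")
```

Because loss grows with the square of current, a rail that was tolerable at a few hundred amps becomes a serious heat source at kiloamp levels, which is why path length now dominates the design conversation.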

Today’s lateral architectures are often unable to meet both the electrical and thermal performance needed for next-generation processors, particularly as board space becomes more constrained and system complexity increases.

Enter vertical power delivery

VPD reimagines the PDN by flipping the power supply architecture literally on its head. Instead of routing power from regulators located laterally on the top of the board, VPD places the power supply on the back side, directly beneath the processor. Figure 1 compares the two architectures. VPD minimizes the physical distance between the power source and the load, significantly reducing the series impedance and parasitic inductance.

Figure 1. A comparison of lateral and vertical power delivery shows power components moving from the top of a board to underneath the processor load. (Image: TDK)

Vertical power delivery offers several advantages over lateral power delivery:

  • Lower resistance: A shorter, more direct power path naturally means lower resistance. This reduces I²R losses, helping deliver more power to the processor without excess heat.
  • Lower inductance: With fewer path discontinuities and shorter power loops, VPD delivers a much faster transient response. This is essential for modern processors that experience sudden and extreme shifts in current draw.
  • Improved signal integrity: By relocating the high-frequency switching components to the backside of the board and integrating shielding layers into the PCB, the sensitive signal layers are isolated from power-supply noise. By preserving more contiguous copper on the top layers, VPD supports better high-speed signaling performance and electromagnetic compatibility (EMC).
  • Space optimization: Freeing up real estate on the top side of the board lets designers pack more memory, optics, and system-level functionality around the processor. This increase in available component space enables higher memory bandwidth, additional processing resources, and expanded system features without increasing board area.
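The inductance advantage listed above can be quantified with the basic relation ΔV = L·di/dt. The loop inductances and load step below are assumed values for illustration only, not measurements from this article:

```python
# Illustrative transient-droop estimate: dV = L * di/dt.
# Loop inductances and the load step are hypothetical example values.

def inductive_droop(loop_inductance_h: float, di_a: float, dt_s: float) -> float:
    """Voltage droop (V) from a current step di over time dt through inductance L."""
    return loop_inductance_h * di_a / dt_s

di, dt = 500.0, 1e-6                          # assumed 500 A step in 1 us
lateral = inductive_droop(100e-12, di, dt)    # assumed 100 pH lateral loop -> ~50 mV
vertical = inductive_droop(20e-12, di, dt)    # assumed  20 pH vertical loop -> ~10 mV

print(f"Lateral droop:  {lateral * 1000:.0f} mV")
print(f"Vertical droop: {vertical * 1000:.0f} mV")
```

Even tens of millivolts matter when the core rail itself is below one volt, so shrinking the power loop pays off directly in transient headroom.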

Meeting AI’s demands

AI processors often include embedded telemetry systems capable of anticipating cycle-by-cycle load changes. This allows for predictive power regulation, where the power supply can ramp up in advance of a load step. The voltage regulator must, however, operate at exceptional speed to take advantage of this foresight.

That’s where the Integrated Voltage Regulator (IVR) plays a role. Historically, leading CPU companies such as Intel and AMD have embedded IVR circuitry directly onto the processor die. These regulators run at tens of megahertz, an order of magnitude faster than traditional 1 MHz switching regulators. The result is a rapid, localized response to load transients, critical for sustaining performance during AI inferencing and training.

Unfortunately, silicon real estate is expensive. IVRs consume significant die area, introduce thermal hotspots, and suffer from limited efficiency, often operating with only a 2:1 voltage conversion ratio. As current demands soar, IVRs embedded in silicon become less attractive.

Startups and established power-supply manufacturers are now exploring ways to deliver IVR-like performance from discrete modules. When implemented in a VPD configuration, these ultra-fast switching regulators aim to match the performance of embedded IVRs while avoiding their limitations. But running at tens of megahertz outside the processor introduces new challenges in EMI, thermal dissipation, and efficiency.
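A rough way to see why switching frequency matters for transient response: a regulator’s usable control bandwidth is commonly limited to a fraction of its switching frequency. The one-tenth ratio used below is a common rule of thumb assumed here, not a figure from this article:

```python
# Rule-of-thumb comparison of regulator response speed vs. switching frequency.
# The 1/10 bandwidth fraction is an assumed heuristic, not a hard specification.

def control_bandwidth_hz(f_sw_hz: float, fraction: float = 0.1) -> float:
    """Approximate control bandwidth as a fraction of the switching frequency."""
    return f_sw_hz * fraction

slow = control_bandwidth_hz(1e6)    # traditional 1 MHz regulator -> ~100 kHz
fast = control_bandwidth_hz(50e6)   # tens-of-MHz IVR-class regulator -> ~5 MHz

# Response-time scale is roughly the inverse of the bandwidth:
print(f"1 MHz regulator:  ~{1 / slow * 1e6:.0f} us response scale")
print(f"50 MHz regulator: ~{1 / fast * 1e9:.0f} ns response scale")
```

Under this heuristic, a 50x increase in switching frequency buys roughly a 50x faster response to a load step, which is the foresight-exploiting speed the predictive telemetry described above demands.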

VPD’s engineering challenges

VPD is promising, but it poses new challenges: placing high-performance power supplies beneath the processor imposes substantial mechanical and thermal requirements.

  • Heat concerns: Current densities exceeding 3–4 A/mm² are becoming the norm for high-performance computing. Delivering that much current in a confined area, especially beneath a board’s hottest component, demands highly integrated, multi-layered module designs with advanced magnetics, planar inductor technologies, and optimized switching topologies.
  • Height constraints: Because VPD modules sit between the PCB and the system enclosure, their z-height is severely limited. Designers often work with 2 mm or less height budgets (Figure 2), constraining the available volume for inductors and capacitors. This, in turn, puts pressure on efficiency and thermal management.

    Figure 2. A stackup shows the vertical spacings needed for cooling. (Image: TDK)
  • Thermal isolation: In traditional designs, both the processor and voltage regulators share the top-side cooling infrastructure, but VPD places the regulators on the bottom, effectively isolating them thermally from the system heatsinks and airflow. Managing this challenge while maintaining high current throughput requires creative packaging, heat-spreading materials, and potentially even localized active cooling.
  • Footprint overlap: The power module and processor now share the same x-y footprint, concentrating two heat sources in the same area. This exacerbates the cooling challenge and increases the risk of performance degradation or failure. Figure 3 shows the spacing between power modules on the underside of a board.
Figure 3. Two 25 A power modules in proximity to each other form a 50 A design. (Image: TDK)
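The footprint math behind these constraints is straightforward. The sketch below estimates the minimum x-y area for a rail at the current densities cited above, and the number of paralleled modules needed, following the pattern in Figure 3 where two 25 A modules form a 50 A design; the 2,000 A rail is an assumed example:

```python
# Illustrative footprint and module-count estimates for a high-current rail.
# The 2,000 A rail target is an assumed example value.
import math

def footprint_area_mm2(current_a: float, density_a_per_mm2: float) -> float:
    """Minimum x-y area (mm^2) needed at a given current density."""
    return current_a / density_a_per_mm2

def modules_needed(current_a: float, module_rating_a: float) -> int:
    """Number of paralleled modules required, rounded up."""
    return math.ceil(current_a / module_rating_a)

print(footprint_area_mm2(2000.0, 4.0))   # 500 mm^2 at 4 A/mm^2
print(modules_needed(2000.0, 25.0))      # 80 modules at 25 A each
```

Squeezing dozens of paralleled stages into a few hundred square millimeters, under a 2 mm height budget and beneath the hottest component on the board, is the core packaging problem VPD suppliers are working on.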

Despite these hurdles, VPD represents a compelling path forward. The industry is now investing in advanced packaging, ultra-thin magnetic materials, and thermal interface innovations to overcome these challenges and bring VPD to scale.

System-level benefits

The advantages of VPD go beyond delivering current. Reducing PDN impedance and improving transient response directly support the tight voltage-tolerance requirements of modern AI cores, which run at core voltages below 1 V. Faster regulation translates to more stable operation, fewer voltage droops, and less derating. This unlocks the processor’s full performance.
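A tight tolerance on a sub-1 V rail translates directly into a target PDN impedance, Z_target = ΔV_allowed / ΔI_step. The core voltage, tolerance band, and load step below are assumed example values, not figures from this article:

```python
# Illustrative target-impedance calculation for a low-voltage, high-current rail.
# Core voltage, tolerance, and load step are hypothetical example values.

def target_pdn_impedance_ohm(v_core: float, tolerance_pct: float,
                             di_step_a: float) -> float:
    """Maximum PDN impedance that keeps droop within the tolerance band."""
    dv_allowed = v_core * tolerance_pct / 100.0
    return dv_allowed / di_step_a

# Assumed: 0.75 V core, +/-3% tolerance, 1,000 A load step.
z = target_pdn_impedance_ohm(0.75, 3.0, 1000.0)
print(f"Target impedance: {z * 1e6:.1f} micro-ohms")  # ~22.5 micro-ohms
```

A target in the tens of micro-ohms leaves almost no budget for long lateral planes and connector transitions, which is the quantitative case for the short vertical path.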

At a system level, the freed-up top-side space enables denser integration of optics, memory, and high-speed I/O, critical for AI inference and training workloads. Designers can route higher bandwidth connections, reduce board layer count, and simplify signal integrity management.

Moreover, by relocating noisy switching regulators to the bottom layer, VPD enables cleaner signal paths and better overall EMC behavior, which is particularly important in dense server environments and co-packaged optical interconnects.

The path forward

The race is on to develop Vertical Power Delivery designs to meet three interrelated and demanding criteria:

  1. First is power density, the ability to deliver extremely high currents within a compact footprint and strict height limitations. This is critical for keeping up with the escalating demands of AI processors without sacrificing board space or structural integrity.
  2. Second is efficiency, essential for managing heat and minimizing energy losses at scale. In high-performance computing environments, even small gains in efficiency can translate to substantial savings in power and cooling costs as well as improvements in sustainability metrics.
  3. Third is thermal performance. With the power supply and processor sharing the same x-y footprint and only one side benefiting from system cooling, effective heat removal from both sides of the board is a decisive factor in whether VPD can scale.

Solving all three challenges simultaneously is no small task. As lateral PDN architectures approach their limits, VPD offers a clear and necessary evolution in power delivery design.

Tomorrow’s architecture

The rapid rise of AI has created new demands for power delivery systems. Current levels once considered extreme are becoming commonplace, and traditional lateral multi-phase regulators struggle to keep up. Vertical Power Delivery offers a transformative alternative, potentially delivering power more efficiently and improving performance at the system level.

As the industry works through the thermal, packaging, and efficiency challenges that VPD presents, a new era of power architecture is emerging, one that is optimized not just for raw wattage but for enabling the full potential of next-generation computing.

In the end, the physics of power delivery will always assert themselves. With VPD, designers find new ways to bend those constraints to their will.



Copyright © 2026 · WTWH Media LLC and its licensors. All rights reserved.