
The Data Center Frontier Show

Endeavor Business Media

Available Episodes

5 of 173
  • Bridging the Data Center Power Gap: Utilities, On-Site Power, and the AI Buildout
    Recorded live at the 2025 Data Center Frontier Trends Summit in Reston, VA, this panel brings together leading voices from the utility, IPP, and data center worlds to tackle one of the defining issues of the AI era: power. Moderated by Buddy Rizer, Executive Director of Economic Development for Loudoun County, the session features:
      • Jeff Barber, VP Global Data Centers, Bloom Energy
      • Bob Kinscherf, VP National Accounts, Constellation
      • Stan Blackwell, Director, Data Center Practice, Dominion Energy
      • Joel Jansen, SVP Regulated Commercial Operations, American Electric Power
      • David McCall, VP of Innovation, QTS Data Centers
    Together they explore how hyperscale and AI workloads are stressing today’s grid, why transmission has become the critical bottleneck, and how on-site and behind-the-meter solutions are evolving from “bridge power” into strategic infrastructure. The panel dives into the role of gas-fired generation and fuel cells, emerging options like SMRs and geothermal, the realities of demand response and curtailment, and what it will take to recruit the next generation of engineers into this rapidly changing ecosystem. If you want a grounded, candid look at how energy providers and data center operators are working together to unlock new capacity for AI campuses, this conversation is a must-listen.
    --------  
    36:04
  • AI for Good: Building for AI Workloads and Using AI for Smarter Data Centers
    Live from the Data Center Frontier Trends Summit 2025 – Reston, VA
    In this episode, we bring you a featured panel from the Data Center Frontier Trends Summit 2025 (Aug. 26-28), sponsored by Schneider Electric. DCF Editor in Chief Matt Vincent moderates a fast-paced, highly practical conversation on what “AI for good” really looks like inside the modern data center—both in how we build for AI workloads and how we use AI to run facilities more intelligently. Expert panelists included:
      • Steve Carlini, VP, Innovation and Data Center Energy Management Business, Schneider Electric
      • Sudhir Kalra, Chief Data Center Operations Officer, Compass Datacenters
      • Andrew Whitmore, VP of Sales, Motivair
    Together they unpack:
      • How AI is driving unprecedented scale—from megawatt data halls to gigawatt AI “factories” and 100–600 kW rack roadmaps
      • What Schneider and NVIDIA are learning from real-world testing of Blackwell and NVL72-class reference designs
      • Why liquid cooling is no longer optional for high-density AI, and how to retrofit thousands of brownfield, air-cooled sites
      • How Compass is using AI, predictive analytics, and condition-based maintenance to cut manual interventions and OPEX
      • The shift from “constructing” to assembling data centers via modular, prefab approaches
      • The role of AI in grid-aware operations, energy storage, and more sustainable build and operations practices
      • Where power architectures, 800V DC, and industry standards will take us over the next five years
    If you want a grounded, operator-level view into how AI is reshaping data center design, cooling, power, and operations—beyond the hype—this DCF Trends Summit session is a must-listen.
    --------  
    57:27
  • Flex on the Future of AI-Scale Data Centers: Integrated, Modular, and Ready to Deploy
    On this episode of The Data Center Frontier Show, Editor in Chief Matt Vincent sits down with Rob Campbell, President of Flex Communications, Enterprise & Cloud, and Chris Butler, President of Flex Power, to unpack Flex’s bold new integrated data center platform as unveiled at the 2025 OCP Global Summit. Flex says the AI era has broken traditional data center models, pushing power, cooling, and compute to the point where they can no longer be engineered separately. Their answer is a globally manufactured, pre-engineered platform that unifies these components into modular pods and skids, designed to cut deployment timelines by up to 30 percent and support gigawatt-scale AI campuses. Rob and Chris explain how Flex is blending JetCool’s chip-level liquid cooling with scalable rack-level CDUs; how higher-voltage DC architectures (400V today, 800V next) will reshape power delivery; and why Flex’s 110-site global manufacturing footprint gives it a unique advantage in speed and resilience. They also explore Flex’s lifecycle intelligence strategy, the company’s circular-economy approach to modular design, and their view of the “data center of 2030”—a landscape defined by converged power and IT, liquid cooling as default, and modular units capable of being deployed in 30–60 days. It’s a deep look at how one of the world’s largest manufacturers plans to redefine AI-scale infrastructure.
    --------  
    38:02
  • Powering the AI Era: Inside Next-Gen Data Centers
    Artificial intelligence is completely changing how data centers are built and operated. Facilities that used to be relatively stable IT environments are now turning into massive power ecosystems. The main reason is simple — AI workloads need far more computing power, and that means far more energy.
    We’re already seeing a sharp rise in total power consumption across the industry, but what’s even more striking is how much power is packed into each rack. Not long ago, most racks were designed for 5 to 15 kilowatts. Today, AI-heavy setups are hitting 50 to 70 kW, and the next generation could reach up to 1 megawatt per rack. That’s a huge jump — and it’s forcing everyone in the industry to rethink power delivery, cooling, and overall site design.
    At those levels, traditional AC power distribution starts to reach its limits. That’s why many experts are already discussing a move toward high-voltage DC systems, possibly around 800 volts. DC systems can reduce conversion losses and handle higher densities more efficiently, which makes them a serious option for the future.
    But with all this growth comes a big question: how do we stay responsible? Data centers are quickly becoming some of the largest power users on the planet. Society is starting to pay attention, and communities near these sites are asking fair questions — where will all this power come from, and how will it affect the grid or the environment? Building ever-bigger data centers isn’t enough; we need to make sure they’re sustainable and accepted by the public.
    The next challenge is feasibility. Supplying hundreds of megawatts to a single facility is no small task. In many regions, grid capacity is already stretched, and new connections take years to approve. Add the unpredictable nature of AI power spikes, and you’ve got a real engineering and planning problem on your hands. The only realistic path forward is to make data centers more flexible — to let them pull energy from different sources, balance loads dynamically, and even generate some of their own power on-site.
    That’s where ComAp’s systems come in. We help data center operators manage this complexity by making it simple to connect and control multiple energy sources — from renewables like solar or wind, to backup generators, to grid-scale connections. Our control systems allow operators to build hybrid setups that can adapt in real time, reduce emissions, and still keep reliability at 100%.
    Just as importantly, ComAp helps with the grid integration side. When a single data center can draw as much power as a small city, it’s no longer just a “consumer” — it becomes part of the grid ecosystem. Our technology helps make that relationship smoother, allowing these large sites to interact intelligently with utilities and maintain overall grid stability.
    And while today’s discussion is mostly around AC power, ComAp is already prepared for the DC future. The same principles and reliability that have powered AC systems for decades will carry over to DC-based data centers. We’ve built our solutions to be flexible enough for that transition — so operators don’t have to wait for the technology to catch up.
    In short, AI is driving a complete rethink of how data centers are powered. The demand and density will keep rising, and the pressure to stay responsible and sustainable will only grow stronger. The operators who succeed will be those who find smart ways to integrate different energy sources, keep efficiency high, and plan for the next generation of infrastructure. That’s the space where ComAp is making a real difference.
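    To make the numbers above concrete, here is a minimal back-of-envelope sketch of the two effects this episode describes: line current growing with rack power, and conversion losses compounding across distribution stages. The rack powers echo the 15 kW / 70 kW / 1 MW progression mentioned above; the 415 V feed, 0.95 power factor, and stage efficiencies are illustrative assumptions, not values from the episode or from ComAp.

```python
import math


def three_phase_current_a(power_w: float, line_voltage_v: float, power_factor: float) -> float:
    """Line current drawn by a balanced three-phase AC load: I = P / (sqrt(3) * V_line * PF)."""
    return power_w / (math.sqrt(3) * line_voltage_v * power_factor)


def chain_efficiency(stage_efficiencies: list) -> float:
    """End-to-end efficiency of a series of power-conversion stages (product of the stages)."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result


# Current scales linearly with rack power at a fixed voltage (illustrative 415 V, PF 0.95).
for rack_kw in (15, 70, 1000):
    amps = three_phase_current_a(rack_kw * 1_000, line_voltage_v=415, power_factor=0.95)
    print(f"{rack_kw:>5} kW rack at 415 V three-phase AC -> ~{amps:,.0f} A per phase")

# Each conversion stage multiplies into the total loss (stage efficiencies are assumptions).
ac_chain = chain_efficiency([0.97, 0.96, 0.96])  # e.g. transformer, double-conversion UPS, rack PSU
dc_chain = chain_efficiency([0.97, 0.98])        # e.g. facility-level rectifier, rack DC-DC stage
print(f"Example AC distribution chain efficiency: {ac_chain:.1%}")
print(f"Example 800 V DC-style chain efficiency:  {dc_chain:.1%}")
```

    The exact figures vary by design; the point is that current grows linearly with rack power at a fixed voltage, while every extra conversion stage multiplies into the overall loss, which is the efficiency argument for higher-voltage DC distribution.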
    --------  
    15:41
  • 1623 Farnam CEO Bill Severn Talks Midwest Interconnection at the Crossroads of AI and the Edge
    In this episode of the DCF Show podcast, Data Center Frontier Editor in Chief Matt Vincent sits down with Bill Severn, CEO of 1623 Farnam, to explore how the Omaha carrier hotel is becoming a critical aggregation hub for AI, cloud, and regional edge growth. A featured speaker on The Distributed Data Frontier panel at the 2025 DCF Trends Summit, Severn frames the edge not as a location but as the convergence of eyeballs, network density, and content—a definition that underpins Farnam’s strategy and rise in the Midwest. Since acquiring the facility in 2018, 1623 Farnam has transformed an underappreciated office tower on the 41st parallel into a thriving interconnection nexus with more than 40 broadband providers, 60+ carriers, and growing hyperscale presence. The AI era is accelerating that momentum: over 5,000 new fiber strands are being added into the building, with another 5,000 strands expanding Meet-Me Room capacity in 2025 alone. Severn remains bullish on interconnection for the next several years as hyperscalers plan deployments out to 2029 and beyond. The conversation also dives into multi-cloud routing needs across the region—where enterprises increasingly rely on Farnam for direct access to Google Central, Microsoft ExpressRoute, and global application-specific cloud regions. Energy efficiency has become a meaningful differentiator as well, with the facility operating below a 1.5 PUE, thanks to renewable chilled water, closed-loop cooling, and extensive free cooling cycles. Severn highlights a growing emphasis on strategic content partnerships that help CDNs and providers justify regional expansion, pointing to past co-investments that rapidly scaled traffic from 100G to more than 600 Gbps. Meanwhile, AI deployments are already arriving at pace, requiring collaborative engineering to fit cabinet weight, elevator limitations, and 40–50 kW rack densities within a non–purpose-built structure. As AI adoption accelerates and interconnection demand surges across the heartland, 1623 Farnam is positioning itself as one of the Midwest’s most important digital crossroads—linking hyperscale backbones, cloud onramps, and emerging AI inference clusters into a cohesive regional edge.
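    For context on the PUE figure cited above, here is a minimal sketch of how Power Usage Effectiveness is computed; the load numbers are made-up illustrations, not 1623 Farnam data.

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by IT equipment power."""
    return total_facility_kw / it_load_kw


# Hypothetical example: 10 MW of IT load in a facility drawing 14.5 MW overall.
print(f"PUE = {pue(14_500, 10_000):.2f}")  # 1.45, below the 1.5 mark mentioned above
```

    A PUE below 1.5 means less than 0.5 kW of cooling and distribution overhead for every 1 kW delivered to IT equipment.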
    --------  
    22:17


About The Data Center Frontier Show

Data Center Frontier’s editors are your guide to how next-generation technologies are changing our world, and the critical role the data center industry plays in creating our extraordinary future.

