It’s Gettin’ Hot in Here: Industry veteran Jim Fister unpacks the history of the data center
“We need NASA, because we need a tower of flame 60 miles high burning at temperatures hotter than the surface of the sun.” - Nova, Sharknado III
Technology: it’s all about the sublime new features. That intricate blend of performance and programming, design and development, brilliant insights leading to giant leaps of innovation. Right? We’re talking about the best and the brightest innovators, those ultra-cool kids who opted to take all those math classes, knowing that the Psychology majors were already hitting quarter beer night while they wrote up their fourth page of derivatives. But it was all worth it, because someday…some wonderful day…the world would recognize that Linear Algebra would make things safe for, um, whatever they’d get paid a lot to algebrate.
As cool as technology is, the reality of any good high-tech solution involves more boring logistics than elegant engineering. For all the results, the hours of sweat are more about jamming six new ideas into a space that holds three and hoping that nothing breaks once the test suite is finished. Innovation is built on the shoulders of a lot of swearing in small rooms with smudged whiteboards. And maybe the archaeology of those smudges reminds some of us how it was done before.
The modern data center began sometime in the early ’00s, when commodity servers began to fill the racks, displacing RISC machines. (Sometime later, we’ll talk about how it was a stock market bust that revolutionized the data center.) Virtualization hadn’t hit volume yet, so most servers were single-task machines. The really innovative IT organizations were just starting to think at the rack level, with stateless transactions.
The move from computer rooms to data centers was only twenty years old or so. The common architecture was a raised floor that delivered power and pulled air downward (it was fun to watch the airflow using cigarette smoke back in the day). We were moving on from a legacy of big servers that filled a tile or two and pulled in the range of 6-8kW of power, and now facing a world where evolving commodity servers in a rack were - gasp - demanding 10kW, maybe 12kW, per tile. This was also when the data center was generally located in prime real estate on the buffer floor between IT and management, so forgive us for the infrastructure costs back then. We thought local, not global.
The conversations back then went down two simple vectors. One: make servers that used less power. It would take a while for us to figure out how to turn off portions of silicon that weren’t doing anything, and performance dictated that we’d probably claim that power back anyway. Two, and usually the prevailing one: brute-force the data center to adapt. In some cases, that meant a rack or two of servers on a tile would be isolated by a few blank tiles to stay within the limited cooling capacity of the room. It was not entirely uncommon to find a box fan or two on top of a rack, shoving air over to another intake.
The next innovations focused on logistics, mechanics, and airflow rather than server engineering. The data center began to evolve as the cloud companies led the charge to dedicated buildings with architecture specifically designed to optimize power delivery and air movement. Hot/cold aisles started to appear, as did plenums and upward airflow. That alone bought a few kW of breathing room, even as the racks evolved to support a mix of compute, network, and storage for virtualized workloads. These days, it’s not uncommon to see 20kW racks in a vanilla data center, standing on those shoulders of innovation.
Of course, the demands never end. The commodity server isn’t as cool (pun intended) as it used to be; we processor folks blame the memory. And the rise of dedicated GPU workloads is driving the power of individual rack units to new heights. It’s not uncommon for a full rack of AI/ML servers to demand up to 60kW of power. As before, the brute-force solutions appear to be back in vogue. Depopulating space to support the boutique solutions is back, minus the box fans. We don’t allow humans in the data center anymore; they’re just wasted heat. Commodity servers will likely start to pull in AI innovation, and a bit of re-architecture will save some power, but the end result is likely the same: it’s time for another innovation in the actual buildings to accommodate the new rack reality.
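The jump from those 8kW tiles to 60kW AI racks is easier to feel with the back-of-the-envelope airflow math facilities folks use. Here's a rough sketch in Python, assuming the standard sensible-heat rule of thumb for air cooling (CFM ≈ 3.16 × watts / ΔT in °F) and a 20°F intake-to-exhaust rise; both are common planning assumptions, not measurements from any particular room:

```python
# Rule-of-thumb airflow needed to air-cool a rack. The constant 3.16
# comes from the sensible-heat equation for air at roughly sea level;
# the 20 deg F rise across the servers is an assumed planning number.

DELTA_T_F = 20.0  # assumed intake-to-exhaust temperature rise (deg F)

def airflow_cfm(rack_watts: float, delta_t_f: float = DELTA_T_F) -> float:
    """Approximate cubic feet per minute of air needed to carry
    rack_watts of heat out of a rack at the given temperature rise."""
    return 3.16 * rack_watts / delta_t_f

# The eras of the story, in kW per rack (or per tile, early on):
for kw in (8, 12, 20, 60):
    print(f"{kw:>2} kW rack -> ~{airflow_cfm(kw * 1000):,.0f} CFM")
```

A box fan on top of a rack moves maybe a couple thousand CFM on a good day, which is why it (barely) worked at 8-12kW and is hopeless at 60kW; at that point you're redesigning the building, or skipping air entirely.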
It’s not hard to imagine another twenty years from now, when we’re all chuckling about those old days when 50kW created a week of meetings in the conference room, and maybe the AI can peer back through the smudges to realize the old solutions might be useful again. So for now, here’s to the next generation of geeks skipping out on the vape party to do homework. We’ll need you then like we always did.