The future of the data center is green: Takeaways from WiredRE data center event

By Gleb Budman | November 20th, 2009

Green Datacenter

What do Google providing search, Coca-Cola operating its systems to track inventory, and Backblaze backing up your data have in common? The computers that handle all of this live in data centers. And those data centers use power – lots of it.

In the U.S. alone there are over 20,000 data centers – each of which houses thousands or tens of thousands of servers. Combined, these data centers make up 3% of all U.S. energy consumption (not just electricity) – more than the entire domestic air fleet.

So when I went to an event on Wednesday called:
THE TRUTH ABOUT THE FUTURE OF THE DATA CENTER: CLOUD, COLOCATION, & DATA CENTER REAL ESTATE
it should be no surprise that the focus was on power, power, power.

And lest you think this is people getting wrapped up in the green movement or just jumping on a marketing trend – let me dissuade you. Data centers in the U.S. spend $23 billion a year on electricity, according to KC Mares of MegaWatt Consulting. In fact, electricity can often cost over 50% of the purchase price of a server over its lifetime. Minor improvements can have massive implications not only for global warming but also for company bottom lines.
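
To see why, here is a rough back-of-the-envelope sketch in Python; the wattage, PUE, electricity rate, service life, and server price are illustrative assumptions, not figures from the event:

    # Rough sketch: lifetime electricity cost of one server vs. its purchase price.
    # All of the numbers below are illustrative assumptions.
    avg_draw_watts = 300      # assumed average power draw of a commodity server
    pue = 2.0                 # assumed facility overhead multiplier (cooling, distribution)
    rate_per_kwh = 0.10       # assumed electricity price, dollars per kWh
    years = 4                 # assumed service life
    purchase_price = 2000     # assumed purchase price, dollars

    hours = years * 365 * 24
    lifetime_kwh = avg_draw_watts / 1000 * pue * hours
    lifetime_cost = lifetime_kwh * rate_per_kwh
    print(f"Lifetime electricity: ${lifetime_cost:,.0f} "
          f"({lifetime_cost / purchase_price:.0%} of purchase price)")

With those assumed numbers, the lifetime electricity bill roughly matches the purchase price of the server itself – which is why even small efficiency improvements add up.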

KC provided a fascinating overview of innovations and experiments that operators of data centers and the companies building out large server deployments are pursuing. Some examples:

  • VFDs – variable frequency drives that adjust blower fan speed to match demand rather than spinning at a constant rate.
  • Natural cooling – using outside air and fans rather than air conditioning to keep data centers cool; it turns out most servers are perfectly happy running at temperatures well above what data centers typically maintain.
  • Shorter cooling regions – moving air directly around the server being cooled rather than through the entire building; shorter distances mean less air friction and less energy spent moving air around.
  • Eliminating UPS systems – getting rid of the backup power systems and assuming servers will go down…and having backup servers or data centers instead.
  • Using 480 volts – higher voltage means lower amperage and thus less heat loss and higher efficiency. More of today’s server systems are capable of handling this voltage.
  • Higher efficiency power supplies – switching to 90% efficient power supplies on servers rather than using 70% or 80% ones; these are more expensive upfront but can still pay off fairly quickly (see the sketch after this list).
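
To make that last item concrete, here is a small Python sketch comparing 70%, 80%, and 90% efficient power supplies at the wall; the 250 W load, $0.10/kWh rate, and PUE of 2.0 are assumptions for illustration, not numbers from KC's talk:

    # Sketch: wall power and annual cost for power supplies of different efficiencies.
    # The load, electricity rate, and PUE below are illustrative assumptions.
    load_watts = 250          # assumed DC load the server components actually draw
    rate_per_kwh = 0.10       # assumed electricity price, dollars per kWh
    pue = 2.0                 # assumed facility overhead multiplier

    for efficiency in (0.70, 0.80, 0.90):
        wall_watts = load_watts / efficiency       # power drawn from the wall
        wasted_watts = wall_watts - load_watts     # lost as heat inside the supply
        annual_cost = wall_watts * pue / 1000 * 8760 * rate_per_kwh
        print(f"{efficiency:.0%} PSU: {wall_watts:.0f} W at the wall, "
              f"{wasted_watts:.0f} W wasted, ~${annual_cost:,.0f}/year")

Under these assumptions, moving from a 70% to a 90% supply saves about $139 per server per year – and that multiplies across thousands of servers in a single facility.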

A number of these items pay for themselves within a couple of months and generate ongoing savings from then on. KC has a variety of information on his site and blog.
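
As a quick sanity check on that payback claim, here is the arithmetic for the power supply upgrade; the $30 price premium is an assumption, and the annual savings figure is carried over from the sketch above:

    # Sketch: payback period for a higher-efficiency power supply.
    # Both numbers below are illustrative assumptions.
    price_premium = 30.0      # assumed extra upfront cost of the 90%-efficient supply
    annual_savings = 139.0    # assumed annual savings per server (70% -> 90%, from above)

    payback_months = price_premium / (annual_savings / 12)
    print(f"Payback in about {payback_months:.1f} months")

With these assumed numbers, the upgrade pays for itself in under three months, which lines up with the "couple of months" payback described at the event.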

Gleb Budman
Co-founder and CEO of Backblaze. Founded three prior companies. He has been a speaker at GigaOm Structure, Ignite: Lean Startup, FailCon, CloudCon; profiled by Inc. and Forbes; a mentor for Teens in Tech; and holds 5 patents on security.


Category:  Backblaze Bits · Events · TechBytes