Tuesday, January 12, 2010

It's All About the Kilowatts

Although my demand for servers increases at 25% per year, I've been able to virtualize my entire infrastructure and keep the real estate footprint small.

At the same time, my demand for high performance computing and storage is increasing at 250% per year. With blade servers and 2 terabyte drives, my rack space is not a rate limiter.

It's all about the kilowatts.

Today, I'm using 220 kilowatts. My 2-year forecast is over half a megawatt.
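
To make that forecast concrete, here's a minimal sketch of the compound-growth arithmetic behind it. Only the 220 kW starting point and the two growth rates come from this post; the 180/40 kW split between general servers and high performance computing is a hypothetical assumption, and "250% per year" is read as demand multiplying 2.5x annually.

```python
# Sketch of the 2-year load forecast. The 180/40 kW split is an
# illustrative assumption; the growth rates are from the post.
server_kw = 180.0  # assumed virtualized general-purpose server load
hpc_kw = 40.0      # assumed high performance computing/storage load

print(f"today: {server_kw + hpc_kw:.0f} kW")
for year in (1, 2):
    server_kw *= 1.25  # server demand grows 25% per year
    hpc_kw *= 2.5      # HPC demand multiplies 2.5x per year (assumption)
    print(f"year {year}: {server_kw + hpc_kw:.0f} kW")
# -> today: 220 kW, year 1: 325 kW, year 2: 531 kW (over half a megawatt)
```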

What are we doing?

1. Measure and track power consumption and growth. At HMS we have 2 data centers - a primary and a disaster recovery site. Our primary site is $0.16 per kilowatt-hour x 140 kW in use (that's an electrical bill of $16,128 per month). Our backup site is $0.12 per kilowatt-hour x 80 kW in use (that's an electrical bill of $6,912 per month). Unless you understand your power costs in detail, you'll never be able to control them. The first sketch after this list walks through the arithmetic.

2. Forecast the future. We use data center modeling software from SAP called Xcelsius that enables us to examine the impact of moving servers, adding capacity, changing square footage, adding power/cooling, etc. The graphic above illustrates our modeling.

3. Create tiers of data center power capabilities. Rather than use a one-size-fits-all strategy, we have begun to rent co-location space that includes specialized rooms for high power density racks (25 kW/rack). We can use liquid-cooled cabinets and other specialized technologies to achieve the right power/cooling support for high performance computing instead of trying to design one room to serve all purposes. The second sketch after this list shows the rack-count math.

4. Investigate lower cost alternatives. Google's strategy has been to locate server farms near hydroelectric plants with lower kilowatt costs. We're considering the options in Western Massachusetts along with other collaborators. One challenge of this approach is backup power. What happens to a high performance computing facility if the hydroelectric power fails? Creating a megawatt of backup generator power is not easy or cost effective. Instead of protecting all our high performance computing assets, one strategy is to protect only storage, which is less tolerant of power failures. Since high performance computing cores are often distributed geographically, failure of any one data center could be invisible to the users.

5. Engineer for efficiency. As we purchase new equipment, we examine power supply designs, cooling profiles, possibilities for shutting down unused equipment until it is needed, etc. I expect some of the greatest software and hardware innovations of the next several years to be power saving technologies, because real estate is no longer the issue.
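
To make item 1 concrete, here is the monthly bill arithmetic for both sites as a small Python sketch. The rates and loads are the ones quoted above; the 720-hour month (30 days) is an assumption that reproduces the quoted bills.

```python
# Monthly electrical bill = rate ($/kWh) x load (kW) x hours in the month.
HOURS_PER_MONTH = 24 * 30  # assumed 30-day month = 720 hours

sites = {
    "primary (HMS)": (0.16, 140),  # ($/kWh, kW in use), figures from the post
    "backup (DR)":   (0.12, 80),
}

for name, (rate, load_kw) in sites.items():
    print(f"{name}: ${rate * load_kw * HOURS_PER_MONTH:,.0f}/month")
# -> primary (HMS): $16,128/month, backup (DR): $6,912/month
```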
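And a back-of-the-envelope look at why the high power density tier in item 3 matters: the rack count needed to host the half-megawatt forecast at conventional versus high-density power levels. The 25 kW/rack figure is from item 3; the 5 kW/rack conventional density is my assumption for comparison.

```python
import math

forecast_kw = 530  # roughly the 2-year forecast (over half a megawatt)

# 5 kW/rack is an assumed conventional density; 25 kW/rack is the
# high-density figure from item 3.
for density_kw in (5, 25):
    racks = math.ceil(forecast_kw / density_kw)
    print(f"{density_kw} kW/rack -> {racks} racks")
# -> 5 kW/rack: 106 racks; 25 kW/rack: 22 racks
```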

4 comments:

  1. Dr. Halamka,

    Since you mentioned Google's server farm strategy in your blog, I thought you might also find their power backup strategy innovative.

    See http://bit.ly/kFVWr

  2. Point #3 regarding high density computing is right on. While it runs contrary to conventional thinking, it is actually more energy efficient to cool high density racks (20 kVA to 60 kVA). Row-based or rack-based cooling modules are about 30% more efficient than traditional chilled water air conditioners. Less energy is needed to move air since the modules reside in the row and, because there’s no need for over-cooling to eliminate hot spots, chiller capacity can be reduced. Besides operational/energy savings, high density can also save on capital costs as fewer racks will be needed since they’re filled to capacity rather than half full.

  3. John,

    Interesting post. Have you given any thought to calculating the power savings for desktops by going to thin clients, or even stateless thin clients?

  4. Regarding cloud computing, have you assessed using it for high volume storage for your clinical imagery? Maybe only for DR/Backup? Perhaps an Amazon or Iron Mountain?
