As a Prius-driving vegan, I'm doing everything I can to reduce my carbon impact on the planet. This also includes an effort to build "green" data centers. My next few posts will be about the power consumed by the technology we use in healthcare. It's estimated that between 1.5 and 3.5% of all power generated in the US is now used by computers.
I recently began a project to consolidate two data centers. We had enough rack space, enough network drops, and enough power connections, so the consolidation looked like a great way to reduce operating costs. All looked good until we examined the power and cooling requirements of our computing clusters and new racks of blade servers. For a mere $400,000 we could run new power wiring from the electrical crypts to the data center. However, the backup generators would not be able to sustain the consolidated data center in the event of a total power loss. So, we could install a new $1 million backup generator. Problem solved? Not quite. The heat generated by all this power consumption would rapidly overwhelm the cooling system, driving temperatures up 10 degrees. We investigated floor-tile-mounted cooling, portable cooling units, and even rack-mounted cooling systems. All of these take space, consume power, and add weight. At the end of the planning exercise, we found that the cost per square foot of the consolidated data center would exceed the cost of operating two less densely packed data centers. We looked at commercial data hosting options and ran into the same issue: power limits per rack meant half-full racks and twice as much square footage to lease, increasing our operating costs.
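For anyone doing a similar exercise, the underlying arithmetic is simple. Here is a back-of-the-envelope sketch in Python of the power-to-cooling conversion behind this kind of planning; the rack count, per-rack wattage, and other figures are hypothetical placeholders, not our actual values.

# Back-of-the-envelope math behind a consolidation decision.
# All numbers below are hypothetical, for illustration only.

WATTS_TO_BTU_PER_HR = 3.412      # 1 watt of IT load becomes ~3.412 BTU/hr of heat
BTU_PER_HR_PER_TON  = 12000      # 1 "ton" of cooling removes 12,000 BTU/hr

racks = 40                        # hypothetical consolidated footprint
kw_per_rack = 12                  # dense blade racks can draw 10-20 kW each

it_load_kw   = racks * kw_per_rack
heat_btu_hr  = it_load_kw * 1000 * WATTS_TO_BTU_PER_HR
cooling_tons = heat_btu_hr / BTU_PER_HR_PER_TON

print("IT load:      %d kW" % it_load_kw)
print("Heat load:    %.0f BTU/hr" % heat_btu_hr)
print("Cooling need: %.0f tons" % cooling_tons)

Every watt delivered to a blade eventually becomes heat the cooling plant must remove, which is why dense racks can break a cooling budget long before they break a power or real estate budget.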
At my CareGroup data center, we recently completed a long-term planning exercise for our unused square footage. Over the past few years, we've met increasing customer demand by adding new servers, and power has not been a rate-limiting step. However, as we retire mainframe, mini, and RISC computing technologies and replace them with Intel/AMD-based blades, the heat generated will exceed our cooling capacity long before real estate and power are exhausted.
The recent rise in the cost of energy has also highlighted that unchecked growth in the number of servers is not economically sustainable. In general, IT organizations have a tendency to add more capacity rather than take on the more difficult task of controlling demand, contributing to growth in power consumption.
Power consumption and heat are increasing to the point that data centers cannot sustain the number of servers that the real estate can accommodate. The solution is to deploy servers much more strategically. We've started a new "Kill-a-watt" program and are now balancing our efforts between supply and demand. We are more conservative about adding dedicated servers for every new application, challenging vendor requirements when dedicated servers are requested, examining the efficiency of power supplies, and performing energy efficiency checks on the mechanical/electrical systems supporting the data center.
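The arithmetic behind "Kill-a-watt" is equally simple. Here's a small sketch, with a hypothetical server wattage, facility overhead factor, and utility rate, of what one always-on dedicated server costs per year:

# Rough annual energy cost of one dedicated server, including the
# cooling and electrical overhead of the facility (the PUE factor).
# The wattage, PUE, and rate below are hypothetical placeholders.

server_watts    = 450        # average draw at the plug
pue             = 2.0        # overhead factor: every IT watt costs 2 watts total
dollars_per_kwh = 0.12

hours_per_year = 24 * 365
kwh_per_year   = server_watts / 1000.0 * pue * hours_per_year
annual_cost    = kwh_per_year * dollars_per_kwh

print("%.0f kWh/year, about $%.0f/year per server" % (kwh_per_year, annual_cost))

Multiply that by a few hundred lightly used dedicated servers and the case for being conservative about new hardware makes itself.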
We have also begun extensive use of VMware, Xen, and other virtualization techniques. This means we can host farms of Intel/AMD blades running Windows or Linux, deploying CPU capacity on demand without adding new hardware. We're connecting two geographically distant data centers with low-cost dark fiber and building "clouds" of server capacity. We create, move, and load-balance virtual servers without interrupting applications.
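To make the "move without interrupting applications" point concrete, here is a minimal sketch using the open-source libvirt Python bindings to live-migrate a running guest from a host in one data center to a host in the other. The host URIs and guest name are hypothetical, and this is an illustration of the technique rather than a description of our actual tooling.

import libvirt

# Connect to the source and destination hypervisors (URIs are hypothetical).
src  = libvirt.open("xen://dc1-host01/")
dest = libvirt.open("xen://dc2-host07/")

# Look up the running guest on the source host (name is hypothetical).
dom = src.lookupByName("web-app-01")

# Live migration: the guest's memory is copied while it keeps running,
# with only a brief pause at the final switchover.
dom.migrate(dest, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

print("Guest now running on", dest.getHostname())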
Managing a data center is no longer simply a facilities or real estate task. We've hired a full-time power engineer to manage the life cycle of our data center, network closets, and disaster recovery facilities. New blade technologies, Linux clusters, and virtualization are great for on-demand computing, but power and cooling are the new infrastructure challenge of the CIO.
Comments:
John
I am a local VC interested in health IT (hence following your blog with pleasure). In fact I was at PCHRI last year and heard you talk.
I see lots of interesting startups ... On this subject, check out DegreeControls in NH ... they have an interesting system for active management of cooling in a data center that has shown 20-30% better performance than the "default" configuration: http://www.degreec.com/dc_pd_ADAPTIVCOOL.htm
Have you done any tests yet with the Windows Server 2008 Datacenter beta and IIS 7? With virtualization "baked" in, would this offer some additional bandwidth with some QoS adjustments and help with the power scenario? Since it is still beta and not complete, I realize it may not be quite ready for a full utilization study yet, but I was just curious whether you believe this might also be a viable option to consider with some 64-bit Xeon-powered consoles.