CIOs are responsible for achieving at least 99.9% uptime, and that implies high-reliability engineering of every component. In planning for disaster recovery, we tend to focus on power, storage, servers, networks, and desktops. However, if cooling fails, no amount of redundant engineering will save the day.
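To put 99.9% in perspective, it translates to a downtime budget of under nine hours per year. Here's a minimal sketch of the arithmetic in Python (the availability targets shown are generic examples, not BIDMC commitments):

```python
# Downtime budget implied by an availability target (illustrative arithmetic).
HOURS_PER_YEAR = 365 * 24  # 8,760 hours, ignoring leap years

def downtime_budget_hours(availability: float) -> float:
    """Maximum allowable downtime per year for a given availability target."""
    return HOURS_PER_YEAR * (1 - availability)

for target in (0.999, 0.9999):  # "three nines" and "four nines"
    print(f"{target:.2%} uptime -> {downtime_budget_hours(target):.2f} hours/year")
# 99.90% uptime -> 8.76 hours/year
# 99.99% uptime -> 0.88 hours/year
```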
BIDMC's primary data center uses glycol coolers (installed before we took possession of the building) to maintain a constant room temperature. We were concerned that the piping carrying glycol between the rooftop dry coolers and the computer room air conditioning units might have deteriorated over time, posing a risk of joint rupture. Ty Dell, our data center facilities engineer, arranged to have the pipes inspected via ultrasound imaging to assess pipe and joint thickness. They passed all inspections.
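The pass/fail logic behind such an inspection is simple: compare each measured wall thickness to a minimum allowable value. Here's a hypothetical sketch in Python (the pipe locations, thickness readings, and 80%-of-nominal threshold are illustrative assumptions, not figures from the actual report):

```python
# Hypothetical ultrasound thickness screen: flag pipe segments whose measured
# wall thickness has fallen below a retirement threshold. All numbers are
# illustrative, not taken from the BIDMC inspection report.
NOMINAL_WALL_MM = 3.9          # assumed nominal wall thickness for this pipe run
RETIREMENT_FRACTION = 0.80     # assumed threshold: flag below 80% of nominal

measurements_mm = {
    "roof riser joint A": 3.8,
    "roof riser joint B": 3.7,
    "CRAC supply elbow": 3.9,
}

threshold_mm = NOMINAL_WALL_MM * RETIREMENT_FRACTION
for location, measured in measurements_mm.items():
    status = "PASS" if measured >= threshold_mm else "FLAG for repair"
    print(f"{location}: {measured:.1f} mm (min {threshold_mm:.2f} mm) -> {status}")
```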
Here's the report, which reassures us that the risk of failure in our cooling system plumbing is low.
Non-invasive ultrasound testing of data center cooling infrastructure - that's "cool".
1 comment:
Dr. Halamka:
Have you followed Intel's approach to data center cooling using air economizers? Here are links to a 5-minute Intel video on their proof-of-concept test in one of their New Mexico centres and to an Intel document on the idea, respectively:
http://www.youtube.com/watch?v=SRn_xW7VtWc
http://www.intel.com/content/www/us/en/data-center-efficiency/data-center-efficiency-xeon-reducing-data-center-cost-with-air-economizer-brief.html
By the way, I've never worked for, or shilled for, Intel. I was a sys admin for the final 22 years of my career.
Regards,
Tony Kocurko