Wednesday, August 6, 2008

Storage for Research

I've written several blog entries about the challenges of providing enterprise storage for clinical and financial operations. We have half a petabyte of spinning storage at BIDMC and 200 Terabytes at Harvard Medical School.

At HMS, we've recently hired BioTeam to assist us with planning the next generation of storage for the research community. Although the acquisition and management of storage for the research community can be done by the same team that has been responsible for our enterprise storage, there will be unique challenges providing storage to the research community over the next few years that will require a different approach.

I recently attended a community-wide storage planning conference with all the HMS research stakeholders. We heard from them about research projects in imaging and genomics that will generate terabytes per day. Most of this data can be archived after analysis and then eventually deleted.

This means that IT must be able to provision three kinds of storage: high-performance storage for analysis on our Linux clusters, mid-tier storage for near-term access, and very inexpensive archival storage.
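The tiering idea above can be sketched as a simple lifecycle rule that assigns a dataset to a tier based on how recently it was used. This is a minimal illustration, not an actual HMS policy; the tier names, retention windows, and dataset fields are all hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical retention windows -- illustrative only, not actual HMS policy.
ANALYSIS_WINDOW = timedelta(days=30)    # stay on high-performance storage
NEAR_TERM_WINDOW = timedelta(days=365)  # then move to mid-tier

@dataclass
class Dataset:
    name: str
    last_accessed: date

def assign_tier(ds: Dataset, today: date) -> str:
    """Pick a storage tier based on how recently the data was used."""
    age = today - ds.last_accessed
    if age <= ANALYSIS_WINDOW:
        return "high-performance"   # Linux cluster analysis
    if age <= NEAR_TERM_WINDOW:
        return "mid-tier"           # near-term access
    return "archive"                # inexpensive archival storage

today = date(2008, 8, 6)
print(assign_tier(Dataset("genome_run", date(2008, 7, 20)), today))   # high-performance
print(assign_tier(Dataset("imaging_2007", date(2007, 12, 1)), today)) # mid-tier
print(assign_tier(Dataset("old_study", date(2006, 1, 1)), today))     # archive
```

In practice the "delete" step described above would be a fourth state after archive, gated on an investigator's sign-off rather than age alone.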

Further, IT needs to be extremely nimble in providing this storage on demand to support the evolving needs of investigators. Of course, a chargeback model will be mutually agreed upon so that demand is not infinite and the budget is not zero.
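A chargeback model along these lines might price each tier differently, so investigators have an incentive to let cold data migrate down. The per-terabyte rates below are purely illustrative, not actual HMS pricing.

```python
# Hypothetical per-terabyte monthly rates for each tier -- illustrative
# figures only, not actual HMS pricing.
RATES_PER_TB_MONTH = {
    "high-performance": 100.00,
    "mid-tier": 40.00,
    "archive": 5.00,
}

def monthly_charge(usage_tb: dict) -> float:
    """Compute a lab's monthly storage bill from per-tier usage in TB."""
    return sum(RATES_PER_TB_MONTH[tier] * tb for tier, tb in usage_tb.items())

bill = monthly_charge({"high-performance": 2, "mid-tier": 10, "archive": 50})
print(f"${bill:.2f}")  # $850.00
```

The steep price gradient is the point: a lab holding 50 TB on the archive tier pays less than one keeping 3 TB on high-performance disk, which keeps the scarce fast storage free for active analysis.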

BioTeam prepared an excellent overview which frames the issues.

Over the next few months, we'll be considering our hardware options, service levels, and processes to support the new storage requirements of the research community. It's a challenge that all IT organizations in academia must embrace.

1 comment:

ukdataguy said...

If you are looking for an innovative approach to high-capacity storage, check out Atrato. I looked at their product the other day and I think they have an innovative approach to overcoming some of the operational issues surrounding high-capacity disk arrays.