Thursday, February 28, 2008
Cool Technology of the Week
As I've mentioned in my entries about personal health records and my recent Dispatch from HIMSS, 2008 is an important year for personal health records that are linked to clinical EHRs, employer sponsored, payer based, and commercially offered. Yesterday, Google publicly announced their Google Health application, which is the Cool Technology of the Week. A disclosure - I did serve on the Google Health Advisory Council over the past year.
The concept behind Google Health is that patients log in to the Google application using credentials that are secure but not trusted. This means that anyone can set up a Google Health profile, but there is no specific assertion of identity. I can claim I'm Bill Clinton if I want to.
Once in Google Health, I can manually add information about my problem lists, medication lists, allergies etc. and get decision support about my conditions. However, it's unlikely that many people will enter their data manually. A much more powerful approach is to self-populate the personal health record based on standards-based connections to hospitals, laboratories, clinics and pharmacies. Cleveland Clinic was the first partner to support this connection. Beth Israel Deaconess will be a part of the next group of connections.
To self-populate the Google Health record, a patient who has a relationship with one of the Google-interfaced providers just clicks on the icon of their hospital. That icon offers up to 3 links. In the case of BIDMC, we'll offer:
Upload your records
Make an Appointment
Securely Email your Clinicians
If a patient clicks on Upload your records, they will be asked to log in to BIDMC's Personal Health Record, Patientsite, using the secure credentials that have been issued by their doctor, validating the patient's identity. Once they sign a consent, they will be given the option to initiate an upload of problems, medications, allergies and laboratories into Google Health. The patient initiates this transfer, with their consent, after understanding the risks and benefits of doing so. Once the data is in Google Health, the value to the consumer is expert decision support, disease information, and medication information based on the patient's data.
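For the technically curious, here's a rough sketch of the kind of standards-based payload such a patient-initiated upload assembles. Google Health accepts clinical data as Continuity of Care Record (CCR) XML; the element structure below is deliberately skeletal and the code is illustrative only, not the actual Patientsite-to-Google interface.

```python
# Hypothetical sketch: assemble a skeletal CCR-style XML document from the
# subset of data a patient consents to share. Element names are simplified;
# a production CCR carries much more metadata (actors, dates, coded values).
import xml.etree.ElementTree as ET

def build_ccr(problems, medications, allergies):
    """Build a minimal CCR-like document from simple string lists."""
    root = ET.Element("ContinuityOfCareRecord", xmlns="urn:astm-org:CCR")
    body = ET.SubElement(root, "Body")
    for section_name, item_name, items in [
            ("Problems", "Problem", problems),
            ("Medications", "Medication", medications),
            ("Alerts", "Alert", allergies)]:
        section = ET.SubElement(body, section_name)
        for text in items:
            item = ET.SubElement(section, item_name)
            desc = ET.SubElement(item, "Description")
            ET.SubElement(desc, "Text").text = text
    return ET.tostring(root, encoding="unicode")

# The subset a patient might choose to upload:
print(build_ccr(problems=["Hypertension"],
                medications=["Lisinopril 10 mg daily"],
                allergies=["Penicillin"]))
```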
There have been several articles about the Google/Cleveland Clinic pilot and Microsoft Health Vault/Mayo pilot noting that none of the organizations have signed HIPAA business associate agreements with each other. The reason for this is that Google and Microsoft are not HIPAA covered entities or business associates. Their products are just secure storage containers used by the patient, like a flash drive. The patient can delete the data at any time, apply privacy flags, print the data, and add to the data. Since the patient is in total control, there are no covered entity or business associate issues.
As part of the Google Advisory Council, I can tell you that many thoughtful people worked on the legal, technical, and policy issues around data use. Google will not advertise based on this data, resell it, data mine it, or repurpose it in any way. These consumer-centric policies are similar to the best practices adopted by Microsoft Health Vault.
It's important to me that, in my role as chair of the national standards effort, HITSP, I support all the major personal health record initiatives with interoperability. I've committed to Microsoft that BIDMC will work with Health Vault. I've committed to Dossia that we'll link with their Indivo Health platform. It's my hope that all of these efforts will converge to use one plug and play standard for clinical content and transport. Once they do, patients will be able to select the personal health record of their choice based on features, not just data.
Tuesday, February 26, 2008
How to be a Bad CIO
In my decade as a CIO, I've seen a lot of turnover in the IT industry. Each time a CIO is fired, I've asked around to learn about the root cause. Here's my list of the top 10 ways to be a bad CIO.
1. Start each meeting with a chip on your shoulder
Human nature is such that every organization has politics and conflicts. Sometimes these differences of opinion lead to emotional email or confrontational meetings. If the CIO develops an attitude that presupposes every request will be unreasonable and every interaction unpleasant, then every meeting will become unproductive. I find that listening to naysayers, understanding common ground, and developing a path forward works with even the most difficult customers. Instead of believing that meetings with challenging customers will be negative, I think of them as opportunities for a "walk in the woods".
2. Bypass governance processes and set priorities yourself
Although it's true that some budget decisions must be made by the CIO, such as maintaining infrastructure, the priorities for application development should be based on customer-driven Governance Committees. Even the best of intentions can lead to a mismatch between customer expectations and IT resource allocation. I recently participated in a meeting to discuss technology problems, when in fact the problem was governance - a lack of communication among the stakeholders, resulting in unclear priorities and unmet expectations. Once the governance is clarified and communication channels established, IT can deliver on customer priorities and meet expectations.
3. Protect your staff at the expense of customer and institutional needs
As a CIO, I work hard to prevent my 'lean and mean' staff from becoming 'bony and angry'. However, I also work with the customers to balance resources, scope and timing, rather than just saying 'no'. Sometimes organizational priorities will be overwhelming due to sudden compliance issues or "must do" strategic opportunities. I do my best to redirect resources to these new priorities, explaining that existing projects will slow down. My attitude is that I do not know the end of the play in the middle of Act I, so I cannot really understand the impact of new priority initiatives until I accept their positive possibilities and start working on the details. Tolerate some ambiguity, accept change, support the institution, and if a resource problem evolves, ask for help.
4. Put yourself first
I've written that life as a CIO is a lifestyle, not a job. Weekends and nights are filled with system upgrades. Pagers and cell phones go off at inopportune moments. Vacations and downtime are a balance with operational responsibilities. When I go on vacation, I get up an hour before my family, catch up on email, then spend the day with my family. At night, I go to bed an hour after they do, catching up on the day's events. It's far worse to ignore email and phone calls for a week and then come back to a desk filled with loose ends. Being a CIO requires a constant balance of personal and professional time.
5. Use mutually assured destruction negotiating tactics
Walking into the CEO's office and saying that you will quit unless your budget is increased does not win the war. It may result in temporary victory but it demeans the CIO. Similarly, telling customers that the CEO, COO and CFO are to be blamed for lack of resources does not make the organization look good. The CIO should be a member of senior management and all resource decisions should be made together by consensus, even if the outcome is not always positive for IT.
6. Hide your mistakes/undercommunicate
My network outage in 2002 resulted in what was called "the worst IT disaster in healthcare history". By sharing all my lessons learned with the press and internal customers, everyone understood the combination of issues and events that caused the problem. I received email from CIOs all over the world explaining their similar problems that had been hidden due to PR concerns. I have found that transparency and overcommunication may be challenging in the short term, but always improves the situation in the long term.
7. Burn Bridges
It's a small world and the best policy is to be as cordial and professional as possible with every stakeholder, even your worst naysayers. A dozen years ago before I was CIO, I presented to the IT steering committee about the need to embrace the web. I was told by a senior IT leader that they did not care what I had to say since I was not an important stakeholder. A year later, I became CIO and that IT senior leader left the organization within a week.
8. Don't give your stakeholders a voice
Sitting in your office and not meeting with customers is doom for the CIO. Every day, I fill my schedule with meetings in the trenches with all the stakeholders to understand what is working and what is not. I never shoot the messenger when I'm told that our products or services need improvement. A CIO can earn a lot of respect just by listening to the honest feedback from every part of the organization.
9. Embrace obsolete technologies
The CIO should never be the rate-limiting step for adoption of new technologies and ideas. If Open Source, Apple products, and Web 2.0 are the way the world is going, the CIO should be the first in line to test them.
10. Think inside the box
Facebook as a Rapid Application Development platform? Empowering users to do self-service data mining? Piloting thin client devices and flexible work arrangements? Although exploring new ideas will not always result in a breakthrough, it's much more likely to create innovation than maintaining the status quo.
Each time you approach a senior manager, a customer or an employee, remind yourself of the top 10 ways to be a bad CIO. By avoiding these behaviors, you may find yourself embraced by the organization for many years to come.
Dispatch from HIMSS
I'm at HIMSS for 24 hours. For those who want to say hello, I'm keynoting the Electronic Health Record Vendors' Association Breakfast from 7am to 8am in Convention Center Room 240D, keynoting the HITSP Town Hall from 8:30am-9:30am in Convention Center Room 204C, then meeting with several groups on the Exhibition floor until my plane departs back to Boston in the afternoon.
Here are a few observations from my hallway discussions at HIMSS thus far
1. Personal Health Records have gone mainstream. With Microsoft's HealthVault, Revolution Health, Dossia, and anticipated announcements from Google, it's clear that patients will have many options to become stewards of their own healthcare data. The next step will be for labs, pharmacies, payers, clinics and hospitals to provide standards-based connections to these personal health records. Now that Secretary Leavitt has recognized the national standards for personal health records, there should be convergence on the use of the continuity of care document for personal health record interoperability.
2. Personal Health Records will accelerate Health Information Exchange efforts. Health Information Exchanges and RHIOs have faced many challenges, including funding for central infrastructure and privacy concerns. Personal Health Records, which enable the patient to move records from place to place, are peer to peer and use the free infrastructures provided by Microsoft, Google et al. Since the patient does the transfer, there are no HIPAA business associate agreements or covered entity issues. The patient is in full control of who sees what and when.
3. Electronic Health Records and Computerized Provider Order Entry are now seen as essential. Hospitals realize that Joint Commission accreditation, pay for performance programs, care coordination efforts, and quality measurement require these technologies. They are committed to making the investments. Of course there are still issues with rural hospitals and solo practitioners, but the larger organizations have passed the tipping point.
4. Interoperability is being taken seriously. At the Interoperability Showcase, the IHE Theater, the EHRVA meetings, and throughout the Exhibition floors, I'm hearing about the possibilities of using the standardized patient summaries for EHR, PHR, Quality, Public Health, and clinical research. The next year should see significant adoption of these new standards. Here's an overview of all the interoperability events at HIMSS.
5. Security is increasingly a focus of many CIOs. Several states, such as Massachusetts and California, have new data protection compliance and reporting requirements. Intrusion detection and prevention technologies are hot.
6. Storage and archiving technologies are becoming more sophisticated and lower cost. Healthcare CIOs are faced with storing more images and other data, so having easy-to-manage, enterprise-class storage and archiving is key.
7. Software as a service/ASP models are increasingly popular as a means to reduce total cost of ownership and ease deployment.
8. New mobile devices for clinicians have longer battery life, larger screens and lighter weight. PDAs are vanishing. Small form factor tablets/laptops are growing.
9. Collaboration tools for virtual teams are growing in popularity as local healthcare IT expertise becomes hard to find and retain.
10. Open Source is finding its way into the healthcare data center, with Linux providing the server side operating system support for Oracle databases, MySQL, and numerous vendor supported appliances.
A great gathering with 25,000 of my closest friends!
Saturday, February 23, 2008
Electronic Health Records for Non-owned doctors - Managing the project
As I've indicated in my blog posts about managing IT projects and managing consulting engagements, projects do not manage themselves. Although we've put together a remarkable partnership of vendors and service providers for our Electronic Health Record for non-owned doctors project, it's all wrapped in $1 million of project management, coordination and "air traffic control". The geographically dispersed set of independent physician practices makes the project that much harder to manage. Our partners for this project are:
eClinicalWorks - a leading provider of practice management and CCHIT certified electronic health records, accessible over the internet using a smart web client, from anywhere in the world. They will provide the software, training, and review of all our infrastructure designs.
Concordant - a leading provider of desktop, network, and server hosting services for clinician offices throughout our region. They will provide the hosting center for our Software as a Service (SaaS) EHR applications, operate our help desk, and deploy all our hardware to clinician offices.
Massachusetts eHealth Collaborative - our regional implementer of electronic health records with expertise in practice transformation. They will provide the practice consulting expertise to move clinicians from paper-based workflows to electronic systems.
Third Brigade - a leading provider of security, ethical hacking and host-based intrusion protection services. They will ensure we protect the privacy of patient records, since confidentiality is foundational to the entire project.
My internal staff, consisting of a Project Director, Project Manager, Project Coordinator, and design engineer, will coordinate all the work done by our partners, design the model office/ideal configurations for the entire rollout, and manage the budget. Our first 4 pilot sites will go live this Summer, and by Fall we will have gained enough experience that we'll refine our project plans and management oversight to be the equivalent of a "Starbucks franchising model." We expect that this model will enable us to choose a practice and then, 6 weeks later, have them fully up and running with hardware, software supplied from our central hosting facility, training, data conversions and interfaces. Being able to roll out practices in this timeframe, leveraging economies of scale, and using our partners most efficiently will result (we hope) in low cost and high customer satisfaction, since we'll do all the work with a minimum amount of wasted effort.
My experience with a project of this complexity is that a few additional months spent planning, project managing, and piloting will improve the quality of the project immensely and ultimately reduce our costs. The expense of doing the project twice to get it right far exceeds an investment in project management to get it right the first time. As we develop our "Starbucks franchise" Gantt charts, I'll post them, so all can see the critical path items that are being managed for each practice site.
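Until then, here's a hedged sketch of the underlying idea: model each practice rollout as a small dependency graph and let the critical path fall out computationally. The tasks, durations, and dependencies below are illustrative guesses for discussion, not our actual project plan.

```python
# Hypothetical task breakdown for one practice go-live. Durations are in
# working days; each task lists its prerequisites.
TASKS = {
    "site survey":       (3,  []),
    "order hardware":    (5,  ["site survey"]),
    "network install":   (5,  ["site survey"]),
    "deploy desktops":   (3,  ["order hardware", "network install"]),
    "eCW configuration": (10, ["site survey"]),
    "data conversion":   (7,  ["eCW configuration"]),
    "interface testing": (5,  ["eCW configuration"]),
    "staff training":    (5,  ["deploy desktops", "eCW configuration"]),
    "go-live":           (2,  ["data conversion", "interface testing",
                               "staff training"]),
}

def critical_path(tasks):
    """Return the longest dependency chain and its total duration."""
    finish, path = {}, {}
    def walk(name):
        if name not in finish:
            duration, deps = tasks[name]
            prior = max(deps, key=walk, default=None)  # slowest prerequisite
            finish[name] = duration + (finish[prior] if prior else 0)
            path[name] = [] if prior is None else path[prior] + [prior]
        return finish[name]
    last = max(tasks, key=walk)
    return path[last] + [last], finish[last]

steps, days = critical_path(TASKS)
print(" -> ".join(steps), f"({days} working days)")
# site survey -> eCW configuration -> data conversion -> go-live (22 working days)
```

Everything off the critical path (hardware, training) can slip a few days without moving the go-live date, which is exactly the slack a franchise-style plan needs to absorb the variability of independent practices.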
Scanning Technologies
One of the great things about writing a blog is that I can share my experiences so that other CIOs can avoid mistakes I've made. Over the past 5 years, I've made several mistakes - not understanding network technology enough to prevent our 2002 network outage, underestimating the popularity of voice recognition dictation systems, and believing that users would migrate to our new intranet site purely based on the advanced technology we implemented. One area where I need to formally state I was wrong is scanning technologies. I have never liked scanned images of medical records. They are not interoperable, they are challenging to store, and they are difficult to navigate because they are not searchable. However, I have recently found a few use cases for scanning technology that have proven me wrong.
1. Inpatient records
Although BIDMC's outpatient records are entirely electronic, our inpatient progress notes, nursing notes, and Input/Output records are still paper-based. In 2008, we're creating electronic History and Physicals that will serve as the foundation for future work on inpatient clinical documentation, but in the meantime we need to make our paper records available electronically for several reasons. Hiring and retaining medical record coders is challenging in the Boston area. If we can make scans of our inpatient records available electronically via a secure web application, we can hire medical record coders anywhere in the country. Additionally, real estate in the Longwood Medical Area of Boston is very expensive. Storing paper records nearby is just too expensive, so we built a storage facility in Dedham, 15 miles away. Retrieving a chart from the storage facility can take a few hours. Having an electronic version of paper records saves time, storage space and energy.
2. Doctor's doodles, outside labs and lab requisitions
In some clinics, doctors make drawings of skin lesions and physical exam findings. Our ambulatory medical record does not include real time graphical input via Wacom tablets or other electronic drawing devices. Hence we need some way to include these doodles as part of the electronic record. Creating a drawing, then bar coding it, makes automated scanning into the right patient's record possible. Also, every day we receive 15 inches of paper from referring clinicians and outside providers. Today, that's all filed in a paper chart. Scanning paper received from outside providers and making it available within our ambulatory record ensures continuity of care. Finally, we receive paper-based lab requisitions from clinicians who want to order BIDMC labs but are not using our electronic health records or provider order entry system. Although we do not consider these requisitions part of the medical record, the lab needs to retain them as proof that the specified tests were performed on the basis of a signed order. Scanning them eliminates the need to store paper and makes retrieving them for audits much easier.
3. Consent workflow
Although we've experimented with automating the consent process, we've found that most consents are done in private clinician offices where we have no control of the technology or workflow. By scanning these paper consents into the record and making them available as part of peri-anesthesia testing, we ensure that all documentation necessary for a successful surgery is available before the patient arrives at the Operating Room.
We're now live with scanning our lab requisitions and inpatient records. We will soon go live with scanning doctor's drawings and outside labs. Consent scanning is planned for next year. The technology we use includes Fujitsu high speed scanners and Captiva image capture software. Our Health Information Management professionals scan any written documentation in the paper record and generate a PDF for each tab in the record, making the electronic version easy to navigate. We've created an automated link to a web-based viewer that associates the scanned records with the right patient based on a bar code included on the first page of each scan that is optically recognized by Captiva. We've made these scans available to our medical record coders working at home. Homesourcing saves time, reduces real estate costs, and enhances productivity. It's a win/win.
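To make the filing step concrete, here's a simplified sketch of the routing logic that could sit behind such a workflow. The bar code format (an MRN plus a chart tab name) and the directory layout are hypothetical, and the actual bar code recognition is handled by the capture software, not this code.

```python
# Hypothetical routing step: once the capture software has decoded the bar
# code on a scan's first page, file the PDF against the right patient and
# chart tab so the web viewer can find it.
import shutil
from pathlib import Path

ARCHIVE = Path("/archive/scanned-records")   # assumed document store root

def file_scan(pdf_path, barcode):
    """Route a scanned PDF using an 'MRN|Tab' bar code value."""
    mrn, tab = barcode.split("|")            # e.g. "1234567|ProgressNotes"
    if not (mrn.isdigit() and len(mrn) == 7):
        raise ValueError(f"suspicious MRN in bar code: {barcode!r}")
    dest_dir = ARCHIVE / mrn / tab           # one folder per chart tab
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / Path(pdf_path).name
    shutil.move(pdf_path, dest)              # hand off to the viewer's store
    return dest

# file_scan("/scans/inbox/batch42.pdf", "1234567|ProgressNotes")
```

Validating the decoded value before filing matters more than the filing itself; a misread bar code that routes a scan into the wrong chart is a patient safety problem, not just an IT one.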
Thus, scanning technology with automated creation of PDFs and web-based viewing organized by document type does work very well during the transition from paper to natively electronic workflows. I stand corrected.
Tuesday, February 19, 2008
Cool Technology of the Week
Storage backup and data recovery is at the top of my list of things that will keep me awake in 2008. In healthcare IT, we need short recovery times with minimal or no data loss. Accomplishing this with once-a-day tape backups is not possible. The Cool Technology of the Week, Data Domain de-duplication storage, solves this problem.
At BIDMC, we generate 28 terabytes of new file and email storage each year. Basic file stores have grown so large that we struggle to copy them within our 24 hour backup window. Additionally, our disaster recovery efforts now require us to replicate our data across two geographic locations.
Tape backup, which has been in use at BIDMC for decades, suffers from a variety of problems. Tape backups are time-consuming. Tapes are fragile and require physical security when transported. The time required to retrieve and recover from tape stresses our service availability objectives. In years past, we considered backup to disk, but the economics did not work. Data Domain de-duplication now makes disk an economical backup medium. Here's how.
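To see why the backup window is a problem, consider some back-of-the-envelope arithmetic. Only the 28 terabyte annual growth figure above is ours; the store size and tape drive speed below are illustrative assumptions.

```python
# Rough arithmetic on the 24-hour backup window (illustrative numbers).
TB = 10**12
store = 50 * TB            # assume a ~50 TB active file store
window = 24 * 3600         # the 24-hour backup window, in seconds
tape_rate = 80 * 10**6     # ~80 MB/s native, plausible for an LTO drive of the era

print(f"{store / window / 10**6:.0f} MB/s sustained to copy it all in a day")
# -> 579 MB/s, far beyond a single drive
print(f"{store / tape_rate / 3600:.0f} hours on one tape drive")
# -> 174 hours, about a week per full backup
```

De-duplication sidesteps this arithmetic entirely: if only a few percent of blocks change each day, the daily transfer shrinks by an order of magnitude or more.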
Instead of making full tape backups, we can back up changes at the sub-block level to disk, then compress the result. There is an important distinction between a tape-based, incremental backup and de-duplication. With incremental backups, files that changed since the last backup are copied. A major problem with this approach is that each incremental backup must be recalled in sequence to recover files. This is a slow and complex process that does not detect if the same file was stored in several locations.
De-duplication, on the other hand, has sophisticated methods for identifying changes at the sub-block level. For example, if a spreadsheet has '&date' in the heading, each time you save it, the date in the title will change. An incremental backup will copy the whole document again. De-duplication at the sub-block level will only copy the date change. If multiple copies of the file are sent in email, it will only save one copy.
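Here's a toy sketch of the core idea, using simple fixed-size blocks for brevity (commercial appliances like Data Domain use more sophisticated variable-length segmentation): a block is stored only if its content hash has never been seen before.

```python
# Toy block-level de-duplicating store (illustrative only).
import hashlib, os

class DedupStore:
    BLOCK = 4096                     # fixed 4 KB blocks keep the sketch short

    def __init__(self):
        self.blocks = {}             # sha256 digest -> block bytes
        self.files = {}              # file name -> ordered list of digests

    def backup(self, name, data):
        """Store a file; return how many new bytes were actually kept."""
        new_bytes, recipe = 0, []
        for i in range(0, len(data), self.BLOCK):
            block = data[i:i + self.BLOCK]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:     # unseen content: keep it
                self.blocks[digest] = block
                new_bytes += len(block)
            recipe.append(digest)
        self.files[name] = recipe
        return new_bytes

    def restore(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

store = DedupStore()
doc = os.urandom(20000)
print(store.backup("monday.xls", doc))          # 20000: every block is new
changed = bytes([doc[0] ^ 1]) + doc[1:]         # the spreadsheet's date changed
print(store.backup("tuesday.xls", changed))     # 4096: only one block is new
```

The same mechanism is what makes ten emailed copies of one attachment nearly free: every copy resolves to block hashes already in the store.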
Over a two year period we examined many products from many companies. Most of them required proprietary hardware, specialized software, new management tools, training, and multiple staff to support the technology 24x7x365. We believe in information life cycle management/hierarchical storage management, but want one set of tools and compliance with the technology standards already in use in our data center. We chose Data Domain because:
- The product de-duplicates at the sub-block level yielding better reduction ratios
- The product looks like regular storage supporting NFS and CIFS file mounts
- The product requires little training since it's completely managed by Data Domain
- The product is an in-line appliance and does not require installation of server agents
- The product works with all our existing backup software
- The product is highly reliable, using RAID 6 SATA drives and built-in hardware redundancy
We're so impressed with Data Domain's performance as a backup infrastructure that we're also planning to use it as an archival tool for less frequently accessed files. To do so, we'll first implement file virtualization technology such as Acopia or Rainfinity. This will enable us to move content from one storage medium to another without impacting file shares, our web-based file access tools, or our SSLVPN remote file access applications. The combination of file virtualization and Data Domain will enable us to support three tiers of storage.
Tier 1 - SAN storage with lower density, high performance drives.
Tier 2 - SAN or NAS storage with high density, low performance drives.
Tier 3 - NAS-based, archival storage with high density drives coupled with Data Domain de-duplication and compression.
With these 3 tiers of storage, we'll reduce our cost of information life cycle management while reducing complexity.
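As a sketch of how the file virtualization layer might decide placement, here's a hypothetical policy that demotes files down the tiers as their last access ages. The thresholds are illustrative, not our production rules.

```python
# Hypothetical tier-placement policy driven by last-access time.
import time
from pathlib import Path

DAY = 86400  # seconds

def tier_for(path, now=None):
    """Pick a storage tier from a file's last-access time."""
    now = now or time.time()
    idle_days = (now - Path(path).stat().st_atime) / DAY
    if idle_days < 30:
        return 1    # Tier 1: fast SAN, files in active use
    if idle_days < 365:
        return 2    # Tier 2: dense SAN/NAS, occasionally read
    return 3        # Tier 3: de-duplicated NAS archive, rarely touched

# The migration itself is the virtualization layer's job, e.g.:
# for f in Path("/shares/research").rglob("*"):
#     if f.is_file():
#         migrate(f, tier_for(f))   # migrate() is hypothetical
```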
After a 2 year journey exploring backup, recovery, and archiving solutions, I feel we've finally found the answer that will let me sleep at night in 2008.
Media Services in a Web 2.0 World
One of my responsibilities as CIO at BIDMC and HMS includes oversight of Media Services. Historically, Media Services produced slides/posters and provided slide projectors, TVs and VCRs. In this digital world, it is now the focal point for streaming, telemedicine, and all digital presentation services including LCD projectors/plasma screens/digital whiteboards. My media services organizations at BIDMC and HMS have been evolving to provide services required by today's digital-savvy customer. Strategic questions include:
1. What is our video teleconferencing strategy?
Several years ago, CareGroup acquired bulky PictureTel teleconferencing units to link together our hospital executive teams. They were challenging to use and required engineers to establish and maintain connections. They were not mobile and provided few features other than basic voice and picture connectivity. They were used less than 5% of the time. Today, with H.320 (ISDN) and H.323 (IP) mobile units, are we seeing more video teleconferencing? For group meetings, telemedicine consultation, and interpreter's Carelink (video delivery of interpreter services to the bedside), we're seeing more interest. To support these initiatives, BIDMC installed a teleconferencing bridge so we can link internal IP teleconferencing to external ISDN teleconferencing units, host video conference calls, and provide security for internal-to-external IP calls. A year ago, 90% of our video calls were ISDN; currently 90-95% are IP. All internal video calls are IP, and about 60% of our external calls are IP over the public internet, with very reliable results.
Video teleconferencing is still not a seamless technology, as easy and reliable to use as a telephone. Cisco TelePresence has amazing clarity and ease of use, but it requires specialized equipment in a dedicated space. BIDMC will embrace video teleconferencing more when desktop-to-desktop communication becomes easy and reliable.
At HMS, we're running several tele-learning pilots, making lectures at the medical school available in real time to students at Harvard University and linking together outside institutions for collaborative coursework over Internet2. As with BIDMC, doing each of these requires significant media services resources, so they are not broadly deployed. HMS, along with the other Harvard schools, is jointly investigating enterprise licenses for Webex and Elluminate as real time collaboration tools, realizing that the value of a real time video feed is limited.
2. Should we offer full service presentation support or self service classrooms?
I've lectured in numerous auditoriums around the world which are outfitted with sophisticated Crestron equipment. Each one I use has a custom user interface which hides the details of room lighting, screens, curtains, video projectors, and video sources from the user. This may sound like an advantage, but I've had many presentations delayed because the Crestron unit itself becomes unresponsive and no one knows how to reset it. If I had a simple AV switch box, light dimmer, and screen switch, I could operate the room myself with ease. Resetting a projector is just an off/on switch. Cabling is a VGA or DVI connector. However, not all users are completely comfortable with AV equipment, so some prefer Crestron user interfaces.
Do we provide staff at every event to set up and assist all presenters? Do we use complex Crestron programming to make room control easy for the average user? Do we provide a basic set of off/on switches and assume most users can figure it out?
My personal favorite is the Extron HideAway desktop PC Interface, which is a basic AV switchbox and VGA cable. It works every time. I think it's likely that we'll try to standardize most conference rooms on Extron equipment, ensure most users are familiar with it, and staff just large public events or those with visiting faculty to ensure their success. This means we'll need to invest in our conference rooms to bring them all up to a consistent level. The capital to do that has been challenging to obtain, but I believe striking a better balance between consistent self-service infrastructure and staffed presentation support will increase customer satisfaction and better utilize the time of skilled AV staff.
3. What is our video streaming strategy?
We're increasingly asked to host video streaming for grand rounds, resident report, simulations, operating room procedures and continuing medical education. BIDMC does not have internal video streaming hosting. Harvard Medical School hosts video streaming for 1600 courses and has made this infrastructure available to BIDMC on a case by case basis.
Our challenge is what to do with streaming video available to the public. Providing streaming servers and networks with appropriate bandwidth is no problem for a few hundred students, but what about public video streams that could be watched by thousands? We've used Akamai for public streaming and self-hosted infrastructure at HMS for private streaming. HMS uses several tools to automate the video streaming process, including AnyStream and Apreso. We currently host Real formats, but in the future we'll likely host additional Flash formats.
4. What is our podcasting strategy?
HMS currently has an audio podcasting infrastructure to support 1600 courses. We're investigating video podcasting but at present, there is limited demand. The value of watching a talking head while listening to a lecture on an iPod is limited. We've recently developed public podcasts too.
5. What is our new media strategy?
Historically, media services at BIDMC provided graphic design for public web pages. This resulted in graphically interesting pages, but designs were so sophisticated that real time revisions were not possible. We're moving our external website to a content management system so that complex HTML coding and graphic design are no longer required. This may result in less graphically interesting pages, but makes real time changes easy. Currently, we do not offer Flash programming services, but we do offer photography, video encoding, and basic graphic design. In the future, it probably makes sense that Media Services will offer services to produce still images, Flash animations, and streaming video to support websites using our new content management system.
6. What's the right mix of services to provide?
Creating the right mix of poster services, graphics services, photography, video, and presentation services is challenging. At both HMS and CareGroup, we're thinking about this, realizing that presentation services (both self service and full service) are important, and that various digital graphics services will be increasingly important.
7. What media services do patients expect?
Media services at BIDMC include providing television services to the hospital. Patients now expect additional services such as movies on demand, video game services, and web access via in-room televisions. At present, we offer wireless to all patients at no charge, but do not provide rentable laptops. We're investigating partnerships to provide additional media services to patients.
8. What should we insource and what should we outsource?
Clearly, presentation services are something that we must insource because demand is unpredictable and services are often needed at the last minute. Flash and high-end collaboration services might best be outsourced. We're thinking about the best balance of internal and external staff to support basic versus advanced digital media services.
9. How should we divide the role of web content management?
At BIDMC, Corporate Communications has taken the lead in our web content management project. They will lead the governance effort which oversees departmental production of content. Media services has taken a lesser role in web design and daily web content production. In the future, it's likely that content production will be divided between departments, corporate communication, media services, and outsourced high end content providers.
10. How do we optimize management of the team?
Given the evolving roles of media services, the need for presentation support, and the increasing customer demand for advanced digital services, how do we manage a forward thinking and agile media services team? At HMS, we're considering adding additional operational team leadership such as an "air traffic controller" to schedule each day's events and ensure all presentation services are well orchestrated. At BIDMC, we're considering the addition of a "new media" specialist.
Working together, the management and staff of Media Services will refine their skill sets, optimize their alignment with customer needs, and evolve to meet the demands of a web 2.0 world.
1. What is our video teleconferencing strategy?
Several years ago, Caregroup acquired bulky Picture-Tel teleconferencing units to link together our hospital executive teams. They were challenging to use, and required engineers to establish and maintain connections. They were not mobile and provided few features other than basic voice and picture connectivity. They were used less than 5% of the time. Today, with H320 (ISDN) and H323 (IP) mobile units, are we seeing more video teleconferencing? For group meetings, telemedicine consultation, and interpreter's Carelink (video delivery of interpreter services to the bedside), we're seeing more interest. To support these initiatives, BIDMC installed a teleconferencing bridge so we can link internal IP teleconferencing to external ISDN teleconferencing units, host video conference calls and provide security for internal to external IP calls. A year ago 90% of our video calls were ISDN, currently 90-95% of our calls are IP, All internal video calls are IP and about 60% of our external calls are IP over the public internet, with very reliable results.
Video teleconferencing is still not a seamless technology, as easy and reliable to use as a telephone. Cisco Teleprescence has amazing clarity and ease of use, but it requires specialized equipment in a dedicated space. BIDMC will embrace video teleconferencing more when desktop to desktop communication becomes easy and reliable.
At HMS, we're running several tele-learning pilots, making lectures at the medical school available in real time to students at Harvard University and linking together outside institutions for collaborative coursework over Internet 2. As with BIDMC, doing each of these requires significant media services resources, so they are not broadly deployed. HMS, along with the other Harvard schools are jointly investigating enterprise licenses for Webex and Elluminate as real time collaboration tools, realizing that the value of a real time video feed is limited.
2. Should we offer full service presentation support or self service classrooms?
I've lectured in numerous auditoriums around the world which are outfitted with sophisticated Crestron equipment. Each one I use has a custom user interface which hides the details of room lighting, screens, curtains, video projectors, and video sources from the user. This may sound like an advantage, but I've had many presentations delayed because the Crestron unit itself becomes unresponsive and no one knows how to reset it. If I had a simple AV switch box, light dimmer, and screen switch, I could operate the room myself with ease. Reseting a projector is just an off/on switch. Cabling is a VGA or DVI connector. However, not all users are completely comfortable with AV equipment, so some prefer Crestron user interfaces.
Do we provide staff at every event to set up and assist all presenters? Do we use complex Crestron programming to make room control easy for the average user? Do we provide a basic set of off/on switches and assume most users can figure it out?
My personal favorite is Extron HideAway desktop PC Interface, which is a basic AV switchbox and VGA cable. It works every time. I think its likely that we'll try to standardize most conference rooms using Extron equipment, ensure most users are familiar with it and staff just large public events or those events with visiting faculty to ensure their success. This means we'll need to invest in our conference rooms to bring them all up to a consistent level. The capital to do that has been challenging to obtain, but I believe striking a better balance between consistent self service infrastructure and staffed presentation support will increase customer satisfaction and better utilize the time of skilled AV staff.
3. What is our video streaming strategy?
We're increasingly asked to host video streaming for grand rounds, resident report, simulations, operating room procedures and continuing medical education. BIDMC does not have internal video streaming hosting. Harvard Medical School hosts video streaming for 1600 courses and has made this infrastructure available to BIDMC on a case by case basis.
Our challenge is what to do with streaming video available to the public. Providing streaming servers and networks with appropriate bandwidth is no problem for a few hundred students, but what about public video streams that could be watched by thousands. We'd used Akamai for public streaming and self hosted infrastructure at HMS for private streaming. HMS uses several tools to automate the video streaming process including AnyStream and Apreso. We currently host Real formats, but in the future we'll likely host additional Flash formats.
4. What is our podcasting strategy?
HMS currently has an audio podcasting infrastructure to support 1600 courses. We're investigating video podcasting, but at present there is limited demand. The value of watching a talking head while listening to a lecture on an iPod is limited. We've recently developed public podcasts too.
5. What is our new media strategy?
Historically, media services at BIDMC provided graphic design for public web pages. This resulted in graphically interesting pages, but the designs were so sophisticated that real-time revisions were not possible. We're moving our external website to a content management system so that complex HTML coding and graphic design are no longer required. This may result in less graphically interesting pages, but it makes real-time changes easy. Currently, we do not offer Flash programming services, but we do offer photography, video encoding, and basic graphic design. In the future, it probably makes sense for Media Services to produce still images, Flash animations, and streaming video to support websites using our new content management system.
6. What's the right mix of services to provide?
Creating the right mix of poster services, graphics services, photography, video, and presentation services is challenging. At both HMS and CareGroup, we've been thinking about this, realizing that presentation services (both self service and full service) are important, and that various digital graphics services will be increasingly important.
7. What media services do patients expect?
Media services at BIDMC include providing television services to the hospital. Patients now expect additional services such as movies on demand, video game services, and web access via in-room televisions. At present, we offer wireless to all patients at no charge, but do not provide rentable laptops. We're investigating partnerships to provide additional media services to patients.
8. What should we insource and what should we outsource?
Clearly, presentation services are something we must insource because demand is unpredictable and services are often needed at the last minute. Flash and high-end collaboration services might best be outsourced. We're thinking about the best balance of internal and external staff to support basic versus advanced digital media services.
9. How should we divide the role of web content management?
At BIDMC, Corporate Communications has taken the lead in our web content management project. They will lead the governance effort which oversees departmental production of content. Media Services has taken a lesser role in web design and daily web content production. In the future, it's likely that content production will be divided among departments, Corporate Communications, Media Services, and outsourced high-end content providers.
10. How do we optimize management of the team?
Given the evolving roles of media services, the need for presentation support, and the increasing customer demand for advanced digital services, how do we manage a forward-thinking and agile media services team? At HMS, we're considering adding additional operational team leadership such as an "air traffic controller" to schedule each day's events and ensure all presentation services are well orchestrated. At BIDMC, we're considering the addition of a "new media" specialist.
Working together, the management and staff of Media Services will refine their skill sets, optimize their alignment with customer needs, and evolve to meet the demands of a web 2.0 world.
Performance Measurement
A few weeks ago, I was asked to describe the way we do performance measurement at BIDMC.
Our strategy is simple - all our clinical systems are built on hierarchical databases and our decision support databases are relational, updated by nightly extracts of our clinical systems.
Clinical data is inherently hierarchical - patients have many visits, with many labs, with many observations. This creates a tree of data - a hierarchy that does not fit nicely into columns and rows. Doctors typically ask questions about individual patients, such as their most current results. Retrieving data the way it is stored, hierarchically, is blazingly fast. We retrieve patient-specific data from terabytes of historical information in milliseconds.
Hierarchical data is great for clinical care, but not so wonderful for decision support. Asking a question such as "how many patients in the past 10 years have had a creatinine of 2.0 and a Hemoglobin A1c greater than 9.0" would require that every lab result ever done be examined one at a time.
For population health and performance measurement, querying an indexed relational database that is optimized for reporting makes the most sense. To enable this, we've created numerous data marts based on nightly extracts of our clinical and financial systems. Our current data marts include admissions, ED, outpatient appointments, OR, laboratory, microbiology, blood bank, radiology, cardiology procedures, inpatient pharmacy, outpatient medications, billing and payroll.
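To make the contrast concrete, here's a minimal sketch of the population question above expressed against a relational lab mart. The table, columns, and in-memory SQLite engine are illustrative assumptions for this post, not our production schema:

import sqlite3

# Hypothetical "lab_results" mart: one row per result, indexed for
# population queries (illustrative schema only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE lab_results (
        patient_id   INTEGER,
        test_name    TEXT,    -- e.g. 'creatinine', 'hemoglobin_a1c'
        result_value REAL,
        result_date  TEXT     -- ISO-8601 date
    );
    CREATE INDEX idx_lab ON lab_results (test_name, result_value, result_date);
""")

# The question from the text: patients in the past 10 years with a
# creatinine of 2.0 and a Hemoglobin A1c greater than 9.0. Here it is one
# indexed self-join; in a hierarchical store it would mean walking every
# patient's entire result tree.
count = conn.execute("""
    SELECT COUNT(DISTINCT cr.patient_id)
    FROM lab_results cr
    JOIN lab_results a1c ON a1c.patient_id = cr.patient_id
    WHERE cr.test_name  = 'creatinine'     AND cr.result_value  = 2.0
      AND a1c.test_name = 'hemoglobin_a1c' AND a1c.result_value > 9.0
      AND cr.result_date  >= date('now', '-10 years')
      AND a1c.result_date >= date('now', '-10 years')
""").fetchone()[0]
print(count)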
We use these data marts for 3 kinds of queries:
1. Expert analysis using relational query tools such as SAS, Access, and SQL Reporting Services. We have a team of decision support professionals reporting to the CFO and a team of dedicated IS analysts performing these queries. Creating such queries requires an understanding of the quality of the underlying data, its source, and its meaning. Years ago, I hired a new analyst who noted that the average length of stay in the operating room was 120 days. The person did not know that length of stay in ORs is measured in minutes.
2. Parameterized web-based reports that can be run by anyone. When a new report has been developed by an analyst and is ready for broader distribution, we go through a process to make it available via our web-based Performance Manager tool. This typically involves developing result rollups to enhance performance, writing parameterized stored procedures, and developing a web page interface that allows users to select parameters via dropdowns and checkboxes and to easily navigate drilldowns and trending. Performance Manager has over 150 web-based reports which allow untrained users to create accurate reports and explore results at the touch of a button. Reports include financial performance (discharged not final billed, ED and inpatient volumes, gross patient services revenue by cost center), clinical performance (antibiotic resistance and sensitivity, ED throughput) and even IS uptime.
3. Self-service queries via a drag and drop tool. In 2008, we're launching a graphical tool which enables user-defined queries of our data marts for clinical research. This tool will return counts of patients that can be used for pre-research investigation, such as ensuring enough data is available to conduct an actual study. IRB approval would be required for any more detailed information. The drag and drop interface includes enough metadata to limit the queries to those data elements which make logical sense, building in the expertise of our analysts while allowing untrained researchers to do de-identified data mining, protecting patient confidentiality.
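To show the pattern behind the second and third methods, here's a sketch of a parameterized, count-only query: user choices are bound as parameters rather than concatenated into SQL, a metadata whitelist limits what can be queried, and only aggregate counts come back. It reuses the hypothetical lab_results table from the earlier sketch; none of these names are our actual Performance Manager code.

ALLOWED_TESTS = {"creatinine", "hemoglobin_a1c"}   # the metadata layer

def cohort_count(conn, test_name, min_value, since_date):
    # Reject anything the metadata doesn't expose to self-service users.
    if test_name not in ALLOWED_TESTS:
        raise ValueError("test not available for self-service queries")
    row = conn.execute(
        """SELECT COUNT(DISTINCT patient_id)
           FROM lab_results
           WHERE test_name = ? AND result_value >= ? AND result_date >= ?""",
        (test_name, min_value, since_date),   # safely bound parameters
    ).fetchone()
    return row[0]   # a de-identified count, never patient-level rows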
Using these three methods, we provide the data needed to empower our quality reviews, our process improvement efforts, and clinical research. Of course, we do not sell data or ever release data to third parties without patient consent. All our secondary uses of data are reviewed by our privacy officer, our IRB, and our security professionals. In this way, we ensure that all data in our enterprise is used on a need to know basis, following HIPAA to the letter of the law.
Monday, February 18, 2008
Electronic Health Records for Non-owned Doctors - Planning for Distributed Users
This is the fourth entry in my series about electronic health records for non-owned doctors. Today's topic is about supporting hundreds of clinicians, spread over a wide geographical area with varying levels of IT infrastructure and technology savvy.
CIOs of academic healthcare facilities are used to highly controlled and predictable environments. We oversee the quality of service from end to end. Desktops have a managed image with updated anti-virus software. The network is physically secured in closets we control, using fiber and cables we install. Our teams and our management are optimized to deliver services that are consistent and standardized.
The Electronic Health Record project for non-owned doctors requires a different approach. The initial 300 doctors in 173 physical locations spread over 450 square miles have diverse needs and heterogeneous access to infrastructure. Some already have computers and wired and wireless networks. Most do not. Those in rural areas may have limited access to bandwidth, making business DSL their only choice for connectivity.
The alternatives we considered for serving these geographically distributed users were:
- Expand our current IS offsite team which currently focuses on BIDMC owned clinicians and those occupying BIDMC leased space. This matrix illustrates the different kinds of physicians we support and the services we currently provide.
- Negotiate group purchasing agreements with vendors and make these available to clinicians, reducing heterogeneity but not providing installation and management of the infrastructure. Physicians could hire local consultants, family members, or do it themselves.
- Outsource the infrastructure of these practices to a firm specializing in managing the IT needs of independent clinicians
We weighed the benefits and costs of each approach and elected to outsource infrastructure to Concordant.
Here's our thinking:
-Geographically distributed practices needing 24x7 support would require a large internal team to provide a high service level, weekend coverage and vacation coverage. Although we are currently planning on 300 clinicians, that number might expand to 500 or 1000, hence scaling up with agility would be challenging, especially in a job market where many hospitals are competing for skilled IT professionals.
-Our current offsite group is extremely good and focused on providing infrastructure and application services to sites we operate. Expanding this group to support a very different kind of practice with very different infrastructure would dilute their current focus.
-Enabling these distributed offices to purchase their own equipment and establish their own local infrastructure could be disastrous. Guaranteeing service levels means that we must have an understanding of the network performance, desktop configuration, and local infrastructure (printers, scanners, fax machines) of each office.
Our plan is to operate a highly reliable hosted electronic health record, housed in a commercial co-location facility, and make it available to each of these practices via the public internet without having to create network or telecom connections ourselves. At each office location, however, the desktops and the wired and wireless networks will be completely homogeneous and managed by Concordant. We'll leverage the scale of the project to obtain the best discounts possible from hardware vendors. We'll even retire existing office hardware to achieve homogeneity. Help desk services will be staffed by Concordant, so we will not need to train our existing help desk staff to support these distributed non-owned clinicians.
We elected not to place servers in any clinician offices, since physician offices do not have backup power, environmentally controlled server rooms, or appropriate physical security for machines hosting the data. Our plan is to maintain a central hardware depot, assemble all the equipment needed for an office, deliver it, and configure and test it. Everyone wants to minimize on-site support, but some on-site service will still be needed for hardware failures and very "high-touch" support. Remote support and monitoring techniques, combined with our centralized architecture, should keep those on-site visits to a minimum.
It is our hope that a dedicated outsourced infrastructure service, optimized for the needs of geographically distributed small physician offices, will work better and cost less than expanding our existing IS teams or enabling physicians to do it themselves. It also enables us to track costs more closely, since there is a strict separation between support for owned and non-owned sites. Our first non-owned sites go live in June, and I'll let you know how it goes.
Provider Order Entry
Last week, I served on a panel to discuss the Massachusetts Technology Collaborative's release of its study about the benefits of Computerized Provider Order Entry (CPOE).
The study concluded that one in every 10 patients admitted to six Massachusetts community hospitals suffered serious and avoidable medication mistakes. This has created a new urgency for all hospitals in the state to install CPOE.
At BIDMC, we implemented CPOE in 2001 and have not had a handwritten order in most areas (we are just launching inpatient chemotherapy POE, planning NICU POE, and discussing pre-op holding area POE, since these have very specialized workflows), except for the 2 days of our network outage in 2002. Implementing CPOE is challenging and requires significant planning to do it right. Here are my top 10 lessons learned about CPOE implementation, based on 7 years of doing it.
1. Bad software, implemented badly, can cause bad results
You're probably familiar with the Cedars Sinai CPOE rollout failure and the Pediatrics article linking CPOE to increased mortality.
These studies are not about the failure of CPOE; they are about the failure to deliver software that meets clinician needs. Clinicians need easy-to-use, intuitive software that enhances their workflow. In the case of Cedars, the software was slow and cumbersome. The workflow was so confusing that nurses did not know when orders were placed and asked doctors to place orders multiple times. In the case of the Pediatrics article, the software was archaic and challenging for physicians to use correctly. At BIDMC, we took a lesson from Amazon.com. If you can order a DVD in one click, why should a renally dosed antibiotic, heparin, or insulin be any different? We engineered a quick picks system that enables a doctor to click on a drug name, then have it dosed, interaction checked, and routed to the pharmacy in one click. Pick the right patient from a dashboard, click the drug name, done. We've not seen any errors using web-based, intuitive software that automates a logical workflow.
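For flavor, here's a minimal sketch of the one-click idea: picking a drug from a quick-pick list triggers renal dosing, interaction checking, and routing to the pharmacy in a single call. The drug, doses, thresholds, and data structures are hypothetical placeholders, not BIDMC's actual rules or code.

from dataclasses import dataclass, field

@dataclass
class Drug:
    default_dose: str
    renal_dose: str
    renal_threshold: float          # CrCl (mL/min) below which we adjust
    interactions: set = field(default_factory=set)

@dataclass
class Patient:
    mrn: str
    creatinine_clearance: float
    medications: set = field(default_factory=set)

FORMULARY = {   # one illustrative quick pick
    "levofloxacin": Drug("750 mg daily", "750 mg q48h", 50.0, {"warfarin"}),
}

def one_click_order(patient, drug_name, pharmacy_queue):
    """Pick the patient, click the drug name, done."""
    drug = FORMULARY[drug_name]
    dose = (drug.renal_dose
            if patient.creatinine_clearance < drug.renal_threshold
            else drug.default_dose)                   # automatic renal dosing
    alerts = drug.interactions & patient.medications  # interaction check
    pharmacy_queue.append((patient.mrn, drug_name, dose))  # route to pharmacy
    return dose, alerts

queue = []
pt = Patient("MRN-123", creatinine_clearance=38.0, medications={"warfarin"})
print(one_click_order(pt, "levofloxacin", queue))  # renal dose plus a warfarin alert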
2. CPOE is a platform, not a product
Shortly after going live with CPOE, we established the Provider Order Entry Triage Committee (POET) to prioritize new development. Every day we're asked to add new features that support new workflow, clinical resource management, research, and compliance requirements. Every change must be analyzed for its impact on clinicians. For example, if a new clinical trial requires that we capture the hair color of every patient on every order, we'll add hundreds of keystrokes to every provider's day. The POET committee ensures the right balance between safety, compliance, functionality, and clinician impact. Assume that your CPOE system will be very dynamic, with continuous revision of the decision support rules. CPOE governance committees can:
- Manage clinician expectations regarding their suggested changes to programs and the availability of resources to make the changes.
- Work with a clinical practice committee so that the CPOE committee does not bog down in adjudicating clinical issues, or creating a fix for one group of docs, only to find that another stakeholder group does not agree.
- Consider human factors to ensure the learned responses hold true across applications -- e.g., if there is renal dose adjustment for one type of drug, does the adjustment happen for all renally excreted drugs?
- Anticipate and build in reports from POE -- topic, questions, and recipient of each report. Our data from the transfusion screen was extremely powerful for the Transfusion Committee to improve practice, because we were able to capture overrides in monthly reports and target education where appropriate.
3. Clinicians will not go to training
Clinicians are time bankrupt. Requiring them to go to a half day CPOE training will cause resentment and will not result in much knowledge retention. The right way to train clinicians is in the field as they are using the system. When we rolled out CPOE, we staffed our nursing stations with roving IT professionals 24x7 for 6 weeks. As doctors entered orders, these trainers were available to help them with real patients, resulting in a successful first time ordering experience. You only have one chance to make a good first impression and having trainers elbow to elbow with clinicians is the best way to achieve a positive outcome.
4. Doctors will feel a loss of autonomy
Experienced clinicians note that medicine is both art and science. Cookbook medicine which follows strict guidelines may not be personalized care. During implementation, some clinicians will feel a loss of autonomy as protocols, care paths, and order sets replace handwritten orders. Our experience is that 85% of orders suggested by CPOE are accepted by clinicians without revision. Once clinicians realize they can create customized pick lists and that the computer provides value-added decision support, the feelings of loss of autonomy disappear. It's important that the decision support be tuned just right - having thousands of rules that warn physicians about every potential minor side effect will cause 'cry wolf syndrome' and doctors will ignore the decision support. Having too few rules will make the doctors question the value of the system. In our case, we've used a few hundred rules that seem to strike an appropriate balance.
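Here's a toy sketch of that tuning idea, assuming hypothetical severity labels: alerts below a chosen severity threshold are suppressed so the ones that do fire retain clinicians' attention.

# Severity labels and the threshold are illustrative assumptions.
SEVERITY = {"contraindicated": 3, "major": 2, "moderate": 1, "minor": 0}
ALERT_THRESHOLD = 2   # surface only 'major' and 'contraindicated'

def alerts_to_show(candidate_alerts):
    """candidate_alerts: iterable of (message, severity_label) pairs."""
    return [msg for msg, sev in candidate_alerts
            if SEVERITY[sev] >= ALERT_THRESHOLD]

print(alerts_to_show([
    ("warfarin + levofloxacin: increased INR risk", "major"),
    ("antacids may reduce absorption", "minor"),
]))   # only the major interaction surfaces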
5. Big Bang IT never works
Going live with CPOE across an entire hospital in one day would be a nightmare. The degree of training, communication, and management of workflow change really requires a phased rollout. We picked logical clusters of related units to implement together, e.g., medicine floors, surgical wards, related specialty floors, ICUs, the ED. This worked well because our staff could be present to train clinicians in real time, coordinate additional hardware rollouts where needed to enhance workflow, and ensure any software issues were resolved rapidly. It did create some inconvenience during the transition. If a patient was transferred from a medicine service (on CPOE) to an Ortho service (not on CPOE) and back, the interns on the teams had to move the patient from electronic to paper to electronic workflows. This was a small price to pay for the 100% clinician acceptance of CPOE we achieved through a phased rollout.
6. CPOE must be a system created by the clinicians, not inflicted on them
One of the major problems with the Cedars Sinai rollout was that the administration created the software and then forced doctors to use it. Even worse, the administration planned to resell the software once the doctors had worked the bugs out. The doctors revolted and refused to use the system, which was perceived as a moneymaker for administrators. At BIDMC, we engaged key thought leaders from the medical executive committee, nursing, pharmacy, social work, lab, and radiology in the design of the system, so it was perceived as the clinician's system, not administration's system. When it went live, many doctors were eager to show off 'their system'. The Medical Executive Committee even voted to require use of the system as part of hospital practice, since it was widely perceived as improving safety without burdening the clinicians.
7. Many CPOE systems are a toolkit without rules
Many commercial CPOE systems are 'some assembly required'. They provide a container for rules, but do not come with an initial set. You can establish internal committees to build best practice rulesets, purchase rules from vendors such as Zynx or First Data Bank, or use rules others have created, such as ours.
8. CPOE decision support is only as good as the data available
Decision support depends upon accurate medical history. Safe drug dosing requires a current medication list, updated allergies, creatinine and other current labs, a problem list, and even genomic testing results. This means that all aspects of the hospital information system must be fully integrated into CPOE to achieve the best result. There is no such thing as a standalone CPOE system and it's best that CPOE be purchased as part of an integrated hospital information system.
9. Infrastructure must be reliable
Before CPOE, we could schedule downtimes on Sundays between 2am-4am for maintenance. After CPOE, no downtime is acceptable, since downtime implies that orders for medications, labs, radiology, transfer etc. cannot be placed. We've implemented redundant networks, redundant servers, clustered databases and even redundant data centers to ensure CPOE is available 24x7x365. A note to CIOs - implementing CPOE means that you'll now be carrying a pager to ensure real time response to any interruption of service.
10. Automating a bad process does not improve anything
When I was a resident, I was told that heparin should be dosed as a 5000 unit bolus then an infusion of 1500 units per hour for every patient. I was not taught about relating heparin dosing to body mass index, creatinine clearance, or the presence of other medications. Unfortunately, it often took days to get the heparin dosing right because 5000/1500 is definitely not a one-size-fits-all rule. Creating an automated CPOE order for 5000/1500 is not going to improve the safety or efficacy of heparin dosing. Implementing a new protocol for dosing based on evidence that includes diagnosis, labs, and body mass index will improve care. Our experience is that it is best to fix the process, then automate the fixed process. By doing this, no one can blame the software for the pain of adapting to the process change.
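As a sketch of the difference, compare the fixed order to a weight-based protocol. The 80 units/kg bolus and 18 units/kg/hr infusion follow the widely published weight-based heparin nomogram; the 10,000-unit bolus cap is an illustrative safety limit, not a universal rule, and a real protocol would also adjust to labs and diagnosis.

def heparin_fixed():
    # The one-size-fits-all order from my residency.
    return {"bolus_units": 5000, "infusion_units_per_hr": 1500}

def heparin_weight_based(weight_kg, bolus_cap=10_000):
    # Weight-based nomogram: 80 units/kg bolus, 18 units/kg/hr infusion.
    return {"bolus_units": round(min(80 * weight_kg, bolus_cap)),
            "infusion_units_per_hr": round(18 * weight_kg)}

print(heparin_fixed())            # identical for a 50 kg or 130 kg patient
print(heparin_weight_based(50))   # {'bolus_units': 4000, 'infusion_units_per_hr': 900}
print(heparin_weight_based(130))  # capped bolus, much higher infusion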
Our experience with CPOE over the past 7 years is that it has reduced medication error by 50%, it paid for itself within 2 years, and clinicians have embraced it. In 2008-2009 we're completing the bar coding for all our unit dose medications (including repackaging every dose of Tylenol in bar coded baggies) so we can scan the patient wrist band, scan the medication, and scan the nurse, achieving a completely automated medication administration record. Once this is complete, the last causes of medication error will be removed from our hospital and we hope to achieve a truly zero rate of adverse drug events.
Thursday, February 14, 2008
Cool Technology of the Week
This is a somewhat unusual Cool Technology entry.
Last night, three bloggers - Paul Levy, Jessica Lipnack, and I - were invited to attend a performance of Shakespeare's Julius Caesar at the American Repertory Theatre (ART) in Cambridge, Massachusetts.
Paul wrote about the leadership lessons learned from the play's theme and dialog.
Jessica wrote about the network of people who made the ART production happen.
This leaves me to talk about the engineering behind this Avant-garde production.
The scenery was pure 1960's, reflecting the surroundings of the Loeb Theater, which was built in 1964. The set included two-tone leather couches and a dinette complete with moon chairs. The backdrop mirrored the banked seats of the theater, blurring the distinction between stage and audience.
The set changes included dynamic transitions of the background, with curtains falling into place to create the perfect 1960's pied-à-terre, right out of an Austin Powers film. Just how do curtains, walls, lights, microphones, and furniture move around during productions like Julius Caesar? The answer is a theatrical fly system.
A fly system is a collection of ropes, counterweights, pulleys, and other such tools within a theater designed to quickly move objects on and off stage by 'flying' them in from the area above the stage, known as a flyspace, flyloft, fly tower, or fly gallery. In this case, the entire theater backdrop rose and fell, and walls of curtains were raised and lowered as needed to create different room views. To think that all these seamless movements were accomplished with ropes and pulleys is pretty cool.
Another interesting technology was used to create the Quentin Tarantino-like spattering of blood as Julius was assassinated and the conspirators covered their hands in his blood. Just how does Hollywood (or Shakespearean theater) create the illusion of injury and bleeding? They use a technology called blood squibs. Balloons filled with stage blood are burst using small, electrically activated explosives. The conspirators in Julius Caesar eventually meet their own bloody ends, and the production did a remarkable job of creating subtle but sanguine changes in clothing throughout the final act using blood squibs.
In the final act of the play, an entire Cadillac limousine is lowered from the ceiling, in homage to the 1960's-era Ant Farm performance art installation Cadillac Ranch, an artwork which included several images of JFK. The juxtaposition of 1960's-era Kennedy themes with the assassination of Julius Caesar was striking.
Just how does a theater raise and lower a Cadillac limousine from the ceiling? The answer is four one-ton winches mounted in the ceiling.
Theatrical flies, blood squibs, and high powered winches - cool technologies when "The play's the thing!"
Rapid Application Development with Facebook
"Rapid Application Development" and "Extreme Programming" are buzzwords for new ways to deliver software that meets initial user requirements and continues to improve based on customer feedback. These approaches turn the IT department into an agile and forward thinking service provider.
The typical approach to software selection - requirements definition, an RFP process, pilots, implementation, integrated testing, and go-live - can take 18 months, and by the time the software is in use, the initial requirements may have changed. For some applications, rapidly prototyping a solution and then iteratively releasing new versions can deliver more functionality faster than traditional approaches. Given the budget, staffing, and integration challenges facing most IT departments, mounting an agile response to organizational imperatives is challenging. Is there a disruptive technology solution to this?
However odd this sounds, the answer may be Facebook.
A case in point: BIDMC is enhancing its external website and is currently preparing an RFP for online giving software. At 8am last Sunday, our BIDMC CEO, Paul Levy, created an online giving page using the Facebook Causes application.
It's already been used by hundreds of people and the funds are beginning to roll in.
The IT department did not need to be involved, other than to confirm that the experiment was safe, secure, and worth doing.
Facebook is a perfect example of a rapid application development platform that empowers users to help themselves. It includes tools for creating groups, forums, multimedia uploads, viral marketing, fundraising, and group mailing lists using any web browser, on any operating system, for free.
Should CIOs embrace it as a short term solution to many of the user requests for collaboration technologies?
The answer is yes, with caveats. Facebook is neither a HIPAA business associate nor a covered entity, so protected healthcare information should not be placed on Facebook. There is no service level agreement or quality of service guarantee, so it may go down without notice (unlikely, but possible). It does not integrate with enterprise single sign-on based on Active Directory or LDAP.
However, these issues are not real barriers to supporting the ad hoc collaborations that are often needed by organizations to start projects, create a social network of internal staff, or support a discussion forum.
Should CIOs try to replicate Facebook functionality on internal portals? For circumstances that involve patients, require guaranteed application availability, or demand integration with existing systems, the answer is yes. But for others, there is an important reason why Facebook should be considered as part of rapid application development:
So many people are using Facebook at this point (60 million), that many users will resist using any other social networking software. They may even demand Facebook in lieu of corporate solutions so that all social networking activity - inside and outside the office - is integrated in one place.
In my next generation of portal frameworks, I will support our own versions of all the Web 2.0 functionality (forums, wikis, groups, multimedia uploads) that is in Facebook, but I will also ensure that Facebook itself is used strategically. Staying agile and responsive to my customers requires that I embrace Facebook, not resist it.
Tuesday, February 12, 2008
Biometric Authentication
Last week, my BIDMC CEO Paul Levy posted a question in his blog about the utility of fingerprint biometrics for USB storage drives. This raises the more global issue of the usefulness of biometric authentication in hospitals.
Today, authentication at BIDMC and Harvard Medical School is done with a strong username and password - the usual alphanumeric, mixed-case password which must be changed frequently, cannot be repeated, is not an English word, etc. Complex passwords are great on desktops, but work less well on mobile devices without keyboards or in crisis situations. Trying to type an 8-character password on a tablet while the patient is crashing can be very anxiety-provoking.
Over the past 5 years, I've worked with various biometric technologies including fingerprint scanning, iris scanning, hand geometry, and facial recognition. My experience has been mixed. In general, biometrics have been:
-immature, hard-to-support technology
-challenged by false positives (granting access inappropriately) and false negatives (denying access inappropriately), impacting user acceptance of the technology
-characterized by a lack of integration with existing enterprise security systems
However, new products are being introduced which have caused us to re-evaluate biometrics.
Clinicians find the fingerprint an easy-to-use authentication method when they are in a hurry. It has 3 positive attributes:
-you're unlikely to forget your finger at home
-although identity theft of a fingerprint is theoretically possible, we can "reset" the password by selecting another finger (it's like having 10 different passwords)
-since laptop data theft is a highly visible problem, protecting laptop logins with a fingerprint scan seems like a good security practice
There are issues:
-As we further deploy this technology, we'll have to review our policies and procedures. For example, if biometrics were used to encrypt corporate-issued laptops, the employee termination procedures would need to change to ensure access to the "finger" is available to recover the system.
-Recovery of a "lost" fingerprint (due to injury or absence) can be problematic for an institution.
-Non-contact biometrics might be better in healthcare settings for infection control.
We've tested Omnipass in the Emergency Department as a way to accomplish authentication using multiple methods - fingerprint or username/password, all linked to our enterprise Active Directory (AD). Omnipass supports central storage of fingerprint scans and maps them to AD users. It also provides secure authentication of web pages.
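The general pattern is worth sketching: try a fingerprint match first, fall back to username/password, and resolve either path to the same directory identity. The toy in-memory directory below is a hypothetical stand-in, not the Omnipass or Active Directory API.

class DemoDirectory:
    """A toy stand-in for the enterprise directory (illustrative only)."""
    def __init__(self):
        self.passwords = {"jdoe": "s3cret!"}
        self.fingerprints = {"template-abc": "jdoe"}   # enrolled scan -> user

    def match_fingerprint(self, template):
        return self.fingerprints.get(template)

    def check_password(self, user, pw):
        return self.passwords.get(user) == pw

def authenticate(directory, fingerprint=None, username=None, password=None):
    # Fingerprint first (the fast path at the bedside), then fall back to
    # username/password; both resolve to the same directory account.
    if fingerprint:
        user = directory.match_fingerprint(fingerprint)
        if user:
            return user
    if username and directory.check_password(username, password):
        return username
    return None

d = DemoDirectory()
print(authenticate(d, fingerprint="template-abc"))           # jdoe
print(authenticate(d, username="jdoe", password="s3cret!"))  # jdoe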
The issue we had in our pilot was the multi-step process to log into Omnipass, then log into our ED dashboard application, then log out of Omnipass. For a workflow where the user has the tablet for hours, this wouldn't be a problem. For Emergency Department workflow, the user picks up the tablet, uses it for 3-5 minutes, then puts it down. A 1-minute login/logoff process eliminates the time savings of using a portable device.
For those seeking early experimentation with biometrics, I recommend a pilot of fingerprint scanning. Iris scanning requires more expensive hardware, hand geometry is harder to deploy, and facial recognition is much more experimental technology.
Saturday, February 9, 2008
Electronic Health Records for Non-owned doctors - Cost modeling
The third entry in my series on the 10 critical aspects of providing electronic health records to non-owned doctors is about modeling the costs of the project.
Based on the informatics literature, the initial implementation cost of an EHR for private practices averages between $40,000 and $60,000 per provider, and maintenance averages $5,000-$10,000 per provider per year. Using these numbers, the total EHR implementation cost for our 300 non-owned doctors could be $12-$18 million, plus $1.5-$3 million per year in ongoing costs. Of course, this includes the total costs paid by the hospital and by the practices. To understand the economics of the project, we need to inventory all the costs involved and who pays them. Stark safe harbors provide some guidance here, since Stark separates costs into those which can be shared with hospitals and those which must be paid by the providers themselves. Up to 85% of implementation costs, excluding office hardware, can be funded by the hospital. Hardware and most ongoing costs must be paid by the providers. We must also consider what costs the hospital should absorb for planning, legal work, and infrastructure to offer EHR services to non-owned doctors. These startup costs are nearly the same for 30 or 300 doctors, so they are not easily computed on a per-provider basis.
Initial Costs
1. Startup costs to be funded by the hospital
Planning
Legal costs
Hosting Site hardware and operating system software
2. Practice implementation costs to be shared between the hospital and practices
Software licensing fees
Technical Deployment services and Workflow design services
Project Management costs
Training costs
Interface costs
Data conversion costs
3. Practice implementation costs to be funded entirely by the practice
Hardware local to the practice
Ongoing Costs
1. Maintenance costs to be funded by the hospital
Hosting Site staffing and hardware lifecycle maintenance
2. Support costs to be shared per Stark
Help desk
Practice consulting support
3. Support costs to be funded entirely by the practice
Hardware service and support
Network connectivity
Of course, each of these categories and subcategories has its own detailed analysis. The "hydraulics" of our model must take into account the goals of the stakeholders - the hospital has a fixed capital budget and wants to connect as many doctors as possible. Doctors want as much subsidy as possible. Given the hospital contribution of x million, and a doctor's ability to pay of y thousand, we need to compute the subsidy level and number of doctors included in the rollout. To help with this decision we're dividing our budgets for all the categories above into fixed startup costs and marginal costs to add 100 doctors. We're also categorizing all costs as subsidizable or non-subsidizable.
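To illustrate the hydraulics, here is a minimal Python sketch of the budget arithmetic. Every dollar figure is a hypothetical placeholder, not our negotiated pricing; the 85% cap is the Stark subsidy limit described above:

# Given a fixed hospital capital budget and a per-doctor ability to pay,
# how many doctors can the rollout support? All amounts are placeholders.

STARK_SUBSIDY_CAP = 0.85             # hospital may fund up to 85% of eligible costs

fixed_startup = 1_500_000            # planning, legal, hosting site (hospital-funded)
subsidizable_per_doctor = 40_000     # software, deployment, training, interfaces
non_subsidizable_per_doctor = 8_000  # office hardware (practice-funded)

hospital_budget = 10_000_000         # the "x million" of hospital capital
doctor_ability_to_pay = 15_000       # the "y thousand" per doctor

# The hospital pays at most 85% of subsidizable costs; the doctor pays the
# remainder plus all non-subsidizable costs.
hospital_share = subsidizable_per_doctor * STARK_SUBSIDY_CAP
doctor_share = (subsidizable_per_doctor - hospital_share) + non_subsidizable_per_doctor

assert doctor_share <= doctor_ability_to_pay, "subsidy level exceeds what doctors will pay"

max_doctors = int((hospital_budget - fixed_startup) // hospital_share)
print(f"hospital pays {hospital_share:,.0f} per doctor; doctor pays {doctor_share:,.0f}")
print(f"doctors supported within budget: {max_doctors}")

With these placeholder numbers, the hospital pays $34,000 per doctor and each doctor pays $14,000, so a $10 million budget supports 250 doctors after $1.5 million of fixed startup costs.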
Over the next 90 days, we'll do our best to achieve economies of scale, negotiate appropriate vendor pricing, and document acceptable service levels. Our Governance committees will review the final pricing to ensure we've achieved a balance of hospital costs, practice costs, and service. We'll also refine our cost models by documenting all the costs we experience in our pilots this Spring.
Our internal staff and external collaborators are doing a remarkable job documenting the costs. We'll know soon if it is possible to use the capital budgets the hospital has available to create an EHR product at a price clinicians are willing to pay.
Friday, February 8, 2008
Always Look on the Bright Side
Every day as a CIO, I inevitably receive unpleasant emails. I truly wish I could receive emails like
"The network and the servers have been running flawlessly for the past year. Congratulations on zero downtime"
but alas, no one is likely to send such an email.
The CIO has the challenge of delivering flawless operational performance while also managing constant change. It's a bit like changing the wings on a 747 while in flight.
I have an appropriate budget which is prioritized by excellent governance committees, and a yearly operating plan that is only occasionally interrupted by the "Tyranny of the Urgent" due to compliance, quality, or strategic opportunity mandates, but I still receive daily complaints such as:
"The Spam filters are too lax since I still receive some junk mail, but by the way, you need to let my eBay transactions through"
"My brother in law will offer me an Owuga 3000 desktop computer at a cheaper price, why are you using Dell and Lenovo hardware?"
"I need to surf pornography sites as part of an NIH funded research study and you should not restrict my academic freedom"
"My application, although not funded and not reviewed by any governance process, is your highest priority"
"I did not tell you that we needed network, telephones, desktops, and new applications by next week but now it's your emergency. I'm headed out to my vacation, let me know how it goes."
To all such complaints, a knee-jerk response might be:
"Your bad planning does not constitute an emergency on my part"
or
"Every project is function of funding, scope, and time. You've provided no funding, so your project will either have zero scope or take infinite time"
but the CIO needs to respond:
"Thanks so much for your thoughtful email. There is a process to evaluate your request and I will personally supervise your request during that process. Your peers and the clinical leadership of the entire organization will evaluate your request based on
Return on Investment
Quality/Compliance
Staff/Patient/Clinician impact
Strategic importance"
Every time I have responded to angry email with emotion I have regretted it. Although it may feel good to respond to a negative email with a venomous answer, emotion is never appropriate. I tell my staff that if they ever feel emotion while writing an email, "save as draft". Get someone else to review the response first. Send it after a day of rest.
Rather than judge the quality of every day based on the negative email I receive, I ask about our trajectory. Have we moved forward on our yearly and five year plan? Has today had 10 good things and only 5 bad things? Do I have my health, my family, and my reputation?
No matter how bad the week, the answer to all of these questions is always yes. Our trajectory is always positive.
With a positive trajectory in mind, a non-emotional response to every issue is easier. If a CIO ever uses profanity, a raised voice, or escalation to the CEO, the CIO diminishes himself/herself.
You can always recover from a bad day, but you cannot always recover from a bad email. Just ask Neal Patterson.
Thus, keep a stiff upper lip, have a thick skin, and run each day based on your trajectory not the position of your ego. And remember, "save as draft."
"The network and the servers have been running flawlessly for the past year. Congratulations on zero downtime"
but alas, no one is likely to send such an email.
The CIO has the challenge of delivering flawless operational performance while also managing constant change. It's a bit like changing the wings on a 747 while in flight.
I have an appropriate budget which is prioritized by excellent governance committees, and a yearly operating plan that is only occasionally interrupted by the "Tyranny of the Urgent" due to compliance, quality, or strategic opportunity mandates, but I still receive daily complaints such as:
"The Spam filters are too lax since I still receive some junk mail, but by the way, you need to let my eBay transactions through"
"My brother in law will offer me an Owuga 3000 desktop computer at a cheaper price, why are you using Dell and Lenovo hardware?"
"I need to surf pornography sites as part of an NIH funded research study and you should not restrict my academic freedom"
"My application, although not funded and not reviewed by any governance process, is your highest priority"
"I did not tell you that we needed network, telephones, desktops, and new applications by next week but now it's your emergency. I'm headed out to my vacation, let me know how it goes."
To all such complaints, a kneejerk response might be:
"Your bad planning does not constitute an emergency on my part"
or
"Every project is function of funding, scope, and time. You've provided no funding, so your project will either have zero scope or take infinite time"
but the CIO needs to respond
"Thanks so much for your thoughtful email. There is a process to evaluate your request and I will personally supervise your request during that process. Your peers and the clinical leadership of the entire organization will evaluate your request based on
Return on Investment
Quality/Compliance
Staff/Patient/Clinician impact
Strategic importance"
Every time I have responded to angry email with emotion I have regretted it. Although it may feel good to respond to a negative email with a venomous answer, emotion is never appropriate. I tell my staff that if they ever feel emotion while writing an email, "save as draft". Get someone else to review the response first. Send it after a day of rest.
Rather than judge the quality of every day based on the negative email I receive, I ask about our trajectory. Have we moved forward on our yearly and five year plan? Has today had 10 good things and only 5 bad things? Do I have my health, my family, and my reputation?
No matter how bad the week, the answer to all of these questions is always yes. Our trajectory is always positive.
With a positive trajectory in mind, a non-emotional response to every issue is easier. If a CIO ever uses profanity, a raised voice, or escalation to the CEO, the CIO diminishes himself/herself.
You can always recover from a bad day, but you cannot always recover from a bad email. Just ask Neal Patterson.
Thus, keep a stiff upper lip, have a thick skin, and run each day based on your trajectory not the position of your ego. And remember, "save as draft."
Wednesday, February 6, 2008
Cool Technology of the Week
As I travel around the world, I stay connected via WiFi, EVDO, EDGE, broadband, and dialup. I'm often roaming from place to place on low-speed networks, switching from WiFi to EDGE, and having to close all applications, drop VPN sessions, and even reboot as I travel to ensure robust application functionality.
The Cool Technology of the Week is Netmotion's Mobility XE Mobile VPN.
What is a mobile VPN? It's a VPN solution engineered to deal with the reality of wireless networks such as wireless security, coverage gaps and performance.
The technology consists of a server behind the firewall in the DMZ of your data center and a small laptop client application.
Whenever you login remotely, the Mobility XE server establishes a virtual IP address for your VPN session and persists this session for as long as you need it. With a persistent session, applications believe your IP is constant even if you switch networks, lose signal or hibernate your laptop. Mobility XE's InterNetwork Roaming is tightly integrated with network persistence, application session persistence and single sign-on authentication so that users do not lose application sessions or have to re-login when they traverse networks, go in and out of network range, or suspend and resume their devices.
It also works across VLANs on an intranet, so a mobile computer on wheels can connect to applications anywhere in the hospital without requiring a reboot or new login.
Additionally, the Mobility product can apply granular policies by system, application, user, or network address range. This provides the ability to enforce quality of service or restrict certain applications to a given address space. For example, we can block high-bandwidth applications while on WiFi in a clinical area, but allow them when the device moves into an office area where there is no clinical impact.
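To illustrate the core idea, here is a conceptual Python sketch of how any mobile VPN might pin a session to a virtual IP while the physical endpoint changes underneath it. This is illustrative only, not Mobility XE's actual implementation:

# The application only ever sees the virtual IP; the server tracks whichever
# physical address the roaming device currently has. Addresses are made up.

class MobileVpnServer:
    def __init__(self):
        self.sessions = {}           # virtual_ip -> current physical address (or None)
        self.next_host = 1

    def login(self, user):
        """Authenticate once and allocate a persistent virtual IP."""
        virtual_ip = f"10.99.0.{self.next_host}"
        self.next_host += 1
        self.sessions[virtual_ip] = None
        return virtual_ip

    def roam(self, virtual_ip, new_physical_addr):
        """Device switched networks (WiFi to EDGE, etc.); the session survives."""
        self.sessions[virtual_ip] = new_physical_addr

    def suspend(self, virtual_ip):
        """Device hibernated or lost signal; keep the session parked."""
        self.sessions[virtual_ip] = None

    def deliver(self, virtual_ip, packet):
        addr = self.sessions.get(virtual_ip)
        if addr is None:
            return "queued until the device reappears"  # application session stays alive
        return f"forwarded to {addr}"

server = MobileVpnServer()
vip = server.login("clinician")
server.roam(vip, "wifi:172.16.4.20")
print(server.deliver(vip, "EHR screen update"))   # forwarded over WiFi
server.suspend(vip)
print(server.deliver(vip, "EHR screen update"))   # queued, no re-login required
server.roam(vip, "edge:10.210.8.5")
print(server.deliver(vip, "EHR screen update"))   # forwarded over EDGE

A policy engine like the one described above would sit in this delivery path, deciding per application and per network whether to forward, throttle, or block traffic.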
In the past at BIDMC, we implemented GRE tunneling and Wireless LAN Solution Engine (WLSE) components to enable roaming, but these are complex and have significant management overhead. A mobile VPN provides a more robust roaming experience while simplifying network architecture and administration. That's why it's the cool technology of the week.
Web Content Management Systems
In a previous post, I lamented that I had not rapidly adopted Web 2.0 for all my enterprises, making everyone an author, editor or publisher.
To help accelerate our Web 2.0 efforts, my web teams investigated Web Content Management Systems (CMS), which offer an integrated suite of page creation, wiki, blog, forum, and other distributed publishing tools. We evaluated offerings from Microsoft, Ektron, SiteCore, Documentum and others. The end result of our evaluation was to use SiteCore for content management in combination with the free Microsoft Windows SharePoint Services 3.0 tools.
Our requirements for a CMS were:
- Integration with our existing .NET/SQL Server 2005 web applications and SOAP services written in other platforms
- A distributed publishing model which enables delegated content management by every person in the organization, with review by an editor before it is published to the public
- Development, staging and production platforms which enable us to rigorously test our websites before publishing
- Support for our home-built single sign-on application that works via AJAX with any web form based authentication
- A robust "what you see is what you get" editor to support narrative text, graphic design, and multimedia
- A very easy to use authoring and publishing system with an intuitive user interface that does not require training
- User configurable business rules as to who can author, edit and publish, as well as a workflow that supports lifecycle management of content
- A truly thin client approach that works on every browser and every operating system
- An architecture that enables our web sites to be clustered within a data center and replicated across multiple data centers for disaster recovery
- Authentication via our LDAP/Active Directory systems
Our plan is to convert our existing external websites to this new platform and gain a consistent navigation paradigm, enhanced search capability and common look/feel to every page. The most important aspect of this project is a new governance model which will distribute content authoring and maintenance to every department, overseen by project managers in Corporate Communications. As we change the governance model, we'll also be able to delete our outdated content which has made searching our 10,000 page website less than perfect.
I'm a strong advocate of a web content management strategy based on a distributed authoring model, driven by a workflow engine, with robust processes to ensure only accurate/updated content is available to internal and external search engines.
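As a sketch of what such a workflow engine enforces, consider this Python state machine. The states and roles are hypothetical, not SiteCore's actual object model:

# Distributed authoring: anyone can draft, only an editor can publish, and
# retired content drops out of the search index. Illustrative only.

ALLOWED_TRANSITIONS = {
    ("draft", "author"): "in_review",      # author submits for review
    ("in_review", "editor"): "published",  # editor approves and publishes
    ("in_review", "author"): "draft",      # author withdraws for rework
    ("published", "editor"): "archived",   # editor retires outdated content
}

def transition(state, role):
    new_state = ALLOWED_TRANSITIONS.get((state, role))
    if new_state is None:
        raise PermissionError(f"role '{role}' cannot act on content in state '{state}'")
    return new_state

def searchable(state):
    """Only published pages are exposed to internal and external search engines."""
    return state == "published"

state = "draft"
state = transition(state, "author")   # draft -> in_review
state = transition(state, "editor")   # in_review -> published
assert searchable(state)
state = transition(state, "editor")   # published -> archived; gone from search
assert not searchable(state)

The point of the model is that delegation is safe: hundreds of authors can create content, but nothing reaches the public site or the search index without an editor's approval.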
My experience is that most patients use Google to find content on the web rather than navigate a website, so doing a complete reorganization of our content into a database-backed authoring system that is easily spidered by Google will really help our patients find the information they are looking for.
At the same time we're implementing this wholesale revision of our external site, we're also revising our internal site to include collaboration tools, group calendaring, wikis, blogs, and customization. Using a combination of SiteCore and Microsoft's Windows SharePoint Services 3.0 tools, we hope to offer our internal stakeholders a much richer experience that supports departmental information management including Web 2.0 community interaction.
2008 will be the year of Web 2.0 for all my organizations and commercial Content Management Systems will help.
Monday, February 4, 2008
My Top 10 rules for Schedule triage
In previous posts, I've described my approach to email triage.
Here's my approach to triage of my daily schedule.
My meetings are typically scheduled from 7a to 6p, followed by dinner with my family until 8p, followed by email, reading and writing until 1am. My assistant and I schedule each day using the following rules:
1. Leave 50% of the schedule available for the events of each day - in a complex organization, many operational issues arise each day that are easier to resolve 'just in time' than via meetings scheduled a week later. I try to reserve 50% of my time for real time response to strategic issues, ad hoc meetings, phone calls, and opportunities, doing today's work today.
2. Manage vendor relationships - I receive a hundred requests for vendor meetings each day. My assistant triages these. My approach to vendors is that I select a few close vendor partners through exhaustive research and then really cultivate those relationships. If my selected vendor partners need me to alpha test products, speak to their staff about our needs, or comment on their strategy, I'm available to do so. If new vendors cold call me, I cannot take their calls, although I will review their products if they email me information to read asynchronously. My message to new vendors is that I'll contact them when I'm ready to discuss their products based on my review of electronic briefing materials.
3. Evaluate the impact factor - Every day I receive numerous requests to speak, travel and write. I evaluate the impact factor of each of these requests. How many people will I reach? Based on the audience, what positive change might result? Will there be an opportunity to discuss issues with detractors? As I've said in previous posts, I embrace debate and controversy, since resolving conflict can have great impact.
4. Serve those who serve you - In the course of my jobs at HMS, CareGroup, NEHEN, MA-Share and HITSP I depend upon hundreds of people. These people often work long hours, endure inconvenient travel, and sacrifice their personal time to work on projects I lead. I do whatever I can to support them whenever they ask me to speak, attend specialty society meetings, or write articles.
5. Leverage travel - Travel is miserable today. Each year, I fly 400,000 miles and I leverage every minute of that travel. I try to cluster many meetings, speaking engagements, and events around each trip. If I'm on the West Coast, I group all my San Francisco, Los Angeles, and San Diego meetings together into a 2 day cluster.
6. Use the interstitial time - Each day is filled with gaps between meetings, walking from place to place, and driving. I use this time as much as possible by filling it with hallway conversations, wireless email, and calls from the hands-free Bluetooth microphone in my Prius.
7. Keep focused on the important issues - The tyranny of the urgent creates distractions every day, but I stay focused on our yearly operating plan and 5 year plan. When I look at each week's schedule on Sunday night, I make sure that all the important issues are pre-scheduled into my week.
8. Debrief after every day - At the end of every day, I review my important issues list and review the progress and next steps on each issue. By doing this, I minimize the number of forgotten followups and dropped balls.
9. Respond to each email each day - I do not know the answer to every question that I'm asked via email, but I respond to each one with a description of the process I've initiated to get an accurate answer. This ensures that every person who emails me knows that I've acknowledged their question, even if an answer may take a few days to determine.
10. Never be the rate limiting step - In my schedule there is always time to resolve open issues, settle a political conflict, or answer an operational question. I close every day with an empty desk, an empty voice mailbox and an empty email queue. This enables all my staff to be as efficient and productive as possible since they are not waiting for me.
These 10 triage rules work most of the time to keep my schedule sane and stakeholders happy. Of course there are times when travel cannot be clustered or there are more meetings and urgent issues than hours in a day, but on average, my day is well balanced.
Electronic Records for Non-Owned Doctors - Governance
As promised last week, I will blog each week about the 10 critical aspects of our project to provide a hosted electronic health record solution for non-owned clinicians, one of the most challenging projects facing hospitals nationwide. This week's entry describes our project governance.
The needs of many stakeholders must be balanced to ensure the success of this project. The hospital wants to support as many clinicians as possible using its capital budgets most efficiently. Community clinicians want to minimize the financial and operational impact of the project on their practice. IT staff must manage their hospital-based projects and infrastructure while expanding their scope to new offsite locations.
Governance is critical to establish priorities, align stakeholders, and set expectations. To support this project we created two governance committees - a steering committee and an advisory committee.
The steering committee is comprised of senior executives from the hospital and physicians' organization, since it is truly a joint effort of Beth Israel Deaconess Medical Center (BIDMC) and the Beth Israel Deaconess Physicians' Organization (BIDPO). BIDMC representatives include the CFO, the CIO, the SVP of Network Development and the IT project manager. Physicians' organization members include the President, the Executive Director, and the Chief Medical Officer of BIDPO. This committee provides oversight of legal agreements, financial expenditures, project scope, timelines, and resources. It is co-chaired by the CIO and the Executive Director of BIDPO, who jointly sign off on all expenditures. The BIDMC and BIDPO boards provide additional oversight of the committee chairs.
The advisory committee is comprised of prospective community physician users of the electronic health record system. Since our community network is comprised of 300 non-owned Boston-based physicians, clinicians in the western suburbs, and clinicians in the southern part of the state, we have representatives of each group on the committee. The committee focuses on making the project really work for the practices while also meeting the needs of the physician organization's clinically integrated network model. Its role is to review our "model" office templates, help us prioritize the implementation order of practices, and make recommendations on policies. As with every project, we use our standard project management tools, including a charter for each committee.
Since this project is so challenging and requires a precise blend of economics, information technology and politics, the governance committees are the place to ask permission, beg forgiveness, and communicate progress on every milestone. This is especially true of the complex cost model, which shares implementation expenditures equally between the hospital and physicians' organization, subsidizing private clinician costs to the extent we are able based on Stark safe harbors. As you'll see in next week's EHR blog entry, the costs are diverse, and deciding who pays and how much cannot be done by IT, the hospital, or the physicians alone. It's truly a role for transparent, multi-disciplinary governance committees.
Managing Consulting Engagements
In previous blogs, I've mentioned the importance of project management. Every IT project, no matter how large or small, needs an assigned single point of contact for the IT department who can resolve day to day project issues and orchestrate communication. As I've said, not every project needs a Gantt chart and I'm dubious about the value of centralized project management offices for IT departments, but assigning an IT project manager and using a set of standardized project management tools are very important prerequisites for successful projects.
Consulting engagements need to be managed using the very same approach. All consulting projects need an IT project manager, a steering committee, and a project charter which documents the reason the consultants have been hired. For very politically challenging consulting engagements, the CIO can serve as the catalyst to start the project, but I do not recommend that the CIO serve as the IT project manager. The level of detail required to manage consultants requires more dedicated time than most CIOs have each day.
Here's the structured approach I recommend to manage consultants
1. Scope - All the stakeholders involved in the consulting engagement must agree on an unambiguous scope for the project. The steering committee for the engagement should meet and agree on this scope before the consultants are engaged. This scope should be described in the project charter, along with the governance that will be used to escalate questions about scope. Only by actively managing scope can consulting costs be controlled.
2. Deliverables - The result of a consulting engagement should be clearly described deliverables such as a finalized software product selection, a thoroughly researched whitepaper, or a comprehensive policy. The entire consulting engagement should be managed toward the production of these deliverables including interim review of drafts as often as possible. Mid course corrections of interim deliverables are always easier than a wholesale revision at the end of the process.
3. Interview Plan - Consultants, no matter how well intentioned, are disruptive to the day to day work of an organization since they need to meet with many stakeholders on an aggressive schedule to gather the information they need for their deliverables. The project manager overseeing the consulting engagement should work closely with the consultants to create a draft interview plan. This interview plan should specify the person, their role, and the questions to be answered.
4. Inform Superiors - The steering committee of the consulting engagement needs to review the draft interview plan, concur with the interview choices and ensure the managers of the interviewees are informed that the interviews will be scheduled. Typically, the project manager can send an email on behalf of the steering committee to the managers of the interviewees, so that all concerned realize the importance of the engagement.
5. Inform Interviewees - The managers of the interviewees should inform them of the purpose of the interviews and the need to schedule meetings with the consultants promptly. Urgent scheduling minimizes the cost of consultant time. The reason I prefer the direct managers to notify the interviewees is that most employees are reluctant to speak with consultants promptly unless they are told by their managers that they can defer their other work to make time available for the consultants. Circulating a draft list of questions to each interviewee ahead of time is always helpful.
6. Conduct interviews - The interviews should be grouped by physical location to minimize consultant travel time. There are pros and cons to onsite versus phone interviews. Onsite interviews build a sense of team and establish relationships between the consultants and key stakeholders. However, onsite interviews generally require travel and hotel expense. Phone interviews are often easier to schedule and execute. Generally, we schedule the first consultant meetings with key stakeholders onsite and then follow-up meetings by phone.
7. Weekly deliverable check in - Every week, the steering committee (or an executive subset of the steering committee) should meet by phone to discuss the progress of the engagement and the status of deliverables. These weekly meetings are essential to rapidly resolve project roadblocks and clear up any misunderstandings.
8. Daily communication as needed - the IT Project Manager should be available to all stakeholders by email and phone to respond to daily issues as they arise. Interviews may need to be rescheduled, consultants may step on political landmines requiring escalation, and logistical details may need clarification.
9. Draft deliverable review - the entire steering committee should meet in person at a midpoint in the consulting engagement to review all the draft deliverable work and make recommendations about the final format, content, and timing of the deliverables, ensuring they align with the agreed upon scope.
10. Final communication of the deliverables and next steps - Once the deliverables are completed and reviewed by the committee, they should be broadly communicated to all the stakeholders involved in the engagement. Each interviewee will be more likely to participate in future engagements if they see the results of their input and understand next steps. After each consulting engagement, I summarize the key points from the deliverables so that everyone in the organization learns from the work and understands what we received for our money.
As a final note, I want to re-emphasize that consultants create a lot of work for internal staff, including those who manage the consultants and those who provide the documents requested during interviews. If anyone believes that consultants can simply parachute into an organization, do their work without disrupting operations, then depart, they have not been on the receiving end of a consulting engagement!