Friday, December 31, 2010
Many innovative companies are creating novel healthcare applications for smart phones. One of the coolest I've seen adapts the camera on a smart phone to serve as a microscope with a 1.2 micrometer resolution - sufficient to see and count white blood cells and bacteria.
Aydogan Ozcan at UCLA has worked for many years to bring high tech tools to low tech places. A cell phone microscope that costs about $14 to produce can provide laboratory services in isolated locations without the financial resources to purchase diagnostic equipment.
You can imagine clinicians in malaria endemic areas using cell phone microscopes to send blood smear photomicrographs to consultants. Given the ubiquity of cell phone networks throughout the developing world, such technology has real promise.
Last year, I worked with the government of New Zealand on their Focus on Health Competition. One of the competitors was Pictor Ltd, which created diagnostic products using microarrays of color-changing reaction wells that require only microliter quantities of blood or other fluids to be tested. The microarray can be photographed and read with a smart phone.
Making a smart phone into a microscope, hematology lab, and a chemistry lab. That's cool!
Thursday, December 30, 2010
Cleaning Outdoor Clothes
Now that it's winter, I'm wearing base layers, soft-shells, and hard-shells to keep warm while hiking, skiing, and winter mountaineering.
Recently, while hiking in a wintry mix of snow, sleet, and rain, I noticed that my 5 year old Gore-tex jacket was wetting out - the water was not beading off the surface.
Admittedly, the manufacturer of my shells recommends washing them after every 10 to 12 days of hard use or every 20 to 30 days of light use. They also recommend applying durable water repellent (DWR) treatment when water stops beading off the fabric. Since I've climbed every peak in New England in winter conditions over the past 5 years, it was definitely time to wash them for the first time (I know, that sounds disgusting). I'd never washed Gore-tex hard-shells or Power Shield soft-shells, so I had to do some research.
Here's what I found.
To prepare the garment for washing, close the main zippers and pit zips, open pocket zippers, and release tension on all elastic draw cords. You should follow the washing instructions on the garment label, which are likely to be cryptic international symbols. Here's a "Laundry Rosetta Stone" that tells you everything you need to know.
In my case, my Arcteryx Alpha SV Jacket instructions told me to wash the garment on a medium heat setting (40°C). Arcteryx recommended a free-rinsing soap or non-detergent cleaning agent to wash Gore-tex. The washing product should be free of surfactants and detergents, fabric softeners, enzymes, perfumes, or whiteners, since these chemicals tend to be hydrophilic (attract water) and can reduce the effectiveness of the durable water repellent (DWR) treatment on your garment. Specifically, they recommended Granger's Performance Wash. If you only have access to normal laundry soaps, simply rinse the garment with a second rinse cycle to completely remove any residual cleaning chemicals.
Once the jacket was clean, I needed to reapply the durable water repellent treatment. DWR is a polymer substance applied to the face-fabric of a Gore-tex garment. Arcteryx recommended Granger's XT Proofer spray because the technologies complement the garment's original DWR treatment. They do not recommend using a wash-in DWR treatment.
After washing, I closed all zippers, hung the wet garment on a hanger, and sprayed Granger's XT Proofer evenly onto the wet face fabric of the garment. Next, I placed the garment in a tumble drier on a medium heat setting (40°C) for 40 minutes (yes, you can tumble dry Gore-tex garments safely). The heat maximizes the effectiveness of the DWR treatment.
After washing, DWR treating, and drying, my Gore-tex jacket looked and worked like the day I purchased it. The process was so successful that I repeated it with all my soft-shells and wind shells.
Now that I'm a Gore-tex cleaning expert, I'll wash my outdoor gear a bit more often. Once in five years is definitely not recommended!
Wednesday, December 29, 2010
Defining Business Requirements
In my recent blog about consultants, I highlighted the work of Robert X. Cringely, who noted that most IT projects fail at the requirements stage. This is a topic worth its own blog post.
In my roles at various institutions, I've had the opportunity to work with thousands of highly diverse stakeholders. Some are IT savvy, some are not. Some are project management savvy, some are not. Some understand leading practices for their particular departmental functions, some do not.
Here's what I've learned.
1. Automating a dysfunctional manual process will not yield a successful performance improvement outcome. Before any technology project is launched, the business owners need to understand their own process flows and goals for improving them.
2. If business owners cannot define their future state workflows, software is not going to do it for them. Sometimes, business owners tell me "I need to buy a wonderful niche software package from XYZ vendor." When I ask how they will use it, they answer that the software will define their workflow for them.
3. The IT department can impose governance and project management processes to ensure that future state workflows and requirements are defined prior to any procurement processes. However, the business owners who are least experienced with project management methodology will accuse the IT department of slowing down the purchase. One way around this is to create an institutional project management office outside of the IT department which serves as a bridge between the business owners and the IT organization providing the service. Such an approach adds expert resources to the department requesting automation to lead them through a requirements definition process as a first step. Projects without clear requirements and goals can be stopped before they expend time and money on an implementation that is likely to fail.
4. Some departments will try to circumvent governance prioritization and project management processes by contributing departmental funds, obtaining a grant, or getting a donor to fund their software purchases. Such an approach should not be allowed, for many reasons. Software licensing is generally about 20% of the total implementation cost, which includes hardware, configuration, interfacing, testing, training, and support costs. Every software implementation is a project and needs to be considered a use of scarce IT resources. It is reasonable to initiate an automation request through a project management office to define business requirements and goals, then present it to a governance process for prioritization, then fund the total project costs via departmental/grant/donor dollars if the project is deemed a high priority for implementation.
5. Creating formal documentation of business requirements, goals/success metrics, and forecasted financial impact is important to establish ownership of the project by the sponsoring department. Although infrastructure projects such as provisioning networks, desktops, storage, and servers can be owned by the IT department, application projects should never be owned or sponsored by the IT department. The business owner, working with the institutional project management office, needs to drive the implementation to achieve the desired process improvement and to ensure appropriate change management. If the project is considered an IT effort, then business owners will claim their lack of requirements definition or process redesign is an IT failure based on poorly designed or implemented software.
Thus, however unpopular it makes the CIO, insist on business owner sponsorship with defined requirements, goals, and accountability for process and people change management. Every project I've been involved in that includes this role for the business owner has been successful. With clearly defined responsibilities and accountability, customer satisfaction with these projects has been high, because business owners feel compelled to make the project a success rather than expect IT to deliver a finished project to them.
Tuesday, December 28, 2010
A Secure Transport Strawman
Over the past few years, I've posted many blogs about the importance of transport standards. Once a transport standard is widely adopted, content will seamlessly flow per Metcalfe's law. We already have good content standards from X12, HL7, ASTM, and NCPDP. We already have good vocabulary standards from NLM, Regenstrief, IHTSDO and others. We have the beginnings of transport standards from CAQH, IHE, and W3C. We have the work of the NHIN Direct Project (now called the Direct Project).
After working with Dixie Baker and the HIT Standards Committee's Privacy and Security Workgroup on the Direct evaluation, and after many email conversations with Arien Malec, I can now offer a strawman plan for transport standards.
Based on the implementation guides currently available, the HIT Standards Committee evaluation found the SMTP/SMIME exchange defined by the Direct Project sufficiently simple, direct, scalable, and secure, but stressed the need to develop implementation guidance that is clear and unambiguous. I've received many emails and blog comments about SMTP/SMIME versus other approaches. I believe I can harmonize everything I've heard into a single path forward.
As with all HIE efforts, policy has to constrain technology. The policy guidance that the Direct Project was given was as follows:
A "little guy" such as a 2 doctor practice in rural America wants to send content to another 2 doctor practice across town. These small practices should not have to operate servers or have to pay for a complex health information exchange infrastructure. Healthcare Information Services Providers (HISPs) should provide them the means to exchange data as easily as Google provides Gmail or Verizon FIOS provides ISP service. All HISP to HISP communications should be encrypted such that the sending practice and receiving practice can exchange data without any HISP in the middle being able to view the contents of the data exchanged.
In my opinion, for this type of exchange:
Small Practice 1 ---> HISP 1 ----> HISP 2 ----> Small Practice 2
SMTP/SMIME at an organizational level is the right transport solution. By organizational level, I mean that one certificate is used for the sending organization and one for the receiving organization. There is no need to issue certificates to individual people involved in the exchange.
SMTP/SMIME at an organizational level encrypts, integrity protects, and digitally signs the payload at the point where the message is created. The payload can be sent through multiple intermediaries to the receiver with assurance that the message will be readable only by the intended receiver.
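To make the mechanics concrete, here is a minimal Python sketch of the sending side under this model. It is illustrative only, not the Direct Project's reference implementation: the file names, certificates, email addresses, and HISP host are placeholders, and it shells out to the openssl CLI for the S/MIME operations.

```python
# Illustrative sketch: sign-then-encrypt a clinical payload with organizational
# certificates using the openssl CLI, then hand the result to the sender's HISP
# over plain SMTP. All file names, addresses, and hosts are placeholders.
import smtplib
import subprocess

# Sign the payload with the sending organization's key and certificate.
subprocess.run(
    ["openssl", "smime", "-sign",
     "-in", "ccd.xml",                       # the clinical payload
     "-signer", "practice1_cert.pem",        # sending organization's certificate
     "-inkey", "practice1_key.pem",          # sending organization's private key
     "-out", "signed_payload.txt"],
    check=True)

# Encrypt the signed payload so that only the receiving organization can read it.
subprocess.run(
    ["openssl", "smime", "-encrypt", "-aes256",
     "-in", "signed_payload.txt",
     "-from", "referrals@practice1.example",
     "-to", "intake@practice2.example",
     "-subject", "Referral summary",
     "-out", "message.eml",
     "practice2_cert.pem"],                  # receiving organization's certificate
    check=True)

# Relay the already-encrypted message through the local HISP. Neither HISP can
# read the payload because encryption happened before the message left Practice 1.
with open("message.eml") as f:
    message = f.read()
with smtplib.SMTP("hisp1.example", 25) as smtp:
    smtp.sendmail("referrals@practice1.example",
                  ["intake@practice2.example"], message)
```

The important point is that the payload is signed and encrypted before it ever reaches HISP 1, so no intermediary can read it in transit.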
Given the policy guidance to support the little guy, enabling any practice in the country to send any content securely to any other practice without risk of viewing by any intermediary, SMTP/SMIME is sufficient and appropriate.
For other types of exchanges with different policy constraints, TLS is more flexible and functional. In Massachusetts, NEHEN is a federated HIE, enabled by placing open source software within the networks of each participating institution. Server to Server communication is a SOAP exchange over TLS. In this case, the HISP resides within the firewall of each participating payer or provider organization. TLS enables simple, secure transmission from one organization to another. TLS does not require a certificate store. TLS enables REST, SOAP, or SMTP transactions to flow securely because the connection between organizations is encrypted.
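For contrast, here is a hedged sketch of the TLS model using Python's standard ssl module: the channel between two organizational nodes is mutually authenticated and encrypted, and whatever protocol rides on top (SOAP in NEHEN's case) inherits that protection. Host names and certificate files are placeholders, not NEHEN's actual configuration.

```python
# Minimal sketch of a mutually authenticated TLS client connection, the model
# used for server-to-server exchange. Hosts and file names are placeholders,
# and the SOAP body itself is omitted.
import socket
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.load_verify_locations("trusted_partner_ca.pem")          # CA that signed the partner's cert
context.load_cert_chain("our_org_cert.pem", "our_org_key.pem")   # our side of the mutual authentication

with socket.create_connection(("nehen-node.partner.example", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="nehen-node.partner.example") as tls:
        # Any protocol (SOAP, REST, SMTP) can now flow over the encrypted channel.
        tls.sendall(b"POST /soap/endpoint HTTP/1.1\r\nHost: nehen-node.partner.example\r\n\r\n")
        print(tls.recv(4096))
```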
Where TLS falls down is in the Direct use case with its policy requirements that no intermediaries between the sender and receiver may have access to unencrypted data. This excludes the case in which the sender uses a HISP as a business associate to package the data as an SMIME message. A sender has no way of knowing what intermediaries the information may flow through, so implementing secured message flows from any sender to any receiver using TLS is untenable.
Thus, our path forward is clear. If we impose a policy constraint that small organizations which use external HISPs should be able to send data securely to other small organizations which use external HISPs such that the HISPs cannot view the data, then SMTP/SMIME with some mechanism to discover the certificates of each organization is the right answer at this time.
If the use case is simpler (secure exchange between HISPs that reside within the trading partner organizations, or a relaxation of the policy constraint that HISPs cannot view data), then TLS is the right answer at this time.
The next steps are also clear. Establish SMTP/SMIME as a standard, and make secure email using SMTP/SMIME a certification criterion for EHR technology. Establish standards for X.509 certificates for organization-to-organization exchanges, as suggested by the Privacy and Security Tiger Team.
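As a purely illustrative example of what an organization-level certificate might look like, the sketch below generates a self-signed X.509 certificate with the openssl CLI. A production Direct deployment would instead submit a certificate signing request to a trusted certificate authority, and the subject fields shown are hypothetical.

```python
# Hedged sketch: issue an organization-level X.509 certificate and key pair.
# Subject fields and file names are illustrative only; real deployments would
# use a CSR signed by a trusted CA rather than a self-signed certificate.
import subprocess

subprocess.run(
    ["openssl", "req", "-new", "-x509", "-newkey", "rsa:2048", "-nodes",
     "-days", "365",
     "-subj", "/O=Small Practice 1/CN=direct.practice1.example",
     "-keyout", "practice1_key.pem",
     "-out", "practice1_cert.pem"],
    check=True)
```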
There you have it - the solution to the transport standards issue for the country - SMTP/SMIME for little guys using external HISPs and TLS for other use cases.
Done! Now it's time to implement and foster adoption.
Monday, December 27, 2010
Group IQ
Early in my career, I thought the path to success was making a name for myself - creating software, ideas, and innovation that would be uniquely associated with me. My mentors gave me sage advice to ban the "I" word from my vocabulary and to focus on developing high functioning organizations. A single individual can come and go, but organizations can scale to enormous size and last beyond the strengths, weaknesses, and longevity of any one person.
Many books have been written about building organizations - Built to Last, In Search of Excellence, and The Innovator's Dilemma - but the idea I find most compelling was described in a recent Boston Globe article about Group IQ.
"Group intelligence, the researchers discovered, is not strongly tied to either the average intelligence of the members or the team’s smartest member. And this collective intelligence was more than just an arbitrary score: When the group grappled with a complex task, the researchers found it was an excellent predictor of how well the team performed."
When I was doing graduate work, I thought that a successful leader should be the smartest person in the room. I've learned that the best leaders hire people who are smarter than themselves. Good leaders revel in the capabilities of teams to exceed the leader's own capabilities. Poor leaders surround themselves with underperforming teams which support the leader's need to feel superior. This leads to the notion that "Grade 'A' leaders hire grade 'A' teams and grade 'B' leaders hire grade 'C' teams".
Group IQ is a great concept. I'm very proud of the way IS teams approach tricky problems, resource allocation decisions, and out of the blue compliance requirements that are unstaffed, unbudgeted, and "must do".
The best test of whether teams are high functioning is watching their response to a crisis. Is there infighting? Is there jockeying for leadership when responding to the event? Does one person dominate the conversation?
I've watched time after time when teams of IS professionals come together, each assuming a mutually supportive role. Depending on the issue, the person with the most experience on the team runs the activity. Everyone contributes their ideas and is respected for whatever they say - right or wrong. If there are emotions, the team rallies to support the good ones and defuse the bad ones. No one is blamed for human error - it's used to improve processes in the future.
My experience with highly functional groups is that people have to be hired for their emotional quotient (EQ) as well as their intelligence quotient (IQ). The only thing you can do wrong during a team activity is to impede the work of others. Criticism of ideas is encouraged, criticism of people is not.
The only way we could survive 2010, a year that tested both patience and stamina, was by relying on Group IQ to shrug off the naysayers, maintain course and direction, and keep an upbeat attitude through it all.
Thanks to all the IS teams who work with me - you have a great Group IQ!
Friday, December 24, 2010
Reflections on Christmas Eve
As I sit in an old Morris chair, sipping a La Marca Prosecco and watching the glow of the Christmas tree, I'm beginning to unwind and reflect on the tumultuous 2010 that's drawing to a close.
It was a year of thousands of pages of new Federal HIT regulations, of debate and consensus on standards, of new privacy requirements, of ever-increasing demand for increasingly complex IT services, and of escalating expectations for speed/reliability/efficiency in everything that we do.
It was a year of recovery from the economic doldrums of 2009, with a gradual increase in new jobs and budgets, but still a bit of uncertainty.
The stress, the pace, and global competitiveness were unsettling to many, leading to a lack of civility and patience.
Whether you think 2010 was a year of innovation, chaos, or anxiety, there is one thing we can all agree upon - 2010 was a year of incredible change.
As I tell my staff and all my colleagues, it's impossible to evaluate a year based on the events of any given day. Don't consider your position today, consider your trajectory over the past year.
My family is all on track. My parents are fully recovered from their November hospitalizations and are healthy and happy. My daughter was admitted to Tufts University. My wife is running the NK Gallery in Boston's South End and she's creating new art. I'm balancing my home life, work life, and personal life in a way that is satisfying and invigorating. I'll be 50 next year and I do have 3 grey hairs but otherwise I'm medication free and my body mass index is still 20.
Nationally, we've implemented regulations for content, vocabulary and security standards. We have a consensus approach to transport standards using SMTP/SMIME through NHIN Direct. EHR adoption is climbing steadily. Boards throughout the country are talking about the interoperability, business intelligence, and decision support that are needed to support the healthcare future described in the Affordable Care Act/Healthcare reform.
In Massachusetts, we've agreed upon a short term and long term HIE governance model. We're procuring workgroup facilitation and a program management office to oversee our statewide "entity level" provider directory, certificate management, and standards/conformance/certification.
At Harvard Medical School, we've created one of the top 100 supercomputers in the world, deployed a petabyte of storage, automated numerous administrative, educational, and research workflows, and established governance committees for all our stakeholders.
At BIDMC, we're on the home stretch of our community EHR rollout, our massive lab project, our major replacement of our extranet and intranet, and our disaster recovery efforts. We've kept our infrastructure stable, reliable and secure. We enhanced our enterprise clinical, fiscal, administrative, digital library, and media applications in a way that kept most users satisfied. Every compliance, regulatory, and e-discovery need was met.
In many ways, this Christmas is a milestone. My daughter leaves for college next Summer, so we'll have an empty nest. 2010 was an especially stressful year due to ARRA funded projects, Meaningful Use, Harvard's LCME reaccreditation planning, BIDMC's Joint Commission reaccreditation, and more policy/technology change in a single year than any year before.
The details of the good and bad, the joys and sorrows, the triumphs and defeats are fading in importance because the trajectory is uniformly good.
Based on my definition of the Good Life, getting the basics right and ensuring every member of your family feels good about themselves, all is calm, all is bright.
So hug your family members, pat yourself on the back, and toast with your favorite beverage. 2010 has been a year of great change, but we did all we needed to do and the world is a better place because of it.
May 2011 be a year of satisfying work, a little less anxiety, and a boost to your own sense of self worth.
Cheers!
Thursday, December 23, 2010
Selecting a Home Weather Station
My parents live in a windy area near the beach in Southern California. With a prevailing 12 mph wind, there is a reasonable possibility that a small wind generator to reduce their energy bills makes sense.
My father would like to make the case to the city council and needs to collect weather data. His requirements are:
-Solar powered, wireless sensor array for wind, rain, temperature, and barometric pressure
-Indoor real time data console with a trend display
-Computer interface from the console which enables long term archiving of data for analysis
I've compared all the major manufacturers (Davis Instruments, Oregon Scientific, LaCrosse Technology). Choices are very limited for a solar powered sensor array with computer interfacing. You'll find a great discussion at the Weathertracker website.
What did I ultimately buy him?
1. Davis Instruments Vantage Vue $395.00 (a new, updated, modern product that is easy to install and maintain)
2. Davis Instruments WeatherlinkIP $295.00 (a direct to web interface)
The computer interfacing question was challenging. Davis Instruments offers 3 options:
a. Windows based USB interface - simple, easy to install, and includes full featured software for Windows 2000/XP/7.
b. Windows based Serial interface - can also be used via a serial-to-USB adapter with Mac OS X, but this can be problematic. If the OS changes, there is a chance the serial or USB drivers will not work.
c. A Cloud/web-based option - data is uploaded via IP from the console to a data management application and publicly available website hosted by Davis Instruments. Data can be downloaded and archived directly from the website - no direct computer interfacing or drivers are needed. It works on any operating system and browser.
I chose the Cloud option. Thus my father will be tracking the clouds in the Cloud! I've temporarily installed it in Wellesley and you can follow the weather at my home via Weather Underground.
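If you later want to analyze the data offline, a small script can poll the hosted site and build a local archive. The sketch below is an assumption-laden illustration: the export URL is a placeholder (the real download link comes from the WeatherlinkIP or Weather Underground station page), and the CSV format will differ in practice.

```python
# Minimal sketch of option (c): periodically download the station's data from
# the hosted website and append it to a local CSV archive for later analysis.
# EXPORT_URL is a hypothetical placeholder, not a real Davis or Weather
# Underground endpoint.
import csv
import time
import urllib.request

EXPORT_URL = "https://example.com/weatherlink/export.csv"   # placeholder URL
ARCHIVE = "weather_archive.csv"

def archive_once():
    with urllib.request.urlopen(EXPORT_URL) as response:
        rows = response.read().decode("utf-8").splitlines()
    with open(ARCHIVE, "a", newline="") as f:
        writer = csv.writer(f)
        for row in csv.reader(rows[1:]):    # skip the header row on each pull
            writer.writerow(row)

if __name__ == "__main__":
    while True:
        archive_once()
        time.sleep(3600)                    # pull once an hour
```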
Wednesday, December 22, 2010
Simple, Direct, Scalable, and Secure Transport - S/MIME versus TLS
The HIT Standards Committee's Privacy and Security Workgroup made several recommendations to simplify the NHIN Direct project's approach to secure data transport.
By default, NHIN Direct uses S/MIME and X.509 digital certificates to secure content end-to-end. S/MIME verifies the identity of sender and receiver, and encrypts and integrity-protects message content (the payload), but does not encrypt the email header fields (to/from, subject). The Direct specification contains ambiguous requirements regarding the use of TLS and full message wrapping to protect against data leakage, creating confusion and complexity. For example:
-The specification says implementations “SHOULD” provide the capability to use mutually authenticated Transport Layer Security (TLS) for all communications
-Full message wrapping is “RECOMMENDED” and “OPTIONAL” – but the specification warns that some receivers “may present such messages in ways that are confusing to end users”
The Privacy and Security Workgroup recommended specifying S/MIME as the standard for securing NHIN Direct content end-to-end, removing TLS and message wrapping as security options in the core specification. The residual risk of unencrypted header fields can be mitigated through policy direction regarding suitable content for subject fields.
What does this mean? What's the difference between S/MIME and TLS? Is S/MIME better than TLS? Should one or both be used?
To understand the recommendations, you first need to understand how S/MIME and TLS work.
Dixie Baker created this overview of the two approaches.
S/MIME is a great way to encrypt and sign a payload of content to be sent from point A to point B. The channel of communication is not encrypted, but the contents of the message are encrypted such that only the rightful receiver can decrypt them, validate that the expected sender transmitted the message, and confirm that the message was not modified along the way. The disadvantage of S/MIME is that you must keep certificates on file (or use public key infrastructure) for every organization you exchange data with. This does have the advantage that you can be sure only the right, trusted entities with pre-existing authorization to receive data are part of the exchange.
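As an illustration of that receiving-side workflow, here is a minimal sketch that decrypts an incoming S/MIME message with the organization's private key and then verifies the signature against the certificate kept on file for the trading partner. It shells out to the openssl CLI; the file names and certificate store layout are placeholders, and a production EHR or HISP would use a proper S/MIME library.

```python
# Receiving-side sketch: decrypt with the receiving organization's key pair,
# then verify the signature using the sender's certificate from the local store
# of trusted trading partners. All paths are placeholders.
import subprocess

# Decrypt the enveloped message so only the intended recipient can read it.
subprocess.run(
    ["openssl", "smime", "-decrypt",
     "-in", "message.eml",
     "-recip", "practice2_cert.pem",        # receiving organization's certificate
     "-inkey", "practice2_key.pem",         # receiving organization's private key
     "-out", "signed_payload.txt"],
    check=True)

# Verify the signature against the certificate on file for the sending partner.
subprocess.run(
    ["openssl", "smime", "-verify",
     "-in", "signed_payload.txt",
     "-certfile", "partners/practice1_cert.pem",
     "-CAfile", "trusted_partner_ca.pem",
     "-out", "ccd.xml"],                    # the recovered clinical payload
    check=True)
```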
TLS is a great way to encrypt a channel of communication. From Dixie's diagram, no existing certificates need to be kept on file and PKI is not required. A certificate is requested as the transaction begins and is used to send data via a symmetric encryption technique. It's simple and easy to implement. The downside is that there is no guarantee that the certificate is associated with the right authorized entity.
For Stage 1 of meaningful use, in which organizational entity to organizational entity exchange is all that is expected (not individual to individual), S/MIME and TLS are not much different. With S/MIME you keep a list of your organizational trading partner certificates on file, and with TLS you hope that the URLs you are calling to request certificates really belong to the correct trading partners. TLS has a bit of an advantage in this configuration because the communication channel itself is secured and works fine with any protocol - SMTP, REST, or SOAP. No header information is sent unencrypted, as is the case with S/MIME's payload-only encryption.
Where S/MIME wins over TLS is when more granularity than organization-to-organization is required. For example, if you wanted to secure content from Dr. Halamka to Dr. Baker, with assurance that the content went only to Dr. Baker, TLS cannot do that.
Thus, S/MIME alone provides a level of encryption which is good enough, ensures the organization receiving the data is the right one because you have certificates for all valid trading partners on file, and prepares for a future when we may have individual to individual secure exchange, not just entity to entity exchange.
I welcome feedback from the industry on the recommendation that S/MIME be used as the standard for securing NHIN Direct content end to end. When I polled my operational people, they voiced a distinct preference for just TLS, which they felt was easier to implement and support, since there is no need for PKI or for maintaining a local file of certificates for trading partners. I definitely understand how S/MIME has advantages for individual to individual secure transport, but I wonder if we will ever need such functionality. If the long term vision is a network of networks, based on secure organization to organization transmission using a national entity level directory, might TLS be good enough for both the short term and the long term?
Tuesday, December 21, 2010
500 Meetings a Day
In the early 1980s, when I was running a small software company while attending Stanford as an undergraduate, my business activities were limited to the number of phone calls I could receive in a day. At most, I could have 5-10 phone teleconferences.
In 2010, with email and social networking, all of the limits on synchronous group interaction have disappeared and I now have limitless meetings per day. When you count the emails I send, the blog comments I respond to, and the Twitter/Forums/Texts/Linked In/Plaxo/Facebook interactions, I can have 500 meetings a day.
What does that really mean?
One of my staff summarized it perfectly when I asked him what keeps him up at night:
"The flow of email and expectation upon us all to respond quickly has become more challenging for me than probably most because of the great diversity of areas that I cover. I've been making changes and removing myself from unnecessary support queues (previously used to monitor day-to-day), delegating as much as possible, and making the needed staffing changes."
The demands of 500 virtual meetings a day on top of the in person meetings results in what I call "Continuous Partial Attention." A one hour in person meeting implies that you're 50 virtual meetings behind by the end of the face to face time, forcing attention spans to fade about 10 minutes into any in person meeting. The modern electronic world has removed all barriers to escalation and facilitated scheduling. Anyone can interrupt anything 24x7x365. Instantaneous frictionless communication is analogous to the revolution in the publishing industry where anyone can be an author/publisher/editor without any triage.
What's the best strategy for dealing with this communication overload? Here are a few I've experienced:
1. Declare an end to the madness and stop doing mobile mail and texting. Some senior executives have taken an inspiration from the Corona Beer Advertisement and thrown their Blackberry into the ether.
2. Put up a firewall around your schedule. One of my staff published an out of office message this week. When I asked him about it, he said:
"I’m just trying to take some time and my outgoing message is helping to filter out the emergencies from the last minute stragglers that want something that doesn’t really need attention until after the break, as I’m trying to finish up the necessary end of year items."
3. Accept the chaos and schedule around it, creating an open access schedule that reserves half the workday for the asynchronous, unplanned work of each day.
4. Ignore your emails. Some senior executives just never respond and have inboxes with thousands of unanswered emails.
5. Delegate email management. Some executives delegate email to trusted assistants who separate the wheat from the chaff, escalating only a few emails a day to the executive they support.
At the moment, I still do #3, but I must admit it's getting more challenging. I receive over 1000 emails a day and try to respond to each one, but for the past 6 months, I've been deleting, unread, every email that begins
"Hi, I'm Bob at xyz.com and our products..."
or
"Hi, I'm a venture capitalist and I'd like an hour of your time to…"
Hopefully, I'm answering my critical asynchronous communications in a timely way and only ignoring those communications which are a lower priority. At 500 email and social networking responses per day, I'm approaching the limits of my bandwidth, which I never thought would happen.
I do my best and clear my queues every night before sleep. If I've somehow missed you in my 500 meetings a day, please let me know!
Monday, December 20, 2010
The December HIT Standards Committee
The December HIT Standards Committee focused on a review of the President's Council of Advisors on Science and Technology (PCAST) report, a review of the Standards and Interoperability Framework Priorities, and a review of NHIN Direct (now called the Direct Project).
We began the meeting with an introduction from Dr. Perlin, in which he noted that reports by commissions such as PCAST need to be read not for their details but for their directionality. We should ask what trajectory the experts think we should be on and how/when it should modify our current course. Dr. Blumenthal also offered an introduction to the PCAST discussion, noting that the White House fully supports and encourages interoperability, suggesting that we should accelerate the priority of healthcare information exchange in the progression from Meaningful Use stage 1 to 3.
We discussed the origins and history of the PCAST report. The President asked PCAST how health IT could improve the quality of healthcare and reduce its cost, and whether existing Federal efforts in health IT are optimized for these goals. In response, PCAST formed a working group consisting of PCAST members and advisors in both healthcare and information technology.
The working group held meetings in Washington, D.C., on December 18, 2009, and in Irvine, California, on January 14-15, 2010, as well as additional meetings by teleconference. The viewpoints of researchers, policy analysts, and administrators from government, healthcare organizations, and universities were presented and discussed.
A draft report developed by the working group was submitted to the Health and Life Sciences committee of PCAST. That committee submitted the draft to several outside reviewers, who made valuable suggestions for improvements. From the working group draft, the additional input, and its own discussions, the Health and Life Sciences committee produced the present report, which was discussed and endorsed (with some modifications) by the full PCAST in public session on July 16, 2010.
A disclaimer at the beginning of the report notes, "Working Group members participated in the preparation of an initial draft of this report. They are not responsible for, nor necessarily endorse, the final version of this report as modified and approved by PCAST."
We identified a number of key themes in the report:
1. The foundation for healthcare information exchange should be built on an XML-based Universal Exchange Language
2. Data elements should be separable from documents
3. Metadata should identify characteristics of each data element i.e. how it was recorded, by whom and for what patient
4. Privacy controls should integrate patient consent preferences with metadata about the data available for exchange
5. Search engine technology/data element access service indexing at a national level will accelerate data element discovery
6. Data reuse with patient consent for clinical trials and population health is a priority
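To make themes 1 through 4 more concrete, here is a minimal sketch of what a metadata-tagged data element might look like. The element names, attributes, and sample values are invented for illustration only; neither the PCAST report nor ONC has specified a schema for the Universal Exchange Language.

```python
import xml.etree.ElementTree as ET

# Hypothetical element and attribute names; PCAST describes the idea of a
# metadata-tagged data element but does not define a schema.
element = ET.Element("dataElement", {"name": "hemoglobinA1c",
                                     "code": "4548-4",        # LOINC code for Hemoglobin A1c
                                     "codeSystem": "LOINC"})
ET.SubElement(element, "patient", {"name": "Jane Doe", "dateOfBirth": "1970-01-01"})
ET.SubElement(element, "provenance", {"recordedBy": "Dr. Smith",
                                      "recordedAt": "Hospital A",
                                      "recordedOn": "2010-12-01"})
ET.SubElement(element, "privacy", {"consent": "opt-in", "sensitivity": "normal"})
ET.SubElement(element, "value", {"value": "6.8", "unit": "%"})

# Prints a small XML fragment that could stand alone, be indexed, and be
# filtered on consent metadata without the surrounding document.
print(ET.tostring(element, encoding="unicode"))
```

The point of the sketch is simply that the clinical value, the patient it belongs to, its provenance, and its privacy characteristics travel together in one tagged unit.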
The key ideas from the discussion included:
a. Thinking at a national scale helps avoid creating regional health information exchange silos
b. Messaging (such as HL7 2.x) is still going to be needed to support event-based transactional workflows
c. The strength of the PCAST report is in supporting exchange models that require aggregation - research, epidemiology, and unanticipated interactions such as Emergency Department visits.
d. For some uses, such as communication among providers, encounter summaries, which provide structured and unstructured data in context, are more useful than isolated data elements
e. Many data elements are not useful on their own and a module/collection of data elements would be better, e.g., an allergy entry should include the substance, onset date, type of reaction, severity of the reaction, and level of certainty of the reaction (your mother reported it based on a distant memory versus a clinician observed it happening). To understand how best to collect data elements into modules, clinical data models would be very helpful.
f. Since information is going to be exchanged among multiple parties, metadata will need to include the provenance of the data so that data is not duplicated multiple times, e.g., Hospital A sends data to Hospitals B and C, C requests a copy of B's data (which includes data from B and A), and it should be possible to avoid storing a duplicate of A's data, which C already has (see the sketch after this list).
g. We should proceed with the health information exchange work already in progress to achieve interoperability in support of Meaningful Use stage 1 and not derail current efforts.
h. Fine-grained privacy (down to the data element level) will be challenging to implement and maintain. Tagging elements with privacy characteristics is very hard because societal attitudes about the sensitivity of data elements may change over time. HIV testing used to be a rare event, so the presence of an HIV test alone (not its result) could be concerning. Today 1/3 of Americans have had an HIV test, generally as part of getting life or health insurance, so the presence of a test is no longer a stigma.
i. The national scope suggested includes using web search engine technology to keep a data element index, identifying what data is available for what patients and where. The policy and security issues of doing this are greater than the technology challenges.
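Point f is easiest to see with a small sketch. Assuming each data element carries provenance metadata with a stable originating organization and record identifier (an assumption, since no such identifiers are standardized today), de-duplication becomes a simple lookup:

```python
# A minimal sketch of provenance-based de-duplication for the Hospital A/B/C
# example above. The provenance fields (origin, record_id) are assumptions for
# illustration; a real exchange network would need a richer provenance model.

stored = {}  # (origin, record_id) -> data element

def store_once(element):
    """Store a data element only if this provenance has not been seen before."""
    key = (element["provenance"]["origin"], element["provenance"]["record_id"])
    if key in stored:
        return False  # duplicate: Hospital C already holds A's data, received via B
    stored[key] = element
    return True

direct_from_a = {"provenance": {"origin": "Hospital A", "record_id": "lab-123"}, "value": "6.8 %"}
relayed_via_b = {"provenance": {"origin": "Hospital A", "record_id": "lab-123"}, "value": "6.8 %"}

print(store_once(direct_from_a))   # True  - the first copy is stored
print(store_once(relayed_via_b))   # False - recognized as a duplicate by its provenance
```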
The next step for the PCAST report will be ONC's naming of a multi-stakeholder workgroup to review the report in detail and make recommendations by April.
We next heard about the planned Implementation Workgroup hearing regarding certification, Meaningful Use, and healthcare information exchange. On January 10-11, the Workgroup will learn about early adopter successes and challenges.
Next, the Clinical Operations Workgroup reported on its plans to consider vocabulary and content issues for devices - critical care, implantable, and home care. Issues include universal device identification, ensuring data integrity, and interoperability of devices that may require a clinical data model to ensure the meaning of data communicated is understood by EHRs, PHRs, and devices.
We next considered the standards and interoperability framework priorities as outlined by Doug Fridsma. The S&I Framework contractors are working on clinical summaries, templated documents, laboratory results, medication reconciliation, provider directories, syndromic surveillance, quality, population health, clinical decision support, patient engagement, EHR-to-EHR data exchange, and value sets.
Points raised during this discussion included the need to include policy discussions throughout the process of harmonizing and testing standards. We agreed that the Clinical Operations workgroup should study these priorities and make recommendations based on real-world implementation experience that will help ONC and the contractors focus on the gaps to be addressed, such as patient identification and vocabularies/code sets.
We discussed the HIT Policy Committee's request for the Standards Committee to work on Certificate Management standards. The Privacy and Security Workgroup will make recommendations for organization to organization and server to server certificate standards.
We next considered the Privacy and Security Workgroup's evaluation of NHIN Direct. The Workgroup concluded that certificate exchange should not be limited to certificates stored in Domain Name System (DNS) applications. It also suggested that XDR (a SOAP transaction) be removed from the NHIN Direct Core specification, reducing the complexity and optionality of the specification. The only debate that arose during this discussion revolved around the issue of rejecting an NHIN Direct message because it did not meet regulatory requirements. Specifically, the Privacy and Security Workgroup recommended the following language -
"Destinations MAY reject content that does not meet Destination expectations. For instance, some Destinations MAY require receipt of structured data, MAY support only particular content types, and MAY require receipt of XDM structured attachments."
Here's a use case that illustrates the issue:
Federal Regulations require quality measures to be sent in PQRI XML as of 2012.
A doctor uses NHIN Direct to send an unstructured text message to CMS "I achieved the quality measures you wanted me to!"
What should CMS do?
1. Reject the message as not compliant with Federal regulations, notifying the sender as to the reason
2. Accept the message, but contact the sender out of band to specify the requirements
3. Accept the message, but later send a functional acknowledgement via NHIN Direct that the contents of the message did not qualify for meaningful use reporting requirements
etc.
In an email dialog following the HIT Standards Committee, many members agreed that the message should be rejected, with an error message indicating that the contents of the message did not meet regulatory requirements.
At the meeting, we agreed that decisions to reject or accept messages are a matter of policy and that the HIT Standards Committee should only recommend technology that enables messages to be sent securely and error messages to be provided to the message sender if policy requirements are not met.
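To illustrate that position, here is a minimal sketch of a receiving gateway that enforces a content policy and returns an error to the sender. The content-type convention and the notification callback are assumptions for illustration; nothing like this is prescribed by the Direct Project or CMS.

```python
# A minimal sketch of the agreed-upon behavior: the receiving system applies its
# policy and, if the content does not qualify, returns an error to the sender.
# REQUIRED_CONTENT_TYPE and notify_sender are hypothetical stand-ins.

REQUIRED_CONTENT_TYPE = "application/xml"   # stand-in for "structured PQRI XML"

def receive(message, notify_sender):
    if message["content_type"] != REQUIRED_CONTENT_TYPE:
        notify_sender(message["from"],
                      "Rejected: content does not meet quality reporting "
                      "requirements (structured PQRI XML expected).")
        return "rejected"
    return "accepted"

status = receive(
    {"from": "dr.jones@direct.example.org",
     "content_type": "text/plain",
     "body": "I achieved the quality measures you wanted me to!"},
    notify_sender=lambda to, reason: print(f"error to {to}: {reason}"),
)
print(status)   # rejected
```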
A great meeting with significant progress on the PCAST review, S&I Framework review, and the NHIN Direct review.
Next month, we'll hear more about certificates, provider directories, and PCAST. It's clear that the work of the Policy Committee on Certificates and Provider Directories, the work of NHIN Direct, and the work of HIT Standards Committee are converging such that we will soon have a unified approach to transport that will rapidly accelerate transmission of the standardized content and vocabularies already required by Meaningful Use.
Friday, December 17, 2010
Cool Technology of the Week
In my continuing series on living green and low impact, this week's post is not really about a technology, but about a concept - living small in a house on wheels.
Call this the anti-McMansion, the opposite of living large.
These small houses require innovative use of construction materials, architecture, and three dimensional thinking. Because of their on-wheels construction, they bypass many zoning and permitting restrictions.
Their cost is low, their use of resources modest, and their footprint on real estate and the planet is small.
A home built with advanced materials that includes a kitchen, great room, bathroom/shower, and bedroom - all under 100 square feet. That's cool.
One other cool item to share this week - a great YouTube Video on healthcare data visualization.
Thursday, December 16, 2010
Living the Good Life
Last Friday, my daughter was admitted early decision to Tufts University, so the anxiety of the college application process has passed. One of her essays asked her to describe the environment in which she was raised and how it influenced the person she is today. It's worth sharing her observations on what constitutes living the good life:
"At this moment, from a room of windows, I can see tall pine trees framing a beautiful, soft green yard. A little vegetable garden lies to my right, with lettuce enduring the brisk autumn wind. Above it stands a lone maple gradually turning brilliant shades of fire. A heavenly light illuminates the clouds passing overhead in the vast baby blue sky. The wisteria climbs the windows to my left, waiting for a warm spring to show its beautiful lavender flowers. The wind passes through the wooden chimes hanging from our crabapple tree, initiating a clonking chorus. Bamboo lines the white rock river with a little wooden bridge. A stone bench rests near the fence, where my father sits and plays his Shakuhachi (traditional Japanese flute). Cardinals, sparrows, and grackles fly overhead, seeking food, warmth, and family. As I open a window, a rush of sweet, crisp autumn cold fills my senses, making me shiver. These wonders surrounding me in such a welcoming, beautiful, and inspiring home and community fostered an appreciation for the subtle things in life. I learned to openly embrace the world around me, understanding and loving its everlasting beauty. Nature is a teacher and a gift, one never to be overlooked. I’ve grown as a student, an observer, an appreciator, and a believer in the magic and beauty of the world."
As a parent, I want my daughter to feel good about herself. In her essay, she highlighted the simple things that bring richness to her life - a vegetable garden, autumn colors, and a supportive community of family and friends.
I can understand her point of view.
As I write this, I'm sitting in an old Morris chair, sipping Gyokuro green tea, breathing in wisps of smoke from Blue Kungyokudo incense. Breakfast will be a bowl of steel-cut oatmeal with a few drops of Vermont maple syrup and soy milk.
The ability to sit quietly and think, enjoy wholesome foods, and enjoy the warmth and comfort of a small home while the weather outside is cold and blustery gives me an overwhelming sense of well being.
I hope my daughter continues to appreciate that the good life comes from the basics of food/clothing/shelter/family/self-worth.
Tufts University is a great fit for her and I'm confident the next four years will polish and amplify the foundation she's already built. As she creates her own version of the good life, we'll always be available for advice and support, but as of next Summer, she's a fledgling, exploring the world on her own.
"At this moment, from a room of windows, I can see tall pine trees framing a beautiful, soft green yard. A little vegetable garden lies to my right, with lettuce enduring the brisk autumn wind. Above it stands a lone maple gradually turning brilliant shades of fire. A heavenly light illuminates the clouds passing overhead in the vast baby blue sky. The wisteria climbs the windows to my left, waiting for a warm spring to show its beautiful lavender flowers. The wind passes through the wooden chimes hanging from our crabapple tree, initiating a clonking chorus. Bamboo lines the white rock river with a little wooden bridge. A stone bench rests near the fence, where my father sits and plays his Shakuhachi (traditional Japanese flute). Cardinals, sparrows, and grackles fly overhead, seeking food, warmth, and family. As I open a window, a rush of sweet, crisp autumn cold fills my senses, making me shiver. These wonders surrounding me in such a welcoming, beautiful, and inspiring home and community fostered an appreciation for the subtle things in life. I learned to openly embrace the world around me, understanding and loving its everlasting beauty. Nature is a teacher and a gift, one never to be overlooked. I’ve grown as a student, an observer, an appreciator, and a believer in the magic and beauty of the world."
As a parent, I want my daughter to feel good about herself. In her essay, she highlighted the simple things that bring richness to her life - a vegetable garden, autumn colors, and a supportive community of family and friends.
I can understand her point of view.
As I write this, I'm sitting in an old Morris chair, sipping Gyokuro green tea, breathing in wisps of smoke from Blue kungyokudo incense. Breakfast will be a bowl of steel cut oatmeal with a few drops of Vermont maple, and soy milk.
The ability to sit quietly and think, enjoy wholesome foods, and enjoy the warmth and comfort of a small home while the weather outside is cold and blustery gives me an overwhelming sense of well being.
I hope my daughter continues to appreciate that the good life comes from the basics of food/clothing/shelter/family/self-worth.
Tufts University is a great fit for her and I'm confident the next four years will polish and amplify the foundation she's already built. As she creates her own version of the good life, we'll always be available for advice and support, but as of next Summer, she's a fledgling, exploring the world on her own.
Wednesday, December 15, 2010
What is Our Cloud Strategy?
In a meeting last week with senior management at Harvard Medical School, one of our leaders asked, "What is our cloud strategy?"
My answer to this is simple. The public cloud (defined as the rapid provisioning and de-provisioning of CPU cycles, software licenses, and storage) is good for many things, such as web hosting or non-critical applications that do not contain patient or confidential information. At Harvard Medical School and Beth Israel Deaconess Medical Center, we've embraced public cloud technology, but transformed it into something with a guaranteed service level and compliance with Federal/State security regulations - the private cloud.
Here's the approach we're using to create private clouds at HMS and BIDMC:
1. At HMS, we created Orchestra, a 6000 core blade-based supercomputer backed by a petabyte of distributed storage. Thousands of users run millions of jobs. It's housed in Harvard controlled space, protected by a multi-layered security strategy, and engineered to be highly available. We also use grid computing technologies to share CPU among multiple high performance computing facilities nationwide.
2. At BIDMC and its physician organization (BIDPO), we've created a virtualized environment for 150 clinician offices, hosting 20 instances of logically isolated electronic health record applications per physical CPU. It's backed with half a petabyte of storage in a fault tolerant networking configuration and is housed at a commercial high availability co-location center.
3. At BIDMC, our clinical systems are run on geographically separated clusters built with high availability blade-based Linux machines backed by thin-provisioned storage pools.
Each of our private clouds has very high bandwidth internet connections with significant throughput (terabytes per day at HMS). The bandwidth charges of public clouds would be cost prohibitive.
We are investigating the use of public cloud providers to host websites with low volume, low security requirements, and no mission criticality. Public solutions could be better/faster/cheaper than internal provisioning.
Thus, our cloud strategy is to create private clouds that are more reliable, more secure, and cheaper than public clouds for those applications which require higher levels of availability and privacy. For those use cases where the public cloud is good enough, we're considering external solutions.
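As a rough illustration of that decision, here is a minimal sketch of the placement logic, using made-up workload attributes rather than any formal policy engine we run:

```python
# A minimal sketch of the placement rule described above, assuming three
# illustrative workload attributes; this is not an actual HMS/BIDMC policy engine.

def placement(workload):
    """Return 'private cloud' or 'public cloud' for a workload described by a dict."""
    if workload["contains_phi"] or workload["mission_critical"]:
        return "private cloud"   # guaranteed service levels, Federal/State compliance
    if workload["bandwidth_tb_per_day"] > 1:
        return "private cloud"   # public cloud bandwidth charges would be prohibitive
    return "public cloud"        # low volume, low security, not mission critical

print(placement({"contains_phi": True,  "mission_critical": True,  "bandwidth_tb_per_day": 0.5}))
print(placement({"contains_phi": False, "mission_critical": False, "bandwidth_tb_per_day": 0.01}))
```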
Someday, it may make sense to move more into the public cloud, but for now, we have the best balance of service, security, and price with a largely private cloud approach.
Tuesday, December 14, 2010
The Beacon Communities
Last week, the Office of the National Coordinator (ONC) launched a new web resource for Beacon communities, which includes summary videos of each community and their key projects.
Beacon is a program to watch, since it represents a set of innovative pilots that are likely to help us determine what IT-enabled interventions really improve quality, safety, and efficiency. Healthcare reform will require numerous IT innovations to support accountable care organizations, medical homes, and new reimbursement strategies that focus on quality rather than quantity.
Although Massachusetts was not selected as a Beacon Community, here's a link to the Greater Boston submission, to give you a sense of the depth of work Beacon Communities must do. The work is so important that our regional, state, and local efforts are likely to move forward with many aspects of the Beacon proposal, using private funding and other available State and Federal funds.
Monday, December 13, 2010
The Standards Work Ahead
In past years, the Office of the National Coordinator (ONC) has been called the Office of No Christmas (ONC) because of the pace of the year-round effort, especially December/January deadlines. At the October HIT Standards Committee meeting, one public commenter was concerned that the pace of work on standards would slow down over the holidays. Although it is vitally important that we all take time with our families and that we recharge for the new year ahead, I can assure you that standards efforts are not slowing down!
Here are a few of the issues we are addressing in December and January.
1. Evaluation of the Direct Project (formerly called NHIN Direct) - During December, we'll evaluate the Direct Project to determine if it has met its goals of being simple, direct, scalable and secure:
Simple means it is responsive to the Implementation Workgroup's guidelines which include ease of implementation, concern for the "little guy," and recognition of the fact that the development community is broad and diverse.
Direct means the transport of content from a sender to a receiver, with no content-aware intermediary services.
Scalable means the ability to support increasing workload and to adapt to new exchange models.
Secure means minimizing confidentiality, integrity, and availability risk to the content being transported.
2. Review of the PCAST Report - although ONC and CMS are likely to create tiger teams of expert members to make recommendations, the HIT Standards Committee will review the history and intent of the report.
3. Review of the priorities suggested for the first application of new Standards and Interoperability framework. Take a look at the FACA blog for details of the projects being considered.
4. Begin work on the Policy Committee's requests on certificates and directories. Regarding certificates, the request from the Policy Committee is
"ONC, through the Standards Committee, should select or specify standards for digital certificates (including data fields) in order to promote interoperability among health care organizations"
Although we have not yet received a request to review the HIT Policy Committee's Provider Directory work, it's likely we'll be asked about standards for Entity Level Provider Directories (ELPDs), a "yellow pages" of provider organizations, and Individual Level Provider Directories (ILPDs), a "white pages" for looking up individuals and their organizational affiliations so that the yellow pages can then be used for routing (see the sketch after this list).
5. Begin work on device standards - Our Vocabulary workgroup has a special interest in the content and vocabulary standards that will ensure home care devices (pedometers, blood pressure cuffs, glucometers, pulse oximeters, etc.) can transmit data in an interoperable format to EHRs and PHRs.
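To show how the white pages and yellow pages might work together for routing, here is a minimal sketch with invented identifiers and fields. No directory standards have been selected yet, so treat this purely as an illustration of the lookup chain.

```python
# A minimal sketch of the two directory levels, with invented identifiers and
# field names; no ELPD/ILPD standards have been selected.

ILPD = {  # "white pages": individuals -> organizational affiliation
    "jane.smith.md": {"organization": "Example Community Hospital"},
}
ELPD = {  # "yellow pages": organizations -> routing endpoint
    "Example Community Hospital": {"direct_domain": "direct.examplehospital.org"},
}

def route_for(individual_id):
    """Resolve an individual to an organization, then to a routing endpoint."""
    organization = ILPD[individual_id]["organization"]
    return ELPD[organization]["direct_domain"]

print(route_for("jane.smith.md"))   # direct.examplehospital.org
```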
At the HIT Standards Committee December 17 meeting (which I'll blog about on Friday instead of a Cool Technology of the Week), you'll hear about our plans for January hearings on early experiences with the implementation of certified EHR technology/meaningful use of those technologies. You'll hear about our plans to begin hearings on medical device standards including the FDA's work on unique device identification and content/vocabulary standards for home care devices. We'll discuss the Standards and Interoperability framework priorities. We'll begin the PCAST review.
As you can see from the five December/January goals above and our December 17 agenda, we're sustaining the momentum!
Friday, December 10, 2010
The Spirit of PCAST
On December 8, the President's Council of Advisors on Science and Technology (PCAST) released the report "Realizing the Full Potential of Health Information Technology to Improve Healthcare for Americans: The Path Forward"
In its 91 pages are several "gold star" ideas for empowering patients, providers and payers to improve quality, safety, and efficiency.
The major ideas are:
1. "Universal Exchange Language" - As I have discussed many times, interoperability requires content, vocabulary and transport standards. Although the PCAST report does not provide specifics, it does list characteristics of this language:
*Should be XML-based
*Should be optimized for representing structured data, not just unstructured text
*Should include controlled vocabularies/code sets where possible for each data element
*Should be infinitely extensible
*Should be architecturally neutral, decoupling content and transport standards
2. Data elements should be separable and not confined to a specific collection of elements forming a document. There are thousands of forms and document types in an average hospital. Rather than trying to create one ideal format for each of these (which would be a never-ending task), a modular approach that enables collections of data elements to be repurposed for different needs would enhance flexibility and reduce the burden on implementation guide writers, developers, and users.
3. Each data element should include metadata attributes that enable the datum to be reused outside of any collection of elements or context. The report does not specify how this would work, but let's presume that each data element would contain attributes such as the data element name, the patient name, and the patient date of birth so that information about a specific patient could be searched and aggregated.
4. Privacy controls specified by the patient used in conjunction with the metadata would enable multiple data uses that adhere to patient consent declarations and support multiple types of consent models (opt in, opt out, HIV/genetics/mental health restrictions etc). Although this is a noble goal, the reality of implementing this is quite difficult. Deciding if a data element does or does not imply a condition is a major informatics challenge.
5. Search engine technology should be able to index data elements based on metadata. Search results would reflect patient consent preferences and the access rights of the authenticated user (see the sketch after this list).
6. De-identified data should be available for population health, clinical research, syndromic surveillance, and other novel uses to advance healthcare science and operations.
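Ideas 3 through 5 fit together, and a small sketch helps show how: data elements carrying patient and consent metadata can be indexed and then searched subject to the patient's preferences. The data model and consent flags below are invented for illustration; no such index or consent vocabulary exists today.

```python
# A minimal sketch of ideas 3-5: data elements tagged with patient and consent
# metadata, indexed, and searched subject to patient consent preferences.
# All field names and flags here are hypothetical.

index = [
    {"patient": "Jane Doe", "element": "hemoglobinA1c", "value": "6.8 %",
     "consent": {"treatment": True, "research": True}},
    {"patient": "Jane Doe", "element": "hiv_test", "value": "negative",
     "consent": {"treatment": True, "research": False}},
]

def search(patient, purpose):
    """Return only the elements the patient has consented to share for this purpose."""
    return [e["element"] for e in index
            if e["patient"] == patient and e["consent"].get(purpose, False)]

print(search("Jane Doe", "research"))    # ['hemoglobinA1c']
print(search("Jane Doe", "treatment"))   # ['hemoglobinA1c', 'hiv_test']
```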
How does this compare to the work to date by ONC, the Federal Advisory Committees, and vendors to implement meaningful use data exchanges?
I believe that the PCAST report is consistent with the work done to date and that the foundation created by Meaningful Use Stage 1 puts us on the right trajectory to embrace the spirit of PCAST.
Let's look at each of the PCAST ideas as compared to our current trajectory
1. There are two kinds of content standards specified in the Standards and Certification Final Rule - transactions and summaries. Transactions include such things as e-prescribing a medicine or ordering a diagnostic test through a CPOE system. Summaries include sharing a lifetime health history or an episode of care between providers or with patients. Transactions, such as specific actionable orders, work very well today using the HL7 2.x messages specified in the rule. Transactions are not a problem. It's the summaries that should be the focus of the PCAST ideas.
The current summary formats specified by the Standards and Certification Final Rule are CCR and CCD. Both are XML. CCR is extensible but I do not believe there has been much demand in the industry to expand it. CCD is based on CDA which is extensible. In fact, CCD is just the CCR expressed as a CDA template. It’s a demonstration of the extensibility of CDA.
CCR and CCD incorporate vocabularies for each data element where appropriate - ICD9/SNOMED-CT for problems, LOINC for labs, and RXNORM for medications.
I would hope that the country does not start from scratch to build a new Universal Exchange Language. Wise people can take the best of CCR, CDA Templates, Green CDA, and other existing XML constructs to create implementation guides which fulfill the PCAST recommendations.
2. If data elements are going to stand alone, do we need an information model or dictionary so that we know how to name data elements in a consistent way? If the goal is to represent every possible data element in healthcare in a manner that allows consistent searching, then the metadata will need to include consistent data element names and the relationship of data elements to each other i.e. a problem list consists of a problem name, problem code, problem date, active/inactive flag.
3. CCR and CCD/CDA both include metadata. What does it mean to represent metadata at the data element level? CCR and CCD have specific sections that incorporate patient identity information. Should that be replicated in every data element so that each data element can stand alone? While that could be done, it would result in substantially larger payloads to exchange because of the redundant metadata added to each datum (see the rough calculation after this list).
4. The ONC Privacy and Security Tiger Team has already been working on a framework for meaningful consent. Their work is truly a pre-requisite to the privacy protections suggested by PCAST. The Tiger Team has acknowledged the value of highly granular consent, but has been realistic about the challenges of implementing it. A phased approach to get us to the goals outlined in the PCAST report would work well.
5. Search engine technology has not been a part of the work on healthcare information exchange to date. It will be interesting to think about the security issues of cached indexes in such search engines. Just knowing that a data element exists (HIV test or visit to a substance abuse facility), regardless of the actual data contents, can be disclosing. Another issue is that search engines would have to do a probabilistic match of name, date of birth and other patient demographics from metadata to assemble data elements into a complete record for clinical care. Although such approaches might work for research, quality measurement, or public health reporting, they are problematic for clinical care where false positives (matching the wrong patient) could have significant consequences.
6. De-identified data for public health has already been part of the ONC effort. Novel data mining in support of research has been a part of the NIH CTSA projects, such as Shrine/I2B2. These CTSA applications already adhere to many of the PCAST principles.
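To put rough numbers on the payload concern in point 3, here is an illustrative calculation. Every count below is an assumption chosen only to show the shape of the overhead, not a measurement of any real summary document.

```python
# A rough, illustrative calculation of the overhead from repeating patient
# identity metadata on every data element versus stating it once per document.
# All byte counts are assumptions for illustration.

elements_per_summary = 200          # e.g. problems, meds, allergies, labs, vitals
per_element_payload  = 120          # bytes of clinical content per element (assumed)
identity_metadata    = 150          # bytes of patient name/DOB/identifiers (assumed)

document_style = identity_metadata + elements_per_summary * per_element_payload
element_style  = elements_per_summary * (per_element_payload + identity_metadata)

print(document_style)                             # 24150 bytes
print(element_style)                              # 54000 bytes
print(round(element_style / document_style, 1))   # roughly 2.2x larger payload
```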
What are the next steps?
I presume ONC/CMS will convene teams from the HIT Policy Committee, the HIT Standards Committee, and existing Workgroups to discuss the PCAST report and its implications for the work ahead.
In the spirit of my recent blog about The Glass Half Full, I believe the PCAST report is a positive set of recommendations that builds on the Meaningful Use Stage 1 effort to date. ONC should be congratulated for creating a foundation that is so consistent with the PCAST vision for the future.
Thursday, December 9, 2010
Choosing the Right Nordic Skis
From December to April I do "OTBD" skiing. What's that? "Out the back door" through nature preserves, forested hills, and old railroad beds. I ski the 1500 acres of Noanet in Dover, the Audubon Broadmoor Reserve in South Natick, and the Wellesley Golf Course.
Choosing the right skis can be challenging, and the marketing materials from the manufacturers don't help much.
Here's the way I think about it.
First choose the places you want to ski and the style of skiing that you plan to do.
Are you skiing for fitness, doing mile after mile of groomed tracks as fast as you can go?
Are you skiing for the experience of nature, venturing off track to snow filled hiking trails and rolling terrain with hills a few hundred feet high?
Are you into steep powder, seeking the thrill of downhill turns at high speeds?
Once you know where and how you'll ski, you can pick the skis, boots, bindings, and poles you'll need.
There are many types of nordic skis but for the purposes of my analysis, I'll refer to them by their typical width and edge configuration.
1. The endorphin junkie in groomed tracks - Skis less than 60mm wide without a metal edge are perfect for groomed tracks. You'll go fast on the flats. As long as the track is not too steep or too filled with turns, you'll be fine.
2. The OTBD skier (me) - Skis between 65mm and 80mm wide with metal edges are a compromise - slower on the flats but with more stability and control for ungroomed trails, turning around trees/rocks, and traversing more varied terrain of forests, streams, and hilltops.
3. The adrenaline junkie on steep slopes - Skis wider than 85mm with metal edges have great control for turning - telemarking, parallel turns, and stemming (letting one ski slide on the uphill side of the turn). They are slow on the flats and are too wide to be used in tracks.
You may need more than one pair of skis if you do more than one type of skiing. One size does not fit all since each ski is optimized for some types of skiing and a compromise for others.
Once you've chosen your skis, you can choose your bindings.
1. For groomed track skiing, choose the New Nordic Norm T3 or Salomon Profile
2. For OTBD choose the New Nordic Norm BC Auto binding
3. For steep slopes, choose the New Nordic Norm BC Magnum binding or 75mm Telemark bindings
Choose a boot that fits the binding you've selected, noting that all these binding systems are not interchangeable - the boot must be designed to accompany a specific binding type.
Poles are generally as tall as your armpits. For OTBD and steep slopes, choose an adjustable-length pole so you can set it longer for climbing and shorter for descending.
What do I use?
For long distances in groomed tracks - Fischer Nordic Cruiser skis, 50mm wide with NNN T3 bindings, Fischer XC Tour boots, and Fischer XC Sport poles - generally about $250 for the package.
For OTBD skiing - Fischer Outbound Crown, 70mm wide with a metal edge and NNN BC Auto bindings, Fischer BCX 6 boots, and Fischer BCX poles - generally about $500 for the package.
Of all the websites with educational materials about Nordic skis, the most useful I've found are the videos at Onion River Sports and an older collection of pages called Dave's Backcountry Skiing page.
I hope this overview is helpful. Let it snow!
Wednesday, December 8, 2010
Healthcare IT implications of Healthcare Reform
I'm often asked how Healthcare Reform will impact IT planning and implementation over the next few years.
First, some background. The Patient Protection and Affordable Care Act (HR 3590) and Health Care and Education Reconciliation Act (HR 4872) were passed to address several problems with healthcare in the US. We're spending 17% of our Gross Domestic Product on healthcare, yet we have worse population health outcomes than many other industrialized societies spending half as much. Healthcare costs are rising faster than inflation. We have significant variation in practice patterns that is neither explained by patient co-morbidities nor justified by comparative effectiveness evidence. We want to expand access to health insurance to 95% of the population, lower our spending growth rate, and incentivize delivery system change.
How will we do this?
Health insurance reform expands coverage, makes features and costs of plans transparent, and removes the barriers to enrollment created by pre-existing condition considerations.
Payment reform transforms the Medicare payment systems from fee-for-service to Value Based Payment - paying for good outcomes rather than quantity of care. Pilot projects will test new payment methods and delivery models. Successful innovations will be widely implemented.
Let's look at the payment reform details that will lead to delivery system reform.
Medicare Initiatives include
*Medicare shared savings program including Accountable Care Organizations (ACOs)
*National pilot program on payment bundling
*Independence at home demonstration program
*Hospital readmissions reduction program
*Community-Based Care Transitions Program
*Extension of Gainsharing Demonstration
Medicaid Initiatives include
*Health Homes for the Chronically Ill
*Medicaid Community First Choice Option
*Home and Community Based Services State Plan Option
*Hospital Care Integration
*Global Capitation Payment for Safety Net Hospitals
*Pediatric ACOs
I believe that Accountable Care Organizations will be the ideal place to host several of these innovations including bundled payments, the medical home, and an increased focus on wellness.
All of this requires innovative IT support.
Here are my top 10 IT implications of healthcare reform
1. Certified EHR technology needs to be implemented in all practices and hospitals which come together to form Accountable Care Organizations. EHRs are foundational to the capture of clinical and administrative data electronically so that data can be transformed into information, knowledge and wisdom.
2. Health Information Exchange among the PCPs, Specialists, and Hospitals is necessary to coordinate care. Data sharing will start with the "pushed" exchange of patient summaries in 2011 and evolve to just in time "pulls" of data from multiple sources by 2015.
3. Health Information Exchange to Public Health registries is necessary to measure population health across the community.
4. Quality data warehousing of key clinical indicators across the ACO is necessary to measure outcomes. 2011 will be about measuring practice and hospital level quality, 2013 will be about measuring quality throughout the accountable care organization, and 2015 will be about measuring patient-centric quality regardless of the site of care.
5. Decision support that occurs in real time is needed to ensure the right evidence-based care is delivered to the right patient at the right time - not too little or too much care, but just the right amount of care to maintain wellness.
6. Alerts and Reminders are critical to distill the overwhelming amount of data about a patient into actions that a caregiver (or the patient) can take to maintain wellness (a minimal sketch of this kind of rule-driven alerting follows this list).
7. Home care is needed to prevent hospital readmissions, provide care that is consistent with patient preferences, and to enlist families as part of the care team. Novel IT solutions range from connected consumer health devices (blood pressure cuffs, glucometers, scales) to wireless telemetry informing clinicians about compliance with treatment.
8. Online access to medical records, secure communication with caregivers and customized patient educational materials are needed to enhance workflow, improve coordination, and engage patients.
9. Outcomes are challenging to measure, and we'll need innovative new sources of data such as patient reports of wellness, exercise, and symptoms.
10. Revenue Cycle systems will need to be significantly modified as we move from fee-for-service models to value-based payment and gainsharing, in which ACOs keep a share of the savings when they deliver higher quality care at lower cost (a worked example follows this list).
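To make items 5 and 6 above concrete, here is a minimal sketch, in Python, of the kind of rule-driven check a decision support engine might run against a patient record. The field names, thresholds, and rules are illustrative assumptions, not any particular vendor's data model.

# A minimal, illustrative rule engine: each rule inspects a patient record
# (a plain dict here) and returns a reminder if it fires. Field names and
# thresholds are hypothetical.
from datetime import date

def overdue_a1c(patient, today):
    # Remind if a diabetic patient has no HbA1c result in the last 180 days.
    if "diabetes" not in patient["problems"]:
        return None
    last = patient["last_a1c_date"]
    if last is None or (today - last).days > 180:
        return "HbA1c overdue - order test"
    return None

def uncontrolled_bp(patient, today):
    # Alert if the most recent blood pressure reading exceeds 140/90.
    systolic, diastolic = patient["last_bp"]
    if systolic >= 140 or diastolic >= 90:
        return "Blood pressure above goal - consider a follow-up visit"
    return None

RULES = [overdue_a1c, uncontrolled_bp]

def reminders_for(patient, today):
    # Run every rule and collect the alerts that fire for this patient.
    alerts = []
    for rule in RULES:
        message = rule(patient, today)
        if message:
            alerts.append(message)
    return alerts

patient = {
    "problems": {"diabetes"},
    "last_a1c_date": date(2010, 3, 1),
    "last_bp": (148, 92),
}
for alert in reminders_for(patient, date(2010, 12, 8)):
    print(alert)   # prints both the HbA1c reminder and the blood pressure alert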
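For item 10, here is a back-of-the-envelope illustration of why revenue cycle systems must change: under a shared savings arrangement, payment depends on a spending benchmark, a minimum savings rate, and a quality gate rather than on the volume of services billed. Every figure below is a made-up assumption for illustration only.

# Illustrative shared savings calculation for an ACO-style contract.
# All numbers are hypothetical; real contracts define benchmarks, minimum
# savings rates, and quality gates in far more detail.
benchmark_spend  = 100_000_000   # expected spend for the attributed population ($)
actual_spend     =  94_000_000   # what the ACO actually spent ($)
min_savings_rate = 0.02          # savings must exceed 2% of benchmark to qualify
shared_rate      = 0.50          # ACO keeps 50% of qualifying savings
quality_score    = 0.90          # 0-1 scale; gates the payout

savings = benchmark_spend - actual_spend
payout = savings * shared_rate * quality_score if savings > min_savings_rate * benchmark_spend else 0.0

print(f"Savings: ${savings:,.0f}")                  # Savings: $6,000,000
print(f"Shared savings payment: ${payout:,.0f}")    # Shared savings payment: $2,700,000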
So there you have it - find the PCPs, Specialists and Hospitals you want to form an ACO, then fully implement EHRs, PHRs, Quality Data Warehouses, Health Information Exchange, Decision Support Systems with alerts and reminders, homecare support including consumer healthcare device interfaces, and new revenue cycle systems. Luckily this is well aligned with Meaningful Use Stages 1, 2, and 3, so you'll be doing it anyway.
For IT professionals, we truly live in interesting times.
Tuesday, December 7, 2010
A Glass Half Full
My 17 year old daughter recently wrote an essay that began "we cannot see our own eyes. The perception of ourselves comes from the reflections of others - how we're perceived and treated by the world around us".
Later in the essay she laments that the modern world seems to embrace bad news, negativity, and criticism rather than joy, optimism and gratitude.
I agree with her.
2010 has been a particularly strange year filled with audits, new compliance requirements, and regulatory review. Negative commentators have been granted more airtime than those trying to make the world a better place. We have become a nation that thrives on sensational news, usually to someone's discredit.
There may come a time when we spend more time defending our work to consultants, regulators, and naysayers than doing it.
I wonder if it is possible to reverse this trend.
Imagine the following - instead of a statement with an accusatory overtone such as
"40% of clinicians in Massachusetts do not have an electronic health record. Clearly the state has challenges."
How about
"60% of clinicians in the state have an electronic health record, making Massachusetts one of the most wired regions in the country. For the remaining 40%, there is a step by step plan to achieve 100% adoption by 2015. Massachusetts is the only state to mandate EHR adoption as a condition of licensure by 2015."
Instead of highlighting a small number of flaws in a person, a team, or an organization, I would rather celebrate their strengths. Then, in the context of a positive trajectory, discuss the ways they could be even better.
I rarely see this approach. Instead there is a focus on what is not done, not planned, and not budgeted, sometimes declaring risk without providing a benchmark as to the real current state of the industry.
For example, what if an audit or consulting report declared
"IT has not implemented flying cars"
Senior management or Board members might think they should worry about IT management, IT planning, or Governance processes.
Of course, no one in the country has implemented flying cars, and the first production vehicle is not expected until 2011.
Business owners facing their own operational challenges might say - we cannot move forward with our workflow redesign because IT has not deployed the flying cars needed to support our automation needs.
Thus, IT becomes the bottleneck, the area of scrutiny, and point of failure.
Consultants might even be hired to analyze why IT has not implemented flying cars and make recommendations for accelerating the flying car program.
Of course, there are numerous other projects that deserve time, attention and resources before flying cars are even considered.
So what's needed to make this better?
First, we need to eliminate our default tone of negativity. The quality, safety and efficiency risks we have today were there last year. Somehow we still delivered appropriate care. The key is focusing on the trajectory, making each day better than the last.
I've recently rewritten several reports to take this more positive, optimistic approach. Instead of a gap or failure mode analysis, I created a trajectory analysis and mitigation analysis.
If we persist with a negative approach in the way we interact with others and manage our organizations, our work lives will continue to change for the worse. How so?
A recent NY Times column relates the modern world to life in a Zombie film in which we spend each day shooting Zombie after Zombie in a war of attrition until the Zombies are all gone or we become one of them. Think of your email, your cell phone, and your meeting schedule as a daily battle against Zombies and you'll see the author's point.
As my daughter said, we define ourselves based on the reflections we see from others. If others are negative, we become negative. If others highlight the positive, the good, and the trajectory to become even better, we will do the same.
Thus, each one of us can make a difference. Start tomorrow with a glass half full and soon, those around you will see the world for what it can be instead of what it is not.
Monday, December 6, 2010
A Lookback at 2010
Every Summer I work with my governance committees and IS staff to develop operating plans for the year ahead. Every Winter, I think about the new issues and challenges that keep me awake at night and review the progress on the previous year.
In FY10, the following were my keep-me-awake-at-night issues. How did we do?
BIDMC
Intranet - Our new intranet went live with all expected features and technologies. As part of the project, we introduced a web application firewall and reverse proxy capability. Mastering any new technology and ensuring it is configured for disaster recovery and high reliability takes training and resources. We are not shutting off our old portal, which at this point is just a fallback and is not used much, until early 2011 to allow time for hardening all our newly introduced technologies.
Enterprise Image Management - We migrated all cardiology images to our new GE Enterprise Archive 4.0/EMC Atmos Enterprise Image Archive. Vendor neutral archives are definitely becoming more mainstream, but they are still a challenge for RIS/PACS vendors. I expect support for these archives to be built into imaging products in 2011.
EHR rollouts - By January all 1700 clinicians in our physicians' organization will have a certified EHR in place. The major challenges have been workflow redesign, change management, and communication.
Business Intelligence - After years of experimentation and investigation, we settled on a suite of new functionality using Microsoft SQL Server 2011 Reporting and Analysis services to meet our business intelligence needs.
Interoperability - Our Meaningful Use related interoperability efforts (provider to provider summary exchange, public health exchange, and quality data warehousing) will all be live by the end of 2010. The standards for content and vocabulary were taken from the Standards Final Rule. The standards used for transport were SOAP 1.2 using XDR. We are seeing convergence of vendor approaches to healthcare information exchange, which is making our integration task easier.
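For readers curious what a "pushed" summary exchange looks like at the transport layer, here is a rough sketch of wrapping a clinical document in a SOAP 1.2 envelope and POSTing it to a receiving gateway. The endpoint URL and payload are placeholders, and a real XDR submission carries far more metadata (ebXML registry entries, MTOM attachments, security headers) than shown here.

# Rough sketch of a SOAP 1.2 "push" of a clinical summary document.
# The endpoint and payload are placeholders for illustration only.
import urllib.request
import xml.etree.ElementTree as ET

SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"
ENDPOINT = "https://hie.example.org/xdr"   # hypothetical receiving gateway

def build_envelope(document_xml):
    # Build a bare SOAP 1.2 envelope around the document text.
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    ET.SubElement(envelope, f"{{{SOAP_NS}}}Header")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    # In practice the body carries an ebXML ProvideAndRegisterDocumentSet
    # request; here we simply embed the document for illustration.
    submission = ET.SubElement(body, "ClinicalSummarySubmission")
    submission.text = document_xml
    return ET.tostring(envelope, encoding="utf-8")

def push_summary(document_xml):
    request = urllib.request.Request(
        ENDPOINT,
        data=build_envelope(document_xml),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status   # the gateway's HTTP status code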
HMS
High Performance Computing - In November we expanded our High Performance Computing facility from 1000 cores to 2000 cores. By January it will be at 5000 cores. By March it will be at 6000 cores, incorporating graphics processing unit support and InfiniBand connections to storage. The major challenge was more power and cooling support.
Storage - Our enterprise storage now includes over 1 petabyte of replicated storage at 2 different service levels - high performance and standard performance. The challenge has been developing an NIH compliant chargeback model to sustain the growth of our storage infrastructure and staff.
Content Management - We experimented with content management, but the project to migrate all externally facing content to a single infrastructure with a common navigation experience and search was not funded in the past year. Hopefully it will be in 2011.
Social Networking for Research - We completed several releases of our social networking platform for research, open sourced it, and implemented it at 60 Universities throughout the world. Profiles has been a great success story.
Governance - We completed the design for our new governance committees - Research Computing Governance Committee, Educational Technologies, Administrative IT, and overall IT Governance.
NEHEN/State Healthcare Information Exchange
At NEHEN, we added more transactions and more trading partners, obtained additional funding to accelerate our work, and went live with all the capabilities needed to support meaningful use. At the State level, our trajectory to complete a governance design, prepare for procurement, and plan for business operations has been positive.
Federal
Over the past year, the Standards and Certification Final rule was published, work on transport standards moved forward, and multiple efforts to accelerate implementation and adoption are in process.
Personal
My daughter applied to college (early decision results are available in 10 days).
My parents are back home after their healthcare experiences in November.
My wife and I are spending even more time together walking, talking, and enjoying nature.
My commitment to the outdoors has included more kayaking, skiing, hiking, biking, and mountaineering experiences.
Thus, despite the tyranny of the urgent, substantial new work, a continually changing healthcare environment and new compliance/regulations, 2010 was a good year. In many ways, I'm surprised that the 2010 things that kept me up at night are all on track. Along the way, 2010 was a roller coaster. But due to the hard work of hundreds of people, all will be well.
Friday, December 3, 2010
Cool Technology of the Week
I'm not a gamer, but I have a great appreciation for the technologies incorporated into gaming systems. Sometimes, technologies used in gaming can have an impact on healthcare education, such as in physical simulators and virtual patient tools.
The Microsoft Kinect controller, introduced in November, is likely to be one of those technologies.
The device features an RGB camera, a depth sensor, and a multi-array microphone, which together provide full-body 3D motion capture, facial recognition, and voice recognition capabilities. The depth sensor consists of an infrared laser projector combined with a monochrome sensor, and allows the Kinect sensor to see in 3D under any ambient light conditions.
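As a toy illustration of what a depth sensor makes possible, the sketch below segments the nearest object in a depth frame and computes its centroid - a primitive cousin of the body tracking Kinect performs. It assumes depth frames arrive as NumPy arrays of millimeter distances from some driver; acquiring frames from the actual device is outside the scope of this sketch, and the thresholds are arbitrary.

# Toy example: given a depth frame (2D array of distances in millimeters),
# isolate the nearest object and report its centroid.
import numpy as np

def nearest_object_centroid(depth_frame, band_mm=300):
    # Return the (row, col) centroid of pixels within band_mm of the closest point.
    valid = depth_frame > 0                     # 0 often means "no reading"
    if not valid.any():
        return None
    nearest = depth_frame[valid].min()
    mask = valid & (depth_frame < nearest + band_mm)
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# Simulate a 480x640 frame: background at 3 m, a "person" blob at ~1.2 m.
frame = np.full((480, 640), 3000, dtype=np.uint16)
frame[100:400, 250:400] = 1200
print(nearest_object_centroid(frame))   # -> (249.5, 324.5)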
What are the possibilities? This New York Times article highlights some of the creative ways Kinect has already been used to control robotics, create immersive 3D renderings, and control movement of virtual objects.
Microsoft would be wise to offer a Software Development Kit and embrace a community of innovative developers, just as iRobot has done with its iRobot Create variant of the Roomba consumer cleaning robot.
3D motion capture, facial recognition, voice recognition and depth sensing for $150.00. That's cool!
Thursday, December 2, 2010
Publicity is Cheap, Privacy is Expensive
When I was 18 years old, publicity was hard to come by. Media outlets were limited to newspapers with very high editorial standards, television with few channels and very limited news time, and a few high profile news magazines.
My first 15 minutes of fame came in 1981 when I was interviewed by Dan Rather for a CBS Evening News spot on entrepreneurialism in the Silicon Valley. In 1982, I appeared in Newsweek, as a student correspondent at Stanford, writing about religion, politics and the culturally important trends of the day. In 1983, I appeared in US News and World Report in an article about the emerging importance of software.
Today, blogs, wikis, forums, YouTube, Facebook, Twitter, and Google enable fame and publicity without editorial control. Use your phone to take a video of a squirrel doing something amusing and a few minutes later you've got publicity and thousands of people watching your work.
The democratization of information is a good thing. It enables freedom of expression and instant access to news and information. Of course, it's hard to tell fact from fiction, opinion from news, and accomplishment from self promotion, but it's left up to the consumer to turn data into information, knowledge and wisdom.
The downside of a completely connected world is that publicity is cheap, but privacy is expensive.
How much effort does it take to not appear on the internet, not be tracked by vendors maximizing sales by analyzing your browsing behavior, and not be findable from the innumerable legal/property/licensure records available on the internet?
In 1981, publicity was expensive, and privacy was cheap.
30 years later, publicity is cheap, and privacy is expensive.
In another 30 years, it will be interesting to see how the concept of privacy evolves.
My daughter's generation shares everything about their day on Facebook. Maybe the concept of privacy will disappear for most aspects of life, except for those items, like medical records, which are protected via regulation and policy.
My advice to my daughter about privacy is simple - content on the web lasts forever; on the internet, nobody knows you're a dog (http://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_you're_a_dog); and share what you will, such that no one gets hurt, including you.
To discover just how "expensive" it is to preserve your privacy, here's a great WikiHow about deleting yourself from the internet.
30 years ago I had to wait for a call from Dan Rather. Today, I just press Post. How we balance the expense of publicity and privacy is a question that society will need to continuously evaluate as we become more and more connected.
Wednesday, December 1, 2010
Good Consultants, Bad Consultants
In 1998 when I became CIO of CareGroup, there were numerous consultants serving in operational roles at BIDMC and CareGroup. My first task as CIO was to build a strong internal management team, eliminate our dependency on consultants, and balance our use of built and bought applications. Twelve years later, I have gained significant perspective on consulting organizations - large and small, strategic and tactical, mainstream and niche.
There are many good reasons to hire consultants. One of my favorite industry commentators, Robert X. Cringely, wrote an excellent column about hiring consultants. A gold star idea from his analysis is that most IT projects fail at the requirements stage. If business owners cannot define their future state workflows, hiring consultants to implement automation will fail.
I've been a consultant to some organizations, so I've felt the awkwardness of parachuting into an organization, making recommendations, then leaving before those recommendations have an operational impact. Many of my friends and colleagues work in consulting companies. Some consultants are so good that I think of them as partners and value-added extensions of the organization instead of vendors. From my experience, both hiring and being a consultant, here's an analysis of what makes a consultant good or a consultant bad.
1. Project Scope
Good - They provide work products that are actionable without creating dependency on the consultant for follow-on work. There are no change orders to the original consulting assignment.
Bad - Consultants become self-replicating. Deliverables are missing the backup data needed to justify their recommendations. Consultants build relationships throughout the organization outside their constrained scope of work, identifying potential weaknesses and convincing senior management that more consultants are needed to mitigate risk. Two consultants become four, then more. They create overhead that requires more support staff from the consulting company.
2. Knowledge Transfer
Good - They train the organization to thrive once the consultants leave. They empower the client with specialized knowledge of technology or techniques that will benefit the client in operational or strategic activities.
Bad - Their deliverable is a PowerPoint of existing organizational knowledge without insight or unique synthesis. This is sometimes referred to as "borrowing your watch to tell you the time".
3. Organizational Dynamics
Good - They build bridges among internal teams, enhancing communication through formal techniques that add processes to complement existing organizational project management approaches. Adding modest amounts of work to the organization is expected because extra project management rigor can enhance communication and eliminate tensions or misunderstandings among stakeholders.
Bad - They identify organizational schisms they can exploit, sow discord, and cause teams to work against each other as a way to foster organizational dependency on the consultants.
4. Practical Recommendations
Good - Recommendations are data-backed, prioritized by relative value (cost multiplied by benefit), reflect current community standards, and take into account competing uses of the organization's resources and time.
Bad - Recommendations lack depth. They are products of uncorroborated interviews. They lack factual details and are a scattershot intended to create fear, uncertainty and doubt. They focus on parts rather than systems. Implementing these recommendations causes energy to be drained away from more strategic and beneficial initiatives.
5. Fees
Good - Consultants use markup factors (the amount they charge versus the amount they pay their staff) such as the following (a worked example follows this section):
Staff augmentation / placement only, with no management oversight = 1.5
Commodity consultants, largely staff augmentation, but with "account management" = 1.5-2
Consulting / systems integration, project-based = 2-3.5
Management consulting / very senior and high-demand specialists = 3.5-4
Bad - The engagement partner becomes more concerned about billing you than serving you. Meetings appear on your calendar weeks before the end of a consulting engagement to discuss your statement of work renewal. You begin to spend more time managing the consultants than managing the project. Consultants justify a markup factor of 5 or 6 by saying "We're so good that we have high overhead".
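The markup arithmetic above is simple enough to check on a napkin, which is exactly why it's worth doing. Here is the calculation in Python, using hypothetical staff pay rates and the markup factors listed under "Good".

# Markup factor = hourly rate billed to the client / hourly rate paid to staff.
# Pay rates below are hypothetical; the factors mirror the ranges above.
engagements = [
    ("Staff augmentation, no oversight",      80, 1.5),
    ("Commodity consulting w/ acct mgmt",     90, 2.0),
    ("Systems integration, project-based",   110, 3.0),
    ("Management consulting / specialists",  150, 4.0),
]

for description, staff_pay, markup in engagements:
    bill_rate = staff_pay * markup
    print(f"{description}: staff paid ${staff_pay}/hr, billed at ${bill_rate:.0f}/hr")
# A quoted rate implying a markup of 5-6x is the warning sign described under "Bad".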
6. Balancing Priorities
Good - Complex organizations execute numerous projects every year in the context of their annual operating plans. Although consultants are hired to complete very specific tasks, good consultants take into account the environment in which they are working and balance their project against the other organizational priorities. In this way, the organization can adapt to the changes caused by the presence of the consultant while not significantly disrupting their other work.
Bad - Meetings that conflict with other organizational imperatives are consistently scheduled with little advance notice. Any attention paid to organizational demands outside the consulting engagement is escalated to senior management as being "uncooperative".
7. Quality of deliverables
Good - The deliverables are innovative, customized to the organization, and represent original work based on significant effort, due diligence, and expertise.
Bad - Material is reused from other organizations. The volume of deliverables is increased with boilerplate. The content seems unhelpful, general, or unrelated to the details of your organization.
8. Managing project risk
Good - Risk is defined as the likelihood of bad things happening multiplied by the impact on the organization (a small worked example follows this section). Real risks to the project are identified and solutions are recommended/developed collaboratively with project sponsors.
Bad - There is greater concern about risk to the reputation of the consultants than the risks to project success.
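Here is the small worked example promised above, with invented likelihoods and impacts; scoring each risk as likelihood multiplied by impact makes it obvious which ones deserve the sponsors' attention first.

# Risk score = likelihood (0-1) x impact (1-10). All values are invented.
project_risks = [
    ("Key interface engineer leaves mid-project", 0.2, 9),
    ("Vendor ships the upgrade two months late",  0.5, 6),
    ("Training rooms unavailable at go-live",     0.7, 3),
]

for name, likelihood, impact in sorted(
        project_risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: score {likelihood * impact:.1f}")
# Vendor delay (3.0) ranks first, then training logistics (2.1),
# then staff turnover (1.8).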
9. Respect for the org chart
Good - Work is done at the request of the project sponsors. The chain of command and the hierarchy of the organization are respected, so that consultants do not interact directly with the Board or senior management unless directed to do so by the project sponsors.
Bad - Governance processes are disrupted and consultants seek to establish the trust of the organizational tier above the project sponsors. Sometimes they will even work against the project sponsors to ensure organizational dependency on the consultants.
10. Consistency
Good - Transparency, openness, and honesty characterize all communications from the consultants to all stakeholders in the organization.
Bad - Every person is told a different story in the interest of creating the appearance of being supportive and helpful. This appearance of trustworthiness is exploited to identify weaknesses and increase dependency on consultants.
I'm the greatest ally of good consultants. Per Robert Cringely's article, we'll bring in a few "Consulting Type A" experts each year for specific well-defined tactical projects requiring deep expertise.
If survival of the fittest applies to consultants, then the good ones should thrive and the bad ones should see fewer engagements over time. However, I'm not sure Darwinian selection pressures apply to consultants, since organizations may have short institutional memories about consulting experiences due to their own staff turnover.
The best you can do for your organization is think about the good and bad comparisons above, then use them to evaluate your own consulting experiences, rewarding those who bring value-added expertise and penalizing those who bring only "powerpoint and suits".