Friday, October 30, 2009

The Implementation Workgroup Testimony

Yesterday I spent the day in Washington with the HIT Standards Committee's Implementation Workgroup. The online forum to comment on standards adoption and implementation is now available.

The first article was posted by Aneesh Chopra, the US CTO. The second, my summary of the standards work thus far, will be posted this morning. Additional articles will be posted by other members of the HIT Standards Committee over the next week.

Whenever I hear testimony from teams of smart people, I try to distill everything I've heard into "Gold Star Ideas" - those themes that surfaced over and over. Here are a few:

1. We've learned from other industries that starting with simple standards works well. Mastering web transport standards such as REST takes minutes. Learning RSS takes an hour. Learning HTML takes a day. In the healthcare domain, I learned the basics of HL7 2.x, X12 and NCPDP in about a day.

2. Keep the standards as minimal as possible to support the business goal. Design for the little guy so that all participants, not just the best resourced, can adopt the standard. Do not try to create a one-size-fits-all standard - it will be too heavy for the simple use cases.

3. Start immediately rather than waiting for the perfect standard. Use early implementation experiences to create great documentation. Leave aspects of the standard open for future expansion and let innovation occur after adoption.

4. Declare a long-term goal for new standards implementation, but in the short term map what exists to new standards at the border of the organization rather than converting all existing legacy systems.

5. In early phases of implementation, allow ambiguity in the standard (what Adam Bosworth called Hysteresis) so that implementers can start simply and improve the completeness of their interfaces over time.

These are all reasonable principles. How do we apply them to the meaningful use standards we're all working on?

I asked one group of testifiers to tell me their views about the maturity of standards for the 4 required data exchanges in 2011. Here are their answers, interpreted against the 5 criteria above:

ePrescribing - we have a mature standard (NCPDP Script 8.x) that is being enhanced to support new features (NCPDP Script 10.x) on a reasonable timeframe with minimal burden. We have test harnesses, middleware and clearinghouses that will accelerate adoption. We have an ecosystem of application developers. There is work to do to encourage more transactions to flow, but we're in generally good shape.

Lab - we have a mature standard for messaging (HL7 2.x), but numerous versions are already implemented that will require mapping to HL7 2.5.1, since replacing all HL7 2.x in legacy systems would be burdensome. The real problem is not the HL7 but the lack of a single national lab compendium of the minimal set of LOINC codes for the most commonly ordered tests that should be implemented by all labs (commercial and hospital). CLIA is also an issue, requiring validation of every interface even if the same interface is cloned over and over for the same products. HITSP has already prepared a LOINC subset (700 codes instead of 20,000). The work ahead is part policy (reform CLIA) and part standards. The HIT Standards Committee has established a new workgroup on vocabularies, and one of its first charges should be to ensure the appropriate LOINC subsets are available for general use. Regulation should require use of these subsets for lab ordering in 2013.

Administrative transactions (Benefits/Eligibility, Claims etc) - we have a mature standard for messaging (X12 4010) and transport (CAQH Core II). We have new enhancements on the way (X12 5010) that provide value. We have test harnesses, middleware, and clearinghouses that will accelerate adoption. We have many companies that build applications to support administrative transaction exchange. There is work to do to encourage more transactions to flow, but we're in generally good shape.

Quality - a consistent complaint is that every stakeholder (payers, government, specialty specific registries) require different quality measures with different data elements and definitions. There was broad agreement that the work the NQF has done and is doing to select a few consistent measures, with clearly defined data types, and retooling them to be EHR-based (not paper record) is the right thing to do. The measures will likely require controlled vocabularies and we need to be sure the right SNOMED-CT, LOINC, and RXNorm vocabularies plus mapping tools are available to report data in a normalized format for quality measurement.

My synthesis of the advice we received from all the panels is:

Creating controlled vocabularies/code sets is consistent with the simple standards goal. You can imagine an implementation guide that defines an XML format and then points to a website that contains publicly available vocabulary content (such as that developed by NLM or licensed for public use such as SNOMED-CT). Engineers would have no problem downloading and implementing a publicly available vocabulary code set.
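
To make that concrete, here is a minimal Python sketch of what an implementer would do with a published code subset. The CSV layout, column names, and the idea of distributing the subset as a flat file are my own illustrative assumptions, not the actual HITSP or NLM format; the three LOINC codes shown are real.

```python
import csv
import io

# Hypothetical excerpt of a published LOINC subset in a simple CSV layout.
# The file layout is illustrative; the codes themselves are genuine LOINC codes.
SAMPLE_SUBSET = """loinc_code,long_name
2345-7,Glucose [Mass/volume] in Serum or Plasma
718-7,Hemoglobin [Mass/volume] in Blood
2951-2,Sodium [Moles/volume] in Serum or Plasma
"""

def load_code_set(text):
    """Parse a code-set CSV into a dict of code -> display name."""
    reader = csv.DictReader(io.StringIO(text))
    return {row["loinc_code"]: row["long_name"] for row in reader}

codes = load_code_set(SAMPLE_SUBSET)
print(codes["718-7"])  # Hemoglobin [Mass/volume] in Blood
```

In practice the subset would be downloaded once from the publishing website and cached locally - a few lines of code, which is the point: a constrained, publicly available code set is something any development shop can consume.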

Keep transport simple. Several testifiers noted that content and transmission should be separate standards, leveraging the web when possible for transport so that implementers do not need to learn new transport standards.

Get everyone to send the basics - medications (highlighted by everyone as a high value data exchange), problem lists, and labs before focusing on the esoteric.

Security is very important but privacy policy is even more urgent. We can very significantly constrain the number of security standards if a policy framework outlines our goals. For example - do we need a standard-based audit trail for every organization or is it sufficient to create a policy that an audit trail must be available to patients showing who accessed what and when?
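
As a sketch of the policy-first alternative, a patient-facing disclosure log need not require a heavyweight standard at all. The record fields below are invented for illustration and are not drawn from any standard:

```python
from datetime import datetime, timezone

# Illustrative audit entries: who accessed what, and when.
# Field names are invented for this sketch, not taken from any standard.
audit_log = [
    {"patient": "p-001", "accessor": "Dr. Smith", "resource": "lab results",
     "when": datetime(2009, 10, 28, 14, 5, tzinfo=timezone.utc)},
    {"patient": "p-002", "accessor": "Dr. Jones", "resource": "medication list",
     "when": datetime(2009, 10, 29, 9, 30, tzinfo=timezone.utc)},
    {"patient": "p-001", "accessor": "Billing Office", "resource": "claims",
     "when": datetime(2009, 10, 29, 16, 45, tzinfo=timezone.utc)},
]

def disclosures_for(patient_id):
    """Return the audit entries a given patient is entitled to see."""
    return [e for e in audit_log if e["patient"] == patient_id]

for entry in disclosures_for("p-001"):
    print(entry["accessor"], "accessed", entry["resource"],
          "at", entry["when"].isoformat())
```

The policy question - must patients be able to see who accessed what and when - can be satisfied by something this simple; a standards-based audit format only becomes necessary if logs must be exchanged across organizations.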

What action items should we take?

I would like input from other HIT Standards Committee members, but the action items seem to be:

1. Work hard on vocabularies and try to get them open sourced for the entire community of stakeholders

2. Consider adding a simple REST-based transport method for point to point exchanges

3. Work jointly with the HIT Policy Committee to establish a privacy framework that enables us to constrain the number of security standards

4. As we continue our work, try to use the simplest, fewest standards to meet the need

5. Continue to gather feedback on the 2011 exchanges - eRx, Lab, Quality, Administrative - to determine if there are opportunities to enhance testing platforms and implementation guidance that will accelerate adoption.

I look forward to continued discussion.

7 comments:

Anonymous said...

For Laboratory interfaces, we shouldn't wait until 2013. The focus should be on lab results, and not orders. If we establish a standard that 95% of all results require a LOINC cross-reference, then we can obtain a very significant improvement in the current situation.

Also, the HL7 laboratory results standard should be required certification criteria.

This combination would make a very big difference very rapidly. Our goal should be to improve this situation for 2011.

John Waldron said...

Thanks for another great summary and clear distillation of the "gold star" ideas. With a few years experience in the field, I concur with all your points. I hope similar guidance is applied here in Canada as well.

Steven Waldren said...

Great summary and a great meeting yesterday. One thing that was not listed was a federated identity standard/process for the REST point-to-point exchange. As I write my comment, I see that I could have logged in with the standard the Internet uses for federated identity...OpenID.

Best,
Steven

Anonymous said...

John, excellent summary. I'm concerned, however, about how well the HITSP work products to date correspond to the stated needs of the testifiers. For instance, "simplicity" is the last word that comes to mind in reviewing the HITSP harmonization standards, which consist of dozens of interleaved constructs, capabilities, interoperability specifications, service collaborations, and technical notes with dependencies on numerous other standards and implementation guides from HL7, IHE, NCPDP, etc. Can mere mortals implement specifications like these in a consistent manner, as required for interoperability?

Arien Malec said...

First, thank you for allowing me to participate in the panel.

Second, relative to your comment that HIT standards should stay out of the transport space, it occurs to me that the whole of XDS.b could be described in REST as follows:

1) Get a list of known documents: GET to URI *baseuri*/documents/*patient_id*, where *patient_id* could be an OpenID, or an HL7 triplet of assigning authority + ID type + ID; returns a list (XOXO?) of document location URIs in the format *baseuri*/documents/*document_id* (note that if the *baseuri* is a registry, the documents could be served out of different locations).
Query strings can be used to subset the list based on creation time, source, etc.

2) Get a document: GET to document location URI
3) Add a new document: POST the document to /*patient_id*/documents (POST returns the location of the newly created document)
3a) Add a reference to a new document: POST the URI to /*patient_id*/documents
4) Update an existing document: PUT the document to the document URI

If you want to register or provide multiple documents, do multiple POSTS.

And that is, I believe, the entire XDS.b spec in 4 bullet points.
But, based on REST, we can do so much more:
1) Content negotiation can be used to get documents of different types
(C32 CCD, CCR, non-C32 CCD, CDA text/PDF, simple PDF, text, even movies, photos, etc.)
2) We can use 303s to notify when the authoritative source has moved
3) We can use ETags to version documents and let people know if the document version has changed.
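
A minimal Python sketch of these URI conventions (the base URI, patient ID, and filter name are placeholders; a real deployment would also need authentication and TLS):

```python
from urllib.parse import urlencode, quote

# Placeholder base URI for an imagined document registry.
BASE = "https://registry.example.org"

def document_list_uri(patient_id, **filters):
    """URI for listing a patient's known documents (GET).
    Keyword arguments become query-string filters, e.g. creation time."""
    uri = f"{BASE}/documents/{quote(patient_id)}"
    return f"{uri}?{urlencode(filters)}" if filters else uri

def document_uri(document_id):
    """URI for retrieving (GET) or updating (PUT) a single document."""
    return f"{BASE}/documents/{quote(document_id)}"

# List documents created after a given date, per the query-string idea above.
print(document_list_uri("p-001", created_after="2009-10-01"))
print(document_uri("doc-42"))
```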

Mark Boss said...

Although the desire to keep the standards simple seems to be quite well received, simplicity in relation to transmission of data often does not adequately protect the privacy or confidentiality concerns of the individuals to whom the data relates. Building a system that will eventually protect the privacy of individuals means that their confidentiality will not be initially preserved. Essentially, such an approach places the privacy of individuals in jeopardy and increases the public's distrust of electronic transmission of sensitive information. Even transmission of prescriptions or lab results exposes the medical condition of the individual to whom that information pertains.

Keith W. Boone said...

It's not about SOAP vs. REST, it's about easy vs. hard. Building it yourself is hard (but not too hard). Using the work of others is easy. I have some experiences with both that I've documented here.