Wednesday, October 7, 2009

My Privacy and Security lessons learned

The editor at Computerworld gave me permission to share my monthly column with you on my blog:

Privacy and security are foundational to health care reform. Patients will trust electronic health care records only if they believe their confidentiality is protected via good security.

As vice chairman of the federal Healthcare Information Technology Standards Committee, I have been on the front lines in the debate over the standards and implementation guidance needed to support the exchange of health care information. Over the past few months, I've learned a great deal from the committee's privacy and security workgroup. Here are my top five lessons:

1. Security is not just about using the right standards or purchasing products that implement those standards. It's also about the infrastructure on which those products run and the policies that define how they'll be used. A great software system that supports role-based security is not so useful if everyone is assigned the same role and its accompanying access permissions. Similarly, running great software on an open wireless network could compromise privacy.

2. Security is a process, not a product. Hackers are innovative, and security practices need to be constantly enhanced to protect confidentiality. Security is also a balance between ease of use and absolute protection. The most secure library in the world -- and the most useless -- would be one that never loaned out any books.

3. Security is an end-to-end process. The health care ecosystem is as vulnerable as its weakest link. Thus, each application, workstation, network and server within an enterprise must be secured to a reasonable extent. The exchange of health care information between enterprises cannot be secured if the enterprises themselves are not secure.

4. The U.S. does not have a single, unified health care privacy policy -- it has 50 of them. That means that products need to support multiple policies -- for example, those of a clinic that uses simple username/password authentication and those of a government agency that requires smart cards, biometrics or hardware tokens.

5. Security is a function of budget. Health care providers' budgets vary widely. New security requirements must take into account the implementation pace that the various stakeholders can afford. Imposing "nuclear secrets" security technology on a small doctor's office is not feasible. Thus, the privacy and security workgroup has developed a matrix of required minimum security standards to be implemented in 2011, 2013 and 2015, recognizing that some users will go beyond these minimums.
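The caveat in lesson 1 — that role-based security is defeated when everyone is assigned the same role — can be sketched in a few lines of Python (a toy illustration; the role and permission names are invented, not drawn from any real product):

```python
# Toy role-based access control (RBAC) sketch; roles and permissions are invented.
ROLE_PERMISSIONS = {
    "physician": {"read_chart", "write_orders"},
    "billing_clerk": {"read_billing"},
}

def can(role: str, permission: str) -> bool:
    """Return True only if the role explicitly grants the permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# Distinct roles enforce least privilege:
assert can("physician", "write_orders")
assert not can("billing_clerk", "write_orders")

# If every user is simply mapped to "physician", the mechanism still "works,"
# but the policy is gone: everyone can write orders.
```

The mechanism is only as strong as the role assignments behind it, and those assignments are a policy decision, not a standards decision.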

In debating how to enhance security for all stakeholders without creating a heavy implementation burden, the workgroup has come up with these ideas:

All data moving between organizations must be encrypted over the wire. Data moving within an organization's data center should be encrypted if open wireless networks could expose it in transit. There is no need to encrypt the data twice: if an organization implements appropriately secured wireless protocols such as WPA Enterprise, the data can be sent within the organization without an additional layer of encryption.
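Over-the-wire encryption between organizations typically means TLS. Here is a minimal client-side sketch using Python's standard `ssl` module (the helper function and port are illustrative, not drawn from any HITSP specification):

```python
import socket
import ssl

# A default context verifies the server's certificate chain and hostname,
# so the channel is both encrypted and authenticated.
context = ssl.create_default_context()

def open_encrypted_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a TCP connection in TLS before any patient data crosses the wire."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)

# Certificate verification is on by default:
assert context.verify_mode == ssl.CERT_REQUIRED
assert context.check_hostname is True
```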

All data at rest on mobile devices must be encrypted. Encrypting all databases and storage systems within an organization's data center would create a burden. But ensuring that devices such as laptops and USB drives, which can be stolen, encrypt patient-identified data makes sense and is part of new regulations such as Massachusetts' data protection law.
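As a toy way to see what encryption at rest buys (a heuristic sketch, not a compliance test for the Massachusetts law): well-encrypted bytes look statistically random, while plaintext patient data does not, and a simple Shannon-entropy measure exposes the difference.

```python
import math
import os

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte; well-encrypted data approaches 8.0."""
    if not data:
        return 0.0
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"Patient: Jane Doe, DOB 1970-01-01, Dx: hypertension. " * 100
ciphertext_like = os.urandom(len(plaintext))  # stand-in for real cipher output

assert shannon_entropy(plaintext) < 6.0        # structured text is low-entropy
assert shannon_entropy(ciphertext_like) > 7.5  # encrypted data is near-random
```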

Such proposals strike a delicate balance: achieving care coordination through the exchange of health information depends on robust security technology, infrastructure and best practices, yet it cannot succeed if safeguarding patients' privacy is unduly cumbersome.

9 comments:

Keith W. Boone said...

One point I'd like to add is that privacy and security must be weighed against other considerations, such as patient safety. A security policy that prevents access to lifesaving information to ensure patient privacy isn't necessarily acting in the best interests of the patient. As you said, that's like a library that never loans out any books.

Glen said...

Thanks for sharing this on your blog.

I would add, as corollaries:
1) A key objective of security is to mitigate risks in accordance with clear and realistic policies. It is very important to keep security policies and risk analysis up-to-date.
2) The role of administrative controls and insurance is as important as standards and technology. They are relatively easy to use and may be the least costly choices.
3) There is no such thing as shrink-wrapped security. There are many who will try to sell it to you, however.
4) Do not pay more for risk mitigation than the value of the underlying risks.
5) The most frequent security breaches come from trusted users. Security auditing, and reading audit reports, is critical.

Gerald Beuchelt said...

John -

Your observations on Identity and Access Management (IdAM) and privacy are spot on. The security community, and in particular the identity management community, have been struggling with these issues for a while, and there are emerging solutions that can address some of the concerns you raised (for example, please take a look at http://tinyurl.com/ydqlfx4).

Overall, any security system can only be as good as its weakest component, and security policies and procedures - both online and offline - are critical factors in such a system.

Going forward, we should also not lose sight of a very critical issue: scalability. Any security control will have an impact on performance and, as such, limit the system's scalability. We really need to start looking into highly scalable architecture patterns (such as REST) in order to scale to internet orders of magnitude. Federation technologies will therefore likely become more important as well.

Regards,

Gerald Beuchelt

AlanS said...

If security is a process doesn't it need to be a systematic process? So where do procedures (e.g. ISO27001, NIST SP800-39) for developing, implementing and maintaining information security / risk management systems fit into the picture?

dining_phil said...

Level of assurance (LOA) 2 protections (required by CCHIT) are not enough to stop man-in-the-middle (MITM) attacks, in which identity can be spoofed using wildcard SSL certificates, as demonstrated at Black Hat.

By separating identity out of the XML as a managed service, the relevant HL7 RIM messages and ICD codes can be delivered and then bound late in the process to the identity data when it appears as a CCD/CCR.

Even better is an approach that can separate out bad data that should not be in the patient's EHR: data that may have originated from valid service records for billing but was attached to the wrong person due to medical identity theft or coding mistakes.

I think epatientDave made that process clear: crossing security domains requires agreed-upon abstractions, which can be achieved by a national-level schema that does not get lost in translation.

HITSP has demonstrated that this can be done effectively and at low cost between states, harmonized with state privacy laws using directory technology such as LDAP and X.500.

Taking a national approach has value, IMHO, since the entire country falls under several defined OIDs, or containers (for example, c=US and 1.3.6.1), and the states can run their own identity management under the standard FIPS codes. That means they apply their own governance and can network their own identity attributes (such as patient identifiers compatible with PIX, NHIN and hData) as they, and the organizations in each state, choose to negotiate.

BTW, this is with or without federal participation, which from a data-modeling view of X.500 exists at the level of another complex bilateral peering and grouping of organizations, and not necessarily (though it can, nonetheless) as the root, much as SAFE-BioPharma currently connects to the Federal Bridge PKI.

For obvious security reasons you don't want people registering in the .mil domain, for example, but the US container is open in the domain name space. A container like c=US is a way to have a national policy and state policies at the same time, since it can be enforced in schema and is traceable in requirements. One state might, for example, have totally different requirements than another as to which gender attributes are allowed. Want your social networking profile to be exposed to the health system? That really is possible with Web 2.0.

Patient identity is clearly spelled out in the architecture...
http://www.connectopensource.org/download/attachments/17629260/CONNECT+Release+2.2+Software+Architecture_100309.pdf?version=1

Now it's the practical matter of making the data, authentication and authorization portable, by letting the end user choose how they want to move it, and providing transparency into the process.

Having your choice of identity providers, and of which attributes you want to share, means that actors can link different technologies and still end up with a consistent result based on open systems.

Since LDAP and X.500 have already been vetted at the higher levels of LOA, and patient identifiers are already written into HIPAA, the problem space is fairly simple for the many people who want a secure, proven, international-standards-based approach.

For others who would prefer to be on the bleeding edge of emerging web services and want to transmit their PHR/EHR/EMR via some other API, dealing with the complexity of the security domain with which they are trying to communicate is doable (especially globally), but not at the same level of scalability as a national/state solution.

kc cowan said...

RE: "Patients will trust electronic health care records only if they believe their confidentiality is protected via good security."

There is more to it than that. Patients will need access control over their own data, and auditing of access, in order to trust the system. There have been too many breaches of SSNs and credit card numbers at other businesses for ordinary consumers to automatically trust their health care provider just because of encryption technology.

Remediating a lost credit card is a hassle, but not hard, and the credit card companies usually cover any financial loss. What is the remediation if your personal health record is exposed? I realize that some people don't consider their health records highly confidential, but others do. What do you do for them when data is stolen?

Dan Draper said...

One area of security that I see ignored or taken for granted is the need to provide physical security of electronic information systems. Just as paper records are physically kept away from unauthorized access, IT and network equipment must also be secured from unwarranted human contact. In my experience though, physicians, especially those in ambulatory care or private practices, often compromise security as a result of a poorly planned IT implementation. For example, while walking through the exam area of a primary care provider, I came across a data closet with the door wide open and a box fan in the threshold. The IT equipment was running too hot and kept crashing, so the only solution was to open the door in an attempt to cool the room. A patient could have very easily taken down this doctor’s network with a single accidental button push, or worse could happen if a nefarious-type happened upon the data arrays.

While data security can be improved with processes and software, a breach in physical security can have dire consequences as well. Proper use of power, cooling and lockable enclosures should be the starting point of developing a high availability / highly secure IT network.

Rob M said...

John, your post mentions privacy but essentially focuses on security alone. This is understandable given that we know a lot more about security, both technically and functionally, than we do about privacy. No question that the two go hand in glove, but they present different challenges and a different focus. In my eyes, privacy (I'm thinking of confidentiality as the same thing) is about data consent and the representation of preferences. Preferences are defined by multiple parties: governmental, organizational and consumer. A reconciled final consent is then operationalized via security systems.

What is still unclear is how we will translate what may be complicated consent preferences into enforceable security policies. I suspect this will take some time to work through, because the kinds of preferences we often hear (perhaps with some regulatory expectations; think 42 CFR Part 2) will present very new challenges to current security systems.

HL7 working groups are beginning to address this but we still have a way to go.

Adam Scott said...

Security is the degree of protection against danger, damage, loss and criminal activity. It has to be compared with related concepts: safety, continuity, reliability. The key difference between security and reliability is that a security system must take into account the actions of people deliberately attempting to cause destruction.