Today, Beth Israel Deaconess and UCSF issued press releases about a complex situation.
Over a year ago, an employee of BIDMC who had authorized access to data for quality improvement activities placed clinical data (not financial or social security number data) for approximately 2,900 patients on a thumb drive. The employee left BIDMC and went to work in California for UCSF. While at UCSF, the employee copied the thumb drive to a UCSF-owned laptop in order to demonstrate quality improvement reporting. The laptop was stolen, then recovered. There is no evidence that the data on the laptop was accessed.
BIDMC takes this situation very seriously and notified the patients, Health and Human Services, and the media.
As with other challenging situations I've discussed such as the CareGroup Network Outage and the Limitations of Administrative Data, it is my intent to openly share lessons learned with my colleagues and the industry. By writing about the process, I hope to encourage policy and technology improvements at healthcare institutions throughout the country to protect privacy.
A few thoughts:
1. Make sure you have a policy requiring that all mobile storage devices be secured. BIDMC has a written policy and is revising it to be even more restrictive.
2. To further mitigate risk, encrypt all laptops. BIDMC has implemented McAfee Safeboot for this purpose. Harvard Medical School has licensed PGP Whole Disk Encryption for this purpose.
3. Educate employees about the policy and technology best practices to protect privacy. A learning management system is great for this.
4. Sanction employees who violate the policies.
5. Implement new technologies that scan or restrict data transfers within the organization, e.g., scan email for medical record numbers or other patient-identifiable information sent non-securely. A minimal sketch of this kind of scan follows this list.
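To make item 5 concrete, here is a minimal sketch of the kind of outbound-message scan a data loss prevention tool performs. The medical record number format (a bare seven-digit number), the SSN pattern, and the keyword list are hypothetical placeholders; a real deployment would use the institution's actual identifier formats and a vendor product rather than a homegrown script.

```python
import re

# Hypothetical patterns -- a real institution would substitute its own
# medical record number format and identifier dictionaries.
MRN_PATTERN = re.compile(r"\bMRN[:#\s]*\d{7}\b", re.IGNORECASE)
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
PHI_KEYWORDS = ("diagnosis", "date of birth", "medical record")

def flag_outbound_message(subject: str, body: str) -> list[str]:
    """Return a list of reasons an outbound email should be quarantined."""
    text = f"{subject}\n{body}"
    reasons = []
    if MRN_PATTERN.search(text):
        reasons.append("possible medical record number")
    if SSN_PATTERN.search(text):
        reasons.append("possible social security number")
    for word in PHI_KEYWORDS:
        if word in text.lower():
            reasons.append(f"sensitive keyword: {word}")
    return reasons

if __name__ == "__main__":
    reasons = flag_outbound_message(
        "Quality report",
        "Patient MRN: 1234567, date of birth 01/02/1960, sent unencrypted.",
    )
    print(reasons or "no issues found")
```

In practice this logic sits on the mail gateway, and a hit routes the message to an encryption gateway or a human reviewer rather than simply blocking it.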
The combination of strong policies, state of the art technology, and education is required to protect patient data.
In this case, an authorized employee took data in violation of policies and placed it on technology not controlled by BIDMC. The laptop data was most likely never accessed, but you can be sure that additional education, broad communication with patients, and close collaboration with government and the media will be our next steps.
3 comments:
Cool transparency of the week, John. Good to see.
Data Loss Prevention is tough. There are products for this, but I tend to view them with a certain degree of skepticism.
In my legal document management practice (where we processed and hosted docs), we had the following data-handling precautions:
1. All laptops and desktops have BIOS passwords and physical locks to keep them closed.
2. Linux and Windows machines run low-level software to prevent any unauthorized devices (such as USB drives) from being used. We use a central AD ruleset to control this.
3. Only "secure" machines can be plugged into the network (network authentication/authorization). This network is very locked down via a blacklist (updated daily). People can bring unauthorized devices in, but they can only work on the DMZ where there is no path back to the secure network.
4. Everything is logged/audited. Every user is authenticated via username/password and token.
5. We have good key management. This is (hah) key to having easy-to-use, secure encryption (see the sketch after this list). This is one of those things traditional IT managers don't always see the value of, but none of this would be possible without it.
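As a rough illustration of why centralized key management makes encryption easy to use, here is a minimal sketch using the Python cryptography package's Fernet and MultiFernet. The "key service" below is a stand-in I invented for whatever central store (vault, HSM, directory-backed service) an organization actually runs; the point is that users never handle raw keys and old ciphertext can be rotated to a new key without user involvement.

```python
from cryptography.fernet import Fernet, MultiFernet  # pip install cryptography

class KeyService:
    """Stand-in for a central key store; holds the current key plus old ones."""

    def __init__(self):
        self._keys = [Fernet(Fernet.generate_key())]

    def cipher(self) -> MultiFernet:
        # Newest key first: encrypt with it, decrypt with any key ever issued.
        return MultiFernet(self._keys)

    def rotate(self):
        # Issue a new key; old keys stay available so old data remains readable.
        self._keys.insert(0, Fernet(Fernet.generate_key()))

if __name__ == "__main__":
    ks = KeyService()
    token = ks.cipher().encrypt(b"clinical extract for QI reporting")

    ks.rotate()                        # periodic rotation, no user action needed
    token = ks.cipher().rotate(token)  # re-encrypt under the newest key
    print(ks.cipher().decrypt(token))  # still readable after rotation
```

The rotation step is exactly the kind of operational burden that disappears for end users when key handling is centralized.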
Internal data transfers are easy and seamless. Internal-to-external transfers are hard. We manage those paths. Sometimes that means that when an internal user actually NEEDS to send something out, they have to open a ticket and get it approved. We have deemed that a reasonable precaution given the alternative; as the Doctor points out, a breach is likely more of a pain in the butt.
Is this perfect? No, but it has layers of authentication and encryption and auditing, meaning we SHOULD catch breaches at some point before they're breaches.
There's much more we do under the surface. Part of it is having dedicated security staff who are empowered by the business. It also means having security staff who understand your business and aren't so "security" focused but "experience" focused as well. Understanding balance and tradeoffs. That's a hard thing to find, but they exist.
Regarding Bernz's notes: good points about mitigating risks that individuals can overlook.
Key management is hard. From a standards perspective, that was one of the reasons for promoting consistent identity across security domains, instead of point-to-point relationships mediated by devices.
A case in point: movies recorded on Blu-ray. HDMI negotiates keys through HDCP, which can cause access problems when session key negotiation fails.
Not surprised there is an Active Directory rule that can prevent the use of USB drives, and that these are increasingly being locked out (after CENTCOM was compromised through infected drives left in a parking lot).
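For illustration, the end effect of such a policy on a Windows machine is often just the USB mass-storage driver being disabled. Here is a minimal, read-only Python sketch (Windows only) that checks whether that lockout is in place; the USBSTOR service key is the standard location for this setting, but treat the script as an assumption-laden example rather than how any particular AD ruleset is actually enforced.

```python
import winreg  # Windows-only standard library module

USBSTOR_KEY = r"SYSTEM\CurrentControlSet\Services\USBSTOR"

def usb_storage_disabled() -> bool:
    """True if the USB mass-storage driver is set to 'disabled' (Start == 4)."""
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, USBSTOR_KEY) as key:
        start_value, _ = winreg.QueryValueEx(key, "Start")
    return start_value == 4

if __name__ == "__main__":
    print("USB storage locked out" if usb_storage_disabled() else "USB storage allowed")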
Recent vulnerabilities in the security software for some encrypted (FIPS-compliant) USB storage devices prove that point. The drives themselves might be secure, but the software that manages them may have a problem.
We tend to work on a point-to-point basis, both within and between organizations, and portable devices provide that functionality. But health data needs to flow securely across the entire ecosystem, regardless of organizational borders, without creating large rings of trust, which is otherwise what is likely to happen. Larger and more extensive record keeping leads to greater data exposure, so it's important that the actors really understand, transparently, how their personal data is being protected.
The legal system is a last resort for making information secure; effective in its own way, but not as powerful as good practices and standards.
From an employment viewpoint, the focus is on specific security domains and on security people who may or may not have the C-level management support they require.
From an AI agent fitness perspective, they compete with other agents in a marketplace to inject or remove code at memory locations when dereferencing pointers (not unlike their biological antecedents), as Ken Thompson noted when he mused about how he started writing the shortest self-reproducing programs in FORTRAN in college.
In applying broad strokes to a complex ecosystem, I fear the unintended consequence of creating resistant strains of these AI agents when we could be applying simpler means, if we understood the entire picture of a system in balance. It's not likely that we are all going to write our own source code.
If we learned anything in designing a Federal IT architecture with OMB, it is that security is not one thing, or some very smart nerds in one part of a building or at a conference discussing the latest exploits (some of which are remarkably trivial but unknown to the average computer user), but something that exists at every layer of the business process model as an enabler of services, especially when it comes to understanding risks. Thus your point about having business-savvy security people who can facilitate the entire experience is well taken. Enjoyed your comment and the original post on breach transparency.