As I traveled from Tokyo to Kyoto to Hiroshima this week, I rode the Shinkansen using my Japan Rail Pass (available only to foreign visitors). The quality, safety, and efficiency of the Shinkansen are inspiring - no loss of life in billions of rides and an average deviation from the published timetable of less than 6 seconds. My experience with train travel in the US is that a 6-minute deviation would be considered superlative.
If only the Boston, New York, and Washington DC corridor could have a Shinkansen - I'd never fly again!
Japan Railways will implement a truly cool technology over the next 10 years, a maglev Shinkansen between Tokyo and Osaka.
The trains will travel at 500 kilometers per hour (about 300 mph) and cover the distance between Tokyo and Osaka in about an hour.
An equivalent train on the Eastern Seaboard would make an end to end train ride between Boston and Washington shorter than the typical US Airways Shuttle flight.
The idea behind maglev is simple - use fixed magnets in the train and electromagnets in the track to lift the train 1-10 cm off the track, creating a nearly frictionless sliding surface. Electromagnets are also used to alter the field, creating forward motion. There is no "engine" pulling the train, thus no fossil fuels are required.
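To put those numbers in perspective, here's a quick back-of-the-envelope sketch in Python; the route distances are my own rough assumptions, not official figures.

```python
# Back-of-the-envelope travel times at the planned maglev cruising speed.
# Route distances are rough assumptions, not official figures.

CRUISE_SPEED_KMH = 500

routes_km = {
    "Tokyo - Osaka (Chuo maglev, approx.)": 440,
    "Boston - Washington DC (rail corridor, approx.)": 730,
}

for name, distance_km in routes_km.items():
    minutes = distance_km / CRUISE_SPEED_KMH * 60
    print(f"{name}: {distance_km} km -> about {minutes:.0f} minutes at {CRUISE_SPEED_KMH} km/h")
```

Under those assumptions, Tokyo to Osaka comes in under an hour and Boston to Washington at roughly an hour and a half of pure cruising time.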
A 300 mph frictionless train that links distant cities without the hassle of airport security and congestion. That's cool!
Friday, July 29, 2011
Thursday, July 28, 2011
The Adjustment Bureau
On my flight to Tokyo, I watched The Adjustment Bureau, a Philip K. Dick inspired film about a supernatural team of agents who ensure each person's path through life (their fate) is followed according to plan.
I've written about Regression to the Mean and the need for constant reinvention, both of which make life seem entirely non-linear. When I consider the circuitous path I've taken in my career, it's interesting to think about the inflection points - my own adjustment bureau that led me to where I am today.
Here are five random but pivotal events:
1. In the early 1970s, my parents were admitted to law school in Southern California. I had free time after school to explore my own interests while they were taking classes. I rode my bicycle to a local surplus store that specialized in integrated circuits discarded by local defense contractors but still completely functional. By the age of 12, I had taught myself digital logic, analog signal processing, and the basics of microprocessors. I learned to program in machine language and built an Altair microcomputer in 1979, becoming the first student with a dorm-room computer at Stanford University. My parents' law school admission led to my IT expertise - a non-obvious association.
2. While I was an undergraduate at Stanford and a medical student at UCSF, I ran a 35-person company which specialized in business process automation software. It enabled me to purchase a house in Marin County and anchored me to the San Francisco Bay Area. The Dean of Students at UCSF during that time did not believe that a medical student doing advanced clinical rotations should be allowed to run a company simultaneously and gave me a choice - divest the company or defer medical school. I recognized that medicine was my future and divested the company, which eliminated my Northern California ties and ultimately led to my taking a job at Beth Israel Deaconess and Harvard. A Dean of Students with a strong opinion about medical student entrepreneurial activities led to my career in Boston - another non-obvious association.
3. On December 10, 1997, while working in the Emergency Department at BIDMC, my cell phone rang and the CEO of CareGroup informed me that I was going to serve as the CIO of CareGroup beginning at 8am the next day. The external auditors at the time told the CEO that giving a CIO job to an unproven Emergency Physician was administrative malpractice. The CEO firmly believed that a clinically focused, web-savvy risk taker was better than a traditional, process-oriented CIO. Nearly 15 years later, many people believe he was right. A CEO who took a very controversial risk on a 35-year-old with limited leadership experience resulted in my career as a CIO - a challenging outcome to predict based on my life history up to that point.
4. In 2001, the new Dean for Medical Education at Harvard Medical School had a dream - moving the entire medical school curriculum to the web and mobile platforms (early Palm technology). He asked for my advice given my experience at BIDMC moving clinical records to the web. Although I had no experience in educational technology, I worked with a team of students, faculty, and IT developers to collaboratively create the Mycourses learning management system, serving as a part-time Associate Dean in addition to my CareGroup CIO role. The project went well and I was eventually asked to serve as part-time Harvard Medical School CIO, which the CareGroup Board gave me permission to do (I became a 1.5 FTE - 100% at CareGroup and 50% at Harvard simultaneously). A sojourn into educational technologies led to my becoming responsible for 10 years of infrastructure and application work at Harvard Medical School, including the evolution of high performance computing and storage clusters to support the life sciences, unique challenges that were not even imagined in 2001 - a definite non-linear path.
5. In 2005, I took a call from ANSI, asking if I would attend a meeting to discuss harmonizing standards as part of a program conceived by the first national coordinator for healthcare IT, David Brailer. Although I did not consider myself a standards expert, I agreed to serve as chair of HITSP. As a consequence of 5 years of work with healthcare standards, I became part of many national, regional, and state healthcare IT projects. A phone call about standards led to my federal and state roles, which became the basis for my Harvard professorship. A call from ANSI and a Harvard professorship - very hard to predict that!
What's next? At BIDMC, the Chief Executive Officer selection process will result in new leadership this Fall. New mergers and acquisitions will result in an accountable care organization built around BIDMC. Complex healthcare information exchange, registries, and business intelligence tools needed to support healthcare reform will accelerate my hospital CIO, state, and national activities.
All of this is happening while I'm working on replacing my Harvard Medical School CIO role with a full time successor.
It's July of 2011 and hard to know exactly how the inflection point of my evolving Harvard role will affect the future, but I feel powerful forces are aligning to create a quantum leap forward in electronic health records and health information exchange technology.
A year from now, I'll look back and assess what The Adjustment Bureau had in mind for me.
Wednesday, July 27, 2011
An Inflection Point in Life Science Computing
Jacob Farmer, the CTO of Cambridge Computer and an industry-recognized expert on storage, recently shared with me his draft whitepaper, which argues that the life sciences are experiencing the pain of storage growth more acutely than any other industry.
Here's his very thoughtful observation:
"Detailed Problem Statement
Until quite recently, life sciences research would not typically have been described as 'data intensive', certainly not in comparison with other scientific disciplines, such as high energy physics or weather modeling. In the last few years, however, new data-intensive modalities such as spectrometry, next-gen sequencing, and digital microscopy have entered the mainstream, thus unleashing an unprecedented tsunami of unstructured data.
Life Sciences IT professionals have been caught off guard. Having not grown up with data intensive research, life sciences IT professionals have had to improvise. When they look to their peer institutions for guidance, they find that their peers are in the same predicament. The institutions at the cutting edge are constantly in triage mode, throwing money at the problem in order to keep their heads above water. On the one hand, these institutions are presenting at conferences and being heralded as examples for the rest of the industry. On the other hand, they are the first to admit that there are many problems left to be solved. Their experiences serve to warn the industry that things are going to get worse before they get better!"
One of the most significant IT leadership challenges is deciding when to change and when not to change. Some technology projects in your portfolio should be on the bleeding edge, but not all of them - that restraint mitigates the risk of implementing technologies that are more hype than lasting innovation.
For example, in the 1990s BIDMC chose not to adopt client/server technologies. Instead, it embraced the then-cutting-edge web in 1996, skipping an entire generation of products. Being late to client/server and early to the web created a great trajectory that has served us well.
The life sciences are at an inflection point, as Jacob notes. Harvard Medical School has been an early adopter of high performance computing clusters but has proceeded cautiously on research storage. A large investment in SAN over the past few years would have been too expensive. A significant investment in early NAS would not have scaled. We've made and will continue to make moderate investments in high performance NAS backed by SSDs for metadata management, ensuring that we meet the current needs of our customers. However, we're going to step back and ask ourselves where the balance of cost, performance, capacity, information lifecycle, and security needs to be in the future. We'd rather be cutting edge on demand management, reporting tools, and chargeback approaches, but a fast follower on storage technology.
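As a small illustration of the demand management/chargeback side where we'd rather lead, here is a minimal, hypothetical sketch of the kind of per-lab usage report such tooling might produce; the lab names, usage figures, and cost-recovery rate are invented for the example.

```python
# Hypothetical per-lab storage chargeback summary.
# Lab names, usage figures, and the cost-recovery rate are invented for illustration.

RATE_PER_TB_MONTH = 50.00  # assumed internal $/TB/month rate

usage_tb = {
    "genomics_core": 180.0,
    "imaging_lab": 95.5,
    "proteomics": 42.3,
}

print(f"{'Lab':<16}{'TB used':>10}{'Monthly charge':>18}")
for lab, tb in sorted(usage_tb.items(), key=lambda item: -item[1]):
    print(f"{lab:<16}{tb:>10.1f}{'$' + format(tb * RATE_PER_TB_MONTH, ',.2f'):>18}")
```

Even a simple report like this makes demand visible to the labs generating it, which is half the battle in managing storage growth.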
At the conference where I'm speaking today in Tokyo, Professor Nonaka described the six characteristics of a wise leader, based on his May 2011 Harvard Business Review article. One of his major arguments is that leaders need both explicit (factual) and tacit (practical/intuitive) knowledge. I believe that the inflection point in storage technologies requires life science IT leaders to rely on their intuition, since user needs and storage technologies, per Jacob's comments, are changing too fast to rely on facts alone.
Thanks to Jacob for sharing his insight. I'm confident that life sciences IT leaders will experience disruptive innovation over the next 18 months. Some will make wise investments and others will be less fortunate. I'm optimistic that Harvard Medical School can navigate the rough waters ahead, accelerating technology adoption or putting on the brakes in response to the evolving environment.
Tuesday, July 26, 2011
Super Cool Biz
Today I'm in Tokyo lecturing and meeting with industry, academic, and government leaders.
The weather is 85-90°F with 85% humidity and 4 mph winds. It's hot and power is limited.
The Japanese are a very resilient people, so it's interesting to see how they have worked together to conserve energy.
In the Fujitsu offices where I was visiting with Professor Ikujiro Nonaka, the Super Cool Biz poster pictured above lined the entrance.
Here are the basic ideas:
1. Office thermostats are set to 82F.
2. Office attire is relaxed with fewer ties and suits. Super Cool Biz encourages polo shirts, Hawaiian shirts, running shoes and even appropriate T-shirts, jeans and sandals.
3. Switching off lights and unplugging computers that are not in use is encouraged.
4. Shifting work hours to the morning and taking more summer vacation than usual is suggested. The Tokyo metropolitan government, for example, begins shifts at 7:30, 8 or 9 a.m. rather than at 8:30, 9 or 9:30.
5. Store and restaurant hours are shifted so that they open later when the weather is cooler. In my hotel, all power and air conditioning is shut off when I leave the room. Businesses hand out fans at the door.
Everyone does their part to conserve and it works.
I'm dressed in my lightest weight black clothing and have abandoned my suit jacket.
Off to a day of lecturing and Super Cool Biz!
Monday, July 25, 2011
FDA Mobile Medical Applications NPRM
Many have asked me for an analysis of the new FDA Mobile Medical Applications NPRM.
The FDA will not seek to regulate mobile medical apps that provide the functionality of an electronic health record system or personal health record system. However, the FDA defined a small subset of mobile medical apps - those that may impact the functionality of currently regulated medical devices - that will require oversight. Here's a thoughtful analysis by Bradley Merrill Thompson of Epstein Becker Green, which he has given me permission to post:
"Today, FDA published the long-anticipated draft guidance on the regulation of mobile apps—more specifically, what the agency calls “mobile medical apps”. This draft reflects significant efforts by FDA in a fairly short amount of time, and we applaud that work. Much of the framework of the FDA guidance is consistent with the work the mHealth Regulatory Coalition (MRC) published on its website earlier this year (www.mhealthregulatorycoalition.org). While FDA has done a good job getting the ball rolling, there are a number of areas that require further work. We all (including FDA) recognize that this draft guidance is certainly not the end of the story.
The regulatory oversight recommended in today’s draft guidance applies only to a small subset of mobile apps, which FDA defines as any software application that runs on an off-the-shelf, handheld computing platform as well as web-based software designed for mobile platforms. To be regulated, as a first step the app would have to first meet the definition of a medical device and then as a second step either (1) be used as an accessory to another regulated device or (2) “transform” the handheld platform into a device, such as by using the platform’s display screens or built-in sensors.
The problem with the first step is that this guidance doesn’t explain how to determine whether the apps meet the medical device definition in areas where the MRC has questions about intended use. In our policy papers, we explain that many of the mHealth apps operate in the gray areas between treating disease and managing wellness. But the guidance simply states that apps intended for general health and wellness purposes are not regulated. That, unfortunately, doesn’t provide the clarity we need. We already knew that. Instead, we need to understand what types of claims a company can make about health and wellness that also implicate a disease before we can determine whether the app is regulated or not.
The second step has a number of ambiguities too. First there’s the question of accessories. At least for now, the FDA kept the old rule of treating apps that “connect” to regulated medical devices as accessories that are regulated in the same device classification as the “parent” device. The problem with this approach is that it produces over-regulation in mHealth systems, particularly where there are a number of different medical devices and non-clinical products involved. Fortunately, FDA appears to recognize that this approach could lead to over-regulation of low-risk apps, and seems open to considering other approaches. Indeed, the agency specifically requested comment on how to improve this framework. The MRC is developing an alternative approach that involves creation of classification regulations for specific types of mHealth apps so that these apps will be classified separately from the parent device. This approach will require rulemaking activities, which we believe is the reason FDA did not address it in the proposed guidance.
And then there are the apps that transform a mobile platform into a medical device: there are a bunch of good nuggets in the guidance, but still some uncertainty and slightly puzzling aspects. I think everyone (though perhaps no one more than the folks at iTunes, Blackberry App World, and Android Market) is excited to hear that FDA doesn’t plan to regulate as manufacturers those who “exclusively distribute” the app. Likewise, the smartphone manufacturers can breathe a sigh of relief that they too will not be considered regulated manufacturers so long as they merely distribute or market their platform with no device intended uses. What’s interesting is that FDA “expects” these distributors to cooperate with the regulated app developers in the event of a correction or a recall.
In the category of puzzling aspects, the FDA for example says that an entity that provides app functionality through a “web service” or “web support” for use on a mobile platform is considered a manufacturer. But at first blush, this type of entity may or may not meet the agency’s own definition of manufacturer because the web-service provider might not be responsible for initiating the specifications or designing, labeling, or creating the software system.
The agency offered some examples of what it believes to be outside of this guidance, including electronic copies of medical textbooks, health and wellness apps, apps that automate general office operations, general aids that assist users, and electronic/personal health records. In addition, FDA specifically indicated that it is exercising its enforcement discretion and will not currently regulate apps that automate common medical knowledge available in the medical literature or allow individuals to self-manage their disease or condition. This is helpful, but there remain a number of other types of apps that should be specifically identified as being outside of FDA regulation.
You need to read the draft guidance carefully, because if you are not familiar with medical device law, there is a category you might miss: clinical decision support systems. These fall under the rubric of an app that would cause the mobile platform to become a device. I suppose most people instantly think of things like an electronic stethoscope where a sensor is added to a phone, in addition to software. But no hardware needs to be added to a mobile platform to make it a medical device. Many people may not realize that a simple computer system of any type that runs software used to analyze clinical data to advise a healthcare professional can, in certain circumstances, be a regulated medical device. So an app that doesn't run any hardware other than the mobile platform itself can be regulated.
This draft guidance has no doubt generated a ton of questions. So, the timing of the release is perfect! On July 27th, a large group of individuals involved in the mobile health space will be congregating at the Continua/ATA Summit to discuss regulation of mHealth products. Bakul Patel, the mobile health policy guru at FDA, will be there to discuss the details of this guidance, as will the mHealth Regulatory Coalition, which is set to release its own version of proposed mHealth guidance in the coming weeks. The discussion will surely be lively and informative. There’s still time to register and I hope to see you there."
Friday, July 22, 2011
A Healthcare IT Plan for Japan
In February I visited Japan a few days before the Tohoku earthquake to complete my research on the state of healthcare IT in Japanese hospitals and private practices.
Since then I have worked with the US-based Center for Strategic and International Studies (CSIS) and the Japanese Health and Global Policy Institute (HGPI) to develop a national healthcare IT plan for Japan in response to the earthquake, tsunami, and nuclear reactor crisis.
Here's the finished report, issued by CSIS today.
I'm flying to Japan tonight and will be meeting with government, academic, and industry leaders in Tokyo and Kyoto until August 4.
As readers of my blog know, my life is devoted to making a difference. If I can share lessons learned from the US experience to accelerate healthcare IT in Japan, I will have repaid all my Japanese colleagues for the kindness they have shown me over the years.
My blog posts next week will be sporadic as I travel the country as a missionary for healthcare IT.
Thursday, July 21, 2011
Preparing for the Future of IT at HMS
Every day I examine my life and think about the roles I serve. I consider all the unresolved issues in my professional and personal life, then ponder the processes needed to address them.
I think about the next week, the next month, and the next year. Hopefully, I'll be able to skate where the puck will be.
As I approach 50, I've become particularly introspective about the challenges in healthcare and medicine that lie ahead.
I believe that Accountable Care Organizations, Patient Centered Medical Homes, and the Partnership for Patients/CMS Center for Innovation will create exponential growth in healthcare IT requirements.
My senior leadership at BIDMC knows that we'll need novel approaches to healthcare information exchange for care coordination and population health management. They know we'll need new analytics which include cost, quality, and outcomes. They want new tools to make these analytics available to every stakeholder, both outside and inside the EHR.
Furthermore, state infrastructure to support "push" and "pull" data exchanges will need to be built. The need for Federal standards and policies will accelerate.
At the same time, the science of medicine at Harvard Medical School (HMS) is becoming more computationally intensive.
The next generation of whole genome analysis depends on tools like BFAST, which require new approaches to processing and storage infrastructure.
Image analysis also requires new tools such as OMERO for visualization, management and analysis of biological microscope images.
These and other research tools need to run on petabytes of data maintained on high performance storage, backed by thousands of processors, numerous specialized graphics processing units, and high speed InfiniBand connections.
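To give a sense of the scale driving those requirements, here's a rough arithmetic sketch; the coverage, byte cost, and sample counts are illustrative assumptions, not HMS figures.

```python
# Rough arithmetic on raw sequence data volumes. All parameters are
# illustrative assumptions, not actual HMS workloads.

GENOME_SIZE_BP = 3.2e9   # approximate human genome size in base pairs
COVERAGE = 30            # typical whole genome sequencing depth
BYTES_PER_BASE = 2       # rough FASTQ cost: one base plus one quality score

tb_per_genome = GENOME_SIZE_BP * COVERAGE * BYTES_PER_BASE / 1e12

for samples in (100, 1_000, 10_000):
    print(f"{samples:>6} genomes ~ {samples * tb_per_genome:,.0f} TB of raw reads")
```

Under these assumptions, a few thousand genomes of raw reads alone approach a petabyte, before alignments, derived data, and backups are counted.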
How does this relate to me?
As the CIO of a hospital system, the innovation required to support healthcare reform will demand increasing amounts of my time.
As the part-time (50%) CIO of Harvard Medical School, the tools and technology required to support new scientific approaches will also demand increasing amounts of my time.
How do I ensure the exponentially increasing needs of the customers I serve are best met?
The answer requires a tough decision.
I believe that Harvard Medical School requires a full-time, dedicated CIO with a skill set in highly scalable infrastructure and the tools needed to support emerging science.
Thus, I think it best that I pass the baton at HMS to a new IT leader. I will continue to serve the Dean of HMS as an advisor on strategic projects, especially those which require cross-affiliate and clinical coordination. In collaboration with the IT stakeholders of HMS, I will work to find my replacement.
Once my successor is found, I will take on additional challenges implementing the next stages of meaningful use, healthcare reform, and new healthcare information exchange initiatives at BIDMC, in Massachusetts, and nationwide.
Wish me luck!
Wednesday, July 20, 2011
The July HIT Standards Committee Meeting
Farzad Mostashari, national coordinator for healthcare IT, began the meeting with a discussion of the issues we have always faced while harmonizing standards. Standards that are widely adopted by the marketplace and are well tested make harmonization easy. However, many standards are mature but not widely adopted or novel but not well tested. We want to encourage innovation and use market adoption as a measure of our success. It's clear that at times we'll have to consider new standards that seem very reasonable for the purpose intended and test them in real world scenarios before forcing top down adoption through regulation. Bottom up adoption of standards that are implemented and improved by stakeholders is a better approach.
Per the Standards Summer Camp schedule, the July HIT Standards Committee meeting focused on:
Vocabulary recommendations
ePrescribing of discharged medications recommendations
Patient Matching recommendations
Syndromic Surveillance recommendations
Jim Walker, chair of the Clinical Quality workgroup, presented an overview of the vocabulary work done to support all our clinical coordination and quality measurement activities. The charter for the group was to select the minimum number of vocabulary standards with the minimum number of values to meet the requirements of meaningful use stages 2 and 3. Reducing the number of standards makes mapping between different vocabularies much easier. The workgroup used SNOMED-CT and LOINC wherever possible and tried to select one vocabulary per domain (allergies, labs, medications, etc.). Examples of their selections include:
Adverse Drug effect - RxNorm for medications, SNOMED-CT for non-medication substances, SNOMED-CT for severity of reaction
Patient characteristics - ISO 639-2 for preferred language, HL7 for administrative gender, PHIN-VADS (Centers for Disease Control) for Race/Ethnicity
Condition/Diagnosis/Problem - SNOMED-CT
Non-lab Diagnostic study - LOINC for name, SNOMED-CT for appropriate findings, UCUM for Units
A rich discussion followed. Points of concern included:
*Using RxNorm for all medications including vaccines, even though CVX is the required vaccine vocabulary for Meaningful Use stage 1. We clarified this with an example from Beth Israel Deaconess Medical Center:
BIDMC uses First Data Bank as the medication vocabulary for its internal systems. However, when BIDMC sends clinical summaries, it maps FDB to RxNorm for all drug names. When BIDMC sends immunization records to public health, it uses CVX codes. Thus, the HIT Standards Committee will not specify the vocabularies used within enterprise applications, just those vocabularies that are needed for specific purposes when data is transmitted between entities.
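A minimal sketch of the pattern described above - internal vocabularies stay inside the enterprise, and mapping happens at the boundary when data is transmitted. The codes and mapping tables are invented placeholders, not real FDB, RxNorm, or CVX values.

```python
# Illustrative only: keep internal vocabularies inside the enterprise and map
# to exchange vocabularies at the point of transmission. Every code below is a
# made-up placeholder, not a real FDB, RxNorm, or CVX code.

FDB_TO_RXNORM = {"FDB-12345": "RXNORM-0001"}     # used when sending clinical summaries
INTERNAL_TO_CVX = {"IMM-FLU-2011": "CVX-0002"}   # used when reporting immunizations

def code_for_transmission(internal_code: str, destination: str) -> str:
    """Return the code in the vocabulary the receiving system expects."""
    if destination == "clinical_summary":
        return FDB_TO_RXNORM[internal_code]
    if destination == "public_health_immunization":
        return INTERNAL_TO_CVX[internal_code]
    raise ValueError(f"No mapping defined for destination: {destination}")

print(code_for_transmission("FDB-12345", "clinical_summary"))
print(code_for_transmission("IMM-FLU-2011", "public_health_immunization"))
```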
Next, Doug Fridsma began a discussion of our Summer Camp items, noting that the many projects of the S&I Framework are proceeding according to plan.
Scott Robertson presented the work of the Discharge Medications Power Team. They recommended HL7 and NCPDP SCRIPT as reasonable standards for sending discharge medication orders to hospital pharmacies and retail pharmacies.
Discussion followed regarding two specific points. First, their recommendations did not include a specific version of HL7, since existing Medicare Part D regulations do not specify an HL7 version; the power team will make additional, more specific HL7 recommendations. Second, there was discussion about the specific aspects of RxNorm that constrain the way dose and route are specified. The HIT Standards Committee members felt additional work was needed before mandating this level of specificity, so our recommendations will include RxNorm for medication name, but not additional specificity for dose and route vocabularies at this time.
Next, Marc Overhage presented the recommendations of the Patient Matching Power Team. The scope of the Patient Matching work is to provide guidance to implementers who want to understand best practices for the use of demographics in machine-to-machine matching of patient identity. Per the RAND report, the use of different fields results in variation in specificity and sensitivity. Some fields, such as social security number (or a subset of it), greatly increase specificity, resulting in fewer false positives such as matching the wrong patient. However, social security number is controversial because of the potential for identity theft and the fact that immigrants may not have one. The final report will take all these observations into account.
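To make the specificity/sensitivity trade-off concrete, here's a minimal, hypothetical sketch of weighted demographic matching; the fields, weights, and threshold are my own illustrative choices, not the power team's recommendation.

```python
# Hypothetical weighted demographic matching. The fields, weights, and
# threshold are invented for illustration, not a recommended algorithm.

WEIGHTS = {
    "last_name": 0.20,
    "first_name": 0.15,
    "date_of_birth": 0.30,
    "zip_code": 0.10,
    "ssn_last4": 0.25,   # highly specific, but often unavailable or withheld
}
MATCH_THRESHOLD = 0.75

def match_score(record_a: dict, record_b: dict) -> float:
    """Sum the weights of fields present in both records that agree exactly."""
    return sum(
        weight
        for field, weight in WEIGHTS.items()
        if record_a.get(field) is not None and record_a.get(field) == record_b.get(field)
    )

a = {"last_name": "smith", "first_name": "jan", "date_of_birth": "1961-05-23", "zip_code": "02215"}
b = {"last_name": "smith", "first_name": "jan", "date_of_birth": "1961-05-23", "ssn_last4": "1234"}
score = match_score(a, b)
print(f"score={score:.2f} -> {'match' if score >= MATCH_THRESHOLD else 'no match'}")
```

In this toy example, the absence of a shared SSN fragment drops the score below the threshold even though name and birth date agree - exactly the kind of sensitivity cost the committee weighed against the identity theft concerns.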
Chris Chute presented the recommendations of the Surveillance Implementation Guide Power Team, which aims to specify one implementation guide for each public health transaction. They are studying the differences between HL7 2.3.1 and 2.5.1 as well as considering the potential for public health entities to use CDA constructs.
Dixie Baker presented a project plan for the NwHIN Power Team, which aims to specify a set of building blocks for secure transport of data in multiple architectures.
Finally, Judy Murphy and Liz Johnson presented their plans for the Implementation Workgroup, collecting lessons learned from certification and attestation.
We're on track with Summer Camp. Our next meeting in August will include the final recommendations for:
Simple Lab Results
Transitions of Care
CDA Cleanup
Patient Matching
Vocabulary
Every meeting with the HIT Standards Committee (this was our 27th) brings us closer as a working team. We're transparent and passionate, openly sharing all the issues and concerns about the standards we're selecting. Coordination with all the moving parts (ONC, Policy Committee, S&I Framework) keeps getting better and better.
Thus far, Summer Camp is a winner and I am confident we'll meet all our September deadlines for offering recommendations to ONC in preparation for Meaningful Use Stage 2 regulations.
Tuesday, July 19, 2011
Planning FY12 Clinical IT Goals
As healthcare CIOs plan for FY12, they all face the triple threat of Meaningful Use, 5010, and ICD-10. On top of that, they'll be asked to lead innovative efforts supporting healthcare reform, patient centered medical homes, and accountable care organizations.
In times of great change, governance is absolutely key to align priorities and resources.
Here are the projects that have been discussed in my leadership and governance committees.
Overall clinical IT goal:
Move from the current hybrid electronic/paper medical record to a fully electronic integrated record by 2015 by redefining workflows and then implementing automation in support of meaningful use, healthcare reform, and patient engagement.
Doing that requires that we fill our functional gaps. Highlights include:
Electronic Medication Administration Records - we will redefine the entire medication management process, from doctor's brain to patient's vein, then automate it using iPads, smartphones, and novel applications. We will select build-or-buy solutions in FY12, pilot them in FY13, and then roll them out widely.
Patientsite - we will ensure every BIDMC provider is enrolled and every patient is offered access to our PHR.
IT Resource Prioritization - we will implement a highly functional overall clinical IT governance process that ensures resources are aligned with priorities and interoperability is maximized. Here's a brief proposal for this work:
"Currently we have a series of department and domain specific IT committees which plan their individual priorities- lab, radiology, critical care, inpatient, outpatient etc.
These do not report in a formal way to the Clinical IT Governance Committee.
We propose that the Clinical IT Governance Committee should meet 10 times per year (no meeting in August and December due to vacations). At each of these meetings, the Committee should hear from subcommittees to better understand their goals, their project progress, barriers, enablers, staffing, and competing priorities.
The Clinical IT Governance committee can
*Resolve resource conflicts between subcommittees
*Assist with decision making about software build/buy selection in the context of the entire BIDMC application portfolio, ensuring integration and interoperability, along with prudent and equitable use of resources
*Ensure the work to be done is aligned with the Annual operating plan, BIDMC strategy, compliance timelines and staffing levels
*Review the performance/efficacy of each subcommittee based on objective criteria and recommend membership/structural change if needed
*Sponsor those projects which are enterprise-wide and have no specific departmental or domain sponsor - such as electronic medication administration records and inpatient documentation
To kick off this process, we'll begin with an objective review of our existing subcommittees using the characteristics we discussed at our last meeting. We'll schedule our first subcommittees to present their plan in September"
Inpatient clinical documentation - a multi-stakeholder group will study inpatient clinical documentation and specify an ideal workflow for integrated documentation that meets regulatory requirements. We will eliminate paper while also eliminating redundancy in clinical documentation.
ICD-10 - we will continue our work on ICD-10. On Thursday, our statewide CIO Forum will examine ways that the community can do ICD-10 together, reducing the burden for all.
Business intelligence tools - we will implement novel business intelligence tools that are needed to support new payment methodologies such as alternative quality contracts.
We'll do all this on top of the usual infrastructure and application activities that keep the trains running on time. If FY11 was the year of living anxiously, I'm hoping FY12 is a year of great achievement.
Monday, July 18, 2011
The Health Insurance Exchange NPRM
Last week, Health and Human Services issued an important notice of proposed rulemaking (NPRM) about Health Insurance Exchanges.
What are Health Insurance Exchanges?
They are state-based competitive marketplaces where individuals and small businesses are able to purchase affordable private health insurance. As Secretary Sebelius noted
"Health Insurance Exchanges offer Americans competition, choice, and clout. Insurance companies will compete for business on a transparent, level playing field, driving down costs; and Exchanges will give individuals and small businesses the same purchasing power as big businesses and a choice of plans to fit their needs.”
Health Insurance Exchanges are abbreviated HIX rather than HIE, reducing confusion with Health Information Exchanges.
The IT portion of the NPRM is interesting - Section 1311(c)(5) requires the Secretary to make available to all states a model HIX web application developed by HHS.
This is not intended to be a single central website for the US. Instead, there will be a set of common web tools available to all states to support health insurance exchange websites. My understanding is that the infrastructure will be service oriented so that states can create their own user experience but leverage complex business logic and administrative tools developed for all.
Massachusetts is part of a six-state New England consortium that was awarded a $35.5M innovation grant in February 2011 to build a New England HIX by December 2012. For details, see their website. The New England team believes HIX has many components that are common and hence can be developed just once for the region/country. At the moment, HIX has a strong policy directive, appropriate funding, excellent leadership, and multi-stakeholder governance - many of the key elements in my Recipe for Success.
Are there lessons learned from Healthcare Information Exchange (HIE)?
At the moment, HIE has a weaker policy directive (Meaningful Use Stage 1 only requires a single test of HIE), funding is limited considering the scope of work needed to connect every provider/payer/patient, the domain is complex enough to create a shortage of excellent leaders, and governance is still evolving.
CareSpark, one of the early HIEs, recently closed its doors because it lacked significant adoption and a sustainable business model.
A lesson learned from the HIX effort is that HIE needs an urgency to implement the technology and an audience that wants to adopt it.
Are there common components that could be developed just once?
Yes - gateways provided by a Health Information Services Provider (HISP), provider directories, certificate management, a standard for transfer of care summaries, and consent guidance that empowers the development of local consent frameworks.
Hence, the work by the Direct project, the Summer Camp of the HIT Standards Committee, and the projects of the Standards and Interoperability Framework.
It will be more difficult to create national components that can be repurposed locally because of the heterogeneity of HIE use cases in every locality. Thus HIX, which provides uniform functionality across states, does not precisely provide a model for HIE, but in the next year all the use case work, the standards work, and the policy work (Meaningful Use Stage 2) will converge so that HIE will have as many "recipe for success" factors as HIX has today.
Friday, July 15, 2011
Cool Technology of the Week
I'm a Sci-Fi fan and the works of Frank Herbert (the Dune series) have always been some of my most treasured reading material. I was particularly fascinated (from an engineering perspective) by the concept of the Stillsuit worn by the Fremen which recycled lost body water to aid survival in the harsh desert climate of Dune.
Now, the Stillsuit is a reality and NASA astronauts are testing a self-contained device powered by the principles of osmosis that recycles urine into a sweetened, electrolyte-balanced sports drink.
Some may joke that Gatorade's color and flavor make it a natural for such "recycling" already, but the engineering behind body water recycling is serious business.
As a doctor, I know that drinking urine in a dehydration emergency is a losing battle - the solute load is too high and your body will get thirstier and thirstier.
This osmosis approach ensures the free water exceeds the solute load, hydrating the body instead of further compromising it.
A self contained body fluid recycler for astronauts and survival - that's cool! Frank Herbert would be proud.
Thursday, July 14, 2011
Reflections on Google+
Over the past week, over 100 of my friends, colleagues, and staff have sent me invitations to Google+ and my "Circles" are growing.
Google has an amazing capacity to create simple, clean, and highly functional user interfaces that work well with every browser I own (Safari 5.0.5, Firefox 5.0).
I'm on LinkedIn, Plaxo, Facebook, and Twitter not because I spend a great deal of time on social networks but so that I can communicate with folks who prefer those approaches over email.
To be honest, I've found Facebook's user interface to be cumbersome - you've got a wall, an inbox, invitations, events, etc. It's challenging to know what to click and the sheer amount of traffic in different places can be daunting to manage.
Google+ uses Gmail for invitations, has full integration of Google+ features into the iGoogle portal, and has just the right amount of functionality for me to organize my social network into manageable groups.
The "Circles" concept is a good one - I can place any contact into a group of interest - in my case that's Family, Friends, Federal work, State work, Harvard work, and Healthcare Work. I can follow "Streams" of postings in each of these circles, making it much easier for me to catch up on communications by focusing on topics or clusters of people. The "Sparks" feature that includes information collections on specific topics to be watched, read, or shared is less useful to me, since my reading time is generally more goal directed.
I admit that I'm a lumper rather than a splitter. In 1995, I switched to Windows 95 and Office 95 so that I would have a single integrated suite of products created by the same manufacturer to do my work. Over the past few years, I've used Mac OS X, Keynote, Pages, and Numbers for the same purpose.
I can definitely see the value of doing email, search, and social networking with a single integrated suite from one vendor.
Google's previous efforts, Wave and Buzz, did not attract my attention. Google+ is definitely a Facebook upgrade. It's a winner and I'll be organizing my social networks there because Google+ is fast, intuitive, and part of my existing Google workflow.
Wednesday, July 13, 2011
A Recipe for Success
When I reflect on the times in my career that perfect alignment of environmental factors resulted in high productivity and innovation, the following themes come to mind:
1. A sense of urgency to change - John Kotter's change model starts with creating urgency. A perfect 2011 example of this is Meaningful Use Stage 1. All stakeholders recognize that MU is a high priority requiring complete organizational focus with a well understood timeframe and clear outcomes.
2. A clearly defined scope - Often, business owners choose software solutions before they analyze their workflows and define requirements. Automating a broken process does not make it better. The rational way to approach projects is define an ideal workflow, develop requirements, create specifications for automation, and build/buy products. Currently, BIDMC is designing the medication workflow of the future. Our first step was to assemble all the stakeholders for a 3 day retreat to define the ideal functional characteristics without being biased by existing technology or practices. In the next two years, you can expect us to create an automated medication workflow that looks more like a Toyota production system and less like a traditional hospital ward.
3. A guiding coalition of leaders serving as champions for the project and providing ideal leadership characteristics - leaders need to be positive, visionary and supportive, not angry, autocratic and arbitrary.
4. A dedicated implementation team and appropriate funding - How many times have you been asked to do more with less? Eventually you'll do everything with nothing. Your lean and mean teams will become bony and angry. If an urgent project with a tightly defined scope and unified leadership is resourced with dedicated staff, you'll be unstoppable.
5. A timeframe that makes it possible to do the project with enough attention to detail that the result is high quality, well communicated, and innovative - every project needs to link time, scope and resources. 9 women cannot have a baby in 1 month. A fixed scope and fixed resources implies a fixed time. Attempts to artificially shorten the time will compromise scope or quality. A project that goes live at the right time, even if the timeframe seems long, will never be remembered. A project that goes live too early will never be forgotten.
May you all have the time, resources, scope, noble leaders, and urgency you need to succeed. May you be driven by innovation and the desire to make a difference instead of a pugilistic project manager creating fear of failure. May you all have your Man on the Moon moment.
These are the successes we'll tell our grandchildren about.
Tuesday, July 12, 2011
The Demand for a Free Service is Infinite
As CIOs we have significant responsibility but limited authority. We're accountable for stability, reliability, and security but cannot always control all the variables.
Here's an example of random events coming together to create a problem, which is now well on its way to resolution. However, there are many lessons learned that I'd like to share with you.
Timeline
November 2010 - Harvard Medical School financial and compliance experts asked for a temporary hold on computing and storage chargebacks, ensuring a thoughtful service center model could be implemented which adheres to every rule and regulation for grant funds flow.
January-March 2011 - Many ARRA grants enabled the research community to purchase microscopes, next generation sequencers, flow cytometry equipment, and other tools/technology that generate vast amounts of data. There was no specific process in place that required storage/IT resource plans before the purchases were made.
April 2011 - The Research Information Technology Group expanded the number of CPU Cores in our high performance computing cluster from 1000 to 4000. Although this did not specifically create an additional storage burden, it enabled the community to run 4 times as many jobs, increasing I/O demand.
May 2011 - All the new research equipment and tools/technologies were turned on by the research community. Storage demands grew from 650 terabytes to 1.1 petabytes in a few weeks. In parallel, next generation sequencing software, which tends to do millions of reads/writes, ran 4 times more often. At the same time, all IT resources were available without a chargeback. Effectively, the demand for a free service became infinite.
June 2011 - A few hours of storage downtime occurred as capacity crossed a threshold such that fewer hard drives were available to handle the I/O load. We initiated a series of immediate actions to resolve the problem, as described in my email to the community below
"Dear HMS IT User Community:
As a followup to my June 14 email about our plan to rapidly improve storage performance and capacity, here is an update.
The demand for storage between April and June 2011 increased 70%, from 650 Terabytes to 1.1 Petabytes (1,100,000,000,000,000 bytes), and research storage activity doubled. Per our promise we have
1. Separated all web and other applications into their own storage cluster, enhancing the speed and reliability of all our applications
2. Separated all home folders and administrative collaborations into their own storage cluster, enhancing the speed and reliability of file access for every user
3. Planned migrations of several research collaborations into a separate pool of specialized high speed storage.
4. Retained an expert consultant to provide an independent review of our storage infrastructure in 3 phases - short term improvements to ensure stability, medium term improvements to support growth, and long term improvements to ensure sustainability. Their report will be presented to the Research Computing Governance Committee tomorrow. After all stakeholders have reviewed that report, we will use existing budgets to make additional storage purchases that are consistent with the long term needs of the community.
5. The Research Computing Governance Committee is also working hard on policies to ensure everyone is a good steward of IT resources. A collaboratively developed chargeback model for computing and storage is nearly complete and will be widely vetted for feedback. Until the chargeback model is in place, we'll continue to use quotas to limit growth. By enhancing our storage supply and managing our demand, we will all be successful.
In the meantime, faculty will be receiving a note from the chair of the Research Computing Governance Committee, with further information and suggestions regarding efficient use of computing resources.
Thanks for your ongoing support, we're making progress
John"
What can we learn from this?
1. Policy and Technology need to be developed together. No amount of hardware technology will satisfy customer needs unless there is some policy as to how the technology is used. I should have focused on demand management in parallel with supply management, enforcing rigorous quotas and providing useful self-service reports while the chargeback model was being revised (a minimal sketch of such a quota check follows this list).
2. Governance is essential to IT success. Although no one in the user community relayed any storage plans or issues to me, there should have been appropriate committees or workgroups established to coordinate efforts among research labs. In administrative and educational areas, it's common for groups to coordinate efforts with enterprise initiatives. In research areas, it's more common for local efforts to occur without broad coordination. Establishing governance that includes all research lab administrators would help improve this.
3. Approval processes for purchases need to include IT planning. When grants are used to purchase equipment there is no specific oversight of the infrastructure implications of adding such equipment on firewalls, networks, servers and storage. Purchases that generate data should require additional approvals to align infrastructure supply and demand.
4. Be wary of "Big Bang" go lives. Our high performance computing upgrade was a single event - 1000 cores to 4000 cores. This should have been phased to better assess the impact of the expansion on application use, other elements of infrastructure, and customer expectation.
5. Know your own blind spots. As with the CareGroup Network Outage, there are aspects of emerging technology which are so new that I do not know what I do not know. When storage demand increases by 70% and throughput accelerates by a factor of 4, what happens to an advanced storage infrastructure? Bringing in a third party storage consultant would have filled in knowledge gaps.
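Here is the minimal quota-check sketch referenced in lesson 1 above. The lab names and quota values are hypothetical, not actual HMS figures; the point is simply that demand management can begin with a very simple self-service report while a full chargeback model is being developed.

```python
# A minimal sketch of a storage quota report, assuming hypothetical
# per-lab usage and quota figures expressed in terabytes.
lab_usage_tb = {"lab_a": 42.0, "lab_b": 310.5, "lab_c": 7.2}
lab_quota_tb = {"lab_a": 50.0, "lab_b": 250.0, "lab_c": 25.0}

def quota_report(usage, quota):
    """Return labs that are over quota and the size of their overage."""
    over = {}
    for lab, used in usage.items():
        limit = quota.get(lab, 0.0)
        if used > limit:
            over[lab] = used - limit
    return over

for lab, overage in quota_report(lab_usage_tb, lab_quota_tb).items():
    print(f"{lab} is {overage:.1f} TB over its quota")
```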
In the world of healthcare quality, there's an analogy that error is like slices of swiss cheese. If a stack of individual slices is lined up precisely, you can get a hole all the way through the stack. In this case, a series of unrelated events lined up to create a problem. I hope that IT professionals can use this episode to realign their "slices" and prevent infinite demand from impacting a limited supply.
Monday, July 11, 2011
Can Blogging be Harmful to Your Career?
I blog 5 days a week. This is my 935th post. Monday through Wednesday are generally policy and technology topics. Thursday is something personal. Friday is an emerging technology.
Everything I write is personal, unfiltered, and transparent. Readers of my blog know where I am, what I'm doing, and what I'm thinking. They can share my highs and my lows, my triumphs and defeats.
Recently, I had my blog used against me for the first time.
In discussing a critical IT issue, someone questioned my focus and engagement because I had written a post about single malt scotch on June 2 at 3am, recounting an experience I had Memorial Day Weekend in Scotland.
I explained that I write these posts late at night, in a few minutes, while most people are sleeping. They are not a distraction but are a kind of therapy, enabling me to document the highlights of my day.
I realize that it is overly optimistic to believe that everyone I work with will embrace values like civility, equanimity, and a belief that the nice guy can finish first.
If Facebook can be used against college applicants to screen them for bad behavior and if review of web-based scholarly writing can be used by legislators to block executive appointment confirmations, what's the right way to use social media to minimize personal harm?
There are three possibilities
1. Ignore the naysayers - blog, tweet, chat, IM, and wiki as you wish!
2. Give up - the world is filled with angry people who can stalk you, harass you, and criticize you. Better to keep your thoughts private.
3. Write what you think, back it up with evidence, and temper your emotions - assume the world will read everything you write and have an opinion, but transparency and communication, as long as they are fair, are the best policy.
I've chosen #3.
Why did the person criticize me for blogging about Single Malt?
I have three ideas
1. Maybe they did not understand that I only blog for a few minutes at the end of my 20 hour day, when all work and family responsibilities are done to the extent I can do them. Hence my blogging does not detract from anything else I do.
2. Maybe they cannot accept that I've done everything I can to serve my customers in the 20 hour day before blogging. In that case, it falls under my leadership principle, "You cannot please everyone".
3. Maybe life is not fair and I should be judged by different criteria than other people. When I was 15, I wrote in my journal "If you are judged using rules that are inherently unfair or unreasonable, then you should realize that the game cannot be won. Stay true to your values, work hard, and all will be well." No matter what people say or how harshly they criticize me, even when their ideas are not factual, I will stay true to my values - not pursuing fame or fortune, but simply trying to make a difference.
So the answer to the question is yes, blogging can hurt your career. However, if you take the high road, you'll always get to where you want to be.
Friday, July 8, 2011
Cool Technology of the Week
When I was 10 years old in 1972, I mentioned to my father that some day the analog world we live in would be digitized and if the sampling rate were fine enough, we would not be able to distinguish an analog picture from a digital one. Thereafter, sights and sounds could not be trusted because all media could be easily manipulated by rearranging the data.
The manipulation of digital photo data, considered science fiction in 1972, is something every teenager can do on their smartphone today.
What's next?
How about photos with an infinitely variable focus that can be manipulated after you take them? Lytro, a spin-out from Stanford University, enables you to readjust the focus to any point in the picture after the fact. I recommend you test drive the technology on their website.
Lytro has built a new kind of camera sensor which captures every ray of light hitting it. Lytro also has software to turn that data into the shifting-focus images.
With this new camera, you can take photos of major life events, hard to capture objects in motion, or even the Loch Ness monster and not worry about focusing - just do it later.
That's cool!
Thursday, July 7, 2011
Experiencing the Alaskan Wilderness
When I wrote about hiking Denali, I did not know what to expect from the 49th State.
Having spent the last week in Alaska with my family and experienced the terrain, flora and fauna, I can now describe Alaska in all its grandeur.
We began our trip in Girdwood, 40 miles south of Anchorage. This area of Chugach National Forest is a temperate rain forest, which receives 120 inches of precipitation every year and is reminiscent of the moss-covered forests of Olympic National Park. I hiked the Winner Creek Trail to a hand-trolley river crossing and then onto the Crow Creek Trail, a section of the traditional Iditarod route - about 10 miles for the circuit. A truly beautiful walk through lush and dense forest. Two great horned owl fledglings serenaded me along the way.
We drove the Seward Highway and explored Exit Glacier via the Harding Icefield Trail. Amazing deep blue ice and deep crevasses.
We moved on to Anchorage so that I could meet with IT leaders and folks working on statewide EHR/HIE projects. After my lectures, Stewart Ferguson, CIO of the Alaska Native Tribal Health Consortium, and I hiked 20 miles with 8000 feet of elevation gain through Chugach State Park. We began at the Glen Alps trailhead and hiked 6 miles to Ship Lake. From there we went off trail and climbed a 1000 foot ridge, then a 1000 foot glacier to reach Bird Ridge. We traversed the Bird Ridge Overlook and did a 3500 foot descent to Turnagain Arm. A truly amazing day - no need for a headlamp because in late June at 61 degrees North latitude, there is no darkness.
The next day, my family traveled to Talkeetna, the gateway to Denali National Park and the amazing peaks of Foraker, Hunter, and McKinley. We took the Talkeetna air taxi to Denali base camp and landed on the Pika Glacier. I met two climbers (medical students from the University of Washington) and we discussed the rock routes of Little Switzerland, moderate granite climbs of 5.8-5.9. So far this year, there have been 9 deaths on McKinley and the surrounding mountains. Foraker was particularly intimidating because of its cornices - rock edges covered with snow that looks like whipped cream - one step through a cornice is fatal.
An amazing trip. I look forward to returning and exploring the mountains/rivers of the Kenai Peninsula by driving along the Sterling Highway to Homer, Alaska.
I have long admired the work of Richard Proenneke (the book One Man's Wilderness and the PBS documentary Alone in the Wilderness) and hope to visit the Twin Lakes region of Lake Clark National Park, the least visited national park with 4 million acres and 5000 visitors per year.
Although I'm steeped in technology and sometimes described as an alpha geek, there is something to be said for the homesteading lifestyle, defined as "a lifestyle of simple agrarian self-sufficiency". Alaska may be the ideal spot for my version of Thoreau's cabin in the woods.
Wednesday, July 6, 2011
Testimony to the HIT Policy Committee
Doug Fridsma and I were asked to brief the HIT Policy Committee about the current activities of the HIT Standards Committee, ensuring coordination as we all work to finalize Meaningful Use Stage 2.
We used this presentation, which covers three major themes:
*Meaningful Use Stage 2 gap analysis work
*The Standards Summer Camp Activities
*The Standards and Interoperability Framework activities
Doug started by reiterating the guiding principles of the HIT Standards Committee which essentially translate into
"We will select no standard before its time"
Doug reflected on the way we select standards, assigning each requirement to one of four "buckets"
a. Functional criteria only – no standards are needed
b. Sufficient standards and implementation guides are available
c. Existing standards are available but not implementation guides
d. No standards or implementation guides are available
He discussed early work to place proposed Meaningful Use Stage 2 policy goals into these 4 buckets.
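As a rough illustration of that triage, here's a minimal sketch. The example assignments are hypothetical placeholders of my own; the real bucket assignments come from the committees' gap analysis, not from this code.

```python
from enum import Enum

class StandardsReadiness(Enum):
    """The four 'buckets' used to triage each proposed requirement."""
    FUNCTIONAL_ONLY = "Functional criteria only - no standards needed"
    STANDARDS_AND_GUIDES = "Sufficient standards and implementation guides available"
    STANDARDS_NO_GUIDES = "Standards available but no implementation guides"
    NOTHING_AVAILABLE = "No standards or implementation guides available"

# Hypothetical examples of how Stage 2 goals might be sorted into buckets.
triage = {
    "Electronic medication administration records": StandardsReadiness.FUNCTIONAL_ONLY,
    "Transitions of care summary exchange": StandardsReadiness.STANDARDS_AND_GUIDES,
}

for goal, bucket in triage.items():
    print(f"{goal}: {bucket.value}")
```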
Since many of the Stage 2 goals will not have supporting standards and implementation guides in time for the regulations, it's likely that some of Stage 2 will be described using functional criteria and not standards. What does that mean? Here's an example.
Electronic medication administration records (EMAR) are unlikely to require a specific bar code format. Instead, it will be sufficient to require that a certified application be capable of 5 functions:
*Generate alert for wrong patient
*Generate alert for wrong medication
*Record the dose and route
*Record the provider administering the medication
*Record the time/date the medication was administered
Implementation details will be left to the creativity of the marketplace.
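To make that functional framing concrete, here's a minimal sketch of what those five functions might look like behind a barcode-scanning workflow. It is illustrative only, assuming hypothetical class and field names of my own invention, not any certification criterion or vendor product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class MedicationOrder:
    patient_id: str
    medication_code: str  # e.g., a drug code scanned from the unit-dose label
    dose: str
    route: str

@dataclass
class AdministrationRecord:
    order: MedicationOrder
    provider_id: str
    administered_at: datetime
    alerts: List[str] = field(default_factory=list)

def administer(order: MedicationOrder,
               scanned_patient_id: str,
               scanned_medication_code: str,
               provider_id: str) -> AdministrationRecord:
    """Cover the five EMAR functions: wrong-patient and wrong-medication
    alerts, plus recording dose, route, provider, and time of administration."""
    alerts = []
    if scanned_patient_id != order.patient_id:
        alerts.append("ALERT: wrong patient")
    if scanned_medication_code != order.medication_code:
        alerts.append("ALERT: wrong medication")
    return AdministrationRecord(
        order=order,
        provider_id=provider_id,
        administered_at=datetime.now(timezone.utc),
        alerts=alerts,
    )
```

However a vendor chooses to implement it, the certification question is simply whether the system performs these five functions.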
I described the Standards Summer Camp schedule and offered updates on
• Metadata
• Patient Matching
• ePrescribing
• Surveillance Implementation Guide
• NwHIN
Doug completed the presentation by describing the Standards and Interoperability Framework Projects
*CDA consolidation
*Transitions of Care
*Lab Results reporting
*Provider directories
*Distributed query (using a web browser to query multiple databases, as is done with Shrine/I2B2)
We took questions from the committee including how best to develop drug/drug interaction standards, how the policy committee can best work with the standards committee by providing requirements early and often, and how we can all plan for the future of stage 3.
A great meeting!
Tuesday, July 5, 2011
Alaskan Healthcare IT Lessons Learned
I'm back from Alaska and I'll post several blogs about my Healthcare IT and personal experiences in the 49th state. Sorry for the gap in blog posting last week. Per the photo to the right, I was "Into the Wild" with my wife and daughter, celebrating my daughter's graduation from high school, off grid.
Alaska faces many healthcare challenges given its large area (663,268 sq mi) and population of 710,231 residents (per the 2010 US Census), approximately half of whom live in the Anchorage metropolitan area, making Alaska the least densely populated state. Roads are limited, making boat and small plane the only means of transportation to many locations, especially in the western portion of the state.
The geographical challenge of delivering such widely distributed healthcare makes it a natural location for telemedicine, much of which is ably overseen by Stewart Ferguson, CIO of the Alaska Native Tribal Health Consortium.
The business case is simple - provider travel to remote villages and patient commuting to downtown clinics is expensive for providers and patients.
Because telemedicine has become part of the culture of Alaskan care delivery, healthcare information exchange is likely to be very successful. When a patient is referred to a specialist, lifetime medical summaries should be sent before the consultation. When the telemedicine encounter is done, a summary of the evaluation should be sent back to the primary care giver or health aide in the village.
Alaska has what I consider the perfect alignment of incentives needed for a successful HIE
a. Complete support of all payers and providers that HIE (coupled with Telemedicine) saves money by reducing travel
b. A universal expectation by providers and patients that HIE will occur following each encounter
c. A population of patients who understands the need for HIE and supports a simple consent model (opt-out)
d. Government support for HIE with strong public-private governance and delegation of HIE operations to a multi-stakeholder non-profit, the Alaska eHealth Network (AeHN), a 501(c)(3) Alaska corporation organized and managed by Alaskans.
e. The stakeholders chose a single vendor, Orion, to provide the backbone which will connect all their EHRs.
A network of networks approach makes great sense for a state like Massachusetts, with regional centers of excellence and highly distributed broadband. Alaska has some broadband but relies on satellite and microwave connections, which are limited in bandwidth and very expensive. Alaska does not distribute IT staff and infrastructure support to distant villages. Thus, for Alaska, a single network and a centralized clinical data repository supporting:
Composite Record Viewing
Master Patient Index / Record Locator Service
Audit Trails/Security
Patient Portal
Public Health Reporting
makes great sense.
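To illustrate the record locator piece of such a centralized model, here's a minimal sketch assuming hypothetical statewide patient identifiers and facility names; a production master patient index would rely on probabilistic demographic matching rather than exact keys.

```python
# Minimal record locator sketch: a central index maps a statewide
# patient identifier to the facilities holding records for that patient.
# Identifiers and facility names are hypothetical.
record_locator = {
    "AK-000123": ["anchorage_clinic", "village_health_aide_station"],
    "AK-000456": ["anchorage_clinic"],
}

def locate_records(statewide_patient_id: str) -> list:
    """Return the facilities with records for this patient, if any."""
    return record_locator.get(statewide_patient_id, [])

print(locate_records("AK-000123"))
```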
The folks in Alaska working on HIE, led by William Sorrells, are highly motivated, have done a great job with education/communication, and share a common vision.
I predict that Alaska will become one of the most successful HIEs because of its clear business case, alignment of incentives, and broad stakeholder engagement.
I look forward to working with my colleagues in Alaska as we all develop the Nationwide Health Information Network, connecting all the successful state HIEs together.