Tuesday, January 26, 2021

Wearable Danger

This article is written by John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.

If you ask health care executives what keeps them up at night, many will sum up their worries in one word: ransomware. By one estimate, 56% of organizations suffered a ransomware attack in the last year. While there are countless ways a cyberthief can penetrate a facility’s computer network to block access to essential data, one avenue that gets too little attention is wearables and related medical devices. A growing number of providers now allow patients to send data from blood glucose monitors, blood pressure cuffs, bed sensors and portable EKG devices to their networks. And during the COVID-19 pandemic, many more clinicians are working remotely, using their own laptops, tablets and smartphones to access a hospital or office EHR system. All these connections are potential opportunities for hackers to infiltrate your computer network. And the word potential doesn’t fully capture the danger.

In 2019, for instance, the FDA issued an alert warning health professionals about a cybersecurity vulnerability affecting Medtronic implantable cardiac devices (ICDs), programmers and home monitors. The agency found the vulnerability in the wireless telemetry technology used to communicate among the ICDs, clinic programmers and home monitors. Similarly, the company that makes the OneTouch insulin pump notified patients that the device could be hacked and reprogrammed, which could have life-threatening consequences.

During a recent conversation, Leon Lerman, CEO of Cynerio, a cybersecurity solutions firm, explained that once a hacker infiltrates a computer network, often through a phishing scam and malware, medical devices become easy targets. That’s the case for several reasons, including inadequate segmentation and outdated operating systems. Virtual local area networks (VLANs) are one way to address the issue because they limit the number of users allowed access to a specific part of the network. Unfortunately, a study sponsored by Forescout, a security firm, found “only 49 percent of medical devices were deployed across 10 virtual local area networks (VLANs) or fewer ….”
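To make the segmentation point concrete, an audit can be as simple as flagging any VLAN where medical devices share space with general-purpose machines. Here is a minimal sketch of that idea; the device names and VLAN assignments are invented for illustration:

```python
# Hypothetical device inventory: (device name, device type, VLAN id)
inventory = [
    ("infusion-pump-01", "medical", 10),
    ("mri-console-02",   "medical", 10),
    ("reception-pc-01",  "general", 10),   # general-purpose host on a clinical VLAN
    ("ekg-cart-03",      "medical", 20),
]

def mixed_vlans(devices):
    """Return VLAN ids that mix medical devices with general-purpose hosts,
    a segmentation gap that widens the attack surface."""
    kinds_by_vlan = {}
    for name, kind, vlan in devices:
        kinds_by_vlan.setdefault(vlan, set()).add(kind)
    return sorted(v for v, kinds in kinds_by_vlan.items() if len(kinds) > 1)

print(mixed_vlans(inventory))  # → [10]
```

A real audit would of course pull the inventory from a network management system rather than a hard-coded list, but the underlying check is the same.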

Outdated operating systems, an easy access point for hackers, remain a persistent problem for health care providers, as do outdated software applications. A 2019 international survey of 600 health care IT professionals found that more than 1 in 4 organizations were still running Windows 7 on their medical devices. The danger posed by this practice may not be immediately obvious to most clinicians, but because many older OSs are no longer supported by their manufacturers, security patches are no longer available to block newly designed digital threats. Of course, health care providers running currently supported operating systems can also fall victim to cyberattacks if they fail to install security updates as soon as they become available. That’s how the infamous WannaCry ransomware worm was able to penetrate the NHS and numerous other networks, affecting more than 200,000 computers in 150 countries. Microsoft had issued a security patch before the WannaCry incident, but many organizations had neglected to install it in time.

One of the challenges in keeping operating systems up to date is the restrictions that hospital IT teams face when they try to address the issue. Most devices are black boxes in the sense that the manufacturer does not allow users to touch the software; doing so without the company’s permission usually voids the warranty. That makes it virtually impossible for a hospital or medical practice to install security updates to legacy OSs, even when they are available. If the device manufacturer is cooperative, it may be possible to have their technicians do these updates. When that’s not an option, segmentation becomes all the more important.

Fortunately, many device manufacturers are beginning to realize that their reputations depend on developing machinery that is not just clinically functional but hardened against cyberattacks. Many new devices now come with a Manufacturer Disclosure Statement for Medical Device Security (MDS2) that spells out the security protocols used on the device, whether anti-malware software has been installed, and whether it should even be connected to the Internet.

The adage about necessity being the mother of invention certainly applies to the Internet of Medical Things. As the health care ecosystem experiences more cyberattacks, we learn to adapt and, out of necessity, develop creative tools to defend our networks. Most importantly, we learn more effective ways to protect our patients—our number one priority.

Friday, January 22, 2021

How is AI Impacting Health Care Today?

By John Halamka and Paul Cerrato*

We are often asked this question during interviews, podcasts and speaking engagements. It’s a complicated question that requires context. A closer look at the research and product offerings in digital health demonstrates that there are several high-quality, well-documented algorithms now available, but there are also several questionable vendors that have rushed to market with little evidence to support their claims. Separating the wheat from the chaff can be a full-time occupation.

We recently received a press release from a large U.S. company highlighting a new AI system that may be able to diagnose dementia from a patient’s vocal patterns. The vendor pointed out that its research generated an area under the curve (AUC) of 0.74 for its system. An AUC of 0.74 means that, presented with one patient who has dementia and one who does not, the model would rank the pair correctly only about three times out of four, which is modest performance for a diagnostic claim. With these concerns in mind, the question is: What kind of guidelines can clinicians and technologists on the front lines turn to when they want to make informed choices?
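For readers less familiar with the metric, AUC can be computed directly as the fraction of positive/negative pairs the model ranks correctly. A minimal sketch, using invented risk scores rather than the vendor’s data:

```python
from itertools import product

def auc(pos_scores, neg_scores):
    """Probability that a randomly chosen positive case outranks a
    randomly chosen negative one; ties count as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos_scores, neg_scores))
    return wins / (len(pos_scores) * len(neg_scores))

# Invented risk scores, for illustration only
dementia_scores = [0.9, 0.8, 0.6, 0.25]   # patients who truly have dementia
healthy_scores  = [0.7, 0.5, 0.3, 0.2]    # patients who do not

print(auc(dementia_scores, healthy_scores))  # → 0.75
```

Note that AUC says nothing directly about how many cases would be missed at any particular decision threshold; that requires sensitivity and specificity figures, which press releases often omit.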

Ideally, we need an impartial referee that could act as a Consumer Reports-style service, weighing the strengths and weaknesses of AI offerings and providing brief summaries of the evidence on which it bases its conclusions. In lieu of that, there are criteria stakeholders can use to help with the decision-making process. As we point out in a recent review in NEJM Catalyst, a New England Journal of Medicine publication, at the very least there should be prospective studies to support any diagnostic or therapeutic claims. Too many vendors continue to rely on retrospective analyses to support their products. (1) In our NEJM analysis, we include an appendix entitled “Randomized Controlled Trials and Prospective Studies on AI and Machine Learning,” which lists only 5 randomized controlled trials and 9 prospective, non-RCT studies. When one compares that to the thousands of AI services and products coming to market, it’s obvious that digital health still has a long journey ahead before it’s fully validated.

That’s not to suggest there are no useful, innovative AI and machine learning tools that are well supported, along with several more coming through the pipeline. There are credible digital tools to estimate a patient’s risk of colorectal cancer (ColonFlag), manage type 1 diabetes (DreaMed), and screen for diabetic retinopathy (IDx), all of which are supported by good evidence.** The FDA has also published a database of approved AI/ML-based medical technologies, summarized by Stan Benjamens and his associates in npj Digital Medicine. (2) (Keep in mind when reviewing this database, however, that some of the algorithms cleared by FDA were based on very small numbers of patients.)

A recent virtual summit gathered several thought leaders in AI, digital health and clinical decision support (CDS) to create a list of principles by which such tools can be judged. Spearheaded by Roche and Galen/Atlantica, a management consulting firm, the summit communique refers to the project as “a multi-stakeholder initiative to advance non-regulatory approaches to CDS quality.” Emphasizing the need for better evidence, the communique states: “The development of CDS is driven by increasing access to electronic health care data and advancing analytical capabilities, including artificial intelligence and machine learning (AI/ML). Measures to ensure the quality of CDS systems, and that high-quality CDS can be shared across users, have not kept pace. This has led some corners of the market for CDS to be characterized by uneven quality, a situation participants likened to ‘the Wild West.’”

The thought leaders who gathered for the CDS summit certainly aren’t the only ones interested in improving the quality of AI/ML-enhanced algorithms. The SPIRIT-AI and CONSORT-AI Initiative, an international collaborative group that aims to improve the way AI-related research is conducted and reported in the medical literature, has published two sets of guidelines to address the issues we mentioned above. The twin guidelines have been published in Nature Medicine, BMJ and Lancet Digital Health. (3,4) They are also available on the group’s website.

With all these thought leaders and experts on board, the AI ecosystem is gradually transitioning from the “Wild West” toward the kind of well-defined, repeatable processes that health care stakeholders can trust.


*Paul Cerrato is a senior research analyst and communications specialist at Mayo Clinic Platform

 **Products mentioned are not endorsements.  Mayo Clinic has no relationship with any of these vendors.



1. Halamka J, Cerrato P. The Digital Reconstruction of Health Care. NEJM Catalyst Innovations in Care Delivery. 2020;1(6).

2. Benjamens S, Dhunnoo P, Mesko B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digital Medicine. 2020;3:118. https://www.nature.com/articles/s41746-020-00324-0

3. Cruz Rivera S, Liu X, Chan A-W, et al. Guidelines for clinical trial protocols for interventions involving artificial intelligence: the SPIRIT-AI extension. Nature Medicine. 2020;26:1351-1363.

4. Liu X, Cruz Rivera S, Moher D, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nature Medicine. 2020;26:1364-1374.

Tuesday, January 19, 2021

Addressing racism with compassion, data analytics

This article is written by John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.

We speak often about the need to combine human and artificial intelligence (AI) to improve patient care. Equally important is the marriage of compassion and data analytics ― a powerful duo that is proving invaluable in the battle to eradicate the systemic racism that still permeates health care.

Unfortunately, numerous examples demonstrate that systemic racism continues to affect the patient experience and leads to mistrust of health care institutions among people of color.

Some of us are familiar with the unethical Tuskegee syphilis study, in which the U.S. Public Health Service observed a large population of African American men with untreated syphilis between 1932 and 1972. As part of that study, participants with syphilis were not informed of their diagnosis, nor treated for it. They were told they were receiving free health care from the federal government.

As another example of systemic racism in health care, a recent journal article in the Proceedings of the National Academy of Sciences of the United States of America revealed that among 1.8 million U.S. births, the Black newborn mortality rate was three times higher when white doctors delivered the child, compared to Black doctors.

Awareness of such prejudices causes fear of interacting with the health care system, as evidenced by a December 2020 survey by the Kaiser Family Foundation in which 35% of Black respondents said they definitely or probably would not get a COVID-19 vaccine.

The list of inequities associated with racism includes numerous other problems. Blacks have a lower average life expectancy, and they are less likely to have been vaccinated, according to a 2015 review by the Department of Health and Human Services. A 2015 review by Paradies et al also found that racism was "associated with poorer mental health, including depression, anxiety, psychological stress and various other outcomes … and with poorer general health."

Since the Black Lives Matter movement came to prominence in the U.S., many health care leaders have spoken out to address these disparities. Mayo Clinic has likewise taken a strong position on the subject and has put skin in the game by investing in a 10-year, $100 million effort to eradicate racism in all its forms.

That effort will include initiatives to:

  • Increase recruitment of researchers and clinical trial patients from underrepresented racial and ethnic groups.
  • Find ways to recruit and retain physicians, nurses and supervisors from underrepresented groups.
  • Build out its digital and telehealth technology to make patient care more equitable around the nation.

Mayo also is working on programs to increase its own patient population's diversity, with special attention paid to Black patients.

At the local level, Mayo recently awarded several grants to communities to advance racial equity. Specifically, its EverybodyIN Fund for Change has given grants to 36 organizations in Mayo Clinic communities, including 17 organizations in the regions Mayo Clinic Health System serves.

Mayo Clinic's core values are the springboard for these initiatives and programs. These values include respect for everyone in our diverse community, providing compassionate care with an emphasis on sensitivity and empathy, and integrity to the highest principles of professionalism.

Of course, Mayo realizes that values have to be accompanied by specific actions to have any lasting impact. In addition to the concrete actions mentioned above, we are completing the most extensive analysis of care disparities in the U.S. to guide our steps.

In conjunction with Change Healthcare's data scientists, Mayo is creating a novel national disparities road map using Change Healthcare's enormous store of linked data on social determinants of health and health care use. The data will enable us to identify the geographic areas and patient populations to target first as we work to eliminate health care inequities. We aim to develop a series of specific health care delivery interventions, leveraging many of the digital tools now part of the Mayo Clinic Platform, to address the patient-level and structural factors that have contributed, and continue to contribute, to racism in medicine.

We also have created an AI playbook that provides the tools and techniques needed to select unbiased data sets. The playbook can likewise be used to measure bias in the machine learning-based algorithms already being used to deliver health care.
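To make "measuring bias" concrete, one widely used check is the demographic-parity gap: the spread in positive-prediction rates across patient subgroups. The sketch below uses invented predictions and a generic metric; it is an illustration of the concept, not a method drawn from the playbook itself:

```python
def positive_rate(preds):
    """Fraction of cases the model flags positive."""
    return sum(preds) / len(preds)

def parity_gap(preds_by_group):
    """Largest difference in positive-prediction rates across groups.
    A large gap suggests the model treats subgroups unevenly."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions (1 = flagged for follow-up care)
preds = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],   # 5/8 flagged
    "group_b": [0, 1, 0, 0, 1, 0, 0, 0],   # 2/8 flagged
}

print(parity_gap(preds))  # → 0.375
```

A gap this large would not prove discrimination on its own (base rates can differ between groups), but it would flag the algorithm for closer review, which is the point of an audit.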

As we have mentioned in previous articles and books, several of the digital tools being recommended as diagnostic and therapeutic aids do not represent the populations they attempt to serve, leaving out adequate numbers of persons of color and other minorities.

In our book, "The Transformative Power of Mobile Medicine," we speak at length about the power of words, pointing out that they can persuade skeptics, overcome bigotry, wound colleagues, disrupt the status quo, ruin reputations, shatter misconceptions, deceive the uninformed, endear us to loved ones, and comfort the grief stricken.

Compassion is one of those powerful words, especially when it's backed by the actions of clinicians, executives and technologists who put patients first, regardless of the color of their skin.

Thursday, January 14, 2021

Responding to Misinformation

This article is written by John Halamka, M.D., president, Mayo Clinic Platform, and Paul Cerrato, senior research analyst and communications specialist, Mayo Clinic Platform.

The singer/songwriter Paul Simon once penned the lyric, “a man hears what he wants to hear and disregards the rest” (“The Boxer”). If that’s the case, how do we respond to misinformation that contradicts the data and evidence guiding the development of treatments and cures? If the only audience willing to read such articles is already made up of critical thinkers, perhaps we are just preaching to the choir. And if, by chance, a person who believes in controversial ideas does read articles based on real-world evidence, will they consider them a one-sided discussion by the “medical-industrial establishment”? In our soon-to-be-published book, The Digital Reconstruction of Healthcare (HIMSS/CRC Press), we discuss this dilemma at length. Here are a few highlights from that analysis.

There is evidence to justify at least some mistrust among the public. For example, when physicians were asked what treatment recommendations they would make for patients and what decisions they would make for themselves in similar circumstances, investigators found the clinicians would have made different choices for themselves: “Among those asked to consider our colon cancer scenario (n=242), 37.8% chose the treatment with a higher death rate for themselves but only 24.5% recommended this treatment to a hypothetical patient.” (1) Even more concerning are reports suggesting that many medical procedures continue to be performed despite a lack of strong scientific evidence supporting their efficacy. The problem has become significant enough to prompt the formation of the Right Care Alliance, “a collaboration between health-care professionals and community groups that seeks to counter a trend: increasing medical costs without increasing patient benefits.” (2)

That misinformation conversation can be much shorter when health care leaders readily admit their mistakes and any uncertainty about the treatment protocols they are recommending. Chances are, such uncertainties are going to be revealed given the public's access to scientific data from clinical research that was once hidden from view.

Building the kind of trust that opens the minds of science skeptics also requires that we accurately report the facts, theories and controversies when dealing with patients and the general public. This may seem an obvious weapon in the battle to debunk unscientific views, but it can be challenging for several reasons. Explanations take time, and most clinicians are too busy discussing diagnosis and treatment in the short window they have with each patient to adequately counter misinformation.

Another barrier to addressing misinformation is the nature of the scientific process itself. During the COVID-19 pandemic, for example, several critics have attacked statements by infectious disease specialists because their advice has changed over time as more data became available from a larger population of infected patients. The assertion, “The experts are constantly contradicting themselves,” reflects a lack of understanding of how the scientific method works, as well as the laws of probability.

When crafting a message that will reach skeptics, it also helps to understand the motivation that’s sometimes behind such doubts. The motivation to believe falsehoods and half-truths is complicated. Steven Pinker, a Harvard University professor of psychology, offers a plausible theory. Citing the research of legal scholar Dan Kahan, he points out that: “Certain beliefs become symbols of cultural allegiance. People affirm or deny these beliefs to express not what they know but what they are. We all identify with particular tribes or subcultures, each of which embraces a creed on what makes for a good life and how society should run its affairs.” (3) That observation implies that rejecting a deeply held belief means betraying one’s tribe and risking the loss of peer respect.

There is no magic pill to cure the misinformation epidemic we are currently experiencing. The problem has existed for centuries and is unlikely to disappear any time soon. Many believed in astrology in the 12th century just as they do now. Skeptics have likewise questioned the value of vaccinations for centuries. During the 1918 flu pandemic, some even claimed the disease was being spread as a result of a massive nationwide vaccine program. (4) Despite these myths, it is possible to overcome misconceptions with a combination of respect, patience and a willingness to admit that sometimes, the experts are wrong. Perhaps Paul Simon was too cynical. 






1. Ubel PA, Angott AM, Zikmund-Fisher BJ. Physicians recommend different treatments for patients than they would choose for themselves. Arch Intern Med. 2011;171(7):630-634.

2. Epstein D, ProPublica. When Evidence Says No, but Doctors Say Yes. The Atlantic. Feb 22, 2017. https://www.theatlantic.com/health/archive/2017/02/when-evidence-says-no-but-doctors-say-yes/517368/


3. Pinker S. Enlightenment Now: The Case for Reason, Science, Humanism, and Progress. New York, NY: Viking; 2018:357.


4. Reuters Staff. False claim: the 1918 influenza pandemic was caused by vaccines. Reuters. April 1, 2020.  

Monday, January 4, 2021

To Build a Fire

In 2021, much of our work at Mayo Clinic Platform will be creating repeatable processes that achieve their intended result in a timely, scalable fashion. To understand what it means to achieve process maturity, let me tell the story of firewood management at Unity Farm Sanctuary, a great illustration of use case definition and attention to detail.

At the Sanctuary, we heat the farmhouse in the evening with a wood fire, using fallen trees from the property. The logs must be sorted by species — ash and black birch can be burned without aging, while maple and oak must be aged. Cedar and pine are not good firewood because their oils cause the wood to pop and sputter. Poplar is not good firewood because it smells and doesn't generate much heat.

Once we've identified the right wood for the right purpose, it needs to be cut into logs less than 2 feet long so they can be split and stored.

How do you cut up a fallen tree? You need multiple tools, including a chain saw for bucking, a forest axe for limbing, a timber jack to lift the tree off the ground, a sawbuck to trim the logs that are too long and a felling wedge to prevent the chain saw from getting pinched as logs are cut.*

If you have all these tools and the training to use them, you can reduce a fallen tree into firewood for splitting.

Then how do you split it? An engineer in West Bridgewater, Mass., custom built the SuperSplit, which uses flywheels instead of hydraulics. With this tool, I can split a cord of wood by myself in 30 minutes.

Then how do you store it? We use firewood brackets to create whatever size and shape storage we need, and then we cover it with a tarp.

Next, we have to transport it to the fireplace. We use a Vermont cart with flat-free tires and a log carrier.

When the chain saw chains are dull and covered with sap, how do you clean and sharpen them? We use a small tank of mineral spirits to soak the chains, and then scrub them with a stainless-steel brush. Then we use a commercial chain sharpener adjusted at precise angles to bring each chain back to factory specification. Then we use another tank to soak the chain in oil before storing it.

I mention all these process steps because to achieve maturity, we needed to identify each action to turn a fallen tree into a cozy fire. We had to implement the collection of technologies and training to do it rapidly and repeatedly.

In the Mayo Clinic Platform, we're developing mature processes for bringing on new partners, ingesting new kinds of data, launching new projects with existing joint ventures, evaluating AI algorithms and accelerating pilots with startups. Like the tree-to-fire procedure at Unity Farm Sanctuary, we’ll develop detailed use cases, appropriate vendors, training/staffing, toolsets and key performance indicators to support these Platform processes.

Maneesh Goyal, COO of Platform, notes that we need People, Processes and Products to be successful at Mayo Clinic Platform. My role as President of Mayo Clinic Platform and co-founder of Unity Farm Sanctuary is to support these concepts in both my professional and home life. 

*Products mentioned are not endorsements. The Sanctuary has no relationship with any of these companies.