Wednesday, March 29, 2017

Early Experiences with Ambient Listening Devices (Alexa and Google Home)

BIDMC has a long tradition of testing speculative technologies, with the notion that breakthroughs often require tolerance for failure.   For example, we’ve embraced blockchain in healthcare because we believe public ledgers hold promise for unifying medical records across institutions.

Over the past few months, we’ve developed healthcare applications for Alexa, Amazon’s ambient listening device, which combines natural language processing with easy-to-use application programming interfaces.    We have also tried Google Home.

Here’s our experience thus far.

1.  We’ve used Alexa in a pilot inpatient setting (not with real patients).   Here are the questions/use cases we’ve implemented, with back-end integration to our operational systems (a minimal handler sketch follows the list).

Alexa, ask BIDMC

What’s my room number
Who’s on my care team or List my care team
What is my diet or What can I eat
Call a nurse   or   I need a nurse  or Send in a nurse
Give me some inspiration  or Inspire me
I need spiritual care    or    Request spiritual care
I need a social worker   or   Request social work
What's my care plan for today  or What are my planned activities for today
Ok, thanks    or   Stop   or   You can stop
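
To make this concrete, here’s a minimal sketch of how one of these utterances could be handled as an AWS Lambda function behind a custom Alexa skill.  The intent names and the lookup_room helper are hypothetical stand-ins; our actual integration with the operational systems is more involved.

```python
# Minimal sketch of an AWS Lambda handler for a custom Alexa skill.
# The intent names and lookup_room() are hypothetical placeholders;
# a real deployment would query the hospital's operational systems.

def lookup_room(device_id):
    # Placeholder: map the Echo device in a room to that room's number.
    return "7 West, room 712"

def build_response(speech_text, end_session=True):
    """Wrap plain text in the Alexa Skills Kit response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context):
    request = event["request"]
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "RoomNumberIntent":
            device_id = event["context"]["System"]["device"]["deviceId"]
            return build_response("You are in " + lookup_room(device_id))
        if intent == "CallNurseIntent":
            # A real handler would page the unit's nursing station here.
            return build_response("A nurse has been notified.")
    return build_response("Sorry, I didn't understand that request.")
```

Note that Alexa does the speech recognition and intent matching in the cloud; the skill only ever sees structured intents, which keeps the back-end logic simple.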

2.  Sentiment analysis

What is sentiment analysis?  It is the process of computationally identifying and categorizing the opinions expressed in a piece of text, especially in order to determine whether the author’s attitude toward a particular topic, product, etc., is positive, negative, or neutral.

We are beginning to use sentiment analysis on social media mentions of BIDMC. We have done a pilot to identify BIDMC mentions on Twitter, and with Google opening up its sentiment analysis API at its conference a few weeks ago, we are working on ingesting those feeds. Conceptually, the same approaches can work on Alexa to analyze mood and urgency.   We will try it in an attempt to communicate emotion as well as text in the ambient listening workflow.
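
As an illustration, here’s a minimal sketch of scoring a single mention against Google’s Cloud Natural Language sentiment endpoint.  The API key is a placeholder, and a production pipeline ingesting Twitter feeds would add credential management, batching, and error handling.

```python
# Minimal sketch: scoring one piece of text with the Google Cloud
# Natural Language REST API. API_KEY is a placeholder.
import requests

ENDPOINT = "https://language.googleapis.com/v1/documents:analyzeSentiment"
API_KEY = "YOUR_API_KEY"  # placeholder credential

def score_sentiment(text):
    """Return (score, magnitude) for a piece of text."""
    body = {"document": {"type": "PLAIN_TEXT", "content": text}}
    resp = requests.post(ENDPOINT, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    sentiment = resp.json()["documentSentiment"]
    return sentiment["score"], sentiment["magnitude"]

if __name__ == "__main__":
    score, magnitude = score_sentiment("The care team at BIDMC was wonderful.")
    print(score, magnitude)
```

The score runs from -1.0 (negative) to +1.0 (positive), while magnitude reflects the overall strength of emotion in the text, which can help flag urgent or strongly negative mentions.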

3.  HIPAA Compliance

Alexa and Google Home are not "HIPAA compliant," i.e., neither Amazon nor Google will currently sign business associate agreements (BAAs) for ambient listening technologies.  Both organizations are working on policies and controls that would enable them to sign such agreements for their speech-driven products. Once we can sign BAAs, we’ll explore use cases like a surgeon asking for patient data without needing a browser/keypad.

In the meantime, we’re not using patient-identified data in ambient listening applications.   The questions above are anonymous: the 18 HIPAA identifiers (names, Social Security numbers, addresses, etc.) are not included in the data stream.
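
Although enforcement is mostly a matter of what data we wire into the skill in the first place, a simple guard on outbound speech can serve as a backstop.  The sketch below is illustrative only: real de-identification covers all 18 identifier categories, and these two regular expressions are simplistic examples.

```python
# Illustrative sketch only: a guard that keeps obvious identifiers out
# of the spoken response stream. The patterns cover just two of the 18
# HIPAA identifier categories, as examples.
import re

IDENTIFIER_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # Social Security numbers
    re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),   # US phone numbers
]

def assert_anonymous(speech_text):
    """Raise if a response about to be spoken contains an obvious identifier."""
    for pattern in IDENTIFIER_PATTERNS:
        if pattern.search(speech_text):
            raise ValueError("Response contains a potential HIPAA identifier")
    return speech_text
```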

We're exploring a few other use cases outside of HIPAA controls, such as querying knowledge bases - commonly asked questions delivered via an ambient listening infrastructure.

4.  Accuracy

We have not had any unexpected misunderstandings when parsing spoken language.  There is a famous YouTube video of a 3-year-old asking for "hickory dickory dock" and getting a pornographic response.    The only issue we’ve had is that Alexa can be sensitive to ambient voices, causing it to respond to an unasked question.

5.  Expanding the use cases to the outpatient scheduling domain

Amazon has offered Lex, a service for embedding natural language processing in mobile apps, which could support patient self-scheduling.  We hope to support a use case in which patients at home request appointments/referrals and interact entirely with Alexa rather than placing a phone call or visiting a website.
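
As a sketch of what that interaction could look like programmatically, here’s a hypothetical call to a Lex bot through boto3’s runtime client.  The bot name, alias, and utterance are illustrative assumptions; an actual bot would be defined in the Lex console, with fulfillment wired to our scheduling systems.

```python
# Minimal sketch: sending a scheduling utterance to an Amazon Lex bot
# via boto3. The bot name and alias are hypothetical.
import boto3

client = boto3.client("lex-runtime", region_name="us-east-1")

response = client.post_text(
    botName="AppointmentBot",        # hypothetical bot
    botAlias="prod",                 # hypothetical alias
    userId="patient-session-123",    # opaque session id, no PHI
    inputText="I need a follow-up appointment next Tuesday morning",
)

# Lex returns either the next prompt (slot elicitation) or a confirmation.
print(response["dialogState"])  # e.g. "ElicitSlot" or "ReadyForFulfillment"
print(response["message"])
```

Lex manages the multi-turn dialog itself, prompting for missing slots (date, time, clinic) until the request is ready to hand off for fulfillment.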

Thus far, we’ve been very impressed with the capabilities of these conversational services.   The web was our focus from 1996 to 2012.   Mobile has been our focus from 2012 to the present, what I call the post-web era.  I can imagine that by 2018 we’ll enter the post-mobile era and have conversational interfaces based on ambient listening devices in patient and provider locations.
