[Image: a home hub featuring icons of all the tasks it can assist with in a thought cloud]

Exploring Elder Care Robotics: Voice Assistants and Home Hubs

This article is part of a four-part series exploring how robotics is being developed for aging care and investigating the ethical implications. In our first article, we explored emotional companion robots, which soothe and comfort patients experiencing loneliness, depression, or diseases such as Alzheimer’s. Today, we look at voice assistants and home hubs: robots designed to coordinate and simplify daily tasks around the house.

What are Voice Assistants and Home Hubs?

Unlike other robots in this series, voice assistants and home hubs are probably already familiar to you. These devices, which include Amazon Echo, Google Home, Apple Siri, Samsung Ballie, and Nest, respond to human commands (voice, motion, or touch input) to complete tasks like preheating the oven, playing a podcast, or refilling a prescription. Many also incorporate artificial intelligence (AI) to learn household patterns and anticipate needs. However, unlike social robots (covered later in this series), voice assistants do not proactively engage with users unless programmed or commanded to do so.

Voice assistants and home hubs have experienced significant consumer adoption in the past several years: in 2019, 112 million Americans were expected to use a voice assistant at least monthly. These robots hold particular promise for aging individuals—many don’t want to depend on caretakers or loved ones, but do need help around the house. In a 2016 survey of older users, respondents preferred assistance from a device rather than a human in 28 out of 48 daily tasks, especially for household chores.

Older adults reported they would most prefer assistance from a robot over a human for housekeeping, laundry, medication reminders, new learning, and hobbies. (Smarr et al.)

Thanks to evolutions in digital health and policy, voice assistants and home hubs are increasingly offering healthcare-related support, such as scheduling doctor’s appointments, ordering medications, and even guiding patients through physical therapy exercises. While the future for robotic home healthcare assistance is bright, in this article we’ll walk through a few important ethical considerations.

Business, Healthcare Provider, or Caretaker?

In the American healthcare system, patients interact with a variety of players: clinicians, insurers, pharmacies, and healthcare brands, to name a few. These players each have different patient relationships, involving different responsibilities, boundaries, and goals. 

In face-to-face interactions, patients have little difficulty understanding whether they are engaging with a clinician, insurer, or healthcare brand. In voice and home technology ecosystems, however, users move between corporate players’ “skills” seamlessly, with no clear indication when they switch from one player to another. These smooth transitions can confuse users, who may bring vague or misaligned expectations to an interaction, or who may not know which organization they are actually engaging with.

For example, if a patient asks their home hub for the weather report and hears a forecast along with a suggestion to take allergy medication that day, they may be confused about whether that recommendation came from their doctor, a pharmacy, a weather tracker, or a drug maker. 

Who’s Listening? When?

Most voice assistants and home hubs know when to engage by waiting for a designated word or motion. Once this signal is detected, most devices interpret the following command by recording it to a cloud system and analyzing it with AI methods such as natural language processing (NLP). Recent reporting revealed that Amazon, Google, and Apple also hired human contractors to review thousands of interactions in order to improve their AI programs. Thus, when patients speak to voice assistants and home hubs, it is unclear whether they are communicating only with the machine in front of them or, eventually, with a stranger.

Many users are concerned that their voice assistants and home hubs are collecting data from background activities or when the devices are mistakenly activated. According to a 2019 report, 10% of leaked Google Home voice clips had been recorded after devices incorrectly identified their “wake words” and began to listen.

Reintroducing Friction

Voice assistants and home hubs are often promoted as being frictionless—responding to commands, anticipating needs, and seamlessly transitioning from one task to the next. However, friction is a vital safeguard to patient autonomy and control. 

Technology players bringing healthcare to home robotics stand to offer huge patient value. However, they could benefit from adding more friction: pausing to announce themselves at each interaction by confirming patient identity, disclosing privacy settings, and clearly flagging any new updates or changes.

Adriana Krasniansky

Adriana Krasniansky is a graduate student at the Harvard Divinity School studying the ethical implications of new technologies in healthcare, and her research focuses on personal and societal relationships to devices and algorithms. More specifically, Adriana is interested in how technology can support and scale care for aging and disability populations. Adriana previously worked as a writer and consultant in the technology field, managing projects involving artificial intelligence and robotics. Her work has been featured in publications including The Atlantic, Quartz, and PSFK.
