Experts: Ethical Concerns Over AI Use in Medical Care Pressing but Answerable

Forthcoming report recommends development of 'AI Guardians'

A robot shakes salt over popcorn on March 8, 2017 at the Institute for Artificial Intelligence / Getty Images
December 17, 2017

As the technology develops, costs decline relative to human caregivers, and demographic trends create further demand for robotic assistance, artificial intelligence appears poised to expand its role in medical care, bringing with it ethical concerns about its use and its interaction with humans.

In Japan, where an aging population and a contracting workforce have created a shortage of human caregivers, robots are already an important part of providing medical assistance. As similar employment of AI systems spreads, some observers worry that robotic care raises issues of relational authenticity, privacy, and the potential for harmful execution of their programming. Other observers, however, believe the benefits of incorporating AI assistance in medical care outweigh the hypothetical costs, and that these ethical concerns can be addressed.

In a forthcoming report in Interaction Studies titled "The Ethics of Robotic Caregivers," Amitai and Oren Etzioni argue that the majority of such worries are not unique to robots, but exist for human care providers as well, and can be addressed through the development of what they call "AI Guardians," which would be "programs to interrogate, discover, supervise, audit, and guarantee the compliance of operational AI programs."

The Etzioni report argues AI research should be divided into two categories. What it calls the pursuit of "AI: the Mind" is work toward developing artificial brains that function autonomously, making human minds redundant; self-driving cars are an example. The other type can be called "AI: the Partner," and describes a program that operates in conjunction with human activity. It is this kind of AI that is generally under consideration in medical care, which eases some concerns because it guarantees a degree of human supervision.

Amitai Etzioni, the director of the Institute for Communitarian Policy Studies at the George Washington University, described the work of "AI: the Partner" as an extension of the division of labor.

"'AI: the Partner' is out to make it clear that there are some things that machinery and computers are much better at: memory, obviously, is one—the smallest computer can beat a human at memory," he said. "But at some things humans are much better, like touch. ...The division of labor: What can we turn over to a computer and what can we leave in human hands?"

Caleb Watney, a technology policy associate for the R Street Institute, believes open-sourcing and transparency can protect privacy and safeguard AI.

Computers' capacity for memory raises privacy concerns, as robots and AI programs can collect enormous amounts of data about their patients. Watney argues this problem is comparable to data security in other sectors and can be addressed through standard security practices, such as encryption and automatic deletion, incentivized by making data collectors liable for breaches.

"If companies have no liability for having consumer data breached through a cyber attack then they have every incentive to just collect as much data as physically possible because you never know when it might be useful," he said. "But if companies have to weigh the additional benefit from collecting more data against the potential liability if that data gets hacked, then that should put in more of a balance there and cause companies to always question, should we really be collecting this data?"

The Etzioni report defines AI caregivers as "all AI-enriched programs that provide care and seem affective to those they care for." One concern among AI critics is that AI caregivers are necessarily inauthentic in their performances of sympathy and emotion for their patients. Etzioni dismisses that concern, arguing that such seeming affection is a necessary part of quality medical care and that the question of authenticity arises with human nurses and doctors as well.

"When you have a nurse aide come to you, for instance into your home, and say 'I care about you. Are you OK? Love you!'—she doesn't give a shit, she just gets paid to be nice. So, are we going to say, 'Well, OK, we should not accept nursing'? … the country will not be able to provide the kind of service that people need if I only employ other human beings."

Addressing concerns that AI systems could, in misguided pursuit of their mandate to provide medical care, control or harm vulnerable patients, Etzioni said, "The answer is much easier than most people think, and it is not often that I can say that: We have laws."

Just as laws and regulations set parameters for human behavior, what the report calls "AI Guardians" could enforce laws and regulations for AI caregivers, and identify and correct noncompliance. Etzioni pointed to self-driving cars and traffic laws to illustrate.

"We ask you to stop for stop signs," he said. "It's not an ethical dilemma. We are telling you to stop at stop signs and if you don't we will make you. So 90 percent of the behavior, in all the ways that we really care about, we legislate. The ethical decisions—if you ask me who makes them, who decides—the legislature decides. The legislature reflects our values."

The prospect of AI Guardians, Etzioni admits, raises the old question "who will guard the guardians?" There's no easy answer, he said, beyond relying on a human presence around AI programs.

"The best we can hope for is that all smart instruments will be outfitted with a readily locatable off-switch to grant ultimate control to human agents over both operational and oversight AI programs."

Watney argues another solution to guarding the guardians may be found in transparent and open-source AI programming and software. If the AI Guardians' inputs and functions are made public then in some sense, Watney said, "everyone watches the watcher."

Both Watney and Etzioni push back against what they see as excessive hand-wringing over adopting AI systems. Asked when these questions will become urgent, Etzioni said: "Yesterday! ...This is not next year or ten years from now, this is the here and now." Both men argue that AI meets real and pressing needs and that it is counterproductive to compare robot caregivers to a perfect model. Said Watney:

"Ideally we'd have a fully trained, very attentive doctor or nurse there with every single patient as they're dying to walk them through every stage of the process, but that's just impractical and it's not happening now, so if we can get closer to that world by using some partnership of AI and humans that seems like a more moral and more just system."