If you Google “care bots”, you’ll see an army of robot butlers and nurses, taking vital signs in hospitals, handing red roses to patients, serving juice to the elderly. For the most part these are just sci-fi fantasies. The care bots that already exist come in a different guise.
These care bots look less like robots and more like invisible pieces of code, webcams and algorithms. They can control who gets what test at the doctor's office or how many care hours a person on Medicaid receives. And they're everywhere. Increasingly, human caregivers work through and alongside automated systems that issue recommendations, manage and surveil their labor, and allocate resources.
They are emerging because the US has chronically underinvested in care infrastructure, relying heavily on informal family support and an industry sustained by poorly paid workers – largely immigrants and women of color. These workers earn a median annual salary of $25,000, and nearly a quarter of the workforce lives below the federal poverty line. Yet demand for their labor is set to soar. In the United States, more than 50 million people are over the age of 65, and that number is expected to nearly double by 2060. The question looms: who will care for them?
There is a growing faith that tech can fill this gap by rapidly building care systems at scale with the help of artificial intelligence and remote monitoring. Exhausted and understaffed nursing home workers could use sensors and webcams to keep tabs on residents' health and well-being. The growing "AgeTech" industry could help seniors age in place, in the comfort of their own homes.
As the Guardian reports today, for example, a company called CarePredict has produced a watch-like device that alerts carers if repetitive eating motions are not detected as expected, and one of its patents notes that it can infer whether someone is "using the toilet". Another firm has created technology that tracks when someone falls asleep and whether they have bathed.
Artificial intelligence (AI) refers to computer systems that do things that normally require human intelligence. While the holy grail of AI is a computer system indistinguishable from a human mind, several forms of specialized but limited AI are already part of our everyday lives. AI may be used with cameras to identify someone by their face, to power virtual companions, and to determine whether a patient is at high risk of disease.
AI shouldn't be confused with other kinds of algorithms. The simplest definition of an algorithm is a series of instructions needed to complete a task. A thermostat in your home, for example, is equipped with sensors to detect temperature and instructions to turn the heat on or off as needed. Its rules are fixed in advance and never change, which is why it is not artificial intelligence.
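To make the distinction concrete, here is a minimal sketch, in Python, of the fixed rules a thermostat might follow (the temperature thresholds are invented for illustration):

```python
# A thermostat's "algorithm": a fixed list of instructions.
# The thresholds below are made-up values for illustration.
TARGET_TEMP = 20.0   # desired temperature in Celsius
TOLERANCE = 0.5      # allowed drift before acting

def thermostat_step(current_temp: float, heater_on: bool) -> bool:
    """Decide whether the heater should be on, given one sensor reading."""
    if current_temp < TARGET_TEMP - TOLERANCE:
        return True    # too cold: turn the heater on
    if current_temp > TARGET_TEMP + TOLERANCE:
        return False   # too warm: turn the heater off
    return heater_on   # within tolerance: leave it as it is

# The rules never change, no matter how many readings the device sees.
print(thermostat_step(18.7, heater_on=False))  # True  (start heating)
print(thermostat_step(21.2, heater_on=True))   # False (stop heating)
```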
The rollout of AI today has been made possible by decades of research on topics including computer vision, which enables computers to perceive and interpret the visual world; natural language processing, which allows them to interpret human language; and machine learning, a way for computers to improve as they encounter new data.
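A machine-learning system, by contrast with the thermostat, writes its own rules from data. In the minimal sketch below (plain Python, with a toy dataset invented for illustration, where the hidden relationship is simply y = 2x), the program's predictions improve as it works through more examples:

```python
# A minimal machine-learning sketch: the program improves with data.
# It learns a weight w so that prediction = w * x, from toy examples
# invented for illustration (the hidden relationship here is y = 2x).
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (4.0, 8.0)]

w = 0.0                # the model starts out knowing nothing
learning_rate = 0.05

for epoch in range(50):                 # revisit the data a few times
    for x, y in examples:
        error = w * x - y               # how wrong is the current guess?
        w -= learning_rate * error * x  # nudge w to shrink the error

print(round(w, 3))  # prints 2.0: the rule was learned, not hand-written
```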
AI allows us to automate tasks, gather insights from huge datasets, and complement human expertise. But a rich body of scholarship has also begun to document its pitfalls. For example, automated systems are often trained on huge troves of historical digital data. As many widely publicized cases show, these datasets often reflect past racial disparities, which AI systems learn from and replicate.
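That dynamic is easy to demonstrate. The deliberately simplified sketch below uses invented data – not any real system – in which one group was historically allotted fewer care hours at the same level of need; a model fitted to that history hands the same disparity to new applicants:

```python
# Synthetic illustration of bias replication: all numbers are invented.
# Each historical record is (need_score, group, approved_hours).
# In this made-up history, group "B" was consistently allotted
# fewer hours than group "A" at the same level of need.
history = [
    (5, "A", 20), (5, "B", 12),
    (8, "A", 32), (8, "B", 20),
    (3, "A", 12), (3, "B", 7),
]

def predict_hours(need: float, group: str) -> float:
    """Predict hours by averaging each group's historical hours-per-need.

    This mimics what a statistical model does: reproduce whatever
    pattern - fair or unfair - its training data contains.
    """
    rates = [hours / n for n, g, hours in history if g == group]
    return need * sum(rates) / len(rates)

# Two new applicants with identical needs get unequal allocations:
print(round(predict_hours(6, "A"), 1))  # 24.0
print(round(predict_hours(6, "B"), 1))  # 14.5
```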
Moreover, some of these systems are difficult for outsiders to interpret due to an intentional lack of transparency or the use of genuinely complex methods.
Some uses of care tech are valid and valuable. But these tools can also carry hidden human costs.
Automated decision-making and AI can undermine the autonomy of the very people these systems are intended to help. In-home cameras, facial recognition systems, wearable movement trackers and risk prediction models can lead to elderly and disabled people feeling forced to turn their houses into nursing homes. This undercuts the focus on dignity and self-determination central to independent living and community-based care.
Automated decision systems can also reinforce policies that treat the poor, the elderly, the disabled, the immunocompromised and communities of color as disposable. Within healthcare, technology is increasingly used to screen patients, direct nurses' attention and support clinical judgments. But these systems often reproduce – and even worsen – bias, because the data they use reflect inequities already embedded in healthcare. For example, Ziad Obermeyer and his colleagues reported in Science in 2019 that a system used to allocate healthcare to 200 million people a year in hospitals across America dramatically underestimated the medical needs of African Americans: it used past healthcare spending as a proxy for medical need, and because less is historically spent on Black patients at the same level of sickness, the algorithm concluded they were healthier than they really were.
In some states, governments have adopted automated decision-making tools to assess eligibility for Medicaid services, often with little public debate and scant transparency about how decisions are made. For example, an algorithm in Arkansas was intended to distribute hours of care more fairly among people receiving home- and community-based services. But it drew a wave of scrutiny for drastically cutting hours for people who rely on personal care assistants for basic activities of daily living such as bathing, eating and going to the bathroom.
Surveillance in the name of care raises uneasy questions about the privacy and autonomy of those who need support. Technologies like Electronic Visit Verification (EVV) were introduced to monitor the provision of care inside homes using features like GPS location tracking, but they have left disabled and elderly service recipients and their workers feeling as if they have been fitted with an ankle monitor.
Many efforts to build care bots are motivated by a genuine desire to mend fissures in a strained and fragmented system. The devastation wrought by the Covid pandemic made our need for better care clear, not just in hospitals and clinics, but in our homes, schools, and streets. As National Domestic Workers Alliance director Ai-jen Poo has urged us to acknowledge, the care industry was a “house of cards on the point of collapse” long before the pandemic.
The pandemic and decades of grassroots organizing have pushed the Biden administration to focus on investing in care jobs, prompting a new public conversation about care as critical public infrastructure. The Biden plan proposes investing $400bn to provide seniors with healthcare and personal care services at home. While the plan usefully puts significant public investment at the heart of a revitalized care system, it does not grapple with the thornier issues – surveillance, eroding autonomy and bias – that accompany the government's inevitable reliance on technologies of care management.
The care bots are already here. But their incursions don’t have to lead to techno-dystopia. Our future visions for a caring society must be built on a foundation of justice and equity, dignity and autonomy, not just efficiency and scale. The most essential aspects of caring for one another – presence, compassion, connection – are not always easy, or even possible, to measure. The rise of the care bots risks creating a system where we only value the parts of care that can be turned into data.
Alexandra Mateescu is a researcher at Data & Society, working on issues around the intersection of labor, care and technology. Virginia Eubanks is a political scientist at the University at Albany and author of Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor
Automating Care is our new series on the rise of AI in caregiving