“Good morning, George,” is the greeting for this 86-year-old as he wakes up to his voice-assisted device reporting on the weather and the daily news. The coffee is made in the kitchen, the curtains drawn, and the lights are off. His voice assistant reminds him to take his pills and complete his morning exercises. Then his tablet rings with a video call from his daughter and grandkids checking in. Now, time to visit the doctor, online. This is a great, independent life powered by technology.
Today, older adults comprise over 16.5% of the U.S. population, and by the year 2050 that percentage is expected to jump to over 21%. Not only is the U.S. well on its way to becoming a silver society, but the 65-plus age demographic is rising globally as well. Technology is playing a critical role in improving the health and general well-being of the senior community around the world.
A primary game-changer in this tech evolution is artificial intelligence (AI). AI technology is behind so many of the devices and software assisting older adults. It offers a wide range of opportunities: to extend the years of living independently, to improve health, to modify the home, and to transform transportation to meet aging needs. All of this, we hope, equals aging well.
As we know, bias exists in certain aspects of all of our lives, but you might be surprised that in today’s tech world there is a deeply embedded bias that is not as obvious. This bias is a product of the AI that now drives so many of our devices and software.
Machine learning, the technique behind much of today’s AI, relies on a series of algorithms that access data and then automatically learn and improve from experience without being explicitly programmed. Algorithms, which basically just provide a series of instructions in computing, have their benefits for AI, helping to improve and enrich machine learning. However, algorithmic bias can occur within an AI system and can result in discrimination against one or more groups. “Any bias in these algorithms could negatively impact people or groups of people, including traditionally marginalized groups.”
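To make the "bias in, bias out" idea concrete, here is a minimal sketch in Python. The data and decision rule are entirely hypothetical: a toy model "learns" loan-approval rates from skewed historical records and, with no ability to question the data, simply reproduces the skew for each group.

```python
# Hypothetical, deliberately skewed historical records: (age_group, approved).
historical_loans = [
    ("under_65", True), ("under_65", True), ("under_65", True),
    ("under_65", False),
    ("65_plus", False), ("65_plus", False), ("65_plus", False),
    ("65_plus", True),
]

def learn_approval_rates(records):
    """Estimate each group's approval rate from past decisions."""
    totals, approvals = {}, {}
    for group, approved in records:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def predict(group, rates, threshold=0.5):
    """Approve only if the group's historical rate clears the threshold.
    The model never questions the data; it just replicates it."""
    return rates[group] >= threshold

rates = learn_approval_rates(historical_loans)
print(predict("under_65", rates))  # True  -- the old bias carries forward
print(predict("65_plus", rates))   # False -- older applicants keep losing
```

Nothing in this sketch is malicious; the discrimination emerges purely because the training data was skewed, which is exactly the concern for older adults.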
While algorithms can improve the consumer experience, there are also concerns that such bias can discriminate against groups and even potentially expose personal or sensitive information.
Algorithmic bias takes several forms, for example, racial bias, age discrimination, and gender bias. For older adults, there are concerns that algorithmic bias could have discriminatory impacts in health care, housing, employment, and banking and finance. “Machines, like humans, are guided by data and experience.” If that data or experience is mistaken or skewed, a biased decision can be made, whether that decision is made by a human or a machine. The implications for older individuals could include misinformation and discriminatory practices.
These AI-enabled devices, utilizing algorithms to continually improve based on the data gathered and analyzed, cannot think critically or question specific results. They simply continue to replicate the patterns in the data they receive, based on their algorithmic coding. Do they create better services, or are they discriminatory and unsettling? Is this view of innovation exciting or alarming?
To mitigate the harms of algorithmic bias, Members of Congress have introduced federal legislation, the Algorithmic Accountability Act. According to one of the bill’s key sponsors, Senator Wyden (D-OR), the Act would require “companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.” The Act would give the Federal Trade Commission (FTC) the authority to issue and enforce the necessary regulations. Until federal protections are in place, some have called for a self-regulatory Bill of Rights to govern algorithmic bias and protect consumers. A key protection of this Bill of Rights is “transparency”: to know when an algorithm is making a decision about us, what factors are being used, and how those factors are weighted toward conclusions, as well as to allow a consumer to “consent” to any AI application that could use sensitive data or have a material impact.
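The transparency idea above can be sketched in code. This is a minimal, hypothetical illustration (the factors, weights, and threshold are invented for the example): a scoring function that, alongside its decision, reports which factors it used and how much each one was weighted toward the conclusion, so a consumer could inspect them.

```python
# Hypothetical factor weights for a simple linear scoring model.
WEIGHTS = {"income": 0.5, "credit_history": 0.4, "age": 0.1}

def score_with_explanation(applicant, threshold=0.5):
    """Return a decision plus the per-factor contributions behind it,
    so the basis for the conclusion is visible rather than hidden."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    return {
        "decision": total >= threshold,
        "score": total,
        "factors": contributions,  # what was used, and how it was weighted
    }

result = score_with_explanation(
    {"income": 0.8, "credit_history": 0.9, "age": 0.2}
)
print(result["decision"])  # True
print(result["factors"])   # each factor's weighted contribution
```

Real transparency obligations would go much further, but even this toy version shows the shift: the decision arrives with its reasoning attached instead of as an unexplained verdict.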
While the algorithms that machine learning relies upon to improve and serve consumers are critical for AI to function, it is also important to be aware of the unintended moral, ethical, and other consequences of algorithmic bias. Even when discriminatory results are inadvertent, they need to be mitigated to protect consumers from unintended harm. Exactly how to accomplish this task is the challenge, but it is critically important to address these concerns and reduce the potential negative consequences of artificial intelligence, so that consumers can receive the greatest benefits from innovation today and tomorrow.