Artificial intelligence has progressed to the point where machines are capable of performing tasks that people once thought could only be done by humans. This rise in the power of AI highlights the importance of ethics in AI – we must use this powerful technology in responsible ways.
For example, modern artificial intelligence is capable of understanding and creating art, carrying on intelligent conversations, identifying objects by sight, learning from past experience, and making autonomous decisions.
Organizations have deployed AI to accomplish a wide range of tasks. AI creates personalized recommendations for online shoppers, determines the content social media users see, makes health care decisions, determines which applicants to hire, drives vehicles, recognizes faces, and much more.
Given the countless business opportunities that this new technology brings, the global market for AI technologies has exploded over the past decade and is continuing to grow. Gartner estimates that customers worldwide will spend $65.2 billion on AI software in 2022, an increase of 21.3 percent from the previous year.
While AI technology is new and exciting, with the potential to benefit both businesses and humanity as a whole, it also gives rise to many unique ethical challenges.
Examples of Unethical AI
There is no shortage of news stories about unethical AI.
In one of the better-known cases, Amazon used an AI hiring tool that discriminated against women. The software was designed to screen candidates’ resumes and select those most qualified for the position. However, because the AI had learned from a biased data set made up primarily of male resumes, it was much less likely to select female candidates. Amazon eventually stopped using the program.
In another example, a widely used algorithm for assessing health care needs systematically rated Black patients’ need for care as lower than that of white patients. That was problematic because hospitals and insurance companies were using this risk assessment to determine which patients got access to a special high-risk care management program. In this case, the problem occurred because the AI model used health care costs as a proxy for health care need, without accounting for disparities in how white and Black populations access health care.
But discrimination isn’t the only potential problem with AI systems. In one of the earliest examples of problematic AI, Microsoft released a Twitter chatbot called Tay that began sending racist tweets in less than 24 hours.
And a host of other less widely publicized stories have raised concerns about AI projects that seemed transphobic, that violated individuals’ privacy, or, in the case of autonomous vehicles and weapons research, that put human lives at risk.
Challenges of AI Ethics
Despite the many news stories highlighting concerns related to AI ethics, most organizations haven’t yet gotten the message that they need to be considering these issues. The NewVantage Partners 2022 Data and AI Leadership Executive Survey found that while 91 percent of organizations are investing in AI initiatives, less than half (44 percent) said they had well-established ethics policies and practices in place. In addition, only 22 percent said that industry has done enough to address data and AI ethics.
So what are the key challenges that organizations should be addressing?
Bias
As we have already seen, perhaps the biggest challenge to building ethical AI is bias. In addition to the cases already mentioned, the criminal justice tool known as COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) is one egregious example. The tool was designed to predict a defendant’s risk of committing another crime in the future. Courts and probation and parole officials then used that information to determine appropriate criminal sentences or to decide who received probation or parole.
However, COMPAS tended to discriminate against Black people. According to ProPublica, “Even when controlling for prior crimes, future recidivism, age, and gender, Black defendants were 45% more likely to be assigned higher risk scores than white defendants.” In actuality, Black and white defendants reoffend at about the same rate — 59 percent. But Black defendants were receiving much longer sentences and were less likely to receive probation or parole because of AI bias.
Because humans created AI and AI relies on data provided by humans, it may be inevitable that some human bias will make its way into AI systems. However, there are some obvious steps that should be taken to mitigate AI bias.
And while situations like the COMPAS discrimination are horrifying, some argue that on the whole, AI is less prone to bias than humans. Difficult questions remain about how much bias must be eliminated before an AI system can be trusted to make decisions. Is it sufficient to create a system that is less biased than humans, or should we require that it come closer to having no bias at all?
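One starting point for answering that question is simply to measure the disparity. Below is a minimal sketch, not taken from any particular fairness framework; the column names, toy data, and the four-fifths threshold are illustrative assumptions, showing how a team might compare a model’s positive-decision rates across demographic groups:

```python
# Minimal sketch: compare a model's approval rates across demographic groups.
# Column names ("group", "approved"), the toy data, and the 0.8 threshold
# (the common "four-fifths" rule of thumb) are illustrative assumptions.
import pandas as pd

def selection_rates(decisions: pd.DataFrame) -> pd.Series:
    """Share of positive decisions per demographic group."""
    return decisions.groupby("group")["approved"].mean()

def disparate_impact_ratio(decisions: pd.DataFrame) -> float:
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions)
    return rates.min() / rates.max()

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

ratio = disparate_impact_ratio(decisions)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("Warning: selection rates differ substantially across groups.")
```

Real-world bias audits go much further, examining multiple metrics, intersectional groups, and statistical significance, but even a simple check like this can flag problems before a model reaches production.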
Data Privacy
Another huge issue in AI ethics is data privacy and surveillance. With the rise of the internet and digital technologies, people now leave behind a trail of data that corporations and governments can access.
In many cases, advertising and social media companies have collected and sold data without consumers’ consent. Even when it is done legally, this collection and use of personal data is ethically dubious. Often, people are unaware of the extent to which this is going on and would likely be troubled by it if they were better informed.
AI exacerbates all these issues because it makes it easier to collect personal data and use it to manipulate people. In some instances, that manipulation is fairly benign, such as steering viewers to movies and TV programs that they might like because they have watched something similar. But the lines get blurrier when the AI is using personal data to manipulate customers into buying products. And in other cases, algorithms might be using personal data to sway people’s political beliefs or even convince them to believe things that aren’t true.
Additionally, facial recognition AI software makes it possible to gather extensive information about people simply by analyzing photos of them. Governments are wrestling with the question of when people have a right to expect privacy in public. A few countries have decided that widespread facial recognition is acceptable, while others outlaw it in all cases. Most draw the line somewhere in the middle.
Privacy and surveillance concerns present obvious ethical challenges for which there is no easy solution. At a minimum, organizations need to make sure they are complying with all relevant legislation and upholding industry standards. But leaders also need to honestly consider whether their AI tools might be violating people’s privacy.
Transparency
As already mentioned, AI systems often help make important choices that greatly affect people’s lives, including hiring, medical, and criminal justice decisions. Because the stakes are so high, people should be able to understand why a particular AI system came to the conclusion that it did. However, the rationale for determinations made by AI is often hidden from the people who are affected.
There are several reasons for this. First, the algorithms that AI systems use to make decisions are often protected company secrets that organizations don’t want rival companies to discover.
Second, the AI algorithms are sometimes too complicated for non-experts to easily understand.
Finally, and perhaps most challenging, an AI system’s decisions are often not transparent to anyone, not even to the people who designed it. Deep learning, in particular, can produce models whose internal logic even their creators cannot fully interpret.
Organizational leaders need to ask themselves whether they are comfortable with “black box” systems having such a large role in important decisions. Increasingly, the public is growing uncomfortable with opaque AI systems and demanding more transparency. And as a result, many organizations are looking for ways to bring more traceability and governance to their artificial intelligence tools.
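There are practical tools that can help. As one hedged illustration, scikit-learn’s permutation importance reports how much a model’s accuracy drops when each input feature is shuffled, which does not fully open the black box but does give stakeholders something concrete to review. The dataset and model below are placeholders chosen only so the example runs end to end:

```python
# Minimal sketch: inspect an otherwise opaque model with permutation importance.
# The dataset and model are illustrative placeholders, not a recommendation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each feature is shuffled?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

Techniques like this, alongside model documentation and audit trails, are among the ways teams try to add traceability to systems they cannot fully explain.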
Liability and Accountability
Organizations also need to worry about liability and accountability.
The fact that AI systems are capable of acting autonomously raises important questions about who should be held responsible when something goes wrong, for example, when autonomous vehicles cause accidents or even deaths.
In most cases, when a defect causes an accident, the manufacturer is held responsible for the accident and required to pay the appropriate legal penalty. However, in the case of autonomous systems like self-driving cars that make their own decisions, legal systems have significant gaps. It is unclear when the manufacturer is to be held responsible in such cases.
Similar difficulties arise when AI is used to make health care recommendations. If the AI makes the wrong recommendation, should its manufacturer be held responsible? Or does the practitioner bear some responsibility for double-checking that the AI is correct?
Legislatures and courts are still working out the answers to many questions like these.
Self-Awareness
Finally, some experts say that AI could someday achieve self-awareness. This could potentially imply that an AI system would have rights and moral standing similar to those of humans.
This may seem like a far-fetched scenario possible only in science fiction, but given the pace at which AI technology is progressing, it is a real possibility. AI has already become able to do things that were once thought impossible.
If this were to happen, humans could have significant ethical obligations regarding the way they treat AI. Would it be wrong to force an AI to accomplish the tasks that it was designed to do? Would we be obligated to give an AI a choice about whether or how it was going to execute a command? And could we ever potentially be in danger from an AI?
Key Steps for Improving Your Organization’s AI Ethics
The ethical challenges surrounding AI are tremendously difficult and complex and will not be solved overnight. However, organizations can take several practical steps toward improving their AI ethics:
- Build awareness of AI ethics within your organization. Most people have either no familiarity or only a passing familiarity with these issues. A good first step is to start talking about ethical challenges and sharing articles that bring up important considerations.
- Set specific goals and standards for improving AI ethics. Many of these problems will never completely go away, but it is useful to have a standard that AI systems must meet. For example, organizations should decide how much less biased than a human decision-maker an AI system must be before it is used for important decisions. And they need clear policies and procedures for ensuring that AI tools meet those standards before entering production (a simple illustrative version of such a check appears after this list).
- Create incentives for implementing ethical AI. Employees need to be commended for bringing up ethical considerations rather than rushing AI into production without checking for bias, privacy, or transparency concerns. Similarly, they need to know that they will be held accountable for any unethical use of AI.
- Create an AI ethics task force. The field of AI is progressing at a rapid pace. Your organization needs to have a dedicated team that is keeping up with the changing landscape. And this team needs to be cross-functional with representatives from data science, legal, management, and the functional areas where AI is in use. The team can help evaluate the use of AI and make recommendations on policies and procedures.
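To make the standards item above concrete, here is a purely illustrative sketch of what a pre-production “ethics gate” might look like in code. The metric names, thresholds, and fields are assumptions that an organization would replace with its own policies:

```python
# Minimal sketch of a pre-production "ethics gate": before a model ships, it
# must clear the standards the organization has set. The fields and the 0.8
# threshold are illustrative assumptions, not an industry standard.
from dataclasses import dataclass

@dataclass
class ModelReleaseCheck:
    disparate_impact_ratio: float   # lowest group selection rate / highest
    explanation_documented: bool    # can decisions be explained to those affected?
    privacy_review_passed: bool     # data collection and retention reviewed
    accountable_owner: str          # named person responsible for the system

    def approved(self) -> bool:
        return (
            self.disparate_impact_ratio >= 0.8
            and self.explanation_documented
            and self.privacy_review_passed
            and bool(self.accountable_owner)
        )

check = ModelReleaseCheck(
    disparate_impact_ratio=0.72,
    explanation_documented=True,
    privacy_review_passed=True,
    accountable_owner="Jane Doe, Head of Data Science",
)
print("Cleared for production" if check.approved() else "Blocked: standards not met")
```

A checklist like this is only as good as the review process behind it, but writing the standards down in an enforceable form makes it much harder to rush an unvetted model into production.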
AI offers tremendous potential benefits for both organizations and their customers. But implementing AI technology also carries the responsibility to make sure that the AI in use meets ethical standards.