
While there has been a good deal of discussion about the use of AI in enterprises and the possibility of building ethical AI strategies, new research indicates that, even 10 years from today, it is unlikely that ethical AI design will be widely adopted.

The research, based on a survey of 602 technology innovators, business and policy leaders, researchers and activists conducted by Pew Research Center and Elon University, showed that a majority worried that AI development through 2030 will remain focused primarily on optimizing profits and social control, and that stakeholders will struggle to reach a consensus about ethics.

When asked whether AI systems being used by organizations will employ ethical principles focused primarily on the public good by 2030, 68% said they would not.

The research noted that for some respondents, “ethical” implies adopting AI in a manner that is transparent, responsible and accountable. For others, it means ensuring their use of AI remains consistent with laws, regulations, norms, customer expectations and organizational values. Ethical AI also promises to guard against the use of biased data or algorithms, providing assurance that automated decisions are justified and explainable.

The business context for ethical AI

For clarity, according to Vincent Müller in “Ethics of Artificial Intelligence and Robotics,” published in the Stanford Encyclopedia of Philosophy, the ethics of artificial intelligence is a branch of the ethics of technology specific to artificially intelligent systems. It is sometimes divided into a concern with the moral behavior of humans as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of machines, or machine ethics.

While ethical AI has a major role to play in the enterprise, context matters too, even at a tactical level, said Kimberly Nevala, a Seattle-based AI strategic advisor at SAS. In an industrial manufacturing application, for example, ethical AI might focus on safety and reliability. In consumer-facing industries or public services, equity and fairness take priority.

“Principles aside, enterprises are already — at one level or another — held responsible for the products and services they deliver,” she said. “Disagreements exist regarding whether existing standards are high enough but this doesn’t negate the fact they exist.”

Using AI to power or enable said products and services does not change an enterprise’s fundamental responsibilities. “AI alone doesn’t change the basis of the argument for ethical business practices or technology of any stripe,” she said. “However, the scope and scale at which AI solutions can create, reinforce and/or amplify negative outcomes and how these levers are encoded does require new, heightened governance and due diligence.”

Even so, the extent to which an enterprise believes its duty includes safeguarding or enhancing the public good rests squarely on its values. Enterprises that filter their view through the lens of legal and regulatory compliance alone will not embrace broader, socially minded principles in a vacuum.

The risk, she said, is that a narrow view today may run afoul of evolving regulations in the future. Beyond this, every enterprise operates within an implicit social contract. As external pressures mount, from mission-minded consumers to growing public awareness and social movements, enterprises will need to respond.

Ethical AI in practical terms

In practical terms, what does this mean? Ethical AI ensures that the AI initiatives of the organization or entity maintain human dignity and do not in any way cause harm to people. That encompasses many things: fairness, anti-weaponization and liability, as in the case of self-driving cars involved in accidents.

“There’s a lot of jargon out there but essentially this is what’s at its heart,” said Kevin Sahin, CEO and co-founder of France-based ScrapingBee. “Some organizations are already trying to address the problem of making the principles universal and easier to follow.”

At the enterprise level, companies can adopt what is called an ethical AI manifesto that their machine learning and artificial intelligence initiatives follow. Such guidance is not a bad thing in itself, but it differs from organization to organization.

“Making a profit is a priority but it’s also the people who help make the profit, and our shared humanity is something that needs to be conserved, be it in a personal or professional perspective,” he said.

Ethical AI example: healthcare

Even so, Dr. Trishan Panch, a professor at the Harvard School of Public Health and co-founder of Boston-based Wellframe, pointed out that the use of AI is fraught with ethical considerations and associated risks.

Take healthcare, where practitioners have been among the earliest adopters of AI, and where use cases in patient engagement, care delivery and population health are particularly prone to issues such as bias, failure to obtain appropriate patient consent and violations of data privacy.

“AI purveyors must proactively mitigate these risks or they will face significant backlash from clinicians, patients and policymakers,” he said.

Bias in society is reflected in historical health data and, when not corrected, can cause AI systems to make biased decisions about, for instance, who gets access to care management services.
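
To make this concrete, here is a minimal sketch of the kind of audit that can surface such skew; the data and column names are hypothetical, and real audits involve far more context and statistical care:

```python
# Minimal bias audit sketch (hypothetical data and column names).
# If historical referral labels encode bias, a model trained on them
# will reproduce skewed selection rates across groups.
import pandas as pd

# Hypothetical decision log: one row per patient, with the model's
# care-management referral decision and a self-reported group label.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "referred": [1,    1,    0,   0,   0,   1,   0,   1],
})

# Selection rate per group: the share of each group referred to care
# management. Large gaps are a signal to investigate the training data.
rates = decisions.groupby("group")["referred"].mean()
print(rates)

# Demographic-parity gap: difference between the highest and lowest rate.
print("parity gap:", rates.max() - rates.min())
```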

Dr. Panch pointed out that STAT research found that of 161 products cleared by the U.S. Food and Drug Administration (FDA) from 2012 to 2020, just seven reported the racial makeup of study populations and just 13 reported the gender split. This will change: The FDA is developing regulatory approaches to reduce bias and is proposing that firms monitor and periodically report on the real-world performance of their algorithms.

“Consequently, firms need to ensure that the choices they make — the customers and partners they work with, the composition of their data science teams (i.e., their diversity), and the data they collect — all contribute to minimizing bias. Some companies are already making such changes,” he added. 

For example, Google Health, which is working on AI to revolutionize breast cancer screening by promising improved performance with an almost 10-fold reduction in cost, is not only validating the algorithm’s performance in different clinical settings but is also making large investments to ensure that the algorithm performs equitably across different racial groups.

What does the future hold for ethical AI?

The adoption of ethical AI principles is essential for the healthy development of all AI-driven technologies, and self-regulation by the industry will be much more effective than any legislative effort, said Xuhui Shao, managing partner of Los Altos, Calif.-based Tsingyuan Ventures, a technology-focused venture fund that invests in software and life sciences startups.

He cited the example of AI-driven redlining, or negative decisions based on discriminatory factors, which can be hard to detect, even by the operators themselves. Such AI-based decisions need to be made explainable and continuously monitored. Data is the fuel of all AI systems, and the collection and usage of consumer data need to be carefully tracked, especially in large-scale commercial systems.
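
One way such continuous monitoring could look in practice is a standing check on logged decisions, for example the “four-fifths” disparate impact heuristic used in U.S. employment law. The sketch below is hypothetical, not any vendor’s API, and the function name and data are illustrative only:

```python
# Hypothetical monitoring check for AI-driven decisions (a sketch, not
# a production system). It applies the "four-fifths" disparate impact
# heuristic to a window of logged decisions per group.
from collections import defaultdict

def disparate_impact_alert(decision_log, threshold=0.8):
    """decision_log: iterable of (group, approved) pairs.
    Returns True if the lowest group approval rate falls below
    `threshold` times the highest rate."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decision_log:
        total[group] += 1
        approved[group] += int(ok)
    rates = {g: approved[g] / total[g] for g in total}
    lo, hi = min(rates.values()), max(rates.values())
    return hi > 0 and (lo / hi) < threshold

# Example: group "B" is approved far less often than group "A".
log = [("A", True), ("A", True), ("A", False),
       ("B", False), ("B", False), ("B", True)]
print(disparate_impact_alert(log))  # True -> flag for human review
```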

Shao believes the future of ethical AI lies in the application of AI itself.

“A new class of general purpose adversarial neural networks can be built to examine and discriminate against other AI systems to produce human-understood interpretations and to check for hidden biases or flaws,” he said. “As more consumers and businesses become aware of the importance of ethical AI, these types of safeguards will become more prevalent by 2030.”
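
A far simpler stand-in for the adversarial auditors Shao describes is a probe model that tries to predict a protected attribute from another system’s outputs: if the probe does much better than chance, those outputs leak the attribute. The sketch below uses simulated data and a logistic regression probe rather than a neural network; everything in it is hypothetical:

```python
# Simplified stand-in for an adversarial bias audit (hypothetical data).
# A probe tries to recover a protected attribute from another model's
# scores; above-chance accuracy suggests the scores encode that attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000
protected = rng.integers(0, 2, n)            # hypothetical group label
# Simulated scores from the system under audit, shifted per group --
# i.e., a hidden correlation with the protected attribute.
scores = rng.normal(loc=protected * 0.5, scale=1.0, size=n)

probe = LogisticRegression()
acc = cross_val_score(probe, scores.reshape(-1, 1), protected, cv=5).mean()
print(f"probe accuracy: {acc:.2f}")  # well above 0.5 -> hidden bias signal
```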

David Roe