The rise of AI and the risks of relinquishing responsibility


Digital technology continues to transform our lives. We are becoming ever more reliant on the benefits these advancements deliver, and their impact on our day-to-day routines is growing rapidly. Artificial Intelligence, or AI, is one such area, becoming more pervasive both for us as individuals and at a wider societal level.

The opportunities presented by AI are far-reaching for practically every sector of our economy. AI has the potential to transform lives and even influence the future of humanity, from improving supply-chain efficiency to increasing the accuracy of cancer screening and dramatically accelerating medical research.

How much automation is too much?

But are we in danger of becoming over-reliant on automation? In Arizona in 2018, a pedestrian was struck and killed by an autonomous car in what is believed to be the first pedestrian death associated with self-driving technology.

This raises issues of trust and transparency.

AI cannot infer additional context if the information is not present in the data it is fed; after all, AI is only as good as the available data. So how much automation is too much? And to what extent are we comfortable relinquishing decision-making, and with it a measure of responsibility and control?

Consider the use of AI within the defence sector, for instance. Its potential applications go far beyond the control of autonomous weapons, stretching into diagnostics, cybersecurity, supply-chain logistics and asset maintenance, to name just a few. But given the magnitude of the decisions being made and their potential consequences, the question of responsibility and control is magnified even further.

A thorough knowledge of the critical decision-making process, and a clear understanding of the implications further down the chain of command, is crucial if AI is to play its part effectively in this military context. It is equally critical to be able to trust the data sources and the resulting information that informs these decisions.

We also have to establish what level of human control we are comfortable giving up and what must be retained. For instance, how much control should be relinquished over the firing of autonomous weapons?

Ethics framework delivers much-needed governance

Introducing an AI ethics framework has the potential to provide that much-needed governance layer across the fuzzy boundary between human and AI system interaction. Central to any AI ethics framework should be the principle of encouraging responsible innovation through good AI governance practices and project workflow processes.

The IEEE is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Its global mission is “to ensure every stakeholder involved in the design and development of autonomous and intelligent systems is educated, trained, and empowered to prioritize ethical considerations so that these technologies are advanced for the benefit of humanity.”

An ethical framework for AI should also encourage diversity in data and in teams, to prevent bias and achieve more rounded outcomes across the intended use cases. The goal is to create training datasets comprehensive enough to cover the full range of scenarios and users an AI system will encounter. Likewise, the framework should strive to draw on individuals with varying skills and backgrounds.
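
By way of illustration, here is a minimal sketch of how a team might audit group representation in a training set. It assumes pandas; the file name and the 'age_group' and 'region' columns are hypothetical, not part of any framework described here.

    import pandas as pd

    # Hypothetical training data; file and column names are illustrative only.
    df = pd.read_csv("training_data.csv")

    # Check how well each user group is represented in the training set.
    for column in ["age_group", "region"]:
        shares = df[column].value_counts(normalize=True)
        print(f"Representation by {column}:\n{shares}\n")
        # Flag any group making up less than 5% of the data for manual review.
        under = shares[shares < 0.05]
        if not under.empty:
            print(f"Under-represented {column} values: {list(under.index)}")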

As a starting point to mitigate unforeseen consequences in the rollout of AI capabilities, we should seek to establish the necessary building blocks around four key development questions for ethical AI use:

Q1: Am I using AI for the right reasons?

Used ethically, AI applications should accelerate the path we take to arrive at a decision, ultimately leading to better outcomes.

Q2: Can I explain the reasoning path?

For AI to deliver on its promise, it must be predictable and earn the trust of the end-user.
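
One practical approach, where the stakes allow, is to favour models whose reasoning path can be printed and inspected. A minimal sketch using scikit-learn's decision tree, with the bundled Iris dataset standing in for real decision data:

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    # Keep the tree deliberately shallow so its decision path stays readable.
    iris = load_iris()
    model = DecisionTreeClassifier(max_depth=3, random_state=0)
    model.fit(iris.data, iris.target)

    # Print the learned decision path as human-readable if/else rules.
    print(export_text(model, feature_names=iris.feature_names))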

Q3: Can I recognize and mitigate AI bias?

AI is only as good as the data behind it; that data must therefore be fair and representative if AI is to evolve to be non-discriminatory.
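
A minimal sketch of one common check, demographic parity, which compares positive-outcome rates across groups; the 'group' and 'approved' columns and the data are hypothetical:

    import pandas as pd

    # Hypothetical model outputs for two groups; values are illustrative only.
    results = pd.DataFrame({
        "group":    ["A", "A", "A", "B", "B", "B"],
        "approved": [1,   1,   0,   1,   0,   0],
    })

    # Demographic parity: the positive-outcome rate per group.
    rates = results.groupby("group")["approved"].mean()
    print(rates)

    # A large gap between groups is a signal to investigate data and model.
    print(f"Approval-rate gap: {rates.max() - rates.min():.2f}")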

Q4: How secure is the data I am using?

Data must be secure; otherwise, tampering or corruption can skew the model's output at the expense of the end-user.
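
A minimal sketch of one basic safeguard: verifying that a dataset file has not been altered since publication by comparing its SHA-256 checksum against a recorded value. The file name and expected hash here are placeholders:

    import hashlib

    def sha256_of(path: str) -> str:
        """Compute a file's SHA-256 digest, reading in chunks."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Placeholder: the hash recorded when the dataset was published.
    EXPECTED = "0" * 64

    if sha256_of("training_data.csv") != EXPECTED:
        raise ValueError("Dataset failed integrity check; do not train on it.")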

Fujitsu’s stance

At Fujitsu, we are acutely aware of the potential risks posed by the development of unethical AI systems. As such, we are engaging with industry, academia and regulators as they continue to investigate and develop good practice measures and guidelines to ensure rigorous governance and the ethical use of AI solutions across a wide range of industry applications.

Like the technology itself, this area is changing rapidly, and we are monitoring it closely. Until government regulators and industry provide mature guidelines for the deployment and use of AI solutions, Fujitsu will seek to improve its development practices through direct engagement with industry standards bodies and through participation in the wider community, with the aim of improving the deployment of AI solutions.

Find out more…

To find out more about the ethics framework outlined here, and for some practical steps towards setting up an AI ethics framework, download our latest White Paper.
