
The Organization for Economic Co-operation and Development in May adopted its first set of guiding principles for the responsible development of artificial intelligence, with more than 40 OECD member and partner countries signing an agreement to encourage the development and deployment of AI systems that are robust, safe, fair and trustworthy.
Signing of an agreement on the OECD principles on AI at the OECD Ministerial Council Meeting in May (Source: Permanent Delegation of Japan to the OECD)
Action by ministers of the Group of 20 major economies followed in June in Tsukuba City, Japan. There, officials at the G20 Ministerial Meeting on Trade and Digital Economy agreed to their first set of AI principles, including promoting a human-centric approach to AI.
And the OECD and G20 are not the only organizations that have been adopting AI principles and guidelines. Earlier in 2019, the Japanese government, through its Cabinet Office, organized the Council for Social Principles of Human-Centric AI and adopted a human-centric approach for development of AI in Japan. The initiative was part of Japan’s "Society 5.0" vision, which seeks to promote economic development while solving social ills. The AI principles pushed forward by the council significantly contributed to the agreements later adopted by the OECD and G20 on promoting ethical development and deployment of AI.
While the principles adopted by the various government and international organizations are not legally binding, they are expected to guide national and regional governments as they draft legislation, as needed, to help ensure AI systems follow principles of trust, ethics and human rights.
For although artificial intelligence holds tremendous promise for accelerating technological development and transforming societies, it is also fueling anxieties about a perceived lack of transparency and trustworthiness, as well as fears that it will create substantial dislocations in the labor market.
Let's take a look at why organizations and governments around the world are supporting the adoption of AI principles and guidelines and how they could influence development and uses of the technology.
AI to Take Over Range of Human Tasks
There are two reasons governments around the world are keen to adopt guidelines for how AI should be developed and deployed.
First, as AI-enabled systems become more sophisticated, they are beginning to take over a range of tasks traditionally performed by humans. In particular, deep-learning technology combined with large amounts of data can enable AI systems to perform tasks at a level matching or exceeding that of humans in fields that have been difficult for computers to undertake in the past.
AI systems could reduce the workload in offices and factories, not only by performing tedious tasks but also by undertaking complex decision-making. In this way, AI systems could serve as a valuable resource for humans. They could enhance our personal lives, improve business practices and retool industries, while providing solutions to labor shortages and inefficient work habits.
And implementation of AI systems will be vital for the range of industries undergoing digital transformation, or DX, to reshape business operations. It will also be needed to meet the Sustainable Development Goals, or SDGs, promoted by the United Nations and supported by many national governments, including Japan.
The mobility industry offers a practical example of AI-powered DX in action. Companies like Uber Technologies and Lyft in the U.S. and Grab in Singapore are using AI to match passengers with drivers. Their services have generated large-scale growth in the on-demand car dispatch and food-delivery markets. In the U.S., Google’s Waymo and GM Cruise are actively developing self-driving vehicles, fueled in part by deep-learning technologies processing vast amounts of driving data. The companies have been testing prototypes of driverless autonomous vehicles and are making substantial progress. For example, unmanned taxis have been operating on a proof-of-concept basis in limited areas in the U.S. since 2018.
Public road tests by Waymo in Silicon Valley and GM Cruise in San Francisco (Photo by Nikkei BP Research Institute)
Nuro’s robot delivery car, R1, in service in Arizona, U.S. (Photo by Nuro)
But Early AI Rollouts Shake Trust in Technology
The second reason for the government backing of AI principles and recommendations is that as the potential for AI to transform our lives becomes clear, so do the potential risks for negative outcomes.
For example, self-driving cars use sensors and software to detect their surroundings and to determine whether it is safe to drive. But who will take responsibility when a vehicle accident causes injury or property damage? When an AI system does the same work as a human, should the same liability be imposed on the developers, implementers or operators of the technology if something goes wrong? Currently, few if any laws specifically require AI systems to meet the same standards of accountability as humans. In the future, however, it will be necessary to clarify who will be held legally responsible for problems that occur with AI systems.
And tackling legal liability issues for AI is only one of the challenges that lie ahead for regulators and policymakers. A bigger problem is looming: growing doubts in society that AI technology will be developed and deployed in a trustworthy and ethical manner.
Traditionally, when companies develop and roll out new technology, we can verify how the products or services work, and when a product malfunctions, the company behind it is expected to fix it immediately.
Today’s automobiles have dozens, even hundreds, of CPUs that process data and help execute a driver’s commands. Every time a driver turns the steering wheel or presses the brake pedal, the action is digitized and sent to the car’s electronic control system. Thus, the car’s performance is tightly controlled, even if few drivers are aware of the control mechanisms. We don’t think, "What should I do if the car doesn't stop when I press the brake," because the mechanism is so stable that we don't even notice it is there. We take it for granted that the car will slow down as we step on the brake.
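To make that control path concrete, here is a minimal sketch, in Python, of how a brake-by-wire loop might digitize a pedal reading and turn it into a pressure command. Every name, value and threshold below is invented for illustration; real automotive control units are far more elaborate and are not written this way.

```python
# Hypothetical brake-by-wire sketch: a pedal sensor reading is
# digitized and mapped by an electronic control unit (ECU) to a
# brake-pressure command. All values are illustrative only.

def read_pedal_position() -> float:
    """Stand-in for an analog pedal sensor: 0.0 (released) to 1.0 (floored)."""
    return 0.42  # fixed reading, just for the sketch

def ecu_brake_command(pedal: float, max_pressure_bar: float = 180.0) -> float:
    """Map the digitized pedal position to a hydraulic pressure command.

    Out-of-range readings are clamped so a faulty sensor signal
    cannot request more than the system's maximum pressure.
    """
    pedal = min(max(pedal, 0.0), 1.0)  # basic plausibility check
    return pedal * max_pressure_bar

if __name__ == "__main__":
    pedal = read_pedal_position()
    print(f"pedal={pedal:.2f} -> brake pressure command={ecu_brake_command(pedal):.1f} bar")
```

The point of the sketch is the trust model described above: the driver never sees this code, yet relies on it completely every time the brake pedal goes down.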
Current AI-enabled systems, on the other hand, have not earned this level of trust. That is especially true for deep-learning systems that are designed to use large amounts of data. We are not always able to fully anticipate or understand what they will do. For example, when a self-driving car approaches another car so closely that a collision seems likely, passengers might feel uneasy. Even if the autonomous car’s operational software predicted the maneuver was necessary to avoid an accident, you might think, "Humans wouldn't drive like this. I can't trust it." This is why research is ongoing to develop "explainable AI," which would help to make it clear why an AI system made a particular decision. This type of transparency would help AI systems become more trustworthy.
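One common family of techniques behind explainable AI is perturbation-based attribution: remove or perturb each input feature and measure how much the model's output changes. The sketch below applies the idea to a toy linear scorer; the model, feature names and weights are all invented for illustration and do not come from any real self-driving system or explainability library.

```python
# Perturbation-based attribution sketch: occlude each feature in
# turn and record how much the model's output shifts. The "model"
# is a toy linear scorer with made-up weights.

from typing import Callable, Dict, List

def toy_model(features: List[float]) -> float:
    """Hypothetical scorer, e.g., a 'collision risk' estimate in [0, 1]."""
    weights = [0.6, 0.3, 0.1]  # proximity, closing speed, lane offset
    score = sum(w * f for w, f in zip(weights, features))
    return max(0.0, min(1.0, score))

def occlusion_attribution(model: Callable[[List[float]], float],
                          features: List[float],
                          names: List[str]) -> Dict[str, float]:
    """Zero out each feature and report the resulting drop in the prediction."""
    baseline = model(features)
    attributions = {}
    for i, name in enumerate(names):
        perturbed = features.copy()
        perturbed[i] = 0.0  # "occlude" this feature
        attributions[name] = baseline - model(perturbed)
    return attributions

if __name__ == "__main__":
    names = ["proximity", "closing_speed", "lane_offset"]
    x = [0.9, 0.7, 0.2]
    print("prediction:", toy_model(x))
    for name, contribution in occlusion_attribution(toy_model, x, names).items():
        print(f"{name}: {contribution:+.3f}")
```

An explanation of this form ("proximity contributed most to the high risk score") is exactly the kind of output that could reassure a passenger, or a regulator, about why the system behaved as it did.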
Confidence in AI also can be shaken by systems that base their decisions on erroneous or biased data. AI-enabled machines that process this data could draw the wrong conclusions. For example, Reuters reported in 2018 that Amazon canceled use of AI to evaluate job candidates’ résumés because the system produced biased decisions, generally ranking male candidates higher than female candidates regardless of qualifications. The AI technology had learned from past recruiting data, in which applicants were overwhelmingly male. Amazon engineers tried to make adjustments to eliminate this bias, but they remained unconvinced that their AI implementation was fairly screening candidates. So the company scrapped use of the technology for recruitment.
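Although the details of Amazon's system were never disclosed, the kind of skew described in the report can be surfaced with a simple audit of a model's outcomes. The sketch below, using entirely invented numbers, compares selection rates across groups and computes the disparate-impact ratio (the "four-fifths rule" heuristic used in U.S. employment practice).

```python
# Bias-audit sketch: compare a screening model's selection rates
# across groups and compute the disparate-impact ratio. A ratio
# below 0.8 is a conventional red flag. All data here is invented.

from collections import defaultdict
from typing import Dict, List, Tuple

def selection_rates(decisions: List[Tuple[str, bool]]) -> Dict[str, float]:
    """decisions: (group label, was the candidate shortlisted?)."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        selected[group] += int(shortlisted)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact(rates: Dict[str, float]) -> float:
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    decisions = ([("male", True)] * 40 + [("male", False)] * 60
                 + [("female", True)] * 15 + [("female", False)] * 85)
    rates = selection_rates(decisions)
    print("selection rates:", rates)
    print(f"disparate-impact ratio: {disparate_impact(rates):.2f}")  # 0.38 here
```

An audit like this does not fix the underlying data, but it makes the bias visible and measurable, which is a precondition for the accountability that the principles below call for.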
Overall, while AI has the potential to greatly improve the functioning of our societies, the technology is also fraught with risks if used improperly. This is not so much a question of the maturity of AI as of how we develop and use the technology. It is a little like using a knife to cut something. Usually, the sharper the knife, the better the cut. But misuse of sharp knives obviously can cause problems for society. AI systems can often perform tasks better than humans, but we must be careful as we rely on these systems more and more in our daily lives.
Guiding Principles for Implementing AI
The AI principles set forth by governments and international development organizations do not have the force of law. They are recommendations or guidelines that officials would like to see AI developers and implementers follow.
The following is an overview of the OECD Principles on Artificial Intelligence:
- AI should benefit people and the planet by driving inclusive growth, sustainable development and well-being.
- AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and they should include appropriate safeguards (for example, enabling human intervention where necessary) to ensure a fair and just society.
- There should be transparency and responsible disclosure around AI systems to ensure that people understand AI-based outcomes and can challenge them.
- AI systems must function in a robust, secure and safe way throughout their life cycles, and potential risks should be continually assessed and managed.
- Organizations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the above principles.
Meanwhile, the Japanese government released a 12-page document, titled "Social Principles of Human-Centric AI," based on the government's AI principles. The document states that to create an "AI-ready society," various stakeholders should keep in mind the following basic principles. In order for AI to be accepted by society, implementations should:
- Be human centric
- Be accompanied by education and literacy
- Protect privacy
- Ensure security
- Enable fair competition
- Provide fairness, accountability and transparency
- Foster innovation
Regarding the trustworthiness of AI systems, Japanese officials introduced a new concept, "Data Free Flow with Trust (DFFT)," which was included in the Ministerial Statement of the G20 in June. Japanese Prime Minister Shinzo Abe announced the concept at the annual meeting of the World Economic Forum in January 2019.
These are the latest efforts to set principles and guidelines for how AI should be rolled out. The recommendations are designed to promote development of technology that improves society. Expectations for the potential of AI are high because the technology can make decisions on behalf of humans. By the same token, the power of AI is the reason many believe use of the technology should be governed by the same set of ethical principles that most humans follow. Part 2 of this article will look more closely at the theme of AI and ethics.
Author Profile
Tetsushi Hayashi
Nikkei BP Intelligence Group Clean Tech Laboratory, Chief Research Officer
Mr. Hayashi joined Nikkei BP after graduating from Tohoku University's School of Engineering in 1985. As a reporter and editor-in-chief for outlets such as Nikkei Datapro, Nikkei Communications, and Nikkei Network, he has covered stories and written articles on topics such as cutting-edge communications and data processing technologies as well as standardization and productization trends. He successively served as chief editor of Nikkei BYTE from 2002, Nikkei Network from 2005, and Nikkei Communications from 2007. In January 2014, he became Chief Director of Overseas Operations after acting as publisher for magazines including ITpro, Nikkei Systems, Tech-On!, Nikkei Electronics, Nikkei Monozukuri, and Nikkei Automotive. He has served at his present post since September 2015. Since August 2016, Mr. Hayashi has been writing a regular column, "Creating the Future with Automated Driving," in the Nikkei Digital Edition. He also published the "Overview of International Automated Driving Development Projects" in December 2016 and the "Overview of International Automated Driving/Connected Cars Development" in December 2017. Mr. Hayashi has served as a CEATEC Award judge since 2011.