Ethical AI: Our future may depend on it

By Professor Keng Leng Siau
Head and Chair Professor
Department of Information Systems

An earlier version of this article, “Artificial Intelligence (AI) Ethics: Ethics of AI and Ethical AI,” with co-author Weiyu Wang, Missouri University of Science and Technology, was published in the Journal of Database Management, March 2020.

Artificial intelligence-based technology has already achieved many great things; facial recognition, medical diagnosis, and self-driving cars spring to mind. AI promises enormous benefits for economic growth, social development, and improvements in human well-being and safety. However, the low level of explainability, data biases, data security, data privacy, and ethical problems of AI-based technology pose significant risks to users, developers, and society in general. Above all, as AI advances, a critical issue is how to address its associated ethical and moral challenges.

What should ethical AI look like? In the simplest form, we may define an ethical AI as one that does no harm to humans. But what is harm? What constitutes human rights? Many such questions need to be answered before we can design and build ethical AI. Just as humans need ethical sensitivity training to make good ethical decisions, AI should, in theory, be developed to recognise ethical issues. But if AI is capable of making decisions, how can we design and develop an AI system that is sensitive to ethical issues? Unfortunately, this is not easy to realise in practice; long-term and sustained efforts are needed. Nonetheless, we must understand the importance of developing ethical AI and start working on it step by step.

Corporations initiating ethics of AI

Many institutions, such as Google, IBM, Accenture, Microsoft, and Atomium-EISMD, have started formulating ethical principles to guide the development of AI. In November 2018, the Monetary Authority of Singapore (MAS), together with Microsoft and Amazon Web Services, launched the FEAT principles (fairness, ethics, accountability, and transparency) for the use of AI. Academics, practitioners, and policymakers are working together to widen engagement in establishing ethical principles for AI design, development, and use. Alongside these frameworks and principles, protective guardrails are needed to ensure ethical behaviour. Good governance is necessary to enforce the implementation of, and adherence to, those ethical principles, and a legal void is waiting to be filled by regulatory authorities. Whether based on case law or established via legislative and regulatory obligations, these legal and regulatory instruments will be critical to the good governance of AI, which in turn helps implement and enforce the ethics of AI and enables the development of ethical AI.

Regulation – governments and governance

To protect the public, the US has long relied on regulatory instruments, such as rules against discrimination, equal employment opportunity laws, and Title II of the Health Insurance Portability and Accountability Act, as well as proposed legislation such as the Commercial Facial Recognition Privacy Act and the Algorithmic Accountability Act. All these instruments can usefully guide the development of legal and regulatory policies and frameworks for AI ethics. In addition to legal and government rules, self-regulation plays an important role. Communication and information disclosure can help society as a whole ensure the development and deployment of ethical AI. Discussion forums and ethical guidelines published by companies, industries, and policymakers can help educate the public about the benefits of AI and dispel myths and misconceptions about it. Moreover, better knowledge of legal frameworks on human rights, a stronger sense of security, and an understanding of the ethical issues related to AI can foster trust in AI and enable the development of ethical AI.

Transforming AI into ethical agents

There are three potential ways to transform AI into ethical agents: train AI to be "implicit ethical agents," "explicit ethical agents," or "full ethical agents." Implicit ethical agents are machines whose actions are constrained to avoid unethical outcomes. Explicit ethical agents are machines told precisely which actions are allowed and which are forbidden. Full ethical agents are machines that, like humans, have consciousness, intentionality, and free will. The implicit approach can restrict the development of AI. The explicit approach currently attracts the most attention and is considered more practical. Full ethical agents remain an R&D initiative, and it is unclear when they will become a reality. When they do, how we treat an AI agent that has consciousness, a moral sense, emotions, and feelings will itself become an ethical consideration. For instance, is it ethical to "kill" (shut down) an AI agent if it replaces human jobs or even endangers human lives? Is it ethical to deploy robots in a dangerous environment? These questions are intertwined with human ethics and moral values.

Embracing ethical AI

The President-elect of the European Commission made clear in her recently unveiled policy agenda that the cornerstone of the European AI plan will be to ensure that "AI made in Europe" is more ethical than AI made anywhere else in the world. US agencies such as the Department of Defense and the Department of Transportation have also launched initiatives to ensure the ethical use of AI within their respective domains. In China, the government-backed Beijing Academy of Artificial Intelligence has developed the Beijing AI Principles, which rival those of other countries, and the Chinese Association for Artificial Intelligence has developed its own ethics guidelines. Many non-European countries, including the US, have signed on to the Organisation for Economic Co-operation and Development's (OECD) AI Principles, which focus on "responsible stewardship of trustworthy AI."

Trade-off between AI ethics and AI advancement

Still, the makers and researchers of AI are most likely to pay attention to hard performance metrics, such as speed and reliability, or softer metrics, such as usability and customer satisfaction. Nebulous concepts like ethics are not yet the most urgent consideration, especially given the intense competition between companies and between nations. Further, some consumers may pay only lip service to concerns about AI. For example, among consumers who said they distrust the internet, only 12% report using technological tools, such as virtual private networks, to protect their data, according to a worldwide Ipsos survey (CIGI-Ipsos, 2019). The most important factors influencing consumers' purchasing decisions are still price and quality; right now, consumers care more about what AI can do than whether AI's actions are ethical. This may put companies and institutions developing AI in a trade-off situation: whether to focus on AI advancement to maximise profit, or on AI ethics to ensure that society benefits from AI innovations.

Future of humanity may depend on the ethical development of AI

Understanding and addressing the ethical and moral issues related to AI is still in its infancy. AI ethics is not simply about "right or wrong," "good or bad," or "virtue and vice," nor is it a problem that can be solved by a small group of people. Yet the ethical and moral issues related to AI are critical and need to be discussed now. This article aims to call attention to the urgent need for various stakeholders to pay attention to the ethics and morality of AI agents. In attempting to formulate the ethics of AI to enable the development of ethical AI, we will also come to understand human ethics better, improve existing ethical principles, and enhance our interactions with AI agents in this AI age. AI ethics should be a central consideration in developing AI agents, not an afterthought. Our future depends on the correct development and implementation of AI ethics!
