Ethical AI by Design: Key Issues of Ethical AI

Ethical AI

Brief History of AI

The term “artificial intelligence” was first coined in the 1955 proposal for the Dartmouth Summer Research Project, a 1956 workshop that John McCarthy co-authored the proposal for. McCarthy was an American computer scientist at Stanford University. Referred to as the “Father of AI”, McCarthy founded the field of artificial intelligence and invented LISP, which became the standard programming language of early AI research (McCarthy, n.d.).

Artificial Intelligence (AI) is defined as “the science and engineering of making intelligent machines, especially intelligent computer programs” (McCarthy, 2007). Over the years, developments in the area of artificial intelligence have progressed so much with wide applications in many areas of our lives. From facial recognition door access, automatic language translation tools, speech recognition applications like Apple’s Siri, to self-driving cars, these are all examples of applications in artificial intelligence.

Living in the Age of AI

Artificial intelligence today may be designed to perform just a narrow task (referred to as Artificial Narrow Intelligence or ANI, also called “weak” AI) but the longer-term goal of AI researchers is to create Artificial General Intelligence (AGI, also called “strong” AI), giving machines the ability to “perform any intellectual tasks that a human can perform” (Miailhe & Hodes, 2017).

On the surface, AI seems to help us make better decisions, perform tasks more efficiently, and to a certain extent even gives us “superhuman” powers, like the ability to understand a foreign language through instant translation tools. In fact, a quote by Prensky (2012) really struck a chord with me and got me thinking further. He said, “the unenhanced brain is well on its way to becoming insufficient for truly wise decision making.” This struck me as bearing some truth, but it also makes me wonder: how much “enhancement” is too much enhancement? Where do we draw the line? In this post, I will explore the key ethical issues of AI and present proposals for safeguarding humanity in the age of AI.

Key Ethical Issues of AI

Artificial intelligence is a field under the umbrella of data science and includes the subset of machine learning, which in turn covers deep learning and artificial neural networks. According to Choi et al. (2020), artificial intelligence is focused on “automating intellectual tasks normally performed by humans” while machine learning and deep learning are the methods used in the process of automation.

Figure 1: Relationship between AI and data science and subsets of AI
(Choi et al., 2020)

As machines get more intelligent through machine learning and deep learning and our lives become more efficient, it is equally important to be aware of and contemplate the ethical issues and risks arising from advances in AI. Here are the key ethical issues of AI presented by Bossmann (2016).

1/ Unemployment

We are already seeing factory operations automated by machines capable of operating round the clock without the need for rest. These machines and robots have replaced humans and rendered factory workers jobless. Meanwhile, Tesla announced the launch of automated trucks capable of driving themselves up to 500 miles before needing a recharge (Marshall, 2017). What does the future look like for workers whose livelihoods become obsolete, replaced by machines?

2/ Inequality

Bossmann (2016) posits that the ones standing to gain the most from AI are individuals who have ownership in AI-driven companies. This is because companies leveraging AI can cut down on labor costs significantly, thereby increasing revenue. This increased revenue, unfortunately, goes to fewer people as the manual laborers have been eliminated. This situation may lead to an increasingly widening wealth gap with company owners keeping a larger portion of the gains created. The question of how we might structure a fair post-labor economy needs to be addressed.

3/ Humanity

In 2014, a computer program named Eugene Goostman managed to fool people into thinking it was a real 13-year-old boy from Odessa, Ukraine (Aamoth, 2014). Eugene passed the Turing Test, a challenge developed by Alan Turing that is widely considered a benchmark for establishing artificial intelligence. Unlike humans, machines have unlimited resources to channel into building relationships. Machines can also be optimized to trigger the reward centers in the human brain, catching our attention and prompting certain actions. Used well, this technology can open up many opportunities; misused, it can lead to many problems. Imagine what might happen if we created a machine that understands humans so well that it can deceive and even manipulate them – an incredible breakthrough, yet at the same time a rather disturbing one.

4/ Machine errors

Machines need time to learn: they go through a training phase in which they learn to detect patterns and respond to inputs. Even after training, a machine still needs to go through a test phase to ensure it is ready to be unleashed for use. But the training phase cannot cover every possibility the system may encounter in the real world, and this leaves room for machine errors that can have dire consequences.
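To make this concrete, here is a minimal sketch in Python of how a model can pass its test phase and still err in the real world. All the numbers, labels, and the threshold-learning rule are made up for illustration; the point is only that test data drawn from the same distribution as the training data cannot reveal how the model behaves on inputs it has never seen.

```python
# A toy "model": learn a decision boundary between two classes as the
# midpoint of their mean sizes in the training data.
def train_threshold(samples):
    small = [x for x, label in samples if label == "pedestrian"]
    large = [x for x, label in samples if label == "vehicle"]
    return (sum(small) / len(small) + sum(large) / len(large)) / 2

def predict(threshold, size):
    return "pedestrian" if size < threshold else "vehicle"

# Training phase: object sizes (hypothetical units) with ground-truth labels.
train = [(1.0, "pedestrian"), (1.2, "pedestrian"),
         (4.0, "vehicle"), (5.0, "vehicle")]
threshold = train_threshold(train)

# Test phase: held-out data from the same distribution -- every case passes.
test = [(0.9, "pedestrian"), (4.5, "vehicle")]
assert all(predict(threshold, x) == label for x, label in test)

# Real world: a cyclist (size 2.0) was never in the training data, yet the
# model must still answer -- and it can only force the input into one of
# the two classes it knows.
print(predict(threshold, 2.0))  # "pedestrian"
```

The test phase here gives a perfect score, which is exactly why it offers false comfort: the cyclist-shaped gap in the training data is invisible until the system meets one.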

5/ AI Bias

AI can certainly process information at far greater speed and efficiency than humans, but whether it can be trusted to always be fair and neutral is highly questionable. When a biased judgment is made, we consider that an injustice has been done to a person or a group (Omowole, 2021). In one example, a prominent camera brand kept flashing the message “Did someone blink?” when an Asian family took a photo. The subjects in the photo were not in fact blinking; the software simply mishandled their naturally narrower eyes. It looks like the machine will need a lot more training to make it less “racist”! Omowole (2021) lists five sources of fairness and non-discrimination risk in the use of artificial intelligence:

  • Implicit bias – discrimination or prejudice against a group (by gender, race, disability, sexuality, or class) held without conscious awareness.
  • Sampling bias – sample data skewed towards a certain subset of the population.
  • Temporal bias – a machine-learning model that works today may fail later, because we cannot possibly factor in every future change.
  • Over-fitting to training data – the model fits the training dataset well but does not generalize to the larger population in real life.
  • Edge cases and outliers – data that falls outside the boundaries of the training dataset.
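Sampling bias in particular is easy to demonstrate. Below is a small Python sketch using entirely hypothetical loan-style data: the learning rule itself is “neutral” (memorize the most common outcome per group), but because the training sample is dominated by one group, the model ends up far less accurate for the under-represented one.

```python
from collections import Counter

def train_majority(samples):
    """For each group, memorize the most common outcome seen in training."""
    by_group = {}
    for group, outcome in samples:
        by_group.setdefault(group, Counter())[outcome] += 1
    fallback = Counter(o for _, o in samples).most_common(1)[0][0]
    return {g: c.most_common(1)[0][0] for g, c in by_group.items()}, fallback

def predict(model, group):
    rules, fallback = model
    return rules.get(group, fallback)

def accuracy(model, samples):
    return sum(predict(model, g) == o for g, o in samples) / len(samples)

# Skewed training sample: 95 examples from group A, only 5 from group B,
# and B's few examples happen to be unrepresentative.
train = [("A", "approve")] * 90 + [("A", "deny")] * 5 + [("B", "deny")] * 5
model = train_majority(train)

# Representative real-world data: group B mostly merits approval too.
real_a = [("A", "approve")] * 9 + [("A", "deny")] * 1
real_b = [("B", "approve")] * 7 + [("B", "deny")] * 3
print(accuracy(model, real_a))  # 0.9 -- the well-sampled group
print(accuracy(model, real_b))  # 0.3 -- the under-sampled group
```

Nothing in the code “intends” discrimination; the harm comes entirely from who was and was not represented in the sample, which is why auditing training data matters as much as auditing algorithms.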

6/ Security and Privacy

It is critical for us to consider issues of security and privacy when developing AI. There are bad actors in the cyberworld who may prey on the vulnerabilities of machines and launch attacks on AI systems, so hardware security should be a key consideration when developing AI systems. Encryption technology is also needed so that AI systems can process data while still respecting the privacy of users (Reger, 2021). In another use case, where social robots are used in a classroom learning environment, we need to be cognizant that these robots are capturing video footage of their subjects. The questions, then, are: are the subjects aware that they are being monitored, has their consent been obtained, and how will the collected data be used? The matter becomes even more complicated when the subjects are young children.

7/ Evil AI

Imagine creating an AI robot that goes rogue and turns against its creator – reminiscent of a Hollywood sci-fi horror movie. In the case of machines, it is not that the machine itself is intrinsically “evil”, but that it lacks understanding of the full context of a command. Bossmann (2016) presents the example of an AI system asked to eradicate cancer in the world. After much computing, the system found a way – by killing everyone in the world. Technically, this solution does eradicate cancer completely, but it is hardly the outcome humans would have wanted.

8/ Singularity

“Singularity” is “the point in time when human beings are no longer the most intelligent beings on earth” (Bossmann, 2016). We are able to control machines now thanks to our ingenuity and intelligence. But will there come a point when AI becomes so advanced that it can anticipate our every move and outsmart us in order to defend itself? What happens when the advantage we have over machines is no longer an advantage?

9/ Robot rights

As AI systems and robots become more complex and life-like, do we consider these machines entities that can perceive, feel, and act? Can a machine be said to suffer when its reward function returns negative values? What about their legal status? Should they be treated like animals of comparable intelligence?

Safeguarding Humanity in the Age of AI

It is undeniable that AI has many advantages and benefits in making the world a more efficient place and our lives easier. However, as we consider all the key ethical issues raised by AI, there is a need for us to “redirect [our] thinking from what is merely advantageous to what is genuinely good” while carefully weighing what is best for human life (Trout, 2019).

There seem to be two possibilities for how AI will turn out. In the first, AI will do what it is on track to do: slowly take over every human discipline. The second possibility is that we take the existential threat of AI with the utmost seriousness and completely change our approach. This means redirecting our thinking from a blind belief in efficiency to a considered understanding of what is most important about human life.

Bernhardt Trout,
Raymond F. Baddour, ScD, (1949) Professor of Chemical Engineering, MIT

Trout (2019) calls for us to redirect education if we want to shift thinking about AI. Rather than an education focused on a single discipline, in which we accept all that is taught, we need to become critical thinkers capable of reflecting on deeper fundamental questions of human dignity, freedom, and justice. Redirecting education will not only shape how individuals and organizations respond to AI but will also have a bearing on policy, as decision-makers shape legislation around AI.

References

  1. Aamoth, D. (2014, June 9). Interview: Eugene Goostman passes the Turing Test. Time. Retrieved October 31, 2021, from https://time.com/2847900/eugene-goostman-turing-test/.
  2. Bossmann, J. (2016, October 21). Top 9 ethical issues in Artificial Intelligence. World Economic Forum. Retrieved October 31, 2021, from https://www.weforum.org/agenda/2016/10/top-10-ethical-issues-in-artificial-intelligence/.
  3. Choi, R. Y., Coyner, A. S., Kalpathy-Cramer, J., Chiang, M. F., & Campbell, J. P. (2020, February). Introduction to Machine Learning, Neural Networks, and Deep Learning. Translational Vision Science & Technology. Retrieved October 31, 2021, from https://tvst.arvojournals.org/article.aspx?articleid=2762344.
  4. Marshall, A. (2017, November 17). Will Tesla’s automated truck kill trucking jobs? Wired. Retrieved October 31, 2021, from https://www.wired.com/story/what-does-teslas-truck-mean-for-truckers/.
  5. McCarthy, J. (n.d.). Professor John McCarthy. Retrieved October 31, 2021, from http://jmc.stanford.edu/.
  6. McCarthy, J. (2007, November 12). What is Artificial Intelligence? Professor John McCarthy. Retrieved October 31, 2021, from http://jmc.stanford.edu/articles/whatisai.html.
  7. Miailhe, N., & Hodes, C. (2017). The Third Age of Artificial Intelligence. Field Actions Science Reports, 2017(17). Retrieved October 31, 2021, from http://journals.openedition.org/factsreports/4383.
  8. Omowole, A. (2021, July 19). Research shows AI is often biased. Here’s how to make algorithms work for all of us. World Economic Forum. Retrieved October 31, 2021, from https://www.weforum.org/agenda/2021/07/ai-machine-learning-bias-discrimination/.
  9. Prensky, M. (2012). From Digital Natives to Digital Wisdom. In From Digital Natives to Digital Wisdom: Hopeful essays for 21st Century learning (pp. 201–215). essay, Corwin.
  10. Reger, L. (2021, January 26). AI ethics really come down to security. Forbes. Retrieved October 31, 2021, from https://www.forbes.com/sites/forbestechcouncil/2021/01/27/ai-ethics-really-come-down-to-security/?sh=4249a8c71676.
  11. Trout, B. (2019, February 18). Safeguarding Our Humanity in the Age of AI. MIT School of Humanities, Arts, and Social Sciences. Retrieved October 31, 2021, from https://shass.mit.edu/news/news-2019-ethics-and-ai-series-safeguarding-humanity-age-ai-bernhardt-trout.

2 thoughts on “Ethical AI by Design: Key Issues of Ethical AI”

  1. Chelly Rody says:

    Thank you for your post, Mun! Very eye opening, indeed! I like how you unpacked the different issues that surround Artificial Intelligence. It reminds me of a Biblical truth about the ‘created’ trying to overpower the Creator.
    You quoted a powerful statement by Trout. Rethinking, reassessing, reevaluating what is important should always be a priority in people’s minds as we invent, create, form, produce and engineer our way into this highly complex, fast-paced, digital world we live in.

  2. Ignasia Yuyun says:

    This post is excellent, Mun! I appreciate your thoughts and discussion regarding the emergence of AI. I agree that AI has many advantages and benefits to ease our daily lives. However, we should consider important ethical issues raised by AI. In the education context, policies should be well made to ensure AI ethics are also well concerned.

