The Ethical Issues of Artificial Intelligence

Artificial intelligence has captured societal fascination since antiquity; Greek mythology depicts Talos, a human-like automaton that defended the island of Crete. However, the ethical issues raised by such artificial intelligence only began to be seriously addressed in the 1940s, with the publication of Isaac Asimov's short story "Runaround". In it, the main character states the "Three Laws of Robotics":

1. A robot may not harm a human being, or, through inaction, allow a human being to be harmed.
2. A robot must obey orders given by humans, unless such orders conflict with the First Law.
3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

The rules established here are rather ambiguous. B. Hibbard, in his article "Ethical Artificial Intelligence", provides a situation in which they conflict: "an AI police officer watching a gunman point a gun at a victim" might have to shoot the gunman to save the victim's life. Whether the officer shoots or holds fire, the First Law is violated, since inaction allows the victim to be harmed. A framework is therefore needed to define how such an AI should behave ethically (and even make some moral improvements). The other factors this essay will discuss (mainly with the help of "The Ethics of Artificial Intelligence" by N. Bostrom and E. Yudkowsky) are transparency to inspection and predictability of artificial intelligence.

Transparency to Inspection

Engineers should, when developing an AI, make it transparent to inspection: a programmer should be able to understand, at least in outline, how the algorithm decides the AI's actions. Bostrom and Yudkowsky's article illustrates the importance of this with a machine that recommends mortgage applications for approval. If the machine were to discriminate against people of a certain type, the paper argues, and it were not transparent to inspection, there would be no way to find out why or how it does so. Furthermore, A. Theodorou et al., in the paper "Why does my robot behave like this?", highlight three purposes that transparency to inspection serves: allowing an assessment of reliability, exposing unexpected behavior, and exposing the decision-making process. The paper goes further by outlining what a transparent system should declare (its type, its purpose, and the people who use it), while emphasizing that the system should present information readable to each of its different roles and users. Although the paper does not address artificial intelligence as a separate topic, the principles of a transparent system transfer easily to engineers developing AI. Therefore, when developing new technologies such as artificial intelligence and machine learning, the engineers and programmers involved should not lose track of why and how the AI performs its decision-making, and should strive to build structure into the AI that protects, or at least informs, the user when unexpected behavior occurs.
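To make the idea concrete, below is a minimal sketch of a decision system that is transparent to inspection. The linear scoring model, the feature names, the weights, and the threshold are all invented for illustration; the point is only that every decision carries an auditable breakdown, so a pattern like the mortgage discrimination Bostrom and Yudkowsky describe could be traced to its cause.

```python
# A minimal sketch of "transparency to inspection": a linear scoring
# model whose every decision carries a per-feature breakdown that an
# auditor can read. The feature names, weights, and threshold are all
# invented for illustration.
FEATURE_WEIGHTS = {
    "income_to_debt_ratio": 2.0,
    "years_employed": 0.5,
    "normalized_credit_score": 3.0,
}
APPROVAL_THRESHOLD = 4.0

def score_application(application: dict) -> tuple[bool, dict]:
    """Return the approval decision plus an auditable breakdown."""
    contributions = {
        name: weight * application[name]
        for name, weight in FEATURE_WEIGHTS.items()
    }
    total = sum(contributions.values())
    return total >= APPROVAL_THRESHOLD, contributions

approved, audit_trail = score_application({
    "income_to_debt_ratio": 1.2,
    "years_employed": 4,
    "normalized_credit_score": 0.8,
})
print(approved)     # the decision itself
print(audit_trail)  # the inspectable reasoning behind it
```

Under this assumption, if the system began disadvantaging a group, an inspector could trace exactly which weights and inputs produced that pattern, the check Bostrom and Yudkowsky's example calls for; a genuinely opaque learned model would need far heavier interpretability machinery to offer the same guarantee.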
Predictability of AI

Although AI has proven more capable than humans at specific tasks (e.g. Deep Blue's 1997 defeat of world chess champion Garry Kasparov), most current AI is not general. However, as technology advances and AI designs become more complex, their predictability becomes a concern. Bostrom and Yudkowsky argue that managing an AI that is general and performs tasks across many contexts is complex; identifying safety problems and predicting the behavior of such an intelligence is considered difficult. They highlight the need for an AI to act safely through unknown situations, extrapolating consequences from those situations, and essentially thinking ethically just as a human engineer would. Hibbard's article suggests that, to determine the AI's responses, testing should be performed in a simulated environment using a "decision support system" that would explore the intentions the AI learns in that environment, with simulations run without human interference. However, Hibbard also promotes a 'stochastic' process, using a random probability distribution, which would reduce the AI's predictability on specific actions (the probability distribution could still be analyzed statistically); this would serve as a defense against other AIs or people trying to manipulate the AI under construction (a sketch combining these two suggestions appears at the end of this essay). Overall, the predictability of AI is an important factor in designing one in the first place, especially when general AI is designed to perform large-scale tasks in widely varying situations. However, while an AI that is obscure in how it carries out its actions is undesirable, engineers should also consider the other side: an AI should retain a certain unpredictability that, if nothing else, would discourage the manipulation of that AI for a harmful purpose.

Does AI Think Ethically?

Arguably, the most important aspect of ethics in AI is the framework for how the AI would think ethically and consider the consequences of its actions: essentially, how to encapsulate human values and recognize their development over time into the future. This is especially true for superintelligence, where the question of ethics could mean the difference between prosperity and destruction. Bostrom and Yudkowsky state that for such a system to think ethically, it would have to be responsive to changes in ethics over time and decide which changes are a sign of progress, giving the example of comparing ancient Greece, where slavery was accepted, with modern society. The authors fear the creation of an ethically "stable" system that would be resistant to changing human values, and yet they do not want a system whose ethics are determined at random. They argue that, to create a system that behaves ethically, its designers should "understand the structure of ethical issues" in order to allow for ethical progress that has not yet been conceived. Hibbard suggests a statistical approach to giving an AI some semblance of ethical behavior; this forms the main topic of his article. For example, he highlights the problem that people around the world hold different human values, which makes any ethical framework for artificial intelligence complex. He argues that to address this problem, human values should not be expressed to an artificial intelligence as a set of rules, but learned using statistical algorithms.
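To illustrate Hibbard's broad proposal, here is a minimal sketch of values being learned statistically from examples of human judgments rather than hand-coded as rules. Everything in it is an invented, simplified assumption: the scenario features, the labels, and the tiny logistic-regression learner standing in for the statistical algorithms Hibbard has in mind.

```python
import math

# A hypothetical training set: each scenario is a feature vector
# (here: [harm_caused, consent_given, benefit_to_others]) paired with
# a human judgment of acceptability (1 = acceptable, 0 = unacceptable).
# Both features and labels are invented for illustration.
SCENARIOS = [
    ([0.9, 0.0, 0.1], 0),
    ([0.1, 1.0, 0.8], 1),
    ([0.0, 1.0, 0.2], 1),
    ([0.7, 0.0, 0.9], 0),
]

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

# Learn weights by plain gradient descent on the logistic loss,
# instead of hand-coding a fixed set of rules about what is acceptable.
weights = [0.0, 0.0, 0.0]
bias = 0.0
for _ in range(2000):
    for features, label in SCENARIOS:
        pred = sigmoid(sum(w * x for w, x in zip(weights, features)) + bias)
        error = pred - label
        weights = [w - 0.1 * error * x for w, x in zip(weights, features)]
        bias -= 0.1 * error

# The learned model can now score a scenario it has never seen.
novel_scenario = [0.5, 1.0, 0.5]
acceptability = sigmoid(sum(w * x for w, x in zip(weights, novel_scenario)) + bias)
print(f"estimated acceptability: {acceptability:.2f}")
```

The design point is the one Hibbard makes: the acceptability of a novel scenario is estimated from observed human judgments rather than looked up in a fixed rulebook, so such a model could in principle be retrained as those judgments shift over time.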
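Finally, here is the sketch promised in the predictability section. It combines, under invented assumptions, Hibbard's two suggestions of exercising an agent in a simulation with no human in the loop and of selecting actions stochastically. The action names, the utility estimates, and the softmax choice rule are illustrative stand-ins, not Hibbard's actual method.

```python
import math
import random
from collections import Counter

# Invented example actions and utility estimates for a single simulated
# situation; none of this comes from Hibbard's paper directly.
ACTION_UTILITIES = {"warn": 1.0, "restrain": 0.5, "stand_down": 0.2}

def choose_action(utilities: dict, temperature: float = 1.0) -> str:
    """Sample an action from a softmax distribution over utilities,
    so that individual choices are not perfectly predictable."""
    exps = {a: math.exp(u / temperature) for a, u in utilities.items()}
    total = sum(exps.values())
    r = random.random() * total
    for action, weight in exps.items():
        r -= weight
        if r <= 0.0:
            return action
    return action  # guard against floating-point rounding

# Run many episodes in the "simulation" without human interference, then
# inspect the empirical distribution: single actions vary unpredictably,
# but the overall statistics remain open to analysis.
tally = Counter(choose_action(ACTION_UTILITIES) for _ in range(10_000))
print(tally.most_common())
```

An observer cannot predict any single choice, which is the anti-manipulation property Hibbard wants, yet the tally over many episodes recovers a stable distribution that can still be analyzed statistically, just as his article allows.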