Confucian Robot Ethics
Author: [US] JeeLoo Liu
Translators: Xie Chenyun, Jing Chaoqin, Gu Long
Proofreader: Yiqin
Source: Thought and Culture, East China Normal University Press, June 2018
Date: first lunar month of the Jihai year, Xinchou day, in the 2570th year of the Confucian calendar (March 5, 2019)
About the author and translators:
[US] JeeLoo Liu (1958–), female, is a professor in the Department of Philosophy at California State University; her main research areas are philosophy of mind, Chinese philosophy, metaphysics, and moral philosophy.
Xie Chenyun (1995–), female, from Ji'an, Jiangxi, is a graduate student in the Department of Philosophy at East China Normal University; her research focus is Chinese Daoism.
Jing Chaoqin (1995–), female, from Xinyu, Jiangxi, is a graduate student in the Department of Philosophy at East China Normal University; her research focus is pre-Qin philosophy and ethics.
Gu Long (1991–), male, from Leiyang, Hunan, is a graduate student in the Department of Philosophy at East China Normal University; his research focus is Chinese religion.
Wang Yiqin (1989–), male, from Zhenjiang, Jiangsu, is a graduate student in the Department of Philosophy at East China Normal University; his research focus is philosophy of science.
[Abstract] This article explores the feasibility of implanting Confucian ethical norms into so-called "artificial moral agents". Drawing on the Confucian classic, the Analects, it considers which ethical rules could be implanted as a robot's character and virtues. It also compares three kinds of artificial moral agents, namely the Kantian, the utilitarian, and the Confucian, to assess their respective strengths and weaknesses. This article argues that although robots do not possess the innate moral sentiments of human beings, such as the "four beginnings" defended by Mencius, we can construct robot morality through the moral rules emphasized by Confucianism. With Confucian ethical norms built in, robots can effectively restrain themselves, thereby qualifying as artificial moral agents.
[Keywords] Artificial moral agent; Confucian ethics; utilitarianism; Kantian ethics; Asimov's Laws
The research for this article received support from Aodan University, where the author once studied as a visiting student. During the early stage of the project the pressure was high and the author often worked overtime; the author sincerely expresses gratitude for the generous assistance and the exchange of ideas during that visit.
Introduction
With the development of artificial intelligence technology, intelligent humanoid robots are very likely to appear in human society in the not-too-distant future. Whether they can truly possess human intelligence, and whether they can truly think like humans, remains a question to be explored at the philosophical level. But it is fairly certain that they may pass the test of artificial intelligence proposed by the British computer scientist, mathematician, and logician Alan Turing (the Turing test): if robots can convince the humans conversing with them to treat them as humans, then they can be shown to be intelligent. Perhaps one day, intelligent robots will become members of our society. They will take on our tasks, care for our elderly, serve us in hotels and restaurants, and make major decisions for us in navigation, the military, and even medicine. Can we establish ethical norms for these robots and teach them to distinguish right from wrong? If the answer is yes, then what moral norms can create artificial moral agents that meet the expectations of human society?
Many artificial intelligence designers believe that the development of artificial morality will succeed one day. On this assumption, this article explores whether implanting Confucian ethics into artificial intelligence robots can cultivate artificial moral agents that coexist harmoniously with humans. Drawing on the Confucian classic, the Analects, it considers which moral rules could be implanted in robots. At the same time, this article compares the Confucian artificial moral agent with those built on Kantian and utilitarian norms, to assess their respective strengths and weaknesses. This article argues that although robots do not possess the innate moral sentiments of human beings, such as the "four beginnings" defended by Mencius, we can build robots according to the moral rules emphasized by Confucianism, so that they become moral agents we can recognize.
The exploration of moral norms for artificial intelligence is not merely an intellectual exercise in futurist speculation. M. Anderson and S. Anderson argue: "Machine ethics allows ethics to reach an unprecedented level of precision, which may lead us to discover problems in current ethical theories and thereby advance our thinking about general ethical problems." [1] This article will show that comparative research on the cultivation of robots' character and virtues can reveal certain theoretical flaws in discussions of human ethics.
1. The rise of machine ethics
Letting a robot consider the consequences of its behavior in advance and then make systematic moral choices on its own is still out of reach today. However, in the design of current artificial intelligence machines there are already guidelines for designing specific choices, because some decisions made by machines carry many moral consequences. For example, we can program military drones: if a drone detects many civilians near a military target, should it abort the attack immediately or continue to launch it? We can also program medical robots: when a terminally ill patient suffers a sudden crisis, should the robot carry out rescue measures or forgo further treatment? Ryan Tonkens holds that "autonomous machines behave in morally relevant ways just as humans do, so to be safe from the start, our design must ensure that they act in a morally consistent manner." [2] Therefore, even if "ethical machines" cannot yet be created, we must consider machine ethics now. Moreover, the machine ethics we prepare must be suitable for applying to future machines themselves, rather than merely providing design rules for roboticists. In other words, machine ethics concerns how moral norms apply to "artificial moral agents", not to their designers.
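As a minimal sketch of the kind of pre-programmed guideline described above (all function names and thresholds here are hypothetical illustrations invented for this example, not drawn from any real drone or medical system), such a moral decision point can be expressed as a simple check the machine consults before acting:

```python
# Sketch of a hard-coded moral guideline of the kind described above.
# The function name and the zero-civilian threshold are hypothetical.

def should_abort_strike(civilians_detected: int, max_acceptable: int = 0) -> bool:
    """Abort the attack if more civilians are detected near the target
    than the mission's acceptable threshold (conservatively zero here)."""
    return civilians_detected > max_acceptable

# The drone's control loop would consult the rule before every strike:
if should_abort_strike(civilians_detected=12):
    decision = "abort"
else:
    decision = "proceed"
```

Note that this rule only encodes the designer's judgment at a single decision point; it is precisely the inadequacy of such case-by-case rules that motivates the more general approaches discussed next.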
Today there are two distinct approaches to designing artificially intelligent machines: one is "bottom-up", the other "top-down". [3] The former lets the machine gradually develop its own moral norms from the scattered rules it applies in everyday choices. The designer gives the machine a learning capacity to process aggregate information about the outcomes generated by its actions in different situations. To lead the machine to form certain behaviors, the designer can set up a reward system that encourages the machine to take certain actions. This feedback mechanism can prompt the machine to develop its own ethical norms over time. This approach resembles the way humans build moral character through learning experiences in childhood. In contrast, the "top-down" approach implants general, abstract ethical rules into the machine to govern its everyday choices and behavior. Following this path, designers must first choose an ethical theory, analyze "the informational and overall procedural requirements necessary to execute the theory in a computer system", and then design subsystems to execute the ethical theory. [4] Even with such a preset design, however, the machine still needs to choose the best course of action according to ethical principles in each moral situation. This top-down design method will reflect disputes within normative ethics, because different ethical theories create artificial moral agents that deliberate on the basis of different moral norms. This article compares these different theoretical models in the design process without considering the actual algorithms, design requirements, and other technical issues needed for implementation.
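The contrast between the two approaches can be sketched in miniature (all class names, rules, actions, and reward values below are hypothetical, chosen only to illustrate the distinction): a top-down agent consults ethical rules implanted by its designer, while a bottom-up agent adjusts its action preferences from reward feedback, as in simple reinforcement learning.

```python
# Minimal sketch contrasting the two design approaches described above.
# Every rule, action, and reward here is a hypothetical illustration.

class TopDownAgent:
    """Acts by consulting ethical rules implanted by the designer."""
    def __init__(self, rules):
        self.rules = rules  # functions: situation -> action or None

    def choose(self, situation):
        for rule in self.rules:
            action = rule(situation)
            if action is not None:
                return action
        return "do_nothing"  # default when no implanted rule applies


class BottomUpAgent:
    """Develops action preferences gradually from reward feedback."""
    def __init__(self, actions, learning_rate=0.1):
        self.scores = {a: 0.0 for a in actions}
        self.lr = learning_rate

    def choose(self):
        # Prefer the action with the highest learned score.
        return max(self.scores, key=self.scores.get)

    def feedback(self, action, reward):
        # Nudge the action's score toward the observed reward.
        self.scores[action] += self.lr * (reward - self.scores[action])


# Top-down: a single pre-implanted rule governs the choice.
rescue_rule = lambda s: "rescue" if s.get("patient_critical") else None
td = TopDownAgent([rescue_rule])

# Bottom-up: repeated reward signals shape the preference over time.
bu = BottomUpAgent(["rescue", "withdraw"])
for _ in range(50):
    bu.feedback("rescue", reward=1.0)     # designer rewards rescuing
    bu.feedback("withdraw", reward=-1.0)  # and penalizes withdrawing
```

After training, both agents favor rescuing, but for different reasons: the top-down agent because a rule dictates it, the bottom-up agent because its reward history shaped that preference. This mirrors the paragraph's point that the top-down route imports the designer's chosen ethical theory, while the bottom-up route resembles moral habituation.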
According to M. Anderson and S. Anderson, the goal of machine ethics is to clearly define abstract, general moral principles, so that artificial intelligence can appeal to these principles when choosing actions or reflecting on the appropriateness of its own behavior. [5] They believe we cannot formulate specific rules for every situation that may arise. "Designing abstract and general moral principles for machines, rather than dictating how machines should take the correct action