Asimov himself believed that his Three Laws became the basis for a new view of robots that moved beyond the "Frankenstein complex."[citation needed] His view that robots are more than mechanical monsters eventually spread throughout science fiction.[according to whom?] Stories written by other authors have depicted robots as if they obeyed the Three Laws, but tradition holds that only Asimov could quote the laws explicitly.[according to whom?] Asimov believed the Three Laws helped foster the rise of stories in which robots are "lovable" – Star Wars being his favorite example.[58] Where the laws are quoted verbatim, as in the Buck Rogers in the 25th Century episode "Shgoratchx!", it is not uncommon for Asimov to be mentioned in the same dialogue, as in the Aaron Stone pilot, where an android declares that it operates under Asimov's Three Laws. By contrast, the 1960s German TV series Raumpatrouille – The Fantastic Adventures of the Spaceship Orion bases its third episode, "Guardians of the Law," on Asimov's Three Laws without naming the source. The ambiguity of the laws has led authors, including Asimov himself, to explore how they could be misinterpreted or misapplied. One problem is that they do not actually define what a robot is. As research pushes the boundaries of technology, emerging branches of robotics are looking at ever more molecular devices. Asimov's three-law-compliant robots (Asenion robots) can suffer irreversible mental breakdown if they are forced into situations where they cannot obey the First Law, or if they discover that they have unknowingly violated it.
The first example of this failure mode appears in the story "Liar!", which introduced the First Law itself along with failure by dilemma – in this case, the robot will harm humans if it tells them something and harm them if it does not.[44] This failure mode, which often irreparably ruins the positronic brain, plays an important role in Asimov's SF mystery novel The Naked Sun. There Daneel describes activities that violate one law while upholding another, such as overloading certain circuits in a robot's brain – the equivalent of pain in humans. His example is forcefully ordering a robot to perform a task outside its normal parameters, one it has been told to forgo in favor of a robot specialized for that task.[45] At the other end of the spectrum are robots designed for military environments: devices built for espionage, bomb disposal, or transporting goods. These may appear to comply with Asimov's laws, especially since they are created to reduce the risk to human lives in very dangerous settings. The 2019 Netflix original series Better than Us includes the three laws in the opening of episode 1. But it is only a small step to the assumption that the ultimate military goal would be to create armed robots that could be deployed on the battlefield.
In that situation, the First Law – do not harm humans – becomes extremely problematic. The role of a military is often to save the lives of soldiers and civilians, frequently by harming its enemies on the battlefield. As a result, the laws may need to be examined from different angles or interpretations. The laws are as follows: "(1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law." Asimov later added another rule, known as the fourth or Zeroth Law, which supersedes the others. It states that "a robot may not harm humanity, or, by inaction, allow humanity to come to harm." The Laws of Robotics are portrayed as something like a human religion and are referred to in the language of the Protestant Reformation, with the set of laws containing the Zeroth Law, known as the "Giskardian Reformation," set against the original "Calvinian Orthodoxy" of the Three Laws. Zeroth-Law robots under the control of R. Daneel Olivaw are continually struggling against "First Law" robots, which deny the existence of the Zeroth Law and promote agendas different from Daneel's.[27] Some of these agendas are based on the first clause of the First Law ("A robot may not injure a human being...") and advocate strict non-interference in human politics so as not to cause harm unknowingly. Others are based on the second clause ("...or, through inaction, allow a human being to come to harm") and argue that robots should openly become a dictatorial government to protect humans from all potential conflict or catastrophe.
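The precedence just described – the Zeroth Law superseding the First, which supersedes the Second, which supersedes the Third – can be sketched as a simple priority check. Everything in this sketch (the `Action` fields, `severity`, `choose`) is a hypothetical illustration, not an implementation from Asimov's fiction or any real robotics system:

```python
# A minimal sketch of the laws as a strict priority ordering,
# assuming a toy Action model; all names here are illustrative.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False   # Zeroth Law concern
    harms_human: bool = False      # First Law concern
    disobeys_order: bool = False   # Second Law concern
    endangers_self: bool = False   # Third Law concern

def severity(action):
    """Return 0 for a lawful action, else 4 minus the index of the
    highest-priority law violated (Zeroth scores 4, Third scores 1)."""
    checks = [action.harms_humanity, action.harms_human,
              action.disobeys_order, action.endangers_self]
    for i, violated in enumerate(checks):
        if violated:
            return 4 - i
    return 0

def choose(actions):
    """Prefer lawful actions; when every option breaks some law,
    break the least important one (e.g. the Third rather than the Second)."""
    return min(actions, key=severity)

# Example: an order that endangers the robot. Obeying violates only the
# Third Law (severity 1); refusing violates the Second (severity 2),
# so the robot obeys and sacrifices itself.
obey = Action(endangers_self=True)
refuse = Action(disobeys_order=True)
```

Under this toy model, a situation where every available action scores at the First-Law level or worse corresponds to the "failure by dilemma" the article describes, in which no choice is acceptable.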
In October 2013, at an EUCog meeting,[56] Alan Winfield proposed a revision of the five laws that the EPSRC/AHRC working group had published with commentary in 2010.[57] The laws of robotics are a set of laws, rules, or principles intended as a basic framework for the behavior of robots that have some degree of autonomy. Robots of this level of complexity do not yet exist, but they have long been featured in science fiction and film, and they are the subject of active research and development in robotics and artificial intelligence. Asimov once added a "Zeroth Law" – so named to continue the pattern in which lower-numbered laws supersede higher-numbered ones – stating that a robot must not harm humanity. The robot character R. Daneel Olivaw was the first to give the Zeroth Law a name, in the novel Robots and Empire;[16] however, the character Susan Calvin articulates the concept earlier, in the short story "The Evitable Conflict." David Woods said, "Our laws are a little more realistic and therefore a little more boring," and that "the philosophy was, 'Sure, people make mistakes, but robots will be better – a perfect version of ourselves.' We wanted to write three new laws to get people thinking about the human-robot relationship in more realistic, grounded ways."[55] In this context a robot could operate only in a very limited domain, and any rational application of the laws would be severely restricted. Even this might not be possible with current technology, since a system that could reason and make decisions based on the laws would require significant computing power.
Marc Rotenberg, president and executive director of the Electronic Privacy Information Center (EPIC) and professor of privacy law at Georgetown Law, argues that the laws of robotics should be expanded to include two new laws. David Langford[51] has proposed a set of ironic laws. In the 1990s, Roger MacBride Allen wrote a trilogy set in Asimov's fictional universe. Each title carries the prefix "Isaac Asimov's," because Asimov had approved Allen's outline before his death.[citation needed] These three books, Caliban, Inferno, and Utopia, introduce a new set of Three Laws. The so-called New Laws are similar to Asimov's originals, with the following differences: the First Law is modified to remove the "inaction" clause, the same modification made in "Little Lost Robot"; the Second Law is modified to require cooperation instead of obedience; the Third Law is modified so that it is no longer superseded by the Second (i.e., a "New Law" robot cannot be ordered to destroy itself); finally, Allen adds a Fourth Law instructing the robot to "do whatever it likes," as long as this does not conflict with the first three laws. The philosophy behind these changes is that "New Law" robots should be partners rather than slaves to humanity, according to Fredda Leving, who designed these New Law robots. According to the first book's introduction, Allen devised the New Laws in discussion with Asimov himself. However, the Encyclopedia of Science Fiction says, "With permission from Asimov, Allen rethought the Three Laws and developed a new set."[25] The futurist Hans Moravec (a prominent figure in the transhumanist movement) has proposed that the laws of robotics should be adapted to "corporate intelligences" – the corporations driven by AI and robotic manufacturing power that Moravec believes will arise in the near future.
[47] By contrast, David Brin's novel Foundation's Triumph (1999) suggests that the Three Laws could decay into obsolescence: robots use the Zeroth Law to rationalize away the First Law, and robots hide themselves from human beings so that the Second Law never comes into play. Brin even portrays R. Daneel Olivaw worrying that, should robots continue to reproduce themselves, the Three Laws would become an evolutionary handicap and natural selection would sweep the laws away – Asimov's careful foundation undone by evolutionary computation. Although the robots would not be evolving through mutation – since they would have to follow the Three Laws while designing their successors, ensuring the laws' prevalence[54] – design flaws or construction errors could functionally take the place of biological mutation. The laws proposed by Asimov were designed to protect humans from their interactions with robots; authors other than Asimov have often created additional laws. The Three Laws of Robotics are rules developed by science fiction author Isaac Asimov, who sought to create an ethical system for humans and robots. The laws first appeared in his short story "Runaround" (1942) and later became highly influential in the science fiction genre. They have since found relevance in discussions of technology, including robotics and AI.
In a 2007 guest editorial in the journal Science on the topic of "Robot Ethics," SF author Robert J. Sawyer argues that since the military is a major source of funding for robotics research, it is unlikely such laws would be built into robot designs.