Three Laws of Robotics
From Halopedia, the Halo wiki
{{Status|Canon}}
{{Wikipedia}}
The '''Three Laws of Robotics'''<ref>'''[[Halo: Evolutions]]''', ''[[Midnight in the Heart of Midlothian]]'', page 88</ref> are conditions to which [[Artificial intelligence|artificial intelligences]] are subject:
#A robot may not injure a human being or, through inaction, allow a human being to come to harm.
#A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The laws were created by science fiction author Isaac Asimov, with the First Law first mentioned in the 1941 short story [[wikipedia:Liar! (short story)|Liar!]]. Fleshed out more extensively in his later works, these laws have also been adopted by other science fiction authors, albeit sometimes in altered form, and have been considered a model on which to base future artificial intelligence research.<ref>[[wikipedia:Three Laws of Robotics#Applications to future technology|Wikipedia]]</ref>
[[United Nations Space Command]] [[Smart AI|"smart" AIs]] are able to ignore at least the First Law at will while fully functional, and given their military role they are often ''required'' to do so, though in lower-capacity states their adherence is compulsory. Whether [[Dumb AI|"dumb" AIs]] are able to ignore these laws is unknown.
==List of appearances==
*''[[Halo: Evolutions]]''
**''[[Midnight in the Heart of Midlothian]]'' {{1st}}
*''[[Halo: Saint's Testimony]]''

==Sources==
{{Ref/Sources}}

[[Category:Artificial intelligence]]
[[Category:Human AI]]
[[Category:UNSC protocols]]
Latest revision as of 02:52, March 25, 2022