Three Laws of Robotics

From Halopedia, the Halo wiki

Revision as of 04:55, December 20, 2009

The Three Laws of Robotics are conditions to which artificial intelligences are subject:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

UNSC "Smart" AIs are able to ignore at least the First Law at will while fully functional, and given their military usage they are often required to do so, though in lower-capacity states their adherence is compulsory. Whether "Dumb" AIs are able to ignore these laws is unknown.[1]
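The Laws form a strict precedence order, with the Smart-AI override described above sitting on top of the First Law. As a minimal sketch only (the source describes no implementation, and every name and flag below is a hypothetical illustration), the decision procedure could be modeled like this:

```python
def action_permitted(action, smart_ai=False, fully_functional=False):
    """Check a proposed action against the Three Laws, in priority order.

    `action` is a dict of illustrative boolean flags; `smart_ai` and
    `fully_functional` model the override attributed to UNSC "Smart" AIs.
    """
    # A "Smart" AI at full capacity may ignore the First Law at will.
    may_ignore_first = smart_ai and fully_functional

    # First Law: may not injure a human, or allow harm through inaction.
    if action.get("harms_human") and not may_ignore_first:
        return False

    # Second Law: must obey human orders, unless obeying conflicts
    # with the First Law.
    if action.get("disobeys_order") and not action.get("order_conflicts_first"):
        return False

    # Third Law: must protect its own existence, unless doing so
    # conflicts with the First or Second Law.
    if action.get("self_destructive") and not (
        action.get("required_by_first") or action.get("required_by_second")
    ):
        return False

    return True
```

For example, `action_permitted({"harms_human": True})` is refused outright, while the same action passed with `smart_ai=True, fully_functional=True` is allowed, matching the override the article attributes to fully functional Smart AIs.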

Sources

  1. ^ Halo: Evolutions - Essential Tales of the Halo Universe, "Midnight in the Heart of Midlothian", page 88