PORTFOLIO
IT Portfolio, Wiliam Lloyd
Asimov’s Laws Of Robotics
Introduction
Asimov’s three laws of robotics were introduced in his 1942 science fiction short story “Runaround”. While they were used as a main plot device in his fiction, they serve some purpose in the real world too. These rules are intended to restrict how robots can operate in order to protect the humans around them from harm. The three laws are:
1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These three laws were later expanded to include a Zeroth Law: A robot may not injure humanity or, through inaction, allow humanity to come to harm.
These laws are not automatically known by a robot; they must be programmed in by its creator, so whether a robot with the capacity to know these laws will follow them depends on its creator and on its programming. Another consideration is that some robots may simply not know that they are causing harm, because they lack the programming or the capacity to recognise the harm and to stop themselves from causing it.
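To make this concrete, the three laws can be imagined as a prioritised set of checks in a robot’s control software. The sketch below (in Python) is purely illustrative, not real robot code: the function and parameter names are assumptions, and it presumes the robot can already detect harm, orders and danger, which is the genuinely hard part a creator would have to program.

def may_perform(action_harms_human: bool,
                is_human_order: bool,
                endangers_robot: bool) -> bool:
    # Hypothetical law-aware decision: True means the action is allowed.
    # First Law has absolute priority: never allow harm to a human.
    if action_harms_human:
        return False
    # Second Law: obey a human order (harmful orders were already
    # rejected by the First Law check above).
    if is_human_order:
        return True
    # Third Law: avoid self-endangering actions, since the higher
    # laws are not in play at this point.
    return not endangers_robot

# Example: an order that would harm a human is refused outright.
print(may_perform(action_harms_human=True, is_human_order=True,
                  endangers_robot=False))  # prints False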
A scenario relating to the First Law
bebionic3 would violate the First Law if it were to malfunction while the operator was, for example, cutting vegetables with the arm. If the hand broke and dropped the knife it was holding onto the operator’s foot, it would be letting a human being come to harm through inaction and would thus be breaking the First Law.
A scenario relating to the Second Law
If bebionic3 were being operated by a soldier or police officer, it might be used by that person to fire a weapon that could kill someone; for the robot to carry out such an order would be to break the First Law, and the Second Law’s exception clause would therefore require it to refuse. However, if the robot refused this action (firing the weapon as commanded by a human), it might inadvertently bring harm to the person operating it, for example by leaving them defenceless, thus also breaking the First Law. This is an interesting conundrum, but bebionic3 lacks the capability to make such decisions and is programmed to always follow the commands of its operator. In this example the robot would therefore follow the command to discharge the weapon and could potentially kill or harm a human.
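As a hedged illustration of this contrast (bebionic3’s real firmware is not public, so everything below is a hypothetical Python sketch), a law-aware robot and a purely command-following device would respond to the same order very differently:

def law_aware_execute(command: str, would_harm_human: bool) -> bool:
    # A hypothetical law-aware robot refuses any order that violates
    # the First Law, exercising the Second Law's exception clause.
    if would_harm_human:
        print(f"Refusing '{command}': it would violate the First Law.")
        return False
    print(f"Executing '{command}'.")
    return True

def prosthesis_execute(command: str) -> bool:
    # A bebionic3-style device cannot judge harm at all, so it simply
    # carries out whatever its operator commands.
    print(f"Executing '{command}' as commanded by the operator.")
    return True

law_aware_execute("pull trigger", would_harm_human=True)  # order refused
prosthesis_execute("pull trigger")                        # order carried out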
A scenario relating to the Third Law
As bebionic3 is entirely reliant on its operator and lacks the ability to know and understand these rules, it is incapable of protecting itself; the protection of its existence is in the hands of the operator.
Comparison
Overall, bebionic3 lacks the ability to understand these rules and to make its own decisions about what is right; therefore, whether the rules are obeyed or broken is in the hands of the operator. For a robot to truly break these rules, it must be fully aware of them and of what it is doing. In short, these rules primarily affect adaptive robots, which bebionic3 is not.
Conclusion
In conclusion, the Third Law only applies to adaptive (intelligent) robots, as non-adaptive robots are not aware that their existence is in danger and will continue to follow the commands given by the operator, whether or not those commands break the first two laws, until they shut down or break. Non-adaptive robots lack the ability to be self-aware and to make their own decisions, whereas adaptive robots are capable of making their own decisions but may not be aware of the rules, or that they are indeed causing harm.