
Results found: 1
Search results

Search: in the keywords: three laws of robotics
Ethics in Progress | 2021 | vol. 12 | issue 1 | pages 134-151 | EN
The main objective of this paper is to discuss people’s expectations towards social robots’ moral attitudes. Conclusions are based on the results of three selected empirical studies, which used stories of robots (and humans) acting in hypothetical scenarios to assess the moral acceptance of their attitudes. The analysis indicates both differences and similarities in expectations towards robot and human attitudes. Decisions to remove someone’s autonomy are less acceptable from robots than from humans. In certain circumstances, the protection of a human’s life is considered more morally right than the protection of a robot’s existence. Robots are also more strongly expected to make utilitarian choices than human agents. However, there are situations in which people make consequentialist moral judgements when evaluating both human and robot decisions. Both robots and humans receive a similar overall amount of blame. Furthermore, it can be concluded that robots should protect their existence and obey people, but in some situations they should be able to hurt a human being. Differences in results can be partially explained by the character of the experimental tasks. The present findings might be of considerable use in implementing morality into robots, as well as in the legal evaluation of their behaviours and attitudes.