I think it is interesting to reflect on guilt and responsibility in relation to the impact of autonomous robots, that is, artificial agents that can perform human tasks without direct human control [1]. Even if robotics technology is not yet ready for fully autonomous robots, it is good practice to think about what could and should happen in the future. Intelligent machines will become autonomous, meaning that designers, developers, and deployers will not have full understanding of, or control over, the behavior of such technologies. This makes it challenging to assign responsibility, because it seems unfair to blame humans for the actions and consequences of robots.

Research shows that designers and developers of AI systems in startups do not feel responsible for the unintended consequences of technology [2]. For example, a robot designer does not feel accountable for the impact of an autonomous bartender robot on the human bartenders, clients, service workers, and managers of an automated hotel bar. The robot designer is just doing their job: developing a new technology that solves problems.

I think a distinction between guilt and responsibility is interesting in this context because responsibility, unlike guilt, is not a feeling or an emotion. When designers, developers, salespeople, and anyone else involved in the long chain of stakeholders say they do not feel responsible for the impact of autonomous robots, they might be thinking about guilt instead. Guilt is an unhappy emotion that arises after you consciously or subconsciously feel that you did something wrong. This might be why robot designers and developers do not feel accountable for the social impact of the technology they produce: they are just doing their job, so why would that be wrong? They do not feel guilty, and therefore they do not acknowledge responsibility.

Responsibility is not synonymous with guilt. It is not an emotion that arises when you have done something wrong; it comes from taking accountability regardless of your feelings or beliefs about right and wrong. This means that people are liable even when there is no obvious reason to feel guilty. I think it is relevant to understand that although producing autonomous robots is not wrong, and thus does not make you guilty, it does make you responsible, because technology is not only technically constructed but also socially and politically constructed [3]. The impact of robots does not end with the one-to-one human–robot interaction; it continues in sociotechnical systems [4].

What does it mean to be responsible? To be accountable. To respond. To take action that protects someone else's interests. For what, then, are autonomous robot producers responsible? For anticipating impacts, reflecting, engaging in dialogue, and influencing the direction of technology [5]. And to whom are they responsible? Being responsible implies responding to someone: including robot users in development processes and being open and able to answer their questions [6]. Robot implementers and users should be responsible too. We would all have a share of responsibility; we would be co-responsible [3], as we are today with climate change and many other societal problems.

We need to change this worldview, from feeling guilty to being responsible, if we want a smooth and positive transition to the use of autonomous robots when the moment comes.



[1] D. G. Johnson, “Technology with No Human Responsibility?,” J Bus Ethics, vol. 127, no. 4, pp. 707–715, Apr. 2015, doi: 10.1007/s10551-014-2180-1.

[2] A. Rojas and A. Tuomi, “Reimagining the sustainable social development of AI for the service sector: the role of startups,” JEET, vol. 2, no. 1, pp. 39–54, Nov. 2022, doi: 10.1108/JEET-03-2022-0005.

[3] J. Stilgoe, R. Owen, and P. Macnaghten, “Developing a framework for responsible innovation,” Research Policy, vol. 42, no. 9, pp. 1568–1580, Nov. 2013, doi: 10.1016/j.respol.2013.05.008.

[4] A. van Wynsberghe, “Responsible Robotics and Responsibility Attribution,” in Robotics, AI, and Humanity, J. von Braun, M. S. Archer, G. M. Reichberg, and M. Sánchez Sorondo, Eds. Cham: Springer International Publishing, 2021, pp. 239–249. doi: 10.1007/978-3-030-54173-6_20.

[5] B. C. Stahl and M. Coeckelbergh, “Ethics of healthcare robotics: Towards responsible research and innovation,” Robotics and Autonomous Systems, vol. 86, pp. 152–161, Dec. 2016, doi: 10.1016/j.robot.2016.08.018.

[6] M. Coeckelbergh, Robot Ethics. Cambridge, MA: The MIT Press, 2022.
