Here, I refer to AI as any ‘intelligent’ system, mainly embodied but also virtual, that allows the automation of tasks with minimal or no human intervention in uncontrolled social environments: the type of AI that is, or might soon be, displacing humans from some tasks at work. While some suggest that AI could stimulate economic development and create new jobs, others fear that it may devastate the economy [1]. Work is a very important part of our lives, not only because we spend most of our lifetime working but also because it is one of the main sources of meaning in life [2] and is considered a fundamental human need [3]. Through our day-to-day work, we can achieve things that make us feel fulfilled and complete.

Then, what if AI takes that opportunity away from us?

Healthcare, for example, is a particularly vulnerable setting for AI to step into. Handing over activities that healthcare workers find meaningful to embodied AI technologies such as robots can lead to a loss of meaningfulness in their lives [2]. A nurse could find herself or himself asking “How do I connect with patients if there is an intelligent robot taking care of them instead of me?” or “How am I going to contribute to the patient’s well-being if this robot is doing what I used to do?”. AI may thus represent a threat to meaningful work, because tasks that workers once found meaningful are no longer part of their routine. However, there is another possibility: intelligent robots could be an opportunity for meaningful work [4]. In that case, the nurse could be thinking “These robots are helping me with some tasks that I actually liked, but, in the end, this benefits the patients and that makes me feel good. Also, I have more time to focus on other meaningful activities”. As Sven Nyholm and Markus Rüther [2] put it, if AI systems take over activities that we find meaningful, then we have to find other meaningful activities; otherwise, we are left with a meaningfulness gap. According to the authors, the other option is that AI displaces humans only from mundane tasks. For this to work, AI technologies must be able to take over these less meaningful activities without interfering with the activities that we consider meaningful [2].

Co-designing interactions for meaningful work with AI.

As we can see, there are many possible scenarios and “what ifs” for the many ways in which humans may feel about the automation of work. However, I believe that a design approach can bring more certainty and direction to the promotion of meaningful work. Design can be seen and defined from different perspectives; here, I talk about design as a process of arranging or rearranging elements in the best possible way to solve a problem. Thus, when I propose co-designing interactions, I refer to a collaborative organization and reorganization of the elements that form human-robot interactions. What are these elements? The features of the AI systems, the humans involved, the bystanders, the physical space, the roles of the interactants, and the allocated tasks [5]. This list is not exhaustive, but it is a good start.

I think that interdisciplinarity may be key to navigating the ‘meaningful work with AI’ challenge. The design and development of AI systems, as well as their roles and task allocation, could be decided by a group that involves, or at least represents, all stakeholders. Discussing which tasks are meaningful, which tasks to automate, and how to balance the trade-offs can allow AI designers and developers to test their assumptions about what they thought could work in real life [1], and thus automate work in a dynamic and democratic way. The dynamism comes from the fact that many iterations and sessions of dialogue and negotiation have to take place constantly as part of a holistic process [6]. Meaningful work is a complex concept: most people may find it difficult to define, or to articulate if and how their own work is meaningful. Thinking about meaningful work invites self-reflection, so designing AI systems with meaningful work in mind is not an easy task, but it is essential if we want a sustainable use of AI at work.

Is co-designing robotized interactions an ethical or an organizational endeavor?

We need mindful task allocation that considers the meaningfulness of work. But who decides which tasks should be assigned to AI systems? Is it an ethical or an organizational discussion? I think it is both. Ethics can provide the general criteria on which to base our decisions about how to design human-robot interactions. For example, general criteria for good care involve a relevant amount of human contact [7]. Therefore, when assigning tasks between robots and healthcare workers, we need to make sure that the workers have enough human contact, and for that, we need to consult them rather than just assume that management knows the answer.

Ethical discussions will allow us to design more humane work. We don’t need AI technologies to be the ones designing our work for us, with systems that predict and manage workflows for the sake of efficiency. As Kate Crawford [8] puts it, “Rather than representing a radical shift from established forms of work, the encroachment of AI into the workplace should properly be understood as a return to older practices of industrial labor exploitation established in the 1890s and early 20th century”. Indeed, research has shown that when work is designed only to improve efficiency, satisfaction decreases, but when both efficiency and satisfaction are considered, the trade-offs are avoided [9]. This is why we need humans behind the redesign of work. Humans need to be engaged in the co-creation of meaning to feel that our lives are valuable [3].




[1] A. Rojas and A. Tuomi, “Reimagining the sustainable social development of AI for the service sector: the role of startups,” JEET, vol. 2, no. 1, pp. 39–54, Nov. 2022, doi: 10.1108/JEET-03-2022-0005.

[2] S. Nyholm and M. Rüther, “Meaning in Life in AI Ethics—Some Trends and Perspectives,” Philos. Technol., vol. 36, no. 2, p. 20, Jun. 2023, doi: 10.1007/s13347-023-00620-z.

[3] R. Yeoman, “Conceptualising Meaningful Work as a Fundamental Human Need,” J Bus Ethics, vol. 125, no. 2, pp. 235–251, Dec. 2014, doi: 10.1007/s10551-013-1894-9.

[4] J. Smids, S. Nyholm, and H. Berkers, “Robots in the Workplace: a Threat to—or Opportunity for—Meaningful Work?,” Philos. Technol., vol. 33, no. 3, pp. 503–522, Sep. 2020, doi: 10.1007/s13347-019-00377-4.

[5] K. Fischer et al., “Integrative Social Robotics Hands-on,” IS, vol. 21, no. 1, pp. 145–185, Jan. 2020, doi: 10.1075/is.18058.fis.

[6] R. V. Zicari et al., “Co-Design of a Trustworthy AI System in Healthcare: Deep Learning Based Skin Lesion Classifier,” Front. Hum. Dyn., vol. 3, p. 688152, Jul. 2021, doi: 10.3389/fhumd.2021.688152.

[7] M. Coeckelbergh, Robot Ethics. Cambridge, Massachusetts: The MIT Press, 2022.

[8] K. Crawford, Atlas of AI: power, politics, and the planetary costs of artificial intelligence. New Haven: Yale University Press, 2021.

[9] F. P. Morgeson and M. A. Campion, “Minimizing Tradeoffs When Redesigning Work: Evidence from a Longitudinal Quasi-Experiment,” Personnel Psychology, vol. 55, no. 3, pp. 589–612, Sep. 2002, doi: 10.1111/j.1744-6570.2002.tb00122.x.

Originally published at:
