Sci-Tech

Micromanaged by machines? Employees demand human supervision

DNVN - Research from Cornell University indicates that organizations using AI to monitor employees' productivity and behavior can expect workers to complain more, be less productive, and want to quit more often, unless the technology is framed as a means of supporting their development.


The research indicates that surveillance tools make people feel a greater loss of autonomy than human oversight does. According to the researchers, businesses and other organizations adopting these rapidly evolving technologies to assess employee behavior should consider their unintended consequences, which can provoke resistance and hurt performance. They also see an opportunity to secure buy-in: people under surveillance worry that automated assessments will lack context and accuracy, but they respond differently if they believe the tools are intended to help them rather than to evaluate them.

Emily Zitek, an associate professor of organizational behavior, said that when artificial intelligence and other advanced technologies are implemented for developmental purposes, people appreciate the chance to learn from them and improve their performance. "The problem occurs when they feel like an evaluation is happening automatically, straight from the data, and they're not able to contextualize it in any way."

The researchers conducted four experiments involving nearly 1,200 participants in total. In the first study, participants were asked to recall and describe instances in which they had been monitored and evaluated by either type of surveillance. They reported feeling less autonomous under AI and were more likely to engage in "resistance behaviors."

In two studies, participants collaborated to generate ideas for a theme park, then individually generated ideas for a specific section of the park. They were told their work would be monitored by either a research assistant or AI, represented as an "AI Technology Feed" participant in Zoom videoconferences. Partway through, either the human assistant or the "AI" sent messages instructing participants to work harder and generate more ideas. In surveys after one study, more than 30% of participants criticized the AI surveillance, while only about 7% criticized the human monitoring.

Beyond the complaints and criticism, the researchers found that people who believed they were being monitored by AI generated fewer ideas, indicating worse performance.

"Even though the participants got the same message in both cases that they needed to generate more ideas, they perceived it differently when it came from AI rather than the research assistant," according to Zitek. "The AI surveillance caused them to perform worse in multiple studies."

In a fourth study, participants imagining themselves as call-center employees were told that a sample of their calls would be analyzed by humans or by AI. For some, the analysis would be used to evaluate their performance; for others, it would provide developmental feedback. In the developmental scenario, participants no longer reported a greater intention to quit, and they no longer perceived algorithmic surveillance as infringing more on their autonomy.

"Organizations trying to implement this kind of surveillance need to recognize the pros and cons," Zitek emphasized. "They should do what they can to make it either more developmental or ensure that people can add contextualization. If people feel like they don't have autonomy, they're not going to be happy."

Journal Reference: Rachel Schlund, Emily M. Zitek. Algorithmic versus human surveillance leads to lower perceptions of autonomy and increased resistance. Communications Psychology, 2024; 2 (1) DOI: 10.1038/s44271-024-00102-8

Thuy Duong
 
 
