Establishing human-robot harmony in the workplace is not always easy. Beyond the common fear that automation will take human jobs, robots sometimes just mess up. When that happens, rebuilding trust between robots and their fellow humans can be a tricky business.
However, new research sheds light on how robotic workers can restore humans' trust after a mistake. In large part, the study suggests that humans have an easier time trusting a robot that makes a mistake if the machine looks somewhat human and offers some kind of explanation, according to Lionel Robert, associate professor at the School of Information at the University of Michigan.
When robots mess up
Even though robots are made of metal and plastic, Robert said we need to start looking at our interactions with them in social terms, especially if we want humans to trust and rely on their automated colleagues. “Humans mess up and are able to keep working together,” he told Ars.
To study how hardworking robots can regain trust after such mistakes, Robert and Connor Esterwood, a doctoral student at U of M, recruited 164 participants online via Amazon Mechanical Turk. They then set up a simulation, much like a video game, in which participants worked alongside a robot loading boxes onto a conveyor belt. The robot picked up the boxes, while the humans reviewed the numbers on the boxes to make sure they were correct. Half of the participants worked with a human-like robot, while the rest worked with a robotic arm.
The virtual robots were programmed to periodically pick up the wrong boxes and then issue a statement designed to restore trust with their human coworkers. These statements fell into four categories: apologies, denials, explanations, and promises.
After the simulation, the participants filled out a questionnaire about their experience. Part of the questionnaire was designed to measure the effectiveness of each trust repair strategy along three metrics: ability (can the robot do the task?), integrity (does it do the task?), and benevolence (will it do the task the right way?).
Robert noted that previous studies have investigated trust between humans and robots, but this study differs in that it includes explanation as a repair strategy and distinguishes between human-like and non-human-like robots. It also breaks the concept of trust down into the three metrics above. After all of the questionnaires were completed, Robert and Esterwood compiled and reviewed the data.
Apologetic automata
The team found that the robots' different approaches to regaining trust had different impacts on the three metrics, as did the form the robots took. Overall, respondents found it easier to trust the more anthropomorphic robot after it messed up. This was especially true when the human-like robot used the explanation strategy, which was particularly effective at convincing humans of the robot's integrity. The human-like robot also had an easier time restoring benevolence when offering apologies, denials, and explanations.
Explaining an error might work better for human-like robots because it removes some of the ambiguity about how the robot works. We don't even fully understand how human beings work, Esterwood said. "A smart dishwasher, we think we understand. A human-like robot, we may not be able to fully understand," he told Ars. As such, an explanation might be seen as more transparent when it comes from an agent that appears more complex than an arm over a box.
But the robotic arm found it easier to restore trust in some cases. For example, these faceless automatons had an easier time restoring integrity and benevolence with promises than their human-imitating counterparts.
In the future, Robert and Esterwood hope to expand this research. They had originally planned to conduct their study in person using virtual reality, but the pandemic made that impossible. They also hope to see how different combinations of trust repair strategies might work together, such as an explanation paired with an apology.
Get the most out of your machines
Robots will increasingly be deployed in the workforce, Robert says, so workers will need to be able to trust them. Additionally, he noted, some employers might hope to deploy robots that learn on the job, a process that will involve making mistakes.
If a worker cannot trust a robotic colleague, or feels uncomfortable around it, those feelings can create stress and harm their well-being, leaving the worker less happy and less efficient overall. In the extreme, it could mean constantly double-checking the robot's work or getting rid of it altogether. Another risk: "You just put them in the corner and ignore them," Esterwood said.
However, according to Kasper Hald, a postdoctoral researcher in the Department of Architecture, Design and Media Technology at Aalborg University, it is more important to keep human-robot trust at an appropriate level than to trust blindly.
Suppose, for example, that you work in a meatpacking plant alongside a robot that helps with the more strenuous and repetitive tasks, the kind that can lead to musculoskeletal disorders later in life. The machine is quite powerful and works quickly. If you trust it too little, you might hesitate to use it. But if you trust it too much, you might get too close or stop paying attention to its position relative to your hands. That could put workers at risk.
"Especially for robots in the workplace, it's just as much about maintaining an appropriate level of trust, not just maintaining trust, but keeping it at an appropriate level," Hald told Ars.
DOI: Deep Blue, 2021. 10.7302/1675 (About DOIs).