Appearance of Artificial Agents: Reassurance or Risk Perception?

By Prof. Kimmy Chan and Dr. Shirley Li

As the artificial intelligence era unfolds, we are increasingly called upon to interact with embodied artificial agents (i.e., robots). Unlike chatbots, which remain confined to our screens, robots often take physical forms that mimic human beings. But while we may accept a human-looking robot welcoming us at a hotel's front desk, can we feel the same level of comfort with one used for fire rescue or surgical procedures? Not so fast, concludes a new paper[1]: it may still be some time before we trust a semiconductor-powered contraption that looks like us to come to our aid in dangerous situations.

Numerous robotics companies have endowed artificial agents designed for high-risk scenarios with human-like attributes, such as equipping rescue robots with two legs and a somewhat human appearance. This approach is rooted in the belief that artificial agents bearing human-like features garner a more favorable reception from users, and it appears justified by research demonstrating that we tend to respond positively to artificial agents that exhibit signs of intelligence or empathy. While such soft skills undeniably matter for machines tasked with addressing customer complaints, one must ask whether non-human attributes linked to robustness, strength, and durability (such as fitting a rescue robot with caterpillar tracks instead of human-looking legs) might not come in handier in times of crisis.

After conducting a series of experiments involving numerous participants recruited via Prolific and MTurk, as well as actual consumers, the researchers reached a clear conclusion: in hazardous circumstances, individuals tend to view artificial agents with human-like appearances as physically less reassuring than those without such features. This observation held true across diverse demographic groups – including Hong Kong undergraduate students, residents of the United States and the United Kingdom, and parents in China – and across various artificial agents, such as self-driving vehicles, home security robots, and rescue robots.

The study’s insights have significant implications for businesses. Firstly, firms should be careful when incorporating human-like features into artificial agents, since these features (however cute they might seem) may end up projecting an image of weakness, especially to risk-conscious consumers. Secondly, if a firm’s existing artificial agents already possess human-like features, its marketing team should downplay the likelihood of dangerous scenarios in advertisements and emphasize instead aspects unrelated to risk, such as how smooth the ride is in ads for self-driving vehicles. (Interestingly, participants assessing a car’s safety perceived a self-driving car given an aggressive “face” more positively than one adorned with a friendly face.) Finally, the negative impact of human-like physical attributes may be mitigated by directing consumer attention towards a robot’s cognitive and socio-emotional skills: highlighting an artificial agent’s ability to adapt to changing threats, or to foster trust in vulnerable patients, can help alleviate safety concerns.

While technology moves forward at lightning speed, humanity’s instincts seem set to change far more slowly, so that when faced with danger, we would still prefer to be rescued by a car-shaped Transformer than by one made to resemble Hello Kitty. Marketers beware!


Reference:

[1] Li, X., Kim, S., Chan, K. W., & McGill, A. (2023). Detrimental effects of anthropomorphism on the perceived physical safety of artificial agents in dangerous situations. International Journal of Research in Marketing, 40(4), 841–864. https://doi.org/10.1016/j.ijresmar.2023.07.002