Why Ethical Factors Influence the User Acceptance of Automated Vehicles

Introduction

Automated driving is becoming increasingly relevant in research, industry, and society. Alongside technological development, questions concerning ethical decision-making and user acceptance are gaining importance.
The bachelor thesis by Lea Merz examines which ethical factors influence the user acceptance of automated driving systems and how these factors can be connected to an established technology acceptance model.


Background: Acceptance Models and the Research Gap

The Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2003) is a widely used model for explaining technology adoption, focusing on performance expectancy, effort expectancy, social influence, and facilitating conditions.

However, research on automated driving has shown that these components alone are not sufficient, as automated vehicles operate in safety-critical situations and must make ethically sensitive decisions. Ethical dimensions such as transparency, responsibility, fairness, and user autonomy have therefore become increasingly relevant. Nevertheless, they remain largely overlooked in existing acceptance research, which tends to emphasize functional and technical system characteristics.

To address this gap, the bachelor thesis integrates ethical aspects into the UTAUT framework and examines their relevance for user acceptance in the context of automated driving.


Methodology

To investigate which ethical factors shape acceptance, the thesis uses a qualitative research design based on nine expert interviews. The experts come from fields such as automated driving, ethics, mobility research, and technology acceptance.

The material was analyzed using structured qualitative content analysis, leading to a systematic categorization of ethically relevant elements that influence acceptance.


Key Ethical Factors Identified in the Study

The results of the content analysis highlight several key categories:

  1. Safety

Safety emerged as a central theme, including physical safety, subjective perceptions of safety, system reliability, the protection of human life, and the system’s ability to behave in a traffic-appropriate and socially conforming manner. These elements reflect both technical safety requirements and users’ personal feelings of security during automated driving.

  2. Autonomy

Autonomy refers to the user’s ability to maintain control, intervene when necessary, and voluntarily decide when to use automated driving functions. The category also captures the importance of allowing users to choose the degree of automation according to their individual preferences and comfort levels.

  3. Responsibility and Liability

The category of responsibility and liability addresses questions about accountability in the case of system malfunctions and responsibility for accidents involving automated vehicles. It includes issues such as the distribution of responsibility between users, manufacturers, and system developers, and the clarity of responsibility assignments in safety-critical scenarios.

  4. Transparency and Explainability

Transparency and explainability concern the availability of understandable information about system behavior, insight into system functions and limitations, and clarity regarding the decision-making processes of automated vehicles. These aspects relate to users’ need to comprehend how the system operates and what its functional boundaries are.

  5. Ethical Dilemmas and Decision Conflicts

This category describes system decisions in morally challenging situations, including situations in which collisions are unavoidable. It includes considerations about how automated vehicles should handle conflicting values and prioritize different outcomes in ethically sensitive traffic scenarios.

  6. Fairness and Equal Treatment

Fairness and equal treatment refer to the ethical requirement that automated vehicles avoid discriminatory behavior, consider different groups of road users equally, and prevent biases within system decision-making. These aspects highlight the importance of ensuring that automated systems act in a manner that is impartial and inclusive across diverse traffic situations.

  7. Trust

Trust emerges as a cross-cutting category influenced by perceived safety, transparency, and the degree of user control. It encompasses several elements, including confidence based on system experience, safe and predictable driving behavior, transparent communication between the vehicle and the user, and clarity regarding responsibility assignments.


Extension of the UTAUT Model

Based on the identified categories, the thesis develops an extended UTAUT model that incorporates ethically relevant factors into the existing acceptance framework. The model builds on the original UTAUT components and adds categories such as safety, autonomy, transparency, responsibility, fairness, and trust to more comprehensively reflect the specific characteristics of automated driving systems. This extension follows the theoretical structure of UTAUT and positions the ethical dimensions in relation to the empirical findings of the study.

Figure 1. The extended UTAUT model (author’s own illustration based on Venkatesh et al., 2003, p. 447).


Conclusion

The bachelor thesis by Lea Merz offers a structured presentation of ethical factors that influence the acceptance of automated driving systems.
By integrating theoretical concepts from the UTAUT framework and ethical research with qualitative expert insights, the study provides a clearer understanding of relevant user expectations in the context of automated mobility.

Future research should further examine the extended model, for example through quantitative approaches or structural equation modeling, in order to empirically validate the relationships between the identified ethical factors, trust, and user acceptance.


Reference

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540


Trusting the Machine: How Autonomy and Interaction Design Shape Confidence in AI Financial Advice

Artificial intelligence is no longer a futuristic vision but a tangible part of financial reality. Financial institutions increasingly rely on AI technologies for tasks ranging from credit scoring to fraud detection. Yet, the crucial question remains: How willing are people to delegate financial decision-making to machines?

The bachelor thesis by Svetlana Pobivanz examines how AI autonomy and communicative interaction design influence trust in AI-based investment advice, which is essential for the adoption of digital banking and robo-advisory services.


Research Objective:

The study investigates how two AI system characteristics, autonomy and interaction style, affect user trust. While previous research (Lee & See, 2004; Siau & Wang, 2018) highlighted trust as central to technology acceptance, this study extends that work by considering the psychological and design-related factors affecting trust in financial AI.

Specifically, the research explores whether a semi-autonomous AI that leaves the final choice to the user is perceived as more trustworthy than a fully autonomous AI, and whether a human-like avatar is perceived as more trustworthy than a text-based chatbot.


Method:

A 2×2 experimental design was employed with 197 participants who were randomly assigned to one of the four experimental conditions. Participants interacted with a fictional AI portfolio assistant simulating investment recommendations.

  • In the full-autonomy condition, the AI made decisions independently.
  • In the semi-autonomy condition, the AI offered three options for the user to select.
  • The interaction design varied between a human-like avatar capable of speech and facial expressions and a text-based chatbot.

Trust was assessed using the Trust in Automation Scale (Jian et al., 2000) and the Trust in Artificial Intelligence Scale (Hoffman et al., 2021), evaluating perceived reliability, competence, and transparency. 

Illustration of how the avatar and chatbot were displayed during the survey.


Key Findings:

Contrary to expectations, neither AI autonomy nor interaction design had a significant main effect on trust (p > .05). Descriptively, avatar-based advisors were perceived as slightly more personable, but this did not translate into significantly higher trust.

A regression analysis revealed that participants’ general attitudes toward AI were the strongest predictor of trust (b = .53; β = .45). Positive prior attitudes toward AI were consistently associated with higher trust, irrespective of system design. This aligns with research in the tradition of the Technology Acceptance Model (Venkatesh & Davis, 2000), which treats such user attitudes as key determinants of technology adoption.
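To make the analysis pipeline concrete, here is a minimal sketch of how a 2×2 analysis and the attitude regression of this kind could be run. It is not the author’s original code: the file name and the column names (autonomy, design, trust, ai_attitude) are hypothetical, and the composite trust score is assumed to be precomputed from the scale items.

```python
# Illustrative sketch only: hypothetical data layout, not the thesis code.
# Expected columns: autonomy ("full"/"semi"), design ("avatar"/"chatbot"),
# trust (composite scale score), ai_attitude (general attitude toward AI).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("trust_experiment.csv")  # hypothetical file name

# Two-way ANOVA: main effects of autonomy and interaction design, plus their
# interaction, on trust (the study found no significant main effects, p > .05).
anova_model = ols("trust ~ C(autonomy) * C(design)", data=df).fit()
print(sm.stats.anova_lm(anova_model, typ=2))

# Regression of trust on general AI attitude. Z-scoring both variables makes
# the slope directly interpretable as a standardized beta (cf. the reported β).
z = df[["trust", "ai_attitude"]].apply(lambda s: (s - s.mean()) / s.std())
print(ols("trust ~ ai_attitude", data=z).fit().summary())
```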


Conclusion:

The results of the study show that trust in AI-supported investment advice depends only weakly, if at all, on interaction design or the degree of autonomy. Neither avatars nor chatbots nor different levels of autonomy had a significant impact on trust. Although avatars were perceived as more likeable, this effect did not translate into greater trust.

Rather, users’ general attitude toward AI is decisive. The findings support the view that trust is primarily created through transparency, traceability, and comprehensible communication (Hoffman et al., 2023; Choung et al., 2023). User-friendly presentation can improve the experience, but it does not replace the need for clear explanations and education.

Overall, the results show that promoting knowledge, transparency and a positive attitude is key to building trust in AI-based investment advice and strengthening the acceptance of such systems in the long term.


References:

Choung, H., David, P., & Ross, A. (2023). Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543

Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science, 5, 1096257. https://doi.org/10.3389/fcomp.2023.1096257

Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2021). Measuring Trust in the XAI Context. PsyArXiv. https://doi.org/10.31234/osf.io/e3kv9

Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an Empirically Determined Scale of Trust in Automated Systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04

Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Siau, K., & Wang, W. (2018). Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31(2), 47–53. https://www.researchgate.net/publication/324006061_Building_Trust_in_Artificial_Intelligence_Machine_Learning_and_Robotics

Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926

Would you trust me? I’m created – The use of virtual influencers in the cosmetic industry

The ongoing digitalization has not only transformed consumer behavior but also revolutionized how companies target their audiences. Particularly in the cosmetic industry, influencer marketing has become increasingly relevant. A novel development in this area is the rise of virtual influencers (VI), created using computer-generated imagery (CGI). In her master’s thesis, Julia Mergel, a student of our business psychology program, explores the effects of the visual human-likeness of VI on their perceived credibility in the cosmetic industry. The study identifies key factors for successful collaborations between companies and VI.

Research goal

The goal of this research was to understand how the visual human-likeness of VI affects their credibility ratings. The focus on credibility is especially relevant for virtual influencers because, unlike human influencers, they are not independent entities: every post is usually created by a marketing team in the background. The study focuses on the cosmetic industry, where credibility plays a crucial role in consumer purchase decisions. Specifically, the research investigates whether different degrees of human-likeness in VI affect three critical factors influencing credibility: attractiveness, trustworthiness, and expertise. These three factors are interdependent and collectively create the perceived credibility of a message source (Ohanian, 1990).

Research overview

VI are CGI-created characters that actively participate in social media and are increasingly used for marketing purposes. Their application is particularly noticeable in the fashion and cosmetic industries. However, the majority of studies on influencer marketing to date have concentrated on human influencers (see our earlier post “Why are influencers perceived as credible by social media users?”), with few focusing on virtual influencers and even fewer examining the effect of visual human-likeness on VI credibility.

The participants in the study were exposed to four different influencers: three VI with varying degrees of human-likeness and one human influencer. The study employed a within-subjects design in which each participant viewed all influencers, each presenting a different post on a cosmetics product, in randomized order. The VI ranged from highly animated, cartoonish depictions to almost photorealistic, human-like virtual influencers. Participants were then asked to rate the credibility of each influencer and, in addition, to evaluate the three factors underlying credibility: attractiveness, trustworthiness, and expertise, which were later used as mediators in the analysis.

The study was run as an online experiment involving 119 female Instagram users, all of whom followed influencers.

Main findings

The study revealed that visual human-likeness significantly influences the perception of credibility. Greater human-likeness had a positive impact on credibility ratings up to a saturation point: no significant difference in perceived credibility was found between the most human-like VI and the human influencer.
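As an illustration of how such within-subjects comparisons can be tested, the following sketch assumes a hypothetical long-format data set with invented column and condition names; it is not the original analysis script.

```python
# Illustrative sketch only: assumed data layout, not the thesis code.
# Expected long-format columns: participant (ID), influencer ("cartoon_vi",
# "medium_vi", "realistic_vi", "human"), credibility (7-point rating).
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

long_df = pd.read_csv("credibility_ratings.csv")  # hypothetical file name

# Repeated-measures ANOVA: does perceived credibility differ across the four
# influencers that every participant rated?
res = AnovaRM(long_df, depvar="credibility", subject="participant",
              within=["influencer"]).fit()
print(res.anova_table)

# Paired comparison matching the finding above: most human-like VI vs. the
# human influencer (expected to be non-significant here).
wide = long_df.pivot(index="participant", columns="influencer", values="credibility")
print(stats.ttest_rel(wide["realistic_vi"], wide["human"]))
```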

The degree of human-likeness also had an effect on the underlying credibility factors:

  • Trustworthiness: A higher degree of human-likeness had a significant positive influence on the trustworthiness of the VI.
  • Attractiveness: A higher degree of human-likeness also increased the perceived attractiveness of the VI. Attractiveness plays a central role in credibility evaluations of influencers, as this trait fosters trust among consumers, making the VI appear less “uncanny” or artificial (Choudhry et al., 2022).
  • Expertise: The perceived expertise of VI also increased with greater human-likeness. Participants perceived human-like VI as more competent compared to highly animated ones.

The parallel mediation analysis confirmed that all three factors influencing credibility—attractiveness, trustworthiness, and expertise—served as mediators. Overall, 72.5% of the total variance in credibility ratings was explained after including these mediators (p < .001). Trustworthiness, in particular, had the strongest effect on credibility assessments.
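For readers who want to see what a parallel mediation analysis of this kind looks like in practice, here is a minimal bootstrap sketch. All variable names are invented, human-likeness is treated as a numeric score, and the repeated-measures structure of the design is ignored for simplicity; it is not the author’s original analysis.

```python
# Illustrative sketch only: assumed variable names, simplified to an
# observation-level regression that ignores the within-subjects structure.
# X = human_likeness, parallel mediators M = attractiveness, trustworthiness,
# expertise, outcome Y = credibility.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mediation_data.csv")  # hypothetical file name
mediators = ["attractiveness", "trustworthiness", "expertise"]
rng = np.random.default_rng(42)

def indirect_effects(data):
    """a*b indirect effect of human_likeness on credibility via each mediator."""
    # b paths: all three mediators entered simultaneously (parallel mediation).
    b_model = smf.ols("credibility ~ human_likeness + " + " + ".join(mediators),
                      data=data).fit()
    out = {}
    for m in mediators:
        a = smf.ols(f"{m} ~ human_likeness", data=data).fit().params["human_likeness"]
        out[m] = a * b_model.params[m]
    return out

# Percentile bootstrap confidence intervals for each indirect (a*b) effect.
boot = {m: [] for m in mediators}
for _ in range(2000):
    sample = df.sample(len(df), replace=True, random_state=rng)
    for m, ab in indirect_effects(sample).items():
        boot[m].append(ab)

for m in mediators:
    lo, hi = np.percentile(boot[m], [2.5, 97.5])
    print(f"{m}: indirect effect, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```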

Scepticism toward influencer marketing

Despite the clear trends, the overall findings showed that none of the four influencers (including the human influencer!) received particularly high credibility ratings. On a seven-point scale, the highest mean score was below the neutral midpoint (M = 3.71, SD = 1.32). This aligns with a survey by a digital market research institute, which found that advertising with human influencers is not necessarily perceived as more credible than traditional advertising (nextMedia.Hamburg, 2022). Thus, there may be other reasons why influencers are currently so successful.

In an additional exploratory survey, reasons for and against following VI accounts were examined. Five respondents mentioned following VI for entertainment, interest in their content, or curiosity. In contrast, there were 90 negative responses. The most common reasons were that VI were seen as unrealistic, inauthentic, and incapable of real experiences, and were thus perceived as untrustworthy. Additionally, some found VI impersonal and, at times, even eerie or threatening. However, these descriptive results are likely influenced by the fact that VI are not yet as popular in Germany as in some American or Asian countries (Ströer Blog, 2023).

Implications

For the cosmetic industry, particularly in advertising on platforms like Instagram, companies and developers of VI should consider the following key factors to enhance the credibility of their marketing campaigns:

  1. Visual human-likeness: Companies should opt for VI with high visual human-likeness to increase credibility and avoid consumer discomfort.
  2. Factors influencing credibility: Companies should prioritize VI that are perceived as trustworthy, competent, and attractive to boost source credibility. Trustworthiness has the greatest impact on credibility, so particular attention should be paid to this factor.
  3. Product categories: Products such as nail polish or lipstick, which enhance aesthetic appeal, are better suited for VI advertising than products intended to correct human flaws.

Conclusion

While there remains scepticism surrounding VI, they are increasingly gaining importance in marketing, especially in the cosmetic industry. Our study shows that greater visual human-likeness positively influences the perception of attractiveness, trustworthiness, and expertise—key factors influencing credibility. Companies should leverage these insights to tailor their marketing strategies around VI and maximize the benefits of this innovative technology.

References

Choudhry, A., Han, J., Xu, X., & Huang, Y. (2022). “I Felt a Little Crazy Following a ‘Doll’”: Investigating Real Influence of Virtual Influencers on Their Followers. Proceedings of the ACM on Human-Computer Interaction, 6(GROUP), 1–28. https://doi.org/10.1145/3492862

NextMedia.Hamburg (Ed.). (2022). Authentisch, aber bitte unpolitisch? So standen die Befragten zu Influencer*innen [Authentic, but please apolitical? Respondents’ views on influencers]. Retrieved from https://www.nextmedia-hamburg.de/umfrage-zu-influencerinnen-lieberauthentisch-und-unpolitisch

Ohanian, R. (1990). Construction and Validation of a Scale to Measure Celebrity Endorsers’ Perceived Expertise, Trustworthiness, and Attractiveness. Journal of Advertising, 19(3), 39–52. https://doi.org/10.1080/00913367.1990.10673191
