Why Ethical Factors Influence the User Acceptance of Automated Vehicles

Introduction

Automated driving is becoming increasingly relevant in research, industry, and society. Alongside technological development, questions concerning ethical decision-making and user acceptance are gaining importance.
The bachelor thesis by Lea Merz examines which ethical factors influence the user acceptance of automated driving systems and how these factors can be connected to an established technology acceptance model.

 

Background: Acceptance Models and the Research Gap

The Unified Theory of Acceptance and Use of Technology (UTAUT) by Venkatesh et al. (2003) is a widely used model for explaining technology adoption, focusing on performance expectancy, effort expectancy, social influence, and facilitating conditions.
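
In formula terms, UTAUT is commonly operationalized as a pair of linear relationships. The sketch below is a simplified reading; Venkatesh et al. (2003) additionally model moderators such as age, gender, experience, and voluntariness of use, which are omitted here:

    % Simplified UTAUT core model (moderators omitted)
    % BI: behavioral intention, UB: use behavior
    % PE: performance expectancy, EE: effort expectancy,
    % SI: social influence, FC: facilitating conditions
    BI = \beta_0 + \beta_1\,PE + \beta_2\,EE + \beta_3\,SI + \varepsilon_1
    UB = \gamma_0 + \gamma_1\,BI + \gamma_2\,FC + \varepsilon_2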

However, research on automated driving has shown that these components alone are not sufficient, as automated vehicles operate in safety-critical situations and must make ethically sensitive decisions. Ethical dimensions such as transparency, responsibility, fairness, and user autonomy have therefore become increasingly relevant. Despite their relevance, they remain largely overlooked in existing acceptance research, which tends to emphasize functional and technical system characteristics.

To address this gap, the bachelor thesis integrates ethical aspects into the UTAUT framework and examines their relevance for user acceptance in the context of automated driving.

 

Methodology

To investigate which ethical factors shape acceptance, the thesis uses a qualitative research design based on nine expert interviews. The experts come from fields such as automated driving, ethics, mobility research, and technology acceptance.

The material was analyzed using structured qualitative content analysis, leading to a systematic categorization of ethically relevant elements that influence acceptance.
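
To make the analysis step concrete, the sketch below shows one minimal way coded interview segments can be tallied once a coding frame has been applied. The category labels follow the thesis; the example rows and the Python representation are invented for illustration:

    # Minimal illustrative tally of coded interview segments.
    # Category labels follow the thesis; the example rows are invented.
    from collections import Counter

    coded_segments = [
        ("Expert 1", "Safety"),
        ("Expert 1", "Trust"),
        ("Expert 2", "Transparency and Explainability"),
        ("Expert 3", "Responsibility and Liability"),
        ("Expert 3", "Safety"),
    ]

    category_counts = Counter(category for _, category in coded_segments)
    for category, count in category_counts.most_common():
        print(f"{category}: {count} coded segment(s)")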

 

Key Ethical Factors Identified in the Study

The results of the content analysis highlight several key categories:

  1. Safety

Safety emerged as a central theme, including physical safety, subjective perceptions of safety, system reliability, the protection of human life, and the system's ability to behave in a traffic-appropriate and socially conforming manner. These elements reflect both technical safety requirements and users' personal feelings of security during automated driving.

  2. Autonomy

Autonomy refers to the user’s ability to maintain control, intervene when necessary, and voluntarily decide when to use automated driving functions. The category also captures the importance of allowing users to choose the degree of automation according to their individual preferences and comfort levels.

  3. Responsibility and Liability

The category of responsibility and liability addresses questions about accountability in the case of system malfunctions and responsibility for accidents involving automated vehicles. It includes issues such as the distribution of responsibility between users, manufacturers, and system developers, and the clarity of responsibility assignments in safety-critical scenarios.

  4. Transparency and Explainability

Transparency and explainability concern the availability of understandable information about system behavior, insight into system functions and limitations, and clarity regarding the decision-making processes of automated vehicles. These aspects relate to users’ need to comprehend how the system operates and what its functional boundaries are.

  5. Ethical Dilemmas and Decision Conflicts

This category describes system decisions in morally challenging situations, including situations in which collisions are unavoidable. It includes considerations about how automated vehicles should handle conflicting values and prioritize different outcomes in ethically sensitive traffic scenarios.

  6. Fairness and Equal Treatment

Fairness and equal treatment refer to the ethical requirement that automated vehicles avoid discriminatory behavior, consider different groups of road users equally, and prevent biases within system decision-making. These aspects highlight the importance of ensuring that automated systems act in a manner that is impartial and inclusive across diverse traffic situations.

  7. Trust

Trust emerges as a cross-cutting category influenced by perceived safety, transparency, and the degree of user control. It encompasses several elements, including confidence based on system experience, safe and predictable driving behavior, transparent communication between the vehicle and the user, and clarity regarding responsibility assignments.

 

Extension of the UTAUT Model

Based on the identified categories, the thesis develops an extended UTAUT model that incorporates ethically relevant factors into the existing acceptance framework. The model builds on the original UTAUT components and adds categories such as safety, autonomy, transparency, responsibility, fairness, and trust to more comprehensively reflect the specific characteristics of automated driving systems. This extension follows the theoretical structure of UTAUT and positions the ethical dimensions in relation to the empirical findings of the study.
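
Read as a regression sketch, the extension amounts to adding the ethical constructs as further predictors of behavioral intention. The specification below is one possible linear reading for illustration, not the thesis's own estimation, since the thesis positions the constructs qualitatively:

    % One possible linear reading of the extended model (illustrative)
    BI = \beta_0 + \beta_1\,PE + \beta_2\,EE + \beta_3\,SI
       + \beta_4\,SAF + \beta_5\,AUT + \beta_6\,TRA
       + \beta_7\,RES + \beta_8\,FAIR + \beta_9\,TRU + \varepsilon

with SAF = safety, AUT = autonomy, TRA = transparency, RES = responsibility, FAIR = fairness, and TRU = trust.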

Figure 1. Extended UTAUT model. Author's own illustration based on Venkatesh et al. (2003, p. 447).

 

Conclusion

The bachelor thesis by Lea Merz offers a structured presentation of ethical factors that influence the acceptance of automated driving systems.
By integrating theoretical concepts from the UTAUT framework and ethical research with qualitative expert insights, the study provides a clearer understanding of relevant user expectations in the context of automated mobility.

Future research should further examine the extended model, for example through quantitative approaches or structural equation modeling, in order to empirically validate the relationships between the identified ethical factors, trust, and user acceptance.

 

Reference

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User Acceptance of Information Technology: Toward a Unified View. MIS Quarterly, 27(3), 425–478. https://doi.org/10.2307/30036540

 

Trusting the Machine: How Autonomy and Interaction Design Shape Confidence in AI Financial Advice

Artificial intelligence is no longer a futuristic vision but a tangible part of financial reality. Financial institutions increasingly rely on AI technologies for tasks ranging from credit scoring to fraud detection. Yet, the crucial question remains: How willing are people to delegate financial decision-making to machines?

The bachelor thesis by Svetlana Pobivanz examines how AI autonomy and communicative interaction design influence trust in AI-based investment advice, which is essential for the adoption of digital banking and robo-advisory services.

 

Research Objective:

The study investigates the effect of two AI system characteristics, autonomy and interaction style, on user trust. While previous research (Lee & See, 2004; Siau & Wang, 2018) highlighted trust as central to technology acceptance, this study extends that work by considering the psychological and design-related factors affecting trust in financial AI.

Specifically, the research explores whether a semi-autonomous AI that leaves the final choice to the user is perceived as more trustworthy than a fully autonomous AI, and whether a human-like avatar is perceived as more trustworthy than a text-based chatbot.

 

Method:

A 2×2 experimental design was employed with 197 participants who were randomly assigned to one of the four experimental conditions. Participants interacted with a fictional AI portfolio assistant simulating investment recommendations.

  • In the full-autonomy condition, the AI made decisions independently.
  • In the semi-autonomy condition, the AI offered three options for the user to select.
  • The interaction design varied between a human-like avatar capable of speech and facial expressions and a text-based chatbot.

Trust was assessed using the Trust in Automation Scale (Jian et al., 2000) and the Trust in Artificial Intelligence Scale (Hoffman et al., 2021), evaluating perceived reliability, competence, and transparency. 

Figure: Illustration of how the avatar and chatbot were displayed during the survey.
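
As a rough sketch of how such a 2×2 between-subjects design is typically analyzed, the following snippet runs a two-way ANOVA on simulated data. The column names, factor levels, and data values are assumptions for illustration, not the thesis's materials:

    # Illustrative two-way ANOVA for a 2x2 between-subjects design.
    # Column names, levels, and data are assumed, not from the thesis.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    rng = np.random.default_rng(42)
    n = 197  # sample size reported in the study
    df = pd.DataFrame({
        "autonomy": rng.choice(["semi", "full"], size=n),
        "design": rng.choice(["avatar", "chatbot"], size=n),
    })
    df["trust"] = rng.normal(4.0, 1.0, size=n)  # placeholder scale scores

    # Main effects of autonomy and design, plus their interaction
    model = smf.ols("trust ~ C(autonomy) * C(design)", data=df).fit()
    print(anova_lm(model, typ=2))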

 

Key Findings:

Contrary to expectations, neither AI autonomy nor interaction design had a significant main effect on trust (p > .05). Descriptively, avatar-based advisors were perceived as slightly more personable, but this did not translate into significantly higher trust.

A regression analysis revealed that participants’ general attitudes toward AI were the strongest predictor of trust (b = .53; β = .45). Positive prior attitudes toward AI consistently led to higher trust, irrespective of system design. This is consistent with the Technology Acceptance Model tradition (Venkatesh & Davis, 2000), in which users’ beliefs and attitudes are key determinants of technology adoption.
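
For readers unfamiliar with the b/β notation: b is the unstandardized slope and β the standardized one. A minimal, self-contained sketch on simulated data follows; the column names and values are assumptions, not the study's data:

    # Illustrative simple regression showing b vs. standardized beta.
    # "ai_attitude" and all values are assumed for demonstration.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n = 197
    df = pd.DataFrame({"ai_attitude": rng.normal(3.5, 0.8, size=n)})
    df["trust"] = 1.0 + 0.5 * df["ai_attitude"] + rng.normal(0.0, 0.8, size=n)

    fit = smf.ols("trust ~ ai_attitude", data=df).fit()
    b = fit.params["ai_attitude"]  # unstandardized slope (b)
    beta = b * df["ai_attitude"].std() / df["trust"].std()  # standardized beta
    print(f"b = {b:.2f}, beta = {beta:.2f}")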

 

Conclusion:

The results of the study show that trust in AI-supported investment advice depends less on interaction design or the degree of autonomy than previously assumed. Neither avatars nor chatbots nor different levels of autonomy had a significant effect on trust. Although avatars were perceived as slightly more likeable, this effect did not translate into greater trust.

Rather, users’ general attitude towards AI proved decisive. This supports prior findings that trust is built primarily through transparency, traceability, and comprehensible communication (Hoffman et al., 2023; Choung, David & Ross, 2023). User-friendly presentation can improve the experience, but it does not replace the need for clear explanations and education.

Overall, the results show that promoting knowledge, transparency and a positive attitude is key to building trust in AI-based investment advice and strengthening the acceptance of such systems in the long term.

 

References:

Choung, H., David, P., & Ross, A. (2023). Trust in AI and Its Role in the Acceptance of AI Technologies. International Journal of Human–Computer Interaction, 39(9), 1727–1739. https://doi.org/10.1080/10447318.2022.2050543

Hoffman, R. R., Mueller, S. T., Klein, G., & Litman, J. (2023). Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science, 5, 1096257. https://doi.org/10.3389/fcomp.2023.1096257

Hoffman, R., Mueller, S. T., Klein, G., & Litman, J. (2021). Measuring Trust in the XAI Context. https://doi.org/10.31234/osf.io/e3kv9

Jian, J.-Y., Bisantz, A. M., & Drury, C. G. (2000). Foundations for an Empirically Determined Scale of Trust in Automated Systems. International Journal of Cognitive Ergonomics, 4(1), 53–71. https://doi.org/10.1207/S15327566IJCE0401_04

Lee, J. D., & See, K. A. (2004). Trust in Automation: Designing for Appropriate Reliance. Human Factors: The Journal of the Human Factors and Ergonomics Society, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392

Siau, K., & Wang, W. (2018). Building Trust in Artificial Intelligence, Machine Learning, and Robotics. Cutter Business Technology Journal, 31(2), 47–53. https://www.researchgate.net/publication/324006061_Building_Trust_in_Artificial_Intelligence_Machine_Learning_and_Robotics

Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204. https://doi.org/10.1287/mnsc.46.2.186.11926
