Identification of Trust Determinants in LLM Technology Using the DEMATEL Method

Marta Pawłowska-Nowak
European Research Studies Journal, Volume XXVII, Special Issue A, 694-711, 2024
DOI: 10.35808/ersj/3744

Abstract:

Purpose: The study identifies the determinants of trust in Large Language Model (LLM) technology among office employees using the DEMATEL method. It addresses a gap in understanding factors such as credibility, user experience, and data security, which are crucial for AI adoption in workplaces.

Design/Methodology/Approach: The study employs a mixed-method approach, combining a literature review with empirical analysis based on the DEMATEL method. DEMATEL was applied to develop cause-and-effect diagrams, identify prominence and relation indicators, and calculate the importance of individual factors. The method was chosen for its suitability in analyzing complex, interrelated variables.

Findings: The research identified the most significant determinants of trust in LLM technology. Key findings indicate that the perceived credibility and accuracy of responses, prior experience with AI, and awareness of productivity impacts are the most influential factors. In addition, user education, intuitive user interfaces, and robust data security proved crucial for building trust. These factors underscore the importance of transparency, usability, and reliability in fostering employee confidence in LLM technology.

Practical Implications: Organizations can enhance LLM adoption by ensuring credible outputs, providing training, and addressing data security. The results support trust-building strategies for sectors dependent on AI decision-making.

Originality/Value: The study offers a novel application of the DEMATEL method to trust in LLMs, providing insight into workplace AI adoption and expanding the understanding of trust-building mechanisms.
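To make the methodological steps named above concrete, the following minimal Python sketch reproduces the standard DEMATEL computation: normalizing a direct-influence matrix, deriving the total-relation matrix T = X(I - X)^(-1), and computing the prominence (D + R) and relation (D - R) indicators. The four factor names and all influence ratings below are illustrative assumptions for demonstration only, not the study's actual factor set or expert data.

import numpy as np

# Hypothetical direct-influence matrix for four illustrative trust factors
# (ratings on a 0 = no influence ... 4 = very high influence scale).
factors = ["credibility/accuracy", "prior AI experience",
           "productivity awareness", "data security"]
A = np.array([
    [0, 3, 2, 1],
    [2, 0, 3, 1],
    [1, 2, 0, 2],
    [3, 1, 1, 0],
], dtype=float)

# Step 1: normalize by the larger of the maximum row sum and column sum.
s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
X = A / s

# Step 2: total-relation matrix T = X (I - X)^(-1), which accumulates
# direct and all indirect influence paths (the series X + X^2 + ...).
n = len(A)
T = X @ np.linalg.inv(np.eye(n) - X)

# Step 3: prominence (D + R) and relation (D - R) indicators.
D = T.sum(axis=1)  # total influence a factor exerts on the others
R = T.sum(axis=0)  # total influence a factor receives from the others
for name, p, r in zip(factors, D + R, D - R):
    role = "cause" if r > 0 else "effect"
    print(f"{name:24s} prominence={p:.3f} relation={r:+.3f} ({role})")

In the resulting cause-and-effect diagram, factors with a positive relation value (D - R > 0) act as causes, while those with negative values are effects; prominence ranks how important each factor is within the system overall.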

