Disciplinary Differences in Students’ Attitudes Toward Artificial Intelligence in Higher Education: A Comparative Mixed-Methods Study
Purpose: The recent expansion of artificial intelligence (AI) in universities has amplified the need to understand how students from different academic fields perceive its benefits, risks, and implications for learning. This study examines discipline-specific differences in attitudes toward AI among students of Business Informatics, Public Relations and Public Management, and Fine Arts and Design. It focuses on four core dimensions identified in recent literature: adaptive learning support and personalization, ethical considerations, equity and inclusion, and pedagogical support and the role of the teacher.

Design/Methodology/Approach: A mixed-methods design was applied, combining quantitative Likert-scale questions with qualitative open-ended responses. The sample consisted of 165 students from two higher-education institutions. The survey captured experiences with AI tools, perceived opportunities and risks, and expectations regarding the role of educators. Quantitative data were analyzed descriptively and comparatively across disciplines, while qualitative data provided contextualized interpretations of attitudes and concerns.

Findings: Across all groups, students recognized the value of AI-supported learning, particularly in relation to personalized feedback, efficiency, and skill development. However, clear disciplinary patterns emerged. Business Informatics students emphasized analytical and technical usefulness; Public Relations and Management students foregrounded ethical communication and institutional responsibility; and Fine Arts students expressed heightened concerns over authorship, creativity, and data protection. Ethical apprehensions (especially privacy, algorithmic bias, and transparency) were present across all groups but most pronounced in the artistic disciplines. Students consistently affirmed the need for human oversight and rejected fully automated instruction, supporting hybrid human-AI models that preserve interaction and mentorship.
The findings suggest that AI adoption strategies should be discipline-sensitive and aligned with pedagogical expectations. Universities should prioritize transparent ethical frameworks, strengthen digital literacy, and ensure equal access to AI tools.

Practical Implications: Educators may benefit from integrating AI in ways that augment, rather than replace, human guidance, with particular attention to creativity-based learning environments.

Originality/Value: This study contributes novel comparative evidence showing how disciplinary cultures shape students' perceptions of AI. It fills a gap in existing research, which has largely examined student populations as homogeneous groups. By highlighting differences across technical, managerial, and artistic fields, the study offers nuanced insights for policy design, curriculum development, and responsible AI integration in higher education.