Trust and Safety in LLM-Based Mental Health Support: A Scoping Review and a Conceptual Governance Framework
Venue: 8th International Conference on New Achievements in Information Technology, Computer Science, Security, Networks, and Artificial Intelligence
Publication year: 1404 (Iranian calendar)
Document type: Conference paper
Language: English
The full text of this paper is available for download as a 9-page PDF.
National document ID: INDEXCONF08_013
Indexing date: 20 Bahman 1404
Abstract:
Large language models (LLMs) are rapidly entering mental health settings, offering conversational support, psychoeducation, and clinical assistance. Their adoption, however, intensifies long-standing concerns about trust, safety, and accountability, particularly given risks such as hallucinations, uncertain crisis response, bias, and opaque reasoning. This paper conducts a scoping review of emerging empirical, conceptual, and ethical literature on LLM-based mental health tools and synthesizes five recurring themes: growing potential and use cases; trust shaped by anthropomorphism and uncertainty; safety threats related to hallucination and crisis handling; system-level vulnerabilities involving privacy, bias, and accountability; and persistent gaps in governance. Drawing on these insights and established AI ethics frameworks, we propose a multi-level governance model spanning the model, application, clinical, and ecosystem layers. The framework identifies six cross-cutting requirements (safety, transparency, privacy, equity, human oversight, and accountability), offering a structured foundation for responsible development and deployment of LLMs in mental health care.
Keywords:
Authors
Hadi Behjati
Department of Computer Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran
Leila Ajam
Department of Computer Engineering, Aliabad Katoul Branch, Islamic Azad University, Aliabad Katoul, Iran