Detecting Hallucinations Generated by Large Language Models Using Paraphrasing Technique
Published in: 10th International Conference on Web Research
Year of publication: 1403 (2024)
Document type: Conference paper
Language: English
This paper is available as a 6-page PDF file.
National scientific document ID: IRANWEB10_007
Indexing date: 14 Mordad 1403 (4 August 2024)
Abstract:
Hallucination in large language models refers to outputs that appear correct but contradict reality or diverge from the source. Detecting hallucinations is crucial to prevent their dissemination in applications that directly or indirectly rely on such models. In this study, we employ a simple algorithm to detect hallucination in a large language model. Our approach rests on two hypotheses: if a large language model answers several paraphrases of a question and its answers are inconsistent with one another, the model is hallucinating; if the answers are consistent, the model is likely giving a correct answer. We have verified both hypotheses experimentally. Accordingly, our proposed method for detecting hallucination when answering a question is to generate different paraphrases of that question and check for inconsistencies or contradictions among the answers to the generated questions. The presence or absence of inconsistency indicates the presence or absence of hallucination. Experiments show that this method is able to detect hallucination in question answering with high accuracy.
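As a rough illustration of the procedure the abstract describes, a minimal sketch follows. It is not the authors' implementation: the `ask_model` and `paraphrase` callables are hypothetical placeholders for an LLM query function and a paraphrase generator, and the token-overlap consistency check is a simplification assumed here for illustration; the paper's own paraphrasing and inconsistency-detection components may differ.

```python
from typing import Callable, List

def answers_agree(a: str, b: str, threshold: float = 0.6) -> bool:
    """Rough consistency check: Jaccard token overlap between two answers.
    A stronger check (e.g. NLI-based contradiction detection) could be used instead."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return False
    return len(ta & tb) / len(ta | tb) >= threshold

def detect_hallucination(
    question: str,
    ask_model: Callable[[str], str],              # hypothetical: queries the LLM for an answer
    paraphrase: Callable[[str, int], List[str]],  # hypothetical: returns n paraphrases of the question
    n_paraphrases: int = 4,
) -> bool:
    """Return True if answers to the paraphrased questions are mutually inconsistent,
    which, under the paper's hypothesis, signals hallucination."""
    questions = [question] + paraphrase(question, n_paraphrases)
    answers = [ask_model(q) for q in questions]
    # Pairwise check: any inconsistent pair flags the answer as hallucinated.
    for i in range(len(answers)):
        for j in range(i + 1, len(answers)):
            if not answers_agree(answers[i], answers[j]):
                return True
    return False
```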
Keywords:
Large Language Models, Hallucination of Large Language Models, Inconsistency Detection, Paraphrasing
Authors
Tara Zare
Shahid Beheshti University, Tehran, Iran
Mehrnoush Shamsfard
Shahid Beheshti University, Tehran, Iran