From Fine-Tuning to Prompting: A Review of Adaptation Strategies for Large Language Models

Publication year: 1404 (Iranian calendar)
Document type: Conference paper
Language: English

This paper is available as a 9-page PDF.


National scientific document ID:

INDEXCONF08_049

Indexing date: 20 Bahman 1404

Abstract:

Large language models have transformed artificial intelligence, exhibiting emergent capabilities such as in-context learning. Their scale, however, sometimes hundreds of billions of parameters, makes adapting them to specific tasks difficult: updating every parameter is impractical in both compute and storage, which motivates more efficient adaptation strategies. This article surveys the main families of methods: fine-tuning on labeled data or instruction-following examples to shape model behavior; prompting, which steers the model without changing its weights; and parameter-efficient fine-tuning (PEFT), which updates only a small subset of parameters. We compare these approaches on memory footprint, efficiency, and failure modes such as catastrophic forgetting and shortcut learning. We also examine newer techniques, including RetICL, GOP, and the use of PEFT in distributed settings such as Fed LLM and PECFT. Finally, we highlight key open challenges and directions for future work, notably the automated construction of PEFT architectures and the creation of stable, verifiable reasoning paths.
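To make the parameter-efficiency argument concrete, here is a minimal NumPy sketch of a LoRA-style low-rank update, one common PEFT technique. This is an illustration of the general idea, not a method taken from the paper; all variable names, dimensions, and initialization choices below are assumptions for the example.

```python
import numpy as np

# LoRA-style PEFT sketch: keep the pretrained weight W frozen and train
# two small matrices A (r x d_in) and B (d_out x r) with rank r << d.
# The adapted layer computes W @ x + (alpha / r) * B @ A @ x.

d_in, d_out, r, alpha = 64, 64, 4, 8  # illustrative sizes

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init

def lora_forward(x):
    """Frozen base path plus the scaled low-rank correction."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)

# With B initialized to zero, the adapted layer reproduces the base
# layer exactly, so training starts from the pretrained behavior.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: 512 for the adapter vs 4096 for full fine-tuning.
full_params = W.size          # 64 * 64 = 4096
lora_params = A.size + B.size # 4*64 + 64*4 = 512
print(full_params, lora_params)
```

The memory saving the abstract refers to comes from the last two lines: only `A` and `B` receive gradients, so optimizer state scales with the adapter size rather than with the full model.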

Authors

Armin Janatisefat

Bachelor of Science Student in Computer Engineering, Islamic Azad University, West Tehran Branch, Tehran

Armin Tahamtan

Professor at Islamic Azad University, Tehran