From Fine-Tuning to Prompting: A Review of Adaptation Strategies for Large Language Models
Venue: 8th International Conference on New Achievements in Information Technology, Computer Science, Security, Networks and Artificial Intelligence
Publication year: 1404 (Solar Hijri, 2025–26)
Document type: Conference paper
Language: English
The full text of this paper is available as a 9-page PDF.
National scientific document ID: INDEXCONF08_049
Indexing date: 20 Bahman 1404
Abstract:
Large language models (LLMs) have transformed artificial intelligence by exhibiting emergent capabilities such as in-context learning. Their scale, however, often reaching hundreds of billions of parameters, makes them difficult to adapt to specific tasks: updating every parameter is impractical in both compute and storage, motivating more efficient adaptation strategies. This article reviews the main approaches: fine-tuning on labeled data or instruction-following examples to shape model behavior; prompting, which steers the model without changing its weights; and parameter-efficient fine-tuning (PEFT), which updates only a small subset of parameters to conserve resources. We compare these methods in terms of memory footprint, efficiency, and failure modes such as catastrophic forgetting and shortcut learning. We also examine newer directions, including RetICL, GOP, and the use of PEFT in distributed settings such as Fed LLM and PECFT. Finally, we highlight key open challenges and point toward future work, notably the automated construction of PEFT architectures and the creation of stable, verifiable reasoning paths.
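To make the "tweaking just a few parameters" idea concrete, the following is a minimal NumPy sketch of the low-rank adapter pattern used by LoRA-style PEFT methods. All shapes, names, and the toy layer itself are illustrative assumptions, not the paper's implementation: a frozen pretrained weight matrix `W` is augmented with a trainable low-rank product `A @ B`, so only `rank * (d_in + d_out)` parameters would be trained instead of `d_in * d_out`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight matrix: never modified during adaptation.
d_in, d_out, rank = 8, 8, 2
W = rng.normal(size=(d_in, d_out))

# Low-rank adapter: only A and B would be trained.
# Trainable parameters: rank * (d_in + d_out) = 32, versus
# d_in * d_out = 64 for full fine-tuning of this layer.
A = rng.normal(size=(d_in, rank))  # random init
B = np.zeros((rank, d_out))        # zero init, so the adapted model
                                   # starts identical to the base model

def forward(x):
    # Adapted layer: y = x @ (W + A @ B), computed without touching W.
    return x @ W + (x @ A) @ B

x = rng.normal(size=(1, d_in))
# With B = 0 the adapter contributes nothing yet, so the adapted
# layer reproduces the frozen base layer exactly.
assert np.allclose(forward(x), x @ W)
```

Because `W` stays frozen, catastrophic forgetting is limited to what the small adapter can express, and the base model can be recovered at any time by dropping `A` and `B`.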
Keywords:
Large Language Models (LLM), Parameter-Efficient Fine-Tuning (PEFT), Instruction Tuning (IT), In-Context Learning (ICL), Prompt Engineering, Adaptation Strategies, Catastrophic Forgetting, Shortcut Learning
Authors
Armin Janatisefat
Bachelor of Science Student in Computer Engineering, Islamic Azad University, West Tehran Branch, Tehran
Armin Tahamtan
Professor at Islamic Azad University, Tehran