Transfer Learning in Natural Language Processing: Techniques and Applications
Year of publication: 1403 (Persian calendar)
Document type: Conference paper
Language: English
This paper is available as a 10-page PDF.
National scientific document ID:
ICNABS01_050
Indexing date: 15 Bahman 1403
Abstract:
Transfer Learning (TL) has emerged as an innovative approach in Machine Learning (ML), particularly in Natural Language Processing (NLP), revolutionizing model performance across various tasks. By leveraging pre-trained models such as Bidirectional Encoder Representations from Transformers (BERT), Generative Pre-trained Transformer (GPT), and T5, this method effectively utilizes knowledge acquired from one domain to address challenges in other domains, eliminating the need for extensive data to train models from scratch. This paper analyzes different TL techniques in NLP, including models based on Deep Neural Networks (DNN) and specific knowledge transfer strategies. Moreover, advanced applications of this approach are examined, including complex sentiment analysis, multilingual language processing, machine translation, and text generation. The challenges and limitations in implementing TL, such as domain adaptation and generalization issues, are thoroughly discussed. Finally, the paper explores the future of TL in NLP, highlighting emerging trends and its potential in computational linguistics and Artificial Intelligence (AI).
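The core idea the abstract describes, reusing representations learned in one setting and training only a small task-specific component, can be sketched in a minimal, self-contained way. The snippet below is an illustrative toy, not the paper's method: a fixed random projection stands in for a real pre-trained encoder such as BERT, and only a logistic-regression "head" is trained on a synthetic sentiment-style task. All names (`pretrained_embeddings`, `encode`, the data-generation rule) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, EMB = 100, 16

# Stand-in for a frozen pre-trained encoder: in a real TL pipeline these
# weights would come from a model like BERT and would not be retrained.
pretrained_embeddings = rng.normal(size=(VOCAB, EMB))

def encode(token_ids):
    """Mean-pool the frozen embeddings -- the 'transferred' knowledge."""
    return pretrained_embeddings[token_ids].mean(axis=0)

# Toy downstream task: binary sentiment labels derived from a hidden
# linear rule, so the task head has something learnable to recover.
X = np.stack([encode(rng.integers(0, VOCAB, size=8)) for _ in range(200)])
w_true = rng.normal(size=EMB)
y = (X @ w_true > 0).astype(float)

def loss(w):
    """Binary cross-entropy of a logistic head with weights w."""
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# Fine-tune only the small head; the encoder stays frozen throughout.
w = np.zeros(EMB)
initial = loss(w)
for _ in range(500):  # plain full-batch gradient descent
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
final = loss(w)
```

Because the encoder is frozen, only `EMB` parameters are trained, which is exactly why TL needs far less labeled data than training from scratch; swapping the random projection for real transformer features is the only conceptual change in a practical setup.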
Keywords:
Authors
Niloofar Sasannia
School of Industrial and Information Engineering, Polytechnic University of Milan (Telecommunications Engineering), Milan, Italy