Transformer-Based Image Segmentation: A Comprehensive Survey of Recent Advances
Publication year: 1403
Document type: conference paper
Language: English
This paper is available as a 6-page PDF file.
National scientific document ID: ITCT24_017
Indexing date: Dey 4, 1403
Abstract:
Transformers have emerged as a revolutionary force in artificial intelligence, initially gaining prominence in natural language processing (NLP) due to their ability to model complex dependencies and relationships inherent in textual data. This paradigm shift has significantly impacted various domains, including computer vision. Among the many applications within computer vision, image segmentation has become a focal point of research, as it plays a crucial role in enabling systems to interpret visual information more effectively. Image segmentation refers to the process of partitioning an image into distinct segments, allowing for easier analysis and comprehension of visual data. This technique is pivotal in applications such as medical imaging (e.g., tumor detection), autonomous vehicles (e.g., obstacle recognition), and video surveillance (e.g., person and object tracking).

Historically, image segmentation has primarily relied on convolutional neural networks (CNNs), which excel in local feature extraction. While CNNs have set benchmarks for performance, they often struggle to capture global context due to their hierarchical processing structure. In contrast, Transformers utilize an attention mechanism that enables them to learn relationships between pixels irrespective of their spatial proximity. This attribute allows Transformers to efficiently capture both local features (fine details) and global context (general structures) within images, leading to superior performance in segmentation tasks.

Recent advancements in Transformer-based segmentation include models like TransUNet and UNETR, which combine the strengths of Transformers and CNNs, particularly in medical image segmentation. Hierarchical structures such as the Swin Transformer and nnFormer have further improved the ability to capture multi-scale features. Additionally, multi-scale feature fusion approaches like the CoTr model have enhanced the integration of local and global contexts. Despite challenges such as high computational costs and the need for large labeled datasets, ongoing research continues to enhance the efficiency, robustness, and applicability of Transformer-based models across diverse domains.
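The abstract contrasts the locality of CNNs with the attention mechanism of Transformers, which lets every image patch attend to every other patch regardless of spatial distance. A minimal sketch of scaled dot-product self-attention over flattened patch embeddings (with identity query/key/value projections for simplicity; real models learn these projections) illustrates the idea:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a sequence of patch embeddings.

    x: (num_patches, dim) array of flattened image patches.
    Query/key/value projections are taken as identity here for brevity;
    in an actual Transformer they are learned linear layers.
    """
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)  # (num_patches, num_patches) similarities
    # Softmax over keys: each patch receives a weighted view of ALL patches,
    # so long-range (global) context is captured in a single step.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x  # contextualized patch embeddings, same shape as x

# Example: a toy 4x4 image cut into four 2x2 patches, each flattened to 4 dims.
patches = np.arange(16, dtype=float).reshape(4, 4)
out = self_attention(patches)
print(out.shape)  # (4, 4)
```

Because the attention weights connect every pair of patches, the cost grows quadratically with the number of patches, which is the computational challenge that hierarchical designs such as the Swin Transformer address.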
Keywords:
Authors
Zahra Shafiei Amini
PhD Student, Computer Engineering, Azad University, Central Tehran Branch