Transformer-Based Image Segmentation: A Comprehensive Survey of Recent Advances

  • Publication year: 1403 (Iranian calendar)
  • Venue: 24th International Conference on Information Technology, Computer and Telecommunications
  • Assigned COI code: ITCT24_017
  • Paper language: English

Authors

Zahra Shafiei Amini

PhD Student, Computer Engineering, Azad University, Central Tehran Branch

Abstract

Transformers have emerged as a revolutionary force in artificial intelligence, initially gaining prominence in natural language processing (NLP) due to their ability to model complex dependencies and relationships inherent in textual data. This paradigm shift has significantly impacted various domains, including computer vision. Among the many applications within computer vision, image segmentation has become a focal point of research, as it plays a crucial role in enabling systems to interpret visual information more effectively. Image segmentation refers to the process of partitioning an image into distinct segments, allowing for easier analysis and comprehension of visual data. This technique is pivotal in applications such as medical imaging (e.g., tumor detection), autonomous vehicles (e.g., obstacle recognition), and video surveillance (e.g., person and object tracking).

Historically, image segmentation has primarily relied on convolutional neural networks (CNNs), which excel in local feature extraction. While CNNs have set benchmarks for performance, they often struggle with capturing global context due to their hierarchical processing structure. In contrast, Transformers utilize an attention mechanism that enables them to learn relationships between pixels irrespective of their spatial proximity. This attribute allows Transformers to efficiently capture both local features (fine details) and global context (general structures) within images, leading to superior performance in segmentation tasks.

Recent advancements in Transformer-based segmentation include models like TransUNet and UNETR, which combine the strengths of Transformers and CNNs, particularly in medical image segmentation. Hierarchical structures such as the Swin Transformer and nnFormer have further improved the ability to capture multi-scale features. Additionally, multi-scale feature fusion approaches like the CoTr model have enhanced the integration of local and global contexts. Despite challenges such as high computational costs and the need for large labeled datasets, ongoing research continues to enhance the efficiency, robustness, and applicability of Transformer-based models across diverse domains.
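The attention mechanism the abstract credits with capturing global context can be sketched in a few lines. The NumPy implementation below is a minimal single-head illustration, not the implementation of any surveyed model; the patch count and embedding dimension are arbitrary illustrative choices:

```python
import numpy as np

def scaled_dot_product_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence of patch embeddings.

    x: (n_patches, d) patch embeddings; w_q, w_k, w_v: (d, d) projections.
    Every patch attends to every other patch, so relationships are modeled
    irrespective of spatial proximity -- unlike a CNN's local kernel.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])           # (n, n) pairwise affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over all patches
    return weights @ v                                # per-patch mix of global context

# Illustrative sizes: an image split into 16 patches, each embedded in 32 dims.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 32))
w = [rng.standard_normal((32, 32)) / np.sqrt(32) for _ in range(3)]
out = scaled_dot_product_attention(x, *w)
print(out.shape)  # (16, 32): same sequence shape, each patch now context-aware
```

Hybrid designs such as TransUNet interleave blocks like this with convolutional stages, pairing the CNN's fine local detail with the attention layer's image-wide receptive field.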

Keywords

Transformers, Image Segmentation, Computer Vision

