A Computational-Cognitive Model of Visual Attention in Dynamic Environments

Publication year: 1401 (Iranian calendar)
Document type: Journal article
Language: English
Views: 220

This article is available as a 12-page PDF.


National scientific document ID: JR_JECEI-10-1_014

Indexing date: 1 Azar 1400 (November 22, 2021)

Abstract:

Background and Objectives: Visual attention is a high-order cognitive process of the human brain that determines where a human observer attends. Dynamic computational visual attention models mimic the behavior of the human brain and can predict which areas a person will attend to when viewing a scene such as a video. Although several types of computational models have been proposed to provide a better understanding of saliency maps in static and dynamic environments, most of these models are restricted to specific scenes. In this paper, we propose a model that can generate saliency maps in a variety of dynamic environments with complex scenes.

Methods: We use a deep learner as a mediating (gating) network to combine basic saliency maps with appropriate weights. Each basic saliency map covers an important feature of human visual attention, so the final combined saliency map closely resembles human visual behavior.

Results: The proposed model is run on two datasets, and the generated saliency maps are evaluated by different criteria such as ROC, CC, NSS, SIM, and KL-divergence (KLdiv). The results show that the proposed model performs well compared to other similar models.

Conclusion: The proposed model consists of three main parts: basic saliency maps, a gating network, and a combinator. The model was implemented on the ETMD dataset, and the resulting saliency maps (visual attention areas) were compared with those of other models in this field using the above evaluation criteria. The results obtained from the proposed model are acceptable and, based on the accepted evaluation criteria in this area, it performs better than similar models.
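The Methods and Results paragraphs describe the mechanism only at a high level. The snippet below is a minimal illustrative sketch (Python/NumPy) of that idea under stated assumptions: a gating network is assumed to have already produced one weight per basic saliency map, the maps are fused as a convex combination, and the fused map is scored with the NSS, CC, SIM, and KL-divergence criteria named in the Results. All names (`combine_saliency_maps`, `nss`, etc.), shapes, and the random data are hypothetical and are not taken from the paper's implementation.

```python
import numpy as np

def combine_saliency_maps(basic_maps, weights):
    """Fuse K basic saliency maps of shape (K, H, W) with gating weights (K,)."""
    w = np.asarray(weights, dtype=np.float64)
    w = w / (w.sum() + 1e-12)                    # normalize to a convex combination
    fused = np.tensordot(w, basic_maps, axes=1)  # weighted sum -> (H, W)
    fused -= fused.min()                         # rescale to [0, 1] for comparison
    peak = fused.max()
    return fused / peak if peak > 0 else fused

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-12)
    return z[fixations.astype(bool)].mean()

def cc(saliency, gt_density):
    """Pearson correlation coefficient between predicted and ground-truth maps."""
    a = saliency.ravel() - saliency.mean()
    b = gt_density.ravel() - gt_density.mean()
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def sim(saliency, gt_density):
    """Similarity: histogram intersection of the two maps as distributions."""
    p = saliency / (saliency.sum() + 1e-12)
    q = gt_density / (gt_density.sum() + 1e-12)
    return np.minimum(p, q).sum()

def kldiv(saliency, gt_density, eps=1e-12):
    """KL divergence of the ground-truth density from the predicted one."""
    p = saliency / (saliency.sum() + eps)
    q = gt_density / (gt_density.sum() + eps)
    return (q * np.log(q / (p + eps) + eps)).sum()

# Illustrative usage with random stand-ins for real video-frame data.
rng = np.random.default_rng(0)
maps = rng.random((4, 64, 64))       # 4 hypothetical basic saliency maps
w = rng.random(4)                    # weights such as a gating network might emit
final = combine_saliency_maps(maps, w)
fix = rng.random((64, 64)) > 0.99    # sparse binary fixation map
print(nss(final, fix), cc(final, maps[0]), sim(final, maps[0]))
```

In a mixture-of-experts reading of the abstract, the gating network would predict `weights` per frame from the input itself, letting the combination adapt to different dynamic scenes; the fixed random vector above merely stands in for that output.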

Authors

A. Bosaghzadeh

Artificial Intelligence Department, Faculty of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran

M. Shabani

Artificial Intelligence Department, Faculty of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran

R. Ebrahimpour

Artificial Intelligence Department, Faculty of Computer Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran
