Multimodal Spatiotemporal Feature Map for Dynamic Gesture Recognition from Real Time Video Sequences

Publication year: 1402 (Solar Hijri; 2023)
Document type: Journal article
Language: English

The full text of this article is available as a 9-page PDF file.



National scientific document ID:

JR_IJE-36-8_004

Indexing date: 10 Mordad 1402 (1 August 2023)

Abstract:

The utilization of artificial intelligence and computer vision has been extensively explored in the context of human activity and behavior recognition. Numerous researchers have investigated and proposed techniques for human action recognition (HAR) that aim to accurately identify actions in real-time video. Among these techniques, convolutional neural networks (CNNs) have emerged as the most effective and widely used for activity recognition. This work focuses on the significance of spatial information in activity/action classification. To identify human actions and behaviors in large video datasets, this paper proposes a two-stream spatial CNN approach. One stream is fed with spatial information from unprocessed RGB frames. The second stream is driven by visual saliency maps generated with the Graph-Based Visual Saliency (GBVS) method. The outputs of the two spatial streams are combined using sum, max, average, and product feature fusion. The proposed method is evaluated on well-known benchmark human action datasets, namely KTH, UCF101, HMDB51, NTU RGB-D, and G3D, to assess its performance. Promising recognition rates were observed on all datasets.
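
For illustration only, the following is a minimal sketch (not the authors' code) of the two-stream fusion idea described in the abstract: two spatial CNN branches, one on raw RGB frames and one on GBVS saliency maps, whose feature vectors are merged by sum, max, average, or product fusion before classification. The backbone, feature dimension, input size, and class count are assumptions made for the example; PyTorch is assumed as the framework.

```python
# Minimal two-stream spatial fusion sketch (illustrative, not the paper's code).
import torch
import torch.nn as nn


class SpatialStream(nn.Module):
    """One spatial branch: a small CNN mapping a frame to a feature vector."""

    def __init__(self, in_channels: int, feat_dim: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fc(self.features(x).flatten(1))


def fuse(rgb_feat: torch.Tensor, sal_feat: torch.Tensor, mode: str = "sum") -> torch.Tensor:
    """Element-wise feature fusion of the two streams (sum/max/avg/product)."""
    if mode == "sum":
        return rgb_feat + sal_feat
    if mode == "max":
        return torch.maximum(rgb_feat, sal_feat)
    if mode == "avg":
        return (rgb_feat + sal_feat) / 2
    if mode == "product":
        return rgb_feat * sal_feat
    raise ValueError(f"unknown fusion mode: {mode}")


# Example: one RGB frame and its single-channel GBVS saliency map per clip.
rgb_stream = SpatialStream(in_channels=3)
saliency_stream = SpatialStream(in_channels=1)
classifier = nn.Linear(256, 101)              # e.g. 101 classes for UCF101

rgb = torch.randn(8, 3, 112, 112)             # batch of RGB frames (assumed size)
saliency = torch.randn(8, 1, 112, 112)        # matching GBVS saliency maps
logits = classifier(fuse(rgb_stream(rgb), saliency_stream(saliency), "max"))
print(logits.shape)                           # torch.Size([8, 101])
```

In practice the fusion mode is a hyperparameter; the abstract reports that sum, max, average, and product fusion were all evaluated.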

Authors

S. Reddy P.

Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur AP, India

C. Santhosh

Department of Electronics and Communication Engineering, Koneru Lakshmaiah Education Foundation, Vaddeswaram, Guntur AP, India
