The effect of batch normalization in training deep learning networks with Adam's optimization algorithm
Publication year: 1401 (Iranian calendar)
Document type: conference paper
Language: English
The full text of this article is available as an 8-page PDF.
National scientific document ID:
ENGINEERKH01_066
Indexing date: 24 Ordibehesht 1402 (Iranian calendar)
Abstract:
In this article, we investigate the effect of Batch Normalization and the Adam optimizer on training deep neural networks. Batch Normalization allows much higher learning rates and makes training less sensitive to weight initialization. The Adam method is straightforward to implement, is computationally efficient, has small memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited to problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and for problems with very noisy and/or sparse gradients. Its hyper-parameters have intuitive interpretations and typically require little tuning. The simulation results show that combining Batch Normalization with the Adam optimizer improves the speed and accuracy of the network, reduces the network error, helps prevent overfitting, and consequently yields faster and better convergence.
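The two components the abstract combines can be sketched in a few lines of NumPy: a Batch Normalization forward pass (normalize each feature over the mini-batch, then scale and shift) and a single Adam update with bias-corrected moment estimates. This is a minimal illustrative sketch of the standard formulas, not the authors' simulation code; all function and variable names here are hypothetical.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature across the batch dimension, then scale by
    # gamma and shift by beta (the learnable parameters of the layer).
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

def adam_step(param, grad, m, v, t, lr=1e-3,
              beta1=0.9, beta2=0.999, eps=1e-8):
    # One Adam update: exponential moving averages of the gradient (m)
    # and its square (v), with bias correction for small step counts t.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=2.0, size=(32, 4))  # toy mini-batch
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0))  # per-feature means are ~0 after normalization
```

Because the normalized activations have roughly zero mean and unit variance regardless of how the weights were initialized, the per-coordinate step sizes that Adam derives from `v_hat` stay well scaled, which is one informal way to see why the combination tolerates higher learning rates.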
Authors:
Roghayeh Akbarian
M.Sc., Department of Electrical and Electronic Engineering, Khorasan Institute of Higher Education, Mashhad, Iran
Ali Karsaz
Associate Professor, Department of Electrical and Electronic Engineering, Khorasan Institute of Higher Education, Mashhad, Iran