The best practice of Gradient boosting machines (GBMs) Methods for machine learning
Publication year: 1401 (Solar Hijri)
Document type: Conference paper
Language: English
Views: 213
The full text of this paper is available as a 7-page PDF.
National scientific document ID:
ITCT17_043
Indexing date: 26 Dey 1401
Abstract:
Boosting is one of the most powerful learning ideas introduced in the last twenty years. It was originally designed for classification problems, but it can profitably be extended to regression as well. The motivation for boosting was a procedure that combines the outputs of many "weak" classifiers to produce a powerful "committee." Gradient boosting machines (GBMs) are an extremely popular class of machine learning algorithms that has proven successful across many domains and is one of the leading methods for winning Kaggle competitions. Whereas random forests build an ensemble of deep, independent trees, GBMs build an ensemble of shallow trees in sequence, with each tree learning from and improving on the previous one. Moreover, gradient boosting constructs additive regression models by sequentially fitting a simple parameterized function (the base learner) to the current pseudo-residuals by least squares at each iteration. The pseudo-residuals are the gradient of the loss functional being minimized, with respect to the model value at each training data point, evaluated at the current step.
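To make the sequential fitting described above concrete, the following is a minimal sketch of that additive scheme, assuming the squared-error loss L = ½(y − F(x))², for which the pseudo-residuals (the negative gradient with respect to F) reduce to the ordinary residuals y − F(x). The class name SimpleGBM and the hyperparameter values are illustrative, and scikit-learn's DecisionTreeRegressor stands in for the shallow base learner; this is not the paper's own implementation.

```python
# Minimal gradient-boosting sketch for regression with squared-error loss.
# Assumptions: scikit-learn available; SimpleGBM and all hyperparameter
# values below are illustrative choices, not taken from the paper.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class SimpleGBM:
    def __init__(self, n_estimators=100, learning_rate=0.1, max_depth=3):
        self.n_estimators = n_estimators
        self.learning_rate = learning_rate
        self.max_depth = max_depth
        self.trees = []

    def fit(self, X, y):
        # Initialize the additive model with a constant (the mean of y).
        self.init_ = np.mean(y)
        F = np.full(len(y), self.init_)
        for _ in range(self.n_estimators):
            # Pseudo-residuals: negative gradient of 1/2 * (y - F)^2
            # with respect to F, which is simply y - F.
            residuals = y - F
            # Fit a shallow tree to the pseudo-residuals by least squares.
            tree = DecisionTreeRegressor(max_depth=self.max_depth)
            tree.fit(X, residuals)
            # Take a small step in the direction of the new base learner.
            F += self.learning_rate * tree.predict(X)
            self.trees.append(tree)
        return self

    def predict(self, X):
        # Sum the constant initialization and all scaled tree predictions.
        F = np.full(X.shape[0], self.init_)
        for tree in self.trees:
            F += self.learning_rate * tree.predict(X)
        return F
```

With the squared-error loss, each iteration simply fits a new tree to what the current ensemble still gets wrong; swapping in a different differentiable loss would change only how the pseudo-residuals are computed, not the overall sequential structure.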
Keywords:
Authors
Peyvand Ahmadi
Information Technology Management, Payame Noor University, Qeshm, Iran
Anita Seihoon
Computer Sciences, Amirkabir University of Technology, Tehran, Iran