Federated Learning Trustworthiness against Label Flipping in Complex Intelligent Environments
Publication year: 1404 (Solar Hijri)
Document type: Journal article
Language: English
The full text of this article is available as a 12-page PDF.
National scientific document ID:
JR_ITRC-17-4_003
Indexing date: 7 Bahman 1404
Abstract:
Federated learning (FL) is a decentralized machine learning paradigm that enables multiple clients to collaboratively train models without disclosing their raw data. However, its openness also makes it susceptible to poisoning attacks, particularly label flipping (LF), in which malicious clients alter training labels to corrupt the global model. Such an attack drifts the model so that its performance degrades on the targeted, attack-related classes while the attackers behave like benign clients on the remaining classes, making detection harder. We counter this with a defense mechanism that dynamically adjusts trust factors to filter out malicious updates based on last-layer gradient similarity. This study builds on earlier work by evaluating the defense across a variety of datasets and more challenging adversarial scenarios, such as multi-group attacks of differing intensities. Experimental results show that the method keeps accuracy close to that of the clean model while drastically reducing the impact of label flipping, cutting the attack success rate by 50%. These results demonstrate the importance of adaptive security measures for protecting FL models in hostile and changing environments.
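The defense described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the trust-update rule, and the learning-rate parameter are all illustrative assumptions. The sketch captures the stated idea: compare each client's last-layer gradient to the consensus direction via cosine similarity, adjust per-client trust factors accordingly, and aggregate updates weighted by trust.

```python
import numpy as np

def cosine_sim(a, b):
    # Cosine similarity between two flattened gradient vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def update_trust(last_layer_grads, trust, lr=0.5):
    """Illustrative trust update from last-layer gradient similarity.

    last_layer_grads: list of 1-D arrays, one per client (flattened
    last-layer gradients). trust: current per-client trust in [0, 1].
    The update rule here is a plausible stand-in, not the paper's exact rule.
    """
    mean_grad = np.mean(last_layer_grads, axis=0)
    sims = np.array([cosine_sim(g, mean_grad) for g in last_layer_grads])
    # Raise trust for clients aligned with the consensus direction,
    # lower it for clients pointing away (candidate label flippers).
    trust = np.clip(trust + lr * (sims - sims.mean()), 0.0, 1.0)
    return trust, sims

def aggregate(updates, trust):
    """Trust-weighted average of client updates (a FedAvg variant)."""
    w = trust / (trust.sum() + 1e-12)
    return sum(wi * u for wi, u in zip(w, updates))
```

A label-flipping client's last-layer gradient tends to point away from the benign consensus for the targeted classes, so its cosine similarity drops, its trust factor shrinks, and its contribution to the aggregated global update is down-weighted or filtered out entirely.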
Keywords:
Authors:
Mohammad Ali Zamani
Faculty of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
Fatemeh Amiri
Department of Computer Engineering, Hamedan University of Technology, Hamedan, Iran
Nasser Yazdani
Electrical and Computer Engineering, University of Tehran, Tehran, Iran