Multi-Instance Learning (MIL) by Finding an Optimal set of Classification Exemplars (OSCE) Using Linear Programming

Publication year: 1400 SH (2021)
Document type: journal article
Language: English
Views: 161

The full text of this article is available as an 11-page PDF.

National scientific document ID: JR_JCR-14-1_004

Indexing date: 18 Mehr 1400 (October 2021)

Abstract:

This paper describes how to classify a data set using an optimal set of exemplars to determine the label of an instance, addressing the classification run-time problem on large data sets. These exemplars are used to classify positive and negative bags in a synthetic data set. Several methods exist for implementing multi-instance learning (MIL), such as SVM, CNN, and Diverse Density. In this paper, an optimal set of classification exemplars (OSCE) is used to recognize positive bags (bags that contain tumor patches). The goal is to speed up the classifier's run time by choosing a small set of exemplars. A linear programming problem is solved to optimize a hinge loss cost function in which the estimated label and the actual label are compared to train the classifier. The estimated label is computed from the Euclidean distances between a query point and its k nearest neighbors together with their actual label values. To select exemplars with non-zero weights, two approaches are suggested for better results: choosing the k closest neighbors, and using LP with thresholding to keep the largest of the obtained unknown variables, which are the most significant in forming the exemplar set. There is also a trade-off between classifier run time and accuracy. On large data sets, the OSCE classifier performs better than ANN and k-NN clustering, and OSCE is faster than an NN classifier. After describing the OSCE method, we apply it to recognize cancer in a synthetic data set. Indeed, we define OSCE so that it can be applied to MIL for cancer detection.
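The abstract outlines the approach but gives no formulation, so the following is a minimal sketch (not the authors' code) of the idea as described: weights over candidate exemplars are obtained by solving a linear program that minimizes a hinge loss on kNN-style predictions, and exemplars are then selected by thresholding the weights. The similarity function, the regularization weight lam, the neighborhood size k, and the threshold value are assumptions introduced purely for illustration.

import numpy as np
from scipy.optimize import linprog
from scipy.spatial.distance import cdist

def select_exemplars(X, y, k=5, lam=0.1, threshold=1e-3):
    """X: (n, d) training instances, y: (n,) labels in {-1, +1}.
    Returns indices of the exemplars kept after thresholding."""
    n = len(y)
    dist = cdist(X, X)                       # pairwise Euclidean distances
    np.fill_diagonal(dist, np.inf)           # a point is not its own neighbor

    # similarity restricted to each point's k nearest neighbors, 0 elsewhere
    S = np.zeros((n, n))
    for j in range(n):
        nn = np.argsort(dist[j])[:k]
        S[j, nn] = 1.0 / (1.0 + dist[j, nn])

    # LP variables: z = [w_1..w_n, xi_1..xi_n], all >= 0
    # objective: minimize lam * sum(w) + sum(xi)
    c = np.concatenate([lam * np.ones(n), np.ones(n)])

    # hinge constraints: xi_j >= 1 - y_j * sum_i w_i * y_i * S[j, i],
    # rewritten as  -y_j * (S[j] * y) @ w - xi_j <= -1
    A_ub = np.hstack([-(y[:, None] * S * y[None, :]), -np.eye(n)])
    b_ub = -np.ones(n)

    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * n), method="highs")
    w = res.x[:n]
    return np.flatnonzero(w > threshold)     # exemplars with non-zero weight

Under these assumptions, a query point would then be labeled by a nearest-neighbor vote over the selected exemplars only, which is where the claimed run-time saving over a full NN classifier would come from.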

Keywords:

Integer linear programming (ILP), linear programming (LP), exemplar, hinge loss function, multi-instance learning (MIL), positive bag

Authors

Mohammad Khodadadi Azadboni

Faculty of Electrical Engineering, Czech Technical University, Prague, Czech Republic

Abolfazl Lakdashti

Faculty of Computer Engineering, Rouzbahan University, Sari, Iran
