A Review on Web Search Engines’ Automatic Evaluation Methods and How to Select the Evaluation Method

Publication year: 1395 (2016)
Document type: Conference paper
Language: English
Views: 842

The full text of this paper is available in 7 pages in PDF and WORD formats.

National scientific document ID:

IRANWEB02_057

Indexing date: 9 Mordad 1395 (July 30, 2016)

Abstract:

Nowadays, search engines are recognized as the gateway to the tremendous amount of information on the internet. They provide aids and services for meeting users’ diverse information needs. Being able to evaluate their effectiveness and performance is therefore steadily gaining importance, because such evaluations are useful for both developers and users of search engines. Developers can use the evaluation results to improve their strategies and paradigms for developing search engines. Users, on the other hand, can identify the best-performing search engines and satisfy their information needs in a better, quicker, and more accurate way. Search engines can be evaluated in two different ways: either manually, using human assessors, or automatically, using machine-based approaches that do not rely on human assessors and their judgments. For manual evaluation, numerous standardized efforts have already been carried out by the organizers and participants of conferences such as TREC and CLEF. For automatic evaluation, in contrast, despite the variety of efforts made by different researchers, no categorization or organization of such methods exists so far. As a result, anyone who wants to use one of the automatic evaluation methods must read all of the relevant literature on these methods, which is a very time-consuming and confusing task. In this paper, we review almost all of the important automatic methods reported for the evaluation of search engines. Analyzing the results of this review, we state the requirements and prerequisites for using each of these methods. Finally, a framework for selecting the most pertinent method for each evaluation scenario is suggested.
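As a concrete illustration of what a judgment-free, automatic evaluation can look like, the sketch below implements one well-known idea from this family: documents returned by several engines for the same query are treated as pseudo-relevant, and each engine is then scored by precision at k against that pseudo-relevant set. This is only a minimal sketch of the general idea, not the framework proposed in the paper; the function names, the min_engines threshold, and the example data are illustrative assumptions.

```python
# Minimal sketch of an overlap-based (pseudo-relevance) automatic evaluation:
# no human assessors are involved; agreement among engines substitutes for
# relevance judgments. Names and thresholds here are hypothetical.

from collections import Counter

def pseudo_relevant(results_per_engine, min_engines=2):
    """Treat documents returned by at least `min_engines` engines as relevant."""
    counts = Counter(
        doc
        for ranking in results_per_engine.values()
        for doc in set(ranking)          # count each engine at most once per document
    )
    return {doc for doc, c in counts.items() if c >= min_engines}

def precision_at_k(ranking, relevant, k=10):
    """Fraction of the top-k results that fall in the pseudo-relevant set."""
    top_k = ranking[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

# Example: three engines answering the same query with ranked URL lists.
results = {
    "engine_A": ["u1", "u2", "u3", "u4"],
    "engine_B": ["u2", "u5", "u1", "u6"],
    "engine_C": ["u7", "u2", "u8", "u1"],
}
relevant = pseudo_relevant(results)      # here: {"u1", "u2"}
for name, ranking in results.items():
    print(name, precision_at_k(ranking, relevant, k=4))
```

In practice, scores like these would be averaged over a large query set before comparing engines; the sketch only shows the per-query scoring step.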

Authors

Masomeh Azimzadeh

Information Technology Research Group, Information and Telecommunication Research Center (ITRC), Tehran, Iran

Reza Badie

Information Technology Research Group, Information and Telecommunication Research Center (ITRC), Tehran, Iran

Mohammad Mehdi Esnaashari

Information Technology Research Group, Information and Telecommunication Research Center (ITRC), Tehran, Iran
