
Comparing and evaluating an interactive, subject-based search system with the traditional digital library’s search functionality

Konstantinos Kyprianos

Abstract


Purpose – The purpose of this paper is to compare and evaluate the usability, usefulness and effectiveness of an interactive information retrieval (IIR) system against a DSpace-based digital library.

Design/methodology/approach – The proposed evaluation approach consists of two subcomponents. The first is a log file analysis capable of revealing quantitative features of the systems' usage. The second is a user survey that compares the new IIR system against the traditional subject-based search functionality provided by DSpace in terms of usefulness and effectiveness.
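To illustrate the log file analysis component, the sketch below shows how quantitative usage features (session counts and search request counts) could be extracted from a web server access log. This is a minimal example under stated assumptions, not the paper's actual tooling: the log file name, the Common Log Format, the "/search" endpoint and the 30-minute inactivity threshold are all hypothetical.

import re
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical Common Log Format line, e.g.:
# 127.0.0.1 - - [10/Oct/2018:13:55:36 +0300] "GET /search?query=history HTTP/1.1" 200 2326
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+)'
)
SESSION_TIMEOUT = timedelta(minutes=30)  # assumed inactivity threshold

def analyze(log_path: str):
    """Count search requests and approximate sessions per client IP."""
    last_seen = {}               # ip -> timestamp of previous request
    sessions = defaultdict(int)  # ip -> number of sessions started
    searches = 0
    with open(log_path, encoding="utf-8") as fh:
        for line in fh:
            m = LOG_PATTERN.match(line)
            if not m:
                continue  # skip malformed lines
            ts = datetime.strptime(m["ts"], "%d/%b/%Y:%H:%M:%S %z")
            ip = m["ip"]
            # Start a new session after a long gap of inactivity.
            if ip not in last_seen or ts - last_seen[ip] > SESSION_TIMEOUT:
                sessions[ip] += 1
            last_seen[ip] = ts
            if "/search" in m["path"]:  # assumed search endpoint
                searches += 1
    return sum(sessions.values()), searches

if __name__ == "__main__":
    total_sessions, total_searches = analyze("access.log")  # hypothetical file
    print(f"sessions: {total_sessions}, search requests: {total_searches}")

Delimiting sessions by an inactivity gap is a simple and common heuristic in web log analysis; more elaborate session identification methods exist, but the gap-based approach suffices for coarse quantitative usage features.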

Findings – Based on the evaluation results, users appear very interested in employing new methods and techniques for information seeking and retrieval, especially when such tools help them fulfil their information needs accurately and promptly. The results also revealed that users are more satisfied when employing the new search functionality and that the search and retrieval process is improved.

Originality/value – A novel IIR system for subject-based browsing was evaluated, and the results offer useful insights into the future of such tools.


Keywords


evaluation, questionnaire, user survey, IIR system, simulated work task scenario




DOI: 10.18780/jiim.v3i1.4269

