User Evaluation Research (2 credits)
Instructor: Muh-Chyun Tang
Tel: 33662967
Dept. of Library and Information Science,
National Taiwan University

Course description
Though often overlooked, evaluation studies are an essential part of the design feedback loop. User evaluation research conducted before, during, and after design provides valuable insights for the continued improvement of our services. This class examines the basic components of an evaluation study: goals, criteria, measures, measuring instruments, and methods, and covers a variety of library and information services. Emphasis is placed on research and the collection of data for planning and decision making. Topics covered include usability testing, social media metrics, and information retrieval/recommendation evaluation. Traditional library evaluation components such as evaluation of collections, applied bibliometrics, availability analysis, and user satisfaction will also be addressed. Methods surveyed include circulation and log analyses, experimental design, protocol analysis, critical incident analysis, and survey interviews.

Course Outline

Students will be able to:
1. Understand the basic steps and components of evaluating information services.
2. Gain hands-on experience in conducting usability and user experience studies of online services and apps.
3. Identify the most appropriate method and determine proper evaluation criteria for a particular evaluation project.
4. Gain knowledge of different methodologies for evaluation research, including experimental design, survey and scale construction, the experience sampling method, etc.

Course Schedule
Week 1: Orientation & introduction
Week 2: Effectiveness; input/output/outcome/impact; value-added model; value of information and information services
    Readings: Matthews, Ch. 2, Evaluation models; Eisenberg, M., & Dirks, L. (2008), Taylor's value-added model: Still relevant after all these years.
Week 3: Usability and user experience; evaluation of online systems: basic elements of an evaluation study
    Reading: Tullis and Albert, Ch. 1-3.
Week 4: User research
    Due: Usability case presentation (each group will present a case study from Tullis and Albert, Ch. 10)
    Readings: Baxter & Courage (2015), Ch. 2, Before you choose an activity: Learning about your product users; Parush (2015), Ch. 13, First, user research: Just do it.
Week 5: Types of usability studies: issue finding vs. comparison; task analysis; Morae demo
    Due: Website ideal routes (hospital registration)
    Readings: Tullis and Albert, Ch. 4-5; visit: usability testing
Week 6: Usability testing metrics
    Reading: Tullis and Albert, Ch. 6.
Week 7: Card sorting and tree testing
    Due: Website task analysis (hospital registration and tourist sites)
Week 8: Information retrieval evaluation
    Reading: Hersh (2003), Ch. 3, pp. 83-113.
Week 9: Experimental design; ANOVA, GLM, and advanced experimental design
    Reading: Keppel & Wickens (2004), Ch. 1, pp. 1-11.
Week 10: Recommender system evaluation
    Reading: Konstan, J., & Riedl, J. (2012), Recommender systems: From algorithms to user experience, User Modeling and User-Adapted Interaction, 22, 101-123.
Week 11: User experience; ARL new measures initiatives; evaluation of customer service
Week 12: Card sorting and tree testing exercise; reflection and discussion
Week 13: Live website data/Google Analytics; e-metrics; social media metrics
    Readings: Tullis and Albert, Ch. 9; Social media metrics: The beginner's guide.
Week 14: Analysis of use
    Due: Competitive heuristic evaluation exercise
    Readings: Lancaster (1993), Ch. 1, pp. 1-20; Lancaster (1993), Ch. 3, pp. 51-75 (make sense of the tables).
Week 15: Shelf availability
    Reading: Lancaster (1993), Ch. 8, pp. 129-146.
Week 16: Outcome assessment; evaluation of bibliographic instruction; critical incident analysis
    Reading: Siegel et al. (1991), Evaluating the impact of MEDLINE using the Critical Incident Technique, Proc Annu Symp Comput Appl Med Care, pp. 83-87.
Week 17: Discussion of your final project
Week 18: Final presentation
Assignments and Grading

Class participation: 10% of your final grade
Evaluated based on your attendance and participation (i.e., the questions you raise and the answers you give in class).

Group projects:
Students will form groups of 2 to 3 to carry out the group assignments below.
*For each group project, in addition to the group report, each individual will also write a half-page personal report on his/her contributions and reflections on the assignment.

1. Usability study review: 10%
Present one of the case studies from Chapter 10 of Tullis & Albert (2013), Measuring the user experience: Collecting, analyzing, and presenting usability metrics (Newnes), an electronic book accessible through the NTU library.
This assignment is designed to help you become familiar with actual user study procedures, so focus mainly on the methodologies used in these studies. Specifically, the review should cover the six basic elements of an evaluation study:
1. Construct
2. Criteria
3. Measures
4. Measuring instruments
5. Methodology 
6. Strengths and potential flaws (e.g. threats to external or internal validity)

Only a PowerPoint presentation is required for this exercise.

2. Task analysis and prototyping: 20%
Conduct a hierarchical task analysis of hospital registration, which involves finding a doctor at a major hospital and booking an appointment.
First, break the procedure down into smaller tasks:
1. Determine the specialty you need to consult based on your diseases/symptoms.
2. Choose a doctor that best fits your needs (several criteria need to be considered, including specialty, available time, and credentials).
3. Make an appointment with the doctor.
Notice that each step can be further decomposed into smaller steps or moves that can be performed on an interface.

Visit two major hospital registration sites. Perform the registration task and, based on your task analysis, identify the best route to complete the task on each site. Compare the route each website actually requires with the best route, and comment on the strengths and shortcomings of each site.

Redesign a navigation structure for a hospital's registration page that best implements the steps needed to complete the registration task. Only a PowerPoint presentation is required for this exercise; use screenshots and/or screen-capture video clips to demonstrate your points.
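Before building slides, it can help to write the decomposition down as a small tree. The sketch below is a minimal, hypothetical illustration in Python (the task names follow the hospital-registration example above; the nested-dict format and the `flatten()` helper are not a required deliverable):

```python
# A minimal sketch of a hierarchical task analysis (HTA) as a nested tree.
# Task names follow the hospital-registration example; the structure and
# the flatten() helper are illustrative only, not a required format.

def flatten(task, depth=0, numbering=""):
    """Return an indented, numbered outline of the task hierarchy."""
    prefix = ("  " * depth) + (numbering + " " if numbering else "")
    lines = [prefix + task["name"]]
    for i, sub in enumerate(task.get("subtasks", []), start=1):
        lines.extend(flatten(sub, depth + 1, f"{numbering}{i}."))
    return lines

hta = {
    "name": "Register an appointment with a doctor",
    "subtasks": [
        {"name": "Determine the specialty to consult",
         "subtasks": [{"name": "Match symptoms to the department list"}]},
        {"name": "Choose a doctor",
         "subtasks": [{"name": "Compare specialty, available time, and credentials"}]},
        {"name": "Make the appointment",
         "subtasks": [{"name": "Fill in patient details and confirm a time slot"}]},
    ],
}

print("\n".join(flatten(hta)))
```

Each leaf can be decomposed further into interface-level moves by adding deeper "subtasks" entries; the printed outline then doubles as the numbered task list for your slides.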

3. Card sorting and tree testing exercise: 20%
For this assignment you are to perform a card-sorting analysis of a functional website and produce a revised navigational structure based on the results. You will then conduct a usability test to compare the original and revised versions of the navigational structure:
1. Choose a website that you believe has navigational/labeling issues that would negatively impact information findability.
2. Visually display the portions or levels of the web structure you wish to reorganize.
3. Take the basic constituent units (20-30) of the structure and recruit 2-4 participants to perform a card-sorting test.
4. Create a new navigational structure with new labels based on the results of the card-sorting test with real users.
5. Compare the usability of the original navigational structure and the one based on the card-sorting results. First build both versions of the structure in a tree-testing tool (e.g., Treejack), then recruit 4-6 participants to perform three information-finding tasks on each, so that you can empirically determine whether the new design has better usability (i.e., task success, efficiency, user satisfaction, etc.).
6. Choose your performance criteria carefully; they should include both performance and self-report criteria (e.g., questionnaire or interview). Report your performance measurements using the examples in our textbook.
7. Only a PowerPoint presentation is required for this exercise; use screenshots and/or screen-capture videos to demonstrate your points.
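A common way to summarize open card-sort results before drawing the new structure is a co-occurrence matrix: for each pair of cards, count how many participants placed them in the same group. A minimal sketch (the participant data and card labels are invented for illustration; real tools compute this for you):

```python
# Sketch: co-occurrence counts from open card-sort results.
# Each participant's sort is a list of groups; each group is a list of cards.
from collections import Counter
from itertools import combinations

def co_occurrence(sorts):
    """Count, for every pair of cards, how many participants
    placed both cards in the same group."""
    counts = Counter()
    for groups in sorts:              # one card sort per participant
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] += 1   # pair key is alphabetically ordered
    return counts

# Hypothetical results from three participants sorting four cards.
sorts = [
    [["Fees", "Insurance"], ["Find a doctor", "Clinic hours"]],
    [["Fees", "Insurance", "Clinic hours"], ["Find a doctor"]],
    [["Fees", "Insurance"], ["Find a doctor", "Clinic hours"]],
]

counts = co_occurrence(sorts)
print(counts[("Fees", "Insurance")])          # grouped together by all 3
print(counts[("Clinic hours", "Find a doctor")])
```

Pairs with high counts are candidates for the same branch of the revised navigational structure; pairs with low counts across participants suggest the original grouping was not shared by users.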

4. A competitive heuristic evaluation exercise: 10%
For this assignment, you will choose a target website or app and perform a competitive heuristic evaluation. To complete the assignment, you will:
1. Conduct a benchmarking comparison of your main or "surrogate" products: survey 2-4 apps of the same nature and/or offering similar services as the target app, and interact with each to find its strengths and weaknesses in terms of functionality, features, usability, and user experience.
2. Apply what you learn from these apps to evaluate the usability of the target app.
3. List the design issues that might cause user errors or inefficiency in the app you are evaluating.
4. Make suggestions on how the target app can be improved.
5. Only a PowerPoint presentation is required for this exercise; use screenshots and/or screen-capture video clips to demonstrate your points.

5. Empirical usability testing project: 30%
For this assignment, each group will conduct an empirical usability study of an online information resource of your choosing.
    Definition of an online service:
An online service can be a library website, a library OPAC, a bibliographic database, an e-commerce or government site, or an app. The user study can be either an IR performance evaluation or an interface usability test.
    Research participants:
The study should include at least 2 participants who are potential users of the online service.
The participants will be asked to perform tasks (genuine or assigned) so that the usability of the site can be evaluated. Their online activities will be recorded using screen-capture software such as Morae.
You will write a 4-6 page paper reporting your methodology and findings and present it to the class. In your findings, try to point out the usability issues of the site and make recommendations on how it can be improved.
Detailed instructions for usability exercise
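The basic performance measures covered in the textbook (task success rate, time on task) are straightforward to compute once your observations are tabulated. A minimal sketch with invented data (participant numbers, task names, and times are hypothetical):

```python
# Sketch: summarizing basic usability performance measures per task.
# The observations below are invented for illustration only.
from statistics import mean, median

# One row per (participant, task): success is 1/0; time is seconds
# to complete, or None if the participant abandoned the task.
results = [
    {"task": "find doctor", "success": 1, "time": 74},
    {"task": "find doctor", "success": 1, "time": 102},
    {"task": "find doctor", "success": 0, "time": None},
    {"task": "book slot",   "success": 1, "time": 58},
    {"task": "book slot",   "success": 1, "time": 66},
    {"task": "book slot",   "success": 1, "time": 91},
]

def summarize(results, task):
    """Task success rate plus mean/median time on task (completions only)."""
    rows = [r for r in results if r["task"] == task]
    successes = [r["success"] for r in rows]
    times = [r["time"] for r in rows if r["time"] is not None]
    return {
        "success_rate": sum(successes) / len(successes),
        "mean_time": mean(times),
        "median_time": median(times),
    }

print(summarize(results, "find doctor"))
print(summarize(results, "book slot"))
```

Report both performance measures like these and your self-report measures (satisfaction ratings, interview responses); with only a handful of participants, treat the numbers as descriptive rather than statistically conclusive.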


Baxter, K., & Courage, C. (2015). Before you choose an activity. In Understanding your users: A practical guide to user research methods. Morgan Kaufmann.
Bertot, J. C. (2004). Assessing digital library services: Approaches, issues, and considerations. International Symposium on Digital Libraries and Knowledge Communities in Networked Information Society (DLKC'04). Available online.
Cook, C., & Heath, F. M. (2001a). Users' perceptions of library service quality: A LibQUAL+ qualitative study. Library Trends, 49(4).
Hersh, W. R. (2003). Information retrieval: A health and biomedical perspective. New York: Springer.
Herlocker, J. L., Konstan, J. A., Terveen, L. G., & Riedl, J. T. (2004). Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems, 22(1), 5-53. doi:10.1145/963770.963772
Hiller, S. (2001). Assessing user needs, satisfaction, and library performance at the University of Washington Libraries. Library Trends, 49(4), 605-625.

Hiller, S., & Self, J. (2004). From measurement to management: Using data wisely for planning and decision-making. Library Trends, 53(1), 129-155.

Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Prentice-Hall.

Kyrillidou, M. (2002). From input and output measures to quality and outcome measures, or, from the user in the life of the library to the library in the life of the user.
Lancaster, F.W. (1993). If you want to evaluate your library. London : Library Association. 

O'Brien, H. L., & Toms, E. G. (2010). The development and evaluation of a survey to measure user engagement. Journal of the American Society for Information Science and Technology, 61(1), 50-69.
Parush, A. (2015). Conceptual design for interactive systems: designing for performance and user experience. Morgan Kaufmann.
Taylor, R.S. (1984). Value-added process in information systems. Norwood, NJ: Ablex.

Matthews, J. R. (2007). The evaluation and measurement of library services. Westport, CT: Libraries Unlimited.
Tullis, T., & Albert, W. (2013). Measuring the user experience: Collecting, analyzing, and presenting usability metrics. Newnes.