HCI International 2015
Los Angeles, CA, USA
2-7 August 2015


T23: Practical Statistics for User Research

Tuesday, 4 August 2015, 14:00 - 17:30

James R. Lewis (short bio)
IBM Software Group, USA

Objectives:

The purpose of this tutorial is to provide practical information about using statistics in user research, focusing on estimation and testing for a variety of situations and types of measurements.

Content and Benefits:

If you don’t measure it, you can’t manage it. Usability analysis and user research are about more than rules of thumb, good design, and intuition: they are about making better decisions with data. Is Product A faster than Product B? Will more users complete tasks using the new design? Did we meet our goal of a 75% completion rate? What sample size should we plan on for a survey, or for comparing products? Will five users really find 85% of all problems? Learn how to conduct and interpret appropriate statistical tests on usability data, compute sample sizes, and communicate your results to stakeholders in easy-to-understand terms.

The tutorial will begin with a brief review of the most common ways to measure usability, leading into fundamental statistical concepts and terms. It will then cover the fundamentals of confidence interval estimation and statistical hypothesis testing for a subset of typical usability/user research metrics (completion rates, completion times, satisfaction) and situations (estimating values, comparing against benchmarks, comparing two interfaces).

  1. Introduction: Brief review of the most common ways to measure usability, leading into fundamental statistical concepts and terms
    1. Statistical concepts, terms and misconceptions
    2. Understanding the role of chance in interpreting data
  2. Objective: Clearly understand the capabilities and limits of small-sample usability data through the use of confidence intervals (see the confidence interval sketches after this outline)
    1. Interval and point estimation
    2. Confidence interval around a completion rate
    3. Confidence interval for completion times
  3. Objective: Compare two interfaces or versions (A/B testing; see the comparison sketch after this outline)
    1. Comparing two completion rates (N-1 2-proportion test; Fisher Exact Test)
    2. Comparing task times (standard 1- and 2-sample t-tests)
  4. Objective: Determine if a usability test task has met or exceeded a goal (e.g., users can complete the transaction in less than 2 minutes); see the benchmark sketch after this outline.
    1. Comparing a task completion rate against a benchmark
    2. Comparing an average task time against a specification limit
  5. Sample Size Estimation (Formative and Summative); see the sample size sketch after this outline
    1. Do you only need to test with five users?
    2. Sample size and precision
    3. Sample size for comparisons
  6. Wrapping up
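
A few short sketches follow as a preview of the kinds of calculations the outline refers to. They are illustrative only: Python with SciPy is assumed, and all function names and example data are made up rather than taken from the tutorial materials.

For the confidence-interval objective, one common way to put an interval around a small-sample completion rate is an adjusted-Wald (Agresti-Coull style) interval:

    import math
    from scipy import stats

    def adjusted_wald_ci(successes, n, confidence=0.95):
        """Adjusted-Wald confidence interval for a completion rate.

        Adds z^2/2 notional successes and z^2/2 notional failures before
        applying the usual Wald formula, which behaves much better than
        the plain Wald interval at the small sample sizes typical of
        usability tests.
        """
        z = stats.norm.ppf(1 - (1 - confidence) / 2)
        p_adj = (successes + z * z / 2) / (n + z * z)
        margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z * z))
        return max(0.0, p_adj - margin), min(1.0, p_adj + margin)

    # Example (hypothetical data): 9 of 10 participants completed the task
    low, high = adjusted_wald_ci(9, 10)
    print(f"95% CI for the completion rate: {low:.2f} to {high:.2f}")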
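
Task-time data are typically positively skewed, so a common approach is to compute a t-interval on log-transformed times and back-transform the endpoints, giving an interval around the geometric mean. A minimal sketch, again with illustrative data:

    import math
    from scipy import stats

    def log_time_ci(times, confidence=0.95):
        """Confidence interval for task time via log transformation.

        Computes a t-interval on the natural logs of the times and
        exponentiates the endpoints, yielding an interval around the
        geometric mean rather than the arithmetic mean.
        """
        logs = [math.log(t) for t in times]
        n = len(logs)
        mean = sum(logs) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
        t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)
        margin = t_crit * sd / math.sqrt(n)
        return math.exp(mean - margin), math.exp(mean + margin)

    # Example (hypothetical data): task times in seconds from a small study
    times = [75, 92, 110, 68, 145, 83, 120, 97]
    low, high = log_time_ci(times)
    print(f"95% CI for the typical task time: {low:.0f} to {high:.0f} seconds")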
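
For the A/B testing objective, the sketch below compares two completion rates with an N-1 two-proportion test and cross-checks it with a Fisher exact test, then compares task times with a two-sample t-test (the Welch, unequal-variance variant is used here rather than the classic equal-variance test). Data are hypothetical.

    from math import sqrt
    from scipy import stats

    def n_minus_1_two_proportion_test(x1, n1, x2, n2):
        """Two-sided N-1 two-proportion test for two completion rates.

        Identical to the usual pooled two-proportion z-test except that
        the statistic is scaled by sqrt((N - 1) / N), which improves its
        behavior with the small samples common in usability testing.
        """
        p1, p2 = x1 / n1, x2 / n2
        pooled = (x1 + x2) / (n1 + n2)
        N = n1 + n2
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        z = (p1 - p2) * sqrt((N - 1) / N) / se
        return z, 2 * stats.norm.sf(abs(z))

    # Example: 11/12 completed with design A versus 6/12 with design B
    z, p = n_minus_1_two_proportion_test(11, 12, 6, 12)
    print(f"N-1 two-proportion test: z = {z:.2f}, p = {p:.3f}")

    # Fisher exact test on the same 2x2 table (successes, failures per design)
    odds_ratio, p_fisher = stats.fisher_exact([[11, 1], [6, 6]])
    print(f"Fisher exact test: p = {p_fisher:.3f}")

    # Comparing task times between the designs with a Welch two-sample t-test
    times_a = [62, 81, 75, 90, 68, 77]
    times_b = [95, 102, 88, 120, 91, 104]
    t_stat, p_time = stats.ttest_ind(times_a, times_b, equal_var=False)
    print(f"Welch t-test on task times: t = {t_stat:.2f}, p = {p_time:.3f}")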
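
For benchmark comparisons, a completion rate can be tested against a target with a one-sided exact binomial test, and a task-time goal (for example, "complete the transaction in less than 2 minutes") can be tested with a one-sided one-sample t-test on log times against the log of the specification limit. A sketch with made-up data:

    import math
    from scipy import stats

    def completion_rate_vs_benchmark(successes, n, benchmark):
        """One-sided exact binomial test against a benchmark completion rate.

        Returns the probability of observing this many successes or more
        if the true rate were only at the benchmark; a small value
        supports the claim that the benchmark has been exceeded.
        """
        return stats.binom.sf(successes - 1, n, benchmark)

    def mean_time_vs_spec(times, spec_seconds):
        """One-sided one-sample t-test on log task times against a spec limit.

        A small p-value supports the claim that the typical (geometric
        mean) task time is below the specification limit.
        """
        logs = [math.log(t) for t in times]
        n = len(logs)
        mean = sum(logs) / n
        sd = math.sqrt(sum((x - mean) ** 2 for x in logs) / (n - 1))
        t_stat = (mean - math.log(spec_seconds)) / (sd / math.sqrt(n))
        return stats.t.cdf(t_stat, df=n - 1)

    # Example: 9 of 10 completed against a 75% completion-rate goal
    print(f"p = {completion_rate_vs_benchmark(9, 10, 0.75):.3f}")

    # Example: task times in seconds against a 120-second specification limit
    times = [95, 80, 110, 70, 130, 88, 105, 99]
    print(f"p = {mean_time_vs_spec(times, 120):.3f}")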
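
Finally, for sample size estimation: the familiar "five users" claim rests on the cumulative problem-discovery formula 1 - (1 - p)^n, and sample sizes for estimation and comparison can be roughed out from the desired precision or detectable difference. The sketch below uses simple z-based approximations (the exact procedures iterate with the t distribution); all planning values are hypothetical.

    import math
    from scipy import stats

    def problem_discovery(p, n):
        """Expected proportion of problems seen at least once with n users,
        when each problem is detected by any one user with probability p."""
        return 1 - (1 - p) ** n

    def users_for_discovery(p, goal):
        """Smallest n such that 1 - (1 - p)^n >= goal."""
        return math.ceil(math.log(1 - goal) / math.log(1 - p))

    def n_for_margin(sd, margin, confidence=0.95):
        """Rough z-based sample size to estimate a mean to within +/- margin."""
        z = stats.norm.ppf(1 - (1 - confidence) / 2)
        return math.ceil((z * sd / margin) ** 2)

    def n_per_group(sd, difference, alpha=0.05, power=0.80):
        """Rough z-based sample size per group to detect a difference in means."""
        z_alpha = stats.norm.ppf(1 - alpha / 2)
        z_beta = stats.norm.ppf(power)
        return math.ceil(2 * ((z_alpha + z_beta) * sd / difference) ** 2)

    # With p = 0.31, five users are expected to uncover roughly 84-85%
    # of the problems available for discovery.
    print(f"Discovery with 5 users at p = 0.31: {problem_discovery(0.31, 5):.2f}")
    print(f"Users needed for 90% discovery at p = 0.31: {users_for_discovery(0.31, 0.90)}")

    # Hypothetical planning values
    print(f"n to estimate mean time to +/- 10 s (sd = 25 s): {n_for_margin(25, 10)}")
    print(f"n per group to detect a 20 s difference (sd = 30 s): {n_per_group(30, 20)}")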

Target Audience:

The tutorial will be of particular value to practitioners and researchers with an interest in quantitative usability tests and other types of user research. Participants should be familiar with the process of conducting usability tests as well as with basic descriptive statistics such as the mean, median, and standard deviation. Although laptops are not mandatory, attendees should plan to bring one (Mac or PC) to get the most out of the planned exercises.

Bio Sketch of Presenter:

James R. (Jim) Lewis, Ph.D., CHFP

James R. (Jim) Lewis received an M.A. in Engineering Psychology from New Mexico State University in 1982 and a Ph.D. in Experimental Psychology (Psycholinguistics) from Florida Atlantic University in 1996. He has worked as a human factors engineer and usability practitioner at IBM since 1981. He has published influential research on the measurement of usability satisfaction, the use of confidence intervals, and sample size estimation for usability studies. He serves on the editorial boards of the International Journal of Human-Computer Interaction and the Journal of Usability Studies, and wrote the chapter on usability testing for the 3rd and 4th editions of the Handbook of Human Factors and Ergonomics (2006/2012). He is the author of "Practical Speech User Interface Design" (2011) and co-author (with Jeff Sauro) of "Quantifying the User Experience" (2012). He is a BCPE Certified Human Factors Professional, an IBM Master Inventor, a member of UPA, HFES, and ACM SIGCHI, and past president of AVIxD.
