Usability Testing for an E-learning course

7 Sep 2011 - 1:37am
Nikhil Palekar
2010

 

Hi Everyone,

I have been wondering how to go about conducting a usability test for an e-learning course, given that we do not always have access to the end users/participants to see how the e-learning has made a difference in their work.

We do receive CSS scores; would it be correct to believe that a high CSS score equals a usable course?
While we do conduct a heuristic usability test to check the graphics, hardly any testing is done on the content that's used in the course.

Any tips or advice on how to conduct a Usability Test on an e-learning course would be appreciated.

Cheers

 

Comments

7 Sep 2011 - 8:47am
Sudhir Kulkarni
2007

Nikhil,

I think the usability (or effectiveness) of e-learning courses has two aspects: 1. usability of the LMS or the delivery method, and 2. effectiveness of the content. Hence conducting a usability test on a course is tricky.
Usability of the LMS / method of delivery - standard usability measuring techniques can bring up results and actions. But these are at the system level and would only improve the user experience of the system. Obviously, it does not improve anything on the learning front.
For content effectiveness, I think the user's score would be a good measure. However, an instructional designer could comment better.
For content presentation, which includes graphics, animations, timing of the instructions, etc. - heuristic evaluation would be a good method.
Cheers,
Sudhir

7 Sep 2011 - 11:12am
Moses Wolfenstein
2010

A quick question for clarification: what is the "CSS" in CSS scores an abbreviation for? Customer Satisfaction Survey?

I see CSS and think cascading style sheets, and I'm pretty sure that's not what you mean here.
Thanks,
-Moses

7 Sep 2011 - 9:05pm
llschertler
2008

I'm assuming he is referring to cascading style sheets, since CSS is the conventional abbreviation for that in a web context.

I have NEVER seen CSS used in the context of customer service satisfaction relative to any electronic media, so if Nikhil is using CSS to mean that, it is very misleading and confusing.

7 Sep 2011 - 10:37pm
Nikhil Palekar
2010

Hi,

Thank you for your comments.

To clarify, CSS is Customer Satisfaction Survey. Apologies for the confusion caused.

Regards,
Nikhil Palekar

 

9 Sep 2011 - 11:45am
designfrontier
2010

I would say that basing the usability of a system on good customer service scores is unreliable. The amount of time spent on user support might be a better measure, though it is also unreliable because of the student-to-student support and instructor-led support that is likely occurring.

Do you have access to analytics-style information? That should give you some insight via time on page, form completion time, etc., which, if evaluated across multiple courses, should shed some light on the sticking points for users. Measuring across multiple courses will be important, because that should provide some insulation from the content.
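To make the "measure across multiple courses" idea concrete, here is a minimal sketch of how you might slice exported analytics events. The event fields, course and page names, and timings are all hypothetical, not taken from any particular analytics tool; a page whose median time is high in every course points at the interface rather than at one course's content.

```python
# Hypothetical sketch: compare per-page time-on-task across courses to find
# pages that are slow regardless of content (likely UI sticking points).
from collections import defaultdict
from statistics import median

# (course_id, page_id, seconds_on_page) events, e.g. exported from analytics.
# All values below are illustrative.
events = [
    ("course_a", "quiz_intro", 40), ("course_a", "quiz_intro", 55),
    ("course_b", "quiz_intro", 48), ("course_b", "quiz_intro", 62),
    ("course_a", "summary", 12), ("course_b", "summary", 15),
]

# Group timings by page, then by course within each page.
by_page = defaultdict(lambda: defaultdict(list))
for course, page, secs in events:
    by_page[page][course].append(secs)

# Report the median time per course for each page; a page that is slow in
# *every* course is a candidate UI problem, insulated from any one course.
for page, courses in by_page.items():
    medians = {c: median(t) for c, t in courses.items()}
    print(page, medians)
```

Medians are used rather than means so a few students who walked away from the keyboard don't dominate the comparison.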

I've found there is no substitute for direct user observation. I believe there are some screen sharing systems that you could use for remote users... they are slipping my mind at the moment. You may just want to get some random people together and run them through the areas you think are sticking points. They don't need to be exactly representative of the end users. But you will learn more from watching a few users than you would through extensive digging through metrics.

Another thought would be to interview the instructors and ask them if there are areas that their students consistently have issues with, or that they have issues with. I would bet they have a list...

9 Sep 2011 - 12:37pm
jpearce
2010

We don't conduct formal usability testing; however, we try to employ several techniques to gauge the effectiveness of e-learning courses. Techniques are determined in partnership with instructional designers working with multimedia and application developers.

  1. Alpha and Beta testing - each major course iteration runs through a number of small tests before final release. Alpha testing is conducted by the instructional designers to verify content integrity and to make sure graphics, links and the overall UI functions as expected. The beta test asks participants to give feedback about the content and their experience (Did the navigation and links work as expected?).
  2. Question metrics - Were the questions well phrased? Are the distractors too misleading? Did the audience understand the intent of the question?
  3. Industry best practices for UI development are employed to maintain courseware consistency for the target audience. Thus users know what to expect and where to go for help.
  4. Customer service issues are also used to identify how to improve content delivery.
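The "question metrics" point above can be sketched as a simple item analysis. This is an illustrative example only, with made-up question IDs and responses; it computes each question's difficulty (share answered correctly) and how often each distractor is chosen, which is one way to spot distractors that are too misleading.

```python
# Hypothetical item analysis for assessment questions. Data is illustrative.
from collections import Counter

# Each response: (question_id, chosen_option, correct_option)
responses = [
    ("q1", "a", "a"), ("q1", "b", "a"), ("q1", "b", "a"), ("q1", "a", "a"),
    ("q2", "c", "c"), ("q2", "c", "c"), ("q2", "d", "c"),
]

# Tally picks per question.
questions = {}
for qid, chosen, correct in responses:
    q = questions.setdefault(qid, {"correct": correct, "picks": Counter(), "n": 0})
    q["picks"][chosen] += 1
    q["n"] += 1

# Difficulty = share answering correctly; also report each distractor's share.
# A distractor chosen nearly as often as the key suggests a confusing item.
for qid, q in questions.items():
    difficulty = q["picks"][q["correct"]] / q["n"]
    distractors = {opt: c / q["n"] for opt, c in q["picks"].items()
                   if opt != q["correct"]}
    print(qid, round(difficulty, 2), distractors)
```

In this made-up data, q1's distractor "b" draws half the audience, which would flag that question for rewording.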

Overall, there are several components to usability testing: course content, UI usability and design, and assessment usability (Are the instructions clear? Is the interaction intuitive?). As stated previously by designfrontier -

<blockquote>I've found there is no substitute for direct user observation.</blockquote>

Under tight constraints this isn't always feasible. Making use of those you have available (instructional designers, subject matter experts, and other developers) should allow you to catch most of your usability issues.

9 Sep 2011 - 2:06pm
Moses Wolfenstein
2010

A lot of the folks in this thread have already hit on the core issues, but to add my 2¢ the core problem you're looking at is the intersection of instructional design and UI design. As Daniel noted, measuring across multiple courses can be helpful as you're holding the interface constant while the content changes. This may allow you to at least get some sense of what usability issues (if any) persist across content.
Ultimately, this whole thing is kind of a sticky wicket. In traditional face-to-face instructional settings it can be a lot easier to draw a clear distinction between curriculum issues and pedagogy issues. The two certainly intersect, and in fact need to be well aligned; hence Shulman's phrase "pedagogical content knowledge" (knowing how to teach within a specific discipline). With online instruction, more of the pedagogy is offloaded into the system, and in addition instructors need a distinct type of pedagogical capacity (unless, of course, the learning is entirely self-directed).
tl;dr version - There are a lot of moving parts in any learning experience, even more in an online learning experience. Start with measures of learning outcomes, then work back through other elements. The things others have said here should be helpful in that regard.

-Moses
