Cheaters
One problem that e-Learning designers face is designing with cheaters in mind. Early in my career as an Instructional Designer, we used e-Learning software that presented evaluations on a single page, with the questions in whatever order you entered them. Once an evaluation was submitted, learners saw a results page that revealed all the correct answers. We began to see a trend where the first learner from each location across the country would receive a lower average score, while each subsequent learner would score much higher. What we suspected turned out to be true: the first learner was sharing their results with other learners, resulting in much higher scores for those who followed.
To resolve this, we switched to a new authoring tool capable of providing a different evaluation to each learner who took our online courses.
The key features of this authorware's evaluations were:
- Questions in Random Order - The authorware could randomize the questions so that they never appeared in the same order twice
- Randomly Displayed Answers - While some questions might be repeated, the answers themselves appeared in random order. In other words, what was answer a) the first time the course was run might become answer c) the next time
- Pulling from a Larger Pool of Questions - The authorware could generate its random set of questions from a larger pool. For example, we might present 10 questions drawn from a possible list of 30. This also produced a completely new set of questions each time the course was run (a rough sketch of how this might look in code appears below)
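For readers who want a concrete picture of what this kind of randomization involves, here is a minimal sketch in Python. The question bank, field names, and build_evaluation function are illustrative assumptions, not the actual authorware's implementation; they simply show how drawing a random subset of questions and shuffling each question's answers could work.

```python
import random

# Hypothetical question bank -- structure and wording are illustrative only,
# not the authorware's actual data model.
question_pool = [
    {
        "prompt": "Which plan includes unlimited data?",
        "answers": ["Plan A", "Plan B", "Plan C", "Plan D"],
        "correct": "Plan C",
    },
    # ... imagine 30 such entries in the full pool
]

def build_evaluation(pool, num_questions=10):
    """Draw a random subset of questions and shuffle each question's answers."""
    selected = random.sample(pool, k=min(num_questions, len(pool)))
    evaluation = []
    for question in selected:
        answers = question["answers"][:]   # copy so the pool stays untouched
        random.shuffle(answers)            # answer a) may become answer c)
        evaluation.append({
            "prompt": question["prompt"],
            "answers": answers,
            "correct_index": answers.index(question["correct"]),
        })
    return evaluation
```

Because both the subset of questions and the order of the answers change on every run, the results page one learner shares is of little use to the next.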
Employing these techniques certainly reduced the number of cheaters taking our online courses, but improving the quality of the questions helped as well. We almost entirely stopped using True/False questions. The reason is that a True/False question tends to lead learners to the correct answer. In an organization where keeping employees positive about the products and services they sell is a priority, the answer to a True/False question is usually true. You don't want to draw attention to what isn't true about a product or service.
It is also important to ensure that all the answers in a multiple choice question are at least plausible. If the learner can deduce the correct answer by eliminating all the improbable ones, then you haven't really evaluated their knowledge of the topic; all you have tested is their skill at deduction.
Another thing relating to the plausibility of possible answers is their length. Back in high school, I learned that when I didn't know the correct answer to a multiple choice question, the longest answer was often the correct one. The reason is that the designer of an evaluation has to phrase correct answers carefully so there is no room for interpretation. The same is usually not true of wrong answers; a wrong answer only needs precise wording when there is a risk of it being mistaken for a correct one.