Six Essentials for an Effective Practical Skill Evaluation

Sarah Flesher

What makes a practical skill evaluation effective?

Anyone who's spent much time in the training industry has seen an ineffective learner evaluation. Some learners ace the final exam or skill test yet can't begin to perform the task on the job. Others fail miserably but go on to accomplish the task perfectly in their day-to-day work.

While the outcome of such evaluations demonstrates their ineffectiveness, those of us in the industry need to know whether an evaluation will be effective before we develop it, not after. So, what is it that makes an evaluation effective?

An effective learner evaluation is one with high fidelity, validity and reliability.

  • Fidelity: In the context of training evaluation, fidelity is a measure of how well an evaluation or evaluation item reflects the on-the-job task learners need to perform.
  • Validity: Validity is how well the evaluation measures achievement of the learning objective. Are you actually evaluating the skills you mean to evaluate, or are you testing something else? A common example of an evaluation item with low validity is the trick question. Trick questions usually test reading comprehension or linguistic sophistication rather than learner mastery of the learning content.
  • Reliability: If two employees perform an evaluation identically, a reliable evaluation will result in the same outcome for both, regardless of when and where the evaluation occurred and who conducted it. Outside factors, such as the opinions, judgements or biases of different evaluators, can interfere with the outcome of a less reliable evaluation.

Fidelity, validity and reliability define an effective evaluation, but a definition only gets us so far. Let's take the next step and look at how to build an evaluation with these traits.

Essentials for an Effective Practical Skill Evaluation

1. The Right Standards - Good Learning / Performance Objectives

Learning objectives, also called performance objectives, form the foundation of an effective learner evaluation. If validity requires that the evaluation accurately assess performance against the learning objectives, fidelity requires that those learning objectives accurately describe the skills needed to perform the task on the job.

Learning objectives "specify the minimum acceptable level of expertise in the performance of a skill and how that performance will be measured." They should have four components:

  • Audience: Who is expected to complete the performance?
  • Performance: What do you expect learners to be able to do or perform after completing the learning?
  • Conditions: Under what conditions must learners complete the performance? Can they use a job aid or other reference material?
  • Criteria: How well do they have to complete the performance? Is there a time limit? A degree of accuracy required?

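To make the four components concrete, here is a minimal sketch, in Python, of how an objective might be captured as structured data. The class and the example values are our own invention for illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class LearningObjective:
    """A learning/performance objective broken into its four components."""
    audience: str     # who is expected to complete the performance
    performance: str  # what they must be able to do
    conditions: str   # tools, references or environment allowed
    criteria: str     # the measurable standard: time limit, accuracy, etc.

# Hypothetical example for illustration only:
objective = LearningObjective(
    audience="Warehouse forklift operators",
    performance="Load and secure a palletized shipment",
    conditions="Using a counterbalance forklift and the site job aid",
    criteria="Within 10 minutes, with zero safety violations",
)
```
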
Learning objectives must be developed in consultation with carefully selected subject matter experts (SMEs) who have the experience and skills necessary to identify and define the required performance.

For more about developing learning objectives, check out How to Write Effective Learning Objectives.

2. Criterion-Referenced Measurement

Are you trying to find out how your learners compare to each other, or do you just need to know whether they have the skills to perform work tasks? When fidelity matters, it should be the latter - so don't make the mistake of building a norm-referenced evaluation.

A criterion-referenced evaluation is "designed to measure student performance against a fixed set of predetermined criteria or learning standards." Norm-referenced evaluations compare or rank learners, measuring their performance against that of other learners.

With criterion-referenced measurement, learners should be explicitly instructed in the criteria they have to meet. Evaluation items that assess learners' ability to extend what they have learned are a feature of norm-referenced evaluations, designed to distinguish the most able performers. The only question you need to answer in a practical skill evaluation is, "Can the learner demonstrate the skills needed to be considered competent?"

3. The Right Performance Scoring System

Criterion-referenced measurement takes a different approach to scoring or grading. Norm-referenced grades, over the long term, should follow a bell curve. Criterion-referenced scoring is only concerned with passing or failing, and it should be possible for (almost) all learners to pass - or to fail, although widespread failure is more likely the result of a poorly designed training program.

Fault-Free Performance

The score required to pass a criterion-referenced practical skill evaluation can be very high: often 100%, also known as fault-free performance. A score of 95%, an almost-correct performance, can still lead to costly errors: safety incidents, injuries and decreased productivity.

To be fair to the learner, this system requires extensive opportunities for practice and feedback. It is also common to allow the learner multiple attempts at the evaluation.
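
In scoring terms, fault-free performance is the simplest possible rule. A minimal sketch (the function is hypothetical):

```python
def fault_free_pass(errors_observed: int) -> bool:
    """Fault-free performance: any error at all fails the attempt."""
    # Fairness comes from extensive practice, feedback and repeat
    # attempts, not from leniency in the scoring rule itself.
    return errors_observed == 0
```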

Error Deductions

Another approach to criterion-referenced scoring is to deduct marks for errors. Potential errors can be categorized as critical, major and minor. If the pass grade is set at 80%, deductions might work as follows:

  • Critical error: -21% (immediate fail - for safety and other serious errors)
  • Major error: -11% (learners can pass with a single major error)
  • Minor error: -7% (two minor errors allowed)

Error deduction scoring requires careful cataloguing of all possible errors.
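
To see how these deductions interact with the 80% pass mark, here is a minimal Python sketch; the deduction values mirror the example above, while the function and error lists are invented for illustration:

```python
# Deduction values from the example above, scored against an 80% pass mark.
DEDUCTIONS = {"critical": 21, "major": 11, "minor": 7}
PASS_MARK = 80

def score_performance(errors: list[str]) -> tuple[int, bool]:
    """Return (score, passed) for the error categories observed."""
    score = 100 - sum(DEDUCTIONS[e] for e in errors)
    # A single critical error (100 - 21 = 79) already falls below the
    # pass mark, which is what makes it an immediate fail.
    return score, score >= PASS_MARK

print(score_performance(["major"]))           # (89, True): one major error passes
print(score_performance(["minor", "minor"]))  # (86, True): two minors allowed
print(score_performance(["minor"] * 3))       # (79, False): a third minor fails
```

Note that the deduction values are calibrated against the pass mark: one critical or two major errors drop a learner below 80%, while one major or two minor errors do not.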

Before choosing a scoring system, ask:

  • Which of the measurement criteria/learning standards are optional?
  • If some criteria aren't essential, do you really need to teach or evaluate them?
  • If some standards are essential and others are just 'nice to have', how will you ensure that learners with imperfect scores missed only non-essential items?

4. Skill Checks or Checklists

Skill checks or checklists are an essential part of the practical skill evaluation. They list each required action or behavior and the standard to which it must be performed. Evaluators check off items as they are completed.

Documented standards contribute to fidelity and validity by keeping the evaluation focused on pre-determined criteria. They also ensure all learners are assessed against the same standards, an essential element of reliability.

The requirements in a skill check or checklist should never come as a surprise to learners. They should be part of the learning material and available to learners during practice.
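
As a rough sketch of what a skill check might look like as structured data (the items shown are hypothetical, not taken from any real program):

```python
from dataclasses import dataclass

@dataclass
class SkillCheckItem:
    action: str    # the required action or behavior
    standard: str  # the standard to which it must be performed
    done: bool = False

# Hypothetical items; a real skill check is built from the documented
# standards and SME-defined learning objectives.
skill_check = [
    SkillCheckItem("Inspect forklift before operation",
                   "All pre-use checklist points verified"),
    SkillCheckItem("Secure the load",
                   "Load strapped and stable before moving"),
]

def evaluation_complete(items: list[SkillCheckItem]) -> bool:
    """Evaluators check items off; every item must be met to pass."""
    return all(item.done for item in items)
```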

5. Objective Evaluation

Objective evaluation is free of influence from information external to the learning resources and training process that may skew an evaluator's judgement of the learner's competence. Objectivity is key to reliability. It is also necessary for validity, as bias or other influences would mean that factors other than the learner's achievement of the learning objectives were being measured.

Established standards provide objective evaluation criteria. Documenting those standards in skill checks or checklists also helps evaluators remain objective and adhere to the prescribed evaluation process.

6. Ethical Evaluation

Arbitrary or ineffective evaluation is unfair: to learners, who have put effort into mastering the learning material and may be depending on the results for their employment; to the corporation; and even to external parties like insurers and regulatory agencies.

Ethical evaluation, which includes designing for fidelity, validity and reliability, builds trust across the organization that the system consistently works as it should.

Confidentiality is another element of ethics that should be emphasized. To maintain learner trust and confidence, evaluators and supervisors must not disclose events that occur during an evaluation (apart from the outcome).

Conclusion

Fidelity, validity and reliability must be combined for an effective evaluation. If you want to build a practical skill evaluation that really works, keep these traits in mind as you determine your standards, apply criterion-referenced measurement, select the scoring system that best suits your needs, build skill checks and checklists, and ensure your evaluation is both objective and ethical.

Most of the principles discussed here are elements of competency-based learning. Want to learn more? Check out Why is Competency-Based Training so Effective?


Sarah Flesher

Sarah, our President, graduated from Concordia University in Montreal with a BA and an MA in Public Policy and Public Administration and completed her doctorate in Educational Technology. Sarah brings over 15 years of operational and management experience to her role as President at Base Corp. She works collaboratively with organizations to develop strategic learning plans and determine training requirements. When she doesn't have her nose in a book, you can find her at the gym, on the ice, on the ski hill, drinking wine or in a coffee shop … with her nose in a book.