Chopeta C. Lyons
 
     
Crossing Over -- Evaluating Training

Here’s a scenario a Society for Technical Communication (STC) colleague reports…

Management came to her with a request to evaluate the company’s training curriculum and accompanying materials. Client feedback on the training hasn’t been as positive as management would like. The training materials are perpetually being updated in an attempt to fix whatever it is that’s broken. Problem is, no one really knows what’s broken. But, in the managers’ minds, because she’s produced good online help and solid manuals, the technical writer should know what’s right and wrong.

Sound familiar?

The Truth about Technical Writers and Instructional Designers

Her managers are partially correct. As technical writers, our communication know-how is an invaluable edge. We do know how to dig out and organize information. We’re the ones who make decisions about what to document. And, we make sure that every step of an important process is covered in the manuals.

We also know about writing—grammar, mechanics, and how to communicate clearly and concisely. Good technical writers are masters at determining what materials could or should be left out.

Though the best technical writers do keep in mind what the user has to do every day, we’re accustomed to organizing information in a way that’s logical for information retrieval; unfortunately, that’s not always the best organization for training people to do real tasks. Instructional designers must focus on these tasks. Doing so is their bread and butter.

Still, many instructional designers (and trainers forced to design training when they would rather be presenting) just aren’t that good as writers. Mention parallelism and they might think you are talking gymnastics. And it’s not a bad bet that many Participant’s Guides written by instructional designers suffer from wordiness.

To get a handle on how to approach her problem, our colleague researched Bill Gribbons’ analysis of the different perspectives of technical writers and instructional designers (summarized on page 8). Should you ever be called upon to “assess the class,” start with the Gribbons comparisons to get the correct perspective, then use the following five-step process to get your evaluation started.

STEP 1: Determine the purpose of the training.

What problem or issue is the training supposed to solve? After you determine that, try to identify what change in learner performance (skills and tasks accomplished) would demonstrate that the problem has been solved. In other words, what proves that “learning has occurred”?

Instructional designers call this setting the learning objective, and use transitive verbs to nail down the performance emphasis. You’ve probably seen these worded something like: “Learners will be able to create a new style, design a template, or record a macro.” Once you determine what changes are supposed to occur because of the training, write them down. You will return to this list repeatedly.

STEP 2: Identify what you are evaluating.

This sounds deceptively simple. But, even assuming that the training “product” to be evaluated is a classroom offering, it still has several manifestations. Which portions are you evaluating?

  • The Participant’s Guide?
  • The trainer’s presentation?
  • The design of the training itself (what the class does and when), as represented by the Instructor’s Guide?
  • The layout of the training room and props?
  • The instructor’s materials: overheads, slide shows, and scripted presentations of software?
  • Computer-Based Training (CBT), Web-Based Training (WBT), or self-paced guides?

Be clear about what you are assessing and, if only part of the training is being evaluated, be clear about how that part affects the whole. Make sure to focus on the heart and soul of good instruction: the activities, labs, or exercises that allow learners to engage the material.

STEP 3: Experience the whole.

Audit the class or training piece. If a classroom course is taught by several instructors, start with the “star performer,” but try to audit the class with as many instructors as possible. You will be better able to separate bad design from bad presentation. Take copious notes, another of our strengths as technical writers.

  • Pay careful attention to the objectives stated for the class and compare them to the purpose and tasks you wrote down earlier.
  • Watch the participants. Does the training seem matched to their level of expertise? Is it easily “customized” to their abilities? For example, well-designed CBT may have different paths based on diverse user profiles.
  • Check that every learning objective has a correlating activity or exercise. If large chunks of material don’t correspond to objectives, they might be irrelevant, or at least their relationship to an objective is not clear.
  • Write down any questions left unanswered by the training.
  • Note how paper-based materials are used. Back on the job, learners often prefer their Participant’s Guide over the User Manual because they made it their own with notes during training. Check to see if the Participant’s Guide is organized to support the training experience. For example, does it have tables to be filled in as learners work through problems, checklists that let learners create their own procedure tables, and lots of white space for notes?
  • Assess the interactivity: the questions, exercises, and activities that require learners to use the information they are being taught. Use the following three criteria:


  • Frequency: One activity every three hours? Then you know the training only tells, doesn’t teach, and is a real snoozer. Are there at least two or three mini-activities an hour? Are there labs and exercises in bigger blocks of time? Then the course is on the right track. (A quick tally sketch follows these criteria.)

  • Relevance: Are the interactions related to the learning objectives? Remember, it’s not important that a learner be able to write down every step (that’s what good online help is for). But it is important that the learner be able to do the steps.

  • Opportunity: What missed opportunities for interaction do you see?
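
To make the frequency check concrete, here is the kind of tiny tally you might keep while auditing. It is only an illustrative sketch: the course length and the activity list are invented placeholders, and the threshold is the two-to-three-activities-per-hour guideline from the Frequency criterion above.

    # Illustrative audit tally (Python). The course outline below is invented;
    # the guideline is roughly one interaction every 12 to 15 minutes.
    course_minutes = 180
    activities = [                      # (name, minutes) noted during the audit
        ("thumbs up/down on style guidelines", 5),
        ("partner exercise: build a template", 15),
        ("individual self-check quiz", 10),
    ]

    per_hour = len(activities) / (course_minutes / 60)
    minutes_per_activity = course_minutes / len(activities)

    print(f"{per_hour:.1f} activities per hour "
          f"(about one every {minutes_per_activity:.0f} minutes)")
    if per_hour < 2:
        print("Below the two-to-three-per-hour guideline: mostly tell, little teach.")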

STEP 4: Research the “after instruction” performance.

If possible, try to see what problems users still have after the training. Places to start are:

  • Help Desk logs. Does the number of calls from sites that have had training decrease appreciably? Are the calls now about more complex questions rather than the newbie questions that clogged the phone lines before training? (A rough before-and-after tally sketch follows this list.)
  • Marketing interviews. As they follow up with customers, salespeople are often the first ones to hear “The training didn’t teach us anything about this feature!”
  • Learner interviews. Are people still confused about anything that was covered in the training? What was their Aha! experience (that epiphany when something the instructor said or a lab exercise showed them the light)?
  • Trainer interviews. These often can give you good feedback on what learners seem to be having trouble with in the class; many trainers establish a rapport with learners who will call with questions long after the training is over. Also, trainers can point out where the instructional designer who created the materials may have missed the boat – and how they have to “dance” around the problem in the classroom.
  • System enhancement requests. A request for an enhancement may be a training need in disguise.
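
If the Help Desk keeps its logs in any machine-readable form, even a quick tally can show whether call volume and call types shift after training. The sketch below is only a rough illustration under assumptions: the CSV file, its column names, and the training date are hypothetical placeholders, not part of any real Help Desk system.

    # Rough before/after tally of Help Desk calls (Python). File name, column
    # names, and the training date are hypothetical placeholders.
    import csv
    from collections import Counter
    from datetime import date

    TRAINING_DATE = date(1999, 3, 1)           # placeholder: when the site was trained

    before, after = Counter(), Counter()
    with open("helpdesk_log.csv", newline="") as f:
        for row in csv.DictReader(f):          # expects "date" and "category" columns
            when = date.fromisoformat(row["date"])
            bucket = before if when < TRAINING_DATE else after
            bucket[row["category"]] += 1       # tally calls by question category

    print("Calls before training:", sum(before.values()))
    print("Calls after training: ", sum(after.values()))
    print("Most common post-training questions:", after.most_common(3))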

STEP 5: Examine the parts.

Evaluating some parts of a training event will be right up the technical writer’s alley. We’ll be able to edit the Participant’s Guide and pare down long-winded sentences. We also will be able to critique overheads and slide shows for readability, clarity, and format.

However, other components will be a little trickier for us to evaluate. For these, our strengths may actually blind us. To technical writers, the organization of an instructor-led class may seem upside-down. Or, we might wish the instructor would just “get on with it,” completely missing the instructional strategy of repetition and reinforcement. Our “just the facts, ma’am” approach to information may cause us to miss the purpose of a classroom game or “yet another” exercise.

Take, for example, our “tell them what we’re going to tell them, and then tell them” approach. It is well documented that more learning occurs when participants think through a process and make mistakes than when they do it “right” by mindlessly following a script. So sometimes instructional designers will employ a discovery technique. Here, participants are given a hook or a reason to learn some new material, then are asked questions on it. They are given all the tools they need, such as step tables and reports. Technical writers will look at this material and think the instructor “failed” to foreshadow – tell ‘em what you are going to tell ‘em – the process explicitly.

Wrapping It Up

Remember, the key is always this: Does the instruction match up with the expected learner outcome? In other words, if learners are supposed to be able to create a new paragraph style in Microsoft Word, does the training guide them through this process, provide them with opportunities to try it on their own, and then give them feedback on their activity?

This process of “Tell, Try, and Teach” seems obvious, but too often training, and system training in particular, consists of nothing but a “tell” walk-through (“Here is the style window… here are the fields you use to determine fonts…”). You find few, if any, “teach” activities matched to the desired learner outcome. The mantra of instructional designers is always “Teach, don’t tell” as opposed to the writer’s mandate of “document the details.”

To summarize, you can bet the particular training piece is not effective if…

  • There are no real-life examples
  • There are no non-examples (the “here’s what it looks like when it is wrong” example)
  • There are no exercises. Not to harp on this point, but as an illustration consider this: In her seminars, Dr. Ruth Clark, one of the masters of instructional design, has participants do four or five short activities within 70 minutes. Two might be group exercises, such as a thumbs-up or thumbs-down application of guidelines; two other activities might be a little longer, asking participants to confer with partners; and one might be an individual, paper-based self-check. Her “interactivity ratio” is something like one interaction for every 12 to 15 minutes of instruction.
  • The exercises ask learners to merely regurgitate facts instead of using the information. Learners need practice applying the information in a way that is as close as possible to how they will use it in their jobs. For example, does an exercise ask learners to list the steps to calm an irate caller (regurgitate), or does it ask them to role-play the same steps with another participant (use)?
  • The trainer does all the talking and the participants do all the listening. True, there have been some entertaining classes where the trainer was a great stand-up comic – but were the learners able to design a template when they returned to work?

The most problematic component to assess is the instructor’s presentation. Evaluating the presenter is a whole subject in and of itself.

There are libraries about this stuff. Suffice it to say that even the best instruction is doomed by a presenter who drones on, never engages the participants, has no empathy with them, and… well, enough said.

In addition to these brief tips, there are numerous sources of training evaluation checklists. Many are specific to types of training products, but all can be useful. For example, Lynn McAlpine and Cynthia Weston provide helpful checklists for training materials in their 1994 article “The Attributes of Instructional Materials.”

Assessing Training over the Long Term

Finally, if you feel daunted as a technical writer who is supposed to evaluate training, don’t feel alone. Training professionals have been grappling with this issue for decades.

In 1959, Donald L. Kirkpatrick created the benchmark for evaluating training with his four constructs: Reaction, Learning, Behavior, and Results. Although these four levels of evaluation have been much discussed and debated, they remain powerful even today. Most corporations do Levels 1 and 2.

  • Level 1 – Reaction is often represented by “smile sheets,” the evaluation forms that learners fill out after a class. The sheets usually don’t gauge much more than whether the learners enjoyed their time away from their jobs.
  • Level 2 – Learning determines whether learning has occurred through some sort of performance-based (i.e., are they able to use the information?) mastery test of the material covered in the training.

Unfortunately, levels 3 and 4 are done less frequently.

  • Level 3 – Behavior evaluates how people transfer what they have learned to performance in their day-to-day jobs (“Okay, so you can define what a Word Style is, but do you use them in your daily work? Can you create one?”).
  • Level 4 – Results examines the return on investment. Level 4 evaluation occurs even more rarely because of the expertise required, the cost, and internal politics.

Ideally, here’s how the four levels might play out in ACME Corporation, which hired an outside vendor to teach a half-day seminar in Microsoft Word macros.

ACME thought that if its documentation specialists used macros in creating their Word documents, they would create user manuals 25% faster and save money (instead of needing to hire the projected four new writers).

ACME’s technical writers loved the seminar on Using Macros (Level 1 – Reaction, smile sheet evaluations). They all scored over 80% on the mastery test on using and creating macros (Level 2 – Learning occurred).

When ACME did a six-month after-training audit, they found that 75% of the writers were using macros daily to create their documents, and another 25% were creating the macros that their co-workers used (Level 3 – Behavior). To top it off, as part of ACME’s year-end wrap-up, management checked the time sheets and output of the technical writing staff. ACME is happy to say that, without any increase in staff or in time spent, the writers produced 285 more document pages than they did before the seminar. The staff reported that using macros reduced their document preparation time by 10% and actually allowed them to focus on creating better manuals (Level 4 – Results).
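
To see how numbers like these might roll up into a Level 4 figure, here is a minimal back-of-the-envelope sketch. The extra-page count, the 10% time saving, and the four avoided hires come from the ACME scenario above; the baseline page count, seminar cost, and loaded salary are invented placeholders, not data from the article.

    # Back-of-the-envelope Level 4 (Results) arithmetic for the ACME scenario.
    # Values marked ASSUMPTION are invented placeholders.
    baseline_pages = 2850      # ASSUMPTION: pages produced in the six months before training
    extra_pages = 285          # from the scenario: extra pages, same staff, same hours
    prep_time_saved = 0.10     # from the scenario: 10% less document-preparation time

    avoided_hires = 4          # from the scenario: writers ACME projected it would need
    loaded_salary = 60_000     # ASSUMPTION: fully loaded annual cost per writer
    seminar_cost = 5_000       # ASSUMPTION: vendor fee for the half-day seminar

    output_gain = extra_pages / baseline_pages
    benefit = avoided_hires * loaded_salary            # value of the hires not made
    roi = (benefit - seminar_cost) / seminar_cost      # simple first-year return

    print(f"Output up {output_gain:.0%}; prep time down {prep_time_saved:.0%}")
    print(f"Estimated return on the seminar: {roi:.0%}")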

In sum, ACME believes a side benefit of the Using Macros class is that, since they no longer fuss as much with formatting, the writers seem to focus more on getting the right information and organizing it effectively.

This scenario with ACME represents the ideal. Many companies don’t have the time or resources to do this kind of thorough follow-up. Some companies don’t even have the necessary benchmarks or the data collection tools in place, especially for Level 4 evaluation.

But if you can, move the assessment of training within your organization into the twenty-first century by initiating some informal polling of the participants, perhaps six months after the training event.

For further help in evaluating your company’s training, check out such industry magazines as Training, Performance Improvement, and Inside Technology Training. The Web sites of the American Society for Training and Development, www.astd.org, and the International Society for Performance Improvement, www.ispi.org, can get you started.


Originally published August 1999. © 2010 Chopeta Lyons. Intercom, pp. 1, 8-9.

 