Tony Karrer's eLearning Blog

Monday, January 08, 2007

What Clients Want

Karyn Romeis created an interesting post, Assessments in elearning, responding to an earlier post of mine, State of Assessment by E-Learning Developers. This simple exchange has sparked quite a few different posts that I'll work on over the next few days.

Let me start with something I consider to be almost misleading in our industry. We hear all the time about how we SHOULD be doing Level 3 & 4. Will just said in his post Assessment Mistakes by E-Learning:
Stunning: Even after all the hot air expelled, ink spilled, and electrons excited in the last 10 years regarding how we ought to be measuring business results, nobody is doing it !!!!!!!!!
In Approaches to Evaluation of Training: Theory & Practice, the authors state:
Evaluation becomes more important when one considers that while American industries, for example, annually spend up to $100 billion on training and development, not more than “10 per cent of these expenditures actually result in transfer to the job” (Baldwin & Ford, 1988, p.63). This can be explained by reports that indicate that not all training programs are consistently evaluated (Carnevale & Shulz, 1990). The American Society for Training and Development (ASTD) found that 45 percent of surveyed organizations only gauged trainees’ reactions to courses (Bassi & van Buren, 1999). Overall, 93% of training courses are evaluated at Level One, 52% of the courses are evaluated at Level Two, 31% of the courses are evaluated at Level Three and 28% of the courses are evaluated at Level Four. These data clearly represent a bias in the area of evaluation for simple and superficial analysis.
Maybe the authors didn't mean to say this, but they clearly say that 10% transfer is because of inconsistent evaluation. And clearly we are all biased towards the "simple and superficial." Yikes, I feel so dirty not doing Level 3 & 4.

But wait, we've all had the experience of facing considerable push-back from internal clients and line managers when we try to do Level 3 & 4. It's not that expensive to get the data, nor really that much work, but it does take commitment. Obviously, the level of effort isn't worth it to the line managers. Why clients don't care will be a topic for a future post. But for now, recognize that when you start with a client, you need to do a quick assessment of what they really want:


Do they really care about effectiveness: changing behavior and driving business results? Many clients don't really care about this. They have a particular product/project in mind, and they want you to get that done. As Karyn said in her post, the client will tell you, "Don't worry, your job is safe." They don't care about business outcomes. You'll often find this out pretty quickly when you start asking questions like "What do you expect people to do differently after this intervention?" or "What numbers are we trying to hit?" A blank stare often indicates that they don't really care. I actually think a surprising number of "training" projects involve clients who are on the "don't care" end of the spectrum.

Do they care about the looks of the product/project? Many clients come in with expectations about what will be produced, and likely about what a "good looking" project will look like. They may be looking for highly interactive, or fun, or engaging, or games, or simulations, or ... If you suggest that a checklist might actually have better results than an hour-long, interactive training course, you may get lots of push-back. Some clients honestly don't care about looks, but most do. In fact, it's much more common for a client to care about looks than to care about personality (I mean business results). Good looks are important in that they often imply user engagement, but this post is purely about assessing what the client really wants.

Remember that beauty is in the eye of the beholder. You definitely need to understand what they believe will be a good looking project. That may include format, graphic design, level of interaction, etc.

Also keep in mind that your assessment of what the client wants may also need to include other stakeholders (end-users, the client's boss, etc.) and what they want. Clark Aldrich recently suggested that helping your client get a promotion might be the ultimate goal. In that case, the "wants assessment" needs to include assessing what your client's boss really wants. Do they care about business results? Do they want something good looking?

Finally, please, please, please do not show this graphic to your client (or ask the implied questions directly). Every client will tell you that personality is much more important than looks. Of course, they want it to be effective! Of course, it doesn't matter how good looking it is! They know that's the right answer! That's what they will tell you if you ask. Your job is to find out what they really think.

Stay tuned for more ...

9 comments:

Anonymous said...

I reckon you're right about what matters to many line managers. They like a swish training room (for ILT) or a whizzy looking interface, they like shiny take-aways and nicely bound colour manuals. Sadly, many of those I have encountered are looking for a tick in a box so that, come review time, there can be no complaints about the provision of opportunities for development.

For me, it's about the learner - a person with whom I seldom have contact before the fact. Without an ongoing programme of M&E for learning, there is no way to ensure that the material remains useful and relevant to the learners' (and, by extension the organisation's) needs. If we are not able to assess the value added by learning (even just in the vaguest terms), when it comes time to cut budgets... well we all know what happens then.

Donald Clark said...

Hi Tony,
A very interesting post. However, the claim that "10 per cent of these expenditures actually result in transfer to the job" is totally bogus. The paper by Baldwin & Ford (Transfer of Training: A Review and Directions for Future Research) reads: "It is estimated that while American industries annually spend up to $100 billion on training and development, not more than 10% of these expenditures actually result in transfer to the job (Georgenson, 1982)."

However, the article they cite by Georgenson (The Problem of Transfer Calls for Partnership, Training & Development Journal, Oct. 1982, Vol. 36, Issue 10, p. 75) reads:

------------------------------------
"How many times have you heard training directors say: "I need to find a way to assure that what I teach in the classroom is effectively used on the job"?

"I would estimate that only 10 percent of content which is presented in the classroom is reflected in behavioral change on the job. With increse demand from my management to demonstrate the effectiveness of training, I got to find a way to deal with the issue of transfer."
------------------------------------

Note that he actually uses quotation marks for the above two statements -- and the reason why is that he is asking a rhetorical question. Thus this 10% transfer myth is NOT based on any studies or research; it is based on a rhetorical question asked by an author! There are no studies or research cited in Georgenson's paper. In fact, Georgenson is not even a learning/training researcher; the bio in the article reads, "David L. Georgenson is manager, product development, Xerox Learning Systems, Stamford, Conn."

Not only do Baldwin & Ford base their paper on a rhetorical question in a magazine article, they also do not cite the statement correctly, because nowhere in Georgenson's magazine article does it state how much is spent on training.

Anyone reading Georgenson's article can clearly see he is simply asking a rhetorical question to lead off his story. Thus the blame resides with researchers like Baldwin & Ford, who either do not bother to read the article they are citing or are misleading readers like us. No wonder no one really knows what is happening in our field -- we cannot even trust the researchers!

So what is the real rate of transfer? I don't believe anyone has done a complete study of it; however, there is one study that surveyed members of a training and development society (Saks and Belcourt, An Investigation of Training Activities and Transfer of Training in Organizations, Human Resource Management, Winter 2006, Vol. 45, No. 4, pp. 629-648). They reported an initial transfer rate of 62%. Note that the rate drops over time, but that is another discussion.

Sixty-two percent is still not all that high; however, it is much more believable than the "rhetorical question" figure of ten percent. That still means 38% of training fails to initially transfer, and I think there are two main causes. First, some of the training is poorly designed. Second, many so-called training programs are not really training but are more developmental or educational in nature, so an initial transfer should not be expected. That is, training is done to show an immediate or near-immediate improvement on the job, while development and educational programs are done more to "grow" the learner.

Anyway, just some thoughts.

Anonymous said...

I know that we all work in different types of organizations and that the projects that we work on shape our opinions.

One of the biggest problems that I have faced when it comes to the question "What does the client really want" is the fact that sometimes the training is only a small piece of what they want.

I used to work for the training department of a large software development company. The clients were buying a new financial management system or other software system, not the training.

Everyone knew that training was required, but the business developer who sold the solution to the client was selling the system, not the training. The client manager who was buying the software system didn't work with HR to determine the training their company would need to implement the system. As a result, training was a small line item on a large (multi-million dollar) contract.

Near the end of the project, they would finally come to us and ask for help creating a training solution. Time was short, the budget was already set, and, oh, by the way, it needs to look good because this will be the first thing most people see of the new system.

Often, once the software and associated training were delivered, our group moved on to the next project. We didn't have a chance to evaluate the training and make improvements.

In an ideal world, our group would have a separate contract to develop the training and would work directly with the client's HR organization to make sure the training fits in with their current offerings and needs. We would be able to customize the training to meet the needs of their current employees, who are familiar with the old system, and of new employees, who may need more background in the workflow surrounding the system.

Tony Karrer said...

Karyn - you quickly jumped on two good points: (1) the learners' needs and (2) future budget justification.

Unfortunately, the problem is when there's a disconnect between client wants and these two things. In other words, if the client really doesn't care about learner needs, you can push to a certain point, but that's it. The future budget justification is even harder to push for. There's almost NO BENEFIT for the client to fund you spending time to justify the expenditure. In fact, most clients perceive greater risk than reward.

How do you get a client to agree to participate in Level 3 or 4 when it's really only going to benefit future budget justification?

Tony Karrer said...

Donald - great comment! I'm curious what the 62% actually means. Is it knowledge transfer measured in a post-course eval? Is it some kind of transfer onto the job? Is it "improvement on the job"? Is this a Level 3 & 4 kind of number? Or more of a Level 2, after-the-training kind of number?

I'm surprised that we hear the 10% cited so often if the number should be 62%. That's quite a difference. Maybe we are all jaded by our own educational experiences (how much do you use integration and differential equations?). And definitely my bad for quoting a wrong number, even if I was using it as a counterexample.

Of course, my point was less on the transfer rate than on the fact that the author chides us for not doing Level 3 & 4. And we get that everywhere. Why aren't we all measuring Level 3 & 4?

Tony Karrer said...

Mike - your example, as I'm sure you knew (and why you commented), is repeated quite often. In fact, it's almost certainly the majority case and maybe even the norm. And you correctly assessed what your client really wanted (and helped with a dose of reality).

One thing I found interesting though in your comment was that you said - "We didn't have a chance to evaluate the training and make improvements."

Is that anyone's expectation? Did the client want this? Did you even want it? I would guess that everyone was ready to move on - why bother with Level 3 & 4? Why did you say "didn't have a chance to," as if it was expected of you?

Donald Clark said...

Tony, to try to answer a couple of your questions...

The 62% refers to transfer of training involving both the generalization and maintenance of training material on the job. The participants were asked to indicate the percentage of employees in their organization who effectively apply and make use of what they learn in training programs on the job.

As to why we aren't all measuring Level 3 & 4: I don't believe we always have to show a Level 3 or 4 evaluation to prove that the training is useful. What we really need to do is simply show that it actually "links" to other parts of the organization or business. That is, it visibly contributes to fulfilling your customers' business strategies.

For example, if I train a machine operator to effectively operate one of the manufacturing department's pieces of equipment, then there really is no need for me to prove my training's effectiveness with various evaluations. Of course, to actually train an effective operator I will probably have to use some type of evaluation, but it will not have to be a formal one to prove to the business unit that my training actually works.

I think we often try to use evaluations as the end product of our training. Yet to other business units, the true end product is not a bunch of numbers, but rather how the training actually links to their work and solves their needs.

Anonymous said...

I find that eLearning produced for young adults must be good looking in order to be effective. These learners grew up on a steady diet of slick computer graphics. A "clip art special," no matter how well written, will bore them every time.

susan smith nash said...

What a great post & series of comments!! Here are a few thoughts...

1. "Good looking" is a cultural construct as well as a value-laden norm. It's really important to know your audience & understand their values, assumptions, belief systems, etc before attempting to build something "good looking" or "attractive."

2. That said -- aesthetics are important. They help engage the affect, and they help one pay attention. Gagne's theories apply here.

3. Design is a powerful cognitive tool & assists in the creation of schemata.

Just a few thoughts...

all best,
susan nash