Published on October 1, 2021

How to Measure Your Company's Training Effectiveness

Justus Hunke, Marketing Analyst

Category: Learning Hub

Reading time: 10 minutes

When it comes to training programs, many companies neglect the evaluation step, even though it is probably the most important one. In the context of learning, evaluation means collecting all the relevant information to decide whether your training was effective and worth the money, time, and effort.

Today, we provide you with a short overview of the classic evaluation frameworks in the L&D industry. Well, "classic" does not mean run-of-the-mill or outdated techniques. In fact, these are the most widely used methods for evaluating your company's training effectiveness.

Evaluation Methods for Training Effectiveness

Kirkpatrick's 4 Levels of Evaluation

The first method we present was developed by Dr. Donald Kirkpatrick (1924–2014) in the 1950s. Since then, a huge variety of extensions has emerged, and today it is the best-known model for analyzing and evaluating the effectiveness of training programs. Kirkpatrick's model can be applied before, during, and after training to demonstrate the value of training to the business. Its main thesis is that evaluation only adds value if all four levels are followed. The levels represent the process a training participant goes through, and success at one level is a prerequisite for success at the next.

The four levels for evaluation of training are:

1. Reaction

At the lowest level, you assess the participants' reaction to the training. How did learners feel about the learning experience? Was it enjoyable? If they perceive the training negatively, they may have little motivation to learn. The survey should also convey to learners that their opinion matters and that changes will be derived from it.

2. Learning

In the second phase, you examine the learning success of the participants. Did the learners actually learn something? Have their knowledge and skills improved? Have their attitudes changed? All three questions should be assessed both before and after the training, so that you can measure the change.
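To make this level concrete, here is a minimal sketch of how a pre/post comparison could be quantified. The quiz scores are hypothetical and purely illustrative; Kirkpatrick's model itself does not prescribe any particular calculation.

```python
# Minimal sketch: quantifying Level 2 "learning" as the average gain
# between pre-training and post-training quiz scores (0-100 scale).
# All numbers below are made up for illustration.

def average_gain(pre_scores, post_scores):
    """Average point gain per participant between pre- and post-test."""
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    return sum(gains) / len(gains)

pre = [55, 60, 70, 45]
post = [75, 80, 85, 70]
print(f"Average gain: {average_gain(pre, post):.1f} points")  # Average gain: 20.0 points
```

In practice you would pair each participant's pre- and post-test results, as above, rather than comparing group averages alone, so that individual change is visible.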

3. Behavior

At the third level, you check the actual change in behavior that results from what has been learned. Did the learners actually do anything differently as a result of the training? This behavioral change can therefore only be assessed after participants have had a chance to apply what they learned.

4. Results

The last stage checks the final results the training has produced for the company. What was the effect of the training on the business as a whole? This is not easy to measure. Depending on the content of the training, you might examine its effects on productivity or turnover, for example.

Keep in mind that the key here is to collect, review, and act on the feedback as fast as possible – not just at the end of a particular learning experience. The first three levels help you especially when you do not achieve the required business results. To allow for actual comparison, Kirkpatrick recommends the use of comparison groups at each level that have received no (or a different) training.
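The comparison-group idea above can be sketched in a few lines. The performance scores below are invented for demonstration; Kirkpatrick's recommendation is only that a comparison group exists, not how the difference is computed.

```python
# Illustrative sketch: comparing a trained group against a comparison
# group that received no training. Scores are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

trained = [82, 78, 90, 85]   # post-training performance scores
control = [70, 72, 68, 75]   # comparison group, no training

difference = mean(trained) - mean(control)
print(f"Trained group outperforms comparison group by {difference:.2f} points")
```

In a real evaluation you would also check whether the difference is larger than normal variation between groups (for example with a significance test) before attributing it to the training.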

Brinkerhoff’s Success Case Method

The second approach on our list is Brinkerhoff's Success Case Method (SCM). The SCM relies on qualitative rather than quantitative analysis, drawing on a small number of cases. The model was originally developed by Robert Brinkerhoff to measure and evaluate the impact of organizational interventions in general. This method differs from the previous one in that it identifies both the most and least successful cases within your learning program and learns from both extremes. The purpose is not to examine average performance - you only examine the extreme cases.

Brinkerhoff’s Success Case Method helps to answer two very important questions:

  1. “How well does a training program work in a best-case scenario?”
  2. “When a training program doesn’t work, what’s the reason for it?”

The evaluation process could be as follows:

1. Identifying training goals and defining what 'success' should look like

What results do you wish to see at the end of the training? Which effects do you want to observe? What counts as success? Answering these questions gives you a clearer view of your goals and expectations, which serve as a qualitative benchmark later on. Then start your program.

2. Looking for outliers

Now it's time to look at both extremes: the top performers and the worst cases. Who performed really well, and who has plenty of room for improvement? Split them into two groups.

3. Conducting in-depth interviews with participants

Evaluate both groups by conducting in-depth interviews. You want to find out what impact the training program has had on their work. Moreover, you need to identify the "success factors" of your program and eliminate the factors that got in the way.

4. Documenting your findings

Draw conclusions from your research and derive recommendations from it. What was the most compelling success story? Maybe you can create a case study for that. What are the "success factors"? Write them down as a guide for the next training program.

(cf. Brinkerhoff, 2003, p.29)

Although the research can take a while, the findings can be astonishing. But keep in mind that you shouldn't use this method on its own. Rather, see it as a complement that lets you dig deeper into the results of quantitative analyses.

Anderson's Value of Learning Model

The final approach is the Anderson Model of Learning Evaluation, which is also the shortest method presented here. The model was first published in 2006 by the Chartered Institute of Personnel and Development (CIPD) and was designed as a three-stage learning evaluation cycle. Its primary focus is on aligning the training program's goals with the organization's strategic priorities.

Anderson's model distinguishes itself by tackling two distinct yet equally important challenges at once:

The Evaluation Challenge: Many organizations report that they seriously struggle to do an evaluation of training programs well.

The Value Challenge: On the other hand, organization leaders often require evidence of the value of learning and training, as well as cost-effective deployment of resources.

By shifting the focus to the evaluation of a learning strategy, rather than the outcomes of individual programs, this model is more practical and feasible than other learning evaluation approaches.

The three-stage cycle applied on the organizational level is as follows:

1. Determine current alignment of training against strategic priorities

The first stage evaluates how well learning in your organization matches what you are trying to achieve as a company, e.g. driving sales, increasing production, or reaching a new market. You need to know your business's strategic goals and develop a learning program that supports them if you want to achieve high alignment.

2. Assess and evaluate the contribution of learning

Similar to other models, the biggest challenge here is to determine the contribution of training to your business goals. Use a number of methods to precisely assess and evaluate the contribution of learning. Unfortunately, the model does not prescribe specific methods, but it does suggest at least four measures you should cover:

  • Learning Function
  • Return on Expectation (ROE)
  • Return on Investment (ROI)
  • Benchmark & Capability
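Of these measures, Return on Investment is the most readily quantifiable. A common way to compute it is net benefit relative to cost; note that this formula is a widely used convention, not something Anderson's model itself prescribes, and the figures below are hypothetical.

```python
# Hedged sketch: a common training ROI calculation (net benefit / cost).
# Anderson's model names ROI as a measure but leaves the method open.

def training_roi(monetary_benefit, training_cost):
    """ROI as a percentage: net benefit relative to cost."""
    return (monetary_benefit - training_cost) / training_cost * 100

# Hypothetical: a program costing 50,000 that yields 80,000 in
# attributable productivity gains.
print(f"ROI: {training_roi(80_000, 50_000):.0f}%")  # ROI: 60%
```

The hard part in practice is not the arithmetic but isolating the monetary benefit that is genuinely attributable to the training, which is where the other three measures come in.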

3. Establish your organization's most relevant approaches

Depending on the specific needs of your company, the importance of these measures will obviously differ. The following matrix can help you determine which measures are most likely to align with the needs of your business:

Source: Watershed, An Overview: Anderson's Model for Learning Evaluation


Since Anderson's method is a very high-level framework, it needs to be used in conjunction with other models, such as Kirkpatrick's, to gain a holistic picture of the value your learning program brings to the organization.

Summary

In today's world, all economic activities are tracked and carefully monitored, and expenditures in particular are closely scrutinized. Companies invest hundreds of thousands, if not millions, in employee training every single year. You should therefore know how valuable training for your employees really is. Pre- and post-testing and simply hoping for the best won't cut it. It is time to put old practices away and instead use technology and new evaluation approaches to make L&D's life easier. Data and digital learning allow us to set a new standard in the education world: we can track individual results and identify potential for improvement immediately, while also offering a highly personalized training program for each employee.

As you have seen, when it comes to learning evaluation you don't have to limit yourself to just one method. Indeed, it is highly recommended to take two, three, or even more methods into consideration. In general, the better your analysis, the more precise the improvements you can derive from it. "Precise" in this context does not mean that all the approaches have to come to the same result. But at the very least, they should not yield totally different conclusions and recommendations for action.

At edyoucated, we bring back the 'you' in education. We offer personalized learning plans that adapt to your employees' roles and skills in real time. When your employees get started, they just answer a few questions about their skills. The edyoucated personalization engine automatically determines the most effective learning plan for each of them. And the best thing: it learns over time and grows together with your skills.

Sign up now and try our personalized learning plans for free!
