Time to retire Kirkpatrick?

by Viv on February 29, 2008

When you first turned your thoughts to evaluating training, it’s odds-on that a colleague or consultant recommended you have a look at the Kirkpatrick model. It’s a straightforward concept to grasp and it’s been in use for 50 years. In practice, however, it’s really difficult to apply, and probably even more so in a professional services firm.

In March’s TJ, Hatty Richmond writes about her research on L&D practitioners who had been involved in evaluating training courses. Whilst the research base is small, there are some interesting themes which echo many of my experiences of implementing evaluation in a Big Four firm.

Here are five reflections:

  1. You can’t evaluate after the event. Establishing a baseline, predicting which measures will change and then measuring them all takes time to set up. Often the pressure will be on you to deliver the project to a tight deadline, so it’s no surprise if this piece of work gets deprioritised. Furthermore, the way professional firms work does not easily allow before-and-after comparison: trainees move between projects (accountancy) and seats (law), and often report to several different bosses. It would be far simpler to measure whether a course improved someone’s client-handling skills if they had the same boss in the same room with them all the time.

  2. Levels are not steps. There is a commonly held misconception that you must complete Kirkpatrick Level 1 before moving on to Level 2, and so on. In fact, all the levels co-exist independently.

  3. Why evaluate the process, if you’re interested in the outcome? Problem + training course does not equal success. Plenty of other factors contribute to success besides the course itself, so if the business genuinely wants to know what’s driving it, those other aspects of the situation need to be measured too.

  4. Is the ROI of finding the ROI positive? If you are aiming to improve the business development effectiveness of your client-facing staff, does it matter whether 40% rather than 30% of the incremental success is attributable to the training course? Why waste time and resources establishing the fine detail of something that won’t make a difference anyway? If the evaluation is being done solely so that the L&D department can justify its existence, then the department is probably held in such irreparably low esteem by the rest of the business that no amount of data will change that view.

  5. Meaningful evaluation uses measures that link to the business strategy. If stakeholders do not understand (or aren’t allowed to know) what drives the business strategy, then it should be no surprise if your evaluation project lacks the time and scope to work out what those measures should be, on top of doing what it was supposed to do.

Being able to evaluate the impact of a training course is highly desirable; it’s just not a quest for the faint-hearted. Anyone who pretends that Kirkpatrick is the whole answer to evaluation should take some time to re-evaluate.
