Question Details
[solved] Chapter 11 Training Evaluation Chapter Learning Outcomes
Question:
Conduct scholarly research to compose a 2-3 page APA formatted paper that addresses these issues.
Conducting evaluation studies using full experimental designs and sophisticated data collection is almost impossible in real organizations.
Some managers argue that training evaluations can provide meaningful conclusions only when conducted using these techniques.
They therefore conclude that in most cases training evaluations are a waste of time and money.
Debate this conclusion:
Is it the case that training evaluations should be conducted only when it is possible to use the more sophisticated procedure?
Why do organizational leaders need to see quantifiable results and how can these results be presented in a reliable format?
This assignment will address the following Unit Learning Outcomes:
- ULO 6.1 Assess the different types of training evaluation and discuss the barriers to evaluation.
- ULO 6.2 Explain how training evaluation can be quantifiable.
- ULO 6.5 Evaluate relevant scholarly research and synthesize research to complete required assignments.
Chapter 11 Training Evaluation Chapter Learning Outcomes

After reading this chapter, you should be able to:
- define training evaluation and the main reasons for conducting evaluations
- discuss the barriers to evaluation and the factors that affect whether or not an evaluation is conducted
- describe the different types of evaluations
- describe the models of training evaluation and the relationships among them
- describe the main variables to measure in a training evaluation and how they are measured
- discuss the different types of designs for training evaluation as well as their requirements, limits, and when they should be used

Managing Performance through Training and Development

Bell Canada
Years ago, when Bell Canada installed a new telephone system for its business clients, it also sent
out service advisers whose task it was to train the
employees to use the new system. These training
sessions consisted of "show-and-tell" activities in
which the instructors demonstrated the use of the
telephone. Simple as the training was, it was expensive, costing millions of dollars annually. With the
introduction of electronic equipment, the functionality of the telephone systems (and complexity for the users) increased exponentially.
Initially, the company attempted to use its traditional training approach with purchasers of the electronic systems. However, a training evaluation was
conducted and it showed that following the training
experience, customer knowledge of the operation
of the electronic telephones was quite low. Training
was not effective.
A number of attempts were then made to
improve the situation. Different types of training,
presented by either Bell Canada or user personnel,
were tried and evaluated. None made any significant difference in terms of training effectiveness.
However, these training evaluation studies did
detect an important fact. No matter how training
was conducted, the users' knowledge of a limited number of functions (those they used a lot)
increased after training, indicating that practice
seemed to have a significant effect on learning.
This suggested that providing end users with
an instructional aid might help them gain greater
benefit from the electronic system. To that end, a
special instruction booklet was carefully prepared
and trainees were provided with a brief instructional session teaching the users how to use the
instruction booklet. The evaluation of this approach
showed, empirically, that the use of the instruction
booklet resulted in greater user mastery than the
formal training course.
Thus, the training evaluations conducted
throughout this process demonstrated a) that the
traditional training method was ineffective, b) that
changing the instructors had no effect, but c) that
the use of a well-developed instruction booklet had
greater effect.
This demonstrates the two main objectives
of training evaluation: to assess the effectiveness of
training and, equally important, to identify ways of
enhancing that effectiveness. At Bell, the traditional
program was discontinued and replaced with an
inexpensive booklet that was both more effective
and considerably cheaper. Training programs are designed to have an effect on learning and behaviour.
However, as the Bell Canada story demonstrates, this is not always the case.
Fortunately, in that case, the organization launched an evaluation program that
involved several studies, the results of which served not only to assess the effectiveness of the existing training but also to identify and test different strategies
for improving the situation.
In this chapter, you will learn several training evaluation models, the many
types of evaluations, the variables to measure and how to do so, as well as the
strengths and weaknesses of different data collection designs used in the conduct of evaluations.

What Is Training Evaluation?
Organizational training and development is intended to improve technical competencies (e.g., learning new software), to modify attitudes (e.g., preparing a manager
for an international assignment), and/or to modify behaviours (e.g., better communication skills). Organizations invest in this organizational function because it is
expected that training makes a positive contribution to them and to their employees.
Training evaluation is concerned with whether or not these expected outcomes materialize as a result of training. Evaluations are designed to assist decision
making: Should the organization cancel or continue a training program? Should
it be modified? How?
Training evaluation is a process designed to assess the value (the "worthiness")
of training programs to employees and to organizations. Training evaluation assesses
this value by analyzing data collected from trainees, supervisors, or others familiar
with the trainees and with the job context. Using a variety of techniques, objective
and subjective information is gathered before, during, and after training to provide
the data required to estimate the value of a training program.
Training evaluation is not a single procedure. Rather, it is a continuum of
techniques, methods, and measures. At one end of the continuum are simple
evaluations that focus on trainee satisfaction: trainees indicate on a brief questionnaire whether they liked the course. Though certainly simple and easy to implement, such evaluations regrettably yield minimal information.
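To make the reaction end of the continuum concrete, here is a minimal, hypothetical sketch of how such satisfaction ratings might be summarized (the function, ratings, and thresholds are invented for illustration and are not from the chapter):

```python
from statistics import mean

def summarize_reactions(responses, scale_max=5):
    """Summarize Likert-scale reaction scores from a post-course questionnaire.

    responses: list of integer ratings, 1 (strongly dislike) .. scale_max (strongly like).
    Returns the mean rating and the share of favourable ratings
    (the top two points of the scale, e.g. 4 or 5 on a 5-point scale).
    """
    favourable = sum(1 for r in responses if r >= scale_max - 1)
    return {
        "mean_rating": round(mean(responses), 2),
        "percent_favourable": round(100 * favourable / len(responses), 1),
    }

# Hypothetical ratings collected from 10 trainees at the end of a course
ratings = [5, 4, 4, 3, 5, 4, 2, 5, 4, 3]
print(summarize_reactions(ratings))
```

Note how little this tells a decision maker: it captures whether trainees liked the course, but nothing about learning, behaviour, or results.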
At the other end of the training evaluation continuum lie more elaborate
procedures and more complete questionnaires and interviews that provide managers with more information of a richer quality about the value of a training
program. These evaluations may assess job performance improvements and even
isolate the unique role that training played in it. Others assess the psychological states of trainees immediately after training: Have they learned the skills?
Are they motivated to apply them? Are they confident they can? Will the work
environment help or hinder them? The more sophisticated the evaluations, the
better the information they provide, the surer the conclusions and the greater
the confidence with which they can be stated.

However, more sophisticated evaluation procedures are more costly and
more complex and difficult to implement. Conducting elaborate evaluations may
entail disruptions to the training program and/or to the job. Specialized consultants may be required in some cases.

But such elaborate measures are not warranted in every case. At times a simple and cheap
evaluation procedure will do while a more complex and sophisticated approach
might be overkill.1 In other cases the reverse is true. In the end, training evaluation choices are a trade-off, balancing between quality and complexity/costs,
between the informational needs of decision makers and the difficulty and
resources required to obtain that information.

Training evaluation: A process to assess the value (the worthiness) of training programs to employees and to organizations.

Why Conduct Training Evaluations?

Organizations invest in the training of employees and managers as it is a necessity for competitiveness in the current global environment. With chronic understaffing (one result of cost-cutting layoffs), the amount of time available for training has become smaller and must be used more wisely. In this context, management has a stake in ensuring that the resources invested in training bear fruit.
Training evaluation is therefore of value to:
- Help fulfill the managerial responsibility to improve training.
- Assist managers in identifying the training programs most useful to employees and to assist management in the determination of who should be trained.
- Determine the cost benefits of a program and to help ascertain which program or training technique is most cost-effective (see Chapter 12).
- Determine whether the training program has achieved the expected results or solved the problem for which training was the anticipated solution.
- Diagnose the strengths and weaknesses of a program and pinpoint needed improvements.
- Use the evaluation information to justify and reinforce, if merited, the value and credibility of the training function to the organization.

Do Organizations Conduct Training Evaluations?

By the year 2000 most organizations in North America were conducting some
evaluation of most of the training programs offered to their employees.2 However,
there has been a gradual decline in the evaluation activities of organizations. The
survey of the American Society for Training and Development (ASTD) and the
latest Conference Board of Canada survey of organizations confirm this result.3
In 2002, 89 percent of Canadian organizations assessed training in some manner,
but by 2010 fewer than 50 percent still did. This significant decline occurred
even though the actual proportion of staff time invested in evaluation remained
steady (hovering between 4 percent and 10 percent). Notwithstanding the fact
that the same evaluation time is spent on fewer courses, the level of sophistication of the evaluations has not shown demonstrable growth. Most evaluations
are reaction based.
As depicted in Figure 11.1, although many organizations have abandoned
evaluation, almost half have not. Most (92 percent) of these organizations gauge
training success by soliciting the trainees' opinions (their "reactions"), and they
do this for most (80 percent) of the courses held. Figure 11.1 also shows the
rarity of organizations that measure learning, behaviour, or job performance
improvements to evaluate training success. Such evaluations are more useful,
but they are more complex. Longitudinal data collected by the Conference Board
indicates little change to these percentages since the survey began a decade ago:
the percentage of organizations that evaluate reactions has remained the same.

The new finding, however, is the confirmation that organizations are not simply abandoning reaction measures: they are abandoning evaluation altogether! Perhaps dissatisfied with reaction-based evaluations, and unable or unwilling to consider the use of better approaches, many organizations have chosen to forgo training evaluation.

Figure 11.1
The Percentage of Organizations that Evaluate Training and the Percentage of Courses Evaluated at Various Evaluation Levels

Level        % of organizations that conduct evaluations    % of courses evaluated
Reaction     92                                             80
Learning     73                                             66
Behaviour    49                                             39
Results      28                                             22

Source: C. Lavis, Learning and Development Outlook 2011: Are Organizations Ready for Learning 2.0? The Conference Board of Canada, 2011.

Barriers to Training Evaluation
Studies of training professionals showed that some employers do not conduct
training evaluations because they are perceived to be too complicated to implement, too time consuming, and/or too expensive.4 Indeed, this may explain why
companies that conduct evaluation choose the simplest of procedures (reaction-based evaluations). In some cases, training managers resist evaluation because of
the difficulty of isolating from other variables the unique impact that training has
on employee effectiveness. In yet other cases, training managers do not conduct
evaluations because top management does not demand them, while others do
not because they may not wish to know. Thus, barriers to training evaluation fall
into two categories: pragmatic and political.

Pragmatic Barriers to Training Evaluation

Increasingly, training departments, as with all other departments, are expected to demonstrate their usefulness, their contribution to job performance improvements, and ultimately to the company's bottom line. Based on the ASTD and The
Conference Board of Canada reports, the training departments in most companies
are finding it a challenge to fulfill this obligation. Collecting training success data
that addresses management's needs requires more complex evaluation procedures.

Fundamentally, evaluation requires that perceptual and/or objective information furnished by trainees, their supervisors, and even others (such as peers, subordinates, and/or clients) be gathered before, during, and/or after the
training session. These data gathering and analytical efforts require extensive
collaboration from the trainees, their supervisors, and so on, which is disruptive
and understandably difficult to obtain.

Some training departments do not assess training for a different reason. As you will learn in this chapter, taking on the training evaluation task requires knowledge about evaluation models, research design, measurement, questionnaire construction, and data analysis. For some, that can seem an intimidating prospect.

Evaluation also costs money. As the effectiveness of a training department
is often defined by the number of classes offered or the number of participants,
siphoning budgets from this main task to the evaluation of the remaining training
programs may prove unpalatable.5 This may partially explain why the Conference
Board of Canada reports show no real change in the budgets allocated to evaluation activities by training departments.
However, training evaluation has been unduly mystified. The principles,
techniques, and procedures involved in training evaluation, many of which are described in this chapter as well as in Chapter 12, are logical, straightforward,
and implementable. Moreover, with the advent of modern information technologies (e.g., Web-based questionnaires and computerized work-performance data)
and new evaluation models and designs, the disruptive impact and costs of data
collection can now be seriously eased.

Political Barriers to Training Evaluation

Evaluations are conducted when there is pressure from management to do so
(see Training Today 11.1, "Upper Management's Role in Training Evaluation"). In
the absence of such pressures, many training managers would rather forgo the
exercise. Clearly, management needs to stress evaluation.

Training Today 11.1: Upper Management's Role in Training Evaluation
A few years ago, there was a rash of serious work accidents in a large transportation company. Some of these
accidents were the direct or indirect result of operator
errors due to the consumption of drugs and alcohol. As
a result, the firm declared a zero-tolerance policy concerning the use of such substances. The policy required
that no employee use substances that may impair effective and safe job performance, whether or not these
substances are legal. The key element of the policy was
that all supervisors were directly and personally responsible for enforcing the policy. Supervisors who failed
to enforce the policy would themselves be subject to
sanctions that could include dismissal.
The training department was directed to develop
and administer a training program to all supervisory
and managerial personnel in the company that was
aimed at teaching the policy and its implementation.
However, the CEO of the company also insisted that
the training program be evaluated to ensure that it was effective. As a result, the training department, which
normally only administered "smile sheets" to evaluate
their training programs, launched a much more sophisticated training evaluation program that included three
measurement times and the collection of information
on dozens of variables. Clearly, this effort was launched
because the training program had attained high visibility and because top management demanded it. The
training evaluation did uncover some problems with the
training program and suggested a number of changes.
However, none of these changes was ever implemented.
This was because top management showed no interest
in the results of the evaluation study, as these became
available several months after the training program was
administered.
The moral of the story is that high-level visibility
can stimulate evaluation actions. However, maintaining
that visibility is important to ensure that the evaluation
results will prove of practical use.
But evaluation can be threatening. These studies might conclude that part of
a training program (or even an entire training approach) is not effective. While
this should be considered a valuable finding (as in the Bell Canada case), some
trainers fear that this will reflect poorly on them and/or the training function and
the service they offer. But without evaluations, managers are unable to demonstrate their value to the organization, which may be inherently more risky than
launching an evaluation system that can improve training and its effectiveness.
Other trainers do not evaluate on ethical grounds. They feel that evaluations
should be conducted by external professionals to avoid a perceived or actual
conflict of interest. How can the person doing the training also be the one responsible for evaluating its effectiveness? Conflict of interest, although always a possibility, is unlikely when training managers make use of the established methods
of evaluation, many of which are treated in this chapter.

Types of Training Evaluation

Most training evaluations focus on the impact of a training program on trainees' perceptions and, to a much lesser degree, behaviours. Perceptions are assessed through
questionnaire measures, while behavioural data may require a combination of techniques including self-reports, observation, and performance data. Evaluations may
be distinguished from one another with respect to the data gathered and analyzed,
and the fundamental purpose for which the evaluation is being conducted.

1. The data collected: Evaluations differ with respect to the type of information that is gathered and how that is accomplished.

a. The most common training evaluations rely on trainee perceptions at the
conclusion of training (did the participants like it?), while more sophisticated evaluations go further to analyze the extent of trainee learning and
the post-training behaviour of trainees.

b. More recently, there has been a growing emphasis on evaluation
studies that also assess the psychological forces that operate during
training programs and that impact outcome measures such as learning
and behaviour change. Research in this area has helped to identify
psychological states (affective, cognitive, and skills-based) that are
important training outcomes because of the influence they have on
learning as well as on improvements in job behaviours.6

c. Finally, information about the work environment to which the trainee
returns can be useful in evaluation.7 For example, measures of
training transfer climate and a learning culture have been developed.8
Understanding the organization's culture and climate as well as its policies
can strongly affect training choices and effectiveness.9 The degree to
which opportunities exist for on-the-job practice of new skills or the level
of support provided by others to new learners, amongst other things,
have been found to influence training success.10 Training courses that are
strongly aligned with the firm's strategic vision tend to be more effective. It has also been shown that training programs are more likely to improve
job performance when using the new skill improves the performance of
participants whose remuneration depends on performance.11

Formative evaluations: Provide data about various aspects of a training program.
Summative evaluations: Provide data about the worthiness or effectiveness of a training program.
Descriptive evaluations: Provide information that describes the trainee once he or she has completed a training program.
Causal evaluations: Provide information to determine whether training caused the post-training behaviours.

2. The purpose of the evaluation: Evaluations also differ with respect to
their purposes. Worthen and Sanders distinguished between formative
evaluation and summative evaluation.12

a. Formative evaluations are designed to help evaluators assess the value
of the training materials and processes with the key goal of identifying
improvements to the instructional experience (the clarity, complexity,
and relevance of the training contents, how they are presented, and the
training context). Hence, formative evaluation provides data that are of
special interest to training designers and instructors.

b. Summative evaluations are designed to provide data about a
training program?s worthiness or effectiveness: Has the training
program resulted in payoffs for the organization? Cost-benefit
analyses (see Chapter 12) are usually summative. Economic indices
are often an integral and important part of these types of evaluations;
consequently, organizational managers show great interest in these
results.

A further distinction can be made between descriptive and causal evaluations. Descriptive evaluations provide information describing trainees once they have completed the program. What has the trainee learned in training? Is the trainee more confident about using the skill? Is it used on the job? Most evaluation designs have descriptive components. Causal evaluations are used to determine whether the training caused the post-training behaviours. Was the performance improvement caused by the training program? Causal evaluations require more sophisticated experimental and statistical procedures.

Models of Training Evaluation

Models of training evaluation specify the information (the variables) that is to
be measured in training evaluations and their interrelationships. The dominant
training evaluation model is Donald Kirkpatrick's hierarchical model.13 However, research and practical experience have indicated that Kirkpatrick's model can be improved. The COMA model and the Decision-Based Evaluation model are two recent efforts in that direction; both are discussed below.14

Kirkpatrick's Hierarchical Model: The Four Levels of Training Evaluation

Kirkpatrick's hierarchical model is the oldest, best known, and most frequently used training evaluation model. For example, the Conference Board of Canada
data summarized in Table 11.1 are organized using that model. The model identifies four levels of training evaluation criteria. According to this model, a training program is "effective" when:

1. Trainees report positive reactions to a training program (Level 1 = reactions).
2. Trainees learn the training material (Level 2 = learning).
3. Trainees apply what they learn in training on the job (Level 3 = behaviours).
4. Training has a positive effect on organizational outcomes (Level 4 = results).

Table 11.1
The Main Variables Measured in Training Evaluation

Reactions: Trainee perceptions of the program and/or specific aspects of the course. Measured by questionnaires, focus groups, and interviews.
Learning: Trainee acquisition of the program material. Declarative learning is knowing the information; procedural knowledge is being able to translate that knowledge into a behavioural sequence. Measured by multiple-choice or true-false tests (declarative) and situational and mastery tests (procedural).
Behaviour: On-the-job behaviour display and objective performance measures. Measured by self-reports, supervisory reports, direct and indirect observations, and production records.
Motivation: Trainee desire to learn and/or transfer skills. Measured by questionnaires.
Self-efficacy: Trainee confidence in learning and/or behaviour display on the job. Measured by questionnaires.
Perceived and/or anticipated support: The assistance trainees obtain and/or the assistance trainees expect. Measured by questionnaires.
Organizational perceptions: How trainees perceive the organization's culture and climate for learning and transfer. Measured by standardized questionnaires.
Organizational results: The impact of training on organizational outcomes. Measured by organizational records.

In a more recent articulation, an additional level has been added to the
Kirkpatrick model. Level 5 refers to return on investment (ROI), which is designed
to assess the financial benefit of training to the organization. However, for simplicity's sake we continue this description using the original four-level version of the model proposed by Kirkpatrick (ROI and the financial benefits of training programs are discussed in Chapter 12).
The model states that the four levels are arranged in a hierarchy, such that
each succeeding level provides more important (though more difficult to obtain)
information than the previous one. The model also assumes that all levels are
positively related to one another, each level having a causal effect on the next
level. Hence, positive trainee reactions (L1) cause trainees to learn more (L2),
which in turn leads to the behavioural display of the new skill at work (L3),
which in turn impacts on organizational effectiveness (L4), the ultimate reason for conducting...
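The causal evaluations mentioned earlier rest on experimental comparison: contrast the pre-to-post gains of a trained group against those of an untrained control group. The following is a purely illustrative sketch of that logic; the scores, the function, and the use of Welch's t statistic are assumptions for the example, not the chapter's own procedure, and a real study would consult significance tables or a statistics package:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(gains_trained, gains_control):
    """Welch's t statistic comparing the mean score gains of two groups.

    A large positive value indicates the trained group's gain exceeds the
    control group's by more than sampling noise would suggest.
    """
    m1, m2 = mean(gains_trained), mean(gains_control)
    v1, v2 = variance(gains_trained), variance(gains_control)  # sample variance
    n1, n2 = len(gains_trained), len(gains_control)
    return (m1 - m2) / sqrt(v1 / n1 + v2 / n2)

# Hypothetical pre/post knowledge-test scores (percent correct)
trained_pre  = [55, 60, 48, 62, 57]
trained_post = [78, 82, 70, 85, 80]
control_pre  = [56, 59, 50, 61, 58]
control_post = [58, 63, 51, 64, 60]

gains_t = [post - pre for pre, post in zip(trained_pre, trained_post)]
gains_c = [post - pre for pre, post in zip(control_pre, control_post)]
print(round(welch_t(gains_t, gains_c), 2))  # large positive t: training, not time alone, drove the gain
```

The control group is what makes the inference causal: without it, a gain in the trained group could reflect practice, maturation, or job changes rather than the program itself.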
Solution details:
Status: Answered
Quality: Approved
This question was answered on: Sep 18, 2020