Customer satisfaction with training programs


Martin Mulder

The Authors

Martin Mulder, Head Chair Group of Educational Studies, Department of Social Sciences, Wageningen University and Research Centre, Wageningen, The Netherlands

Abstract

In this contribution, a model for evaluating customer satisfaction with training programs is described. The model was developed and implemented for an association of training companies. The evaluation was conducted by an independent organisation to enhance the trustworthiness of the evaluation results. The model is aimed at determining the quality of training programs as perceived by project managers from the organisations that purchased in-company training programs from the training companies. Reliability research showed satisfactory results. The model is based on the methodology of effectiveness research, and the data were used to test a model of training effectiveness.

1. Introduction

Training organisations want answers to questions about the quality of their training programs as perceived by their clients. If the results of such evaluations are disappointing, the training organisation can modify its policy in this respect, and if the results are promising, the training organisation can try to communicate this to prospects. Trustworthiness of the evaluation results is extremely important in this respect. In the market for knowledge-intensive services, clients are becoming more professional and critical, and no longer accept biased evaluations. Many clients mistrust evaluation data that are collected and presented by the training organisations themselves.

Under certain conditions, professional development can be better realised by a collective of training organisations than by an individual organisation. Examples of joint activities of associations of training organisations are: collective marketing, defending interests with respect to the tax system, public administration and certifying organisations, and evaluation research.

For a group of training organisations in The Netherlands this was the reason for creating an association that conducts these tasks. This association, the VETRON, also wants to be perceived as the group of training organisations of excellent quality. This association has about 40 members. They are all autonomous. Some of them employ only a few trainers/consultants, others over 100. Their total joint turnover per annum is some hundreds of millions of Euros.

In this contribution a system for independent project evaluation is described that has been implemented for the VETRON. The member organisations face market- and impact-oriented questions like: "Do I deliver a contribution to the development of my clients?", "Did the recipients of my training projects become more competent in performing their increasingly complex tasks?", "Did their employability increase?", and "Do my training projects have the necessary impact for the client organisation?". They collectively supported independent training quality research to monitor their average quality and to improve their individual quality. Quality improvement of individual training organisations did not depend solely on the system of project evaluation; other quality improvement instruments were used as well. The unique characteristic of the independent project evaluation was that the training organisations received comparative and historical trend data about their position within the group and about the development of the quality of their training services as perceived by their client organisations.

The convincing power of the project evaluation system described in this contribution is mainly a function of its independent character. An individual training organisation could also have an independent evaluation study conducted, but towards the market this inspires less confidence than an independent evaluation study within the framework of an association. An individual training organisation could claim confidentiality from the research organisation when certain results are disappointing, and could demand concealment of such findings. At the level of an association this is practically excluded, because there is then no direct personal relationship between the interested directors of the training organisations and the researchers. For the VETRON this was the reason to contract a university to conduct the independent project evaluation.

In the next part of this contribution, the evaluation system is described. The third part presents the research program that was related to the development of the evaluation system. In the fourth part a model of training effectiveness is tested. This test is part of the research program, and it was an important step in the considerations about the further use of the system. Finally, in the fifth part, some implications of and questions about the evaluation system are discussed.

2. The evaluation system

The evaluation system comprises a standardised evaluation procedure and instrument and a standard format for research reports for the training organisations.

As to the evaluation procedure, the unit of analysis is a training project carried out by a training organisation that is a member of the VETRON. These are all training projects in which a question of a client organisation is the focal point. In many instances these are special projects that carry an investment risk, are of great importance from the perspective of the organisation's strategy, and are costly and innovative. But there are also customised projects that include a clear development component and are clearly limited in scope and objective-oriented. In a number of cases there are customised ready-made projects, as when existing programs are adapted to a specific question of the client organisation. Standard training programs with open registration are excluded from this evaluation.

The population of training projects is divided into three sub-populations: in-company training projects, individual training projects, and training program development. The last group, however, is small and is excluded from this contribution. This contribution is based on three rounds of data collection. The total population of training projects in the three evaluation periods comprises over 10,000 projects. The number of evaluated projects exceeds 1,200.

For each evaluation a representative sample was drawn. For that purpose the training organisations prepared client lists of the training projects for which, during a specified half-year, the training organisations sent invoices. To guarantee that these lists were complete, the director of the training organisation had to sign a declaration specifying that the list was complete. Furthermore, the research organisation had the right to conduct audits at training organisations. During such an audit, the invoice and project administration could be inspected and compared with the client list submitted.

The training organisations had to complete forms stating several items for each training project: the name of the project, the name and address of the client, the name and address of the training organisation, and the name of the trainer/consultant. For each evaluation period this resulted in a file of over 3,000 projects. From this file a random sample was drawn. At the start, the clients were informed by the training organisations about the evaluation. The independent training evaluation is part of the official delivery specifications of the VETRON organisations, and clients are requested to participate in the study. Clients receive the VETRON delivery regulations when they contract one of the member training organisations. Nevertheless, all clients were notified when a data collection phase was about to start, to remind them that they could expect a questionnaire. The questionnaire was sent out subsequently, and after two weeks all respondents were called by an organisation for market research. The data were processed, and a benchmark report for each training organisation was produced.
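
As a minimal illustration of the sampling step, the sketch below (in Python, with pandas) draws a simple random sample from a small stand-in for the combined client list; the column names, example rows and sample size are hypothetical assumptions, not details taken from the study.

```python
import pandas as pd

# Each row stands for one invoiced training project reported by a member
# organisation; in practice the combined half-year file contained over
# 3,000 projects.
projects = pd.DataFrame({
    "project":      ["Leadership A", "Sales B", "Teamwork C", "Safety D"],
    "client":       ["Client 1", "Client 2", "Client 3", "Client 4"],
    "organisation": ["Org X", "Org Y", "Org X", "Org Z"],
    "consultant":   ["Trainer P", "Trainer Q", "Trainer R", "Trainer S"],
})

# Draw a simple random sample of projects whose clients receive the
# questionnaire (sample size here is purely illustrative).
sample = projects.sample(n=2, random_state=42)
print(sample)
```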

The evaluation instrument consists of a closed questionnaire that was developed in a pilot project. The list is based on the literature on evaluation methodology for corporate training and development (Hamblin, 1974; Brinkerhoff, 1987; May et al., 1987; Bramley, 1991; Phillips, 1991; Basarab and Root, 1992; Kirkpatrick, 1994). Furthermore, it is based on insights in the field of integrated and result-oriented training and development (Camp et al., 1986; Gaines-Robinson and Robinson, 1989), and on the training marketing literature (Gilley and Eggland, 1989; Swanson, 1994).

The questionnaire consists of eight blocks of questions:

1 general questions;

2 questions about the training project;

3 questions about the objectives of the training project;

4 questions about the agreements about the responsibilities for attaining the objectives of the training project;

5 questions about the results of the training project;

6 questions about the success attribution of the attained objectives;

7 questions about the agreements on aspects of the training project, meeting these agreements as well as about the satisfaction regarding the aspects of the training projects; and

8 questions about the contacts with representatives of the training organisation.

Within the group of general questions, a question is asked about the general satisfaction with the training project as a whole. The respondent can give a score of 1 (very bad) to 10 (excellent) for this question. All intermediate values on this scale are labelled, and respondents are used to this rating scale. This parameter is indicated as the total satisfaction indicator (TSI). The questions about the objectives of the training project include a question about the extent to which certain objectives are of importance within the project. Three categories of objectives are distinguished here. These are objectives that are, respectively, aimed at:

1 attaining learning results (knowledge, skills, attitudes);

2 realising changed work behaviour in the work situation; and

3 supporting change of the organisation.

It is possible that in certain training projects different categories of objectives are combined. The categories of objectives are based on the categories of training results as developed by Kirkpatrick (1994).

The questions about the division of responsibilities for attaining the objectives of the projects are related to the fact that, for attaining training effects in practice, there are always two partners who contribute effort: the training organisation and the client organisation. For attaining learning results this is probably less the case, but for attaining changed work behaviour, and certainly for supporting organisational change, the client organisation itself carries a reasonably large responsibility. The training organisation can only be held responsible for that part of the effectiveness of a training project for which it is contracted by the client organisation.

As to the results of the training projects, two variables are distinguished: the correspondence between the results of the project and the expectations (this is indicated as "expectation realisation"), and the level to which the objectives of the training project are attained. As for the attainment of objectives, three categories of objectives are distinguished, corresponding with three levels of attainable results: learning results, work behaviour, and organisational change.

Regarding the questions about the attribution of the results of the training projects, two categories of projects are distinguished: projects in which the objectives were attained and projects in which they were not. For the first category the question is to what extent the attainment of the objectives can be attributed to the efforts of the training organisation. For the second category the question is to what extent the training organisation is accountable for the disappointing results.

In the seventh block of the questionnaire, a distinction is made between the phases of the training project and the administrative handling of the project. Three phases are distinguished: before, during, and after the training. Within these phases, a number of topics are distinguished. Three questions are asked about these topics: whether or not the client and training organisation made agreements about them, the extent to which the training organisation met the agreements, and the satisfaction with all these aspects (irrespective of whether agreements were made on the topics). The topics about which these questions are asked are as follows:

1 Before the training:

  • determining of the target group;

  • determining the learning needs of the participants;

  • the training design.

2 During the training:

  • the materials and technical equipment;

  • the role of the trainer(s)/consultant(s);

  • the duration of the training project.

3 After the training:

  • the way in which the training project is being evaluated;

  • coaching on the application of the newly acquired knowledge and skills in the workplace;

  • a final report prepared by the training organisation.

4 Administrative handling:

  • a cancellation regulation;

  • the invoicing procedures;

  • property and authors' rights.

Following the above questions, there is also a question about the reason why the client may be dissatisfied with certain aspects. Furthermore, the question is asked whether dissatisfaction with (any of) these aspects has led to problems with the training organisation. If that was the case, the respondent is asked whether these problems have already been solved, and if not, whether the respondent would appreciate the research organisation contacting the training organisation about this in order to solve the problem. If the respondent agreed, the training organisation was notified of the complaint of the client organisation and was requested to contact the client about it. Furthermore, it was agreed with both parties that the research organisation would contact the client organisation again after two weeks, to learn whether the complaint had been handled satisfactorily. If there had been no contact between the training and client organisations, or the complaint had not been resolved, the research organisation would communicate the complaint to the board of the association.

Finally, in the last block of the questionnaire, questions are asked about the contacts the respondent has had with representatives of the training organisation and how satisfied he/she is with these contacts. The respondent is asked to mention the names of the persons with whom he/she has had contact. Three categories of persons are distinguished in these questions: the executives, the trainers/consultants, and the administrative/supporting personnel.

In the benchmark reports, the means of the organisations and the means of the association are contrasted. The reports mainly consist of a graphical representation of the results; for each question in the questionnaire a figure is made. For clear communication of the graphics, the respective question is formulated in the title of the graphic. Furthermore, executive summaries are added, in which the design of the study and the most important results are given. Ranking lists of training organisations were also made, based on the total satisfaction indicator. As of the third research period, two ranking lists were developed: the first based on the results of the third research period, the second on the historical average of the total satisfaction indicator. The ranking based on the average total satisfaction reduces the dynamics in the ranking list and indicates the average level of quality of the training organisations.
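
The sketch below illustrates, with hypothetical column names and invented data, how the per-organisation means, the association mean and the two ranking lists (current period versus historical average) could be derived from the response data; it is not the reporting code actually used for the benchmark reports.

```python
import pandas as pd

# Illustrative response data: one row per returned questionnaire
# (organisation, research period, total satisfaction indicator).
responses = pd.DataFrame({
    "organisation": ["Org X", "Org X", "Org Y", "Org Y", "Org Z", "Org Z",
                     "Org X", "Org Y", "Org Z"],
    "period":       [1, 1, 1, 1, 1, 1, 2, 2, 2],
    "tsi":          [7, 8, 6, 7, 8, 9, 7, 8, 7],
})

# Per-organisation mean TSI in the latest period, contrasted with the
# association mean, as in the benchmark reports.
latest = responses[responses["period"] == responses["period"].max()]
benchmark = latest.groupby("organisation")["tsi"].mean().to_frame("tsi_current")
benchmark["association_mean"] = latest["tsi"].mean()

# Ranking list 1: current period only; ranking list 2: historical average,
# which dampens the period-to-period dynamics in the list.
benchmark["rank_current"] = benchmark["tsi_current"].rank(ascending=False)
benchmark["tsi_historical"] = responses.groupby("organisation")["tsi"].mean()
benchmark["rank_historical"] = benchmark["tsi_historical"].rank(ascending=False)

print(benchmark.sort_values("rank_current"))
```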

3. The research program

Owing to the longitudinal character of the project evaluation, it is possible to conduct additional research into the quality of the research instrumentation and into the relationships within the database. The analysis of the database allows us to scrutinise the relationships between the factors that are included in the questionnaire.

During the research project, the following research phases can be distinguished.

1 Instrument development and pilot tests; during this period the main interest was the value of the total satisfaction indicator.

2 Development of the ranking lists, the service reliability of the training organisations (did they actually do what they promised?), the relationships with the total satisfaction indicator and the reliability of the data.

3 Insight into the dynamics of the total satisfaction, and an interest in the historical average of the training organisations, the variation in their effectiveness, and the relationships between the preparation and implementation of training projects on the one hand and their effectiveness on the other.

4 Emphasis on increasing the absolute response rate per training organisation. This was also done to increase the usability of the evaluation results by the training organisations.

5 Presentation of the performance of training organisations by performance profiles instead of separate indicators such as the total satisfaction indicator. Furthermore, interest grew in testing a model on the data set and in validity research.

At the same time there was much interest in the reliability of the data. To test this, a first look was taken at the representativeness of the response group. A comparison was made between the response group and a non-response group (respondents who were willing to participate in a non-response study). It was found that the ratings of training projects by the non-response group did not systematically deviate from the ratings by the response group. Subsequently, a test-retest reliability analysis was conducted: a number of respondents were asked to give a second rating of the training project. There appeared to be a high level of correspondence between the first and second ratings. The respondents who participated in the test-retest reliability analysis were also asked to nominate a candidate for an independent rating as second rater (as a second opinion about the same training project). These candidates were approached for an inter-rater reliability analysis. The results indicated that second raters did not evaluate the training projects differently from the first raters (Mulder, 1996).
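
As a rough illustration of these reliability checks, the sketch below computes a test-retest correlation, an inter-rater correlation and a response versus non-response comparison on small invented data sets; the numbers are illustrative only and the actual study may have used different statistics.

```python
import numpy as np
from scipy import stats

# Test-retest reliability: the same respondents rated the same project twice.
first_rating  = np.array([7, 8, 6, 9, 7, 8, 5, 7, 8, 6])  # illustrative data
second_rating = np.array([7, 8, 7, 9, 6, 8, 5, 7, 7, 6])
r_retest, _ = stats.pearsonr(first_rating, second_rating)
print(f"test-retest correlation: {r_retest:.2f}")

# Inter-rater reliability: an independent second rater from the same client
# organisation rated the same training project.
rater_1 = np.array([7, 8, 6, 9, 7, 8, 5, 7, 8, 6])
rater_2 = np.array([8, 7, 6, 9, 7, 7, 6, 7, 8, 7])
r_inter, _ = stats.pearsonr(rater_1, rater_2)
print(f"inter-rater correlation: {r_inter:.2f}")

# Non-response check: do ratings by the non-response group deviate
# systematically from those of the response group?
response_group     = np.array([7, 8, 6, 9, 7, 8, 5, 7, 8, 6, 7, 8])
non_response_group = np.array([7, 7, 6, 8, 7, 8, 6, 7])
t, p = stats.ttest_ind(response_group, non_response_group, equal_var=False)
print(f"response vs non-response: t = {t:.2f}, p = {p:.2f}")
```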

After the reliability analysis, attention turned to the considerable dynamics in the ranking lists of the training organisations. It was suggested that the small response numbers of mainly small training organisations and the non-response caused the dynamics. This hypothesis, however, was incorrect, as the reliability of the data was high. The answer to the question about the cause of the dynamics can also be sought in the statistical characteristics of the sample. Regression to the mean seems a more plausible explanation, but the small differences between the means of the organisations may also be an explanation. These differences are so small that it may actually be unjustified to use them for ranking the training organisations. This needs to be analysed more thoroughly, since it would imply that the dynamics in the ranking have little to do with substantial differences in the quality of the training organisations. That would lead to the conclusion that developing a ranking list based on the total satisfaction indicator is not justified.
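
A small simulation can illustrate this point: even if all member organisations had exactly the same true quality, the small number of ratings per organisation would already produce noticeable movement in the ranking between evaluation periods. All numbers in the sketch below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_orgs, n_clients, sd = 40, 10, 1.0   # 40 organisations, 10 ratings each
true_quality = np.full(n_orgs, 7.5)   # identical true quality on the 10-point TSI

def observed_ranking():
    # Each organisation's observed mean is its true quality plus sampling noise.
    means = true_quality + rng.normal(0, sd / np.sqrt(n_clients), n_orgs)
    return np.argsort(-means)  # organisation indices, best first

rank_1 = observed_ranking()   # "period 1" ranking
rank_2 = observed_ranking()   # "period 2" ranking

# Position of each organisation in the two rankings, and how far it moved.
pos_1 = np.argsort(rank_1)
pos_2 = np.argsort(rank_2)
print("mean absolute shift in rank position:", np.abs(pos_1 - pos_2).mean())
```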

Next, interest grew in the variation in the effectiveness of the training projects. There appeared to be considerable variation in the answers to the questions about the results of the training projects, and the effect parameters therefore received more attention. It was demonstrated that there was a significant relationship between the total satisfaction and the attained learning results, between the total satisfaction and the changed work behaviour and, to a lesser degree, between the total satisfaction and the organisational change, but the absolute magnitude of the relationships appeared to be weak (see Mulder and Van Ginkel, 1995).

4. Model testing: training effectiveness

Based on these preliminary insights, a model of training effectiveness is tested, in which three latent variables are distinguished: two latent predictor variables, "Project definition" and "Project implementation", and the latent criterion variable "Project effectiveness".

The variable "Project definition" comprises of three items:

1 Objective operationalisation: the extent to which the objectives of the training project are specified; this item is at ordinal level, has five values, and its extremes are "general" and "specific".

2 Distribution of responsibility: the extent to which the training organisation carries responsibility for attaining the results; this item is at ratio level, has ten values and indicates the percentage of responsibility the training organisation has in attaining learning results, changed work behaviour and organisational change. This item consists of three partial items, and accordingly three scores.

3 Condition registration: the delivery conditions about which agreements have been made between the client organisation and the training organisation; this is a dichotomous item (indicating for each condition whether or not agreements were made), consisting of 12 partial items within the phases before, during and after the training and the administrative handling.

The variable "Project implementation" also comprises of three items:

1 Total satisfaction: the satisfaction with the total project handling, i.e. the preparation of the project, its implementation and the follow-up activities; this is an ordinal item with ten values, varying from very bad to excellent satisfaction with the total handling of the training project.

2 Condition-realisation consistency: the delivery reliability of the training organisation; like the item "condition registration", this item consists of 12 partial items that correspond with the aspects about which client organisations have made agreements with the training organisations; these are ordinal items with five values, varying from not at all to completely keeping the agreements.

3 Condition-realisation satisfaction: the level of satisfaction with the performance of the training organisations with respect to the possible delivery conditions; like the item "condition-realisation consistency", this item consists of 12 partial items which in this case correspond with the aspects (or conditions) about which client organisations may have made agreements with the training organisations; these are also ordinal partial items with five values, varying from very dissatisfied to very satisfied with the performance of the training organisation on the different aspects.

And the variable "Project effectiveness" also consists of three items:

1 Expectation realisation: the extent to which the project results meet the expectations of the client organisation; this is an item at ordinal level with five values, varying from not at all to completely meeting the expectations for the total results of the training project.

2 Objective realisation: the extent to which the intended objectives of the training project are achieved; three categories of objectives are distinguished here: those aimed at reaching learning results, changed work behaviour, and organisational change, respectively; these are three partial items at interval level, varying from not at all to completely attaining the project objectives.

3 Success attribution: the extent to which the training organisation has been responsible for attaining the intended objectives; a distinction is again made between objectives at the level of learning results, work behaviour, and organisational change; these are also three partial items at interval level, varying from not at all to completely attributing the success of the training project to the training organisation.

For the model test, three groups of training projects are distinguished: a group of 569 projects aimed at achieving learning results, a group of 433 projects aimed at realising changes in work behaviour, and a group of 206 projects aimed at reaching organisational change. A Lisrel analysis is performed using unweighted least squares estimation, a procedure that is suited for non-normally distributed data. Furthermore, error terms are allowed to correlate where the results of the analyses indicated this.
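
For readers who want to reproduce this kind of analysis, the sketch below shows how such a three-latent-variable model could be specified today with an open-source SEM package (semopy), fitted with an unweighted least squares objective. The original analyses were performed with Lisrel; the observed variable names and the synthetic data below are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from semopy import Model

# Synthetic data with a plausible latent structure stand in for the actual
# project ratings; n matches the size of the learning-results group.
rng = np.random.default_rng(1)
n = 569
f_def  = rng.normal(size=n)                                    # Project definition
f_impl = 0.5 * f_def + rng.normal(scale=0.8, size=n)           # Project implementation
f_eff  = 0.6 * f_impl + 0.1 * f_def + rng.normal(scale=0.7, size=n)  # Project effectiveness

def indicators(factor):
    # Three noisy indicators per latent variable.
    return np.column_stack([factor + rng.normal(scale=0.6, size=n) for _ in range(3)])

cols = ["objective_operationalisation", "responsibility_distribution",
        "condition_registration", "total_satisfaction",
        "condition_realisation_consistency", "condition_realisation_satisfaction",
        "expectation_realisation", "objective_realisation", "success_attribution"]
data = pd.DataFrame(np.hstack([indicators(f_def), indicators(f_impl), indicators(f_eff)]),
                    columns=cols)

model_desc = """
ProjectDefinition =~ objective_operationalisation + responsibility_distribution + condition_registration
ProjectImplementation =~ total_satisfaction + condition_realisation_consistency + condition_realisation_satisfaction
ProjectEffectiveness =~ expectation_realisation + objective_realisation + success_attribution
ProjectImplementation ~ ProjectDefinition
ProjectEffectiveness ~ ProjectDefinition + ProjectImplementation
"""

model = Model(model_desc)
model.fit(data, obj="ULS")   # unweighted least squares, as in the original study
print(model.inspect())       # estimated loadings and structural coefficients
```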

The results of the analyses are depicted in Figure 1, Figure 2 and Figure 3. Figure 1 presents the results for the projects that were aimed at achieving learning results. Since the χ² value has a p-value of 0.10, it can be concluded that the model fits. As appears from the coefficients, there is a rather strong relationship between the latent variables "Project implementation" and "Project effectiveness", but a very weak relationship between the latent variables "Project definition" and "Project effectiveness". Since there is a moderate relationship between "Project definition" and "Project implementation", it can be concluded that the project definition affects the project effects via the implementation of the training project.

The same results are found for the projects that were aimed at reaching changed work behaviour. These results are depicted in Figure 2. Although the coefficients are slightly different, the total picture is the same: the model fits, and there is a relatively strong relationship between the latent variables "Project implementation" and "Project effectiveness", and a weak relationship between "Project definition" and "Project effectiveness".

The results in Figure 3 refer to training projects that were aimed at realising organisational change. The results indicate that this model does not fit, which means that the coefficients in Figure 3 have no substantial meaning. The fact that this model does not fit is probably caused by the kind of results at which these training projects are aimed. Organisational change is in most cases not noticeable until the longer term. Furthermore, training organisations have less influence on achieving changes in their client organisations than on reaching learning results. Projects aimed at achieving changed work behaviour take a middle position in this respect. Obviously, the client organisation itself is responsible for the transfer of training results into job performance. But because of the performance improvement strategies used by many client organisations, and the emphasis on results-oriented training and transfer (Broad and Newstrom, 1992; Swanson and Holton, 1999; Phillips, 1994, 1997), training organisations are asked to design training projects in such a way that they facilitate transfer of learning results.

For training organisations it is, furthermore, interesting to scrutinise the relationships between items and partial items, to check which variables explain most of the variance in the training effectiveness. These are the factors that matter most to managers of training organisations and that are most relevant for quality management within the training organisation.

The total amount of variance in the effectiveness of training programs that is explained by the complete model is reasonable, but limited. From an integrated conceptual perspective this is not remarkable, since there are many other factors that influence training effectiveness but are not included in the evaluation system, such as personal factors, training program factors, organisational factors, and transfer conditions (Baldwin and Ford, 1988). The approach used during the development of the training program (Kessels, 1993) is also important, but not included in the evaluation system. Including these other factors would raise the explanatory power of the model, but decrease the practical applicability of the system, as it would lead to a more complex evaluation instrument, certainly if the instrument were adapted to contextual variations in training purposes and content.

5. Discussion

To conclude, a number of discussion issues will be raised. These issues relate to the present evaluation system, and the present state-of-affairs within the research program.

First, it can be asked how training organisations can make better use of the research results. Although the evaluation system is meant as an instrument for quality management at association level, there is a strong need within the individual training organisations to use the research data at organisational level. That is possible to a certain extent, since the training organisations receive research reports in which their own results are contrasted with the average results of the association. The average of the association serves as a reference criterion for the quality policy of the individual training organisations. The most direct feedback, however, can be obtained by providing the training organisations with the primary data for the evaluated training projects. That would imply, however, that the anonymous character of the present evaluation procedure has to be broken, which requires at least that respondents are asked to allow their data to be passed on to the training organisations. To anticipate this change, such a question was already inserted in the second, slightly changed, questionnaire. The result was that only 5 per cent of the respondents would refuse this. This group cannot be neglected, however, and from an integrity perspective it would be unethical to still send the data of these respondents to the training organisations. In any case, the wishes of respondents who reject the proposition to pass their data on to the training organisations have to be respected. If, based on the results of the respective question, it is decided to pass the data set on to the training organisations, it should be noted that respondents may start to evaluate the training project differently in the future. The implications of this changed perspective for the respondent should be reviewed before definitely deciding to make the data available to the training organisations.

Second, the question arises whether a standard for a required level of quality should be introduced within the association. Up to now, only the magnitude of the total satisfaction, the attainment of the intended objectives, and the client satisfaction with various aspects of the training programs are observed; no standards are set for the required level of quality. It is conceivable to develop a score profile which consists of the values on the most crucial variables, and to determine standards for this profile. Examples of variables which are relevant in this respect are the total satisfaction indicator, expectation realisation, objective realisation, success attribution, condition-realisation consistency, condition-realisation satisfaction, and satisfaction with the contacts with employees of the training organisation. In view of the present research results, an average of 7.0 (on a ten-point scale, with ten being the maximum degree of quality) on the profile seems a reasonable standard for the minimal degree of required quality. If this standard is agreed, the values on the five-point scales need to be multiplied by a factor of two to create comparable scales. Furthermore, it could be agreed that a training organisation should not score 5.0 or less on any variable in the profile, and that, for instance, two sixes are acceptable only if they are compensated by higher scores, so that the total average remains 7.0 or higher. At the level of association management, meetings should be held with those organisations that do not meet the standard, to analyse possible causes of the low ratings. Such meetings could eventually lead to further agreements with the training organisation about measures that could lead to better performance during the next evaluation.
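
The proposed standard can be made concrete in a small sketch: the function below rescales five-point items by a factor of two, checks that the profile average is at least 7.0 and that no single variable scores 5.0 or lower. The variable names and example scores are hypothetical.

```python
def meets_standard(profile_ten_point, profile_five_point):
    """Each argument is a dict of variable name -> score on the respective scale."""
    scores = dict(profile_ten_point)
    # Rescale five-point items to the ten-point scale by a factor of two.
    scores.update({name: value * 2 for name, value in profile_five_point.items()})

    average_ok = sum(scores.values()) / len(scores) >= 7.0   # profile average >= 7.0
    no_low_score = all(value > 5.0 for value in scores.values())  # no score of 5.0 or less
    return average_ok and no_low_score

# Illustrative example: one organisation's score profile.
ten_point  = {"total_satisfaction": 7.4}
five_point = {"expectation_realisation": 3.8, "objective_realisation": 3.6,
              "success_attribution": 3.5, "condition_realisation_consistency": 4.0,
              "condition_realisation_satisfaction": 3.9, "contact_satisfaction": 4.1}
print(meets_standard(ten_point, five_point))
```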

Furthermore, there are various research-technical issues that need to be considered in optimising the evaluation system. For instance, it may be necessary to make a distinction between large and small training organisations. As noted already, the number of clients of training organisations varies significantly. Some of the large training organisations object to the fact that their evaluation results are pooled with those of the small organisations. Consultants of small organisations have much more intensive and personal relationships with their clients, and the expectation is that these clients evaluate these consultants much more positively; the personal contact would have a strong influence on the evaluation of the quality of the training project. It is not certain, however, whether this is an undesired rating effect. If clients who evaluate training projects of small training organisations are more positive than those who evaluate projects of large organisations, the question can also be whether the quality of the small training organisations is simply better than that of the large organisations (according to the perception of the client, of course). A conclusive answer to this question cannot be given at this stage of the research program, but it can be found when data of clients who are involved with both large and small training organisations are compared. Whether or not the relationships between large and small training organisations and their clients influence the rating of the training programs, it may be interesting to compare the results for different categories of training organisation. Size of organisation is one of the criteria on which they can be compared, but their program portfolio could also be an interesting criterion.

Another research-technical issue relates to the question whether the research unit is covered well enough by the client list. As stated before, the client list is composed of projects for which the training organisation has invoiced the client organisation in a specified half year. But training organisations sometimes conduct large training projects that take longer than half a year, for which clients receive invoices more than once; each invoice then covers only part of the training project. The intention, however, is to evaluate the complete training project at once. The question then is whether the financial administration is the best starting point for composing the client lists. Possibly, the present client lists contain a number of training activities that are part of a larger training project. The project administration would perhaps be a better starting point. The disadvantage of using project administrations, however, is that these vary considerably by training organisation, so that verification of the completeness of the client lists is impeded, if not made impossible. That is different for the financial administration, which has to comply with legal requirements that make it possible to verify the completeness of the client lists conclusively. It will be necessary to analyse to what degree the present client lists contain training activities that are actually part of larger projects. This analysis will reveal the extent to which the evaluations that have already been carried out contain training activities that should have been evaluated at a higher aggregation level: the level of the complete training project.

In the future, the training organisations can be asked to indicate whether the projects that are invoiced, and included in the client lists, are complete projects, or whether they are in fact a part of a larger project that should be evaluated as a whole. For long-term projects, the evaluation procedure should perhaps be different, and training organisations could be asked to indicate such projects in the client lists.

A next research-technical issue concerns the evaluation term. The question is whether the evaluation system sufficiently accommodates the variation in training objectives and the related effects of the training projects. Training projects aimed at reaching learning results can, in principle, result in effects in the short term, but the contrary is the case for projects aimed at organisational change, which in most cases will result in effects in the long term. Nevertheless, all training projects are evaluated over a fixed period. The rating of the training projects by the respondents takes place about five months after the completion of the half year that is being evaluated, and the ratings concern the projects for which, during that half year, clients were invoiced. That means that between the moment of invoicing and the rating a minimum of five and a maximum of 11 months elapsed. More important, however, is the time between the actual training project and the rating; since the training organisations employ different terms of settlement, little can be said about that. It seems worthwhile, though, to have a closer look at the evaluation term, and to analyse whether it sufficiently takes into account the variation in the categories of training projects, the moment at which they are rated, and the term in which effects can be expected.

Another issue relates to the judgement of the client. On what are the judgements based? At this point of the research program, there is no information about this. The first additional study will be aimed at this question. First of all, it needs to be analysed whether an evaluation of the training project by client organisations took place at all, and if so, at what level. Second, some criterion-related training project evaluations will be conducted. These evaluations will be used to test the concurrent validity between the standardised training evaluation and the criterion-related training evaluation.

Third, it should be analysed whether different categories of organisations, with different categories of clients, need to be distinguished. From a professional perspective, certain organisations may have more critical clients and others less critical clients. If the level of expectation is not constant, a fixed reference point is missing for judging the quality of training projects, and the judgement about the training project is then no solid indicator of the real quality of the project. A training project of average quality, for instance, will be rated as insufficient by critical clients and as good by less critical clients. It is necessary to determine whether this problem really exists, and if so, what consequences this should have for the evaluation system.

Finally, new advancements in human resource development should be included in evaluating the effectiveness of training programs. Training is an expensive intervention that is being integrated in performance improvement strategies. Since Nadler's (1980) definition of HRD many changes have taken place. McLagan (1983, 1989) used the concept for developing competency profiles for HRD specialists. Pace et al. (1991, pp. 6-7) conceptualised HRD as a set of integrated roles. DeSimone and Harris (1994) saw HRD as a set of activities. These authors defined HRD in terms of learning experiences, professional fields, roles and activities. Rothwell and Kazanas (1994, p. 2) applied insights from strategic planning within HRD and focused on strategic HRD. Gilley and Maycunich (1998) went a step further and stressed the importance of integrated strategic HRD. Walton (1999) moved the field from a strict performance orientation (Swanson, 1994; Rummler and Brache, 1995) to a transferable skills orientation. One step further is to perceive HRD as a field that contributes to competence development (Mulder, 2000). This perspective builds on a line of research that started in the 1980s, in which basic skills were seen as performance requirements (Nijhof and Mulder, 1986, 1989). Training is still an important part of HRD but, as the developments in the field reflect, it is just one part, next to other ways in which transferable skills or competencies can be developed. Therefore, the training evaluation system described in this contribution needs to be adapted to these new developments. The basic structure, however, remains applicable.


Figure 1 Lisrel model for projects aimed at achieving learning results


Figure 2 Lisrel model for projects aimed at achieving change in job performance


Figure 3 Lisrel model for projects aimed at achieving organisational change


References

Baldwin, T.T, Ford, J.K, 1988, "Transfer of training: a review and directions for future research", Personnel Psychology, 41, 63-105.

Basarab, D.J, Root, D.K, 1992, The Training Evaluation Process: A Practical Approach to Evaluating Corporate Training Programs, Kluwer Academic Publishers, Boston, MA.

Bramley, P. , 1991, Evaluating Training Effectiveness: Translating Theory into Practice, McGraw-Hill, London.

Brinkerhoff, R.O, 1987, Achieving Results from Training: How to Evaluate Human Resource Development to Strengthen Programs and Increase Impact, Jossey-Bass, San Francisco, CA.

Broad, M.L. , Newstrom, J.W, 1992, Transfer of Training. Action-Packed Strategies to Ensure High Payoff from Training Investments, Addison-Wesley, Reading, MA.

Camp, R.R, Blanchard, P.N, Huszczo, G.E., 1986, Toward a More Organizationally Effective Training Strategy and Practice, Prentice-Hall, Englewood Cliffs, NJ.

DeSimone, R.L, Harris, D.M, 1994, Human Resource Development, 2nd ed., The Dryden Press, Fort Worth, TX.

Gaines-Robinson, D.G, Robinson, J.C., 1989, Training for Impact. How to Link Training to Business Needs and Measure the Results, Jossey-Bass, San Francisco, CA.

Gilley, J.W, Eggland, S.A, 1989, Principles of Human Resource Development, Addison-Wesley, Reading, MA.

Gilley, J.W, Maycunich, A, 1998, Strategically Integrated HRD. Partnering to Maximize Organizational Performance, Addison-Wesley, Reading, MA.

Hamblin, A.C, 1974, Evaluation and Control of Training, McGraw-Hill, New York, NY.

Kessels, J.W.M., 1993, Towards Design Standards for Curriculum Consistency in Corporate Education, Faculty of Educational Science and Technology, University of Twente, Enschede.

Kirkpatrick, D.L, 1994, Evaluating Training Programs: The Four Levels, Berrett-Koehler, San Francisco, CA.

McLagan, P.A, 1983, Models for Excellence. The Conclusions and Recommendations of the ASTD Training and Development Competency Study, American Society for Training and Development, Washington, DC.

McLagan, P.A., 1989, Models for HRD Practice. The Models, American Society for Training and Development, Alexandria, VA.

May, L.S, Moore, C.A, Zammit, S.J, 1987, Evaluating Business and Industry Training, Kluwer Academic Publishers, Boston, MA.

Mulder, M, 1996, "Customer satisfaction and training program quality'", Holton III, E.F, Academy of Human Resource Development 1996 Conference Proceedings, AHRD, Austin, TX, 688-95.

Mulder, M, 2000, Competentieontwikkeling in bedrijf en onderwijs (inaugural speech), Wageningen University and Research Center, Wageningen.

Mulder, M, van Ginkel, K, 1995, "Quality evaluation of training programs", ECER 95. European Conference on Educational Research, University of Bath, Bath, 79.

Nadler, L., 1980, Corporate Human Resources Development: A Management Tool, ASTD/Van Nostrand Reinhold Company, Madison, WI/ New York, NY.

Nijhof, W.J, Mulder, M. , 1986, Basisvaardigheden in het beroepsonderwijs, Stichting voor Onderzoek van het Onderwijs, `s-Gravenhage.

Nijhof, W.J, Mulder, M, 1989, "Performance requirements analysis and determination", Bainbridge, L, Ruiz Quintanilla, S.A., Developing Skills with Information Technology, John Wiley and Sons, Chichester, 131-52.

Pace, R.W., Smith, P.C, Mills, G.E, 1991, Human Resource Development: The Field, Prentice-Hall, Englewood Cliffs, NJ.

Phillips, J.J, 1991, Handbook of Training Evaluation and Measurement Methods, 2nd ed., Gulf Publishing Company, Houston, TX.

Phillips, J.J, 1994, Measuring Return on Investment: 18 Case Studies from the Real World of Training - Volume I, American Society for Training and Development, Alexandria, VA.

Phillips, J.J, 1997, Measuring Return on Investment: 17 Case Studies from the Real World of Learning - Volume II, American Society for Training and Development, Alexandria, VA.

Rothwell, W.J, Kazanas, H.C, 1994, Human Resource Development: A Strategic Approach, HRD, Amherst, MA.

Rummler, G.A., Brache, A.P, 1995, Improving Performance. How to Manage the White Space on the Organization Chart, 2nd ed, Jossey-Bass Publishers, San Francisco, CA.

Swanson, R.A., "1994", Analysis for Improving Performance. Tools for Diagnosing Organisations and Documenting Workplace Expertise, Berrett-Koehler, San Francisco, CA.

Swanson, R.A, Holton III, E.F, "1999", Results. How to Assess Performance, Learning, and Perceptions in Organisations, Berrett-Koehler, San Francisco, CA.

Walton, J, "1999", Strategic Human Resource Development, Pearson Education Limited, Harlow.

