
Scoring System

From CAF Network

How to evaluate: the scoring system

Why scoring?

Allocating a score to each subcriterion and criterion of the CAF model has four main aims:

  1. To provide information and give an indication of the direction and priorities to follow for improvement activities;
  2. To measure your own progress, if you carry out CAF assessments regularly; every two years is considered to be good practice according to most quality approaches;
  3. To identify good practices as indicated by high scoring for enablers and results;
  4. To help find valid partners to learn from through benchlearning (learning from each other).

The main aim of benchlearning is to compare the different ways of managing the enablers and achieving results. With regard to benchlearning however, it should be noted that comparing CAF scores carries a risk, particularly if it is done without validating the scores in a homogeneous way in different public organisations.

How to score?

The CAF provides two ways of scoring: classical scoring and fine-tuned scoring. As regards the enablers, the PDCA cycle is the foundation of both. The ‘classical’ CAF scoring gives a global appreciation of each subcriterion by indicating the PDCA phase the subcriterion has reached. The ‘fine-tuned’ CAF scoring reflects the analysis of the subcriteria in more detail. It allows you to score – for each subcriterion – all phases of the PDCA cycle (PLAN, DO, CHECK, ACT) simultaneously and independently. Comparing performance with others by means of benchmarking and benchlearning sits at the highest level of both assessment panels.

CAF classical scoring

This cumulative way of scoring helps the organisation to become more acquainted with the PDCA cycle and directs it more positively towards a quality approach.

  • In the enablers assessment panel, the organisation is effectively improving its performance when the PDCA cycle is completely in place, on the basis of learning from its reviews and from external comparison.
  • In the results assessment panel, the trend of the results and the achievement of the targets are both taken into consideration. The organisation is in a continuous improvement cycle when excellent and sustainable results are achieved, all relevant targets are met and positive comparisons with relevant organisations for the key results are made.

ENABLERS PANEL – CLASSICAL SCORING


  • Find evidence of strengths and weaknesses and choose the level that you have reached among the phases. This way of scoring is cumulative: you need to have accomplished a phase (e.g. CHECK) before reaching the next phase (e.g. ACT).
  • Give a score between 0 and 100 according to the chosen phase. The scale of 100 allows you to specify the degree of deployment and implementation of the approach.

RESULTS PANEL – CLASSICAL SCORING


  • Give a score between 0 and 100 on a scale divided into six levels. Each level takes into account both the trend and the achievement of the target simultaneously.

CAF fine-tuned scoring

The fine-tuned scoring is a way of scoring closer to reality, where, for example, many public organisations are doing things (DO) but sometimes without a clear planning phase (PLAN) or without any subsequent check on what has been achieved (CHECK). This way of scoring gives more information on the areas where improvement is most needed.

  • In the enablers panel, the emphasis lies on PDCA as a cycle (PLAN, DO, CHECK and ACT), and progress is represented as a spiral, where in each turn of the circle improvement can take place in each of the phases.
  • In the results panel, a distinction is made between the trend of the results and the achievement of the targets. This distinction clearly shows whether you have to accelerate the trend or focus on the achievement of targets.

ENABLERS PANEL – FINE-TUNED SCORING


  • Read the definition of each phase (PLAN, DO, CHECK and ACT).
  • Consider the evidence collected related to each phase, which can be illustrated by some of the examples.
  • Give a score for each phase.
  • Calculate a global score by considering the average of the scores of each phase.
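The four steps above reduce to a simple average of the phase scores. A minimal sketch (the dictionary layout and the sample values are illustrative, not part of the CAF model):

```python
# Fine-tuned enablers scoring: one score (0-100) per PDCA phase;
# the global subcriterion score is the average of the four phase scores.
phase_scores = {"PLAN": 70, "DO": 55, "CHECK": 40, "ACT": 30}  # illustrative values

global_score = sum(phase_scores.values()) / len(phase_scores)
print(global_score)  # 48.75
```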

RESULTS PANEL – FINE-TUNED SCORING


  • Consider separately the trend of your results over the last three years and the targets achieved in the last year.
  • Give a score for the trend between 0 and 100 on a scale divided into six levels.
  • Give a score for the achievement of targets for the last year between 0 and 100 on a scale divided into six levels.
  • Calculate a global score by considering the average of the scores of trends and targets.
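The results-panel calculation follows the same averaging pattern, with only two components. A minimal sketch with illustrative values:

```python
# Fine-tuned results scoring: one score (0-100) for the three-year trend,
# one for last year's target achievement; the global score is their average.
trend_score = 65    # illustrative
target_score = 45   # illustrative

global_score = (trend_score + target_score) / 2
print(global_score)  # 55.0
```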

Example 1: How to apply the fine-tuned scoring to enablers – Subcriterion 3.3

Below is possible evidence from a self-assessment for Subcriterion 3.3. It is related to the examples of the model; for each item there is an indication of the PDCA phase and whether it is a strength (+) or a weakness (−).

Example Subcriterion 3.3: Involve and empower people and support their well-being
  • 3.3.a. The organisation pays constant attention to internal communication in all directions: top-down, bottom-up and horizontal. It takes advantage of an open environment and uses different modes and tools: annual and quarterly meetings with the entire staff, and digital tools such as the intranet, e-mail and social media.
    So far there is no approach to verify the effectiveness of the communication or the perception of staff about their involvement. PLAN +, DO +, CHECK −
  • 3.3.b. Teamwork and one-to-one dialogue are other ways to improve internal dialogue and the exchange of expertise: teams and individuals are involved in cascading the strategic objectives into function/group targets, and teamwork is a standard approach for improvement projects. This approach is positively appreciated in staff surveys. However, for the moment, teamwork and improvement groups are limited to the core processes. PLAN +, DO +−, CHECK +
  • 3.3.c. Moreover, no approaches are defined to collect ideas and suggestions. PLAN −
  • 3.3.d. The organisation conducts biennial staff surveys using an approach defined six years ago that is no longer fully adequate for the recent structural and operational changes. PLAN +, DO +, CHECK −, ACT −
  • 3.3.e., 3.3.f. There is a strong attention by the management to the well-being of people, in particular creating good working conditions and taking care of the work–life balance. The initiatives were defined after a benchlearning with some important public and private organisations and staff consultation; last year some new projects were put in place, such as open-space offices and a daycare centre. PLAN +, DO +, ACT +
  • 3.3.g. For many years the organisation has addressed the problems of people with disabilities, and the buildings and facilities are designed accordingly. In the last year, a project was developed to facilitate distance working and flexitime. PLAN +, DO +, ACT +
  • 3.3.h. No initiatives are currently in place to support social and cultural initiatives or other non-financial rewards for staff, and there is no mechanism for staff to ask for them. PLAN −, DO −

The above findings have been placed in the enablers matrix below, to help elaborate a global scoring for the subcriterion. The boxes of the matrix are used as a memo pad, to pass from the evidence collected during the subcriterion assessment to a global subcriterion scoring, and to guide the discussion in the consensus meeting.

Enablers Panel Matrix

Remarks about the scoring assigned

PLAN: A positive situation for internal communication and teamwork, staff surveys, well-being and work–life balance. Nothing planned for ideas collection and support of socio-cultural initiatives. So the assessment can be placed in the ‘Some good evidence related to relevant areas’, but on the right of the column: 50 points.

DO: A positive situation for internal communication, staff surveys, well-being and work–life balance. For teamwork the implementation is not overall, because it covers only core processes. Nothing in place for socio-cultural initiatives. So the assessment can be placed in the ‘Some good evidence related to relevant areas’, but on the right of the column: 50 points.

CHECK: In general there is weak evidence of CHECK for all the points. In particular, the organisation understands that the staff survey approach needs verification to adapt it to the changes in the organisation, but nothing is in place for that. Nevertheless, there were some relevant projects in the area of well-being and work–life balance, even if without an explicit connection with the check phase. So the assessment can be placed in the ‘Some weak evidence related to some areas’: 25 points.

ACT: There is evidence of some relevant improvements for well-being, work–life balance, and people with disabilities, but they are not clearly linked to the results of a CHECK activity. So the assessment can be placed in the ‘Some good evidence related to relevant areas’, on the right of the column: 30 points.
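Putting the four phase scores above together, the global fine-tuned score for the subcriterion is their average (the text does not round the result; leaving it unrounded is an assumption here):

```python
# Phase scores from the remarks above (Example 1, Subcriterion 3.3)
scores = {"PLAN": 50, "DO": 50, "CHECK": 25, "ACT": 30}

global_score = sum(scores.values()) / len(scores)
print(global_score)  # 38.75
```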

Example 2: How to apply the fine-tuned scoring to results – subcriterion 7.2

Below is possible evidence from a self-assessment of an organisation for subcriterion 7.2. The evidence is summarised under the two subtitles ‘General results’ and ‘Individual performance and skill development’. According to the scoring panel, there is an indication of trends and targets and, for each one, whether it is a strength (+) or a weakness (−).

Example subcriterion 7.2: Performance measurements

Synthesis of the evidence that emerged from the self-assessment

The organisation measures a large set of people-performance indicators, summarised on the dashboard in the quarterly and annual reports. The 2018 results can be summarised as follows, following the scheme of the CAF model; for more details refer to the 2018 Annual Report.

General results

The indicators refer to: absenteeism, sickness, involvement in improvement activities, complaints (number and response time) and voluntary participation in social activities and initiatives. For more than 60% of them we can see a positive trend in the last three years, while only the participation in social activities shows a small decrease in 2018. No targets are defined for the indicators. TREND + TARGET −

Individual performance and skill development

We measure hours of training per person, the percentage of individual/group targets achieved and the overall competence gap. For all the indicators, specific targets are defined, usually with at least a 10% increase year on year. Overall, 70% of the indicators show a positive trend, while there is a small decrease in competence coverage (an increase in the gap). For the targets, fewer than 50% are reached; training indicators, and in particular the competence gap, did not reach their targets. TREND + TARGET −

The above findings have been transformed into a score placed in the results matrix below, to help elaborate a global scoring for the subcriterion to be discussed during the consensus meeting.

Remarks about the scoring assigned

TRENDS: A large part of the results shows sustained progress. Only two indicators show a negative trend (in particular competence coverage). Both the assessments of general results and individual performance can be placed in the column ‘Sustained progress’ with an overall 60 points.

TARGETS: There are no targets for general results indicators (column ‘No or anecdotal information’), and individual performance reached less than 50% of targets (column ‘Few targets are met’) with an overall 25 points.
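Combining the two scores above, the global fine-tuned score for subcriterion 7.2 is their average:

```python
# Scores from the remarks above (Example 2, subcriterion 7.2)
trend_score = 60
target_score = 25

global_score = (trend_score + target_score) / 2
print(global_score)  # 42.5
```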