Bureau international d'éducation

Articles on assessment


The research report Initial findings from the IEA international civic and citizenship education study (2010, Amsterdam: International Association for the Evaluation of Educational Achievement), by Schulz et al., is the first full report on the IEA international civic and citizenship education study (ICCS). The IEA is probably best known for its Trends in International Mathematics and Science Study (TIMSS), an international large-scale assessment of learning outcomes (ILSA) in mathematics and science, as well as for the Progress in International Reading Literacy Study (PIRLS). These ILSAs are similar to the Organisation for Economic Co-operation and Development’s (OECD) Programme for International Student Assessment (PISA), with the important difference that, while PISA takes a life-skills approach and focuses on 15-year-old learners, TIMSS is grade-based, assessing learners in grades 4 and 8, and is built on a curriculum analysis of the participating countries.

This study is of particular interest as it focuses on ‘soft’ learning outcomes. But it is not new: ICCS was preceded, in 1999, by the CIVED (civic education) study, in which 28 countries participated and, in 1971, by the IEA’s first civic and citizenship education study (which has no known acronym), in which nine countries participated. The ICCS itself saw the participation of over “140,000 [grade 8 or equivalent] students and 62,000 teachers in over 5,200 schools” in 38 countries (p. 3). It was conducted by a consortium consisting of the Australian Council for Educational Research (ACER), the UK-based National Foundation for Educational Research (NFER) and the Laboratorio di Pedagogia Sperimentale (Laboratory of Experimental Pedagogy) at Roma Tre University in Rome, Italy, which worked in close collaboration with the IEA and its various bodies, including the Hamburg-based Data Processing and Research Center (DPC).

The ICCS “studied the ways in which countries prepare their young people to undertake their role as citizens” through education, thereby investigating not only learners’ knowledge and understanding of civics and citizenship, but also their “attitudes, perceptions, and activities” (p. 9) as relevant to this domain. The ICCS formulated six research questions (ibid.), pertaining to:

1. Variations in civic knowledge;

2. Changes in content knowledge since 1999;

3. Student interest in engaging in public and political life and their disposition to do so;

4. Perceptions of threats to civil society;

5. Features of education systems, schools, and classrooms related to civic and citizenship education; and

6. Aspects of student background related to the outcomes of civic and citizenship education.

As the research questions indicate, the ICCS looked not only into civic knowledge learning outcomes, which are relevant to research questions 1 and 2, but also at learner attitudes and behaviour (question 3) and perceptions (question 4). It is important to emphasise that different conceptualisations of civic and citizenship education were encountered in the participating countries: in some it existed as a stand-alone subject, in others as a cross-curricular theme. Topics covered also ranged widely, and these different approaches were themselves a topic of study (question 5). The study also looked at variables associated with the various outcomes relevant to questions 1 through 4, concerning students’ family and socio-economic background. Finally, the study defined civic knowledge broadly, as being “concerned with knowing about and understanding the elements and concepts of both citizenship and traditional civics” (ibid.).

The ICCS assessment framework consisted of two main elements: one pertaining to civic and citizenship education itself, the other to contextual factors. The substantive framework was organised around three dimensions: content, an affective-behavioural dimension, and a cognitive dimension, the last two of which referred to learning outcomes. These two dimensions were set against four content domains, to arrive at weightings for the specification of assessment items: civic society and systems; civic principles; civic participation; and civic identities (p. 17). The content domains were in turn divided into sub-domains. The instruments used with the students included an 80-item international cognitive test, in which 75% of the items addressed reasoning and analysis and the remainder knowledge; a 40-minute questionnaire measuring perceptions and background; and a set of regional instruments. Additional questionnaires were administered to teachers and school principals.

The ICCS found the following. With reference to countries’ approaches to civic and citizenship education (question 5), “the findings showed no agreed approach” (p. 30): civic and citizenship education covered a wide range of topics and was delivered in a range of ways. In terms of civic knowledge (questions 1 and 2), the authors found significant differences in learning outcomes by country, which should probably be seen in the light of these widely differing approaches. Female learners performed better overall, and a decline in content knowledge between the 1999 CIVED study and 2009 was found in the countries for which both data points were available (except Slovenia). It is interesting to note that the highest-scoring countries, Finland and Korea, are also among the highest-achieving systems in PISA 2009. The study also found that parents’ occupational status was strongly and consistently associated with learners’ civic knowledge.

From rhetoric to reality: the problematic nature and assessment of children and young people’s social and emotional learning, by Watson & Emery (British Educational Research Journal, 2010, Vol. 36, No. 5, pp. 767-786), is a timely contribution to the debate on how to measure non-cognitive learning outcomes. The authors argue that, while there is consensus on the importance of social and emotional learning (SEL), there is little or no consensus as to what its learning outcomes should be (p. 767). Watson & Emery restate the widely accepted understanding “that children and young people are more than just learners of academic knowledge; [also] their wider achievements as individuals who contribute to society, should be regarded” (ibid.). This is relevant to the debate on peace education learning outcomes, as SEL seems to be central to many manifestations of it. It is important to note that the authors add that there is also little consensus as to how SEL outcomes “can be taught / learnt and assessed / measured” (ibid.).

The article essentially consists of a theoretical exploration of this area, offering a conceptual model grounded in empirical work in Wales. It first sets out the general recognition, in policy and practice, of non-cognitive learning. Watson & Emery indicate that the significance of social and emotional aspects of learning is recognised in the US, Europe and the UK (p. 770) and outline some initiatives implemented in this field. Nevertheless, they argue that the academic community is struggling with significant confusion in this area (p. 771). They note that, in the scientific literature, related terms are often used interchangeably, such as “soft skills, emotional intelligence, emotional literacy, … employability skills” and several others (ibid.). In response, Watson & Emery clarify some key concepts relevant to SEL, e.g. dispositions and traits (pp. 772, 774): a disposition is defined as an inclination to act, which the authors relate to knowledge, whereas traits are seen as innate.

Further on, Watson & Emery discuss how such concepts can be measured. They illustrate how some practitioners believe that methods to assess them should include “observational assessment, portfolios, video evidence, diaries and journals” (p. 775), which is to say, non-standardised methods. On p. 776, they present their framework, which places the individual learner between contextual factors on the one hand and performed behaviour and skills on the other, and highlights the key roles of motivation and development. In further discussing the evaluation of SEL outcomes, the authors distinguish between educational measurement and assessment (p. 779): while measurement assigns a numerical value to an observed learning outcome, they argue, assessment need not involve a numerical value. The authors conclude that SEL deserves a non-conventional type of assessment, referring (p. 780, quoted from James & Brown, 2005: 19) to a new:

… methodology for assessment, perhaps drawing more on ethnographic and peer-review approaches … appreciation and connoisseurship … and advocacy, testimony and judgement.


Lee, J. & Shute, V.J. (2010). Personal and social-contextual factors in K-12 academic performance: an integrative perspective on student learning. Educational Psychologist, Vol. 45, No. 3, pp. 185-202.

Schulz, W., Ainley, J., Fraillon, J., Kerr, D. & Losito, B. (2010). Initial findings from the IEA international civic and citizenship education study. Amsterdam: International Association for the Evaluation of Educational Achievement.

Watson, D.L. & Emery, C. (2010). From rhetoric to reality: the problematic nature and assessment of children and young people’s social and emotional learning. British Educational Research Journal, Vol. 36, No. 5, pp. 767-786.
