What summative assessments are available?
The Project’s summative assessment materials assess performance in mathematics, as described in the Common Core State Standards for Mathematics (CCSSM). Together they provide:
- a source of tasks and tests for teachers to use for periodic summative assessment during the school year
- a model for designers of high-stakes tests, showing the range and balance of types of performance that the standards imply.
Tasks variously ask students to use their mathematics in routine or non-routine situations to design, plan, estimate, evaluate and recommend, review and critique, investigate, re-present information, explain, define concepts, and show their skills in routine technical exercises.
At High School, 40-minute forms are the easiest for teachers to use for periodic assessment without disrupting the timetable, but they are limited in how much of the curriculum they can assess. Three-hour forms offer a model for more comprehensive end-of-grade or end-of-term assessments, while 90-minute forms offer a compromise between time and breadth of content.
For Middle School, we have provided 80-minute forms, each divided into two 40-minute sections so that they can be split over two lessons. These tests can be downloaded from the Tests section of this site.
These are only examples of how balanced tests can be assembled. If you require more flexibility, the complete bank of tasks, individually cross-referenced to the standards, is available from the Tasks section of this site.
What are 'novice', 'apprentice', and 'expert' tasks?
The goal of mathematics instruction is to equip students with the tools of the trade (mathematical practices, content, and productive dispositions) so that they can successfully engage with complex problems. Thus the goals for both curricula and assessment should be to have students develop such skills and understandings, and to assess those competencies, on rich problems. However, the pathways to such competencies and their assessments are not linear. Some skills may be best motivated, or developed, in the context of working rich problems; the idea of not engaging students in rich mathematics until they have "mastered" all the relevant skills can be intellectually problematic and deadly in curricular terms. For purposes of assessment, however, it is often useful to judge students' understandings at expanding levels of complexity. Thus, mathematical skills and practices might be assessed partly in isolation, partly under scaffolded conditions, and partly when students face substantial problems without scaffolded support. We call the tasks that assess these three types of performance novice, apprentice, and expert tasks respectively.
Novice tasks are short items, each focused on a specific concept or skill, as set out in the standards. They involve only two of the mathematical practices (MP2 – reason abstractly and quantitatively; MP6 – attend to precision), and do so only at the comparatively low level that short items allow.
Apprentice tasks are substantial, often involving several aspects of mathematics, and structured so as to ensure that all students have access to the problem. Students are guided through a “ramp” of increasing challenge to enable them to show the levels of performance they have achieved. While any of the mathematical practices may be required, these tasks especially feature MP2, MP6 and two others (MP3 – construct viable arguments and critique the reasoning of others; MP7 – look for and make use of structure). Because the structure guides the students, the mathematical practices involved are at a comparatively modest level.
Expert tasks are rich tasks, each presented in a form in which it might naturally arise in mathematics, science or daily life. They require the effective use of problem solving strategies, as well as concepts and skills. Performance on these tasks indicates how well a person will be able to do and to use mathematics beyond the mathematics classroom. They demand the full range of mathematical practices, as described in the standards, including: MP1 – make sense of problems and persevere in solving them; MP4 – model with mathematics; MP5 – use appropriate tools strategically; MP8 – look for and express regularity in repeated reasoning.
It is known from research that the difficulty of a task depends on various factors, notably its:
- complexity – the number of variables, the variety and amount of data, and the number of modes in which information is presented are among the aspects of task complexity that affect its difficulty.
- unfamiliarity – non-routine tasks (those which aren’t just like the tasks one has practiced solving) are more difficult than routine exercises.
- technical demand – tasks that require more sophisticated mathematics for their solution are more difficult than those that can be solved with more elementary mathematics.
- student autonomy – guidance from an expert (usually the teacher), or from the task itself (e.g., by structuring or “scaffolding” it into successive parts) makes a task easier than if it is presented without such guidance.
Assessments of student performance need to take these factors into account. For example, they imply that, to pitch a task at a given level of difficulty, a relatively complex non-routine task that students are expected to solve without guidance needs to be technically easier than a short exercise that tests a routine skill.
The difficulty of a task is determined by trialing the task with a sample of students covering the range of ability. All assessment tasks, whether for use in the classroom or in summative tests, should be developed in this way, establishing their level of difficulty without undermining their validity as good mathematics. They can then:
- give teachers and students performance targets that live up to the standards
- provide summative feedback that assesses progress towards the standards
- complement formative assessment in lessons
- complement whatever other assessment schools and school systems use
- act as models for other test developers to consider.
(Some tasks can be designed so as to allow students at different levels to provide different (correct) responses. For such tasks, difficulty has partly to do with the level at which the student engages the task. This is similar to the situation in English or History: the same essay question might be posed to a young student or to a college graduate, with quite different “good” answers in each case.)
Thus, the Project provides a source of tasks for assembly into tests that teachers can use for periodic summative assessment during the school year and, where appropriate, in substantial end-of-year examinations. They also provide a model for designers of high-stakes tests who aim to develop valid assessments of the mathematics described in CCSSM.