1. ‘Maths Mastery’
Mathematics Mastery is based on a simple approach to teaching mathematics originally developed by the Singapore Ministry of Education. The Mathematics Mastery model is distinctive in two ways. First, it aims to give pupils a thorough understanding of mathematical concepts, rather than a set of techniques or routines for getting to the right answer. Mathematics Mastery shows that problems can be solved in a variety of ways, and ensures that pupils learn in sequence: first by manipulating real objects, then by drawing pictorial representations, and ultimately by using mathematical symbols. Second, Mathematics Mastery uses a ‘mastery’ approach, in which teachers do not move on until all pupils have acquired a basic understanding of the current topic. The course is also designed so that more able pupils can explore each topic in depth, and therefore remain engaged. The Institute of Education will conduct an independent evaluation using a rigorous design and methods. The evaluation focuses on establishing an unbiased estimate of the impact of the intervention on short-term academic outcomes (after one year of ‘treatment’) and long-term academic outcomes, as measured by performance on mathematics tests. We are testing the Maths Mastery programme in both primary and secondary schools.
Maths Mastery Primary School study protocol: Primary_Protocol
Maths Mastery Secondary School study protocol: Secondary_School_Protocol
Secondary school trial registration can be found at: http://controlled-trials.com/ISRCTN70922140/
2. Chess in Schools
The intervention will investigate whether teaching primary school pupils to play chess for one hour a week over 30 weeks (during normal school time) boosts academic achievement. The intervention will target Year 5 pupils in purposefully selected areas of England. The Institute of Education will conduct an independent evaluation using a clustered randomised controlled trial (RCT). The evaluation focuses on establishing an unbiased estimate of the impact of the intervention on academic achievement tests (maths, English and science) one year after the intervention has finished. This is combined with an integrated process evaluation, which has the scope to provide valuable insight into why the intervention works should it prove successful (or, should it not appear successful, why this might be).
Chess in Schools study protocol:
The Chess Intervention RCT looks quite interesting!
It would be interesting to see how the design constructs or selects its comparison group for the counterfactual. In addition, what would be the theory-of-change argument connecting “learning chess” to “English test score outcomes”, for example?
Thanks Daniel.
The project is an RCT – so we are randomly selecting 50 schools for treatment and 50 schools for control. So our comparison group will be the 50 control schools – and we will be able to get an unbiased estimate of the counterfactual.
The idea of looking at English test scores was to see if there were spill-over effects into other subjects. Do we find an isolated impact for maths, or on a broader range of outcomes? Of course, maths is going to be our main focus as the primary outcome!
John,
I agree with the random selection principle. However, I still have some concerns about “group equivalence” between the treatment and comparison groups. This becomes more serious if separate randomisation in selecting each group (50 treatment, 50 comparison) introduces significant sample heterogeneity (non-equivalence in the underlying school/pupil characteristics between the two samples).
One way to go about it is to begin with a “random selection” of 100 schools in the first instance, followed by a “random assignment” of 50 schools to “the treatment group”, with the other 50 schools going into “the comparison group” in the process.
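The two-stage procedure described here (random selection into the trial, then random assignment within it) can be sketched in a few lines of Python. The school IDs, population size and seed below are purely illustrative, not trial data:

```python
import random

# Hypothetical population of 450 school IDs (illustration only).
population = [f"school_{i:03d}" for i in range(450)]

rng = random.Random(42)  # fixed seed so the sketch is reproducible

# Stage 1: random *selection* of 100 schools into the trial
# (supports external validity).
selected = rng.sample(population, 100)

# Stage 2: random *assignment* of the selected schools, 50 to the
# treatment group and 50 to the comparison group (supports internal
# validity / group equivalence).
rng.shuffle(selected)
treatment, comparison = selected[:50], selected[50:]

print(len(treatment), len(comparison))  # 50 50
```

Because assignment is randomised within the selected sample, the two groups are equivalent in expectation on both observed and unobserved characteristics.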
What you are really describing is the distinction between ‘internal’ and ‘external’ validity.
Random assignment to treatment and control will give you internal validity. Random selection of schools into the trial will give you external validity (i.e. results can be generalised to the population).
For the latter, we initially had a well-defined population of 450 schools in 11 geographic areas. The Chess in Schools team then recruited 100 of these schools, giving a sampling fraction of roughly 22 per cent. What we have done is to compare the recruited schools with the population of schools in terms of observable characteristics, and we found little difference. This is of course not perfect, but by the standards of RCTs in the social sciences, this consideration of external validity is more than is done for most interventions we have seen.
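As a rough illustration of the balance check described here (comparing recruited schools with the wider population on an observable characteristic), one could compute a standardised mean difference. All numbers below are synthetic, and the characteristic (percentage of pupils eligible for free school meals) is just an example:

```python
import random
from statistics import mean, stdev

rng = random.Random(1)

# Synthetic observable characteristic (e.g. % of pupils eligible for
# free school meals) for a hypothetical population of 450 schools.
population = [rng.gauss(25.0, 8.0) for _ in range(450)]

# 100 schools recruited into the trial (here drawn at random; in
# practice recruitment is voluntary, which is why the check matters).
idx = set(rng.sample(range(450), 100))
recruited = [population[i] for i in idx]
not_recruited = [population[i] for i in range(450) if i not in idx]

def std_diff(a, b):
    """Standardised mean difference between two samples."""
    pooled_sd = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

d = std_diff(recruited, not_recruited)
# A common rule of thumb: |d| below about 0.1 suggests the groups are
# well balanced on this characteristic.
print(round(d, 3))
```

A small standardised difference on the observables supports generalising the results to the population, though, as noted above, it cannot rule out differences on unobserved characteristics.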