Tuesday, 21 June 2016

The role of private school chains in Africa


The trend towards private schooling has largely been a phenomenon of industrialised-country education systems, starting with charter schools in the USA and spreading to other countries such as England, where Government policy announced in 2016 is to convert all schools into ‘academies’ run by so-called multi-academy chains. In these systems the commercial returns from privatisation are often indirect, being expressed through the letting of contracts for support services and the like.
In developing countries, however, privatisation is often directly commercially driven: for-profit companies set up or take over schools and charge parents for the education provided. The following commentary looks at the case of one corporation that operates in several African countries and makes claims for the superiority of the education it provides. Specifically, Bridge International Academies (BIA) has recently published a report comparing its schools in Kenya with neighbouring State schools and claiming greater learning gains. The report can be viewed at:
A detailed commentary and critique of this report has been compiled by Graham Brown-Martin and can be accessed at the following site.

Brown-Martin makes reference to some of my own remarks on the report, and what follows is a more detailed quantitative critique of the report.
The study sampled 42 Bridge Academy schools and 42 ‘geographically close’ State schools, and carried out testing in the early grades of primary schooling on approximately 2,700 pupils, who were followed up one year later with just under half lost through attrition. On the basis of their analyses the report claims:
“a Bridge effect of .31 standard deviations in English. This is equivalent to 64 additional days of schooling in one academic year. In maths, the Bridge effect is .09 standard deviations, or 26 additional days of schooling in one academic year.”

Such effect sizes are large, but there are serious problems with the analysis carried out.

First, and most importantly, parents pay to send their children to Bridge schools: $6 a month per student, which represents a large percentage of the income of poor parents with several children, where the daily income per household can fall below $2 a day. So some adjustment for 'ability to pay' is needed, yet this is not attempted, presumably because such data are very difficult to obtain. Presumably those with higher incomes can also support out-of-school learning. Does this go on?
Instead the report uses factors such as whether the family has electricity or a TV, but these are relatively poor surrogates for income. Yet the report makes no mention of this problem.
Some of the State schools approached to participate refused and were replaced by others, but there is no comparison of the characteristics of the included schools with those of all non-Bridge schools. Likewise we know little about the students who left the study (relatively more from the Bridge schools) after the initial assessment. Were these pupils who were 'failing'? For example, did parents with children ‘failing’ at Bridge schools withdraw them more often, or did parents who could barely afford the school fee tend to withdraw their children more often? What is the policy of Bridge schools towards pupils who fall behind? Are they retained a year, or otherwise treated so that they are not included in the follow-up? Such policies, if different in Bridge and State schools, would lead to potentially large biases. To be fair, section VII does look at whether differential attrition could affect results, suggests that it might, and recommends further work.

In these circumstances one might expect to see, for example, some kind of propensity score analysis, whereby a model predicting the propensity to leave, using all available data including school characteristics, would yield individual probabilities of leaving that can be used as weights in the statistical modelling of outcomes. Without such an analysis, apart from the other problems, it is difficult to place much reliance on the results.
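As an illustration of what such an analysis might look like, here is a minimal sketch of inverse-probability weighting for attrition on simulated data. All numbers, variable names and relationships are hypothetical, invented for the sketch; none come from the report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2700                                  # roughly the study's initial sample
bridge = rng.integers(0, 2, n)            # 1 = Bridge pupil, 0 = State pupil
baseline = rng.normal(0.2 * bridge, 1.0)  # Bridge pupils start higher (assumed)

# Hypothetical attrition: lower scorers are more likely to leave the study
p_stay = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * baseline - 0.3 * bridge)))
stayed = rng.random(n) < p_stay
followup = baseline + 0.3 * bridge + rng.normal(0, 1.0, n)

# Step 1: model each pupil's propensity to remain in the study
X = np.column_stack([baseline, bridge])
ps = LogisticRegression().fit(X, stayed).predict_proba(X[stayed])[:, 1]
w = 1.0 / ps                              # inverse-probability weights

# Step 2: weighted outcome regression on the non-attrited sample only
D = np.column_stack([np.ones(stayed.sum()), baseline[stayed], bridge[stayed]])
sw = np.sqrt(w)
coef, *_ = np.linalg.lstsq(D * sw[:, None], followup[stayed] * sw, rcond=None)
print(f"attrition-weighted 'Bridge effect': {coef[2]:.2f}")
```

The weighting up-weights the kinds of pupils who tended to drop out, so the retained sample better represents the original cohort. In a real analysis the propensity model would use all available pupil and school characteristics, not the two used here.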
The difference-in-differences (DiD) model is the principal model used throughout the report, yet it has serious flaws which are not mentioned. The first is that it is scale dependent: any monotone (order-preserving) transformation of the test score will produce different estimates, so at the very least different scalings need to be tried. Since all educational tests are on arbitrary scales anyway, this is an issue that needs to be addressed, especially where the treatment groups (Bridge and non-Bridge schools) have very different student test score distributions.
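This scale dependence is easy to demonstrate with simulated data. All the numbers below are invented for the illustration: the same pupils yield a very different DiD estimate once the test metric is monotonically rescaled.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Hypothetical scores: the Bridge group starts higher; both gain over the year
bridge0 = rng.normal(0.5, 1.0, n)
bridge1 = bridge0 + 0.4 + rng.normal(0, 0.5, n)
state0 = rng.normal(0.0, 1.0, n)
state1 = state0 + 0.3 + rng.normal(0, 0.5, n)

def did(b0, b1, s0, s1):
    """Simple difference-in-differences of group means."""
    return (b1.mean() - b0.mean()) - (s1.mean() - s0.mean())

raw = did(bridge0, bridge1, state0, state1)
# The identical data after a monotone (exponential) rescaling of the metric
trans = did(np.exp(bridge0), np.exp(bridge1), np.exp(state0), np.exp(state1))
print(f"DiD on raw scale: {raw:.2f}; after monotone rescaling: {trans:.2f}")
```

Because the exponential transform stretches the top of the scale, where the Bridge pupils sit, the rescaled DiD favours the Bridge group far more than the raw one, even though the pupils' rank orders are unchanged.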
Secondly, even ignoring scale dependency, the differences across time may in fact be (and usually are) a function of the initial test score, so the latter needs to be included in the model; otherwise the DiD will simply reflect the average difference. If, as is the case here, the baseline score is higher for Bridge schools, and on the scale chosen higher-scoring pupils tend to make more progress in Bridge schools, then DiD will automatically favour the Bridge schools.
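A small simulation (again with invented numbers) makes the point: when gains increase with baseline score and one group starts higher, DiD reports an 'effect' even though none exists, while a regression that conditions on the baseline score does not.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
bridge0 = rng.normal(0.5, 1.0, n)   # Bridge pupils start higher (assumed)
state0 = rng.normal(0.0, 1.0, n)

# Gains rise with baseline score; there is NO true treatment effect
gain = lambda x: 0.3 + 0.2 * x
bridge1 = bridge0 + gain(bridge0) + rng.normal(0, 0.3, n)
state1 = state0 + gain(state0) + rng.normal(0, 0.3, n)

did = (bridge1.mean() - bridge0.mean()) - (state1.mean() - state0.mean())

# Regression of follow-up on baseline plus a treatment indicator instead
y = np.concatenate([bridge1, state1])
b0 = np.concatenate([bridge0, state0])
t = np.concatenate([np.ones(n), np.zeros(n)])
D = np.column_stack([np.ones(2 * n), b0, t])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
print(f"spurious DiD 'effect': {did:.2f}; baseline-adjusted effect: {coef[2]:.2f}")
```

The DiD picks up the extra progress that higher-baseline pupils make wherever they are, and attributes it to the school type; conditioning on the baseline score removes it.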
Thirdly, the claim that DiD effectively adjusts for confounders is only true if there are no interactions between such confounders and the treatment. This point does appear to be understood, but it nevertheless remains relevant and is not properly pursued in the report.
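To illustrate with invented numbers: suppose some confounder, say household resources, is higher among Bridge families and boosts growth only in the Bridge sector. DiD then attributes that interaction to the school type.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
treat = np.repeat([1, 0], n)               # 1 = Bridge, 0 = State
conf = rng.normal(0.5 * treat, 1.0)        # confounder higher in Bridge group

# The confounder shifts baseline equally, but boosts GROWTH only for Bridge;
# there is no direct treatment effect at all
y0 = conf + rng.normal(0, 0.5, 2 * n)
y1 = y0 + 0.3 + 0.2 * conf * treat + rng.normal(0, 0.5, 2 * n)

gains = y1 - y0
did = gains[treat == 1].mean() - gains[treat == 0].mean()
print(f"DiD with a confounder-by-treatment interaction: {did:.2f}")
```

The additive effect of the confounder cancels in the gain scores, as the DiD logic promises, but its interaction with the treatment does not, and it surfaces as a spurious 'Bridge effect'.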
The report does also carry out an analysis using a regression model which, in principle, is more secure than the DiD model. This requires allowing for a nonlinear relationship with the baseline score, which is done, but also for possible interactions with covariates, which is not done. Even more important is that there needs to be an adjustment for measurement reliability, which is likely to be low for such early-years tests. If the baseline test reliability is low, say less than 0.8, then inferences will be greatly changed; the common finding in other research around this age is that the treatment effect is weakened (Goldstein, 2015).
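The direction of the bias can be sketched in a simulation, assuming a reliability of 0.7 and an invented true effect of 0.1: with an error-prone baseline measure and a higher-scoring Bridge intake, the regression under-adjusts for baseline and inflates the apparent treatment effect; a proper adjustment weakens it.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000
treat = np.repeat([1, 0], n)
true0 = rng.normal(0.5 * treat, 1.0)       # true baseline; Bridge starts higher
rel = 0.7                                  # assumed test reliability
# Observed score = true score + error, scaled so var(true)/var(obs) = rel
obs0 = true0 + rng.normal(0, np.sqrt(1 / rel - 1), 2 * n)
y1 = true0 + 0.1 * treat + rng.normal(0, 0.5, 2 * n)  # true effect is 0.1

def treat_effect(b, t, y):
    """Treatment coefficient from a regression adjusting for baseline b."""
    D = np.column_stack([np.ones(len(b)), b, t])
    return np.linalg.lstsq(D, y, rcond=None)[0][2]

print(f"using error-prone baseline: {treat_effect(obs0, treat, y1):.2f}")
print(f"using true baseline:        {treat_effect(true0, treat, y1):.2f}")
```

Measurement error attenuates the baseline slope, so part of the groups' pre-existing difference leaks into the treatment coefficient; adjusting for reliability (here, using the error-free scores) pulls the estimate back towards the true value.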
Table 15 is especially difficult to interpret. It essentially looks at what happens to the lower-achieving group at time 1, defined using a common cut-off score. Yet this group is on average even lower achieving in the control schools than in the Bridge schools, so it will be easier for pupils in this group in Bridge schools to move out of the category. The evidence from these comparisons is therefore even less reliable than the above analyses and can be discounted as providing anything useful. Surprisingly, this point appears to be understood, yet it is still used as 'evidence'.
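A quick simulation of this artefact (hypothetical scores, identical gains in both sectors) shows how a common cut-off flatters whichever group's low scorers sit closer to it:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 20000
bridge0 = rng.normal(0.5, 1.0, n)   # Bridge distribution sits higher (assumed)
state0 = rng.normal(0.0, 1.0, n)
cut = -0.5                          # common 'low achiever' cut-off

# Identical growth in both sectors: there is NO treatment effect
bridge1 = bridge0 + 0.3 + rng.normal(0, 0.5, n)
state1 = state0 + 0.3 + rng.normal(0, 0.5, n)

low_b = bridge0 < cut               # Bridge low group: nearer the cut-off
low_s = state0 < cut                # State low group: further below it
esc_b = (bridge1[low_b] > cut).mean()
esc_s = (state1[low_s] > cut).mean()
print(f"moved above cut-off: Bridge {esc_b:.2f} vs State {esc_s:.2f}")
```

Even with identical gains everywhere, a larger fraction of the Bridge low group crosses the threshold, simply because its members start closer to it.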
There is a section in the report on cross-country comparisons. The problem is that country assessments are fundamentally different and comparability is a very slippery concept; this section’s results are highly unreliable and really should be ignored.

In short, this report has such considerable weaknesses that its claims need to be treated with scepticism. It also appears to be authored by people associated with BIA, and hence presumably with a certain vested interest. The issue of whether private education can deliver ‘superior’ education remains an interesting and open question.
Goldstein, H. (2015), Jumping to the wrong conclusions. Significance, 12: 18–21. doi: 10.1111/j.1740-9713.2015.00853.x
Harvey Goldstein
University of Bristol
21 June 2016