Has the “data revolution” thrown learning outcomes off the bus?

Helen Abadzi


Imagine that you live in a poor area, get sick, and go to a hospital. But the hospital is only interested in measuring your vital signs. Staff enter your anonymized data in the computer, run statistical analyses, write policy papers, and give seminars on your disease. But treatment? The hospital offers band-aids and painkillers. To cure your illness you must find someone outside the hospital. And it's up to you to find that person! Would you get better? And would the country's morbidity rates improve?

In the health sector this scenario would be bizarre, but in the education sector it is business as usual. A recent mission to a low-income African country served as a reminder.

A donor had financed EGRA (Early Grade Reading Assessment). As often happens, the results showed that many students could hardly decode or make sense of text. Despite abundant donor assistance, textbooks had not been issued for at least a decade. Poorer students had simply never practiced enough to automatize reading! But it was impossible to discuss this necessary learning process for long. Officials quickly moved away from specifics and into systemic problems of teacher motivation, school management, and community involvement. They were obviously echoing policies about systems improvement promoted by donors, such as the World Bank.

Countries obviously need education systems that work. But the donors’ focus on abstract variables has created perverse incentives. The story is complex, but an outline is below.

For education, donor agencies hire staff with degrees that rarely include coursework on learning. Predictably, they formulate "theories of change" that reflect their training and interests. The World Bank's 2020 learning strategy theorized that school quality improves through autonomy, accountability, and assessment (AAA): school managers must become empowered and autonomously figure out how best to teach their students. The financiers will measure results and hold implementers accountable.

Measuring learning outcomes is very sensible, but it is also expensive, complex, and time-consuming. Psychometricians must be hired. Detailed workshops are needed to train staff on measurement concepts, procedures, results, and implications. Results ought to provide feedback for systemic improvement, but findings are rarely specific. After spending US$1 million and thousands of staff hours, governments may find out that they perform slightly better than their neighbors, that 59% of rural students answer multiple-choice tests at chance level, and that teachers with university diplomas in private schools are linked to higher scores.

So what has been the value added of international assessments? Detailed reviews exist (see, for example, Liberman & Clarke, 2012, and Lockheed et al., 2015). Roughly, wealthier countries apparently can use the test feedback to improve, but poorer ones usually do not.

Furthermore, assessments in poorer countries may subtract value. Testing receives top billing and may occupy the most competent ministry staff. There are only 8 hours in a normal working day, and assessment-related tasks may distract from other functions. Then the inability to take specific actions based on test results may signal that learning problems are untreatable. And savvy officials understand that they can attribute bad outcomes to socioeconomic factors.

At recent conferences I attended, ministry officials talked extensively about measurement and accountability. And all participants heartily agreed that learning is all that counts. But no one uttered a word about how to make students actually learn in order to perform on these tests.

Donor agencies want to see improvements in the abysmal test results but stumble on their own limitations.

"I went myself to country X to explain performance data," a World Bank director told me recently. So, what will the ministry officials do about this? She shrugged her shoulders. "I don't know; the World Bank cannot be involved in teaching methodology."

[Image: Classroom in Malawi, source unknown]

Some critics see a conspiracy. Perhaps the “data revolution” was one more takeover coup by the elites. They convinced entire countries to give their time and knowledge for the enrichment of the rich. With these data, local elites attend workshops, make presentations, and get prominence or consultancies. International staff and consultants get publications, workshops, and exciting mission travel. They demonstrate leadership and productivity and get promotions, nice salaries, and wonderful retirement benefits. Plus, everyone gets the warm feeling of helping the poor.

Evidence favoring this "conspiracy theory" is the mysterious disappearance of accountability from the AAA strategy's implementation. In principle, test results should be opportunities to account for the efforts made, the funds expended, and the reasons for failure. The heads of managers presiding over failures should roll. Donor staff giving bad advice or agreeing to failing actions should be fired. But nothing of the sort takes place! In various meetings, shocking performance data are presented with a straight face, and the speakers get applause. Then the data are used to justify the next loan, the next grant, the next "results-based" financing.

So, intentionally or otherwise, the donor community is communicating to governments that test administration equals accountability: We give you money and test results. You figure out how to teach the poor and improve results. If you measure and fail, you are off the hook. We have all performed the accountability function. And we keep writing advocacy blogs and articles about the importance of learning. If the poor want to know why their children remain illiterate, we will point them to the website that hosts the text.

All this whirlwind of activity for the sake of learning keeps staff so busy that they have no time to look outside their business-class airplane seats. The last 25 years, which witnessed the ascendance of international assessments, have also witnessed an explosion of research on the brain and learning. If customized and applied, the findings could revolutionize learning quality in all countries. But the applications remain unknown among donor agencies and are practically never discussed with governments.

So how can we create incentives for the actual provision of efficient learning to students? Please give suggestions.

References

Liberman, J., & Clarke, M. (2012). Review of World Bank Support for Student Assessment Activities in Client Countries, 1998-2009. Washington, DC: The World Bank. https://openknowledge.worldbank.org/handle/10986/17476

Lockheed, M., et al. (2015). The Experience of Middle-Income Countries Participating in PISA 2000-2015. OECD/World Bank. http://www.oecd.org/publications/the-experience-of-middle-income-countries-participating-in-pisa-2000-2015-9789264246195-en.htm

Helen Abadzi is a Greek cognitive psychologist and polyglot with a background in psychometrics. She retired after 27 years as an evaluation and education specialist at the World Bank. Her publications on science-based learning solutions for the poor are found here.
