Evaluation in OLPC: what for? What has been done, what could be done?



Access the PDF here: Evaluation in OLPC

Access the PowerPoint presentation here: Evaluations in OLPC

These documents can be accessed from the OLPC Foundation research page, where you can find useful papers. You can also find an older paper in French: L’évaluation des déploiements OLPC, quelles méthodes, quels résultats ? Evaluation_OLPC_Varly_FR

Abstract

The first OLPC deployments took place in early 2007, but evaluation plans were rarely embedded at the early stages of the projects. Although evaluation of 1:1 projects in education is growing, few studies can produce reliable estimates of the effects of ICT on pupils’ achievement. OLPC deployment contexts are far more complex, from both an IT and an educational perspective, than those of the Western countries where most 1:1 projects have been evaluated. The expected outcomes of OLPC deployments range from reduction of the digital divide, better self-esteem and motivation, to higher attendance and learning outcomes. Existing OLPC deployment evaluations do not address all these issues, and just a few focus on achievement measured by test scores. The most commonly reported outcomes are better motivation and attitudes and a reduction in repetition rates.

After asking whether systematic evaluation of OLPC deployments is really required, this paper makes proposals to include evaluation plans and longitudinal studies in OLPC deployments. Evaluation tools should be simple, inexpensive and manageable by OLPC volunteers in the field, in order to measure impacts and share experiences of what works and what does not. More focus should be put on measuring reading literacy in the early grades, which seems to be the big issue in developing countries. The need for change in education systems, potential private funding (the Giving Pledge), widespread impact evaluation documentation, less curricula-driven tests and measures, and simple testing tools all leave more room for OLPC interventions and evaluation. It is an opportunity to learn that the OLPC community should seize right away.

OLPC Evaluation: what for?

The first OLPC deployments took place in early 2007. On the various forums, newsgroups, wiki projects, websites and blogs, much attention is given to the technical IT aspects of the OLPC project, to the detriment of the pedagogical and implementation aspects. A thorough browsing of online resources on OLPC leads to this conclusion (Varly 2010). While there is much experience and knowledge sharing on how to run an XO laptop, there is little on how to best run an XO laptop project[1]. Evaluation could play a role in providing useful feedback on what works and what does not in the different local deployments, or on whether certain Sugar activities work better than others, for instance. Following a review of OLPC evaluations (Nugroho 2009), complemented by more recent OLPC evaluation reports, a recent paper from the OLPC Foundation Learning Group clearly advocates embedding the evaluation process into the early stages of deployment design (Zehra 2010).

A recent extensive review and categorization of OLPC evaluations (Varly 2010) complements the ongoing OLPC work and makes proposals for an evaluation framework drawing on non-OLPC ICTE work and taking developing countries’ contexts into account. Simple and cheap tools can be used to monitor the effects of OLPC projects before thinking of more complex designs, such as impact evaluation. This type of evaluation, inspired by health trial protocols and developed by the MIT-hosted Poverty Action Lab, is widely used to monitor education interventions and has somehow become a condition for funding large-scale projects, including OLPC deployments under IDB or World Bank management.

In August 2009, on the World Bank blog on ICTE, Michael Trucano said:

“Most of the evaluations to date have been of very small pilots, and given the short duration of these projects, it is difficult — if not dangerous —  to try to extrapolate too much from the findings from such reports.  This is especially true given the ‘hothouse flower’ nature of most high profile ICT in education pilots in their initial stages, where enthusiasm and statements about expected future changes in behavior and perceptions substitute for a lack of rigorously gathered, useful hard data.”

For small OLPC deployments, replicating full impact evaluation or quasi-experimental methods may simply exceed the overall deployment costs if they are externally managed. However, there are ways to draft evaluation tools that could be managed by OLPC volunteers and other members of the “good will coalition”. Considering efficiency issues, which are one possible use of evaluation, one parameter is the outcome and the other is the cost. While little has been done yet to measure outcomes (at least on the learning side), there is now broader knowledge of the XO unit cost, including implementation.

Efficiency vs. moral considerations in the OLPC galaxy

In Nepal, the overall unit cost of the XO deployment[2] is estimated at $77 per year, compared with $61 of current spending per primary pupil according to UNESCO (2010), excluding investments such as school buildings. In Uruguay, the unit cost is estimated at $75, including implementation. Comparatively, it is still a costly investment, as one XO laptop can represent one month of a teacher’s salary in many countries. On the other hand, traditional teaching and formal education methods have reached their limits: while more pupils have been enrolled worldwide over the past few years[3], many are off track in terms of basic skills. In Nepal, 79% of grade 2 learners are not able to read a single word (see Graph 3). Out of the $61 spent by the government, how much really reaches Nepalese schools? Are OLPC deployments cost-effective[4], complementary or alternative ways of fostering learning and local development?
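To make such unit-cost comparisons concrete, here is a minimal sketch of how a 1:1 program cost can be annualized (hardware amortized over its expected lifespan, plus yearly implementation costs) and set against recurrent public spending per pupil. All figures in the example are illustrative assumptions, not data from an actual deployment.

    def annual_cost_per_pupil(hardware_cost, lifespan_years, implementation_per_year):
        """Annualized 1:1 program cost per pupil: amortized hardware plus yearly implementation."""
        return hardware_cost / lifespan_years + implementation_per_year

    # Illustrative assumptions: a $200 laptop kept 5 years, $35 per pupil per year for
    # training, connectivity and maintenance, compared with $61 of recurrent public spending.
    program = annual_cost_per_pupil(hardware_cost=200, lifespan_years=5, implementation_per_year=35)
    print(program)       # 75.0 dollars per pupil per year
    print(program / 61)  # ratio to recurrent public spending per primary pupil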

In secondary education, school fees, textbooks and uniforms are unaffordable for many households, but the increase in primary completion rates sustains a growing social demand, especially in Africa. Traditional forms of schooling, which imply building lower secondary schools in rural areas and recruiting qualified teachers, are simply unaffordable for government secondary education budgets, and development aid financing focuses on primary or tertiary education (UNESCO 2010). As a result, in Africa, 45.8% of lower secondary education spending comes from households (Pole de Dakar 2010), and comparing households’ unit spending per pupil with the XO-HS unit cost is worthwhile. Alternative development scenarios, with less public spending on lower secondary education but public investment geared towards electricity supply in rural areas and XO-HS purchases by families, could be explored in terms of economic sustainability. Once again, even if this drastic scenario were economically sustainable, the issue of the XO-HS impact on secondary pupils’ learning outcomes would remain.

While efficiency has been widely promoted in education systems, it is just one aspect of evaluation. Unlike the “assessment of learning” approach, which deals with efficiency (how much do we pay? what do we get?), the “assessment for learning” approach uses evaluation tools to compare the effects of different OLPC projects and identify deployment best practices. This paper does not choose between the two approaches.

Addressing the issue of efficiency will be necessary if OLPC wants to reach all kids and achieve its goal, but at this stage the OLPC community is still learning how to best run a laptop project in different local environments, including post-conflict situations. OLPC evaluation can foster deployment teams’ competencies in education measurement, monitoring and evaluation, and develop further knowledge sharing on how to implement XOs in the school environment, not just on how to run XOs.

From a moral point of view, providing laptops to kids might just be a sufficient action and enough of a reward. The psychological effects (such as pupils’ self-esteem) of giving a laptop must not be underestimated, although they cannot be the only intended outcome (Hourcade 2009). After all, OLPC is not “a laptop or an educational project” but a moral commitment to provide each child with a laptop. Providing quality education to all kids remains the responsibility of the governments that signed the Education for All declaration (Dakar 2000) and promoted free access to education as a human right, guaranteed by constitutional law. From a moral point of view, OLPC, widely and socially valued as one of mankind’s best ideas, does not necessarily need to be evaluated in terms of educational or socio-economic achievement. Even if it is not necessary, it may be useful to use evaluation as a tool to improve the projects in all their dimensions, considering what has been said before about experimentation and evaluation being a condition for funding such large-scale projects.

There is growing attention to ICTE, either as a real lever for learning or, on the other hand, as a way to make political commitments to education development highly visible. Some heavily indebted Western countries, such as Portugal, have bought Classmate laptop computers for all school kids on the state budget (Magellan project). Once again, the morality of the original intention comes into question. Addressing the efficiency of such a huge investment in a period of economic crisis can also be strictly needed from a moral or taxpayer point of view.

Review of ICTE evaluations outside the OLPC World

The World Bank runs a dedicated blog on ICTE and has already produced comprehensive guidelines for monitoring and evaluation (Wagner 2005). This publication includes an extensive literature review but is slightly outdated, given the rapid introduction of ICT in education and the massive investment and research in education and IT technologies.

While many recent evaluations still have methodological flaws (Suhr 2010), it is worth noting that some papers are clearly anti-OLPC oriented even though they do not deal with an OLPC deployment. However, recent work can be a good source of inspiration (Bethel 2009, Suhr 2010) for developing fair and sustainable evaluation tools applicable to OLPC deployment contexts.

A review of a sample of ICTE evaluations making explicit or implicit references to OLPC (Wainer 2008, Barrera-Osorio 2009) or focusing on 1:1 computing (Suhr 2010), covering a variety of methods, countries and contexts, reveals common patterns. Measurement focuses directly on students’ learning outcomes, without fully addressing how teaching methods were changed, and little if any information is given on the type of pedagogical platform bundled with the computers (Varly 2010). Except for (Suhr 2010), pupils’ activities have not been measured in detail. It is an evaluation of the computers’ effects rather than of the computer project’s effects, which should include teacher training and pedagogical tools. If ICT education projects are effective when combining inputs (Wagner 2005), these inputs are not well documented in these three papers, and we do not really know what is being evaluated.

In the OLPC framework, this last point is crucial, as XOs are often delivered with a comprehensive set of pedagogical activities (Sugar), inspired by socio-constructivism. Hence, when measuring XO effects, are we measuring laptop effects, Sugar effects, or the effects of socio-constructivist pedagogy? It is useful to review what has been done recently in 1:1 project evaluation before looking at the OLPC deployment context and how modern evaluation techniques could be embedded further into OLPC projects.

A meta-analysis of One to One projects outside the OLPC World

Bethel undertook a comprehensive review of One to One projects. Out of hundreds of papers, Bethel identified 144 articles with quantitative data, of which 44 included achievement data and 22 contained quantitative data from which effect sizes could be extracted. The graph below shows the most commonly reported gains.

               Graph 1: Quantitative synthesis of attitude data

The data show frequently reported improvements in motivation and attitudes and better teacher/student interaction, but effects on achievement and attendance are far from systematic. The most complex and rigorous evaluation of One to One effects (Suhr 2010), not included in the Bethel meta-analysis, identified skill improvements in writing strategies and in literary response and analysis. Pupils’ achievements are tied to the activities performed with the computer. Effects on achievement (measured by test scores) can be expected from the second year of implementation (Suhr 2010). In Magog (Canada), laptop introduction increased literacy and numeracy skills, reduced drop-out rates and increased attendance, but these results were achieved after 3 or 4 years of program implementation (ETSB 2010).

Source: Bethel (2009)
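To make the notion of extracted effect sizes concrete, here is a minimal sketch of the standardized mean difference (Cohen’s d with a pooled standard deviation) that can be computed whenever a paper reports group means, standard deviations and sample sizes. The figures used in the example are purely illustrative.

    import math

    def cohens_d(mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl):
        """Standardized mean difference between two groups, using a pooled standard deviation."""
        pooled_sd = math.sqrt(((n_treat - 1) * sd_treat ** 2 + (n_ctrl - 1) * sd_ctrl ** 2)
                              / (n_treat + n_ctrl - 2))
        return (mean_treat - mean_ctrl) / pooled_sd

    # Illustrative numbers only: reading scores for laptop classes vs. comparison classes
    print(round(cohens_d(52.0, 10.0, 120, 48.5, 11.0, 115), 2))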

OLPC project evaluation methods and results should follow similar patterns, but the deployment contexts are different, and thus so are the expected outcomes. The papers reviewed by Bethel focused on developed countries, mostly the USA, whereas OLPC targets developing countries.

OLPC deployment contexts from an IT perspective

Looking at contextual data for the countries that had deployed more than 3,000 XOs as listed on the OLPC wiki in March 2010, the highest ratio of computers per 100 inhabitants was found in Uruguay (13.6) and the lowest in Haiti and Cambodia (0.2), all measured in 2005, prior to the XO deployments (Varly 2010). Although there are strong disparities within the USA, average access to a home computer was 76% for 3-17 year olds in 2003 (Barton 2007). In the 45 TIMSS participating countries, average access to a computer was 60% for eighth graders in 2003.

Therefore, OLPC deployments clearly reduce the digital divide, and for most kids the XO is their first computer. Given a minimum amount of XO use by kids, technology use and literacy outcomes should be more frequently reported in OLPC countries than in Western countries. There are clearly different starting points in terms of IT access, and marginal returns cannot be strictly compared across the two contexts. As OLPC has been promoted as “an educational project and not a laptop project”, we will not focus on technology access and use considerations, but rather on how the XO can change the educational and social environment and pupils’ aptitudes. The project is also too recent to assess economic benefits (Nugroho 2009).

OLPC deployment contexts from an educational perspective

Looking at the education indicators of the major deployment countries and other micro-deployments, we can observe that most of them have not reached universal access to primary education. In 2007, Ethiopia, Ghana, Haiti and Mongolia had gross participation rates under 100% (Varly 2010). Reports from UNESCO (2010), work focusing on learning conditions for the poor (Abadzi 2006) and more recent data from reading assessments (Gove 2010) point to specific developing-world problems in education access, participation and quality:

  • High repetition and drop-out rates
  • Poor literacy environment (no books at home, illiterate parents…)
  • Inefficient teaching methods (especially in reading)
  • Poor classroom equipment (few textbooks, large class sizes, …)
  • Very little effective learning time (low attendance, insufficient time on task, …)
  • Low social demand from parents (children work in the fields, …)
  • Language of instruction not spoken at home (half of the out-of-school children population)
  • Very low learning outcomes (measured by test scores)
  • High gender, rural/urban and ethnic disparities

These matters are not addressed in the non-OLPC One to One projects, as they are not the big issues in the “developed” world. Reading and learning skills are at the heart of education systems in both the North and the South, but the size of the problem is considerable in developing countries.

 Reading and learning are still the big issues

Learning outcomes in the developing world are very low. SACMEQ tests partially based on PIRLS items showed a four-year gap between learning outcome levels in Western countries and in southern Africa. In Mali, 68% of 4th graders are not able to read aloud “My school is beautiful” in French or in their own native language. In Haiti, almost 50% of grade 3 kids are not able to read a single word in French or Creole. The graph below, using recent EGRA oral fluency data for the early grades and including countries where the language of instruction is spoken at home, shows how long the way towards literacy still is.

Graph 3: Proportion of non-readers in the early grades (EGRA oral fluency data)

However, improvements are possible: in The Gambia, the proportion of pupils not able to read a word was reduced from 47 to 27 percent between 2007 and 2009. Contextual factors, such as low exposure to literacy materials outside school and relatively inefficient teaching methods in the classroom, are pointed out. OLPC deployments can be an avenue for better opportunities to learn, both from a quantitative point of view (increased learning content) and from a qualitative one (better pedagogical materials, modern teaching methods). Child-centered pedagogical approaches, group work and more contextualized school materials are also broadly promoted by international agencies in developing countries. This environment is conducive to OLPC deployments, but these warning data indicate that Sugar activities should focus on basic literacy and numeracy skills for the early grades, manageable by pupils and teachers.

Even if XO deployments can have an impact on learning outcomes, measuring it poses complex problems, some specific to developing countries and some IT-related. Measuring learning outcomes via standardized testing is quite costly and requires in-depth expertise. There is a large gap between US and national expertise in the monitoring and evaluation of ICT projects and in education metrics in general. Cultural bias in defining what “quality education” is should not be underestimated (Nugroho 2009). Exam pass rates are usually employed as a proxy indicator of quality, but these data have many flaws and can be influenced by political authorities when monitoring the impact of a subsequent educational reform. The same remark applies to repetition rates. However, the complexity of the measuring task cannot be an excuse for not trying to develop “assessment for learning” OLPC tools as a first step.

Measuring the impacts of OLPC deployments on school participation, attendance, repetition and drop-out can be done more easily, and international definitions of education indicators, such as the primary completion rate, are now widely used at the national level. It is now time to review what has been done to evaluate OLPC impacts, in terms of methods used and outcomes reported.

Evaluation of OLPC deployments: what has been done? What are the effects?

When the deployment is sponsored by a development agency, impact evaluation methods are embedded at the early stages of program design, as for any other kind of educational intervention. Otherwise, given the relatively recent timeline of deployments, and probably depending on deployment teams’ competencies, the focus is on formative evaluation: What has been done? How was the project perceived by the teachers and the community? What are the pupils doing with the XOs? These are the most common questions.

There is only one paper published in a scientific journal that deals with an OLPC deployment evaluation (Hourcade 2009); its evaluation techniques are rather traditional (focus groups) and do not include quantitative or hard data on test scores. While some deployments include an impact evaluation design, few fit the criteria set by Bethel for measuring One to One effects most rigorously. This is not specific to OLPC but common to One to One evaluation: OLPC evaluation methods are as “bad” or as “good” as any other One to One project assessment.

However, little information is given on the tests used to measure learning outcomes, and the reporting puts too much emphasis on pupils’ attitudes and motivation. Clearly, the majority of OLPC deployments generated better pupil self-esteem, attendance and involvement in the classroom, but the reports do not tell us how this contributed to better learning outcomes. There are not enough observations and published papers to make rigorous comparisons with the One to One outcomes shown in Graph 1, and no measure of effect sizes is produced (Varly 2010). The OLPC evaluation reports available in September 2010 are still too few and too poorly documented to produce reliable conclusions.

A recent Peru report including a proper impact evaluation design (treatment and control groups) fits the common pattern of measured outcomes (Santiago 2010). In particular, little impact is found on learning achievement, but better pupil attitudes and motivation are reported. An in-depth review of the Intel Classmate website did not reveal any impact evaluation providing marketing-grade proof of effects on pupils’ test scores (Varly 2010). The constant across all One to One projects is that a minimum technology appropriation time is needed before any significant effect on achievement can be identified. Moreover, the leverage effects on learning achievement seem bound to the type of activities performed by pupils with the XO, which have not been described in detail, except in (Hourcade 2009). Preliminary findings from the Haiti deployment evaluation show an increase in pupils’ writing skills, as writing was their favorite XO activity.

The types of pupils’ activities are not always properly measured, but the installation of a school server could help back up pupils’ log files, as has been tried in Nosy Komba. This is a central point that has not been systematically addressed: “In order to understand the connection between the input (computer use) and the output (learning in school subjects), it is essential to have the learning measurement directly correspond to subject area in which the technology was used” (Wagner 2005, p.12).
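As a minimal illustration of what could be done with such backed-up logs, the sketch below counts how often each activity appears in a hypothetical export of Journal entries. The CSV format (one row per entry, with pupil_id, activity and timestamp columns) is an assumption for the example, not an actual school-server feature.

    import csv
    from collections import Counter

    def activity_counts(csv_path):
        """Count how often each activity appears in a (hypothetical) Journal export."""
        counts = Counter()
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                counts[row["activity"]] += 1
        return counts

    # Example: print the five most used activities across the school
    for activity, n in activity_counts("journal_export.csv").most_common(5):
        print(activity, n)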

Several reports are expected at the end of 2010, and some designs are especially interesting. Shakira’s Fundacion Pies Descalzos work would allow comparing the value added by the Sugar platform versus Microsoft tools. Rwanda will compare the effects of the XO and the TeacherMate. Haiti will use regional tests to assess OLPC effects. A World Bank impact evaluation is pending in Sri Lanka and uses different types of tests. The regional SERCE tests would be valuable resources for comparing the effects of XO deployments across Latin America. Other regional tests are available for Africa (PASEC[5], SACMEQ) to allow comparisons of effects at the regional level. Uruguay reported lower repetition rates and better attendance (Universidad de la Republica 2010). However, it is the only case of saturation (one child = one XO) and the Ceibal plan is highly political. (Hourcade 2009) suggested that the arrival of journalists and politicians in the little town of Villa Cardal could have induced large positive psychological effects (self-esteem), possibly more than the use of the XOs.

Other evaluations reported similar trends, but longitudinal study designs have not been systematically embedded at the early stages of the projects (Zehra 2010, Varly 2010). At this stage of development, the focus of the OLPC community is on how to run XOs, on battery and touchpad issues, as well as on getting teachers involved and the XO accepted by education authorities. Information on what works and what does not, and solutions to common problems, could be more formally reported. A framework to address the more complex educational issues arising in the next OLPC development steps should be set up. The OLPC France blog is a good example of this kind of initiative, sharing problems and solutions, and is used below as a case study.

A framework for sharing deployment problems and solutions: case study of Nosy Komba

Operating under an initiative from OLPC France and the G du Coeur association, Nosy Komba volunteers were able to properly list the problems and solutions they encountered, in effect producing an informal formative assessment. This draft table shows that most of the problems are not specific to OLPC; we can say they are “X” problems, not XO problems.

IT and developing-world problems and solutions are well documented, although somewhat underestimated by international development agencies. Corruption weighs heavily on XO deployments and is clearly understated, although some efforts are made by Transparency International to address, or at least document, this issue.

There is no perfect guidebook on how to run an IT project in a developing country, but OLPC deployment guidelines should be developed further using feedback and experience sharing, and should address issues that are really specific to OLPC, leaving other issues to the documentation of specialized agencies.

Despite a very short training and the detailed evaluation documentation provided (Varly 2010), in Nosy Komba much time was spent installing a school server and little time on teacher training or on initiating a quick longitudinal data collection, shown in the annex. This seems quite typical of an XO deployment: much emphasis on the IT aspects but too little on how to make pupils really learn better with the XOs, or on trying to collect some data.


A quick introduction to OLPC longitudinal evaluation

Detailed evaluation plan proposals are made for both formative assessment and impact evaluation or quasi-experimental designs in (Varly 2010). These proposals are inspired by (Suhr 2010) and (Leeming 2010) and further adapted to common OLPC deployment contexts. Let us start with simple things, like measuring XO effects on school participation, retention and attendance. The idea is to collect baseline data and follow up for at least three years; as noted above, One to One project outcomes can be expected 2 or 3 years after implementation. Three kinds of data can be collected (a minimal data layout is sketched after the list below):

  • Contextual data: national education context (key indicators), deployment context (local indicators), tools deployed (number of XOs, teacher training, hours of electricity access per week)
  • Baseline data: number of students enrolled, repetition rates, attendance rates, dropout rates (targeted schools and local area schools), for the year before implementation and the year of implementation
  • Cohort study: follow-up in Year+1, Year+2, Year+3…
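Here is a minimal sketch of how these three data layers could be laid out as flat records and used to track a simple indicator over time. Field names and figures are hypothetical and meant only to show the structure.

    # Hypothetical field names for the three data layers described above.
    CONTEXT_FIELDS = ["country", "region", "school_id", "n_xos_deployed",
                      "teacher_training_days", "electricity_hours_per_week"]

    BASELINE_FIELDS = ["school_id", "year", "enrolled", "repetition_rate",
                       "attendance_rate", "dropout_rate", "is_xo_school"]

    # Cohort follow-up: one record per school and per year after deployment (Y+1, Y+2, Y+3…)
    FOLLOWUP_FIELDS = BASELINE_FIELDS + ["years_since_deployment"]

    def change_since_baseline(baseline, followup, field):
        """Simple difference between a follow-up value and its baseline, e.g. for dropout_rate."""
        return followup[field] - baseline[field]

    # Illustrative numbers only
    baseline = {"school_id": "NK-01", "year": 2009, "dropout_rate": 0.12}
    year1 = {"school_id": "NK-01", "year": 2010, "dropout_rate": 0.09}
    print(round(change_since_baseline(baseline, year1, "dropout_rate"), 3))  # -0.03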

Collecting these data can only be done with the local authorities. It is a good way to foster relationships with education institutions, and a powerful way to communicate the expected outcomes of the XO deployment and to reaffirm that it is not an IT project but an education project.

Impact evaluations: take it or leave it to international organizations?

So far, only large international organizations such as the World Bank and US research centers are capable of rapidly implementing impact evaluations and analyzing their data. The situation is quasi-monopolistic, and the information and messages delivered on institutional ICTE blogs or papers are not really supportive of the OLPC initiative (Barrera-Osorio 2009). While consistent World Bank documentation exists on how to monitor and evaluate ICT projects in education, little real indigenous work has been done in the developing world and within the OLPC community.

Truly independently led impact evaluation would allow more transparency with regard to OLPC outcomes. Alternatively, common internal tools could also be designed for volunteers to produce their own data and compare experiences. Impact evaluation does have a cost (negotiation with authorities, instrument printing, training of test administrators, data entry and analysis, reporting), but the more that is done, the less is spent. (Varly 2010) proposes a formative framework and an impact evaluation design specifically adapted to OLPC and possibly manageable by volunteers with a little training. The impact evaluation design includes IT or XO component items, as piloted by (Hourcade 2009) for instance.

This would make it possible to evaluate whether kids really had hands-on access to the XO and what they are capable of doing with the machine, while testing more academic achievement with the most widely used regional or international tests[6] (such as SERCE, PASEC, PIRLS…), which could be offered in paper and electronic versions. In Sri Lanka, the World Bank sponsored assessment used specific tests: “The baseline student survey included grade-specific learning assessments based on Piaget’s theory of cognitive development as well as on the mathematics syllabus and assessments administered in government primary schools”.

Control group schools would also be included in the design, though obviously without taking the IT or XO tests. The plan would include pre- and post-tests, as in any impact evaluation design, and tests should be administered at least one year after the deployment, as suggested by (Suhr 2010). The Sri Lanka evaluation design is a good source of inspiration, and the paragraph below explains the basics of impact evaluation very well:

“Students in schools having at least one (school) computer showed higher learning outcomes than students in schools having no computer, although this could be the result of other factors associated with a computer facility in the school. Only a before-and-after comparison of student learning outcomes across control and ‘treated’ schools (“difference in difference” estimator) will indicate the causal impact of computers on student learning and other outcomes.”
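To make the “difference in difference” estimator quoted above concrete, the sketch below computes it from four average test scores (treated/control, before/after). The scores are purely illustrative.

    def difference_in_differences(treat_pre, treat_post, ctrl_pre, ctrl_post):
        """DiD estimate: change in the treated group minus change in the control group."""
        return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

    # Illustrative average test scores (out of 100)
    print(difference_in_differences(treat_pre=45.0, treat_post=55.0,
                                    ctrl_pre=46.0, ctrl_post=50.0))  # 6.0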

However, the counterproductive effects of systematic evaluation based on test scores are now well documented:

  • Standardizing evaluation methods can standardize the deployment method as well (best practices are replicated without taking the different contexts into account)
  • Teachers teach to the test
  • Results are exploited in inadequate ways by politicians

A context conducive to OLPC deployment and evaluation

While enrolment rates have improved since the 2000 EFA initiative, on the quality issue many education systems are in quasi-failure. Since an independent review of World Bank assistance in education, there has been a real focus on learning outcomes in the donor community, along with new insights from neuroscience (Abadzi 2006) and a focus on literacy in the developing world. The move is to target the early grades (Gove 2010) to develop sustainable basic cognitive abilities using more child-centered pedagogical approaches. In this context, ICTE is seriously considered as a possible solution, but the poorly documented impact of reputedly costly OLPC solutions might hinder further development and funding. The need for change in education systems, potential private funding (the Giving Pledge), widespread impact evaluation documentation, less curricula-driven tests and measures, and simple testing tools all leave more room for OLPC interventions and evaluation. It is an opportunity to learn that the OLPC community should seize right away.

References:

Abadzi (2006), Efficient learning for the poor: insights from the frontier of cognitive neuroscience, World Bank, Washington.

Barrera-Osorio F. and Linden L. L. (2009), The Use and Misuse of Computers in Education, Impact Evaluation Series No. 29, Policy Research Working Paper 4836, World Bank, Washington.

Barton P.E., Coley J.C (2007), The family: America’s Smallest School, Policy Information Report, Educational Testing Service.

Eastern Townships School Board (2010), 1:1 - Leading change in public education, paper presented at the 2010 Conference on 1:1 computing in education in Vienna, February 2010, ETSB.

Fundacion Pies Descalzos (?), El impacto de estrategias 1 a 1 en el desempeño académico de estudiantes, La experiencia de Fundación Pies Descalzos, Powerpoint presentation.

Hourcade J. (2009), Early OLPC Experiences in a Rural Uruguayan School, Mobile Technology for Children: Designing for Interaction and Learning, CH11.

Gove A. & Cvelich P. (2010), Early Reading: Igniting Education for All. A report by the Early Grade Learning Community of Practice. Research Triangle Park, NC: Research Triangle Institute.

Leeming D. & al (2010), Some feedback on challenges and impact of OLPC. http://www.olpcnews.com/files/OLPC_Oceania_Impacts_and_Feedback.pdf

Nugroho D. and Lonsdale M. (2009), “Evaluation of OLPC Programs Globally: a Literature Review”, http://wiki.laptop.org/images/f/fb/Literature_Review_040309.pdf

Pole de Dakar (2010), Combien dépensent les familles africaines pour l’éducation ?, in La lettre d’information du Pôle, N°15, Janvier 2010.

Santiago A. & al (2010), Evaluación experimental del Programa “Una Laptop por Niño” en Perú, BID Education, Aportes N° 5, Julio 2010.

Suhr K.A. & al (2010), Laptops and Fourth-Grade Literacy: Assisting the Jump over the Fourth-Grade Slump, Journal of Technology, Learning, and Assessment, 9(5).

UNESCO (2010), EFA Global Monitoring Report 2010, UNESCO, Paris.

Universidad de la Republica (2010), Proyecto Flor de Ceibo, Informe de lo actuado, Montevideo, Abril 2010.

Varly P. (2010), L’évaluation des déploiements OLPC : quelles méthodes ?, document de travail. http://varlyproject.blog/wp-content/uploads/2010/08/evaluation_olpc_varly.pdf

Wagner D. A. & al (2005), Monitoring and Evaluation of ICT in Education Projects: A Handbook for Developing Countries. Washington, DC: infoDev /World Bank. http://www.infodev.org/en/Publication.9.html

Wainer J. & al (2008), Too much computer and Internet use is bad for your grades, especially if you are young and poor: Results from the 2001 Brazilian SAEB, Computers & Education 51, p. 1417–1429.

Zehra H. (2010), Review of external OLPC Monitoring & Evaluation Reports, August 2010, OLPC Foundation Learning Group.


[1] In this post, Mark Warschauer criticizes the OLPC approach. “olpc-how-not-to-run-a-laptop-program”
[2] Here are useful tools to calculate 1:1 project costs.
[3] Especially in Africa, see here.
[4] It is a recurrent topic in the OLPC News forum, as in this post.
[5] A negotiation is ongoing with CONFEMEN to use PASEC tests.
[6] Such tests are based on a common set of competencies defined by experts after a rigorous curriculum analysis validated by the different countries. They are designed to reflect the expected achievements at a given age or grade, whatever the teaching methods.
Annex: Proportion of non-readers (EGRA meta-data)
Pierre VARLY, Independent Consultant

Disclaimer: This paper reflects only the views of its author, Pierre Varly, and not those of any of the individuals or organizations mentioned.

This Post Has 9 Comments

  1. An OLPC impact evaluation report in Peru :
    “As part of the study, the impact on general cognitive abilities was explored. Three tests were given. They measured: a) non-verbal analytical capacity; b) executive functioning and language; and c) processing speed and short-term memory. The grades obtained by students of the treatment group were higher than those for students in the control group in all three cases, although the diference was statistically significant only for the non-verbal analytical capacity test.”
    https://edutechdebate.org/archive/olpc-in-peru/

  2. Anonymous

    Thanks a lot for sharing this info, very interesting, especially on the use of the XO across time.
    I will read it more carefully and make a synthesis, if that is OK with you.