The Pilot RCT Report

The ReadingWise pilot study, conducted between 2012 and 2014, explored the viability of the ReadingWise programme's methodology in primary, secondary, and special school settings.

The Pilot RCT Report details the impact of ReadingWise Decoding across primary, secondary, and special schools. The full study is reproduced below.

The ReadingWise English Programme for Accelerated Reading Acquisition in UK School Children: A Pilot Study (2012-14) 


The problems of children failing to acquire age-appropriate reading skills have been widely reported. Recommendations for improving reading abilities have been made by a number of researchers, including the call for the use of technology to promote reading acquisition. We carried out a pilot study to test the computer-based literacy programme ReadingWise English with 160 children (aged 6-15) whose reading age placed them in the lowest 20% of their class in reading ability. Using a randomised controlled design, we administered 20 hours of the decoding module to the intervention group, who made significant improvements in reading age relative to controls. The findings of this pilot study contribute to the emerging evidence that the ReadingWise English programme accelerates children's reading ability, and highlight the need for further quantitative and qualitative research to support these claims.


Low literacy skills in school-aged children are a widespread concern for educational researchers and policymakers. Research has shown that children who fail to acquire basic alphabetic principles in early childhood continue to display low literacy performance in later childhood and adolescence (Juel, 1988; Chall, Jacobs & Baldwin, 1990). This is often referred to as the Matthew effect: poor readers are less motivated to participate in reading activities, which results in less print exposure than their more able peers, and this quickly widens the gap in reading proficiency between poor and competent readers (Stanovich, 1986; Anderson, Wilson, & Fielding, 1988; Torgesen, 2005; Adams, 2006). Educational failure has been linked to behavioural problems, crime, and social exclusion (Beddington et al., 2008), and educational bodies have therefore invested significant resources in improving the literacy performance of children and adolescents (Kim et al., 2010). This has been pursued through two main strategies: prevention and remediation.

By examining the precursors of poor literacy skills, researchers have successfully developed early childhood interventions (e.g. Reading First and Title I) to prevent poor literacy outcomes in later childhood (Hatcher et al., 2006). Early interventions have incorporated phonological awareness and phonics skills training during the preschool years, which aids later word recognition (Byrne & Fielding-Barnsley, 1995; Bus & Van Ijzendoorn, 1999; Blachman, 2000; Hatcher, Hulme & Snowling, 2004; Macaruso & Rodman, 2011).

However, existing literacy interventions have proved unsuccessful for some individuals, particularly those with phonological retrieval deficits, encoding deficits, low verbal ability, and developmental delays (Al Otaiba & Fuchs, 2002), and thus a large number of children still reach adolescence with deficient literacy ability. This has created a challenge for educational researchers: to provide an intervention that addresses the complexities of low literacy skills in older children.

There exist a number of useful interventions that have aided the development of literacy skills in middle childhood and adolescence. READ 180 is a mixed method program designed for children aged 8-14 that combines teacher-directed lessons in small groups, online reading activities, and independent reading of story books in two 90-minute sessions, to improve phonemic and phonological awareness, spelling, reading, and comprehension. 

Despite the lack of randomised controlled trials, READ 180 is widely used in US schools. Kim et al (2010, 2011) carried out the first randomised controlled trial of READ 180 and found promising results for 9- to 12-year-old children's vocabulary and reading comprehension; however, no significant differences in oral reading or spelling were identified between groups (Kim et al, 2010, 2011). Other research has shown promising links between phonological skills training interventions and reading attainment (Goswami & Bryant, 1990; Rack, Hulme, & Snowling, 1993; Blachman, 1997; National Reading Panel, 2000); however, these interventions require extensive teacher-pupil time and are therefore not easily accessible to all schools.

In recent years, there have been advances in the use of technology to implement computer-based literacy interventions, and researchers and educators around the world have acknowledged the benefits of using ICT in literacy learning (Lim & Oakley, 2013). Online literacy interventions are advantageous because individual feedback is provided, so children are able to work independently. This is an attractive intervention method for schools, as one-to-one teaching is not required. Studies by Johnson and Howard (2003) and Meyer et al (2010) have shown the benefits of using computer-assisted programmes to teach literacy to children aged 7-11; however, these studies were not randomised controlled trials and thus should be interpreted with caution.

Few studies have rigorously tested the effectiveness of computer-based literacy interventions, and the few existing RCTs have presented some discouraging findings (for a review, see See & Gorard, 2014). Brooks, Miles, Torgerson & Torgerson (2007) carried out a randomised controlled trial to examine children's spelling and reading ability after 10 hours of online literacy learning. Results showed that 11- to 12-year-old children made no significant advances in spelling and, more concerningly, showed a decline in reading skills after the intervention.

A meta-analysis by Torgerson & Elbourne (2002) pooled findings from seven RCT studies and found no significant positive effects of computer-assisted spelling. Additionally, a meta-analysis of literacy interventions by Torgerson & Zhu (2003) identified two RCTs examining the effectiveness of computer-assisted reading (Lin et al., 1991; Heise et al., 1991); however, results indicated no statistically significant differences in reading ability between groups post-intervention.

A more recent study by Khan and Gorard (2012) used a large RCT to test the effectiveness of computer software designed to aid children's reading acquisition. 672 children aged 12-13 were randomly assigned to receive either the computer programme or a non-computer-based programme; the latter group showed significantly greater gains in reading. This research emphasises the need to rigorously test online literacy interventions through randomised controlled trials before mass implementation in schools (Brooks et al, 2007).

Since the introduction of the National Literacy Strategy (1997), there has been a substantial increase in literacy performance in the UK. However, despite these advances, 5.2 million adults in the UK still display reading levels below the expected reading age of an 11-year-old (National Literacy Trust, 2014). A challenge for educational researchers is to provide a literacy intervention that:

a) addresses a diverse range of literacy abilities; 

b) is appropriate for a large age range; 

c) is sensitive to individual strengths and difficulties; 

d) provides individualized instruction tailored to the ability of the student; 

e) has low teacher input; 

f) is novel and engaging for children; 

g) supports children with learning difficulties; and 

h) provides long-term gains rather than short-term benefits. 

The ReadingWise English programme was designed to address all of the above aims. The purpose of this report is to explore the findings of the initial pilot study to inform future large-scale research on ReadingWise.

Materials and Methods 

Pilot Study Design 

We carried out a randomised controlled trial. This pilot study was a small-scale trial run in preparation for a more rigorous large-scale RCT. Specifically, we aimed to: 

(i) test the adequacy of the programme; 

(ii) ensure the programme was engaging; 

(iii) rehearse the research protocol to ensure the researchers were fully fluent in the standardised instructions; 

(iv) ensure our teacher training was adequate; 

(v) identify any limitations of methods and procedures; and 

(vi) assess the feasibility of a large-scale study. 

Participant Selection and Recruitment

The sample consisted of 160 children aged 6.1 to 15.6 years old (96 primary students and 64 secondary students). Of this sample, 55 were female and 105 were male. This gender split among low literacy performers is expected and consistent with other studies in the field. Participants were recruited from five schools in the South of England, and the ethnicity of the sample was representative of the schools taking part. 

A formal targeting of schools was undertaken to ensure the highest participation and response rate. Initial contact was made with gatekeepers such as headteachers, providing a detailed outline of the study and procedures. Those who comprised the lowest 20% of reading ability in each class were invited to take part in the study, and parental informed consent was collected. Overall, the sample consisted of 73 English-only speaking students and 84 English as an Additional Language (EAL) students (see Figure 1). 


Sample Size Calculation 

Studies of educational interventions have to date infrequently produced an effect size greater than 0.5 (Brooks et al, 2007), which is considered a moderate effect size (Cohen, 1988). We therefore aimed to recruit enough participants (a minimum of 64 in each group) to detect a difference of 0.5. The five schools that took part in this study included a total of 2,310 students. Our target sample was children who performed in the lowest 20% of their class, providing us with a sample of 160, and thus sufficient power to detect an effect size of 0.5. 
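The report's minimum of 64 participants per group is consistent with powering for a moderate standardised effect (Cohen's d = 0.5). A standard normal-approximation power calculation sketches where that figure comes from; the code below is illustrative only and is not taken from the report, and the function name and defaults (two-tailed alpha = .05, 80% power) are our assumptions:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample comparison able to
    detect standardised effect size d (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-tailed critical value, ~1.96
    z_beta = z.inv_cdf(power)           # power requirement, ~0.84
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A moderate effect (d = 0.5) at alpha = .05 with 80% power:
print(n_per_group(0.5))  # 63 per group, in line with the report's minimum of 64
```

Smaller targets grow quickly: detecting d = 0.2 under the same assumptions needs roughly 393 per group, which is one reason pilot studies rarely power for small effects.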

Ethics and Procedures

Children were randomly assigned to either the ReadingWise group or the control group, as random allocation minimizes confounding factors that may otherwise explain results (Torgerson, 2009). There were no significant differences in pretest scores between groups. Children allocated to the control condition took part in activities unrelated to ReadingWise tasks. 

The ReadingWise programme was administered in classrooms where each child in the ReadingWise group was allocated a computer and a set of headphones and encouraged to follow the audio instructions provided by the programme. ReadingWise staff were present to provide technical expertise and support to teachers and teaching assistants. After completion of the programme, reading assessments were administered using online methods and pencil-and-paper tasks manually scored by ReadingWise assessors. 

Teachers and teaching assistants received an initial training session to ensure they felt confident administering the online lessons. The programme is designed to require very little teacher input, so that children can work independently without relying on a teacher. Only brief training was therefore required: a 2-hour session on the programme, including the e-learning programme for teachers developed for this purpose, 'TeachingWise.' This incorporated practical information relating to the ReadingWise programme, along with guidance on appropriate classroom and behaviour management. See Figure 2 for a flow chart of procedures. 

Figure 2. Flow chart of procedures (enrollment, intervention allocation, pre-test, post-test & data analysis) 


The ReadingWise English Programme

We administered the decoding module of the ReadingWise English programme, which uses over 30 techniques through online lessons to help individuals improve their literacy skills. Students are able to work at their own pace, regardless of their current level of ability, through 20 sections that teach a wide range of reading skills, from basic alphabet recognition to reading difficult sentences (including all 4-syllable words in the first three Harry Potter books). Tasks provide positive feedback, and animated movies are incorporated to keep learners motivated and engaged. 

Decoding lessons include interactive blend-to-sound and sound-to-blend screens to reinforce students' learning. The programme includes a full set of phonemes, in addition to blends from simple vowel-consonant and consonant-vowel types up to complex nonsense syllables (ReadingWise, 2014). Computer voice recognition assesses students' out-loud reading and provides feedback. Lessons on trigger/Dolch words use multi-sensory, memory, and visualisation techniques, which are effective for 'typical' readers and readers with dyslexia (ReadingWise, 2014). 

Students received 20- to 30-minute decoding lessons in groups of 10 over a 4- to 10-week period. The programme was administered by teachers and teaching assistants with the support of ReadingWise staff; however, the programme required very little direction, as all task instructions were provided by the programme's audio feature. Students received decoding lessons in addition to their usual English lessons, therefore receiving more literacy teaching overall than controls. The programme was accessible only via a login provided by ReadingWise staff, in order to eliminate the possibility of the control group accessing the programme. 

Measures of Literacy  

Age-appropriate measures of literacy were administered to children by ReadingWise staff and scored by researchers blind to students' group membership. The following standardised measures were used to assess students' literacy skills: 

GL Assessment-New Group Reading Test (GL-NGRT) 

An individual online test of sentence completion and passage comprehension that allows comparison with national age-level standards.  

Burt Reading Test – revised (Burt, 1974)  

A standardised pencil-and-paper reading assessment that lists 110 words of increasing difficulty. Words are read aloud until 10 consecutive words have been incorrectly pronounced. 

Schonell Graded Word Reading Test (Schonell, 1955) 

A standardised pencil-and-paper reading assessment that presents 100 words of increasing difficulty. Words are read aloud until 10 consecutive words have been incorrectly pronounced.

ReadingWise Reading Intervention Results 

A total of 160 children were randomly assigned to the ReadingWise (RW) group (N = 85) or the control group (N = 75). The average age (in months) of the intervention group (M = 128.04, SE = 3.64) and the control group (M = 130.93, SE = 4.01) was not significantly different (t(151) = 0.53, p = .50), and there were no pre-test differences between the two groups on the GL-NGRT (t(111) = -1.20, p = .23), Burt (t(140) = -.30, p = .76) or Schonell (t(140) = -.34, p = .74). 

Unexpectedly, the control group made more progress on the GL-NGRT than the intervention group; however, this difference was not significant (t(103) = .073, p = .94). Both the intervention and control groups made progress on the Burt and Schonell, but the intervention group made significantly greater progress than controls on both the Burt (t(141) = -3.41, p < .001; d = 0.24) and the Schonell (t(130) = -4.16, p < .001; d = 0.28), though the effect sizes were relatively small. Overall, the ReadingWise group made an average 9-month reading gain on the Burt and Schonell tests, while controls made an average 3-month gain (see Figure 3 and Figure 4).
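For readers less familiar with the statistics quoted above, the group comparison amounts to an independent-samples t-test on gain scores, with Cohen's d as the standardised effect size. The report does not specify its exact test, so the minimal sketch below assumes a pooled-SD Student's t-test and uses made-up gain scores, not the study's data:

```python
import math
from statistics import mean, stdev

def t_and_d(group_a, group_b):
    """Independent-samples t statistic (pooled SD) and Cohen's d
    for the mean difference between two groups."""
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = mean(group_a), mean(group_b)
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * stdev(group_a) ** 2 +
                    (n2 - 1) * stdev(group_b) ** 2) / (n1 + n2 - 2))
    t = (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))
    d = (m1 - m2) / sp  # standardised mean difference
    return t, d

# Hypothetical reading-age gains in months (NOT the study's data)
intervention = [9, 11, 8, 10, 12, 7, 9, 10]
control = [3, 4, 2, 5, 3, 4, 2, 3]
t, d = t_and_d(intervention, control)  # d is large for these toy numbers
```

Note that d expresses the group difference in standard-deviation units, which is why a 9-month versus 3-month gain can still yield a small d when within-group variability is high.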


These gains were consistent when examining EAL students and English-only speaking students separately and primary students and secondary students separately (see Table 1). The highest reading age gains were among secondary school students in the intervention group who gained  10.58 months as measured by Burt and 12.92 months as measured by Schonell. 



This pilot study found that UK school children gained an average of 9 months in reading age upon completion of the decoding module, in comparison to controls. These findings were consistent among primary/secondary students and EAL/English-only speaking students. In this section, we present the findings in the context of the existing literature and discuss the implications of our results. Lastly, we explore the strengths and limitations of the current study and suggest ways in which future research can provide further support for the use of computer-assisted literacy learning. 

This pilot study heeded the call for the use of technological, cognitive, and empirical resources in the field of literacy learning (Beddington et al., 2008) and ReadingWise met the demand for a short-term, focused intervention for enhancing language acquisition (Brooks, 2003). The findings of the current study are inconsistent with previous literature that shows limited evidence of the effectiveness of computer-based literacy learning (Brooks et al, 2007; Torgerson & Zhu, 2003; Khan & Gorard, 2012). 

Furthermore, many previous interventions, both computer-assisted and teacher-directed, have been shown to be unsuccessful for individuals with phonological retrieval deficits, encoding deficits, low verbal ability, and developmental delays (Al Otaiba & Fuchs, 2002). Through individualized instruction tailored to the ability of the individual, the decoding module successfully aided the lowest-performing primary and secondary school readers, and thus the prospect of matching children's reading age to their chronological age, without the direct support of a teacher, appears to be a realistic objective.

It is important, however, to acknowledge the limitations of our study and suggest ways in which future research can overcome these problems. One limitation of the current study is the sample size. Additionally, there were gaps in the data, including a number of missing pre- and post-intervention dates, which are crucial to analysis and thus limited our ability to calculate ratio gains. 

A rigorous large-scale RCT is required to confirm our findings. We also found that some children displayed reductions in reading on one scale (GL-NGRT) and improvements on the others (Burt and Schonell), which raises suspicions of non-systematic error, including deliberate non-engagement, perhaps due to task fatigue on the final GL-NGRT assessment. 

Future research may overcome this problem by administering assessments in a counterbalanced order. The lack of improvement on the GL-NGRT also leads us to question the relevance of the assessment: unlike the Burt and Schonell, the GL-NGRT focuses on comprehension skills rather than decoding. The next quantitative examination of ReadingWise must ensure assessments are applicable to the intervention modules trialled. 

Despite the limitations discussed above, this pilot study provided a preliminary investigation into the effectiveness of ReadingWise and has indicated that ReadingWise: 

a) is suitable for a diverse range of low literacy abilities; 

b) is appropriate for a large age range; 

c) is sensitive to individual strengths and difficulties; 

d) provides individualized instruction tailored to the ability of the student; 

e) has low teacher input; 

f) is novel and engaging for children; and 

g) supports children with learning difficulties. 

The introduction of this report also highlighted the importance of achieving long-term literacy gains rather than short-lived benefits. This was beyond the scope of this paper; however, future longitudinal research on ReadingWise may be able to answer this question. Although this study covered a large age group (ages 6-15), it would also be interesting for future research to examine the success of ReadingWise among adult learners. Further research is also required to investigate other modules provided by the ReadingWise programme, including a more advanced comprehension module that builds upon the decoding lessons. 

In conclusion, this pilot study has achieved the aims set out at the beginning of this report: (i) testing the adequacy of the programme; (ii) ensuring the programme was engaging; (iii) rehearsing the research protocol; (iv) ensuring the teacher training was adequate; (v) identifying any limitations of methods and procedures; and (vi) assessing the feasibility of a large-scale study. 

We argue that the ReadingWise English programme could represent an accelerated, learner-driven reading programme for struggling readers, a claim to be confirmed by replication in a rigorous large-scale RCT and through qualitative research that examines individual learners' experiences in depth. To improve the quality of the evaluation, future research should combine qualitative and quantitative data, as mixed-method approaches have pragmatic advantages (Driscoll, Appiah-Yeboah, Salib, & Rupert, 2007) that could further develop our understanding of the ReadingWise English programme. 

ReadingWise in the Classroom

Since this pilot RCT study, ReadingWise has helped over 100,000 pupils in primary and secondary school advance their reading abilities, fluency, and enjoyment of reading. 

Read our other studies: 

University of Cambridge, Reading More Wisely

What’s Missing from the Reading Strategy

DfE-funded RCT

“ReadingWise has been great for our school. It was recommended to us from a cluster school and what a difference it has made in such a short time! We have used ReadingWise as a targeted intervention for 10 children as a trial and have been blown away by the results.”

“In only one term, the majority of our children made at least 10 months of progress in their reading age, some even more! We have decided to purchase ReadingWise for the whole school and are very excited to see the progress throughout the school.”

“We are extremely impressed with the impact ReadingWise has had in our school and are very excited to continue our journey. I would recommend ReadingWise to any school!”

Hannah Boardman, Irlam Primary School, Salford, UK

See what other headteachers and literacy leads say about ReadingWise in our case studies and frequently updated testimonials.

Boost your school's literacy results: Arrange your 20-minute demo at a time to suit you. 

We highly recommend spending 20 minutes running through the programme with our friendly and knowledgeable team by booking a demo to suit your diary.


Adams, M. J. (2006). The promise of automatic speech recognition for fostering literacy growth in children and adults. In M. C. McKenna, L. Labbo, R. D. Kieffer, & D.  Reinking (Eds.), International handbook of literacy and technology (Vol. 2, pp. 109– 128). Mahwah, NJ: Lawrence Erlbaum Associates. 

Al Otaiba, S., & Fuchs, D. (2002). Characteristics of children who are unresponsive to early literacy intervention: A review of the literature. Remedial and Special Education, 23(5), 300-316.

Anderson, R. C., Wilson, P. T., & Fielding, L. G. (1988). Growth in reading and how children spend their time outside of school. Reading Research Quarterly, 285-303.

Blachman, B. A. (Ed.). (1997). Foundations of reading acquisition and dyslexia: Implications for early intervention. Mahwah, NJ: Erlbaum. 

Brooks, G., Miles, J. N. V., Torgerson, C. J., & Torgerson, D. J. (2007). Is an intervention using computer software effective in literacy learning? A randomised controlled trial. Educational Studies, 32(2), 133-134.

Byrne, B., & Fielding-Barnsley, R. (1989). Phonemic awareness and letter knowledge in the child’s acquisition of the alphabetic principle. Journal of Educational Psychology, 81,  313–321. 

Byrne, B., & Fielding-Barnsley, R. (1995). Evaluation of a program to teach phonemic awareness to young children: A 2- and 3-year follow-up and a new preschool trial.  Journal of Educational Psychology, 87, 488 –503. 

Driscoll, D. L., Appiah-Yeboah, A., Salib, P., & Rupert, D. J. (2007). Merging qualitative and quantitative data in mixed methods research: How to and why not. Ecological and  Environmental Anthropology (University of Georgia).

Goswami, U., & Bryant, P. E. (1990). Phonological skills and learning to read. London:  Erlbaum. 

Hatcher, P. J., et al. (2006). Evidence for the effectiveness of the Early Literacy Support programme. British Journal of Educational Psychology, 76(2), 351-367.

Heise, B.L., Papalewis, R. & Tanner, D.E. (1991) Building base vocabulary with computer-assisted instruction, Teacher Education Quarterly, 18, 55–63. 

Kim, J. S., Samson, J. F., Fitzgerald, R., & Hartry, A. (2010). A randomised experiment of a mixed-methods literacy intervention for struggling readers in grades 4-6: Effects on word reading efficiency, reading comprehension and vocabulary, and oral reading fluency. Reading and Writing, 23, 1109-1129. 

Kim, J. S., Capotosto, L., Hartry, A., & Fitzgerald, R. (2011). Can a mixed-method literacy intervention improve the reading achievement of low-performing elementary school students in an after-school program? Results from a randomised controlled trial of READ 180 Enterprise. 

Lin, A., Podell, D.M. & Rein, N. (1991) The effects of CAI on word recognition in mildly mentally handicapped and non-handicapped learners, Journal of Special Education  Technology, 11, 16–25. 

National Literacy Trust, (2014). Illiterate adults in England. Retrieved October 10, 2014, from:  

National Reading Panel. (2000). Report of the National Reading Panel: Reports of the subgroups. Washington, DC: National Institute of Child Health and Human Development Clearing House. 

Rack, J., Hulme, C., & Snowling, M. J. (1993). Learning to read: A theoretical synthesis. In H.  Reese (Ed.), Advances in child development and behavior (Vol. 24, pp. 99–132). New  York: Academic Press. 

Stanovich, K. E. (1986). Matthew effects in reading: Some consequences of individual differences in the acquisition of literacy. Reading Research Quarterly, 21, 360-407.

Torgesen, J. K. (2005). Recent discoveries on remedial interventions for children with dyslexia. In M. J.Snowling & C. Hulme (Eds.), The science of reading: A handbook  (pp. 521–537). Oxford: Blackwell. 

Torgerson, C.J. & Elbourne, D. (2002) A systematic review and meta–analysis of the effectiveness of information and communication technology (ICT) on the teaching of spelling, Journal of Research in Reading, 25, 129–143. 

Torgerson, C.J. & Torgerson, D.J. (2001) The need for randomised controlled trials in educational research, British Journal of Educational Studies, 49, 316–328. 

Torgerson, C.J. & Zhu, D. (2003) A systematic review and meta-analysis of the effectiveness of ICT on literacy learning in English, 5–16, in: Research Evidence in Education  Library (London, EPPI-Centre, Social Science Research Unit, Institute of Education,  University of London).

