Teachers are constantly being urged to “differentiate” or “individualise” the curriculum, so that all learners can succeed on tasks and materials at their own optimum level of challenge. However, especially in large classes with a wide range of ability, actually doing this presents huge difficulties in the time available.
How can the class teacher closely monitor the day-to-day activity of all their pupils, check that both the quantity and quality of that activity are optimally effective, and intervene to shape it towards greater effectiveness?
Computerised “Learning Information Systems” (LIS) provide teachers with a curriculum-based assessment tool to enable them to achieve this otherwise daunting task.
A Learning Information System enables learners to take curriculum-based assessments at the computer whenever the learner and/or teacher feel they are ready. The LIS delivers assessment adaptively, investigating only the relevant areas chosen for the individual learner. In some cases, the LIS responds to each learner so that only items which are neither too hard nor too easy and which discriminate well are presented. This greatly shortens the time taken to self-assess, and protects the learner from any sense of struggle or failure. The Learning Information System is specifically intended to have strong formative effects on subsequent learning.
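The adaptive selection of assessment items described above can be sketched as follows. This is a minimal illustration only, not the actual algorithm of any LIS product; the item bank, the 0-1 difficulty scale, and all thresholds are hypothetical assumptions.

```python
import random

random.seed(42)  # repeatable demo

# Hypothetical item bank: each item has a difficulty (0-1 scale) and a
# discrimination index (how well it separates stronger from weaker readers).
ITEM_BANK = [
    {"id": i,
     "difficulty": random.uniform(0, 1),
     "discrimination": random.uniform(0, 1)}
    for i in range(200)
]

def select_items(ability_estimate, n_items=10,
                 difficulty_window=0.15, min_discrimination=0.5):
    """Pick items that are neither too hard nor too easy for this learner
    and that discriminate well, shortening the assessment."""
    candidates = [
        item for item in ITEM_BANK
        if abs(item["difficulty"] - ability_estimate) <= difficulty_window
        and item["discrimination"] >= min_discrimination
    ]
    # Prefer the most discriminating items among those at the right level.
    candidates.sort(key=lambda item: item["discrimination"], reverse=True)
    return candidates[:n_items]

# A learner currently estimated at 0.6 on the hypothetical ability scale:
items = select_items(ability_estimate=0.6)
```

Because items far from the learner's level are filtered out before selection, the learner never meets questions likely to produce a sense of failure, which is the protective effect described above.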
The LIS then gives detailed feedback on performance – to the student, to the teacher, and to any other interested parties such as the parents. When the performance is competent and achieves the benchmark standard, this is made immediately evident to the learner. Where the performance shows a need for further practice or teacher intervention, the LIS highlights this.
Learning Information Systems do not incorporate computerised teaching – they leave the content, form and style of teaching to the teacher’s professional judgement. They are not to be confused with “Integrated Learning Systems” (ILS), which deliver both computerised teaching and computerised assessment of only that teaching; ILS tend to be very expensive and have been found effective in the UK only in mathematics.
What Curriculum Areas?
Besides the Accelerated Reader for reading, LIS are currently also available for mathematics, writing, and vocabulary. A number of schools in the UK are using the Accelerated Math LIS.
Learning Information System software and associated support and training are available from the following suppliers:
Neither the University of Dundee nor any of its staff has any commercial interest in LIS software.
How does a LIS work?
Learning Information Systems enable individualised practice and individualised feedback.
They are intended to impact on learning effectiveness by providing structured and detailed formative feedback:
- directly to the learner
- directly to the teacher
- to the learner mediated and interpreted by the teacher
- to any other stakeholders, e.g. parents, head teacher or principal, advisors and inspectors.
A Learning Information System (LIS) enables:
- more frequent and more detailed assessment
  - in less time
  - with greater consistency
- immediate formative feedback to the learner
  - which can raise meta-cognitive awareness
  - and can motivate learners to work harder, work smarter, and aim higher
- formative feedback to the teacher about the effectiveness of individual children’s learning
  - which is immediate and also longitudinal
- class-wide diagnostic information and at-risk alerting
  - which can help the teacher promote & manage effective learning.
For the teacher, detailed feedback on the performance of all the learners in the class is provided with no expenditure of teacher assessment time (although the teacher needs time to review the data, reflect and decide upon appropriate professional action). An indication of the degree of success in each learner’s performance is available, in relation to their individual functional operating level and the number and difficulty of tasks they have attempted. The teacher is thus enabled to ensure that all learners are consistently operating within their own “zone of proximal development” – their optimum level of challenge.
Of course, information on learning effectiveness is of no significance if it is not acted upon. Teachers thus need to respond to LIS data by intervening appropriately with learners (through informal learning counselling, adapted selection of tasks, or other guidance), subsequently using the LIS to track the effectiveness of their intervention.
Linking an LIS with a Tutoring System
The Centre for Peer Learning (CPL) is particularly interested in the linkage of its systems for tutoring by peers, parents and volunteers to a relevant Learning Information System. In this context, the LIS offers an effective and efficient means of tracking, evaluating and adjusting individualised tutoring activity.
For example, with the Accelerated Reader teachers may choose to allow pupils to self-assess on books read To and With them, as well as those read independently and silently, especially in the case of young or delayed readers. Where the programme is used with classwide, selective or elective peer tutoring, or parent or volunteer tutoring, AR tracks and flags which books have been read independently and which with a tutor. Both tutor and tutee may subsequently independently self-assess their comprehension of the book, enhancing motivation, meta-cognition and accountability. For emergently literate tutees, the most recent version also provides quizzes with digitised speech output. The default condition is that pupils can self-assess on a book only once.
Where the Centre for Peer Learning tutoring system known as “Paired Reading” is coupled with the Accelerated Reader LIS, the combination is known as “Duolog Reading”. Paired Maths with the Accelerated Math LIS is known as Duolog Math. Paired Writing with the Accelerated Writer LIS is known as Duolog Writing.
For further information on CPL tutoring systems, see the following:
- Topping, K. J. (2001) Peer and parent assisted learning in reading, writing, spelling and thinking skills: Spotlight No. 82. Edinburgh: Scottish Council for Research in Education. [26 April].
- Topping, K. J. (2001) Thinking Reading Writing: A Practical Guide To Paired Learning with Peers, Parents & Volunteers. New York & London: Continuum International.
The Read On Project [LINK]
- Paired collaborative writing. Research in Education (Scottish Council for Research in Education Newsletter), No. 67, Winter 2000/2001.
The Paired Learning for Inclusion Project [LINK]
- Topping, K. J. (2001) Peer and parent assisted learning in maths, science and ICT: Spotlight No. 83. Edinburgh: Scottish Council for Research in Education. [26 April].
The Problem-Solving Project [LINK]
- Topping, K. J. (2001). Tutoring by peers, family and volunteers. Geneva: International Bureau of Education, United Nations Educational, Scientific and Cultural Organisation (UNESCO). [Online] Available: [January 1] (Also in translation).
Also see other publications linked from Keith Topping’s homepage.
The Accelerated Reader (AR) is a system for free-standing computer-assisted individualised learner self-assessment of comprehension of “real books”.
Learners select individualised books from the many thousands of titles for which AR quizzes are available, and read at their own pace, at school and at home. On completion, they take the multiple-choice AR comprehension quiz for that specific book at the computer, which scores the performance and keeps records. Each book has a maximum point value according to its length and difficulty. When the learner self-assesses, the computer awards points up to this maximum, according to their number of correct responses.
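The points mechanism just described can be illustrated with a short sketch. The proportional scaling and the pass threshold shown here are assumptions made for illustration; AR’s actual scoring rules differ in detail.

```python
def award_points(max_points, num_correct, num_questions, pass_threshold=0.6):
    """Award reading-practice points in proportion to quiz performance.

    `max_points` reflects the book's length and difficulty; the proportion
    of correct responses scales the award. The 0.6 pass threshold is
    illustrative, not the actual AR rule.
    """
    if num_questions == 0:
        return 0.0
    proportion_correct = num_correct / num_questions
    if proportion_correct < pass_threshold:
        return 0.0  # too few correct answers to earn any points
    return round(max_points * proportion_correct, 1)

# A 12-point book with 9 of 10 quiz questions answered correctly:
points = award_points(max_points=12, num_correct=9, num_questions=10)  # 10.8
```

Tying the maximum award to book length and difficulty means accumulated points behave as an aggregate measure of the quantity and challenge of successful reading, not merely of quizzes taken.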
The AR quizzes are relatively brief and primarily assess literal comprehension rather than idiosyncratic reader inferences or other more complex responses (which might be culturally specific). They do not pretend to measure all relevant reading behaviours. Indeed, AR questions are deliberately restricted to those which demonstrate adequate psychometric reliability. They are thus likely to be narrower but more psychometrically reliable than those made up on the fly by creative teachers. Consequently, AR points gained are likely to be a consistent and accurate measure of the quantity and difficulty of words read and comprehended, and therefore a useful aggregate measure of successful reading practice.
For the assessment of idiosyncratic reader inference in a more open-ended way, teachers might generate their own questions, have learners who have read the book generate questions for each other, or utilise the “Literacy Skills” tests extension of AR. The latter is available for a smaller number of titles, assessing and reporting on 24 generic higher-order literacy skills, including inferential reasoning, main idea, cause and effect, characterisation and recognising plot (but it does not claim the reliability and validity of the regular reading practice quizzes).
The AR programme provides the teacher with an automatically updated analysis of scores for individuals, whole classes, or any other groupings. This indicates average percent correct on quizzes, difficulty of books read, points earned, and other diagnostic information. The software designers recommend that teachers target a quiz success rate of 85% correct as optimal for learners. Automatically computer-generated “At Risk” reports flag a need for the teacher to intervene with any learner whose book reading activities appear currently ineffective. This might include high ability as well as low ability learners.
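A report of this “At Risk” kind can be sketched as follows. The 85% target comes from the text above, but the band thresholds, the averaging window and the flag wording are illustrative assumptions, not AR’s actual rules.

```python
TARGET_PERCENT_CORRECT = 85  # success rate the software designers recommend

def at_risk_report(records, window=5, low_band=70, high_band=95):
    """Flag learners whose recent quiz averages suggest ineffective reading.

    `records` maps learner name -> list of quiz percent-correct scores.
    The bands are illustrative: averages well below the 85% target suggest
    books are too hard (or reading is careless), while near-perfect averages
    may mean books are too easy - which is how high-ability learners can be
    flagged as well as low-ability ones.
    """
    flags = {}
    for learner, scores in records.items():
        recent = scores[-window:]  # only the most recent quizzes
        if not recent:
            continue
        average = sum(recent) / len(recent)
        if average < low_band:
            flags[learner] = "below target: books may be too hard"
        elif average > high_band:
            flags[learner] = "above target: books may be too easy"
    return flags

report = at_risk_report({
    "Anna": [60, 55, 70],    # struggling - flagged
    "Ben":  [100, 98, 100],  # insufficiently challenged - flagged
    "Cara": [85, 90, 80],    # near the 85% target - not flagged
})
```

The report itself decides nothing; as the following section stresses, its value depends on the teacher responding to the flags with appropriate intervention.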
The software originated in the USA. AR is currently in over half of the schools in the USA, in many hundreds in the UK, and its use is spreading to other countries. AR is supported by extensive staff development opportunities, leading to a wider school development programme known as “Reading Renaissance”, now becoming established in the UK. The associated “Model Classroom Program” identifies and celebrates classrooms in which good practice in the implementation of Reading Renaissance has been evidenced.
What Do We Know about Effectiveness?
Only the abstracts of papers documenting research on the Accelerated Reader undertaken by the University of Dundee are given here. Other evidence is reviewed in the third item listed below. Interested readers should consult the full texts at source. Parallel research on Accelerated Math has been conducted by Professor James Ysseldyke at the University of Minnesota.
Vollands, S.R., Topping, K.J. & Evans, H.M. (1999) Computerized self-assessment of reading comprehension with the Accelerated Reader: Action research. Reading & Writing Quarterly, 15(3), 197-211 (themed issue on Electronic Literacy).
This study was a quasi-experimental action research evaluation of a program for computerized self-assessment of reading comprehension (the Accelerated Reader). It looked at the formative effects on reading achievement and motivation in two schools in severely socio-economically disadvantaged areas. The results suggested that the program, even when less than fully implemented, yielded gains in reading achievement for these at-risk readers that were superior to gains from regular classroom teaching and an alternative intensive method, even with less time devoted to class silent reading practice than in comparison classes. Additionally, the program yielded significant improvement in measured attitudes to reading for girls.
Topping, K.J. & Paul, T.D. (1999) Computer-assisted assessment of practice at reading: A large scale survey using Accelerated Reader data. Reading & Writing Quarterly, 15(3), 213-231 (themed issue on Electronic Literacy).
This study explored the relationship between practice at reading, student reading performance, and organizational features of the school system. Data generated through a program for computer-assisted student self-assessment of reading comprehension (the Accelerated Reader) were gathered as a measure of reading practice for over 659,000 students in grades K-12 in one school year. Students and states performing high and low on reading tests were compared. The data suggested that student reading ability was strongly positively related to amount of in-school reading practice. The time allocated in school to reading practice on self-selected materials declined after grades 5/6. The amount of reading practice was negatively correlated with school size. More reading practice occurred in private than public schools. Implications for action are discussed.
Topping, K. J. (1999) Formative assessment of reading comprehension by computer. Reading OnLine (I.R.A.)[November 4].
This multimedia review sought to consider the advantages and disadvantages of the Accelerated Reader LIS, and the evidence for its effectiveness. The full text is freely available by clicking the address above. The conclusions are given below:
Of 12 studies of AR that cite substantial outcome data, mostly on norm-referenced test scores, only one failed to find evidence of a positive impact. However, these studies are of very mixed quality, with many failing to control confounding variables or to provide data on implementation integrity, and are consequently unable to be definitive about causal direction. Nevertheless, more recent studies suggest that, with good quality implementation, the AR program can contribute to teacher effectiveness, in terms of value added in reading and other core curricular areas. In short, it is not whether you have the software but rather what you do with it that makes a difference.
The characteristics of good and poor implementation were outlined, together with the potential advantages and disadvantages of the program. Whatever its advantages, AR is not a substitute for balanced reading instruction. Rather, it is intended as a supplementary and complementary resource – albeit a powerful one under the right circumstances – that can help the teacher deliver the curriculum effectively.
These conclusions have implications for local, state, and national guidelines on literacy instruction that aim to raise teacher effectiveness and standards of achievement. Placing intelligent software in classrooms does not guarantee it will be used intelligently. Information technology is not a replacement for the teaching professional, but a tool with the potential to enhance teacher effectiveness. In the case of AR, as for other Learning Information Systems, appropriate and sufficient high-quality training and support for teachers are needed if implementation integrity is to be sustained at the level necessary to raise student attainment.
Topping, K. J. & Sanders, W. L. (2000). Teacher effectiveness and computer assessment of reading: Relating value added and learning information system data. School Effectiveness and School Improvement, 11(3), 305-337.
The Tennessee Value-Added Assessment System (TVAAS) has for several years used the largest longitudinally merged database of student achievement data in the U.S.A. to generate estimates of school system, school, and teacher effects on indicators of student learning in a number of subjects, including reading comprehension. A Learning Information System (LIS) (not to be confused with an Integrated Learning System) is a formative, curriculum-based, criterion-referenced assessment delivered and scored by computer, which provides detailed feedback in different forms for different stakeholders. The Accelerated Reader (AR) LIS for reading comprehension of real (“trade”) books captures the data for each student on the length and difficulty of each book, and the student’s percent correct response on the assessment. It aims to help teachers promote and manage effective reading practice. AR data on 62,739 students from grades 2 to 8 in Tennessee schools was merged with the TVAAS teacher effects database, and relationships between these independently obtained measures explored. This illuminated some factors in teacher management of the quality and quantity of student reading practice which could be causative in improving reading comprehension, and the impact of implementation integrity of the LIS.
Topping, K. J. & Fisher, A. M. (2003). Computerised formative assessment of reading comprehension: Field trials in the UK. Journal of Research in Reading, 26(3), 267-279.
Increased curriculum time allocated to reading might not be effective in raising achievement. Teachers need to closely monitor and manage both the quality and quantity of individualised reading of all their pupils for optimal effectiveness. “Learning Information Systems” (LIS) for reading such as the “Accelerated Reader” (AR) enable this through individualised computerised assessment of pupil comprehension of “real books”, with feedback to both pupil and teacher. This study explored the impact of AR on reading achievement in 13 schools of different types spread across the UK, the majority socio-economically disadvantaged. Participating pupils were aged 7-14 years. Pre-post norm-referenced gains in reading achievement were measured by group paper reading tests and a computer-based adaptive reading test. The implementation integrity of AR was assessed by direct observation by researchers and through data generated by the programme itself. On both paper and computer-based reading tests, on aggregate pupils in the 13 schools gained in reading at abnormally high and statistically significant rates. Boys tended to show larger gains than girls on the paper test. However, implementation integrity was very variable. In particular, some teachers failed to intervene in response to AR data indicating that pupils were reading ineffectively. AR appears to have potential for raising reading achievement, but only if implemented appropriately.
Topping, K. J., Samuels, J., & Paul, T. (2007). Does practice make perfect? Independent reading quantity, quality and student achievement. Learning and Instruction, 17(3), 253-264.
Does reading practice make perfect? Or is reading achievement related to the quality of practice as well as the quantity? To answer these questions, data on 45,670 students in grades 1-12 who read over 3 million books were analyzed. Measures largely of quantity (engaged reading volume) and purely of quality (success in reading comprehension) showed a positive relationship with achievement gain at all levels of achievement. However, both high quantity and high quality in combination were necessary for high achievement gains, especially for older students. Both were weakly associated with student initial reading achievement, but more strongly associated with the classroom in which the student was enrolled, possibly suggesting the properties of teacher intervention in guiding independent reading were important. Implications for theory-building, research and practice are explored.
Topping, K. J., Samuels, J., & Paul, T. (2008). Independent reading: The relationship of challenge, non-fiction and gender to achievement. British Educational Research Journal, 34(4), 505-524.
To explore whether different balances of fiction/non-fiction reading and challenge might help explain differences in reading achievement between genders, data on 45,670 pupils who independently read over 3 million books were analyzed. Moderate (rather than high or low) levels of challenge were positively associated with achievement gain, but non-fiction read was generally more challenging than fiction. Non-fiction reading was negatively correlated with successful comprehension and reading achievement gain. Overall, boys appeared to read less than girls, but proportionately more non-fiction, but this less carefully – especially in the higher grades – and had lower reading achievement. Differences between classrooms in promoting successful comprehension of non-fiction were evident, suggesting intervention could improve achievement. Implications for research and practice are explored.
Topping, K. J., Samuels, J., & Paul, T. (2007). Computerized assessment of independent reading: Effects of implementation quality on achievement gain. School Effectiveness and School Improvement, 18(2), 191-208.
This study elaborated the “what works?” question by exploring the effects of variability in program implementation quality on achievement. Particularly, the effects on achievement of computerized assessment in reading were investigated, analyzing data on 51,000 students in grades 1-12 who read over three million books. When minimum implementation quality criteria were met, the positive effect of computerized assessment was higher in the earlier grades and for lower achievement students. Implementation quality tended to decline at higher grade levels. With higher implementation quality, reading achievement gains were higher for students of all levels of achievement and across all grades, but especially in the higher grades. Very high gains and effect sizes were evident with very high implementation quality, particularly in grades 1–4. Implications for practice, the interpretation of research, and policy are noted.
Topping, K. J. (2006). Accelerated Reading in Specialist Schools. Dundee: Centre for Peer Learning, University of Dundee.