On NAPLAN

On Monday and Tuesday of this week, I had the pleasure of being invited to participate in the National Assessments in the Age of Global Metrics symposium at Deakin University. The symposium was organised by Deakin University’s Research for Educational Impact (REDI) in collaboration with the Laboratory of International Assessment Studies, and aimed to “bring together scholars and practitioners from around the world to examine models of national assessments and explore how they are affecting the policy discourse and the practices of education in different parts of the world.”

More specifically, the symposium addressed the following questions:

  • How are national and sub-national assessments evolving in the age of global metrics?
  • What is the relationship between national assessments and international large-scale assessments (ILSAs)?
  • What effects are they having?
  • What can we learn from the experiences over the past couple of decades?

What I liked about this event was that it aimed to bring together academics from diverse backgrounds to engage in dialogue, and maybe even learn from each other, in the field of large-scale international assessment. And it was a bit of a star cast, with presentations from Ray Adams (ACER), Sara Ruto (PAL), Anil Kanjee (Tshwane University of Technology), Sue Thompson (ACER), Hans Wagemaker (ex-IEA), Sam Sellar (MMU), and Barry McGaw (ex-ACARA). Ray Adams’ presentation was very interesting, making the case for homogenising ILSAs using criteria to enable a form of meta-standardisation; I may blog on this at some stage once I have thought about it further.

On the Tuesday morning there was a panel discussion that addressed the question ‘What’s the point of national assessments?’. One of the participants was Barry McGaw, who was one of the architects of Australia’s NAPLAN and MySchool intervention, an area I have done a fair bit of work in. I must admit, during the presentation I was a bit annoyed, and when there was a chance for discussion I asked a few questions. Because this was live-streamed, a number of people tweeted that I’d asked some questions, and I got lots of requests to share what they were. Here’s my list of questions:

  1. If NAPLAN is impactful, and I think on this we agree, why is it only ever impactful in positive ways such as in the anecdote that you shared? Why aren’t we equally interested in the negative impacts including trying to understand all of those schools that have gone backwards?
  2. Given the objective of this event, I am wondering which qualitative researchers you have read on the effects of NAPLAN that informed your attempts to make the assessments better through designing responses to the unintended consequences of the assessment?
  3. Results across Australia have flatlined since 2010*; how do you justify the claim that NAPLAN has been a success on its own terms?
  4. I’m always concerned when people mischaracterise the unintended consequences of tests as being ‘teaching to the test’. It would be better to see a hierarchy of unintended consequences ranging from:
    1. making decisions about people’s livelihoods such as whether to renew contracts for teachers based on NAPLAN results
    2. making decisions about who to enrol in a school or a particular program based on NAPLAN results
    3. a narrowed curriculum focus where some subjects are largely ignored, or worse, not taught at all so that schools can focus on NAPLAN prep
    4. teaching to the test, which may or may not be a problem depending upon how closely the test aligns with the curriculum, etc.
  5. The problem with the branched design for online tests is not whether students will like it or not; it is whether schools have the computational capacity to run the tests, extending to whether BYOD schools advantage or disadvantage some students depending upon the type of device they use, problems of internet connection in rural and remote schools, bandwidth in large schools, and so on. I am interested in how you characterise this as a success?**

I was unimpressed with the answers I got, but I imagine that’s my problem. I think that psychometricians do rigorous research and have important insights into education systems that need to be taken seriously, but I equally think that qualitative fieldwork is desperately needed to advance the validity of these assessments, and when you shut that insight down you only damage your own assessments in the long run.

* At the end of the session, John Ainley from ACER came over and suggested to me that there had been significant improvement in Year 3 Reading and Year 5 Numeracy, with a bump in 2016 and 2017. I conceded the point; I stopped researching NAPLAN in 2015, so I hadn’t updated my trendlines. Across the other domains, however, results have remained fairly stable since 2010. This is known as the ‘washback effect’ in the assessment literature.

** I had this question down to ask, but felt I had gone on too long, so I didn’t ask it.

#Ascilite17

Today, I intend to spend my 6 minutes discussing Vision 8: ‘Teaching is delegated to computers’, to argue that this is both a past desire and a possible future that we need to contend with, particularly if we continue to allow conversations about learning to be framed in economic terms such as efficiency and effectiveness. Taking an historical approach, I will show that teaching machines exist and have existed for a long time, and that these machines have always presented themselves in the language of efficiency and effectiveness. If we don’t want to be taught by machines, maybe this is something we should consider.

In the education technology industry there is a widely held belief that technical, or digital, personalisation of learning is the next ‘revolution’. Pearson, one of the largest edu-businesses in the world, positioned itself in 2014 to take advantage of the emerging ‘digital ocean’ of Big Data ‘transforming’ learning in schools and classrooms. Corporations, philanthropic organisations and not-for-profits are just some of the policy actors devoting significant energy and resources to pursuing personalised learning in classrooms using adaptive technologies. Universities are heavily engaged in this space, promoting personalised learning solutions, learning analytics and adaptive platforms to solve various problems, including student drop-out rates, the need for more effective and efficient forms of tutoring, and the monitoring of student experience.

Teaching machines and personal learning – the story of the analogue

Of course, these digital and adaptive technologies have analogue antecedents. In the early part of the 20th century there was much interest in the creation of teaching machines. While teaching machines may be difficult to define, given that many technologies or machines exist in schools and classrooms that do not specifically teach, the standard definition is “an automatic or self-controlling device that (a) presents a unit of information, (b) provides some means for the learner to respond to the information, and (c) provides feedback about the correctness of the learner’s responses” (Benjamin, 1988, p. 704).

The best-known purveyor of teaching machines was B.F. Skinner, the behavioural psychologist, but the earliest patented teaching machine that satisfies the above requirements was developed and patented by Sidney Pressey in 1928. This machine – the Machine for Intelligence Tests – used a large rotating drum to expose written material in a small window (Benjamin, 1988, p. 705). Students were presented with multiple-choice questions with four alternative answers. Pressey’s machine could operate in a testing or a teaching mode. In the testing mode, the machine recorded student responses and tallied the correct answers on a counter on the back of the machine. In the teaching mode, a lever was raised on the back that prevented the drum from rolling on to the next question in the viewing panel until the correct answer had been entered (Benjamin, 1988, p. 705). In the teaching mode, students could attempt each question multiple times, and the counter on the back recorded the number of attempts each student had made.
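For readers who think in code, the two modes reduce to a simple control loop. Below is a minimal sketch in Python; the class, its names and the question representation are my own illustrative assumptions, not a description of Pressey’s actual mechanism.

```python
# A minimal sketch (my own illustration, not Pressey's mechanism) of the
# machine's two modes. Questions are (prompt, options, correct_index) tuples.

class PresseyMachine:
    def __init__(self, questions, teaching_mode=True):
        self.questions = questions
        self.teaching_mode = teaching_mode
        self.position = 0   # the question currently in the viewing window
        self.attempts = 0   # counter on the back of the machine
        self.correct = 0

    def answer(self, choice):
        """Record a response and advance the drum according to the mode."""
        if self.position >= len(self.questions):
            return  # drum has reached the end of its material
        _prompt, _options, correct_index = self.questions[self.position]
        self.attempts += 1
        if choice == correct_index:
            self.correct += 1
            self.position += 1  # drum rolls on to the next question
        elif not self.teaching_mode:
            self.position += 1  # testing mode: advance regardless of accuracy
        # teaching mode + wrong answer: the raised lever holds the drum, so
        # the same question remains in the window until answered correctly
```

In teaching mode the only way forward is the correct response, which is the behaviourist core of the design: the machine withholds progress until the desired behaviour is produced.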

What is notable about Pressey’s teaching machine is that it is predicated on behavioural psychology and contemporary views on what constitutes learning – a moral imprimatur to be both efficient and precise. The machine could, if desired, be fitted with a reward dial that determined how many correct responses a student would need to gain candy as a reward (Benjamin, 1988, p. 706). The logic of Pressey’s machine was that efficient machines operating on behaviourist principles could make learning more effective. Most telling, however, is what Pressey wrote in his 1933 book Psychology and the New Education:

There must be an “industrial revolution” in education, in which educational science and the ingenuity of educational technology combine to modernize the grossly inefficient and clumsy procedures of conventional education. Work in the schools of the future will be marvelously though simply organized, so as to adjust almost automatically to individual differences and the characteristics of the learning process… for the freeing of teacher and pupil from educational drudgery and incompetence. (pp. 582-583)

The key logics here are individualism, efficiency and the ‘freeing’ of teachers and pupils from educational drudgery. What this demonstrates is that teaching machines have long manifested the desires of their creators for solutions to education problems: increased student engagement, forms of personalised instruction and learning, and more efficient learning as demonstrated through the precise and correct movement, or speed, that a student manifests in these machinic tasks. Of course, Pressey’s machine (and those of his contemporaries) should be seen as analogue machines, in that they specified singular paths and patterns (i.e. spaces) through which a student progresses. The Pressey example shows that the desire for personal, efficient technologies to accelerate learning in ways that free students from drudgery and, presumably, from the incompetence of the teacher is neither new nor novel. It is almost as though these desires are hardwired into the problem of education and the history of solutionism in schools and other educational institutions.

Digital Personalisation – the story of adaptive technologies

If we move forward to the new millennium, advances in computational capacity, software design and the creation of networked infrastructure have created renewed interest in the possibilities of adaptive learning solutions. Personalised learning encompasses many digital promises in education, but its appeal stems largely from the seductive message of the uniqueness of each learner and the promise of learning experiences that can adapt to each individual with improved efficiency. Learning personalisation is the modification of resources and environments using adaptive technologies “with the goal that learners remain invested and continue to seek the highest level of knowledge possible for them in each specific field of knowledge” (Thompson & Cook, 2017, p. 746). Adaptive learning systems are “education technologies that can respond to a student’s interactions in real-time by automatically providing the student with individual support” (EdSurge, 2016, p. 15). Adaptive technologies, then, are responsive to the learner, with the promise that continually adapting and updating the content, human-computer interactions and the tasks that each learner confronts will improve engagement and motivation. The promise is of improved, if not complete, learner investment, as the learning content, environment and tasks are continually updated and responsive to the profiles/patterns of the learner. Intelligent tutoring systems (ITS) are descendants of the behaviourist psychology that informed the early teaching machines. Technological advances mean that the ability to make decisions, to analyse and respond to learner dispositions, and to assess levels of motivation and competence and devise the next challenge is effected in ‘real time’. Many ITS aim to deliver personalised learning experiences to students.
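To make the adaptive loop concrete, here is a minimal sketch in Python of the kind of real-time item selection described above. Everything in it, from the 0–1 difficulty scale to the fixed update rule and the `present_item` stand-in, is an illustrative assumption of mine rather than the model of any actual ITS.

```python
# A minimal, illustrative adaptive loop: keep a running estimate of the
# learner's ability, select the task closest to that estimate, and update
# the estimate after each response. Real systems use far richer learner
# models; the 0-1 scale and 0.1 step here are assumptions for illustration.

def present_item(question, difficulty, true_skill=0.7):
    # Hypothetical stand-in for the student-facing interface; here we just
    # simulate a learner who succeeds on items at or below their skill level.
    return difficulty <= true_skill

def run_adaptive_session(item_bank, n_items=10):
    """item_bank: list of (question, difficulty) pairs, difficulty in [0, 1]."""
    ability = 0.5  # neutral starting estimate of the learner
    for _ in range(min(n_items, len(item_bank))):
        # select the remaining item whose difficulty best matches the estimate
        question, difficulty = min(item_bank, key=lambda p: abs(p[1] - ability))
        item_bank.remove((question, difficulty))
        correct = present_item(question, difficulty)
        # nudge the estimate towards harder or easier material, within bounds
        ability = min(max(ability + (0.1 if correct else -0.1), 0.0), 1.0)
    return ability

bank = [(f"task {i}", i / 10) for i in range(10)]
print(run_adaptive_session(bank))  # run a short simulated session
```

Even in this toy form, the governance-relevant feature is visible: every decision about what a learner sees next is made by the system, continuously and in ‘real time’.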

Implications for education governance

Edtech has long been seen as a key driver of how schools might change. This is big business, and the edtech industry is adept at marketing solutions to education problems. The next suite of technologies being marketed to systems and institutions are adaptive and ‘real-time’, promising rapid feedback and predictive applications. If, as Rhodes (1996) suggests, a new governing structure has emerged in public policy, there is much to be gained from examining those “socio-cybernetic systems” at work.

There is something of a double potential here. On the one hand, personalising technologies in schools have the potential to merely reinforce the programmatic mass consumption that Stiegler sees as disastrous for culture and the human. In this model, personalising technologies act to reinforce the system, such as through a standardisable approach to personalisation: tech solutions are sold to schools and school systems, promising quicker, faster, more efficient tools that will adapt to learners’ levels of motivation, to keep them moving, obedient and compliant, so to speak. There are a number of dangers with this model. First, it may be that technicising engagement in this manner forecloses possibilities to think, to be bored, to communicate, possibilities that may be central to the actualisation of learning itself. Second, there is the possibility that what we will get is not tools to help us think better, but a homogenising technics that in effect standardises a hyper-individualism, when the problems that confront us require forms of collective consideration and action. Third, these technologies may shut down possibilities to understand the future as it might be, as the temporal beings co-constituted by this mode of technics will always be harking back to a past inscribed into the algorithms and code at the outset.

However, as Stiegler argues, the double potential of technology is that it also has the potential to deliver what he terms “singularisation”. One way of thinking about this is to consider how culture can interrupt (or catch up with) the ultra-rapid technological change that students, schools and school personnel are increasingly contending with. As Stiegler argues, what is needed is an engaged politics of technology:

such a politics must be a politics of technics, a practical thought of becoming capable of furnishing it with an idea projecting into the future in which becoming is an agent… A politics of technics should be able to elaborate practical ideas capable of asking and regularizing the question as to what must be done within the practical domain. (Stiegler, 2011, pp. 198-9)

This is not to say that the speed at which this technology operates, its ‘real-time’ analysis and decision-making, and the use of engagement as a tool to keep people moving are unproblematic. However, it does suggest an interesting line of inquiry for studies of the socio-cybernetic systems of educational governance.