‣ Emeritus Professor of Educational Assessment at University College London
‣ Former Dean of the School of Education, King’s College London
‣ Former Senior Research Director at the Educational Testing Service in Princeton, NJ
‣ Former Deputy Director of the Institute of Education, University of London
Dylan Wiliam, PhD, is one of the world’s foremost education authorities. He has helped to successfully implement classroom formative assessment in thousands of schools all over the world, including the United States, Singapore, Sweden, Australia, and the United Kingdom. A two-part BBC series, “The Classroom Experiment,” tracked Wiliam’s work at one British middle school, showing how formative assessment strategies empower students, significantly increase engagement, and shift classroom responsibility from teachers to their students so that students become agents of and collaborators in their own learning.
Wiliam is professor emeritus of educational assessment at UCL Institute of Education (IOE), London, UK. After a first degree in mathematics and physics, Wiliam taught in urban schools for seven years, during which time he earned further degrees in mathematics and mathematics education.
He has served as dean and head of the School of Education (and later assistant principal) at King’s College London; senior research director at the Educational Testing Service in Princeton, NJ; and deputy director (Provost) of the Institute of Education, University of London. Since 2010, he has devoted most of his time to research and teaching.
Wiliam’s most recent book, Creating the Schools Our Children Need: Why What We’re Doing Now Won’t Help Much (And What We Can Do Instead), breaks down the methods American schools use to improve and the gaps between what research tells us works and what we actually do. His additional works focus on the profound impact strategic formative assessment has on student learning. He is co-author of Inside the Black Box, a major review of the research evidence on formative assessment, as well as Embedding Formative Assessment: Practical Techniques for K-12 Classrooms, the Embedding Formative Assessment Professional Development Pack, and Leadership for Teacher Learning.
Making Room For Impact: A Guide to Effective De-implementation
BY Emeritus Prof. Dylan Wiliam
It is perhaps obvious that if teachers are working as hard as they can, then adding something else is only possible if we first take something away. The problem is that most of what teachers do has a positive impact on pupils, so creating room for any innovation involves stopping teachers from doing good things to give them time to do even better things. De-implementation can also be used to help teachers gain a better work-life balance by taking things off teachers’ plates and putting nothing back.
De-implementation can be done in many ways. We can remove things that aren’t contributing much to pupil learning, we can reduce how much we do or the number of people involved, we can re-engineer processes to make them more efficient, and we can replace time-consuming activities with more efficient alternatives. The problem is that schools are extremely complex, and removing things that may not seem important can have severe unintended consequences. That is why de-implementation has to be done carefully, anticipating problems and understanding the complexities of how schools work.
In this presentation, participants will learn about a four-step process for de-implementation: discovering where there may be opportunities for de-implementation, deciding which areas are likely to be the best targets, carrying out the de-implementation plan, and then reviewing the impact of the process. For each of the four steps, participants will also learn about a number of practical strategies that can be used in any school.
Research will never tell teachers what to do—classrooms are just too complex for this ever to be the case. Research can, however, help school leaders in three ways. The first is to identify “blind alleys”—areas where changes are unlikely to be of much benefit to students. The second is to identify areas where changes will improve students’ learning. The third, and perhaps the most important, is to provide information that school leaders can use to choose research-based interventions that will have the greatest impact in their local context.
In this keynote presentation, participants will find out why meta-analysis, although popular with researchers, rarely provides useful guidance to school leaders about what will work most effectively in their schools. They will also learn about the five key questions that school leaders need to ask to become critical consumers of educational research so that the improvements they make will maximize the benefits for their teachers and their students.
How Artificial Intelligence Will Revolutionize Education
BY Emeritus Prof. Dylan Wiliam
The rapid improvement in the ability of large language models such as ChatGPT to produce sophisticated responses to natural language questions has led many to claim that artificial intelligence (AI) will revolutionize education. AI will undoubtedly have profound implications for schools, teachers, and students, but there are also significant technical and ethical challenges that need to be addressed if AI is to improve education.
In this masterclass, Dylan Wiliam will begin by outlining how large language models work and show how AI-based tools will help teachers with a number of routine tasks, such as lesson planning and the creation and scoring of high-quality constructed-response assessments. AI-based tools will also help teachers develop diagnostic models of their students’ needs, allowing for more personalized approaches, although significant—possibly insuperable—technical difficulties will need to be overcome if these models are to be useful to teachers and students.
In addition to the technical challenges, AI-based tools present significant ethical challenges in educational settings. These include the increased surveillance of students and the tendency of such tools to be developed with data on neuro-typical students, rendering them less useful, or even harmful, for neuro-diverse students, with similar problems for students from minority groups. The fact that the most sophisticated models are not “inspectable”, so the reasons for the choices they make are not open to scrutiny, makes such issues even harder to address.
Ultimately, even the most powerful tools are unlikely to replace teachers, but—like most technology—they will allow teachers to spend more time doing things that only humans can do, and thus have the potential to produce substantial improvements in education.
Cracking The Code: Assessment Literacy For Educators
BY Emeritus Prof. Dylan Wiliam
Even the best-designed assessment system needs to be implemented thoughtfully, which requires that all users of assessment evidence have a certain degree of assessment literacy—an understanding of both the meanings and the consequences of educational assessments.
The masterclass will include a consideration of how assessments are interpreted, recorded, and reported to key stakeholders, as well as some in-class suggestions for how to get good and quick feedback from students.
Quality in assessment
While it is common to talk about assessments needing to be both reliable and valid, thinking about assessments in this way can often lead to confusion, since reliability can be thought of as a prerequisite for validity and, at the same time, in tension with it (in the sense that attempts to improve validity can reduce reliability). Participants will learn how to see validity as a property of inferences, rather than of assessments, and understand how changing assessments to improve the way they support some inferences may make them less able to support other, desired, inferences. In other words, any assessment system involves trade-offs.
Assessment literacy—an understanding of both the meanings and the consequences of educational assessments—is an essential component of teacher expertise, but there is little agreement about what the term means. In this session, participants will learn what makes some assessments better than others, why student progress measures are almost entirely useless, why most tests will never produce useful diagnostic information on students, and why most school assessment systems do not do the things they are intended to do.
This session covers the key stages of developing an assessment system:
‣ Selecting a small number of big ideas for each subject
‣ Creating learning progressions for each of the big ideas
‣ Identifying key check points in the learning progressions
‣ Developing assessments for each of the checkpoints
Since assessments are, in essence, procedures for drawing inferences, it makes sense to design assessments by starting with the inferences they are intended to support. In other words, assessment design should be evidence-centered. In this session, participants will learn about the four main processes in the evidence-centered design paradigm (task selection, task presentation, evidence identification, evidence accumulation) and understand how these ideas can be applied to a wide range of assessment design issues, including recording and reporting student achievement.
In this interactive masterclass, participants will learn:
‣ what makes some assessments better than others
‣ why there is no such thing as a valid or a reliable assessment
‣ why student progress measures are rarely useful
‣ why it is difficult to get useful diagnostic information on students from assessments
‣ why school assessment systems often do not do the things they are intended to do