Friday, 24 November 2017

Guest post by Sarah Seleznyov - Japanese lesson study: a question of culture

Given the publication of the EEF's evaluation report on Lesson Study, it seems sensible to take a slightly broader perspective on the approach, so this week's post is by Sarah Seleznyov, Programme Leader for Bespoke Leadership Programmes (Learning & Leadership) at the UCL Institute of Education. Sarah has extensive knowledge of Lesson Study and is currently organising two lesson study events in December, with Professor Akihiko Takahashi, to explore the Japanese approach to problem solving in mathematics, lesson study as a tool to improve teaching and learning, and the role of the koshi.


As a school leader who is interested in Japanese lesson study (JLS), you are probably reading the debate on this blog with interest – Should I or shouldn’t I?  Will it make a difference to my pupils in my school?  How can I be sure that the time and effort my school invests in this will pay dividends for pupil learning?  And you are right to be cautious.

And yet I qualify this warning by saying that I believe JLS has great potential for teachers and pupils.  JLS aligns with the wider research base on effective teacher professional development: it focuses on learning rather than performance, begins with an end goal, and engages teachers in and with research, in collaborative groups, over an extended time frame.  Our own research has also shown strong evidence of improved teacher practice and student learning (Godfrey et al., forthcoming).

So why the warning?  Because borrowing an education policy from another country and expecting it simply to work here as it does there rarely succeeds.  Pasi Sahlberg (who is not against global borrowing per se) describes how a ‘network of interrelated factors – educational, political and cultural – … function differently in different situations’ (2011: 6), meaning that we cannot be sure that any one educational approach will function in the same way when it is translated from one country to the next.

So what do we need to consider when attempting to use JLS as an approach to teacher professional development in Britain?  First and foremost, we need to understand the cultural differences between Japan and Great Britain and how these might affect teachers’ responses to JLS.

Hofstede (2010) categorises cultural differences along five dimensions and ranks countries on each.  On three of these dimensions, Great Britain’s ranking is very different from Japan’s:

  1. Uncertainty avoidance (rank: Japan 11, Great Britain 68.5)
‘The extent to which the members of a culture feel threatened by ambiguous or unknown situations' (2010: 191).

The lengthy, meticulous and detailed planning of JLS, including exploring the known evidence through kyozai kenkyu and spending significant time predicting student responses, is an attempt to avoid unanticipated events in the research lesson.  British teachers are likely to be much less averse to taking risks in lessons, and therefore to plan in less detail and not to see the need for kyozai kenkyu.

Cultures with high uncertainty avoidance also feel a greater need for protocols and rules, which may explain the formal and rigid processes of JLS.  It is highly likely that British teachers would not see the need for this level of formality and would want to deviate from LS protocols.

High uncertainty avoidance also leads to a greater tendency to believe in and revere expertise: hence the valued role of the koshi, or ‘expert other’, in JLS.  British teachers engaged in LS are likely to value practice expertise as much as academic expertise, and less likely to see the need for a koshi.

  2. Individualism versus collectivism (rank: Japan 36, Great Britain 3)
Individualistic societies are those where 'ties between individuals are loose: everyone is expected to look after him- or herself and his or her immediate family' (2010: 92).  In collectivist societies, 'people … are integrated into strong, cohesive in-groups, which throughout people's lifetime continue to protect them in exchange for unquestioning loyalty' (2010: 92).

Great Britain is a highly individualistic society: occupational mobility is higher, teachers are managed as individuals, and feedback on performance is given directly.  This is reflected in the English performance management system for teachers, in hiring and firing based on performance judgements, and in performance feedback given directly to the teacher after a lesson observation.  Japan is a collectivist society: occupational mobility is lower, teachers are managed collectively, and it would not be productive to the group to give direct feedback to an individual.  JLS has evolved as a way of giving feedback on performance through the group, with the lesson plan as a collaborative product.  We might predict that teachers in Great Britain would shy away from the live observation element of JLS, fearing a judgement on their individual professional performance that might affect job security.  Other collaborative aspects may also be challenging to implement in Great Britain, such as committing extended amounts of time to the collaborative lesson-planning process and working towards a whole-school shared research theme.

  3. Long-term orientation (rank: Japan 3, Great Britain 40.5)
‘The fostering of virtues oriented toward future rewards - in particular perseverance' (2010: 239). 

The importance of perseverance and effort is clearly seen in JLS, where a research theme will be pursued by a school for two or three years.  This contrasts with the British short-termist attitude, which is likely to influence policy decisions about how much time teachers are asked to commit to investigating a research theme.

In summary, some key cultural differences between Great Britain and Japan are likely to mean that teachers struggle with several distinguishing features of lesson study as a research process, namely:
  • Focusing on a shared research theme over a longer period of time;
  • Spending time on collaborative lesson planning, including exploring relevant material around their research theme;
  • Being observed by colleagues as they gather evidence in the research lesson;
  • Seeking outside expertise to develop and enhance their research ideas.
In our work with schools, we have managed to support teachers to engage with models of JLS that feature all of the above elements, and these teachers and leaders have spoken highly of lesson study.  However, we have also encountered schools that say they are doing ‘lesson study’ but do not work on a shared research theme, do not plan collaboratively, do not act as silent observers in the research lesson, and do not look to outside expertise to enhance their learning.

What does this mean for you as a school leader?  If you are already using JLS, make sure that teachers are not just paying lip service to it but adhering strictly to the features that distinguish lesson study as a research process.  If you are seeking to introduce JLS, anticipate the cultural resistance described above.  Make sure teachers really understand why JLS is designed the way it is and what you will lose if you leave out any of its critical research features.

References

Godfrey, D., Seleznyov, S., Wollaston, N. and Barrera-Pedamonte, F. (forthcoming). Target oriented lesson study (TOLS): combining lesson study with an integrated impact evaluation model.
Hofstede, G., Hofstede, G.J. and Minkov, M., 2010. Cultures and organizations: Software of the mind (3rd edn). London: McGraw-Hill.

Sahlberg, P., 2011. Finnish lessons: What can the world learn from educational change in Finland? New York: Teachers College Press.




Thursday, 16 November 2017

The school research lead, improvement research and implementation science



This week saw the welcome announcement of the appointment of Dr Becky Allen as director of the UCL IOE’s Centre for Education Improvement Science.  On appointment, Dr Allen said she wishes to help develop “a firmer scientific basis for education policy and practice”, drawing on methods such as laboratory experiments and classroom observation.

Now, regular readers of this blog will know that I have often expressed concern over how educational researchers misuse terms associated with evidence-based practice.  So, given this new initiative in improvement science, it seems sensible to look at a definition of improvement science/research, and to do this I’ll use the work of LeMahieu et al. (2017).

Improvement Research: a definition (LeMahieu et al., 2017)

‘Improvement research is … about making social systems work better. Improvement research closely inspects what is already in place in social organizations – how people, roles, materials, norms and processes interact. It looks for places where performance is less than desired and brings tools of empirical inquiry to bear to produce new knowledge about how to remediate the undesirable performance. Put simply, improvement research is not principally about developing more “new parts” such as add-on programs, innovative instructional artifacts or technology; rather, it is about making the many different parts that comprise an educational organization mesh better to produce quality outcomes more reliably, day in and day out, for every child and across the diverse contexts in which they are educated.’

Examples of Improvement Research/Science

  1. Networked Improvement Communities;
  2. Design-Based Implementation Research;
  3. Deliverology;
  4. Implementation Science;
  5. Lean for Education;
  6. Six Sigma;
  7. Positive Deviance.
As such, LeMahieu et al. (2017) state that all seven of the approaches ‘… share a strong “common core”. All are in a fundamental sense “scientific” in their orientation. All involve explicating hypotheses about change and testing these improvement hypotheses against empirical evidence. Each subsumes a specific set of inquiry methods and each aspires to transparency through the application of carefully articulated and commonly understood methods – allowing others to examine, critique and even replicate these inquiry processes and improvement learning. In the best of cases, these improvement approaches are genuinely scientific undertakings.’

In other words, improvement research is a form of ‘disciplined inquiry’ (Cronbach and Suppes, 1969).

What Improvement Science Is Not

However, as LeMahieu et al. (2017) note, a major distinguishing feature of improvement research is what it does not attempt to do.  Improvement research is not about creating new theories or about research and development.  Nor is it about seeking to evaluate existing teaching strategies or interventions in field-based trials.  Rather, improvement science is about doing more of what works, stopping what doesn’t, and making sure everything is joined up in ways which bring about improvements in a particular setting.

Given this stance, statements about the Centre for Education Improvement Science (CEIS) being about ‘laboratory experiments and classroom observations’ seem a little incongruent with the existing work in the field.

My confusion about the work of the CEIS is further compounded by a mention in Schools Week, which describes Improvement Science London, also based at UCL.  For this group, improvement science involves recognising “the gap between what we know and what we put into practice” and using the “practical application of scientific knowledge” to identify what needs to be done differently.  However, that could probably more accurately be described as ‘implementation science’ (admittedly a subset of improvement science).  So, let’s delve into a little more detail about what is meant by ‘implementation science’.

What is implementation science?

Barwick (2017) defines implementation science as ‘the scientific study of methods that support the adoption of evidence-based interventions into a particular setting (e.g., health, mental health, community, education, global development).  Implementation methods take the form of strategies and processes that are designed to facilitate the uptake, use, and ultimately the sustainability – or what I like to call the “evolvability” – of empirically-supported interventions, services, and policies into a practice setting (Palinkas & Soydan, 2012; Proctor et al., 2009); referred to herein as evidence-based practices (EBPs)’.

Barwick goes on to state that implementation ‘focuses on taking interventions that have been found to be effective using methodologically rigorous designs (e.g., randomized controlled trials, quasi-experimental designs, hybrid designs) under real-world conditions, and integrating them into practice settings (not only in the health sector) using deliberate strategies and processes (Powell et al., 2012; Proctor et al., 2009; Cabassa, 2016).  Hybrid designs have emerged relatively recently to help us explore implementation effectiveness alongside intervention effectiveness to different degrees (Curran et al., 2012)’.

As a consequence, implementation science sits on the right-hand side of the figure below (taken from Barwick, 2017).

[Figure from Barwick (2017) not reproduced here.]

So where does this leave us?

Well, on the one hand, I am really excited that educational researchers are beginning to pay attention to the work being done in fields such as improvement and implementation science.  On the other hand, I’m a bit disappointed that we are likely to make the same mistakes as we have with evidence-based practice and not fully understand the terms we have borrowed.

Finally, this post may be completely wrong, as I have relied on press releases and press reports to capture the views of the major protagonists; as such, I may be relying on ‘fake news’.

References

Barwick, M., 2017. Fundamental considerations for the implementation of evidence in practice. Melanie Barwick: Journeys in Implementation [Online]. Available from: https://melaniebarwick.wordpress.com/ [Accessed 15 November 2017].

LeMahieu, P., Bryk, A., Grunow, A. and Gomez, L., 2017. Working to improve: seven approaches to improvement science in education. Quality Assurance in Education, 25(1), 2-4.