Sunday 13 September 2015

The School Research Lead - Getting Better at Analysing What the Experts Write and Say

In previous posts I've written about the need to 'strip, flip and trace' when trying to tell good educational science from bad. In this post, once again drawing upon Daniel Willingham's 2012 book - When Can You Trust The Experts? How to tell good science from bad in education - I will look at a number of steps which evidence-informed teachers and school research leads can take to analyse educational research effectively. The rest of this post is in three sections:
  • the role of experience in analysing research; 
  • Willingham's nine steps in analysing research; 
  • the notion of practical significance.

The role of experience

The true test of friendship is not when you agree with someone, it's when you disagree with them. I've got a huge amount of time and respect for Tom Bennett and for his work (along with Hélène Galdin-O'Shea) in developing the researchED movement. Unfortunately, in the following quote from his 2013 book - Teacher Proof - I think Tom gets it wrong.

… there are few things that educational science has brought to the classroom that could not already have been discerned by a competent teacher intent on teaching well after a few years of practice. If that sounds like a sad indictment of educational research, it is. I am astounded by the amount of research I come across that is either (a) demonstrably untrue or (b) patently obvious ... Here’s what I believe; this informs everything I have learned in teaching after a decade: Experience trumps theory every time. (Bennett 2013, 57-59)

Willingham argues that informal knowledge can mislead us in two ways: first, when we generalise from our experience with unwarranted certainty; second, when we misremember or misinterpret past experience. As Willingham explains:

... 'I know what happens in this sort of situation.' I think to myself, 'My daughter loves playing on the computer. She'll think this reading program is great!' I might be right about my experiences - my daughter loves the computer - but those experiences happen to have been unusual; perhaps she loved the two programs that she used, but further experience will reveal that she doesn't love to fool around with other programs. Another reason my experience might lead me astray is that I misremember or misinterpret my past experience, possibly due to confirmation bias. Perhaps it's not that my daughter loves playing on the computer; actually, I'm the one who loves playing on the computer. So I interpret her occasional, reluctant forays onto the Internet as enthusiasm. (p. 186)

So if teachers cannot rely on their experience to provide guidance on how to proceed, then what are we to do? Willingham helpfully identifies four steps which can be taken to help manage our experiences:

  • recognise that experience can be fallible but can also be insightful;
  • check out your experience with others and ask how it relates to their experiences or interpretations;
  • think of the opposite of what your experience tells you. In other words, if you think of an explanation or a possible outcome, try to think of the exact opposite and see whether that is reasonable;
  • actively look for daily examples of events which do not confirm past experience.

Willingham's Nine Steps Approach to Analysing Evidence

Having discussed the role of experience, it is now appropriate to look in more detail at the steps Willingham suggests you take to analyse evidence.  Before we do that, it is necessary to define two terms - the Change and the Persuader.

The Change refers to a new curriculum, teaching strategy, software package or school restructuring plan - generically, anything that someone is urging you to try as a way to better educate kids.

The Persuader refers to any person who is urging you to try the Change, whether it's a teacher, administrator, salesperson, or the President of the United States (Willingham, p. 136).

Willingham's nine steps to analyse evidence are summarised in Table 1.

Table 1 Actions to be taken when analysing evidence (Willingham, 2012, p. 205). Each suggested action is paired with why you are doing it.

1. Compare the Change's predicted effects to your experience, but bear in mind whether the outcomes you're thinking about are ambiguous, and ask other people whether they have the same impression.
   Why: Your own accumulated experience may be valuable to you, but it is subject to misinterpretation and memory biases.

2. Evaluate whether or not the Change could be considered a breakthrough.
   Why: If it seems revolutionary, it's probably wrong. Unheralded breakthroughs are exceedingly rare in science.

3. Imagine the opposite of the outcomes you predict for the Change.
   Why: Sometimes when you imagine ways that an unexpected outcome could happen, it's easier to see that your expectation was short-sighted. It's a way of counteracting confirmation bias.

4. Ensure that the evidence is not just a fancy label.
   Why: We can be impressed by a technical-sounding term, but it may mean nothing more than an ordinary conversational term.

5. Ensure that bona fide evidence applies to the Change itself, not to something related to the Change.
   Why: Good evidence for a phenomenon related to the Change will sometimes be cited as if it proves the Change.

6. Ignore testimonials.
   Why: The person believes that the Change worked, but he or she could easily be mistaken. You can find someone to testify to just about anything.

7. Ask the Persuader for relevant research.
   Why: It's a starting point for getting research articles, and it's useful to know whether the Persuader is aware of the research.

8. Look up research on the Internet.
   Why: The Persuader is not going to give you everything.

9. Evaluate what was measured, what was compared, how many kids were tested, and how much the Change helped.
   Why: The first two items get at how relevant the research really is to your interests; the last two get at how important the results are.

We now need to turn to the role of practical significance in determining the usefulness of research evidence for your practice.

Practical significance

In reading research articles you will come across the terms statistical significance and effect size.  Coe (2002) argues there is a difference between significance and statistical significance. Statistical significance means that you are justified in thinking that the difference between the two groups is not just an accident of sampling. Effect size, on the other hand, is a way of measuring the extent of the difference between two groups (Higgins et al 2013). If we combine effect size and statistical significance, we can begin to get a sense of the practical significance of a change or intervention.
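To make the distinction concrete, here is a minimal sketch in Python. It is my own illustration rather than an example from Willingham, Coe or Higgins et al, and the scores it uses are invented: Cohen's d stands in for the effect size, and a two-sample t-test for the question of statistical significance.

```python
# A minimal sketch with invented data. Effect size answers "how big is the
# difference between the groups?"; statistical significance answers "is the
# difference likely to be more than an accident of sampling?"
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
intervention = rng.normal(loc=52, scale=10, size=30)  # hypothetical test scores
control = rng.normal(loc=48, scale=10, size=30)       # hypothetical test scores

def cohens_d(a, b):
    """Standardised mean difference using the pooled standard deviation."""
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

t_stat, p_value = stats.ttest_ind(intervention, control)
print(f"Effect size (Cohen's d): {cohens_d(intervention, control):.2f}")
print(f"p-value from t-test:     {p_value:.3f}")
```

Higgins et al (2013) explain how reporting a confidence interval alongside the effect size helps you weigh the two together: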

If the confidence interval includes zero, then the effect size would be considered not to have reached conventional statistical significance. The advantage of reporting effect size with a confidence interval is that it lets you judge the size of the effect first and then decide the meaning of conventional statistical significance. So a small study with an effect size of 0.8, but with a confidence interval which includes zero, might be more interesting educationally than a larger study with a negligible effect of 0.01, but which is statistically significant. (Higgins et al, p. 6)
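As an illustration only - the formula below is a common large-sample approximation for the standard error of an effect size, not something taken from Higgins et al - the following sketch shows why the confidence interval matters: a small study with a large effect can have an interval that crosses zero, while a very large study with a negligible effect can still clear the bar of conventional statistical significance.

```python
# Illustrative only: approximate 95% confidence interval for an effect size d,
# using the common large-sample standard error
#   SE(d) ~ sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
import math

def effect_size_ci(d, n1, n2, z=1.96):
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d - z * se, d + z * se

# Small study, large effect: the interval crosses zero, so the result is not
# conventionally statistically significant, yet d = 0.8 may still be the more
# educationally interesting finding.
print(effect_size_ci(0.8, n1=10, n2=10))             # roughly (-0.11, 1.71)

# Very large study, negligible effect: the interval excludes zero, so the
# result is statistically significant, but an effect of 0.01 may not matter.
print(effect_size_ci(0.01, n1=100_000, n2=100_000))  # roughly (0.001, 0.019)
```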

In other words, as Willingham states, practical significance refers to whether or not that difference is something you care about (p. 203). As such, it requires you, the reader, to make a judgement call. Making judgement calls about research evidence you are not sure about is never easy, but Willingham suggests three approaches to tackle the issue:
  • Make a mental note that you think the research may be of practical significance. 
  • If you have the opportunity, raise the matter with the Persuader. 
  • Ask how the practical significance of the Change relates to your goals and what you are trying to achieve. Is the improvement on offer consistent with your objectives and the resources available to achieve them?

The next step

Having stripped, flipped, traced and analysed the evidence, the next step is to consider whether the Change should be adopted - and that will be the focus of a forthcoming post.


References

Bennett, T. (2013) Teacher Proof: Why research in education doesn't always mean what it claims, and what you can do about it. London: Routledge.
Coe, R. (2002) It's the Effect Size, Stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, University of Exeter, England, 12-14 September 2002.
Higgins, S., Katsipataki, M., Kokotsaki, D., Coe, R., Elliot Major, L. and Coleman, R. (2013) The Sutton Trust-Education Endowment Foundation Teaching and Learning Toolkit: Technical Appendices.
Willingham, D. (2012) When Can You Trust The Experts? How to tell good science from bad in education. San Francisco, CA: Jossey-Bass.





