- Age Range: 8 and up
- Paperback: 392 pages
- Publisher: Routledge; 1 edition (December 26, 2008)
- Language: English
- ISBN-10: 0415476186
- ISBN-13: 978-0415476188
- Product Dimensions: 6.8 x 0.9 x 9.7 inches
- Shipping Weight: 1.5 pounds
- Average Customer Review: 51 customer reviews
- Amazon Best Sellers Rank: #28,911 in Books
Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement 1st Edition
About the Author
John Hattie is Professor of Education and Director of the Visible Learning Labs, University of Auckland, New Zealand.
Top customer reviews
Hattie creates a single scale upon which to evaluate educational impacts. An effect size of d = 0.0 - 0.15 is considered a "developmental effect" - what would have occurred anyway as children age. An effect size of d = 0.15 - 0.4 is considered a "teacher effect" - presumably the effect of having an average-quality teacher working with the students. An effect size of d > 0.4 is what Hattie considers a "desirable effect". Is this summary of effect sizes accurate? Well, that depends. If the underlying study's effect was calculated by comparing student outcomes at the end of the study to those at the beginning, then it makes sense to dismiss some of the effect as developmental or average-teacher effects. But if the underlying study calculated the effect by comparing growth against a control group - one that also aged and also had average teachers - then dismissing small effects as "developmental" or "teacher effects" is completely wrong.
Even in cases where the effect size is calculated by comparing outcomes at the end of the study to the starting position, the size of the effect that can be dismissed as developmental depends on things like the length of the study (was it a one-afternoon or a multi-year intervention?) and the outcome variable studied (was the outcome knowing the names of letters, or the likelihood of graduating college?). Controlling for things like developmental effects is exactly why studies use control groups! You cannot simply make up rules of thumb like this to replace the use of controls.
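The distinction the review is drawing can be sketched with made-up numbers (all values below are assumptions for illustration, not data from the book): the same intervention yields a very different effect size depending on whether maturation is still baked into the calculation.

```python
# Toy sketch, assumed numbers: why pre/post gains and control-group
# comparisons answer different questions about an intervention.
sd = 15.0                              # assumed pooled standard deviation

treat_pre, treat_post = 100.0, 106.0   # treatment group mean scores
ctrl_pre, ctrl_post = 100.0, 104.0     # control group also grows (maturation)

# Pre/post effect size: includes whatever growth would have happened anyway,
# so part of it really can be discounted as "developmental".
d_prepost = (treat_post - treat_pre) / sd                      # 6 / 15 = 0.40

# Controlled effect size: maturation is already subtracted out by design.
# Discounting part of THIS number as developmental double-counts the correction.
d_controlled = ((treat_post - treat_pre) - (ctrl_post - ctrl_pre)) / sd
```

Here the uncontrolled number (0.40) clears Hattie's "desirable" bar while the controlled number (about 0.13) does not, even though both describe the same intervention.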
Meta-analyses can be used to summarize multiple studies that have similar outcome variables and similar designs; comparing radically different studies like this is a fool's errand. Hattie often cites rules of thumb such as "an effect size of X is equivalent to being Y months ahead in school", etc. Such a rule of thumb might be true for a single study, but there is no reason for it to be universally true. Hattie seems to have read various meta-analyses without understanding when the explanations given applied specifically to the studies in question and when they are a universal property of calculated effect sizes.

A key sign that he doesn't understand the statistics he is reporting is his "Common Language Effect" (CLE), where he attempts to calculate the percent of classes that would do worse than a class receiving an intervention. He gives the example of homework, which has an effect size of d = 0.29, as having a CLE of 21%. This is clearly wrong. An intervention with no effect (d = 0.0) would correspond to a median class (CLE = 50%), so an effect size of d = 0.29 must yield a CLE above 50%. In other places he calculates CLEs that are negative or greater than 1 - numbers that are completely nonsensical. That his formula for calculating CLE had a bug is a problem; that he reported numbers so obviously wrong without realizing something was amiss is a clear sign that he is way out of his depth.
I am still giving this three stars. Despite it being riddled with mistakes, I still read through the hundreds of pages to get some idea of where the research stands. Even for that purpose its usefulness is limited, since he didn't separate results from studies with control groups from those without.
Hattie separates his research into sections highlighting the effectiveness of different strategies within the following contexts:
Contributions from the Student
Contributions from the Home
Contributions from the School
Contributions from the Teacher
Contributions from the Curriculum
In summary, I would highly recommend this book. While I have thoroughly enjoyed reading through it, the book's greatest strength may be its use as a reference tool. If you'd like to see the effectiveness of whole language vs. phonics instruction, concept mapping, teacher knowledge of subject matter, socioeconomic status or almost any other topic you can think of, just open the book, flip to the appropriate section and you have a synthesis of all the meta-analyses pertaining to the topic. The book has all of the earmarks of quality research and at the very least, it was carefully synthesized, as it took Hattie 15 years to write. As Andrew Jackson said, "Mere precedent is a dangerous source of authority." Stop abiding by policies because "this is how it's always been done." Buy this book, evaluate your existing practices and start making evidence-based decisions to help your students learn.
John Hattie leads by saying that nearly everything works. I suppose that's because humans learn naturally. The question is what works well. Hattie shows that simply being in a classroom for a year has an effect size of 0.4 so the important innovations must have an effect size greater than that.
What really makes a big difference? Visible Learning and Visible Teaching. Specifically, teachers getting into the students' shoes and exploring the learning process, and students getting into the teacher's shoes. Measuring and adapting the teaching and learning processes to fit the people involved.
One caution: Many of the ideas have very narrow definitions when they are measured in this book. So, before pushing a concept with a high effect size or dismissing something with a low one, be sure to read Hattie's commentary and really understand what the studies have shown.