Wednesday, September 17, 2014

A Future of Box-Tickers

This post is part of the 30-Day Blog Challenge from TeachThought. To learn more about the challenge, go to

DAY 17: What do you think is the most challenging issue in education today?


This is my answer today. A month from now, I may feel otherwise, but assessment has been on my mind a lot lately. It's probably the combination of many factors: the recent release of the scores for last year's Minnesota Comprehensive Assessment (MCA) in Science, the writing of performance goals for my annual review, and a shift to standards-based grading in the classes I teach.

For this post, I'm just going to tackle standardized tests. The national discussion around science standards and developing a scientifically literate populace has centered on promoting critical thinking, allowing students to ask their own questions, applying evidence in problem-solving, and finding connections in a systems-based approach to learning. Just recently, Tim Birkhead and Bob Montgomerie, two university science professors, wrote an article for Times Higher Education calling for science education to stop training students to strive for the "right" answer, as they feel it is leading to rampant scientific misconduct. Here are a couple of quotes from the article that I think are especially relevant:

"The obsession with box-ticking is a major culprit, where assessment rewards only the right answer rather than the process of research and the integrity of reporting."

"..teachers have not been given sufficient time by governments and curriculum developers to properly teach the scientific process and to do experiments carefully"

But do our state assessments measure these process skills? No, they assess for a "right answer." Students are simply box-ticking. Worse, in Minnesota the assessments are given once, at the very end of the year, and teachers don't receive the scores until the following fall, making it impossible to use the results to help the students who took them. Not that the scores are a clear indication of what students know anyway: passing the test is not required for graduation, so most students don't take it seriously, and, get this, the test results are not statistically significant. I know this because I volunteered to be a question writer one year to get a more well-rounded picture of the exam, and one of the test developers admitted as much after I kept asking questions he couldn't answer. It turns out that Minnesota has so many science standards that a test with enough questions to make the results significant would be unbelievably long.

Does this mean that I'm anti-standardized assessments? No, I'm anti-standardized assessments that don't help students learn and don't help teachers teach better. And this is where our greatest challenge lies. How can we develop national assessments that are:

  • Valid?
  • Anti-Memorization and Pro-Thinking?
  • Unbiased?
  • Learner-Centered?
  • Valued by Educators?

Regular assessments that truly illuminate students' thinking are key to teaching and learning. As challenging as it is to design effective assessments for my own small group of students, imagining what this would look like on a state or national level is certainly beyond me. But this year I've decided that I'm finally going to stop simply complaining about it and take action. I have some ideas in mind and people I'm planning to contact to start the conversation about what science assessment in Minnesota should look like. Because it all comes down to one question:

Are we trying to promote a future of scientific thinkers or a future of box-tickers?

Photo from Alberto G. on Flickr. Licensed via Creative Commons.
