On the Monday and Tuesday of Thanksgiving week, our faculty met for two full days of assessment-related PD. This wasn’t participatory, bacon-wrapped lesson planning. It was 16 hours of lecture and table notes, packaged curriculum and bullet points. While I recognize the logistical constraints that make giant-scale whole-group instruction like that unavoidable, the format did not help the material.
Teachers often exhibit the same sorts of behaviors their students would in similar situations. Middle school teachers, if you sit them at a desk for six or eight hours, tend to lean a little snarky. But it’s interesting to note that snarky teachers (and students!) aren’t checked out; they’re just funneling their fidget impulse into a backchannel that yields immediate positive feedback from their peers.
In a really well-designed move, the reflection for those PD days was put off for a couple of weeks. So this morning, two weeks later by the calendar and a lifetime later according to the discontinuous internal teacher clock, we’re coming back in to reflect on the experience and our thoughts on assessment. It’s a great move, because I know all the snarky middle school teachers have been chewing and mulling over those ideas the entire time. We’ll come back together not as students watching the clock for the end of the day, but as educators ready to make better decisions for our classrooms and our students.
Do I have 16 hours’ worth of new wisdom? Probably not. But I have a refinement of an earlier belief about how information access can reveal rich assessment, and a better sense of what practices teachers can iterate on to push themselves toward better assessments and stronger classrooms. [The restatement ran long, so I’ll talk about the iterative practice angle tomorrow.]
I first started poking at this idea in a somnambulistic rant three years ago. That first form was just a floor, a cutoff line for assessment strategies that have lost relevance in an information-rich world. This is a bit more nuanced: a floor and a ceiling for “kinda okay” assessment.
At the minimum, an assessment must distinguish between a studied/prepared student working in information isolation (classic testing model) and an unprepared student with unfettered information access.
A rich and well-designed assessment reveals distinctions between an unprepared student with information access and a well-prepared student with that same access.
I’ve had a number of uncomfortable conversations around that first principle, because it calls out many accepted teacher/classroom practices as wrong. It’s exactly the argument that teachers expect a tech person to make, full of bravado about how the internet has all the answers so students don’t need to learn anything. I honestly believe there’s more nuance than that, but it’s a fair cop.
We have tools that can graph any system of equations faster than any student, available on demand from any device. The floor criterion doesn’t say you can’t assess a student’s ability to graph equations, just that you must also ask them to make judgments or extrapolations from that process beyond what Google or Wolfram Alpha yield.
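To make that concrete, here’s a minimal sketch (my example, not anything from the PD materials) of how trivially a machine handles the mechanical part of that task; the equations are made up for illustration:

```python
# A throwaway linear system a student might be asked to graph or solve by hand.
from sympy import symbols, Eq, solve

x, y = symbols("x y")
system = [Eq(2 * x + y, 7), Eq(x - y, -1)]

# The machine finds the intersection point instantly; the assessment
# has to live in what the student does *with* that answer.
print(solve(system, (x, y)))  # {x: 2, y: 3}
```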
I also want to point out that “prepared” includes a lot of literacy skills that help students comprehend and tackle a given problem. My model for an unprepared student is a teacher from a different department in the same school. Without a single minute of class experience, how well could I handle your history exam? If the class vocabulary is nothing more nuanced than the Wikipedia bullet points, then I have a pretty good chance of keeping up. If the question relies on meaningful work done throughout the term, then I’d need to churn through that cognitive backlog before even approaching the question.
That’s just the minimum we should expect from rich assessments, and I’ve struggled to articulate cross-curricular principles for truly great assessments. It’s easy enough to find specific examples, but those are great because the mode of assessment is so deeply tied to the subject matter. Build a line-following robot from these components. Great, how would I do that in history? Produce a museum exhibit from some subset of these items that tells a story about a historical individual’s life, beliefs, and society. Great, what does that look like for 7th grade Bio? Andrew Watt suggests trying to confirm some of Hooke’s observations with hand-ground lenses and a USB microscope. What about 12th grade Econ? Dude, I don’t know yet! We’re trying to build a Grimoire for that stuff!
The criteria for rich assessments suggest that skills need to be brought into use during an assessment, rather than just information. The core must be some cognitive task that’s been practiced and refined through the duration of the course, which students have to apply in a moderately novel context. Shawn is great at this stuff: How much energy is in Mario’s fireball? With information access but no skill practice, students will flounder and produce “naive” work.** With practice and no information access, students produce shallow journeyman work, like a well-structured AP Lit essay that doesn’t cite or analyze the text in question. When students have information access and practiced skills, there’s no ceiling to what they can accomplish.
The question I’m skirting in all of this is time. I can probably design a linear systems quiz that’s full of tricks and shortcuts so that a practiced student would be faster and more accurate than a naive google-bot. What does that count as? Building a robot from scratch is a nice idea, but that’s not an “exam” in any meaningful sense of the term. Time scale is what separates an assessment from a project, and I’m still unsure how that distinction changes the prompts and questions I’d use for either.
That’s not a perfect criterion, but it’s better than where I was before. Looking back at my teaching career, I’ve been really happy when I created assessments that managed to clear the floor criterion. I think I’ve had a half dozen that found their legs and managed to reach the second criterion, and several of those were accidental creations. That’s a valuable, sobering thought. Even with my best intentions, I can’t count on myself to create assessments with enough headroom for well-prepared, internet-enabled students to truly shine. Since CMK, I’ve been using Gary’s good prompt guidelines to steer me through this process. Someday I hope to have a library of capstone assessments for all manner of subjects, each printed as a single sentence on a 3×5 card.
** Am I begging the question here? Can we qualitatively identify naive work in our respective disciplines? This is what I spend my time contemplating on mathmistakes.