Applying Lean Methodology to K-12 (Part IV): What We Learned

Recapping the series so far:

- Part I – Developing an MVP: We identified our problem, crafted a hypothesis, and used lessoncasts as the communication vehicle to define the minimum viable practice.
- Part II – What to Build: We developed a two-week cycle to facilitate collaborative planning, reflection, iteration, and refinement.
- Part III – How to Measure Impact: We developed look-fors to observe during classroom instruction and created shared assessments to measure student learning over time.

As the school year progressed, we continued to engage in this process of using a lessoncast to focus on a minimum viable practice, plan collaboratively, implement a strategy, exchange feedback, and measure impact. This guided-inquiry approach helped us conduct a solid experiment: a scalable, repeatable set of procedures for testing a hypothesis and discovering new learning. As part of the Lean Startup methodology, Eric Ries describes validated learning as “a rigorous method for demonstrating progress when one is embedded in the soil of extreme uncertainty.” I’m willing to wager that most who have taught in a classroom have at one point or another felt “embedded in the soil of extreme uncertainty.” I certainly did. By applying this principle of validated learning, we were able to gain new insight into what students really needed and make adjustments to improve.

As part of our build-measure-learn loop, we examined student assessment results. One week we would see great growth; another week learning seemed to stagnate. We couldn’t figure out why the student data fluctuated so greatly from week to week. In attempting root cause analysis, we asked ourselves “why,” and the answer was a resounding “we don’t know.” When student scores from the shared classroom assessments proved inconclusive, we turned to a less common approach to assessment.

On one grade-level exam in particular, the students’ scores dropped significantly. We knew we needed to do something differently, but we didn’t know why we weren’t seeing the anticipated results. Rather than dump the students’ papers in a box and try to forget about our failures to make progress, the teachers decided to have the students retake the exam in teams. The students had to discuss each item and come to consensus for a response. They had to explain their reasoning to one another. This way, we would be able to hear their thinking aloud. Our goal wasn’t necessarily an increase in scores (although that would have been nice), but rather we hoped to gain a better understanding as to why students weren’t showing growth in reading comprehension.

In listening to the students, one explained, “I just didn’t know what that word meant, so I skipped it.”

“But when you skip the word, this whole part doesn’t make any sense.”

“I know. It didn’t.”

We realized that vocabulary acquisition and the ability to decipher unfamiliar words had become a huge impediment – and an opportunity – for overall comprehension. In hindsight, it may seem like a “well, duh” moment: if you don’t understand the words in a text, then a comprehension strategy won’t help much. But we had to go beyond common assessment practices even to reach this insight.

Ries explains that for startups the most aggregate measures of success, like total revenue, may not be the most meaningful. I have found that aggregate measures of academic success (the class average on an exam, student scores on a state assessment, a school’s average on a district-level benchmark test) also lack meaning for informing true validated learning. What does a class or school average on one test tell us? Even if we look at a student’s average for the quarter, what information can we glean? If a student starts the quarter already demonstrating mastery of the concepts being presented and ends the quarter still demonstrating mastery, can we claim success? If a student achieving well below expected grade-level performance increases her understanding to early-level proficiency (often awarded a high C by the oh-so-scientific grading process), were the strategies used less effective than those used with her A-level counterpart? Using traditional measures, it’s difficult to validate learning about student growth, to gauge whether or not an intervention made a difference. From a scientific standpoint, the common aggregate assessment measures we hold so near and dear tell us very little about what students have actually learned; therefore, we can learn very little about the effectiveness of our efforts when we focus only on averages.
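The point about averages can be made concrete with a toy calculation. The scores below are invented for illustration (not data from our classrooms): two classes post identical pre- and post-test averages, yet one shows every student growing while the other leaves its struggling readers further behind.

```python
# Hypothetical pre/post test scores for two classes of four students each.
# Both classes average 65 before and 75 after, but the growth stories differ.
class_a = {"pre": [50, 60, 70, 80], "post": [60, 70, 80, 90]}
class_b = {"pre": [50, 60, 70, 80], "post": [45, 65, 90, 100]}

def average(scores):
    """Class average: the aggregate measure that hides individual growth."""
    return sum(scores) / len(scores)

def per_student_growth(pre, post):
    """Growth per student: post score minus pre score, in roster order."""
    return [after - before for before, after in zip(pre, post)]

# Identical aggregates...
print(average(class_a["post"]), average(class_b["post"]))

# ...but opposite growth stories: every Class A student gained ten points,
# while Class B's lowest scorer actually lost ground.
print(per_student_growth(class_a["pre"], class_a["post"]))
print(per_student_growth(class_b["pre"], class_b["post"]))
```

Looking at growth per student rather than the class average is what surfaced the vocabulary problem for us: the averages alone could never have told us which students were falling behind or why.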

Instead, we had to examine how the students responded to the strategies over time. We had to ask why we were seeing these results. And, more importantly, we had to listen to their thinking to make sure that the instructional strategies truly addressed their key challenges. Here we learned that the key challenge for students was the difficulty level of the vocabulary, and our identified strategies didn’t equip them with the skills or confidence to decipher unfamiliar words and make meaning from context. By asking why and adjusting our assessment process, we were able to drill down to an appropriate and validated adjustment: increase exposure to higher-level vocabulary and teach strategies for deciphering the meaning of unfamiliar words.

In Lean Startup terminology, this type of shift is called a pivot: a change in strategy without a change in vision. We were still focused on improving reading comprehension, but we shifted instructional strategies to include vocabulary development. This pivot also led to another unanticipated result. As the students acquired higher-level vocabulary and new strategies for deciphering unfamiliar words, they developed increased confidence in themselves as learners. They expressed feeling smarter and more empowered because they understood sophisticated words, and they didn’t give up when encountering a word they didn’t know; they had specific strategies and a bank of previously learned words to draw upon. We never would have learned this if we had simply persisted with our original approach and avoided measuring the impact to validate our learning.

Admittedly, it was difficult to change direction without feeling like energy and time had been wasted, but the validated learning wouldn’t have been possible without the previous effort. Around the time we were making this pivot, I wrote a blog entry about this willingness to seriously examine results and make the necessary changes in strategy.

A quote from that entry, A Principal for Failing Fast:

There was a moment for swallowing pride and getting over feeling defeated. I asked the teachers: is it better to find out now or at the end of the school year? We came together, collaborated to find solutions, and made adjustments.

I’ve learned a lot from the past year of applying lean principles to instruction. Most profoundly, I’ve learned what I’d do differently, which I’ll discuss in Part V (the final segment).