Recapping Part I of our journey to apply lean methodology in a K-12 setting…
We started by defining the problem to be solved: increasing reading comprehension despite the limited instructional time devoted to literacy. Then we crafted a hypothesis: if we identify key reading strategies and apply them across content areas, then we’ll increase the amount of time spent in authentic reading-to-learn experiences and improve students’ ability to comprehend a range of texts. And we developed a minimum viable practice (MVP): getting clear on the essential elements for effectively implementing the identified teaching practice.
With this hypothesis and MVP, we determined what to build to put our plan into action and test our theory.
In The Lean Startup, Eric Ries explains a process called the Build-Measure-Learn feedback loop. Instead of making extensive plans based on assumptions, we can build a minimum viable product and measure results in order to inform our decisions – whether to stay the course or make adjustments. This process starts with a hypothesis that will enable us to learn; then we execute and iterate. The point is “to get through the build-measure-learn loop with the maximum speed.”
In schools, developing a strong hypothesis for the build-measure-learn loop requires synthesizing the target change in practice with anticipated results. A hypothesis framework might look like: If we do ______ (insert: instructional practice, teaching technique, educational innovation), then ______ (insert: desired outcome for student learning) will occur. With this hypothesis in mind, the school community then builds what is required to test this hypothesis and learn as much as possible as fast as possible in order to make informed instructional decisions.
In applying this method in our middle school, we decided that we needed to build the following: a vehicle to communicate the MVP, a space for sharing implementation resources, and a process for improvement.
Vehicle to Communicate

Our implementation of the feedback loop started with using each lessoncast as a vehicle to communicate the minimum viable practice, clarifying which elements of an instructional idea needed to be in place to learn as much as possible about teacher implementation and student progress. We used the framework to boil down the what, why, and how of each reading strategy. Then we created 2-minute videos to capture and explain the essential components in a concrete way to support implementation. This communication vehicle did not take the format of a lesson plan; rather, it replicated a teacher-to-teacher conversation ideal for sharing expertise.
Space for Sharing
Next, we created a space for sharing the lessoncasts and other resources to support implementation. Here teachers could access and view the videos and related materials on demand and at their own pace. They could also add resources that they found or created, helping to build our community repository of instructional artifacts.
Process for Improvement
We also needed a foundational process to facilitate collaboration, reflection, review, iteration, and refinement. Our team developed a two-week cycle. Week 1: watch, plan, and implement. Week 2: discuss, refine, assess.
The process began as teachers watched the lessoncasts on their own and then used collaborative planning time to discuss how they would implement the target strategy during week 1. After using the strategy in the classroom, they came together to discuss what went well and what didn’t and to share ideas for refinement. During week 2, they continued using the strategy and, at the end, measured student learning through a common assessment. Using informal and formal assessment data along with observations from classroom implementation, we determined whether to continue teaching the same strategy, what instructional adjustments to make, and whether other strategies might be more successful in improving comprehension. We examined which students were making progress and which weren’t. We tried tweaking aspects of implementation to see whether the changes impacted student learning.
At certain points, we realized that the barrier to improvement might not be related to the specific reading strategies at all. These realizations – validated learning – resulted in pivots (which I’ll discuss in more detail in What We Learned – coming soon). But before I explain what we learned and how we used that information, my next entry will detail how we measured impact.