Here’s a great question I recently received:

In your experience, what are the optimal numbers of TIM Lesson Plan (TIM-LP) reviews per teacher and TIM Observations (TIM-O) per teacher to get actionable data?

This is an interesting question, and there is no one-size-fits-all answer.

In order to determine the optimal frequency for data collection, you first need to ask another question: “What will I do with the data once I have it?” You might also phrase this as, “What problem am I trying to solve by collecting this data?” Your answers will dictate the frequency. Obviously it’s also important to understand what the TIM levels mean as well as what they don’t mean (a topic for another day). Whatever you decide, it’s always best to make sure teachers understand how and why the data are being collected.

If your purpose is coaching a teacher, the observation data become a way to ground and inform the coaching conversation. Ideally, a teacher has both voice and choice about how the coaching relationship is structured, identifying the areas he or she wants to improve. Also ideally, the coach's role is non-evaluative. (A coach is there primarily to help the teacher reach his or her goals for professional growth, not to rate performance.) The coach and the teacher should interact regularly (face-to-face, by email, etc.), weekly or bi-weekly if possible. I would suggest an observation using the TIM-O at the outset of the coaching cycle, followed by further TIM-O, TIM-R, and TIM-LP data as frequently as the coach and teacher decide is necessary to inform their work. Having at least one piece of data per month (TIM-O, TIM-R, or TIM-LP) keeps the coaching conversation grounded in specific experiences and can help in setting goals, monitoring progress toward them, and reflecting on growth.

If the purpose is to get a picture of the levels of technology integration common across the school (perhaps to support decisions about professional development resources or to evaluate the effectiveness of professional development efforts), it makes sense to collect data at the start of the school year, at the end of the school year, and at any other intervals likely to yield useful data to support decisions, but no more often than monthly.

Identify a bounded period of time (one to two weeks) during which each set of observations is conducted. For example, you might decide that all beginning-of-year observations will take place between September 1st and September 15th. This ensures you are evaluating status at a single point in time rather than capturing changes in behavior over a longer span. If more than one person is collecting data, plan a training to establish inter-rater agreement before you begin.
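One common way to check inter-rater agreement after such a training is Cohen's kappa, which measures agreement between two raters beyond what chance would produce. As an illustration only (the TIM materials may recommend their own procedure, and the ratings below are invented practice data), here is a minimal Python sketch:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance.

    Both arguments are equal-length lists of category labels, one entry
    per lesson rated.
    """
    n = len(rater_a)
    # Proportion of lessons on which the two raters agreed
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's label frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical example: two raters score six practice lessons
a = ["Adoption", "Adaptation", "Adoption", "Infusion", "Adaptation", "Adoption"]
b = ["Adoption", "Adaptation", "Adaptation", "Infusion", "Adaptation", "Adoption"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

Raters who agree on five of six lessons here reach a kappa of about 0.74; a commonly cited rule of thumb treats values above roughly 0.6 as substantial agreement, though your program may set its own threshold.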

If you are not able to observe every teacher, make sure the choice of which lessons to observe is consistent and purposeful. Selection can be random, but make sure all areas of interest are represented; a true random selection might not include any math lessons, for example. If math lessons are important, best practice is to identify the categories of interest first (e.g., subject areas, grade levels, types of classrooms), then select randomly within each category.
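This category-first approach is essentially stratified random sampling. As a minimal sketch (the roster, names, and the `per_stratum` parameter are all hypothetical, and you would substitute your own categories of interest), it might look like this in Python:

```python
import random

def stratified_sample(teachers, strata_key, per_stratum, seed=None):
    """Select teachers at random within each category of interest.

    teachers:    list of records to sample from
    strata_key:  function mapping a record to its category (stratum)
    per_stratum: how many to pick from each category
    seed:        optional seed for reproducible selection
    """
    rng = random.Random(seed)
    # Group teachers by category
    groups = {}
    for t in teachers:
        groups.setdefault(strata_key(t), []).append(t)
    # Randomly sample within each group so every category is represented
    sample = []
    for members in groups.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical roster: (teacher, subject area)
roster = [("Lee", "Math"), ("Kim", "Math"), ("Ortiz", "ELA"),
          ("Diaz", "ELA"), ("Wu", "Science"), ("Patel", "Science")]
picked = stratified_sample(roster, strata_key=lambda t: t[1],
                           per_stratum=1, seed=42)
# One teacher per subject area, so math is guaranteed a slot
```

Stratifying first guarantees that no category of interest is left out by chance, which a simple random draw across the whole roster cannot promise.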

Alternatively, you could ask each teacher to sign up for a lesson to be observed. Specify the kind of lesson you would like to see. For instance, ask teachers to select a lesson that is typical of technology use in their classrooms or a lesson that represents their highest level of technology integration. If teachers understand the purpose of the data collection and trust that it won't be used against them, they are more likely to choose the type of lesson you are looking for. Either way, make your selection consistent and be sure you aren't unintentionally leaving out parts of the population.

If you are looking for change over time, you need at least enough time between data points for change to occur. If your first data point is collected in September and you are looking for the effects of professional development sessions conducted in October, you need to allow enough time for teachers to attend the PD sessions and integrate new learning into their classroom practice. A teacher may need time to reflect on the new information, to plan new lessons, and to practice new techniques before differences would be observable in the classroom. The amount of time it takes for the anticipated change to show up (or not) may vary by school or district, but the point is that you can't expect the change to become visible immediately. In our example, your first data collection period might be in September, your PD in October, and your second data collection in January, looking for changes in practice.

Of course, it can be helpful to use a combination of data collection to paint a fuller picture. You might start the school year with a TIM-O and a TIM-R, collect TIM-LP data once a month, then close out the year with another TIM-O and TIM-R.

Creating an appropriate data collection plan for your school or district is an important first step that should not be overlooked. Hopefully, the guidelines discussed above will help you create a schedule that supports your goals. Please let us know what has worked well for you and share any feedback you have on our suggestions.

James Welsh is the director of FCIT, project leader for the TIM, and a former classroom teacher. His research interests include evaluation of educational technology, critical media literacy, student creation of multimedia texts, and the role of genre in student composition. Dr. Welsh has published work in Teacher Education Quarterly, Journal of Reading Education, Qualitative Studies in Education, International Journal of Multicultural Education, and The International Journal of Learning, as well as book chapters and articles addressing technology and student composition.