As presented on the page titled National TIM Performance Measures, the definitions of the three national TIM performance measures give agencies a starting point for the consistent collection and reporting of TIM performance. Applied broadly, however, such measures may not be very informative given the wide range of incident types and the varying conditions under which incidents occur. Putting the performance measures into context, on the other hand, can help agencies realize a fuller range of benefits from performance measurement, such as more readily directing their resources and refining their approaches. The challenge is that not all agencies add this context, so direct comparisons between agencies may be limited or not possible at all.
To perform a meaningful comparison of TIM performance measures across multiple regions or states, the measures almost assuredly must be put into context to ensure that the comparison is fair. Consider, for example, a region that reports ICT only for incidents involving fatalities; its reported value may be very high, such as 120 minutes or more. Another region may report ICT for all incidents, which will typically skew the reported number much lower, say 30 minutes. A direct comparison of 120 minutes to 30 minutes, under the assumption that both numbers represent all incidents, may lead to the incorrect inference that the first region is not skilled in incident management practices or has substantial room for improvement. The following sections describe typical ways of using incident characteristics to put TIM performance measures into context:
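To make the skew described above concrete, the following minimal sketch uses invented clearance times (all values are hypothetical, for illustration only) to show how an average computed over fatal incidents alone differs sharply from an average computed over all incidents:

```python
# Hypothetical illustration of how incident mix skews a reported average ICT.
# All clearance times below are invented for demonstration purposes.

def mean(values):
    """Arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

# Assumed per-incident clearance times, in minutes, grouped by severity.
ict_minutes = {
    "fatality": [130, 115, 125],               # long clearances
    "injury": [45, 60, 50],
    "property_damage_only": [20, 25, 15, 30],  # short clearances
}

# A region reporting ICT only for fatal incidents publishes a high average.
fatal_only = mean(ict_minutes["fatality"])

# A region reporting ICT for all incidents publishes a much lower average,
# because the numerous minor incidents pull the mean down.
all_incidents = mean([t for times in ict_minutes.values() for t in times])

print(f"Fatal-only average ICT: {fatal_only:.1f} min")
print(f"All-incident average ICT: {all_incidents:.1f} min")
```

Comparing the two printed averages directly, without knowing which incidents each region includes, would reproduce exactly the misleading 120-versus-30 comparison described above.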