International Journal of Wildland Fire
Journal of the International Association of Wildland Fire
RESEARCH ARTICLE

Evaluation of the Experimental Climate Prediction Center’s fire danger forecasts with remote automated weather station observations

Hauss J. Reinbold A,C, John O. Roads B and Timothy J. Brown A

A Desert Research Institute, 2215 Raggio Parkway, Reno, NV 89512-1095, USA.

B Scripps Experimental Climate Prediction Center, University of California San Diego, 0224 La Jolla, CA 92093, USA.

C Corresponding author. Telephone: +1 775 673 7386; fax: +1 775 674 7007; email: hauss.reinbold@dri.edu

International Journal of Wildland Fire 14(1) 19-36 https://doi.org/10.1071/WF04042
Submitted: 24 August 2004  Accepted: 21 December 2004   Published: 7 March 2005

Abstract

The Scripps Experimental Climate Prediction Center has been routinely making regional forecasts of atmospheric elements and fire danger indices since 27 September 1997. This study evaluates these forecasts using selected remote automated weather station observations over the western USA. Bias and anomaly correlations are computed for daily 2-m maximum, minimum and average temperature; 2-m maximum, minimum and average relative humidity; precipitation; afternoon 10-m wind speed; and four National Fire Danger Rating System indices: ignition component, spread component, burning index and energy release component. Of the atmospheric elements, temperature generally correlates well, but relative humidity, precipitation and wind speed are less well correlated. Fire danger indices have much lower correlations, but do show useful spatial structure in some areas such as Southern California, Arizona and Nevada.


References


Anderson BT, Roads J (2002) Regional simulation of summertime precipitation over the Southwestern United States. Journal of Climate 15, 3321–3342.

Bradshaw LS, Deeming J, Burgan R, Cohen J (1983) ‘The National Fire-Danger Rating System: technical documentation.’ USDA Forest Service General Technical Report INT-169.

Burgan RE (1988) ‘1988 revisions to the 1978 National Fire-Danger Rating System.’ USDA Forest Service, Southeastern Forest Experiment Station Research Paper SE-273. (Asheville, NC)

Chen SC, Roads J, Juang H, Kanamitsu M (1999) Global to regional simulation of California’s wintertime precipitation. Journal of Geophysical Research 104, 31 517–31 532.

Deeming JE, Lancaster J, Fosberg M, Furman R, Schroeder M (1972) ‘National Fire-Danger Rating System.’ USDA Forest Service, Rocky Mountain Forest and Range Experiment Station Research Paper RM-84. (Fort Collins, CO)

Deeming JE, Burgan R, Cohen J (1977) ‘The National Fire-Danger Rating System–1978.’ USDA Forest Service, Intermountain Forest and Range Experiment Station General Technical Report INT-39. (Ogden, UT)

Han J, Roads J (2004) US climate sensitivity simulated with the NCEP Regional Spectral Model. Climatic Change 62, 115–154.

Higgins R, Shi W, Yarosh E, Joyce R (2000) A gridded precipitation database for the United States (1963–1999). NCEP Climate Prediction Center Atlas No. 7. (Climate Prediction Center, NCEP, NWS: Camp Springs, MD)

Juang H, Kanamitsu M (1994) The NMC nested regional spectral model. Monthly Weather Review 122, 3–26.

Reinbold H (2003) Verification of ECPC’s Regional Spectral Model fire climate and fire danger forecasts. Master’s Thesis, University of Nevada, Reno.

Roads JO (2004) Experimental weekly to seasonal, global to regional US precipitation forecasts. Journal of Hydrology  288, 153–169.
Wilks DS (1995) ‘Statistical methods in the atmospheric sciences.’ (Academic Press: San Diego)




Appendix 1. Statistical measures

The purpose of forecast verification is to determine the quantitative accuracy of the forecast. As described previously by Roads et al. (2005), the statistical measures employed in this study for forecast verification were bias, root mean square error, anomaly correlation and standard deviation. Due to space constraints, the root mean square error and standard deviation are not discussed here, but these measures can be found in Reinbold (2003).

Bias is a simple calculation of forecast minus observation (Wilks 1995), or

\[ \mathrm{Bias} = f - o \qquad (1) \]

where f is the forecast value and o is the value of the observation. When shown in graphical form, this calculation has the benefit of revealing under what situations the model is over- or under-forecasting and by how much. It is also useful in determining potential seasonal characteristics in the errors between the datasets.
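
As a minimal sketch of equation (1), and not the authors' code, the daily bias at a single station might be computed as follows (the temperature values are hypothetical):

```python
import numpy as np

# Hypothetical forecast and observed daily 2-m maximum temperatures (degrees C)
# at one RAWS site over five days.
forecast = np.array([31.2, 29.8, 33.5, 30.1, 28.7])
observed = np.array([30.0, 30.5, 32.0, 29.0, 29.5])

# Equation (1): bias is simply forecast minus observation.
bias = forecast - observed

# A positive mean bias indicates systematic over-forecasting at this station.
print("daily bias:", bias)
print("mean bias: %.2f" % bias.mean())
```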

Anomaly correlations are commonly used to evaluate extended forecasts. This measure is designed to reflect how well a forecast captures the pattern of the observed field, but it does not effectively measure the magnitude of the values (Wilks 1995). Two different equations are used, representing the two types of anomaly correlation in this study. The first judges the spatial variation and correlation of the anomalies (Roads 2004; Roads et al. 2005) and is not shown here. The second is better described as temporal variations in spatial correlations (equation 3; Roads et al. 2005). Anomalies are first computed by taking the difference between the total forecast (either by region or for the entire western USA) and the climatological monthly means. In other words,

\[ A = f - C_f \qquad (2) \]

where A is the anomaly, f is the forecast and C_f is the climatological mean for that forecast type (weekly, monthly or seasonal mean).
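
A short sketch of equation (2), assuming the climatological monthly means have already been computed (the values below are hypothetical):

```python
import numpy as np

# Hypothetical monthly-mean forecasts of a quantity (e.g. burning index) and the
# climatological means for the same calendar months.
forecast_mean = np.array([42.0, 55.3, 61.8])   # e.g. Jun, Jul, Aug forecast means
climatology   = np.array([40.5, 52.0, 65.0])   # climatological monthly means

# Equation (2): anomaly = forecast minus the climatological mean for that period.
anomaly = forecast_mean - climatology
print("forecast anomalies:", anomaly)
```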

Given that A is a forecast anomaly of any type (weekly, monthly or seasonal mean) and that B is the validating anomaly from observation, the temporal variations in the spatial anomaly correlations (AC, sometimes known as pattern correlation) are calculated using

\[ \mathrm{AC} = \frac{\sum_{i=1}^{M} A_i B_i}{\sqrt{\sum_{i=1}^{M} A_i^{2} \, \sum_{i=1}^{M} B_i^{2}}} \qquad (3) \]

where A_i and B_i are the anomalies at station i and M is the total number of RAWS in the current region (M = 262 for the western USA). Missing values in the observational anomalies for equation (3) reduce the anomaly summations for both datasets by the number missing (the missing RAWS values and matching interpolated validation or forecast values are removed).
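
As an illustrative sketch of equation (3) with the missing-value handling described above (this is not the authors' code; the station anomalies are hypothetical), the pattern correlation over a region could be computed as:

```python
import numpy as np

# Hypothetical forecast anomalies (A) and validating observed anomalies (B) at
# M stations in one region; NaN marks a missing RAWS observation.
A = np.array([1.2, -0.8, 0.5, 2.1, -1.5, 0.3])
B = np.array([0.9, -1.1, np.nan, 1.8, -0.7, 0.1])

# Stations with missing observations are dropped from both datasets, so the
# summations for forecasts and observations shrink by the same amount.
valid = ~np.isnan(B)
A, B = A[valid], B[valid]

# Equation (3): anomaly (pattern) correlation over the remaining stations.
ac = np.sum(A * B) / np.sqrt(np.sum(A ** 2) * np.sum(B ** 2))
print("anomaly correlation: %.3f" % ac)
```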