Tuesday, May 19, 2009: 12:00 PM
Pantlind Ballroom
Our ability to assess the effects of human activity on natural ecosystems and their biota depends largely on our ability to specify the reference condition appropriate to individual sites. Classification and continuous modeling are two approaches used to predict reference conditions. Neither approach makes perfect predictions, and assessments are therefore prone to error. Previous consideration of prediction error has largely assumed that errors are random, an assumption under which errors affect only the precision of assessments. However, systematic prediction errors (SPEs) can be as large as random errors but are often effectively “invisible” because they are unrecognized or ignored. We illustrate the nature and magnitude of SPEs associated with several landscape-scale models (e.g., temperature and water chemistry models, niche models, and models used to derive O/E and multimetric indices). All predictors were prone to bias, especially classifications. In some cases, ignoring strong natural gradients produced bias; in other cases, models did not adequately describe true predictor-response relationships. Because SPEs can be both large and invisible, they are especially problematic for researchers, managers, and decision makers. In some cases, use of robust modeling approaches (e.g., Random Forests, GAMs, ANNs) may minimize the magnitude of SPEs. In other cases, post-modeling adjustments may be required.
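The distinction between random error and an SPE can be sketched with a small simulation. The following is a hypothetical illustration (all numbers, variable names, and the linear "gradient" relationship are invented, not drawn from the abstract's data): a classification-style predictor that assigns one class-mean reference value while ignoring a strong natural gradient produces a large signed (systematic) error at the ends of that gradient, whereas a continuous model fit against the gradient leaves mainly random error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a natural gradient (e.g., elevation) drives the
# true reference condition of some biotic metric. Values are invented
# purely to illustrate bias vs. random error.
gradient = rng.uniform(0.0, 10.0, size=1000)
true_reference = 2.0 * gradient + 5.0                         # assumed truth
observed = true_reference + rng.normal(0.0, 2.0, size=1000)   # random error

# "Classification" predictor: a single class mean, ignoring the gradient.
class_prediction = np.full_like(observed, observed.mean())

# Continuous predictor: least-squares fit against the gradient.
slope, intercept = np.polyfit(gradient, observed, 1)
model_prediction = slope * gradient + intercept

# Systematic prediction error (SPE) = mean *signed* error within a
# subrange of the gradient; random error alone would average near zero.
low = gradient < 2.0   # sites at one end of the gradient
spe_class = (class_prediction[low] - true_reference[low]).mean()
spe_model = (model_prediction[low] - true_reference[low]).mean()

print(f"SPE of class mean at low end of gradient: {spe_class:+.2f}")
print(f"SPE of continuous model at low end:       {spe_model:+.2f}")
```

In this toy setup the class-mean predictor is biased high by several units at the low end of the gradient (and biased low at the high end), while the continuous model's mean signed error stays near zero: the bias is invisible to overall precision statistics but systematic for particular sites.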
Session: Special Session - Application of Landscape, Bioassessment and Predictive Models in Stream Ecology