Tutorial–model criticism ✏️
Practical 3 – Model criticism
Line transect analysis
This data set was simulated, so we know both the true population density and the true underlying detection function. Our interest lies in the robustness of the density estimates in the face of model uncertainty. With actual data, we will not know the true shape of the process giving rise to the observed detections. It would be reassuring if density estimates were relatively insensitive to the choice of detection function model. Let’s find out how sensitive our estimates are for this data set.
Examine the sensitivity of the density estimates from the three models fitted to data truncated at 20m (a sketch of code to refit and compare these models follows the questions below):
- To the nearest percent, what is the relative percentage difference between the smallest and largest estimates presented above?
- What effect did adjustment terms have upon model fit for the half-normal and hazard-rate key functions with 20m truncation?
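As a reference point, here is a minimal sketch of how the three candidate detection functions might be fitted and their density estimates compared. It assumes the simulated data are in a data frame called `LTExercise` with distances in metres and effort in kilometres; the object names, the unit conversion, and the baseline used for the percentage difference are illustrative assumptions, not part of the original practical.

```r
library(Distance)

# Assumed units: distances in metres, effort in km, area in square km
conv <- convert_units("metre", "kilometre", "square kilometre")

# Three candidate key functions, all truncated at 20m
# (adjustment terms selected by AIC, the ds() default)
lt.hn  <- ds(LTExercise, key = "hn",   truncation = 20, convert_units = conv)
lt.hr  <- ds(LTExercise, key = "hr",   truncation = 20, convert_units = conv)
lt.uni <- ds(LTExercise, key = "unif", truncation = 20, convert_units = conv)

# Density point estimates from each model
D.hat <- sapply(list(hn = lt.hn, hr = lt.hr, unif = lt.uni),
                function(m) m$dht$individuals$D$Estimate)
D.hat

# Relative percentage difference between the largest and smallest estimates,
# expressed here relative to the smallest
round(100 * (max(D.hat) - min(D.hat)) / min(D.hat))
```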
Model fit
One oversight of the analysis of the `LTExercise` simulated data is the failure to assess model fit. Using the `gof_ds` function, below is code to perform Cramér-von Mises goodness of fit tests upon all three key function models with 20m truncation. We use the argument `plot=FALSE` to skip production of the Q-Q plot in this instance.
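A minimal sketch of that code, assuming the three fitted objects from the earlier sketch (`lt.hn`, `lt.hr`, `lt.uni`):

```r
# Cramér-von Mises goodness of fit test for each 20m-truncated model;
# plot = FALSE suppresses the Q-Q plot
gof_ds(lt.hn,  plot = FALSE)
gof_ds(lt.hr,  plot = FALSE)
gof_ds(lt.uni, plot = FALSE)
```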
- Are all three of the fitted models admissible for use in making inference?
Capercaillie data
Watch out for danger signs in the output of functions. Examine the output of this simple half-normal model fitted to the exact capercaillie distances and consider the following:
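A minimal sketch of how such a fit and its output could be produced, assuming the `capercaillie` data frame from the practical (exact perpendicular distances in metres, effort in kilometres; the unit conversion is an assumption):

```r
# Simple half-normal detection function, no adjustment terms, exact distances
conv <- convert_units("metre", "kilometre", "square kilometre")
caper.hn <- ds(capercaillie, key = "hn", adjustment = NULL,
               convert_units = conv)

# The encounter rate variability (se.ER, cv.ER) appears in the
# summary statistics table of this output
summary(caper.hn)
```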
- What strikes you as strange about the variability associated with the encounter rate (`se.ER` and `cv.ER`)?
- This is an actual data set, so we do not know the true density of capercaillie in this study area. However, we can compare the point estimates of density derived from distances treated as exact and from binned distances, as in the sketch below.
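A minimal sketch of that comparison, reusing the `caper.hn` fit above; the cutpoints here are purely illustrative and not necessarily those prescribed in the practical:

```r
# Refit the half-normal to the same distances binned into 10m intervals
# (illustrative cutpoints; extend them if any distances exceed 80m)
caper.hn.bin <- ds(capercaillie, key = "hn", adjustment = NULL,
                   cutpoints = seq(0, 80, by = 10),
                   convert_units = conv)

# Compare density point estimates from exact and binned distances
c(exact  = caper.hn$dht$individuals$D$Estimate,
  binned = caper.hn.bin$dht$individuals$D$Estimate)
```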