Problem
What should we infer when our models make predictions that the world does not bear out? Every model carries some inaccuracy, of course, but when a prediction fails, is it better to presume that we misunderstand the world, or to treat the error as a signal of problems preventing the world from matching our model?