Troubled by the state of my highly scientific NCAA bracket, I read with interest Nate Silver’s New York Times blog entry on the odds of Virginia Commonwealth reaching the Final Four in the NCAA Basketball Tournament. Right there on the virtual sports page, I was amazed to find a pearl of pure modelling wisdom:
Whenever you come across a statistical model which suggests that something extremely unlikely has occurred, you ought to be in the habit of questioning whether the event really was that unusual, or instead whether the model was designed with faulty assumptions.
Beautifully put. Freak events do happen: a magnitude 9.0 earthquake in Japan is a freak event in any given year. When your statistical model says that something extremely uncommon has happened, however, it may actually be that something extremely common has happened: another statistical model has revealed itself to be badly flawed.
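Silver’s point can be made precise with a toy Bayesian calculation. The numbers below are purely illustrative assumptions, not anything from his post: suppose the model assigns the observed event a one-in-a-million probability, but you admit even a modest prior chance that the model itself is flawed, and a flawed model would produce such “freaks” fairly often.

```python
# Toy Bayesian check: "the model is right and a freak event happened"
# vs. "the model itself is flawed". All numbers are illustrative.

p_event_given_model = 1e-6    # model says: one-in-a-million event
p_model_flawed = 0.05         # prior: 5% chance the model is wrong
p_event_given_flawed = 0.1    # a flawed model makes such events common

# Posterior odds that the model is flawed, given the event occurred:
odds = (p_model_flawed * p_event_given_flawed) / (
    (1 - p_model_flawed) * p_event_given_model
)
prob_flawed = odds / (1 + odds)
print(f"posterior probability the model is flawed: {prob_flawed:.4f}")
```

Even with only a 5% prior suspicion, observing the “one-in-a-million” event pushes the posterior probability that the model is flawed above 99.9%. The rare event is far stronger evidence against the model than for the freak occurrence.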
Over the last decade or two, complex statistical models have become an important part of risk management in the financial sector. In light of the role of, e.g., AIG’s risk modelling in the financial crisis, some folks argue that we should abandon these models.
This is wrong. Nate Silver hints at the right lesson. I put it this way:
Risk models are like chain saws: powerful tools, but you’d better learn some good habits for using them or someone is likely to get hurt.
Fortunately, I didn’t get too leveraged investing in that NCAA bracket.