Monday, February 10, 2014

When Forecasts Go Wrong.

During the past week, forecast after forecast dealing with local snow has gone wrong.  On Thursday, Portland was hit hard even though the forecast a day before called for only a 20% chance of snow showers.  On Saturday, the previous day's prediction kept snow south of Seattle, and even Saturday afternoon's official forecasts (and this blogger) had the snow reaching Seattle no earlier than late evening.  We were wrong.

Although no one likes to make a poor prediction, forecast busts like this week's, when our most advanced tools fail, are what make weather research fun.  This was not a human failure or a mistake:  it was a demonstration that our science is still inadequate and our technology requires improvement.  It means future employment for folks like myself who spend much of our time researching weather systems and improving numerical models of the atmosphere.


It was thus with some amusement that I and others in my field have gotten some spicy comments and emails, some calling for apologies to those we deceived, others suggesting that these were easy forecasts if one only looked at the radar, or that we were somehow lazy and indifferent to the obvious.  Perhaps the most strident message I received was one demanding I stop teasing snow-hype-king Jim Forman of KING-5 TV.  Never!

Why do weather forecast models fail?

There are at least three reasons:

  1. The description of the atmosphere, the starting point of the simulation called the initialization, is flawed.
  2. The physics of the model, how basic processes like radiation, clouds, and precipitation are described, is flawed.
  3. The forecast is inherently impossible, considering the uncertainties in our knowledge of atmospheric flows and the tendency for small errors to grow in time.
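The third reason is the famous "butterfly effect."  As a purely illustrative sketch (not any operational forecast model), here is the classic Lorenz (1963) toy system of atmospheric convection, integrated twice from starting points that differ by one part in a million; the numbers and step size are my own choices for the demo:

```python
import numpy as np

# Lorenz (1963) system: a three-variable toy model of convection, used here
# only to illustrate how tiny initial-condition errors grow with time.
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])  # simple forward-Euler step

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])  # an "analysis error" of one millionth

for _ in range(4000):  # integrate both runs for 20 model time units
    a = lorenz_step(a)
    b = lorenz_step(b)

# The initially microscopic difference has grown by many orders of magnitude;
# past some lead time, the two "forecasts" no longer resemble each other.
print(np.abs(a - b))
```

The point is not the specific trajectory but the behavior: with sparse offshore data, small initialization errors in a real model grow the same way, which is exactly why small, fast-moving disturbances are so hard to pin down a day ahead.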

Picture courtesy of KING TV

In my blog of last week, I noted that the snow forecasts this week were particularly hard.  A frontal zone, separating cold air over Washington from warm air over California, was stuck over Oregon.  A series of frontal disturbances, or frontal lows, forming over the ocean were moving eastward along the front.  These disturbances were small in scale, fast moving, and developing offshore where data was sparse.  Thus, it was a very hard forecast, with the models unable to settle on a consistent solution.

And ALL the models were failing, including the state-of-the-art global European Center model, the US GFS, the US NAM, the UW WRF, the UW ensemble system, and the highly capable NOAA Rapid Refresh and HRRR models.  Extrapolating radar images might give you an hour or two of lead time, but it is useless for a forecast made a day ahead.
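To see why radar extrapolation has such a short shelf life, here is a minimal sketch of the idea (the positions and speed are made-up numbers, not actual radar fixes): assume the snow band simply keeps its recent speed and project it forward.

```python
# Naive radar extrapolation: assume the precipitation band keeps moving at
# its recently observed speed.  All values are hypothetical, for illustration.
times_h = [0.0, 0.5, 1.0]          # hours since the first radar fix
positions_mi = [0.0, 12.0, 24.0]   # band position (miles north of a landmark)

# Average speed over the observed window (miles per hour northward).
speed = (positions_mi[-1] - positions_mi[0]) / (times_h[-1] - times_h[0])

def extrapolate(lead_h):
    """Predicted band position lead_h hours after the last radar fix."""
    return positions_mi[-1] + speed * lead_h

print(extrapolate(1.0))   # an hour out: often a reasonable guess
print(extrapolate(24.0))  # a day out: the band won't move in a straight line that long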

For Saturday's snow that hit Seattle, the model failures were profound.  The UW high-resolution WRF model's prediction of the snow, for the run starting at 4 AM Saturday, did not have snow over Seattle at dinner time (see graphic for 3-h snowfall ending 7 PM Saturday).  In fact, it kept the snow south of Olympia at that time.
In contrast, surface observations and NWS radar (at 7 PM, see below) showed that the snow band reached Seattle, about 100 miles north of the forecast position.  The European Center model did even worse, and the National Weather Service short-range ensemble system (SREF) did not suggest the northward movement (not shown).


Now one thing meteorologists do to determine whether to believe a model forecast is to check the consistency of the models...if they are all on the same page, one has more confidence in the forecast.  In this case, the models were pretty much uniformly holding the snow back from Seattle in the early evening.

Another test was to evaluate the initial conditions of the simulation, or the fidelity of the early part of the forecast.  Here is the forecast for 10 AM Saturday of sea level pressure, winds, and lower-atmospheric temperature.  Do you see the low offshore of Oregon?

Here are the low-level winds at virtually the same time, observed by a satellite sensor called a scatterometer (which measures wind speed and direction by how microwave radiation is scattered off the waves).  If you look carefully you will see the swirl of winds associated with the low...the model's position and shape for the low look good.


And the track and amplitude of the simulated low continued to match observations later in the afternoon.

Perhaps most disappointing of all is the failure of the NOAA Rapid Refresh systems, designed to give good forecasts for the next 15-18 hours.  These systems are run every hour, starting with the assimilation of a huge quantity of local data.  Here is the High Resolution Rapid Refresh (HRRR) forecast of 1-h snowfall valid at 6 PM, for a simulation starting at 10 AM.  Way too far south.

And here is the total accumulation from HRRR through 1 AM Sunday....bad news--no snow over Seattle at all.  The HRRR system got better during the afternoon, but never got the timing right.  This is a real disappointment since HRRR often does very well in short-term prediction.


So we have a mystery on our hands, at least until we analyze the situation better.  Was there some subtle feature aloft that was not properly initialized in the simulations?  Was there an error in describing the moisture distribution offshore?  Was it an error in the model's moist physics (its description of clouds and precipitation)?

Some detective work is needed.

As Sherlock Holmes said:  "The game is afoot!"

