In a word, computers. A constant stream of data is fed to high-capacity supercomputers specialized for numerical weather prediction, where it is transformed into a wide range of graphic forecast products. This data includes historical climatological information, recent trends and observations, geographic influences, thermodynamic and other meteorological equations, and much more. And yes, even the Coriolis force. All this information is constantly updated, reinitialized, and processed into model runs generated for future time intervals.
Some models cover the globe, some focus on a specific continent or other region, some give short-term forecast information, some forecast out to 16 days, some focus on hurricanes, and some even forecast thunderstorm locations. Together with direct observations—such as surface observations from airports; cloud coverage, temperatures, and movement derived from satellite imagery; radar imagery; and wind and temperature aloft information from radiosonde weather balloons—forecasters analyze and compare model outputs, then apply their own experience to come up with the final analyses that we see on weather websites and television reports.
An individual forecaster’s knowledge of local weather can have an outsized influence over some computer-generated products. For example, some models have a hard time analyzing small-scale weather features, such as sea breezes and mountain circulations, so for those situations forecasters often rely less on a particular model run and more on their experience and intuition. Then there are data-sparse areas, like mountain ranges, deserts, oceans, and other remote regions. There aren’t as many observation sites for the supercomputers to analyze, so accuracy suffers.
A complete discussion of numerical weather prediction is way beyond the scope of a magazine article, mainly because of the wide range of forecast products. Some, like mixing ratios or fluid trapping, are pretty esoteric and of interest primarily to meteorologists. Others, like isotachs, lifted indexes, lifting condensation levels (LCLs, which predict cloud base altitudes), cloud cover, and predictive radar, are certainly more relevant to pilots.
In any event, there’s absolutely nothing wrong with the Aviation Weather Center website, Garmin, ForeFlight, and other conventional weather sources. However, if you’re interested in longer-range planning, are concerned about the next day’s thunderstorm or precipitation chances, or simply want to feed your inner weather nerd, knowing something about forecast models is a great asset. Here are some brief insights into a few of them.
Runs and valid times
First things first. Model data isn’t very useful unless you’re looking at the latest computer runs. For that information, be sure to check the initialization date and time, and determine the valid time(s) coinciding with your flight. This is posted at the top or bottom of a forecast panel. It’s always in Zulu time, so you’re already up to speed in that department. The product name will be posted, and then the initialization date and time, in year-month-day-time format. This is the time that the model was updated. You’ll also see a valid time posting, which often looks as follows: “F009 Valid: Wed 2020-07-29 21Z.” Translated, this means you’re looking at a forecast nine hours in the future, valid on Wednesday, July 29, 2020, at 21Z. Ideally, you want the latest initialization time available, bearing in mind that there can be a delay between the actual update and the time you receive it. You can select valid times by clicking on buttons corresponding to the forecast intervals that interest you.
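The label format above can be decoded mechanically: the forecast hour after the “F” is the offset from the run’s initialization time, so subtracting it from the valid time recovers when the model was initialized. Here is a minimal Python sketch, assuming the label always follows the “Fnnn Valid: Day YYYY-MM-DD HHZ” pattern shown above (the function name is illustrative, not from any site’s API):

```python
from datetime import datetime, timedelta

def parse_valid_time(label: str):
    """Parse a panel label like 'F009 Valid: Wed 2020-07-29 21Z'.

    Returns (forecast_hour, valid_time, initialization_time) in UTC.
    """
    parts = label.split()
    forecast_hour = int(parts[0].lstrip("F"))  # 'F009' -> 9
    # parts[3] is the date, parts[4] is the Zulu hour, e.g. '21Z'
    valid = datetime.strptime(parts[3] + " " + parts[4].rstrip("Z"),
                              "%Y-%m-%d %H")
    init = valid - timedelta(hours=forecast_hour)  # when the run started
    return forecast_hour, valid, init

fhr, valid, init = parse_valid_time("F009 Valid: Wed 2020-07-29 21Z")
# A 9-hour forecast valid at 21Z comes from the 12Z initialization.
```

Running it on the example label confirms the arithmetic: a nine-hour forecast valid at 21Z must come from the 12Z run.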
Four models
There are plenty of websites that let you access forecast models, but lately I’ve been using pivotalweather.com and weathernerds.org. Once you log on, you’ll see 10 or so model acronyms posted on the homepage, so click on one and off you go. What are the strengths of each model?
The GFS (Global Forecast System) is often called the “American” model, as opposed to what is usually called the “European” model put out by the ECMWF (European Centre for Medium-Range Weather Forecasts). The GFS comes out four times a day, with each initialization making predictions for up to 384 hours (16 days) in three-hour increments. This makes it great for scoping out the weather for trips up to two weeks in the future. Just remember a cardinal rule: Forecast accuracy of any model decreases with time. The ECMWF model initializes every 12 hours and predicts 240 hours into the future. Maybe that’s why the European model has a reputation for greater accuracy. But the European model is stingy with its data. It puts out far fewer products—just pressure, winds, temperature, and precipitation—compared to the GFS’s dozens of datasets. Both, however, let you check out the weather in the United States as well as the tropics, Europe, and the Middle East.
The NAM (North American Mesoscale) model comes out every six hours and predicts for intervals out to 84 hours, or three and a half days. Its focus is on the continental United States, and its resolution is finer than the GFS’s.
The HRRR (High Resolution Rapid Refresh) model is worth a look in the warmer months. That’s because it puts out “future radar” imagery. Click on the “composite reflectivity” or “1 km AGL [base] reflectivity” links and up pops predictive radar echoes. This imagery isn’t meant for making tactical weather-related decisions, but it’s great for advance situational awareness. The HRRR is a speed demon in that it initializes every hour and can predict out to 18 hours.
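The update cadences and forecast horizons described above lend themselves to a quick lookup: given the current Zulu time, you can work out the most recent scheduled run of each model. A minimal sketch, assuming runs are aligned to 00Z at the cadences quoted in this article (actual availability lags the scheduled time, since each run takes a while to compute and distribute):

```python
from datetime import datetime, timezone

# Scheduled run cadence and forecast horizon, in hours, per the article.
MODELS = {
    "GFS":   {"cadence": 6,  "horizon": 384},
    "ECMWF": {"cadence": 12, "horizon": 240},
    "NAM":   {"cadence": 6,  "horizon": 84},
    "HRRR":  {"cadence": 1,  "horizon": 18},
}

def latest_run(model: str, now: datetime) -> datetime:
    """Most recent scheduled initialization at or before `now` (UTC)."""
    cadence = MODELS[model]["cadence"]
    hour = (now.hour // cadence) * cadence  # round down to a run hour
    return now.replace(hour=hour, minute=0, second=0, microsecond=0)

# At 2140Z, the newest scheduled GFS run is the 18Z initialization.
latest_run("GFS", datetime(2020, 7, 29, 21, 40, tzinfo=timezone.utc))
```

The same call with "HRRR" returns the top of the current hour, which is why the HRRR is the go-to for short-fuse situational awareness.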
There are at least 20 additional models, but those will be more than enough to get you started. Especially when you consider that—geek alert!—if you click on a point in the models’ maps, a Skew-T Log P chart fills your screen. Studying numerical weather prediction models may not count toward a legal weather briefing, but with future radar, future Skew-Ts, and forecasts days ahead of any TAFs, what’s not to like?
Email [email protected]