Since their introduction in the 1980s, automated weather observing stations have become standard equipment at many airports around the world. These include the federally funded ASOS (automated surface observing system) and the federal- and/or state-funded AWOS (automated weather observing system) instrument suites. It's all part of a grand plan to enhance the density of weather data, provide more weather information in support of aviation safety, and improve the quality of both short- and long-range forecasts for aviation users and the general public alike. They don't spell the end of human surface observations; these are still made at major airports, and humans augment, or modify, automated reports at many sites.
You can tell whether an automated METAR is augmented by a human observer by looking in the modifier section of the report, which comes after the time of the report and before the wind direction/speed portion. If the section says AUTO, the METAR is fully automated. When an observer augments or backs up an ASOS site, the AUTO modifier is dropped. AWOS reports carry their AUTO notifications as a prefix and, in telephone recordings and broadcast messages over dedicated AWOS frequencies, the term "automated weather" is used at the start of the observation. Human comments can include observations that automated equipment is unable to detect but that have great impact on flight operations. For example: tornado (+FC); funnel cloud (FC); thunderstorm began (TSB, along with the minutes past the hour); thunderstorm ended (TSE, also with the minutes past the hour); wind shift (WSHFT); and surface visibility (SFC VIS, the visibility at and along the surface).
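To make the modifier field concrete, here's a minimal Python sketch of that check. The station identifiers and report strings are invented for illustration, and a real METAR parser would also handle COR and other modifiers.

```python
# Minimal sketch: is a METAR fully automated? The modifier field, when
# present, follows the station ID and the DDHHMMZ time group.
def is_fully_automated(metar: str) -> bool:
    fields = metar.split()
    # fields[0] = station, fields[1] = time; the modifier (if any) is next.
    return len(fields) > 2 and fields[2] == "AUTO"

# Invented example reports, for illustration only:
print(is_fully_automated("KXYZ 042035Z AUTO 26008KT 10SM CLR 24/12 A3001"))  # True
print(is_fully_automated("KXYZ 042052Z 27010KT 10SM FEW250 25/11 A2999"))   # False
```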
Studies have shown that the accuracy of automated weather observations correlates very well with human observations. And there are many other advantages to automated reports or broadcasts. Thanks to them, more airports now have weather reporting capability. The observations are consistent in their measuring methodology from site to site. The reports are updated and issued as often as once a minute; human observers make their observations once an hour, unless sudden, big changes occur. Automated reports also can allow instrument approaches at airports that previously had no weather reporting capability, and they permit lower minimum descent altitudes (MDAs) and decision altitudes (DAs) on certain instrument approaches.
For all these reasons, it's a good practice to monitor ASOS and AWOS broadcasts as you fly. You get an immediate, complete weather update, and can track changes in ceiling, visibility, wind, altimeter setting, and more. This is especially helpful when flying on instrument flight plans, in instrument meteorological conditions. Is the ceiling lifting? Has the front passed? You can answer those questions simply by tuning in. ASOS and AWOS frequencies are printed on sectional and instrument en route charts.
As good as they are, automated reports do have their shortcomings. ASOS and AWOS are certainly better than no report at all (there's universal agreement on that), but there are situations where their computer algorithms (the rules that govern the sampling, processing, and sequencing of information) can let us down. There are also elements of automated reports that are very accurate, and we should know which these are. For the purposes of this article, we'll talk about ASOS only. A full discussion of the ins and outs of both systems, their service levels, and their printed and broadcast formats is simply outside the scope of a magazine article.
First, the most reliable parts of the ASOS data. These are the weather variables that the equipment measures directly: temperature; dew point; barometric pressure/altimeter setting; and wind. We can pretty much count on these reports to be accurate, because the observation methods involve a minimum of computerized massaging.
A thermistor provides temperature readings. Thermistors exploit the predictable relationship between electrical resistance and temperature. They're extremely accurate, and they provide updated temperature readings once a minute.
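The article doesn't give the sensor's specifications, but the resistance-to-temperature math a thermistor relies on looks roughly like this beta-parameter sketch. The values of R0, T0, and beta below are generic textbook numbers, not the specs of the actual ASOS probe.

```python
import math

def thermistor_temp_c(resistance_ohms: float,
                      r0: float = 10_000.0,   # resistance at the reference temperature
                      t0_k: float = 298.15,   # reference temperature (25 C) in kelvin
                      beta: float = 3950.0) -> float:
    """Beta-parameter approximation: 1/T = 1/T0 + ln(R/R0)/beta."""
    inv_t = 1.0 / t0_k + math.log(resistance_ohms / r0) / beta
    return 1.0 / inv_t - 273.15

print(round(thermistor_temp_c(10_000.0), 1))  # 25.0 C at the reference point
```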
Dew point is determined by the ASOS hygrothermometer, a sensor that uses a chilled mirror. The mirror is electrically cooled to the point where a fine film of condensation occurs. An infrared beam detects the condensation, then triggers a temperature measurement of the mirror's surface. This is reported as the dew point. Meanwhile, a fan draws ambient air into the instrument housing so that a representative air sample is constantly maintained. Dew-point measurements are reported every five minutes. Dirt, dust, sand, spider webs, and other contaminants can cause erroneous readings, as can dew points near the freezing mark (the mirror can ice up).
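Conceptually, the measurement cycle works like the toy loop below. The starting temperature, the cooling step, and the simple step-down logic are illustrative assumptions on my part; the real instrument runs a continuous closed-loop controller.

```python
def measure_dew_point(true_dew_point_c: float,
                      start_temp_c: float = 20.0,
                      step_c: float = 0.1) -> float:
    mirror_temp = start_temp_c
    # Chill the mirror until condensation forms (mirror at or below the
    # dew point), as the infrared beam would detect in the real sensor.
    while mirror_temp > true_dew_point_c:
        mirror_temp -= step_c
    # The mirror's surface temperature at condensation is the dew point.
    return mirror_temp

print(round(measure_dew_point(12.3), 1))  # roughly 12.3 C
```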
Barometric pressure also is computed, updated, and broadcast every minute. ASOS has two or three pressure transducers that measure atmospheric pressure. At least two sensors must agree within 0.04 inch of mercury for pressure to be reported, and the lowest pressure reported by the agreeing sensors is the one that's transmitted. ASOS also has the capability to report density altitude.
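Here's a sketch of that agreement check. The 0.04-inch tolerance and the lowest-value rule come from the text; the pairwise comparison is my assumption about how "at least two must agree" might be implemented.

```python
from itertools import combinations

def report_pressure(sensors_inhg: list[float],
                    tolerance_inhg: float = 0.04) -> float | None:
    # Collect every sensor that agrees with at least one other sensor.
    agreeing: set[float] = set()
    for a, b in combinations(sensors_inhg, 2):
        if abs(a - b) <= tolerance_inhg:
            agreeing.update((a, b))
    # Transmit the lowest agreeing value, or nothing at all.
    return min(agreeing) if agreeing else None

print(report_pressure([29.92, 29.94, 30.10]))  # 29.92 (first two agree)
print(report_pressure([29.80, 29.95, 30.10]))  # None (no pair agrees)
```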
Wind measurements get a bit more involved. For wind speed, ASOS uses the rotating anemometer cups we're all familiar with. A wind vane records wind direction to within 5 degrees. ASOS records wind speed and direction constantly, but its observations are weighted toward the most recent 10 minutes. Published and broadcast reports are a two-minute average that's updated every five seconds. That's a very rapid update cycle, which is a good thing for us. ASOS gives wind updates faster than any human ever could, and it does so 24 hours a day.
What about gusts? The algorithm always looks for wind speeds that exceed the current two-minute average by 5 knots or more — and holds those values for 10 minutes. If, at the end of the 10 minutes, the gust is higher than the current two-minute average, a gust is reported.
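Putting these two paragraphs together, a simplified version of the wind logic might look like the sketch below. The five-second sampling, two-minute average, 5-knot threshold, and 10-minute hold come from the text; everything else is my simplification, not the published ASOS algorithm.

```python
from collections import deque

SAMPLE_S = 5
AVG_WINDOW = 120 // SAMPLE_S    # two minutes of five-second samples
GUST_HOLD = 600 // SAMPLE_S     # ten-minute hold for gust candidates

samples: deque[float] = deque(maxlen=AVG_WINDOW)
gust_candidates: deque[tuple[int, float]] = deque()  # (sample index, speed)

def ingest(i: int, speed_kt: float) -> tuple[float, float | None]:
    samples.append(speed_kt)
    avg = sum(samples) / len(samples)
    # Any sample 5 knots or more above the running average is a candidate.
    if speed_kt >= avg + 5:
        gust_candidates.append((i, speed_kt))
    # Drop candidates older than the 10-minute hold.
    while gust_candidates and gust_candidates[0][0] <= i - GUST_HOLD:
        gust_candidates.popleft()
    # Report a gust only while a held value still beats the current average.
    peak = max((s for _, s in gust_candidates), default=None)
    gust = peak if peak is not None and peak > avg else None
    return round(avg, 1), gust

for i, kt in enumerate([10, 10, 18, 10, 10]):
    print(ingest(i, kt))   # the 18-knot sample is held and reported as a gust
```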
The cups and vanes can freeze up in snow and icing conditions, in which case there can be reports of zero wind and false wind directions. Some newer wind sensors are heated, though, making freeze-ups a nonissue at those sites.
The more complex components work in ways that require a greater understanding. To measure cloud height, sky coverage, and vertical visibility, a laser-beam ceilometer (LBC) shines a narrow beam straight up into the sky. Any reflections from clouds are recorded with a height value, reported to the nearest 100 feet. Thirty minutes' worth of 30-second samples of cloud "hits" is processed, with the last 10 minutes of data double-weighted by the algorithm. New observations are created every minute.
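The double weighting is easy to picture in code. This sketch shows only the weighting and the 100-foot rounding; the real processing also clusters hits into discrete layers, which is beyond what's described here.

```python
def weighted_mean_height(hits_ft: list[float]) -> int:
    """hits_ft: up to 60 thirty-second samples, oldest first.
    The newest 20 samples (the last 10 minutes) count twice."""
    weighted = hits_ft + hits_ft[-20:]
    return round(sum(weighted) / len(weighted) / 100) * 100  # nearest 100 ft

# Illustrative hits: a 2,500-ft base lowering to 1,900 ft in the last 10 min.
hits = [2500] * 40 + [1900] * 20
print(weighted_mean_height(hits))  # 2200, pulled down by the double weighting
```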
The ceilometer can report only three cloud layers at a time, and the beam on most current ceilometers tops out at 12,000 feet. Therefore, clouds above that altitude aren't reported. Another drawback to the LBC is its narrow beam. You could have a huge cumulonimbus cloud or a tornado next to the site, but the LBC will report "sky clear" if its narrow, vertical beam can't see it. Conversely, if a single cloud parks over the LBC, an overcast will be reported.
Another problem happens when fog or precipitation prevents the LBC from getting good cloud hits. Instead, the beam reflects from the fog or precipitation, and cloud bases can be reported lower than they actually are. If enough of these false hits occur while the visibility is one mile or less and a cloud layer is detected at 2,000 feet or less, a formula is invoked to produce vertical visibility reports (formerly called "obscurations"). These are made in 100-foot increments (e.g., "vertical visibility 200 feet"). The nature of the obscuring phenomenon isn't identified. Still another problem comes with contamination of the ceilometer lens by dirt, dust, bird droppings, and spider webs; as with other anomalies, ASOS is programmed to report "sky condition missing" in those cases.
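The trigger conditions translate directly into code. The vertical-visibility value computed below is just a placeholder, since the actual formula isn't given here.

```python
def sky_report(visibility_sm: float, lowest_hit_ft: float | None) -> str:
    # Vertical visibility applies when visibility is 1 sm or less and a
    # layer is detected at 2,000 ft or less (per the conditions above).
    if (visibility_sm <= 1.0 and lowest_hit_ft is not None
            and lowest_hit_ft <= 2000):
        vv_hundreds = int(lowest_hit_ft // 100)   # placeholder estimate
        return f"VV{vv_hundreds:03d}"             # e.g., VV002 = 200 feet
    if lowest_hit_ft is None:
        return "CLR"
    return f"base {round(lowest_hit_ft / 100) * 100} ft"

print(sky_report(0.5, 230))    # VV002
print(sky_report(5.0, 1800))   # base 1800 ft
```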
ASOS isn't influenced by the "packing effect," which occurs when human observers overestimate cloud coverage because of slant-range considerations. An observer on the ground can't see distant breaks in clouds, and pilots flying over an undercast can likewise overestimate coverage because distant breaks appear compressed at slant ranges. ASOS sees only what passes directly overhead, so its cloud-cover reports aren't inflated this way. But that vertical view cuts both ways: pilots climbing or descending may well see much more cloud coverage than ASOS would have you believe.
There's also a lag when it comes to reports of changing cloud cover. If an overcast layer suddenly moves over an LBC that had been seeing clear sky, a report of FEW clouds will be made within two minutes, and a broken layer within 10 minutes. The biggest problems with LBC reports happen when fast-moving, moisture-laden fronts pass by. That's when rising and falling cloud heights occur in rapid succession, and the LBC reads each one. ASOS ceiling readings are most accurate in stable, nonfrontal weather, when there's no low-level obscuration, no precipitation, and consistent cloud layers. That's not much help in a hard IFR situation but, as we said earlier, it's better than nothing. And, of course, you can still descend to the MDA or DA to take a firsthand look at the airport environment. Or not, and execute a missed approach.
Visibility reports come via a forward-scatter meter. This measures the amount of light scattered from a beam. The result is processed into visibility values designed to measure the clarity of the nearby air. The visibility algorithm is designed to respond quickly to decreases in visibility, and more slowly to increases. Each minute, ASOS processes the most recent 10 minutes' worth of sensor data. If visibility drops from seven miles to one mile in a minute, it takes about three minutes for the 10-minute mean values to register three-mile visibility. Then a special observation (SPECI) is transmitted.
If a fog bank moves over the airport and the visibility goes to zero, ASOS reports two-mile visibility in one minute, one mile in two more minutes, and less than one-half mile in three minutes. On the other hand, if the visibility then suddenly rises above three miles, ASOS reports go from less than one-half mile to two miles in nine minutes, and to more than three miles in 10 minutes.
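One way to reproduce that quick-down, slow-up behavior is to average the reciprocals of the one-minute samples, since low values dominate such a mean. The 10-minute window comes from the text; the reciprocal averaging is my assumption, not the published algorithm (which also triggers SPECI reports).

```python
from collections import deque

window: deque[float] = deque(maxlen=10)  # last 10 one-minute samples (sm)

def reported_visibility(sample_sm: float) -> float:
    window.append(max(sample_sm, 0.1))   # clamp to avoid dividing by zero
    # Reciprocal (harmonic-style) mean: low samples pull it down quickly,
    # but recovery back to high values is slow.
    return round(len(window) / sum(1.0 / v for v in window), 1)

# Seven minutes of 7 sm, then the visibility drops to 1 sm:
for vis in [7, 7, 7, 7, 7, 7, 7, 1, 1, 1]:
    print(reported_visibility(vis))
# The report falls to 2.5 sm within three low samples, far below the
# arithmetic mean (5.2 sm) of the same window.
```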
While scatter meters correlate well with the transmissometer readings used to report runway visual ranges (RVRs), there is a complication in conditions where bright, backscattered light exists, as when daytime clouds, fog, snow, or drizzle reflect sunlight. In these situations, ASOS can report visibilities twice what a human observer would perceive.
One explanation uses an automotive analogy. When driving in fog, your headlights can reflect off the moisture particles and severely reduce your visibility, or even blind you. But the headlights of an oncoming car seem to penetrate the fog, letting you see that car much deeper into it. Why? Because your headlights scatter light back to you (backscatter), while the approaching car's headlight beams are forward scattered. Research has shown that the visibility difference between forward- and backscattered light is 2-to-1. Since ASOS measures forward scatter (albeit in a very confined space designed to represent the volume of air within a two-to-three-mile radius), it can report visibilities twice what a human eye would perceive. The moral: Be suspicious of nonaugmented ASOS visibility reports when bright daytime fog, clouds, or precipitation exists, such as with radiation fog after daybreak.
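Applied as a rule of thumb, the 2-to-1 ratio suggests mentally halving a nonaugmented report under those bright-backscatter conditions. This is my extrapolation from the ratio above, not official guidance.

```python
def conservative_visibility_sm(asos_reported_sm: float) -> float:
    # Halve the report to approximate what a human eye might perceive
    # when bright daytime fog, clouds, or precipitation is present.
    return asos_reported_sm / 2.0

print(conservative_visibility_sm(6.0))  # treat a 6 sm report as ~3 sm
```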
ASOS' precipitation identifier (a light-emitting diode weather identifier, or LEDWI) and freezing rain detector work fairly reliably, but there are cases where the LEDWI can misreport light precipitation. That's because the LEDWI detects precipitation fall rates and is designed to report rain or snow only at fall rates of 0.01 inch per hour or greater, so light rain or snow may go unnoticed. Freezing rain is reported via a heated, vibrating probe that measures changes in the vibration's frequency, much like the probes used in aircraft ice-detection systems. After the probe collects a tenth of an inch of ice, a signal triggers a heating cycle, the ice is shed, and a new reporting cycle begins. Updates happen at one-minute intervals.
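The accretion-and-shed cycle can be sketched as a simple loop. The tenth-of-an-inch threshold and one-minute updates come from the text; the accretion rate per minute is an invented number for illustration.

```python
SHED_THRESHOLD_IN = 0.10   # ice load that triggers the heating cycle

ice_in = 0.0
for minute in range(1, 11):         # one-minute update intervals
    ice_in += 0.02                  # assumed accretion: 0.02 in/minute
    if ice_in >= SHED_THRESHOLD_IN:
        print(f"minute {minute}: heating cycle, ice shed, cycle restarts")
        ice_in = 0.0
    else:
        print(f"minute {minute}: {ice_in:.2f} in of ice, freezing rain reported")
```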
Many ASOSs now have lightning detectors, which we'll discuss in an upcoming article. Like other components in the ASOS suite of instruments, lightning detection is being slowly upgraded at many sites.
The bottom line: ASOS provides huge benefits to the pilot community. With a few exceptions — unfortunately, these exceptions involve the kinds of IFR and LIFR (low IFR — ceilings below 500 feet and visibilities at or below one statute mile) weather that most heavily impact the takeoff and landing phases of flight — automated reports correlate well with reality. Equipment outages will always occur, but nine times out of 10, pilots are better off for automated surface observations. Tune in early and often.
E-mail the author at [email protected].
Links to additional information about automated observations may be found on AOPA Online.
Outflow boundaries occur ahead of thunderstorms, especially lines of thunderstorms and squall lines. These boundaries are masses of cold, high-altitude air, brought to the surface by downdrafts and merged into traveling, low-altitude masses of dense air. Because they are small-scale events, computer forecast models have a hard time resolving them. That means they won't immediately appear on any prognosis charts, METARs, or TAFs. But by wedging under warmer air masses, these boundaries cause low-level convergence and rising air motions. This can trigger a fresh batch of thunderstorms, even a day after the boundary first appeared. Next time your briefing involves thunderstorm complexes, be on guard for the chance of a second round of storms the next day, ahead of the previous day's convective areas.
The University of Wisconsin-Madison's meteorologists have made a computer simulation of the storm immortalized in the Gordon Lightfoot song "The Wreck of the Edmund Fitzgerald." The November 1975 storm on Lake Superior is re-created in a video clip and computer graphics online. You can click on links for winds at various altitudes, and see simulations of precipitation levels during the 60-hour study period. The simulation is noteworthy because computer modeling of storm systems was in its infancy in 1975. This simulation is based in large part on a reanalysis of archived data.