When a computer glitch at a Federal Aviation Administration center caused widespread airline delays this week, it served as a reminder that the U.S. flight system is waiting for a modernizing overhaul. But it also appears the FAA's management of its existing technologies falls short of standards in other vital sectors.

By relying on computing practices that would be considered substandard at credit card networks or power plant operators, for example, the FAA left itself vulnerable to the problem that arose when new software was loaded at the Atlanta center that distributes flight plans.

Because the FAA relies on just two computing systems, one in Atlanta and one in Salt Lake City, to handle that chore for the entire nation, the software glitch all but sank the system Tuesday. The Salt Lake center stayed up and served as a backup, but it became overloaded by information flowing in from airlines. More than 600 flights were delayed at airports from Atlanta to Boston and Chicago.

A failure at the same Atlanta center caused major delays across the East Coast in June 2007.

Such breakdowns often can be prevented with sufficient redundancy, or enough different computers and communication channels to handle the same workload in an emergency.
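A rough way to picture the redundancy experts describe is a dispatcher that routes each job to whichever healthy system can take it, so a single failure does not leave the survivors swamped. The Python sketch below is purely illustrative; the class names and the three-site layout are assumptions for the example, not a description of the FAA's actual flight-plan network.

```python
import random

# Illustrative sketch only: a hypothetical dispatcher that spreads work
# across redundant backends and fails over when one goes down. The names
# and three-site setup are assumptions, not the FAA's real architecture.

class Backend:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def process(self, flight_plan):
        if not self.healthy:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} accepted {flight_plan}"

class RedundantDispatcher:
    def __init__(self, backends):
        # With N sites sized so any N-1 can carry the full load, one
        # failure degrades nothing; with only two sites, one failure
        # leaves no spare capacity, which is the overload scenario above.
        self.backends = backends

    def submit(self, flight_plan):
        candidates = [b for b in self.backends if b.healthy]
        random.shuffle(candidates)  # naive load spreading
        for backend in candidates:
            try:
                return backend.process(flight_plan)
            except RuntimeError:
                backend.healthy = False  # mark failed, try the next site
        raise RuntimeError("all backends down: total outage")

dispatcher = RedundantDispatcher(
    [Backend("atlanta"), Backend("salt_lake"), Backend("spare")]
)
dispatcher.backends[0].healthy = False   # simulate the Atlanta failure
print(dispatcher.submit("DAL1234 ATL->BOS"))  # work still flows elsewhere
```

In this toy model, removing the "spare" backend reproduces the two-site arrangement the article describes: the remaining machine takes the entire national workload alone.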

Redundancy is so critical for power and water utilities that they can be fined hundreds of thousands of dollars a day if they're found insufficiently prepared — and $1 million per day if they're found to be willfully negligent.

"In the industries I work in, if you have something that critical, you generally build more redundancy," said Jason Larsen, a security researcher with consultancy IOActive Inc. who previously spent five years at Idaho National Laboratory examining electrical plants' control systems. "If this [FAA outage] happened at a power plant, I'd be telling them to open up their checkbook and expect to be fined."

FAA spokeswoman Tammy Jones stressed that these types of problems "don't happen on a mass scale or a regular basis," and noted that the FAA handles 50,000 to 60,000 flights a day. And flying on U.S. airlines has never been safer.

"The system is working," she said. "We are making sure people are getting from one place to another."

Basil Barimo, vice president of operations and safety for the Air Transport Association of America, a trade group representing the nation's largest carriers, said the fundamental problem is that the FAA still relies on outdated technology, including a radar-based control system designed in the 1940s and '50s. Barimo said he is optimistic that the FAA's NextGen modernization program, a $15 billion-plus upgrade to satellite-based technology that will take nearly 20 years to complete, will make more efficient use of the nation's airspace and safely allow more planes in the sky.

At the Atlanta center that failed this week, the National Airspace Data Interchange Network computer has been owned and operated by the FAA since the 1980s, after the Dutch company that developed it went out of business. The network is being upgraded and will have far more memory, process data much faster and be more robust and "fault-tolerant."

"We should see significant improvements by the end of September ... which should prevent the type of problem we had on Tuesday," said FAA spokeswoman Laura Brown. The agency also is considering adding a third backup site for that and other systems at a technology center in New Jersey, but no final decisions have been made, she added.

However, Doug Church, a spokesman for the National Air Traffic Controllers Association, a union locked in a contract dispute with the FAA since 2006, argued that the agency has focused on future technology to deflect attention from its lack of diligence in maintaining its current systems.

Not only did Church cite the agency's lack of a "safety net of redundancy," but he also pointed to its "fix-on-fail" policy of waiting for something to break before addressing a problem.

Indeed, in December, the agency exempted its computer maintenance personnel from having to perform some periodic certification checks as required by government handbooks for technical equipment. The FAA said that would eliminate unnecessary certifications that historically had little or no effect on total system performance and safety. And a 2006 report from the Government Accountability Office had found support for the idea in some instances.

But computing experts say they often advise private companies to reject that approach.

"It's common, you see it in retail too — it's the whole 'don't fix it if it ain't broke' thing," said Branden Williams, director of a unit of VeriSign Inc. that assesses the security of retailers' payment systems. "It's unfortunate because it's very reactive, and it typically winds up costing you more. If you do fix-on-fail, it usually costs you more."

Of course, there is a difference between an outage at a private company that delays your DVD order and one at the agency that manages the nation's air traffic. And such failures have hit the FAA multiple times.

This month, a car struck a utility pole and severed a fiber-optic cable, disrupting communications between an unknown number of airplanes and an air traffic control center in Memphis, Tenn., which directs planes within a 250-mile radius of the city. Last September, the same center lost all its communications, and some air traffic controllers had to use their personal cell phones to route planes out of the seven-state area. The FAA blamed that outage on the failure of a major AT&T Inc. phone line.

In May, the FAA system that issues preflight notices to pilots about runway, equipment and security issues went down for about a day when a server crashed and the backup operated too slowly to be effective. The database was not able to issue updates or new notices, but pilots continued to receive relevant information from local air traffic controllers and through alternate systems.

After this week's outage, Paul Proctor, a Gartner Inc. analyst focused on security and regulatory compliance for large corporations, said it appeared that the FAA didn't deploy the flight-plan computers with nearly as much redundancy as big companies generally have in systems critical to their operations.

"You need to do a good analysis about whether this is acceptable risk," Proctor said. "One of the things the government is betting on is the fact that if there's ... a failure, it's not a safety issue."

Sid McGuirk, associate professor and coordinator of the air traffic management program at Embry-Riddle Aeronautical University in Daytona Beach, Fla., believes that given the budget realities facing the FAA, the agency has maintained a good balance. It keeps the system running efficiently without compromising safety, said McGuirk, a former air traffic controller and FAA manager for 35 years.

"From time to time, we are going to have a glitch, but it's a tradeoff," he said. "Would I like to see more modern equipment in the system? Sure. But most folks would not want to see their taxes tripled to pay for new technology every two years."