Air France jet 'seconds from disaster' after autopilot fails in drama with chilling echoes of Brazil crash
An Air France jet was just seconds from nose-diving to disaster after the autopilot failed during 'extreme turbulence' over the South Atlantic Ocean. The high-altitude alert in July chillingly echoed the cockpit chaos that preceded the fatal crash of an Air France Rio-Paris flight two years earlier, in which all 228 people on board died. In the latest drama, the autopilot shut down as the plane hit a storm at 35,000 feet while flying from the Venezuelan capital, Caracas, to Paris. Read more:… (www.dailymail.co.uk)
@Vancouverjake, You forgot a biggie... AF lost a Concorde, and still no blame was put on AF (despite their 'comprehensive' enquiry). Yet they hadn't installed the protective shields adjacent to the tyres; a wheel was missing a shim (poor maintenance), causing the plane to steer off line; and the flight engineer shut down an engine without orders from the Captain. Still happy with AF flight crew professional standards? I'm not!!
Why is the report assuming the jet was "just seconds from nose-diving to disaster"? If the instruments are reporting correct airspeed, I'm sure the pilots can recover from a stall. From what I read: the aircraft hits turbulence, the autopilot "fails" or disengages after sensing an overspeed, the aircraft increases angle of attack to reduce speed, then apparently something goes wrong and it stays at a high angle of attack, approaching stall speed. As I said, if the instruments are reporting correct airspeed, then the pilots should be able to recover from a stall at altitude, IMHO.
Why is everyone assuming the plane was seconds from disaster simply because it approached its stall speed? If the instruments are reporting correct airspeed, then I'm sure the pilots would have been able to recover from a stall.
The autopilot failed? What's the big deal? Maybe Air France needs to look into hiring pilots and buying Boeings: airplanes that pilots are MEANT to fly.
The lack of problem solving discipline really offends me. I really expect better from this industry.
We have missed step 0 in the 8D problem solving discipline. We swept it under the rug with AF447, but now, with a second occurrence, we have evidence of a growing problem that MUST be dealt with. It's very simple.
Step1: GROUND ALL AIRBUSES! (If JAR won't then FAA *MUST*!)
Step2: AIRBUS - get to where you can repeat the problem (simulation or in flight) - you get 24 hours ... GO!
Step3: Get to root cause within 1 week.
Step4: Prove it is the root cause by making the problem appear and then making it disappear.
Step5: Propose Irrevocable Corrective Action within 2 weeks
Step6: Implement Irrevocable Corrective Action within 4 weeks
Step7: If you can't get to Step 6 in 4 weeks, figure out a series of mitigations and prove their % effectiveness.
At any time during the process, as you have DATA, you may propose a mitigation that is less severe than grounding the fleet (from Step 1). But only with 1) the ability to repeat the problem, 2) DATA that *proves* the proposed relaxation of the all-fleet grounding is safe from this defect, and 3) a %effectiveness estimate that you continue to monitor.
Finally, and I made this point in last week's thread: if there is a bug in the computer software, you cannot trust the data it has recorded. That's real-time systems programming 101. Toyota had car computers saying the driver was flooring the accelerator, and 200+ drivers saying they were pushing the brake pedal to the floorboard. Given that people lie, I might not believe the last driver. But I sure as heck wouldn't believe the first 20 computers before the story went public.
@Victor Hugo, except for the Kingston accident, the other American Airlines accidents you mention have been thoroughly investigated and documented: horrible pilot errors, which the airline had to accept (although I personally think some really strong fines, in the millions, should have been collected, but were not). As a result, American redoubled its training, pilot screening, manuals and procedures, yielding very good results. The Kingston accident is still under investigation, and it has all the characteristics of pilot error, but we will see. I am not familiar with the Continental accidents you listed, so I won't comment. But again, Airbus makes excellent aircraft, yet their "over-computerizing" of the aircraft, which diminishes the pilot's ability to disconnect the AP and recover the aircraft from severe situations (like AF447 and several others), is definitely something that cries out for deep rethinking.