(Replying to PARENT post)
Everybody else has to get out of the way, though, when it declares an emergency. It can't really communicate with ATC. So it's strictly an emergency system, for now.
This is really just integrated control of the existing avionics. The existing systems are good enough that you can input waypoints and have them followed, and do an automatic instrument landing on a designated runway. This mostly sets up a flight path. It can't deal with traffic. There are no new sensors. The additional hardware just lets it lower the landing gear, apply the wheel brakes, and shut down the aircraft after landing. After which you're blocking the runway until someone comes out and moves the aircraft. Again, emergency use only.
It's intended for the "sick pilot, healthy airplane" case - the pilot is out of action, but the hardware is fine. It's not helpful in making hard decisions when the aircraft is having problems.
(Replying to PARENT post)
That said, pilots are paid to fly these dangerous whirlybird machines primarily for take-off and landing. Those are the phases that demand the most concentration and the most coordination with air-traffic controllers and other aircraft, and that, regardless of weather, carry the most risk. Take-off and landing are likely to be among the last functions to be automated - take-off even more so than landing, I'd argue. Another important aspect here is that the pilot should be able to act independently of multiple systems failing - as the saying goes, "fly the plane until it's on the ground and stopped."
The coolest pilot-safety automation tool I've seen thus far is Xavion [1], an iPad app developed by Austin Meyer [0] (definitely check out his blog), the original developer of the X-Plane flight sim. Given a decent GPS fix, it calculates a glide path to the nearest reachable airport in seconds. Austin is an avid pilot and clearly a brilliant guy - I'm eager to see if he starts commenting on autopilot AI initiatives like Daedalean's.
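The core glide calculation behind an app like that is conceptually simple (the hard parts are wind, terrain, and energy management along the descent). A rough sketch of the idea in Python - made-up airport data and a generic glide ratio, not Xavion's actual algorithm:

```python
import math

# Assumed example value - typical for a light single-engine aircraft,
# roughly 9 nm forward per 1 nm of altitude lost in still air.
GLIDE_RATIO = 9.0

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in nautical miles."""
    r_nm = 3440.1  # Earth radius in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r_nm * math.asin(math.sqrt(a))

def glide_range_nm(altitude_agl_ft, glide_ratio=GLIDE_RATIO):
    """Still-air glide distance in nautical miles from height above ground."""
    return altitude_agl_ft * glide_ratio / 6076.12  # feet per nautical mile

def best_airport(lat, lon, altitude_agl_ft, airports):
    """Nearest airport within still-air glide range, or None.

    airports: list of (name, lat, lon) tuples.
    Returns (distance_nm, name) for the closest reachable one.
    """
    reachable = [
        (haversine_nm(lat, lon, a_lat, a_lon), name)
        for name, a_lat, a_lon in airports
        if haversine_nm(lat, lon, a_lat, a_lon) <= glide_range_nm(altitude_agl_ft)
    ]
    return min(reachable) if reachable else None
```

For example, from 6,000 ft AGL the still-air range works out to just under 9 nm, so an airport 4 nm away is reachable while one 60 nm away is not. The real problem is much harder: the app has to account for winds aloft, obstacles, and flying a curved path that arrives over the runway at the right altitude and speed.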
(Replying to PARENT post)
The better analogy, IMO, is how people are being pushed into a computing model and adopting computing attributes as characteristics. We are becoming like "semi-autonomous computers". And worse, it is a client-server model, with corporations and governments making the executive decisions!
Rather than describing some software as intelligent, we would be better describing the change that this idea (and the idea of collectivisation - as opposed to individuation) has had on people. It is people that are changing, machines are still inanimate.
(Replying to PARENT post)
(Replying to PARENT post)
see where you are without GPS or radio or inertial navigation
see where you can fly without ADS-B or RADAR or ATC
see where you can land without ILS or PAPI
Isn't this the 'Tesla' approach?
(Replying to PARENT post)
(Replying to PARENT post)
That said, this is still pretty cool, and I could see something like it being one component of a much larger fully automated flight management system.
Edit: link should be fixed. If it's still broken, the title of the lecture is "Children of the Magenta Line."
(Replying to PARENT post)
I find it very surprising that there isn't at least non-AI software to monitor what the pilots are doing.
(Replying to PARENT post)
On the one hand, the author discusses what they have achieved so far - a machine-vision based system that can fly reasonably competently in good visibility and low traffic density, comparable, they say, to aviation 80 years ago (actually, as I mentioned in another comment, aviation in 1941 had already advanced significantly beyond that.)
A little later, they write this:
There is no reason to believe computers will always be worse at that than you are. There is no reason the machine can't reliably make the call to land in the Hudson when all engines are out and to do so in adversarial conditions safely.
True enough, as far as it goes, but there is also no reason to suppose that the technologies the author is discussing here will deliver that level of performance. The good judgement demonstrated by Sullenberger that day (and by many pilots in many other dire situations) depended on an extensive understanding of how the world works, and on the ability to reason about outcomes outside of the rules of the game, so to speak (for example, short-cutting the checklist in order to ensure the aircraft continued to have auxiliary power). Current machine-vision systems, on the other hand, lack the ability to reason about how things ought or might be, and so can make utterly bizarre-seeming judgements about what they are "seeing."
Personally, I believe fully-automated aviation will become both feasible and acceptable, but with arguments like the one quoted above, this article is glossing over the challenges that remain.