I find this difficult to believe

A prominent hacker has told the FBI that he managed to make an airliner “climb” and move “sideways” after infiltrating its in-flight entertainment system.

The claim was made by Chris Roberts, the founder of the cybersecurity firm One World Labs, who was escorted from a United Airlines flight last month after sending in-air tweets bragging that he could deploy the oxygen masks.

Seriously?

The plane designers aren’t going to connect the flight control systems to the entertainment wifi systems are they?

Really?

46 thoughts on “I find this difficult to believe”

  1. You would be surprised how stupid some designers can be in this regard. You're right that it is highly unlikely he could do it via the WiFi directly, but there are other routes in, for example the airline-to-cockpit link (encrypted, yes, but not as strongly as you would expect). I have talked to the folks who write avionics code and frankly I am not very impressed with some of their ability to write solid code. For example, one was bragging about the technical requirements of the avionics network on a plane and I just had to laugh, because coming from internet software the requirements he gave were below those of even the smallest web site in terms of bandwidth and reliability. Also, some avionics companies are very, very cheap: I saw someone on Elance wanting a person to do all the math for a navigation system for less than $500… and this requires solving differential equations in three dimensions.

  2. I can't really see how it would cut costs. In fact it might increase them, having to integrate the entertainment system with the plane's systems. Compared to the plane's systems, the entertainment system is trivial.

    My feeling though is that it is 99.9% likely that he is a grade-one b/s merchant.

  3. I was talking about how one would do it, not how this idiot claimed he did it… and yes, planes can move sideways (very hard to do, and you need a very strong crosswind). You're right that in theory cross-connecting the two networks would usually increase costs, unless you decided to have a unified controller, which many developers who are not used to mission/life-critical applications would recommend as a way to cut costs. One needs to look no further than some of the really stupid stuff hospitals have done in an area I have worked in, EMRs: http://qr.ae/f4l8M. The person who answered is a healthcare economist who consults with my company and hates the idea that I am a fan of Tim's, since she is quite liberal in her economics; she would love the fact that I cited her on this blog ;-). From my experience most mission-critical applications are just as badly made. For example, Boeing admits it was at fault for an almost catastrophic crash in Canada due to bad cockpit design (http://www.boeing.com/commercial/aeromagazine/aero_23/EFII_story.html). When you combine that with cost cutting, all bets are off.

  4. Mr Roberts admitted to investigators that he had accessed plane computer systems more than a dozen times since 2011, by attaching an Ethernet cable directly to the “Seat Electronic Box” that can be found under some seats, according to Wired Magazine.

    I don’t know what that is, but presumably not what you meant by “the entertainment wi-fi system”?

  5. Maybe he hacked the moving map display that the entertainment system displays to the passengers?

  6. Steve is very probably right – the crew and passengers would have noticed (and objected to) a real unintended sideslip. And the entertainment system (just linked, stripped down PCs) would not connect to the fly-by-wire bus.

  7. The plane designers aren’t going to connect the flight control systems to the entertainment wifi systems are they?

    Yes.

    Because the plane designers are not security experts.

    Most designers are not. The things that people will do simply do not occur to them.

    Hell, one company equipped a *production* toilet design with Bluetooth (so, you can flush with your phone I guess . . . ) and locked the password to 0000.

    All because no one on that design team realized that there are people who will drive up and down roads all day looking for unsecured appliances just so they can mess with them.

    Bruce Schneier puts it as ‘amateurs produce amateur cryptography’. Their only defense is, literally, obscurity.

  8. Founder of cyber security firm claims risk that his company can fix! News at eleven. I’m yawning already.

    No, no, a thousand times no. Not possible.

  9. One of the major weight factors on an aircraft is cabling. Modern digital data bus systems, such as the ARINC 629 on the 777, replace the miles of wiring with a couple of networks, with systems interfacing through terminals. The flight controls tend to be on one system; other systems such as the IFE and engine monitoring are on a second (the monitoring data is fed back to P&W, RR etc in near real time, through the same comms system used by the IFE, satcom etc). So if the system is not designed with hacking in mind, and allows access to the FADEC through the monitoring programme…

  10. I remember a few years back I tried to install a wireless heat-radiation monitoring device at one of our facilities, and it was rejected because anything wireless was deemed to be an entry point for hackers who could gain access to the main control system. This came as a surprise to me – until then I'd assumed wireless was banned because of reliability or interference issues – but it makes sense.

  11. @Paul

    I can’t really see how it would cut costs.

    Elimination of “redundant” equipment and cabling. Consequential effects of lower fuel consumption due to lower weight.

    @JeremyT

    And the entertainment system (just linked, stripped down PCs) would not connect to the fly-by-wire bus.

    One would hope not, but I’ve been in the computer security biz long enough to see plenty of lousy design decisions that were just asking for trouble (which they eventually got, in spades). Again, one would hope that this is much less common or impossible in safety-critical systems (aerospace, automotive, medical), but as older engineers leave IT, lessons tend to get forgotten.

  12. As I understand it, aircraft entertainment systems are stand alone. They are a box of gubbins connected by cables to each seat.

    What need would there be to connect the entertainment system to the system which manages/controls the aircraft?

  13. Agammamon is spot on. This sort of thing doesn’t happen because of flawed security implementation; it happens because of a total lack of security implementation.

    For those who don't know, this is (very roughly) how software development works. Devs build software on their own isolated systems. Their machines are set up the way they like them, with all sorts of non-real-world attributes: for instance, a dev could be working on a system whose clock endlessly cycles between 19:59 and 22:05, because they're working on a program that runs at 22:00 every day. Perhaps they're working on a system that relies on a secure feed from another live system, in which case they will have faked it with an insecure feed from a test system. Devs' systems never have any security, so that they can run and test things without having to enter passwords or whatever every bloody time. Then, when software is deployed, it is supposed to go through a stage where all necessary security is added, test data is replaced with live data, example assumptions are removed, etc. The important thing to note is that that stage is not the same as the development stage, and that therefore software can be at a very advanced stage of development and still have no security whatsoever. In fact, it often is.

    This is why so many phone apps ask for all your personal data in order to run. In a very small number of cases, it’s because the apps are made by bastards who want all your data. In most cases, it’s because the devs build their software with all data-grabbing turned on and all security turned off, because that’s easier, and then just forget to change the security settings before deployment. It’s sheer bloody absent-mindedness. And you may notice that it happens not only with apps made by little one-person operations but also those made by big wealthy firms who have the resources to get this stuff right. They don’t get the security wrong: getting it wrong would require doing it at all. Hey, if we believe Google (big if), their Streetview cars accidentally grabbed private wifi data across three continents simply because a dev forgot to switch off an experiment they were working on in the lab.

    So I reckon this question is backwards:

    > The plane designers aren’t going to connect the flight control systems to the entertainment wifi systems are they?

    Of course they are, because show a dev two systems and they'll connect them. It's what devs do. The question isn't, did someone connect them? The question is, did anyone remember to disconnect them? If so, did that person find every single connection point? Were they relying on documentation provided by the devs? [hollow laugh]
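
    If that sounds abstract, here is a tiny, made-up Python sketch of the pattern being described: a dev-time switch that turns authentication off, with the unsafe setting left as the default. Everything in it is hypothetical; it is not taken from any real system.

    ```python
    # Hypothetical illustration only: a dev-time switch that disables
    # authentication, with the unsafe value left as the default.
    import os
    from typing import Optional

    # Devs export SKIP_AUTH=1 locally so tests run without passwords.
    # If nobody flips the default before deployment, production ships wide open.
    SKIP_AUTH = os.environ.get("SKIP_AUTH", "1") == "1"   # unsafe default

    def is_valid(token: str) -> bool:
        # Stand-in for a real token check.
        return token == "expected-token"

    def handle_request(token: Optional[str]) -> str:
        if SKIP_AUTH:
            return "ok (auth skipped)"       # dev convenience path
        if token is None or not is_valid(token):
            return "rejected"
        return "ok"

    print(handle_request(None))   # "ok (auth skipped)" unless someone remembered
    ```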

  14. @John B

    What need would there be to connect the entertainment system to the system which manages/controls the aircraft?

    Same reason your car's engine management unit, various sensors, CD player and stalk controls are probably all connected via a CAN bus ( http://en.wikipedia.org/wiki/CAN_bus#Automotive ) – it's cheaper and lighter to have a bus-style topology rather than a point-to-point or mesh topology ( http://en.wikipedia.org/wiki/Network_topology ).
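
    For a feel of what a shared bus means in practice, here is a minimal sketch using the python-can library on Linux SocketCAN. The interface name and arbitration IDs are made up, and this says nothing about any particular vehicle; the point is simply that every node on a shared bus can see and inject every frame.

    ```python
    # Sketch only: any node on a shared CAN bus can observe and inject frames.
    # Requires Linux SocketCAN and the python-can package; the IDs are made up.
    import can

    bus = can.interface.Bus(channel="can0", bustype="socketcan")

    # Listen: every frame on the bus is visible to every node, CD player included.
    frame = bus.recv(timeout=1.0)
    if frame is not None:
        print(f"id=0x{frame.arbitration_id:x} data={frame.data.hex()}")

    # Inject: nothing in the bus itself stops a node from sending an arbitrary ID.
    spoofed = can.Message(arbitration_id=0x123, data=[0x01, 0x02], is_extended_id=False)
    bus.send(spoofed)
    ```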

  15. > Same reason your car’s engine management unit, various sensors, CD player and stalk controls are probably all connected via a CAN bus

    And, indeed, it’s been demonstrated that it is possible to hack a car and drive the damn thing against the driver’s wishes. Utter idiocy, but normal.

  16. I suspect Mr. Roberts is taking credit for standard Airbus programming. “It’s not a bug: it’s a feature!”

  17. “The plane designers aren’t going to connect the flight control systems to the entertainment wifi systems are they?”

    Of course they are, or at least, they won’t take sufficient precautions to ensure that nobody else can connect them.

    We keep reading about the risks to power grids from malware, hackers, and so on; an air gap (ie nothing, but nothing, connects to the Internet) would fix this overnight, but it’s too inconvenient for everyone concerned, so they don’t do it.

    And now “they” want to modernise (read “connect to the internet”) railway signalling systems too.

    Heaven help us, because systems designers won’t.

  18. @andrew you would be very surprised; that only holds if the same dev team does the whole project end to end. Take something I am working on right now (not telling you what it is, for obvious reasons) where by government regulation it has to be encrypted. The only problem is that encryption makes low-level testing and debugging impossible, so what we have decided to do is add the encryption as the last part ***AFTER*** we have completely debugged the part of the system that is encrypted (once we add the encryption not even we will be able to look at the raw data to see if it is correct or not).
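
    Something like this toy sketch (Python, using the 'cryptography' package; obviously not the real project) shows the general pattern: build and debug with encryption off so payloads stay inspectable, then flip the switch on as the final step.

    ```python
    # Hypothetical sketch of "add the encryption last": payloads stay readable
    # during the debugging phase, and become opaque once ENCRYPT is flipped on.
    from cryptography.fernet import Fernet

    ENCRYPT = False              # left False for the whole debugging phase
    KEY = Fernet.generate_key()  # in reality this would come from key management
    fernet = Fernet(KEY)

    def encode(payload: bytes) -> bytes:
        return fernet.encrypt(payload) if ENCRYPT else payload

    def decode(blob: bytes) -> bytes:
        return fernet.decrypt(blob) if ENCRYPT else blob

    # During development every message on the wire is readable:
    print(decode(encode(b'{"reading": 42}')))
    # Once ENCRYPT = True, even the developers can no longer eyeball the raw data.
    ```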

  19. Another thing to remember about the Wired article is that the FBI and most security experts assume any unsolicited “security advice” is from a hacker, and thus even if I found a flaw in someone's system the very act of pointing it out to them would make me a suspect. Most of the time this assumption is correct, because hackers do use such tricks for social-engineering reasons.

  20. Bloke in Costa Rica

    Vulnerabilities can exist in the weirdest places. It would not surprise me in the least if there were some way of bridging the passenger-side LAN with flight control systems. Obviously finding a rational use case for why you would want to do this is hard, but as others have pointed out, simple convenience during development is one possibility. There's also combinatorics working against you: if you have n subsystems then there are (n² – n)/2 ways they can interact. An example from last week: there is a popular virtualisation system called QEMU which allows you to run sandboxed operating systems inside containers on a machine called a hypervisor. Inside each container it looks like you are in a full-blown computer, and the hypervisor is invisible to you. Someone found that the driver for the virtual floppy disk controller could be used to break encapsulation and potentially run code on, or crash, the hypervisor. No-one uses floppy disks anymore but the driver was left in for legacy reasons. It's so often the case that a neglected corner of a system can be used as a back door. And this is open source software. Doing a security audit on something as commercially sensitive as an avionics system is horrendously difficult.
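
    To put a number on the combinatorics point, a few lines of Python:

    ```python
    # With n subsystems there are n*(n-1)/2 possible pairwise interactions,
    # each one a potential path that has to be audited.
    for n in (5, 10, 20, 50):
        print(n, "subsystems ->", n * (n - 1) // 2, "possible pairings")
    # 5 -> 10, 10 -> 45, 20 -> 190, 50 -> 1225
    ```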

  21. @Bloke in Costa Rica you misread the QEMU story (hypervisors are something I know well from building an IaaS/PaaS [the part of cloud computing that interacts with the VM]):

    1. It only affects the Linux versions of hypervisors.

    2. It affects all hypervisors on Linux that use the native floppy controller (the bug is in Linux, not the hypervisors).

    3. Container-based virtualization *IS NOT* the same as hypervisor-based virtualization (containers are nothing more than a walled-off kernel process [FreeBSD calls them jails for this reason]), whereas a hypervisor creates a complete virtual machine. The primary difference is that hypervisors can run any operating system as a guest [aka VM] whereas containers can only run copies of the host OS (i.e. if it is Linux then it can only run Linux containers). Containers are inherently insecure because there is no wall going from host to guest.

    4. Floppies are used by OS developers (if you want to make a new bootloader, doing it as a floppy image is easier than an HDD or CD image when it comes to writing out the MBR [master boot record]). Thus QEMU/KVM took the wrong solution by just turning off floppies, for reason 3.

    Side note: most of my comments on securing systems I have worked on are about how to make stuff secure even when you have bugs like that. The primary reason why the bug is an issue is that most cloud providers do not encrypt the virtual filesystems. And most cloud platforms like OpenStack (my direct competitor) have made the mistake of thinking that they only need one layer of defense (SSL and OAuth). The first is known to be breakable by the NSA, and the second ignores the Achilles' heel theorem of crypto: if “Eve” can get in on the first transmission she can always keep up, but if she doesn't get the first one it is always possible to block her (btw, note to you bitcoin fanatics: this is also the fundamental flaw in bitcoin).

  22. I could kind of believe this, except that said researcher would have to know a heck of a lot about the details of the interface spec of the control-side components. He would not have access to client code that translated intentions to formatted messages on the control path: he’d have to write his own from scratch. How did he get hold of that info? It would need to be a fairly good bit of social engineering or direct technical espionage which doesn’t seem to be mentioned in the article, and which a security firm director would seem ill-placed to acquire.

  23. @hopper it's not as hard as you think; it just takes pissing off the right person and they will tell you all kinds of things in an attempt to shut you up, often giving away more than they should. For example I pissed off https://www.quora.com/Christopher-Pow and learned all kinds of things about avionics in the process, such as that the system bus runs at 2 kB/s (which he was bragging about as being “hard” to manage [your typical web browser does 10 times that ;-)]) and does 1-second polling (a creative hack could do some amazing things in that 1 second)… also the protocols are standardized, http://en.wikipedia.org/wiki/Avionics#Aircraft_networks, including the flight control commands.

  24. Bloke in Costa Rica

    I was simplifying for a non-technical audience. I run KVM using qemu/libvirt on production machines.

  25. S2,

    “And, indeed, it’s been demonstrated that it is possible to hack a car and drive the damn thing against the driver’s wishes. Utter idiocy, but normal.”

    Show me the video of someone standing outside of a car without any key fob who can do this. Or even inside a car with a laptop.

    I know you’re in something to do with banking and computing, but there’s a whole world of difference in software development when you step over the line from “pissing off customers” to “people dying”. I used to work in software development for financial services (mostly insurance brokerage and mortgages), and it’s subject to almost no external audits, except ATM software. The system design of aircraft systems, or the design of software for drug testing is subject to numerous, rigorous external audits. It’s the level of software where no-one just bungs a change in. It’s software built with the same sort of discipline as bridge design, because you can’t easily undo dead people.

    And it's why these “hacking cars/planes/sewage” stories are lies once you scratch past the first few paragraphs. Yes, you can hack the CAN bus, but the CAN bus isn't secure by design. The applications that communicate with each other are expected to manage that. No-one links airline navigation systems with in-flight entertainment because that would be insane. Even if some junior programmer thought it was a good idea, it wouldn't get past his boss, let alone the software architects, project management or external auditors.

    These stories are just bullshitters angling for jobs in news punditry. They produce a claim with a whiff of truth that makes for a shocking story. The likes of cable news companies aren't going to do anything but back them up, because it makes a good story, and pretty soon they become the network's computer security expert.

  26. @stiger I don't know about cars or planes, but I do work in life-critical software development (transport of medical test data) and you would not believe the stupid things I have seen from designers of EMRs (which are by regulation required to be secure… according to HIPAA [American regulations] all data must be “encrypted in transit and encrypted at rest”). In my own small area of EMRs (due to an NDA I am not at liberty to say what it is), out of 20 competitors we are the *ONLY* one that meets the above requirement (and our client seems to be the only one that cares… some of his competitors have even gone up on criminal charges for being lax here). One problem is that no COTS DB engine meets the encrypted-at-rest requirement (the HHS Office for Civil Rights has said encrypted drives are not sufficient) and thus we had to make our own (a sketch of the general idea is at the end of this comment).

    There are also cloud computing platforms that claim HIPAA compliance when they are not compliant (I have already named one in an earlier comment). In short, I really doubt everyone is as careful as they should be with life/society-critical applications. If you want, I can point you to any number of jobs that care more about cost than they do about reliability for medical equipment, for example (specifically ones that require HL7, which is the standard XML dialect used for such applications). For example, here is a random one I found to interface HL7 to AWS (Amazon's cloud service), which by definition travels across the public Internet.

    It is possible to do such things across the public Internet, but I highly doubt everyone looking for a bargain on Elance would be so careful. For example, we did a Medicare data job doing bundled payments (which Congress just recently passed into law, based on the algorithm we were hired to implement [so fucked up I do not trust any implementation of it; we referred to it as the evil document]) and we got data from CMS (Center for Medicare/Medicaid) that contained confidential information about patients, including what their diagnoses and outcomes were: https://www.elance.com/j/medicare-claims-data-analytic-application/61401998/

    Another example is entire ERs needing to close down for days at a time because their EMRs refused to work correctly (dispensing the wrong meds, for example).
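
    As promised above, here is a rough sketch of the general “encrypt in the application before it ever reaches the database” idea, in Python with sqlite3 and the 'cryptography' package. It is illustrative only and has nothing to do with our actual engine; the table, field names and the HL7 fragment are made up.

    ```python
    # Illustrative only: one way to get "encrypted at rest" on top of an
    # ordinary database is to encrypt each sensitive field in the application
    # so that only ciphertext touches the storage layer.
    import sqlite3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # real systems: a managed, rotated key, not this
    fernet = Fernet(key)

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE results (patient_id TEXT, payload BLOB)")

    def store(patient_id: str, hl7_message: str) -> None:
        # Only ciphertext is written to the DB engine.
        db.execute("INSERT INTO results VALUES (?, ?)",
                   (patient_id, fernet.encrypt(hl7_message.encode())))

    def load(patient_id: str) -> str:
        (blob,) = db.execute("SELECT payload FROM results WHERE patient_id = ?",
                             (patient_id,)).fetchone()
        return fernet.decrypt(blob).decode()

    store("12345", "MSH|^~\\&|LAB|...")   # truncated, made-up HL7 fragment
    print(load("12345"))
    ```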

  27. > Finally found that insane bargain basement job I mentioned at first on making the flight controls for nothing

    They’re offering $10-15 an hour for 4-6 months of 40 hour weeks. And it’s ARM-based, so probably a cheapo drone of some kind, nothing to do with airliner avionics.

    As for this story, if anyone who mattered actually believed it was true, the FAA would have told all airlines operating in and out of America to pull the circuit breakers for their IFE system, at least on aircraft that couldn’t be proven safe. Since they didn’t, they clearly don’t believe it any more than I do. The IFE is probably vastly more sophisticated than the actual avionics hardware, and unable to interface with it even if it wanted to.

  28. Then he can presumably name an airliner which uses ARM CPUs in its avionics. I’m not aware of any.

    ARM is generally used for low cost or low power. Based on the price of some of the avionics boxes I have at work, I’d guess the avionics suite is probably charged at $1,000,000 or more per airliner, and there’s no need to save a few watts when you have two or more jet engines to run generators. Hobbyist drones, on the other hand, often cost $100 or less, and have tiny batteries. The only ones I’ve seen dismantled use… ARM.

    In addition to which, an airliner crash can easily run to $1,000,000,000 once all the compensation is paid out. No-one’s going to buy cut-price software which isn’t even written to established industry standards just to save a few thousand dollars. Heck, just taking one airliner out of service to install a software bug-fix will cost more than that, if it has to be done outside a normal maintenance schedule.

  29. How about the Green Hills INTEGRITY-178B tuMP (ARM-based) being used in the Northrop Grumman UH-1Y and AH-1Z (both military assault helicopters) http://it.tmcnet.com/topics/it/articles/2013/02/15/327082-northrop-grumman-picks-green-hills-integrity-178b-tump.htm and the whitepaper for the 178B that specifically says it is ARM-based http://www.intelligent-aerospace.com/articles/2014/05/green-hills-software.html

    Is that mission critical enough?!?!?! If not, here is the FAA certification for it in the Sikorsky S-92 civilian chopper: http://www.highbeam.com/doc/1G1-98105806.html

  30. @Hopper

    I could kind of believe this, except that said researcher would have to know a heck of a lot about the details of the interface spec of the control-side components. He would not have access to client code that translated intentions to formatted messages on the control path: he’d have to write his own from scratch. How did he get hold of that info?

    Conceivably, he could reverse engineer the message format from observing (“sniffing”) actual control messages:

    “About four months ago, the FBI in Denver, where Roberts is based, requested a meeting. They discussed his research for an hour, and returned a couple weeks later for a discussion that lasted several more hours. They wanted to know what was possible and what exactly he and his colleague had done. Roberts disclosed that they had sniffed the data traffic on more than a dozen flights after connecting their laptops to the infotainment networks.”

    http://www.wired.com/2015/04/twitter-plane-chris-roberts-security-reasearch-cold-war/
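
    For anyone wondering what “sniffing the data traffic” means in the most generic sense, here is a minimal Python sketch using scapy on an ordinary network interface. The interface name is a placeholder, it usually needs root privileges, and it says nothing about what Roberts actually did or what an IFE network exposes.

    ```python
    # Generic illustration of passive sniffing (nothing aircraft-specific here):
    # capture frames on a local interface and log the raw payloads so a message
    # format can later be guessed at by comparing captures.
    from scapy.all import sniff, Raw

    def log_payload(pkt):
        if pkt.haslayer(Raw):
            print(pkt.summary(), pkt[Raw].load.hex()[:64])

    # Capture 100 frames from a hypothetical interface name.
    sniff(iface="eth0", prn=log_payload, count=100)
    ```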

  31. Anyone who thinks that the fact that lives depend on something means its safeguards are insurmountable is being hopelessly naive. People die by stepping into lift shafts when the lifts aren’t there, for fuck’s sake. You think those lifts aren’t audited and safety-checked?

  32. Is that mission critical enough?!?!?!

    I wasn’t talking about ‘mission critical’, I was talking about actual safety-critical flight computers, since that’s what we were discussing earlier. There’s very little information in those links you posted, but that specific example is the ‘mission computer’, which isn’t flying the helicopter.

    Flight computers are generally extremely conservative. I worked on one in the mid-90s, and it still used core memory.

  33. @ed then you just don't read very well. Green Hills described it as “Safety Critical”; here is the ***TITLE*** of the link that you said said nothing about safety-critical:

    “Real-time software for ARM processors in safety-critical avionics applications introduced by Green Hills”

  34. Also, the above meets the DO-178B standard for the FAA. The B level is specifically defined as for “Hazardous” systems, with Hazardous being defined as:

    Hazardous – Failure has a large negative impact on safety or performance, or reduces the ability of the crew to operate the aircraft due to physical distress or a higher workload, or causes serious or fatal injuries among the passengers. (Safety-significant)

    See the wikipedia article http://en.wikipedia.org/wiki/DO-178B

    Just so you know, DO-178B is in use on (or in the process of being certified for use on) the following aircraft (according to Green Hills):

    Airbus A380, Boeing 777, Boeing 787, Lockheed Martin F-35 Joint Strike Fighter, F/A-22, Eurofighter Typhoon, Lockheed Martin F-16, Northrop Grumman UH-1Y and AH-1Z helicopters, Rockwell Collins RQ-7B Shadow UAS.

  35. @sad just a note: I never claimed that Roberts actually did anything (I personally doubt it, just because if he had, the story would not have been published at all, for safety reasons). All I was saying is that as long as there is any communications link (even if in theory one-directional) it is always possible to trick it. The NSA uses this trick all the time, for example (see the definition of “Eve” in http://en.wikipedia.org/wiki/Alice_and_Bob).

  36. Aryeh,
    This is Ellie. Please do not reference my casual comments on Twitter here, especially out of context. I did NOT say that the cut-rate Elance job was for making low-budget ICBMs. Well, I did not say it as a serious conjecture. I was chatting humorously with you and Luis on Twitter.

    If you’re not careful, you’ll get all of us on some sort of watch list, which is the last thing we need 😮 My interpretation of what Luis said was consistent with what Edward M. Grant said, namely, that the cheapo Elance job was for making some cruddy drone, NOT airline avionics.

    Also, I am not “quite liberal in my economics” nor do I hate the idea that you are a fan of Tim Worstall’s 😉 Okay, my protestations and allegations of innocence are complete. Carry on with your HIPAA drone theories.
