Monday, June 29, 2015

A Data Loss-fuelled Plane Crash, and What That Means for the Cyber Sphere

A data loss-related plane crash calls attention to automation tech.

It was supposed to be a routine test flight, nothing out of the ordinary. For the crew of the Airbus A400M in Seville, Spain, May 9 began like any other day on the job: the six crew members took the aircraft up for its inaugural flight. The A400M is a big, powerful military transport. On its website, Airbus states that it’s “the ideal airlifter to fulfill the most varied requirements of any nation.” Yet on May 9, things for the Seville crew quickly took a turn for the worse. Mere minutes into the flight, the plane crashed, and four of the six crew members were dead. Amid the shock and tragedy, an immediate question arose: How did this happen?

A data loss issue
When investigators looked into what had caused the plane crash, what they found likely came as a surprise to many: data loss. When we think of a tragedy on the scale of a plane crash, software glitches are hardly the first thing to come to mind as the inciting incident. And yet in the case of the Seville A400M, that’s exactly what happened.

The A400M is a relatively new plane. The first time an A400M was airborne was back in 2009, when the “aircraft had sparkling performance,” according to an Airbus representative at the time. But the flight on May 9 was a decidedly different story. To understand how data loss caused the crash, it helps to understand how this particular plane works. The A400M gets its power from four massive turboprop engines, and each engine is governed by its own set of computer-stored “torque calibration parameters.” In other words, the engines are run by software that is specially calibrated for each individual turboprop engine. When assembling an A400M, technicians at the Airbus facility install the software that accompanies each of the plane’s engines, and the software-controlled engines then work in tandem to keep the plane in the air.

This, at least, is how the A400M should function. But unfortunately for the crew that took off on May 9, the aircraft had essentially been doomed from the start. That’s because, as a subsequent investigation found, key configuration data for three of the plane’s engines had been inadvertently wiped during software installation on the ground. Without these files in place, the Electronic Control Units (ECUs) of the three engines could not function properly, and the crew could not control those engines. That day, three of the A400M’s four engines failed because of a software problem. The plane crashed, and four crew members died.
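To make that failure mode concrete, here is a minimal, purely illustrative sketch – not Airbus’s actual software; every file name, path and parameter format below is invented – of the kind of ground-side check that could refuse to release an aircraft when an engine’s calibration data is missing or empty, rather than leaving the problem to surface in flight:

# Illustrative sketch only: a toy model of the idea that each engine's control
# unit depends on its own calibration file, and that a missing or empty file
# should be caught on the ground, not discovered in the air. All names are invented.
from pathlib import Path

ENGINE_IDS = [1, 2, 3, 4]  # hypothetical labels for the four turboprop engines

def load_torque_calibration(engine_id: int, config_dir: Path) -> dict:
    """Load the (hypothetical) torque calibration parameters for one engine."""
    path = config_dir / f"engine_{engine_id}_torque_calibration.cfg"
    if not path.exists() or path.stat().st_size == 0:
        raise RuntimeError(f"Calibration data missing for engine {engine_id}: {path}")
    params = {}
    for line in path.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip():
            params[key.strip()] = float(value)  # malformed values raise ValueError
    return params

def preflight_calibration_check(config_dir: Path) -> bool:
    """Return True only if every engine has usable calibration data."""
    all_ok = True
    for engine_id in ENGINE_IDS:
        try:
            load_torque_calibration(engine_id, config_dir)
        except (RuntimeError, ValueError) as err:
            print(f"PRE-FLIGHT FAIL: {err}")
            all_ok = False
    return all_ok

if __name__ == "__main__":
    if not preflight_calibration_check(Path("/opt/ecu/config")):
        raise SystemExit("Aircraft not released: calibration check failed.")

The point of the sketch is simply that a missing data file is a detectable condition; whether and how the real production process checked for it is exactly what the investigation is examining.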

How could this happen?
As an unnamed source with expertise about the A400M told Reuters, “Nobody imagined a problem like this could happen to three engines.” What investigators are suggesting about the crash – that it stemmed from a software glitch – is unthinkable to many. After all, can’t something like a software error be overridden by the people in the plane?

Not in the case of the A400M. The plane was designed to rely heavily on automation for engine control: the ECU acts as an intermediary between the pilots and the engines, and any command the pilots give an engine is filtered through its ECU. When the six crew members were in the air on May 9, they quickly realized something was wrong – and they did everything in their power to fix the problem.

The issue, though, is that their power was limited. As Airbus reported following the incident, the pilots of the doomed flight noticed that power had frozen on three of the four engines and immediately sprang into action, setting the controls for the three problem engines to “flight idle” in a move to conserve as much energy as possible. But when it came time to land the plane, the crew needed control of those three engines. Because of the data loss and the software fault it triggered, that control wasn’t possible. At that point, the plane couldn’t maintain altitude and began its fatal descent.
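A second toy sketch – again invented for illustration, not a description of the real A400M systems – shows why the pilots’ options ran out: every command reaches an engine only through its ECU, idling is modelled here as a fail-safe setting that needs no calibration data, but restoring power does, so an ECU whose data has been wiped simply cannot comply.

# Illustrative sketch only: a toy model of a command path in which the pilot
# never drives the engine directly. Every setting passes through an ECU, and
# an ECU without calibration data cannot translate a power command into thrust.
from dataclasses import dataclass
from typing import Optional

@dataclass
class EngineControlUnit:
    engine_id: int
    calibration: Optional[dict] = None   # None models the wiped data files
    power_setting: str = "flight_idle"

    def command(self, setting: str) -> bool:
        """Apply a pilot command; return False if the ECU cannot comply."""
        if setting == "flight_idle":
            # Idling is modelled as a fail-safe setting that needs no calibration.
            self.power_setting = setting
            return True
        if self.calibration is None:
            print(f"Engine {self.engine_id}: '{setting}' rejected (no calibration data)")
            return False
        self.power_setting = setting
        return True

# Hypothetical scenario mirroring the report: one healthy engine, three without data.
engines = [EngineControlUnit(1, calibration={"gain": 1.0})] + [
    EngineControlUnit(i) for i in (2, 3, 4)
]

for ecu in engines:
    ecu.command("flight_idle")            # the pilots can still pull back to idle...
responding = sum(ecu.command("full_power") for ecu in engines)
print(f"{responding} of {len(engines)} engines responded to the power command")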

A cautionary tale for cyber-fueled automation?
The horrific details of the A400M flight are still being investigated. But the incident, and the investigation surrounding it, prompts a broader question: Was this crash simply a horribly anomalous episode, or does it stand as a cautionary tale about the future of automated tech? And if a software glitch could accidentally bring the plane down, isn’t this exactly the kind of vulnerability a hacker could exploit to cause a disaster intentionally?

These are questions that need to be asked as we move toward cyber solutions that are increasingly fuelled by advanced programming and even AI. And the incident with the A400M, while the most tragic, is hardly the only recent example of the potential perils of automation. Here are some other illustrative instances:

  • Car troubles: The self-driving car is a piece of technology that’s been years in the making. But as experts have long warned, this kind of advancement – while certainly a breakthrough by any standard – can also open the door to a different kind of advanced, targeted cyber attack: one that hits while you’re behind the wheel. “With the autonomous technology, hackers can crash your car or change your route completely while you’re taking a nap,” said Edgar Scholl, founder of IT security company Datengold, in an interview with DW. One big problem, as Scholl pointed out, is that automobile manufacturers are putting far more time and effort into building autonomous cars than into figuring out how to protect them from attack. This is understandable, since the appeal of working on an emerging technology can often eclipse the importance of slowing down every once in a while to make sure it’s safe. “Automobile companies have worked on security issues, but [testing] is not sufficient enough,” said Scholl. “They have excellent engineers, but many of them are not familiar with IT security. There ought to be more cooperation between the two sectors.”
  • Automated cars colliding: The idea that autonomous cars could fall into the hands of hackers is something we need to prepare for in the future. But self-driving cars also made a far-from-great headline in the present, with The Washington Post reporting on a near collision between two such vehicles. The incident happened recently in Palo Alto, California, when two autonomous vehicles – one a Google product, the other built by Delphi Automotive – reportedly faced an occurrence that’s all too common among human drivers: one of the vehicles cut the other one off. The story – which has been disputed by Google and downplayed by Delphi – was initially reported by John Absmeier of Delphi’s Silicon Valley lab. But Absmeier didn’t just report the story – he was part of it, riding as a passenger in the Delphi vehicle, an Audi Q5. By Absmeier’s account, the Audi Q5 was driving along when it was cut off by the Google car, a Lexus SUV. However, those who think this story illustrates a particularly literal example of competing brands going neck and neck will be disappointed: Delphi stressed that the original Reuters report had blown the incident out of proportion and dismissed any idea that it was a significant moment in the rivalry between the two companies. Still, the story of this cutoff isn’t likely to ease a general sense of anxiety about driverless cars. A recent study conducted by the European Commission and discussed in the Post article found that 61 percent of those surveyed said they would be uneasy in an autonomous or driverless vehicle. Those concerns are not unfounded, as Matt Windsor pointed out in a ScienceDaily article. “How will a Google car, or an ultra-safe Volvo, be programmed to handle a no-win situation — a blown tire, perhaps — where it must choose between swerving into oncoming traffic or steering directly into a retaining wall?” he asked.
  • Refrigerator targeted: In an automated future, even the place where we store our food isn’t safe. This was one of the findings of an industry study back at the beginning of 2014, in which a smart fridge was identified as one of more than 100,000 devices targeted by attackers. In a future characterized by automation and the infusion of intelligence into more and more everyday items, stories like this will only become more common.

Yet neither hypothetical car hacks nor the targeting of a single smart refrigerator reverberates as tragically as the story of the doomed A400M. While cyber advancements are for the most part a good and productive thing, a story like this illustrates that in our day and age, software and human lives are often inextricably linked. As automation and IoT deployments continue to gain steam, it will be imperative for developers to devote as much attention to security as to innovation. Failure to do so could result in another incident of even greater magnitude – something that, fortunately, can still be avoided through better proactive safeguards across the board.



from Trend Micro Simply Security http://ift.tt/1Huc1Gk
via IFTTT
