
This would be a good time to point out that a similar auto software story found its way to HN recently: https://news.ycombinator.com/item?id=9643204 Once Toyota's ECU code was reviewed by domain experts, they found extraordinary lapses in basic software quality practices. This and the VW fiasco clearly show that hiding ECU code is the wrong way to go. We can directly measure the negative consequences, and these are only 2 such incidents discovered recently. Who knows how many more issues are still hiding. We won't really know how at risk we are unless there is some kind of 3rd party review process, required by law. Clearly, the auto industry will prioritize profits and liability over actual quality, in much the same way that banks will never voluntarily limit their risk at the sacrifice of profits. Self-regulation is not working.


History shows us that "3rd party review process"es turn into paperwork games, with all of the effort going into making sure some boxes are ticked and no effort going into actually thinking about the code. That's especially true for large code bases operating in complicated domains, where the effort required to really understand both the code and its context is on roughly the same order of magnitude as writing it in the first place.

A 3rd party review can reveal horrid practices, but it's hard-pressed to make any sort of guarantee, no matter how soft.

We know how to do better than "process". Software verification techniques are approaching economic viability. If a piece of safety-critical software is going to put lives at stake on every road in America, it's reasonable to ask the creator of the software for a set of formal specs concise enough that it can actually be reviewed by experts, together with a proof that the software meets that formal spec.
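
To make this concrete, here is a toy sketch (mine, not from any real ECU) of what a reviewable spec plus a machine-checkable proof obligation can look like, using ACSL-style contracts of the kind consumed by tools such as Frama-C. The function and limits are made up for illustration:

  /*@ assigns \nothing;
      ensures \result <= 1000u;
      ensures requested <= 1000u ==> \result == requested;
  */
  unsigned int clamp_throttle(unsigned int requested)
  {
      /* a prover (e.g. Frama-C/WP) discharges the obligation that this
         body actually satisfies the contract above */
      return (requested > 1000u) ? 1000u : requested;
  }

The point is that the contract is a few lines an expert can actually read, while the proof itself can be checked mechanically.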


Current history is showing that if the source code isn't revealed, the formal specs will be rigged. Concise formal specs and proofs are a great addition to the source code, but they are not a substitute. Access to source code is even more critical when lives are at stake.

> We know how to do better than "process".

Are you saying that revealing source will lead to more bureaucracy that formal specs will somehow avoid? I don't see how that follows.


Open sourcing the code is an orthogonal issue to whether we should consider the use of formal methods a best practice.

> Are you saying that revealing source will lead to more bureaucracy that formal specs will somehow avoid?

I'm arguing that process is insufficient (which, crucially, doesn't imply formal methods are by themselves sufficient, and also doesn't imply process or access to source code isn't necessary).

Merely revealing the source code isn't enough if you need to know how the car behaves and have to recover the meaning of a bunch of complicated equations in order to make any sense of the code. Really reading and trusting the code would require more effort than re-writing it.

> Current history is showing that if the source code isn't revealed, the formal specs will be rigged

Hm. That's very surprising to me. Can you give an example of a company providing a rigged formal spec?

Also, the possibility of rigging formal specs is why the formal specs should be reviewed by a third-party expert.

And if/when fraud does happen (e.g., by giving a bunk formal spec or cheating on the proof), formal methods still have two advantages over process:

* The debate over whether enough work went into QC becomes trivial, and skimping on QC becomes willful deceit (something that's incredibly hard to demonstrate in the status quo).

* Unlike process, there are technical solutions to the problem of rigging proofs. And, the specs themselves are much easier to review than the code. So the review process is pretty likely to catch cheating.


Agreed. Why add process when financial and criminal liability would suffice? Add sufficient liability and players will improve their internal processes appropriately.


I think the Bookout v. Toyota case is a pretty good example of how culpability alone is an insufficient motivator, and how an added process (e.g. an external audit) could have prevented a tragedy.

Michael Barr's review of Toyota's ECU code (http://www.safetyresearch.net/Library/BarrSlides_FINAL_SCRUB...) showed numerous compliance issues with established industry best practices (80,000 violations of MISRA-C) and failure to even follow Toyota's much laxer internal coding standards (32% rule violation). Toyota shipped uncertified versions of their code and the design and behavior of that code prevented defect detection.
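
For readers who haven't seen MISRA-C, most of its guidelines are mundane on their own; the concern is what tens of thousands of violations say about the development culture. A made-up illustration of the flavour of rule involved (not Toyota's code): MISRA requires things like a default clause in every switch and a single point of exit from a function.

  #include <stdint.h>

  static uint8_t decode_mode(uint16_t raw)
  {
      uint8_t mode = 0u;                 /* single, explicit result variable */

      switch (raw & 0x3u)
      {
          case 0u: mode = 1u; break;
          case 1u: mode = 2u; break;
          case 2u: mode = 3u; break;
          default: mode = 0u; break;     /* MISRA requires a default clause  */
      }

      return mode;                       /* single point of exit             */
  }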


External audits meant nothing with Enron. Criminal liability is the key.


I agree. The US legal system has historically been way more lenient on financial companies than automobile companies. In the case of Enron, they had a lot of financial incentive for their actions, with only the prospect of a few fines as the expected downside. I don't see that being the case for GM, Toyota, etc.


Because they're slightly different cases when you consider how they began. The GM and Toyota cases are that they screwed up: they failed to design a safe system. Meanwhile, in this VW case and those financial cases, the system was intentionally designed to screw other people over. It's like accidental homicide vs. murder.


You need more than this. GM just got off scot-free after killing and wounding hundreds with the ignition switch malfeasance. The company pled down to doing some PR and paying a teensy fine. Meanwhile, the murderers among them were not charged.


Depends on how it is managed.


Definitely. Formal methods don't replace process wholesale, but rather make the tasks from which the review process is composed tractable.


External audits are already required for finances. Safety critical systems are even more important and should be subject to even more scrutiny as a result.


It'll take a lot of resources to develop institutions to do this. It'll take a long time to debug these institutions.

We're still arguing over "goto considered harmful", "the billion dollar mistake" and how toolchains influence code-safety-behavior.

And there exists a class of CASE tools from the '90s that, while being somewhat clunky, addressed some of the transparency issues. But those are hardly even relevant any more.

And to make it worse - the final configuration of the system matters just as much as whether the code bases for the individual nodes have passed an audit. This is especially true of CAN networks.
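
A hypothetical flavour of the configuration problem: two nodes can each pass an audit in isolation and still disagree about what a frame means once they're wired together (the IDs and scaling here are invented).

  #include <stdint.h>

  #define CAN_ID_VEHICLE_SPEED 0x1A0u

  /* node A (sender): encodes vehicle speed in km/h, scaled by 100 */
  uint16_t encode_speed(float speed_kmh)
  {
      return (uint16_t)(speed_kmh * 100.0f);
  }

  /* node B (receiver): built against a stale signal database that says
     the same CAN ID carries m/s -- each node is "correct" on its own,
     only the integrated configuration is wrong */
  float decode_speed(uint16_t raw)
  {
      return (float)raw / 100.0f;   /* interpreted as m/s downstream */
  }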


The thing to bear in mind about banks is that nowadays they are really computerized service companies that happen to specialize in finance. Competent tech isn't just a competitive edge, it's fundamental to being able to operate at all.


I've worked at two of the biggest US banks for years, and I can confidently say that in both, important departments were far from completely understood by most people working there, let alone by outsiders doing an audit.

Most of the complexity is in the messages passed between systems. In a code audit it looks fine to remove system A's sending of tag X, but then you hit prod and find out that system B relied on tag X, and now that it's missing it causes a problem in system C. And that would be a simplified example.
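
A contrived sketch of that failure mode (nothing like the real systems): the downstream consumer silently depends on a tag it has no contract for.

  #include <stdio.h>
  #include <string.h>

  /* system C: settlement path that assumes tag "X=" is always present
     in messages relayed through system B */
  void settle(const char *msg)
  {
      const char *x = strstr(msg, "X=");
      if (x == NULL) {
          /* the branch nobody exercised in test, hit only once a
             "clean-up" in system A stopped sending the tag */
          fprintf(stderr, "missing tag X, settlement aborted\n");
          return;
      }
      printf("settling with %s\n", x);
  }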

I'd be disturbed by any outsider claiming they understand the dynamics of multiple systems with millions of lines of code which often have feedback loops.


Honest question: have you ever worked for a bank? Many still run on AS/400 mainframes in the backend. Knowing what I know, that comment actually couldn't be further from the truth. Banks are insanely risk averse, and rightfully so. If it works perfectly, they absolutely will not change it, even if the newer, shinier tech has real benefits.

Source: I don't work for a bank, but have worked in electronic trading for the past 8 years and work directly with the (in)competent tech teams from other banks to clear and reconcile trading bits.


Using AS/400 systems would still make them a computer service company. I actually work at a bank (developing apps) and we use a massive COBOL system for a lot of data. However, we also have a massive Hadoop cluster (much larger than the COBOL systems), and firewalls everywhere.

Banks might not be the most competent at web programming, but they hire a ridiculous number of awesome security folks. I would still put them behind Apple or Google in their given domains, but many banks have pretty robust technical systems for their domain.


Every bank I've ever been with has had a pathetic limit on its online banking passwords. One of them (NAB) was limited to something like 10 characters and only numbers and letters.

Maybe that's changed recently but still, firewalls won't help you when somebody brute forces your users' passwords in 10 minutes.


> Many still run on AS/400 mainframes in the backends.

Are you implying that using old technology makes it somehow incompetent? I don't know about you, but I absolutely do not want my financial institutions to be running their risk analysis software in Node just because someone wants to try it out.


Nope. I'm implying that they don't use the latest and greatest or "best tech". I'm agreeing with your opinion and having worked with the tech teams of banks am confirming it as truth via firsthand experience.


Like in any industry, there are good teams and bad teams. Working in finance versus working in, say, social does not automatically make you a bad developer.

Culturally, there are two things that make finance different. One is that it is hideously conservative and risk averse. AS/400s are used because they are well understood, very reliable, and supported by someone other than an attention-deficit teenager in a bedroom.

Second is that mainstream finance doesn't see itself as a technology industry. There are areas like quant investment and high-frequency trading that are, but most finance companies still look at technology as a line item on the budget rather than as the foundation their business is built on. This is changing slowly (see point one), but it will most likely require an external disruption to push change through any quicker.


Entirely agreed on all points. I work in HFT and am only in this industry because of the fantastic bleeding-edge tech.


So which part of simonh's comment were you saying couldn't be further from the truth? I didn't see anything in his comment about the latest and greatest or "best tech".


I have seen systems that ran on hardware as old as DEC VAXs as late as 2008, but I'm not sure the reason behind not transitioning was risk aversion.

Regardless, there aren't formal controls in place. Otherwise issues like Knight Capital [1] wouldn't have happened.

[1] https://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stoc...


> about banks ... competent tech isn't just a competitive edge, it's fundamental to being able to operate at all.

Automobiles are going along that curve too. Maybe this whole VW scandal is an indication of this change?

edit and the Toyota firmware scandal in 2013 has similar implications: http://www.latimes.com/business/autos/la-fi-hy-toyota-damage...


That may be true of the largest banks, but there are hundreds of others that are running on fairly incompetent tech. Either way, external audits have been a part of them for much longer than the computerization has been.


Absolutely. But finances are pretty standardized, software is vastly more complex. Audits are a good idea, but it's an incredibly hard problem.


That's true, and it isn't an easy problem. But note that financial audits are also a hard problem. Auditing teams don't go through and reconcile every transaction. They conduct spot checks of sample transactions, scrutinize controls, and aggressively follow up when any failure of controls is observed. I think a lot of those concepts could be applied to code audits.


I think a better approach would be requiring that developers (and their managers and testers etc.) working on software that could kill or injure people if it malfunctioned have some sort of a professional license, that would be granted and revoked similarly to how medical and engineering licenses are granted and revoked.


I'm not opposing this idea, but I'm not sure it would have helped in the VW case. There were some people (engineers? Managers?) who were cheating and they knew that what they were doing was wrong. I don't believe a license would have changed that.


Other people have raised the question of how well the prospect of losing a license would act as a deterrent.

One other aspect which might be even stronger would be if the professional organization had a role not unlike a union in protecting its members’ professional decisions. Imagine if you worked at VW and your boss told you to make a change which affected safety, emissions, etc. – how different might your reaction be if you know that if you refused or reported it to the appropriate regulators and there were repercussions the Bitpackers Guild could provide legal representation and expert witnesses for you, stage a strike where no licensed engineer would work for an irresponsible company, or simply ensure a lot of publicity? Suddenly it's not “go lean on Sally until she gives the engineering sign-off. She can't afford to quit until her kid's out of college” but “do we want a team of professional engineers to hold a press conference saying we're cutting corners over our experts' judgement?”

There are certainly potential downsides but … anyone who drives a car, uses medical equipment, etc. might reasonably conclude they're worth it, particularly if the system was structured to focus on transparency and due process rather than the pathology some unions are prone to where members are always defended even when they're in the wrong.


If a developer is asked to do something obviously wrong they might not feel they can refuse, because they can be replaced with someone willing to do it.

If an architect is asked to design a bridge that isn't safe they can refuse, secure in the knowledge they can't be replaced with someone willing to do it, as no licensed architect will knowingly design an unsafe bridge.

Of course, a licensing scheme would probably have a bunch of disadvantages.


Perhaps the threat of having their license pulled, thereby nullifying potential future employment might have caused them to think twice about wilfully cheating emissions controls?


While the FDA may not be a great regulatory group, if someone at a pharmaceutical company were found to cheat like this they could potentially be barred from working in the industry again. This works in some cases, at least in theory.


Perhaps the angle is that this would constitute ethical turpitude sure to cause loss of license and ejection from one's specialty.


While I don't agree with the license requirement, the least we could require for safety-critical code is publication of the source code and validation by an industry body made up of subject matter experts.


> some sort of a professional license

Sure, as per the construction industry.

Or perhaps simply the threat of being prosecuted for manslaughter or bodily harm, etc?


Maybe the parent meant the software in finance, because that also requires external audits to some extent.


Even the possibility of an external audit of source code, revision history, requirements, etc. would change working practices dramatically, particularly if there were legal penalties against developers found responsible for introducing bugs.


There's no way in Hell that I will consent to be held responsible for the output if I do not have full control over the inputs.

If I am an employee of the company, and someone else is telling me what to do for my job, and particularly if they are telling me how to do my job, they must necessarily share responsibility for anything that I do pursuant to obeying those instructions.

And threat of retribution leads to stupid practices:

  public void CoverYourAss()
  {
    try
    {
      int x = 0;
    }
    catch
    {
      throw;
    }
  }
This is a simplified example of a real-world coding standard. At one of my former workplaces, everything had to be wrapped in a try-catch block, including statements that would only ever generate run-time exceptions, like out-of-memory exceptions. It didn't matter if you re-threw the exception you just caught. You just had to make sure the try-catch was there. In every function. Or you're fired. I am not making this up.

If the software ever crashed to desktop for any reason, including a bad memory module in the computer running it, or someone nuking parts of the filesystem while it was running, or even a bullet striking the motherboard, someone on the development team was getting blamed for it, and fired. As it would be a witch hunt anyway, the inquisition squad would obviously look at the code written by those most threatening to them, or least popular, or both, before anyone else, and seize upon any irregularity to lay blame.

You'd better believe I was sending out resumes the day I found out about that.

I can only imagine how bad it would be if the penalty was to be fired plus arrested and/or sued.


But if there were a standard set of industry-specific tests that the program had to comply with, it's not like it would just be on you.


You really have to remove the incentive to cheat from the software group before the tests happen.

A defeat device does not get installed accidentally. It's not like a mutation propagating through the evolution of living things. Someone decided to put it there, and someone got paid to do it. There was an additional requirement added, one with no official test coverage: increase fuel economy (and emit more pollution) whenever no one is paying attention to the emissions.
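
Conceptually the addition is tiny, which is part of why it is so hard to catch from the outside. A purely illustrative sketch, not VW's actual logic (which reportedly keyed off things like steering input and drive-cycle patterns):

  #include <stdbool.h>

  /* Illustrative only: the "extra requirement" is a few lines buried in
     a calibration path, and no official test ever exercises the
     cheating branch. */
  static bool looks_like_emissions_test(float speed_kph, float steering_deg)
  {
      /* the car appears to be driving, yet the steering wheel never
         moves: a crude dynamometer signature */
      return (speed_kph > 20.0f) &&
             (steering_deg > -1.0f) && (steering_deg < 1.0f);
  }

  float select_egr_rate(float speed_kph, float steering_deg)
  {
      if (looks_like_emissions_test(speed_kph, steering_deg)) {
          return 0.45f;   /* clean calibration: passes the official test */
      }
      return 0.10f;       /* "economy" calibration: higher NOx on the road */
  }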

As far as the developers were concerned, they did everything right. They built the code their employers asked them to build. It passed the official tests. This was a triumph; I'm making a note here: "huge success!"

The developers worked for the automakers, not the testers or the public. They did what VW wanted, which was to game the system to make more money. You're not ever going to do more than start an arms race as long as the developer is taking orders (and getting hired or fired) by the guy who just wants to sell more cars.


The irony is that instead of incentivizing the auditing of these systems, hackers and security researchers put themselves at huge risk whenever they look for, find, and report vulnerabilities. The companies that have bounties are doing it right.


I agree completely, but on the other hand I'm not convinced that tighter government regulation of ECU code would be better. Can a bunch of government bureaucrats come up with a set of standards and regulations that would actually be beneficial? Given the track record with similar projects, it looks doubtful.

Really I'd say that part of the problem here is that academia has been letting us down. CS programs are universally of fairly low quality, in my opinion, and proper software engineering programs are very rare. There has been insufficient pure research into software development practices, software design patterns and features, and so on in regards to what is required and what is beneficial when it comes to creating control software and firmware. Industry too has been letting us down with their lack of pure research in general, but that's been obvious for a while.

We're starting to reap what we've been sowing for the last several decades in software engineering. We got out of the first "software crisis" where many software projects didn't even deliver anything worthwhile or functional, but now we are in another perhaps even more severe software crisis. One where shipping software that "works" isn't a problem, but where making sure that it does the "right thing" and is sufficiently secure, robust, etc. for the intended use is becoming a huge issue. And not just a financial one, but one that can (and will, and has) result in injury, death, and destruction. We very much need to wake up to the seriousness of this problem, it's not going to get better without concerted efforts to fix it.


I develop safety critical software for railway applications. We have to follow some ISO norms that contain some sensible rules. For example, code reviews are mandatory, we need to have 100% test coverage, the person who writes the tests must be different from the person who writes the code etc. This leads to reasonably good code.

It also makes some things a lot more difficult. For example the compiler must be certified by a government authority. This means we're stuck with a compiler nobody ever heard of that contains known (and unknown) bugs that can't be fixed because that would mean losing the certification.

I assume the car industry has a similar set of rules and the problem is not a lack of rules, but a lack of enforcement.


> We have to follow some ISO norms that contain some sensible rules. For example, code reviews are mandatory, we need to have 100% test coverage, the person who writes the tests must be different from the person who writes the code etc.

The exact same thing happens in the car industry.

> I assume the car industry has a similar set of rules and the problem is not a lack of rules, but a lack of enforcement.

Bingo! Right now I'm staring at some ECU code (not safety relevant, thankfully) that looks like it's been written by a monkey, but I'm a new addition to the team, have no authority here yet, and we have to ship it, like, yesterday.

Guess what will happen.

Truth be told, for safety-relevant applications I've seen the code and it's quite good. And the issue in this case is not that the software was badly built; it's that it was built with deceit in mind.


>some ECU code(not safety relevant, thankfully) //

What parts of the running of an automobile engine aren't safety relevant?

Sounds like "oh we made the stock for that shotgun from cheap, brittle plastic as the stock isn't safety relevant; how were we to know that it would crack and embed itself in someones shoulder?".

You're right that the primary issue here is deceit, but the issue with closed source code in such systems is how that deceit was possible [edit: should probably say "facilitated that deceit", as the deceit would still be possible with open source, just harder and more discoverable]; and that leads to questions of safety, because if companies will screw over the environment in defiance of democratic legislation, they're unlikely to be mindful of other morally negative consequences.


Infotainment, air conditioning, etc. There are many many more ECUs in a car than just the one in the engine.


When you have an organizational culture that places meeting deadlines without sufficient planning or resourcing above quality and safety ... the result is inevitable.


I work in a different industry. We have to follow some sensible rules: code reviews, 80% minimum coverage. What happens in practice is that the tests verify that the result is not null (and nothing else), and the code reviews pass... God knows how. I have seen methods with a cyclomatic complexity of 65 and methods a few hundred lines long. Oh, and this is in Python; the Java code is worse.

[I was also told by my team leader "no, you can't fix that code, it belongs to X from the main office and he will get angry and not help us anymore".]


> This means we're stuck with a compiler nobody ever heard of that contains known (and unknown) bugs that can't be fixed because that would mean losing the certification.

This is why regulators should embrace formal methods as an alternative to process-heavy regulation. They're actually measuring ground truth, and today are not that much more expensive after accounting for all the costs associated with certification processes.


... or highly intensive 8-years-of-in-service-operation-equivalent testing at the system level ...


System-level testing doesn't always suffice; see the Toyota UA case.

Or, more topically, see the VW case for examples of why testing "in-service-operation-equivalent" requires a certain level of trust that's not ideal in a regulatory relationship.


Governments just need to regulate one thing about the ECU: access to the code. They don't have to make any specific laws. Of course, to be able to prove that this is the code the ECU actually uses, one would need to be able to compile it and upload it to the ECU.

Which raises another issue: people breaking the law by changing their ECU map to emit more pollutants.

But I suppose this is no more of an issue than it was/is with WiFi access points: not a lot of people do it, and you don't want to brick your car :D


I'd say there are some successful efforts to regulate software in safety-critical areas. The FAA comes to mind. I worked in the avionics industry for a while, and there are strict standards to which flight management and avionics display software must adhere. The DO-178 family of documents defines these standards/guidance/whatever. As a young engineer at the time, I remember two of the things we were not allowed to do under DO-178B: pointer math of any kind, and dynamic memory allocation.
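
In practice that pushes you toward code along these lines (illustrative, not from any real avionics project): everything is statically sized at build time and walked with array indices rather than pointer arithmetic, and nothing is allocated at run time.

  #include <stdint.h>

  #define MAX_WAYPOINTS 64u

  /* fixed-size storage, reserved at build time; no malloc/free anywhere */
  static int32_t waypoint_alt_ft[MAX_WAYPOINTS];

  int32_t max_waypoint_altitude(uint32_t count)
  {
      int32_t max_alt = 0;

      if (count > MAX_WAYPOINTS) {
          count = MAX_WAYPOINTS;          /* clamp rather than trust the caller */
      }
      for (uint32_t i = 0u; i < count; i++) {
          if (waypoint_alt_ft[i] > max_alt) {
              max_alt = waypoint_alt_ft[i];   /* indexing, not pointer math */
          }
      }
      return max_alt;
  }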

These standards have been around a long time too.

https://en.wikipedia.org/wiki/DO-178B


I find your comment about CS programs a bit misplaced. Are you aware that a research team from WVU actually uncovered this emissions problem in the first place?

The "VW diesel-gate" aside, I do share your feelings about the quality of CS programs in general. There is nowhere near enough education about real-time systems, high-reliability systems, and formal verification methods. All of these topics are completely appropriate academic material, in addition to being fundamentally useful for business needs. I'm not sure any of these topics are covered in the usual undergraduate curriculum.


Auto makers have crash tests and even pay private independent companies for it, AFAIR. Let's create a software certification entity that gives stars and stuff that auto makers can display in their ads.


What about mandatory "preferred form for modification" source code releases and mandatory bug bounties?

That is, if anyone finds a bug impacting safety in the ECU code, the manufacturer has to pay $1 million to them.

If any employee shows that the company released obfuscated source code, or that the shipped binary is not compiled from that source code, they get a $100 million reward paid by the company, and criminal charges are filed against those responsible.


Self-driving cars are also starting to be commercialized, which raises another level of questions.


Perhaps the hackers trying to release an open source version of the John Deere on board computer can extend this drive to all computer driven vehicles.



