Opinions/Editorials

10 June 2003

Recently, the Organization for Internet Safety (OIS) proposed a "Draft Security Vulnerability Reporting and Response Process" and opened it up to public discussion (until 7 July 2003). The full text is available at http://www.oisafety.org/process.html.

While we applaud the effort, we have concerns about the proposed process.

LIFE-CYCLE CONCERNS

The OIS proposes a 30-day grace period between the time when a newly discovered vulnerability is reported to the affected product vendor(s) and the time when the vulnerability is disclosed to the public. Specifically, the OIS states that "the Finder and Vendor must work together to develop a target timeframe..." and "By convention, 30 calendar days has been established as a good starting point for the discussions, as it often provides an appropriate balance between timeliness and thoroughness."

We think it's great that the 30-day grace period is not a firm, fixed period, but rather is open to negotiation. The document describes several factors that weigh into the decision of how long the period should be, including "the engineering complexity of the fix." Ah, there's the problem. Significant practical complications lurk behind that simple phrase.

In our book, Secure Coding: Principles and Practices (O'Reilly, 2003), we describe an approach to producing secure software that begins with sound architectural principles and proceeds through a robust design process, implementation, and deployment of the software application or system. The "engineering complexity" of the fix depends to a large degree on where in this software development process the mistake that led to the vulnerability was made. A relatively simple implementation flaw, such as many of the "buffer overflows" that riddle media reports on a seemingly daily basis, can typically be fixed quite quickly. On the other hand, a design-level flaw may require a thorough redesign that could take a great deal more time than the 30-day starting point proposed by the OIS. And aren't there some architectural-level bugs that can't be fixed?
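
To make that contrast concrete, here is a minimal, hypothetical C sketch (not taken from any vendor's actual code) of the kind of implementation-level flaw we mean, together with the local, mechanical change that typically repairs it. A design-level flaw offers no such few-line patch.

    /* Hypothetical example of a classic implementation-level flaw: copying
       untrusted input into a fixed-size buffer with no length check. */
    #include <stdio.h>
    #include <string.h>

    void greet_unsafe(const char *name) {
        char buf[32];
        strcpy(buf, name);            /* overflows buf if name needs more than 32 bytes */
        printf("Hello, %s\n", buf);
    }

    /* The repair is local and mechanical: bound the copy to the buffer size. */
    void greet_fixed(const char *name) {
        char buf[32];
        strncpy(buf, name, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';  /* strncpy does not always null-terminate */
        printf("Hello, %s\n", buf);
    }

    int main(void) {
        greet_fixed("world");
        return 0;
    }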

Our point is that the process needs to recognize that there are some bugs that will take months to fix, especially if a wide range of the vendor's platforms is affected. If a bug "fixer" reports to the "finder" that the fix will require several months of re-engineering, that the bug can't be fixed at all, or that it can't be fixed in a way that makes economic sense to the fixer, do we expect most finders to accept that answer? If they don't, is any vendor going to be interested in explaining all this to an "arbitrator"? Can you see Big Company, Inc. asking for permission to fix it right, or not to go broke (or fall behind the competition) pursuing a chimera?

MOTIVATIONS AND SOCIAL FACTORS

Just briefly, we think that the road to success for any such scheme lies in identifying the motivations that would lead the different parties to participate. That analysis seems to be lacking in the current draft. Let's take a look.

Some of the factors that motivate "finders" are:
  • Removing a dangerous vulnerability from the Internet
  • Avoiding the widespread break-ins that often follow a "premature" disclosure
  • Making their own software and information safer
  • Fame and admiration from an adoring and grateful public
Similarly, vendors might look for opportunities to:
  • Improve their product
  • Make the world (and their own networks, and their customers') a safer place
  • Avoid liability and lawsuits
  • Keep their name out of the paper (if the news is "bad")
  • Gain favorable publicity for appearing to be responsible
We just don't see, in this scheme as proposed, the cords of interlinking benefit that would tie the interests of the two parties tightly enough together. Remember, any good plan will have to offset each party's loss of the ability to proceed according to its own judgment.

What to do? Well, maybe we could add assurances.

Participating finders could promise to abide by the agreed-upon procedure, disclosing no vulnerabilities outside of this process.

Participating vendors could pledge, for finders who promise to abide by this procedure, to:
  • Waive prosecution under any "anti-reverse-engineering" statutes that might apply
  • Shield the finders from any liability resulting from the eventual disclosure
And perhaps a way can be worked out, whether through allied consumer groups or through legislation, for participating vendors to be shielded from product liability.

This arrangement of incentives would go a long way toward ensuring meaningful participation. But--and this is a topic for a separate discussion--we still haven't reached the point where software vendors are held liable for producing vulnerabilities. Until we do, we just don't see why a big software company would really follow such an intricate and constraining process. They might agree informally to do it; but actually adhering to it? That's another topic entirely. Hey, just how is compliance with this process going to be monitored and enforced, anyway?

THE "MANY VENDORS, ONE BUG" PROBLEM

One aspect of the vulnerability-reporting problem that we don't find addressed in any depth has to do with what we call the "many vendors, one bug" problem.

Suppose an awful new vulnerability is discovered that affects most or all vendors of Internet-enabling software. (It might be, for example, a flaw in the TCP/IP protocol.) The draft process calls for notifying other affected vendors, all right. Doing that quickly and correctly can be extremely complicated, as we know from first-hand experience. But that's not the main issue we see.

Suppose vendors A and B work with finder X toward a fix for a vulnerability that also affects the products of vendor C (who is not a participant in this process). Suppose further that A and X amicably agree on a public announcement of the vulnerability, and the fix is released and the problem announced. X is happy. A's customers may be happy; B's customers--well, some folks are never satisfied. What about vendor C, and its customers, who don't have a fix for this now-well-publicized problem?

Of course, we already have a latent liability problem today if A and B individually announce a fix for a problem that affects C's customers (who don't have a fix). We think this proposal may inflame the liability issues further. Worse yet, aren't there scaling problems and restraint-of-trade issues?

If A or B enters into formal arbitration concerning a widely occurring vulnerability, do the drafters envision them talking with finder X separately or jointly?

If all the talks involve only two parties, that's markedly inefficient (and X will be completely overwhelmed when the number of vendors swells past a few). But if we insert a "Coordinator" Y (CERT/CC, for example) in the process, new complications demand our attention. Just to name one: if A says the fix will take two weeks and B chooses a different approach that will take two months, who gets to decide when the vulnerability is announced? Whether you answer A, B, X, or Y (or one of the combinations), we see a problem.

And if A and B "conspire" to work on the problem together (just for fun, we chose a word that C's attorneys might like), will C want to be heard on the subject of their interests (and those of their customers) that were harmed by the announcement?

PROCESS COMPLEXITY

The draft we reviewed runs to over 10,000 words and contains about sixty steps. It certainly has a great many moving parts; in any event, we find it cumbersome and over-specific.

It looks, in fact, like a plan designed by a committee of engineers. We believe a better approach would be to devote more energy to motivating desired behaviors, and less energy to telling individuals (or big companies) how to meet their goals.

CONCLUDING THOUGHTS

The OIS proposal is, by and large, a good effort that has clearly been thought through very carefully and deliberately. We feel, though, that the draft process:
  1. Pays insufficient attention to the "engineering complexity" of any particular vulnerability
  2. Does not recognize the critical need for "life cycle" considerations in the design, creation, and maintenance of secure code
  3. Does not include sufficient enticements to motivate widespread participation
  4. Is too complicated (and too detailed) to be practical
One of us (Ken) clearly recalls the discussions that went into vulnerability and fix notification planning in the very early days at the Carnegie Mellon CERT/CC. The measuring stick at each stage of the process was whether or not the proposed next step reduced the problem. Applying the same rubric here, we find that the process as proposed will not "reduce the problem" significantly; and so, reluctantly, we can't support it in this form.

Mark G. Graff
Kenneth R. van Wyk
Authors, Secure Coding
http://www.securecoding.org

Copyright (C) 2003, Mark G. Graff and Kenneth R. van Wyk. Permission granted to reproduce and distribute in entirety with credit to authors.

