3.  An Approach to Secure Coding

3.1  Introduction: What can be done

What can be accomplished, in a large heterogeneous enterprise network, to produce secure application code? What is a reasonable goal for the impact of this survey?

Gary McGraw's observations in Securing Java: Getting Down to Business with Mobile Code [McGraw 2000] help to bound expectations on both counts.
Writing secure Java code is very difficult. There is no magic bullet that will solve your security problems; all you can do is think hard (perhaps with help from formal analysis tools) and use prudent engineering practices to minimize risks. Sometimes a pair of objective outside eyes can help. The rules set forth here are intended to describe some prudent engineering practices for writing secure Java code. They won't solve your security problems, but they will reduce the number of ways in which things can go wrong.
3.2  A Security Bestiary

To provide a context for the detailed technical tips and techniques offered later in the survey, it is important to define the relationships among threats, attacks, vulnerabilities, and bugs.
  • A threat is something bad that can happen in a business sense. An example would be the unauthorized release of confidential financial projections.
  • An attack occurs when a malefactor attempts to manipulate a system to execute a threat.
  • A vulnerability is a state in which a system can be made to act in violation of a security policy.
  • A bug is a system flaw that makes it possible to bring the vulnerability to life.
3.2.1  Threats

In this sense, the major threats abetted by insecure code are:
  • Denial of service
  • General systemic compromise
  • Bypass of access controls
  • Data leakage
  • Loss of data integrity
A small sample of the potential business impacts of such threats includes:
  • Corruption of a web site, affecting consumer and investor confidence
  • Embezzlement and fraud through manipulation of databases
  • Theft of trade secrets
3.2.2  General attacks

Some of the most commonly seen technical attacks today, judging from the available literature, are buffer overflow attacks, exploitation of race conditions, and (in web applications) hidden field manipulation or parameter tampering. The highlights of these attacks are presented here; later in the survey, advice is offered on how to develop defenses against them.

Buffer overflow

A large amount of effort has been invested in detecting buffer overflows and, unfortunately, in exploiting them. The best explanation of how they arise is from Matt Bishop in "Checking for Race Conditions in File Accesses" [Bishop 1996b].
The so-called buffer overflow vulnerability is difficult to characterize and define. There are many variations, but they essentially have one of the forms shown in figure 1. A program tries to copy some data from one object into another, does not check that the destination object is large enough to contain the source object, and uses a routine such as sprintf to do the copying.
Table 1.  Genesis of Buffer Overflows (After Bishop and Dilger).
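
Here is a minimal C illustration of the pattern (our sketch, not a fragment from Bishop's paper): a fixed-size destination, attacker-controlled input, and an unchecked sprintf copy.

    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        char buf[32];                    /* fixed-size destination */

        /* FLAW: nothing checks that argv[1] fits in buf, so a long
           argument overruns the buffer and corrupts adjacent memory */
        if (argc > 1)
            sprintf(buf, "%s", argv[1]);
        return 0;
    }

A bounded copy, such as snprintf(buf, sizeof(buf), "%s", argv[1]), closes this particular hole.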

Race conditions

Race conditions are also known as "time-of-check-to-time-of-use" (TOCTTOU) flaws. In the same paper cited above, Matt Bishop has once again done the best published research.
A subclass of TOCTTOU flaws, which we call TOCTTOU binding flaws, arise when object identifiers are fallaciously assumed to remain bound to an object.

The archetypal TOCTTOU binding flaw in a privileged program on the UNIX operating system arises when a setuid-to-root program is to save data in a file owned by the user executing the program. The program should not alter the file unless the user could alter the file without any special privileges. Code to do so typically looks like this:
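
A minimal C sketch of that sequence (names and error handling are illustrative):

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* filename names a file owned by the real (invoking) user */
    int save_data(const char *filename)
    {
        int fd;

        /* checked against the REAL uid: can the invoking user write it? */
        if (access(filename, W_OK) != 0)
            return -1;

        /* RACE WINDOW: filename can be re-bound (e.g., via a symbolic
           link) between the access() check above and the open() below */
        if ((fd = open(filename, O_WRONLY)) < 0) {
            perror(filename);
            return -1;
        }
        /* ... write the data, using the process's root privileges ... */
        close(fd);
        return 0;
    }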
If the program were to omit the access(2) system call, the open(2) system call would always succeed, because the effective UID of the process is root. If the user executing the program could not write to the file, the access system call would return -1 and the open would never be attempted. So this fragment allows the process to write to the file if, and only if, the user executing the program could do so.

If the object referred to by filename changes between the two system calls, though, the second object will be opened even though its access was never checked (access to the first object was checked) [Bishop 1996b].
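
To make the race concrete, a hypothetical attacker might re-bind the name in a tight loop, hoping a swap lands inside the check-to-use window. An illustrative shell sketch (ours, not Bishop's):

    # "datafile" is the name the victim program passes to access()/open()
    while true
    do
        ln -sf $HOME/harmless datafile   # attacker-writable: passes access()
        ln -sf /etc/passwd datafile      # wins if the swap lands before open()
    done

If the second re-binding happens to fall between the victim's access() and open(), the setuid-root process ends up writing to /etc/passwd on the attacker's behalf.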
Trouble with environment variables

Throughout the literature of secure coding, authors plead with programmers not to rely on the user to supply reasonable, rational, or safe values for a process's environment variables. Here is an example of what can go wrong.

Suppose that a shell script or Perl script is careless enough not to reset the value of the IFS variable. (It's an obscure one: UNIX shells allow the user to redefine the internal field separator characters that split command lines into words.) The exploiting code fragment might look like this.
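
A reconstruction of the classic setup (the program name "victim" is hypothetical):

    IFS='/'            # redefine the shell's field separator to "/"
    PATH=".:$PATH"     # put the current directory on the search path
    export IFS PATH    # both are inherited by the shell the setuid program spawns
    victim             # hypothetical setuid program that invokes /bin/foo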
Having set these values, the attacker can now invoke a setuid program which, in turn, invokes, say, /bin/foo. The shell will parse "/bin/foo" as the command "bin" with the argument "foo"; with "." on the search path, "bin" resolves to "./bin". So the prepared attacker, who has placed a program named bin in the current directory, can now execute it with setuid privileges!

Hidden field manipulation

Hidden fields are often used in CGI scripts to save information about the client's session. This allows the normally stateless Web server to operate with a kind of pseudo-state. The popular CGI.pm Perl package, for example, uses hidden fields to great effect. But it is possible for a malicious user to save a valid form, manipulate the hidden fields, and then POST the modified form back to the server. This trick can devastate an application's security.

Advosys Consulting, in "Preventing HTML form tampering" [Advosys 2001], gives this example.
[Modifying] form fields is very simple. For example, let's assume the price of a product is kept in a hidden field and is therefore trusted by the back-end system, a common practice that allows for "e-shoplifting". A hacker can change the price, and the invoked CGI will charge him or her the new amount, as follows:
1.  Open the HTML page in an HTML editor.
2.  Locate the hidden field (e.g., "<input type=hidden name=price value=99.95>").
3.  Modify its content to a different value (e.g., "<input type=hidden name=price value=1.00>").
4.  Save the HTML file locally and browse it.
5.  Click the "buy" button to perform electronic shoplifting via hidden field manipulation.
Parameter tampering

Tampering with CGI parameters embedded inside a hyperlink can have a similar effect. From the same article by Advosys:
For example, let's take a search CGI that accepts a template parameter:
Search.exe?template=result.html&q=security
By replacing the template parameter, a hacker can obtain access to any file he wants, such as /etc/passwd or the site's private key, e.g.:
Search.exe?template=/etc/passwd&q=security
Cookie poisoning

Still another form of tampering is called "cookie poisoning."

Some applications make it a practice to store authentication data in a cookie on the client's machine. This is an error, especially if the cookie is not cryptographically protected: since the cookie resides on the user's own system, he can tamper with it at will.
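
One standard remedy is to bind the cookie's contents to a keyed message authentication code so that tampering is detectable. The sketch below is ours and assumes OpenSSL's HMAC() is available; names such as SERVER_KEY and cookie_mac are illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>   /* assumes OpenSSL is installed */

    /* Hypothetical server-side secret; never sent to the client */
    static const unsigned char SERVER_KEY[] = "change-this-secret";

    /* Compute a hex-encoded MAC over the cookie payload. The server
       stores "payload|mac" in the cookie and recomputes the MAC on
       every request; a tampered payload no longer matches its MAC. */
    static void cookie_mac(const char *payload, char *hex, size_t hexlen)
    {
        unsigned char md[EVP_MAX_MD_SIZE];
        unsigned int mdlen, i;

        HMAC(EVP_sha256(), SERVER_KEY, sizeof(SERVER_KEY) - 1,
             (const unsigned char *)payload, strlen(payload), md, &mdlen);
        hex[0] = '\0';
        for (i = 0; i < mdlen && 2 * i + 2 < hexlen; i++)
            snprintf(hex + 2 * i, 3, "%02x", md[i]);
    }

    int main(void)
    {
        char mac[2 * EVP_MAX_MD_SIZE + 1];

        cookie_mac("user=alice;role=customer", mac, sizeof(mac));
        printf("Set-Cookie: session=user=alice;role=customer|%s\n", mac);
        return 0;
    }

Even with a MAC, the cookie's contents remain readable by the user; data that must stay private belongs on the server, with the cookie holding only an opaque identifier.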
