4.  Elements of Secure Coding


4.1  Introduction

We have broken the advice that follows into three broad categories.
  • Architectural principles. Rules that will apply with equal force across a wide range of platforms and applications, based on psychology, experience, and computing theory.
  • Design ideas. Detailed technical advice that can nevertheless be applied to many different languages and operating systems.
  • Language-specific tips. Do's and Don'ts derived from particular language features.
4.2  Architectural Principles

Sound security architecture is what lets a programmer say "Yes" with confidence, and "No" with certainty.

Here are the fundamental principles of secure system architecture. Most were first enunciated formally in J.H. Saltzer's "The Protection of Information in Computer Systems" [Saltzer 1975]. Many of the comments were adapted from Matt Bishop's SANS '96 presentation [Bishop 1996a].
  • Principle of least privilege. A program should execute with only the privilege it needs.
  • Principle of fail-safe defaults. Default actions are selected to be as safe as possible. For example, a user is assumed not to have access rights unless determined to have them. Further, if a program terminates abnormally, protection levels must be restored to the status quo ante.
  • Principle of economy of mechanism. Simple systems are easiest to design well and test thoroughly. Moreover, features that do not exist cannot be subverted, and programs that did not need to be written have no bugs.
  • Principle of complete mediation. Every attempted action should be tested against policy, step by step, before being carried out.
  • Principle of open design. Reliance upon concealment of design details is generally misplaced. Security should be intrinsic.
  • Principle of separation of privilege. Don't put all your security eggs in one basket. Require, for example, both proper permissions and a password as a prerequisite to file access.
  • Principle of least common mechanism. Minimize shared channels or resources.
  • Principle of psychological acceptability. Select a user interface that makes it easy to do the right thing. Use mental models and paradigms drawn from the real world and familiar to everyone. Banks, safes, locks and keys are useful exemplars.
And a few additional pieces of counsel collected from experienced security architects.
  • Defense in depth is better than relying on a single barrier.
  • Design your systems as if your keenest adversary will be attacking them.
  • Any design is only as secure as its weakest component.
  • Require individual accountability.
  • Build checkpoints (single points of entry) into your designs.
  • Make use of the access control provided by the operating system, but don't rely on it alone.
  • Modularize.
  • Don't invoke untrusted programs from within trusted ones.
  • Fail cleanly. Degrade gracefully.
4.3   Design Ideas

On a more technical level, here are some platform-independent tips.
  • Program defensively.
  • Strip symbols from binary files.
  • Make use of program wrappers to clean and constrain the program's operating environment.
  • Self-limit program consumption of resources such as memory and processing time.
  • Maintain minimal retained state. It's harder to get into a confused, disallowed state.
  • Cleanse all data from outside the program, such as:
    • Initialization and configuration files
    • Command line arguments
    • File names (e.g., a file name that begins with "../" can lead to trouble)
    • URLs
  • Check all user-input strings for characteristics such as:
    • Strange character sets (Unicode, etc.)
    • Appropriate string termination
    • Appropriate string length
    • Disallowed characters (better yet, check for allowed characters)
  • Figure out the correct initial value for variables and buffers, and set them that way.
  • Do not confuse random and pseudo-random numbers--or their respective uses.
  • Do not keep sensitive data in a database without password protection.
  • Do not echo passwords or display them on the user's screen for any reason.
  • Do not send passwords via email, especially mail generated by a program.
  • Do not code usernames or passwords into an application.
  • Do not store unencrypted passwords in a file.
  • Do not transmit unencrypted passwords across a network.
  • Do not authenticate users in the client portion of the application, then pass directives on to the database server for execution without further checking.
  • Avoid invoking a shell or command line from within an application.
  • Do not make access decisions based on environmental variables.
  • Once a file is open, refer to it by its file descriptor; don't repeatedly look it up by name.
  • Avoid embedding an encryption key in a program; anyone given the program itself can recover the key.
  • Avoid authentication based on shared knowledge of a password.
  • Read through your code. Think of how you might attack it.
  • Link statically. Otherwise a user might interpose his own shared library.
  • Run privileged for as short a time as possible.
  • Stop on errors. Do not try to recover unless well-defined subsequent behavior is assured.
  • Always check return codes.
  • Don't rely on source IP (Internet Protocol) addresses as a means of authentication.
  • Set the working directory explicitly when the program starts.
  • Do not create files in world-writeable directories.
  • Use/re-use library calls, system calls, and other well tested code whenever possible.
  • Don't let your programs core dump. Exit under control instead.
UNIX-specific suggestions
  • Make use of chroot to constrain execution to a subset of the directory tree.
  • If you are expecting to create a new file with the open call, use the O_EXCL|O_CREAT flags.
  • When performing a series of operations on a file, such as changing its owner, calling stat() on it, or changing its mode, first open the file and then use the fchown(), fstat(), or fchmod() system calls on the descriptor.
  • If you think that a file should be a file, use lstat() to make sure that it is not a link.
4.4  Special Topics

More involved topics in this area are reviewed here.

4.4.1  Handling temporary files

If there is a need to create a temporary file, consider using the tmpfile() function. This will create a temporary file, open it, delete the file, and return a file handle.

If it is available, mkstemp(3) can also be used. It's slightly safer than tmpfile(), being less vulnerable to race conditions. However, mkstemp() doesn't support the TMP or TMPDIR environment variables, which means the programmer will probably have to code that feature.

Whichever approach is taken, do not write a temporary file in a world-writable directory. Race conditions and UNIX symbolic-link magic could allow a malicious user to cause a system file to be overwritten instead. Another drawback: if the program writes the file out so that it can read it back in later, the file should not live in a directory where it can easily be changed. It's far better to use whatever temp-file system calls are available on the system.

4.4.2  Handling privileges

The correct handling of privileges in UNIX can be intricate. A short summary is provided here. The "real UID" is the user ID of the user running the program. The "effective UID" is the user ID whose privileges the process currently exercises. The "login/audit UID" is the user ID of the user who originally logged in. The "saved UID" is the user ID in effect before the last change made by the program.

The goal is to use the minimum privilege needed at a given moment, and to "give away" privileges irreversibly as soon as the program no longer needs them.

The detailed discussion that follows is from Matt Bishop's SANS '96 tutorial [Bishop 1996a].

Looking up the user IDs with system calls:
  • getuid() returns the real UID
  • geteuid() returns the effective UID
  • getauid() returns audit (login) UID (varies; on Solaris, must be root to run this)
  • getlogin() returns login (audit) UID
Warning: on some systems, getlogin() returns the name of the user associated with the terminal connected to stdin, stdout, or stderr (which is very different from the above).
  • getsuid() returns saved UID (on some systems). On others, your program must save this value itself if you plan to refer to it later.
Setting the UID's with system calls:
  • setuid(uid) set UID: if root, sets real, effective, and saved; if not root, sets effective only
  • setruid(uid) set real UID
  • seteuid(uid) set effective, saved UID
  • setauid(uid) set audit (login) UID (varies; on Solaris, must be root to run this)
  • setlogin(uid) set login (audit) UID
  • setreuid(rid,eid) set real (rid), effective, saved UID (eid)
4.4.3  Handling random numbers

Many application programmers create security vulnerabilities by employing flawed or inappropriate random number generation algorithms. Some of the most famous Internet vulnerabilities (for example, the flaw in Kerberos v. 5.0 in 1996, and the TCP random number sequencing attacks of about the same time) had this as their root cause.

Picking a good random number is not easy. It may be best to leave the job to true specialists. (See, for example, Wheeler [Wheeler 2001].) However, here are some more ideas to consider.
  • Use a genuine source of randomness, such as a radioactive source, thermal noise, movement on a lava lamp, or something similar.
  • Ask the user to type a set of text, and sample the time between keystrokes. Hash the result with a cryptographic hash function.
Here are methods to avoid.
  • Avoid relying on the system clock.
  • Don't use Ethernet addresses or hardware serial numbers.
  • Beware of using information such as the time of the arrival of network packets.
  • Don't use random selection from a large database.
4.5  Language-specific Tips

4.5.1  C/C++ Tips
  • Avoid popen(), because of problems with environment variables.
  • Avoid execlp() and execvp(), which use the PATH environment variable to locate the program to be executed.
  • Use enum.
  • Use the assert() macro.
  • Compile with -Wall -Wpointer-arith -Wstrict-prototypes -O2. Some warnings only show up under optimization analysis.
  • Use exec() instead of system(), to avoid invoking a shell.
  • Avoid calls to strcpy(), strcat(), sprintf(), and gets(). They are prone to buffer overflows.
  • Use instead strncpy(), strncat(), snprintf(), and fgets().
  • Be wary of buffer overflows when using: scanf(), fscanf(), sscanf(), realpath(), getopt(), getpass(), streadd(), strecpy(), strtrns(), and getwd().
  • For file I/O, especially non-atomic operations, use fchown(), fchmod(), fstat(), fchroot() and--in place of the non-existent funlink()--ftruncate().
4.5.2  Java Tips

The following tips for secure Java programming are adapted primarily from [Wheeler 2001] and [McGraw 2000].
  • In general, reduce the scope of methods and fields as much as possible. (Can package-private methods be private? Can protected members be package-private/private [Sun 2000]?)
  • Refrain from using non-final public static variables. (You can't check to see if the code that changes them has appropriate permissions.)
  • Avoid using static field variables, which can be found by any other class.
  • Don't return mutable objects to potentially malicious code. (This includes arrays.)
  • Don't depend on initialization.
  • Make everything final unless there's a good reason not to.
  • Don't use inner classes; when they are compiled into bytecode they become a class accessible to any class in the package.
  • Make your classes uncloneable. This prevents attackers from instantiating your class without running its constructors.
  • Make your classes unserializable. This prevents attackers from viewing the internal state of your objects.
  • Make your classes undeserializable, if possible.
  • Don't compare classes by name.
  • Don't store secrets in the code or data. A hostile JVM could view it.
  • Make fields private; if outside code needs access, provide accessor methods rather than public fields or variables.
4.5.3  Perl/CGI Tips
  • Turn off server-side includes, if that is feasible.
  • Don't run the web server as root.
  • Be careful of eval.
  • Use Perl's emulation mode for handling setuid scripts.
  • Set the PATH variable, including just the directories you need. Example:
$ENV{"PATH"} = "/bin:/usr/bin:/usr/local/bin";
  • Don't trust HTTP_REFERER.
4.6  Special Topics

4.6.1  Using Taint in Perl

Use the "taint" checking mechanism. (Invoke it with a "-T" command line flag.) Any variable that is set using data from outside the program (including data from the environment, from standard input, and from the command line) is considered tainted and cannot be used to affect anything else outside your program. The taint can spread. If you use a tainted variable to set the value of another variable, the second variable also becomes tainted. Tainted variables cannot be used in eval(), system(), exec() or piped open() calls. If you try to do so, Perl exits with a warning message.

Perl running in taint mode will also exit if you attempt to call an external program without explicitly setting the PATH environment variable.

4.6.2  Filtering special characters

CGI scripts in particular are vulnerable to manipulation by the insertion of special characters in the input stream. David Wheeler [Wheeler 2001] warns that:
One of the nastiest examples of this problem is shell metacharacters. The standard Unix-like command shell (stored in /bin/sh) interprets a number of characters specially. If these characters are sent to the shell, then their special interpretation will be used unless escaped; this fact can be used to break programs. According to the WWW Security FAQ [Stein 1999], these metacharacters are:
& ; ` ' \ " | * ? ~ < > ^ ( ) [ ] { } $ \n \r
Wheeler also presents a code fragment for coping with this problem.
4.6.3  Message digests

How can a web application detect tampering? One way is to use secure hash algorithms, called "Message Digest" algorithms. Digests are used in SSL browser connections, Virtual Private Networks (VPN) and Public Key Infrastructure (PKI) systems to "sign" data. The same technique can be used in a web app to "sign" hidden fields.

Message digests, like checksums, determine a mathematical "fingerprint" of the characters in a string. The fingerprint can be compared with a saved or known good value to detect tampering.

You can use MD5 or another message digest to checksum hidden-field data. A moment's thought shows that this naive approach is not immune to compromise, but it may be good enough against specific threats. If it's not, many more complex methods are available in [Phillips 1995].

4.6.4  Concerns about setuid shell scripts

If at all possible, avoid using setuid shell scripts. (The operating system may not allow them, especially in the Bourne shell.) It's a dangerous approach, even when using wrappers to constrain the environment passed to the shell.

Shells interact closely with their environment. If shells must be used for setuid scripts, be sure to strip all environment variables at the start, then add back only the ones the script needs. At all costs, redefine the PATH and IFS variables. They are the source of several widely known vulnerabilities.

Shells are also complicated and powerful, and can produce surprising behavior. (For example, if arg 0 begins with '-', the shell runs as a login--interactive--shell!)

Most operating systems, including all mainstream SYSV derivatives, will effectively strip the setuid bit off a script when the kernel executes it. This action protects the system against a race-condition exploitation that surfaced in the 1980s.

Finally, the following trivial root exploit shows why the Bourne shell makes a terrible setuid candidate.
% ls -l /etc/reboot
-rwsr-xr-x 1 root 17 Jul 1992 /etc/reboot
% ln /etc/reboot /tmp/-x
% cd /tmp
% -x

Site Contents Copyright (C) 2002, 2003 Mark G. Graff and Kenneth R. van Wyk. All Rights Reserved.