in the future. Take OpenSSH, for example. In version 2.5.2p2, ssh drops all privileges permanently by calling setuid(getuid()). The newer version 3.5p1, however, introduces a weakness because it adds a call to seteuid(getuid()) before setuid(getuid()) and therefore fails to drop privileges completely. Incorporating static analysis tools into the build processes of software packages would help programmers catch vulnerabilities as soon as they are introduced into the programs. In addition, just as regression tests prevent old bugs from being re-introduced, MOPS could be used to prevent old security holes from being re-introduced into security-critical applications.
5.4. Configurations Make Static Analysis Difficult
Many packages have build-time configurations and run-time configurations. Build-time configurations affect which code gets compiled and linked. Some of them are set by autoconf, which detects the features of the platform, and some can be set by the user. Therefore, one CFG reflects only one build-time configuration, but in some cases only certain configurations result in vulnerable executables. This is often true for programs that make setuid-like calls. Because these calls differ across platforms, many programs put them inside conditional macros (#ifdef), hoping that they work correctly on every platform. However, MOPS can only check the package for the property on the platforms where the package has been built.
Run-time configurations affect the control flow of the program: some control flows are feasible only under some configurations. Since MOPS lacks data flow analysis, run-time configuration causes false positives. Furthermore, there are often constraints on the parameters in a configuration, so even a data-flow-savvy tool cannot avoid some false positives without domain knowledge of each configuration. This demands more human assistance. Therefore, configuration-dependent code remains a weak spot for software model checking, static analysis, and formal verification in general.
6. Related Work
A number of static analysis techniques have been used to detect specific security vulnerabilities in software. Wagner et al. used integer range analysis to find buffer overruns [21]. Koved et al. used context-sensitive, flow-sensitive, inter-procedural data flow analysis to compute access rights requirements in Java, with optimizations to keep the analysis tractable [14]. CQUAL [11] is a type-based analysis tool that provides a mechanism for specifying and checking properties of C programs. It has been used to detect format string vulnerabilities [18] and to verify authorization hook placement in the Linux Security Modules framework [23]; both are examples of developing sound analyses for verifying particular security properties. The application of CQUAL, however, is limited by its flow insensitivity and context insensitivity, although it is being extended to support both.
Metal [9, 1] is a general tool that checks for rule violations in operating systems, using meta-level compilation to write system-specific compiler extensions. Use of Metal basically requires the programmer/security analyst to specify annotations on the program being studied. Metal then propagates these annotations and is very good at finding mismatches: e.g., when a pointer to a user-mode address is dereferenced inside the kernel. Over time, Metal has evolved from a primarily intra-procedural tool into an inter-procedural tool. Recent work [22] has focused on improving the annotation process. The most substantial difference between Metal and MOPS is Metal's annotations: with more effort, Metal can find data-driven bugs that MOPS currently cannot. Overall, the approaches of MOPS and Metal are converging. One could easily recommend using both tools to analyze a particular piece of software, with each tool applied where it has particular strength. Moreover, we expect that the results of this paper would carry over to Metal; all of the security properties we considered can be expressed within Metal's annotation system, and it seems likely that most (or all) of the bugs we found with MOPS could also have been found with Metal.
SLAM [3, 4] is a pioneering project that uses software model checking to verify temporal safety properties of programs. It validates a program against a well-designed interface using an iterative process. In each iteration, a model checker determines the reachability of certain states in a Boolean abstraction of the source program, and a theorem prover verifies the path found by the model checker. If the path is infeasible, additional predicates are added and the process enters a new iteration. SLAM is very precise and, as such, represents a promising direction in lightweight formal verification. SLAM, however, does not yet scale to very large programs. Similar to SLAM, BLAST is another software model checker for C programs; it uses counterexample-driven automatic abstraction refinement to construct an abstract model, which is then model checked for safety properties [12]. Compared to SLAM and BLAST, MOPS trades precision for scalability and efficiency by considering only control flow and ignoring most data flow, as we conjecture that many security properties can be verified without data flow analysis. Also, since MOPS is not an iterative process, it does not suffer from possible non-termination as SLAM and BLAST
do. ESP [8] is a tool for verifying temporal safety proper-