OpenWorks LLP, Cambridge, UK (julian@open-works.co.uk)
Department of Computer Sciences, University of Texas at Austin (njn@cs.utexas.edu)
We present Memcheck, a tool that has been implemented with the
dynamic binary instrumentation framework Valgrind. Memcheck detects a
wide range of memory errors in programs as they run. This paper
focuses on one kind of error that Memcheck detects: undefined value errors.
Such errors are common, and often cause bugs that are hard to find in programs
written in languages such as C, C++ and Fortran. Memcheck's definedness
checking improves on that of previous tools by being accurate to the level of
individual bits. This accuracy gives Memcheck a low false positive and false
negative rate.
The definedness checking involves shadowing every bit of data in
registers and memory with a second bit that indicates if the bit has a
defined value. Every value-creating operation is instrumented with a
shadow operation that propagates shadow bits appropriately. Memcheck
uses these shadow bits to detect uses of undefined values that could
adversely affect a program's behaviour.
Under Memcheck, programs typically run 20-30 times slower than
normal. This is fast enough to use with large programs. Memcheck
finds many errors in real programs, and has been used during the
past two years by thousands of programmers on a wide range of systems,
including OpenOffice, Mozilla, Opera, KDE, GNOME, MySQL, Perl, Samba,
The GIMP, and Unreal Tournament.
The accidental use of undefined values is a notorious source of bugs
in programs written in imperative languages such as C, C++ and
Fortran. Such undefined value errors are easy to make, but can
be extremely difficult to track down manually, sometimes lurking
unfound for years.
In this paper we describe Memcheck, a practical tool which detects a
wide range of memory errors, including undefined value errors.
Memcheck is implemented using the dynamic binary instrumentation
framework Valgrind [10,9].
First, it tracks the addressability of every byte of memory, updating the
information as memory is allocated and freed. With this information, it can
detect all accesses to unaddressable memory.
Second, it tracks all heap blocks allocated with malloc(), new and
new[]. With this information it can detect bad or repeated frees of
heap blocks, and can detect memory leaks at program termination.
Third, it checks that memory blocks supplied as arguments to
functions like strcpy() and memcpy() do not overlap. This does not
require any additional state to be tracked.
Fourth, it performs definedness checking: it tracks the definedness of
every bit of data in registers and memory. With this information it can detect
undefined value errors with bit-precision.
All four kinds of checking are useful. However, of the four,
definedness checking is easily the most sophisticated, and it is
this checking that this paper focuses on.
Memcheck uses dynamic binary instrumentation
to instrument the program to be checked (the client)
on-the-fly at run-time. Execution of the added instrumentation code is
interleaved with the program's normal execution, not disturbing normal program
behaviour (other than slowing it down), but doing extra work ``on the side'' to
detect memory errors. Because it instruments and analyses executable machine
code, rather than source code or object code, it has the following nice
properties.
Memcheck is part of the Valgrind suite, which is free (GPL) software,
and is available for download from the Valgrind website
[14]. It currently runs only on the x86/Linux
platform, although work is currently underway to port it to other
platforms such as AMD64/Linux, PowerPC/Linux, x86/FreeBSD, and
PowerPC/MacOSX.
Memcheck is easy to use. As an example, consider the (contrived) program
badprog.c in Figure 1. It contains three undefined
value errors. To check the compiled program badprog the user only has
to type:
Figure 2 shows the resulting output. The first three lines
are printed by Memcheck on startup. The middle section shows three
error messages issued by Memcheck. The final three lines are printed at
termination, and summarise Memcheck's findings. Each line of Memcheck's output
is prefixed with the client's process ID, 27607 in this case.
The three error messages are quite precise. The first indicates that
the memory passed to the system call write() via the buf argument
contains undefined values; its last line indicates that the undefined
value is on the stack of thread 1 (the main thread). The second
indicates that a conditional jump depends on an undefined value.
The third indicates that an undefined value is used as a pointer.
All three error messages include a stack trace that indicates precisely
where the error is occurring. In order to get exact line numbers in
the stack traces, the client must be compiled with debugging
information. If this is missing, the code locations are less precise,
as can be seen with the location within the write() function
(not to be confused with the write() system call)--GNU libc on
this system was not compiled with debugging information.
Attentive readers may note that the final line of badprog.c
could cause a segmentation fault due to the use of the uninitialised
variable p as an address.
Memcheck is the first tool we are aware of that tracks definedness
down to the level of bits. Other tools track definedness at byte
granularity (Purify) or word granularity (Third Degree).
This means Memcheck correctly handles code which deals with
partially-defined bytes. In C and C++, two common idioms
give rise to such bytes: use of bit arrays and use of bitfields in
structures. Tools which track definedness at byte or word
granularities necessarily give inaccurate results in such
situations--either they fail to report genuine errors resulting from
uses of uninitialised bits, or they falsely flag errors resulting from
correct uses of partially defined bytes.
The program shown in Figure 3 uses an
uninitialised bit in a bit-array. Memcheck reports this, but Purify
does not.
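Figure 3 itself is not reproduced here. The following small program (our own illustration in the same spirit, not the actual figure) sets only one bit of a heap byte and then branches on a different, never-initialised bit; a byte-granularity tool must either miss the error or complain about the legitimate use of the initialised bit.

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   unsigned char* bits = malloc(16);   /* bit-array; contents undefined */
   bits[0] |= 1 << 5;                  /* bit 5 of byte 0 is now defined */

   /* Branching on bit 3 uses an uninitialised bit, even though bit 5
      of the same byte is perfectly defined. */
   if (bits[0] & (1 << 3))
      printf("bit 3 is set\n");

   free(bits);
   return 0;
}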
Memcheck is known to have found previously-undetected uses
of single uninitialised bits in C++ structure bitfields, in at least
one large, widely used C++ code base.
The rest of this paper is structured as follows. Section 2 describes how Memcheck works, in particular the
details of shadow bit operations, which are crucial in ensuring
Memcheck's accuracy and speed. Section 3 evaluates
Memcheck by considering the cost of its use--in terms of how easy it
is to obtain and run, the ease of using its results, and its impact on
performance--and the benefits provided by its bug-finding abilities.
Section 4 discusses related work.
Section 5 concludes.
The core spends most of its execution time making, finding, and
running translations. None of the client's original code is run. The
core also provides many services to tools, to ease common tasks such
as recording errors and reading debug information. Only one tool can
be used at a time.
The Valgrind distribution contains the core, plus five tools:
Memcheck, Addrcheck (a lightweight version of Memcheck that omits
definedness checking), a cache profiler, a memory space-use (heap)
profiler, and a data race detector for POSIX pthread-ed programs.
Most operations do not directly affect the program's observable
behaviour. In such cases, the V bits are not checked. Instead,
V bits for the result of the operation are computed. Hence,
for the most part, Memcheck silently tracks the flow of
undefined values, and only issues an error message when use of such
a value is potentially dangerous. This scheme is required
because undefined values are often copied around without any
problem, due to the common practice, used by both programmers and
compilers, of padding structures to ensure fields are
word-aligned.
From this overview, the main overheads of definedness checking are
apparent. First, the use of V bits doubles the amount of memory in
use. Second, most instructions compute new values, and thus require a
shadow operation itself consisting of one or more instructions.
Every 32-bit general purpose register is shadowed by a 32-bit shadow register,
and every byte of memory has a shadow V byte.
There are, however, some exceptions to the basic rule of ``one V bit per data
bit''.
In principle it is possible to directly state the V bit transformations
required to shadow each x86 instruction. In practice, the x86
instruction set is so complex and irregular that this would be
difficult and fragile. Fortunately, UCode (Valgrind's intermediate
representation) is RISC-like and clearly exposes all memory and arithmetic
operations, which makes Memcheck's instrumentation task much easier.
Another important aspect of UCode is that it supports the use of an
infinite supply of virtual registers. The initial translation from
x86 is expressed in terms of such registers. Memcheck
interleaves its own instrumentation UCode with it, using as
many new virtual registers as required. Valgrind's core has the job of
performing instruction selection and register allocation to
convert this sequence back to executable x86 code.
The V bit instrumentation scheme is best described in terms of a
family of simple abstract operations on V bits. We will use d1 and
d2 to denote virtual registers holding real values, and v1
and v2 to denote virtual shadow registers holding V bits.
Also, some operations below use X and Y to indicate the operand and
result widths in bytes (0 represents an operand with a width of a
single bit). Those operations for which the width is not specified are
width-independent.
Memcheck uses the following binary operations.
For exactly analogous reasons, ImproveOR behaves
similarly, with the interesting case being ImproveOR(1,
defined) = defined.
The following unary operations are also needed.
These casts are used in various approximations in which the
definedness checking needs to consider the worst-case across a whole word
of bits. It is important to appreciate that the narrowing
casts (where Y < X) do not simply discard
the high bits of the operand. Similarly, the case where X = Y
is not the identity function.
In all cases, each result bit
depends on all operand bits, regardless of their relative sizes.
PCast can be implemented in at most three x86 instructions: for
narrowing casts, negation followed by subtract-with-borrow;
for widening casts, a shift, a negation, and a subtract-with-borrow.
Every operation (instruction or system call) that creates a value must
be instrumented with a shadow operation that computes the
corresponding V bits. This section describes these shadow operations
in detail.
Recall that for each virtual register in the incoming UCode, Memcheck
allocates a shadow virtual register to carry the corresponding V bits.
All memory bytes mapped at startup (i.e. code and data segments, and
any shared objects) have their V bits set to zero (defined).
Memcheck also uses such events to update its maps of which address
ranges are legitimately addressable. By doing that it can detect
accesses to invalid addresses, and so report to the user problems such
as buffer overruns, use of freed memory, and accesses below the stack
pointer. Details of this addressability checking are beyond the scope of
this paper.
x86 contains a byte-swap (endianness-change) instruction. As this
merely rearranges bits in a word, a byte-swap instruction is
also applied to the shadow value.
The same scheme is used for multiplies. This is overly conservative because
the product of two numbers with N and M consecutive least-significant
defined bits has N+M consecutive least-significant defined bits, rather than
the min(N, M) that the Add/Sub scheme generates. It would be possible
to do a better job here, but the extra expense does not seem justified
given that very few, if any, complaints have arisen over the subject
of false positives arising from multiplies.
The shadow operation for Neg (negation) is trivially derived by
constant-folding the shadow operation for Sub(0,d), giving the result
Left(v), where v is the shadow for d.
The rule for Not is trivially derived by
constant-folding the rule for Xor(0xFF..FF,d), giving,
as one might expect, the simple result
v, where v is the shadow for d; i.e. the V bits are
unchanged.
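Spelled out (our reconstruction of the constant-folding steps, using the Add/Sub and Xor rules given in Section 2.6 and the fact that a literal operand has an all-defined shadow, written 0 here):

shadow(Neg(d)) = shadow(Sub(0,d))        = Left(UifU(0,v)) = Left(v)
shadow(Not(d)) = shadow(Xor(0xFF..FF,d)) = UifU(0,v)       = v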
Given input d3 = OP(d1, d2), where d2 is the
shift/rotate
amount, and the sizes of d1/d3 and d2 are
respectively
X and Y,
the resulting instrumentation assignment is
In all five cases, the definedness bits are processed using the same
operation as the original. For Rol and Ror, the definedness bits
must be rotated exactly as the data bits are.
Shl and Shr shift
zeroes into the data, and so corresponding zeroes--indicating
definedness--need to be shifted into the definedness word.
Sar copies the
top bit of the data, and so needs to also copy the top bit of the
definedness word.
Signed and unsigned widening conversions give rise respectively to a
single SWiden or ZWiden operation.
Because of this, Memcheck can only offer crude instrumentation of such
instructions. Such instrumentation is safe in the sense that
all uses of undefined values, and all illegitimate memory
accesses, will still be caught. The crudeness of the instrumentation
has the effect that some computations, when done with FP or SIMD
registers, may elicit false-positive undefined value errors, when
similar or identical operations done using integer registers would not. The
most notable case is that copying undefined data through the FP or SIMD
registers will elicit false positives.
The instrumentation scheme is as follows. Neither the FP nor SIMD
registers have any associated V bits. When a value is loaded
from memory into such a register, if any part of the value is
undefined, an error message is issued. When a value is written from such a
register to memory, shadow memory is marked as defined. This is in
keeping with the Eager approximation scheme described shortly.
So far, this crude scheme has proven adequate, mostly because
programmers and compilers rarely copy and manipulate
partially-undefined data through FP or SIMD registers.
However, vectorising compilers for SIMD architectures are becoming
increasingly common [8],
and this scheme cannot continue much longer--it
is the biggest weakness of Memcheck. A new version of Memcheck
under development will shadow data in floating point registers
and in individual lanes of SIMD registers, thus remedying
this deficiency.
In such situations, two approximation schemes are possible.
Using this scheme, undefinedness can be made to ``flow''
through unknown operations, albeit in a pessimistic manner.
No error messages will be issued when such operations execute.
Memcheck currently uses Eager for all floating point and SIMD
operations, as described above, and Lazy in all other situations.
At every point where an undefined value could be consumed by
an operation, Memcheck has a choice: should it report the error
right now, or should it silently propagate the undefinedness into
the result? Both approaches have advantages.
Memcheck mostly takes the second alternative, deferring error reporting
as far as it can. Checks for undefined values are made only
when the program is in immediate danger of performing one of the following
actions, which could change its observable behaviour.
Accordingly, instrumentation for such events is as follows.
Conditional moves could be handled using either the eager or lazy
scheme. Memcheck handles them eagerly, testing the condition code
and reporting any error immediately.
Each error check consists of testing a shadow virtual register against zero for
any undefinedness, and calling a helper function to issue an error message if
so. An important but non-obvious extra step is that, immediately
following the test, the shadow register should be set to zero (``all
defined''). Doing so prevents subsequent checks on it issuing
essentially duplicate errors, which would confuse users. Consider
the following C fragment:
This helps reduce error cascades. A load from an invalid address
will in any case cause Memcheck to issue an invalid-address error message.
If the loaded data was marked as undefined, Memcheck might, as a
result, later issue undefined value error messages. These would confuse users
and obscure the true cause of the error--the invalid address.
Marking the loaded data as defined avoids that problem.
Memcheck has a very low false positive rate. However, a few
hand-coded assembly sequences, and a few very rare compiler-generated
idioms can cause false positives. The few examples we know of are as follows.
Memcheck has a two-part work-around. First, it replaces the standard
versions of these functions with its own less optimised versions that do
not cause problems. But GCC sometimes inlines calls to these functions,
and handling them currently involves a nasty hack. Such code requires
addition/subtraction of carefully chosen constants, such as 0x80808080.
If Memcheck sees adds/subtracts with such suspect constants as operands,
some undefined value checks in the containing basic block are omitted.
A better solution would be to use the presence of such constants
as a signal that adds/subtracts in this block should be
instrumented using an alternative, more accurate but more
expensive formulation which properly tracks carry propagation
[6]. We are developing such a scheme.
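For concreteness, the kind of code in question is the word-at-a-time zero-byte search used by optimised strlen()-like routines. The sketch below shows a textbook variant (GCC's inlined sequences differ in detail, and the exact magic constants vary): the word read may extend past the string's terminator into uninitialised bytes, and although those bytes can never change the answer, their V bits reach a conditional branch, producing the false positives that the suspect-constant workaround suppresses.

#include <stdint.h>
#include <stddef.h>

/* Word-at-a-time zero-byte search (textbook variant).  Once the
   terminating zero lies inside a word, the remaining bytes of that
   word may be uninitialised; they cannot affect the result, but a
   bit-level checker sees their V bits feeding the branch below. */
static size_t wordwise_strlen(const char* s)
{
   const uint32_t* w = (const uint32_t*)s;   /* assume 4-byte alignment */
   for (;;) {
      uint32_t v = *w++;                     /* may read past the 0 byte */
      /* Non-zero exactly when some byte of v is zero. */
      if ((v - 0x01010101u) & ~v & 0x80808080u) {
         const char* p = (const char*)(w - 1);
         while (*p) p++;
         return (size_t)(p - s);
      }
   }
}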
Finally, Memcheck's underlying assumptions are occasionally invalid. For
example, some programs deliberately use undefined values as an additional
source of entropy when generating random numbers.
We believe there are very few situations in which Memcheck fails
to flag uses of undefined values that could have any observable
effect on program behaviour. The exceptions we are aware of are as follows.
Finally, Memcheck of course
cannot detect errors on code paths that are not executed, nor can it
detect errors arising from unseen combinations of inputs.
This limitation is inherent in Memcheck's use of dynamic analysis.
As a result, Memcheck is best used in conjunction with a thorough test suite.
In comparison, static analysis does not suffer these limitations, but the
power of the analysis is necessarily much lower [1].
A system such as Memcheck cannot simultaneously be free of false
negatives and false positives, since that would be equivalent to
solving the Halting Problem. Our design attempts to almost completely
avoid false negatives and to minimise false positives. Experience
in practice shows this to be mostly successful. Even so,
user feedback over the past two years reveals an
interesting fact: many users have an (often unstated) expectation that Memcheck
should not report any false positives at all, no matter how strange the code
being checked is.
We believe this to be unrealistic. A better expectation is to accept
that false positives are rare but inevitable. Therefore it will
occasionally be necessary to add dummy initialisations to code to keep
Memcheck quiet. This may lead to code which is slightly more
conservative than it strictly needs to be, but at least it gives a
stronger assurance that it really doesn't make use of any undefined
values.
A worthy aim is to achieve Memcheck-cleanness, so that new errors are
immediately apparent. This is no different from fixing source code to
remove all compiler warnings, even ones which are obviously harmless.
Many large programs now do run Memcheck-clean, or very nearly so. In
the authors' personal experience, recent Mozilla releases come close
to that, as do cleaned-up versions of the OpenOffice.org-680
development branch, and much of the KDE desktop environment. So this
is an achievable goal.
Finally, we would observe that the most effective use of Memcheck
comes not only from ad-hoc debugging, but also from routinely running it on
applications' automatic regression test suites. Such
suites tend to exercise dark corners of implementations, thereby
increasing their Memcheck-tested code coverage.
As part of this, we refer to a survey of Valgrind users that we
conducted in November 2003, to which 116 responses were received. The
results are available on the Valgrind website [14].
Memcheck's operation has changed very little in the time since, so the
responses about it are still relevant.
Typically, the only further effort a user must make is to compile her
program with debugging information, to ensure error messages are as
informative as possible. Indeed, 40 of the 116 survey responders
praised Valgrind/Memcheck's ease of running, and another 14
commented that ``it just works'', or similar. Only one responder
complained that it could be easier to use.
Although Valgrind/Memcheck is robust--we have had feedback from users using it
on systems with 25 million lines of code--it occasionally fails due to its own bugs.
The main source of problems is that Valgrind interacts closely with the
kernel and GNU libc. In particular the signal and thread handling is
fragile and hard to get right for all Linux distributions, and a
maintenance headache. By comparison, the parts dealing with x86 code
and instrumentation cause few problems, because instruction sets
change very slowly. We hope to decrease our level of interaction with
the kernel and GNU libc in the future, and recent development efforts
have made major progress in that area.
To help on this front, Memcheck provides another client request that
can be inserted into the client's source code. It instructs Memcheck
to check if a particular variable or memory address range is defined,
issuing an error message if not. Judicious uses of this client
request can make identifying root causes of undefined value errors
much easier.
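A minimal sketch of such a use, assuming the macro spellings found in recent memcheck.h headers (the names have changed over Valgrind's history, so check the header shipped with your version):

#include <valgrind/memcheck.h>

struct packet { int len; char payload[64]; };

static void send_packet(const struct packet* p)
{
   /* Ask Memcheck to report, at this exact point, if any of these
      bytes are undefined -- handy for locating where undefinedness
      first enters a data structure. */
   VALGRIND_CHECK_MEM_IS_DEFINED(p, sizeof *p);
   VALGRIND_CHECK_VALUE_IS_DEFINED(p->len);
   /* ... transmit the packet ... */
}

int main(void)
{
   struct packet pkt;
   pkt.len = 3;            /* payload deliberately left undefined */
   send_packet(&pkt);      /* Memcheck reports the undefined payload */
   return 0;
}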
Ideally, error messages would indicate where the
undefined value originated from, e.g. from a heap
block whose contents have not been initialised. However, this would require
augmenting each value with extra information about where it came from, and we
cannot see how to do this without incurring prohibitive overheads.
Ease of interpreting error messages is also important. Ten survey
responders complained that the messages are confusing, but 7 praised them.
One issue in particular is the depth of stack traces: the default is four,
but many users immediately adjust that to a much higher number. This gives
more information, but also makes the error messages longer. This is a
case where a GUI (requested by 8 responders) would be useful, in that large
stack traces could be gathered, shown to a small depth by default, and then
``unfolded'' by the user if necessary. Some users also simply prefer graphical
tools over text-based ones. As it happens, there are several GUI front-ends
for Valgrind, including Alleyoop [17] and Valgui [2].
All measurements were performed using Valgrind 2.1.2 on a 1400 MHz AMD Athlon
with 1GB of RAM, running Red Hat Linux 9, kernel version 2.4.20.
The test programs are a subset of the SPEC CPU2000 suite
[16]. All were tested with the ``test'' (smallest) inputs. The
time measured was the ``real'' time, as reported by /usr/bin/time. Each
program was run once normally, and once under each of the Valgrind
tools.
This
is not a very rigorous approach but that does not matter, as the figures here
are only intended to give a broad idea of performance.
Table 1 shows the time performance of the three
tools. Column 1 gives the benchmark name, column 2 gives its normal running
time in seconds, and columns 3-5 give the slow-down factor for each tool
relative to column 2 (smaller is better). The first nine programs are integer
programs, the remaining four are floating point programs. The bottom two rows
give the median and geometric mean for the slow-down factors.
The slow-down figures for Memcheck are quite high. This is partly due to the
cost of definedness checking, partly due to the cost of Memcheck's other kinds
of checking, and partly because of Valgrind's inherent overhead.
Memcheck also has a large memory cost. Since each byte is shadowed by
a byte holding V bits and also by a single A bit indicating whether
that location is addressable, memory use is increased by a factor of
approximately 2.125, that is, slightly more than doubled.
Shadow storage is allocated on demand, so programs which do not use
much memory when running natively still do not use much when running
under Memcheck. Another space cost is that of holding translations:
Valgrind can store translations of approximately 200000 basic blocks,
occupying about 70MB of storage. This too is allocated incrementally.
Finally, Memcheck records on the order of 50 to 100 bytes of administrative
information for each block allocated by malloc/new/new[].
As a result of this, programs with large memory footprints (above
about 1.5GB) die from lack of address space. Since V-bit storage is
usually the largest component of the overhead, we are considering
compressed representations for V bits. These rely on the observation
that about 99.95% of all bytes are all-defined or all-undefined, and
so their definedness state could be summarised using a single bit.
This would be merely a representational change and would not affect
Memcheck's ability to track definedness with bit-level precision.
Of the various costs to the user of using Memcheck, for many people the
slow-down is the greatest cost. Among the survey responders, 9 praised
Memcheck's performance, and 32 complained about it. In general, we have found
that users who have used other, similar, memory checkers praise Memcheck's
performance, and those who have not used other such tools complain about it.
However, judging from overall feedback, Memcheck's performance is good enough
for the vast majority of users.
It is never easy to convincingly demonstrate in a paper that a tool
such as Memcheck, which is designed to find bugs in programs, works
well. To truly appreciate the usefulness of such a tool, one must
really use it ``in anger'' on a real system. Nonetheless this section
provides two pieces of evidence that Memcheck, particularly
its definedness checking, is effective. The first is a case study of using it
on OpenOffice; the second is general information about Memcheck's popularity.
Of the 11 categories, five (1, 2, 3, 7 and 10) involve undefined values.
This accounts for 96 of 102, or 94%, of the problems found.
Rechtien estimates that, regarding the consequences of the detected problems,
about one third would never show up as a program failure for the user, another
third are bugs which have no consequences yet, but might lead to regressions
later if code is changed, and the last third are plain bugs which might crash
the application or lead to malfunction anytime. The commits made to fix these
errors can be seen in the OpenOffice bugs database [11].
As for false positives, Rechtien states that when compiling with optimisation,
he saw ``a few'', but none when compiling without optimisation.
From this example, Memcheck's usefulness is clear, in particular the
usefulness of its definedness checking.
A short list of notable projects using Valgrind includes: OpenOffice,
StarOffice, Mozilla, Opera, KDE, GNOME, AbiWord, Evolution, MySQL, PostgreSQL,
Perl, PHP, Mono, Samba, NASA Mars Lander software, SAS, The GIMP, Ogg Vorbis,
Unreal Tournament, and Medal of Honour. A longer list is available at the
Valgrind website [14]. This includes a huge range of software
types, almost anything that runs on x86/Linux.
Our November
2003 survey also found that Memcheck is by far the most commonly used of the
Valgrind tools, accounting for approximately 85% of Valgrind use. In
comparison, Addrcheck, which is the same as Memcheck but without the
definedness checking, only accounts for 6% of Valgrind use.
There are a number of tools that detect various kinds of memory errors in
programs, particularly memory leaks and bounds errors. However, only a small
fraction of them detect undefined values. This section discusses only work
that relates directly to Memcheck's definedness checking.
Huang [4] described, very early, the possibility
of using dynamic analysis to detect variables that are written but never read,
and reads of undefined variables. However, the proposed instrumentation was at
the source code level rather than the machine code level.
Kempton and Wichmann [5] discussed the detection of
undefined value errors in Pascal programs. They suggest using one
shadow definedness bit per data bit in memory, just as Memcheck does. They did
not discuss how the shadow bits would be propagated.
Purify [3] is a widely used commercial memory checking tool
that can detect several kinds of memory errors including undefined value
errors. It adds instrumentation to object code at link-time, in order to
do checking at run-time. It shadows every byte of memory with a two-bit value
that encodes one of three states: unaddressable, writable, and readable.
Reads of writable (i.e. allocated but undefined) memory bytes may get flagged
immediately as errors. This is a form of eager checking, as mentioned in
Section 2.6. Because the shadowing is at
the byte-level, it does not feature bit precision.
Third Degree [18] is a memory checking tool built using the
ATOM object code instrumentation framework [15]. As well as
detecting some accesses to unaddressable memory, it uses a crude form of
definedness checking: when a stack or heap location first
becomes addressable, it is written with a special ``canary'' value. An error
is issued if any loads read this canary value. This could lead to false
positives when undefined values are copied around, and there is a
small chance that the canary value might occur legitimately,
although the value is chosen carefully to minimise this likelihood. As
Third Degree only ran on Alphas, it is unfortunately now defunct.
Insure++ [12] is a commercial memory checking tool. It detects
various kinds of errors, including undefined value errors. It adds
instrumentation to the source code (C or C++ only) at compile-time, in order to
do checking at run-time. It has two modes of undefinedness checking: the
first is eager, immediately reporting any reads of undefined variables; the
second is less eager, allowing copies of undefined variables to occur without
warning (but not other operations). Insure++ also comes with Chaperon, a tool
for x86/Linux that works much like Memcheck--it also uses dynamic binary
instrumentation--which can detect reads of uninitialised memory.
The most notable feature of Memcheck compared to previous tools is
that its definedness checking is much more sophisticated,
featuring bit-precision, which means it can be used reliably with
bit-field operations.
Memcheck works well, and is widely used. Our main
challenge is ensuring that this remains true. Work is in progress to develop a
new intermediate representation for Valgrind, which will provide sufficient
description of FP and SIMD operations to keep up with the increasing
capabilities of vectorising compilers. The new intermediate representation
will also be architecture-neutral, in order to provide a solid foundation for
ports to architectures other than x86, which will help keep Valgrind viable
over the long-term. A key challenge here is that instrumentation should also
be architecture-neutral, so that tools do not have to be partially or wholly
re-written for each new architecture supported. We also are working on ports
to other operating systems, in order to remove Valgrind's dependence on Linux.
This is a significant task, but one that is orthogonal to Memcheck's workings.
If you are developing programs in C, C++, Fortran or Pascal
on the x86/Linux platform, you almost certainly should be
using Memcheck. If you are not, all the bugs Memcheck would find will have to
be found manually, wasting your development time, or not be found at all,
compromising your code quality.
Abstract
1 Introduction
Memcheck is a dynamic analysis tool, and so checks programs for errors
as they run. Memcheck performs four kinds of memory error checking.
1.1 Basic operation and features
1.2 Using Memcheck
valgrind --tool=memcheck badprog
The --tool=
option specifies which tool in the
Valgrind suite is used. The program runs under Memcheck's control,
typically 20-30 times slower than usual. This slow-down is partly
due to Memcheck's definedness checking, partly due to its other
checking, and partly due to Valgrind's inherent overhead.
The program's
output is augmented by Memcheck's output, which goes by default to standard
error, although it can be redirected to a file, file descriptor, or socket with
a command line option.
On one system we tried this test on,
exactly that happened, and so Memcheck issued an additional
``Invalid read of size 4'' warning immediately before the memory
access, thanks to its addressability checking. For the run presented,
the memory access (un)luckily hit addressable memory, so no
addressability error message was issued.
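Figures 1 and 2 are not reproduced in this extraction. The following deliberately buggy program (our own contrived sketch, not the original badprog.c) contains the three kinds of error described above: undefined data passed to a system call, a conditional jump on an undefined value, and an undefined value used as an address.

#include <unistd.h>

int main(void)
{
   char buf[16];   /* never initialised: lives on the stack of thread 1 */
   int  i;         /* never initialised */
   int* p;         /* never initialised */

   write(1, buf, sizeof buf);   /* 1: undefined bytes passed to write() */

   if (i > 0)                   /* 2: conditional jump on undefined value */
      i = 0;

   return *p;                   /* 3: undefined value used as a pointer */
}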
1.3 Bit-level precision
1.4 Contributions
Our main contribution is a detailed description of Memcheck's definedness
checking, which has only been briefly touched upon in previous publications
about Valgrind [10,9]. Memcheck's definedness
checking improves on that performed by previous tools by being accurate to the
level of individual bits. Its false
positive and false negative rates are very low. Finally, the run-time overhead
of the definedness checking is reasonable enough that it is practical to use on
very large programs.
1.5 Paper structure
2 How Memcheck works
2.1 Valgrind
Memcheck is implemented as a plug-in to
Valgrind [10,9]. Valgrind is a
framework for creating tools that use dynamic binary instrumentation;
it does all the hard work of inserting instrumentation into
machine code at run-time. Tools are created as
plug-ins, written in C, to Valgrind's
core.
The basic view is:
Valgrind core + tool plug-in = Valgrind tool.
The resulting tool loads the client at start-up, grafting itself onto
the process as it does so. It then starts executing the client, by
(re)compiling the client's code, one basic block at a time, in a
just-in-time, execution-driven fashion. The compilation process
involves disassembling the machine code into an intermediate
representation called UCode. UCode is instrumented by the tool
plug-in, and the instrumented UCode is then converted back into x86
code. The resulting code is stored in Valgrind's code cache, to be
rerun as necessary.
2.2 Overview
The basic idea underlying the definedness checking is straightforward.
2.3 Details of V bits
A V bit of zero indicates that
the corresponding data bit has a properly defined value, and a V bit of one
indicates that it does not. This is counterintuitive, but makes some of the
shadow operations more efficient than they would be if the bit values were
inverted.
2.4 Instrumentation basics
2.5 Abstract operations on V bits
ImproveAND(0, undefined) = undefined
ImproveAND(0, defined) = defined
ImproveAND(1, undefined) = undefined
ImproveAND(1, defined) = undefined
The second case is the interesting one. If one of the arguments
is a defined zero bit, we don't care that the other argument
might be undefined, since the result will be zero anyway. Hence
ImproveAND creates a ``defined'' V-bit, which, as
described in Section 2.6,
is merged into the final result for AND using
DifD. This has the effect of forcing that bit of the
result to be regarded as defined. ``undefined'' is the identity
value for DifD, so the other three cases have no effect on the
final outcome.
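The full definitions of the abstract V-bit operations are not reproduced in this extraction. The following C sketch is our own reconstruction, consistent with the ImproveAND/ImproveOR tables and the identities mentioned in the text (V bit 1 = undefined, 0 = defined); it works on 32-bit words and ignores the width changes performed by the real PCastXY, so it should be read as illustrative rather than as Memcheck's actual code.

#include <stdint.h>

/* V-bit convention (Section 2.3): 0 = defined, 1 = undefined. */

/* UifU: result bit is Undefined if either argument bit is Undefined. */
static uint32_t UifU(uint32_t v1, uint32_t v2) { return v1 | v2; }

/* DifD: result bit is Defined if either argument bit is Defined
   ("undefined" is its identity value, as noted above). */
static uint32_t DifD(uint32_t v1, uint32_t v2) { return v1 & v2; }

/* ImproveAND(d, v): a defined zero data bit forces the corresponding
   result bit of And to be treated as defined. */
static uint32_t ImproveAND(uint32_t d, uint32_t v) { return d | v; }

/* ImproveOR(d, v): a defined one data bit forces the corresponding
   result bit of Or to be treated as defined. */
static uint32_t ImproveOR(uint32_t d, uint32_t v) { return ~d | v; }

/* PCast: pessimistic cast -- if any operand bit is undefined, every
   result bit is undefined (the paper notes this needs at most three
   x86 instructions, e.g. a negate then a subtract-with-borrow). */
static uint32_t PCast(uint32_t v) { return v == 0 ? 0 : ~0u; }

/* Left: smear undefinedness upwards from the lowest undefined bit,
   modelling worst-case carry/borrow propagation in Add and Sub. */
static uint32_t Left(uint32_t v) { return v | (0u - v); }

/* Example composition, following Section 2.6's rule for And:
   v3 = DifD(UifU(v1,v2), DifD(ImproveAND(d1,v1), ImproveAND(d2,v2))) */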
2.6 The instrumentation scheme proper
2.6.0.1 Register and memory initialisation
At startup, all registers have their V bits set to one,
i.e. undefined. The exception is the stack pointer, which has its
V bits set to zero, i.e. defined.
2.6.0.2 Memory allocation and deallocation
The Valgrind framework intercepts function and system calls which
cause usable address ranges to appear/disappear. Memcheck is notified
of such events and marks shadow memory appropriately. For
example, malloc and mmap bring new addresses into play:
mmap makes memory addressable and defined, whilst malloc makes
memory addressable but undefined. Similarly, whenever
the stack grows, the newly exposed area is marked as
addressable but undefined.
Whenever memory is deallocated, the deallocated area also has its values
all marked as undefined.
2.6.0.3 Literals
All literals are, not surprisingly, considered
to be completely defined.
2.6.0.4 Data copying operations
These are straightforward.
Register-to-register moves give rise to a move between the corresponding
shadow registers. Register-to-memory and memory-to-register transfers
are instrumented with a move between the corresponding shadow register
and shadow memory.
2.6.0.5 Addition and subtraction
Given d3 = Add(d1,d2) or d3 = Sub(d1,d2),
each result bit can simplistically be considered
defined if both the corresponding argument bits are defined. However,
a result bit could also be undefined due to an undefined carry/borrow
propagating upwards from less significant bit positions. Therefore
Memcheck needs to generate v3 = Left(UifU(v1,v2)).
2.6.0.6 Add with carry and subtract with borrow
These take the CPU's carry flag as an additional single-bit operand.
Let vfl be the virtual 1-bit-wide register tracking the definedness of
the condition codes (%eflags). If this extra operand is
undefined, the entire result is undefined, so the following
formulation derives straightforwardly from the Add/Sub
case:
v3 = UifU( Left(UifU(v1,v2)),
PCast0X(vfl) )
where X is the width of v1, v2 and v3.
2.6.0.7 Xor
A simple case: given d3 = Xor(d1,d2), generate
v3 = UifU(v1,v2).
2.6.0.8 And and Or
These require inspection of the actual operand values as well as their
shadow bits. We start off with a bitwise UifU of the operands,
but fold in, using DifD, ``improvements'' contributed by
defined zero-arguments (for And) or defined one-arguments (for
Or). So, given:
d3 = And(d1,d2)
d3 = Or(d1,d2)
the resulting instrumentation assignments are, respectively:
v3 = DifD( UifU(v1,v2),
DifD( ImproveAND(d1,v1),
ImproveAND(d2,v2) ) )
v3 = DifD( UifU(v1,v2),
DifD( ImproveOR(d1,v1),
ImproveOR(d2,v2) ) )
This means instrumentation of And/Or is quite expensive. However,
such instructions are often used with one constant operand, in which
case Memcheck's post-instrumentation cleanup pass can fold these expressions
down to a single ImproveAND/OR term.
2.6.0.9 Shl, Shr, Sar, Rol, Ror
(Shift left, Unsigned shift right, Signed shift right, Rotate left,
Rotate right). In all cases, if the shift/rotate amount is undefined,
the entire result is undefined. Otherwise, for reasons which are
somewhat subtle, the result V bits are obtained
by applying the same shift/rotate operation to the V bits of the
value to be shifted/rotated.
v3 = UifU( PCastYX(v2), OP(v1,d2) )
2.6.0.10 Widening and narrowing conversions
A narrowing conversion
on data throws away some of the top bits of the word, and so the same
operation can be used to throw away the top bits of the shadow word.
2.6.0.11 Instructions which set flags
On x86, most integer arithmetic instructions set the condition codes
(%eflags) and Memcheck duly tracks the definedness state of %eflags
using a single shadow bit. When an integer operation sets condition
codes, it is first instrumented as described above. Memcheck
pessimistically narrows the result value(s) of the shadow operation using
PCastX0 to derive a value for the %eflags shadow
bit.
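For example, if an Add produces result d3 with shadow v3 and also sets the condition codes, the flags shadow would be updated as

vfl = PCastX0(v3)

so that %eflags is treated as wholly undefined whenever any bit of v3 is undefined.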
2.6.0.12 Loads, stores, conditional branches and conditional moves
These are discussed in the next section.
2.6.0.13 Floating point (FP) and MMX, SSE, SSE2 (SIMD) operations
Valgrind does not disassemble floating point or SIMD instructions to
the same level of detail as it does integer instructions. Instead, it
merely modifies some of the register fields in the instruction, marks
any instructions referencing memory as such, and copies them otherwise
unchanged into the output instruction stream.
2.6.0.14 Approximating everything else
The above cases give sufficient accuracy to achieve a
near-zero false positive rate on almost all compiler-generated and
handwritten code. There are a multitude of other cases which could be tracked accurately, but for which there appears to be no
point. These include: division, rotates through the carry flag, and
calls to helper functions which implement obscure features (CPUID,
RDTSC, BCD arithmetic, etc).
2.7 Deciding when to issue error messages
int* p; /* not defined */
... = *p;
*p = ...
For the load, p's shadow is tested, and an error message is issued if
necessary. Subsequently in the store, reporting another such error
for p would not help lead users to the root cause of the
problem.
2.7.0.1 Loads from invalid addresses
One interesting question is how to handle loads from memory which
Memcheck regards as not validly addressable. Our solution is
counterintuitive: data loaded from unaddressable memory is marked as
defined.
2.8 Avoiding false positives
2.9 False negatives
2.10 Setting realistic expectations
For a tool such as Memcheck to be worth using, a user must first be using a
language such as C, C++ or Fortran that is susceptible to the kinds of memory
errors Memcheck finds. Then, the benefits of use must outweigh the costs.
This section considers first the costs of using Memcheck, in terms of its ease
of use and performance. It then considers the benefits, that is, its
effectiveness in helping programmers find real bugs.
3 Evaluation
3.1 Ease of use
Ease of use has a number of facets: how easy Memcheck is to obtain,
how easy it is to run, and how easy it is to act upon its results.
3.1.0.1 Obtaining Memcheck
Memcheck is very easy to obtain, because the Valgrind suite is free
software, is available in source form on the Valgrind website
[14], and is widely available in binary form on the
web, packaged for a number of Linux distributions.
3.1.0.2 Running Memcheck
Memcheck could hardly be easier to run. Using it only requires prefixing a
program's command line with valgrind --tool=memcheck, plus any other
desired Valgrind or Memcheck options.
3.1.0.3 Custom allocators
There are some cases where the user needs to expend a little more
effort. If a program uses custom memory management rather than
malloc(), new and new[], Memcheck can miss some errors it would
otherwise find. The problem can be avoided by embedding a small number
of client requests into the code. These are special assembly
code sequences, encoded as C macros for easy use, that Valgrind
recognises, but which perform a cheap no-op if the client is not
running on Valgrind. They provide a way for the client to pass
information to a Valgrind tool. For example, when using a custom
allocator, client requests can be used to inform Memcheck when a heap
block has been allocated or deallocated, its size and location, etc.
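For illustration, a custom pool allocator might notify Memcheck as follows. This is a sketch: pool_grab() and pool_release() are hypothetical, and the VALGRIND_MALLOCLIKE_BLOCK/VALGRIND_FREELIKE_BLOCK requests (from valgrind.h) are the usual way to describe heap-like blocks, though their exact arguments should be checked against the header.

#include <stddef.h>
#include <valgrind/valgrind.h>

extern void* pool_grab(size_t n);      /* hypothetical pool allocator   */
extern void  pool_release(void* p);    /* hypothetical pool deallocator */

void* my_alloc(size_t n)
{
   void* p = pool_grab(n);
   /* Describe the block to Memcheck: n usable bytes, no redzone,
      contents not zeroed and therefore undefined. */
   VALGRIND_MALLOCLIKE_BLOCK(p, n, /*rzB=*/0, /*is_zeroed=*/0);
   return p;
}

void my_free(void* p)
{
   VALGRIND_FREELIKE_BLOCK(p, /*rzB=*/0);
   pool_release(p);
}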
3.1.0.4 Self-modifying code
Extra effort is also needed to handle self-modifying code. Dynamically
generated code is not a problem, but if code that has executed is
modified and re-executed, Valgrind will not realise this, and will
re-run its out-of-date translations. Auto-detecting this is
possible but expensive [7]; the cost is
not justified by the small number of programs that use self-modifying
code. Our compromise solution is to provide another client request
which tells Valgrind to discard any cached translations of code
in a specified address range.
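A sketch of how a program that patches already-executed code might use this request (assuming the VALGRIND_DISCARD_TRANSLATIONS(addr, len) spelling from valgrind.h):

#include <stddef.h>
#include <string.h>
#include <valgrind/valgrind.h>

static unsigned char code_buf[4096];   /* hypothetical JIT code buffer */

void patch_code(const unsigned char* new_code, size_t len)
{
   memcpy(code_buf, new_code, len);
   /* Valgrind will not notice the overwrite on its own; make it drop
      any cached translations of this address range. */
   VALGRIND_DISCARD_TRANSLATIONS(code_buf, sizeof code_buf);
}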
3.1.0.5 Different behaviour under Memcheck
Client programs sometimes do not behave exactly the same under
Valgrind/Memcheck as they do normally. Programs that run successfully
normally may fail under Memcheck. This is usually because of latent
bugs that are exposed e.g. by different execution timings, or because
Valgrind provides its own implementation of the Pthreads library.
This can be regarded as a good thing. More rarely, programs that fail
normally may succeed under Memcheck for the same reasons, which can be
frustrating for users who hoped to track down a bug with Memcheck.
3.1.0.6 Acting upon Memcheck's results
In general, Memcheck's
addressability checking, deallocation checking, and overlap checking do quite
well here, in that Memcheck's report of the problem is usually close to
the root cause. However, for definedness checking this is often not the case,
as Section 2.7 explained.
3.2 Performance
This section discusses the performance of three Valgrind tools. Besides
Memcheck, it considers two other tools: Addrcheck, and Nulgrind. Addrcheck is
a cut-down version of Memcheck that does not perform definedness checking. The
performance difference between Memcheck and Addrcheck give an indication of the
cost of Memcheck's definedness checking. Nulgrind is the ``null'' Valgrind tool
that adds no instrumentation. It gives an idea of the overhead due to
Valgrind's basic operation.
3.3 Evidence of usefulness
Jens-Heiner Rechtien used Memcheck with OpenOffice, and
systematically recorded all the error messages that Memcheck issued
[13]. He ran OpenOffice's basic ``smoke
test'' (with Java disabled), which only exercises a fraction of OpenOffice's
code. Memcheck detected 102 problems in 26 source files (plus 6 more in system
libraries such as GNU libc).
Table 2 (copied from [13])
gives a breakdown: column 1 numbers the problem category, column 2 describes
the problem, column 3 gives the number of occurrences, and column 4 gives the
number of distinct source files in which the error occurred.
3.3.0.1 OpenOffice case study
Ref. no.  Error type                                                          #problems  #files
 1        not initialized instance data member                                ca. 76     10
 2        not initialized local variables                                      7          5
 3        not initialized variable used as in/out parameter in method call    11          3
 4        overlapping buffers in strncpy()                                      1          1
 5        off by one error                                                      1          1
 6        unchecked return value of system call                                 1          1
 7        partly initialized struct used in inappropriate ways                  1          1
 8        no check for potentially invalidated index                            1          1
 9        use of buffer size instead of content size for writing out data       1          1
10        write not initialized buffer into stream                              1          1
11        feed unterminated buffer into method expecting a C-style string       1          1
3.3.0.2 Popularity
Another indication of the usefulness of Memcheck's definedness checking is its
general popularity. We have received feedback from at least 300 users
in more than 30 countries, and so can conservatively estimate that its user
base is in the thousands.
4 Related work
We have described a tool named Memcheck, built with the dynamic binary
instrumentation framework Valgrind, which can detect several kinds of memory
errors that are common in imperative programs. In particular, we described the
novel definedness checking that Memcheck performs to detect undefined
value errors with bit-precision. We also considered the costs and benefits of
using Memcheck, and concluded that it definitely is worth using, as shown by
the 102 bugs found in OpenOffice by one user, and more generally by the
thousands of programmers who use it on a wide range of software.
5 Conclusion
6 Acknowledgments
Thanks to Donna Robinson for encouragement, and Jens-Heiner Rechtien for
his meticulous record-keeping. The second author gratefully acknowledges the
financial support of Trinity College, University of Cambridge, UK.
Bibliography
Static and dynamic analysis: synergy and duality.
In Proceedings of WODA 2003, pages 6-9, Portland, Oregon, May
2003.
Valgui, a GPL front-end for Valgrind.
https://valgui.sf.net/.
Purify: Fast detection of memory leaks and access errors.
In Proceedings of the Winter USENIX Conference, pages 125-136,
San Francisco, California, USA, January 1992.
Detection of data flow anomaly through program instrumentation.
IEEE Transactions on Software Engineering, 5(3):226-236, May
1979.
Run-time detection of undefined variables considered essential.
Software--Practice and Experience, 20(4):391-402, April 1990.
Re: Valgrind for PowerPC.
Message to the valgrind-developers mailing list, March 2004.
Instrumenting self-modifying code.
In Proceedings of AADEBUG2003, Ghent, Belgium, September 2003.
Autovectorisation in GCC.
In Proceedings of the 2004 GCC Developers' Summit, Ottawa,
Canada, June 2004.
Dynamic Binary Analysis and Instrumentation.
PhD thesis, Computer Laboratory, University of Cambridge, United
Kingdom, November 2004.
Valgrind: A program supervision framework.
In Proceedings of RV'03, Boulder, Colorado, USA, July 2003.
https://www.openoffice.org/issues/show_bug.cgi?id=20184.
Automatic C/C++ application testing with Parasoft Insure++.
White paper.
Validating and debugging openoffice.org with valgrind, 2003.
https://tools.openoffice.org/debugging/usingvalgrind.sxw.
Valgrind.
https://www.valgrind.org/.
ATOM: A system for building customized program analysis tools.
In Proceedings of PLDI '94, pages 196-205, Orlando, Florida,
USA, June 1994.
SPEC CPU2000 benchmarks.
https://www.spec.org/.
Alleyoop.
https://alleyoop.sf.net/.
KCachegrind.
https://kcachegrind.sf.net/.