Friday, August 4, 2:00 p.m.–3:30 p.m., British Room
Session Chair: Doug Szajda, University of Richmond
Accepted WiPs and Abstracts
Exploiting MMS Vulnerabilities to Stealthily Exhaust Mobile
Phone's Battery
Radmilo Racic, Denys Ma, and Hao Chen, University of California, Davis
As cellular data services and applications are being widely
deployed, they become attractive targets for attackers, who could
exploit unique vulnerabilities in cellular networks, mobile devices,
and the interaction between cellular data networks and the
Internet. Furthermore, as mobile phones become more powerful with more
bandwidth, cellular end hosts will become the next target for attacks
that are widely deployed on the Internet.
We demonstrate an attack that surreptitiously drains mobile devices'
battery power up to 22 times faster than normal and could therefore render these
devices useless before the end of business hours. This attack targets a
unique resource bottleneck in mobile devices (the battery power) by
exploiting an insecure cellular data service, Multimedia Messaging Service
(MMS), and the insecure interaction between cellular data networks and the
Internet, namely Packet Data Protocol (PDP) context retention and the paging
channel. The attack proceeds in two stages. In the first stage, the
attacker compiles a hit list of mobile devices (including their cellular
numbers, IP addresses, and model information) by exploiting MMS
notification messages. In the second stage, the attacker drains mobile
devices' battery power by sending periodic UDP packets and exploiting PDP
context retention and the paging channel. When a packet is sent to a phone,
the network will deliver the packet if the phone's location is known, or
attempt to locate the phone by sending a page request to it. However, since
cellular phones spend most of their time in the dormant, battery-saving
mode, the page on the paging channel will awaken the phone to the ready
state and force it to perform a location update. The essence of this
attack is to keep the phone in this ready, high-consumption state,
preventing it from conserving battery, or to let the phone drop briefly
into the battery-saving state only to be immediately awakened by a page
and forced to perform another location update; both paths consume
substantial energy. This attack is unique in that the victims are
unaware when their batteries are being drained, and that the attack
exploits vulnerable cellular services to target mobile devices. We will
identify two key vulnerable components in cellular networks and propose
mitigation strategies for protecting cellular devices from such attacks
from the Internet.
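As a rough illustration of the second stage, the sketch below sends a small UDP
datagram to each hit-list entry at a fixed interval; the addresses, port, and
interval are placeholders, and the real attack would tune the interval to the
network's dormancy timer.

    # Illustrative sketch only: periodic low-volume UDP traffic keeps each
    # target phone cycling between the dormant and ready states.
    import socket
    import time

    HIT_LIST = ["10.0.0.1", "10.0.0.2"]   # phone IPs from stage one (hypothetical)
    PORT = 9                              # arbitrary UDP port; any packet triggers a page
    INTERVAL = 5.0                        # seconds; chosen to defeat the dormancy timer

    def drain(targets, rounds=10):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for _ in range(rounds):
            for ip in targets:
                # One small datagram forces a page and a location update
                # whenever the phone has dropped back to the dormant state.
                sock.sendto(b"x", (ip, PORT))
            time.sleep(INTERVAL)

    if __name__ == "__main__":
        drain(HIT_LIST)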
Accepted at the IEEE/CreateNet International Conference on Security and
Privacy in Communication Networks (SECURECOMM '06)
Applying Machine-model Based Countermeasure Design to Improve
Protection Against Code Injection Attacks
Yves Younan, Frank Piessens, and Wouter Joosen, Katholieke Universiteit Leuven, Belgium
Many countermeasures for code injection attacks are built in an ad hoc
manner: designers build a countermeasure, attackers find ways around
it, and a better countermeasure follows. We propose a more methodical
approach to building countermeasures, based on a model of
the execution environment of the program. This model contains all
abstractions and memory locations that the OS relies upon to execute a
program (e.g., the stack and the GOT), together with the operations
performed on them. Such a model is called a machine model and allows
a countermeasure designer to design countermeasures at a more abstract
level. It also provides a platform for comparing and evaluating
countermeasures.
Such a machine model is strongly tied to the architecture, operating
system, programming language, and compiler it is based on, which limits
the applicability of any specific machine model. To counter this we are
also designing a metamodel and
devising a methodology for constructing machine models based on this
metamodel, reducing the initial cost of building a machine model. The
metamodel is an abstraction of several machine models: it provides
uniformity when constructing machine models and allows a designer to
work out the global principles of a countermeasure independent of a
specific platform. By keeping machine models uniform, the task of
implementing or porting countermeasures from one platform to another
is simplified.
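To make the idea concrete, here is a minimal sketch (not the authors' model) of
what machine-model entries and a model-level check might look like; the region
names and operations are illustrative.

    # Hypothetical machine-model fragment: memory abstractions the OS and
    # compiler rely on, with the operations permitted on each of them.
    from dataclasses import dataclass

    @dataclass
    class MemoryRegion:
        name: str                 # e.g. "stack", "GOT", "text"
        writable_at_runtime: bool
        operations: list          # abstract operations defined by the model

    MACHINE_MODEL = [
        MemoryRegion("stack", True, ["push_frame", "pop_frame", "spill_local"]),
        MemoryRegion("GOT", True, ["resolve_symbol", "read_entry"]),
        MemoryRegion("text", False, ["execute"]),
    ]

    def violates_model(region_name, operation):
        """A countermeasure can be phrased as checks against the model rather
        than against one concrete platform."""
        for region in MACHINE_MODEL:
            if region.name == region_name:
                return operation not in region.operations
        return True

    print(violates_model("GOT", "execute"))   # True: the model forbids it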
Building a Trusted Network Connect Evaluation Test Bed
Jesus Molina, Fujitsu Laboratories of America
One important set of TCG (Trusted Computing Group) standards is about
TNC (Trusted Network Connect), a protocol for end-point
authentication. Multiple components and interfaces are involved in the
TNC architecture, and for each component, different companies are
making products. Therefore, it is important to show that when
components are interacting, the TNC system is still secure. To achieve
this goal, we are building a TNC test bed. Within this test bed, we
will implement the entire TNC architecture, try each module with
multiple products (including open source components when appropriate),
and investigate the complete workflow. Then, we will try different
ways to attack the system, such as inserting rogue interfaces and/or
injecting false data. Our main goals are (1) to gain a better
understanding of the TNC standards, (2) to locate potential security
holes and threat models, (3) to assess the repercussions of TNC for the
trusted platform strategy, and (4) to compare TNC with other emerging
standards (e.g., Cisco's NAC).
The SAAM Project at UBC
Konstantin Beznosov, Jason Crampton, and Wing Leung, University of British Columbia
We introduce the concept, model, and policy-specific algorithms for
inferring new access control decisions from previous ones. Our
secondary and approximate authorization model (SAAM) defines the
notions of primary vs. secondary and precise vs. approximate
authorizations. Approximate authorization responses are inferred from
cached primary responses, and therefore provide an alternative source
of access control decisions in the event that the authorization server
is unavailable or slow. The ability to compute approximate
authorizations improves the reliability and performance of access
control sub-systems and ultimately the application systems themselves.
The operation of a system that employs SAAM depends on the type of
access control policy it implements. We have proposed algorithms for
computing secondary authorizations under policies based on the
Bell-LaPadula model and analyzed their performance. Preliminary
evaluation of the SAAMblp algorithms demonstrates a 30% increase
in the number of authorization requests that can be served without
consulting access control policies.
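The inference idea can be illustrated with a small sketch (not the SAAMblp
algorithms themselves), assuming totally ordered security levels encoded as
integers.

    # Infer approximate read decisions from cached primary responses under a
    # Bell-LaPadula-style "no read up" policy. Levels are plain integers here.
    primary_cache = []   # entries: (subject, object_level, allowed)

    def record_primary(subject, object_level, allowed):
        primary_cache.append((subject, object_level, allowed))

    def approximate_read(subject, object_level):
        """Return True/False if a decision can be inferred from the cache,
        or None if the authorization server must be consulted."""
        for subj, lvl, allowed in primary_cache:
            if subj != subject:
                continue
            if allowed and object_level <= lvl:
                return True    # clearance dominates lvl, so lower levels are readable
            if not allowed and object_level >= lvl:
                return False   # clearance is below lvl, so higher levels are denied
        return None

    record_primary("alice", 3, True)
    print(approximate_read("alice", 2))   # True, served from the cache
    print(approximate_read("alice", 5))   # None, ask the server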
ID-SAVE: Incrementally Deployable Source Address Validity Enforcement
Toby Ehrenkranz, University of Oregon
Routers in the Internet today know which direction a packet should be
sent towards, but not which direction a packet should have come from.
This is the root cause of Internet problems such as the prevalence of
IP spoofing in network attacks and the unreliability of
source-address-based mechanisms such as RPF. Previous work has either not attacked this root
cause, or has had unrealistic deployment assumptions.
Our current work, ID-SAVE (Incrementally Deployable Source Address
Validity Enforcement), utilizes ideas similar to those presented in [1].
ID-SAVE attacks the root cause by building up "incoming tables" for
routers, much like the forwarding tables currently in use by routers. It
uses a variety of novel mechanisms to be incrementally deployable such
as packet marking, neighbor discovery, on-demand updates, blacklists,
and packet-driven pushback.
[1] J. Li, J. Mirkovic, M. Wang, P. L. Reiher, and L. Zhang, "SAVE:
Source Address Validity Enforcement Protocol," in Proceedings of IEEE
INFOCOM, New York, NY, 2002, pp. 1557–1566.
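A minimal sketch of the incoming-table idea follows (not the ID-SAVE
implementation); the prefixes and interface names are hypothetical, and the
fall-through behavior reflects incremental deployability.

    # An "incoming table" records which interface traffic from a source prefix
    # should arrive on, mirroring how a forwarding table records where to send it.
    import ipaddress

    incoming_table = {
        ipaddress.ip_network("192.0.2.0/24"): "eth0",
        ipaddress.ip_network("198.51.100.0/24"): "eth1",
    }

    def validate_source(src_ip, arrival_iface):
        addr = ipaddress.ip_address(src_ip)
        for prefix, iface in incoming_table.items():
            if addr in prefix:
                return iface == arrival_iface   # mismatch suggests spoofing
        return True   # no entry yet: do not drop, so partial deployment is safe

    print(validate_source("192.0.2.7", "eth1"))   # False: likely spoofed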
Automatic Repair Validation
Michael E. Locasto, with Matthew Burnside and Angelos D. Keromytis, Columbia University
There is an active movement in the security research community
focusing on automated intrusion prevention and self-healing software.
However, a major hurdle prevents the widespread deployment of these
types of systems: system administrators lack confidence in the quality
of the generated fixes. Thus, a key requirement for these systems is
that the efficacy of each fix must be tested and validated after it
has been automatically developed, but before it is deployed. Under
the response rates required by these systems, we believe such
verification must proceed automatically. We call this process
Automatic Repair Validation (ARV). To illustrate the difficulties
faced by ARV, we propose Bloodhound, a system that tracks and stores
malicious network flows for later replay during validation for
self-healing software. Our goal is to motivate additional research in
this direction by describing the problem and the challenges in
addressing it, and to explore part of the solution space.
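The replay idea can be sketched as follows (an illustration of ARV, not the
Bloodhound implementation); the patched service and the notion of failure are
stand-ins.

    # Store the flows that triggered a self-healing response, then replay them
    # against the candidate fix before it is deployed.
    stored_flows = []

    def record_flow(payload: bytes):
        stored_flows.append(payload)

    def validate_repair(patched_service):
        """Validation succeeds if every recorded attack flow is handled without
        the original failure recurring (modeled as an uncaught exception)."""
        for payload in stored_flows:
            try:
                patched_service(payload)
            except Exception:
                return False
        return True

    def patched(payload):
        # Stand-in for the automatically generated fix: overlong input is
        # truncated instead of overflowing a buffer.
        return len(payload[:4096])

    record_flow(b"A" * 5000)               # the input that caused the crash
    print(validate_repair(patched))        # True: the recorded attack is neutralized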
Secure Software Updates: Not Really
Kevin Fu, University of Massachusetts Amherst
A client can use a content distribution network to securely download
software updates. These updates help to patch everyday bugs, plug
security vulnerabilities, and secure critical infrastructure. Yet
many deployed software update mechanisms are insecure, and emerging technologies
pose further hurdles for deployment. Our analysis of several popular
software update mechanisms shows that deployed systems often rely on
trusted networks to distribute critical software updates, despite
the research progress in secure content distribution. We demonstrate
how many deployed systems are susceptible to weak man-in-the-middle
attacks.
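One mitigation the analysis points toward is authenticating the update itself
rather than trusting the network path. The sketch below checks a downloaded
file against a digest obtained from a separately trusted source; the URL and
digest handling are placeholders, not any vendor's actual mechanism.

    import hashlib
    import urllib.request

    def fetch_and_verify(update_url, expected_sha256):
        """Download an update and refuse to install it unless its SHA-256
        digest matches one obtained out of band (e.g., from a signed manifest)."""
        data = urllib.request.urlopen(update_url).read()
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            raise RuntimeError("update failed integrity check; refusing to install")
        return data

    # Usage (illustrative): the digest comes from a trusted channel, not from
    # the same possibly man-in-the-middled connection that serves the update.
    # payload = fetch_and_verify("http://updates.example.com/pkg.bin", "ab12...")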
Integrated Phishing Defenses
Jeff Shirley, University of Virginia
Increasingly sophisticated phishing attacks are growing in
frequency, and existing defenses have so far failed to stem the
problem. Most previous phishing defenses detect and disrupt phishing
attacks either by examining incoming email and marking suspicious
messages or by detecting and blocking them with web browser
extensions. Our system integrates analysis of the
incoming portion of the attacks with blocking of the outgoing portion
by combining an email analyzer with an HTTP proxy. The email analyzer
parses email messages to extract linked URLs, including those embedded
in redirects or scripts, and classifies them to determine whether the
websites they point to are likely to be phishing sites. Visits to URLs
classified as phishing are intercepted by the proxy and redirected to
a warning page. Our system uses a combination of previously published
heuristics (for link obfuscation and email text contents) and a URL
popularity metric obtained from search engine APIs (such as those
provided by Google and MSN). We have tested our system's effectiveness
on a corpus of over 200,000 email messages and found that we can block
94% of the phishing URLs present, with a false positive rate of
approximately 3% of all URLs. Since our system
is capable of examining both web and email portions of the phishing
attacks, it provides a framework that will be able to incorporate
defenses against a wide variety of threats.
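The email-analysis half can be illustrated with a short sketch; the heuristics
shown (raw IP hosts, userinfo tricks, anchor-text mismatch) are representative
of the published ones, while the search-engine popularity lookup and the HTTP
proxy are omitted.

    import re

    URL_RE = re.compile(r'href=["\'](http[^"\']+)["\']', re.IGNORECASE)

    def extract_urls(html_body):
        return URL_RE.findall(html_body)

    def looks_like_phish(url, anchor_text=""):
        score = 0
        if re.match(r"https?://\d{1,3}(\.\d{1,3}){3}", url):
            score += 1                       # raw IP address instead of a hostname
        if "@" in url.split("//", 1)[-1]:
            score += 1                       # userinfo trick hides the real host
        if anchor_text.startswith("http") and anchor_text not in url:
            score += 1                       # visible link text disagrees with target
        return score >= 1

    msg = '<a href="http://203.0.113.9/login">http://bank.example.com</a>'
    for url in extract_urls(msg):
        print(url, looks_like_phish(url, "http://bank.example.com"))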
The Utility vs. Strength Tradeoff: Anonymization for Log Sharing
Kiran Lakkaraju, National Center for Supercomputing Applications,
University of Illinois, Urbana-Champaign
Many organizations, and in particular the network security teams
within those organizations, have come to the conclusion that sharing
their network logs is essential in the detection and prevention of
intrusions. Log sharing is also useful in network research and
education. The main impediment to sharing network logs is the
potential loss of sensitive information that can be used by malicious
entities to break into the organization's systems. At NCSA we are
developing a log sharing infrastructure that utilizes anonymization to
remove sensitive information from logs. Anonymization is the process
of modifying the data in the log so that sensitive information is not
shared, but the log can still be useful to other users.
The main problem in applying anonymization to logs is deciding on how
much information to remove from the logs. This has a direct impact on
the ability of an attacker to use the shared logs to attack the
organization's systems. We dub this the Utility vs. Strength tradeoff:
Utility refers to the usefulness of the log, and Strength refers to
the difficulty an attacker will have in "deanonymizing" the log. Under
the auspices of an NSF Cybertrust grant, we are studying this tradeoff
for network security logs in order to create a log sharing system that
will allow our security engineers to quickly share logs with a
multitude of clients. In this talk I will speak on the utility
vs. strength tradeoff. In addition, I will mention FLAIM, a tool
developed at NCSA that allows multi-level anonymization and is easily
extensible to many logs.
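As one point on the utility vs. strength spectrum, the sketch below applies
consistent keyed pseudonymization to IP addresses in a log line. This is only
an illustration of the tradeoff, not FLAIM itself: the tokens still support
cross-log correlation (utility) but resist direct re-identification (strength),
and stronger options such as removing addresses entirely would give up that
utility.

    import hashlib
    import hmac
    import re

    KEY = b"site-secret"                       # held by the releasing organization
    IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

    def pseudonymize_ip(ip):
        tag = hmac.new(KEY, ip.encode(), hashlib.sha256).hexdigest()[:8]
        return "ip-" + tag                     # same address always maps to the same token

    def anonymize_line(line):
        return IP_RE.sub(lambda m: pseudonymize_ip(m.group(0)), line)

    print(anonymize_line("Jul 12 drop 192.0.2.44 -> 198.51.100.9 port 22"))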
Malware Prevalence in the KaZaA File-Sharing Network
Jaeyeon Jung, MIT CSAIL
In recent years, more than 200 viruses have been
reported to use a peer-to-peer (P2P) file-sharing network as
a propagation vector. Disguised as files that are frequently
exchanged over P2P networks, these malicious
programs infect the user's host if downloaded and opened,
leaving their copies in the user's sharing folder for further
propagation. Using a crawling-based malware detector built for
the KaZaA file-sharing network, we study the prevalence of
malware in this popular P2P network, the malware's propagation
behavior in the P2P network environment and the
characteristics of infected hosts.
With 364 malware signatures constructed by our detector,
we found that over 15% of the crawled files were infected
by 52 different viruses. Many of the malicious programs that we
find active in the KaZaA P2P network open a backdoor through
which an attacker can remotely control the compromised machine,
send spam, or steal a user's confidential information.
The assertion that these hosts were used to send spam was
supported by the fact that over 70% of infected
hosts were listed on DNS-based spam blacklists.
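The DNSBL cross-check works the same way for any blacklist zone: a host is
listed if the reversed address resolves under the zone. A minimal sketch, with
an example zone and the conventional test address:

    import socket

    def listed_on_dnsbl(ip, zone="zen.spamhaus.org"):
        reversed_ip = ".".join(reversed(ip.split(".")))
        try:
            socket.gethostbyname(f"{reversed_ip}.{zone}")   # any A record means "listed"
            return True
        except socket.gaierror:
            return False

    print(listed_on_dnsbl("127.0.0.2"))   # DNSBL test address, normally listed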
Election Audits
Arel Cordero, University of California, Berkeley
Election audits have the ability to establish an objective and
quantifiable measure of confidence in an election. Unfortunately, in
the U.S., there are no clear best practices (nor rigorous
requirements) regarding audits. For example, of the states requiring
audits, none specify how random selection must be done. This and other
ambiguities are generally interpreted (to various degrees of success)
at the local, county level, leading to practices that in many cases
defeat the effectiveness of the audit. For instance, the requirement
for an audit to be fully transparent, allowing public oversight, is
jeopardized by the use of software to perform the random selection. To
address this, we have proposed using well-known, physical methods of
random selection, such as dice or lotteries, to do the selection. In
our analysis, though, each solution has its own (sometimes subtle and
non-obvious) pitfalls. Contributing to the problem is the critical
issue of public perception, which (ironically) is a source of
resistance to accepting our proposals. For example, some election
officials are wary of using dice because of a feared association with
gambling (while others have embraced it). Also because of perception,
technically superior methods have had to be turned down in favor of
simpler ones.
This work-in-progress talk will describe the problem and briefly cover
some of the takeaway messages of our experiences.
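For concreteness, one physical method amounts to reading precinct digits off
publicly rolled ten-sided dice, with out-of-range values re-rolled so the
selection stays uniform. A minimal sketch of that mapping (the parameters are
illustrative):

    def precinct_from_rolls(rolls, num_precincts):
        """rolls: digits 0-9 read off the dice, most significant first.
        Returns the selected precinct, or None if the value must be re-rolled."""
        value = int("".join(str(d) for d in rolls))
        return value if value < num_precincts else None

    print(precinct_from_rolls([3, 7, 1], 500))   # precinct 371
    print(precinct_from_rolls([8, 0, 2], 500))   # None -> announce and roll again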
The Joe-E Subset of Java
Adrian Mettler, University of California, Berkeley
Joe-E is an object capability subset of Java designed for building
secure systems. The goal of object capability languages is to support the
Principle of Least Authority (POLA), so that each object naturally
receives the least privilege (i.e., least authority) needed to do its
job. Joe-E is defined as a subset of Java that places additional
restrictions on programs in order to eliminate sources of ambient
authority that make enforcement of POLA impossible. The semantics of
Java are preserved; any Joe-E program is a valid Java program. Since
this allows use of the existing Java tool chain and programmer
experience, we hope that Joe-E will support secure programming while
remaining familiar to Java programmers everywhere.
A current draft of the specification and an implementation are available
at http://www.joe-e.org
Prerendered User Interfaces for Higher-Assurance Electronic Voting
Ka-Ping Yee, University of California, Berkeley
I will describe plans and work completed so far on a new electronic
voting architecture in which the voting user interface is prerendered
and published before election day as an "electronic sample ballot."
The publishing of the prerendered UI as a separate artifact enables
public participation in the review, verification, usability testing,
and accessibility testing of the ballot.
Preparing the user interface outside of the voting machine also
dramatically reduces the amount of security-critical code in the
machine, thus reducing the amount and difficulty of software
verification required to assure the correctness of the election
result. Our prototype software for a high-assurance touchscreen
voting machine can support a wide range of user interface styles. Its
implementation, which includes a validator for the ballot file, the
interaction with the ballot itself, and history-independent storage of
the cast votes, fits in less than 300 lines of Python.
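Since the prototype is written in Python, a small Python sketch can illustrate
one of its ingredients, history-independent storage: ballots are kept in a
canonical sorted order, so the stored data reveals nothing about the order in
which they were cast. This is an illustration of the property, not the
prototype's code.

    import bisect

    class BallotStore:
        def __init__(self):
            self._ballots = []                 # always kept in sorted order

        def cast(self, ballot: str):
            bisect.insort(self._ballots, ballot)

        def contents(self):
            return list(self._ballots)         # identical regardless of cast order

    store = BallotStore()
    for b in ["prop1=yes;mayor=B", "prop1=no;mayor=A", "prop1=yes;mayor=A"]:
        store.cast(b)
    print(store.contents())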
Fine-Grained Secure Localization for 802.11 Networks
Patrick Traynor, Pennsylvania State University
The erosion of well-defined network boundaries makes the use
of identity a necessary but insufficient means of authentication in
numerous contexts. In a number of settings, proving a user's location
may in fact be more critical. In this work, we use standard 802.11
access points to broadcast cryptographic tokens at a variety of power
levels. Clients attempting to prove their location report the
overheard tokens to a controller. To improve certainty, the controller
then requests that desktop machines with inexpensive wireless radios
in the macro-vicinity broadcast an additional round of tokens. In so
doing, we are able to develop a fine-grained, non-forgeable wireless
localization system that is resistant to sophisticated attackers
attempting to spoof their physical location.
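The token mechanism can be sketched as each access point deriving a keyed token
per transmit-power level and time epoch, which the controller can later verify
against a client's report. The keys, levels, and epoch scheme below are
illustrative assumptions, not the system's actual parameters.

    import hashlib
    import hmac

    AP_KEYS = {"ap1": b"k1", "ap2": b"k2"}     # shared only with the controller
    POWER_LEVELS = [1, 5, 20]                   # e.g. milliwatts

    def token(ap, level, epoch):
        msg = f"{ap}|{level}|{epoch}".encode()
        return hmac.new(AP_KEYS[ap], msg, hashlib.sha256).hexdigest()[:16]

    def verify_report(claimed_tokens, epoch):
        """Return the (AP, power level) pairs the client provably overheard."""
        return [(ap, lvl) for ap in AP_KEYS for lvl in POWER_LEVELS
                if token(ap, lvl, epoch) in claimed_tokens]

    report = [token("ap1", 1, 42), token("ap2", 20, 42)]
    print(verify_report(report, epoch=42))   # hearing ap1's low-power token => close to ap1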
KernelSecNet
Manigandan Radhakrishnan and Jon A. Solworth, University of Illinois at Chicago
The authorization system of a computer is responsible for deciding
whether operations that have external effects are allowed. It is a
critical part of the security architecture of a computer system.
Unfortunately, in most contemporary operating systems, common networking
operations (such as creating a socket, binding to a port, or accepting a
connection) are not tied into the authorization system. Therefore,
barring a few restrictions, these operations are unprivileged: any
process can perform them, and no explicit authorization decision is
made. In addition, the system-level protection abstractions for
distributed communication are very weak.
KernelSecNet is intended to provide networking and distributed computing
with simple abstractions similar to those that traditional
operating-system-based protection provides on single systems. These
abstractions include user authentication and address-space separation.
KernelSecNet is designed to (1) extend KernelSec protections to
networking and distributed computing and (2) be backported to Unix-based
systems to apply the same protections. This project is being implemented as
part of the Linux kernel.
Taking Malware Detection To The Next Level (Down)
Adrienne Felt (speaker), Nathanael Paul,
David Evans, and Sudhanva Gurumurthi, University of Virginia
Several highly sophisticated rootkits have garnered media attention
over the past few weeks, highlighting the vulnerability of current
anti-malware techniques to layer-below attacks. These new rootkits
are not the only area of weakness in traditional anti-malware
techniques. For example, "morphing" viruses evade string scanning by
altering their code structure between generations. Emulation
techniques can sometimes detect these morphing viruses, but their
effectiveness is limited by high computational cost, imprecision,
and the development of anti-emulation techniques.
Our solution to layer-below attacks and morphing viruses is low-level,
behavior-based threat detection. As malicious software has grown more
complex, disk drive processors have grown more powerful. We propose
using this new, under-utilized processing power to augment traditional
anti-virus and rootkit detection techniques with direct computation on
the disk processor. Disk processors are privy to the low-level
behavior of malware that alters data on its host, allowing us to
identify threats based on patterns of I/O requests. The location and
isolation of the disk make it well-suited for malware detection, since
it can see all I/O requests and is immune to subversion by a rootkit.
Additionally, signatures made from low-level behavioral patterns
cannot be confused by equivalent code substitution (i.e., morphing
viruses). As an added benefit, disk-level monitoring comes at a low
cost: it requires little extra effort from the CPU because it observes
the behavior of normally running programs.
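A behavioral signature at the disk level might be expressed over the stream of
block requests, as in the sketch below; the specific pattern (a boot-sector
overwrite followed by many scattered writes) is a hypothetical example, not one
of our signatures.

    from collections import deque

    recent = deque(maxlen=64)        # sliding window of (op, block_address, length)

    def matches_signature(window):
        wrote_boot_sector = any(op == "W" and addr == 0 for op, addr, _ in window)
        scattered_writes = len({addr for op, addr, _ in window if op == "W"}) > 32
        return wrote_boot_sector and scattered_writes

    def observe(op, addr, length):
        recent.append((op, addr, length))
        if matches_signature(recent):
            print("alert: I/O pattern matches behavioral signature")

    observe("W", 0, 512)                  # overwrite of block 0
    for blk in range(100, 140):           # burst of small scattered writes
        observe("W", blk * 8, 512)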
Data Sandboxing for Confidentiality
Tejas Khatiwala and Raj Swaminathan, University of Illinois at Chicago
When an application that reads private information communicates on a
publicly visible output channel, such as a file or a network connection,
how can we enforce a policy that the data it writes is free of private
information? In this work we address this question for a practical
setting through a technique called data sandboxing. Data sandboxing uses
the popular technique of system call interposition to mediate operations
on communication channels such as files. The problem with such
interposition techniques is that they cannot distinguish operations that
process sensitive information from those that do not. As a result, any
confidentiality policy that simply blocks writes to public output
channels will prevent programs from executing successfully. To
distinguish between sensitive and public data in
programs, we partition the application into two different programs
(that are separated through standard address spaces) and enforce two
different confidentiality policies on them. The first program performs
operations on public output channels, and the confidentiality policy
does not allow it to read sensitive information. The second program is
allowed to read sensitive information, but is not allowed to write to
public channels. This partitioning enables us to enforce
a confidentiality policy that, in totality, prevents leakage of
sensitive information from the original program on publicly observable
channels. We perform such partitioning based on techniques from
program slicing. In this talk, we sketch the design, implementation
and evaluation of a tool that enforces confidentiality policies on C
programs using the technique described above.
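The structural partition can be sketched as two cooperating processes in
separate address spaces; the file names and the particular split are
illustrative, and the system-call-level enforcement that backs the two policies
is omitted here.

    from multiprocessing import Pipe, Process

    def private_part(conn):
        # May read sensitive input (path is a stand-in), but its only output
        # channel is the pipe, over which it sends a non-sensitive derivative.
        with open("/etc/hostname") as f:
            secret = f.read()
        conn.send(len(secret))
        conn.close()

    def public_part(conn):
        # May write to public channels, but never sees the raw secret.
        value = conn.recv()
        with open("/tmp/public_output.txt", "w") as out:
            out.write(f"processed {value} bytes\n")

    if __name__ == "__main__":
        parent, child = Pipe()
        p1 = Process(target=private_part, args=(child,))
        p2 = Process(target=public_part, args=(parent,))
        p1.start(); p2.start(); p1.join(); p2.join()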
To be presented at the Annual Computer Security Applications Conference
(ACSAC), Miami, FL, December 2006.