WORKSHOP PROGRAM ABSTRACTS
Tuesday, March 29, 2011
9:45 a.m.–10:35 a.m.
Exposing the Lack of Privacy in File Hosting Services
File hosting services (FHSs) are used daily by thousands of people as a way of storing and sharing files. These services normally rely on a security-through-obscurity approach to enforce access control: For each uploaded file, the user is given a secret URI that she can share with other users of her choice.
In this paper, we present a study of 100 file hosting services and show that a significant percentage of them generate secret URIs in a predictable fashion, allowing attackers to enumerate their services and access their file lists. Our experiments demonstrate how an attacker can access hundreds of thousands of files in a short period of time, and how this poses a serious risk to the privacy of FHS users. Using a novel approach, we also demonstrate that attackers are aware of these vulnerabilities and are already exploiting them to gain access to other users' files. Finally, we present SecureFS, a client-side protection mechanism that can protect a user's files when uploaded to insecure FHSs, even if the files end up in the possession of attackers.
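To make the enumeration risk concrete, here is a minimal sketch of the attack pattern described above, against a hypothetical service whose "secret" URIs embed a sequential numeric identifier; the host name and URI scheme are invented for illustration.

```python
# Minimal sketch of the enumeration attack, against a hypothetical FHS
# that hands out sequential numeric identifiers as "secret" URIs.
# Host name and URI pattern are invented for illustration.
import urllib.error
import urllib.request

BASE = "http://fhs.example.com/files/{file_id}"  # hypothetical URI scheme

def probe(file_id: int) -> bool:
    """Return True if the 'secret' URI for file_id resolves to a file."""
    req = urllib.request.Request(BASE.format(file_id=file_id), method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.URLError:
        return False

# With predictable identifiers, a linear walk over the ID space turns
# every hit into another user's "private" file.
found = [i for i in range(100000, 100100) if probe(i)]
print(f"{len(found)} accessible files in a 100-ID window")
```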
One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users
Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the source IP address of, or to trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it has been unknown whether this linkability can, in practice, be used to trace a significant number of streams originating from secure (i.e., proxied) applications.
In this paper, we show that this linkability allows us to trace 193% additional streams, including 27% of HTTP streams possibly originating from "secure" browsers. In particular, we traced 9% of all Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks that trace BitTorrent users on Tor. We ran these attacks in the wild for 23 days and revealed 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited by Tor users, per country of origin. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we attribute the observed behavior to country-level differences in the concentration of pornographic content downloaded. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.
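The linking step itself is simple to picture. The sketch below (our illustration, not the authors' code) groups an exit node's streams by circuit identifier, so an IP address learned for any one stream, for example one leaked via BitTorrent, is attributed to every stream on the same circuit; all records are hypothetical.

```python
# Illustration of circuit-level linking at an instrumented exit node:
# streams sharing a circuit ID are grouped, so one leaked IP address
# deanonymizes every stream on that circuit. Records are hypothetical.
from collections import defaultdict

# (circuit_id, destination, source_ip_if_known)
observed = [
    (41, "tracker.example.net:6969", "203.0.113.7"),  # leaked by BitTorrent
    (41, "www.example.com:80", None),                 # "secure" HTTP stream
    (77, "www.example.org:443", None),                # no leak on this circuit
]

by_circuit = defaultdict(list)
for circ, dest, ip in observed:
    by_circuit[circ].append((dest, ip))

for circ, streams in by_circuit.items():
    leaked = next((ip for _, ip in streams if ip), None)
    if leaked:  # one bad apple: every stream on the circuit is traced
        for dest, _ in streams:
            print(f"circuit {circ}: {leaked} -> {dest}")
```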
11:00 a.m.–12:15 p.m.
The Nuts and Bolts of a Forum Spam Automator
Web boards, blogs, wikis, and guestbooks are forums frequented and contributed to by many Web users. Unfortunately, the utility of these forums is being diminished by spamming, where miscreants post messages and links intended not to contribute to the forum but to advertise their websites. Many such links are malicious. In this paper we investigate and compare automated tools used to spam forums. We analyze the functionality of the most popular forum spam automator, XRumer, in detail and find that it can intelligently circumvent many practices used by forums to distinguish humans from bots, all while keeping the spammer hidden. Insights gained from our study suggest specific measures that can be used to block spamming by this automator.
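As one concrete example of the human-versus-bot checks such an automator must defeat, the sketch below implements a hidden "honeypot" form field; this check is our illustration of a common class of defense, not a technique taken from the paper, and the field name is hypothetical.

```python
# A honeypot form field is rendered invisible to humans via CSS, so it
# should arrive empty; naive form-filling bots that populate every field
# expose themselves. Field name below is hypothetical.
def is_probably_bot(form: dict) -> bool:
    """Flag submissions that filled the invisible honeypot field."""
    return form.get("website_url_confirm", "") != ""

assert not is_probably_bot({"username": "alice", "website_url_confirm": ""})
assert is_probably_bot({"username": "x1",
                        "website_url_confirm": "http://spam.example"})
```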
The Underground Economy of Spam: A Botmaster's Perspective of Coordinating Large-Scale Spam Campaigns
Spam accounts for a large portion of the email exchanged on the Internet. In addition to being a nuisance and a waste of costly resources, spam is used as a delivery mechanism for many criminal scams and large-scale compromises. Most of this spam is sent using botnets, which are often rented for a fee to criminal organizations. Even though a considerable body of research has focused on combating spam and analyzing spam-related botnets, most of these efforts have had a limited view of the entire spamming process.
In this paper, we present a comprehensive analysis of a large-scale botnet from the botmaster's perspective, highlighting the intricacies involved in orchestrating spam campaigns, such as the quality of email address lists, the effectiveness of IP-based blacklisting, and the reliability of bots. This analysis is made possible by our access to a number of command-and-control servers used by the Pushdo/Cutwail botnet. In addition, we study Spamdot.biz, a private forum used by some of the most notorious spam gangs, to provide novel insights into the underground economy of large-scale spam operations.
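For readers unfamiliar with the IP-based blacklisting whose effectiveness is measured here, the sketch below shows how a standard DNSBL lookup works mechanically: the IP's octets are reversed and queried under the blacklist zone, and any answer means the address is listed. The zone name is the widely used Spamhaus example; any DNSBL works the same way.

```python
# Standard DNSBL lookup: reverse the IP's octets, append the blacklist
# zone, and resolve. An A record in the answer means the IP is listed.
import socket

def is_blacklisted(ip: str, zone: str = "zen.spamhaus.org") -> bool:
    query = ".".join(reversed(ip.split("."))) + "." + zone
    try:
        socket.gethostbyname(query)  # resolves only if the IP is listed
        return True
    except socket.gaierror:
        return False

print(is_blacklisted("127.0.0.2"))  # conventional always-listed test address
```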
On the Effects of Registrar-level Intervention
Virtually all Internet scams make use of domain name resolution as a critical part of their execution (e.g., resolving a spam-advertised URL to its Web site). Consequently, defenders have initiated a range of efforts to intervene within the DNS ecosystem to block such activity (e.g., by blacklisting "known bad" domain names at the client). Recently, there has been a push for domain registrars to take a more active role in this conflict, and it is this class of intervention that is the focus of our work. In particular, this paper characterizes the impact of two recent efforts to counter scammers' use of domain registration: CNNIC's blanket policy changes for the .cn ccTLD made in late 2009 and the late 2010 agreement between eNom and LegitScript to reactively take down "rogue" Internet pharmacy domains. Using a combination of historic WHOIS data and co-temporal spam feeds, we measure the impact of these interventions on both the registration and use of spam-advertised domains. We use these examples to illustrate the key challenges in making registrar-level intervention an effective tool.
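The shape of this measurement can be sketched as a simple join between the two data sources: WHOIS registration dates on one side, spam-feed sightings on the other, compared around an intervention date. All domains, dates, and record formats below are invented for illustration.

```python
# Join historic WHOIS registration dates with a co-temporal spam feed,
# then compare spam-advertised registrations before and after an
# intervention date. All data below is invented for illustration.
from datetime import date

INTERVENTION = date(2009, 12, 14)  # hypothetical policy-change date

whois = {  # domain -> registration date (from historic WHOIS)
    "pills.example.cn": date(2009, 11, 2),
    "meds.example.cn": date(2010, 1, 20),
}
spam_feed = [  # (domain, first seen in spam)
    ("pills.example.cn", date(2009, 12, 1)),
    ("meds.example.cn", date(2010, 2, 3)),
]

before = sum(1 for domain, _ in spam_feed if whois[domain] < INTERVENTION)
print(f"spam-advertised domains registered before/after intervention: "
      f"{before}/{len(spam_feed) - before}")
```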
3:15 p.m.–4:30 p.m.
Characterizing Internet Worm Infection Structure
Internet worm infection continues to be one of the top security threats and has been widely used by botnets to recruit new bots. In this work, we attempt to quantify the infection ability of individual hosts and reveal the key characteristics of the underlying topology formed by worm infection, i.e., the number of children and the generation of the worm infection family tree. Specifically, we apply probabilistic modeling methods and a sequential growth model to analyze the infection tree of a wide class of worms. Through both mathematical analysis and simulation, we find that the number of children asymptotically follows a geometric distribution with parameter 0.5. As a result, on average half of infected hosts never compromise any vulnerable host, over 98% of infected hosts have no more than five children, and a small portion of infected hosts have a large number of children. We also discover that the generation closely follows a Poisson distribution, and that the average path length of the worm infection family tree increases approximately logarithmically with the total number of infected hosts.
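The sequential growth model is easy to simulate: each newly infected host picks its infector uniformly at random from the hosts already infected, yielding a random recursive tree. The sketch below compares the resulting children distribution against the geometric distribution with parameter 0.5 quoted above.

```python
# Simulate the sequential growth model: each new victim's infector is
# drawn uniformly from the already-infected hosts (a random recursive
# tree), then compare children counts against Geometric(0.5).
import random
from collections import Counter

def grow_infection_tree(n: int) -> list:
    """Return the number of children of each host in an n-host tree."""
    children = [0] * n
    for host in range(1, n):
        parent = random.randrange(host)  # any earlier victim may infect
        children[parent] += 1
    return children

counts = Counter(grow_infection_tree(100_000))
total = sum(counts.values())
for k in range(6):
    print(f"P(children = {k}) ~ {counts[k] / total:.3f}"
          f"  (geometric: {0.5 ** (k + 1):.3f})")
```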
Why Mobile-to-Mobile Wireless Malware Won't Cause a Storm
The enhanced capabilities of smartphones are creating the opportunity for new forms of malware to spread directly between mobile devices over short-range radio. This has already been observed with Bluetooth radios, and the WiFi capabilities of smartphones provide an opportune new spreading vector. The increasing complexity of phone operating systems, coupled with disclosed vulnerabilities, suggests it is simply a matter of time before WiFi-based worms are possible. Work that has considered this problem for Bluetooth suggests outbreaks would result in epidemics [11,28,32]. We use traditional epidemiological modeling tools and high-fidelity, realistic human mobility data to study the spreading speed of this emergent threat. Unlike other work, we take into account the effects of exposure times, wireless propagation radii, and limited population susceptibility. Importantly, we find that lowering the population's susceptibility to infection confers significant herd immunity, as with biological infections but unlike traditional Internet worms, making such threats unlikely in the near to medium term. Specifically, with susceptibility rates below 10%, the result is near-total immunity of the population. We find that exposure times and wireless transmission radii have no significant effect on outbreaks.
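A minimal homogeneous-mixing sketch (our simplification, not the paper's mobility-driven model) illustrates the susceptibility threshold: when too few devices are susceptible, each infection spawns fewer than one successor on average and the outbreak dies out. All parameters below are illustrative.

```python
# SIR-style toy model with a one-step infectious period: effective
# R0 ~ contacts * p_transmit * susceptible, so outbreaks collapse once
# susceptibility drops low enough. Parameters are illustrative.
import random

def outbreak_size(n=10_000, contacts=8, susceptible=0.10,
                  p_transmit=0.5, seed=1):
    rng = random.Random(seed)
    can_infect = [rng.random() < susceptible for _ in range(n)]
    infected = {0}
    active = {0}  # infectious this step; recover afterwards
    while active:
        new = set()
        for _src in active:
            for _ in range(contacts):
                dst = rng.randrange(n)
                if (can_infect[dst] and dst not in infected
                        and rng.random() < p_transmit):
                    new.add(dst)
        infected |= new
        active = new
    return len(infected)

for s in (0.05, 0.10, 0.30):  # only the last is supercritical here
    print(f"susceptibility {s:.0%}: {outbreak_size(susceptible=s)} infected")
```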
Inflight Modifications of Content: Who Are the Culprits?
When a user requests content from a cloud service provider, the content sent by the provider is sometimes modified inflight by third-party entities. To our knowledge, there is no comprehensive study that examines the extent and primary root causes of the content modification problem. We design a lightweight experiment and instrument a vast number of clients in the wild to make two additional DNS queries every day. We identify candidate rogue servers and develop a measurement methodology to determine, for each candidate, whether the server is performing inflight modifications. In total, we identify 349 servers as malicious, that is, as modifying content inflight, and find that more than 1.9% of all US clients are affected by these servers. We investigate the root causes of the problem and identify 9 ISPs whose clients are predominantly affected. We find that the root cause is not sophisticated transparent in-network services, but rather local DNS servers in the problematic ISPs.
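The candidate-identification step can be pictured as a resolver comparison, sketched below with dnspython. The probed hostname is a placeholder, and a mismatch only marks a candidate, since CDNs may legitimately return different answers depending on the resolver; that is why a confirmation methodology is needed afterwards.

```python
# Resolve the same name via the client's default DNS path and via a
# trusted reference resolver; flag answers that disagree as candidate
# rogue servers. Hostname is a hypothetical placeholder.
import socket

import dns.resolver  # pip install dnspython

NAME = "content.example.com"  # hypothetical monitored hostname

local_answer = {socket.gethostbyname(NAME)}       # client's default DNS path

trusted = dns.resolver.Resolver(configure=False)  # reference resolver
trusted.nameservers = ["8.8.8.8"]
reference_answer = {r.address for r in trusted.resolve(NAME, "A")}

suspicious = local_answer - reference_answer
if suspicious:
    print("candidate rogue server(s):", suspicious)
```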
4:45 p.m.–6:00 p.m.
Application-Level Reconnaissance: Timing Channel Attacks Against Antivirus Software
Remote attackers use network reconnaissance techniques, such as port scanning, to gain information about a victim machine and then use this information to launch an attack. Current network reconnaissance techniques, which typically operate below the application layer, are limited in that they can reveal only basic information, such as which services a victim is running. Furthermore, modern remote exploits typically come from a server and attack a client that has connected to it, rather than the attacker connecting directly to the victim. In this paper, we raise and answer the question: can the attacker go beyond the traditional techniques of network reconnaissance and gain high-level, detailed information?
We investigate remote timing channel attacks against the ClamAV antivirus and show that it is possible, with high accuracy, for a remote attacker to determine how up-to-date the victim's antivirus signature database is. Because the strings the attacker uses to do this are benign (i.e., they do not trigger the antivirus) and the attack can be carried out through many different APIs, the attacker has considerable flexibility in hiding the attack.
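The measurement side of such an attack reduces to careful latency comparison. The harness below is our illustration, not the paper's tooling: it takes the median of repeated scan timings for a probe payload, and the probe strings that discriminate between signature-database versions are left as hypothetical placeholders.

```python
# Timing harness: median wall-clock time of repeatedly scanning a probe
# payload. A consistent latency difference between two benign probes
# reveals which signature database version the victim runs.
import statistics
import time

def timed_scan(scan, payload: bytes, trials: int = 50) -> float:
    """Median wall-clock time of scan(payload), in seconds."""
    samples = []
    for _ in range(trials):
        t0 = time.perf_counter()
        scan(payload)  # hand the payload to the victim's AV via any API
        samples.append(time.perf_counter() - t0)
    return statistics.median(samples)

# probe_old / probe_new: benign strings exercising signatures present
# only in older / newer databases (hypothetical placeholders), e.g.:
# delta = timed_scan(victim_scan, probe_new) - timed_scan(victim_scan, probe_old)
```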
Reconstructing Hash Reversal based Proof of Work Schemes
Proof-of-work schemes use client puzzles to manage limited resources on a server and to provide resilience against denial-of-service attacks. Attacks that use GPUs to inflate computational capacity, known as resource inflation, are a novel and powerful threat that dramatically increases the computational disparity between clients. This disparity renders proof-of-work schemes based on hash reversal ineffective and potentially destructive. This paper examines several such schemes in view of GPU-based attacks and identifies the characteristics that allow defense mechanisms to withstand them. In particular, we demonstrate that hash-reversal schemes that adapt solely on server load are ineffective against GPU-equipped adversaries, whereas hash-reversal schemes that adapt based on client behavior remain effective even under GPU-based attacks.
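For reference, a hash-reversal client puzzle of the kind analyzed here can be sketched in a few lines: the server issues a random challenge and a difficulty d, and the client searches for a value whose hash carries d leading zero bits. A GPU's far higher hash throughput is precisely what collapses the cost of this search.

```python
# Hash-reversal client puzzle: find x such that SHA-256(challenge || x)
# has d leading zero bits. Expected work is about 2**d hash attempts.
import hashlib
import os

def solve(challenge: bytes, d: int) -> int:
    target = 1 << (256 - d)  # hashes below this have d leading zero bits
    x = 0
    while True:
        h = hashlib.sha256(challenge + x.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return x
        x += 1

challenge = os.urandom(16)
print("solution:", solve(challenge, d=16))  # ~2**16 attempts on a CPU
```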
Andbot: Towards Advanced Mobile Botnets
With the rapid development of the computing and Internet access capabilities of smartphones (via WiFi, GPRS, and 3G), constructing practical mobile botnets has become an emerging trend. In this paper, we introduce the design of a mobile botnet called Andbot, which exploits a novel command-and-control (C&C) strategy named URL Flux. Andbot is designed to be stealthy, resilient, and low-cost (in battery power, network traffic, and money), features that promise to be appealing to botmasters. To demonstrate the efficacy of our design, we implemented a prototype of Andbot on Android, the most popular open-source smartphone platform, and evaluated it. Preliminary experimental results show that the design of Andbot is well suited to smartphones and hard to defend against. We believe that mobile botnets similar to Andbot will appear in the near future; consequently, security defenders should pay close attention to this kind of advanced mobile botnet at an early stage. The goal of our work is to increase understanding of mobile botnets and thereby promote the development of more effective countermeasures. To conclude, we suggest possible defenses against this emerging threat.
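To make the URL Flux idea concrete for defenders, the sketch below shows how a bot and botmaster could derive the same daily list of Web 2.0 account names from a shared seed; the bot would then fetch commands posted under those names and accept only those signed by the botmaster's public key. The seed and naming scheme are hypothetical illustrations of the general mechanism, not Andbot's actual values.

```python
# Defender-oriented sketch of a URL Flux-style rendezvous generator:
# derive a deterministic daily list of account names from a shared seed.
# SEED and naming scheme are hypothetical, not Andbot's actual values.
import hashlib
from datetime import date

SEED = b"hypothetical-hardcoded-seed"

def daily_usernames(day: date, count: int = 5) -> list:
    names = []
    for i in range(count):
        digest = hashlib.sha256(SEED + day.isoformat().encode() + bytes([i]))
        names.append("u" + digest.hexdigest()[:10])
    return names

# A defender who extracts SEED from a captured sample can precompute the
# same names and blacklist or preregister them ahead of the botnet.
print(daily_usernames(date(2011, 3, 29)))
```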