2007 USENIX Annual Technical Conference
TRAINING TRACK


Sunday, June 17, 2007
Full-Day Tutorials
S1 Computer Forensics
Simson L. Garfinkel, Naval Postgraduate School

Who should attend: Anyone interested in forensics: recovering lost or deleted data, hunting for clues, and tracking information.

Computer forensics is the study of information stored in computer systems for the purpose of learning what happened to that computer at some point in the past—and for making a convincing argument about what was learned in a court of law. Today computer forensics covers five broad categories:

  • Hard drive forensics, which aims to inventory and locate information that is on a computer's hard drive, whether or not the information is visible to the computer's user. Hard drive forensics includes the recovery of deleted files and file fragments, the construction of timelines, and the creation of profiles of a computer's user.
  • Memory forensics, which analyzes the memory (or memory dump) of a computer system to reveal information about what the computer has been doing.
  • Network forensics, which captures and analyzes information moving over a computer network. Network forensics can be based on full-content analysis or the analysis of network flows.
  • Document forensics, in which specific files are analyzed for subtle and possibly hidden information that they may contain. Document forensics can recover deleted information from Microsoft Word files or reveal which computers were used to create an individual file.
  • Software forensics, in which a computer program is analyzed to reveal information about the program's author or lineage.

Take back to work:

  • An in-depth understanding of computer forensics, including the history of computer forensics (celebrated cases)
  • Enough information about operating systems to understand why forensic tools are possible, what they can do, and their limits
  • Modern forensic tools, including both open source and commercial
  • The legal environment that governs forensics in the U.S.

Topics include:

  • Introduction to computer forensics
    • What is forensics?
    • Why is information left behind on computer systems?
    • Forensics history
    • Computer forensics vs. physical forensics
    • Forensics and the law (discovery, criminal law, etc.)
    • The C.S.I. effect
  • Disk forensics
    • Understanding file systems
    • ASCII and Unicode
    • Recovery of deleted files without the use of forensic tools
    • Recovery of deleted files with commercial and open source tools
      • Sleuth Kit
      • EnCase
      • FTK
    • What to do when you can't recover an entire file
    • Hash code databases
    • Disk anti-forensics: wiping tools, cryptographic file systems
    • LAB1: Hide-and-seek (just find some files)
    • LAB2: Where is Tanya? (Tanya has gone missing; figure out where she is from her computer)
  • Network Forensics
    • Understanding IP packets, UDP, TCP, protocols (in 5 minutes)
    • Understanding network hubs, switches, where you monitor
    • Data rates
    • Flows vs. full-content
    • Network anti-forensics: crypto, TOR, Stego
    • Using commercial and open source tools
      • Ethereal
      • NetIntercept
    • LAB1: Find the hacker
  • Document forensics
    • MS Word structure
    • PDF structure
    • Identifying similar documents
  • Memory forensics
    • Memory hierarchy, swap space, sleep and hibernation
    • Tools for understanding:
      • Microsoft memory
      • UNIX memory
  • Cell phone forensics
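
The "recovery of deleted files without the use of forensic tools" topic above can be previewed with nothing but standard UNIX utilities. A minimal sketch, in which the image file and planted message are invented for illustration:

```shell
# Plant a short message inside an otherwise blank "disk image", then
# recover it with dd, strings, and grep alone -- no forensic toolkit.
# (disk.img and the message are made up for this sketch.)
dd if=/dev/zero of=disk.img bs=1024 count=64 2>/dev/null
printf 'meet at the old pier' | dd of=disk.img bs=1 seek=4096 conv=notrunc 2>/dev/null
strings disk.img | grep 'pier'
```

Dedicated tools such as the Sleuth Kit automate and extend this idea (file-system awareness, timelines), but the principle is the same: deleted data often survives as raw bytes on the medium.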
Simson L. Garfinkel (S1) is an Associate Professor at the Naval Postgraduate School in Monterey, CA, and a fellow at the Center for Research on Computation and Society at Harvard University. He is also the founder of Sandstorm Enterprises, a computer security firm that develops advanced computer forensic tools used by businesses and governments to audit their systems. Garfinkel has research interests in computer forensics, the emerging field of usability and security, information policy, and terrorism. He has actively researched and published in these areas for more than two decades. He writes a monthly column for CSO Magazine, for which he has been awarded four national journalism awards, and is the author or co-author of fourteen books on computing. He is perhaps best known for Database Nation: The Death of Privacy in the 21st Century and for Practical UNIX and Internet Security.


S3 Hands-on Linux Security Class: From Hacked to Secure in Two Days (Day 1 of 2)
Rik Farrow, Security Consultant

Who should attend: System administrators of Linux and other UNIX systems; anyone who runs a public UNIX server.

We will work with systems that have been hacked and include hidden files, services, and evidence of intrusions. You will learn how to uncover exploited systems and properly secure them. You will perform hands-on exercises with dual-use tools that can replicate what intruders do as well as improve security. The tools vary from the ordinary, such as ps, find, and strings, to less familiar but important ones, such as lsof, various scanners, sniffers, and the Sleuth Kit.

Day 1 topics will begin with a quick assessment of a system, looking for obvious signs of intrusion. Next we will examine the root of UNIX security and how this impacts what attacks do as well as how UNIX systems must be secured. We will cover TCP/IP and how it relates to different types of attacks and scanning, to learn about what an attacker can "see" from the network, and the limitations of certain styles of attack. The inner workings of buffer overflows, with examples, will graphically illustrate how these attacks work and what defenses against them exist. Day 1 will conclude with an examination of a buggy Web script, how to quickly audit CGI scripts, and what can be done to prevent this attack.

Class exercises will require that you have an x86-based laptop computer that can be booted from a CD, or a system capable of booting the CD within a virtual environment such as VMware Player or Parallels. You can use an Apple notebook for this only if it is Intel-based. Students will receive a Live CD (KNOPPIX) that contains the tools, files, and exercises required for the course. You can download KNOPPIX yourself (v3.9) and see if your laptop is supported. In the past, some course attendees have come without laptops and teamed up with friendly laptop owners.

Take back to work: An understanding of UNIX security principles, TCP/IP, scanning, and popular attack strategies; defenses for networks and individual systems; how to determine if a system has been exploited; how to use network scanning/evaluation tools; how to improve the security of your systems; how to check Web scripts for weaknesses; and how to use patching and vulnerability assessment tools.

Exercises include:

  • Searching for hidden files
  • TCP/IP and its relation to probes and attacks
  • Uses of ARP and Ethereal
  • hping2 probes
  • nmap (connect and SYN scans)
  • Buffer overflows in sample C programs
  • Weaknesses in Web scripts (using a Perl example)
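
The first exercise, searching for hidden files, can be previewed with plain find. In this sketch the /tmp/fdemo path and the "..." directory name are invented for illustration; intruders favor names that blend in with "." and ".." in a casual ls:

```shell
# Hide a file the way intruders often do: inside a dot-directory whose
# name ('...') is easy to mistake for '.' or '..' at a glance.
mkdir -p /tmp/fdemo/... && touch /tmp/fdemo/.../payload
# Plain find flushes it out; no special tooling required.
find /tmp/fdemo -name '.*' ! -name '.' ! -name '..'
```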

Rik Farrow (S3, M3) provides UNIX and Internet security consulting and training. He has been working with UNIX system security since 1984 and with TCP/IP networks since 1988. He has taught at the IRS, Department of Justice, NSA, NASA, US West, Canadian RCMP, Swedish Navy, and for many U.S. and European user groups. He is the author of UNIX System Security and System Administrator's Guide to System V. Farrow is the editor of ;login: and works passionately to improve the state of computer security.


S4 Solaris 10 Administration Workshop (Hands-on)
Peter Baer Galvin, Corporate Technologies

Who should attend: Solaris systems managers and administrators interested in learning the new administration features in Solaris 10 (and features in previous Solaris releases that they might not be using).

Solaris has always been the premier commercial operating system, and this remains the case today. Its novel features and applications (like ZFS, DTrace, and Zones) keep it at the forefront of enterprise use, and many of these features have been copied in other operating systems.

This course covers a variety of system administration topics surrounding Solaris 10. Solaris 10 includes many features introduced since the last major release of Solaris, and there are new issues to consider when deploying, implementing, and managing Solaris 10. This will be a workshop featuring instruction and practice/exploration.

Note that, except for a few instances, Solaris 10 security is not covered in this workshop (see W3: Solaris 10 Security Features Workshop, for that).

Take back to work: All you need to consider in deploying, implementing, and managing Solaris 10.

Topics include:

  • Overview
  • Solaris releases (official, Solaris Express, OpenSolaris, others)
  • Installing and upgrading to Solaris 10
    • Flash archives and live upgrade
  • Patching the kernel and applications
  • Service Management Facility
  • The kernel
    • Update
    • /etc/system
  • Crash and core dumps
    • Management and analysis
  • Cool commands you need to know
  • ZFS, the new endian-neutral file system that "will make you forget everything you thought you knew about file systems"
  • N1 Grid Containers (a.k.a. Zones). Solaris 10 has built-in virtualization support, enabling administrators to create multiple virtual machines, each completely isolated from the others.
    • Installation
    • Management
    • Resource management
  • DTrace: Solaris 10’s system profiling and debugging tool
  • FMA (the Fault Management Architecture)
  • Sysadmin best practices: using the new features effectively and efficiently without hurting yourself

Laptop Requirements: Each student should have a laptop with wireless access for remote access into an instructor-provided Solaris 10 machine (if you do not have a laptop, we will make every effort to pair you up with another student to work as a group; your laptop does not need to be running Solaris).

Peter Baer Galvin (S4, W3) is the Chief Technologist for Corporate Technologies, Inc., a systems integrator and VAR, and was previously the Systems Manager for Brown University's Computer Science Department. He has written articles for Byte and other magazines. He wrote the "Pete's Wicked World" and "Pete's Super Systems" columns at SunWorld. He is currently contributing editor for Sys Admin, where he manages the Solaris Corner. Peter is co-author of the Operating System Concepts and Applied Operating System Concepts textbooks. As a consultant and trainer, Peter has taught tutorials on security and system administration and has given talks at many conferences and institutions on such topics as Web services, performance tuning, and high availability.

Sunday SANS Security Tutorial
504.1 Incident Handling Step-by-Step and Computer Crime Investigation
John Strand, Northrop Grumman

See the full description.

Sunday Morning Half-Day Tutorials
S5 Higher-Order Perl
Chip Salzenberg, Consultant and Author

Who should attend: Programmers involved in the development and maintenance of large systems written partly or mostly in Perl.

One of the most powerful techniques available to Perl programmers is writing functions that can manufacture or modify other functions. Instead of writing ten similar functions that must be maintained separately, you can write a single function that will create the others as needed. This class will teach you how.

Take back to work: How to write functions that can manufacture or modify other functions, instead of writing ten similar functions that must be maintained separately.

Topics include:

  • Dynamically replacing functions with facades: Without changing a function's code, we can add caching behavior to it, or have it enforce an interface contract, or automatically track its own performance.
  • Iterators—functions for generating data a little bit at a time: For files, Perl provides filehandles, but the technique is more generally applicable. As with filehandles, the technique is suitable when the total amount of data is too large to use all at once.
  • An improved version of Perl's standard File::Find module: Unlike the usual implementation, the improved version can be stopped in the middle and resumed later as often as desired. Multiple searches can be active simultaneously, making it possible to recursively compare two separate directory structures.
  • Parsing: By writing functions that build more complex parsers from simple, interchangeable parts, we can easily build up a parser for any kind of input.
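
The class works in Perl, but the core idea, one function manufacturing many, can be sketched even in shell. The function names below are invented for illustration:

```shell
# make_greeter manufactures a new function each time it is called,
# instead of our writing each greeter by hand. (Sketch only; the class
# itself builds such factories with Perl closures and code references.)
make_greeter() {
    eval "$1() { echo \"Hello from $1\"; }"
}
make_greeter alice   # manufactures a function named 'alice'
make_greeter bob     # ...and another named 'bob'
alice
bob
```

One generator replaces a family of near-identical hand-written functions, which is exactly the maintenance win the course description promises.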

Chip Salzenberg (S5, S8, M5) is Principal Engineer at Cloudmark, where he fights spam with flair and aplomb. Chip is also chief coder ("pumpking") of the Parrot virtual machine (http://parrotcode.org), with which Chip plans to bring all dynamic languages together and, in the darkness, dynamically bind them. Chip is a well-known figure in the Perl and free and open source communities, having worked on free and open source software for over 20 years, Perl for 18 years, and Linux for 13 years. Chip was pumpking for Perl release 5.4. He created the automated Linux install-and-test system for VA Linux Systems and was VA's Kernel Coordinator. Chip is a perennial presenter at the O'Reilly Open Source Conference and YAPC (Yet Another Perl Conference), teaches Perl and C++ commercially, and has been published by O'Reilly and Prentice Hall on Perl and other topics. When away from his keyboard, Chip plays with (live) parrots and trains in Krav Maga. Chip's journal is at http://pobox.com/~chip/journal/.


S6 Problem-Solving for IT Professionals
Strata Rose Chalup, Project Management Consultant

Who should attend: IT support people who would like to have a better grasp of problem-solving as a discipline.

In the world of IT support, you build up a lot of specialized domains of knowledge that may or may not interact. As you will see, most types of troubleshooting rely on what you might call "guided intuition": focusing your attention down a probable path of diagnosis, and then making an intuitive leap. If you haven't practiced your intuitive pole vaulting lately, don't worry. By using checklists and patterns to do brute-force troubleshooting, you will gradually build up a reservoir of understanding that will eventually have you shouting "Aha!" while other folks are still scratching their heads in puzzlement.

Take back to work:

  • A solid grounding in the process of solving problems
  • A framework on which to build specialized troubleshooting techniques that are specific to your environment
  • Confidence in your ability to apply logic and common sense to debug problems in complex interacting systems
  • How to trace out common patterns of interaction
  • How to apply basic principles to isolate symptoms and interactions between subsystems
What this class does not provide:
  • Detailed instruction in specific problem-solving situations, such as "what to do when the mouse stops moving"
  • Information on custom environments that are unique to your employer or organization
  • An intro or remedial tutorial on IT basics such as how DNS lookups occur or what TCP steps happen when a request to a Web server comes in

Rather than cover ground many of you already know, we have chosen to focus exclusively on the domain of problem-solving itself as a discipline, not on solving specific problems common to IT situations.

Strata Rose Chalup (S6, M8) has been leading and managing complex IT projects for many years, serving in roles ranging from Project Manager to Director of Network Operations. She has written a number of articles on management and working with teams and has applied her management skills on various volunteer boards, including BayLISA and SAGE. Strata has a keen interest in network information systems and new publishing technologies and built a successful consulting practice around being an avid early adopter of new tools, starting with ncsa_httpd and C-based CGI libraries in 1993 and moving on to wikis, RSS readers, and blogging. Another MIT dropout, Strata founded VirtualNet Consulting in 1993.


S7 Security Without Firewalls
Abe Singer, San Diego Supercomputer Center

Who should attend: Administrators who want or need to explore strong, low-cost, scalable security without firewalls.

Good network security, possibly better than what firewalls provide, can be achieved without relying on them. The San Diego Supercomputer Center does not use firewalls, yet went almost four years without an intrusion. Our approach defies some common beliefs, but it seems to work, and it scales well.

"Use a firewall" is the common mantra of much security documentation and is the primary security "solution" in most networks. However, firewalls don't protect against activity by insiders, nor do firewalls provide protection for activity that is allowed through the firewall. As is true for many academic institutions, firewalls just don't make sense in our environment. By considering internal threats equally with external threats, SDSC has built an effective, scalable, host-based security model.

Of course, we're not perfect. Recently we had a compromise as part of a security incident that spanned numerous institutions. However, firewalls would have done little if anything to have mitigated that attack, and we believe our approach to security reduced the scope of compromise and helped us to recover faster than some of our peers.

The key parts to that model are centralized configuration management, regular and frequent patching, and strong authentication (no plaintext passwords). This model extends well to many environments besides the academic.

In addition, our system administration costs scale well. The incremental cost of adding a host to our network (beyond the cost of the hardware) is negligible, as is the cost of reinstalling a host.

Take back to work: How to build effective, scalable, host-based security without firewalls.

Topics include:

  • The threat perspective from a data-centric point of view: It's ultimately about protecting your data, be that user data or operating system binaries
  • How to implement and maintain centralized configuration management using Cfengine, and how to build reference systems for fast and consistent (re)installation of hosts
  • Secure configuration and management of core network services such as NFS, DNS, and SSH
  • Good system administration practices
  • Implementing strong authentication and eliminating use of plaintext passwords for services such as POP/IMAP
  • A sound patching strategy
  • An overview of the compromise, how we recovered, and what we learned
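
The centralized-configuration topic above can be made concrete with a tiny Cfengine (v2-era) sketch. The file path, mode, and action here are assumptions for illustration, not the SDSC configuration:

```
# Minimal cfengine sketch: assert ownership and permissions on one file
# on every managed host; cfengine repairs any drift it finds.
control:
  actionsequence = ( files )

files:
  /etc/ssh/sshd_config mode=0600 owner=root action=fixall
```

Run from a central policy server, a few hundred such promises make the incremental cost of adding or reinstalling a host close to zero, which is the scaling property the course emphasizes.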
Abe Singer (S7, M2, T2) is a Computer Security Researcher in the Security Technologies Group at the San Diego Supercomputer Center. In his operational security responsibilities, he participates in incident response and forensics and in improving the SDSC logging infrastructure. His research is in pattern analysis of syslog data for data mining. He is co-author of the SAGE booklet Building a Logging Infrastructure and author of a forthcoming O'Reilly book on log analysis.

Sunday Afternoon Half-Day Tutorials
S8 Perl Program Repair Shop and Red Flags
Chip Salzenberg, Consultant and Author

Who should attend: Anyone who writes Perl programs regularly. Participants should have at least three months' experience programming in Perl.

You've probably been working too hard when you program, writing twenty lines of code when you only needed ten. But there is a better way, and I will show it to you. Smaller code contains fewer bugs and takes less time to maintain.

We will examine several real code examples in detail and see how to improve them. We'll focus on "red flags"—warning signs in your code that are plainly visible once you know what to look for—and on techniques that require little complex thought or ingenuity. All the bad code in this class is guaranteed 100% genuine and typical.

Participants are encouraged to submit their own code for anonymous review in the class. (Send it to chip+usenix07@pobox.com by May 1). Class content varies depending on submissions but is sure to include some of the topics listed below.

Take back to work: How to improve your own code and the code of others, making it cleaner, more readable, more reusable, and more efficient, while at the same time making it 30–50% smaller.

Topics may include:

  • Families of variables
  • Making relationships explicit
  • Refactoring
  • Programming by convention
  • The Flesh Blanket
  • Conciseness
  • Why you should avoid the "." operator
  • Elimination of global variables
  • Superstition
  • The "use strict" zombies
  • Repressed subconscious urges
  • The cardinal rule of computer programming
  • The psychology of repeated code
  • Techniques for eliminating repeated code
  • What can go wrong with "if" and "else"
  • The Condition That Ate Michigan
  • Resisting "Holy Doctrine"
  • Trying it both ways
  • Structural vs. functional code
  • Elimination of structure
  • Boolean values
  • Programs that take two steps forward and one step back
  • Programs that are 10% backslashes
  • 'print print print print print'
  • C-style "for" loops
  • Loop counter variables
  • Array length variables
  • Unnecessary shell calls
  • How (and why) to let "undef" be the special value
  • Confusion of internal and external representations of data
  • Tool use
  • Elimination of repeated code with higher-order functions
  • Learning to use a hammer
  • The "swswsw" problem
  • Avoiding special cases
  • Using uniform data representations
Chip Salzenberg (S5, S8, M5) is Principal Engineer at Cloudmark, where he fights spam with flair and aplomb. Chip is also chief coder ("pumpking") of the Parrot virtual machine (http://parrotcode.org), with which Chip plans to bring all dynamic languages together and, in the darkness, dynamically bind them. Chip is a well-known figure in the Perl and free and open source communities, having worked on free and open source software for over 20 years, Perl for 18 years, and Linux for 13 years. Chip was pumpking for Perl release 5.4. He created the automated Linux install-and-test system for VA Linux Systems and was VA's Kernel Coordinator. Chip is a perennial presenter at the O'Reilly Open Source Conference and YAPC (Yet Another Perl Conference), teaches Perl and C++ commercially, and has been published by O'Reilly and Prentice Hall on Perl and other topics. When away from his keyboard, Chip plays with (live) parrots and trains in Krav Maga. Chip's journal is at http://pobox.com/~chip/journal/.
 
S9 Performance Tracking with Cacti
John Sellens, SYONEX

Who should attend: Network and system administrators ready to implement a graphical performance and activity monitoring tool, who prefer an integrated, Web-based interface. Participants should have an understanding of the fundamentals of networking, familiarity with computing and network components, UNIX system administration experience, and some understanding of UNIX programming and scripting languages.

This tutorial will provide in-depth instruction in the installation and configuration of Cacti, a popular Web-based tool for graphing time-series data from systems and devices on your network, using RRDTool, PHP, and MySQL.

Take back to work: The information needed to immediately implement and use Cacti to monitor systems and devices on your networks.

Topics include:

  • Installation: Basic steps, prerequisites, common problems and solutions
  • Configuration, setup options, and how to manage larger and nontrivial configurations
  • User management and access control
  • Special cases: how to deal with interesting problems
  • Extending Cacti: how to write scripts or programs to extend the functionality of the basic package
  • Security concerns and access control
  • Ongoing operations

John Sellens (S9, M6, T6) has been involved in system and network administration since 1986 and is the author of several related USENIX papers, a number of ;login: articles, and the SAGE Short Topics in System Administration booklet #7, System and Network Administration for Higher Reliability. He holds an M.Math. in computer science from the University of Waterloo and is a chartered accountant. He is the proprietor of SYONEX, a systems and networks consultancy. From 1999 to 2004, he was the General Manager for Certainty Solutions in Toronto. Prior to joining Certainty, John was the Director of Network Engineering at UUNET Canada and was a staff member in computing and information technology at the University of Waterloo for 11 years.


S10 Distributed Source Code Management Systems: Bzr, Hg, and Git
Theodore Ts'o, IBM Linux Technology Center

Who should attend: Developers, project leaders, and system administrators dealing with source code management systems who want to take advantage of the newest distributed development tools.

Are you still using CVS or SVN? Find out what you've been missing! Bzr, hg, and git are new source code management systems which, unlike CVS and SVN, do not require a single centralized server. Instead, they are peer-to-peer systems, where no one repository has any more privilege than another, other than that obtained by usage and custom. These systems have many advantages. They are perfect for people who wish to commit changes while disconnected from the network (for example, while in an airplane). In addition, there is no need for commit rights before a new developer can become a first-class user of the SCM system. Instead, the developer simply clones a copy of the repository on his local machine, makes changes, and commits them to the repository. These changes are then pushed to the maintainer, who reviews them before merging them into his local repository. In larger projects, a hierarchical system can be used, where a changeset may be approved by a subsystem maintainer, who must then forward the changeset to a higher-level maintainer for approval for the changeset to enter the project master repository.
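
That clone/commit/push cycle looks like the following with git; hg and bzr follow the same pattern with their own commands. Repository names and the commit below are invented for illustration:

```shell
# A maintainer's repository stands in for "upstream"; no central,
# always-online server is required.
git init -q --bare upstream.git
# A new developer needs no commit rights -- just a clone.
git clone -q upstream.git work
cd work
echo 'small fix' > patch.txt
git add patch.txt
# Commits are purely local, so this works offline (e.g., on an airplane).
git -c user.name=dev -c user.email=dev@example.com commit -q -m 'first local commit'
# Later, the change is pushed (or mailed) back for the maintainer's review.
git push -q origin HEAD
cd ..
```

In the hierarchical model described above, the push simply targets a subsystem maintainer's repository instead of the project master.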

These attributes make distributed SCMs an ideal match for open source software projects. Indeed, hg and git were created specifically for the Linux kernel developers. Today, projects such as Solaris, Xen, moinmoin, Alsa, and e2fsprogs use Mercurial; Linux, Cairo, Wine, X.org, and XMMS2 use git; and Ubuntu and Drupal use bzr.

Take back to work: An understanding of distributed SCMs, how to use them, and how to choose the distributed SCM that is most appropriate for your project.

Topics include:

  • Basic concepts of distributed SCMs
  • Advantages of peer-to-peer systems
  • How distributed SCMs work
  • Strengths and weaknesses of each distributed SCM
  • Guidance on choice criteria
Theodore Ts'o (S10, W4) has been a Linux kernel developer since almost the very beginning of Linux: he implemented POSIX job control in the 0.10 Linux kernel. He is the maintainer and author of the Linux COM serial port driver and the Comtrol Rocketport driver, and he architected and implemented Linux's tty layer. Outside of the kernel, he is the maintainer of the e2fsck filesystem consistency checker. Ted is currently employed by the IBM Linux Technology Center.
Monday, June 18, 2007
Full-Day Tutorials
M1 Administering Linux in Production Environments
Æleen Frisch, Exponential Consulting

Who should attend: Both current Linux system administrators and administrators from sites considering converting to Linux or adding Linux systems to their current computing resources.

Linux has graduated from being a "toy" operating system favored by hobbyists into a production-level operating system embraced by major corporations such as IBM, Novell, and Amazon. It is used on the front line for mission-critical applications, and mastery of this operating system is now becoming a major asset to system administrators.

Linux system administrators in production environments face many challenges: the inevitable skepticism about whether an open source operating system will perform as required; how well Linux systems will integrate with existing computing facilities; how to locate, install, and manage high-end features which the standard distributions may lack; and many more. Sometimes the hardest part of ensuring that the system meets production requirements is matching the best solution with the particular local need. This course is designed to give you a broad knowledge of production-worthy Linux capabilities, as well as where Linux currently falls short. The material in the course is all based on extensive experience with production systems.

This course will cover configuring and managing Linux computer systems in production environments. We will be focusing on the administrative issues that arise when Linux systems are deployed to address a variety of real-world tasks and problems arising from both commercial and research and development contexts.

Take back to work: The ability to select the appropriate facilities for use of Linux in your environment and to begin deploying them.

Topics include:

  • Recent kernel developments
  • High-performance I/O
    • Advanced file systems and logical volumes
    • Disk striping
    • Optimizing I/O performance
  • Advanced computer-server environments
    • Beowulf
    • Clustering
    • Parallelization environments/facilities
    • CPU performance optimization
  • High availability Linux: fault-tolerance options
  • Enterprise-wide authentication and other security features
  • Automating installations and other mass operations
  • Linux performance tuning

Æleen Frisch (M1, T5) has been a system administrator for over 20 years. She currently looks after a pathologically heterogeneous network of UNIX and Windows systems. She is the author of several books, including Essential System Administration (now in its 3rd edition). Æleen was the program committee chair for LISA '03 and is a frequent presenter at USENIX and SAGE events, as well as presenting classes for universities and corporations worldwide.


M2 Building a Logging Infrastructure and Log Analysis for Security
Abe Singer, San Diego Supercomputer Center

Who should attend: System, network, and security administrators who want to be able to separate the wheat of warning information from the chaff of normal activity in their log files.

This tutorial will show the importance of log files for maintaining system security and general well-being, offer some strategies for building a centralized logging infrastructure, explain some of the types of information that can be obtained for both real-time monitoring and forensics, and teach techniques for analyzing log data to obtain useful information.

The devices on a medium-sized network can generate millions of lines of log messages a day. Although much of the information is normal activity, hidden within that data can be the first signs of an intrusion, denial of service, worms/viruses, and system failures.

Take back to work: How to get a handle on your log files, which can help you run your systems and networks more effectively and can provide forensic information for post-incident investigation.

Topics include:

  • Problems, issues, and scale of handling log information
  • Generating useful log information: improving the quality of your logs
  • Collecting log information
    • syslog and friends
    • Building a log host
    • Integrating MS Windows into a UNIX log architecture
  • Storing log information
    • Centralized log architectures
    • Log file archiving
  • Log analysis
    • Log file parsing tools
    • Data analysis of log files (e.g., baselining)
    • Attack signatures and other interesting things to look for in your logs
  • Legal issues: How to handle and preserve log files for human resources issues and legal matters
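As a taste of the baselining idea above, a few lines of awk can turn raw syslog lines into a per-program message count; the sample data, file path, and field layout here are illustrative:

```shell
# Count messages per program from syslog-format lines -- a first step toward
# baselining what "normal" volume looks like (sample data is made up).
cat > /tmp/sample.log <<'EOF'
Jun 17 10:01:02 host1 sshd[123]: Accepted publickey for alice
Jun 17 10:01:05 host1 sshd[124]: Failed password for root
Jun 17 10:01:07 host1 cron[88]: (root) CMD (run-parts)
Jun 17 10:01:09 host1 sshd[125]: Failed password for root
EOF

# Field 5 is "program[pid]:"; strip everything after the "[" and tally.
awk '{ split($5, a, "["); counts[a[1]]++ }
     END { for (p in counts) print counts[p], p }' /tmp/sample.log | sort -rn
```

Once you know the normal daily counts per program, a sudden spike (or silence) stands out immediately.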
Abe Singer (S7, M2, T2) is a Computer Security Researcher in the Security Technologies Group at the San Diego Supercomputer Center. In his operational security responsibilities, he participates in incident response and forensics and in improving the SDSC logging infrastructure. His research is in pattern analysis of syslog data for data mining. He is co-author of the SAGE booklet Building a Logging Infrastructure and author of a forthcoming O'Reilly book on log analysis.


M3 Hands-on Linux Security Class: From Hacked to Secure in Two Days (Day 2 of 2)
Rik Farrow, Security Consultant

Who should attend: System administrators of Linux and other UNIX systems; anyone who runs a public UNIX server.

We will work with systems that have been hacked and include hidden files, services, and evidence of intrusions. You will learn how to uncover exploited systems and properly secure them. You will perform hands-on exercises with dual-use tools that can replicate what intruders do as well as improve security. The tools vary from the ordinary, such as ps, find, and strings, to less familiar but important ones, such as lsof, various scanners, sniffers, and the Sleuth Kit.

The lecture portion of this class covers the background you need to understand UNIX security principles, TCP/IP, scanning, and popular attack strategies, as well as defenses for networks and individual systems. The class will end with a discussion of the use of patching and vulnerability assessment tools.

Day 2 will begin with a look at passwords, including a quick spin with John the Ripper. Next, we will examine SUID files as potential backdoors and show how to bypass the common defense against these backdoors. Network services provide the necessary access for attackers, and we will practice determining exactly which services are necessary and how UNIX systems should be hardened. Tools that look for rootkits, often the most subtle way for an attacker to maintain a presence, have their weak points. We will learn both about rootkits and how to search for them. Then we will look at the output of the Sleuth Kit to discover what happened, and when, on poorly secured systems. Finally, we will look at other defensive software, including firewalls (netfilter), patching, and vulnerability scanning.

Take back to work: How to uncover the more subtle indicators of compromise, such as backdoors and rootkits, and improve the security of your networked systems.

Topics include:

  • John the Ripper, password cracking
  • Misuses of suid shells, finding backdoors
  • Uncovering dangerous network services
  • Searching for evidence of rootkits and bots
  • Sleuth Kit (looking at intrusion timelines)
  • netfilter
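As a small preview of the backdoor hunting covered above, find can locate set-uid executables. This sketch scans a scratch directory rather than /, and the file names are illustrative:

```shell
# Hunt for set-uid executables, a classic backdoor. In practice you would
# scan from / and compare against a known-good inventory.
mkdir -p /tmp/suid-demo
cp /bin/sh /tmp/suid-demo/innocent-looking
chmod 4755 /tmp/suid-demo/innocent-looking    # turn on the set-uid bit

# -perm -4000 matches any file with the set-uid bit set.
find /tmp/suid-demo -type f -perm -4000 -ls
```

An unexpected set-uid copy of a shell is one of the simplest backdoors an intruder can leave behind, which is why periodic SUID sweeps are standard hardening practice.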
Laptop Requirements: Class exercises require that you have an x86-based laptop computer that can be booted from a CD. Students will receive a Live CD (KNOPPIX) that includes the tools, files, and exercises required for the course. You can download KNOPPIX yourself (v3.9) and see if your laptop is supported. Some people have come without laptops and teamed up with friendly laptop users.

Rik Farrow (S3, M3) provides UNIX and Internet security consulting and training. He has been working with UNIX system security since 1984 and with TCP/IP networks since 1988. He has taught at the IRS, Department of Justice, NSA, NASA, US West, Canadian RCMP, Swedish Navy, and for many U.S. and European user groups. He is the author of UNIX System Security and System Administrator's Guide to System V. Farrow is the editor of ;login: and works passionately to improve the state of computer security.


M4 System and Network Performance Tuning
Marc Staveley, Soma Networks

Who should attend: Novice and advanced UNIX system and network administrators, and UNIX developers concerned about network performance impacts. A basic understanding of UNIX system facilities and network environments is assumed.

We will explore procedures and techniques for tuning systems, networks, and application code. Starting from the single system view, we will examine how the virtual memory system, the I/O system, and the file system can be measured and optimized. We'll extend the single host view to include Network File System tuning and performance strategies. Detailed treatment of networking performance problems, including network design and media choices, will lead to examples of network capacity planning. Application issues, such as system call optimization, memory usage and monitoring, code profiling, real-time programming, and techniques for controlling response time will be addressed. Many examples will be given, along with guidelines for capacity planning and customized monitoring based on your workloads and traffic patterns. Question and analysis periods for particular situations will be provided.

Take back to work: Procedures and techniques for tuning your systems, networks, and application code, along with guidelines for capacity planning and customized monitoring.

Topics include:

  • Performance tuning strategies
    • Practical goals
    • Monitoring intervals
    • Useful statistics
    • Tools, tools, tools
  • Server tuning
    • Filesystem and disk tuning
    • Memory consumption and swap space
    • System resource monitoring
  • NFS performance tuning
    • NFS server constraints
    • NFS client improvements
    • NFS over WANs
    • Automounter and other tricks
  • Network performance, design, and capacity planning
    • Locating bottlenecks
    • Demand management
    • Media choices and protocols
    • Network topologies: bridges, switches, and routers
    • Throughput and latency considerations
    • Modeling resource usage
  • Application tuning
    • System resource usage
    • Memory allocation
    • Code profiling
    • Job scheduling and queuing
    • Real-time issues
    • Managing response time
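As a minimal sketch of the monitoring theme above, awk can flag memory pressure in vmstat-style output. The sample data is made up, and the column positions (si in field 7, so in field 8) follow a common Linux vmstat layout that may differ on your system:

```shell
# Flag sampling intervals with nonzero swap-in/swap-out -- the classic sign
# of memory pressure -- from saved vmstat-style output (sample data below).
cat > /tmp/vmstat.out <<'EOF'
 r  b   swpd   free   buff  cache   si   so    bi    bo
 1  0      0 812345  10240  52000    0    0     3     8
 4  1  20480  10112   2048  18000  120  340    55    90
EOF

# Skip the header row, then test the si and so columns.
awk 'NR > 1 && ($7 > 0 || $8 > 0) { print "memory pressure: si=" $7 " so=" $8 }' /tmp/vmstat.out
```

The same pattern, run from cron against periodic snapshots, is a cheap way to build the customized monitoring the tutorial describes.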

Marc Staveley (M4) works with Soma Networks, where he is applying his many years of experience with UNIX development and administration in leading their IT group. Previously Marc had been an independent consultant and also held positions at Sun Microsystems, NCR, Princeton University, and the University of Waterloo. He is a frequent speaker on the topics of standards-based development, multi-threaded programming, system administration, and performance tuning.

Monday SANS Security Tutorial
504.2 Computer and Network Hacker Exploits: Part 1
John Strand, Northrop Grumman

See the full description.

Monday Morning Half-Day Tutorials
M5 Regular Expression Mastery
Chip Salzenberg, Consultant and Author

Who should attend: System administrators and users who use Perl, grep, sed, awk, procmail, vi, or emacs.

The first section of the class will explore the matching algorithms used internally by common utilities such as grep and Perl. Understanding these algorithms will allow us to predict whether a regex will match, which of several possible matches will be found, and which regexes are likely to be faster than others, and to understand why all of these behaviors occur. We'll learn why commonly used regex symbols such as ".", "$", and "\1" may not mean what you thought they did.

In the second section, we'll look at common matching disasters, a few practical parsing applications, and some advanced Perl features. We'll finish with a discussion of optimizations that were added to Perl 5.6, and why you should avoid using "/i".

Take back to work: Fixes for all your regexes: unexpected results, hangs, unpredictable behaviors . . .

Topics include:

  • Inside the regex engine
    • Regular expressions are programs
    • Backtracking
    • NFA vs. DFA
    • POSIX and Perl
    • Quantifiers
    • Greed and anti-greed
    • Anchors and assertions
    • Backreferences
  • Disasters and optimizations
    • Where machines come from
    • Disaster examples
    • Tokenizing
    • New optimizations
    • Matching strings with balanced parentheses
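The greed covered above can be seen with nothing more than sed (GNU sed's -E extended-regex option assumed):

```shell
# Greedy ".*" grabs the longest possible match, so a single substitution
# swallows the whole line; "[^>]*" must stop at the first ">", so each
# tag is matched and replaced separately.
echo '<a href="x">text</a>' | sed -E 's/<.*>/TAG/'        # greedy: prints TAG
echo '<a href="x">text</a>' | sed -E 's/<[^>]*>/TAG/g'    # bounded: prints TAGtextTAG
```

The same distinction explains many "my regex matched too much" surprises in Perl, grep, and procmail alike.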
Chip Salzenberg (S5, S8, M5) is Principal Engineer at Cloudmark, where he fights spam with flair and aplomb. Chip is also chief coder ("pumpking") of the Parrot virtual machine (http://parrotcode.org), with which Chip plans to bring all dynamic languages together and, in the darkness, dynamically bind them. Chip is a well-known figure in the Perl and free and open source communities, having worked on free and open source software for over 20 years, Perl for 18 years, and Linux for 13 years. Chip was pumpking for Perl release 5.4. He created the automated Linux install-and-test system for VA Linux Systems and was VA's Kernel Coordinator. Chip is a perennial presenter at the O'Reilly Open Source Conference and YAPC (Yet Another Perl Conference), teaches Perl and C++ commercially, and has been published by O'Reilly and Prentice Hall on Perl and other topics. When away from his keyboard, Chip plays with (live) parrots and trains in Krav Maga. Chip's journal is at http://pobox.com/~chip/journal/.
 
M6 Databases: What You Need to Know
John Sellens, SYONEX

Who should attend: System and application administrators who need to support databases and database-backed applications.

Databases used to run almost exclusively on dedicated database servers, with one or more database administrators (DBAs) dedicated to their care. These days, with the easy availability of database software such as MySQL and PostgreSQL, databases are popping up in many more places and are used by many more applications.

As a system administrator you need to understand databases, their care and feeding.

Take back to work: A better understanding of databases and their use and of how to deploy and support common database software and database-backed applications.

Topics include:

  • An introduction to database concepts
  • The basics of SQL (Structured Query Language)
  • Common applications of databases
  • Berkeley DB and its applications
  • MySQL installation, configuration, and management
  • PostgreSQL installation, configuration, and management
  • Security, user management, and access controls
  • Ad hoc queries with standard interfaces
  • ODBC and other access methods
  • Database access from other tools (Perl, PHP, sqsh, etc.)
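As a taste of the SQL basics above, here is a create/populate/query round trip using the sqlite3 command-line shell (assuming it is installed; MySQL and PostgreSQL accept essentially the same SQL through their own clients, and the table here is illustrative):

```shell
# Create a tiny database and table, then load two rows.
sqlite3 /tmp/demo.db <<'EOF'
CREATE TABLE hosts (name TEXT, os TEXT);
INSERT INTO hosts VALUES ('web1', 'Linux');
INSERT INTO hosts VALUES ('db1',  'Solaris');
EOF

# Ad hoc query straight from the command line:
sqlite3 /tmp/demo.db "SELECT name FROM hosts WHERE os = 'Linux';"
```

This command-line pattern is exactly the kind of ad hoc querying the tutorial covers, before moving on to access from Perl, PHP, and ODBC.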
John Sellens (S9, M6, T6) has been involved in system and network administration since 1986 and is the author of several related USENIX papers, a number of ;login: articles, and the SAGE Short Topics in System Administration booklet #7, System and Network Administration for Higher Reliability. He holds an M.Math. in computer science from the University of Waterloo and is a chartered accountant. He is the proprietor of SYONEX, a systems and networks consultancy. From 1999 to 2004, he was the General Manager for Certainty Solutions in Toronto. Prior to joining Certainty, John was the Director of Network Engineering at UUNET Canada and was a staff member in computing and information technology at the University of Waterloo for 11 years.
 
M7 Disk-to-Disk Backup and Eliminating Backup System Bottlenecks
Jacob Farmer, Cambridge Computer Services

Who should attend: System administrators involved in the design and management of backup systems and policymakers responsible for protecting their organization's data. A general familiarity with server and storage hardware is assumed. The class focuses on architectures and core technologies and is relevant regardless of what backup hardware and software you currently use.

The data protection industry is going through a mini-renaissance. In the past few years, the cost of disk media has dropped to the point where it is practical to use disk arrays in backup systems, thus minimizing and sometimes eliminating the need for tape. In the first incarnations of disk-to-disk backup—disk staging and virtual tape libraries—disk has been used as a direct replacement for tape media. While this compensates for the mechanical shortcomings of tape drives, it fails to address other critical bottlenecks in the backup system, and thus many disk-to-disk backup projects fall short of expectations. Meanwhile, many early adopters of disk-to-disk backup are discovering that the long-term costs of disk staging and virtual tape libraries are prohibitive.

The good news is that the next generation of disk-enabled data protection solutions has reached a level of maturity where they can assist—and sometimes even replace—conventional enterprise backup systems. These new D2D solutions leverage the random access properties of disk devices to use capacity much more efficiently and to obviate many of the hidden backup-system bottlenecks that are not addressed by first-generation solutions. The challenge to the backup system architect is to cut through the industry hype, sort out all of these new technologies, and figure out how to integrate them into an existing backup system.

This tutorial identifies the major bottlenecks in conventional backup systems and explains how to address them. The emphasis is placed on the various roles for inexpensive disk in your data protection strategy; however, attention is given to SAN-enabled backup, the current state and future of tape drives, and iSCSI.

Take back to work: Ideas for immediate, effective, inexpensive improvements to your backup systems.

Topics include:

  • Identifying and eliminating backup system bottlenecks
  • Conventional disk staging
  • Virtual tape libraries
  • Removable disk media
  • Incremental forever and synthetic full backup strategies
  • Block- and object-level incremental backups
  • Information lifecycle management and nearline archiving
  • Data replication
  • CDP (Continuous Data Protection)
  • Snapshots
  • Current and future tape drives
  • Capacity Optimization (Single-Instance File Systems)
  • Minimizing and even eliminating tape drives
  • iSCSI
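One idea behind the incremental-forever and snapshot strategies above can be sketched with plain GNU cp: a hard-link copy makes unchanged files cost no extra space. Real D2D tools add change detection, rotation, and cataloguing; the paths here are illustrative:

```shell
# Disk-to-disk "snapshot" sketch: cp -al builds a tree of hard links, so a
# full-looking snapshot of unchanged data consumes almost no new space.
mkdir -p /tmp/d2d/source
echo "payload" > /tmp/d2d/source/data.txt

cp -al /tmp/d2d/source /tmp/d2d/snap-monday

# Both paths now point at the same on-disk blocks (link count is 2):
ls -l /tmp/d2d/snap-monday/data.txt
```

Subsequent snapshots only pay for files that actually changed, which is why disk capacity stretches so much further than the naive "copy everything nightly" model suggests.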

Jacob Farmer (M7, M10) is a well-known figure in the data storage industry. He has written numerous papers and articles and is a regular speaker at trade shows and conferences. In addition to his regular expert advice column in the "Reader I/O" section of InfoStor Magazine, the leading trade magazine of the data storage industry, Jacob also serves as the publication's senior technical advisor. Jacob has over 18 years of experience with storage technologies and is the CTO of Cambridge Computer Services, a national integrator of data storage and data protection solutions.


M8 Practical Project Management for Sysadmins and IT Professionals
Strata Rose Chalup, Project Management Consultant

Who should attend:
System administrators who want to stay hands-on as team leads or system architects and need a new set of skills with which to tackle bigger, more complex challenges. No previous experience with project management is required.

People who have been through traditional multi-day project management courses will be shocked, yet refreshed, by the practicality of our approach. To get the most out of this tutorial, participants should have some real-world project or complex task in mind for the lab sections.

This tutorial focuses on complementing your own organizational style (or lack thereof) with a toolbox of ways to organize and manage complex tasks without drowning in paperwork or clumsy, meeting-intensive methodologies. Also emphasized is how to bridge the gap between ad hoc methods and the kinds of tracking and reporting traditionally trained managers will understand.

Take back to work: A no-nonsense grounding in methods that work without adding significantly to one's workload. You will be able to take an arbitrarily daunting task and reduce it to a plan of attack that will be realistic, will lend itself to tracking, and will have functional, documented goals. You will be able to give succinct and useful feedback to management on overall project viability and timelines and easily deliver regular progress reports.

Topics include:

  • Quick basics of project management
    • The essentials you need to know
    • How to map the essentials onto real-world projects
  • Skill sets
    • Defining success
    • Chunking and milestoning
    • Delegating
    • Tracking
    • Reporting
  • Problem areas
    • Teams, interactions among people
    • The albatross project
    • When to go deep and when to get "pointy-haired"
    • When disaster strikes, should you scrap, or salvage?
  • Project management tools
    • What tools should do for you
    • Leveraging the command line: UNIX PM
    • Freeware PM tool options
    • The only 15 minutes of MS Project you'll ever need

Strata Rose Chalup (S6, M8) has been leading and managing complex IT projects for many years, serving in roles ranging from Project Manager to Director of Network Operations. She has written a number of articles on management and working with teams and has applied her management skills on various volunteer boards, including BayLISA and SAGE. Strata has a keen interest in network information systems and new publishing technologies and built a successful consulting practice around being an avid early adopter of new tools, starting with ncsa_httpd and C-based CGI libraries in 1993 and moving on to wikis, RSS readers, and blogging. Another MIT dropout, Strata founded VirtualNet Consulting in 1993.


M9 Ethereal and the Art of Debugging Networks
Gerald Carter, Centeris/Samba Team

Who should attend:
System and network administrators who are interested in learning more about the TCP/IP protocol and how network traffic monitoring and analysis can be used as a debugging, auditing, and security tool.

System logs can turn out to be incomplete or incorrect when you're trying to track down network application failures. Sometimes the quickest, or the only, way to find the cause is to look at the raw data on the wire. This course is designed to help you make sense of that data.

Take back to work: How to use the Ethereal protocol analyzer as a debugging and auditing tool for TCP/IP networks.

Topics include:

  • Introduction to Ethereal for local and remote network tracing
  • TCP/IP protocol basics
  • Analysis of popular application protocols such as DNS, DHCP, HTTP, NFS, CIFS, and LDAP
  • How some kinds of TCP/IP network attacks can be recognized
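For a flavor of what the class works with, here are a capture filter (tcpdump syntax) and a few display filters in Ethereal's own filter language; the expressions are illustrative:

```
# Capture filter: record only DNS and HTTP traffic
port 53 or port 80

# Display filters, applied to a capture after the fact:
http.request.method == "GET"    # HTTP GET requests only
dns.flags.rcode != 0            # DNS responses reporting an error
tcp.analysis.retransmission     # retransmitted TCP segments
```

Capture filters limit what hits the disk; display filters let you slice an existing trace repeatedly without recapturing.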

Gerald Carter (M9, W2, R3) has been a member of the Samba Development Team since 1998. He has been developing, writing about, and teaching on open source since the late 1990s. Currently employed by Centeris as a Samba and open source developer, Gerald has written books for SAMS Publishing and for O'Reilly Publishing.

 


M10 Next Generation Storage Networking
Jacob Farmer, Cambridge Computer Services

Who should attend:
Sysadmins running day-to-day operations and those who set or enforce budgets. This tutorial is technical in nature, but it does not address command-line syntax or the operation of specific products or technologies. Rather, the focus is on general architectures and various approaches to scaling in both performance and capacity. Since storage networking technologies tend to be costly, there is some discussion of the relative cost of different technologies and of strategies for managing cost and achieving results on a limited budget.

There has been tremendous innovation in the data storage industry over the past few years. Proprietary, monolithic SAN and NAS solutions are beginning to give way to open-system solutions and distributed architectures. Traditional storage interfaces such as parallel SCSI and Fibre Channel are being challenged by iSCSI (SCSI over TCP/IP), SATA (serial ATA), SAS (serial attached SCSI), and even Infiniband. New filesystem designs and alternatives to NFS and CIFS are enabling high-performance filesharing measured in gigabytes (yes, "bytes," not "bits") per second. New spindle management techniques are enabling higher-performance and lower-cost disk storage. Meanwhile, a whole new set of efficiency technologies are allowing storage protocols to flow over the WAN with unprecedented performance. This tutorial is a survey of the latest storage networking technologies, with commentary on where and when these technologies are most suitably deployed.

Take back to work: An understanding of general architectures, various approaches to scaling in both performance and capacity, relative costs of different technologies, and strategies for achieving results on a limited budget.

Topics include:

  • Fundamentals of storage virtualization: the storage I/O path
  • Shortcomings of conventional SAN and NAS architectures
  • In-band and out-of-band virtualization architectures
  • The latest storage interfaces: SATA (serial ATA), SAS (serial attached SCSI), 4Gb Fibre Channel, Infiniband, iSCSI
  • Content-Addressable Storage (CAS)
  • Information Life Cycle Management (ILM) and Hierarchical Storage Management (HSM)
  • The convergence of SAN and NAS
  • High-performance file sharing
  • Parallel file systems
  • SAN-enabled file systems
  • Wide-area file systems (WAFS)

Jacob Farmer (M7, M10) is a well-known figure in the data storage industry. He has written numerous papers and articles and is a regular speaker at trade shows and conferences. In addition to his regular expert advice column in the "Reader I/O" section of InfoStor Magazine, the leading trade magazine of the data storage industry, Jacob also serves as the publication's senior technical advisor. Jacob has over 18 years of experience with storage technologies and is the CTO of Cambridge Computer Services, a national integrator of data storage and data protection solutions.

Tuesday, June 19, 2007
Full-Day Tutorials
T1 Configuring and Deploying Linux-HA
Alan Robertson, IBM Linux Technology Center

Who should attend: System administrators and IT architects who architect, evaluate, install, or manage critical computing systems. It is suggested that participants have basic familiarity with system V/LSB-style startup scripts, shell scripting, and XML. Familiarity with high availability concepts is not assumed.

The Linux-HA project (http://linux-ha.org/) is the oldest and most powerful open source high-availability (HA) package available, comparing favorably to well-known commercial HA packages. Although the project is called Linux-HA (or "heartbeat"), it runs on a variety of POSIX-like systems, including FreeBSD, Solaris, and OS X.

Linux-HA provides highly available services on clusters from one to more than 16 nodes with no single point of failure. These services and the servers they run on are monitored. If a service should fail to operate correctly, or a server should fail, the affected services will be quickly restarted or migrated to another server, dramatically improving service availability.

Linux-HA supports rules for expressing dependencies between services, and powerful rules for locating services in the cluster. Because these services are derived from init service scripts, they are familiar to system administrators and are easy to configure and manage.

Take back to work: Both the basic theory of high availability systems and practical knowledge of how to plan, install, and configure highly available systems using Linux-HA.

Topics include:

  • General HA principles
  • Compilation and installation of the Linux-HA ("heartbeat") software
  • Overview of Linux-HA configuration
  • Overview of commonly used resource agents
  • Managing services supplied with init(8) scripts
  • Sample Linux-HA configurations for Apache, NFS, DHCP, DNS, and Samba
  • Writing and testing resource agents conforming to the Open Cluster Framework (OCF) specification
  • Creating detailed resource dependencies
  • Creating co-location constraints
  • Writing resource location constraints
  • Causing failovers on user-defined conditions
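For a sense of what a configuration looks like, here is a minimal heartbeat v1-style sketch for a two-node cluster. Node names, the interface, and the service are illustrative, and v2-style clusters express the same ideas in an XML CIB instead:

```
# /etc/ha.d/ha.cf -- cluster communication and timing
node    alpha beta
bcast   eth0           # heartbeat over broadcast on eth0
keepalive 2            # seconds between heartbeats
deadtime 30            # declare a node dead after 30s of silence
auto_failback on

# /etc/ha.d/haresources -- alpha normally owns the IP and the Apache service
alpha 192.168.1.10 apache
```

Because "apache" here is just an init-style service script, the resource is exactly as familiar to administrators as the tutorial description promises.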
Alan Robertson (T1) founded the High-Availability Linux (Linux-HA) project in 1998 and has been project leader for it since then. He worked for SuSE for a year, then in March 2001 joined IBM's Linux Technology Center, where he works on Linux-HA full time. Before joining SuSE, he was a Distinguished Member of Technical Staff at Bell Labs. He worked for Bell Labs for 21 years in a variety of roles, including providing leading-edge computing support, writing software tools, and developing voicemail systems.


T2 Incident Response
Abe Singer, San Diego Supercomputer Center

Who should attend: Security folks, system administrators, and operations staff (e.g., help desk). Examples are primarily from UNIX systems, but most of what is discussed will be operating system neutral. Note that this is not a forensics class. Although some forensic analysis will be discussed, especially with regard to examples, it is only a small portion of the class.

You get a complaint that seems to indicate that you have one or more compromised machines. What do you do? Where do you start? How do you proceed? Do you have the tools that you need and the authority to use them?

Responding to an incident can be very stressful and, without the right tools and procedures in place in advance, very difficult. It can be easy to panic, and there is a lot of pressure to "do something" even when you don't know what's actually going on. Often, sites that do react rashly end up in a worse state and do not completely remove the intruder from their systems.

This course will cover putting together a comprehensive incident response program, from identifying the policies and tools you need, to assessing the situation and determining an effective, measured response. Some examples from real intrusions will be provided.

Take back to work: An understanding of how to prepare for security incidents and how to handle incidents in an organized, effective manner, without panicking.

Topics include:

  • Goals: What results do you want?
  • Policies: Having the authority to do the job
  • Tools: Having the stuff to do the job
  • Intelligence: Having the information to do the job
  • Initial suspicion: Complaints, alarms, anomalies
  • The "Oh, sh*t" moment: When you realize it's a compromise
  • Gathering information on your attacker
  • Assessing the extent of the compromise
  • Communicating: Inquiring minds want to know
  • Recovery: Kicking 'em out and fixing the damage
  • Evidence handling
  • The law: Dealing with law enforcement, lawyers, and HR
Abe Singer (S7, M2, T2) is a Computer Security Researcher in the Security Technologies Group at the San Diego Supercomputer Center. In his operational security responsibilities, he participates in incident response and forensics and in improving the SDSC logging infrastructure. His research is in pattern analysis of syslog data for data mining. He is co-author of the SAGE booklet Building a Logging Infrastructure and author of a forthcoming O'Reilly book on log analysis.


T3 NFSv4 and Cluster File Systems
Peter Honeyman, CITI, University of Michigan

Who should attend: System builders developing storage solutions for high-end computing, system administrators who need to anticipate and understand the state of the art in high-performance storage protocols and technologies, and researchers looking for an intensive introduction to an exciting and fertile area of R&D.

Symmetric parallel file systems coordinate sharing in a back end to present an identical view of storage across multiple front-end nodes. Stateless NFS file servers mesh well with cluster file systems to provide scalable remote access. But NFSv4, with its delegations, locks, and share reservations, requires shared server state to be effectively coordinated as well. Furthermore, realizing the scalability of cluster file systems also demands a solution to the single server bottleneck inherent to client/server architectures.

Take back to work: Knowledge of the challenges and solutions in marrying NFSv4 with cluster file systems.

Topics include:

  • Features of NFSv4 and cluster file systems
  • Major coordination issues of locking, delegation, and shares, giving special attention to fair queueing for NFSv4, NLM, and POSIX locks
  • Efficient client recovery and migration for NFSv4 on cluster file systems
  • An introduction to pNFS, the emerging parallel extension to NFSv4, which offers the potential to deliver the bisectional bandwidth of a cluster file system to a single client
Peter Honeyman (T3) is Research Professor of Information at the University of Michigan and Scientific Director of the Center for Information Technology Integration, where he leads a team of scientists, engineers, and students developing the Linux-based open source reference implementation of NFSv4 and its extensions for high-end computing. With 25 years of experience building middleware for file systems, security, and mobile computing—including Honey DanBer UUCP, PathAlias, MacNFS, Disconnected AFS, and WebCard (the first Internet smart card)—Honeyman is regarded as one of the world's leading experimental computer scientists.


T4 Solaris 10 Performance, Observability, and Debugging
James Mauro and Richard McDougall, Sun Microsystems

Who should attend: Anyone who supports or may support Solaris 10 machines.

Take back to work: How to apply the tools and utilities available in Solaris 10 to resolve performance issues and pathological behavior, and simply to understand the system and workload better.

Topics include:

  • Solaris 10 features overview
  • Solaris 10 tools and utilities
    • The conventional stat tools (mpstat, vmstat, etc.)
    • The procfs tools (ps, prstat, map, pfiles, etc.)
    • lockstat and plockstat
    • Using kstat
    • DTrace, the Solaris dynamic tracing facility
    • Using mdb in a live system
  • Understanding memory use and performance
  • Understanding thread execution flow and profiling
  • Understanding I/O flow and performance
  • Looking at network traffic and performance
  • Application and kernel interaction
  • Putting it all together
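For a taste of the DTrace material, two widely circulated one-liners; they require a Solaris 10 host and appropriate privileges:

```
# Which processes are making the most system calls?
dtrace -n 'syscall:::entry { @[execname] = count(); }'

# Distribution of read(2) return sizes, aggregated per process:
dtrace -n 'syscall::read:return { @[execname] = quantize(arg0); }'
```

Aggregations like count() and quantize() run in the kernel, which is why DTrace can answer questions like these on production systems with negligible overhead.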

James Mauro (T4) is a Senior Staff Engineer in the Performance and Availability Engineering group at Sun Microsystems. Jim's current interests and activities are centered on benchmarking Solaris 10 performance, workload analysis, and tool development. This work includes Sun's new Opteron-based systems and multicore performance on Sun's Chip Multithreading (CMT) Niagara processor. Jim resides in Green Brook, New Jersey, with his wife and two sons. He spent most of his spare time in the past year working on the second edition of Solaris Internals. Jim co-authored the first edition of Solaris Internals with Richard McDougall and has been writing about Solaris in various forums for the past eight years.

Richard McDougall (T4), had he lived 100 years ago, would have had the hood open on the first four-stroke internal combustion gasoline-powered vehicle, exploring new techniques for making improvements. He would be looking for simple ways to solve complex problems and helping pioneering owners understand how the technology works to get the most from their new experience. These days, McDougall uses technology to satisfy his curiosity. He is a Distinguished Engineer at Sun Microsystems, specializing in operating systems technology and system performance. He is co-author of Solaris Internals (Prentice Hall PTR, 2000) and Resource Management (Sun Microsystems Press, 1999).


T5 Beyond Shell Scripts: 21st-Century Automation Tools and Techniques
Æleen Frisch, Exponential Consulting

Who should attend: System administrators who want to explore new ways of automating administrative tasks.

Although a good system administrator will be proficient in creating shell scripts to solve specific problems and automate routine tasks, that skill alone is no longer sufficient for the automation requirements in typical 21st-century computing environments. As system administration has moved from an informal, poorly defined, and widely varying job title to a recognized and respected profession, so its processes and procedures have developed from homegrown, ad hoc, single-purpose strategies into systematic, wide-ranging ones supported by powerful and well-developed software tools. This course introduces you to several enterprise-worthy, open source administrative packages, each of which supports the configuration, management, and/or monitoring of a specific aspect of system functioning.

Take back to work: You will be ready to begin using these packages in your own environment and to realize the efficiency, reliability, and thoroughness that they offer compared to traditional approaches.

Topics include:

  • Cfengine
    • Basic and advanced configurations
    • Sample uses, including installations and beyond; "self-healing" configs; data collection; and more
    • Cfengine limitations: when not to use it
  • Expect: automating interactive processes
    • What to Expect . . .
    • Using Expect with other tools
    • Security issues
  • Bacula, an enterprise backup management facility
    • Prerequisites
    • Configuration
    • Getting the most from Bacula
  • Network and system monitoring tools
    • SNMP Overview
    • Nagios: Monitoring network and device performance
    • RRDTool: Examining retrospective system data
    • Ethereal: Monitoring network data
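For simple cases, Expect's niche of driving programs that insist on a human at the keyboard can be approximated with nothing but pipes. A minimal Python sketch (the child program here is invented for illustration; Expect itself goes further by allocating a pseudo-terminal, pattern-matching the program's output, and handling timeouts, which is why it can automate tools such as passwd that talk directly to the terminal):

```python
import subprocess
import sys

# Drive a small interactive child program by scripting its stdin.
# The child is a made-up example: it prompts for a name and greets
# the user.
child = subprocess.run(
    [sys.executable, "-c",
     "name = input('login: '); print('hello', name)"],
    input="alice\n", capture_output=True, text=True,
)
# child.stdout now contains the prompt plus the greeting
```

This works only because the child reads stdin; programs that open /dev/tty ignore pipes entirely, which is exactly the gap Expect fills.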
Æleen Frisch (M1, T5) has been a system administrator for over 20 years. She currently looks after a pathologically heterogeneous network of UNIX and Windows systems. She is the author of several books, including Essential System Administration (now in its 3rd edition). Æleen was the program committee chair for LISA '03 and is a frequent presenter at USENIX and SAGE events, as well as presenting classes for universities and corporations worldwide.


T6 System and Network Monitoring: Tools in Depth
John Sellens, SYONEX

Who should attend: Network and system administrators ready to implement comprehensive monitoring of their systems and networks using the best of the freely available tools. Participants should have an understanding of the fundamentals of networking, familiarity with computing and network components, UNIX system administration experience, and some understanding of UNIX programming and scripting languages. This tutorial builds on the background provided by Monday's System and Network Performance Tuning tutorial, so participants should be familiar with the topics covered there.

Monitoring systems and networks is crucial not only for efficient operations, but also for enhancing security. Knowing what your systems are supposed to be doing is the only way you can tell when they are doing something that they are not supposed to do.

This tutorial will provide in-depth instruction in the installation and configuration of some of the most popular and effective system and network monitoring tools, including Nagios, Cricket, MRTG, and Orca.

Take back to work: The information needed to immediately implement, extend, and manage popular monitoring tools on your systems and networks.

Topics include, for each of Nagios, Cricket, MRTG, and Orca:

  • Installation: Basic steps, prerequisites, common problems and solutions
  • Configuration, setup options, and how to manage larger and nontrivial configurations
  • Reporting and notifications, both proactive and reactive
  • Special cases: How to deal with interesting problems
  • Extending the tools: How to write scripts or programs to extend the functionality of the basic package
  • Dealing effectively with network boundaries and remote sites
  • Security concerns and access control
  • Ongoing operations
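As a taste of the "extending the tools" topic: Nagios checks follow a simple plugin convention, exiting 0/1/2/3 for OK/WARNING/CRITICAL/UNKNOWN and printing a single status line, optionally with performance data after a "|" separator. A minimal sketch of a custom check in Python, with made-up thresholds:

```python
import os

# Nagios plugin convention: exit 0=OK, 1=WARNING, 2=CRITICAL,
# 3=UNKNOWN; one line of output, perfdata after '|'.
def check_load(warn=4.0, crit=8.0):
    """Classify the 1-minute load average against two thresholds."""
    try:
        load1 = os.getloadavg()[0]
    except OSError:
        return 3, "LOAD UNKNOWN - cannot read load average"
    perfdata = "load1=%.2f;%.1f;%.1f" % (load1, warn, crit)
    if load1 >= crit:
        return 2, "LOAD CRITICAL - load %.2f | %s" % (load1, perfdata)
    if load1 >= warn:
        return 1, "LOAD WARNING - load %.2f | %s" % (load1, perfdata)
    return 0, "LOAD OK - load %.2f | %s" % (load1, perfdata)

code, line = check_load()
print(line)
# a real plugin would now sys.exit(code)
```

Anything that honors this contract, in any language, plugs into Nagios unchanged.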
John Sellens (S9, M6, T6) has been involved in system and network administration since 1986 and is the author of several related USENIX papers, a number of ;login: articles, and the SAGE Short Topics in System Administration booklet #7, System and Network Administration for Higher Reliability. He holds an M.Math. in computer science from the University of Waterloo and is a chartered accountant. He is the proprietor of SYONEX, a systems and networks consultancy. From 1999 to 2004, he was the General Manager for Certainty Solutions in Toronto. Prior to joining Certainty, John was the Director of Network Engineering at UUNET Canada and was a staff member in computing and information technology at the University of Waterloo for 11 years.


T7 High-Capacity Email System Design
Steve VanDevender, University of Oregon

Who should attend: Anyone who needs to design a high-volume, secure email system or upgrade an existing one.

This tutorial will help you design an email system or upgrade an existing one to deal with large numbers of users, high volumes of email, and increased availability and security.

We'll start with an overview of mail system architecture and its commonly recognized components. For each of these components, concerns relating to scalability, reliability, and interoperability will be reviewed and implementation suggestions will be discussed.

Take back to work: An understanding of available choices in email system software and methods, with their trade-offs and domains of applicability.

Topics include:

  • Mail system architecture and components:
    • Message transfer agents (MTAs) and SMTP
    • Local delivery agents (LDAs) and the mail store
    • Mail access via POP and IMAP
    • Mail user agents (MUAs)
  • Implementation concerns
    • MTAs and SMTP
      • Mail relaying for users
      • STARTTLS for optional transport encryption
      • SMTP AUTH and why it needs STARTTLS
      • Mail queuing
    • Spam
      • Spam and malware blocking at SMTP time
      • "Refuse during SMTP or deliver" philosophy
      • Avoiding accept-then-bounce/backscatter
    • LDAs and the mail store
      • mbox, Maildir, and other store formats
      • Delivery-time mail filtering and sorting
    • POP, IMAP
      • POP vs. IMAP comparison
      • TLS encryption for security
      • Improving POP/IMAP server performance
    • Coping with MUAs
      • Common MUA issues with interoperability and security
      • Webmail systems as MUAs
      • Handling multiple concurrent access
  • Scaling and reliability methods
    • Considerations for backup/multiple MX hosts
    • Load-balancing or failover for SMTP, POP, and IMAP
    • How mail store format affects performance and reliability
    • User authentication
    • Ways to grow your mail system
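The "SMTP AUTH and why it needs STARTTLS" point can be seen in a few lines of Python's standard smtplib: the session is upgraded to TLS before the password is sent, so credentials never cross the wire in clear. Host name and credentials below are placeholders:

```python
import smtplib
from email.message import EmailMessage

def build_message(sender, recipient, subject, body):
    """Compose a simple text message."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def submit(msg, host, user, password, port=587):
    """Submit via the standard submission port with AUTH over TLS."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()             # upgrade the session to TLS first,
        smtp.login(user, password)  # so the password is never sent in clear
        smtp.send_message(msg)

msg = build_message("user@example.com", "admin@example.com",
                    "test", "hello")
# submit(msg, "mail.example.com", "user", "secret")  # placeholder host
```

A server that offers AUTH without requiring STARTTLS first is inviting credential theft, which is why the two are discussed together above.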
Steve VanDevender (T7), by once not knowing to be afraid of Sendmail, ended up specializing in email system administration for much of his system administration career. At efn.org between 1994 and 2002, he managed a mail system that grew to 10,000 users; at the University of Oregon since 1996, he has helped manage a mail system that has grown from 20,000 to 30,000 users and, more important, has grown even more in message volume and user activity, with many corresponding changes to cope with that growth. Since 2000, he has taught a popular course in system administration for the University of Oregon's Department of Computer and Information Science.
Tuesday SANS Security Tutorial
504.3 Computer and Network Hacker Exploits: Part 2
John Strand, Northrop Grumman

See the full description.

Wednesday, June 20, 2007
Full-Day Tutorials
W1 Network Security Monitoring with Open Source Tools
Richard Bejtlich, TaoSecurity

Who should attend: Anyone who wants to know what is happening on their network. I assume command-line knowledge of UNIX and familiarity with TCP/IP. Anyone with duties involving intrusion detection, security analysis, incident response, or network forensics will profit from this course.

This course will show there is more to network security monitoring (NSM) than Snort and Wireshark. In fact, we won't talk about either, unless it's to mention something you might not have seen before! Past participants have discovered intrusions during the class, using concepts learned in a few hours. The instructor bases his teaching on his books, professional consulting experience, and latest security research.

From the start of the course to the first break I will present NSM theory and the problems with performing intrusion detection with Web-based alert browsers such as BASE and ACID. From the first break until lunch I will describe Sguil, a free, open source NSM suite that compensates for the deficiencies of Web-based alert browsers. After lunch I will discuss a reference intrusion model which provides context for the sorts of intrusions one detects with NSM principles and will cover deployment considerations for network sensors, a topic ignored by most books and briefings. I will then turn to the tools and techniques of collecting full content data. After the final break I plan to describe the tools and techniques of collecting and analyzing sessions and statistical data.

Students with VMware Player installed will be able to follow along with the technique and tool demonstrations, using an NSM VMware image provided by the instructor.

Take back to work: You will immediately be able to implement numerous new techniques and tools to discover normal, malicious, and suspicious network events.

Topics include:

  • NSM theory
  • Building and deploying NSM sensors
  • Accessing wired and wireless traffic
  • Full content tools: Tcpdump, Ethereal/Tethereal, Snort as packet logger
  • Additional data analysis tools: Tcpreplay, Tcpflow, Ngrep, Netdude
  • Session data tools: Cisco NetFlow, Fprobe, Flow-tools, Argus, SANCP
  • Statistical data tools: Ipcad, Trafshow, Tcpdstat, Cisco accounting records
  • Sguil (sguil.sf.net)
  • Case studies, personal war stories, and attendee participation
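To make "session data" concrete, here is a toy Python sketch (synthetic packets, invented addresses) that collapses individual packets into the 5-tuple flow records that tools such as Argus and SANCP maintain:

```python
from collections import defaultdict

def to_flows(packets):
    """packets: iterable of (src, sport, dst, dport, proto, nbytes).
    Returns {5-tuple flow key: (packet_count, byte_count)}."""
    flows = defaultdict(lambda: [0, 0])
    for src, sport, dst, dport, proto, nbytes in packets:
        key = (src, sport, dst, dport, proto)
        flows[key][0] += 1          # count packets per flow
        flows[key][1] += nbytes     # and total bytes per flow
    return {k: tuple(v) for k, v in flows.items()}

# Synthetic traffic: two packets of one web session, one DNS query
pkts = [
    ("10.0.0.5", 49152, "192.0.2.80", 80, "tcp", 60),
    ("10.0.0.5", 49152, "192.0.2.80", 80, "tcp", 1500),
    ("10.0.0.9", 53001, "192.0.2.53", 53, "udp", 75),
]
flows = to_flows(pkts)
# two distinct flows; the web flow carries two packets
```

Flow summaries like these are tiny compared to full content data, which is why session collection scales to links where full capture cannot.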
Richard Bejtlich (W1, R2, F2) is founder of TaoSecurity LLC (http://www.taosecurity.com), a company that helps clients detect, contain, and remediate intrusions using network security monitoring (NSM) principles. Richard was previously a principal consultant at Foundstone, performing incident response, emergency NSM, and security research and training. He has created NSM operations for ManTech International Corporation and Ball Aerospace & Technologies Corporation. From 1998 to 2001, Richard defended global American information assets in the Air Force Computer Emergency Response Team (AFCERT), performing and supervising the real-time intrusion detection mission. Formally trained as an intelligence officer, he holds degrees from Harvard University and the United States Air Force Academy. Richard wrote the Tao of Network Security Monitoring: Beyond Intrusion Detection and the forthcoming Extrusion Detection: Security Monitoring for Internal Intrusions and Real Digital Forensics. He also wrote original material for Hacking Exposed, 4th Ed., Incident Response, 2nd Ed., and Sys Admin Magazine. Richard holds the CISSP, CIFI, and CCNA certifications. His popular Web log resides at http://taosecurity.blogspot.com.
 
W2 Using Samba 3.0
Gerald Carter, Centeris/Samba Team

Who should attend: System administrators who are currently managing Samba servers or are planning to deploy new servers this year. This course will outline the new features of Samba 3.0, including working demonstrations throughout the course session.

Samba is often viewed as the Swiss Army knife of Windows/UNIX integration packages. But making UNIX hosts on a network look like Windows is no easy task. It requires a solid understanding of both Windows and UNIX operating systems. Your job is to make the differences as transparent to your users as possible.

This tutorial is designed to take you from the bottom to the top, beginning with the basics and continuing to a complete member server, fully integrated into Active Directory.

Take back to work: You will understand not only how to configure Samba in a variety of environments, but also how to troubleshoot the unpredictable glitches that occur at the most inopportune times.

Topics include:

  • Providing common file and print services
    • Configuring Samba's support for Access Control Lists and the Microsoft Distributed File System
    • Making use of Samba VFS modules for features such as virus scanning and a network recycle bin
    • Centrally managing printer drivers for Windows clients
  • Integrating Samba with Active Directory
    • Providing seamless user and group management using Winbind
    • Enabling NTLM and Krb5 authentication for UNIX services other than Samba
    • Providing support for Kerberized server applications services such as OpenSSH
  • Enabling Samba as a Domain Controller in its own domain
    • Migrating accounts from an existing Windows NT 4.0 domain to a Samba domain
    • Managing services such as CUPS and Apache using MMC plugins on Windows clients
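For orientation, a minimal smb.conf sketch of the Active Directory member-server configuration the course builds toward; the realm, workgroup, and idmap ranges are placeholders:

```ini
[global]
    workgroup = EXAMPLE
    realm = EXAMPLE.COM
    security = ads
    ; Winbind maps AD users and groups into this UNIX id range
    idmap uid = 10000-20000
    idmap gid = 10000-20000
    winbind use default domain = yes
    template shell = /bin/bash
```

With settings along these lines, `net ads join` enrolls the host in the domain and Winbind takes over user and group resolution.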

Gerald Carter (M9, W2, R3) has been a member of the Samba Development Team since 1998. He has been developing, writing about, and teaching open source since the late 1990s. Currently employed by Centeris as a Samba and open source developer, Gerald has written books for SAMS Publishing and for O'Reilly Publishing.

 


W3 Solaris 10 Security Features Workshop (Hands-on)
Peter Baer Galvin, Corporate Technologies

Who should attend: Solaris systems managers and administrators interested in the new security features in Solaris 10 (and features in previous Solaris releases that they might not be using).

Solaris has always been the premier commercial operating system, but it is also somewhat different from other UNIX/Linux systems. It has novel features and applications (some have been copied in other operating systems), and there are things you need to know to use them effectively and securely.

This course covers a variety of topics surrounding Solaris 10 and security. Note that this is not a class about specific security vulnerabilities and hardening; rather, it examines new features in Solaris 10 for addressing the entire security infrastructure, as well as new issues to consider when deploying, implementing, and managing Solaris 10. This will be a workshop featuring instruction and practice/exploration.

Take back to work: During this exploration of the important new features of Solaris 10, you'll learn not only what each feature does and how to use it, but also best practices. Also covered are the status of each of these new features, how stable each is, whether it is ready for production use, and expected future enhancements.

Topics include:

  • Overview
  • N1 Grid Containers (a.k.a. Zones): Solaris 10 has built-in virtualization support, enabling administrators to create multiple virtual machines, each completely isolated from the others
  • RBAC: Role Based Access Control (giving users and application access to data and functions based on the role they are filling, as opposed to their login name)
  • Privileges: A new Solaris facility based on the principle of least privilege; instead of being root (or not), users are accorded 43 distinct bits of privilege, sometimes spanning classes of actions and sometimes being confined to a specific system call
  • NFSv4: The latest version of NFS (based on an industry standard) features stateful connection, more and better security, write locks, and faster performance
  • Flash archives and live upgrade (automated system builds)
  • Moving from NIS to LDAP
  • DTrace: Solaris 10's system profiling and debugging tool
  • FTP client and server enhancements for security, reliability, and auditing
  • PAM (the Pluggable Authentication Module) enhancements, for more detailed control of access to resources
  • Auditing enhancements
  • BSM (the Basic Security Module) provides a security auditing system (including tools to assist with analysis) and a device allocation mechanism (providing object-reuse characteristics for removable or assignable devices)
  • Service Management Facility (a replacement for rc files)
  • Solaris Cryptographic Framework: A built-in system for encrypting anything, from files on disks to data streams between applications
  • Kerberos enhancements
  • Packet filtering with IPfilters
  • BART (Basic Audit Reporting Tool): similar to Tripwire, BART enables you to determine what file-level changes have occurred on a system, relative to a known baseline

Laptop Requirements:
Each student should have a laptop with wireless access for remote access into an instructor-provided Solaris 10 machine (if you do not have a laptop, we will make every effort to pair you up with another student to work as a group; your laptop does not need to be running Solaris).

Peter Baer Galvin (S4, W3) is the Chief Technologist for Corporate Technologies, Inc., a systems integrator and VAR, and was the Systems Manager for Brown University's Computer Science Department. He has written articles for Byte and other magazines. He wrote the "Pete's Wicked World" and "Pete's Super Systems" columns at SunWorld. He is currently contributing editor for Sys Admin, where he manages the Solaris Corner. Peter is co-author of the Operating Systems Concepts and Applied Operating Systems Concepts textbooks. As a consultant and trainer, Peter has taught tutorials on security and system administration and has given talks at many conferences and institutions on such topics as Web services, performance tuning, and high availability.


W4 Inside the Linux 2.6 Kernel
Theodore Ts’o, IBM Linux Technology Center

Who should attend: Application programmers and kernel developers. You should be reasonably familiar with C programming in the UNIX environment, but no prior experience with the UNIX or Linux kernel code is assumed.

The Linux kernel aims to achieve conformance with existing standards and compatibility with existing operating systems; however, it is not a reworking of existing UNIX kernel code. The Linux kernel was written from scratch to provide both standard and novel features, and it takes advantage of the best practice of existing UNIX kernel designs.

This class will primarily focus on the currently released version of the Linux 2.6 kernel, but it will also discuss how it has evolved from Linux 2.4 and earlier kernels. It will not delve into any detailed examination of the source code.

Take back to work: An overview and roadmap of the kernel's design and functionality: its structure, the basic features it provides, and the most important algorithms it employs.

Topics include:

  • How the kernel is organized (scheduler, virtual memory system, filesystem layers, device driver layers, networking stacks)
    • The interface between each module and the rest of the kernel
    • Kernel support functions and algorithms used by each module
    • How modules provide for multiple implementations of similar functionality
  • Ground rules of kernel programming (races, deadlock conditions)
  • Implementation and properties of the most important algorithms
    • Portability
    • Performance
    • Functionality
  • Comparison between Linux and UNIX kernels, with emphasis on differences in algorithms
  • Details of the Linux scheduler
    • Its VM system
    • The ext2fs filesystem
  • The requirements for portability between architectures

Theodore Ts'o (S10, W4) has been a Linux kernel developer since almost the very beginning of Linux: he implemented POSIX job control in the 0.10 Linux kernel. He is the maintainer and author of the Linux COM serial port driver and the Comtrol Rocketport driver, and he architected and implemented Linux's tty layer. Outside of the kernel, he is the maintainer of the e2fsck filesystem consistency checker. Ted is currently employed by the IBM Linux Technology Center.

Wednesday SANS Security Tutorial
504.4 Computer and Network Hacker Exploits: Part 3
John Strand, Northrop Grumman

See the full description.

Thursday, June 21, 2007
Full-Day Tutorials
R1 Advanced Perl Programming
Tom Christiansen, Consultant

Who should attend: Perl programmers with at least a journeyman-level working knowledge of Perl programming and a desire to hone their skills.

This class will cover a wide variety of advanced topics in Perl, including many insights and tricks for using these features effectively.

Take back to work: A much richer understanding of Perl, which will help you make it part of your daily life more easily.

Topics include:

  • Symbol tables and typeglobs
    • Symbolic references
    • Useful typeglob tricks (aliasing)
  • Modules
    • Autoloading
    • Overriding built-ins
    • Mechanics of exporting
    • Function prototypes
  • References
    • Implications of reference counting
    • Using weak references for self-referential data structures
    • Autovivification
    • Data structure management, including serialization and persistence
    • Closures
  • Fancy object-oriented programming
    • Using closures and other peculiar referents as objects
    • Overloading of operators, literals, and more
    • Tied objects
  • Managing exceptions and warnings
    • When die and eval are too primitive for your taste
    • The use warnings pragma
    • Creating your own warnings classes for modules and objects
  • Regular expressions
    • Debugging regexes
    • qr// operator
    • Backtracking avoidance
    • Interpolation subtleties
    • Embedding code in regexes
  • Programming with multiple processes or threads
    • The thread model
    • The fork model
    • Shared memory controls
  • Unicode and I/O layers
    • Named Unicode characters
    • Accessing Unicode properties
    • Unicode combined characters
    • I/O layers for encoding translation
    • Upgrading legacy text files to Unicode
    • Unicode display tips
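The "closures as objects" idea above generalizes beyond Perl. Sketched here in Python for brevity (the course itself works in Perl): a constructor function returns closures that share private, otherwise inaccessible state:

```python
def make_counter(start=0):
    """Constructor: returns two closures sharing private state."""
    state = {"n": start}   # captured by both closures below

    def incr(step=1):
        state["n"] += step
        return state["n"]

    def value():
        return state["n"]

    return incr, value

incr, value = make_counter()
incr()
incr(5)
# value() -> 6: both closures see the same captured state
```

Each call to the constructor creates an independent instance, which is exactly the encapsulation objects provide, without a class in sight.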

Tom Christiansen (R1) has been involved with Perl since day zero of its initial public release in 1987. Author of several books on Perl, including The Perl Cookbook and Programming Perl from O'Reilly, Tom is also a major contributor to Perl's online documentation. He holds undergraduate degrees in computer science and Spanish and a Master's in computer science. He now lives in Boulder, Colorado.
 


R2 TCP/IP Weapons School, Layers 2–3 (Day 1 of 2)
Richard Bejtlich, TaoSecurity

Who should attend: Junior and intermediate analysts and system administrators who detect and respond to security incidents.

TWS is the right way for junior and intermediate security personnel to learn the fundamentals of TCP/IP networking. The point of the class is to teach TCP/IP by looking at nontraditional TCP/IP traffic. I will make comparisons to normal TCP/IP traffic for reference purposes. The name of the course is a reference to the U.S. Air Force Weapons School, which is the "Top Gun" of the Air Force.

The class will concentrate on the protocols and services most likely to be encountered when performing system administration and security work. Students will inspect traffic such as would be seen in various malicious security events.

Take back to work: The fundamentals of TCP/IP networking. You will learn how to interpret network traffic by analyzing packets generated by network security tools and how to identify security events on the wire, using open source tools.

Topics for Day 1 include:

  • Layer 2
    • What is layer 2?
    • Ethernet in brief
    • Packet delivery on the LAN
    • Ethernet interfaces
    • ARP basics, ARP request/reply, ARP cache, Arping, Arpdig, and Arpwatch
    • VLANs
    • Dynamic Trunking Protocol
  • Layer 2 attacks
    • MAC address trickery
    • MAC flooding (Macof)
    • ARP denial of service (Arp-sk)
    • Port stealing (Ettercap)
    • Layer 2 man-in-the-middle (Ettercap)
    • Dynamic Trunking Protocol attack (Yersinia)
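To make the ARP fields from "ARP basics" concrete, here is a Python sketch that hand-packs an ARP request frame. The addresses are invented, and the bytes are only built, never put on the wire:

```python
import socket
import struct

def arp_request(src_mac, src_ip, target_ip):
    """Build an Ethernet frame carrying an ARP 'who-has' request."""
    broadcast = b"\xff" * 6
    # Ethernet header: dst MAC, src MAC, EtherType 0x0806 (ARP)
    eth = broadcast + src_mac + struct.pack("!H", 0x0806)
    arp = struct.pack(
        "!HHBBH6s4s6s4s",
        1,            # hardware type: Ethernet
        0x0800,       # protocol type: IPv4
        6, 4,         # hardware/protocol address lengths
        1,            # operation: request ("who-has")
        src_mac, socket.inet_aton(src_ip),
        b"\x00" * 6,  # target MAC unknown -- that's the question
        socket.inet_aton(target_ip),
    )
    return eth + arp

frame = arp_request(b"\x00\x11\x22\x33\x44\x55", "10.0.0.5", "10.0.0.1")
```

Because nothing in ARP authenticates the reply, any host can answer such a request, which is the root of the cache-poisoning and man-in-the-middle attacks listed above.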

Richard Bejtlich (W1, R2, F2) is founder of TaoSecurity LLC (http://www.taosecurity.com), a company that helps clients detect, contain, and remediate intrusions using network security monitoring (NSM) principles. Richard was previously a principal consultant at Foundstone, performing incident response, emergency NSM, and security research and training. He has created NSM operations for ManTech International Corporation and Ball Aerospace & Technologies Corporation. From 1998 to 2001, Richard defended global American information assets in the Air Force Computer Emergency Response Team (AFCERT), performing and supervising the real-time intrusion detection mission. Formally trained as an intelligence officer, he holds degrees from Harvard University and the United States Air Force Academy. Richard wrote the Tao of Network Security Monitoring: Beyond Intrusion Detection and the forthcoming Extrusion Detection: Security Monitoring for Internal Intrusions and Real Digital Forensics. He also wrote original material for Hacking Exposed, 4th Ed., Incident Response, 2nd Ed., and Sys Admin Magazine. Richard holds the CISSP, CIFI, and CCNA certifications. His popular Web log resides at http://taosecurity.blogspot.com.


R3 Implementing [Open]LDAP Directories
Gerald Carter, Centeris/Samba Team

Who should attend:
Both LDAP directory administrators and architects. The focus is on integrating standard network services with LDAP directories. The examples are based on UNIX hosts and the OpenLDAP directory server and will include actual working demonstrations throughout the course.

System administrators are frequently tasked with integrating applications with directory technologies. DNS, NIS, LDAP, and Active Directory are all examples of the directory services that pervade today's networks. This tutorial will focus on helping you to understand how to integrate common services hosted on UNIX servers with LDAP directories. The demo-based approach will show you how to build and deploy an OpenLDAP-based directory service that consolidates account and configuration information across a variety of applications.

Take back to work: Comfort with LDAP terms and concepts and an understanding of how to extend that knowledge to integrate future applications using LDAP into your network.

Topics include:

  • Replacing an NIS domain with an LDAP directory
    • Storing user and group account information
    • Configuring PAM and Name Service Switch libraries on the client
  • Integrating Samba domain file and print servers
    • Configuring a Samba LDAP account database
    • Performance-tuning account lookups
  • Integrating MTAs such as Sendmail and Postfix
    • Configuring support for storing mail aliases in an LDAP directory
    • Using LDAP for storing mail routing information and virtual domains
    • Managing global address books for email clients
  • Creating customized LDAP schema items
    • Defining custom attributes and object classes
  • Examining scripting solutions for developing your own directory administration tools
    • Overview of the Net::LDAP Perl module
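For reference, an LDIF sketch of the RFC 2307-style entry that replaces an NIS passwd record; the DN, names, and numbers are placeholders:

```ldif
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: top
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jdoe
cn: Jane Doe
sn: Doe
uidNumber: 10042
gidNumber: 10000
homeDirectory: /home/jdoe
loginShell: /bin/bash
```

Once clients resolve such entries through PAM and Name Service Switch, the NIS map becomes redundant.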

Gerald Carter (M9, W2, R3) has been a member of the Samba Development Team since 1998. He has been developing, writing about, and teaching open source since the late 1990s. Currently employed by Centeris as a Samba and open source developer, Gerald has written books for SAMS Publishing and for O'Reilly Publishing.

 


R4 Issues in UNIX Infrastructure Design
Lee Damon, University of Washington

Who should attend: Anyone who is designing, implementing, or maintaining a UNIX environment with 2 to 20,000+ hosts; system administrators, architects, and managers who need to maintain multiple hosts with few admins.

This intermediate class will examine many of the background issues that need to be considered during the design and implementation of a mixed-architecture or single-architecture UNIX environment. It will cover issues from authentication (single sign-on) to the Holy Grail of single system images.

This class won't implement a "perfect solution," as each site has different needs. We will look at some freeware and some commercial solutions, as well as many of the tools that exist to make a workable environment possible.

Take back to work: Answers to the questions you should ask while designing and implementing the mixed-architecture or single-architecture UNIX environment that will meet your needs.

Topics include:

  • Administrative domains: Who is responsible for what, and what can users do for themselves?
  • Desktop services vs. farming: Do you do serious computation on the desktop, or do you build a compute farm?
  • Disk layout: How do you plan for an upgrade? Where do things go?
  • Free vs. purchased solutions: Should you write your own, or hire a consultant or company?
  • Homogeneous vs. heterogeneous: Homogeneous is easier, but will it do what your users need?
  • The essential master database: How can you keep track of what you have?
  • Policies to make life easier
  • Push vs. pull
  • Getting the user back online in 5 minutes
  • Remote administration: Lights-out operation; remote user sites; keeping up with vendor patches, etc.
  • Scaling and sizing: How do you plan on scaling?
  • Security vs. sharing: Your users want access to everything. So do the crackers . . .
  • Single sign-on: How can you do it securely?
  • Single system images: Can users see just one environment, no matter how many OSes there are?
  • Tools: The free, the purchased, the homegrown
Lee Damon (R4) has a B.S. in Speech Communication from Oregon State University. He has been a UNIX system administrator since 1985 and has been active in SAGE since its inception. He assisted in developing a mixed AIX/SunOS environment at IBM Watson Research and has developed mixed environments for Gulfstream Aerospace and QUALCOMM. He is currently leading the development effort for the Nikola project at the University of Washington Electrical Engineering department. Among other professional activities, he is a charter member of LOPSA and SAGE and past chair of the SAGE Ethics and Policies working groups, and he was the chair of LISA '04.
Thursday SANS Security Tutorial
504.5 Computer and Network Hacker Exploits: Part 4
John Strand, Northrop Grumman

See the full description.

Friday, June 22, 2007
Full-Day Tutorials
F1 Introduction to VMware Virtual Infrastructure 3
John Arrasjid and Shridhar Deuskar, VMware

Who should attend: System administrators and architects who are interested in deploying a VMware Virtual Infrastructure, including ESX Server and VirtualCenter, in a production environment. No prior experience with VMware products is required. Knowledge of Linux is helpful; basic knowledge of SANs is useful but not required.

VMware ESX Server is virtual infrastructure software for partitioning, consolidating, and managing systems in mission-critical Intel/AMD environments. In this tutorial, we will provide an overview of virtual machine technology, as well as the features and functionality of VMware Virtual Infrastructure 3, which includes ESX Server and VirtualCenter. Migration strategies using tools such as VMware Converter (replacement for P2V) will be covered. Installation, configuration, and best practices will be the focus of the session. Time permitting, live demonstrations will be given of key features such as VMotion.

Take back to work: How to deploy a VMware virtual infrastructure effectively on your own site.

Topics include:

  • Virtual Infrastructure overview
  • ESX Server and VirtualCenter overview
  • Installation and configuration
  • Virtual machine creation and operation
  • Migration technologies such as VMware Converter
  • Operations and administration best practices
  • Advanced configuration (SAN and networking)

John Arrasjid (F1) has 20 years of experience in the computer science field. His experience includes work with companies such as AT&T, Amdahl, 3Dfx Interactive, Kubota Graphics, Roxio, and his own company, WebNexus Communications, where he developed consulting practices and built a cross-platform IT team. John is currently a senior member of the VMware Professional Services Organization as a Consulting Architect. John has developed a number of PSO engagements, including Performance, Security, and Disaster Recovery and Backup.

Shridhar Deuskar (F1) has over 10 years of experience in system administration of UNIX and Windows servers. He has consulted with companies such as Caterpillar, HP, and EMC. Currently he is a Consulting Architect in VMware's Professional Services organization and is responsible for delivering services tied to virtualization to clients worldwide.
 


F2 TCP/IP Weapons School, Layers 2–3 (Day 2 of 2)
Richard Bejtlich, TaoSecurity

Who should attend: Junior and intermediate analysts and system administrators who detect and respond to security incidents.

TWS is the right way for junior and intermediate security personnel to learn the fundamentals of TCP/IP networking. The point of the class is to teach TCP/IP by looking at nontraditional TCP/IP traffic. I will make comparisons to normal TCP/IP traffic for reference purposes. The name of the course is a reference to the U.S. Air Force Weapons School, which is the "Top Gun" of the Air Force.

The class will concentrate on the protocols and services most likely to be encountered in system administration and security work. Students will inspect the kinds of traffic seen during various malicious security events.

Take back to work: The fundamentals of TCP/IP networking. You will learn how to interpret network traffic by analyzing packets generated by network security tools and how to identify security events on the wire, using open source tools.
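To give a feel for what "interpreting network traffic" looks like in practice, here is a small illustrative sketch (not course material) using only Python's standard library. It decodes the fixed 20-byte IPv4 header from raw bytes, exposing the TTL, IP ID, and fragmentation fields that several of the Day 2 exercises revolve around:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header (RFC 791) from raw bytes."""
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", raw[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,             # header length in 32-bit words
        "total_length": total_len,
        "id": ident,                       # the IP ID field exploited by idle scans
        "df": bool(flags_frag & 0x4000),   # Don't Fragment flag
        "mf": bool(flags_frag & 0x2000),   # More Fragments flag
        "frag_offset": (flags_frag & 0x1FFF) * 8,  # offset in bytes
        "ttl": ttl,                        # the field traceroute-style tools manipulate
        "protocol": proto,                 # 1 = ICMP, 6 = TCP, 17 = UDP
        "src": ".".join(str(b) for b in src),
        "dst": ".".join(str(b) for b in dst),
    }

# A hand-built header for an ICMP packet from 10.0.0.1 to 10.0.0.2
hdr = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 84, 0x1234, 0x4000,
                  64, 1, 0, bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
fields = parse_ipv4_header(hdr)
```

Real capture work would of course use tools like tcpdump or Wireshark, but unpacking the header by hand makes it clear exactly which bits the attack tools listed below are manipulating.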

Topics for Day 2 include:

  • Layer 3
    • What is layer 3?
    • Internet Protocol
    • Raw IP (Nemesis)
    • IP options (Fragtest)
    • IP time-to-live (Traceroute)
    • Internet Control Message Protocol (Sing)
    • ICMP error messages (Gnetcat)
    • IP multicast (Iperf)
    • IP multicast (Udpcast)
    • IP fragmentation (Fragtest)
  • Layer 3 attacks
    • IP IDs (Isnprober)
    • IP IDs (Idle Scan)
    • IP TTLs (LFT)
    • IP TTLs (Etrace and Firewalk)
    • ICMP Covert Channel (Ptunnel)
    • IP fragmentation (Fragroute and Pf)
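The fragmentation entries above all trade on one detail of RFC 791: the fragment-offset field counts 8-byte units, so every fragment's data length except the last must be a multiple of 8. A minimal sketch (a hypothetical helper, not one of the tools named in the outline) that computes where the fragments of a datagram fall:

```python
def fragment_offsets(payload_len: int, mtu: int, header_len: int = 20):
    """Split an IP payload into (offset, length, more_fragments) tuples.

    Each fragment's data length must be a multiple of 8 bytes except the
    last, because the fragment-offset field counts 8-byte units.
    """
    per_frag = (mtu - header_len) // 8 * 8   # largest 8-byte-aligned chunk
    frags = []
    offset = 0
    while offset < payload_len:
        length = min(per_frag, payload_len - offset)
        more = offset + length < payload_len  # MF flag set on all but the last
        frags.append((offset, length, more))
        offset += length
    return frags

# A 4000-byte payload over a 1500-byte MTU link
frags = fragment_offsets(4000, 1500)
```

Fragmentation-based evasion tools such as Fragroute work by splitting traffic into exactly such sequences, sometimes overlapping or reordering them to confuse inspection devices.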
Richard Bejtlich (W1, R2, F2) is founder of TaoSecurity LLC (http://www.taosecurity.com), a company that helps clients detect, contain, and remediate intrusions using network security monitoring (NSM) principles. Richard was previously a principal consultant at Foundstone, performing incident response, emergency NSM, and security research and training. He has created NSM operations for ManTech International Corporation and Ball Aerospace & Technologies Corporation. From 1998 to 2001, Richard defended global American information assets in the Air Force Computer Emergency Response Team (AFCERT), performing and supervising the real-time intrusion detection mission. Formally trained as an intelligence officer, he holds degrees from Harvard University and the United States Air Force Academy. Richard wrote The Tao of Network Security Monitoring: Beyond Intrusion Detection and the forthcoming Extrusion Detection: Security Monitoring for Internal Intrusions and Real Digital Forensics. He also wrote original material for Hacking Exposed, 4th Ed., Incident Response, 2nd Ed., and Sys Admin Magazine. Richard holds the CISSP, CIFI, and CCNA certifications. His popular Web log resides at http://taosecurity.blogspot.com.
Friday SANS Security Tutorial
504.6 Hacker Tools Workshop
John Strand, Northrop Grumman

See the full description.

Need help? Use our Contacts page.

Last changed: 11 June 2007 ch