Sunday, June 22, 2008
Full-Day Tutorials
S1
Solaris 10 Administration Workshop
Peter Baer Galvin,
Corporate Technologies
Who should attend:
Solaris
systems managers and administrators interested in learning the new
administration features in Solaris 10 (and features in previous
Solaris releases that they might not be using).
Solaris has always been the premier commercial operating system, and this remains the case today. Its novel features and applications (such as ZFS, DTrace, and Containers) keep it at the forefront of enterprise use, and many of these features have been copied in other operating systems.
This course covers a variety of system administration topics related
to Solaris 10. Solaris 10 includes many features introduced since the
last major release of Solaris, and there are new issues to consider
when deploying, implementing, and managing Solaris 10. This will be a
workshop featuring instruction and practice/exploration.
Note that, except for a few instances, Solaris 10 security is not
covered in this workshop (see T1,
Solaris 10 Security Features Workshop, for that).
Take back to work: Everything you need to consider when deploying, implementing, and managing Solaris 10.
Topics include:
- Overview
- Solaris releases (official, Solaris Express, OpenSolaris, others)
- Installing and upgrading to Solaris 10
- Flash archives and live upgrade
- Patching the kernel and applications
- Service Management Facility
- The kernel
- Crash and core dumps
- Cool commands you need to know
- ZFS, the new endian-neutral file system that "will make you forget everything you thought you knew about file systems"
- Virtualization
- Containers (a.k.a. Zones), lightweight virtual environments for application isolation and resource management
- Installation
- Management
- Resource management
- Other Solaris virtualization technologies: LDoms, Xen
- DTrace, Solaris 10's system profiling and debugging tool
- Fault Management Architecture (FMA)
- Sysadmin best practices: using the new features effectively and efficiently without hurting yourself
S2
Inside the Linux 2.6 Kernel
Theodore Ts'o,
IBM Linux Technology Center
Who should attend:
Application programmers and kernel developers. You
should be reasonably familiar with C programming in the UNIX environment, but no prior experience with the UNIX or Linux kernel
code is assumed.
The Linux kernel aims to achieve conformance with existing standards and compatibility with existing operating systems; however, it is not a reworking of existing UNIX kernel code. The Linux kernel was written from scratch to provide both standard and novel features, and it takes advantage of the best practices of existing UNIX kernel designs.
This class will primarily focus on the currently released version of the Linux 2.6 kernel, but it will also discuss how it has evolved from Linux 2.4 and earlier kernels. It will not delve into any detailed examination of the source code.
Take back to work: An overview and roadmap of the kernel's design and functionality: its structure, the basic features it provides, and the most important algorithms it employs.
Topics include:
- How the kernel is organized (scheduler, virtual memory system, filesystem layers, device driver layers, networking stacks)
- The interface between each module and the rest of the kernel
- Kernel support functions and algorithms used by each module
- How modules provide for multiple implementations of similar functionality
- Ground rules of kernel programming (races, deadlock conditions)
- Implementation and properties of the most important algorithms
- Portability
- Performance
- Functionality
- Comparison between Linux and UNIX kernels, with emphasis on differences in algorithms
- Details of the Linux scheduler
- Its VM system
- The ext2fs filesystem
- The requirements for portability between architectures
S3
Botnets: Understanding and Defense
NEW!
Bruce Potter,
The Shmoo Group
Who should attend:
IT security professionals, system administrators,
and network administrators who want to learn the inner workings of
botnets and how to defend against them.
Described by some as the largest threat to the global Internet,
botnets are largely hidden from the average Internet user. Botnets
have a long legacy and initially were not used for malicious
purposes. However, as bots have evolved, they have taken on sinister
uses. Using thousands of compromised machines, botnets can be used
for a variety of tasks including sending mountains of spam, launching
crushing denial-of-service attacks, and harvesting massive amounts
of personal information. One of the unfortunate aspects of botnets
is that many individuals are active participants in botnets and do
not even know it. Bots have become very sophisticated at hiding
themselves from anti-virus and security programs. Many bots
have even become resilient to large-scale network security systems
and pose problems not just for home users but for large enterprises
as well.
Take back to work: A broad understanding of the current threat from
botnets, how they work, and how to defend against them.
Topics include:
- History of botnets: From their innocuous roots to the current worldwide threat
- Botnet uses: A broad view of the actual threats from current bots, including network and system analysis
- Scope of the current botnet problem: The current problem is larger than you may think
- Botnet communications: Command and control of botnets exposed
- Internal structure: A breakdown of the functionality of modern botnets, including hiding, propagation, and modularity
- Examination of some standard bots: We will look at some of the classic bots (Agobot, SDBot, Storm, etc.) in order to gain a better understanding of what we're defending against
- Host-based botnet defenses: Practical guidance on what can really be done to detect and defend against bots at the host level
- Networked-based botnet defenses: More practical guidance, but this time at the network level
- Future of botnets: A brief discussion of where bots are going so that we can arm ourselves against future outbreaks
S4
Introduction to the Open Source Xen Hypervisor
NEW!
Todd Deshane and Patrick F. Wilbur,
Clarkson University; Stephen Spector,
Citrix
Who should attend:
System administrators and architects who are
interested in deploying the open source Xen hypervisor in a
production environment. No prior experience with Xen is required;
however, a basic knowledge of Linux is helpful.
The Xen hypervisor offers a powerful, efficient, and secure feature
set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU
architectures, and has been used to virtualize a wide range of
guest operating systems, including Windows, Linux, Solaris,
and various versions of the BSD operating systems. It is widely
regarded as a strategically compelling alternative to proprietary
virtualization platforms and hypervisors for x86 and IA64 platforms.
Take back to work:
How to build and deploy the Xen hypervisor.
Topics include:
- Xen architecture overview
- Building a Xen hypervisor from Xen.org
- Installation and configuration
- Virtual machine creation and operation
- Performance: tools and methodology
- Best practices using Xen
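In practice, creating a guest revolves around a small domain configuration file. The following is an illustrative sketch of a paravirtualized guest definition in the classic Xen 3.x xm format; all paths, names, and addresses here are hypothetical:

```
# /etc/xen/web1.cfg -- hypothetical paravirtual guest definition
kernel = "/boot/vmlinuz-2.6.18-xen"    # Xen-aware guest kernel
memory = 512                           # MB of RAM for the guest
name   = "web1"
vif    = ['bridge=xenbr0']             # one NIC on the default bridge
disk   = ['phy:/dev/vg0/web1,xvda,w']  # LVM volume as the guest disk
root   = "/dev/xvda ro"
```

With a file like this in place, `xm create web1.cfg` boots the guest and `xm list` shows its state.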
S5
System and Network Monitoring: Tools You Need
John Sellens,
SYONEX
Who should attend:
Network and system administrators ready to implement comprehensive monitoring of their systems and networks using the best of the freely available tools. Participants should have an understanding of the fundamentals of networking, familiarity with computing and network components, UNIX system administration experience, and some understanding of UNIX programming and scripting languages.
Monitoring systems and networks is crucial not only for efficient operations, but also for enhancing security. Knowing what your systems are supposed to be doing is the only way you can tell when they are doing something that they are not supposed to do.
This tutorial will introduce attendees to the common goals and
techniques of monitoring and to a variety of monitoring tools, and
it will provide instruction in the installation and configuration
of some of the most popular and effective system and network
monitoring tools, including Nagios, Cricket, MRTG, and Orca.
Take back to work: The information needed to immediately implement, extend, and manage popular monitoring tools on your systems and networks.
Topics include, for each of Nagios, Cricket, MRTG, and Orca:
- Installation: Basic steps, prerequisites, common problems and solutions
- Configuration, setup options, and how to manage larger and nontrivial configurations
- Reporting and notifications, both proactive and reactive
- Special cases: How to deal with interesting problems
- Extending the tools: How to write scripts or programs to extend the functionality of the basic package
- Dealing effectively with network boundaries and remote sites
- Security concerns and access control
- Ongoing operations
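To give a feel for the configuration work covered above, here is a minimal sketch of Nagios object definitions for one host and one service check; the host name, address, and template names are hypothetical, and real setups inherit most settings from templates:

```
# hosts.cfg -- hypothetical minimal Nagios object definitions
define host {
    use        generic-host      ; inherit defaults from a template
    host_name  web01
    address    192.0.2.10
}

define service {
    use                 generic-service
    host_name           web01
    service_description HTTP
    check_command       check_http  ; plugin from nagios-plugins
}
```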
S6
Advanced Perl Programming: What's New and Where We're Going
NEW!
Tom Christiansen,
Consultant
Who should attend:
Perl programmers with at least a journeyman-level working knowledge of Perl programming and a desire to hone their skills.
This class will cover a wide variety of advanced topics in Perl, including many insights and tricks for using these features effectively.
Take back to work:
A much richer understanding of Perl, which will help you more easily make it part of your daily life.
Topics include:
- What's been removed or deprecated in Perl from v5.8.1 through v5.10?
- What's new in or has been added to Perl from v5.8.1 through v5.10?
- New operators: ~~ and //
- New control structures: built-in switch statement
- my(), our(), and state() variables
- Signals: safe or unsafe, your choice
- New :attributes
- New matching features (see regex section below)
- New and enhanced standard stand-alone programs
- New modules
- New pragmas to control internals (e.g., for open(), sort(), etc.)
- Threading unification
- Regular expressions
- Interpolation subtleties
- qr// operator
- Smart-matching
- Named capture-buffers for backreferences
- Relative-numbered backreferences
- Embedding code in regexes
- Recursive regular expressions
- More control over backtracking:
- The (?>....) construct
- Possessive quantifiers like *+ and ++
- Backtracking control verbs: PRUNE, COMMIT, FAIL, etc.
- Unicode and I/O layers
- The Unicode 4.1.0 standard (character database)
- Virtual characters
- Accessing Unicode properties
- Unicode combined characters
- I/O layers for encoding translation
- Upgrading legacy text files to Unicode
- Unicode display tips
- Lots more: you have to come to the class to find out!
- What's next?
If time permits, we'll also look at some of Perl's
internals, such as:
- Modules: symbol tables and typeglobs
- Symbolic references
- Useful typeglob tricks (aliasing)
- UNIVERSAL methods, ->can, ->isa, ->VERSION, ->DOES
- Autoloading and overriding builtins
- Function prototypes, including the new _ prototype
- References and objects
- Using weak references for self-referential data structures
- Autovivification
- Closures
- Overloading of operators, literals, and more
- Tied objects
S7
Resource Management with Solaris Containers
NEW!
Jeff Victor,
Sun Microsystems
Who should attend:
System administrators who want to improve resource utilization
of their Solaris (SPARC, x64, and x86) systems.
This tutorial covers the facilities available in Solaris for isolating
workloads and managing system resources. These facilities enable you to
safely host multiple workloads on one instance of an operating system
by creating virtual operating system instances and controlling their
resource usage. The features also enable workload management and
service level management, as well as the ability to leverage available capacity
and to manage system utilization. Controls for CPUs, processes and
threads, CPU affinity, scheduling classes, memory, and network
bandwidth management will be explained and demonstrated.
Take back to work:
A solid understanding of the facilities and commands available for maximizing usage of the Solaris systems in your data center.
Topics include:
- What are resources?
- Why would you want to manage them?
- How do you use these Solaris features:
- Dynamic Resource Pools, including processor sets
- Physical memory management with resource capping and memory sets
- Network bandwidth management with IPQoS
- Schedulers
- Application isolation with Zones
- Projects and Tasks
Monday, June 23, 2008
Full-Day Tutorials
M1
Solaris 10 Performance, Observability, and Debugging
James Mauro,
Sun Microsystems; Richard McDougall, VMware
Who should attend:
Anyone who supports or may support Solaris 10 machines.
Take back to work: How to apply the tools and utilities available in Solaris 10 to resolve performance issues and pathological behavior, and simply to understand the system and workload better.
Topics include:
- Solaris 10 features overview
- Solaris 10 tools and utilities
- The conventional stat tools (mpstat, vmstat, etc.)
- The procfs tools (ps, prstat, map, pfiles, etc.)
- lockstat and plockstat
- Using kstat
- DTrace, the Solaris dynamic tracing facility
- Using mdb in a live system
- Understanding memory use and performance
- Understanding thread execution flow and profiling
- Understanding I/O flow and performance
- Looking at network traffic and performance
- Application and kernel interaction
- Putting it all together
M2
Configuring and Deploying Linux-HA
Alan Robertson,
IBM Linux Technology Center
Who should attend:
System administrators and IT architects who design, evaluate, install, or manage critical computing systems. It is suggested that participants have basic familiarity with System V/LSB-style startup scripts, shell scripting, and XML. Familiarity with high-availability concepts is not assumed.
The Linux-HA project (https://linux-ha.org/) is the oldest and most
powerful open source high-availability (HA) package available,
comparing favorably to well-known commercial HA packages. Although the
project is called Linux-HA (or "heartbeat"), it runs on a variety of
POSIX-like systems, including FreeBSD, Solaris, and OS X.
Linux-HA provides highly available services on clusters from one to more than 16 nodes with no single point of failure. These services and the servers they run on are monitored. If a service should fail to operate correctly, or a server should fail, the affected services will be quickly restarted or migrated to another server, dramatically
improving service availability.
Linux-HA supports rules for expressing dependencies between services, and powerful rules for locating services in the cluster. Because these services are derived from init service scripts, they are familiar to system administrators and are easy to configure and manage.
Take back to work: Both the basic theory of high availability systems and practical knowledge of how to plan, install, and configure highly available systems using Linux-HA.
Topics include:
- General HA principles
- Compilation and installation of the Linux-HA
("heartbeat") software
- Overview of Linux-HA configuration
- Overview of commonly used resource agents
- Managing services supplied with init(8) scripts
- Sample Linux-HA configurations for Apache, NFS, DHCP, DNS, and Samba
- Writing and testing resource agents conforming to the Open
Cluster Framework (OCF) specification
- Creating detailed resource dependencies
- Creating co-location constraints
- Writing resource location constraints
- Causing failovers on user-defined conditions
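As a taste of the configuration style, a minimal two-node heartbeat (v1-style) setup can be sketched in two files; the node names, interface, and service address here are hypothetical:

```
# /etc/ha.d/ha.cf -- cluster membership and communication
node alpha beta        # the two cluster members
bcast eth0             # send heartbeats via broadcast on eth0
auto_failback on       # resources return to their preferred node

# /etc/ha.d/haresources -- resources preferred on node "alpha":
# a floating service IP address plus the Apache init script
alpha 192.0.2.100 apache
```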
M3
Network Flow Analysis
NEW!
Bruce Potter,
The Shmoo Group
Who should attend:
IT security professionals, network engineers,
and IT managers who want to learn how to analyze and learn from the
traffic on their networks.
We put a great deal of effort into controlling the data we have on
our networks. Firewalls attempt to keep out the bad guys, proxies
inspect traffic that goes in and out of the enterprise, and intrusion
detection systems attempt to find attacks as they occur. But do
you know what's really going on inside your network? Are your
policies and protections keeping out the bad guys, or do you have
problems that you are unaware of?
Most modern networks have the ability to view deep into your traffic,
but many organizations don't even know it. Most routers and even
some firewalls can export network flow data, information about the
type of traffic, and where it's going. By analyzing this data, you
can quickly find interesting traffic including use of unauthorized
software, malware, and malfunctioning systems.
This tutorial will guide attendees through the basics of network
flows, how to configure systems to export flow data, and how to
examine flows to look for anomalous and malicious behavior.
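On a Cisco router, enabling flow accounting and export typically comes down to a handful of IOS commands along these lines; the interface name and collector address are hypothetical:

```
! Enable flow accounting on the ingress interface
interface FastEthernet0/0
 ip route-cache flow
!
! Export NetFlow v5 records to a collector listening on UDP 2055
ip flow-export version 5
ip flow-export destination 192.0.2.50 2055
```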
Take back to work:
An understanding of how to deploy NetFlow capability
within your network, as well as tools and techniques for analyzing
the resulting data.
Topics include:
- Network analysis basics: What network analysis is, when it is appropriate, and its role in IT security
- Understanding NetFlow: A primer on Cisco's NetFlow implementation, the various NetFlow versions, and other flow-based architectures
- NetFlow sensor placement: Where to deploy NetFlow sensors for maximum effectiveness
- Configuring Cisco devices for NetFlow: How to configure and customize various versions of NetFlow using a Cisco router
- Using softflowd on Linux: For times when you don't have access to a NetFlow-capable router, the OSS package softflowd can do the job instead
- NetFlow analysis with Psyche: Psyche is an OSS tool for basic statistical analysis of NetFlow; the tutorial will include analysis of "known bad" data
- NetFlow analysis with SiLK: SiLK is a more advanced NetFlow tool; the tutorial will include analysis of more "known bad" data
- Future ideas: A brief discussion on other uses for NetFlow in your network
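The kind of statistical aggregation that flow-analysis tools such as Psyche and SiLK perform can be illustrated with a toy "top talkers" computation over simplified flow records; the three-field record format here is invented for the example and is far simpler than real NetFlow v5 records:

```python
from collections import Counter

def top_talkers(flows, n=3):
    """Sum bytes per source address over simplified flow records.

    flows: iterable of (src_addr, dst_addr, byte_count) tuples --
    a deliberately simplified stand-in for real NetFlow records.
    """
    totals = Counter()
    for src, _dst, nbytes in flows:
        totals[src] += nbytes
    return totals.most_common(n)

flows = [
    ("10.0.0.1", "198.51.100.7", 1200),
    ("10.0.0.2", "198.51.100.7", 300),
    ("10.0.0.1", "203.0.113.9", 800),
]
print(top_talkers(flows, n=1))  # -> [('10.0.0.1', 2000)]
```

Real tools add time windows, port/protocol breakdowns, and baselining, but the core idea is the same: reduce millions of flow records to a ranking that makes anomalies visible.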
M4
Securing Virtual Environments
NEW!
Phil Cox,
SystemExperts Corporation
Who should attend:
System administrators who are tasked with implementing or maintaining
the security of virtual environments, site managers charged with
selecting and setting virtual environment security requirements, and
general users who want to know more about the security features of
popular virtual environments.
Virtualization is popping up all over corporate networks and may soon
comprise a significant proportion of the services provided by a company.
As virtual environments become more pervasive, their proper
administration and security become critical to the security of the
entire corporate network. This tutorial presents the problems and
solutions surrounding the security of virtual environments, focusing
on the three main virtualization products in use today: VMware, Xen,
and Microsoft Virtual Server. The emphasis is on practical information
and solutions that people who use these technologies (or are tasked
with providing them to their companies) can apply. Some of the topics
will be demonstrated live during the course.
This course assumes no previous knowledge or experience with virtual
server technologies.
Take back to work:
A familiarity with current virtualization technology and its popular
implementations, as well as an understanding of how to secure virtual
environments that use them.
Topics include:
- Virtualization 101
- What is it?
- Who's using what?
- What really matters?
- Threats
- What are the issues?
- How can configuration problems hurt you?
- Popular technologies
- VMware
- Xen
- Microsoft Virtual Server
- Configuring a secure virtual environment
- Securing the host OS
- Securing the guest machine
- Miscellaneous Topics
M5
Beyond Shell Scripts: 21st-Century Automation Tools and Techniques
Æleen Frisch,
Exponential Consulting
Who should attend:
System administrators who want to explore new ways of automating administrative tasks. Shell scripts are
appropriate for many jobs, but more complex operations will often benefit
from sophisticated tools.
Although a good system administrator will be proficient in creating
shell scripts to solve specific problems and automate routine tasks,
this skill alone is no longer sufficient for the automation
requirements of typical 21st-century computing environments. As system
administration has moved from an informal, poorly defined, and widely
varying job title to a recognized and respected profession, so its
processes and procedures have developed from homegrown, ad hoc,
single-purpose strategies into systematic, wide-ranging ones supported
by powerful and well-developed software tools. This course introduces
you to several enterprise-worthy, open source administrative packages,
each of which supports the configuration, management, and/or
monitoring of a specific aspect of system functioning.
As modern UNIX/Linux systems have increased in complexity, the
tried-and-true "just write a shell script" approach has become outdated.
While simple tasks can still be performed this way, there are tools
available that can make your job simpler yet much more sophisticated,
especially when managing large numbers of systems.
The first half of this course covers Cfengine in depth, and the second half introduces several other important tools.
Take back to work: You will be ready to begin using these packages in your own environment and to realize the efficiency, reliability, and thoroughness that they offer compared to traditional approaches.
Topics include:
- Cfengine
- Basic and advanced configurations
- Sample uses
- Installations and beyond
- "Self-healing" system configurations
- Data collection
- Cfengine limitations and when not to use it
- Other important tools
- Expect: automating interactive processes
- Bacula, an enterprise backup management facility
- Network and system monitoring tools
- SNMP overview
- Nagios: Monitoring network and device performance
- RRDTool: Examining retrospective system data
- Munin and other data collectors for RRDTool
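The declarative style that sets Cfengine apart from shell scripts can be sketched with a tiny cfengine 2-era policy fragment. This is a heavily simplified illustration, not a complete policy; the paths, modes, and ages are hypothetical:

```
# cfagent.conf -- hypothetical "self-healing" policy sketch
control:
   actionsequence = ( files tidy )

files:
   # Keep sshd_config ownership and mode correct, repairing any drift
   /etc/ssh/sshd_config mode=0600 owner=root group=root action=fixall

tidy:
   # Remove /tmp files untouched for more than 7 days
   /tmp pattern=* age=7 recurse=inf
```

The agent converges the system toward this description on every run, rather than executing a one-shot sequence of commands the way a script would.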
M6
System and Network Performance Tuning
Marc Staveley,
Soma Networks
Who should attend:
Novice and
advanced UNIX system and network administrators, and UNIX developers
concerned about network performance impacts. A basic understanding of
UNIX system facilities and network environments is assumed.
We will explore procedures and techniques for tuning systems,
networks, and application code. Starting from the single system view,
we will examine how the virtual memory system, the I/O system, and the
file system can be measured and optimized. We'll extend the single
host view to include Network File System tuning and performance
strategies. Detailed treatment of networking performance problems,
including network design and media choices, will lead to examples of
network capacity planning. Application issues, such as system call
optimization, memory usage and monitoring, code profiling, real-time
programming, and techniques for controlling response time will be
addressed. Many examples will be given, along with guidelines for
capacity planning and customized monitoring based on your workloads
and traffic patterns. Question and analysis periods for particular
situations will be provided.
|
|
Take back to work: Procedures
and techniques for tuning your systems, networks, and application
code, along with guidelines for capacity planning and customized
monitoring.
Topics include:
- Performance tuning strategies
- Practical goals
- Monitoring intervals
- Useful statistics
- Tools, tools, tools
- Server tuning
- Filesystem and disk tuning
- Memory consumption and swap space
- System resource monitoring
- NFS performance tuning
- NFS server constraints
- NFS client improvements
- NFS over WANs
- Automounter and other tricks
- Network performance, design, and capacity planning
- Locating bottlenecks
- Demand management
- Media choices and protocols
- Network topologies: bridges, switches, and routers
- Throughput and latency considerations
- Modeling resource usage
- Application tuning
- System resource usage
- Memory allocation
- Code profiling
- Job scheduling and queuing
- Real-time issues
- Managing response time
M7
Implementing [Open]LDAP Directories
Gerald Carter,
Centeris/Samba Team
Who should attend:
Both LDAP directory administrators and architects. The focus is
on integrating standard network services with LDAP directories. The
examples are based on UNIX hosts and the OpenLDAP directory server
and will include actual working demonstrations throughout the course.
System administrators are frequently tasked with
integrating applications with directory technologies.
DNS, NIS, LDAP, and Active Directory are all examples
of the directory services that pervade today's networks. This tutorial will focus on helping you to understand how
to integrate common services hosted on UNIX servers
with LDAP directories. The demo-based approach will
show you how to build and deploy an OpenLDAP-based directory
service that consolidates account and configuration
information across a variety of applications.
Take back to work: Comfort with LDAP terms and concepts and an understanding of how to extend that knowledge to integrate future applications using LDAP into your network.
Topics include:
- Replacing an NIS domain with an LDAP directory
- Storing user and group account information
- Configuring PAM and Name Service Switch
libraries on the client
- Integrating Samba domain file and print servers
- Configuring a Samba LDAP account database
- Performance-tuning account lookups
- Integrating MTAs such as Sendmail and Postfix
- Configuring support for storing mail aliases in
an LDAP directory
- Using LDAP for storing mail routing information
and virtual domains
- Managing global address books for email clients
- Creating customized LDAP schema items
- Defining custom attributes and object classes
- Examining scripting solutions for developing your
own directory administration tools
- Overview of the Net::LDAP Perl module
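To give a sense of what "storing user account information" looks like in practice, here is a posixAccount entry in LDIF form; the DN, names, and numeric IDs are invented for the example:

```
# jdoe.ldif -- hypothetical user entry for an NIS-replacement directory
dn: uid=jdoe,ou=People,dc=example,dc=com
objectClass: inetOrgPerson
objectClass: posixAccount
uid: jdoe
cn: John Doe
sn: Doe
uidNumber: 10001
gidNumber: 100
homeDirectory: /home/jdoe
loginShell: /bin/bash
```

Once loaded with `ldapadd` and with the client's PAM and Name Service Switch libraries pointed at the directory, entries like this appear in `getent passwd` just as NIS map entries would.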
Tuesday, June 24, 2008
Full-Day Tutorials
T1
Solaris 10 Security Features Workshop
(Hands-on)
Peter Baer Galvin,
Corporate Technologies
Who should attend:
Solaris systems managers and administrators interested in the new security features in Solaris 10 (and features in previous Solaris releases that they might not be using).
Solaris has always been the premier commercial operating system, but it is also somewhat different from other UNIX/Linux systems. It has novel features and applications (some have been copied in other operating systems), and there are things you need to know to use them
effectively and securely.
This course covers a variety of topics surrounding Solaris 10 and security. Note that this is not a class about specific security vulnerabilities and hardening; rather, it examines new features in Solaris 10 for addressing the entire security infrastructure, as well as new issues to consider when deploying, implementing, and managing Solaris 10. This will be a workshop featuring instruction and practice/exploration.
Take back to work: During this exploration of the important new features of Solaris 10, you'll learn not only what each feature does and how to use it, but also best practices. Also covered are the status of each of these new features, how stable it is, whether it is ready for production use, and expected future enhancements.
Topics include:
- Overview
- Virtualization
- Containers (a.k.a. Zones), lightweight virtual environments for application isolation and resource management
- Installation
- Management
- Resource management
- Other Solaris virtualization technologies: LDoms, Xen
- RBAC: Role-Based Access Control (giving users and
applications access to data and functions based on the role they are
filling, as opposed to their login name)
- Privileges: A new Solaris facility based on the principle of least privilege; instead of being root (or not), users are accorded 43 distinct bits of privilege, sometimes spanning classes of actions and sometimes being confined to a specific system call
- NFSv4: The latest version of NFS (based on an industry standard), featuring stateful connection, more and better security, write locks, and faster performance
- Flash archives and live upgrade (automated system builds)
- Moving from NIS to LDAP
- DTrace: Solaris 10's system profiling and debugging tool
- FTP client and server enhancements for security, reliability, and auditing
- PAM (the Pluggable Authentication Module) enhancements, for more detailed control of access to resources
- Auditing enhancements
- BSM (the Basic Security Module), providing a security auditing
system (including tools to assist with analysis) and a device
allocation mechanism (providing object-reuse characteristics for
removable or assignable devices)
- Service Management Facility (a replacement for rc files)
- New "Secure By Default" settings
- Solaris Cryptographic Framework: A built-in system for encrypting anything, from files on disks to data streams between applications
- Kerberos enhancements
- Packet filtering with IPfilters
- BART (Basic Audit Reporting Tool): similar to Tripwire, BART enables you to determine what file-level changes have occurred on a system, relative to a known baseline
- Trusted Extension: Additions to Solaris 10 to make it "Trusted Solaris"
- Securing a Solaris 10 system
Laptop Requirements:
Each student should have a laptop with wireless access for remote access into an instructor-provided Solaris 10 machine (if you do not have a laptop, we will make every effort to pair you up with another student to work as a group; your laptop does not need to be running Solaris).
T2
Administering Linux in Production Environments
Æleen Frisch,
Exponential Consulting
Who should attend:
Both current Linux system administrators and administrators from sites considering converting to Linux or adding Linux systems to their current computing resources.
This course discusses using Linux as a production-level
operating system. Linux is used on the front
line for mission-critical applications in major corporations and institutions, and mastery of this operating
system is now becoming a major asset to system administrators.
Linux system administrators in production environments face many
challenges: the inevitable skepticism about whether an open source
operating system will perform as required; how well Linux systems will
integrate with existing computing facilities; how to locate, install,
and manage high-end features which the standard distributions may
lack; and many more. Sometimes the hardest part of ensuring that the
system meets production requirements is matching the best solution
with the particular local need. This course is designed to give you a
broad knowledge of production-worthy Linux capabilities, as well as
where Linux currently falls short. The material in the course is all
based on extensive experience with production systems.
This course will cover configuring and managing Linux computer systems in production environments. We will be focusing on the administrative issues that arise when Linux systems are deployed to address a variety of real-world tasks and problems arising from both commercial and research and development contexts.
Take back to work: The knowledge
necessary to add reliability and availability to your systems and to assess and implement tools needed for production-quality Linux
systems.
Topics include:
- Recent kernel developments
- High-performance I/O
- Advanced file systems and the LVM
- Disk striping
- Optimizing I/O performance
- Advanced compute-server environments
- HPC with Beowulf
- Clustering and high availability
- Parallelization environments/facilities
- CPU performance optimization
- Enterprise-wide security features, including centralized authentication
- Automation techniques and facilities
- Linux performance tuning
T3
Live Forensics
NEW!
Frank Adelstein,
ATC-NY;
Golden G. Richard,
University
of New Orleans
Who should attend:
Security professionals, CERT members, and security-aware users who would like to know more about live digital forensics
investigation.
Traditional digital forensics focuses on analyzing a copy (an
"image") of a disk to extract information—e.g., deleted files,
file fragments, Web browsing history—and to build a timeline that
provides a partial view of what has been done on the computer. Live
forensics, an emerging area in which information is gathered on
running systems, offers some distinct advantages over traditional
forensics. Live forensics can
provide information, such as running processes, memory
dumps, open network connections, and unencrypted versions of encrypted
files, that cannot
be gathered by static methods. This information can both serve as digital evidence and
help direct or focus traditional analysis methods. Despite its
usefulness, however, live forensics presents significant
challenges, many of which are related to malware.
We will spend approximately 25% of the time on static disk analysis techniques and then move on to
gathering and analyzing live data. We will give examples and demonstrations
of some techniques and tools.
The tutorial does not assume that students have a background in forensics.
Students are assumed to have a reasonably mature knowledge of
systems. Familiarity with operating systems structure, disk layouts,
and the basic interactions between operating systems and hardware
will be beneficial but is not required. Note that the course
emphasizes what types of information are available and how this
information can be extracted, rather than providing a 10-step
checklist of how to investigate cases. Those familiar with
traditional forensic analysis will benefit from the course. This course will not cover
legal issues.
Take back to work:
An understanding of
what live state information is available on a computer, some of the methods for gathering the information, how this information
can be used to build up the picture of what happened, and issues
that might affect the integrity of captured evidence.
Topics include:
- Types of information that can be gathered
- How the evidence can be analyzed
- How the evidence can work in conjunction with traditional methods to satisfy forensic requirements
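To give a flavor of the volatile state a live investigation collects, here is an illustrative sketch (not a tool from the course) that reads running-process and open-socket information from /proc, assuming a Linux system:

```python
# Illustrative sketch: snapshotting volatile state on a Linux system by
# reading /proc -- the kind of information a live investigation gathers
# before it disappears. Assumes Linux; not a course-provided tool.
import os

def running_processes():
    """Return (pid, command) pairs for every process visible in /proc."""
    procs = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/comm") as f:
                procs.append((int(entry), f.read().strip()))
        except OSError:
            continue  # the process exited between listdir() and open()
    return procs

def open_tcp_sockets():
    """Return the raw entries of /proc/net/tcp (open IPv4 TCP sockets)."""
    with open("/proc/net/tcp") as f:
        return [line.split() for line in f.readlines()[1:]]

if __name__ == "__main__":
    for pid, cmd in running_processes():
        print(pid, cmd)
    print(len(open_tcp_sockets()), "open TCP sockets")
```

Note that none of this state survives a power-off, which is exactly why live collection must happen before a traditional disk image is taken.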
T4
VMware ESX Performance and Tuning
NEW!
Richard McDougall,
VMware
Who should attend:
Anyone who is involved in planning or deploying
virtualization on VMware ESX and wants to understand the performance
characteristics of applications in a virtualized environment.
We will walk
through the implications for performance and capacity planning in a
virtualized world to learn how to achieve the best performance in a
VMware ESX environment.
Take back to work:
How to plan, understand, characterize, diagnose, and
tune for best application performance on VMware ESX.
Topics include:
- Introduction to virtualization
- Understanding different hardware acceleration techniques for virtualization
- Diagnosing performance using VMware tools
- Diagnosing performance using guest OS tools in a virtual environment
- Practical limits and overheads for virtualization
- Storage performance
- Network throughput and options
- Using Virtual-SMP
- Guest operating system types
- Understanding the characteristics of key applications, including Oracle, MS SQLserver, and MS Exchange
- Capacity planning techniques
T5
Issues in UNIX Infrastructure Design
Lee Damon,
University of Washington
Who should attend:
Anyone who is designing, implementing, or maintaining a UNIX environment with 2 to 20,000+ hosts; system administrators, architects, and managers who need to maintain multiple hosts with few admins.
This intermediate class will examine many of the background issues that need to be considered during the design and implementation of a mixed-architecture or single-architecture UNIX environment. It will cover issues from authentication (single sign-on) to the Holy Grail of single system images.
This class won't implement a "perfect solution," as each site has different needs. We will look at some freeware and some commercial solutions, as well as many of the tools that exist to make a workable environment possible.
Take back to work: Answers to the questions you should ask while designing and implementing the mixed-architecture or single-architecture UNIX environment that will meet your needs.
Topics include:
- Administrative domains: Who is responsible for what, and what can users do for themselves?
- Desktop services vs. farming: Do you do serious computation on the desktop, or do you build a compute farm?
- Disk layout: How do you plan for an upgrade? Where do things go?
- Free vs. purchased solutions: Should you write your own, or hire a consultant or company?
- Homogeneous vs. heterogeneous: Homogeneous is easier, but will it do what your users need?
- The essential master database: How can you keep track of what you have?
- Policies to make life easier
- Push vs. pull
- Getting the user back online in 5 minutes
- Remote administration: Lights-out operation; remote user sites; keeping up with vendor patches, etc.
- Scaling and sizing: How do you plan on scaling?
- Security vs. sharing: Your users want access to everything. So do the crackers . . .
- Single sign-on: How can you do it securely?
- Single system images: Can users see just one environment, no matter how many OSes there are?
- Tools: The free, the purchased, the homegrown
T6
Solaris/Linux Performance Measurement and Tuning
NEW!
Adrian Cockcroft,
Netflix, Inc.
Who should attend:
Capacity planning engineers and system administrators with an
interest in performance optimization who work with Solaris or Linux.
This half-day course focuses on the measurement sources and tuning
parameters available in Solaris and Linux.
Take back to work:
An understanding of the meaning and behavior of metrics; knowledge of the common fallacies, misleading indicators, sources of
measurement error, and other traps for the unwary.
Topics include:
- TCP/IP measurement and tuning
- Complex storage subsystems
- Virtualization
- Advanced Solaris metrics
- Microstates
- Extended system accounting
T7
Disk-to-Disk Backup and Eliminating Backup System Bottlenecks
Jacob Farmer,
Cambridge Computer Services
Who should attend:
System administrators involved in the design and management of backup systems and policymakers responsible for protecting their organization's data. A general familiarity with server and storage hardware is assumed. The class focuses on architectures and core technologies and is relevant regardless of what backup hardware and software you currently use.
The data protection industry is going through a mini-renaissance. In the past few years, the cost of disk media has dropped to the point where it is practical to use disk arrays in backup systems, thus minimizing and sometimes eliminating the need for tape. In the first incarnations of disk-to-disk backup—disk staging and virtual tape libraries—disk has been used as a direct replacement for tape media. While this compensates for the mechanical shortcomings of tape drives, it fails to address other critical bottlenecks in the backup system, and thus many disk-to-disk backup projects fall short of expectations. Meanwhile, many early adopters of disk-to-disk backup are discovering that the long-term costs of disk staging and virtual tape libraries are prohibitive.
The good news is that the next generation of disk-enabled data protection solutions has reached a level of maturity where they can assist—and sometimes even replace—conventional enterprise backup systems. These new D2D solutions leverage the random access properties of disk devices to use capacity much more efficiently and to obviate many of the hidden backup-system bottlenecks that are not addressed by first-generation solutions. The challenge to the backup system architect is to cut through the industry hype, sort out all of these new technologies, and figure out how to integrate them into an existing backup system.
This tutorial identifies the major bottlenecks in conventional backup systems and explains how to address them. The emphasis is placed on the various roles for inexpensive disk in your data protection strategy; however, attention is given to SAN-enabled backup, the current state and future of tape drives, and iSCSI.
Take back to work: Ideas for immediate, effective, inexpensive improvements to your backup systems.
Topics include:
- Identifying and eliminating backup system bottlenecks
- Conventional disk staging
- Virtual tape libraries
- Removable disk media
- Incremental forever and synthetic full backup strategies
- Block- and object-level incremental backups
- Information lifecycle management and nearline archiving
- Data replication
- CDP (Continuous Data Protection)
- Snapshots
- Current and future tape drives
- Capacity optimization (single-instance file systems)
- Minimizing and even eliminating tape drives
- iSCSI
T8
Nagios: Advanced Topics
NEW!
John Sellens,
SYONEX
Who should attend:
Network and system administrators ready to implement or extend their
use of the Nagios system and network monitoring tool.
Nagios is a very widely used tool for monitoring hosts and services on a network. It is flexible and configurable, and it can be extended in many ways, using home-grown or already existing extensions.
This tutorial will cover the advanced features and abilities of
Nagios and related tools, which are especially useful in larger
or more complex environments, or for higher degrees of automation
or integration with other systems.
Take back to work: The information you need to immediately implement and use the advanced features of Nagios and
related tools for monitoring systems and devices on your networks.
Topics include:
- Theory of operation
- Configuration for more complex environments
- Plug-ins: Their creation, use, and abuse
- Extensions: NRPE, NSCA, NDOUtils
- Add-ons: Graphing, integration with other tools
- Abuse: Unexpected uses and abuses of Nagios
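For readers unfamiliar with Nagios's object configuration style, a minimal host/service definition pair might look like the following sketch (the host name, address, and template names are illustrative, not from the course):

```
# Hypothetical minimal Nagios object definitions; the host name and
# address are made up, and the templates assume a stock sample config.
define host {
    use                  generic-host      ; inherit defaults from a template
    host_name            web01
    address              192.0.2.10
    check_command        check-host-alive
}

define service {
    use                  generic-service
    host_name            web01
    service_description  HTTP
    check_command        check_http
}
```

The tutorial's configuration topics build on this object model, layering templates, host groups, and dependencies for larger environments.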
T9
Performance Management with Free and Bundled Tools
NEW!
Adrian Cockcroft,
Netflix, Inc.
Who should attend:
Capacity planning engineers and system administrators looking for
an overview of methodologies and freely available tools.
Capacity planning and performance management tools have been
commercially available for many years. A new generation of freely
available tools provides data collectors and analysis packages. As
the underlying computer platforms and network devices have evolved,
they have added improved data sources and have bundled free data
collectors. Several open source and freeware projects have sprung
up to collect and display cross-platform data, and with the advent
of highly functional free statistics and modeling packages, comprehensive
analysis, modeling, and archival storage can now be assembled. Free
and bundled tools are of special interest to sites with very diverse
mixes of systems, very large sites where licensing costs become
prohibitive, and sites replacing a few large single systems with
many more low-cost horizontally scaled systems.
Take back to work:
A vendor- and operating-system-independent understanding of capacity planning techniques and
tools.
Topics include:
Computer system and network performance data collection, analysis,
modeling, and capacity planning on any platform using bundled utilities
and freely available tools such as Orca, BigBrother, OpenNMS, Nagios,
Ganglia, SE Toolkit, R, Ethereal/Wireshark, Ntop, MySQL, and PDQ.
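As a small illustration of the bundled-utility data collection this class covers, the sketch below maps one captured line of `vmstat` output onto named fields for later analysis; the sample line and its values are made up for illustration:

```python
# Illustrative sketch: parse one data line of Linux `vmstat` output into
# named fields, suitable for loading into a database or stats package.
# The sample line below is fabricated for illustration.
FIELDS = "r b swpd free buff cache si so bi bo in cs us sy id wa".split()

def parse_vmstat(line):
    """Map one vmstat data line onto its column names."""
    values = [int(v) for v in line.split()]
    return dict(zip(FIELDS, values))

sample = "1 0 0 812340 120032 950112 0 0 5 12 210 340 7 2 90 1"
stats = parse_vmstat(sample)
print(stats["us"], stats["id"])   # user and idle CPU percentages
```

A cron job feeding such records into MySQL, with R for analysis, is one way the free tools named above can be assembled into a capacity planning pipeline.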
T10
Next-Generation Storage Networking
Jacob Farmer,
Cambridge Computer Services
Who should attend:
Sysadmins running day-to-day operations and those who set or enforce budgets. This tutorial is technical in nature, but it does not address command-line syntax or the operation of specific products or technologies. Rather, the focus is on general architectures and various approaches to scaling in both performance and capacity. Since storage networking technologies tend to be costly, there is some discussion of the relative cost of different technologies and of strategies for managing cost and achieving results on a limited budget.
There has been tremendous innovation in the data storage industry over the past few years. Proprietary, monolithic SAN and NAS solutions are beginning to give way to open-system solutions and distributed architectures. Traditional storage interfaces such as parallel SCSI and Fibre Channel are being challenged by iSCSI (SCSI over TCP/IP), SATA (serial ATA), SAS (serial attached SCSI), and even Infiniband. New filesystem designs and alternatives to NFS and CIFS are enabling high-performance filesharing measured in gigabytes (yes, "bytes," not "bits") per second. New spindle management techniques are enabling higher-performance and lower-cost disk storage. Meanwhile, a whole new set of efficiency technologies are allowing storage protocols to flow over the WAN with unprecedented performance. This tutorial is a survey of the latest storage networking technologies, with commentary on where and when these technologies are most suitably deployed.
Take back to work: An understanding of general architectures, various approaches to scaling in both performance and capacity, relative costs of different technologies, and strategies for achieving results on a limited budget.
Topics include:
- Fundamentals of storage virtualization: the storage I/O path
- Shortcomings of conventional SAN and NAS architectures
- In-band and out-of-band virtualization architectures
- The latest storage interfaces: SATA (serial ATA), SAS (serial attached SCSI), 4Gb Fibre Channel, Infiniband, iSCSI
- Content-Addressable Storage (CAS)
- Information Life Cycle Management (ILM) and Hierarchical Storage Management (HSM)
- The convergence of SAN and NAS
- High-performance file sharing
- Parallel file systems
- SAN-enabled file systems
- Wide-area file systems (WAFS)
T11
Inside the Box: What You Need to Know About Your Hardware
NEW!
Rudi van Drunen,
Competa IT
Who should attend:
Sysadmins and tech support who want black boxes to
become less opaque to them. A clearer understanding of your
hardware's electronics will help you deploy, support, and
troubleshoot systems easily and quickly.
This course will top up your toolbox with comprehensive knowledge
of hardware and underlying electronics. We will cover the basic electronics of the hardware the sysadmin needs to work with and troubleshoot.
Practical tips for avoiding common pitfalls will be offered.
Take back to work:
A more thorough understanding of electronics, with the ability to attack hardware-related problems at a fundamental level.
Topics include:
- Technologies
  - Analog electronics
  - Digital electronics
- Integrated circuits
  - Custom/semi-custom
  - Programmable logic
- Ohm's law
- Signals
  - Analog signal levels
  - Digital logic levels
  - Cabling: USB, Ethernet, SCSI
  - Crosstalk
  - RF issues (including wireless)
- Power
  - Power calculations
  - Power layout
- Signal processing
  - Mixed signal circuits
  - A/D conversion
  - Audio systems
  - Video/VGA/DVI
- Circuit boards
- How to fix your hardware or keep it running until on-site support arrives
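The Ohm's law and power calculations listed above reduce to two formulas, I = V / R and P = V × I. A worked example (the component values are hypothetical, chosen only for illustration):

```python
# Worked example of the Ohm's law and power arithmetic covered in the
# class. The component values are made up for illustration.
def current(voltage, resistance):
    """Ohm's law: I = V / R, in amps."""
    return voltage / resistance

def power(voltage, amps):
    """Electrical power: P = V * I, in watts."""
    return voltage * amps

v, r = 12.0, 24.0            # a hypothetical 12 V fan with a 24 ohm winding
i = current(v, r)            # 12 / 24 = 0.5 A
p = power(v, i)              # 12 * 0.5 = 6.0 W
print(f"{i} A, {p} W")
```

Summing such per-component figures is the basis of the power-budget layout work the course describes.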