
Tutorial Descriptions

TUESDAY, OCTOBER 10, 2000


T1 Linux Systems Administration
Bryan C. Andregg, Red Hat Software

Who should attend: This tutorial is directed at System Administrators who are planning to implement a Linux solution in a production environment. Course attendees should be familiar with the basics of systems administration in a UNIX/Linux environment: user-level commands, administration commands, and TCP/IP networking. Both the novice Administrator and the Guru should leave the tutorial having learned something.

From a single server to a network of workstations, administering a Linux environment can be daunting for administrators experienced with other platforms. Starting with a single server and finishing with a multi-server, 1000+ user environment, case studies will provide practical information for using Linux in the real world. The following areas will be covered, with a special emphasis on security:

  • Installation Features
  • Disk Partitioning and RAID
  • Networking
  • User Accounts
  • Services
  • NFS and NIS
  • High Availability Environments
  • The Workplace
  • Up and Coming in the Linux World (CODA, LVM, etc.)

At the completion of the course, attendees should feel confident in their ability to set up and maintain a secure and useful Linux network. The tutorial will be conducted in an open manner that allows for interruption with questions and answers.

Bryan C. Andregg is the Director of Networks at Red Hat, Inc. He has been with the company for three years and in that time has moved from being the only Systems Administrator through almost every job in IS. Bryan is responsible for some horrible Perl and even worse shell scripts, which have made the lives of his replacements a nightmare. His job title on his next round of business cards will say "firefighter."

T2 Internet Security for Linux System Administrators
Ed DeHart, Prism Servers, Inc.

Who should attend: Linux/UNIX System Administrators, Network Managers, and Operations/Support Staff. The tutorial materials assume that the attendees have a good working knowledge of Linux/UNIX system administration, and are experienced Internet users.

At this one-day tutorial you will learn strategies and techniques to help eliminate the threat of Internet intrusions and to improve the security of Linux/UNIX systems connected to the Internet.

This tutorial will also help you understand, set up, and manage a number of Internet services appropriate to your site's mission.

Topics include:

  • Latest information on security problems
  • Linux/UNIX system security
  • TCP/IP network security
  • Site security policies

After completing the tutorial, attendees will be able to establish and maintain a secure Internet site that allows the benefits of Internet connectivity while protecting their organization's information.

Ed DeHart is a former member of the CERT Coordination Center, which he helped found in 1988. The CERT was formed by the Defense Advanced Research Projects Agency (DARPA) to serve as a focal point for the computer security concerns of Internet users. Ed was recently the President of Pittsburgh OnLine, Inc., a successful Internet Service Provider that operated several UNIX and Linux servers. Currently, Ed is the President of Prism Servers, Inc., which builds Internet firewalls.

T3 Inside the Linux Kernel
Stephen C. Tweedie, Red Hat; Ted Ts'o, MIT

Who should attend: Application programmers and kernel developers. You should be reasonably familiar with C programming in the UNIX environment, but no prior experience with the UNIX or Linux kernel code is assumed.

This tutorial will give you an introduction to the structure of the Linux kernel, the basic features it provides, and the most important algorithms it employs.

The Linux kernel aims to achieve conformance with existing standards and compatibility with existing operating systems; however, it is not a reworking of existing UNIX kernel code. The Linux kernel was written from scratch to provide both standard and novel features, and it takes advantage of the best practices of existing UNIX kernel designs.

Although the material will focus on the release version of the Linux kernel (v2.2), it will also address aspects of the development kernel codebase (v2.3) where it differs substantially from 2.2. It will not contain any detailed examination of the source code, but will instead offer an overview and roadmap of the kernel's design and functionality.

Topics include:

  • How the Linux kernel is organized: scheduler, virtual memory system, filesystem layers, device driver layers and networking stacks.
    • The interface between each module and the rest of the kernel, and the functionality provided by that interface.
    • The common kernel support functions and algorithms used by that module.
    • How modules provide for multiple implementations of similar functionality (network protocols, filesystem types, device drivers, and architecture-specific machine interfaces); a short sketch of this interface style appears after this topic list.
  • Basic ground rules of kernel programming, such as races and deadlock conditions.
  • Implementation of the most important kernel algorithms and their general properties (aspects of portability, performance and functionality).
  • The main similarities and differences between Linux and traditional UNIX kernels, with attention to places where Linux implements significantly different algorithms.
  • Details of the Linux scheduler, its VM system, and the ext2fs filesystem.
  • The strict requirements for ensuring that kernel code is portable between the many architectures that Linux supports.
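The tutorial itself stays at the overview level, but the interface style referred to in the topic list can be illustrated with a small userspace sketch. Everything below (the struct name block_ops and both toy "drivers") is a hypothetical illustration loosely modeled on the kernel's table-of-function-pointers convention, not actual kernel code:

    /* Hedged sketch (not kernel source): how Linux-style subsystems let
     * multiple implementations plug into one interface through a table of
     * function pointers.  All names here are illustrative only. */
    #include <stdio.h>

    struct block_ops {                 /* the "interface" a subsystem defines */
        const char *name;
        int (*open)(void);
        int (*read_block)(long blockno, char *buf);
    };

    /* One hypothetical implementation ... */
    static int ramdisk_open(void) { puts("ramdisk: open"); return 0; }
    static int ramdisk_read(long blockno, char *buf) { buf[0] = 0; return 0; }

    static struct block_ops ramdisk_driver = {
        .name = "ramdisk", .open = ramdisk_open, .read_block = ramdisk_read,
    };

    /* ... and another; the caller neither knows nor cares which it gets. */
    static int loop_open(void) { puts("loop: open"); return 0; }
    static int loop_read(long blockno, char *buf) { buf[0] = 1; return 0; }

    static struct block_ops loop_driver = {
        .name = "loop", .open = loop_open, .read_block = loop_read,
    };

    static void use_driver(struct block_ops *ops)
    {
        char buf[512];
        ops->open();
        ops->read_block(0, buf);
        printf("read block 0 via %s\n", ops->name);
    }

    int main(void)
    {
        use_driver(&ramdisk_driver);
        use_driver(&loop_driver);
        return 0;
    }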

Stephen Tweedie works on Linux kernel internals and high availability for Red Hat, Inc. Before that he worked on VMS filesystem internals for Digital's Operating Systems Software Group. He has been contributing to Linux for a number of years, in particular designing some of the high-performance algorithms central to the ext2fs filesystem and the virtual memory code.

Theodore Ts'o has been a Linux kernel developer since almost the very beginning of Linux: he implemented POSIX job control in the 0.10 Linux kernel. He is the maintainer and author of the Linux COM serial port driver and the Comtrol Rocketport driver, and he architected and implemented Linux's tty layer. Outside of the kernel, he is also the maintainer of the e2fsck filesystem consistency checker. Theodore is currently employed by VA Linux Systems.

T4 Building Linux Applications
Michael K. Johnson, Red Hat

Who should attend: This class is designed for programmers who are familiar with the C programming language, the standard C library, and some basic ideas of UNIX shells: primarily pipes, I/O redirection, and job control. We will discuss (come prepared to ask questions) the major OS-related components of a Linux application and how they fit together. This course will prepare you to start building Linux applications. Since Linux is very similar to UNIX, you will be fundamentally prepared to build UNIX applications as well.

The core of the tutorial will be an introduction to system programming: the process model, file I/O, file name and directory management, and signal processing lead the list. We will cover more briefly (in more or less depth, depending on participant interest) ttys and pseudo-ttys, time, random numbers, and simple networking.
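To give a flavor of this material (a hedged sketch of our own, not an excerpt from the course), a minimal C program touching the process model and signal handling might look like this:

    /* Illustrative sketch: fork a child, exec a command, reap the child,
     * and install a signal handler along the way. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <sys/wait.h>

    static void on_sigint(int signo)
    {
        /* write() is async-signal-safe; printf() is not. */
        const char msg[] = "caught SIGINT\n";
        write(STDERR_FILENO, msg, sizeof(msg) - 1);
    }

    int main(void)
    {
        struct sigaction sa = { 0 };
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);      /* install a signal handler */

        pid_t pid = fork();                /* create a child process */
        if (pid < 0) {
            perror("fork");
            exit(EXIT_FAILURE);
        }
        if (pid == 0) {
            /* child: replace this process image with /bin/echo */
            execl("/bin/echo", "echo", "hello from the child", (char *)NULL);
            perror("execl");
            _exit(127);
        }

        int status;
        waitpid(pid, &status, 0);          /* parent: reap the child */
        printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }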

We will then cover some system library functionality, including globbing and regular expressions, command line parsing, and dynamic loading. If there is sufficient interest and time, we will then briefly survey the great variety of application programming libraries.
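As an illustration of the sort of library facilities meant here (again a sketch, not course code), the following short program uses POSIX globbing and regular expressions; the pattern and filenames are arbitrary examples:

    /* Illustrative sketch of two library facilities: filename globbing with
     * glob(3) and POSIX regular expressions with regcomp(3)/regexec(3). */
    #include <stdio.h>
    #include <glob.h>
    #include <regex.h>

    int main(void)
    {
        /* Globbing: expand a shell-style pattern into matching filenames. */
        glob_t g;
        if (glob("/etc/*.conf", 0, NULL, &g) == 0) {
            for (size_t i = 0; i < g.gl_pathc; i++)
                printf("matched: %s\n", g.gl_pathv[i]);
            globfree(&g);
        }

        /* Regular expressions: compile once, then match as often as needed. */
        regex_t re;
        if (regcomp(&re, "^lin(ux)?$", REG_EXTENDED | REG_ICASE) == 0) {
            const char *word = "Linux";
            if (regexec(&re, word, 0, NULL, 0) == 0)
                printf("'%s' matches the pattern\n", word);
            regfree(&re);
        }
        return 0;
    }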

Michael K. Johnson has worked with Linux since the first publicly released version. He is the co-author of "Linux Application Development" (Addison-Wesley, 1998) and is a software developer for Red Hat, Inc. Michael has written kernel, system, and application code for Linux, and has been teaching Linux courses and tutorials for six years.

WEDNESDAY, OCTOBER 11, 2000


W1 Administering Linux in Production Environments
Aeleen Frisch, Exponential Consulting

Who should attend: This course is designed both for current Linux system administrators and for administrators from sites considering converting to or adding Linux systems to their current computing resources.

This course will cover configuring and managing Linux computer systems in production environments. We will focus on the administrative issues that arise when Linux systems are deployed to address real-world tasks and problems in both commercial and research and development contexts.

Topics include:

  • Why Linux? Justifying a free operating system in a production environment
  • High performance I/O: advanced filesystems (Coda, logical volumes), disk striping, optimizing I/O performance
  • Linux and enterprise-level networking
  • High-performance compute-server environments: Beowulf, clustering, parallelization environments and facilities, CPU performance optimization
  • High availability Linux: fault tolerance options, UPS configuration
  • Databases and Linux
  • Linux systems in office environments
  • Automating Linux installation and configuration
  • Integrating with (other) UNIX and non-UNIX systems
  • Security considerations and techniques for production environments

Aeleen Frisch has been a system administrator for over 15 years. She currently looks after a very heterogeneous network of UNIX and Windows NT systems. She is the author of several books, including Essential Windows NT System Administration.

W2 Intrusion Detection and Network Forensics
Marcus J. Ranum, Network Flight Recorder, Inc.

Who should attend: Network and system managers, security managers, and auditors. This tutorial will assume some knowledge of TCP/IP networking and client/server computing.

What can intrusion detection do for you? Intrusion detection systems are designed to alert network managers to the presence of unusual or possibly hostile events within the network. Once you've found traces of a hacker, what should you do? What kind of tools can you deploy to determine what happened, how they got in, and how to keep them out? This tutorial provides a highly technical overview of the state of intrusion detection software and the types of products that are available, as well as the basic principles to apply for building your own intrusion detection alarms. Methods of recording events during an intrusion are also covered.
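To make the "build your own" idea concrete, here is a hedged sketch of the simplest building block, a packet sniffer, written against the widely used libpcap library (link with -lpcap). The interface name eth0 and the fixed packet count are placeholder choices, and no product described in this tutorial is implied to work this way:

    /* Sketch of an IDS sniffer front end using libpcap.  This only counts
     * packets in the capture callback; real detection logic would inspect
     * headers and payloads there. */
    #include <stdio.h>
    #include <pcap.h>

    static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
    {
        long *count = (long *)user;
        (*count)++;
        printf("packet %ld: %u bytes captured\n", *count, h->caplen);
    }

    int main(int argc, char **argv)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        const char *dev = (argc > 1) ? argv[1] : "eth0";   /* interface name */
        long count = 0;

        /* Open the interface in promiscuous mode with a 1-second read timeout. */
        pcap_t *p = pcap_open_live(dev, 65535, 1, 1000, errbuf);
        if (p == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }
        pcap_loop(p, 10, on_packet, (u_char *)&count);     /* grab 10 packets */
        pcap_close(p);
        return 0;
    }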

Topics include:

  • What is IDS?
    • Principles
    • Prior art
  • Can IDS help?
    • What IDS can and can't do for you
    • IDS and the WWW
    • IDS and firewalls
    • IDS and VPNs
  • Types and trends in IDS design
    • Anomaly detection
    • Misuse detection
    • Traps
    • Future avenues of research
  • Concepts for building your IDS
    • What you need to know first
    • Performance issues
  • Tools for building your IDS
    • Sniffers and suckers
    • Host logging tools
    • Log recorders
  • Reporting and recording
    • Managing alerts
    • What to throw away
    • What to keep
  • Network Forensics
    • So you've been hacked
    • Forensic tools
    • Brief overview of evidence handling
    • Who can help you
  • Resources and References

Marcus J. Ranum is CEO and founder of Network Flight Recorder, Inc. He is the principal author of several major Internet firewall products, including the DEC SEAL, the TIS Gauntlet, and the TIS Internet Firewall Toolkit. Marcus has been managing UNIX systems and network security for over 13 years, including configuring and managing whitehouse.gov. Marcus is a frequent lecturer and conference speaker on computer security topics.

W3 Designing Resilient Distributed Systems - High Availability
Evan Marcus, Veritas Software

Who should attend: Beginning and intermediate UNIX system and network administrators, and UNIX developers concerned with building applications that can be deployed and managed in a highly resilient manner. A basic understanding of UNIX system programming, UNIX shell programming, and network environments is required.

This course will explore procedures and techniques for designing, building, and managing predictable, resilient UNIX-based systems in a distributed environment. Hardware redundancy, system redundancy, monitoring and verification techniques, network implications, and system and application programming issues will all be addressed. We will discuss the trade-offs among cost, reliability, and complexity.
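As a small, hedged example of the application programming side of resilience (our own sketch, not drawn from the course materials), a client can fail over from a primary service address to a standby when a connection cannot be established; the hostnames and port below are placeholders:

    /* Sketch: try the primary service address first, then fail over to a
     * standby if the connection cannot be established. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    static int try_connect(const char *host, const char *port)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;

        if (getaddrinfo(host, port, &hints, &res) != 0)
            return -1;
        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd >= 0 && connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                      /* connected */
            if (fd >= 0) { close(fd); fd = -1; }
        }
        freeaddrinfo(res);
        return fd;
    }

    int main(void)
    {
        /* "primary.example.com" and "standby.example.com" are placeholders. */
        const char *hosts[] = { "primary.example.com", "standby.example.com" };

        for (int i = 0; i < 2; i++) {
            int fd = try_connect(hosts[i], "5432");
            if (fd >= 0) {
                printf("connected to %s\n", hosts[i]);
                close(fd);
                return 0;
            }
            fprintf(stderr, "%s unavailable, trying next host\n", hosts[i]);
        }
        fprintf(stderr, "no servers reachable\n");
        return 1;
    }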

Topics include:

  • What is high availability? Who does and does not need it?
  • Defining uptime and cost; "big rules" of system design
  • Disk and data redundancy; RAID and SCSI arrays
  • Host redundancy in HA configurations
  • Network dependencies
  • Application system programming concerns
  • Anatomy of failovers: applications, systems, management tools
  • Planning disaster recovery sites and data updates
  • Security implications
  • Upgrade and patch strategies
  • Backup systems: off-site storage, redundancy and disaster recovery issues
  • Managing the system managers, processes, verification

Evan Marcus is a Senior Systems Engineer and High Availability Specialist with VERITAS Software Corporation. Evan has more than 12 years of experience in UNIX Systems Administration. While working at Fusion Systems and OpenVision Software, Evan worked to bring the first High Availability software application for SunOS and Solaris to market. Evan has authored several articles and talks on the design of High Availability Systems. In February of 2000, Evan's first book, Blueprints for High Availability: Designing Resilient Distributed Systems, co-authored with Hal Stern of Sun Microsystems, was published by John Wiley and Sons.

W4 Network Administration
Bryan C. Andregg, Red Hat, Inc.

Who should attend: This tutorial is directed at System Administrators who are implementing Network Services and are looking for background on configuring those services, as well as the basics of the protocols and performance tuning. Attendees should have used or been the client of an IP network before and have a basic knowledge of Systems Administration, but they do not need to be experienced Network Administrators. Both new Network Administrators and Gurus will leave the tutorial having learned something.

From a stand-alone client attached to the Internet to a distributed network of web servers, Systems Administrators are being tasked with bringing their office environments on-line. The Network Services that need to be configured to do this can be daunting to Administrators who aren't familiar with the required applications. Configuration examples, along with brief overviews of the underlying protocols, will give attendees usable examples that they can put to work after the conference.
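As a taste of the protocol-level view (a sketch under our own assumptions, not tutorial material), the fragment below speaks a minimal piece of HTTP by hand over a TCP socket; www.example.com is a placeholder host:

    /* Sketch: the HTTP "protocol" is just lines of text over TCP --
     * connect, send a request, read the reply. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <netdb.h>
    #include <sys/socket.h>

    int main(void)
    {
        const char *host = "www.example.com";   /* placeholder host */
        struct addrinfo hints, *res;
        char buf[2048];

        memset(&hints, 0, sizeof(hints));
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0) {
            fprintf(stderr, "could not resolve %s\n", host);
            return 1;
        }

        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
            perror("connect");
            return 1;
        }

        /* Send a minimal request and print whatever the server answers. */
        char request[256];
        snprintf(request, sizeof(request),
                 "HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n", host);
        write(fd, request, strlen(request));

        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("server replied:\n%s", buf);
        }

        close(fd);
        freeaddrinfo(res);
        return 0;
    }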

Topics include:

  • Networking Overview
  • Client Networking
  • Serving Networked Clients
  • Network Services:
    • SSH - Secure Shell
    • FTP - File Transfer
    • HTTP - Web
    • SMTP - Mail
    • NFS - Network File Systems
    • DHCP - Dynamic Networking
  • Network Troubleshooting
  • Neat Network Tricks
  • Up and Coming Topics
    • VPN - Virtual Private Networks
    • IPv6 - The future of IP(?)

At the completion of the course, attendees should feel confident in their ability to set up and maintain secure Network Services. The tutorial will be conducted in an open manner that encourages interruption with questions and answers.

Bryan C. Andregg is the Director of Networks at Red Hat, Inc. He has been with the company for three years and in that time has moved from being the only Systems Administrator through almost every job in IS. Bryan is responsible for some horrible Perl and even worse shell scripts, which have made the lives of his replacements a nightmare. His job title on his next round of business cards will say "firefighter."

