A primary source of
information on any system is its log files. Of course, log files are
not unique to networking software. They are simply another aspect of
general systems management that you must master.
Some applications manage their own log files. Web servers and
accounting software are prime examples. Many of these applications
have specific needs that aren't well matched to a more general
approach. In dealing with these, you will have to consult the
documentation and deal with each on a case-by-case basis.
Fortunately, most Unix software is now designed to use a central
logging service, syslog.
11.2.1. syslog
You
are probably already familiar with
syslog, a
versatile logging tool written by Eric Allman. What is often
overlooked is that
syslog can be used across
networks. You can log events from your Cisco router to your Unix
server. There are even a number of Windows versions available. Here
is a quick review of
syslog.
An early and persistent criticism of Unix
was that every application seemed to have its own set of log files
hidden away in its own directories.
syslog was
designed to automate and standardize the process of maintaining
system log files. The main program is the daemon
syslogd, typically started as a separate process
during system initialization. Messages can be sent to the daemon
either through a set of library routines or by a user command,
logger.
logger is
particularly useful for logging messages from scripts or for testing
syslog, e.g., checking file permissions.
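For example, a script might record its progress with a single call like
the following (the facility, level, and message text here are only
illustrations):
bsd1# logger -p user.notice "nightly backup completed"
If the message shows up where messages for that facility and level are
supposed to go, that much of the chain is working.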
11.2.1.1. Configuring syslog
syslogd's
behavior is controlled by a configuration file, which by
default is
/etc/syslog.conf. An alternative file
can be specified with the
-f option when the
daemon is started. If changes are made to the configuration file,
syslogd must be restarted for the changes to
take effect. The easiest way to do this is to send it a HUP signal
using the
kill command. For example:
bsd1# kill -HUP 127
where
127 is the PID for
syslogd, found using the
ps
command. (Alternatively, the PID is written to the file
/var/run/syslogd.pid on some systems.)
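On those systems you can skip the
ps step entirely; a command along these lines
should work:
bsd1# kill -HUP `cat /var/run/syslogd.pid`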
The configuration
file is a text file with two fields separated by tabs, not spaces!
Blank lines are ignored. Lines beginning with
# in
the first column are comments. The first field is a
selector, and the second is an
action. The selector identifies the program or
facility sending the message. It is composed of both a facility name
and a severity level. The facility names must be selected from a
short list of facilities defined for the kernel. You should consult
the manpage for
syslogd for a complete list and
description of facilities, as these vary from implementation to
implementation. The severity level is also taken from a predefined
list:
emerg,
alert,
crit,
err,
warning,
notice,
info, or
debug. Their
meanings are just what you might guess.
emerg is
the most severe. You can also use
* for all or
none for nothing. Multiple facilities can be
combined on a single line if you separate them with commas. Multiple
selectors must be separated with semicolons.
The Action field tells where to send
the messages. Messages can be sent to files, including device files
such as the console or printers, logged-in users, or remote hosts.
Pathnames must be absolute, and the file must exist with the
appropriate permissions. You should be circumspect about sending too
much to the console. Otherwise, you may be overwhelmed by messages
when you are using the console, particularly when you need the
console the most. If you want multiple actions, you will need
multiple lines in the configuration file.
Here are a few lines from a
syslog.conf file
that should help to clarify this:
mail.info /var/log/maillog
cron.* /var/log/cron
security.* @loghost.netlab.lander.edu
*.notice;news.err root
*.err /dev/console
*.emerg *
The first line says that all informational messages from
sendmail and other mail-related programs should
be appended to the file
/var/log/maillog. The
second line says all messages from
cron,
regardless of severity, should be appended to the file
/var/log/cron. The next line says that all
security messages should be sent to a remote system,
loghost.netlab.lander.edu. Either a hostname or
an IP address can be used. The fourth line says that all notice-level
messages and any news error messages should be sent to root if root
is logged on. The next to last line says that all error messages,
including news error messages, should be displayed on the system
console. Finally, the last line says emergency messages should be
sent to all users. It is easy to get carried away with configuration
files, so remember to keep yours simple.
One
problem with
syslog on some systems is that, by
default, the log files are world readable. This is a potential
security hole. For example, if you log mail transactions, any user
can determine who is sending mail to whom -- not necessarily
something you want.
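If this is a concern, you can tighten the permissions on individual log
files. For example, something like the following removes world access
(the exact mode and group ownership will depend on your site's
policies):
bsd1# chmod 640 /var/log/maillog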
11.2.1.2. Remote logging
For anything but the smallest of
networks, you really should consider remote logging for two reasons.
First, there is simply the issue of managing and checking everything
on a number of different systems. If all your log files are on a
single system, this task is much easier. Second, should a system
become compromised, the log files are among the first things crackers
alter. With remote logging, future entries to log files may be
stopped, but you should still have the initial entries for the actual
break-in.
To do remote logging, you will need to make appropriate entries in
the configuration files for two systems. On the system generating the
message, you'll need to specify the address of the remote
logging machine. On the system receiving the message, you'll
need to specify a file for the messages. Consider the case in which
the source machine is
bsd1 and the destination
is
bsd2. In the configuration file for
bsd1, you might have an entry like:
local7.* @bsd2.netlab.lander.edu
bsd2's configuration file might have an
entry like:
local7.* /var/log/bsd1
Naming the file for the remote system makes it much easier to keep
messages straight. Of course, you'll need to create the file
and enable
bsd2 to receive remote messages from
bsd1.
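Creating the file can be as simple as the following (signaling
syslogd afterward so it rereads its configuration):
bsd2# touch /var/log/bsd1
bsd2# kill -HUP `cat /var/run/syslogd.pid`
How you enable reception of remote messages depends on your version of
syslogd and is discussed below.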
You can use the
logger command to test your
configuration. For example, you might use the following to generate a
message:
bsd1# logger -p local7.debug "testing"
This is what the file looks like on
bsd2:
bsd2# cat bsd1
Dec 26 14:22:08 bsd1 jsloan: testing
Notice that both a timestamp and the source of the message have been
included in the file.
There are a number of problems with remote
logging. You should be aware that
syslog uses
UDP. If the remote host is down, the messages will be lost. You will
need to make sure that your firewalls pass appropriate
syslog traffic.
syslog
messages are in clear text, so they can be captured and read. Also,
it is very easy to forge a
syslog message.
It is also possible to overwhelm a
host with
syslog messages. For this reason, some
versions of
syslog provide options to control
whether information from a remote system is allowed. For example,
with FreeBSD the
-s option can be used to enter
secure mode so that logging requests from remote machines are ignored.
Alternatively, the
-a option can be used to restrict the hosts from
which messages are accepted. With some versions of Linux, the
-r option is used to enable a system to receive
messages over the network. While you will need to enable your central
logging systems to receive messages, you should probably disable this
on all other systems to avoid potential denial-of-service attacks. Be
sure to consult the manpage for
syslogd to find
the particulars for your system.
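As a concrete illustration (the file and variable names here are
distribution specific, so treat them as assumptions to verify against
your own documentation), a FreeBSD log host might carry a line like this
in /etc/rc.conf:
syslogd_flags="-a bsd1.netlab.lander.edu"
while a Red Hat-style Linux log host might set the following in
/etc/sysconfig/syslog:
SYSLOGD_OPTIONS="-r"
All other machines would be left in secure mode or started without
-r.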
Both
Linux and FreeBSD have other enhancements that you may want to
consider. If security is a major concern, you may want to investigate
secure syslog (ssyslog) or modular
syslog (msyslog). For
greater functionality, you may also want to look at
syslog-ng.
11.2.2. Log File Management
Even after you have the log files, whether
created by
syslog or some other program, you
will face a number of problems. The first is keeping track of all the
files so they don't fill your filesystem. It is easy to forget
fast-growing files, so I recommend keeping a master list for each
system. You'll want to develop a policy of what information to
keep and how long to keep it. This usually comes down to some kind of
log file rotation system in which older files are discarded or put on
archival media. Be aware that what you save and for how long may have
legal implications, depending on the nature of your organization.
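On Linux systems, rotation is often handled by
logrotate. The following entry is only a sketch
(the file, schedule, and retention shown are assumptions you would adapt
to your own policy), but it shows the general idea of rotating a log
weekly and keeping eight compressed copies:
/var/log/maillog {
    weekly
    rotate 8
    compress
    missingok
}
FreeBSD provides similar functionality with
newsyslog.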
Another issue is deciding how much information you want to record in
the first place. Many authors argue, with some justification, that
you should record anything and everything that you might want, no
matter how remote the possibility. In other words, it is better to
record too much than to discover, after the fact, that you
don't have something you need. Of course, if you start with
this approach, you can cut back as you gain experience.
The problem with this approach is that you are likely to be so
overwhelmed with data that you won't be able to find what you
need.
syslog goes a long way toward addressing
this problem with its support for different severity levels -- you
can send important messages one place and everything else somewhere
else. Several utilities are designed to further simplify and automate
this process, each with its own set of strengths. These utilities may
condense or display log files, often in real time. They can be
particularly useful if you are managing a number of devices.
Todd Atkins'
swatch (simple watcher) is one of the best
known. Designed with security monitoring in mind, the program is
also well suited to monitoring general system activity.
swatch can be run in three different
ways -- making a pass over a log file, monitoring messages as they
are appended to a log file, or examining the output from a program.
You might scan a log file initially to come up-to-date on your
system, but the second usage is the most common.
swatch's actions include ignoring the
line, echoing the line on the controlling terminal, ringing the bell,
sending the message to someone by
write or mail,
or executing a command using the line as an argument. Behavior is
determined by a configuration file whose entries consist of up to four
tab-separated fields. The first and second fields, the pattern
expression and actions, are the most interesting. The pattern is a
regular expression used to match messages.
swatch is written in Perl, so the syntax used
for the regular expressions is fairly straightforward.
While it is a powerful program, you are pretty much on your own in
setting up the configuration files. Deciding what you will want to
monitor is a nontrivial task that will depend on what you think is
important. Since this could be almost anything -- errors, full
disks, security problems such as privilege
violations -- you'll have a lot of choices if you select
swatch. The steps are to decide what is of
interest, identify the appropriate files, and then design your
filters.
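As a sketch only (the patterns shown are made up, and the exact action
syntax varies somewhat from version to version, so check the
documentation that comes with your copy), a minimal configuration in the
tab-separated format described above might look like this:
/file system full/      echo,bell
/BAD SU/                echo,mail
/.*/                    ignore
The first two lines flag messages matching the patterns; the final
catchall quietly discards everything else.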
swatch is not unique.
xlogmaster is a GTK+-based program for
monitoring log files, devices, and status-gathering programs. It was
written by Georg Greve and is available under the GNU General Public
License. It provides filtering and displays selected events with
color and audio. Although
xlogmaster is no
longer being developed, it is a viable program that you should
consider. Its successor is GNU AWACS. AWACS is new code, currently
under development, that expands on the capabilities of
xlogmaster.
Another program worth looking at is
logcheck. This began as a shell script written
by Craig Rowland.
logcheck is now available
under the GNU license from Psionic Software, Inc., a company founded
by Rowland.
logcheck can be run by
cron rather than continuously.
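For example, a crontab entry along these lines (the installation path is
an assumption; adjust it to wherever the script lives on your system)
would scan the logs once an hour:
0 * * * * /bin/sh /usr/local/etc/logcheck.sh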
You should be able to find a detailed discussion of log file
management in any good book on Unix system administration. Be sure to
consult
Appendix B, "Resources and References" for more information.
11.2.3. Other Approaches to Logging
Unfortunately, many
services traditionally don't do logging, either through the
syslog facility or otherwise. If these services
are started by
inetd, you have a couple of
alternatives.
Some implementations of
inetd have options that will allow connection
logging. That is, each time a connection is made to one of these
services, the connection is logged. With
inetd
on Solaris, the
-t option traces all
connections. On FreeBSD, the
-l option records
all successful connections. The problem with this approach is that it
is rather indiscriminate.
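If you do want to try it, on FreeBSD this might be as simple as adding
the flag to inetd's startup options in
/etc/rc.conf (verify the variable name against
your release) and restarting inetd:
inetd_flags="-l"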
One
alternative is to replace
inetd with Panos
Tsirigotis's
xinetd.
xinetd greatly expands
inetd's functionality, particularly with respect to
logging. Another program to consider is
tcpwrappers.
11.2.3.1. tcpwrappers
The
tcpwrappers
program was developed to provide additional security, including
logging. Written by Wietse Venema, a well-respected security expert,
tcpwrappers is a small program that sits between
inetd (or
inetd-like
programs) and the services started by
inetd.
When a service is requested,
inetd calls the
wrapper program,
tcpd, which checks permission
files, logs its actions, and then, if appropriate, starts the
service. For example, if you want to control access to
telnet, you might change the line in
/etc/inetd.conf that starts the
telnet daemon from:
telnet stream tcp nowait root /usr/libexec/telnetd telnetd
to:
telnet stream tcp nowait root /usr/sbin/tcpd telnetd
Now, the wrapper daemon
tcpd is started
initially instead of
telnetd, the
telnet daemon. You'll need to make similar
changes for each service you want to control. If the service is not
where
tcpd expects it, you can give an absolute
path as an argument to
tcpd in the configuration
file.
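For example, if the telnet daemon lived in a
nonstandard location, the entry might become something like this (the
path shown is purely hypothetical):
telnet stream tcp nowait root /usr/sbin/tcpd /usr/local/libexec/telnetd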
TIP:
Actually, there is an alternative way of configuring
tcpwrappers. You can leave the
inetd configuration file alone, move each
service to a new location, and replace the service at its default
location with tcpd. I strongly discourage this
approach as it can create maintenance problems, particularly when you
upgrade your system.
As noted,
tcpwrappers is typically used for two
functions -- logging and access control.
Logging is done through
syslog. The
particular facility used will depend on how
tcpwrappers is compiled. Typically,
mail or
local2 is used. You
will need to edit
/etc/syslog.conf and recompile
tcpwrappers if you want to change how logging is
recorded.
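For example, if your copy was built to log to
local2, a syslog.conf
entry along the following lines (the log file name is just an example)
would gather tcpd's messages in one place:
local2.info /var/log/tcpd
Remember to create the file and send
syslogd a HUP signal afterward.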
Access is typically controlled through the file
/etc/hosts.allow, though some systems may also
have an
/etc/hosts.deny file. These files
specify which systems can access which services. These are a few
potential rules based on the example configuration:
ALL : localhost : allow
sendmail : nice.guy.example.com : allow
sendmail : .evil.cracker.example.com : deny
sendmail : ALL : allow
tcpwrappers uses a first-match-wins approach.
The first rule allows all services from the local machine without
further testing. The next three rules control the
sendmail program. The first rule allows a
specific host,
nice.guy.example.com. All hosts
on the domain
.evil.cracker.example.com are
blocked. (Note the leading dot.) Finally, all other hosts are
permitted to use
sendmail.
A number of other forms are permitted for rules,
but they are all pretty straightforward. The
distribution comes with a very nice example file. But, should you
have problems,
tcpwrappers comes with two
utilities for testing configuration files.
tcpdchk looks for general syntax errors within
the file.
tcpdmatch can be used to check how
tcpd will respond to a specific request. (Kudos
to Venema for including these!)
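For example, the following commands (using a host from the rules above)
first check the control files for problems and then predict what
tcpd would do with a connection to
sendmail from
nice.guy.example.com:
bsd1# tcpdchk -v
bsd1# tcpdmatch sendmail nice.guy.example.com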
The primary limitation of
tcpwrappers is that,
since it disappears after it starts the target service, its control
is limited to the brief period while it is running. It provides no
protection from attacks that begin after that point.
tcpwrappers is a ubiquitous program. In fact, it
is installed by default on many Linux systems. Incidentally, some
versions of
inetd now have wrappers technology
built-in. Be sure to review your documentation.