Yesterday I posted an introduction to logstash. Today I hope to explain how I combined it with OSSEC for an awesome experience.
First, I'll explain how I have logstash set up. My logstash.conf is broken up into three parts: input, filter, and output. This should sound familiar. You can get a copy of the file from here. You can get a copy of my rsyslog.conf from here. I'll be explaining most of the configuration in snippets below.
Here's the input section:
input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/cron", "/var/log/messages", "/var/log/syslog", "/var/log/secure", "/var/log/maillog", "/var/log/auth.log", "/var/log/daemon.log", "/var/log/dpkg.log" ]
  }
The first block in the input section is a file input labeled with the type linux-syslog. The path option lists a number of log files on the local system for logstash to monitor, importing new messages as they are written.
  file {
    type => "ossec-syslog"
    path => "/var/log/ossec.log"
  }
The /var/log/ossec.log file is a collection of all of the OSSEC alerts forwarded to the logstash host using ossec-csyslogd. rsyslog puts the OSSEC alerts into their own file to make using grok easier.
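For reference, the OSSEC server side of that forwarding is a syslog_output block in ossec.conf plus enabling the daemon (the hostname below is a placeholder; adjust to your logstash host):

<syslog_output>
  <server>logstash.example.com</server>
  <port>514</port>
</syslog_output>

# then enable the forwarder and restart OSSEC:
/var/ossec/bin/ossec-control enable client-syslog
/var/ossec/bin/ossec-control restart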
  tcp {
    host => "127.0.0.1"
    port => "6999"
    type => "linux-syslog"
  }
}
Lastly, the tcp section. I use this to push old logs into logstash. logstash will only look at new log messages that come in when you use the file input, so to get older messages I have logstash listen on a port and push the old logs into it with since and netcat.
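For example, something like this (a sketch; since is a tail-like utility that remembers how far into each file it read on its last run):

# replay anything added since the last run into the tcp input
since /var/log/messages | nc 127.0.0.1 6999
# or feed in an entire rotated log
nc 127.0.0.1 6999 < /var/log/messages.1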
Onto the filter section:
filter {
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: \d+ - .* srcip: %{IP:src_ip}"
    add_tag => "Source IP"
  }
The above code is a grok filter. The type of "ossec-syslog" makes sure grok only looks at the inputs labeled as "ossec-syslog." We match the source IP in the alert, capture it into the src_ip field, and tag the event with "Source IP."
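To make that concrete, a forwarded alert looks roughly like this (an illustrative line with made-up hosts and addresses, shaped to match the patterns in this post):

ossec: Alert Level: 10; Rule: 5712 - SSHD brute force trying to get access to the system.; Location: (web01) 192.168.1.5->/var/log/auth.log; srcip: 203.0.113.45;

Against that line, the filter above would capture 203.0.113.45 as src_ip.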
  grok {
    type => "ossec-syslog"
    pattern => "(ossec: .*; Location: \(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|\(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|%{NOTSPACE:source_host}->\S+; )"
    add_tag => "Source Host"
  }
This grok filter captures the true source_host; without it, my logstash host or the OSSEC server would end up recorded as the source.
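The alternation covers the different Location shapes the alerts can arrive in (illustrative values; the exact shape varies with how the alert reached the OSSEC server):

Location: (web01) 192.168.1.5->/var/log/auth.log;
Location: relay|(web01) 192.168.1.5->/var/log/auth.log;
Location: relay|web01->/var/log/auth.log;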
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: %{BASE10NUM:alert_level};"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: %{BASE10NUM:sid} -"
    #add_field => ["sid", "%{@BASE10NUM}"]
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: \d+ - %{GREEDYDATA:description}; Location"
    add_tag => "Description"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: .*; Location: .*\S+->%{PATH:location};"
  }
}
These capture the alert_level, the rule ID (sid), the description, and the file the log message came from (location).
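Run against the sample alert above, the whole filter section would produce fields along these lines (values illustrative):

alert_level => 10
sid         => 5712
description => "SSHD brute force trying to get access to the system."
location    => "/var/log/auth.log"
source_host => "web01"
src_ip      => "203.0.113.45"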
Finally the output:
output {
  stdout { }
I log to stdout for troubleshooting purposes. If this were a real installation, this would be commented out.
  elasticsearch {
    cluster => "elasticsearch"
  }
I run an elasticsearch cluster; the cluster's name is "elasticsearch" by default. I didn't change this, but it would probably be a good idea to name it something different.
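If you do rename it, the change is one line here and the matching cluster.name in elasticsearch.yml on the cluster nodes (the name below is a made-up example):

  elasticsearch {
    # must match cluster.name in elasticsearch.yml
    cluster => "logstash-prod"
  }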
  ## Gelf output
  gelf {
    host => "gamont.example.com"
  }
I'm also outputting my logs to graylog2 via gelf. graylog2 is pretty, but I haven't really dug into it much. I'm hoping to make a post about it in the future.
  statsd {
    host => "rossak.example.com"
    port => "8125"
    increment => "TEST.OSSEC.alerts.%{alert_level}.%{sid}"
  }
}
I also output some statistical data to statsd (which passes it on to graphite; see the graph I posted in this post).
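The %{alert_level} and %{sid} references in the increment key are filled in from the grokked fields, so for the sample alert above the counter incremented in statsd would be something like:

TEST.OSSEC.alerts.10.5712

That gives graphite a per-rule, per-severity counter to graph.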
For an easier setup, you can change the elasticsearch output above to the following to use the embedded version:
elasticsearch { embedded => true }
You may have noticed that I have logstash configured to listen to the network on the localhost address only. I could have configured it to listen for syslog messages on the external interface and shipped OSSEC logs directly there with ossec-csyslogd. Having logstash listen on the network would be easy, but I believe using a dedicated syslog daemon like rsyslog is the safer path. rsyslog does syslog; it has been tested heavily at sending and receiving syslog. To me it makes more sense to use a dedicated, full-featured application when I can.
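Receiving over the network in rsyslog is only a couple of directives (a sketch using the legacy config syntax, not my running config):

# /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$InputTCPServerRun 514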
So I have ossec-csyslogd forwarding alerts straight to rsyslog on the logstash host. I could add another layer and use ossec-csyslogd to forward alerts to a local rsyslog daemon, which can then forward those messages to the logstash host. The benefit of this would be that rsyslog can deliver the messages reliably and over an encrypted channel.
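The forwarding side of that would look something like this (a sketch; it assumes the rsyslog-gnutls module is installed, and the hostname and CA file are placeholders):

# forward everything to the logstash host over TCP with TLS
$DefaultNetstreamDriverCAFile /etc/rsyslog.d/ca.pem
$DefaultNetstreamDriver gtls
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode anon
*.* @@logstash.example.com:514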
On the logstash host I've added the following to rsyslog.conf to filter out the ossec messages:
$template DYNossec,"/var/log/ossec.log"
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ?DYNossec
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ~
OSSEC alerts are put into /var/log/ossec.log and then discarded so they don't end up in any other log files.
This has been one of the most important elasticsearch links I've come across: the dreaded "Too many open files" error has popped up a couple of times for me.
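The usual fix is to raise the open-file limit for the user elasticsearch runs as, for example (limits are illustrative):

# check the current limit as the elasticsearch user
ulimit -n
# /etc/security/limits.conf
elasticsearch  soft  nofile  32000
elasticsearch  hard  nofile  32000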
The default search (click the link on the landing page) just looks for everything for the past few days.
Here's an OSSEC event with details:
[Screenshot: OSSEC details]
[Screenshot: OSSEC details with src_ip]
In the screenshots above you can see (along with a lot of information about my home network) the grok filters at work. Clicking on one of those fields will filter the output using that selection. For instance, I've clicked on the "6" in alert_level:
[Screenshot: Alert Level 6]
I currently have logstash holding a little over 3 million log messages: