Thursday, October 27, 2011

3WoO: Watching for Potentially Malicious Domains with OSSEC

I like logs, lots and lots of logs. When I find out certain logging capabilities aren't turned on I get confused. When I find out that they're turned on but not monitored I get angry.

DNS has been a thorn in my side at a few places I've done work for in the recent past. There are requirements to record and monitor DNS queries, but no one seems to have a good solution. The general purpose of monitoring DNS was to look for malware that used DNS to connect to outside sites for data exfiltration. It may sound lame (not to me), but it can be quite effective depending on the intelligence you have on the threats most important to your organization.

Using snort to monitor DNS queries and responses has provided less than perfect results. Usually the sensor is placed in such a way that only recursive lookups are seen, so we end up with a bunch of alerts blaming the DNS server itself.

This post will help document part of a plan to monitor DNS. This only covers the major *nix daemons, not Windows (if someone gets me Windows logs I'll work on that too). It also only monitors the queries, not the responses. I had something that worked with bro-ids to monitor responses, but I haven't been able to test it in quite a while. I'm planning on waiting for the next major bro-ids release to update it. This method can also be worked around in a number of ways (including using domain names that aren't known to be bad, or subdomains not in the list), but it's a start.

I've tested this method with both bind and unbound, but it should work with any software that logs these queries, given an appropriate OSSEC decoder.

The first step is to turn on the proper logging.

Setting up unbound.conf is simple; log-queries is an option in the server clause:

server:
    log-queries: yes

Configuring named.conf is a bit more difficult:

logging {

        // send matching messages to syslog's local7 facility
        channel "default2" {
                syslog local7;
                severity info;
        };

        // drop the noisy lame-server messages
        category lame-servers { null; };
        // send the client queries (and unmatched messages) to syslog
        category "queries" { "default2"; };
        category "unmatched" { "default2"; };
};

I just have to make sure my syslogd is watching for local7 messages and writing them to a file (a syslog.conf sketch follows the next snippet). Configuring OSSEC to watch this logfile is pretty simple. On my system I just add the following to ossec.conf:


<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/local7</location>
</localfile>
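The syslogd side is a single line. A sketch in traditional syslog.conf syntax (OpenBSD's syslogd and rsyslog both accept this form; the destination matches the location above):

local7.info                                     /var/log/local7

With OpenBSD's syslogd you'll need to create the file first (touch /var/log/local7) and restart or HUP the daemon.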

After making the logging changes to your DNS daemon you'll have to restart it. Make sure the logs are being populated with client queries.

Example unbound logs (differences in timestamps are from rsyslog vs OpenBSD's syslogd):

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN
2011-10-26T15:45:54.895874-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.google.com. A IN
2011-10-26T15:46:48.366164-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 caladan.example.com. A IN
2011-10-26T15:47:24.372937-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 wallach9.example.com. A IN
2011-10-26T15:47:24.373670-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 wallach9.be.example.com. A IN

And bind:
Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +
Oct 26 16:05:51 ix named[14044]: client 192.168.1.9#19095: query: wallach9.example.com IN A +
Oct 26 16:05:51 ix named[14044]: client 192.168.1.9#26269: query: wallach9.be.example.com IN A +
Oct 26 16:03:21 ix named[14044]: client 192.168.1.16#38892: query: www.google.com IN A +

Let's look at these logs in ossec-logtest. bind first:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:07:46 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:07:46 ossec-testrule: INFO: Started (pid: 11804).
ossec-testrule: Type one log per line.

Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +


**Phase 1: Completed pre-decoding.
       full event: 'Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +'
       hostname: 'ix'
       program_name: 'named'
       log: 'client 192.168.17.9#22193: query: www.ossec.net IN A +'

**Phase 2: Completed decoding.
       decoder: 'named'
       srcip: '192.168.17.9'
       url: 'www.ossec.net'

The domain name is decoded in the url section. Let's try the unbound log:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:11:47 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:11:47 ossec-testrule: INFO: Started (pid: 11805).
ossec-testrule: Type one log per line.

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN


**Phase 1: Completed pre-decoding.
       full event: '2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN'
       hostname: 'arrakis'
       program_name: 'unbound'
       log: '[8113:0] info: 127.0.0.1 www.ossec.net. A IN'

**Phase 2: Completed decoding.
       No decoder matched.

There is currently no unbound decoder. That's easy to fix. (In case there are any errors in the HTMLization of the decoders below, feel free to grab this file.)
Add the following to /var/ossec/etc/local_decoder.xml:
<decoder name="unbound">
  <program_name>^unbound</program_name>
</decoder>

<decoder name="unbound-info">
  <parent>unbound</parent>
  <prematch offset="after_parent">^\p\d+:\d+\p info: </prematch>
  <regex offset="after_prematch">^(\S+) (\S+) \S+ \S+$</regex>
  <order>srcip, url</order>
</decoder>


And this is how it decodes now:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:17:16 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:17:16 ossec-testrule: INFO: Started (pid: 11810).
ossec-testrule: Type one log per line.

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN


**Phase 1: Completed pre-decoding.
       full event: '2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN'
       hostname: 'arrakis'
       program_name: 'unbound'
       log: '[8113:0] info: 127.0.0.1 www.ossec.net. A IN'

**Phase 2: Completed decoding.
       decoder: 'unbound'
       srcip: '127.0.0.1'
       url: 'www.ossec.net.'

We're one step closer to having this all work. The next step is to decide which domain names you want to alert on. I use a number of sources and a bad python script (seriously bad, you can't have it) to create a list of suspicious domains.
I use lists from Malware Domain List, DNS-BH - Malware Domain Blocklist (please donate!), and abuse.ch's ZeuS Tracker. I also maintain a list for domains I hear about that may not be on these other lists, plus some other lists I don't pull via the script.

A quick note about the DNS-BH site: it is primarily a source for creating a DNS blackhole. This can keep your systems from ever reaching these bad sites (redirect them to a "honeypot" system to see what traffic they try to pass). I definitely recommend doing this; it's almost a free bit of security. It's also easy to set up in both bind and unbound/nsd. I've configured both to do this, so if anyone is interested let me know!
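For a taste of the unbound side, here's a minimal sketch using local-zone/local-data in unbound.conf (badsite.example.com and the honeypot address 192.168.1.100 are placeholders):

server:
    # answer locally for the bad domain (redirect also covers its subdomains)
    local-zone: "badsite.example.com" redirect
    local-data: "badsite.example.com A 192.168.1.100"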
Of course, if you're worried about RAM usage this may not be the best idea:
  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
14044 named      2    0 1088M  885M sleep/1   select   34:53  0.00% /usr/sbin/named

  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
19753 _nsd       2    0  296M  249M idle      select    0:03  0.00% /usr/sbin/nsd -c /etc/nsd.conf

After collecting a list of possibly malicious domains, you'll want to create a CDB list. CDB is a key-value database format, great for lists that are fairly static. There's no way to add or remove items from the database; you have to recompile it from scratch. Recompiling these lists is simple and quick, so it isn't much of an issue.
The format is also simple:
key: value

My lists generally look like:
DOMAIN: Suspicious Domain
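If your feeds are plain one-domain-per-line files, building that format is a one-liner. A sketch, with raw_domains.txt standing in for a downloaded feed:

# turn a bare list of domains into CDB list source format
awk '{print $0 ": Suspicious Domain"}' raw_domains.txt > blocked.txt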

After this, move the list to your ossec directory; I keep mine in /var/ossec/lists. Next we configure OSSEC to use the lists by adding them to the rules section:
<rules>
  ...
    <list>lists/blocked.txt.cdb</list>
    <list>lists/userlist.txt.cdb</list>
    <include>local_rules.xml</include>
</rules>

Compiling the lists is easy, and if the files haven't changed since the last compile ossec-makelists will tell you they don't need to be recompiled. When a list is updated you should not have to restart the OSSEC processes; they should pick up the changes automatically.
# /var/ossec/bin/ossec-makelists
 * File lists/blocked.txt.cdb does not need to be compiled
 * File lists/userlist.txt.cdb does not need to be compiled
# rm /var/ossec/lists/userlist.txt.cdb
# /var/ossec/bin/ossec-makelists
 * File lists/blocked.txt.cdb does not need to be compiled
 * File lists/userlist.txt.cdb need to be updated

The last piece will be creating a rule to use the list. These are the rules I have for unbound, but the bind rules look very similar:
   <rule id="40000" level="0" noalert="1">
    <decoded_as>unbound</decoded_as>
    <description>Grouping for unbound.</description>
  </rule>

  <rule id="40001" level="10">
    <if_sid>40000</if_sid>
    <list field="url">lists/blocked.txt</list>
    <description>DNS query on a potentially malicious domain.</description>
  </rule>

The list option compares the decoded url to the keys in the blocked.txt.cdb database. If there is a match, rule 40001 fires; if that url isn't in the database, it doesn't. One caveat from the logtest output above: the unbound decoder captures the trailing dot (url: 'www.ossec.net.') while the bind decoder doesn't, so keys meant to match unbound events need that trailing dot too (or tweak the regex to leave it out).
Hopefully this post gave you some ideas. There's more you can do with lists, so take a look at the documentation (more here).
Just a little teaser, I'm planning on another documentation post tomorrow or Friday. Stay tuned!

Wednesday, October 26, 2011

3WoO: Day X through Y Roundup!

I've been slacking! In no particular order:

Alerting on DNS Changes - Daniel Cid
Leveraging Community Intelligence - Michael Starks
Mapping OSSEC Alerts with Afterglow - Xavier Mertens
Detecting Defaced Websites with OSSEC - Xavier Mertens
Five Tips and Tricks for OSSEC Ninjas - Michael Starks
Week of OSSEC Roundup - Daniel Cid
You Got Your OSSEC in my Logstash
You got your OSSEC in my Logstash Part 2

If I missed something, let me know!

UPDATE:
It's not really 3WoO related, but BSD magazine has an OSSEC on OpenBSD article.

UPDATE 2:
Someone made an ossec reddit.

3WoO: You got your OSSEC in my Logstash part 2

EDIT: Vic Hargrave has posted a more up to date OSSEC + Logstash post to his blog, check it out here.


Yesterday I posted an introduction to logstash. Today I hope to explain how I combined it with OSSEC for an awesome experience.

First, I'll explain how I have logstash setup. mylogstash.conf is broken up into 3 parts: input, filter, and output. This should sound familiar. You can get a copy of the file from here. You can get a copy of my rsyslog.conf from here. I'll be explaining most of the configuration in snippets below.

Here's the input section:

input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/cron", "/var/log/messages", "/var/log/syslog", "/var/log/secure", "/var/log/maillog", "/var/log/auth.log", "/var/log/daemon.log", "/var/log/dpkg.log" ]
  }


The first section in the input is a file input labeled linux-syslog. The path lists a number of logfiles on the local system for logstash to monitor, importing new messages as they arrive.


  file {
    type => "ossec-syslog"
    path => "/var/log/ossec.log"
  }


The /var/log/ossec.log file is a collection of all ossec alerts forwarded to the logstash host using ossec-csyslogd. rsyslog puts all of the OSSEC alerts into their own file to make using grok easier.
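For reference, the forwarding side is configured in ossec.conf on the OSSEC server; a sketch, with 192.168.1.5 standing in for the logstash host:

<syslog_output>
  <!-- placeholder address for the logstash host -->
  <server>192.168.1.5</server>
  <port>514</port>
</syslog_output>

Then enable and start the forwarder with "/var/ossec/bin/ossec-control enable client-syslog" and restart OSSEC.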


  tcp {
    host => "127.0.0.1"
    port => "6999"
    type => "linux-syslog"
  }
}

Lastly, the tcp section; the final } above closes out the whole input block. I have this to push old logs into logstash. logstash will only look at new log messages that come in when you use "file," so to get older messages I have logstash listen on a port and push old logs into it with since and netcat.
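Replaying a file into that listener looks something like this sketch (assuming the since and netcat utilities are installed):

# feed everything appended since the last 'since' run into logstash's tcp input
since /var/log/messages | nc 127.0.0.1 6999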

Onto the filter section:

filter {
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: \d+ - .* srcip: %{IP:src_ip}"
    add_tag => "Source IP"
  }


The above is a grok filter. The type of "ossec-syslog" makes sure grok only looks at the inputs labeled as "ossec-syslog." We match the source IP from the alert, store it in the src_ip field, and tag the event with "Source IP."


  grok {
    type => "ossec-syslog"
    pattern => "(ossec: .*; Location: \(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|\(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|%{NOTSPACE:source_host}->\S+; )"
    add_tag => "Source Host"
  }


This grok filter will tag the true source_host; otherwise my logstash host or the OSSEC server would be tagged as the source.


  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: %{BASE10NUM:alert_level};"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: %{BASE10NUM:sid} -"
    #add_field => ["sid", "%{@BASE10NUM}"]
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: \d+ - %{GREEDYDATA:description}; Location"
    add_tag => "Description"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: .*; Location: .*\S+->%{PATH:location};"
  }
}


These extract the alert_level, the rule ID (sid), the description, and the file the log message came from.

Finally the output:

output {
  stdout { }


I log to stdout for troubleshooting purposes. If this were a real installation this would be commented out.


  elasticsearch {
    cluster => "elasticsearch"
  }


I run an elasticsearch cluster; the cluster's name is elasticsearch by default. I didn't change this, but it would probably be a good idea to name it something different.


  ## Gelf output
  gelf {
    host => "gamont.example.com"
  }


I'm also outputting my logs to graylog2 via gelf. graylog2 is pretty, but I haven't really dug into it much. I'm hoping to make a post about it in the future.


  statsd {
    host => "rossak.example.com"
    port => "8125"
    increment => "TEST.OSSEC.alerts.%{alert_level}.%{sid}"
  }
}


I also output some statistical data to statsd (which sends it to graphite, see the graph I posted in this post).

For an easier setup, you can change the elasticsearch output above to the following to use the embedded version:

elasticsearch { embedded => true }

You may have noticed that I have logstash configured to listen to the network on the localhost address only. I could have configured it to listen for syslog messages on the external interface, and shipped OSSEC logs directly there with ossec-csyslogd. Having logstash listen on the network would be easy, but I believe using a dedicated syslog daemon like rsyslog is the safer path. rsyslog does syslog; it's been tested heavily in sending and receiving syslog. To me it makes more sense to use a dedicated, full featured application when I can. So I have rsyslog receiving the forwarded alerts and writing them to the files logstash monitors.

I could add another layer and use ossec-csyslogd to forward alerts to a local rsyslog daemon, which can then forward those messages to the logstash host. The benefit of this would be that rsyslog can deliver the messages reliably and over an encrypted channel.
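In rsyslog the plain-TCP version of that forwarding is a one-liner; a sketch, with logstash.example.com standing in for the logstash host (the @@ prefix means TCP, and encryption would mean configuring the gtls netstream driver on both ends):

# relay everything to the logstash host over TCP (placeholder hostname)
*.* @@logstash.example.com:514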

On the logstash host I've added the following to rsyslog.conf to filter out the ossec messages:

$template DYNossec,"/var/log/ossec.log"
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ?DYNossec
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ~

OSSEC alerts are put into /var/log/ossec.log and then discarded (the final rule's ~) so they don't end up in any other log files.

This has been one of the most important elasticsearch links I've come across. The dreaded "Too many open files" error has popped up a couple of times for me.
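When it pops up, the usual fix is raising the open file descriptor limit for the user elasticsearch runs as before starting it. A sketch; 65536 is an arbitrary but common value, and a permanent setting belongs in limits.conf:

# raise the fd limit for this shell, then start elasticsearch from it
ulimit -n 65536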

The default search (click the link on the landing page) just looks for everything from the past few days:


Here's an OSSEC event with details:

OSSEC details
Another OSSEC alert, this one including a src_ip:
OSSEC details with src_ip

In the screen shots above you can see (along with a lot of information about my home network) the grok filters at work. Clicking on one of those fields will filter the output using that selection. For instance, I've clicked on the "6" in alert_level:

Alert Level 6

I currently have logstash holding a little over 3 million log messages.

Tuesday, October 25, 2011

3WoO: You got your OSSEC in my Logstash

There are a number of popular topics on the OSSEC mailing list, OSSEC-wui being the one I dread the most. OSSEC-wui is an old project that is currently unmaintained. It's a "web user interface" providing a view of the logs, agent status, and syscheck information.

The wui parsed OSSEC's plain text log files, requiring the web server to have access. Because of this the web server needed to be a member of the ossec group, and had to be able to access the OSSEC directories. That means no chrooted httpd unless you install OSSEC inside of the chroot. I didn't like this, and didn't use the wui.

Another problem with the wui was the log viewing interface. Working with OSSEC's rules and decoders has shown me how crazy the log space is. Everyone needs to create a new format, or thinks certain bits are not important/very important. It's crazy! There's also the possibility of attack through log viewers. XSS via a log webpage? It's definitely possible, and I've heard people talking about that possibility. Now, how much do I want to trust an unmaintained php application with access to potentially malicious log messages?

Fortunately there are alternatives: Graylog2, Splunk, logstash, Loggly, ArcSight, OSSIM, Octopussy, and probably dozens of others. Some are commercial and some are open source. Some have more features, and some are a bit more spartan. Paul Southerington even created a Splunk app for OSSEC called Splunk for OSSEC. It integrates Splunk and OSSEC quite nicely.

Two of the downsides to Splunk, in my opinion, are that it is not open source and that the free license is limited to indexing 500 megabytes per day. I figured I'd never hit the 500MB limit running this setup at home, but I did. Nine times in one month. Since this is a home setup a commercial Splunk license just wasn't worth it, so I migrated to logstash.

Here's what logstash has to say about itself:
logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.

logstash has a simple interface

Logstash is written in ruby, utilizing some of the benefits of jruby. If you're using a version of logstash before 1.1.0 you may want to install libgrok as well. I believe it's being integrated into 1.1.0, but my information may be a bit off. I'll be using libgrok later in this post. That's the extent of the requirements.

A quick note about grok: It requires a relatively recent version of libpcre. CentOS (and probably RHEL) use a relatively ancient version of libpcre. While there are probably ways to get it working, I didn't want anything to do with that jazz, so I moved to Debian. I think it says something when Debian has a more up-to-date package than your former distro of choice.

I'm using logstash as an endpoint for logs, but you can also use it to push logs somewhere else. You could install a minimal logstash instance on one system, and have it provide that system's logs to another logstash instance somewhere else.
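A minimal shipper might look something like the following sketch. This isn't my config; it assumes the tcp output is available in your logstash version, and central.example.com is a placeholder for the central instance:

input {
  file {
    type => "linux-syslog"
    path => "/var/log/messages"
  }
}
output {
  # forward events to the central logstash instance (placeholder host)
  tcp {
    host => "central.example.com"
    port => "6999"
  }
}

The central instance would pick these up with a matching tcp input like the one I use in mylogstash.conf.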

Looking at the documentation you'll see that logstash puts all configurations into one of three categories: input, filter, or output. An input is a source of log events. A filter can modify, exclude, or add information to an event. And an output is a way to push the logs into something else.

One of the most important outputs logstash offers is elasticsearch. If you're going to use logstash's web interface, you'll need this output. elasticsearch provides search capabilities to logstash. logstash and especially elasticsearch will use all of the RAM you give them. Luckily elasticsearch is clusterable, so adding more resources can be as simple as configuring a new machine. Sizing your elasticsearch installation is important, and knowing your events per second may help.

The logstash monolithic jar file contains the logstash agent, web interface, and elasticsearch. It allows you to run all 3 pieces of logstash with 1 command and to even use the embedded elasticsearch. This is much more limited than using a separate elasticsearch installation, but may be adequate for small or test installations.

In the following screen shot the first process is an elasticsearch node. The other 2 java processes are logstash's agent and web processes. I have 1 more elasticsearch node using slightly more memory than this one as well. Setting it up this way gives me a chance to play with it a bit.

logstash processes


I run the agent with the following command:
java -jar ./logstash-1.0.16-monolithic.jar agent -f ./mylogstash.conf

And the web process:
java -jar ./logstash-1.0.17-monolithic.jar web

This is getting a bit long, so I'll be detailing mylogstash.conf in my next post.

Monday, October 24, 2011

3WoO: OSSEC Documentation

The OSSEC documentation isn't good. In some areas it's downright bad. And this is my fault. I kind of took over the OSSEC documentation a while back, and trying to improve it has been tough. It takes time, energy, and skill. I have (at most) 1 of those.

It is (very slowly) getting better, but it still needs a lot of work. I'd love to have some help with it, so I figured I'd write up a few ways you can help.

One of the best things you can do is read the documentation. Read it to yourself, to your child as a bed time story, or just out loud to a rubber ducky. It doesn't really matter how you read it, just read it. You can spot more errors in the documentation when you read it. It's a fact!

If you spot an error, an omission, or just something that should be improved, let me know. You're more than welcome to email me (my email's all over the mailing list), catch me on irc (#ossec on irc.freenode.net), or open a ticket. Opening a ticket is probably the most reliable; I'll probably just open a ticket myself if I'm contacted any other way.

Just like the base system, we use mercurial and bitbucket to manage the documentation. The main repository for it is https://bitbucket.org/ddpbsd/ossec-rules. bitbucket has a decent issue creation page, and it'll send me an email when a new ticket is opened.

Open Tickets

Creating a new issue is easy. If you don't know something, leave it at the default. Please include as much information as you can about the issue, it'll make it easier for me to fix it.

New Issue

If you want a more hands on approach, you can fix problems yourself and send me the changes. As I mentioned above we're using mercurial and bitbucket for the repository. To build the documentation we use Paver and Sphinx. Sphinx uses the reStructuredText markup language.

Start by simply cloning the repository with "hg clone https://bitbucket.org/ddpbsd/ossec-rules":

hg clone https://bitbucket.org/ddpbsd/ossec-rules
This will check out a copy of the repository in your current directory. When you change into that directory ("cd ossec-rules") you'll see a number of other directories. The documentation is kept in docs. Make your changes and run "paver html" when you are done. The resulting HTML files are created in "docs/docs".

"hg commit" will commit the changes you make to your repository (use "hg add FILE" to add FILE to the repository if FILE is new). "hg outgoing https://bitbucket.org/ddpbsd/ossec-rules" will create a diff of all changes you've made to the repository. You can email that diff to me, and I'll look at integrating it.

You can skip the last bit by forking my ossec-rules repository using the fork button on my bitbucket page (you will need your own bitbucket account, it's free).

Fork the repository

A fork in progress


Once you've forked the repository, clone it using the "hg clone" command with the URL for your own repository. Then make and commit your changes, and finally push the changes back into your repository with "hg push".

After you've pushed a change into your repository, you can initiate a "pull request" against ossec-rules. Include a little description to give me an idea of what changes you've made. I will then be notified of the request, and have the opportunity to pull those changes into the main repository.

pull request
That's a very quick overview of the OSSEC documentation setup. I hope to see some more participants in the future, and I hope you've found the documentation useful.

Keep your eyes peeled for a second documentation post later this week.

Third Annual Week of OSSEC

I'm a bit late to the party, but the Third Annual Week of OSSEC has begun. Michael Starks has planned a nice week, with some awesome blog posts. I've got a few I'm working on (hopefully I finish them).

Here's Michael's email to the ossec-list about the upcoming week. Sunday was day 1, so today is day 2:
"Tell your story. How has OSSEC helped you?"
 
For Day 2's blog post, Michael Starks posted 3WoO Day 2: Calculating Your EPS. He includes a little script to calculate your Events Per Second.
Knowing an estimate of your EPS is very important when spec'ing out hardware and preparing the network for the extra load created by OSSEC.
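If you just want a rough number without running his script, counting a day's worth of archived events gets you close. A sketch that assumes you have logall enabled, so /var/ossec/logs/archives/archives.log holds a full day of events:

# rough average events per second over one day of OSSEC archives
wc -l < /var/ossec/logs/archives/archives.log | awk '{printf "%.2f EPS\n", $1/86400}'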
Usually when I want an idea of how many EPS I'm getting I look at my graphite graphs.
I'll edit the post if there's anything more today.
 
More contributions:
Xavier Mertens posted on how he's creating maps with OSSEC and AfterGlow.
 
Update 2:
Daniel Cid posts about a new feature in OSSEC!