Tuesday, December 27, 2011

OSSEC 101: The Slackening

In a small attempt to stop slacking I managed to add a bit to the OSSEC 101 project. Yesterday I wrote the initial draft of the OSSEC Linux agent installation. I had already written the companion OSSEC server installation, so an agent install was the next logical step. Since I don't have a lot of Windows machines, capturing the Linux agent installation was much easier. (I just noticed the server installation is missing an image, and I'm sure it needs some polish.)

Hoping that a bit of color would differentiate the agent bits from the server bits, I decided to use red backgrounds for putty in the agent installation screenshots. Please let me know if it just looks stupid, or if a different color would be more appropriate.

I'm not quite sure how much information should be in these sections. I don't want OSSEC 101 to turn into the typical how-to document, with a bunch of copy & paste commands. I want you to know what you're running and why you're running it. There will probably be more added to these install pages before I consider them done, but they're already useful.

The Windows agent page might stay blank for a while. I do have a Windows machine I could reinstall OSSEC onto, but meh.

Along with the slacking I've been trying to come up with scenarios for OSSEC 101. I thought it might be easier to explain things if I had real world examples of how some people use OSSEC, and I have a few ideas already. I'm always looking for more, so feel free to send me any ideas (or create an issue at my bitbucket).

So that's 2 sections down, a LOT more to go. Hopefully I'll be able to devote more time to this next year. Hopefully.

Tuesday, November 1, 2011

More OSSEC Documentation

During the 3WoO I posted about the current state of the OSSEC documentation (3woo ossec documentation) and how you can help. This post is about the future of the OSSEC documentation.

I want to keep new features that haven't been in an official release documented, but not in the official OSSEC documentation. I thought this might keep people from getting confused and trying to use a new feature in an older version (I've done it, it isn't pretty). Thankfully mercurial makes having multiple repositories simple. I have a number of sandbox repositories that will never see the light of day, filled with half-baked ideas and dead ends.

The OSSEC documentation has now been forked. By the person maintaining the old documentation. It's not very exciting, I know (there are more exciting bits later, keep reading!). Most of the information will be the same between the ossec-rules repository and the ossec-docs-dev repository. ossec-docs-dev is just the development area. So changes to ossec-rules will make it into ossec-docs-dev, and when the next version of OSSEC (currently 2.7) is released the ossec-docs-dev changes will be pushed into ossec-rules.

Now here's the exciting bit of the new repository: OSSEC 101! We're starting a new section to detail the life cycle of an OSSEC setup. It will cover installation, configuring, tuning, expanding, integrating, and more! It's just a skeleton outline at the moment, but it's being worked on.

I'd love input from the community on anything I'm doing right, or wrong, or missing. What works? What doesn't? What would you like to see? You can keep an eye on the commits at the bitbucket repository, as well as file issues, fork the repository, etc.

As long as the traffic isn't too heavy I'll have the development documentation up at devio.us as well. Go here to see it.

I had originally meant to post this for the recent Third Annual Week of OSSEC, but ran out of time.

Thursday, October 27, 2011

3WoO: Watching for Potentially Malicious Domains with OSSEC

I like logs, lots and lots of logs. When I find out certain logging capabilities aren't turned on I get confused. When I find out that they're turned on but not monitored I get angry.

DNS has been a thorn in my side at a few places I've done work for in the recent past. There are requirements to record and monitor DNS queries, but no one seems to have a good solution. The general purpose of monitoring DNS was to look for malware that used DNS to connect to outside sites for data exfiltration. It may sound lame (not to me), but it can be quite effective depending on the intelligence you have on the threats most important to your organization.

Using snort to monitor DNS queries and responses has provided less than perfect results. Usually the sensor is placed in such a way that only recursive lookups are seen, so we end up with a bunch of alerts blaming the DNS server itself.

This post will help document part of a plan to monitor DNS. This only covers the major *nix daemons, not Windows (if someone gets me Windows logs I'll work on that too). It also only monitors the queries, not the responses. I had something that worked with bro-ids to monitor responses, but I haven't been able to test it in quite a while. I'm planning on waiting for the next major bro-ids release to update it. This method can also be worked around in a number of ways (including using domain names that aren't known to be bad, or subdomains not in the list), but it's a start.

I've tested this method with both bind and unbound, but it should work with any software that logs these queries, given an appropriate OSSEC decoder.

The first step is to turn on the proper logging.

Setting up unbound.conf is simple:

log-queries: yes
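
In case you're wondering where that goes: log-queries is a server option, so it sits under the server: clause, roughly like this (check unbound.conf(5) for your version):

server:
        # print one line per query to the log
        log-queries: yes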

Configuring named.conf is a bit more difficult:

logging {

        channel "default2" {
                syslog local7;
                severity info;
        };

        category lame-servers { null; };
        category "queries" { "default2"; };
        category "unmatched" { "default2"; };
};

I just have to make sure my syslogd is watching for local7 alerts, and writing them to a file. Configuring OSSEC to watch this logfile is pretty simple. On my system I just add the following to ossec.conf:


<localfile>
  <log_format>syslog</log_format>
  <location>/var/log/local7</location>
</localfile>
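
The syslog side of that is a one-liner. Something along these lines in syslog.conf should do it (use tabs between the selector and the file if your syslogd is picky, and make sure the destination matches the <location> above):

local7.*                                        /var/log/local7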

After making the logging changes to your DNS daemon you'll have to restart it. Make sure the logs are being populated with client queries.

Example unbound logs (differences in timestamps are from rsyslog vs OpenBSD's syslogd):

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN
2011-10-26T15:45:54.895874-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.google.com. A IN
2011-10-26T15:46:48.366164-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 caladan.example.com. A IN
2011-10-26T15:47:24.372937-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 wallach9.example.com. A IN
2011-10-26T15:47:24.373670-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 wallach9.be.example.com. A IN

And bind:
Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +
Oct 26 16:05:51 ix named[14044]: client 192.168.1.9#19095: query: wallach9.example.com IN A +
Oct 26 16:05:51 ix named[14044]: client 192.168.1.9#26269: query: wallach9.be.example.com IN A +
Oct 26 16:03:21 ix named[14044]: client 192.168.1.16#38892: query: www.google.com IN A +

Let's look at these logs in ossec-logtest. bind first:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:07:46 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:07:46 ossec-testrule: INFO: Started (pid: 11804).
ossec-testrule: Type one log per line.

Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +


**Phase 1: Completed pre-decoding.
       full event: 'Oct 26 16:07:08 ix named[14044]: client 192.168.17.9#22193: query: www.ossec.net IN A +'
       hostname: 'ix'
       program_name: 'named'
       log: 'client 192.168.17.9#22193: query: www.ossec.net IN A +'

**Phase 2: Completed decoding.
       decoder: 'named'
       srcip: '192.168.17.9'
       url: 'www.ossec.net'

The domain name is decoded in the url section. Let's try the unbound log:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:11:47 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:11:47 ossec-testrule: INFO: Started (pid: 11805).
ossec-testrule: Type one log per line.

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN


**Phase 1: Completed pre-decoding.
       full event: '2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN'
       hostname: 'arrakis'
       program_name: 'unbound'
       log: '[8113:0] info: 127.0.0.1 www.ossec.net. A IN'

**Phase 2: Completed decoding.
       No decoder matched.

There is no unbound decoder currently. That's easy to fix. (In case there are any errors in the HTMLization of the decoders below, feel free to grab this file.)
Add the following to /var/ossec/etc/local_decoder.xml:
<decoder name="unbound">
  <program_name>^unbound</program_name>
</decoder>

<decoder name="unbound-info">
  <parent>unbound</parent>
  <prematch offset="after_parent">^\p\d+:\d+\p info: </prematch>
  <regex offset="after_prematch">^(\S+) (\S+) \S+ \S+$</regex>
  <order>srcip, url</order>
</decoder>


And this is how it decodes now:
[root@zanovar ossec]# /var/ossec/bin/ossec-logtest
2011/10/26 16:17:16 ossec-testrule: INFO: Reading local decoder file.
2011/10/26 16:17:16 ossec-testrule: INFO: Started (pid: 11810).
ossec-testrule: Type one log per line.

2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN


**Phase 1: Completed pre-decoding.
       full event: '2011-10-26T15:46:15.508083-04:00 arrakis unbound: [8113:0] info: 127.0.0.1 www.ossec.net. A IN'
       hostname: 'arrakis'
       program_name: 'unbound'
       log: '[8113:0] info: 127.0.0.1 www.ossec.net. A IN'

**Phase 2: Completed decoding.
       decoder: 'unbound'
       srcip: '127.0.0.1'
       url: 'www.ossec.net.'

We're one step closer to having this all work. The next step is to decide on which domain names you want to alert on. I use a number of sources and a bad python script (seriously bad, you can't have it) to create a list of suspicious domains.
I use lists from Malware Domain List, DNS-BH - Malware Domain Blocklist (please donate!), and abuse.ch Zeus Tracker. I also have a list set up for domains I hear about that may not be on these other lists, and some other lists I don't pull via the script.

A quick note about the DNS-BH site: that site is primarily a source for creating a DNS blackhole. This can keep your systems from ever getting to these bad sites (redirect them to a "honeypot" system to see what traffic they try to pass). I definitely recommend doing this; it's almost a free bit of security. It's also easy to set up in both bind and unbound/nsd. I've configured both to do this, so if anyone is interested let me know!
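
As a taste of the blackhole idea, unbound can sinkhole a single domain with a couple of lines in the server: section (the domain here is made up, and the address would be your honeypot):

server:
        local-zone: "malicious.example" redirect
        local-data: "malicious.example A 192.168.100.5"
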
Of course, if you're worried about RAM usage this may not be the best idea:
  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
14044 named      2    0 1088M  885M sleep/1   select   34:53  0.00% /usr/sbin/named

  PID USERNAME PRI NICE  SIZE   RES STATE     WAIT      TIME    CPU COMMAND
19753 _nsd       2    0  296M  249M idle      select    0:03  0.00% /usr/sbin/nsd -c /etc/nsd.conf

After collecting a list of possibly malicious domains, you'll want to create a CDB list. CDB is a key-value database format. It's great for lists that are fairly static. There's no way to add or remove items from the database; you have to recompile it from scratch. Recompiling these lists is pretty simple and quick, so it isn't much of an issue.
The format is also simple:
key: value

My lists generally look like:
DOMAIN: Suspicious Domain
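
If you want a rough idea of how a list like that gets built, here's a throwaway sketch (not the script mentioned above!) that turns a plain file of domains, one per line, into that format:

#!/bin/sh
# bad-domains.txt is a hypothetical file with one domain per line
while read domain; do
        printf '%s: Suspicious Domain\n' "$domain"
done < bad-domains.txt > blocked.txt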

After this, move the list to your OSSEC directory; I keep mine in /var/ossec/lists. Next we configure OSSEC to use the lists by adding them to the rules section:
<rules>
  ...
    <list>lists/blocked.txt.cdb</list>
    <list>lists/userlist.txt.cdb</list>
    <include>local_rules.xml</include>
</rules>

Compiling the lists is easy, and if the files haven't changed since the last recompile ossec-makelists will notify you that they don't need to be recompiled. When a list is updated you should not have to restart the OSSEC processes; they should pick up the changes automatically.
# /var/ossec/bin/ossec-makelists
 * File lists/blocked.txt.cdb does not need to be compiled
 * File lists/userlist.txt.cdb does not need to be compiled
# rm /var/ossec/lists/userlist.txt.cdb
# /var/ossec/bin/ossec-makelists
 * File lists/blocked.txt.cdb does not need to be compiled
 * File lists/userlist.txt.cdb need to be updated

The last piece will be creating a rule to use the list. These are the rules I have for unbound, but the bind rules look very similar:
   <rule id="40000" level="0" noalert="1">
    <decoded_as>unbound</decoded_as>
    <description>Grouping for unbound.</description>
  </rule>

  <rule id="40001" level="10">
    <if_sid>40000</if_sid>
    <list field="url">lists/blocked.txt</list>
    <description>DNS query on a potentially malicious domain.</description>
  </rule>

The list item compares the decoded url to the keys in the blocked.txt.cdb database. If there is a match, rule 40001 fires; if that url isn't in the database, it doesn't.
Hopefully this post gave you some ideas. There's more you can do with lists, so take a look at the documentation (more here).
Just a little teaser, I'm planning on another documentation post tomorrow or Friday. Stay tuned!

Wednesday, October 26, 2011

3WoO: Day X through Y Roundup!

I've been slacking! In no particular order:

Alerting on DNS Changes - Daniel Cid
Leveraging Community Intelligence - Michael Starks
Mapping OSSEC Alerts with Afterglow - Xavier Mertens
Detecting Defaced Websites with OSSEC - Xavier Mertens
Five Tips and Tricks for OSSEC Ninjas - Michael Starks
Week of OSSEC Roundup - Daniel Cid
You Got Your OSSEC in my Logstash
You got your OSSEC in my Logstash Part 2

If I missed something, let me know!

UPDATE:
It's not really 3WoO related, but BSD magazine has an OSSEC on OpenBSD article.

UPDATE 2:
Someone made an ossec reddit.

3WoO: You got your OSSEC in my Logstash part 2

EDIT: Vic Hargrave has posted a more up to date OSSEC + Logstash post to his blog, check it out here.


Yesterday I posted an introduction to logstash. Today I hope to explain how I combined it with OSSEC for an awesome experience.

First, I'll explain how I have logstash set up. mylogstash.conf is broken up into 3 parts: input, filter, and output. This should sound familiar. You can get a copy of the file from here. You can get a copy of my rsyslog.conf from here. I'll be explaining most of the configuration in snippets below.

Here's the input section:

input {
  file {
    type => "linux-syslog"
    path => [ "/var/log/cron", "/var/log/messages", "/var/log/syslog", "/var/log/secure", "/var/log/maillog", "/var/log/auth.log", "/var/log/daemon.log", "/var/log/dpkg.log" ]
  }


The first section in the input is a file input labeled as linux-syslog. The path lists a number of logfiles on the local system for logstash to monitor and import new messages from.


  file {
    type => "ossec-syslog"
    path => "/var/log/ossec.log"
  }


The /var/log/ossec.log file is a collection of all ossec alerts forwarded to the logstash host using ossec-csyslogd. rsyslog puts all of the OSSEC alerts into their own file to make using grok easier.
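
For reference, the csyslogd side of that is just a syslog_output block in the manager's ossec.conf plus enabling the daemon (the server address below is a stand-in for your logstash/rsyslog host):

<syslog_output>
  <server>192.168.1.50</server>
  <port>514</port>
</syslog_output>

Then enable it and restart:

# /var/ossec/bin/ossec-control enable client-syslog
# /var/ossec/bin/ossec-control restart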


  tcp {
    host => "127.0.0.1"
    port => "6999"
    type => "linux-syslog"
  }

Lastly, the tcp section. I have this so I can push old logs into logstash. logstash will only look at new log messages that come in when you use "file," so to get older messages I have logstash listen on a port and push the old logs into it with since and netcat.
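
The replay itself is nothing fancy; something like this (using the port from the tcp input above) does the trick:

since /var/log/messages | nc 127.0.0.1 6999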

Onto the filter section:

filter {
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d; Rule: \d+ - .* srcip: %{IP:src_ip}"
    add_tag => "Source IP"
  }


The above code is a grok filter. The type of "ossec-syslog" makes sure grok only looks at the inputs labeled as "ossec-syslog." We match the source IP, capture it into the src_ip field, and tag the event with "Source IP."


  grok {
    type => "ossec-syslog"
    pattern => "(ossec: .*; Location: \(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|\(%{NOTSPACE:source_host}\) |ossec: .*; Location: \S+\|%{NOTSPACE:source_host}->\S+; )"
    add_tag => "Source Host"
  }


This grok filter captures the true source_host; otherwise my logstash host or the OSSEC server would be tagged as the source.


  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: %{BASE10NUM:alert_level};"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: %{BASE10NUM:sid} -"
    #add_field => ["sid", "%{@BASE10NUM}"]
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: Alert Level: \d+; Rule: \d+ - %{GREEDYDATA:description}; Location"
    add_tag => "Description"
  }
  grok {
    type => "ossec-syslog"
    pattern => "ossec: .*; Location: .*\S+->%{PATH:location};"
  }
}


These capture the alert_level, rule ID, description, and the file the log message came from.

Finally the output:

output {
  stdout { }


I log to stdout for troubleshooting purposes. If this were a real installation this would be commented out.


  elasticsearch {
    cluster => "elasticsearch"
  }


I run an elasticsearch cluster; the cluster's name is elasticsearch by default. I didn't change this, but it would probably be a good idea to name it something different.


  ## Gelf output
  gelf {
    host => "gamont.example.com"
  }


I'm also outputting my logs to graylog2 via gelf. graylog2 is pretty, but I haven't really dug into it much. I'm hoping to make a post about it in the future.


  statsd {
    host => "rossak.example.com"
    port => "8125"
    increment => "TEST.OSSEC.alerts.%{alert_level}.%{sid}"
  }
}


I also output some statistical data to statsd (which sends it to graphite, see the graph I posted in this post).

For an easier setup, you can change the elasticsearch output above to the following to use the embedded version:

elasticsearch { embedded => true }

You may have noticed that I have logstash configured to listen to the network on the localhost address only. I could have configured it to listen for syslog messages on the external interface, and shipped OSSEC logs directly there with ossec-csyslogd. Having logstash listen on the network would be easy, but I believe using a dedicated syslog daemon like rsyslog is the safer path. rsyslog does syslog; it's been tested heavily in sending and receiving syslog. To me it makes more sense to use a dedicated, full-featured application when I can. So I have rsyslog receiving the messages and writing them to the files logstash watches.

I could add another layer and use ossec-csyslogd to forward alerts to a local rsyslog daemon, which can then forward those messages to the logstash host. The benefit of this would be that rsyslog can deliver the messages reliably and over an encrypted channel.

On the logstash host I've added the following to rsyslog.conf to filter out the ossec messages:

$template DYNossec,"/var/log/ossec.log"
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ?DYNossec
if $msg startswith 'Alert Level:' or $msg contains 'Alert Level:' then ~

OSSEC alerts are put into /var/log/ossec.log and then discarded so they don't end up in any other log files.

This has been one of the most important elasticsearch links I've come across. The dreaded "Too many open files" error has popped up a couple of times for me.
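
If you run into it, the quick fix is to raise the open file limit for whatever user runs elasticsearch before starting the node (the number is just an example; make it permanent in limits.conf or your init script):

ulimit -n 65536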

The default search (click the link on the landing page) just looks for everything for the past few days:


Here's an OSSEC event with details:

OSSEC details
Another OSSEC alert, this one including a src_ip:
OSSEC details with src_ip

In the screen shots above you can see (a lot of information about my home network) the grok filters at work. Clicking on one of those fields will filter the output using that selection. For instance, I've clicked on the "6" in alert_level:

Alert Level 6

I currently have logstash holding a little over 3 million log messages:

Tuesday, October 25, 2011

3WoO: You got your OSSEC in my Logstash

There are a number of popular topics on the OSSEC mailing list, OSSEC-wui being the one I dread the most. OSSEC-wui is an old project that is currently unmaintained. It's a "web user interface" providing a view of the logs, agent status, and syscheck information.

The wui parsed OSSEC's plain text log files, requiring the web server to have access. Because of this the web server needed to be a member of the ossec group, and had to be able to access the OSSEC directories. That means no chrooted httpd unless you install OSSEC inside of the chroot. I didn't like this, and didn't use the wui.

Another problem with the wui was the log viewing interface. Working with OSSEC's rules and decoders has shown me how crazy the log space is. Everyone needs to create a new format, or thinks certain bits are not important/very important. It's crazy! There's also the possibility of attack through log viewers. XSS via a log webpage? It's definitely possible, and I've heard people talking about that possibility. Now, how much do I want to trust an unmaintained php application with access to potentially malicious log messages?

Fortunately there are alternatives: Graylog2, Splunk, logstash, Loggly, ArcSight, OSSIM, Octopussy, and probably dozens of others. Some are commercial and some are open source. Some have more features, and some are a bit more spartan. Paul Southerington even created a Splunk app for OSSEC called Splunk for OSSEC. It integrates Splunk and OSSEC quite nicely.

Two of the downsides to Splunk in my opinion are that it is not open source, and that the free license has a 500 megabyte per day limit. I'm guessing that I shouldn't have ever hit the 500MB limit since I'm running this setup at home, but I did. Nine times in one month. Since this is a home use setup a commercial license for Splunk just wasn't worth it, so I migrated to logstash.

Here's what logstash has to say about itself:
logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.

logstash has a simple interface

Logstash is written in ruby, utilizing some of the benefits of jruby. If you're using a version of logstash before 1.1.0 you may want to install libgrok as well. I believe it's being integrated into 1.1.0, but my information may be a bit off. I'll be using libgrok later in this post. That's the extent of the requirements.

A quick note about grok: It requires a relatively recent version of libpcre. CentOS (and probably RHEL) use a relatively ancient version of libpcre. While there are probably ways to get it working, I didn't want anything to do with that jazz, so I moved to Debian. I think it says something when Debian has a more up-to-date package than your former distro of choice.

I'm using logstash as an endpoint for logs, but you can also use it to push logs somewhere else. You could install a minimal logstash instance on one system, and have it provide that system's logs to another logstash instance somewhere else.

Looking at the documentation you'll see that logstash puts all configurations into one of three categories: input, filter, or output. An input is a source of log events. A filter can modify, exclude, or add information to an event. And an output is a way to push the logs into something else.

One of the most important outputs logstash offers is elasticsearch. If you're going to use logstash's web interface, you'll need this output. elasticsearch provides search capabilities to logstash. logstash and especially elasticsearch will use all of the RAM you give them. Luckily elasticsearch is clusterable, so adding more resources can be as simple as configuring a new machine. Sizing your elasticsearch installation is important, and knowing your events per second may help.

The logstash monolithic jar file contains the logstash agent, web interface, and elasticsearch. It allows you to run all 3 pieces of logstash with 1 command and to even use the embedded elasticsearch. This is much more limited than using a separate elasticsearch installation, but may be adequate for small or test installations.

In the following screen shot the first process is an elasticsearch node. The other 2 java processes are logstash's agent and web processes. I have 1 more elasticsearch node using slightly more memory than this one as well. Setting it up this way gives me a chance to play with it a bit.

logstash processes


I run the agent with the following command:
java -jar ./logstash-1.0.16-monolithic.jar agent -f ./mylogstash.conf

And the web process:
java -jar ./logstash-1.0.17-monolithic.jar web

This is getting a bit long, so I'll be detailing mylogstash.conf in my next post.

Monday, October 24, 2011

3WoO: OSSEC Documentation

The OSSEC documentation isn't good. In some areas it's downright bad. And this is my fault. I kind of took over the OSSEC documentation a while back, and trying to improve it has been tough. It takes time, energy, and skill. I have (at most) 1 of those.

It is (very slowly) getting better, but it still needs a lot of work. I'd love to have some help with it, so I figured I'd write up a few ways you can help.

One of the best things you can do is read the documentation. Read it to yourself, to your child as a bed time story, or just out loud to a rubber ducky. It doesn't really matter how you read it, just read it. You can spot more errors in the documentation when you read it. It's a fact!

If you spot an error, an omission, or just something that should be improved let me know. You're more than welcome to email me (my email's all over the mailing list), catch me on irc (#ossec on irc.freenode.net), or open a ticket. Opening a ticket is probably the most reliable. I'll probably just open a ticket if I'm contacted any other way.

Just like the base system, we use mercurial and bitbucket to manage the documentation. The main repository for it is https://bitbucket.org/ddpbsd/ossec-rules. bitbucket has a decent issue creation page, and it'll send me an email when a new ticket is opened.

Open Tickets

Creating a new issue is easy. If you don't know something, leave it at the default. Please include as much information as you can about the issue, it'll make it easier for me to fix it.

New Issue

If you want a more hands on approach, you can fix problems yourself and send me the changes. As I mentioned above we're using mercurial and bitbucket for the repository. To build the documentation we use Paver and Sphinx. Sphinx uses the reStructuredText markup language.

Start by simply cloning the repository with "hg clone https://bitbucket.org/ddpbsd/ossec-rules":

hg clone https://bitbucket.org/ddpbsd/ossec-rules
This will check out a copy of the repository in your current directory. When you change into that directory ("cd ossec-rules") you'll see a number of other directories. The documentation is kept in docs. Make your changes and run "paver html" when you are done. The resulting HTML files are created in "docs/docs".
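
Putting that together, the whole loop looks something like this (I'm assuming paver html is run from the top of the checkout; adjust if your layout differs):

hg clone https://bitbucket.org/ddpbsd/ossec-rules
cd ossec-rules
# edit the reStructuredText files under docs/
paver html
# the rendered HTML ends up in docs/docs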

"hg commit" will commit the changes you make to your repository (use "hg add FILE" to add FILE to the repository if FILE is new). "hg outgoing https://bitbucket.org/ddpbsd/ossec-rules" will create a diff of all changes you've made to the repository. You can email that diff to me, and I'll look at integrating it.

You can skip the last bit by forking my ossec-rules repository using the fork button on my bitbucket page (you will need your own bitbucket account, it's free).

Fork the repository

A fork in progress


Once you've forked the repository, clone it using the "hg clone" command with the URL for your own repository. Then make and commit your changes, and finally push the changes back into your repository with "hg push".

After you've pushed a change into your repository, you can initiate a "pull request" against ossec-rules. Include a little description to give me an idea of what changes you've made. I will then be notified of the request, and have the opportunity to pull those changes into the main repository.

pull request
That's a very quick overview of the OSSEC documentation setup. I hope to see some more participants in the future, and I hope you've found the documentation useful.

Keep your eyes peeled for a second documentation post later this week.

Third Annual Week of OSSEC

I'm a bit late to the party, but the Third Annual Week of OSSEC has begun. Michael Starks has planned a nice week, with some awesome blog posts. I've got a few I'm working on (hopefully I finish them).

Here's Michael's email to the ossec-list about the upcoming week. Sunday was day 1, so today is day 2:
"Tell your story. How has OSSEC helped you?"
 
For Day 2's blog post, Michael Starks posted 3WoO Day 2: Calculating Your EPS. He includes a little script to calculate your Events Per Second.
Knowing an estimate of your EPS is very important in speccing out hardware and preparing the network for the extra load created by OSSEC.
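
If you just want a ballpark number without the script, and you have <logall> enabled so every event lands in archives.log, something like this is close enough (it assumes the file holds roughly a day's worth of events):

wc -l < /var/ossec/logs/archives/archives.log | awk '{printf "%.1f events/sec\n", $1/86400}'
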
Usually when I want an idea of how many EPS I'm getting I look at this:
I'll edit the post if there's anything more today.
 
More contributions:
Xavier Mertens posted on how he's creating maps with OSSEC and AfterGlow.
 
Update 2:
Daniel Cid posts about a new feature in OSSEC! 

Friday, May 20, 2011

Encrypting OSSEC Alert Emails

OSSEC cannot currently encrypt alert emails before sending them out, and I do not think anyone is currently working on this feature. It can be accomplished, in a hacky way, using procmail.

First install an smtp server, procmail, and your encryption program on the manager. Next, create a user account to receive the OSSEC alert emails. Create a .procmailrc with the following contents:
PATH=/usr/bin:/usr/local/bin
SHELL=/bin/sh

# Encrypt the body of every incoming message with gpg
:0 Bfbw
| gpg --armor -r 'gpg_name' --encrypt

# Forward a copy of the (now encrypted) message to the real recipient
:0 c
! final_email@example.com

# Deliver the message to the local user's default mailbox as well
:0
$DEFAULT
The above procmailrc assumes gpg is the encryption program. Replace gpg_name with the name used by the key you want to use, and final_email@example.com with the email address of the user receiving the alerts.

This user should have the public keys of any users that will be receiving the email. Try encrypting a dummy file to make sure there are no yes/no prompts when using gpg (you may need to sign the keys). Send a test email to this user account to test procmail. It should show up in final_email's account, encrypted.
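
A quick test from that user's account (same recipient name as in the procmailrc) should produce ASCII-armored output with no questions asked:

echo test | gpg --armor -r 'gpg_name' --encrypt

If gpg asks whether you really want to use the key, sign or trust the recipient's key first.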

Finally, configure OSSEC to send the alert emails to the local user:
<email_to>user@localhost</email_to>
<smtp_server>127.0.0.1</smtp_server>
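
For context, those options live in the <global> section of the manager's ossec.conf, so the relevant chunk ends up looking something like this (the email_from value is just an example):

<global>
  <email_notification>yes</email_notification>
  <email_to>user@localhost</email_to>
  <smtp_server>127.0.0.1</smtp_server>
  <email_from>ossecm@localhost</email_from>
</global>
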
OSSEC should email the user account, which will encrypt the body of the message using procmail, and forward it to final_email@example.com. Like I said, it's a bit hackish, but it should work.

This whole idea started from an email to the OSSEC users list asking if OSSEC can encrypt emails. Of course my answer was no. But the question triggered something in the back of my brain, so I started to look into procmail. A few weeks of procrastination, a bunch of google searches, and a short testing period resulted in the above. Hopefully someone finds it useful.

If you happen to be a procmail expert and have any comments, please add them. The above is mostly from examples I found online, and I know it will match ALL email sent to that user. I imagine anyone using it for more than OSSEC could easily add a subject line check; I didn't think it was necessary for this example.

Thursday, March 31, 2011

World Backup Day



Is it World Backup Day already? Time sure does fly! It feels like this day was thought of only last week. Oh yeah, it was.
According to Ars Technica, today (March 31, 2011) is the first World Backup Day, and hopefully not the last. Check out the World Backup Day website for information and deals.
Just like the Ars contributors, I'll go ahead and write a few words about my backup strategy. Most of my data doesn’t change often, especially the bits that can’t be replaced. I do most of my real backups manually, about once a week. Some of these procedures will be automated in the future, but I’m not in a huge hurry to get this done.  I also use Gnu Privacy Guard with each of these processes.  Some data can (and should) be encrypted twice.
How I backup my data depends on how important I believe that data is. Very important data gets encrypted and stored in multiple (local and off-site) locations. Less important data may be backed up to a removable drive, saved on multiple systems, or even occasionally written to a cd-r.
For my most important data, I use the online backup service Tarsnap. It claims to be the "Online backups for the truly paranoid," and encrypts the data on the local system before uploading it. The service has saved me time and effort after a failure, so I’ll be sticking with it for a while. If nothing else, tarsnap gives me an excuse to use the term "picodollar."
I also have a free Dropbox account. More things get saved here than my tarsnap account, but they are less sensitive. I also keep the things I’d like to be able to access from my phone in my dropbox. Anything I don’t want other people to see gets encrypted with gnupg.
My offline backups include saving files to multiple systems. This might protect me against one drive dying, but it isn’t a good long term strategy. But that’s why I use the online backup services. I also have a portable USB hard drive I take with me on trips. It pulls data off of other systems nightly (one of the few automated processes I have setup).
So however you do it, backup your data today!

Wednesday, January 19, 2011

Shmoocon 2011

About a week from now the Shmoocon 2011 conference will be kicking off. I've managed to attend the past few Shmoocons and it's been a lot of fun. I'm a little disappointed that a couple of friends couldn't get tickets this year; they just seemed to sell out quicker than usual. My friends will just have to watch the streams.

There are a lot of great talks this year, as there are every year. The ones I'm most interested in are (based on the descriptions):
 My biggest hope for Shmoocon this year is NO SNOW! Last year's snow was fun, but once was enough.

Friday, January 14, 2011

I'm a slacker

I know I've been ignoring the blog, and I'd make some excuse if I had a good one. I don't, so it must be laziness. It's a new year, and a good time to change that.

Besides catching up on Chrono Trigger, I've been playing around with a few things that might lead to blog posts.

I've gotten the chance to learn a bit more about Splunk, and if I can ever stop violating my 500MB/day license (I didn't think I'd ever violate that on my home network) I hope to write something up on it. I want to specifically dig into the Splunk for OSSEC app.

DragonFly BSD is something else I've been playing with. The HAMMER file system sounded really interesting, and I wanted to give it a shot. I haven't been disappointed so far.

Finally, network flow data has interested me for a while. I wanted to dig into it a bit more so I went looking for a tool. The one I chose is Argus. There are a lot of options available in argus, but I'll post a few of the things I look at.

Here's to a good 2011!