Tuesday, October 25, 2011

3WoO: You got your OSSEC in my Logstash

There are a number of popular topics on the OSSEC mailing list, OSSEC-wui being the one I dread the most. OSSEC-wui is an old project that is currently unmaintained. It's a "web user interface" providing a view of the logs, agent status, and syscheck information.

The wui parsed OSSEC's plain text log files, requiring the web server to have access. Because of this the web server needed to be a member of the ossec group, and had to be able to access the OSSEC directories. That means no chrooted httpd unless you install OSSEC inside of the chroot. I didn't like this, and didn't use the wui.

Another problem with the wui was the log viewing interface. Working with OSSEC's rules and decoders has shown me how crazy the log space is. Everyone needs to create a new format, or considers bits unimportant that others find critical. It's crazy! There's also the possibility of attack through log viewers. XSS via a log webpage? It's definitely possible, and I've heard people discussing it. Now, how much do I want to trust an unmaintained PHP application with access to potentially malicious log messages?

Fortunately there are alternatives: Graylog2, Splunk, logstash, Loggly, ArcSight, OSSIM, Octopussy, and probably dozens of others. Some are commercial and some are open source. Some have more features, and some are a bit more spartan. Paul Southerington even created a Splunk app for OSSEC called Splunk for OSSEC. It integrates Splunk and OSSEC quite nicely.

Two of the downsides to Splunk, in my opinion, are that it is not open source and that the free license has a 500 megabyte per day indexing limit. I assumed I would never hit that limit running this setup at home, but I did. Nine times in one month. For a home setup a commercial Splunk license just wasn't worth it, so I migrated to logstash.

Here's what logstash has to say about itself:
logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching). Speaking of searching, logstash comes with a web interface for searching and drilling into all of your logs.

logstash has a simple interface

Logstash is written in ruby, utilizing some of the benefits of jruby. If you're using a version of logstash before 1.1.0 you may want to install libgrok as well. I believe it's being integrated into 1.1.0, but my information may be a bit off. I'll be using libgrok later in this post. That's the extent of the requirements.

A quick note about grok: It requires a relatively recent version of libpcre. CentOS (and probably RHEL) use a relatively ancient version of libpcre. While there are probably ways to get it working, I didn't want anything to do with that jazz, so I moved to Debian. I think it says something when Debian has a more up-to-date package than your former distro of choice.

I'm using logstash as an endpoint for logs, but you can also use it to push logs somewhere else. You could install a minimal logstash instance on one system, and have it provide that system's logs to another logstash instance somewhere else.

Looking at the documentation you'll see that logstash puts all configurations into one of three categories: input, filter, or output. An input is a source of log events. A filter can modify, exclude, or add information to an event. And an output is a way to push the logs into something else.

One of the most important outputs logstash offers is elasticsearch. If you're going to use logstash's web interface, you'll need this output. elasticsearch provides search capabilities to logstash. logstash and especially elasticsearch will use all of the RAM you give them. Luckily elasticsearch is clusterable, so adding more resources can be as simple as configuring a new machine. Sizing your elasticsearch installation is important, and knowing your events per second may help.
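Putting the three categories together, a minimal configuration might look something like this (a sketch based on my memory of the 1.0.x-era docs; the type names, grok pattern, and elasticsearch host are placeholders, and option names can differ between versions):

```
input {
  stdin {
    # events typed on stdin, tagged so the filter below can match them
    type => "example"
  }
}

filter {
  grok {
    # parse matching events with a stock grok pattern
    type => "example"
    pattern => "%{SYSLOGLINE}"
  }
}

output {
  # print parsed events for debugging, and index them for the web interface
  stdout {
    debug => true
  }
  elasticsearch {
    host => "127.0.0.1"
  }
}
```

Paste a syslog line on stdin and you should see the parsed event echoed on stdout while a copy lands in elasticsearch.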

The logstash monolithic jar file contains the logstash agent, web interface, and elasticsearch. It allows you to run all 3 pieces of logstash with 1 command and to even use the embedded elasticsearch. This is much more limited than using a separate elasticsearch installation, but may be adequate for small or test installations.

In the following screen shot the first process is an elasticsearch node. The other 2 java processes are logstash's agent and web processes. I have 1 more elasticsearch node using slightly more memory than this one as well. Setting it up this way gives me a chance to play with it a bit.

logstash processes


I run the agent with the following command:
java -jar ./logstash-1.0.16-monolithic.jar agent -f ./mylogstash.conf

And the web process:
java -jar ./logstash-1.0.17-monolithic.jar web

This is getting a bit long, so I'll be detailing mylogstash.conf in my next post.

Monday, October 24, 2011

3WoO: OSSEC Documentation

The OSSEC documentation isn't good. In some areas it's downright bad. And this is my fault. I kind of took over the OSSEC documentation a while back, and trying to improve it has been tough. It takes time, energy, and skill. I have (at most) 1 of those.

It is (very slowly) getting better, but it still needs a lot of work. I'd love to have some help with it, so I figured I'd write up a few ways you can help.

One of the best things you can do is read the documentation. Read it to yourself, to your child as a bed time story, or just out loud to a rubber ducky. It doesn't really matter how you read it, just read it. You can spot more errors in the documentation when you read it. It's a fact!

If you spot an error, an omission, or just something that should be improved, let me know. You're more than welcome to email me (my email's all over the mailing list), catch me on irc (#ossec on irc.freenode.net), or open a ticket. Opening a ticket is probably the most reliable; I'll probably just open a ticket myself if I'm contacted any other way.

Just like the base system, we use mercurial and bitbucket to manage the documentation. The main repository for it is https://bitbucket.org/ddpbsd/ossec-rules. bitbucket has a decent issue creation page, and it'll send me an email when a new ticket is opened.

Open Tickets

Creating a new issue is easy. If you don't know what to put in a field, leave it at the default. Please include as much information as you can about the issue; it'll make it easier for me to fix.

New Issue

If you want a more hands on approach, you can fix problems yourself and send me the changes. As I mentioned above we're using mercurial and bitbucket for the repository. To build the documentation we use Paver and Sphinx. Sphinx uses the reStructuredText markup language.

Start by cloning the repository:

hg clone https://bitbucket.org/ddpbsd/ossec-rules
This will check out a copy of the repository in your current directory. When you change into that directory ("cd ossec-rules") you'll see a number of other directories. The documentation is kept in docs. Make your changes and run "paver html" when you are done. The resulting HTML files are created in "docs/docs".

"hg commit" will commit the changes you make to your local repository (use "hg add FILE" first if FILE is new). "hg outgoing -p https://bitbucket.org/ddpbsd/ossec-rules" will list the changesets that exist only in your copy, along with their diffs. You can email those diffs to me, and I'll look at integrating them.

You can skip the last bit by forking my ossec-rules repository using the fork button on my bitbucket page (you will need your own bitbucket account, it's free).

Fork the repository

A fork in progress


Once you've forked the repository, clone it using the "hg clone" command with the URL for your own repository. Then make and commit your changes, and finally push the changes back into your repository with "hg push".

After you've pushed a change into your repository, you can initiate a "pull request" against ossec-rules. Include a little description to give me an idea of what changes you've made. I will then be notified of the request, and have the opportunity to pull those changes into the main repository.

pull request
That's a very quick overview of the OSSEC documentation setup. I hope to see some more participants in the future, and I hope you've found the documentation useful.

Keep your eyes peeled for a second documentation post later this week.

Third Annual Week of OSSEC

I'm a bit late to the party, but the Third Annual Week of OSSEC has begun. Michael Starks has planned a nice week, with some awesome blog posts. I've got a few I'm working on (hopefully I finish them).

Here's Michael's email to the ossec-list about the upcoming week. Sunday was day 1, so today is day 2:
"Tell your story. How has OSSEC helped you?"
 
For Day 2's blog post, Michael Starks posted 3WoO Day 2: Calculating Your EPS. He includes a little script to calculate your Events Per Second.
Knowing an estimate of your EPS is very important when speccing out hardware and preparing the network for the extra load created by OSSEC.
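For a rough number of my own (just a sketch, not Michael's script), you can divide a day's worth of log lines by the seconds in a day. The log path is a placeholder, and the generated file just stands in for a real day of events:

```shell
#!/bin/sh
# Sketch: estimate average events per second from one day of logs.
LOG=/tmp/ossec-day.log        # placeholder path to a day's worth of events
seq 1 172800 > "$LOG"         # fake log: 172800 lines stand in for real events
LINES=$(wc -l < "$LOG")
EPS=$((LINES / 86400))        # 86400 seconds in a day
echo "average EPS: $EPS"      # prints: average EPS: 2
```

Remember that an average hides bursts, so size for your peaks, not just the mean.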
Usually when I want an idea of how many EPS I'm getting I look at this:
I'll edit the post if there's anything more today.
 
More contributions:
Xavier Mertens posted on how he's creating maps with OSSEC and AfterGlow.
 
Update 2:
Daniel Cid posts about a new feature in OSSEC! 

Friday, May 20, 2011

Encrypting OSSEC Alert Emails

OSSEC cannot currently encrypt alert emails before sending them out, and I do not think anyone is working on this feature. It can be accomplished, in a hacky way, using procmail.

First install an smtp server, procmail, and your encryption program on the manager. Next, create a user account to receive the OSSEC alert emails. Create a .procmailrc with the following contents:
PATH=/usr/bin:/usr/local/bin
SHELL=/bin/sh

# Filter the message body through gpg, replacing it with the encrypted version
:0 Bfbw
| gpg --armor -r 'gpg_name' --encrypt

# Forward a copy of the (now encrypted) message
:0 c
! final_email@example.com

# Deliver the encrypted original to the default mailbox as well
:0
$DEFAULT
The above procmailrc assumes gpg is the encryption program. Replace gpg_name with the name used by the key you want to use, and final_email@example.com with the email address of the user receiving the alerts.

This user should have the public keys of any users that will be receiving the email. Try encrypting a dummy file to make sure there are no yes/no prompts when using gpg (you may need to sign the keys). Send a test email to this user account to test procmail. It should show up in final_email's account, encrypted.

Finally, configure OSSEC to send the alert emails to the local user:
<email_to>user@localhost</email_to>
<smtp_server>127.0.0.1</smtp_server>
OSSEC should email the user account, which will encrypt the body of the message using procmail, and forward it to final_email@example.com. Like I said, it's a bit hackish, but it should work.
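For context, those options live in the <global> section of ossec.conf; a minimal sketch (the addresses and email_from are placeholders for your own values):

```xml
<ossec_config>
  <global>
    <!-- turn alert emails on and point them at the local procmail user -->
    <email_notification>yes</email_notification>
    <email_to>user@localhost</email_to>
    <smtp_server>127.0.0.1</smtp_server>
    <email_from>ossecm@localhost</email_from>
  </global>
</ossec_config>
```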

This whole idea started from an email to the OSSEC users list asking if OSSEC can encrypt emails. Of course my answer was no. But the question triggered something in the back of my brain, so I started to look into procmail. A few weeks of procrastination, a bunch of Google searches, and a short testing period resulted in the above. Hopefully someone finds it useful.

If you happen to be a procmail expert and have any comments, please add them. The above is mostly from examples I found online, and I know it will match ALL email sent to that user. Anyone using the account for more than OSSEC could easily add a subject line check; I didn't think it was necessary for this example.

Thursday, March 31, 2011

World Backup Day



Is it World Backup Day already? Time sure does fly! It feels like this day was thought of only last week. Oh yeah, it was.
According to Ars Technica, today (March 31, 2011) is the first World Backup Day, and hopefully not the last. Check out the World Backup Day website for information and deals.
Just like the Ars contributors, I'll go ahead and write a few words about my backup strategy. Most of my data doesn’t change often, especially the bits that can’t be replaced. I do most of my real backups manually, about once a week. Some of these procedures will be automated in the future, but I’m not in a huge hurry to get this done.  I also use Gnu Privacy Guard with each of these processes.  Some data can (and should) be encrypted twice.
How I backup my data depends on how important I believe that data is. Very important data gets encrypted and stored in multiple (local and off-site) locations. Less important data may be backed up to a removable drive, saved on multiple systems, or even occasionally written to a cd-r.
For my most important data, I use the online backup service Tarsnap. It claims to be the "Online backups for the truly paranoid," and encrypts the data on the local system before uploading it. The service has saved me time and effort after a failure, so I’ll be sticking with it for a while. If nothing else, tarsnap gives me an excuse to use the term "picodollar."
I also have a free Dropbox account. More things get saved here than my tarsnap account, but they are less sensitive. I also keep the things I’d like to be able to access from my phone in my dropbox. Anything I don’t want other people to see gets encrypted with gnupg.
My offline backups include saving files to multiple systems. This might protect me against one drive dying, but it isn’t a good long term strategy. But that’s why I use the online backup services. I also have a portable USB hard drive I take with me on trips. It pulls data off of other systems nightly (one of the few automated processes I have setup).
So however you do it, back up your data today!

Wednesday, January 19, 2011

Shmoocon 2011

About a week from now the Shmoocon 2011 conference will be kicking off. I've managed to attend the past few Shmoocons and it's been a lot of fun. I'm a little disappointed that a couple of friends couldn't get tickets; they just seemed to sell out quicker this year. My friends will just have to watch the streams.

There are a lot of great talks this year, as there are every year. The ones I'm most interested in are (based on the descriptions):
 My biggest hope for Shmoocon this year is NO SNOW! Last year's snow was fun, but once was enough.

Friday, January 14, 2011

I'm a slacker

I know I've been ignoring the blog, and I'd make some excuse if I had a good one. I don't, so it must be laziness. It's a new year, and a good time to change that.

Besides catching up on Chrono Trigger, I've been playing around with a few things that might lead to blog posts.

I've gotten the chance to learn a bit more about Splunk, and if I can ever stop violating my 500MB/day license (I didn't think I'd ever violate that on my home network) I hope to write something up on it. I want to specifically dig into the Splunk for OSSEC app.

DragonFly BSD is something else I've been playing with. The HAMMER file system sounded really interesting, and I wanted to give it a shot. I haven't been disappointed so far.

Finally, network flow data has interested me for a while. I wanted to dig into it a bit more so I went looking for a tool. The one I chose is Argus. There are a lot of options available in argus, but I'll post a few of the things I look at.

Here's to a good 2011!