Tuesday, February 27, 2007

Making firewall ruleset details available to system administrators

In many environments, firewall configurations are a mystery to the administrators whose machines are protected by them. Often, firewall admins and operators receive questions like "Can machine X talk to protected machine Y, and on what ports?" or "Should my traffic get through from A to B?". The black box approach to firewalls can drive a wedge between systems administrators who are told to make the application work right now, and security folks who want to lock it down and keep it clean.

How can we fix this?

Most firewall systems can provide a text-based rule dump. With some cleverness, and a reasonably sharp programmer, this rule dump can be turned into a rule-checking application and made available with appropriate authentication. Firewalls also generally provide useful syslog output that includes details such as blocked traffic.
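To make that concrete, here's a minimal sketch in Python of what such a rule checker might look like. The rule format here is entirely hypothetical - a real tool needs a parser for your firewall's actual dump format - but the core idea of matching source, destination, protocol, and port against parsed rules is the same:

```python
import ipaddress

# Hypothetical dump format, one rule per line, first match wins:
#   action src_net dst_net proto port
#   e.g. "pass 10.0.0.0/8 192.168.1.10/32 tcp 80"

def parse_rules(dump):
    """Parse a text rule dump into a list of rule tuples."""
    rules = []
    for line in dump.strip().splitlines():
        action, src, dst, proto, port = line.split()
        rules.append((action, ipaddress.ip_network(src),
                      ipaddress.ip_network(dst), proto, int(port)))
    return rules

def check(rules, src_ip, dst_ip, proto, port):
    """Return the action of the first matching rule (first-match-wins),
    falling back to a default-deny 'block' if nothing matches."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    for action, src_net, dst_net, r_proto, r_port in rules:
        if src in src_net and dst in dst_net and proto == r_proto and port == r_port:
            return action
    return "block"
```

A sysadmin-facing form would just wrap `check()` with authentication and a couple of input fields for the two addresses and the port.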

What does this do?

Now systems administrators can check if traffic is passed by a rule, or if it is blocked. They can also make more intelligent requests for changes based on what they already have open. If you add the ability to see blocked traffic from the firewall's syslog, administrators can also check to see what their systems are doing that they aren't aware of, or if a badly behaved application is doing something that the documentation (or lack of documentation) doesn't mention. This is great when that new application or appliance is dropped off and something mysteriously isn't working, or you want to see how chatty the box is.

Rulesets also benefit from the increased transparency by becoming more accurate, and firewall operators spend less time tracking down requests for "whatever ports the new system is trying to use".

Does this decrease the security of your organization? That depends on the mode your firewalls operate in and what your local security policies are. If rulesets are considered highly confidential information, and administrators operate in a black box, then yes, having this information available means that they can see what you have built. Security via obscurity rarely does more than delay the inevitable, however, and the benefits can easily outweigh the negatives. Whether your firewalls are managed through a standardized change system or a relatively ad-hoc process, making the rules visible can make all the difference in how sysadmins think about firewalls and firewall rules - and about those pesky security guys.

I've worked with a number of administrators with varied degrees of security knowledge, and every one of them has been delighted to have this information available to them. Simply plugging two IP addresses into a form and seeing what ports are open, or checking whether port 80 TCP traffic can get to a system and where it can come from, means that their troubleshooting process can be greatly simplified. This puts control back into their hands, rather than leaving the network a mysterious black box.

If you build it, they will come. When they do, here are a few suggestions:

  • Limit access to the rulesets or machines to the appropriate administrators. If you have an Active Directory environment, limit access to its rules to the administrators who actually manage it.
  • Make sure your users understand how and when updates to the system are made. If you pull rulesets once a day and parse them at midnight, let them know that. Hours can be spent troubleshooting something that was fixed during a change window that your system hasn't picked up yet.
  • Test the system: look for ways it can fail. A bi-directional firewall ruleset might allow traffic in but block it on the way out, and if your application can't display that, issues will abound.
  • Ask your administrators what they need to get their jobs done. Giving them input into the tool can make it much more effective.
  • Make the tool generic - a plug-in architecture helps make your life easier when you change firewalls, syslog formatting, or your vendor updates a field in a patch bundle.
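As a rough illustration of that last bullet, a plug-in architecture can be as simple as a registry of per-vendor parser functions that all normalize to one internal rule format. The vendor names and dump formats below are made up for the example:

```python
# Registry mapping a vendor name to its parser function.
PARSERS = {}

def register(vendor):
    """Decorator that registers a parser under a vendor name."""
    def wrap(fn):
        PARSERS[vendor] = fn
        return fn
    return wrap

@register("vendor_a")
def parse_vendor_a(dump):
    # Hypothetical comma-separated format: "pass,10.0.0.0/8,any,tcp,80"
    return [tuple(line.split(",")) for line in dump.splitlines()]

@register("vendor_b")
def parse_vendor_b(dump):
    # Hypothetical space-separated format: "permit 10.0.0.0/8 any tcp 80"
    # Normalize this vendor's action keywords to the internal ones.
    rules = []
    for line in dump.splitlines():
        fields = line.split()
        fields[0] = {"permit": "pass", "deny": "block"}.get(fields[0], fields[0])
        rules.append(tuple(fields))
    return rules

def load_rules(vendor, dump):
    """Parse a dump with whichever plug-in handles this vendor."""
    return PARSERS[vendor](dump)
```

Adding support for a new firewall then means writing and registering one parser, not touching the rest of the tool.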

Monday, February 26, 2007

AcmeWare: Now with over 10,000 new features!

David posted earlier about an article written by Eric Allman for ACM Queue that discusses typical cases of how security bugs in software are dealt with. As I read this I noticed (and perhaps you did too) that a few paragraphs in is the following not-so-subtle advertisement link:

"Visual Studio 2005. Over 400 new features, the difference is obvious."
I found myself chuckling out loud in my cubicle at the irony in that advertisement when taken with the lead-in quote at the top of the article:
"A sad truism is that to write code is to create bugs (at least using today's software development technology)"

I'd like to believe that by that ad placement, someone was trying to raise an interesting question in a humorous way: How many security vulnerabilities are introduced into code through new features, and of these new features how many should have really been added? It seems to me that many software releases, whether they be minor revisions, service packs, or brand new major versions, are a race to cram as much crap (disguised not-so-cleverly as "features") as possible into the product before the release date.

With each new release comes a whole new slew of vulnerabilities that did not previously exist. This, of course, ties back into one of the points made in the article about software bugs being inevitable. I just wonder how many of them are there as a result of features that have no real business cases driving them, or that could have been pushed out to a later release where more thought and care could have been taken in their development.

Has anyone ever done a study on this?

Perils of Penetration Testing

Federal Computer Week is running an article discussing penetration testing - specifically, it compares in-house to outsourced penetration tests. It also discusses penetration testing software like Core Security Technologies’ Core Impact.

In general, insiders know the network, systems, and applications better than an outsider will, but outsiders often have the cachet necessary to sway management. With tools like the Metasploit framework commonly available, in-house security folks have capabilities they have never had before.

So should you outsource your penetration testing? If your inside security folks have the skills, it all comes down to what you need out of the test and how their time and your money can best be spent. Arming them with the best tools you can afford? That's a no-brainer...

Sunday, February 25, 2007

BotNet Operators Getting More Savvy?

I ran across a short piece from DarkReading discussing trends seen by the botnet trackers at Arbor Networks. It seems that botnet operators are starting to see the writing on the wall and are moving to greener pastures than strait-laced IRC. Encrypted IRC, HTTP, and P2P are all up for grabs. I also found the anti-honeypot tactics interesting. This more than anything shows why investigators shouldn't use the "let's poke it with a stick and see what it does" method on any old IP found while investigating a compromised system.

Still, I don't think it's all doom and gloom. Even with superBot 6000 around the corner, there are still plenty of folks running plain-Jane IRC bots out there and even more Joe Users ready to click on that link and serve up a fresh new machine for the zombie ranks. Overall, these new bots are just another move in the security chess match.

I'd say the article is a good warning for Network Security folks to keep changes in mind as they build future defenses and countermeasures. Building a security mechanism based only on current incarnations of risks is shortsighted and foolhardy. Technology changes, deal with it.

Handling bugs in your code

Eric Allman, the Chief Science Officer of Sendmail, wrote an article for the ACM Queue about handling bugs in your code. He notes that "A sad truism is that to write code is to create bugs...The really sad part is that at least some of these are likely to be security bugs."

The article is a well-written, brief overview of methods for dealing with security bugs and their repercussions. He also discusses important questions that you will want to answer when determining your strategy, and how to deal with the announcement, the patching or fix process, and the aftermath. If you're doing software development, or if you provide a product or service that would be subject to bugs or vulnerabilities, this is probably worth a read.

Anti-DNS pinning and Google Desktop

Infoworld is running an article on Watchfire's announcement of a vulnerability in Google Desktop. In a worst-case scenario, this vulnerability could give outsiders access to any item indexed by Google Desktop. Google handled this nicely - they made a patch available - but as with any such vulnerability, many people would likely not have patched had it been exploited.

This is one reason why I strongly recommend that Google Desktop be prohibited in areas that deal with sensitive or restricted information. A third-party indexing service is very dangerous if it is found to be vulnerable - and any indexing system can find files that you or your users may not realize are there. A little over two years ago Bruce Schneier wrote about Google Desktop's indexing dangers. This is also an excellent reason to keep sensitive files encrypted and backed up on a remote system and access them only as needed.

Friday, February 23, 2007

You say you need a web application security primer?

Heise Security posted their PHP-focused Security Know-how for web application security. While it is focused on PHP security, much of the content applies in a general way to any web application programming environment. They're targeting a reasonably technical user, so this isn't something to show your management, but it is a good article for your local PHP developer to read.

If you don't read anything else, make sure you read the last page - it covers the most important security settings in php.ini.
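I won't reproduce Heise's list here, but to give a flavor of what's involved, these are the kinds of php.ini hardening directives that were commonly recommended around this time. Treat this as an illustrative sample, not their list - check the article and your PHP version's documentation for the authoritative set:

```ini
; disable the legacy behavior that turns request data into global variables
register_globals = Off
; don't leak errors (and filesystem paths) to visitors; log them instead
display_errors = Off
log_errors = On
; block remote file access and remote include tricks
; (allow_url_include is available as of PHP 5.2)
allow_url_fopen = Off
allow_url_include = Off
; don't advertise PHP in response headers
expose_php = Off
; restrict filesystem access and shell-execution functions
open_basedir = /var/www
disable_functions = exec,passthru,shell_exec,system
```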

This catches the other side of Matt's post - build your applications to be secure, and lock them down first, then test them. As all three of us can attest, even good developers make mistakes, but having standards and being aware of security practices is a good start.

OWASP Testing Guide v2 Released

OWASP has recently released the latest version of its Web Application Testing guide. The guide provides a framework for testing throughout the software development life cycle, as well as a walk-through guide for testing web applications for known vulnerabilities using popular attacks.

If you are charged with testing a web application for security issues, are interested in learning techniques for becoming a web application pen tester, or are even just a programmer, this guide should be a great source of information for you.

"But Matt, we've got a $2,000 license for PicoDyne's 'Super-Karate-Monkey-Web Application Assessment' tool. Why on earth would I spend my time going through this guide?".

Well, I'm glad you asked. Nearly all automated web application security testing tools are stupid, literally. They plow through a web application throwing pre-defined and generated test cases, but they have little useful intelligence behind them. They find all of the stuff that anyone can find. Don't worry, though - these tools are fabulous as LHF (Low-Hanging Fruit) detectors. They can get all of the obvious stuff out of the way and let you, the tester, spend more time focusing on the difficult things like blind SQL injection attacks.
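To see why blind SQL injection is hard for dumb tools, consider boolean-based inference: the page never echoes your data back, so you learn one bit at a time from whether a true or false condition changes the response. Here's a minimal Python sketch of the core test - the parameter values are hypothetical, and `fetch` stands in for whatever does the actual HTTP request:

```python
# Suffixes appended to a parameter value; a vulnerable query treats them
# as SQL, a safe one treats them as part of the literal value.
TRUE_COND = "' AND '1'='1"
FALSE_COND = "' AND '1'='2"

def is_injectable(fetch, base_value):
    """Boolean-based check: a parameter looks injectable when appending a
    true condition returns the normal page while a false condition does not."""
    normal = fetch(base_value)
    return (fetch(base_value + TRUE_COND) == normal
            and fetch(base_value + FALSE_COND) != normal)
```

Scanners automate exactly this kind of probe, but deciding which parameters are worth probing, and interpreting odd or inconsistent responses, still takes a human.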

Thursday, February 22, 2007


Welcome to Devil's Advocate Security. We're going to try to do something a little different with this blog. In addition to normal security topics, we will be taking on current security topics, platforms, and technologies. We'll do exactly what the title says - try to provide a devil's advocate view of the topic in addition to the mainstream view.

The authors of this blog are all full time security professionals, and we'll bring that experience in to give a view from the front lines of IT security - often with a focus on higher education.

As we move forward, we'll also try to bring in other experts and professionals. If you have a topic you would like to have us discuss, let us know.