Thursday, May 28, 2009

Frontpage Lessons - UC Berkeley's Health Data Breach

UC Berkeley's recent announcement of a large-scale health data breach serves as a great reminder of two basic security best practices - service separation and monitoring.

In this case, at least part of the flaw was a multi-use front facing system providing both web and database services with sensitive health data. Berkeley's announcement notes that "The attackers accessed a public Web site and subsequently bypassed additional secured databases stored on the same server." A multi-tier architecture with appropriate security precautions between each tier would likely have provided better security, at a far lower cost than that of notifying 160,000 individuals of their potential exposure. The first lesson is that a strong architectural design for public facing systems can save a huge expense in both dollars and man hours later.

In addition to the multiple services on the single server, the length of time that the system was compromised suggests that a range of diagnostic and detection systems that could have been monitoring the server were not in use. According to the announcement, the breach began on Oct. 9, 2008, continued until April 9, 2009, and was detected only during routine server maintenance.

The second lesson here is that system monitoring is crucial - Tripwire, border flows to detect remote SSH sessions, and log auditing would all likely have helped find this compromise earlier.

Despite the flaws that led to the compromise, Berkeley appears to have done a good job after the fact of providing resources to affected individuals at datatheft.berkeley.edu, and their response process included both activation of a CIRT and contact with the FBI. As any IT organization knows, some flaws will slip through, either old or new - Berkeley's response shows that they have a coherent plan, and this lesson should only serve to improve their overall security posture.

Thursday, May 21, 2009

The Failure of Security Questions

MIT's Technology Review author Robert Lemos recently tackled security questions as a method of password retrieval or resets. We've all seen these before - often a small number of fixed questions that have predictable answers. I wrote about them from a user perspective back in 2008 - The Problem With Security Questions - And An Easy Solution, where I discussed using a password safe utility and using answers unique to each site.
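The per-site random answer approach can be sketched in a few lines. This is a minimal illustration, not anything from the original post - the function name and answer length are my own choices, and the assumption is that you record the generated answer in your password safe alongside that site's credentials.

```python
# Generate a random, per-site "security question" answer.
# Store the result in your password safe next to the site's password.
import secrets
import string

def random_answer(length=16):
    """Return a random lowercase-alphanumeric string to use as an answer."""
    alphabet = string.ascii_lowercase + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

# Example: a unique "mother's maiden name" for one particular site.
print(random_answer())
```

Because each site gets a different random string, guessing one answer (or harvesting it from public records) reveals nothing about your other accounts.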

Lemos points out that research found that "answers that require only a little personal knowledge to guess should also be considered unsafe" and that "Of people that participants would not trust with their password, 45 percent could still answer a question about where they were born, and 40 percent could correctly give their pet's name, the researchers found."

Your pet's name is likely in your Flickr photo stream, or your email inbox. Your co-workers likely know your favorite sports team, your favorite color may be easy to guess from your fashion choices, and your pet and your significant other may be conversational topics that they would remember - making many security questions useless.

Many security questions are less than creative, and worse, because they're intended to be something that everybody can provide an answer for, they're likely to be something that others also know or can find out from easily accessible records.

  • What is your mother's maiden name?
  • What is your father's middle name?
  • What is your favorite sports team?
  • What is your favorite color?
Most users proceed to use actual answers to these, meaning that the answers are easy to find in today's connected, database driven world of available information. How hard is it to go from a first name, a last name, and a geographic location to a user's personal details? Not that difficult. Parents' names can be found in birth records, or you may be able to simply check their LinkedIn, Facebook, or other profile. Their favorite color or sports team can be found in similar places - and there are a limited number of guesses for most people.

Worse, family, friends, and acquaintances can often guess their way into such sites. Security staffers will tell you stories of disgruntled spouses logging into their partner's accounts using the facts that they know about the person to reset their password.

Many sites handle this with an email-based password send capability - shrdlu notes simply using it on every visit to a site rather than remembering the site's password. I'm sure many of the rest of us have developed similar bad habits - and, of course, if you can get passwords sent via email, anybody who takes over your email account needs only visit those sites and request a password reset to take over those accounts too.

But security questions serve a very useful purpose, particularly for sites that have a large number of users, or who have users who may use the site only infrequently. They're a somewhat reasonable way of allowing users to have the ability to reset their password, and they push some responsibility to those users to keep their security questions difficult. The problem remains that without better options, users often create a back door into their account.

So, what alternatives are there?
  • Out of band methods, such as sending an SMS
  • Multiple factor methods, such as validating against another data point or, preferably, several data points
  • Skip a reset method and have customer service deal with it
Of course, not having security questions can also be a problem for some sites - social engineering to get passwords reset has worked many times in the past too.

I'll keep my eyes open for clever ways to handle this problem, and, perhaps more importantly for ways to explain the risk model effectively to management.

Monday, May 18, 2009

Compromise Investigation: JavaScript Unescape

A simple method of obfuscating JavaScript code in a page is to use escaped characters - you get text like this:

%64%65%76%69%6c%73%61%64%76%6f%63%61%74%65%73%65%63%75%72%69%74%79
This is then wrapped in a simple call as a normal JavaScript:

eval(unescape('data'));

This results in a block that isn't intelligible to the average user, yet takes almost no effort to create. The example that this came from was a simple webpage redirect in a compromised web directory which was set up as a search engine redirect site. Simple, yet reasonably effective and somewhat hard to locate using my normal searches.

How can you read this if you run into it? Simple. On a secure machine that is properly protected from exploits, copy the page, make sure that none of the other code is malicious, and replace "eval" with "document.write". Open the page in your web browser and you will see the actual text.
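If you'd rather not load the page in a browser at all, the same %XX decoding can be done offline. A minimal Python sketch, using an example payload (Python's urllib.parse.unquote performs the same percent-decoding that JavaScript's unescape() does for ASCII characters):

```python
# Decode a percent-escaped JavaScript payload offline, rather than
# letting a browser eval() it. The payload below is an example string.
from urllib.parse import unquote

payload = '%64%65%76%69%6c%73%61%64%76%6f%63%61%74%65%73%65%63%75%72%69%74%79'
print(unquote(payload))  # -> devilsadvocatesecurity
```

This avoids executing any of the surrounding script, which is safer than the document.write trick when you aren't sure the rest of the page is clean.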

Where did this script point? An alias to another site, both of which appear to be used for search engine based advertising spam.

Friday, May 15, 2009

Selling Changes: Security Implementations

Part of the security program that I am in charge of is a transition to a Cisco Clean Access based "zoned" network. In parallel, we're migrating from a relatively open campus wireless network to an authenticated network with unencrypted and encrypted SSIDs for guests and our own users.

None of this is particularly attractive to end users or technical staffers in general - we're creating additional work for many, whether by making them touch systems to install the Clean Access agent, or by adding user support time when their users can't figure out why they don't have network access because their system isn't authenticated. Many worry that we will filter their traffic, monitor it, or otherwise make their network access less usable than it was pre-change.

Since not every device is capable of handling CCA or web authentication, we have to deal with some switches on a port by port basis, and we also have to whitelist many devices.

How, then, you may ask, do I sell the project with thousands of users, over a hundred buildings, and a user base who are used to a flat, undivided network?

  • First, I explain why the project is part of my program: we did a risk assessment, and this was identified as a risk to the organization, and in addition, we're doing it because we want to know who to contact for any given system, and who they are, at least on a role basis.
  • Second, I explain why our management is behind this, and what their expectations are. This includes personal responsibility for both our IT staff and our users, and that they are responsible for protecting our network and our data.
  • Third, I describe our test scenarios, our communication plans, and what our migration process involves. We've deployed to a variety of areas, we've tested extensively, and we have the ability to exclude problem areas. We also ensure that the changes are well communicated, that we meet with all parties in an area before deploying, and we have an on-site team post change.
So far, we're meeting with reasonable success. Management is behind the effort, and we have a talented technical team - but this effort is going to boil down to a large communications and sales effort.

Thursday, May 14, 2009

iPhone Security FAQs


iphonelinksandfaqs.com has a number of useful FAQs, including details on iPhone password options using both Exchange and local configuration capabilities, remote wipe, content and application restrictions, and more.

The iPhone still doesn't have quite the same depth of security tricks that Blackberries do, but they're starting to get there.

Wednesday, May 13, 2009

Firewall Change Management: Changes That Require No Change


Firewall administrators are frequently asked to update rules for systems that don't work. Upon further research, the problem is frequently not with the firewall ruleset, but with the system itself. In fact, many problems magically solve themselves when they are looked at in the light of day - or at least, a tcpdump and a few minutes of actual review of the traffic between the host and its destination.

Thus today's security analyst quote: "The changes we do that require no change are always done without error".

A corollary is, "Those changes that require no change often require just as much time as those that do". This is because the research to find out why a request isn't necessary often involves just as much log review and traffic analysis as a properly specified request would, particularly if the firewall system does not allow arbitrary queries for existing rules.

There are a number of methods that you can use to help reduce the number of requests that turn out to be already allowed, or that are poorly specified. I'm going to take a moment today to outline a few of the most common reasons that I encounter poorly specified rules in change requests.

These poorly specified rules are often due to one of a small handful of issues:
  1. Poorly understood architecture - the client/server relationships are unclear, either because the software is outside the norm (some software initiates connections from both sides, on the same port, and sometimes at very odd times), or because the architecture isn't understood and thus the requests don't account for the logical locations of the systems affected.
  2. Lack of vendor documentation or poor vendor documentation - many vendors provide poor or little documentation for their applications. Some are particularly unclear, and simply specify a port, or a wide range of ports. Others don't document whether they mean TCP, UDP, or both. Yet another set don't document which system initiates the connection, and where it goes, or how long it stays open without keepalives. All of these can create a great deal of work for the firewall administrator if they aren't documented or if the request is poorly drafted.
  3. Untested software or services - if documentation doesn't exist, administrators must test their systems to determine what they actually do. This can help determine actual ports and protocols, and can find those undocumented calls that can wreak havoc if missing from the ruleset.
  4. Lack of communications and planning, or a total lack of a real design - when projects are conducted without review, designs can be created that don't fit the capabilities of the firewall, or which can put undue stress on it. Sometimes this results in data center wide system backups occurring through an already stressed firewall, or a reliance on broadcast UDP traffic between hosts that just won't work through a stateful firewall.
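For the untested-software case above, even a quick connectivity check beats guessing before filing a rule request. The sketch below is a hypothetical example of probing candidate TCP ports; the host and port list are placeholders, and real testing should also cover UDP and observe which side initiates connections (e.g. with tcpdump).

```python
# Probe candidate TCP ports on a host before drafting a firewall request.
# Host 'localhost' and the port list are placeholders for illustration.
import socket

def check_tcp_port(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in (80, 443, 8080):  # candidate ports from vendor documentation
    status = 'open' if check_tcp_port('localhost', port) else 'closed/filtered'
    print(f'{port}/tcp: {status}')
```

A request backed by observed behavior ("the client initiates TCP to the server on 8443") takes the firewall administrator minutes instead of hours.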
How does your organization handle rule requests? Perhaps more importantly, how do you handle the workload required to document the changes and to track them?

I'll talk about a few methods to help shape requests into a useful form in a post in the near future.

Friday, May 8, 2009

Book Review: Mac OS X Leopard Security

One of the interesting challenges I've faced recently in my project load was how to deal with security training for Mac OS X. While Windows and Unix training are common and easily available, the only Mac OS security training options were a pending SANS course and a dated Apple course.

What's a security guy to do? Our Mac community is generally aware that their resistance to attacks thus far has been more based on being a smaller community than on the Mac having some mystical inherent resistance to compromise - although a few holdouts beg to differ. We have begun to see the beginnings of a Mac OS malware threat, and we have definitely seen more standard SSH compromises of systems with weak passwords.

My best solution was to identify a book to serve as a resource and as a basis for short workshops for our Mac support staff. Our staff range in expertise from hardcore Mac techs to staffers who are interested either at the hobby level, or who support a handful of random Macs in their department. Thus, the book had to cover both the basics and more advanced topics. In addition, it had to be current - many Mac OS X 10.3 and 10.4 books are out there, but far fewer cover 10.5.

I read reviews and flipped through quite a few books before settling on Charles Edge, William Barker, and Zack Smith's Foundations of Mac OS X Leopard Security. The book has received many positive Amazon reviews as well as a Slashdot review, and had actually been independently purchased by a couple of our campus administrators based on their own flip-throughs of the book.


What does the book offer? Well, it offered a lot of what I was looking for, including:

  • GUI based instructions for most basic MacOS security topics
  • Details on malware and rootkits
  • User account security
  • File services security
  • Server security
  • And a selection of advanced topics
What you don't get is a down and dirty command line level toolkit, although many command line basics are covered. The book also makes no mention of security standards and profiles such as the CIS standards. Overall though, our Mac admins have generally been impressed, and have read it closely enough that they have pointed out a couple of mistakes.

Does Foundations of Mac OS X Leopard Security replace a training class? No. But it does give the staff a ready resource and a common body of knowledge on which we can base our own discussions. I'll schedule followup discussions of specific topics as necessary, and we'll keep our eyes open for training. For the time being, I'd recommend this book to anybody needing a good primer on Mac OS security.

Tuesday, May 5, 2009

More Google Whacking to Detect Compromises

Tom Liston posted "Putting the ED _back_ in .EDU" on the ISC diary yesterday. I've discussed using Google Alerts to monitor institutional webspace in the past.

The lessons from those posts remain valid - I've detected a number of webspace compromises in the past year, and continue to use Google Alerts as an easy detection method. The methodology is simple: just build a query along the lines of:

site:(your site) -pdf -ppt -doc "poker" OR "xanax" OR "viagra" OR "cialis"

Then set your alert and watch. I keep mine sorted into a unique mail folder, so all I have to do is see if that folder shows a new alert. You can end up with some false positives, particularly with the inurl directive, but in general, you'll find that this is a great tripwire for large institutional webspaces with dynamic or user generated content.
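If you monitor several domains with the same term list, a query along those lines can be assembled programmatically. This is a hypothetical helper - the function name and example domain are mine, not from the original posts:

```python
# Assemble a Google Alerts query for webspace spam monitoring.
# The domain and term lists below are examples; substitute your own.
def build_alert_query(site, exclude_types, terms):
    """Build a query like: site:example.edu -pdf "poker" OR "xanax"."""
    exclusions = ' '.join(f'-{t}' for t in exclude_types)
    term_clause = ' OR '.join(f'"{t}"' for t in terms)
    return f'site:{site} {exclusions} {term_clause}'

query = build_alert_query('example.edu', ['pdf', 'ppt', 'doc'],
                          ['poker', 'xanax', 'viagra', 'cialis'])
print(query)
```

Feed the resulting string into a Google Alert for each domain you are responsible for, then watch the alert folder as described above.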

This technique can also be used to monitor for internal documents and files - simply build your search to include the search terms that are of interest for your specific site.