Thursday, February 7, 2008

Are security departments wasting their time?

An article on Dark Reading got a lot of play time in the office today, and a friend asked me about my response. Here's what I sent back:

"Only 3 percent of the vulnerabilities that are discovered are ever exploited".

Many vulnerabilities are patched before exploits are widely released. Many more have relatively narrow exposure, or require in-depth knowledge to exploit. Without an active vulnerability testing community, the low-hanging fruit - the easily exploited vulnerabilities - might be even easier for attackers to find.

In addition, throwing a percentage out without qualifiers is suspect - three percent of all vulnerabilities? Three percent of vulnerabilities granting admin-level access? Statistics without perspective are difficult to judge on their merits.

In today's IT workplace, it is difficult to justify not actively patching vulnerabilities and monitoring for them. Any audit firm that found you willfully ignoring vulnerability monitoring, testing, and patching would write a very nasty note to your management. If you expose web applications to the world, you hold customer data, and you don't make at least a reasonable attempt to secure your code, few people will support your claim that you've done your due diligence.

Comparing vulnerability research to automotive safety research isn't as apt a comparison as the illustration makes it seem. To expand the metaphor, if vulnerabilities are seen as similar to shooting an arrow through the sunroof of a vehicle, then one has to presume that there are hundreds, if not thousands, of skilled archers, even larger numbers of amateurs, and hordes of automatic arrow-throwing machines shooting at your vehicle.

In addition, all of them can reach your vehicle from anywhere in the world. You may have a shield up, but you probably allow some holes in it because people inside need access, and those holes may let arrows in. Arrows that penetrate often become arrow shooters themselves, firing at the rest of your vehicles behind the shield.

Is vulnerability research the end-all solution to security? No. Is it necessary? At least in the foreseeable future it will be. The perfectly safe car hasn't been built, and the perfectly secure computer hasn't been either. It is one aspect of a full information security program. There is a balance to be struck between searching for and fixing vulnerabilities, and ensuring that the system isn't vulnerable in the first place. This is all part of the development lifecycle - and no part of it should be ignored.

After we deal with vulnerability research, we have to deal with strategies. Security strategies built around a single computer are certainly necessary - you can't ignore the individual building blocks. The keystone may be important to the arch, but it doesn't mean much without the other stones around it. Protecting endpoints is one layer, but it isn't the only thing you do. You have to assess risk and apply controls based on the criticality and sensitivity of the system or systems you are protecting. So yes, you protect one system - but that's just the first layer. Then you protect other systems at the network layer, then you protect your border, and so on.

Security professionals realize that security is seen in shades of grey. The only completely secure system is one that's turned off - and somebody will steal that if you're not watching it! Again, layers and assessment are the keys.

Tippett comments that people believe that perfect process can make an organization more secure. Without question that's true, not false! Most security professionals would take any level of success, provided it was better than what existed before and they understood at least roughly what their success rate was. If you have a well-designed process, you use it, and it addresses a valid threat, then yes, that is better than not doing it or doing it haphazardly. Doing it at all is better in some cases than never doing it, as long as you understand that you don't have complete coverage.

Will statistics show that? It depends on the other layers, random chance, and of course, what other risks there are. Security reporting rarely captures all of the factors in the equation, and many organizations don't report information in a useful way - willfully or not, it doesn't matter.

Patching your systems without any firewalling will probably fail, even if your patching process is perfect. But it will still keep you from falling prey to exploits of the vulnerabilities your patching program took care of.

Tippett suggests enabling default deny on routers. In general, routers use access control lists, and the normal recommendation is to set firewalls to default deny. This may be a technical distinction the writer didn't capture, so I won't argue the semantics. In either case, proper router ACLs and firewall rules are useful, and a good part of the world knows it; the move these days is toward proper outbound rules and monitoring, and better anomaly detection.
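To make the default-deny idea concrete, here's a minimal sketch of the logic in Python. The rule entries are invented for illustration - this isn't any real router ACL or firewall syntax - but the property it demonstrates is the one that matters: allow only what you explicitly need, and drop everything else, inbound and outbound.

    # Sketch of default-deny filtering. The rule sets below are
    # hypothetical examples, not real ACL or firewall syntax.

    ALLOW_INBOUND = {
        (80, "tcp"),   # public web server
        (443, "tcp"),  # public web server over SSL
    }

    ALLOW_OUTBOUND = {
        (53, "udp"),   # DNS lookups
        (80, "tcp"),   # web traffic, ideally only from the proxy
    }

    def permits(direction, port, protocol):
        """Allow traffic only if a rule explicitly permits it.

        The key property of default deny: a forgotten rule fails
        closed (traffic blocked), not open (traffic allowed).
        """
        rules = ALLOW_INBOUND if direction == "inbound" else ALLOW_OUTBOUND
        return (port, protocol) in rules

    assert permits("inbound", 443, "tcp")        # allowed explicitly
    assert not permits("inbound", 23, "tcp")     # telnet: denied by default
    assert not permits("outbound", 6667, "tcp")  # IRC bot traffic: denied

The outbound half is the part most shops still skip, and it's exactly what catches a compromised internal host trying to phone home.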

Finally, security awareness is obviously critical, but you have to assess the risks! Incident classification - impact and severity - is crucial here. Reducing your incidents by 30% is not a good tradeoff if you only eliminate minor security issues while missing the one major virus infection that leads to the trojaning of your highest-value fileserver.
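A back-of-the-envelope calculation makes the point. The numbers here are invented purely for illustration:

    # Invented, illustrative numbers: many minor incidents vs. one major one.
    minor_per_year = 100
    cost_per_minor = 500        # e.g. a desktop cleanup and reimage

    major_per_year = 1
    cost_per_major = 250000     # e.g. the trojaned high-value fileserver

    minor_loss = minor_per_year * cost_per_minor    # 50,000
    major_loss = major_per_year * cost_per_major    # 250,000

    print("30% fewer minor incidents saves:", 0.30 * minor_loss)   # 15000.0
    print("Preventing the one major incident saves:", major_loss)  # 250000

Even a dramatic reduction in the high-volume, low-impact category can be worth a fraction of stopping the single high-impact event.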

In the end, the real dilemma that we face is assessing our risk, and then using the assets - time, people, and money - to the best of our ability to meet those risks. That requires brutal honesty, knowledge of what your threats really are, and a willingness to face risks rather than to deny their existence.

Archer photo credit to foundphotoslj

3 comments:

kurt wismer said...

"In today's IT workplace, it is difficult to justify not actively patching vulnerabilities and monitoring for them."

it's difficult to imagine he was referring to that practice... once a vulnerability becomes publicly known and a patch has been made, it makes sense to patch...

i believe he was instead referring to the hunting of vulnerabilities... if his figures (in their broadest interpretation) are correct then certainly it was a waste of time finding and creating patches for most of those vulnerabilities...

"To expand the metaphor, if vulnerabilities are seen as similar to shooting the sunroof of a vehicle,"

this isn't expanding the metaphor, it's twisting it... the fact that an arrow can pass through the sunroof is the vulnerability, actually firing an arrow through a sunroof is exploiting the vulnerability...

"Is vulnerability research the end-all solution to security? No. Is it necessary? At least in the foreseeable future it will be. The perfectly safe car hasn't been built, and the perfectly secure computer hasn't been either."

while the perfectly safe car hasn't been built, arrows through the sunroof is a ridiculous thing to waste time on because no one is doing it, just as no one is exploiting most of the vulnerabilities...

"Tippett comments that people believe that perfect process can make an organization more secure."

i'm pretty sure his comment was that people believe it will make an organization more secure, not that it can... hopefully you can see the distinction...

David said...

Excellent points, Kurt. As with most discussions like this, I suspect that given a face-to-face discussion, I'd find that I agree with Mr. Tippett more than I agree with his quotes in the article.

I think that hunting vulnerabilities is part of the patching process, and that vulnerabilities will be found either by the good guys or the bad guys. For many of the good guys, it makes sense to hunt because the cost is too high if the other side finds them first.

My emphasis here is really that the risk must be appropriately measured and countered. If you're building custom web applications, you need to protect yourself against vulnerabilities, and you need to do an appropriate amount of testing - probably at multiple points in the software development lifecycle. If you're a software vendor, you need to test your software - if for no other reason than to protect yourself from liability and from losing upset customers.

If his numbers are correct and none or very few of those vulnerabilities would ever have been exploited, then of course we're wasting money and effort. But step back and look at the tightening of the window from vulnerability announcement to exploit release, and the pressure that has put on the software industry; if we had the same statistic for a class of, say, local admin vulnerabilities, we'd be looking at a very different - and worthwhile - number.

Let's chalk this one up as another case where statistics with no detail are useless except to impress the masses.

With that said, the basic - and in most cases quite reasonable - presumption of a hostile environment means there is a continued need to spend money, time, and effort. We have to hunt, because someone else will if our resources are worth protecting.

As for the sunroof quote - I don't think I twisted the metaphor - the original quote is,

"If I sat up in a window of a building, I might find that I could shoot an arrow through the sunroof of a Ford and kill the driver," he said. "It isn't very likely, but it's possible"."

Similarly, if I send a string of characters to a webserver, I might manage to crash it. It's not very likely, but it's possible.
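And that kind of probing is trivially cheap to automate. Here's a crude sketch - Python, with a hypothetical target host, and nothing like a real fuzzer's sophistication - of throwing garbage request lines at a web server and watching for connections that die:

    # Crude fuzzing sketch. The target host is hypothetical; only point
    # tools like this at systems you own or are authorized to test.
    import random
    import socket
    import string

    TARGET = ("webserver.example.com", 80)  # hypothetical target

    def junk_request(max_len=4096):
        # Build a request line padded with random printable characters,
        # including whitespace and control characters that break parsers.
        junk = "".join(random.choice(string.printable)
                       for _ in range(random.randint(1, max_len)))
        return ("GET /" + junk + " HTTP/1.0\r\n\r\n").encode("ascii", "replace")

    for attempt in range(100):
        try:
            with socket.create_connection(TARGET, timeout=5) as conn:
                conn.sendall(junk_request())
                if not conn.recv(1024):
                    print("attempt", attempt, ": closed with no reply")
        except OSError as err:  # timeouts, resets, refused connections
            print("attempt", attempt, ":", repr(err), "- worth a closer look")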

The problem is that that web server is what my business depends on to conduct e-commerce. Or that bug is a crash bug in a generator control system, or it's in the login script for college admissions. Interconnected systems mean that hunting vulnerabilities often means fixing relatively obscure things to ensure that they're not escalated.

If we were in an environment in which people frequently looked for ways to get arrows into cars, and if we had high value assets in the cars, then we would have to gauge the risk appropriately. His statement covers part of the risk spectrum, and I'll agree that in some organizations, too much time is likely spent fixing sunroofs, but there are very valid threats out there that need to be taken care of. Dismissing them as unlikely *can* (not is, but can) be a mistake. I'll point to my post on denial in risk assessments there. We're in an era where automated tools can take on an application and find holes automatically, and in which there is often a paycheck associated with finding a new way to compromise systems.

As long as owning systems pays, there is a financial reward to shoot arrows through sunroofs.

(I'm officially marking the metaphor as dead due to an arrow through the sunroof now.)

Finally, you're right that the direct quote is "that my organization will be more secure". The key is the follow-up though - "But studies have shown that there isn't necessarily a direct correlation between doing these processes well and the frequency or infrequency of security incidents".

As I stated, I don't believe that simple statistics are sufficient, or that current reporting is accurate enough to support broad statements about practices. The two practices he cites are ones that I've seen be real saviors for organizations. If you had asked me "is penetration testing necessary for all organizations?", I'd likely have reacted differently!

Typically, security practices should have multiple reasons for implementation - flawless antivirus patching is probably a response to virus outbreaks, and vulnerability patching is likely there to ensure that you don't miss that critical server. Even if they're done only because they're on a checklist of best practices, I'd still find it hard to argue against them - having been in organizations that learned the hard way that they should have been implemented fully.

You'll note that my emphasis isn't really on flawless process, but on assessing the problem and applying an appropriate level of care. Understanding why you're doing it, and what the risks are in your approach is useful.

kurt wismer said...

"Like most discussions like this, I suspect that given a face to face discussion, I'd find that I agree with Mr. Tippett more than I agree with his quotes in the article."

indeed, an interactive conversation allows for semantic clarification that just isn't possible when dealing with a static article...

"My emphasis here is really that the risk must be appropriately measured and countered."

this much i actually agree with... mitigating a small percentage of high impact incidents does seem like it could be more worthwhile than mitigating a larger percentage of low impact incidents...

"As for the sunroof quote - I don't think I twisted the metaphor - the original quote is, "

that quote again describes exploitation, not simply the vulnerability... you equated exploitation with the vulnerability itself, which confuses the issue and led to what seemed to me to be a nonsensical extension...

"You'll note that my emphasis isn't really on flawless process, but on assessing the problem and applying an appropriate level of care. Understanding why you're doing it, and what the risks are in your approach is useful."

which means you're precisely not the type of security practitioner that mr. tippett was referring to... that doesn't mean that the type of practitioner he was referring to isn't out there though...