Sunday, March 30, 2008

Taking a toll: Separation of duties...or not.

Security analysts often find themselves turning a blind eye to another organization's security issues in order to have their own needs met. At other times, we just appreciate the irony when we're told about the security process, and how the organization's representative just violated it to get their job done. Here's a recent example:

A co-worker recently signed up for the local toll road's quick-pass system. The process is more arduous than it needs to be: after several rounds of missing activation IDs, incorrect instructions, documentation that didn't match the actual process, and missing emails, he discovered that he needed an activation PIN that should have been sent via email but wasn't, and called support a third time.

The organization in question is carefully set up so that no individual sees both your account number and your PIN at the same time. If a support team member prints your PIN and address to be mailed, another department mails it without ever seeing your account number.

This sounds pretty reasonable, and should be workable. It even provides a reasonable amount of safety for the customer - or should. Unless, of course, the PIN is sent by email, and the provider has DNS issues and can't resolve your well-known domain. That's when CSRs get creative.

The service representative did the right customer service thing, and the wrong separation-of-duty thing: he walked to the other department, got the paper from the printer, and read the PIN over the phone. No controls prevented it, making the separation meaningful only to those who choose to follow the rules.

Not exactly effective compartmentalization - but it worked in the customer's favor this time.

This is a useful case to remind us that our carefully built process requires checks and monitoring. Simply relying on processes without validating them and verifying that they aren't being violated can be worse than knowing that no process exists - you're left with a false sense of security.

The co-worker? He found out that the pass system charges a monthly maintenance fee on top of the tolls and the balance the toll system holds to "charge" his account, and he is considering switching to the alternate system available in the area.

Creative Commons image credit billjacobus1

Thursday, March 27, 2008

Identity theft: Income tax returns and stolen identities

The University of California's Irvine campus recently made the following announcement:

UC Irvine has received more than 50 reports that Social Security numbers have been stolen and used to file fraudulent tax returns to gain refunds. Victims are identified as current and former UCI graduate students and medical students. In most cases, the students have discovered the issue when they electronically submit their federal income tax returns and the IRS informs them that someone has already filed using their name and Social Security number.
This should be particularly disturbing to people, as it isn't simple credit card fraud - actual tax returns were filed using these Social Security numbers. Not only could this cause real hassles in dealing with the IRS if it becomes a widespread issue, but it means that attackers have discovered how little validation there is in the IRS tax return system and exploited it to their advantage.

This is the first report I've seen of relatively large-scale tax return fraud intended to profit from refunds on more than an individual basis, but if it succeeds and doesn't result in a successful investigation, it likely won't be the last.

It will be interesting to see if similar issues occur at other campuses dealing with SSN exposures. The IRS is handling it reasonably well - they're allowing second, valid returns to be filed, and are asking the affected individuals to file police reports. It is worth noting that they are requesting a paper tax return be sent to a specific office: online tax return submission may make this exploit much easier.

UCI now gets to try to find out if they were the source of a data leak - from the FAQ on their announcement page:
Q. Does this appear to be an isolated case?
A. There are more than 50 cases at UCI, but this currently appears to be focused on graduate and medical students.
With that sort of pattern, we may well see a breach announcement in the near future as required by the California Breach Disclosure Act - UCI notes that they are currently investigating.

Flickr Creative Common licensed image credit to Matt Honan.

Tuesday, March 25, 2008

Listing: the Craigslist attack vector

Most of us don't worry about people looting our homes while we're at work - but a new form of attack can create more than a nuisance. A recent Craigslist hoax resulted in large numbers of people taking possessions from Robert Salisbury's Jacksonville, Oregon home. The news coverage of the event is worth a read.

This is somewhat similar to the "SWATters" who fake 911 calls using caller ID spoofing, social engineering, and other tactics. In each case, the attack is reasonably easy to conduct anonymously, can cause great damage, and uses third parties to carry out the actual attack. In many ways, this is a physical manifestation of what security professionals are used to seeing from botnets and zombies conducting an attack.

Will we see a new term for Craigslist lootings and other attacks - Listing, perhaps?

Other events, such as the massive out-of-control party in England that followed announcements online and by a radio DJ, point to the power of broadcast media. Normal social controls are often ignored when people feel they've been invited to take advantage of a situation - and the damages can be hard to calculate.

Have you updated your home inventory recently?

Creative Commons licensed Flickr image credit to user blmurch

Monday, March 24, 2008

Bruce Schneier: Inside the Twisted Mind of a Security Professional

Bruce Schneier's commentary on Wired about how security professionals think is a good read - and a great opportunity. Those of us in the industry often hear statements such as "Wow, I'm glad you're on our side" or "That's pretty evil!" when we make suggestions of how to break a system. Bruce says:

"This kind of thinking is not natural for most people. It's not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don't have to exploit the vulnerabilities you find, but if you don't see the world that way, you'll never notice most security problems."

Bruce's thoughts on this closely match my own - we are, in some ways, engineers of failure. Where others look to make systems work, we seek to stress and test them to the breaking point. The mindset is often difficult to escape - the switch is always on.

When I'm at my local credit union branch, I'm watching to see how they handle my transaction, and how security is set up inside. I look for flaws everywhere - from simple issues like not locking doors to complex issues with data and programming. I know that I check security automatically, and that I analyze almost any system I'm faced with to find flaws or opportunities for exploit.

Since we're security professionals, and we'd like to make other people more aware, we're faced with the question: can we teach the security mindset? Bruce's contention is that it isn't trivial:

"I've often speculated about how much of this is innate, and how much is teachable. In general, I think it's a particular way of looking at the world, and that it's far easier to teach someone domain expertise -- cryptography or software security or safecracking or document forgery -- than it is to teach someone a security mindset."

So, if it isn't trivial, how do we do it? Can we teach the rudiments of the security mindset in a way that makes it accessible to the layman? I believe we can. Will they be effective security analysts overnight? Of course not - there's a degree of technical knowledge and understanding that we can't instill easily. The analytical mindset is, however, something we can plant the seeds for. All we need is a little crack in the normal mindset - one that stops simply accepting systems and instead looks for issues.

Here are a few ideas that you can use to prompt people in your organization to adopt the security mindset:
  1. Challenge them to think like a bank robber when they next do their banking. Ask if they pay attention to security cameras, how they identify themselves, and if the cashier has money out and visible.
  2. Get them interested in how a system they are involved with can break. Web developers often delight in breaking an application if you show them how, and system administrators are tickled to learn how to break into a machine - if it isn't theirs! Find something that the person works with every day, and show them how the system can be broken.
  3. Make opportunities available for people to ask questions and to test systems, and encourage your staff to use them. I've had the opportunity to lecture college classes on physical security, and you can help foster the moment when the light comes on through simple means: tell stories, point out issues and fixes, and then ask simple questions. You'll be surprised at how the pace picks up once one person answers.
How do you plant the seeds of the security mindset?

Thursday, March 13, 2008

Core Impact and Ed Skoudis: Penetration Testing Ninjitsu

Core Security is sponsoring Ed Skoudis's presentations on ethical hacking and penetration testing under the title "Penetration Testing Ninjitsu". Ed and Core will be doing a total of three webcasts - the first focused on Windows command-line tricks; future webcasts are slated to include social engineering and other techniques. The next webcast will be on May 20th, 2008.

Ed is an excellent speaker, particularly for those who are unfamiliar with the techniques but who have a reasonable level of general technical knowledge. He's well worth listening to if you get a chance.

In the first presentation in the series Ed emphasized one of my favorite characteristics of an information security analyst right up front - the ability to think out of the box, and to use their innate creativity.

In his presentation, Ed talked about some of the basics of penetration testing which are worth repeating:

  • You have to know the limitations - things like scope, time, access, methods, and the final truth: you won't find all of the vulnerabilities.
  • Penetration testing can help find things that other approaches missed, including previously unknown problems. In addition, it often goes deeper than most audits.
  • Penetration testing isn't the only approach you should use - combine it with configuration and architecture reviews, automated tools, audits, and interviews with personnel. The key: a comprehensive security program.
You'll find more about how to deal with the risks that you find in a penetration test in my writeup on risk handling methods and denial.

Ed also covered a number of Windows command-line tips - many of these are covered in the GCIH training for Security 504 that SANS offers, as well as in his Windows Command-line Kung-Fu. If you've taken either class, today's presentation was largely review - ping, dns lookups, arp cache checking, SMB enumeration and shares, a huge amount of detail about for loops, and a few other tricks.
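The command-line sweep tricks Ed covered translate readily to other environments. As an illustration (mine, not from the webcast), here's a rough Python analogue of the classic cmd.exe for-loop ping sweep; the `ping` flags shown assume a Linux host:

```python
import ipaddress
import subprocess

def hosts_in(network):
    """List the usable host addresses in a CIDR block."""
    return [str(h) for h in ipaddress.ip_network(network).hosts()]

def ping_sweep(network, timeout_s=1):
    """Ping each host once and return those that answered.

    A rough analogue of the cmd.exe one-liner
    `for /L %i in (1,1,254) do @ping -n 1 -w 1000 10.1.1.%i`.
    """
    alive = []
    for host in hosts_in(network):
        # -c 1: one echo request; -W: per-reply timeout (Linux ping flags)
        result = subprocess.run(
            ["ping", "-c", "1", "-W", str(timeout_s), host],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
        if result.returncode == 0:
            alive.append(host)
    return alive
```

The point is the same one Ed makes: a loop and a built-in tool get you surprisingly far before any specialized scanner is needed.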

The amusing observation that I and others watching the webcast shared: during a security presentation where attendees couldn't see each other's names - presumably to provide anonymity - submitted questions displayed the submitter's name. If you want to remain a bit more anonymous, you can't ask questions...

Creative Commons licensed image credit Flickr user R'eyes.

Tuesday, March 11, 2008

Upcoming webcast: Risk Assessment and Risk Transfer in a Digital Age

Core Security and Chubb (an insurance company that offers "CyberSecurity" insurance) are hosting a webcast titled "Risk Assessment and Risk Transfer in a Digital Age" on Tuesday, March 25th at 2 PM EST. If you're using a risk based security model, or you're interested in insurance as a risk transfer strategy, this might be an interesting webcast to listen in on. Their topic list includes why attacks are made, how to improve security, a comparison of assurance and false confidence, and how to transfer risks. That last topic is of particular interest, as Chubb's involvement should mean a good discussion of what options insurance can offer you when transferring risk.

I've started to see more organizations looking into cyber insurance, and the market appears to be maturing - this is, in fact, the first webcast I've seen hosted by an insurance company. Cyber-theft insurance and similar coverage have become more frequently discussed topics in the past year, and I'll be interested to see how Chubb positions itself in the market.

Lessons in adaptability: a TSA screener's response to the MacBook Air

Most IT people have probably seen a commercial for the MacBook Air, even if we haven't seen one in person. It's thin, it doesn't have a standard optical drive, and it may not even have a spinning hard drive if it's configured with the SSD option.

A post on Wide Awake Developers offers a good reminder about awareness and security training. The TSA employees who were faced with a MacBook Air didn't recognize it as a laptop - according to the post, they called it a "device", and delayed the poster long enough to make him miss his flight. The good news is that the TSA agents did eventually ask for their normal "boot the machine and demonstrate an application" method of validating that it is a computer. A perfect process? No, but at least they eventually got through to it.

What's the lesson? It's a simple one: don't forget to teach adaptability and to have a method for dealing with unrecognized issues and technologies when you're building a security system. Adaptable security models are more likely to catch issues, and can prevent process breakdowns that can cost money or response time. Every system should have a fall through catch-all - if something doesn't fit the expected norms, a process needs to take over that will handle the event.
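The fall-through idea can be sketched in a few lines. This is purely an illustrative model - the categories and handler names are invented, not actual screening procedure: route recognized cases to their handlers, and send anything unrecognized (the "device" case) to an explicit catch-all rather than stalling the whole process.

```python
def make_screener(handlers, fallback):
    """Build a screening function with an explicit catch-all.

    handlers: dict mapping recognized item categories to handling routines.
    fallback: routine applied to anything unrecognized, so a new gadget
    triggers a defined escalation path instead of an ad hoc delay.
    """
    def screen(item):
        handler = handlers.get(item.get("category"), fallback)
        return handler(item)
    return screen
```

For example, `make_screener({"laptop": boot_and_demo}, escalate_to_supervisor)` guarantees every item gets *some* defined handling, which is the whole lesson here.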

There's one more lesson to be learned thanks to the MacBook Air - don't lose it. Steven Levy demonstrates how easy it is to lose a small device, and with a decent size drive in it, an unencrypted MacBook or other small, executive friendly device can expose a large amount of data.

Creative Commons licensed Flickr photo credit to Marcin Wichary.

Friday, March 7, 2008

Combating bots: Anti-Botnet software versus IDS, flows, and other methods

Ryan Naraine's eWeek article titled "Growth of Anti-Botnet Startups Points to AV Deficiencies" got me thinking about how I and my peers handle botnets. In it, Naraine cites the Yankee Group's Andrew Jaquith on traditional antivirus's deficiencies against bots. Here's how the major detection approaches compare:

Traditional AV

Traditional AV may detect some bots, or components of botnets, but central reporting is necessary to get a big-picture view. Some AV also includes the ability to block some outbound traffic, such as outbound IRC; this can help stop systems from joining the botnet - you may have a compromised machine that simply won't phone home.

Traditional AV is a great first step if you are getting useful data from it. If it only protects endpoints and doesn't contribute to your overall awareness, you're missing out on functionality, and you'll miss out on chances to see when something new hits you.


IDS

Installing an IDS on your outbound link can be a great way to detect botnet traffic. Knowing what you expect to send out and watching for traffic that doesn't match - IRC traffic from a server, HTTP traffic to many hosts in quick succession, or any of a host of other things you're used to seeing as inbound attacks - can be a good indicator of a compromised host. As botnets move to encrypted HTTP communication, you may not be able to see what the traffic is - but the attacks and other actions are likely to still trip your sensors.
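One of those "doesn't match" patterns - a single internal host fanning out to many new destinations in a short window - can be approximated with a trivial detector. This is a hedged sketch of the heuristic, not any particular IDS's logic; the event layout, threshold, and window are illustrative assumptions:

```python
from collections import defaultdict

def flag_fanout(events, threshold=50, window=60):
    """Flag internal hosts contacting many distinct external IPs in a window.

    events: iterable of (timestamp, src_ip, dst_ip) tuples, assumed sorted
    by time. Rapid HTTP-style fan-out from one host is one sign of a
    compromised machine; real IDS rules would add ports, rates, and
    whitelists.
    """
    recent = defaultdict(list)   # src -> [(timestamp, dst), ...]
    flagged = set()
    for ts, src, dst in events:
        # drop observations older than the sliding window
        recent[src] = [(t, d) for t, d in recent[src] if ts - t <= window]
        recent[src].append((ts, dst))
        if len({d for _, d in recent[src]}) >= threshold:
            flagged.add(src)
    return flagged
```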


Flows

Flows are a great tool when combating botnets. A simple filter can help catch new outbound flows, and watching for flow patterns associated with DDoS attacks and other outbound traffic can help you pin bots down quickly.

Flows are also useful when looking for other compromised hosts. Often, identifying a single host and matching what it does against the rest of your network can quickly show you all the hosts that have been compromised with the same package.
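That pivot - "find the other hosts doing what the known-bad host does" - is easy to sketch against flow records. The record layout and overlap threshold here are illustrative assumptions, not a specific flow tool's schema: bots installed from the same package tend to phone home to the same controllers, so shared destinations are a quick lead.

```python
from collections import defaultdict

def similar_hosts(flows, known_bad, min_overlap=2):
    """Find hosts whose outbound destinations overlap a known-bad host's.

    flows: iterable of (src_ip, dst_ip, dst_port) records.
    Returns the set of other sources sharing at least min_overlap
    (destination, port) pairs with the known-compromised host.
    """
    by_src = defaultdict(set)
    for src, dst, port in flows:
        by_src[src].add((dst, port))
    bad_profile = by_src.get(known_bad, set())
    return {src for src, dsts in by_src.items()
            if src != known_bad and len(dsts & bad_profile) >= min_overlap}
```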

External Reporting

Reports from third parties and organizations such as ISACs can be invaluable. While it is poor practice to rely on third-party notices as your sole source of information, ignoring reports is not only bad net citizenship, it can be outright dangerous. Check to see if your organization has access to an ISAC or other peer group that might feed useful data to you from an external perspective.

Future Issues and Direction

Much as we have seen in the market as the major antivirus companies have added anti-spyware capabilities, we will likely see the major vendors acquire anti-botnet technologies to add to their stable. For now, those products are likely to be stand alone, but progress should lead to the capabilities being added to edge devices and security appliances. We may even see anti-botnet capabilities added to enterprise class desktop security suites - monitoring of outbound traffic via host IDS/IPS and firewall capabilities pushes extrusion detection to the endpoint, and will provide a more granular security environment.

Will we see the smaller independent vendors with good products acquired? Will they lose their edge if they are? Time will tell, but my feeling is that botnet detection technology growth will continue to mirror the development cycle of other security products in the market.

Thursday, March 6, 2008

Log Management: Observations from the Log Management Thought Leadership roundtable webcast

I listened in on WhiteHatWorld's log management roundtable webcast that I mentioned on Monday. The panel provided a few noteworthy tidbits. If you're just starting to look at log management systems, or you are trying to sort out some of the decision points between SIM/SEM and log management devices, you should look for a panel like this. The review of concepts and issues can be useful, and I felt that the panel reflected many of the experiences that I've had. Here are a few of the highlights:

Appliance vs. software - the panel generally supported appliances due to:

  • Ease of deployment.
  • Ease of use.
  • Fixed price, fixed form factor - value is more easily determined.
  • Support and updates.
SIM/SEM versus log management
  • There is an increasing use of a blended approach - both ends of the market are growing toward the middle.
  • Most vendors started at one end - some did analysis, some did log management - and they tend to do best what they started with.
  • Logging versus security - the emphasis differs, as security isn't the only use of logs.
  • Compliance, forensics, and analysis are drivers for either type of implementation.
Choosing a solution - a few of the top selection criteria and testing hints were:
  • Fit your collection infrastructure to your environment and your requirements. You have options including: agentless vs. agents, multi-level collector/analysis engines, and other design choices. Architecture can have a major influence on performance.
    • Remote sites may make agents particularly useful
  • The ability to collect different data types: flow data, syslog via TCP and UDP
  • The ability to scale as your environment or deployment changes
  • Analysis capabilities and other automated handling. Decide what you need, and what would provide the greatest benefit.
  • Test and assess the speed of access to data and the ability to search the data. Pay particular attention to indexing capabilities and storage methods
  • Be careful of the dangers of relying on a single performance specification - vendors often measure under ideal conditions. Real-scenario testing is useful: what happens when features are enabled, the UI is in use, and other actual usage patterns occur?
  • If you're intending to use the system for incident response, review your legal requirements, such as verifiable chain of custody, validation, and audit.
  • The experts suggested reviewing NIST standards such as 800-92
  • Deploy a proof of concept:
    • See how your network actually works.
    • Check the items you're logging.
    • Remember that space is cheap
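On the syslog-collection point above: whatever transport delivers the messages, a collector normalizes the priority header first. A minimal sketch of that step (illustrative, not any vendor's parser), following the RFC 3164 convention that PRI = facility × 8 + severity:

```python
def parse_pri(line):
    """Decode the RFC 3164 <PRI> header on a raw syslog line.

    Returns (facility, severity, remainder) or None when the line has no
    well-formed header - the kind of normalization a log management
    collector applies to both TCP- and UDP-delivered syslog.
    """
    if not line.startswith("<"):
        return None
    end = line.find(">")
    if end < 2 or not line[1:end].isdigit():
        return None
    pri = int(line[1:end])
    return pri // 8, pri % 8, line[end + 1:]
```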
Finally, a few ways to fail:
  • Roll your own and don't carry it through.
  • Choose a product based primarily on price or an informal relationship
  • Miss important functionality requirements
If this sounds useful, their next Log Management TLR webcast is on March 19, 2008, at 2 PM EST.

Edit, 03/07: you can listen to the recording here.

Tuesday, March 4, 2008

Cold Boot Encryption Attack goes Open Source

If you've been reading along you'll be familiar with the cold boot/RAM harvesting encryption key attack that came to light a few weeks ago. Originally, researchers from Princeton posted a video highlighting their work in harvesting encryption key information from RAM. If you haven't been following along, take the time to watch the video and then read here and here for our write-up.

As predicted, a tool has been released into the wild to harvest RAM data from Microsoft Windows computers. The McGrew Security RAM Dumper is not script-kiddie friendly; however, with a little work and the instructions provided, you too can grab the contents of RAM and run.

In the end, tools like this will become more readily available and used more often to expose what we would prefer to keep confidential. As the adage goes, "If the bad guys get their hands on your computer, it's not your computer any longer."

Monday, March 3, 2008

Upcoming events: WhiteHatWorld Log Management webcast

WhiteHatWorld will be carrying a log management roundtable webcast on Wednesday, March 5th at 2 PM EST (GMT -05:00). Topics are slated to include log management value propositions and capabilities, features, implementation, and operation.

Big-name panelists from a few of the major vendors in the log management space will be participating.

You may recognize Dr. Chuvakin from his Security Warrior blog - he's been kind enough to drop a link our way in the past.

You can sign up for WhiteHatWorld's event notifications at

Sunday, March 2, 2008

Fighting data exposure in small claims court

StorefrontBacktalk's Eric Schuman writes about Theodore Karantsalis's pursuit of Wells Fargo and Sprint Nextel for exposing his personally identifiable information. Interestingly, Karantsalis went after both companies in small claims court, arguing that class actions rarely return anything real to the consumer, and rarely in a reasonable timeframe.

Schuman asks an interesting question: what happens to large corporations if consumers begin to sidestep the normal process of litigation and take their claims to small claims court? Often, large companies will settle rather than fight, as their defense costs exceed the small payouts requested. Karantsalis requested three times the cost of a PGP license ($597) in his claim, and received it. If this became standard practice, corporations would have to defend themselves more actively or establish precedent against such claims - something that would be difficult to do if consumers can show real costs associated with the loss of their data.

The original StorefrontBacktalk article can be found here.