Thursday, December 20, 2007

DHS: CFATS - have you accounted for your chemicals?

Many higher education institutions will be preparing their lists of "chemicals of interest" for the Department of Homeland Security early next year. The rule stipulates that listings be delivered within 60 days of the release of the final rule, meaning lists will have to be provided by January 19th unless you request and receive a 60-day extension. Covered chemicals range from chlorine to aluminum chloride to propane, meaning that many departments on campus will likely have to report their totals.

If you haven't thought about what the chemicals on your campus could be used for, take a look. Each listed chemical notes what it could potentially be used for, providing a convenient overview of the risks your campus might face.

The Chemical Facility Anti-Terrorism Standards require a variety of things, ranging from risk assessments to reporting of the chemical amounts on hand. These apply to many higher education institutions, and responsibility for the detailed reporting may have fallen to risk management or facilities staff. If you're an information security staffer, you may want to check with the appropriate department at your school to see how that data is being stored and secured.

Where did all of this come from? It is part of the Department of Homeland Security Appropriations Act of 2007, which President Bush signed in October, 2006. Section 550 of the Act gave DHS the authority to enact the rules above. The Act defines the covered entities as "chemical facilities that, in the discretion of the Secretary, present high levels of security risk." More details can be found in the final rule here.

Monday, December 17, 2007

OSXCrypt - TrueCrypt for OS X status update

Frequent readers may recall me posting about a donation-funded port of TrueCrypt for OS X. The group has just published their first update.

It sounds like they're taking on a bit more than a simple TrueCrypt port, as the post notes that:

...we realized that this project will not be a simple port of truecrypt to Mac OS X, but this will provide a multiple enciphered disks support encryption platform for the Apple operating system.
Right now the project has a simple XOR'ing kernel module, but progress is being made. I continue to hope that TrueCrypt support of OS X becomes a practical endeavor through this project.

Monday, December 10, 2007

Soft-R's CD Cryptex

Soft-R, maker of "Self Recordable Media Technology," is looking for OEM and industry customers for its latest ware - the CD Cryptex. The CD Cryptex is a CD-R that aims to bridge the gap between users' limited knowledge of encryption software and the need for data on CD-Rs to be encrypted. Soft-R claims that the disc, loaded with its own burning and encryption engine, can be used without mastering complicated encryption software. Perfect - as long as you're running Windows 2000 or greater, the only supported platforms.

On the technical side, AES256 in CBC mode is used to encrypt a container that houses all of the data files/folders sent to the disc via the onboard burning engine. Keys are managed via pass phrases (limit 64 bytes) using SHA-256 hashes - which are thereafter needed to access, edit, or view the files. Interestingly, Soft-R has included a virtual keyboard that one assumes is for use on machines that cannot be trusted. To avoid leaving lingering copies of data, all temp files are wiped after the disc is burned. They even include a "secured photo viewer."
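Based on those claims (a passphrase of up to 64 bytes, hashed with SHA-256 to key AES-256), the key handling presumably looks something like the sketch below. The function name and the limit check are my assumptions, not Soft-R's actual code - and note that a bare, unsalted SHA-256 with no iteration count makes offline passphrase guessing cheap:

```python
import hashlib

MAX_PASSPHRASE_BYTES = 64  # Soft-R's stated passphrase limit

def derive_aes256_key(passphrase: str) -> bytes:
    """Derive a 256-bit AES key the way the product description implies:
    a single unsalted SHA-256 of the passphrase."""
    data = passphrase.encode("utf-8")
    if len(data) > MAX_PASSPHRASE_BYTES:
        raise ValueError("passphrase exceeds the 64-byte limit")
    return hashlib.sha256(data).digest()  # 32 bytes = AES-256 key size

key = derive_aes256_key("correct horse battery staple")
print(len(key))  # 32 - exactly the AES-256 key size
```

Since no per-disc salt is mentioned, two discs burned with the same passphrase would share a key; a more careful design would add a salt and a slow KDF such as PBKDF2.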

I can't wait to play with one of these to see if they live up to the claims. Would you trust one of these with your data versus PGP encrypted files burned to a CD?

Thursday, December 6, 2007

The Rule of Two

The group I work with has a simple rule that pays off in spades.

Any time a security recommendation is made, we check it with another team member. Thus, any decision follows our rule of two. The second person's job is to play devil's advocate: to check for assumptions and mistakes, and to provide a second viewpoint on the recommendation.

Often we take into account the other team members' histories and specialties to choose the best person to look at our recommendation. That allows us to make sure we're not missing out on crucial tidbits of institutional knowledge or expertise.

The rule of two also gives us better depth: while a documented recommendation is made and archived, having two people involved means more staff will actually remember the recommendation, what it was, and why it was made. With the shades-of-grey approach that security often has to take to make business work, that knowledge can be critical.

MacOS Password Safe

I recently wrote about a community effort to fund a MacOS port of TrueCrypt. I was delighted to find out that another favorite program has already made the jump - Password Safe is now available for MacOS.

KeePass is also available for MacOS/Linux under X, and is an excellent alternative.

Tuesday, December 4, 2007

Solving the wrong problem...

For those of you who are not familiar with Gene Spafford from Purdue's CERIAS (the Center for Education and Research in Information Assurance and Security) or his blog, I would encourage you to check them both out. I've had the great pleasure of working with Spaf and one of his latest posts is absolutely on target, albeit from an altruistic standpoint.

In "Solving Some of the Wrong Problems" Spaf points out that most of our efforts in information security go toward treating the symptoms created by the very nature of the insecure products we and our companies use. Simply put, we know how to create more secure software, databases, networks, and systems in general - but neither we nor our vendors do it.

"We know how to prevent many of our security problems — least privilege, separation of privilege, minimization, type-safe languages, and the like. We have over 40 years of experience and research about good practice in building trustworthy software, but we aren’t using much of it.

Instead of building trustworthy systems (note — I’m not referring to making existing systems trustworthy, which I don’t think can succeed) we are spending our effort on intrusion detection to discover when our systems have been compromised..."

"I’m not trying to claim there aren’t worthwhile topics for open research — there are. I’m simply disheartened that we are not using so much of what we already know how to do, and continue to strive for patches and add-ons to make up for it...

Let’s start using what we know instead of continuing to patch the broken, unsecure, and dangerous infrastructure that we currently have. Will it be easy? No, but neither is quitting smoking! But the results are ultimately going to provide us some real benefit, if we can exert the requisite willpower."

It's a great read and don't blame me if you get sucked into reading for quite a while with some of his other posts. Speaking of which - check out his view on passwords. These both put my day of HIPAA policy review in perspective!

Monday, December 3, 2007

Wireless insecurity: keyboards

The folks at Hack A Day link to dreamlab's analysis of Microsoft Wireless 1000 and 2000 keyboards. There's a nice whitepaper in PDF form for further information. We've seen issues with wireless keyboards typing on other systems before, but this is one of the first public exposures of an "encrypted" keyboard link.

Disclosure of the type of encryption used for wireless devices is going to be a must, and strong lightweight encryption for devices will become ever more important. The same rules that you apply to wireless network security end up applying to your wireless devices.
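As a toy illustration of why lightweight "encryption" deserves scrutiny: the keyboard link reportedly obfuscates keystrokes with a very short XOR key. Assuming a single-byte XOR (my simplification of the whitepaper's findings), recovering the key from sniffed traffic is trivial whenever you can guess any fragment of plaintext:

```python
from typing import Optional

def xor_bytes(data: bytes, key: int) -> bytes:
    """XOR every byte with a one-byte key; encrypt and decrypt are the same op."""
    return bytes(b ^ key for b in data)

def brute_force_key(ciphertext: bytes, crib: bytes) -> Optional[int]:
    """Try all 256 possible keys; return the first whose plaintext contains
    the crib (a guessed fragment, e.g. a common word typed on the keyboard)."""
    for key in range(256):
        if crib in xor_bytes(ciphertext, key):
            return key
    return None

# The attacker only ever sees the "encrypted" radio traffic:
sniffed = xor_bytes(b"password: hunter2", 0x5A)
print(brute_force_key(sniffed, b"password"))  # recovers 90 (0x5A)
```

With a keyspace of 256, there is no need for cribs at all in practice - an attacker can simply try every key and eyeball the output.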

Monday, November 19, 2007

POC iPhone hack

Aside from the rant and the usual "Apple is secure" backlash, the Wired Blog recently ran an article about a proof-of-concept hack for the iPhone. This particular hack let the remote user listen in, record from the microphone, read stored emails, and even see the calling history on the iPhone. While the vulnerability used to exploit the iPhone has already been patched by Apple, the tide is coming in for this device, and it's only a matter of time before another unpatched exploit is used in the wild. So, which of your users has an iPhone?

Friday, November 9, 2007

All the buzz about Abe Torkelton: a followup

Since my post about "Abe Torkelton", a web form submission bot, we've had a lot of hits on the site and a few comments. I thought that a few of them were worth responding to here, as some of the details might be useful:

chuckn wrote:

"...[site] which doesn't have any linkage yet - so i'm not too sure how they found me."
The bot is likely either randomly or sequentially scanning IP space, or is checking registered hostnames via a registrar. In either case, even un-advertised systems may be probed. Since the bot appears to look for web submission forms, the only way to hide from it is to have some sort of human recognition system or pre-existing userID in place.

Thanks to Kate and Wes, we have IP addresses:
"The IP address I got was" resolves to a ThePlanet IP range, which is different from what I originally saw. Kate posted what she saw, which is an IP.

So we know that the Abe Torkelton bot is coming from multiple IPs. What we don't know is whether it is a tool, a bot, or the early stages of something more malicious. I have yet to see a report of the registrations being used for more than posting to a site.

How can you prevent it? Well, thus far the answer appears to be systems that require human input, such as CAPTCHAs. Since the IP address changes, you likely can't block it by IP, and blocking registrations containing derivations of "Abe Torkelton" will only save you until the name changes.
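CAPTCHAs are the heavyweight answer; a lighter trick that catches many form bots of this class is a honeypot field - an input hidden from humans via CSS that bots happily fill in. A minimal sketch of the server-side check (the field name and form layout are my invention, not anything specific to the Abe Torkelton bot):

```python
def looks_like_bot(form_data: dict) -> bool:
    """Reject any submission that filled the hidden honeypot field.
    Humans never see the field (it is hidden with CSS), so any value
    in it marks the submitter as an automated form filler."""
    return bool(form_data.get("website_url", "").strip())

human = {"name": "Kate", "comment": "Nice post", "website_url": ""}
bot = {"name": "Abe Torkelton123", "comment": "spam",
       "website_url": "http://example.com"}
print(looks_like_bot(human), looks_like_bot(bot))  # False True
```

The downside is the same as with name blocking: once bot authors learn to skip the field, it stops working - but unlike a CAPTCHA, it costs legitimate users nothing.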

I'll keep tracking this here, so keep throwing what you find into the comments. Thanks folks!

Wednesday, November 7, 2007

Forensic tools: WiebeTech HotPlug

Engadget has a short writeup of WiebeTech's HotPlug forensic system - in short, a tool for moving powered-on systems, either by powering their power strip or by injecting power into their power cord or outlet.

If you're doing police forensics, this looks like a tool to investigate!

Friday, November 2, 2007

It's the simple stuff really

You know, it's a little frustrating to have taken exams with hundreds of questions about the obscurities and specifics of information security just so that I can prove that I know my stuff. I get these little letters after my name that impress the HR drones. That’s right, I’m an information security professional! Banded together, we geeks can enjoy a lively conversation on encryption for data at rest across disparate systems...“If we make cipher text on a system in an ASCII character set then transport it to a system using EBCDIC...” and so we digress. Ah, the upper echelon of geekdom.

However, it’s the simple stuff that makes or breaks your information security program. In the news recently was a decent account of the “Khaki Bandit” and his ability to walk right into a corporate setting and fill his bag with their laptops – and then walk right back out. Better yet, there’s an account here where a reporter walked into a major mail sorting facility in the UK and took up a position by simply claiming he already worked there.

In both incidents, simple procedures could and would have stopped these individuals in their tracks. Sadly, neither was asked for proper ID nor escorted to their purported hosts. I’m not aware of any major losses or data breaches directly linked to these events; still, both had the potential to wreak havoc on an organization and its clientèle - not to mention its public image and brand trustworthiness.

Every organization should take a look at these stories and ask “what if?” Of course smaller organizations will have an easier time addressing unknown individuals while larger ones will struggle with adequate controls. However, it’s all about the simple controls – “Who are you? Who are you here to see? I’ll call to make sure they’re in. This person will escort you to them.” Funny, that reminds me of my introduction to Kerberos.

Friday, October 19, 2007

Interesting physical security government resources

Most of us deal with network and information security on a daily basis. At times, it can be both edifying and also daunting to read about what those on the front lines of physical security deal with.

Two of my favorites are the FBI list of concealed and hidden weapons, and the DEA's Microgram Bulletin, which lists concealed drugs - an interesting read for security analysts. Cocaine in hammock supports, other drugs in trailer hitches, and more.

If you've found an interesting resource like these, drop a link in the comments!

Thursday, October 18, 2007

MacOS 10.5 - Leopard security features

If you're a Mac user, you've probably been looking forward to MacOS 10.5. The official release feature list has been posted, and there are a few security standouts on the list:

  • Library randomization is included to make stack attacks more difficult.
  • Firewalling is more granular at the application behavior level. This is good news, and I'll be interested in trying it out to see how much control we will get.
  • Stronger disk encryption - AES 256 is now supported for those with high encryption requirements.
There are a number of other security features included. Time Machine, Apple's automatic backup and versioning system, may be the most interesting for security analysts, as it may preserve data that users or attackers believe they have removed from the system.

Thursday, October 11, 2007

Help fund a MacOS port of TrueCrypt

If you're like me, you use TrueCrypt for storing sensitive data. While MacOS has a capable encryption capability, having portable encrypted volumes is nice. There's a community effort to fund a port of TrueCrypt for MacOS - a great way to help make software you want available.

Make sure you read the Fundable FAQ - it is an interesting service, based on a 7% fee for funds gathered. Slashdot has discussed uses of Fundable for open source software projects, and other sites have discussed it as well.

Wednesday, October 10, 2007

Who is Abe Torkelton? - finding a webform bot

A recent web form hit made me curious, and a little bit of digging showed interesting behavior. Here's a bit about the observable anatomy of a form crawler bot going by the alias of "Abe Torkelton".

The bot has been tracked before, and apparently may show up as "Jorge Gonzales" leaving a phone number of 617-750-5939.

Hundreds of websites show in Google with hits from a registered user with a user string in the form:

Abe ???Torkelton????
The first three wildcards are letters, the last four are numbers - apparently part of a unique ID for the testing bot. Many more of these registrations can be seen by simply googling for either "Abe Torkelton" or "".
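Given that pattern - three letters between "Abe" and "Torkelton", four digits after - flagging these registrations in your own logs is a one-line regular expression. A sketch (the pattern is inferred from the registrations I observed, so adjust it as the bot mutates):

```python
import re

# Inferred from observed registrations: "Abe", three letters glued to
# "Torkelton", then a four-digit ID.
ABE_PATTERN = re.compile(r"Abe\s+[A-Za-z]{3}Torkelton\d{4}")

def is_torkelton_bot(username: str) -> bool:
    """True if a registered username matches the observed bot ID format."""
    return ABE_PATTERN.search(username) is not None

print(is_torkelton_bot("Abe xyzTorkelton1234"))  # True
print(is_torkelton_bot("Abe Torkelton"))         # False - no ID letters/digits
```

Run against your registration table or access logs, this at least tells you whether the bot has already visited - even if, as noted above, name-based blocking won't stop it for long.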

The domain itself is registered through a domain proxy service run by This effectively hides the identity of the person running the bot.

What is the data being used for? I don't know yet - but somebody is finding every web form that they can submit user data to across the Internet, and they're seeing how those websites respond. Check your logs folks - this one is interesting to see.


Thanks to comments on this post, I've posted an update.

Tuesday, October 9, 2007

SSN: when a unique ID isn't.

As regular readers know, I work in higher education. I switched employers earlier this year, and recently discovered that the switch led to some interesting issues with insurance. The description below is the best fit to what appears to have happened, however it is written with no inside technical confirmation.

The sequence appears to be:

  1. End employment at former employer A, with insurance provided by insurance company X.
  2. Start employment with new employer B, employer B also uses insurance company X.
  3. Employer B insurance starts, and is identified by SSN to company X.
  4. Employer A carries my insurance through for a few weeks, then sends notice to the same insurer to terminate insurance for my SSN.
This led to my insurance being invalid, despite my current employer - B, believing that it was active. It also points to some interesting flaws behind the scenes.
  • A trusted entity can end insurance for a given SSN.
  • A trusted entity can declare itself authoritative, or is by default authoritative, for a given SSN.
  • Crossovers are not flagged for activity - if employer A makes a change, then employer B makes a change, then A makes a change, this is not caught and investigated.
  • There is no regular feed that updates this information.
  • SSNs are used as unique IDs for the insurance - and even if you select a non-SSN ID (which the insurer offers) they appear to still be the primary key for your account.
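The crossover flaw above is the cheapest one to fix: if the insurer kept even a minimal change history per SSN, an A-then-B-then-A sequence of updates could be flagged for review instead of silently applied. A hedged sketch of the idea (the data model is invented for illustration, not how any insurer actually stores this):

```python
def flags_crossover(history):
    """history: list of (source, action) updates for one SSN, oldest first.
    Returns True when the update source alternates A -> B -> A, which
    should trigger investigation rather than a silent last-write-wins."""
    sources = [source for source, _action in history]
    for i in range(len(sources) - 2):
        if sources[i] != sources[i + 1] and sources[i] == sources[i + 2]:
            return True
    return False

# The sequence described above: B enrolls me, then A terminates the same SSN.
history = [("employer_A", "enroll"),
           ("employer_B", "enroll"),
           ("employer_A", "terminate")]
print(flags_crossover(history))  # True - A's late terminate deserves review
```

None of this would be necessary if the insurer used its own surrogate member ID as the primary key and treated the SSN as just another attribute.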

Monday, October 1, 2007

Reconnaissance: LinkedIn and social engineering

I attended Ed Skoudis's SANS 504 track in Las Vegas last week, and picked up a lot of useful tidbits. One of the more interesting offhand comments Ed made was about using LinkedIn to assess what vendors a given organization is buying from based on their recent link adds.

It makes for a fun exercise, and could potentially be useful when doing recon of an organization for penetration testing. A quick look at my own contacts lends some credence to the idea, and given a bit of other research, a LinkedIn survey seems like a clever method to get a few extra bits of information.

Does this mean that using professional social networking should be banned? Probably not, but it is a great reminder of the level of detail an intelligent aggressor can gather given a bit of cleverness and time.

Thursday, September 20, 2007

CSRF handling and SunGard Banner

Paul Asadoorian from OSHEAN published a whitepaper on CSRF vulnerabilities in SunGard Banner - an ERP system commonly used in higher education. The whitepaper is a useful read for developers who work with Banner, but would also be useful background material for any programmer who works with authenticated web sessions. Very few applications that I've seen account for CSRF, and getting the techniques described in the paper implemented as part of your standard framework could save you a lot of pain in the future.
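For those unfamiliar with the standard defense: the usual fix is a synchronizer token - embed a per-session random value in every form and reject any request that doesn't echo it back, since a cross-site attacker can't read it. A minimal stdlib sketch (my own illustration, not Banner's or the paper's actual code):

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate an unguessable token once per session and stash it
    server-side; the same value goes into each form as a hidden field."""
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(32)
    return session["csrf_token"]

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Constant-time compare so the check itself doesn't leak the token."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
print(verify_csrf_token(session, token))     # True - legitimate form post
print(verify_csrf_token(session, "forged"))  # False - cross-site request
```

Framework-level enforcement is the point: wired in once, every state-changing form gets the check, instead of relying on each developer to remember it.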

Tuesday, September 4, 2007

Emergency notification systems - SMS for emergencies

The College of Notre Dame (note, that's not the University of Notre Dame) used their e2Campus SMS based emergency alert system recently. e2Campus provides bulk SMS messaging (and other features) for emergencies - a similar system is the NTI group's Connect-ED messaging service. These services provide emergency contact via SMS, email, and phone messages - and may provide additional options. This goes beyond what many of us were used to with campus tornado sirens having special tones for other emergencies - these services can provide news briefs in a timely manner to large groups.

Many colleges are starting to adopt these services as a useful way of contacting their cell-phone carrying student base. It is interesting to note that a percentage of students did not receive the message - while the article says that this percentage is low, it does point out that SMS messaging is not a total coverage solution and should be only a part of a comprehensive emergency communication system. With that said, notifying your constituents with enough detail to do something useful is an amazing tool to have in an emergency.

Would a system like this be useful to you? Quite possibly, depending on your user demographics and what your communication needs are.

A few caveats and comments up front:

  1. A test should be done to ensure that information is properly entered - in the case of a University, this would probably need to be done each semester.
  2. Rules need to be in place to control the use of the system - like any communication system, if the SMS capability is co-opted for non-emergency use it will be more easily ignored. The backlash from spam from the emergency system would likely be massive.
  3. Users must be made aware that the system will not be a 100% solution - cell phones may not always receive SMS messages. Use your alternate information sources as well.
  4. Spoofing may be possible via a VOIP or other system - having a known sender is still useful, but not a guarantee of validity.
  5. Control expectations, and let your users know how and why you will contact them.

Thursday, August 30, 2007

SSL testing: Foundstone's SSLDigger

If you're required to be PCI-DSS compliant, or you just want to check the SSL settings for your site, Foundstone provides a great free tool: SSLDigger. It checks certificate details, encryption and cipher settings, and is generally a good way to double check your SSL setup. Note that it does require the .NET package to be installed first.
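You can do a rough version of part of the same check with nothing but Python's ssl module: build a client context with a sensible floor and inspect which cipher suites it would actually offer. A sketch of the idea (SSLDigger probes the server over the network; this only shows the local policy side of the handshake):

```python
import ssl

# A client context with a modern floor: no SSLv2/v3, nothing below TLS 1.2.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# Which suites would this context offer in a handshake?
enabled = [cipher["name"] for cipher in ctx.get_ciphers()]
weak = [name for name in enabled if "RC4" in name or "NULL" in name]

print(len(enabled), "ciphers enabled;", weak or "no RC4/NULL suites")
```

The server side is what actually matters for PCI-DSS, of course - which is why a probing tool like SSLDigger (or a scan of your own) is still needed to see what the server will accept.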

Wednesday, August 29, 2007

Email bomb threats

The Indiana Daily Student is carrying an article about an email bomb threat to Indiana University's Bryan Hall. The University of Iowa received a similar threat and noted that other universities are receiving these threats as well. With emailed bomb threats spreading, it is useful to note that each email was sent with specific details - to a dean and threatening Bryan Hall in the case of IU, and threatening the library at UI. In addition, the article notes that the email was sent through an anonymous service. This is the modern-day equivalent of phoning in your bomb threat from a payphone.

I've seen old-style phone-in bomb threats shut down classes and campus buildings for hours at a time. With the heightened security response from many schools in a post-VA Tech mode, emailed bomb threats have a significant chance of disrupting school activities. As the school year starts, it will be interesting to see if this is just the beginning of a trend. Hopefully this won't become widespread - targeted availability attacks like this can wreak havoc on campus schedules and events.

Now is the time to review your emergency communications plan - can you communicate effectively to your staff, students, and other community members? Do you have evacuation plans for buildings?

Monday, August 27, 2007

Dial an Identity Thief: a story from the front lines

A co-worker was kind enough to share his story of attempted identity theft today:

I received an interesting phone call on my office phone this morning.

The caller claimed to be with the 'recovery department' of a company in New York. The caller spoke English very poorly and the connection was noisy, so I never did figure out the name of the company he supposedly represented.

The caller claimed to be calling about $650 that was supposedly withdrawn from my account (not sure what kind of account, or where) several years ago, apparently without my permission. He wanted to confirm some information, so he could return the money to me. He was difficult to understand, but I am fairly certain that he said he needed my credit card number.

We went around and around for several minutes, as I tried to figure out who he supposedly represented and how much information he already had. I finally told him that if he knew how to reach me by phone, he should also know how to reach me by mail, and he should simply send me a check.

Callerid on my phone reported that the call originated from 1234567890. That number is completely bogus. The caller was probably using an internet phone; it is relatively easy to fake callerid with an internet phone.
My thanks to the co-worker for giving permission to post his story. The moral of the story here is to always ask questions, to not give up information without verification, and to always know the identity of your callers. How would you respond to a call like this? What would you do if the callerID had matched a local bank instead of a number you didn't recognize?

I normally advise people to call the company back - ask for a number that you can verify on their website and call that number. If they can't provide that information, ask them to send you more information using your contact information on file. And, as always, the FTC identity theft website is a great resource. While you're at it, you may also want to check out the Privacy Rights Clearinghouse.

Thursday, August 23, 2007

So you want an IT job?

Readers here know that Richard Bejtlich's Taosecurity is a favorite read for me. On Tuesday, he posted "What Hackers Learn that the Rest of Us Don't", and included a bit of commentary about his views for hiring IT staff.

I've had the pleasure of working with IT staff from a variety of backgrounds, from the computer lab IT guy who was a fine arts major, to the gifted Windows admin with a marketing background. They have taught me that a CS degree or an MIS degree frequently isn't the best indicator of suitability for the job. One infamous quote from a CS undergrad I knew was "I don't care how the computer works, I just program for it!". That's the same as "I don't care about edge cases, it works most of the time!", or other quotes that scare security folks every time we hear them.

As Bejtlich points out, native curiosity and interest - paying attention to the edge cases and the little details are some of the things that can make a hacker successful. The same goes for hiring an IT professional. During the past few years, I've developed a short list of things that I look for when hiring:

  • Curiosity - if I mention a new technology, technique, or other area of interest, does the candidate ask questions, and do they absorb knowledge?
  • Passion - not everybody can go home and play with things for the entire night, but do they actively enjoy doing what they do? Do they want to do it? I tend to ask candidates what their home network looks like, and how they're securing it. I ask what they'd like to play with, and what opportunities they've had and what they've enjoyed.
  • Laziness - not the bad kind, but the right kind. I look for someone who does it right once, rather than badly over and over again.
  • Active learning - are they expanding their knowledge, either formally via courses and training, or informally by tinkering?
  • Active pursuit of knowledge. Far too many candidates come in who read a security magazine once a month to stay in touch. That's not a useful way of staying up to date in the modern security world. Ask your candidate what they read to stay up to date, and what mailing lists they're subscribed to. I look for depth and breadth of knowledge seeking.
  • Personality - can they make and take a joke? Can they deal with users? How do they come across?
So, what do you look for in an IT candidate? And how does a security professional differ?

Tuesday, August 21, 2007

Your password must be at least 18770 characters long

Courtesy of Digg, the next time your users complain about your egregious security policies, point out this handy Microsoft KB article regarding Windows 2000 authentication against an MIT Kerberos domain.

The specific message returned is:

"Your password must be at least 18770 characters and cannot repeat any of your previous 30689 passwords. Please type a different password. Type a password that meets these requirements in both text boxes."
Who says that 30-day password changes and 8-character minimums are so bad? This is almost as much fun as Compaq's "Where do I find the any key" article.

Monday, August 20, 2007

Snuggly the Security Bear

Martin McKeay's Network Security Blog has a link to Mark Fiore's Flash animated Snuggly the Security Bear in Aye Spy. It is an amusing commentary on the current state of wiretap laws. Send this one on to your security industry friends!

Friday, August 10, 2007

365Main - an example of great disaster recovery communications

Even if you weren't affected by the 365Main power outage, you should read the status update posted by their president.

There are a few things to note here:

  • The entire event is broken down with technical detail.
  • Details of problem solving, maintenance, and testing are all available and relatively transparent - thus providing customers with detail about the event.
  • The tone is professional and communicates issues and events clearly.
  • Customers are reminded of what recompense their contract provides for them.
  • There is a clearly explained troubleshooting process.
  • There is a clearly explained plan to prevent future issues.
  • The data is being made available to other data centers to help prevent similar issues elsewhere - thus giving back to the community.
Also worth noting is that the status has been regularly updated, and that each update includes current information and future steps.

If you are ever in a recovery situation, this is a great example of after action communication to follow.

On the technical side - remember that simply having backup power may not be enough. Many of the customers who had power interrupted would have continued to function if they had dedicated UPS units - but without power to their network uplinks, they might not have been able to reach the outside world, even with power to the machines themselves.

Thursday, August 9, 2007

Certifications and pay

A recent Computerworld article points to an increase in salary for information security practitioners with certifications. Despite questions about the usefulness of some certifications - Bejtlich's take on the CISSP is a great example - they're still required or desired for many positions. Despite views from some in the industry about it, the article notes that the CISSP is amongst the most valuable certifications - at least from a pay perspective:

"Among the certification programs commanding the highest premiums were Certified Information Systems Security Professional (CISSP), Certified Information Systems Auditor (CISA) and Certified Information Security Manager (CISM)"
How does this negative view of the CISSP from respected industry folks like Thomas Ptacek and Richard Bejtlich fit with a high value for the CISSP? For one, more senior IT staffers are getting the CISSP. The oft-maligned "mile wide, inch deep" coverage is well suited to the broad view of management. Similarly, the CISSP's experience requirement helps ensure, though doesn't guarantee, more time in the field - and thus one would expect a correlation with higher wages.

More technical certifications, such as the SANS GIAC family - GCIH and the like - are more likely to be found in the hands of technically oriented professionals. The value of the certificates is definitely there, but the correlation with higher wages may not be as easy to show - fewer senior managers and C-level positions are likely to hold the SANS technical certifications.

Where does that leave us as professionals? Well, for one, the government is requiring more certifications. Per the article, there is a "Department of Defense directive which requires over 100,000 security professionals in certain specific job roles to be certified within a five year period," which will drive certification for many in the public sector. Second, compliance requirements dealing with PCI, HIPAA, FERPA, the GLBA, SOX, and other standards mean that companies are looking for security staffers - and certifications are an easy filter for HR.

Given those trends, a certification may just be a good route to a few dollars more on your paycheck, or into a new job - if your friends give you a hard time, tell them to think of it as analyzing and exploiting the system.

Google's new Case The Joint program

Google's Streetview provides a useful service - it shows you pictures of where you're going. Despite privacy questions, there is definitely a benefit to knowing what the building you're looking for looks like.

Now Google is introducing a program called the Google Local Business Referral program. You can become a representative and...

As a Google Business Referral Representative, you'll visit local businesses to collect information (such as hours of operation, types of payment accepted, etc.) for Google Maps, and tell them about Google Maps and Google AdWords. You'll also take a few digital photos of the business that will appear on the Google Maps listing along with the business information.
(Emphasis mine)

If you've ever done physical security evaluations, you know that having a good excuse to get in and take pictures is very handy. Well, here's a great opportunity - and you can get paid for it afterwards.

Is that a bit paranoid? Possibly. Will we see a rash of people noting that you can case the premises using Google's data? Also possible. The important thing is - does it increase risk? Yes - for some businesses, such as banks and other high security locations that don't want the general public checking where their security cameras are via a Google search. The same risks exist with a camera-phone-carrying public, but that requires physically visiting the location.

Just be careful when someone knocks on your door and mentions that they're with Google Houseview...

Tuesday, July 31, 2007

Making TFTP portable

If you're a Windows user, and you need a portable TFTP client, 3Com's 3CDaemon might answer your needs. It's small, doesn't install anything into your registry, and can be made portable by simply moving it to your thumbdrive and changing the default file location in the configuration file.

A quick, simple answer when you need to fix a PIX, or PXE boot a host.

While you're at it, check out the Portable Apps website which is an indispensable resource for building a useful portable apps kit.

Friday, June 22, 2007

Physical Security - the unlikely does happen

Police in Tulsa are chasing a ring of criminals who are conducting large scale thefts that most security folks would rate low on the probability scale.

Their most recent heist involved rappelling from the ceiling of a Best Buy to steal a large safe and electronics. They even disabled the alarm system. In other thefts, they've stolen a semi-trailer sized load of electronics, and cut through the side of a building.

I've seen drywall cut through to get into an otherwise well secured room, and I've seen datacenters with no security cameras, easy dock access, and a back door latch easily tripped with a credit card or screwdriver. While we're not used to physical theft, it is a fact of life - if you have valuable, portable items, or even not so portable items, there is a risk.

If you have valuable merchandise, or if your data center has business critical data, you might want to talk to your management about the unlikely, but possible...

Thursday, June 21, 2007

What if everybody used your SSN?

The story starts like this:

"In 1938, wallet manufacturer the E. H. Ferree company in Lockport, New York decided to promote its product by showing how a Social Security card would fit into its wallets. A sample card, used for display purposes, was inserted in each wallet. Company Vice President and Treasurer Douglas Patterson thought it would be a clever idea to use the actual SSN of his secretary, Mrs. Hilda Schrader Whitcher."
Read the rest on the Social Security Administration's website. If you deal with user IDs, or Social Security numbers, this one will make you wince...and smile.

Wednesday, June 20, 2007

Wipe your devices

Like many IT folks, I've picked up used systems and media that contained data.

In my case, I've had everything from departmental mail servers to personal systems containing term papers and billing information pass through my hands. In each case, I carefully wiped the machine or drive before doing anything else with it.

What happens when it goes the other way? Dale Glass's network camera is a great example. He set up the camera to email him when it detected motion, then returned the camera to the retailer. The retailer didn't wipe it, the family that bought it didn't wipe the configuration, and Mr. Glass received email with video of the family.

Here's your reminder to wipe devices when they leave your care - and to check new ones when they come in!

Monday, June 18, 2007

Web application security test software reviews

Jordan Wiens is writing a series of "rolling reviews" on web application security testing software. First up on his plate is SPI Dynamics' WebInspect. You can find the full article here. He points out a high false positive rate, and that it had real issues with Ajax - neither of which surprises me. Any analyst who has been doing system vulnerability scans knows that false positives were (and at times still are) a fact of life - seeing them come up with web applications isn't a real surprise. In many ways, web application security testing feels like vulnerability scanning did a few years ago.

I'll be interested to see what direction the rest of the reviews take - most of the big names in web penetration and security testing still do a lot of manual work to inspect applications. In a business environment, particularly in a budget, skill, and time constrained environment, that may not work well.

Those limitations make the availability - and the accuracy and depth of tools like WebInspect - absolutely critical to custom application development processes. More and more organizations are adding security scans and testing into their development cycle.

One of my current goals is to find a way to easily put a good testing tool into the hands of developers that I work with. I'd like to see them able to take a good first pass at their own applications before running it past security for review - system administrators already have the ability to run vulnerability scans against their new servers and workstations, and that model has been helpful and is worth repeating.

Is that a complete solution to web application security? Definitely not. Will it put us in a better place than we were before we added testing to the development lifecycle? Definitely.

Wednesday, May 30, 2007

Public access

Security picture of the day from a friend - or is that insecurity? Click it to magnify - yes, those are a user ID and password pair on the monitor. If you've got a great shot of a bad security practice, send it in!

Thursday, May 24, 2007

Free lunch: Trust models that don't work

Both business and pleasure travelers are used to seeing bills at hotel restaurants that let you charge a meal to your room simply by writing down your room number and name. Most of us accept the practice without thinking, though we may wonder how often it is exploited.

In my case, it was exploited on a recent stay at an upscale hotel on the west coast during a conference.

My normal departure morning routine is to check the paper bill most hotels now slide under your door the morning of checkout. In my still sleepy daze, I glanced at the bill, expecting it to show a zero balance...

It carried a total of over $300 from the hotel restaurant, and a charge to my credit card for that amount.

This obviously wasn't right - I hadn't eaten in the hotel restaurant, and in fact, all the meals I had eaten had been provided as part of the conference, or by friends off site. Something odd was going on. As with most people, I first thought that there was likely a billing mistake, although the security analyst side of my brain started to ponder how a $300 charge had popped up.

A trip down to the desk and a chat with the clerk changed my initial reaction. They did, in fact, have a receipt with my name, a signature, and my room number all filled out - in handwriting that wasn't mine, at a time when I was in the conference, and with food for at least four people.

I would have remembered the crab and lobster, let alone the rest of the $300 of food and drinks that were signed for on that receipt.

In the end, the hotel handled it with reasonable aplomb, but I was stunned to see that there was absolutely no verification of the identity of people signing for large bills. This places the hotel itself on the losing end of transactions. If they had left the charge, I would have simply disputed it. As it was, they now have to investigate how someone got my name and room number.

A few simple controls could have prevented this:

  • Check ID for anything charged to a room number.
  • Allow people to elect to not allow anything beyond the room to be charged to their credit cards at check-in.
  • Set a maximum charge limit, either by hotel policy, or for the person who pays for the room.
There is of course the danger of upsetting customers with new requirements like this, particularly at an upscale hotel where patrons are used to the service. Thus, some hotels would find that the optional security approach may be more acceptable to their patronage.

The other interesting thing about the incident is that in talking with hotel staff after the fact, one staff member had a very hard time believing that anybody would take advantage of this loophole. While I can understand that hotel staff members would generally not do this for fear of losing their jobs, I wasn't horribly surprised to find out that someone would try to take advantage of the loophole itself. In many cases, the bill would have been paid for using a corporate card, or possibly by a sponsor, and I wouldn't have ever noticed the discrepancy. The fact that the bill was mine meant that detection was much easier.

Next time you stay at a hotel, see how many times you are given the option of charging to your room - and how easily you can get access to the first initial, last name, and room number of anybody else you run into.

Wednesday, May 16, 2007

Open proxy honeypots

Most of us probably don't run open proxies ourselves - but if you're a higher education security analyst, you probably have at least one on campus, even if you'd prefer not to. That means that your threats may come from inside your border, and worse, that it may be open on purpose.

What do they get used for? Well, a great way to find out is to make an open proxy honeypot.

What can you do with an open proxy acting as a honeypot? Here's a great example - Ryan Barnett from the Web Application Security Consortium has a very interesting presentation available about traffic they observed through a proxy honeypot. It is well worth the read.
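The core idea is simple enough to sketch: listen like a proxy, log everything, forward nothing. The following Python sketch is purely illustrative - the class name, port handling, and in-memory log are my own invention, and a real deployment (like the WASC honeypot) would forward selectively and log far more.

```python
import http.server
import threading

class ProxyHoneypot(http.server.BaseHTTPRequestHandler):
    """Accepts proxy-style requests, records them, and returns an empty 200.
    Nothing is ever forwarded - the point is only to see what clients try."""
    log = []  # (client_ip, method, target) tuples, shared across requests

    def do_GET(self):
        # For a true proxy request, self.path is the absolute target URL.
        type(self).log.append((self.client_address[0], self.command, self.path))
        self.send_response(200)
        self.send_header("Content-Length", "0")
        self.end_headers()

    do_POST = do_CONNECT = do_GET  # treat other verbs the same way

    def log_message(self, *args):
        pass  # keep stderr quiet; we keep our own log instead

def serve(port):
    """Start the honeypot on 127.0.0.1:port in a background thread."""
    server = http.server.HTTPServer(("127.0.0.1", port), ProxyHoneypot)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Pointing any HTTP client at it then populates `ProxyHoneypot.log` with the client address, method, and requested target - which is exactly the data the presentation above analyzes at scale.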

Most of us are headed down a road to securing the business side of our institutions, but the academic and student sides are often more problematic. We'll continue to see open proxies, both on our networks, and in use by our users. The good news is that the next time someone asks you about the dangers of open proxies, you'll have an excellent case study in hand.

Tuesday, May 15, 2007

Citysec - informal infosec group meetings

I haven't been to a Chisec meeting yet, but they sound like a great idea. There's also Indysec if you're a bit farther south.

Check out more on the Citysec website for the rest of the metro area infosec meetings - if there isn't one near you, maybe it is time to found a group!

Monday, May 14, 2007

RSnake and the phisherman

RSnake has a very interesting interview with a phisher on his blog.

There are a number of obviously interesting points - the high level of password re-use, the price that accounts can get, and that the anti-phishing technologies are starting to become annoying to the professional thief. I'm sure I'll be seeing the blog post quoted in more than one Powerpoint presentation this year.

What stood out to me, however, is why lithium got started - he saw an opportunity in the spam email his parents were receiving and thought that he could do it better. That's how many entrepreneurs get started, and is, in many ways, how technical folks tend to think. This creates an arms race for technical superiority.

Where does RSnake's article leave us? I think it reminds us to remember that a lot of today's hacking world is built on a profit motive. While a certain crowd is definitely still in it for the fame, the more serious threats are from people who make their living stealing cycles, identities, and money.

Or, to put it another way...they get paid to do this. Is your organization treating external threats like they are professionals?

Sunday, May 13, 2007

"Proprietary" encryption

Every security professional I know has heard the dreaded phrase "we use proprietary encryption" at least once in their career. Here are some of the best lines I've heard.

One vendor cited their "64 bit encryption plus three extra bits of proprietary security". Yes, they added three bits. Why stop there? Well, that was enough, right? They really, really hyped those extra bits - after all, three bits is better than two bit encryption.

Another vendor offered "proprietary encryption technologies that our programmers assure us are the very best in the industry" - however, they were completely uninterested in peer review, and would not document in any detail how their encryption was superior.

My all time favorite proprietary encryption line? "We didn't use the standards based encryption libraries included in our IDE because our programmers wrote a far superior 56 bit encryption scheme".

We didn't buy that product.

When vendors throw you lines like these, it is handy to have an acceptable encryption policy like the SANS example.

What are your best "proprietary encryption" stories?

Saturday, May 12, 2007

If only tokens were cheap...

Token based authentication is something that every large organization I've worked with has considered, and often deployed. The problem is that deployments have typically been very small scale, limited to a tiny subset of users. In organizations of every size, token cost has been a major factor - constraining the size of the rollout, or ruling it out entirely.

The good news is that token cost won't be the controlling factor for small and mid-size installations much longer if Entrust's IdentityGuard Mini Token is any indicator. The list price for the token is $5. Yes, that's $5, on a one-off purchase, not in ridiculously large quantities. While I've seen the occasional note about some large purchasers pulling in pricing like this on existing tokens, you just haven't been able to buy tokens in smaller quantities for anything approaching $5.

That doesn't change the cost of the back end software, nor the cost of administrative time and implementation time. The good news is that if you were holding back because $35 tokens were too expensive to roll out, or because replacing them would cost too much when your students lost them, now they cost less than lunch - and they're even waterproof.

The gotcha? The tokens aren't available yet. The website takes you to a contact form. I'll take the wait, if this is the shape of things to come. Tokens that are priced reasonably enough to deploy system wide can help make password change policies much less onerous, and improve security if properly implemented. That's a cheap security improvement.

Thursday, May 3, 2007

The importance of secondary routes

Network World is reporting that a fire caused when a homeless man threw a lit cigarette onto a mattress under a bridge has taken down Internet2 access between Boston and New York.

Yes. A burning mattress took down an important link for a major high speed network. No, this probably wasn't specifically covered in their design and operations risk assessment.

While high speed networks are expensive, this does demonstrate the vulnerability that purpose built dedicated links suffer from. If you only have one link, either because of cost or because of specialization, you need to have plans in place for when it goes down. Copper and fiber aren't invulnerable, and even when you think you're safe you can still get hit. Just when you think it is safe to cross your physical paths, someone will go dig there with a backhoe and cut your fiber.

Two stories come to my mind in which relatively unlikely events threatened or took down Internet access.

In the first, a semi hauling a backhoe went underneath a bridge that was lower than the backhoe's retracted and stored arm. The high speed impact with the bridge severed the fiber running underneath it, cutting off Internet access to a large chunk of Michigan.

In the second, a crew working on a sewer line hit a gas line. In the process of attempting to fix the gas line, they dug and hit a fiber conduit. Fortunately, the slack in the fiber allowed it to pull just enough to remain operational, but a series of unfortunate events might have resulted in loss of Internet access for a major institution - in addition to a pretty nightmarish repair scenario with fiber and gas lines both broken.

Lessons learned? Always ask about single points of failure, identify alternate routes if possible and financially reasonable, and make sure you have an outage handling and recovery plan.

Phishes and loathes...

This post by Pascal Meunier over at CERIAS is well worth reading if you use a Visa credit card to make purchases online and the vendor uses the "Verified by Visa" program. The basic problem is that Visa's program presents itself like more of a phishing attempt than a legitimate fraud prevention tool. Worse than that, I think, is the fundamental implementation problems that Pascal notes in the update at the bottom of the article. Does anyone even test this stuff?

On the subject of phishing, it seems that banks and credit card companies still don't get it. I can find countless examples of unexpected emails from my banks that, from what I can tell, are completely legitimate, but are full of "click here" and "login" links - the kind that trains not-so-careful users to fall for phishing attacks in the first place. Maybe it's time I jumped on the ASCII ribbon campaign...

Tuesday, May 1, 2007

Erasing drives: This drive will self destruct in 1...2...3...

Drive wiping techniques are a frequent point of discussion in the information security community. Most IT staffers know about DBAN and there are plenty of both freeware and commercial tools out there.

If you need something different - either because the drive isn't in a system, or because you want to centralize the process - there are dedicated drive wiping systems like Ensconce's Digital Shredder. On the other end of the drive wiping spectrum are degaussers like these, which are great for wiping drives that no longer work but still contain data, or drives you don't have interfaces for on your wiping system.

Yes, sometimes somebody shows up with a pile of Fibre Channel drives, or a Bernoulli disk, or some other form of media that you don't have a handy USB adapter for.

You can also have your drives physically destroyed - shredding and destruction companies will do this and will provide a receipt to demonstrate that they've been properly destroyed.

And then there's the drive that Ensconce Data Technology is promising. Sign me up for a self destructing hard drive!

How do you choose what is appropriate for your organization? As always, ask questions.

  1. Do you have to follow any legal or statutory requirements? If so, make sure your strategy satisfies them.
  2. Do you have internal policy requirements? If not, why don't you?
  3. What are the security requirements of your data? Check your data handling guidelines.
  4. What would exposed data cost your organization? Data recovery tools are available, and are easy enough to use that even those without significant technical knowledge can recover data from a drive if it hasn't been securely wiped.
With these answers in hand, you should be able to create policy - if you don't already have it, then create procedures and select appropriate technologies to support them.

Sunday, April 29, 2007

High value higher ed and compromise profiles

Dave G from Matasano Chargen posted about "Mac Punditry and the Office Paradox". My higher ed focused ears perked up when he asked "How are educational environments high value targets?".

Higher ed security folks know that we're major targets - and attractive targets too. Here's why:

  1. Open networks - large public IP spaces, often with relatively loose border controls.
  2. Open systems - frequently systems are not centrally supported, and there can be a wide separation between system security postures across the network.
  3. High bandwidth - universities have bandwidth that many commercial entities and ISPs would be jealous of. Internet2 connections are typically at least a gigabit, and some universities connect to things like the TeraGrid's 10 - 40 gig research backbone or other specialized high speed networks.
  4. High value data - Social Security numbers, research data, personal data on students, faculty, staff, and donors, and credit card operations are all on the list for most higher ed institutions.
Historically, educational institutions have had relatively open borders. This has changed over the past few years with universities implementing border firewalls and other protections. Universities are still in a somewhat unique position - many have residential networks that act as ISPs for their students and have standards for academic and research freedoms that make a corporate style security architecture difficult, if not impossible.

Watching vendors and other security professionals react to statements along the lines of "well, no, we can't presume that it will be firewall protected" or "we have a class B, and everything we own is on a public IP" can be fun if you enjoy looks of sheer terror.

With attractive and relatively exposed systems - and a population that is often less formally controlled than those in the corporate world (at least in similarly sized institutions), compromises will occur. What do they look like?

Higher education security staff tend to see three attack profiles on systems - and I think that these three attack profiles show the three types of value that compromised systems have for attackers. They are:

1. Zombies - low value systems with no real useful data, smaller hard drives, and low to middling bandwidth. Often these are student machines on residential networks, or standard employee desktops. These systems tend to participate in botnets, and are valuable simply because of numbers and the fact that many are either not detected, or are not cleaned up properly if they are.

2. Storage and bandwidth - these systems are characterized by larger hard drives and bigger pipes. Universities often have relatively large pipes, whether via an Internet2 link, a research network link, or simply a large commodity Internet upstream connection. When an attacker who actually cares about system profiles compromises a host with a bigger drive or more bandwidth, that host ends up used as a storage and distribution point.

3. Jumping off points - every organization has hosts that have sensitive data or that can be used to move through other systems. Your local IT support staff machines are a great jumping off point, and so is a dean's machine, or a business office system. Normally, you will want to focus your forensic efforts on these systems, as they are more likely to have sensitive data - or the keys to the kingdom. A single critical IT worker's machine can let attacks romp through your infrastructure. Most hacks of these machines tend to be detected via system monitoring software - AV and anti-spyware, via user notification, or via one of the methods above.

So how do you deal with these compromise profiles? Your detection and response tactics will likely vary due to your level of control and access.

Zombies can often be found by monitoring outbound IRC connections or by using netflows and other monitoring technologies to keep track of known bad hosts on the outside. Most normal systems on campus won't be talking to your friendly neighborhood C&C. Since these systems are often outside the normal support infrastructure for business or academic computing organizations, you will have to work with your residential network support staff or enforce your AUP (you do have an acceptable use policy, don't you?).

Storage and bandwidth oriented hacks may not be reporting into a central C&C - although more and more do dial home. Use your border flows to check for hosts that have very high bandwidth usage - a good tactic is to check your top 10 or top 20 hosts on a daily basis, then filter out the known good hosts, check into the rest, rinse, and repeat. Yes, that public FTP mirror will be high, but why is a grad student's desktop machine in the top 10 for outbound traffic?
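The daily top-talkers tactic is easy to automate once your flow data is parsed. Here's a minimal Python sketch under some stated assumptions: flow records have already been reduced to (host, bytes) pairs, and the hosts, byte counts, and function name are all invented for illustration.

```python
from collections import Counter

def top_talkers(flows, known_good, n=10):
    """Sum outbound bytes per host, drop known-good hosts, return the top n."""
    totals = Counter()
    for host, nbytes in flows:
        totals[host] += nbytes
    for host in known_good:
        totals.pop(host, None)  # filter out expected heavy hitters
    return totals.most_common(n)

# Hypothetical day of flow data:
flows = [
    ("10.0.0.5", 900_000_000),   # public FTP mirror - expected to be high
    ("10.0.7.42", 400_000_000),  # grad student desktop - why so high?
    ("10.0.1.9", 50_000_000),
    ("10.0.7.42", 200_000_000),
]
known_good = {"10.0.0.5"}
print(top_talkers(flows, known_good, n=2))
# -> [('10.0.7.42', 600000000), ('10.0.1.9', 50000000)]
```

The "rinse and repeat" part is maintaining `known_good`: each day's vetted hosts get added to the allowlist, so the unexplained ones stand out.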

Jumping off points are the security analyst's bad day - follow your IR procedures, and make sure you understand what the system had on it, and what access the people who used the system have to other resources. The chain can be long, but following it can help ensure that further compromises don't occur.

The final analysis - at least from my perspective - is that higher ed does have high value hosts. Educational institutions cover the spectrum of sensitive data, from research data to SSNs and credit card data. In addition, most are highly concerned about their image, meaning that a data breach can cause significant reputational damage even if financial losses are small.

Where does that leave us? I'll write more about some of the directions that universities are moving in to handle both intrusion and extrusion detection in a coming post.

Thursday, April 26, 2007

What's old is new again: email extortion and urban legends

Much like fashion, the Internet makes old things new again at a startling pace. Dark Reading is carrying an article courtesy of Information Week about a "new" email scam - assassins have been hired to kill you, and if you bribe them, they won't. Unfortunately...this isn't really a new scam. In fact, Snopes has references back to 2006, and an FBI recommendation from December of 2006 - which the article does note. What is newer is that the mailing lists harvested for it seem to target professionals. Not quite spearphishing, but definitely more targeted than your daily allotment of prescription drug and enhancement spam.

Moral of the story? Keep Snopes, ScamBusters, and the CIAC's Hoaxbusters sites handy, and don't panic. If you do know somebody who has succumbed to the scam, point them to law enforcement and the Internet Crime Complaint Center (IC3).

Wednesday, April 25, 2007

If you're still shopping here, do you mind if we lose your data again?

There's a post at Emergent Chaos about how many customers you may lose if you lose their data more than once. I recently asked if you would still shop at a company that lost your data. These statistics are interesting, as they show that, at least in some populations, there is a direct churn effect from repeated data loss. The question remains: what about institutions where you're not a customer, but instead belong to a population that they serve?

How is that different? Well, there are organizations like universities and the VA that will retain records on you long after a bank, credit card company, or other institution would (hopefully) have destroyed your data. You will always be a graduate of your alma mater, and you will always be a veteran - and they will retain your data. Another group we have little choice in dealing with is the credit monitoring and reporting agencies - a breach at one of the major agencies could have serious repercussions. We've already seen third party processors announce compromises.

How can you protect yourself in this case? The responsibility lies with the data holder, and that is what should concern us - in some of these cases there is no motivation to retain customers, we have no way to remove our data from their databases, and in many cases there are few penalties for losing data that isn't protected by legislation. We may at least hear about it thanks to legislation that requires reporting - and that's a start.

Tuesday, April 24, 2007

Security deals: Free for life PKWare SecureZIP for Windows

PKWare has SecureZIP available as a free for life download. SC Magazine reviewed it, and seemed to like it. It looks worth checking out if you're looking for something that integrates into Outlook and other software for free.

Monday, April 23, 2007

Paranoid yet? Van Eck Phreaking your LCD

Most of us don't play in the uber-paranoid world that worries about Van Eck phreaking, but New Scientist has an interesting short article about the possibilities of Van Eck phreaking LCDs. For years, most of us assumed it was nigh unto impossible to grab enough RF from a laptop or LCD display to reconstruct the image - that's not always the case.

Time to run Tinfoil Hat Linux!

Friday, April 20, 2007

Web hacks - gzinflate and base64 encoding

A recent exploit I ran into dropped a remote administration console into a PHP script. I've seen similar exploits before, but this was the first time I had personally run into PHP exploit code nested in the script that used this bit of PHP:

eval(gzinflate(base64_decode('encoded file...')));
This is reasonably clever, as it keeps the file size down and makes the exploit a bit harder to figure out (should we call that insecurity through obscurity?). Vendors sometimes do the same thing to try to protect their code, but it's a weak reverse engineering prevention scheme - at best, it keeps casual viewers from copying the code.

How can you deal with this, especially if you're not a PHP wizard? Well, fortunately, the answer is pretty simple. A number of pre-built decoders exist, and my favorite of the day is from Steveandthesoftware. (the site is currently not responding, so you may have to check out the Google cache). There's another simpler script here, and you could always build your own if you're so inclined. Some tools also include base64 decoding - openssl does, as does the ever so useful Paros proxy tool. Even the L33t Key plugin for Firefox has base64 encode/decode support.
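You can also unpack these payloads offline without touching PHP at all. PHP's gzinflate is raw DEFLATE, which Python's zlib handles with a negative window size. The sketch below builds its own sample blob (the payload string is fabricated for illustration) - with a real exploit you would paste in the attacker's base64 string, and read the result rather than ever executing it.

```python
import base64
import zlib

def gzinflate_base64(blob: str) -> bytes:
    """Mimic PHP's gzinflate(base64_decode(...)) - raw DEFLATE, no zlib header."""
    return zlib.decompress(base64.b64decode(blob), -15)  # wbits=-15 -> raw deflate

# Build a sample payload the same way an attacker's packer would,
# then decode it. Never eval()/exec() the decoded result - just read it.
payload = b"<?php echo 'remote admin console'; ?>"
packer = zlib.compressobj(9, zlib.DEFLATED, -15)
blob = base64.b64encode(packer.compress(payload) + packer.flush()).decode()

print(gzinflate_base64(blob))
# -> b"<?php echo 'remote admin console'; ?>"
```

Attackers sometimes nest several layers of this, so expect to run the decode step repeatedly until readable PHP appears.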

Today's lessons?

  1. Use Tripwire or another file integrity monitor on your web scripts and other content that doesn't change often.
  2. Have a known good, clean backup of those scripts.
  3. Make sure your web server has permissions set appropriately on all of the directories - if your web server user doesn't need write access to the directory, don't give it!
  4. Limit the permissions that your web server user (such as www or apache) has. Fortunately, this is the default for most modern servers. If your web server usage model permits it, an Apache chrooted jail may be a good idea.
  5. In the event of compromise, rebuild if you can, and if you can't, make sure you're using those known good backups.
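Lesson 1 doesn't require a full Tripwire deployment to get started. A minimal sketch of hash-based change detection follows - the function names, demo directory, and file names are all invented for illustration, and a real monitor would also store the baseline somewhere the web server can't write to.

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(root):
    """Map relative path -> SHA-256 digest for every file under root."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(baseline, current):
    """Report files that changed, appeared, or vanished since the baseline."""
    changed = [f for f in baseline if f in current and baseline[f] != current[f]]
    added = [f for f in current if f not in baseline]
    removed = [f for f in baseline if f not in current]
    return changed, added, removed

# Demo on a throwaway directory standing in for a web root.
with tempfile.TemporaryDirectory() as webroot:
    Path(webroot, "index.php").write_text("<?php echo 'hello'; ?>")
    baseline = snapshot(webroot)
    Path(webroot, "index.php").write_text("<?php /* injected */ ?>")   # defaced
    Path(webroot, "c99.php").write_text("<?php /* dropped shell */ ?>")  # new file
    changed, added, removed = diff(baseline, snapshot(webroot))

print(changed, added)
# -> ['index.php'] ['c99.php']
```

Run the snapshot from cron, diff against the stored baseline, and alert on any output - on content that "doesn't change often," any hit is worth a look.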

Wednesday, April 18, 2007

Would you shop at a store that lost your data?

Would you shop at a store that lost your data? Would you attend a university that exposed your SSN? Would you donate to that university? If you were a veteran, what would you do if the VA lost your data and didn't know where it went? Do you feel better knowing that your data is out there, and does it worry you to think that many organizations don't announce breaches of your private data?

There seems to be a split between consumers' responses when polled and their actual behavior. In the articles I've linked about consumers in the UK, the majority of those polled claimed that they would take their business elsewhere if their data were exposed. The counter-article notes that TJ Maxx has gained sales in the past year - in fact, they saw 6% gains in March alone. It may be that notification requirements won't kill our organizations, even if they are embarrassing.

Does this point to consumers accepting data security breaches as commonplace? Stories of people receiving multiple notifications from different organizations in a day or two are floating around, and consumers rail against irresponsible companies. Even if consumers are fed up, it seems that we, as consumers, may be reaching the point that we become used to our data being exposed. The cost of doing business in our current information society is that our data is at risk, and we frequently cannot control how much data companies gather about us. Even if we limit what we give one company, our data can be correlated to data in other databases.

What is the moral of the story for organizations? Is it "Consumers will come back" or is it "Breaches can kill you"? Only time will tell. It may be that in the near term, the huge numbers of organizations leaking data will cause consumers to become inured to the data losses. The legislative backlash that we are seeing nationwide will surely have some effect, both due to required reporting and due to more stringent requirements.

Where does that leave the security professional? With questions, of course:

  1. Are you required by state or federal law to notify, and if so, how, under what circumstances, and how quickly?
  2. What is your organization's breach notification policy?
  3. Do you have procedures in place to handle breach notification?
  4. Do you have an internal communications plan?
  5. Have you looked at insurance? Data breach and information security insurance is just becoming available, and it may be worthwhile for your organization. Dennis Trinkle mentioned insurance in his presentation at the Indiana Higher Education Security Summit - the insurance is out there if you're looking for it.
For now, security folks need to keep track of the laws - both those on the books and the bills entering state and federal legislatures. If you have vendor requirements like PCI, you need to make sure that you meet or exceed them. You also need to make sure that your own policies and procedures keep up with both the law and other requirements.

Tuesday, April 10, 2007

Hard drive encryption and breach notification

I've had a number of conversations about SSN disclosure laws recently, and they're a major topic in the higher education security space. If you're an Indiana resident, last year's SSN release law and the breach disclosure laws for both private and public institutions made life more interesting. The usual disclaimer - I am not a lawyer, and you should talk to yours - definitely applies here. With that said, the Indiana laws and others include a possible out for organizations - for example, the state agency version reads:

"Sec. 5. (a) Any state agency that owns or licenses computerized data that includes personal information shall disclose a breach of the security of the system following discovery or notification of the breach to any state resident whose unencrypted personal information was or is reasonably believed to have been acquired by an unauthorized person."
The key here is the "unencrypted" - if you can reasonably state that the data was encrypted, and could not be accessed, then your reporting requirements typically are lessened, if not largely removed. It is worth noting that in general, the laws do not specify an encryption technology - presumably ROT13 would be considered a less than good faith effort.
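ROT13 makes the point nicely: it is trivially reversible, and Python even ships it as a codec. A quick sketch (the sample string is made up):

```python
import codecs

# ROT13 "encryption" - applying it twice returns the original text,
# and it doesn't touch digits at all.
plaintext = "Sensitive SSN: 123-45-6789"
ciphertext = codecs.encode(plaintext, "rot_13")

assert ciphertext == "Frafvgvir FFA: 123-45-6789"   # digits pass through untouched
assert codecs.decode(ciphertext, "rot_13") == plaintext  # instantly reversible
```

No court is likely to accept that as rendering data "unencrypted personal information" safe.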

Does it now behoove you to encrypt everything? Possibly, if you deal with covered data everywhere, but few organizations do. At the least, highly sensitive systems, or systems that are likely to be exposed, should be considered in your remediation plan. Most organizations are considering laptops and PDA/smartphone devices as their first round of targets. Both are likely to be exposed or stolen, and both often contain local copies of sensitive data. The past 12 months have seen a number of cases of stolen or missing laptops.

From a technical perspective, you have a few options:
  1. Full drive encryption software. Utimaco's product is a good example of this. The best part about software of this nature is that it can be installed on existing systems. Pay particular attention to backups, key escrow, and OS and hardware compatibility. You're sure to find some systems that your encryption scheme just won't work on - many products are Windows-centric. There are some real benefits to full drive encryption, including temporary space and virtual memory being encrypted. There is also typically a real performance hit, particularly for disk intensive activities such as using a virtual machine.
  2. User directory encryption - FileVault, for example, encrypts each user's home directory (on Windows, EFS fills a similar per-file and per-folder role). The caveat here is that you have to make sure that data isn't stored elsewhere - a single application that stores data outside the encrypted area, or a user who saves to the unencrypted root of the hard drive, can make your encryption useless. In the case of FileVault, you'll also need to figure out a method for managing the master password. There is still a performance penalty, but software outside the user directory should run at normal speed - the performance hit is constrained to files in the user's encrypted directories.
  3. File or volume encryption. Security professionals will probably point you to software like TrueCrypt for this. You must encrypt the file or volume and unlock it each time you use it. This approach does not account for memory resident data, nor does it handle temporary files, making it far more difficult to claim that the data could not have been exposed.
Should your organization use an encryption product? If you deal with sensitive data and want to safeguard it, you should definitely take a look at the products on the market. If your organization is subject to a disclosure law, encryption may also serve as both a reputational safeguard and a data disclosure control.

What about cost? In most cases, encryption products can be had in a variety of price ranges - BitLocker and FileVault are free with Vista and OS X, while commercial solutions range in price up to a few hundred dollars per machine. As with any security control, you'll need to gauge the cost and benefit to determine where and when you should deploy the technology.

Wednesday, April 4, 2007

Social engineering in the workplace: avoiding the "evil security guy" tag.

Security is evil. We say no (that's "default deny" to you), we enforce policies, and we make life difficult.

We ask hard questions, we poke holes in the beautiful software that you just wrote, and worst of all, we take up your time that could be spent on more important things than doing security assessments and configuration checks.

Really, we just get in the way.

Or, at least, that's how it feels a lot of the time. Then, a security event happens, and suddenly the security guy has a new shine!

Sadly, that's not the best way to work with IT staff. Security needs to work effectively with IT staff and the organizational community all of the time, not just during emergencies. What can we, as security professionals, do to build ties with the administrators, staff, and other employees? I have a few favorite tactics to make security staff more available to the rest of the organization.

Lunch with the security guys

Every few weeks, I eat lunch with a group of systems administrators from across the organization. It isn't a formal meeting - just a gathering of fellow geeks. Often, more useful information is exchanged at these lunches than in a week of meetings - and more happens as a result of them.

The real security benefit, however, is that I'm available as a resource in a non-formalized environment. Questions that never come up elsewhere are brought up, discussed - and often I don't have to provide an answer. The others in the group will already know it, or they've heard it from me before.

The key here is being approachable - the same ideas work for CISOs and other security management professionals - up to a point. Define a line of professionalism, but make yourself available.


In larger organizations, security staff may only be heard from when they're bringing trouble to your door. It helps to build ties before the event. I've made a habit of getting out periodically and chatting with various contacts across campus. They tend to refer things to me, and it helps me keep my finger on the pulse of the IT community.

There is a balance here - at some point, being heads down working on security projects is more useful, but making these contacts can be an effective part of a security outreach program.


We all like to read about social engineering - why not do some of your own? If you walk into my office, you'll find a bright yellow 1960's Civil Defense Geiger counter, magnetic building toys, and a selection of candy all out and easy to get to. Why?

They lure in IT staff.

I've had more conversations because of someone wandering down the hall and spotting the Geiger counter through my open door than I can count. The candy means that people stop by, ask a question, grab a handful, and meander back out. The building toys give engineering types something to play with as they explain their problems.

Simply taking the time to chat with the folks you work with is a valuable security tool. Yes, the "evil security guy" image is useful at times, but having an IT community that willingly comes to you before the problem becomes an issue is worth the trade.

Tuesday, April 3, 2007

Easy signed and encrypted email

I'm frequently asked how to easily send email more securely. My default answer in the past was to point the person to PGP or GPG, with the caveat that neither was particularly user friendly and that it would require additional software.

That's not the case these days. Most major email clients include S/MIME support, and getting a certificate is pretty easy - for example, Thawte provides free personal email certificates which are easily imported into your client. From there, you can use the certificate to sign and/or encrypt your email. You'll need the person on the other end to send you their certificate before you can send encrypted mail to them, but again, modern email clients make this a snap. For most clients, simply have the remote user send you a signed email and click "import" when you see the "signed mail" icon show up.

Pretty easy, right?

What's the difference between signing an email and encrypting it? Signing an email is a way of ensuring that the email is a) one you wrote and b) hasn't been changed along the way. A signed email says "This is from me, and it is exactly the message I wrote to you". An encrypted message is just that - encrypted so that it cannot be read except by people who have the right key to unlock it. Put them both together - a signed and encrypted message, and you have an email that you know came from a given person, you know it hasn't changed in content, and you know nobody else saw it along the way.

The truly paranoid security types would have me note that "you know that it came from a given person" is really "You decided to trust the certificate authority" and that "nobody else saw it" means "you can be reasonably certain that without heroic means using modern technology, nobody else can read the message". Digital signatures are highly dependent on trust models - the signature is only as good as the trust you can place in it.
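The integrity half of a signature can be illustrated with a keyed digest. Note that this HMAC sketch is symmetric - both sides share one key - whereas a real S/MIME signature uses the sender's private key and certificate, but the tamper-detection idea is the same:

```python
import hashlib
import hmac

def tag(message: bytes, key: bytes) -> bytes:
    """Keyed digest: changes if either the message or the key changes."""
    return hmac.new(key, message, hashlib.sha256).digest()

key = b"shared secret"
original = b"This is exactly the message I wrote to you."
signature = tag(original, key)

# Verification succeeds on the untouched message...
assert hmac.compare_digest(signature, tag(original, key))

# ...and fails if even one word was changed in transit.
tampered = original.replace(b"exactly", b"almost")
assert not hmac.compare_digest(signature, tag(tampered, key))
```

The trust question from the previous paragraph is exactly the part this sketch skips: with S/MIME, the certificate authority vouches that the signing key belongs to the named sender.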

If you know you want to do S/MIME, an easy way to get an S/MIME certificate from a trusted third party is to use Thawte's free personal email certificate. If you do use Thawte's free email certificate program, you can find notaries in most major metro areas, or you can catch them at your next security conference. A notary will verify that you are who you say you are for Thawte - basically a distributed trust model. If you're far from any notary, there are alternate ways to get certified, albeit at a cost. If you get your certificate notarized for 50 points or more, you can put your name in your certificate. If you reach 100 points, you can become a notary. After that, the more people you notarize, the more points you will be able to give out - notaries max out at 35 points. In a reasonably sized organization, a handful of notaries can quickly become 35 point notaries and bootstrap new notaries easily.
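The point arithmetic works out conveniently - three notarizations from maxed-out notaries clear both thresholds at once:

```python
# Thawte web-of-trust thresholds, as described above
NAME_IN_CERT = 50    # points needed to put your name in the certificate
BECOME_NOTARY = 100  # points needed to become a notary yourself
MAX_GRANT = 35       # most points a seasoned notary can give out

points = 3 * MAX_GRANT  # three maximum-strength notarizations
assert points == 105
assert points >= NAME_IN_CERT and points >= BECOME_NOTARY
```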

I should tell you up front that Thawte will want what they call a "National ID number". I counsel people to use their driver's license number rather than their SSN here, as any notary who signs your certificate will need to keep that information on file.

Should your organization use signed or encrypted email? That's a matter of needs, policy, and practical considerations for implementation - and we'll talk about that another day. For now, if your question is "can I, as an individual, easily send signed or encrypted email?", the answer is yes!

Monday, April 2, 2007

Security deals: eEye's Blink Personal for free

eEye is offering its Blink Personal Internet security product, including antivirus, with a one year license for free for a limited time.

According to an article on Dark Reading, the product will likely remain free after the one year period, as eEye is using consumer data to fuel their pro and commercial products.

"[Ross Brown, CEO of eEye] says the free consumer tool will also help eEye gather the data to build a better commercial product for its traditional business -- commercial and security-savvy users. The company is offering a free one-year subscription, but it doesn't intend to start charging for Blink Personal after the one year is up. "The renewal will probably be free again, too.""
The software is worth a look for Windows users as an alternative to traditional standalone AV and firewall products - it includes a number of other capabilities, such as host-based IPS and system and application firewalls, in addition to AV and patch management.

Remember, free is an easy sell to friends and family who otherwise might go without quality security products. Software such as Zone Alarm and AVG's free personal use antivirus are on my frequently recommended list for people who can't or won't buy products.

Thursday, March 29, 2007

Security tools: BartPE - Bootable Windows on a CD

I'm attending offsite training this week, and the class is full of people who do security work for their organizations. A frequent topic is how their organizations do things, and what they use to do their jobs.

I was startled to learn that some of my classmates had never heard of BartPE, which is a tool that any security staffer who works with Windows systems or needs a bootable Windows toolkit should have.

What is BartPE? It is a "Preinstalled Environment" - which doesn't tell you much up front. In short, it is a Windows environment (XP or 2003) packaged to run from a CD, much like a Knoppix live CD. It has a wide variety of plugins that are ready to go or can easily be added if you have appropriate licenses or downloaded free software, and you can add tools of your own quite easily. The advantages - the ability to run Windows-native programs without touching the host system's filesystem, native NTFS support, and familiar Windows tools - will be obvious to anybody who needs to work with Windows systems for recovery, repair, or on-host forensic or incident response work. It is also a useful way to boot Windows if you are traveling and need a Windows system on the go but don't have a fully installed system, or have only non-Windows x86 systems handy.

This is one of the tools that is usable in its basic form (which does require a local build and setup), but which can be a much more powerful tool with some work. Put this one in the list of tools that are worth your time and effort to learn and build before an incident.

Wednesday, March 28, 2007

Shared security standards - learn from the DoD

GCN points out that the DoD and ODNI are establishing a collaborative security standard policy. There are a few useful points made here that organizations seeking to work together on security would do well to observe. The bullets below are from the article, with commentary interspersed.

  • Define a common set of trust levels so both departments share information and connect systems more easily.
This is crucial when working between different organizations or systems. Building a mapping of trust levels and a policy and procedure for how to map those levels should be a part of any security program that has to interface different entities.
  • Adopt reciprocity agreements to reduce systems development and approval time.
Trust is at the heart of this bullet. If your organizations have equivalent security policies and a trust level to match, this will work. If not, then sharing systems development and approval will likely fail. Shared standards and practices are absolutely vital if your organization attempts this!
  • Define common security controls using the National Institute of Standards and Technology's Special Publication 800-53 as a starting point.
Another good place to look for security controls would be the CIS benchmarks or the NSA security configuration standards. The keys here are establishing a standard, keeping it up to date, and making sure that you adjust the boilerplate to fit your needs if you use a standard. No matter what standard you use, documentation is key. I like to suggest wikis as a great way to develop and maintain living documents for standards.
  • Agree to common definitions and an understanding of security terms, starting with the Committee on National Security Systems 4009 glossary as a baseline.
The glossary can be found here in PDF form. This is a handy tool - it helps eliminate semantic debate (we've seen it ourselves in our comments). Arguing over the meaning of the word vulnerability might be avoided with an agreed on definition.
  • Implement a senior risk executive function to base an enterprise view of all factors, including mission, IT, budget and security.
This might involve organization-wide risk assessment, communication, and - possibly most important - ownership and responsibility by senior management. Without strong upper management support and ownership, any large scale security project will fail.
  • Operate IT security within the enterprise operational environments, enabling situational awareness and command and control.
In short - use operational security monitoring. There are technical, administrative, and policy/procedure elements here - all the elements of a full information security program.
  • Institute a common process to incorporate security engineering within life cycle processes.
Lifecycle design is one of the most crucial places for security to be involved. Building security reviews and consultation into project management and the lifecycle will help ensure that new projects are built on a solid foundation, and that existing systems and designs receive periodic review.
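The trust-level mapping in the first bullet can be sketched as a simple translation table. Every organization name and level below is hypothetical, not drawn from any real standard:

```python
# Hypothetical mapping between two organizations' trust vocabularies.
AGENCY_A_TO_B = {
    "public":       "unrestricted",
    "internal":     "official-use",
    "confidential": "restricted",
    "secret":       "restricted",   # A's top level maps to B's ceiling
}

def translate(level_a: str) -> str:
    """Map one of A's trust levels to B's vocabulary, failing closed."""
    # An unmapped label gets the most restrictive handling.
    return AGENCY_A_TO_B.get(level_a, "restricted")
```

The fail-closed default matters: when two organizations disagree about a label, the safe answer is the more restrictive one.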

It is good to see government agencies planning ahead, and the outline above is a good high level starting point for organizations seeking to share security practices. Standards, trust, and executive backing are key to this entire process. It will be interesting to see what comes out of the process.

Monday, March 26, 2007

Risk Assessment: EULAs and contracts

One of the often neglected parts of a security program is the contracts that your organization enters into with vendors. Often organizations accept the standard boilerplate rather than negotiating a more favorable position, or accept a EULA that contains conditions that the organization may regret. This is a great place to apply some of the tools and tactics that you apply to other security projects - checklists, risk assessment, and standards.

One tactic I like to recommend is to work with your legal representation to create a checklist for reviewing EULAs and other agreements. Filtering out agreements with easily found issues up front can save significant time - your lawyer's time is expensive, and a first-round check can save you money and pain if the contract ever becomes an issue. The EFF provides a good user-level guide to dangerous EULA terms, which can be a good starting point. Once you have a checklist, you can pull risky or questionable clauses out.

When you develop the checklist, make sure to separate your categories - terms that are completely unacceptable should be noted, and terms that are questionable, or that you prefer alternate language for should be marked as such. If a clause that your company requires isn't present, that should be accounted for as well.

While a checklist isn't a substitute for proper legal review, it can help weed out some of the worst EULAs and contracts.
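A first pass along these lines is easy to automate. In this sketch, every flagged phrase is a hypothetical placeholder - the real checklist comes from your legal counsel:

```python
# First-pass EULA screen with the three categories described above.
UNACCEPTABLE = ["waive all claims", "may install additional software"]
QUESTIONABLE = ["terms may change without notice"]
REQUIRED = ["limitation of liability"]

def screen(text: str) -> dict:
    """Sort a EULA's red flags into unacceptable / questionable / missing."""
    lowered = text.lower()
    return {
        "unacceptable": [p for p in UNACCEPTABLE if p in lowered],
        "questionable": [p for p in QUESTIONABLE if p in lowered],
        "missing_required": [p for p in REQUIRED if p not in lowered],
    }
```

Anything in the first or third bucket goes straight to counsel; a clean pass only means the agreement cleared a coarse filter, not that it's safe to sign.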

Along those lines, check out the OWASP Secure Software Contract Annex. Organizations that hire third party developers should have a well understood contract, and projects like this help smaller organizations with something they might not have the resources or in-house expertise to handle.

Thursday, March 22, 2007

Securing your Mac: benchmarks and guides

While Macs aren't as heavily used in the corporate world as Windows and Unix systems, they've been steadily penetrating the security world over the past few years. Where security conferences used to be dominated by IBM and Dell laptops, these days a quick visual survey shows a high percentage of MacBooks and MacBook Pros.

For those of us who use a Mac, securing Mac OS X is an interesting topic. It is regularly claimed to be safer than Windows or other Unix/BSD systems, but that doesn't mean we can ignore locking down our systems. There are some good tips and tweaks out there.

As with any lockdown guide, you should review your usage and needs against the assumptions of the guide. If you are doing this for yourself, you may not need to formalize the process. If you are building a lockdown process or security benchmark for an organization, you will need to document what sections you retained and which sections you discarded, and possibly why. You will also need an exceptions process if you will allow exceptions, and a means of properly documenting alternate acceptable configurations.

So how about some lockdown guides?

Apple's guide is available at - fair warning, this is a 167 page PDF.

The CIS standards are available at: - note that unlike other popular operating systems, the CIS benchmark is only available at level 1 (a "prudent level of minimum due care"), and that there is no automated tool for benchmarking.

The NSA released an updated guide for 10.4, available at

What about NIST, you ask? Their linked guides are outdated. Check out the configurations above for a more current checklist.