Wednesday, February 27, 2008

Social engineering the deliveryman


Sometimes life gives you a great example of a vulnerable system - here's my recent exposure to an everyday system with a reasonably significant vulnerability.

We've all heard of cases of packages being left on doorsteps, or mis-delivered. This one is a bit different...

I recently ordered a new phone and had it shipped to my apartment. Like most apartment dwellers, I was used to the routine: any delivery that arrived when I wasn't around was taken to the complex office, and a note was left on my door. That's actually quite convenient, and the complex staff require signatures and, better yet, recognize me.

In this case, however, something a bit different happened. First, the delivery person was told by my neighbors that I wasn't around, so he carried on with his deliveries. On his way back, he saw someone outside the apartment and stopped. That person claimed to be me - and according to the delivery person, they knew my name and said they were waiting for the package. A forged signature later, and my new phone was gone.

If you've ever received a package from FedEx, DHL, or UPS, you've likely never been asked to present ID. If you're near the residence, know the recipient's name, and act reasonably sure of yourself, packages are free for the taking.

A hole in the system? Yes. One I had never seen exploited before, but one that is pretty easy to exploit if you can get the resident's name. The solution is amazingly simple: allow packages to be sent with a "require ID" option. A photo ID check would have prevented the entire issue.

I'll post a follow-up, as further investigation is ongoing. Sadly, in cases like this the value of the stolen item isn't sufficient to change policy, and is below the threshold to make any sort of police investigation likely.

Creative Commons licensed image credit to Flickr user StarMama

Encryption key vulnerability update...

Late last week, we reported on breaking research from Princeton in which it was found that encryption keys could be harvested from RAM. Over the last few days, the folks at the Internet Storm Center (ISC) have compiled their own research and interpretations into a nice guide, located here, covering some of the more popular encryption products on the market.

In the guide you'll find several different products and their level of risk while the system is screen-locked, sleeping, or hibernating. What stands out to me is that some vendors are claiming full invulnerability to this type of attack. Be mindful that most products are not safe until the memory has had a chance to fade. Further, even if a product is designed to wipe the memory at shutdown, this will only occur when the system is shut down cleanly. So, I'll reassert my original recommendations and add a fourth:

  1. Never let an application remember your password/passphrase
  2. Always shut your computer down when you are done using it if you are in a non-physically secured area
  3. Never set encrypted volumes to auto-mount
  4. Configure auto-dismount of encrypted volumes
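
For the fourth recommendation, here's a minimal sketch of what auto-dismount automation could look like, assuming TrueCrypt's documented command-line switches (/d dismount, /f force, /q quit, /s silent on Windows; -d on Linux) - verify them against your version's documentation before relying on this:

    import subprocess
    import sys

    # Dismount every mounted TrueCrypt volume. Wire this to a screen-lock or
    # idle event so volumes never sit mounted on an unattended machine.
    # Paths and switches are assumptions - check your TrueCrypt docs.
    if sys.platform.startswith("win"):
        cmd = [r"C:\Program Files\TrueCrypt\TrueCrypt.exe", "/d", "/f", "/q", "/s"]
    else:
        cmd = ["truecrypt", "-d", "--force"]

    subprocess.call(cmd)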

Friday, February 22, 2008

FVE: Full "Vulnerable" Encryption...

OK, so Full Volume Encryption is something I need to ensure my data are safe, right? Yes, but skip back to the first lessons you were taught in your crypto classes:

Once the data are encrypted, what must you have?

A. A warm fuzzy feeling that you've done everything to protect your data - followed by a double latte
B. More than a modicum of concern about the physical security of the device that holds your encrypted data
C. Good key management practices, including backup copies that are physically secured against theft and destruction, and access control to the working keys
D. A tinfoil hat, because you are already reading this blog

Answers to follow

In a recent video on the Center for Information Technology Policy site at Princeton University, I saw an example of how BitLocker can be "Bit Unlocked." BitLocker is an underlying FVE engine offered with some flavors of Windows Vista. In the video, the narrator explains, with video evidence, how an attacker could read the encryption keys from RAM even after the machine was placed in sleep/hibernate mode or turned off. Therefore, it is feasible that if your laptop is stolen while running, sleeping, or just after having been turned off, your encrypted data are still at risk. So, you need B - more than a modicum of concern about the physical security of your computer. Not using Vista? Don't feel too comfortable: the narrator also claims that similar harvesting techniques work against Apple's FileVault and Linux's dm-crypt, and could be possible against TrueCrypt.
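
To get a feel for why keys sitting in RAM are so exposed, here's a toy sketch of the hunting process. It merely flags high-entropy regions of a memory dump as key candidates; the Princeton tools are considerably smarter, verifying AES key-schedule relationships that survive even partial bit decay. The dump filename is hypothetical:

    import math
    from collections import Counter

    def shannon_entropy(window):
        counts = Counter(window)
        n = float(len(window))
        return -sum(c / n * math.log(c / n, 2) for c in counts.values())

    def key_candidates(dump, size=32, threshold=4.5):
        # Random key material is near maximum entropy; most other RAM
        # content (code, strings, zeroed pages) is not.
        for off in range(0, len(dump) - size, size):
            window = dump[off:off + size]
            if shannon_entropy(window) > threshold:
                yield off, window

    with open("memory.dump", "rb") as f:    # hypothetical cold-boot dump
        for offset, candidate in key_candidates(f.read()):
            print("0x%08x  %s" % (offset, candidate.hex()))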

So, you also need C, good key management practices... In this case, the FVE engines use RAM to hold the keys for inline use as the computer runs. While this is completely necessary for the system to be able to encrypt and decrypt files, it presents a problem in that the keys sit in plaintext within memory. Without a wholesale rewrite of the software to clear memory pages and/or transform the keys, there's not much that can be done to prevent this condition. There are, however, configuration options and human actions that can blunt this type of attack. For example, to thwart the attack on BitLocker, one can simply set up Vista to boot to the loader requiring the passphrase that was assigned when the volume was encrypted - and then shut the computer down when done, not letting it out of sight for at least a few minutes. The downside, if you can call it that, is that boot-ups take longer.

But I have a TPM chip...and I am thirsty for that double latte. Not so fast, according to the accompanying article:

"Trusted Computing hardware, in the form of Trusted Platform Modules (TPMs) [22] is now deployed in some personal computers. Though useful against some attacks, today’s Trusted Computing hardware does not appear to prevent the attacks we describe here.

Deployed TCG TPMs do not implement bulk encryption. Instead, they monitor boot history in order to decide (or help other machines decide) whether it is safe to store a key in RAM. If a software module wants to use a key, it can arrange that the usable form of that key will not be stored in RAM unless the boot process has gone as expected [31]. However, once the key is stored in RAM, it is subject to our attacks. TPMs can prevent a key from being loaded into memory for use, but they cannot prevent it from being captured once it is in memory."
In the end, this is a fairly advanced technique that, in time, I'm sure will become publicly available. Recommended countermeasures include encasing the memory in epoxy, using security screws and case locks to limit physical access to the RAM, and even re-engineering RAM itself to forget faster. Today, however, I'd recommend that you think about these guidelines:
  1. Never let an application remember your password/passphrase
  2. Always shut your computer down when you are done using it if you are in a non-physically secured area
  3. Never set encrypted volumes to auto-mount

Thursday, February 21, 2008

Data Breach Notification requirements, state by state

The Consumerist, an online customer advocacy website, pointed out that CSOonline has put together a comprehensive list of the breach notification laws for each state - 38 states are represented on their map. While the map only covers highlights of each state's laws, it is an interesting way to visualize and review current requirements. The Consumerist article also points out CSO's coverage of current laws moving through Congress in Washington - perhaps we will see a national breach notification law enacted in the next year or two.

Wednesday, February 20, 2008

Blinding security cameras with IR LEDs

BoingBoing linked a translation of a German site today. The site shows a headband with an integrated IR LED that blinds security cameras. This is an interesting alternative to the old trick of blinding cameras with laser pointers, as it offers a means of creating anonymity that might not be noticed by others who saw you in person.

Tuesday, February 19, 2008

Proprietary encryption strikes again

Slashdot readers may have noticed the Heise Security analysis of the 2.5in. Easy Nova Data Box PRO-25UE RFID hard drive case built by Drecom. In short, the issue is that the encryption used by the enclosure's chipset is very poor, and won't stop more than a casual attempt to decrypt the data.

The article says:

The company explained that actual data encryption is based on a proprietary algorithm. The company claims the IM7206 only offers basic protection and is designed for "general purpose" users.
(Emphasis mine)

Joe Consumer isn't likely to have access to chipset specifications - and in fact, the vendor even disclaims responsibility in the Heise article:
Easy Nova product manager Holger Henke says that the improper label "128-bit AES Hardware Data Encryption" for Data Box PRO-25SUE was the result of Innmax's misleading formulation of its controller specifications.
Security analysts should know to avoid proprietary encryption algorithms if they're able to find out that they're in use, but users who rely on what sounds like a standards-based encryption capability will be disappointed - and may have their data put at risk. Very few vendors go so far as to tell you what chipset they're using for encryption, so buyers are put in the position of relying on marketing materials and product labels. That can be an uncomfortable position to be in if you really need to rely on the encryption.

Heise also notes that the same chipset used in the PRO-25UE appears in a number of other products, and that the AES encryption in the chipset is used only to encrypt the RFID unlock token, not the data on the drive. Sadly, the data on the drive is "encrypted" with a simple XOR that Heise reverse engineered rather quickly.
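
To see just how weak that is, here's a short sketch (with a made-up four-byte key) of why repeating-key XOR falls to any known plaintext - and real disks are full of known plaintext, such as long runs of zero bytes:

    def xor_cipher(data, key):
        return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

    key = bytes([0x5A, 0xC3, 0x7F, 0x11])        # hypothetical 4-byte key
    ciphertext = xor_cipher(b"\x00" * 16, key)   # a region full of zeros

    # 0 XOR k == k, so the "ciphertext" of a zeroed region IS the key stream.
    recovered = ciphertext[:len(key)]
    assert recovered == key
    print(recovered.hex())                       # 5ac37f11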

What options do you have? Well, hardware encryption means that you have to trust the vendor to have implemented their encryption system correctly. Since there isn't a central security standards body that certifies encryption devices like this, users are left to investigate on their own, or to rely on third parties who may take interest. If you don't trust the hardware solutions on the market, a software package like TrueCrypt, PGPDisk, or BitLocker is likely your best answer for now.

Monday, February 18, 2008

Learning from It Takes a Thief


The Discovery Channel's It Takes a Thief is an interesting method of advocating home security. For those who haven't watched it, the basic show format is that two hosts, both former burglars who have turned their lives around, first break into a house - with the owner's permission - then upgrade the house's security and retrain the owners before trying it again.

For those who have never participated in a physical security penetration test, this is a reasonable introduction to one form of penetration testing. If you're a security professional, or have physical security expertise, you'll probably note that their targets are selected for the wide variety of issues they have, and that some of their actions as shown would make a security professional fail - things like entering the house without a written 'get out of jail free' card. You'll also note that while they make quite a few upgrades for physical security, there are often ways around the systems that are installed. If you're questioning that, read their security tips - their goal is to make the house a harder target than the rest of the neighborhood, not to make it invulnerable.

In either case, the families that are on the show do appear to get real security improvements, and the impact that the show makes on their habits is real - at least in the short term. Let's hope that a year or two from now the show goes back to check how the participants are doing with their habits and whether their systems have continued both to be used and to function properly.

There are a few interesting points of comparison with what many of us might be more familiar with: a penetration test by an electronic penetration testing company.

  1. The hosts select the owners by checking a number of houses in a given neighborhood, rather than the owners soliciting the testing. This is remarkably similar to the unsolicited companies and individuals who look for vulnerabilities in software and websites.
  2. The owners are allowed to watch, but cannot respond to the event. In most penetration tests, organizations are encouraged to let their normal defenses respond as they normally would, typically with some level of cut-out to ensure that escalation doesn't cause damage or down-time. While one episode does see the police called on the host while he is robbing the house, the owners never come home and children or others are never in the house for the event.
  3. Technology, infrastructure, and process are reviewed and upgraded. This is very similar to the result of an electronic penetration test; however, the hosts provide the upgrade. A model where the assessor does a risk assessment and determines the security improvements to be deployed (albeit with the understanding that much of it is vendor driven, based on advertising) is intriguing. You don't often see companies doing this sort of publicity, but wouldn't it be an interesting marketing strategy?
  4. The homeowners watch video of the robbery as it happens. Typically, senior members of an organization merely receive a report, as electronic penetrations are usually not as dramatic to watch. A home invasion and theft hits the homeowners hard, and that visceral feeling can't be easily replicated in a summary report of findings.
This is, at the end of the day, a live physical security penetration test. Identities are not fully disclosed, although people who recognize the homeowners or know their neighborhoods would be able to identify them and would be familiar with their security systems and their valuable possessions. That's an interesting potential issue, as the homes chosen thus far have typically had valuable possessions reaching into the hundreds of thousands of dollars.

I'll be pointing their home security tips out when I give talks on physical security - having a TV show example is a great way to reach my audience, and awareness at home is a great lead-in to awareness at work.

Creative Commons licensed photo credit Flickr user Ben Scicluna

Sunday, February 17, 2008

Enhancing security through simplicity: the dangers of complex firewall rulesets

I've worked with a number of different firewall products, ranging from Cisco FWSM, PIX, and ASA devices to Netscreen 5 series SOHO firewalls to Secure Computing's Sidewinder G2. All of them have their pros and cons, but one thing has been constant: in a live production environment, as ruleset complexity increases, so do configuration errors, security holes, and mistakes.

Very few organizations can afford the time and resources it takes to re-factor a ruleset on a major datacenter firewall device, and change management and rule requests are rarely a closed-loop process that ensures every stale device entry and rule is removed when it should be. Even ruleset builder tools can lead to their own issues, either by creating difficult-to-understand rulesets, or by concealing the complexity that they have created.

Where does this leave the staff members maintaining the ruleset?

They're usually aware that their ruleset is larger than it needs to be, that it is more complex than it needs to be, and that there are orphaned rules and holes that don't need to exist.

But there is often little they can do about it without either starting clean, or having organization-wide commitment to a clean and simple policy. If you do have the opportunity, here are a few pointers on cleaning up your rulesets:

  1. Get complete architecture diagrams, and pseudocode your rules. This provides a good overview of what your systems are expected to do, and how they communicate. Verify ports, protocols, and hosts.
  2. Map hosts into functional groups. Look for common access methods, common rule requirements, and common functions. You will have some hard decisions based on functional groups, and may have to change them over time, but a reasonable starting map can make rulesets more manageable. This is a good time to agree on naming conventions with the system administrators and firewall rule change requesters. Having pre-agreed-upon names can simplify rule requests, as you won't have to match groups to hosts - requests can be made in the same shorthand that you yourself will use to create the rules.
  3. Write a sample pseudocode ruleset, and review it with administrators (see the sketch after this list). Translate requests into firewall rules, then use that and your mappings to write extensible, reasonable firewall rules. Make sure that you leave yourself enough flexibility to add and remove hosts by creating generic rules where possible. Consider the trade-offs between a lax ruleset and maintenance concerns. Often, a slightly relaxed ruleset is not an unreasonable compromise given a multi-layered approach for many assets.
  4. Build a clean environment using your draft document, and then migrate machines. If you can migrate into a clean environment - and this is a great place to use virtual firewalls like those the FWSM and higher-end Netscreen devices support - you can use only your new ruleset.
  5. Require testing. Require administrators and functional users to test the environment, and monitor during that test. Verify traffic via each requested port and protocol to each requested host, or document why it will not show up. Note unused rules, and review them with the requesters.
  6. Provide documentation to the areas you support. Providing a written copy of the groups and aliases that you have created will help system administrators and others understand how the firewall treats their systems and applications. Often, the best return on investment is seen here, as they can request correct firewall rule changes, and you can communicate using the same terms.
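
To make step 3 concrete, here's a toy sketch of pseudocoded rules generated from functional groups. All hosts, group names, and ports are hypothetical, and a real deployment would translate the output into your firewall's own object-group syntax:

    # Functional groups agreed upon with the system administrators (step 2).
    GROUPS = {
        "web-tier": ["10.0.1.10", "10.0.1.11"],
        "app-tier": ["10.0.2.10", "10.0.2.11"],
        "db-tier":  ["10.0.3.10"],
    }

    # Pseudocode policy (step 3): (source, destination, protocol, port).
    POLICY = [
        ("any",      "web-tier", "tcp", 443),
        ("web-tier", "app-tier", "tcp", 8080),
        ("app-tier", "db-tier",  "tcp", 1521),
    ]

    for src, dst, proto, port in POLICY:
        for s in GROUPS.get(src, [src]):
            for d in GROUPS.get(dst, [dst]):
                print("permit %s %s -> %s port %d" % (proto, s, d, port))
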
I've used this process in a number of instances, from small-scale group firewall policy rewrites to a clean pre-production rewrite for a major ERP system that was heading into a new environment. The advantages are significant - the ERP ruleset went from over 200 rules created during pre-production and test to fewer than 80, and remained far more manageable over time.

Friday, February 15, 2008

Nagios: An Open Source "Availability" Tool

Confidentiality, Integrity, and Availability - these make up the CIA Triad upon which most information security programs are based. However, it seems that most folks outside the infosec profession (and some within) see only the first one or maybe two as a true infosec focus.

The third leg of the triad - availability - is self-explanatory. Applied in the infosec realm, it means that your resources (network, phones, databases, website, workstations, etc.) are there and ready to serve when called upon. Oftentimes, IT shops are told by their supported users that an outage is occurring before they are even aware of the problem. There are many reasons for this, and I won't jump onto that soapbox just yet, but often it stems from not having situational awareness.

So, how do you assure availability? The first step is to know what resources your business relies upon. Make a list, check it twice, and then vet it with the system or business owners to verify your findings. Next, expand that list to the underlying dependencies required to support these resources. Take your time in this step and apply the OSI model; the rewards will come in spades if you do it right. Case in point - simply checking to see if a mail server replies to a ping does nothing to indicate whether SMTP or IMAP is available. So if the business owners say that life ends when email stops, make sure you are not just looking to see that the mail server is pingable... delve into the lower-level dependencies.
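
To show what a real service check looks like (versus a ping), here is a minimal Nagios-plugin-style SMTP banner check - the hostname is hypothetical, and the exit codes follow the Nagios plugin convention (0 = OK, 2 = CRITICAL):

    import socket
    import sys

    HOST, PORT = "mail.example.com", 25          # hypothetical mail server

    try:
        sock = socket.create_connection((HOST, PORT), timeout=10)
        banner = sock.recv(512).decode("ascii", "replace").strip()
        sock.close()
        if banner.startswith("220"):             # SMTP "service ready" reply
            print("SMTP OK - %s" % banner)
            sys.exit(0)
        print("SMTP CRITICAL - unexpected banner: %s" % banner)
        sys.exit(2)
    except socket.error as exc:
        print("SMTP CRITICAL - %s" % exc)
        sys.exit(2)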

Now that our list is a 20-page printed spreadsheet of resources and dependencies, how do we track them all in real time? Well, you could go to the business owners and ask for more FTE dollars, or you could take a spare machine, install your favorite Linux distro, and set up Nagios.

"Nagios is a host and service monitor designed to inform you of network problems before your clients, end-users or managers do. It has been designed to run under the Linux operating system, but works fine under most *NIX variants as well. The monitoring daemon runs intermittent checks on hosts and services you specify using external "plugins" which return status information to Nagios. When problems are encountered, the daemon can send notifications out to administrative contacts in a variety of different ways (email, instant message, SMS, etc.). Current status information, historical logs, and reports can all be accessed via a web browser."
In my test, I installed Fedora Core 6 and followed the "Quick Start" guide provided on the Nagios website. In about 15 minutes I had Nagios monitoring the local host on which it was installed. A few hours later we had all of our servers (and key services) and switches (and key ports) monitored.

Nagios uses configuration (*.cfg) files to define objects like contacts, hosts, services, switches, etc. These can be as limited or elaborate as you want them to be. However, most of the work is done for you during the quick start, which will get you up and running. Additionally, most of the common services are covered by the packaged plugins and agents. For more advanced monitoring you can find custom modules, say for monitoring a specific port on a Cisco switch, online at places like here or here.

Nagios doesn't stop at monitoring electrons, either. In fact, there are several products available (for a cost) to monitor the physical environment that are supported by Nagios plugins. For example, the Websensor EM01B is capable of monitoring temperature, liquid presence, illumination (light level), and relative humidity - perfect for a server room or wiring closet. Look back soon for a test of this specific product.

The final piece that makes Nagios so useful is the included web interface. While it is not a work of art, it is a functional tool that allows viewers to quickly scan the whole, or micromanage the specific. My favorite experience so far with this product stemmed from a recent downtime in which we upgraded the power in our server room. Once we completed the work, we were able to tune into the Nagios web interface to check our status. We found only one issue (SMTP was timing out on a mail server) and were able to focus on that specific issue rather than checking everything we thought we needed to. Once we got that issue sorted out, we got the final recovery alert telling us that SMTP was "OK." Here are a few screen shots from the Nagios site to help highlight the functional form of the web interface:


First the Host Detail: A quick and dirty table that will give you a listing of your defined hosts and their statuses. In Nagios, each system is referred to as a host (be it a server, a switch, or an external device), and these hosts can have services. Agents are available for several operating systems that allow for a more granular interface - think monitoring key Windows services.


Next the Service Detail: Here the table is sorted by Host and then by service. Look closely and you can see the amount of detail being filtered into the system by the plugins and configurations.


Nagios also offers the ability to group hosts and services so that you can quickly view large portions of your network availability status by group type. Here we see the Host Group Overview page. One of the neater features is the ability to establish parent/child relationships within the hosts and services, which allows for smarter alerting. For example, if our WAN link port (the parent) goes down, the hosts and services behind it are treated as unreachable children rather than generating a separate alert for each one.
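
Here's a minimal illustration of the parent/child idea in Nagios object syntax - the host names and addresses are made up, and I'm assuming a "generic-host" template like the one shipped in the sample configs:

    define host {
        use        generic-host
        host_name  wan-router        ; the parent: our WAN link
        address    192.0.2.1
    }

    define host {
        use        generic-host
        host_name  branch-server     ; child: reached through wan-router
        address    192.0.2.10
        parents    wan-router
    }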


Last the Status Map: This is just plain cool if you ask me. It's a nice array that you can have running on a secondary/tertiary monitor all day long.

Some words of caution - when installing on an SELinux-enabled kernel, there are a few more steps to make Nagios work completely. Sure, you can use the "setenforce 0" command to stop SELinux from getting in the way, but I would recommend following the steps in this guide. Next, always check your newly created or edited configuration files for errors or conflicts before restarting Nagios - a script to do just that comes pre-packaged with Nagios. Last, remember that throwing technology at the problem is rarely the complete answer. You still have to check that Nagios is checking - and that its results have integrity.
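
Assuming the default source-install paths from the quick start guide (yours may differ), that pre-flight configuration check looks like this:

    /usr/local/nagios/bin/nagios -v /usr/local/nagios/etc/nagios.cfg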

Thursday, February 14, 2008

The problem with security questions - and an easy solution

Most of us have run into security questions before - they're required to reset your password, or to log into your online banking system. Often, they use relatively common information, although that has begun to change. That move to more flexibility is a good thing! It encourages greater security and makes stealing accounts that much harder.

Many sites used to have no option about what questions you could use. Many still don't. Those that do either offer a choice from a static set of questions, or allow you to write your own.

What many people don't realize is that security questions can actually make your account easier to steal. In many cases, they were created to lower the costs involved with password resets, although some security questions now act as a second authentication layer.

While a thief who steals your wallet may not be technologically savvy enough to dig through your personal history for your mother's maiden name and your place of birth, some people are. I've seen account security issues due to accounts accessed without permission by family members, significant others, and spouses. In other cases, having access to one compromised account with security question information stored within it has led to further accounts with similar questions being compromised.

Here is an easy method that you can use to avoid this:

  1. Use a password safe application like KeePass or Password Safe. While it isn't necessary to use one to make this habit work, it makes the entire process much easier.
  2. Record the security questions that you use, and then the answers that you provide. Change your answers from the "real" answer to something that you can record in the safe. Having different answers per site is a reasonable idea (see the sketch after this list).
  3. If you have the option to make up your own questions, you can take this further - your questions and answers do not have to have answers that anyone else would know.
  4. Back up your password safe! It is passworded and encrypted, so you may opt to email it to yourself, or to copy it to your thumb drive.
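
For step 2, the "answer" doesn't need to be memorable at all, since the safe remembers it for you. Here's a minimal sketch of generating unguessable answers, using Python's secrets module:

    import secrets
    import string

    def random_answer(length=20):
        # An unguessable "mother's maiden name" - store it in your password
        # safe alongside the question it answers, one per site.
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(random_answer())    # e.g. mT4qX9h2... - paste into site and safe
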
Security staffers - here's your chance to help the cause. Teach your web developers to allow security question options, so that users aren't stuck with the normal questions that can easily be learned. A little education and some relatively easy design choices can significantly help the security of your password recovery and authentication systems.

Wednesday, February 13, 2008

Reformed Lawyer Loves Information Security

David says I need to introduce myself if I want to post here. Since I want to post, here goes. I am a reformed lawyer now working in information security in higher education. My focus is primarily on policy development, although once in a while "they" let me out of my cell and I get to participate in a risk assessment or a special project related to information security and "the law." I truly enjoy the ability to meld my lawyer skills with the constantly changing, always evolving, never still discipline of information security. Without further ado (or with much ado about nothing), here goes...

-------------
I love Google Web Alerts. It proves to be a handy reconnaissance tool for gathering intel on coworkers, friends, family, and the like. It is also helpful for assessing your own internet presence (or notoriety, as the case may be). Thus, it is with great anticipation that I scan my own weekly Google Web Alert to see if I have "popped up" someplace unexpected.

Today I did. Today's report showed that an article I wrote back in 2004 has been included in a web-based bibliography. What is notable is that the article is from my first career as an attorney, and it discusses resources that all good general practitioners should have in their legal toolkit.

These days I talk about what a person needs in their security toolkit; IT resource acceptable use policies; and how to navigate the various federal, state, and local laws related specifically to technology and information security. Not only am I a quasi-geek, but I am also an information security policy wonk.

The story of the journey from general practitioner to information security policy writer is not terribly exciting. What is interesting though is that many of the skills, tools, and talents that are useful for the law are also useful for information security. Problem solving, the ability to critically analyze materials in front of you, and an unending desire to know your topic in depth (and always "be right" about the knowledge) are invaluable.

Lawyers working on a case need to know by rote the facts particular to a client, as well as the constraints that the law imposes on those facts. We need to be able to point out from a legal, business, and practical standpoint why a client's desired course of action may have an undesired result (jail, fines, and interaction with additional lawyers are nearly always undesired). Similarly, information security professionals need to know how information systems work, and how their clients and employers intend to use that system. Then the fun begins, pointing out the legitimate business trade offs between data security, business efficiency, and sometimes, plain old common sense.

Since I write policy, I get the best of all worlds (much like being a general practitioner attorney). Sort of a "jack of all trades, master of none" role. For short periods of time (usually spanning months), I get to develop some kind of subject matter expertise in a particular information security area while a policy is being created. I meet with the security professionals who know their areas inside and out, I meet with the administrators who know the business side of an organization cold, and I get to try to facilitate the development of a policy that balances information security and business efficiency in a way that makes sense for my organization. Like a law practice, sometimes it is frustrating, sometimes it is exhilarating. It is never dull.

Most readers of this blog are already information security professionals. For those that are not, I can offer the following tidbits that helped me as I entered this field:

1. Study up and don't be afraid to ask questions. This is not a field where you can bluff your way through complex projects with a "fake it till you make it" attitude. Study for certifications and then continue to study after that. Get to know professionals in the field and turn to them for advice frequently. Some day you will be able to return the favor.

2. Know your strengths/find your niche. I like to write formal documents and found a good match with my attorney training in contracts and information security policy writing. My role blends these strengths perfectly and I enjoy the challenge of looking for loopholes.

3. Don't stand still or get complacent. The information security area is always changing. For instance, new federal and state laws are really starting to grasp that information security, data security, electronic information stores, identity information, medical information, and digital forensics are areas ripe for legislation. Learn about what such legislation means for you as an information security professional.

I am really enjoying "phase two" of my professional career. One sign that it might be a good move: I now get at least as many Google Web Alerts for information security topics as for lawyer topics!

Tuesday, February 12, 2008

Adam Dodge's 2007 Higher Education Security Incidents Year in Review

Adam Dodge published his Educational Security Incidents Year in Review for 2007 on Monday. He cites both an increase in the number of incidents and an increase in the number of institutions reporting a breach - a trend we have seen in the corporate world as well. In addition, new categories were reported, including "Employee Fraud," which tends to fly under the radar in higher education reports.

I'm intrigued by the increase in reporting - some is obviously driven by legislation and policy requirements, but the overall growth may also indicate a change in general attitudes and policies from internal handling to active reporting. I'm also glad to see the increase in reporting - the openness brings attention to issues that many in education face, and public announcement of events helps increase awareness and often results in further resources being devoted to fixing security issues.

The growth in the unauthorized disclosure and loss categories is particularly interesting when analyzing this trend. The key statistic that isn't analyzed, and that would have a great impact on interpretation of the growth, is how many of these would not have been reported as incidents a year ago, or would have been classified as internal incidents. It would be interesting to map state legislation and policy changes at these institutions to the reporting that institutions in those states have done over the past few years. I would expect to see an increase in reporting after laws such as Indiana's SSN disclosure law went into place, then a slow decrease in incidents.

Why a decrease? The institutions will generally begin to shift policies and practices to ensure that further costly and embarrassing incidents do not occur, or do not fit the reporting guidelines required by law. Examples of how universities have begun to deal with this can be found in Purdue's SSN disclosure law FAQ and Indiana University's Data Protection Laws site.

Other interesting tidbits include the prevalence of employee-related issues. At almost 50% of incidents reported, we see a number that isn't far from the 60% rate found in this Dark Reading article. While we can likely presume that more incidents occurred than were reported, if the data is anywhere close to a representative sample, it should help shape higher education security programs and planning to include better training and process to prevent loss and inadvertent exposure.

There are also indicators that reporting still isn't complete - based on personal experience, user ID and password breaches are obviously under-reported. Most incidents involving single-system compromises that may have exposed a username or password won't be reported by universities unless those systems contained data that has a reporting requirement, so we can expect that this category is not a good reflection of actual compromises. Because day-to-day compromises of workstations that don't contain sensitive data don't generally have to be reported under current law, this category will likely remain under-reported, particularly in public view.

Take a look at the report - I'd be interested to hear what our readers see when they look at the numbers.

Thursday, February 7, 2008

HOWTO: Windows full disk encryption with TrueCrypt 5.0

UPDATE: If you're here for TrueCrypt installation details, check out our How-To for TrueCrypt 6.0a.

As promised, I've begun to test TrueCrypt's full disk encryption capability. For personal, one-off full disk encryption, and particularly for free, it appears to be a compelling product. It doesn't have any of the enterprise features that you will find in many of the current commercial products - there is no provision for key escrow, central reporting, or other features suited to enterprise use, but the software itself has a clean interface and is reasonably straightforward.

In my testing thus far, I've run into one crucial problem - I can't get a clean ISO burn of the restore disk. With no method to skip the process, and no work-around to simply check the rescue disk image against what it expects, there is no way to move past the CD/DVD check screen without some trickery.

I've tried on multiple machines and burned the image using three different ISO recording packages - all of which work for other ISO files - and TrueCrypt refuses to recognize the burn. The good news is that you can fool it for testing purposes - grab the free Microsoft virtual CD mounting program here.

You'll need to install and start the driver included in the package first, then you can mount the ISO. This doesn't do you any good for actual rescue - but it will let you successfully test the full volume encryption.
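
If you'd like to check a burn yourself, one rough diagnostic is to hash the image and the first image-sized chunk of the disc and compare. This is a sketch, not TrueCrypt's own verification: device paths vary by platform ('/dev/cdrom' here is Linux-style; raw device reads on Windows are less predictable), and the image filename is simply the default name from my test:

    import hashlib
    import os

    def sha1_prefix(path, limit):
        # Hash the first `limit` bytes of a file or raw device.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            remaining = limit
            while remaining > 0:
                chunk = f.read(min(65536, remaining))
                if not chunk:
                    break
                h.update(chunk)
                remaining -= len(chunk)
        return h.hexdigest()

    iso = "TrueCrypt Rescue Disk.iso"
    size = os.path.getsize(iso)
    print(sha1_prefix(iso, size))
    print(sha1_prefix("/dev/cdrom", size))    # should match for a good burn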

If you are taking TrueCrypt's full disk encryption for a test drive, make sure you do it on a test machine first! This is entirely at your own risk.

TrueCrypt full disk encryption walkthrough

1. Download TrueCrypt and install it.

2. Start TrueCrypt, and select System, then Encrypt System Partition/Drive.



3. Select Encrypt System Partition/Drive. TrueCrypt will spend a moment or two detecting hidden sectors, and will then display a menu asking for the number of operating systems. In this example, there is only one operating system, so we will select Single-boot, then click Next.


4. Select your encryption options. I'll select AES as a reasonable choice - there are a number of schemes, including multi-algorithm options if you're particularly paranoid or have special encryption requirements. Note that RIPEMD-160 is the only supported hash algorithm for system volume encryption.



5. You will be asked to create a password - passwords over 20 characters are suggested by TrueCrypt. It will then use mouse movements to generate a random seed to feed into the encryption algorithm. Click Next again on the next screen unless you want to re-generate your keys.

6. Now you will have to create a rescue disk. This disk can restore a damaged boot loader, master keys, or other critical data, and will also allow permanent decryption in the case of Windows OS problems. It also contains the contents of the disk's first cylinder, where your bootloader usually resides. Provide a filename and location for the rescue disk image.

7. TrueCrypt will now ask you to burn the rescue disk image to CD/DVD. You cannot proceed without allowing TrueCrypt to verify that this has been done.



8. If you make it this far - and as I mentioned earlier, some burning software appears not to be TrueCrypt ISO friendly - then you're ready to go on with the encryption process. First, you will receive confirmation that your rescue CD is valid.


9. Now you need to choose your wipe mode - this is how remnants of your old, unencrypted data will be wiped from the disk. Select the mode that you're most comfortable with - for my own use, I'll select 3-pass wiping as a reasonable option (see the sketch after step 12).


10. TrueCrypt will now perform a System Encryption Pretest - it will install the bootloader and require your password to get into the OS.

11. Once you've rebooted and successfully entered your password, you will receive a success message. TrueCrypt will then ask you to print the documentation for your rescue disk for future reference. Click OK, and you will move on to the encryption process.


12. Your time to completion will depend on usage of the system, the size and speed of the disk, and a few other factors such as the wipe mode you selected. In my small-scale test, a 4 GB test VM partition encrypted in about 15 minutes. I would expect non-virtual machines to see a performance boost over that, and machines that aren't seeing active use should move along at a nice clip.
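
As promised in step 9, here's the idea behind multi-pass wiping in a few lines: overwrite the old data several times so remnants can't be recovered. This toy version operates on a single file and is illustrative only - journaling filesystems and wear-levelled flash can retain copies you never touch, which is why TrueCrypt does its wiping at the disk level during encryption:

    import os

    def three_pass_wipe(path):
        # Overwrite a file with random bytes three times, then delete it.
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(3):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)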

What does it look like when you're done? Well, on boot, you'll see a DOS style prompt for your password, after which everything else acts just like your normal machine.


What's next? I'd like to find out if others are having similar issues with TrueCrypt's ISO creation process, and I'm interested in seeing performance differences between this and commercial products - hopefully someone with a nice test lab will benchmark them. MacOS support for encrypted volumes would be a great addition, and is one that I hope that the MacOS TrueCrypt port team tackles in the near future - I haven't found a vendor providing OS X full disk encryption yet, and that's definitely something the market needs.

Are security departments wasting their time?

An article on Dark Reading got a lot of play time in the office today, and a friend asked me about my response. Here's what I sent back:

"Only 3 percent of the vulnerabilities that are discovered are ever exploited".

Many vulnerabilities are patched before exploits are widely released. Many more vulnerabilities have relatively narrow exposure, or require in-depth knowledge. Not having an active vulnerability testing community would mean that the low-hanging fruit - easily exploited vulnerabilities - might be even easier to find.

In addition, throwing a percentage out without qualifiers is suspect - three percent of all vulnerabilities? Three percent of vulnerabilities granting admin-level access? Statistics without perspective are difficult to judge on their merit.

In today's IT workplace, it is difficult to justify not actively patching vulnerabilities and monitoring for them. Any audit firm that found you willfully ignoring vulnerability monitoring, testing, and patching would send a very nasty note to your management. If you are exposing web applications to the world, and you have customer data, and you don't make at least a reasonable attempt to secure your code, most people won't support your claim that you've done your due diligence.

Comparing vulnerability research to automotive safety research isn't as apt a comparison as the illustration makes it seem. To expand the metaphor, if vulnerabilities are seen as similar to shooting arrows at the sunroof of a vehicle, then one has to presume that there are hundreds, if not thousands, of skilled archers, even larger numbers of amateurs, and even further hordes of automatic arrow-throwing machines shooting at your vehicle.

In addition, all of them can get to your vehicle from anywhere in the world. You may have a shield up, but you probably allow some holes in the shield because people inside need access, and those holes may let arrows in. If the arrows penetrate, they often become arrow shooters themselves, and can shoot arrows into the rest of your vehicles behind your shield.

Is vulnerability research the end-all solution to security? No. Is it necessary? At least in the foreseeable future it will be. The perfectly safe car hasn't been built, and the perfectly secure computer hasn't been either. It is one aspect of a full information security program. There is a balance to be struck between searching for and fixing vulnerabilities, and ensuring that the system isn't vulnerable in the first place. This is all part of the development lifecycle - and no part of it should be ignored.

After we deal with vulnerability research, we have to deal with strategies. Security strategies revolving around a single computer are definitely necessary - you can't ignore the individual building blocks. Even if the keystone is important to the arch, it doesn't mean much without the other building blocks to make it necessary. That's a layer - protecting endpoints, but again, it isn't the only thing you do. You have to assess risk, and apply controls based on the criticality and sensitivity of the system or systems you are protecting. So yes, you protect one system. But that's just at the first layer. Then you protect other systems at the network layer, and then you protect your border, and so on.

Security professionals realize that security is seen in shades of grey. The only completely secure system is one that's turned off - and somebody will steal that if you're not watching it! Again, layers and assessment are the keys.

Tippett comments that people believe that perfect process can make an organization more secure. Without question that's true - not false! Most security professionals would take any level of success, if it was better than what existed before, and if they understood at least roughly what the rate was. If you have a well designed process, and you use it, and it addresses a valid threat, yes, that is better than not doing it, or doing it haphazardly. Doing it at all is better in some cases than not doing it ever, as long as you understand that you don't have complete coverage.

Will statistics show that? It depends on the other layers, random chance, and of course, what other risks there are. Security reporting rarely captures all of the factors in the equation, and many organizations don't report information in a useful way - willfully or not, it doesn't matter.

Patching your systems without any firewalling will probably fail - even if your patching process is perfect. But, it will help prevent you from falling prey to exploits of the vulnerabilities your patching program took care of.

Tippett suggests enabling default deny on routers. In general, routers use access control lists, and the normal recommendation for firewalls is to set them to default deny. This may be a technical difference that the writer didn't capture - so I won't argue the semantics. In either case, proper router ACLs and firewall rules are useful, and a good part of the world knows it, but the move these days is to get proper outbound rules and monitoring, and to use better anomaly detection.

Finally, security awareness is obviously critical, but you have to assess the risks! Incident classification - impact and severity - is crucial here. Reducing your incidents by 30% is not a good tradeoff if you're only preventing minor security issues and missing the one major virus infection that leads to the trojaning of your highest-value fileserver.

In the end, the real dilemma that we face is assessing our risk, and then using the assets - time, people, and money - to the best of our ability to meet those risks. That requires brutal honesty, knowledge of what your threats really are, and a willingness to face risks rather than to deny their existence.

Archer photo credit to foundphotoslj

Wednesday, February 6, 2008

TrueCrypt 5.0 - Full disk boot encryption

TrueCrypt 5.0 is out, and full disk, pre-boot authenticated encryption is now available. I can't wait to test this out, as it provides an interesting alternative to current commercial products. The description states:

...ability to encrypt a system partition/drive (i.e. a partition/drive where Windows is installed) with pre-boot authentication (anyone who wants to gain access and use the system, read and write files, etc., needs to enter the correct password each time before the system starts).
I'll post more after I've had a chance to try it out.

Tuesday, February 5, 2008

Credit reporting - getting your credit score for free


As I discussed recently, you don't have to have your credit score to monitor your credit for identity theft.

Lots of people still want to see their credit score, and it does give you a reasonable gut feel for how your credit is doing. Fortunately, MyMoneyBlog has five ways you can get your credit score for free.

Remember that each reporting agency and method is likely to be somewhat different, so don't expect scores from multiple agencies to be identical. If you're trying to get a, er, ballpark idea of your credit rating, and you want to watch for big drops, these might be useful options for you.

Monday, February 4, 2008

How secure are your SMS messages?

A recent scandal in Detroit caused Mike Wendland of the Detroit Free Press to look into how long the major cell companies keep text messages on their servers as part of an article on text messaging security. The responses are interesting:

  • Sprint retains messages for approximately two weeks to ensure delivery
  • AT&T keeps messages for 72 hours
  • Verizon did not provide a timeframe but noted that they keep the messages for a "very short time".
Text messages are still plaintext and provide no real security - and they can be kept on the receiving phone - but these numbers may help alleviate concerns about a history of your text messages being kept around to haunt you.

None of these retention periods are spelled out in the carriers' contracts - so there is a place in the industry for an MVNO to sell secure, encrypted phone-to-phone communication and text messaging to subscribers.
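
Until such a carrier shows up, the only way to get confidentiality is to encrypt the message before it leaves the phone. Here's a minimal sketch of the concept using the Python cryptography library's Fernet recipe (an authenticated AES construction) - the shared key and the library's availability on a handset are both assumptions, and note that the token blows well past the 140-octet SMS payload budget, which is part of why this is a product opportunity rather than a weekend hack:

    from cryptography.fernet import Fernet

    # A shared key both phones would need to hold - key distribution is the
    # hard part, and exactly what a secure-messaging carrier would solve.
    key = Fernet.generate_key()
    f = Fernet(key)

    token = f.encrypt(b"meet at 8")    # what would travel in place of the SMS
    print(len(token), token)           # ~100 bytes even for a tiny message
    print(f.decrypt(token))            # b'meet at 8' on the receiving end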