Saturday, August 30, 2008

Hashing: Making it easier for users

Recently, I've been pondering how infrequently most people take advantage of the MD5 or SHA-1/SHA-256 hashes posted on software download pages. Here are two ways to make hash checks easy to use on a daily basis.

HashTab adds a properties tab to files that lists commonly referenced hashes for the file. It's an easy way to verify hashes in Windows for users who don't keep Cygwin handy.

DownThemAll!, a popular and useful download plugin for Firefox, will also verify MD5, SHA-1, SHA-256, and other hashes.


Of course, using md5sum on your *nix box is an easy option as well. How do you integrate hash checks for yourself, and how do you get your users to use them?
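If you'd rather script the comparison than eyeball hex strings, a few lines of Python will do it. This is just a minimal sketch - the file name and published hash are whatever the download page gives you:

    import hashlib
    import sys

    def file_digest(path, algorithm="sha256", chunk_size=1 << 20):
        """Hash a file in chunks so large downloads don't need to fit in memory."""
        h = hashlib.new(algorithm)
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify.py <downloaded-file> <published-hash> [md5|sha1|sha256]
        path, published = sys.argv[1], sys.argv[2].lower()
        algorithm = sys.argv[3] if len(sys.argv) > 3 else "sha256"
        actual = file_digest(path, algorithm)
        print("OK" if actual == published else "MISMATCH: got " + actual)

The same script works for vendors that publish MD5, SHA-1, or SHA-256 values - just pass the matching algorithm name.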

Friday, August 29, 2008

iPhone Security Bypass Fix: Coming in September

Computerworld cites an Apple representative as stating that a fix for the home-button security bypass that I posted about on Wednesday will be available in September.

"The minor iPhone security issue, which surfaced this week, is fixed in a software update which will be released in September," said Apple spokeswoman Jennifer Bowcock in an e-mail Thursday.
Computerworld also notes that the same issue was patched prior to the 2.0 release - patched versions were available in January 2008. Patch regressions like this are a serious concern for enterprise and home users alike.

Wednesday, August 27, 2008

Viruses in Space


A BBC News story says that NASA has confirmed that laptops on the ISS were infected with Gammima.AG. The story goes on to note that the laptops don't run AV. While the laptops aren't part of the core operating capability of the ISS, the fact that a virus made it to the station at all shows that there are potential gaps in their information security coverage. It will be interesting to see where the virus came from once further analysis is done - the theory that a thumb drive or other portable media carried it aboard is quite believable.

Creative Commons image courtesy of Flickr user Accidental Angel.

iPhone: In An Emergency, Expose All Of My Contact Data

Gizmodo reports by way of Mac Rumors that simply hitting "Emergency Call" on a locked iPhone can allow access to contacts, email, Safari, and SMS without knowing the user's passcode.

1. Select emergency call from the lock screen.
2. Quickly double tap the home button.

I've verified this, and it does work - the trick was hitting the home button quickly.

This can be avoided by setting the Home button default:

1. Click on Settings.
2. Click on General.
3. Click on Home Button.
4. Make "Home" or "iPod" our default selection.

With a large number of iPhone users in my organization, I'm sure I'll get some mileage out of this one.

Tuesday, August 26, 2008

Practical Security: Dealing With Drug Spam Using Google Alerts

I've been using Google Alerts for a while to monitor for drug and gambling spam placed in compromised user accounts. If you provide web space for any reasonably sized organization, or if you have the ability to publish web pages and want to monitor them, Google Alerts can be a great way to add an additional layer of defense.

To build an alert, simply identify common keywords from the spam sites, then add them to an alert. You'll note that I remove PDF, PowerPoint, and Microsoft Word .doc files by default, as those are often used in research or presentations.

You can use anything you can do via the normal Google search syntax, allowing you to create reasonably powerful tripwires.

site:(your site) -pdf -ppt -doc "poker" OR "xanax" OR "viagra" OR "cialis"
Once you've built your alerts, build a filter for them, and check the folder. Don't forget to set your alerts to plain text mode in your preferences if you want to view them more easily.
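If you manage alerts for more than one site, a short script can keep the queries consistent. Here's a rough sketch - the site name, keywords, and excluded file types below are placeholders, so swap in your own:

    # Placeholders throughout -- substitute your own site and keyword list.
    SITE = "www.example.edu"
    EXCLUDED_TYPES = ["pdf", "ppt", "doc"]
    KEYWORDS = ["poker", "xanax", "viagra", "cialis"]

    def build_alert_query(site, excluded_types, keywords):
        """Assemble a Google Alerts query string from the pieces above."""
        exclusions = " ".join(f"-{ext}" for ext in excluded_types)
        terms = " OR ".join(f'"{kw}"' for kw in keywords)
        return f"site:{site} {exclusions} {terms}"

    print(build_alert_query(SITE, EXCLUDED_TYPES, KEYWORDS))
    # site:www.example.edu -pdf -ppt -doc "poker" OR "xanax" OR "viagra" OR "cialis"

Paste the output into a new alert, and repeat per site or per keyword set as needed.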

Friday, August 22, 2008

MacOS Security Guidelines and Best Practices: Corsaire's "Securing Mac OS X Leopard (10.5)"

Daniel Cuthbert of Corsaire has published "Securing Mac OS X Leopard (10.5)", a lockdown and general security guide for Mac OS X 10.5. If you're building a Mac OS X security guideline, you should take a look.

Reminder: Set your Gmail to require SSL

Gmail's new setting to require SSL makes the habit of typing "https" unnecessary. Simply select "Settings" from the top of your Gmail page, then at the bottom of the General tab, under "Browser connection", select the "Always use https" radio button:


Your session will expire, but a page reload will drop you back in, and you'll be using SSL from there on.

This is one quick and easy fix that I'll be emailing quite a few people about.

Red Hat System Compromise Results in Updated, Signed OpenSSH Packages

Red Hat has released updated OpenSSH packages for Red Hat Enterprise Linux 4 i386 and x86_64 and Red Hat Enterprise Linux 5 x86_64 due to a system compromise that allowed the intruders to sign OpenSSH packages for those versions of RHEL. The Fedora infrastructure was also compromised; however, the investigation there seems to indicate that no changes were made to the distribution.

The Fedora signing key is being updated due to the intrusion, even though it appears not to have been exposed:

"Based on our review to date, the passphrase was not used during the time of the intrusion on the system and the passphrase is not stored on any of the Fedora servers."
This change to Fedora's signing key may require changes by all Fedora system administrators, and more details are promised if needed.

On the Red Hat side, they are careful to note that Red Hat Network subscribers would not have received the modified packages via automatic updates. If you download OpenSSH packages from any other source, you should carefully verify the MD5 hashes against the hashes listed by Red Hat.
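If you have the downloaded packages in hand, rpm can do the digest and signature checking for you, and a quick wrapper makes it easy to run over a batch of files. A minimal sketch - the script name is made up, and it assumes rpm and the Red Hat GPG key are already present on the box:

    import subprocess
    import sys

    # Usage: python checksig.py openssh-*.rpm  (pass the files you actually downloaded)
    for pkg in sys.argv[1:]:
        # 'rpm --checksig' verifies the package digests and GPG signature against
        # the keys imported into the local rpm database.
        result = subprocess.run(["rpm", "--checksig", pkg],
                                capture_output=True, text=True)
        print(result.stdout.strip())
        if result.returncode != 0:
            print(f"WARNING: signature/digest check failed for {pkg}")

Don't stop there, though - still compare the MD5s against the values in the advisory itself, since the signing process is exactly the thing that broke down here.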

The question becomes: What are Red Hat's signing key management processes, and how did they break down to allow an intruder to sign packages? What level of access did the intruders have to the signing servers?

There are a number of ways to protect against this type of compromise, including restricting network access so that the signing servers only make internally initiated pulls of files to be signed and only push signed files back out.

Today's reminder? Proper key management, particularly for keys that are trusted by customers, is crucial!

Thursday, August 21, 2008

TrueCrypt 6.0a How To: Free Full Disk Encryption in Windows XP

Our HOWTO: Windows full disk encryption with TrueCrypt 5.0 article is the most popular article we've published on Devil's Advocate Security, with over 900 page views, and we're past due for an updated article covering TrueCrypt 6.0a.

A number of changes were made between 5.0 and 6.0a. These include:

  • Support for encrypted hidden operating systems with plausible deniability
  • Hidden volume creation for MacOS and Linux
  • Multi-core/multi-processor parallelized encryption support
  • Support for full drive encryption in XP and Vista even with extended and/or logical partitions
  • A new volume format which increases performance, reliability, and expandability
  • A number of bug fixes and other features.
The last time I wrote about TrueCrypt, I noted that Mac OS full disk encryption wasn't on the market yet. Since then, Check Point has put a full commercial version of their full disk encryption software on the market, and other vendors have released their products into beta. I'll report here when I get my hands on them for testing.

Without further ado, here is our TrueCrypt 6.0a Windows installation walk-through.

TrueCrypt full disk encryption walkthrough

1. Download TrueCrypt and install it. Accept the license, and select "Install" as your option rather than "Extract". TrueCrypt will ask you for a number of setting options - if you are unfamiliar with them, the defaults should be reasonable for most users. Once you click Next, you'll see a message that TrueCrypt has successfully installed. Click OK, then click Finish and continue onwards.

2. Start TrueCrypt - if you did a default install, you will have a blue and white key icon on your desktop. TrueCrypt will ask you to read the tutorial if you haven't read it before. Once you're through, you'll see the TrueCrypt main window.



3. Select System, then Encrypt System Partition/Drive.


4. If you want to create a hidden operating system for plausible deniability, this is when you should select the "Hidden" option. For the purposes of this walk-through, we will simply do a "Normal" installation with the intent of protecting data, rather than hiding it.

5. Now you need to choose whether you will encrypt just the Windows system partition or the entire drive. If you have performance concerns, you may opt to encrypt just the Windows system partition; however, for the greatest security, you'll likely want to encrypt the entire drive. For this example, we will encrypt the entire drive, which is the default setting.


6. TrueCrypt will ask about Host Protected Areas, which may contain your system diagnostics, RAID tools, or other data. If you're unsure, you should likely select "no" for safety. Most programs do not store sensitive data in the HPA.


7. If you are running a multi-boot system, the next question is relevant for you. For most users, who run a single OS, selecting Single Boot is the route to take. We'll go with Single Boot for this walk-through.


8. Now you need to select your encryption options - the defaults of AES and RIPEMD-160 should be fine for most users. If you have specific compliance requirements, make sure you meet them here.


9. Type your password, or better, a strong passphrase. This will let you access your drive, so you must remember this passphrase!

10. Now TrueCrypt gathers mouse movement to generate a random seed for your encryption. Move your mouse around randomly, and then click Next to let it generate your keys.



11. TrueCrypt forces you to create a TrueCrypt Rescue Disk, which allows you to restore your boot loader if it is damaged, lost, or you otherwise cannot access the TrueCrypt volume. By default, it will save an ISO file to your My Documents folder. You will need to burn the ISO to a CD, and then let TrueCrypt verify that it works.

12. Burn your ISO with your favorite CD burning software, then verify it.

13. Now select the wipe mode that you'd like to use. For most users, a 3-pass wipe will be sufficient, although for day-to-day use, no wipe is likely OK. If you do choose a wipe mode, you will be notified that each wipe pass will increase the encryption time. Once you click Next, a window of notes about the encryption process will pop up for you to print. Click OK, then "Yes" when asked if you're ready to reboot.

14. TrueCrypt will ask you to test the encryption by rebooting. This is a good time to make sure that you have your password recorded properly! After rebooting and providing your password, the pretest is complete. Select "Encrypt", print the notes if you would like to have them available, and your drive encryption will begin.



The encryption process is typically quite fast, but will vary with the size of your drive. The demonstration drive is an 8 GB partition running in a VM; actual time to complete with no wipe was approximately 15 minutes.

15. When you reboot, simply enter your password, and your encrypted partition will unlock. Your normal OS boot will occur.


You now have a fully encrypted disk. Make sure you remember your password and keep your rescue CD in a safe place!

Friday, August 15, 2008

Where are the law enforcement information security trainees?

Richard Bejtlich asked where the law enforcement trainees are in information security classes:

"When I teach, there are a lot of military people in my classes. The rest come from private companies. I do not see many law enforcement or other legal types. I'm guessing they do not have the funds or the interest?"
I've worked with cybercrime and computer forensics training programs in the past, and my former employer had a very close relationship with both state and local law enforcement. We saw many police officers and federal agents in forensics classes learning system forensics, and we often provided expertise for those who did not have it. What we didn't see was many officers sent to network analysis or other broader information security classes - their jobs were focused on the investigation rather than on threat prevention or digital defense. Many of the classes spent a lot of time looking for predators online, which tends to be a high-profile activity for departments when they do make an arrest.

With all of that said, forensic skills are becoming more common, and training for forensics is available from organizations like Purdue's CyberForensics lab and Eastern Michigan University's Staff and Command school. Even with these resources, network forensics and similar skillsets are typically not a focus at the local level, but do become more useful for state and federal agencies.

Does this mean that our law enforcement organizations are unprepared? In some cases, yes - either because the specialized training isn't available, or because their budgets or time are restricted. In addition, many police departments continue to use antiquated IT infrastructure, and smaller departments rely on external support, or have no formal support at all. These departments are both more vulnerable and less likely to have access to the training and technology needed to do useful forensic analysis of systems. That's what regional forensic centers are seeking to help with.

I think that many security analysts would benefit from spending some time with their local police forensic analysts - perhaps by joining Infragard, or attending a local cyber forensics class. Those contacts can pay off in the future, and will help you understand what they're dealing with.

You're Doing It Wrong

XKCD is often amusing - the most recent, however, is a great one for security folks. How many good ideas is your organization doing the wrong way?

Thursday, August 14, 2008

VM infrastructure and disaster recovery

VMware's ESX/ESXi Update 2 contained what they describe as a "build timeout" which caused patched machines to expire their licenses on August 12th. This meant that VMs could not be powered on or resumed on updated machines, and that VMotion couldn't be used to move VMs to those systems. VMware has released a patch and a letter to their customers notifying them of the issue, and has flagged the issue as an alert in their knowledge base. The fixed patch requires a reboot of the VMware host, potentially requiring off-cycle maintenance for those systems that were affected.

As more infrastructure moves to a VM environment, we create the potential for greater failures when the VM host systems have issues. In this case, a single patch could prevent DR from occurring if all of your VMware systems were patched and a failure occurred. Workarounds were relatively easy if you knew what was wrong - the system date and time could be changed in the short term, or, if necessary, a pre-patch backup could be restored to the system.

How can we best plan to handle issues like this? In many ways, the same processes that system administrators have used for years to test patches will continue to serve us, but we need to have plans in place for what to do when an issue affects all VM hosts at a given patch level. This reminds me of Hoff's talk about VM infrastructure at BlackHat - we're more vulnerable than we think we are with VMs, and this patch issue is a great, relatively low-cost reminder.

So - how are you planning to handle VM infrastructure outages?

Wednesday, August 13, 2008

DEFCON 16 Badge pictures

For those who might be interested, here are images of the "HUMAN" standard DEFCON 16 attendee badge. The badge itself has solder pads for a USB port, an SD card reader, a Freescale processor, IR support, status LEDs, and more. The folks at Hack A Day have more details of the badge's capabilities.

Apparently, getting the badges out of China was a major issue this year, which led to the massive delays in badge distribution. The builder ended up shipping in a number of smaller shipments, which did clear customs. Also of note, badges from the earliest shipments did not have the SD card reader onboard, although parts kits were available to add it. Later shipments did have the SD card reader.



Note the barcode on the back - it is a 2D Data Matrix code. Unfortunately, neither of the 2D Data Matrix recognition programs (edit: NeoReader and 2D sense) I carry on my phone recognizes the matrix on the back from a default snapshot, although it appears that a black and white photocopy might work better. I'll update here once I get a good copy of it.
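If you'd rather not rely on a phone app, the same black-and-white trick can be done in software. Here's a hedged sketch using PIL plus the pylibdmtx wrapper for libdmtx - the decoder library is my assumption, and "badge_back.jpg" is just a placeholder for your own photo:

    from PIL import Image
    # pylibdmtx wraps libdmtx; assumed to be installed separately (pip install pylibdmtx)
    from pylibdmtx.pylibdmtx import decode

    # "badge_back.jpg" is a placeholder for a photo of the back of the badge.
    img = Image.open("badge_back.jpg").convert("L")  # grayscale, photocopy-style

    # Push the contrast to pure black and white; glossy, low-contrast prints
    # tend to decode better after thresholding.
    bw = img.point(lambda p: 0 if p < 128 else 255)

    for result in decode(bw):
        print(result.data)  # the raw bytes encoded in the Data Matrix symbol

The threshold value is a guess - tweak it (or try adaptive thresholding) if the symbol still won't decode.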

What else does the badge do? Per Kingpin's description:

  • By default, the badges act as IR receivers.
  • A button push puts them in transmit mode if they find an SD card. They then transmit the contents of a read-only file of up to 128 KB in the / directory of a FAT16-formatted SD card via IR to any receiver.
  • If no SD card is found, transmit mode makes the badge into a TV-B-Gone.

H.R. 4137 - P2P and Higher Education


EDUCAUSE and the American Council on Education have released an advisory regarding H.R. 4137, a recently passed bill awaiting the President's signature. Once signed, the law will require higher education institutions to do three major things:

  1. Notify students that illegal distribution of copyrighted materials may subject them to civil and criminal penalties, and describe the steps that the institution will take to detect and punish illegal distribution of copyrighted materials.
  2. Certify to the Secretary of Education that they have created plans to effectively combat unauthorized distribution of copyrighted materials. Institutions are required to consider the use of technology-based deterrents such as bandwidth shaping and traffic monitoring. Notably, institutions are not required to adopt any specific technology, and the Secretary of Education is not required to collect, review, or approve the plans.
  3. Offer alternatives to illegal file sharing "to the extent practicable".

These requirements are far less egregious than they might have been, and they leave quite a bit of room for universities to operate within. That is in large part because EDUCAUSE and other higher education organizations have been fighting this and similar requirements for some time. There is both good and bad news for institutions: the good news is that, as the EDUCAUSE memo notes, institutions operating in good faith and making reasonable efforts to comply should be in good shape. The bad news is that the negotiated rulemaking around the law must still occur, and further restrictions and requirements could result.

Flickr Creative Commons image courtesy of MacGBeing.

Tuesday, August 5, 2008

BlackHat 2008: Lessons Learned, Training, and Day One


After a four-hour delay at O'Hare - and sub-par updates from United - we managed to arrive in Vegas just after BlackHat's Sunday registration had closed. This is the first flight on which I've heard applause from the other fliers when we were told we would be taking off, and most of them laughed when one person loudly asked "Are we driving to Vegas?" after we had taxied for an interminable period of time.

Adventures in travel aside, my first BlackHat has been an interesting experience. I've previously attended other training at Caesar's, including SANS. BlackHat doesn't appear to have the conference setup down as well - wireless in our training room did not work reliably for almost a day and a half, and the food at the breaks was served on a single floor in one area. With thousands of attendees crowding in during a 15-minute break, that led to 30-minute delays in the courses.

Humorously, our class also had an issue with certificates - both missing certificates and duplicates. I'm not horribly worried about not receiving a certificate of attendance, but it was another minor issue added to the list.

How about the course content? The general feeling from two of the three of us who are attending the course segment is that our courses aren't as challenging and deep as we had expected from such a highly regarded conference. The third member of our group, who attended a Cisco course, has had glowing things to say.

I attended Tim Mullen's Microsoft Ninjitsu class. Tim is genial, has a good sense of humor, knows his stuff, and has a good supporting crew, but the content hasn't been as hardcore as I had expected. With that said, I've picked up a number of useful reminders and tidbits, particularly in terms of running a Microsoft-only network. I still won't be using ISA as a primary edge security device, but there are a number of uses for it when you have a Microsoft-specific environment to protect.

The briefings were definitely content rich - I modified my schedule, and attended quite a few good presentations.

I started with Jared DeMott's AppSec A-Z. Jared provided a broad overview of application security topics, with details on how reverse engineering works from the ground up. Jared was working from a much longer talk, and he was definitely squeezed for time. If you're new to reverse engineering, and have some CS background, the presentation was a good intro.

Dan Kaminsky's DNS Goodness talk was massively attended. With co-workers and friends in attendance, I passed and sat in on Nate Lawson's Highway to Hell: Hacking Toll Systems. Nate gave a great presentation about the toll passes used by BATA and other California tolling systems. He discussed both the privacy aspect of remotely activated devices that can be used for both tolling and simple use monitoring, as well as the hardware itself.

Interestingly, the hardware is programmable and has flash storage, allowing the devices to be updated remotely. Nate offered a number of attack vectors that this could allow, ranging from cloning to shuffling between large numbers of devices as they pass on the highway. As with many other hardware devices, basic measures would have made the devices less susceptible to attack, and less useful for tracking users. Overall? Quite an enjoyable talk - it makes me wonder what E-ZPass and I-Pass look like.

Next up was Hoff's The Four Horsemen of the Virtualization Security Apocalypse. This was one of my favorite talks. Hoff is a good showman, and he has a lot of insight into virtualization - if you're using a virtualized infrastructure, you should check his presentation out. His predictions of increased cost and lower reliability in virtual infrastructures due to a lack of high availability VM appliances, as well as infrastructure consolidation, are on the mark, and should make any security professional think twice about what their future virtualized infrastructure will look like.

One of the biggest issues that he brought up was the possibility of a VM infrastructure losing security as VM appliances fail. Virtual infrastructure isn't configured in the same way that an HA physical infrastructure is, meaning that a failure causes either a loss of service or a loss of security. As he spoke, I was struck by the need for a declarative syntax for VM architectures: basically, a rules-based system that would ensure systems always have the right elements in front of them, even if VMs in the system failed. In essence, we need a way to ensure that the infrastructure remains logically sound, even if the physical and virtualized elements of the infrastructure move or fail. We also need a way to manage VM-based appliances that scales - as we deploy IDS sensors, firewalls, WAFs, and other tools in VMs, we're going to need to make ourselves more effective.
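To make the idea concrete, here's a toy sketch of what such a declarative check might look like - every name in it is hypothetical, and a real implementation would pull the observed paths from the virtualization management layer rather than a hard-coded dictionary:

    # Declare what must sit in front of each class of workload.
    REQUIRED_PATH = {
        "web": ["edge-firewall", "waf", "ids"],
        "db": ["edge-firewall", "ids"],
    }

    # What the infrastructure currently reports for each running VM
    # (hypothetical data -- in practice, query your VM management API).
    observed_paths = {
        "web-01": ["edge-firewall", "waf", "ids"],
        "web-02": ["edge-firewall", "ids"],  # the WAF appliance has failed
        "db-01": ["edge-firewall", "ids"],
    }

    def violations():
        """Yield VMs whose traffic path no longer satisfies the declared policy."""
        for vm, path in observed_paths.items():
            role = vm.split("-")[0]
            missing = [a for a in REQUIRED_PATH.get(role, []) if a not in path]
            if missing:
                yield vm, missing

    for vm, missing in violations():
        print(f"{vm} is missing {', '.join(missing)} -- re-route or quarantine before serving traffic")

The point isn't the code - it's that the policy is declared once and continuously checked against reality, instead of being implied by whatever happens to be running.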

After a break, Bruce Potter's Malware Detection Through Network Flow Analysis was enjoyable. Bruce was selling his new tool Psyche throughout the presentation, and it sounds like he and the team working on Psyche have the right idea - unfortunately, the quoted speeds are less than a fifth of my normal flow rate. For now, a commercial solution is my best answer to replace my existing NFSen/NFDump architecture. Bruce ran long - but his content was good, and I'm sure it was useful and persuasive for those who aren't using netflow. For those who are, he captured some of the biggest issues that we face: how do we identify what isn't right, and how can we visualize it effectively to create useful monitoring capabilities?
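One simple flow-based check that doesn't need a new tool is looking for hosts talking to an unusually large number of distinct destinations, a common marker for scanning or worm behavior. A rough sketch, assuming you've already exported flows to CSV - the filename, column names, and threshold below are all placeholders:

    import csv
    from collections import defaultdict

    # Placeholders: adjust the filename, field names, and threshold to your export.
    FLOW_FILE = "flows.csv"
    FANOUT_THRESHOLD = 500  # distinct destinations per source before we flag it

    destinations = defaultdict(set)
    with open(FLOW_FILE, newline="") as f:
        for row in csv.DictReader(f):
            destinations[row["srcaddr"]].add(row["dstaddr"])

    for src, dsts in sorted(destinations.items(), key=lambda kv: len(kv[1]), reverse=True):
        if len(dsts) >= FANOUT_THRESHOLD:
            print(f"{src} contacted {len(dsts)} distinct hosts -- possible scanning or malware")

It's crude, but it's the kind of "what isn't right" question Bruce was getting at, and it scales down to whatever export your collector can produce.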

I'll run through day two, including some great talks on web application security, in my next post. Still to come? DEFCON 16, where they've already run out of badges from the first run - if you're going, you won't get a hackable badge until 3 PM tomorrow.

Friday, August 1, 2008

Heading to Blackhat: two aspirin and a glass of water coverage

It sure is a good thing that Blackhat and DEFCON are in Vegas. I'm not sure I could deal with security geeks, hax0rs, and script kiddies for a week straight anywhere else.

Here's a few tips on attending both conferences:

  • There are parties going on every night - mostly hosted by vendors and some organizations. Ask around at booths and ask early - the parties usually fill up fast. I'll be hitting up at least the OWASP/WASC party.
  • The double-edged sword of DEFCON: often, talks that are occurring at Blackhat are also occurring at DEFCON. The relaxed atmosphere of DEFCON usually makes them much more entertaining, but be careful: rooms fill up really fast at DEFCON.
  • Don't forget, a Blackhat Briefings pass gets you into DEFCON for free [as in beer].
  • Trust nothing/no one: I know this should go without saying, but there's a Wall of Sheep for a reason. Keep your WiFi radios and computers on at your own risk.
As far as the briefings go, I usually have a few that I want to attend, and then bounce around from room to room looking for something interesting. Sometimes a talk is nothing like you expected it to be based on the description, and sometimes the rooms are just packed - have alternatives.

I'll probably spend my time in AppSec mostly, but here are a few I've got earmarked:

Heading to Blackhat: Additional Briefing Coverage

As David mentioned here, several members of the DA crew are headed to Blackhat next week. While I'll be sampling a bit here and there, I'm mostly interested in web application security issues.

The briefings I plan on attending are:

Expect a number of posts from the field next week.