Institutions often have a well-defined incident response process, and may have either in-house or contracted forensic and incident analysis professionals. That response process needs:
- Trained support for end users, servers, and the network; that support must have
- An escalation process that leads to
- A central group or team responsible for investigating incidents, which has
- A documented process using
- Standardized tools, which help shape
- A decision process, from which
- A report is written in a common, understandable format, which
- Guides management and further responses
Communicating this process is also important: any step that isn't properly communicated can lead to a failure - either in the process itself, or in its implementation.
Once the process starts, however, an investigation can reach one of three possible outcomes. This post focuses on the third, but first we need to discuss the other two. In these scenarios, I use two definitions:
event - a documented occurrence that leads to an investigation, and incident - an event that, when investigated, turns out to be an actual compromise, exploit, or other security breach.
- There is a real incident, and responses must handle the compromise, data loss, or other event. This may be part of an ongoing cycle, as some compromises lead to further investigation.
- There may be a real incident, but insufficient data is available to assess the situation. In this case, response is typically to restore operation and to verify that the system or application fully meets current standards. This is typically accompanied by further monitoring of the system to ensure that a second compromise does not occur due to an unknown or undocumented problem.
- Finally, the incident may be a false positive. In this case, the goal is to prove that the event was not an incident. This can be as difficult as, if not more difficult than, proving that an incident occurred.
How can your organization handle forensics for these false positives? Often, the best route is to make five checks. You may pick and choose among them depending on the system, and you might even add event-appropriate checks beyond this list, but in most cases all five apply:
- Verify that there is no sign of the reported event. If a system is reported to have acted like a system infected with a trojan, then verify that the system does not display any of the characteristics of a trojan-infected system.
- Check that the system or application matches its documented configuration. This is where Tripwire and other file integrity checkers may come in handy. In some cases, simply checking MD5 sums and directory file lists against a matching system may help.
- Check local logs for signs of compromise.
- Check netflows, firewall logs, and other network traffic logs to determine if the system was inappropriately accessed or if it attempted to connect to remote sites.
- Finally, look for a plausible cause for the behavior that caused the original alarm. In some cases, this is user error, or a misunderstanding, or a process that resulted in unexpected results.
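The second check above - comparing MD5 sums against a known-good system - can be scripted without a full integrity-checking product. Here is a minimal sketch; the `compare_to_baseline` function and the baseline format (a mapping of relative paths to MD5 digests, perhaps captured earlier from a matching system) are my own illustration, not a specific tool's format:

```python
import hashlib
from pathlib import Path

def file_md5(path):
    """Return the hex MD5 digest of a file, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def compare_to_baseline(baseline, root):
    """Compare files under `root` to a {relative_path: md5} baseline.

    Returns three lists of relative paths:
    modified (hash differs), missing (in baseline but not on disk),
    and unexpected (on disk but not in baseline).
    """
    root = Path(root)
    current = {str(p.relative_to(root)): file_md5(p)
               for p in root.rglob("*") if p.is_file()}
    modified = [p for p, h in baseline.items()
                if p in current and current[p] != h]
    missing = [p for p in baseline if p not in current]
    unexpected = [p for p in current if p not in baseline]
    return modified, missing, unexpected
```

Any file in the `modified` or `unexpected` lists deserves a closer look; a clean run is one piece of evidence that the system still matches its documented configuration.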
In most cases, false positive forensics are a best-effort task - proving beyond a shadow of a doubt that nothing happened is difficult on all but the best documented and monitored systems, and the great majority of workstations - and even servers - don't fall into that category.
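The network-traffic check above - searching netflows and firewall logs for the suspect host - often reduces to filtering exported log lines by IP address. A minimal sketch, assuming a hypothetical space-delimited export of `timestamp src_ip dst_ip dst_port action`; adapt the field positions to whatever your firewall or flow collector actually produces:

```python
def find_host_traffic(log_lines, host_ip):
    """Return log lines where host_ip appears as source or destination.

    Assumes each line looks like (hypothetical format):
        timestamp src_ip dst_ip dst_port action
    """
    hits = []
    for line in log_lines:
        fields = line.split()
        # Field 1 is the source IP, field 2 the destination IP
        # in this assumed layout; malformed lines are skipped.
        if len(fields) >= 5 and host_ip in (fields[1], fields[2]):
            hits.append(line)
    return hits
```

An empty result over the relevant time window supports the "no inappropriate access" conclusion; any hits give you concrete connections to explain or rule out.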
Once your checks are complete, two last tasks remain - documentation and communication. A report, even a simple "no sign of compromise", is required, and you should communicate the result to the affected user and their support person. This is where appropriate thanks will pay off during future real events, and it can also serve as an excellent opportunity to promote security awareness.