
Carbon Black Response Timeliner

Incident response is a challenging career. As responders, we must do our best to keep up to date with the latest attack trends, malware and forensic techniques. Throughout my career as a responder, I’ve had the privilege to use many third-party solutions to aid in responding. One solution I’ve spent the last 3+ years working with, developing new tools for and pushing the limits of is Carbon Black Response. A few of the main reasons this is possible are their awesome developer network and their extensive, well-documented APIs. As a responder, time is everything: from the moment you get a phone call at 2am about a customer being compromised, to the first indicator of compromise identified. Being able to respond quickly and at scale, without melting endpoints, while ensuring data integrity and security is a must for any IR toolset. It is for these reasons that I write scripts that leverage the CBR APIs to aid in my response efforts and automate as much as possible. I have a saying: “spend less time fighting technology and more time fighting bad guys!”

In this blog post, I'll be open sourcing a tool called CBR Timeliner. In future posts, I'll be open sourcing additional tools for CBR, so stay tuned! These are very simple tools, and while the code is far from perfect, I feel the concepts are what matter most.

CBR Timeliner

Carbon Black Response has a nice feature called Investigations. Their implementation of investigations is very simple: you tag events identified in real-time process data, and the core fields of each event are collected and stored in an investigation, tracked by an incremental ID starting at 1. Per their documentation, CBR has 6 core event types you can tag and save into your investigation. I’ve listed the 6 types below:
  • Modload
  • Netconn
  • Regmod
  • Childproc
  • Crossproc
  • Filemod
Additional information on these event types can be found here:
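To make the tagging model concrete, here is a minimal sketch of pulling an investigation's tagged events over the CBR REST API. Note that the exact endpoint path below is an assumption for illustration (check your server's API documentation); the `X-Auth-Token` header is the standard CBR authentication scheme.

```python
# Hedged sketch: fetch tagged events for a given investigation ID.
# ASSUMPTION: the endpoint path is illustrative -- verify it against
# your CBR server's API documentation before use.
import json
import urllib.request


def investigation_url(base_url, investigation_id):
    # Build the (assumed) endpoint for a single investigation's events.
    return "%s/api/investigation/%d/events" % (base_url.rstrip("/"), investigation_id)


def fetch_tagged_events(base_url, api_token, investigation_id):
    req = urllib.request.Request(
        investigation_url(base_url, investigation_id),
        headers={"X-Auth-Token": api_token},  # standard CBR API auth header
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Each returned event would then carry its type (one of the 6 above), a timestamp, and the core fields CBR stored when you tagged it.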
The main purpose of CBR Timeliner is to organize the tagged events for a specific investigation ID into a formal timeline (basically a super timeline for tagged CB events), or to export the events by type. With this simple script, responders leveraging CBR can generate timelines based on a given investigation ID. The image below outlines the 7 output files produced by this script:

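The timelining step itself boils down to a single sort-and-group pass: one chronologically ordered master timeline plus one timeline per event type, which accounts for the 7 outputs. The sketch below assumes events carry at least a timestamp, a type, and a hostname; the field names are illustrative, not CBR's exact schema.

```python
# Minimal sketch of the timelining step: sort all tagged events by
# timestamp, then fan them out into a master timeline plus one
# timeline per event type (7 outputs total). Field names are assumed.
from collections import defaultdict

EVENT_TYPES = ["modload", "netconn", "regmod", "childproc", "crossproc", "filemod"]


def build_timelines(events):
    timelines = defaultdict(list)
    # ISO 8601 timestamps sort correctly as strings
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        timelines["master"].append(ev)    # super timeline of everything
        timelines[ev["type"]].append(ev)  # per-type timeline
    return dict(timelines)


sample = [
    {"timestamp": "2018-01-02T10:05:00Z", "type": "netconn", "hostname": "HOST-A"},
    {"timestamp": "2018-01-02T10:01:00Z", "type": "childproc", "hostname": "HOST-A"},
    {"timestamp": "2018-01-02T10:03:00Z", "type": "filemod", "hostname": "HOST-B"},
]
timelines = build_timelines(sample)
```

From there, each timeline would simply be written out as its own CSV file.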
As an added benefit, I also included the ability to export timelines at a per-host level. The main concept behind these timelines is as follows:
  1. Identify gaps in your timeline where you may have missed a key event: lateral movement, malware, exfil, etc.
  2. Identify gaps in your timeline where an attacker has gone dark (maybe the attacker took a day off during a non-US holiday?).
  3. Understand the flow of an attacker from host to host: how they moved laterally, processes they executed, staging directories, times during which the attacker was active, the initial point of ingress, common TTP overlap with past incidents/attackers, etc.
  4. Hold individual analysts accountable for the investigation performed on a given host. For larger IR cases, you typically need to divide up investigations per endpoint among consultants and keep track of the level of analysis performed on each host, and by which analyst. This also helps teach newer consultants how to perform IR at scale with CBR, in addition to understanding the artifacts we collect per operating system and why they are relevant, rather than just throwing tools at systems and hoping to get results.
  5. Combine artifacts from CBR (real-time process metadata) with live forensic evidence to complete the story. While real-time process information is amazing when you have a live attacker in your environment, you should always reach down to key endpoints and collect/analyze evidence such as registry hives (NTUSER.DAT/USRCLASS.DAT), the MFT, appcompat, amcache, event logs, prefetch, bitmap cache, etc. Without some of these key artifacts, you may not get the entire story.
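Points 1 and 2 above can be approached programmatically: group the tagged events by hostname, then flag quiet periods longer than some threshold as candidate gaps worth a second look. This is a rough sketch under assumed field names, not part of the tool itself.

```python
# Sketch of the per-host view and the "gap" idea: group events by
# hostname, then surface windows with no tagged activity for at least
# `min_gap`. Field names (timestamp, hostname) are assumptions.
from collections import defaultdict
from datetime import datetime, timedelta


def per_host(events):
    hosts = defaultdict(list)
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        hosts[ev["hostname"]].append(ev)
    return dict(hosts)


def find_gaps(events, min_gap=timedelta(hours=12)):
    # Return (start, end) pairs where no tagged event occurred for >= min_gap.
    times = sorted(
        datetime.strptime(e["timestamp"], "%Y-%m-%dT%H:%M:%SZ") for e in events
    )
    return [(a, b) for a, b in zip(times, times[1:]) if b - a >= min_gap]
```

A flagged gap doesn't prove you missed something, but it tells you where to go dig: either the attacker was quiet, or an event on that host still needs to be found and tagged.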
Here’s a quick example of a master timeline output:

Another example of a single event type timeline (childproc):

Special thanks to Mike Scutt (@OMGAPT), Jason Garman and the CB team for all the help (3 years and counting). Tools: APT Simulator

