
Carbon Black Response: Intel Tester

In this blog post, I’m releasing another tool for Carbon Black Response called “CBR: Intel Tester”. This is a very simple script that takes a list of Carbon Black Response queries and a specified start time as arguments inside the config.json file. The script then takes each query and runs a daily search in CBR, starting from the specified start time until it reaches the day you executed the script. I usually set the start date to 30-45 days prior, but it all depends on your CBR retention setup. The script writes its output to a pipe-separated file called metrics.csv with the following fields:

  • Date the query was run
  • Total results for the query for the date it was run
  • The query
  • Title of the query
  • Query description
  • Query reference link
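For reference, a hypothetical config.json might look like the following. The field names here are illustrative assumptions; check the sample config.json in the GitHub project for the exact schema:

```json
{
  "start_date": "2019-01-01",
  "queries": [
    {
      "title": "powershell usage",
      "query": "process_name:powershell.exe",
      "description": "Daily count of PowerShell executions",
      "reference": "https://example.com/reference"
    }
  ]
}
```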

When you run the script, you will see some standard output, including the name of the query being run, the date the query pulled data from, and the CBR query itself. Below are two separate outputs during script execution.
Standard output for the query "powershell usage"

Standard output for the query "jp cert reconnaissance"

Currently, the script is single threaded, so depending on the number of queries you have, the script may take a while. After completion, your output file should look like the following:

Example output for metrics.csv
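The daily-search loop behind this output can be sketched as follows. This is a minimal illustration, not the actual tool: `cbr_search` is a hypothetical stand-in for the real Carbon Black Response API call, and the query fields mirror the metrics.csv columns described above.

```python
import csv
from datetime import date, timedelta

def daily_windows(start, end):
    """Yield one (day_start, day_end) pair per day from start to end, inclusive."""
    current = start
    while current <= end:
        yield current, current + timedelta(days=1)
        current += timedelta(days=1)

def run_queries(queries, start, end, cbr_search, out_path="metrics.csv"):
    """Run each query once per day and write pipe-separated results.

    cbr_search(query, day_start, day_end) is a placeholder for the real
    CBR API call; it should return the total result count for that window.
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f, delimiter="|")
        for q in queries:
            for day_start, day_end in daily_windows(start, end):
                total = cbr_search(q["query"], day_start, day_end)
                writer.writerow([day_start.isoformat(), total,
                                 q["query"], q["title"],
                                 q["description"], q["reference"]])
```

Because each query runs once per day in sequence, the total number of API calls is (number of queries) x (number of days), which is why a long look-back window takes a while on a single thread.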

We can review this output file in its current form, but most of the time I create a bar chart for the query of interest or a pivot table to identify any potential anomalies. Let’s check out the bar chart first. I will use the example query provided in the Intel Tester GitHub project (see the
config.json file for other examples) named jp_cert_spread_of_infection.

Daily results for the query "Spread of Infection"

We can see that within our current Carbon Black Response instance, this query has a handful of results on some days and zero results on others. In the second example, we take the results of all the queries and create a pivot table to view them together.
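A pivot table like this is easy to build from metrics.csv with pandas. This is a sketch under the assumption that the columns follow the field order listed earlier; the column names themselves are my own labels, not ones the script emits:

```python
import pandas as pd

# Assumed column order, matching the fields described above.
COLUMNS = ["date", "total_results", "query", "title", "description", "reference"]

def load_metrics(path):
    """Read the pipe-separated metrics.csv produced by the script."""
    return pd.read_csv(path, sep="|", names=COLUMNS)

def pivot_results(df):
    """Rows are dates, columns are query titles, values are daily result totals."""
    return df.pivot_table(index="date", columns="title",
                          values="total_results", aggfunc="sum", fill_value=0)
```

From here, `pivot_results(load_metrics("metrics.csv"))` gives one row per day with a column per query, which makes day-over-day anomalies easy to spot.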

The results vary depending on the queries you use, but this approach can answer simple questions like “How frequently is PowerShell used in my organization, and on which days?”, “On what days is this user account running the process evil.exe?”, or “How often is a specific user account active based on process executions?” We won’t get into stacking the results, as that topic deserves its own blog post ;).

I hope this script proves useful for those using Carbon Black Response. Happy Hunting!


Special thanks to Mike Scutt (@OMGAPT), Jason Garman and the CB team for all the help.

