Carbon Black Response: Process GeoIP

In this blog post, I’m releasing another tool for Carbon Black Response called CBR: Process GeoIP. The purpose of this script is to search for specified processes in Carbon Black Response and check their network connections for anomalies using GeoIP data. I’ve used this script during an engagement to hunt through remote desktop connections, searching for any anomalies. In another engagement, where the malware had injected itself into a legitimate process such as explorer.exe, I was able to use this script to quickly dump all network connections made by explorer.exe and review the GeoIP information for each connection, along with the protocol, port, direction of traffic, and process-specific information.

In order to extract the network connections for each process, you have to make an API call for each process. To ensure you’re only getting processes with network connections, add the syntax netconn_count:[1 TO *] to each of your queries. You may also consider adding time ranges or targeting a specific port to filter down the total number of processes returned by each query.
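For example (the process names here are just illustrations), queries along these lines restrict results to processes with at least one network connection, optionally scoped by time or port:

```
process_name:explorer.exe netconn_count:[1 TO *]
process_name:mstsc.exe netconn_count:[1 TO *] start:-72h
process_name:powershell.exe netconn_count:[1 TO *] ipport:443
```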

Like most of the tools I write, this is a very simple script that reads your Carbon Black Response queries and configuration information from a config.json file. For each query in the config.json, you also need to specify an output filename. For development purposes, we’re using the MaxMind GeoLite2 City database, but feel free to tweak the script to use your preferred GeoIP service or database. Once the database is downloaded and saved to this project’s working directory (GeoLite2-City.mmdb), we can start by taking a look at the explorer.exe example below:
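As a rough illustration of the shape of that file (the key names below are hypothetical; check the repository for the real schema), a config.json for the explorer.exe example might look something like this:

```json
{
  "cb_server_url": "https://cbr.example.local",
  "api_token": "<your API token>",
  "queries": [
    {
      "query": "process_name:explorer.exe netconn_count:[1 TO *]",
      "output_file": "explorer_connections.csv"
    }
  ]
}
```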
Once executed, the script will read the query key, iterate over all matching processes, extract all the network connections, enrich each connection with GeoIP information, and write the data to the explorer_connections.csv file.
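Under the hood, each iteration of that loop boils down to something like the following sketch, using the cbapi and geoip2 Python libraries. This is a minimal sketch, not the script itself; the netconn attribute names follow cbapi’s netconn model and may need adjusting for your cbapi and server versions:

```python
import csv

import geoip2.database
import geoip2.errors
from cbapi.response import CbResponseAPI, Process

cb = CbResponseAPI()  # server URL/token come from a cbapi credential profile
reader = geoip2.database.Reader("GeoLite2-City.mmdb")

query = "process_name:explorer.exe netconn_count:[1 TO *]"
with open("explorer_connections.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["process", "hostname", "remote_ip", "remote_port",
                     "proto", "direction", "domain", "country", "city"])
    for proc in cb.select(Process).where(query):
        for conn in proc.netconns:
            try:
                geo = reader.city(str(conn.remote_ip))
                country, city = geo.country.name, geo.city.name
            except geoip2.errors.AddressNotFoundError:
                # Private/unroutable addresses won't resolve in GeoLite2
                country, city = "unknown", "unknown"
            writer.writerow([proc.process_name, proc.hostname,
                             conn.remote_ip, conn.remote_port, conn.proto,
                             conn.direction, conn.domain, country, city])
```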
While the script is running, you should see standard output for each query in your console/IDE. You can see from the image above that a total of 1,341 processes matched our query, and the script is iterating over each process to extract its network connections. Once the script finishes, you can open the corresponding CSV file for your query and review the results. As usual, I tend to create a quick pivot table to view the results, as outlined below:
If you really want to have fun visualizing the data by country, you can use the Maps feature in both Excel and Google Sheets, per the example below:

While the image above isn’t a great example of using the GeoIP feature within the script, we can still use the raw data to find anomalies now that we have the network connections extracted. If we take a look at the powershell.exe query inside the pivot table, we can see a callout to raw[.]githubusercontent[.]com along with its PowerShell command-line arguments:

If possible, you should consider adding threat feed lookups to the post-processed network connections to add additional context to the results.
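As a simple illustration, a post-processing pass could check each remote IP against a locally cached indicator list. The file name bad_ips.txt is hypothetical, and the column names match the CSV layout sketched above:

```python
import csv

# bad_ips.txt is a hypothetical locally cached feed: one IP per line
with open("bad_ips.txt") as feed:
    bad_ips = {line.strip() for line in feed if line.strip()}

with open("explorer_connections.csv", newline="") as results:
    for row in csv.DictReader(results):
        if row["remote_ip"] in bad_ips:
            print("Threat feed hit: {0} ({1} on {2})".format(
                row["remote_ip"], row["process"], row["hostname"]))
```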

While this script does a decent job of analyzing targeted processes and extracting their network connections, for larger environments I would recommend leveraging the Carbon Black event forwarder to send the raw endpoint events to a generic JSON processing pipeline (covered in the next blog post). The event forwarder documentation can be found in the carbonblack/cb-event-forwarder repository on GitHub.

A list of the raw endpoint event types is also available in the event forwarder documentation. The event type for capturing only the raw endpoint network connections is ingress.event.netconn.
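If you only care about network connections, the forwarder can be scoped to just that event type. To the best of my recollection this is controlled by the events_raw_sensor option in the forwarder config; verify against the config file shipped with your version:

```
# in cb-event-forwarder.conf (path varies by install)
events_raw_sensor=ingress.event.netconn
```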

Leveraging the Carbon Black event forwarder, a storage medium (S3), a queuing mechanism (SQS), and workers (Lambda), you can scale out your event processing and broaden your network connection coverage versus targeting specific processes with this script. Again, if you’re interested in how we can leverage services like AWS to scale out event processing, stay tuned for my next blog post.
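As a preview, a worker in that pipeline can stay very small. The sketch below is a hypothetical Lambda handler; the bucket layout and event field names such as remote_ip are assumptions to verify against your forwarder output, which writes one JSON event per line:

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Process SQS messages that wrap S3 object-created notifications."""
    for record in event["Records"]:
        notification = json.loads(record["body"])
        for s3_record in notification.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            body = s3.get_object(Bucket=bucket, Key=key)["Body"]
            # The forwarder writes one JSON event per line
            for line in body.iter_lines():
                evt = json.loads(line)
                if evt.get("type") == "ingress.event.netconn":
                    # Hand off for enrichment (GeoIP, threat feeds, etc.)
                    print(evt.get("remote_ip"), evt.get("remote_port"))
```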

I hope this script proves useful for those using Carbon Black Response. Happy Hunting!


Special thanks to Mike Scutt (@OMGAPT), Jason Garman and the CB team for all the help.

