
Carbon Black Response: Process GeoIP

In this blog post, I’m releasing another tool for Carbon Black Response called CBR: Process GeoIP. The purpose of this script is to search specified processes in Carbon Black Response that you’d like to check for unusual network connections using GeoIP. I’ve used this script during an engagement to hunt through remote desktop connections, searching for any anomalies. In another engagement, where the malware had injected itself into a legitimate process such as explorer.exe, I was able to use this script to quickly dump all network connections made by explorer.exe and review the GeoIP information for each connection, along with the protocol, port, direction of traffic, and the process-specific information.

In order to extract the network connections for each process, you have to make an API call for each process. To ensure you’re only getting processes with network connections, add the syntax netconn_count:[1 TO *] to each of your queries. You may also consider adding time ranges or targeting a specific port to reduce the total number of processes returned by each query.
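To make that concrete, here are a couple of illustrative query strings built around that filter. The process names, port, and time range below are examples only, not part of the released script:

```python
# Illustrative Carbon Black Response process-search query strings.
# The base filter restricts results to processes with at least one netconn.
base_filter = "netconn_count:[1 TO *]"

queries = [
    # All explorer.exe processes that made at least one network connection
    f"process_name:explorer.exe {base_filter}",
    # RDP clients, scoped to the default RDP port and the last 30 days
    f"process_name:mstsc.exe ipport:3389 start:-720h {base_filter}",
]

for q in queries:
    print(q)
```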

Like most of the tools I write, this is a very simple script that reads your Carbon Black Response queries and configuration information from a config.json file. For each query in the config.json, you also need to specify an output filename. For development purposes, we’re using the MaxMind GeoLite2 City database found here:, but feel free to tweak the script to use your preferred GeoIP service or database. Once the database is downloaded and saved to this project’s working directory (GeoLite2-City.mmdb), we can start by taking a look at the explorer.exe example below:
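A config.json for the explorer.exe example might look like the sketch below. The exact key names here are an assumption for illustration, so match them to the script’s README:

```json
{
  "cb_server_url": "https://cbr.example.com",
  "cb_api_token": "<your-api-token>",
  "queries": [
    {
      "query": "process_name:explorer.exe netconn_count:[1 TO *]",
      "output_file": "explorer_connections.csv"
    }
  ]
}
```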
Once executed, the script will read the query key, iterate over all matching processes, extract all the network connections, enrich each connection with GeoIP information, and write the data to the explorer_connections.csv file. When the script is running, you should see the following standard output for each query in your console/IDE:
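The per-query loop described above can be sketched roughly as follows. This is a minimal sketch, not the released script: the CBR v1 REST endpoint paths, the netconn_complete field name, and the pipe-delimited netconn field order are assumptions to verify against your server version, and the geoip2 reader is only imported where it’s used:

```python
import csv

def parse_netconn(event):
    """Split a pipe-delimited CBR netconn event into a dict. The field
    order assumed here (timestamp, remote IP, port, protocol, domain,
    outbound flag) should be verified against your CBR version."""
    ts, ip, port, proto, domain, outbound = event.split("|")
    return {
        "timestamp": ts,
        "remote_ip": ip,
        "remote_port": int(port),
        "protocol": proto,
        "domain": domain,
        "direction": "outbound" if outbound == "true" else "inbound",
    }

def enrich(reader, conn):
    """Add GeoIP country/city from a geoip2 city reader; fall back to
    'unknown' for private or unmapped addresses."""
    try:
        geo = reader.city(conn["remote_ip"])
        conn["country"] = geo.country.name or "unknown"
        conn["city"] = geo.city.name or "unknown"
    except Exception:
        conn["country"] = conn["city"] = "unknown"
    return conn

def dump_connections(session, base_url, query, output_file, reader):
    """Sketch of the per-query loop: search for matching processes,
    fetch each process's events, enrich the netconns, write CSV.
    Endpoint paths and 'netconn_complete' are assumptions based on
    the CBR v1 REST API."""
    search = session.get(f"{base_url}/api/v1/process",
                         params={"q": query, "rows": 1000}).json()
    with open(output_file, "w", newline="") as f:
        writer = None
        for result in search.get("results", []):
            events = session.get(
                f"{base_url}/api/v1/process/"
                f"{result['id']}/{result['segment_id']}/event").json()
            for raw in events["process"].get("netconn_complete", []):
                row = enrich(reader, parse_netconn(raw))
                row["process_name"] = result.get("process_name")
                row["hostname"] = result.get("hostname")
                if writer is None:
                    writer = csv.DictWriter(f, fieldnames=row.keys())
                    writer.writeheader()
                writer.writerow(row)
```

With a requests.Session carrying your X-Auth-Token header and a geoip2.database.Reader("GeoLite2-City.mmdb"), dump_connections would be called once per query/output pair from the config.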
You can see from the image above that a total of 1,341 processes matched our query, and the script is now iterating over each process to extract its network connections. Once the script finishes, you can open up the corresponding CSV file for your query and review the results. As usual, I tend to create a quick pivot table to view the results, as outlined below:
If you really want to have a lot of fun visualizing the data by country, you can use the Maps feature in both Excel and Google Sheets, per the example below:

While the image above isn’t a great example of using the “GeoIP” feature within the script, we can still use the raw data to find anomalies now that we have extracted the network connections. If we take a look at the PowerShell query inside of the pivot table, we can see a call out to raw[.]githubusercontent[.]com with its PowerShell command line argument:

If possible, you should consider adding threat feed lookups against the post-processed network connections to add additional context to the results.
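As a minimal sketch of that post-processing pass, the helper below matches extracted connections against a local IOC set. Real feed APIs vary widely, and the remote_ip/domain column names are a hypothetical match for the script’s CSV output:

```python
def flag_iocs(rows, ioc_ips, ioc_domains):
    """Yield connection rows whose remote IP or domain appears in a
    local IOC set. Rows are dicts keyed by the (assumed) column names
    of the GeoIP script's CSV output: remote_ip and domain."""
    for row in rows:
        if row.get("remote_ip") in ioc_ips or row.get("domain") in ioc_domains:
            yield row
```

Feeding it the rows from csv.DictReader over the script’s output file would surface any connections touching known-bad infrastructure.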

While this script does a decent job of analyzing targeted processes and extracting their network connections, for larger environments I would recommend leveraging the Carbon Black event forwarder to send the raw endpoint events to a generic JSON processing pipeline (covered in the next blog post). The event forwarder documentation can be found here:

A list of the raw endpoint events can be found at the link below. The event type for capturing only the raw endpoint network connections is ingress.event.netconn:

Leveraging the Carbon Black event forwarder with a storage medium (S3), a queuing mechanism (SQS), and workers (Lambda), you can scale out your event processing to broaden your network connection coverage, versus targeting specific processes to analyze with this script. Again, if you’re interested in how we can leverage services like AWS to scale out event processing, stay tuned for my next blog post.

I hope this script comes in handy for those using Carbon Black Response. Happy Hunting!


Special thanks to Mike Scutt (@OMGAPT), Jason Garman and the CB team for all the help.

