
Showing posts from 2018

Leveraging AWS for Incident Response: Part 1

When an incident occurs, time is everything. One significant challenge I’ve experienced performing incident response is working with the large amounts of data needed by responders; storage mechanisms need to be accessible, fast, secure, and allow integration with post-processing tools. There are many options for storage media, but by storing data in the Amazon AWS ecosystem your team can leverage many AWS services to store, process, and collaborate on incident response activities, enabling your team to scale response efforts. I’ve outlined some of the main reasons I use AWS below:

- Adopted by many organizations
- Ease of use
- Granular control over data storage, lifecycle, and versioning
- Granular control over permissions
- Ease of automation (SQS/Lambda, for example)
- Leveraging other AWS services to scale out incident response

For this post, we’re only going to cover setting up an S3 bucket, creating a new user, and creating an S3 bucket policy to limit access control for our use…
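As a rough illustration of the kind of setup the post describes, the sketch below builds a least-privilege S3 bucket policy that restricts the bucket to a single responder IAM user. The bucket name, account ID, and user name are placeholders I've assumed, not values from the post:

```python
import json

# Hypothetical names for illustration only; substitute your own.
BUCKET = "ir-evidence-example"
IR_USER_ARN = "arn:aws:iam::123456789012:user/ir-analyst"

def build_bucket_policy(bucket: str, user_arn: str) -> dict:
    """Allow one IAM user to list the bucket and read/write objects,
    and nothing else."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowIRUserList",
                "Effect": "Allow",
                "Principal": {"AWS": user_arn},
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
            },
            {
                "Sid": "AllowIRUserObjects",
                "Effect": "Allow",
                "Principal": {"AWS": user_arn},
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
        ],
    }

policy_json = json.dumps(build_bucket_policy(BUCKET, IR_USER_ARN))
# With AWS credentials configured, the policy could then be applied with
# boto3: boto3.client("s3").put_bucket_policy(Bucket=BUCKET, Policy=policy_json)
```

Locking the bucket down to an explicit principal keeps evidence access auditable while still letting responders upload and retrieve collected data.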

Smashing the stack with Carbon Black

Github: In this blog post, we will cover how we perform stacking using Carbon Black Response and how we can use this methodology to find anomalies in your environment. Ideally, a threat hunter would have the following data at their disposal:

| Type      | Code | Details                                                              |
|-----------|------|----------------------------------------------------------------------|
| Real Time | RT   | Real-time process executions and their context                       |
| Forensic  | FZ   | Live forensic data such as prefetch, appcompat, registry keys, etc.  |
| Network   | NT   | PCAP and extracted metadata                                          |
| Logs      | LG   | Endpoint, firewall, proxy, AV, and web logs, etc.                    |
| Binaries  | BIN  | Executables collected in real time or on demand                      |
| Memory    | MEM  | Real-time inspection or dumping of process/system memory             |

For this blog post, we will focus on Real Time (RT) process executions within Carbon Black Response. The concept of stacking is simple: we start by collecting data of the same type and choose specific fields on which we want to perform frequency analy…
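The stacking idea described above can be sketched in a few lines: count how often each value of a chosen field appears across records of the same type, then surface the rare outliers. The records and field names below are toy data I've invented for illustration, not the actual Carbon Black Response schema:

```python
from collections import Counter

# Toy process-execution records standing in for query results;
# "process_name" and "path" are illustrative field names.
executions = [
    {"process_name": "svchost.exe", "path": "C:\\Windows\\System32\\svchost.exe"},
    {"process_name": "svchost.exe", "path": "C:\\Windows\\System32\\svchost.exe"},
    {"process_name": "svchost.exe", "path": "C:\\Windows\\System32\\svchost.exe"},
    {"process_name": "svchost.exe", "path": "C:\\Users\\bob\\AppData\\svchost.exe"},
]

def stack(records, field):
    """Frequency-count a single field across records of the same type."""
    return Counter(r[field] for r in records)

def rare(counts, threshold=1):
    """Anomaly candidates: values seen at or below the threshold."""
    return [value for value, n in counts.items() if n <= threshold]

counts = stack(executions, "path")
print(rare(counts))  # the svchost.exe running from a user directory stands out
```

Stacking on the full path rather than just the process name is what makes the masquerading binary visible: the name is common, but the location is not.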