
Leveraging AWS for Incident Response: Part 1

When an incident occurs, time is everything. One significant challenge I’ve experienced performing incident response is working with the large amounts of data responders need; storage mechanisms need to be accessible, fast, and secure, and allow integration with post-processing tools. There are many options for storage media, but by storing data in the AWS ecosystem your team can leverage many AWS services to store, process, and collaborate on incident response activities, enabling your team to scale response efforts. I’ve outlined some of the main reasons I use AWS below:
  • Adopted by many organizations
  • Ease of use
  • Granular control over data storage, lifecycle and versioning
  • Granular control over permissions
  • Ease of automation (SQS/Lambda for example)
  • Leveraging other AWS services to scale out incident response

For this post, we’re only going to cover setting up an S3 bucket, creating a new user, creating an S3 bucket policy to limit access for that user, and some common ways to upload data to your S3 bucket. For those new to AWS, S3 stands for “Simple Storage Service”: object storage that, according to Amazon, aims to provide scalability, high availability, and low latency at commodity costs.

Setting up S3

Let’s begin by setting up AWS S3 manually in the AWS console; in a later post we will show how to spin up an S3 bucket for a customer engagement using Terraform. First, create an account on AWS here: After creating an account and adding in some billing information, go to the S3 option under Services.
After you select S3, click the Create Bucket option. This will bring up a modal in which we will add in our bucket name and region. For our example, let’s use the name and the region US East (N. Virginia).
Your S3 bucket names need to be unique across all existing bucket names in Amazon S3, and you can’t rename a bucket after it's been created. I recommend defining a naming schema that works best for your organization to properly name and track newly created S3 buckets.
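As an illustration, one possible schema (my own convention, not an AWS requirement) encodes the organization, engagement type, customer, and date, and can be generated in the shell:

```shell
# Hypothetical naming schema: <org>-ir-<customer>-<YYYYMM> (an assumption;
# adapt to your organization). S3 bucket names must be 3-63 characters and
# may use lowercase letters, digits, dots, and hyphens.
ORG="acme"
CUSTOMER="cust01"
BUCKET="${ORG}-ir-${CUSTOMER}-$(date -u +%Y%m)"
echo "$BUCKET"
```

Generating names this way keeps buckets sortable and traceable back to an engagement, which matters later when you automate cleanup.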
It’s also important to consider where you store data. In some cases, it’s better (or required) to store data in a specific region. After entering the information above, you’ll want to enable encryption to “Automatically encrypt objects when they are stored in S3”. Again, this should be a mandatory best practice when working with any customer data in S3. Lastly, ensure you don’t make the bucket publicly accessible (for obvious reasons). Once you’ve finished creating the bucket, you should see the following in your console.
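The same console steps can also be scripted with the AWS CLI. Below is a sketch, assuming AWS CLI v2 and administrative credentials are configured; the bucket name is a placeholder:

```shell
# Placeholder bucket name; replace with a name from your naming schema.
BUCKET="your-ir-bucket"

# Create the bucket in us-east-1 (no LocationConstraint is needed there).
aws s3api create-bucket --bucket "$BUCKET" --region us-east-1

# Turn on default encryption (SSE-S3 / AES256) for all stored objects.
aws s3api put-bucket-encryption --bucket "$BUCKET" \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block all public access to the bucket.
aws s3api put-public-access-block --bucket "$BUCKET" \
  --public-access-block-configuration \
  'BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true'
```

Scripting these three calls up front ensures every engagement bucket starts encrypted and private by default, rather than relying on someone remembering the console checkboxes.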
Now that the bucket is created, we have to create a new account that allows your customer to manage data in their newly created S3 bucket. Let’s head over to the IAM page under Services.
Once you’re at the IAM page, we need to create a policy. Go to Policies and click Create policy. Once at the Create policy window, go to the JSON tab and paste in the JSON below to get us going, replacing YOUR-BUCKET-NAME with your bucket name (the actions shown are a typical minimal set for listing the bucket and managing its objects):
   {
       "Version": "2012-10-17",
       "Statement": [
           {
               "Effect": "Allow",
               "Action": [
                   "s3:ListBucket"
               ],
               "Resource": [
                   "arn:aws:s3:::YOUR-BUCKET-NAME"
               ]
           },
           {
               "Effect": "Allow",
               "Action": [
                   "s3:GetObject",
                   "s3:PutObject",
                   "s3:DeleteObject"
               ],
               "Resource": [
                   "arn:aws:s3:::YOUR-BUCKET-NAME/*"
               ]
           }
       ]
   }

Details about these permissions can be found here:

Once you’ve pasted the JSON above into your policy view, click the Review Policy button, give the policy a name such as policy_cust01, and then create the policy. Now that the policy is created, we need to bind this policy to a user. Go to Users and click the Add User button. A new dialog box will appear asking you to enter a username and select an access type. Since we’re creating a locked-down account that can only manage data in the S3 bucket, we will name this user account cust01. The access type will be Programmatic access. Now click the Next: Permissions button and click on the Attach existing policies directly button. Once clicked, search for your policy in the list. An example of what this looks like is provided below:
Select your customer policy, click Review, followed by Create user. After successfully creating the user, you will be prompted with two key items:
  • Access key ID
  • Secret access key

Copy both of these keys to a secure location, as we will need them shortly. Getting data to your new S3 bucket can be accomplished in many ways. Two tools I commonly use are CyberDuck and the AWS CLI.
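As a side note, the policy and user creation steps above can also be scripted with the AWS CLI rather than the console. A sketch, assuming administrative credentials; the account ID, names, and policy.json file are placeholders:

```shell
# policy.json holds the bucket policy JSON from earlier.
aws iam create-policy --policy-name policy_cust01 \
  --policy-document file://policy.json

# Create the locked-down customer user.
aws iam create-user --user-name cust01

# Attach the policy; the ARN includes your own AWS account ID.
aws iam attach-user-policy --user-name cust01 \
  --policy-arn arn:aws:iam::123456789012:policy/policy_cust01

# Generate the access key ID / secret access key pair for the user.
aws iam create-access-key --user-name cust01
```

This is handy when you provision a fresh bucket and user for every engagement and want the steps repeatable.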

Using CyberDuck

CyberDuck offers a nice GUI for interfacing with many providers, including AWS S3. Once installed, go to the properties for CyberDuck and set the S3 options to the following (you may need to change your bucket location if you created it outside of US East):

With the properties set, right-click on the browser window and select New Bookmark.

This will bring up a new window. From the first dropdown, select Amazon S3, add in your Path (the name of your customer bucket) under the More Options dropdown, and close out the window.

After creating the bookmark, you can now right-click on it and select the Connect to server option.
You will be prompted for your access key and your secret access key (we got these from creating our user account earlier). If all goes well, we should be able to successfully authenticate to our bucket as our customer.

If we try to change buckets from the dropdown, we will get an error as expected, since the policy defined on this user account limits our access (as outlined below):


Using the AWS CLI

After following the installation instructions from one of the links above (based on your OS), you should have the AWS CLI installed and ready to go. To begin, we need to configure the AWS CLI to use our keys. We do this by typing the command below:
aws configure

Typing in this command will prompt you to add in the following information:
AWS Access Key ID:
AWS Secret Access Key:
Default region name [None]: us-east-1
Default output format [None]:

After entering your keys and the region of your customer bucket (ours is us-east-1 for this demo), you can begin uploading your files to your S3 bucket using the command below (replace <local-directory> and <bucket-name> with your source path and bucket):
aws s3 cp <local-directory> s3://<bucket-name> --recursive --sse

In some cases, you may also want to stamp an expiration date on the uploaded data. The --expires option sets the Expires metadata header on the uploaded objects:
aws s3 cp <local-directory> s3://<bucket-name> --recursive --sse --expires 2018-11-01T00:00:00Z
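Note that the --expires option only sets Expires metadata on the objects; S3 will not delete them based on it. Automatic removal is handled by a bucket lifecycle rule instead. A minimal sketch, saved as lifecycle.json (the rule ID and 90-day retention window are my own assumptions):

```json
{
  "Rules": [
    {
      "ID": "expire-cust01-data",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Expiration": { "Days": 90 }
    }
  ]
}
```

An administrator (not our locked-down customer user) can then apply it with: aws s3api put-bucket-lifecycle-configuration --bucket <bucket-name> --lifecycle-configuration file://lifecycle.json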

Additional options can be found here:


For this post, we covered how we can leverage the flexibility of AWS to create a customer-provisioned bucket, generate a user account, and use two different methods of transferring data to your bucket. When performing large file uploads, I recommend using the AWS CLI, as it tends to perform better than most GUI tools. In future posts, we will explore integrating other services with S3 to further automate deployment, manage infrastructure, and process data placed into S3 buckets. These posts will cover tools and services such as Terraform, Lambda, SQS, and Athena. I hope this post is useful to you, and I look forward to your feedback on how you leverage AWS or improvements!


