
Leveraging AWS for Incident Response: Part 2

In my previous post, we covered how AWS resources such as S3 can be used to quickly spin up storage, lock down access to that storage, and provision users in the AWS console. In this post, we’re going to cover how to automate this process. Before we begin, let’s review some common issues with the previous manual process of using the AWS console to provision and manage AWS resources:
  • Time to provision: If you’re new to AWS, using the AWS console to provision the S3 bucket, bucket policy and IAM user account with programmatic access may take ~30 minutes; those who are more familiar may still need ~10 minutes.
  • Standardization: When using AWS console, simple copy/paste errors may occur. This may expose the bucket to the wrong customer (or even to the public). Other issues include:
    • Ensuring the bucket names are consistent for all customers. A defined naming convention should be used that is unique for each engagement, as one customer can have multiple engagements. Entering names manually is subject to human error.
    • Ensuring the right policy is assigned to the right customer bucket, with the proper permissions. Entering policy permissions manually is subject to human error.
    • Ensuring access keys are given to the right customer. Copy/pasting credentials with the wrong bucket path may expose the data to the wrong customer.
  • Scale: As you start to scale out your response, it can become difficult to manage multiple customer buckets, policies and keys.  
  • Deprovisioning: Logging into the AWS console manually to destroy resources, while ensuring you destroy the proper ones, can become a time-consuming task that's prone to error.
  • Data retention/life cycle: When dealing with sensitive data related to incident response, you may encounter requests to hold customer data for longer periods of time. You may wish to apply longer retention to these customers' buckets while others can be destroyed completely.
  • Logging: It’s important to know who has accessed objects in a customer bucket, when and how. Logging can be configured on an S3 bucket and delivered to the customer upon request after the investigation, if required.

To solve these issues (and many others), we can leverage a free application called Terraform.

About Terraform
Per the HashiCorp website, “HashiCorp Terraform enables you to safely and predictably create, change, and improve infrastructure.” We call this “Infrastructure as Code”. Using Terraform allows your IR team to quickly write your infrastructure as code, then review, plan and deploy without the need to log into AWS. Terraform works with many cloud providers, such as AWS, Azure and Google Cloud to name a few; a full list of providers can be found on the Terraform website.

Two key parts to Terraform we will discuss are:
  • Modules
  • S3 encrypted state storage backend 

To take advantage of code reuse, Terraform uses modules that can be imported into your code base, which also helps keep it organized. For our use case, we will create an S3 module that defines a customer S3 bucket, and an IAM module that creates a customer user account and defines what permissions to assign to that user. Once these modules are defined, you can reuse them across all customers, thus ensuring the bucket names are consistent, permissions are correct, keys/bucket paths are given to the right customer, and the proper data retention/logging is set up on the bucket.

Terraform Backend
To ensure the state of your AWS infrastructure is saved, Terraform uses a .tfstate file. This file holds the state of all AWS resources and their metadata, which may contain keys/passwords. This state file is used to track changes to your environment when performing operations such as “terraform apply” or “terraform destroy”. Potential changes can be reviewed before committing to AWS using the command “terraform plan”. To secure this file, we will use an encrypted S3 backend to prevent any direct access or viewing of this file.

Getting started with Terraform

To begin, we need to download Terraform from the HashiCorp website. Once Terraform is downloaded, you can begin using it immediately.

It’s important to keep Terraform updated. I can’t tell you the number of times a simple update fixed a Terraform error.

Checking your Terraform version is as simple as running terraform -v from your command prompt. If Terraform is out of date, you’ll see output like the following in your terminal.

$ terraform -v 
Terraform v0.11.7
Your version of Terraform is out of date! The latest version
is 0.11.10.

After you have the latest version of Terraform, we need to configure Terraform to use the AWS backend. To keep things simple, we will create an IAM account with the AdministratorAccess policy attached. This can be done in the AWS console by navigating to the IAM section and clicking the Add user button. At the new user prompt, type in the name of the user, select Programmatic access, then click Next: Permissions.

At the permissions section, we can simply choose an existing policy called AdministratorAccess, as outlined below:

After clicking through the remaining options, you’ll need to copy the Access Key ID and Secret Access Key from the last menu, as we will be using these in later steps. While we’re in the AWS console, also create an S3 bucket named terraform-dev-mytest-.

Setting up AWS CLI
Now that we have our IAM account set up and our S3 bucket created, we can install and set up the AWS CLI. The AWS CLI bundled installer instructions can be found in the AWS documentation. Once installed, you should be able to run the command aws configure. If successful, you should see the following prompts:

AWS Access Key ID: 
AWS Secret Access Key: 
Default region name [None]:  
Default output format [None]:

Enter the Access Key ID and Secret Access Key we created earlier. For this demo, we can leave the region name and output format empty by simply pressing enter. The reason we’re configuring the AWS CLI this way is that it writes your credentials and settings to files under ~/.aws/, which Terraform reads and uses when connecting to AWS.

Creating the S3 module

Now that we have Terraform and the AWS CLI configured, we can create our base Terraform project to automate our customer S3 bucket creation and locked-down IAM user. To begin, create a project folder called terraform_dev on your workstation to hold the Terraform project. Inside this folder, we will create two files: one for the backend configuration and one for the us-east-1 region. I’ll explain each file below:

Inside the backend file, we will use the following code:

terraform {
  backend "s3" {
    bucket  = "terraform-dev-mytest-"
    key     = "terraform-dev.tfstate"
    region  = "us-east-1"
    encrypt = true
  }
}

The code above tells Terraform to store our tfstate file in an S3 bucket called “terraform-dev-mytest-” under the key “terraform-dev.tfstate”. We also set encrypt to true to encrypt the file's contents.
With the backend file created, we can move on to the us-east-1 Terraform file. Since we haven’t set up our S3 and IAM customer modules yet, the only contents we will place into this file are as follows:

provider "aws" {
  alias  = "use1"
  region = "us-east-1"
}

This code tells Terraform to use the AWS provider with the region set to us-east-1. Having a Terraform file per region allows you to place customer data/resources in their proper region, whether for data privacy restrictions or for speed and optimization purposes. With our two Terraform files created, we can now initialize the Terraform backend using the command terraform init. If successful, you will see output confirming that the S3 backend was initialized.

If you’d like to use an IDE to help with Terraform syntax and project structure, you can use IntelliJ’s GoLand. Just install the Terraform plugin called HashiCorp Terraform / HCL language support and restart the IDE. The plugin can be found under GoLand > Preferences > Plugins.

With our backend initialized, we can proceed with creating our customer S3 module. This module will become our reusable template for deploying new locked-down customer S3 buckets with enforced standards such as naming convention, encryption and destruction options. To begin, create a folder called modules inside the project folder, then another folder inside it called customer_s3. Each module we build will contain three files:
  • a variables file
  • an outputs file
  • the module code itself (a .tf file)

Let’s take a look at the first file. It holds the input variables that will be passed to our module; in this case, the customer alias will be passed from our main file as a parameter. We will cover this more in later steps. The second file is the outputs file, which defines what outputs the module should return after use. This is valuable when one module depends on another, or for printing output to the console (such as the bucket ARN, or “Amazon Resource Name”, and user keys). The last file our module needs is the module code itself. This file defines the standard for how a customer bucket should be created, what server-side encryption to use, and how the bucket should be destroyed, as outlined below:
Customer S3 Module
You may be wondering why each module has a provider line at the top. When performing incident response, you must be able to support the creation of buckets across regions. Allowing each module to take the provider as a parameter lets us define the provider, and therefore the region, per deployment. We will show this in the next section. For this simple use case, we’re only using the bare minimum arguments for the aws_s3_bucket resource. You can view the other arguments and their definitions in the Terraform documentation.
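Since the original module code appears only as a screenshot, here is a minimal sketch of what the three customer_s3 files might look like, using Terraform 0.11-era syntax to match the version shown earlier. The file names, the "ir-" naming convention, and the resource names are my own assumptions, not the original code:

```hcl
# variables.tf (assumed name): inputs the module expects from the caller
variable "customer_alias" {
  description = "Short, unique alias for the customer engagement"
}

# main.tf (assumed name): the enforced customer bucket standard
provider "aws" {}  # placeholder; the real regional provider is passed in by the caller

resource "aws_s3_bucket" "customer_bucket" {
  bucket        = "ir-${var.customer_alias}"  # enforced naming convention
  acl           = "private"
  force_destroy = true  # allow terraform destroy even if objects remain

  # Encrypt bucket contents at rest
  server_side_encryption_configuration {
    rule {
      apply_server_side_encryption_by_default {
        sse_algorithm = "AES256"
      }
    }
  }
}

# outputs.tf (assumed name): values the module returns to the caller
output "bucket_arn" {
  value = "${aws_s3_bucket.customer_bucket.arn}"
}
```

Because the naming convention lives inside the module, every customer bucket automatically follows the same pattern and encryption standard.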

Creating the IAM module

Now that we have created a module that defines how our customer bucket will be created, we need another module that creates a customer user account and defines an IAM policy that limits the user's access to their S3 bucket with a restricted set of permissions. To do this, we create another folder in our modules directory called customer_iam and add our new module files below:

This is a very basic example; additional parameters for aws_iam_user, aws_iam_access_key and aws_iam_user_policy can be found in the Terraform documentation.
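As with the S3 module, the original file is a screenshot, so here is a hedged sketch of what the customer_iam module code might look like. Resource names and the "ir-" prefix are assumptions; the inline policy restricts the user to listing, reading and writing their own engagement bucket only:

```hcl
provider "aws" {}  # placeholder; supplied by the caller

resource "aws_iam_user" "customer" {
  name = "ir-${var.customer_alias}"
}

resource "aws_iam_access_key" "customer" {
  user = "${aws_iam_user.customer.name}"
}

# Limit the user to their own engagement bucket
resource "aws_iam_user_policy" "customer_s3" {
  name = "ir-${var.customer_alias}-s3-access"
  user = "${aws_iam_user.customer.name}"

  policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::ir-${var.customer_alias}"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": ["arn:aws:s3:::ir-${var.customer_alias}/*"]
    }
  ]
}
EOF
}
```

Note the two statements: ListBucket applies to the bucket ARN itself, while object-level actions apply to the objects under it.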

Putting it all together

Now that both modules are created, we can use them. Let’s open up our us-east-1 file again and create our new customer bucket, user and policy, as outlined below:
us-east-1 main file

As you can see from the snippet above, we have defined our provider as aws with the region us-east-1. This enables us to create a new Terraform file in the future for eu-west-3 or any other region. In this file, we also import our new modules using the module syntax, including the path to each module via the source parameter along with any arguments the module requires.
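As a sketch, the module calls in the us-east-1 file might look like the following. The customer name "acme" and the module paths are placeholders; the providers map is what passes our aliased us-east-1 provider into each module:

```hcl
module "acme_s3" {
  source         = "./modules/customer_s3"
  customer_alias = "acme"

  # Hand the regional provider defined in this file to the module
  providers = {
    "aws" = "aws.use1"
  }
}

module "acme_iam" {
  source         = "./modules/customer_iam"
  customer_alias = "acme"

  providers = {
    "aws" = "aws.use1"
  }
}
```

Onboarding a new customer is then just a copy of these two blocks with a new alias.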

In the end, our project structure should look like the following:
Directory Structure

Now that the code is complete, we must tell Terraform to import our modules so they are recognized. To do this, simply type the command terraform get in the console. Your output should look similar to the image below:

Awesome, now that our modules are imported, we can do a dry run and see what our AWS infrastructure will look like before committing our changes. This can be accomplished by running the command terraform plan. Your output should look similar to the image below:
Terraform Plan Output
The important part of this output, outside of the module outputs, is the Plan: segment at the bottom, which shows Plan: 4 to add, 0 to change, 0 to destroy. It’s important to check these changes before moving forward and ensure you’re adding/removing the proper resources/parameters. For awareness, the console also color-codes changes as outlined below:
  • Green (Add)
  • Yellow (Change)
  • Red (Destroy)

If everything looks good, you can proceed with the next command, terraform apply, to allow Terraform to provision our new resources. Terraform apply will do two things:
  1. Show you the same output as terraform plan for a last-chance review
  2. Ask for your confirmation before applying these changes to your infrastructure

If you agree with the changes, type yes to begin provisioning your new customer resources. Once completed, the final output will look like below:
Terraform Apply Output
As stated above, review the output of your apply command and ensure the proper number of resources was created. Any errors will be shown in red. You will also see the outputs below, which contain your customer's bucket ARN, access key ID and secret access key; the customer can use these to authenticate to their bucket using either the AWS CLI or tools like Cyberduck.
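Those outputs are surfaced from the root module with output blocks along these lines (output names here are assumed and must match whatever the modules actually export):

```hcl
output "acme_bucket_arn" {
  value = "${module.acme_s3.bucket_arn}"
}

output "acme_access_key_id" {
  value = "${module.acme_iam.access_key_id}"
}

output "acme_secret_access_key" {
  value = "${module.acme_iam.secret_access_key}"
}
```

These values are printed after every apply, so the credentials can be handed to the customer without ever opening the AWS console.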

As we saw in the last blog post, keeping track of all the customer buckets, policies and users at scale can become a tedious task. You don’t want to delete the wrong bucket, policy or user account. The beauty of Terraform is that to destroy resources, you just delete or comment out the relevant code, then plan and apply your changes. Done! You should never have to log into the AWS console again. To try this out, comment out the module code in your us-east-1 file:
Commented out modules
Now that the code is commented out, type the command terraform plan to review the changes. You should see output stating that the customer bucket and IAM resources will be destroyed:
We can see during the planning process that this change will destroy four resources.
Once you confirm this change is correct, type terraform apply and then yes in the console to proceed with the destroy operation.
Success! In seconds, we have destroyed all the customer resources.


In this post, we covered how to use Terraform to quickly spin up a new S3 bucket, IAM user and keys. Using Terraform also helps us ensure the proper policy is applied and bucket contents are encrypted at rest. While this example is very simple, we can build upon it to enable automated post-processing of data (reading a log file, for example) using SQS and Lambda. Lastly, you should commit your new Terraform code to a version control system such as GitHub to ensure any changes to the Terraform code base are tracked. I hope you enjoyed this blog post, and stay tuned for Part 3, “Automated post processing with SQS and Lambda”. Happy hunting!


