
Leveraging AWS for Incident Response: Part 1

When an incident occurs, time is everything. One significant challenge I’ve experienced performing incident response is working with the large amounts of data responders need; storage mechanisms must be accessible, fast, secure, and allow integration with post-processing tools. There are many options for storage, but by keeping data in the Amazon Web Services (AWS) ecosystem your team can leverage many AWS services to store, process, and collaborate on incident response activities, enabling your team to scale response efforts. I’ve outlined some of the main reasons I use AWS below:
  • Adopted by many organizations
  • Ease of use
  • Granular control over data storage, lifecycle and versioning
  • Granular control over permissions
  • Ease of automation (SQS/Lambda for example)
  • Leveraging other AWS services to scale out incident response

For this post, we’re only going to cover setting up an S3 bucket, creating a new user, creating an IAM policy to limit that user’s access, and some common ways to upload data to your S3 bucket. For those new to AWS, S3 stands for “Simple Storage Service”: object storage that, according to Amazon, aims to provide scalability, high availability, and low latency at commodity costs.

Setting up S3

Let’s begin by setting up S3 manually in the AWS console; in a later post we will show how to spin up an S3 bucket for a customer engagement using Terraform. First, create an account on AWS here: https://aws.amazon.com. After creating an account and adding in some billing information, go to the S3 option under Services.
After you select S3, click the Create Bucket option. This will bring up a modal in which we will add in our bucket name and region. For our example, let’s use the name cust01.acme.com and the region US East (N. Virginia).
ProTip
Your S3 bucket name needs to be unique across all existing bucket names in Amazon S3, and you can’t rename the bucket after it's been created. I recommend defining a naming schema that works best for your organization so you can consistently name and track newly created S3 buckets.
It’s also important to consider where you store data; in some cases, it’s better (or required) to store data in a specific region. After entering the information above, you’ll want to enable encryption to “Automatically encrypt objects when they are stored in S3”. Again, this should be a mandatory best practice when working with any customer data in S3. Lastly, ensure you don’t make the bucket publicly accessible (for obvious reasons). Once you’ve finished creating the bucket, you should see your new bucket in the console.
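If you prefer to script this step, the same setup can be done with the AWS CLI. Here’s a minimal sketch, assuming the CLI is already configured with an administrative profile and using our example bucket name and region:

# Create the bucket in US East (N. Virginia)
aws s3api create-bucket --bucket cust01.acme.com --region us-east-1

# Turn on default encryption (SSE-S3) so objects are encrypted at rest
aws s3api put-bucket-encryption --bucket cust01.acme.com \
    --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Block all public access to the bucket
aws s3api put-public-access-block --bucket cust01.acme.com \
    --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true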
Now that the bucket is created, we need a new account that allows the customer to manage data in their newly created S3 bucket. Let’s head over to the IAM page under Services.
Once you’re at the IAM page, we need to create a policy. Go to Policies and click Create policy. In the Create policy window, go to the JSON tab and paste in the JSON below to get us going, replacing “cust01.acme.com” with your bucket name:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::cust01.acme.com"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:PutObject",
                "s3:ListBucket",
                "s3:ListMultipartUploadParts"
            ],
            "Resource": [
                "arn:aws:s3:::cust01.acme.com",
                "arn:aws:s3:::cust01.acme.com/*"
            ]
        }
    ]
}

Details about these permissions can be found in the Amazon S3 documentation on actions and permissions.

Once you’ve pasted the JSON above into the policy view, click the Review policy button, give the policy a name such as policy_cust01, and then create the policy. Now that the policy is created, we need to bind it to a user. Go to Users and click the Add user button. A new dialog box will appear asking you to enter a username and select an access type. Since we’re creating a locked-down account that can only manage data in the S3 bucket, we will name this user account cust01. The access type will be Programmatic access. Now click the Next: Permissions button and click Attach existing policies directly. Once clicked, search for your policy in the list. An example of what this looks like is provided below:
Select your customer policy, click Review, followed by Create user. After successfully creating the user, you will be presented with two key items:
  • Access key ID
  • Secret access key

Copy both of these keys to a secure location, as we will need them shortly. Getting data to your new S3 bucket can be accomplished in many ways; two that I commonly use are Cyberduck and the AWS CLI.
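Before moving on, note that the user-provisioning steps above can also be scripted. A minimal sketch with the AWS CLI, assuming the policy JSON above is saved as policy_cust01.json and 123456789012 is a placeholder for your account ID:

# Create the IAM policy from the JSON document above
aws iam create-policy --policy-name policy_cust01 --policy-document file://policy_cust01.json

# Create the customer user and attach the policy
aws iam create-user --user-name cust01
aws iam attach-user-policy --user-name cust01 \
    --policy-arn arn:aws:iam::123456789012:policy/policy_cust01

# Generate the key pair (returns the Access key ID and Secret access key)
aws iam create-access-key --user-name cust01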

Using Cyberduck


Cyberduck offers a nice GUI for interfacing with many providers, including AWS S3. Once installed, go to the properties for Cyberduck and set the S3 options to the following (you may need to change your bucket location if you created it outside of US East):

With the properties set, right-click on the browser window and select New Bookmark.

This will bring up a new window. From the first dropdown, select Amazon S3, add your Path (the name of your customer bucket) under the More Options dropdown, and close out the window.

After creating the bookmark, you can right-click on it and select the Connect to server option.
You will be prompted for your access key and secret access key (we got these when creating our user account earlier). If all goes well, we should be able to successfully authenticate to our bucket as the customer.

If we try to change buckets from the dropdown, we will get an error, as expected, since the policy on this user account limits our access (as outlined below):

Using AWS CLI


After following the AWS CLI installation instructions for your operating system, you should have the AWS CLI installed and ready to go. To begin, we need to configure the AWS CLI to use our keys. We do this by typing the command below:
aws configure

Running this command will prompt you for the following information:
AWS Access Key ID:
AWS Secret Access Key:
Default region name [None]: us-east-1
Default output format [None]:
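As an aside, if you respond to multiple customers from one workstation, the CLI’s named profiles keep each credential set separate. A quick sketch (the profile name cust01 is just our example user):

# Store this customer's keys under a dedicated profile
aws configure --profile cust01
# Then reference the profile on each command:
aws s3 ls s3://cust01.acme.com/ --profile cust01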

After entering your keys and the region of your customer bucket (ours is us-east-1 for this demo), you can begin uploading files to your bucket using the command below:
aws s3 cp . s3://cust01.acme.com/ --recursive --sse
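Note that cp takes both a source and a destination; here “.” uploads the contents of the current directory. For larger evidence sets, aws s3 sync is also worth knowing; it only transfers files that differ, so an interrupted upload can simply be re-run (./evidence below is a placeholder for your local collection directory):

aws s3 sync ./evidence s3://cust01.acme.com/evidence/ --sse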

In some cases you may also want data removed after a given date. One caveat: the --expires flag on aws s3 cp only sets the object’s Expires metadata header (a caching hint for downstream clients); it does not delete anything from S3. An example of setting it is outlined below:
aws s3 cp . s3://cust01.acme.com/ --recursive --sse --expires 2018-11-01T00:00:00Z
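For actual automatic deletion, a bucket lifecycle rule is the right tool. A minimal sketch, assuming administrative credentials (the cust01 policy above does not grant lifecycle permissions) and a hypothetical 90-day retention:

# Expire (delete) all objects in the bucket 90 days after creation
aws s3api put-bucket-lifecycle-configuration --bucket cust01.acme.com \
    --lifecycle-configuration '{"Rules":[{"ID":"expire-case-data","Status":"Enabled","Filter":{"Prefix":""},"Expiration":{"Days":90}}]}'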

Additional options can be found in the AWS CLI documentation for the s3 commands.

Conclusion

For this post, we covered how to leverage the flexibility of AWS to create a customer-provisioned bucket and a matching user account, along with two different methods of transferring data to your bucket. When performing large file uploads, I recommend the AWS CLI, as it seems to perform better than most GUI tools. In future posts, we will explore integrating other tooling with S3 to further automate deployment, manage infrastructure, and process data placed into S3 buckets, including Terraform, Lambda, SQS, and Athena. I hope this post is useful to you, and I look forward to your feedback on how you leverage AWS or improvements!
