Don’t Be The Next AWS S3 Data Leak Headline

“Time Warner Hacked – AWS Config Exposes 4M Subscribers” reads a recent headline. AWS S3 isn’t being hacked. Data is, well, just being left out in the open for anyone to access. Millions of people are silently affected by simple security misconfigurations. Why do I say, “silently affected”? Because often you won’t know your data has been leaked until months or years after the fact, when it shows up in a news story. And even after you find out, you may never know how or for what your data is being used.

Verizon leaked 14 million customer credentials through one insecure bucket. Another insecure AWS S3 bucket exposed 1.8 million Chicago voter records. Dow Jones leaked names, email addresses, and some partial credit card numbers of up to 4 million customers. Time Warner leaked data on 4 million customers in a bucket left open by freelance contractors. TigerSwan, a private security firm, leaked thousands of sensitive documents containing information on individuals working in the Department of Defense and the US intelligence community.

I cringe when I see cyber incident news headlines, and it gets very real when an incident involves a service you use daily, such as AWS S3. At Simple Thread we use AWS S3 extensively because of its ease of use, pricing structure, high durability, and 99.99% availability guarantee. These leaks are never S3’s fault; as with most data leaks, they almost always stem from the fact that proper technical and management controls weren’t put in place.

Sigh… So, what can you do to be proactive? First of all, implement technical controls.

Technical Controls

Did you know that AWS S3 is actually secure by default? Out of the box, S3 resources can only be accessed by the creators of the data and the bucket owners. So, how do you go from secure by default to insecure? Well, we usually need other people to access the data besides us, so we have to provide some kind of alternate access. All too often that alternate solution is to carelessly open up S3 entirely.

Access to AWS S3 buckets is controlled in four ways:

1) Identity and Access Management (IAM) policies. Grant access to users within AWS. Access is defined at the user level.

2) Bucket Policies. Define broad read/write rules on buckets.

3) Access Control Lists (ACLs). Grant specific read/write permissions for individual buckets and data to users within AWS. Access is defined at the bucket/data level.

4) Query String Authentication. Create a URL to access data that is only valid for a specific time (see the sketch below).

Want to know more? Check out the AWS S3 Access Permission guide.
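
As a quick illustration of option 4, here is a minimal sketch using boto3 that generates a pre-signed URL. The bucket and object names are placeholders, and it assumes your AWS credentials are already configured.

```python
import boto3

# A minimal sketch of query string authentication: generate a pre-signed URL
# that grants temporary read access to a single object. The bucket name and
# key below are placeholders.
s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-private-bucket", "Key": "reports/q3.pdf"},
    ExpiresIn=3600,  # the link stops working after one hour
)
print(url)  # share this link instead of opening the bucket to the world
```

Anyone holding that URL can fetch just that object until it expires, so you can share data without loosening bucket permissions at all.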

Sounds good, Kevin, but what can I do to keep track of this stuff? I’m glad you asked. Auditing bucket access is critical to the security of S3, and thankfully Amazon provides a few different options for this.

Amazon Trusted Advisor – Trusted Advisor will look for buckets that have global public access enabled, or that allow access by *any* authenticated AWS user.
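
If you want to spot-check the same condition yourself, here is a do-it-yourself sketch (this is not the Trusted Advisor API, just a boto3 script that flags the same thing): bucket ACL grants to the “AllUsers” group (anyone on the internet) or the “AuthenticatedUsers” group (any AWS account). It assumes your credentials point at the account you want to audit.

```python
import boto3

# ACL grantee URIs that make a bucket effectively public.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

# Walk every bucket in the account and print any ACL grant to a public group.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GRANTEES:
            print(f"{name}: {grant['Permission']} granted to {grantee['URI']}")
```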

CloudWatch – Create custom metric filters for the S3 modifications recorded in CloudTrail and configure CloudWatch alarms on them so that you are notified of any changes made to your bucket ACLs, policies, configuration, and so on.
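
Here is a sketch of that idea, assuming your CloudTrail trail already delivers events to a CloudWatch Logs log group. The log group name, metric namespace, and SNS topic ARN are placeholders you would replace with your own.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"                       # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:sec-alerts"  # placeholder

# Count CloudTrail events that change S3 bucket ACLs or policies.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="S3BucketPolicyChanges",
    filterPattern=(
        "{ ($.eventSource = s3.amazonaws.com) && "
        "(($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy)) }"
    ),
    metricTransformations=[{
        "metricName": "S3BucketPolicyChangeCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm (and notify via SNS) whenever that count is non-zero in a five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="S3BucketPolicyChanged",
    MetricName="S3BucketPolicyChangeCount",
    Namespace="Security",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)
```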

AWS Config – The s3-bucket-public-write-prohibited and s3-bucket-public-read-prohibited managed rules are now available. These rules look at your buckets and check whether public access is enabled by any ACL or policy.
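
Turning the two managed rules on can also be scripted. A minimal sketch, assuming AWS Config is already recording in the region (configuration recorder and delivery channel set up):

```python
import boto3

config = boto3.client("config")

# Enable both AWS-managed rules; each reports compliance per S3 bucket.
for rule_name, identifier in [
    ("s3-bucket-public-read-prohibited", "S3_BUCKET_PUBLIC_READ_PROHIBITED"),
    ("s3-bucket-public-write-prohibited", "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"),
]:
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": rule_name,
            "Source": {"Owner": "AWS", "SourceIdentifier": identifier},
            "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
        }
    )
```

Once the rules are in place, noncompliant buckets show up in the AWS Config dashboard, and you can wire up notifications from there.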

Amazon Macie – Macie is a new service from Amazon that lets you discover sensitive data stored in your S3 buckets, classify that data, and then get predictive alerts based on access to it. Look for us to describe Macie in more detail in a future post.

Management Controls

Management controls complement technical controls to ensure data stays secure. You need to have a plan. Not just any plan, but a security plan. Did you know the National Institute of Standards and Technology (NIST) publishes documents addressing security plans? Some pretty smart people work there, and the concepts still apply even though the guidance is primarily focused on organizations working with the federal government. I recommend reading Special Publication 800-18, “Guide for Developing Security Plans for Federal Information Systems”. Use the content of the publication to establish a security plan for your organization. Below are example steps you might take to establish a plan, but as with any process, it will need to be customized for your organization.

1) Create an Inventory

2) Create a Review Checklist

3) Assign Responsibility

4) Create Review Schedule

5) Automate When Possible

Create an Inventory

You can’t secure what you don’t know about. The first step in any security plan is to identify major applications and system boundaries. The exact taxonomy of your inventory will depend on your organization structure. For Simple Thread, that might mean grouping systems by client and then listing major applications. For an internal organization, that might mean grouping by department or line of business.

Create a Review Checklist

The NIST document has a thorough list of items to inventory and include in a review process. At a minimum, it should include a way to uniquely identify each major application and its risk profile. For AWS, it might include listing which accounts are used by each application, along with top-level S3 buckets and the technical controls that are in place. More generally, this could also include a list of which libraries and system components need upgrades, along with how urgent those upgrades are.

Assign Responsibility

For each major application, two roles should be assigned: a system owner and a security officer. The system owner is responsible for ensuring the security review is performed and is ultimately the accountable authority. The security officer is responsible for performing the review. No security process should rely on one person performing their duties perfectly, so these two roles should be filled by two different people. Even if the system owner does not have enough low-level knowledge of the system to evaluate the correctness of the security officer’s review, they can still assess its outcome; e.g., if no issues are found in several reviews in a row, or the inventory list never changes, it may be a sign that a more thorough review is needed.

Create Review Schedule

Once a general inventory and review process is defined, it must be repeated on a regular basis, e.g., quarterly. The system owner and especially the security officer must be given time to conduct this review. If a security officer is expected to conduct the review *on top of* their regular duties, then you are effectively stating that security is a lower priority than other duties.

Automate When Possible

At the end of the day, humans must be responsible and accountable for the security of software systems, but there may be ways to automate portions of the review to ease the burden on human reviewers. In addition to leveraging vendor-provided tools like Amazon Trusted Advisor or CloudWatch, consider whether you can create simple tools that are custom to your organization. For example, if your teams have standard libraries for AWS integration, you might write a script that searches through source code and server configuration for references to AWS keys or libraries (a sketch follows below). When the output of that audit script changes, it is a sign to the security officer that a system needs a deeper review.
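
Here is a minimal sketch of that kind of audit script. The “AKIA” access-key pattern and the scan root are illustrative only; a real version would likely also scan server configuration and look for other secret formats your teams use.

```python
import re
from pathlib import Path

# AWS access key IDs start with "AKIA" followed by 16 uppercase letters/digits.
ACCESS_KEY_PATTERN = re.compile(r"\bAKIA[0-9A-Z]{16}\b")


def scan(root: str) -> None:
    """Walk a source tree and flag anything that looks like a hard-coded AWS key."""
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), start=1):
            if ACCESS_KEY_PATTERN.search(line):
                print(f"{path}:{lineno}: possible AWS access key")


if __name__ == "__main__":
    scan(".")  # run from the root of the repository you want to audit
```

Diffing the output of each run against the previous one is a cheap way to notice when something new slips into the codebase between reviews.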

Cyber News: [Your Name] Opens Private Data

So, ask yourself: is your data left out in the open? How do you know? Are you a manager? Then ask yourself again, this time considering whether any contractors or third parties you work with store your data. Do you have your own set of controls in place so you aren’t the next AWS S3 data leak headline?

Loved the article? Hated it? Didn’t even read it?

We’d love to hear from you.
