Increasing data leaks from misconfigured AWS S3 buckets

Many organisations think that storing data in the cloud is far more secure than storing it on-premises. What most fail to realise is that storing data securely in the cloud is a shared responsibility between cloud providers and organisations. Organisations must appropriately configure their cloud infrastructure to keep their data secure.

In the last few months, approximately 30 large companies have suffered major data leaks because of misconfigured AWS S3 buckets. The most recent was just last week, when over 50,000 scanned NSW driver’s licences and completed tolling notices were found sitting insecurely in an AWS S3 bucket. Whether your data is hosted in the cloud or on-premises, keeping customer data safe is your responsibility. Below are some basic health checks that you can carry out on AWS S3.

♦ Block public access to all your S3 buckets unless it is strictly necessary. When a bucket must be publicly accessible, remove any files from it that shouldn’t be public.
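
  As a sketch, the Block Public Access settings can be applied per bucket from the AWS CLI; the bucket name below is a placeholder:

  ```shell
  # Apply all four S3 Block Public Access settings to a bucket.
  # "my-company-data" is a hypothetical bucket name.
  aws s3api put-public-access-block \
    --bucket my-company-data \
    --public-access-block-configuration \
      "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"
  ```

  The same settings can also be applied account-wide, which is the safer default for accounts that should hold no public buckets at all.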

♦ Disable public access granted through Access Control Lists (ACLs), as ACLs can override bucket-level access blocking. This can be done by enabling the “Block public access granted through new access control lists (ACLs)” and “Block public access granted through any access control lists (ACLs)” settings in S3 Block Public Access, or by disabling ACLs entirely via the bucket’s Object Ownership setting.
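
  One way to rule out ACL-based exposure entirely is to disable ACLs through the bucket’s Object Ownership setting; a minimal AWS CLI sketch (bucket name is hypothetical):

  ```shell
  # Disable ACLs on the bucket: the bucket owner owns every object
  # and access is controlled by policies alone.
  aws s3api put-bucket-ownership-controls \
    --bucket my-company-data \
    --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerEnforced}]'
  ```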

♦ Apply the principle of least privilege when configuring access to buckets. Make sure S3 permissions are granular: if a user does not need access to an S3 bucket, don’t grant that access. For application-layer access, use role-based access control whenever possible.
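
  A least-privilege identity-based policy might look like the following sketch; the user name, policy name, and bucket are all hypothetical:

  ```shell
  # Grant a single user read-only access to one bucket and nothing else.
  cat > readonly-policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::my-company-data",
          "arn:aws:s3:::my-company-data/*"
        ]
      }
    ]
  }
  EOF
  aws iam put-user-policy --user-name app-reader \
    --policy-name S3ReadOnly --policy-document file://readonly-policy.json
  ```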

♦ Enable server-side encryption so data is encrypted at rest, and enforce HTTPS (TLS) so data is encrypted in transit.
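
  Default encryption at rest can be switched on per bucket; a sketch with a hypothetical bucket name:

  ```shell
  # Enable default server-side encryption (SSE-S3, AES-256) for new objects.
  aws s3api put-bucket-encryption \
    --bucket my-company-data \
    --server-side-encryption-configuration \
      '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
  ```

  Encryption in transit can be enforced separately with a bucket policy that denies any request where the `aws:SecureTransport` condition key is false.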

♦ Enable server access logging for all S3 buckets. Bucket access requests are captured and the logs are delivered periodically on a best-effort basis; these logs should be stored in a separate S3 bucket to avoid logging loops.
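
  A sketch of enabling access logging from the CLI; both bucket names are placeholders, and the target bucket must already grant the S3 log delivery service permission to write:

  ```shell
  # Send access logs for "my-company-data" to a dedicated logging bucket.
  aws s3api put-bucket-logging \
    --bucket my-company-data \
    --bucket-logging-status \
      '{"LoggingEnabled":{"TargetBucket":"my-company-logs","TargetPrefix":"access-logs/"}}'
  ```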

♦ Enable versioning to keep multiple variants of an object in the same bucket. Use versioning to preserve, retrieve, and restore every version of every object stored in your Amazon S3 bucket. With versioning, you can easily recover from both unintended user actions and application failures.
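
  Versioning is a single CLI call per bucket (bucket name below is hypothetical):

  ```shell
  # Turn on versioning; once enabled, it can be suspended but never removed.
  aws s3api put-bucket-versioning \
    --bucket my-company-data \
    --versioning-configuration Status=Enabled
  ```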

♦ Enable the S3 Object Lock feature for additional protection against object changes. S3 Object Lock uses a write-once-read-many (WORM) model that prevents objects from being overwritten or deleted. Objects can be locked in two ways:

  1. by specifying a retention period, or
  2. by placing a legal hold, which remains in effect until it is explicitly removed.

   Enable this for critical objects such as CloudTrail logs.
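
   As a sketch, Object Lock must be enabled when the bucket is created (versioning is turned on automatically), after which a default retention rule can be set; bucket name, mode, and retention period below are illustrative:

   ```shell
   # Create a bucket with Object Lock enabled (required at creation time).
   aws s3api create-bucket --bucket my-audit-logs \
     --object-lock-enabled-for-bucket

   # Apply a default retention rule: objects cannot be deleted or
   # overwritten for 365 days, even by the root account (COMPLIANCE mode).
   aws s3api put-object-lock-configuration \
     --bucket my-audit-logs \
     --object-lock-configuration \
       '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":365}}}'
   ```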

♦ Set appropriate access policies on your buckets. You can attach a policy to the bucket itself (a resource-based policy) or to an IAM user or role (an identity-based policy). For example, a bucket policy can grant a particular user full access to that bucket, or an identity-based policy can grant a user full access to a particular S3 bucket. For ease of maintenance, use only one type of policy across all of your S3 resources.
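
  A resource-based (bucket) policy granting one user full access might be sketched as follows; the account ID, user, and bucket names are placeholders:

  ```shell
  # Attach a bucket policy allowing a single IAM user full access.
  cat > bucket-policy.json <<'EOF'
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/app-admin"},
        "Action": "s3:*",
        "Resource": [
          "arn:aws:s3:::my-company-data",
          "arn:aws:s3:::my-company-data/*"
        ]
      }
    ]
  }
  EOF
  aws s3api put-bucket-policy --bucket my-company-data \
    --policy file://bucket-policy.json
  ```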

♦ Enable MFA Delete to protect against accidental or malicious deletions in versioned buckets. MFA Delete requires additional authentication for either of the following operations:

  1. Changing the versioning state of your bucket
  2. Permanently deleting an object version
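
  MFA Delete can only be enabled by the root account via the API or CLI (not the console); a sketch, with the MFA device ARN and token code as placeholders:

  ```shell
  # Enable versioning with MFA Delete; the trailing value is the current
  # six-digit code from the root account's MFA device.
  aws s3api put-bucket-versioning \
    --bucket my-company-data \
    --versioning-configuration Status=Enabled,MFADelete=Enabled \
    --mfa "arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456"
  ```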
