[AWS] 7. Amazon S3

Nina · March 28, 2021



One of the most important building blocks of AWS
Advertised as ‘infinitely scaling’ storage

Amazon S3 Overview


Amazon S3 allows people to store objects (files) in buckets (directories)
Buckets must have a globally unique name
Buckets are defined at the regional level


Objects have a key
The key is the full path -> the key is composed of a prefix + object name

Object values are the content of the body

  • max object size: 5TB
  • if uploading more than 5GB, must use ‘multi-part upload’

Objects also carry metadata, tags, and a version ID
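The key/prefix split and the multi-part threshold above can be sketched in a few lines of Python. The key string and bucket names here are illustrative, not from any real bucket:

```python
# A key like "my_folder1/another_folder/my_file.txt" is the full path:
# prefix = "my_folder1/another_folder/", object name = "my_file.txt".
def split_key(key: str) -> tuple[str, str]:
    """Split an S3 key into its prefix and object name."""
    idx = key.rfind("/")
    if idx == -1:
        return "", key            # no prefix: the key is just the object name
    return key[:idx + 1], key[idx + 1:]

GB = 1024 ** 3
MULTIPART_THRESHOLD = 5 * GB      # above this size, multi-part upload is required

def needs_multipart(size_bytes: int) -> bool:
    return size_bytes > MULTIPART_THRESHOLD

prefix, name = split_key("my_folder1/another_folder/my_file.txt")
```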


You can version your files in Amazon S3
Versioning is enabled at the bucket level
Overwriting the same key will increment the version
It is best practice to version your buckets

  • protects against unintended deletes
  • easy roll back to a previous version
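The overwrite-increments-version behavior can be illustrated with a toy in-memory model (this is not the AWS API, just a sketch of the semantics):

```python
# Toy model of S3 bucket versioning: overwriting the same key appends a new
# version instead of destroying the old one, so old versions stay retrievable.
class VersionedBucket:
    def __init__(self):
        self._versions = {}               # key -> list of object bodies

    def put(self, key, body):
        self._versions.setdefault(key, []).append(body)
        return len(self._versions[key])   # version number of what was just written

    def get(self, key, version=None):
        history = self._versions[key]
        # no version given -> latest; otherwise roll back to that version
        return history[-1] if version is None else history[version - 1]

bucket = VersionedBucket()
bucket.put("index.html", b"v1 content")
bucket.put("index.html", b"v2 content")   # same key -> version increments
```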

S3 Encryption

There are 4 methods of encrypting objects in S3

  • SSE-S3, SSE-KMS, SSE-C, Client-Side Encryption

It’s important to understand which ones are suited to which situation


SSE-S3

Encryption using keys handled & managed by Amazon S3
Object is encrypted server-side
AES-256 encryption type
Must set header: ‘x-amz-server-side-encryption’: ’AES256’


SSE-KMS

Encryption using keys handled & managed by AWS KMS -> user control + audit trail
Object is encrypted server-side
Must set header: ‘x-amz-server-side-encryption’: ’aws:kms’
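The two AWS-managed modes differ only in the value of that one request header. A minimal sketch (the dict layout mimics what an HTTP client would send; with boto3 the equivalent is the `ServerSideEncryption` parameter of `put_object`):

```python
# Header required for each AWS-managed server-side encryption mode,
# as listed in the notes above.
def sse_header(mode: str) -> dict[str, str]:
    values = {"SSE-S3": "AES256", "SSE-KMS": "aws:kms"}
    return {"x-amz-server-side-encryption": values[mode]}
```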


SSE-C

Server-side encryption using data keys fully managed by the customer outside of AWS
Amazon S3 does not store the encryption key you provide
HTTPS is mandatory
The encryption key must be provided in HTTP headers, for every HTTP request made
Requires a lot more management
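Since the key travels in headers on every request, a client has to build those headers itself. A sketch using the documented SSE-C header names, with a random throwaway key for illustration:

```python
import base64
import hashlib
import os

# SSE-C: the client supplies its own AES-256 data key on every request,
# base64-encoded, together with an MD5 checksum of the key.
def sse_c_headers(key: bytes) -> dict[str, str]:
    assert len(key) == 32, "SSE-C requires a 256-bit key"
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(os.urandom(32))   # must be sent over HTTPS, every request
```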

Client Side Encryption

Client library such as the Amazon S3 Encryption Client
Clients must encrypt data themselves before sending it to S3
Clients must decrypt data themselves when retrieving it from S3
Customer fully manages the keys and the encryption cycle

S3 Security

User based

  • IAM policies: which API calls should be allowed for a specific user (from the IAM console)

Resource based

  • bucket policies: bucket-wide rules from the S3 console - allow cross-account access
  • object access control list (ACL)
  • bucket access control list (ACL)

An IAM principal can access an S3 object if:

  • the user’s IAM permissions allow it OR the resource policy allows it
  • AND there’s no explicit deny
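That access rule can be expressed as a one-line function (a simplification of the real IAM policy-evaluation logic, which also handles conditions, SCPs, etc.):

```python
# An IAM principal can access an S3 object if IAM permissions OR the
# resource policy allow it, AND there is no explicit deny.
def can_access(iam_allows: bool, resource_policy_allows: bool, explicit_deny: bool) -> bool:
    return (iam_allows or resource_policy_allows) and not explicit_deny
```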

S3 Bucket Policies

JSON based policies

  • resources: buckets and objects
  • actions: set of APIs to allow or deny
  • effect: allow/deny
  • principal: the account or user to apply the policy to

Use an S3 bucket policy to:

  • grant public access to the bucket
  • force objects to be encrypted at upload
  • grant access to another account (cross-account)
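A sketch of the first use case — a bucket policy granting public read access. The bucket name `examplebucket` is a placeholder; the Version/Statement/Effect/Principal/Action/Resource layout follows the standard IAM policy grammar:

```python
import json

# Public-read bucket policy: any principal ("*") may GetObject on any key
# ("/*") in the placeholder bucket.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": ["arn:aws:s3:::examplebucket/*"],
        }
    ],
}

policy_document = json.dumps(public_read_policy)  # what you'd paste into the S3 console
```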

Bucket Settings for Block Public Access
These settings were created to prevent company data leaks


Other security features:

  • Networking: supports VPC endpoints
  • Logging and audit
  • User security: MFA Delete, pre-signed URLs

S3 Websites

S3 can host static websites and have them accessible on the www
If you get a 403 (Forbidden) error, make sure the bucket policy allows public reads


S3 CORS

An origin is a scheme (protocol), host (domain) and port
CORS means Cross-Origin Resource Sharing
Web-browser-based mechanism to allow requests to other origins while visiting the main origin
If a client makes a cross-origin request to our S3 bucket, we need to enable the correct CORS headers
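Enabling those headers is done with a CORS configuration on the bucket. A sketch allowing GETs from one other origin — `https://www.example.com` is a placeholder, and the `CORSRules` layout follows the JSON format S3 accepts in the console:

```python
import json

# CORS configuration letting pages served from one other origin fetch
# objects from this bucket with GET requests.
cors_configuration = {
    "CORSRules": [
        {
            "AllowedOrigins": ["https://www.example.com"],
            "AllowedMethods": ["GET"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3000,   # how long the browser may cache the preflight
        }
    ]
}

cors_document = json.dumps(cors_configuration)
```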

