Techhub Support

Amazon S3 (Simple Storage Service)


Introduction to Amazon S3

This article is intended to provide you with a basic understanding of the core object storage service available on AWS: Amazon Simple Storage Service (Amazon S3).

Amazon S3, one of the first services introduced by AWS, serves as a foundational web service: nearly every application running in AWS uses Amazon S3 either directly or indirectly. Through a simple web service interface, it can be used to store and retrieve any amount of data from anywhere on the web. Amazon S3 offers easy-to-use object storage that provides IT and developer teams with durable, secure, and highly scalable cloud storage.

Amazon S3 provides a very high level of integration with other AWS services, which can be used either in conjunction with S3 or on their own. Amazon S3 eliminates the capacity planning and capacity constraints associated with traditional storage, since you pay only for the storage you actually use.

Common use cases for Amazon S3 storage include:

  • Backup and archive for cloud data or on-premises
  • Content, media, and software storage and distribution
  • Big data analytics
  • Static website hosting
  • Cloud-native mobile and Internet application hosting
  • Disaster recovery

Amazon S3 offers a range of storage classes designed to support the use cases above and many more: general purpose, infrequent access, and archive. It also offers configurable lifecycle policies to help manage data through its lifecycle; with lifecycle policies, your data can automatically migrate to the most appropriate storage class without any changes to your application code. For access security, Amazon S3 provides a rich set of permissions, access controls, and encryption options to control who has access to your data.

Advantages of Amazon S3

  • Create Buckets – You can create and name a bucket, the fundamental container in Amazon S3 for data storage.
  • Store data in Buckets – Store a virtually unlimited amount of data in a bucket; you can upload as many objects as you like, and each object can contain up to 5 TB of data.
  • Download data – Download your data at any time, or allow others to do the same.
  • Permissions – Grant or deny permission to specific users to upload or download data to or from your Amazon S3 bucket.
  • Standard interfaces – Standards-based REST and SOAP interfaces are designed to work with any Internet-development toolkit.

AWS S3 Bucket Restrictions and Limitations

  • By default, you can create up to 100 buckets in each of your AWS accounts.
  • If you need additional buckets, you can increase your bucket limit by submitting a service limit increase request.
  • You cannot create a bucket within another bucket.
  • Amazon S3's high-availability engineering is focused on GET, PUT, LIST, and DELETE operations.

Rules for Bucket Naming

All bucket names must comply with DNS naming conventions.

  • Bucket names must be at least 3 and no more than 63 characters long.
  • Bucket names must be a series of one or more labels, with adjacent labels separated by a single period (.). Each label must start and end with a lowercase letter or a number, and can contain lowercase letters, numbers, and hyphens.
  • Bucket names must not be formatted as an IP address (e.g., 142.116.5.3).
  • When you are using virtual hosted–style buckets with SSL, the SSL wildcard certificate only matches buckets that do not contain periods.
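The naming rules above can be checked locally before you ever call the S3 API. The following is a minimal sketch; the function name `is_valid_bucket_name` is our own illustration, not part of any AWS SDK:

```python
import re

# One DNS label: starts and ends with a lowercase letter or digit,
# and may contain lowercase letters, digits, and hyphens in between.
LABEL = re.compile(r"^[a-z0-9]([a-z0-9-]*[a-z0-9])?$")

def is_valid_bucket_name(name: str) -> bool:
    """Check a bucket name against the DNS naming rules listed above."""
    # Must be 3-63 characters long.
    if not 3 <= len(name) <= 63:
        return False
    # Must not be formatted as an IP address (e.g., 142.116.5.3).
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Must be one or more valid labels separated by single periods.
    return all(LABEL.match(label) for label in name.split("."))
```

For example, `is_valid_bucket_name("my.example.bucket")` passes, while an uppercase or IP-formatted name is rejected.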

AWS S3 Features

  1. Reduced Redundancy Storage:-
  • Customers can store their data using the Amazon S3 RRS (Reduced Redundancy Storage) option. RRS helps customers reduce costs by storing non-critical, reproducible data at lower levels of redundancy than standard storage.
  • RRS provides 99.99% durability of objects over a given year, meaning that the average expected loss of objects is 0.01% annually.
  2. Bucket Policies:-
  • Bucket policies provide centralized access control to buckets and objects for Amazon S3 operations, based on a variety of conditions: requesters, resources, and aspects of the request (e.g., IP address).
  • An account can grant limited read and write access to one application, but allow another to create and delete buckets.
  • An account can allow several field offices to store their daily reports in a single bucket, permitting each office to write only to a certain set of names (e.g., “Nevada/*” or “Utah/*”) and only from the office’s IP address range.
  • Only the bucket owner is allowed to associate a policy with a bucket.
  3. AWS Identity and Access Management:-
  • You can use IAM (Identity and Access Management) to control the type of access a user or group of users has to specific parts of an Amazon S3 bucket that your AWS account owns.
  4. Managing Access with ACLs:-
  • Access control lists (ACLs) are resource-based access policy options; you can use ACLs to grant basic read and write permissions on buckets and objects to other AWS accounts.
  • There are limits to managing permissions with ACLs: you can grant permissions only to other AWS accounts, not to users in your own account. You also cannot grant conditional permissions, nor can you explicitly deny permissions.

Amazon S3 supports canned ACLs. Each canned ACL has a predefined set of grantees and permissions. The canned ACLs and their associated predefined grants and permissions are as follows.

private – No one else has access rights (default). Owner gets FULL_CONTROL.

public-read-write – The All Users group gets READ and WRITE access. Owner gets FULL_CONTROL.

aws-exec-read – Amazon EC2 gets READ access to GET an AMI (Amazon Machine Image) bundle from Amazon S3. Owner gets FULL_CONTROL.

authenticated-read – The Authenticated Users group gets READ access. Owner gets FULL_CONTROL.

bucket-owner-read – The bucket owner gets READ access; the object owner gets FULL_CONTROL. If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.

bucket-owner-full-control – Both the object owner and the bucket owner get full control over the object.

log-delivery-write – The Log Delivery group gets WRITE and READ_ACP permissions on the bucket.
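The canned ACLs above can be summarized as a small lookup table. The sketch below simply encodes the list as a Python dict; the dict itself is our own illustration, but the keys are the canned ACL name strings that the Amazon S3 API accepts:

```python
# Canned ACL name -> grants applied (in addition to the owner's FULL_CONTROL,
# where the description above says so).
CANNED_ACLS = {
    "private": "No one else has access rights (default).",
    "public-read-write": "All Users group gets READ and WRITE access.",
    "aws-exec-read": "Amazon EC2 gets READ access to GET an AMI bundle.",
    "authenticated-read": "Authenticated Users group gets READ access.",
    "bucket-owner-read": "Bucket owner gets READ access (ignored on bucket creation).",
    "bucket-owner-full-control": "Object owner and bucket owner both get full control.",
    "log-delivery-write": "Log Delivery group gets WRITE and READ_ACP on the bucket.",
}
```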

  5. Versioning:-
  • Versioning enables you to keep multiple versions of an object in a single bucket, for example, my-image.jpg (version 111) and my-image.jpg (version 222). You might enable versioning to protect against unintended overwrites and deletions, or to archive objects so that you can retrieve previous versions of them.
  6. Operations:-
  • Create a Bucket – Create and name a bucket in which to store your objects.
  • Write an Object – Store data by creating or overwriting an object. When you write an object, you specify a unique key in the namespace of your bucket, along with any access control you want on the object.
  • Read an Object – Read your data back. You can download the data via HTTP or BitTorrent.
  • Delete an Object – Delete some of your data.
  • List Keys – List the keys contained in one of your buckets. You can filter the key list based on a prefix.
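The prefix filtering used when listing keys can be illustrated locally with plain Python; the key names below are made up for the example, and `filter_keys` is our own helper, not an S3 API call:

```python
def filter_keys(keys, prefix=""):
    """Mimic how an S3 key listing is narrowed by a prefix."""
    return sorted(k for k in keys if k.startswith(prefix))

# Hypothetical keys, echoing the field-office example above.
keys = ["Nevada/2024-01.csv", "Nevada/2024-02.csv", "Utah/2024-01.csv"]
```

Calling `filter_keys(keys, "Nevada/")` returns only the two Nevada reports, just as listing with that prefix would.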

 

AWS S3 Storage Classes

In Amazon S3, each object has a storage class associated with it, which you can choose depending on your use case and performance access requirements. All of these storage classes offer high durability.

The following storage classes are offered by Amazon S3 for the objects that you store.

  • STANDARD – The default storage class; if you don’t specify a storage class when you upload an object, Amazon S3 assumes STANDARD. It is best suited for performance-sensitive use cases and frequently accessed data.
  • STANDARD_IA – This storage class (IA stands for infrequent access) is optimized for long-lived, less frequently accessed data. It is best suited for backups and older data that is accessed less often but still requires high performance when it is accessed.
  • GLACIER – The GLACIER storage class is suitable for archiving data where access is infrequent. Archived objects are not available for real-time access; you must first restore an object before you can access it.
  • REDUCED_REDUNDANCY – To reduce storage costs, you can use the RRS (Reduced Redundancy Storage) class, which allows noncritical, reproducible data to be stored at lower levels of redundancy than the STANDARD storage class.

AWS Cross-origin resource sharing (CORS)

  • CORS (cross-origin resource sharing) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can selectively allow cross-origin access to your Amazon S3 resources and build rich client-side web applications.
  • Scenario: Suppose you are hosting a website in an Amazon S3 bucket named website. Your users load the website at the endpoint http://website.s3-website-us-east-1.amazonaws.com. Ordinarily the browser would block cross-origin JavaScript requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from http://website.s3-website-us-east-1.amazonaws.com. JavaScript on the web pages stored in this bucket can then make authenticated GET and PUT requests for the same bucket through Amazon S3’s API endpoint for the bucket, website.s3.amazonaws.com.

How Do I Configure CORS on My Bucket?

  • To configure a bucket to allow cross-origin requests, you create a CORS configuration: an XML document with rules that identify the origins you will allow, the HTTP methods supported for each origin, and other operation-specific information.
  • You can add up to 100 rules to the configuration.
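For the scenario above, a CORS configuration might look like the following sketch; the origin URL is the example website endpoint, and you would substitute your own:

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://website.s3-website-us-east-1.amazonaws.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```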


Object Lifecycle Management

Lifecycle:- Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket. The configuration is a set of one or more rules, where each rule defines an action that Amazon S3 applies to a group of objects.

  • Transition actions – Define when objects transition to another storage class. For example, you might choose to transition objects to the STANDARD_IA (Infrequent Access) storage class 30 days after creation, or archive objects to the GLACIER storage class one year after creation.
  • Expiration actions – Specify when objects expire; Amazon S3 then deletes the expired objects on your behalf.
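Put together, a lifecycle rule combining both action types might look like the following sketch; the rule ID, prefix, and day counts are illustrative:

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-old-reports</ID>
    <Filter>
      <Prefix>reports/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>STANDARD_IA</StorageClass>
    </Transition>
    <Transition>
      <Days>365</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
    <Expiration>
      <Days>3650</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```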

AWS S3 Pricing

  • Most storage providers force you to purchase a predetermined amount of storage and network transfer capacity. If you exceed that capacity, you are charged high overage fees or your service is shut off; if you do not exceed it, you pay as though you had used it all. Amazon S3 is designed to eliminate this problem: you don’t have to plan your application’s storage requirements in advance, because you pay only for the storage you actually use.
  • With no hidden fees and no overage charges, you pay only for what you consume, which gives developers a variable-cost service that can grow with their business while enjoying the cost advantages of Amazon’s infrastructure.
  • There are no set-up fees, but before storing anything in Amazon S3 you need to register with the service and provide a payment instrument, which is charged automatically at the end of each month for that month’s usage.

Hosting a Static Website on Amazon S3

  • You can host a static website on Amazon Simple Storage Service (Amazon S3). On a static website, individual web pages include static content; they might also contain client-side scripts. By contrast, a dynamic website relies on server-side processing, including server-side scripts such as PHP, JSP, or ASP.NET. Amazon S3 does not support server-side scripting, but Amazon Web Services (AWS) has other resources for hosting dynamic websites.
  • To host a static website, you configure an Amazon S3 bucket for website hosting and then upload your website content to the bucket. The content is then available at the AWS Region-specific website endpoint of the bucket:

<bucket-name>.s3-website-<AWS-region>.amazonaws.com

Setting up a Static Website:-

Step 1: Create a Bucket and configure it as a Website.

Step 2: Add a Bucket Policy so that your bucket content is publicly accessible.

Step 3: Upload an Index Document.

Step 4: Test the Website.
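Step 2’s bucket policy is a JSON document that grants public read access to the bucket’s objects. The sketch below builds the standard public-read statement; the bucket name and the helper `public_read_policy` are our own, and you would attach the resulting JSON to your bucket:

```python
import json

def public_read_policy(bucket_name: str) -> str:
    """Build the standard public-read bucket policy for a static website."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            # Grant read access to every object in the bucket.
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
        }],
    }
    return json.dumps(policy, indent=2)
```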

Abhay Singh
