---
title: "Simple Storage Service (S3)"
description: Get started with Amazon S3 on LocalStack
persistence: supported
tags: ["Free"]
---

import FeatureCoverage from "../../../../components/feature-coverage/FeatureCoverage";

## Introduction

Simple Storage Service (S3) is an object storage service that provides a highly scalable and durable solution for storing and retrieving data.
In S3, a bucket represents a directory, while an object corresponds to a file.
Each object or file within S3 encompasses essential attributes such as a unique key.
S3 can store unlimited objects, allowing you to store, retrieve, and manage your data in a highly adaptable and reliable manner.

LocalStack allows you to use the S3 APIs in your local environment to create new buckets, manage your S3 objects, and test your S3 configurations locally.
The supported APIs are available on the API coverage section for [S3](#api-coverage) and [S3 Control](#api-coverage-s3-control), which provides information on the extent of S3's integration with LocalStack.

## Getting started

This guide is designed for users new to S3 and assumes basic knowledge of the AWS CLI and our [`awslocal`](https://github.com/localstack/awscli-local) wrapper script.
Start your LocalStack container using your preferred method.
We will demonstrate how you can create an S3 bucket, manage S3 objects, and generate pre-signed URLs for S3 objects.

### Create an S3 bucket

You can create an S3 bucket using the [`CreateBucket`](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html) API.
Run the following command to create an S3 bucket named `sample-bucket`:

```bash
awslocal s3api create-bucket --bucket sample-bucket
```

You can list your S3 buckets using the [`ListBuckets`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-buckets.html) API.
Run the following command to list your S3 buckets:

```bash
awslocal s3api list-buckets
```

```bash title="Output"
{
    "Buckets": [
        {
            "Name": "sample-bucket",
            "CreationDate": "2023-07-18T06:36:25+00:00"
        }
    ],
    "Owner": {
        "ID": "75aa57f09aa0c8caeab4f8c24e99d10f8e7faeebf76c078efc7c6caea54ba06a"
    }
}
```

### Managing S3 objects

To upload a file to your S3 bucket, you can use the [`PutObject`](https://docs.aws.amazon.com/cli/latest/reference/s3api/put-object.html) API.
Download a random image from the internet and save it as `image.jpg`.
Run the following command to upload the file to your S3 bucket:

```bash
awslocal s3api put-object \
  --bucket sample-bucket \
  --key image.jpg \
  --body image.jpg
```

You can list the objects in your S3 bucket using the [`ListObjects`](https://docs.aws.amazon.com/cli/latest/reference/s3api/list-objects.html) API.
Run the following command to list the objects in your S3 bucket:

```bash
awslocal s3api list-objects \
  --bucket sample-bucket
```

If your image has been uploaded successfully, you will see the following output:

```bash title="Output"
{
    "Contents": [
        {
            "Key": "image.jpg",
            "LastModified": "2023-07-18T06:40:07+00:00",
            "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
            "Size": 0,
            "StorageClass": "STANDARD"
        }
    ]
}
```

Run the following command to upload a file named `index.html` to your S3 bucket:

```bash
awslocal s3api put-object --bucket sample-bucket --key index.html --body index.html
```

```bash title="Output"
{
    "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\""
}
```

### Generate a pre-signed URL for S3 object

You can generate a pre-signed URL for your S3 object using the [`presign`](https://docs.aws.amazon.com/cli/latest/reference/s3/presign.html) command.
A pre-signed URL allows anyone to retrieve the S3 object with an HTTP GET request.
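Under the hood, a pre-signed URL is produced by an AWS Signature Version 4 query-string signing step. The sketch below reproduces that computation with only the Python standard library; the `test`/`test` credentials, region, and `localhost.localstack.cloud` endpoint are assumptions matching common LocalStack defaults, and a real SDK handles details (key URI-encoding, session tokens) that are omitted here:

```python
import datetime
import hashlib
import hmac
from urllib.parse import quote


def presign_get(bucket, key, *, access_key="test", secret_key="test",
                region="us-east-1", host_suffix="s3.localhost.localstack.cloud:4566",
                expires=3600, now=None):
    """Illustrative SigV4 query-string presigning for a GET request."""
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    scope = f"{datestamp}/{region}/s3/aws4_request"
    host = f"{bucket}.{host_suffix}"  # virtual-hosted style: bucket in the Host header
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted keys, fully percent-encoded values.
    query = "&".join(f"{quote(k, safe='')}={quote(v, safe='')}"
                     for k, v in sorted(params.items()))
    # NOTE: the key is used as-is here; real implementations URI-encode each path segment.
    canonical_request = "\n".join(
        ["GET", f"/{key}", query, f"host:{host}", "", "host", "UNSIGNED-PAYLOAD"])
    string_to_sign = "\n".join(
        ["AWS4-HMAC-SHA256", amz_date, scope,
         hashlib.sha256(canonical_request.encode()).hexdigest()])
    # Derive the signing key through the nested HMAC chain.
    signing_key = f"AWS4{secret_key}".encode()
    for step in (datestamp, region, "s3", "aws4_request"):
        signing_key = hmac.new(signing_key, step.encode(), hashlib.sha256).digest()
    signature = hmac.new(signing_key, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"http://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

In practice the CLI and SDKs perform this computation for you.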
Run the following command to generate a pre-signed URL for your S3 object:

```bash
awslocal s3 presign s3://sample-bucket/image.jpg
```

You will see a generated pre-signed URL for your S3 object.
You can use [curl](https://curl.se/) or [`wget`](https://www.gnu.org/software/wget/) to retrieve the S3 object using the pre-signed URL.

### High-Level CLI Commands

While the `s3api` commands above allow for precise control, you can also use the high-level `s3` commands for common tasks.
These are often faster to type and easier to remember.

Run the following commands to perform the same operations using the high-level API:

```bash
# Create a bucket
awslocal s3 mb s3://my-test-bucket

# Upload a file
echo "Hello World" > hello.txt
awslocal s3 cp hello.txt s3://my-test-bucket/

# List the bucket contents
awslocal s3 ls s3://my-test-bucket/

# Download the file back
awslocal s3 cp s3://my-test-bucket/hello.txt downloaded.txt
```

## Path-Style and Virtual Hosted-Style Requests

Similar to AWS, LocalStack categorizes requests as either [Path style or Virtual-Hosted style](https://docs.aws.amazon.com/AmazonS3/latest/userguide/VirtualHosting.html) based on the Host header of the request.
The following example illustrates this distinction:

```bash
http://<bucket-name>.s3.<region>.localhost.localstack.cloud:4566/<key-name> # host-style request
http://<bucket-name>.s3.localhost.localstack.cloud:4566/<key-name>          # host-style request, region is not mandatory in LocalStack
http://s3.<region>.localhost.localstack.cloud:4566/<bucket-name>/<key-name> # path-style request
http://localhost:4566/<bucket-name>/<key-name>                              # path-style request
```

A **Virtual-Hosted style** request will have the `bucket` as part of the `Host` header of your request.
In order for LocalStack to be able to parse the bucket name from your request, your endpoint needs to be prefixed with `s3.`, like `s3.localhost.localstack.cloud`.

If your endpoint cannot be prefixed with `s3.`, you should configure your SDK to use **Path style** requests instead, and make the bucket part of the path.

By default, most SDKs will try to use **Virtual-Hosted style** requests and prepend your endpoint with the bucket name.
However, if the endpoint is not prefixed by `s3.`, LocalStack will not be able to understand the request, and it will most likely result in an error.

You can either change the endpoint to an S3-specific one, or configure your SDK to use **Path style** requests instead.
Check out our [SDK documentation](/aws/tooling/localstack-sdks/) to learn how you can configure AWS SDKs to access LocalStack and S3.

:::tip
While using [AWS SDKs](https://aws.amazon.com/developer/tools/#SDKs), you need to set the `ForcePathStyle` parameter to `true` in the S3 client configuration to use **Path style** requests.
If you want to use virtual host addressing of buckets, you can remove `ForcePathStyle` from the configuration.
The `ForcePathStyle` parameter name can vary between SDKs and languages, so please check our [SDK documentation](/aws/tooling/localstack-sdks/).
:::

If your endpoint is not prefixed with `s3.`, all requests are treated as **Path style** requests.
Using the `s3.localhost.localstack.cloud` endpoint URL is recommended for all requests aimed at S3.

## Configuring Cross-Origin Resource Sharing on S3

You can configure Cross-Origin Resource Sharing (CORS) on a LocalStack S3 bucket using the AWS Command Line Interface (CLI).
This allows your local application to communicate directly with an S3 bucket in LocalStack.
By default, LocalStack applies specific CORS rules to all requests so that you can display and access your resources through the [LocalStack Web Application](https://app.localstack.cloud).
If no CORS rules are configured for your S3 bucket, LocalStack applies these default rules unless specified otherwise.

To configure CORS rules for your S3 bucket, you can use the `awslocal` wrapper.
Optionally, you can run a local web application on [localhost:3000](http://localhost:3000).
You can emulate the same behaviour with an AWS SDK or an integration you use.
Follow this step-by-step guide to configure CORS rules on your S3 bucket.

Run the following command on your terminal to create your S3 bucket:

```bash
awslocal s3api create-bucket --bucket cors-bucket
```

```bash title="Output"
{
    "Location": "/cors-bucket"
}
```

Next, create a JSON file with the CORS configuration.
The file should have the following format:

```json title="cors-config.json" showLineNumbers
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "POST", "PUT"],
      "AllowedOrigins": ["http://localhost:3000"],
      "ExposeHeaders": ["ETag"]
    }
  ]
}
```

:::note
Note that this configuration is a sample, and you can tailor it to fit your needs better, for example, restricting the **AllowedHeaders** to specific ones.
:::

Save the file locally with a name of your choice, for example, `cors-config.json`.
Run the following command to apply the CORS configuration to your S3 bucket:

```bash
awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
```

You can further verify that the CORS configuration was applied successfully by running the following command:

```bash
awslocal s3api get-bucket-cors --bucket cors-bucket
```

On applying the configuration successfully, you should see the same JSON configuration file you created earlier.
Your S3 bucket is configured to allow cross-origin resource sharing, and if you try to send requests from your local application running on [localhost:3000](http://localhost:3000), they should be successful.

However, if you try to access your bucket from the [LocalStack Web Application](https://app.localstack.cloud), you'll see errors, and your bucket won't be accessible anymore.
We can edit the JSON file `cors-config.json` you created earlier with the following configuration and save it:

```json title="cors-config.json" showLineNumbers
{
  "CORSRules": [
    {
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "POST", "PUT", "HEAD", "DELETE"],
      "AllowedOrigins": [
        "http://localhost:3000",
        "https://app.localstack.cloud",
        "http://app.localstack.cloud"
      ],
      "ExposeHeaders": ["ETag"]
    }
  ]
}
```

You can now run the same steps as before to update the CORS configuration and verify that it is applied correctly:

```bash
awslocal s3api put-bucket-cors --bucket cors-bucket --cors-configuration file://cors-config.json
awslocal s3api get-bucket-cors --bucket cors-bucket
```

You can try again to upload files in your bucket from the [LocalStack Web Application](https://app.localstack.cloud) and it should work.
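A malformed CORS file is only rejected when you apply it, so it can help to sanity-check the JSON locally first. The helper below is our own illustrative sketch, not part of any AWS tooling; it checks the rule fields used in the configurations above:

```python
# S3 CORS rules only accept these HTTP methods in AllowedMethods.
VALID_METHODS = {"GET", "PUT", "POST", "DELETE", "HEAD"}


def validate_cors(config):
    """Return a list of human-readable problems; an empty list means the config looks sane."""
    problems = []
    rules = config.get("CORSRules") or []
    if not rules:
        problems.append("CORSRules must be a non-empty list")
    for i, rule in enumerate(rules):
        if not rule.get("AllowedMethods"):
            problems.append(f"rule {i}: AllowedMethods is required")
        for method in rule.get("AllowedMethods", []):
            if method not in VALID_METHODS:
                problems.append(f"rule {i}: {method!r} is not a valid CORS method")
        if not rule.get("AllowedOrigins"):
            problems.append(f"rule {i}: AllowedOrigins is required")
        for origin in rule.get("AllowedOrigins", []):
            if origin != "*" and not origin.startswith(("http://", "https://")):
                problems.append(f"rule {i}: origin {origin!r} should be '*' or a URL")
    return problems
```

Running it over the sample configuration from this section returns an empty list; a rule with an unsupported method such as `PATCH` or an empty `AllowedOrigins` produces one problem per issue.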
## S3 Docker image

LocalStack provides a Docker image for S3, which you can use to run S3 in a Docker container.
The image is available on [Docker Hub](https://hub.docker.com/r/localstack/localstack) and can be pulled using the following command:

```bash
docker pull localstack/localstack:s3-latest
```

The S3 Docker image only supports the S3 APIs and does not include other services like Lambda, DynamoDB, etc.
You can run the S3 Docker image using any of the following commands:

import { Tabs, TabItem } from '@astrojs/starlight/components';

<Tabs>
<TabItem label="LocalStack CLI">
```bash
IMAGE_NAME=localstack/localstack:s3-latest localstack start
```
</TabItem>
<TabItem label="Docker Compose">
```yaml showLineNumbers
services:
  localstack:
    container_name: "${LOCALSTACK_DOCKER_NAME:-localstack-main}"
    image: localstack/localstack:s3-latest
    ports:
      - "127.0.0.1:4566:4566"            # LocalStack Gateway
    environment:
      - DEBUG=${DEBUG:-0}
    volumes:
      - "${LOCALSTACK_VOLUME_DIR:-./volume}:/var/lib/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
```
</TabItem>
<TabItem label="Docker CLI">
```bash
docker run \
  --rm \
  -p 4566:4566 \
  localstack/localstack:s3-latest
```
</TabItem>
</Tabs>
The S3 Docker image supports the same S3 APIs as the main LocalStack Docker image.
You can use similar [configuration options](/aws/capabilities/config/configuration/#s3) to alter the behaviour of the S3 Docker image, such as `DEBUG` or `S3_SKIP_SIGNATURE_VALIDATION`.

:::note
The S3 Docker image does not support persistence, and all data is lost when the container is stopped.
To use persistence or save the container state as a Cloud Pod, you need to use the [`localstack/localstack-pro`](https://hub.docker.com/r/localstack/localstack-pro) image.
:::

## SSE-C Encryption

SSE-C (Server-Side Encryption with Customer-Provided Keys) is an Amazon S3 encryption method where customers provide their own encryption keys for securing objects.
AWS handles the encryption and decryption, but the keys are managed entirely by the customer.
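On the wire, an SSE-C request carries the key and an MD5 checksum of it in three `x-amz-server-side-encryption-customer-*` headers. A minimal stdlib sketch of building them (the header names follow the S3 API reference; the 32-byte key used in the example is purely illustrative):

```python
import base64
import hashlib


def ssec_headers(key: bytes) -> dict:
    """Build the request headers S3 expects for SSE-C."""
    # S3 requires a 256-bit key and rejects anything else.
    if len(key) != 32:
        raise ValueError("SSE-C requires a 256-bit (32-byte) key")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        # The key and its MD5 digest are both sent base64-encoded.
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5":
            base64.b64encode(hashlib.md5(key).digest()).decode(),
    }
```

The same key must be presented again on every read of the object; with the AWS CLI, the corresponding `s3api` flags are `--sse-customer-algorithm`, `--sse-customer-key`, and `--sse-customer-key-md5`.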
LocalStack supports SSE-C parameter validation for the following S3 APIs:

- [`PutObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html)
- [`GetObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html)
- [`HeadObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadObject.html)
- [`GetObjectAttributes`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAttributes.html)
- [`CopyObject`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html)
- [`CreateMultipartUpload`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_CreateMultipartUpload.html)
- [`UploadPart`](https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPart.html)

However, LocalStack does not support the actual encryption and decryption of objects using SSE-C.

## Resource Browser

The LocalStack Web Application provides a [Resource Browser](/aws/capabilities/web-app/resource-browser) for managing S3 buckets & configurations.
You can access the Resource Browser by opening the LocalStack Web Application in your browser, navigating to the **Resources** section, and then clicking on **S3** under the **Storage** section.

![S3 Resource Browser](/images/aws/s3-resource-browser.png)

The Resource Browser allows you to perform the following actions:

- **Create Bucket**: Create a new S3 bucket by specifying a **Bucket Name**, **Bucket Configuration**, **ACL**, **Object Ownership**, and more.
- **Objects & Permissions**: View, upload, download, and delete objects in your S3 buckets.
  You can also view and edit the permissions, like the CORS Configuration for the bucket.
- **Create Folder**: Create a new folder in your S3 bucket by clicking on the **Create Folder** button and specifying a **Folder Name**.
- **Delete Bucket**: Delete an S3 bucket by selecting it, clicking on the **Actions** button, and choosing **Remove Selected**.

## Examples

The following code snippets and sample applications provide practical examples of how to use S3 in LocalStack for various use cases:

- [Full-Stack application with Lambda, DynamoDB & S3 for shipment validation](https://github.com/localstack-samples/sample-shipment-list-demo-lambda-dynamodb-s3)
- [Serverless Transcription application using Transcribe, S3, Lambda, SQS, and SES](https://github.com/localstack/sample-transcribe-app)
- [Query data in S3 Bucket with Amazon Athena, Glue Catalog & CloudFormation](https://github.com/localstack/query-data-s3-athena-glue-sample)
- [Serverless Image Resizer with Lambda, S3, SNS, and SES](https://github.com/localstack/serverless-image-resizer)
- [Host a static website locally using Simple Storage Service (S3) and Terraform with LocalStack](https://docs.localstack.cloud/aws/tutorials/s3-static-website-terraform/)

## API Coverage

<FeatureCoverage service="s3" client:load />

## API Coverage (S3 Control)

<FeatureCoverage service="s3control" client:load />