---
sidebar_position: 3
slug: /aws-s3
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import UppyCdnExample from '/src/components/UppyCdnExample';

# AWS S3 (legacy)

The `@uppy/aws-s3` plugin can be used to upload files directly to an S3 bucket
or an S3-compatible provider, such as Google Cloud Storage or DigitalOcean
Spaces. Uploads can be signed using either [Companion][companion docs] or a
custom signing function.

This page documents the legacy version of this plugin, which we plan to remove
in the next major version.

## When should I use it?

:::tip

Not sure which uploader is best for you? Read
“[Choosing the uploader you need](/docs/guides/choosing-uploader)”.

:::

:::warning

This plugin is deprecated; you should switch to the
[modern version of this plugin](/docs/aws-s3-multipart).

:::

You can use this plugin when you prefer a _client-to-storage_ setup over a
_client-to-server-to-storage_ setup (such as [Transloadit](/docs/transloadit)
or [Tus](/docs/tus)). This may be preferable in some cases, for instance to
reduce costs or to avoid the complexity of running a server and load balancer
with [Tus](/docs/tus).

This plugin can be used with AWS S3, DigitalOcean Spaces, Google Cloud Storage,
or any S3-compatible provider. Although all S3-compatible providers are
supported, we don’t test against them; this plugin was developed against S3, so
using the others carries a small risk.

`@uppy/aws-s3` is best suited for small files and/or lots of files. If you are
planning to upload mostly large files (100 MB+), consider using
[`@uppy/aws-s3-multipart`](/docs/aws-s3-multipart).

## Install

<Tabs>
  <TabItem value="npm" label="NPM" default>

```shell
npm install @uppy/aws-s3
```

  </TabItem>

  <TabItem value="yarn" label="Yarn">

```shell
yarn add @uppy/aws-s3
```

  </TabItem>

  <TabItem value="cdn" label="CDN">
    <UppyCdnExample>
      {`
      import { Uppy, AwsS3 } from "{{UPPY_JS_URL}}"
      new Uppy().use(AwsS3, { /* see options */ })
    `}
    </UppyCdnExample>
  </TabItem>
</Tabs>

## Use

A quick overview of the complete API.

```js {10} showLineNumbers
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
	.use(Dashboard, { inline: true, target: 'body' })
	.use(AwsS3, { companionUrl: 'http://companion.uppy.io' });
```

### With an AWS S3 bucket

To use this plugin with S3, we need to set up a bucket with the right
permissions and CORS settings.

S3 buckets do not allow public uploads for security reasons. To allow Uppy and
the browser to upload directly to a bucket, its CORS permissions need to be
configured.

CORS permissions can be found in the
[S3 Management Console](https://console.aws.amazon.com/s3/home). Click the
bucket that will receive the uploads, then go into the `Permissions` tab and
select the `CORS configuration` button. A JSON document will be shown that
defines the CORS configuration. (AWS used to use XML but now only allows
JSON.) More information about the S3 CORS format can be found in the
[AWS user guide](https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/ManageCorsUsing.html).

The configuration required for Uppy and Companion is this:

```json
[
	{
		"AllowedOrigins": ["https://my-app.com"],
		"AllowedMethods": ["GET", "POST"],
		"MaxAgeSeconds": 3000,
		"AllowedHeaders": [
			"Authorization",
			"x-amz-date",
			"x-amz-content-sha256",
			"content-type"
		]
	},
	{
		"AllowedOrigins": ["*"],
		"AllowedMethods": ["GET"],
		"MaxAgeSeconds": 3000
	}
]
```

A good practice is to use two CORS rules: one for viewing the uploaded files,
and one for uploading files. This is done above: the first object in the array
defines the rules for uploading, and the second for viewing. The example above
**makes files publicly viewable**. You can change it according to your needs.

If you are using an IAM policy to allow access to the S3 bucket, the policy
must have at least the `s3:PutObject` and `s3:PutObjectAcl` permissions scoped
to the bucket in question. In-depth documentation about CORS rules is
available on the
[AWS documentation site](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html).
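
For reference, a minimal sketch of such an IAM policy might look like the
following, with `my-uppy-bucket` as a placeholder bucket name:

```json
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": ["s3:PutObject", "s3:PutObjectAcl"],
			"Resource": "arn:aws:s3:::my-uppy-bucket/*"
		}
	]
}
```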

### With a DigitalOcean Spaces bucket

:::tip

Check out the
[DigitalOcean Spaces example](https://github.com/transloadit/uppy/tree/main/examples/digitalocean-spaces)
in the Uppy repository for a complete, runnable example.

:::

DigitalOcean Spaces is S3-compatible, so you only need to change the endpoint
and bucket. Make sure you have a `key` and `secret`. If not, refer to
“[How To Create a DigitalOcean Space and API Key](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key)”.

When using [Companion](/docs/companion) as a standalone service, you can set
these as environment variables:

```bash
export COMPANION_AWS_KEY="xxx"
export COMPANION_AWS_SECRET="xxx"
export COMPANION_AWS_REGION="us-east-1"
export COMPANION_AWS_ENDPOINT="https://{region}.digitaloceanspaces.com"
export COMPANION_AWS_BUCKET="my-space-name"
```

The `{region}` string will be replaced by the contents of the
`COMPANION_AWS_REGION` environment variable.

When using [Companion](/docs/companion) as an Express integration, configure
the `s3` options:

```js
const options = {
	s3: {
		key: 'xxx',
		secret: 'xxx',
		bucket: 'my-space-name',
		region: 'us-east-1',
		endpoint: 'https://{region}.digitaloceanspaces.com',
	},
};
```
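
For orientation, a minimal sketch of wiring these options into an Express app
might look like this. The secrets, host, and file path are placeholders, and
depending on your Companion version `companion.app()` may return the app
directly rather than an object containing one; see the
[Companion docs](/docs/companion) for the authoritative setup:

```js
import express from 'express';
import session from 'express-session';
import bodyParser from 'body-parser';
import companion from '@uppy/companion';

const app = express();

// Companion expects JSON bodies and a session to be set up on the app.
app.use(bodyParser.json());
app.use(session({ secret: 'some-session-secret' })); // placeholder secret

const { app: companionApp } = companion.app({
	s3: {
		key: 'xxx',
		secret: 'xxx',
		bucket: 'my-space-name',
		region: 'us-east-1',
		endpoint: 'https://{region}.digitaloceanspaces.com',
	},
	server: { host: 'localhost:3020' }, // placeholder host
	secret: 'some-signing-secret', // placeholder signing secret
	filePath: '/tmp', // where Companion temporarily stores files
});

// Mount Companion’s routes under a prefix and point `companionUrl` at it.
app.use('/companion', companionApp);
app.listen(3020);
```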

### With a Google Cloud Storage bucket

For the `@uppy/aws-s3` plugin to be able to upload to a GCS bucket, it needs
the Interoperability setting enabled. You can enable the Interoperability
setting and
[generate interoperable storage access keys](https://cloud.google.com/storage/docs/migrating#keys)
by going to [Google Cloud Storage](https://console.cloud.google.com/storage) »
Settings » Interoperability. Then set the environment variables for Companion
like this:

```bash
export COMPANION_AWS_ENDPOINT="https://storage.googleapis.com"
export COMPANION_AWS_BUCKET="YOUR-GCS-BUCKET-NAME"
export COMPANION_AWS_KEY="GOOGxxxxxxxxx" # The Access Key
export COMPANION_AWS_SECRET="YOUR-GCS-SECRET" # The Secret
```

You do not need to configure the region with GCS.

You also need to configure CORS with their HTTP API. If you haven’t done this
already, see
[Configuring CORS on a Bucket](https://cloud.google.com/storage/docs/configuring-cors#Configuring-CORS-on-a-Bucket)
in the GCS documentation, or follow the steps below to do it using Google’s
API playground.

The JSON format consists of an array of CORS configuration objects. For
instance:

```json
{
	"cors": [
		{
			"origin": ["https://my-app.com"],
			"method": ["GET", "POST"],
			"maxAgeSeconds": 3000
		},
		{
			"origin": ["*"],
			"method": ["GET"],
			"maxAgeSeconds": 3000
		}
	]
}
```

When using presigned `PUT` uploads, replace the `"POST"` method with `"PUT"`
in the first entry.

If you have the [gsutil](https://cloud.google.com/storage/docs/gsutil)
command-line tool, you can apply this configuration using the
[gsutil cors](https://cloud.google.com/storage/docs/configuring-cors#configure-cors-bucket)
command.

```bash
gsutil cors set THAT-FILE.json gs://BUCKET-NAME
```

Otherwise, you can manually apply it through the OAuth playground:

1. Get a temporary API token from the
   [Google OAuth2.0 playground](https://developers.google.com/oauthplayground/)
2. Select the `Cloud Storage JSON API v1` » `devstorage.full_control` scope
3. Press `Authorize APIs` and allow access
4. Click `Step 3 - Configure request to API`
5. Configure it as follows:
   1. HTTP Method: `PATCH`
   2. Request URI: `https://www.googleapis.com/storage/v1/b/YOUR_BUCKET_NAME`
   3. Content-Type: `application/json` (should be the default)
   4. Press `Enter request body` and input your CORS configuration
6. Press `Send the request`.

### Use with your own server

The recommended approach is to integrate `@uppy/aws-s3` with your own server.
You will need to do the following things:

1. Set up a bucket.
2. Create an endpoint in your server that returns the upload parameters (a
   sketch follows this list). You can create it as an edge function (such as
   an AWS Lambda), inside Next.js as an API route, or wherever your server
   runs.
   - `POST` > `/uppy/s3`: get upload parameters
3. [Set up Uppy](https://github.com/transloadit/uppy/blob/main/examples/aws-nodejs/public/index.html).
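
As a rough sketch (not the official example), such an endpoint could presign a
`PUT` URL with the AWS SDK v3; the route path, environment variables, and
bucket name below are placeholders:

```js
import express from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const app = express();
app.use(express.json());

// Credentials and region are picked up from the environment.
const client = new S3Client({ region: process.env.AWS_REGION });

app.post('/uppy/s3', async (req, res) => {
	const { filename, contentType } = req.body;
	const url = await getSignedUrl(
		client,
		new PutObjectCommand({
			Bucket: process.env.S3_BUCKET, // placeholder bucket name
			Key: filename,
			ContentType: contentType,
		}),
		{ expiresIn: 5 * 60 }, // the presigned URL is valid for five minutes
	);
	// The shape expected by getUploadParameters() on the client.
	res.json({
		method: 'PUT',
		url,
		fields: {},
		headers: { 'content-type': contentType },
	});
});

app.listen(8080);
```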

### Use with Companion

[Companion](/docs/companion) has S3 routes built in for a plug-and-play
experience with Uppy.

:::caution

Generally it’s better for access control, observability, and scaling to
integrate `@uppy/aws-s3` with your own server. You may want to use
[Companion](/docs/companion) for creating, signing, and completing your S3
uploads if you already need Companion for remote files (such as from Google
Drive). Otherwise it’s not worth the hosting effort.

:::

```js {10} showLineNumbers
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
	.use(Dashboard, { inline: true, target: 'body' })
	.use(AwsS3, { companionUrl: 'http://companion.uppy.io' });
```

## Options

The `@uppy/aws-s3` plugin has the following configurable options:

#### `id`

A unique identifier for this plugin (`string`, default: `'AwsS3'`).

#### `companionUrl`

Companion instance to use for signing S3 uploads (`string`, default: `null`).

#### `companionHeaders`

Custom headers that should be sent along to [Companion](/docs/companion) on
every request (`Object`, default: `{}`).

#### `allowedMetaFields`

Pass an array of field names to limit the metadata fields that will be added
to the upload as query parameters (`Array`, default: `null`).

- Set this to `['name']` to only send the `name` field.
- Set this to `null` (the default) to send _all_ metadata fields.
- Set this to an empty array `[]` to not send any fields.
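
For example, to send only the file name and a hypothetical custom `caption`
field:

```js
uppy.use(AwsS3, {
	companionUrl: 'http://companion.uppy.io',
	// 'caption' is a hypothetical meta field, set elsewhere with
	// uppy.setFileMeta(fileID, { caption: '…' }).
	allowedMetaFields: ['name', 'caption'],
});
```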

#### `getUploadParameters(file)`

:::note

When using [Companion][companion docs] to sign S3 uploads, do not define this
option.

:::

A function that returns upload parameters for a file (`Promise`, default:
`null`).

Parameters should be returned as an object, or as a `Promise` that fulfills
with an object, with keys `{ method, url, fields, headers }`.

- The `method` field is the HTTP method to be used for the upload. This should
  be one of either `PUT` or `POST`, depending on the type of upload used.
- The `url` field is the URL to which the upload request will be sent. When
  using a presigned PUT upload, this should be the URL to the S3 object with
  signing parameters included in the query string. When using a POST upload
  with a policy document, this should be the root URL of the bucket.
- The `fields` field is an object with form fields to send along with the
  upload request. For presigned PUT uploads, this should be left empty.
- The `headers` field is an object with request headers to send along with the
  upload request. When using a presigned PUT upload, it’s a good idea to
  provide `headers['content-type']`. That will make sure that the request uses
  the same content-type that was used to generate the signature. Without it,
  the browser may decide on a different content-type instead, causing S3 to
  reject the upload.
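
As a sketch, a presigned `PUT` setup against a hypothetical `/sign-s3`
endpoint that responds with `{ url }` could look like this:

```js
uppy.use(AwsS3, {
	async getUploadParameters(file) {
		// Ask a (hypothetical) backend endpoint for a presigned PUT URL.
		const query = new URLSearchParams({
			filename: file.name,
			contentType: file.type,
		});
		const response = await fetch(`/sign-s3?${query}`);
		const { url } = await response.json();
		return {
			method: 'PUT',
			url,
			fields: {}, // left empty for presigned PUT uploads
			// Match the content-type the signature was generated with.
			headers: { 'content-type': file.type },
		};
	},
});
```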

#### `timeout`

When no upload progress events have been received for this number of
milliseconds, assume the connection has an issue and abort the upload
(`number`, default: `30_000`).

This is passed through to [XHRUpload](/docs/xhr-upload#timeout-30-1000); see
its documentation page for details. Set to `0` to disable this check.

#### `limit`

Limit the number of uploads going on at the same time (`number`, default:
`5`). Setting this to `0` means no limit on concurrent uploads, but we
recommend a value between `5` and `20`.

#### `getResponseData(responseText, response)`

:::note

This is an advanced option intended for use with _almost_ S3-compatible
storage solutions.

:::

Customize response handling once an upload is completed. This passes the
function through to `@uppy/xhr-upload`; see its
[documentation](https://uppy.io/docs/xhr-upload/#getResponseData-responseText-response)
for API details.

This option is useful when uploading to an S3-like service that doesn’t reply
with an XML document, but with something else such as JSON.
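
For instance, a sketch for a hypothetical service that replies with a JSON
body such as `{ "url": "…" }` instead of S3’s XML document:

```js
uppy.use(AwsS3, {
	companionUrl: 'http://companion.uppy.io',
	getResponseData(responseText, response) {
		// Hypothetical: parse the service’s JSON reply.
		const data = JSON.parse(responseText);
		// The returned object becomes the upload’s response body;
		// `location` is the conventional key for the file URL.
		return { location: data.url };
	},
});
```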

#### `locale: {}`

```js
export default {
	strings: {
		timedOut: 'Upload stalled for %{seconds} seconds, aborting.',
	},
};
```
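
To override this string, pass a `locale` object when installing the plugin:

```js
uppy.use(AwsS3, {
	companionUrl: 'http://companion.uppy.io',
	locale: {
		strings: {
			// Replaces the default timedOut message shown above.
			timedOut: 'No progress for %{seconds} seconds, giving up.',
		},
	},
});
```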

## Frequently Asked Questions

### How can I generate a presigned URL server-side?

The `getUploadParameters` function can return a `Promise`, so upload
parameters can be prepared server-side. That way, no private keys to the S3
bucket need to be shared on the client. For example, there could be a PHP
server endpoint that prepares a presigned URL for a file:

```js
uppy.use(AwsS3, {
	getUploadParameters(file) {
		// Send a request to our PHP signing endpoint.
		return fetch('/s3-sign.php', {
			method: 'post',
			// Send and receive JSON.
			headers: {
				accept: 'application/json',
				'content-type': 'application/json',
			},
			body: JSON.stringify({
				filename: file.name,
				contentType: file.type,
			}),
		})
			.then((response) => {
				// Parse the JSON response.
				return response.json();
			})
			.then((data) => {
				// Return an object in the correct shape.
				return {
					method: data.method,
					url: data.url,
					fields: data.fields,
					// Provide the content-type header required by S3.
					headers: {
						'Content-Type': file.type,
					},
				};
			});
	},
});
```

See either the
[aws-nodejs](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-nodejs)
or [aws-php](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-php)
examples in the Uppy repository for a demonstration of how to implement
handling of presigned URLs on both the server and the client.

### How can I retrieve the presigned parameters of the uploaded file?

Once the file is uploaded, it’s possible to retrieve the parameters that were
generated in `getUploadParameters(file)` via the `file.meta` field:

```js
uppy.on('upload-success', (file, data) => {
	const s3Key = file.meta['key']; // the S3 object key of the uploaded file
});
```
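
If your bucket allows public reads, you could then build the file’s URL from
that key; the bucket name below is a placeholder, and keys containing special
characters may need URL encoding:

```js
uppy.on('upload-success', (file, data) => {
	const s3Key = file.meta['key'];
	// Hypothetical: assumes a publicly readable bucket named my-uppy-bucket.
	const publicUrl = `https://my-uppy-bucket.s3.amazonaws.com/${s3Key}`;
	console.log(publicUrl);
});
```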

[companion docs]: /docs/companion