---
sidebar_position: 4
slug: /aws-s3-multipart
---
import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import UppyCdnExample from '/src/components/UppyCdnExample';
# AWS S3
The `@uppy/aws-s3` plugin can be used to upload files directly to an S3 bucket or
an S3-compatible provider, such as Google Cloud Storage or DigitalOcean Spaces.
Uploads can be signed using either [Companion][companion docs], temporary
credentials, or a custom signing function.
## When should I use it?
:::tip
Not sure which uploader is best for you? Read
“[Choosing the uploader you need](/docs/guides/choosing-uploader)”.
:::
You can use this plugin when you prefer a _client-to-storage_ over a
_client-to-server-to-storage_ (such as [Transloadit](/docs/transloadit) or
[Tus](/docs/tus)) setup. This may in some cases be preferable, for instance, to
reduce costs or the complexity of running a server and load balancer with
[Tus](/docs/tus).
Multipart uploads start to become valuable for larger files (100 MiB+), as they
upload a single object as a set of parts. This has certain benefits, such as
improved throughput (uploading parts in parallel) and quick recovery from
network issues (only the failed parts need to be retried). The downside is
request overhead: the plugin needs to make creation, signing (unless you are
[signing on the client][]), and completion requests besides the upload requests.
For example, if you are uploading files that are only a couple of kilobytes with
a 100ms roundtrip latency, you are spending 400ms on overhead and only a few
milliseconds on uploading.
**In short**
- We recommend setting [`shouldUseMultipart`][] so that multipart uploads are
  enabled only for large files.
- If you prefer less overhead (+20% upload speed), you can use temporary S3
  credentials with [`getTemporarySecurityCredentials`][] (see the sketch below).
  This means users get a single token which allows them to do bucket operations
  for longer, instead of a short-lived signed URL per resource. This is a
  security trade-off.
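Here is a minimal sketch of that approach. The `/sts-token` endpoint is a
hypothetical route you would implement on your own server; it is assumed to
respond with the temporary credentials (and related details such as the bucket
and region) in the shape the plugin expects.
```js
uppy.use(AwsS3, {
	shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
	// Hypothetical endpoint: your server exchanges its own credentials for
	// temporary, scoped STS credentials and returns them to the browser.
	async getTemporarySecurityCredentials({ signal }) {
		const response = await fetch('/sts-token', { signal });
		if (!response.ok) throw new Error('Could not fetch temporary credentials');
		return response.json();
	},
});
```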
## Install
<Tabs>
  <TabItem value="npm" label="NPM" default>

```shell
npm install @uppy/aws-s3
```

  </TabItem>

  <TabItem value="yarn" label="Yarn">

```shell
yarn add @uppy/aws-s3
```

  </TabItem>

  <TabItem value="cdn" label="CDN">
    <UppyCdnExample>
      {`
        import { Uppy, AwsS3 } from "{{UPPY_JS_URL}}"
        new Uppy().use(AwsS3, { /* see options */ })
      `}
    </UppyCdnExample>
  </TabItem>
</Tabs>
## Use
### Setting up your S3 bucket
To use this plugin with S3 we need to set up a bucket with the right permissions
and CORS settings.
S3 buckets do not allow public uploads for security reasons. To allow Uppy and
the browser to upload directly to a bucket, its CORS permissions need to be
configured.
CORS permissions can be found in the
[S3 Management Console](https://console.aws.amazon.com/s3/home). Click the
bucket that will receive the uploads, then go into the `Permissions` tab and
select the `CORS configuration` button. A JSON document will be shown that
defines the CORS configuration. (AWS used to use XML, but now only allows JSON.)
More information about the S3 CORS format is available in the
[Amazon S3 user guide](https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/ManageCorsUsing.html).
The configuration required for Uppy and Companion is this:
```json
[
	{
		"AllowedOrigins": ["https://my-app.com"],
		"AllowedMethods": ["GET", "PUT"],
		"MaxAgeSeconds": 3000,
		"AllowedHeaders": [
			"Authorization",
			"x-amz-date",
			"x-amz-content-sha256",
			"content-type"
		],
		"ExposeHeaders": ["ETag", "Location"]
	},
	{
		"AllowedOrigins": ["*"],
		"AllowedMethods": ["GET"],
		"MaxAgeSeconds": 3000
	}
]
```
A good practice is to use two CORS rules: one for viewing the uploaded files,
and one for uploading files. This is done above where the first object in the
array defines the rules for uploading, and the second for viewing. The example
above **makes files publicly viewable**. You can change it according to your
needs.
If you are using an IAM policy to allow access to the S3 bucket, the policy must
have at least the `s3:PutObject` and `s3:PutObjectAcl` permissions scoped to the
bucket in question. In-depth documentation about CORS rules is available on the
[AWS documentation site](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html).
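As a starting point, a minimal policy could look like the sketch below, where
`my-uploads-bucket` is a placeholder for your own bucket name.
```json
{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": ["s3:PutObject", "s3:PutObjectAcl"],
			"Resource": "arn:aws:s3:::my-uploads-bucket/*"
		}
	]
}
```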
### Use with your own server
The recommended approach is to integrate `@uppy/aws-s3` with your own server.
You will need to do the following things:
1. [Set up an S3 bucket](#setting-up-your-s3-bucket).
2. [Set up your server](https://github.com/transloadit/uppy/blob/main/examples/aws-nodejs/index.js)
   (a minimal sketch follows this list).
3. [Set up the Uppy client](https://github.com/transloadit/uppy/blob/main/examples/aws-nodejs/public/index.html).
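To give an idea of what the server side involves, here is a minimal sketch of a
signing endpoint using Express and the AWS SDK v3. The route name
(`/s3/params`), environment variables, and expiry are assumptions for the sake
of illustration; see the linked example for a complete implementation.
```js
import express from 'express';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const app = express();
const client = new S3Client({ region: process.env.AWS_REGION });

// Returns a presigned PUT URL that the browser can upload a single file to.
app.get('/s3/params', async (req, res) => {
	const contentType = String(req.query.contentType);
	const command = new PutObjectCommand({
		Bucket: process.env.S3_BUCKET,
		Key: String(req.query.filename),
		ContentType: contentType,
	});
	const url = await getSignedUrl(client, command, { expiresIn: 5 * 60 });
	res.json({
		method: 'PUT',
		url,
		fields: {},
		headers: { 'content-type': contentType },
	});
});

app.listen(8080);
```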
### Use with Companion
[Companion](/docs/companion) has S3 routes built-in for a plug-and-play
experience with Uppy.
:::caution
Generally it’s better for access control, observability, and scaling to
integrate `@uppy/aws-s3` with your own server. You may want to use
[Companion](/docs/companion) for creating, signing, and completing your S3
uploads if you already need Companion for remote files (such as from Google
Drive). Otherwise it’s not worth the hosting effort.
:::
```js {10} showLineNumbers
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';
import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';
const uppy = new Uppy()
	.use(Dashboard, { inline: true, target: 'body' })
	.use(AwsS3, {
		shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
		companionUrl: 'https://companion.uppy.io',
	});
```
## API
### Options
#### `shouldUseMultipart(file)`
:::warning
Until the next major version, not setting this option uses the
[legacy version of this plugin](../aws-s3/). This is a suboptimal experience for
some of your users’ uploads. It’s best for speed and stability to upload large
(100 MiB+) files with multipart and small files with regular uploads.
:::
A boolean, or a function that returns a boolean, which is called for each file
that is uploaded, with the corresponding `UppyFile` instance as its argument.
By default, all files are uploaded as multipart. In a future version, all files
with a `file.size` ≤ 100 MiB will be uploaded in a single chunk, all files
larger than that as multipart.
Here’s how to use it:
```js
uppy.use(AwsS3, {
	shouldUseMultipart(file) {
		// Use multipart only for files larger than 100 MiB.
		return file.size > 100 * 2 ** 20;
	},
});
```
#### `limit`
The maximum number of files to upload in parallel (`number`, default: `6`).
Note that the number of files is not the same as the number of concurrent
connections. Multipart uploads can use many requests per file. For example, for
a 100 MiB file with a part size of 5 MiB:
- 1 `createMultipartUpload` request
- 100/5 = 20 sign requests (unless you are [signing on the client][])
- 100/5 = 20 upload requests
- 1 `completeMultipartUpload` request
:::caution
Unless you have a good reason and are well informed about the average internet
speed of your users, do not set this higher. S3 uses HTTP/1.1, which limits the
number of concurrent connections, and your uploads may expire before all parts
have been uploaded.
:::
#### `companionUrl`
URL to a [Companion](/docs/companion) instance (`string`, default: `null`).
#### `companionHeaders`
Custom headers that should be sent along to [Companion](/docs/companion) on
every request (`Object`, default: `{}`).
#### `companionCookiesRule`
This option correlates to the
[RequestCredentials value](https://developer.mozilla.org/en-US/docs/Web/API/Request/credentials)
(`string`, default: `'same-origin'`).
This tells the plugin whether to send cookies to [Companion](/docs/companion).
#### `retryDelays`
`retryDelays` are the intervals in milliseconds used to retry a failed chunk
(`array`, default: `[0, 1000, 3000, 5000]`).
This is also used for [`signPart()`](#signpartfile-partdata). Set to `null` to
disable automatic retries, and fail instantly if any chunk fails to upload.
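For example, to retry immediately and then after 1, 3, and 5 seconds:
```js
uppy.use(AwsS3, {
	// Retry a failed chunk (or signing request) with these delays.
	retryDelays: [0, 1000, 3000, 5000],
	// Or disable automatic retries entirely:
	// retryDelays: null,
});
```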
#### `getChunkSize(file)`
A function that returns the minimum chunk size to use when uploading the given
file.
The S3 Multipart plugin uploads files in chunks. Chunks are sent in batches to
have presigned URLs generated with [`signPart()`](#signpartfile-partdata). To
reduce the amount of requests for large files, you can choose a larger chunk
size, at the cost of having to re-upload more data if one chunk fails to upload.
S3 requires a minimum chunk size of 5 MiB, and supports at most 10,000 chunks per
multipart upload. If `getChunkSize()` returns a size that’s too small, Uppy will
increase it to S3’s minimum requirements.
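Here is a sketch of one possible sizing strategy: aim for roughly 1,000 parts
per file while respecting S3’s 5 MiB minimum (Uppy will still grow the value if
it would exceed the 10,000-part limit).
```js
uppy.use(AwsS3, {
	getChunkSize(file) {
		// Roughly 1,000 parts per file, but never below S3's 5 MiB minimum.
		return Math.max(5 * 2 ** 20, Math.ceil(file.size / 1000));
	},
});
```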
#### `getUploadParameters(file, options)`
:::note
When using [Companion][companion docs] to sign S3 uploads, you should not define
this option.
:::
A function that will be called for each non-multipart upload.
- `file`: `UppyFile` the file that will be uploaded
- `options`: `object`
- `signal`: `AbortSignal`
- **Returns:** `object | Promise<object>`
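As an illustration, the sketch below fetches the signing parameters from the
hypothetical `/s3/params` endpoint shown earlier; the response is assumed to
contain the `method`, `url`, `fields`, and `headers` to use for the upload.
```js
uppy.use(AwsS3, {
	shouldUseMultipart: false,
	async getUploadParameters(file, { signal }) {
		// Ask our server (hypothetical endpoint) for a presigned PUT URL.
		const query = new URLSearchParams({
			filename: file.name,
			contentType: file.type,
		});
		const response = await fetch(`/s3/params?${query}`, { signal });
		if (!response.ok) throw new Error('Could not fetch upload parameters');
		return response.json(); // { method, url, fields, headers }
	},
});
```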