
---
sidebar_position: 3
slug: /aws-s3
---

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';
import UppyCdnExample from '/src/components/UppyCdnExample';

# AWS S3 (legacy)
The `@uppy/aws-s3` plugin can be used to upload files directly to an S3 bucket
or an S3-compatible provider, such as Google Cloud Storage or DigitalOcean
Spaces. Uploads can be signed using either [Companion][companion docs] or a
custom signing function.

This page documents the legacy version of this plugin, which we plan to remove
in the next major version.
## When should I use it?

:::tip
Not sure which uploader is best for you? Read
“[Choosing the uploader you need](/docs/guides/choosing-uploader)”.
:::

:::warning
This plugin is deprecated. You should switch to the
[modern version of this plugin](/docs/aws-s3-multipart).
:::
You can use this plugin when you prefer a _client-to-storage_ setup over a
_client-to-server-to-storage_ setup (such as [Transloadit](/docs/transloadit)
or [Tus](/docs/tus)). This may in some cases be preferable, for instance, to
reduce costs or the complexity of running a server and load balancer with
[Tus](/docs/tus).

This plugin can be used with AWS S3, DigitalOcean Spaces, Google Cloud Storage,
or any S3-compatible provider. Although all S3-compatible providers are
supported, we don’t test against them; this plugin was developed against S3, so
using another provider carries a small risk.

`@uppy/aws-s3` is best suited for small files and/or lots of files. If you are
planning to upload mostly large files (100 MB+), consider using
[`@uppy/aws-s3-multipart`](/docs/aws-s3-multipart).
## Install

<Tabs>
  <TabItem value="npm" label="NPM" default>

```shell
npm install @uppy/aws-s3
```

  </TabItem>

  <TabItem value="yarn" label="Yarn">

```shell
yarn add @uppy/aws-s3
```

  </TabItem>

  <TabItem value="cdn" label="CDN">
    <UppyCdnExample>
      {`
      import { Uppy, AwsS3 } from "{{UPPY_JS_URL}}"
      new Uppy().use(AwsS3, { /* see options */ })
      `}
    </UppyCdnExample>
  </TabItem>
</Tabs>
## Use

A quick overview of the complete API.

```js {10} showLineNumbers
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy()
  .use(Dashboard, { inline: true, target: 'body' })
  .use(AwsS3, { companionUrl: 'http://companion.uppy.io' });
```
### With an AWS S3 bucket

To use this plugin with S3, we need to set up a bucket with the right
permissions and CORS settings.

S3 buckets do not allow public uploads for security reasons. To allow Uppy and
the browser to upload directly to a bucket, its CORS permissions need to be
configured.

CORS permissions can be found in the
[S3 Management Console](https://console.aws.amazon.com/s3/home). Click the
bucket that will receive the uploads, then go into the `Permissions` tab and
select the `CORS configuration` button. A JSON document will be shown that
defines the CORS configuration. (AWS used to use XML, but now only allows
JSON.) More information about the S3 CORS format can be found in the
[AWS user guide](https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/ManageCorsUsing.html).

The configuration required for Uppy and Companion is this:
```json
[
  {
    "AllowedOrigins": ["https://my-app.com"],
    "AllowedMethods": ["GET", "POST"],
    "MaxAgeSeconds": 3000,
    "AllowedHeaders": [
      "Authorization",
      "x-amz-date",
      "x-amz-content-sha256",
      "content-type"
    ]
  },
  {
    "AllowedOrigins": ["*"],
    "AllowedMethods": ["GET"],
    "MaxAgeSeconds": 3000
  }
]
```
A good practice is to use two CORS rules: one for viewing the uploaded files,
and one for uploading files. This is done above, where the first object in the
array defines the rules for uploading, and the second for viewing. The example
above **makes files publicly viewable**. You can change it according to your
needs.

If you are using an IAM policy to allow access to the S3 bucket, the policy
must have at least the `s3:PutObject` and `s3:PutObjectAcl` permissions scoped
to the bucket in question. In-depth documentation about CORS rules is available
on the
[AWS documentation site](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html).
### With a DigitalOcean Spaces bucket

:::tip
Check out the
[DigitalOcean Spaces example](https://github.com/transloadit/uppy/tree/main/examples/digitalocean-spaces)
in the Uppy repository for a complete, runnable example.
:::

DigitalOcean Spaces is S3-compatible, so you only need to change the endpoint
and bucket. Make sure you have a `key` and `secret`. If not, refer to
“[How To Create a DigitalOcean Space and API Key](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key)”.

When using [Companion](/docs/companion) as a standalone server, you can set
these as environment variables:

```bash
export COMPANION_AWS_KEY="xxx"
export COMPANION_AWS_SECRET="xxx"
export COMPANION_AWS_REGION="us-east-1"
export COMPANION_AWS_ENDPOINT="https://{region}.digitaloceanspaces.com"
export COMPANION_AWS_BUCKET="my-space-name"
```

The `{region}` string will be replaced by the contents of the
`COMPANION_AWS_REGION` environment variable.
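As a quick illustration, the substitution amounts to a simple string
replacement (a sketch of the behavior described above, not Companion’s internal
code):

```javascript
// Sketch: the `{region}` placeholder in COMPANION_AWS_ENDPOINT is filled in
// with the value of COMPANION_AWS_REGION before requests are signed.
const endpointTemplate = 'https://{region}.digitaloceanspaces.com';
const region = 'us-east-1'; // from COMPANION_AWS_REGION

const endpoint = endpointTemplate.replace('{region}', region);
console.log(endpoint); // https://us-east-1.digitaloceanspaces.com
```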
When using [Companion](/docs/companion) as an Express integration, configure
the `s3` options:

```js
const options = {
  s3: {
    key: 'xxx',
    secret: 'xxx',
    bucket: 'my-space-name',
    region: 'us-east-1',
    endpoint: 'https://{region}.digitaloceanspaces.com',
  },
};
```
### With a Google Cloud Storage bucket

For the `@uppy/aws-s3` plugin to be able to upload to a GCS bucket, it needs
the Interoperability setting enabled. You can enable the Interoperability
setting and
[generate interoperable storage access keys](https://cloud.google.com/storage/docs/migrating#keys)
by going to [Google Cloud Storage](https://console.cloud.google.com/storage) »
Settings » Interoperability. Then set the environment variables for Companion
like this:

```bash
export COMPANION_AWS_ENDPOINT="https://storage.googleapis.com"
export COMPANION_AWS_BUCKET="YOUR-GCS-BUCKET-NAME"
export COMPANION_AWS_KEY="GOOGxxxxxxxxx" # The Access Key
export COMPANION_AWS_SECRET="YOUR-GCS-SECRET" # The Secret
```
You do not need to configure the region with GCS.

You also need to configure CORS with their HTTP API. If you haven’t done this
already, see
[Configuring CORS on a Bucket](https://cloud.google.com/storage/docs/configuring-cors#Configuring-CORS-on-a-Bucket)
in the GCS documentation, or follow the steps below to do it using Google’s API
playground.

The JSON format consists of an array of CORS configuration objects. For
instance:
```json
{
  "cors": [
    {
      "origin": ["https://my-app.com"],
      "method": ["GET", "POST"],
      "maxAgeSeconds": 3000
    },
    {
      "origin": ["*"],
      "method": ["GET"],
      "maxAgeSeconds": 3000
    }
  ]
}
```
When using presigned `PUT` uploads, replace the `"POST"` method with `"PUT"` in
the first entry.

If you have the [gsutil](https://cloud.google.com/storage/docs/gsutil)
command-line tool, you can apply this configuration using the
[gsutil cors](https://cloud.google.com/storage/docs/configuring-cors#configure-cors-bucket)
command:

```bash
gsutil cors set THAT-FILE.json gs://BUCKET-NAME
```
Otherwise, you can manually apply it through the OAuth playground:

1. Get a temporary API token from the
   [Google OAuth2.0 playground](https://developers.google.com/oauthplayground/)
2. Select the `Cloud Storage JSON API v1` » `devstorage.full_control` scope
3. Press `Authorize APIs` and allow access
4. Click `Step 3 - Configure request to API`
5. Configure it as follows:
   1. HTTP Method: `PATCH`
   2. Request URI: `https://www.googleapis.com/storage/v1/b/YOUR_BUCKET_NAME`
   3. Content-Type: `application/json` (should be the default)
   4. Press `Enter request body` and input your CORS configuration
6. Press `Send the request`.
### Use with your own server

The recommended approach is to integrate `@uppy/aws-s3` with your own server.
You will need to do the following things:

1. Set up a bucket
2. Create endpoints in your server. You can create them as edge functions
   (such as AWS Lambdas), inside Next.js as an API route, or wherever your
   server runs
   - `POST` > `/uppy/s3`: get upload parameters
3. [Set up Uppy](https://github.com/transloadit/uppy/blob/main/examples/aws-nodejs/public/index.html)
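To make the endpoint contract concrete, here is a sketch of the JSON such a
`POST /uppy/s3` route could return for a presigned `PUT` upload. The
`buildUploadParameters` helper, the bucket name, and the URL are hypothetical;
in a real server you would generate `url` with your storage SDK’s presigner
rather than hard-coding it:

```javascript
// Hypothetical helper illustrating the response shape Uppy expects from the
// signing endpoint: { method, url, fields, headers }. The presigned URL below
// is a placeholder, not a real signature.
function buildUploadParameters(filename, contentType) {
  const presignedUrl =
    'https://my-bucket.s3.us-east-1.amazonaws.com/' +
    encodeURIComponent(filename) +
    '?X-Amz-Signature=EXAMPLE'; // placeholder query string
  return {
    method: 'PUT', // presigned PUT upload
    url: presignedUrl,
    fields: {}, // left empty for presigned PUT uploads
    headers: { 'content-type': contentType }, // keep the signed content-type
  };
}

const params = buildUploadParameters('photo.jpg', 'image/jpeg');
console.log(params.method, params.url);
```

Your endpoint would serialize this object as JSON, and the client-side
`getUploadParameters` option (see below) would fetch and return it.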
### Use with Companion

[Companion](/docs/companion) has S3 routes built in for a plug-and-play
experience with Uppy.

:::caution
Generally it’s better for access control, observability, and scaling to
integrate `@uppy/aws-s3` with your own server. You may want to use
[Companion](/docs/companion) for creating, signing, and completing your S3
uploads if you already need Companion for remote files (such as from Google
Drive). Otherwise it’s not worth the hosting effort.
:::
```js {10} showLineNumbers
import Uppy from '@uppy/core';
import Dashboard from '@uppy/dashboard';
import AwsS3 from '@uppy/aws-s3';

import '@uppy/core/dist/style.min.css';
import '@uppy/dashboard/dist/style.min.css';

const uppy = new Uppy().use(Dashboard, { inline: true, target: 'body' }).use(
  AwsS3,
  { companionUrl: 'http://companion.uppy.io' },
);
```
## Options

The `@uppy/aws-s3` plugin has the following configurable options:

#### `id`

A unique identifier for this plugin (`string`, default: `'AwsS3'`).

#### `companionUrl`

Companion instance to use for signing S3 uploads (`string`, default: `null`).

#### `companionHeaders`

Custom headers that should be sent along to [Companion](/docs/companion) on
every request (`Object`, default: `{}`).

#### `allowedMetaFields`

Pass an array of field names to limit the metadata fields that will be added
to the upload as query parameters (`Array`, default: `null`).

- Set this to `['name']` to only send the `name` field.
- Set this to `null` (the default) to send _all_ metadata fields.
- Set this to an empty array `[]` to not send any fields.
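As an illustration of these three cases, the filtering semantics can be
sketched as follows (this is not the plugin’s internal implementation, just the
documented behavior):

```javascript
// Sketch of how `allowedMetaFields` narrows a file's metadata before it is
// sent as query parameters.
function filterMetaFields(meta, allowedMetaFields) {
  if (allowedMetaFields === null) return { ...meta }; // null: send all fields
  return Object.fromEntries(
    Object.entries(meta).filter(([key]) => allowedMetaFields.includes(key)),
  );
}

const meta = { name: 'photo.jpg', caption: 'Sunset' };
console.log(filterMetaFields(meta, ['name'])); // only the `name` field
console.log(filterMetaFields(meta, null)); // all fields
console.log(filterMetaFields(meta, [])); // no fields
```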
#### `getUploadParameters(file)`

:::note
When using [Companion][companion docs] to sign S3 uploads, do not define this
option.
:::

A function that returns upload parameters for a file (`Promise`, default:
`null`).

Parameters should be returned as an object, or as a `Promise` that fulfills
with an object, with keys `{ method, url, fields, headers }`.

- The `method` field is the HTTP method to be used for the upload. This should
  be one of either `PUT` or `POST`, depending on the type of upload used.
- The `url` field is the URL to which the upload request will be sent. When
  using a presigned PUT upload, this should be the URL to the S3 object with
  signing parameters included in the query string. When using a POST upload
  with a policy document, this should be the root URL of the bucket.
- The `fields` field is an object with form fields to send along with the
  upload request. For presigned PUT uploads, this should be left empty.
- The `headers` field is an object with request headers to send along with the
  upload request. When using a presigned PUT upload, it’s a good idea to
  provide `headers['content-type']`. That will make sure that the request uses
  the same content-type that was used to generate the signature. Without it,
  the browser may decide on a different content-type instead, causing S3 to
  reject the upload.
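For the `POST`-with-policy case, the returned object might look like this
sketch. The bucket URL and field values are placeholders; a real server
computes `policy` and `x-amz-signature` from your credentials (the field names
follow the S3 POST policy convention):

```javascript
// Sketch of `getUploadParameters` for a POST upload with a policy document.
// All values are placeholders for illustration only.
function getUploadParameters(file) {
  return {
    method: 'POST',
    url: 'https://my-bucket.s3.us-east-1.amazonaws.com/', // root URL of the bucket
    fields: {
      key: file.name, // object key in the bucket
      policy: 'BASE64_POLICY_PLACEHOLDER',
      'x-amz-signature': 'SIGNATURE_PLACEHOLDER',
    },
    headers: {},
  };
}

const params = getUploadParameters({ name: 'photo.jpg' });
```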
#### `timeout`

When no upload progress events have been received for this number of
milliseconds, assume the connection has an issue and abort the upload
(`number`, default: `30_000`).

This is passed through to [XHRUpload](/docs/xhr-upload#timeout-30-1000); see
its documentation page for details. Set to `0` to disable this check.

#### `limit`

Limit the number of uploads going on at the same time (`number`, default:
`5`). Setting this to `0` means no limit on concurrent uploads, but we
recommend a value between `5` and `20`.
#### `getResponseData(responseText, response)`

:::note
This is an advanced option intended for use with _almost_ S3-compatible
storage solutions.
:::

Customize response handling once an upload is completed. This passes the
function through to `@uppy/xhr-upload`; see its
[documentation](https://uppy.io/docs/xhr-upload/#getResponseData-responseText-response)
for API details.

This option is useful when uploading to an S3-like service that doesn’t reply
with an XML document, but with something else such as JSON.
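For example, a service that replies with a JSON body could be handled with
something like the sketch below. The `location` and `key` response fields are
assumptions about what such a service returns, not part of any standard:

```javascript
// Sketch of a `getResponseData` implementation for an S3-like service that
// replies with JSON instead of XML. The `location`/`key` field names are
// assumed for illustration.
function getResponseData(responseText, response) {
  const data = JSON.parse(responseText);
  return {
    location: data.location, // public URL of the uploaded object
    key: data.key, // the object key in the bucket
  };
}

// Usage: uppy.use(AwsS3, { companionUrl: '...', getResponseData });
const parsed = getResponseData(
  '{"location":"https://cdn.example.com/photo.jpg","key":"photo.jpg"}',
  null,
);
```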
#### `locale: {}`

```js
export default {
  strings: {
    timedOut: 'Upload stalled for %{seconds} seconds, aborting.',
  },
};
```
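To override this string, pass a `locale` object when installing the plugin, as
in this sketch (the `companionUrl` value is a placeholder):

```javascript
// Sketch: overriding the default `timedOut` string via the `locale` option.
// The `%{seconds}` token is interpolated by Uppy at runtime.
const awsS3Options = {
  companionUrl: 'https://companion.example.com', // placeholder
  locale: {
    strings: {
      timedOut: 'Upload stalled for %{seconds} seconds, giving up.',
    },
  },
};
// uppy.use(AwsS3, awsS3Options);
```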
## Frequently Asked Questions

### How can I generate a presigned URL server-side?

The `getUploadParameters` function can return a `Promise`, so upload
parameters can be prepared server-side. That way, no private keys to the S3
bucket need to be shared on the client. For example, there could be a PHP
server endpoint that prepares a presigned URL for a file:
```js
uppy.use(AwsS3, {
  getUploadParameters(file) {
    // Send a request to our PHP signing endpoint.
    return fetch('/s3-sign.php', {
      method: 'post',
      // Send and receive JSON.
      headers: {
        accept: 'application/json',
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        filename: file.name,
        contentType: file.type,
      }),
    })
      .then((response) => {
        // Parse the JSON response.
        return response.json();
      })
      .then((data) => {
        // Return an object in the correct shape.
        return {
          method: data.method,
          url: data.url,
          fields: data.fields,
          // Provide content type header required by S3
          headers: {
            'Content-Type': file.type,
          },
        };
      });
  },
});
```
See either the
[aws-nodejs](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-nodejs)
or [aws-php](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-php)
examples in the Uppy repository for a demonstration of how to implement
handling of presigned URLs on both the server side and the client side.
### How can I retrieve the presigned parameters of the uploaded file?

Once the file is uploaded, it’s possible to retrieve the parameters that were
generated in `getUploadParameters(file)` via the `file.meta` field:

```js
uppy.on('upload-success', (file, data) => {
  const s3Key = file.meta['key']; // the S3 object key of the uploaded file
});
```

[companion docs]: /docs/companion