
docs: update `@uppy/aws-s3` docs (#5093)

Antoine du Hamel 11 months ago
parent
commit
89bfe2324e
2 changed files with 4 additions and 529 deletions
  1. docs/uploader/aws-s3-multipart.mdx (+4 −66)
  2. docs/uploader/aws-s3.mdx (+0 −463)

+4 −66
docs/uploader/aws-s3-multipart.mdx

@@ -41,7 +41,7 @@ milliseconds on uploading.
 
 **In short**
 
-- We recommend to set [`shouldUseMultipart`][] to enable multipart uploads only
+- We recommend the default value of [`shouldUseMultipart`][], which enables multipart uploads only
   for large files.
 - If you prefer to have less overhead (+20% upload speed) you can use temporary
   S3 credentials with [`getTemporarySecurityCredentials`][]. This means users
@@ -166,7 +166,6 @@ import '@uppy/dashboard/dist/style.min.css';
 const uppy = new Uppy()
 	.use(Dashboard, { inline: true, target: 'body' })
 	.use(AwsS3, {
-		shouldUseMultipart: (file) => file.size > 100 * 2 ** 20,
 		companionUrl: 'https://companion.uppy.io',
 	});
 ```
@@ -177,20 +176,10 @@ const uppy = new Uppy()
 
 #### `shouldUseMultipart(file)`
 
-:::warning
-
-Until the next major version, not setting this option uses the
-[legacy version of this plugin](../aws-s3/). This is a suboptimal experience for
-some of your user’s uploads. It’s best for speed and stability to upload large
-(100 MiB+) files with multipart and small files with regular uploads.
-
-:::
-
 A boolean, or a function that is called for each uploaded file with the
 corresponding `UppyFile` instance as its argument and returns a boolean.
 
-By default, all files are uploaded as multipart. In a future version, all files
-with a `file.size` ≤ 100 MiB will be uploaded in a single chunk, all files
+By default, all files with a `file.size` ≤ 100 MiB will be uploaded in a single chunk, and all files
 larger than that as multipart.
 
 Here’s how to use it:
@@ -254,9 +243,9 @@ disable automatic retries, and fail instantly if any chunk fails to upload.
 #### `getChunkSize(file)`
 
 A function that returns the minimum chunk size to use when uploading the given
-file.
+file as multipart.
 
-The S3 Multipart plugin uploads files in chunks. Chunks are sent in batches to
+For multipart uploads, chunks are sent in batches to
 have presigned URLs generated with [`signPart()`](#signpartfile-partdata). To
 reduce the amount of requests for large files, you can choose a larger chunk
 size, at the cost of having to re-upload more data if one chunk fails to upload.
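
As an illustration, here is a minimal sketch that raises the chunk size to 100 MiB (an arbitrary value chosen for this example, not a default):

```js
uppy.use(AwsS3, {
	companionUrl: 'https://companion.uppy.io',
	// Use at least 100 MiB per part so a large multipart upload
	// needs fewer presigned-URL requests.
	getChunkSize: (file) => 100 * 2 ** 20,
});
```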
@@ -404,57 +393,6 @@ upload as query parameters.
 <details>
 <summary>Deprecated options</summary>
 
-#### `prepareUploadParts(file, partData)`
-
-A function that generates a batch of signed URLs for the specified part numbers.
-
-Receives the `file` object from Uppy’s state. The `partData` argument is an
-object with keys:
-
-- `uploadId` - The UploadID of this Multipart upload.
-- `key` - The object key in the S3 bucket.
-- `parts` - An array of objects with the part number and chunk
-  (`Array<{ number: number, chunk: blob }>`). `number` can’t be zero.
-
-`prepareUploadParts` should return a `Promise` with an `Object` with keys:
-
-- `presignedUrls` - A JavaScript object with the part numbers as keys and the
-  presigned URL for each part as the value.
-- `headers` - **(Optional)** Custom headers to send along with every request per
-  part (`{ 1: { 'Content-MD5': 'hash' }}`). These are also indexed by (1-based)
-  part number, so you can, for instance, send an MD5 hash validation for each
-  part to S3.
-
-An example of what the return value should look like:
-
-```json
-{
-	"presignedUrls": {
-		"1": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=1&...",
-		"2": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=2&...",
-		"3": "https://bucket.region.amazonaws.com/path/to/file.jpg?partNumber=3&..."
-	},
-	"headers": {
-		"1": { "Content-MD5": "foo" },
-		"2": { "Content-MD5": "bar" },
-		"3": { "Content-MD5": "baz" }
-	}
-}
-```
-
-If an error occurred, reject the `Promise` with an `Object` with the following
-keys:
-
-```json
-{ "source": { "status": 500 } }
-```
-
-`status` is the HTTP code and is required for determining whether to retry the
-request. `prepareUploadParts` will be retried if the code is `0`, `409`, `423`,
-or between `500` and `600`.
-
-</details>
-
 #### `getTemporarySecurityCredentials(options)`
 
 :::note

+0 −463
docs/uploader/aws-s3.mdx

@@ -1,463 +0,0 @@
----
-sidebar_position: 3
-slug: /aws-s3
----
-
-import Tabs from '@theme/Tabs';
-import TabItem from '@theme/TabItem';
-import UppyCdnExample from '/src/components/UppyCdnExample';
-
-# AWS S3 (legacy)
-
-The `@uppy/aws-s3` plugin can be used to upload files directly to an S3 bucket
-or an S3-compatible provider, such as Google Cloud Storage or DigitalOcean Spaces.
-Uploads can be signed using either [Companion][companion docs] or a custom
-signing function.
-
-This documents the legacy version of this plugin, which we plan to remove in
-the next major version.
-
-## When should I use it?
-
-:::tip
-
-Not sure which uploader is best for you? Read
-“[Choosing the uploader you need](/docs/guides/choosing-uploader)”.
-
-:::
-
-:::warning
-
-This plugin is deprecated; you should switch to the
-[modern version of this plugin](/docs/aws-s3-multipart).
-
-:::
-
-You can use this plugin when you prefer a _client-to-storage_ over a
-_client-to-server-to-storage_ (such as [Transloadit](/docs/transloadit) or
-[Tus](/docs/tus)) setup. This may in some cases be preferable, for instance, to
-reduce costs or the complexity of running a server and load balancer with
-[Tus](/docs/tus).
-
-This plugin can be used with AWS S3, DigitalOcean Spaces, Google Cloud Storage,
-or any S3-compatible provider. Although all S3-compatible providers are
-supported, we don’t test against them; this plugin was developed against S3, so
-a small risk is involved in using the others.
-
-`@uppy/aws-s3` is best suited for small files and/or lots of files. If you are
-planning to upload mostly large files (100&nbsp;MB+), consider using
-[`@uppy/aws-s3-multipart`](/docs/aws-s3-multipart).
-
-## Install
-
-<Tabs>
-  <TabItem value="npm" label="NPM" default>
-
-```shell
-npm install @uppy/aws-s3
-```
-
-  </TabItem>
-
-  <TabItem value="yarn" label="Yarn">
-
-```shell
-yarn add @uppy/aws-s3
-```
-
-  </TabItem>
-
-  <TabItem value="cdn" label="CDN">
-    <UppyCdnExample>
-      {`
-        import { Uppy, AwsS3 } from "{{UPPY_JS_URL}}"
-        new Uppy().use(AwsS3, { /* see options */ })
-      `}
-    </UppyCdnExample>
-  </TabItem>
-</Tabs>
-
-## Use
-
-A quick overview of the complete API.
-
-```js {10} showLineNumbers
-import Uppy from '@uppy/core';
-import Dashboard from '@uppy/dashboard';
-import AwsS3 from '@uppy/aws-s3';
-
-import '@uppy/core/dist/style.min.css';
-import '@uppy/dashboard/dist/style.min.css';
-
-const uppy = new Uppy()
-	.use(Dashboard, { inline: true, target: 'body' })
-	.use(AwsS3, { companionUrl: 'https://companion.uppy.io' });
-```
-
-### With an AWS S3 bucket
-
-To use this plugin with S3, we need to set up a bucket with the right
-permissions and CORS settings.
-
-S3 buckets do not allow public uploads for security reasons. To allow Uppy and
-the browser to upload directly to a bucket, its CORS permissions need to be
-configured.
-
-CORS permissions can be found in the
-[S3 Management Console](https://console.aws.amazon.com/s3/home). Click the
-bucket that will receive the uploads, then go into the `Permissions` tab and
-select the `CORS configuration` button. A JSON document will be shown that
-defines the CORS configuration. (AWS used to use XML but now only allows JSON.)
-More information about the
-[S3 CORS format here](https://docs.amazonaws.cn/en_us/AmazonS3/latest/userguide/ManageCorsUsing.html).
-
-The configuration required for Uppy and Companion is this:
-
-```json
-[
-	{
-		"AllowedOrigins": ["https://my-app.com"],
-		"AllowedMethods": ["GET", "POST"],
-		"MaxAgeSeconds": 3000,
-		"AllowedHeaders": [
-			"Authorization",
-			"x-amz-date",
-			"x-amz-content-sha256",
-			"content-type"
-		]
-	},
-	{
-		"AllowedOrigins": ["*"],
-		"AllowedMethods": ["GET"],
-		"MaxAgeSeconds": 3000
-	}
-]
-```
-
-A good practice is to use two CORS rules: one for viewing the uploaded files,
-and one for uploading files. This is done above where the first object in the
-array defines the rules for uploading, and the second for viewing. The example
-above **makes files publicly viewable**. You can change it according to your
-needs.
-
-If you are using an IAM policy to allow access to the S3 bucket, the policy must
-have at least the `s3:PutObject` and `s3:PutObjectAcl` permissions scoped to the
-bucket in question. In-depth documentation about CORS rules is available on the
-[AWS documentation site](https://docs.aws.amazon.com/AmazonS3/latest/dev/cors.html).
-
-### With a DigitalOcean Spaces bucket
-
-:::tip
-
-Check out the
-[DigitalOcean Spaces example](https://github.com/transloadit/uppy/tree/main/examples/digitalocean-spaces)
-in the Uppy repository for a complete, runnable example.
-
-:::
-
-DigitalOcean Spaces is S3-compatible so you only need to change the endpoint and
-bucket. Make sure you have a `key` and `secret`. If not, refer to
-“[How To Create a DigitalOcean Space and API Key](https://www.digitalocean.com/community/tutorials/how-to-create-a-digitalocean-space-and-api-key)”.
-
-When using [Companion](/docs/companion) as standalone, you can set these as
-environment variables:
-
-```bash
-export COMPANION_AWS_KEY="xxx"
-export COMPANION_AWS_SECRET="xxx"
-export COMPANION_AWS_REGION="us-east-1"
-export COMPANION_AWS_ENDPOINT="https://{region}.digitaloceanspaces.com"
-export COMPANION_AWS_BUCKET="my-space-name"
-```
-
-The `{region}` string will be replaced by the contents of the
-`COMPANION_AWS_REGION` environment variable.
-
-When using [Companion](/docs/companion) as an Express integration, configure the
-`s3` options:
-
-```js
-const options = {
-	s3: {
-		key: 'xxx',
-		secret: 'xxx',
-		bucket: 'my-space-name',
-		region: 'us-east-1',
-		endpoint: 'https://{region}.digitaloceanspaces.com',
-	},
-};
-```
-
-### With a Google Cloud Storage bucket
-
-For the `@uppy/aws-s3` plugin to be able to upload to a GCS bucket, it needs the
-Interoperability setting enabled. You can enable the Interoperability setting
-and
-[generate interoperable storage access keys](https://cloud.google.com/storage/docs/migrating#keys)
-by going to [Google Cloud Storage](https://console.cloud.google.com/storage) »
-Settings » Interoperability. Then set the environment variables for Companion
-like this:
-
-```bash
-export COMPANION_AWS_ENDPOINT="https://storage.googleapis.com"
-export COMPANION_AWS_BUCKET="YOUR-GCS-BUCKET-NAME"
-export COMPANION_AWS_KEY="GOOGxxxxxxxxx" # The Access Key
-export COMPANION_AWS_SECRET="YOUR-GCS-SECRET" # The Secret
-```
-
-You do not need to configure the region with GCS.
-
-You also need to configure CORS with their HTTP API. If you haven’t done this
-already, see
-[Configuring CORS on a Bucket](https://cloud.google.com/storage/docs/configuring-cors#Configuring-CORS-on-a-Bucket)
-in the GCS documentation, or follow the steps below to do it using Google’s API
-playground.
-
-The JSON format consists of an array of CORS configuration objects. For
-instance:
-
-```json
-{
-	"cors": [
-		{
-			"origin": ["https://my-app.com"],
-			"method": ["GET", "POST"],
-			"maxAgeSeconds": 3000
-		},
-		{
-			"origin": ["*"],
-			"method": ["GET"],
-			"maxAgeSeconds": 3000
-		}
-	]
-}
-```
-
-When using presigned `PUT` uploads, replace the `"POST"` method with `"PUT"` in
-the first entry.
-
-If you have the [gsutil](https://cloud.google.com/storage/docs/gsutil)
-command-line tool, you can apply this configuration using the
-[gsutil cors](https://cloud.google.com/storage/docs/configuring-cors#configure-cors-bucket)
-command.
-
-```bash
-gsutil cors set THAT-FILE.json gs://BUCKET-NAME
-```
-
-Otherwise, you can manually apply it through the OAuth playground:
-
-1.  Get a temporary API token from the
-    [Google OAuth2.0 playground](https://developers.google.com/oauthplayground/)
-2.  Select the `Cloud Storage JSON API v1` » `devstorage.full_control` scope
-3.  Press `Authorize APIs` and allow access
-4.  Click `Step 3 - Configure request to API`
-5.  Configure it as follows:
-    1.  HTTP Method: PATCH
-    2.  Request URI: `https://www.googleapis.com/storage/v1/b/YOUR_BUCKET_NAME`
-    3.  Content-Type: application/json (should be the default)
-    4.  Press `Enter request body` and input your CORS configuration
-6.  Press `Send the request`.
-
-### Use with your own server
-
-The recommended approach is to integrate `@uppy/aws-s3` with your own server.
-You will need to do the following things:
-
-1. Set up a bucket
-2. Create an endpoint on your server (a sketch follows this list). You can
-   create it as an edge function (such as an AWS Lambda), inside Next.js as an
-   API route, or wherever your server runs
-   - `POST` > `/uppy/s3`: get upload parameters
-3. [Set up Uppy](https://github.com/transloadit/uppy/blob/main/examples/aws-nodejs/public/index.html)
-
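As a sketch of step 2, the endpoint below presigns a PUT URL with the AWS SDK v3 (`@aws-sdk/client-s3` and `@aws-sdk/s3-request-presigner`); the region, bucket name, and key scheme are placeholders, not part of the original docs:

```js
import express from 'express';
import crypto from 'node:crypto';
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

const app = express();
const s3 = new S3Client({ region: 'us-east-1' }); // placeholder region

app.post('/uppy/s3', express.json(), async (req, res) => {
	const key = crypto.randomUUID(); // placeholder key scheme
	const url = await getSignedUrl(
		s3,
		new PutObjectCommand({
			Bucket: 'my-bucket', // placeholder bucket
			Key: key,
			ContentType: req.body.contentType,
		}),
		{ expiresIn: 900 }, // URL valid for 15 minutes
	);
	// The shape expected by `getUploadParameters` for a presigned PUT upload.
	res.json({
		method: 'PUT',
		url,
		fields: {},
		headers: { 'content-type': req.body.contentType },
	});
});

app.listen(8080);
```
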
-### Use with Companion
-
-[Companion](/docs/companion) has S3 routes built-in for a plug-and-play
-experience with Uppy.
-
-:::caution
-
-Generally it’s better for access control, observability, and scaling to
-integrate `@uppy/aws-s3` with your own server. You may want to use
-[Companion](/docs/companion) for creating, signing, and completing your S3
-uploads if you already need Companion for remote files (such as from Google
-Drive). Otherwise it’s not worth the hosting effort.
-
-:::
-
-```js {10} showLineNumbers
-import Uppy from '@uppy/core';
-import Dashboard from '@uppy/dashboard';
-import AwsS3 from '@uppy/aws-s3';
-
-import '@uppy/core/dist/style.min.css';
-import '@uppy/dashboard/dist/style.min.css';
-
-const uppy = new Uppy()
-	.use(Dashboard, { inline: true, target: 'body' })
-	.use(AwsS3, { companionUrl: 'https://companion.uppy.io' });
-```
-
-## Options
-
-The `@uppy/aws-s3` plugin has the following configurable options:
-
-#### `id`
-
-A unique identifier for this plugin (`string`, default: `'AwsS3'`).
-
-#### `companionUrl`
-
-Companion instance to use for signing S3 uploads (`string`, default: `null`).
-
-#### `companionHeaders`
-
-Custom headers that should be sent along to [Companion](/docs/companion) on
-every request (`Object`, default: `{}`).
-
-#### `allowedMetaFields`
-
-Pass an array of field names to limit the metadata fields that will be added to
-the upload as query parameters (`Array`, default: `null`).
-
-- Set this to `['name']` to only send the `name` field.
-- Set this to `null` (the default) to send _all_ metadata fields.
-- Set this to an empty array `[]` to not send any fields.
-
-#### `getUploadParameters(file)`
-
-:::note
-
-When using [Companion][companion docs] to sign S3 uploads, do not define this
-option.
-
-:::
-
-A function that returns upload parameters for a file (`Promise`, default:
-`null`).
-
-Parameters should be returned as an object, or as a `Promise` that fulfills with
-an object, with keys `{ method, url, fields, headers }`.
-
-- The `method` field is the HTTP method to be used for the upload. This should
-  be either `PUT` or `POST`, depending on the type of upload used.
-- The `url` field is the URL to which the upload request will be sent. When
-  using a presigned PUT upload, this should be the URL to the S3 object with
-  signing parameters included in the query string. When using a POST upload with
-  a policy document, this should be the root URL of the bucket.
-- The `fields` field is an object with form fields to send along with the upload
-  request. For presigned PUT uploads, this should be left empty.
-- The `headers` field is an object with request headers to send along with the
-  upload request. When using a presigned PUT upload, it’s a good idea to provide
-  `headers['content-type']`. That will make sure that the request uses the same
-  content-type that was used to generate the signature. Without it, the browser
-  may decide on a different content-type instead, causing S3 to reject the
-  upload.
-
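For illustration, a minimal sketch of a presigned PUT return value (the URL is a placeholder):

```js
uppy.use(AwsS3, {
	async getUploadParameters(file) {
		// Fetch or compute a presigned URL, then return this shape:
		return {
			method: 'PUT',
			url: 'https://my-bucket.s3.us-east-1.amazonaws.com/file.jpg?X-Amz-Signature=…',
			fields: {},
			// Match the content-type used when generating the signature.
			headers: { 'content-type': file.type },
		};
	},
});
```
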
-#### `timeout`
-
-When no upload progress events have been received for this number of
-milliseconds, assume the connection has an issue and abort the upload (`number`,
-default: `30_000`).
-
-This is passed through to [XHRUpload](/docs/xhr-upload#timeout-30-1000); see its
-documentation page for details. Set to `0` to disable this check.
-
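For example (a sketch with an arbitrary value), to tolerate a full minute without progress events before aborting:

```js
uppy.use(AwsS3, {
	companionUrl: 'https://companion.uppy.io',
	// Abort an upload after 60 seconds without progress events.
	timeout: 60 * 1000,
});
```
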
-#### `limit`
-
-Limit the amount of uploads going on at the same time (`number`, default: `5`).
-
-Setting this to `0` means no limit on concurrent uploads, but we recommend a
-value between `5` and `20`.
-
-#### `getResponseData(responseText, response)`
-
-:::note
-
-This is an advanced option intended for use with _almost_ S3-compatible storage
-solutions.
-
-:::
-
-Customize response handling once an upload is completed. This passes the
-function through to `@uppy/xhr-upload`; see its
-[documentation](https://uppy.io/docs/xhr-upload/#getResponseData-responseText-response)
-for API details.
-
-This option is useful when uploading to an S3-like service that doesn’t reply
-with an XML document, but with something else such as JSON.
-
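A minimal sketch, assuming the service replies with a JSON body that contains a `location` field:

```js
uppy.use(AwsS3, {
	companionUrl: 'https://companion.uppy.io',
	getResponseData(responseText) {
		// Parse JSON instead of the XML document S3 would normally return.
		const { location } = JSON.parse(responseText);
		return { location };
	},
});
```
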
-#### `locale: {}`
-
-```js
-export default {
-	strings: {
-		timedOut: 'Upload stalled for %{seconds} seconds, aborting.',
-	},
-};
-```
-
-## Frequently Asked Questions
-
-### How can I generate a presigned URL server-side?
-
-The `getUploadParameters` function can return a `Promise`, so upload parameters
-can be prepared server-side. That way, no private keys to the S3 bucket need to
-be shared on the client. For example, there could be a PHP server endpoint that
-prepares a presigned URL for a file:
-
-```js
-uppy.use(AwsS3, {
-	getUploadParameters(file) {
-		// Send a request to our PHP signing endpoint.
-		return fetch('/s3-sign.php', {
-			method: 'post',
-			// Send and receive JSON.
-			headers: {
-				accept: 'application/json',
-				'content-type': 'application/json',
-			},
-			body: JSON.stringify({
-				filename: file.name,
-				contentType: file.type,
-			}),
-		})
-			.then((response) => {
-				// Parse the JSON response.
-				return response.json();
-			})
-			.then((data) => {
-				// Return an object in the correct shape.
-				return {
-					method: data.method,
-					url: data.url,
-					fields: data.fields,
-					// Provide content type header required by S3
-					headers: {
-						'Content-Type': file.type,
-					},
-				};
-			});
-	},
-});
-```
-
-See either the
-[aws-nodejs](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-nodejs)
-or [aws-php](https://github.com/transloadit/uppy/tree/HEAD/examples/aws-php)
-examples in the Uppy repository for a demonstration of how to implement handling
-of presigned URLs on both the server and the client.
-
-### How can I retrieve the presigned parameters of the uploaded file?
-
-Once the file is uploaded, it’s possible to retrieve the parameters that were
-generated in `getUploadParameters(file)` via the `file.meta` field:
-
-```js
-uppy.on('upload-success', (file, data) => {
-	const s3Key = file.meta['key']; // the S3 object key of the uploaded file
-});
-```
-
-[companion docs]: /docs/companion