
@uppy/aws-s3-multipart: change limit to 6 (#4199)

Antoine du Hamel 2 years ago
parent
commit
90457f91a3

+ 1 - 1
packages/@uppy/aws-s3-multipart/src/MultipartUploader.js

@@ -4,7 +4,7 @@ import delay from '@uppy/utils/lib/delay'
 const MB = 1024 * 1024
 
 const defaultOptions = {
-  limit: 1,
+  limit: 6,
   retryDelays: [0, 1000, 3000, 5000],
   getChunkSize (file) {
     return Math.ceil(file.size / 10000)
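
The default `getChunkSize` above targets S3's 10,000-part cap for multipart uploads. A minimal sketch of that math, with no Uppy dependency (the `MB` constant in the hunk suggests the uploader also enforces a minimum part size elsewhere; S3 requires at least 5 MB for every part but the last, but that clamping code is outside this hunk):

```js
// Minimal sketch of the default chunking math shown above.
// `file` stands in for Uppy's file object; plain Node, no Uppy needed.
const MB = 1024 * 1024

function defaultGetChunkSize (file) {
  // Aim for at most 10,000 parts, the S3 multipart upload limit.
  return Math.ceil(file.size / 10000)
}

const file = { size: 100 * MB } // hypothetical 100 MB upload
const chunkSize = defaultGetChunkSize(file)
const partCount = Math.ceil(file.size / chunkSize)
console.log({ chunkSize, partCount }) // { chunkSize: 10486, partCount: 10000 }
```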

+ 1 - 1
packages/@uppy/aws-s3-multipart/src/index.js

@@ -32,7 +32,7 @@ export default class AwsS3Multipart extends BasePlugin {
     this.#client = new RequestClient(uppy, opts)
 
     const defaultOptions = {
-      limit: 0,
+      limit: 6,
       retryDelays: [0, 1000, 3000, 5000],
       createMultipartUpload: this.createMultipartUpload.bind(this),
       listParts: this.listParts.bind(this),
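
With this hunk, constructing the plugin without options caps concurrent part uploads at 6 instead of `limit: 0`, Uppy's convention for no concurrency limit. A hedged usage sketch; the `companionUrl` value is illustrative, not from this commit:

```js
import Uppy from '@uppy/core'
import AwsS3Multipart from '@uppy/aws-s3-multipart'

const uppy = new Uppy()
uppy.use(AwsS3Multipart, {
  // Hypothetical endpoint; any Companion-compatible server works here.
  companionUrl: 'https://companion.example.com',
  // After this commit, omitting `limit` yields 6 concurrent part uploads.
  // Override only if you have a reason to, e.g.:
  // limit: 3,
})
```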

+ 1 - 0
packages/@uppy/aws-s3-multipart/src/index.test.js

@@ -46,6 +46,7 @@ describe('AwsS3Multipart', () => {
     beforeEach(() => {
       core = new Core()
       core.use(AwsS3Multipart, {
+        limit: 0,
         createMultipartUpload: jest.fn(() => {
           return {
             uploadId: '6aeb1980f3fc7ce0b5454d25b71992',
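
The test pins `limit: 0`, presumably so assertions about how often the mocked presign callbacks fire are unaffected by the new batching default. A sketch of the adjusted setup (same jest mocks as the real suite; only `createMultipartUpload` is shown):

```js
const core = new Core()
core.use(AwsS3Multipart, {
  // 0 is Uppy's convention for "no concurrency limit"; opting out of the
  // new default of 6 keeps the mock call counts stable.
  limit: 0,
  createMultipartUpload: jest.fn(() => ({
    uploadId: '6aeb1980f3fc7ce0b5454d25b71992', // literal from the hunk above
  })),
})
```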

+ 4 - 2
website/src/docs/aws-s3-multipart.md

@@ -39,9 +39,11 @@ const { AwsS3Multipart } = Uppy
 
 The `@uppy/aws-s3-multipart` plugin has the following configurable options:
 
-### `limit: 5`
+### `limit: 6`
 
-The maximum amount of chunks to upload simultaneously. This affects [`prepareUploadParts()`](#prepareUploadParts-file-partData) as well; after the initial batch of `limit` parts is presigned, a minimum of `limit / 2` rounded up will be presigned at a time. You should set the limit carefully. Setting it to a value too high could cause issues where the presigned URLs begin to expire before the chunks they are for start uploading. Too low and you will end up with a lot of extra round trips to your server (or Companion) than necessary to presign URLs. If the default chunk size of 5MB is used, a `limit` between 5 and 15 is recommended.
+The maximum amount of chunks to upload simultaneously. This affects [`prepareUploadParts()`](#prepareUploadParts-file-partData) as well; after the initial batch of `limit` parts is presigned, a minimum of `limit / 2` rounded up will be presigned at a time. You should set the limit carefully. Setting it to a value too high could cause issues where the presigned URLs begin to expire before the chunks they are for start uploading. Too low and you will end up with a lot of extra round trips to your server (or Companion) than necessary to presign URLs. If the default chunk size of 5MB is used, a `limit` between 5 and 6 is recommended.
+
+Because HTTP/1.1 limits the number of concurrent requests to one origin to 6, it’s recommended to always set a limit of 6 or smaller for all your uploads, or to not override the default.
 
 For example, with a 50MB file and a `limit` of 5 we end up with 10 chunks. 5 of these are presigned in one batch, then 3, then 2, for a total of 3 round trips to the server via [`prepareUploadParts()`](#prepareUploadParts-file-partData) and 10 requests sent to AWS via the presigned URLs generated.
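
That batch schedule can be checked with a short sketch. This is one reading of the presign behavior described above, not the plugin's actual scheduler: the first round presigns `limit` parts, then each later round presigns at least `Math.ceil(limit / 2)`:

```js
// Sketch of the presign batching described in the docs (assumes limit >= 1).
function presignBatches (totalParts, limit) {
  const batches = []
  let remaining = totalParts
  let next = limit // first round presigns a full `limit` batch
  while (remaining > 0) {
    const n = Math.min(next, remaining)
    batches.push(n)
    remaining -= n
    next = Math.ceil(limit / 2) // later rounds: at least limit / 2, rounded up
  }
  return batches
}

console.log(presignBatches(10, 5)) // [ 5, 3, 2 ]: 3 round trips, matching the example
```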