CHANGELOG.md (8 additions, 0 deletions)
@@ -1,6 +1,14 @@
Changelog
=========

+#### 1.0.6 (2014-10-20)
+
+Removing global state, and adding pause and resume functionality.
+
+#### 1.0.5 (2014-10-13)
+
+Changing how buffers are subdivided, in order to provide support for in browser operation.
+
#### 1.0.4 (2014-10-13)

Getting rid of the use of setImmediate. Also now the MPU is not initialized until data is actually received by the writable stream, and error checking verifies that data has actually been uploaded to S3 before trying to end the stream. This fixes an issue where empty incoming streams were causing errors to come back from S3 as the module was attempting to complete an empty MPU.
README.md (59 additions, 79 deletions)
@@ -6,13 +6,9 @@ A pipeable write stream which uploads to Amazon S3 using the multipart file uplo
### Changelog

-#### 1.0.4 (2014-10-13)
+#### 1.0.6 (2014-10-20)

-Getting rid of the use of setImmediate. Also now the MPU is not initialized until data is actually received by the writable stream, and error checking verifies that data has actually been uploaded to S3 before trying to end the stream. This fixes an issue where empty incoming streams were causing errors to come back from S3 as the module was attempting to complete an empty MPU.
-
-#### 1.0.3 (2014-10-12)
-
-Some minor scope adjustments.
+Removing global state, and adding pause and resume functionality.

[Historical Changelogs](CHANGELOG.md)
@@ -23,6 +19,7 @@ Some minor scope adjustments.
* This package is designed to use the official Amazon SDK for Node.js, helping keep it small and efficient. For maximum flexibility you pass in the aws-sdk client yourself, allowing you to use a uniform version of AWS SDK throughout your code base.
* You can provide options for the upload call directly to do things like set server side encryption, reduced redundancy storage, or access level on the object, which some other similar streams are lacking.
* Emits "part" events which expose the amount of incoming data received by the writable stream versus the amount of data that has been uploaded via the multipart API so far, allowing you to create a progress bar if that is a requirement.
+* Support for pausing and later resuming in progress multipart uploads.
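The "part" events in the list above are straightforward to consume. A minimal sketch, assuming a `client` instance set up as shown in the configuration section below and illustrative bucket and key names (the exact payload field names are documented in the full README, not in this diff):

```js
var upload = client.upload({ Bucket: 'example-bucket', Key: 'example-key' });

upload.on('part', function (details) {
  // `details` compares bytes received by the writable stream with bytes uploaded
  // to S3 so far, which is enough to drive a progress bar.
  console.log('part uploaded:', details);
});
```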
-Configures the S3 client for s3-upload-stream to use. Please note that this module has only been tested with AWS SDK 2.0 and greater.
+Before uploading you must configure the S3 client for s3-upload-stream to use. Please note that this module has only been tested with AWS SDK 2.0 and greater.

This module does not include the AWS SDK itself. Rather you must require the AWS SDK in your own application code, instantiate an S3 client and then supply it to s3-upload-stream.
@@ -97,23 +91,25 @@ When setting up the S3 client the recommended approach for credential management
If you are following this approach then you can configure the S3 client very simply:
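The snippet itself is not reproduced in this diff. A minimal sketch of that setup, assuming the package is initialized by passing in an instantiated S3 client (the `client` variable here is the object whose `upload()` method is described below):

```js
var AWS = require('aws-sdk');

// Credentials are resolved by the SDK itself (IAM role, environment variables, etc.),
// so no keys need to appear in application code.
var client = require('s3-upload-stream')(new AWS.S3());
```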
Resume an incomplete multipart upload from a previous session by providing a `session` object with an upload ID, and an ETag and part number for each previously uploaded part. The `destination` details are as above.
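A sketch of resuming from a saved session, assuming `upload()` accepts the session object as a second argument as described above (the bucket, key, and saved values are placeholders):

```js
// Values captured from the "paused" event of an earlier session.
var session = {
  UploadId: 'upload-id-saved-from-previous-session',
  Parts: [] // entries carrying the ETag and part number of each uploaded part
};

var upload = client.upload({ Bucket: 'example-bucket', Key: 'example-key' }, session);
```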
The following methods can be called on the stream returned by `client.upload()`:
### stream.pause()

Pause an active multipart upload stream.
@@ -187,7 +186,7 @@ Calling `pause()` will immediately:
When mid-upload parts are finished, a `paused` event will fire, including an object with `UploadId` and `Parts` data that can be used to resume an upload in a later session.
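A sketch of pausing and capturing that resume data, where `upload` is the stream returned by `client.upload()`:

```js
upload.on('paused', function (data) {
  // Persist data.UploadId and data.Parts somewhere durable so a later
  // session can resume this multipart upload.
  console.log('resume later with:', data.UploadId, data.Parts);
});

upload.pause();
```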
-### package.resume()
+### stream.resume()

Resume a paused multipart upload stream.
@@ -199,19 +198,15 @@ Calling `resume()` will immediately:
It is safe to call `resume()` at any time after `pause()`. If the stream is between `pausing` and `paused`, then `resume()` will resume data flow and the `paused` event will not be fired.
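Resuming within the same session is then a single call on the same stream:

```js
// Safe to call at any time after pause(); if the stream is still "pausing",
// data flow resumes and no "paused" event is emitted.
upload.resume();
```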
-## Optional Configuration
### stream.maxPartSize(sizeInBytes)
Used to adjust the maximum amount of stream data that will be buffered in memory prior to flushing. The lowest possible value, and default value, is 5 MB. It is not possible to set this value any lower than 5 MB due to Amazon S3 restrictions, but there is no hard upper limit. The higher the value you choose the more stream data will be buffered in memory before flushing to S3.

The main reason for setting this to a higher value instead of using the default is if you have a stream with more than 50 GB of data, and therefore need larger part sizes in order to flush the entire stream while also staying within Amazon's upper limit of 10,000 parts for the multipart upload API.
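As a rough sketch with illustrative numbers: at the default 5 MB part size the 10,000-part cap limits a multipart upload to about 50 GB, so a larger stream needs a larger part size.

```js
var upload = client.upload({ Bucket: 'example-bucket', Key: 'very-large-object' });

// 20 MB parts raise the ceiling to roughly 200 GB (20 MB x 10,000 parts).
upload.maxPartSize(20971520);
```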
@@ -231,10 +226,8 @@ Used to adjust the number of parts that are concurrently uploaded to S3. By defa
Keep in mind that total memory usage will be at least `maxPartSize` * `concurrentParts` as each concurrent part will be `maxPartSize` large, so it is not recommended that you set both `maxPartSize` and `concurrentParts` to high values, or your process will be buffering large amounts of data in its memory.
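A sketch of that trade-off with illustrative values:

```js
upload.maxPartSize(20971520); // 20 MB buffered per part
upload.concurrentParts(4);    // at least 4 x 20 MB = 80 MB held in memory at once
```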
The methods and interface for s3-upload-stream have changed since 1.0 and are no longer compatible with the older versions.

The differences are:

* This package no longer includes the Amazon SDK, and now you must include it in your own app code and pass an instantiated Amazon S3 client in.
* The upload stream is now returned immediately, instead of in a callback.
* The "chunk" event emitted is now called "part" instead.
* The .maxPartSize() and .concurrentParts() methods are now methods of the writable stream itself, instead of being methods of an object returned from the upload stream constructor method.

If you have questions about how to migrate from the older version of the package after reviewing these docs feel free to open an issue with your code example.
### Tuning configuration of the AWS SDK
The following configuration tuning can help prevent errors when using less reliable internet connections (such as 3G data if you are using Node.js on the Tessel) by causing the AWS SDK to detect upload timeouts and retry.
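The snippet itself is not part of this diff; a sketch of that kind of tuning using the AWS SDK's global configuration (the values are illustrative):

```js
var AWS = require('aws-sdk');

AWS.config.update({
  httpOptions: { timeout: 5000 }, // give up on a stalled request after 5 seconds
  maxRetries: 10                  // and let the SDK retry the part upload
});
```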
0 commit comments