
Commit f750707

Documentation changes and other minor tweaks.

1 parent 40f3abe

6 files changed (+102, -118 lines)


CHANGELOG.md

Lines changed: 8 additions & 0 deletions
@@ -1,6 +1,14 @@
 Changelog
 =========
 
+#### 1.0.6 (2014-10-20)
+
+Removing global state, and adding pause and resume functionality.
+
+#### 1.0.5 (2014-10-13)
+
+Changing how buffers are subdivided, in order to provide support for in-browser operation.
+
 #### 1.0.4 (2014-10-13)
 
 Getting rid of the use of setImmediate. Also now the MPU is not initialized until data is actually received by the writable stream, and error checking verifies that data has actually been uploaded to S3 before trying to end the stream. This fixes an issue where empty incoming streams were causing errors to come back from S3 as the module was attempting to complete an empty MPU.

README.md

Lines changed: 59 additions & 79 deletions
@@ -6,13 +6,9 @@ A pipeable write stream which uploads to Amazon S3 using the multipart file uplo
 
 ### Changelog
 
-#### 1.0.4 (2014-10-13)
+#### 1.0.6 (2014-10-20)
 
-Getting rid of the use of setImmediate. Also now the MPU is not initialized until data is actually received by the writable stream, and error checking verifies that data has actually been uploaded to S3 before trying to end the stream. This fixes an issue where empty incoming streams were causing errors to come back from S3 as the module was attempting to complete an empty MPU.
-
-#### 1.0.3 (2014-10-12)
-
-Some minor scope adjustments.
+Removing global state, and adding pause and resume functionality.
 
 [Historical Changelogs](CHANGELOG.md)

@@ -23,6 +19,7 @@ Some minor scope adjustments.
 * This package is designed to use the official Amazon SDK for Node.js, helping keep it small and efficient. For maximum flexibility you pass in the aws-sdk client yourself, allowing you to use a uniform version of the AWS SDK throughout your code base.
 * You can provide options for the upload call directly to do things like set server side encryption, reduced redundancy storage, or access level on the object, which some other similar streams are lacking.
 * Emits "part" events which expose the amount of incoming data received by the writable stream versus the amount of data that has been uploaded via the multipart API so far, allowing you to create a progress bar if that is a requirement.
+* Support for pausing and later resuming in-progress multipart uploads.
 
 ### Limits
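For context, here is a minimal sketch of consuming the "part" events mentioned in the feature list above to drive a progress display. The payload property names used below (`receivedSize`, `uploadedSize`) are assumptions for illustration; this diff only states that the event exposes received versus uploaded byte counts.

```js
var AWS = require('aws-sdk'),
    s3Stream = require('s3-upload-stream')(new AWS.S3());

var upload = s3Stream.upload({ Bucket: "bucket-name", Key: "key-name" });

upload.on('part', function (details) {
  // Property names are assumed for this sketch; the idea is to compare bytes
  // received by the writable stream with bytes actually flushed to S3 so far.
  console.log('Received ' + details.receivedSize + ' bytes, uploaded ' +
              details.uploadedSize + ' bytes');
});
```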

@@ -32,14 +29,13 @@ Some minor scope adjustments.
 ## Example
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk'),
+var AWS = require('aws-sdk'),
     zlib = require('zlib'),
     fs = require('fs');
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 
 // Set the client to be used for the upload.
 AWS.config.loadFromPath('./config.json');
-s3Stream.client(new AWS.S3());
 
 // Create the streams
 var read = fs.createReadStream('/path/to/a/file');
@@ -84,9 +80,7 @@ read.pipe(compress).pipe(upload);
 
 ## Usage
 
-### package.client(s3);
-
-Configures the S3 client for s3-upload-stream to use. Please note that this module has only been tested with AWS SDK 2.0 and greater.
+Before uploading you must configure the S3 client for s3-upload-stream to use. Please note that this module has only been tested with AWS SDK 2.0 and greater.
 
 This module does not include the AWS SDK itself. Rather you must require the AWS SDK in your own application code, instantiate an S3 client and then supply it to s3-upload-stream.
 
@@ -97,23 +91,25 @@ When setting up the S3 client the recommended approach for credential management
 If you are following this approach then you can configure the S3 client very simply:
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
-
-s3Stream.client(new AWS.S3());
+var AWS = require('aws-sdk'),
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 ```
 
 However, some environments may require you to keep your credentials in a file, or hardcoded. In that case you can use the following form:
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
+var AWS = require('aws-sdk');
 
+// Make sure AWS credentials are loaded using one of the following techniques.
+AWS.config.loadFromPath('./config.json');
 AWS.config.update({accessKeyId: 'akid', secretAccessKey: 'secret'});
-s3Stream.client(new AWS.S3());
+
+// Create a stream client.
+var s3Stream = require('s3-upload-stream')(new AWS.S3());
 ```
 
-### package.upload(destination)
+### client.upload(destination)
 
 Create an upload stream that will upload to the specified destination. The upload stream is returned immediately.
 
@@ -122,60 +118,63 @@ The destination details is an object in which you can specify many different [de
 __Example:__
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
-
-s3Stream.client(new AWS.S3());
+var AWS = require('aws-sdk'),
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 
 var read = fs.createReadStream('/path/to/a/file');
-var upload = new s3Client.upload({
-  "Bucket": "bucket-name",
-  "Key": "key-name",
-  "ACL": "public-read",
-  "StorageClass": "REDUCED_REDUNDANCY",
-  "ContentType": "binary/octet-stream"
+var upload = s3Stream.upload({
+  Bucket: "bucket-name",
+  Key: "key-name",
+  ACL: "public-read",
+  StorageClass: "REDUCED_REDUNDANCY",
+  ContentType: "binary/octet-stream"
 });
 
 read.pipe(upload);
 ```
 
-### package.upload(destination, session)
+### client.upload(destination, [session])
 
 Resume an incomplete multipart upload from a previous session by providing a `session` object with an upload ID, and an ETag and part number for each part already uploaded. `destination` details are as above.
 
 __Example:__
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
-
-s3Stream.client(new AWS.S3());
+var AWS = require('aws-sdk'),
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 
 var read = fs.createReadStream('/path/to/a/file');
-var upload = new s3Client.upload({
-  "Bucket": "bucket-name",
-  "Key": "key-name",
-  "ACL": "public-read",
-  "StorageClass": "REDUCED_REDUNDANCY",
-  "ContentType": "binary/octet-stream"
-}, {
-  "UploadId": "f1j2b47238f12984f71b2o8347f12",
-  "Parts": [
-    {
-      "ETag": "3k2j3h45t9v8aydgajsda",
-      "PartNumber": 1
-    },
-    {
-      "Etag": "kjgsdfg876sd8fgk3j44t",
-      "PartNumber": 2
-    }
-  ]
-});
+var upload = s3Stream.upload(
+  {
+    Bucket: "bucket-name",
+    Key: "key-name",
+    ACL: "public-read",
+    StorageClass: "REDUCED_REDUNDANCY",
+    ContentType: "binary/octet-stream"
+  },
+  {
+    UploadId: "f1j2b47238f12984f71b2o8347f12",
+    Parts: [
+      {
+        ETag: "3k2j3h45t9v8aydgajsda",
+        PartNumber: 1
+      },
+      {
+        ETag: "kjgsdfg876sd8fgk3j44t",
+        PartNumber: 2
+      }
+    ]
+  }
+);
 
 read.pipe(upload);
 ```
 
-### package.pause()
+## Stream Methods
+
+The following methods can be called on the stream returned by `client.upload()`.
+
+### stream.pause()
 
 Pause an active multipart upload stream.
 
@@ -187,7 +186,7 @@ Calling `pause()` will immediately:
 
 When mid-upload parts are finished, a `paused` event will fire, including an object with `UploadId` and `Parts` data that can be used to resume an upload in a later session.
 
-### package.resume()
+### stream.resume()
 
 Resume a paused multipart upload stream.
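As a minimal sketch of the pause/resume flow described above, assuming only what the surrounding text states (`pause()` stops the stream, the `paused` event supplies an object with `UploadId` and `Parts`, and that object can later be passed back as the `session` argument of `client.upload()`):

```js
var AWS = require('aws-sdk'),
    s3Stream = require('s3-upload-stream')(new AWS.S3());

var upload = s3Stream.upload({ Bucket: "bucket-name", Key: "key-name" });

// Stop accepting new data; parts already in flight are allowed to finish.
upload.pause();

upload.on('paused', function (session) {
  // session contains UploadId and Parts, which can be persisted and fed back
  // into s3Stream.upload(destination, session) in a later process to resume.
  console.log('Paused after ' + session.Parts.length + ' parts');
});

// Or, to continue in the same process, call resume() at any time after pause():
// upload.resume();
```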

@@ -199,19 +198,15 @@ Calling `resume()` will immediately:
 
 It is safe to call `resume()` at any time after `pause()`. If the stream is between `pausing` and `paused`, then `resume()` will resume data flow and the `paused` event will not be fired.
 
-## Optional Configuration
-
 ### stream.maxPartSize(sizeInBytes)
 
 Used to adjust the maximum amount of stream data that will be buffered in memory prior to flushing. The lowest possible value, and default value, is 5 MB. It is not possible to set this value any lower than 5 MB due to Amazon S3 restrictions, but there is no hard upper limit. The higher the value you choose the more stream data will be buffered in memory before flushing to S3.
 
 The main reason for setting this to a higher value instead of using the default is if you have a stream with more than 50 GB of data, and therefore need larger part sizes in order to flush the entire stream while also staying within Amazon's upper limit of 10,000 parts for the multipart upload API.
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
-
-s3Stream.client(new AWS.S3());
+var AWS = require('aws-sdk'),
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 
 var read = fs.createReadStream('/path/to/a/file');
 var upload = s3Stream.upload({
@@ -231,10 +226,8 @@ Used to adjust the number of parts that are concurrently uploaded to S3. By defa
 Keep in mind that total memory usage will be at least `maxPartSize` * `concurrentParts`, as each concurrent part will be `maxPartSize` large, so it is not recommended that you set both `maxPartSize` and `concurrentParts` to high values, or your process will be buffering large amounts of data in its memory.
 
 ```js
-var s3Stream = require('s3-upload-stream'),
-    AWS = require('aws-sdk');
-
-s3Stream.client(new AWS.S3());
+var AWS = require('aws-sdk'),
+    s3Stream = require('s3-upload-stream')(new AWS.S3());
 
 var read = fs.createReadStream('/path/to/a/file');
 var upload = s3Stream.upload({
@@ -247,19 +240,6 @@ upload.concurrentParts(5);
 read.pipe(upload);
 ```
 
-### Migrating from pre-1.0 s3-upload-stream
-
-The methods and interface for s3-upload-stream has changed since 1.0 and is no longer compatible with the older versions.
-
-The differences are:
-
-* This package no longer includes Amazon SDK, and now you must include it in your own app code and pass an instantiated Amazon S3 client in.
-* The upload stream is now returned immeadiately, instead of in a callback.
-* The "chunk" event emitted is now called "part" instead.
-* The .maxPartSize() and .concurrentParts() methods are now methods of the writable stream itself, instead of being methods of an object returned from the upload stream constructor method.
-
-If you have questions about how to migrate from the older version of the package after reviewing these docs feel free to open an issue with your code example.
-
 ### Tuning configuration of the AWS SDK
 
 The following configuration tuning can help prevent errors when using less reliable internet connections (such as 3G data if you are using Node.js on the Tessel) by causing the AWS SDK to detect upload timeouts and retry.
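The concrete settings are outside this hunk; as an illustrative sketch of the kind of AWS SDK tuning being described, with placeholder timeout and retry values rather than the README's actual recommendations:

```js
var AWS = require('aws-sdk');

// Placeholder values chosen for illustration: time out slow part uploads so the
// SDK can retry them instead of hanging on an unreliable connection.
var s3Client = new AWS.S3({
  httpOptions: { timeout: 60000 },
  maxRetries: 5
});

var s3Stream = require('s3-upload-stream')(s3Client);
```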

examples/upload.js

Lines changed: 6 additions & 7 deletions
@@ -1,20 +1,19 @@
 #!/usr/bin/env node
-var s3Stream = require('../lib/s3-upload-stream.js'),
-    AWS = require('aws-sdk'),
+var AWS = require('aws-sdk'),
     zlib = require('zlib'),
     fs = require('fs');
 
-// JSON file containing AWS API credentials.
+// Make sure AWS credentials are loaded.
 AWS.config.loadFromPath('./config.json');
 
-// Set the client to be used for the upload.
-s3Stream.client(new AWS.S3());
+// Initialize a stream client.
+var s3Stream = require('../lib/s3-upload-stream.js')(new AWS.S3());
 
 // Create the streams
 var read = fs.createReadStream('./video.mp4');
 var compress = zlib.createGzip();
-var upload = new s3Stream.upload({
-  "Bucket": "storydesk",
+var upload = s3Stream.upload({
+  "Bucket": "bucket",
   "Key": "video.mp4.gz"
 });

lib/s3-upload-stream.js

Lines changed: 3 additions & 4 deletions
@@ -16,7 +16,6 @@ function Client(client) {
 
 // Generate a writeable stream which uploads to a file on S3.
 Client.prototype.upload = function (destinationDetails, sessionDetails) {
-
   var cachedClient = this.cachedClient;
   var e = new events.EventEmitter();
 
@@ -29,7 +28,7 @@ Client.prototype.upload = function (destinationDetails, sessionDetails) {
 
   // Data pertaining to the overall upload.
   // If resumable parts are passed in, they must be free of gaps.
-  var multipartUploadID = sessionDetails.UploadId;
+  var multipartUploadID = sessionDetails.UploadId ? sessionDetails.UploadId : null;
   var partNumber = sessionDetails.Parts ? (sessionDetails.Parts.length + 1) : 1;
   var partIds = sessionDetails.Parts || [];
   var receivedSize = 0;
@@ -363,11 +362,11 @@ Client.client = function (options) {
   return Client.globalClient;
 };
 
-Client.upload = function (destinationDetails) {
+Client.upload = function (destinationDetails, sessionDetails) {
   if (!Client.globalClient) {
     throw new Error('Must configure an S3 client before attempting to create an S3 upload stream.');
   }
-  return Client.globalClient.upload(destinationDetails);
+  return Client.globalClient.upload(destinationDetails, sessionDetails);
 };
 
 module.exports = Client;

package.json

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 {
   "name": "s3-upload-stream",
   "description": "Writeable stream for uploading content of unknown size to S3 via the multipart API.",
-  "version": "1.0.5",
+  "version": "1.0.6",
   "author": {
     "name": "Nathan Peck",
     "email": "nathan@storydesk.com"
