locale/en/knowledge/advanced/buffers/how-to-use-buffers.md

## Why Buffers?

Pure JavaScript, while great with Unicode-encoded strings, does not handle straight binary data very well. This is fine in the browser, where most data is in the form of strings. However, Node.js servers also have to deal with TCP streams and reading and writing to the filesystem, both of which make it necessary to deal with purely binary streams of data.

One way to handle this problem is to just use strings *anyway*, which is exactly what Node.js did at first. However, this approach is extremely problematic to work with; it's slow, forces you to work with an API designed for strings rather than binary data, and has a tendency to break in strange and mysterious ways.

`Buffer.from(str, encoding)` initializes the buffer to a binary encoding of the first string, as specified by the second argument.

Given that there is already a buffer created:

```console
> var buffer = Buffer.alloc(16)
```

we can start writing strings to it:

```console
> buffer.write("Hello", "utf-8")
5
```
The first argument to `buffer.write` is the string to write to the buffer, and the second argument is the string encoding, which defaults to utf-8.

`buffer.write` returned 5. This means that we wrote five bytes to the buffer. The fact that the string "Hello" is also 5 characters long is coincidental, since each character *just happened* to be 8 bits apiece. This is useful if you want to complete the message:

```console
> buffer.write(" world!", 5, "utf-8")
7
```
When `buffer.write` has 3 arguments, the second argument indicates an offset, or the index of the buffer to start writing at.

Probably the most common way to read buffers is to use the `toString` method, since many buffers contain text:

```console
> buffer.toString('utf-8')
'Hello world!\u0000�k\t'
```

Again, the first argument is the encoding. In this case, you can see that the entire buffer was not used! Luckily, because we know how many bytes we've written to the buffer, we can simply add more arguments to "stringify" the slice that's actually interesting:

```console
> buffer.toString("utf-8", 0, 12)
'Hello world!'
```

You can also set individual bytes by using an array-like syntax:

```console
> buffer[12] = buffer[11];
33
> buffer[13] = "1".charCodeAt();
49
```

`Buffer.isBuffer(object)` checks to see if `object` is a buffer, similar to `Array.isArray`.
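
For instance, checking in the REPL against the buffer created above:

```console
> Buffer.isBuffer(buffer)
true
> Buffer.isBuffer("Hello")
false
```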

With `Buffer.byteLength(string, encoding)`, you can check the number of bytes required to encode a string with a given encoding (which defaults to utf-8). This length is *not* the same as string length, since many characters require more bytes to encode. For example:

```console
> var snowman = "☃";
> snowman.length
1
> Buffer.byteLength(snowman)
3
```

The unicode snowman is only one character, but takes 3 entire bytes to encode!

`buffer.length` is the length of your buffer, and represents how much memory is allocated. It is not the same as the size of the buffer's contents, since a buffer may be half-filled. For example:

```console
> var buffer = Buffer.alloc(16)
> buffer.write(snowman)
3
> buffer.length
16
```

In this example, the contents written to the buffer only consist of three bytes (the encoded snowman), but the buffer's `length` is still 16, since that is how much memory was allocated.

`buffer.copy` allows one to copy the contents of one buffer onto another. The first argument is the target buffer on which to copy the contents of `buffer`, and the rest of the arguments allow for copying only a subsection of the source buffer to somewhere in the middle of the target buffer. For example:

```console
> var frosty = Buffer.alloc(24)
> var snowman = Buffer.from("☃", "utf-8")
> frosty.write("Happy birthday! ", "utf-8")
16
> snowman.copy(frosty, 16)
3
> frosty.toString("utf-8", 0, 19)
'Happy birthday! ☃'
```

In this example, I copied the "snowman" buffer, which contains a 3 byte long character, into the "frosty" buffer, right after the 16 bytes that had already been written. Since the snowman character takes 3 bytes, the result occupies the first 19 bytes of the buffer.

This method's API is generally the same as that of `Array.prototype.slice`, but with one very important difference: the slice is **not** a new buffer and merely references a subset of the memory space. *Modifying the slice will also modify the original buffer*! For example:

```console
> var puddle = frosty.slice(16, 19)
> puddle.toString()
'☃'
> puddle.write("___")
3
> frosty.toString("utf-8", 0, 19)
'Happy birthday! ___'
```

---

The function `fs.createWriteStream()` creates a writable stream in a very simple manner. After a call to `fs.createWriteStream()` with the filepath, you have a writable stream to work with. It turns out that the response (as well as the request) objects are streams. So we will stream the `POST` data to the file `output`. Since the code is simple enough, it is pretty easy just to read through it and comment why each line is necessary.

```javascript
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // Open up a writable stream to the file `output`.
  var writeStream = fs.createWriteStream('./output');

  // The request is a readable stream, so we can stream the
  // POST data straight into the file.
  req.pipe(writeStream);

  // Once the request has been fully received, respond to the
  // client. (The status line and body here are illustrative.)
  req.on('end', function () {
    res.writeHead(200, { 'content-type': 'text/plain' });
    res.end('Saved to output\n');
  });

  // Handle errors on the writable stream.
  writeStream.on('error', function (err) {
    console.log(err);
  });
}).listen(8080); // The port is illustrative as well.
```

locale/en/knowledge/advanced/streams/how-to-use-stream-pipe.md

If you've been using Node.js for a while, you've definitely run into streams. HTTP connections are streams, open files are streams; stdin, stdout, and stderr are all streams as well. A 'stream' is Node's I/O abstraction - if you feel like you still need to understand them better, you can read more about them [here](https://nodejs.org/api/stream.html#stream_stream).

Streams make for quite a handy abstraction, and there's a lot you can do with them - as an example, let's take a look at `stream.pipe()`, the method used to take a readable stream and connect it to a writable stream. Suppose we want to spawn a `node` child process and pipe our stdout and stdin to its corresponding stdout and stdin.

```javascript
#!/usr/bin/env node

var child_process = require('child_process');

// Spawn the Node.js REPL as a child process.
var myREPL = child_process.spawn('node');

// Pipe our stdin into the REPL's stdin, and the REPL's stdout
// back out to our stdout.
process.stdin.pipe(myREPL.stdin);
myREPL.stdout.pipe(process.stdout);

// Listen for the child's exit event, or the program will hang.
myREPL.on('exit', function (code) {
  process.exit(code);
});
```

There you have it - spawn the Node.js REPL as a child process, and pipe your stdin and stdout to its stdin and stdout. Make sure to listen for the child's 'exit' event, too, or else your program will just hang there when the REPL exits.

Another use for `stream.pipe()` is file streams. In Node.js, `fs.createReadStream()` and `fs.createWriteStream()` are used to create a stream to an open file descriptor. Now let's look at how one might use `stream.pipe()` to write to a file. You'll probably recognize most of the code:

```javascript
#!/usr/bin/env node

var child_process = require('child_process');
var fs = require('fs');

var myREPL = child_process.spawn('node');

// Open a writable stream to the output file.
var myFile = fs.createWriteStream('myOutput.txt');

// Pipe stdin to the REPL as before, but also pipe both stdin and
// the REPL's stdout into the file stream.
process.stdin.pipe(myREPL.stdin);
process.stdin.pipe(myFile);
myREPL.stdout.pipe(process.stdout);
myREPL.stdout.pipe(myFile);

myREPL.on('exit', function (code) {
  process.exit(code);
});
```

With those small additions, your stdin and the stdout from your REPL will both be piped to the writable file stream you opened to 'myOutput.txt'. It's that simple - you can pipe streams to as many places as you want.

Another very important use case for `stream.pipe()` is with HTTP request and response objects. Here we have the very simplest kind of proxy:

```javascript
#!/usr/bin/env node

var http = require('http');

http.createServer(function (req, res) {
  // Forward the incoming request to the upstream server (the
  // host and port here are illustrative), then pipe the upstream
  // response straight back to the client.
  var upstream = http.request({
    host: 'localhost',
    port: 8080,
    method: req.method,
    path: req.url,
    headers: req.headers
  }, function (upstreamRes) {
    res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
    upstreamRes.pipe(res);
  });
  req.pipe(upstream);
}).listen(9000);
```

One could also use `stream.pipe()` to send incoming requests to a file for logging, or to a child process, or any one of a number of other things.
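
For instance, a rough sketch of the logging idea (the file name and port below are illustrative):

```javascript
var http = require('http');
var fs = require('fs');

http.createServer(function (req, res) {
  // Append the incoming request body to a log file.
  req.pipe(fs.createWriteStream('requests.log', { flags: 'a' }));

  req.on('end', function () {
    res.end('logged\n');
  });
}).listen(9000);
```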

Hopefully this has shown you the basics of using `stream.pipe()` to easily pass your data streams around. It's truly a powerful little trick in Node.js, and its uses are yours to explore. Happy coding, and try not to cross your streams!

locale/en/knowledge/advanced/streams/what-are-streams.md

Streams are another basic construct in Node.js that encourages asynchronous coding.

In other words, streams use events to deal with data as it happens, rather than only with a callback at the end. Readable streams emit the event `data` for each chunk of data that comes in, and an `end` event, which is emitted when there is no more data. Writable streams can be written to with the `write()` function, and closed with the `end()` function. All types of streams emit `error` events when errors arise.

As a quick example, we can write a simple version of `cp` (the Unix utility that copies files). We could do that by reading the whole file with standard filesystem calls and then writing it out to a file. Unfortunately, that requires that the whole file be read in before it can be written. A stream, by contrast, lets us write each chunk of data as soon as it has been read. In this case, writing the file isn't faster, but if we were streaming over a network or doing CPU processing on the data, then there could be measurable performance improvements.

Run this script with arguments like `node cp.js src.txt dest.txt`. This would mean, in the code below, that `process.argv[2]` is `src.txt` and `process.argv[3]` is `dest.txt`.

```javascript
var fs = require('fs');

// Open a readable stream from the source file and a writable
// stream to the destination file.
var readStream = fs.createReadStream(process.argv[2]);
var writeStream = fs.createWriteStream(process.argv[3]);

// Whenever the readable stream has a chunk of data, write it out.
readStream.on('data', function (chunk) {
  writeStream.write(chunk);
});

// Close the writable stream once the readable stream is finished.
readStream.on('end', function () {
  writeStream.end();
});

// Report errors from either stream.
readStream.on('error', function (err) {
  console.log("ERROR", err);
});

writeStream.on('error', function (err) {
  console.log("ERROR", err);
});
```

This sets up a readable stream from the source file and a writable stream to the destination file. Whenever the readable stream gets data, it is written to the writable stream. Finally, the writable stream is closed when the readable stream is finished.

It would have been better to use [pipe](/en/knowledge/advanced/streams/how-to-use-stream-pipe/), like `readStream.pipe(writeStream);`. However, to show how streams work, we have done things the long way.
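
For comparison, a pipe-based sketch of the same copy script (same `process.argv` assumptions as above):

```javascript
var fs = require('fs');

// Pipe the source file directly into the destination file.
fs.createReadStream(process.argv[2]).pipe(fs.createWriteStream(process.argv[3]));
```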