# readable-stream

This module provides the new Stream base classes introduced in Node v0.10, for use in Node v0.8. You can use it to have programs that have to work with node v0.8, while being forward-compatible for v0.10 and beyond. When you drop support for v0.8, you can remove this module, and only use the native streams.

This is almost exactly the same codebase as appears in Node v0.10. However:

* The exported object is actually the Readable class.

Other than that, the API is the same as `require('stream')` in v0.10, so the API docs are reproduced below.

## Streams

A stream is an abstract interface implemented by various objects in Node. For example a request to an HTTP server is a stream, as is stdout. Streams are readable, writable, or both. All streams are instances of EventEmitter.

You can load the Stream base classes by doing `require('stream')`. There are base classes provided for Readable streams, Writable streams, Duplex streams, and Transform streams.

## Compatibility

In earlier versions of Node, the Readable stream interface was simpler, but also less powerful and less useful. In "old mode", data is emitted as soon as it is available, using a `'data'` event, rather than being buffered for consumption via the `read()` method.

In Node v0.10, the Readable class described below was added. For backwards compatibility with older Node programs, Readable streams switch into "old mode" when a `'data'` event handler is added, or when the `pause()` or `resume()` methods are called. In addition to new Readable streams switching into old-mode, pre-v0.10 style streams can be wrapped in a Readable class using the `wrap()` method. (See below.)

The effect is that, even if you are not using the new `read()` method and `'readable'` event, you no longer have to worry about losing `'data'` chunks. Most programs will continue to function normally. However, this introduces an edge case in the following conditions:

* No `'data'` event handler is added.
* The `pause()` and `resume()` methods are never called.
* The stream is not piped to any destination.

For example, consider the following code:

```javascript
// WARNING!  BROKEN!
var net = require('net');

net.createServer(function(socket) {

  // we add an 'end' method, but never consume the data
  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });

}).listen(1337);
```

In versions of node prior to v0.10, the incoming message data would be simply discarded. However, in Node v0.10 and beyond, the socket will remain paused forever. The workaround in this situation is to call the `resume()` method to trigger "old mode" behavior:

```javascript
var net = require('net');

net.createServer(function(socket) {

  socket.on('end', function() {
    socket.end('I got your message (but didnt read it)\n');
  });

  // start the flow of data, discarding it.
  socket.resume();

}).listen(1337);
```

## Readable Stream

A Readable Stream has the following methods, members, and events.

Note that `stream.Readable` is an abstract class designed to be extended with an underlying implementation of the `_read(size)` method. (See below.)

The Readable class works by putting data into a read queue to be pulled out later by calling the `read()` method when the `'readable'` event fires.

In classes that extend the Readable class, make sure to call the constructor so that the buffering settings can be properly initialized.

### Event: 'readable'

When there is data ready to be consumed, this event will fire. When this event emits, call the `read()` method to consume the data.

### Event: 'data'

The `'data'` event emits either a Buffer (by default) or a string if `setEncoding()` was used.

Note that adding a `'data'` event listener will switch the Readable stream into "old mode", where data is emitted as soon as it is available, rather than waiting for you to call `read()` to consume it.

### Event: 'end'

Emitted when the stream has received an EOF (FIN in TCP terminology). Indicates that no more `'data'` events will happen. If the stream is also writable, it may be possible to continue writing.

### Event: 'close'

Emitted when the underlying resource (for example, the backing file descriptor) has been closed. Not all streams will emit this.

### Event: 'error'

Emitted if there was an error receiving data.

### readable.read([size])

Note: This function SHOULD be called by Readable stream users.

Call this method to consume data once the `'readable'` event is emitted.

The `size` argument will set a minimum number of bytes that you are interested in. If there is no data to consume, or if there are fewer bytes in the internal buffer than the `size` argument, then `null` is returned, and a future `'readable'` event will be emitted when more is available. If `size` is not set, then the entire content of the internal buffer is returned.

Calling `stream.read(0)` will always return `null`, and will trigger a refresh of the internal buffer, but otherwise be a no-op.

### readable._read(size)

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Readable class methods only.

All Readable stream implementations must provide a `_read` method to fetch data from the underlying resource. This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

When data is available, put it into the read queue by calling `readable.push(chunk)`. If `push` returns `false`, then you should stop reading. When `_read` is called again, you should start pushing more data.

The `size` argument is advisory. Implementations where a "read" is a single call that returns data can use this to know how much data to fetch. Implementations where that is not relevant, such as TCP or TLS, may ignore this argument, and simply provide data whenever it becomes available. There is no need, for example, to "wait" until `size` bytes are available before calling `stream.push(chunk)`.

### readable.push(chunk)

* return {Boolean} Whether or not more pushes should be performed.

Note: This function should be called by Readable implementors, NOT by consumers of Readable subclasses.

The `push()` method will explicitly insert some data into the read queue. If it is called with `null` then it will signal the end of the data. The `_read()` function will not be called again until at least one `push(chunk)` call is made. If no data is available, then you MAY call `push('')` (an empty string) to allow a future `_read` call, without adding any data to the queue.

In some cases, you may be wrapping a lower-level source which has some sort of pause/resume mechanism, and a data callback. In those cases, you could wrap the low-level source object by doing something like this:

```javascript
// source is an object with readStop() and readStart() methods,
// and an `ondata` member that gets called when it has data, and
// an `onend` member that gets called when the data is over.

util.inherits(SourceWrapper, Readable);

function SourceWrapper(options) {
  Readable.call(this, options);

  this._source = getLowlevelSourceObject();
  var self = this;

  // Every time there's data, we push it into the internal buffer.
  this._source.ondata = function(chunk) {
    // if push() returns false, then we need to stop reading from source
    if (!self.push(chunk))
      self._source.readStop();
  };

  // When the source ends, we push the EOF-signaling `null` chunk
  this._source.onend = function() {
    self.push(null);
  };
}

// _read will be called when the stream wants to pull more data in.
// the advisory size argument is ignored in this case.
SourceWrapper.prototype._read = function(size) {
  this._source.readStart();
};
```

### readable.unshift(chunk)

This is the corollary of `readable.push(chunk)`. Rather than putting the data at the end of the read queue, it puts it at the front of the read queue.

This is useful in certain use-cases where a stream is being consumed by a parser, which needs to "un-consume" some data that it has optimistically pulled out of the source. For example:

```javascript
// A parser for a simple data protocol.
// The "header" is a JSON object, followed by 2 \n characters, and
// then the message body.
//
// Note: This can be done more simply as a Transform stream.  See below.

var Readable = require('stream').Readable;
var util = require('util');

util.inherits(SimpleProtocol, Readable);

function SimpleProtocol(source, options) {
  if (!(this instanceof SimpleProtocol))
    return new SimpleProtocol(source, options);

  Readable.call(this, options);
  this._inBody = false;
  this._raw = '';
  this.header = null;

  // source is a readable stream, such as a socket or file
  this._source = source;

  var self = this;
  source.on('end', function() {
    self.push(null);
  });

  // give it a kick whenever the source is readable.
  // read(0) will not consume any bytes.
  source.on('readable', function() {
    self.read(0);
  });
}

SimpleProtocol.prototype._read = function(n) {
  if (!this._inBody) {
    var chunk = this._source.read();

    // if the source doesn't have data, we don't have data yet.
    if (chunk === null)
      return this.push('');

    this._raw += chunk.toString();
    var split = this._raw.indexOf('\n\n');
    if (split === -1) {
      // still waiting for the \n\n; try again when more data arrives.
      this.push('');
    } else {
      this._inBody = true;
      this.header = JSON.parse(this._raw.slice(0, split));

      // now, because we got some extra data, unshift the rest
      // back into the read queue so that our consumer will see it.
      this.unshift(this._raw.slice(split + 2));

      // and let them know that we are done parsing the header.
      this.emit('header', this.header);
    }
  } else {
    // from there on, just provide the data to our consumer.
    // careful not to push(null), since that would indicate EOF.
    var chunk = this._source.read();
    if (chunk) this.push(chunk);
  }
};

// Now parser is a readable stream that will emit 'header'
// with the parsed header data.
```

### readable.setEncoding(encoding)

Makes the `'data'` event emit a string instead of a Buffer. `encoding` can be `'utf8'`, `'utf16le'` (`'ucs2'`), `'ascii'`, or `'hex'`. The encoding can also be set by specifying an `encoding` field to the constructor.

### readable.wrap(stream)

If you are using an older Node library that emits `'data'` events and has a `pause()` method that is advisory only, then you can use the `wrap()` method to create a Readable stream that uses the old stream as its data source.

### readable.pause()

Switches the readable stream into "old mode" and ceases the flow of data. No `'data'` events are emitted while the stream is in a paused state.

### readable.resume()

Switches the readable stream into "old mode" and resumes the incoming `'data'` events after a `pause()`.

### readable.pipe(destination, [options])

Connects this readable stream to `destination` WriteStream. Incoming data on this stream gets written to `destination`. This function returns the destination stream. Properly manages back-pressure so that a slow destination will not be overwhelmed by a fast readable stream.

For example, emulating the Unix cat command:

```javascript
process.stdin.pipe(process.stdout);
```

By default `end()` is called on the destination when the source stream emits `end`, so that `destination` is no longer writable. Pass `{ end: false }` as `options` to keep the destination stream open. This keeps `writer` open so that "Goodbye" can be written at the end:

```javascript
reader.pipe(writer, { end: false });
reader.on('end', function() {
  writer.end('Goodbye\n');
});
```

Note that `process.stderr` and `process.stdout` are never closed until the process exits, regardless of the specified options.

### readable.unpipe([destination])

Undo a previously established `pipe()`. If no `destination` is provided, then all previously established pipes are removed.

## Writable Stream

A Writable Stream has the following methods, members, and events.

Note that `stream.Writable` is an abstract class designed to be extended with an underlying implementation of the `_write(chunk, encoding, cb)` method.

In classes that extend the Writable class, make sure to call the constructor so that the buffering settings can be properly initialized.

### writable.write(chunk, [encoding], [callback])

Writes `chunk` to the stream. Returns `true` if the data has been flushed to the underlying resource. Returns `false` to indicate that the buffer is full, and the data will be sent out in the future. The `'drain'` event will indicate when the buffer is empty again.

The specifics of when `write()` will return `false` is determined by the `highWaterMark` option provided to the constructor.

### writable._write(chunk, encoding, callback)

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Writable class methods only.

All Writable stream implementations must provide a `_write` method to send data to the underlying resource. Call the callback using the standard `callback(error)` pattern to signal that the write completed successfully or with an error.

If the `decodeStrings` flag is set in the constructor options, then `chunk` may be a string rather than a Buffer, and `encoding` will indicate the sort of string that it is. This is to support implementations that have an optimized handling for certain string data encodings. If you do not explicitly set the `decodeStrings` option to `false`, then you can safely ignore the `encoding` argument, and assume that `chunk` will always be a Buffer.

### writable.end([chunk], [encoding], [callback])

Call this method to signal the end of the data being written to the stream.

### Event: 'drain'

Emitted when the stream's write queue empties and it's safe to write without buffering again. Listen for it when `stream.write()` returns `false`.

### Event: 'finish'

When `end()` is called and there are no more chunks to write, this event is emitted.

### Event: 'close'

Emitted when the underlying resource (for example, the backing file descriptor) has been closed.

### Event: 'pipe'

Emitted when the stream is passed to a readable stream's `pipe` method.

### Event: 'unpipe'

Emitted when a previously established `pipe()` is removed using the source Readable stream's `unpipe()` method.

## Duplex Stream

A "duplex" stream is one that is both Readable and Writable, such as a TCP socket connection.

Note that `stream.Duplex` is an abstract class designed to be extended with an underlying implementation of the `_read(size)` and `_write(chunk, encoding, callback)` methods as you would with a Readable or Writable stream class.

Since JavaScript doesn't have multiple prototypal inheritance, this class prototypally inherits from Readable, and then parasitically from Writable. It is thus up to the user to implement both the lowlevel `_read(n)` method as well as the lowlevel `_write(chunk, encoding, cb)` method on extension duplex classes.

In classes that extend the Duplex class, make sure to call the constructor so that the buffering settings can be properly initialized.

## Transform Stream

A "transform" stream is a duplex stream where the output is causally connected in some way to the input, such as a zlib stream or a crypto stream.

There is no requirement that the output be the same size as the input, the same number of chunks, or arrive at the same time. For example, a Hash stream will only ever have a single chunk of output which is provided when the input is ended. A zlib stream will produce output that is either much smaller or much larger than its input.

Rather than implement the `_read()` and `_write()` methods, Transform classes must implement the `_transform()` method, and may optionally also implement the `_flush()` method. (See below.)

In classes that extend the Transform class, make sure to call the constructor so that the buffering settings can be properly initialized.

### transform._transform(chunk, encoding, callback)

Note: This function MUST NOT be called directly. It should be implemented by child classes, and called by the internal Transform class methods only.

All Transform stream implementations must provide a `_transform` method to accept input and produce output. `_transform` should do whatever has to be done in this specific Transform class, to handle the bytes being written, and pass them off to the readable portion of the interface. Do asynchronous I/O, process things, and so on.

Call `transform.push(outputChunk)` 0 or more times to generate output from this input chunk, depending on how much data you want to output as a result of this chunk. Call the callback function only when the current chunk is completely consumed. Note that there may or may not be output as a result of any particular input chunk.

### transform._flush(callback)

Note: This function MUST NOT be called directly. It MAY be implemented by child classes, and if so, will be called by the internal Transform class methods only.

In some cases, your transform operation may need to emit a bit more data at the end of the stream. For example, a Zlib compression stream will store up some internal state so that it can optimally compress the output. At the end, however, it needs to do the best it can with what is left, so that the data will be complete.

In those cases, you can implement a `_flush` method, which will be called at the very end, after all the written data is consumed, but before emitting `end` to signal the end of the readable side. Just like with `_transform`, call `transform.push(chunk)` zero or more times, as appropriate, and call `callback` when the flush operation is complete.

This method is prefixed with an underscore because it is internal to the class that defines it, and should not be called directly by user programs. However, you are expected to override this method in your own extension classes.

The example above of a simple protocol parser can be implemented much more simply by using the higher level Transform stream class. In that version, rather than providing the input as an argument, it would be piped into the parser, which is a more idiomatic Node stream approach.

## PassThrough Stream

This is a trivial implementation of a Transform stream that simply passes the input bytes across to the output. Its purpose is mainly for examples and testing, but there are occasionally use cases where it can come in handy.