ReadableStream async iterator


I agree with auto-cancel as the default return() behavior.

@mcollina see the above comments for API discussion so far. I think it would help to define a best practice for how iterators should be used. The shape under discussion is stream.iterator(opts).

If you iterate to the end, should we release the lock for you? On the other hand, it's kind of annoying to make people manually acquire a new reader and call releaseLock(). I like that; I think we should have auto-release.

break invokes return(), which per the above design will cancel the stream. It is also easy to opt out of, even without providing .iterator({ preventCancel: true }). That said, I am not objecting to a .iterator({ preventCancel: true }); I'm just saying that we haven't had to use it in our own code.

Chromium: https://bugs.chromium.org/p/chromium/issues/detail?id=929585
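A small sketch of the behavior being discussed (not spec text; `makeStream` and `readFirstChunk` are illustrative names): with auto-cancel, breaking out of a for-await loop invokes the iterator's return(), which cancels the underlying stream.

```javascript
// Sketch: with auto-cancel, early exit from `for await` invokes
// return(), which cancels the underlying ReadableStream.
function makeStream(onCancel) {
  return new ReadableStream({
    start(controller) {
      controller.enqueue('a');
      controller.enqueue('b');
      controller.close();
    },
    cancel(reason) {
      onCancel(reason); // reached via return() when the loop exits early
    },
  });
}

async function readFirstChunk(stream) {
  for await (const chunk of stream) {
    return chunk; // early exit -> return() -> cancel()
  }
}
```

After `readFirstChunk` resolves, the source's cancel() has already run, since the loop awaits return() before resuming.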

It will need to be benchmarked again in the future; V8 is shipping a lot of optimizations for promises, generators, and async iterators.

Summing up some IRC discussion: in a large code base using our own stream-to-async-iterator transition we did this and are pretty happy with it as a default.

WebKit: https://bugs.webkit.org/show_bug.cgi?id=194379

@jakearchibald @domenic I would like Node.js Readable to be as close as possible to whatwg Readable when used with for await. Then stream[Symbol.asyncIterator] could alias stream.iterator. We hope to solicit feedback through a Node.js foundation survey which we hope to send in about a month.

One way to opt out of cancellation is wrapping the stream with another stream for the purpose of that iteration and not forwarding cancellation. This is a niche use case, and there are other ways of doing it.

I've also been using async iterators with Node streams, which have been working well, though I admit a lot less than I'd like to. I have been trying (as in emailing relevant parties) to solicit feedback but haven't been able to get it.

@jakearchibald I added async iterators based on what I thought would make sense for Node.js streams; we also did a couple of implementations and this turned out to be more performant.*
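The opt-out technique mentioned above can be sketched like this (`iterateWithoutCancel` is an illustrative name, not a proposed API): wrap the stream's async iterator in an object that forwards next() but deliberately omits return(), so breaking out of the loop never cancels the stream.

```javascript
// Wrap the async iterator without forwarding return(), so breaking
// out of the loop does not cancel the underlying stream.
function iterateWithoutCancel(stream) {
  const inner = stream[Symbol.asyncIterator]();
  return {
    next: () => inner.next(),
    // No return() here: cancellation is not forwarded to the inner
    // iterator, and the inner reader keeps its lock.
    [Symbol.asyncIterator]() { return this; },
  };
}
```

Note the trade-off: the stream stays locked by the inner reader, which is exactly the kind of state the auto-release discussion is about.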

"BYOB" is short for "bring your own buffer".

I really look forward to this!

This is somewhat of an edge case, and just governs what happens if someone tries to acquire a new iterator or lock after iterating to the end.

It's an experimental feature that is shipping for the first time in 2 weeks (it will print a warning and notify the users).

Should the async iterator's return() (which is called, remember, when you break out of a for-await-of loop via return/break/throw) cancel the reader, or just release the lock? Assuming we have auto-cancel and auto-close, there are extremely limited extra capabilities you'd gain from auto-release. Maybe auto-release has some aesthetic benefits I haven't considered.

Not auto-cancelling leads to simpler code when your input has a distinct header and body.

At least in tests we often check whether something's errored or closed by doing .getReader().closed.then(); we shouldn't break that, I think.
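The test idiom quoted above, spelled out as a sketch (`observeClosed` is an illustrative helper): acquire a reader purely to observe the stream's closed promise. If iteration auto-releases its lock, this keeps working after a loop finishes.

```javascript
// Acquire a throwaway reader just to observe closure or error.
function observeClosed(stream) {
  return stream.getReader().closed.then(
    () => 'closed',   // closed promise resolved
    () => 'errored',  // closed promise rejected
  );
}
```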
The async iterator will consume the ReadableStream until it terminates. By default, if the async iterator exits early (via a break, a return, or a throw), the ReadableStream will be closed, and readableStream.locked is true while the async iterator is active.

Wrapper object seems like a clear winner. I prefer .iterator() for ReadableStream. We also do this and find it quite useful.

We can definitely change any part of what I did in implementation before it gets out of experimental (ideally before Node 10 goes LTS in October, but even later if things are in flux).

I forgot to file implementer bugs yesterday.

Firefox: https://bugzilla.mozilla.org/show_bug.cgi?id=1525852

Uhm, at the moment it's called .getIterator() instead of .iterator().

I also agree that it should be auto-cancelled and auto-closed, as (async) iterators most often are single-consumer.

Given the precedent that is somewhat being established in WICG/kv-storage#6, I am wondering whether we should rename .iterator() to .values(), hmm. I think people are likely to see .values() and expect it to be a shortcut for slurping the whole stream.

I didn't like the sound of auto-cancel, but given @domenic's code example it sounds like the better thing to do.

On the one hand, that seems presumptuous.

@devsnek has generously volunteered to help with the spec and tests here, and I'm very excited.

But does anyone think the former would be better? After realizing we'd have to make releaseLock() return an empty object, I'm leaning toward the latter.

This would make it easier to move code between Node.js and the browser.

This example creates a simple ReadableStream that pushes the current performance.now() timestamp once every second, forever.
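The timestamp example mentioned above can be sketched like this (the interval is a parameter here so it can be exercised quickly; the text describes once per second, and this sketch ignores backpressure for brevity):

```javascript
// A ReadableStream that pushes a performance.now() timestamp on an
// interval, forever, until the consumer cancels.
function timestampStream(intervalMs = 1000) {
  let timer;
  return new ReadableStream({
    start(controller) {
      // Enqueue a timestamp on every tick (backpressure ignored here).
      timer = setInterval(() => controller.enqueue(performance.now()), intervalMs);
    },
    cancel() {
      clearInterval(timer); // stop the producer when the consumer cancels
    },
  });
}
```

With auto-cancel as the default, breaking out of a for-await loop over this stream also stops the interval.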

The ReadableStream object supports the async iterator protocol using for await syntax.

One thing that is not obvious is whether we should close the stream when there is a break. Is there somewhere we can read up on the justifications and tradeoffs for the pattern Node is shipping?

Mixing iteration with using readers is not something I'd want to encourage.

You can break out of the first loop when you get to the end of the header, and then have a second loop which processes the body.

Excellent!
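The header/body pattern above can be sketched as two loops over the same stream, with preventCancel so the break after the header leaves the stream open (`parseMessage` and the 'END-HEADER' delimiter are made up for illustration):

```javascript
// First loop reads the header and breaks without cancelling; second
// loop consumes the remaining chunks as the body.
async function parseMessage(stream) {
  const header = [];
  for await (const chunk of stream.values({ preventCancel: true })) {
    header.push(chunk);
    if (chunk === 'END-HEADER') break; // lock released, stream not cancelled
  }
  const body = [];
  for await (const chunk of stream.values()) {
    body.push(chunk);
  }
  return { header, body };
}
```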

ReadableStream should be an async iterable.

https://jakearchibald.com/2017/async-iterators-and-generators/#making-streams-iterate is an article on this, including a basic implementation.

If you want to rename it, let me know in #980. Could I be involved in the process?

(I'm adding this mostly as a tracking bug after I asked about it in the wrong place.)

Node.js is looking into supporting async iterables as a way to stream data, and it would be great if fetch (or the readable-stream part of fetch) supported the same interface. Also, it would improve code portability.

I'm not sure it's "obvious" that it should cancel the stream, but it's probably better for the common case, and @jakearchibald had a good idea for how we could allow an escape hatch for the uncommon case.

I don't think I favour auto-release. Use a reader to observe that the stream is closed, which it always will be? Unwise.

I.e., if you break out of a loop, should we assume you'll never want to continue looping from the current point, and clean up for you?

I guess my implementation of return should be correct then, thanks!

I don't think I ever cancel iterating the stream, though; maybe only when throwing an exception in a loop. That seems like the most common scenario, and it doesn't make it impossible to get the multi-consumer scenario to work.

.iterator() is more specific about the functionality it provides.

There's no reason for any consumer to forever hold a lock and not have any way of giving it back, even if the stream is definitely closed.
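For completeness, here is a hedged sketch of how for-await composes with pipeThrough, which keeps the common single-consumer case simple (`upperCaseStream` and `collectUpperCase` are illustrative names):

```javascript
// Transform chunks with a TransformStream, then consume the readable
// side as an async iterator.
function upperCaseStream() {
  return new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(String(chunk).toUpperCase());
    },
  });
}

async function collectUpperCase(source) {
  const out = [];
  for await (const chunk of source.pipeThrough(upperCaseStream())) {
    out.push(chunk);
  }
  return out;
}
```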

The encoding supported by the TextEncoderStream instance.

Is this API spec'd yet?

Here is the implementation that I currently use to convert a ReadableStream to an async iterator, and it works well enough for what I need from it.

The rest of the time, auto-cancel is cleaner and less error-prone. I think the API should be optimized for single consumer and require a small wrapper for cases where it should not automatically close.
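The commenter's snippet isn't reproduced in this excerpt; the following is a plausible sketch of such a conversion (not their exact code), built on a default reader:

```javascript
// Expose a ReadableStream as an async iterator via a default reader.
async function* streamToAsyncIterator(stream) {
  const reader = stream.getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) return;
      yield value;
    }
  } finally {
    // Runs on normal completion, break, or throw. Releasing here is
    // the "auto-release" behavior debated above; swap in
    // reader.cancel() for auto-cancel semantics instead.
    reader.releaseLock();
  }
}
```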