
Commit 238466b

mcollina authored and italoacasas committed
doc: handle backpressure when write() returns false
The doc specified that writable.write() was advisory only. However,
ignoring that value might lead to memory leaks. This PR specifies that
behavior. Moreover, it adds an example on how to listen for the 'drain'
event correctly.

See: f347dad

PR-URL: #10631
Reviewed-By: Colin Ihrig <[email protected]>
Reviewed-By: Sam Roberts <[email protected]>
Reviewed-By: Evan Lucas <[email protected]>
Reviewed-By: James M Snell <[email protected]>
Reviewed-By: Joyee Cheung <[email protected]>
1 parent ec226a2 commit 238466b

File tree

1 file changed: +40 -3 lines changed


doc/api/stream.md

+40 -3
@@ -443,9 +443,46 @@ first argument. To reliably detect write errors, add a listener for the
 The return value is `true` if the internal buffer does not exceed
 `highWaterMark` configured when the stream was created after admitting `chunk`.
 If `false` is returned, further attempts to write data to the stream should
-stop until the [`'drain'`][] event is emitted. However, the `false` return
-value is only advisory and the writable stream will unconditionally accept and
-buffer `chunk` even if it has not been allowed to drain.
+stop until the [`'drain'`][] event is emitted.
+
+While a stream is not draining, calls to `write()` will buffer `chunk`, and
+return false. Once all currently buffered chunks are drained (accepted for
+delivery by the operating system), the `'drain'` event will be emitted.
+It is recommended that once `write()` returns false, no more chunks be written
+until the `'drain'` event is emitted. While calling `write()` on a stream that
+is not draining is allowed, Node.js will buffer all written chunks until
+maximum memory usage occurs, at which point it will abort unconditionally.
+Even before it aborts, high memory usage will cause poor garbage collector
+performance and high RSS (which is not typically released back to the system,
+even after the memory is no longer required). Since TCP sockets may never
+drain if the remote peer does not read the data, writing to a socket that is
+not draining may lead to a remotely exploitable vulnerability.
+
+Writing data while the stream is not draining is particularly
+problematic for a [Transform][], because the `Transform` streams are paused
+by default until they are piped or a `'data'` or `'readable'` event handler
+is added.
+
+If the data to be written can be generated or fetched on demand, it is
+recommended to encapsulate the logic into a [Readable][] and use
+[`stream.pipe()`][]. However, if calling `write()` is preferred, it is
+possible to respect backpressure and avoid memory issues using
+the [`'drain'`][] event:
+
+```js
+function write(data, cb) {
+  if (!stream.write(data)) {
+    stream.once('drain', cb);
+  } else {
+    process.nextTick(cb);
+  }
+}
+
+// Wait for cb to be called before doing any other write.
+write('hello', () => {
+  console.log('write completed, do more writes now');
+});
+```
 
 A Writable stream in object mode will always ignore the `encoding` argument.
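The added example writes a single chunk. The same pattern scales to long
sequences by writing until `write()` returns `false` and resuming only after
`'drain'`. A minimal sketch, assuming `writer` is any Writable stream (the
function name and chunk count are illustrative):

```js
// Write `data` many times, pausing whenever the internal buffer is full
// and resuming when the stream emits 'drain'.
function writeOneMillionTimes(writer, data, encoding, callback) {
  let i = 1000000;
  write();
  function write() {
    let ok = true;
    while (i > 0 && ok) {
      i--;
      if (i === 0) {
        // Last write: pass the callback so the caller knows we are done.
        ok = writer.write(data, encoding, callback);
      } else {
        // Check the return value to decide whether to keep writing
        // or wait for 'drain'.
        ok = writer.write(data, encoding);
      }
    }
    if (i > 0) {
      // Had to stop early; write the rest once the buffer has drained.
      writer.once('drain', write);
    }
  }
}
```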

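As the added text notes, a `Transform` stays paused until its readable side is
consumed, so unpiped writes simply accumulate in its buffers. A small sketch
illustrating this, using a hypothetical uppercasing transform piped to
`process.stdout`:

```js
const { Transform } = require('stream');

const upper = new Transform({
  transform(chunk, encoding, callback) {
    // Push the uppercased chunk to the readable side.
    callback(null, chunk.toString().toUpperCase());
  }
});

// Without this pipe (or a 'data'/'readable' listener), chunks written to
// `upper` would keep buffering and write() would eventually return false.
upper.pipe(process.stdout);

upper.write('hello\n');
upper.end('world\n');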
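
For data that can be generated or fetched on demand, the added text recommends
wrapping the generation in a Readable and letting `pipe()` manage
backpressure. A minimal sketch, assuming `dest` is any Writable destination
(socket, file stream, etc.) and the chunk count is illustrative:

```js
const { Readable } = require('stream');

let i = 0;
const source = new Readable({
  read() {
    // pipe() pauses this source whenever dest.write() returns false and
    // resumes it on 'drain', so no manual bookkeeping is needed here.
    if (i < 1000) {
      this.push(`chunk ${i++}\n`);
    } else {
      this.push(null); // No more data.
    }
  }
});

source.pipe(dest);
```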