
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 07:03:29 GMT</lastBuildDate>
        <item>
            <title><![CDATA[We deserve a better streams API for JavaScript]]></title>
            <link>https://blog.cloudflare.com/a-better-web-streams-api/</link>
            <pubDate>Fri, 27 Feb 2026 06:00:00 GMT</pubDate>
            <description><![CDATA[ The Web streams API has become ubiquitous in JavaScript runtimes but was designed for a different era. Here's what a modern streaming API could (should?) look like. ]]></description>
            <content:encoded><![CDATA[ <p>Handling data in streams is fundamental to how we build applications. To make streaming work everywhere, the <a href="https://streams.spec.whatwg.org/"><u>WHATWG Streams Standard</u></a> (informally known as "Web streams") was designed to establish a common API that works across browsers and servers. It shipped in browsers, was adopted by Cloudflare Workers, Node.js, Deno, and Bun, and became the foundation for APIs like <a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API"><u>fetch()</u></a>. It's a significant undertaking, and the people who designed it were solving hard problems with the constraints and tools they had at the time.</p><p>But after years of building on Web streams – implementing them in both Node.js and Cloudflare Workers, debugging production issues for customers and runtimes, and helping developers work through far too many common pitfalls – I've come to believe that the standard API has fundamental usability and performance issues that cannot be fixed easily with incremental improvements alone. The problems aren't bugs; they're consequences of design decisions that may have made sense a decade ago, but don't align with how JavaScript developers write code today.</p><p>This post explores some of the fundamental issues I see with Web streams and presents an alternative approach, built around JavaScript language primitives, that demonstrates something better is possible.</p><p>In benchmarks, this alternative can run anywhere from 2x to <i>120x</i> faster than Web streams in every runtime I've tested it on (including Cloudflare Workers, Node.js, Deno, Bun, and every major browser). The improvements are not due to clever optimizations, but to fundamentally different design choices that more effectively leverage modern JavaScript language features. I'm not here to disparage the work that came before; I'm here to start a conversation about what can potentially come next.</p>
    <div>
      <h2>Where we're coming from</h2>
      <a href="#where-were-coming-from">
        
      </a>
    </div>
    <p>The Streams Standard was developed between 2014 and 2016 with an ambitious goal to provide "APIs for creating, composing, and consuming streams of data that map efficiently to low-level I/O primitives." Before Web streams, the web platform had no standard way to work with streaming data.</p><p>Node.js already had its own <a href="https://nodejs.org/api/stream.html"><u>streaming API</u></a> at the time that was ported to also work in browsers, but WHATWG chose not to use it as a starting point given that it is chartered to only consider the needs of Web browsers. Server-side runtimes only adopted Web streams later, after Cloudflare Workers and Deno each emerged with first-class Web streams support and cross-runtime compatibility became a priority.</p><p>The design of Web streams predates async iteration in JavaScript. The <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for-await...of"><code><u>for await...of</u></code></a> syntax didn't land until <a href="https://262.ecma-international.org/9.0/"><u>ES2018</u></a>, two years after the Streams Standard was initially finalized. This timing meant the API couldn't initially leverage what would eventually become the idiomatic way to consume asynchronous sequences in JavaScript. Instead, the spec introduced its own reader/writer acquisition model, and that decision rippled through every aspect of the API.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3X0niHShBlgF4LlpWYB7eC/f0bbf35f12ecc98a3888e6e3835acf3a/1.png" />
          </figure>
    <div>
      <h4>Excessive ceremony for common operations</h4>
      <a href="#excessive-ceremony-for-common-operations">
        
      </a>
    </div>
    <p>The most common task with streams is reading them to completion. Here's what that looks like with Web streams:</p>
            <pre><code>// First, we acquire a reader that gives an exclusive lock
// on the stream...
const reader = stream.getReader();
const chunks = [];
try {
  // Second, we repeatedly call read and await on the returned
  // promise to either yield a chunk of data or indicate we're
  // done.
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    chunks.push(value);
  }
} finally {
  // Finally, we release the lock on the stream
  reader.releaseLock();
}</code></pre>
            <p>You might assume this pattern is inherent to streaming. It isn't. The reader acquisition, the lock management, and the <code>{ value, done }</code> protocol are all just design choices, not requirements. They are artifacts of how and when the Web streams spec was written. Async iteration exists precisely to handle sequences that arrive over time, but it did not yet exist when the streams specification was written. The complexity here is pure API overhead, not fundamental necessity.</p><p>Consider the alternative approach now that Web streams do support <code>for await...of</code>:</p>
            <pre><code>const chunks = [];
for await (const chunk of stream) {
  chunks.push(chunk);
}</code></pre>
            <p>This is better in that there is far less boilerplate, but it doesn't solve everything. Async iteration was retrofitted onto an API that wasn't designed for it, and it shows. Features like <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamBYOBReader"><u>BYOB (bring your own buffer)</u></a> reads aren't accessible through iteration. The underlying complexity of readers, locks, and controllers is still there, just hidden. When something does go wrong, or when additional features of the API are needed, developers find themselves back in the weeds of the original API, trying to understand why their stream is "locked", why <code>releaseLock()</code> didn't do what they expected, or where the bottleneck is in code they don't control.</p>
    <div>
      <h4>The locking problem</h4>
      <a href="#the-locking-problem">
        
      </a>
    </div>
    <p>Web streams use a locking model to prevent multiple consumers from interleaving reads. When you call <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/getReader"><code><u>getReader()</u></code></a>, the stream becomes locked. While locked, nothing else can read from the stream directly, pipe it, or even cancel it – only the code that is actually holding the reader can.</p><p>This sounds reasonable until you see how easily it goes wrong:</p>
            <pre><code>async function peekFirstChunk(stream) {
  const reader = stream.getReader();
  const { value } = await reader.read();
  // Oops — forgot to call reader.releaseLock()
  // And the reader is no longer available when we return
  return value;
}

const first = await peekFirstChunk(stream);
// TypeError: Cannot obtain lock — stream is permanently locked
for await (const chunk of stream) { /* never runs */ }</code></pre>
            <p>Forgetting <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultReader/releaseLock"><code><u>releaseLock()</u></code></a> permanently breaks the stream. The <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/locked"><code><u>locked</u></code></a> property tells you that a stream is locked, but not why, by whom, or whether the lock is even still usable. <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/pipeTo"><u>Piping</u></a> internally acquires locks, making streams unusable during pipe operations in ways that aren't obvious.</p><p>The semantics around releasing locks with pending reads were also unclear for years. If you called <code>read()</code> but didn't await it, then called <code>releaseLock()</code>, what happened? The spec was recently clarified to cancel pending reads on lock release – but implementations varied, and code that relied on the previous unspecified behavior can break.</p><p>That said, it's important to recognize that locking in itself is not bad. It does, in fact, serve an important purpose: ensuring that applications consume or produce data in a proper, orderly fashion. The key challenge is with the original manual management of locks using APIs like <code>getReader()</code> and <code>releaseLock()</code>. With the arrival of automatic lock and reader management with async iterables, dealing with locks from the user's point of view became a lot easier.</p><p>For implementers, the locking model adds a fair amount of non-trivial internal bookkeeping. Every operation must check lock state, readers must be tracked, and the interplay between locks, cancellation, and error states creates a matrix of edge cases that must all be handled correctly.</p>
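A safer version of the <code>peekFirstChunk</code> sketch above wraps the read in <code>try/finally</code> so the lock is always released. Note that this is only a sketch: the peeked chunk is still consumed rather than put back, so a true peek would additionally need to buffer the chunk and re-deliver it to later consumers.

```javascript
// Sketch: always release the lock so the stream stays usable.
// Caveat: the peeked chunk is consumed, not put back -- a true peek
// would need to buffer it and re-deliver it to later consumers.
async function peekFirstChunk(stream) {
  const reader = stream.getReader();
  try {
    const { value } = await reader.read();
    return value;
  } finally {
    reader.releaseLock(); // without this, the stream is locked forever
  }
}

// The rest of the stream remains readable afterwards.
async function demo() {
  const stream = new ReadableStream({
    start(controller) {
      for (const chunk of ['a', 'b', 'c']) controller.enqueue(chunk);
      controller.close();
    },
  });

  const first = await peekFirstChunk(stream); // 'a'
  const rest = [];
  for await (const chunk of stream) rest.push(chunk); // ['b', 'c']
  return { first, rest };
}
```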
    <div>
      <h4>BYOB: complexity without payoff</h4>
      <a href="#byob-complexity-without-payoff">
        
      </a>
    </div>
    <p><a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamBYOBReader"><u>BYOB (bring your own buffer)</u></a> reads were designed to let developers reuse memory buffers when reading from streams, an important optimization intended for high-throughput scenarios. The idea is sound: instead of allocating new buffers for each chunk, you provide your own buffer and the stream fills it.</p><p>In practice (and yes, there are always exceptions to be found), BYOB is rarely used to any measurable benefit. The API is substantially more complex than default reads, requiring a separate reader type (<code>ReadableStreamBYOBReader</code>) and other specialized classes (e.g. <code>ReadableStreamBYOBRequest</code>), careful buffer lifecycle management, and understanding of <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer#transferring_arraybuffers"><code><u>ArrayBuffer</u></code><u> detachment</u></a> semantics. When you pass a buffer to a BYOB read, the buffer becomes detached – transferred to the stream – and you get back a different view over potentially different memory. This transfer-based model is error-prone and confusing:</p>
            <pre><code>const reader = stream.getReader({ mode: 'byob' });
const buffer = new ArrayBuffer(1024);
let view = new Uint8Array(buffer);

const result = await reader.read(view);
// 'view' should now be detached and unusable
// (though not every implementation enforces this)
// result.value is a NEW view, possibly over different memory
view = result.value; // Must reassign</code></pre>
            <p>BYOB also can't be used with async iteration or TransformStreams, so developers who want zero-copy reads are forced back into the manual reader loop.</p><p>For implementers, BYOB adds significant complexity. The stream must track pending BYOB requests, handle partial fills, manage buffer detachment correctly, and coordinate between the BYOB reader and the underlying source. The <a href="https://github.com/web-platform-tests/wpt/tree/master/streams/readable-byte-streams"><u>Web Platform Tests for readable byte streams</u></a> include dedicated test files just for BYOB edge cases: detached buffers, bad views, response-after-enqueue ordering, and more.</p><p>BYOB ends up being complex for both users and implementers, yet sees little adoption in practice. Most developers stick with default reads and accept the allocation overhead.</p><p>Most userland implementations of custom <code>ReadableStream</code> instances do not typically bother with all the ceremony required to correctly implement both default and BYOB read support in a single stream – and for good reason. It's difficult to get right, and most of the time, consuming code is going to fall back on the default read path. The example below shows what a "correct" implementation would need to do. It's big, complex, and error-prone – not a level of complexity that the typical developer really wants to deal with:</p>
            <pre><code>let offset = 0;
const totalBytes = 10 * 1024; // illustrative data source size

new ReadableStream({
  type: 'bytes',

  async pull(controller) {
    if (offset &gt;= totalBytes) {
      controller.close();
      return;
    }

    // Check for BYOB request FIRST
    const byobRequest = controller.byobRequest;

    if (byobRequest) {
      // === BYOB PATH ===
      // Consumer provided a buffer - we MUST fill it (or part of it)
      const view = byobRequest.view;
      const bytesAvailable = totalBytes - offset;
      const bytesToWrite = Math.min(view.byteLength, bytesAvailable);

      // Create a view into the consumer's buffer and fill it;
      // not critical, but safer when bytesToWrite != view.byteLength
      const dest = new Uint8Array(
        view.buffer,
        view.byteOffset,
        bytesToWrite
      );

      // Fill with sequential bytes (our "data source");
      // this could be anything that writes into the view
      for (let i = 0; i &lt; bytesToWrite; i++) {
        dest[i] = (offset + i) &amp; 0xFF;
      }

      offset += bytesToWrite;

      // Signal how many bytes we wrote
      byobRequest.respond(bytesToWrite);

    } else {
      // === DEFAULT READER PATH ===
      // No BYOB request - allocate and enqueue a chunk
      const bytesAvailable = totalBytes - offset;
      const chunkSize = Math.min(1024, bytesAvailable);

      const chunk = new Uint8Array(chunkSize);
      for (let i = 0; i &lt; chunkSize; i++) {
        chunk[i] = (offset + i) &amp; 0xFF;
      }

      offset += chunkSize;
      controller.enqueue(chunk);
    }
  },

  cancel(reason) {
    console.log('Stream canceled:', reason);
  }
});</code></pre>
            <p>When a host runtime provides a byte-oriented ReadableStream from the runtime itself, for instance, as the <code>body </code>of a fetch <code>Response</code>, it is often far easier for the runtime itself to provide an optimized implementation of BYOB reads, but those still need to be capable of handling both default and BYOB reading patterns and that requirement brings with it a fair amount of complexity.</p>
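For completeness, this is roughly what the consumer side of a BYOB loop looks like when reusing a single buffer. The byte source below is purely illustrative; the key detail is re-wrapping the transferred buffer (<code>value.buffer</code>) on every iteration:

```javascript
// Illustrative byte source: emits the given bytes once, then closes.
function makeByteSource(bytes) {
  let sent = false;
  return new ReadableStream({
    type: 'bytes',
    pull(controller) {
      if (sent) {
        controller.close();
        return;
      }
      controller.enqueue(Uint8Array.from(bytes));
      sent = true;
    },
  });
}

// Read an entire byte stream through a BYOB reader, reusing one buffer.
async function readAllBYOB(stream, bufferSize = 4) {
  const reader = stream.getReader({ mode: 'byob' });
  const out = [];
  let view = new Uint8Array(bufferSize);
  while (true) {
    const { value, done } = await reader.read(view);
    if (done) break;
    out.push(...value); // copy out the filled bytes before reusing
    // `view` was detached by the read; `value.buffer` is the same
    // memory transferred back to us -- re-wrap it for the next read.
    view = new Uint8Array(value.buffer);
  }
  reader.releaseLock();
  return out;
}
```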
    <div>
      <h4>Backpressure: good in theory, broken in practice</h4>
      <a href="#backpressure-good-in-theory-broken-in-practice">
        
      </a>
    </div>
    <p>Backpressure – the ability for a slow consumer to signal a fast producer to slow down – is a first-class concept in Web streams. In theory. In practice, the model has some serious flaws.</p><p>The primary signal is <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultController/desiredSize"><code><u>desiredSize</u></code></a> on the controller. It can be positive (wants data), zero (at capacity), negative (over capacity), or null (closed). Producers are supposed to check this value and stop enqueueing when it's not positive. But there's nothing enforcing this: <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultController/enqueue"><code><u>controller.enqueue()</u></code></a> always succeeds, even when desiredSize is deeply negative.</p>
            <pre><code>new ReadableStream({
  start(controller) {
    // Nothing stops you from doing this
    while (true) {
      controller.enqueue(generateData()); // desiredSize: -999999
    }
  }
});</code></pre>
            <p>Stream implementations can and do ignore backpressure, and some spec-defined features explicitly break it. <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/tee"><code><u>tee()</u></code></a>, for instance, creates two branches from a single stream. If one branch reads faster than the other, data accumulates in an internal buffer with no limit. A fast consumer can cause unbounded memory growth while the slow consumer catches up, and there's no way to configure this or opt out beyond canceling the slower branch.</p><p>Web streams do provide clear mechanisms for tuning backpressure behavior in the form of the <code>highWaterMark</code> option and customizable size calculations, but these are just as easy to ignore as <code>desiredSize</code>, and many applications simply fail to pay attention to them.</p><p>The same issues exist on the <code>WritableStream</code> side. A <code>WritableStream</code> has a <code>highWaterMark</code> and <code>desiredSize</code>. There is a <code>writer.ready</code> promise that producers of data are supposed to pay attention to, but often don't.</p>
            <pre><code>const writable = getWritableStreamSomehow();
const writer = writable.getWriter();

// Producers are supposed to wait for writer.ready.
// It is a promise that resolves once the writable's
// internal backpressure has cleared and it is OK
// to write more data.
await writer.ready;
await writer.write(...);</code></pre>
            <p>For implementers, backpressure adds complexity without providing guarantees. The machinery to track queue sizes, compute <code>desiredSize</code>, and invoke <code>pull()</code> at the right times must all be implemented correctly. However, since these signals are advisory, all that work doesn't actually prevent the problems backpressure is supposed to solve.</p>
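By contrast, a producer that does its work inside <code>pull()</code> is throttled automatically, because the runtime only invokes <code>pull()</code> while <code>desiredSize</code> is positive. A minimal sketch (the chunk values and the <code>highWaterMark</code> of 2 are arbitrary choices):

```javascript
// Sketch: a pull-based source gets backpressure for free -- pull()
// is only invoked while the queue is below the high water mark.
let pulls = 0;
const throttled = new ReadableStream(
  {
    pull(controller) {
      pulls += 1;
      controller.enqueue(`chunk-${pulls}`);
    },
  },
  new CountQueuingStrategy({ highWaterMark: 2 })
);
```

With no consumer attached, <code>pull()</code> runs just twice, filling the queue to the high water mark, and then stops; each read then makes room for exactly one more pull.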
    <div>
      <h4>The hidden cost of promises</h4>
      <a href="#the-hidden-cost-of-promises">
        
      </a>
    </div>
    <p>The Web streams spec requires promise creation at numerous points, often in hot paths and often invisible to users. Each <code>read()</code> call doesn't just return a promise; internally, the implementation creates additional promises for queue management, <code>pull()</code> coordination, and backpressure signaling.</p><p>This overhead is mandated by the spec's reliance on promises for buffer management, completion, and backpressure signals. While some of it is implementation-specific, much of it is unavoidable if you're following the spec as written. For high-frequency streaming – video frames, network packets, real-time data – this overhead is significant.</p><p>The problem compounds in pipelines. Each <code>TransformStream</code> adds another layer of promise machinery between source and sink. The spec doesn't define synchronous fast paths, so even when data is available immediately, the promise machinery still runs.</p><p>For implementers, this promise-heavy design constrains optimization opportunities. The spec mandates specific promise resolution ordering, making it difficult to batch operations or skip unnecessary async boundaries without risking subtle compliance failures. There are many hidden internal optimizations that implementers do make but these can be complicated and difficult to get right.</p><p>While I was writing this blog post, Vercel's Malte Ubl published their own <a href="https://vercel.com/blog/we-ralph-wiggumed-webstreams-to-make-them-10x-faster"><u>blog post</u></a> describing some research work Vercel has been doing around improving the performance of Node.js' Web streams implementation. In that post they discuss the same fundamental performance optimization problem that every implementation of Web streams face:</p><blockquote><p>"Or consider pipeTo(). Each chunk passes through a full Promise chain: read, write, check backpressure, repeat. An {value, done} result object is allocated per read. 
Error propagation creates additional Promise branches.</p><p>None of this is wrong. These guarantees matter in the browser where streams cross security boundaries, where cancellation semantics need to be airtight, where you do not control both ends of a pipe. But on the server, when you are piping React Server Components through three transforms at 1KB chunks, the cost adds up.</p><p>We benchmarked native WebStream pipeThrough at 630 MB/s for 1KB chunks. Node.js pipeline() with the same passthrough transform: ~7,900 MB/s. That is a 12x gap, and the difference is almost entirely Promise and object allocation overhead." 
- Malte Ubl, <a href="https://vercel.com/blog/we-ralph-wiggumed-webstreams-to-make-them-10x-faster"><u>https://vercel.com/blog/we-ralph-wiggumed-webstreams-to-make-them-10x-faster</u></a></p></blockquote><p>As part of their research, they have put together a set of proposed improvements for Node.js' Web streams implementation that eliminate promises in certain code paths, yielding a significant performance boost – up to 10x faster – which only goes to prove the point: promises, while useful, add significant overhead. As one of the core maintainers of Node.js, I am looking forward to helping Malte and the folks at Vercel get their proposed improvements landed!</p><p>In a recent update made to Cloudflare Workers, I made similar kinds of modifications to an internal data pipeline that reduced the number of JavaScript promises created in certain application scenarios by up to 200x. The result is an improvement in performance of several orders of magnitude in those applications.</p>
    <div>
      <h3>Real-world failures</h3>
      <a href="#real-world-failures">
        
      </a>
    </div>
    
    <div>
      <h4>Exhausting resources with unconsumed bodies</h4>
      <a href="#exhausting-resources-with-unconsumed-bodies">
        
      </a>
    </div>
    <p>When <code>fetch()</code> returns a response, the body is a <a href="https://developer.mozilla.org/en-US/docs/Web/API/Response/body"><code><u>ReadableStream</u></code></a>. If you only check the status and don't consume or cancel the body, what happens? The answer varies by implementation, but a common outcome is resource leakage.</p>
            <pre><code>async function checkEndpoint(url) {
  const response = await fetch(url);
  return response.ok; // Body is never consumed or cancelled
}

// In a loop, this can exhaust connection pools
for (const url of urls) {
  await checkEndpoint(url);
}</code></pre>
            <p>This pattern has caused connection pool exhaustion in Node.js applications using <a href="https://nodejs.org/api/globals.html#fetch"><u>undici</u></a> (the <code>fetch() </code>implementation built into Node.js), and similar issues have appeared in other runtimes. The stream holds a reference to the underlying connection, and without explicit consumption or cancellation, the connection may linger until garbage collection – which may not happen soon enough under load.</p><p>The problem is compounded by APIs that implicitly create stream branches. <a href="https://developer.mozilla.org/en-US/docs/Web/API/Request/clone"><code><u>Request.clone()</u></code></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Response/clone"><code><u>Response.clone()</u></code></a> perform implicit <code>tee()</code> operations on the body stream – a detail that's easy to miss. Code that clones a request for logging or retry logic may unknowingly create branched streams that need independent consumption, multiplying the resource management burden.</p><p>Now, to be certain, these types of issues <i>are</i> implementation bugs. The connection leak was definitely something that undici needed to fix in its own implementation, but the complexity of the specification does not make dealing with these types of issues easy.</p><blockquote><p>"Cloning streams in Node.js's fetch() implementation is harder than it looks. When you clone a request or response body, you're calling tee() - which splits a single stream into two branches that both need to be consumed. If one consumer reads faster than the other, data buffers unbounded in memory waiting for the slow branch. If you don't properly consume both branches, the underlying connection leaks. The coordination required between two readers sharing one source makes it easy to accidentally break the original request or exhaust connection pools. 
It's a simple API call with complex underlying mechanics that are difficult to get right." - Matteo Collina, Ph.D. - Platformatic Co-Founder &amp; CTO, Node.js Technical Steering Committee Chair</p></blockquote>
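A defensive version of <code>checkEndpoint</code> cancels the unread body explicitly, so the underlying connection can be released deterministically rather than waiting on garbage collection. The injectable <code>fetchImpl</code> parameter here is a hypothetical convenience for illustration and testing, not part of any API:

```javascript
// Sketch: explicitly cancel the body we never intend to read.
// cancel() propagates to the underlying source, which lets the
// runtime release the connection immediately.
// `fetchImpl` is a hypothetical parameter for illustration/testing.
async function checkEndpoint(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (response.body) {
    await response.body.cancel();
  }
  return response.ok;
}
```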
    <div>
      <h4>Falling headlong off the tee() memory cliff</h4>
      <a href="#falling-headlong-off-the-tee-memory-cliff">
        
      </a>
    </div>
    <p><a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/tee"><code><u>tee()</u></code></a> splits a stream into two branches. It seems straightforward, but the implementation requires buffering: if one branch is read faster than the other, the data must be held somewhere until the slower branch catches up.</p>
            <pre><code>const [forHash, forStorage] = response.body.tee();

// Hash computation is fast
const hash = await computeHash(forHash);

// Storage write is slow — meanwhile, the entire stream
// may be buffered in memory waiting for this branch
await writeToStorage(forStorage);</code></pre>
            <p>The spec does not mandate buffer limits for <code>tee()</code>. And to be fair, the spec allows implementations to implement the actual internal mechanisms for <code>tee()</code> and other APIs in any way they see fit, so long as the observable normative requirements of the specification are met. But if an implementation chooses to implement <code>tee()</code> in the specific way described by the streams specification, then <code>tee()</code> will come with a built-in memory management issue that is difficult to work around.</p><p>Implementations have had to develop their own strategies for dealing with this. Firefox initially used a linked-list approach that led to <code>O(n)</code> memory growth proportional to the consumption rate difference. In Cloudflare Workers, we opted to implement a shared buffer model where backpressure is signaled by the slowest consumer rather than the fastest.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5cl4vqYfaHaVXiHjLSXv0a/03a0b9fe4c9c0594e181ffee43b63998/2.png" />
          </figure>
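The practical mitigation is to drive both branches concurrently instead of sequentially, so the buffered backlog is bounded by the ongoing rate difference between consumers rather than by the full size of the stream. A sketch with illustrative helpers:

```javascript
// Illustrative helpers: a simple in-memory source and a collector.
function sourceOf(chunks) {
  return new ReadableStream({
    start(controller) {
      for (const chunk of chunks) controller.enqueue(chunk);
      controller.close();
    },
  });
}

async function collect(stream) {
  const out = [];
  for await (const chunk of stream) out.push(chunk);
  return out;
}

// Consume both tee() branches at the same time. A branch that is
// never read forces everything the other branch reads to buffer.
async function consumeBothBranches() {
  const [left, right] = sourceOf([1, 2, 3]).tee();
  const [a, b] = await Promise.all([collect(left), collect(right)]);
  return { a, b }; // each branch sees every chunk
}
```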
    <div>
      <h4>Transform backpressure gaps</h4>
      <a href="#transform-backpressure-gaps">
        
      </a>
    </div>
    <p><code>TransformStream</code> creates a <code>readable/writable</code> pair with processing logic in between. The <code>transform()</code> function executes on <i>write</i>, not on read. Processing of the transform happens eagerly as data arrives, regardless of whether any consumer is ready. This causes unnecessary work when consumers are slow, and the backpressure signaling between the two sides has gaps that can cause unbounded buffering under load. The expectation in the spec is that the producer of the data being transformed is paying attention to the <code>writer.ready</code> signal on the writable side of the transform, but quite often producers simply ignore it.</p><p>If the transform's <code>transform()</code> operation is synchronous and always enqueues output immediately, it never signals backpressure back to the writable side even when the downstream consumer is slow. This is a consequence of the spec design that many developers completely overlook. In browsers, where there's only a single user and typically only a small number of stream pipelines active at any given time, this type of foot gun is often of no consequence, but it has a major impact on server-side or edge performance in runtimes that serve thousands of concurrent requests.</p>
            <pre><code>const fastTransform = new TransformStream({
  transform(chunk, controller) {
    // Synchronously enqueue — this never applies backpressure
    // Even if the readable side's buffer is full, this succeeds
    controller.enqueue(processChunk(chunk));
  }
});

// Pipe a fast source through the transform to a slow sink
fastSource
  .pipeThrough(fastTransform)
  .pipeTo(slowSink);  // Buffer grows without bound</code></pre>
            <p>What TransformStreams are supposed to do is check for backpressure on the controller and use promises to communicate that back to the writer:</p>
            <pre><code>const fastTransform = new TransformStream({
  async transform(chunk, controller) {
    if (controller.desiredSize &lt;= 0) {
      // Wait on the backpressure to clear somehow
    }

    controller.enqueue(processChunk(chunk));
  }
});</code></pre>
            <p>A difficulty here, however, is that the <code>TransformStreamDefaultController</code> does not have a ready promise mechanism like writers do, so the <code>TransformStream</code> implementation would need to implement a polling mechanism to periodically check when <code>controller.desiredSize</code> becomes positive again.</p><p>The problem gets worse in pipelines. When you chain multiple transforms – say, parse, transform, then serialize – each <code>TransformStream</code> has its own internal readable and writable buffers. If implementers follow the spec strictly, data cascades through these buffers in a push-oriented fashion: the source pushes to transform A, which pushes to transform B, which pushes to transform C, each accumulating data in intermediate buffers before the final consumer has even started pulling. With three transforms, you can have six internal buffers filling up simultaneously.</p><p>Developers using the streams API are expected to remember to use options like <code>highWaterMark</code> when creating their sources, transforms, and writable destinations, but often they either forget or simply choose to ignore it.</p>
            <pre><code>source
  .pipeThrough(parse)      // buffers filling...
  .pipeThrough(transform)  // more buffers filling...
  .pipeThrough(serialize)  // even more buffers...
  .pipeTo(destination);    // consumer hasn't started yet</code></pre>
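As a sketch of the polling workaround described above: because <code>TransformStreamDefaultController</code> has no ready promise, an async <code>transform()</code> can poll <code>desiredSize</code> and hold off enqueueing until it goes positive. The poll interval and the readable-side <code>highWaterMark</code> of 1 are arbitrary choices for illustration:

```javascript
// Sketch: delay enqueue until the readable side has capacity, so a
// slow downstream consumer actually slows the writable side down.
function backpressureAwareTransform(fn, pollMs = 5) {
  return new TransformStream(
    {
      async transform(chunk, controller) {
        // desiredSize becomes null once the readable side is closed
        // or errored; stop polling in that case.
        while (controller.desiredSize !== null && controller.desiredSize <= 0) {
          await new Promise((resolve) => setTimeout(resolve, pollMs));
        }
        controller.enqueue(fn(chunk));
      },
    },
    undefined, // default writable strategy
    new CountQueuingStrategy({ highWaterMark: 1 }) // readable side
  );
}
```

Because <code>transform()</code> returns a promise, the writable side will not accept the next chunk until the poll loop completes, which is what ties the producer's pace to the consumer's.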
            <p>Implementations have found ways to optimize transform pipelines by collapsing identity transforms, short-circuiting non-observable paths, deferring buffer allocation, or falling back to native code that does not run JavaScript at all. Deno, Bun, and Cloudflare Workers have all successfully implemented "native path" optimizations that can help eliminate much of the overhead, and Vercel's recent <a href="https://vercel.com/blog/we-ralph-wiggumed-webstreams-to-make-them-10x-faster"><u>fast-webstreams</u></a> research is working on similar optimizations for Node.js. But the optimizations themselves add significant complexity and still can't fully escape the inherently push-oriented model that TransformStream uses.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/64FcAUPYrTvOSYOPoT2FkR/cc91e0d32dd47320e8ac9d6f431a2fda/3.png" />
          </figure>
    <div>
      <h4>GC thrashing in server-side rendering</h4>
      <a href="#gc-thrashing-in-server-side-rendering">
        
      </a>
    </div>
    <p>Streaming server-side rendering (SSR) is a particularly painful case. A typical SSR stream might render thousands of small HTML fragments, each passing through the streams machinery:</p>
            <pre><code>// Each component enqueues a small chunk
function renderComponent(controller) {
  controller.enqueue(encoder.encode(`&lt;div&gt;${content}&lt;/div&gt;`));
}

// Hundreds of components = hundreds of enqueue calls
// Each one triggers promise machinery internally
for (const component of components) {
  renderComponent(controller);  // Promises created, objects allocated
}</code></pre>
<p>Every fragment means promises created for <code>read()</code> calls, promises for backpressure coordination, intermediate buffer allocations, and <code>{ value, done }</code> result objects – most of which become garbage almost immediately.</p><p>Under load, this creates GC pressure that can devastate throughput. The JavaScript engine spends significant time collecting short-lived objects instead of doing useful work. Latency becomes unpredictable as GC pauses interrupt request handling. I've seen SSR workloads where garbage collection accounts for a substantial portion (up to and beyond 50%) of total CPU time per request. That's time that could be spent actually rendering content.</p><p>The irony is that streaming SSR is supposed to improve performance by sending content incrementally. But the overhead of the streams machinery can negate those gains, especially for pages with many small components. Developers sometimes find that buffering the entire response is actually faster than streaming through Web streams, defeating the purpose entirely.</p>
    <div>
      <h3>The optimization treadmill</h3>
      <a href="#the-optimization-treadmill">
        
      </a>
    </div>
    <p>To achieve usable performance, every major runtime has resorted to non-standard internal optimizations for Web streams. Node.js, Deno, Bun, and Cloudflare Workers have all developed their own workarounds. This is particularly true for streams wired up to system-level I/O, where much of the machinery is non-observable and can be short-circuited.</p><p>Finding these optimization opportunities can itself be a significant undertaking. It requires end-to-end understanding of the spec to identify which behaviors are observable and which can safely be elided. Even then, whether a given optimization is actually spec-compliant is often unclear. Implementers must make judgment calls about which semantics they can relax without breaking compatibility. This puts enormous pressure on runtime teams to become spec experts just to achieve acceptable performance.</p><p>These optimizations are difficult to implement, frequently error-prone, and lead to inconsistent behavior across runtimes. Bun's "<a href="https://bun.sh/docs/api/streams#direct-readablestream"><u>Direct Streams</u></a>" optimization takes a deliberately and observably non-standard approach, bypassing much of the spec's machinery entirely. Cloudflare Workers' <a href="https://developers.cloudflare.com/workers/runtime-apis/streams/transformstream/"><code><u>IdentityTransformStream</u></code></a> provides a fast-path for pass-through transforms but is Workers-specific and implements behaviors that are not standard for a <code>TransformStream</code>. Each runtime has its own set of tricks and the natural tendency is toward non-standard solutions, because that's often the only way to make things fast.</p><p>This fragmentation hurts portability. Code that performs well on one runtime may behave differently (or poorly) on another, even though it's using "standard" APIs. 
The complexity burden on runtime implementers is substantial, and the subtle behavioral differences create friction for developers trying to write cross-runtime code, particularly those maintaining frameworks that must be able to run efficiently across many runtime environments.</p><p>It is also necessary to emphasize that many optimizations are only possible in parts of the spec that are unobservable to user code. The alternative, like Bun "Direct Streams", is to intentionally diverge from the spec-defined observable behaviors. This means optimizations often feel "incomplete". They work in some scenarios but not in others, in some runtimes but not others, etc. Every such case adds to the overall unsustainable complexity of the Web streams approach which is why most runtime implementers rarely put significant effort into further improvements to their streams implementations once the conformance tests are passing.</p><p>Implementers shouldn't need to jump through these hoops. When you find yourself needing to relax or bypass spec semantics just to achieve reasonable performance, that's a sign something is wrong with the spec itself. A well-designed streaming API should be efficient by default, not require each runtime to invent its own escape hatches.</p>
    <div>
      <h3>The compliance burden</h3>
      <a href="#the-compliance-burden">
        
      </a>
    </div>
<p>A complex spec creates complex edge cases. The <a href="https://github.com/web-platform-tests/wpt/tree/master/streams"><u>Web Platform Tests for streams</u></a> span over 70 test files, and while comprehensive testing is a good thing, what's telling is what needs to be tested.</p><p>Consider some of the more obscure tests that implementations must pass:</p><ul><li><p>Prototype pollution defense: One test patches <code>Object.prototype.then</code> to intercept promise resolutions, then verifies that <code>pipeTo()</code> and <code>tee()</code> operations don't leak internal values through the prototype chain. This tests a security property that only exists because the spec's promise-heavy internals create an attack surface.</p></li><li><p>WebAssembly memory rejection: BYOB reads must explicitly reject ArrayBuffers backed by WebAssembly memory, which look like regular buffers but can't be transferred. This edge case exists because of the spec's buffer detachment model – a simpler API wouldn't need to handle it.</p></li><li><p>Crash regression for state machine conflicts: A test specifically checks that calling <code>byobRequest.respond()</code> after <code>enqueue()</code> doesn't crash the runtime. This sequence creates a conflict in the internal state machine — the <code>enqueue()</code> fulfills the pending read and should invalidate the <code>byobRequest</code>, but implementations must gracefully handle the subsequent <code>respond()</code> rather than corrupting memory, covering the very likely possibility that developers are not using the complex API correctly.</p></li></ul><p>These aren't contrived scenarios invented by test authors in a vacuum. They're consequences of the spec's design and reflect real world bugs.</p><p>For runtime implementers, passing the WPT suite means handling intricate corner cases that most application code will never encounter. 
The tests encode not just the happy path but the full matrix of interactions between readers, writers, controllers, queues, strategies, and the promise machinery that connects them all.</p><p>A simpler API would mean fewer concepts, fewer interactions between concepts, and fewer edge cases to get right, resulting in more confidence that implementations actually behave consistently.</p>
    <div>
      <h3>The takeaway</h3>
      <a href="#the-takeaway">
        
      </a>
    </div>
    <p>Web streams are complex for users and implementers alike. The problems with the spec aren't bugs. They emerge from using the API exactly as designed. They aren't issues that can be fixed solely through incremental improvements. They're consequences of fundamental design choices. To improve things we need different foundations.</p>
    <div>
      <h2>A better streams API is possible</h2>
      <a href="#a-better-streams-api-is-possible">
        
      </a>
    </div>
    <p>After implementing the Web streams spec multiple times across different runtimes and seeing the pain points firsthand, I decided it was time to explore what a better, alternative streaming API could look like if designed from first principles today.</p><p>What follows is a proof of concept: it's not a finished standard, not a production-ready library, not even necessarily a concrete proposal for something new, but a starting point for discussion that demonstrates the problems with Web streams aren't inherent to streaming itself; they're consequences of specific design choices that could be made differently. Whether this exact API is the right answer is less important than whether it sparks a productive conversation about what we actually need from a streaming primitive.</p>
    <div>
      <h3>What is a stream?</h3>
      <a href="#what-is-a-stream">
        
      </a>
    </div>
    <p>Before diving into API design, it's worth asking: what is a stream?</p><p>At its core, a stream is just a sequence of data that arrives over time. You don't have all of it at once. You process it incrementally as it becomes available.</p><p>Unix pipes are perhaps the purest expression of this idea:</p>
            <pre><code>cat access.log | grep "error" | sort | uniq -c</code></pre>
            <p>
Data flows left to right. Each stage reads input, does its work, writes output. There's no pipe reader to acquire, no controller lock to manage. If a downstream stage is slow, upstream stages naturally slow down as well. Backpressure is implicit in the model, not a separate mechanism to learn (or ignore).</p><p>In JavaScript, the natural primitive for "a sequence of things that arrive over time" is already in the language: the async iterable. You consume it with <code>for await...of</code>. You stop consuming by stopping iteration.</p><p>This is the intuition the new API tries to preserve: streams should feel like iteration, because that's what they are. The complexity of Web streams – readers, writers, controllers, locks, queuing strategies – obscures this fundamental simplicity. A better API should make the simple case simple and only add complexity where it's genuinely needed.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3AUAA4bitbTOVSQg7Pd7fv/0856b44d78899dcffc4493f4146fb64f/4.png" />
          </figure>
    <div>
      <h3>Design principles</h3>
      <a href="#design-principles">
        
      </a>
    </div>
    <p>I built the proof-of-concept alternative around a different set of principles.</p>
    <div>
      <h4>Streams are iterables.</h4>
      <a href="#streams-are-iterables">
        
      </a>
    </div>
    <p>No custom <code>ReadableStream</code> class with hidden internal state. A readable stream is just an <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Iteration_protocols#the_async_iterator_and_async_iterable_protocols"><code><u>AsyncIterable&lt;Uint8Array[]&gt;</u></code></a>. You consume it with <code>for await...of</code>. No readers to acquire, no locks to manage.</p>
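<p>Concretely – and this is a from-scratch sketch, not the library's code – any async generator that yields batches of <code>Uint8Array</code> chunks is already a complete source, and any consumer is just a <code>for await...of</code> loop:</p>

```javascript
// Sketch: a readable in this model is just AsyncIterable<Uint8Array[]>.
// An async generator yielding batches of byte chunks is a complete source.
async function* helloSource() {
  const enc = new TextEncoder();
  yield [enc.encode("Hello, "), enc.encode("World!")]; // one batch, two chunks
}

// A consumer needs nothing beyond iteration.
async function collectText(readable) {
  const dec = new TextDecoder();
  let text = "";
  for await (const chunks of readable) {
    for (const chunk of chunks) text += dec.decode(chunk, { stream: true });
  }
  return text + dec.decode();
}
```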
    <div>
      <h4>Pull-through transforms</h4>
      <a href="#pull-through-transforms">
        
      </a>
    </div>
    <p>Transforms don't execute until the consumer pulls. There's no eager evaluation, no hidden buffering. Data flows on-demand from source, through transforms, to the consumer. If you stop iterating, processing stops.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4bEXBTEOHBMnCRKGA7odt5/cf51074cce3bb8b2ec1b5158c7560b68/5.png" />
          </figure>
    <div>
      <h4>Explicit backpressure</h4>
      <a href="#explicit-backpressure">
        
      </a>
    </div>
    <p>Backpressure is strict by default. When a buffer is full, writes reject rather than silently accumulating. You can configure alternative policies – block until space is available, drop oldest, drop newest – but you have to choose explicitly. No more silent memory growth.</p>
    <div>
      <h4>Batched chunks</h4>
      <a href="#batched-chunks">
        
      </a>
    </div>
<p>Instead of yielding one chunk per iteration, streams yield <code>Uint8Array[]</code>: arrays of chunks. This amortizes the async overhead across multiple chunks, reducing promise creation and microtask latency in hot paths.</p>
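<p>The amortization is easy to see in a sketch (hypothetical generators, not the library's API): batching divides the number of awaits the consumer pays for by the batch size.</p>

```javascript
// Sketch: yielding batches amortizes one promise/await over many chunks.
async function* perChunk(chunks) {
  for (const c of chunks) yield c; // one await per chunk on the consumer side
}

async function* batched(chunks, size) {
  for (let i = 0; i < chunks.length; i += size) {
    yield chunks.slice(i, i + size); // one await per `size` chunks
  }
}
```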
    <div>
      <h4>Bytes only</h4>
      <a href="#bytes-only">
        
      </a>
    </div>
    <p>The API deals exclusively with bytes (<a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Uint8Array"><code><u>Uint8Array</u></code></a>). Strings are UTF-8 encoded automatically. There's no "value stream" vs "byte stream" dichotomy. If you want to stream arbitrary JavaScript values, use async iterables directly. While the API uses <code>Uint8Array</code>, it treats chunks as opaque. There is no partial consumption, no BYOB patterns, no byte-level operations within the streaming machinery itself. Chunks go in, chunks come out, unchanged unless a transform explicitly modifies them.</p>
    <div>
      <h4>Synchronous fast paths matter</h4>
      <a href="#synchronous-fast-paths-matter">
        
      </a>
    </div>
    <p>The API recognizes that synchronous data sources are both necessary and common. The application should not be forced to always accept the performance cost of asynchronous scheduling simply because that's the only option provided. At the same time, mixing sync and async processing can be dangerous. Synchronous paths should always be an option and should always be explicit.</p>
    <div>
      <h3>The new API in action</h3>
      <a href="#the-new-api-in-action">
        
      </a>
    </div>
    
    <div>
      <h4>Creating and consuming streams</h4>
      <a href="#creating-and-consuming-streams">
        
      </a>
    </div>
    <p>In Web streams, creating a simple producer/consumer pair requires <code>TransformStream</code>, manual encoding, and careful lock management:</p>
            <pre><code>const { readable, writable } = new TransformStream();
const enc = new TextEncoder();
const writer = writable.getWriter();
await writer.write(enc.encode("Hello, World!"));
await writer.close();
writer.releaseLock();

const dec = new TextDecoder();
let text = '';
for await (const chunk of readable) {
  text += dec.decode(chunk, { stream: true });
}
text += dec.decode();</code></pre>
            <p>Even this relatively clean version requires: a <code>TransformStream</code>, manual <code>TextEncoder</code> and <code>TextDecoder</code>, and explicit lock release.</p><p>Here's the equivalent with the new API:</p>
            <pre><code>import { Stream } from 'new-streams';

// Create a push stream
const { writer, readable } = Stream.push();

// Write data — backpressure is enforced
await writer.write("Hello, World!");
await writer.end();

// Consume as text
const text = await Stream.text(readable);</code></pre>
<p>The readable is just an async iterable. You can pass it to any function that expects one, including <code>Stream.text()</code>, which collects and decodes the entire stream.</p><p>The writer has a simple interface: <code>write()</code>, <code>writev()</code> for batched writes, <code>end()</code> to signal completion, and <code>abort()</code> for errors. That's essentially it.</p><p>The Writer is not a concrete class. Any object that implements <code>write()</code>, <code>end()</code>, and <code>abort()</code> can be a writer, making it easy to adapt existing APIs or create specialized implementations without subclassing. There's no complex <code>UnderlyingSink</code> protocol with <code>start()</code>, <code>write()</code>, <code>close()</code>, and <code>abort()</code> callbacks that must coordinate through a controller whose lifecycle and state are independent of the <code>WritableStream</code> it is bound to.</p><p>Here's a simple in-memory writer that collects all written data:</p>
            <pre><code>// A minimal writer implementation — just an object with methods
function createBufferWriter() {
  const chunks = [];
  let totalBytes = 0;
  let closed = false;

  const addChunk = (chunk) =&gt; {
    chunks.push(chunk);
    totalBytes += chunk.byteLength;
  };

  return {
    get desiredSize() { return closed ? null : 1; },

    // Async variants
    write(chunk) { addChunk(chunk); },
    writev(batch) { for (const c of batch) addChunk(c); },
    end() { closed = true; return totalBytes; },
    abort(reason) { closed = true; chunks.length = 0; },

    // Sync variants return boolean (true = accepted)
    writeSync(chunk) { addChunk(chunk); return true; },
    writevSync(batch) { for (const c of batch) addChunk(c); return true; },
    endSync() { closed = true; return totalBytes; },
    abortSync(reason) { closed = true; chunks.length = 0; return true; },

    getChunks() { return chunks; }
  };
}

// Use it
const writer = createBufferWriter();
await Stream.pipeTo(source, writer);
const allData = writer.getChunks();</code></pre>
            <p>No base class to extend, no abstract methods to implement, no controller to coordinate with. Just an object with the right shape.</p>
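<p>The push side is equally unmagical. A minimal push stream – an illustration, not the reference implementation, and one that omits the backpressure policies discussed later – is just a queue bridging a writer object to an async generator:</p>

```javascript
// Sketch: a minimal push stream. The writer fills a queue; the readable
// is an async generator that drains it in batches.
function push() {
  const queue = [];
  let ended = false;
  let wake = null;
  const notify = () => {
    if (wake) { const w = wake; wake = null; w(); }
  };
  const writer = {
    async write(chunk) { queue.push(chunk); notify(); },
    async end() { ended = true; notify(); },
    async abort(reason) { ended = true; queue.length = 0; notify(); },
  };
  async function* iterate() {
    for (;;) {
      if (queue.length > 0) {
        yield queue.splice(0); // drain the whole buffer as one batch
      } else if (ended) {
        return;
      } else {
        await new Promise((resolve) => (wake = resolve)); // wait for a write
      }
    }
  }
  return { writer, readable: iterate() };
}
```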
    <div>
      <h4>Pull-through transforms</h4>
      <a href="#pull-through-transforms">
        
      </a>
    </div>
    <p>Under the new API design, transforms should not perform any work until the data is being consumed. This is a fundamental principle.</p>
            <pre><code>// Nothing executes until iteration begins
const output = Stream.pull(source, compress, encrypt);

// Transforms execute as we iterate
for await (const chunks of output) {
  for (const chunk of chunks) {
    process(chunk);
  }
}</code></pre>
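<p>That laziness isn't magic; it falls naturally out of generator composition. A from-scratch sketch (hypothetical, not the library's internals) shows that nothing – not even the source – runs before the consumer iterates:</p>

```javascript
// Sketch: generator composition is lazy by construction.
let sourceStarted = false;

async function* source() {
  sourceStarted = true;
  yield [new Uint8Array([1, 2, 3])];
}

// A transform is just a generator wrapping its input.
async function* double(input) {
  for await (const chunks of input) {
    yield chunks.map((c) => c.map((b) => b * 2));
  }
}

const pipeline = double(source()); // sourceStarted is still false here
```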
            <p><code>Stream.pull()</code> creates a lazy pipeline. The <code>compress</code> and <code>encrypt</code> transforms don't run until you start iterating output. Each iteration pulls data through the pipeline on demand.</p><p>This is fundamentally different from Web streams' <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream/pipeThrough"><code><u>pipeThrough()</u></code></a>, which starts actively pumping data from the source to the transform as soon as you set up the pipe. Pull semantics mean you control when processing happens, and stopping iteration stops processing.</p><p>Transforms can be stateless or stateful. A stateless transform is just a function that takes chunks and returns transformed chunks:</p>
<pre><code>// Stateless transform — a pure function
// Receives chunks or null (flush signal)
// (For simplicity, this assumes chunk boundaries fall on UTF-8
// character boundaries; real code would use streaming decode.)
const toUpperCase = (chunks) =&gt; {
  if (chunks === null) return null; // End of stream
  return chunks.map(chunk =&gt; {
    const str = new TextDecoder().decode(chunk);
    return new TextEncoder().encode(str.toUpperCase());
  });
};

// Use it directly
const output = Stream.pull(source, toUpperCase);</code></pre>
            <p>Stateful transforms are simple objects with member functions that maintain state across calls:</p>
            <pre><code>// Stateful transform — a generator that wraps the source
function createLineParser() {
  // Helper to concatenate Uint8Arrays
  const concat = (...arrays) =&gt; {
    const result = new Uint8Array(arrays.reduce((n, a) =&gt; n + a.length, 0));
    let offset = 0;
    for (const arr of arrays) { result.set(arr, offset); offset += arr.length; }
    return result;
  };

  return {
    async *transform(source) {
      let pending = new Uint8Array(0);
      
      for await (const chunks of source) {
        if (chunks === null) {
          // Flush: yield any remaining data
          if (pending.length &gt; 0) yield [pending];
          continue;
        }
        
        // Concatenate pending data with new chunks
        const combined = concat(pending, ...chunks);
        const lines = [];
        let start = 0;

        for (let i = 0; i &lt; combined.length; i++) {
          if (combined[i] === 0x0a) { // newline
            lines.push(combined.slice(start, i));
            start = i + 1;
          }
        }

        pending = combined.slice(start);
        if (lines.length &gt; 0) yield lines;
      }
    }
  };
}

const output = Stream.pull(source, createLineParser());</code></pre>
            <p>For transforms that need cleanup on abort, add an abort handler:</p>
            <pre><code>// Stateful transform with resource cleanup
function createGzipCompressor() {
  // Hypothetical compression API...
  const deflate = new Deflater({ gzip: true });

  return {
    async *transform(source) {
      for await (const chunks of source) {
        if (chunks === null) {
          // Flush: finalize compression
          deflate.push(new Uint8Array(0), true);
          if (deflate.result) yield [deflate.result];
        } else {
          for (const chunk of chunks) {
            deflate.push(chunk, false);
            if (deflate.result) yield [deflate.result];
          }
        }
      }
    },
    abort(reason) {
      // Clean up compressor resources on error/cancellation
    }
  };
}</code></pre>
            <p>For implementers, there's no Transformer protocol with <code>start()</code>, <code>transform()</code>, <code>flush()</code> methods and controller coordination passed into a <code>TransformStream</code> class that has its own hidden state machine and buffering mechanisms. Transforms are just functions or simple objects: far simpler to implement and test.</p>
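<p>That simplicity pays off in testing: because a transform is just an object with a generator method, you can unit test it by driving it with an array-backed source – no streams machinery required. (<code>runTransform</code> and <code>createIndexer</code> here are hypothetical illustrations.)</p>

```javascript
// Sketch: a test helper that drives any transform with in-memory batches,
// yielding null at the end as the flush signal per the convention above.
async function runTransform(transform, batches) {
  async function* src() {
    yield* batches;
    yield null; // flush signal
  }
  const out = [];
  for await (const chunks of transform.transform(src())) out.push(...chunks);
  return out;
}

// A toy stateful transform: prefixes each chunk with a running index byte.
function createIndexer() {
  let i = 0;
  return {
    async *transform(source) {
      for await (const chunks of source) {
        if (chunks === null) continue; // nothing buffered to flush
        yield chunks.map((c) => new Uint8Array([i++, ...c]));
      }
    },
  };
}
```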
    <div>
      <h4>Explicit backpressure policies</h4>
      <a href="#explicit-backpressure-policies">
        
      </a>
    </div>
    <p>When a bounded buffer fills up and a producer wants to write more, there are only a few things you can do:</p><ol><li><p>Reject the write: refuse to accept more data</p></li><li><p>Wait: block until space becomes available</p></li><li><p>Discard old data: evict what's already buffered to make room</p></li><li><p>Discard new data: drop what's incoming</p></li></ol><p>That's it. Any other response is either a variation of these (like "resize the buffer," which is really just deferring the choice) or domain-specific logic that doesn't belong in a general streaming primitive. Web streams currently always choose Wait by default.</p>
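<p>The whole decision space fits in a few lines. Here's an illustrative bounded buffer (not the library's code) that makes each of the four responses explicit:</p>

```javascript
// Sketch: the four possible responses to a full buffer, made explicit.
// ("wait" appears only as a comment — it requires async coordination.)
function boundedBuffer(limit, policy) {
  const items = [];
  return {
    push(item) {
      if (items.length < limit) { items.push(item); return true; }
      switch (policy) {
        case "reject":                                   // 1. reject the write
          throw new Error("buffer full");
        // case "wait":                                  // 2. suspend producer
        //   ...until shift() makes room
        case "drop-oldest":                              // 3. evict old data
          items.shift(); items.push(item); return true;
        case "drop-newest":                              // 4. discard new data
          return false;
      }
    },
    shift() { return items.shift(); },
    get length() { return items.length; },
  };
}
```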
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/68339c8QsvNmb7JcZ2lSDO/e52a86a9b8f52b52eb9328d5ee58f23a/6.png" />
          </figure><p>The new API makes you choose one of these four explicitly:</p><ul><li><p><code>strict</code> (default): Rejects writes when the buffer is full and too many writes are pending. Catches "fire-and-forget" patterns where producers ignore backpressure.</p></li><li><p><code>block</code>: Writes wait until buffer space is available. Use when you trust the producer to await writes properly.</p></li><li><p><code>drop-oldest</code>: Drops the oldest buffered data to make room. Useful for live feeds where stale data loses value.</p></li><li><p><code>drop-newest</code>: Discards incoming data when full. Useful when you want to process what you have without being overwhelmed.</p></li></ul>
            <pre><code>const { writer, readable } = Stream.push({
  highWaterMark: 10,
  backpressure: 'strict' // or 'block', 'drop-oldest', 'drop-newest'
});</code></pre>
            <p>No more hoping producers cooperate. The policy you choose determines what happens when the buffer fills.</p><p>Here's how each policy behaves when a producer writes faster than the consumer reads:</p>
            <pre><code>// strict: Catches fire-and-forget writes that ignore backpressure
const strict = Stream.push({ highWaterMark: 2, backpressure: 'strict' });
strict.writer.write(chunk1);  // ok (not awaited)
strict.writer.write(chunk2);  // ok (fills slots buffer)
strict.writer.write(chunk3);  // ok (queued in pending)
strict.writer.write(chunk4);  // ok (pending buffer fills)
strict.writer.write(chunk5);  // throws! too many pending writes

// block: Wait for space (unbounded pending queue)
const blocking = Stream.push({ highWaterMark: 2, backpressure: 'block' });
await blocking.writer.write(chunk1);  // ok
await blocking.writer.write(chunk2);  // ok
await blocking.writer.write(chunk3);  // waits until consumer reads
await blocking.writer.write(chunk4);  // waits until consumer reads
await blocking.writer.write(chunk5);  // waits until consumer reads

// drop-oldest: Discard old data to make room
const dropOld = Stream.push({ highWaterMark: 2, backpressure: 'drop-oldest' });
await dropOld.writer.write(chunk1);  // ok
await dropOld.writer.write(chunk2);  // ok
await dropOld.writer.write(chunk3);  // ok, chunk1 discarded

// drop-newest: Discard incoming data when full
const dropNew = Stream.push({ highWaterMark: 2, backpressure: 'drop-newest' });
await dropNew.writer.write(chunk1);  // ok
await dropNew.writer.write(chunk2);  // ok
await dropNew.writer.write(chunk3);  // silently dropped</code></pre>
            
    <div>
<h4>Explicit multi-consumer patterns</h4>
      <a href="#explicit-multi-consumer-patterns">
        
      </a>
    </div>
    
            <pre><code>// Share with explicit buffer management
const shared = Stream.share(source, {
  highWaterMark: 100,
  backpressure: 'strict'
});

const consumer1 = shared.pull();
const consumer2 = shared.pull(decompress);</code></pre>
            <p>Instead of <code>tee()</code> with its hidden unbounded buffer, you get explicit multi-consumer primitives. <code>Stream.share()</code> is pull-based: consumers pull from a shared source, and you configure the buffer limits and backpressure policy upfront.</p><p>There's also <code>Stream.broadcast()</code> for push-based multi-consumer scenarios. Both require you to think about what happens when consumers run at different speeds, because that's a real concern that shouldn't be hidden.</p>
    <div>
      <h4>Sync/async separation</h4>
      <a href="#sync-async-separation">
        
      </a>
    </div>
<p>Not all streaming workloads involve I/O. When your source is in-memory and your transforms are pure functions, async machinery adds overhead without benefit: you're paying to coordinate waiting that never happens.</p><p>The new API has complete parallel sync versions: <code>Stream.pullSync()</code>, <code>Stream.bytesSync()</code>, <code>Stream.textSync()</code>, and so on. If your source and transforms are all synchronous, you can process the entire pipeline without a single promise.</p>
            <pre><code>// Async — when source or transforms may be asynchronous
const textAsync = await Stream.text(source);

// Sync — when all components are synchronous
const textSync = Stream.textSync(source);</code></pre>
            <p>Here's a complete synchronous pipeline – compression, transformation, and consumption with zero async overhead:</p>
            <pre><code>// Synchronous source from in-memory data
const source = Stream.fromSync([inputBuffer]);

// Synchronous transforms
const compressed = Stream.pullSync(source, zlibCompressSync);
const encrypted = Stream.pullSync(compressed, aesEncryptSync);

// Synchronous consumption — no promises, no event loop trips
const result = Stream.bytesSync(encrypted);</code></pre>
            <p>The entire pipeline executes in a single call stack. No promises are created, no microtask queue scheduling occurs, and no GC pressure from short-lived async machinery. For CPU-bound workloads like parsing, compression, or transformation of in-memory data, this can be significantly faster than the equivalent Web streams code – which would force async boundaries even when every component is synchronous.</p><p>Web streams has no synchronous path. Even if your source has data ready and your transform is a pure function, you still pay for promise creation and microtask scheduling on every operation. Promises are fantastic for cases in which waiting is actually necessary, but they aren't always necessary. The new API lets you stay in sync-land when that's what you need.</p>
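<p>What such a sync pipeline looks like under the hood can be sketched with plain (non-async) generator composition – hypothetical stand-ins for the real helpers, not the library's implementation:</p>

```javascript
// Sketch: a fully synchronous pipeline is ordinary generator composition.
// No promises, no microtasks — one call stack end to end.
function* fromSync(buffers) {
  yield buffers; // a single batch of chunks
}

function* pullSync(source, fn) {
  for (const chunks of source) {
    const out = fn(chunks);
    if (out !== null) yield out;
  }
}

function bytesSync(source) {
  const parts = [];
  let total = 0;
  for (const chunks of source) {
    for (const c of chunks) { parts.push(c); total += c.byteLength; }
  }
  const result = new Uint8Array(total);
  let offset = 0;
  for (const p of parts) { result.set(p, offset); offset += p.byteLength; }
  return result;
}
```

<p>Driving it with a byte-doubling transform, for example, runs the whole pipeline synchronously: <code>bytesSync(pullSync(fromSync([buf]), (chunks) =&gt; chunks.map((c) =&gt; c.map((b) =&gt; b * 2))))</code>.</p>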
    <div>
      <h4>Bridging the gap between this and web streams</h4>
      <a href="#bridging-the-gap-between-this-and-web-streams">
        
      </a>
    </div>
<p>The async iterator based approach provides a natural bridge between this alternative and Web streams. To consume an existing ReadableStream, simply pass the readable in as input; this works as expected as long as the ReadableStream yields bytes:</p>
            <pre><code>const readable = getWebReadableStreamSomehow();
const input = Stream.pull(readable, transform1, transform2);
for await (const chunks of input) {
  // process chunks
}</code></pre>
<p>When adapting to a ReadableStream, a bit more work is required since the alternative approach yields batches of chunks, but the adaptation layer is just as straightforward:</p>
            <pre><code>async function* adapt(input) {
  for await (const chunks of input) {
    for (const chunk of chunks) {
      yield chunk;
    }
  }
}

const input = Stream.pull(source, transform1, transform2);
const readable = ReadableStream.from(adapt(input));</code></pre>
            
    <div>
      <h4>How this addresses the real-world failures from earlier</h4>
      <a href="#how-this-addresses-the-real-world-failures-from-earlier">
        
      </a>
    </div>
    <ul><li><p>Unconsumed bodies: Pull semantics mean nothing happens until you iterate. No hidden resource retention. If you don't consume a stream, there's no background machinery holding connections open.</p></li><li><p>The <code>tee()</code> memory cliff: <code>Stream.share()</code> requires explicit buffer configuration. You choose the <code>highWaterMark</code> and backpressure policy upfront: no more silent unbounded growth when consumers run at different speeds.</p></li><li><p>Transform backpressure gaps: Pull-through transforms execute on-demand. Data doesn't cascade through intermediate buffers; it flows only when the consumer pulls. Stop iterating, stop processing.</p></li><li><p>GC thrashing in SSR: Batched chunks (<code>Uint8Array[]</code>) amortize async overhead. Sync pipelines via <code>Stream.pullSync()</code> eliminate promise allocation entirely for CPU-bound workloads.</p></li></ul>
    <div>
      <h3>Performance</h3>
      <a href="#performance">
        
      </a>
    </div>
    <p>The design choices have performance implications. Here are benchmarks from the reference implementation of this possible alternative compared to Web streams (Node.js v24.x, Apple M1 Pro, averaged over 10 runs):</p><table><tr><td><p><b>Scenario</b></p></td><td><p><b>Alternative</b></p></td><td><p><b>Web streams</b></p></td><td><p><b>Difference</b></p></td></tr><tr><td><p>Small chunks (1KB × 5000)</p></td><td><p>~13 GB/s</p></td><td><p>~4 GB/s</p></td><td><p>~3× faster</p></td></tr><tr><td><p>Tiny chunks (100B × 10000)</p></td><td><p>~4 GB/s</p></td><td><p>~450 MB/s</p></td><td><p>~8× faster</p></td></tr><tr><td><p>Async iteration (8KB × 1000)</p></td><td><p>~530 GB/s</p></td><td><p>~35 GB/s</p></td><td><p>~15× faster</p></td></tr><tr><td><p>Chained 3× transforms (8KB × 500)</p></td><td><p>~275 GB/s</p></td><td><p>~3 GB/s</p></td><td><p><b>~80–90× faster</b></p></td></tr><tr><td><p>High-frequency (64B × 20000)</p></td><td><p>~7.5 GB/s</p></td><td><p>~280 MB/s</p></td><td><p>~25× faster</p></td></tr></table><p>The chained transform result is particularly striking: pull-through semantics eliminate the intermediate buffering that plagues Web streams pipelines. Instead of each <code>TransformStream</code> eagerly filling its internal buffers, data flows on-demand from consumer to source.</p><p>Now, to be fair, Node.js really has not yet put significant effort into fully optimizing the performance of its Web streams implementation. There's likely significant room for improvement in Node.js' performance results through a bit of applied effort to optimize the hot paths there. 
That said, running these benchmarks in Deno and Bun also shows a significant performance improvement for this alternative iterator-based approach over both of their Web streams implementations.</p><p>Browser benchmarks (Chrome/Blink, averaged over 3 runs) show consistent gains as well:</p><table><tr><td><p><b>Scenario</b></p></td><td><p><b>Alternative</b></p></td><td><p><b>Web streams</b></p></td><td><p><b>Difference</b></p></td></tr><tr><td><p>Push 3KB chunks</p></td><td><p>~135k ops/s</p></td><td><p>~24k ops/s</p></td><td><p>~5–6× faster</p></td></tr><tr><td><p>Push 100KB chunks</p></td><td><p>~24k ops/s</p></td><td><p>~3k ops/s</p></td><td><p>~7–8× faster</p></td></tr><tr><td><p>3 transform chain</p></td><td><p>~4.6k ops/s</p></td><td><p>~880 ops/s</p></td><td><p>~5× faster</p></td></tr><tr><td><p>5 transform chain</p></td><td><p>~2.4k ops/s</p></td><td><p>~550 ops/s</p></td><td><p>~4× faster</p></td></tr><tr><td><p>bytes() consumption</p></td><td><p>~73k ops/s</p></td><td><p>~11k ops/s</p></td><td><p>~6–7× faster</p></td></tr><tr><td><p>Async iteration</p></td><td><p>~1.1M ops/s</p></td><td><p>~10k ops/s</p></td><td><p><b>~40–100× faster</b></p></td></tr></table><p>These benchmarks measure throughput in controlled scenarios; real-world performance depends on your specific use case. The difference between Node.js and browser gains reflects the distinct optimization paths each environment takes for Web streams.</p><p>It's worth noting that these benchmarks compare a pure TypeScript/JavaScript implementation of the new API against the native (JavaScript/C++/Rust) implementations of Web streams in each runtime. The new API's reference implementation has had no performance optimization work; the gains come entirely from the design. 
A native implementation would likely show further improvement.</p><p>The gains illustrate how fundamental design choices compound: batching amortizes async overhead, pull semantics eliminate intermediate buffering, and the freedom for implementations to use synchronous fast paths when data is available immediately all contribute.</p><blockquote><p>"We’ve done a lot to improve performance and consistency in Node streams, but there’s something uniquely powerful about starting from scratch. New streams’ approach embraces modern runtime realities without legacy baggage, and that opens the door to a simpler, performant and more coherent streams model." 
- Robert Nagy, Node.js TSC member and Node.js streams contributor</p></blockquote>
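<p>As a rough illustration of where the async iteration gap comes from, here is a micro-benchmark sketch (not the actual benchmark harness) comparing a plain async generator against a <code>ReadableStream</code> delivering the same 8KB chunks:</p>

```javascript
// Sketch only: time async iteration over a plain async generator
// vs. a Web ReadableStream delivering identical 8KB chunks.
const CHUNKS = 1000;
const chunk = new Uint8Array(8 * 1024);

async function* generatorSource() {
  for (let i = 0; i < CHUNKS; i++) yield chunk;
}

function webStreamSource() {
  let i = 0;
  return new ReadableStream({
    pull(controller) {
      if (i++ < CHUNKS) controller.enqueue(chunk);
      else controller.close();
    },
  });
}

// Consume any async iterable and measure elapsed time.
async function consume(iterable) {
  let bytes = 0;
  const start = performance.now();
  for await (const c of iterable) bytes += c.byteLength;
  return { bytes, ms: performance.now() - start };
}

const results = (async () => {
  const gen = await consume(generatorSource());
  const web = await consume(webStreamSource());
  console.log(`generator: ${gen.ms.toFixed(2)} ms, web stream: ${web.ms.toFixed(2)} ms`);
  return { gen, web };
})();
```

<p>The generator's values are ready the moment they are pulled, while every chunk from the stream crosses the reader machinery; that difference is exactly what batching and synchronous fast paths exploit.</p>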
    <div>
      <h2>What's next</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>I'm publishing this to start a conversation. What did I get right? What did I miss? Are there use cases that don't fit this model? What would a migration path for this approach look like? The goal is to gather feedback from developers who've felt the pain of Web streams and have opinions about what a better API should look like.</p>
    <div>
      <h3>Try it yourself</h3>
      <a href="#try-it-yourself">
        
      </a>
    </div>
    <p>A reference implementation for this alternative approach is available now and can be found at <a href="https://github.com/jasnell/new-streams"><u>https://github.com/jasnell/new-streams</u></a>.</p><ul><li><p>API Reference: See the <a href="https://github.com/jasnell/new-streams/blob/main/API.md"><u>API.md</u></a> for complete documentation</p></li><li><p>Examples: The <a href="https://github.com/jasnell/new-streams/tree/main/samples"><u>samples directory</u></a> has working code for common patterns</p></li></ul><p>I welcome issues, discussions, and pull requests. If you've run into Web streams problems I haven't covered, or if you see gaps in this approach, let me know. But again, the idea here is not to say "Let's all use this shiny new object!"; it is to kick off a discussion that looks beyond the current status quo of Web streams and returns to first principles.</p><p>Web streams was an ambitious project that brought streaming to the web platform when nothing else existed. The people who designed it made reasonable choices given the constraints of 2014 – before async iteration, before years of production experience revealed the edge cases.</p><p>But we've learned a lot since then. JavaScript has evolved. A streaming API designed today can be simpler, more aligned with the language, and more explicit about the things that matter, like backpressure and multi-consumer behavior.</p><p>We deserve a better stream API. So let's talk about what that could look like.</p> ]]></content:encoded>
            <category><![CDATA[Standards]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[TypeScript]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Performance]]></category>
            <category><![CDATA[API]]></category>
            <guid isPermaLink="false">37h1uszA2vuOfmXb3oAnZr</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[A year of improving Node.js compatibility in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/nodejs-workers-2025/</link>
            <pubDate>Thu, 25 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Over the year we have greatly expanded Node.js compatibility. There are hundreds of new Node.js APIs now available that make it easier to run existing Node.js code on our platform. ]]></description>
            <content:encoded><![CDATA[ <p>We've been busy.</p><p>Compatibility with the broad JavaScript developer ecosystem has always been a key strategic investment for us. We believe in open standards and an open web. We want you to see <a href="https://workers.cloudflare.com/"><u>Workers</u></a> as a powerful extension of your development platform with the ability to just drop code in that Just Works. To deliver on this goal, the Cloudflare Workers team has spent the past year significantly expanding compatibility with the Node.js ecosystem, enabling hundreds (if not thousands) of popular <a href="https://npmjs.com"><u>npm</u></a> modules to now work seamlessly, including the ever popular <a href="https://expressjs.com"><u>express</u></a> framework.</p><p>We have implemented a <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>substantial subset of the Node.js standard library</u></a>, focusing on the most commonly used, and asked for, APIs. These include:</p>
<div><table><colgroup>
<col></col>
<col></col>
</colgroup>
<thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>API documentation</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/console.html"><span>https://nodejs.org/docs/latest/api/console.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:crypto</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/crypto.html"><span>https://nodejs.org/docs/latest/api/crypto.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:dns</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/dns.html"><span>https://nodejs.org/docs/latest/api/dns.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/fs.html"><span>https://nodejs.org/docs/latest/api/fs.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:http</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/http.html"><span>https://nodejs.org/docs/latest/api/http.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:https</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/https.html"><span>https://nodejs.org/docs/latest/api/https.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:net</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/net.html"><span>https://nodejs.org/docs/latest/api/net.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/process.html"><span>https://nodejs.org/docs/latest/api/process.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:timers</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/timers.html"><span>https://nodejs.org/docs/latest/api/timers.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:tls</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/tls.html"><span>https://nodejs.org/docs/latest/api/tls.html</span></a><span> </span></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><a href="https://nodejs.org/docs/latest/api/zlib.html"><span>https://nodejs.org/docs/latest/api/zlib.html</span></a><span> </span></td>
  </tr>
</tbody></table></div><p>Each of these has been carefully implemented to approximate Node.js' behavior as closely as possible. Where matching <a href="http://nodejs.org"><u>Node.js</u></a>' behavior is not possible, our implementations will throw a clear error when called, rather than silently failing or not being present at all. This ensures that packages that check for the presence of these APIs will not break, even if the functionality is not available.</p><p>In some cases, we had to implement entirely new capabilities within the runtime in order to provide the necessary functionality. For <code>node:fs</code>, we added a new virtual file system within the Workers environment. In other cases, such as with <code>node:net</code>, <code>node:tls</code>, and <code>node:http</code>, we wrapped the new Node.js APIs around existing Workers capabilities such as the <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Sockets API</u></a> and <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/"><code><u>fetch</u></code></a>.</p><p>Most importantly, <b>all of these implementations are done natively in the Workers runtime</b>, using a combination of TypeScript and C++. Whereas our earlier Node.js compatibility efforts relied heavily on polyfills and shims injected at deployment time by developer tooling such as <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>, we are moving towards a model where future Workers will have these APIs available natively, without the need for any additional dependencies. This not only improves performance and reduces memory usage, but also ensures that the behavior is as close to Node.js as possible.</p>
    <div>
      <h2>The networking stack</h2>
      <a href="#the-networking-stack">
        
      </a>
    </div>
    <p>Node.js has a rich set of networking APIs that allow applications to create servers, make HTTP requests, work with raw TCP and UDP sockets, send DNS queries, and more. Workers do not have direct access to raw kernel-level sockets though, so how can we support these Node.js APIs so packages still work as intended? We decided to build on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>managed Sockets</u></a> and fetch APIs. These implementations allow many popular Node.js packages that rely on networking APIs to work seamlessly in the Workers environment.</p><p>Let's start with the HTTP APIs.</p>
    <div>
      <h3>HTTP client and server support</h3>
      <a href="#http-client-and-server-support">
        
      </a>
    </div>
    <p>From the moment we announced that we would be pursuing Node.js compatibility within Workers, users have been asking specifically for an implementation of the <code>node:http</code> module. There are countless modules in the ecosystem that depend directly on APIs like <code>http.get(...)</code> and <code>http.createServer(...)</code>.</p><p>The <code>node:http</code> and <code>node:https</code> modules provide APIs for creating HTTP clients and servers. <a href="https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/"><u>We have implemented both</u></a>, allowing you to create HTTP clients using <code>http.request()</code> and servers using <code>http.createServer()</code>. <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/"><u>The HTTP client implementation</u></a> is built on top of the Fetch API, while the HTTP server implementation is built on top of the Workers runtime’s existing request handling capabilities.</p><p>The client side is fairly straightforward:</p>
            <pre><code>import http from 'node:http';

export default {
  async fetch(request) {
    return new Promise((resolve, reject) =&gt; {
      const req = http.request('http://example.com', (res) =&gt; {
        let data = '';
        res.setEncoding('utf8');
        res.on('data', (chunk) =&gt; {
          data += chunk;
        });
        res.on('end', () =&gt; {
          resolve(new Response(data));
        });
      });
      req.on('error', (err) =&gt; {
        reject(err);
      });
      req.end();
    });
  }
}
</code></pre>
            <p>The server side is just as simple but likely even more exciting. We've often been asked about the possibility of supporting <a href="https://expressjs.com/"><u>Express</u></a>, or <a href="https://koajs.com/"><u>Koa</u></a>, or <a href="https://fastify.dev/"><u>Fastify</u></a> within Workers, but it was difficult to do because these were so dependent on the Node.js APIs. With the new additions it is now possible to use both Express and Koa within Workers, and we're hoping to be able to add Fastify support later. </p>
            <pre><code>import { createServer } from "node:http";
import { httpServerHandler } from "cloudflare:node";

const server = createServer((req, res) =&gt; {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from Node.js HTTP server!");
});

export default httpServerHandler(server);
</code></pre>
            <p>The <code>httpServerHandler()</code> function from the <code>cloudflare:node</code> module integrates the HTTP <code>server</code> with the Workers fetch event, allowing it to handle incoming requests.</p>
    <div>
      <h3>The <code>node:dns</code> module</h3>
      <a href="#the-node-dns-module">
        
      </a>
    </div>
    <p>The <code>node:dns</code> module provides an API for performing DNS queries. </p><p>At Cloudflare, we happen to have a <a href="https://developers.cloudflare.com/1.1.1.1/encryption/dns-over-https/"><u>DNS-over-HTTPS (DoH)</u></a> service and our own <a href="https://one.one.one.one/"><u>DNS service called 1.1.1.1</u></a>. We took advantage of this when exposing <code>node:dns</code> in Workers. When you use this module to perform a query, it will just make a subrequest to 1.1.1.1 to resolve the query. This way the user doesn’t have to think about DNS servers, and the query will just work.</p>
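<p>A sketch of what this looks like in practice (the hostname here is only illustrative):</p>

```javascript
import dns from 'node:dns';

const handler = {
  async fetch(request) {
    // In Workers, this query is served by a subrequest to 1.1.1.1,
    // not by a DNS server configured in the runtime.
    const addresses = await dns.promises.resolve4('cloudflare.com');
    return new Response(addresses.join('\n'));
  }
};

export default handler;
```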
    <div>
      <h3>The <code>node:net</code> and <code>node:tls</code> modules</h3>
      <a href="#the-node-net-and-node-tls-modules">
        
      </a>
    </div>
    <p>The <code>node:net</code> module provides an API for creating TCP sockets, while the <code>node:tls</code> module provides an API for creating secure TLS sockets. As we mentioned before, both are built on top of the existing <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets/"><u>Workers Sockets API</u></a>. Note that not all features of the <code>node:net</code> and <code>node:tls</code> modules are available in Workers. For instance, it is not yet possible to create a TCP server using <code>net.createServer()</code> (but maybe soon!), but we have implemented enough of the APIs to allow many popular packages that rely on these modules to work in Workers.</p>
            <pre><code>import net from 'node:net';
import tls from 'node:tls';

export default {
  async fetch(request) {
    const { promise, resolve } = Promise.withResolvers();
    const socket = net.connect({ host: 'example.com', port: 80 },
        () =&gt; {
      let buf = '';
      socket.setEncoding('utf8');
      socket.on('data', (chunk) =&gt; buf += chunk);
      socket.on('end', () =&gt; resolve(new Response(buf)));
      socket.end();
    });
    return promise;
  }
}
</code></pre>
            
    <div>
      <h2>A new virtual file system and the <code>node:fs</code> module</h2>
      <a href="#a-new-virtual-file-system-and-the-node-fs-module">
        
      </a>
    </div>
    <p>What does supporting filesystem APIs mean in a serverless environment? When you deploy a Worker, it runs in Region:Earth and we don’t want you to have to think about individual servers with individual file systems. There are, however, countless existing applications and modules in the ecosystem that leverage the file system to store configuration data, read and write temporary data, and more.</p><p>Workers do not have access to a traditional file system like a Node.js process does, and for good reason! A Worker does not run on a single machine; a single request to one Worker can run on any one of thousands of servers anywhere in Cloudflare's global <a href="https://www.cloudflare.com/network"><u>network</u></a>. Coordinating and synchronizing access to shared physical resources such as a traditional file system poses major technical challenges, including the risk of deadlocks; challenges inherent in any massively distributed system. Fortunately, Workers offer powerful tools like <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> that provide a solution for coordinating access to shared, durable state at scale. To address the need for a file system in Workers, we built on what already makes Workers great.</p><p>We implemented a virtual file system that allows you to use the <code>node:fs</code> APIs to read and write temporary, in-memory files. This virtual file system is specific to each Worker. When using a stateless Worker, files created in one request are not accessible in any other request. However, when using a Durable Object, this temporary file space can be shared across multiple requests from multiple users. 
This file system is ephemeral (for now), meaning that files are not persisted across Worker restarts or deployments, so it does not replace the use of the <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>Durable Object Storage</u></a> mechanism, but it provides a powerful new tool that greatly expands the capabilities of your Durable Objects.</p><p>The <code>node:fs</code> module provides a rich set of APIs for working with files and directories:</p>
            <pre><code>import fs from 'node:fs';

export default {
  async fetch(request) {
    // Write a temporary file
    await fs.promises.writeFile('/tmp/hello.txt', 'Hello, world!');

    // Read the file
    const data = await fs.promises.readFile('/tmp/hello.txt', 'utf-8');

    return new Response(`File contents: ${data}`);
  }
}
</code></pre>
            <p>The virtual file system supports a wide range of file operations, including reading and writing files, creating and removing directories, and working with file descriptors. It also supports standard input/output/error streams via <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code>, as well as symbolic links, streams, and more.</p><p>While the current implementation of the virtual file system is in-memory only, we are exploring options for adding persistent storage in the future that would link to existing Cloudflare storage solutions like <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2</a> or Durable Objects. But you don't have to wait on us! When combined with powerful tools like Durable Objects and <a href="https://developers.cloudflare.com/workers/runtime-apis/rpc/"><u>JavaScript RPC</u></a>, it's certainly possible to create your own general-purpose, durable file system abstraction backed by SQLite storage.</p>
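<p>For example, directory operations work the way they do in Node.js (a sketch; the paths and filenames are illustrative):</p>

```javascript
import fs from 'node:fs';

const handler = {
  async fetch(request) {
    // Create a directory, write a couple of files, and list them back.
    // In a stateless Worker this state lives only for the current request.
    await fs.promises.mkdir('/tmp/reports', { recursive: true });
    await fs.promises.writeFile('/tmp/reports/a.txt', 'alpha');
    await fs.promises.writeFile('/tmp/reports/b.txt', 'beta');
    const entries = await fs.promises.readdir('/tmp/reports');
    return new Response(entries.join(', '));
  }
};

export default handler;
```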
    <div>
      <h2>Cryptography with <code>node:crypto</code></h2>
      <a href="#cryptography-with-node-crypto">
        
      </a>
    </div>
    <p>The <code>node:crypto</code> module provides a comprehensive set of cryptographic functionality, including hashing, encryption, decryption, and more. We have implemented a full version of the <code>node:crypto</code> module, allowing you to use familiar cryptographic APIs in your Workers applications. There will be some differences in behavior compared to Node.js because Workers uses <a href="https://github.com/google/boringssl/blob/main/README.md"><u>BoringSSL</u></a> under the hood, while Node.js uses <a href="https://github.com/openssl"><u>OpenSSL</u></a>. However, we have strived to make the APIs as compatible as possible, and many popular packages that rely on <code>node:crypto</code> now work seamlessly in Workers.</p><p>To accomplish this, we didn't just copy the implementation of these cryptographic operations from Node.js. Rather, we worked within the Node.js project to extract the core crypto functionality out into a separate dependency project called <a href="https://github.com/nodejs/ncrypto"><code><u>ncrypto</u></code></a> that is used – not only by Workers but Bun as well – to implement Node.js compatible functionality by simply running the exact same code that Node.js is running.</p>
            <pre><code>import crypto from 'node:crypto';

export default {
  async fetch(request) {
    const hash = crypto.createHash('sha256');
    hash.update('Hello, world!');
    const digest = hash.digest('hex');

    return new Response(`SHA-256 hash: ${digest}`);
  }
}
</code></pre>
            <p>All major capabilities of the <code>node:crypto</code> module are supported, including:</p><ul><li><p>Hashing (e.g., SHA-256, SHA-512)</p></li><li><p>HMAC</p></li><li><p>Symmetric encryption/decryption</p></li><li><p>Asymmetric encryption/decryption</p></li><li><p>Digital signatures</p></li><li><p>Key generation and management</p></li><li><p>Random byte generation</p></li><li><p>Key derivation functions (e.g., PBKDF2, scrypt)</p></li><li><p>Cipher and Decipher streams</p></li><li><p>Sign and Verify streams</p></li><li><p>KeyObject class for managing keys</p></li><li><p>Certificate handling (e.g., X.509 certificates)</p></li><li><p>Support for various encoding formats (e.g., PEM, DER, base64)</p></li><li><p>and more…</p></li></ul>
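<p>For instance, HMAC works just as it does in Node.js (a sketch; the key and message here are placeholders):</p>

```javascript
import crypto from 'node:crypto';

const handler = {
  async fetch(request) {
    // Keyed hashing with HMAC-SHA256; 'a-secret-key' is a placeholder,
    // not a real secret-management recommendation.
    const hmac = crypto.createHmac('sha256', 'a-secret-key');
    hmac.update('Hello, world!');
    return new Response(`HMAC: ${hmac.digest('hex')}`);
  }
};

export default handler;
```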
    <div>
      <h2>Process &amp; Environment</h2>
      <a href="#process-environment">
        
      </a>
    </div>
    <p>In Node.js, the <code>node:process</code> module provides a global object that gives information about, and control over, the current Node.js process. It includes properties and methods for accessing environment variables, command-line arguments, the current working directory, and more. It is one of the most fundamental modules in Node.js, and many packages rely on it for basic functionality and simply assume its presence. There are, however, some aspects of the <code>node:process</code> module that do not make sense in the Workers environment, such as process IDs and user/group IDs which are tied to the operating system and process model of a traditional server environment and have no equivalent in the Workers environment.</p><p>When <code>nodejs_compat</code> is enabled, the <code>process</code> global will be available in your Worker scripts or you can import it directly via <code>import process from 'node:process'</code>. Note that the <code>process</code> global is only available when the <code>nodejs_compat</code> flag is enabled. If you try to access <code>process</code> without the flag, it will be <code>undefined</code> and the import will throw an error.</p><p>Let's take a look at the <code>process</code> APIs that do make sense in Workers, and that have been fully implemented, starting with <code>process.env</code>.</p>
    <div>
      <h3>Environment variables</h3>
      <a href="#environment-variables">
        
      </a>
    </div>
    <p>Workers have had <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>support for environment variables</u></a> for a while now, but previously they were only accessible via the env argument passed to the Worker function. Accessing the environment at the top-level of a Worker was not possible:</p>
            <pre><code>export default {
  async fetch(request, env) {
    const config = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p> With the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><code><u>new process.env</u></code><u> implementation</u></a>, you can now access environment variables in a more familiar way, just like in Node.js, and at any scope, including the top-level of your Worker:</p>
            <pre><code>import process from 'node:process';
const config = process.env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request, env) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}
</code></pre>
            <p><a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>Environment variables</u></a> are set in the same way as before, via the <code>wrangler.toml</code> or <code>wrangler.jsonc</code> configuration file, or via the Cloudflare dashboard or API. They may be set as simple key-value pairs or as JSON objects:</p>
            <pre><code>{
  "name": "my-worker-dev",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    "nodejs_compat"
  ],
  "vars": {
    "API_HOST": "example.com",
    "API_ACCOUNT_ID": "example_user",
    "SERVICE_X_DATA": {
      "URL": "service-x-api.dev.example",
      "MY_ID": 123
    }
  }
}
</code></pre>
            <p>When accessed via <code>process.env</code>, all environment variable values are strings, just like in Node.js.</p><p>Because <code>process.env</code> is accessible at the global scope, it is important to note that environment variables are accessible from anywhere in your Worker script, including third-party libraries that you may be using. This is consistent with Node.js behavior, but it is something to be aware of from a security and configuration management perspective. The <a href="https://developers.cloudflare.com/secrets-store/"><u>Cloudflare Secrets Store</u></a> can provide enhanced handling around secrets within Workers as an alternative to using environment variables.</p>
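<p>One practical consequence: a structured variable like <code>SERVICE_X_DATA</code> from the sample configuration above arrives through <code>process.env</code> as a JSON string, so it must be parsed before use. A sketch using the names from that config:</p>

```javascript
import process from 'node:process';

const handler = {
  async fetch(request) {
    // process.env values are always strings, so structured vars from
    // the wrangler config must be parsed explicitly.
    const serviceX = JSON.parse(process.env.SERVICE_X_DATA ?? '{}');
    return new Response(`Service URL: ${serviceX.URL ?? '(unset)'}`);
  }
};

export default handler;
```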
    <div>
      <h4>Importable environment and waitUntil</h4>
      <a href="#importable-environment-and-waituntil">
        
      </a>
    </div>
    <p>Even when the <code>nodejs_compat</code> flag is not in use, we decided to go a step further and make it possible to import both the environment and the <a href="https://developers.cloudflare.com/workers/configuration/environment-variables/"><u>waitUntil mechanism</u></a> as a module, rather than forcing users to always access them via the <code>env</code> and <code>ctx</code> arguments passed to the Worker function. This can make it easier to access the environment in a more modular way, and can help to avoid passing the <code>env</code> argument through multiple layers of function calls. This is not a Node.js-compatibility feature, but we believe it is a useful addition to the Workers environment:</p>
            <pre><code>import { env, waitUntil } from 'cloudflare:workers';

const config = env.MY_ENVIRONMENT_VARIABLE;

export default {
  async fetch(request) {
    // You can still access env here if you need to
    const configFromEnv = env.MY_ENVIRONMENT_VARIABLE;
    // ...
  }
}

function doSomething() {
  // Bindings and waitUntil can now be accessed without
  // passing the env and ctx through every function call.
  waitUntil(env.RPC.doSomethingRemote());
}
</code></pre>
            <p>One important note about <code>process.env</code>: changes to environment variables via <code>process.env</code> will not be reflected in the <code>env</code> argument passed to the Worker function, and vice versa. The <code>process.env</code> is populated at the start of the Worker execution and is not updated dynamically. This is consistent with Node.js behavior, where changes to <code>process.env</code> do not affect the actual environment variables of the running process. We did this to minimize the risk that a third-party library, originally meant to run in Node.js, could inadvertently modify the environment assumed by the rest of the Worker code.</p>
    <div>
      <h3>Stdin, stdout, stderr</h3>
      <a href="#stdin-stdout-stderr">
        
      </a>
    </div>
    <p>Workers do not have traditional standard input/output/error streams like a Node.js process does. However, we have implemented <code>process.stdin</code>, <code>process.stdout</code>, and <code>process.stderr</code> as stream-like objects that can be used similarly. These streams are not connected to an actual process's stdin and stdout, but output written to them is captured in the same way as <code>console.log</code> and friends; just like them, it will show up in <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs/"><u>Workers Logs</u></a>.</p><p>The <code>process.stdout</code> and <code>process.stderr</code> objects are Node.js writable streams:</p>
            <pre><code>import process from 'node:process';

export default {
  async fetch(request) {
    process.stdout.write('This will appear in the Worker logs\n');
    process.stderr.write('This will also appear in the Worker logs\n');
    return new Response('Hello, world!');
  }
}
</code></pre>
            <p>Support for <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> is also integrated with the virtual file system, allowing you to write to the standard file descriptors <code>0</code>, <code>1</code>, and <code>2</code> (representing <code>stdin</code>, <code>stdout</code>, and <code>stderr</code> respectively) using the <code>node:fs</code> APIs:</p>
            <pre><code>import fs from 'node:fs';
import process from 'node:process';

export default {
  async fetch(request) {
    // Write to stdout
    fs.writeSync(process.stdout.fd, 'Hello, stdout!\n');
    // Write to stderr
    fs.writeSync(process.stderr.fd, 'Hello, stderr!\n');

    return new Response('Check the logs for stdout and stderr output!');
  }
}
</code></pre>
            
    <div>
      <h3>Other process APIs</h3>
      <a href="#other-process-apis">
        
      </a>
    </div>
    <p>We cannot cover every <code>node:process</code> API in detail here, but here are some of the other notable APIs that we have implemented:</p><ul><li><p><code>process.nextTick(fn)</code>: Schedules a callback to be invoked after the current execution context completes. Our implementation uses the same microtask queue as promises so that it behaves exactly the same as <code>queueMicrotask(fn)</code>.</p></li><li><p><code>process.cwd()</code> and <code>process.chdir()</code>: Get and change the current virtual working directory. The current working directory is initialized to <code>/bundle</code> when the Worker starts, and every request has its own isolated view of the current working directory. Changing the working directory in one request does not affect the working directory in other requests.</p></li><li><p><code>process.exit()</code>: Immediately terminates the current Worker request execution. This is unlike Node.js where <code>process.exit()</code> terminates the entire process. In Workers, calling <code>process.exit()</code> will stop execution of the current request and return an error response to the client.</p></li></ul>
    <div>
      <h2>Compression with <code>node:zlib</code></h2>
      <a href="#compression-with-node-zlib">
        
      </a>
    </div>
    <p>The <code>node:zlib</code> module provides APIs for compressing and decompressing data using various algorithms such as gzip, deflate, and brotli. We have implemented the <code>node:zlib</code> module, allowing you to use familiar compression APIs in your Workers applications. This enables a wide range of use cases, including data compression for network transmission, response optimization, and archive handling.</p>
            <pre><code>import zlib from 'node:zlib';

export default {
  async fetch(request) {
    const input = 'Hello, world! Hello, world! Hello, world!';
    const compressed = zlib.gzipSync(input);
    const decompressed = zlib.gunzipSync(compressed).toString('utf-8');

    return new Response(`Decompressed data: ${decompressed}`);
  }
}
</code></pre>
            <p>While Workers has had built-in support for gzip and deflate compression via the <a href="https://compression.spec.whatwg.org/"><u>Web Platform Standard Compression API</u></a>, the <code>node:zlib</code> module adds support for the Brotli compression algorithm, as well as a more familiar API for Node.js developers.</p>
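<p>A quick sketch of the Brotli round trip, using the same <code>node:zlib</code> API shape as the gzip example above:</p>

```javascript
import zlib from 'node:zlib';

const handler = {
  async fetch(request) {
    const input = 'Hello, world! Hello, world! Hello, world!';
    // Brotli compression and decompression via node:zlib
    const compressed = zlib.brotliCompressSync(input);
    const decompressed = zlib.brotliDecompressSync(compressed).toString('utf-8');
    return new Response(`Round trip ok: ${decompressed === input}`);
  }
};

export default handler;
```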
    <div>
      <h2>Timing &amp; scheduling</h2>
      <a href="#timing-scheduling">
        
      </a>
    </div>
    <p>Node.js provides a set of timing and scheduling APIs via the <code>node:timers</code> module. We have implemented these in the runtime as well.</p>
            <pre><code>import timers from 'node:timers';

export default {
  async fetch(request) {
    timers.setInterval(() =&gt; {
      console.log('This will log every half-second');
    }, 500);

    timers.setImmediate(() =&gt; {
      console.log('This will log immediately after the current event loop');
    });

    return new Promise((resolve) =&gt; {
      timers.setTimeout(() =&gt; {
        resolve(new Response('Hello after 1 second!'));
      }, 1000);
    });
  }
}
</code></pre>
            <p>The Node.js implementations of the timers APIs are very similar to the standard Web Platform APIs, with one key difference: the Node.js timers APIs return <code>Timeout</code> objects that can be used to manage the timers after they have been created. We have implemented the <code>Timeout</code> class in Workers to provide this functionality, allowing you to clear or refresh timers as needed.</p>
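<p>Here is a sketch of managing a returned <code>Timeout</code>: as in Node.js, <code>refresh()</code> restarts the countdown, and <code>clearTimeout()</code> cancels it.</p>

```javascript
import timers from 'node:timers';

const handler = {
  async fetch(request) {
    return new Promise((resolve) => {
      // setTimeout returns a Timeout object, not a bare number
      const timeout = timers.setTimeout(() => {
        resolve(new Response('fired'));
      }, 100);

      // The Timeout object can be managed after creation:
      timeout.refresh();               // restart the countdown from now
      // timers.clearTimeout(timeout); // ...or cancel it entirely
    });
  }
};

export default handler;
```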
    <div>
      <h2>Console</h2>
      <a href="#console">
        
      </a>
    </div>
    <p>The <code>node:console</code> module provides a set of console logging APIs that are similar to the standard <code>console</code> global, but with some additional features. We have implemented the <code>node:console</code> module as a thin wrapper around the existing <code>globalThis.console</code> that is already available in Workers.</p>
    <div>
      <h2>How to enable the Node.js compatibility features</h2>
      <a href="#how-to-enable-the-node-js-compatibility-features">
        
      </a>
    </div>
    <p>To enable the Node.js compatibility features as a whole within your Workers, you can set the <code>nodejs_compat</code> <a href="https://developers.cloudflare.com/workers/configuration/compatibility-flags/"><u>compatibility flag</u></a> in your <a href="https://developers.cloudflare.com/workers/wrangler/configuration/"><code><u>wrangler.jsonc or wrangler.toml</u></code></a> configuration file. If you are not using Wrangler, you can also set the flag via the <a href="https://dash.cloudflare.com"><u>Cloudflare dashboard</u></a> or API:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-21",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
  ]
}
</code></pre>
            <p><b>The compatibility date here is key! Update that to the most current date, and you'll always be able to take advantage of the latest and greatest features.</b></p><p>The <code>nodejs_compat</code> flag is an umbrella flag that enables all the Node.js compatibility features at once. This is the recommended way to enable Node.js compatibility, as it ensures that all features are available and work together seamlessly. However, if you prefer, you can also enable or disable some features individually via their own compatibility flags:</p>
<div><table><thead>
  <tr>
    <th><span>Module</span></th>
    <th><span>Enable Flag (default)</span></th>
    <th><span>Disable Flag</span></th>
  </tr></thead>
<tbody>
  <tr>
    <td><span>node:console</span></td>
    <td><span>enable_nodejs_console_module</span></td>
    <td><span>disable_nodejs_console_module</span></td>
  </tr>
  <tr>
    <td><span>node:fs</span></td>
    <td><span>enable_nodejs_fs_module</span></td>
    <td><span>disable_nodejs_fs_module</span></td>
  </tr>
  <tr>
    <td><span>node:http (client)</span></td>
    <td><span>enable_nodejs_http_modules</span></td>
    <td><span>disable_nodejs_http_modules</span></td>
  </tr>
  <tr>
    <td><span>node:http (server)</span></td>
    <td><span>enable_nodejs_http_server_modules</span></td>
    <td><span>disable_nodejs_http_server_modules</span></td>
  </tr>
  <tr>
    <td><span>node:os</span></td>
    <td><span>enable_nodejs_os_module</span></td>
    <td><span>disable_nodejs_os_module</span></td>
  </tr>
  <tr>
    <td><span>node:process</span></td>
    <td><span>enable_nodejs_process_v2</span></td>
    <td></td>
  </tr>
  <tr>
    <td><span>node:zlib</span></td>
    <td><span>nodejs_zlib</span></td>
    <td><span>no_nodejs_zlib</span></td>
  </tr>
  <tr>
    <td><span>process.env</span></td>
    <td><span>nodejs_compat_populate_process_env</span></td>
    <td><span>nodejs_compat_do_not_populate_process_env</span></td>
  </tr>
</tbody></table></div><p>By separating these features, you get more granular control over which Node.js APIs are available in your Workers. We initially rolled these features out under the single <code>nodejs_compat</code> flag, but we quickly realized that some users perform feature detection based on the presence of certain modules and APIs, and that enabling everything all at once risked breaking some existing Workers. Users who check for the existence of these APIs manually can ensure new changes don’t break their Workers by opting out of specific APIs:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // But disable the `node:zlib` module if necessary
    "no_nodejs_zlib",
  ]
}
</code></pre>
            <p>But, to keep things simple, <b>we recommend starting with the </b><code><b>nodejs_compat</b></code><b> flag, which will enable everything. You can always disable individual features later if needed.</b> There is no performance penalty to having the additional features enabled.</p>
    <div>
      <h3>Handling end-of-life'd APIs</h3>
      <a href="#handling-end-of-lifed-apis">
        
      </a>
    </div>
    <p>One important difference between Node.js and Workers is that Node.js has a <a href="https://nodejs.org/en/eol"><u>defined long term support (LTS) schedule</u></a> that allows it to make breaking changes at certain points in time. More specifically, Node.js can remove APIs and features when they reach end-of-life (EOL). On Workers, however, we have a rule that once a Worker is deployed, <a href="https://blog.cloudflare.com/backwards-compatibility-in-cloudflare-workers/"><u>it will continue to run as-is indefinitely</u></a>, without any breaking changes as long as the compatibility date does not change. This means that we cannot simply remove APIs when they reach EOL in Node.js, since this would break existing Workers. To address this, we have introduced a new set of compatibility flags that allow users to specify that they do not want the <code>nodejs_compat</code> features to include end-of-life APIs. These flags are based on the Node.js major version in which the APIs were removed:</p><p>The <code>remove_nodejs_compat_eol</code> flag will remove all APIs that have reached EOL up to your current compatibility date:</p>
            <pre><code>{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2025-09-15",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // Remove Node.js APIs that have reached EOL up to your
    // current compatibility date
    "remove_nodejs_compat_eol",
  ]
}
</code></pre>
            <ul><li><p>The <code>remove_nodejs_compat_eol_v22</code> flag will remove all APIs that reached EOL in Node.js v22. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v22's EOL date (April 30, 2027).</p></li><li><p>The <code>remove_nodejs_compat_eol_v23</code> flag will remove all APIs that reached EOL in Node.js v23. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li><li><p>The <code>remove_nodejs_compat_eol_v24</code> flag will remove all APIs that reached EOL in Node.js v24. When using <code>remove_nodejs_compat_eol</code>, this flag will be automatically enabled if your compatibility date is set to a date after Node.js v24's EOL date (April 30, 2028).</p></li></ul><p>If you look at the date for <code>remove_nodejs_compat_eol_v23</code>, you'll notice that it is the same as the date for <code>remove_nodejs_compat_eol_v24</code>. That is not a typo! Node.js v23 is not an LTS release, and as such it has a very short support window: it was released in October 2024 and reached EOL in June 2025. Accordingly, we have decided to group the end-of-life handling of non-LTS releases into the next LTS release. This means that when you set your compatibility date to a date after the EOL date for Node.js v24, you will also be opting out of the APIs that reached EOL in Node.js v23. Importantly, these flags will not be automatically enabled until your compatibility date is set to a date after the relevant Node.js version's EOL date, ensuring that existing Workers will have plenty of time to migrate before any APIs are removed, or can simply keep using the older APIs indefinitely by using the reverse compatibility flags like <code>add_nodejs_compat_eol_v24</code>.</p>
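<p>The reverse flags work like the other opt-outs. For example, a Worker whose compatibility date falls after Node.js v24's EOL date can keep the older APIs by opting back in explicitly (the date below is illustrative):</p>

```jsonc
{
  "name": "my-worker",
  "main": "src/index.js",
  "compatibility_date": "2028-05-01",
  "compatibility_flags": [
    // Get everything Node.js compatibility related
    "nodejs_compat",
    // Keep the APIs that reached EOL in Node.js v24, even though
    // the compatibility date is past v24's EOL date
    "add_nodejs_compat_eol_v24",
  ]
}
```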
    <div>
      <h2>Giving back</h2>
      <a href="#giving-back">
        
      </a>
    </div>
    <p>One other important bit of work that we have been doing is expanding Cloudflare's investment back into the Node.js ecosystem as a whole. There are now five members of the Workers runtime team (plus one summer intern) actively contributing to the <a href="https://github.com/nodejs/node"><u>Node.js project</u></a> on GitHub, two of whom are members of Node.js' Technical Steering Committee. While we have made a number of new feature contributions, such as an implementation of the Web Platform Standard <a href="https://blog.cloudflare.com/improving-web-standards-urlpattern/"><u>URLPattern</u></a> API and an improved implementation of <a href="https://github.com/nodejs/ncrypto"><u>crypto</u></a> operations, our primary focus has been on improving the ability for other runtimes to interoperate and be compatible with Node.js, fixing critical bugs, and improving performance. As we continue to grow our efforts around Node.js compatibility, we will also grow our contributions back to the project and ecosystem as a whole.</p>
<div><table><tbody>
  <tr>
    <td><span>Aaron Snell</span></td>
    <td><span>2025 Summer Intern, Cloudflare Containers</span><br /><span>Node.js Web Infrastructure Team</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2ud1DF6HOI3ha2ySAhPOve/803132cf224695a48698afb806bf147b/Aaron.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/flakey5"><span>flakey5</span></a></td>
  </tr>
  <tr>
    <td><span>Dario Piotrowicz</span></td>
    <td><span>Senior System Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4K17bsjek1z4u2KRTtZ8uS/d7058dea515cb057a1727bcd01a0f5d2/Dario.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/dario-piotrowicz"><span>dario-piotrowicz</span></a></td>
  </tr>
  <tr>
    <td><span>Guy Bedford</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js Collaborator</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/iYM8oWWSK89MesmQwctfc/4d86847238b1f10e18717771e2ad5ee8/Guy.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/guybedford"><span>guybedford</span></a></td>
  </tr>
  <tr>
    <td><span>James Snell</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4vN2YAqsEBlSnWtXRM0pTT/5e9130753ed71933fc94bc2c634425f3/James.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/jasnell"><span>jasnell</span></a></td>
  </tr>
  <tr>
    <td><span>Nicholas Paun</span></td>
    <td><span>Systems Engineer</span><br /><span>Node.js Contributor</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/4ePtfLAzk4pKYi4hU4dRLX/e4dcdfe86a4e54c4d02e356e2078d214/Nicholas.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/npaun"><span>npaun</span></a></td>
  </tr>
  <tr>
    <td><span>Yagiz Nizipli</span></td>
    <td><span>Principal Systems Engineer</span><br /><span>Node.js TSC</span></td>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nvpEqU0VHi3Se9fxJ5vE8/0f5628bc1756c7e3e363760be9c493ae/Yagiz.png?h=250" /></td>
  </tr>
  <tr>
    <td><img src="https://images.ctfassets.net/zkvhlag99gkb/2nqff7ZSEryQfXbl2OdwfJ/6b4a56a3e71f439032d3bc0413d2d72f/GitHub.png?h=250" /></td>
    <td><a href="https://github.com/anonrig"><span>anonrig</span></a></td>
  </tr>
</tbody></table></div><p>Cloudflare is also proud to continue supporting critical infrastructure for the Node.js project through its <a href="https://openjsf.org/blog/openjs-cloudflare-partnership"><u>ongoing strategic partnership</u></a> with the OpenJS Foundation, providing the project with free access to services such as Workers, R2, DNS, and more.</p>
    <div>
      <h2>Give it a try!</h2>
      <a href="#give-it-a-try">
        
      </a>
    </div>
    <p>Our vision for Node.js compatibility in Workers is not just about implementing individual APIs, but about creating a comprehensive platform that allows developers to run existing Node.js code seamlessly in the Workers environment. This involves not only implementing the APIs themselves, but also ensuring that they work together harmoniously, and that they integrate well with the unique aspects of the Workers platform.</p><p>In some cases, such as with <code>node:fs</code> and <code>node:crypto</code>, we have had to implement entirely new capabilities that were not previously available in Workers and did so at the native runtime level. This allows us to tailor the implementations to the unique aspects of the Workers environment and ensure both performance and security.</p><p>And we're not done yet. We are continuing to work on implementing additional Node.js APIs, as well as improving the performance and compatibility of the existing implementations. We are also actively engaging with the community to understand their needs and priorities, and to gather feedback on our implementations. If there are specific Node.js APIs or npm packages that you would like to see supported in Workers, <a href="https://github.com/cloudflare/workerd/"><u>please let us know</u></a>! If there are any issues or bugs you encounter, please report them on our <a href="https://github.com/cloudflare/workerd/"><u>GitHub repository</u></a>. While we might not be able to implement every single Node.js API, nor match Node.js' behavior exactly in every case, we are committed to providing a robust and comprehensive Node.js compatibility layer that meets the needs of the community.</p><p>All the Node.js compatibility features described in this post are <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>available now</u></a>. 
To get started, simply enable the <code>nodejs_compat</code> compatibility flag in your <code>wrangler.toml</code> or <code>wrangler.jsonc</code> file, or via the Cloudflare dashboard or API. You can then start using the Node.js APIs in your Workers applications right away.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Servers]]></category>
            <guid isPermaLink="false">rMNgTNdCcEh6MjAlrKkL3</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Bringing Node.js HTTP servers to Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/bringing-node-js-http-servers-to-cloudflare-workers/</link>
            <pubDate>Mon, 08 Sep 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ We've implemented the node:http client and server APIs in Cloudflare Workers, allowing developers to migrate existing Node.js applications with minimal code changes. ]]></description>
            <content:encoded><![CDATA[ <p>We’re making it easier to run your Node.js applications on <a href="https://www.cloudflare.com/developer-platform/products/workers/"><u>Cloudflare Workers </u></a>by adding support for the <code>node:http</code> client and server APIs. This significant addition brings familiar Node.js HTTP interfaces to the edge, enabling you to deploy existing Express.js, Koa, and other Node.js applications globally with zero cold starts, automatic scaling, and significantly lower latency for your users — all without rewriting your codebase. Whether you're looking to migrate legacy applications to a modern serverless platform or build new ones using the APIs you already know, you can now leverage Workers' global network while maintaining your existing development patterns and frameworks.</p>
    <div>
      <h2>The Challenge: Node.js-style HTTP in a Serverless Environment</h2>
      <a href="#the-challenge-node-js-style-http-in-a-serverless-environment">
        
      </a>
    </div>
    <p>Cloudflare Workers operate in a unique <a href="https://www.cloudflare.com/learning/serverless/what-is-serverless/"><u>serverless</u></a> environment where direct TCP connections aren't available. Instead, all networking operations are fully managed by specialized services outside the Workers runtime itself — systems like our <a href="https://blog.cloudflare.com/introducing-oxy/"><u>Open Egress Router (OER)</u></a> and <a href="https://github.com/cloudflare/pingora"><u>Pingora</u></a> that handle connection pooling, keeping connections warm, managing egress IPs, and all the complex networking details. This means as a developer, you don't need to worry about TLS negotiation, connection management, or network optimization — it's all handled for you automatically.</p><p>This fully-managed approach is actually why we can't support certain Node.js APIs — these networking decisions are handled at the system level for performance and security. While this makes Workers different from traditional Node.js environments, it also makes them better for serverless computing — you get enterprise-grade networking without the complexity.</p><p>This fundamental difference required us to rethink how HTTP APIs work at the edge while maintaining compatibility with existing Node.js code patterns.</p><p>Our solution: we've implemented the core <code>node:http</code> APIs by building on top of the web-standard technologies that Workers already excel at. Here's how it works:</p>
    <div>
      <h3>HTTP Client APIs</h3>
      <a href="#http-client-apis">
        
      </a>
    </div>
    <p>The <code>node:http</code> client implementation includes the essential APIs you're familiar with:</p><ul><li><p><code>http.get()</code> - For simple GET requests</p></li><li><p><code>http.request()</code> - For full control over HTTP requests</p></li></ul><p>Our implementations of these APIs are built on top of the standard <a href="https://developer.mozilla.org/en-US/docs/Web/API/Fetch_API"><code><u>fetch()</u></code></a> API that Workers use natively, providing excellent performance while maintaining Node.js compatibility.</p>
            <pre><code>import http from 'node:http';

export default {
  async fetch(request) {
    // Use familiar Node.js HTTP client APIs
    const { promise, resolve, reject } = Promise.withResolvers();

    const req = http.get('https://api.example.com/data', (res) =&gt; {
      let data = '';
      res.on('data', chunk =&gt; data += chunk);
      res.on('end', () =&gt; {
        resolve(new Response(data, {
          headers: { 'Content-Type': 'application/json' }
        }));
      });
    });

    req.on('error', reject);

    return promise;
  }
};</code></pre>
            
    <div>
      <h3>What's Supported</h3>
      <a href="#whats-supported">
        
      </a>
    </div>
    <ul><li><p>Standard HTTP methods (GET, POST, PUT, DELETE, etc.)</p></li><li><p>Request and response headers</p></li><li><p>Request and response bodies</p></li><li><p>Streaming responses</p></li><li><p>Basic authentication</p></li></ul>
    <div>
      <h3>Current Limitations</h3>
      <a href="#current-limitations">
        
      </a>
    </div>
    <ul><li><p>The <a href="https://nodejs.org/api/http.html#class-httpagent"><code><u>Agent</u></code></a> API is provided but operates as a no-op.</p></li><li><p><a href="https://nodejs.org/docs/v22.19.0/api/http.html#responseaddtrailersheaders"><u>Trailers</u></a>, <a href="https://nodejs.org/docs/v22.19.0/api/http.html#responsewriteearlyhintshints-callback"><u>early hints</u></a>, and <a href="https://nodejs.org/docs/v22.19.0/api/http.html#event-continue"><u>1xx responses</u></a> are not supported.</p></li><li><p>TLS-specific options are not supported (Workers handle TLS automatically).</p></li></ul>
    <div>
      <h2>HTTP Server APIs</h2>
      <a href="#http-server-apis">
        
      </a>
    </div>
    <p>The server-side implementation is where things get particularly interesting. Since Workers can't create traditional TCP servers listening on specific ports, we've created a bridge system that connects Node.js-style servers to the Workers request handling model.</p><p>When you create an HTTP server and call <code>listen(port)</code>, instead of opening a TCP socket, the server is registered in an internal table within your Worker. This internal table acts as a bridge between <code>http.createServer()</code> servers and incoming <code>fetch</code> requests, using the port number as the identifier. You then use one of two methods to bridge incoming Worker requests to your Node.js-style server.</p>
    <div>
      <h3>Manual Integration with <code>handleAsNodeRequest</code></h3>
      <a href="#manual-integration-with-handleasnoderequest">
        
      </a>
    </div>
    <p>This approach gives you the flexibility to integrate Node.js HTTP servers with other Worker features, and allows you to have multiple handlers in your default <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/service-bindings/rpc/"><u>entrypoint</u></a> such as <code>fetch</code>, <code>scheduled</code>, <code>queue</code>, etc.</p>
            <pre><code>import { handleAsNodeRequest } from 'cloudflare:node';
import { createServer } from 'node:http';

// Create a traditional Node.js HTTP server
const server = createServer((req, res) =&gt; {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js HTTP server!');
});

// Register the server (doesn't actually bind to port 8080)
server.listen(8080);

// Bridge from Workers fetch handler to Node.js server
export default {
  async fetch(request) {
    // You can add custom logic here before forwarding
    if (request.url.includes('/admin')) {
      return new Response('Admin access', { status: 403 });
    }

    // Forward to the Node.js server
    return handleAsNodeRequest(8080, request);
  },
  async queue(batch, env, ctx) {
    for (const msg of batch.messages) {
      msg.retry();
    }
  },
  async scheduled(controller, env, ctx) {
    ctx.waitUntil(doSomeTaskOnSchedule(controller));
  },
};</code></pre>
            <p>This approach is perfect when you need to:</p><ul><li><p>Integrate with other Workers features like <a href="https://www.cloudflare.com/developer-platform/products/workers-kv/"><u>KV</u></a>, <a href="https://www.cloudflare.com/developer-platform/products/durable-objects/"><u>Durable Objects</u></a>, or <a href="https://www.cloudflare.com/developer-platform/products/r2/"><u>R2</u></a></p></li><li><p>Handle some routes differently while delegating others to the Node.js server</p></li><li><p>Apply custom middleware or request processing</p></li></ul>
    <div>
      <h3>Automatic Integration with <code>httpServerHandler</code></h3>
      <a href="#automatic-integration-with-httpserverhandler">
        
      </a>
    </div>
    <p>For use cases where you want to integrate a Node.js HTTP server without any additional features or complexity, you can use the <code>httpServerHandler</code> function. This function automatically handles the integration for you. This solution is ideal for applications that don’t need Workers-specific features.</p>
            <pre><code>import { httpServerHandler } from 'cloudflare:node';
import { createServer } from 'node:http';

// Create your Node.js HTTP server
const server = createServer((req, res) =&gt; {
  if (req.url === '/') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('&lt;h1&gt;Welcome to my Node.js app on Workers!&lt;/h1&gt;');
  } else if (req.url === '/api/status') {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ status: 'ok', timestamp: Date.now() }));
  } else {
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not Found');
  }
});

server.listen(8080);

// Export the server as a Workers handler
export default httpServerHandler({ port: 8080 });
// Or you can simply pass the http.Server instance directly:
// export default httpServerHandler(server);</code></pre>
            
    <div>
      <h2><a href="https://expressjs.com/"><u>Express.js</u></a>, <a href="https://koajs.com/"><u>Koa.js</u></a> and Framework Compatibility</h2>
      <a href="#and-framework-compatibility">
        
      </a>
    </div>
    <p>These HTTP APIs open the door to running popular Node.js frameworks like Express.js on Workers. If any of the middlewares for these frameworks don’t work as expected, please <a href="https://github.com/cloudflare/workerd/issues"><u>open an issue</u></a> in the Cloudflare Workers repository.</p>
            <pre><code>import { httpServerHandler } from 'cloudflare:node';
import express from 'express';

const app = express();

app.get('/', (req, res) =&gt; {
  res.json({ message: 'Express.js running on Cloudflare Workers!' });
});

app.get('/api/users/:id', (req, res) =&gt; {
  res.json({
    id: req.params.id,
    name: 'User ' + req.params.id
  });
});

app.listen(3000);
export default httpServerHandler({ port: 3000 });
// Or you can simply pass the http.Server instance directly:
// export default httpServerHandler(app.listen(3000));</code></pre>
            <p>In addition to <a href="https://expressjs.com"><u>Express.js</u></a>, <a href="https://koajs.com/"><u>Koa.js</u></a> is also supported:</p>
            <pre><code>import Koa from 'koa';
import { httpServerHandler } from 'cloudflare:node';

const app = new Koa()

app.use(async ctx =&gt; {
  ctx.body = 'Hello World';
});

app.listen(8080);

export default httpServerHandler({ port: 8080 });</code></pre>
            
    <div>
      <h2>Getting started with serverless Node.js applications</h2>
      <a href="#getting-started-with-serverless-applications">
        
      </a>
    </div>
    <p>The <code>node:http</code> and <code>node:https</code> APIs are available in Workers with Node.js compatibility enabled using the <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/#nodejs-compatibility-flag"><code><u>nodejs_compat</u></code></a> compatibility flag with a compatibility date of 2025-08-15 or later.</p><p>The addition of <code>node:http</code> support brings us closer to our goal of making Cloudflare Workers the best platform for running JavaScript at the edge, whether you're building new applications or migrating existing ones.</p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/nodejs-http-server-template"><img src="https://deploy.workers.cloudflare.com/button" /></a>
<p></p><p>Ready to try it out? <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>Enable Node.js compatibility</u></a> in your Worker and start exploring the possibilities of familiar<a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/http/"><u> HTTP APIs at the edge</u></a>.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Servers]]></category>
            <guid isPermaLink="false">k5sD9WGL8BsJPuqsJj6Fn</guid>
            <dc:creator>Yagiz Nizipli</dc:creator>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[New URLPattern API brings improved pattern matching to Node.js and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/improving-web-standards-urlpattern/</link>
            <pubDate>Mon, 24 Mar 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Today we're announcing our latest contribution to Node.js, now available in v23.8.0: URLPattern.  ]]></description>
            <content:encoded><![CDATA[ <p>Today, we are excited to announce that we have contributed an implementation of the <a href="https://urlpattern.spec.whatwg.org"><u>URLPattern</u></a> API to Node.js, and it is available starting with <a href="https://nodejs.org/en/blog/release/v23.8.0"><u>the v23.8.0 update</u></a>. We've done this by adding our URLPattern implementation to <a href="https://github.com/ada-url/ada"><u>Ada URL</u></a>, the high-performance URL parser that now powers URL handling in both Node.js and Cloudflare Workers. This marks an important step toward bringing this API to the broader JavaScript ecosystem.</p><p>Cloudflare Workers has, from the beginning, embraced a standards-based JavaScript programming model, and Cloudflare was one of the founding companies for what has evolved into <a href="https://ecma-international.org/technical-committees/tc55/"><u>ECMA's 55th Technical Committee</u></a>, focusing on interoperability between Web-interoperable runtimes like Workers, Node.js, Deno, and others. This contribution highlights and marks our commitment to this ongoing philosophy. Ensuring that all the JavaScript runtimes work consistently and offer at least a minimally consistent set of features is critical to ensuring the ongoing health of the ecosystem as a whole.</p><p>URLPattern API contribution is just one example of Cloudflare’s ongoing commitment to the open-source ecosystem. We actively contribute to numerous open-source projects including Node.js, V8, and Ada URL, while also maintaining our own open-source initiatives like <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a> and <a href="https://github.com/cloudflare/workers-sdk"><u>wrangler</u></a>. By upstreaming improvements to foundational technologies that power the web, we strengthen the entire developer ecosystem while ensuring consistent features across JavaScript runtimes. 
This collaborative approach reflects our belief that open standards and shared implementations benefit everyone: reducing fragmentation, improving developer experience, and creating a better Internet. </p>
    <div>
      <h2>What is URLPattern?</h2>
      <a href="#what-is-urlpattern">
        
      </a>
    </div>
    <p>URLPattern is a standard published by the <a href="https://whatwg.org/"><u>WHATWG (Web Hypertext Application Technology Working Group)</u></a> which provides a pattern-matching system for URLs. This specification is available at <a href="http://urlpattern.spec.whatwg.org"><u>urlpattern.spec.whatwg.org</u></a>. The API provides developers with an easy-to-use, <a href="https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions"><u>regular expression (regex)</u></a>-based approach to handling route matching, with built-in support for named parameters, wildcards, and more complex pattern matching that works uniformly across all URL components.</p><p>URLPattern is part of the <a href="https://min-common-api.proposal.wintertc.org/"><u>WinterTC Minimum Common API</u></a>, a soon-to-be standardized subset of web platform APIs designed to ensure interoperability across JavaScript runtimes, particularly for server-side and non-browser environments, and includes other APIs such as <a href="https://url.spec.whatwg.org/#url"><u>URL</u></a> and <a href="https://url.spec.whatwg.org/#urlsearchparams"><u>URLSearchParams</u></a>.</p><p>Cloudflare Workers has supported URLPattern for a number of years now, reflecting our commitment to enabling developers to use standard APIs across both browsers and server-side JavaScript runtimes. Contributing to Node.js and unifying the URLPattern implementation simplifies the ecosystem by reducing fragmentation, while at the same time improving our own implementation in Cloudflare Workers by making it faster and more specification compliant.</p><p>The following example demonstrates how URLPattern is used by creating a pattern that matches URLs with a “/blog/:year/:month/:slug” path structure, then tests if one specific URL string matches this pattern, and extracts the named parameters from a second URL using the exec method.</p>
            <pre><code>const pattern = new URLPattern({
  pathname: '/blog/:year/:month/:slug'
});

if (pattern.test('https://example.com/blog/2025/03/urlpattern-launch')) {
  console.log('Match found!');
}

const result = pattern.exec('https://example.com/blog/2025/03/urlpattern-launch');
console.log(result.pathname.groups.year); // "2025"
console.log(result.pathname.groups.month); // "03"
console.log(result.pathname.groups.slug); // "urlpattern-launch"</code></pre>
<p>The URLPattern constructor accepts pattern strings or objects defining patterns for individual URL components. The <code>test()</code> method returns a boolean indicating whether a URL matches the pattern. The <code>exec()</code> method provides detailed match results, including captured groups. Behind this simple API, there’s sophisticated machinery at work:</p><ol><li><p>When a URLPattern is used, it internally breaks down a URL, matching it against eight distinct components: protocol, username, password, hostname, port, pathname, search, and hash. This component-based approach gives the developer control over which parts of a URL to match.</p></li><li><p>Upon creation of the instance, URLPattern parses your input patterns for each component and compiles them internally into eight specialized regular expressions (one for each component type). This compilation step happens just once when you create a URLPattern object, optimizing subsequent matching operations.</p></li><li><p>During a match operation (whether using <code>test()</code> or <code>exec()</code>), these regular expressions are used to determine whether the input matches the given properties. The <code>test()</code> method tells you whether there’s a match, while <code>exec()</code> provides detailed information about what was matched, including any named capture groups from your pattern.</p></li></ol>
    <div>
      <h2>Fixing things along the way</h2>
      <a href="#fixing-things-along-the-way">
        
      </a>
    </div>
<p>While implementing URLPattern, we discovered some inconsistencies between the specification and the <a href="https://github.com/web-platform-tests/wpt/pull/49782"><u>web-platform tests</u></a>, a cross-browser test suite maintained by all major browsers to test conformance to web standard specifications. For instance, we found that <a href="https://github.com/whatwg/urlpattern/issues/240"><u>URLs with non-special protocols (opaque-paths)</u></a> and URLs with invalid characters in hostnames were not correctly defined and processed within the URLPattern specification. We worked actively with the Chromium and Safari teams to address these issues.</p><p>URLPatterns constructed from hostname components that contain newline or tab characters were expected to fail in the corresponding web-platform tests. This was due to an inconsistency between the original URLPattern implementation and the URLPattern specification.</p>
<pre><code>const pattern = new URLPattern({ "hostname": "bad\nhostname" });
const matched = pattern.test({ "hostname": "badhostname" });
// This now returns true.</code></pre>
            <p>We opened <a href="https://github.com/whatwg/urlpattern/issues/239"><u>several issues</u></a> to document these inconsistencies and followed up with <a href="https://github.com/whatwg/urlpattern/pull/243"><u>a pull-request to fix the specification</u></a>, ensuring that all implementations will eventually converge on the same corrected behavior. This also resulted in fixing several inconsistencies in web-platform tests, particularly around handling certain types of white space (such as newline or tab characters) in hostnames. </p>
    <div>
      <h2>Getting started with URLPattern</h2>
      <a href="#getting-started-with-urlpattern">
        
      </a>
    </div>
<p>If you’re interested in using URLPattern today, you can:</p><ul><li><p>Use it natively in modern browsers by accessing the global URLPattern class</p></li><li><p>Try it in Cloudflare Workers (which has had URLPattern support for some time, now with improved spec compliance and performance)</p></li><li><p>Try it in Node.js, <a href="https://nodejs.org/en/blog/release/v23.8.0"><u>starting from v23.8.0</u></a></p></li><li><p>Try it in NativeScript on iOS and Android, <a href="https://blog.nativescript.org/nativescript-8-9-announcement/"><u>starting from v8.9.0</u></a></p></li><li><p>Try it in <a href="https://docs.deno.com/api/web/~/URLPattern"><u>Deno</u></a></p></li></ul><p>Here is a more complex example showing how URLPattern can be used for routing in a Cloudflare Worker — a common use case when building API endpoints or web applications that need to handle different URL paths in different ways. The following example shows a pattern for <a href="https://en.wikipedia.org/wiki/REST"><u>REST APIs</u></a> that matches both “/users” and “/users/:userId”:</p>
            <pre><code>const routes = [
  new URLPattern({ pathname: '/users{/:userId}?' }),
];

export default {
  async fetch(request, env, ctx): Promise&lt;Response&gt; {
    const url = new URL(request.url);
    for (const route of routes) {
      const match = route.exec(url);
      if (match) {
        const { userId } = match.pathname.groups;
        if (userId) {
          return new Response(`User ID: ${userId}`);
        }
        return new Response('List of users');
      }
    }
    // No matching route found
    return new Response('Not Found', { status: 404 });
  },
} satisfies ExportedHandler&lt;Env&gt;;</code></pre>
            
    <div>
      <h2>What does the future hold?</h2>
      <a href="#what-does-the-future-hold">
        
      </a>
    </div>
<p>The contribution of URLPattern to Ada URL and Node.js is just the beginning. We’re excited about the possibilities this opens up for developers across different JavaScript environments.</p><p>In the future, we expect to contribute additional improvements to URLPattern’s performance, enabling more use cases for web application routing. Additionally, efforts to standardize the <a href="https://github.com/whatwg/urlpattern/pull/166"><u>URLPatternList proposal</u></a> will help deliver faster matching capabilities for server-side runtimes. We’re eager to see these developments land, and encourage you to try URLPattern in your projects today.</p><p>Try it and let us know what you think by creating an issue on the <a href="https://github.com/cloudflare/workerd"><u>workerd repository</u></a>. Your feedback is invaluable as we work to further enhance URLPattern.</p><p>We hope to do our part to build a unified JavaScript ecosystem, and encourage others to do the same. This may mean looking for opportunities, such as we have with URLPattern, to share API implementations across backend runtimes. It could mean using or contributing to <a href="https://web-platform-tests.org/"><u>web-platform-tests</u></a> if you are working on a server-side runtime or web-standard APIs, or it might mean joining <a href="https://wintertc.org/faq"><u>WinterTC</u></a> to help define web-interoperable standards for server-side JavaScript.</p>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Standards]]></category>
            <guid isPermaLink="false">55t98SAXi3erhs7Wn5dgno</guid>
            <dc:creator>Yagiz Nizipli</dc:creator>
            <dc:creator>James M Snell</dc:creator>
            <dc:creator>Daniel Lemire (Guest author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[More NPM packages on Cloudflare Workers: Combining polyfills and native code to support Node.js APIs]]></title>
            <link>https://blog.cloudflare.com/more-npm-packages-on-cloudflare-workers-combining-polyfills-and-native-code/</link>
            <pubDate>Mon, 09 Sep 2024 21:00:00 GMT</pubDate>
            <description><![CDATA[ Workers now supports more NPM packages and Node.js APIs using an overhauled hybrid compatibility layer. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we are excited to announce a preview of <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>improved Node.js compatibility</u></a> for Workers and Pages. Broader compatibility lets you use more NPM packages and take advantage of the JavaScript ecosystem when writing your Workers.</p><p>Our newest version of Node.js compatibility combines the best features of our previous efforts. <a href="https://workers.cloudflare.com/"><u>Cloudflare Workers</u></a> have supported Node.js in some form for quite a while. We first announced polyfill support in <a href="https://blog.cloudflare.com/node-js-support-cloudflare-workers"><u>2021</u></a>, and later <a href="https://blog.cloudflare.com/workers-node-js-asynclocalstorage"><u>built-in support for parts of the Node.js API</u></a> that has <a href="https://blog.cloudflare.com/workers-node-js-apis-stream-path"><u>expanded</u></a> over time.</p><p>The latest changes make it even better:</p><ul><li><p>You can use far more <a href="https://en.wikipedia.org/wiki/Npm"><u>NPM</u></a> packages on Workers.</p></li><li><p>You can use packages that do not use the <code>node</code>: prefix to import Node.js APIs</p></li><li><p>You can use <a href="https://workers-nodejs-compat-matrix.pages.dev/"><u>more Node.js APIs on Workers</u></a>, including most methods on <a href="https://nodejs.org/docs/latest/api/async_hooks.html"><code><u>async_hooks</u></code></a>, <a href="https://nodejs.org/api/buffer.html"><code><u>buffer</u></code></a>, <a href="https://nodejs.org/api/dns.html"><code><u>dns</u></code></a>, <a href="https://nodejs.org/docs/latest/api/os.html"><code><u>os</u></code></a>, and <a href="https://nodejs.org/docs/latest/api/events.html"><code><u>events</u></code></a>. 
Many more, such as <a href="https://nodejs.org/api/fs.html"><code><u>fs</u></code></a> or <a href="https://nodejs.org/docs/latest/api/process.html"><code><u>process</u></code></a>, are importable with mocked methods.</p></li></ul><p>To give it a try, add the following flag to <code>wrangler.toml</code>, and deploy your Worker with <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a>:</p><p><code>compatibility_flags = ["nodejs_compat_v2"]</code></p><p>Packages that could not be imported with <code>nodejs_compat</code>, even as a dependency of another package, will now load. This includes popular packages such as <a href="https://www.npmjs.com/package/body-parser">body-parser</a>, <a href="https://www.npmjs.com/package/jsonwebtoken">jsonwebtoken</a>, <a href="https://www.npmjs.com/package/got">got</a>, <a href="https://www.npmjs.com/package/passport">passport</a>, <a href="https://www.npmjs.com/package/md5">md5</a>, <a href="https://www.npmjs.com/package/knex">knex</a>, <a href="https://www.npmjs.com/package/mailparser">mailparser</a>, <a href="https://www.npmjs.com/package/csv-stringify">csv-stringify</a>, <a href="https://www.npmjs.com/package/cookie-signature">cookie-signature</a>, <a href="https://www.npmjs.com/package/stream-slice">stream-slice</a>, and many more.</p><p>This behavior will soon become the default for all Workers with the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>existing nodejs_compat compatibility flag</u></a> enabled, and a <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/"><u>compatibility date</u></a> of 2024-09-23 or later. As you experiment with improved Node.js compatibility, share your feedback by <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>opening an issue on GitHub</u></a>.</p>
    <div>
      <h3>Workerd is not Node.js</h3>
      <a href="#workerd-is-not-node-js">
        
      </a>
    </div>
    <p>To understand the latest changes, let’s start with a brief overview of how the Workers runtime differs from <a href="https://nodejs.org/"><u>Node.js</u></a>.</p><p>Node.js was built primarily for services run directly on a host OS and pioneered server-side JavaScript. Because of this, it includes functionality necessary to interact with the host machine, such as <a href="https://nodejs.org/api/process.html"><u>process</u></a> or <a href="https://nodejs.org/api/fs.html"><u>fs</u></a>, and a variety of utility modules, such as <a href="https://nodejs.org/api/crypto.html"><u>crypto</u></a>.</p><p>Cloudflare Workers run on an open source JavaScript/Wasm runtime called <a href="https://github.com/cloudflare/workerd"><u>workerd</u></a>. While both Node.js and workerd are built on <a href="https://v8.dev/"><u>V8</u></a>, workerd is <a href="https://blog.cloudflare.com/cloud-computing-without-containers"><u>designed to run untrusted code in shared processes</u></a>, exposes <a href="https://developers.cloudflare.com/workers/runtime-apis/bindings/"><u>bindings</u></a> for interoperability with other Cloudflare services, including <a href="https://blog.cloudflare.com/javascript-native-rpc"><u>JavaScript-native RPC</u></a>, and uses <a href="https://blog.cloudflare.com/introducing-the-wintercg"><u>web-standard APIs</u></a> whenever possible.</p><p>Cloudflare <a href="https://blog.cloudflare.com/introducing-the-wintercg/"><u>helped establish</u></a> <a href="https://wintercg.org/"><u>WinterCG</u></a>, the Web-interoperable Runtimes Community Group to improve interoperability of JavaScript runtimes, both with each other and with the web platform. You can build many applications using only web-standard APIs, but what about when you want to import dependencies from NPM that rely on Node.js APIs?</p><p>For example, if you attempt to import <a href="https://www.npmjs.com/package/pg"><u>pg</u></a>, a PostgreSQL driver, without Node.js compatibility turned on…</p>
            <pre><code>import pg from 'pg'</code></pre>
            <p>You will see the following error when you run <a href="https://developers.cloudflare.com/workers/wrangler/commands/#dev"><u>wrangler dev</u></a> to build your Worker:</p>
            <pre><code>✘ [ERROR] Could not resolve "events"
    ../node_modules/.pnpm/pg-cloudflare@1.1.1/node_modules/pg-cloudflare/dist/index.js:1:29:
      1 │ import { EventEmitter } from 'events';
        ╵                              ~~~~~~~~
  The package "events" wasn't found on the file system but is built into node.</code></pre>
            <p>This happens because the pg package imports the <a href="https://nodejs.org/api/events.html"><u>events module</u></a> from Node.js, which is not provided by workerd by default.</p><p>How can we enable this?</p>
    <div>
      <h3>Our first approach – build-time polyfills</h3>
      <a href="#our-first-approach-build-time-polyfills">
        
      </a>
    </div>
    <p>Polyfills are code that add functionality to a runtime that does not natively support it. They are often added to provide modern JavaScript functionality to older browsers, but can be used for server-side runtimes as well.</p><p>In 2022, we <a href="https://github.com/cloudflare/workers-sdk/pull/869"><u>added functionality to Wrangler</u></a> that injected polyfill implementations of some Node.js APIs into your Worker if you set <code>node_compat = true</code> in your wrangler.toml. For instance, the following code would work with this flag, but not without:</p>
            <pre><code>import EventEmitter from 'events';
import { inherits } from 'util';</code></pre>
            <p>These polyfills are essentially just additional JavaScript code added to your Worker by <a href="https://developers.cloudflare.com/workers/wrangler/"><u>Wrangler</u></a> when deploying the Worker. This behavior is enabled by <a href="https://www.npmjs.com/package/@esbuild-plugins/node-globals-polyfill"><code><u>@esbuild-plugins/node-globals-polyfill</u></code></a> which in itself uses <a href="https://github.com/ionic-team/rollup-plugin-node-polyfills/"><code><u>rollup-plugin-node-polyfills</u></code></a>.</p><p>This allows you to import and use some NPM packages, such as pg. However, many modules cannot be polyfilled with fast enough code or cannot be polyfilled at all.</p><p>For instance, <a href="https://nodejs.org/api/buffer.html"><u>Buffer</u></a> is a common Node.js API used to handle binary data. Polyfills exist for it, but JavaScript is often not optimized for the operations it performs under the hood, such as <code>copy</code>, <code>concat</code>, substring searches, or transcoding. While it is possible to implement in pure JavaScript, it could be far faster if the underlying runtime could use primitives from different languages. Similar limitations exist for other popular APIs such as <a href="https://nodejs.org/api/crypto.html"><u>Crypto</u></a>, <a href="https://nodejs.org/api/async_context.html"><u>AsyncLocalStorage</u></a>, and <a href="https://nodejs.org/api/stream.html"><u>Stream</u></a>.</p>
    <div>
      <h3>Our second approach – native support for some Node.js APIs in the Workers runtime</h3>
      <a href="#our-second-approach-native-support-for-some-node-js-apis-in-the-workers-runtime">
        
      </a>
    </div>
    <p>In 2023, we <a href="https://blog.cloudflare.com/workers-node-js-asynclocalstorage"><u>started adding</u></a> a subset of Node.js APIs directly to the Workers runtime. You can enable these APIs by adding the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>nodejs_compat compatibility flag</u></a> to your Worker, but you cannot use polyfills with <code>node_compat = true</code> at the same time.</p><p>Also, when importing Node.js APIs, you must use the <code>node</code>: prefix:</p>
            <pre><code>import { Buffer } from 'node:buffer';</code></pre>
<p>Since these Node.js APIs are built directly into the Workers runtime, they can be <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/buffer.c%2B%2B"><u>written in C++</u></a>, which allows them to be faster than JavaScript polyfills. APIs like <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/asynclocalstorage/"><u>AsyncLocalStorage</u></a>, which cannot be polyfilled without safety or performance issues, can be provided natively.</p><p>Requiring the <code>node:</code> prefix made imports more explicit and aligned with modern Node.js conventions. Unfortunately, existing NPM packages may import modules without <code>node:</code>. For instance, revisiting the example above, if you import the popular package <code>pg</code> in a Worker with the <code>nodejs_compat</code> flag, you still see the following error:</p>
            <pre><code>✘ [ERROR] Could not resolve "events"
    ../node_modules/.pnpm/pg-cloudflare@1.1.1/node_modules/pg-cloudflare/dist/index.js:1:29:
      1 │ import { EventEmitter } from 'events';
        ╵                              ~~~~~~~~
  The package "events" wasn't found on the file system but is built into node.</code></pre>
            <p>Many NPM packages still didn’t work in Workers, even if you enabled the <code>nodejs_compat</code> compatibility flag. You had to choose between a smaller set of performant APIs, exposed in a way that many NPM packages couldn’t access, or a larger set of incomplete and less performant APIs. And APIs like <code>process</code> that are exposed as globals in Node.js could still only be accessed by importing them as modules.</p>
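<p>For instance (a minimal sketch), accessing <code>process</code> under the original <code>nodejs_compat</code> flag required an explicit module import, while much existing NPM code simply references the global, as it would in Node.js:</p>

```javascript
// Works under the original nodejs_compat flag: explicit module import.
import process from 'node:process';

console.log(process.platform);
```

<p>A package that reads the global <code>process</code> without importing the module would instead hit a ReferenceError on Workers, even though the same code runs unmodified in Node.js.</p>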
    <div>
      <h3>The new approach: a hybrid model</h3>
      <a href="#the-new-approach-a-hybrid-model">
        
      </a>
    </div>
    <p>What if we could have the best of both worlds, and it just worked?</p><ul><li><p>A subset of Node.js APIs implemented directly in the Workers Runtime </p></li><li><p>Polyfills for the majority of other Node.js APIs</p></li><li><p>No <code>node</code>: prefix required</p></li><li><p>One simple way to opt-in</p></li></ul><p>Improved Node.js compatibility does just that.</p><p>Let’s take a look at two lines of code that look similar, but now act differently under the hood when <code>nodejs_compat_v2</code> is enabled:</p>
            <pre><code>import { Buffer } from 'buffer';  // natively implemented
import { isIP } from 'net'; // polyfilled</code></pre>
            <p>The first line imports <code>Buffer</code> from a <a href="https://github.com/cloudflare/workerd/blob/main/src/node/internal/internal_buffer.ts"><u>JavaScript module</u></a> in workerd that is backed by <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/buffer.c%2B%2B"><code><u>C++ code</u></code></a>. Various other Node.js modules are similarly implemented in a combination of Typescript and C++, including <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/async-hooks.h"><code><u>AsyncLocalStorage</u></code></a> and <a href="https://github.com/cloudflare/workerd/blob/main/src/workerd/api/node/crypto.h"><code><u>Crypto</u></code></a>. This allows for highly performant code that matches Node.js behavior.</p><p>Note that the <code>node:</code> prefix is not needed when importing <code>buffer</code>, but the code would also work with <code>node:buffer</code>.</p><p>The second line imports <code>net</code> which Wrangler automatically polyfills using a library called <a href="https://github.com/unjs/unenv"><u>unenv</u></a>. Polyfills and built-in runtime APIs now work together.</p><p>Previously, when you set <code>node_compat = true</code>, Wrangler added polyfills for every Node.js API that it was able to, even if neither your Worker nor its dependencies used that API. When you enable the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>nodejs_compat_v2 compatibility flag</u></a>, Wrangler only adds polyfills for Node.js APIs that your Worker or its dependencies actually use. This results in small Worker sizes, even with polyfills.</p><p>For some Node.js APIs, there is not yet native support in the Workers runtime nor a polyfill implementation. In these cases, unenv “mocks” the interface. 
This means it adds the module and its methods to your Worker, but calling methods of the module will either do nothing or will throw an error with a message like:</p><p><code>[unenv] &lt;method name&gt; is not implemented yet!</code></p><p>This is more important than it might seem. Because if a Node.js API is “mocked”, NPM packages that depend on it can still be imported. Consider the following code:</p>
            <pre><code>// Package name: my-module

import fs from "fs";

export function foo(path) {
  const data = fs.readFileSync(path, 'utf8');
  return data;
}

export function bar() {
  return "baz";
}
</code></pre>
            
<pre><code>import { foo, bar } from "my-module";

bar(); // returns "baz"
foo(); // throws readFileSync is not implemented yet!
</code></pre>
<p>Previously, even with the <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>existing nodejs_compat compatibility flag</u></a> enabled, attempting to import my-module would fail at build time, because the <code>fs</code> module could not be resolved. Now, the <code>fs</code> module can be resolved, methods that do not rely on an unimplemented Node.js API work, and methods that do rely on one throw a more specific error – a runtime error that a specific Node.js API method is not yet supported, rather than a build-time error that the module could not be resolved.</p><p>This is what enables some packages to transition from “doesn’t even load on Workers” to “loads, but with some unsupported methods”.</p>
    <div>
      <h3>Still missing an API from Node.js? Module aliasing to the rescue</h3>
      <a href="#still-missing-an-api-from-node-js-module-aliasing-to-the-rescue">
        
      </a>
    </div>
    <p>Let’s say you need an NPM package to work on Workers that relies on a Node.js API that isn’t yet implemented in the Workers runtime or as a polyfill in unenv. You can use <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#module-aliasing"><u>module aliasing</u></a> to implement just enough of that API to make things work.</p><p>For example, let’s say the NPM package you need to work calls <a href="https://nodejs.org/api/fs.html#fsreadfilepath-options-callback"><u>fs.readFile</u></a>. You can alias the fs module by adding the following to your Worker’s wrangler.toml:</p>
            <pre><code>[alias]
"fs" = "./fs-polyfill"</code></pre>
            <p>Then, in the fs-polyfill.js file, you can define your own implementation of any methods of the fs module:</p>
            <pre><code>export function readFile() {
  console.log("readFile was called");
  // ...
}
</code></pre>
            <p>Now, the following code, which previously threw the error message “[unenv] readFile is not implemented yet!”, runs without errors:</p>
            <pre><code>import { readFile } from 'fs';

export default {
  async fetch(request, env, ctx) {
    readFile();
    return new Response('Hello World!');
  },
};
</code></pre>
            <p>You can also use module aliasing to provide an implementation of an NPM package that does not work on Workers, even if you only rely on that NPM package indirectly, as a dependency of one of your Worker's dependencies.</p><p>For example, some NPM packages, such as <a href="https://www.npmjs.com/package/cross-fetch"><u>cross-fetch</u></a>, depend on <a href="https://www.npmjs.com/package/node-fetch"><u>node-fetch</u></a>, a package that provided a polyfill of the <a href="https://developers.cloudflare.com/workers/runtime-apis/fetch/"><u>fetch() API</u></a> before it was built into Node.js. The node-fetch package isn't needed in Workers, because the fetch() API is provided by the Workers runtime. And node-fetch doesn't work on Workers, because it relies on currently unsupported Node.js APIs from the <a href="https://nodejs.org/api/http.html"><u>http</u></a> and <a href="https://nodejs.org/api/https.html"><u>https</u></a> modules.</p><p>You can alias all imports of node-fetch to instead point directly to the fetch() API that is built into the Workers runtime using the popular <a href="https://github.com/SukkaW/nolyfill"><u>nolyfill</u></a> package:</p>
            <pre><code>[alias]
"node-fetch" = "./fetch-nolyfill"</code></pre>
            <p>All your replacement module needs to do in this case is to re-export the fetch API that is built into the Workers runtime:</p>
            <pre><code>export default fetch;</code></pre>
            
    <div>
      <h3>Contributing back to unenv</h3>
      <a href="#contributing-back-to-unenv">
        
      </a>
    </div>
    <p>Cloudflare is actively contributing to unenv. We think unenv is solving the problem of cross-runtime compatibility the right way — it adds only the necessary polyfills to your application, based on what APIs you use and what runtime you target. The project supports a variety of runtimes beyond workerd and is already used by other popular projects including <a href="https://nuxt.com/"><u>Nuxt</u></a> and <a href="https://nitro.unjs.io/"><u>Nitro</u></a>. We want to thank <a href="https://github.com/pi0"><u>Pooya Parsa</u></a> and the unenv maintainers and encourage others in the ecosystem to adopt or contribute.</p>
    <div>
      <h3>The path forward</h3>
      <a href="#the-path-forward">
        
      </a>
    </div>
    <p>Currently, you can enable improved Node.js compatibility by setting the <code>nodejs_compat_v2</code> flag in <code>wrangler.toml</code>. We plan to make the new behavior the default when using the <code>nodejs_compat</code> flag on September 23rd. This will require updating your <a href="https://developers.cloudflare.com/workers/configuration/compatibility-dates/"><code><u>compatibility_date</u></code></a>.</p><p>We are excited about the changes coming to Node.js compatibility, and encourage you to try it today. <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/"><u>See the documentation</u></a> on how to opt-in for your Workers, and please send feedback and report bugs <a href="https://github.com/cloudflare/workers-sdk/issues/new?assignees=&amp;labels=bug&amp;projects=&amp;template=bug-template.yaml&amp;title=%F0%9F%90%9B+BUG%3A"><u>by opening an issue</u></a>. Doing so will help us identify any gaps in support and ensure that as much of the Node.js ecosystem as possible runs on Workers.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[JavaScript]]></category>
            <guid isPermaLink="false">3zICVbgdxrLByG4g2Dsddy</guid>
            <dc:creator>James M Snell</dc:creator>
            <dc:creator>Igor Minar</dc:creator>
            <dc:creator>James Culveyhouse</dc:creator>
            <dc:creator>Mike Nomitch</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Socket API that works across JavaScript runtimes — announcing a WinterCG spec and Node.js implementation of connect()]]></title>
            <link>https://blog.cloudflare.com/socket-api-works-javascript-runtimes-wintercg-polyfill-connect/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:37 GMT</pubDate>
            <description><![CDATA[ Engineers from Cloudflare and Vercel have published a specification of the connect() sockets API for review by the community, along with a Node.js compatible implementation of connect() that developers can start using today ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Earlier this year, we <a href="/workers-tcp-socket-api-connect-databases/">announced a new API for creating outbound TCP sockets</a> — <a href="https://developers.cloudflare.com/workers/runtime-apis/tcp-sockets?cf_target_id=6F3FD2F2360D5526EEE56A7398DB7D9D">connect()</a>. From day one, we’ve been working with the <a href="https://wintercg.org/">Web-interoperable Runtimes Community Group (WinterCG) community</a> to chart a course toward making this API a standard, available across all runtimes and platforms — including Node.js.</p><p>Today, we’re sharing that we’ve reached a new milestone in the path to making this API available across runtimes — engineers from Cloudflare and Vercel have published <a href="https://sockets-api.proposal.wintercg.org/">a draft specification of the connect() sockets API</a> for review by the community, along with a Node.js compatible <a href="https://github.com/Ethan-Arrowood/socket">implementation of the connect() API</a> that developers can start using today.</p><p>This implementation helps both application developers and maintainers of libraries and frameworks:</p><ol><li><p>Maintainers of existing libraries that use the <a href="https://nodejs.org/api/net.html">node:net</a> and <a href="https://nodejs.org/api/tls.html">node:tls</a> APIs can use it to more easily add support for runtimes where node:net and node:tls are not available.</p></li><li><p>JavaScript frameworks can use it to make connect() available in local development, making it easier for application developers to target runtimes that provide connect().</p></li></ol>
    <div>
      <h3>Why create a new standard? Why connect()?</h3>
      <a href="#why-create-a-new-standard-why-connect">
        
      </a>
    </div>
<p>As we <a href="/workers-tcp-socket-api-connect-databases/">described when we first announced connect()</a>, to date there has not been a standard API across JavaScript runtimes for creating and working with TCP or UDP sockets. This makes it harder for maintainers of open-source libraries to ensure compatibility across runtimes, and ultimately creates friction for application developers who have to navigate which libraries work on which platforms.</p><p>While Node.js provides the <a href="https://nodejs.org/api/net.html">node:net</a> and <a href="https://nodejs.org/api/tls.html">node:tls</a> APIs, these APIs were designed over 10 years ago in the very early days of the Node.js project and remain callback-based. As a result, they can be hard to work with, and expose configuration in ways that don’t fit serverless platforms or web browsers.</p><p>The connect() API fills this gap by incorporating the best parts of existing socket APIs and <a href="https://github.com/WICG/direct-sockets/blob/main/docs/explainer.md">prior proposed standards</a>, based on feedback from the JavaScript community — including contributors to Node.js. Libraries like <a href="https://www.npmjs.com/package/pg">pg</a> (<a href="https://github.com/brianc/node-postgres">node-postgres</a> on GitHub) are already using the connect() API.</p>
    <div>
      <h3>The connect() specification</h3>
      <a href="#the-connect-specification">
        
      </a>
    </div>
    <p>At time of writing, the <a href="https://sockets-api.proposal.wintercg.org/">draft specification of the Sockets API</a> defines the following API:</p>
            <pre><code>dictionary SocketAddress {
  DOMString hostname;
  unsigned short port;
};

typedef (DOMString or SocketAddress) AnySocketAddress;

enum SecureTransportKind { "off", "on", "starttls" };

[Exposed=*]
dictionary SocketOptions {
  SecureTransportKind secureTransport = "off";
  boolean allowHalfOpen = false;
};

[Exposed=*]
interface Connect {
  Socket connect(AnySocketAddress address, optional SocketOptions opts);
};

interface Socket {
  readonly attribute ReadableStream readable;
  readonly attribute WritableStream writable;

  readonly attribute Promise&lt;undefined&gt; closed;
  Promise&lt;undefined&gt; close();

  Socket startTls();
};</code></pre>
            <p>The proposed API is Promise-based and reuses existing standards whenever possible. For example, <a href="https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream">ReadableStream</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/API/WritableStream">WritableStream</a> are used for the read and write ends of the socket. This makes it easy to pipe data from a TCP socket to any other library or existing code that accepts a ReadableStream as input, or to write to a TCP socket via a WritableStream.</p><p>The entrypoint of the API is the connect() function, which takes a string containing both the hostname and port separated by a colon, or an object with discrete hostname and port fields. It returns a Socket object, which represents a socket connection. An instance of this object exposes attributes and methods for working with the connection.</p><p>A connection can be established in plain-text or TLS mode, as well as a special “starttls” mode, which allows the socket to be easily upgraded to TLS after some period of plain-text data transfer, by calling the startTls() method on the Socket object. There is no need to create a new socket or switch to a separate set of APIs once the socket is upgraded to use TLS.</p><p>For example, to upgrade a socket using the startTls pattern, you might do something like this:</p>
            <pre><code>import { connect } from "@arrowood.dev/socket"

const options = { secureTransport: "starttls" };
const socket = connect("address:port", options);
const secureSocket = socket.startTls();
// The socket is immediately writable
// Relies on web standard WritableStream
const writer = secureSocket.writable.getWriter();
const encoder = new TextEncoder();
const encoded = encoder.encode("hello");
await writer.write(encoded);</code></pre>
            <p>Equivalent code using the node:net and node:tls APIs:</p>
            <pre><code>import net from 'node:net'
import tls from 'node:tls'

const socket = net.connect(PORT, HOST);
socket.once('connect', () =&gt; {
  const options = { socket };
  const secureSocket = tls.connect(options, () =&gt; {
    // The socket can only be written to once the
    // connection is established.
    // Polymorphic API, uses Node.js streams
    secureSocket.write('hello');
  });
});</code></pre>
            
    <div>
      <h3>Use the Node.js implementation of connect() in your library</h3>
      <a href="#use-the-node-js-implementation-of-connect-in-your-library">
        
      </a>
    </div>
    <p>To make it easier for open-source library maintainers to adopt the connect() API, we’ve published an <a href="https://github.com/Ethan-Arrowood/socket">implementation of connect() in Node.js</a> that allows you to publish your library such that it works across JavaScript runtimes, without having to maintain any runtime-specific code.</p><p>To get started, install it as a dependency:</p>
            <pre><code>npm install --save @arrowood.dev/socket</code></pre>
            <p>And import it in your library or application:</p>
            <pre><code>import { connect } from "@arrowood.dev/socket"</code></pre>
            
    <div>
      <h3>What’s next for connect()?</h3>
      <a href="#whats-next-for-connect">
        
      </a>
    </div>
    <p>The <a href="https://github.com/wintercg/proposal-sockets-api/">wintercg/proposal-sockets-api</a> specification is published as a draft, and the next step is to solicit and incorporate feedback. We’d love your feedback, particularly if you maintain an open-source library or make direct use of the node:net or node:tls APIs.</p><p>Once feedback has been incorporated, engineers from Cloudflare, Vercel, and beyond will continue working toward contributing an implementation of the API directly to Node.js as a built-in API.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Product News]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[TCP]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">6LC7InDwR6gLWapyPtL3u5</guid>
            <dc:creator>Dominik Picheta</dc:creator>
            <dc:creator>James M Snell</dc:creator>
            <dc:creator>Ethan Arrowood (Guest Author)</dc:creator>
        </item>
        <item>
            <title><![CDATA[More Node.js APIs in Cloudflare Workers — Streams, Path, StringDecoder]]></title>
            <link>https://blog.cloudflare.com/workers-node-js-apis-stream-path/</link>
            <pubDate>Fri, 19 May 2023 13:00:47 GMT</pubDate>
            <description><![CDATA[ Today we are announcing support for three additional APIs from Node.js in Cloudflare Workers — streams, path, and string decoder. This increases compatibility with the existing ecosystem of open source npm packages, allowing you to use your preferred libraries in Workers. ]]></description>
            <content:encoded><![CDATA[ 
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5qJu9MRz57Hr7ADuD0vyvp/1197816c370dcc274b5941ea7675e642/image2-33.png" />
            
            </figure><p>Today we are announcing support for three additional APIs from Node.js in Cloudflare Workers. This increases compatibility with the existing ecosystem of open source npm packages, allowing you to use your preferred libraries in Workers, even if they depend on APIs from Node.js.</p><p>We recently <a href="/workers-node-js-asynclocalstorage/">added support</a> for AsyncLocalStorage, EventEmitter, Buffer, assert and parts of util. Today, we are adding support for:</p><ul><li><p>Node.js <a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/streams">Streams</a></p></li><li><p><a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/path">Path</a></p></li><li><p><a href="https://developers.cloudflare.com/workers/runtime-apis/nodejs/string-decoder/">StringDecoder</a></p></li></ul><p>We are also sharing a preview of a new module type, available in the <a href="https://github.com/cloudflare/workerd">open-source Workers runtime</a>, that mirrors a Node.js environment more closely by making some APIs available as globals, and allowing imports without the node: specifier prefix.</p><p>You can start using these APIs today, in the <a href="https://github.com/cloudflare/workerd">open-source runtime</a> that powers Cloudflare Workers, in local development, and when you deploy your Worker. Get started by <a href="https://developers.cloudflare.com/workers/platform/nodejs-compatibility/">enabling the nodejs_compat compatibility flag</a> for your Worker.</p>
    <div>
      <h3>Stream</h3>
      <a href="#stream">
        
      </a>
    </div>
    <p>The <a href="https://nodejs.org/dist/latest-v20.x/docs/api/stream.html">Node.js streams API</a> is the original API for working with streaming data in JavaScript that predates the <a href="https://streams.spec.whatwg.org/">WHATWG ReadableStream standard</a>. Now, a full implementation of Node.js streams (based directly on the <a href="https://www.npmjs.com/package/readable-stream">official implementation</a> provided by the Node.js project) is available within the Workers runtime.</p><p>Let's start with a quick example:</p>
            <pre><code>import {
  Readable,
  Transform,
} from 'node:stream';

import {
  text,
} from 'node:stream/consumers';

import {
  pipeline,
} from 'node:stream/promises';

// A Node.js-style Transform that converts data to uppercase
// and appends a newline to the end of the output.
class MyTransform extends Transform {
  constructor() {
    super({ encoding: 'utf8' });
  }
  _transform(chunk, _, cb) {
    this.push(chunk.toString().toUpperCase());
    cb();
  }
  _flush(cb) {
    this.push('\n');
    cb();
  }
}

export default {
  async fetch() {
    const chunks = [
      "hello ",
      "from ",
      "the ",
      "wonderful ",
      "world ",
      "of ",
      "node.js ",
      "streams!"
    ];

    function nextChunk(readable) {
      readable.push(chunks.shift());
      if (chunks.length === 0) readable.push(null);
      else queueMicrotask(() =&gt; nextChunk(readable));
    }

    // A Node.js-style Readable that emits chunks from the
    // array...
    const readable = new Readable({
      encoding: 'utf8',
      read() { nextChunk(readable); }
    });

    const transform = new MyTransform();
    await pipeline(readable, transform);
    return new Response(await text(transform));
  }
};</code></pre>
            <p>In this example, we create two Node.js stream objects: one stream.Readable and one stream.Transform. The stream.Readable simply emits a sequence of individual strings, piped through the stream.Transform, which converts those to uppercase and appends a newline as a final chunk.</p><p>The example is straightforward and illustrates the basic operation of the Node.js API. For anyone already familiar with using standard WHATWG streams in Workers, the pattern here should be recognizable.</p><p>The Node.js streams API is used by countless modules published on <a href="https://www.npmjs.com/">npm</a>. Now that the Node.js streams API is available in Workers, many packages that depend on it can be used in your Workers. For example, the <a href="https://www.npmjs.com/package/split2">split2 module</a> is a simple utility that can break a stream of data up and reassemble it so that every line is a distinct chunk. While simple, the module is downloaded over 13 million times each week and has over a thousand direct dependents on npm (and many more indirect dependents). Previously, it was not possible to use split2 within Workers without also pulling in a large and complicated polyfill implementation of streams. Now split2 can be used directly within Workers with no modifications and no additional polyfills. This reduces the size and complexity of your Worker by thousands of lines.</p>
            <pre><code>import {
  PassThrough,
} from 'node:stream';

import { default as split2 } from 'split2';

const enc = new TextEncoder();

export default {
  async fetch() {
    const pt = new PassThrough();
    const readable = pt.pipe(split2());

    pt.end('hello\nfrom\nthe\nwonderful\nworld\nof\nnode.js\nstreams!');
    for await (const chunk of readable) {
      console.log(chunk);
    }

    return new Response("ok");
  }
};</code></pre>
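            <p>Since both stream APIs now coexist, it can also be useful to bridge between them. The following sketch uses the standard Readable.toWeb() static method from node:stream (Readable.fromWeb() goes the other direction); these helpers are newer than the rest of the streams API, so availability in your target runtime is worth verifying:</p>

```javascript
import { Readable } from 'node:stream';

// Convert a Node.js Readable into a WHATWG ReadableStream.
const nodeReadable = Readable.from(['hello', ' ', 'world']);
const webStream = Readable.toWeb(nodeReadable);

// The result can be consumed with standard web-streams patterns,
// such as async iteration.
let out = '';
for await (const chunk of webStream) {
  out += chunk;
}
console.log(out); // 'hello world'
```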
            
    <div>
      <h3>Path</h3>
      <a href="#path">
        
      </a>
    </div>
    <p>The <a href="https://nodejs.org/api/path.html">Node.js Path API</a> provides utilities for working with file and directory paths. For example:</p>
            <pre><code>import path from "node:path"
path.join('/foo', 'bar', 'baz/asdf', 'quux', '..');

// Returns: '/foo/bar/baz/asdf'</code></pre>
            <p>Note that in the Workers implementation, the <a href="https://nodejs.org/api/path.html#pathwin32">path.win32</a> variants of the path API are not implemented and will throw an exception.</p>
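            <p>A few more utilities from the same module, shown here as a quick sketch; all are standard node:path functions operating in the portable POSIX style:</p>

```javascript
import path from "node:path";

// Extract the individual components of a path.
console.log(path.basename('/foo/bar/baz.txt'));   // 'baz.txt'
console.log(path.extname('/foo/bar/baz.txt'));    // '.txt'
console.log(path.parse('/foo/bar/baz.txt').name); // 'baz'
```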
    <div>
      <h3>StringDecoder</h3>
      <a href="#stringdecoder">
        
      </a>
    </div>
    <p>The <a href="https://nodejs.org/dist/latest-v20.x/docs/api/string_decoder.html">Node.js StringDecoder API</a> is a simple legacy utility that predates the <a href="https://encoding.spec.whatwg.org/">WHATWG standard TextEncoder/TextDecoder API</a> and serves roughly the same purpose. It is used by Node.js' stream API implementation as well as a number of popular npm modules for the purpose of decoding UTF-8, UTF-16, Latin1, Base64, and Hex encoded data.</p>
            <pre><code>import { StringDecoder } from 'node:string_decoder';
const decoder = new StringDecoder('utf8');

const cent = Buffer.from([0xC2, 0xA2]);
console.log(decoder.write(cent));

const euro = Buffer.from([0xE2, 0x82, 0xAC]);
console.log(decoder.write(euro)); </code></pre>
            <p>In the vast majority of cases, your Worker should keep using the standard TextEncoder/TextDecoder APIs, but StringDecoder is now available for Workers to use directly, without relying on polyfills.</p>
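            <p>For comparison, here is a sketch of how the standard TextDecoder handles the job StringDecoder was built for: correctly decoding a multi-byte character that arrives split across chunks, using the { stream: true } option:</p>

```javascript
const decoder = new TextDecoder('utf-8');

// The euro sign is the three bytes 0xE2 0x82 0xAC; here it arrives
// split across two chunks.
const part1 = decoder.decode(new Uint8Array([0xE2, 0x82]), { stream: true });
const part2 = decoder.decode(new Uint8Array([0xAC]), { stream: true });

console.log(JSON.stringify(part1)); // "" (the incomplete sequence is buffered)
console.log(part1 + part2);         // '€'
```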
    <div>
      <h3>Node.js Compat Modules</h3>
      <a href="#node-js-compat-modules">
        
      </a>
    </div>
    <p>One Worker can already be a <a href="https://developers.cloudflare.com/workers/wrangler/configuration/#bundling">bundle of multiple assets</a>. This allows a single Worker to be made up of multiple individual ESM modules, CommonJS modules, JSON, text, and binary data files.</p><p>Soon there will be a new type of module that can be included in Worker bundles: the NodeJsCompatModule.</p><p>A NodeJsCompatModule is designed to emulate the Node.js environment as much as possible. Within these modules, common Node.js global variables such as process, Buffer, and even __filename will be available. More importantly, it is possible to require() our Node.js core API implementations without using the node: specifier prefix. This maximizes compatibility with existing npm packages that depend on globals from Node.js being present, or don’t import Node.js APIs using the node: specifier prefix.</p><p>Support for this new module type has landed in the open source <a href="https://github.com/cloudflare/workerd">workerd</a> runtime, with deeper integration with Wrangler coming soon.</p>
    <div>
      <h3>What’s next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>We’re adding support for more Node.js APIs each month, and as we introduce new APIs, they will be added under the <a href="https://developers.cloudflare.com/workers/platform/compatibility-dates/#nodejs-compatibility-flag">nodejs_compat compatibility flag</a> — no need to take any action or update your <a href="https://developers.cloudflare.com/workers/platform/compatibility-dates/">compatibility date</a>.</p><p>Have an NPM package that you wish worked on Workers, or an API you’d like to be able to use? Join the <a href="https://discord.com/invite/cloudflaredev">Cloudflare Developers Discord</a> and tell us what you’re building, and what you’d like to see next.</p>
 ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">397gVh2vxNiXM0KQCWX7kK</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Node.js compatibility for Cloudflare Workers – starting with Async Context Tracking, EventEmitter, Buffer, assert, and util]]></title>
            <link>https://blog.cloudflare.com/workers-node-js-asynclocalstorage/</link>
            <pubDate>Thu, 23 Mar 2023 13:05:00 GMT</pubDate>
            <description><![CDATA[ Over the coming months, Cloudflare Workers will start to roll out built-in compatibility with Node.js core APIs as part of an effort to support increased compatibility across JavaScript runtimes ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Over the coming months, Cloudflare Workers will start to roll out built-in compatibility with Node.js core APIs as part of an effort to support increased compatibility across JavaScript runtimes.</p><p>We are happy to announce today that the first of these Node.js APIs – <code>AsyncLocalStorage</code>, <code>EventEmitter</code>, <code>Buffer</code>, <code>assert</code>, and parts of <code>util</code> – are now available for use. These APIs are provided directly by the <a href="https://github.com/cloudflare/workerd">open-source Cloudflare Workers runtime</a>, with no need to bundle polyfill implementations into your own code.</p><p>These new APIs are available today — start using them by enabling the <code>nodejs_compat</code> <a href="https://developers.cloudflare.com/workers/platform/compatibility-dates/">compatibility flag</a> in your Workers.</p>
    <div>
      <h3>Async Context Tracking with the AsyncLocalStorage API</h3>
      <a href="#async-context-tracking-with-the-asynclocalstorage-api">
        
      </a>
    </div>
    <p>The <code>AsyncLocalStorage</code> API provides a way to track context across asynchronous operations. It allows you to pass a value through your program, even across multiple layers of asynchronous code, without having to pass a context value between operations.</p><p>Consider an example where we want to add debug logging that works through multiple layers of an application, where each log contains the ID of the current request. Without AsyncLocalStorage, it would be necessary to explicitly pass the request ID down through every function call that might invoke the logging function:</p>
            <pre><code>function logWithId(id, state) {
  console.log(`${id} - ${state}`);
}

function doSomething(id) {
  // We don't actually use id for anything in this function!
  // It's only here because logWithId needs it.
  logWithId(id, "doing something");
  setTimeout(() =&gt; doSomethingElse(id), 10);
}

function doSomethingElse(id) {
  logWithId(id, "doing something else");
}

let idSeq = 0;

export default {
  async fetch(req) {
    const id = idSeq++;
    doSomething(id);
    logWithId(id, 'complete');
    return new Response("ok");
  }
}</code></pre>
            <p>While this approach works, it can be cumbersome to coordinate correctly, especially as the complexity of an application grows. Using <code>AsyncLocalStorage</code> this becomes significantly easier by eliminating the need to explicitly pass the context around. Our application functions (<code>doSomething</code> and <code>doSomethingElse</code> in this case) never need to know about the request ID at all while the <code>logWithId</code> function does exactly what we need it to:</p>
            <pre><code>import { AsyncLocalStorage } from 'node:async_hooks';

const requestId = new AsyncLocalStorage();

function logWithId(state) {
  console.log(`${requestId.getStore()} - ${state}`);
}

function doSomething() {
  logWithId("doing something");
  setTimeout(() =&gt; doSomethingElse(), 10);
}

function doSomethingElse() {
  logWithId("doing something else");
}

let idSeq = 0;

export default {
  async fetch(req) {
    return requestId.run(idSeq++, () =&gt; {
      doSomething();
      logWithId('complete');
      return new Response("ok");
    });
  }
}</code></pre>
            <p>With the <code>nodejs_compat</code> <a href="https://developers.cloudflare.com/workers/platform/compatibility-dates/">compatibility flag</a> enabled, import statements are used to access specific APIs. The Workers implementation of these APIs requires the use of the node: specifier prefix that was introduced recently in Node.js (e.g. <code>node:async_hooks</code>, <code>node:events</code>, etc)</p><p>We implement <a href="https://github.com/wintercg/proposal-common-minimum-api/blob/main/asynclocalstorage.md">a subset</a> of the <code>AsyncLocalStorage</code> API in order to keep things as simple as possible. Specifically, we've chosen not to support the <code>enterWith()</code> and <code>disable()</code> APIs that are found in Node.js implementation simply because they make async context tracking more brittle and error prone.</p><p>Conceptually, at any given moment within a worker, there is a current "Asynchronous Context Frame", which consists of a map of storage cells, each holding a store value for a specific <code>AsyncLocalStorage</code> instance. Calling <code>asyncLocalStorage.run(...)</code> causes a new frame to be created, inheriting the storage cells of the current frame, but using the newly provided store value for the cell associated with <code>asyncLocalStorage</code>.</p>
            <pre><code>const als1 = new AsyncLocalStorage();
const als2 = new AsyncLocalStorage();

// Code here runs in the root frame. There are two storage cells,
// one for als1, and one for als2. The store value for each is
// undefined.

als1.run(123, () =&gt; {
  // als1.run(...) creates a new frame (1). The store value for als1
  // is set to 123, the store value for als2 is still undefined.
  // This new frame is set to "current".

  als2.run(321, () =&gt; {
    // als2.run(...) creates another new frame (2). The store value
    // for als1 is still 123, the store value for als2 is set to 321.
    // This new frame is set to "current".
    console.log(als1.getStore(), als2.getStore());
  });

  // Frame (1) is restored as the current. The store value for als1
  // is still 123, but the store value for als2 is undefined again.
});

// The root frame is restored as the current. The store values for
// both als1 and als2 are both undefined again.</code></pre>
            <p>Whenever an asynchronous operation is initiated in JavaScript, for example, creating a new JavaScript promise, scheduling a timer, etc, the current frame is captured and associated with that operation, allowing the store values at the moment the operation was initialized to be propagated and restored as needed.</p>
            <pre><code>const als = new AsyncLocalStorage();

const p1 = als.run(123, () =&gt; {
  return Promise.resolve(1).then(() =&gt; console.log(als.getStore())); // prints 123
});

const p2 = Promise.resolve(1);
const p3 = als.run(321, () =&gt; {
  return p2.then(() =&gt; console.log(als.getStore())); // prints 321
});

als.run('ABC', () =&gt; {
  // prints "ABC" to the console once a second…
  setInterval(() =&gt; console.log(als.getStore()), 1000);
});

als.run('XYZ', () =&gt; queueMicrotask(() =&gt; {
  console.log(als.getStore());  // prints "XYZ"
}));</code></pre>
            <p>Note that for unhandled promise rejections, the "<code>unhandledrejection</code>" event will automatically propagate the context that is associated with the promise that was rejected. This behavior is different from other types of events emitted by <code>EventTarget</code> implementations, which will propagate whichever frame is current when the event is emitted.</p>
            <pre><code>const asyncLocalStorage = new AsyncLocalStorage();

asyncLocalStorage.run(123, () =&gt; Promise.reject('boom'));
asyncLocalStorage.run(321, () =&gt; Promise.reject('boom2'));

addEventListener('unhandledrejection', (event) =&gt; {
  // prints 123 for the first unhandled rejection ('boom'), and
  // 321 for the second unhandled rejection ('boom2')
  console.log(asyncLocalStorage.getStore());
});</code></pre>
            <p>Workers can use the <code>AsyncLocalStorage.snapshot()</code> method to create their own objects that capture and propagate the context:</p>
            <pre><code>const asyncLocalStorage = new AsyncLocalStorage();

class MyResource {
  #runInAsyncFrame = AsyncLocalStorage.snapshot();

  doSomething(...args) {
    return this.#runInAsyncFrame((...args) =&gt; {
      console.log(asyncLocalStorage.getStore());
    }, ...args);
  }
}

const resource1 = asyncLocalStorage.run(123, () =&gt; new MyResource());
const resource2 = asyncLocalStorage.run(321, () =&gt; new MyResource());

resource1.doSomething();  // prints 123
resource2.doSomething();  // prints 321</code></pre>
            <p>For more, refer to the <a href="https://nodejs.org/dist/latest-v18.x/docs/api/async_context.html#class-asynclocalstorage">Node.js documentation</a> about the <code>AsyncLocalStorage</code> API.</p><p>There is currently an effort underway to add a new <a href="https://github.com/tc39/proposal-async-context">AsyncContext</a> mechanism (inspired by <code>AsyncLocalStorage</code>) to the JavaScript language itself. While it is still early days for the TC-39 proposal, there is good reason to expect it to progress through the committee. Once it does, we look forward to being able to make it available in the Cloudflare Workers platform. We expect our implementation of <code>AsyncLocalStorage</code> to be compatible with this new API.</p><p>The proposal for AsyncContext provides an excellent set of examples and <a href="https://github.com/tc39/proposal-async-context#motivation">description of the motivation</a> of why async context tracking is useful.</p>
    <div>
      <h3>Events with EventEmitter</h3>
      <a href="#events-with-eventemitter">
        
      </a>
    </div>
    <p>The EventEmitter API is one of the most fundamental Node.js APIs and is critical to supporting many other higher level APIs, including streams, crypto, net, and more. An EventEmitter is an object that emits named events that cause listeners to be called.</p>
            <pre><code>import { EventEmitter } from 'node:events';

const emitter = new EventEmitter();
emitter.on('hello', (...args) =&gt; {
  console.log(...args);
});

emitter.emit('hello', 1, 2, 3);</code></pre>
            <p>The <a href="https://github.com/cloudflare/workerd/blob/main/src/node/internal/events.ts">implementation</a> in the Workers runtime fully supports the entire Node.js EventEmitter API including the captureRejections option that allows improved handling of async functions as event handlers:</p>
            <pre><code>const emitter = new EventEmitter({ captureRejections: true });
emitter.on('hello', async (...args) =&gt; {
  throw new Error('boom');
});
emitter.on('error', (err) =&gt; {
  // the async promise rejection is emitted here!
});</code></pre>
            <p>Please refer to the Node.js documentation for more details on the use of the <code>EventEmitter</code> API: <a href="https://nodejs.org/dist/latest-v19.x/docs/api/events.html#events">https://nodejs.org/dist/latest-v19.x/docs/api/events.html#events</a>.</p>
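            <p>One related convenience worth sketching: node:events also exports a promise-based once() helper, which awaits a single occurrence of an event without manually wiring up and removing a listener:</p>

```javascript
import { EventEmitter, once } from 'node:events';

const emitter = new EventEmitter();

// Emit asynchronously so the await below has something to wait for.
queueMicrotask(() => emitter.emit('ready', 42));

// once() resolves with the array of arguments passed to emit().
const [value] = await once(emitter, 'ready');
console.log(value); // 42
```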
    <div>
      <h3>Buffer</h3>
      <a href="#buffer">
        
      </a>
    </div>
    <p>The <code>Buffer</code> API in Node.js predates the introduction of the standard TypedArray and DataView APIs in JavaScript by many years and has persisted as one of the most commonly used Node.js APIs for manipulating binary data. Today, every Buffer instance extends from the standard Uint8Array class but adds a range of unique capabilities such as built-in base64 and hex encoding/decoding, byte-order manipulation, and encoding-aware substring searching.</p>
            <pre><code>import { Buffer } from 'node:buffer';

const buf = Buffer.from('hello world', 'utf8');

console.log(buf.toString('hex'));
// Prints: 68656c6c6f20776f726c64
console.log(buf.toString('base64'));
// Prints: aGVsbG8gd29ybGQ=</code></pre>
            <p>Because a Buffer extends from Uint8Array, it can be used in any workers API that currently accepts Uint8Array, such as creating a new Response:</p>
            <pre><code>const response = new Response(Buffer.from("hello world"));</code></pre>
            <p>Or interacting with streams:</p>
            <pre><code>const writable = getWritableStreamSomehow();
const writer = writable.getWriter();
writer.write(Buffer.from("hello world"));</code></pre>
            <p>Please refer to the Node.js documentation for more details on the use of the Buffer API: <a href="https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html">https://nodejs.org/dist/latest-v19.x/docs/api/buffer.html</a>.</p>
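            <p>The byte-order manipulation and encoding-aware searching mentioned above can be sketched with standard Buffer methods:</p>

```javascript
import { Buffer } from 'node:buffer';

// Byte-order (endianness) helpers: write big-endian, read little-endian.
const buf = Buffer.alloc(4);
buf.writeUInt32BE(0xdeadbeef, 0);
console.log(buf.readUInt32LE(0).toString(16)); // 'efbeadde'

// Encoding-aware construction and searching: decode hex, then search
// for a string within the decoded bytes.
const hay = Buffer.from('68656c6c6f', 'hex'); // the bytes of 'hello'
console.log(hay.indexOf('ll')); // 2
```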
    <div>
      <h3>Assertions</h3>
      <a href="#assertions">
        
      </a>
    </div>
    <p>The assert module in Node.js provides a number of assertions that are useful when writing tests.</p>
            <pre><code>import {
  strictEqual,
  deepStrictEqual,
  ok,
  doesNotReject,
} from 'node:assert';

strictEqual(1, 1); // ok!
strictEqual(1, "1"); // fails! throws AssertionError

deepStrictEqual({ a: { b: 1 }}, { a: { b: 1 }});// ok!
deepStrictEqual({ a: { b: 1 }}, { a: { b: 2 }});// fails! throws AssertionError

ok(true); // ok!
ok(false); // fails! throws AssertionError

await doesNotReject(async () =&gt; {}); // ok!
await doesNotReject(async () =&gt; { throw new Error('boom') }); // fails! throws AssertionError</code></pre>
            <p>In the Workers implementation of assert, all assertions run in what Node.js calls the "<a href="https://nodejs.org/dist/latest-v19.x/docs/api/assert.html#strict-assertion-mode">strict assertion mode</a>", which means that non-strict methods behave like their corresponding strict methods. For instance, <code>deepEqual()</code> will behave like <code>deepStrictEqual()</code>.</p><p>Please refer to the Node.js documentation for more details on the use of the assertion API: <a href="https://nodejs.org/dist/latest-v19.x/docs/api/assert.html">https://nodejs.org/dist/latest-v19.x/docs/api/assert.html</a>.</p>
    <div>
      <h3>Promisify/Callbackify</h3>
      <a href="#promisify-callbackify">
        
      </a>
    </div>
    <p>The <code>promisify</code> and <code>callbackify</code> APIs in Node.js provide a means of bridging between a Promise-based programming model and a callback-based model.</p><p>The <code>promisify</code> method allows taking a Node.js-style callback function and converting it into a Promise-returning async function:</p>
            <pre><code>import { promisify } from 'node:util';

function foo(args, callback) {
  try {
    callback(null, 1);
  } catch (err) {
    // Errors are emitted to the callback via the first argument.
    callback(err);
  }
}

const promisifiedFoo = promisify(foo);
await promisifiedFoo('some input');</code></pre>
            <p>Similarly, <code>callbackify</code> converts a Promise-returning async function into a Node.js-style callback function:</p>
            <pre><code>import { callbackify } from 'node:util';

async function foo(args) {
  throw new Error('boom');
}

const callbackifiedFoo = callbackify(foo);

callbackifiedFoo('some input', (err, value) =&gt; {
  if (err) throw err;
});</code></pre>
            <p>Together, these utilities make it easy to handle the tricky nuances involved in properly bridging between callbacks and promises.</p><p>Please refer to the Node.js documentation for more information on how to use these APIs: <a href="https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal">https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilcallbackifyoriginal</a>, <a href="https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal">https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utilpromisifyoriginal</a>.</p>
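            <p>As an end-to-end sketch, here is promisify applied to a small callback-style function (readConfig is invented for illustration, not a real API):</p>

```javascript
import { promisify } from 'node:util';

// A hypothetical callback-style API: looks up a value and reports the
// result via a Node.js-style (err, value) callback.
function readConfig(name, callback) {
  queueMicrotask(() => callback(null, `config for ${name}`));
}

// promisify() turns it into a Promise-returning function.
const readConfigAsync = promisify(readConfig);
const result = await readConfigAsync('db');
console.log(result); // 'config for db'
```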
    <div>
      <h3>Type brand-checking with util.types</h3>
      <a href="#type-brand-checking-with-util-types">
        
      </a>
    </div>
    <p>The util.types API provides a reliable and generally more efficient way of checking that values are instances of various built-in types.</p>
            <pre><code>import { types } from 'node:util';

types.isAnyArrayBuffer(new ArrayBuffer());  // true
types.isAnyArrayBuffer(new SharedArrayBuffer());  // true
types.isArrayBufferView(new Int8Array());  // true
types.isArrayBufferView(Buffer.from('hello world'));  // true
types.isArrayBufferView(new DataView(new ArrayBuffer(16)));  // true
types.isArrayBufferView(new ArrayBuffer());  // false
function foo() {
  types.isArgumentsObject(arguments);  // true
}
types.isAsyncFunction(function foo() {});  // false
types.isAsyncFunction(async function foo() {});  // true
// ... and so on</code></pre>
            <p>Please refer to the Node.js documentation for more information on how to use the type check APIs: <a href="https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes">https://nodejs.org/dist/latest-v19.x/docs/api/util.html#utiltypes</a>. The Workers implementation currently does not provide implementations of the <code>util.types.isExternal()</code>, <code>util.types.isProxy()</code>, <code>util.types.isKeyObject()</code>, or <code>util.types.isWebAssemblyCompiledModule()</code> APIs.</p>
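    <p>When targeting multiple runtimes, one way to cope with those gaps is to feature-detect before calling; a defensive sketch (the <code>isProxySafe</code> helper is hypothetical, not an official pattern):</p>

```javascript
import { types } from 'node:util';

// isProxy() may be absent outside Node.js, so probe for it first
// and fall back to a conservative answer where it is unimplemented.
function isProxySafe(value) {
  return typeof types.isProxy === 'function' ? types.isProxy(value) : false;
}

isProxySafe(new Proxy({}, {}));  // true in Node.js; false where unsupported
isProxySafe({});                 // false
```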
    <div>
      <h3>What's next</h3>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>Keep your eyes open for more Node.js core APIs coming to Cloudflare Workers soon! We currently have implementations of the string decoder, streams and crypto APIs in active development. These will be introduced into the workers runtime incrementally over time and any worker using the <code>nodejs_compat</code> compatibility flag will automatically pick up the new modules as they are added.</p> ]]></content:encoded>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3sNrfiJGZc4oDIOguVYgq9</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Community Group for Web-interoperable JavaScript runtimes]]></title>
            <link>https://blog.cloudflare.com/introducing-the-wintercg/</link>
            <pubDate>Mon, 09 May 2022 13:00:27 GMT</pubDate>
            <description><![CDATA[ The Web-interoperable Runtimes Community Group is a new effort that brings contributors from Cloudflare Workers, Deno, and Node.js together to collaborate on common Web platform API standards. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, Cloudflare – in partnership with Vercel, Shopify, and individual core contributors to both <a href="https://nodejs.org">Node.js</a> and <a href="https://deno.land">Deno</a> – is announcing the establishment of a new <a href="https://www.w3.org/community/wintercg/">Community Group</a> focused on the interoperable implementation of standardized <a href="https://www.cloudflare.com/learning/security/api/what-is-an-api/">web APIs</a> in non-web browser, JavaScript-based development environments.</p><p>The <a href="https://w3.org">W3C</a> and the <a href="https://whatwg.org/">Web Hypertext Application Technology Working Group</a> (or WHATWG) have long pioneered the efforts to develop standardized APIs and features for the web as a development environment. APIs such as <a href="https://fetch.spec.whatwg.org/">fetch()</a>, <a href="https://streams.spec.whatwg.org/">ReadableStream and WritableStream</a>, <a href="https://url.spec.whatwg.org/">URL</a>, <a href="https://wicg.github.io/urlpattern">URLPattern</a>, <a href="https://encoding.spec.whatwg.org/">TextEncoder</a>, and more have become ubiquitous and valuable components of modern web development. However, the charters of these existing groups have always been <a href="https://whatwg.org/faq#what-is-the-whatwg-working-on">explicitly limited to</a> considering only the specific needs of web browsers, resulting in the development of standards that are not readily optimized for any environment that does not look exactly like a web browser. 
A good example of this effect is that some non-browser implementations of the <a href="https://streams.spec.whatwg.org/">Streams standard</a> are an order of magnitude <a href="https://github.com/nodejs/undici/issues/1203"><i>slower</i></a> than the equivalent Node.js streams and Deno reader implementations due largely to how the API is specified in the standard.</p><p>Serverless environments such as <a href="https://workers.cloudflare.com/">Cloudflare Workers</a>, or runtimes like Node.js and Deno, have a broad range of requirements, issues, and concerns that are simply not relevant to web browsers, and vice versa. This disconnect, and the lack of clear consideration of these differences while the various specifications have been developed, has led to a situation where the non-browser runtimes have implemented their own bespoke, ad-hoc solutions for functionality that is actually common across the environments.</p><p>This new effort is changing that by providing a venue to discuss and advocate for the common requirements of <i>all</i> web environments, deployed anywhere throughout the stack.</p>
    <div>
      <h2>What's in it for developers?</h2>
      <a href="#whats-in-it-for-developers">
        
      </a>
    </div>
    <p>Developers want their code to be portable. Once they write it, if they choose to move to a different environment (from Node.js to Deno, for instance) they don't want to have to completely <i>rewrite</i> it just to make it keep doing the exact same thing it already was.</p><p>One of the more common questions we get from Cloudflare users is how they can make use of some arbitrary module published to <a href="https://npmjs.org">npm</a> that makes use of some set of Node.js-specific or Deno-specific APIs. The answer usually involves pulling in some arbitrary combination of polyfill implementations. The situation is similar with the Deno project, which has opted to integrate a polyfill of the full Node.js core API directly into their standard library. The more these environments implement the same common standards, the more the developer ecosystem can depend on the code they write just working, regardless of where it is being run.</p><p>Cloudflare Workers, Node.js, Deno, and web browsers are all very different from each other, but they share a good number of common functions. For instance, they all provide APIs for generating cryptographic hashes; they all deal in some way with streaming data; they all provide the ability to send an HTTP request somewhere. Where this overlap exists, and where the requirements and functionality are the same, the environments should all implement the same standardized mechanisms.</p>
    <div>
      <h2>The Web-interoperable Runtimes Community Group</h2>
      <a href="#the-web-interoperable-runtimes-community-group">
        
      </a>
    </div>
    <p>The new <a href="https://github.com/wintercg">Web-interoperable Runtimes Community Group</a> (or "WinterCG") operates under the established processes of the <a href="https://www.w3.org/community/about/">W3C</a>.</p><p>The naming of this group is something that took us a while to settle on because it is critical to understand the goals the group is trying to achieve (and what it is <i>not</i>). The key element is the phrase "web-interoperable".</p><p>We use "web" in exactly the same sense that the W3C and WHATWG communities use the term – precisely: <i>web browsers</i>. The term "web-interoperable", then, means implementing features in a manner that is <i>either identical or at least as consistent as possible</i> with the way those features are implemented in web browsers. For instance, the way that the new URL() constructor works in browsers is exactly how the new URL() constructor should work in Node.js, in Deno, and in Cloudflare Workers.</p><p>It is important, however, to acknowledge the fact that Node.js, Deno, and Cloudflare Workers are explicitly <b>not</b> web browsers. While this point should be obvious, it is important to call out because the differences between the various JavaScript environments can greatly impact the design decisions of standardized APIs. Node.js and Deno, for instance, each provide full access to the local file system. Cloudflare Workers, in contrast, has no local file system; and web browsers necessarily restrict applications from manipulating the local file system. Likewise, while web browsers inherently include a concept of a website's "origin" and implement mechanisms such as <a href="https://fetch.spec.whatwg.org/#http-cors-protocol">CORS</a> to protect users against a variety of security threats, there is no equivalent concept of "origins" on the server-side where Node.js, Deno, and Cloudflare Workers operate.</p><p>Up to now, the W3C and WHATWG have concerned themselves strictly with the needs of web browsers. 
The new Web-interoperable Runtimes Community Group is explicitly addressing and advocating for the needs of everyone else.</p><p>It is not intended that WinterCG will go off and publish its own set of independent standard APIs. Ideas for new specifications that emerge from WinterCG will first be submitted for consideration by existing work streams in the W3C and WHATWG with the goal of gaining the broadest possible consensus. However, should it become clear that web browsers have no particular need for, or interest in, a feature that the other environments (such as Cloudflare Workers) have need for, WinterCG will be empowered to move forward with a specification of its own – with the constraint that nothing will be introduced that intentionally conflicts with or is incompatible with the established web standards.</p><p>WinterCG will be open for anyone to participate; it will operate under the established W3C processes and policies; all work will be openly accessible via the <a href="https://github.com/wintercg">"wintercg" GitHub organization</a>; and everything it does will be centered on the goal of maximizing interoperability.</p>
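    <p>The URL interoperability goal is easy to see in practice; this snippet parses and normalizes identically in any WHATWG-conformant runtime:</p>

```javascript
// The WHATWG URL parser resolves relative references and normalizes
// dot-segments the same way in browsers, Node.js, Deno, and Workers.
const url = new URL('/path/../page?q=1#top', 'https://example.com');

url.href;                   // 'https://example.com/page?q=1#top'
url.pathname;               // '/page'
url.searchParams.get('q');  // '1'
```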
    <div>
      <h2>Work in Progress</h2>
      <a href="#work-in-progress">
        
      </a>
    </div>
    <p>WinterCG has already started work on a number of important work items.</p>
    <div>
      <h3>The Minimum Common Web API</h3>
      <a href="#the-minimum-common-web-api">
        
      </a>
    </div>
    <p>From the introduction in the current <a href="https://github.com/wintercg/proposal-common-minimum-api">draft of the specification</a>:</p><blockquote><p>"The Minimum Common Web Platform API is a curated subset of standardized web platform APIs intended to define a minimum set of capabilities common to Browser and Non-Browser JavaScript-based runtime environments."</p></blockquote><p>Or put another way: It is a minimal set of <i>existing</i> web APIs that will be implemented consistently and correctly in Node.js, Deno, and Cloudflare Workers. Most of the APIs, with some exceptions and nuances, already exist in these environments, so the bulk of the work remaining is to ensure that those implementations are conformant to their relative specifications and portable across environments.</p><p>The table below lists all the APIs currently included in this subset (along with an indication of whether the API is currently or likely soon to be supported by Node.js, Deno, and Cloudflare Workers):</p><table><tr><td><p></p></td><td><p><b>Node.js</b></p></td><td><p><b>Deno</b></p></td><td><p><b>Cloudflare Workers</b></p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://dom.spec.whatwg.org/#abortcontroller">AbortController</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://dom.spec.whatwg.org/#abortsignal">AbortSignal</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#bytelengthqueuingstrategy">ByteLengthQueueingStrategy</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://wicg.github.io/compression/#compression-stream">CompressionStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a 
href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#countqueuingstrategy">CountQueuingStrategy</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://w3c.github.io/webcrypto/#dfn-Crypto">Crypto</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://w3c.github.io/webcrypto/#dfn-CryptoKey">CryptoKey</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://wicg.github.io/compression/#decompression-stream">DecompressionStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://webidl.spec.whatwg.org/#idl-DOMException">DOMException</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://dom.spec.whatwg.org/#event">Event</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://dom.spec.whatwg.org/#eventtarget">EventTarget</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablebytestreamcontroller">ReadableByteStreamController</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablestream">ReadableStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablestreambyobreader">ReadableStreamBYOBReader</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a 
href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablestreambyobrequest">ReadableStreamBYOBRequest</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablestreamdefaultcontroller">ReadableStreamDefaultController</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#readablestreamdefaultreader">ReadableStreamDefaultReader</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://w3c.github.io/webcrypto/#dfn-SubtleCrypto">SubtleCrypto</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://encoding.spec.whatwg.org/#textdecoder">TextDecoder</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://encoding.spec.whatwg.org/#textdecoderstream">TextDecoderStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>(soon)</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://encoding.spec.whatwg.org/#textencoder">TextEncoder</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://encoding.spec.whatwg.org/#textencoderstream">TextEncoderStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p></p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#transformstream">TransformStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a 
href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#transformstreamdefaultcontroller">TransformStreamDefaultController</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>(soon)</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://url.spec.whatwg.org/#url">URL</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://wicg.github.io/urlpattern/#urlpattern-class">URLPattern</a></p></td><td><p>?</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://url.spec.whatwg.org/#urlsearchparams">URLSearchParams</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#writablestream">WritableStream</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p><a href="http://web.archive.org/web/20230801055012/https://streams.spec.whatwg.org/#writablestreamdefaultcontroller">WritableStreamDefaultController</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/window-object.html#dom-self">self</a></p></td><td><p>?</p></td><td><p>✔️</p></td><td><p>(soon)</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/webappapis.html#dom-atob">atob()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/webappapis.html#dom-btoa">btoa()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a 
href="http://web.archive.org/web/20230801055012/https://console.spec.whatwg.org/#namespacedef-console">console</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://w3c.github.io/webcrypto/#dom-windoworworkerglobalscope-crypto">crypto</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.navigator.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/system-state.html#dom-navigator-useragent">userAgent</a></p></td><td><p>?</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#dom-queuemicrotask">queueMicrotask()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#dom-settimeout">setTimeout()</a>/ globalthis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#dom-cleartimeout">clearTimeout()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#dom-setinterval">setInterval()</a>/ globalThis.<a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/timers-and-user-prompts.html#dom-clearinterval">clearInterval()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr><tr><td><p>globalThis.</p><p><a href="http://web.archive.org/web/20230801055012/https://html.spec.whatwg.org/multipage/structured-data.html#dom-structuredclone">structuredClone()</a></p></td><td><p>✔️</p></td><td><p>✔️</p></td><td><p>✔️</p></td></tr></table><p>Whenever one of 
the environments diverges from the standardized definition of the API (such as the Node.js implementation of setTimeout() and setInterval()), clear documentation describing the differences will be made available. Such differences should exist only for backwards compatibility with existing code.</p>
    <div>
      <h3>Web Cryptography Streams</h3>
      <a href="#web-cryptography-streams">
        
      </a>
    </div>
    <p>The <a href="https://www.w3.org/TR/WebCryptoAPI/">Web Cryptography API</a> provides a minimal (and very <i>limited</i>) API for common cryptography operations. One of its key limitations is the fact that – unlike Node.js' <a href="https://nodejs.org/dist/latest-v18.x/docs/api/crypto.html">built-in crypto module</a> – it does not have any support for streaming inputs and outputs to symmetric cryptographic algorithms. All Web Cryptography features operate on chunks of data held in memory, all at once. This strictly limits the performance and scalability of cryptographic operations. Using these APIs in any environment that is not a web browser, and trying to make them perform well, quickly becomes painful.</p><p>To address that issue, WinterCG has started <a href="https://github.com/wintercg/proposal-webcrypto-streams">drafting a new specification for Web Crypto Streams</a> that will be submitted to the W3C for consideration as part of a larger effort currently being bootstrapped by the W3C to update the Web Cryptography specification. The goal is to bring streaming crypto operations to the whole of the web, including web browsers, in a way that conforms with existing standards.</p>
    <div>
      <h3>A subset of fetch() for servers</h3>
      <a href="#a-subset-of-fetch-for-servers">
        
      </a>
    </div>
    <p>With the recent release of version 18.0.0, <a href="https://nodejs.org/dist/latest-v18.x/docs/api/globals.html#fetch">Node.js has joined</a> the collection of JavaScript environments that provide an implementation of the WHATWG standardized fetch() API. There are, however, a number of important differences between the way Node.js, Deno, and Cloudflare Workers implement fetch() versus the way it is implemented in web browsers.</p><p>For one, server environments do not have a concept of "origin" like a web browser does. Features such as CORS, intended to protect against cross-origin attacks, are simply irrelevant on the server. Likewise, where web browsers are generally used by one individual user at a time and have a concept of a globally-scoped cookie store, server and serverless applications can be used by millions of users simultaneously, and a globally-scoped cookie store that potentially contains session and authentication details would be both impractical and dangerous.</p><p>Because of the acute differences in the environments, it is often difficult to reason about, and gain consensus on, proposed changes in the fetch standard. Some proposed new API, for instance, might be fantastically relevant to fetch users on a server but completely useless to fetch users in a web browser. Some set of security concerns that are relevant to the browser might have no impact whatsoever on the server.</p><p>To address this issue, and to make it easier for non-web browser environments to implement fetch consistently, WinterCG is <a href="http://web.archive.org/web/20230801055012/https://github.com/wintercg/fetch">working on documenting a subset of the fetch</a> standard that deals specifically with those different requirements and constraints.</p><p>Critically, this subset will be fully compatible with the fetch standard; and is being cooperatively developed by the same folks who have worked on fetch in Node.js, Deno, and Cloudflare Workers. 
It is not intended that this will become a competing definition of the fetch standard, but rather a set of documented guidelines on how to implement fetch correctly in these other environments.</p>
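    <p>Even without the browser's origin model, the fetch standard's building blocks behave identically everywhere; a small sketch constructing and inspecting them directly:</p>

```javascript
// Request and Response come from the fetch standard and work the same
// on the server: no origin, no CORS preflight, no ambient cookie jar.
const req = new Request('https://example.com/api', {
  method: 'POST',
  headers: { 'content-type': 'application/json' },
  body: JSON.stringify({ hello: 'world' }),
});

// A server-side handler can synthesize a Response the same way.
const res = new Response(await req.text(), {
  status: 200,
  headers: { 'content-type': req.headers.get('content-type') },
});
const echoed = await res.json();  // { hello: 'world' }
```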
    <div>
      <h2>We're just getting started</h2>
      <a href="#were-just-getting-started">
        
      </a>
    </div>
    <p>The Web-interoperable Runtimes Community Group is just getting started, and we have a number of ambitious goals. Participation is open to everyone, and all work will be done in the open via GitHub at <a href="https://github.com/wintercg">https://github.com/wintercg</a>. We are actively seeking collaboration with the W3C, the WHATWG, and the JavaScript community at large to ensure that web features are available, work consistently, and meet the requirements of all web developers working anywhere across the stack.</p><p>For more information on the WinterCG, refer to <a href="https://wintercg.org">https://wintercg.org</a>. For details on how to participate, refer to <a href="https://github.com/wintercg/admin">https://github.com/wintercg/admin</a>.</p>
    <div>
      <h3><i>Join us at Cloudflare Connect!</i></h3>
      <a href="#join-us-at-cloudflare-connect">
        
      </a>
    </div>
    <p><i>Interested in learning more about building with Cloudflare Pages? If you’re based in the New York City area, join us on Thursday, May 12th for a series of workshops on how to build a full stack application on Pages! Follow along with a fully hands-on lab featuring Pages in conjunction with other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. </i><a href="https://events.www.cloudflare.com/flow/cloudflare/connect2022nyc/landing/page/page"><i>Register now</i></a><i>!</i></p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Community]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">q3DlDZ0aYj6Ok5mROKeW3</guid>
            <dc:creator>James M Snell</dc:creator>
        </item>
        <item>
            <title><![CDATA[Node.js support in Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/node-js-support-cloudflare-workers/</link>
            <pubDate>Fri, 16 Apr 2021 13:00:00 GMT</pubDate>
            <description><![CDATA[ Check out the current state of Node.js compatibility with Workers. We want to hear from you on which Node.js-dependent libraries and APIs we should support. ]]></description>
            <content:encoded><![CDATA[ <p>We released Cloudflare Workers three years ago, making edge compute accessible to the masses with native support for the world’s most ubiquitous language — JavaScript.</p><p>The Workers platform has transformed so much since its launch. Developers can not only write sandboxed code at our edge, they can also store data at the edge with Workers KV and, more recently, coordinate state within our giant network using <a href="/durable-objects-open-beta/">Durable Objects</a>. Now, we’re excited to share our support of an 11-year-old technology that’s still going strong: Node.js.</p><p>Node.js made a breakthrough by enabling developers to build both the frontend and the backend with a single language. It took JavaScript beyond the browser and into the server by using Chrome’s JavaScript engine, V8.</p><p>Workers is also built on V8 <a href="https://developers.cloudflare.com/workers/learning/how-workers-works#isolates">Isolates</a> and empowers developers in a similar way by allowing you to create entire applications with only JavaScript — except your code runs across Cloudflare’s data centers in over 100 countries.</p>
    <div>
      <h2>Our Package Support Today</h2>
      <a href="#our-package-support-today">
        
      </a>
    </div>
    <p>There is nothing more satisfying than importing a library and watching your code magically work out-of-the-box.</p><p>For over <a href="https://www.npmjs.com/package/webpack?activeTab=dependents">20k packages</a>, Workers supports this magic already: <i>any Node.js package that uses webpack or another polyfill bundler runs within our environment today</i>. You can get started with the greatest hits packages like <a href="https://github.com/cisco/node-jose">node-jose</a> for encryption, <a href="https://github.com/kwhitley/itty-router">itty-router</a> for routing, <a href="https://www.npmjs.com/package/graphql">graphql</a> for querying your API, and so much more.</p><p>And rather than finding out by trial and error, we made a <a href="https://workers.cloudflare.com/works">catalogue of libraries</a> that you can rely on. All you have to do is pick one and boom: it runs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2IlWAw0CyBBZuB9PVNTBn7/03e2a631cc586e12c0c01a279b778598/image1-26.png" />
            
            </figure><p>Once you select a package, you can use webpack to bundle it all up with our <a href="https://developers.cloudflare.com/workers/cli-wrangler/webpack">wrangler CLI</a> and deploy onto Workers. Webpack is a module bundler that takes your JavaScript files, including third-party dependencies, and makes them usable in the browser.</p><p>For an example of bundling dependencies in action, see this <a href="https://www.gatsbyjs.com/docs/deploying-to-cloudflare-workers/">example</a> of getting started with Gatsby.</p>
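            <p>A minimal configuration sketch for this workflow (file name and entry path are illustrative; webpack's <code>webworker</code> target emits code for a browser-like environment with no Node.js built-ins, which matches the Workers runtime):</p>

```javascript
// webpack.config.mjs: a minimal sketch; the entry path is illustrative.
const config = {
  target: 'webworker',  // browser-like environment, no Node.js built-ins
  entry: './index.js',
  mode: 'production',
  output: { filename: 'worker.js' },
};

export default config;
```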
    <div>
      <h2>Our Next Steps</h2>
      <a href="#our-next-steps">
        
      </a>
    </div>
    
    <div>
      <h3>Increasing Worker sizes</h3>
      <a href="#increasing-worker-sizes">
        
      </a>
    </div>
    <p>Using webpack can get you far, but it can quickly push a Worker past the size limit. Node.js was designed with the assumption that servers, unlike the client, are amenable to code bloat, resulting in an ecosystem of packages that are generous in size.</p><p>We plan to support raising the 1MB size limit for Workers soon, so users don’t have to worry about the size of their dependencies. Please share what you’re building in the <a href="https://discord.gg/wdycq7r6Y9">Workers Unbound channel</a> of our Discord if you’d like that limit raised.</p>
    <div>
      <h3>Supporting Native APIs</h3>
      <a href="#supporting-native-apis">
        
      </a>
    </div>
    <p>But why stop there? We want to go even further and support the most important modules, even if they do rely on native code. Our approach will be to reimplement supported modules and polyfill package functionality directly into the Workers runtime. This doesn’t mean we’re shifting our runtime to run on Node.js. In fact, here are two important security and design reasons why we are not:</p><ul><li><p>Node.js was not designed to be a sandbox, which was made apparent by their <a href="https://nodejs.org/api/vm.html#vm_vm_executing_javascript">vm module</a> that says “do not use it to run untrusted code.”</p></li><li><p>For proper sandboxing, Node.js would’ve forced us to build a container-based runtime that both doesn’t scale and isn’t as performant as Isolates. Without containers, we were able to design a system that has 0ms cold starts.</p></li></ul><p>However, there are other ways we can be Node.js compatible without necessarily supporting the entire runtime. What’s up first? We’ll support the Stripe.js SDK and Twilio Client JS SDK. We’ll also build support for the net module, so you can run popular database libraries.</p><p>But we want to hear from you! We created a <a href="https://workers.cloudflare.com/node"><b>leaderboard</b></a> for you to vote on which popular libraries/APIs matter the most. Are statistics packages your jam? Do you need an email utility? What about a templating engine? We want to hear directly from you.</p><p>We won’t stop until our users can import popular Node.js libraries seamlessly. This effort will be large-scale and ongoing for us, but we think it’s well worth it.</p><p>We’re excited to support developers as they build all types of applications. We look forward to hearing from <a href="https://workers.cloudflare.com/node"><b>you</b></a>!</p> ]]></content:encoded>
            <category><![CDATA[Developer Week]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Node.js]]></category>
            <category><![CDATA[Product News]]></category>
            <guid isPermaLink="false">1U9ggRMpSYbVu0D3QNsNUj</guid>
            <dc:creator>Albert Zhao</dc:creator>
        </item>
    </channel>
</rss>