
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 06:02:47 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Let’s DO this: detecting Workers Builds errors across 1 million Durable Objects]]></title>
            <link>https://blog.cloudflare.com/detecting-workers-builds-errors-across-1-million-durable-durable-objects/</link>
            <pubDate>Thu, 29 May 2025 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workers Builds, our CI/CD product for deploying Workers, monitors build issues by analyzing build failure metadata spread across over one million Durable Objects. ]]></description>
            <content:encoded><![CDATA[ <p>Cloudflare Workers Builds is our <a href="https://en.wikipedia.org/wiki/CI/CD"><u>CI/CD</u></a> product that makes it easy to build and deploy Workers applications every time code is pushed to GitHub or GitLab. What makes Workers Builds special is that projects can be built and deployed with minimal configuration.<a href="https://developers.cloudflare.com/workers/ci-cd/builds/#get-started"> <u>Just hook up your project and let us take care of the rest!</u></a></p><p>But what happens when things go wrong, such as failing to install tools or dependencies? What usually happens is that we don’t fix the problem until a customer contacts us about it, at which point many other customers have likely faced the same issue. This can be a frustrating experience for both us and our customers because of the lag time between issues occurring and us fixing them.</p><p>We want Workers Builds to be reliable, fast, and easy to use so that developers can focus on building, not dealing with our bugs. 
That’s why we recently started building an error detection system that can detect, categorize, and surface all build issues occurring on Workers Builds, enabling us to proactively fix issues and add missing features.</p><p>It’s also no secret that we’re big fans of being “<a href="https://www.cloudflare.com/the-net/top-of-mind-security/customer-zero/">Customer Zero</a>” at Cloudflare, and Workers Builds is itself a product that’s built end-to-end on our <a href="https://www.cloudflare.com/developer-platform/"><u>Developer Platform</u></a> using <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive</u></a>, <a href="https://blog.cloudflare.com/cloudflare-containers-coming-2025/"><u>Containers</u></a>, <a href="https://developers.cloudflare.com/queues/"><u>Queues</u></a>, <a href="https://developers.cloudflare.com/kv/"><u>Workers KV</u></a>, <a href="https://developers.cloudflare.com/r2/"><u>R2</u></a>, and <a href="https://developers.cloudflare.com/workers/observability/"><u>Workers Observability</u></a>.</p><p>In this post, we will dive into how we used the <a href="https://www.cloudflare.com/developer-platform/">Cloudflare Developer Platform</a> to check for issues across more than <b>1 million Durable Objects</b>.</p>
    <div>
      <h2>Background: Workers Builds architecture</h2>
      <a href="#background-workers-builds-architecture">
        
      </a>
    </div>
<p>Back in October 2024, we wrote about<a href="https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/"> <u>how we built Workers Builds entirely on the Workers platform</u></a>. To recap, Builds is built using Workers, Durable Objects, Workers KV, R2, Queues, Hyperdrive, and a Postgres database. Some of these pieces were not yet part of the system when we launched back in October (for example, Queues and KV), but the core of the architecture is the same.</p><p>A client Worker receives GitHub/GitLab webhooks and stores build metadata in Postgres (via Hyperdrive). A build management Worker uses two Durable Object classes: a Scheduler class to find builds in Postgres that need scheduling, and a class called BuildBuddy to manage the lifecycle of a build. When a build needs to be started, Scheduler creates a new BuildBuddy instance, which is responsible for creating a container for the build (using<a href="https://blog.cloudflare.com/container-platform-preview/"> <u>Cloudflare Containers</u></a>), monitoring the container with health checks, and receiving build logs so that they can be viewed in the Cloudflare Dashboard.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2Zf6QSXafUJOxn6isLsqar/fd8eaa3428185c3da2ef96ddd1fdc43c/image2.png" />
          </figure><p>In addition to this core scheduling logic, we have several Workers Queues for background work such as sending PR comments to GitHub/GitLab.</p>
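<p>As a sketch of that handoff, the per-build coordinator pattern looks roughly like the following. The interfaces and method names here are illustrative stand-ins for the Durable Object bindings, not the production code:</p>

```typescript
// Hedged sketch of the Scheduler -> BuildBuddy handoff described above.
// BuildBuddyStub/BuildBuddyNamespace are stand-ins for a Durable Object
// binding; the real method names may differ.
interface BuildBuddyStub {
  startBuild(): Promise<void>
}

interface BuildBuddyNamespace {
  // One Durable Object instance per build ID.
  get(buildId: number): BuildBuddyStub
}

// The Scheduler finds pending builds in Postgres, then hands each one to
// its own BuildBuddy, so no single build can block another (or the scheduler).
async function scheduleBuilds(
  pendingBuildIds: number[],
  buddies: BuildBuddyNamespace
): Promise<number> {
  for (const buildId of pendingBuildIds) {
    await buddies.get(buildId).startBuild()
  }
  return pendingBuildIds.length
}
```
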
    <div>
      <h2>The problem: builds are failing</h2>
      <a href="#the-problem-builds-are-failing">
        
      </a>
    </div>
    <p>While this architecture has worked well for us so far, we found ourselves with a problem: compared to<a href="https://developers.cloudflare.com/pages/"> <u>Cloudflare Pages</u></a>, a concerning percentage of builds were failing. We needed to dig deeper and figure out what was wrong, and understand how we could improve Workers Builds so that developers can focus more on shipping instead of build failures.</p>
    <div>
      <h2>Types of build failures</h2>
      <a href="#types-of-build-failures">
        
      </a>
    </div>
    <p>Not all build failures are the same. We have several categories of failures that we monitor:</p><ul><li><p>Initialization failures: when the container fails to start.</p></li><li><p>Clone failures: failing to clone the repository from GitHub/GitLab.</p></li><li><p>Build timeouts: builds that ran past the limit and were terminated by BuildBuddy.</p></li><li><p>Builds failing health checks: the container stopped responding to health checks, e.g. the container crashed for an unknown reason.</p></li><li><p>Failure to install tools or dependencies.</p></li><li><p>Failed user build/deploy commands.</p></li></ul><p>The first few failure types were straightforward, and we’ve been able to track down and fix issues in our build system and control plane to improve what we call “build completion rate”. We define build completion as the following:</p><ol><li><p>We successfully started the build.</p></li><li><p>We attempted to install tools/dependencies (considering failures as “user error”).</p></li><li><p>We attempted to run the user-defined build/deploy commands (again, considering failures as “user error”).</p></li><li><p>We successfully marked the build as stopped in our database.</p></li></ol><p>For example, we had a bug where builds for a deleted Worker would attempt to run and continuously fail, which affected our build completion rate metric.</p>
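<p>The four-part definition can be encoded directly. This is an illustrative sketch; the type and field names are ours, not the production schema:</p>

```typescript
// Sketch of the "build completion" definition: a build counts as complete
// when our system did its part, even if the user's own install or
// build/deploy step failed ("user error").
interface BuildOutcome {
  started: boolean                // we successfully started the build
  toolInstallAttempted: boolean   // we got as far as installing tools/deps
  userCommandsAttempted: boolean  // we got as far as running user commands
  markedStopped: boolean          // we recorded the final state in our DB
}

function isComplete(b: BuildOutcome): boolean {
  return b.started && b.toolInstallAttempted && b.userCommandsAttempted && b.markedStopped
}

function completionRate(builds: BuildOutcome[]): number {
  return builds.filter(isComplete).length / builds.length
}
```
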
    <div>
      <h3>User error</h3>
      <a href="#user-error">
        
      </a>
    </div>
    <p>We’ve made a lot of progress improving the reliability of build and container orchestration, but we had a significant percentage of build failures in the “user error” metric. We started asking ourselves “is this actually user error? Or is there a problem with the product itself?”</p><p>This presented a challenge because questions like “did the build command fail due to a bug in the build system, or user error?” are a lot harder to answer than pass/fail issues like failing to create a container for the build. To answer these questions, we had to build something new, something smarter.</p>
    <div>
      <h3>Build logs</h3>
      <a href="#build-logs">
        
      </a>
    </div>
<p>The most obvious way to determine why a build failed is to look at its logs. When spot-checking build failures, we can typically identify what went wrong. For example, some builds fail to install dependencies because of an out-of-date lockfile (e.g. package-lock.json out of date with package.json). But looking through build failures one by one doesn’t scale. We didn’t want engineers looking through customer build logs without at least suspecting that there was an issue with our build system that we could fix.</p>
    <div>
      <h2>Automating error detection</h2>
      <a href="#automating-error-detection">
        
      </a>
    </div>
    <p>At this point, next steps were clear: we needed an automated way to identify why a build failed based on build logs, and provide a way for engineers to see what the top issues were while ensuring privacy (e.g. removing account-specific identifiers and file paths from the aggregate data).</p>
    <div>
      <h3>Detecting errors in build logs using Workers Queues</h3>
      <a href="#detecting-errors-in-build-logs-using-workers-queues">
        
      </a>
    </div>
<p>The first thing we needed was a way to categorize build errors after a build fails. To do this, we created a queue named BuildErrorsQueue to process builds and look for errors. After a build fails, BuildBuddy sends the build ID to BuildErrorsQueue, which fetches the logs, checks for issues, and saves the results to Postgres.</p>
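<p>The producer side of this is deliberately small. Sketched here with an in-memory stand-in for the queue binding (the function and interface names are ours; the message shape mirrors what the consumer parses):</p>

```typescript
// Sketch of the producer side: after a build fails, BuildBuddy enqueues
// just the build ID; the consumer fetches logs and categorizes the failure.
// QueueLike is an in-memory stand-in for the Queues binding.
interface BuildErrorsQueueMessage {
  build_id: number
}

interface QueueLike<T> {
  send(message: T): Promise<void>
}

async function reportFailedBuild(
  queue: QueueLike<BuildErrorsQueueMessage>,
  build_id: number
): Promise<void> {
  // Keeping the message minimal means the consumer always reads the
  // freshest logs and state from the build's own Durable Object.
  await queue.send({ build_id })
}
```
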
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3423WCenScTudEv27TCMnJ/86b621a957d4249449c99db43a43bb9a/image7.png" />
          </figure><p>We started out with a few static patterns to match things like Wrangler errors in log lines:</p>
            <pre><code>export const DetectedErrorCodes = {
  wrangler_error: {
    detect: async (lines: LogLines) =&gt; {
      const errors: DetectedError[] = []
      for (const line of lines) {
        if (line[2].trim().startsWith('✘ [ERROR]')) {
          errors.push({
            error_code: 'wrangler_error',
            error_group: getWranglerLogGroupFromLogLine(line, wranglerRegexMatchers),
            detected_on: new Date(),
            lines_matched: [line],
          })
        }
      }
      return errors
    },
  },
  installing_tools_or_dependencies_failed: { ... },
}</code></pre>
            <p>It wouldn’t be useful if all Wrangler errors were grouped under a single generic “wrangler_error” code, so we further grouped them by normalizing the log lines into groups:</p>
            <pre><code>function getWranglerLogGroupFromLogLine(
  logLine: LogLine,
  regexMatchers: RegexMatcher[]
): string {
  const original = logLine[2].trim().replaceAll(/[\t\n\r]+/g, ' ')
  let message = original
  let group = original
  for (const { mustMatch, patterns, stopOnMatch, name, useNameAsGroup } of regexMatchers) {
    if (mustMatch !== undefined) {
      const matched = matchLineToRegexes(message, mustMatch)
      if (!matched) continue
    }
    if (patterns) {
      for (const [pattern, mask] of patterns) {
        message = message.replaceAll(pattern, mask)
      }
    }
    if (useNameAsGroup === true) {
      group = name
    } else {
      group = message
    }
    if (Boolean(stopOnMatch) &amp;&amp; message !== original) break
  }
  return group
}

const wranglerRegexMatchers: RegexMatcher[] = [
  {
    name: 'could_not_resolve',
    // ✘ [ERROR] Could not resolve "./balance"
    // ✘ [ERROR] Could not resolve "node:string_decoder" (originally "string_decoder/")
    mustMatch: [/^✘ \[ERROR\] Could not resolve "[@\w :/\\.-]*"/i],
    stopOnMatch: true,
    patterns: [
      [/(?&lt;=^✘ \[ERROR\] Could not resolve ")[@\w :/\\.-]*(?=")/gi, '&lt;MODULE&gt;'],
      [/(?&lt;=\(originally ")[@\w :/\\.-]*(?=")/gi, '&lt;MODULE&gt;'],
    ],
  },
  {
    name: 'no_matching_export_for_import',
    // ✘ [ERROR] No matching export in "src/db/schemas/index.ts" for import "someCoolTable"
    mustMatch: [/^✘ \[ERROR\] No matching export in "/i],
    stopOnMatch: true,
    patterns: [
      [/(?&lt;=^✘ \[ERROR\] No matching export in ")[@~\w:/\\.-]*(?=")/gi, '&lt;MODULE&gt;'],
      [/(?&lt;=" for import ")[\w-]*(?=")/gi, '&lt;IMPORT&gt;'],
    ],
  },
  // ...many more added over time
]</code></pre>
            <p>Once we had our error detection matchers and normalizing logic in place, implementing the BuildErrorsQueue consumer was easy:</p>
            <pre><code>export async function handleQueue(
  batch: MessageBatch,
  env: Bindings,
  ctx: ExecutionContext
): Promise&lt;void&gt; {
  ...
  await pMap(batch.messages, async (msg) =&gt; {
    try {
      const { build_id } = BuildErrorsQueueMessageBody.parse(msg.body)
      await store.buildErrors.deleteErrorsByBuildId({ build_id })
      const bb = getBuildBuddy(env, build_id)
      const errors: DetectedError[] = []
      let cursor: LogsCursor | undefined
      let hasMore = false

      do {
        using maybeNewLogs = await bb.getLogs(cursor, false)
        const newLogs = LogsWithCursor.parse(maybeNewLogs)
        cursor = newLogs.cursor
        const newErrors = await detectErrorsInLogLines(newLogs.lines)
        errors.push(...newErrors)
        hasMore = Boolean(cursor) &amp;&amp; newLogs.lines.length &gt; 0
      } while (hasMore)

      if (errors.length &gt; 0) {
        await store.buildErrors.insertErrors(
          errors.map((e) =&gt; ({
            build_id,
            error_code: e.error_code,
            error_group: e.error_group,
          }))
        )
      }
      msg.ack()
    } catch (e) {
      msg.retry()
      sentry.captureException(e)
    }
  })
}</code></pre>
<p>Here, we’re fetching logs from each build’s BuildBuddy Durable Object, detecting why it failed using the matchers we wrote, and saving errors to the Postgres DB. We also delete any existing errors for a build before processing it, so that re-runs after we improve our error detection patterns don’t add duplicate data to our database.</p>
    <div>
      <h2>What about historical builds?</h2>
      <a href="#what-about-historical-builds">
        
      </a>
    </div>
<p>The BuildErrorsQueue was great for new builds, but this meant we still didn’t know why all the previous build failures happened other than “user error”. We considered only tracking errors in new builds, but this was unacceptable: it would significantly slow down our ability to improve the error detection system, since each iteration would require waiting days for new failures before we could identify the issues to prioritize.</p>
    <div>
      <h3>Problem: logs are stored across one million+ Durable Objects</h3>
      <a href="#problem-logs-are-stored-across-one-million-durable-objects">
        
      </a>
    </div>
    <p>Remember how every build has an associated BuildBuddy DO to store logs? This is a great design for ensuring our logging pipeline scales with our customers, but it presented a challenge when trying to aggregate issues based on logs because something would need to go through all historical builds (&gt;1 million at the time) to fetch logs and detect why they failed.</p><p>If we were using Go and Kubernetes, we might solve this using a long-running container that goes through all builds and runs our error detection. But how do we solve this in Workers?</p>
    <div>
      <h3>How do we backfill errors for historical builds?</h3>
      <a href="#how-do-we-backfill-errors-for-historical-builds">
        
      </a>
    </div>
<p>At this point, we already had the Queue to process new builds. If we could somehow send all of the old build IDs to the queue, it could use<a href="https://developers.cloudflare.com/queues/configuration/consumer-concurrency/"> <u>Queues concurrent consumers</u></a> to work through them all quickly. We thought about hacking together a local script to fetch all of the build IDs and send them to an API that would put them on a queue. But we wanted something more secure and easier to use, so that running a new backfill was as simple as an API call.</p><p>That’s when an idea hit us: what if we used a Durable Object with alarms to fetch a range of builds and send them to BuildErrorsQueue? At first, it seemed far-fetched, given that Durable Object alarms have a limited amount of work they can do per invocation. But wait, if<a href="https://agents.cloudflare.com/"> <u>AI Agents built on Durable Objects</u></a> can manage background tasks, why can’t we fetch millions of build IDs and forward them to queues?</p>
    <div>
      <h3>Building a Build Errors Agent with Durable Objects</h3>
      <a href="#building-a-build-errors-agent-with-durable-objects">
        
      </a>
    </div>
    <p>The idea was simple: create a Durable Object class named BuildErrorsAgent and run a single instance that loops through the specified range of builds in the database and sends them to BuildErrorsQueue.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3kmsS4LACzLUUoECSJT08g/b6a9ccffcbe8a41c300a74546a17ba85/image5.png" />
          </figure><p>The first thing we did was set up an RPC method to start a backfill and save the parameters in<a href="https://developers.cloudflare.com/durable-objects/api/storage-api/#kv-api"> <u>Durable Object KV storage</u></a> so that it can be read each time the alarm executes:</p>
            <pre><code>async start({
  min_build_id,
  max_build_id,
}: {
  min_build_id: BuildRecord['build_id']
  max_build_id: BuildRecord['build_id']
}): Promise&lt;void&gt; {
  logger.setTags({ handler: 'start', environment: this.env.ENVIRONMENT })
  try {
    if (min_build_id &lt; 0) throw new Error('min_build_id cannot be negative')
    if (max_build_id &lt; min_build_id) {
      throw new Error('max_build_id cannot be less than min_build_id')
    }
    const [started_on, stopped_on] = await Promise.all([
      this.kv.get('started_on'),
      this.kv.get('stopped_on'),
    ])
    await match({ started_on, stopped_on })
      .with({ started_on: P.not(null), stopped_on: P.nullish }, () =&gt; {
        throw new Error('BuildErrorsAgent is already running')
      })
      .otherwise(async () =&gt; {
        // delete all existing data and start queueing failed builds
        await this.state.storage.deleteAlarm()
        await this.state.storage.deleteAll()
        this.kv.put('started_on', new Date())
        this.kv.put('config', { min_build_id, max_build_id })
        void this.state.storage.setAlarm(this.getNextAlarmDate())
      })
  } catch (e) {
    this.sentry.captureException(e)
    throw e
  }
}</code></pre>
<p>The most important part of the implementation is the alarm that runs every second until the job is complete. Each alarm invocation has the following steps:</p><ol><li><p>Set a new alarm (always first to ensure an error doesn’t cause it to stop).</p></li><li><p>Retrieve state from KV.</p></li><li><p>Validate that the agent is supposed to be running:</p><ol><li><p>Ensure the agent has been started and hasn’t been stopped.</p></li><li><p>Ensure we haven’t reached the max build ID set in the config.</p></li></ol></li><li><p>Finally, queue up another batch of builds by querying Postgres and sending to the BuildErrorsQueue.</p></li></ol>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6Ab6VC49luyio3t5QamgMD/273c77158ff4ac7af662669360d5f485/image6.png" />
          </figure>
            <pre><code>async alarm(): Promise&lt;void&gt; {
  logger.setTags({ handler: 'alarm', environment: this.env.ENVIRONMENT })
  try {
    void this.state.storage.setAlarm(Date.now() + 1000)
    const kvState = await this.getKVState()
    this.sentry.setContext('BuildErrorsAgent', kvState)
    const ctxLogger = logger.withFields({ state: JSON.stringify(kvState) })

    await match(kvState)
      .with({ started_on: P.nullish }, async () =&gt; {
        ctxLogger.info('BuildErrorsAgent is not started, cancelling alarm')
        await this.state.storage.deleteAlarm()
      })
      .with({ stopped_on: P.not(null) }, async () =&gt; {
        ctxLogger.info('BuildErrorsAgent is stopped, cancelling alarm')
        await this.state.storage.deleteAlarm()
      })
      .with(
        // we should never have started_on set without config set, but just in case
        { started_on: P.not(null), config: P.nullish },
        async () =&gt; {
          const msg =
            'BuildErrorsAgent started but config is empty, stopping and cancelling alarm'
          ctxLogger.error(msg)
          this.sentry.captureException(new Error(msg))
          this.kv.put('stopped_on', new Date())
          await this.state.storage.deleteAlarm()
        }
      )
      .when(
        // make sure there are still builds to enqueue
        (s) =&gt;
          s.latest_build_id !== null &amp;&amp;
          s.config !== null &amp;&amp;
          s.latest_build_id &gt;= s.config.max_build_id,
        async () =&gt; {
          ctxLogger.info('BuildErrorsAgent job complete, cancelling alarm')
          this.kv.put('stopped_on', new Date())
          await this.state.storage.deleteAlarm()
        }
      )
      .with(
        {
          started_on: P.not(null),
          stopped_on: P.nullish,
          config: P.not(null),
          latest_build_id: P.any,
        },
        async ({ config, latest_build_id }) =&gt; {
          // 1. select batch of ~1000 builds
          // 2. send them to Queues 100 at a time, updating
          //    latest_build_id after each batch is sent
          const failedBuilds = await this.store.builds.selectFailedBuilds({
            min_build_id: latest_build_id !== null ? latest_build_id + 1 : config.min_build_id,
            max_build_id: config.max_build_id,
            limit: 1000,
          })
          if (failedBuilds.length === 0) {
            ctxLogger.info(`BuildErrorsAgent: ran out of builds, stopping and cancelling alarm`)
            this.kv.put('stopped_on', new Date())
            await this.state.storage.deleteAlarm()
          }

          for (
            let i = 0;
            i &lt; BUILDS_PER_ALARM_RUN &amp;&amp; i &lt; failedBuilds.length;
            i += QUEUES_BATCH_SIZE
          ) {
            const batch = failedBuilds
              .slice(i, i + QUEUES_BATCH_SIZE)
              .map((build) =&gt; ({ body: build }))

            if (batch.length === 0) {
              ctxLogger.info(`BuildErrorsAgent: ran out of builds in current batch`)
              break
            }
            ctxLogger.info(
              `BuildErrorsAgent: sending ${batch.length} builds to build errors queue`
            )
            await this.env.BUILD_ERRORS_QUEUE.sendBatch(batch)
            this.kv.put(
              'latest_build_id',
              Math.max(...batch.map((m) =&gt; m.body.build_id).concat(latest_build_id ?? 0))
            )

            this.kv.put(
              'total_builds_processed',
              ((await this.kv.get('total_builds_processed')) ?? 0) + batch.length
            )
          }
        }
      )
      .otherwise(() =&gt; {
        const msg = 'BuildErrorsAgent has nothing to do - this should never happen'
        this.sentry.captureException(msg)
        ctxLogger.info(msg)
      })
  } catch (e) {
    this.sentry.captureException(e)
    throw e
  }
}</code></pre>
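<p>The inner loop above walks the fetched builds in fixed-size slices so that each <code>sendBatch</code> call stays within the batch size. Factored out as a plain helper (a sketch of the same slicing logic, not the production code), the chunking looks like this:</p>

```typescript
// Sketch of the batch-slicing logic in the alarm handler: walk an array
// in fixed-size chunks so each Queues sendBatch call stays under the
// batch-size limit.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = []
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size))
  }
  return batches
}

// e.g. chunk([1, 2, 3, 4, 5], 2) → [[1, 2], [3, 4], [5]]
```
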
<p>Using pattern matching with <a href="https://github.com/gvergnaud/ts-pattern"><u>ts-pattern</u></a> made it much easier to see which states we expected and what would happen in each, compared to procedural code. We considered using a more powerful library like <a href="https://stately.ai/docs/xstate"><u>XState</u></a>, but decided on ts-pattern due to its simplicity.</p>
    <div>
      <h3>Running the backfill</h3>
      <a href="#running-the-backfill">
        
      </a>
    </div>
<p>Once everything was rolled out, a single API call let us trigger an error backfill covering over a million failed builds, which completed in a couple of hours and categorized 80% of failed builds on the first run. With a fast backfill process, we were able to iterate on our regex matchers to further refine our error detection and improve error grouping. Here’s what the error list looks like in our staging environment:</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5rdNvB1SpjGpeiCOCs86Tj/74141402e67fbd9ced673a98cb3c57f6/image4.png" />
          </figure>
    <div>
      <h2>Fixes and improvements</h2>
      <a href="#fixes-and-improvements">
        
      </a>
    </div>
<p>Having a better understanding of what’s going wrong has already enabled us to make several improvements:</p><ul><li><p>Wrangler now shows a<a href="https://github.com/cloudflare/workers-sdk/pull/8534"> <u>clearer error message when no config file is found</u></a>.</p></li><li><p>Fixed multiple edge cases where the wrong package manager was used in TypeScript/JavaScript projects.</p></li><li><p>Added support for bun.lock (previously only checked for bun.lockb).</p></li><li><p>Fixed several edge cases where build caching did not work in monorepos.</p></li><li><p>Projects that use a runtime.txt file to specify a Python version no longer fail.</p></li><li><p>…and more!</p></li></ul><p>We’re still working on fixing other bugs we’ve found, but we’re making steady progress. Reliability is a feature we’re striving for in Workers Builds, and this project has helped us make meaningful progress towards that goal. Instead of waiting for customers to contact support, we’re able to proactively identify and fix issues (and catch regressions more easily).</p><p>One of the great things about building on the Developer Platform is how easy it is to ship things. The core of this error detection pipeline (the Queue and Durable Object) <b>only took two days to build</b>, which meant we could spend more time working on improving Workers Builds instead of spending weeks on the error detection pipeline itself.</p>
    <div>
      <h2>What’s next?</h2>
      <a href="#whats-next">
        
      </a>
    </div>
    <p>In addition to continuing to improve build reliability and speed, we’ve also started thinking about other ways to help developers build their applications on Workers. For example, we built a<a href="https://github.com/cloudflare/mcp-server-cloudflare/tree/main/apps/workers-builds"> <u>Builds MCP server</u></a> that allows users to debug builds directly in Cursor/Claude/etc. We’re also thinking about ways we can expose these detected issues in the Cloudflare Dashboard so that users can identify issues more easily without scrolling through hundreds of logs.</p>
    <div>
      <h2>Ready to get started?</h2>
      <a href="#ready-to-get-started">
        
      </a>
    </div>
    <p>Building applications on Workers has never been easier! Try deploying a Durable Object-backed <a href="https://github.com/cloudflare/templates/tree/main/durable-chat-template"><u>chat application</u></a> with Workers Builds: </p><a href="https://deploy.workers.cloudflare.com/?url=https://github.com/cloudflare/templates/tree/main/durable-chat-template"><img src="https://deploy.workers.cloudflare.com/button" /></a><p></p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Durable Objects]]></category>
            <category><![CDATA[Dogfooding]]></category>
            <guid isPermaLink="false">2dJV7VMudIGAhdS2pL32lv</guid>
            <dc:creator>Jacob Hands</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers Builds: integrated CI/CD built on the Workers platform]]></title>
            <link>https://blog.cloudflare.com/workers-builds-integrated-ci-cd-built-on-the-workers-platform/</link>
            <pubDate>Thu, 31 Oct 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Workers Builds, an integrated CI/CD pipeline for the Workers platform, recently launched in open beta. We walk through how we built this product on Cloudflare’s Developer Platform. ]]></description>
            <content:encoded><![CDATA[ <p>During 2024’s Birthday Week, we <a href="https://blog.cloudflare.com/builder-day-2024-announcements/#continuous-integration-and-delivery"><u>launched Workers Builds</u></a> in open beta — an integrated <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">Continuous Integration and Delivery (CI/CD) </a>workflow you can use to build and deploy everything from full-stack applications built with the most popular frameworks to simple static websites onto the Workers platform. With Workers Builds, you can connect a GitHub or GitLab repository to a Worker, and Cloudflare will automatically build and deploy your changes each time you push a commit.</p><p>Workers Builds is intended to bridge the gap between the developer experiences for Workers and Pages, the latter of which <a href="https://blog.cloudflare.com/cloudflare-pages/"><u>launched with an integrated CI/CD system in 2020</u></a>. As we continue to <a href="https://blog.cloudflare.com/pages-and-workers-are-converging-into-one-experience/"><u>merge the experiences of Pages and Workers</u></a>, we wanted to bring one of the best features of Pages to Workers: the ability to tie deployments to existing development workflows in GitHub and GitLab with minimal developer overhead. </p><p>In this post, we’re going to share how we built the Workers Builds system on Cloudflare’s Developer Platform, using <a href="https://developers.cloudflare.com/workers/"><u>Workers</u></a>, <a href="https://developers.cloudflare.com/durable-objects"><u>Durable Objects</u></a>, <a href="https://developers.cloudflare.com/hyperdrive"><u>Hyperdrive</u></a>, <a href="https://developers.cloudflare.com/logs/log-explorer/"><u>Workers Logs</u></a>, and <a href="https://developers.cloudflare.com/workers/configuration/smart-placement"><u>Smart Placement</u></a>.</p>
    <div>
      <h2>The design problem</h2>
      <a href="#the-design-problem">
        
      </a>
    </div>
    <p>The core problem for Workers Builds is how to pick up a commit from GitHub or GitLab and start a containerized job that can clone the repo, build the project, and deploy a Worker. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6n6UCIKAM4uAtWzsRBiS16/1c0b655b415afe375b6b153ada570357/BLOG-2594_2.png" />
          </figure><p>Pages solves a similar problem, and we were initially inclined to expand our existing architecture and tech stack, which includes a centralized configuration plane built on Go in Kubernetes. We also considered the ways in which the Workers ecosystem has evolved in the four years since Pages launched — we have since launched so many more tools built for use cases just like this! </p><p>The distributed nature of Workers offers some advantages over a centralized stack — we can spend less time configuring Kubernetes because Workers automatically handles failover and scaling. Ultimately, we decided to keep using what required no additional work to re-use from Pages (namely, the system for connecting GitHub/GitLab accounts to Cloudflare, and ingesting push events from them), and for the rest build out a new architecture on the Workers platform, with reliability and minimal latency in mind.</p>
    <div>
      <h2>The Workers Builds system</h2>
      <a href="#the-workers-builds-system">
        
      </a>
    </div>
    <p>We didn’t need to make any changes to the system that handles connections from GitHub/GitLab to Cloudflare and ingesting push events from them. That left us with two systems to build: the configuration plane for users to connect a Worker to a repo, and a build management system to run and monitor builds.</p>
    <div>
      <h3>Client Worker </h3>
      <a href="#client-worker">
        
      </a>
    </div>
    <p>We can begin with our configuration plane, which consists of a simple Client Worker that implements a RESTful API (using <a href="https://hono.dev/docs/getting-started/cloudflare-workers"><u>Hono</u></a>) and connects to a PostgreSQL database. It’s in this database that we store build configurations for our users, and through this Worker that users can view and manage their builds. </p><p>We use a <a href="https://developers.cloudflare.com/hyperdrive/"><u>Hyperdrive binding</u></a> to connect to our database <a href="https://developers.cloudflare.com/hyperdrive/configuration/connect-to-private-database"><u>securely over Cloudflare Access</u></a> (Hyperdrive also manages connection pooling and query caching for us).</p><p>We considered a more distributed data model (like <a href="https://developers.cloudflare.com/d1/"><u>D1</u></a>, sharded by account), but ultimately decided that keeping our database in a single datacenter better fits our use case. The Workers Builds data model is relational — Workers belong to Cloudflare Accounts, and Builds belong to Workers — and build metadata must be consistent in order to properly manage build queues. We chose to keep our failover-ready database in a centralized datacenter and take advantage of two other Workers products, Smart Placement and Hyperdrive, in order to keep the benefits of a distributed control plane. </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/33eYqRr5LXKbAvfP8RR7X7/b82858c39b9755c6e056577c9449b00f/BLOG-2594_3.png" />
          </figure><p>Everything that you see in the Cloudflare Dashboard related to Workers Builds is served by this Worker. </p>
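<p>To make the Hyperdrive access pattern concrete, here is a minimal sketch of how a Worker can hand Hyperdrive’s pooled connection string to a query layer. This is an illustration, not our actual code: the table and column names are invented, and the query function is abstracted so the sketch stays driver-agnostic (in a real Worker you would pass the connection string to a Postgres driver such as <code>postgres</code> or <code>pg</code>).</p>

```typescript
// Sketch only: the binding shape follows the public Hyperdrive API, which
// exposes a pooled connectionString; everything else here is illustrative.
interface HyperdriveBinding {
  connectionString: string
}

type Query = (text: string, params: unknown[]) => Promise<unknown[]>

// Hyperdrive terminates pooling and caching; the Worker just sees a normal DSN.
export function connectionStringFor(env: { HYPERDRIVE: HyperdriveBinding }): string {
  return env.HYPERDRIVE.connectionString
}

// Hypothetical read path for build metadata (table and columns are made up).
export async function listBuilds(query: Query, workerId: string): Promise<unknown[]> {
  return query('SELECT build_uuid, status FROM builds WHERE worker_id = $1', [workerId])
}
```

<p>The Worker itself stays stateless: every piece of durable state lives behind the binding.</p>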
    <div>
      <h3>Build Management Worker</h3>
      <a href="#build-management-worker">
        
      </a>
    </div>
    <p>The more challenging problem we faced was how to run and manage user builds effectively. We wanted to support the same experience that we had achieved with Pages, which led to these key requirements:</p><ol><li><p>Builds should be initiated with minimal latency.</p></li><li><p>The status of a build should be tracked and displayed through its entire lifecycle, starting when a user pushes a commit.</p></li><li><p>Customer build logs should be stored in a secure, private, and long-lived way.</p></li></ol><p>To solve these problems, we leaned heavily on <a href="https://developers.cloudflare.com/durable-objects/"><u>Durable Objects</u></a> (DO). </p><p>We created a Build Management Worker with two DO classes: a Scheduler class to manage the scheduling of builds, and a class called BuildBuddy to manage individual builds. We designed the system this way to make it efficient and scalable: since each build is assigned its own build manager DO, its operation won’t ever block other builds or the scheduler, meaning we can start up builds with minimal latency. Below, we dive into each of these Durable Object classes.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6RUDJI7IYIlzcX4qjF9EYY/7e959b7a4489a41d275d74d634389f31/BLOG-2594_4.png" />
          </figure>
    <div>
      <h4>Scheduler DO</h4>
      <a href="#scheduler-do">
        
      </a>
    </div>
    <p>The Scheduler DO class is relatively simple. Using <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>Durable Objects Alarms</u></a>, it is triggered every second to pull up a list of user build configurations that are ready to be started. For each of those builds, the Scheduler creates an instance of our other DO Class, the Build Buddy. </p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'
import PQueue from 'p-queue'

export class BuildScheduler extends DurableObject {
    constructor(ctx: DurableObjectState, env: Bindings) {
        super(ctx, env)
    }

    // The DO alarm handler will be called every second to fetch builds
    async alarm(): Promise&lt;void&gt; {
        // Set the alarm to run again in 1 second
        await this.updateAlarm()

        const builds = await this.getBuildsToSchedule()
        await this.scheduleBuilds(builds)
    }

    async scheduleBuilds(builds: Builds[]): Promise&lt;void&gt; {
        // Nothing to do if there are no builds to schedule
        if (builds.length === 0) return

        const queue = new PQueue({ concurrency: 6 })
        // Begin running builds
        builds.forEach((build) =&gt;
            queue.add(async () =&gt; {
                // The BuildBuddy is another DO described more in the next section!
                const bb = getBuildBuddy(this.env, build.build_id)
                await bb.startBuild(build)
            })
        )

        await queue.onIdle()
    }

    async getBuildsToSchedule(): Promise&lt;Builds[]&gt; {
        // returns list of builds to schedule
    }

    async updateAlarm(): Promise&lt;void&gt; {
        // We want to ensure we aren't running multiple alarms at once,
        // so we only set the next alarm if there isn’t already one set.
        const existingAlarm = await this.ctx.storage.getAlarm()
        if (existingAlarm === null) {
            await this.ctx.storage.setAlarm(Date.now() + 1000)
        }
    }
}
</code></pre>
            
    <div>
      <h4>Build Buddy DO</h4>
      <a href="#build-buddy-do">
        
      </a>
    </div>
    <p>The Build Buddy DO class is what we use to manage each individual build from the time it begins initializing to when it is stopped. Every build has a buddy for life!</p><p>Upon creation of a Build Buddy DO instance, the Scheduler immediately calls <code>startBuild()</code> on the instance. The <code>startBuild()</code> method is responsible for fetching all metadata and secrets needed to run a build, and then kicking off a build on Cloudflare’s container platform (<a href="https://blog.cloudflare.com/container-platform-preview/"><u>not public yet, but coming soon</u></a>!). </p><p>As the containerized build runs, it reports back to the Build Buddy, sending status updates and logs for the Build Buddy to deal with. </p>
    <div>
      <h5>Build status</h5>
      <a href="#build-status">
        
      </a>
    </div>
    <p>As a build progresses, it reports its own status back to Build Buddy, sending updates when it has finished initializing, has completed successfully, or has been terminated by the user. The Build Buddy is responsible for handling this incoming information from the containerized build, writing status updates to the database (via a Hyperdrive binding) so that users can see the status of their build in the Cloudflare dashboard.</p>
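<p>One way to picture this status handling is as a small state machine: a status report from the container is only applied if it is a legal transition from the build’s current state, which protects the database from out-of-order or duplicate reports. The states and transition table below are a hypothetical sketch for illustration, not our shipped schema.</p>

```typescript
// Illustrative build lifecycle; the names are assumptions, not the real schema.
export type BuildStatus =
  | 'queued' | 'initializing' | 'running'
  | 'success' | 'failed' | 'cancelled'

const TRANSITIONS: Record<BuildStatus, BuildStatus[]> = {
  queued: ['initializing', 'cancelled'],
  initializing: ['running', 'failed', 'cancelled'],
  running: ['success', 'failed', 'cancelled'],
  // Terminal states accept no further updates
  success: [],
  failed: [],
  cancelled: [],
}

// Reject out-of-order or duplicate reports before writing to the database.
export function canTransition(from: BuildStatus, to: BuildStatus): boolean {
  return TRANSITIONS[from].includes(to)
}
```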
    <div>
      <h5>Build logs</h5>
      <a href="#build-logs">
        
      </a>
    </div>
    <p>A running build generates output logs that are important to store and surface to the user. The containerized build flushes these logs to the Build Buddy every second, which, in turn, stores those logs in <a href="https://developers.cloudflare.com/durable-objects/api/storage-api/"><u>DO storage</u></a>. </p><p>The decision to use Durable Object storage here makes it easy to multicast logs to multiple clients efficiently, and allows us to use the same API for both streaming logs and viewing historical logs. </p><p>// build-management-app.ts</p>
            <pre><code>// We created a Hono app for use by our Client Worker API
const app = new Hono&lt;HonoContext&gt;()
   .post(
       '/api/builds/:build_uuid/status',
       async (c) =&gt; {
           const statusUpdate = await c.req.json()

           // fetch build metadata
           const build = ...

           const bb = getBuildBuddy(c.env, build.build_id)
           return await bb.handleStatusUpdate(build, statusUpdate)
       }
   )
   .post(
       '/api/builds/:build_uuid/logs',
       async (c) =&gt; {
           const logs = await c.req.json()

           // fetch build metadata
           const build = ...

           const bb = getBuildBuddy(c.env, build.build_id)
           return await bb.addLogLines(logs.lines)
       }
   )

export default {
   fetch: app.fetch
}
</code></pre>
            <p>// build-buddy.ts</p>
            <pre><code>import { DurableObject } from 'cloudflare:workers'

export class BuildBuddy extends DurableObject {
   compute: ComputeClient

   constructor(ctx: DurableObjectState, env: Bindings) {
       super(ctx, env)
       this.compute = new ComputeClient({
           // ...
       })
   }

   // The Scheduler DO calls startBuild upon creating a BuildBuddy instance
   startBuild(build: Build): void {
       // Fire and forget, so the Scheduler isn't blocked on build startup
       void this.startBuildAsync(build)
   }

   async startBuildAsync(build: Build): Promise&lt;void&gt; {
       // fetch all metadata necessary to run the build, including
       // environment variables, secrets, build tokens, repo credentials,
       // build image URI, etc.
       // ...

       // start a containerized build
       const computeBuild = await this.compute.createBuild({
           // ...
       })
   }

   // The Build Management worker calls handleStatusUpdate when it receives an update
   // from the containerized build
   async handleStatusUpdate(
       build: Build,
       buildStatusUpdatePayload: Payload
   ): Promise&lt;void&gt; {
       // Write status updates to the database
   }

   // The Build Management worker calls addLogLines when it receives flushed logs
   // from the containerized build
   async addLogLines(logs: LogLines): Promise&lt;void&gt; {
       // Generate nextLogsKey to store logs under
       await this.ctx.storage.put(nextLogsKey, logs)
   }

   // The Client Worker can call methods on a Build Buddy via RPC, using a service binding to the Build Management Worker.
   // The getLogs method retrieves logs for the user, and the cancelBuild method forwards a request from the user to terminate a build.
   async getLogs(cursor?: string) {
       const decodedCursor = cursor !== undefined ? decodeLogsCursor(cursor) : undefined
       return await this.readLogsFromStorage(decodedCursor)
   }

   async readLogsFromStorage(cursor?: LogsCursor) {
       // list stored log chunks from DO storage, starting after the cursor
   }

   async cancelBuild(compute_id: string, build_id: number): Promise&lt;void&gt; {
       await this.terminateBuild(build_id, compute_id)
   }

   async terminateBuild(build_id: number, compute_id: string): Promise&lt;void&gt; {
       await this.compute.stopBuild(compute_id)
   }
}

export function getBuildBuddy(
   env: Pick&lt;Bindings, 'BUILD_BUDDY'&gt;,
   build_id: number
): DurableObjectStub&lt;BuildBuddy&gt; {
   const id = env.BUILD_BUDDY.idFromName(build_id.toString())
   return env.BUILD_BUDDY.get(id)
}
</code></pre>
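<p>One property that makes DO storage convenient for logs is that <code>list()</code> returns keys in lexicographic order, so a zero-padded sequence number gets chunks back in the order they were flushed, and the last key a client has seen can serve directly as its resume cursor. The key layout below is a hypothetical sketch, not our real implementation.</p>

```typescript
// Hypothetical log-chunk key scheme; the prefix and padding width are
// assumptions for illustration, not the production key layout.
export function logChunkKey(seq: number): string {
  // Zero-padding makes lexicographic order match numeric order,
  // so chunk 9 sorts before chunk 10.
  return `logs:${seq.toString().padStart(10, '0')}`
}
```

<p>A streaming client can then poll with something like <code>storage.list({ startAfter: cursor })</code>, while historical reads walk the same keys from the beginning — one API for both.</p>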
            
    <div>
      <h5>Alarms</h5>
      <a href="#alarms">
        
      </a>
    </div>
    <p>We utilize <a href="https://developers.cloudflare.com/durable-objects/api/alarms/"><u>alarms</u></a> in the Build Buddy to check that a build has a healthy startup and to terminate any builds that run longer than 20 minutes. </p>
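<p>The watchdog logic itself is simple enough to sketch. Assuming (hypothetically) that the Build Buddy records when a build started, the alarm handler only needs to compare the elapsed time against the cap:</p>

```typescript
// Sketch of the 20-minute watchdog described above; function and field
// names are illustrative, not the actual Build Buddy code.
const MAX_BUILD_MS = 20 * 60 * 1000

// When the hard-timeout alarm should fire for a build started at startedAtMs.
export function watchdogDeadline(startedAtMs: number): number {
  return startedAtMs + MAX_BUILD_MS
}

// Called from the alarm handler: terminate if the build has overrun its cap.
export function shouldTerminate(startedAtMs: number, nowMs: number): boolean {
  return nowMs >= watchdogDeadline(startedAtMs)
}
```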
    <div>
      <h2>How else have we leveraged the Developer Platform?</h2>
      <a href="#how-else-have-we-leveraged-the-developer-platform">
        
      </a>
    </div>
    <p>Now that we've gone over the core behavior of the Workers Builds control plane, we'd like to detail a few other features of the Workers platform that we use to improve performance, monitor system health, and troubleshoot customer issues.</p>
    <div>
      <h3>Smart Placement and location hints</h3>
      <a href="#smart-placement-and-location-hints">
        
      </a>
    </div>
    <p>While our control plane is distributed in the sense that it can be run across multiple datacenters, to reduce latency costs, we want most requests to be served from locations close to our primary database in the western US.</p><p>While a build is running, Build Buddy, a Durable Object, is continuously writing status updates to our database. For the Client and the Build Management API Workers, we enabled <a href="https://developers.cloudflare.com/workers/configuration/smart-placement/"><u>Smart Placement</u></a> with <a href="https://developers.cloudflare.com/durable-objects/reference/data-location/#provide-a-location-hint"><u>location hints</u></a> to ensure requests run close to the database.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hhFLpYizLZ6cyu4h80YL8/40af67320a6bf44f375d6055b2997a99/BLOG-2594_5.png" />
          </figure><p>This graph shows the reduction in round trip time (RTT) observed for our Worker with Smart Placement turned on. </p>
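<p>In code, a location hint is just an extra option supplied when the Durable Object stub is created. The sketch below stubs out the namespace interface for illustration; <code>'wnam'</code> (western North America) is an assumed hint chosen to match a western-US database.</p>

```typescript
// Illustrative only: a minimal stand-in for the DO namespace binding.
interface BuddyNamespace<Id, Stub> {
  idFromName(name: string): Id
  get(id: Id, options?: { locationHint?: string }): Stub
}

// Create (or reach) a Build Buddy biased toward the database's region.
export function getBuildBuddyNearDB<Id, Stub>(
  ns: BuddyNamespace<Id, Stub>,
  buildId: number,
  hint = 'wnam'
): Stub {
  const id = ns.idFromName(buildId.toString())
  // The hint influences where the DO is first created; after that,
  // the object stays where it was placed.
  return ns.get(id, { locationHint: hint })
}
```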
    <div>
      <h3>Workers Logs</h3>
      <a href="#workers-logs">
        
      </a>
    </div>
    <p>We needed a logging tool that allows us to aggregate and search across persistent operational logs from our Workers to assist with identifying and troubleshooting issues. We worked with the Workers Observability team to become early adopters of <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs"><u>Workers Logs</u></a>.</p><p>Workers Logs worked out of the box, giving us fast, easy-to-use logs directly within the Cloudflare dashboard. To improve our ability to search logs, we created a <a href="https://www.npmjs.com/package/workers-tagged-logger"><u>tagging library</u></a> that lets us attach metadata, such as the git tag of the deployed Worker that a log comes from, so we can filter logs by release.</p><p>See a shortened example below for how we handle and log errors on the Client Worker. </p><p>// client-worker-app.ts</p>
            <pre><code>// The Client Worker is a RESTful API built with Hono
const app = new Hono&lt;HonoContext&gt;()
   // This is from the workers-tagged-logger library - first we register the logger
   .use(useWorkersLogger('client-worker-app'))
   // If any error happens during execution, this middleware will ensure we log the error
   .onError(useOnError)
   // routes
   .get(
       '/apiv4/builds',
       async (c) =&gt; {
           const { ids } = c.req.query()
           return await getBuildsByIds(c, ids)
       }
   )


function useOnError(e: Error, c: Context&lt;HonoContext&gt;): Response {
   // Set the release tag on the error log
   logger.setTags({ release: c.env.GIT_TAG })
 
   // Write a log at level 'error'. Can also log 'info', 'log', 'warn', and 'debug'
   logger.error(e)
   return c.json(internal_error.toJSON(), internal_error.statusCode)
}
</code></pre>
            <p>This setup can lead to the following sample log message from our Workers Log dashboard. You can see the release tag is set on the log.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6gfd725NCFNrhlDt3gK515/90138c159285e91535a986266918be13/BLOG-2594_6.png" />
          </figure><p>We can get a better sense of the impact of the error by adding filters to the Workers Logs view, as shown below. We are able to filter on any of the fields since we’re <a href="https://developers.cloudflare.com/workers/observability/logs/workers-logs#logging-structured-json-objects"><u>logging with structured JSON</u></a>.  </p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6XqXINluVzzyHd4O17JsnZ/0ac714792a4d21623b4a875291ae0ad0/BLOG-2594_7.png" />
          </figure>
    <div>
      <h3>R2</h3>
      <a href="#r2">
        
      </a>
    </div>
    <p>Coming soon to Workers Builds is build caching, used to store artifacts of a build for subsequent builds to reuse, such as package dependencies and build outputs. Build caching can speed up customer builds by avoiding the need to redownload dependencies from NPM or to rebuild projects from scratch. The cache itself will be backed by <a href="https://www.cloudflare.com/developer-platform/products/r2/">R2 storage</a>. </p>
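<p>As a sketch of how such a cache might be addressed, imagine keying entries by a hash of the dependency lockfile: identical dependencies hit the same R2 object, and any change simply falls back to a cold install. The bucket layout and names below are assumptions for illustration, not the shipped design.</p>

```typescript
// Hypothetical R2-backed cache addressing; the key layout is an assumption.
interface BucketLike<Obj> {
  get(key: string): Promise<Obj | null>
}

export function cacheKey(accountId: string, lockfileHash: string): string {
  return `build-cache/${accountId}/${lockfileHash}.tar.zst`
}

// A miss returns null, and the build installs dependencies from scratch.
export async function restoreCache<Obj>(
  bucket: BucketLike<Obj>,
  accountId: string,
  lockfileHash: string
): Promise<Obj | null> {
  return bucket.get(cacheKey(accountId, lockfileHash))
}
```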
    <div>
      <h3>Testing</h3>
      <a href="#testing">
        
      </a>
    </div>
    <p>We were able to build up a great testing story using <a href="https://blog.cloudflare.com/workers-vitest-integration/"><u>Vitest and workerd</u></a> — unit tests, cross-worker integration tests, the works. In the example below, we make use of the <code>runInDurableObject</code> stub from <code>cloudflare:test</code> to test instance methods on the Scheduler DO directly.</p><p>// scheduler.spec.ts</p>
            <pre><code>import { env, runInDurableObject } from 'cloudflare:test'
import { expect, test } from 'vitest'
import { BuildScheduler } from './scheduler'

test('getBuildsToSchedule() runs a queued build', async () =&gt; {
   // Our test harness creates a single build for our scheduler to pick up
   const { build } = await harness.createBuild()

   // We create a scheduler DO instance
   const id = env.BUILD_SCHEDULER.idFromName(crypto.randomUUID())
   const stub = env.BUILD_SCHEDULER.get(id)
   await runInDurableObject(stub, async (instance: BuildScheduler) =&gt; {
       expect(instance).toBeInstanceOf(BuildScheduler)

       // We check that the scheduler picks up 1 build
       const builds = await instance.getBuildsToSchedule()
       expect(builds.length).toBe(1)

       // We start the build, which should mark it as running
       await instance.scheduleBuilds(builds)
   })

   // Check that there are no more builds to schedule
   const queuedBuilds = ...
   expect(queuedBuilds.length).toBe(0)
})
</code></pre>
            <p>We use <code>SELF.fetch()</code> from <code>cloudflare:test</code> to run integration tests on our Client Worker, as shown below. This integration test covers our Hono endpoint and database queries made by the Client Worker in retrieving the metadata of a build.</p><p>// builds_api.test.ts</p>
            <pre><code>import { env, SELF } from 'cloudflare:test'
import { expect, it } from 'vitest'

it('correctly selects a single build', async () =&gt; {
   // Our test harness creates a randomized build to test with
   const { build } = await harness.createBuild()

   // We send a request to the Client Worker itself to fetch the build metadata
   const getBuild = await SELF.fetch(
       `https://example.com/builds/${build.build_uuid}`,
       {
           method: 'GET',
           headers: new Headers({
               Authorization: `Bearer JWT`,
               'content-type': 'application/json',
           }),
       }
   )

   // We expect to receive a 200 response from our request and for the
   // build metadata returned to match that of the random build that we created
   expect(getBuild.status).toBe(200)
   const getBuildV4Resp = await getBuild.json()
   const buildResp = getBuildV4Resp.result
   expect(buildResp).toBeTruthy()
   expect(buildResp).toEqual(build)
})
</code></pre>
            <p>These tests run on the same runtime that Workers run on in production, meaning we have greater confidence that any code changes will behave as expected when they go live. </p>
    <div>
      <h3>Analytics</h3>
      <a href="#analytics">
        
      </a>
    </div>
    <p>We use the technology underlying the <a href="https://developers.cloudflare.com/analytics/analytics-engine/"><u>Workers Analytics Engine</u></a> to collect all of the metrics for our system. We set up <a href="https://developers.cloudflare.com/analytics/analytics-engine/grafana/"><u>Grafana</u></a> dashboards to display these metrics. </p>
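<p>Writing a metric with Analytics Engine boils down to a single <code>writeDataPoint()</code> call on a dataset binding. The particular field layout below (account as the index, status as a blob, duration as a double) is an illustrative choice, not our actual schema.</p>

```typescript
// Sketch of an Analytics Engine write; the field layout is an assumption.
interface AnalyticsDataset {
  writeDataPoint(point: { indexes: string[]; blobs: string[]; doubles: number[] }): void
}

// Shape one per-build data point (hypothetical fields).
export function buildMetric(accountId: string, status: string, durationMs: number) {
  return { indexes: [accountId], blobs: [status], doubles: [durationMs] }
}

export function recordBuild(
  dataset: AnalyticsDataset,
  accountId: string,
  status: string,
  durationMs: number
): void {
  dataset.writeDataPoint(buildMetric(accountId, status, durationMs))
}
```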
    <div>
      <h3>JavaScript-native RPC</h3>
      <a href="#javascript-native-rpc">
        
      </a>
    </div>
    <p><a href="https://blog.cloudflare.com/javascript-native-rpc/"><u>JavaScript-native RPC</u></a> was added to Workers in April of 2024, and it’s pretty magical. In the scheduler code example above, we call <code>startBuild()</code> on the BuildBuddy DO from the Scheduler DO. Without RPC, we would need to stand up routes on the BuildBuddy <code>fetch()</code> handler for the Scheduler to trigger with a fetch request. With RPC, there is almost no boilerplate — all we need to do is call a method on a class. </p>
            <pre><code>const bb = getBuildBuddy(this.env, build.build_id)


// Starting a build without RPC 😢
await bb.fetch('http://do/api/start_build', {
    method: 'POST',
    body: JSON.stringify(build),
})


// Starting a build with RPC 😸
await bb.startBuild(build)
</code></pre>
            
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>By using Workers and Durable Objects, we were able to build a complex, distributed system that is easy to understand and easy to scale. </p><p>It’s been a blast for our team to build on top of the very platform that we work on, something that would have been much harder to achieve on Workers just a few years ago. We believe in being Customer Zero for our own products — identifying pain points firsthand and continuously improving the developer experience by applying our products to our own use cases. It was fulfilling to have our needs as developers met by other teams and then see those tools quickly become available to the rest of the world — we were collaborators and internal testers for Workers Logs and private network support for Hyperdrive (both released on Birthday Week), and the soon-to-be-released container platform.</p><p>Opportunities to build complex applications on the Developer Platform have increased in recent years as the platform has matured and expanded product offerings for more use cases. We hope that Workers Builds will be yet another tool in the Workers toolbox that enables developers to spend less time thinking about configuration and more time writing code. </p><p>Want to try it out? Check out the <a href="https://developers.cloudflare.com/workers/ci-cd/builds/"><u>docs</u></a> to learn more about how to deploy your first project with Workers Builds.</p> ]]></content:encoded>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">6uKjGQLUKCb33wGIcOQE1Y</guid>
            <dc:creator>Serena Shah-Simpson</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
            <dc:creator>Natalie Rogers</dc:creator>
        </item>
        <item>
            <title><![CDATA[Race ahead with Cloudflare Pages build caching]]></title>
            <link>https://blog.cloudflare.com/race-ahead-with-build-caching/</link>
            <pubDate>Thu, 28 Sep 2023 13:00:57 GMT</pubDate>
            <description><![CDATA[ Unleash the fast & furious in your builds with Cloudflare Pages' build caching. Reduce build times by caching previously computed project components. Now in Beta for select frameworks and package managers. ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Today, we are thrilled to release a beta of Cloudflare Pages support for build caching! With build caching, we are offering a supercharged Pages experience by helping you cache parts of your project to save time on subsequent builds.</p><p>For developers, time is not just money – it’s innovation and progress. When every second counts in crunch time before a new launch, the “need for speed” becomes <i>critical</i>. With Cloudflare Pages’ built-in <a href="https://www.cloudflare.com/learning/serverless/glossary/what-is-ci-cd/">continuous integration and continuous deployment (CI/CD)</a>, developers count on us to drive fast. We’ve already taken great strides in making sure we’re enabling quick development iterations for our users by <a href="/cloudflare-pages-build-improvements/">making solid improvements on the stability and efficiency</a> of our build infrastructure. But we always knew there was more to our build story.</p>
    <div>
      <h3>Quick pit stops</h3>
      <a href="#quick-pit-stops">
        
      </a>
    </div>
    <p>Build times can feel like a developer's equivalent of a time-out, a forced pause in the creative process—the inevitable pit stop in a high-speed formula race.</p><p>Long build times not only break the flow of individual developers, but can also create a ripple effect across the team. They can slow down iterations and push back deployments. In the fast-paced world of CI/CD, these delays can drastically impact productivity and the delivery of products.</p><p>We want to empower developers to <b>win the race</b>, miles ahead of the competition.</p>
    <div>
      <h3>Mechanics of build caching</h3>
      <a href="#mechanics-of-build-caching">
        
      </a>
    </div>
    <p>At its core, build caching is a mechanism that stores artifacts of a build, allowing subsequent builds to reuse these artifacts rather than recomputing them from scratch. By leveraging the cached results, build times can be significantly reduced, leading to a more efficient build process.</p><p>Previously, when you initiated a build, the Pages CI system would run every step of the build process from scratch, even if most parts of the codebase remained unchanged between builds. This is the equivalent of changing out every single part of the car during a pit stop, regardless of whether anything needs replacing.</p><p>Build caching refines this process. Now, the Pages build system will detect if cached artifacts can be leveraged, restore the artifacts, then focus on only computing the modified sections of the code. In essence, build caching acts like an experienced pit crew, smartly skipping unnecessary steps and focusing only on what's essential to get you back in the race faster.</p>
    <div>
      <h3>What are we caching?</h3>
      <a href="#what-are-we-caching">
        
      </a>
    </div>
    <p>It boils down to two components: dependencies and build output.</p><p>The Pages build system supports dependency caching for select package managers and build output caching for select frameworks. Check out our <a href="https://developers.cloudflare.com/pages/platform/build-caching">documentation</a> for more information on what’s currently supported and what’s coming up.</p><p>Let’s take a closer look at what exactly we are caching.</p><p><b>Dependencies:</b> upon initiating a build, the Pages CI system checks for cached artifacts from previous builds. If it identifies a cache hit for dependencies, it restores from cache to speed up dependency installation.</p><p><b>Build output:</b> if a cache hit for build output is identified, Pages will only build the changed assets. This approach enables the long awaited <i>incremental builds</i> for supported JavaScript frameworks.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4kqmUJuLrUGc7vtXbDc4X6/3f1440dbf1ad3acef20a2b99c18d6d28/image2-26.png" />
            
            </figure>
    <div>
      <h3>Ready, set … go!</h3>
      <a href="#ready-set-go">
        
      </a>
    </div>
    <p>Build caching is now in beta, and ready for you to test drive!</p><p>In this release, the feature will support the node-based package managers <a href="https://www.npmjs.com/">npm</a>, <a href="https://yarnpkg.com/">yarn</a>, <a href="https://pnpm.io/">pnpm</a>, as well as <a href="https://bun.sh/">Bun</a>. We’ve also ensured compatibility with the most popular frameworks that provide native incremental building support: <a href="https://www.gatsbyjs.com/">Gatsby.js</a>, <a href="https://nextjs.org/">Next.js</a> and <a href="https://astro.build/">Astro</a> – and more to come!</p><p>For you as a Pages user, interacting with build caching will be seamless. If you are working with an existing project, simply navigate to your project’s settings to toggle on Build Cache.</p><p>When you push a code change and initiate a build using Pages CI, build caching will kick-start and do its magic in the background.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4hWz7Sh9wtk64c01cSnNjG/6967b9783a75f3fdfaaa10bd26884e0b/image4-17.png" />
            
            </figure>
    <div>
      <h3>“Cache” us on Discord</h3>
      <a href="#cache-us-on-discord">
        
      </a>
    </div>
    <p>Have questions? Join us on our <a href="https://discord.com/invite/cloudflaredev?event=1152163002502615050">Discord Server</a>. We will be hosting an “Ask Us Anything” <a href="https://discord.com/invite/cloudflaredev?event=1152163002502615050">session</a> on October 2nd where you can chat live with members of our team! Your feedback on this beta is invaluable to us, so after testing out build caching, don't hesitate to share your experiences! Happy building!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6lavvh2PfpjlEbNV0YEuGB/8104fcccf6bf1243dfa113e940317f82/image3-32.png" />
            
            </figure><p></p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Beta]]></category>
            <category><![CDATA[Speed]]></category>
            <guid isPermaLink="false">5NhsEJJxtKlKawPWJmHWJm</guid>
            <dc:creator>Anni Wang</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
            <dc:creator>John Fawcett</dc:creator>
        </item>
        <item>
            <title><![CDATA[A new era for Cloudflare Pages builds]]></title>
            <link>https://blog.cloudflare.com/cloudflare-pages-build-improvements/</link>
            <pubDate>Tue, 10 May 2022 13:01:10 GMT</pubDate>
            <description><![CDATA[ Announcing several build experience improvements for Cloudflare Pages including build times, logging and configuration ]]></description>
            <content:encoded><![CDATA[ <p></p><p>Music is flowing through your headphones. Your hands are flying across the keyboard. You’re stringing together a masterpiece of code. The momentum is building up as you put on the finishing touches of your project. And at last, it’s ready for the world to see. Heart pounding with excitement and the feeling of victory, you push changes to the main branch… only to end up waiting for the build to execute each step and spit out the build logs.</p>
    <div>
      <h2>Starting afresh</h2>
      <a href="#starting-afresh">
        
      </a>
    </div>
    <p>Since the launch of Cloudflare Pages, there is no doubt that the build experience has been its biggest source of criticism. From the amount of waiting to the inflexibility of the CI workflow, Pages had a lot of opportunity for growth and improvement. With Pages, our North Star has always been designing a developer platform that fits right into your workflow and oozes simplicity. User pain points have been and always will be our priority, which is why today we are thrilled to share a list of exciting updates to our build times, logs and settings!</p><p>Over the last three quarters, we implemented a new build infrastructure that speeds up Pages builds, so you can iterate quickly and efficiently. In February, we soft-released the Pages Fast Builds Beta, allowing you to opt in to this new infrastructure on a per-project basis. This not only allowed us to test our implementation, but also gave our community the opportunity to try it out and give us direct feedback in <a href="https://discord.gg/cloudflaredev">Discord</a>. Today we are excited to announce the new build infrastructure is now generally available and automatically enabled for all existing and new projects!</p>
    <div>
      <h2>Faster build times</h2>
      <a href="#faster-build-times">
        
      </a>
    </div>
    <p>As a developer, your time is extremely valuable, and we realized that Pages builds were slow. It was obvious that creating an infrastructure that built projects faster and smarter was one of our top requirements.</p><p>Looking at a Pages build, there are four main steps: (1) initializing the build environment, (2) cloning your git repository, (3) building the application, and (4) deploying to Cloudflare’s global network. Each of these steps is a crucial part of the build process, and upon investigating areas suitable for optimization, we directed our efforts to cutting down on build initialization time.</p><p>In our old infrastructure, every time a build job was submitted, we created a new virtual machine to run that build, costing our users precious dev time. In our new infrastructure, we start jobs on machines that are ready and waiting to be used, taking a major chunk of time away from the build initialization step. This step previously ran for 2+ minutes, but with our new infrastructure update, projects are expected to see a build initialization time cut down to <b>2-3 SECONDS</b>.</p><p>This means less time waiting and more time iterating on your code.</p>
    <div>
      <h3>Fast and secure</h3>
      <a href="#fast-and-secure">
        
      </a>
    </div>
    <p>In our old build infrastructure, because we spun up a new virtual machine (VM) for every build, it would take several minutes to boot up and initialize with the Pages build image needed to execute the build. Alternatively, we could have reused a pool of containers, assigning each new build to the next available one. But containers share a kernel with the host operating system, making them far less isolated than VMs and posing a serious security risk: a malicious actor could attempt a "container escape" to break out of their sandbox. We wanted the best of both worlds: the speed of a container with the isolation of a virtual machine.</p><p>Enter <a href="https://gvisor.dev/">gVisor</a>, a container sandboxing technology that drastically limits the attack surface of a host. In the new infrastructure, each container running with gVisor is given its own independent application "kernel," instead of directly sharing the kernel with its host. Then, to address the speed, we keep a cluster of virtual machines warm and ready to execute builds, so that when a new Pages deployment is triggered, it takes just a few seconds for a new gVisor container to start up and begin executing meaningful work in a secure sandbox with near-native performance.</p>
    <div>
      <h2>Stream your build logs</h2>
      <a href="#stream-your-build-logs">
        
      </a>
    </div>
    <p>After we solidified a fast and secure build, we wanted to enhance the user-facing build experience. Because a build may not be successful every time, giving you the tools to debug and to access that information as fast as possible is crucial. While we have a long list of future improvements for a better logging experience, today we are starting by enabling you to stream your build logs.</p><p>Prior to today, you had to wait for all of the aforementioned build steps to complete before you could view the resulting build logs. Easily addressable issues, like mistyping the build command or misspecifying an environment variable, still meant waiting for the entire build to finish before you could understand the problem.</p><p>Today, we’re giving you the power to understand your build issues as soon as they happen. Spend less time waiting for your logs and start debugging the events of your builds within a second or less after they happen!</p>
    <div>
      <h2>Control Branch Builds</h2>
      <a href="#control-branch-builds">
        
      </a>
    </div>
    <p>Finally, the build experience includes not just the events during execution but everything leading up to the trigger of a build. For our final trick, we’re enabling our users to have full control over the precise branches they’d like to include and exclude for automatic deployments.</p><p>Before today, Pages submitted builds for every commit in both production and preview environments, which led to queued builds, and even more waiting if you exceeded your concurrent build limit. We wanted to provide even more flexibility to control your CI workflow. Now you can configure your build settings to specify which branches to build, as well as skip individual commits.</p>
    <div>
      <h3>Specify branches to build</h3>
      <a href="#specify-branches-to-build">
        
      </a>
    </div>
    <p>While “unlimited staging” is one of Pages’ greatest advantages, depending on your setup, sometimes automatic deployments to the preview environment can cause extra noise.</p><p>In the Pages build configuration settings, you can turn automatic deployments off for the production environment, the preview environment, or specific preview branches. In a more extreme case, you can even pause all deployments so that any commit sent to your git source will not trigger a new Pages build.</p><p>Additionally, in your project’s settings, you can now configure the specific preview branches you would like to include and exclude for automatic deployments. To make this configuration an even more powerful tool, you can use wildcard syntax to set rules for existing branches as well as any newly created preview branches.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7fw9Ltufs0OqoXuByAcKdD/8e871205590c8270f36eb33627a13ae9/image1-15.png" />
            
            </figure><p><a href="https://developers.cloudflare.com/pages/platform/branch-build-controls/">Read more in our Pages docs</a> on how to get started with configuring automatic deployments with wildcard syntax.</p>
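<p>To make the idea concrete, here is a minimal sketch of how wildcard rules like <code>release/*</code> can be matched against branch names. This is our own illustration of the concept, not Pages’ actual implementation, and the exclude-wins precedence is an assumption:</p>

```javascript
// Convert a branch rule with "*" wildcards (e.g. "release/*") into a
// regular expression and test a branch name against it.
function matchesRule(rule, branch) {
  const escaped = rule.replace(/[.+?^${}()|[\]\\]/g, "\\$&"); // escape regex chars
  const pattern = "^" + escaped.replace(/\*/g, ".*") + "$";   // "*" matches anything
  return new RegExp(pattern).test(branch);
}

// Decide whether a branch should build, given include and exclude rule
// lists; here we assume exclude rules win over include rules.
function shouldBuild(branch, include, exclude) {
  if (exclude.some((rule) => matchesRule(rule, branch))) return false;
  return include.some((rule) => matchesRule(rule, branch));
}
```

<p>With rules like these, <code>shouldBuild("fix/wip-123", ["*"], ["fix/*"])</code> is skipped while <code>main</code> still builds.</p>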
    <div>
      <h3>Using CI Skip</h3>
      <a href="#using-ci-skip">
        
      </a>
    </div>
    <p>Sometimes commits need to be skipped on an ad hoc basis. A small update to copy or a set of changes within a small timespan don’t always require an entire site rebuild. That’s why we also implemented a CI Skip command for your commit message, signaling to Pages that the update should be skipped by our builder.</p><p>With both CI Skip and configured build rules, you can keep track of your site changes in Pages’ deployment history.</p>
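<p>Assuming the skip directive is a marker like <code>[CI Skip]</code> in the commit message (check the Pages docs for the exact supported spellings), the builder-side check might look something like this sketch:</p>

```javascript
// Hypothetical check for a skip directive in a commit message. The
// exact markers Pages accepts are documented; these follow common CI
// conventions and are matched case-insensitively.
const SKIP_MARKERS = ["[ci skip]", "[ci-skip]", "[skip ci]", "[skip-ci]"];

function shouldSkipBuild(commitMessage) {
  const message = commitMessage.toLowerCase();
  return SKIP_MARKERS.some((marker) => message.includes(marker));
}
```

<p>So a commit like <code>git commit -m "Fix typo in copy [CI Skip]"</code> lands in your repo without triggering a rebuild.</p>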
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/Nm9Zxx2FDkDz1OAvBq3OE/d349e0628a51eed8036b81db1cadf21d/image2-10.png" />
            
            </figure>
    <div>
      <h2>Where we’re going</h2>
      <a href="#where-were-going">
        
      </a>
    </div>
    <p>We’re extremely excited to bring these updates to you today, but of course, this is only the beginning of improving our build experience. Over the next few quarters, we will be bringing more to the build experience to create a seamless developer journey from site inception to launch.</p>
    <div>
      <h3>Incremental builds and caching</h3>
      <a href="#incremental-builds-and-caching">
        
      </a>
    </div>
    <p>From beta testing, we noticed that our new infrastructure can be less impactful on larger projects that use heavier frameworks such as Gatsby. We believe that every user on our developer platform, regardless of their use case, has the right to fast builds. Up next, we will be implementing incremental builds to help Pages identify only the deltas between commits and rebuild only files that were directly updated. We will also be implementing other caching strategies such as caching external dependencies to save time on subsequent builds.</p>
    <div>
      <h3>Build image updates</h3>
      <a href="#build-image-updates">
        
      </a>
    </div>
    <p>Because we’ve been using the same build image we launched Pages with back in 2021, we are going to make some major updates. Languages release new versions all the time, and we want to make sure we update and maintain the latest versions. An updated build image will mean faster builds, better security, and of course support for all the latest versions of the languages and tools we provide. As new build image versions are released, users will be able to opt in to them, so that all existing projects maintain compatibility by default.</p>
    <div>
      <h3>Productive error messaging</h3>
      <a href="#productive-error-messaging">
        
      </a>
    </div>
    <p>Lastly, while streaming build logs helps you identify those easily addressable issues, the infamous “Internal error occurred” is sometimes a little more cryptic to decipher depending on the failure. While we recently published a “<a href="https://developers.cloudflare.com/pages/platform/debugging-pages/">Debugging Cloudflare Pages</a>” guide, in the future we’d like to surface error feedback in a more productive manner, so you can easily identify the issue.</p>
    <div>
      <h2>Have feedback?</h2>
      <a href="#have-feedback">
        
      </a>
    </div>
    <p>As always, your feedback defines our roadmap. With all the updates we’ve made to our build experience, it’s important we hear from you! You can get in touch with our team directly through <a href="https://discord.gg/cloudflaredev">Discord</a>. Navigate to our Pages specific section and check out our various channels specific to different parts of the product!</p>
    <div>
      <h3>Join us at Cloudflare Connect!</h3>
      <a href="#join-us-at-cloudflare-connect">
        
      </a>
    </div>
    <p>Interested in learning more about building with Cloudflare Pages? If you’re based in the New York City area, join us on Thursday, May 12th for a series of workshops on how to build a full stack application on Pages! Follow along with a fully hands-on lab, featuring Pages in conjunction with other products like Workers, Images and Cloudflare Gateway, and hear directly from our product managers. <a href="https://events.www.cloudflare.com/flow/cloudflare/connect2022nyc/landing/page/page">Register now</a>!</p> ]]></content:encoded>
            <category><![CDATA[Platform Week]]></category>
            <category><![CDATA[Cloudflare Pages]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Security]]></category>
            <category><![CDATA[Speed]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">2oh3ffXctAGXfS6sDKaqSB</guid>
            <dc:creator>Nevi Shah</dc:creator>
            <dc:creator>Josh Wheeler</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
        </item>
        <item>
            <title><![CDATA[Developer Spotlight: Automating Workflows with Airtable and Cloudflare Workers]]></title>
            <link>https://blog.cloudflare.com/developer-spotlight-jacob-hands-tritails/</link>
            <pubDate>Thu, 18 Nov 2021 14:00:05 GMT</pubDate>
            <description><![CDATA[ Jacob operates TriTails Premium Beef, an online store for meat, a very perishable good. So he has a unique set of challenges with shipping. As a developer, he turned to Airtable and Cloudflare Workers to automate large parts of the process to be able to deal with their rapid growth. ]]></description>
            <content:encoded><![CDATA[ <p>Next up on the <a href="/tag/developer-spotlight/">Developer Spotlight</a> is another favourite of mine. Today’s post is by Jacob Hands. Jacob operates <a href="http://tritailsbeef.com">TriTails Premium Beef</a>, an online store for meat, a very perishable good. So he has a unique set of challenges when it comes to shipping. To deal with their growth, Jacob, a developer by trade, turned to Airtable and Cloudflare Workers to automate a lot of their workflow.</p><p>One of Jacob’s quotes is one of my favourites:</p><blockquote><p>“Sure, Cloudflare Workers allows you to scale to billions of requests per day, but it is also awesome for a few hundred requests a day.”</p></blockquote><p>Here is Jacob talking about how it only took him a few days to put together a fully customised workflow tool by integrating Airtable and Workers, and how it saves them multiple hours every single day.</p>
    <div>
      <h2>Shipping Requirements</h2>
      <a href="#shipping-requirements">
        
      </a>
    </div>
    <p>Working at a new e-commerce business shipping perishable goods has several challenges as operations scale up. One of our biggest challenges is that daily shipping throughput is limited, partly because a small workspace limits how many employees can pack orders simultaneously, and partly because, despite having a requested pickup time with UPS, they often show up hours early, requiring packers to stop and scramble to meet them before they leave. Packing is also time-consuming because it’s a game of Tetris getting all products to fit with enough dry ice to keep everything frozen.</p><p>This is what a regular box looks like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2N61FLQM9pesehhiwnTGRx/0502858a965985ef3de5a170dad4a2be/image2-3.jpg" />
            
            </figure><p>Ensuring time-in-transit stays as low as possible is critical for making sure that products are still frozen when they arrive at the customer’s doorstep. Because of this requirement, avoiding packages sitting in transit over the weekend is a must. We learned that the hard way after a package got delayed by a day, which wouldn’t have been too bad on its own, but it meant the package stayed in a sorting centre over the weekend, which wasn’t as pleasant.</p><p>Luckily, we caught it in time and were able to send a replacement set of steaks overnight and save a dinner party. But after that, we started triaging our orders to make sure that the correct packages were shipped at the right time.</p>
    <div>
      <h2>Order Triage, The Hard Way</h2>
      <a href="#order-triage-the-hard-way">
        
      </a>
    </div>
    <p>In the early days, we could pack orders after lunch and be done in an hour, but as we grew we needed to be careful about what, when, and how we shipped. First, all open orders were copied to a Google Sheet. Next, the time-in-transit was manually checked for each order and added to the sheet. The sheet was then sorted by transit time (with paid priority air at the top), and each set of orders was separated into groups. Finally, the Google Sheet was printed for the packing team to work through.</p><p>Transit times are so crucial to the shipment process that they need to be on each packing slip, so that the packing team knows how much dry ice and packaging each order needs. So the transit times were typed into each packing slip in Adobe Acrobat before printing. While this is a very tedious process, it is vital to ensuring that each package is packed according to what it needs to arrive in good condition.</p><p>Once the packing team finished packing orders, the box weights and sizes were added to the Google Sheet based on the worksheet filled out by the packers. Next, each order label was created, individually copying weights and sizes from the Google Sheet to ShipStation, the application we use to manage logistics with our providers. Finally, the packages would be picked up and start their journey to the customer’s doorstep.</p><p>This process worked fine for ten orders, but as operations scaled up, triaging and organizing the orders became a full-time job: checking and double-checking that everything was entered correctly and that no human mistakes occurred (spoiler: they still happened!).</p>
    <div>
      <h2>Automation</h2>
      <a href="#automation">
        
      </a>
    </div>
    <p>At first, I just wanted to automate the most tedious step: calculating transit times. This process took so long that it hindered how early the packing team could start packing orders, further limiting our throughput. Cloudflare Workers are so easy to use and get running quickly, so they seemed like a great place to start. The plan was to use =IMPORTDATA(order) in Google Sheets and eliminate that step in the process.</p><p>Automating just one thing is powerful, and it opened a flood of ideas about how our workflow could further be improved. With the first 30 minutes of daily work automated, what else could be done? That’s when I set out to automate as much of the workflow as possible, excited about the possibilities.</p>
    <div>
      <h3>Triaging the Triaging</h3>
      <a href="#triaging-the-triaging">
        
      </a>
    </div>
    <p>Problem-solving is often about figuring out what to prioritize, and automating this workflow is no different. Our order triaging process has many steps, and setting out to automate the entire thing at once wasn’t possible because of the limited blocks of time to work on it. Instead, I decided to only solve the highest priority problems, one step at a time. Triaging the triaging process helped me build everything needed to automate an entire workflow without it ever feeling overwhelming, and gaining efficiency each step along the way.</p><p>With the time-in-transit calculation API working, the next part I automated was getting the orders that need shipping from Shopify via the API instead of copy-pasting every time. This is where the limits of Google Sheets started to become apparent. While automation can be done in Sheets, it can quickly become a black box full of hacks. So it was time to move to a better platform, but which one?</p><p>While I had often heard of Airtable and played with it a few times since it launched in 2012, the pricing and limitations never seemed to fit any of my use cases. But with the little amount of data we needed to store at any one time, it seemed worth trying since it has an easy-to-use API and supports strict cell formats, which is much harder to do in Sheets. Airtable has an intuitive UI, and it is easy to create custom fields for each type of data needed.</p><p>Once I found out Airtable had a built-in <a href="https://airtable.com/marketplace/blkQyAKhJoGKqnR0T/scripting">Scripting app</a>, it was obvious this was the right tool for the job.</p>
    <div>
      <h2>Building Airtable Scripting Apps</h2>
      <a href="#building-airtable-scripting-apps">
        
      </a>
    </div>
    <p>Airtable Scripting is a powerful tool for building functionality directly within Airtable using JavaScript. Unfortunately, there are some limitations. For example, it isn’t possible to share code between different instances of the Scripting App without copying and pasting. There’s also no source control, so reverting changes relies on the Undo button.</p><p>Cloudflare Workers, on the other hand, is a full developer platform. You can easily use source control, and it has a great developer experience with Wrangler and Miniflare, so testing and deploying is fast and seamless.</p><p>Airtable Scripting and Cloudflare Workers work together beautifully. Building APIs on Workers allows more complex tasks to run on the Cloudflare network. These APIs are then fetched by Airtable scripts, solving the code-sharing issue and speeding up development.</p>
    <div>
      <h3>Shopify Order Importing</h3>
      <a href="#shopify-order-importing">
        
      </a>
    </div>
    <p>First, we needed to import orders from Shopify into Airtable. The API endpoint I created in Workers goes through all open orders and figures out which ones should be shipped this week. The orders are then cached in the Workers Cache API, so we can request this endpoint as much as needed without hitting Shopify API’s limits.</p><p>From there, the Airtable Scripting app checks the transit time for each order using our Workers API that makes calls to Shippo (a multi-carrier shipping API) to get time-in-transit estimates for the carrier. Finally, each row in Airtable is updated with the respective transit times, automatically sorted with priority paid air at the top, followed by the longest to the shortest transit times.</p><p>Going from an entirely manual process of getting a list of triaged orders in 45 minutes to clicking a button and having Airtable and Workers do it all for me in seconds was one of the most significant “lightbulb” moments I’ve ever had programming.</p>
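<p>The sort described above (paid priority air first, then everything else from the longest to the shortest transit time) is easy to express in a script. A minimal sketch with hypothetical field names:</p>

```javascript
// Sort orders the way the packing team works through them: paid
// priority air shipments at the top, then the remaining orders from
// the longest to the shortest time-in-transit. The "priorityAir" and
// "transitDays" field names are hypothetical.
function triageOrders(orders) {
  return [...orders].sort((a, b) => {
    if (a.priorityAir !== b.priorityAir) {
      return a.priorityAir ? -1 : 1; // priority air floats to the top
    }
    return b.transitDays - a.transitDays; // longest transit first
  });
}
```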
    <div>
      <h3>Printing Packing Slips in Order</h3>
      <a href="#printing-packing-slips-in-order">
        
      </a>
    </div>
    <p>The next big thing to tackle was printing packing slips. They need to be printed in the triaged order rather than chronologically, which used to mean manually searching for each order. Now a button in Airtable generates links to the Shopify order search with each batch of orders prefilled.</p>
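<p>Generating a prefilled search link is a one-liner once the batch of order names is in hand. A sketch of the idea; the admin URL shape and the <code>OR</code> query format are assumptions to adapt to your store:</p>

```javascript
// Build a link to the Shopify admin order search with a whole batch of
// order names prefilled, so one click pulls up the batch. The URL
// shape is an assumption; adjust it to your store's admin URL.
function batchSearchLink(store, orderNames) {
  const query = encodeURIComponent(orderNames.join(" OR "));
  return `https://${store}.myshopify.com/admin/orders?query=${query}`;
}
```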
    <div>
      <h3>Printing the Order Worksheet</h3>
      <a href="#printing-the-order-worksheet">
        
      </a>
    </div>
    <p>Of course, we just couldn’t stop there.</p><p>To keep track of orders as they are packed, we use a printed worksheet with all orders listed and columns for each order’s box size and weight. Unfortunately, Airtable does not have a good way to customize the printout of a table.</p><p>Ironically, this brought us back to Google Sheets! Since Sheets is the easiest way to format a table, it seemed like the best choice. But copying data from Airtable to Sheets is tedious. Instead, I created an API endpoint in Workers to get the data from Airtable and format it as a CSV the way we need it to look when printing. From Sheets, the IMPORTDATA function imports the day’s orders automatically when opened, ready for printing.</p>
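<p>The CSV endpoint boils down to escaping fields and joining rows. A minimal sketch of the formatting logic (the column names are hypothetical):</p>

```javascript
// Format rows of order data as CSV for Google Sheets' IMPORTDATA.
// Fields containing commas, quotes, or newlines are quoted in the
// usual RFC 4180 style.
function toCsv(headers, rows) {
  const escape = (value) => {
    const s = String(value ?? "");
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  };
  const lines = [headers, ...rows].map((row) => row.map(escape).join(","));
  return lines.join("\n");
}
```

<p>A Worker serving this output lets a cell like <code>=IMPORTDATA("https://your-worker.example/orders.csv")</code> (a hypothetical URL) pull in the day’s orders every time the sheet is opened.</p>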
    <div>
      <h3>Sending Package Details to ShipStation</h3>
      <a href="#sending-package-details-to-shipstation">
        
      </a>
    </div>
    <p>Once the packing team has finished packing and filling out the shipment worksheet, box sizes and weights are entered into Airtable for each order. Rather than also typing these details into ShipStation, I built an endpoint in our Workers API to set the weight and size using the ShipStation API. ShipStation order updates are done based on the ID of the order. The script first lists all open orders and then writes the order name to order ID mapping to Workers KV, so that future requests to this API can avoid the ShipStation list API, which is slow and has strict limits.</p><p>Next, I built another Airtable script to send the details of each box to this API. In addition to setting the weight and size, the order is also tagged with today’s date, making it easy to identify which orders in ShipStation are ready to be labeled. Finally, the labels are created and printed in ShipStation in bulk and applied to their respective packages.</p>
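<p>The mapping step itself is a pure function; in the Worker, its result would then be written to KV so later requests skip the slow list API. A sketch, using the order number and order ID fields from ShipStation's list-orders response:</p>

```javascript
// Build a lookup from order number to ShipStation order ID in one pass
// over the (slow, rate-limited) list-orders response, so the mapping
// can be cached (e.g. in Workers KV) and reused on later requests.
function buildOrderIdMap(openOrders) {
  const map = {};
  for (const order of openOrders) {
    map[order.orderNumber] = order.orderId;
  }
  return map;
}
```

<p>In the Worker, the cached map turns "update order #1002" into a single keyed lookup instead of another paginated list call.</p>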
    <div>
      <h2>Putting it all together</h2>
      <a href="#putting-it-all-together">
        
      </a>
    </div>
    <p>So an overview of the entire system looks like this: all clients connect to Airtable, and Airtable makes calls out to the Worker APIs, which connect to and coordinate between all the third-party APIs.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4n2dGbNG8XVk4hdJNsVu4i/a68b247bf4d441a406488fa57591510d/image3-27.png" />
            
            </figure>
    <div>
      <h2>Why Workers and Airtable Work Well Together</h2>
      <a href="#why-workers-and-airtable-work-well-together">
        
      </a>
    </div>
    <p>While it might have been possible to build this entire workflow in Airtable, integrating Workers has made the process much easier to build, test, and reuse code both between Airtable scripts and other platforms.</p>
    <div>
      <h3>Development Experience</h3>
      <a href="#development-experience">
        
      </a>
    </div>
    <p>The Airtable Scripting app makes it quick and easy to build scripts that work with the data stored in Airtable, with a decent editor and autocomplete, but it is hard to build more complex scripts in it.</p><p>Funnily enough, latency and scaling weren’t all that important for this project. But Cloudflare Workers makes development and testing incredibly easy: no excessive configuration or deployment pipelines.</p>
    <div>
      <h3>Reliability and Security</h3>
      <a href="#reliability-and-security">
        
      </a>
    </div>
    <p>We are running a business, and having to babysit servers is a massive distraction that we certainly don’t need. With Workers being fully serverless, I never have to worry about anything breaking because a server is down.</p><p>And we can safely store all the secrets needed to access third-party systems with Cloudflare, using secret environment variables, making sure those tokens and keys are fully encrypted and secure.</p>
    <div>
      <h3>Airtable is a great database and UI in one</h3>
      <a href="#airtable-is-a-great-database-and-ui-in-one">
        
      </a>
    </div>
    <p>Building UIs around data entry and visualisation takes a lot of time and resources. By utilizing Airtable, I built out an entire workflow without ever touching HTML, let alone front-end frameworks. Instead, I could focus solely on core business logic. Airtable's dashboard feature also allows building reports with high-level overviews of the types of packages being sent, helping us forecast future packing supplies needed.</p><p>While building workflows in spreadsheets can feel like a hack when custom scripting gets involved, Airtable is the opposite. The extensibility and good UX have made Airtable a great tool to use going forward.</p>
    <div>
      <h2>Improvements Going Forward</h2>
      <a href="#improvements-going-forward">
        
      </a>
    </div>
    <p>Now that we had the basics covered, I noticed one of the most powerful things about this setup: how easy it was to add features. I started noticing minor issues with the workflow that could be improved. For example, when an order has to be split into multiple packages, the row in Airtable has to be duplicated and have a suffix added to the order number for each order. Automating order splitting was not a priority previously, but it quickly became one of the most time-consuming parts of the process. Thirty minutes later, every row had a “Split order” button, built with another Airtable script.</p><p>Another issue was when a customer was not going to be home on a Wednesday, which meant that if the order got shipped on Monday, it would go bad sitting on their doorstep. Thankfully, adding an optional minimum ship date tag to the Workers API that gets shippable orders was quick and easy. Now, our sales team can add tags for minimum ship dates when customers are not home, and the rest of the workflow will automatically take it into account when deciding what to ship.</p>
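<p>That "Split order" script boils down to duplicating a record with suffixed order numbers; the real version would write the rows back with Airtable's scripting API. A sketch with hypothetical field names:</p>

```javascript
// Split one order row into N package rows, suffixing the order number
// (#1001 -> #1001-1, #1001-2, ...) the way the manual duplication did.
// All other fields are copied unchanged.
function splitOrder(order, packageCount) {
  return Array.from({ length: packageCount }, (_, i) => ({
    ...order,
    orderNumber: `${order.orderNumber}-${i + 1}`,
  }));
}
```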
    <div>
      <h2>Conclusion</h2>
      <a href="#conclusion">
        
      </a>
    </div>
    <p>Many businesses are turning to Workers for their incredible performance and scaling to millions or billions of requests, but we couldn’t be happier with how much value we get with the few hundred Workers requests we do every day.</p><p>Cloudflare Workers, especially in combination with tools like Airtable, make it really easy to create your own internal tool, built to your exact specifications, which will bring this capability to so many more businesses.</p><p><i>Cloudflare is not affiliated with Formagrid, Inc., dba Airtable. The views and opinions expressed in this blog post are solely those of the guest author and do not necessarily represent those of Cloudflare, Inc.</i></p> ]]></content:encoded>
            <category><![CDATA[Full Stack Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Developer Spotlight]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">NR3oZws9aFPGCviEgIlJk</guid>
            <dc:creator>Erwin van der Koogh</dc:creator>
            <dc:creator>Jacob Hands</dc:creator>
        </item>
    </channel>
</rss>