
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built, technologies used, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sat, 04 Apr 2026 06:46:44 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Migrating to React land: Gatsby]]></title>
            <link>https://blog.cloudflare.com/migrating-to-react-land-gatsby/</link>
            <pubDate>Thu, 26 Mar 2020 12:00:00 GMT</pubDate>
            <description><![CDATA[ As our developer documentation grows so does the need for tooling. Let’s walk through how we migrated our documentation site to Gatsby to take full advantage of static generation and React.  ]]></description>
            <content:encoded><![CDATA[ <p>I am an engineer that loves docs. Well, OK, I don’t love all docs, but I believe docs are a crucial, yet often neglected, element of a great developer experience. I work on the developer experience team for Cloudflare Workers, focusing on several components of Workers, particularly on the docs that we recently migrated to Gatsby.</p><blockquote><p>We’ve moved the Cloudflare Workers docs to <a href="https://twitter.com/gatsbyjs?ref_src=twsrc%5Etfw">@gatsbyjs</a></p><p>The new documentation is...</p><p>faster, more accessible, a perfect foundation for the redesign later this year, and open-source</p><p>shout out to <a href="https://twitter.com/exvuma?ref_src=twsrc%5Etfw">@exvuma</a> for this incredible work <a href="https://t.co/k3huvCvash">https://t.co/k3huvCvash</a> <a href="https://t.co/MBWxVtlrin">pic.twitter.com/MBWxVtlrin</a></p><p>— Cloudflare Developers (@CloudflareDev) <a href="https://twitter.com/CloudflareDev/status/1235265069283504128?ref_src=twsrc%5Etfw">March 4, 2020</a></p></blockquote><p>Through porting our documentation site to Gatsby I learned a lot. In this post, I share some of the learnings that could’ve saved my former self several headaches. This will hopefully help others considering a move to Gatsby or another static site generator.</p>
    <div>
      <h2>Why Gatsby?</h2>
      <a href="#why-gatsby">
        
      </a>
    </div>
    <p>Prior to our migration to Gatsby, we used Hugo for our <a href="https://developers.cloudflare.com/workers">developer documentation</a>. There are a lot of positives about working with Hugo - fast build times, fast load times - that made a simple static site a great use case for Hugo. Things started to turn sour when we began making our docs more interactive and expanding the content being generated.</p><p>Going from writing JSX with TypeScript back to string-based templating languages is difficult. Trying to perform complicated tasks, like generating a sidebar, cost me - a developer who knows nothing about Liquid or Go templating (though I have Golang experience) - several tears, not even to implement but just to understand what was happening.</p><p>Here is the code to template an item in the sidebar in Hugo:</p>
            <pre><code>&lt;!-- templates --&gt;
{{ define "section-tree-nav" }}
{{ $currentNode := .currentnode }}
{{ with .sect }}
 {{ if not .Params.Hidden }}
  {{ if .IsSection }}
    {{safeHTML .Params.head}}
    &lt;li data-nav-id="{{.URL}}" class="dd-item
        {{ if .IsAncestor $currentNode }}parent{{ end }}
        {{ if eq .UniqueID $currentNode.UniqueID}}active{{ end }}
        {{ if .Params.alwaysopen}}parent{{ end }}
        {{ if .Params.alwaysopen}}always-open{{ end }}
        "&gt;
      &lt;a href="{{ .RelPermalink}}"&gt;
        &lt;span&gt;{{safeHTML .Params.Pre}}{{.Title}}{{safeHTML .Params.Post}}&lt;/span&gt;
 
        {{ if .Params.new }}
          &lt;span class="new-badge"&gt;NEW&lt;/span&gt;
        {{ end }}
 
        {{ $numberOfPages := (add (len .Pages) (len .Sections)) }}
        {{ if ne $numberOfPages 0 }}
 
          {{ if or (.IsAncestor $currentNode) (.Params.alwaysopen)  }}
            &lt;i class="triangle-up"&gt;&lt;/i&gt;
          {{ else }}
            &lt;i class="triangle-down"&gt;&lt;/i&gt;
          {{ end }}
 
        {{ end }}
      &lt;/a&gt;
      {{ if ne $numberOfPages 0 }}
        &lt;ul&gt;
          {{ .Scratch.Set "pages" .Pages }}
          {{ if .Sections}}
          {{ .Scratch.Set "pages" (.Pages | union .Sections) }}
          {{ end }}
          {{ $pages := (.Scratch.Get "pages") }}
 
        {{ if eq .Site.Params.ordersectionsby "title" }}
          {{ range $pages.ByTitle }}
            {{ if and .Params.hidden (not $.showhidden) }}
            {{ else }}
            {{ template "section-tree-nav" dict "sect" . "currentnode" $currentNode }}
            {{ end }}
          {{ end }}
        {{ else }}
          {{ range $pages.ByWeight }}
            {{ if and .Params.hidden (not $.showhidden) }}
            {{ else }}
            {{ template "section-tree-nav" dict "sect" . "currentnode" $currentNode }}
            {{ end }}
          {{ end }}
        {{ end }}
        &lt;/ul&gt;
      {{ end }}
    &lt;/li&gt;
  {{ else }}
    {{ if not .Params.Hidden }}
      &lt;li data-nav-id="{{.URL}}" class="dd-item
     {{ if eq .UniqueID $currentNode.UniqueID}}active{{ end }}
      "&gt;
        &lt;a href="{{.RelPermalink}}"&gt;
        &lt;span&gt;{{safeHTML .Params.Pre}}{{.Title}}{{safeHTML .Params.Post}}&lt;/span&gt;
        {{ if .Params.new }}
          &lt;span class="new-badge"&gt;NEW&lt;/span&gt;
        {{ end }}
 
        &lt;/a&gt;&lt;/li&gt;
     {{ end }}
  {{ end }}
 {{ end }}
{{ end }}
{{ end }}</code></pre>
            <p>Whoa. I may be exceptionally oblivious, but I had to squint at the snippet above for an hour before I realized this was the code for a sidebar item (the <code>li</code> element was the eventual giveaway, but it took some parsing to discover where the logic actually started).</p><p>(Disclaimer: I am in no way a pro at Hugo, and in any situation there are always several ways to code a solution; thus I am not claiming this was the only way to write the template, nor am I chastising the author of the code. I am simply showing the differences between pieces of code I came across.)</p><p>Now, here is what the TSX for the Gatsby project (I will get into the JS later in the article), using the exact same styling, would look like:</p>
            <pre><code> &lt;li data-nav-id={pathToServe} className={'dd-item ' + ddClass}&gt;
   &lt;Link className="" to={pathToServe} title="Docs Home" activeClassName="active"&gt;
     {title || 'No title'}
     {numberOfPages ? &lt;Triangle isAncestor={isAncestor} alwaysopen={showChildren} /&gt; : ''}
     {showNew ? &lt;span className="new-badge"&gt;NEW&lt;/span&gt; : ''}
   &lt;/Link&gt;
   {showChildren ? (
     &lt;ul&gt;
       {' '}
       {myChildren.map((child: mdx) =&gt; {
         return (
           &lt;SidebarLi
             frontmatter={child.frontmatter}
             fields={child.fields}
             depth={++depth}
             key={child.frontmatter.title}
           /&gt;
         )
       })}
     &lt;/ul&gt;
   ) : (
     ''
   )}
 &lt;/li&gt;</code></pre>
            <p>This code is clean and compact because Gatsby is a static content generation tool based on React. It’s loved for a myriad of reasons, but my honest main reason to migrate to it was to make the Hugo code above much less ugly.</p><p>For our purposes, less ugly was important because we had dreams of redesigning our docs to be interactive with support for multiple coding languages and other features.</p><p>For example, the <a href="https://developers.cloudflare.com/workers/templates">template gallery</a> would be a place to go to for how-to recipes and examples. The templates themselves would live in a template registry service and turn into static pages via an API.</p><p>We wanted the docs to not be constrained by Go templating. The <a href="https://gohugo.io/templates/introduction/">Hugo docs</a> admit their templates aren’t the best for complicated logic:</p><blockquote><p>Go Templates provide an extremely simple template language that adheres to the belief that only the most basic of logic belongs in the template or view layer.</p></blockquote><p>Gatsby and React enable the more complex logic we were looking for. After our team built <a href="https://workers.cloudflare.com/">workers.cloudflare.com</a> and <a href="https://workers.cloudflare.com/built-with">Built with Workers</a> on Gatsby, I figured this was my shot to really give Gatsby a try on our Workers developer docs.</p>
    <div>
      <h3>Decision to Migrate over Starting from Scratch</h3>
      <a href="#decision-to-migrate-over-starting-from-scratch">
        
      </a>
    </div>
    <p>I’m normally not a fan of fixing things that aren’t broken. Though I didn’t like working with Hugo, I did love working in React, and had every reason to switch. Still, I was timid about being the one in charge of moving off Hugo. I was scared. I hated looking at the Liquid-like code of Go templates. I didn’t want to have to port all the existing templates to React without truly understanding what I might be missing.</p><p>There comes a point with tech debt, though, where you have to tackle the debt you are most scared of.</p><p>The easiest solution would of course be to throw the Hugo code away. Start from scratch. A clean slate. But this means taking something that was not broken and breaking it. The styling, SEO, tagging, and analytics of the site took small iterations over the course of a few years to get right, and I didn’t want to be the one to break them. Instead of throwing away all the styling and logic tied in for search, SEO, and so on, our plan was to maintain as much of the current design and logic as possible while converting it to React piece-by-piece, component-by-component.</p><p>Also, other teams at Cloudflare (e.g. Access, Argo Tunnel) still had developer docs using Hugo. I wanted a team at Cloudflare to be able to import their existing markdown files with frontmatter into the Gatsby repo and preserve the existing design.</p><p>I wanted to migrate, instead of teleport, to Gatsby.</p>
    <div>
      <h2>How-to: Hugo to Gatsby</h2>
      <a href="#how-to-hugo-to-gatsby">
        
      </a>
    </div>
    <p>In this blog post, I go through some, but not all, of the steps of how I ported our complex doc site from Hugo to Gatsby. The few examples here help convey the issues that caused the most pain.</p><p>Let’s start with getting the <a href="https://blog.cloudflare.com/markdown-for-agents/">markdown files</a> to turn into HTML pages.</p>
    <div>
      <h3>Markdown</h3>
      <a href="#markdown">
        
      </a>
    </div>
    <p>One goal was to keep all the existing markdown and frontmatter we had set up in Hugo as similar as possible. The reasoning for this was to not break existing content and also to maintain the version history of each doc.</p><p>Gatsby is built on top of GraphQL. All the data and almost all content for Gatsby is put into GraphQL during startup, usually via a plugin; Gatsby then queries for this data upon actual page creation. This is quite different from Hugo’s much more abstract model of putting all your content in a folder named <code>content</code> and letting Hugo figure out which template to apply based on the logic in the template.</p><p>MDX is a sophisticated tool that parses markdown into Gatsby so it can later be rendered as HTML (it can actually do much more than that, but I won’t get into it here). I started with Gatsby’s MDX plugin to create nodes from my markdown files. Here is the code to set up the plugin to get all the markdown files (files ending in .md and .mdx) I had in the <code>src/content</code> folder into GraphQL:</p><p><code>gatsby-config.js</code></p>
            <pre><code>const path = require('path')
 
module.exports = {
 plugins: [
   {
     resolve: `gatsby-source-filesystem`,
     options: {
       name: `mdx-pages`,
       path: `${__dirname}/src/content`,
       ignore: [`**/CONTRIBUTING*`, '/styles/**'],
     },
   },
   {
     resolve: `gatsby-plugin-mdx`,
     options: {
       extensions: [`.mdx`, `.md`],
     },
   }, 
]}</code></pre>
            <p>Now that Gatsby knows about these files as nodes, we can create pages for them. In <code>gatsby-node.js</code>, I tell Gatsby to grab these MDX pages and use a template <code>markdownTemplate.tsx</code> to create pages for them:</p>
            <pre><code>const path = require(`path`)
const { createFilePath } = require(`gatsby-source-filesystem`)
exports.createPages = async ({ actions, graphql, reporter }) =&gt; {
 const { createPage } = actions
 
 const markdownTemplate = path.resolve(`src/templates/markdownTemplate.tsx`)
 
 const result = await graphql(`
   {
     allMdx(limit: 1000) {
       edges {
          node {
            id
            fields {
              pathToServe
              parent
            }
           frontmatter {
             alwaysopen
             weight
           }
           fileAbsolutePath
         }
       }
     }
   }
 `)
 // Handle errors
 if (result.errors) {
   reporter.panicOnBuild(`Error while running GraphQL query.`)
   return
 }
 result.data.allMdx.edges.forEach(({ node }) =&gt; {
   return createPage({
     path: node.fields.pathToServe,
     component: markdownTemplate,
      context: {
        id: node.id,
        parent: node.fields.parent,
        weight: node.frontmatter.weight,
      }, // additional data can be passed via context and used as variables in the query
   })
 })
}
exports.onCreateNode = ({ node, getNode, actions }) =&gt; {
 const { createNodeField } = actions
 // Ensures we are processing only markdown files
 if (node.internal.type === 'Mdx') {
   // Use `createFilePath` to turn markdown files in our `content` directory into `/workers/`pathToServe
   const originalPath = node.fileAbsolutePath.replace(
     node.fileAbsolutePath.match(/.*content/)[0],
     ''
   )
   let pathToServe = createFilePath({
     node,
     getNode,
     basePath: 'content/',
   })
   let parentDir = path.dirname(pathToServe)
   if (pathToServe.includes('index')) {
     pathToServe = parentDir
     parentDir = path.dirname(parentDir) // "/" dirname will = "/"
   }
    pathToServe = pathToServe.replace(/\/+$/, '/') // collapse any trailing slashes into one
    // Creates new queryable fields ('pathToServe', 'parent', 'filePath')
    // on allMdx edge nodes
   createNodeField({
     node,
     name: 'pathToServe',
     value: `/workers${pathToServe}`,
   })
   createNodeField({
     node,
     name: 'parent',
     value: parentDir,
   })
   createNodeField({
     node,
     name: 'filePath',
     value: originalPath,
   })
 }
}</code></pre>
            <p>Now every time Gatsby runs, it runs through each node in <code>onCreateNode</code>. If the node is MDX, it passes the node’s content (the markdown, <code>fileAbsolutePath</code>, etc.) and all the node fields (<code>filePath</code>, <code>parent</code>, and <code>pathToServe</code>) to the <code>markdownTemplate.tsx</code> component so that the component can render the appropriate information for that markdown file.</p><p>The bare-bones component for a page that renders a React component from the MDX node looks like this:</p><p><code>markdownTemplate.tsx</code></p>
            <pre><code>import React from "react"
import { graphql } from "gatsby"
import { MDXRenderer } from "gatsby-plugin-mdx"
 
export default function PageTemplate({ data: { mdx } }) {
 return (
   &lt;div&gt;
     &lt;h1&gt;{mdx.frontmatter.title}&lt;/h1&gt;
     &lt;MDXRenderer&gt;{mdx.body}&lt;/MDXRenderer&gt;
   &lt;/div&gt;
 )
}
 
export const pageQuery = graphql`
 query BlogPostQuery($id: String) {
   mdx(id: { eq: $id }) {
     id
     body
     frontmatter {
       title
     }
   }
 }
`</code></pre>
            
    <div>
      <h3>A Complex Component: Sidebar</h3>
      <a href="#a-complex-component-sidebar">
        
      </a>
    </div>
    <p>Now let’s get into where I wasted the most time but learned hard lessons upfront: turning the Hugo template into a React component. At the beginning of this article, I showed that scary sidebar.</p><p>To set up the <code>li</code> element, the Hugo logic we had looks like:</p>
            <pre><code>{{ define "section-tree-nav" }}
{{ $currentNode := .currentnode }}
{{ with .sect }}
 {{ if not .Params.Hidden }}
  {{ if .IsSection }}
    {{safeHTML .Params.head}}
    &lt;li data-nav-id="{{.URL}}" class="dd-item
        {{ if .IsAncestor $currentNode }}parent{{ end }}
        {{ if eq .UniqueID $currentNode.UniqueID}}active{{ end }}
        {{ if .Params.alwaysopen}}parent{{ end }}
        {{ if .Params.alwaysopen}}always-open{{ end }}
        "&gt;</code></pre>
            <p>I see that the code is defining some <code>section-tree-nav</code> component-like thing and taking in some <code>currentNode</code>. To be honest, I still don’t know exactly what the variables <code>.sect</code>, <code>IsSection</code>, <code>Params.head</code>, and <code>Params.Hidden</code> mean. Although I can take a wild guess, they're not that important for understanding what the logic is doing. The logic is setting the classes on the <code>li</code> element, which is all I really care about: parent, always-open, and active.</p><p>When focusing on those three classes, we can port them to React in a much more readable way by defining a variable string <code>ddClass</code>:</p>
            <pre><code> let ddClass = ''
 let isAncestor = numberOfPages &gt; 0
 if (isAncestor) {
   ddClass += ' parent'
 }
 if (frontmatter.alwaysopen) {
   ddClass += ' parent alwaysOpen'
 }
 return (
   &lt;Location&gt;
     {({ location }) =&gt; {
       const currentPathActive = location.pathname === pathToServe
       if (currentPathActive) {
         ddClass += ' active'
       }
       return (
         &lt;li data-nav-id={pathToServe} className={'dd-item ' + ddClass}&gt;</code></pre>
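            <p>The class computation above can also be isolated as a pure function (a hypothetical refactor of my own, not the code we shipped):</p>

```javascript
// Hypothetical pure-function version of the ddClass logic above: given the
// page's state, produce the final class string for the li element.
function computeDdClass({ numberOfPages, alwaysopen, isActive }) {
  let ddClass = ''
  if (numberOfPages > 0) ddClass += ' parent'
  if (alwaysopen) ddClass += ' parent alwaysOpen'
  if (isActive) ddClass += ' active'
  return 'dd-item' + ddClass
}
```
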
            <p>There are actually a few nice things about the Hugo code, I admit. Using the Location component in React was probably less intuitive than Hugo’s ability to access <code>currentNode</code> to get the active page. Also, <code>isAncestor</code> is predefined in Hugo as "whether the current page is an ancestor of the given page." For me, though, having to track down the definitions of the predefined variables was frustrating, and I appreciate the local explicitness of defining them myself - but I admit I’m a bit jaded.</p>
    <div>
      <h4>Children</h4>
      <a href="#children">
        
      </a>
    </div>
    <p>The most complex part of the sidebar is getting the children. Now this is a story that really gets me starting to appreciate GraphQL.</p><p>Here’s getting the children for the sidebar in Hugo:</p>
            <pre><code>    {{ $numberOfPages := (add (len .Pages) (len .Sections)) }}
        {{ if ne $numberOfPages 0 }}
 
          {{ if or (.IsAncestor $currentNode) (.Params.alwaysopen)  }}
            &lt;i class="triangle-up"&gt;&lt;/i&gt;
          {{ else }}
            &lt;i class="triangle-down"&gt;&lt;/i&gt;
          {{ end }}
 
        {{ end }}
      &lt;/a&gt;
      {{ if ne $numberOfPages 0 }}
        &lt;ul&gt;
          {{ .Scratch.Set "pages" .Pages }}
          {{ if .Sections}}
          {{ .Scratch.Set "pages" (.Pages | union .Sections) }}
          {{ end }}
          {{ $pages := (.Scratch.Get "pages") }}
 
        {{ if eq .Site.Params.ordersectionsby "title" }}
          {{ range $pages.ByTitle }}
            {{ if and .Params.hidden (not $.showhidden) }}
            {{ else }}
            {{ template "section-tree-nav" dict "sect" . "currentnode" $currentNode }}
            {{ end }}
          {{ end }}
        {{ else }}
          {{ range $pages.ByWeight }}
            {{ if and .Params.hidden (not $.showhidden) }}
            {{ else }}
            {{ template "section-tree-nav" dict "sect" . "currentnode" $currentNode }}
            {{ end }}
          {{ end }}
        {{ end }}
        &lt;/ul&gt;
      {{ end }}
    &lt;/li&gt;
  {{ else }}
    {{ if not .Params.Hidden }}
      &lt;li data-nav-id="{{.URL}}" class="dd-item
     {{ if eq .UniqueID $currentNode.UniqueID}}active{{ end }}
      "&gt;
        &lt;a href="{{.RelPermalink}}"&gt;
        &lt;span&gt;{{safeHTML .Params.Pre}}{{.Title}}{{safeHTML .Params.Post}}&lt;/span&gt;
        {{ if .Params.new }}
          &lt;span class="new-badge"&gt;NEW&lt;/span&gt;
        {{ end }}
 
        &lt;/a&gt;&lt;/li&gt;
     {{ end }}
  {{ end }}
 {{ end }}
{{ end }}
{{ end }}</code></pre>
            <p>This is just the first layer of children. No grandbabies, sorry. And I won’t even get into exactly what is going on there. When I started porting this over, I realized a lot of that logic was not even being used.</p><p>In React, we grab all the markdown pages and see which have parents that match the current page:</p>
            <pre><code>  const topLevelMarkdown: markdownRemarkEdge[] = useStaticQuery(
    graphql`
     {
       allMdx(limit: 1000) {
         edges {
           node {
             frontmatter {
               title
               alwaysopen
               hidden
               showNew
               weight
             }
             fileAbsolutePath
             fields {
               pathToServe
               parent
               filePath
             }
           }
         }
       }
     }
   `
 ).allMdx.edges
 const myChildren: mdx[] = topLevelMarkdown
   .filter(
     edge =&gt;
       fields.pathToServe === '/workers' + edge.node.fields.parent &amp;&amp;
       fields.pathToServe !== edge.node.fields.pathToServe
   )
   .map(child =&gt; child.node)
   .filter(child =&gt; !child.frontmatter.hidden)
   .sort(sortByWeight)
 const numberOfPages = myChildren.length</code></pre>
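            <p>The <code>sortByWeight</code> comparator isn’t shown above; a minimal sketch of what I assume it does (ordering by frontmatter <code>weight</code>, falling back to title order) might look like:</p>

```javascript
// Hypothetical sketch of the sortByWeight comparator referenced above:
// order pages by their frontmatter weight, falling back to title order
// when the weights are equal or missing.
function sortByWeight(a, b) {
  const weightA = a.frontmatter.weight || 0
  const weightB = b.frontmatter.weight || 0
  if (weightA !== weightB) return weightA - weightB
  return a.frontmatter.title.localeCompare(b.frontmatter.title)
}
```
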
            <p>And then we render the children, so the full JSX becomes:</p>
            <pre><code>&lt;li data-nav-id={pathToServe} className={'dd-item ' + ddClass}&gt;
   &lt;Link
     to={pathToServe}
     title="Docs Home"
     activeClassName="active"
   &gt;
     {title || 'No title'}
     {numberOfPages ? (
       &lt;Triangle isAncestor={isAncestor} alwaysopen={showChildren} /&gt;
     ) : (
       ''
     )}
     {showNew ? &lt;span className="new-badge"&gt;NEW&lt;/span&gt; : ''}
   &lt;/Link&gt;
   {showChildren ? (
     &lt;ul&gt;
       {' '}
       {myChildren.map((child: mdx) =&gt; {
         return (
           &lt;SidebarLi
             frontmatter={child.frontmatter}
             fields={child.fields}
             depth={++depth}
             key={child.frontmatter.title}
           /&gt;
         )
       })}
     &lt;/ul&gt;
   ) : (
     ''
   )}
 &lt;/li&gt;</code></pre>
            <p>OK, now that we have a component and Gatsby creating the pages from the markdown, I can go back to my <code>PageTemplate</code> component and render the sidebar:</p>
            <pre><code>import React from 'react'
import { MDXRenderer } from 'gatsby-plugin-mdx'
import Sidebar from './Sidebar'
 
export default function PageTemplate({ data: { mdx } }) {
 return (
   &lt;div&gt;
     &lt;Sidebar /&gt;
     &lt;h1&gt;{mdx.frontmatter.title}&lt;/h1&gt;
     &lt;MDXRenderer&gt;{mdx.body}&lt;/MDXRenderer&gt;
   &lt;/div&gt;
 )
}</code></pre>
            <p>I don’t have to pass any props to <code>Sidebar</code> because the GraphQL static query in <code>Sidebar.tsx</code> gets all the data about all the pages that I need. I don’t even maintain state because <code>Location</code> is used to determine which path is active. Gatsby generates pages using the above component for each page that’s a markdown MDX node.</p>
    <div>
      <h2>Wrapping up</h2>
      <a href="#wrapping-up">
        
      </a>
    </div>
    <p>This was just the beginning of the full migration to Gatsby. I repeated the process above for turning templates, partials, and other HTML component-like parts of Hugo into React, which was actually pretty fun - though turning vanilla JS that once manipulated the DOM into React would probably have been a nightmare if I weren’t somewhat comfortable working in React.</p><p>Main lessons learned:</p><ul><li><p>Being careful about breaking things and being scared to break things are two very different things. Being careful is good; being scared is bad. If I were to do this migration again, I would use the Hugo templates as a reference but not as a source of truth. Staging environments are what testing is for. Don’t sacrifice writing things the right way to comply with the old way.</p></li><li><p>When doing a migration like this on a static site, get just a few pages working before moving the content over, to avoid intermediate PRs breaking things. It seems obvious, but with the large amount of content we had, a lot of things broke when porting over content. Get everything polished with each type of page before moving all your content over.</p></li><li><p>When doing a migration like this, it’s OK to compromise some features of the old design until you determine whether to add them back in - just make sure to test this with real users first. For example, I made the mistake of assuming others wouldn’t mind being without anchor tags. (Note that Hugo templates create anchor tags for headers automatically, whereas in Gatsby you have to use MDX to customize markdown components.) Test a single, popular page with real users first to see if a feature matters before giving it up.</p></li><li><p>Even for those with a React background, ramping up on GraphQL and setting up Gatsby isn’t as simple as it seems at first. But once you’re set up, it’s pretty dang nice.</p></li></ul><p>Overall, the process of moving to Gatsby was well worth the effort. As we implement a redesign in React, it’s much easier to apply the designs in this cleaner code base. Also, though Hugo was already very performant with a nice SEO score, in Gatsby we are able to improve performance and SEO further thanks to the framework’s flexibility.</p><p>Lastly, working with the Gatsby team was awesome - they even give free T-shirts for your first PR!</p> ]]></content:encoded>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">1y7UGzzeCtwuWlFuVMZlic</guid>
            <dc:creator>Victoria Bernard</dc:creator>
        </item>
        <item>
            <title><![CDATA[A Node to Workers Story]]></title>
            <link>https://blog.cloudflare.com/a-node-to-workers-story/</link>
            <pubDate>Fri, 08 Mar 2019 23:04:45 GMT</pubDate>
            <description><![CDATA[ Node.js allows developers to build web services with JavaScript. However, you're on your own when it comes to registering a domain, setting up DNS, managing the server processes, and setting up builds.  ]]></description>
            <content:encoded><![CDATA[ <p>Node.js allows developers to build web services with JavaScript. However, you're on your own when it comes to registering a domain, setting up DNS, managing the server processes, and setting up builds.</p><p>There's no reason to manage all these layers on separate platforms. For a site on Cloudflare, these layers can live on a single platform. Serverless technology simplifies developers' lives and reframes our current definition of backend.</p><p>In this article I will breeze through a simple example of how converting a former Node server into a Worker untangled a part of my team’s code base. The conversion to Workers for this example can be found in this <a href="https://github.com/cloudflare-apps/spotify-oauth-express/pull/1/files">PR on Github</a>.</p>
    <div>
      <h3>Background</h3>
      <a href="#background">
        
      </a>
    </div>
    <p>Cloudflare Marketplace hosts a variety of apps, most of which are produced by third-party developers, but some are produced by Cloudflare employees.</p><p>The Spotify app is one of the apps written by the Cloudflare Apps team. This app requires an OAuth flow with Spotify to retrieve the user’s token and gather the playlists, artists, and other Spotify-profile-specific information. While Cloudflare manages the OAuth authentication portion, the app owner - in this case Cloudflare Apps - manages the small integration service that uses the token to call Spotify and formats an appropriate response. Mysteriously, this Spotify OAuth integration broke.</p><p>Teams at Cloudflare are keen to remain agile, adaptive, and constantly learning. The current Cloudflare Apps team no longer includes the original developers of the Spotify OAuth integration. As such, the current team had no idea why the app broke. Although we had various alerting and logging systems, the Spotify OAuth server was lost in the cloud.</p><p>Our first step in tackling the issue was tracking down <i>where</i> exactly the OAuth flow <i>lived</i>. After shuffling through several of the potential platforms - GCloud, AWS, Digital Ocean - we discovered the service was on Heroku. The more platforms introduced, the more complexity in deploys and access management.</p><p>I decided to reduce the number of layers in our service by simply creating a serverless Cloudflare Worker, with no maintenance, no new logins, and no unique backend configuration.</p><p>Here’s how I did it.</p>
    <div>
      <h3>Goodbye Node</h3>
      <a href="#goodbye-node">
        
      </a>
    </div>
    <p>The old service used Node.js and Express.</p>
            <pre><code>app.post('/blah', function(request, response) {</code></pre>
            <p>This states that for every POST to an endpoint <code>/blah</code>, execute the callback function with a request and response object as arguments.</p><p>Cloudflare Workers are built on top of the <a href="https://developer.mozilla.org/en-US/docs/Web/API/Service_Worker_API">Service Workers</a> spec. Instead of mutating the response and calling methods on the response object like in Express, we need to respond to ‘fetch’ events. The code below adds an event listener for fetch events (incoming requests to the worker), receiving a <a href="https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent">FetchEvent</a> as the first parameter. The FetchEvent has a special method called <a href="https://developer.mozilla.org/en-US/docs/Web/API/FetchEvent/respondWith">respondWith</a> that accepts an instance of <a href="https://developer.mozilla.org/en-US/docs/Web/API/Response">Response</a> or a Promise which resolves to a Response.</p>
            <pre><code>addEventListener("fetch", event =&gt; {
  event.respondWith(new Response("Hello world!"));
});
</code></pre>
            <p>To avoid reimplementing the routing logic in my worker, I made my own <code>app</code>.</p>
            <pre><code>const app = {
   get: (endpoint, fn) =&gt; {
     const url = new URL(request.url); // `request` is the incoming request, in scope in the handler
     if (url.pathname === endpoint &amp;&amp; request.method === "GET")
       return fn(request);
     return null;
   },
   post: (endpoint, fn) =&gt; {
     const url = new URL(request.url);
     if (url.pathname === endpoint &amp;&amp; request.method === "POST")
       return fn(request);
     return null;
   }
 };</code></pre>
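            <p>Note that <code>get</code> and <code>post</code> close over a <code>request</code> from the surrounding handler, so the router only works inside that scope. A self-contained variant (hypothetical, with <code>request</code> passed in explicitly) shows the same matching logic:</p>

```javascript
// Hypothetical standalone version of the router above: request is passed in
// explicitly instead of being captured from the enclosing handler's scope.
function makeApp(request) {
  const match = (method, endpoint, fn) => {
    const url = new URL(request.url);
    if (url.pathname === endpoint && request.method === method)
      return fn(request);
    return null;
  };
  return {
    get: (endpoint, fn) => match("GET", endpoint, fn),
    post: (endpoint, fn) => match("POST", endpoint, fn)
  };
}
```
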
            <p>Now with <code>app</code> set up, I call <code>app.get(..)</code> in my handler similar to how I did in Node. I just need to make sure the handler returns the response from whichever route matched.</p>
            <pre><code>async function handleRequest(request) {
  let lastResponse = app.post("/", async function (request) {..});
  if (lastResponse) {
    return lastResponse;
  }
  lastResponse = app.get("/", async function (request) {..});
  if (lastResponse) {
    return lastResponse;
  }
  return new Response("Not found", { status: 404 });
}</code></pre>
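            <p>To see the whole pattern end to end, here’s a self-contained sketch that runs outside a Worker. Plain strings stand in for Response objects, and the <code>makeApp</code> factory (my name, not part of the original code) closes over the request so <code>get</code>/<code>post</code> keep the Express-like <code>(endpoint, fn)</code> shape:</p>

```javascript
// A runnable sketch of the routing idea above, outside a Worker.
// Strings stand in for Response objects; names are illustrative.
function makeApp(request) {
  // Build a matcher for one HTTP method; returns the callback's
  // result on a match, or null so the handler can keep trying.
  const route = (method) => (endpoint, fn) => {
    const url = new URL(request.url);
    if (url.pathname === endpoint && request.method === method) return fn(request);
    return null;
  };
  return { get: route("GET"), post: route("POST") };
}

function handleRequest(request) {
  const app = makeApp(request);
  let lastResponse = app.post("/blah", (req) => "handled POST /blah");
  if (lastResponse) return lastResponse;
  lastResponse = app.get("/", (req) => "handled GET /");
  if (lastResponse) return lastResponse;
  return "404";
}
```

A mock request like <code>{ url: "https://example.com/blah", method: "POST" }</code> is enough to exercise it.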
            <p><code>lastResponse</code> ensures that we keep checking the remaining endpoint methods until one matches.</p><p>The other thing that needs to change is how the response is returned. Before, the handler returned with <code>response.json()</code>, so the final response would be of JSON type.</p>
            <pre><code>response.json({
  proceed: false,
  errors: [{ type: '400', message: error.toString() }]
});</code></pre>
            <p>In Workers, I need to pass a <code>Response</code> to the <code>respondWith</code> function. I replaced every instance of <code>response.json</code> or <code>response.sendStatus</code> with a new Response object.</p>
            <pre><code>return new Response(
  JSON.stringify({
    proceed: false,
    errors: [{ type: "400", message: res.error }]
  }),
  { headers: { "Content-Type": "application/json" } }
);</code></pre>
            <p>Now for the most beautiful part of the transition: deleting useless config.</p><p>Our Express server exported <code>app</code> as a module and inserted credentials so that Heroku, or whatever non-serverless server, could pick it up, build, and run it.</p><p>Though I <i>can</i> import libraries for workers via <a href="/using-webpack-to-bundle-workers/">webpack</a>, for this application it’s unnecessary. I also have access to fetch and other native service worker functions.</p>
            <pre><code>const {getJson} = require('simple-fetch')
module.exports = function setRoutes (app) {</code></pre>
            <p>Getting rid of modules and deployment config, I removed the files <code>Procfile</code>, <code>credentials.json</code>, <code>package.json</code>, <code>development.js</code>, <code>heroku.js</code>, and <code>create-app.js</code>. <code>Routes.js</code> simply becomes <code>worker.js</code>.</p><p>This was a demo of how Workers made my life as a programmer easier. Future developers working with my code can read it without ever looking at any configuration. Even a purely vanilla JavaScript developer can jump in, since there are no builds to manage and no hair to pull out.</p><p>With serverless I can now spend time doing what I love: development.</p> ]]></content:encoded>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">57JO6uOsVn7zF28ZV4HjHo</guid>
            <dc:creator>Victoria Bernard</dc:creator>
        </item>
        <item>
            <title><![CDATA[OAuth Auth Server through Workers]]></title>
            <link>https://blog.cloudflare.com/oauth-2-0-authentication-server/</link>
            <pubDate>Tue, 11 Dec 2018 23:48:12 GMT</pubDate>
            <description><![CDATA[ Services need to talk to each other safely without inconveniencing users. Let’s pretend I own a service with users and I want to grant other services access to my service on behalf of my users.  ]]></description>
            <content:encoded><![CDATA[ <p>Let’s pretend I own a service and I want to grant other services access to my service on behalf of my users. The familiar OAuth 2.0 is the industry standard used by the likes of <a href="https://developers.google.com/identity/protocols/OAuth2">Google sign in</a>, Facebook, etc. to communicate safely without inconveniencing users.</p><p>Implementing an OAuth Authentication server is conceptually simple but a pain in practice. We can leverage the power of <a href="https://cloudflareworkers.com">Cloudflare Workers</a> to simplify the implementation, reduce latency, and segregate our service logic from the authentication layer.</p><p>For those unfamiliar with OAuth, I highly recommend reading a more in-depth <a href="https://aaronparecki.com/oauth-2-simplified/">article</a>.</p><p>The steps of the OAuth 2.0 workflow are as follows:</p><ol><li><p>The consumer service redirects the user to a callback URL that was set up by the auth server. At this callback URL, the auth server asks the user to sign in and accept the consumer’s permission requests.</p></li><li><p>The auth server redirects the user to the consumer service with a code.</p></li><li><p>The consumer service asks to exchange this code for an access token. The consumer service validates their identity by including their client secret in the callback URL.</p></li><li><p>The auth server gives the consumer the access token.</p></li><li><p>The consumer service can now use the access token to get resources on behalf of the user.</p></li></ol><p>In the rest of this post, I will be walking through my implementation of an OAuth Authentication server using a Worker. For simplicity, I will assume the user has already logged in and obtained a session token in the form of a JWT that I will refer to as “token” herein. My <a href="https://github.com/victoriabernard92/OAuth-Server">full implementation</a> has a more thorough flow that includes initial user login and registration.</p>
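<p>The heart of steps 2-4 above can be sketched as a toy, in-memory exchange before we touch any Worker code. The Maps stand in for persistent storage, and every name here is illustrative rather than taken from the real implementation:</p>

```javascript
// Toy sketch of the code-for-token exchange (steps 2-4 above).
// Maps stand in for persistent storage; names are illustrative.
const codes = new Map();
const tokens = new Map();

// Step 2: the auth server issues a short random code, keyed by client + user.
function issueCode(clientId, email) {
  const code = Math.random().toString(36).substring(2, 12);
  codes.set(clientId + email, code);
  return code;
}

// Steps 3-4: the consumer trades the code for a token; a code is single-use.
function exchangeCodeForToken(clientId, email, code) {
  if (codes.get(clientId + email) !== code) return null; // unknown or replayed code
  codes.delete(clientId + email);
  const token = "tok_" + Math.random().toString(36).substring(2, 12);
  tokens.set(email, token);
  return token;
}
```

Replaying the same code a second time yields <code>null</code>, which is exactly the property the real server enforces with its KV storage.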
    <div>
      <h3>Setup</h3>
      <a href="#setup">
        
      </a>
    </div>
    <p>We must be able to reference valid user sessions, codes and login information. Because Workers do not maintain state between executions, we will store this information using <a href="https://developers.cloudflare.com/workers/writing-workers/storing-data/">Cloudflare Storage</a>. We set up three namespaces: USERS, CODES, and TOKENS.</p><p>On your OAuth server domain, create two empty worker scripts called auth and token. Bind the three namespaces to both worker scripts so that your resources end up looking like:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1VFGcOfkTKZXNuEHBDlReT/4ca67344666d87c20215defd524a9b70/Screen-Shot-2018-12-11-at-3.46.15-PM.png" />
            
            </figure><p>To put and get items from storage using KV Storage syntax:</p>
            <pre><code>// @ts-ignore
CODES.get("user@ex.com")</code></pre>
            <p>We include <code>// @ts-ignore</code> preceding all KV storage commands. We do not have type definitions for these variables locally, so TypeScript would throw an error at compile time otherwise.</p><p>To set up a project using TypeScript and the Cloudflare Previewer, follow this <a href="/using-webpack-to-bundle-workers/">blog post</a>. Webpack will allow us to use <code>import</code>, which we need for the JWT library <code>jsonwebtoken</code>.</p>
            <pre><code>import * as jwt from "jsonwebtoken";</code></pre>
            <p>Remember to run:</p>
            <pre><code>npm install jsonwebtoken &amp;&amp; npm install @types/jsonwebtoken</code></pre>
            <p>Optionally, we can set up a file to specify endpoints and credentials.</p>
            <pre><code>import { hosts } from "./private";

export const credentials = {/* for demo purposes, ideally use KV to store secrets */
  clients: [{
    id: "victoriasclient",
    secret: "victoriassecret"
  }],
  storage: {
    secret: "somesecrettodecryptfromtheKV"
  }

};

export const paths = {
  auth: {
    authorize: hosts.auth + "/authorize",
    login: hosts.auth + "/login",
    code: hosts.auth + "/code",
  },
  token: {
    resource: hosts.token + "/resource",
    token: hosts.token + "/token",
  }
}</code></pre>
            
    <div>
      <h4>1. Accept page after callback</h4>
      <a href="#1-accept-page-after-callback">
        
      </a>
    </div>
    <p>The consumer service generates some callback URL that redirects the user to our authentication server. The authentication server then presents the user with a login or accept page to generate a code. The authentication server thus must listen on the <code>authorize</code> endpoint and return <code>giveLoginPageResponse</code>.</p>
            <pre><code>addEventListener("fetch", (event: FetchEvent) =&gt; {
  const url = new URL(event.request.url);
  if (url.pathname.includes("/authorize"))
    return event.respondWith(giveLoginPageResponse(event.request));
});

export async function giveLoginPageResponse(request: Request) {
  // ...checks for cases where the user is not necessarily logged in...
  let token = getTokenFromRequest(request);
  if (token) { // user already signed in
    return new Response(giveAcceptPage(request));
  }
  // ...
}</code></pre>
            <p>Since the user already has a stored session, we can use a method <code>giveAcceptPage</code> to display the accept page, whose Accept link sends the user on to generate the code.</p>
            <pre><code>export function giveAcceptPage(request: Request) {
  let req_url = new URL(request.url);
  let params = req_url.search;
  let fetchCodeURL = paths.auth.code + params;
  return `&lt;!DOCTYPE html&gt;
  &lt;html&gt;
    &lt;body&gt;
      &lt;a href="${fetchCodeURL}"&gt;Accept&lt;/a&gt;
    &lt;/body&gt;
  &lt;/html&gt;
  `;
}</code></pre>
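            <p>The interesting bit above is that the accept link simply forwards the original query string on to the code endpoint. Isolated as a pure helper (the function name is mine, for illustration), it looks like this:</p>

```javascript
// Sketch of forwarding the original query params to the code endpoint,
// as giveAcceptPage does above. The helper name is illustrative.
function buildFetchCodeURL(requestUrl, codeEndpoint) {
  const params = new URL(requestUrl).search; // raw query string, "?" included
  return codeEndpoint + params;
}
```

This keeps <code>client_id</code> and <code>redirect_uri</code> flowing through the accept step untouched.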
            
    <div>
      <h4>2. Redirect back to consumer</h4>
      <a href="#2-redirect-back-to-consumer">
        
      </a>
    </div>
    <p>At the endpoint for <code>fetchCodeURL</code>, the authentication server will redirect the user’s browser, code in hand, to the consumer page specified by the <code>redirect_uri</code> param of the original callback URL.</p>
            <pre><code>addEventListener("fetch", (event: FetchEvent) =&gt; {
  ...
  if (url.pathname.includes("/code"))
    return event.respondWith(redirectCodeToConsumer(event.request));
});
export async function redirectCodeToConsumer(request: Request) {
  let session = await verifyUser(request);

  if (session.msg == "403") return new Response(give403Page(), { status: 403 });
  if (session.msg == "dne") return registerNewUser(session.email, session.pwd);
  let code = Math.random().toString(36).substring(2, 12);
  let req_url = new URL(request.url);
  let client_id = req_url.searchParams.get("client_id");
  try {
    let redirect_uri = new URL(encodeURI(req_url.searchParams.get("redirect_uri")));
    // @ts-ignore
    await CODES.put(client_id + session.email, code);
    redirect_uri.searchParams.set("code", code);
    redirect_uri.searchParams.set("response_type", "code");
    return Response.redirect(redirect_uri.href, 302);
  } catch (e) {
    // @ts-ignore
    await CODES.delete(client_id + session.email);
    return new Response(
      JSON.stringify(factoryIError({ message: "error with the URL passed in: " + e })),
      { status: 500 });
  }
}</code></pre>
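            <p>The query-string surgery in that redirect can be pulled out into a pure helper, which makes it easy to check what location the browser actually receives. This is a sketch; the helper name is mine, not from the worker:</p>

```javascript
// Sketch of attaching the generated code to the consumer's redirect_uri,
// mirroring redirectCodeToConsumer above. The helper name is illustrative.
function buildRedirectLocation(redirectUri, code) {
  const url = new URL(redirectUri);
  url.searchParams.set("code", code);
  url.searchParams.set("response_type", "code");
  return url.href; // the Location header value for the 302 redirect
}
```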
            
    <div>
      <h4>3. Code to Token Exchange</h4>
      <a href="#3-code-to-token-exchange">
        
      </a>
    </div>
    <p>Now the consumer has the code and can use it to request a token. On our token worker, configure the endpoint where the consumer service exchanges the code for a token.</p>
            <pre><code>addEventListener("fetch", (event: FetchEvent) =&gt; {
  ...
  if (url.pathname.includes("/token"))
    return event.respondWith(giveToken(event.request));
});</code></pre>
            <p>Grab the code from the request and validate that it matches the code stored for this client. Once the code is verified, deliver the token by grabbing the existing token from the KV storage or by signing the user information to generate a new one.</p>
            <pre><code>export async function giveToken(request: Request) {
  let req_url = new URL(request.url);
  let code = req_url.searchParams.get("code");
  let email = req_url.searchParams.get("email");
  if (code) {
    if (!validClientSecret(request)) return errorResponse();
    // @ts-ignore
    let storedCode = await CODES.get(email);
    if (code != storedCode) return new Response(give403Page(), { status: 403 });

    let tokenJWT = jwt.sign(email, credentials.storage.secret);
    ... return token</code></pre>
            
    <div>
      <h4>4. Give the token to the consumer</h4>
      <a href="#4-give-the-token-to-the-consumer">
        
      </a>
    </div>
    <p>Continuing in the <code>giveToken</code> method where step 3 left off, respond to the consumer with the valid token.</p>
            <pre><code>  ...
  headers.append("set-cookie", "token=Bearer " + tokenJWT);
  // @ts-ignore
  await TOKENS.put(email, tokenJWT);
  var respBody = factoryTokenResponse({
    "access_token": tokenJWT,
    "token_type": "bearer",
    "expires_in": 2592000,
    "refresh_token": tokenJWT,
    "token": tokenJWT
  });
 } else {
  respBody.errors.push(factoryIError({ message: "there was no code sent to the authorize token url" }));
 }
 return new Response(JSON.stringify(respBody), { headers });
}</code></pre>
            
    <div>
      <h4>5. Accepting the token</h4>
      <a href="#5-accepting-the-token">
        
      </a>
    </div>
    <p>At this point, voilà, your duty as an OAuth 2.0 authentication server is complete! The consumer service that wishes to use your service now holds the token you not so magically generated.</p><p>The consumer server then sends a request including the token:</p>
            <pre><code>GET /resource/some-goods
Authorization: Bearer eyJhbGci..bGAqA</code></pre>
            <p>The authentication server validates the token and gives the goods:</p>
            <pre><code>export async function giveResource(request: Request) {
 var respBody: HookResponse = factoryHookResponse({})
 let token = ""
 let decodedJWT = factoryJWTPayload()
 try { //validate request is who they claim
  token = getCookie(request.headers.get("cookie"), "token")
  if (!token) token = request.headers.get("Authorization").substring(7)
  decodedJWT = jwt.verify(token, credentials.storage.secret)
  // @ts-ignore
  let storedToken = await TOKENS.get(decodedJWT.sub)
  if (isExpired(storedToken)) throw new Error("token is expired") /* TODO instead of throwing error send to refresh */
  if (storedToken != token) throw new Error("token does not match what is stored")
 }
 catch (e) {
  respBody.errors.push(factoryIError({ message: e.message, type: "oauth" }))
  return new Response(JSON.stringify(respBody), init)
 }
 respBody.body = getUsersPersonalBody(decodedJWT.sub)
 return new Response(JSON.stringify(respBody), init)
}</code></pre>
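            <p>Extracting the bearer token from the Authorization header, as <code>giveResource</code> does above with <code>substring(7)</code>, is a one-liner worth getting right. A minimal sketch, with a plain string standing in for the Headers lookup:</p>

```javascript
// Minimal sketch of pulling the token out of an Authorization header value,
// as giveResource does above. The helper name is illustrative.
function getBearerToken(authHeader) {
  if (!authHeader || !authHeader.startsWith("Bearer ")) return null;
  return authHeader.substring(7); // "Bearer ".length === 7
}
```

Anything that isn’t a well-formed <code>Bearer</code> header yields <code>null</code>, which the resource endpoint can turn into a 403.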
            <p>The boundaries of serverless are pushed every day, though if your app just needs to authorize users, you may be better off using <a href="https://www.cloudflare.com/products/cloudflare-access/">Cloudflare Access</a>. We've demonstrated that a full-blown OAuth 2.0 authentication server implementation can be achieved with Cloudflare Workers and Storage.</p><p>Stay tuned for a follow-up blog post on an OAuth consumer implementation.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[Serverless]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <guid isPermaLink="false">3DMym5hzg83QDRvOxJCSTo</guid>
            <dc:creator>Victoria Bernard</dc:creator>
        </item>
        <item>
            <title><![CDATA[Custom Page Selection for Cloudflare Apps]]></title>
            <link>https://blog.cloudflare.com/custom-page-selection/</link>
            <pubDate>Thu, 03 May 2018 17:00:00 GMT</pubDate>
            <description><![CDATA[ In July 2016, Cloudflare integrated with Eager - an apps platform. During this integration, several decisions were made to ensure an optimal experience installing apps. We wanted to make sure site owners on Cloudflare could customize and install an app with the minimal number of clicks possible.  ]]></description>
            <content:encoded><![CDATA[ <p>In July 2016, Cloudflare <a href="/cloudflare-acquires-eager/">integrated with Eager</a> - an apps platform. During this integration, several decisions were made to ensure an optimal experience installing apps. We wanted to make sure site owners on Cloudflare could customize and install an app with the minimal number of clicks possible. Customizability often adds complexity and clicks for the user. We’ve been tinkering to find the right balance of user control and simplicity since.</p><p>When installing an app, a site owner must select <i>where</i> - what URLs on their site - they want <i>what</i> apps installed. Our original plan for selecting the URLs an app would be installed on took a few twists and turns. Our end decision was to utilize our <a href="https://support.cloudflare.com/hc/en-us/articles/200168006-How-does-Always-Online-work-">Always Online</a> crawler to pre-populate a tree of the user’s site. Always Online is a feature that crawls Cloudflare sites and serves pages from our cache if the site goes down.</p><p>The benefits to this original setup are:</p><p><b>1. Only valid pages appear</b></p><p>An app only allows installations on HTML pages. For example, since injecting JavaScript into a JPEG image isn’t possible, we would prevent the installer from trying it by not showing that path. Blocking that type of phony installation spares the user confusion later when it doesn’t work.</p><p><b>2. The user is not required to know any URL of their site</b></p><p>The URLs are available right there in the UI. With the click of a check mark, the user does not have to type a thing.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/wzvhWE2cwXANxwwAc6mzs/17d385f4cfc893f54dd46a25dc8b85b3/Screen-Shot-2018-04-16-at-3.10.53-PM.png" />
            
            </figure><p>The disadvantage of this setup is the dependency on the Always Online crawler.</p><p>First off, some users do not wish to have Always Online turned on. Without the site owner’s consent to crawl the site via Always Online, the page loader tree will not load and the user has no pages on which to install an app.</p><p>When a user does have Always Online enabled properly, the crawler might not crawl every page the site owner wishes to install an app on.</p><p>The duty of Always Online is to make sure that in the most catastrophic event for a site owner - their site being down - users can still see a version of the site via cached static HTML. Once upon a time, before <a href="/always-online-v2/">Always Online v2</a>, we actually used the Google bot and other search engine crawlers’ activity to decide what to cache for the Always Online feature. We found that implementing our own crawler made more sense. Our goal is to make sure the most vital pages of a site are crawled and stored in our cache, contrasting with a search engine crawler’s priority of getting the most information possible from the site, thus going “deep” into the depths of a site map.</p><p>The duty of an app install on Cloudflare’s Apps platform is to seamlessly enable users to select pages into which to inject JavaScript, HTML, CSS, and, in the near future, <a href="https://developers.cloudflare.com/workers/about/">Cloudflare Service Workers</a>. Since the objectives of the Always Online crawler differ from those of the Cloudflare Apps platform, there were inevitable consequences. Here are some examples where a page would not be crawled:</p><ul><li><p>The page’s subdomain was not "<a href="https://support.cloudflare.com/hc/en-us/articles/200169626">orange-clouded</a>".</p></li><li><p>The page was not accessible from the site's homepage via links.</p></li><li><p>The site’s homepage had too many links for us to follow.</p></li><li><p>The page was password-protected, preventing us from accessing it and adding it to the site map.</p></li><li><p>The page was added before we had a chance to crawl the site.</p></li></ul><p>Although our custom crawler works well for the Always Online feature, this limited control for our customers installing apps. We decided to do something about it. Combining the advantages of the crawler data we already had <i>with</i> the ability to enter any URL in an install, we created the best of both worlds.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2COMbJNiEM9kdDsOxNAlgI/76d2b6474cb7f9b6d1da6157d96377b5/Screen-Shot-2018-04-16-at-3.08.24-PM.png" />
            
            </figure><p>Now, site owners can type in whatever URL they wish to install an app on. There is also an option to select an entire directory or strictly that page. For simplicity, no regex patterns are supported.</p><p>As the apps on the Cloudflare Apps platform advance, it is vital that the platform itself advance. In the near future, the Apps platform will have the power of Cloudflare Workers, local testing, and much more to come.</p> ]]></content:encoded>
            <category><![CDATA[Cloudflare Apps]]></category>
            <category><![CDATA[JavaScript]]></category>
            <category><![CDATA[Always Online]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">7jZsGav4RDfLIfoIVVXfU</guid>
            <dc:creator>Victoria Bernard</dc:creator>
        </item>
    </channel>
</rss>