
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/">
    <channel>
        <title><![CDATA[ The Cloudflare Blog ]]></title>
        <description><![CDATA[ Get the latest news on how products at Cloudflare are built and the technologies we use, and join the teams helping to build a better Internet. ]]></description>
        <link>https://blog.cloudflare.com</link>
        <atom:link href="https://blog.cloudflare.com/" rel="self" type="application/rss+xml"/>
        <language>en-us</language>
        <image>
            <url>https://blog.cloudflare.com/favicon.png</url>
            <title>The Cloudflare Blog</title>
            <link>https://blog.cloudflare.com</link>
        </image>
        <lastBuildDate>Sun, 05 Apr 2026 17:59:50 GMT</lastBuildDate>
        <item>
            <title><![CDATA[Cloudflare’s bigger, better, faster AI platform]]></title>
            <link>https://blog.cloudflare.com/workers-ai-bigger-better-faster/</link>
            <pubDate>Thu, 26 Sep 2024 13:00:00 GMT</pubDate>
            <description><![CDATA[ Cloudflare helps you build AI applications with fast inference at the edge, optimized AI workflows, and vector database-powered RAG solutions. ]]></description>
            <content:encoded><![CDATA[ <p>Birthday Week 2024 marks the first anniversary of Cloudflare’s AI developer products — <a href="https://blog.cloudflare.com/workers-ai/"><u>Workers AI</u></a>, <a href="https://blog.cloudflare.com/announcing-ai-gateway/"><u>AI Gateway</u></a>, and <a href="https://blog.cloudflare.com/vectorize-vector-database-open-beta/"><u>Vectorize</u></a>. For our first birthday, we’re excited to announce powerful new features that elevate the way you build with AI on Cloudflare.</p><p>Workers AI is getting a big upgrade, with more powerful GPUs that enable faster inference and bigger models. We’re also expanding our model catalog to dynamically support models you want to run on our platform. We’re saying goodbye to neurons, too, revamping our pricing model to be simpler and cheaper. On AI Gateway, we’re moving forward on our vision of an ML Ops platform by introducing more powerful logs and human evaluations. Lastly, Vectorize is going GA, with expanded index sizes and faster queries.</p>
    <p>Whether you want the fastest inference at the edge, optimized AI workflows, or vector database-powered <a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/"><u>RAG</u></a>, we’re excited to help you harness the full potential of AI and get started on building with Cloudflare.</p>
    <div>
      <h3>The fast, global AI platform</h3>
      <a href="#the-fast-global-ai-platform">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/56ofEZRtFHhkrfMaGC4RUb/3f69a2fc3722f67218297c65bd510941/image9.png" />
          </figure><p>The first thing that you notice about an application is how fast, or in many cases, how slow it is. This is especially true of AI applications, where the standard today is to wait for a response to be generated.</p><p>At Cloudflare, we’re obsessed with improving the performance of applications, and have been doubling down on our commitment to make AI fast. To live up to that commitment, we’re excited to announce that we’ve added even more powerful GPUs across our network to accelerate LLM performance.</p><p>In addition to more powerful GPUs, we’ve continued to expand our GPU footprint to get as close to the user as possible, reducing latency even further. Today, we have GPUs in over 180 cities, having doubled our capacity in a year. </p>
    <div>
      <h3>Bigger, better, faster</h3>
      <a href="#bigger-better-faster">
        
      </a>
    </div>
    <p>With the introduction of our new, more powerful GPUs, you can now run inference on significantly larger models, including Meta Llama 3.1 70B. Previously, our model catalog was limited to 8B parameter LLMs, but we can now support larger models, faster response times, and larger context windows. This means your applications can handle more complex tasks with greater efficiency.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Model</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/meta/llama-3.2-11b-vision-instruct</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/meta/llama-3.2-1b-instruct</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/meta/llama-3.2-3b-instruct</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/meta/llama-3.1-8b-instruct-fast</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/meta/llama-3.1-70b-instruct</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>@cf/black-forest-labs/flux-1-schnell</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>The models above are available on our new GPUs at faster speeds. In general, you can expect throughput of 80+ Tokens per Second (TPS) for 8B models and a Time To First Token (TTFT) of 300 ms (depending on where you are in the world).</p><p>Our model instances now support larger context windows, like the full 128K context window for Llama 3.1 and 3.2. To give you full visibility into performance, we’ll also be publishing metrics like TTFT, TPS, Context Window, and pricing on models in our <a href="https://developers.cloudflare.com/workers-ai/models/"><u>catalog</u></a>, so you know exactly what to expect.</p><p>We’re committed to bringing the best of open-source models to our platform, and that includes Meta’s release of the new Llama 3.2 collection of models. As a Meta launch partner, we were excited to have Day 0 support for the 11B vision model, as well as the 1B and 3B text-only models on Workers AI.</p><p>For more details on how we made Workers AI fast, take a look at our <a href="https://blog.cloudflare.com/making-workers-ai-faster"><u>technical blog post</u></a>, where we share a novel method for KV cache compression (it’s open-source!), as well as details on speculative decoding, our new hardware design, and more.</p>
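<p>As a rough, unofficial back-of-the-envelope check, the TTFT and TPS figures above combine into an estimate of end-to-end response time. The helper below is an illustrative sketch, not a Cloudflare API; real latency varies by location, load, and model.</p>

```python
# Estimate end-to-end LLM response time from the quoted figures:
# ~300 ms Time To First Token, then ~80 tokens/second of decoding
# for 8B models. Illustrative only; actual numbers vary.

def estimate_response_seconds(output_tokens: int,
                              ttft_ms: float = 300.0,
                              tokens_per_second: float = 80.0) -> float:
    """Time to first token plus steady-state decode time."""
    return ttft_ms / 1000.0 + output_tokens / tokens_per_second

# A 400-token answer: 0.3 s + 400 / 80 s = 5.3 s end to end.
```

<p>In practice, streaming means the user starts reading after the first token, so perceived latency is closer to the TTFT than to the total.</p>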
    <div>
      <h3>Greater model flexibility</h3>
      <a href="#greater-model-flexibility">
        
      </a>
    </div>
    <p>With our commitment to helping you run more powerful models faster, we are also expanding the breadth of models you can run on Workers AI with our Run Any* Model feature. Until now, we have manually curated and added only the most popular open source models to Workers AI. Now, we are opening up our catalog to the public, giving you the flexibility to choose from a broader selection of models. At the start, we will support models that are compatible with our GPUs and inference stack (hence the asterisk in Run Any* Model). We’re launching this feature in closed beta, and if you’d like to try it out, please fill out the <a href="https://forms.gle/h7FcaTF4Zo5dzNb68"><u>form</u></a> so we can grant you access.</p><p>The Workers AI model catalog will now be split into two parts: a static catalog and a dynamic catalog. Models in the static catalog will remain curated by Cloudflare and will include the most popular open source models with guarantees on availability and speed (the models listed above). These models will always be kept warm in our network, ensuring you don’t experience cold starts. The usage and pricing model remains serverless: you will only be charged for requests to the model, not for cold start times.</p><p>Models launched via Run Any* Model will make up the dynamic catalog. If a model is public, users can share an instance of that model. In the future, we will allow users to launch private instances of models as well.</p><p>This is just the first step towards running your own custom or private models on Workers AI. While we have already been supporting private models for select customers, we are working on making this capability available to everyone in the near future.</p>
    <div>
      <h3>New Workers AI pricing</h3>
      <a href="#new-workers-ai-pricing">
        
      </a>
    </div>
    <p>We launched Workers AI during Birthday Week 2023 with the concept of “neurons” for pricing. Neurons were intended to simplify the unit of measure across various models on our platform, including text, image, audio, and more. However, over the past year, we have listened to your feedback and heard that neurons were difficult to grasp and challenging to compare with other providers. Additionally, the industry has matured, and new pricing standards have materialized. As such, we’re excited to announce that we will be moving towards unit-based pricing and saying goodbye to neurons.</p><p>Moving forward, Workers AI will be priced based on model task, size, and units. LLMs will be priced based on the model size (parameters) and input/output tokens. Image generation models will be priced based on the output image resolution and the number of steps. Embeddings models will be priced based on input tokens. Speech-to-text models will be priced on seconds of audio input. </p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Model Task</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Units</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Model Size</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Pricing</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="5">
                        <p><span><span>LLMs (incl. Vision models)</span></span></p>
                    </td>
                    <td rowspan="5">
                        <p><span><span>Tokens in/out (blended)</span></span></p>
                    </td>
                    <td>
                        <p><span><span>&lt;= 3B parameters</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.10 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>3.1B - 8B</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.15 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>8.1B - 20B</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.20 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>20.1B - 40B</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.50 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>40.1B+</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.75 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="2">
                        <p><span><span>Embeddings</span></span></p>
                    </td>
                    <td rowspan="2">
                        <p><span><span>Tokens in</span></span></p>
                    </td>
                    <td>
                        <p><span><span>&lt;= 150M parameters</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.008 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>151M+ parameters</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.015 per Million Tokens</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Speech-to-text</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Audio seconds in</span></span></p>
                    </td>
                    <td>
                        <p><span><span>N/A</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.0039 per minute of audio input</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Image Size</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Model Type</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Steps</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Price</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="2">
                        <p><span><span>&lt;=256x256</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Standard</span></span></p>
                    </td>
                    <td>
                        <p><span><span>25</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.00125 per 25 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Fast</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.00025 per 5 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="2">
                        <p><span><span>&lt;=512x512</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Standard</span></span></p>
                    </td>
                    <td>
                        <p><span><span>25</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.0025 per 25 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Fast</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.0005 per 5 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="2">
                        <p><span><span>&lt;=1024x1024</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Standard</span></span></p>
                    </td>
                    <td>
                        <p><span><span>25</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.005 per 25 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Fast</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.001 per 5 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td rowspan="2">
                        <p><span><span>&lt;=2048x2048</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Standard</span></span></p>
                    </td>
                    <td>
                        <p><span><span>25</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.01 per 25 steps</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Fast</span></span></p>
                    </td>
                    <td>
                        <p><span><span>5</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.002 per 5 steps</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>We paused graduating models and announcing pricing for beta models over the past few months as we prepared for this new pricing change. We’ll be graduating all models to this new pricing, and billing will take effect on October 1, 2024.</p><p>Our free tier has been revamped to fit these new metrics, and includes an allotment of usage across all the task types, detailed below.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Model</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Free tier size</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Text Generation - LLM</span></span></p>
                    </td>
                    <td>
                        <p><span><span>10,000 tokens a day across any model size</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Embeddings</span></span></p>
                    </td>
                    <td>
                        <p><span><span>10,000 tokens a day across any model size</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Images</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Sum of 250 steps, up to 1024x1024 resolution</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Whisper</span></span></p>
                    </td>
                    <td>
                        <p><span><span>10 minutes of audio a day</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div>
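<p>To make the new unit-based pricing concrete, here is a small estimator for the LLM tiers in the table above. It is a hypothetical helper for back-of-the-envelope math, not an official billing tool; see the Workers AI docs for authoritative pricing.</p>

```python
# LLM pricing tiers from the table above: USD per million blended
# input/output tokens, keyed by model size in billions of parameters.
# Illustrative sketch; actual billing is handled by Cloudflare.

LLM_PRICE_TIERS = [  # (max_params_billions, usd_per_million_tokens)
    (3.0, 0.10),
    (8.0, 0.15),
    (20.0, 0.20),
    (40.0, 0.50),
    (float("inf"), 0.75),
]

def llm_cost_usd(params_billions: float, total_tokens: int) -> float:
    """Estimated cost for a blended token volume on a model of a given size."""
    for max_params, price in LLM_PRICE_TIERS:
        if params_billions <= max_params:
            return total_tokens / 1_000_000 * price
    raise ValueError("unreachable: last tier is unbounded")

# A 70B model over 2 million blended tokens: 2 * $0.75 = $1.50.
```

<p>Note how the tier jumps matter more than raw token counts: the same two million tokens cost ten cents more on an 8B model than on a 3B one, but a dollar and a half on a 70B model.</p>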
    <div>
      <h3>Optimizing AI workflows with AI Gateway</h3>
      <a href="#optimizing-ai-workflows-with-ai-gateway">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6sLY6zUP6vDdnk1FNJfBBe/9a9e8df1f608b1540175302300ae9bc0/image7.png" />
          </figure><p><a href="https://developers.cloudflare.com/ai-gateway/"><u>AI Gateway</u></a> is designed to help developers and organizations building AI applications better monitor, control, and optimize their AI usage, and thanks to our users, AI Gateway has reached an incredible milestone — over 2 billion requests proxied by September 2024, less than a year after its inception. But we are not stopping there.</p><p><b>Persistent logs (open beta)</b></p><p><a href="https://developers.cloudflare.com/ai-gateway/observability/logging/"><u>Persistent logs</u></a> allow developers to store and analyze user prompts and model responses for extended periods, up to 10 million logs per gateway. Each request made through AI Gateway will create a log. With a log, you can see details of a request, including timestamp, request status, model, and provider.</p><p>We have revamped our logging interface to offer more detailed insights, including cost and duration. Users can now annotate logs with human feedback using thumbs up and thumbs down. Lastly, you can now filter, search, and tag logs with <a href="https://developers.cloudflare.com/ai-gateway/configuration/custom-metadata/"><u>custom metadata</u></a> to further streamline analysis directly within AI Gateway.</p>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/18OovOZzlAkoKvMIgFJ1kR/dbb6b809fb063b2d918b2355cbf11ea3/image1.png" />
          </figure><p>Persistent logs are available to use on <a href="https://developers.cloudflare.com/ai-gateway/pricing/"><u>all plans</u></a>, with a free allocation for both free and paid plans. On the Workers Free plan, users can store up to 100,000 logs total across all gateways at no charge. For those needing more storage, upgrading to the Workers Paid plan will give you a higher free allocation — 200,000 logs stored total. Any additional logs beyond those limits will be available at $8 per 100,000 logs stored per month, giving you the flexibility to store logs for your preferred duration and do more with valuable data. Billing for this feature will be implemented when the feature reaches General Availability, and we’ll provide plenty of advance notice.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>Workers Free</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Workers Paid</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Enterprise</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Included Volume</span></span></p>
                    </td>
                    <td>
                        <p><span><span>100,000 logs stored (total)</span></span></p>
                    </td>
                    <td>
                        <p><span><span>200,000 logs stored (total)</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Additional Logs</span></span></p>
                    </td>
                    <td>
                        <p><span><span>N/A</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$8 per 100,000 logs stored per month</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p><b>Export logs with Logpush</b></p><p>For users looking to export their logs, AI Gateway now supports log export via <a href="https://developers.cloudflare.com/ai-gateway/observability/logging/logpush"><u>Logpush</u></a>. With Logpush, you can automatically push logs out of AI Gateway into your preferred storage provider, including Cloudflare R2, Amazon S3, Google Cloud Storage, and more. This can be especially useful for compliance or advanced analysis outside the platform. Logpush follows its <a href="https://developers.cloudflare.com/workers/observability/logging/logpush/"><u>existing pricing model</u></a> and will be available to all users on a paid plan.</p>
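<p>For a sense of how the stored-log pricing above adds up, here is a small estimator. It is a hypothetical sketch: the 100,000-log billing increment is an assumption on our part, and billing only begins once the feature reaches General Availability.</p>

```python
# Sketch of the persistent-log storage pricing described above:
# a free allocation per plan, then $8 per 100,000 logs stored per month.
# The ceil-to-100k increment is an assumption, not confirmed billing behavior.
import math

FREE_LOG_ALLOCATION = {"free": 100_000, "paid": 200_000}

def monthly_log_storage_usd(plan: str, logs_stored: int) -> float:
    overage = max(0, logs_stored - FREE_LOG_ALLOCATION[plan])
    if plan == "free" and overage:
        # Per the table above, additional stored logs are not available
        # on the Workers Free plan.
        raise ValueError("additional stored logs require a paid plan")
    return math.ceil(overage / 100_000) * 8.0

# e.g. 500,000 logs on Workers Paid: 300,000 over the allocation -> $24/month.
```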
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6uazGQNezknc5P9kVyr9gr/1da3b3897c9f6376ea4983b2d267b405/image2.png" />
</figure><p><b>AI evaluations</b></p><p>We are also taking our first step towards comprehensive <a href="https://developers.cloudflare.com/ai-gateway/evaluations/"><u>AI evaluations</u></a>, starting with evaluations based on human-in-the-loop feedback (now in open beta). Users can create datasets from logs to score and evaluate model performance, speed, and cost, initially focused on LLMs. Evaluations allow developers to gain a better understanding of how their applications are performing, ensuring better accuracy, reliability, and customer satisfaction. We’ve added support for <a href="https://developers.cloudflare.com/ai-gateway/observability/costs/"><u>cost analysis</u></a> across many new models and providers to help developers make informed decisions, including the ability to add <a href="https://developers.cloudflare.com/ai-gateway/configuration/custom-costs/"><u>custom costs</u></a>. Future enhancements will include automated scoring using LLMs, comparisons across multiple models, and prompt evaluations, helping developers decide what is best for their use case and ensuring their applications are both efficient and cost-effective.</p>
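<p>As a sketch of what human-in-the-loop evaluation over logs might look like, the snippet below aggregates thumbs-up/down feedback, cost, and duration across a set of logs. The log shape used here is hypothetical, not the actual AI Gateway log schema.</p>

```python
# Aggregate human feedback and basic metrics over a list of log dicts.
# The "feedback" / "cost" / "duration_ms" keys are illustrative assumptions.

def feedback_summary(logs):
    rated = [log for log in logs if log.get("feedback") in ("up", "down")]
    ups = sum(1 for log in rated if log["feedback"] == "up")
    return {
        "rated": len(rated),
        "thumbs_up_rate": ups / len(rated) if rated else None,
        "total_cost_usd": sum(log.get("cost", 0.0) for log in logs),
        "mean_duration_ms": sum(log.get("duration_ms", 0) for log in logs) / len(logs),
    }
```

<p>A summary like this is also the kind of baseline that future automated LLM scoring could be compared against.</p>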
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5dyhxoR6KEsM8uh371XnDN/5eab93923157fd59112ffdea14b3bb2f/image3.png" />
          </figure>
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/21DCTbhFEh7u4m1d0Tfgmn/2839e2ae7d226fdcc4086f108f5c9612/image6.png" />
          </figure>
    <div>
      <h3>Vectorize GA</h3>
      <a href="#vectorize-ga">
        
      </a>
    </div>
    
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/DjhP2xqOhPMP7oQK5Mdpa/c216167d0a204f344afd2ff7393d97f9/image4.png" />
</figure><p>We've completely redesigned Vectorize since our <a href="https://blog.cloudflare.com/vectorize-vector-database-open-beta/"><u>initial announcement</u></a> in 2023 to better serve customer needs. Vectorize (v2) now supports <b>indexes of up to 5 million vectors</b> (up from 200,000), <b>delivers faster queries</b> (median latency is down 95%, from 500 ms to 30 ms), and <b>returns up to 100 results per query</b> (up from 20). These improvements significantly enhance Vectorize's capacity, speed, and depth of results.</p><p>Note: if you got started on Vectorize before GA, a migration solution to ease the move from v1 to v2 will be available in early Q4 — stay tuned!</p>
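<p>Conceptually, a vector index query ranks stored vectors by similarity to the query vector and returns the closest K. The pure-Python sketch below shows the brute-force version of that operation; Vectorize itself uses approximate indexing to stay fast at millions of vectors, so this is an illustration, not its API.</p>

```python
# Brute-force cosine-similarity top-K search over (id, vector) pairs.
# Illustrates what a vector database query computes, not how Vectorize
# implements it at scale.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(index, query, k=10):
    """index: list of (id, vector) pairs; returns the k closest ids."""
    ranked = sorted(index, key=lambda item: cosine_similarity(item[1], query),
                    reverse=True)
    return [vec_id for vec_id, _ in ranked[:k]]
```

<p>The reason approximate indexes exist is visible here: the brute-force scan touches every stored vector per query, which does not scale to 5 million vectors at 30 ms medians.</p>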
    <div>
      <h3>New Vectorize pricing</h3>
      <a href="#new-vectorize-pricing">
        
      </a>
    </div>
    <p>Not only have we improved performance and scalability, but we've also made Vectorize one of the most cost-effective options on the market. We've reduced query prices by 75% and storage costs by 98%.</p><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td> </td>
                    <td>
                        <p><span><span><strong>New Vectorize pricing</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Old Vectorize pricing</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Price reduction</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Writes</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>Free</span></span></p>
                    </td>
                    <td>
                        <p><span><span>Free</span></span></p>
                    </td>
                    <td>
                        <p><span><span>n/a</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Query</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.01 per 1 million vector dimensions</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.04 per 1 million vector dimensions</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>75%</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span><strong>Storage</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span>$0.05 per 100 million vector dimensions</span></span></p>
                    </td>
                    <td>
                        <p><span><span>$4.00 per 100 million vector dimensions</span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>98%</strong></span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>You can learn more about our pricing in the <a href="https://developers.cloudflare.com/vectorize/platform/pricing/"><u>Vectorize docs</u></a>.</p><p><b>Vectorize free tier</b></p><p>There’s more good news: we’re introducing a free tier to Vectorize to make it easy to experiment with our full AI stack.</p><p>The free tier includes:</p><ul><li><p>30 million <b>queried</b> vector dimensions / month</p></li><li><p>5 million <b>stored</b> vector dimensions / month</p></li></ul>
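<p>A quick way to see how this pricing behaves: cost scales with vector dimensions, not vector counts alone. The estimator below is a hypothetical helper based on the table above (it ignores the free-tier allotments for simplicity); see the Vectorize docs for authoritative numbers.</p>

```python
# Estimate monthly Vectorize cost from the pricing above:
# $0.01 per 1M queried vector dimensions, $0.05 per 100M stored dimensions.
# Illustrative only; does not model the free-tier allotments.

def vectorize_monthly_usd(queried_dims: int, stored_dims: int) -> float:
    query_cost = queried_dims / 1_000_000 * 0.01
    storage_cost = stored_dims / 100_000_000 * 0.05
    return query_cost + storage_cost

# A 1M-vector index at 768 dimensions stores 768M dimensions; 10,000 queries
# with 768-dim query vectors consume 7.68M queried dimensions.
```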
    <div>
      <h3>How fast is Vectorize?</h3>
      <a href="#how-fast-is-vectorize">
        
      </a>
    </div>
    <p>To measure performance, we conducted benchmarking tests by executing a large number of vector similarity queries as quickly as possible. We measured both request latency and result precision. In this context, precision refers to the proportion of query results that match the known true-closest results across all benchmarked queries. This approach allows us to assess both the speed and accuracy of our vector similarity search capabilities. We benchmarked on the following datasets:</p><ul><li><p><a href="https://github.com/qdrant/vector-db-benchmark"><b><u>dbpedia-openai-1M-1536-angular</u></b></a>: 1 million vectors, 1536 dimensions, queried with cosine similarity at a top K of 10</p></li><li><p><a href="https://myscale.github.io/benchmark"><b><u>Laion-768-5m-ip</u></b></a>: 5 million vectors, 768 dimensions, queried with cosine similarity at a top K of 10</p><ul><li><p>We ran this again with the result-refinement pass skipped, to return approximate results faster</p></li></ul></li></ul><div>
    <figure>
        <table>
            <colgroup>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
                <col></col>
            </colgroup>
            <tbody>
                <tr>
                    <td>
                        <p><span><span><strong>Benchmark dataset</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>P50 (ms)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>P75 (ms)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>P90 (ms)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>P95 (ms)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Throughput (RPS)</strong></span></span></p>
                    </td>
                    <td>
                        <p><span><span><strong>Precision</strong></span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>dbpedia-openai-1M-1536-angular</span></span></p>
                    </td>
                    <td>
                        <p><span><span>31</span></span></p>
                    </td>
                    <td>
                        <p><span><span>56</span></span></p>
                    </td>
                    <td>
                        <p><span><span>159</span></span></p>
                    </td>
                    <td>
                        <p><span><span>380</span></span></p>
                    </td>
                    <td>
                        <p><span><span>343</span></span></p>
                    </td>
                    <td>
                        <p><span><span>95.4%</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Laion-768-5m-ip </span></span></p>
                    </td>
                    <td>
                        <p><span><span>81.5</span></span></p>
                    </td>
                    <td>
                        <p><span><span>91.7</span></span></p>
                    </td>
                    <td>
                        <p><span><span>105</span></span></p>
                    </td>
                    <td>
                        <p><span><span>123</span></span></p>
                    </td>
                    <td>
                        <p><span><span>623</span></span></p>
                    </td>
                    <td>
                        <p><span><span>95.5%</span></span></p>
                    </td>
                </tr>
                <tr>
                    <td>
                        <p><span><span>Laion-768-5m-ip w/o refinement</span></span></p>
                    </td>
                    <td>
                        <p><span><span>14.7</span></span></p>
                    </td>
                    <td>
                        <p><span><span>19.3</span></span></p>
                    </td>
                    <td>
                        <p><span><span>24.3</span></span></p>
                    </td>
                    <td>
                        <p><span><span>27.3</span></span></p>
                    </td>
                    <td>
                        <p><span><span>698</span></span></p>
                    </td>
                    <td>
                        <p><span><span>78.9%</span></span></p>
                    </td>
                </tr>
            </tbody>
        </table>
    </figure>
</div><p>These benchmarks were conducted using a standard Vectorize v2 index, queried with a concurrency of 300 via a Cloudflare Worker binding. The reported latencies reflect those observed by the Worker binding querying the Vectorize index on warm caches, simulating the performance of an existing application with sustained usage.</p><p>Beyond Vectorize's fast query speeds, we believe the combination of Vectorize and Workers AI offers an unbeatable solution for delivering optimal AI application experiences. By running Vectorize close to the source of inference and user interaction, rather than combining AI and vector database solutions across providers, we can significantly minimize end-to-end latency.</p><p>With these improvements, we're excited to announce the general availability of the new Vectorize, which is more powerful, faster, and more cost-effective than ever before.</p>
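<p>For readers curious how a precision figure like the ones above is typically derived, here is a minimal sketch (our own illustration, not the exact benchmark harness): compare each query's returned IDs against the known true-closest set, then average the overlap across all queries.</p>

```javascript
// Fraction of returned results that appear in the ground-truth top-K,
// averaged across all benchmarked queries.
function precisionAtK(results, groundTruth) {
  let hits = 0;
  let total = 0;
  for (let i = 0; i < results.length; i++) {
    const truth = new Set(groundTruth[i]);
    for (const id of results[i]) {
      if (truth.has(id)) hits++;
    }
    total += groundTruth[i].length;
  }
  return hits / total;
}

// One query returning IDs [1, 2, 3] against true-closest [1, 2, 4]:
console.log(precisionAtK([[1, 2, 3]], [[1, 2, 4]])); // → 0.666...
```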
    <div>
      <h3>Tying it all together: the AI platform for all your inference needs</h3>
      <a href="#tying-it-all-together-the-ai-platform-for-all-your-inference-needs">
        
      </a>
    </div>
    <p>Over the past year, we’ve been committed to building powerful AI products that enable users to build on us. While we are making advancements on each of these individual products, our larger vision is to provide a seamless, integrated experience across our portfolio.</p><p>With Workers AI and AI Gateway, users can easily enable analytics, logging, caching, and rate limiting to their AI application by connecting to AI Gateway directly through a binding in the Workers AI request. We imagine a future where AI Gateway can not only help you create and save datasets to use for fine-tuning your own models with Workers AI, but also seamlessly redeploy them on the same platform. A great AI experience is not just about speed, but also accuracy. While Workers AI ensures fast performance, using it in combination with AI Gateway allows you to evaluate and optimize that performance by monitoring model accuracy and catching issues, like hallucinations or incorrect formats. With AI Gateway, users can test out whether switching to new models in the Workers AI model catalog will deliver more accurate performance and a better user experience.</p><p>In the future, we’ll also be working on tighter integrations between Vectorize and Workers AI, where you can automatically supply context or remember past conversations in an inference call. 
This cuts down on the orchestration needed to run a <a href="https://www.cloudflare.com/learning/ai/retrieval-augmented-generation-rag/">RAG application</a>, where we can automatically help you make queries to vector databases.</p><p>If we put the three products together, we imagine a world where you can build AI apps with <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">full observability</a> (traces with AI Gateway) and see how the retrieval (Vectorize) and generation (Workers AI) components are working together, enabling you to diagnose issues and improve performance.</p><p>This Birthday Week, we’ve been focused on making sure our individual products are best-in-class, but we’re continuing to invest in building a holistic AI platform, not only within our AI portfolio but also across the larger Developer Platform products. Our goal is to make Cloudflare the simplest, fastest, most powerful place for you to build full-stack AI experiences with all the batteries included.</p>
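<p>To make concrete the orchestration that a tighter integration would remove, here is a sketch of a typical hand-rolled RAG flow today (all three client functions are hypothetical stand-ins for your embedding model, vector index, and LLM):</p>

```javascript
// Hand-rolled RAG orchestration: embed the question, retrieve nearby
// documents from a vector index, then prompt the model with that context.
async function ragAnswer(question, { embed, vectorSearch, generate }) {
  const queryVector = await embed(question);
  const matches = await vectorSearch(queryVector, { topK: 3 });
  const context = matches.map((m) => m.text).join("\n");
  return generate(`Context:\n${context}\n\nQuestion: ${question}`);
}
```

<p>A deeper Vectorize and Workers AI integration could collapse these three round trips into a single inference call.</p>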
          <figure>
          <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6nXZn8qwK1tCVVMFbYFf7n/fe538bed97b00ef1b74a05dfd86eb496/image5.png" />
          </figure><p>We’re excited for you to try out all these new features! Take a look at our <a href="https://developers.cloudflare.com/products/?product-group=AI"><u>updated developer docs </u></a>on how to get started and the Cloudflare dashboard to interact with your account.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Vectorize]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Workers AI]]></category>
            <guid isPermaLink="false">2lS9TcgZHa1fubO371mYiv</guid>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Kathy Liao</dc:creator>
            <dc:creator>Phil Wittig</dc:creator>
            <dc:creator>Meaghan Choi</dc:creator>
        </item>
        <item>
            <title><![CDATA[AI Gateway is generally available: a unified interface for managing and scaling your generative AI workloads]]></title>
            <link>https://blog.cloudflare.com/ai-gateway-is-generally-available/</link>
            <pubDate>Wed, 22 May 2024 13:00:17 GMT</pubDate>
            <description><![CDATA[ AI Gateway is an AI ops platform that provides speed, reliability, and observability for your AI applications. With a single line of code, you can unlock powerful features including rate limiting, custom caching, real-time logs, and aggregated analytics across multiple providers ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5GsB2wwIevC3G2m0PGOAhz/d9eaeea0933d269b39fcda70c22881b7/image4-3.png" />
            
            </figure><p>During Developer Week in April 2024, we announced General Availability of <a href="/workers-ai-ga-huggingface-loras-python-support">Workers AI</a>, and today, we are excited to announce that AI Gateway is Generally Available as well. Since its launch to beta <a href="/announcing-ai-gateway">in September 2023 during Birthday Week</a>, we’ve proxied over 500 million requests and are now prepared for you to use it in production.</p><p>AI Gateway is an AI ops platform that offers a unified interface for managing and scaling your generative AI workloads. At its core, it acts as a proxy between your service and your inference provider(s), regardless of where your model runs. With a single line of code, you can unlock a set of powerful features focused on performance, security, reliability, and observability – think of it as your <a href="https://www.cloudflare.com/learning/network-layer/what-is-the-control-plane/">control plane</a> for your AI ops. And this is just the beginning – we have a roadmap full of exciting features planned for the near future, making AI Gateway the tool for any organization looking to get more out of their AI workloads.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6M6hDWXdRH2rZETQK4UlPe/444269e8d23056252e9e17aa08cef333/image6-1.png" />
            
            </figure>
    <div>
      <h2>Why add a proxy and why Cloudflare?</h2>
      <a href="#why-add-a-proxy-and-why-cloudflare">
        
      </a>
    </div>
    <p>The AI space moves fast, and it seems like every day there is a new model, provider, or framework. Given this high rate of change, it’s hard to keep track, especially if you’re using more than one model or provider. And that’s one of the driving factors behind launching AI Gateway – we want to provide you with a single consistent control plane for all your models and tools, even if they change tomorrow, and then again the day after that.</p><p>We've talked to a lot of developers and organizations building AI applications, and one thing is clear: they want more <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a>, control, and tooling around their AI ops. This is something many of the AI providers are lacking as they are deeply focused on model development and less so on platform features.</p><p>Why choose Cloudflare for your AI Gateway? Well, in some ways, it feels like a natural fit. We've spent the last 10+ years helping build a better Internet by running one of the largest global networks, helping customers around the world with performance, reliability, and security – Cloudflare is used as a <a href="https://www.cloudflare.com/learning/cdn/glossary/reverse-proxy/">reverse proxy</a> by nearly 20% of all websites. With our expertise, it felt like a natural progression – change one line of code, and we can help with observability, reliability, and control for your AI applications – all in one control plane – so that you can get back to building.</p><p>Here is that one line code change using the OpenAI JS SDK. And check out <a href="https://developers.cloudflare.com/ai-gateway/providers/">our docs</a> to reference other providers, SDKs, and languages.</p>
            <pre><code>import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'my api key', // defaults to process.env["OPENAI_API_KEY"]
  baseURL: "https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_slug}/openai"
});</code></pre>
            <p></p>
    <div>
      <h2>What’s included today?</h2>
      <a href="#whats-included-today">
        
      </a>
    </div>
    <p>After talking to customers, it was clear that we needed to focus on some foundational features before moving on to some of the more advanced ones. While we're really excited about what’s to come, here are the key features available in GA today:</p><p><b>Analytics</b>: Aggregate metrics from across multiple providers. See traffic patterns and usage including the number of requests, tokens, and costs over time.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3gFXixQSV6rVUM9V6ew1W4/db974469f45415b7ae0f0af45c30e7f3/pasted-image-0--10-.png" />
            
            </figure><p><b>Real-time logs:</b> Gain insight into requests and errors as you build.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/31KebDSmQfi9lW87mh3oZy/541a90575637dc860e1ef28972958ed4/image8-1.png" />
            
            </figure><p><b>Caching:</b> Enable custom caching rules and use Cloudflare’s cache for repeat requests instead of hitting the original model provider API, helping you save on cost and latency.</p>
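<p>Conceptually, this works like any response cache keyed on the request: an identical prompt returns the stored answer instead of triggering a new provider call. A toy sketch (our illustration, not AI Gateway's actual implementation):</p>

```javascript
// Toy inference cache: key on model + prompt so repeat requests skip the provider.
const cache = new Map();

async function cachedInference(request, runModel) {
  const key = JSON.stringify([request.model, request.prompt]);
  if (cache.has(key)) {
    return { ...cache.get(key), cached: true };
  }
  const result = await runModel(request); // the expensive provider call
  cache.set(key, result);
  return { ...result, cached: false };
}
```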
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2bZw1HaJUP48B3MbXiATpx/0e7ee230a8b1c62e782efd466177fb5f/image1-10.png" />
            
            </figure><p><b>Rate limiting:</b> Control how your application scales by limiting the number of requests your application receives to control costs or prevent abuse.</p>
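<p>Under the hood, rate limiting of this kind can be as simple as counting requests per client within a time window. A minimal fixed-window sketch (illustrative only; the gateway's actual rules and identifiers are configured per application):</p>

```javascript
// Fixed-window rate limiter: allow up to `limit` requests per client
// within each window of `windowMs` milliseconds.
function makeRateLimiter(limit, windowMs) {
  const windows = new Map(); // clientId -> { start, count }
  return function allow(clientId, now = Date.now()) {
    const w = windows.get(clientId);
    if (!w || now - w.start >= windowMs) {
      windows.set(clientId, { start: now, count: 1 });
      return true;
    }
    w.count++;
    return w.count <= limit;
  };
}

const allow = makeRateLimiter(2, 60_000);
console.log(allow("user1", 0));      // true
console.log(allow("user1", 10));     // true
console.log(allow("user1", 20));     // false (3rd request in the same window)
console.log(allow("user1", 60_000)); // true (a new window has started)
```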
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4icXzN7Z8VuZw17KdzKl2X/60466c7cbe3869c14aa7a7ad90c40159/image5-9.png" />
            
            </figure><p><b>Support for your favorite providers:</b> AI Gateway now natively supports Workers AI plus 10 of the most popular providers, including <a href="https://x.com/CloudflareDev/status/1791204770394648901">Groq and Cohere</a> as of mid-May 2024.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1ORhtmLzCTOKLVrCyhyEZK/53be2a20c4d6bd7dd3cdcd2657ef6455/image2-10.png" />
            
            </figure><p><b>Universal endpoint:</b> In case of errors, improve resilience by defining <a href="https://developers.cloudflare.com/ai-gateway/configuration/fallbacks/">request fallbacks</a> to another model or inference provider.</p>
            <pre><code>curl https://gateway.ai.cloudflare.com/v1/{account_id}/{gateway_slug} -X POST \
  --header 'Content-Type: application/json' \
  --data '[
  {
    "provider": "workers-ai",
    "endpoint": "@cf/meta/llama-2-7b-chat-int8",
    "headers": {
      "Authorization": "Bearer {cloudflare_token}",
      "Content-Type": "application/json"
    },
    "query": {
      "messages": [
        {
          "role": "system",
          "content": "You are a friendly assistant"
        },
        {
          "role": "user",
          "content": "What is Cloudflare?"
        }
      ]
    }
  },
  {
    "provider": "openai",
    "endpoint": "chat/completions",
    "headers": {
      "Authorization": "Bearer {open_ai_token}",
      "Content-Type": "application/json"
    },
    "query": {
      "model": "gpt-3.5-turbo",
      "stream": true,
      "messages": [
        {
          "role": "user",
          "content": "What is Cloudflare?"
        }
      ]
    }
  }
]'</code></pre>
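<p>Conceptually, the universal endpoint walks that array in order and returns the first successful response. A simplified sketch of that behavior (our illustration, not the gateway's source):</p>

```javascript
// Try each provider call in order; return the first success, or rethrow
// the last error if every provider fails.
async function withFallbacks(providerCalls) {
  let lastError;
  for (const call of providerCalls) {
    try {
      return await call();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```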
            <p></p>
    <div>
      <h2>What’s coming up?</h2>
      <a href="#whats-coming-up">
        
      </a>
    </div>
    <p>We've gotten a lot of feedback from developers, and there are some obvious things on the horizon such as persistent logs and custom metadata – foundational features that will help unlock the real magic down the road.</p><p>But let's take a step back for a moment and share our vision. At Cloudflare, we believe our platform is much more powerful as a unified whole than as a collection of individual parts. This mindset applied to our AI products means that they should be easy to use, combine, and run in harmony.</p><p>Let's imagine the following journey. You initially onboard onto Workers AI to run inference with the latest open source models. Next, you enable AI Gateway to gain better visibility and control, and start storing persistent logs. Then you want to start tuning your inference results, so you leverage your persistent logs, our prompt management tools, and our built-in eval functionality. Now you're making analytical decisions to improve your inference results. With each data-driven improvement, you want more. So you implement our feedback API, which helps annotate inputs/outputs, in essence building a structured dataset. At this point, you are one step away from a one-click fine-tune that can be deployed instantly to our global network, and it doesn't stop there. As you continue to collect logs and feedback, you can continuously rebuild your fine-tune adapters in order to deliver the best results to your end users.</p><p>This is all just an aspirational story at this point, but this is how we envision the future of AI Gateway and our AI suite as a whole. You should be able to start with the most basic setup and gradually progress into more advanced workflows, all without leaving <a href="https://www.cloudflare.com/ai-solution/">Cloudflare’s AI platform</a>. In the end, it might not look exactly as described above, but you can be sure that we are committed to providing the best AI ops tools to help make Cloudflare the best place for AI.</p>
    <div>
      <h2>How do I get started?</h2>
      <a href="#how-do-i-get-started">
        
      </a>
    </div>
    <p>AI Gateway is available to use today on all plans. If you haven’t yet used AI Gateway, check out our <a href="https://developers.cloudflare.com/ai-gateway/">developer documentation</a> and get started now. AI Gateway’s core features available today are offered for free, and all it takes is a Cloudflare account and one line of code to get started. In the future, more premium features, such as persistent logging and secrets management, will be available subject to fees. If you have any questions, reach out on our <a href="http://discord.cloudflare.com">Discord channel</a>.</p>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Developers]]></category>
            <category><![CDATA[Open Source]]></category>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Connectivity Cloud]]></category>
            <category><![CDATA[AI Gateway]]></category>
            <category><![CDATA[AI]]></category>
            <guid isPermaLink="false">3EErej51Xbc8xOYpGL8ggy</guid>
            <dc:creator>Kathy Liao</dc:creator>
            <dc:creator>Michelle Chen</dc:creator>
            <dc:creator>Phil Wittig</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers AI Update: Stable Diffusion, Code Llama + Workers AI in 100 cities]]></title>
            <link>https://blog.cloudflare.com/workers-ai-update-stable-diffusion-code-llama-workers-ai-in-100-cities/</link>
            <pubDate>Thu, 23 Nov 2023 14:00:30 GMT</pubDate>
            <description><![CDATA[ We're thrilled to announce that Stable Diffusion and Code Llama are now available as part of Workers AI, running in over 100 cities across Cloudflare’s global network. ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/4JrMurER3IHj2JNpysSWao/c4b23bad27e76766855e4f1965271335/image3-1.png" />
            
            </figure><p>Thanksgiving might be a US holiday (and one of our favorites — we have many things to be thankful for!). Many people get excited about the food or deals, but for me as a developer, it’s also always been a nice quiet holiday to hack around and play with new tech. So in that spirit, we're thrilled to announce that <b>Stable Diffusion</b> and <b>Code Llama</b> are now available as part of Workers AI, running in over 100 cities across Cloudflare’s global network.</p><p>As many AI fans are aware, Stable Diffusion is the groundbreaking image-generation model that can conjure images based on text input. Code Llama is a powerful language model optimized for generating programming code.</p><p>For more of the fun details, read on, or head over to the <a href="https://developers.cloudflare.com/workers-ai/models/">developer docs</a> to get started!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5AzwupJ50F4zr9X3DrfIIA/5aa6748535bd74e863a66e9161d1d86e/image2-3.png" />
            
            </figure><p><i>Generated by Stable Diffusion - “Happy llama in an orange cloud celebrating thanksgiving”</i></p>
    <div>
      <h3>Generating images with Stable Diffusion</h3>
      <a href="#generating-images-with-stable-diffusion">
        
      </a>
    </div>
    <p>Stability AI launched Stable Diffusion XL 1.0 (SDXL) this past summer. You can read more about it <a href="https://stability.ai/news/stable-diffusion-public-release">here</a>, but we’ll briefly mention some really cool aspects.</p><p>First off, “Distinct images can be prompted without having any particular ‘feel’ imparted by the model, ensuring absolute freedom of style”. This is great as it gives you a blank canvas as a developer, or should I say artist.</p><p>Additionally, it’s “particularly well-tuned for vibrant and accurate colors, with better contrast, lighting, and shadows than its predecessor, all in native 1024x1024 resolution.” With the advancements in today's cameras (or phone cameras), quality images are table stakes, and it’s nice to see these models keeping up.</p><p>Getting started with Workers AI + SDXL (via <a href="https://developers.cloudflare.com/workers-ai/models/text-to-image/">API</a>) couldn’t be easier. Check out the example below:</p>
            <pre><code>curl -X POST \
"https://api.cloudflare.com/client/v4/accounts/{account-id}/ai/run/@cf/stabilityai/stable-diffusion-xl-base-1.0" \
-H "Authorization: Bearer {api-token}" \
-H "Content-Type:application/json" \
-d '{ "prompt": "A happy llama running through an orange cloud" }' \
-o 'happy-llama.png'</code></pre>
            <p>And here is our happy llama:</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/6JaekijEHB7yBo5UgRQqUf/ca56efa08967917f6e1cd525ca42c531/image1-9.png" />
            
            </figure><p>You can also do this in a <a href="https://developers.cloudflare.com/workers-ai/models/text-to-image/">Worker</a>:</p>
            <pre><code>import { Ai } from '@cloudflare/ai';

export default {
  async fetch(request, env) {
    const ai = new Ai(env.AI);
    const response = await ai.run('@cf/stabilityai/stable-diffusion-xl-base-1.0', {
      prompt: 'A happy llama running through an orange cloud'
    });
    return new Response(response, {
      headers: {
        "content-type": "image/png",
      },
    });
  }
}</code></pre>
            
    <div>
      <h3>Generate code with Code Llama</h3>
      <a href="#generate-code-with-code-llama">
        
      </a>
    </div>
    <p>If you’re not into generating art, then maybe you can have some fun with code. Code Llama, which was also released this past summer by Meta, is built on top of Llama 2, but optimized to understand and generate code in many popular languages (Python, C++, Java, PHP, TypeScript/JavaScript, C#, and Bash).</p><p>You can use it to help you generate code for a tough problem you're faced with, or you can also use it to help you understand code — perfect if you are picking up an existing, unknown codebase.</p><p>And just like all the other models, generating code with Workers AI is really easy.</p><p>From a Worker:</p>
            <pre><code>import { Ai } from '@cloudflare/ai';

// Enable env.AI for your worker by adding the ai binding to your wrangler.toml file:
// [ai]
// binding = "AI"

export default {
  async fetch(request, env) {
    const ai = new Ai(env.AI);
    const response = await ai.run('@hf/thebloke/codellama-7b-instruct-awq', {
      prompt: 'In JavaScript, define a priority queue class. The constructor must take a function that is called on each object to determine its priority.'
    });
    return Response.json(response);
  }
}</code></pre>
            <p>Using curl:</p>
            <pre><code>curl -X POST \
"https://api.cloudflare.com/client/v4/accounts/{account-id}/ai/run/@hf/thebloke/codellama-7b-instruct-awq" \
-H "Authorization: Bearer {api-token}" \
-H "Content-Type: application/json" \
-d '{ "prompt": "In JavaScript, define a priority queue class. The constructor must take a function that is called on each object to determine its priority." }'</code></pre>
            <p>Using python:</p>
            <pre><code>#!/usr/bin/env python3

import json
import os
import requests

ACCOUNT_ID = os.environ["ACCOUNT_ID"]
API_TOKEN = os.environ["API_TOKEN"]
MODEL = "@hf/thebloke/codellama-7b-instruct-awq"

prompt = """In JavaScript, define a priority queue class. The constructor must take a function that is called on each object to determine its priority."""

url = f"https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/{MODEL}"
headers = {
    "Authorization": f"Bearer {API_TOKEN}"
}
payload = json.dumps({
    "prompt": prompt
})

r = requests.post(url, data=payload, headers=headers)

j = r.json()
if "result" in j and "response" in j["result"]:
    print(j["result"]["response"])
else:
    print(json.dumps(j, indent=2))</code></pre>
            
    <div>
      <h3>Workers AI inference now available in 100 cities</h3>
      <a href="#workers-ai-inference-now-available-in-100-cities">
        
      </a>
    </div>
    <p>When we <a href="/workers-ai/">first released Workers AI</a> back in September, we launched with inference running in seven cities, but set an ambitious target to support Workers AI inference in 100 cities by the end of the year, and nearly everywhere by the end of 2024. We’re proud to say that we’re ahead of schedule and now support Workers AI inference in 100 cities, thanks to some awesome, hard-working folks across multiple teams. For developers, this means that your inference tasks are more likely to run near your users, and it will only continue to improve over the next 18 months.</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/7evaAyyQ5hOg4LfGD07jZo/569bc07580bb009db8d8981a7c63f8e6/image4-1.png" />
            
            </figure>
    <div>
      <h3>Mistral, in case you missed it</h3>
      <a href="#mistral-in-case-you-missed-it">
        
      </a>
    </div>
    <p>Lastly, in case you didn’t see our other update earlier this week, we also launched Mistral 7B, a super capable and powerful language model that packs a punch for its size. You can read more about it <a href="/workers-ai-update-hello-mistral-7b/">here</a>, or start building with it <a href="https://developers.cloudflare.com/workers-ai/models/text-generation/">here</a>.</p>
    <div>
      <h3>Go forth and build something fun</h3>
      <a href="#go-forth-and-build-something-fun">
        
      </a>
    </div>
    <p>Today we gave you images (art), code, and Workers AI inference running in more cities. Please go have fun, build something cool, and if you need help, want to give feedback, or want to share what you’re building just pop into our <a href="https://discord.com/invite/cloudflaredev">Developer Discord</a>!</p>
    <div>
      <h3>Happy Thanksgiving!</h3>
      <a href="#happy-thanksgiving">
        
      </a>
    </div>
    <p>Additionally, if you’re just getting started with AI, we’ll be offering a series of developer workshops ranging from understanding the basics such as embeddings, models and vector databases, getting started with LLMs on Workers AI and more. We encourage you to <a href="https://www.cloudflare.com/lp/ai-developer-workshop/">sign up here</a>.</p> ]]></content:encoded>
            <category><![CDATA[Workers AI]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <guid isPermaLink="false">6q0MD0Y6GiJ9b4fkZt88xA</guid>
            <dc:creator>Phil Wittig</dc:creator>
        </item>
        <item>
            <title><![CDATA[Workers AI: serverless GPU-powered inference on Cloudflare’s global network]]></title>
            <link>https://blog.cloudflare.com/workers-ai/</link>
            <pubDate>Wed, 27 Sep 2023 13:00:47 GMT</pubDate>
            <description><![CDATA[ We are excited to launch Workers AI - an AI inference as a service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs ]]></description>
            <content:encoded><![CDATA[ <p></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/1kH38tclcLOGwYv40vTHNy/300956275074e73dd480a93898d43c08/image1-29.png" />
            
            </figure><p>If you're anywhere near the developer community, it's almost impossible to avoid the impact that AI’s recent advancements have had on the ecosystem. Whether you're using <a href="https://www.cloudflare.com/learning/ai/what-is-artificial-intelligence/">AI</a> in your workflow to improve productivity, or you’re shipping AI-based features to your users, it’s everywhere. The focus on AI improvements is extraordinary, and we’re super excited about the opportunities that lie ahead, but it's not enough.</p><p>Not too long ago, if you wanted to leverage the power of AI, you needed to know the ins and outs of <a href="https://www.cloudflare.com/learning/ai/what-is-machine-learning/">machine learning</a>, and be able to manage the infrastructure to power it.</p><p>As a developer platform with over one million active developers, we believe there is so much potential yet to be unlocked, so we’re changing the way AI is delivered to developers. Many of the current solutions, while powerful, are based on closed, proprietary models and don't address privacy needs that developers and users demand. Alternatively, the open source scene is exploding with powerful models, but they’re simply not accessible enough to every developer. Imagine being able to run a model, from your code, wherever it’s <a href="https://www.cloudflare.com/developer-platform/solutions/hosting/">hosted</a>, and never needing to find GPUs or deal with setting up the infrastructure to support it.</p><p>That's why we are excited to launch Workers AI - an AI inference-as-a-service platform, empowering developers to run AI models with just a few lines of code, all powered by our global network of GPUs. It's open and accessible, serverless, privacy-focused, runs near your users, pay-as-you-go, and it's built from the ground up for a best-in-class developer experience.</p>
    <div>
      <h2>Workers AI - making inference <b>just work</b></h2>
      <a href="#workers-ai-making-inference-just-work">
        
      </a>
    </div>
    <p>We’re launching Workers AI to put AI inference in the hands of every developer, and to actually deliver on that goal, it should <b>just work</b> out of the box. How do we achieve that?</p><ul><li><p>At the core of everything, it runs on the right infrastructure - our world-class network of GPUs</p></li><li><p>We provide off-the-shelf models that run seamlessly on our infrastructure</p></li><li><p>Finally, we deliver it to the end developer in a way that’s delightful. A developer should be able to build their first Workers AI app in minutes and say, “Wow, that’s kinda magical!”</p></li></ul><p>So what exactly is Workers AI? It’s another building block that we’re adding to our developer platform - one that helps developers run well-known AI models on serverless GPUs, all on Cloudflare’s trusted global network. As one of the latest additions to our developer platform, it works seamlessly with Workers + Pages, but to make it truly accessible, we’ve made it platform-agnostic, so it also works everywhere else, made available via a REST API.</p>
    <div>
      <h2>Models you know and love</h2>
      <a href="#models-you-know-and-love">
        
      </a>
    </div>
    <p>We’re launching with a curated set of popular, open source models that cover a wide range of inference tasks:</p><ul><li><p><b>Text generation (large language model):</b> meta/llama-2-7b-chat-int8</p></li><li><p><b>Automatic speech recognition (ASR):</b> openai/whisper</p></li><li><p><b>Translation:</b> meta/m2m100-1.2b</p></li><li><p><b>Text classification:</b> huggingface/distilbert-sst-2-int8</p></li><li><p><b>Image classification:</b> microsoft/resnet-50</p></li><li><p><b>Embeddings:</b> baai/bge-base-en-v1.5</p></li></ul><p>You can browse all available models in your Cloudflare dashboard, and soon you’ll be able to dive into logs and analytics on a per-model basis!</p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/3iLFApyCjCwTCEtV8QRhke/91793f5eaabe3c426cf5fb7f421f4508/image4-14.png" />
            
            </figure><p>This is just the start, and we’ve got big plans. After launch, we’ll continue to expand based on community feedback. Even more exciting - in an effort to take our catalog from zero to sixty, we’re announcing a partnership with Hugging Face, a leading AI community + hub. The partnership is multifaceted, and you can read more about it <a href="/best-place-region-earth-inference">here</a>, but soon you’ll be able to browse and run a subset of the Hugging Face catalog directly in Workers AI.</p>
    <div>
      <h2>Accessible to everyone</h2>
      <a href="#accessible-to-everyone">
        
      </a>
    </div>
    <p>Part of the mission of our developer platform is to provide <b>all</b> the building blocks that developers need to build the applications of their dreams. Having access to the right blocks is just one part of it — as a developer your job is to put them together into an application. Our goal is to make that as easy as possible.</p><p>To make sure you could use Workers AI easily regardless of entry point, we wanted to provide access via: Workers or Pages to make it easy to use within the Cloudflare ecosystem, and via REST API if you want to use Workers AI with your current stack.</p><p>Here’s a quick CURL example that translates some text from English to French:</p>
            <pre><code>curl https://api.cloudflare.com/client/v4/accounts/{ACCOUNT_ID}/ai/run/@cf/meta/m2m100-1.2b \
  -H "Authorization: Bearer {API_TOKEN}" \
  -d '{ "text": "I'\''ll have an order of the moule frites", "target_lang": "french" }'</code></pre>
            <p>And here’s what the response looks like:</p>
            <pre><code>{
  "result": {
    "answer": "Je vais commander des moules frites"
  },
  "success": true,
  "errors":[],
  "messages":[]
}</code></pre>
            <p>Use it with any stack, anywhere - your favorite Jamstack framework, Python + Django/Flask, Node.js, Ruby on Rails - the possibilities are endless. Build and deploy today.</p>
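For instance, a rough TypeScript sketch of the same translation call from Node.js 18+ using the built-in `fetch` might look like this. The endpoint and `answer` field mirror the curl example above; `inferenceUrl` and `translate` are hypothetical helper names, not part of any official client:

```typescript
// A minimal sketch (not official client code) of calling the Workers AI REST
// API. ACCOUNT_ID / API_TOKEN are placeholders you supply yourself.

const API_BASE = "https://api.cloudflare.com/client/v4/accounts";

// Build the inference endpoint URL for a given account and model.
function inferenceUrl(accountId: string, model: string): string {
  return `${API_BASE}/${accountId}/ai/run/${model}`;
}

// Run a translation task; the "answer" field follows the sample response above.
async function translate(accountId: string, apiToken: string, text: string): Promise<string> {
  const res = await fetch(inferenceUrl(accountId, "@cf/meta/m2m100-1.2b"), {
    method: "POST",
    headers: { Authorization: `Bearer ${apiToken}` },
    body: JSON.stringify({ text, target_lang: "french" }),
  });
  const data = (await res.json()) as { result: { answer: string } };
  return data.result.answer;
}
```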
    <div>
      <h2>Designed for developers</h2>
      <a href="#designed-for-developers">
        
      </a>
    </div>
    <p>Developer experience is really important to us. In fact, most of this post has been about just that. Making sure it works out of the box. Providing popular models that just work. Being accessible to all developers whether you build and deploy with Cloudflare or elsewhere. But it’s more than that - the experience should be frictionless, zero to production should be fast, and it should feel good along the way.</p><p>Let’s walk through another example to show just how easy it is to use! We’ll run Llama 2, a popular <a href="https://www.cloudflare.com/learning/ai/what-is-large-language-model/">large language model</a> open sourced by Meta, in a worker.</p><p>We’ll assume you have some of the basics already complete (Cloudflare account, Node, NPM, etc.), but if you don’t, <a href="https://developers.cloudflare.com/workers-ai/get-started/local-dev-setup/">this guide</a> will get you properly set up!</p>
    <div>
      <h3>1. Create a Workers project</h3>
      <a href="#1-create-a-workers-project">
        
      </a>
    </div>
    <p>Create a new project named workers-ai by running:</p>
            <pre><code>$ npm create cloudflare@latest</code></pre>
            <p>When setting up your workers-ai worker, answer the setup questions as follows:</p><ul><li><p>Enter <b>workers-ai</b> for the app name</p></li><li><p>Choose <b>Hello World</b> script for the type of application</p></li><li><p>Select <b>yes</b> to using TypeScript</p></li><li><p>Select <b>yes</b> to using Git</p></li><li><p>Select <b>no</b> to deploying</p></li></ul><p>Lastly navigate to your new app directory:</p>
            <pre><code>cd workers-ai</code></pre>
            
    <div>
      <h3>2. Connect Workers AI to your worker</h3>
      <a href="#2-connect-workers-ai-to-your-worker">
        
      </a>
    </div>
    <p>Create a Workers AI binding, which allows your worker to access the Workers AI service without having to manage an API key yourself.</p><p>To bind Workers AI to your worker, add the following to the end of your <b>wrangler.toml</b> file:</p>
            <pre><code>[ai]
binding = "AI" #available in your worker via env.AI</code></pre>
            <p>You can also bind Workers AI to a Pages Function. For more information, refer to <a href="https://developers.cloudflare.com/pages/platform/functions/bindings/#ai">Functions Bindings</a>.</p>
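In a Pages Function, the binding shows up on `context.env` under the name you chose in `wrangler.toml`. Here's a hedged sketch; the `run` call shape mirrors the Worker example later in this post, and depending on your runtime version you may instead wrap the binding with `new Ai(context.env.AI)` from the client library:

```typescript
// A sketch of running inference from a Pages Function via the AI binding.
// The binding name "AI" matches the wrangler.toml snippet above; the model
// id and prompt follow the Worker example in this post.

type Env = {
  AI: { run(model: string, input: unknown): Promise<unknown> };
};

export async function onRequest(context: { env: Env }): Promise<Response> {
  const output = await context.env.AI.run("@cf/meta/llama-2-7b-chat-int8", {
    prompt: "What's the origin of the phrase 'Hello, World'",
  });
  return new Response(JSON.stringify(output));
}
```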
    <div>
      <h3>3. Install the Workers AI client library</h3>
      <a href="#3-install-the-workers-ai-client-library">
        
      </a>
    </div>
    
            <pre><code>npm install @cloudflare/ai</code></pre>
            
    <div>
      <h3>4. Run an inference task in your worker</h3>
      <a href="#4-run-an-inference-task-in-your-worker">
        
      </a>
    </div>
    <p>Update the <b>src/index.ts</b> with the following code:</p>
            <pre><code>import { Ai } from '@cloudflare/ai'
export default {
  async fetch(request, env) {
    const ai = new Ai(env.AI);
    const input = { prompt: "What's the origin of the phrase 'Hello, World'" };
    const output = await ai.run('@cf/meta/llama-2-7b-chat-int8', input );
    return new Response(JSON.stringify(output));
  },
};</code></pre>
            
    <div>
      <h3>5. Develop locally with Wrangler</h3>
      <a href="#5-develop-locally-with-wrangler">
        
      </a>
    </div>
    <p>While in your project directory, test Workers AI locally by running:</p>
            <pre><code>$ npx wrangler dev --remote</code></pre>
            <p><b>Note -</b> These models currently only run on Cloudflare’s network of GPUs (and not locally), so setting <code>--remote</code> above is a must, and you’ll be prompted to log in at this point.</p><p>Wrangler will give you a URL (most likely localhost:8787). Visit that URL, and you’ll see a response like this:</p>
            <pre><code>{
  "response": "Hello, World is a common phrase used to test the output of a computer program, particularly in the early stages of programming. The phrase \"Hello, World!\" is often the first program that a beginner learns to write, and it is included in many programming language tutorials and textbooks as a way to introduce basic programming concepts. The origin of the phrase \"Hello, World!\" as a programming test is unclear, but it is believed to have originated in the 1970s. One of the earliest known references to the phrase is in a 1976 book called \"The C Programming Language\" by Brian Kernighan and Dennis Ritchie, which is considered one of the most influential books on the development of the C programming language."
}</code></pre>
            
    <div>
      <h3>6. Deploy your worker</h3>
      <a href="#6-deploy-your-worker">
        
      </a>
    </div>
    <p>Finally, deploy your worker to make your project accessible on the Internet:</p>
            <pre><code>$ npx wrangler deploy
# Outputs: https://workers-ai.&lt;YOUR_SUBDOMAIN&gt;.workers.dev</code></pre>
            <p>And that’s it. You can literally go from zero to deployed AI in minutes. This is obviously a simple example, but it shows how easy it is to run Workers AI from any project.</p>
    <div>
      <h2>Privacy by default</h2>
      <a href="#privacy-by-default">
        
      </a>
    </div>
    <p>When Cloudflare was founded, our value proposition had three pillars: more secure, more reliable, and more performant. Over time, we’ve realized that a better Internet is also a more private Internet, and we want to play a role in building it.</p><p>That’s why Workers AI is private by default - we don’t train our models, LLM or otherwise, on your data or conversations, and our models don’t learn from your usage. You can feel confident using Workers AI in both personal and business settings, without having to worry about leaking your data. Other providers only offer this fundamental feature with their enterprise version. With us, it’s built in for everyone.</p><p>We’re also excited to support data localization in the future. To make this happen, we have an ambitious GPU rollout plan - we’re launching with seven sites today, roughly 100 by the end of 2023, and nearly everywhere by the end of 2024. Ultimately, this will empower developers to keep delivering killer AI features to their users, while staying compliant with their end users’ data localization requirements.</p>
    <div>
      <h2>The power of the platform</h2>
      <a href="#the-power-of-the-platform">
        
      </a>
    </div>
    
    <div>
      <h4>Vector database - Vectorize</h4>
      <a href="#vector-database-vectorize">
        
      </a>
    </div>
    <p>Workers AI is all about running inference, and making it really easy to do so, but sometimes inference is only part of the equation. Large language models are trained on a fixed set of data, based on a snapshot at a specific point in the past, and have no context on your business or use case. When you submit a prompt, information specific to you can increase the quality of results, making it more useful and relevant. That’s why we’re also launching Vectorize, our <a href="https://www.cloudflare.com/learning/ai/what-is-vector-database/">vector database</a> that’s designed to work seamlessly with Workers AI. Here’s a quick overview of how you might use Workers AI + Vectorize together.</p><p>Example: Use your data (knowledge base) to provide additional context to an LLM when a user is chatting with it.</p><ol><li><p><b>Generate initial embeddings:</b> run your data through Workers AI using an <a href="https://www.cloudflare.com/learning/ai/what-are-embeddings/">embedding model</a>. The output will be embeddings, which are numerical representations of those words.</p></li><li><p><b>Insert those embeddings into Vectorize:</b> this essentially seeds the vector database with your data, so we can later use it to retrieve embeddings that are similar to your users’ query.</p></li><li><p><b>Generate embedding from user question:</b> when a user submits a question to your AI app, first take that question and run it through Workers AI using an embedding model.</p></li><li><p><b>Get context from Vectorize:</b> use that embedding to query Vectorize. This should output embeddings that are similar to your user’s question.</p></li><li><p><b>Create context-aware prompt:</b> now take the original text associated with those embeddings, and create a new prompt combining the text from the vector search with the original question.</p></li><li><p><b>Run prompt:</b> run this prompt through Workers AI using an LLM to get your final result.</p></li></ol>
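Stitched together in a Worker, the six steps above look roughly like this sketch. The binding names (`AI`, `VECTOR_INDEX`), model ids, and the exact Vectorize query shape are illustrative assumptions here, not a verbatim API reference:

```typescript
// A rough sketch of the retrieval-augmented flow above, run inside a Worker.
// Binding names and the Vectorize query shape are assumptions for illustration.

type Env = {
  AI: { run(model: string, input: unknown): Promise<any> };
  VECTOR_INDEX: {
    query(vector: number[], opts: { topK: number }): Promise<{ matches: { id: string }[] }>;
  };
};

// Step 5: combine the retrieved context with the original question.
function buildPrompt(context: string[], question: string): string {
  return `Context:\n${context.join("\n")}\n\nQuestion: ${question}`;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const question = "How do I reset my password?";

    // Step 3: embed the user's question.
    const embedded = await env.AI.run("@cf/baai/bge-base-en-v1.5", { text: [question] });
    const vector: number[] = embedded.data[0];

    // Step 4: retrieve the most similar embeddings from Vectorize.
    const { matches } = await env.VECTOR_INDEX.query(vector, { topK: 3 });

    // Look up the original text for each match id (e.g. from KV or D1); elided here.
    const context = matches.map((m) => `(document ${m.id})`);

    // Steps 5 and 6: build the context-aware prompt and run it through an LLM.
    const prompt = buildPrompt(context, question);
    const answer = await env.AI.run("@cf/meta/llama-2-7b-chat-int8", { prompt });
    return new Response(JSON.stringify(answer));
  },
};
```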
    <div>
      <h4>AI Gateway</h4>
      <a href="#ai-gateway">
        
      </a>
    </div>
    <p>That covers a more advanced use case. On the flip side, if you are running models elsewhere, but want to get more out of the experience, you can run those APIs through our AI Gateway to get features like caching, rate limiting, analytics, and logging. These features can be used to protect your endpoint, monitor and optimize costs, and also help with data loss prevention. Learn more about AI Gateway <a href="/announcing-ai-gateway">here</a>.</p>
    <div>
      <h2>Start building today</h2>
      <a href="#start-building-today">
        
      </a>
    </div>
    <p>Try it out for yourself, and let us know what you think. Today we’re launching Workers AI as an open Beta for all Workers plans - free or paid. That said, it’s super early, so…</p>
    <div>
      <h4>Warning - It’s an early beta</h4>
      <a href="#warning-its-an-early-beta">
        
      </a>
    </div>
    <p>Usage is <b>not currently recommended for production apps</b>, and limits + access are subject to change.</p>
    <div>
      <h4>Limits</h4>
      <a href="#limits">
        
      </a>
    </div>
    <p>We’re initially launching with limits on a per-model basis:</p><ul><li><p>@cf/meta/llama-2-7b-chat-int8: 50 reqs/min globally</p></li></ul><p>Check out our <a href="https://developers.cloudflare.com/workers-ai/platform/limits/">docs</a> for a full overview of our limits.</p>
    <div>
      <h4>Pricing</h4>
      <a href="#pricing">
        
      </a>
    </div>
    <p>What we released today is just a small preview to give you a taste of what’s coming (we simply couldn’t hold back), but we’re looking forward to putting the full-throttle version of Workers AI in your hands.</p><p>We realize that as you approach building something, you want to understand: how much is this going to cost me? That’s especially true with AI, where costs can easily get out of hand. So we wanted to share the upcoming pricing of Workers AI with you.</p><p>While we won’t be billing on day one, we are announcing what we expect our pricing will look like.</p><p>Users will be able to choose from two ways to run Workers AI:</p><ul><li><p><b>Regular Twitch Neurons (RTN)</b> - running wherever there's capacity at $0.01 / 1k neurons</p></li><li><p><b>Fast Twitch Neurons (FTN)</b> - running at the nearest user location at $0.125 / 1k neurons</p></li></ul><p>You may be wondering — what’s a neuron?</p><p>Neurons are a way to measure AI output that always scales down to zero (if you get no usage, you will be charged for 0 neurons). To give you a sense of scale, a thousand neurons can generate 130 LLM responses, 830 image classifications, or 1,250 embeddings.</p><p>Our goal is to help our customers pay only for what they use, and choose the pricing that best matches their use case, whether it’s price or latency that is top of mind.</p>
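To make the math concrete, here's a small sketch of the per-task cost implied by those figures. The `neuronCost` helper is ours, not part of any Cloudflare API; the rates and neuron counts come from the numbers above:

```typescript
// Approximate per-task cost implied by the pricing figures above.
// RTN: $0.01 / 1k neurons; FTN: $0.125 / 1k neurons.
// 1,000 neurons covers roughly 130 LLM responses, 830 image
// classifications, or 1,250 embeddings.

const RTN_PER_1K = 0.01;
const FTN_PER_1K = 0.125;

// Cost in dollars for a given number of neurons at a per-1k-neuron rate.
function neuronCost(neurons: number, ratePer1k: number): number {
  return (neurons / 1000) * ratePer1k;
}

// One LLM response uses roughly 1000 / 130, about 7.7 neurons, so on RTN
// it costs well under a hundredth of a cent.
const neuronsPerLlmResponse = 1000 / 130;
const rtnCostPerResponse = neuronCost(neuronsPerLlmResponse, RTN_PER_1K);
const ftnCostPerResponse = neuronCost(neuronsPerLlmResponse, FTN_PER_1K);
```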
    <div>
      <h3>What’s on the roadmap?</h3>
      <a href="#whats-on-the-roadmap">
        
      </a>
    </div>
    <p>Workers AI is just getting started, and we want your feedback to help us make it great. That said, there are some exciting things on the roadmap.</p>
    <div>
      <h4>More models, please</h4>
      <a href="#more-models-please">
        
      </a>
    </div>
    <p>We're launching with a solid set of models that just work, but will continue to roll out new models based on your feedback. If there’s a particular model you'd love to see on Workers AI, pop into our <a href="https://discord.cloudflare.com/">Discord</a> and let us know!</p><p>In addition to that, we're also announcing a <a href="/best-place-region-earth-inference">partnership with Hugging Face</a>, and soon you'll be able to access and run a subset of the Hugging Face catalog directly from Workers AI.</p>
    <div>
      <h4>Analytics + observability</h4>
      <a href="#analytics-observability">
        
      </a>
    </div>
    <p>Up to this point, we’ve been hyper-focused on one thing - making it really easy for any developer to run powerful AI models in just a few lines of code. But that’s only one part of the story. Up next, we’ll be working on some analytics and <a href="https://www.cloudflare.com/learning/performance/what-is-observability/">observability</a> capabilities to give you insights into your usage + performance + spend on a per-model basis, plus the ability to dig into your logs if you want to do some exploring.</p>
    <div>
      <h4>A road to global GPU coverage</h4>
      <a href="#a-road-to-global-gpu-coverage">
        
      </a>
    </div>
    <p>Our goal is to be the best place to run inference on Region: Earth, so we're adding GPUs to our data centers as fast as we can.</p><p><b>We plan to be in 100 data centers by the end of this year</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/5A8SGUOEAcs3sjNjv48yIh/bafbc77b256fef490d4357613b036603/image3-28.png" />
            
            </figure><p><b>And nearly everywhere by the end of 2024</b></p>
            <figure>
            
            <img src="https://cf-assets.www.cloudflare.com/zkvhlag99gkb/2rrL2H0dHYZ4hxOBq0X1pw/f38d122af92f789dc2b31d3bdea1ab06/unnamed-3.png" />
            
            </figure><p><b>We’re really excited to see you build</b> - head over to <a href="https://developers.cloudflare.com/workers-ai/">our docs</a> to get started.</p><p>If you need inspiration, want to share something you’re building, or have a question - pop into our <a href="https://discord.com/invite/cloudflaredev">Developer Discord</a>.</p> ]]></content:encoded>
            <category><![CDATA[Birthday Week]]></category>
            <category><![CDATA[Cloudflare Workers]]></category>
            <category><![CDATA[AI]]></category>
            <category><![CDATA[Developer Platform]]></category>
            <category><![CDATA[Database]]></category>
            <category><![CDATA[Vectorize]]></category>
            <category><![CDATA[Developers]]></category>
            <guid isPermaLink="false">6jSrrIFC7yStZxCaqaM0c1</guid>
            <dc:creator>Phil Wittig</dc:creator>
            <dc:creator>Rita Kozlov</dc:creator>
            <dc:creator>Rebecca Weekly</dc:creator>
            <dc:creator>Celso Martinho</dc:creator>
            <dc:creator>Meaghan Choi</dc:creator>
        </item>
    </channel>
</rss>