Offload the AI inference to your users

Provide your users with the highest level of data privacy while saving on your inference bill.

Ensure the privacy of your users' data

Why Offload?

Logos from different open-source AI large language models (LLMs) and small language models (SLMs)

Whether consumers or businesses, your customers aren't comfortable sending their private data to a third-party API.
By integrating Offload into your web application, inference happens on each user's device, keeping their data private.

How it works

Common AI app

Your users' data is sent to a third-party API that runs AI inference. These APIs typically use that data to train AI models.

Diagram of an AI application without offload

Offload for B2C AI app

AI inference runs on the user's device, preserving data privacy.

Diagram of a B2C AI application with offload

Offload for B2B AI app

AI inference runs in the customer's datacenter or private cloud, preserving data privacy.

Diagram of a B2B AI application with offload

Easy to add to any project

How to install

<!-- Include the Offload library in your app -->
<script src="//unpkg.com/offload-ai" defer></script>

Then invoke inference using the SDK.

How to run inference

const { text } = await offload({
    model: "my-model-id",
    prompt: "Write a vegetarian lasagna recipe for 4 people.",
});

And you are done!
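As a sketch of how the call above might fit into your app, the helper below wraps it with a guard for the case where the SDK script has not loaded yet. The `summarize` helper and its prompt are illustrative, not part of the Offload API; only the `offload({ model, prompt })` call and its `{ text }` result are taken from the snippet above.

```javascript
// Hypothetical helper: run inference on the user's device via the global
// `offload` function provided by the <script> tag shown above.
async function summarize(userText) {
    // Guard: the SDK script may not have loaded (it is included with `defer`).
    if (typeof offload !== "function") {
        throw new Error("Offload SDK not loaded yet");
    }
    // All data stays on the device; nothing is sent to a third-party API.
    const { text } = await offload({
        model: "my-model-id",
        prompt: `Summarize the following text: ${userText}`,
    });
    return text;
}
```

You could call `summarize(...)` from a form handler or button click; since inference runs locally, the user's input never leaves their machine.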

Frequently Asked Questions


Start offloading today!

Share your email and we'll get in touch. We'd love to learn from you along the way!


Offload logo

Offload © 2024