Create distributed APIs for your e-commerce stores using Cloudflare's edge network and Turso, the database built for the edge.
In the world of e-commerce, every millisecond of latency matters. A fast, reliably performant experience for end users helps drive sales, and fortunately, it's easier than ever to build one.
In this blog post, we are going to learn how to create a distributed (edge) API for an e-commerce store using Cloudflare Workers and Turso.
Cloudflare Workers is a service from Cloudflare that enables us to build serverless applications and deploy instantly to the Cloudflare edge network that spans over 200 cities across the globe for exceptional performance, reliability, and scale.
Turso is an edge database built on libSQL, an open-contribution fork of SQLite.
We'll also use Drizzle as the ORM (Object-Relational Mapping) tool for this project; it will handle generating and migrating the project's database schema and help build queries.
By building the REST API with this stack, Cloudflare Workers and Turso, we place both compute and data as close as possible (at the edge) to the API consumers. This facilitates low latency from most parts of the world.
The API we are going to build is the data source for a "Mugs Store" e-commerce shop. A complete store would have many endpoints and data models, but for brevity we'll work with two data models: "Mugs" (the products model) and "Categories". These are the models our API endpoints will be based on.
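As a rough mental model, the two data models can be sketched as the following TypeScript shapes. Only `id` and `name` come from the schema shown later in this post; the remaining Mug fields are hypothetical placeholders for a typical product model.

```typescript
// Illustrative shapes for the two data models. Only `id` and `name`
// appear in the Drizzle schema later in this post; `categoryId` and
// `price` are hypothetical fields for a typical product model.
interface Category {
  id: string;
  name: string;
}

interface Mug {
  id: string;
  name: string;
  categoryId: string; // hypothetical foreign key into Category
  price: number; // hypothetical
}

// Example records:
const ceramic: Category = { id: 'cat-1', name: 'Ceramic' };
const mug: Mug = {
  id: 'mug-1',
  name: 'Plain White Mug',
  categoryId: ceramic.id,
  price: 9.99,
};
```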
To see the complete source code of the e-commerce API being built, visit the repo on GitHub.
Prerequisites:
To get started with a new Cloudflare Workers project, run:
npm create cloudflare
You'll be asked some questions to define the app you are building with Cloudflare, respond using the following template.
~ npm create cloudflare
using create-cloudflare version 2.0.9
╭ Create an application with Cloudflare Step 1 of 3
│
├ Where do you want to create your application?
│ dir mug-store-api
│
├ What type of application do you want to create?
│ type "Hello World" script
│
├ Do you want to use TypeScript?
│ typescript yes
│
├ Copying files from "simple" template
│
├ Do you want to use git?
│ git yes
│
╰ Application created
╭ Installing dependencies Step 2 of 3
│
├ Installing dependencies
│ installed via `npm install`
│
├ Committing new files
│ git initial commit
│
╰ Dependencies Installed
╭ Deploy with Cloudflare Step 3 of 3
│
├ Do you want to deploy your application?
│ no deploying via `npm run deploy`
│
├ APPLICATION CREATED Deploy your application with npm run deploy
│
│ Run the development server npm run start
│ Deploy your application npm run deploy
│ Read the documentation https://developers.cloudflare.com/workers
│ Stuck? Join us at https://discord.gg/cloudflaredev
│
╰ See you again soon!
On completion, cd into the project's directory, where you should find the following directory structure.
.
├── node_modules
├── package-lock.json
├── package.json
├── src
│ └── worker.ts
├── tsconfig.json
└── wrangler.toml
Delete all the commented-out bindings information inside the wrangler.toml and src/worker.ts files, as we won't be using them.
If it's the first time you are working on a Cloudflare project, you'll need to authenticate the project's workspace in order to be able to create secrets and eventually deploy the project on Cloudflare.
If that applies to your project, run the following command:
npx wrangler login
This should open a new tab in your browser, where you'll see the authorization request demonstrated below.
In the next section, we'll create a Turso database and add its database URL and authentication token to the Cloudflare Workers bindings, which are passed as the second argument of the fetch function inside the worker's default export in src/worker.ts.
You need to log in to the Turso CLI before you can use it to create and manage databases. Run the following command to do so.
turso auth login
This will open up a browser tab and ask you to authenticate via GitHub. If you are doing this for the first time, you will need to give the Turso application permission to use your account. Grant Turso the permissions needed to proceed.
After authentication, run the following command to create a new database.
turso db create mugs-store-api
The above command will create a new database named mugs-store-api in the supported location closest to you.
To get the database details we'll need to pass to the worker's fetch function, run the following two commands.
# Get the database url
turso db show --url mugs-store-api
# Create a database authentication token
turso db tokens create mugs-store-api
Copy the results of the above commands and store them, as we'll be using them in the steps that follow.
For the database URL, add a [vars] section inside the worker's configuration file wrangler.toml, setting it as the environment variable TURSO_DB_URL.
[vars]
TURSO_DB_URL = "<OBTAINED-DB-URL>"
Since the authentication token is a sensitive variable, add it as a Workers secret environment variable TURSO_DB_AUTH_TOKEN by running the following command:
npx wrangler secret put TURSO_DB_AUTH_TOKEN
You'll then be prompted to provide the secret value; paste the Turso database auth token obtained in the previous step.
Next, add the environment variable keys to the worker's Env interface inside the src/worker.ts file as follows:
export interface Env {
TURSO_DB_AUTH_TOKEN?: string;
TURSO_DB_URL?: string;
}
When you run npm run dev at this stage, you will still see a "Hello World" message displayed, since that's what the fetch function currently returns as a response.
Let's change that and have JSON responses returned for the respective HTTP requests made to our API.
To streamline the API endpoints we'll need to use a router, and in this example, we are going to use the JavaScript micro-router, itty-router.
To install it run the following command:
npm install itty-router
Back in the worker file, add the router's type to the Env interface we added the database bindings to earlier.
import { RouterType } from 'itty-router';
export interface Env {
…
router?: RouterType;
}
Let's create a router-building function that will register our endpoints and return the itty-router instance.
import { json, Router, type RouterType } from 'itty-router';
export function buildIttyRouter(env: Env): RouterType {
const router = Router();
router.get('*', () => json('Hi World!'));
return router;
}
Update the fetch function inside the worker's default export to initiate and use the router to handle received requests.
async fetch(request: Request, env: Env, ctx: ExecutionContext): Promise<Response> {
if (env.router === undefined) {
env.router = buildIttyRouter(env);
}
return env.router?.handle(request);
}
Now, when visiting localhost:8787 you should see a JSON response containing "Hi World!", matching what our route returns. This means the router is successfully handling the worker's requests; the next step is adding endpoints that respond to the respective HTTP requests made to our API.
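Under the hood, itty-router's json helper essentially serializes the payload and sets a JSON content-type header on a Response. As a mental model, it behaves roughly like the sketch below; this approximates the helper's documented behavior and is not the library's actual source.

```typescript
// A sketch of what a json() response helper does: serialize the payload
// and attach the JSON content-type header. This approximates itty-router's
// helper; it is not the library's exact implementation.
function jsonResponse(data: unknown, status = 200): Response {
  return new Response(JSON.stringify(data), {
    status,
    headers: { 'content-type': 'application/json; charset=utf-8' },
  });
}

const res = jsonResponse({ message: 'Hi World!' });
```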
Before that, you'll first need to define the data models for our API, generate their schemas, and migrate them to our database. We'll do this with the help of Drizzle.
To use Drizzle in the project you'll need to install the following packages.
npm install drizzle-orm @libsql/client
npm install --save-dev drizzle-kit drizzle-zod tsx dotenv
The drizzle-orm package is used to model the data schema, run migrations, connect to our database, and build queries, while the drizzle-kit package is a CLI tool used to generate migration files. The drizzle-zod package will assist with validating the JSON data submitted to our endpoints, dotenv loads environment variables in the migration script, and lastly, the tsx module lets us execute TypeScript code directly.
Create a /drizzle directory at the root of the project. Inside it, add a schema.ts file containing the Drizzle schemas for our database. The following is the schema for this project's categories table.
To see the full Drizzle schema for this project, open this file.
import {
sqliteTable,
text,
uniqueIndex,
} from 'drizzle-orm/sqlite-core';
import { createInsertSchema, createSelectSchema } from 'drizzle-zod';
export const categories = sqliteTable(
'categories',
{
id: text('id').primaryKey(),
name: text('name'),
},
(categories) => ({
nameIdx: uniqueIndex('name_idx').on(categories.name),
}),
);
export const insertCategorySchema = createInsertSchema(categories);
export const selectCategorySchema = createSelectSchema(categories);
Let's create a script that handles the migration generation using the Drizzle CLI, taking the created schema as input. Inside package.json, add the following "generate" script.
"scripts": {
...
"generate": "npx drizzle-kit generate:sqlite --out ./drizzle/migrations --breakpoints --schema=./drizzle/schema.ts",
...
}
The above script uses the Drizzle CLI to generate SQL migrations, passing ./drizzle/migrations as the output directory for the generated migration files.
Run npm run generate to see the migrations generated and placed inside the selected directory.
On successful schema generation, you should see the following output on the terminal.
2 tables
categories 2 columns 1 indexes 0 fks
mugs 8 columns 5 indexes 1 fks
To migrate the generated schema to our database, we need to add the migration code. Create a migrate.ts file under the /drizzle directory and add the following code inside it.
import 'dotenv/config';
import { createClient } from '@libsql/client';
import { drizzle } from 'drizzle-orm/libsql';
import { migrate } from 'drizzle-orm/libsql/migrator';
const client = createClient({
url: process.env.TURSO_DB_URL as string,
authToken: process.env.TURSO_DB_AUTH_TOKEN as string,
});
const db = drizzle(client);
async function main() {
await migrate(db, {
migrationsFolder: './drizzle/migrations',
});
}
main()
.then(() => {
console.log('Tables migrated!');
process.exit(0);
})
.catch((err) => {
console.error('Error performing migration: ', err);
process.exit(1);
});
In the migrate() function above, we pass the output directory of our generated migrations as the second argument so that Drizzle's libSQL migrator knows where to find them.
Since the migration process is done locally, we can't use the environment variables added to the Cloudflare worker to handle the connection to our Turso database.
To make sure the data migration works, provide the required database environment variables to this Node.js environment by adding a .env file with the following keys, assigning them the database values we acquired earlier.
TURSO_DB_URL=<DB-URL>
TURSO_DB_AUTH_TOKEN=<AUTH-TOKEN>
To streamline the execution of schema migrations, add the following script to package.json.
"scripts": {
...
"migrate": "tsx drizzle/migrate",
...
}
With this set up, you can now perform database migrations by running npm run migrate.
Run turso db shell mugs-store-api .tables to validate that the tables were added to your Turso database.
If you updated the schema to reflect the code in the repository, you should expect the following tables to be listed.
__drizzle_migrations
categories
mugs
Note: The repo on GitHub contains the code to some demo data that can be seeded to the database.
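Seeding boils down to building rows with generated IDs and inserting them with Drizzle. A minimal sketch of preparing such seed rows follows; the category names here are made up for illustration, and the repo ships its own demo data.

```typescript
import { randomUUID } from 'node:crypto';

// Build seed rows matching the categories table shape ({ id, name }).
// These names are illustrative placeholders, not the repo's demo data.
const categoryNames = ['Ceramic', 'Travel', 'Espresso'];

const seedCategories = categoryNames.map((name) => ({
  id: randomUUID(),
  name,
}));

// The rows could then be inserted with Drizzle, e.g.:
// await db.insert(categories).values(seedCategories).run();
```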
To make our API functional, it needs to respond to HTTP requests with the expected data. To do that, we need to transact with Turso, fetching or submitting data depending on the nature of each request.
Let's create a function that builds a database client, returning a libSQL (Turso) client wrapped with Drizzle as the query builder.
Add the following function to the worker file.
import { createClient } from '@libsql/client/http';
import { drizzle, LibSQLDatabase } from 'drizzle-orm/libsql';
function buildDbClient(env: Env): LibSQLDatabase {
const url = env.TURSO_DB_URL?.trim();
if (url === undefined) {
throw new Error('TURSO_DB_URL is not defined');
}
const authToken = env.TURSO_DB_AUTH_TOKEN?.trim();
if (authToken === undefined) {
throw new Error('TURSO_DB_AUTH_TOKEN is not defined');
}
return drizzle(createClient({ url, authToken }));
}
Next, initiate a database client by adding the following code inside the buildIttyRouter() function we created earlier.
function buildIttyRouter(env: Env): RouterType {
const db = buildDbClient(env);
// router code
}
Now, we can proceed with creating the API endpoints.
Before adding the endpoints, install the uuid package, which we'll use to generate unique IDs for our table rows.
npm install uuid
Then, import it at the top of our worker file.
import { v4 as uuidv4 } from "uuid";
For the endpoint code that follows, make sure to update the required imports:
import { error, IRequest, json, Router, RouterType, withParams } from 'itty-router';
Starting with basic GET requests, create two endpoints: one that handles the request to fetch all mugs, and a second that returns a mug based on the provided id.
import { eq } from 'drizzle-orm';
import { categories, insertCategorySchema, mugs } from '../drizzle/schema';

function buildIttyRouter(env: Env): RouterType {
const router = Router();
const db = buildDbClient(env);
router
.get('/mugs', async () => {
const mugsData = await db.select().from(mugs).all();
return json({
mugs: mugsData,
});
})
.get('/mug/:id', async ({params: { id }}) => {
if (!id) {
return error(422, 'ID is required');
}
const mugData = await db.select().from(mugs).where(eq(mugs.id, id)).get();
return mugData
? json({
mug: mugData,
})
: error(404, 'Mug not found!');
})
// subsequent endpoint routes
}
For data-submission POST requests, the following is the code for the endpoint that handles creating a new category.
// previous endpoint routes
.post('/category', async (request: IRequest) => {
const jsonData = await request.json();
const categoryData = insertCategorySchema.safeParse({
id: uuidv4(),
...(jsonData as object),
});
if (!categoryData.success) {
const { message, path } = categoryData.error.issues[0];
return error(path.length ? 422 : 400, `[${path}]: ${message}`);
}
const newCategory = await db
.insert(categories)
.values(categoryData.data)
.returning()
.get();
return json(
{ category: newCategory },
{
status: 201,
},
);
})
For data-update PATCH requests to categories based on the provided id, add the following endpoint.
// previous endpoint routes
.patch('/category/:id', async (request) => {
const { id } = request.params;
if (!id) {
return error(422, 'ID is required');
}
const jsonData: { name: string } = await request.json();
if (!Object.keys(jsonData).length){
return error(400, 'No data is being updated!');
}
const category = await db
.update(categories)
.set(jsonData)
.where(eq(categories.id, id))
.returning()
.get();
return json({ category });
})
And lastly, for DELETE requests that remove mug items by their id, add the following endpoint.
.delete('/mug/:id', async ({params: { id }}) => {
if (!id) {
return error(422, 'ID is required');
}
const mugData = await db
.delete(mugs)
.where(eq(mugs.id, id))
.returning()
.get();
return json({
mug: mugData,
});
})
For the remaining endpoints for the Mug Store API, view the router function code inside the worker file on the GitHub repository.
Test to see if every endpoint works as intended and perform fixes where necessary.
Next, we'll be deploying our REST API to the Cloudflare network.
If your project was scaffolded with the deprecated "wrangler publish" as the Workers deploy script in package.json, update it to use the newer "wrangler deploy" command.
You can then deploy the Mug Store e-commerce API to Cloudflare's distributed network by running the following command.
npm run deploy
This command should log details along the following lines when the worker project is deployed successfully.
Your worker has access to the following bindings:
- Vars:
- TURSO_DB_URL: "libsql://mug-store-api-xinnks.turso.io"
Total Upload: 272.46 KiB / gzip: 50.44 KiB
Uploaded the-mugs-store-api (4.92 sec)
Published the-mugs-store-api (7.13 sec)
https://the-mugs-store-api.xinnks.workers.dev
Current Deployment ID: …
You can now access your distributed e-commerce API from whatever head you choose, be it a web, mobile, or desktop project, using the published URL provided above.
You can use the URL in the above deployment log to test the REST API routes covered in this blog post. This distributed API comprises Turso database instances hosted in three locations: Denver (US), Johannesburg (South Africa), and Paris (EU).
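As an example of consuming the API from a TypeScript head, here's a minimal client sketch. The MugStoreClient class and its error handling are hypothetical conveniences; only the /mugs and /mug/:id routes come from the endpoints built above.

```typescript
// Minimal, hypothetical client wrapper for the deployed API. Only the
// /mugs and /mug/:id routes are taken from this post.
class MugStoreClient {
  constructor(private baseUrl: string) {}

  // Resolve a route path against the deployment URL.
  endpoint(path: string): string {
    return new URL(path, this.baseUrl).toString();
  }

  async getMugs(): Promise<unknown> {
    const res = await fetch(this.endpoint('/mugs'));
    if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
    return res.json();
  }
}

const client = new MugStoreClient('https://the-mugs-store-api.xinnks.workers.dev');
```

A call like client.getMugs() then returns the same JSON payload as hitting the /mugs endpoint directly.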
For more information regarding the stack used in this blog post, visit the following links:
If you enjoyed this article and would like to get updates on more content like this, you can follow me on Twitter — @xinnks.