Using Turso's embedded replicas to speed up a Remix website hosted on Akamai's Linode
James Sinkala
In a previous blog post, we learned how to create multitenant SaaS services using Remix and Turso. In this post we're going to speed up the queries in our SaaS service by using Turso's embedded replicas, which cut latency down even further, and we'll be hosting the service on Akamai's Linode.
We recently wrote about why you should ditch your cache and use your database in its place; this post expands on that by showing how to make it happen in a real-world situation. If you can get microsecond-level responses to your database queries, cache solutions, which add an extra layer of complexity, are no longer necessary.
Being fast is great. But getting rid of caches and simplifying the architecture is where the real power of embedded replicas lies.
Fady (@fady_mak):
This is going to unlock a lot of interesting use-cases!
Right now we have a globally-replicated DynamoDB table that holds our routing information. We had to put a cache in-front of it. Embedded Replicas would simplify the whole architecture AND make it faster 🔥
For those not acquainted, Akamai's Linode is a massively distributed cloud computing service, and Turso is the edge database based on libSQL, the open-source, open-contribution fork of SQLite.
Prerequisites:
You will need the following to work on the project discussed in this blog post:
In the referenced blog post, we went through the process of building a Customer Relationship Management (CRM) service using Remix, Turso, and Drizzle. The service assigned every registered tenant their own database while keeping records of all registered tenants within a centralized database.
We started off by describing the data needs of the multi-tenant SaaS service, then created a Remix application and added some pages within which we fetched and displayed the data stored within the app's Turso databases.
We went through the instructions of setting up Drizzle to create schemas for the different databases, migrate them to Turso, and build queries used for data transactions within the pages of the Remix application.
We also learned how to use the Turso platform API to create new databases on the Turso platform.
You can refer to the multitenancy blog if you need to go through the building process of the multitenancy project in its entirety.
The libsql package is a better-sqlite3-compatible API for libSQL that's friendlier to managed services; it supports Bun, Deno, and Node.js. With this package we currently get SQLite's prepared statements and their associated performance benefits.
However, with the libsql package we lose the Drizzle integration present in the earlier setup, which let us leverage Drizzle's ORM capabilities; in exchange, we get more SQL and less ORM code.
As a substitute for the ORM typing that Drizzle provided, I had to make some changes to the project's types code, constructing typed objects from the results of the database queries where needed, along the lines of the sketch below.
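Here's an illustrative sketch of such a helper; the exact type shapes in the repo may differ, but makeAgent is the kind of function you'll see used in the loader code later in this post.

// app/lib/types.ts (an illustrative sketch; the actual type shapes may differ)
export interface Agent {
  id: string;
  fullName: string;
  email: string;
}

// construct a typed object from a raw database row
export function makeAgent(row: any): Agent {
  return { id: row.id, fullName: row.full_name, email: row.email };
}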
To install the libsql package, run:
npm install libsql
With Turso's embedded replicas, all reads are performed locally while writes are sent over the network. We don't need to set this up manually since the SDK takes care of it; all we need to do is provide a few things.
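Here's a sketch of the client construction. buildOrgDbClient is the helper used later in this post; the file name, the local path convention, and the environment variable name are assumptions.

// app/lib/client-org.server.ts (a sketch; file layout is illustrative)
import Database from 'libsql';

export function buildOrgDbClient({ url }: { url: string }) {
  // derive a local file name for the replica from the remote URL (illustrative)
  const dbName = new URL(url).hostname.split('.')[0];
  // first argument: the embedded replica (a local SQLite file database);
  // second argument: options, i.e. the remote database URL and auth token
  return new Database(`./databases/${dbName}.db`, {
    syncUrl: url,
    authToken: process.env.TURSO_DB_AUTH_TOKEN!, // assumed env var name
  });
}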
In the above code, we're passing the path to the embedded replica (a local SQLite file database) as the first argument; the second argument, options, is pretty much what we've been working with up to this point while using the @libsql/client SDK, that is, the database URL and auth token.
Then, after every write, we call the sync() method to synchronize the remote data into the embedded replica. Since data synchronization is on demand, you can perform it as often as you want.
Here is an example from the code.
// app/routes/org-actions.ts
const { full_name, email } = values as unknown as {
full_name: string;
email: string;
};
const id = uuidv4();
const password = uuidv4().split('-')[0];
const hash = await bcrypt.hash(password, 10);
const agentInformation = [id, full_name, email, hash];
// add agent to db
const t1 = new Delta();
const agentAdded = await manageOrgDbs
.prepare(
'INSERT INTO agents(id, full_name, email, password) values(?, ?, ?, ?)',
)
.run(agentInformation);
t1.stop('Creation of new agent');
const t2 = new Delta();
await manageOrgDbs.sync();
t2.stop('syncing org database');
In the code above, we're creating a new organization agent and then synchronizing the remote database with our embedded replica.
Then we can perform reads as usual, only this time the SDK fetches the queried data from the local SQLite database.
// app/routes/_org.dash.tsx
const db = buildOrgDbClient({ url: orgInfo.dbUrl as string });
const agents = await db.prepare('SELECT * FROM agents').all();
const tickets = await db
.prepare(
'select "id", "customer_email", "customer_name", "query", "is_closed", "service_rating", "created_at", "updated_at", (select json_array("id", "ticket_id", "agent_id", "created_at", "updated_at") as "data" from (select * from "conversations" "tickets_conversation" where "tickets_conversation"."ticket_id" = "tickets"."id") "tickets_conversation") as "conversation" from "tickets"',
)
.all();
return json({
agents: agents.map((agent: any) => makeAgent(agent)),
tickets: tickets.map((ticket: any) => makeTicket(ticket)),
});
We'll talk about the Delta() class seen in some of this code later.
After the Linode server has been provisioned, SSH into it.
ssh root@<linode-server-ip-address>
Install Node.js and npm since we'll be working with a Node.js project.
apt install nodejs
apt install npm
Clone the project from its remote repository; in this case, we'll be cloning the embedded-replicas branch of the multitenant SaaS project's repo.
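The command will look something like this, substituting the repository's actual URL:

# clone only the embedded-replicas branch into a local turso-crm directory
git clone -b embedded-replicas <repository-url> turso-crm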
cd into the project's directory and install the project's dependencies.
cd turso-crm
npm install
Rename the project's “.env.example” file to “.env.production” on the Linode server and populate all the necessary environment variables with the required data.
mv .env.example .env.production
nano .env.production # open the file on the nano editor
Finally, build and run the project in production mode.
npm run build
npm run start
pm2 is a good candidate for managing the project on the server: it has built-in load balancing, lets you monitor logs, alerts you on bugs and exceptions, and more. For example:
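npm install -g pm2
# run the app's npm start script under pm2 (the process name is illustrative)
pm2 start npm --name turso-crm -- start
# tail the app's logs
pm2 logs turso-crm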
Since there's a chance of our SaaS service making it big and ending up serving numerous tenants (and, with them, just as many Turso databases), we should be structured about it and place all those embedded replicas under a “/databases” folder.
Before opening any page, first run the following command:
npm run start-replicas
This command runs code that simply creates a “/databases” directory at the root of our project and adds to it the embedded replicas (SQLite database files) for the tenants that already exist on Turso; a sketch of what it might do follows the output below. New embedded replicas will be created as new tenants register with the service.
On completion, the npm script prints a summary of what happened; this is also the output you should see when checking the logs of the running app.
{"ok":"Found 3 organizations!"}
I won't be going over setting up a reverse proxy, SSL, or where to get a domain, since I'm assuming you already have an idea; for a combination of the first two, Caddy is a sound option.
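A minimal Caddyfile could look like the following, assuming the Remix app listens on port 3000; Caddy provisions the TLS certificate automatically.

# Caddyfile (a minimal sketch; port 3000 is an assumption)
your-domain.com {
    reverse_proxy localhost:3000
}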
You can then proceed to using the service.
The Delta() class that we saw in the earlier code (and that appears quite a bit in the project's source code) is a simple utility class I added to benchmark individual queries and overall execution times. When running this project, you should see time deltas logged to the console. The database read times are impressively at the microsecond level.
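Here's a sketch of what such a timer class might look like, assuming it wraps Node.js's high-resolution clock; the actual implementation in the repo may differ.

// app/lib/delta.ts (an illustrative sketch, not necessarily the exact implementation)
export class Delta {
  private start = process.hrtime.bigint();

  // log the elapsed time, e.g. "Creation of new agent: 832.115 ms"
  stop(label: string) {
    const elapsedMs = Number(process.hrtime.bigint() - this.start) / 1e6;
    console.log(`${label}: ${elapsedMs.toFixed(3)} ms`);
  }
}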
Here's an example of the numbers we see when adding new tenants to the service.
Fetching a single organization: 0.281 ms
Creating a new organization: 832.115 ms
Fetching created organization: 0.470 ms
And here are the total execution times for all the code that resolves into data for the front end of our Remix app, database transactions included. The pages measured are the homepage (in this instance, after a few organizations had been added to the service), the organization dashboard, and the conversation page, the last two involving multiple queries.
# home-page [5 consecutive requests]
Page requests latency [/]: 6.726 ms
Page requests latency [/]: 8.564 ms
Page requests latency [/]: 7.408 ms
Page requests latency [/]: 9.284 ms
Page requests latency [/]: 5.055 ms
# organization dashboard [5 consecutive requests]
Page requests latency [/dash]: 12.746 ms
Page requests latency [/dash]: 8.002 ms
Page requests latency [/dash]: 16.614 ms
Page requests latency [/dash]: 13.574 ms
Page requests latency [/dash]: 12.637 ms
# conversation page [5 consecutive requests]
Page requests latency [/conversation/$orgUsername]: 13.302 ms
Page requests latency [/conversation/$orgUsername]: 10.680 ms
Page requests latency [/conversation/$orgUsername]: 11.941 ms
Page requests latency [/conversation/$orgUsername]: 8.611 ms
Page requests latency [/conversation/$orgUsername]: 12.906 ms
Here are the accompanying screenshots of the Time To First Byte (TTFB) as seen in the browser for these three pages.
Home Page.
Organization dashboard page.
Organization conversation page.
It goes without saying that, for the pages' TTFB latency numbers to be as low as seen in the above screenshots, both the compute (in this case, our Linode server) and the database (already achieved here with embedded replicas) need to be close to the user visiting the website, at least for reads. Akamai, just like Turso, lets you provision more servers in more locations to achieve this for all your users.
And with these numbers, Turso with its embedded replicas already delivers beyond what most cache options promise, without adding complexity to your project's infrastructure.
To learn more about the technologies used in this project, visit the following links: