Vector Similarity Search is now available!
With AI products and features powered by LLMs surging in popularity, we’ve seen many of our users building AI features into their products, as well as entirely new AI products. That has made vector search extremely important, but until now SQLite had no built-in capabilities for it.
One popular way to add vector functionality to SQLite is sqlite-vss, a SQLite extension for vector search that is also supported by the Turso platform. But we constantly heard feedback that managing the extension was challenging, and that the extension itself was problematic.
To address those issues, today we are announcing that native Vector Similarity Search has been added to Turso. New and existing Turso users can get started right now, but if you’re new to the LLM space, read on for tips about how to make use of Vector Embeddings for Vector Similarity Search.
We're enabling all this in libSQL, the Open Contribution fork of SQLite that powers Turso. LibSQL is designed to expand the scope and capabilities of SQLite while maintaining its core attributes and advantages, so adding native Vector Search fits quite nicely with its intended purpose.
Our implementation of Vector Search is designed to work especially well in areas where SQLite already excels.
To use vector search in Turso or libSQL, there is no need to add any extension. Once you are on a supported version, a new column type is available. This works in every build of libSQL: whether you are connecting to the Turso service or running an in-memory database without any network connectivity, vectors are available.
For example, let’s create a table that includes vectors:
CREATE TABLE movies (
title TEXT,
year INT,
embedding F32_BLOB(3)
);
In the example above, we are declaring a column of 3-dimensional vectors of 32-bit floating point numbers. Other element types, like booleans (useful in models tailored for binary quantization), are coming soon.
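Real embedding models produce much higher-dimensional vectors than the toy 3-dimensional examples here; the dimension in F32_BLOB(N) just needs to match your model's output. A sketch (the 1536 dimension and table name are illustrative assumptions matching some popular embedding models, not a requirement):

```sql
-- Hypothetical table for documents embedded with a 1536-dimensional model.
CREATE TABLE documents (
  id INTEGER PRIMARY KEY,
  body TEXT,
  embedding F32_BLOB(1536)  -- dimension must match the embedding model's output
);
```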
One of the main benefits of vectors being just a data type is that they fit naturally into your queries. You can insert data just as you would into any table:
INSERT INTO movies (title, year, embedding)
VALUES
(
'Napoleon',
2023,
vector('[1,2,3]')
),
(
'Black Hawk Down',
2001,
vector('[10,11,12]')
),
(
'Gladiator',
2000,
vector('[7,8,9]')
),
(
'Blade Runner',
1982,
vector('[4,5,6]')
);
And you can write queries that combine vectors and standard SQLite data:
SELECT title,
vector_extract(embedding),
vector_distance_cos(embedding, vector('[5,6,7]'))
FROM movies;
Vectors are stored as SQLite blobs. The vector function transforms a string representation into our vector format, and the vector_extract function does the opposite, outputting the underlying vector as a string.
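Because these are ordinary SQL functions, you can also compare stored vectors against each other, not just against literals. A sketch using the movies table above (a full cross join like this is only reasonable for small tables):

```sql
-- Pairwise cosine distances between the stored embeddings.
SELECT
  a.title AS movie_a,
  b.title AS movie_b,
  vector_distance_cos(a.embedding, b.embedding) AS distance
FROM movies a
JOIN movies b ON a.rowid < b.rowid  -- each pair once, no self-pairs
ORDER BY distance;
```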
In some situations, querying the vectors directly is not what we want. Because vectors are just part of SQLite, you can write queries that take full advantage of the SQL language, combining vector data with relational data.
For example, to find the top 3 movies created after 2020, ordered by vector similarity, you would write:
SELECT *
FROM
movies
WHERE
year >= 2020
ORDER BY
vector_distance_cos(embedding, '[3,1,2]')
LIMIT 3;
Note that this query has perfect recall, since it performs a full table scan over all the vectors in the table, at the cost of extra computation. In our experience, response times with a full table scan remain acceptable up to around 10,000 vectors.
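Since vector_distance_cos is an ordinary SQL expression, it can also appear in a WHERE clause to filter by a similarity threshold instead of taking a fixed number of results. A sketch, assuming smaller distances mean more similar vectors (the 0.05 cutoff is an arbitrary illustration):

```sql
-- Keep only movies whose embedding is close to the query vector.
SELECT title,
       vector_distance_cos(embedding, vector('[5,6,7]')) AS distance
FROM movies
WHERE vector_distance_cos(embedding, vector('[5,6,7]')) < 0.05
ORDER BY distance;
```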
For larger datasets, Turso implements Approximate Nearest Neighbor (ANN) search using the DiskANN algorithm. Creating an index is easy, and not unlike creating any other index in SQLite: you just need to wrap the vector column in the special libsql_vector_idx function:
CREATE INDEX movies_idx ON movies ( libsql_vector_idx(embedding) );
Using the index is not automatic, since it is internally represented as a separate table. We can modify the previous query slightly to make sure the index is consulted:
SELECT
title,
year
FROM
vector_top_k('movies_idx', '[4,5,6]', 3)
JOIN
movies
ON
movies.rowid = id
WHERE
year >= 2020;
Unlike some other popular algorithms, DiskANN is designed to keep memory usage to a minimum. This makes it a great match for the use cases where Turso already excels.
Turso allows for an easy implementation of a database-per-tenant pattern, where databases can be cheaply created on-demand. Keeping memory consumption at bay is critical for our ability to fit hundreds of thousands of databases in a single server, which is a cornerstone of this pattern.
By combining vector search with a database-per-tenant architecture, you can keep context data completely private and ensure that one tenant's data is never mixed with another's.
Turso can also replicate data inside your own servers, alleviating the need for caches or complicated architectures.
Paired with embedded replicas, which allow a database to be replicated anywhere, including mobile devices, Turso vector search offers a compelling on-device inference story.
For more information and examples on using vector search, see the Turso documentation.
Our vector implementation is in BETA.
Vectors are available over HTTP on the Turso Platform. During the beta period, users can create a new group (recommended) with the command:
turso group create default
Alternatively, you can update your existing group to the latest version (assuming it is called default; use turso group list to check):
turso group update default
SDKs for local use may still need to be built from source. In particular, the React Native and Android SDKs are still a work in progress.
There is also a #turso-vector-embeddings channel on our Discord server that you can join to learn more.
Vectors are stored as normal SQLite rows and are billed as such. However, there is extra work needed to maintain the index, as well as other vector reads and writes that are not yet accounted for. In the future, usage stats for reads and writes involving vectors may change.