Getting started with pg_vector

The pg_vector extension adds a vector data type to PostgreSQL, letting you efficiently store embeddings and perform similarity operations on them directly within the database.
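If the extension is not already enabled in your database (some managed platforms, including Nile, may enable it for you), you can enable it yourself:

-- enable the pgvector extension (no-op if it is already installed)
CREATE EXTENSION IF NOT EXISTS vector;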

Create tenant table with vector type

Vector types work like any other standard type: you can use them as the type of a column in a tenant table, and Nile will take care of isolating the embeddings per tenant.

-- creating a table to store wiki documents for a Notion-like
-- SaaS application with vector dimension of 3
CREATE TABLE wiki_documents(
    tenant_id uuid,
    id integer,
    embedding vector(3)
);

Store vectors per tenant

Once you have the table defined, you will want to populate it with embeddings. Typically, this is done by calling an embedding model (e.g., OpenAI or HuggingFace), retrieving the embeddings, and storing them in the vector store. Once stored, the embeddings follow the standard tenant rules: they can be isolated, sharded, and placed based on the tenant they belong to.

INSERT INTO wiki_documents (tenant_id, id, embedding)
VALUES ('018ade1a-7843-7e60-9686-714bab650998', 1, '[1,2,3]');
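Embeddings for different tenants live in the same table but stay scoped by tenant. As a sketch, the tenant_id below is a made-up example value:

-- hypothetical second tenant; its embeddings remain isolated by tenant_id
INSERT INTO wiki_documents (tenant_id, id, embedding)
VALUES ('018ade1a-aaaa-bbbb-cccc-714bab650998', 1, '[0,1,4]');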

Query vectors

pg_vector supports three types of vector similarity operators:

Operator: <->
Description: L2 (Euclidean) distance. Measures the straight-line distance between two points in multi-dimensional space, i.e., the length of the shortest path between them.
Use cases: Clustering, k-means clustering, and distance-based classification algorithms.

Operator: <#>
Description: Inner product, also known as the dot product. Measures the similarity or alignment between two vectors by summing the products of their corresponding elements.
Use cases: Similarity comparison and feature selection.

Operator: <=>
Description: Cosine distance. Quantifies the cosine of the angle between two vectors in multi-dimensional space, focusing on direction rather than magnitude.
Use cases: Text similarity, recommendation systems, and any context where you want to compare the direction of vectors.

You can use any of these operators to find the distance between vectors. In a real-world application, you would typically chunk large documents before computing embeddings; for the purpose of this example, assume the documents are small. To get the L2 distance:

SELECT embedding <-> '[3,1,2]' AS distance FROM wiki_documents;

For the inner product, multiply the result by -1 (since <#> returns the negative inner product):

SELECT (embedding <#> '[3,1,2]') * -1 AS inner_product FROM wiki_documents;

For cosine similarity, use 1 minus the cosine distance:

SELECT 1 - (embedding <=> '[3,1,2]') AS cosine_similarity FROM wiki_documents;
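In practice, you usually want the closest documents for a given tenant rather than a bare distance column. A minimal sketch, reusing the tenant_id from the insert above and L2 distance (swap in <#> or <=> as needed):

-- top 5 nearest documents for one tenant, ordered by L2 distance
SELECT id, embedding <-> '[3,1,2]' AS distance
FROM wiki_documents
WHERE tenant_id = '018ade1a-7843-7e60-9686-714bab650998'
ORDER BY embedding <-> '[3,1,2]'
LIMIT 5;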

Vector Indexes

pgvector supports two types of indexes:

  • HNSW
  • IVFFlat

HNSW

An HNSW index creates a multilayer graph. It has slower build times and uses more memory than IVFFlat, but has better query performance (in terms of speed-recall tradeoff). There’s no training step like IVFFlat, so the index can be created without any data in the table.
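Build behavior can be tuned with the m (max connections per layer) and ef_construction (candidate list size during construction) options; the values below are pgvector's defaults, shown only to illustrate the syntax:

-- HNSW index with explicit build parameters (these are the defaults)
CREATE INDEX ON wiki_documents USING hnsw (embedding vector_l2_ops)
WITH (m = 16, ef_construction = 64);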

Add an index for each distance function you want to use.

L2 distance

CREATE INDEX ON wiki_documents USING hnsw (embedding vector_l2_ops);

Inner product

CREATE INDEX ON wiki_documents USING hnsw (embedding vector_ip_ops);

Cosine distance

CREATE INDEX ON wiki_documents USING hnsw (embedding vector_cosine_ops);

Vectors with up to 2,000 dimensions can be indexed.
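Recall at query time is controlled by hnsw.ef_search (default 40); raising it improves recall at the cost of query speed. For example, for the current session:

-- increase the dynamic candidate list for higher recall (default is 40)
SET hnsw.ef_search = 100;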

IVFFlat

An IVFFlat index divides vectors into lists, and then searches a subset of those lists that are closest to the query vector. It has faster build times and uses less memory than HNSW, but has lower query performance (in terms of speed-recall tradeoff).

Three keys to achieving good recall are:

  1. Create the index after the table has some data
  2. Choose an appropriate number of lists - a good place to start is rows / 1000 for up to 1M rows and sqrt(rows) for over 1M rows
  3. When querying, specify an appropriate number of probes (higher is better for recall, lower is better for speed) - a good place to start is sqrt(lists); see the example after this list
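For example, to set the probe count for the current session (a sketch, assuming an index built with lists = 100, so sqrt(100) = 10):

-- probe 10 of the 100 lists on each query (default is 1)
SET ivfflat.probes = 10;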

Add an index for each distance function you want to use.

L2 distance

CREATE INDEX ON wiki_documents USING ivfflat (embedding vector_l2_ops) WITH (lists = 100);

Inner product

CREATE INDEX ON wiki_documents USING ivfflat (embedding vector_ip_ops) WITH (lists = 100);

Cosine distance

CREATE INDEX ON wiki_documents USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);

Vectors with up to 2,000 dimensions can be indexed.

You can read more about pg_vector in the pgvector GitHub repository (https://github.com/pgvector/pgvector).

If you have any feedback or questions on building AI-native SaaS applications on Nile, please reach out on our GitHub discussion forum or our Discord community.