# Quickstart
This guide takes you through the full tsql workflow:
- Define schema migrations in TypeScript.
- Apply migrations in CI or application startup.
- Treat each migration as SQL (DDL + DML) in one callback.
- Use the same schema for typed queries in server code.
## Define your schema with migrations
tsql treats your ordered migration history as the source of truth.
```ts
import { defineDatabase } from "@bms/tsql";

export const schema = defineDatabase((db) =>
  db
    .migration("001_create_users", (schemaBuilder) =>
      schemaBuilder.createTable("users", (t) => ({
        id: t.integer().primaryKey().autoIncrement(),
        email: t.text().notNull(),
        createdAt: t.datetime().notNull(),
      })),
    )
    .migration("002_add_profile_fields", (schemaBuilder) =>
      schemaBuilder.alterTable("users", (t) => ({
        displayName: t.text(),
        avatarUrl: t.text(),
      })),
    ),
);
```
Use stable, ordered migration names (for example `001_`, `002_`, ...). `db.migrate()` uses those names to track applied migrations in the `__tsql_migrations` table.
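Zero-padded prefixes matter because migration names sort lexicographically in file listings, logs, and tracking tables. A quick illustration in plain TypeScript (independent of tsql):

```ts
// Zero-padded prefixes keep lexicographic order equal to numeric order.
const sortAsc = (names: readonly string[]) => [...names].sort();

// Padded names sort in the intended order...
console.log(sortAsc(["002_add_profile_fields", "001_create_users", "010_add_teams"]));

// ...but without padding, "10_" sorts before "2_" (and even before "1_").
console.log(sortAsc(["1_create_users", "2_add_profile_fields", "10_add_teams"]));
```

Padding to three digits leaves room for up to 999 migrations before the ordering breaks down.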
## Connect and run migrations
At runtime you provide three things:
- `schema`: your `defineDatabase(...)` result
- `dialect`: currently `sqliteDialect()`
- `connection`: an adapter that implements `SqlConnection.execute(sql, params)`
```ts
import { DatabaseSync } from "node:sqlite";
import type { SQLInputValue } from "node:sqlite";
import { connect, sqliteDialect, type SqlConnection } from "@bms/tsql";
import { schema } from "./schema";

const sqlite = new DatabaseSync(process.env.DB_PATH ?? "./app.sqlite");

// Minimal adapter: route reads through .all() so rows come back,
// and everything else through .run().
const connection: SqlConnection = {
  execute: async (sql, params) => {
    const statement = sqlite.prepare(sql);
    const sqliteParams = params as readonly SQLInputValue[];
    if (/^\s*select\s/i.test(sql)) {
      return { rows: statement.all(...sqliteParams) };
    }
    statement.run(...sqliteParams);
    return { rows: [] };
  },
};

export const db = connect({
  schema,
  dialect: sqliteDialect(),
  connection,
});
```
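The regex above only recognizes statements that literally start with `SELECT`, so it misses reads that begin with a CTE (`WITH ... SELECT`), SQLite `PRAGMA`s, or leading comments. A slightly more forgiving classifier you could swap into the adapter, as a sketch (`isReadStatement` is a name invented here, not part of tsql):

```ts
// Sketch: classify a SQL string as a read, so the adapter knows whether to
// call statement.all() (rows back) or statement.run() (no rows).
function isReadStatement(sql: string): boolean {
  // Strip leading whitespace, line comments, and block comments.
  let s = sql;
  for (;;) {
    const next = s.replace(/^(\s+|--[^\n]*\n?|\/\*[\s\S]*?\*\/)/, "");
    if (next === s) break;
    s = next;
  }
  // Caveat: in SQLite, WITH can also prefix INSERT/UPDATE/DELETE; adjust this
  // if your migrations use CTE-based writes.
  return /^(select|with|pragma|explain)\b/i.test(s);
}
```

Inside `execute`, the branch then becomes `if (isReadStatement(sql)) { ... }` instead of the bare regex test.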
Then run migrations once in a bootstrap path:
```ts
await db.migrate();
```
## SQL rewrites in migrations
Migrations are SQL-first scripts authored in TypeScript. Chain DDL steps (`alterTable`, `createIndex`) and DML steps (`update`, `insert`, `delete`, `raw`) in order.
Example: add `normalizedEmail` and backfill it in one migration.
```ts
// schema.ts
export const schema = defineDatabase((db) =>
  db
    .migration("001_create_users", (schemaBuilder) =>
      schemaBuilder.createTable("users", (t) => ({
        id: t.integer().primaryKey().autoIncrement(),
        email: t.text().notNull(),
      })),
    )
    .migration("002_add_normalized_email", (migration) =>
      migration
        .alterTable("users", (t) => ({
          normalizedEmail: t.text(),
        }))
        .createIndex("idx_users_normalized_email", "users", ["normalizedEmail"])
        .update("users")
        .where(({ users }) => users.normalizedEmail.isNull())
        .set(({ users }) => ({
          normalizedEmail: users.email.trim().lower(),
        }))
        .execute(),
    ),
);
```
```ts
// migrate.ts
import { db } from "./db";

await db.migrate();
```
If your transform is multi-step, keep chaining in the same callback (schema ops + `insert`/`update`/`delete`/`raw`). Each migration runs in a transaction, and `db.migrate()` skips migrations that were already recorded.
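The skip behaviour boils down to a set difference between the declared migration names and the names already recorded. A standalone sketch of that idea (not tsql's internals; `pendingMigrations` is a name invented here):

```ts
// Sketch: given the ordered declared migrations and the names already
// recorded (e.g. read from __tsql_migrations), return what still needs to run.
function pendingMigrations(
  declared: readonly string[],
  applied: ReadonlySet<string>,
): string[] {
  return declared.filter((name) => !applied.has(name));
}

const declared = ["001_create_users", "002_add_normalized_email"];
const applied = new Set(["001_create_users"]);
console.log(pendingMigrations(declared, applied)); // ["002_add_normalized_email"]
```

Because the result preserves the declared order, re-running the bootstrap after a partial deploy only executes the tail that has not been applied yet.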
## Run this in CI or monolith startup

### CI / deployment step (recommended)
Create a dedicated script:
```ts
// scripts/migrate.ts
import { db } from "../src/db";

await db.migrate();
```
Run it before starting the new version:
```sh
pnpm tsx scripts/migrate.ts
```
### Monolith startup
For single-process monoliths, running migrations during startup is fine:
```ts
import { db } from "./db";
import { startServer } from "./server";

await db.migrate();
await startServer();
```
## Write typed queries in server code
The same schema used for migrations drives query typing.
```ts
import { db } from "./db";

const users = await db
  .from("users")
  .where(({ users }) => users.email.like("%@example.com", db.dialect))
  .orderBy(({ users }) => users.createdAt.desc())
  .select(({ users }) => ({
    id: users.id,
    email: users.email,
    displayName: users.displayName,
  }))
  .execute();
```
For compile-only flows (e.g. logging the SQL or executing it elsewhere), use `.compile()` instead of `.execute()`.
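Assuming `.compile()` yields the SQL string plus its bound parameters, something like `{ sql, params }` (an assumed shape, not a confirmed tsql API; check the reference before relying on it), a small logging helper might look like:

```ts
// Assumed shape of a compiled query: { sql, params }. This is a sketch for
// illustration, not the library's actual return type.
interface CompiledQuery {
  sql: string;
  params: readonly unknown[];
}

// Format a compiled query for a log line or for handing to another executor.
function describeQuery(compiled: CompiledQuery): string {
  return `${compiled.sql} -- params: ${JSON.stringify(compiled.params)}`;
}

console.log(
  describeQuery({
    sql: "select id from users where email like ?",
    params: ["%@example.com"],
  }),
);
```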