Linked QL


Save yourself the overhead of working with SQL and structured data - from the time and effort spent figuring out relational queries to the labour of managing schemas! Try a modern, simple take on SQL and databases in general!

Linked QL is a simple database abstraction for modern apps that drastically cuts down on the SQL you write and the schema management work you do.

What does it do at a high level?

💥 Takes the ORM and friends out of the way and lets you write actual SQL, but this time, beautiful and streamlined SQL that you will actually enjoy. (Linked QL extends standard SQL with new syntax sugars that let you write relational queries in less than half the code.)

⚡️ Takes the process out of schema management and lets you just ALTER away your DB, but with automatic schema versioning happening behind the scenes.

💥 Brings the "schema-as-code" philosophy to its true practice wherein you are able to manage your entire DB structure out of a single schema.json (or schema.yml) file.

Linked QL comes as a small library and is usable over your DB of choice - from the server-side Postgres, mariadb and MySQL, to the client-side IndexedDB, and the in-memory plain JSON object!

Getting Started

Install Linked QL:

npm install @linked-db/linked-ql

Obtain the Linked QL client for your target database:

  1. For SQL databases, install the regular SQL client you use for your DB. (Typically, pg for PostgreSQL, mariadb for MariaDB, mysql/mysql2 for MySQL databases.)

    Using PostgreSQL as an example, install the pg client:

    npm install pg

    Use Linked QL as a wrapper over that:

    // Import pg and LinkedQl
    import pg from 'pg';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect pg
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(pgClient, { dialect: 'postgres' });
    See also: sample setup for mariadb

    Note that your MariaDB database must be v10.5.2 or higher (comparably, MySQL v8 for MySQL). In addition, Linked QL needs to be able to run multiple statements in one query, so the multipleStatements connection parameter below is required. The bitOneIsBoolean parameter also needs to be in place.

    // Import mariadb and LinkedQl
    import mariadb from 'mariadb';
    import LinkedQl from '@linked-db/linked-ql/sql';
    
    // Connect mariadb
    const myConnection = await mariadb.createConnection({
        host: '127.0.0.1',
        user: 'root',
        port: 3306,
        // -------
        multipleStatements: true, // Required
        bitOneIsBoolean: true, // The default, but required
        trace: true, // Recommended
    });
    
    // Use LinkedQl as a wrapper over that
    const client = new LinkedQl(myConnection, { dialect: 'mysql' });
  2. For the client-side IndexedDB, import and instantiate the IDB client. (Coming soon)

    // Import IDB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/idb';
    
    // Create an instance.
    const client = new LinkedQl();
  3. To work with Linked QL's in-memory object database, import and instantiate the ODB client. (Coming soon)

    // Import ODB as LinkedQl
    import LinkedQl from '@linked-db/linked-ql/odb';
    
    // Create an instance.
    const client = new LinkedQl();

All client instances above implement the same interface:

client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] }).then(result => {
    console.log(result);
});
const result = await client.query('SELECT fname, lname FROM users WHERE role = $1', { values: ['admin'] });
console.log(result);

This API and more are covered right in the API area.

By design, you are able to choose between running raw SQL using client.query() and running equivalent statements using APIs like client.createDatabase(), client.alterDatabase(), client.dropDatabase(), database.createTable(), database.alterTable(), database.dropTable(), table.select(), table.insert(), table.upsert(), table.update(), table.delete(), etc. (All as covered in the API area.)
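
For a quick taste of the structured style, here's a minimal sketch of the earlier role-based query written with those APIs - assuming a database.table() accessor and treating the select() parameter shapes below as illustrative only (the API area has the real signatures):

// A sketch only - parameter shapes are illustrative; see the API area for the real signatures
const users = client.database('public').table('users');
const admins = await users.select(['fname', 'lname'], { where: { role: 'admin' } });
console.log(admins);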

✨ Now, that's like: whatever your query style or use case, there's a thing in Linked QL for you!

Introducing Magic Paths

💥 Express relationships graphically! You shouldn't always have to write JOINS!

Meet Linked QL's magic path operators, a syntax extension to SQL, that lets you connect to columns on other tables without writing a single JOIN clause. Linked QL uses heuristics on your DB structure to figure out the details and the relevant JOINS behind the scenes.

Where you normally would write...

-- Regular SQL
SELECT title, users.fname AS author_name FROM posts
LEFT JOIN users ON users.id = posts.author

Linked QL lets you draw a path to express the relationship:

-- Linked QL
SELECT title, author ~> fname AS author_name FROM posts

And here's a scenario showing a typical schema and an example query in each syntax:

-- The users table
CREATE TABLE users (
    id int primary key generated always as identity,
    title varchar,
    name varchar,
    role int references roles (id),
    created_time timestamp
);
-- The books table
CREATE TABLE books (
    id int primary key generated always as identity,
    title varchar,
    content varchar,
    author int references users (id),
    created_time timestamp
);
-- Regular SQL
SELECT book.id, book.title, content, book.created_time, user.id AS author_id, user.title AS author_title, user.name AS author_name 
FROM books AS book LEFT JOIN users AS user ON user.id = book.author
-- Linked QL
SELECT id, title, content, created_time, author ~> id, author ~> title, author ~> name 
FROM books

Now, that translates to about 50% less code, with the whole namespacing exercise eliminated! Yet, no questions asked about your schema, and none of the usual upfront relationship mapping!

Taking things further, you are able to chain these operators to any level for your multi-level relationships:

-- Linked QL
SELECT * FROM books
WHERE author ~> role ~> codename = 'admin'

and for the different forms of relationships out there (one-to-many, many-to-one, many-to-many), path operators can go in any direction:

-- Linked QL
SELECT * FROM users
WHERE author <~ books ~> title = 'Beauty and the Beast'

Plus, with Linked QL being a superset of SQL, you can combine the new magic with the old LEFT JOIN/RIGHT JOIN/etc. clauses with zero implications:

-- Linked QL
SELECT users.*, some_other_table.id FROM users
LEFT JOIN some_other_table ON some_other_condition
WHERE author <~ books ~> title = 'Beauty and the Beast'

giving you just the right tool for the job in every scenario: the regular JOINS for whatever calls for them; magic paths for when those very JOINS are overkill!
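
In application code, these path queries run through the same client.query() call as any other statement - a quick sketch, reusing the example schema above:

// Magic paths go through the regular query API
const admins = await client.query(
    `SELECT id, title, author ~> name AS author_name
     FROM books
     WHERE author ~> role ~> codename = $1`,
    { values: ['admin'] }
);
console.log(admins);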

✨ We think this will make a lot of your tooling and manual work around SQL obsolete and your codebase saner! You essentially get back SQL - and with it, a dose of magic!

Introducing Auto-Versioning

⚡️ Create, alter, and drop schemas without needing to worry about versioning.

You may be doing too much!

Databases have historically lacked the concept of versioning, and that has seen all of the engineering work pushed down to the client application. If you've ever had to adopt a special process for defining and managing your schemas, wherein changes are handled through specially-named, chronologically-ordered files within your application...

app
├─migrations
  ├─20240523_1759_create_users_table_and_drop_accounts_table
  │  └[UP]:
  │    CREATE TABLE users (id int, first_name varchar);
  │    DROP TABLE accounts;
  │
  ├─20240523_1760_add_last_login_to_users_table_and_rename_order_status_table
  │  └[UP]:
  │    ALTER TABLE users ADD COLUMN last_name varchar;
  │    ALTER TABLE order_status RENAME TO order_tracking;
  │
  ├─ +256 more...

with each of those also needing to be paired with corresponding "DOWN" logic (the logic that reverses the change):

app
├─migrations
  ├─20240523_1760_add_last_login_to_users_table_and_rename_order_status_table:
  │  └[DOWN]:
  │    ALTER TABLE users DROP COLUMN last_name;
  │    ALTER TABLE order_tracking RENAME TO order_status;
  │
  ├─20240523_1759_create_users_table_and_drop_accounts_table:
  │  └[DOWN]:
  │    DROP TABLE users;
  │    CREATE TABLE accounts (id int, first_name varchar);
  │
  ├─ +256 more...

then you've faced the problem that this deficiency in databases creates!

Meet Linked QL's Automatic Schema Savepoint and Rollback feature - a little addition to your database that does the heavy lifting of schema versioning at the database level!

Here, you alter your schema and get back a reference to a "savepoint" automatically created for you:

// Alter schema
const savepoint = await client.query('CREATE TABLE public.users (id int, name varchar)', {
    description: 'Create users table',
});
// As an example of what you see:
console.log(savepoint.description);   // Create users table
console.log(savepoint.versionTag);    // 1
console.log(savepoint.savepointDate); // 2024-07-17T22:40:56.786Z
// Or to see everything:
console.table(savepoint.toJSON());

You're also able to access the same savepoint on-demand using the database.savepoint() API:

const savepoint = await client.database('public').savepoint();

Either way, you get a nifty rollback button should you want to rollback:

// Rollback all associated changes (Gets the users table dropped)
await savepoint.rollback();

and you can roll all the way back to a point in time, should you want to:

// Rollback to public@3
let savepoint;
while((savepoint = await client.database('public').savepoint()) && savepoint.versionTag <= 3) {
    await savepoint.rollback();
}

✨ Now, that's a go-ahead to alter your DB carefree! But this time, with a safety net!

Taking that further, you also get a way to roll forward from a rollback state! (Much like hitting "Redo" to reverse a certain "Undo").

This time, on calling database.savepoint(), you indicate that you want a "forward" movement from your current point in time:

// "Undo" the last rollback (Gets the users table re-created)
let savepoint = await client.database('public').savepoint({ direction: 'forward' });
await savepoint.rollback();

You essentially get time travel in any direction - as seamlessly as scrubbing back and forth on a movie timeline!

✨ Meanwhile, your schema histories now live as data (instead of as files), making them queryable, analyzable, and even visualizable, just as regular data! Plus, the DB now essentially becomes the absolute source of truth for both itself and its client applications!

Re-Introducing Schema-as-Code with schema.json

💥 Have your entire DB structure live in a single schema.json (or schema.yml) file that you edit in-place!

With schema versioning now happening at the database level, the whole concept of database migrations at the application level should also change: no need to keep a growing list of migration files just to maintain past states! We found that you could essentially streamline your whole "database" footprint to fit in a single schema.json (or schema.yml) file!

schema.json

[
    {
        // string
        "name": "database_1",
        // TableSchemaSpec[]
        "tables": []
    },
    {
        // string
        "name": "database_2",
        // TableSchemaSpec[]
        "tables": []
    }
]
See a complete example
[
    {
        // string - required
        "name": "database_1",
        // TableSchemaSpec[]
        "tables": [
            {
                // string - required
                "name": "users",
                // ColumnSchemaSpec[] - required
                "columns": [
                    {
                        // string - required
                        "name": "id",
                        // string or array like ["int",3] - required
                        "type": "int",
                        // boolean or PrimaryKeySchemaSpec
                        "primaryKey": true,
                        // boolean or IdentityConstraintSchemaSpec
                        "identity": true
                    },
                    {
                        // string - required
                        "name": "first_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101]
                    },
                    {
                        // string - required
                        "name": "last_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101]
                    },
                    {
                        // string - required
                        "name": "full_name",
                        // array or string like "varchar" - required
                        "type": ["varchar", 101],
                        // string or ExpressionConstraintSchemaSpec
                        "expression": "(first_name || ' ' || last_name)"
                    },
                    {
                        // string - required
                        "name": "email",
                        // array or string like "varchar" - required
                        "type": ["varchar", 50],
                        // boolean or UniqueKeySchemaSpec
                        "uniqueKey": true,
                        // boolean
                        "notNull": true,
                        // string or CheckConstraintSchemaSpec
                        "check": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')"
                    },
                    {
                        // string - required
                        "name": "parent",
                        // string or array like ["int",3] - required
                        "type": "int",
                        // boolean
                        "notNull": true,
                        // ForeignKeySchemaSpec
                        "references": {
                            // string or string[] like ["database_2", "users"] - required
                            "targetTable": "users",
                            // string[] - required
                            "targetColumns": ["id"],
                            // string
                            "matchRule": "full",
                            // string or object like { rule: "cascade", columns: ["col1"] }
                            "updateRule": "cascade",
                            // string or object like { rule: "restrict", columns: ["col1"] }
                            "deleteRule": "restrict"
                        }
                    }
                ],
                // TableConstraintSchemaType[]
                "constraints": [
                    {
                        // string - required
                        "type": "PRIMARY_KEY",
                        // string[] - required
                        "columns": ["id_2"],
                    },
                    {
                        // string - required
                        "type": "FOREIGN_KEY",
                        // string[] - required
                        "columns": ["parent_2"],
                        // string or string[] like ["database_2", "users"] - required
                        "targetTable": "users",
                        // string[] - required
                        "targetColumns": ["id"],
                        // string
                        "matchRule": "full",
                        // string or object like { rule: "cascade", columns: ["col1"] }
                        "updateRule": "cascade",
                        // string or object like { rule: "restrict", columns: ["col1"] }
                        "deleteRule": "restrict"
                    },
                    {
                        // string - required
                        "type": "UNIQUE_KEY",
                        // string
                        "name": "constraint_name",
                        // string[] - required
                        "columns": ["parent", "full_name"]
                    },
                    {
                        // string - required
                        "type": "CHECK",
                        // string - required
                        "expr": "(email ~* '^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$')"
                    }
                ],
                // IndexSchemaSpec[]
                "indexes": [
                    {
                        // string - required
                        "type": "FULLTEXT",
                        // string[] - required
                        "columns": ["full_name"]
                    },
                    {
                        // string - required
                        "type": "SPATIAL",
                        // string[] - required
                        "columns": ["full_name"]
                    }
                ]
            }
        ]
    },
    {
        // string - required
        "name": "database_2",
        // TableSchemaSpec[]
        "tables": []
    }
]
See the schema spec
interface DatabaseSchemaSpec {
    name: string;
    tables: TableSchemaSpec[];
}
interface TableSchemaSpec {
    name: string | string[];
    columns: ColumnSchemaSpec[];
    constraints: TableConstraintSchemaType[];
    indexes: IndexSchemaSpec[];
}
interface ColumnSchemaSpec {
    name: string;
    type: string | array;
    primaryKey?: boolean | PrimaryKeySchemaSpec;
    [ foreignKey | references ]?: ForeignKeySchemaSpec;
    uniqueKey?: boolean | UniqueKeySchemaSpec;
    check?: string | CheckConstraintSchemaSpec;
    default?: string | DefaultConstraintSchemaSpec;
    expression?: string | ExpressionConstraintSchemaSpec;
    identity: boolean | IdentityConstraintSchemaSpec;
    onUpdate?: string | OnUpdateConstraintSchemaSpec; // (MySQL-specific attribute)
    autoIncrement?: boolean; // (MySQL-specific attribute)
    notNull?: boolean;
    null?: boolean;
}
type TableConstraintSchemaType = TablePrimaryKeySchemaSpec | TableForeignKeySchemaSpec | TableUniqueKeySchemaSpec | TableCheckConstraintSchemaSpec;
interface TablePrimaryKeySchemaSpec extends PrimaryKeySchemaSpec {
    type: 'PRIMARY_KEY';
    columns: string[];
}

interface TableForeignKeySchemaSpec extends ForeignKeySchemaSpec {
    type: 'FOREIGN_KEY';
    columns: string[];
}

interface TableUniqueKeySchemaSpec extends UniqueKeySchemaSpec {
    type: 'UNIQUE_KEY';
    columns: string[];
}

interface TableCheckConstraintSchemaSpec extends CheckConstraintSchemaSpec {
    type: 'CHECK';
}
type ColumnConstraintSchemaType = PrimaryKeySchemaSpec | ForeignKeySchemaSpec | UniqueKeySchemaSpec | CheckConstraintSchemaSpec | DefaultConstraintSchemaSpec | ExpressionConstraintSchemaSpec | IdentityConstraintSchemaSpec | OnUpdateConstraintSchemaSpec;
interface PrimaryKeySchemaSpec {
    name: string;
}

interface ForeignKeySchemaSpec {
    name?: string;
    targetTable: string | string[];
    targetColumns: string[];
    matchRule?: string;
    updateRule?: string | { rule: string, columns: string[] };
    deleteRule?: string | { rule: string, columns: string[] };
}

interface UniqueKeySchemaSpec {
    name: string;
}

interface CheckConstraintSchemaSpec {
    name?: string;
    expr: string;
}

interface DefaultConstraintSchemaSpec {
    expr: string;
}

interface ExpressionConstraintSchemaSpec {
    expr: string;
    stored: boolean;
}

interface IdentityConstraintSchemaSpec {
    always: boolean;
}

interface OnUpdateConstraintSchemaSpec {
    expr: string;
}
interface IndexSchemaSpec {
    name?: string;
    type: string;
    columns: string[];
}

If you had that somewhere in your application, say at ./database/schema.json, Linked QL could help keep it in sync both ways with your database:

  • you add or remove a database object, table object, or column object... and it is automatically reflected in your DB structure with a single command: linkedql commit
  • your colleague makes new changes from their codebase... and they are automatically reflected in your local copy at your next git pull, or with a single command: linkedql refresh

You may want to see how that brings us to true "Schema as Code" in practice.

⚡️ You also get to see a version number on each database object in your schema, essentially incrementing on each migrate operation (whether by you or by a colleague), and decrementing on each rollback operation (whether by you or by a colleague).

To setup:

  1. Make a directory within your application for database concerns. Linked QL looks in ./database by default, but you can point it to your preferred location when running Linked QL commands. (A sketch of the resulting layout appears after these steps.)

  2. Have a driver.js file in that directory that has a default export function that returns a Linked QL instance. This will be imported and used by Linked QL to interact with your database. This could look something like:

    import pg from 'pg';
    import SQLClient from '@linked-db/linked-ql/sql';
    
    const pgClient = new pg.Client({
        host: 'localhost',
        port: 5432,
    });
    await pgClient.connect();
    const sqlClient = new SQLClient(pgClient, { dialect: 'postgres' });
    
    export default function() {
        return sqlClient;
    }
  3. Have your DB structure defined in a schema.json (or schema.yml) file in that directory. (See schema.json above for a guide.)

    Now, you can always extend your DB structure with new objects, drop existing ones, or edit them in-place. Note, however, that for an existing database, table, column, constraint, or index, the name may be changed, but not edited in-place! A "rename" operation is done by adding a temporary $name attribute:

    {
        "name": "old_name",
        "$name": "new_name"
    }

    The old name remaining in place is needed to find the target during migration. The temporary $name attribute automatically disappears after the new name has been picked up by Linked QL at the next linkedql commit.
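
Putting the setup steps together, your database directory would end up looking something like this (a minimal sketch):

app
├─database
  ├─driver.js
  └─schema.json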

To run:

  • Use linkedql commit to walk through your staged local changes and interactively perform a migration against your database.
  • Use linkedql rollback to walk through the latest savepoint at each database and interactively perform a rollback.
  • Use linkedql state to just view the state of each database.

Details of these commands are covered in the CLI area.

🐣 And that's a wrap on Linked QL!

Found this exciting? Don't forget to leave us a star.

DOCS

If you've made it this far, you may want to go here next:

Roadmap

  • [DONE] Implement support for a schema.yml alternative to schema.json file.
  • [DONE] Support dimensional payloads at table.insert(), table.upsert(), table.update().
  • [ONGOING] Support dimensional fields at table.select() and in the returning clause at table.insert(), table.upsert(), table.update().
  • [ONGOING] Improve support for MySQL.
  • [PENDING] Implement support for IndexedDB.
  • [PENDING] Implement the in-memory database.
  • [PENDING] Implement LinkedDB Realtime.
  • [PENDING] Implement DB-native extensions of LinkedDB.

Much of that could happen sooner with your support! If you'd like to help out, please consider a sponsorship. PRs are also always welcome.

Issues

To report bugs or request features, please submit an issue to this repository.

License

MIT.
