Development

    • Install rustup & cargo.

    • [Optional but highly recommended] Install cargo-insta, our testing framework:
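
      cargo install cargo-insta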

    • That’s it! Running the unit tests for the prql-compiler crate after cloning the repo should complete successfully:

      cargo test -p prql-compiler --lib

      …or, to run tests and update the test snapshots:

      cargo insta test --accept -p prql-compiler --lib

      There’s more context on our tests in the Tests section below.

    That’s sufficient for making an initial contribution to the compiler.


    For more advanced development, for example compiling for wasm or previewing the website, we need some additional tools:

    • Install Task; either brew install go-task/tap/go-task or as described in its installation docs.

    • Then run the setup-dev task. This runs commands from our Taskfile.yml, installing dependencies with cargo, brew, npm & pip, and suggests some VS Code extensions.

      task setup-dev
    • We’ll need cargo-insta, to update snapshot tests:
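
      cargo install cargo-insta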

    • We’ll need a couple of additional components, which most systems will have already. The easiest way to check whether they’re installed is to try running the full tests:

      cargo test

      …and if that doesn’t complete successfully, check we have:

      • A clang compiler, to compile the DuckDB integration tests, since we use duckdb-rs. To install one:

        • On macOS, install xcode with xcode-select --install
        • On Debian Linux, apt-get update && apt-get install clang
        • On Windows, duckdb-rs isn’t supported, so these tests are excluded
      • Python >= 3.7, to compile prql-python.
    • For more involved contributions, such as building the website, playground, book, or some release artifacts, we’ll need some additional tools. But we won’t need those immediately, and the error messages on what’s missing should be clear when we attempt those things. When we hit them, the Taskfile.yml will be a good source to copy & paste instructions from.

    We have a couple of tasks which incorporate all building & testing. While they don’t need to be run as part of a standard dev loop — generally we’ll want to run a more specific test — they can be useful as a backstop to ensure everything works, and as a reference for how each part of the repo is built & tested. They should be consistent with the GitHub Actions workflows; please report any inconsistencies.

    To build everything:
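
      task build-all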

    To run all tests (which includes building everything):
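
      task test-all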

      Contributions

      We’re similar to most projects on GitHub — open a Pull Request with a suggested change!

      • If a change is user-facing, please add a line in CHANGELOG.md, with {message}, ({@contributor, #X}) where X is the PR number (see the example below).
        • If there’s a missing entry, a follow-up PR containing just the changelog entry is welcome.
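
        For instance, a hypothetical entry (the feature name and PR number are invented, purely for illustration):

          - Add `array_sort` function (@contributor, #1234)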
      • We’re using the Conventional Commits message format, enforced through CI.
      • We merge any code that makes PRQL better.
      • A PR doesn’t need to be perfect to be merged; it doesn’t need to solve a big problem. It needs to:
        • be in the right direction,
        • make incremental progress,
        • be explicit on its current state, so others can continue the progress.
      • If you have merge permissions, and are reasonably confident that a PR is suitable to merge (whether or not you’re the author), feel free to merge.
        • If you don’t have merge permissions and have authored a few PRs, ask and ye shall receive.
      • The primary way we ratchet the code quality is through automated tests.
        • This means PRs almost always need a test to demonstrate incremental progress.
        • If a change breaks functionality without breaking tests, our tests were probably insufficient.
        • If a change breaks existing tests (for example, changing an external API), that indicates we should be careful about merging a change, including soliciting others’ views.
      • We use PR reviews to give general context, offer specific assistance, and collaborate on larger decisions.
        • Reviews around ‘nits’ like code formatting / idioms / etc are very welcome. But the norm is for them to be received as helpful advice, rather than as mandatory tasks to complete. Adding automated tests & lints to automate these suggestions is welcome.
        • If you have merge permissions and would like a PR to be reviewed before it merges, that’s great — ask or assign a reviewer.
        • If a PR hasn’t received attention after a day, please feel free to ping the pull request.
      • People may review a PR after it’s merged. As part of the understanding that we can merge quickly, contributors are expected to incorporate substantive feedback into a future PR.
      • We should revert quickly if the impact of a PR turns out not to be consistent with our expectations, or there isn’t as much consensus on a decision as we had hoped. It’s very easy to revert code and then re-revert when we’ve resolved the issue; it’s a sign of moving quickly. Other options which resolve issues immediately are also fine, such as commenting out an incorrect test or adding a quick fix for the underlying issue.

      Docs

      We’re very keen on contributions to improve our documentation.

      This includes our docs in the book, on the website, in our code, or in a Readme. We also appreciate issues pointing out that our documentation was confusing, incorrect, or stale — if it’s confusing for you, it’s probably confusing for others.

      Some principles for ensuring our docs remain maintainable:

      • Docs should be as close as possible to the code. Doctests are ideal on this dimension — they’re literally very close to the code, and they can’t drift apart since they’re tested on every commit (see the sketch after this list). Or, for example, it’s better to add text to a --help message than to write a paragraph in the Readme explaining the CLI.
      • We should have some vision of how to maintain docs when we add them. Docs have a habit of falling out of date — the folks reading them are often different from those writing them, they’re separate from the code, generally not possible to test, and rarely the by-product of other contributions. Docs that are concise & specific are easier to maintain.
      • Docs should be specifically relevant to PRQL; anything else we can instead link to.
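
      For example, here is a minimal sketch of a doctest (the double function and my_crate are hypothetical, not part of the PRQL codebase); the example inside the doc comment is compiled and run by cargo test, so it can’t drift from the code:

        /// Doubles a number.
        ///
        /// ```
        /// assert_eq!(my_crate::double(2), 4);
        /// ```
        pub fn double(x: i64) -> i64 {
            x * 2
        }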

      If something doesn’t fit into one of these categories, there are still lots of ways of getting the word out there — a blog post / gist / etc. Let us know and we’re happy to link to it / tweet it.

      Tests

      We use a pyramid of tests — we have fast, focused tests at the bottom of the pyramid, which give us low-latency feedback when developing, and then slower, broader tests which ensure that we don’t miss anything as PRQL develops.[1]

      Our tests, from the bottom of the pyramid to the top:

      • pre-commit hooks — we run a few static checks to ensure the code stays healthy and consistent. They’re defined in .pre-commit-config.yaml, using pre-commit. They can be run locally with:
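
        pre-commit run -a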

        Most of these checks fix the issues they find automatically. Most of them also run on GitHub on every commit; any changes they make are added onto the branch automatically in an additional commit.

        • Checking by MegaLinter, which includes more linters, is also done automatically on GitHub (experimental).
      • Unit tests & inline insta snapshots — we rely on unit tests to rapidly check that our code basically works. We extensively use insta, a snapshot testing tool which writes out the values generated by our code, making it fast & simple to write and modify tests.[2] (There’s a sketch of one below.)

        These are the fastest tests which run our code; they’re designed to run on every save while you’re developing. We include a task which does this:

        task test-rust-fast
        # or, to run on every change:
        task -w test-rust-fast
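
        For reference, a minimal sketch of an inline insta snapshot test (the greet function here is hypothetical, purely for illustration):

          use insta::assert_snapshot;

          // A stand-in for the code under test.
          fn greet(name: &str) -> String {
              format!("Hello, {name}!")
          }

          #[test]
          fn test_greet() {
              // We write everything up to the `@`; running
              // `cargo insta test --accept` fills in, and later updates,
              // the inline snapshot value.
              assert_snapshot!(greet("PRQL"), @"Hello, PRQL!");
          }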
      • Examples in the book — we compile all examples in the PRQL Book, to test that they produce the SQL we expect, and that changes to our code don’t cause any unexpected regressions.

      • Integration tests — these run tests against real databases, to ensure we’re producing correct SQL.

      • GitHub Actions on every commit — we run the tests described up to this point on every commit to a pull request. These are designed to run in under five minutes, and we should be reassessing their scope if they grow beyond that. Once these pass, a pull request can be merged.

        These can be run locally with:

        task test-rust
      • GitHub Actions on specific changes — we run additional tests on pull requests when we identify changes to some paths, such as bindings to other languages.

      • GitHub Actions on merge to main — we run many more tests on every merge to main. This includes testing across OSs, all our language bindings, our Taskfile tasks, a measure of test code coverage, and some performance benchmarks.

        If these tests fail after merging, we revert the merged commit before fixing the test and then re-reverting.

        Most of these will run locally with:

        task test-all
      • GitHub Actions nightly — we run tests that take a long time or are unrelated to code changes, such as security checks, or expensive timing benchmarks, every night.

        We can run these tests before a merge by adding the pr-cron label to the PR.

      The goal of our tests is to allow us to make changes quickly. If you find they’re making it more difficult for you to make changes, or there are missing tests that would give you the confidence to make changes faster, then please raise an issue.


      Website

      The website is published together with the book and the playground, and is automatically built and released on any push to the web branch.

      The web branch points to the latest release plus any website-specific fixes. That way, the compiler behavior in the playground matches the latest release while allowing us to fix mistakes with a tighter loop than every release.

      Fixes to the playground, book, or website should have a label added to their PR — a bot will then open another PR onto the web branch once the initial branch merges.


      Releasing

      Currently we release in a semi-automated way:

      1. PR & merge an updated Changelog. GitHub will produce a draft version on the Draft a new release page, including “New Contributors”.

        We can use a script to generate the first line.

      2. Run cargo release version patch -x && cargo release replace -x to bump the versions, then PR the resulting commit.

      3. After merging, go to Draft a new release, copy the changelog entry into the release description,[4] enter the tag to be created, and hit “Publish”.

      4. From there, the tag and release are created, and all packages are published automatically based on our release workflow.

      5. Add in the sections for a new Changelog:

        ## 0.7.X — [unreleased]
        **Features**:
        **Fixes**:
        **Documentation**:
        **Web**:
        **Integrations**:
        **Internal changes**:
      6. Check whether there are milestones that need to be pushed out.

      We may make this more automated in future; e.g. automatic changelog creation.


      [1]: Our approach is very consistent with @matklad’s advice in his excellent blog post How to Test.

      [2]: Note that only the initial line of each test is written by us; the remainder is filled in by insta.

      [4]: Unfortunately GitHub’s markdown parser interprets single linebreaks as hard line breaks. I haven’t found a better way of making the markdown look reasonable than editing the text manually.