I'm advocating for JJ to build a proper daemon that runs "checks" per change in the background. So you don't run pre-commit checks when committing; they just happen in the background, and by the time you get to sharing your changes, everything has already been verified for each change/commit, effortlessly, without you wasting time or needing to do anything special.
I have something a bit like that implemented in SelfCI (a minimalistic, local-first, Unix-philosophy-abiding CI) https://app.radicle.xyz/nodes/radicle.dpc.pw/rad%3Az2tDzYbAX... and it has replaced my use of pre-commit hooks entirely. Users have already told me that it feels like commit hooks done right.
The main advantage for me is that prek has support for monorepo/workspaces, while staying compatible with existing pre-commit hooks.
So you can have additional .pre-commit-config.yaml files in each workspace under the root, and prek will find and run them all when you commit. The results are collated nicely. Just works.
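To make that concrete, here is a minimal sketch of such a layout; the directory names are made up for illustration, and prek run --all-files is the same invocation the prek GitHub Action uses:

# Hypothetical monorepo layout:
#   .pre-commit-config.yaml                  <- root config
#   services/api/.pre-commit-config.yaml     <- per-workspace config
#   services/web/.pre-commit-config.yaml     <- per-workspace config
# prek finds every config under the root and collates the results:
prek run --all-files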
Having the default hooks reimplemented in Rust is a minor bonus (3rd-party hooks won't be any faster), and using uv as the package manager speeds up hook updates for Python hooks.
Dedicated a whole chapter to it in my latest book, Effective Testing.
The trend of a fast core (in Rust) with a convenient wrapper is great while we are still writing code.
I haven't yet submitted it to upstream for design discussion, but I pushed up my branch[1]. You can also declare a revset that the target revision must match, for extra belts and suspenders (eg., '~conflicts()')
[1] https://github.com/paulsmith/jj/tree/protected-bookmarks
I'm sure this is more reliable than pre-commit, but you still have hooks building Python wheels and whatnot, which fails annoyingly often.
The VFS stuff is not quite finished yet though (it's really complicated). If anyone wants to help me with that it would be welcome!
pre-commit considered harmful if you ask me. prek seems to largely be an improvement but I think it's improving on an already awful platform so you should not use it.
I know I am working on a competing tool, but I don't share the same criticism for lefthook or husky. I think those are fine and in some ways (like simplicity) better than hk.
Is prek much better?
Why not just call a shell script directly? How would you use these with a CI/CD platform?
This is the kind of thing I see and I think to myself: is this solving a problem or is this solving a problem that the real problem created?
Why is your pre-commit so complicated that it needs all this? I wish I could say it could all be much simpler, but I've worked in big tech, and the dynamics of large engineering workforces over time can make this sort of thing do more good than harm. Then again, I wonder if the real problem is very large engineering teams…
I think wasi is a cool way to handle this problem. I don't think security is a reason though.
From an org standpoint you can have them (mandate?) as part of the developer experience.
(Our team doesn't use them, but I can see the potential value)
The prek documentation has a list of many large projects (such as CPython and FastAPI, to name a few) who use it; each link is a PR of how they integrated it into CI if you want to see more: https://prek.j178.dev/#who-is-using-prek
I loathe UX flows where you get turned around. If I try to make a commit, it's because that is what I intend to do. I don't want to receive surprise errors. It's just more magic, more implicit behavior. Give me explicit tooling.
If you want to use pre-commit hooks, great! You do you. But don't force them on me, as so many projects do these days.
Basically what I would want is to write a commit (because I want to commit early and often), then run the lint (and tests) in a sandboxed environment. If they pass, great. If they fail and HEAD has moved ahead of the failing commit, create a "FIXME" branch off the failure. Back on main (or whatever branch head was pointed at), if tests start passing, you probably never need to revisit the failure.
I want to know about local test failures before I push to remote with full CI.
Automatic branching and workflow stuff is optional; the core idea is great.
I personally can't stand my git commit command to be slow or to fail.
[0]: such as https://github.com/watchexec/watchexec
Also, how do you like Radicle?
It's as simple as a script with a cp command that I run after any clone of a repo that requires it; certainly doesn't require anything as elaborate as a hook manager.
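For what it's worth, the script being described can stay tiny; a minimal sketch, assuming the repo keeps its tracked hook in a hooks/ directory (that path is just an example):

#!/bin/sh
# After cloning, copy the repo's tracked hook into the local .git/hooks directory.
cp hooks/pre-commit .git/hooks/pre-commit
chmod +x .git/hooks/pre-commit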
I'm not sure if I fully understood. But SelfCI's Merge-Queue (mq) daemon has a built-in hook system, so it's possible to do custom stuff at certain points. So you should probably be able to implement it already, or it might require a couple of minor tweaks (which should be easy to do on the SelfCI side after some discussion).
That’s reversing the flow of control, but might be workable!
SelfCI is _very_ minimal by design. There isn't really all that much to document other than what is described in the README.
> Also, how do you like Radicle?
I enjoy that it's p2p, and it works for me in this respect. Personally, I disagree with its attempt to duplicate other features of a GitHub-like forge, instead of the original collaboration model of the Linux kernel that git was built for. I think it should try to replicate something more like SourceHut: mailing-list threads, communication that includes patches, etc. But I haven't really _collaborated_ much using Radicle yet; I just push and pull stuff from it, and it works just fine for that.
Changes to code would obviously need to be reviewed before they are committed. That's still much better than with pre-commit, where e.g. to do simple things like banning tabs you pretty much give some guy you don't know full access to your machine. Even worse - almost everyone that uses pre-commit also uses tags instead of commit hashes so the hook can be modified retroactively.
One interesting attack would be for a hook to modify e.g. `.vscode/settings.json`... I should probably make the default config exclude those files. Is that what you meant? Even without that it's a lot more secure than pre-commit.
Now, if the server enforces checks on push, that's a project policy that should be respected.
This is a long standing sore point in pre-commit, see https://github.com/pre-commit/pre-commit/issues/860 and also linked duplicates (some of which are not duplicates).
Pre-commit checks should be opt-in with CI as the gate. It's useful to be able to commit code in a failing state.
You run the same hooks in CI as locally so it's DRY and pushes people to use the hooks locally to get the early feedback instead of failing in CI.
Hooks without CI are less useful since they will be constantly broken.
* CI (I understand pre-commit shifts errors left)
* in editor/IDE live error callouts for stuff like type checking, and auto-formatting for things like "linters".
Do you run tests? How do you know _which_ tests to run, and not just run every test CI would run, which could be slow?
That's enough to not even open the link! Everything right now seems to have an urgent need to be rewritten in Rust. Like, why???
Just like Kubernetes: many companies followed the Kubernetes hype even when it was not needed and added unnecessary complexity to a simple environment.
Now it is Rust's time!!
I have a shell utility similar to make that CI/CD calls for each step (like for step build, run make build) that abstracts stuff. I'd have Prek call this tool, I guess, but then I don't get what benefit there is here.
There should be a .gitextensions file in the repo that the repo owners maintain, just like .gitignore and .gitattributes, etc. Everything can still be opt-in for every user, but at least all git clients would be able to know about, pull down, and install hooks at the user's discretion.
It seems pretty basic in this day and age, but it's still a gaping hole. You still need to manually run git lfs install, for goodness sake.
Pre-commit and pre-push hooks have the effect of keeping code isolated to a developer's machine. This is a recipe for disaster. You will run into situations where important work isn't accessible because a developer couldn't commit/push their code and the machine was lost or damaged. I've seen it happen.
I regularly edit the history of PRs for a variety of reasons and avoid pre-commit when possible.
Put it all in CI, thank you please — gimme a big red X on my pipeline publicly telling me I've forgotten to do something considered important.
I can't, because the point of our pre-commit use isn't to run logic in hooks that can't be run otherwise.
e.g. We use pre-commit to enforce that our language's whitespace formatting has been applied. This has the same configuration in the IDE, but sometimes devs ignore IDE warnings or just open files in a text editor for a quick edit and don't see IDE warnings or w/e.
"Replaced by CI" isn't really meaningful in our context - pre-commit is just a tool that runs as part of CI - some things get done as pre-commit hooks because they're fast and it's a convenient place to put them. Devs are encouraged to also run pre-commit locally, but there's no enforcement of this.
> Do you run tests? How do you know _which_ tests to run, and not just run every test CI would run, which could be slow?
We have performance metrics for pre-commit hooks and pre-push hooks. I forget the exact numbers, but we want stuff to "feel" fast, so e.g. if you're rebasing something locally with a few dozen commits it should only take seconds. Pre-push hooks have a bit more latitude.
If it’s on a pull/merge request, you’re wasting reviewer time.
If the hook is blocking secrets, you can’t un-push it with 100% certainty so you have to revoke credentials.
For tests, I tend to have the equivalent of “pytest tests/unit/” since those are fast and a good sanity check, especially for things like refactoring.
I also run our pre-commit checks in CI for consistency so we’re never relying on someone’s local environment (web editors exist) and to keep everyone honest about their environment.
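A pre-push hook along those lines can stay very small; here is a sketch assuming the pytest tests/unit/ invocation mentioned above (the path is the commenter's example, not a convention):

#!/bin/sh
# Run only the fast unit suite before allowing the push; the full suite stays in CI.
pytest tests/unit/ || {
    echo "pre-push: unit tests failed, push aborted" >&2
    exit 1
}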
The checks in those pre-commit hooks would need to be very fast - otherwise they'd be too slow to run on every commit.
Then why would it save time and money if they only get run at the pipeline stage? That would only save substantial time if the pipeline is architected in a suboptimal way: those checks should get run immediately on push, and first in the pipeline, so they make the pipeline fail fast if they don't pass. Instant Slack notification on fail.
But the fastest feedback is obviously in the editor, where such checks like linting / auto-formatting belong, IMHO. There I can see what gets changed, and react to it.
Pre-commit hooks sit in such a weird place between where I author my code (editor) and the last line of defense (CI).
pre-commit is a framework to run hooks written in many languages, and it manages the language toolchain and dependencies for running the hooks.
prek is a reimagined version of pre-commit, built in Rust. It is designed to be a faster, dependency-free and drop-in alternative for it, while also providing some additional long-requested features.
[!NOTE] Although prek is pretty new, it's already powering real-world projects like CPython, Apache Airflow, FastAPI, and more projects are picking it up—see Who is using prek?. If you're looking for an alternative to pre-commit, please give it a try—we'd love your feedback!
Please note that some languages are not yet supported for full drop-in parity with pre-commit. See Language Support for current status.
prek is faster than pre-commit and more efficient in disk space usage, and it uses uv for managing Python virtual environments and dependencies.
prek provides a standalone installer script to download and install the tool.
On Linux and macOS:
curl --proto '=https' --tlsv1.2 -LsSf https://github.com/j178/prek/releases/download/v0.3.1/prek-installer.sh | sh
On Windows:
powershell -ExecutionPolicy ByPass -c "irm https://github.com/j178/prek/releases/download/v0.3.1/prek-installer.ps1 | iex"
prek is published as a Python binary wheel to PyPI; you can install it using pip, uv (recommended), or pipx:
# Using uv (recommended)
uv tool install prek
# Using uvx (install and run in one command)
uvx prek
# Adding prek to the project dev-dependencies
uv add --dev prek
# Using pip
pip install prek
# Using pipx
pipx install prek
brew install prek
Build from source using Cargo (Rust 1.89+ is required):
cargo install --locked prek
prek is published as a Node.js package and can be installed with any npm-compatible package manager:
# As a dev dependency
npm add -D @j178/prek
pnpm add -D @j178/prek
bun add -D @j178/prek
# Or install globally
npm install -g @j178/prek
pnpm add -g @j178/prek
bun install -g @j178/prek
# Or run directly without installing
npx @j178/prek --version
bunx @j178/prek --version
prek is available via Nixpkgs.
# Choose what's appropriate for your use case.
# One-off in a shell:
nix-shell -p prek
# NixOS or non-NixOS without flakes:
nix-env -iA nixos.prek
# Non-NixOS with flakes:
nix profile install nixpkgs#prek
Pre-built binaries are available for download from the GitHub releases page.
prek can be used in GitHub Actions via the j178/prek-action repository.
Example workflow:
name: Prek checks
on: [push, pull_request]
jobs:
  prek:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v6
      - uses: j178/prek-action@v1
This action installs prek and runs prek run --all-files on your repository.
prek is also available via taiki-e/install-action for installing various tools.
If installed via the standalone installer, prek can update itself to the latest version:
prek self update
prek offers several improvements over pre-commit:

* prek is faster than pre-commit and takes up half the disk space.
* Hooks run in parallel (hooks with the same priority may run concurrently), reducing end-to-end runtime.
* It uses uv for creating Python virtualenvs and installing dependencies, which is known for its speed and efficiency.
* It provides repo: builtin for offline, zero-setup hooks, which is not available in pre-commit.
* It is fully compatible with the existing .pre-commit-config.yaml file.
* prek run has some nifty improvements over pre-commit run, such as:
  * prek run --directory <dir> runs hooks for files in the specified directory, no need to use git ls-files -- <dir> | xargs pre-commit run --files anymore.
  * prek run --last-commit runs hooks for files changed in the last commit.
  * prek run [HOOK] [HOOK] selects and runs multiple hooks.
* The prek list command lists all available hooks, their ids, and descriptions, providing a better overview of the configured hooks.
* prek auto-update supports --cooldown-days to mitigate open source supply chain attacks.
* Shell completion for prek run <hook_id>, making it easier to run specific hooks without remembering their ids.

For more detailed improvements prek offers, take a look at Difference from pre-commit.
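To make the prek run variants above concrete, here is roughly how they look in a shell session (the directory and hook ids are made up for illustration):

prek run --directory services/api    # run hooks only for files under services/api
prek run --last-commit               # run hooks for files changed in the last commit
prek run ruff mypy                   # select and run multiple hooks by id
prek list                            # list configured hooks with their ids and descriptions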
prek is pretty new, but it is already being used or recommended by some projects and organizations:
This project is heavily inspired by the original pre-commit tool, and it wouldn't be possible without the hard work of the maintainers and contributors of that project.
And a special thanks to the Astral team for their remarkable projects, particularly uv, from which I've learned a lot on how to write efficient and idiomatic Rust code.
It's a little surprising that git doesn't pass pre-commit hooks any information, like a list of which files were changed in the soon-to-be-made commit. git does so for pre-push, where it writes to a hook's stdin some information about the refs and remotes involved in the push.
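In practice the usual workaround is for the hook to ask git itself what is staged; here is a minimal sketch of a pre-commit hook doing that (the tab check just mirrors the "banning tabs" example elsewhere in the thread, and filenames with spaces are not handled):

#!/bin/sh
# git passes a pre-commit hook no arguments and nothing on stdin,
# so query the index for the files about to be committed.
staged=$(git diff --cached --name-only --diff-filter=ACM)
for f in $staged; do
    if grep -q "$(printf '\t')" -- "$f"; then
        echo "pre-commit: tab characters found in $f" >&2
        exit 1
    fi
done
exit 0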
I wonder if many pre-commit hooks, like the kind which run formatters, would be better off as `clean` filters, which run on files when they are staged. The filter mechanism makes it easier to apply just to the files which were changed. In the git docs, they even use a formatter (`indent`) as an example.
https://git-scm.com/book/ms/v2/Customizing-Git-Git-Attribute...
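For reference, the indent example from that chapter amounts to one .gitattributes entry plus two filter definitions; a sketch of the setup (indent here is the GNU indent formatter used in the docs):

echo '*.c filter=indent' >> .gitattributes
git config filter.indent.clean indent    # run indent on C files as they are staged
git config filter.indent.smudge cat      # hand files back unchanged on checkout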
That's still multiple minutes compared to an error thrown on push - i.e. long enough for the dev in question to create a PR, start another task, and then leave the PR open with CI failures for days afterwards.
> But the fastest feedback is obviously in the editor, where such checks like linting / auto-formatting belong, IMHO.
There's a substantial chunk of fast checks that can't be configured in <arbitrary editor>, or that would require a disproportionate time investment (e.g. you could write and maintain a Visual Studio extension vs. just adding a pattern to grep for in pre-commit).
Slow hooks are also not a problem in projects I manage as I don't use them.
> Slow hooks are also not a problem in projects I manage as I don't use them.
You bypass the slow hooks you mentioned? Why even have hooks then?
So reviewers have to digest all of the twists and turns I took to get to the final result? Why oh why oh why?
Sure, if they've already seen some of it, then there should be an easy way for them to see the updates. (Either via separate commits or if you're fortunate enough to have a good review system, integrated interdiffs so you can choose what view to use.)
In a better world, it would be the code author's responsibility to construct a meaningful series of commits. Unless you do everything perfectly right the first time, that means updating commits or using fixup commits. This doesn't just benefit reviewers, it's also enormously valuable when something goes wrong and you can bisect it down to one small change rather than half a dozen not-even-compiling ones.
But then, you said "atomic", which suggests you're already trying to make clean commits. How do you do that without modifying past commits once you discover another piece that belongs with an earlier step?
> You just squash on merge.
I'd rather not. Or more specifically, optimal review granularity != optimal final granularity. Some things should be reviewed separately then squashed together (eg a refactoring + the change on top). Some things should stay separate (eg making a change to one scary area and then making it to another). And optimal authoring granularity can often be yet another thing.
But I'll admit, git + github tooling kind of forces a subpar workflow.