Hopefully easier than hacking around changesets, but less mature, of course.
(Disclaimer: he and I are in the same org, and I have sent PRs to said tool.)
Publishing to npm, PyPI, Maven Central, crates.io, NuGet… all using changesets.
Because Nix is a package manager not tied to one programming language ecosystem, I can install all the tools for every language I need, and have the tooling consistent and modular, even between monorepos.
For formatting I use treefmt-nix, which quickly formats all syntaxes in my repo (.nix, .rs, .md, etc.) by calling individual formatters (installed via Nix), such as rustfmt, mdformat, nixfmt, etc.
For git hooks I use lefthook-nix, which automatically installs my git hooks using lefthook. husky, cargo-husky, etc. are great, but they assume you're mainly using one tech stack. lefthook is like pre-commit, but with a significantly leaner dependency chain. (I once tried to bust the Nix cache and had to download and compile both the .NET runtime and the Swift runtime... it reminded me my dependency footprint could be smaller.)
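As a sketch, a minimal lefthook.yml wiring treefmt into a pre-commit hook can look like this (hook structure is real lefthook syntax; the specific commands are illustrative):

```yaml
# lefthook.yml (sketch; command names and flags are illustrative)
pre-commit:
  commands:
    format:
      run: treefmt --fail-on-change
pre-push:
  commands:
    lint:
      run: cargo clippy --workspace -- -D warnings
```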
For Cargo workspaces in Rust I use workspace-level linter rules, so all new crates can inherit the same rules.
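Workspace-level lint inheritance in Cargo looks roughly like this (the lint choices here are just examples):

```toml
# Cargo.toml at the workspace root
[workspace]
members = ["crates/*"]

[workspace.lints.rust]
unsafe_code = "forbid"

[workspace.lints.clippy]
unwrap_used = "warn"

# Each member crate's Cargo.toml then opts in with:
# [lints]
# workspace = true
```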
As the author, I also love `just` and I have the CI steps as `just fmt`, etc.
This means the same commands I type work in CI, so there's not a parallel environment I have to maintain.
I have a `just ci` for running all the steps at once locally, but in GitHub/Forgejo Actions, I like to split them into separate Actions steps for better rendering on web. But `just ci: fmt lint ...` is just an alias, so very little repetition here.
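As a sketch (recipe names and commands are assumptions, not the author's exact Justfile), the alias pattern looks like:

```just
fmt:
    treefmt

lint:
    cargo clippy --workspace

test:
    cargo test --workspace

# `just ci` simply runs the steps above in order
ci: fmt lint test
```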
Here's a lefthook-nix + treefmt-nix guide: https://simonshine.dk/articles/lefthook-treefmt-direnv-nix/
Here's a GitHub Actions + Nix guide: https://simonshine.dk/articles/speeding-up-ci-with-nix/
Here's an example project that uses it: https://github.com/sshine/walltime-rs
Here's a "how much Nix should I swallow at once?" guide: https://simonshine.dk/articles/three-levels-of-nix/
Here's a Forgejo Actions runner that builds and pushes an OCI image to a registry without Docker: https://git.shine.town/infra/runners/src/branch/main/.forgej...
One of the nice things about working in a smaller business is that you can enjoy using things that don't need to scale to extreme sizes. One example in the software world is monorepos. While monorepos can scale well (see Google, Facebook, and others), doing so requires special tooling and infrastructure; with plain git, you can only go so far. Still, even at modest scale, a monorepo has meaningful advantages, like being able to make atomic changes that affect many parts of the system in a single commit, which eliminates whole classes of compatibility and integration issues. You can always split a monorepo later (see git-filter-repo).
So, suppose you're a small-to-medium team using a monorepo. Let's go further and say that this monorepo stores all your company's code, meaning it spans many different programming languages: it's a polyglot monorepo. What tool can you use to manage versioning in a consistent way?
I argue that changesets is a solid choice, even if it's primarily focused on the JavaScript/TypeScript ecosystem.
For any versioning tool, you are typically looking for a consistent way to bump versions, keep changelogs, and trigger releases.
changesets assumes per-package semantic versioning (i.e., each package has its own version). In addition, each package has its own CHANGELOG.md.
The changesets team also maintains a GitHub Action, changesets/action, which importantly allows specifying custom scripts for the version and publish commands. That customization is what gives changesets support for polyglot repositories.
In changesets, engineers commit "changeset" files to the repository that define what content ends up in changelogs and which packages' versions are bumped, and how (i.e., major, minor, or patch).
See the changesets documentation for more details.
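For reference, a changeset is a small markdown file under .changeset/ with YAML frontmatter mapping package names to bump types (the package names and summary below are illustrative):

```markdown
---
"python-one": minor
"rust-one": patch
---

Add a retry option to the ingestion client; fix a panic on empty input.
```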
I'm a fan of just. I also really like uv scripts. The example below uses both.
I'm also going to assume you are in an enterprise setting where your monorepo is entirely private, not open source.
My recommended organization (at least at time of writing) is something like the following.
```
.
├── .changeset
│   ├── config.json
│   └── README.md
├── contrib
│   └── utils
├── docker
│   └── Dockerfile
├── docs
│   ├── package.json
│   ├── pnpm-lock.yaml
│   ├── ...
│   └── pnpm-workspace.yaml
├── Justfile
├── package-lock.json
├── package.json
├── packages
│   ├── python-one
│   │   ├── ...
│   │   └── package.json
│   ├── rust-one
│   │   ├── ...
│   │   └── package.json
│   └── rust-two
│       ├── ...
│       └── package.json
├── pnpm-workspace.yaml
└── third-party
```
Put all packages in a packages/ directory, no matter what language they are written in. I also enjoy having documentation as code, so let's say you have a docs/ directory, too, and that your docs are built with a JavaScript-based frontend (like Starlight), for the purposes of highlighting a nuance later.
With this setup, you can configure changesets with a proxy pnpm workspace at the root with all your packages.
```yaml
# pnpm-workspace.yaml
packages:
  - "packages/**"
```
And, declare your changesets dependencies:
```jsonc
// package.json
{
  "name": "example-monorepo",
  "private": true,
  "devDependencies": {
    "@changesets/changelog-git": "^0.2.0",
    "@changesets/cli": "^2.29.0"
  }
}
```
You should now also update your .gitignore:
node_modules/
Because changesets is built for JavaScript, we also need "proxy" package.json files for all of our packages; changesets uses these to perform version bumps.
These can be as simple as:
```jsonc
// packages/python-one/package.json
{
  "name": "python-one",
  "version": "0.1.0",
  "private": true
}
```
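If you have many packages, you could scaffold these proxies with a small script. This is a hypothetical helper (the function name, path layout, and 0.1.0 seed version are all assumptions):

```python
# Hypothetical scaffold: create a minimal, private proxy package.json for
# every directory under packages/ that lacks one.
import json
from pathlib import Path


def ensure_proxy_manifest(pkg_dir: Path, initial_version: str = "0.1.0") -> bool:
    """Create a minimal private package.json; return True if one was created."""
    manifest = pkg_dir / "package.json"
    if manifest.exists():
        return False
    manifest.write_text(
        json.dumps(
            {"name": pkg_dir.name, "version": initial_version, "private": True},
            indent=2,
        )
        + "\n"
    )
    return True


if __name__ == "__main__":
    packages_dir = Path("packages")
    if packages_dir.is_dir():
        for pkg_dir in sorted(p for p in packages_dir.iterdir() if p.is_dir()):
            if ensure_proxy_manifest(pkg_dir):
                print(f"created proxy manifest for {pkg_dir.name}")
```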
With this setup, note how we intentionally exclude our internal docs/ from being a pnpm workspace member: we only want to version packages. To do so, declare the docs/ directory as its own pnpm workspace; otherwise the tooling will try to fold the docs/ dependencies into the root package-lock.json. This can be as simple as:
```yaml
# docs/pnpm-workspace.yaml
packages: []
```
Next, we can add our changeset configuration:
```jsonc
// .changeset/config.json
{
  "$schema": "https://unpkg.com/@changesets/[email protected]/schema.json",
  "changelog": "@changesets/changelog-git",
  "commit": false,
  "fixed": [],
  "linked": [],
  "access": "restricted",
  "baseBranch": "main",
  "updateInternalDependencies": "patch",
  "ignore": [],
  "privatePackages": {
    "version": true,
    "tag": true
  },
  "___experimentalUnsafeOptions_WILL_CHANGE_IN_PATCH": {
    "onlyUpdatePeerDependentsWhenOutOfRange": true
  }
}
```
Next, we want to automate our releases. That is, generating the changelog PRs, bumping package metadata, pushing tags, and triggering builds on those tags.
Let's start with our GitHub Workflow definition and unpack the scripts it calls.
```yaml
name: Release

on:
  push:
    branches:
      - main

concurrency: ${{ github.workflow }}-${{ github.ref }}

permissions:
  contents: write
  pull-requests: write

jobs:
  release:
    name: Release
    runs-on: ubuntu-latest
    outputs:
      published: ${{ steps.changesets.outputs.published }}
    steps:
      - uses: actions/checkout@v6
      - uses: actions/setup-node@v4
        with:
          cache: npm
      - uses: astral-sh/setup-uv@v7
      - uses: taiki-e/install-action@just
      - run: npm install
      - name: Create Release Pull Request or Tag
        id: changesets
        uses: changesets/action@v1
        with:
          version: just version
          publish: npx @changesets/cli publish
          # I like conventional commits
          commit: "chore(release): version packages"
          title: "chore(release): version packages"
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  docker:
    needs: [release]
    if: needs.release.outputs.published == 'true'
    uses: ./.github/workflows/docker.yml
    secrets: inherit
```
You might be wondering why we run a workflow explicitly, rather than using something like on.push.tags as a trigger.
It turns out GitHub has two fatal flaws with that intuitive approach (at time of writing). First, if you push more than 3 tags at once, workflows will not trigger. Unfortunately, this is a relatively common scenario in a monorepo. Second, GitHub's triggering of on.push.tags is highly unreliable. This unreliability persists even if you use a PAT as they instruct.
So instead, consider an explicit workflow_call for the purpose, as I've done here.
Setting version: just version is the key to polyglot support.
```just
# Version packages based on changesets
[doc('Consume changesets: bump versions, update changelogs, sync native version files.')]
[group('release')]
version:
    npx @changesets/cli version
    uv run --script contrib/utils/sync-versions.py
```
The meat of the polyglot glue, then, is how you implement sync-versions.py.
The key bit here is that we rely on changesets to bump the versions in package.json for us when we call npx @changesets/cli version, but then it is up to us to propagate that version to each language's native metadata.
Here is an example that uses pretty naive parsing. You can write something similar (or better!) for the languages you use.
```python
#!/usr/bin/env -S uv run --script
#
# /// script
# requires-python = ">=3.12"
# dependencies = []
# ///
#
# Sync versions from package.json files (updated by changesets) to native
# package manifests (Cargo.toml, pyproject.toml, etc.).

import json
import re
import subprocess
from enum import Enum, auto
from pathlib import Path

PACKAGES_DIR = Path(__file__).resolve().parent.parent.parent / "packages"


class SyncResult(Enum):
    NOT_FOUND = auto()
    UP_TO_DATE = auto()
    UPDATED = auto()


def read_package_json(pkg_dir: Path) -> dict | None:
    """Read and parse a package.json file."""
    pkg_json = pkg_dir / "package.json"
    if not pkg_json.exists():
        return None
    return json.loads(pkg_json.read_text())


def update_cargo_toml(pkg_dir: Path, version: str) -> SyncResult:
    """Update version in [package] section of Cargo.toml."""
    cargo_toml = pkg_dir / "Cargo.toml"
    if not cargo_toml.exists():
        return SyncResult.NOT_FOUND
    lines = cargo_toml.read_text().splitlines(keepends=True)
    in_package_section = False
    for i, line in enumerate(lines):
        stripped = line.strip()
        # Track which TOML section we're in
        if stripped.startswith("["):
            in_package_section = stripped == "[package]"
            continue
        if in_package_section and stripped.startswith("version"):
            new_line = re.sub(
                r'^(\s*version\s*=\s*")([^"]+)(")',
                rf"\g<1>{version}\3",
                line,
            )
            if new_line != line:
                lines[i] = new_line
                cargo_toml.write_text("".join(lines))
                rel = cargo_toml.relative_to(PACKAGES_DIR.parent)
                print(f"  Updated {rel}")
                return SyncResult.UPDATED
            return SyncResult.UP_TO_DATE
    return SyncResult.UP_TO_DATE


def update_pyproject_toml(pkg_dir: Path, version: str) -> SyncResult:
    """Update version in [project] section of pyproject.toml."""
    pyproject = pkg_dir / "pyproject.toml"
    if not pyproject.exists():
        return SyncResult.NOT_FOUND
    lines = pyproject.read_text().splitlines(keepends=True)
    in_project_section = False
    for i, line in enumerate(lines):
        stripped = line.strip()
        # Track which TOML section we're in
        if stripped.startswith("["):
            in_project_section = stripped == "[project]"
            continue
        if in_project_section and stripped.startswith("version"):
            new_line = re.sub(
                r'^(\s*version\s*=\s*")([^"]+)(")',
                rf"\g<1>{version}\3",
                line,
            )
            if new_line != line:
                lines[i] = new_line
                pyproject.write_text("".join(lines))
                rel = pyproject.relative_to(PACKAGES_DIR.parent)
                print(f"  Updated {rel}")
                return SyncResult.UPDATED
            return SyncResult.UP_TO_DATE
    return SyncResult.UP_TO_DATE


def refresh_lockfiles() -> None:
    """Refresh all lockfiles under the repo to match updated versions."""
    repo_root = PACKAGES_DIR.parent
    print("Refreshing lockfiles...")
    # Cargo.lock: root workspace + any standalone crate lockfiles
    cargo_locks = sorted(
        set(repo_root.glob("Cargo.lock")) | set(PACKAGES_DIR.rglob("Cargo.lock"))
    )
    for cargo_lock in cargo_locks:
        lock_dir = cargo_lock.parent
        rel = lock_dir.relative_to(repo_root) or Path(".")
        print(f"  cargo update --workspace in {rel}")
        subprocess.run(["cargo", "update", "--workspace"], cwd=lock_dir, check=True)
    # uv.lock: Python packages
    for uv_lock in sorted(PACKAGES_DIR.rglob("uv.lock")):
        lock_dir = uv_lock.parent
        print(f"  uv lock in {lock_dir.relative_to(repo_root)}")
        subprocess.run(["uv", "lock"], cwd=lock_dir, check=True)


def main() -> None:
    print("Syncing versions from package.json to native manifests...")
    print()
    updated = 0
    for pkg_json in sorted(PACKAGES_DIR.rglob("package.json")):
        pkg_dir = pkg_json.parent
        pkg_data = read_package_json(pkg_dir)
        if pkg_data is None:
            continue
        version = pkg_data.get("version")
        if version is None:
            continue
        name = pkg_data.get("name", pkg_dir.name)
        print(f"{name} @ {version}")
        results = [
            update_cargo_toml(pkg_dir, version),
            update_pyproject_toml(pkg_dir, version),
        ]
        if any(r == SyncResult.UPDATED for r in results):
            updated += 1
        elif all(r == SyncResult.NOT_FOUND for r in results):
            print("  (no native manifest found)")
        else:
            print("  (already up to date)")
        print()
    print(f"Synced {updated} package(s).")
    print()
    refresh_lockfiles()
    print()
    print("Done.")


if __name__ == "__main__":
    main()
```
In the standard changesets flow, you will now have a pull request on GitHub with the appropriate CHANGELOG.md updates, as well as the metadata updates for all the relevant packages.
Once that is merged, the very same action will run, realize all the .changeset files are consumed, and push tags.
With our example configuration, changesets will only push tags, not publish packages, because we set
```jsonc
"privatePackages": {
  "version": true,
  "tag": true
}
```
in our .changeset/config.json and have all the packages set to private: true.
Typically, you'll then want to react to these pushed tags. For example, to build new Docker images.
For that, rather than using an on.push.tags trigger as you would reasonably assume, you probably want a workflow_call. See the tip earlier in the post for why.
```yaml
on:
  workflow_call: {}
  workflow_dispatch:
    inputs:
      dry_run:
        description: 'Build images without pushing to GHCR'
        required: false
        type: boolean
        default: false
      no_cache:
        description: 'Force a build without using the cache'
        required: false
        type: boolean
        default: false
```
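Inside the called workflow, you still need to know which packages were just tagged. changesets pushes tags of the form name@version, so one sketch (assuming unscoped package names; the sample tag list stands in for real git output) is:

```shell
#!/bin/sh
# Sketch: turn changesets-style tags (name@version) into build targets.
# In CI, replace the sample list with: tags=$(git tag --points-at HEAD)
tags="python-one@0.2.0
rust-one@1.3.1"

printf '%s\n' "$tags" | while IFS=@ read -r name version; do
  echo "would build $name:$version"
done
```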
changesets can manage per-package semantic versioning and changelogs in polyglot monorepos today, even without native support for languages beyond JavaScript. The trick is to treat the JavaScript package manifests as the canonical source of version bumps, and then sync those bumps to the language-native manifests via your own scripts.
A few gotchas exist (like explicitly creating independent pnpm-workspace.yaml files for subdirectories you want to keep independent, or using a separate personal access token to push tags), but none are blockers to benefiting from the convenient changesets workflow.
I used to suggest versioning monorepos with a single global version using a tool like semantic-release. Since trying changesets, I'm sold on the benefits of letting people write commit messages for future internal engineers while also adding a separate changelog note for end users. These are often two distinct audiences, and relying on a single conventional commit to serve both is often suboptimal.