Even though I don't like Go, I acknowledge that tooling like this built right into the language is a huge deal for language popularity and maturity. Other languages just aren't this opinionated about build tools, testing frameworks, etc.
I suspect that as newer languages emerge over the years, they'll take notes from Go and how well it integrates stuff like this.
In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. [...] To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.
Real kudos to the golang team.
The Go team has built such trust with backwards compatibility that improvements like this are exciting, rather than anxiety-inducing.
Compare that with other ecosystems, where APIs are constantly shifting, and everything seems to be @Deprecated or @Experimental.
We have `go run main.go` as the convention to boot every app's dev environment, with support for multiple work trees, central config management, a pre-migrated database, and more. It makes it easy and fast to dev and test many versions of an app at once.
See https://github.com/housecat-inc/cheetah for the shared tool for this.
Then of course `go generate`, `go build`, `go test` and `go vet` are always part of the fast dev and test loop. Excited to add `go fix` into the mix.
We maintain a large Go monorepo, and every internal API migration turns into a grep+sed adventure. Half the time someone misses edge cases; the other half, the sed pattern breaks on multiline code. Having an AST-aware rewriter that understands Go's type system is a massive upgrade over regex hacks.
The `-diff` preview flag also makes this practical for CI: run `go fix -diff` and fail if the output is non-empty. That alone could replace a bunch of custom linters people maintain.
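For example, a minimal CI gate along those lines might look like this (assuming, as the post's examples show, that -diff writes any pending fixes to stdout; the file name is illustrative):

$ go fix -diff ./... | tee pending.diff
$ test ! -s pending.diff  # non-empty diff means unapplied fixes: fail the build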
The real key: there's no ignore comment of the kind other linters have. The most you can do is run a suppress command, which records each file's current number of violations of each rule in JSON; from then on, those numbers can only ever decrease.
The 1.26 release of Go this month includes a completely rewritten go fix subcommand. Go fix uses a suite of algorithms to identify opportunities to improve your code, often by taking advantage of more modern features of the language and library. In this post, we’ll first show you how to use go fix to modernize your Go codebase. Then in the second section we’ll dive into the infrastructure behind it and how it is evolving. Finally, we’ll present the theme of “self-service” analysis tools to help module maintainers and organizations encode their own guidelines and best practices.
The go fix command, like go build and go vet, accepts a set of patterns that denote packages. This command fixes all packages beneath the current directory:
$ go fix ./...
On success, it silently updates your source files. It discards any fix that touches generated files since the appropriate fix in that case is to the logic of the generator itself. We recommend running go fix over your project each time you update your build to a newer Go toolchain release. Since the command may fix hundreds of files, start from a clean git state so that the change consists only of edits from go fix; your code reviewers will thank you.
To preview the changes the above command would have made, use the -diff flag:
$ go fix -diff ./...
--- dir/file.go (old)
+++ dir/file.go (new)
- eq := strings.IndexByte(pair, '=')
- result[pair[:eq]] = pair[1+eq:]
+ before, after, _ := strings.Cut(pair, "=")
+ result[before] = after
…
You can list the available fixers by running this command:
$ go tool fix help
…
Registered analyzers:
any replace interface{} with any
buildtag check //go:build and // +build directives
fmtappendf replace []byte(fmt.Sprintf) with fmt.Appendf
forvar remove redundant re-declaration of loop variables
hostport check format of addresses passed to net.Dial
inline apply fixes based on 'go:fix inline' comment directives
mapsloop replace explicit loops over maps with calls to maps package
minmax replace if/else statements with calls to min or max
…
Adding the name of a particular analyzer shows its complete documentation:
$ go tool fix help forvar
forvar: remove redundant re-declaration of loop variables
The forvar analyzer removes unnecessary shadowing of loop variables.
Before Go 1.22, it was common to write `for _, x := range s { x := x ... }`
to create a fresh variable for each iteration. Go 1.22 changed the semantics
of `for` loops, making this pattern redundant. This analyzer removes the
unnecessary `x := x` statement.
This fix only applies to `range` loops.
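As an illustration (this example is ours, not from the analyzer's documentation), forvar would delete the marked line below, since a Go 1.22+ range loop already gives the closure a fresh x on each iteration:

package main

import (
	"fmt"
	"sync"
)

func main() {
	s := []string{"a", "b", "c"}
	var wg sync.WaitGroup
	for _, x := range s {
		x := x // pre-Go 1.22 idiom: copy the loop variable; forvar removes this line
		wg.Add(1)
		go func() {
			defer wg.Done()
			fmt.Println(x)
		}()
	}
	wg.Wait()
}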
By default, the go fix command runs all analyzers. When fixing a large project it may reduce the burden of code review if you apply fixes from the most prolific analyzers as separate code changes. To enable only specific analyzers, use the flags matching their names. For example, to run just the any fixer, specify the -any flag. Conversely, to run all the analyzers except selected ones, negate the flags, for instance -any=false.
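For example, using the flags named after the analyzers as described above:

$ go fix -any ./...        # run only the any fixer
$ go fix -any=false ./...  # run every fixer except any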
As with go build and go vet, each run of the go fix command analyzes only a specific build configuration. If your project makes heavy use of files tagged for different CPUs or platforms, you may wish to run the command more than once with different values of GOARCH and GOOS for better coverage:
$ GOOS=linux GOARCH=amd64 go fix ./...
$ GOOS=darwin GOARCH=arm64 go fix ./...
$ GOOS=windows GOARCH=amd64 go fix ./...
Running the command more than once also provides opportunities for synergistic fixes, as we’ll see below.
The introduction of generics in Go 1.18 marked the end of an era of very few changes to the language spec and the start of a period of more rapid—though still careful—change, especially in the libraries. Many of the trivial loops that Go programmers routinely write, such as to gather the keys of a map into a slice, can now be conveniently expressed as a call to a generic function such as maps.Keys. Consequently these new features create many opportunities to simplify existing code.
In December 2024, during the frenzied adoption of LLM coding assistants, we became aware that such tools tended—unsurprisingly—to produce Go code in a style similar to the mass of Go code used during training, even when there were newer, better ways to express the same idea. Less obviously, the same tools often refused to use the newer ways even when directed to do so in general terms such as “always use the latest idioms of Go 1.25.” In some cases, even when explicitly told to use a feature, the model would deny that it existed. (See my 2025 GopherCon talk for more exasperating details.) To ensure that future models are trained on the latest idioms, we need to ensure that these idioms are reflected in the training data, which is to say the global corpus of open-source Go code.
Over the past year, we have built dozens of analyzers to identify opportunities for modernization. Here are three examples of the fixes they suggest:
minmax replaces an if statement by a use of Go 1.21’s min or max functions:
x := f()
if x < 0 {
	x = 0
}
if x > 100 {
	x = 100
}
x := min(max(f(), 0), 100)
rangeint replaces a 3-clause for loop by a Go 1.22 range-over-int loop:
for i := 0; i < n; i++ {
	f()
}
for range n {
	f()
}
stringscut (whose -diff output we saw earlier) replaces uses of strings.Index and slicing by Go 1.18’s strings.Cut:
i := strings.Index(s, ":")
if i >= 0 {
	return s[:i]
}
before, _, ok := strings.Cut(s, ":")
if ok {
	return before
}
These modernizers are included in gopls, to provide instant feedback as you type, and in go fix, so that you can modernize several entire packages at once in a single command. In addition to making code clearer, modernizers may help Go programmers learn about newer features. As part of the process of approving each new change to the language and standard library, the proposal review group now considers whether it should be accompanied by a modernizer. We expect to add more modernizers with each release.
Go 1.26 includes a small but widely useful change to the language specification. The built-in new function creates a new variable and returns its address. Historically, its sole argument was required to be a type, such as new(string), and the new variable was initialized to its “zero” value, such as "". In Go 1.26, the new function may be called with any value, causing it to create a variable initialized to that value, avoiding the need for an additional statement. For example:
ptr := new(string)
*ptr = "go1.25"
ptr := new("go1.26")
This feature filled a gap that had been discussed for over a decade and resolved one of the most popular proposals for a change to the language. It is especially convenient in code that uses a pointer type *T to indicate an optional value of type T, as is common when working with serialization packages such as json.Marshal or protocol buffers. This is such a common pattern that people often capture it in a helper, such as the newInt function below, saving the caller from the need to break out of an expression context to introduce additional statements:
type RequestJSON struct {
	URL      string
	Attempts *int // (optional)
}

data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: newInt(10),
})

func newInt(x int) *int { return &x }
Helpers such as newInt are so frequently needed with protocol buffers that the proto API itself provides them as proto.Int64, proto.String, and so on. But Go 1.26 makes all these helpers unnecessary:
data, err := json.Marshal(&RequestJSON{
	URL:      url,
	Attempts: new(10),
})
To help you take advantage of this feature, the go fix command now includes a fixer, newexpr, that recognizes “new-like” functions such as newInt and suggests fixes to replace the function body with return new(x) and to replace every call, whether in the same package or an importing package, with a direct use of new(expr).
To avoid introducing premature uses of new features, modernizers offer fixes only in files that require at least the minimum appropriate version of Go (1.26 in this instance), either through a go 1.26 directive in the enclosing go.mod file or a //go:build go1.26 build constraint in the file itself.
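For example, a single file can opt in to the new idioms ahead of the rest of the module with a build constraint at the top of the file (the package name here is illustrative):

//go:build go1.26

package server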
Run this command to update all calls of this form in your source tree:
$ go fix -newexpr ./...
At this point, with luck, all of your newInt-like helper functions will have become unused and may be safely deleted (assuming they aren’t part of a stable published API). A few calls may remain where it would be unsafe to suggest a fix, such as when the name new is locally shadowed by another declaration. You can also use the deadcode command to help identify unused functions.
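If the deadcode command is not already installed, it can be run directly from the golang.org/x/tools module:

$ go run golang.org/x/tools/cmd/deadcode@latest ./...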
Applying one modernization may create opportunities to apply another. For example, this snippet of code, which clamps x to the range 0–100, causes the minmax modernizer to suggest a fix to use max. Once that fix is applied it suggests a second fix, this time to use min.
x := f()
if x < 0 {
	x = 0
}
if x > 100 {
	x = 100
}
x := min(max(f(), 0), 100)
Synergies may also occur between different analyzers. For example, a common mistake is to repeatedly concatenate strings within a loop, resulting in quadratic time complexity—a bug and a potential vector for a denial-of-service attack. The stringsbuilder modernizer recognizes the problem and suggests using Go 1.10’s strings.Builder:
s := "" for _, b := range bytes { s += fmt.Sprintf("%02x", b) } use(s)
var s strings.Builder
for _, b := range bytes {
	s.WriteString(fmt.Sprintf("%02x", b))
}
use(s.String())
Once this fix is applied, a second analyzer may recognize that the WriteString and Sprintf operations can be combined as fmt.Fprintf(&s, "%02x", b), which is both cleaner and more efficient, and offer a second fix. (This second analyzer is QF1012 from Dominik Honnef’s staticcheck, which is already enabled in gopls but not yet in go fix, though we plan to add staticcheck analyzers to the go command starting in Go 1.27.)
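After both fixes, the loop would be reduced to:

var s strings.Builder
for _, b := range bytes {
	fmt.Fprintf(&s, "%02x", b)
}
use(s.String())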
Consequently, it may be worth running go fix more than once until it reaches a fixed point; twice is usually enough.
A single run of go fix may apply dozens of fixes within the same source file. All fixes are conceptually independent, analogous to a set of git commits with the same parent. The go fix command uses a simple three-way merge algorithm to reconcile the fixes in sequence, analogous to the task of merging a set of git commits that edit the same file. If a fix conflicts with the list of edits accumulated so far, it is discarded, and the tool issues a warning that some fixes were skipped and that the tool should be run again.
This reliably detects syntactic conflicts arising from overlapping edits, but another class of conflict is possible: a semantic conflict occurs when two changes are textually independent but their meanings are incompatible. As an example consider two fixes that each remove the second-to-last use of a local variable: each fix is fine by itself, but when both are applied together the local variable becomes unused, and in Go that’s a compilation error. Neither fix is responsible for removing the variable declaration, but someone has to do it, and that someone is the user of go fix.
A similar semantic conflict arises when a set of fixes causes an import to become unused. Because this case is so common, the go fix command applies a final pass to detect unused imports and remove them automatically.
Semantic conflicts are relatively rare. Fortunately they usually reveal themselves as compilation errors, making them impossible to overlook. Unfortunately, when they happen, they do demand some manual work after running go fix.
Let’s now delve into the infrastructure beneath these tools.
Since the earliest days of Go, the go command has had two subcommands for static analysis, go vet and go fix, each with its own suite of algorithms: “checkers” and “fixers”. A checker reports likely mistakes in your code, such as passing a string instead of an integer as the operand of a fmt.Printf("%d") conversion. A fixer safely edits your code to fix a bug or to express the same thing in a better way, perhaps more clearly, concisely, or efficiently. Sometimes the same algorithm appears in both suites when it can both report a mistake and safely fix it.
In 2017 we redesigned the then-monolithic go vet program to separate the checker algorithms (now called “analyzers”) from the “driver”, the program that runs them; the result was the Go analysis framework. This separation enables an analyzer to be written once and then run in a diverse range of drivers for different environments, such as go fix and go vet.
One benefit of the framework is its ability to express helper analyzers that don’t report diagnostics or suggest fixes of their own but instead compute some intermediate data structure that may be useful to many other analyzers, amortizing the costs of its construction. Examples include control-flow graphs, the SSA representation of function bodies, and data structures for optimized AST navigation.
Another benefit of the framework is its support for making deductions across packages. An analyzer can attach a “fact” to a function or other symbol so that information learned while analyzing the function’s body can be used when later analyzing a call to the function, even if the call appears in another package or the later analysis occurs in a different process. This makes it easy to define scalable interprocedural analyses. For example, the printf checker can tell when a function such as log.Printf is really just a wrapper around fmt.Printf, so it knows that calls to log.Printf should be checked in a similar manner. This process works by induction, so the tool will also check calls to further wrappers around log.Printf, and so on. An example of an analyzer that makes heavy use of facts is Uber’s nilaway, which reports potential mistakes resulting in nil pointer dereferences.
The process of “separate analysis” in go fix is analogous to the process of separate compilation in go build. Just as the compiler builds packages starting from the bottom of the dependency graph and passing type information up to importing packages, the analysis framework works from the bottom of the dependency graph up, passing facts (and types) up to importing packages.
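As a concrete sketch of the fact mechanism, here is a minimal analyzer, written against the golang.org/x/tools/go/analysis API, that exports a fact marking printf-like wrapper functions. The analyzer name and the stubbed-out wrapper check are our own illustrative placeholders, not the real printf checker:

package logwrap

import (
	"go/ast"
	"go/types"

	"golang.org/x/tools/go/analysis"
)

// printfWrapperFact marks a function known to be a thin wrapper
// around fmt.Printf. Facts must implement analysis.Fact and be
// serializable so they can cross package and process boundaries.
type printfWrapperFact struct{}

func (*printfWrapperFact) AFact() {}

var Analyzer = &analysis.Analyzer{
	Name:      "logwrap",
	Doc:       "records which functions merely wrap fmt.Printf",
	FactTypes: []analysis.Fact{(*printfWrapperFact)(nil)},
	Run:       run,
}

func run(pass *analysis.Pass) (any, error) {
	for _, file := range pass.Files {
		ast.Inspect(file, func(n ast.Node) bool {
			decl, ok := n.(*ast.FuncDecl)
			if !ok || !looksLikePrintfWrapper(decl) {
				return true
			}
			// Attach a fact to the function's symbol so that later
			// analysis of importing packages can treat calls to it
			// like calls to fmt.Printf, even in a separate process.
			if fn, ok := pass.TypesInfo.Defs[decl.Name].(*types.Func); ok {
				pass.ExportObjectFact(fn, &printfWrapperFact{})
			}
			return true
		})
	}
	return nil, nil
}

// looksLikePrintfWrapper is a placeholder for the real body analysis.
func looksLikePrintfWrapper(decl *ast.FuncDecl) bool { return false }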
In 2019, as we started developing gopls, the language server for Go, we added the ability for an analyzer to suggest a fix when reporting a diagnostic. The printf analyzer, for example, offers to replace fmt.Printf(msg) with fmt.Printf("%s", msg) to avoid misformatting should the dynamic msg value contain a % symbol. This mechanism has become the basis for many of the quick fixes and refactoring features of gopls.
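In code, the fix rides along with the diagnostic that reports the problem. Here is a minimal sketch of such a report; the function and variable names are illustrative, not the actual printf implementation:

import (
	"go/ast"

	"golang.org/x/tools/go/analysis"
)

// reportNonConstantFormat reports a diagnostic and attaches a
// machine-applicable fix that inserts a "%s" format string.
func reportNonConstantFormat(pass *analysis.Pass, call *ast.CallExpr, arg ast.Expr) {
	pass.Report(analysis.Diagnostic{
		Pos:     call.Pos(),
		End:     call.End(),
		Message: "non-constant format string in call to fmt.Printf",
		SuggestedFixes: []analysis.SuggestedFix{{
			Message: `Insert "%s" format string`,
			TextEdits: []analysis.TextEdit{{
				Pos:     arg.Pos(),
				End:     arg.Pos(), // empty range: a pure insertion
				NewText: []byte(`"%s", `),
			}},
		}},
	})
}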
While all these developments were happening to go vet, go fix remained stuck as it was back before the Go compatibility promise, when early adopters of Go used it to maintain their code during the rapid and sometimes incompatible evolution of the language and libraries.
The Go 1.26 release brings the Go analysis framework to go fix. The go vet and go fix commands have converged and are now almost identical in implementation. The only differences between them are the criteria for the suites of algorithms they use, and what they do with computed diagnostics. Go vet analyzers must detect likely mistakes with low false positives; their diagnostics are reported to the user. Go fix analyzers must generate fixes that are safe to apply without regression in correctness, performance, or style; their diagnostics may not be reported, but the fixes are directly applied. Aside from this difference of emphasis, the task of developing a fixer is no different from that of developing a checker.
As the number of analyzers in go vet and go fix continues to grow, we have been investing in infrastructure both to improve the performance of each analyzer and to make it easier to write each new analyzer.
For example, most analyzers start by traversing the syntax trees of each file in the package looking for a particular kind of node such as a range statement or function literal. The existing inspector package makes this scan efficient by pre-computing a compact index of a complete traversal so that later traversals can quickly skip subtrees that don’t contain any nodes of interest. Recently we extended it with the Cursor datatype to allow flexible and efficient navigation between nodes in all four cardinal directions—up, down, left, and right, similar to navigating the elements of an HTML DOM—making it easy and efficient to express a query such as “find each go statement that is the first statement of a loop body”:
var curFile inspector.Cursor = ...

// Find each go statement that is the first statement of a loop body.
for curGo := range curFile.Preorder((*ast.GoStmt)(nil)) {
	kind, index := curGo.ParentEdge()
	if kind == edge.BlockStmt_List && index == 0 {
		switch curGo.Parent().ParentEdgeKind() {
		case edge.ForStmt_Body, edge.RangeStmt_Body:
			...
		}
	}
}
Many analyzers start by searching for calls to a specific function, such as fmt.Printf. Function calls are among the most numerous expressions in Go code, so rather than search every call expression and test whether it is a call to fmt.Printf, it is much more efficient to pre-compute an index of symbol references, which is done by typeindex and its helper analyzer. Then the calls to fmt.Printf can be enumerated directly, making the cost proportional to the number of calls instead of to the size of the package. For an analyzer such as hostport that seeks an infrequently used symbol (net.Dial), this can easily make it 1,000× faster.
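The typeindex package is internal to the tools module, but the underlying idea fits in a few lines: record every use of every symbol in a single pass, so that later queries cost time proportional to the number of uses rather than the size of the package. A rough illustration of the idea (not the real typeindex API):

import (
	"go/ast"
	"go/types"
)

// buildRefIndex maps each symbol to the identifiers that refer to it.
// After one pass over the type-checked package, finding all calls to
// fmt.Printf is a single map lookup.
func buildRefIndex(info *types.Info) map[types.Object][]*ast.Ident {
	index := make(map[types.Object][]*ast.Ident)
	for id, obj := range info.Uses {
		index[obj] = append(index[obj], id)
	}
	return index
}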
Some other infrastructural improvements over the past year include more careful handling of the imports that fixes add, for example ensuring that a fix never creates an import cycle by suggesting strings.Cut in a package that is itself imported by strings.
We have come a long way, but there remains much to do. Fixer logic can be tricky to get right. Since we expect users to apply hundreds of suggested fixes with only cursory review, it’s critical that fixers are correct even in obscure edge cases. As just one example (see my GopherCon talk for several more), we built a modernizer that replaces calls such as append([]string{}, slice...) by the clearer slices.Clone(slice), only to discover that, when slice is empty, the result of Clone is nil, a subtle behavior change that in rare cases can cause bugs; so we had to exclude that modernizer from the go fix suite.
Some of these difficulties for authors of analyzers can be ameliorated with better documentation (both for humans and LLMs), particularly checklists of surprising edge cases to consider and test. A pattern-matching engine for syntax trees, similar to those in staticcheck and Tree Sitter, could simplify the fiddly task of efficiently identifying the locations that need fixing. A richer library of operators for computing accurate fixes would help avoid common mistakes. A better test harness would let us check that fixes don’t break the build, and preserve dynamic properties of the target code. These are all on our roadmap.
More fundamentally, we are turning our attention in 2026 to a “self-service” paradigm.
The newexpr analyzer we saw earlier is a typical modernizer: a bespoke algorithm tailored to a particular feature. The bespoke model works well for features of the language and standard library, but it doesn’t really help update uses of third-party packages. Although there’s nothing to stop you from writing a modernizer for your own public APIs and running it on your own project, there’s no automatic way to get users of your API to run it too. Your modernizer probably wouldn’t belong in gopls or the go vet suite unless your API is particularly widely used across the Go ecosystem. Even in that case you would have to obtain code reviews and approvals and then wait for the next release.
Under the self-service paradigm, Go programmers would be able to define modernizations for their own APIs that their users can apply without all the bottlenecks of the current centralized paradigm. This is especially important as the Go community and global Go corpus are growing much faster than the ability of our team to review analyzer contributions.
The go fix command in Go 1.26 includes a preview of the first fruits of this new paradigm: the annotation-driven source-level inliner, which we’ll describe in an upcoming companion blog post next week. In the coming year, we plan to investigate two more approaches within this paradigm.
First, we will be exploring the possibility of dynamically loading modernizers from the source tree and securely executing them, either in gopls or go fix. In this approach a package that provides an API for, say, a SQL database could additionally provide a checker for misuses of the API, such as SQL injection vulnerabilities or failure to handle critical errors. The same mechanism could be used by project maintainers to encode internal housekeeping rules, such as avoiding calls to certain problematic functions or enforcing stronger coding disciplines in critical parts of the code.
Second, many existing checkers can be informally described as “don’t forget to X after you Y!”, such as “close the file after you open it”, “cancel the context after you create it”, “unlock the mutex after you lock it”, “break out of the iterator loop after yield returns false”, and so on. What such checkers have in common is that they enforce certain invariants on all execution paths. We plan to explore generalizations and unifications of these control-flow checkers so that Go programmers can easily apply them to new domains, without complex analytical logic, simply by annotating their own code.
We hope that these new tools will save you effort during maintenance of your Go projects and help you learn about and benefit from newer features sooner. Please try out go fix on your projects and report any problems you find, and do share any ideas you have for new modernizers, fixers, checkers, or self-service approaches to static analysis.