A hands-on walkthrough of Go 1.26's highlights: less boilerplate, a faster garbage collector, stronger generics, and improved observability — with runnable examples you can edit in your browser.
go · go1.26 · release · interactive · tour
This post was generated with AI assistance and hasn't been fully verified; some details may be inaccurate. Report an error.
Go 1.26 is all about doing more with less ceremony. Pointer initialization that used to take three lines now takes one, error matching no longer needs a pre-declared variable, and the runtime ships a new garbage collector that cuts GC overhead for most workloads. Below you'll find every highlight organized by theme, with editable snippets — hit Run to try them yourself.
Every snippet runs on Go 1.26 via Codapi sandboxes directly in your browser — no local toolchain required.
Less boilerplate
Pointer initialization with new(expr)
Until now, new only accepted a type. Getting a pointer to a concrete value required an intermediate variable:
package main

import "fmt"

func main() {
    // Go 1.25: two steps to get a pointer to a value
    val := 100
    ptr := &val
    fmt.Println(*ptr)
}
Where this really shines is optional struct fields. APIs serialized as JSON or protobuf often use *T to distinguish "not set" from the zero value. Before, you had to declare a helper variable; now it's a one-liner:

package main

import (
    "encoding/json"
    "fmt"
)

type Patch struct {
    Name  *string `json:"name,omitempty"`
    Score *int    `json:"score,omitempty"`
}

func main() {
    // Go 1.26: new accepts an expression, not just a type
    p := Patch{Name: new("gopher"), Score: new(42)}
    out, _ := json.Marshal(p)
    fmt.Println(string(out))
}
Error matching with errors.AsType
errors.AsType[T] is a generic alternative to errors.As: it returns the matched error and an ok flag, so you no longer need to pre-declare a target variable and pass its pointer. Because the target type is checked at compile time, you also avoid the runtime panics that errors.As can produce when called with the wrong kind of target. Here is a dispatcher that classifies multiple error types:
package main

import (
    "encoding/json"
    "errors"
    "fmt"
    "net/url"
    "strings"
)

func diagnose(err error) string {
    if se, ok := errors.AsType[*json.SyntaxError](err); ok {
        return fmt.Sprintf("bad JSON at byte %d", se.Offset)
    }
    if ue, ok := errors.AsType[*url.Error](err); ok {
        return fmt.Sprintf("URL error (%s %s)", ue.Op, ue.URL)
    }
    return "unrecognized: " + err.Error()
}

func main() {
    // JSON error
    var m map[string]any
    jerr := json.NewDecoder(strings.NewReader(`{`)).Decode(&m)
    fmt.Println(diagnose(jerr))

    // URL error
    _, uerr := url.Parse("://missing-scheme")
    if uerr != nil {
        fmt.Println(diagnose(uerr))
    }
}
bytes.Buffer — Peek
bytes.Buffer gains a Peek(n) method that returns the next n bytes without consuming them. This is handy for protocol parsers that need to inspect a header before deciding how to read the rest:
package main

import (
    "bytes"
    "fmt"
)

func main() {
    buf := bytes.NewBufferString("GET /index.html HTTP/1.1")

    // Look at the first 3 bytes — the read cursor stays put
    method, err := buf.Peek(3)
    fmt.Printf("method=%q err=%v\n", method, err)

    // Skip past "GET "
    buf.Next(4)

    // Peek at the path
    path, err := buf.Peek(11)
    fmt.Printf("path=%q err=%v\n", path, err)

    // The rest of the buffer is still intact
    fmt.Printf("remaining=%q\n", buf.String())
}
A faster runtime
Green Tea GC by default
The Green Tea GC was introduced as an opt-in experiment in Go 1.25. In 1.26 it becomes the default collector. Instead of chasing individual object pointers scattered across the heap, it walks memory in contiguous regions, which plays much better with modern CPU caches and allows more parallel scanning.
The Go team's benchmarks show GC overhead dropping between 10% and 40% for allocation-heavy programs, with additional gains on recent Intel and AMD microarchitectures.
Here is a synthetic workload that creates many short-lived allocations — exactly the scenario where Green Tea helps most:
package main

import (
    "fmt"
    "runtime"
)

func main() {
    const N = 250_000

    var before, after runtime.MemStats
    runtime.GC()
    runtime.ReadMemStats(&before)

    // Allocate N small structs and keep references alive
    type Coord struct{ Lat, Lng float64 }
    coords := make([]Coord, N)
    for i := range coords {
        coords[i] = Coord{Lat: float64(i) * 0.01, Lng: float64(i) * -0.01}
    }
    runtime.KeepAlive(coords)

    runtime.GC()
    runtime.ReadMemStats(&after)

    fmt.Printf("GC cycles  : %d\n", after.NumGC-before.NumGC)
    fmt.Printf("Pause total: %.2f ms\n", float64(after.PauseTotalNs-before.PauseTotalNs)/1e6)
    fmt.Printf("Heap in use: %.1f MiB\n", float64(after.HeapInuse)/1024/1024)
}
If you need the old collector for any reason, build with GOEXPERIMENT=nogreenteagc. That escape hatch is expected to disappear in Go 1.27.
io.ReadAll performance overhaul
io.ReadAll was rewritten internally. It now grows its scratch buffer exponentially and produces a final slice trimmed to the exact size needed. Benchmarks show roughly double the throughput with half the peak memory — and the function signature hasn't changed at all.
package main

import (
    "bytes"
    "fmt"
    "io"
    "strings"
)

func main() {
    // Simulate reading a ~400 KiB HTTP body
    line := "The quick brown gopher jumps over the lazy mutex.\n"
    body := strings.Repeat(line, 8_000)

    data, err := io.ReadAll(bytes.NewBufferString(body))
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Printf("Read %d bytes\n", len(data))
    fmt.Printf("len == cap: %v (final slice is tightly sized)\n", len(data) == cap(data))
}
B.Loop inlining fix
B.Loop() was added in Go 1.24 as the modern replacement for the manual for i := 0; i < b.N; i++ pattern. A regression in 1.25 prevented the loop body from being inlined, which could artificially inflate allocs/op. Go 1.26 fixes that.
The classic b.N pattern:
package main

import (
    "fmt"
    "testing"
)

func countVowels(s string) int {
    n := 0
    for _, r := range s {
        switch r {
        case 'a', 'e', 'i', 'o', 'u':
            n++
        }
    }
    return n
}

func main() {
    input := "the quick brown fox jumps over the lazy dog"
    r := testing.Benchmark(func(b *testing.B) {
        var sink int
        for i := 0; i < b.N; i++ {
            sink = countVowels(input)
        }
        _ = sink
    })
    fmt.Printf("b.N style: %d ns/op  %d allocs/op\n", r.NsPerOp(), r.AllocsPerOp())
}
And the cleaner b.Loop() form, now with correct inlining:
package main

import (
    "fmt"
    "testing"
)

func countVowels(s string) int {
    n := 0
    for _, r := range s {
        switch r {
        case 'a', 'e', 'i', 'o', 'u':
            n++
        }
    }
    return n
}

func main() {
    input := "the quick brown fox jumps over the lazy dog"
    r := testing.Benchmark(func(b *testing.B) {
        var sink int
        for b.Loop() {
            sink = countVowels(input)
        }
        _ = sink
    })
    fmt.Printf("b.Loop style: %d ns/op  %d allocs/op\n", r.NsPerOp(), r.AllocsPerOp())
}
Stronger generics
Self-referential constraints
An interface constraint can reference the very type parameter it constrains — the "curiously recurring" pattern familiar from other languages. Here is a Clamp that works for any self-comparable type:

package main

import "fmt"

// The constraint references itself: T must implement Comparable[T]
type Comparable[T Comparable[T]] interface {
    CompareTo(T) int // negative, zero, or positive
}

type Score int

func (a Score) CompareTo(b Score) int { return int(a) - int(b) }

func Clamp[T Comparable[T]](val, lo, hi T) T {
    if val.CompareTo(lo) < 0 {
        return lo
    }
    if val.CompareTo(hi) > 0 {
        return hi
    }
    return val
}

func main() {
    fmt.Println(Clamp(Score(150), Score(0), Score(100)))
    fmt.Println(Clamp(Score(-5), Score(0), Score(100)))
    fmt.Println(Clamp(Score(42), Score(0), Score(100)))
}
This unlocks patterns like self-referential builder interfaces and strongly-typed collection contracts that were previously impossible without sacrificing type safety.
reflect — iterator methods
reflect.Type and reflect.Value now expose .Fields() and .Methods() iterators that work directly with for range. No more manual indexing.
Type.Fields — walk struct metadata:
go1.26
interactive
package mainimport ( "fmt" "reflect")type Server struct { Addr string `yaml:"addr"` Port int `yaml:"port"` TLS bool `yaml:"tls"` Workers int `yaml:"workers"`}func main() { for f := range reflect.TypeFor[Server]().Fields() { fmt.Printf("%-8s tag=%s\n", f.Name, f.Tag.Get("yaml")) }}
package main
import (
"fmt"
"reflect"
)
type Server struct {
Addr string `yaml:"addr"`
Port int `yaml:"port"`
TLS bool `yaml:"tls"`
Workers int `yaml:"workers"`
}
func main() {
for f := range reflect.TypeFor[Server]().Fields() {
fmt.Printf("%-8s tag=%s\n", f.Name, f.Tag.Get("yaml"))
}
}
.Methods() works the same way for method sets. The old for i := range t.NumField() pattern still compiles, but the new iterators are shorter and compose nicely with other iterator-based APIs.
Better observability
Fan-out logging with slog.NewMultiHandler
slog.NewMultiHandler sends each log record to every handler you give it. Its Enabled method returns true if any handler accepts the level, so no messages are silently swallowed.
Signal causes with context.Cause
When signal.NotifyContext catches a signal, context.Cause now returns the actual signal instead of the generic context.Canceled. Combined with errors.AsType, you can branch on exactly which signal arrived:
package main

import (
    "context"
    "errors"
    "fmt"
    "os"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt, syscall.SIGTERM)
    defer stop()

    // Fire SIGTERM at ourselves after a short delay
    go func() {
        time.Sleep(20 * time.Millisecond)
        proc, _ := os.FindProcess(os.Getpid())
        proc.Signal(syscall.SIGTERM)
    }()

    <-ctx.Done()
    cause := context.Cause(ctx)
    fmt.Println("ctx.Err()       =", ctx.Err())
    fmt.Println("context.Cause() =", cause)

    // Use AsType to identify the exact signal
    if sig, ok := errors.AsType[syscall.Signal](cause); ok {
        fmt.Printf("signal %d (%s) — starting graceful shutdown\n", int(sig), sig)
    }
}
Goroutine leak profiling
A new goroutineleak pprof profile identifies goroutines that are permanently stuck on a channel or sync primitive whose counterpart is unreachable. The collector looks at the reachability graph: if no runnable goroutine can ever unblock a waiting one, it is flagged as a leak.
A minimal leaking example — the sender blocks forever because nobody reads:

package main

import "time"

func leak() {
    ch := make(chan int)
    go func() {
        ch <- 1 // blocks forever: the only possible receiver never starts
    }()
    // leak returns; ch is now unreachable from any runnable goroutine
}

func main() {
    leak()
    time.Sleep(50 * time.Millisecond) // the leaked goroutine is parked for good
}
A smarter go fix
go fix was rebuilt from the ground up on the same analysis engine that powers go vet. It ships over 20 fixers that rewrite idiomatic patterns automatically and safely.
# Modernize your entire module
go fix ./...

# Preview the diff without writing
go fix -diff ./...

# Run a single fixer
go fix -stringsCut ./...
The stringsCut fixer, for example, replaces a common two-step strings.Index + slice pattern with strings.Cut:
// Before go fix
func parseHeader(line string) (string, string) {
    i := strings.Index(line, ": ")
    if i < 0 {
        return line, ""
    }
    return line[:i], line[i+2:]
}
// After go fix
func parseHeader(line string) (string, string) {
    key, val, _ := strings.Cut(line, ": ")
    return key, val
}
Library authors can also mark deprecated wrappers with //go:fix inline so that downstream callers are automatically migrated when they run go fix (Dial and NewClient here are placeholder names):

// Deprecated: use NewClient instead.
//
//go:fix inline
func Dial(addr string) *Client { return NewClient(addr) }