Recently we found considerably high memory utilization (~50 MB) attributed to goroutines in our production server profiles, but we couldn't find anything in the goroutine profiles. After a while we discovered this was because there had been a spike of goroutines several hours earlier.
We replicated the scenario with the following code:
```go
package main

import (
	"fmt"
	"net/http"
	_ "net/http/pprof"
	"runtime"
	"sync"
	"time"
)

func main() {
	// Start pprof server
	go func() {
		http.ListenAndServe("localhost:8080", nil)
	}()

	// Create a WaitGroup to wait for all goroutines to finish
	var wg sync.WaitGroup

	// Start 100,000 goroutines
	size := 100_000
	wg.Add(size)
	for i := 0; i < size; i++ {
		go func() {
			compute()
			wg.Done()
			// Block until every goroutine has finished, so that all
			// of them are alive at the same time (the spike).
			wg.Wait()
		}()
	}

	// Wait for all goroutines to finish
	wg.Wait()
	fmt.Println("Done")

	// Run GC every second forever
	for {
		runtime.GC()
		time.Sleep(time.Second)
	}
}

// compute performs some simple computation
func compute() {
	for i := 0; i < 1000; i++ {
		_ = i * i
	}
}
```
I ran it like:

```
GOMAXPROCS=2 GODEBUG=gctrace=1 go run /tmp/main.go
```
What did you see happen?
GC trace
```
gc 512 @521.755s 0%: 0.020+25+0.002 ms clock, 0.041+0/12/23+0.005 ms cpu, 43->43->43 MB, 88 MB goal, 0 MB stacks, 0 MB globals, 2 P (forced)
```
Go version
go1.22
Output of `go env` in your module/workspace:
Heap

(heap profile screenshot omitted)

Goroutines

(goroutine count screenshot omitted)
It seems the issue comes from:
https://sourcegraph.com/github.com/golang/[email protected]/-/blob/src/runtime/mgcmark.go?L316-318
I see that the runtime cleans up the stacks of Gs that still have them, but it never cleans up the Gs without stacks; it just keeps appending them:
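To illustrate the pattern we believe we are seeing, here is a simplified sketch of a free list that only ever grows; this is illustrative Go of our own, not the runtime's actual code, and `gRecord`/`retire` are hypothetical names:

```go
package main

import "fmt"

// gRecord stands in for a dead goroutine descriptor kept on a
// scheduler free list (hypothetical, for illustration only).
type gRecord struct{ id int }

// freeList only ever grows: retired records are appended and
// never released, so a one-time spike permanently retains memory.
var freeList []*gRecord

// retire appends a finished record to the free list.
func retire(g *gRecord) { freeList = append(freeList, g) }

func main() {
	// Simulate a spike of 100 goroutines finishing.
	for i := 0; i < 100; i++ {
		retire(&gRecord{id: i})
	}
	fmt.Println("retained records:", len(freeList)) // stays at 100 forever
}
```

If the runtime behaves like this sketch, the memory observed in our profiles would be exactly the descriptors left over from the spike.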
What did you expect to see?
I'm not sure whether this is a known issue or intended behavior, but the only way for us to reclaim that memory is by restarting the process.