Closed
Description
What version of Go are you using (go version)?

$ go version
go version go1.9 linux/amd64
Does this issue reproduce with the latest release?
go version go1.9
What operating system and processor architecture are you using (go env)?

go env Output:

$ go env
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GORACE=""
GOROOT="/usr/local/golang"
GOTOOLDIR="/usr/local/golang/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build725674040=/tmp/go-build"
CXX="g++"
CGO_ENABLED="1"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
What did you do?
What did you expect to see?
What did you see instead?
Activity
xvsfekcn commented on Dec 11, 2019

I started my program and everything worked fine. Over time the STW pause started to rise, and after about 2.3 days it reached the values below. My pauseNs:

If I restart the program, everything returns to normal.
But when the problem happens, I found that the number of inuse_objects is the same as it was at startup, and the same is true for inuse_space.
Using the top command, the memory occupied by the program has not increased.
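For context, pause numbers like these can be read from runtime.MemStats; a minimal sketch of how such a PauseNs reading might be taken (the polling loop and interval are illustrative, not from the original report):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var m runtime.MemStats
	for {
		runtime.ReadMemStats(&m)
		// PauseNs is a circular buffer of recent GC stop-the-world pause
		// times; the most recent pause is at index (NumGC+255)%256.
		last := m.PauseNs[(m.NumGC+255)%256]
		fmt.Printf("NumGC=%d lastPause=%v\n", m.NumGC, time.Duration(last))
		time.Sleep(10 * time.Second)
	}
}
```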
ianlancetaylor commented on Dec 11, 2019
Go 1.9 is quite old and is no longer supported. The most recent release is 1.13.5. There have been many improvements to the garbage collector since 1.9. Is it possible for you to try a newer release?
xvsfekcn commented on Dec 11, 2019

The CPU profile:
flat flat% sum% cum cum%
1.03mins 27.30% 27.30% 1.04mins 27.43% runtime.gentraceback /root/go/src/runtime/traceback.go
0.31mins 8.22% 35.52% 0.31mins 8.22% runtime.usleep /root/go/src/runtime/sys_linux_amd64.s
0.17mins 4.49% 40.01% 0.17mins 4.49% runtime.heapBitsForObject /root/go/src/runtime/mbitmap.go
0.17mins 4.48% 44.49% 0.30mins 7.97% runtime.cgocall /root/go/src/runtime/cgocall.go
0.14mins 3.67% 51.98% 0.46mins 12.22% runtime.scanobject /root/go/src/runtime/mgcmark.go
0.09mins 2.49% 54.47% 0.09mins 2.49% runtime.greyobject /root/go/src/runtime/mbitmap.go
0.08mins 2.23% 56.69% 0.08mins 2.23% runtime.futex /root/go/src/runtime/sys_linux_amd64.s
0.07mins 1.98% 58.68% 1.40mins 37.02% runtime.mallocgc /root/go/src/runtime/malloc.go
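For reference, a minimal sketch of how a CPU profile like the one above can be captured with the standard runtime/pprof package (the file name and the 30-second window are illustrative):

```go
package main

import (
	"log"
	"os"
	"runtime/pprof"
	"time"
)

func main() {
	f, err := os.Create("cpu.prof")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := pprof.StartCPUProfile(f); err != nil {
		log.Fatal(err)
	}
	// In a real program the workload runs concurrently; the sleep here
	// just bounds the profiling window.
	time.Sleep(30 * time.Second)
	pprof.StopCPUProfile()
}
```

The resulting file is then inspected with go tool pprof.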
xvsfekcn commented on Dec 11, 2019

I wonder if it's a problem with our code itself.
ianlancetaylor commented on Dec 11, 2019
The most likely cause of this problem is an unpreemptible loop: a loop that runs, perhaps checking some package variable, without making any function calls. That will break the Go scheduler; see #10958 (this problem is fixed in the upcoming 1.14 release). The fix is to add calls to runtime.Gosched in the loop.
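To illustrate the suggestion, a minimal sketch of such a non-preemptible loop with the yield added; the flag and function names are hypothetical, not taken from the reporter's code:

```go
package main

import "runtime"

// done is a hypothetical flag that another goroutine eventually sets
// (ignoring the data race on it, which is beside the point here).
var done bool

func spinUntilDone() {
	// A loop that only checks a package variable and makes no function
	// calls cannot be preempted on Go 1.9, so a GC stop-the-world can
	// stall behind it for as long as it spins.
	for !done {
		// The suggested fix: yield so the scheduler (and the GC) can run.
		runtime.Gosched()
	}
}

func main() {
	go func() { done = true }()
	spinUntilDone()
}
```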
[The issue title was changed from "runtime: my gc pauseNs is larger than 4s." to "runtime: gc pauseNs larger than 4s".]

xvsfekcn commented on Dec 13, 2019
I turned off sending the request and the GC PauseNs returned to normal. But if I send even one request, the GC PauseNs grows to hundreds of milliseconds. How can I know what the GC is doing at this time?
Why does gentraceback have the highest percentage? Thanks very much.
xvsfekcn commented on Dec 13, 2019

Any better suggestions? Using the CPU profile and bisecting the code, we still cannot find a tight or busy loop. So sad.
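One way to see what the GC and the scheduler are doing during a long pause is the execution tracer; a minimal sketch of capturing a trace (the file name and the 5-second window are illustrative):

```go
package main

import (
	"log"
	"os"
	"runtime/trace"
	"time"
)

func main() {
	f, err := os.Create("trace.out")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	if err := trace.Start(f); err != nil {
		log.Fatal(err)
	}
	// In the real service this window would cover one of the slow periods.
	time.Sleep(5 * time.Second)
	trace.Stop()
}
```

The output can be opened with go tool trace trace.out, which shows stop-the-world phases and the goroutines that delay them.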
josharian commented on Dec 13, 2019
Have you tried go 1.13?
And if it is a non-preemptible loop, try go 1.14. A beta should be coming very soon.
xvsfekcn commented on Dec 20, 2019

The reason for this is the grpc goroutines: they cause the gentraceback time to grow over time.
josharian commented on Dec 20, 2019
Are you saying that grpc is leaking goroutines?
xvsfekcn commented on Dec 28, 2019
Yes, the error disappeared when we replaced grpc with http. I'm not sure if it's because our usage is wrong.
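For what it's worth, one gRPC usage pattern that commonly leaks goroutines is dialing a new client connection per request without ever closing it; a hypothetical sketch (the address is made up, and this is not known to be the reporter's actual mistake):

```go
package main

import (
	"log"

	"google.golang.org/grpc"
)

// Leaky pattern: a new ClientConn per request that is never closed. Every
// ClientConn starts background goroutines, so over days the goroutine count
// (and the cost of walking all those stacks in runtime.gentraceback) grows.
func leakyCall(addr string) {
	conn, err := grpc.Dial(addr, grpc.WithInsecure())
	if err != nil {
		log.Println(err)
		return
	}
	_ = conn // ... make the RPC ... but conn.Close() is never called
}

// Better: dial once, reuse the connection for all requests, Close on shutdown.
func main() {
	conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	// ... reuse conn for every RPC ...
}
```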
agnivade commented on Dec 28, 2019
It sounds like a bug with grpc then. Please file an issue on their issue tracker.
josharian commented on Dec 29, 2019
pprof's goroutine profiling may be useful for tracking down where the leaked goroutines are being created. See e.g.
https://stackoverflow.com/questions/38401778/how-to-profile-number-of-goroutines#38414527
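A minimal sketch of exposing the goroutine profile in-process, along the lines of the answer linked above (the listen address is illustrative):

```go
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/ handlers
)

func main() {
	// Visiting http://localhost:6060/debug/pprof/goroutine?debug=1 lists
	// every live goroutine grouped by creation site, which makes a leak
	// (e.g. thousands of identical stacks) easy to spot.
	log.Fatal(http.ListenAndServe("localhost:6060", nil))
}
```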
I'm going to close this issue, but if the issue does turn out to originate in grpc, please do file an issue with them. Thanks!