Description
What version of Go are you using (go version)?
I have tested on:
go version go1.9.2 darwin/amd64
go version go1.9.2 linux/amd64
Does this issue reproduce with the latest release?
Yes.
What operating system and processor architecture are you using (go env)?
darwin: GOARCH=amd64 GOOS=darwin
linux: GOARCH=amd64 GOOS=linux
What did you do?
pprof labels that I add using pprof.Do() do not appear in the goroutine profile.
Steps:
- Compile and start: https://play.golang.org/p/SgYgnDqaVKB
- Start "go tool pprof localhost:5555/debug/pprof/goroutine"
- Run the "tags" command
- See no tags, but I expect to see a tag for the label a-label=a-value
I also downloaded the file "localhost:5555/debug/pprof/goroutine", gunzipped it, and saw neither the label key nor the label value in the protobuf file.
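For reference, a minimal reproduction along the lines of the playground link above (this is an assumed shape of that program, not its exact contents):

package main

import (
	"context"
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof/* handlers
	"runtime/pprof"
)

func main() {
	// Run a goroutine whose work carries a pprof label.
	go pprof.Do(context.Background(), pprof.Labels("a-label", "a-value"), func(ctx context.Context) {
		select {} // park the labeled goroutine so it shows up in the goroutine profile
	})
	http.ListenAndServe("localhost:5555", nil)
}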
When I run "go tool pprof localhost:5555/debug/pprof/goroutine" twice and in the second run run "tags", I see
(pprof) tags
bytes: Total 3
2 (66.67%): 325.31kB
1 (33.33%): 902.59kB
This shows that labels can work. (I expect no output on the first run, since it is reasonable for no heap memory to have been allocated.)
What did you expect to see?
I expect to see the tags command output the label key-value pair in the program.
What did you see instead?
The tags command reports an empty value:
(pprof) tags
(pprof)
Activity
hyangah commented on Jan 16, 2018
I believe currently only the CPU profile utilizes tag information.
I am quite surprised that pprof on a goroutine profile ever reports tags (tags 1 and 2? kB? Maybe a pprof bug.)
The pprof code for the goroutine profile (https://golang.org/src/runtime/pprof/pprof.go?s=6343:6376#L599) depends on runtime.GoroutineProfile, and that runtime function doesn't handle tags. But I agree it would be nice if profiles other than the CPU profile could generate tagged profile samples.
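For comparison, here is a minimal sketch of where labels do surface today, the CPU profile; cpuHog and the output file name are placeholders, not anything from the report above:

package main

import (
	"context"
	"os"
	"runtime/pprof"
)

// cpuHog is a placeholder for some CPU-bound work.
func cpuHog() {
	x := 0
	for i := 0; i < 1e8; i++ {
		x += i
	}
	_ = x
}

func main() {
	f, _ := os.Create("cpu.pb.gz")
	defer f.Close()
	pprof.StartCPUProfile(f)
	// Samples taken while this function runs carry the a-label=a-value tag,
	// so the "tags" command in go tool pprof reports it for this CPU profile.
	pprof.Do(context.Background(), pprof.Labels("a-label", "a-value"), func(ctx context.Context) {
		cpuHog()
	})
	pprof.StopCPUProfile()
}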
ccfrost commented on Jan 17, 2018
Thank you, @hyangah. It would have helped me if the documentation for pprof labels had noted this. Can I ask for that change to the documentation? If it'll help, I'm happy to draft it.
I think I would find tags helpful in heap and goroutine profiles. If someone is willing to lend some guidance, I may be able to make time to work on the changes to add this. How does this sound, and is anyone interested enough to discuss the high-level changes and review the work?
hyangah commented on Jan 19, 2018
@ccfrost Who wouldn't love well-written documentation? Send a CL!
I don't think it's too hard to extend the goroutine profile to include the labels. Probably not difficult for any custom pprof.Profile (returned from pprof.NewProfile) that reports the 'current' status either.
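As an illustration of the "current status" kind of custom profile mentioned above, here is a minimal sketch; the profile name and conn type are invented for this example:

package connpool

import "runtime/pprof"

// openConns tracks currently open connections: entries are added on open and
// removed on close, so the profile always reflects the current state.
var openConns = pprof.NewProfile("example.com/open-conns")

type conn struct{}

func openConn() *conn {
	c := &conn{}
	openConns.Add(c, 1) // skip=1 omits this frame from the recorded call stack
	return c
}

func (c *conn) Close() {
	openConns.Remove(c)
}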
But I am afraid it's challenging to extend profiles such as heap, block, or mutex, because they involve never-decremented counters (e.g. the heap profile's data to support the --alloc_* options).
@rsc and @aclements , what do you think about adding tags to other profile types?
ianlancetaylor commented on Mar 28, 2018
CC @matloob
aclements commented on Apr 2, 2018
Hi Chris!
I agree labels would be good to add to "bounded" profiles. Unfortunately, most of the profile types are "monotonic", so we'd have to hold on to labels created from the beginning of the program. Even the --inuse_* heap profiles have this problem because they can report allocations from early in the program. Maybe for heap profiles we could treat labels as garbage-collectable, so we could still report labels in inuse profiles without having to hold on to all labels forever, even if we can't report them for alloc profiles.
@matloob, I remember there was some question about whether/how C++ pprof handles this for heap profiles. Did we ever figure out the answer?
hyangah commented on Apr 2, 2018
@aclements
Does the runtime already keep track of the labels at the time of object allocation? An object can be freed by a different goroutine, so the labels for the profiled block need to be stored somewhere for use at free time.
When using the experimental API in cl/102755 (background profiling support), we kept track of a mapping from a profiled block to the labels at the time the object was created, in user space. One drawback is that, if the reader is slow and a record gets lost due to overflow, that leads to a leak in the map. If the runtime actively maintained the mapping, the experimental API could be changed to take advantage of it.
aclements commented on Apr 2, 2018
It doesn't currently. Right now, we just have a hash table from allocation stack to bucket+memRecord, and profiled allocations have a link to the right bucket. The memRecord is just a few cumulative stats. Adding labels would probably require adding a second level to the memRecord that mapped from label set to per-label-set stats. Alternatively, we could key the main hash table by both stack and label set, but then it's unclear how to clean things out of that hash table when the last object with a given label set has been freed.
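A purely illustrative sketch of that second-level idea follows; these are not the real runtime definitions, and the field and type names are made up:

package mprofsketch

// Today (simplified): the runtime keeps one memRecord of cumulative stats per
// allocation stack, reached through a hash table keyed by the stack.
type memRecord struct {
	allocs, frees         int64
	allocBytes, freeBytes int64
}

// Sketch of the "second level": each stack's record gains a map keyed by the
// label set (encoded to a comparable string here), holding per-label-set
// stats alongside the existing totals.
type labeledMemRecord struct {
	memRecord                      // totals across all label sets
	byLabels map[string]*memRecord // key: canonical encoding of the label set
}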
Yes, if the runtime was tracking the label set it could be reported on both the alloc and the free.
But isn't there a more fundamental problem here? With a log-based heap profile, if you drop records, won't that permanently skew the profile? Unlike a CPU profile, a log-based heap profile needs to match up the allocation and free records; it's okay to lose both, but not one or the other. I would think the system would have to guarantee that it can report the free record if it reports the allocation record, for example, by chaining overflow free records through the heap specials until the reader catches up.
(FWIW, I'm a big fan of log-based profiling systems because it gets aggregation out of the runtime, which both simplifies the runtime and makes it possible to plug in more sophisticated aggregation. This is something I think Linux perf got really right.)
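To make the pairing constraint concrete, here is an illustrative shape for such a log (hypothetical types, not an existing runtime API): every sampled allocation gets an ID, and the free record for that ID must eventually be delivered even if other records are dropped.

package heaplog

// recordKind distinguishes allocation and free events in the profile log.
type recordKind uint8

const (
	allocRecord recordKind = iota
	freeRecord
)

// record is one entry in a log-based heap profile. The reader matches a
// freeRecord to its allocRecord by ID; losing both is a sampling artifact,
// but losing only one would permanently skew the inuse profile.
type record struct {
	kind   recordKind
	id     uint64            // pairs an allocation with its free
	size   uintptr           // sampled allocation size
	stack  []uintptr         // allocation call stack (PCs)
	labels map[string]string // goroutine labels at allocation time
}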
hyangah commented on Apr 2, 2018
The first one (a two-level memRecord) seems more feasible and simpler for mProf_Flush*.
Agreed, the runtime must not drop the free records of sampled allocations.
Regarding the log-based heap profile: inspired by cl/102755, I experimented with generating labeled heap profiles using it, but encountered a couple of challenges in addition to dealing with the dropped free-event records:
- Dropping an allocation record is not ideal either, if the allocated object turns out to be a long-lived, large object that users may be interested in. We need to start the log reader as early as possible; if the log reader lives outside the runtime, how can we ensure it starts, and starts as early as possible?
- The complexity of the 3-stage algorithm implemented in mprof.go. Its goal is to avoid bias towards malloc and provide more stable info - currently the inuse_* stats are the stats as of the latest GC. So I think the runtime should also log records about GC stages, if what this 3-stage algorithm offers is really important.
aclements commented on Apr 2, 2018
That's true, though this can always happen because it's statistical anyway (albeit more likely to sample large objects). Overflow allocation records could also be threaded through the heap specials.
If there's a canonical log reader (say, in runtime/pprof), we can always special-case this. If there isn't, then things get trickier. One could imagine feeding in synthetic records for the current state of the heap. Obviously you'd miss out on objects that had already been freed before starting the log reader, but you wouldn't miss out on long-lived objects that had already been allocated.
Yes. It seems quite reasonable to feed GC records into the log, and that should make it possible to implement the 3 stage algorithm.
20 remaining items
dfinkel commented on Apr 27, 2020
I wanted to add them to the runtime stacktraces that are produced with debug=2, but there are a few complications that I didn't want to address in the first CL:
- the debug=2 dump is produced by the runtime package, using the same code as the stack-traces dumped for panics
- the labelMap type lives in the runtime/pprof package, and we wouldn't want the runtime package to depend on its subpackage
38c2c12 was the low-hanging fruit of plumbing.
I don't have time to get anything further in before the 1.15 freeze hits us later this week, but I want to loop back and push on some of the other traces (including the debug=2 dumps) eventually. I'll probably post some ideas for the necessary plumbing/accounting on this issue before writing anything more, though.
hyangah commented on Apr 27, 2020
Can we split this into (at least) a couple of sub-issues?
- in_use (live) heap profile, and possibly other custom profiles: @aclements proposed log-based profiling inspired by the background CPU profiling (runtime/pprof: labels are not added to profiles #23458 (comment)). We need to think about how the labels can differ between alloc/free or Profile.Add/Remove.
- allocs/mutex/block profiles: no good, scalable solution has been proposed yet. They are monotonic, sampled profiles, so their labels are supposed to live forever. Maybe with the recent support for the delta profile we could consider adopting a log-based-profiling-like approach, but I am not sure.
- goroutine profile with debug=2: as described by @dfinkel's recent comment, currently it is done by calling runtime.Stack. A separate unexported function (visible only in runtime/pprof) is a possibility.
- threadcreate profile: broken (runtime: threadcreate profile is broken #6104).
dfinkel commented on Apr 29, 2020
@hyangah thanks for enumerating the topics being discussed here.
WRT goroutine profiles with debug=2, we can use an approach to visibility similar to https://golang.org/cl/189318, with a private function moved into runtime/pprof via a go:linkname comment. What do you think about that taking a callback argument with the signature func(p unsafe.Pointer) string to decode the label-map associated with that goroutine?
[dnm]: use background profiling
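A hedged sketch of that proposal (all names below are hypothetical; the runtime-side function does not exist, and the layout behind the label pointer is an assumption):

// Hypothetical sketch only; none of these declarations exist today.
package pprof // i.e. runtime/pprof

import "unsafe"

// The runtime side would expose something like
//
//	func runtime_stackWithLabels(buf []byte, all bool, decode func(p unsafe.Pointer) string) int
//
// to runtime/pprof via a go:linkname comment, analogous to the existing
// runtime_* helpers in this package. decodeLabels below is the callback with
// the signature proposed above: the runtime stores a goroutine's labels as an
// opaque unsafe.Pointer, which we assume points at a map[string]string-shaped
// value.
func decodeLabels(p unsafe.Pointer) string {
	if p == nil {
		return ""
	}
	labels := *(*map[string]string)(p)
	out := ""
	for k, v := range labels {
		out += " " + k + "=" + v
	}
	return out
}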