Description
The Go 1.9 implementation of sync.Map uses a single Mutex to guard the read-write map containing new keys. That makes Store calls with different new keys always contend with each other, and also contend with Load calls with different new keys, even if the Loads and Stores for each key are confined to a single thread.
That doesn't really matter for the sync.Map use-cases in the standard library, because they do not operate on new keys in the steady state, but it limits the utility of sync.Map for use-cases involving a high rate of churn. Such use-cases may include:
- caches with high eviction rates, such as caches fronting key-value storage services
- maps of RPC or HTTP stream ID to handler state
- maps of opaque handles to Go pointers in order to write C-exported APIs that comply with cgo pointer-passing rules
We should explore ways to address new-key contention, such as sharding the read-write maps and associated locks (as suggested in #20360), journaling writes (and using a Bloom or HyperLogLog filter to avoid reading the journal, along the lines of #21032), or storing the read-write map in an atomic tree data structure instead of a built-in map.
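As a rough illustration only, and not a concrete proposal, a user-space sketch of the sharded-lock idea might look something like the following. It has to hash each key a second time, because the runtime's map hasher is not reachable from outside the runtime; all names here are illustrative.

```go
package shardedmap

import (
	"hash/maphash"
	"sync"
)

const numShards = 64 // power of two, so the low-order bits of the hash select a shard

type shard struct {
	mu sync.RWMutex
	m  map[string]any
}

// Map spreads keys across independently locked shards so that Stores of
// distinct new keys rarely contend on the same mutex.
type Map struct {
	seed   maphash.Seed
	shards [numShards]shard
}

func New() *Map {
	sm := &Map{seed: maphash.MakeSeed()}
	for i := range sm.shards {
		sm.shards[i].m = make(map[string]any)
	}
	return sm
}

// shardFor re-hashes the key (the runtime's own map hash is not accessible
// here) and uses the low-order bits as the shard index.
func (sm *Map) shardFor(key string) *shard {
	h := maphash.String(sm.seed, key)
	return &sm.shards[h&(numShards-1)]
}

func (sm *Map) Load(key string) (value any, ok bool) {
	s := sm.shardFor(key)
	s.mu.RLock()
	value, ok = s.m[key]
	s.mu.RUnlock()
	return value, ok
}

func (sm *Map) Store(key string, value any) {
	s := sm.shardFor(key)
	s.mu.Lock()
	s.m[key] = value
	s.mu.Unlock()
}
```

A real implementation inside the standard library could reuse the runtime's hasher directly and would also need to preserve sync.Map's existing read-only/dirty promotion behavior, which this sketch ignores.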
Activity
bcmills commented on Jul 16, 2017
(@orcaman and/or @OneOfOne might be interested in this one?)
OneOfOne commented on Jul 17, 2017
I'm interested, but I can't commit right now. The one idea that comes to mind would require hacking runtime/hashmap*; otherwise we'd have to double-hash the keys.
orcaman commented on Jul 17, 2017
Very interesting task, I'd love to take this one. I think it should definitely be doable for the Go 1.10 milestone. I'll think about the design some more before I can commit 100% to doing this one in due time.
OneOfOne commented on Jul 27, 2017
@orcaman What I wanted to do is something like https://github.com/OneOfOne/cmap/tree/master/v2. I use a slightly modified version of this in my own code, and I wanted to do something similar with sync.Map, but that would require a lot more runtime knowledge, so that the sharding could use runtime/map's hasher directly rather than double-hashing the keys.
Sadly, I don't have the knowledge or the time to learn the internals of runtime/map right now.
By all means, if you can do that it'd be great, or we can discuss it.
robaho commented on Nov 26, 2018
I think you can just use a slice of locks and dirty maps, with the low-order bits of the hash performing the selection, but this requires access to the internal hash code.
robaho commented on Nov 27, 2018
@bcmills I am willing to give this a try. Is there a document that describes using internal facilities when an implementation is part of the stdlib?
20 remaining items
thepudds commented on Dec 3, 2024
Hi @akavel, new APIs need to go through the proposal process, but improving the internal implementation of existing APIs does not, so my guess is there probably was not a specific proposal for updating the implementation of sync.Map.
As I understand the history, this new implementation was originally merged in CL 573956 as an internal HashTrieMap to support the new unique package proposal (#62483), with some discussion in that proposal.
There is a large stack of more recent CLs that add more functionality, I think with the goal of supporting sync.Map, including a comment about enabling it by default in CL 608335 along with some performance-related notes.
There are probably other interesting bits of commentary if you poke around the history some more.
If people are curious about performance improvements, it could be helpful to test it out via benchmarks for their use cases, which is relatively easy via gotip:
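(gotip can be installed with go install golang.org/dl/gotip@latest followed by gotip download.) Purely as an illustrative starting point, a micro-benchmark like the hypothetical one below can be run with both go test -bench=. and gotip test -bench=. to compare the implementations for a given access pattern:

```go
package mapbench

import (
	"strconv"
	"sync"
	"sync/atomic"
	"testing"
)

// BenchmarkLoadOrStoreNewKeys exercises the new-key path this issue is
// about: every iteration inserts a key the map has never seen before.
func BenchmarkLoadOrStoreNewKeys(b *testing.B) {
	var m sync.Map
	var next atomic.Int64
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			k := strconv.FormatInt(next.Add(1), 10)
			m.LoadOrStore(k, k)
		}
	})
}
```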
mknyszek commented on Dec 3, 2024
Thanks @thepudds. Yeah, there wasn't even a tracking issue for this, since the fact that it came to be at all was somewhat coincidental. It turned out the new design was faster than sync.Map in its own benchmarks, so I pushed a little to get the rest of the operations implemented before the freeze.
The commit messages didn't include the benchmark results on sync's benchmark suite, but the geomean across all of them was something like ~20% improved for GOMAXPROCS=1 and GOMAXPROCS=2, and more for higher GOMAXPROCS (1 and 2 are probably more realistic, just because very few applications are going to be hammering on a sync.Map on all cores). I can re-run the benchmarks and post them here, and you're also welcome to run them yourself. You can also disable the new map implementation for now with GOEXPERIMENT=nosynchashtriemap.
akavel commented on Dec 9, 2024
@mknyszek I think it would be super cool if you could post them here; I suppose other/future readers might also be interested. I got here during some quick research trying to evaluate the feasibility of sync.Map for my use case, which led me to look at third-party replacements. If others happen to tread a similar path in the future, this could be useful to them.
thepudds commented on Dec 9, 2024
Hi @akavel and anyone else interested -- @mknyszek posted some micro-benchmark results for the new sync.Map implementation in #70683, along with some brief discussion.
I would definitely encourage anyone curious to try out the new implementation (e.g., via gotip or otherwise). Bugs reported now are especially helpful.
Also note that he mentioned you can currently disable it via GOEXPERIMENT=nosynchashtriemap if you want to switch back and forth between implementations (when testing, for example).
rabbbit commented on Dec 11, 2024
I was curious, so I ran some of our internal benchmarks against gotip. We're testing a single sync.Map "graph" with a set of "elements". The test is effectively a "LoadOrStore" test where creating a new "element" is about 200x more expensive, in allocs, than a lookup.
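In rough terms, the access pattern looks like the hedged sketch below; all type and function names here are stand-ins rather than our real code. The common path is a cheap Load, and the expensive element is only constructed after a miss and handed to LoadOrStore.

```go
package graphcache

import "sync"

// element stands in for the expensive-to-construct per-key state; the
// real type is not shown here.
type element struct {
	key  string
	data map[string]string // placeholder for the costly internal maps
}

// newElement stands in for the construction that is roughly 200x more
// expensive, in allocs, than a lookup.
func newElement(key string) *element {
	return &element{key: key, data: make(map[string]string)}
}

// getOrCreate sketches the LoadOrStore-style access pattern: the common
// path is a cheap Load, and the expensive element is only built on a miss.
func getOrCreate(graph *sync.Map, key string) *element {
	if v, ok := graph.Load(key); ok {
		return v.(*element)
	}
	e := newElement(key)
	actual, _ := graph.LoadOrStore(key, e)
	return actual.(*element)
}
```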
The results are good in the pathological case: we see a ~20% win when creating a lot of new elements and iterating at the same time. Comparing gotip with the experiment enabled/disabled:
More interesting were the results on 1.23.4 vs gotip (so unrelated to this PR):
Tangentially, we're using Bazel, so it took me some time to figure out if I got the experiment wired through correctly. Was printing the enabled experiments as part of the benchmark preamble ever considered?
mknyszek commented on Dec 12, 2024
The performance degradation is interesting. Where are the allocations coming from? What's the diff in the memory profile? I'm wondering if it's from the callsite or something else.
Also, if you're up for checking it, what does Go 1.23.4 vs. gotip-with-experiment-off look like?
rabbbit commented on Dec 12, 2024
I'm perhaps missing a part of your question (and experiment-off means the new map is on? :)), but experiment on/off shows no difference in most benchmarks. So the comparison vs 1.23.4 should also be stable?
I rechecked, and the degradation is there.
Comparing the two profiles, the gotip one shows the following (apologies for this being an image; it appears I don't know how to use pprof):
Disabling the swissmap results in:
Not entirely sure what to make of this, or even how to debug it further right now.
prattmic commented on Dec 13, 2024
I'll come back to this next week to read the context more carefully, but for now I'll quickly say that an increase in alloc count with swissmaps isn't inherently surprising, as these maps contain more small allocations. Overall size should be similar.
prattmic commented on Dec 20, 2024
The new sync.Map implementation does not use the builtin map type at all, so the effect from enabling/disabling swissmaps must be from some other part of your benchmark that is using maps.
rabbbit commented on Feb 11, 2025
Sorry, I missed your response. Yes indeed: there's a large sync.Map, where each of the objects internally contains/creates many standard maps. In the benchmarks those sub-maps are mostly very sparse (typically 1-2 objects), so I would expect the allocs to come from something related to map initialization.
mknyszek commented on May 13, 2025
I'm going to mark this issue as fixed now since we no longer have a warm-up time for new keys, so accessing disjoint keys already scales nicely from the moment they're inserted.