cmd/compile: tight code optimization opportunities #47120

Open
rsc opened this issue Jul 10, 2021 · 5 comments
Labels: compiler/runtime · NeedsInvestigation · Performance
Milestone: Backlog

Comments


rsc commented Jul 10, 2021

The generated x86 code can be improved in some fairly simple ways that have a significant performance impact on tight loops: hoisting computed constants out of loop bodies and avoiding unnecessary register moves. In the following example those improvements produce a 35% speedup.

Here is an alternate, DFA-based implementation of utf8.Valid that I have been playing with:

func Valid(x []byte) bool {
	state := uint64(1 * 6) // start in the accept state (states are bit offsets, multiples of 6)
	for _, b := range x {
		state = dfa[b] >> (state & 63) // next state: the 6-bit field of dfa[b] at offset state
	}
	return (state & 63) == 1*6 // valid iff we end back in the accept state
}

const (
	s00 = 0 | (1*6)<<(1*6)
	sC0 = 0
	sC2 = 0 | (2*6)<<(1*6)
	sE0 = 0 | (3*6)<<(1*6)
	sE1 = 0 | (4*6)<<(1*6)
	sED = 0 | (5*6)<<(1*6)
	sEE = sE1
	sF0 = 0 | (6*6)<<(1*6)
	sF1 = 0 | (7*6)<<(1*6)
	sF4 = 0 | (8*6)<<(1*6)
	sF5 = 0

	s80 = 0 | (1*6)<<(2*6) | (2*6)<<(4*6) | (4*6)<<(5*6) | (4*6)<<(7*6) | (4*6)<<(8*6)
	s90 = 0 | (1*6)<<(2*6) | (2*6)<<(4*6) | (4*6)<<(5*6) | (4*6)<<(6*6) | (4*6)<<(7*6)
	sA0 = 0 | (1*6)<<(2*6) | (2*6)<<(3*6) | (2*6)<<(4*6) | (4*6)<<(6*6) | (4*6)<<(7*6)
)

var dfa = [256]uint64{
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00, s00,
	s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80, s80,
	s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90, s90,
	sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0,
	sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0, sA0,
	sC0, sC0, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2,
	sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2, sC2,
	sE0, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sE1, sED, sEE, sEE,
	sF0, sF1, sF1, sF1, sF4, sF5, sF5, sF5, sF5, sF5, sF5, sF5, sF5, sF5, sF5, sF5,
}
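
To make the packing concrete: each dfa word holds nine 6-bit next-state fields, the current state (always a multiple of 6) is the bit offset of the field to extract, and state 0 is the absorbing reject state. Here is a worked trace as a sketch (the example function is mine, and assumes "fmt" is imported):

func ExampleValidTrace() {
	// "¢" is U+00A2, encoded as 0xC2 0xA2.
	state := uint64(1 * 6)            // start in the accept state
	state = dfa[0xC2] >> (state & 63) // sC2>>6: low 6 bits are 2*6, "expect one continuation byte"
	state = dfa[0xA2] >> (state & 63) // sA0>>12: low 6 bits are 1*6, back to accept
	fmt.Println(state&63 == 1*6)
	// Output: true
}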

There are no big benchmarks of Valid in the package, but here are some that could be added:

var valid1k = bytes.Repeat([]byte("0123456789日本語日本語日本語日abcdefghijklmnopqrstuvwx"), 16)
var valid1M = bytes.Repeat(valid1k, 1024)

func BenchmarkValid1K(b *testing.B) {
	b.SetBytes(int64(len(valid1k)))
	for i := 0; i < b.N; i++ {
		Valid(valid1k)
	}
}

func BenchmarkValid1M(b *testing.B) {
	b.SetBytes(int64(len(valid1M)))
	for i := 0; i < b.N; i++ {
		Valid(valid1M)
	}
}
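
Once added to the package's test file, they can be run in the usual way, for example:

% go test -bench=Valid unicode/utf8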

The old Valid implementation runs at around 1450 MB/s.
The implementation above runs at around 1600 MB/s.
Better but not what I had hoped.
It compiles as follows:

% go test -c && go tool objdump -s 'utf8.Valid$' utf8.test
TEXT unicode/utf8.Valid(SB) /Users/rsc/go/src/unicode/utf8/utf8.go
  utf8.go:647		0x10789c0		4889442408		MOVQ AX, 0x8(SP)
  utf8.go:649		0x10789c5		31c9			XORL CX, CX
  utf8.go:649		0x10789c7		ba06000000		MOVL $0x6, DX
  utf8.go:649		0x10789cc		eb1f			JMP 0x10789ed
  utf8.go:649		0x10789ce		0fb63408		MOVZX 0(AX)(CX*1), SI
  utf8.go:649		0x10789d2		488d7901		LEAQ 0x1(CX), DI
  utf8.go:650		0x10789d6		4c8d0543ef1600		LEAQ unicode/utf8.dfa(SB), R8
  utf8.go:650		0x10789dd		498b34f0		MOVQ 0(R8)(SI*8), SI
  utf8.go:650		0x10789e1		4889d1			MOVQ DX, CX
  utf8.go:650		0x10789e4		48d3ee			SHRQ CL, SI
  utf8.go:649		0x10789e7		4889f9			MOVQ DI, CX
  utf8.go:652		0x10789ea		4889f2			MOVQ SI, DX
  utf8.go:649		0x10789ed		4839cb			CMPQ CX, BX
  utf8.go:649		0x10789f0		7fdc			JG 0x10789ce
  utf8.go:652		0x10789f2		4883e23f		ANDQ $0x3f, DX
  utf8.go:652		0x10789f6		4883fa06		CMPQ $0x6, DX
  utf8.go:652		0x10789fa		0f94c0			SETE AL
  utf8.go:652		0x10789fd		c3			RET
  :-1			0x10789fe		cc			INT $0x3
  :-1			0x10789ff		cc			INT $0x3
%

Translating this to proper non-regabi assembly, I get:

#include "textflag.h"

TEXT ·Valid(SB),NOSPLIT,$0
	MOVQ	x_base+0(FP),AX // p = &x[0]
	MOVQ	x_len+8(FP),BX // n = len(x)
	XORL	CX, CX // i = 0
	MOVL	$6, DX
	JMP loop

body:
	MOVBLZX	(AX)(CX*1), SI // b = p[i]
	LEAQ	1(CX), DI // j = i+1
	LEAQ	·dfa(SB), R8
	MOVQ	(R8)(SI*8), SI // t = dfa[b]
	MOVQ	DX, CX // CX = state
	SHRQ	CX, SI // t >>= state&63
	MOVQ	DI, CX // i = j
	MOVQ	SI, DX // state = t

loop:
	CMPQ	CX, BX
	JLT	body

	ANDL	$0x3f, DX
	CMPL	DX, $6
	SETEQ	AL
	MOVB	AX, ret+24(FP)
	RET

This also runs at about 1600 MB/s.

First optimization: the LEAQ ·dfa(SB), R8 should be hoisted out of the loop body.
(I tried to do this in the Go version with dfa := &dfa, but it got constant-propagated away!)
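
For reference, that Go-level attempt looks roughly like this (a sketch; the function name is mine):

func validHoisted(x []byte) bool {
	p := &dfa // intended to hoist the table address, but the compiler folds it back to a constant
	state := uint64(1 * 6)
	for _, b := range x {
		state = p[b] >> (state & 63)
	}
	return state&63 == 1*6
}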

That change brings it up to 1750 MB/s.

Second optimization: use DI for i instead of CX, to avoid the pressure on CX.
This lets the LEAQ 1(CX), DI and the later MOVQ DI, CX collapse to just LEAQ 1(DI), DI.

That change brings it up to 1900 MB/s.

The body is now:

body:
	MOVBLZX	(AX)(DI*1), SI // b = p[i]
	LEAQ	1(DI), DI // i++
	MOVQ	(R8)(SI*8), SI // t = dfa[b]
	MOVQ	DX, CX // CX = state
	SHRQ	CX, SI // t >>= state&63
	MOVQ	SI, DX // state = t

Third optimization: since DX is moved into CX, do that move one instruction earlier, allowing SI to be replaced by DX and eliminating the final MOVQ:

body:
	MOVBLZX	(AX)(DI*1), SI // b = p[i]
	LEAQ	1(DI), DI // i++
	MOVQ	DX, CX // CX = state
	MOVQ	(R8)(SI*8), DX // state = dfa[b]
	SHRQ	CX, DX // state >>= CX&63

I think this ends up being just "compute the shift amount before the shifted value".
That change brings it up to 2150 MB/s.

This is still a direct translation of the Go code: there are no tricks the compiler couldn't do.
For this particular loop, the optimizations make the code run 35% faster.

Final assembly:

#include "textflag.h"

TEXT ·Valid(SB),NOSPLIT,$0
	MOVQ	x_base+0(FP),AX // p = &x[0]
	MOVQ	x_len+8(FP),BX // n = len(x)
	XORL	DI, DI // i = 0
	MOVL	$6, DX
	LEAQ	·dfa(SB), R8
	JMP loop

body:
	MOVBLZX	(AX)(DI*1), SI // b = p[i]
	LEAQ	1(DI), DI // i++
	MOVQ	DX, CX // CX = state
	MOVQ	(R8)(SI*8), DX // t = dfa[b]
	SHRQ	CX, DX // t >>= state&63

loop:
	CMPQ	DI, BX
	JLT	body

	ANDL	$0x3f, DX
	CMPL	DX, $6
	SETEQ	AL
	MOVB	AX, ret+24(FP)
	RET

rsc commented Jul 10, 2021

Here is another, separate opportunity, for GOAMD64=v3 compilation. The SHRXQ instruction takes an explicit shift register, has separate source and destination operands, and can read its source from memory. That allows reducing the loop to:

body:
	MOVBLZX	(AX)(DI*1), SI // b = p[i]
	LEAQ	1(DI), DI // i++
	SHRXQ	DX, (R8)(SI*8), DX // state = dfa[b] >> (state&63)

That change runs at 3400 MB/s (!).

(The DFA tables were carefully constructed exactly to enable this implementation.)
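
For anyone reproducing this, the v3 instruction selection is opted into via the GOAMD64 environment variable (proposal #45453), for example:

% GOAMD64=v3 go build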

@rsc rsc added the NeedsInvestigation Someone must examine and confirm this is a valid issue and not a duplicate of an existing one. label Jul 10, 2021
@rsc rsc added this to the Backlog milestone Jul 10, 2021

zchee commented Jul 14, 2021

@rsc sorry for hijacking the thread, but what does GOAMD64=v3 mean?


ALTree commented Jul 14, 2021

@zchee see #45453 and https://en.wikipedia.org/wiki/X86-64#Microarchitecture_levels.


laboger commented Jan 12, 2022

I see this hasn't had attention for a while, but this is a problem I've noticed in ppc64 code too: invariant values are not moved out of loops. I thought at one time there was work to do this, but it must have been abandoned.

Here is one example:

   ff0a0:       00 00 e3 8b     lbz     r31,0(r3)  <--- nil check is not needed on each iteration
   ff0a4:       24 1f c7 78     rldicr  r7,r6,3,60  \
   ff0a8:       14 3a 03 7d     add     r8,r3,r7   /  These two could be strength-reduced?
   ff0ac:       00 00 28 e9     ld      r9,0(r8)
   ff0b0:       2a 20 47 7d     ldx     r10,r7,r4
   ff0b4:       2a 28 67 7d     ldx     r11,r7,r5
   ff0b8:       78 52 6a 7d     xor     r10,r11,r10
   ff0bc:       b0 00 61 39     addi    r11,r1,176  <---- this is invariant in the loop
   ff0c0:       2a 58 e7 7c     ldx     r7,r7,r11
   ff0c4:       78 52 e7 7c     xor     r7,r7,r10
   ff0c8:       78 3a 27 7d     xor     r7,r9,r7
   ff0cc:       00 00 e8 f8     std     r7,0(r8)
   ff0d0:       01 00 c6 38     addi    r6,r6,1
   ff0d4:       80 00 26 2c     cmpdi   r6,128
   ff0d8:       c8 ff 80 41     blt     ff0a0 <golang.org/x/crypto/argon2.processBlockGeneric+0x3a0>
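
For illustration, a minimal Go loop of the shape this listing suggests (a hypothetical reconstruction, not the actual argon2 source) would be:

func xorBlock(z *[128]uint64, x, y []uint64) {
	var t [128]uint64 // stack array; its base address (the addi r11,r1,176 above) is loop-invariant
	// ... t would be filled in here ...
	for i := 0; i < 128; i++ {
		z[i] ^= (x[i] ^ y[i]) ^ t[i] // the nil check on z also repeats every iteration
	}
}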


Change https://go.dev/cl/385174 mentions this issue: cmd/compile: use shlx&shrx instruction for GOAMD64>=v3

gopherbot pushed a commit that referenced this issue Apr 4, 2022
The SHRX/SHLX instructions can take any general register as the shift-count operand and can read the source from memory. This CL introduces some operators to combine the load and shift into one instruction.

For #47120

Change-Id: I13b48f53c7d30067a72eb2c8382242045dead36a
Reviewed-on: https://go-review.googlesource.com/c/go/+/385174
Reviewed-by: Keith Randall <[email protected]>
Trust: Cherry Mui <[email protected]>
@gopherbot gopherbot added the compiler/runtime Issues related to the Go compiler and/or runtime. label Jul 13, 2022
@mknyszek mknyszek moved this to Triage Backlog in Go Compiler / Runtime Jul 15, 2022
bors added a commit to rust-lang-ci/rust that referenced this issue Feb 7, 2025
Rewrite UTF-8 validation in shift-based DFA for 53%~133% performance increase on non-ASCII strings

Take 2 of rust-lang#107760 (cc `@thomcc`)

### Background

About the technique: https://gist.github.com/pervognsen/218ea17743e1442e59bb60d29b1aa725

As stated in rust-lang#107760,
> For prior art: shift-DFAs are now used for UTF-8 validation in [PostgreSQL](https://github.com/postgres/postgres/blob/aa6954104644334c53838f181053b9f7aa13f58c/src/common/wchar.c#L1753), and seems to be in progress or under consideration for use in JuliaLang/julia#47880 and perhaps golang/go#47120. Of these, PG's impl is the most similar to this one, at least at a high level[1](rust-lang#107760 (comment)).

### Rationales

1. Performance: This algorithm gives a large performance increase when validating strings with many non-ASCII codepoints, which is the normal case for almost all non-English content.

2. Generality: It does not use SIMD instructions and does not rely on the branch predictor for good performance, so it works well as a general, default, architecture-agnostic implementation. There is still a bypass for ASCII-only strings to benefit from auto-vectorization, if the target supports it.

### Implementation details

I use the ordinary UTF-8 language definition from [RFC 3629](https://datatracker.ietf.org/doc/html/rfc3629#section-4) and directly translate it into a 9-state DFA. The compressed state is 64-bit, resulting in a table of `[u64; 256]`, or 2KiB of rodata.

The main algorithm consists of the following parts:
1. Main loop: take a chunk of `MAIN_CHUNK_SIZE = 16` bytes on each iteration, execute the DFA on the chunk, and check whether the state is ERROR once per chunk.
2. ASCII bypass: in each chunk iteration, if the current state is ACCEPT, we know we are not in the middle of an encoded sequence, so we can skip a large block of trivial ASCII and stop at the first chunk containing any non-ASCII bytes. I chose `ASCII_CHUNK_SIZE = 16` to align with the current implementation: checking 16 bytes at a time for non-ASCII bytes, to encourage LLVM to auto-vectorize it.
3. Trailing chunk and error reporting: execute the DFA step by step, stop on error as soon as possible, and calculate the error/valid location. To keep things simple, if any error is encountered in the main loop, it discards the erroneous chunk and `break`s into this path to find the precise error location. That is, the erroneous chunk, if it exists, is traversed twice, in exchange for a tighter and more efficient hot loop.

There are also some small tricks being used:
1. Since we have i686-linux in Tier 1 support, and its 64-bit shift (SHRD) is quite slow in our latency-sensitive hot loop, I arrange the state storage so that the state transition can be done with a 32-bit shift and a conditional move. This shows a 200%+ speedup compared to the 64-bit-shift version.
2. We still need to get the UTF-8 encoded length from the first byte in `utf8_char_width`. I merged the previous lookup table into the unused high bits of the DFA transition table, so we don't need two tables. It does introduce an extra 32-bit shift; I believe it's almost free but have not benchmarked it yet.

### Benchmarks

I made an [out-of-tree implementation repository](https://github.com/oxalica/shift-dfa-utf8) for easier testing and benching. It also tests various `MAIN_CHUNK_SIZE` (m) and `ASCII_CHUNK_SIZE` (a) configurations. Bench data are taken from the first 4KiB (from the first paragraph, plain text not HTML, cut at a char boundary) of the Wikipedia article on [William Shakespeare in en](https://en.wikipedia.org/wiki/William_Shakespeare), [es](https://es.wikipedia.org/wiki/William_Shakespeare) and [zh](https://zh.wikipedia.org/wiki/%E5%A8%81%E5%BB%89%C2%B7%E8%8E%8E%E5%A3%AB%E6%AF%94%E4%BA%9A).

In short: with m=16, a=16, shift-DFA gives -43% on en, +53% on es, and +133% on zh; with m=16, a=32, it gives -9% on en, +26% on es, and +33% on zh. This is expected: the larger the ASCII bypass chunk, the better it performs on ASCII, but the worse on mixed content like es, because the taken branch keeps flipping.

To me, the difference between 27 GB/s and 47 GB/s on en is minimal in absolute time (144.61ns - 79.86ns = 64.75ns, compared to 476.05ns - 392.44ns = 83.61ns on es), so I currently chose m=16, a=16 in the PR.

On x86_64-linux, Ryzen 7 5700G @ 3.775 GHz:

| Algorithm         | Input language | Throughput / (GiB/s)  |
|-------------------|----------------|-----------------------|
| std               | en             | 47.768 +-0.301        |
| shift-dfa-m16-a16 | en             | 27.337 +-0.002        |
| shift-dfa-m16-a32 | en             | 43.627 +-0.006        |
| std               | es             |  6.339 +-0.010        |
| shift-dfa-m16-a16 | es             |  9.721 +-0.014        |
| shift-dfa-m16-a32 | es             |  8.013 +-0.009        |
| std               | zh             |  1.463 +-0.000        |
| shift-dfa-m16-a16 | zh             |  3.401 +-0.002        |
| shift-dfa-m16-a32 | zh             |  3.407 +-0.001        |

### Unresolved

- [ ] Benchmark on aarch64-darwin, another tier 1 target.
  I don't have a machine to play with.

- [ ] Decide the chunk size parameters. I'm currently picking m=16, a=16.

- [ ] Should we also replace the implementation of [lossy conversion](https://github.com/oxalica/rust/blob/c0639b8cad126d886ddd88964f729dd33fb90e67/library/core/src/str/lossy.rs#L194) with a call to the new validation function?
  It has very similar code doing almost the same thing.
bors added a commit to rust-lang-ci/rust that referenced this issue Feb 9, 2025
Rewrite UTF-8 validation in shift-based DFA for 53%~133% performance increase on non-ASCII strings