Description
What version of Go are you using (`go version`)?

```
$ go version
go version go1.15.5 darwin/amd64
```
Does this issue reproduce with the latest release?
Yes
What operating system and processor architecture are you using (`go env`)?

```
$ go env
GO111MODULE=""
GOARCH="amd64"
GOBIN=""
GOCACHE="/Users/vsi/Library/Caches/go-build"
GOENV="/Users/vsi/Library/Application Support/go/env"
GOEXE=""
GOFLAGS=""
GOHOSTARCH="amd64"
GOHOSTOS="darwin"
GOINSECURE=""
GOMODCACHE="/Users/vsi/go/pkg/mod"
GONOPROXY="github.com/vsivsi"
GONOSUMDB="github.com/vsivsi"
GOOS="darwin"
GOPATH="/Users/vsi/go"
GOPRIVATE="github.com/vsivsi"
GOPROXY="https://proxy.golang.org,direct"
GOROOT="/usr/local/Cellar/go/1.15.5/libexec"
GOSUMDB="sum.golang.org"
GOTMPDIR=""
GOTOOLDIR="/usr/local/Cellar/go/1.15.5/libexec/pkg/tool/darwin_amd64"
GCCGO="gccgo"
AR="ar"
CC="clang"
CXX="clang++"
CGO_ENABLED="1"
GOMOD=""
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
PKG_CONFIG="pkg-config"
GOGCCFLAGS="-fPIC -m64 -pthread -fno-caret-diagnostics -Qunused-arguments -fmessage-length=0 -fdebug-prefix-map=/var/folders/kp/kjdr0ytx5z9djnq4ysl15x0h0000gn/T/go-build186752670=/tmp/go-build -gno-record-gcc-switches -fno-common"
```
What did you do?

Test for AVX512 support using `cpu.X86.HasAVX512`:

main.go

```go
package main

import (
	"fmt"

	"golang.org/x/sys/cpu"
)

func main() {
	fmt.Println(cpu.X86.HasAVX512)
}
```
What did you expect to see?

The program above should print `true` on any OS/hardware combination that is capable of running AVX512 instructions.
What did you see instead?

This program prints `false` on all Macs that are perfectly capable of running AVX512 instructions generated by the Go assembler.
The reason is complicated, and appears to have to do with how recent versions of the darwin kernel (those since AVX512 enabled processors began appearing in Mac hardware) choose to support the greatly expanded AVX512 thread state.
In summary, darwin implements a two-tier "promotion" based scheme to economize on saving thread state when AVX512 specific registers are not in use. It implements this by initially disabling AVX512 support for new threads, and then trapping undefined instruction faults for AVX512 instructions in the kernel, enabling AVX512 support for the thread, and then restarting execution at the faulted instruction. This scheme has the advantage of maintaining pre-AVX512 efficiency when preempting threads that haven't used any AVX512 extensions. But the cost appears to be that testing for AVX512 support is more complex.
Specifically, this code assumes that disabled AVX512 OS support is permanent:

https://github.com/golang/sys/blob/master/cpu/cpu_x86.go#L90

The test in the code above is performed at init time, before any AVX512 instructions have been run, and hence the bits inspected from `xgetbv()` reflect at that point that AVX512 support is disabled by the OS. Upon failing that test (`cpu.X86.HasAVX512 != true`), the CPUID bits indicating that the hardware is AVX512 capable are simply ignored.
Given darwin's two-tier thread state scheme, clearly something more sophisticated is needed here to properly detect whether AVX512 instructions can be run.
Here is a reference to the darwin code implementing these checks:
https://github.com/apple/darwin-xnu/blob/0a798f6738bc1db01281fc08ae024145e84df927/osfmk/i386/fpu.c#L176
And here is an issue on an Intel compiler project raising the same problem:
ispc/ispc#1854
There is also a known issue with darwin where threads executing unsupported AVX512 instructions get stuck in a tight loop of some kind, so properly detecting AVX512 support and the CPUID flags for specific extensions is critical. See:
vsivsi commented on Dec 9, 2020
Note: All of the above results were produced using MacOS Catalina (10.15.7). I haven't yet been able to test any of this on Big Sur (11.x).
I've reproduced this behavior on two different Macs supporting AVX512:

Both of these processors support AVX512 (with varying extensions) and I have successfully run golang programs using AVX512 instructions assembled by the go assembler on each of them. Yet `cpu.X86.HasAVX512` is always false.

randall77 commented on Dec 9, 2020
I'm not sure how we would possibly rectify this situation. We ask the hardware if it supports AVX512, and it says no.
Then what? Possibly
3a. If we end up in the signal handler, set cpu.X86.HasAVX512 to false and disable step 3b.
3b. If don't end up in the signal handler, set cpu.X86.HasAVX512 to true (or maybe recheck the hardware support bitvector?).
Seems awfully complicated. We'd have to do this on startup of pretty much every Go binary. We're already parsimonious about how many system calls we make on startup; this goes in decidedly the wrong direction.
Why can't OSX set the hardware enabled bits? Is it because it can't trap on instructions unless the support bits are 0? It would be nice if the interrupt-on-these-instructions bits were distinct from the these-instructions-are-supported bits.
Issue retitled: "internal/cpu: cpu.X86.HasAVX512 incorrectly always returns false on darwin" → "x/sys/cpu: cpu.X86.HasAVX512 incorrectly always returns false on darwin"

martisch commented on Dec 9, 2020
I agree that claiming in CPUID that the instruction is not supported and then still using it is not how it should be. Some hypervisors allow explicitly disabling AVX or AVX512 to allow migration to non-AVX/AVX512 machines, so just trying the instruction might not work for those scenarios.
MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports):
These are 1 even when the program from the first comment in the issue returns false:
vsivsi commented on Dec 9, 2020
Quoting from the Darwin source linked in the OP:
So the true answer is to probably use one of those kernel supplied mechanisms. I’m not a kernel hacker so the best way to do that is outside my wheelhouse.
It does seem to be the case that the CPUID bits for AVX512 always correctly reflect what the processor itself can do. But XCR0 bits (controlled by the kernel) take precedence of course, because it would be unsafe to execute instructions against register state that might not survive a thread preemption.
So for Darwin, when might it be unsafe to just go by the CPUID bits (AVX512F specifically) ignoring XCR0, and just assume that the thread will be promoted by the kernel?
As near as I can tell, AVX512 has only shipped in 3 lines of Macs, the 2017 iMac Pro (released with MacOS 10.13 High Sierra) and the 2019 Mac Pro and 2020 MacBook Pro 13” (both released with MacOS 10.15 Catalina). I have access to a 2017 iMac Pro that is still running 10.14/Mojave, and can test what happens there tomorrow.
Basically, for Apple Mac hardware running MacOS, I think the only issue is whether there is a version of MacOS/Darwin that supports one of those three machines but doesn't support AVX512 at all. I'd be surprised if Apple shipped that 2017 iMac Pro with a kernel that couldn't use the full instruction set, but I can't currently test it. The public change date for the AVX512 support in Darwin is 9/26/2017, which is exactly one day after MacOS 10.13 was released. So it's a fair bet that whatever version of MacOS 10.13 shipped on the 2017 iMac Pro contained those changes.
Which is all a long-winded way of saying that for the part of the Darwin universe that consists of Apple hardware running a MacOS that supports that hardware, if the processor CPUID bit for AVX512F is set, then you can probably assume that the Darwin kernel will promote a thread that uses AVX512 instructions.
So what other cases are there to care about? Darwin is an open source kernel, so it’s conceivable that there are people running old non-MacOS Darwin, that predates the AVX512 support, on processors that do have AVX512 per the CPUID bits. In that case, respecting the XCR0 bits is critical to identifying that the kernel does not support those instructions, and if you try to run them you will get UD faults regardless of what CPUID says. But how many people are actually doing this? And trying to run AVX512 on this setup? Probably tiny compared to the MacOS installed base, but it’s probably non-zero.
And of course there are people running Hackintoshes, who might be running pre-10.13 MacOS on hardware supporting AVX512, and that will be the same situation as above, although probably somewhat larger numbers.
Anyway, I’ve convinced myself that it’s safe enough for me to just go by the CPUID bits when running on MacOS and ignore the XCR0 bits and assume that the darwin kernel will promote.
But for the golang cpu package, IMO the code needs to actually consult sysctl or the “commpage” (whatever that is) and not make any assumptions.
vsivsi commented on Dec 9, 2020
To golang’s credit, it currently appears to do precisely what Intel recommends:
From section 15.2 of volume 1 of the Intel software developers manual.
Unfortunately the darwin kernel has other ideas...
martisch commented on Dec 9, 2020
Yes, Go has been following what Intel recommends checking, and that has worked on other operating systems so far.
Two other categories of valid usage of darwin/amd64 that may need to be considered for interpreting CPUID:
If this needs fixing because darwin does it differently, then I would likely recommend we check sysctls to detect whether AVX512 (maybe zmm, if there is a ctl for it) is supported. We can make one check whether AVX512F is supported in sysctls, and only if that is enabled check the rest.
vsivsi commented on Dec 9, 2020
@martisch Thanks for those additional cases. Questions:
On point 1) AWS, I’m fairly certain the AWS MacOS support is not virtualized. They are renting bare metal MacMini machines as I recall from the announcements. e.g.: Amazon Web Services adds macOS on bare metal to EC2. That doesn’t change the fact that this might happen in the future however!
On point 2) I haven't heard anything at all about AVX512 on Rosetta 2. I would be more than a little surprised if they went to the great trouble of emulating it, given the relatively small installed base of machines/software that use it, and the fact that any such software will almost certainly provide non-AVX512 code paths for when it is not supported (and cannot blame Apple if they don't properly check these things, which brings us right back to why this issue is important!).
Thanks again!
martisch commented on Dec 9, 2020
You are right, it seems AWS is not actually offering virtualized OS X instances; I misremembered that. VMware and other products still let you run OS X virtualized, as far as I have seen. This may also allow the migration of VMs from Apple hardware with AVX512 to Apple hardware without AVX512.
I don't think any AVX is supported by Rosetta 2. (https://developer.apple.com/documentation/apple_silicon/about_the_rosetta_translation_environment: "What Can't Be Translated?") I was just thinking about the case where CPUID claims AVX512 is not supported and then it actually isn't on modern OS X versions.
I have not tested this, but if I understand your later replies correctly, OS X isn't actually claiming in CPUID that AVX512 isn't supported on CPUs that support it; rather, XCR0 claims that ZMM registers are not supported (saved and restored). So there isn't actually a case of OS X "lying" about instruction set extension support; it is rather that OS X changes its stance on whether it saves and restores ZMM (and other) registers, as reported in XCR0, based on when an AVX512 instruction was first used.
If that's the case, Go might just check AVX512F in sysctl, and if that is "1", Go should assume ZMM registers are supported and rely on CPUID for determining all the instruction extensions that may or may not be supported.
On the other hand, per "Intel 64 and IA-32 Architectures Software Developer's Manual Volume 2, 2.6.11.1 State Dependent #UD", EVEX-encoded 512-bit instructions require XCR0 Bit Vector [7:0] to be 111xx111b to not cause an invalid opcode exception.
randall77 commented on Dec 9, 2020
This sounds like an OSX bug. Saving/restoring ZMM registers after the first AVX512 instruction is indistinguishable from saving/restoring ZMM registers from process start. So they should say they do, even if they don't until the first instruction.
Or is there another reason they would lie to us? Helping out user-level thread schedulers, perhaps? Not sure.
martisch commented on Dec 9, 2020
Tested this on my AVX512 supporting MacBook Pro:

When no AVX512 instruction has been executed: bits 5, 6, 7 of eax from `xgetbv()` are false and bit 16 of `cpuid(7,0)` is true (AVX-512 Foundation).

When I add a `VPXORD Z1, Z1, Z1` before xgetbv: bits 5, 6, 7 of eax from `xgetbv()` are true and bit 16 of `cpuid(7,0)` is true (AVX-512 Foundation).

My hunch why it might be set up this way is that setting ZMM register support to false in XCR0 will trigger an exception by the CPU when AVX512 is used, which OS X can catch and then enable ZMM register state saving. So for programs that do not use AVX512, OS X can avoid the extra save and restore overhead for ZMM. Once AVX512 is used, the OS gets notified via the exception, enables ZMM support, changes XCR0 so that AVX512 doesn't trigger exceptions anymore, and resumes program execution, now with the additional overhead of saving and restoring the ZMM register state.
vsivsi commented on Dec 9, 2020
@martisch Yes! Everything you wrote above agrees exactly with both my testing and my understanding of the situation.
So I think there are two reasonable possible ways for golang to handle this:

1) Modify the `cpu` package with a special case for darwin to check for AVX512 support using a call to sysctl instead of trusting the XCR0 bits (as has already been discussed above).

OR

2) Have the runtime force promotion of the initial thread's AVX512 state via `thread_set_state()` (when on AVX512 hardware, which presumably darwin will handle correctly). Per the Darwin documentation:

So this would only need to be done for the original thread in the runtime process. All subsequent threads should inherit this state. Con: This would make thread preemption of non-AVX512 Go programs on darwin less efficient than it would otherwise be (on hardware with AVX512). Pro: It would presumably make Darwin's behavior consistent with all other platforms and require no changes to the `cpu` package. The only darwin "special case" would be promoting the initial process thread to force saving AVX512 state.

I'm actually pretty partial to option 2). Yes, it would make OS task switching of Go program threads less efficient on darwin than it would otherwise be when AVX512 is present and unused, but no less efficient than it already will be on any other OS platform in that case. Basically, Go would be "giving up" whatever performance advantage darwin provides with this rather ugly hack, in exchange for parity with every other OS and isolating the darwin-specific code to this single initial thread state setup.
===
As an aside, it also seems clear now that the bug described in #42649 appears to be caused by this sequence:

I mention this here because it's possible that potential solution 2) above could alleviate this, depending on what darwin does internally with that call to `set_thread_state()`.
rasky commented on Dec 10, 2020
It looks like Apple suggests using sysctl or the commpage to check for AVX512 capability rather than using XCR0. Spawning sysctl doesn't sound like something we want to do in the runtime, but reading it from the commpage seems like a no-brainer. Why can't we just do that?
https://github.com/apple/darwin-xnu/blob/a449c6a3b8014d9406c2ddbdc81795da24aa7443/osfmk/i386/cpu_capabilities.h#L77
https://github.com/apple/darwin-xnu/blob/a449c6a3b8014d9406c2ddbdc81795da24aa7443/osfmk/i386/cpu_capabilities.h#L178
It looks like there's also a libsystem wrapper to access the CPU capabilities in the commpage, which we can call if we don't want to hardcode the commpage layout, as that has bitten us once already:
https://github.com/apple/darwin-xnu/blob/0a798f6738bc1db01281fc08ae024145e84df927/libsyscall/wrappers/__get_cpu_capabilities.s#L35
martisch commented on Dec 10, 2020
x/sys/cpu already checks whether AVX512 instruction extensions are supported by using CPUID, and that works correctly even on darwin/amd64 on the machines I could test. AFAIK XCR0 does not indicate whether AVX512 is supported by the CPU, but rather whether XMM, YMM, and ZMM register save and restore is supported by the operating system. This can vary between operating systems on the same CPU.
https://github.com/apple/darwin-xnu/blob/a449c6a3b8014d9406c2ddbdc81795da24aa7443/osfmk/i386/cpu_capabilities.h
The header above does not seem to mention ZMM or any restore or save behavior for ZMM.
So the question still open for me is: what is the correct way on darwin/amd64 to check whether ZMM save and restore will be correctly supported? Intel manuals say it's XCR0, and that AVX512, even if supported by the CPU, will throw an exception if OSXSAVE is not enabled and ZMM save and restore is not activated.
An even less computationally involved way would be to not check ZMM register support on darwin/amd64 and assume that any OS X that can run on an AVX512-supporting CPU always supports ZMM registers (not sure if that is true).