The signature is documented as:
int link(const char *, const char *);
(see https://man7.org/linux/man-pages/man2/link.2.html or https://man.netbsd.org/link.2)
And it's not some Linux extension: the [syscall
implementation](21b136cc63/fs/namei.c (L4794-L4797))
only expects two arguments, too.
It probably *should* have a flags parameter, but it's too late now.
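For reference, the documented two-argument prototype expressed as a Zig extern declaration might look like this (a sketch only, not the actual declaration in the standard library):

```zig
// Sketch of the documented two-argument prototype as a Zig extern declaration;
// the real std declaration may use different pointer types or live elsewhere.
pub extern "c" fn link(oldpath: [*:0]const u8, newpath: [*:0]const u8) c_int;
```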
I am a bit surprised that linking glibc or musl against code that invokes
`link` with three parameters doesn't fail (at least, I couldn't get any
local test cases to trigger a compile or link error).
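One plausible explanation (my reading, not something I've verified in the linkers involved): the `link` symbol in glibc/musl carries no information about parameter count or types, so a mismatched extern declaration still resolves, and the two-argument definition in libc simply never reads the extra argument. A hypothetical mismatched declaration, for illustration:

```zig
// Hypothetical mismatched declaration: the symbol name is still just "link",
// so the linker resolves it against libc's two-argument definition anyway.
extern "c" fn link(oldpath: [*:0]const u8, newpath: [*:0]const u8, flags: c_uint) c_int;

pub fn main() void {
    // The extra argument is passed by the caller but never read by libc's link().
    _ = link("old", "new", 0);
}
```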
The test case in std/posix/test.zig is currently disabled, but if I
manually enable it, it works with this change.
Because `std.crypto.ecdsa.KeyPair.create` takes an optional seed, the fallback that generates one (`std.crypto.random.bytes`) is always compiled in, so cross-compiling to environments without a standard random source (e.g. wasm) fails to compile even when the caller generates and supplies a seed.
This commit changes the API of the problematic function and moves the random seed generation to a new utility function.
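A minimal sketch of the general shape of such a change (illustrative names and sizes, not the actual `std.crypto.ecdsa` API): with an optional seed, the random-source call is always reachable and must always compile; with a required seed plus a separate helper, only callers that ask for a random seed pull in `std.crypto.random`.

```zig
const std = @import("std");

// Before (sketch): the optional seed forces the std.crypto.random.bytes path
// to be analyzed even when every caller supplies a seed, which breaks targets
// without an entropy source.
fn createWithOptionalSeed(seed: ?[32]u8) [32]u8 {
    return seed orelse blk: {
        var buf: [32]u8 = undefined;
        std.crypto.random.bytes(&buf); // unavailable on e.g. freestanding wasm
        break :blk buf;
    };
}

// After (sketch): seed generation lives in its own helper, so the random
// source is only referenced by callers that opt in.
fn randomSeed() [32]u8 {
    var buf: [32]u8 = undefined;
    std.crypto.random.bytes(&buf);
    return buf;
}

fn createWithSeed(seed: [32]u8) [32]u8 {
    return seed;
}

test "deterministic creation from an explicit seed" {
    const out = createWithSeed([_]u8{7} ** 32);
    try std.testing.expectEqual(@as(u8, 7), out[0]);
}
```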
During the LLVM 18 upgrade, two changes were made that changed `@alignOf(u64)` to 4 for the x86-windows target:
- `Type.maxIntAlignment` was made to return 16 for x86 (200e06b). Before that commit, `maxIntAlignment` was 8 for windows/uefi and 4 for everything else.
- `Type.intAbiAlignment` was made to return 4 for bit widths 33...64 (7e1cba7 + e89d6fc). Before those commits, `intAbiAlignment` would return 8, since the `maxIntAlignment` for x86-windows was 8 (and for other targets, the `maxIntAlignment` of 4 would clamp the `intAbiAlignment` to 4).
`src/codegen/llvm.zig` has its own alignment calculations that no longer match the values returned from the `Type` functions. For the x86-windows target, this loop:
ddcb7b1c11/src/codegen/llvm.zig (L558-L567)
will set `abi` and `pref` to 64 when `size` is 64 (meaning an alignment of 8 bytes), which doesn't match the `Type` alignment of 4.
This commit makes `Type.intAbiAlignment` match the alignment calculated in `codegen/llvm.zig`.
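As a sanity check, the expectation described above could be expressed as a hypothetical test; the value of 8 comes from the `codegen/llvm.zig` loop discussed above, not from any other source:

```zig
const std = @import("std");
const builtin = @import("builtin");

test "u64 ABI alignment matches the LLVM layout on x86-windows" {
    if (builtin.cpu.arch == .x86 and builtin.os.tag == .windows) {
        // codegen/llvm.zig picks an 8-byte alignment for 64-bit integers here,
        // and Type.intAbiAlignment now agrees with it.
        try std.testing.expectEqual(@as(usize, 8), @alignOf(u64));
    }
}
```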
Fixes #20047
Fixes #20466
Fixes #20469
This fix bypasses the slice bounds, reading garbage data for up to the
last 7 bits (which are technically supposed to be ignored). That is going
to need to be fixed; let's fix it along with switching from byte elems
to usize elems.
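A hedged sketch of what that follow-up might look like once the elements are `usize` words (names illustrative, not the actual code): mask the final partial word so the bits past the logical length are ignored rather than read from beyond the slice.

```zig
// Returns the last word with any bits past `bit_len` cleared, assuming
// `bit_len` counts the valid bits across the whole word array.
fn maskedLastWord(last_word: usize, bit_len: usize) usize {
    const rem = bit_len % @bitSizeOf(usize);
    if (rem == 0) return last_word; // the last word is fully populated
    const mask = (@as(usize, 1) << @intCast(rem)) - 1;
    return last_word & mask;
}
```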