This is fairly straightforward; the actual compiler changes are limited
to the CLI, since `Compilation` already supports this combination.
A new `std.Build` API is introduced to represent this. By passing the
`emit_object` option to `std.Build.addTest`, you get a `Step.Compile`
which emits an object file; you can then use that as you would any
other object, such as installing it for external use or linking it
into another step.
A standalone test is added to cover the build system API. It builds a
test into an object, and links it into a final executable, which it then
runs.
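A minimal sketch of what a consuming `build.zig` might look like under this change, assuming the module-based `std.Build` API; the file paths, step names, and the exact spelling of the `emit_object` option are illustrative, not the final API surface:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const target = b.standardTargetOptions(.{});
    const optimize = b.standardOptimizeOption(.{});

    // Compile the tests into an object file instead of a test executable.
    // `emit_object` is the new option described above; everything else is
    // the usual addTest configuration.
    const test_obj = b.addTest(.{
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/tests.zig"),
            .target = target,
            .optimize = optimize,
        }),
        .emit_object = true, // assumed spelling of the new option
    });

    // Link the test object into an ordinary executable that supplies its
    // own entry point / test runner, then run that executable.
    const exe = b.addExecutable(.{
        .name = "test-harness",
        .root_module = b.createModule(.{
            .root_source_file = b.path("src/runner.zig"),
            .target = target,
            .optimize = optimize,
        }),
    });
    exe.addObject(test_obj);

    const run = b.addRunArtifact(exe);
    b.step("test", "Link the test object into the harness and run it").dependOn(&run.step);
}
```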
When a test is built this way, the build system does not know it is
running a `zig test`, so the build runner and test runner do not
communicate over stdio. That's okay, because the real-world use cases
for this feature don't want that communication anyway!
Resolves: #23374
std.crypto: add constant-time codecs
Add constant-time hex/base64 codecs designed to process cryptographic
secrets, adapted from libsodium's implementations.
Introduce a `crypto.codecs` namespace for crypto-related encoders and
decoders. Move ASN.1 codecs to this namespace.
This will also naturally accommodate the proposed PEM codecs.
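As a flavor of the technique (not the actual `crypto.codecs` API), here is a minimal sketch of the branchless nibble-to-hex mapping in the style of libsodium's `sodium_bin2hex`: a wrapping subtraction replaces the `< 10` comparison with a mask, so no secret-dependent branch or table lookup is performed. The function and namespace names below are illustrative.

```zig
const std = @import("std");

/// Branchless nibble -> lowercase hex digit. For n < 10 the wrapping
/// subtraction makes the shifted-and-masked term 217, so 87 + n + 217
/// truncates to '0' + n; for n >= 10 that term is 0 and 87 + n is
/// 'a' + (n - 10).
fn hexDigit(byte: u8) u8 {
    const n: u32 = byte & 0xf; // only the low nibble is encoded
    return @as(u8, @truncate(87 + n + (((n -% 10) >> 8) & ~@as(u32, 38))));
}

/// Illustrative constant-time encoder; the real encoder/decoder pair
/// lives in the new `crypto.codecs` namespace and also covers base64.
fn bytesToHex(out: []u8, bytes: []const u8) void {
    std.debug.assert(out.len == bytes.len * 2);
    for (bytes, 0..) |b, i| {
        out[i * 2 + 0] = hexDigit(b >> 4);
        out[i * 2 + 1] = hexDigit(b);
    }
}

test "branchless hex sketch" {
    var buf: [8]u8 = undefined;
    bytesToHex(&buf, &.{ 0xde, 0xad, 0xbe, 0xef });
    try std.testing.expectEqualStrings("deadbeef", &buf);
}
```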
The code was using u32 and usize interchangeably, which doesn't work on
64-bit systems. This:
`pub const sigset_t = [1024 / 32]u32;`
is not consistent with this:
`const shift = @as(u5, @intCast(s & (usize_bits - 1)));`
However, normal signal numbers are less than 31, so the bad math rarely matters in practice. Also, despite room for 1024 signals in the set, only setting signals between 1 and NSIG (usually 65, sometimes 128) is defined. The existing tests only exercised signal numbers within the first 31 bits, so they never tripped over this.
The C library `sigaddset` returns `EINVAL` when given an out-of-bounds signal number; I made the Zig code silently ignore out-of-bounds signal numbers instead.
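A minimal sketch of the corrected bit math, using the layout shown above and illustrative names rather than the exact `std.os.linux` declarations:

```zig
// Assumed constants for illustration; NSIG is 65 on most Linux targets,
// 128 on some others, as discussed above.
const NSIG = 65;
pub const sigset_t = [1024 / 32]u32;

pub fn sigaddset(set: *sigset_t, sig: u32) void {
    // Signals are numbered from 1. Out-of-range signal numbers are
    // silently ignored rather than reported, as described above.
    if (sig < 1 or sig > NSIG) return;
    const bit = sig - 1;
    // Word index and shift amount are both derived from the u32 word
    // size, so the array element type and the shift can no longer
    // disagree the way the u32/usize mix did.
    const shift: u5 = @intCast(bit % 32);
    set[bit / 32] |= @as(u32, 1) << shift;
}
```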
Moved all the `sigset`-related declarations next to each other in the source, too.
The `filled_sigset` seems non-standard to me. I think it is meant to be used like `empty_sigset`, but it only contains 31 set signals, which seems wrong (it should be 64 or 128, aka `NSIG`). It's also unused. The oddly named but similar `all_mask` is used (by posix.zig) but sets all 1024 bits (which I understood to be undefined behavior, but seems to work just fine). For comparison, the musl `sigfillset` fills in 65 or 128 bits.
Linux kernel syscalls expect to be given the number of bits of sigset that
they're built for, not the full 1024-bit sigsets that glibc supports.
I audited the other syscalls in here that use `sigset_t` and they're all
using `NSIG / 8`.
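For illustration, this is roughly the shape of such a wrapper; treat the parameter names and exact declarations as a sketch built on `std.os.linux`, not the final code:

```zig
const linux = @import("std").os.linux;

// Sketch only: the kernel compares the sigsetsize argument against its
// own sigset size and rejects anything else with EINVAL, so the
// glibc-sized 1024 / 8 must not be passed here.
pub fn sigprocmask(flags: u32, set: ?*const linux.sigset_t, oldset: ?*linux.sigset_t) usize {
    return linux.syscall4(
        .rt_sigprocmask,
        flags,
        @intFromPtr(set),
        @intFromPtr(oldset),
        linux.NSIG / 8, // kernel-sized sigset, not the full glibc-sized one
    );
}
```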
Fixes #12715