We postpone emitting debug info until *after* we generate the function
so that the consumed stack space is known. The stack offsets
encoded within DWARF are with respect to the frame pointer `.fp`.
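A minimal sketch of that ordering, with hypothetical Func/Local types and a
plain slice standing in for the real backend state and DWARF writer (none of
these names are the actual ones): the body is lowered first so the frame is
final, and only then are the `.fp`-relative offsets written out.

    const std = @import("std");

    // Hypothetical, heavily simplified stand-ins for the real backend state.
    const Local = struct { name: []const u8, stack_slot: i32 };

    const Func = struct {
        stack_size: u32 = 0,
        locals: []const Local,
    };

    // The ordering described above: lower the body first so the frame layout
    // is final, then emit debug info with offsets taken relative to `.fp`.
    fn generate(func: *Func, dwarf_offsets: []i32) usize {
        genBody(func);
        return emitDbgInfo(func.*, dwarf_offsets);
    }

    fn genBody(func: *Func) void {
        // Instruction selection, spills, etc. would keep growing the frame here.
        func.stack_size = 32;
    }

    fn emitDbgInfo(func: Func, dwarf_offsets: []i32) usize {
        var i: usize = 0;
        while (i < func.locals.len) : (i += 1) {
            // Think DW_OP_breg on the frame-pointer register: a negative
            // offset from `.fp` into the now-finalized stack frame.
            dwarf_offsets[i] = -func.locals[i].stack_slot;
        }
        return func.locals.len;
    }

    test "debug info is emitted only after the body has been generated" {
        var func = Func{ .locals = &[_]Local{.{ .name = "x", .stack_slot = 8 }} };
        var offsets: [1]i32 = undefined;
        const n = generate(&func, &offsets);
        try std.testing.expectEqual(@as(usize, 1), n);
        try std.testing.expectEqual(@as(i32, -8), offsets[0]);
        try std.testing.expectEqual(@as(u32, 32), func.stack_size);
    }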
- The meaning of packed structs changed in Zig 0.10; adjust accordingly.
Use "extern struct" for the cases that directly map to C structs (see
the sketch after this list).
- Add new type info kinds, like enum64 and DeclTag
- Change the Type enum to use the canonical names from libbpf.
This is more predictable when comparing against external BPF
documentation than invented synonyms that would need to be guessed.
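A hedged illustration of both points, using hypothetical declarations rather
than the actual std ones: records that mirror C structs become extern struct
(since 0.10, packed struct means bit-packing into a backing integer), and the
kind names below follow libbpf's BTF_KIND_* spelling, including decl_tag and
enum64 (values as in <linux/btf.h>; the real enum may differ).

    const std = @import("std");

    // Hypothetical example of a C-mirroring record: since Zig 0.10 a
    // `packed struct` no longer follows C layout, so `extern struct`
    // is the right choice for anything that maps to a C struct.
    const HeaderLike = extern struct {
        magic: u16,
        version: u8,
        flags: u8,
        hdr_len: u32,
    };

    // A packed struct is still the right tool for actual bit fields:
    const IntEncoding = packed struct {
        signed: bool,
        char: bool,
        boolean: bool,
        _reserved: u5 = 0,
    };

    // Sketch of a kind enum spelled with libbpf's canonical names
    // (values as in <linux/btf.h>; the real std enum may differ).
    const Kind = enum(u5) {
        unknown = 0,
        int = 1,
        ptr = 2,
        array = 3,
        @"struct" = 4,
        @"union" = 5,
        @"enum" = 6,
        fwd = 7,
        typedef = 8,
        @"volatile" = 9,
        @"const" = 10,
        restrict = 11,
        func = 12,
        func_proto = 13,
        @"var" = 14,
        datasec = 15,
        float = 16,
        decl_tag = 17,
        type_tag = 18,
        enum64 = 19,
    };

    test "layout expectations" {
        // extern struct follows the C ABI layout rules.
        try std.testing.expectEqual(@as(usize, 8), @sizeOf(HeaderLike));
        // packed struct is exactly the bit width of its fields.
        try std.testing.expectEqual(@as(usize, 8), @bitSizeOf(IntEncoding));
        try std.testing.expect(Kind.enum64 != Kind.decl_tag);
    }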
This service stopped working two days ago for unknown reasons. Until it
is determined how to get it working again, or we switch to a different
CI provider for aarch64, this CI test coverage is disabled so that
we can continue to use the CI for other targets.
* std.crypto: make ghash faster, especially for small messages
Aggregated reduction requires 5 additional multiplications (to
precompute the powers of H) in order to save 2 multiplications
per batch (see the identity sketched after this list).
So, only use large batches when the savings actually outweigh that cost.
For the last blocks, reuse the precomputations in order to perform
a single reduction.
Also, even in .ReleaseSmall, allow 2-block aggregation.
The speedup is worth it, and the code size increase is reasonable.
And in .ReleaseFast, bump the upper batch size up to 16.
Also leverage comptime instead of duplicating code.
std/crypto/benchmark.zig on Apple M1:
Zig 0.10.0: 2769 MiB/s
Before: 6014 MiB/s
After: 7334 MiB/s
Also, normalize function names.
* Change clmul() to accept the half to be processed
This avoids a bunch of truncate() calls.
* Add more ghash tests to check all code paths
* crypto.core.aes: process 6 blocks in parallel instead of 8 on aarch64
At least on Apple Silicon, this is slightly faster than 8 blocks.
* AES: add parallel blocks for tigerlake, rocketlake, alderlake, zen3
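For reference, the aggregation used in the ghash item above is the standard
unrolling of the GHASH recurrence; a sketch of the identity, with Y the
running state, X_j the message blocks, and all arithmetic in GF(2^128):

    % Per-block GHASH step: Y_j = (Y_{j-1} \oplus X_j) \cdot H
    % Unrolling k steps yields the batched form:
    \[
      Y_{i+k} \;=\; (Y_i \oplus X_{i+1})\,H^{k}
        \;\oplus\; X_{i+2}\,H^{k-1}
        \;\oplus\; \cdots
        \;\oplus\; X_{i+k}\,H .
    \]

With H^2..H^k precomputed, the k carry-less products in a batch are
independent, so their unreduced results can be XOR-accumulated and reduced
once per batch; the same precomputed powers let the final partial batch
finish with a single reduction as well.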
Salsa: make the number of rounds configurable instead of hard-coding it to 20.
- This is consistent with the ChaCha implementation
- NaCl and libsodium, which this API is designed to interoperate with,
also support 8- and 12-round variants. The 12-round variant, in
particular, provides the same security level as the 20-round variant,
but is obviously faster.
- scrypt currently uses its own non-optimized version of Salsa, just
because it uses 8 rounds instead of 20. This will help remove that
code duplication.
No behavior or public API changes. Salsa20 and XSalsa20 still
represent the 20-round variant.
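A structural sketch of that change, with hypothetical names and the actual
permutation body elided: the only point shown is that a comptime rounds
parameter lets the 8-, 12- and 20-round variants share one implementation
while the public 20-round names keep their meaning.

    const std = @import("std");

    // Hypothetical generic core: `rounds` becomes a comptime parameter
    // instead of a hard-coded 20. The real permutation is elided here.
    fn SalsaCore(comptime rounds: comptime_int) type {
        comptime std.debug.assert(rounds % 2 == 0);
        return struct {
            pub fn permute(state: *[16]u32) void {
                var i: usize = 0;
                while (i < rounds) : (i += 2) {
                    // one column round followed by one row round would go
                    // here; this stand-in just counts double-rounds so the
                    // test below can show the parameter taking effect.
                    state[0] +%= 1;
                }
            }
        };
    }

    // Public names keep their existing meaning (the 20-round variant) ...
    pub const Salsa20Core = SalsaCore(20);
    // ... while scrypt can reuse the same optimized code with 8 rounds,
    // and a 12-round variant stays possible for NaCl/libsodium interop.
    pub const Salsa8Core = SalsaCore(8);

    test "round count is a comptime parameter" {
        var st = [_]u32{0} ** 16;
        Salsa20Core.permute(&st);
        try std.testing.expectEqual(@as(u32, 10), st[0]); // 10 double rounds
        Salsa8Core.permute(&st);
        try std.testing.expectEqual(@as(u32, 14), st[0]); // plus 4 more
    }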
PR #13101 recently renamed the "i386" architecture to "x86", and it
seems the specific CPU model got swept up in that. "x86" is an umbrella
term that describes a family of CPUs, and "i386" is the oldest
supported model under that umbrella.
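A small, hedged illustration of the distinction; it assumes the
std.Target.x86.cpu.i386 model declaration, which is exactly what this
change preserves:

    const std = @import("std");

    // ".x86" is the architecture (the umbrella term); "i386" stays the
    // name of the oldest supported CPU model under it. Assumes the
    // std.Target.x86.cpu.i386 declaration exists, as this change intends.
    test "architecture family vs specific CPU model" {
        const arch = std.Target.Cpu.Arch.x86;
        const model = &std.Target.x86.cpu.i386;
        try std.testing.expectEqualStrings("x86", @tagName(arch));
        try std.testing.expectEqualStrings("i386", model.name);
    }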