The previous implementation didn't check whether there were pending signals
after returning from the futex wait. While this is fine for the broadcast
case, it can result in multiple wakeups when only one thread is signaled.
This implementation checks that a signal is actually pending before
returning from wait.
It is similar to the original implementation, but without the initial
signal check: here we first wait on the futex and then check for a
pending signal.
Fixes #12877
The current implementation (before this fix) observes the number of waiters
when a broadcast occurs and then performs that number of wakeups.
If multiple threads are waiting for a wakeup, and each immediately goes back
into wait when the wakeup is not meant for it (as described in the issue),
the same thread can receive multiple wakeups while others get none.
That is not consistent with documented behavior for condition variable
broadcast: `Unblocks all threads currently blocked in a call to wait()
or timedWait() with a given Mutex.`.
This fix ensures that a thread waiting on the futex is woken up on futex wake.
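A minimal sketch of the fixed wait loop (the state layout is hypothetical; `epoch` and `signals` are illustrative names, not the actual std.Thread.Condition fields):

```zig
const std = @import("std");
const Atomic = std.atomic.Atomic;
const Futex = std.Thread.Futex;

fn wait(epoch: *Atomic(u32), signals: *Atomic(u32)) void {
    while (true) {
        const expect = epoch.load(.Acquire);
        // Block until a signal()/broadcast() bumps the epoch.
        Futex.wait(epoch, expect);
        // Only return if a signal is still pending; otherwise the wakeup was
        // spurious (or consumed by another thread), so go back to sleep.
        var pending = signals.load(.Acquire);
        while (pending > 0) {
            pending = signals.tryCompareAndSwap(
                pending,
                pending - 1,
                .AcqRel,
                .Acquire,
            ) orelse return;
        }
    }
}
```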
* std.crypto.onetimeauth.ghash: faster GHASH on modern CPUs
Carryless multiplication used to be slow on older Intel CPUs, which
justified the use of Karatsuba multiplication.
This is no longer the case: using 4 multiplications to multiply
two 128-bit numbers is now faster than 3 multiplications plus
shifts and additions.
This is also true on aarch64.
Keep using Karatsuba only when targeting x86 (granted, this is a bit
of a brutal shortcut; we should really list all the CPU models that
had a slow clmul instruction).
Also remove the useless agg_2 threshold and restore the ability to
precompute only H and H^2 in ReleaseSmall.
Finally, avoid using u256. Using 128-bit registers is actually faster.
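For reference, a portable sketch of the two strategies (clmul64 is a software stand-in for the hardware carryless multiply, PCLMULQDQ on x86 or PMULL on aarch64; this is not the actual ghash.zig code):

```zig
const Wide = struct { lo: u128, hi: u128 };

// Software stand-in for a 64x64 -> 128 carryless multiplication.
fn clmul64(a: u64, b: u64) u128 {
    var product: u128 = 0;
    var i: u7 = 0;
    while (i < 64) : (i += 1) {
        if ((b >> @intCast(u6, i)) & 1 != 0) product ^= @as(u128, a) << i;
    }
    return product;
}

// Schoolbook: 4 carryless multiplications.
fn clmul128Schoolbook(a: u128, b: u128) Wide {
    const a0 = @truncate(u64, a);
    const a1 = @truncate(u64, a >> 64);
    const b0 = @truncate(u64, b);
    const b1 = @truncate(u64, b >> 64);
    const lo = clmul64(a0, b0);
    const hi = clmul64(a1, b1);
    const mid = clmul64(a0, b1) ^ clmul64(a1, b0); // two cross products
    return .{ .lo = lo ^ (mid << 64), .hi = hi ^ (mid >> 64) };
}

// Karatsuba: 3 carryless multiplications, plus extra XORs and shifts to
// rebuild the middle term - only a win where clmul itself is slow.
fn clmul128Karatsuba(a: u128, b: u128) Wide {
    const a0 = @truncate(u64, a);
    const a1 = @truncate(u64, a >> 64);
    const b0 = @truncate(u64, b);
    const b1 = @truncate(u64, b >> 64);
    const lo = clmul64(a0, b0);
    const hi = clmul64(a1, b1);
    const mid = clmul64(a0 ^ a1, b0 ^ b1) ^ lo ^ hi; // one cross product
    return .{ .lo = lo ^ (mid << 64), .hi = hi ^ (mid >> 64) };
}
```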
* Use a switch, add some comments
perf_event_attr.type needs to accept a runtime-defined value to enable
dynamic PMUs, such as kprobe and uprobe. This value can exceed the
predefined values in the Linux headers.
Reference: the perf_event_open(2) man page.
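A sketch of the idea (names and descriptions are illustrative, not the real std.os.linux definitions): model the type as a non-exhaustive enum so the predefined values keep names while runtime PMU ids still fit:

```zig
pub const PerfType = enum(u32) {
    hardware = 0,
    software = 1,
    tracepoint = 2,
    hw_cache = 3,
    raw = 4,
    breakpoint = 5,
    _, // dynamic PMU ids (kprobe, uprobe, ...) discovered at runtime

    pub fn describe(self: PerfType) []const u8 {
        return switch (self) {
            .hardware => "generalized hardware event",
            .software => "software event provided by the kernel",
            .tracepoint => "ftrace tracepoint",
            .hw_cache => "hardware cache event",
            .raw => "raw, PMU-specific event",
            .breakpoint => "hardware breakpoint",
            // Values above the predefined ones come from
            // /sys/bus/event_source/devices/<pmu>/type at runtime.
            _ => "dynamic PMU",
        };
    }
};
```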
In the 'remaining blocks' step, the length of the processed message can
range from 1 to 79 bytes.
The value of n - 1 then ranges from 0 to 3.
So, st.hx[i] must be initialized at least from st.hx[0] to st.hx[3].
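A tiny comptime check capturing that bound (illustrative only; the constant names are made up):

```zig
const std = @import("std");

// A leftover of 1..79 bytes contains n = leftover / 16 full blocks, i.e.
// 0..4, so indexing the precomputed key powers with n - 1 reaches
// st.hx[0]..st.hx[3].
comptime {
    const max_leftover = 79; // longest 'remaining blocks' input, in bytes
    const block_length = 16; // GHASH block size, in bytes
    std.debug.assert(max_leftover / block_length - 1 == 3);
}
```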
* Export invalidFmtErr
To allow consistent use of the "invalid format string" compile error
for badly formatted format strings.
See https://github.com/ziglang/zig/pull/13489#issuecomment-1311759340.
* Replace format compile errors with invalidFmtErr
- Provides more consistent compile errors.
- Gives the user info about the type of the badly formatted value.
* Rename invalidFmtErr as invalidFmtError
For consistency. Zig seems to use “Error” more often than “Err”.
* std: add invalid format string checks to remaining custom formatters
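The resulting pattern in a custom formatter looks roughly like this (the Point type is a made-up example; the callback signature is the standard std.fmt one):

```zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,

    pub fn format(
        self: Point,
        comptime fmt: []const u8,
        options: std.fmt.FormatOptions,
        writer: anytype,
    ) !void {
        _ = options;
        // Reject any non-empty format specifier with the shared,
        // consistent "invalid format string" compile error.
        if (fmt.len != 0) std.fmt.invalidFmtError(fmt, self);
        try writer.print("({d}, {d})", .{ self.x, self.y });
    }
};
```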
* pass reference-trace to comp when building build file; fix checkobjectstep
These constants were read as a block count in initForBlockCount(),
but at the same time as a byte size in update().
The unit could be blocks or bytes, but the same one should be used
everywhere.
So, use blocks as intended.
Fixes #13506
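A usage sketch of the contract after the fix (assuming the std.crypto.onetimeauth.Ghash API):

```zig
const std = @import("std");
const Ghash = std.crypto.onetimeauth.Ghash;

fn mac(key: *const [Ghash.key_length]u8, msg: []const u8) [Ghash.mac_length]u8 {
    // Round the byte length up to whole 16-byte blocks: the argument is a
    // block count, never a byte size.
    const block_count = (msg.len + Ghash.block_length - 1) / Ghash.block_length;
    var st = Ghash.initForBlockCount(key, block_count);
    st.update(msg);
    var out: [Ghash.mac_length]u8 = undefined;
    st.final(&out);
    return out;
}
```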
- The meaning of packed structs changed in Zig 0.10; adjust accordingly.
Use "extern struct" for the cases that directly map to C structs.
- Add new type info kinds, like enum64 and DeclTag.
- Change the Type enum to use the canonical names from libbpf.
This is more predictable when comparing with external BPF
documentation (than invented synonyms that need to be guessed).
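For the first point, the shape of the change looks like this (BtfHeader is an illustrative stand-in, not the actual std definition; the field layout follows struct btf_header from the kernel):

```zig
// An extern struct matches the C struct's memory layout field for field;
// after Zig 0.10, packed struct instead means bit-level packing.
pub const BtfHeader = extern struct {
    magic: u16,
    version: u8,
    flags: u8,
    hdr_len: u32,
    // All offsets below are relative to the end of this header.
    type_off: u32,
    type_len: u32,
    str_off: u32,
    str_len: u32,
};
```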
* std.crypto: make ghash faster, esp. for small messages
Aggregated reduction requires 5 additional multiplications (to
precompute the powers of H) in order to save 2 multiplications
per batch.
So, only use large batches when it is actually worthwhile to do so.
For the last blocks, reuse the precomputations in order to perform
a single reduction.
Also, even in .ReleaseSmall, allow 2-block aggregation: the speedup is
worth it, and the code size increase is reasonable.
And in .ReleaseFast, bump the upper batch size to 16.
Along the way, leverage comptime instead of duplicating code.
std/crypto/benchmark.zig on Apple M1:
Zig 0.10.0: 2769 MiB/s
Before: 6014 MiB/s
After: 7334 MiB/s
Also normalize function names.
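A schematic of one aggregated batch (Field.mulWide and Field.reduce are assumed stand-ins for the real field arithmetic, not the actual ghash.zig helpers):

```zig
// Horner's rule regrouped so the expensive modular reduction runs once per
// 4-block batch instead of once per block:
//   acc' = reduce((acc ^ m0)*H^4 ^ m1*H^3 ^ m2*H^2 ^ m3*H)
fn absorbBatch4(comptime Field: type, acc: u128, m: [4]u128, h_pow: [4]u128) u128 {
    var sum = Field.mulWide(acc ^ m[0], h_pow[3]); // * H^4
    const t1 = Field.mulWide(m[1], h_pow[2]); // * H^3
    const t2 = Field.mulWide(m[2], h_pow[1]); // * H^2
    const t3 = Field.mulWide(m[3], h_pow[0]); // * H
    // XOR the unreduced 256-bit products together, then reduce once.
    sum = .{ sum[0] ^ t1[0] ^ t2[0] ^ t3[0], sum[1] ^ t1[1] ^ t2[1] ^ t3[1] };
    return Field.reduce(sum);
}
```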
* Change clmul() to accept the half to be processed
This avoids a bunch of truncate() calls.
* Add more ghash tests to check all code paths
* crypto.core.aes: process 6 blocks in parallel instead of 8 on aarch64
At least on Apple Silicon, this is slightly faster than 8 blocks.
* AES: add parallel blocks for tigerlake, rocketlake, alderlake, zen3
...instead of hard-coding it to 20.
- This is consistent with the ChaCha implementation
- NaCl and libsodium, which this API is designed to interoperate with,
also support 8- and 12-round variants. The 12-round variant, in
particular, provides the same security level as the 20-round variant,
but is obviously faster.
- scrypt currently uses its own non-optimized version of Salsa, just
because it uses 8 rounds instead of 20. This will help remove code
duplication.
No behavior or public API changes. Salsa20 and XSalsa20 still
represent the 20-round variant.
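The shape of the parameterization (illustrative; not the exact std.crypto declarations):

```zig
const std = @import("std");

fn SalsaImpl(comptime rounds: comptime_int) type {
    comptime std.debug.assert(rounds % 2 == 0); // whole double-rounds only
    return struct {
        fn permute(state: *[16]u32) void {
            var i: usize = 0;
            while (i < rounds) : (i += 2) {
                doubleRound(state); // one column round + one row round
            }
        }
        fn doubleRound(state: *[16]u32) void {
            _ = state; // quarter-round mixing elided in this sketch
        }
    };
}

pub const Salsa20 = SalsaImpl(20); // public behavior unchanged
const Salsa8 = SalsaImpl(8); // the variant scrypt needs
```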
PR #13101 recently renamed the "i386" architecture to "x86", and it
seems the specific CPU model got swept up in that. "x86" is an umbrella
term that describes a family of CPUs, and "i386" is the oldest
supported model under that umbrella.
Rewrite GHASH to use 128-bit multiplication over non-reversed
integers, and up to 8-block aggregated reduction.
lib/std/crypto/benchmark.zig results:
Xeon E5:
Before: 1604 MiB/s
After: 4005 MiB/s
Apple M1:
Before: 2769 MiB/s
After: 6014 MiB/s
This also makes AES-GCM faster by the way.