I would like a chance to review this before it lands, please. Feel free
to submit the work again without changes and I will make review
comments.
In the meantime, these reverts avoid intermittent CI failures and keep
bad patterns out of the standard library, where other users might copy
them.
Revert "std.crypto: improve KT documentation, use key_length for B3 key length (#25807)"
This reverts commit 4b593a6c24797484e68a668818736b0f6a8d81a2.
Revert "crypto - threaded K12: separate context computation from thread spawning (#25793)"
This reverts commit ee4df4ad3edad160fb737a1935cd86bc2f9cfbbe.
Revert "crypto.kt128: when using incremental hashing, use SIMD when possible (#25783)"
This reverts commit bf9082518c32ce7d53d011777bf8d8056472cbf9.
Revert "Add std.crypto.hash.sha3.{KT128,KT256} - RFC 9861. (#25593)"
This reverts commit 95c76b1b4aa7302966281c6b9b7f6cadea3cf7a6.
It was not obvious that the KT128/KT256 customization string can be
used to set a key, or what it was designed for at all.
Also, properly use key_length rather than digest_length for the BLAKE3
key length (no practical change, since both are 32, but the mismatch
was confusing).
Along the way, remove the unneeded simd_degree copies; that value
doesn't need to be in the public interface.
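For reference, a minimal sketch of the keyed-hash call with the key buffer sized by `key_length` (the options field name is assumed from recent std and may differ by version):

```zig
const std = @import("std");
const Blake3 = std.crypto.hash.Blake3;

test "keyed BLAKE3 sizes its key buffer with key_length" {
    // Both constants are 32, but the key buffer should be declared in
    // terms of key_length so the intent stays clear.
    const key = [_]u8{0x42} ** Blake3.key_length;
    var out: [Blake3.digest_length]u8 = undefined;
    Blake3.hash("message", &out, .{ .key = key });
}
```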
This is a rewrite of the BLAKE3 implementation, with vectorization.
On Apple Silicon, the new implementation is about twice as fast as the previous one.
With AVX2, it is more than 4 times faster.
With AVX512, it is more than 7.5x faster than the previous implementation (from 678 MB/s to 5086 MB/s).
Use inline to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
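A minimal sketch of the pattern, not the actual std.mem code: a single inline function takes an endian parameter, and a comptime-known argument lets the branch fold away at the call site (enum field casing follows recent Zig):

```zig
const std = @import("std");
const native_endian = @import("builtin").cpu.arch.endian();

// One inline reader instead of separate little-endian/big-endian variants.
inline fn readU32(bytes: *const [4]u8, endian: std.builtin.Endian) u32 {
    const x: u32 = @bitCast(bytes.*);
    return if (endian == native_endian) x else @byteSwap(x);
}

test "comptime-known endianness folds away" {
    const bytes = [_]u8{ 0x01, 0x02, 0x03, 0x04 };
    try std.testing.expectEqual(@as(u32, 0x04030201), readU32(&bytes, .little));
    try std.testing.expectEqual(@as(u32, 0x01020304), readU32(&bytes, .big));
}
```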
This reverts commit 0c99ba1eab63865592bb084feb271cd4e4b0357e, reversing
changes made to 5f92b070bf284f1493b1b5d433dd3adde2f46727.
This caused a CI failure when it landed in the master branch, due to a
128-bit `@byteSwap` in std.mem.
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
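A rough sketch of the kind of rewrite involved; the pre-migration spellings in the comments are from memory and shown only for illustration:

```zig
// before: const b = @truncate(u8, word);
// after:  the result type is inferred from the destination.
fn lowByte(word: u32) u8 {
    return @truncate(word);
}

// @truncate also works element-wise on vectors; zig fmt's automatic fixup
// did not handle this case correctly, hence the manual fixes noted above.
fn lowBytes(words: @Vector(4, u32)) @Vector(4, u8) {
    return @truncate(words);
}

// @alignCast and @addrSpaceCast similarly dropped their explicit first
// argument and could not be rewritten mechanically.
```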
We already have a LICENSE file that covers the Zig Standard Library. We
no longer need to remind everyone that the license is MIT in every single
file.
Previously this was introduced to clarify the situation for a fork of
Zig that made Zig's LICENSE file harder to find, and replaced it with
their own license that required annual payments to their company.
However that fork now appears to be dead. So there is no need to
reinforce the copyright notice in every single file.
Gives a ~40% speedup on x86_64.
However, the generic code remains faster on aarch64.
This is still processing only one block at a time for now.
I'm pretty confident that processing more blocks per round
will eventually give a substantial performance improvement on
all platforms with vector units.
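The primitive isn't named here, so as a purely illustrative sketch (not the std implementation), this is what a single-block, ARX-style column step looks like once the 4x4 state is kept in vector registers:

```zig
const std = @import("std");
const math = std.math;

const Row = @Vector(4, u32);

// Each line is one vector add, xor or rotate over a whole row of the state,
// but this still processes a single block at a time.
fn columnRound(a: *Row, b: *Row, c: *Row, d: *Row) void {
    a.* +%= b.*;
    d.* = math.rotl(Row, d.* ^ a.*, 16);
    c.* +%= d.*;
    b.* = math.rotl(Row, b.* ^ c.*, 12);
    a.* +%= b.*;
    d.* = math.rotl(Row, d.* ^ a.*, 8);
    c.* +%= d.*;
    b.* = math.rotl(Row, b.* ^ c.*, 7);
}

test "column round mixes every lane" {
    var a: Row = @splat(1);
    var b: Row = @splat(2);
    var c: Row = @splat(3);
    var d: Row = @splat(4);
    columnRound(&a, &b, &c, &d);
    try std.testing.expect(a[0] != 1);
}
```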
- Use `PascalCase` for all types. So, AES256GCM is now Aes256Gcm.
- Consistently use `_length` instead of mixing `_size` and `_length` for the
constants we expose
- Use `minimum_key_length` when it represents an actual minimum length.
Otherwise, use `key_length`.
- Require output buffers (for ciphertexts, MACs, hashes) to be exactly the
right size, rather than at least that size in some functions and the exact
size in others.
- Use a `_bits` suffix instead of `_length` when a size is represented as a
number of bits to avoid confusion.
- Functions returning a constant-sized slice are now defined as a slice instead
of a pointer + a runtime assertion. This is the case for most hash functions.
- Use `camelCase` for all functions instead of `snake_case`.
No functional changes, but these are breaking API changes.
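A sketch of what the renamed API looks like at a call site; the exact namespace path and the pre-rename spellings are shown for illustration and vary across Zig versions:

```zig
const std = @import("std");
const Aes256Gcm = std.crypto.aead.aes_gcm.Aes256Gcm;

// before (roughly): AES256GCM with *_size constants and snake_case functions
// after:            Aes256Gcm with *_length constants and camelCase functions
test "exact-size buffers and _length constants" {
    const key = [_]u8{0x01} ** Aes256Gcm.key_length;
    const nonce = [_]u8{0x02} ** Aes256Gcm.nonce_length;
    const msg = "exact-size output buffers are required";

    var ciphertext: [msg.len]u8 = undefined;
    var tag: [Aes256Gcm.tag_length]u8 = undefined;
    Aes256Gcm.encrypt(&ciphertext, &tag, msg, "", nonce, key);
}
```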
- This avoids having multiple `init()` functions for every combination
of optional parameters
- The API is consistent across all hash functions
- New options can be added later without breaking existing applications.
For example, this is going to come in handy if we implement parallelization
for BLAKE2 and BLAKE3.
- We don't have a mix of snake_case and camelCase functions any more, at
least in the public crypto API
Support for the BLAKE2 salt and personalization (more commonly called
context) parameters has been implemented along the way to illustrate this.
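A minimal sketch of the options-struct pattern using BLAKE2b, assuming field names like `key` and `salt` for the optional parameters (the exact fields may differ slightly across versions):

```zig
const std = @import("std");
const Blake2b256 = std.crypto.hash.blake2.Blake2b256;

test "a single init() taking an options struct" {
    var out: [Blake2b256.digest_length]u8 = undefined;

    // No optional parameters: pass an empty literal.
    var plain = Blake2b256.init(.{});
    plain.update("some data");
    plain.final(&out);

    // Optional parameters are named fields, so new ones can be added later
    // without breaking existing callers.
    var keyed = Blake2b256.init(.{ .key = "secret key", .salt = [_]u8{0x55} ** 16 });
    keyed.update("some data");
    keyed.final(&out);
}
```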
Justification:
- reset() is unnecessary; states that have to be reused can be copied
- reset() is error-prone. Copying a previous state prevents forgetting
struct members.
- reset() forces implementations to store sensitive data (key, initial state)
in memory even when it is not needed.
- reset() is confusing as it has a different meaning elsewhere in Zig.
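A minimal sketch of the pattern that replaces reset(): absorb the shared prefix once, then copy the state by value for every message that reuses it.

```zig
const std = @import("std");
const Sha256 = std.crypto.hash.sha2.Sha256;

test "copy a hash state instead of resetting it" {
    var prefix = Sha256.init(.{});
    prefix.update("common prefix: ");

    var h1 = prefix; // plain struct copy, no reset() needed
    h1.update("first message");
    var out1: [Sha256.digest_length]u8 = undefined;
    h1.final(&out1);

    var h2 = prefix; // the original state is untouched and reusable
    h2.update("second message");
    var out2: [Sha256.digest_length]u8 = undefined;
    h2.final(&out2);
}
```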
I do not see many cases of constant pointers to arrays in the stdlib.
In fact, this makes the code run a little faster, probably because Zig
automatically converts to pointers where it makes sense.
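A sketch of the signature style this refers to, assuming the change is from `*const [N]u8` parameters to by-value arrays (the names below are illustrative):

```zig
// before: fn process(key: *const [32]u8) u8, with callers passing &key
// after:  the array is taken by value; the compiler is still free to pass
// a reference behind the scenes when that is cheaper.
fn xorBytes(key: [32]u8) u8 {
    var acc: u8 = 0;
    for (key) |b| acc ^= b;
    return acc;
}

test "call sites no longer need an explicit address-of" {
    const key = [_]u8{7} ** 32;
    _ = xorBytes(key);
}
```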
This is a translation of the [official reference implementation][1] with
few other changes. The bad news is that the reference implementation is
designed for simplicity and not speed, so there's a lot of room for
performance improvement. The good news is that, according to the crypto
benchmark, the implementation is still fast relative to the other
hashing algorithms:
```
md5: 430 MiB/s
sha1: 386 MiB/s
sha256: 191 MiB/s
sha512: 275 MiB/s
sha3-256: 233 MiB/s
sha3-512: 137 MiB/s
blake2s: 464 MiB/s
blake2b: 526 MiB/s
blake3: 576 MiB/s
poly1305: 1479 MiB/s
hmac-md5: 653 MiB/s
hmac-sha1: 553 MiB/s
hmac-sha256: 222 MiB/s
x25519: 8685 exchanges/s
```
[1]: https://github.com/BLAKE3-team/BLAKE3
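For reference, basic use of the ported hash looks roughly like this (signatures as in recent Zig std; the exact options may differ by version):

```zig
const std = @import("std");
const Blake3 = std.crypto.hash.Blake3;

test "one-shot and incremental BLAKE3 agree" {
    var one_shot: [Blake3.digest_length]u8 = undefined;
    Blake3.hash("hello world", &one_shot, .{});

    var incremental: [Blake3.digest_length]u8 = undefined;
    var h = Blake3.init(.{});
    h.update("hello ");
    h.update("world");
    h.final(&incremental);

    try std.testing.expectEqualSlices(u8, &one_shot, &incremental);
}
```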