Implementation of the IND-CCA2 post-quantum secure key encapsulation
mechanism (KEM) CRYSTALS-Kyber, as submitted to the third round of the NIST
Post-Quantum Cryptography project (v3.02/"draft00"), and selected for
standardisation.
Co-authored-by: Frank Denis <124872+jedisct1@users.noreply.github.com>
Previously, if you had a pointer to multiple array elements and tried to
write through it at comptime, it was incorrectly treated as a pointer to one
specific array element, leading to an assertion failure down the line. If we
try to mutate a value at an elem_ptr that is larger than the element type,
we need to modify multiple array elements.
This solution isn't ideal, since it results in storePtrVal
serializing the whole array, modifying the relevant parts, and storing
it back. Ideally, it would only touch the required elements. However,
that change would have been more complex, and this is a fairly rare
operation (nobody had run into the bug before, after all), so it doesn't
matter all that much.
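For illustration, here's a minimal sketch of the kind of comptime store
that used to trip the assertion (not the original reproducer):

```zig
const std = @import("std");

test "comptime store through a pointer to multiple array elements" {
    comptime {
        var buf = [_]u8{0} ** 8;
        // buf[2..6] on an array yields a *[4]u8: a single elem_ptr store
        // that mutates four array elements at once.
        buf[2..6].* = .{ 1, 2, 3, 4 };
        std.debug.assert(buf[2] == 1 and buf[5] == 4);
    }
}
```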
* Add configurable side-channel mitigations; enable them on soft AES
Our software AES implementation doesn't have any mitigations against
side channels.
Go's generic implementation is not protected at all either, and even
OpenSSL only has minimal mitigations.
Full mitigations against cache-based attacks (bitslicing, fixslicing)
come at a huge performance cost, making AES-based primitives pretty
much useless for many applications. They also don't offer any
protection against other classes of side channel attacks.
In practice, partially protected, or even unprotected, implementations
are not as bad as they sound. Exploiting these side channels requires
an attacker that is able to submit many plaintexts/ciphertexts and
perform accurate measurements. Noisy measurements can still be
exploited, but require a significant number of attempts. Whether this
is exploitable depends on the platform, the application and the
attacker's proximity.
So, some libraries chose minimal mitigations, and some use better
mitigations in spite of the performance hit. It's a tradeoff
(security vs performance), and there's no one-size-fits-all
implementation.
What applies to AES applies to other cryptographic primitives.
For example, RSA signatures are very sensitive to fault attacks,
regardless of whether they use the CRT. A mitigation is to verify
every produced signature. That also comes with a performance cost.
Whether to do it depends on whether fault attacks are part of
the threat model.
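As a sketch, the verify-after-sign pattern looks like this (`signer`
and its methods are hypothetical placeholders, not a specific
std.crypto API):

```zig
// Pattern sketch: verify-after-sign as a fault-attack countermeasure.
// `signer` is anything with sign()/verify() methods (hypothetical names).
fn signAndCheck(signer: anytype, msg: []const u8) !void {
    const sig = try signer.sign(msg);
    // A fault injected while signing makes this verification fail,
    // instead of silently producing a signature that leaks key material.
    try signer.verify(sig, msg);
}
```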
Thanks to Zig's comptime, we can try to address these different
requirements.
This PR adds a `side_channels_protection` global, which can later
be complemented with `fault_attacks_protection` and possibly other
knobs.
It can have 4 different values:
- `none`: which doesn't enable additional mitigations.
"Additional", because mitigations that don't have a big performance
cost are always kept: for example, authentication tags are still
checked in constant time.
- `basic`: which enables mitigations protecting against attacks in
a common scenario, where an attacker doesn't have physical access to
the device, cannot run arbitrary code on the same thread, and cannot
conduct brute-force attacks without being throttled.
- `medium`: which enables additional mitigations, offering practical
protection in a shared environment.
- `full`: which enables all the mitigations we have.
The tradeoff is that the more mitigations we enable, the bigger the
performance hit will be. But this lets applications choose what's
best for their use case.
`medium` is the default.
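For illustration, overriding the default could look something like
this (hypothetical: the exact override mechanism isn't spelled out
here; the name mirrors the global described above):

```zig
// In the application's root source file (hypothetical override):
pub const side_channels_protection = .full;
```

Because the level is comptime-known, primitives can `switch` on it at
compile time, so code for disabled mitigations is never emitted.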
Currently, this only affects software AES, but that setting can
later be used by other primitives.
For AES, our implementation is a traditional table-based one, with
four 32-bit tables and an sbox.
Lookups in these tables have been replaced by function calls. These
functions can add a configurable noise level, making cache-based
attacks more difficult to conduct.
In the `none` mitigation level, the behavior is exactly the same
as before. Performance also remains the same.
In other levels, we compress the T tables into a single one, and
read data from multiple cache lines (all of them in `full` mode),
for all bytes in parallel. More precise measurements and way more
attempts become necessary in order to find correlations.
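The lookup strategy can be sketched like this (a simplified
illustration with made-up names and constants, not the actual
implementation):

```zig
// Read from `lines` cache lines per lookup, so the secret index alone
// no longer determines which lines are brought into the cache.
fn protectedLookup(table: *const [256]u32, secret_idx: u8, comptime lines: usize) u32 {
    const per_line = 64 / @sizeOf(u32); // u32 entries per 64-byte cache line
    var result: u32 = 0;
    inline for (0..lines) |i| {
        const probe = secret_idx +% @as(u8, i * per_line); // wraps within the table
        const hit: u32 = @intFromBool(probe == secret_idx); // 1 only at i == 0
        result |= table[probe] *% hit; // keep only the real entry's value
    }
    return result;
}
```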
In addition, we use distinct copies of the sbox for key expansion
and encryption, so that they don't share the same L1 cache entries.
The best known attacks target the first two AES rounds, or the last
one.
While future attacks may improve on this, AES achieves full
diffusion after 4 rounds. So, we can relax the mitigations after
that. This is what this implementation does, enabling mitigations
again for the last two rounds.
In `full` mode, all the rounds are protected.
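A hypothetical per-round selection, following the reasoning above:

```zig
const Level = enum { none, basic, medium, full }; // mirrors the knob above

// Hypothetical helper: decide which rounds use the hardened lookups.
// Intermediate levels protect the first four rounds (full diffusion
// point) and the last two; `full` protects every round.
fn roundIsProtected(comptime level: Level, round: usize, total: usize) bool {
    return switch (level) {
        .none => false,
        .basic, .medium => round < 4 or round >= total - 2,
        .full => true,
    };
}
```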
The protection assumes that lookups within a cache line are secret.
The CacheBleed attack showed that this assumption can be circumvented,
but doing so requires an attacker able to abuse hyperthreading and
run code on the same core as the encryption, which is rarely a
practical scenario.
Still, the current AES API allows us to transparently switch to
using fixslicing/bitslicing later when the `full` mitigation level
is enabled.
* Software AES: use little-endian representation.
Virtually all platforms are little-endian these days, so optimizing
for big-endian CPUs doesn't make sense any more.
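For example, a state word can now be loaded with a plain
little-endian read (sketch; recent Zig spells this `std.mem.readInt`
with an endianness argument, older versions had `readIntLittle`):

```zig
const std = @import("std");

// Load the i-th 32-bit word of an AES block in little-endian order.
fn loadWord(block: *const [16]u8, comptime i: usize) u32 {
    return std.mem.readInt(u32, block[4 * i ..][0..4], .little);
}
```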
In section 41.5.3 (Undefined Behavior > Integer Overflow > Builtin
Overflow Functions) of the language reference, it is stated that
> These builtins return a bool of whether or not overflow occurred, as well as returning the overflowed bits:
> * @addWithOverflow
> * @subWithOverflow
> * @mulWithOverflow
> * @shlWithOverflow
but their definitions say that they return a `tuple`/`struct`.
Example:
`@addWithOverflow(a: anytype, b: anytype) struct { @TypeOf(a, b), u1 }`
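A quick demonstration of the actual return value:

```zig
const std = @import("std");

test "@addWithOverflow returns a tuple" {
    const ov = @addWithOverflow(@as(u8, 250), @as(u8, 10));
    // ov[0] is the wrapped result, ov[1] is a u1 flag (not a bool).
    try std.testing.expectEqual(@as(u8, 4), ov[0]);
    try std.testing.expectEqual(@as(u1, 1), ov[1]);
}
```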
Co-authored-by: zooster <r00ster91@proton.me>
Apple M1/M2 CPUs have an EOR3 instruction that can XOR two operands
with a third one, and LLVM knows how to take advantage of it.
However, two EORs can't be automatically combined into an EOR3 if
one of them is in an assembly block.
That simple change speeds up ciphers doing an AES round immediately
followed by a XOR operation on Apple Silicon.
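The change amounts to expressing the trailing XOR in plain Zig rather
than inside the asm block, along these lines (a sketch with a
hypothetical block type, not the actual armcrypto code):

```zig
const Block = @Vector(16, u8); // illustrative; the real representation differs

// Doing the XOR outside the asm block that performs the AES round
// lets LLVM see both EORs and fuse them into a single EOR3.
inline fn xorBlocks(a: Block, b: Block) Block {
    return a ^ b;
}
```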
Before:
aegis-128l mac: 12534 MiB/s
aegis-256 mac: 6722 MiB/s
aegis-128l: 10634 MiB/s
aegis-256: 6133 MiB/s
aes128-gcm: 3890 MiB/s
aes256-gcm: 3122 MiB/s
aes128-ocb: 2832 MiB/s
aes256-ocb: 2057 MiB/s
After:
aegis-128l mac: 15667 MiB/s
aegis-256 mac: 8240 MiB/s
aegis-128l: 12656 MiB/s
aegis-256: 7214 MiB/s
aes128-gcm: 3976 MiB/s
aes256-gcm: 3202 MiB/s
aes128-ocb: 2835 MiB/s
aes256-ocb: 2118 MiB/s
* There was an edge case where the arena could be destroyed twice on
error: once from the arena itself and once from the decl destruction.
* The type of the created decl was incorrect (it should have been the
pointer child type), but it's not required anyway, so it's now just
initialized to anyopaque (which more accurately reflects what's
actually at that memory, since e.g. [*]T may correspond to nothing).
* A runtime bitcast of the pointer was performed, meaning @extern didn't
work at comptime. This is unnecessary: the decl_ref can just be
initialized with the correct pointer type.
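With these fixes, comptime use of `@extern` works as expected, e.g.:

```zig
// Since the fix, `@extern` can be evaluated at comptime: the decl_ref
// is created with the correct pointer type, with no runtime bitcast.
const c_puts = @extern(*const fn ([*:0]const u8) callconv(.C) c_int, .{ .name = "puts" });
```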