std/crypto: use finer-grained error sets in function signatures
Returning the `crypto.Error` error set for all crypto operations
was very convenient to ensure that errors were used consistently,
and to avoid having multiple error names for the same thing.
The flip side is that callers were forced to handle every possible
error, even those that a given function could never return.
This PR changes every function to declare an error set that is the
union of only the errors it can actually return.
The error sets themselves are all limited to a single error.
Larger sets are useful for platform-specific APIs, but we don't have
any of these in `std/crypto`, and I couldn't find any meaningful way
to build larger sets.
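For illustration, a minimal sketch of the idea (the set and function names
below are made up for the example, not necessarily the exact ones in this PR):

```zig
const std = @import("std");

// Each failure condition gets its own tiny error set...
pub const AuthenticationError = error{AuthenticationFailed};
pub const IdentityElementError = error{IdentityElement};

// ...and a function's signature advertises only the errors it can actually
// return, so callers no longer have to switch on a catch-all crypto.Error.
pub fn open(tag_matches: bool) AuthenticationError!void {
    if (!tag_matches) return error.AuthenticationFailed;
}

test "callers only handle errors that can actually occur" {
    try std.testing.expectError(error.AuthenticationFailed, open(false));
    try open(true);
}
```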
std.crypto.random
* cross platform, even freestanding
* can't fail. On some systems, initialization requires calling
os.getrandom(), which has rare but theoretically possible errors; the
code panics in those cases, but the application may override the default
seed function and handle the failure another way instead.
* thread-safe
* supports the full Random interface
* cryptographically secure
* no syscall required to initialize on Linux (AT_RANDOM)
* calls arc4random on systems that support it
`std.crypto.randomBytes` is removed in favor of `std.crypto.random.bytes`.
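A usage sketch, assuming the API as described above (`std.crypto.random`
being a ready-to-use, thread-safe CSPRNG exposing the full Random interface);
exact names may differ in later releases:

```zig
const std = @import("std");

pub fn main() void {
    var key: [32]u8 = undefined;
    // Replaces the removed std.crypto.randomBytes(&key); no init, no errors.
    std.crypto.random.bytes(&key);

    // The full Random interface is available, not just raw bytes.
    const roll = std.crypto.random.intRangeAtMost(u8, 1, 6);
    std.debug.print("key[0]={d}, roll={d}\n", .{ key[0], roll });
}
```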
I moved some of the Random implementations into their own files in the
interest of organization.
stage2 no longer requires passing a RNG; instead it uses this API.
Closes #6704
This is a trivial implementation that just does an or(xor) loop.
However, this pattern is used by virtually all crypto libraries and
in practice, even without assembly barriers, LLVM never turns it into
code with conditional jumps, even if one of the parameters is constant.
This has been verified to still be the case with LLVM 11.0.0.
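For reference, the pattern looks roughly like this (the name and signature
are illustrative, not necessarily the exact helper added here):

```zig
const std = @import("std");

// or(xor) loop: always visit every byte, fold the differences into a single
// accumulator, and only inspect it once at the end, so there is no
// data-dependent branch for a timing side channel to observe.
fn timingSafeEql(comptime len: usize, a: [len]u8, b: [len]u8) bool {
    var acc: u8 = 0;
    var i: usize = 0;
    while (i < len) : (i += 1) {
        acc |= a[i] ^ b[i];
    }
    return acc == 0;
}

test "constant-time comparison" {
    try std.testing.expect(timingSafeEql(4, .{ 1, 2, 3, 4 }, .{ 1, 2, 3, 4 }));
    try std.testing.expect(!timingSafeEql(4, .{ 1, 2, 3, 4 }, .{ 1, 2, 3, 5 }));
}
```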
The bcrypt function intentionally requires quite a lot of CPU cycles
to complete.
In addition to that, not having its full state constantly in the
CPU L1 cache causes a massive performance drop.
These properties slow down brute-force attacks against low-entropy
inputs (typically passwords), and they leave GPU-based attacks with
little to no advantage over CPUs.