std/crypto: use finer-grained error sets in function signatures
Returning the `crypto.Error` error set for all crypto operations
was very convenient to ensure that errors were used consistently,
and to avoid having multiple error names for the same thing.
The flip side is that callers were forced to handle all
possible errors, even ones that a given function could never
return.
This PR makes every function return a union of only the errors
it can actually produce.
The error sets themselves each contain a single error.
Larger sets are useful for platform-specific APIs, but we don't have
any of those in `std/crypto`, and I couldn't find any meaningful way
to build larger sets.
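As a rough sketch of what this looks like from the caller's side (the function and error names below are illustrative, not the actual `std/crypto` declarations):

```zig
const std = @import("std");

// Illustrative only: a single-error set, as described above.
const AuthenticationError = error{AuthenticationFailed};

// Hypothetical helper returning only the error it can actually produce,
// instead of the catch-all `crypto.Error` set.
fn verifyTag(expected: u8, actual: u8) AuthenticationError!void {
    if (expected != actual) return error.AuthenticationFailed;
}

pub fn main() void {
    // The caller only has to handle `AuthenticationFailed`: the switch is
    // exhaustive with a single branch, and there are no unreachable cases
    // for errors this function can never return.
    verifyTag(1, 2) catch |err| switch (err) {
        error.AuthenticationFailed => std.debug.print("tag mismatch\n", .{}),
    };
}
```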
- Use `PascalCase` for all types; `AES256GCM` is now `Aes256Gcm` (see the sketch after this list).
- Consistently use `_length` instead of mixing `_size` and `_length` for the
constants we expose.
- Use `minimum_key_length` when it represents an actual minimum length.
Otherwise, use `key_length`.
- Require output buffers (for ciphertexts, MACs, hashes) to be exactly the right
size everywhere, instead of at least that size in some functions and the exact size in others.
- Use a `_bits` suffix instead of `_length` when a size is represented as a
number of bits to avoid confusion.
- Functions returning a constant-sized slice are now defined as returning a slice,
instead of taking a pointer plus a runtime assertion. This is the case for most hash functions.
- Use `camelCase` for all functions instead of `snake_case`.
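A minimal sketch of these conventions (the type, constants, and helper below are illustrative placeholders, not the actual `std/crypto` API):

```zig
const std = @import("std");

// Illustrative placeholder: PascalCase type name (`Aes256Gcm`) and
// `_length` constants, expressed in bytes.
const Aes256Gcm = struct {
    pub const key_length = 32;
    pub const nonce_length = 12;
    pub const tag_length = 16;
};

// camelCase function name; the `*[tag_length]u8` output type makes
// "exactly the right size" a compile-time guarantee rather than a
// runtime assertion.
fn computeTag(out: *[Aes256Gcm.tag_length]u8, key: [Aes256Gcm.key_length]u8) void {
    // Placeholder body: copy the first bytes of the key, just to exercise the types.
    out.* = key[0..Aes256Gcm.tag_length].*;
}

pub fn main() void {
    var tag: [Aes256Gcm.tag_length]u8 = undefined;
    const key = [_]u8{1} ** Aes256Gcm.key_length;
    computeTag(&tag, key);
    std.debug.assert(tag[0] == 1);
}
```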
No functional changes, but these are breaking API changes.