The previous float-parsing implementation was lacking in a number of
areas. This commit introduces to std a state-of-the-art implementation
that is both accurate and fast.
The code is derived from the working repo
https://github.com/tiehuis/zig-parsefloat, which includes more test
cases and performance numbers than are present in this commit.
* Accuracy
The primary testing regime uses the test data found at
https://github.com/tiehuis/parse-number-fxx-test-data, a fork of the
upstream corpus with f128 test cases added. This data has been verified
against other independent implementations and represents accurate
round-to-even IEEE-754 floating-point semantics. An illustrative line of
the corpus format is shown below.
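As a rough sketch of the corpus format (recalled from the upstream
README, so treat it as illustrative rather than authoritative): each
line pairs the hex bit patterns of the correctly rounded f16, f32, and
f64 results with the input string, e.g.

    3C00 3F800000 3FF0000000000000 1

The fork adds an f128 column in the same style.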
* Performance
Compared to the existing parseFloat implementation, this is a ~5-10x
performance improvement on the above corpus. (f128 parsing is excluded
from the measurements below.)
** Old
$ time ./test_all_fxx_data
3520298/5296694 succeeded (1776396 fail)
________________________________________________________
Executed in 28.68 secs fish external
usr time 28.48 secs 0.00 micros 28.48 secs
sys time 0.08 secs 694.00 micros 0.08 secs
** This Implementation
$ time ./test_all_fxx_data
5296693/5296694 succeeded (1 fail)
________________________________________________________
Executed in 4.54 secs fish external
usr time 4.37 secs 515.00 micros 4.37 secs
sys time 0.10 secs 171.00 micros 0.10 secs
Further performance numbers can be seen using the
https://github.com/tiehuis/simple_fastfloat_benchmark/ repository, which
compares against some other well-known string-to-float conversion
functions. A breakdown can be found here:
0d9f020f1a/PERFORMANCE.md (commit-b15406a0d2e18b50a4b62fceb5a6a3bb60ca5706)
In summary, we are within 20% of the C++ reference implementation, with
roughly 600-700 MB/s throughput on an Intel i5-6500 @ 3.5GHz.
* F128 Support
Finally, f128 is now fully supported with complete accuracy. It
currently uses a slower path, which could be improved in the future.
* Behavioural Changes
There are a few behavioural changes to note.
- `parseHexFloat` is now redundant: hex floats are supported directly
  by `parseFloat` (see the sketch after this list).
- All parsing routines implement round-to-even, as specified by
  IEEE-754. The previous code used different rounding mechanisms
  (standard parsing was round-to-zero; hex parsing appeared to use
  round-up), so there may be subtle differences.
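As a minimal sketch of the first point (assuming only that hex strings
are now routed through `std.fmt.parseFloat`, as stated above):

    const std = @import("std");

    test "parseFloat accepts hex floats" {
        // 0x1.8p3 == 1.5 * 2^3 == 12.0; previously this required parseHexFloat.
        try std.testing.expectEqual(@as(f64, 12.0), try std.fmt.parseFloat(f64, "0x1.8p3"));
    }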
Closes #2207.
Fixes #11169.
//! Conversion of hex-float representation into an accurate value.
//
// Derived from golang strconv/atof.go.

const std = @import("std");
const math = std.math;
const common = @import("common.zig");
const Number = common.Number;
const floatFromUnsigned = common.floatFromUnsigned;

// converts the form 0xMMM.NNNpEEE.
//
// MMM.NNN = mantissa
// EEE = exponent
//
// MMM.NNN is stored as an integer, the exponent is offset.
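// Illustrative example (an assumption about the caller's scanner, which
// is not defined in this file): "0x1.8p3" would arrive here as
// mantissa = 0x18 with the exponent already lowered by 4 for the one
// fractional hex digit, i.e. exponent = 3 - 4 = -1, and 0x18 * 2^-1 == 12.0.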
pub fn convertHex(comptime T: type, n_: Number(T)) T {
    const MantissaT = common.mantissaType(T);
    var n = n_;

    if (n.mantissa == 0) {
        return if (n.negative) -0.0 else 0.0;
    }

    const max_exp = math.floatExponentMax(T);
    const min_exp = math.floatExponentMin(T);
    const mantissa_bits = math.floatMantissaBits(T);
    const exp_bits = math.floatExponentBits(T);
    const exp_bias = min_exp - 1;

    // mantissa now implicitly divided by 2^mantissa_bits
    n.exponent += mantissa_bits;

    // Shift mantissa and exponent to bring representation into float range.
    // Eventually we want a mantissa with a leading 1-bit followed by mantissa_bits other bits.
    // For rounding, we need two more, where the bottom bit represents
    // whether that bit or any later bit was non-zero.
    // (If the mantissa has already lost non-zero bits, many_digits is true,
    // and we OR in a 1 below after shifting left appropriately.)
    while (n.mantissa != 0 and n.mantissa >> (mantissa_bits + 2) == 0) {
        n.mantissa <<= 1;
        n.exponent -= 1;
    }
    if (n.many_digits) {
        n.mantissa |= 1;
    }
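    // Note: (x >> 1) | (x & 1) below shifts right while ORing the lost bit
    // into bit 0, preserving the sticky "a lower bit was non-zero" flag
    // that the rounding step relies on.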
    while (n.mantissa >> (1 + mantissa_bits + 2) != 0) {
        n.mantissa = (n.mantissa >> 1) | (n.mantissa & 1);
        n.exponent += 1;
    }

    // If exponent is too negative,
    // denormalize in hopes of making it representable.
    // (The -2 is for the rounding bits.)
    while (n.mantissa > 1 and n.exponent < min_exp - 2) {
        n.mantissa = (n.mantissa >> 1) | (n.mantissa & 1);
        n.exponent += 1;
    }

    // Round using two bottom bits.
    var round = n.mantissa & 3;
    n.mantissa >>= 2;
    round |= n.mantissa & 1; // round to even (round up if mantissa is odd)
    n.exponent += 2;
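    // round == 3 means the round bit (bit 1) was set and, in addition,
    // either the sticky bit or the new low mantissa bit was set: the value
    // is above the halfway point, or exactly halfway with an odd mantissa,
    // so we round up (round-to-nearest-even).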
    if (round == 3) {
        n.mantissa += 1;
        if (n.mantissa == 1 << (1 + mantissa_bits)) {
            n.mantissa >>= 1;
            n.exponent += 1;
        }
    }

    // Denormal or zero
    if (n.mantissa >> mantissa_bits == 0) {
        n.exponent = exp_bias;
    }

    // Infinity and range error
    if (n.exponent > max_exp) {
        return math.inf(T);
    }

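    // Assemble the IEEE-754 bit pattern: fraction bits, then the biased
    // exponent, then the sign bit.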
    var bits = n.mantissa & ((1 << mantissa_bits) - 1);
    bits |= @intCast(MantissaT, (n.exponent - exp_bias) & ((1 << exp_bits) - 1)) << mantissa_bits;
    if (n.negative) {
        bits |= 1 << (mantissa_bits + exp_bits);
    }
    return floatFromUnsigned(T, MantissaT, bits);
}
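To exercise the two-bit rounding end-to-end, here is a small test sketch
(not part of the commit; it reuses this file's `std` import and assumes
hex input reaches convertHex via `std.fmt.parseFloat`):

const testing = std.testing;

test "hex parsing rounds halfway cases to even" {
    // 0x1.000001p0 == 1 + 2^-24 sits exactly halfway between 1.0 and the
    // next f32 (1 + 2^-23); round-to-even keeps the even mantissa, i.e. 1.0.
    try testing.expectEqual(@as(f32, 1.0), try std.fmt.parseFloat(f32, "0x1.000001p0"));
    // 0x1.000003p0 == 1 + 3 * 2^-24 is past the halfway point and rounds up.
    try testing.expectEqual(@as(f32, 0x1.000002p0), try std.fmt.parseFloat(f32, "0x1.000003p0"));
}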