Originally inspired by Go's `utf8.Valid` function. Includes some test cases from Go's test suite.
Further optimized to be faster in all tested cases (short/long, ASCII/UTF-8), in all release modes.
Takes advantage of SIMD for the ASCII fast path.
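As a sketch of the general technique (not the actual std.unicode implementation), the ASCII fast path amounts to scanning a block of bytes at a time and falling back to the full UTF-8 validator as soon as a block contains a byte with the high bit set:

```zig
const std = @import("std");

// Sketch: length of the leading all-ASCII prefix, scanning 16 bytes at a time.
fn asciiPrefixLen(s: []const u8) usize {
    const Block = @Vector(16, u8);
    var i: usize = 0;
    while (i + 16 <= s.len) : (i += 16) {
        const block: Block = s[i..][0..16].*;
        // Any byte >= 0x80 means this block is not pure ASCII.
        if (@reduce(.Max, block) >= 0x80) break;
    }
    // Find the exact boundary (and handle the tail) one byte at a time.
    while (i < s.len and s[i] < 0x80) i += 1;
    return i;
}

test "ascii fast path" {
    try std.testing.expectEqual(@as(usize, 5), asciiPrefixLen("hello"));
    try std.testing.expectEqual(@as(usize, 1), asciiPrefixLen("héllo"));
}
```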
`nextAfter()` returns the next representable value after `x` in the direction of `y` and is a standard math library function ([C++](https://en.cppreference.com/w/cpp/numeric/math/nextafter), [Java](https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html#nextAfter-double-double-)). It is primarily useful for bitwise incrementing/decrementing floats.
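For the easiest case (a positive, finite float stepped toward positive infinity), the bitwise increment idea looks roughly like this; this is a sketch of the concept only, since the real function also has to handle signs, zeroes, NaNs, infinities, and `x == y`:

```zig
const std = @import("std");

// Sketch: the next representable f64 above a positive, finite x is obtained
// by incrementing its bit pattern.
fn nextUpF64(x: f64) f64 {
    const bits: u64 = @bitCast(x);
    return @bitCast(bits + 1);
}

test "bitwise increment reaches the next representable float" {
    const x: f64 = 1.0;
    const next = nextUpF64(x);
    try std.testing.expect(next > x);
    // For 1.0, the gap to the next float is exactly machine epsilon.
    try std.testing.expectEqual(x + std.math.floatEps(f64), next);
}
```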
This implementation supports runtime integers, runtime floats and `comptime_int`. `comptime_float` is not supported because NaNs/infinities are intentionally difficult to obtain and because I'm not sure if the fact that it's backed by `f128` is supposed to be an implementation detail. Either way, the user could just call the function with the floating-point type whose behavior they want at comptime and then cast the result to `comptime_float`.
The float implementation was ported from mingw-w64 with some slight changes made possible because the Zig standard library doesn't care about raising FP exceptions.
The number of test cases may seem excessive, but they should cover every normal and edge case for every float type and are especially important for verifying that `f80` works.
Until now, we would pass `candidate: NativeTargetInfo`, which creates
a copy of the `NativeTargetInfo.DynamicLinker` buffer. We would then
return this buffer in `bad_dl: []const u8`, which goes out of scope
the moment we leave this function frame, yielding garbage. To fix this,
we just need to remember to pass by const pointer:
`candidate: *const NativeTargetInfo`.
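A contrived sketch of the bug class, using hypothetical stand-in types rather than the real `NativeTargetInfo`:

```zig
const std = @import("std");

const DynamicLinker = struct {
    buffer: [255]u8 = undefined,
    len: u8 = 0,

    fn get(self: *const DynamicLinker) []const u8 {
        return self.buffer[0..self.len];
    }
};

// Bad: `candidate` is a copy living in this frame, so the returned slice
// points at memory that is gone as soon as this function returns.
fn badPath(candidate: DynamicLinker) []const u8 {
    return candidate.get();
}

// Good: pass by const pointer so the slice refers to the caller's storage.
fn goodPath(candidate: *const DynamicLinker) []const u8 {
    return candidate.get();
}

test "slice stays valid when passing by const pointer" {
    var dl = DynamicLinker{};
    const path = "/lib/ld.so";
    @memcpy(dl.buffer[0..path.len], path);
    dl.len = path.len;
    try std.testing.expectEqualStrings(path, goodPath(&dl));
}
```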
Follow-up to #17383. This is a minor optimization that only matters when a small allocation is resized or freed soon after it is allocated.
The only real difference I was able to observe with this was via a synthetic benchmark that allocates a full bucket and then frees all but one of the slots, over and over in a loop:
Debug build:
Benchmark 1 (9 runs): gpa-degen-master.exe
  measurement   mean ± σ          min … max          outliers   delta
  wall_time     575ms ± 5.19ms    569ms … 583ms      0 ( 0%)    0%
  peak_rss      43.8MB ± 1.37KB   43.8MB … 43.8MB    1 (11%)    0%
Benchmark 2 (10 runs): gpa-degen-search-cur.exe
  measurement   mean ± σ          min … max          outliers   delta
  wall_time     532ms ± 5.55ms    520ms … 539ms      0 ( 0%)    ⚡- 7.5% ± 0.9%
  peak_rss      43.8MB ± 65.2KB   43.8MB … 44.0MB    1 (10%)    + 0.0% ± 0.1%
ReleaseFast build:
Benchmark 1 (129 runs): gpa-degen-master-release.exe
  measurement   mean ± σ          min … max          outliers   delta
  wall_time     38.9ms ± 1.12ms   36.7ms … 42.4ms    8 ( 6%)    0%
  peak_rss      23.2MB ± 2.39KB   23.2MB … 23.2MB    0 ( 0%)    0%
Benchmark 2 (151 runs): gpa-degen-search-cur-release.exe
  measurement   mean ± σ          min … max          outliers   delta
  wall_time     33.2ms ± 999us    31.9ms … 36.3ms    20 (13%)   ⚡- 14.7% ± 0.6%
  peak_rss      23.2MB ± 2.26KB   23.2MB … 23.2MB    0 ( 0%)    + 0.0% ± 0.0%
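For reference, the degenerate loop looked roughly like the sketch below; the slot size and count are made-up assumptions, since the actual benchmark program is not included here:

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    // Allocations we deliberately keep alive so their buckets stay in use.
    var keep = std.ArrayList([]u8).init(allocator);
    defer {
        for (keep.items) |p| allocator.free(p);
        keep.deinit();
    }

    var round: usize = 0;
    while (round < 10_000) : (round += 1) {
        // Fill a bucket's worth of small allocations...
        var slots: [256][]u8 = undefined;
        for (&slots) |*slot| slot.* = try allocator.alloc(u8, 16);
        // ...keep one slot so the bucket cannot be retired...
        try keep.append(slots[0]);
        // ...and free the rest, forcing later allocations to search for
        // a bucket with free slots.
        for (slots[1..]) |slot| allocator.free(slot);
    }
}
```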
The idea here is to avoid code bloat by having only one actual io.Reader
implementation, which is type-erased, and then implement a GenericReader
that preserves type information on top of that as thin glue code.
The strategy is for that glue code to `@errSetCast` the result of
the type-erased reader functions; however, while trying to do that I
ran into #17343.
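The shape of that pattern, as an illustrative self-contained sketch with hypothetical types (not the actual std.io code):

```zig
const std = @import("std");

// One type-erased reader implementation: context pointer + function pointer,
// with the error set widened to anyerror.
const ErasedReader = struct {
    context: *anyopaque,
    readFn: *const fn (context: *anyopaque, buffer: []u8) anyerror!usize,

    fn read(self: ErasedReader, buffer: []u8) anyerror!usize {
        return self.readFn(self.context, buffer);
    }
};

// A concrete reader plus thin typed glue on top of the erased one.
const SliceReader = struct {
    remaining: []const u8,

    const ReadError = error{EndOfStream};

    fn erasedRead(context: *anyopaque, buffer: []u8) anyerror!usize {
        const self: *SliceReader = @ptrCast(@alignCast(context));
        if (self.remaining.len == 0) return ReadError.EndOfStream;
        const n = @min(buffer.len, self.remaining.len);
        @memcpy(buffer[0..n], self.remaining[0..n]);
        self.remaining = self.remaining[n..];
        return n;
    }

    fn read(self: *SliceReader, buffer: []u8) ReadError!usize {
        const erased = ErasedReader{ .context = self, .readFn = erasedRead };
        return erased.read(buffer) catch |err| {
            // Narrow the widened anyerror back down to the concrete set.
            const narrowed: ReadError = @errSetCast(err);
            return narrowed;
        };
    }
};

test "typed glue over a type-erased reader" {
    var reader = SliceReader{ .remaining = "abc" };
    var buf: [8]u8 = undefined;
    const n = try reader.read(&buf);
    try std.testing.expectEqualStrings("abc", buf[0..n]);
}
```

For simplicity the sketch casts only the error value inside a `catch`, rather than casting the whole error union result as the commit describes.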
The `makeMem*()` functions crashed under valgrind in Debug and
ReleaseSafe modes.
The reason is that `doMemCheckClientRequestExpr()` returns `0`
when not running under Valgrind, and `maxInt(usize)` when running
under Valgrind.
Thus, `@as(i1, @intCast(maxInt(usize)))` always failed and these
functions crashed before returning.
That being said, what these functions used to return was quite
unexpected: `0` on error and `-1` on success (i.e. running under
Valgrind). That matches neither Zig nor C conventions.
But that return value doesn't seem to be very useful. Either we are
running under Valgrind or we are not. There's no point in checking this
for every single call. Applications are likely to always discard it.
So, just return `void` instead.
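A tiny illustration of why the old conversion could never succeed (this is just the arithmetic, not the valgrind bindings themselves): `i1` can only represent -1 and 0:

```zig
const std = @import("std");

test "i1 cannot hold Valgrind's 'active' result" {
    // i1's entire value range:
    try std.testing.expectEqual(-1, std.math.minInt(i1));
    try std.testing.expectEqual(0, std.math.maxInt(i1));
    // The old code did the equivalent of
    //   @as(i1, @intCast(raw))
    // with raw == maxInt(usize) under Valgrind, which always trips the
    // @intCast safety check.
    try std.testing.expect(std.math.maxInt(usize) > std.math.maxInt(i1));
}
```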
Also avoid function comments that start with `Similarly, ...` because
that doesn't refer to anything in the context of autodoc or in IDEs.
zig fetch [options] <url>
zig fetch [options] <path>
Fetches a package which is found at <url> or <path> into the global
cache directory, printing the package hash to stdout.
Closes #16972
Related to #14280
Additionally, this commit:
* Adds uncompressed .tar support to package fetching
* Introduces symlink support to package fetching
The 5.11 in uname is not something that is ever updated. There is no
versioning of the illumos system in general. Illumos prefers to rely
on feature detection.
I can't say what Solaris does these days as I do not work at Oracle,
so I left it alone.
- Adds `illumos` to the `Target.Os.Tag` enum. A new function,
`isSolarish`, has been added that returns true if the tag is either
Solaris or Illumos (see the sketch below). This matches the naming
convention found in Rust's `libc` crate[1].
- Adds the tag wherever `.solaris` is being checked against.
- Check for the C pre-processor macro `__illumos__` in CMake to set the
proper target tuple. Illumos distros patch their compilers to have
this in the "built-in" set (verified with `echo | cc -dM -E -`).
Alternatively you could check the output of `uname -o`.
Right now, both Solaris and Illumos import from `c/solaris.zig`. In the
future it may be worth putting the shared ABI bits in a base file, and
mixing that in with specific `c/solaris.zig`/`c/illumos.zig` files.
[1]: 6e02a329a2/src/unix/solarish
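A minimal sketch of the new helper, using a trimmed-down stand-in enum rather than the real `Target.Os.Tag`:

```zig
const std = @import("std");

// Stand-in for Target.Os.Tag, trimmed for the example.
const Tag = enum { linux, solaris, illumos, windows };

fn isSolarish(tag: Tag) bool {
    return tag == .solaris or tag == .illumos;
}

test "solaris and illumos are both Solarish" {
    try std.testing.expect(isSolarish(.solaris));
    try std.testing.expect(isSolarish(.illumos));
    try std.testing.expect(!isSolarish(.linux));
}
```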
(Unmanaged)ArrayList.insert has the same inefficiency as the old insertSlice. With the new addManyAt, the solution is trivial.
Also improves the test "growing memory preserves contents". In the previous implementation, if any changes were made to the ArrayList memory growth policy (function growMemory), the list could end up with enough capacity that no growth was triggered, defeating the purpose of the test. The new implementation triggers memory growth more robustly.
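A self-contained sketch of the idea (hypothetical helper code, not the std implementation): make room for `count` elements with a single tail shift and hand back the gap, so that `insert` is just a one-element special case:

```zig
const std = @import("std");

// Hypothetical sketch: shift the tail once, return the uninitialized gap,
// and let callers fill it in.
fn addManyAtSketch(comptime T: type, list: *std.ArrayList(T), index: usize, count: usize) ![]T {
    const old_len = list.items.len;
    try list.resize(old_len + count);
    std.mem.copyBackwards(T, list.items[index + count ..], list.items[index..old_len]);
    return list.items[index .. index + count];
}

fn insertSketch(comptime T: type, list: *std.ArrayList(T), index: usize, item: T) !void {
    const gap = try addManyAtSketch(T, list, index, 1);
    gap[0] = item;
}

test "insert via a single tail shift" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("acd");
    try insertSketch(u8, &list, 1, 'b');
    try std.testing.expectEqualStrings("abcd", list.items);
}
```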
These are an order of magnitude quicker than the previous
implementations:
A relative comparison of each, scanning a 1G file:
Reading 1G (1.0000000009313226GiB)
std.mem.sliceTo:            281.232ms
vectorized.sliceTo:          24.769ms
strlen:                      24.291ms
std.indexOfScalar:          229.016ms
vectorized.indexOfScalar:    24.685ms
memchr:                      24.958ms
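The vectorization trick, as a simplified sketch of the general technique (fixed 16-byte blocks here, not the committed implementation):

```zig
const std = @import("std");

// Compare 16 bytes at a time; only scan byte-by-byte inside a block that is
// known to contain the needle, or at the unaligned tail.
fn indexOfByteVectorized(haystack: []const u8, needle: u8) ?usize {
    const Block = @Vector(16, u8);
    const splat_needle: Block = @splat(needle);
    var i: usize = 0;
    while (i + 16 <= haystack.len) : (i += 16) {
        const block: Block = haystack[i..][0..16].*;
        if (@reduce(.Or, block == splat_needle)) {
            // The needle is somewhere in this block; find its exact offset.
            for (haystack[i..][0..16], 0..) |byte, j| {
                if (byte == needle) return i + j;
            }
        }
    }
    while (i < haystack.len) : (i += 1) {
        if (haystack[i] == needle) return i;
    }
    return null;
}

test "vectorized byte search matches the scalar answer" {
    const data = "The quick brown fox jumps over the lazy dog";
    try std.testing.expectEqual(std.mem.indexOfScalar(u8, data, 'z'), indexOfByteVectorized(data, 'z'));
    try std.testing.expectEqual(@as(?usize, null), indexOfByteVectorized(data, '!'));
}
```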
* Move `computeBetterCapacity` to the bottom so that `pub` stuff shows
up first.
* Rename `computeBetterCapacity` to `growCapacity`. Every function
implicitly computes something; that word is always redundant in a
function name. "better" is vague; better in what way? Instead, we
describe what is actually happening: "grow".
* Improve doc comments to be very explicit about when element pointers
are invalidated or not (see the sketch after this list).
* Rename `addManyAtIndex` to `addManyAt`. The parameter is named
`index`; that is enough.
* Extract some duplicated code into `addManyAtAssumeCapacity` and make
it `pub`.
* Since I audited every line of code for correctness, I changed the
style to my personal preference.
* Avoid a redundant `@memset` to `undefined`; memory allocation does
that already.
* Fix a comment giving the wrong reason for not calling
`ensureTotalCapacity`.
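The pointer-invalidation hazard that those doc comments describe, as a small example:

```zig
const std = @import("std");

test "append may invalidate previously taken element pointers" {
    var list = std.ArrayList(u32).init(std.testing.allocator);
    defer list.deinit();
    try list.append(1);

    const first = &list.items[0];
    _ = first; // valid only until the next call that may reallocate

    try list.append(2); // may grow the buffer and move the elements
    // `first` must not be dereferenced here; re-derive it from `list.items`.
    try std.testing.expectEqual(@as(u32, 1), list.items[0]);
}
```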
Includes a more robust implementation of replaceRange, which updates the
ArrayListUnmanaged if state changes in the managed part of the code
before returning an error.
Co-authored-by: Andrew Kelley <andrew@ziglang.org>