This allows Zig code to perform conditional compilation based on a tag
by which a Zig compiler implementation identifies itself.
See the doc comment in this commit for more details.
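A minimal sketch of the kind of check this enables, assuming the tag is exposed via `@import("builtin")` (the field name `zig_backend` and the `.stage1` tag below are assumptions for illustration; see the doc comment for the real tag):
```
const builtin = @import("builtin");

// Hypothetical usage: branch on which compiler implementation built this
// code. The field name `zig_backend` is an assumption made for illustration.
pub const supports_feature = switch (builtin.zig_backend) {
    .stage1 => true,
    else => false,
};
```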
We now detect if the return type will be set by passing the first argument
as a pointer to stack memory from the caller's frame. This way, we do not have to
worry about stack memory being overwritten.
Besides this, we implement memset by either using wasm's memory.fill instruction when available,
or lowering it manually. In the future we can lower this to a compiler_rt call.
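Conceptually, the manual lowering amounts to a plain store loop; a sketch in Zig of what the emitted code does (not the backend's actual implementation):
```
// What the manually lowered memset boils down to when memory.fill is not
// available: store `value` into each of the `len` destination bytes.
fn memsetManual(dest: [*]u8, value: u8, len: usize) void {
    var i: usize = 0;
    while (i < len) : (i += 1) {
        dest[i] = value;
    }
}
```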
copy_cqes() is not guaranteed to return as many CQEs as provided in the
`wait_nr` argument, meaning the assert in `copy_cqe` can trigger.
Instead, loop until we do get at least one CQE returned.
This mimics the behaviour of liburing's _io_uring_get_cqe.
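Roughly, the retry loop looks like this (a sketch against the `std.os.linux.IO_Uring` API, not necessarily the exact code):
```
const std = @import("std");
const linux = std.os.linux;

// Keep asking for CQEs until the kernel hands back at least one, mirroring
// liburing's _io_uring_get_cqe.
fn copyOneCqe(ring: *linux.IO_Uring) !linux.io_uring_cqe {
    var cqes: [1]linux.io_uring_cqe = undefined;
    while (true) {
        const count = try ring.copy_cqes(&cqes, 1);
        if (count > 0) return cqes[0];
    }
}
```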
The semantics of this function are that it moves both files and
directories. Previously we had this `is_dir` boolean field of
`std.os.windows.OpenFile` which required the API user to choose: are we
opening a file or a directory? Opening the other kind would then fail with
error.IsDir or error.NotDir. But that is not a limitation of the Windows
file system API; it was self-imposed.
On Windows, rename is implemented internally with `NtCreateFile` so we
need to allow it to open either files or directories. This is now done
by `std.os.windows.OpenFile` accepting enum{file_only,dir_only,any}
instead of a boolean.
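A sketch of the shape of the change (simplified; the exact field name and default in the options struct may differ):
```
// Before: a yes/no question that could not express "either kind is fine".
//   is_dir: bool = false,

// After: callers such as rename can pass .any and let NtCreateFile open
// whichever kind the path refers to.
pub const OpenFileOptions = struct {
    filter: enum { file_only, dir_only, any } = .file_only,
    // ...remaining fields unchanged
};
```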
This allows stage2 to build more of compiler-rt.
I also changed `-%` to `-` for comptime ints in the div and mul
implementations of compiler-rt. This is clearer code and also happens to
work around a bug in stage2.
This improves readability as well as compatibility with stage2. Most of
compiler-rt is now enabled for stage2 with just a few functions disabled
(until stage2 passes more behavior tests).
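A small standalone illustration of why plain `-` suffices for comptime ints (not compiler-rt code):
```
const std = @import("std");

test "plain subtraction is enough for comptime ints" {
    const a: comptime_int = 10;
    const b: comptime_int = 3;
    // comptime_int has arbitrary precision, so `-` can never overflow;
    // the wrapping form `-%` only obscures the intent.
    try std.testing.expect(a - b == 7);
}
```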
This saves on comptime format string parsing, as the compiler caches
comptime calls. The catch here is that parsePlaceHolder cannot take the
placeholder string as a slice. It must take it as an array by value for
the caching to occur.
There is also some logic in here that ensures that the specifier_arg is
always the same slice whenever its contents are the same. This makes the
compiler stamp out fewer copies of formatType.
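A small standalone illustration of the by-value requirement (not the actual std.fmt internals):
```
const std = @import("std");

// Because `spec` is a fixed-size array taken *by value*, two calls that pass
// the same bytes pass the same comptime value, so the compiler can reuse the
// cached result of the first comptime call instead of analyzing it again.
fn specIsDecimal(comptime spec: [1]u8) bool {
    return spec[0] == 'd';
}

test "identical array-by-value comptime arguments share one comptime call" {
    try std.testing.expect(specIsDecimal("d".*));
    try std.testing.expect(specIsDecimal([1]u8{'d'}));
}
```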
For renameat, unlinkat, mkdirat, symlinkat and linkat, the error code
differs between kernels: 5.4 returns EBADF while 5.10 returns EINVAL.
Fixes #10466
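A hedged sketch of handling both variants the same way (not the actual std code; assumes the `std.os.linux.E` errno enum):
```
const std = @import("std");

// Sketch only: treat both errno values alike, since kernel 5.4 reports EBADF
// where kernel 5.10 reports EINVAL for these *at() calls.
fn isExpectedFailure(err: std.os.linux.E) bool {
    return switch (err) {
        .BADF, .INVAL => true,
        else => false,
    };
}

test "both kernel variants are accepted" {
    try std.testing.expect(isExpectedFailure(.BADF));
    try std.testing.expect(isExpectedFailure(.INVAL));
}
```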
- neg can only overflow if a == MIN (see the sketch below)
- the `-0` case is handled properly by the hardware, so checking
`a == MIN` is a sufficient overflow check
- tests: MIN, MIN+1, MIN+4, -42, -7, -1, 0, 1, 7..
See #1290
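A minimal sketch of the check described above (not the exact compiler-rt code):
```
const std = @import("std");

// Negation of a two's complement integer overflows only for MIN, since -MIN
// is not representable; `-0` needs no special handling.
fn negv(comptime T: type, a: T) !T {
    if (a == std.math.minInt(T)) return error.Overflow;
    return -a;
}

test "negation overflows only at MIN" {
    try std.testing.expectError(error.Overflow, negv(i32, std.math.minInt(i32)));
    try std.testing.expectEqual(@as(i32, 42), try negv(i32, -42));
    try std.testing.expectEqual(@as(i32, 0), try negv(i32, 0));
}
```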
* Make bcrypt State struct public
This is useful for implementing various protocols outside of the standard library.
* Implement bcrypt pbkdf
This variant is used in, e.g., SSH; a usage sketch follows below.
The OpenBSD implementation was used as a reference.
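A rough usage sketch (the exact signature of `bcrypt.pbkdf` may differ; check the module docs):
```
const std = @import("std");
const bcrypt = std.crypto.pwhash.bcrypt;

test "derive an SSH-style key with bcrypt pbkdf" {
    var key: [32]u8 = undefined;
    // Assumed parameter order: password, salt, output key buffer, rounds.
    try bcrypt.pbkdf("correct horse battery staple", "0123456789abcdef", &key, 16);
}
```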
* remove false positive "all prongs handled" compile error for
non-exhaustive enums.
* implement `@typeInfo` for enums; enums which have any declarations are
still TODO. A small example follows this list.
* `getBuiltin` uses namespaceLookup/analyzeDeclVal rather than
namespaceLookupRef/analyzeLoad. Avoids a detour through an
unnecessary type, and adds a detour through a caching mechanism.
* `Value.eql`: add missing code to handle enum comparisons for
non-exhaustive enums. It works by converting the enum tags to numeric
values and comparing those.
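A small example of the behavior being exercised (a sketch, not taken from the actual test suite; written against the era's `@intToEnum` builtin):
```
const std = @import("std");

const Color = enum(u8) {
    red,
    blue,
    _, // non-exhaustive: any other u8 value is also a valid Color
};

test "non-exhaustive enum switching and @typeInfo" {
    const c = @intToEnum(Color, 200);
    // Naming every declared tag does not cover every possible value here;
    // a `_` prong handles the unnamed ones.
    const known = switch (c) {
        .red, .blue => true,
        _ => false,
    };
    try std.testing.expect(!known);
    try std.testing.expect(!@typeInfo(Color).Enum.is_exhaustive);
}
```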
- abs can only overflow if a == MIN (see the sketch below)
- comparing the sign change from the wrapping addition is branchless
- tests: MIN, MIN+1, .., MIN+4, -42, -7, -1, 0, 1, 7..
See #1290
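A minimal sketch of the branchless absolute value plus the MIN overflow check (not the exact compiler-rt code):
```
const std = @import("std");

// Build a mask from the sign bit; (a +% mask) ^ mask yields |a| without a
// branch. Only a == MIN overflows, because |MIN| is not representable.
fn absv(comptime T: type, a: T) !T {
    if (a == std.math.minInt(T)) return error.Overflow;
    const mask = a >> (@typeInfo(T).Int.bits - 1);
    return (a +% mask) ^ mask;
}

test "absolute value overflows only at MIN" {
    try std.testing.expectError(error.Overflow, absv(i32, std.math.minInt(i32)));
    try std.testing.expectEqual(@as(i32, 42), try absv(i32, -42));
    try std.testing.expectEqual(@as(i32, 7), try absv(i32, 7));
}
```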
The linker will now emit names for all function, global, and data segment symbols.
This greatly improves the debuggability of wasm modules, as tools like wasm2wat
can use this information to generate named functions, globals, etc., rather than placeholders such as $f1.
The self-hosted compiler cannot yet deal with the print function that this
field enables. It is not critical, however, and this allows us to remove formatting
from the list of features necessary to implement to get the page allocator
working.
Make `@returnAddress()` return 0 for the BPF target, as the BPF target for
the time being does not support probing for the return address. Stack
traces for the general purpose allocator are also set to not be captured
for the BPF target.
Before, `std.Progress` was printing unwanted stuff to stderr. Now, the
test runner's logic to detect whether we should print each test as a
separate line to stderr is properly activated.
The status quo for the `build.zig` build system is preserved in
the sense that, if the user does not explicitly override
`dylib.setInstallName(...);` in their build script, the default
of `@rpath/libname.dylib` applies. However, should they want to
override the default behaviour, they can either:
1) unset it with
```dylib.setInstallName(null);```
2) set it to an explicit string with
```dylib.setInstallName("somename.dylib");```
When it comes to the command line, however, the default is not to
use `@rpath` for the install name when creating a dylib. The user
will now be required to explicitly specify `@rpath` as part
of the desired install name, should they choose to do so:
1) with `build-lib`
```
zig build-lib -dynamic foo.zig -install_name @rpath/libfoo.dylib
```
2) with `cc`
```
zig cc -shared foo.c -o libfoo.dylib -Wl,"-install_name=@rpath/libfoo.dylib"
```
This reverts commit 11803a3a569205d640c7ec0b0aedba83f47a6e64.
Observations from the performance dashboard:
* strictly worse in terms of CPU instructions
* slightly worse wall time (but this can be noisy)
* sometimes better, sometimes worse for branch predictions
Given that the commit was introducing complexity for optimization's
sake, these performance changes do not seem worth it.
See https://github.com/ziglang/zig/pull/10337 for context.
In #10337 the `available` tracking fix necessitated an additional condition on the probe loop in both `getOrPut` and `getIndex` to prevent an infinite loop. Previously, this condition was implicit thanks to the guaranteed presence of a free slot.
The new condition hurts the `HashMap` benchmarks (https://github.com/ziglang/zig/pull/10337#issuecomment-996432758).
This commit removes that extra condition on the loop. Instead, when probing, first check whether the "home" slot is the target key — if so, return it. Otherwise, save the home slot's metadata to the stack and temporarily "free" the slot (but don't touch its value). Then continue with the original loop. Once again, the loop will be implicitly broken by the new "free" slot. The original metadata is restored before the function returns.
`getOrPut` has one additional gotcha: if the home slot is a tombstone and `getOrPut` misses, then the home slot is written with the new key; that is, its original metadata (the tombstone) is not restored.
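A toy sketch of the "temporarily free the home slot" trick using a plain metadata array (illustrative only, not the real HashMap internals):
```
const std = @import("std");

// 0 represents a free slot in this toy metadata array.
fn findIndex(metadata: []u8, home: usize, target: u8) ?usize {
    if (metadata[home] == target) return home; // fast path: home slot hit
    const saved = metadata[home];
    metadata[home] = 0; // temporarily "free" the home slot...
    defer metadata[home] = saved; // ...and restore it before returning
    var i = (home + 1) % metadata.len;
    // No extra bounds condition is needed: the probe is guaranteed to stop
    // at the artificially freed home slot if nothing else terminates it.
    while (metadata[i] != 0) : (i = (i + 1) % metadata.len) {
        if (metadata[i] == target) return i;
    }
    return null;
}

test "probing terminates even when the table has no free slots" {
    var metadata = [_]u8{ 3, 5, 7, 9 };
    try std.testing.expectEqual(@as(?usize, 2), findIndex(&metadata, 1, 7));
    try std.testing.expectEqual(@as(?usize, null), findIndex(&metadata, 1, 42));
    try std.testing.expectEqual(@as(u8, 5), metadata[1]); // metadata restored
}
```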
Other changes:
- Test hash map misses.
- Test using `getOrPutAssumeCapacity` to get keys at the end (along with `get`).
When entries are inserted and removed into a hash map at an equivalent rate (maintaining a mostly-consistent total count of entries), it should never need to be resized. But `HashMapUnmanaged.available` does not presently count tombstoned slots as "available", so this put/remove pattern eventually panics (assertion failure) when `available` reaches `0`.
The solution implemented here is to count tombstoned slots as "available". Another approach, taken by hashbrown (b3eaf32e60/src/raw/mod.rs (L1455-L1542)), would be to rehash all entries in place when there are too many tombstones. That is more complex, but it avoids an `O(n)` bad case when the hash map contains many tombstones.
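A small reproduction of the pattern that previously tripped the assertion (a sketch; the iteration count is arbitrary):
```
const std = @import("std");

test "steady-state put/remove no longer exhausts capacity" {
    var map = std.AutoHashMap(u32, void).init(std.testing.allocator);
    defer map.deinit();
    // The live count stays tiny, but every remove used to leave a tombstone
    // that permanently consumed `available`, eventually hitting the assert.
    var i: u32 = 0;
    while (i < 100_000) : (i += 1) {
        try map.put(i, {});
        _ = map.remove(i);
    }
}
```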