Instead of making the memory alignment functions more complicated, I
added more API documentation for their existing semantics.
closes #12118
closes #12135
Previously, if you used `zig test -ofmt=c -target foobar` then Zig would
try to compile the generated C code with the native target instead of
"foobar".
With this change, `--test-cmd` with e.g. QEMU still won't work, but at
least the binary will get compiled for the correct target.
We need to be careful to respect side-effects/branching in these
cases, but otherwise this behaves very similarly to multiplication:
`lhs and rhs` is comptime-known `false` if either lhs or rhs is
comptime-known `false`, just like `lhs * rhs` is comptime-known `0` if
either operand is comptime-known to be zero.
Similar reasoning applies to `lhs or rhs`.
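A minimal sketch of those semantics as a behavior-style test; it
assumes the folding described above, and `runtimeBool` is a
hypothetical helper rather than code from this change:

```zig
const std = @import("std");

fn runtimeBool() bool {
    return true;
}

test "and with a comptime-known false lhs" {
    const lhs = false;         // comptime-known
    const rhs = runtimeBool(); // runtime-known
    if (lhs and rhs) {
        // The whole condition folds to comptime-known `false`, analogous
        // to `0 * x` folding to zero, so this branch is never analyzed.
        @compileError("should not be analyzed");
    }
    // Likewise `true or rhs` is comptime-known `true`.
    try std.testing.expect(true or rhs);
}
```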
* std.os.uefi: integer backed structs, add tests to catch regressions
  - device_path_protocol now uses extern structs with align(1) fields
    because the transition to integer backed packed structs broke alignment
  - added comptime asserts that device_path_protocol structs do not violate
    the alignment and size specifications (see the sketch below)
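A hedged sketch of that pattern, not the actual std.os.uefi
definitions (`DevicePathNode` and its field names are illustrative):
an extern struct with align(1) fields plus comptime asserts pinning
the size and offsets required by the spec.

```zig
const std = @import("std");
const assert = std.debug.assert;

// Illustrative stand-in for a device_path_protocol node header.
const DevicePathNode = extern struct {
    node_type: u8,
    node_subtype: u8,
    length: u16 align(1),
};

// Catch layout regressions at compile time.
comptime {
    assert(@sizeOf(DevicePathNode) == 4);
    assert(@offsetOf(DevicePathNode, "node_type") == 0);
    assert(@offsetOf(DevicePathNode, "node_subtype") == 1);
    assert(@offsetOf(DevicePathNode, "length") == 2);
}
```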
This is a hack meant to restore functionality for the upcoming
release; a proper analysis of the new ZIR structure is required to
make a robust change.
Make the test use the minimum length and set MAX_NAME_BYTES to the maximum so that:
- the test will work on any host platform
- *and* the MAX_NAME_BYTES will be able to hold the max file name component on any host platform
Each u16 within a file name component can be encoded as up to 3 UTF-8
bytes, so MAX_NAME_BYTES must be large enough to account for every
possible UTF-8-encoded name.
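For illustration, a hedged sketch of that sizing argument; the 255
code-unit component limit is an assumption here, not a value quoted
from std.fs:

```zig
const std = @import("std");

// A file name component is assumed to be at most 255 u16 code units,
// and each u16 can expand to at most 3 bytes when re-encoded as UTF-8,
// so the byte buffer must be 3x the code-unit limit.
const max_name_code_units = 255; // illustrative limit
const max_name_bytes = max_name_code_units * 3;

test "buffer sized for worst-case UTF-8 expansion" {
    var buf: [max_name_bytes]u8 = undefined;
    // U+FFFD needs 3 UTF-8 bytes, the worst case for a single u16.
    const len = try std.unicode.utf8Encode(0xFFFD, &buf);
    try std.testing.expectEqual(@as(u3, 3), len);
}
```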
Fixes #8268
Before, the code for building glibc stubs used a special case of the
Cache API that did not add any file inputs and did not use
writeManifest(). This is not really how the Cache API is designed to
be used, and it showed: there was a race condition.
This commit adds the abilists file that ships with Zig's installation
as an input file, which has the added benefit that glibc stub caching
now properly detects invalidation when the user decides to overwrite
their abilists file. This harmonizes with the rest of how Zig works,
which intentionally allows you to hack on the installation files and
still have everything behave properly with the cache system.
Finally, because there is now at least one file input, the normal API
flow of the Cache system can be used, eliminating the one place that
used the Cache API in a special way. In other words, it now uses
writeManifest() and properly obeys the cache hit/miss semantics.
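For illustration only, here is a hedged sketch of that flow; it
assumes the Cache/Manifest API names obtain(), addFile(), hit(), and
writeManifest() as exposed via std.Build.Cache, and `buildStubsCached`
/ `abilists_path` are made-up names rather than the real glibc.zig
code:

```zig
const std = @import("std");
const Cache = std.Build.Cache;

fn buildStubsCached(cache: *Cache, abilists_path: []const u8) !void {
    var man = cache.obtain();
    defer man.deinit();

    // The abilists file shipped with the installation is now a file
    // input, so hand-editing it invalidates the cached glibc stubs.
    _ = try man.addFile(abilists_path, null);

    if (try man.hit()) {
        // Cache hit: previously built stubs can be reused.
        return;
    }

    // Cache miss: build the stubs here (omitted), then record the
    // manifest so later invocations obey the normal hit/miss semantics.
    try man.writeManifest();
}
```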
closes #13160
Comptime code can't execute assembly code, so we need some way to
force comptime code to use the generic path. This should be replaced
with whatever is implemented for #868, when that day comes.
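As a hedged sketch of that dispatch (not the real implementation):
`@inComptime()` is used purely to illustrate the kind of builtin #868
asks for, and `hashGeneric` / `hashAccelerated` are hypothetical
stand-ins.

```zig
const std = @import("std");

fn hashGeneric(bytes: []const u8) u64 {
    // Portable FNV-1a fallback; fine to run in comptime code.
    var h: u64 = 0xcbf29ce484222325;
    for (bytes) |b| {
        h ^= b;
        h *%= 0x100000001b3;
    }
    return h;
}

fn hashAccelerated(bytes: []const u8) u64 {
    // Stand-in for the inline-assembly fast path, which comptime code
    // cannot execute.
    return hashGeneric(bytes);
}

fn hash(bytes: []const u8) u64 {
    // Force the generic path whenever we are being comptime-evaluated.
    if (@inComptime()) return hashGeneric(bytes);
    return hashAccelerated(bytes);
}

test "comptime and runtime results agree" {
    const ct = comptime hash("abc");
    try std.testing.expectEqual(ct, hash("abc"));
}
```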
I am seeing that the hash result is incorrect in stage1 and crashes
stage2, so presumably this never worked correctly. I will follow up
on that soon.
This gets us most of the way back to the performance I had when
I was using the LLVM intrinsics:
- Intel(R) Core(TM) i7-1068NG7 CPU @ 2.30GHz:
190.67 MB/s (w/o intrinsics) -> 1285.08 MB/s
- AMD EPYC 7763 (VM) @ 2.45 GHz:
240.09 MB/s (w/o intrinsics) -> 1360.78 MB/s
- Apple M1:
216.96 MB/s (w/o intrinsics) -> 2133.69 MB/s
Minor changes to this source can swing performance from 400 MB/s to
1400 MB/s or... 20 MB/s, depending on how it interacts with the
optimizer. I have a sneaking suspicion that despite LLVM inheriting
GCC's extremely strict inline assembly semantics, its passes are
rather skittish around inline assembly (and almost certainly, its
instruction cost models can assume nothing).