The two CacheMode values are `whole` and `incremental`.
`incremental` is what we had before; `whole` is new.
Whole cache mode uses every input as part of the cache hash, and when a
hit occurs it skips everything, including linking.
This is ideal when source files change rarely and for backends that
lack good incremental compilation support, for example
compiler-rt or libc compiled with LLVM with optimizations on.
That is the main motivation for the additional mode: LLVM-optimized
compiler-rt/libc builds without waiting for the LLVM backend every
single time Zig is invoked.
Incremental cache mode hashes only the input file path and a few target
options, intentionally relying on collisions to locate already-existing
build artifacts which can then be incrementally updated.
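Roughly, the distinction looks like the following minimal sketch; the types and names here are illustrative, not the compiler's actual internals:
```
const std = @import("std");

pub const CacheMode = enum { incremental, whole };

// Hypothetical stand-in for the compilation's inputs, just enough to show
// what each mode feeds the hasher.
const Inputs = struct {
    root_src_path: []const u8,
    target_triple: []const u8,
    all_input_files: []const []const u8,
};

fn addToCacheHash(mode: CacheMode, in: Inputs, hasher: *std.hash.Wyhash) void {
    switch (mode) {
        .incremental => {
            // Just enough to land in a stable artifact directory, which is
            // then updated in place on subsequent invocations.
            hasher.update(in.root_src_path);
            hasher.update(in.target_triple);
        },
        .whole => {
            // Every input participates; a hit means compilation *and*
            // linking can be skipped and the finished binary reused.
            hasher.update(in.root_src_path);
            hasher.update(in.target_triple);
            for (in.all_input_files) |path| hasher.update(path);
        },
    }
}
```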
The bespoke logic for caching stage1 backend build artifacts
is removed since we now have a global caching mechanism for
when we want to cache the entire compilation, *including* linking.
Previously we had to get "creative" with libs.txt and a special
byte in the hash id to communicate flags, so that when the cached
artifacts were re-linked, we had this information from stage1
even though we didn't actually run it. Now that `CacheMode.whole`
includes linking, this extra information does not need to be
preserved for cache hits. So although this changeset introduces
complexity, it also removes complexity.
The main trickiness here comes from the inherent differences between the
two modes: `incremental` wants a directory immediately to operate on,
while `whole` doesn't know the output directory until the compilation is
complete. This commit deals with this problem mostly inside `update()`,
where, on a cache miss, it replaces `zig_cache_artifact_directory` with a
temporary directory, and then renames it into place once the compilation is
complete.
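A minimal sketch of that rename-into-place step, assuming illustrative directory names and a hypothetical helper (the real logic lives in `update()` and the cache code):
```
const std = @import("std");

fn finishWholeCacheMiss(
    cache_dir: std.fs.Dir,
    tmp_sub_path: []const u8, // e.g. "tmp/<random>", created before compiling
    digest: []const u8, // final cache digest, known only after all inputs were seen
) !void {
    // The compilation and the linker already wrote every artifact into the
    // temporary directory, because the digest-named output directory could
    // not be computed up front.
    var buf: [128]u8 = undefined;
    const final_sub_path = try std.fmt.bufPrint(&buf, "o/{s}", .{digest});

    // Move the finished directory into place so the next invocation gets a
    // complete cache hit, binary included.
    try cache_dir.rename(tmp_sub_path, final_sub_path);
}
```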
Items remaining before this branch can be merged:
* [ ] make sure these things make it into the cache manifest:
- @import files
- @embedFile files
- we already add dep files from C, but make sure the main .c files make
it in there too, not just the included files
* [ ] double check that the emit paths of other things besides the binary
are working correctly.
* [ ] test `-fno-emit-bin` + `-fstage1`
* [ ] test `-femit-bin=foo` + `-fstage1`
* [ ] the implib emit directory copies the bin_file_emit directory in create() and
needs to be adjusted so that it can be overridden as well.
* [ ] make sure emit-h is handled correctly in the cache hash
* [ ] Cache: detect duplicate files added to the manifest
Some preliminary performance measurements of wall clock time and
peak RSS used:
stage1 behavior (1077 tests), llvm backend, release build:
* cold global cache: 4.6s, 1.1 GiB
* warm global cache: 3.4s, 980 MiB
stage2 master branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 191 MiB
* warm global cache: 0.40s, 128 MiB
stage2 this branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 179 MiB
* warm global cache: 0.27s, 90 MiB
This saves on comptime format string parsing, as the compiler caches
comptime calls. The catch here is that parsePlaceHolder cannot take the
placeholder string as a slice; it must take it as an array by value for
the caching to occur.
There is also some logic in here that ensures that the specifier_arg is
always the same slice when the items it contains are the same. This
makes the compiler stamp out fewer copies of formatType.
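A minimal sketch of the idea (simplified, not the actual std.fmt internals): passing the placeholder text as an array by value makes identical call sites produce identical comptime calls, which the compiler then evaluates only once.
```
const std = @import("std");

// Stand-in for the real parsing work; the important part is the parameter:
// `str` is a fixed-size array taken *by value* (via anytype), not a slice.
// Two calls with identical array contents are the same comptime call, so
// the result is memoized. A slice argument carries a call-site-specific
// pointer and would defeat that cache.
fn parsePlaceholder(comptime str: anytype) usize {
    return str.len;
}

test "identical array arguments share one comptime evaluation" {
    // `"d:>10".*` dereferences the string literal into an array value.
    const a = comptime parsePlaceholder("d:>10".*);
    const b = comptime parsePlaceholder("d:>10".*);
    try std.testing.expectEqual(a, b);
}
```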
For renameat, unlinkat, mkdirat, symlinkat, and linkat, the error code
differs between kernel 5.4, which returns EBADF, and kernel 5.10, which
returns EINVAL.
Fixes #10466
Previously, the `load` instruction would just pass the pointer on to the next instruction
for types that satisfy `isByRef`. However, this meant that a defer would write directly
to the referenced memory rather than to a copy. After this commit, we always copy the value.
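At the language level the difference is observable with a defer that mutates the original after the value has been loaded; a small illustration (not taken from the test suite):
```
const std = @import("std");

// An aggregate the backend may pass by reference (`isByRef`); if `load`
// merely forwarded the pointer, the defer's mutation below would leak into
// the value that was already loaded out of the block.
const S = struct { a: u32, b: u32, c: u32 };

test "loading a by-ref value produces a copy" {
    var s = S{ .a = 1, .b = 2, .c = 3 };
    const copy = blk: {
        defer s.a = 42; // runs after the break operand is evaluated
        break :blk s; // the load here must copy, not alias
    };
    try std.testing.expectEqual(@as(u32, 1), copy.a);
    try std.testing.expectEqual(@as(u32, 42), s.a);
}
```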
- This implements all pointer-arithmetic-related instructions such as ptr_add, ptr_sub, ptr_elem_val
- We refactored the code to use `isByRef` to ensure consistency.
- Pointers will now be loaded correctly, rather than being passed around.
- The behaviour test for pointers is now passing.
- Previously the table index and function type index were swapped.
This commit puts them in the correct order.
- This also emits the correct indirect function call count when importing the function table
- Add method to easily create local for virtual stack
- Ensure function pointers are passed correctly
- Correctly handle slices as return types and values
- Fix wrapping error sets/payloads.
- Handle ptr-like optionals correctly by using address '0' as null (see the illustration after this list).
- Implement `array_to_slice`
- linker: Always emit a table, so call_indirect inside bodies does not fail if there's no table.
TODO: Only do this when we emit a call_indirect but the relocation cannot be resolved.
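Regarding the pointer-like optionals item above: the address-0-as-null choice mirrors how the language itself lays out such optionals; for illustration:
```
const std = @import("std");

// A pointer-like optional carries no separate tag: the null state is the
// zero address, so `?*u32` is exactly pointer-sized. The wasm backend uses
// the same trick, keeping the payload pointer in a single local where the
// value 0 means null.
test "pointer-like optionals are pointer-sized" {
    try std.testing.expectEqual(@sizeOf(*u32), @sizeOf(?*u32));
    try std.testing.expect(@as(?*u32, null) == null);
}
```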
* load the address (a pointer) of a stack variable into a register via the
`lea` instruction
* store a value on the stack through a pointer held in a register via the
`mov [reg], imm` instruction (see the sketch below)
* these lowerings are handled automatically by the Mir -> Isel
layer
* add initial (without safety) implementation of `.optional_payload`
* add matching stage2 test cases
Effectively a small continuation of #10152
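For the two lowerings above, Zig source along these lines exercises both paths; the comments state the intended instruction selection (an assumption about the emitted code, not verified output):
```
const std = @import("std");

test "store an immediate through a pointer to a stack slot" {
    var x: u64 = 0;
    const ptr = &x; // address of the stack variable: `lea reg, [stack slot]`
    ptr.* = 42; // store through the register: `mov qword ptr [reg], 42`
    try std.testing.expectEqual(@as(u64, 42), x);
}
```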
This allows the for.zig behavior tests to pass. Unfortunately, to fully test everything I had to move a lot of behavior tests from array.zig; most of them now pass (sorry @rainbowbismuth!)
I'm also conflicted on how I store constants into arrays because it's kind of stupid; arrays can't be re-initialized using the same syntax, so instead of initializing each element, a new array is made which is copied into the destination. This also means renderValue can't emit string literals for byte arrays, given that they always need an extra byte for the NULL terminator, which means strings are no longer grep-able in the output.
* fix handling of the `ah`, `bh`, `ch`, and `dh` registers (which are
actually used as aliases of the `dil`, etc. registers). Currently, we
treat them as aliases only, meaning that when we encounter `ah` we make
sure to set the REX.W to promote the instruction to 64 bits and use the
`dil` register instead; otherwise we might have a mismatch between the
registers used in different parts of the codegen. In the future,
we can and should use `ah`, etc. as the upper 8-bit halves of the
16-bit registers `ax`, etc.
* fix a bug in `airCmp` where the `.cmp` MIR instruction shouldn't force
the type to `Bool` but should instead let the original operand type
propagate downwards; we need this to make an informed choice of the
target register size and hence choose the right encoding down the line.
* implement lowering of 1-byte and 2-byte values to stack and add
matching stage2 tests for x86_64 codegen
To request memory-immediate encoding on the MIR side, we should now
use a new tag such as `mov_mem_imm`, where the size of the memory
pointer is encoded in the flags:
```
0b00 => .byte_ptr,
0b01 => .word_ptr,
0b10 => .dword_ptr,
0b11 => .qword_ptr,
```
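In code, the flag decode might look like this (a hypothetical sketch; the backend's actual MIR declarations differ):
```
const MemPtrSize = enum(u2) {
    byte_ptr,
    word_ptr,
    dword_ptr,
    qword_ptr,
};

// Decode the 2-bit `flags` field of a `mov_mem_imm` instruction into the
// pointer width used for the memory-immediate encoding.
fn decodePtrSize(flags: u2) MemPtrSize {
    return switch (flags) {
        0b00 => .byte_ptr,
        0b01 => .word_ptr,
        0b10 => .dword_ptr,
        0b11 => .qword_ptr,
    };
}
```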
* `Module.Union.getLayout`: fixes to support components of the union
being 0 bits.
* Implement `@typeInfo` for unions (see the example after this list).
* Add missing calls to `resolveTypeFields`.
* Fix explicitly-provided union tag types passing a `Zir.Inst.Ref`
where an `Air.Inst.Ref` was expected. We don't have any type safety
for this; these types are aliases.
* Fix explicitly-provided `union(enum)` tag Values allocated to the
wrong arena.
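Regarding the `@typeInfo` item above, a user-level example of what now works (illustrative, not one of the added test cases):
```
const std = @import("std");

const U = union(enum) {
    a: u32,
    b: void,
};

test "@typeInfo on a tagged union" {
    const info = @typeInfo(U).Union;
    try std.testing.expectEqual(@as(usize, 2), info.fields.len);
    try std.testing.expectEqualStrings("a", info.fields[0].name);
}
```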
* reduce number of branches in zirCmpEq
* implement equality comparison for enums and unions (see the example after this list)
* fix coercion from union to its tag type resulting in the wrong type
* fix method calls of unions
* implement peer type resolution for unions, enums, and enum literals
* fix union tag type memory in the wrong arena
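For the equality-comparison and coercion items above, an illustrative example of what now works in stage2:
```
const std = @import("std");

const U = union(enum) {
    a: u32,
    b: void,
};

test "compare a tagged union against an enum literal" {
    const u = U{ .a = 1 };
    try std.testing.expect(u == .a);
    try std.testing.expect(u != .b);

    // coercion from a tagged union to its tag type
    const tag: std.meta.Tag(U) = u;
    try std.testing.expect(tag == .a);
}
```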