This improves readability as well as compatibility with stage2. Most of
compiler-rt is now enabled for stage2 with just a few functions disabled
(until stage2 passes more behavior tests).
Instead of juggling a GPA-allocated `sub_path` (and ultimately dropping the
ball, in this analogy), `Compilation.create` allocates a `sub_path` of
exactly the correct size with the digest left unpopulated. It is then
overwritten in place as necessary and used as the `emit_bin.sub_path` value,
so no allocations or frees are performed for this file path.
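Roughly, the idea (the names and digest width here are illustrative, not the
exact Compilation code):
```
const std = @import("std");

// Assumed hex digest width for the sketch; the real value comes from Cache.
const hex_digest_len = 32;

// Allocate "o/<digest>/<basename>" exactly once, leaving placeholder bytes
// where the digest will go.
fn makeSubPath(gpa: std.mem.Allocator, basename: []const u8) ![]u8 {
    return std.fmt.allocPrint(gpa, "o" ++ std.fs.path.sep_str ++ "{s}" ++
        std.fs.path.sep_str ++ "{s}", .{ "0" ** hex_digest_len, basename });
}

// Later, overwrite the placeholder in place; no further allocation needed.
fn setDigest(sub_path: []u8, digest: [hex_digest_len]u8) void {
    @memcpy(sub_path[2..][0..hex_digest_len], &digest);
}
```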
Previously the code asserted that source files were already loaded, but this
is not the case when cached ZIR is loaded. Now it triggers .zig source code
to be loaded so that the source can be hashed for `CacheMode.whole`.
This additionally refactors the stat_size, stat_inode, and stat_mtime fields
into the `Cache.File.Stat` struct.
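For reference, a sketch of the shape those fields collapse into (field names
follow the old stat_* fields; the real definition lives in the Cache code):
```
const std = @import("std");

pub const Stat = struct {
    inode: std.fs.File.INode,
    size: u64,
    mtime: i128, // nanoseconds
};
```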
This fixes a regression in this branch that can be reproduced with the
following steps:
1. `zig build-exe hello.zig`
2. delete the "hello" binary
3. `zig build-exe hello.zig`
4. observe that the "hello" binary is missing
This happened because it was a cache hit, but nothing got copied to the
output directory.
This commit sets CacheMode to incremental - even for stage1 - when the
CLI requests `disable_lld_caching` (this option should be renamed),
causing the main Compilation to be repeated (uncached) for stage1 and
the binary to be written into the cwd as expected.
For stage2 the result is even better: the incremental compilation system
will look for build artifacts to incrementally compile, and start fresh
if not found.
when using `CacheMode.whole`. Also, I verified that `addDepFilePost` is
in fact including the original C source file in addition to the files it
depends on.
* Checking whether a bin file is emitted is more complicated between
`Compilation.create` and `Compilation.update`. Fixed the logic that
decides whether to build compiler-rt and other support artifacts.
* Basically, one cannot inspect the value of `comp.bin_file.emit` until
after update() is called - fixed another instance of this happening
in the CLI.
* In the CLI, `runOrTest` is updated to properly use the result value
of `comp.bin_file.options.emit` rather than guessing where the
output binary is.
* Don't assume that the emit output has no directory components in
sub_path. In other words, don't assume that the emit directory is the
final directory; there may be sub-directories.
The two CacheMode values are `whole` and `incremental`.
`incremental` is what we had before; `whole` is new.
Whole cache mode uses everything as input to the cache hash, and when a
hit occurs it skips everything, including linking.
This is ideal for when source files change rarely and for backends that
do not have good incremental compilation support, for example
compiler-rt or libc compiled with LLVM with optimizations on.
This is the main motivation for the additional mode, so that we can have
LLVM-optimized compiler-rt/libc builds, without waiting for the LLVM
backend every single time Zig is invoked.
Incremental cache mode hashes only the input file path and a few target
options, intentionally relying on collisions to locate already-existing
build artifacts which can then be incrementally updated.
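Roughly, the shape of the new option (the doc comments paraphrase the
behavior described above):
```
pub const CacheMode = enum {
    /// Hash only the root source file path and a few target options,
    /// intentionally colliding with prior runs so their artifacts can be
    /// found and incrementally updated.
    incremental,
    /// Hash every input; on a hit, skip the entire compilation, including
    /// linking, and reuse the output directory as-is.
    whole,
};
```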
The bespoke logic for caching stage1 backend build artifacts
is removed since we now have a global caching mechanism for
when we want to cache the entire compilation, *including* linking.
Previously we had to get "creative" with libs.txt and a special
byte in the hash id to communicate flags, so that when the cached
artifacts were re-linked, we had this information from stage1
even though we didn't actually run it. Now that `CacheMode.whole`
includes linking, this extra information does not need to be
preserved for cache hits. So although this changeset introduces
complexity, it also removes complexity.
The main trickiness here comes from the inherent differences between the
two modes: `incremental` wants a directory immediately to operate on,
while `whole` doesn't know the output directory until the compilation is
complete. This commit deals with this problem mostly inside `update()`,
where, on a cache miss, it replaces `zig_cache_artifact_directory` with a
temporary directory, and then renames it into place once the compilation is
complete.
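A sketch of that rename-into-place step (directory names are illustrative):
```
const std = @import("std");

// Build into tmp/<random>, then atomically move the whole artifact
// directory to its content-addressed home once compilation succeeds.
fn renameIntoPlace(cache_dir: std.fs.Dir, tmp_basename: []const u8, hex_digest: []const u8) !void {
    var buf_a: [std.fs.max_path_bytes]u8 = undefined;
    var buf_b: [std.fs.max_path_bytes]u8 = undefined;
    const tmp_sub_path = try std.fmt.bufPrint(&buf_a, "tmp" ++ std.fs.path.sep_str ++ "{s}", .{tmp_basename});
    const o_sub_path = try std.fmt.bufPrint(&buf_b, "o" ++ std.fs.path.sep_str ++ "{s}", .{hex_digest});
    try std.fs.rename(cache_dir, tmp_sub_path, cache_dir, o_sub_path);
}
```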
Items remaining before this branch can be merged:
* [ ] make sure these things make it into the cache manifest:
- @import files
- @embedFile files
- we already add dep files from c but make sure the main .c files make
it in there too, not just the included files
* [ ] double check that the emit paths of other things besides the binary
are working correctly.
* [ ] test `-fno-emit-bin` + `-fstage1`
* [ ] test `-femit-bin=foo` + `-fstage1`
* [ ] the implib emit directory copies the bin_file_emit directory in create()
and needs to be adjusted so it can be overridden as well.
* [ ] make sure emit-h is handled correctly in the cache hash
* [ ] Cache: detect duplicate files added to the manifest
Some preliminary performance measurements of wall clock time and
peak RSS used:
stage1 behavior (1077 tests), llvm backend, release build:
* cold global cache: 4.6s, 1.1 GiB
* warm global cache: 3.4s, 980 MiB
stage2 master branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 191 MiB
* warm global cache: 0.40s, 128 MiB
stage2 this branch behavior (575 tests), llvm backend, release build:
* cold global cache: 0.62s, 179 MiB
* warm global cache: 0.27s, 90 MiB
This saves on comptime format string parsing, as the compiler caches
comptime calls. The catch is that parsePlaceHolder cannot take the
placeholder string as a slice. It must take it as an array by value for
the caching to occur.
There is also some logic in here that ensures that the specifier_arg is
always the same slice when the items it contains are the same. This
makes the compiler stamp out fewer copies of formatType.
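To illustrate the memoization property being relied on here (this is not the
std.fmt source): comptime calls are cached by argument value, and an array
passed by value is such a value, whereas a slice carries a pointer and
defeats the cache.
```
const std = @import("std");

fn parseByValue(comptime placeholder: anytype) usize {
    // stand-in for the real placeholder parsing work
    return placeholder.len;
}

test "identical array arguments hit the comptime call cache" {
    // `.*` copies the string literal's array by value, so both calls have
    // the same comptime value and the second one reuses the cached result.
    const a = comptime parseByValue("{d}".*);
    const b = comptime parseByValue("{d}".*);
    try std.testing.expect(a == b);
}
```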
Previously, the `load` instruction would just pass the pointer on to the next
instruction for types that satisfy `isByRef`. However, this meant that a defer
would write directly to the reference rather than to a copy. After this commit,
we always copy the value.
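A behavior-level illustration of the bug, assuming `[2]u32` is an `isByRef`
type in this backend:
```
const std = @import("std");

test "loading an aggregate produces a copy, not an alias" {
    var x: [2]u32 = .{ 1, 2 }; // passed around by reference in the backend
    const y = x;               // `load` must copy the value here...
    x[0] = 42;                 // ...so this write must not show through `y`
    try std.testing.expectEqual(@as(u32, 1), y[0]);
}
```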
- This implements all pointer-arithmetic instructions such as ptr_add, ptr_sub, and ptr_elem_val.
- We refactored the code to use `isByRef` to ensure consistency.
- Pointers are now loaded correctly, rather than being passed around.
- The behaviour test for pointers is now passing.
- Previously the table index and function type index were switched.
This commit swaps them.
- This also emits the correct indirect function calls count when importing the function table
- Add method to easily create local for virtual stack
- Ensure function pointers are passed correctly
- Correctly handle slices as return types and values
- Fix wrapping error sets/payloads.
- Handle ptr-like optionals correctly, by using address '0' as null (see the sketch after this list).
- Implement `array_to_slice`
- linker: Always emit a table, so call_indirect inside bodies do not fail if there's no table.
TODO: Only do this when we emit a call_indirect but the relocation cannot be resolved.
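For the ptr-like optional item above, a minimal sketch of the encoding
choice: `?*T` has no separate flag bit, so address '0' is the null
representation and a null check lowers to a compare against 0.
```
fn isNull(ptr: ?*u8) bool {
    return ptr == null; // lowers to comparing the address against 0
}
```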
* load the address (pointer) of a stack variable into a register via the
`lea` instruction
* store a value on the stack through a pointer held in a register via a
`mov [reg], imm` instruction (see the example after this list)
* the lowerings are handled automatically by the Mir -> Isel layer
* add initial (without safety) implementation of `.optional_payload`
* add matching stage2 test cases
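The kind of Zig code these lowerings target (the assembly in the comments
shows the intended selection, informally):
```
const std = @import("std");

test "store an immediate through a pointer to a stack slot" {
    var x: u32 = 1;
    const p = &x; // lea reg, [rbp - offset]  ; address of the stack variable
    p.* = 42;     // mov dword ptr [reg], 42  ; store immediate via register
    try std.testing.expectEqual(@as(u32, 42), x);
}
```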
Effectively a small continuation of #10152
This allows the for.zig behavior tests to pass. Unfortunately, to fully test
everything I had to move a lot of behavior tests from array.zig; most of them
now pass (sorry @rainbowbismuth!).
I'm also conflicted about how I store constants into arrays, because it's
kind of stupid: arrays can't be re-initialized using the same syntax, so
instead of initializing each element, a new array is made and copied into the
destination. This also required that renderValue can't emit string literals
for byte arrays, given that they always need an extra byte for the NULL
terminator, meaning that strings are no longer grep-able in the output.
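For illustration, a whole-array re-assignment like the following is what
forces the copy dance described above (the emitted C in the comment is
paraphrased, not verbatim):
```
const std = @import("std");

test "whole-array assignment in the C backend" {
    var buf: [4]u8 = .{ 0, 0, 0, 0 };
    // C cannot re-brace-initialize an existing array, so the backend emits
    // a fresh temporary array and copies it into `buf`.
    buf = .{ 1, 2, 3, 4 };
    try std.testing.expectEqual(@as(u8, 4), buf[3]);
}
```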
* fix handling of `ah`, `bh`, `ch`, and `dh` registers (which are
actually used as aliases of the `dil`, etc. registers). Currently, we
treat them as aliases only, meaning when we encounter `ah` we make
sure to set REX.W to promote the instruction to 64 bits and use the
`dil` register instead - otherwise we might have a mismatch between
registers used in different parts of the codegen. In the future,
we can and should use `ah`, etc. as the upper 8-bit halves of the
16-bit registers `ax`, etc.
* fix bug in `airCmp` where the `.cmp` MIR instruction shouldn't force
the type to `Bool` but should let the original operand type propagate
downwards - we need this to make an informed choice of the target
register size and hence choose the right encoding down the line.
* implement lowering of 1-byte and 2-byte values to stack and add
matching stage2 tests for x86_64 codegen
To request memory-immediate encoding on the MIR side, we should now
use a new tag such as `mov_mem_imm`, where the size of the memory
pointer is encoded in the flags:
```
0b00 => .byte_ptr,
0b01 => .word_ptr,
0b10 => .dword_ptr,
0b11 => .qword_ptr,
```
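To make the flags concrete, a mock of the encoding (the real Mir layout
differs; these names are illustrative):
```
const PtrSize = enum(u2) { byte_ptr, word_ptr, dword_ptr, qword_ptr };

const Inst = struct {
    tag: enum { mov_mem_imm },
    flags: u2, // interpreted per the table above
};

// A 4-byte store of an immediate through a memory pointer:
const store_u32 = Inst{
    .tag = .mov_mem_imm,
    .flags = @intFromEnum(PtrSize.dword_ptr), // 0b10
};
```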
* `Module.Union.getLayout`: fixes to support components of the union
being 0 bits.
* Implement `@typeInfo` for unions.
* Add missing calls to `resolveTypeFields`.
* Fix explicitly-provided union tag types passing a `Zir.Inst.Ref`
where an `Air.Inst.Ref` was expected. We don't have any type safety
for this; these types are aliases.
* Fix explicitly-provided `union(enum)` tag Values allocated to the
wrong arena.
* reduce number of branches in zirCmpEq
* implement equality comparison for enums and unions
* fix coercion from union to its tag type resulting in the wrong type
(see the test sketch after this list)
* fix method calls of unions
* implement peer type resolution for unions, enums, and enum literals
* fix union tag type memory in the wrong arena
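A test sketch for the tag-type coercion fix mentioned above:
```
const std = @import("std");

test "coerce a tagged union to its tag type" {
    const U = union(enum) { a: u32, b: void };
    const u_val: U = .{ .a = 1 };
    const t: std.meta.Tag(U) = u_val; // exercised by the coercion fix
    try std.testing.expect(t == .a);
}
```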
Comment from this commit reproduced here:
LLVM does not allow us to change the type of globals. So we must
create a new global with the correct type, copy all its attributes,
and then update all references to point to the new global,
delete the original, and rename the new one to the old one's name.
This is necessary because LLVM does not support const bitcasting a struct
with padding bytes, which is needed to lower a const union value to LLVM
when a field other than the most-aligned one is active. Instead, we must
lower to an unnamed struct and pointer-cast at usage sites of the global.
Such an unnamed struct is the cause of the global type mismatch, because we
don't have the LLVM type until the *value* is created, whereas the global
needs to be created based on the type alone, because lowering the value may
reference the global as a pointer.