Also add a standalone test which covers the `-fentry` case. It does this
by performing two reproducible compilations which are identical other
than having different entry points, and checking whether the emitted
binaries are identical (they should *not* be).
Resolves: #23869
There will be more call sites to `preparePanicId` as we transition away
from safety checks in Sema towards safety checked instructions; it's
silly for them to all have this clunky usage.
This safety check was completely broken; it triggered unchecked illegal
behavior *in order to implement the safety check*. You definitely can't
do that! Instead, we must explicitly check the boundaries. This is a
tiny bit fiddly, because we need to make sure we do floating-point
rounding in the correct direction, and also handle the fact that the
operation truncates, so the boundary works differently for min vs max.
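As a concrete example (a sketch of the semantics, not the compiler's
generated check): casting to `i8` truncates toward zero, so any input
strictly inside (-129.0, 128.0) is in range, and both boundaries happen
to be exactly representable in `f64`, so no rounding subtlety arises in
this particular case.

```zig
const std = @import("std");

// Boundary semantics for `@intFromFloat` to `i8`: the operation truncates
// toward zero, so -128.9 truncates to -128 (in range) while -129.0 would
// yield -129 (out of range). NaN compares false on both sides of the
// `and`, so it is correctly rejected too.
fn fitsInI8(x: f64) bool {
    return x > -129.0 and x < 128.0;
}

test fitsInI8 {
    try std.testing.expect(fitsInI8(-128.9));
    try std.testing.expect(fitsInI8(127.9));
    try std.testing.expect(!fitsInI8(-129.0));
    try std.testing.expect(!fitsInI8(128.0));
    try std.testing.expect(!fitsInI8(std.math.nan(f64)));
}
```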
Instead of implementing this safety check in Sema, there are now
dedicated AIR instructions for safety-checked `@intFromFloat` (two
instructions; which one is used depends on the float mode). Currently,
no backend directly implements them; instead, a `Legalize.Feature` is
added which expands the safety check, and this feature is enabled for
all backends we currently test, including the LLVM backend.
The `u0` case is still handled in Sema, because Sema needs to check for
that anyway due to the comptime-known result. The old safety check here
was also completely broken and has therefore been rewritten. In that
case, we just check for `@abs(input) < 1.0`.
I've added a bunch of test coverage for the boundary cases of
`@intFromFloat`, both for successes (in `test/behavior/cast.zig`) and
failures (in `test/cases/safety/`).
Resolves: #24161
These conversion routines accept a `round` argument to control how the
result is rounded and return whether the result is exact. Most callers
wanted this functionality and had hacks around it being missing.
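As a sketch of the new shape (illustrative only; these names are made
up, not the actual `std.math.big.int` API):

```zig
const std = @import("std");

// Illustrative only: the point is the shape of the API, where the caller
// picks a rounding direction and learns whether the result was exact.
const Round = enum { trunc, floor, ceil };

fn floatToInt(x: f64, round: Round) struct { value: i64, exact: bool } {
    const rounded = switch (round) {
        .trunc => @trunc(x),
        .floor => @floor(x),
        .ceil => @ceil(x),
    };
    const value: i64 = @intFromFloat(rounded);
    return .{ .value = value, .exact = rounded == x };
}

test floatToInt {
    const r = floatToInt(2.5, .floor);
    try std.testing.expectEqual(@as(i64, 2), r.value);
    try std.testing.expect(!r.exact);
}
```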
Also delete `std.math.big.rational` because it was only being used for
float conversion, and using rationals for that is a lot more complex
than necessary. It also required an allocator, whereas the new integer
routines only need to be passed enough memory to store the result.
Also rename `advancedPrint` to `bufferedPrint` in the `zig init` templates.
These are leftovers from my previous changes to `zig init`.
The new templating system removes LITNAME: the new restrictions on package names make it redundant with NAME, and the use of underscores for marking templated identifiers lets us template variable names while still keeping `zig fmt` happy.
Did you know that allocators reuse addresses? If not, then don't feel
bad, because apparently I don't either! This dumb mistake was probably
responsible for the CI failures on `master` yesterday.
I messed up the atomic orderings on this variable; they got changed in a
local refactor at some point. We need to always release on the store and
acquire on the loads so that a linker thread observing `.ready` sees the
stored MIR.
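A minimal model of the required pairing (types and names assumed for
illustration, not the actual compiler code):

```zig
const std = @import("std");

const Mir = struct { bytes: []const u8 }; // stand-in payload
const Status = enum(u8) { pending, ready };

var status = std.atomic.Value(Status).init(.pending);
var mir: Mir = undefined;

// Codegen thread: write the payload first, then release-store the flag.
fn publish(m: Mir) void {
    mir = m;
    status.store(.ready, .release);
}

// Linker thread: the acquire load pairs with the release store above, so
// observing `.ready` guarantees the preceding MIR write is also visible.
fn poll() ?Mir {
    if (status.load(.acquire) != .ready) return null;
    return mir;
}
```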
* Sema: allow binary operations and boolean not on vectors of bool
* langref: Clarify use of operators on vectors (`and` and `or` not allowed)
Closes: #24093
Looking at a compilation of `test/behavior/x86_64/unary.zig` in
callgrind showed that a full 30% of the compiler runtime was spent in
this `stringToEnum` call, so optimizing it was low-hanging fruit.
We tried replacing it with nested `switch` statements using
`inline else`, but that generated too much code; it didn't emit huge
binaries or anything, but LLVM used a *ridiculous* amount of memory
compiling it in some cases. The core problem here is that only a small
subset of the cases is actually used (the rest fall through to an
"error" path), but that subset is computed at comptime, so we must rely
on the optimizer to eliminate the thousands of redundant cases. This
would be solved by #21507.
Instead, we pre-compute a lookup table at comptime. This table is pretty
big (a couple hundred KiB, I'd guess), but only the "valid" subset of
entries will be accessed in practice (unless a bug in the backend is
hit), so it's not too awful on the cache; and it performs much better
than the old `std.meta.stringToEnum` call.
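A minimal sketch of the pattern (not the compiler's actual table; the
hash choice and sizing here are made up):

```zig
const std = @import("std");

// Build a fixed-size hash-to-enum table at comptime; lookups verify the
// hit with one full string comparison, so a miss (or a backend bug
// producing a bogus name) safely returns null. If `init` runs at
// comptime, a collision becomes a compile error rather than a runtime
// panic.
fn LookupTable(comptime E: type, comptime size: usize) type {
    return struct {
        entries: [size]?E,

        fn init() @This() {
            @setEvalBranchQuota(1_000_000);
            var entries = [1]?E{null} ** size;
            inline for (@typeInfo(E).@"enum".fields) |f| {
                const slot = std.hash.Fnv1a_32.hash(f.name) % size;
                if (entries[slot] != null) @panic("collision: grow the table");
                entries[slot] = @field(E, f.name);
            }
            return .{ .entries = entries };
        }

        fn get(table: *const @This(), name: []const u8) ?E {
            const e = table.entries[std.hash.Fnv1a_32.hash(name) % size] orelse
                return null;
            return if (std.mem.eql(u8, @tagName(e), name)) e else null;
        }
    };
}
```

Usage would be something like `const table = comptime LookupTable(SomeEnum, 4096).init();` (with `SomeEnum` standing in for the real tag type), after which `table.get(name)` on the hot path is a hash, an index, and one string compare.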
Update the estimated total items for the codegen and link progress nodes
earlier. Rather than waiting for the main thread to dispatch the tasks,
we can add the item to the estimated total as soon as we queue the main
task. The only difference is that we now need to complete the item even
in error cases.
Without this cap, unlucky scheduling and/or details of what pipeline
stages perform best on the host machine could cause many gigabytes of
MIR to be stuck in the queue. At a certain point, pause the main thread
until some of the functions in flight have been processed.
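The mechanism can be modeled with a counter and a condition variable (a
sketch; the cap value and names are made up):

```zig
const std = @import("std");

var mutex: std.Thread.Mutex = .{};
var cond: std.Thread.Condition = .{};
var in_flight: usize = 0;
const max_in_flight = 64; // made-up cap

// Main thread: block before queueing more codegen work once too much
// unconsumed MIR is pending.
fn beginTask() void {
    mutex.lock();
    defer mutex.unlock();
    while (in_flight >= max_in_flight) cond.wait(&mutex);
    in_flight += 1;
}

// Linker thread: after consuming one function's MIR, wake the main thread.
fn endTask() void {
    mutex.lock();
    defer mutex.unlock();
    in_flight -= 1;
    cond.signal();
}
```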
This isn't really coherent to model as a `Feature`; it only makes sense
because of zig1's specific environment. As such, I opted to check
`dev.env` directly.
Previously, `PerThread.populateTestFunctions` was analyzing the
`test_functions` declaration if it hadn't already been analyzed, so that
it could then populate it. However, the logic for doing this wasn't
actually correct, because it didn't trigger the necessary type
resolution. I could have tried to fix this, but there's actually a
simpler solution! If the `test_functions` declaration isn't referenced
or has a compile error, then we simply don't need to update it; either
it's unreferenced so its value doesn't matter, or we're going to get a
compile error anyway. Either way, we can just give up early. This avoids
doing semantic analysis after `performAllTheWork` finishes.
Also, get rid of the "Code Generation" progress node while updating the
test decl: this is a linking task.
glibc, freebsd, and netbsd all do caching manually because they emit
multiple files which they want to cache as a block. Therefore, the
individual sub-compilations on a cache miss should use `CacheMode.none`,
so that we can specify the output path for each sub-compilation as being
in the shared output directory.
* "Flush" nodes ("LLVM Emit Object", "ELF Flush") appear under "Linking"
* "Code Generation" disappears when all analysis and codegen is done
* We only show one node under "Semantic Analysis" to accurately convey
that analysis isn't happening in parallel, but rather that we're
pausing one task to do another
Previously, various doc comments heavily disagreed with the
implementation on both what lives where on the filesystem at what time,
and how that was represented in code. Notably, the combination of emit
paths outside the cache and `disable_lld_caching` created a kind of
ad-hoc "cache disable" mechanism -- which didn't actually *work* very
well; 'most everything still ended up in this cache. There was also a
long-standing issue where building using the LLVM backend would put a
random object file in your cwd.
This commit reworks how emit paths are specified in
`Compilation.CreateOptions`, how they are represented internally, and
how the cache usage is specified.
There are now 3 options for `Compilation.CacheMode`:
* `.none`: do not use the cache. The paths we have to emit to are
relative to the compiler cwd (they're either user-specified, or
defaults inferred from the root name). If we create any temporary
files (e.g. the ZCU object when using the LLVM backend) they are
emitted to a directory in `local_cache/tmp/`, which is deleted once
the update finishes.
* `.whole`: cache the compilation based on all inputs, including file
contents. All emit paths are computed by the compiler (and will be
stored as relative to the local cache directory); it is a CLI error to
specify an explicit emit path. Artifacts (including temporary files)
are written to a directory under `local_cache/tmp/`, which is later
renamed to an appropriate `local_cache/o/`. The caller (who is using
`--listen`; e.g. the build system) learns the name of this directory,
and can get the artifacts from it.
* `.incremental`: similar to `.whole`, but Zig source file contents, and
anything else whose changes incremental compilation can handle, are
not included in the cache manifest. We don't need to do the dance
where the output directory is initially in `tmp/`, because our digest
is computed entirely from CLI inputs.
To be clear, the difference between `CacheMode.whole` and
`CacheMode.incremental` is unchanged. `CacheMode.none` is new
(previously it was sort of poorly imitated with `CacheMode.whole`). The
defined behavior for temporary/intermediate files is new.
`.none` is used for direct CLI invocations like `zig build-exe foo.zig`.
The other cache modes are reserved for `--listen`, and the cache mode in
use is currently just based on the presence of the `-fincremental` flag.
There are two cases in which `CacheMode.whole` is used despite there
being no `--listen` flag: `zig test` and `zig run`. Unless an explicit
`-femit-bin=xxx` argument is passed on the CLI, these subcommands will
use `CacheMode.whole`, so that they can put the output somewhere without
polluting the cwd (plus, caching is potentially more useful for direct
usage of these subcommands).
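Putting those rules together, the selection might look roughly like this
(a sketch of the logic described above, not the actual CLI code):

```zig
const CacheMode = enum { none, whole, incremental };

fn chooseCacheMode(
    listen: bool, // serving a client such as the build system
    incremental: bool, // -fincremental
    run_or_test: bool, // `zig run` or `zig test`
    explicit_emit_bin: bool, // -femit-bin=xxx passed on the CLI
) CacheMode {
    if (listen) return if (incremental) .incremental else .whole;
    if (run_or_test and !explicit_emit_bin) return .whole;
    return .none;
}
```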
Users of `--listen` (such as the build system) can now use
`std.zig.EmitArtifact.cacheName` to find out what an output will be
named. This avoids having to synchronize logic between the compiler and
all users of `--listen`.
It turns out that LLD caching hasn't been in use for a while. On master,
it is currently only enabled when you compile via the build system,
passing `-fincremental`, using LLD (and so LLVM if there's a ZCU). That
case never happens, because `-fincremental` is only useful when you're
using a backend *other* than the LLVM backend. My previous commits
accidentally re-enabled this logic in some cases, exposing bugs; that
ultimately led to this realisation. So, let's just delete that logic --
less LLVM-related cruft to maintain.
Unfortunately, the self-hosted SPIR-V backend is quite tightly coupled
with the self-hosted SPIR-V linker through its `Object` concept (which
is much like `llvm.Object`). Reworking this would be too much work for
this branch. So, for now, I have introduced a special case (similar to
the LLVM backend's special case) to the codegen logic when using this
backend. We will want to delete this special case at some point, but it
need not block this work.
My original goal here was just to get the self-hosted Wasm backend
compiling again after the pipeline change, but it turned out that from
there it was pretty simple to entirely eliminate the shared state
between `codegen.wasm` and `link.Wasm`. As such, this commit not only
fixes the backend, but makes it the second backend (after CBE) to
support the new 1:N:1 threading model.
As of this commit, every backend other than self-hosted Wasm and
self-hosted SPIR-V compiles and (at least somewhat) functions again.
Those two backends are currently disabled with panics.
Note that `Zcu.Feature.separate_thread` is *not* enabled for the fixed
backends. Avoiding linker references from codegen is a non-trivial task,
and can be done after this branch.
The idea here is that instead of the linker calling into codegen,
codegen should run before we touch the linker, and after MIR is
produced, it is sent to the linker. Aside from simplifying the call
graph (by preventing N linkers from each calling into M codegen
backends!), this has the huge benefit that it is possible to
parallelize codegen separately from linking. The threading model can
look like this:
* 1 semantic analysis thread, which generates AIR
* N codegen threads, which process AIR into MIR
* 1 linker thread, which emits MIR to the binary
The codegen threads are also responsible for `Air.Legalize` and
`Air.Liveness`; it's more efficient to do this work here instead of
blocking the main thread for this trivially parallel task.
I have repurposed the `Zcu.Feature.separate_thread` backend feature to
indicate support for this 1:N:1 threading pattern. This commit makes the
C backend support this feature, since it was relatively easy to divorce
from `link.C`: it just required eliminating some shared buffers. Other
backends don't currently support this feature. In fact, they don't even
compile -- the next few commits will fix them back up.
Similar to the previous commit, this commit untangles LLD integration
from the self-hosted linkers. Despite the big network of functions which
were involved, it turns out what was going on here is quite simple. The
LLD linking logic is actually very self-contained; it requires a few
flags from the `link.File.OpenOptions`, but that's really about it. We
don't need any of the mutable state on `Elf`/`Coff`/`Wasm`, for
instance. There was some legacy code trying to handle support for using
self-hosted codegen with LLD, but that's not a supported use case, so
I've just stripped it out.
For now, I've just pasted in the logic for linking the 3 targets we
currently support with LLD into this new linker implementation,
`link.Lld`; however, it's almost certainly possible to combine some of
the logic and simplify this file a bit. But to be honest, it's not
actually that bad right now.
This commit ends up eliminating the distinction between `flush` and
`flushZcu` (formerly `flushModule`) in linkers, where the latter
previously meant something along the lines of "flush, but if you're
going to be linking with LLD, just flush the ZCU object file, don't
actually link". The distinction here doesn't seem like it was properly
defined, and most linkers seem to treat them as essentially identical
anyway. Regardless, all calls to `flushZcu` are gone now, so it's
deleted -- one `flush` to rule them all!
The end result of this commit and the preceding one is that LLVM and LLD
fit into the pipeline much more sanely:
* If we're using LLVM for the ZCU, that state is on `zcu.llvm_object`
* If we're using LLD to link, then the `link.File` is a `link.Lld`
* Calls to "ZCU link functions" (e.g. `updateNav`) lower to calls to the
LLVM object if it's available, or otherwise to the `link.File` if it's
available (neither is available under `-fno-emit-bin`); see the sketch
after this list
* After everything is done, linking is finalized by calling `flush` on
the `link.File`; for `link.Lld` this invokes LLD, for other linkers it
flushes self-hosted linker state
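That dispatch might look roughly like this (a sketch with stub types;
only `zcu.llvm_object` is named in this commit, the rest is
illustrative):

```zig
// Stub types so the sketch is self-contained; the real ones live in the
// compiler (Compilation, Zcu, link.File, llvm.Object).
const LlvmObject = struct {
    fn updateNav(_: *LlvmObject, _: u32) !void {}
};
const LinkFile = struct {
    fn updateNav(_: *LinkFile, _: u32) !void {}
};
const Zcu = struct { llvm_object: ?*LlvmObject };
const Compilation = struct { zcu: ?*Zcu, bin_file: ?*LinkFile };

// Prefer the LLVM object, fall back to the self-hosted linker, and do
// nothing when neither exists (-fno-emit-bin).
fn updateNav(comp: *Compilation, nav: u32) !void {
    if (comp.zcu) |zcu| {
        if (zcu.llvm_object) |obj| return obj.updateNav(nav);
    }
    if (comp.bin_file) |lf| return lf.updateNav(nav);
}
```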
There's one messy thing remaining, and that's how self-hosted function
codegen in a ZCU works; right now, we process AIR with a call sequence
something like this:
* `link.doTask`
* `Zcu.PerThread.linkerUpdateFunc`
* `link.File.updateFunc`
* `link.Elf.updateFunc`
* `link.Elf.ZigObject.updateFunc`
* `codegen.generateFunction`
* `arch.x86_64.CodeGen.generate`
So, we start in the linker, take a scenic detour through `Zcu`, go back
to the linker, into its implementation, and then... right back out, into
code which is generic over the linker implementation, and then dispatch
on the *backend* instead! Of course, within `arch.x86_64.CodeGen`, there
are some more places which switch on the `link` implementation being
used. This is all pretty silly... so it shall be my next target.
The main goal of this commit is to make it easier to decouple codegen
from the linkers by being able to do LLVM codegen without going through
the `link.File`; however, this ended up being a nice refactor anyway.
Previously, every linker stored an optional `llvm.Object`, which was
populated when using LLVM for the ZCU *and* linking an output binary;
and `Zcu` also stored an optional `llvm.Object`, which was used only
when we needed LLVM for the ZCU (e.g. for `-femit-llvm-bc`) but were not
emitting a binary.
This situation was incredibly silly. It meant there were N+1 places the
LLVM object might be instead of just 1, and it meant that every linker
had to start a bunch of methods by checking for an LLVM object, and just
dispatching to the corresponding method on *it* instead if it was not
`null`.
Instead, we now always store the LLVM object on the `Zcu` -- which makes
sense, because it corresponds to the object emitted by, well, the Zig
Compilation Unit! The linkers now mostly don't make reference to LLVM.
`Compilation` makes sure to emit the LLVM object if necessary before
calling `flush`, so it is ready for the linker. Also, all of the
`link.File` methods which act on the ZCU -- like `updateNav` -- now
check for the LLVM object in `link.zig` instead of in every single
individual linker implementation. Notably, the change to LLVM emit
improves this rather ludicrous call chain in the `-fllvm -flld` case:
* Compilation.flush
* link.File.flush
* link.Elf.flush
* link.Elf.linkWithLLD
* link.Elf.flushModule
* link.emitLlvmObject
* Compilation.emitLlvmObject
* llvm.Object.emit
Replacing it with this one:
* Compilation.flush
* llvm.Object.emit
...although we do currently still end up in `link.Elf.linkWithLLD` to do
the actual linking. The logic for invoking LLD should probably also be
unified at least somewhat; I haven't done that in this commit.
* The `codegen_nav`, `codegen_func`, `codegen_type` tasks are renamed to
`link_nav`, `link_func`, and `link_type`, to more accurately reflect
their purpose of sending data to the *linker*. Currently, `link_func`
remains responsible for codegen; this will change in an upcoming
commit.
* Don't go on a pointless detour through `PerThread` when linking ZCU
functions/`Nav`s: the `linkerUpdateNav` etc. logic now lives in
`link.zig`. Currently, `linkerUpdateFunc` is an exception, because it
has broader responsibilities including codegen, but this will be
solved in an upcoming commit.