e.g. `x86_64-windows.win10...win11_dt-gnu` -> `x86_64-windows-gnu`
When the OS version is the default, this is redundant with checking the
default in the standard library.
This is necessary in two cases:
* On POSIX, the exe path (`argv[0]`) must contain a path separator
* Some programs might treat a file named e.g. `-foo` as a flag, which
can be avoided by passing `./-foo`
Rather than detecting these two cases, just always include the prefix;
there's no harm in it.
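Concretely, the idea is just (a minimal sketch with hypothetical names,
not the actual code):

```zig
// Always spawn relative child exe paths with an explicit "./" prefix so
// that argv[0] contains a separator and cannot be mistaken for a flag.
const argv0 = if (std.fs.path.isAbsolute(exe_path))
    exe_path
else
    try std.fmt.allocPrint(arena, "./{s}", .{exe_path});
```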
Also, if the cwd is specified, include it in the manifest. If the user
has set the cwd of a Run step, it is clearly because this affects the
behavior of the executable somehow, so that cwd path should be a part of
the step's manifest.
Resolves: #24216
CMake by default adds the `/RTC1` compiler flag for debug builds.
However, this causes C code that conforms to the C standard and has
well-defined behavior to trap. Here I've updated CMake to use the more
lenient `/RTCs` by default, which removes the uninitialized-variable
checks but keeps the stack error checks.
Also add a standalone test which covers the `-fentry` case. It does this
by performing two reproducible compilations which are identical other
than having different entry points, and checking whether the emitted
binaries are identical (they should *not* be).
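For illustration, the shape of that test is roughly as follows (a sketch
with made-up names, not the actual build script):

```zig
// Two otherwise-identical reproducible compilations with different entry points.
const exe_a = b.addExecutable(.{ .name = "a", .root_module = mod_a });
exe_a.entry = .{ .symbol_name = "entryA" };
const exe_b = b.addExecutable(.{ .name = "b", .root_module = mod_b });
exe_b.entry = .{ .symbol_name = "entryB" };

// A small checker program (not shown) receives both emitted binaries and
// fails if they compare byte-for-byte equal.
const check = b.addRunArtifact(checker);
check.addFileArg(exe_a.getEmittedBin());
check.addFileArg(exe_b.getEmittedBin());
```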
Resolves: #23869
`std.Build.Step.ConfigHeader` emits a *directory* containing a config
header under a given sub path, but there's no good way to actually
access that directory as a `LazyPath` in the configure phase. This is
silly; it's perfectly valid to refer to that directory, perhaps to
explicitly pass as a "-I" flag to a different toolchain invoked via a
`Step.Run`. So now, instead of the `GeneratedFile` being the actual
*file*, it should be that *directory*, i.e. `cache/o/<digest>`. We can
then easily get the *file* if needed by using `LazyPath.path` to go
"deeper"; there is a helper function for this.
The legacy `getOutput` function is now a deprecated alias for
`getOutputFile`, and `getOutputDir` is introduced.
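For example, passing that directory to another toolchain through a
`Step.Run` can now look like this (a sketch, not code from this change;
the command and value names are made up):

```zig
const config_header = b.addConfigHeader(
    .{ .style = .blank, .include_path = "config.h" },
    .{ .HAVE_FOO = true },
);
const run = b.addSystemCommand(&.{"some-other-compiler"});
// Pass the *directory* containing config.h, i.e. `cache/o/<digest>`.
run.addPrefixedDirectoryArg("-I", config_header.getOutputDir());
```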
`std.Build.Module.IncludeDir.appendZigProcessFlags` needed a fix after
this change, so I took the opportunity to refactor it a little. I was
looking at this function while working on ziglang/translate-c yesterday
and realised it could be expressed much more simply -- particularly
after the `ConfigHeader` change here.
I had to update the test `standalone/cmakedefine/` -- it turns out this
test was well and truly reaching into build system internals, and doing
horrible not-really-allowed stuff like overriding the `makeFn` of a
`TopLevelStep`. To top it all off, the test forgot to set
`b.default_step` to its "test" step, so the test never even ran. I've
refactored it to follow accepted practices and to actually, like, work.
This function is sometimes used to assume a canonical representation of
a path. However, when the `Path` referred to `root_dir` itself, this
function previously resolved `sub_path` to ".", which is incorrect; per
doc comments, it should set `sub_path` to "".
This fix ultimately didn't solve what I was trying to solve, though I'm
still PRing it, because it's still *correct*. The background to this
commit is quite interesting and worth briefly discussing.
I originally worked on this to try and fix a bug in the build system,
where if the root package (i.e. the one you `zig build`) depends on
package X which itself depends back on the root package (through a
`.path` dependency), invalid dependency modules are generated. I hit
this case working on ziglang/translate-c, which wants to depend on
"examples" (similar to the Zig compiler's "standalone" test cases) which
themselves depend back on the translate-c package. However, after this
patch just turned that error into another, I realised that this case
simply cannot work, because `std.Build` needs to eagerly execute build
scripts at `dependency` calls to learn which artifacts, modules, etc,
exist.
...at least, that's how the build system is currently designed. One can
imagine a world where `dependency` sort of "queues" the call, `artifact`
and `module` etc just pretend that the thing exists, and all configure
functions are called non-recursively by the runner. The downside is that
it becomes impossible to query state set by a dependency's configure
script. For instance, if a dependency exposes an artifact, it would
become impossible to get that artifact's resolved target in the
configure phase. However, as well as allowing recursive package imports
(which are certainly kinda nifty), it would also make lazy dependencies
far more useful! Right now, lazy dependencies only really work if you
use options (`std.Build.option`) to block their usage, since any call to
`lazyDependency` causes the dependency to be fetched. However, if we
made this change, lazy dependencies could be made far more versatile by
only fetching them *if the final step plan requires them*. I'm not 100%
sure if this is a good idea or not, but I might open an issue for it
soon.
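For reference, the option-gated pattern that works today looks something
like this (sketch; names are made up):

```zig
const enable_examples = b.option(bool, "examples", "Also build the examples") orelse false;
if (enable_examples) {
    if (b.lazyDependency("examples", .{})) |dep| {
        // Only reached -- and therefore only fetched -- when -Dexamples=true.
        const example_mod = dep.module("example");
        _ = example_mod;
    }
}
```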
There will be more call sites to `preparePanicId` as we transition away
from safety checks in Sema towards safety checked instructions; it's
silly for them to all have this clunky usage.
This safety check was completely broken; it triggered unchecked illegal
behavior *in order to implement the safety check*. You definitely can't
do that! Instead, we must explicitly check the boundaries. This is a
tiny bit fiddly, because we need to make sure we do floating-point
rounding in the correct direction, and also handle the fact that the
operation truncates so the boundary works differently for min vs max.
Instead of implementing this safety check in Sema, there are now
dedicated AIR instructions for safety-checked `@intFromFloat` (two
instructions; which one is used depends on the float mode). Currently,
no backend directly implements them; instead, a `Legalize.Feature` is
added which expands the safety check, and this feature is enabled for
all backends we currently test, including the LLVM backend.
The `u0` case is still handled in Sema, because Sema needs to check for
that anyway due to the comptime-known result. The old safety check here
was also completely broken and has therefore been rewritten. In that
case, we just check for `abs(input) < 1.0`.
I've added a bunch of test coverage for the boundary cases of
`@intFromFloat`, both for successes (in `test/behavior/cast.zig`) and
failures (in `test/cases/safety/`).
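To give a flavour of the boundary behavior being exercised (a sketch,
not the actual added tests):

```zig
const std = @import("std");

test "@intFromFloat truncates toward zero at the boundaries" {
    var x: f32 = 255.9;
    _ = &x; // force runtime evaluation
    // 255.9 truncates to 255, which still fits in a u8.
    try std.testing.expectEqual(@as(u8, 255), @as(u8, @intFromFloat(x)));

    var y: f32 = -0.9;
    _ = &y;
    // -0.9 truncates to 0, so it is also in range for a u8.
    try std.testing.expectEqual(@as(u8, 0), @as(u8, @intFromFloat(y)));

    var z: f32 = 0.25;
    _ = &z;
    // For u0, the only valid inputs satisfy abs(input) < 1.0.
    try std.testing.expectEqual(@as(u0, 0), @as(u0, @intFromFloat(z)));
}
```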
Resolves: #24161
These conversion routines accept a `round` argument to control how the
result is rounded and return whether the result is exact. Most callers
wanted this functionality and had hacks to work around its absence.
Also delete `std.math.big.rational` because it was only being used for
float conversion, and using rationals for that is a lot more complex
than necessary. It also required an allocator, whereas the new integer
routines only need to be passed enough memory to store the result.
If you write an if expression inside `mem.doNotOptimizeAway`, like
`doNotOptimizeAway(if (ix < 0x00100000) x / 0x1p120 else x + 0x1p120);`,
an FCSEL instruction is used on AArch64.
The FCSEL instruction selects one of two registers according to a
condition and copies its value.
In this example, `x / 0x1p120` and `x + 0x1p120` are expressions
that raise different floating-point exceptions.
However, since both are actually evaluated before the FCSEL
instruction, an exception the programmer did not intend may also be
raised.
To prevent an FCSEL instruction from being used here, this commit
splits doNotOptimizeAway in two.
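Assuming the split is at the call site, the resulting shape is roughly
(a sketch, not the actual diff):

```zig
// Each branch is evaluated only when taken, so only the intended
// floating-point exception can be raised.
if (ix < 0x00100000) {
    std.mem.doNotOptimizeAway(x / 0x1p120);
} else {
    std.mem.doNotOptimizeAway(x + 0x1p120);
}
```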
and also rename `advancedPrint` to `bufferedPrint` in the `zig init`
templates. These are leftovers from my previous changes to `zig init`.
The new templating system removes LITNAME because the new restrictions
on package names make it redundant with NAME, and the use of underscores
for marking templated identifiers lets us template variable names while
still keeping `zig fmt` happy.
Did you know that allocators reuse addresses? If not, then don't feel
bad, because apparently I don't either! This dumb mistake was probably
responsible for the CI failures on `master` yesterday.
I messed up atomic orderings on this variable because they changed in a
local refactor at some point. We need to always release on the store and
acquire on the loads so that a linker thread observing `.ready` sees the
stored MIR.
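In other words, the intended pairing is (a sketch with placeholder
names, not the actual code):

```zig
// Producer thread: publish the MIR, then release the status flag.
out.mir = mir;
out.status.store(.ready, .release);

// Linker thread: an acquire load that observes .ready is guaranteed to
// also observe the MIR stored before the matching release above.
if (out.status.load(.acquire) == .ready) {
    use(out.mir);
}
```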
Because any `LazyPath` might be resolved to a relative path, it's
incorrect to pass that directly to a child process whose cwd might
differ. Instead, if the child has an overridden cwd, we need to convert
such paths to be relative to the child cwd using `std.fs.path.relative`.
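Conceptually (a sketch with hypothetical names):

```zig
// If the child has an overridden cwd, rebase possibly-relative paths onto it.
const arg = if (maybe_child_cwd) |child_cwd|
    try std.fs.path.relative(arena, child_cwd, resolved_path)
else
    resolved_path;
```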
File arguments added to `std.Build.Step.Run` with e.g. `addFileArg` are
not necessarily passed as absolute paths. It used to be the case that
they were as a consequence of an unnecessary path conversion done by the
frontend, but this no longer happens, at least not always, so these
tests were sometimes failing when run locally. Therefore, the standalone
tests must handle cwd-relative CLI paths correctly.
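For a test program, handling this correctly just means opening its
argument relative to the cwd rather than assuming it is absolute, e.g.
(sketch):

```zig
// Works for both absolute and cwd-relative paths.
const file = try std.fs.cwd().openFile(args[1], .{});
defer file.close();
```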
* Sema: allow binary operations and boolean not on vectors of bool
* langref: Clarify use of operators on vectors (`and` and `or` not allowed)
Closes: #24093
Looking at a compilation of 'test/behavior/x86_64/unary.zig' in
callgrind showed that a full 30% of the compiler runtime was spent in
this `stringToEnum` call, so optimizing it was low-hanging fruit.
We tried replacing it with nested `switch` statements using
`inline else`, but that generated too much code; it didn't emit huge
binaries or anything, but LLVM used a *ridiculous* amount of memory
compiling it in some cases. The core problem here is that only a small
subset of the cases is actually used (the rest fall through to an
"error" path), but that subset is computed at comptime, so we must rely
on the optimizer to eliminate the thousands of redundant cases. This
would be solved by #21507.
Instead, we pre-compute a lookup table at comptime. This table is pretty
big (I guess a couple hundred k?), but only the "valid" subset of
entries will be accessed in practice (unless a bug in the backend is
hit), so it's not too awful on the cache; and it performs much better
than the old `std.meta.stringToEnum` call.
Update the estimated total items for the codegen and link progress nodes
earlier. Rather than waiting for the main thread to dispatch the tasks,
we can add the item to the estimated total as soon as we queue the main
task. The only difference is we need to complete it even in error cases.
Without this cap, unlucky scheduling and/or details of what pipeline
stages perform best on the host machine could cause many gigabytes of
MIR to be stuck in the queue. At a certain point, pause the main thread
until some of the functions in flight have been processed.
It isn't really coherent to model this as a `Feature`; it only makes
sense because of zig1's specific environment. As such, I opted to check
`dev.env` directly.
Previously, `PerThread.populateTestFunctions` was analyzing the
`test_functions` declaration if it hadn't already been analyzed, so that
it could then populate it. However, the logic for doing this wasn't
actually correct, because it didn't trigger the necessary type
resolution. I could have tried to fix this, but there's actually a
simpler solution! If the `test_functions` declaration isn't referenced
or has a compile error, then we simply don't need to update it; either
it's unreferenced so its value doesn't matter, or we're going to get a
compile error anyway. Either way, we can just give up early. This avoids
doing semantic analysis after `performAllTheWork` finishes.
Also, get rid of the "Code Generation" progress node while updating the
test decl: this is a linking task.
The name of the ZCU object file emitted by the LLVM backend has been
changed in this branch from e.g. `foo.obj` to `foo_zcu.obj`. This is to
avoid name clashes. This commit just updates the stack trace tests which
started failing on Windows because of the object name change.
The name of the ZCU object file emitted by the LLVM backend has been
changed in this branch from e.g. `foo.o` to `foo_zcu.o`. This is to
avoid name clashes. This commit just updates a link test which started
failing because the object name in a linker error changed.