It doesn't really make sense for `target_util.canBuildLibCompilerRt`
(and its ubsan-rt friend) to take in `use_llvm`, because the caller
doesn't control that: they're just going to queue a sub-compilation for
the runtime. The only exception to that is the ZCU strategy, where we
effectively embed `_ = @import("compiler_rt")` into the Zig compilation:
there, the question does matter. Rather than modeling this with several
awkward calls, have `canBuildLibCompilerRt` return more than a boolean:
the result also distinguishes the self-hosted backend being capable of
building the library from only LLVM being capable. Logic in
`Compilation` uses that distinction to decide whether to use the ZCU
strategy, and to disable the library when LLVM is required but the
compiler was built without LLVM support.
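For illustration, the result can be thought of as a three-way answer
rather than a boolean. This is a hypothetical sketch; the actual
declaration in `target_util` differs:

```zig
// Hypothetical sketch; names are illustrative only.
pub const CompilerRtSupport = enum {
    /// No backend can build the runtime for this target.
    none,
    /// Only the LLVM backend can build the runtime, so the library must
    /// be disabled if the compiler was built without LLVM support.
    llvm_only,
    /// The self-hosted backend can build it too, making the ZCU strategy
    /// (embedding `_ = @import("compiler_rt")`) viable.
    self_hosted,
};
```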
Also, remove a redundant check later on, when actually queuing jobs.
We've already checked that we can build `compiler_rt`, and
`compiler_rt_strat` is set accordingly. I'm guessing this was there to
work around a bug I saw in the old strategy assignment, where support
was ignored in some cases.
Resolves: #24623
On macOS, when using the LLVM backend, the output binary retains a
reference to this object file's debug info (as opposed to self-hosted
backends, which instead emit a dSYM bundle), so in that case we need to
keep this object file around. This object does unfortunately "leak",
in that it won't be reused and will just sit in the cache forever (or
until GC'd in the future). But that's no worse than the cache behavior
prior to the rework that caused this, and it will become less of a
problem over time as the self-hosted backend gains usability for debug
builds and eventually becomes the default.
Resolves: #24369
In the best case, this is redundant work, because we aren't actually
going to emit a working binary this update. In the worst case, it causes
bugs because the linker may not have *seen* the thing being exported due
to the compile errors.
Resolves: #24417
This "get" is useless noise and was copied from FixedBufferWriter.
Since this API has not yet landed in a release, now is a good time
to make the breaking change to fix this.
The functions `Compilation.create` and `Compilation.update` previously
returned inferred error sets, which had accumulated a lot of cruft over
time. This meant that certain error conditions -- particularly certain
filesystem errors -- were not being reported properly (at best the CLI
would just print the error name). This was also a problem in
sub-compilations, where at times only the error name -- which might just
be something like `LinkFailed` -- would be visible.
This commit makes the error handling here more disciplined by
introducing concrete error sets to these functions (and a few more as a
consequence). These error sets are small: errors in `update` are almost
all reported via compile errors, and errors in `create` are reported
through a new `Compilation.CreateDiagnostic` type, a tagged union of
possible error cases. This allows for better error reporting.
Sub-compilations also report errors more correctly in several cases,
leading to more informative errors in the case of compiler bugs.
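The general shape of the pattern is roughly as follows. This is a
minimal sketch with hypothetical names, not the actual `Compilation`
API:

```zig
const std = @import("std");

// Hypothetical sketch: a small concrete error set, plus an out-parameter
// diagnostic carrying the details that the error code alone cannot.
const CreateDiagnostic = union(enum) {
    none,
    /// A filesystem operation failed; keep enough detail to report it well.
    open_failed: struct { path: []const u8, err: anyerror },
};

const CreateError = error{CreateFailed};

fn create(dir: std.fs.Dir, path: []const u8, diag: *CreateDiagnostic) CreateError!std.fs.File {
    return dir.createFile(path, .{}) catch |err| {
        // Record the specific failure in the diagnostic...
        diag.* = .{ .open_failed = .{ .path = path, .err = err } };
        // ...and return one of a small, concrete set of error codes.
        return error.CreateFailed;
    };
}
```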
Also fixes some race conditions in library building by replacing calls
to `setMiscFailure` with calls to `lockAndSetMiscFailure`. Compilation
of libraries such as libc happens on the thread pool, so the logic must
synchronize its access to shared `Compilation` state.
If an error occurred which prevented a prelink task from being queued,
then `pending_prelink_tasks` would never be decremented, which could
cause deadlocks in some cases. So, instead of calculating ahead of time
the number of prelink tasks to expect, we use a simpler strategy which
is much like a wait group: we add 1 to a value when we spawn a worker,
and in the worker function, `defer` decrementing the value. The initial
value is 1, and there's a decrement after all of the workers are
spawned, so once it hits 0, prelink is done (be it with a failure or a
success).
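A minimal sketch of that counting scheme, with hypothetical names:

```zig
const std = @import("std");

// Start at 1 so the count cannot hit 0 while workers are still being spawned.
var pending: std.atomic.Value(u32) = std.atomic.Value(u32).init(1);

fn spawnWorker(pool: *std.Thread.Pool) void {
    _ = pending.fetchAdd(1, .monotonic); // add 1 for the worker being spawned
    pool.spawn(worker, .{}) catch {
        // The worker will never run its deferred decrement, so undo the
        // increment here; this is what prevents the deadlock.
        finishOne();
    };
}

fn worker() void {
    defer finishOne(); // decrement on every exit path, including failures
    // ... perform the prelink task ...
}

fn finishOne() void {
    // After all workers are spawned, one extra finishOne() drops the
    // initial 1; when the count hits 0, prelink is done.
    if (pending.fetchSub(1, .acq_rel) == 1) {
        // e.g. set a std.Thread.ResetEvent to wake the waiting thread
    }
}
```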
This reverts commit b461d07a5464aec86c533434dab0b58edfffb331.
After some discussion within the team, we've decided that this is too disruptive,
especially because the linker errors are less than helpful. That's a fixable
problem, so we might reconsider this in the future, but revert it for now.
This use case is handled by ArrayListUnmanaged via the "...Bounded"
method variants, and it's more efficient to share machine code than to
generate multiple versions of each function for differing array
lengths.
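For example, a former `BoundedArray` use becomes an unmanaged list over
a fixed buffer; this sketch assumes the `initBuffer` and `appendBounded`
spellings:

```zig
const std = @import("std");

test "fixed-capacity list via ArrayListUnmanaged" {
    var buf: [8]u32 = undefined;
    var list: std.ArrayListUnmanaged(u32) = .initBuffer(&buf);
    // The ...Bounded variants never allocate; they error once `buf` is full.
    try list.appendBounded(42);
    try std.testing.expectEqual(@as(usize, 1), list.items.len);
}
```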
This commit replaces the "fuzzer" UI, previously accessed with the
`--fuzz` and `--port` flags, with a more capable web UI that supports
richer interaction with the Zig build system. Most notably, it exposes
the data emitted by a new "time report" system, which lets users see
which parts of Zig programs take the longest to compile.
The option to expose the web UI is `--webui`. By default, it will listen
on `[::1]` on a random port, but any IPv6 or IPv4 address can be
specified with e.g. `--webui=[::1]:8000` or `--webui=127.0.0.1:8000`.
The options `--fuzz` and `--time-report` both imply `--webui` if not
given. Currently, `--webui` is incompatible with `--watch`; specifying
both will cause `zig build` to exit with a fatal error.
When the web UI is enabled, the build runner spawns the web server as
soon as the configure phase completes. The frontend code consists of one
HTML file, one JavaScript file, two CSS files, and a few Zig source
files which are built into a WASM blob on-demand -- this is all very
similar to the old fuzzer UI. Also inherited from the fuzzer UI is that
the build system communicates with web clients over a WebSocket
connection.
When the build finishes, if `--webui` was passed (i.e. if the web server
is running), the build runner does not terminate; it continues running
to serve web requests, allowing interactive control of the build system.
The web interface shows an overall "status" indicating whether a build
is currently running, as well as a list of all steps in the build. There
are visual indicators (colors and spinners) for in-progress, succeeded,
and failed steps. There is a "Rebuild" button which will cause the build
system to reset the state of every step (note that this does not affect
caching) and evaluate the step graph again.
If `--time-report` is passed to `zig build`, a new section of the
interface becomes visible, which associates every build step with a
"time report". For most steps, this is just a simple "time taken" value.
However, for `Compile` steps, the compiler communicates with the build
system to provide it with much more interesting information: time taken
for various pipeline phases, with a per-declaration and per-file
breakdown, sorted by slowest declarations/files first. This feature is
still in its early stages: the data can be a little tricky to
understand, and there is no way to, for instance, sort by different
properties, or filter to certain files. However, it has already given us
some interesting statistics, and can be useful for spotting, for
instance, particularly complex and slow compile-time logic.
Additionally, if a compilation uses LLVM, its time report includes the
"LLVM pass timing" information, which was previously accessible with the
(now removed) `-ftime-report` compiler flag.
To make time reports more useful, ZIR and compilation caches are ignored
by the Zig compiler when time reports are enabled -- in other words, `Compile`
steps *always* run, even if their result should be cached. This means
that the flag can be used to analyze a project's compile time without
having to repeatedly clear the cache directory, for instance. However, when
using `-fincremental`, updates other than the first will only show you
the statistics for what changed on that particular update. Notably, this
gives us a fairly nice way to see exactly which declarations were
re-analyzed by an incremental update.
If `--fuzz` is passed to `zig build`, another section of the web
interface becomes visible, this time exposing the fuzzer. This is quite
similar to the fuzzer UI this commit replaces, with only a few cosmetic
tweaks. The interface is closer than before to supporting multiple fuzz
steps at a time (in line with the overall strategy for this build UI,
the goal will be for all of the fuzz steps to be accessible in the same
interface), but still doesn't actually support it. The fuzzer UI looks
quite different under the hood: as a result, various bugs are fixed,
although other bugs remain. For instance, viewing the source code of any
file other than the root of the main module is completely broken (as on
master) due to some bogus file-to-module assignment logic in the fuzzer
UI.
Implementation notes:
* The `lib/build-web/` directory holds the client side of the web UI.
* The general server logic is in `std.Build.WebServer`.
* Fuzzing-specific logic is in `std.Build.Fuzz`.
* `std.Build.abi` is the new home of `std.Build.Fuzz.abi`, since it now
relates to the build system web UI in general.
* The build runner now has an **actual** general-purpose allocator,
because thanks to `--watch` and `--webui`, the process can be
arbitrarily long-lived. The gpa is `std.heap.DebugAllocator`, but the
arena remains backed by `std.heap.page_allocator` for efficiency (see
the sketch after this list). I
fixed several crashes caused by conflation of `gpa` and `arena` in the
build runner and `std.Build`, but there may still be some I have
missed.
* The I/O logic in `std.Build.WebServer` is pretty gnarly; there are a
*lot* of threads involved. I anticipate this situation improving
significantly once the `std.Io` interface (with concurrency support)
is introduced.
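As a sketch, the allocator split now looks roughly like this (the exact
wiring in the build runner differs):

```zig
const std = @import("std");

pub fn main() void {
    // A real GPA: the process can now be arbitrarily long-lived, so leak
    // and use-after-free detection actually matters.
    var debug_allocator: std.heap.DebugAllocator(.{}) = .init;
    defer _ = debug_allocator.deinit(); // reports leaks on exit
    const gpa = debug_allocator.allocator();

    // The arena remains backed by the page allocator for efficiency.
    var arena_state = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena_state.deinit();
    const arena = arena_state.allocator();

    _ = gpa;
    _ = arena;
}
```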
added an adapter to AnyWriter and GenericWriter to help bridge the gap
between the old and new APIs
make std.testing.expectFmt work at compile-time
std.fmt no longer has a dependency on std.unicode. Formatted printing
was never properly unicode-aware. Now it no longer pretends to be.
Breakage/deprecations:
* std.fs.File.reader -> std.fs.File.deprecatedReader
* std.fs.File.writer -> std.fs.File.deprecatedWriter
* std.io.GenericReader -> std.io.Reader
* std.io.GenericWriter -> std.io.Writer
* std.io.AnyReader -> std.io.Reader
* std.io.AnyWriter -> std.io.Writer
* std.fmt.format -> std.fmt.deprecatedFormat
* std.fmt.fmtSliceEscapeLower -> std.ascii.hexEscape
* std.fmt.fmtSliceEscapeUpper -> std.ascii.hexEscape
* std.fmt.fmtSliceHexLower -> {x}
* std.fmt.fmtSliceHexUpper -> {X}
* std.fmt.fmtIntSizeDec -> {B}
* std.fmt.fmtIntSizeBin -> {Bi}
* std.fmt.fmtDuration -> {D}
* std.fmt.fmtDurationSigned -> {D}
* {} -> {f} when there is a format method
* format method signature (see the sketch after this list)
- anytype -> *std.io.Writer
- inferred error set -> error{WriteFailed}
- options -> (deleted)
* std.fmt.Formatted
- now takes context type explicitly
- no fmt string
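Concretely, upgrading a type with a format method looks roughly like
this:

```zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,

    // New signature: a concrete writer type and a concrete error set,
    // with the old `fmt` and `options` parameters deleted.
    pub fn format(p: Point, writer: *std.io.Writer) error{WriteFailed}!void {
        try writer.print("({d}, {d})", .{ p.x, p.y });
    }
};

test "format method now requires {f}" {
    // `{}` no longer invokes the format method; `{f}` does.
    try std.testing.expectFmt("(1, 2)", "{f}", .{Point{ .x = 1, .y = 2 }});
}
```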
preparing to rearrange std.io namespace into an interface
how to upgrade:
std.io.getStdIn() -> std.fs.File.stdin()
std.io.getStdOut() -> std.fs.File.stdout()
std.io.getStdErr() -> std.fs.File.stderr()
This matches what we do for small helper libraries like this in MinGW-w64. It
simplifies the compiler a bit, and also means the build system doesn't have to
treat these library names specially.
Closes #24325.
Previously, resinator would use the host arch as the target arch when looking for windows-gnu include directories. However, Zig only thinks it can provide a libc for targets specified in the `std.zig.target.available_libcs` array, which only includes a few for windows-gnu. Therefore, when cross-compiling from a host architecture that doesn't have a windows-gnu target in the available_libcs list, resinator would fail to detect the MinGW include directories.
Now, the custom option `/:target` is passed to `zig rc`; the option is intended for the COFF object file target, but can be reused for the include directory target as well. For the include directory target, resinator will convert the MachineType to the relevant arch, or fail if there is no equivalent arch or no support for detecting the includes for that MachineType (currently 64-bit Itanium and EBC).
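The conversion is roughly of this shape; an illustrative sketch, not
resinator's actual code:

```zig
const std = @import("std");

// Map the COFF machine type to an include-directory arch, or null where
// there is no equivalent arch / no support for detecting the includes.
fn archFromMachineType(machine: std.coff.MachineType) ?std.Target.Cpu.Arch {
    return switch (machine) {
        .I386 => .x86,
        .X64 => .x86_64,
        .ARMNT => .thumb,
        .ARM64 => .aarch64,
        .IA64, .EBC => null, // 64-bit Itanium and EBC: unsupported
        else => null,
    };
}
```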
Fixes the `windows_resources` standalone test failing when the host is, for example, `riscv64-linux`.
This is not meant to be a long-term solution, but it's the easiest thing
to get working quickly at the moment. The main intention of this hack is
to allow more tests to be enabled. By the time the coff linker is far
enough along to be enabled by default, this will no longer be required.
Also add a standalone test which covers the `-fentry` case. It does this
by performing two reproducible compilations which are identical other
than having different entry points, and checking whether the emitted
binaries are identical (they should *not* be).
Resolves: #23869
Update the estimated total items for the codegen and link progress nodes
earlier. Rather than waiting for the main thread to dispatch the tasks,
we can add the item to the estimated total as soon as we queue the main
task. The only difference is that we need to complete the item even in error cases.
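In terms of the `std.Progress` API, this amounts to something like the
following sketch (the task plumbing here is hypothetical):

```zig
const std = @import("std");

// Hypothetical task type, for illustration only.
const Task = struct {};

fn queueLinkTask(link_prog_node: std.Progress.Node, task: Task) void {
    // Count the item as soon as the task is queued, rather than waiting
    // for the main thread to dispatch it.
    link_prog_node.increaseEstimatedTotalItems(1);
    _ = task; // ... push onto the real queue here ...
}

fn runLinkTask(link_prog_node: std.Progress.Node, task: Task) void {
    // Complete the item on every exit path, including error cases.
    defer link_prog_node.completeOne();
    _ = task; // ... dispatch and perform the task ...
}
```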
Without this cap, unlucky scheduling and/or details of what pipeline
stages perform best on the host machine could cause many gigabytes of
MIR to be stuck in the queue. At a certain point, pause the main thread
until some of the functions in flight have been processed.
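One way to implement such a cap, as a sketch; every name and the
threshold here are assumptions, not the compiler's actual code:

```zig
const std = @import("std");

const max_mir_bytes_in_flight: usize = 64 << 20; // illustrative cap

var mutex: std.Thread.Mutex = .{};
var cond: std.Thread.Condition = .{};
var mir_bytes_in_flight: usize = 0;

/// Called by the main thread before queuing more MIR; blocks while the
/// amount of MIR waiting to be processed exceeds the cap.
fn enqueueMir(bytes: usize) void {
    mutex.lock();
    defer mutex.unlock();
    while (mir_bytes_in_flight > max_mir_bytes_in_flight) cond.wait(&mutex);
    mir_bytes_in_flight += bytes;
}

/// Called by a worker once a function's MIR has been processed.
fn finishMir(bytes: usize) void {
    mutex.lock();
    defer mutex.unlock();
    mir_bytes_in_flight -= bytes;
    cond.signal(); // wake the main thread if it is paused
}
```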
This isn't really coherent to model as a `Feature`: it only makes sense
because of zig1's specific environment. As such, I opted to check
`dev.env` directly.
Previously, `PerThread.populateTestFunctions` was analyzing the
`test_functions` declaration if it hadn't already been analyzed, so that
it could then populate it. However, the logic for doing this wasn't
actually correct, because it didn't trigger the necessary type
resolution. I could have tried to fix this, but there's actually a
simpler solution! If the `test_functions` declaration isn't referenced
or has a compile error, then we simply don't need to update it; either
it's unreferenced so its value doesn't matter, or we're going to get a
compile error anyway. Either way, we can just give up early. This avoids
doing semantic analysis after `performAllTheWork` finishes.
Also, get rid of the "Code Generation" progress node while updating the
test decl: this is a linking task.
* "Flush" nodes ("LLVM Emit Object", "ELF Flush") appear under "Linking"
* "Code Generation" disappears when all analysis and codegen is done
* We only show one node under "Semantic Analysis" to accurately convey
that analysis isn't happening in parallel, but rather that we're
pausing one task to do another