This is a partial revert of 105db13536b4dc2affe130cb8d2eee6c97c89bcd.
As we learned from Void Linux packaging, these options are not actually
helpful since the distribution package manager may very well want to
cross-compile the packages that it is building.
So, let's not overcomplicate things. There are already the standard
options: -Dtarget, -Dcpu, and -Ddynamic-linker.
These options are generally provided when the project generates machine
code artifacts; however, a project may generate no such artifacts, in
which case it makes sense for these options to be absent. The Zig
Build System is a general-purpose build system, after all.
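For reference, here is a minimal build.zig sketch showing where these
options come from; the exact std.Build API shown here varies between Zig
versions, but `standardTargetOptions` is what exposes -Dtarget, -Dcpu,
and -Ddynamic-linker to `zig build`.

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    // Exposes the standard -Dtarget, -Dcpu, and -Ddynamic-linker options
    // to whoever runs `zig build`, e.g. a distro packager cross-compiling.
    const target = b.standardTargetOptions(.{});
    // Likewise exposes -Doptimize.
    const optimize = b.standardOptimizeOption(.{});

    const exe = b.addExecutable(.{
        .name = "example",
        .root_source_file = b.path("src/main.zig"),
        .target = target,
        .optimize = optimize,
    });
    b.installArtifact(exe);
}
```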
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value beyond which
pointer accesses cannot reach, since the parent has no defined layout.
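As a rough, hypothetical illustration of the idea (this is not the
actual InternPool encoding): a comptime pointer becomes a base "anchor"
plus a byte offset for layout-defined types, with hierarchical anchors
such as `field` reserved for parents whose layout is not defined.

```zig
// Hypothetical sketch only; field names and payloads are illustrative.
const Ptr = struct {
    base: BaseAddr,
    /// Offset in bytes from `base`; meaningful because everything
    /// reachable from `base` has a well-defined memory layout.
    byte_offset: u64,

    const BaseAddr = union(enum) {
        /// A global declaration.
        decl: u32,
        /// A comptime-mutable allocation.
        comptime_alloc: u32,
        /// A field of a parent whose type has no well-defined layout, so
        /// byte offsets cannot reach outside of it.
        field: struct { parent: u32, index: u32 },
        // ...other anchors (array element, integer address, and so on)...
    };
};
```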
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and element accesses for lowering pointer
values, such as SPIR-V, there is a helper function in `Value` which
creates a strategy for deriving a pointer value using, ideally, only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
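For illustration, here is a hypothetical shape of such a derivation
strategy (the names are illustrative, not the actual `Value` API): a
pointer is described as a root followed by a sequence of steps, ideally
only field and element accesses, with an offset-and-cast fallback when
no exact chain exists.

```zig
// Hypothetical sketch only; not the real data structure.
const PtrDeriveStep = union(enum) {
    /// Start from the address of a global declaration.
    decl_root: u32,
    /// Take a pointer to field `n` of the previous step's result.
    field: u32,
    /// Take a pointer to element `n` of the previous step's result.
    elem: u64,
    /// Fallback: reinterpret the previous result at a byte offset and cast.
    offset_and_cast: struct { byte_offset: u64 },
};
```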
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become a mess of special cases, added
whenever bugs were hit, and was still riddled with bugs. The new logic
was written to handle the "difficult" cases correctly, the most notable
of which is restructuring a comptime-only array (for instance,
converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the loading and storing logic work somewhat differently from
one another, but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
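A behavior-test-style sketch of the kind of reinterpretation described
above, using a sized element type for brevity; the hard case the rewrite
targets is the analogous cast with a comptime-only element type such as
`comptime_int`.

```zig
const std = @import("std");

test "restructure a nested array through a comptime pointer cast" {
    comptime {
        const a: [3][2]u32 = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
        const b: *const [2][3]u32 = @ptrCast(&a);
        // The flat element order 1..6 is unchanged; only the grouping differs.
        std.debug.assert(b[0][2] == 3);
        std.debug.assert(b[1][0] == 4);
    }
}
```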
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into
a sequence of bit-packed atomic values and then "unflattening" it, using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
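For orientation, here is a simple instance of the operation this logic
implements; the interesting cases involve `undefined` fields and
pointers, which are hard to assert on in a test like this.

```zig
const std = @import("std");

const Flags = packed struct(u8) {
    a: bool,
    b: bool,
    _pad: u6 = 0,
};

test "comptime bitcast of a packed struct" {
    comptime {
        const f: Flags = .{ .a = true, .b = false };
        const bits: u8 = @bitCast(f);
        // Field `a` occupies the least significant bit.
        std.debug.assert(bits == 0b01);
    }
}
```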
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests have been added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
- `{}` for decls
- `{p}` for enum fields
- `{p_}` for struct fields and in contexts following a `.`
Elsewhere, `{p}` was used since it's equivalent to the old behavior.
Now that `-femit-docs` includes all modules, including the builtin
module, in the generated source tarball, it makes sense to apply the
same logic to the std-docs server. std-docs constructs its own tarball,
so a different approach is needed to achieve the same end result.
This takes the code that was previously in src/Compilation.zig to turn
resinator diagnostics into Zig error bundles and puts it in
resinator/main.zig, and then makes resinator emit the resulting error
bundles via std.zig.Server (which is used by the build runner, etc.).
Also adds support for turning Aro diagnostics into ErrorBundles.
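For context, here is a hedged sketch of what producing such an error
bundle can look like with `std.zig.ErrorBundle` (APIs as I understand
them; resinator's actual conversion is more involved, and it serves the
finished bundle over std.zig.Server rather than rendering it itself).

```zig
const std = @import("std");

pub fn main() !void {
    var gpa_state = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa_state.deinit();
    const gpa = gpa_state.allocator();

    // Accumulate diagnostics into a work-in-progress error bundle.
    var wip: std.zig.ErrorBundle.Wip = undefined;
    try wip.init(gpa);
    defer wip.deinit();

    // One root error message per diagnostic; real code would also attach
    // source locations and notes.
    try wip.addRootErrorMessage(.{
        .msg = try wip.addString("example resource compiler error"),
        .src_loc = .none,
    });

    var bundle = try wip.toOwnedBundle("");
    defer bundle.deinit(gpa);

    // Here the bundle is just rendered; resinator instead sends it to the
    // build runner over the std.zig.Server protocol.
    bundle.renderToStdErr(.{ .ttyconf = .no_color });
}
```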
This moves .rc/.manifest compilation out of the main Zig binary,
contributing towards #19063
Also:
- Make resinator use Aro as its preprocessor instead of clang
- Sync resinator with upstream
Part of #19063.
Primarily, this moves Aro from deps/ to lib/compiler/ so that it can be
lazily compiled from source. src/aro_translate_c.zig is moved to
lib/compiler/aro_translate_c.zig and some of Zig CLI logic moved to a
main() function there.
aro_translate_c.zig becomes the "common" import for clang-based
translate-c.
Not all of the compiler could be disentangled from Aro, however, so for
now it remains compiled with the main compiler sources, because the
clang-based translate-c depends on it. Once
aro-based translate-c achieves feature parity with the clang-based
translate-c implementation, the clang-based one can be removed from Zig.
Aro was unnecessarily difficult to depend on because of these .def files
and all these Zig module requirements. I looked at the .def files and
made these observations:
- The canonical source is LLVM's .def files.
- Therefore there is an update process to sync with LLVM that involves
regenerating the .def files in Aro.
- Therefore you might as well just regenerate the .zig files directly
and check those into Aro.
- Also, with a small amount of tinkering, the on-disk file size of these
  generated .zig files can be made many times smaller without
  compromising type safety in the usage of the data (see the sketch
  below).
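To make that last point concrete, here is a purely hypothetical sketch
of what a checked-in generated .zig data file could look like; the names
and fields are illustrative, not Aro's actual layout. Enums plus packed
flags keep the file small while lookups stay type-safe.

```zig
// Hypothetical generated data file; illustrative only, not Aro's real layout.
pub const Properties = packed struct(u8) {
    const_evaluable: bool = false,
    returns_twice: bool = false,
    _reserved: u6 = 0,
};

pub const Builtin = struct {
    tag: Tag,
    /// Compact parameter encoding instead of repeating long strings per entry.
    param_str: []const u8,
    properties: Properties,

    pub const Tag = enum(u16) {
        __builtin_abs,
        __builtin_labs,
        // ...remaining generated entries...
    };
};

pub const list = [_]Builtin{
    .{ .tag = .__builtin_abs, .param_str = "ii", .properties = .{ .const_evaluable = true } },
    .{ .tag = .__builtin_labs, .param_str = "LiLi", .properties = .{ .const_evaluable = true } },
};
```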
This would make things much easier on Zig as a downstream project; in
particular, we could remove those pesky stubs when bootstrapping.
I have gone ahead with these changes since they unblock me and I will
have a chat with Vexu to see what he thinks.