Clang 17 passed struct{f128} parameters using rdi and rax, while Clang
18 matches GCC 13.2 behavior, passing them using xmm0.
This commit makes Zig's LLVM backend match Clang 18 and GCC 13.2. The
commit deletes a hack in x86_64/abi.zig which miscategorized f128 as
"memory", in clear disagreement with the x86_64 SysV ABI spec.
LLVM now refuses to lower vector arguments and return values on x86
targets when the total vector bit size is >= 512.
This code detects such a situation and uses byref instead of byval.
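For example, a 512-bit vector parameter of the kind that now takes the
byref path:

```zig
const std = @import("std");

const V = @Vector(16, u32); // 16 * 32 = 512 bits total

// With this change, the backend passes `v` by reference rather than
// asking LLVM to lower a >= 512-bit vector argument directly.
fn addOne(v: V) V {
    return v + @as(V, @splat(1));
}

pub fn main() void {
    const r = addOne(@splat(1));
    std.debug.assert(r[0] == 2);
}
```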
* Some manual fixes to the generated CPU features code. In the future it
would be nice to make the script apply those automatically.
* Add entries to various target OS switches. Some of the values I was
unsure of, so I added TODO panics, for example in the case of the spirv
CPU arch (see the sketch below).
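The TODO-panic pattern looks roughly like this (function and values
hypothetical):

```zig
const std = @import("std");

// Entries whose correct value I couldn't determine panic with a TODO
// rather than silently guessing.
pub fn pointerBits(arch: std.Target.Cpu.Arch) u16 {
    return switch (arch) {
        .x86_64, .aarch64 => 64,
        .x86, .arm => 32,
        .spirv32, .spirv64 => @panic("TODO: pointer width for spirv CPU arch"),
        else => 64,
    };
}
```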
This was a "fake" type used to handle C varargs parameters, much like
generic poison. In fact, it is treated identically to generic poison in
all cases other than one (the final coercion of a call argument), which
is trivially special-cased. Thus, it makes sense to remove this special
tag and instead use `generic_poison_type` in its place. This fixes
several bugs in Sema related to missing handling of this tag.
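For context, the kind of call site this affects -- the trailing argument
of a C varargs call has no parameter type until its final coercion:

```zig
const c = @cImport(@cInclude("stdio.h"));

pub fn main() void {
    // `42` has no corresponding parameter in `printf`'s signature, so Sema
    // tracks it as generic poison (previously the removed fake type) until
    // the argument's final coercion.
    _ = c.printf("value: %d\n", @as(c_int, 42));
}
```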
Resolves: #19781
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading from and storing
to them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc., were explicitly represented. This works well for simple
cases, but is quite difficult to handle in cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance, a `field` -- is a single value which
pointer accesses cannot exceed, since the parent has no well-defined
layout.
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and element accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
computes a strategy for deriving a pointer value, ideally using only
field and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
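To make the byte-offset idea concrete, a rough user-level example (the
internal handling is of course more involved):

```zig
const std = @import("std");

comptime {
    const arr: [2]u16 = .{ 0x0101, 0x0202 };
    // After the cast, this pointer is representable as base `arr` plus a
    // byte offset of 2 -- no field/element chain to reconstruct.
    const p: *const [2]u8 = @ptrCast(&arr[1]);
    std.debug.assert(p[0] == 2 and p[1] == 2);
}
```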
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, loading and storing work somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
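As user code, the restructuring case above looks roughly like this,
assuming the reshape is written as a pointer cast:

```zig
const std = @import("std");

comptime {
    const a: [3][2]comptime_int = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
    // Reinterpret the 3x2 comptime-only array as 2x3; the flattened
    // element order 1..6 is preserved.
    const p: *const [2][3]comptime_int = @ptrCast(&a);
    std.debug.assert(p[1][2] == 6);
}
```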
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it, using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
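A sketch of the `undefined` case, assuming a packed struct so the layout
is bit-exact:

```zig
const std = @import("std");

const Pair = packed struct { a: u8, b: u8 };

comptime {
    const pair: Pair = .{ .a = 7, .b = undefined };
    // Old logic: serializing `pair` turned b's bits into 0xAA. New logic:
    // `a` survives the round trip exactly, and `b` stays undefined.
    const bits: u16 = @bitCast(pair);
    const back: Pair = @bitCast(bits);
    std.debug.assert(back.a == 7);
}
```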
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
Legacy anon decls now have three uses:
* Type owner decls
* Function owner decls
* `@export` and `@extern`
Therefore, there are no longer any cases where we wish to explicitly
omit legacy anon decls from the binary. This means we can remove the
concept of an "alive" vs "dead" `Decl`, which also allows us to remove
the separate `anon_work_queue` in `Compilation`.
Good riddance!
Most of these changes are trivial. There's a fix for a minor bug this
exposed in `Value.readFromPackedMemory`, but aside from that, it's all
just things like changing `intern` calls to `toIntern`.
`Decl` can no longer store un-interned values, so this field is now
unnecessary. The type can instead be fetched with the new `typeOf`
helper method, which just gets the type of the Decl's `Value`.
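Roughly (a sketch; the exact signature may differ):

```zig
// The type is derived from the Decl's interned value rather than being
// stored alongside it.
pub fn typeOf(decl: Decl, zcu: *const Zcu) Type {
    return decl.val.typeOf(zcu);
}
```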
This issue was causing debug information to sometimes not function
correctly for some local variables, with debuggers simply reporting that
the variable does not exist. What was happening was that after an AIR
body -- and thus a debug lexical scope -- begins, but before any `dbg_stmt`
within it, the `scope` on `self.wip.debug_location` refers to the parent
scope, but the `scope` field on the `DILocalVariable` metadata passed to
`@llvm.dbg.declare` points, correctly, to the nested scope. I haven't
looked into precisely what happens here, but in short, it would appear
that LLVM Doesn't Like It (tm).
The fix is simple: when we change `self.scope` at the start or end of an
AIR body, also modify the scope on `self.wip.debug_location`. This is
correct as we always want the debug info for an instruction to be
associated with the block it is within, even if the line/column are
slightly outdated for any reason.
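As a hedged sketch (helper name and field shapes hypothetical):

```zig
// Wherever `self.scope` changes at an AIR body boundary, update the
// pending debug location's scope too, instead of leaving it pointing at
// the parent scope until the next `dbg_stmt`.
fn setDebugScope(self: *FuncGen, scope: Metadata) void {
    self.scope = scope;
    self.wip.debug_location.scope = scope; // previously left stale
}
```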
This commit changes how we represent comptime-mutable memory
(`comptime var`) in the compiler in order to implement the intended
behavior that references to such memory can only exist at comptime.
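For instance, a minimal illustration of the restriction:

```zig
// Rejected: the global's value would carry a reference to comptime-mutable
// memory out of the single Sema that created it.
const ptr = blk: {
    var x: u32 = 123; // comptime-mutable memory, local to this analysis
    x += 1;
    break :blk &x;
};

comptime {
    _ = ptr;
}
```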
This commit does *not* clean up the representation of mutable values, improve the
representation of comptime-known pointers, or fix the many bugs in the
comptime pointer access code. These will be future enhancements.
Comptime memory lives for the duration of a single Sema, and is not
permitted to escape that one analysis, either by becoming runtime-known
or by becoming comptime-known to other analyses. These restrictions mean
that we can represent comptime allocations not via Decl, but with state
local to Sema - specifically, the new `Sema.comptime_allocs` field. All
comptime-mutable allocations, as well as any comptime-known const allocs
containing references to such memory, live in here. This allows for
relatively fast checking of whether a value references any
comptime-mutable memory, since we need only traverse values up to
pointers: pointers to Decls can never reference comptime-mutable memory,
and pointers into `Sema.comptime_allocs` always do.
This change exposed some faulty pointer access logic in `Value.zig`.
I've fixed the important cases, but there are some TODOs I've put in
which are definitely possible to hit with sufficiently esoteric code. I
plan to resolve these by auditing all direct accesses to pointers (most
of them ought to use Sema to perform the pointer access!), but for now
this is sufficient for all realistic code and to get tests passing.
This change eliminates `Zcu.tmp_hack_arena`, instead using the Sema
arena for comptime memory mutations, which is possible since comptime
memory is now local to the current Sema.
This change should allow `Decl` to store only an `InternPool.Index`
rather than a full-blown `ty: Type, val: Value`. This commit does not
perform this refactor.
A pointer type already has an alignment, so this information does not
need to be duplicated on the function type. There is already precedent
with addrspace, which is disallowed on function types for the same
reason. Also fixes `@TypeOf(&func)` to have the correct addrspace and
alignment.
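For example (type-info field names as of this era of Zig):

```zig
const std = @import("std");

fn f() align(16) void {}

comptime {
    // The alignment now lives on the pointer type returned by `&f`.
    const info = @typeInfo(@TypeOf(&f)).Pointer;
    std.debug.assert(info.alignment == 16);
}
```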
This implements the accepted proposal #18816. Namespace-owning types
(struct, enum, union, opaque) are no longer made unique each time they are analysed;
instead, their identity is determined based on their AST node and the
set of values they capture.
Reified types (`@Type`) are deduplicated based on the structure of the
type created. For instance, if two structs are created by the same
reification with identical fields, layout, etc, they will be the same
type.
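For instance, something like the following now holds (builtin field
names as of this era of Zig):

```zig
const std = @import("std");

// Two structurally identical reifications...
const A = @Type(.{ .Struct = .{
    .layout = .auto,
    .fields = &.{},
    .decls = &.{},
    .is_tuple = false,
} });
const B = @Type(.{ .Struct = .{
    .layout = .auto,
    .fields = &.{},
    .decls = &.{},
    .is_tuple = false,
} });

comptime {
    // ...are now the same type.
    std.debug.assert(A == B);
}
```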
This commit does not produce a working compiler; the next commit, adding
captures for decl references, is necessary. It felt appropriate to split
this up.
Resolves: #18816
Namespace types (`struct`, `enum`, `union`, `opaque`) do not use
structural equality -- equivalence is based on their Decl index (and soon
will change to AST node + captures). However, we previously stored all
other information in the corresponding `InternPool.Key` anyway. For
logical consistency, it makes sense to have the key only be the true key
(that is, the Decl index) and to load all other data through another
function. This introduces those functions, under names like
`loadStructType`. It's a big diff, but most of it is no-brainer
changes.
In future, it might be nice to eliminate a bunch of the loaded state in
favour of accessor functions on the `LoadedXyzType` types (like how we
have `LoadedUnionType.size()`), but that can be explored at a later
date.
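A hedged sketch of the resulting access pattern (surrounding context
hypothetical):

```zig
// The Key carries only the type's identity; field information is loaded
// on demand through the new accessor.
fn fieldCount(ip: *const InternPool, ty: Type) u32 {
    const loaded = ip.loadStructType(ty.toIntern());
    return loaded.field_types.len;
}
```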
Since we now elide more ZIR blocks in AstGen, care must be taken in
codegen to introduce lexical scopes for every body, not just `block`s.
Also, elide a few unnecessary AIR blocks in Sema.