what was happening is that instructions like `lb` were only writing the
low bits of the register and leaving the upper bits dirty. this led to
situations where `cmp_eq`, for example, was using `xor` and failing
because of the leftover bits in the top of the register.
with this commit, we now zero out or truncate depending on the context,
to ensure instructions like `xor` produce correct results.
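to illustrate the failure mode (a minimal sketch in plain Zig, not the
backend code itself; `loadByteDirty` is invented for this example):

    const std = @import("std");

    // simulate an `lb` that only writes the low byte of a 64-bit register,
    // leaving whatever was previously in the upper bits untouched.
    fn loadByteDirty(prev: u64, byte: u8) u64 {
        return (prev & ~@as(u64, 0xff)) | byte;
    }

    test "xor-based cmp_eq needs clean upper bits" {
        const a = loadByteDirty(0xdead_beef_0000_0000, 0x2a);
        const b = loadByteDirty(0, 0x2a);
        // same byte value, but a full-register xor is nonzero, so an
        // xor-based cmp_eq reports "not equal":
        try std.testing.expect(a ^ b != 0);
        // zero-extending / truncating to the operand width fixes it:
        try std.testing.expect(@as(u8, @truncate(a)) == @as(u8, @truncate(b)));
    }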
- implements `airSlice`, `airBitAnd`, `airBitOr`, `airShr`.
- got a basic design going for `airErrorName`, but for some reason it simply
returns empty bytes. will investigate further.
- only generating `.got.zig` entries when not compiling an object or shared library
- reduced the total number of ops a mnemonic can have to 3, simplifying the logic
The old vectorization helper (WipElementWise) was clunky and a bit
annoying to use, and it wasn't really flexible enough.
This introduces a new vectorization helper, which uses Temporary and
Operation types to deduce a Vectorization to perform the operation
in a reasonably efficient manner. It removes the outer loop
required by WipElementWise so that implementations of AIR instructions
are cleaner. This helps with sanity when we start to introduce support
for composite integers.
airShift, convertToDirect, convertToIndirect, and normalize are initially
implemented using this new method.
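A purely hypothetical sketch of the shape of this pattern (the real
Temporary, Operation, and Vectorization types live in the backend; the
names and fields below are invented):

    const std = @import("std");

    const Vectorization = union(enum) { scalar, vector: u32 };

    // a Temporary wraps an operand and knows whether it is scalar or vector.
    const Temporary = struct {
        len: ?u32, // null = scalar, otherwise the vector length

        fn vectorization(self: Temporary) Vectorization {
            return if (self.len) |n| Vectorization{ .vector = n } else .scalar;
        }
    };

    // deduce one Vectorization covering all operands, so each AIR
    // instruction implementation no longer writes the outer loop itself.
    fn deduce(operands: []const Temporary) Vectorization {
        var result: Vectorization = .scalar;
        for (operands) |operand| switch (operand.vectorization()) {
            .scalar => {},
            .vector => |n| result = .{ .vector = n },
        };
        return result;
    }

    test deduce {
        const ops = [_]Temporary{ .{ .len = null }, .{ .len = 4 } };
        try std.testing.expect(deduce(&ops).vector == 4);
    }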
Besides the Intel OpenCL CPU runtime, we can now run the
behavior tests using the Portable Computing Language. This
implementation is open-source, so it will be easier for us
to patch in updated versions of spirv-llvm-translator that
have bug fixes etc.
This reverts commit a7de02e05216db9a04e438703ddf1b6b12f3fbef.
This did not implement the accepted proposal, and I did not sign off
on the changes. I would like a chance to review this, please.
this one is even harder to document than the last large overhaul.
TL;DR:
- split apart Emit.zig into an Emit.zig and a Lower.zig
- created separate files for the encoding, and now adding a new instruction
is as simple as adding it to a couple of switch statements and providing
the encoding (see the sketch after this list).
- relocs are handled in a more sane manner, and we have a clearly defined
boundary between `lea_symbol` and `load_symbol` now.
- a lot of different abstractions for things like the stack, memory, registers, and others.
- we're using x86_64's FrameIndex now, which simplifies a lot of the tougher
parts of the design process.
- a lot more that I don't have the energy to document. at this point, just read the commit itself :p
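to give a flavor of the encoding change, a hypothetical, heavily
simplified sketch (the real mnemonics and encodings live in the new
files; the RISC-V encodings below are just an illustration):

    const std = @import("std");

    const Mnemonic = enum { add, sub, xori };

    const Encoding = struct { opcode: u7, funct3: u3 };

    // adding an instruction = one new enum tag + one new switch prong.
    fn encodingOf(mnemonic: Mnemonic) Encoding {
        return switch (mnemonic) {
            .add, .sub => .{ .opcode = 0b0110011, .funct3 = 0b000 },
            .xori => .{ .opcode = 0b0010011, .funct3 = 0b100 },
        };
    }

    test encodingOf {
        try std.testing.expect(encodingOf(.xori).opcode == 0b0010011);
    }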
LLVMABIAlignmentOfType(i128) reports 16 on this target; however, the C
ABI uses align(4).
Clang in LLVM 17 does this:
%struct.foo = type { i32, i128 }
Clang in LLVM 18 does this:
%struct.foo = type <{ i32, i128 }>
Clang is working around the 16-byte alignment to use align(4) for the C
ABI by making the LLVM struct packed.
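In Zig terms, the layout in question is that of an extern struct along
these lines (a hypothetical example; `Foo` is invented here):

    // Mirrors %struct.foo above; on this target the C ABI wants the i128
    // field at align(4), which is what Clang 18's packed struct achieves.
    const Foo = extern struct {
        a: i32,
        b: i128 align(4),
    };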
I have noted several latent bugs in the handling of packed memory on
big-endian targets, and one of them has been exposed by the previous
commit, triggering this test failure. I am opting to disable it for now
on the grounds that the commit preceding this one helps far more than it
hurts.
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value beyond
which pointer accesses cannot stray, since the parent has no
well-defined layout.
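For example, with a well-defined layout, a reinterpreting access reduces
to "base pointer plus byte offset" (a minimal sketch of the kind of
access the new representation is designed to model):

    comptime {
        const words: [2]u32 = .{ 0x01020304, 0x05060708 };
        // flat representation: base = &words, byte offset = 4.
        const p: *const [4]u8 = @ptrCast(&words[1]);
        _ = p.*; // byte values depend on endianness; the load is the point
    }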
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and element accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
creates a strategy to derive a pointer value using ideally only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is the restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, loading and storing are implemented somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
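Concretely, the restructuring case mentioned above looks something like
this (a sketch, assuming a cast along these lines; this is the shape of
case the new logic targets):

    const std = @import("std");

    comptime {
        const a: [3][2]comptime_int = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
        // reinterpret [3][2] as [2][3]; element order 1,2,3,4,5,6 is kept.
        const b: *const [2][3]comptime_int = @ptrCast(&a);
        std.debug.assert(b[0][2] == 3 and b[1][0] == 4);
    }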
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values and then "unflattening" it, using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
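As a sketch of the `undefined` side of this (assuming a cast like the
following, which the old serialize/deserialize approach could only
handle by materializing 0xAA bytes):

    comptime {
        const S = packed struct { a: u8, b: u8 };
        const s: S = .{ .a = 42, .b = undefined };
        // flatten/unflatten can keep .b undefined instead of inventing bits.
        const x: u16 = @bitCast(s);
        _ = x;
    }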
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
There is no way to know the expected parent pointer attributes (most
notably alignment) from the type of the field pointer, so provide them
in the first argument.