This fixes two entrypoints within the self-hosted wasm linker that would be called
for the LLVM backend, where we should instead call into the LLVM backend to perform
the action: do not allocate a decl index when we have an LLVM object, and when
flushing a module, call flush on LLVM's object rather than have the wasm linker
perform the operation.
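A minimal sketch of that dispatch shape, with stub types standing in for the real linker state and LLVM object (all names here are assumptions):
```
const LlvmObject = struct {
    fn flushModule(_: *LlvmObject) void {}
};

const Wasm = struct {
    llvm_object: ?*LlvmObject = null,

    fn flushModule(self: *Wasm) void {
        // When the LLVM backend owns the module, forward to its object
        // instead of running the self-hosted wasm flush logic.
        if (self.llvm_object) |obj| return obj.flushModule();
        // ... self-hosted wasm flush path ...
    }
};
```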
Also, this fixes the wasm intrinsics for `wasm.memory.size` and `wasm.memory.grow`.
Lastly, this commit ensures that when an extern function is resolved, we tell LLVM
how to import that function.
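For reference, the builtins these intrinsics back (valid on wasm targets only):
```
fn growByOnePage() ?usize {
    // Grow memory index 0 by one 64 KiB page; the engine may refuse.
    const prev = @wasmMemoryGrow(0, 1);
    if (prev == -1) return null;
    // New size of memory index 0, in pages.
    return @wasmMemorySize(0);
}
```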
This also unifies the wasm backend to use `generateSymbol` when lowering a constant
that cannot be lowered to an immediate value.
Now that both decls and constants have been refactored, the old `genTypedValue` is removed.
Before this we would see ZIR code like this:
```
%69 = alloc_inferred_mut()
%70 = array_base_ptr(%69)
%71 = elem_ptr_imm(%70, 0)
```
This would crash the compiler because it expects to see a
`coerce_result_ptr` instruction after `alloc_inferred_mut`, but that
does not happen in this case because there is no type to coerce the
result pointer to.
In this commit I modified AstGen so that it generates similar ZIR as when
using a `const` instead of a `var`:
```
%69 = alloc_inferred_mut()
%76 = array_init_anon(.{%71, %73, %75})
%77 = store_to_inferred_ptr(%69, %76)
```
This does not obey result locations, meaning if you call a function
inside the initializer, it will end up doing a copy into the LHS.
Solving this problem, or changing the language to make this legal,
will be left for my future self to deal with. Hi future self!
I see you reading this commit log. Hope you're doing OK buddy.
Sema for `store_ptr` of a tuple, where the pointer's element type is in fact the
same type as the operand, had an issue: the comptime fields would get incorrectly
lowered to runtime stores to bogus addresses. This is solved with an exception to
the Sema optimization that handles tuple pointer stores element-wise: in the case
that we are storing a tuple to itself, the optimization is skipped. This results in
better code and avoids the problem. However, this caused a regression in
GeneralPurposeAllocator from the standard library.
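A hypothetical reduction of the pattern (not the actual GeneralPurposeAllocator code):
```
const std = @import("std");

fn runtimeInt() u32 {
    return 42;
}

test "store a tuple through a pointer to its own type" {
    // The second field is comptime-known; the element-wise store
    // optimization used to lower it to a runtime store at a bogus address.
    var tuple = .{ runtimeInt(), 2 };
    const ptr = &tuple;
    ptr.* = tuple;
    try std.testing.expect(tuple[0] == 42);
}
```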
I regressed the test runner code back to the simpler path. It's too
hard to debug standard library code in the LLVM backend right now since
we don't have debug info hooked up. Also, we didn't have any behavior
test coverage of whatever was regressed, so let's try to get that
coverage added as a stepping stone to getting the standard library
working.
* Sema: resolve type fully when emitting an alloc AIR instruction to
avoid tripping assertion for checking struct field alignment.
* LLVM backend: keep a reference to the LLVM target data alive during
  lowering so that we can ask LLVM what it thinks the ABI alignment
  and size of LLVM types are (see the first sketch after this list). We
  need this in order to lower tuples and structs so that we can put in
  extra padding bytes when Zig disagrees with LLVM about the size or
  alignment of something.
* LLVM backend: make the LLVM struct type packed that contains the most
aligned union field and the padding. This prevents the struct from
being too big according to LLVM. In the future, we may want to
consider instead emitting unions in a "flat" manner; putting the tag,
most aligned union field, and padding all in the same struct field
space.
* LLVM backend: make structs with 2 or fewer fields return isByRef=false.
  This results in more efficient codegen. This required the lowering of
  bitcast to sometimes store the struct into an alloca, ptrcast, and
  then load, because LLVM does not allow bitcasting structs (see the
  second sketch after this list).
* Enable more passing behavior tests.
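A minimal sketch of the target data query, going through the LLVM-C API directly (the compiler's own bindings differ):
```
const c = @cImport(@cInclude("llvm-c/Target.h"));

// Ask LLVM what it thinks a type's ABI size and alignment are, so the
// frontend can insert padding bytes where Zig's layout disagrees.
fn llvmAbiInfo(td: c.LLVMTargetDataRef, ty: c.LLVMTypeRef) struct { size: u64, alignment: u32 } {
    return .{
        .size = c.LLVMABISizeOfType(td, ty),
        .alignment = c.LLVMABIAlignmentOfType(td, ty),
    };
}
```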
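And a sketch of the bitcast-through-memory fallback, again via the LLVM-C API:
```
const c = @cImport(@cInclude("llvm-c/Core.h"));

// LLVM rejects `bitcast` between struct types, so spill through memory:
// store into an alloca, cast the pointer, and load back at the new type.
fn bitcastViaAlloca(
    builder: c.LLVMBuilderRef,
    operand: c.LLVMValueRef,
    src_ty: c.LLVMTypeRef,
    dst_ty: c.LLVMTypeRef,
) c.LLVMValueRef {
    const slot = c.LLVMBuildAlloca(builder, src_ty, "");
    _ = c.LLVMBuildStore(builder, operand, slot);
    const casted = c.LLVMBuildBitCast(builder, slot, c.LLVMPointerType(dst_ty, 0), "");
    return c.LLVMBuildLoad2(builder, dst_ty, casted, "");
}
```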
Packed structs were tripping an LLVM assertion due to calling
`LLVMConstZExt` from i16 to i16. Solved by using
`LLVMConstZExtOrBitCast` instead.
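A sketch of the call, assuming the constant-cast helpers present in the LLVM-C API of that era:
```
const c = @cImport(@cInclude("llvm-c/Core.h"));

// `LLVMConstZExt` asserts when the source and destination types are
// identical (the i16 -> i16 case); the OrBitCast variant degrades to a
// no-op bitcast instead.
fn zextConst(val: c.LLVMValueRef, int_ty: c.LLVMTypeRef) c.LLVMValueRef {
    return c.LLVMConstZExtOrBitCast(val, int_ty);
}
```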
Unions were tripping an LLVM assertion due to a typo: using the union's
LLVM type rather than the tag type to construct an integer value.
Unless the pointer is a pointer to a function, if the pointee type
has zero bits, we need to return `MCValue.none`, as the `Decl` has
not been lowered to memory and therefore any GOT reference would be
wrong.
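A self-contained sketch of the decision, with hypothetical parameters standing in for the real type queries:
```
const MCValue = union(enum) {
    none,
    got_load: u32,
};

// Decide whether a Decl pointer gets a GOT load or `MCValue.none`.
fn declRefMCValue(pointee_is_fn: bool, pointee_has_runtime_bits: bool, sym_index: u32) MCValue {
    // A zero-bit pointee (other than a function) was never lowered to
    // memory, so any GOT reference to it would be wrong.
    if (!pointee_is_fn and !pointee_has_runtime_bits) return .none;
    return .{ .got_load = sym_index };
}
```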
This implements #10113 for the self-hosted compiler only. It removes the
ability to override the alignment of packed struct fields, and removes the
ability to put pointers and arrays inside packed structs.
After this commit, nearly all the behavior tests that involve packed
structs pass for the stage2 LLVM backend.
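An illustration of the affected forms (the corresponding compile errors are not implemented yet; see below):
```
const StillAllowed = packed struct { // integer-backed bit fields
    a: u4,
    b: u4,
    c: u8,
};

const NoLongerAllowed = packed struct {
    p: *u32,        // pointers may not appear in packed structs
    arr: [2]u8,     // neither may arrays
    x: u8 align(4), // nor per-field align overrides
};
```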
I didn't implement the compile errors or compile error tests yet. I'm
waiting until we have stage2 building itself and then I want to rework
the compile error test harness with inspiration from Vexu's arocc test
harness. At that point it should be a much nicer dev experience to work
on compile errors.
* link: add a virtual function `lowerUnnamedConsts`, similar to
  `updateFunc` or `updateDecl`, which needs to be implemented by each
  linker backend in order to be used by the `CodeGen` code (a sketch of
  the bookkeeping follows this list)
* elf: implement the `lowerUnnamedConsts` specialization where we
  lower unnamed constants to the `.rodata` section. We keep track of the
  atoms encompassing the lowered unnamed consts in a global table
  indexed by parent `Decl`. When the `Decl` is updated or destroyed,
  we clear the unnamed consts referenced within the `Decl`.
* macho: implement the `lowerUnnamedConsts` specialization where we
  lower unnamed constants to the `__TEXT,__const` section. We keep track
  of the atoms encompassing the lowered unnamed consts in a global table
  indexed by parent `Decl`. When the `Decl` is updated or destroyed,
  we clear the unnamed consts referenced within the `Decl`.
* x64: change `MCValue.linker_sym_index` into two `MCValue`s: `.got_load` and
`.direct_load`. The former signifies to the emitter that it should
emit a GOT load relocation, while the latter that it should emit
a direct load (`SIGNED`) relocation.
* x64: lower `struct` instantiations
* pad out (non-packed) struct fields when lowering to bytes to be
saved in the binary - prior to this change, fields would be
saved at unaligned addresses, leading to incorrect accesses
* add a matching test case to `behavior/struct.zig` tests
* fix offset to field calculation in `struct_field_ptr` on `x86_64`
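A minimal sketch of the bookkeeping, with stub types standing in for the real `Decl` and `Atom`:
```
const std = @import("std");

const Decl = opaque {};
const Atom = struct { sym_index: u32 };

// Each parent Decl owns the atoms created for its unnamed constants, so
// they can be freed together when the Decl is updated or destroyed.
const UnnamedConstTable = std.AutoHashMapUnmanaged(*Decl, std.ArrayListUnmanaged(*Atom));

fn freeUnnamedConsts(gpa: std.mem.Allocator, table: *UnnamedConstTable, decl: *Decl) void {
    if (table.getPtr(decl)) |list| {
        // (freeing each atom's section space is elided)
        list.deinit(gpa);
        _ = table.remove(decl);
    }
}
```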
There are some restrictions here.
- We either need C11 or a compiler that supports the aligned attribute
- We cannot provide an alignment less than the type's natural C alignment.
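A sketch of what is and is not representable, in terms of the Zig source being lowered through the C backend:
```
// Raising alignment is expressible in the emitted C:
//   C11:        _Alignas(8) uint32_t a;
//   otherwise:  __attribute__((aligned(8))) uint32_t a;
export var a: u32 align(8) = 0;

// Lowering alignment below the type's natural C alignment is not
// expressible in standard C, so this cannot be emitted:
// export var b: u32 align(1) = 0;
```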
Instead, use the standardized option for communicating the Zig compiler
backend at comptime, which is `zig_backend`. This was
introduced in commit 1c24ef0d0b09a12a1fe98056f2fc04de78a82df3.
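Usage looks like this (a hedged example; the tag names come from `std.builtin.CompilerBackend`):
```
const builtin = @import("builtin");

test "skip on self-hosted backends for now" {
    // `zig_backend` reports which compiler backend built this code.
    if (builtin.zig_backend != .stage1) return error.SkipZigTest;
}
```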
This mostly reverts commit 692c254336da71cbe21aaf9fbc21240fd1269b95.
The test "for loop over pointers to struct, getting field from struct
pointer" is still failing on the CI so that one is not moved over.
This reverts commit 0a9b4d092f58595888f9e4be8ef683b2ed8a0da1.
Hm, these are all passing for me locally. I'll have to do some
troubleshooting to figure out which one(s) are failing on the CI.