const locals now detect whether the value ends up being comptime known. In
that case, the runtime AIR instructions are replaced with a `decl_ref`
constant.
In the backends, more sophisticated logic for marking decls as alive was
needed to prevent Decls that are indirectly referenced in this manner from
being incorrectly garbage collected.
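A minimal sketch of the kind of code this affects (not a test from the commit); `x` is written as an ordinary const local but its value is comptime known:

```zig
const std = @import("std");

test "const local with a comptime-known value" {
    const x = 10 * 10; // comptime known, so no runtime AIR is needed for it
    var y: u32 = x;
    y += 1;
    try std.testing.expect(y == 101);
}
```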
Add a variant of the `validate_struct_init` ZIR instruction,
`validate_struct_init_comptime`, which is the same except that it
indicates a comptime scope.
Sema code for this instruction now handles default struct field
values and detects when the struct initialization resulted in a
comptime value, replacing the already-emitted AIR instructions
to store each individual field with a single `store` instruction
with a comptime struct value as the operand.
In the case of a comptime scope, there is a simpler path that only
evaluates the implicit store instructions for default field values, avoiding
the mechanism for detecting comptime values.
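As a rough illustration (hypothetical `Point` type, not from the commit):

```zig
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32 = 7, // default field value filled in during validate_struct_init
};

test "struct init with a default field value" {
    var p = Point{ .x = 1 };
    // Every initialization value is comptime known, so the per-field stores
    // can be replaced with a single store of a comptime struct value.
    p.x += 1;
    try std.testing.expect(p.x == 2 and p.y == 7);
}
```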
This regressed one test case for the wasm backend, but it's just hitting
a different prong of `emitConstant` which currently has "TODO" in there,
so I think it's fine.
This allows Zig code to perform conditional compilation based on a tag
by which a Zig compiler implementation identifies itself.
See the doc comment in this commit for more details.
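A hypothetical sketch of such conditional compilation, assuming the tag is exposed through `@import("builtin")` as `zig_backend` (consult the doc comment for the actual name and variants):

```zig
const builtin = @import("builtin");

// Assumed field name and variant; the doc comment has the real ones.
const banner = if (builtin.zig_backend == .stage1)
    "compiled by the bootstrap implementation"
else
    "compiled by a self-hosted backend";
```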
We now detect when the return value must be set by passing a pointer to
stack memory as the first argument, rather than being returned from the
callee's frame. This way, we do not have to worry about stack memory being
overwritten.
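A rough sketch (hypothetical `Vec4`/`splat` names) of a call this affects; the aggregate result is written through the pointer supplied as the first argument instead of being read back out of stack memory that could be clobbered:

```zig
const std = @import("std");

const Vec4 = struct { x: f32, y: f32, z: f32, w: f32 };

fn splat(v: f32) Vec4 {
    // The result is written through the caller-supplied pointer argument
    // instead of living only in this function's stack frame.
    return .{ .x = v, .y = v, .z = v, .w = v };
}

test "return an aggregate by value" {
    const vec = splat(2.0);
    try std.testing.expect(vec.w == 2.0);
}
```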
Besides this, we implement memset either by using wasm's `memory.fill` instruction when available,
or by lowering it manually. In the future we could lower this to a compiler-rt call.
This allows stage2 to build more of compiler-rt.
I also changed `-%` to `-` for comptime ints in the div and mul
implementations of compiler-rt. This is clearer code and also happens to
work around a bug in stage2.
Previously, the `load` instruction would just pass the pointer on to the next instruction
for types that satisfy `isByRef`. However, this meant that a `defer` would write directly
to the referenced memory rather than to a copy. After this commit, we always copy the value.
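For example, the copy matters for code like this (a sketch, not a test from the commit):

```zig
const std = @import("std");

test "loading an aggregate produces a copy" {
    var original = [_]u8{ 1, 2, 3 };
    var copy = original; // the `load` must copy, not alias, the memory
    copy[0] = 42;
    original[1] = 20;
    try std.testing.expect(original[0] == 1);
    try std.testing.expect(copy[1] == 2);
}
```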
- This implements all pointer-arithmetic-related instructions, such as `ptr_add`,
  `ptr_sub`, and `ptr_elem_val` (see the sketch after this list).
- We refactored the code to use `isByRef` to ensure consistency.
- Pointers are now loaded correctly, rather than being passed around.
- The behavior test for pointers is now passing.
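The sketch below shows the kind of user code these instructions come from (not a test from the commit):

```zig
const std = @import("std");

test "pointer arithmetic" {
    var array = [_]i32{ 10, 20, 30, 40 };
    var ptr: [*]i32 = &array;
    ptr += 1; // ptr_add
    try std.testing.expect(ptr[0] == 20); // ptr_elem_val
    ptr -= 1; // ptr_sub
    try std.testing.expect(ptr[3] == 40);
}
```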
- Previously the table index and the function type index were swapped;
  this commit puts them in the correct order.
- This also emits the correct indirect function call count when importing the function table
- Add a method to easily create a local for the virtual stack
- Ensure function pointers are passed correctly
- Correctly handle slices as return types and values
- Fix wrapping error sets/payloads.
- Handle ptr-like optionals correctly, by using address '0' as null
  (see the sketch after this list).
- Implement `array_to_slice`
- linker: Always emit a table, so that `call_indirect` inside function bodies does not fail when there is no table.
TODO: Only do this when we emit a `call_indirect` but the relocation cannot be resolved.
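A rough sketch of a few constructs covered by these fixes (slices as return values, `array_to_slice`, and pointer-like optionals):

```zig
const std = @import("std");

fn tail(s: []const u8) []const u8 {
    return s[1..]; // a slice returned by value
}

test "slices and pointer-like optionals" {
    var array = [_]u8{ 'a', 'b', 'c' };
    array[2] = 'z';
    const slice: []const u8 = &array; // array_to_slice
    try std.testing.expect(tail(slice).len == 2);
    try std.testing.expect(slice[2] == 'z');

    var opt: ?*const u8 = null; // null represented as address 0
    try std.testing.expect(opt == null);
    opt = &array[0];
    try std.testing.expect(opt.?.* == 'a');
}
```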
* load the address of a stack variable into a register via the
  `lea` instruction
* store a value on the stack through a pointer held in a register via a
  `mov [reg], imm` instruction
* the lowerings are handled automatically by the Mir -> Isel layer
* add an initial (without safety) implementation of `.optional_payload`
  (sketched below)
* add matching stage2 test cases
Effectively a small continuation of #10152
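Roughly the kind of Zig code these lowerings correspond to (a sketch, not one of the added test cases):

```zig
const std = @import("std");

test "pointer to a stack variable and optional payload" {
    var x: u32 = 5;
    const ptr = &x; // address of a stack variable taken via `lea`
    ptr.* = 10;     // store through the register-held pointer
    var opt: ?u32 = null;
    opt = x;
    try std.testing.expect(opt.? == 10); // unwraps via `.optional_payload`
}
```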
This allows the for.zig behavior tests to pass. Unfortunately, to fully test everything I had to move a lot of behavior tests from array.zig; most of them now pass (sorry @rainbowbismuth!)
I'm also conflicted about how I store constants into arrays because it's kind of stupid; arrays can't be re-initialized using the same syntax, so instead of initializing each element, a new array is made and copied into the destination. This also means `renderValue` can no longer emit string literals for byte arrays, since they always need an extra byte for the null terminator, so strings are no longer grep-able in the output.
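On the Zig side the pattern in question looks roughly like this; the C backend emits a separate constant array and copies it into the destination rather than using an initializer list:

```zig
const std = @import("std");

test "assigning a constant array" {
    var array = [_]u8{ 1, 2, 3, 4 };
    // Lowered as a copy from a separately emitted constant array.
    array = [_]u8{ 5, 6, 7, 8 };
    try std.testing.expect(array[0] == 5);
}
```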
* fix handling of the `ah`, `bh`, `ch`, and `dh` registers (which are
  actually used as aliases for `dil`, etc.). Currently, we treat them
  as aliases only, meaning that when we encounter `ah` we make sure to
  set REX.W to promote the instruction to 64 bits and use the `dil`
  register instead - otherwise we might get a mismatch between the
  registers used in different parts of the codegen. In the future, we
  can and should use `ah`, etc. as the upper 8-bit halves of the
  16-bit registers `ax`, etc.
* fix a bug in `airCmp`: the `.cmp` MIR instruction shouldn't force the
  type to `Bool` but should let the original operand type propagate
  downwards - we need this to make an informed choice of the target
  register size and hence choose the right encoding down the line.
* implement lowering of 1-byte and 2-byte values to the stack and add
  matching stage2 tests for x86_64 codegen (sketched below)
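For instance, stores like these now lower correctly (a sketch in the spirit of the added tests):

```zig
const std = @import("std");

test "1-byte and 2-byte stack values" {
    var a: u8 = 0x12;
    var b: u16 = 0x3456;
    a += 1;
    b += 1;
    try std.testing.expect(a == 0x13);
    try std.testing.expect(b == 0x3457);
}
```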
* `Module.Union.getLayout`: fixes to support components of the union
being 0 bits.
* Implement `@typeInfo` for unions.
* Add missing calls to `resolveTypeFields`.
* Fix explicitly-provided union tag types passing a `Zir.Inst.Ref`
  where an `Air.Inst.Ref` was expected. We don't have any type safety
  for this; these types are aliases. (See the sketch after this list.)
* Fix explicitly-provided `union(enum)` tag Values being allocated in
  the wrong arena.
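A sketch of the kind of union these fixes cover: an explicitly provided tag type and a 0-bit field (hypothetical names):

```zig
const std = @import("std");

const Tag = enum { a, b };

const U = union(Tag) {
    a: u32,
    b: void, // 0-bit component, relevant to `Module.Union.getLayout`
};

test "union with an explicitly provided tag type" {
    var u = U{ .a = 123 };
    u = U{ .b = {} };
    const is_b = switch (u) {
        .a => false,
        .b => true,
    };
    try std.testing.expect(is_b);
}
```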
* reduce the number of branches in `zirCmpEq`
* implement equality comparison for enums and unions (see the sketch
  after this list)
* fix coercion from union to its tag type resulting in the wrong type
* fix method calls of unions
* implement peer type resolution for unions, enums, and enum literals
* fix union tag type memory being allocated in the wrong arena
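A sketch of the comparison and coercion involved (hypothetical types):

```zig
const std = @import("std");

const E = enum { x, y };
const V = union(E) { x: u32, y: void };

test "union and enum comparison and coercion" {
    var v = V{ .x = 1 };
    v = V{ .y = {} };
    const tag: E = v; // coerce a union to its tag type
    try std.testing.expect(tag == .y); // enum equality
    try std.testing.expect(v == .y); // union compared against an enum value
}
```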
Comment from this commit reproduced here:
LLVM does not allow us to change the type of globals. So we must
create a new global with the correct type, copy all its attributes,
and then update all references to point to the new global,
delete the original, and rename the new one to the old one's name.
This is necessary because LLVM does not support const bitcasting
a struct with padding bytes, which is needed to lower a const union value
to LLVM, when a field other than the most-aligned is active. Instead,
we must lower to an unnamed struct, and pointer cast at usage sites
of the global. Such an unnamed struct is the cause of the global type
mismatch, because we don't have the LLVM type until the *value* is created,
whereas the global needs to be created based on the type alone, because
lowering the value may reference the global as a pointer.
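For reference, the kind of Zig constant that hits this path is a union value whose active field is not the most-aligned one, e.g. (hypothetical names):

```zig
const std = @import("std");

const U = extern union {
    big: u64,
    small: u8,
};

// The active field is not the most-aligned one, so the value is lowered as an
// unnamed struct and the global is pointer-cast at its usage sites.
var global = U{ .small = 7 };

test "union global with a non-most-aligned active field" {
    try std.testing.expect(global.small == 7);
}
```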