Implements the `vector_init` instruction for structs and arrays.
For arrays, it checks whether the elements must be passed by reference.
When they are not, it can simply use the `offset` field of a store instruction
to copy the values into the array. When they are by reference, it moves the
pointer forward by the element size and then performs a store operation. This
ensures types such as structs are moved into the right position.
For structs we always move the pointer, as we currently cannot verify that
none of the fields are by reference.
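As a source-level illustration of the two cases, here is a hypothetical example
(the element types are arbitrary): `u32` elements can be stored directly, while
struct elements such as `Pair` are the by-reference case, where the pointer is
advanced by the element size before each store.

```zig
const std = @import("std");

// Illustrative only: `Pair` stands in for any element type that is passed
// by reference during lowering.
const Pair = struct { x: u32, y: u32 };

test "array init with by-value and by-reference elements" {
    const ints = [3]u32{ 1, 2, 3 }; // elements stored directly
    const pairs = [2]Pair{ // elements moved via pointer advance + store
        .{ .x = 1, .y = 2 },
        .{ .x = 3, .y = 4 },
    };
    try std.testing.expect(ints[2] == 3);
    try std.testing.expect(pairs[1].y == 4);
}
```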
This implements lowering of `elem_ptr` for decls and constants.
To generate the correct pointer, we perform a relocation using an addend
that represents the offset. The offset is calculated by multiplying the
element's size by the index.
For constants this generates a single immediate instruction, and for decls
this generates a single pointer address.
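A worked example of the offset arithmetic (the decl and index are hypothetical):
for a decl holding a `[4]u32`, the pointer to index 3 is the decl's base address
plus an addend of `@sizeOf(u32) * 3 = 12` bytes.

```zig
const std = @import("std");

// Hypothetical decl used only to illustrate the addend computation.
const values = [4]u32{ 10, 20, 30, 40 };

test "elem_ptr offset is element size times index" {
    try std.testing.expect(@sizeOf(u32) * 3 == 12);
    try std.testing.expect(values[3] == 40);
}
```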
* pad out (non-packed) struct fields when lowering to bytes to be
saved in the binary - prior to this change, fields would be
saved at unaligned addresses, leading to incorrect accesses
(see the sketch after this list)
* add a matching test case to the `behavior/struct.zig` tests
* fix the offset-to-field calculation in `struct_field_ptr` on `x86_64`
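A minimal illustration of the alignment rule, using a hypothetical struct: in a
non-packed struct, each field sits at its natural alignment, so padding bytes
are inserted between `a: u8` and `b: u32` as needed.

```zig
const std = @import("std");

// Hypothetical struct: a 1-byte field followed by a 4-byte field, so the
// compiler must insert padding so that `b` lands at 4-byte alignment.
const Mixed = struct {
    a: u8,
    b: u32,
};

test "non-packed struct fields are naturally aligned" {
    try std.testing.expect(@offsetOf(Mixed, "b") % @alignOf(u32) == 0);
    try std.testing.expect(@sizeOf(Mixed) >= @sizeOf(u8) + @sizeOf(u32));
}
```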
Clarify that `astgen.advanceSourceCursor` already increments the absolute
line and column numbers; `GenZir.calcLine` is therefore not only obsolete
but wrong by design.
Incidentally, this cleanup allows the `FnDecl` line numbers for DWARF use
to be specified correctly as values relative to the start of the parent
`Decl`. That `Decl` in turn has its line number information specified
relative to its own parent `Decl`, and so on, until we reach the global
scope.
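A purely illustrative sketch of this relative bookkeeping; the line numbers in
the comments are hypothetical.

```zig
// The FnDecl for `f` records its line as a delta from the start of its
// parent Decl `S`, which in turn records its own line relative to its
// parent, and so on up to the global scope.
const S = struct { // suppose this Decl starts on absolute line 10
    fn f() void {} // absolute line 11, recorded as a delta of 1 from `S`
};

test "relative line bookkeeping illustration" {
    S.f();
}
```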
This commit fixes two related things:
1. If the loop goes all the way through the slice without a match, then on
   the last iteration `mid == symbols.len - 1`, which makes
   `&symbols[mid + 1]` out of bounds. End one step before that instead.
2. If the address we're looking for is greater than the address of the
   last symbol in the slice, we now match it to that symbol. Previously,
   we would miss this case, since we only matched if the address was _in
   between_ the addresses of two symbols. (A simplified sketch of the fixed
   matching rule follows this list.)
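In the sketch below, the `Symbol` type and `findSymbol` helper are hypothetical,
and a linear scan stands in for the actual search; it demonstrates both fixes.

```zig
const std = @import("std");

const Symbol = struct { name: []const u8, addr: u64 };

/// Hypothetical helper: returns the symbol whose address range contains
/// `address`, assuming `symbols` is sorted by ascending address.
fn findSymbol(symbols: []const Symbol, address: u64) ?*const Symbol {
    if (symbols.len == 0 or address < symbols[0].addr) return null;
    var i: usize = 0;
    // Stop one step early so `symbols[i + 1]` can never go out of bounds.
    while (i < symbols.len - 1) : (i += 1) {
        if (address >= symbols[i].addr and address < symbols[i + 1].addr)
            return &symbols[i];
    }
    // The address is at or past the last symbol's address: match the last
    // symbol instead of missing it.
    return &symbols[symbols.len - 1];
}

test "addresses past the last symbol match the last symbol" {
    const syms = [_]Symbol{
        .{ .name = "a", .addr = 0x1000 },
        .{ .name = "b", .addr = 0x2000 },
    };
    try std.testing.expect(findSymbol(&syms, 0x1500) == &syms[0]);
    try std.testing.expect(findSymbol(&syms, 0x3000) == &syms[1]);
    try std.testing.expect(findSymbol(&syms, 0x0800) == null);
}
```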
which is the index of the key that already exists in the hash map.
This enables the use case of `AutoArrayHashMap(void, void)`, which may seem
surprising at first but is actually pretty handy!
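For example, a small usage sketch of the index carried by the `getOrPut`
result (the `u32` key type is arbitrary, and a key-storing map is used here
for simplicity):

```zig
const std = @import("std");

test "getOrPut exposes the entry's index" {
    var map = std.AutoArrayHashMap(u32, void).init(std.testing.allocator);
    defer map.deinit();

    // The result's `index` identifies the entry whether it was just
    // inserted or already present.
    const first = try map.getOrPut(123);
    const again = try map.getOrPut(123);
    try std.testing.expect(again.found_existing);
    try std.testing.expect(first.index == again.index);
}
```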
This commit includes a proof-of-concept of how I want to use it: a new
InternArena abstraction for stage2 that provides a compact way to store
values (and types) in an "internment arena". Each type is stored exactly
once (per arena), is representable as a single u32 reference into an
InternArena, and can be compared with a simple u32 integer comparison: if
both types are in the same InternArena, you can check whether they are
equal by checking whether their indexes are the same.
What's neat about `AutoArrayHashMap(void, void)` is that it allows us to
look up the indexes by key, *without actually storing the keys*.
Instead, keys are treated as ephemeral values that are constructed as
needed.
As a result, we have an extremely efficient encoding of types and values,
represented by just three arrays containing no pointers, which can therefore
be serialized and deserialized with a single writev/readv call. The `map`
field is denormalized data and can be recomputed from the other two fields.
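To make this concrete, here is a minimal, hypothetical sketch of the pattern.
None of the names (`SmallPool`, `Item`, `get`) are the real InternArena API,
and error handling is simplified: `items` holds the pointer-free data, the
`(void, void)` map only accelerates lookups through an adapter that hashes and
compares ephemeral keys against `items`, and the u32 index serves as the
reference.

```zig
const std = @import("std");

const SmallPool = struct {
    // Lookup acceleration only: denormalized, and rebuildable from `items`.
    map: std.AutoArrayHashMapUnmanaged(void, void) = .{},
    // The actual data: pointer-free, one entry per distinct key.
    items: std.ArrayListUnmanaged(Item) = .{},

    const Item = struct { tag: u8, payload: u32 };

    // Adapter that hashes/compares ephemeral `Item` keys against entries in
    // `items`, so the map never stores keys or values itself.
    const Adapter = struct {
        pool: *const SmallPool,

        pub fn hash(ctx: @This(), key: Item) u32 {
            _ = ctx;
            var hasher = std.hash.Wyhash.init(0);
            std.hash.autoHash(&hasher, key);
            return @truncate(hasher.final());
        }

        pub fn eql(ctx: @This(), a: Item, b: void, b_index: usize) bool {
            _ = b;
            return std.meta.eql(a, ctx.pool.items.items[b_index]);
        }
    };

    /// Returns a u32 reference; equal keys always yield the same reference.
    fn get(pool: *SmallPool, gpa: std.mem.Allocator, key: Item) !u32 {
        const gop = try pool.map.getOrPutAdapted(gpa, key, Adapter{ .pool = pool });
        if (!gop.found_existing) try pool.items.append(gpa, key);
        return @intCast(gop.index);
    }
};

test "equal keys produce equal u32 references" {
    const gpa = std.testing.allocator;
    var pool = SmallPool{};
    defer pool.map.deinit(gpa);
    defer pool.items.deinit(gpa);

    const a = try pool.get(gpa, .{ .tag = 1, .payload = 42 });
    const b = try pool.get(gpa, .{ .tag = 1, .payload = 42 });
    const c = try pool.get(gpa, .{ .tag = 2, .payload = 7 });
    try std.testing.expect(a == b);
    try std.testing.expect(a != c);
}
```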
This is in contrast to our current Type/Value system, which makes
extensive use of pointers.
The test at the bottom of InternArena.zig passes in this commit.