* Advance line and PC prior to ending the sequence in the debug line
  program for a fn_decl. This is equivalent to closing a scope in the
  debugger; without it, the debugger will not map source-to-address
  info and, as a result, will not print the source when breaking at a
  symbol (see the sketch after this list).
* Fix debug aranges sentinels to be the same size as the actual tuple
  descriptor (assuming the segment selector is omitted). In summary,
  the sentinels were 32-bit zeros, whereas they ought to be 64-bit
  zeros (also sketched below).
* Make naming of symbols in the binary more consistent by prefixing
each symbol name with an underscore '_'.
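Both fixes as a minimal sketch, not the compiler's actual code: the
helper names and parameters are hypothetical, the writer/LEB calls use
0.8-era std APIs, and the DWARF opcode numbers come from the spec.
```zig
const std = @import("std");

// Hypothetical helper: advance line and PC *before* DW_LNE_end_sequence
// so the final rows still map addresses back to source lines.
pub fn endSequence(writer: anytype, line_delta: i64, pc_delta: u64) !void {
    const DW_LNS_advance_pc: u8 = 0x02;
    const DW_LNS_advance_line: u8 = 0x03;
    const DW_LNE_end_sequence: u8 = 0x01;

    try writer.writeByte(DW_LNS_advance_line);
    try std.leb.writeILEB128(writer, line_delta);
    try writer.writeByte(DW_LNS_advance_pc);
    try std.leb.writeULEB128(writer, pc_delta);
    // Extended opcodes are escaped with 0x00 followed by their length.
    try writer.writeAll(&[_]u8{ 0x00, 0x01, DW_LNE_end_sequence });
}

// Hypothetical helper: terminate a .debug_aranges list with a tuple of
// zeros the same size as a real (address, length) tuple, i.e. two
// 64-bit zeros on a 64-bit target with the segment selector omitted.
pub fn writeArangesSentinel(writer: anytype) !void {
    try writer.writeIntLittle(u64, 0); // address
    try writer.writeIntLittle(u64, 0); // length
}
```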
This was also an experiment to see whether it is easier to implement a
new feature when using the instruction encoder.
Verdict: it's not that much easier, but I think it's certainly much
more readable, because the description of the Instruction annotates
what each field means. Right now, precise knowledge of x86_64
instructions is still required: things like when to set the 64-bit
flag, or how to read the x86_64 instruction references, are still not
done automatically for you.
In the future, this interface might make it slightly easier to write an
assembler for x86_64, by abstracting the bit-fiddling aspects of
instruction encoding.
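For instance, the "64-bit flag" is the REX.W prefix bit. A purely
illustrative hand-encoding of `mov rax, rdi` (not the encoder's actual
interface):
```zig
// REX.W (0x48) selects 64-bit operand size; without it, the same
// opcode and ModRM bytes encode `mov eax, edi`.
const rex_w: u8 = 0x48;
const opcode: u8 = 0x89; // MOV r/m64, r64
// ModRM: mod=0b11 (register direct), reg=0b111 (rdi), rm=0b000 (rax)
const modrm: u8 = 0b11_111_000; // 0xf8
const mov_rax_rdi = [_]u8{ rex_w, opcode, modrm }; // 48 89 f8
```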
From my very cursory reading, it seems that the register manager doesn't
distinguish between registers that are physically the same but have
different sizes.
This means that during codegen we can't rely on `reg.size()` when
determining the width of the operations we have to perform. Instead, we
must use some form of `ty.abiSize(self.target.*)` to determine the size
of the type we're operating on. If this size is 8 bytes (64 bits), then
we should enable 64-bit operation.
This fixed a bug in the codegen for spilling instructions, which was
overwriting the previous stack entry with zeroes. See the modified test
case in this commit.
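As a sketch of that rule (the helper is hypothetical, not the actual
codegen, and uses present-day std.testing):
```zig
const std = @import("std");

// rax/eax/ax/al are one physical register, so the register alone can't
// tell us the store width; it must come from the type's ABI size. An
// 8-byte spill store for a 4-byte value is exactly how a neighbouring
// stack slot gets zeroed. (Hypothetical helper.)
fn spillStoreIs64Bit(abi_size: u64) bool {
    return abi_size == 8;
}

test "width comes from the type, not the register" {
    try std.testing.expect(spillStoreIs64Bit(8)); // e.g. u64
    try std.testing.expect(!spillStoreIs64Bit(4)); // e.g. u32
}
```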
There were several problems, all fixed:
* AstGen was storing field names as references to the original
  source code bytes. However, that data is destroyed when the
  source file is updated. Now, the field names are correctly stored in
  the Decl arena for the enum (sketched after this list). The same fix
  applies to error set field names.
* Sema was missing a memset inside `analyzeSwitch`, leaving the "seen
  enum fields" array as undefined memory. Now that the entries are all
  properly initialized to null, the validation works (also sketched
  below).
* Moved the "enum declared here" note to the end. It looked weird
  interrupting the notes listing which enum values were missing.
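A minimal sketch of the first two fixes; the helper names are
hypothetical and allocator API details vary by Zig version:
```zig
const std = @import("std");

// Field names must be copied out of the source buffer, because that
// buffer is freed when the file is updated. (Hypothetical helper.)
fn internFieldName(arena: std.mem.Allocator, src_name: []const u8) ![]const u8 {
    return arena.dupe(u8, src_name);
}

// The "seen enum fields" array must be initialized before validation;
// leaving it undefined is what broke the switch validation.
fn initSeenFields(seen: []?u32) void {
    for (seen) |*entry| entry.* = null;
}
```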
Before, incremental compilation would crash when trying to emit compile
errors for the update following the introduction of a parse error.
Parse errors are handled by not invalidating any existing semantic
analysis. However, only the parse error must be reported, with all
other errors suppressed. Once the parse error is fixed, the new file
can be treated as an update to the previous successful update.
* `analyzeContainer` now has an `outdated_decls` set as well as
  `deleted_decls`. Instead of queuing outdated Decls for re-analysis
  right away, they are added to this new set. When processing the
  `deleted_decls` set, we remove deleted Decls from the
  `outdated_decls` set, to prevent deleted Decl pointers from ending
  up in the work_queue. Only after processing the deleted Decls do we
  add analyze_decl work items to the queue (see the sketch below).
* Module.deletion_set is now an `AutoArrayHashMap` rather than an
  `ArrayList`. `declareDeclDependency` now removes a Decl from it as
  appropriate. When processing the `deletion_set` in
  `Compilation.performAllTheWork`, it now assumes all Decls in the set
  are to be deleted.
* Fix a crash when handling parse errors. Currently we unload the
  `ast.Tree` if any parse errors occur. Previously the code emitted a
  LazySrcLoc pointing to a token index, but then, when we tried to
  resolve the token index to a byte offset to create a compile error
  message, the `ast.Tree` would already be unloaded. Now we use
  `LazySrcLoc.byte_abs` instead of `token_abs`, so the error message
  can be created even with the `ast.Tree` unloaded.
Together, these changes solve a crash that happened with incremental
compilation when Decls were added and removed in some combinations.
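The ordering is the important part of the first change. A minimal
sketch, using integer stand-ins for Decl pointers and hypothetical
names; std container APIs vary across Zig versions:
```zig
const std = @import("std");

const DeclIndex = u32; // stand-in for *Decl in this sketch

// Prune deleted Decls from the outdated set before queueing anything,
// so no deleted Decl can end up in the work queue.
fn queueOutdatedDecls(
    outdated: *std.AutoArrayHashMap(DeclIndex, void),
    deleted: *std.AutoArrayHashMap(DeclIndex, void),
    work_queue: *std.ArrayList(DeclIndex),
) !void {
    for (deleted.keys()) |decl| {
        _ = outdated.swapRemove(decl);
    }
    // Only now is it safe to queue re-analysis.
    for (outdated.keys()) |decl| {
        try work_queue.append(decl);
    }
}
```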
Introduce `ResultLoc.none_or_ref`, which is used by field access
expressions to avoid unnecessary loads when the field access itself
will do the load. This turns:
```zig
p.y - p.x - p.x
```
from
```zir
%14 = load(%4) node_offset:8:12
%15 = field_val(%14, "y") node_offset:8:13
%16 = load(%4) node_offset:8:18
%17 = field_val(%16, "x") node_offset:8:19
%18 = sub(%15, %17) node_offset:8:16
%19 = load(%4) node_offset:8:24
%20 = field_val(%19, "x") node_offset:8:25
```
to
```zir
%14 = field_val(%4, "y") node_offset:8:13
%15 = field_val(%4, "x") node_offset:8:19
%16 = sub(%14, %15) node_offset:8:16
%17 = field_val(%4, "x") node_offset:8:25
```
Much more compact. This requires `Sema.zirFieldVal` to support both
pointers and non-pointers.
C backend: Implement typedefs for struct types, as well as the following
TZIR instructions:
* mul
* mulwrap
* addwrap
* subwrap
* ref
* struct_field_ptr
Note that the add, addwrap, sub, subwrap, mul, and mulwrap instructions
are all currently incorrect and need to be updated to properly handle
wrapping and non-wrapping semantics for signed and unsigned integers.
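For reference, the Zig semantics these instructions lower: `+`, `-`,
and `*` treat overflow as illegal behavior, while `+%`, `-%`, and `*%`
wrap, and the C output has to preserve that distinction for both
signednesses. A present-day std.testing sketch:
```zig
const std = @import("std");

test "wrapping vs non-wrapping arithmetic" {
    var a: u8 = 255;
    // addwrap: defined two's-complement wraparound.
    try std.testing.expectEqual(@as(u8, 0), a +% 1);
    // add: overflow would fail a safety check here, and naive signed
    // addition in the C output would be undefined behavior.
    // _ = a + 1;
    a = 0;
    try std.testing.expectEqual(@as(u8, 255), a -% 1); // subwrap
}
```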
C backend: change indentation delta to 1, to make the output smaller and
to process fewer bytes.
I promise I will add a test case as soon as I fix those warnings that
are being printed for my test case.
The GenZir struct now has an rl_ty_inst field, which tracks the result
location type (if any) that a block expects all of its results to be
coerced to.
Remove a redundant coercion on const local initialization with a
specified type.
Switch expressions, when eliding store_to_block_ptr instructions, now
repurpose them as type coercions when the block has a type in its
result location.
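As an illustration in ordinary user code (names hypothetical): the
const's specified type becomes the block's result location type, so
each switch arm's result is coerced to `u32` at the repurposed store
instruction rather than in a separate step:
```zig
const Tag = enum { a, b };

fn classify(tag: Tag) u32 {
    // rl_ty_inst for this switch's block is u32, taken from the
    // const's specified type.
    const x: u32 = switch (tag) {
        .a => 1,
        .b => 2,
    };
    return x;
}
```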
Also fixed `abiAlignment`: for pointers it was returning the ABI
alignment inside the type, rather than that of the pointer itself.
There is now `ptrAlignment` for getting the alignment inside a pointer
type.
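To illustrate the distinction (a test sketch; exact `@typeInfo` and
`std.testing` spellings vary by Zig version):
```zig
const std = @import("std");

test "alignment of the pointer vs alignment inside the type" {
    const Ptr = *align(4) u64;
    // The pointer itself is aligned like any pointer-sized value:
    try std.testing.expect(@alignOf(Ptr) == @alignOf(usize));
    // ...while the alignment carried *inside* the type is 4, which is
    // what ptrAlignment now returns.
    try std.testing.expect(@typeInfo(Ptr).Pointer.alignment == 4);
}
```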