Support for f128, comptime_float, and c_longdouble requires improvements
to compiler_rt and will be implemented in a later PR. Some of the code in
this commit could be made more generic, for instance `llvm.airSqrt`
could probably be `llvm.airUnaryMath`, but let's cross that
bridge when we get to it.
* link: add a virtual function `lowerUnnamedConsts`, similar to
`updateFunc` or `updateDecl`, which needs to be implemented by each
linker backend in order to be used with the `CodeGen` code
* elf: implement `lowerUnnamedConsts` specialization where we
lower unnamed constants to `.rodata` section. We keep track of the
atoms encompassing the lowered unnamed consts in a global table
indexed by parent `Decl`. When the `Decl` is updated or destroyed,
we clear the unnamed consts referenced within the `Decl`.
* macho: implement `lowerUnnamedConsts` specialization where we
lower unnamed constants to `__TEXT,__const` section. We keep track of the
atoms encompassing the lowered unnamed consts in a global table
indexed by parent `Decl`. When the `Decl` is updated or destroyed,
we clear the unnamed consts referenced within the `Decl`.
* x64: change `MCValue.linker_sym_index` into two `MCValue`s: `.got_load` and
`.direct_load`. The former signifies to the emitter that it should
emit a GOT load relocation, while the latter that it should emit
a direct load (`SIGNED`) relocation.
* x64: lower `struct` instantiations
AstGen: Fixed bug where f80 types in source were triggering illegal
behavior.
Value: handle f80 in floating point arithmetic functions.
Value: implement floatRem and floatMod
This commit introduces dependencies on compiler-rt routines that are not
yet implemented. Implementing them is a prerequisite to merging this branch.
When setting the break value in an if expression we must explicitly
check whether a result location type coercion needs to happen. This was
already done for switch expressions, so let's imitate that check and
apply the fix to if expressions. To make this possible, we now also
propagate `rl_ty_inst` to sub scopes.
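A minimal sketch of the kind of code this affects (illustrative, not
the original test case):

    const expect = @import("std").testing.expect;

    test "if break value coerces to the result location type" {
        var cond = true;
        // both branch results must coerce to the result type u64:
        const x: u64 = if (cond) 1 else 2;
        try expect(x == 1);
    }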
fieldVal handles pointer to pointer to array. This can happen, for
example, if a pointer to an array is used as the condition expression of
a for loop.
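For instance (a hedged illustration; the extra pointer level arises
internally when the for loop takes a reference to its condition):

    const expect = @import("std").testing.expect;

    test "for loop over a pointer to an array" {
        var arr = [_]u8{ 1, 2, 3 };
        const ptr = &arr; // *[3]u8; referencing it again yields **[3]u8
        var sum: usize = 0;
        for (ptr) |elem| sum += elem;
        try expect(sum == 6);
    }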
resolveStructFully handles tuples (by doing nothing).
Fixed Type comparison for tuples to handle comptime fields properly.
* resolve_inferred_alloc now gives a proper mutability attribute to the
corresponding alloc instruction. Previously, it would fail to mark
things const.
* slicing: fix the detection for when the end index equals the length
of the underlying object. Previously it was using `end - start` but
it should just use the end index directly. The fix also accounts for
slicing a comptime-known slice (see the sketch after this list).
* `Type.sentinel`: fix not handling all slice tags
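A minimal sketch of the end-equals-length case (illustrative, not the
original test):

    const expect = @import("std").testing.expect;

    test "slicing up to the length of the array" {
        var buf = [_]u8{ 1, 2, 3, 4 };
        // end == buf.len here, even though end - start == 3:
        const s = buf[1..buf.len];
        try expect(s.len == 3);
    }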
Singular tests (such as the bug ones) are moved to the top level with
exclusions for non-passing backends. The big behavior tests, such as
array_llvm and slice, are moved to the inner scope with the C backend
disabled. They all pass for the wasm backend now.
* pad out (non-packed) struct fields when lowering them to bytes to be
saved in the binary - prior to this change, fields would be saved at
unaligned offsets, leading to incorrect accesses (see the sketch after
this list)
* add a matching test case to `behavior/struct.zig` tests
* fix offset to field calculation in `struct_field_ptr` on `x86_64`
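A hedged sketch of the class of bug being fixed (field values are
illustrative):

    const expect = @import("std").testing.expect;

    const S = extern struct {
        a: u8,
        b: u32, // must be stored at offset 4, not at offset 1
    };
    const s = S{ .a = 1, .b = 0x12345678 };

    test "padded struct constant round-trips" {
        try expect(@offsetOf(S, "b") == 4);
        try expect(s.b == 0x12345678);
    }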
* comptime-known 0 as a numerator returns comptime 0, independent of
the denominator.
* a negative numerator and denominator are allowed when the remainder is
zero, because then the modulus would also be zero (see the sketch after
this list).
* organize math behavior tests
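A sketch of both rules, assuming the semantics described above:

    test "comptime remainder edge cases" {
        comptime {
            const a = 0 % -7; // comptime-known 0 numerator => comptime 0
            const b = -6 % -3; // negative operands allowed: remainder is 0
            if (a != 0 or b != 0) @compileError("unexpected remainder");
        }
    }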
Check the set of passing tests; move towards having the disabling logic
inside each test rather than in the list of included files.
This enables a few more passing tests.
Takes advantage of the pattern already established with
array_init_anon. Also upgrades array_init (non-anon) to the pattern.
Implements comptime struct value equality and pointer value hashing.
* remove the `LoweringError` error set from `Emit.zig` - it was actually
less than helpful; it's better to either not throw an error, since
there can be instructions with mismatching operand sizes such as
`movsx`, or to assert on a per-instruction basis. Currently, let's
just pass through and see how we fare.
* when moving integers into registers, check for signedness and move
with zero- or sign-extension if source operand is smaller than 8
bytes. The destination operand is always assumed to be full-width,
i.e., 8 bytes.
* clean up `airTrunc` a little to match the rest of CodeGen inst
implementations.
This makes all union test cases succeed.
`rem` was also implemented, as all we had to do was enable the instruction.
Loading and storing values based on ABI size was simplified to a direct abiSize() call.
We also enabled all the newly passing test cases and disabled them for all non-passing backends.
All of those test cases were also checked to see whether they perhaps already pass for the C backend.
AstGen:
* rename the known_has_bits flag to known_non_opv to better reflect
what it actually means.
* add a known_comptime_only flag.
* make the flags take advantage of primitive identifiers and the
fact that Zig has no shadowing.
* correct the known_non_opv flag for function bodies.
Sema:
* Rename `hasCodeGenBits` to `hasRuntimeBits` to better reflect what it
does.
- This function got a bit more complicated in this commit because of
the duality of function bodies: on one hand they have runtime bits,
but on the other hand they require being comptime known.
* WipAnonDecl now takes a LazySrcLoc parameter and performs the type
resolutions that it needs during finish().
* Implement comptime `@ptrToInt`.
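A minimal sketch of a comptime-known ptr-to-int value (illustrative):

    test "comptime @ptrToInt round trip" {
        comptime {
            const p = @intToPtr(*u32, 0x1000);
            if (@ptrToInt(p) != 0x1000) @compileError("bad address");
        }
    }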
Codegen:
* Improved handling of lowering decl_ref; make it work for
comptime-known ptr-to-int values.
- This same change had to be made many different times; perhaps we
should look into merging the implementations of `genTypedValue`
across x86, arm, aarch64, and riscv.
This commit updates stage2 to enforce the property that the syntax
`fn()void` is a function *body*, not a *pointer*. To get a pointer, the
syntax `*const fn()void` is required.
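For example (a hedged sketch of the new semantics):

    fn hello() void {}

    test "function body vs function pointer" {
        // `fn () void` is a comptime-only function body type;
        // a runtime-known value must be a pointer:
        const ptr: *const fn () void = &hello;
        ptr();
    }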
ZIR puts function alignment into the func instruction rather than the
decl because this way the alignment makes it into function types. The
LLVM backend respects function alignments.
Struct and Union have methods `fieldSrcLoc` to help look up source
locations of their fields. These trigger full loading, tokenization, and
parsing of source files, so they should only be called once it is confirmed
that an error message needs to be printed.
There are some nice new error hints for explaining why a type is
required to be comptime, particularly for structs that contain function
body types.
`Type.requiresComptime` is now moved into Sema because it can fail and
might need to trigger field type resolution. Comptime pointer loading
takes into account types that do not have a well-defined memory layout
and does not try to compute a byte offset for them.
`fn()void` syntax no longer secretly makes a pointer. You get a function
body type, which requires comptime. However, a pointer to a function
body can be runtime-known (obviously).
Compile errors that report "expected pointer, found ..." are factored
out into convenience functions `checkPtrOperand` and `checkPtrType` and
have a note about function pointers.
Implemented `Value.hash` for functions, enum literals, and undefined values.
stage1 is not updated to this (yet?), so some workarounds and disabled
tests are needed to keep everything working. Should we update stage1 to
these new type semantics? Yes probably because I don't want to add too
much conditional compilation logic in the std lib for the different
backends.
There are some restrictions here.
- We need either C11 or a compiler that supports the aligned attribute.
- We cannot request an alignment less than the type's natural C alignment.
* AIR instruction vector_init gains the ability to init arrays and
tuples in addition to vectors. This will probably also gain the
ability to initialize structs and be renamed to `aggregate_init`.
* AstGen prefers to use an `anon_array_init` ZIR instruction for
local variables when the init expr is an array literal and there is
no type.
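For instance (hedged; whether this exact form lowers to
`anon_array_init` is an assumption):

    const expect = @import("std").testing.expect;

    test "array literal with no result type" {
        // no type annotation, so AstGen can use anon_array_init here:
        const x = .{ 1, 2, 3 };
        try expect(x.len == 3 and x[2] == 3);
    }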
* push the arguments in reverse order
* add logic for pushing args of any ABI size to the stack - very similar
to `genSetStack`; however, it uses `.rsp` as the base register
* increment and decrement `.rsp` if we called a function with args on
the stack in `airCall`
* add logic for recovering args from the caller's stack in the callee
Due to the new structure of lowerConstant, we can now simplify the logic in a lot of situations.
- We no longer have to check the `WValue`'s tag to determine how to load/store a value.
- We can now provide simple memcopies for aggregate types.
- Constants are now memoized, meaning we no longer lower constants at each call site.
In the behavior test listings, I had to move type_info.zig test import
to a section that did not include the x86 backend because it got to the
point where adding another test to the file, even if it was an empty
test that just returned immediately, caused a runtime failure when
executing the test binary.
Anyway, type info for opaques is implemented, and the declarations slice
is shared between it, enums, and unions.
Still TODO is the `data` field of a `Declaration`. I want to consider
removing it from the data returned from `@typeInfo` and introducing
`@declInfo` or similar for this data. This would avoid the complexity of
a lazy mechanism.
Instead use the standardized option for communicating the
zig compiler backend at comptime, which is `zig_backend`. This was
introduced in commit 1c24ef0d0b09a12a1fe98056f2fc04de78a82df3.
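A typical comptime check (the exact backend tag used here is
illustrative):

    const builtin = @import("builtin");

    test "skipped on stage1" {
        if (builtin.zig_backend == .stage1) return error.SkipZigTest;
        // body only exercised by the other backends
    }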
The ZIR instructions `switch_capture_else` and `switch_capture_ref` are
removed because they are not needed. Instead, the prong index is set to
max int for the special prong.
Else prong with error sets is not handled yet.
Adds a new behavior test because there was not a prior one covering
only the capture value of `else` on a switch.
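A minimal sketch of such a test (illustrative, not the exact test
added):

    const expect = @import("std").testing.expect;

    test "capture of the else prong value" {
        var x: u32 = 42;
        const y = switch (x) {
            0 => 0,
            else => |v| v, // only the else prong uses a capture
        };
        try expect(y == 42);
    }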