Split the big test into the two separate things it was testing.
Add missing checks to the test, which revealed that the test does not
actually pass yet for the C backend.
According to Apple docs, the long double type is a double-precision
IEEE754 binary floating-point type, which makes it identical to the
double type. This contrasts with the standard specification, in which
long double is a quad-precision IEEE754 binary floating-point type.
Thus, we need to take this into account when using the compiler
intrinsics so that we select the correct function version for
FloatMulAdd.
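A minimal sketch of the kind of selection involved, assuming a
hypothetical helper (not the actual codegen code):

```zig
const std = @import("std");

/// Hypothetical helper: pick the libc fused multiply-add symbol for
/// the C long double type. On Apple targets long double is IEEE754
/// double, so plain "fma" must be selected where others use "fmal".
fn longDoubleFmaName(target: std.Target) []const u8 {
    return if (target.isDarwin()) "fma" else "fmal";
}
```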
* The `@bitCast` workaround is removed in favor of `@ptrCast` properly
  doing element casting for slice element types (see the sketch after
  this list). This required an enhancement to both stage1 and stage2.
* stage1 incorrectly accepts `.{}` instead of `{}`. stage2 code that
abused this is fixed.
* Make some parameters comptime to support functions in switch
expressions (as opposed to making them function pointers).
* Avoid relying on local temporaries being mutable.
* Workarounds for when stage1 and stage2 disagree on function pointer
types.
* Work around a recursive formatting bug with a `@panic("TODO")`.
* Remove unreachable `else` prongs for some inferred error sets.
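For example, a slice element cast that previously abused `@bitCast` is
now written with `@ptrCast` (a sketch, assuming same-size,
same-alignment element types and the 0.9-era builtin syntax):

```zig
var buf = [4]u32{ 1, 2, 3, 4 };

test "slice element cast" {
    const ints: []u32 = &buf;
    const floats = @ptrCast([]f32, ints); // was @bitCast in stage1
    try @import("std").testing.expect(floats.len == 4);
}
```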
All in effort towards #89.
The problem was that the types of non-anytype parameters were being
included in the check for whether two generic function instantiations
were equal. Now, Module.Fn additionally stores whether each parameter
is anytype. `generic_poison` cannot be used to signal this because the
type is still needed for comptime arguments; in that case the type
will not be present in the newly generated function prototype.
This presented one additional challenge: we need to compare equality of
two values where one of them is post-coercion and the other is not. So
we make some minor adjustments to `Type.eql` to support this. I think
this small complexity tradeoff is worth it because it means the compiler
does much less work on the hot path where a generic function is called
and a matching instantiation already exists.
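A hypothetical reduction of the distinction being tracked:

```zig
fn generic(comptime T: type, a: T, b: anytype) void {
    _ = a;
    _ = b;
}

test "instantiation reuse" {
    // Both calls must map to one instantiation: T (comptime) and the
    // type of b (anytype) participate in the equality check, while
    // the type of a is derived from T and must be excluded from it.
    generic(u32, 1, @as(u8, 2));
    generic(u32, 1, @as(u8, 3));
}
```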
closes #11146
Prior to this, Liveness encoded `asm`, `call`, and `aggregate_init` with
a single 32-bit integer, allowing up to 35 operands (3 are provided by
the regular tomb_bits). However, the Zig language allows function calls
with more than 35 arguments, inline assembly with more than 35 inputs,
and anonymous tuples with more than 35 elements.
The new encoding stores an index to the extra array instead of the bits
directly, and then as many extra elements as needed to encode all the
operands. The MSB is used as a flag to tell which element is the last
one, allowing for 31 bits per element.
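A minimal sketch of a decoder for this encoding (hypothetical function
and names; the real iteration logic lives in the Liveness namespace):

```zig
const std = @import("std");

/// Appends the death bits stored in the extra array; each element
/// packs 31 bits, and the MSB marks the final element of the run.
fn decodeBigTomb(extra: []const u32, start: usize, out: *std.ArrayList(bool)) !void {
    var index = start;
    while (true) : (index += 1) {
        const elem = extra[index];
        var bit: u5 = 0;
        while (bit < 31) : (bit += 1) {
            try out.append(@truncate(u1, elem >> bit) != 0);
        }
        if (elem & (1 << 31) != 0) break; // MSB: last element
    }
}
```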
Prior to this, print_air did not bother correctly printing tombstones
for these instructions; now it does.
In addition to updating the BigTomb iteration logic in the machine code
backends, this commit extracts the common logic into the Liveness namespace.
I forgot to check that the new behavior tests also pass in stage1. One
of them does not.
Fixes regression from 4618c41fa6ca70f06c7e65762d2f38d57b00818c.
This shuffles some tests to ensure the new instructions are tested for
the wasm backend, by moving vectors into their own tests as well as
moving the f16 test cases, since those require special handling too.
Sema avoids adding map entries for certain instructions such as
`set_eval_branch_quota` and `atomic_store`. This means that result
location semantics in AstGen must not emit any instructions that attempt
to use the result of any of these instructions.
This commit makes AstGen replace such instructions with
`Zir.Inst.Ref.void_value` if their result value ends up being
referenced.
This fixes a compiler crash when running std lib atomic tests.
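An assumed reduction of the crashing pattern:

```zig
var x: u32 = 0;

test "atomic store result value" {
    // `@atomicStore` returns void and gets no Sema map entry, so when
    // its result is referenced, AstGen substitutes void_value.
    const v: void = @atomicStore(u32, &x, 1, .SeqCst);
    _ = v;
}
```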
This is not complete support for asm expressions, but allows a few more
test cases from test/behavior/asm.zig to pass. Since the non-register
inputs are named `input_${n}`, they can cause name collisions; I'm
wrapping the asm expressions in their own block to prevent that.
This change also makes test/behavior/asm.zig run for stage2, but skips
individual tests for most backends (I only verified that the C and LLVM
backends successfully run one new test case) and skips the entire test
file for aarch64, where it runs into preexisting shortcomings.
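One of the newly passing shapes, sketched in the style of
test/behavior/asm.zig (x86_64 AT&T syntax; the mnemonic and the "i"
constraint are illustrative, not taken from the test file):

```zig
test "asm with non-register input" {
    // The "i" (immediate) input is non-register, so the C backend
    // names it input_0; the wrapping block keeps that name local.
    const value = asm ("movq %[number], %[ret]"
        : [ret] "=r" (-> u64)
        : [number] "i" (@as(u64, 42))
    );
    try @import("std").testing.expect(value == 42);
}
```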
* Sema: store the precomputed monomorphed_funcs hash inside Module.Fn.
This is important because it may be accessed when resizing monomorphed_funcs
while this Fn has already been added to the set, but does not have the
owner_decl, comptime_args, or other fields populated yet.
* Sema: in `analyzeIsNonErr`, take advantage of the AIR tag being
  `wrap_errunion_payload` to infer that `is_non_err` is comptime-true
  without performing any error set resolution (see the sketch after
  this list).
  - Also add some code to check for empty inferred error sets in this
    function. If necessary, we do resolve the inferred error set.
* Sema: queue full type resolution of the payload type when the
  `wrap_errunion_payload` AIR instruction is emitted. This ensures the
  backend can check its alignment.
* Sema: resolveTypeFully now additionally resolves comptime-only
status.
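A sketch of the `analyzeIsNonErr` shortcut from the user's side (an
assumed reduction):

```zig
fn alwaysOk(x: u32) bool {
    // The error union is created by wrapping a payload, so the AIR
    // operand of is_non_err is wrap_errunion_payload and the result
    // folds to comptime-true with no error set resolution.
    const eu: anyerror!u32 = x;
    return if (eu) |_| true else |_| false;
}
```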
closes #11306
This commit introduces a new AIR instruction `cmp_lt_errors_len`. It's
specific to this use case for two reasons:
* The total number of errors is not stable during semantic analysis; it
can only be reliably checked when flush() is called. So the backend
that is lowering the instruction must emit a relocation of some kind
and then populate it during flush().
* The fewer AIR instructions in memory, the better for compiler
performance, so we squish complex meanings into AIR tags without
hesitation.
The instruction is implemented only in the LLVM backend so far. It does
this by creating a simple function which is gutted and re-populated
with each flush().
AstGen now uses ResultLoc.coerced_ty for `@intToError` and Sema does the
coercion.
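The user-visible operation being guarded (an assumed reduction;
`@intToError` is the 0.9-era name of the builtin):

```zig
test "int to error" {
    var n: u16 = 1;
    // The check must be deferred: n must be within the total error
    // count, which is final only at flush(); hence the relocation
    // emitted by the backend.
    const err = @intToError(n);
    _ = err;
}
```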
* In semaStructFields and semaUnionFields we return error.GenericPoison
  if one of the field types ends up being generic poison.
  - This requires function calls and function types to take this into
    account when calling `typeRequiresComptime` on the return type.
* Unrelated: I noticed using Valgrind that struct reification did not
populate the `known_opv` field. After fixing it, the behavior tests
run Valgrind-clean.
* ZIR: use `@ptrCast` to cast between slices instead of exploiting
the fact that stage1 incorrectly allows `@bitCast` between slices.
- A future enhancement will make Zig support `@ptrCast` to directly
cast between slices.
The error union coercion no longer sets the error code to undefined.
This fixes the problem where coercing an undefined single-item pointer
to an error union of a slice set the whole thing to undefined, even
though the sub-coercion to the slice would have produced a defined
value.
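An assumed reduction of the fixed coercion:

```zig
comptime {
    const p: *const [3]u8 = undefined;
    // The payload sub-coercion *const [3]u8 -> []const u8 produces a
    // defined value (the length is known), so the error union must be
    // a non-error wrapper around it, not undefined as a whole.
    const eu: anyerror![]const u8 = p;
    _ = eu;
}
```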
Commit 052079c99455d01312d377d72fa1b8b5c0b22aad surfaced two issues with
the generated C code:
- renderInt128() contained a seemingly unnecessary assertion to verify
that the high 64 bits of the number were nonzero, dating back to
9bf1681990fe87a6b2e5fc644a89f1aece304579. I removed it.
- renderValue() didn't have any special handling for undefined structs,
  falling back to printing "{}", which generated invalid expressions
  such as "return {}" for functions returning structs, whereas
  "return (S){}" is the correct form. I changed it accordingly.
At the same time I'm reenabling the relevant tests.
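A Zig shape that exercises the renderValue() fix (an assumed
reproduction; `S` and `make` are hypothetical names):

```zig
const S = struct { a: u32, b: u32 };

fn make() S {
    // The C backend must lower this as `return (S){...}` (a compound
    // literal), not the invalid `return {}`.
    return undefined;
}
```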
Similar code was already in place for conditional branches. This updates
AstGen to do the same for labeled blocks. It takes advantage of the
`store_to_block_ptr` instructions by mutating them in place to become
`as` instructions, coercing the break operands before they are returned
from the block.
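A sketch of the pattern this affects, assuming comptime_int break
operands that need coercion:

```zig
fn pick(cond: bool) u64 {
    // Each break operand is coerced to u64 inside the block: the
    // store_to_block_ptr instructions are rewritten in place as `as`.
    return blk: {
        if (cond) break :blk 1;
        break :blk 2;
    };
}
```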
* Added peer type resolution for arrays and vectors: the vector type is
selected.
* Fixed passing the lhs type or rhs type instead of the peer resolved
type when calling Value methods during analyzeArithmetic handling of
comptime expressions.
* `checkVectorizableBinaryOperands` now allows mixing vectors and
arrays, as long as one of the operands is a vector.
This matches stage1's handling of `^=`, but apparently stage1 is
inconsistent and does not handle e.g. `*=`. stage2 will now always allow
mixing vector and array operands for all operations.
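For example (a sketch; element types and lengths must still match):

```zig
test "mixing vector and array operands" {
    var v: @Vector(4, u32) = [4]u32{ 1, 2, 3, 4 };
    const a = [4]u32{ 5, 6, 7, 8 };
    const sum = v + a; // peer type resolution selects @Vector(4, u32)
    try @import("std").testing.expect(sum[0] == 6);
    v *= a; // also allowed now, unlike stage1
}
```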