Introduce `Module.ensureFuncBodyAnalyzed` and corresponding `Sema`
function. This mirrors `ensureDeclAnalyzed`, except it also waits until
the function body has been semantically analyzed, meaning that inferred
error sets will have been populated.
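For context, a hedged sketch of the kind of function this affects: the
error set below is inferred from the function body, so it cannot be
known until that body has been analyzed.

```zig
const std = @import("std");

// `!u32` means the error set is inferred from the body.
fn parseDigit(c: u8) !u32 {
    if (c < '0' or c > '9') return error.NotADigit; // populates the inferred set
    return c - '0';
}

test "inferred error set is populated by body analysis" {
    try std.testing.expectError(error.NotADigit, parseDigit('x'));
    try std.testing.expectEqual(@as(u32, 7), try parseDigit('7'));
}
```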
Resolving error sets can now emit an "unable to resolve inferred error
set" error instead of producing an incorrect error set type. Resolving
error sets now calls `ensureFuncBodyAnalyzed`. Closes #11046.
`coerceInMemoryAllowedErrorSets` now does significantly more work to
avoid resolving an inferred error set when possible. The same goes for
`wrapErrorUnionSet`.
Inferred error set types no longer check the `func` field to determine if
they are equal. That was incorrect because an inline or comptime function
call produces a unique error set which has the same `*Module.Fn` value for
this field. Instead we use the `*Module.Fn.InferredErrorSet` pointers to
test equality of inferred error sets.
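A hedged sketch of the user-visible rule (using stage2-era `@typeInfo`
field names): each function body gets its own inferred error set, so two
identical-looking `!void` functions still have distinct error set types.

```zig
const std = @import("std");

fn a() !void {}
fn b() !void {}

test "inferred error sets are distinct per function" {
    const SetA = @typeInfo(@typeInfo(@TypeOf(a)).Fn.return_type.?).ErrorUnion.error_set;
    const SetB = @typeInfo(@typeInfo(@TypeOf(b)).Fn.return_type.?).ErrorUnion.error_set;
    // Distinct types even though both inferred sets are empty.
    try std.testing.expect(SetA != SetB);
}
```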
* use the real start code for the LLVM backend with x86_64-linux
- there is still a check for zig_backend after initializing the TLS
area to skip some of the remaining startup logic.
* introduce new AIR instructions and implement them for the LLVM
backend. They are the same as `call` except with a modifier (see the
usage sketch after this list):
- call_always_tail
- call_never_tail
- call_never_inline
* LLVM backend calls hasRuntimeBitsIgnoringComptime in more places to
avoid unnecessarily depending on comptimeOnly being resolved for some
types.
* LLVM backend: remove duplicate code for setting linkage and value
name. The canonical place for this is in `updateDeclExports`.
* LLVM backend: do some assembly template massaging so that `%%`
renders as `%` (illustrated after this list). More hacks will be
needed to bring inline assembly up to par with stage1.
* Reduce branching in Type.eql and Type.hash for error sets.
* `Type.eql` uses element-wise byte comparison since it can rely on
the error sets being pre-sorted.
* Avoid unnecessarily skipping tests that are passing.
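As referenced above, the new instructions back `@call` with a modifier.
A hedged sketch using the `@call` signature from this era of the
language (`std.builtin.CallOptions`):

```zig
const std = @import("std");

fn add(x: u32, y: u32) u32 {
    return x + y;
}

test "@call with modifiers" {
    // Each modifier selects one of the new AIR call instructions.
    const a = @call(.{ .modifier = .never_inline }, add, .{ 1, 2 });
    const b = @call(.{ .modifier = .never_tail }, add, .{ 3, 4 });
    // .always_tail additionally requires the call to be in return position.
    try std.testing.expect(a == 3 and b == 7);
}
```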
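And the `%%` handling matters for inline assembly like this hedged
x86_64 sketch: in the template, `%%rsp` must reach LLVM as `%rsp`.

```zig
// Read the stack pointer (AT&T syntax, x86_64 only).
fn readStackPointer() usize {
    return asm volatile ("movq %%rsp, %[ret]"
        : [ret] "=r" (-> usize)
    );
}
```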
This implements type equality for error sets, done through element-wise
comparison of the error names. Inferred error sets are always distinct
types; other error sets are always sorted. See #11022.
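A minimal sketch of what this enables: because members are pre-sorted,
declaration order does not matter for explicit error sets.

```zig
const std = @import("std");

test "explicit error sets compare structurally" {
    const A = error{ OutOfMemory, FileNotFound };
    const B = error{ FileNotFound, OutOfMemory };
    // Same members in a different order: the same type, since the
    // members are sorted before element-wise comparison.
    try std.testing.expect(A == B);
}
```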
When the anytype parameter had only one possible value
(e.g. `void`), it would create a mismatch between the function type and
the function call. Now the function type and the call site both omit the
one-possible-value anytype parameter in instantiated generic functions.
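A hedged sketch of the case in question: a generic function instantiated
with a one-possible-value argument.

```zig
const std = @import("std");

fn tag(comptime name: []const u8, payload: anytype) []const u8 {
    _ = payload; // `void` has exactly one possible value
    return name;
}

test "anytype parameter with a one-possible-value type" {
    // Both the instantiated function type and the call site now omit
    // the `void` parameter, so they agree.
    try std.testing.expectEqualStrings("empty", tag("empty", {}));
}
```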
By the time zirElemVal is reached for a many pointer, a load has already
happened, ensuring the operand is already dereferenced.
This makes `mem.sliceTo` work now.
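For instance, a minimal test of `std.mem.sliceTo`, which indexes through
a sentinel-terminated many pointer:

```zig
const std = @import("std");

test "mem.sliceTo on a sentinel-terminated many pointer" {
    const msg: [*:0]const u8 = "hello";
    // sliceTo loads elements through the many pointer until the sentinel.
    const slice = std.mem.sliceTo(msg, 0);
    try std.testing.expectEqualStrings("hello", slice);
}
```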
Array types with sentinels were not being typed correctly when analyzed
at comptime. This modifies the `array_init` ZIR instruction to also
retain the type of the init expression (note: untyped array
initialization goes through the `array_init_anon` ZIR instruction and so
is unchanged in this commit).
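A minimal sketch of the now-correctly-typed case: a typed array init
with a sentinel.

```zig
const std = @import("std");

test "typed array init with a sentinel" {
    // The init expression now carries its type, [2:0]u8, in the ZIR.
    const arr: [2:0]u8 = .{ 1, 2 };
    try std.testing.expect(arr[0] == 1 and arr[1] == 2);
    try std.testing.expect(arr[2] == 0); // the sentinel is addressable
}
```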
* mul_add AIR instruction: use `pl_op` instead of `ty_pl`. The result
type is always the same as the operands', so there is no need to waste
bytes redundantly storing it. (A usage sketch of `@mulAdd` follows this
list.)
* AstGen: use coerced_ty for all the operands except for the one we
use to communicate the type.
* Sema: use the correct source location for requireRuntimeBlock in
handling of `@mulAdd`.
* native backends: handle liveness even for functions whose
implementation is still TODO.
* C backend: implement `@mulAdd`. It lowers to libc calls.
* LLVM backend: make `@mulAdd` handle all float types.
- improved fptrunc and fpext to handle f80 with compiler-rt calls.
* Value.mulAdd: handle all float types and use the `@mulAdd` builtin.
* behavior tests: revert the changes to testing `@mulAdd`. Those
changes broke the test coverage, leaving it tested only at
compile time.
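As referenced above, a minimal `@mulAdd` usage sketch; it computes
`(a * b) + c` with a single rounding:

```zig
const std = @import("std");

test "@mulAdd basic usage" {
    var a: f64 = 2.0; // runtime-known, so the mul_add AIR instruction is used
    var b: f64 = 3.0;
    var c: f64 = 4.0;
    const r = @mulAdd(f64, a, b, c);
    try std.testing.expectEqual(@as(f64, 10.0), r);
}
```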
Improved f80 support:
* std.math.fma handles f80 (see the sketch after this list)
* move fma functions from freestanding libc to compiler-rt
- add __fmax and fmal
- make __fmax and fmaq exported only when they don't alias fmal.
- make their linkage weak just like the rest of compiler-rt symbols.
* removed `longDoubleIsF128` and replaced it with `longDoubleIs`, which
takes a type as a parameter. The implementation is now more accurate
and handles more targets. Similarly, in stage2, `CTypes.sizeInBits` is
more accurate for `long double` on more targets.
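As referenced above, a hedged sketch of `std.math.fma` with f80:

```zig
const std = @import("std");

test "std.math.fma handles f80" {
    const r = std.math.fma(f80, 2.0, 3.0, 4.0);
    try std.testing.expect(r == 10.0);
}
```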
This fixes two entry points within the self-hosted wasm linker that were
called for the LLVM backend, when we should simply call into the LLVM
backend to perform the action: we no longer allocate a decl index when
we have an LLVM object, and when flushing a module, we flush LLVM's
object rather than have the wasm linker perform the operation.
Also, this fixes the wasm intrinsics for wasm.memory.size and
wasm.memory.grow.
Lastly, this commit ensures that when an extern function is resolved, we
tell LLVM how to import it.
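Those intrinsics back the `@wasmMemorySize` and `@wasmMemoryGrow`
builtins; a hedged sketch, only meaningful when compiled for a wasm
target:

```zig
// Both builtins lower to the wasm.memory.size / wasm.memory.grow intrinsics.
fn currentPages() u32 {
    return @wasmMemorySize(0); // size of linear memory 0, in 64 KiB pages
}

fn growByOnePage() bool {
    // @wasmMemoryGrow returns the previous size in pages, or -1 on failure.
    return @wasmMemoryGrow(0, 1) != -1;
}
```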