One change worth noting in this commit is that `module.global_error_set`
is no longer kept strictly up-to-date. The previous code eagerly
reserved integer error values when dealing with error set types, but
this is no longer needed: the integer values only matter to semantic
analysis when `@errorToInt` or `@intToError` is used, so they may be
assigned lazily.
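For illustration, a small hedged example (test name mine) of the only
two builtins that observe an error's integer value:

```zig
const std = @import("std");

test "error <-> integer round trip" {
    // Only these two builtins observe the integer value of an error,
    // so integers can be assigned to errors lazily.
    const n = @errorToInt(error.Oops);
    try std.testing.expect(@intToError(n) == error.Oops);
}
```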
* Add some assertions to make sure instructions are not none. I tested
all of these with the master branch as well and made sure the behavior
tests still passed with the assertions intact (along with a handful of
callsite updates).
* Fix Sema.resolveMaybeUndefValAllowVariablesMaybeRuntime not noticing
that interned values are comptime-known. This was causing all kinds
of chaos.
* Fix print_air writeType calling tag() without checking for ip_index.
Instead of doing everything at once, which is a hopelessly large task,
this introduces a piecemeal transition that can be done in small
increments.
This is a minimal changeset that keeps the compiler compiling. It only
uses the InternPool for a small set of types.
Behavior tests are not passing.
Air.Inst.Ref and Zir.Inst.Ref are separated into different enums but
compile-time verified to have the same fields in the same order.
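A minimal sketch of how such a compile-time check can be written (not
the compiler's exact mechanism; stand-in enums for Air.Inst.Ref and
Zir.Inst.Ref):

```zig
const std = @import("std");

// Fail compilation unless two enums declare the same field names in
// the same order.
fn assertSameFields(comptime A: type, comptime B: type) void {
    const a_fields = @typeInfo(A).Enum.fields;
    const b_fields = @typeInfo(B).Enum.fields;
    std.debug.assert(a_fields.len == b_fields.len);
    for (a_fields, 0..) |a_field, i| {
        std.debug.assert(std.mem.eql(u8, a_field.name, b_fields[i].name));
    }
}

comptime {
    // Stand-ins for Air.Inst.Ref and Zir.Inst.Ref.
    const RefA = enum(u32) { none, u8_type, i8_type };
    const RefB = enum(u32) { none, u8_type, i8_type };
    assertSameFields(RefA, RefB);
}
```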
The large set of changes is mainly to deal with the fact that most Type
and Value methods now require a Module to be passed in, so that the
InternPool object can be accessed.
`branch_deaths` was a relic from before I had a full understanding of
AIR's control flow structure, and so was unnecessary. This change
simplifies Liveness, fixes a bug exposed by #15235, and likely improves
performance (due to cloning hashmaps less often).
store:
The value to store may be undefined, in which case the destination
memory region has undefined bytes after this instruction is
evaluated. In that case, ignoring this instruction is a legal
lowering.
store_safe:
Same as `store`, except if the value to store is undefined, the
memory region should be filled with 0xaa bytes, and any other
safety metadata such as Valgrind integrations should be notified of
this memory region being undefined.
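As a sketch of that contract (hypothetical helper, assuming the standard
library's Valgrind integration), lowering a `store_safe` of an undefined
value amounts to something like:

```zig
const std = @import("std");

// Hypothetical helper: what lowering a `store_safe` of an undefined
// value conceptually does.
fn storeUndefinedSafe(dest: []u8) void {
    @memset(dest, 0xaa); // fill with the safety byte pattern
    _ = std.valgrind.makeMemUndefined(dest); // notify safety tooling
}
```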
Now they use slices or array pointers with any element type instead of
requiring byte pointers.
This is a breaking enhancement to the language.
The safety check for overlapping pointers will be implemented in a
future commit.
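For example, copying and filling typed memory no longer requires casting
to byte pointers (a small illustrative test, not from the commit):

```zig
const std = @import("std");

test "typed @memcpy and @memset" {
    var src = [_]u32{ 1, 2, 3, 4 };
    var dst: [4]u32 = undefined;
    @memcpy(&dst, &src); // element types match; no byte pointers required
    @memset(&src, 0); // fills with a u32 element value, not a byte
    try std.testing.expectEqual(@as(u32, 4), dst[3]);
    try std.testing.expectEqual(@as(u32, 0), src[0]);
}
```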
closes #14040
Backends want to avoid emitting unused instructions which do not have
side effects: to that end, they all have `Liveness.isUnused` checks for
many instructions. However, checking this in the backends misses a lot
of potential optimizations. For instance, if a nested field is loaded,
the first field access would still be emitted, since its result is used
by the next access (even though that access is itself unreferenced).
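In code terms, a contrived sketch of that pattern:

```zig
const Inner = struct { value: u32 };
const Outer = struct { inner: Inner };

// The result is discarded, so with operand tracking Liveness can mark
// both field accesses unreferenced, not just the last one.
fn discard(s: Outer) void {
    _ = s.inner.value;
}
```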
To elide more instructions, Liveness can track this data instead. For
operands which do not have to be lowered (i.e. are not side-effecting
and are not something special like `arg`), Liveness can ignore their
operand usages and push the unused information further up, potentially
marking many more instructions as unreferenced.
In doing this, I also uncovered a bug in the LLVM backend relating to
discarding the result of `@cVaArg`, which this change fixes. A behaviour
test has been added to cover it.
This is a partial rewrite of Liveness, so it includes some other notable changes:
- A proper multi-pass system to prevent code duplication
- Better logging
- Minor bugfixes
* @workItemId returns the index of the work item in a work group for a
dimension.
* @workGroupId returns the index of the work group in the kernel dispatch for a
dimension.
* @workGroupSize returns the size of the work group for a dimension.
These builtins are mainly useful for GPU backends. They are currently only
implemented for the AMDGCN LLVM backend.
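For example, a hypothetical kernel (assuming the `.Kernel` calling
convention used by the GPU backends) computing a global work-item index
along dimension 0:

```zig
// Hypothetical AMDGCN kernel: global index = work-group index times
// work-group size, plus the index within the group.
export fn globalIndexKernel() callconv(.Kernel) void {
    const global_id = @workGroupId(0) * @workGroupSize(0) + @workItemId(0);
    _ = global_id;
}
```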
This introduces a new builtin function that compiles down to an instruction that raises an illegal-instruction exception/interrupt.
It can be used to exit a program abnormally.
This implements the builtin for all backends.
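Assuming the builtin in question is `@trap`, usage looks like:

```zig
// Exit abnormally via an illegal instruction (e.g. ud2 on x86_64).
fn abortHard() noreturn {
    @trap();
}
```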
This change adds to Liveness a simple pattern match for the
try-like `.condbr` blocks emitted by Sema's safety checks. This
allows us to determine that these do not modify memory, which
permits us to elide additional loads in the backend.
As @Vexu points out in the main issue, this is probably not a
complete solution on its own. We'll still want a way to reliably
narrow the load/copy when performing several consecutive accesses,
such as `foo.arr[x][y].z`.
Resolves https://github.com/ziglang/zig/issues/12215
This is encoded as a primitive AIR instruction to resolve one corner
case: A function may include a `catch { ... }` or `else |err| { ... }`
block but not call any errorable function. In that case, there is no
error return trace to save the index of, and codegen needs to avoid
interacting with the non-existent error trace.
By using a primitive AIR op, we can depend on Liveness to mark this
unused in this corner case.
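A sketch of the corner case in source terms:

```zig
// A `catch` is present, but no errorable function is ever called, so
// there is no error return trace to interact with.
fn cornerCase() u32 {
    const x: anyerror!u32 = 42;
    return x catch 0;
}
```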
Lower float negation directly rather than as `0.0 - x` (example below):
* Add AIR instruction for float negation.
* Add compiler-rt functions for f128 and f80 negation.
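Negation is not equivalent to subtraction from zero for signed zeros,
which a small test (mine, not from the commit) makes visible:

```zig
const std = @import("std");

test "float negation is not subtraction from zero" {
    const x: f32 = 0.0;
    // True negation flips the sign bit, even of +0.0 ...
    try std.testing.expect(std.math.signbit(-x));
    // ... while `0.0 - x` rounds to +0.0.
    try std.testing.expect(!std.math.signbit(0.0 - x));
}
```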
closes #11853
Implements semantic analysis for the new try/try_inline ZIR
instruction. Adds the new try/try_ptr AIR instructions and implements
them for the LLVM backend.
Fixes not calling rvalue() for tryExpr in AstGen.
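The construct these instructions lower, as a trivial example:

```zig
const std = @import("std");

// `try` unwraps the error union or returns the error from the
// enclosing function; try/try_ptr encode this directly in AIR.
fn parseAndIncrement(s: []const u8) !u32 {
    const n = try std.fmt.parseInt(u32, s, 10);
    return n + 1;
}
```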
This is part of an effort to implement #11772.
Generally, the load instruction may need to make a copy of an
isByRef=true value, such as in the case of the following code:
```zig
pub fn swap(comptime T: type, a: *T, b: *T) void {
    const tmp = a.*;
    a.* = b.*;
    b.* = tmp;
}
```
However, it only needs to do so if there are intervening instructions
that can possibly write to memory. When calling functions with
isByRef=true parameters, the generated AIR looks like loads followed
directly by a call.
This allows for a peephole optimization when lowering loads: if the load
instruction operates on an isByRef=true type and dies before any side effects
occur, then we can safely lower the load as a no-op that returns its
operand.
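A sketch of the pattern this peephole targets (hypothetical types, not
from the commit):

```zig
const T = struct { data: [64]u8 };

fn takeByValue(a: T, b: T) void {
    _ = a;
    _ = b;
}

// The AIR here is two loads followed directly by a call; each load's
// result dies at the call with no side effects in between, so the
// copies can be elided.
fn caller(a: *const T, b: *const T) void {
    takeByValue(a.*, b.*);
}
```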
This is one out of three changes I intend to make to address #11498.
However I will put these changes in separate branches and merge them
separately so that we can have three independent points on the perf
charts.
The reason for having `@tan` is that we already have `@sin` and `@cos`
because some targets have machine code instructions for them, but in the
case that the implementation needs to go into compiler-rt, sin, cos, and
tan all share a common dependency which includes a table of data. To
avoid duplicating this table of data, we promote tan to become a builtin
alongside sin and cos.
ZIR: The tag enum is at capacity, so this commit moves
`field_call_bind_named` to `extended`. I measured this as one of
the least-used tags in the Zig codebase.
Fix libc math suffix for `f32` being wrong in both stage1 and stage2.
stage1: add missing libc prefix for float functions.
* The `@bitCast` workaround is removed in favor of `@ptrCast` properly
doing element casting for slice element types. This required an
enhancement both to stage1 and stage2.
* stage1 incorrectly accepts `.{}` instead of `{}`. stage2 code that
abused this is fixed.
* Make some parameters comptime to support functions in switch
expressions (as opposed to making them function pointers).
* Avoid relying on local temporaries being mutable.
* Workarounds for when stage1 and stage2 disagree on function pointer
types.
* Work around a recursive formatting bug with a `@panic("TODO")`.
* Remove unreachable `else` prongs for some inferred error sets.
All in effort towards #89.
Prior to this, Liveness encoded `asm`, `call`, and `aggregate_init` with
a single 32-bit integer, allowing up to 35 operands (3 are provided by
the regular tomb_bits). However, the Zig language allows function calls
with more than 35 arguments, inline assembly with more than 35 inputs,
and anonymous tuples with more than 35 elements.
The new encoding stores an index to the extra array instead of the bits
directly, and then as many extra elements as needed to encode all the
operands. The MSB is used as a flag to tell which element is the last
one, allowing for 31 bits per element.
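A minimal decoding sketch (hypothetical helper, not the compiler's exact
code) of that layout:

```zig
// Returns whether operand `operand_index` dies, given the extra
// elements of a big tomb. Each element carries 31 death bits; a set
// MSB marks the last element.
fn bigTombDies(extra: []const u32, operand_index: usize) bool {
    var remaining = operand_index;
    for (extra) |elem| {
        if (remaining < 31) {
            const bit: u5 = @intCast(remaining);
            return (elem >> bit) & 1 != 0;
        }
        if (elem & 0x8000_0000 != 0) break; // last element reached
        remaining -= 31;
    }
    return false; // operands beyond the encoding do not die
}
```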
Prior to this, print_air did not bother correctly printing tombstones
for these instructions; now it does.
In addition to updating the BigTomb iteration logic in the machine code
backends, this commit extracts the common logic into the Liveness namespace.
This commit introduces a new AIR instruction `cmp_lt_errors_len`. It's
specific to this use case for two reasons:
* The total number of errors is not stable during semantic analysis; it
can only be reliably checked when flush() is called. So the backend
that is lowering the instruction must emit a relocation of some kind
and then populate it during flush().
* The fewer AIR instructions in memory, the better for compiler
performance, so we squish complex meanings into AIR tags without
hesitation.
The instruction is implemented only in the LLVM backend so far. It does
this by creating a simple function which is gutted and re-populated
with each flush().
AstGen now uses ResultLoc.coerced_ty for `@intToError` and Sema does the
coercion.
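In source terms, the check guards integer-to-error conversions
(illustrative only, not compiler code):

```zig
// With safety checks on, this conversion emits cmp_lt_errors_len to
// verify `n` is a valid error value before converting.
fn intToErrorChecked(n: u16) anyerror {
    return @intToError(n);
}
```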
The existing `cmp_*` instructions get their result type from `lhs`, but
vector comparison always returns a vector of bools, with only the length
derived from the operands. This necessitates a new AIR instruction.
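For example (an illustrative test, not from the commit):

```zig
const std = @import("std");

test "vector comparison yields a bool vector" {
    const a: @Vector(4, i32) = .{ 1, 2, 3, 4 };
    const b: @Vector(4, i32) = .{ 4, 3, 2, 1 };
    const lt = a < b; // @Vector(4, bool): only the length comes from lhs
    try std.testing.expect(lt[0] and lt[1] and !lt[2] and !lt[3]);
}
```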
Notably, Value.eql and Value.hash are improved to treat NaN as equal to
itself, so that Type/Value can be hash map keys. Likewise float hashing
normalizes the float value before computing the hash.
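A sketch of the normalization idea (hypothetical helper): canonicalize
NaNs before hashing so that values which compare equal also hash
identically:

```zig
const std = @import("std");

// All NaNs are treated as equal, so hash a single canonical NaN.
fn hashFloat(hasher: *std.hash.Wyhash, x: f64) void {
    const canonical: f64 = if (std.math.isNan(x)) std.math.nan(f64) else x;
    hasher.update(std.mem.asBytes(&canonical));
}
```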
Adds 2 new AIR instructions:
* dbg_var_ptr
* dbg_var_val
Sema no longer emits dbg_stmt AIR instructions when strip=true.
LLVM backend: fixed lowerPtrToVoid for the case where calling
ptrAlignment on the element type is problematic.
LLVM backend: fixed alloca instructions improperly getting debug
location annotated, causing chaotic debug info behavior.
zig_llvm.cpp: fixed incorrect bindings for a function that should use
unsigned integers for line and column.
A bunch of C test cases regressed because the new dbg_var AIR
instructions kept their operands alive, exposing latent bugs. Mostly the
problem is that the C backend lowers mutable and const slices to the
same C type, so we need to represent that in the C backend instead of
printing two duplicate typedefs.