The implementation of add_with_overflow and sub_with_overflow is now a lot
more robust and accounts for signed integers and arbitrary integer bit sizes.
The final output is equal to that of the LLVM backend.
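A quick sketch of the behavior (present-day tuple-returning syntax; at the time of this commit the builtins took a type parameter and a result pointer):

```zig
test "signed overflow add at an arbitrary bit size" {
    // i7 has range [-64, 63]; 63 + 1 wraps to -64 in two's complement.
    const r = @addWithOverflow(@as(i7, 63), @as(i7, 1));
    try @import("std").testing.expect(r[0] == -64 and r[1] == 1);
}
```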
Maps lines and columns between wasm bytecode and Zig source code.
While this supports prologue and epilogue information, we still need to add
support for performing relocations: the offsets are relative to the code section,
so we must relocate them according to each atom's offset while also keeping the
function count in mind (due to the variable-length LEB128 encoding).
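For context, a minimal ULEB128 encoder sketch (hypothetical `writeUleb128`): values below 128 fit in one byte while larger ones need more, so a patched offset can change size and shift everything behind it.

```zig
// Encode `start` as ULEB128: 7 bits per byte, high bit = "more follows".
fn writeUleb128(writer: anytype, start: u32) !void {
    var value = start;
    while (true) {
        const byte: u8 = @truncate(value & 0x7f);
        value >>= 7;
        if (value == 0) return writer.writeByte(byte);
        try writer.writeByte(byte | 0x80);
    }
}
```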
Implements very basic debug information for locals.
For now it only implements debug info when the variable is stored within a
Wasm local. The goal is to support those that live in the data section (virtual stack).
As we now store negative signed integers in two's complement form,
we must also ensure that when truncating a float to an integer, the value is
wrapped around the integer's bit size.
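One way to express that wrap, as a sketch (hypothetical `wrapToBits`, not the backend's actual code):

```zig
const std = @import("std");

// Wrap a 64-bit value into an n-bit two's complement integer
// (1 <= n <= 63) by masking the low n bits and sign-extending.
fn wrapToBits(value: i64, n: u6) i64 {
    std.debug.assert(n >= 1);
    const mask = (@as(u64, 1) << n) - 1; // low n bits
    const bits = @as(u64, @bitCast(value)) & mask;
    const sign_bit = @as(u64, 1) << (n - 1);
    // Extend the n-bit sign upward if set, e.g. wrapToBits(64, 7) == -64.
    return @bitCast(if (bits & sign_bit != 0) bits | ~mask else bits);
}
```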
This also splits `@mulWithOverflow` into its own function to make
the code more maintainable and reduce branching.
When a signed integer is negative, it is now stored in two's complement form
rather than as its signed magnitude; we verify the sign bits during arithmetic operations instead.
This fixes signed cases of `@mulWithOverflow`.
The reason for having `@tan` is that we already have `@sin` and `@cos`
because some targets have machine code instructions for them, but in the
case that the implementation needs to go into compiler-rt, sin, cos, and
tan all share a common dependency which includes a table of data. To
avoid duplicating this table of data, we promote tan to become a builtin
alongside sin and cos.
ZIR: The tag enum is at capacity so this commit moves
`field_call_bind_named` to be `extended`. I measured this as one of
the least used tags in the zig codebase.
Fix libc math suffix for `f32` being wrong in both stage1 and stage2.
stage1: add missing libc prefix for float functions.
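For reference, the C convention involved (a sketch of the extern declarations; libm's own names):

```zig
// libc names the same operation per float width: no suffix for double,
// `f` for float, `l` for long double.
extern fn sin(x: f64) f64;
extern fn sinf(x: f32) f32;
extern fn sinl(x: c_longdouble) c_longdouble;
```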
Rather than allocating Decl objects with an Allocator, we instead allocate
them with a SegmentedList. This provides four advantages:
* Stable memory so that one thread can access a Decl object while another
thread allocates additional Decl objects from this list.
* It allows us to use u32 indexes to reference Decl objects rather than
pointers, saving memory in Type, Value, and dependency sets.
* Using integers to reference Decl objects rather than pointers makes
serialization trivial.
* It provides a unique integer to be used for anonymous symbol names,
avoiding multi-threaded contention on an atomic counter.
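A minimal sketch of the stable-memory property (toy element type; the real list stores Decl objects):

```zig
const std = @import("std");

test "SegmentedList items keep stable addresses" {
    const gpa = std.testing.allocator;
    var list: std.SegmentedList(u64, 0) = .{};
    defer list.deinit(gpa);

    try list.append(gpa, 42);
    const first = list.at(0); // *u64; a u32 index works just as well
    var i: u64 = 0;
    while (i < 10_000) : (i += 1) try list.append(gpa, i);
    // Unlike an ArrayList, growth never moved the existing element.
    try std.testing.expect(first.* == 42);
}
```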
When the last instruction is a debug instruction, its type is void.
Similarly, for 'noreturn' we emit an 'unreachable' instruction to tell the wasm validator
that the path cannot be reached.
Also respect the '--strip' flag in the self-hosted wasm linker by not emitting a 'name' section
when the flag is set to `true`.
* The `@bitCast` workaround is removed in favor of `@ptrCast` properly
doing element casting for slice element types. This required an
enhancement both to stage1 and stage2.
* stage1 incorrectly accepts `.{}` instead of `{}`. stage2 code that
abused this is fixed.
* Make some parameters comptime to support passing functions in switch
expressions (as opposed to making them function pointers); see the sketch
after this list.
* Avoid relying on local temporaries being mutable.
* Workarounds for when stage1 and stage2 disagree on function pointer
types.
* Workaround recursive formatting bug with a `@panic("TODO")`.
* Remove unreachable `else` prongs for some inferred error sets.
All in effort towards #89.
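The comptime-parameter technique mentioned above, sketched (hypothetical `apply` and `double`):

```zig
// Taking the function as a comptime parameter avoids a function
// pointer entirely, so the call works inside switch expressions.
fn apply(comptime op: fn (i32) i32, x: i32) i32 {
    return op(x);
}

fn double(x: i32) i32 {
    return x * 2;
}

test "comptime function parameter" {
    try @import("std").testing.expect(apply(double, 21) == 42);
}
```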
Rather than using blocks and control flow to check which operand is the maximum or minimum,
we use wasm's `select` instruction, which returns one of its operands based on the result of a comparison.
This removes the need for control flow and reduces the instruction count from 13 to 7.
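The source-level operation, for reference (a sketch; the backend now lowers it to one comparison plus `select` instead of branches):

```zig
// Branchless target: `select` picks lhs or rhs from the comparison
// result, with no blocks or jumps involved.
export fn maxU32(lhs: u32, rhs: u32) u32 {
    return if (lhs > rhs) lhs else rhs;
}
```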
Implements the `ctz` AIR instruction for integers with bitsize <= 64.
When the bitsize of the integer does not match the bitsize of a wasm type,
we first XOR the value with (1 << bitsize) to set the bit just past the integer's width,
ensuring we only count trailing zeroes within the integer's actual bitsize.
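The trick sketched for a 5-bit integer stored in a 32-bit local (present-day single-argument `@ctz` spelling; hypothetical `ctzU5`):

```zig
const assert = @import("std").debug.assert;

// Bit 5 is always 0 for a 5-bit value, so XOR-ing it in caps the
// count: a zero input then yields 5, matching @ctz on a true u5.
fn ctzU5(x: u32) u32 {
    assert(x < 1 << 5);
    return @ctz(x ^ (1 << 5));
}
```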
Implements the `clz` AIR instruction for integers with bitsize <= 64.
When the bitsize of the integer is not the same as wasm's bitsize,
we subtract the difference in bits, as those upper bits are always 0 for the integer and should
not be counted towards the end result. We also wrap the result to ensure it fits
in the result type, as documented in the language reference.
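Sketched the same way for `clz` (hypothetical `clzU5`):

```zig
const assert = @import("std").debug.assert;

// The upper 32 - 5 bits of the local are always zero for a 5-bit
// value, so subtract them from the 32-bit count.
fn clzU5(x: u32) u32 {
    assert(x < 1 << 5);
    return @clz(x) - (32 - 5);
}
```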
This implements the `mul_add` AIR instruction for floats of bitsize 32 and 64.
Supporting f16 will require the ability to extend and truncate f16 values to correctly
store and load them without losing accuracy.
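Usage of the builtin this lowers, for reference (`@mulAdd` computes a*b+c with a single rounding step):

```zig
test "fused multiply-add on f64" {
    const r = @mulAdd(f64, 1.5, 2.0, 0.5); // rounded once: 3.5
    try @import("std").testing.expect(r == 3.5);
}
```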
This implements the `max` and `min` AIR instructions by checking
whether LHS is greater/less than RHS. If so, we assign
LHS to the result; otherwise we assign RHS instead.
* Sema: store the precomputed monomorphed_funcs hash inside Module.Fn.
This is important because it may be accessed when resizing monomorphed_funcs
while this Fn has already been added to the set, but does not have the
owner_decl, comptime_args, or other fields populated yet.
* Sema: in `analyzeIsNonErr`, take advantage of the AIR tag being
`wrap_errunion_payload` to infer that `is_non_err` is comptime-true
without performing any error set resolution (see the sketch after this list).
- Also add some code to check for empty inferred error sets in this
function. If necessary we do resolve the inferred error set.
* Sema: queue full type resolution of the payload type when the
`wrap_errunion_payload` AIR instruction is emitted. This ensures the
backend can check its alignment.
* Sema: resolveTypeFully now additionally resolves comptime-only
status.
Closes #11306.
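The user-visible effect of the `analyzeIsNonErr` change, sketched:

```zig
// `eu` is built by wrapping a payload (wrap_errunion_payload), so
// Sema knows `is_non_err` is comptime-true: no error set resolution
// is needed and the error branch is statically dead.
test "is_non_err is comptime-known for wrapped payloads" {
    const eu: anyerror!u32 = 42;
    const v = eu catch unreachable;
    try @import("std").testing.expect(v == 42);
}
```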
This commit introduces a new AIR instruction `cmp_lt_errors_len`. It's
specific to this use case for two reasons:
* The total number of errors is not stable during semantic analysis; it
can only be reliably checked when flush() is called. So the backend
that is lowering the instruction must emit a relocation of some kind
and then populate it during flush().
* The fewer AIR instructions in memory, the better for compiler
performance, so we squish complex meanings into AIR tags without
hesitation.
The instruction is implemented only in the LLVM backend so far. It does
this by creating a simple function which is gutted and re-populated
with each flush().
AstGen now uses ResultLoc.coerced_ty for `@intToError` and Sema does the
coercion.
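A sketch of the kind of call site that produces the instruction (hypothetical wrapper; `@intToError` was the builtin's name at the time):

```zig
// Converting an arbitrary integer back to an error needs a runtime
// check that `n` is less than the total number of errors, a count
// only known once flush() runs: hence cmp_lt_errors_len.
fn toError(n: u16) anyerror {
    return @intToError(n);
}
```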
This implements overflow arithmetic for unsigned and signed integers,
meaning the following builtins:
- @addWithOverflow
- @subWithOverflow
- @shlWithOverflow
- @mulWithOverflow
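One of them sketched in present-day tuple-returning form (when this landed, the builtins took a type parameter and a result pointer instead):

```zig
test "@shlWithOverflow reports shifted-out bits" {
    const r = @shlWithOverflow(@as(u8, 0b1100_0000), 1);
    try @import("std").testing.expect(r[0] == 0b1000_0000 and r[1] == 1);
}
```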
Rather than creating an import for externs on updateDecl, we now
generate them when they're referenced. This is required so that using `@TypeOf(extern_fn())`
does not emit the import into the binary (which would cause an incorrect function type index,
as the function is never fully analyzed).
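The situation being fixed, sketched:

```zig
extern fn extern_fn() u32;

// Only the type is wanted here: @TypeOf does not evaluate its
// operand, so no call happens and no import should reach the binary.
const Result = @TypeOf(extern_fn());
```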
That happens after a function body is analyzed. This prevents circular
dependency compile errors while still providing a way to mark types that need to be
fully resolved before a given function is sent to the codegen backend.
Implements lowering constants for pointers of value 'opt_payload_ptr'.
The offset is calculated by determining the ABI size of the full type and
then subtracting the payload size.
Error values are drawn from the entire global error set, while
users are often switching only on the specific errors present in the operand's error set.
This means that cases are almost always sparse and not contiguous.
For this reason, we instead emit the default case for error values not present in
that specific operand error set. This is fine, as those cases can never be hit;
the type system prevents them.
By still allowing jump tables for those cases, rather than if-else chains, we save runtime cost
as well as binary size.
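A sketch of such a switch (hypothetical error set):

```zig
const E = error{ NotFound, AccessDenied };

// `err`'s set has two members, but error values are indices into the
// global error set; values outside the set route to an unreachable
// default case, keeping the jump table intact.
fn code(err: E) u8 {
    return switch (err) {
        error.NotFound => 1,
        error.AccessDenied => 2,
    };
}
```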
This implements the `error_name` instruction, which is emitted for runtime `@errorName` callsites.
The implementation works by creating 2 symbols and corresponding atoms.
The initial symbol contains a table in which each element consists of a slice whose ptr field
points to the error name and whose len field contains the error name's length without the sentinel.
The secondary symbol contains a list of all error names from the global error set.
When lowering the error_name instruction, we first get a pointer to the first symbol.
Then, based on the operand, we perform pointer arithmetic to get the correct index into this table,
e.g. element i lives at ptr + i * slice size. The result is stored in a local
and then returned as the instruction result.
During `flush()` we populate the error names table by looping over the global error set
and creating a relocation for each error name. This relocation is appended to the table symbol.
Then finally, this name is written to the names list itself.
Finally, both symbols' atoms are allocated within the rest of the binary.
When no error name is referenced, the `error_name_symbol` is never set, and therefore
no error name table will be emitted into the final binary.
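The lookup the instruction performs, sketched (hypothetical `errorName`; the real table and name bytes are linker symbols populated during flush()):

```zig
// Each table element is a slice: a ptr into the names blob plus the
// name's length (without the sentinel). Indexing is plain pointer
// arithmetic: ptr + index * @sizeOf([]const u8).
fn errorName(table: [*]const []const u8, index: u16) []const u8 {
    return table[index];
}
```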
The existing `cmp_*` instructions get their result type from `lhs`, but
vector comparison will always return a vector of bools with only the
length derived from its operands. This necessitates the creation of a
new AIR instruction.
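The type behavior motivating the new instruction, sketched:

```zig
test "vector compares return a vector of bools" {
    const a: @Vector(4, i32) = .{ 1, 2, 3, 4 };
    const b: @Vector(4, i32) = .{ 4, 3, 2, 1 };
    const lt = a < b; // @Vector(4, bool): only the length carries over
    try @import("std").testing.expect(@TypeOf(lt) == @Vector(4, bool));
}
```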
This implements improvements/fixes to get all the union, tuple, and array behavior tests passing.
Previously, we lowered parent pointers for field_ptr and element_ptr incompletely. This has now
been improved to recursively lower such pointers.
Also a fix was done to `generateSymbol` when checking a container's layout.
Previously it was assumed to always be a struct. However, the type can also be a tuple, which caused a panic.
Asking for the type's container layout instead allows us to keep a single branch for both cases.
Notably, Value.eql and Value.hash are improved to treat NaN as equal to
itself, so that Type/Value can be hash map keys. Likewise float hashing
normalizes the float value before computing the hash.
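The invariant being preserved, sketched (hash maps require that eql(a, b) implies hash(a) == hash(b); hypothetical `hashF64`):

```zig
const std = @import("std");

// NaN != NaN under IEEE rules and NaN has many bit patterns, so both
// eql and hash canonicalize it before Value is used as a map key.
fn hashF64(value: f64, hasher: *std.hash.Wyhash) void {
    const canonical = if (std.math.isNan(value)) std.math.nan(f64) else value;
    hasher.update(std.mem.asBytes(&canonical));
}
```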
When the length is comptime-known, we perform an inline loop instead of emitting
a runtime loop into the binary.
This also allows us to easily write 'undefined' to aggregate types,
which we now do when setting the error tag of an error union whose payload will be set to undefined.
This implements the `memcpy` instruction and also updates the inline memcpy calls
to make use of the same implementation. We use the fast loop when the length is comptime-known,
and a runtime loop when the length is runtime-known.
We also perform feature detection to emit a single wasm `memory.copy` instruction when the
'bulk-memory' feature is enabled (off by default).
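A sketch of the two shapes (hypothetical helpers):

```zig
// Comptime-known length: the loop unrolls into straight-line code.
fn copyFixed(dst: *[4]u8, src: *const [4]u8) void {
    comptime var i = 0;
    inline while (i < 4) : (i += 1) dst[i] = src[i];
}

// Runtime length: a loop is emitted, or a single `memory.copy`
// instruction when 'bulk-memory' is enabled.
fn copyRuntime(dst: [*]u8, src: [*]const u8, len: usize) void {
    var i: usize = 0;
    while (i < len) : (i += 1) dst[i] = src[i];
}
```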
Adds 2 new AIR instructions:
* dbg_var_ptr
* dbg_var_val
Sema no longer emits dbg_stmt AIR instructions when strip=true.
LLVM backend: fixed lowerPtrToVoid for the case where calling ptrAlignment on
the element type is problematic.
LLVM backend: fixed alloca instructions improperly getting debug
location annotated, causing chaotic debug info behavior.
zig_llvm.cpp: fixed incorrect bindings for a function that should use
unsigned integers for line and column.
A bunch of C test cases regressed because the new dbg_var AIR
instructions caused their operands to be alive, exposing latent bugs.
Mostly it's just a problem that the C backend lowers mutable
and const slices to the same C type, so we need to represent that in the
C backend instead of printing two duplicate typedefs.
* use the real start code for LLVM backend with x86_64-linux
- there is still a check for zig_backend after initializing the TLS
area to skip some stuff.
* introduce new AIR instructions and implement them for the LLVM
backend. They are the same as `call` except with a modifier.
- call_always_tail
- call_never_tail
- call_never_inline
* LLVM backend calls hasRuntimeBitsIgnoringComptime in more places to
avoid unnecessarily depending on comptimeOnly being resolved for some
types.
* LLVM backend: remove duplicate code for setting linkage and value
name. The canonical place for this is in `updateDeclExports`.
* LLVM backend: do some assembly template massaging to make `%%`
render as `%`. More hacks will be needed to make inline assembly
catch up with stage1.
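An example of a template where the massaging matters (a sketch; x86_64 AT&T syntax):

```zig
// The `%%eax` in the Zig template must reach the assembler as `%eax`.
fn zeroEax() u32 {
    return asm volatile ("xorl %%eax, %%eax"
        : [ret] "={eax}" (-> u32),
    );
}
```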