26 Commits

mlugg
457c94d353
compiler: implement @branchHint, replacing @setCold
Implements the accepted proposal to introduce `@branchHint`. This
builtin is permitted as the first statement of a block if that block is
the direct body of any of the following:

* a function (*not* a `test`)
* either branch of an `if`
* the RHS of a `catch` or `orelse`
* a `switch` prong
* an `or` or `and` expression
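
For illustration, a minimal sketch (hypothetical function, not from the commit) placing the hint as the first statement of each `if` branch:

```zig
fn nonZero(x: u32) error{Zero}!u32 {
    if (x == 0) {
        // Hint that control flow rarely reaches this branch at all.
        @branchHint(.cold);
        return error.Zero;
    } else {
        // Hint that this is the common direction for the branch.
        @branchHint(.likely);
        return x;
    }
}
```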

It lowers to the ZIR instruction `extended(branch_hint(...))`. When Sema
encounters this instruction, it sets `sema.branch_hint` appropriately,
and `zirCondBr` etc. are expected to reset this value as necessary. The
state is on `Sema` rather than `Block` to make it automatically
propagate up non-conditional blocks without special handling. If
`@panic` is reached, the branch hint is set to `.cold` if none was
already set; similarly, error branches get a hint of `.unlikely` if no
hint is explicitly provided. If a condition is comptime-known, `cold`
hints from the taken branch are allowed to propagate up, but other hints
are discarded. This is because a `likely`/`unlikely` hint just indicates
the direction this branch is likely to go, which is redundant
information when the branch is known at comptime; but `cold` hints
indicate that control flow is unlikely to ever reach this branch,
meaning if the branch is always taken from its parent, then the parent
is also unlikely to ever be reached.

This branch information is stored in AIR `cond_br` and `switch_br`. In
addition, `try` and `try_ptr` instructions have variants `try_cold` and
`try_ptr_cold` which indicate that the error case is cold (rather than
just unlikely); this is reachable through e.g. `errdefer unreachable` or
`errdefer @panic("")`.
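
A minimal sketch of the kind of code described (hypothetical helper names), where the panicking `errdefer` makes the error path of the `try` cold:

```zig
fn parseByte(s: []const u8) error{Empty}!u8 {
    if (s.len == 0) return error.Empty;
    return s[0];
}

fn firstByte(s: []const u8) error{Empty}!u8 {
    // The errdefer panics on the error path, so the `try` below can be
    // emitted as the `try_cold` variant rather than plain `try`.
    errdefer @panic("parseByte failed");
    return try parseByte(s);
}
```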

A new API `unwrapSwitch` is introduced to `Air` to make it more
convenient to access `switch_br` instructions. In time, I plan to update
all AIR instructions to be accessed via an `unwrap` method which returns
a convenient tagged union a la `InternPool.indexToKey`.

The LLVM backend lowers branch hints for conditional branches and
switches as follows:

* If any branch is marked `unpredictable`, the instruction is marked
  `!unpredictable`.
* Any branch which is marked as `cold` gets a
  `llvm.assume(i1 true) [ "cold"() ]` call to mark the code path cold.
* If any branch is marked `likely` or `unlikely`, branch weight metadata
  is attached with `!prof`. Likely branches get a weight of 2000, and
  unlikely branches a weight of 1. In `switch` statements, un-annotated
  branches get a weight of 1000 as a "middle ground" hint, since there
  could be likely *and* unlikely *and* un-annotated branches.
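
As a hedged illustration (hypothetical function; weights taken from the rules above), hints on `switch` prongs would map roughly like this:

```zig
fn weightOf(tag: u8) u32 {
    return switch (tag) {
        0 => blk: {
            @branchHint(.unlikely); // branch weight 1
            break :blk 1;
        },
        1 => blk: {
            @branchHint(.likely); // branch weight 2000
            break :blk 2;
        },
        else => 0, // un-annotated prong: "middle ground" weight 1000
    };
}
```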

For functions, a `cold` hint corresponds to the `cold` function
attribute, and other hints are currently ignored -- as far as I can tell
LLVM doesn't really have a way to lower them. (Ideally, we would want
the branch hint given in the function to propagate to call sites.)

The compiler and standard library do not yet use this new builtin.

Resolves: #21148
2024-08-27 00:41:49 +01:00
Jacob Young
62f7276501 Dwarf: emit info about inline call sites 2024-08-20 08:09:33 -04:00
Andrew Kelley
9be8a9000f Revert "implement @expect builtin (#19658)"
This reverts commit a7de02e05216db9a04e438703ddf1b6b12f3fbef.

This did not implement the accepted proposal, and I did not sign off
on the changes. I would like a chance to review this, please.
2024-05-22 09:57:43 -07:00
David Rubin
a7de02e052
implement @expect builtin (#19658)
* implement `@expect`

* add docs

* add a second arg for expected bool

* fix typo

* move `expect` to use BinOp

* update to newer langref format
2024-05-22 10:51:16 -05:00
Jacob Young
aa688567f5 Air: replace .dbg_inline_* with .dbg_inline_block
This prevents the possibility of not emitting a `.dbg_inline_end`
instruction and reduces the allocation requirements of the backends.

Closes #19093
2024-03-02 21:19:34 -08:00
mlugg
59447e5305
compiler: decide dbg_var scoping based on AIR blocks
This commit eliminates the `dbg_block_{begin,end}` instructions from
both ZIR and AIR. Instead, lexical scoping of `dbg_var_{ptr,val}`
instructions is decided based on the AIR block they exist within. This
is a much more robust system, and also results in a huge drop in ZIR
bytes - around 7% for Sema.zig.

This required some enhancements to Sema to prevent elision of blocks
when they are required for debug variable scoping. This can be observed
by looking at the AIR for the following simple test program with and
without `-fstrip`:

```zig
export fn f() void {
    {
        var a: u32 = 0;
        _ = &a;
    }
    {
        var a: u32 = 0;
        _ = &a;
    }
}
```

When `-fstrip` is passed, no AIR blocks are generated. When `-fno-strip`
is passed, the ZIR blocks are lowered to true AIR blocks to give correct
lexical scoping to the debug vars.

The changes here incidentally resolve #19060. A corresponding behavior
test has been added.

Resolves: #19060
2024-02-26 13:20:45 +00:00
Veikka Tuominen
7d75c3d3b8 llvm: ensure returned undef is 0xaa bytes when runtime safety is enabled
Closes #13178
2024-01-29 17:35:07 -08:00
Jacob Young
daf91ed8d1 Air: use typesafe Air.Inst.Index
I need some indices for a thing...
2023-12-03 02:05:06 -08:00
antlilja
6a29646a55 Rename @fabs to @abs and accept integers
Replaces the @fabs builtin with a new @abs builtin which accepts
floats, signed integers and vectors of said types.
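
A small illustrative example (hypothetical function name):

```zig
fn magnitude(x: i32) u32 {
    // `@fabs` accepted only floats; `@abs` also accepts signed integers
    // (yielding the corresponding unsigned type) and vectors.
    return @abs(x);
}
```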
2023-09-27 11:15:53 -07:00
mlugg
ff37ccd298 Air: store interned values in Air.Inst.Ref
Previously, interned values were represented as AIR instructions using
the `interned` tag. Now, the AIR ref directly encodes the InternPool
index. The encoding works as follows:
* If the ref matches one of the static values, it corresponds to the same InternPool index.
* Otherwise, if the MSB is 0, the ref corresponds to an InternPool index.
* Otherwise, if the MSB is 1, the ref corresponds to an AIR instruction index (after removing the MSB).

Note that since most static InternPool indices are low values (the
exceptions being `.none` and `.var_args_param_type`), the first rule is
almost a nop.
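
An illustrative decode sketch following the rules above (the static-value rule is omitted, and the real `Air.Inst.Ref` type differs in detail):

```zig
const RefKind = union(enum) {
    interned: u32, // InternPool index
    inst: u32, // AIR instruction index
};

fn unwrapRef(ref: u32) RefKind {
    const msb: u32 = 1 << 31;
    // MSB clear: the ref directly encodes an InternPool index.
    if (ref & msb == 0) return .{ .interned = ref };
    // MSB set: strip it to recover the AIR instruction index.
    return .{ .inst = ref & ~msb };
}
```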
2023-06-27 01:21:32 -07:00
Andrew Kelley
9684947faa compiler: start moving safety-checks into backends
This actually used to be how it worked in stage1, and there was this
issue to change it: #2649

So this commit is a reversal of that idea. One motivation for that issue
was avoiding emitting the panic handler in compilations that do not have
any calls to panic. This commit only resolves the panic handler in the
event of a safety check function being emitted, so it does not have that
flaw.

The other reason given in that issue was for optimizations that elide
safety checks. It's yet to be determined whether that was a good idea or
not; this can get re-explored when we start adding optimization passes
to AIR.

This commit adds these AIR instructions, which are only emitted if
`backendSupportsFeature(.safety_checked_arithmetic)` is true:
 * add_safe
 * sub_safe
 * mul_safe

It removes these nonsensical AIR instructions:
 * addwrap_optimized
 * subwrap_optimized
 * mulwrap_optimized

The safety-checked arithmetic functions push the burden of invoking the
panic handler into the backend. This makes for a messier compiler
implementation, but it reduces the amount of AIR instructions emitted by
Sema, which reduces time spent in the secondary bottleneck of the
compiler. It also generates more compact LLVM IR, reducing time spent in
the primary bottleneck of the compiler.

Finally, it eliminates 1 stack allocation per safety-check which was
being used to store the resulting tuple. These allocations were going to
be annoying when combined with suspension points.
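
A hedged sketch of the kind of source expression affected (hypothetical function):

```zig
fn sum(a: u32, b: u32) u32 {
    // With runtime safety on and a backend that reports
    // `.safety_checked_arithmetic`, this addition can lower to a single
    // `add_safe` AIR instruction instead of an overflow-checked tuple
    // plus an explicit branch into the panic handler.
    return a + b;
}
```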
2023-06-25 01:41:08 -07:00
mlugg
f26dda2117 all: migrate code to new cast builtin syntax
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:

* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
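
For reference, the general shape of the rewrite (illustrative example, not taken from the commit):

```zig
fn toByte(x: u32) u8 {
    // Old syntax: return @intCast(u8, x);
    // New syntax: the result type is taken from the surrounding context.
    return @intCast(x);
}
```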
2023-06-24 16:56:39 -07:00
Eric Joldasov
a6c8ee5231 compiler: rename "@XToY" to "@YFromX", zig fmt: rewrite them
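
An illustrative example of the rename pattern (hypothetical declarations, not from the commit):

```zig
const E = enum(u8) { a, b };

fn tagValue(e: E) u8 {
    // was: @enumToInt(e)
    return @intFromEnum(e);
}
```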
Signed-off-by: Eric Joldasov <bratishkaerik@getgoogleoff.me>
2023-06-19 12:34:24 -07:00
Andrew Kelley
90a877f462 InternPool: pass by const pointer
The Zig language allows the compiler to make this optimization
automatically. We should definitely make the compiler do that, and
revert this commit. However, that will not happen in this branch, and I
want to continue to explore achieving performance parity with
merge-base. So, this commit changes all InternPool parameters to be
passed by const pointer rather than by value.

I measured a 1.03x ± 0.03 speedup vs the previous commit compiling the
(set of passing) behavior tests. Against merge-base, this commit is
1.17x ± 0.04 slower, which is an improvement from the previous
measurement of 1.22x ± 0.02.

Related issue: #13510
Related issue: #14129
Related issue: #15688
2023-06-10 20:47:57 -07:00
Jacob Young
9afa974183 InternPool: fix enough crashes to run build-obj on a simple program 2023-06-10 20:47:55 -07:00
Jacob Young
70cc68e999 Air: remove constant tag
Some uses have been moved to their own tags; the rest use `interned`.

Also, finish porting comptime mutation to be more InternPool aware.
2023-06-10 20:47:55 -07:00
Jacob Young
1a4626d2cf InternPool: remove more legacy values
Reinstate some tags that will be needed for comptime init.
2023-06-10 20:47:54 -07:00
Andrew Kelley
9ff514b6a3 compiler: move error union types and error set types to InternPool
One change worth noting in this commit is that `module.global_error_set`
is no longer kept strictly up-to-date. The previous code reserved
integer error values when dealing with error set types, but this is no
longer needed because the integer values are not needed for semantic
analysis unless `@errorToInt` or `@intToError` are used and therefore
may be assigned lazily.
2023-06-10 20:47:53 -07:00
Andrew Kelley
2f05b1482a Sema: update core comptime detection logic to be InternPool aware
* Add some assertions to make sure instructions are not none. I tested
  all these with master branch as well and made sure the behavior tests
  still passed with the assertions intact (along with a handful of
  callsite updates).
* Fix Sema.resolveMaybeUndefValAllowVariablesMaybeRuntime not noticing
  that interned values are comptime-known. This was causing all kinds
  of chaos.
* Fix print_air writeType calling tag() without checking for ip_index
2023-06-10 20:42:28 -07:00
Andrew Kelley
5e636643d2 stage2: move many Type encodings to InternPool
Notably, `vector`.

Additionally, all alternate encodings of `pointer`, `optional`, and
`array`.
2023-06-10 20:42:27 -07:00
Andrew Kelley
00f82f1c46 stage2: add interned AIR tag
This required additionally passing the `InternPool` into some AIR
methods.

Also, implement `Type.isNoReturn` for interned types.
2023-06-10 20:40:03 -07:00
Andrew Kelley
5378fdffdc stage2: introduce store_safe AIR instruction
store:
The value to store may be undefined, in which case the destination
memory region has undefined bytes after this instruction is
evaluated. In such case ignoring this instruction is legal
lowering.

store_safe:
Same as `store`, except if the value to store is undefined, the
memory region should be filled with 0xaa bytes, and any other
safety metadata such as Valgrind integrations should be notified of
this memory region being undefined.
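
A hedged sketch of source code exercising the distinction (hypothetical function):

```zig
fn poison(p: *u32) void {
    // Safety on: lowers like `store_safe`, filling the location with 0xaa
    // bytes and notifying tooling such as Valgrind.
    // Safety off: a plain `store` of undefined, which may legally be elided.
    p.* = undefined;
}
```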
2023-04-25 11:23:41 -07:00
Andrew Kelley
057c950093 LLVM backend: support non-byte-sized memset
Also introduce memset_safe AIR tag and support it in C backend and LLVM
backend.
2023-04-25 11:23:41 -07:00
Andrew Kelley
edb5e493e6 update @memcpy to require equal src and dest lens
* Sema: upgrade operands to array pointers if possible when emitting
  AIR.
* Implement safety checks for length mismatch and aliasing.
* AIR: make ptrtoint support slice operands. Implement in LLVM backend.
* C backend: implement new `@memset` semantics. `@memcpy` is not done
  yet.
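
An illustrative example of the new `@memcpy` contract (hypothetical function):

```zig
fn copyInto(dest: []u8, src: []const u8) void {
    // `dest.len` must now equal `src.len`, and the regions must not
    // alias; both conditions are safety-checked.
    @memcpy(dest, src);
}
```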
2023-04-25 11:23:40 -07:00
mlugg
407dc6eee4
Liveness: avoid emitting unused instructions or marking their operands as used
Backends want to avoid emitting unused instructions which do not have
side effects: to that end, they all have `Liveness.isUnused` checks for
many instructions. However, checking this in the backends avoids a lot
of potential optimizations. For instance, if a nested field is loaded,
then the first field access would still be emitted, since its result is
used by the next access (which is then unreferenced).

To elide more instructions, Liveness can track this data instead. For
operands which do not have to be lowered (i.e. are not side-effecting
and are not something special like `arg`), Liveness can ignore their
operand usages, and push the unused information further up, potentially
marking many more instructions as unreferenced.
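
A hedged sketch of the nested-field case described above (hypothetical types):

```zig
const S = struct { inner: struct { x: u32 } };

fn discardNested(s: S) void {
    // The outer field access feeds only the discarded inner access, so
    // Liveness can now mark both as unused instead of only the innermost.
    _ = s.inner.x;
}
```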

In doing this, I also uncovered a bug in the LLVM backend relating to
discarding the result of `@cVaArg`, which this change fixes. A behaviour
test has been added to cover it.
2023-04-20 20:28:48 +01:00
Jacob Young
02a8b66b00
Liveness: add a liveness verification pass
This code only runs in a debug Zig compiler, similar to verifying LLVM modules.
2023-04-20 20:28:47 +01:00