* This has not seen meaningful development for about a decade.
* The Linux kernel port was never upstreamed.
* The glibc port was never upstreamed.
* GCC 15.1 recently deprecated support for it.
It may still make sense to support an ILP32 ABI on AArch64 more broadly (which
we already have the Abi.ilp32 tag for), but, to the extent that it even existed
in any "official" sense, the *GNU* ILP32 ABI is certainly dead.
The "musl" part of the Zig target triples `wasm32-wasi-musl` and
`wasm32-emscripten-musl` refers to the libc, not really the ABI.
For WASM, most LLVM-based tooling uses `wasm32-wasi`, which is
normalized into `wasm32-unknown-wasi`, with an implicit `-unknown` and
without `-musl`.
Similarly, Emscripten uses `wasm32-unknown-emscripten` without `-musl`.
By using `-unknown` instead of `-musl` we get better compatibility with
external tooling.
Emscripten currently implements `emscripten_return_address()` by calling
out into JavaScript and parsing a stack trace, which introduces
significant overhead that we would prefer to avoid in release builds.
This is especially problematic for allocators: the generic parts of
`std.mem.Allocator` make frequent use of `@returnAddress`, even though
very few allocator implementations actually observe the return address.
Unless the compiler can devirtualize the allocator calls, this makes
allocators nigh unusable for performance-critical applications like
games.
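A minimal sketch of that pattern (hypothetical names, not the actual
`std.mem.Allocator` API):

```zig
const VTable = struct {
    alloc: *const fn (ctx: *anyopaque, len: usize, ret_addr: usize) ?[*]u8,
};

const AnyAllocator = struct {
    ctx: *anyopaque,
    vtable: *const VTable,

    // The generic wrapper captures @returnAddress() on every call and threads
    // it through the vtable, even when the concrete implementation never looks
    // at it. On Emscripten that currently means a round trip through JavaScript
    // unless the call is devirtualized.
    pub fn alloc(self: AnyAllocator, len: usize) ?[*]u8 {
        return self.vtable.alloc(self.ctx, len, @returnAddress());
    }
};
```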
This should be a lot easier to maintain. It's also a small step towards
eventually making the builder API parse the data layout string in order to
answer layout questions that we need to ask during code generation.
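As a rough illustration of where that could lead (a hypothetical helper, not
anything the builder API does today):

```zig
const std = @import("std");

// Answer "how wide is the default pointer?" straight from a data layout
// string such as "e-m:e-p:64:64-i64:64-n8:16:32:64-S128".
fn defaultPointerBits(data_layout: []const u8) ?u16 {
    var it = std.mem.splitScalar(u8, data_layout, '-');
    while (it.next()) |spec| {
        // The default pointer spec has the form "p:<size>:<abi-align>[:...]".
        if (std.mem.startsWith(u8, spec, "p:")) {
            var fields = std.mem.splitScalar(u8, spec[2..], ':');
            const size = fields.next() orelse return null;
            return std.fmt.parseInt(u16, size, 10) catch null;
        }
    }
    return null;
}
```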
This is a correctness issue: When -fno-builtin is used, we must assume that we
could be compiling the memcpy/memset implementations, so generating calls to
them is problematic.
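A sketch of the hazard (not the actual compiler-rt code):

```zig
// If this compilation unit is itself the memcpy implementation, lowering its
// copy loop back into a call to memcpy would recurse forever.
export fn memcpy(noalias dest: ?[*]u8, noalias src: ?[*]const u8, n: usize) callconv(.c) ?[*]u8 {
    var i: usize = 0;
    while (i < n) : (i += 1) dest.?[i] = src.?[i];
    return dest;
}
```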
- allow `-fsanitize-coverage-trace-pc-guard` to be used on its own without enabling the fuzzer.
(note that previously, the flag was only active when fuzzing, yet the fuzzer itself does not use it, and the code would not link as-is.)
- add stub functions in the fuzzer to link with instrumented C code (previously, fuzzed tests failed to link if they called into C; a sketch of such stubs follows this list):
while the zig compile unit uses a custom `EmitOptions.Coverage` with features disabled,
the C code is built by calling into the clang driver with "-fsanitize=fuzzer-no-link", which automatically enables the default features.
(see de06978ebc/clang/lib/Driver/SanitizerArgs.cpp (L587))
- emit `-fsanitize-coverage=trace-pc-guard` instead of `-Xclang -fsanitize-coverage-trace-pc-guard` so that edge coverage is enabled by the clang driver. (previously, it was enabled only because the fuzzer was)
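The stubs are along these lines (a sketch assuming the standard SanitizerCoverage trace-pc-guard entry points; the real set of stubs may differ):

```zig
// Instrumented C objects reference these symbols; provide no-op definitions so
// they link even though the Zig fuzzer does not consume the callbacks.
export fn __sanitizer_cov_trace_pc_guard_init(start: [*]u32, stop: [*]u32) void {
    _ = start;
    _ = stop;
}

export fn __sanitizer_cov_trace_pc_guard(guard: *u32) void {
    _ = guard;
}
```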
Functions like isMinGW() and isGnuLibC() have a good reason to exist: they look
at multiple components of the target. But functions like isWasm(), isDarwin(),
isGnu(), etc. only exist to save 4-8 characters. I don't think this is a good
enough reason to keep them, especially given that:
* It's not immediately obvious to a reader whether target.isDarwin() means the
same thing as target.os.tag.isDarwin() precisely because isMinGW() and similar
functions *do* look at multiple components.
* It's not clear where we would draw the line. The logical conclusion before
this commit would be to also wrap Arch.isX86(), Os.Tag.isSolarish(),
Abi.isOpenHarmony(), and so on; this obviously gets out of hand quickly.
* It's nice to just have a single correct way of doing something.
* arm_apcs is the long dead "OABI" which we never had working support for.
* arm_aapcs16_vfp is for arm-watchos-none which is a dead target that we've
dropped support for.
Sema is arbitrarily scalarizing some operations, which means that when I
try to implement vectorized versions of those operations in a backend,
they are impossible to test due to Sema not producing them. Now, I can
implement them and then temporarily enable the new feature for that
backend in order to test them. Once the backend supports all of them,
the feature can be permanently enabled.
This also deletes the Air instructions `int_from_bool` and
`int_from_ptr`, which are just bitcasts with a fixed result type, since
changing `un_op` to `ty_op` takes up the same amount of memory.
This instruction is like `intcast`, but includes two safety checks:
* Checks that the int is in range of the destination type
* If the destination type is an exhaustive enum, checks that the int
is a named enum value
This instruction is locked behind the `safety_checked_instructions`
backend feature; if unsupported, Sema will emit a fallback, as with
other safety-checked instructions.
This instruction is used to add a missing safety check for `@enumFromInt`
truncating bits. This check also has a fallback for backends which do
not yet support `safety_checked_instructions`.
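For illustration, this is the kind of code the check guards (my own example of
the language-level behavior, not the compiler's lowering):

```zig
const E = enum(u8) { a = 0, b = 1 };

fn toE(x: u8) E {
    // In a safe build, passing any value other than 0 or 1 must trip a safety
    // panic: the integer must fit the tag type and name an actual field of E.
    return @enumFromInt(x);
}
```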
Resolves: #21946
* `std.builtin.Panic` -> `std.builtin.panic`, because it is a namespace.
* `root.Panic` -> `root.panic` for the same reason. There are type
checks so that we still allow the legacy `pub fn panic` strategy in
the 0.14.0 release.
* `std.debug.SimplePanic` -> `std.debug.simple_panic`, same reason.
* `std.debug.NoPanic` -> `std.debug.no_panic`, same reason.
* `std.debug.FormattedPanic` is now a function `std.debug.FullPanic`
which takes a `panicFn` as input and returns a namespace with all the
panic functions (see the sketch after this list). This handles the
incredibly common case of only wanting to override how the message is
printed, whilst keeping nice formatted panics.
* Remove `std.builtin.panic.messages`; now, every safety panic has its
own function. This reduces binary bloat, as calls to these functions
no longer need to prepare any arguments (aside from the error return
trace).
* Remove some legacy declarations, since a zig1.wasm update has
happened. Most of these were related to the panic handler, but a quick
grep for "zig1" brought up a couple more results too.
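A sketch of that common override (assuming the `panicFn` receives the message
and an optional first trace address):

```zig
const std = @import("std");

// Keep the formatted safety panics, but change how the final message is reported.
pub const panic = std.debug.FullPanic(myPanic);

fn myPanic(msg: []const u8, first_trace_addr: ?usize) noreturn {
    _ = first_trace_addr;
    std.debug.print("fatal: {s}\n", .{msg});
    std.process.abort();
}
```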
Also, add some missing type checks to Sema.
Resolves: #22584
formatted -> full
The original motivation here was to fix regressions caused by #22414.
However, while working on this, I ended up discussing a language
simplification with Andrew, which changes things a little from how they
worked before #22414.
The main user-facing change here is that any reference to a prior
function parameter, even one potentially comptime-known at the usage
site, or never analyzed at all, now makes a function generic. This
applies even if the parameter being referenced is not a `comptime`
parameter, since it could still be populated when performing an inline
call. This is a breaking language change.
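For example (my own illustration of the rule, not code from this changeset):

```zig
// `y`'s type expression refers to the prior parameter `x`, so under the new
// rule `f` is considered generic, even though `@TypeOf(x)` never needs `x`'s
// value.
fn f(x: u32, y: @TypeOf(x)) u32 {
    return x + y;
}
```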
The detection of this is done in AstGen; when evaluating a parameter
type or return type, we track whether it referenced any prior parameter,
and if so, we mark this type as being "generic" in ZIR. This will cause
Sema to not evaluate it until the time of instantiation or inline call.
A lovely consequence of this from an implementation perspective is that
it eliminates the need for most of the "generic poison" system. In
particular, `error.GenericPoison` is now completely unnecessary, because
we identify generic expressions earlier in the pipeline; this simplifies
the compiler and avoids redundant work. This also entirely eliminates
the concept of the "generic poison value". The only remnant of this
system is the "generic poison type" (`Type.generic_poison` and
`InternPool.Index.generic_poison_type`). This type is used in two
places:
* During semantic analysis, to represent an unknown result type.
* When storing generic function types, to represent a generic parameter/return type.
It's possible that these use cases should instead use `.none`, but I
leave that investigation to a future adventurer.
One last thing. Prior to #22414, inline calls were a little inefficient,
because they re-evaluated even non-generic parameter types whenever they
were called. Changing this behavior is what ultimately led to #22538.
Well, because the new logic will mark a type expression as generic if
there is any chance its resolved type could differ in an inline call,
this redundant work is no longer necessary! So, this is another way in
which the new design reduces redundant work and complexity.
Resolves: #22494
Resolves: #22532
Resolves: #22538
On x86_64, the `@divFloor` change is a strict improvement, and the
`@mod` change adds one zero-latency instruction. In return, once we
upgrade to LLVM 20, when the optimizer discovers one of these operations
has a power-of-two constant rhs, it will be able to optimize the entire
operation into an `ashr` or `and`, respectively.
                | #I | CPL | CPT  |
old `@divFloor` |  8 |  15 | .143 |
new `@divFloor` |  7 |  15 | .148 |
old `@mod`      |  9 |  17 | .134 | (rip llvm
new `@mod`      | 10 |  17 | .138 |  scheduler)
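For reference, these are the power-of-two identities the optimizer can then
recover (a sketch of the arithmetic, not the emitted machine code):

```zig
fn divFloor8(x: i32) i32 {
    return x >> 3; // equivalent to @divFloor(x, 8): arithmetic shift floors toward -inf
}

fn mod8(x: i32) i32 {
    return x & 7; // equivalent to @mod(x, 8): result is always non-negative
}
```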
This was done by regex substitution with `sed`. I then manually went
over the entire diff and fixed any incorrect changes.
This diff also changes a lot of `callconv(.C)` to `callconv(.c)`, since
my regex happened to also trigger here. I opted to leave these changes
in, since they *are* a correct migration, even if they're not the one I
was trying to do!
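For reference, that incidental migration looks like this (trivial sketch):

```zig
// before: export fn add(a: u32, b: u32) callconv(.C) u32 { ... }
export fn add(a: u32, b: u32) callconv(.c) u32 {
    return a + b;
}
```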
mainly, rework how relocations work. This is the point at which symbol
indexes are known - not before. And don't emit unnecessary relocations!
They're only needed when emitting an object file.
Changes the wasm linker to keep MIR alive long-term so that fixups can be
reapplied after linker garbage collection.
use labeled switch while we're at it
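The labeled-switch pattern, for reference (a generic sketch, not the linker's
actual code):

```zig
const State = enum { start, middle, end };

fn run(initial: State) void {
    // `continue :fsm` re-dispatches on a new operand, replacing the old
    // while-loop-around-switch idiom.
    fsm: switch (initial) {
        .start => continue :fsm .middle,
        .middle => continue :fsm .end,
        .end => {},
    }
}
```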
The goals of this branch are to:
* compile faster when using the wasm linker and backend
* enable saving compiler state by directly copying in-memory linker
state to disk.
* more efficient compiler memory utilization
* introduce integer type safety to wasm linker code
* generate better WebAssembly code
* fully participate in incremental compilation
* do as much work as possible outside of flush(), while continuing to do
linker garbage collection.
* avoid unnecessary heap allocations
* avoid unnecessary indirect function calls
In order to accomplish these goals, this removes the ZigObject
abstraction, as well as Symbol and Atom. These abstractions resulted
in overly generic code, unnecessary work, and needless complications
that simply go away with a better in-memory data model and by emitting
more things lazily.
For example, wasm codegen now emits MIR, which is then lowered to wasm
code during linking with final function indexes and so on, or with
relocations when outputting an object file. Previously, relocations were
always emitted, even though they are entirely unnecessary when emitting
an executable, and they forced all function calls to use the maximum-size
LEB encoding.
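A sketch of that size cost: a `call` to function index 3 as the padded 5-byte
ULEB128 a relocation needs, versus the 1-byte form possible once the final
index is known.

```zig
// wasm `call` opcode followed by the callee's function index as ULEB128.
const call_opcode: u8 = 0x10;
// Padded to the maximum width so a relocation can patch it in place later.
const relocatable_call = [_]u8{ call_opcode, 0x83, 0x80, 0x80, 0x80, 0x00 }; // call 3
// Minimal encoding, possible when the final index is known at emit time.
const direct_call = [_]u8{ call_opcode, 0x03 }; // call 3
```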
This branch introduces the concept of the "prelink" phase which occurs
after all object files have been parsed, but before any Zcu updates are
sent to the linker. This allows the linker to fully parse all objects
into a compact memory model, which is guaranteed to be complete when Zcu
code is generated.
This commit is not a complete implementation of all these goals; it is
not even passing semantic analysis.