In a compiler built with debug extensions, pass `--debug-incremental` to
spawn the "incremental debug server". This is a TCP server exposing a
REPL which allows querying a bunch of compiler state, some of which is
stored only when that flag is passed. Eventually, this will probably
move into `std.zig.Server`/`std.zig.Client`, but this is easier to work
with right now. The easiest way to interact with the server is `telnet`.
* When storing a zero-bit type, we should short-circuit almost
immediately. Zero-bit stores do not need to do any work.
* The bit size computation for arrays is incorrect; the `abiSize` will
already be appropriately aligned, but the logic to do so here
incorrectly assumes that zero-bit types have an alignment of 0. They
don't; their alignment is 1 (illustrated below).
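To illustrate the second point, a standalone check (not part of this
change) of the rule that zero-bit types occupy zero bits yet have
alignment 1:

```zig
const std = @import("std");

comptime {
    // `void` and an empty struct are both zero-bit types.
    std.debug.assert(@sizeOf(void) == 0);
    std.debug.assert(@alignOf(void) == 1);
    std.debug.assert(@sizeOf(struct {}) == 0);
    std.debug.assert(@alignOf(struct {}) == 1);
}
```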
Resolves: #21202
Resolves: #21508
Resolves: #23307
This commit adds the following distinct integer types to std.zig.Ast:
- OptionalTokenIndex
- TokenOffset
- OptionalTokenOffset
- Node.OptionalIndex
- Node.Offset
- Node.OptionalOffset
The `Node.Index` type has also been converted to a distinct type while
`TokenIndex` remains unchanged.
`Ast.Node.Data` has also been changed to an (untagged) union to provide
safety checks.
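For a sense of the pattern, an illustrative sketch (not the exact
std.zig.Ast definitions): the optional variants are non-exhaustive `u32`
enums using `none` as a sentinel:

```zig
const std = @import("std");

pub const TokenIndex = u32;

/// Illustrative sketch only; see std.zig.Ast for the real definition.
pub const OptionalTokenIndex = enum(u32) {
    none = std.math.maxInt(u32),
    _,

    pub fn unwrap(oti: OptionalTokenIndex) ?TokenIndex {
        return if (oti == .none) null else @intFromEnum(oti);
    }

    pub fn wrap(ti: TokenIndex) OptionalTokenIndex {
        return @enumFromInt(ti);
    }
};
```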
This is all of the expected 0.14.0 progress on #21530, which can now be
postponed once this commit is merged.
This required rewriting the (un)wrap operations since the original
implementations were extremely buggy.
Also adds an easy way to retrigger Sema OPV bugs so that I don't have to
keep updating #22419 all the time.
Instead, `source`, `tree`, and `zir` should all be optional. This is
precisely what we're actually trying to model, and `File` isn't
optimized for memory consumption or serializability anyway, so it's fine
to spend a couple of extra bytes on real optionals.
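In other words, the shape becomes roughly this (abbreviated sketch; the
real `Zcu.File` has many more fields):

```zig
const std = @import("std");
const Ast = std.zig.Ast;
const Zir = std.zig.Zir;

pub const File = struct {
    /// Null until the source has been loaded from disk.
    source: ?[:0]const u8,
    /// Null until the source has been parsed.
    tree: ?Ast,
    /// Null until AstGen has run.
    zir: ?Zir,
};
```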
This check isn't valid in such cases, because the source and destination
pointers both refer to zero bits of memory, meaning they effectively
never alias.
Resolves: #21655
* The langspec definition of `@memcpy` has been changed so that the
source and destination element types must be in-memory coercible,
allowing all such calls to be raw copying operations, not actually
applying any coercions.
* Implement aliasing check for comptime `@memcpy`; a compile error will
now be emitted if the arguments alias (see the example after this list).
* Implement more efficient comptime `@memcpy` by loading and storing a
whole array at once, similar to how `@memset` is implemented.
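For example, the new aliasing check rejects hypothetical code like this:

```zig
comptime {
    var buf: [4]u8 = .{ 1, 2, 3, 4 };
    // The two regions overlap at buf[1], so this is now a compile error.
    @memcpy(buf[0..2], buf[1..3]);
}
```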
The original motivation here was to fix regressions caused by #22414.
However, while working on this, I ended up discussing a language
simplification with Andrew, which changes things a little from how they
worked before #22414.
The main user-facing change here is that any reference to a prior
function parameter, even one that is potentially comptime-known at the
usage site or never analyzed at all, now makes a function generic. This applies
even if the parameter being referenced is not a `comptime` parameter,
since it could still be populated when performing an inline call. This
is a breaking language change.
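For example, this (illustrative) function is now generic, even though
`x` is not a `comptime` parameter:

```zig
fn foo(x: u8, y: @TypeOf(x)) void {
    // The type of `y` references the prior parameter `x`, so `foo` is
    // generic under the new rule.
    _ = x;
    _ = y;
}
```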
The detection of this is done in AstGen; when evaluating a parameter
type or return type, we track whether it referenced any prior parameter,
and if so, we mark this type as being "generic" in ZIR. This will cause
Sema to not evaluate it until the time of instantiation or inline call.
A lovely consequence of this from an implementation perspective is that
it eliminates the need for most of the "generic poison" system. In
particular, `error.GenericPoison` is now completely unnecessary, because
we identify generic expressions earlier in the pipeline; this simplifies
the compiler and avoids redundant work. This also entirely eliminates
the concept of the "generic poison value". The only remnant of this
system is the "generic poison type" (`Type.generic_poison` and
`InternPool.Index.generic_poison_type`). This type is used in two
places:
* During semantic analysis, to represent an unknown result type.
* When storing generic function types, to represent a generic parameter/return type.
It's possible that these use cases should instead use `.none`, but I
leave that investigation to a future adventurer.
One last thing. Prior to #22414, inline calls were a little inefficient,
because they re-evaluated even non-generic parameter types whenever they
were called. Changing this behavior is what ultimately led to #22538.
Well, because the new logic will mark a type expression as generic if
there is any chance its resolved type could differ in an inline call,
this redundant work is unnecessary! So, this is another way in which the
new design reduces redundant work and complexity.
Resolves: #22494
Resolves: #22532
Resolves: #22538
We can still often determine a comptime result based on the type, even
if the pointer is runtime-known.
Also, we previously used load -> is non null instead of AIR
`is_non_null_ptr` if the pointer is comptime-known, but that's a bad
heuristic. Instead, we should check for the pointer to be
comptime-known, *and* for the load to be comptime-known, and only in
that case should we call `Sema.analyzeIsNonNull`.
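Sketched out, with hypothetical helper names standing in for the real
Sema calls:

```zig
// Hypothetical sketch, not the actual Sema code.
if (try sema.resolveComptimePtr(ptr)) |ptr_val| {
    if (try sema.comptimeLoad(block, src, ptr_val)) |loaded| {
        // Both the pointer and the load are comptime-known.
        return sema.analyzeIsNonNull(block, src, loaded);
    }
}
// Otherwise emit `is_non_null_ptr`; the result may still be folded to a
// comptime constant based on the operand's type alone.
return block.addUnOp(.is_non_null_ptr, ptr);
```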
Resolves: #22556
This was done by regex substitution with `sed`. I then manually went
over the entire diff and fixed any incorrect changes.
This diff also changes a lot of `callconv(.C)` to `callconv(.c)`, since
my regex happened to also trigger here. I opted to leave these changes
in, since they *are* a correct migration, even if they're not the one I
was trying to do!
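For example, the incidental part of the migration looks like this
(illustrative):

```zig
// Previously spelled callconv(.C); both denote the C calling convention.
export fn add(a: i32, b: i32) callconv(.c) i32 {
    return a + b;
}
```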
`Type.hasWellDefinedLayout` was in disagreement with pointer loading
logic about auto-layout structs with zero fields, `struct {}`. For
consistency, these types should not have a well-defined layout.
This is technically a breaking change.
The `Cau` abstraction originated from noting that one of the two primary
roles of the legacy `Decl` type was to be the subject of comptime
semantic analysis. However, the data stored in `Cau` has always had some
level of redundancy. While preparing for #131, I went to remove that
redundancy, and realised that `Cau` now had exactly one field: `owner`.
This led me to conclude that `Cau` is, in fact, an unnecessary level of
abstraction over what are in reality *fundamentally different* kinds of
analysis unit (`AnalUnit`). Types, `Nav` vals, and `comptime`
declarations are all analyzed in different ways, and trying to treat
them as the same thing is counterproductive!
So, these 3 cases are now different alternatives in `AnalUnit`. To avoid
stealing bits from `InternPool`-based IDs, which are already a little
starved for bits due to the sharding data structures, `AnalUnit` is
expanded to 64 bits (30 of which are currently unused). This doesn't
impact memory usage too much by default, because we don't store
`AnalUnit`s all too often; however, we do store them a lot under
`-fincremental`, so a non-trivial bump to peak RSS can be observed
there. This will be improved in the future when I make
`InternPool.DepEntry` more memory-efficient.
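Roughly, the new shape is (illustrative sketch, not the exact
definition):

```zig
/// A 32-bit kind plus a 32-bit payload id fills 64 bits; the kind only
/// needs 2 bits today, hence the 30 currently-unused bits.
pub const AnalUnit = packed struct(u64) {
    kind: enum(u32) {
        @"comptime",
        nav_val,
        type,
        // Function bodies were already their own kind of unit.
        func,
    },
    id: u32,
};
```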
`Zcu.PerThread.ensureCauAnalyzed` is split into 3 functions, one for each of
the 3 new types of `AnalUnit`. The new logic is much easier to
understand, because it avoids conflating the logic of these
fundamentally different cases.
This commit reworks how anonymous struct literals and tuples work.
Previously, an untyped anonymous struct literal
(e.g. `const x = .{ .a = 123 }`) was given an "anonymous struct type",
which is a special kind of struct which coerces using structural
equivalence. This mechanism was a holdover from before we used
RLS / result types as the primary mechanism of type inference. This
commit changes the language so that the type assigned here is a "normal"
struct type. It uses a form of equivalence based on the AST node and the
type's structure, much like a reified (`@Type`) type.
Additionally, tuples have been simplified. The distinction between
"simple" and "complex" tuple types is eliminated. All tuples, even those
explicitly declared using `struct { ... }` syntax, use structural
equivalence, and do not undergo staged type resolution. Tuples are very
restricted: they cannot have non-`auto` layouts, cannot have aligned
fields, and cannot have default values with the exception of `comptime`
fields. Tuples currently do not have optimized layout, but this can be
changed in the future.
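For example (illustrative):

```zig
const std = @import("std");

test "tuples use structural equivalence" {
    const A = struct { u32, bool };
    const B = struct { u32, bool };
    // Two separately written tuple types are now the same type.
    comptime std.debug.assert(A == B);
}
```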
This change simplifies the language, and fixes some problematic
coercions through pointers which led to unintuitive behavior.
Resolves: #16865
`minFunctionAlignment()` is something we can know ahead of time for any given
target because it's a matter of ABI. However, `defaultFunctionAlignment()` is a
matter of optimization and every backend can do it differently depending on any
number of factors. For example, LLVM will base the choice on the CPU model in
its aarch64 backend. So just don't use this value in the frontend.
The old `CallingConvention` type is replaced with the new
`NewCallingConvention`. References to `NewCallingConvention` in the
compiler are updated accordingly. In addition, a few parts of the
standard library are updated to use the new type correctly.
This commit begins implementing accepted proposal #21209 by making
`std.builtin.CallingConvention` a tagged union.
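An abbreviated, illustrative sketch of the new shape (the real union
enumerates many target-specific conventions and options):

```zig
pub const CallingConvention = union(enum) {
    auto,
    naked,
    x86_64_sysv: CommonOptions,
    x86_64_win: CommonOptions,
    // ...

    pub const CommonOptions = struct {
        /// Stack alignment assumed on function entry; `null` means the
        /// target default.
        incoming_stack_alignment: ?u64 = null,
    };
};
```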
The stage1 dance here is a little convoluted. This commit introduces the
new type as `NewCallingConvention`, keeping the old `CallingConvention`
around. The compiler uses `std.builtin.NewCallingConvention`
exclusively, but when fetching the type from `std` when running the
compiler (e.g. with `getBuiltinType`), the name `CallingConvention` is
used. This allows a prior build of Zig to be used to build this commit.
The next commit will update `zig1.wasm`, and then the compiler and
standard library can be updated to completely replace
`CallingConvention` with `NewCallingConvention`.
The second half of #21209 is to remove `@setAlignStack`, which will be
implemented in another commit after updating `zig1.wasm`.