Both of these instructions were previously under a special case in
`rvalue` which resulted in every reference to such an instruction adding
a new `ref` instruction. This had the effect that, for instance,
`&a != &a` could hold for parameters. Deduplicating these `ref`
instructions was problematic for a different reason in each case.
For `alloc_inferred`, the problem was that it's not valid to `ref` the
alloc until the allocation has been resolved (`resolve_inferred_alloc`),
but `AstGen.appendBodyWithFixups` would place the `ref` directly after
the `alloc_inferred`. This is solved by bringing
`resolve_inferred_alloc` in line with `make_ptr_const` by having it
*return* the final pointer, rather than modifying `sema.inst_map` of the
original `alloc_inferred`. That way, the `ref` refers to the
`resolve_inferred_alloc` instruction, so is placed immediately after it,
avoiding this issue.
For `param`, the problem is a bit trickier: `param` instructions live in
a body which must contain only `param` instructions, then a
`func{,_inferred,_fancy}`, then a `break_inline`. Moreover, `param`
instructions may be referenced not only by the function body, but also
by other parameters, the return type expression, etc. Each of these
bodies requires separate `ref` instructions. This is solved by pulling
entries out of `ref_table` after evaluating each component of the
function declaration, and appending the refs later on when actually
putting the bodies together. This gives rise to another issue: if you
write `fn f(x: T) @TypeOf(x.foo())`, then since `x.foo()` takes a
reference to `x`, that `ref` instruction now lands in a comptime context
(outside of the `@TypeOf` ZIR body), which would emit a compile error. This is
solved by loosening the rules around `ref` instructions; because they
are not side-effecting, it is okay to allow `ref` of runtime values at
comptime, resulting in a runtime-known value in a comptime scope. We
already apply this mechanism in some cases; for instance, it's why
`runtime_array.len` works in a `comptime` context. In future, we will
want to give similar treatment to many operations in Sema: in general,
it's fine to apply runtime operations at comptime provided they don't
have side effects!
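To make the two situations concrete, here is a small illustrative sketch
(`T` is a stand-in type invented for the example):

```zig
const T = struct {
    fn foo(self: T) u8 {
        _ = self;
        return 0;
    }
};

// With `ref` instructions deduplicated, both `&x` below refer to the same
// instruction, so the comparison behaves as expected for parameters.
fn addrStable(x: u32) bool {
    return &x == &x;
}

// Evaluating the return type takes a reference to `x`; that `ref` sits in the
// comptime context outside the `@TypeOf` body, and is now permitted because
// `ref` has no side effects.
fn f(x: T) @TypeOf(x.foo()) {
    return x.foo();
}
```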
Resolves: #22140
The main change here is to partition tracked instructions found within a
declaration. It's very unlikely that, for instance, a `struct { ... }`
type declaration was intentionally turned into a reification or an
anonymous initialization, so it makes sense to track things in a few
different arrays.
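As a rough, hypothetical sketch of the idea (the type and field names below
are invented for illustration and are not the ones used in the compiler):

```zig
const Zir = @import("std").zig.Zir;

/// Hypothetical shape of the partitioned scan results; real names differ.
const Trackable = struct {
    /// Explicit type declarations: `struct_decl`, `union_decl`, `enum_decl`, ...
    explicit_types: []const Zir.Inst.Index,
    /// Function declarations: `func`, `func_inferred`, `func_fancy`.
    functions: []const Zir.Inst.Index,
    /// Other trackable instructions, e.g. `reify`.
    other: []const Zir.Inst.Index,
};
```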
In particular, this fixes an issue where a `func` instruction could
wrongly be mapped to something else if the types of function parameters
changed. This would cause huge problems further down the pipeline; we
expect that if a `declaration` is tracked, and it previously contained a
`func`/`func_inferred`/`func_fancy`, then this instruction is either
tracked to another `func`/`func_inferred`/`func_fancy` instruction, or
is lost.
Also, this commit takes the opportunity to rename the functions actually
doing this logic. `Zir.findDecls` was a name that might have made sense
at some point, but nowadays, it's definitely not finding declarations,
and it's not *exclusively* finding type declarations. Instead, the point
is to find instructions which we want to track; hence the new name,
`Zir.findTrackable`.
Lastly, a nice side effect of partitioning the output of `findTrackable`
is that `Zir.declIterator` no longer needs to accept input instructions
which aren't type declarations (e.g. `reify`, `func`).
This commit enhances AstGen to introduce a form of error resilience
which allows valid ZIR to be emitted even when AstGen errors occur.
When a non-fatal AstGen error (e.g. `appendErrorNode`) occurs, ZIR
generation is not affected; the error is added to `astgen.errors` and
ultimately to the errors stored in `extra`, but that doesn't stop us
getting valid ZIR. Fatal AstGen errors (e.g. `failNode`) are a bit
trickier. These errors return `error.AnalysisFail`, which is propagated
up the stack. In theory, any parent expression can catch this error and
handle it, continuing ZIR generation whilst throwing away whatever was
lost. For now, we only do this in one place: when creating declarations.
If a call to `fnDecl`, `comptimeDecl`, `globalVarDecl`, etc., returns
`error.AnalysisFail`, the `declaration` instruction is still created,
but its body simply contains the new `extended(astgen_error())`
instruction, which instructs Sema to terminate semantic analysis with a
transitive error. This means that a fatal AstGen error causes the
innermost declaration containing the error to fail, but the rest of the
file remains intact.
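For example (a hypothetical file, not taken from the commit):

```zig
// `bad` hits a fatal AstGen error (an undeclared identifier), so its
// `declaration` body is lowered to `extended(astgen_error())` and Sema reports
// a transitive failure for it.
fn bad() void {
    _ = undeclared_identifier;
}

// `good` is unaffected: the file still produces valid ZIR, and this function
// is analyzed as usual.
fn good() u32 {
    return 123;
}
```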
If a source file contains parse errors, or an `error.AnalysisFail`
happens when lowering the top-level struct (e.g. there is an error in
one of its fields, or a name has multiple declarations), then lowering
for the entire file fails. Alongside the existing `Zir.hasCompileErrors`
query, this commit introduces `Zir.loweringFailed`, which returns `true`
only in this case.
The end result here is that files with AstGen failures will almost
always still emit valid ZIR, and hence can undergo semantic analysis on
the parts of the file which are (from AstGen's perspective) valid. This
is a noteworthy improvement to UX, but the main motivation here is
actually incremental compilation. Previously, AstGen failures caused a
lot of semantic analysis work to be thrown out: every `AnalUnit` in the
file required re-analysis, both to trigger the necessary transitive
failures and to remove stored compile errors which would no longer make
sense (a fresh compilation of the code would not emit those errors,
because the units they applied to would fail sooner due to referencing a
failed file). Now, this case only applies when a file has severe
top-level errors, which is far less common than something like an unused
variable.
Lastly, this commit changes a few errors in `AstGen` to become fatal
when they were previously non-fatal and vice versa. If there is still a
reasonable way to continue AstGen and lower to ZIR after an error, it is
non-fatal; otherwise, it is fatal. For instance, `comptime const`, while
redundant syntax, has a clear meaning we can lower; on the other hand,
using an undeclared identifier has no sane lowering, so it must trigger a
fatal error.
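For instance, hypothetically (the fatal case of an undeclared identifier was
sketched above):

```zig
// Non-fatal: `comptime const` is redundant syntax, but it has a clear meaning,
// so the error is recorded and the body is still lowered to ZIR.
fn redundant() u32 {
    comptime const x = 123;
    return x;
}
```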
Previously, stepping from the single statement within the loop would
always exit the loop, because all of the code unrolled from the loop was
associated with the same line and was therefore treated by the debugger
as a single line.
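A hypothetical reproduction, assuming an `inline` loop is what gets unrolled:

```zig
fn total() u32 {
    var sum: u32 = 0;
    // Every unrolled copy of `sum += v` used to share the same line
    // information, so stepping from it appeared to exit the loop immediately
    // rather than stopping in each unrolled iteration.
    inline for (.{ 1, 2, 3 }) |v| {
        sum += v;
    }
    return sum;
}
```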
This is necessary since isGnuLibC() returns true for Hurd, meaning we need to
be able to represent a glibc version for it.
Also add an Os.TaggedVersionRange.gnuLibCVersion() convenience function.
The old isARM() function was a portability trap. With the name it had, it seemed
like the obviously correct function to use, but it didn't include Thumb. In the
vast majority of cases where someone wants to ask "is the target Arm?", Thumb
*should* be included.
There are exactly 3 cases in the codebase where we do actually need to exclude
Thumb, although one of those is in Aro and mirrors a check in Clang that is
itself likely a bug. These rare cases can just add an extra isThumb() check.
Once we upgrade to LLVM 20, these should be lowered verbatim rather than
simply to musl. Similarly, the special case in llvmMachineAbi() should go away.
This commit reworks how anonymous struct literals and tuples work.
Previously, an untyped anonymous struct literal
(e.g. `const x = .{ .a = 123 }`) was given an "anonymous struct type",
which is a special kind of struct which coerces using structural
equivalence. This mechanism was a holdover from before we used
RLS / result types as the primary mechanism of type inference. This
commit changes the language so that the type assigned here is a "normal"
struct type. It uses a form of equivalence based on the AST node and the
type's structure, much like a reified (`@Type`) type.
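For example, revisiting the literal above:

```zig
// `x` now has a "normal" struct type whose identity is based on this AST node
// and the literal's structure, much like a reified (`@Type`) type, rather than
// a special "anonymous struct type" that coerces by structural equivalence.
const x = .{ .a = 123 };
```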
Additionally, tuples have been simplified. The distinction between
"simple" and "complex" tuple types is eliminated. All tuples, even those
explicitly declared using `struct { ... }` syntax, use structural
equivalence, and do not undergo staged type resolution. Tuples are very
restricted: they cannot have non-`auto` layouts, cannot have aligned
fields, and cannot have default values with the exception of `comptime`
fields. Tuples currently do not have optimized layout, but this can be
changed in the future.
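A short sketch of the tuple rules described above:

```zig
const std = @import("std");

// Tuples declared with `struct { ... }` syntax also use structural
// equivalence, so `A` and `B` name the same type.
const A = struct { u32, bool };
const B = struct { u32, bool };

comptime {
    std.debug.assert(A == B);
}

// Not allowed on tuples: non-`auto` layouts, aligned fields, and default
// values (other than on `comptime` fields).
```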
This change simplifies the language, and fixes some problematic
coercions through pointers which led to unintuitive behavior.
Resolves: #16865
Because we exclude Abi.none from the list of ABIs to be tested, Abi.gnu, which
happens to be the first in the list, always gets picked for hosts
where the dynamic linker path does not depend on the ABI component of the
triple. Such hosts include all the BSDs, Haiku, Serenity, Solaris, etc.
To fix this, use DynamicLinker.kind() to determine whether this whole exercise
even makes sense. If it doesn't, as is the case on every OS other than Linux and
Hurd, we'll just fall back to Abi.default() which will try to pick a sensible
default based on the arch and OS components. This detection logic still has
plenty of room for improvement, but is at least a notable step up from
confusingly detecting Abi.gnu ~everywhere.
Closes #9089.
hasDynamicLinker() was just kind of lying in the case of Darwin platforms for
the benefit of std.zig.system.detectAbiAndDynamicLinker(). A better name would
have been hasElfDynamicLinker() or something. It also got the answer wrong for a
bunch of platforms that don't actually use ELF. Anyway, this was clearly the
wrong layer to do this at, so remove this function and instead use
DynamicLinker.kind() + an isDarwin() check in detectAbiAndDynamicLinker().
This commit finishes implementing #21209 by removing the
`@setAlignStack` builtin in favour of `CallingConvention` payloads. The
x86_64 backend is updated to use the stack alignment given in the
calling convention (the LLVM backend was already updated in a previous
commit).
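For example (a hedged sketch; the payload tag and field names are assumed from
`std.builtin.CallingConvention`, so check the actual definitions):

```zig
// Before (no longer compiles):
//
//     fn foo() void {
//         @setAlignStack(16);
//         // ...
//     }
//
// After: request the stack alignment through the calling convention payload.
fn foo() callconv(.{ .x86_64_sysv = .{ .incoming_stack_alignment = 16 } }) void {
    // ...
}
```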
Resolves: #21209
These are really answering questions about the Zig compiler's capacity to
provide a libc/libc++ implementation. As such, std.zig.target seems like a more
fitting place for these.
Embrace the Path abstraction, doing more operations based on directory
handles rather than absolute file paths. Most of the diff noise here
comes from this change.
Fix sorting of crtbegin/crtend atoms. Previously, the sorting logic looked for
those strings in every path component.
Make the C runtime path detection partially a pure function, and move
some logic to glibc.zig where it belongs.