`Sema.explainWhyValueContainsReferenceToComptimeVar` (concise name!)
adds notes to an error explaining how to get from a given `Value` to a
pointer to some `comptime var` (or a comptime field). Previously, this
error could be very opaque whenever it wasn't obvious where the
comptime var pointer came from, particularly for type captures. Now, the
error notes explain this to the user.
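
For reference, a minimal hypothetical example of the kind of code that
triggers this error (not taken from this change or the test suite): the
constant below ends up containing a pointer to a `comptime var`.

```zig
// Hypothetical example, not from this commit or the test suite.
const Wrapper = struct { ptr: *i32 };

// The initializer is evaluated at comptime, so `x` is a comptime var;
// the resulting constant contains `&x`, a reference to that var, which
// is exactly the situation the new error notes walk through.
const bad: Wrapper = blk: {
    var x: i32 = 123;
    x += 1;
    break :blk .{ .ptr = &x };
};

comptime {
    _ = bad; // force analysis of `bad` so the error is reported
}
```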
Before the prior commit, the maximum comptime recursion depth on my
system was 4062. After the prior commit, it decreased to 2854. This
commit increases the compiler's stack size enough so that the recursion
depth limit is no less than it was before the `Sema.analyzeCall`
rewrite, preventing this from being a breaking change. Specifically,
this stack size increases my observed maximum comptime recursion depth
to 4105.
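
For context, a hypothetical program that exercises this limit is just a
deeply nested chain of comptime calls (this is not the program used for
the numbers above):

```zig
// Hypothetical stress test; depth values are only illustrative.
fn depth(comptime n: u32) u32 {
    if (n == 0) return 0;
    return 1 + depth(n - 1); // each step is another nested comptime call
}

comptime {
    @setEvalBranchQuota(100_000);
    _ = depth(4000); // in the rough vicinity of the observed limit
}
```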
This rewrite improves some error messages, hugely simplifies the logic,
and fixes several bugs. One of these fixes is technically a new rule
which Andrew and I agreed on: if a parameter has a comptime-only type
but is not declared `comptime`, then the corresponding call argument
should not be *evaluated* at comptime; only resolved. Implementing this
required changing how function types work a little, which in turn
required allowing a new kind of function coercion for some generic use
cases: function coercions are now allowed to implicitly *remove*
`comptime` annotations from parameters with comptime-only types. This is
okay because removing the annotation affects only the call site.
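
As a rough illustration of the new rule (a hypothetical sketch, not code
from this commit): `T` below has the comptime-only type `type` but is
not declared `comptime`, so the call argument is only resolved to a
comptime-known value rather than evaluated in a comptime context.

```zig
// Illustrative sketch only.
fn Wrap(T: type) type {
    return struct { value: T };
}

test Wrap {
    // `u32` must still resolve to a comptime-known value, but the
    // argument expression is not forced into comptime evaluation.
    const W = Wrap(u32);
    const w: W = .{ .value = 123 };
    try @import("std").testing.expect(w.value == 123);
}
```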
Resolves: #22262
The relocation range issues will happen eventually as we add more code to the
standard library and test suites, so we may as well just deal with this now.
@MasonRemaley ran into this in #20271, for example.
* MSDN documentation page covering what resource IDs manifests should have:
https://learn.microsoft.com/en-us/windows/win32/sbscs/using-side-by-side-assemblies-as-a-resource
* This change ensures shared libraries that embed win32 manifests use the
proper ID of 2 instead of 1, which is only allowed for .exes. If the manifest
uses the wrong ID, it will not be found and is essentially ignored.
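
For reference, the relevant resource IDs, shown here as a sketch; the
constant names come from the Win32 SDK headers (winuser.h), not from the
compiler:

```zig
// Win32 manifest resource IDs (values from winuser.h), for reference only.
const CREATEPROCESS_MANIFEST_RESOURCE_ID = 1; // valid only for executables
const ISOLATIONAWARE_MANIFEST_RESOURCE_ID = 2; // what DLLs / shared libraries must use
```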
Rather than `Zcu.BuiltinDecl.Memoized` being a struct with fields, it
can instead just be an array, indexed by the enum. This allows runtime
indexing, avoiding a few now-unnecessary `inline` switch cases.
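
A minimal sketch of the pattern, with illustrative types rather than the
compiler's actual declarations:

```zig
// Illustrative only; the real enum and value types live in Zcu.
const Decl = enum { panic_fn, panic_messages, va_list };

// One slot per enum tag; plain indexing replaces the `inline` switch cases.
var memoized = [_]?u32{ null, null, null };

fn get(decl: Decl) ?u32 {
    return memoized[@intFromEnum(decl)];
}

fn set(decl: Decl, value: u32) void {
    memoized[@intFromEnum(decl)] = value;
}
```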
This commit reworks how values like the panic handler function are
memoized during a compiler invocation. Previously, the value was
resolved by whichever analysis requested it first, and cached on `Zcu`.
This is problematic for incremental compilation, as after the initial
resolution, no dependencies are marked by users of this memoized state.
This is arguably acceptable for `std.builtin`, but it's definitely not
acceptable for the panic handler/messages, because those can be set by
the user (`std.builtin.Panic` checks `@import("root").Panic`).
So, here we introduce a new kind of `AnalUnit`, called `memoized_state`.
There are 3 such units:
* `.{ .memoized_state = .va_list }` resolves the type `std.builtin.VaList`
* `.{ .memoized_state = .panic }` resolves `std.Panic`
* `.{ .memoized_state = .main }` resolves everything else we want
These units essentially "bundle" the resolution of their corresponding
declarations, storing the results into fields on `Zcu`. This way, when,
for instance, a function wants to call the panic handler, it simply runs
`ensureMemoizedStateResolved`, registering one dependency, and pulls the
values from the `Zcu`. This "bundling" minimizes dependency edges. The 3
units are separated to allow them to act independently: for instance,
the panic handler can use `std.builtin.Type` without triggering a
dependency loop.
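
A simplified sketch of the shape this takes (names and payloads are
illustrative, not the actual compiler definitions):

```zig
// Simplified sketch; the real `AnalUnit` has more kinds and richer payloads.
const MemoizedStateStage = enum { main, panic, va_list };

const AnalUnit = union(enum) {
    // ...existing kinds of analysis unit...
    memoized_state: MemoizedStateStage,
};

// e.g. `.{ .memoized_state = .panic }` is the unit that resolves the
// panic-related declarations and stores the results on `Zcu`.
```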
`Zcu.PerThread.ensureTypeUpToDate` is set up in such a way that it only
returns the updated type the first time it is called. In general, that's
okay; however, the exception is that we want the function to keep
returning `error.AnalysisFail` when the type has been lost or its
number of captures has changed.
Therefore, the check for this case now happens before the up-to-date
success return.
For simplicity, a change in the number of captures is now handled by
intentionally losing the instruction in `Zcu.mapOldZirToNew`, since
there is nothing to gain from tracking a type when old instances of it
can never be reused.
The old lowering was kind of neat, but it unintentionally allowed the
syntax `for (123) |_| { ... }`, and there wasn't really a way to fix
that. So, instead, we include both the start and the end of the range in
the `for_len` instruction (each operand to `for` now has *two* entries
in this multi-op instruction). This slightly increases the size of ZIR
for `for` loops whose operands are predominantly indexables, but the
difference is small enough that it's not worth complicating ZIR to try
to fix it.
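
At the language level, the distinction looks like this (hypothetical
test, not part of this change):

```zig
test "counter ranges vs. plain integers" {
    var sum: usize = 0;
    for (0..3) |i| sum += i; // a range: both bounds are now encoded in `for_len`
    try @import("std").testing.expect(sum == 3);

    // for (123) |_| {}
    // ^ accepted by the old lowering by accident; `123` is neither a
    //   range nor an indexable, so it is now rejected.
}
```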
Previously, logic in `Compilation.getAllErrorsAlloc` was corrupting the
`failed_analysis` hashmap. This meant that on updates after the initial
update, attempts to remove entries from this map (because the `AnalUnit`
in question is being re-analyzed) silently failed. This resulted in
compile errors from earlier updates wrongly getting "stuck", i.e. never
being removed.
This commit also adds a few log calls which helped me to find this bug.