Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
Opaque and `noreturn` make sense since they don't represent real
values, but `null` and `undefined` are perfectly normal
comptime-only values.
Closes #16088
Anecdote 1: The generic version is way more popular than the non-generic
one in the Zig codebase:
git grep -w alignForward | wc -l
56
git grep -w alignForwardGeneric | wc -l
149
git grep -w alignBackward | wc -l
6
git grep -w alignBackwardGeneric | wc -l
15
Anecdote 2: In my project (turbonss), which does a lot of arithmetic and
alignment, I exclusively use the Generic functions.
Anecdote 3: We used only the Generic versions in the Macho Man's linker
workshop.
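For reference, a minimal sketch of what call sites look like once only the
generic form remains, assuming the merged `std.mem.alignForward(T, addr, alignment)`
signature:

```zig
const std = @import("std");

test "one generic alignForward covers both old call styles" {
    // Call sites that previously used the non-generic, usize-only alignForward:
    try std.testing.expectEqual(@as(usize, 16), std.mem.alignForward(usize, 9, 8));
    // Call sites that needed another integer width and therefore used
    // alignForwardGeneric:
    try std.testing.expectEqual(@as(u64, 4096), std.mem.alignForward(u64, 1, 4096));
}
```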
Pointers into `string_bytes` are frequently invalidated whenever a string
is interned, so avoid creating them wherever possible. This is an
attempt to fix random CI failures.
All but 2 test cases now pass (tested on x86_64 Linux, native only). The
remaining two signify an issue requiring a larger refactor, which I will
do in a separate commit.
Notable changes:
* Fix uninitialized memory when allocating objects from free lists
* Implement TypedValue printing for pointers
* Fix some TypedValue printing logic
* Work around non-existence of InternPool.remove implementation
The main motivation for this commit is eliminating Decl.value_arena.
Everything else is dominoes.
Decl.name used to be stored in the GPA; now it is stored in InternPool.
It ended up being simpler to migrate other strings to be interned as
well, such as struct field names, union field names, and a few others.
This ended up requiring a big diff, sorry about that. But the changes
are pretty nice; we finally start to take advantage of InternPool's
existence.
global_error_set and error_name_list are simplified: they are now a
single ArrayHashMap(NullTerminatedString, void), and the index is the
error tag value.
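A self-contained sketch of the idea, using plain `u32` values as stand-ins for
interned `NullTerminatedString` names: the entry index doubles as the error
tag value, so no separate name list is needed.

```zig
const std = @import("std");

test "the map index is the error tag value" {
    const gpa = std.testing.allocator;

    // Stand-in for ArrayHashMap(NullTerminatedString, void); the keys here
    // are hypothetical interned-string indices.
    var global_error_set = std.AutoArrayHashMap(u32, void).init(gpa);
    defer global_error_set.deinit();

    const file_not_found: u32 = 100; // hypothetical interned name
    const out_of_memory: u32 = 200; // hypothetical interned name

    const a = try global_error_set.getOrPut(file_not_found);
    const b = try global_error_set.getOrPut(out_of_memory);

    // The entry index doubles as the error tag value...
    try std.testing.expectEqual(@as(usize, 0), a.index);
    try std.testing.expectEqual(@as(usize, 1), b.index);
    // ...and keys() plays the role the separate error_name_list used to play.
    try std.testing.expectEqual(out_of_memory, global_error_set.keys()[1]);
}
```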
Module.tmp_hack_arena is re-introduced (it was removed in
eeff407941560ce8eb5b737b2436dfa93cfd3a0c) in order to deal with
comptime_args, optimized_order, and struct and union fields. After
structs and unions get moved into InternPool properly, tmp_hack_arena
can be deleted again.
This is neither a type nor a value. Simplifies `addStrLit` as well as
the many places that switch on `InternPool.Key`.
This is a partial revert of bec29b9e498e08202679aa29a45dab2a06a69a1e.
This is a bit odd, because this value doesn't actually exist:
see #15909. This gets all the empty enum/union behavior tests passing.
Also adds an assertion to `Sema.analyzeBodyInner` which would have
helped figure out the issue here much more quickly.
Key.PtrType is now an extern struct so that hashing it can be done by
reinterpreting bytes directly. It also uses the same representation for
the `type_pointer` Tag encoding and the Key. Accessing pointer attributes
now requires packed struct access; however, many operations are now a
single u32 copy rather than copies of several independent fields.
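An illustrative sketch with a made-up field layout (the real Key.PtrType has
more attributes), showing the attributes packed into one `u32` inside an
extern struct that can be hashed as raw bytes:

```zig
const std = @import("std");

// Made-up field layout; only the shape matters here.
const PtrTypeSketch = extern struct {
    child: u32, // stand-in for the InternPool.Index of the element type
    flags: Flags,

    const Flags = packed struct(u32) {
        alignment: u16 = 0,
        is_const: bool = false,
        is_volatile: bool = false,
        is_allowzero: bool = false,
        size: u2 = 0,
        _padding: u11 = 0,
    };
};

test "attributes move as a single u32 and hash as raw bytes" {
    var ptr_type = PtrTypeSketch{ .child = 42, .flags = .{ .is_const = true } };
    // All attributes at once: one u32 load/store instead of several fields.
    const raw: u32 = @bitCast(ptr_type.flags);
    ptr_type.flags = @bitCast(raw);
    // Hashing the whole key by reinterpreting bytes directly:
    _ = std.hash.Wyhash.hash(0, std.mem.asBytes(&ptr_type));
    try std.testing.expect(ptr_type.flags.is_const);
}
```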
This commit moves the two most heavily used Key variants - pointer types
and pointer values - to a single-shot hash function that branches for
small keys instead of calling memcpy.
As a result, perf against merge-base went from 1.17x ± 0.04 slower to
1.12x ± 0.04 slower. After the pointer value hashing was changed, total
CPU instructions spent in memcpy went from 4.40% to 4.08%, and after
additionally improving pointer type hashing, it further decreased to
3.72%.
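For illustration, a sketch of the two hashing paths with a made-up small key
type, using `std.hash.Wyhash` as the hash (the exact hash function used in the
compiler may differ):

```zig
const std = @import("std");

// Made-up small key; the real ones are the pointer type and pointer value keys.
const SmallKey = extern struct { a: u32, b: u32 };

// One-shot hashing: a single call that branches on the small, fixed length
// instead of streaming bytes through an incremental hasher.
fn hashOneShot(seed: u64, key: SmallKey) u64 {
    return std.hash.Wyhash.hash(seed, std.mem.asBytes(&key));
}

// The previous, more general path for comparison.
fn hashIncremental(seed: u64, key: SmallKey) u64 {
    var hasher = std.hash.Wyhash.init(seed);
    std.hash.autoHash(&hasher, key);
    return hasher.final();
}

test "both paths compile and produce a hash" {
    const key = SmallKey{ .a = 1, .b = 2 };
    _ = hashOneShot(0, key);
    _ = hashIncremental(0, key);
}
```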
The Zig language allows the compiler to make this optimization
automatically. We should definitely make the compiler do that, and
revert this commit. However, that will not happen in this branch, and I
want to continue to explore achieving performance parity with
merge-base. So, this commit changes all InternPool parameters to be
passed by const pointer rather than by value.
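A minimal sketch of the signature change, using a small stand-in type in place
of the real InternPool:

```zig
const std = @import("std");

// Small stand-in for the real (much larger) InternPool struct.
const InternPoolSketch = struct {
    items: std.ArrayListUnmanaged(u64) = .{},
};

// Before: every call copies the whole struct.
fn lookupByValue(ip: InternPoolSketch, i: usize) u64 {
    return ip.items.items[i];
}

// After: every call copies one pointer. The language permits the compiler to
// do this transformation itself for by-value parameters; until it does, the
// signatures are changed by hand.
fn lookupByConstPtr(ip: *const InternPoolSketch, i: usize) u64 {
    return ip.items.items[i];
}

test "both signatures behave the same; only the copies differ" {
    const gpa = std.testing.allocator;
    var ip = InternPoolSketch{};
    defer ip.items.deinit(gpa);
    try ip.items.append(gpa, 123);
    try std.testing.expectEqual(@as(u64, 123), lookupByValue(ip, 0));
    try std.testing.expectEqual(@as(u64, 123), lookupByConstPtr(&ip, 0));
}
```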
I measured a 1.03x ± 0.03 speedup vs the previous commit compiling the
(set of passing) behavior tests. Against merge-base, this commit is
1.17x ± 0.04 slower, which is an improvement from the previous
measurement of 1.22x ± 0.02.
Related issue: #13510
Related issue: #14129
Related issue: #15688
This is a particularly hot function, so we operate directly on encodings
rather than taking the more straightforward approach of calling
`indexToKey`.
I measured this as 1.05 ± 0.04 times faster than the previous commit
with a ReleaseFast build against hello world (which includes std.debug
and formatted printing).
I also profiled the function and found that `zigTypeTag` went from being
a major caller of `indexToKey` to being completely insignificant, since
it is now so fast.
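The shape of that optimization, with made-up tags standing in for the real
item encodings:

```zig
const std = @import("std");

// Made-up tag and result enums, just to show the shape.
const Tag = enum { type_int_signed, type_int_unsigned, type_pointer, type_optional };
const ZigTypeTag = enum { int, pointer, optional };

fn zigTypeTagSketch(tags: []const Tag, index: u32) ZigTypeTag {
    // One array load plus a switch on the raw encoding; no Key is built.
    return switch (tags[index]) {
        .type_int_signed, .type_int_unsigned => .int,
        .type_pointer => .pointer,
        .type_optional => .optional,
    };
}

test "switch on the encoding directly" {
    const tags = [_]Tag{ .type_pointer, .type_int_signed };
    try std.testing.expectEqual(ZigTypeTag.pointer, zigTypeTagSketch(&tags, 0));
    try std.testing.expectEqual(ZigTypeTag.int, zigTypeTagSketch(&tags, 1));
}
```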
Previously, there were types and values for inferred allocations and a
lot of special-case handling. Now, instead, the special casing is
limited to AIR instructions for these use cases.
Instead of living in Value payloads, the data is now stored in AIR
instruction data and in the value slot of the `unresolved_inferred_allocs`
hash map, which previously had a `void` value type.
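A sketch with hypothetical stand-ins for the AIR instruction index and the
per-allocation bookkeeping, showing the map's value slot taking over from the
old Value payload:

```zig
const std = @import("std");

// Hypothetical stand-ins; the real bookkeeping fields differ.
const AirInstIndex = u32;
const InferredAlloc = struct {
    stored_count: u32 = 0, // e.g. how many stores have been seen so far
};

test "the map value slot replaces the old Value payload" {
    const gpa = std.testing.allocator;
    var unresolved_inferred_allocs =
        std.AutoArrayHashMap(AirInstIndex, InferredAlloc).init(gpa);
    defer unresolved_inferred_allocs.deinit();

    const inst: AirInstIndex = 7; // hypothetical inferred-alloc instruction
    const gop = try unresolved_inferred_allocs.getOrPut(inst);
    if (!gop.found_existing) gop.value_ptr.* = .{};
    gop.value_ptr.stored_count += 1;

    try std.testing.expectEqual(
        @as(u32, 1),
        unresolved_inferred_allocs.get(inst).?.stored_count,
    );
}
```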
One change worth noting in this commit is that `module.global_error_set`
is no longer kept strictly up-to-date. The previous code reserved
integer error values when dealing with error set types, but this is no
longer necessary: the integer values are not required for semantic
analysis unless `@errorToInt` or `@intToError` is used, so they may be
assigned lazily.
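A sketch of the lazy scheme, again using plain `u32` values as stand-ins for
interned error names: the integer value is simply the map index, assigned on
first request.

```zig
const std = @import("std");

fn lazyErrorTagValue(global_error_set: *std.AutoArrayHashMap(u32, void), name: u32) !u32 {
    // Nothing is reserved up front; the entry (and thus the integer value,
    // which is just the entry's index) appears on first use.
    const gop = try global_error_set.getOrPut(name);
    return @as(u32, @intCast(gop.index));
}

test "integer error values appear only when requested" {
    var set = std.AutoArrayHashMap(u32, void).init(std.testing.allocator);
    defer set.deinit();
    try std.testing.expectEqual(@as(u32, 0), try lazyErrorTagValue(&set, 111));
    try std.testing.expectEqual(@as(u32, 1), try lazyErrorTagValue(&set, 222));
    // Asking again returns the same value without growing the set.
    try std.testing.expectEqual(@as(u32, 0), try lazyErrorTagValue(&set, 111));
    try std.testing.expectEqual(@as(usize, 2), set.count());
}
```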
I also moved `anyframe` from being represented by `SimpleType` to being
represented by the `none` tag of `anyframe_type` because most code wants
to handle these two types together.
Anonymous structs and anonymous tuples can be stored via an
`only_possible_value` tag because their type encodings, by definition,
will have every value specified, which can be used to populate the
fields slice in `Key.Aggregate`.
Also fix `isTupleOrAnonStruct`.
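A sketch, with made-up types, of why this works: every field value is already
recorded in the type encoding, so the `Key.Aggregate` fields slice is filled
straight from it.

```zig
const std = @import("std");

// Made-up stand-in for an anonymous struct/tuple type's field entries.
const AnonField = struct { name: u32, ty: u32, value: u32 };

// Every field of the type carries its value, so the aggregate's fields slice
// is just those values copied out of the type encoding.
fn onlyPossibleValueFields(type_fields: []const AnonField, out: []u32) void {
    for (type_fields, out) |field, *slot| slot.* = field.value;
}

test "the value is read straight off the type" {
    const type_fields = [_]AnonField{
        .{ .name = 1, .ty = 10, .value = 100 },
        .{ .name = 2, .ty = 11, .value = 101 },
    };
    var out: [2]u32 = undefined;
    onlyPossibleValueFields(&type_fields, &out);
    try std.testing.expectEqual(@as(u32, 101), out[1]);
}
```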
I'm seeing a new assertion trip: the call to `enumTagFieldIndex` in the
implementation of `@Type` is attempting to query the field index of a
union's enum tag, but the type of the enum tag value provided is not the
same as the union's tag type. Most likely this is a problem with type
coercion, since values are now typed.
Another problem is that I added some hacks in std.builtin because I
didn't see any convenient way to access them from Sema. That should
definitely be cleaned up before merging this branch.
Unlike unions and structs, enums are actually *encoded* into the
InternPool directly, rather than using the SegmentedList trick. This
results in them being quite compact, and greatly improves the ergonomics
of using enum types throughout the compiler.
It did, however, require introducing a new concept to the InternPool: an
"incomplete" item - something that is added to gain a permanent Index,
but which is then mutated in place. This was necessary because enum tag
values and tag types may reference the namespaces created by the enum
itself, which required constructing the namespace and decl and calling
analyzeDecl on the decl, which required the decl value, which required
the enum type, which required an InternPool index to already be assigned
and meaningful.
The API for updating enums in place turned out to be quite slick and
efficient - the methods directly populate pre-allocated arrays and
return the information necessary to output the same compilation errors
as before.
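A self-contained toy model of the incomplete-item flow - none of these names
exist in the real InternPool - showing the reserve-then-mutate-in-place
pattern:

```zig
const std = @import("std");

const ToyPool = struct {
    items: std.ArrayListUnmanaged(Item) = .{},

    const Item = struct {
        field_count: u32,
        field_names: [4]u32 = undefined, // pre-allocated, filled in later
    };

    /// Reserve a permanent index whose contents are completed afterwards.
    fn addIncompleteEnum(pool: *ToyPool, gpa: std.mem.Allocator, fields_len: u32) !u32 {
        std.debug.assert(fields_len <= 4);
        const index: u32 = @intCast(pool.items.items.len);
        try pool.items.append(gpa, .{ .field_count = 0 });
        return index;
    }

    /// Populate the pre-allocated storage in place; return the field index so
    /// the caller can detect duplicates and emit the same errors as before.
    fn addFieldName(pool: *ToyPool, index: u32, name: u32) u32 {
        const item = &pool.items.items[index];
        const field_index = item.field_count;
        item.field_names[field_index] = name;
        item.field_count += 1;
        return field_index;
    }
};

test "reserve an index first, then mutate the item in place" {
    const gpa = std.testing.allocator;
    var pool = ToyPool{};
    defer pool.items.deinit(gpa);

    const enum_index = try pool.addIncompleteEnum(gpa, 2);
    // ...the namespace and decl can be created here, referring to enum_index,
    // and analyzing the decl can in turn refer back to the incomplete enum...
    _ = pool.addFieldName(enum_index, 100);
    _ = pool.addFieldName(enum_index, 101);
    try std.testing.expectEqual(@as(u32, 2), pool.items.items[enum_index].field_count);
}
```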
This introduces a string table into InternPool as well as a curious new
field called `maps`, which is an array list of array hash maps with void
keys and void values.
Some types such as enums, structs, and unions need to store mappings
from field names to field index, or value to field index. In such cases,
they will store the underlying field names and values directly, relying
on one of these maps, stored separately, to provide lookup.
This allows the InternPool to be serialized via simple array copies,
omitting all the maps, which are only used for optimizing lookup based
on field name or field value.
When the InternPool is deserialized, it can likewise be loaded via
simple array copies, and the field name maps can then be regenerated as
a post-processing step.
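A self-contained sketch of the void/void map trick, with plain `u32` values
standing in for interned field names: the map holds only hash-table metadata,
and an adapted context hashes and compares through the separately stored
names, which is what makes the array-copy serialization possible.

```zig
const std = @import("std");

// The map stores no keys and no values of its own.
const FieldMap = std.ArrayHashMapUnmanaged(void, void, std.array_hash_map.AutoContext(void), false);

// Adapter used for all accesses: it hashes and compares through the
// separately stored names, so the map itself never needs to own them.
const NameAdapter = struct {
    names: []const u32, // interned field names, in declaration order

    pub fn hash(_: NameAdapter, name: u32) u32 {
        return name *% 0x9E3779B9; // any cheap integer hash works for the sketch
    }
    pub fn eql(ctx: NameAdapter, name: u32, _: void, index: usize) bool {
        return ctx.names[index] == name;
    }
};

test "field-name lookup through a void/void map" {
    const gpa = std.testing.allocator;
    const names = [_]u32{ 10, 20, 30 };
    var map: FieldMap = .{};
    defer map.deinit(gpa);

    const adapter = NameAdapter{ .names = &names };
    for (names) |name| _ = try map.getOrPutAdapted(gpa, name, adapter);

    // Looking up a field name yields its field index; serializing the type
    // only needs the names array, and this map can be rebuilt afterwards.
    try std.testing.expectEqual(@as(?usize, 1), map.getIndexAdapted(@as(u32, 20), adapter));
}
```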
This commit provides two encodings for enums - one when the integer tag
type is explicitly provided and one when it is not. This is simpler than
the previous setup, which had three encodings.
Previous sizes:
* EnumSimple: 40 bytes + 16 bytes per field
* EnumNumbered: 80 bytes + 24 bytes per field
* EnumFull: 184 bytes + 24 bytes per field
Sizes after this commit:
* type_enum_explicit: 24 bytes + 8 bytes per field
* type_enum_auto: 16 bytes + 4 bytes per field
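One plausible way those numbers decompose, with invented field names (only the
sizes are meant to line up): a fixed part of six versus four u32s, plus a name
and a value per field versus a name only, since the auto-numbered tag value is
just the field index.

```zig
const std = @import("std");

// Field names here are invented; only the sizes are meant to line up.
const TypeEnumExplicitSketch = extern struct {
    decl: u32,
    namespace: u32,
    int_tag_type: u32,
    names_map: u32,
    values_map: u32,
    fields_len: u32,
    // trailing: fields_len names (u32) + fields_len values (u32) = 8 bytes/field
};

const TypeEnumAutoSketch = extern struct {
    decl: u32,
    namespace: u32,
    names_map: u32,
    fields_len: u32,
    // trailing: fields_len names (u32) = 4 bytes/field; the tag value is the field index
};

comptime {
    std.debug.assert(@sizeOf(TypeEnumExplicitSketch) == 24);
    std.debug.assert(@sizeOf(TypeEnumAutoSketch) == 16);
}
```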