This is neither a type nor a value. Simplifies `addStrLit` as well as
the many places that switch on `InternPool.Key`.
This is a partial revert of bec29b9e498e08202679aa29a45dab2a06a69a1e.
This is a continuation of 2f24228c758bc8a35d13379703bc1695008212b0.
This commit comes with smaller gains, but gains nonetheless. memcpy now
shows up as much less prominent in callgrind output for the behavior
tests.
Current status: this branch is 1.15 ± 0.02 times slower than merge-base.
This is a bit odd, because this value doesn't actually exist:
see #15909. This gets all the empty enum/union behavior tests passing.
Also adds an assertion to `Sema.analyzeBodyInner` which would have
helped figure out the issue here much more quickly.
Key.PtrType is now an extern struct so that it can be hashed by
reinterpreting its bytes directly. The same representation is now used
for both the type_pointer Tag encoding and the Key. Accessing pointer
attributes now requires packed struct access; however, many operations
become a copy of a single u32 rather than of several independent fields.
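A minimal sketch of the idea, with illustrative field names and widths
rather than the real layout:

```zig
const std = @import("std");

// Sketch only: field names and widths are invented for illustration.
const PtrType = extern struct {
    child: u32, // index of the element type
    sentinel: u32, // index of the sentinel value, or a "none" marker
    flags: Flags,

    // All boolean/enum attributes packed into one u32, so copying the
    // attributes is a single 32-bit move instead of several field copies.
    const Flags = packed struct(u32) {
        alignment: u16,
        is_const: bool,
        is_volatile: bool,
        is_allowzero: bool,
        address_space: u5,
        _padding: u8 = 0,
    };
};

// An extern struct has a defined layout, and three u32-sized fields leave
// no hidden padding, so the whole key can be hashed as raw bytes.
fn hashPtrType(p: PtrType) u64 {
    return std.hash.Wyhash.hash(0, std.mem.asBytes(&p));
}
```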
This commit moves the two most used Key variants - pointer types and
pointer values - to a single-shot hash function that branches for small
keys instead of calling memcpy.
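Roughly the shape of the change (names are illustrative): a streaming
hasher funnels every key through an internal buffer, while a one-shot
call lets Wyhash handle short inputs with direct loads and a couple of
branches.

```zig
const std = @import("std");

// Before (sketch): streaming autoHash buffers bytes inside the hasher,
// which shows up as memcpy in profiles.
fn hashKeyStreaming(key: anytype) u64 {
    var hasher = std.hash.Wyhash.init(0);
    std.hash.autoHash(&hasher, key);
    return hasher.final();
}

// After (sketch): one-shot hashing of the key's bytes. Wyhash's one-shot
// path special-cases short inputs, so small fixed-size keys never go
// through memcpy. Assumes the key type has a defined layout with no
// padding, as with the extern struct change above.
fn hashKeyOneShot(key: anytype) u64 {
    return std.hash.Wyhash.hash(0, std.mem.asBytes(&key));
}
```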
As a result, perf against merge-base went from 1.17x ± 0.04 slower to
1.12x ± 0.04 slower. After the pointer value hashing was changed, total
CPU instructions spent in memcpy went from 4.40% to 4.08%, and after
additionally improving pointer type hashing, it further decreased to
3.72%.
The Zig language allows the compiler to make this optimization
automatically. We should definitely make the compiler do that, and
revert this commit. However, that will not happen in this branch, and I
want to continue to explore achieving performance parity with
merge-base. So, this commit changes all InternPool parameters to be
passed by const pointer rather than by value.
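A hedged before/after sketch of the signature change, with the struct
body and index type reduced to stand-ins:

```zig
const Index = enum(u32) { none, _ };

const InternPool = struct {
    // In reality this struct carries several large lists and maps, so
    // by-value passing copies all of that bookkeeping at every call site.
    items: []const u64 = &.{},
    extra: []const u32 = &.{},

    // Before: pub fn typeOf(ip: InternPool, index: Index) Index { ... }
    // After: one machine word crosses the call boundary instead.
    pub fn typeOf(ip: *const InternPool, index: Index) Index {
        _ = ip;
        _ = index;
        return .none; // body elided; only the calling convention changed
    }
};
```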
I measured a 1.03x ± 0.03 speedup vs the previous commit compiling the
(set of passing) behavior tests. Against merge-base, this commit is
1.17x ± 0.04 slower, which is an improvement from the previous
measurement of 1.22x ± 0.02.
Related issue: #13510
Related issue: #14129
Related issue: #15688
The old code assumed that `intAddScalar` could return a value outside
of the range of `ty`, which is problematic for many reasons.
The new code (ab)uses the InternPool for speed.
Recursion makes this hot function more difficult to profile and
optimize.
I measured a 1.05x speedup vs the previous commit with the (set of
passing) behavior tests.
This commit was the last in a series, and the main thing it needed to do
was make InternPool.typeOf not call indexToKey(). This required adding a
type field to the runtime_value encoding even though it is technically
redundant. This could have been avoided with a loop inside typeOf, but I
wanted to keep the machine code of that hot function as simple as
possible. The variable encoding is still responsible for a relatively
small slice of the InternPool data size.
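A simplified sketch of the shape this leaves typeOf in, with encodings
and index types reduced to bare integers:

```zig
const Tag = enum(u8) {
    runtime_value,
    // other tags elided
};

const Item = struct { tag: Tag, data: u32 };

// Sketch of the runtime_value trailing data: `ty` is technically
// redundant (it could be recovered by chasing `val`), but storing it
// keeps typeOf a straight-line load with no loop and no indexToKey.
const RuntimeValue = struct { ty: u32, val: u32 };

fn typeOf(items: []const Item, extra: []const u32, index: u32) u32 {
    const item = items[index];
    return switch (item.tag) {
        // extra[item.data] is the stored `ty` field: one load, no recursion.
        .runtime_value => extra[item.data],
    };
}
```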
I added a function that maps an InternPool.Tag to its corresponding
payload type, which allows for some handy inline switch prongs.
Let's start moving the structs that are specific to InternPool.Tag into
the corresponding namespace. This will provide type safety if the
encoding of InternPool changes for these types later.
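A sketch of both ideas together, with invented tag and payload names:
payload structs nested under Tag, plus a comptime mapping from tag to
payload type that makes inline switch prongs type-safe.

```zig
const Tag = enum(u8) {
    type_pointer,
    type_slice,

    // Payload structs now live in the Tag namespace, next to the tags
    // whose encodings they describe.
    const TypePointer = struct { child: u32, flags: u32 };
    const TypeSlice = struct { child: u32 };

    /// Map a comptime-known tag to its payload type.
    fn Payload(comptime tag: Tag) type {
        return switch (tag) {
            .type_pointer => TypePointer,
            .type_slice => TypeSlice,
        };
    }
};

// `inline else` stamps out one prong per tag with the tag value known at
// comptime, so Tag.Payload(t) resolves statically in each prong.
fn payloadSize(tag: Tag) usize {
    return switch (tag) {
        inline else => |t| @sizeOf(Tag.Payload(t)),
    };
}
```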
Recursion makes this hot function more difficult to profile and
optimize.
This commit adds the integer tag type to the type_enum_auto encoding,
even though the integer tag type can be inferred from the number of
fields in the enum. This avoids a getAssumeExists call for the integer
tag type inside indexToKey.
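For illustration, here is why storing it is technically redundant: the
bit width is derivable from the field count (a sketch, assuming an auto
enum picks the smallest unsigned integer that fits).

```zig
const std = @import("std");

// Number of bits in the integer tag type of an auto-numbered enum with
// `fields_len` fields: the smallest unsigned integer that can represent
// the values 0..fields_len-1.
fn autoEnumTagBits(fields_len: u32) u16 {
    if (fields_len <= 1) return 0; // a 0- or 1-field enum needs only u0
    return std.math.log2_int_ceil(u32, fields_len);
}

// Deriving the width at lookup time is cheap, but indexToKey would still
// have to fetch the resulting integer *type* via getAssumeExists. Storing
// the already-interned type index in the encoding makes it a single load.
```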
Recursion makes this hot function more difficult to profile and
optimize.
The ptr_slice encoding now additionally includes the slice type. This
makes typeOf() implementable without indexToKey() and removes the
recursion from the ptr_slice prong of indexToKey itself.
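Sketch of the reworked encoding, with index types simplified to u32:

```zig
// ptr_slice trailing data: the slice type is now stored directly, so the
// indexToKey prong and typeOf() are plain loads instead of recursive calls.
const PtrSlice = struct {
    ty: u32, // the slice type, previously recomputed by recursing on `ptr`
    ptr: u32, // the underlying many-item pointer value
    len: u32, // the length value
};
```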
Unfortunately, some logic had to be duplicated. However, I think a
future enhancement could eliminate the duplication, remove some other
unwanted code, and improve performance: represent a slice value in
`Key.Ptr` not with `addr` populated directly, but with an `Index`
pointing to the underlying manyptr value.
This is a hot function, and recursion makes it more difficult to
profile, as well as likely making it more difficult to optimize.
Previously, indexToKey for opt_payload would call getAssumeExists() on
the optional type. This made it possible to omit the optional type in
the encoding of opt_payload. However, getAssumeExists() *must* call
indexToKey because of hashing/equality.
So, this commit adds the optional type to the opt_payload encoding,
which increases its "extra" size from 0 to 8 bytes. As a result,
the opt_payload encoding went from not showing up in the top 25 largest
tags to...still not showing up in the top 25 largest tags.
This also helps make InternPool.typeOf() no longer need to call
indexToKey, which is another hot function and another source of
recursion.
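A sketch of the encoding change, with field types simplified:

```zig
// Before: the opt_payload item stored only the payload value, and the
// optional type was rebuilt with getAssumeExists(), which must call
// indexToKey() for hashing/equality, i.e. recursion on a hot path.
//
// After: `data` points at this 8-byte extra record instead.
const OptPayload = struct {
    ty: u32, // the optional type, now stored explicitly
    val: u32, // the payload value
};
```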
This is a particularly hot function, so we operate directly on encodings
rather than taking the more straightforward approach of calling
`indexToKey`.
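Roughly what this looks like, as a sketch with a handful of invented
tags:

```zig
const std = @import("std");

const Tag = enum(u8) { type_pointer, type_slice, type_enum_auto };

// Sketch: derive the language-level type tag from the stored item tag
// alone, one switch on a u8, instead of materializing a full Key via
// indexToKey.
fn zigTypeTag(tag: Tag) std.builtin.TypeId {
    return switch (tag) {
        .type_pointer, .type_slice => .Pointer,
        .type_enum_auto => .Enum,
    };
}
```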
I measured this as 1.05 ± 0.04 times faster than the previous commit
with a ReleaseFast build against hello world (which includes std.debug
and formatted printing).
I also profiled the function and found that zigTypeTag() went from being
a major caller of `indexToKey` to being completely insignificant due to
being so fast.