This is a hot function, and recursion makes it harder to profile and
likely harder to optimize.
Previously, indexToKey for opt_payload would call getAssumeExists() on
the optional type, which made it possible to omit the optional type
from the encoding of opt_payload. However, getAssumeExists() *must*
call indexToKey, because key hashing and equality are implemented in
terms of it.
So, this commit adds the optional type to the opt_payload encoding,
which increases its "extra" size from 0 to 8 bytes. As a result,
the opt_payload encoding went from not showing up in the top 25 largest
tags to...still not showing up in the top 25 largest tags.
This also allows InternPool.typeOf(), another hot function and another
source of recursion, to no longer need to call indexToKey.
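Roughly, the new encoding has the shape sketched below; the names here
are illustrative stand-ins, not the actual InternPool declarations:

```zig
const std = @import("std");

/// Stand-in for InternPool.Index.
const Index = u32;

// Before, an opt_payload item carried only the payload value and needed no
// "extra" data; recovering the optional type meant calling getAssumeExists(),
// which recurses into indexToKey. After, the optional type is stored next to
// the payload value in an "extra" struct, so indexToKey and typeOf can read
// it directly without recursion.
const OptPayloadExtra = struct {
    ty: Index,
    val: Index,
};

test "the new encoding costs 8 bytes of extra data" {
    try std.testing.expectEqual(@as(usize, 8), @sizeOf(OptPayloadExtra));
}
```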
This is a particularly hot function, so we operate directly on the
encodings rather than using the more straightforward implementation
that calls `indexToKey`.
I measured this as 1.05 ± 0.04 times faster than the previous commit
with a ReleaseFast build against hello world (which includes std.debug
and formatted printing).
I also profiled the function and found that zigTypeTag() went from being
a major caller of `indexToKey` to being completely insignificant in the
profile, because it is now so fast.
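As a rough sketch of the technique, assuming a much-simplified set of
tags (the real encodings and tag names differ):

```zig
const std = @import("std");

// Simplified stand-ins for the stored item tags and the result type; the
// real InternPool has far more tags.
const Tag = enum { type_int_signed, type_int_unsigned, type_optional, simple_value };
const ZigTypeTag = enum { Int, Optional, Other };

// Switching on the stored tag directly is a cheap jump table; going through
// indexToKey would mean decoding the whole item into a Key first, possibly
// recursing into the items it references.
fn zigTypeTagFromEncoding(tag: Tag) ZigTypeTag {
    return switch (tag) {
        .type_int_signed, .type_int_unsigned => .Int,
        .type_optional => .Optional,
        .simple_value => .Other,
    };
}

test "tag dispatch without building a Key" {
    try std.testing.expectEqual(ZigTypeTag.Int, zigTypeTagFromEncoding(.type_int_unsigned));
}
```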
I'm not sure whether this is the right place for this to happen, and
it should become obsolete when comptime mutation is rewritten
and the remaining legacy value tags are removed, so I'm keeping this
as a separate, revertible commit.
Previously, there were types and values for inferred allocations and a
lot of special-case handling. Now, instead, the special casing is
limited to AIR instructions for these use cases.
Instead of storing data in Value payloads, the data is now stored in AIR
instruction data as well as in the value type of the
`unresolved_inferred_allocs` hash map, which was previously `void`.
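A minimal sketch of the idea, with a hypothetical `InferredAllocData`
struct and a plain `u32` key standing in for the real AIR instruction
index:

```zig
const std = @import("std");

// Hypothetical payload (field names invented for illustration): data that
// previously lived in a Value payload now travels in the map's value type,
// which used to be `void`.
const InferredAllocData = struct {
    alignment: u64,
    is_const: bool,
};

test "carrying data in the map's value type" {
    const gpa = std.testing.allocator;

    // The key is a plain u32 standing in for an AIR instruction index.
    var unresolved_inferred_allocs =
        std.AutoArrayHashMap(u32, InferredAllocData).init(gpa);
    defer unresolved_inferred_allocs.deinit();

    try unresolved_inferred_allocs.put(42, .{ .alignment = 8, .is_const = false });
    try std.testing.expect(unresolved_inferred_allocs.contains(42));
}
```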
Store `InternPool.Index` as the key instead, which means that an AIR
instruction no longer needs to be burned just to store the type. It also
means we can use an AutoArrayHashMap instead of an ArrayList, which
avoids storing duplicates in the set, potentially saving CPU time.
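A small sketch of the set semantics this buys, with `u32` standing in
for `InternPool.Index`:

```zig
const std = @import("std");

test "AutoArrayHashMap as a deduplicating set" {
    const gpa = std.testing.allocator;

    // A void value type turns the map into an insertion-ordered set: putting
    // the same index twice is a no-op rather than a duplicate entry, unlike
    // appending to an ArrayList.
    var set = std.AutoArrayHashMap(u32, void).init(gpa);
    defer set.deinit();

    try set.put(123, {});
    try set.put(123, {});
    try std.testing.expectEqual(@as(usize, 1), set.count());
}
```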