Explicitly set the alignment requirement to 1 (i.e., mark the load as
unaligned), since some architectures (e.g. SPARCv9) have different alignment
requirements for function pointers and usize pointers. On those
architectures, not setting it explicitly leads to @frameSize generating a
usize-aligned load instruction that could crash if the function pointer
happens not to be usize-aligned.
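For illustration only, the effect of the fix can be sketched at the language
level (hypothetical helper, not the actual compiler change):
```zig
// A pointer annotated with align(1) forces an unaligned load, even on
// targets such as SPARCv9 where a plain usize load would otherwise be
// emitted assuming natural alignment.
fn loadUnaligned(p: *align(1) const usize) usize {
    return p.*;
}
```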
* thread/condition: fix PthreadCondition compilation
* thread/condition: add wait, signal and broadcast
This is like std.Thread.Mutex, which forwards calls to `impl`; it avoids
having to call `cond.impl` every time. A sketch of the forwarding pattern
follows this list.
* thread/condition: initialize the implementation
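A minimal sketch of the forwarding pattern (the `Impl` stub here stands in
for the platform-specific implementation; it is not the real type):
```zig
const std = @import("std");

// Stand-in for the platform-specific implementation (e.g. PthreadCondition).
const Impl = struct {
    fn wait(self: *Impl, mutex: *std.Thread.Mutex) void {
        _ = self;
        _ = mutex;
    }
    fn signal(self: *Impl) void {
        _ = self;
    }
    fn broadcast(self: *Impl) void {
        _ = self;
    }
};

pub const Condition = struct {
    impl: Impl = .{},

    // Each public method forwards to the implementation, so callers never
    // have to reach through `cond.impl` themselves.
    pub fn wait(cond: *Condition, mutex: *std.Thread.Mutex) void {
        cond.impl.wait(mutex);
    }
    pub fn signal(cond: *Condition) void {
        cond.impl.signal();
    }
    pub fn broadcast(cond: *Condition) void {
        cond.impl.broadcast();
    }
};
```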
After a right shift, the top limbs may all be zero. However, without
normalization, the number of limbs is not going to change.
In order to check whether a big number is zero, we used to assume that the
number of limbs is 1, which may not be the case after right shifts,
even if the actual value is zero.
- Normalize after a right shift
- Add a test for that issue
- Check all the limbs in `eqlZero()`, as sketched below. It may not be
necessary if callers always remember to normalize before calling the
function, but checking all the limbs is very cheap and makes the function
less bug-prone.
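A minimal sketch of the check (the limb type is simplified to usize here;
the real function lives on the big integer types in std.math.big):
```zig
// Returns true only if every limb is zero, so the answer is correct even
// when the caller forgot to normalize after a right shift.
fn eqlZero(limbs: []const usize) bool {
    for (limbs) |limb| {
        if (limb != 0) return false;
    }
    return true;
}
```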
The astgen for switch expressions did not respect the ZIR rules of only
referencing instructions that are in scope:
```
%14 = block_comptime_flat({
%15 = block_comptime_flat({
%16 = const(TypedValue{ .ty = comptime_int, .val = 1})
})
%17 = block_comptime_flat({
%18 = const(TypedValue{ .ty = comptime_int, .val = 2})
})
})
%19 = block({
%20 = ref(%5)
%21 = deref(%20)
%22 = switchbr(%20, [%15, %17], {
%15 => {
%23 = const(TypedValue{ .ty = comptime_int, .val = 1})
%24 = store(%10, %23)
%25 = const(TypedValue{ .ty = void, .val = {}})
%26 = break("label_19", %25)
},
%17 => {
%27 = const(TypedValue{ .ty = comptime_int, .val = 2})
%28 = store(%10, %27)
%29 = const(TypedValue{ .ty = void, .val = {}})
%30 = break("label_19", %29)
}
}, {
%31 = unreachable_safe()
}, special_prong=else)
})
```
In this snippet you can see that the comptime expr referenced %15 and
%17, which are not in scope. There was also no test coverage for runtime
switch expressions.
Switch expressions will have to be re-introduced to follow these rules
and with some test coverage. There is some usable code being deleted in
this commit; it will be useful to reference when re-implementing switch
later.
A few more improvements to do while we're at it:
* only use the .ref result loc on the switch target if any prongs obtain
the payload with `|*x|` pointer capture syntax
- this improvement should be done to if, while, and for as well.
- this will remove the needless ref/deref instructions above
* remove switchbr and add switch_block, which is both a block and a
switch branch.
- similarly we should remove loop and add loop_block.
This commit introduces a "force_comptime" flag into the GenZIR
scope. The main purpose of this will be to choose the "comptime"
variants of certain key ZIR instructions, such as function calls and
branches. We will be moving away from using the block_comptime_flat
ZIR instruction and will eventually delete it.
This commit also contains miscellaneous fixes to this branch that bring
it to the state of passing all the tests.
on the break instruction operands. This involves a new TZIR instruction,
br_block_flat, which represents a break instruction where the operand is
the result of a flat block. See the doc comments on the instructions for
more details.
How it works: when adding break instructions in semantic analysis, the
underlying allocation is slightly padded so that it is the size of a
br_block_flat instruction, which allows the break instruction to later
be converted without removing instructions inside the parent body. The
extra type coercion instructions go into the body of the br_block_flat,
and backends are responsible for dispatching the instruction correctly
(it should map to the same function calls for related instructions).
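The padding trick can be illustrated with made-up instruction structs (these
are not the real TZIR types; only the size/in-place-rewrite idea is the
point):
```zig
const Br = struct { operand: u32 };
const BrBlockFlat = struct { operand: u32, body_len: u32 };

// Every break occupies the footprint of the larger variant, even while it
// is still a plain Br.
const PaddedBr = union {
    br: Br,
    br_block_flat: BrBlockFlat,
};

// Later, the break can be upgraded in place, without relocating it or
// shifting any other instruction in the parent body. Assumes `slot`
// currently holds the `br` variant.
fn upgradeToFlatBlock(slot: *PaddedBr, body_len: u32) void {
    const operand = slot.br.operand;
    slot.* = .{ .br_block_flat = .{ .operand = operand, .body_len = body_len } };
}
```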
Local variable declarations now detect whether the initialization
expression consumes its result location as a pointer. If it does, then the
local is emitted as a LocalPtr. Otherwise it is emitted as a LocalVal.
This results in clean, straightforward ZIR code for semantic analysis.
Motivating test case:
```zig
export fn _start() noreturn {
var x: u64 = 1;
var y: u32 = 2;
var thing: u32 = 1;
const result = if (thing == 1) x else y;
exit();
}
```
The main idea here is for astgen to output ideal ZIR depending on
whether or not the sub-expressions of a block consume the result
location. Here, neither `x` nor `y` consume the result location of the
conditional expression block, and so the ZIR should communicate the
result of the condbr using break instructions, not with the result
location pointer.
With this commit, this is accomplished:
```
%22 = alloc_inferred()
%23 = block({
%24 = const(TypedValue{ .ty = type, .val = bool})
%25 = deref(%18)
%26 = const(TypedValue{ .ty = comptime_int, .val = 1})
%27 = cmp_eq(%25, %26)
%28 = as(%24, %27)
%29 = condbr(%28, {
%30 = deref(%4)
< there is no longer a store instruction here >
%31 = break("label_23", %30)
}, {
%32 = deref(%11)
< there is no longer a store instruction here >
%33 = break("label_23", %32)
})
})
%34 = store_to_inferred_ptr(%22, %23) <-- the store is only here
%35 = resolve_inferred_alloc(%22)
```
However if the result location gets consumed, the break instructions
change to break_void, and the result value is communicated only by the
stores, not by the break instructions.
Implementation:
* The GenZIR scope that conditional branches use now has an optional
result location pointer field and a count of how many times the
result location ended up being an rvalue (not consumed).
* When rvalue() is called on a result location for a block, it
increments this counter. After generating the branches of a block,
astgen for the conditional branch checks this count, and if it is 2,
the store_to_block_ptr instructions are elided and rvalue() is called
with the block result (which accounts for peer type resolution on the
break operands). A schematic of this bookkeeping follows below.
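A schematic of that bookkeeping, with invented names (the real fields live
on the GenZIR scope in astgen):
```zig
const BlockScope = struct {
    // Index of the block's result pointer instruction, when one exists.
    rl_ptr: ?u32 = null,
    // Bumped each time a branch does not consume the result location; when
    // both branches of a condbr end up here (count == 2), the
    // store_to_block_ptr instructions are elided and the block result is
    // produced through the break operands instead.
    rvalue_count: u32 = 0,

    fn noteRvalue(self: *BlockScope) void {
        self.rvalue_count += 1;
    }
};
```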
astgen has many functions disabled until they can be reworked with these
new semantics. That will be done before merging the branch.
There are some new rules for astgen to follow regarding result locations
and what you are allowed/required to do depending on which one is passed
to expr(). See the updated doc comments of ResultLoc for details.
I also changed naming conventions of stuff in this commit, sorry about
that.
This is a proof-of-concept of switching to a new memory layout for
tokens and AST nodes. The goal is threefold:
* smaller memory footprint
* faster performance for tokenization and parsing
* most importantly, a proof-of-concept that can be also applied to ZIR
and TZIR to improve the entire compiler pipeline in this way.
I had a few key insights here:
* Underlying premise: using less memory will make things faster, because
of fewer allocations and better cache utilization. Also using less
memory is valuable in and of itself.
* Using a struct-of-arrays for tokens and AST nodes saves the bytes of
padding between the enum tag (which kind of token is it; which kind
of AST node is it) and the next fields in the struct. It also improves
cache utilization, since one can peek ahead in the tokens array without
having to load the source locations of tokens.
* Token memory can be conserved by storing only the tag (1 byte) and byte
offset (4 bytes), for a total of 5 bytes per token (sketched after this
list). It is not necessary to store the token's ending byte offset because
one can always re-tokenize later; besides, for most tokens the length can
be trivially determined from the tag alone, and for the ones where it
can't (string literals, for example), the string must be parsed again
later in astgen anyway, making it free to re-tokenize.
* AST nodes do not actually need to store more than 1 token index because
one can poke left and right in the tokens array very cheaply.
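A sketch of the token layout described above (illustrative names, not the
real std definitions; the real tag set is much larger):
```zig
const TokenTag = enum(u8) { identifier, string_literal, l_paren, r_paren, keyword_fn };

const TokenList = struct {
    // Two parallel arrays (struct-of-arrays): the 1-byte tags pack tightly,
    // and peeking ahead only touches the tags array.
    tags: []TokenTag,
    starts: []u32, // byte offset into the source; the end is re-derived on demand
};
```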
So far we are left with one big problem though: how can we put AST nodes
into an array, since different AST nodes are different sizes?
This is where my key observation comes in: one can have a hash table for
the extra data for the less common AST nodes! But it gets even better than
that:
I defined this data that is always present for every AST Node:
* tag (1 byte)
- which AST node is it
* main_token (4 bytes, index into tokens array)
- the tag determines which token this points to
* struct{lhs: u32, rhs: u32}
- enough to store 2 indexes to other AST nodes, the tag determines
how to interpret this data
You can see how a binary operation, such as `a * b`, would fit into this
structure perfectly. A unary operation, such as `*a`, would also fit,
leaving `rhs` unused. So this is a total of 13 bytes per AST node.
And again, we don't have to pay for the padding to round up to 16 because
we store in struct-of-arrays format.
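A sketch of that base per-node layout (again, illustrative names only):
```zig
const NodeTag = enum(u8) { root, add, mul, negation, call_one, call };
const NodeData = struct { lhs: u32, rhs: u32 };

const NodeList = struct {
    // Parallel arrays (struct-of-arrays): 1 + 4 + 8 = 13 bytes per node,
    // with no padding paid between the tag and the u32 fields.
    tags: []NodeTag,
    main_tokens: []u32, // token index; which token it is depends on the tag
    datas: []NodeData,  // lhs/rhs interpretation also depends on the tag
};
```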
I made a further observation: the only kind of data AST nodes need to
store other than the main_token is indexes to sub-expressions. That's it.
The only purpose of an AST is to bring a tree structure to a list of tokens.
This observation means all the data that nodes store is just sets of u32
indexes to other nodes. The other tokens can be found later by the
compiler by poking around in the tokens array, which again is super fast
because it is struct-of-arrays: often you only need to look at the token
tags array, which is an array of bytes and very cache friendly.
So for nearly every kind of AST node, you can store it in 13 bytes. For the
rarer AST nodes that have 3 or more indexes to other nodes to store, either
the lhs or the rhs will be repurposed to be an index into an extra_data
array, which contains the extra AST node indexes. In other words, no hash
table is needed; it's just one big ArrayList with the extra data for AST
nodes.
Final observation: there is no need to have a canonical tag for a given AST
construct. For example:
The expression `foo(bar)` is a function call. Function calls can have any
number of parameters. However in this example, we can encode the function
call into the AST with a tag called `FunctionCallOnlyOneParam`, and use lhs
for the function expr and rhs for the only parameter expr. Meanwhile if the
code were `foo(bar, baz)`, then the AST node would have to be `FunctionCall`,
with lhs still being the function expr but rhs being the index into
`extra_data`. Then, because the tag is `FunctionCall`, it means
`extra_data[rhs]` is the "start" and `extra_data[rhs+1]` is the "end".
Now the range `extra_data[start..end]` describes the list of parameters
to the function.
Point being, you only have to pay for the extra bytes if the AST actually
requires it. There's no limit to the number of different AST tag encodings.
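As a sketch, decoding the `FunctionCall` case above could look like this
(hypothetical helper and field names, not the real parser API):
```zig
const NodeData = struct { lhs: u32, rhs: u32 };

// For a `FunctionCall` node, lhs is the callee expression and rhs points
// into extra_data, where extra_data[rhs] is "start" and extra_data[rhs+1]
// is "end". For `FunctionCallOnlyOneParam`, rhs is simply the single
// parameter node and extra_data is never touched.
fn callParamNodes(datas: []const NodeData, extra_data: []const u32, call_node: u32) []const u32 {
    const rhs = datas[call_node].rhs;
    const params_start = extra_data[rhs];
    const params_end = extra_data[rhs + 1];
    return extra_data[params_start..params_end];
}
```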
Preliminary results:
* 15% improvement on cache-misses
* 28% improvement on total instructions executed
* 26% improvement on total CPU cycles
* 22% improvement on wall clock time
This is 1 of 4 items on the checklist before this can actually be merged:
* [x] parser
* [ ] render (zig fmt)
* [ ] astgen
* [ ] translate-c
It now uses the log scope "gpa" instead of "std".
Additionally, there is a new config option, `verbose_log`, which enables
info log messages for every allocation. This can be useful when debugging.
The option is off by default.
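A hypothetical usage sketch (only the `verbose_log` name comes from this
change; the surrounding Config details are assumptions):
```zig
const std = @import("std");

// With verbose_log enabled, every allocation and free is reported as an
// info-level log message under the "gpa" scope.
var gpa = std.heap.GeneralPurposeAllocator(.{ .verbose_log = true }){};
```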