An attempt to normalize some of the function names in build.zig: rename add*Dir to add*Path, and use "Library" instead of the "Lib" abbreviation.
The PR does not remove the old names; it only adds the new normalized ones to facilitate a transition period.
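For illustration, a minimal build.zig sketch using the normalized names (assuming the era's two-argument `addExecutable`; the old names still compile during the transition):

```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    exe.addIncludePath("include"); // previously: exe.addIncludeDir("include");
    exe.addLibraryPath("libs");    // previously: exe.addLibPath("libs");
    exe.install();
}
```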
The size of a GUID is not platform-dependent; it is always a fixed number of bits. So I've updated guid to use fixed-width integer types rather than platform-dependent C integer types.
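For example, a GUID is always 128 bits; a sketch of a fixed-width layout (the Windows-style field split, shown here for illustration):

```zig
// A GUID is always 128 bits, regardless of platform.
const GUID = extern struct {
    Data1: u32,
    Data2: u16,
    Data3: u16,
    Data4: [8]u8,
};
```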
* AstGen: use Ast.zig helper methods to avoid copy-pasting token-counting logic
- take advantage of the `first_doc_comment` field we already have for
param AST nodes
* Add missing ZIR docs
There are some restrictions here.
- We either need C11 or a compiler that supports the aligned attribute
- We cannot provide an alignment less than the type's natural C alignment.
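For illustration, the Zig-side view of these restrictions (variable names are made up):

```zig
// Above the natural alignment: expressible in C via C11 _Alignas or the
// aligned attribute.
var over_aligned: u32 align(16) = 0;

// Below the natural alignment is not expressible in C, so the C backend
// cannot support it; for example, `var under_aligned: u32 align(1) = 0;`.
```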
Looking at the assembly generated for BufferedWriter, one can see that it has to do a lot of work just to copy over some bytes and increase an offset. This is because LinearFifo is a much more general construct than what BufferedWriter needs, and the optimizer cannot prove that the extra work is unnecessary.
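A minimal sketch of the specialization BufferedWriter actually needs (simplified and hypothetical, not the std implementation; `Inner` is assumed to be any std writer-like type):

```zig
const std = @import("std");

fn SimpleBufferedWriter(comptime Inner: type) type {
    return struct {
        inner: Inner,
        buf: [4096]u8 = undefined,
        end: usize = 0,

        const Self = @This();

        // The hot path is just a bounds check, a memcpy, and an offset bump;
        // a general-purpose FIFO cannot be optimized down to this.
        pub fn write(self: *Self, bytes: []const u8) !usize {
            if (self.end + bytes.len > self.buf.len) try self.flush();
            if (bytes.len > self.buf.len) return self.inner.write(bytes);
            @memcpy(self.buf[self.end..][0..bytes.len], bytes);
            self.end += bytes.len;
            return bytes.len;
        }

        pub fn flush(self: *Self) !void {
            try self.inner.writeAll(self.buf[0..self.end]);
            self.end = 0;
        }
    };
}
```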
If there is a big atom available for re-use in the free list, and it is the last atom in the section, its ideal capacity might span the entire section, in which case we do not want to calculate the actual end VM addr of the symbol since it may overflow. Instead, we just take the max available capacity as the end VM addr estimate. In this case, the max capacity equals `std.math.maxInt(u64)`.
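A simplified sketch of that estimate (names are hypothetical, not the actual linker fields):

```zig
const std = @import("std");

fn endVmAddrEstimate(next_sym_vaddr: ?u64) u64 {
    // Last atom in the section: don't compute start + capacity, which
    // could overflow u64; treat the available capacity as unbounded.
    return next_sym_vaddr orelse std.math.maxInt(u64);
}
```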
Instead of using a `push`/`pop` combo, we now re-use our stack allocation mechanism, which means we don't have to worry about 16-byte stack adjustments on macOS, as they are handled automatically for us. Another benefit is that we don't have to backpatch stack offsets when pulling args from the stack.
The previous commit that implemented doc comment ZIR support for decls did not properly account for all the possible attribute keyword combinations (threadlocal, extern, and such).
Previously, optional slices returned the pointer size as their ABI size.
We now account for the slice when calculating the correct size, which is the ABI alignment plus the slice's ABI size. For example, with 8-byte pointers, a slice occupies 16 bytes with 8-byte alignment, giving an optional slice an ABI size of 24 bytes.
To generate better code for tuples, we detect a tuple operand in storePtr and analyze field loads and stores directly. This avoids an extra allocation + memcpy that would occur if we used `coerce`.
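An illustrative source pattern that would hit this path (assuming a tuple-typed destination):

```zig
pub fn main() void {
    var pair: struct { u32, bool } = undefined;
    // The RHS is a tuple operand to a store: its fields can be stored into
    // `pair` directly, with no intermediate allocation + memcpy via `coerce`.
    pair = .{ 42, true };
    _ = pair;
}
```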
When asking a struct or union whether the type requires comptime, it may need to ask itself recursively, for example because of a field which is a pointer to itself. This commit adds a field to each to track when the "requires comptime" value is being computed, and returns `false` if the check is already ongoing.
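A minimal sketch of the recursion guard, with hypothetical names (the real compiler fields differ):

```zig
const RequiresComptime = enum { unknown, wip, no, yes };

const StructType = struct {
    requires_comptime: RequiresComptime = .unknown,
    // ... field types elided ...
};

fn requiresComptime(s: *StructType) bool {
    switch (s.requires_comptime) {
        .wip => return false, // check already ongoing: break the cycle
        .no => return false,
        .yes => return true,
        .unknown => {
            s.requires_comptime = .wip;
            // ... recursively query each field type here; a field that is a
            // pointer back to this struct re-enters and hits the `.wip` case ...
            s.requires_comptime = .no; // resolved result, assumed for the sketch
            return false;
        },
    }
}
```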
* AIR instruction vector_init gains the ability to init arrays and
tuples in addition to vectors. This will probably also gain the
ability to initialize structs and be renamed to `aggregate_init`.
* AstGen prefers to use an `anon_array_init` ZIR instruction for
local variables when the init expr is an array literal and there is
no type.
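A plausible source pattern for the second point (assuming the anonymous list literal form of an array literal):

```zig
pub fn main() void {
    // No explicit result type, and the init expression is an array literal:
    // AstGen can emit `anon_array_init` here.
    const xs = .{ "one", "two", "three" };
    _ = xs;
}
```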
Prior to this change, `__DATA,__bss` and `__DATA,__thread_bss` would actually get physically written out to the output file, unnecessarily filling it with zeroes.
This was originally introduced in 4d48948b526337947ef59a83f7dbc81b70aa5723
but broken immediately afterwards in c8af00c66e8b6f62e4dd6ac4d86a3de03e9ea354.
When a constant will be passed by reference, such as a struct, we call into genTypedValue to lower the constant to bytes and store them in the `rodata` section. We then return the address of this constant as a `WValue`.
This change means all constants are lowered at compile time, and we no longer have to sacrifice runtime to lower them onto the stack.
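For illustration, a constant that benefits from this (ordinary user code, names made up):

```zig
const Point = struct { x: i32, y: i32 };
const origin = Point{ .x = 0, .y = 0 };

// `&origin` can now be a pointer into rodata, produced at compile time,
// instead of bytes rebuilt on the stack at every use.
fn lenSquared(p: *const Point) i32 {
    return (p.x * p.x) + (p.y * p.y);
}

pub fn main() void {
    _ = lenSquared(&origin);
}
```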
This is more of a temp hack than anything else: I think the mechanism we use for adjusting the stack when pushing args onto the stack could/should be reused; i.e., we should just calculate the stack alignment before each call and then reset `rsp`, rather than relying on the current hack in the `gen()` logic.
* push the arguments in reverse order
* add logic for pushing args of any ABI size to the stack - very similar to
  `genSetStack`, but using `.rsp` as the base register
* adjust `.rsp` down and back up in `airCall` when calling a function with
  args on the stack
* add logic for recovering args from the caller's stack in the callee
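A sketch of the alignment calculation suggested above (hypothetical helper, not the current code):

```zig
const std = @import("std");

// Reserve 16-byte-aligned stack space for the on-stack args before the
// call, and release the same amount afterwards.
fn callFrameSize(total_arg_bytes: u64) u64 {
    return std.mem.alignForward(u64, total_arg_bytes, 16);
}
```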
This allows us to get rid of unused fields when generating code for non-function decls. We can now create separate instances of `DeclGen`, which in turn can be used to generate the code for a constant.
Besides those reasons, this will make it much easier to switch to the general-purpose `codegen.zig` that any backend should use, allowing us to deduplicate this code.
The backend can create anonymous local symbols. These can be used for constants that are passed by reference, so they do not have to be lowered onto the stack; instead they are stored in the data section. This also means it is valid to return a pointer to a constant array. The local symbols that are created are managed by the parent decl; freeing the parent decl also frees all of its locals.
When a local symbol is created, the index of that symbol is returned and saved in the `memory` tag of a `WValue`, which is then memoized. This means that each 'emit' of this WValue creates a relocation for that constant/symbol, and the actual pointer value is set after the relocation phase.
Due to the new structure of lowerConstant, we can now simplify the logic in a lot of situations.
- We no longer have to check the `WValue`'s tag to determine how to load/store a value.
- We can now use simple memcopies for aggregate types.
- Constants are now memoized, meaning we no longer lower constants at each call site.
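A hypothetical sketch of that memoization (the names, the hash-based key, and the placeholder symbol creation are made up for illustration):

```zig
const std = @import("std");

const WValue = union(enum) { memory: u32 };

const DeclGen = struct {
    memoized: std.AutoHashMap(u64, u32), // constant hash -> local symbol index

    // Lower each constant once; every subsequent use gets the same
    // `memory` WValue, and each emit of it records a relocation.
    fn lowerConstant(self: *DeclGen, const_hash: u64) !WValue {
        if (self.memoized.get(const_hash)) |sym_index| {
            return WValue{ .memory = sym_index };
        }
        const sym_index = try self.createLocalSymbol();
        try self.memoized.put(const_hash, sym_index);
        // The actual pointer value behind `memory` is filled in after the
        // relocation phase.
        return WValue{ .memory = sym_index };
    }

    fn createLocalSymbol(self: *DeclGen) !u32 {
        _ = self;
        return 0; // placeholder: would write the constant's bytes to the data section
    }
};
```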