1714 Commits

David Rubin
4ce85f930e riscv: implement structFieldPtr and retLoad 2024-05-11 02:17:11 -07:00
David Rubin
ece70e08a0 riscv: pass optionals by register_pair for resolveCallingConventionValues 2024-05-11 02:17:11 -07:00
David Rubin
26ce82d98e riscv: correctly dereference load_symbol in genSetReg 2024-05-11 02:17:11 -07:00
David Rubin
3bf008a3d0 riscv: implement slices 2024-05-11 02:17:11 -07:00
David Rubin
350ad90cee riscv: totally rewrite how we do loads and stores
this commit is a little too large to document fully, however the main gist of it is this:

- finish the `genInlineMemcpy` implementation
- rename `setValue` to `genCopy`, as I agree with Jacob that it's a better name
- add in `genVarDbgInfo` for a better gdb experience
- follow the x86_64's method for genCall, as the procedure is very similar for us
- add `airSliceLen` as it's trivial
- change up the `airAddWithOverflow` implementation a bit
- make sure not to spill if the elem_ty is zero-sized
- correctly follow the RISC-V calling convention and spill the used callee-saved registers in the prologue
and restore them in the epilogue
- add `address`, `deref`, and `offset` helper functions for MCValue. I must say I love these,
they make the code very readable and super verbose :)
- fix a `register_manager.zig` issue where using the last register in the set would overflow at comptime;
this happened because we were adding to `max_id` before subtracting from it.
2024-05-11 02:17:11 -07:00
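
A minimal sketch of what `address`/`deref`/`offset`-style helpers on a tagged machine-value union could look like; the variant names and layout below are assumptions for illustration, not the backend's actual `MCValue`:

```zig
const std = @import("std");

/// Illustrative machine-value union; not the backend's real MCValue.
const MCValue = union(enum) {
    register: u5,
    /// Value stored in the stack frame at a byte offset.
    load_frame: struct { off: i32 },
    /// Address of a stack-frame location at a byte offset.
    lea_frame: struct { off: i32 },

    /// The address of a value that lives in the frame.
    fn address(mcv: MCValue) MCValue {
        return switch (mcv) {
            .load_frame => |f| .{ .lea_frame = .{ .off = f.off } },
            else => unreachable, // other cases would need a temporary register
        };
    }

    /// Dereference an address back into a loadable value.
    fn deref(mcv: MCValue) MCValue {
        return switch (mcv) {
            .lea_frame => |f| .{ .load_frame = .{ .off = f.off } },
            else => unreachable,
        };
    }

    /// Offset a frame-relative value by a number of bytes.
    fn offset(mcv: MCValue, delta: i32) MCValue {
        return switch (mcv) {
            .load_frame => |f| .{ .load_frame = .{ .off = f.off + delta } },
            .lea_frame => |f| .{ .lea_frame = .{ .off = f.off + delta } },
            .register => unreachable, // a register has no byte address to offset
        };
    }
};

test "compose the helpers" {
    const field = (MCValue{ .load_frame = .{ .off = 8 } }).offset(4);
    try std.testing.expectEqual(@as(i32, 12), field.address().lea_frame.off);
    try std.testing.expect(std.meta.eql(field, field.address().deref()));
}
```
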
David Rubin
3c0015c828 riscv: implement a basic @intCast
the truncation panic logic is generated in Sema, so I don't need to roll anything
of my own. I added all of the boilerplate for detecting the truncation, and it works
in basic test cases!
2024-05-11 02:17:11 -07:00
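
For context, the user-facing behavior being wired up here, as a minimal assumed example (not taken from the commit's tests):

```zig
const std = @import("std");

fn toByte(x: u16) u8 {
    // Sema generates the truncation check here; with runtime safety on,
    // this panics if `x` does not fit in a u8.
    return @intCast(x);
}

test toByte {
    try std.testing.expectEqual(@as(u8, 200), toByte(200));
}
```
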
David Rubin
b28c966e33 riscv: fix overflow checks in addition. 2024-05-11 02:17:11 -07:00
David Rubin
e70584e2f8 riscv: change load_symbol pseudo instruction size to 8 2024-05-11 02:17:11 -07:00
David Rubin
06089fc89a riscv: fix how we calculate stack offsets. allows for pass by reference arguments. 2024-05-11 02:17:11 -07:00
David Rubin
c96989aa4b riscv: correctly index struct field access
when the struct is in stack memory, we access it using a byte-offset,
because that's how the stack works. on the other hand when the struct
is in a register, we are working with bits and the field offset should
be a bit offset.
2024-05-11 02:17:11 -07:00
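
A small illustration of the distinction, with made-up offsets rather than the backend's code: a struct spilled to the stack is addressed in bytes, while a struct held in a register is addressed in bits.

```zig
const std = @import("std");

/// Stack case: the field lives at base + byte offset.
fn fieldStackOffset(base_off: i32, field_byte_off: u32) i32 {
    return base_off + @as(i32, @intCast(field_byte_off));
}

/// Register case: the field sits at bit position byte offset * 8.
fn fieldBitOffset(field_byte_off: u32) u6 {
    return @intCast(field_byte_off * 8);
}

test "byte offsets vs bit offsets" {
    // A u16 field at byte offset 2 inside a struct spilled at frame offset 16.
    try std.testing.expectEqual(@as(i32, 18), fieldStackOffset(16, 2));
    // The same field read out of a 64-bit register starts at bit 16.
    try std.testing.expectEqual(@as(u6, 16), fieldBitOffset(2));
}
```
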
David Rubin
09b7aabe09 riscv: add allocReg helper, and clean up some comparison logic
- Added the basic framework for panicking on overflow in `airAddWithOverflow`, but there is no check done yet.
- Added the `cmp_lt`, `cmp_gte`, and `cmp_imm_eq` MIR instructions, and their respective functionality.
2024-05-11 02:17:11 -07:00
David Rubin
08452b1add riscv: correct the order of the return epilogue 2024-05-11 02:17:11 -07:00
David Rubin
f1fe5c937e riscv: pointer work
Lots of thinking later, I've begun to wrap my head around how the pointers should work. This commit allows basic pointer loading and storing to happen.
2024-05-11 02:17:11 -07:00
David Rubin
9229321400 riscv: change up how we do args
- before, we were storing each arg in its own function argument register. With this commit, we store the args in the fa registers before calling, as per the RISC-V calling convention; however, as soon as we enter the callee, i.e. in airArg, we spill the argument to the stack. This lets us spend less effort worrying about whether we're going to clobber the function arguments when another function is called inside the callee.

- we were actually clobbering the fa regs inside of resolveCallingConvention because of the null argument to allocReg. Now each lock is stored in an array which is then iterated over and unlocked, which also aids the first point of this commit.
2024-05-11 02:17:11 -07:00
David Rubin
5e010b6dea riscv: reorganize binOp and implement cmp_imm_gte MIR
this was an annoying one to do, as there is (to my knowledge) no instruction sequence
that will let us do `gte` compares with an immediate without allocating a register.
RISC-V provides a single compare instruction, that being `lt`, so you need to
use more than one for the other variants, and in this case I believe you need to allocate a register.
2024-05-11 02:17:11 -07:00
David Rubin
190e7d0239 riscv: update builtin names 2024-05-11 02:17:11 -07:00
David Rubin
2cbd8e1deb riscv: progress toward arrays
- implement `airArrayElemVal` for arrays on the stack. This is really easy,
as we can just advance the offset by the element's byte offset into the array. This only works
when the index is comptime-known, though; it won't work for runtime access.
2024-05-11 02:17:11 -07:00
David Rubin
9b2a4582c9 riscv: implement 64 bit immediate into register loading
LLVM emits a better instruction sequence for this, one that doesn't allocate
a temporary register, but for now this will do.
2024-05-11 02:17:11 -07:00
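
One naive way to materialize a 64-bit immediate is to build each 32-bit half and combine them, with the lower half passing through a scratch register, which is the temporary mentioned above. The helper and the instruction names in the comments are assumptions, not the backend's emitted code:

```zig
const std = @import("std");

/// Split a 64-bit immediate into the two halves a naive materialization uses:
///   hi -> dst (e.g. via lui/addiw), then slli dst, dst, 32
///   lo -> a temporary register, then or dst, dst, tmp
fn splitImm64(imm: u64) struct { hi: u32, lo: u32 } {
    return .{
        .hi = @truncate(imm >> 32),
        .lo = @truncate(imm),
    };
}

test splitImm64 {
    const parts = splitImm64(0x1234_5678_9abc_def0);
    try std.testing.expectEqual(@as(u32, 0x1234_5678), parts.hi);
    try std.testing.expectEqual(@as(u32, 0x9abc_def0), parts.lo);
}
```
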
David Rubin
f67fa73fe8 riscv: 16 bit @byteSwap 2024-05-11 02:17:11 -07:00
David Rubin
b2150094ba riscv: implement basic logical shifting 2024-05-11 02:17:11 -07:00
David Rubin
664e3e16fa riscv: add cmp_eq MIR instruction
this opens up the door for addition!
2024-05-11 02:17:11 -07:00
David Rubin
3ccf0fd4c2 riscv: basic struct field access
the current implementation only works when the struct is in a register. we use some shifting magic
to get the field into the LSB, and from there, given the type provenance, the generated code should
never reach into the bits beyond the bit size of the type and interact with the rest of the struct.
2024-05-11 02:17:11 -07:00
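
Roughly what the shifting amounts to, as a sketch with assumed names rather than the backend's code: shift the register right by the field's bit offset, and rely on the field's known bit size so later code never reads the bits above it.

```zig
const std = @import("std");

/// Move a field down to the least significant bits of a register-sized value.
fn fieldToLsb(reg_value: u64, field_bit_off: u6) u64 {
    return reg_value >> field_bit_off;
}

test fieldToLsb {
    const S = packed struct { a: u8, b: u8, c: u16 };
    const word: u64 = @as(u32, @bitCast(S{ .a = 0x11, .b = 0x22, .c = 0x3333 }));
    // Field `b` starts at bit 8; only its low 8 bits are meaningful afterwards.
    try std.testing.expectEqual(@as(u8, 0x22), @as(u8, @truncate(fieldToLsb(word, 8))));
}
```
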
David Rubin
2be3033acd riscv: implement basic branching
we use a code offset map in Emit.zig to pre-compute what byte offset each MIR instruction is at. this is important because they can be
of different sizes.
2024-05-11 02:17:11 -07:00
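
A sketch of the pre-computation idea under assumed names (the real Emit.zig bookkeeping differs): one pass records the byte offset at which each instruction will start, so branch targets can be resolved even though instructions lower to different byte sizes.

```zig
const std = @import("std");

/// Given the lowered byte size of each instruction, compute the byte offset
/// each one starts at. Caller owns the returned slice.
fn computeCodeOffsets(allocator: std.mem.Allocator, sizes: []const u32) ![]u32 {
    const offsets = try allocator.alloc(u32, sizes.len);
    var off: u32 = 0;
    for (sizes, offsets) |size, *slot| {
        slot.* = off;
        off += size;
    }
    return offsets;
}

test computeCodeOffsets {
    // e.g. a pseudo instruction that expands to 8 bytes, followed by two 4-byte ones.
    const offsets = try computeCodeOffsets(std.testing.allocator, &.{ 8, 4, 4 });
    defer std.testing.allocator.free(offsets);
    try std.testing.expectEqualSlices(u32, &.{ 0, 8, 12 }, offsets);
}
```
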
David Rubin
28df64cba4 riscv: implement @abs
- add the `abs` MIR instruction
- implement `@abs` by shifting to the right by `bits - 1`, and xoring.
2024-05-11 02:17:11 -07:00
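
The branch-free trick written out at the user level (an illustration, not the emitted MIR): an arithmetic shift right by `bits - 1` yields a sign mask, and XORing with it, then subtracting it, completes the two's-complement absolute value.

```zig
const std = @import("std");

/// Branch-free absolute value: `mask` is 0 for non-negative x and all ones for
/// negative x (on hardware it comes from an arithmetic shift right by 63).
fn absViaSignMask(x: i64) u64 {
    const mask: i64 = if (x < 0) -1 else 0;
    return @bitCast((x ^ mask) -% mask);
}

test absViaSignMask {
    try std.testing.expectEqual(@as(u64, 42), absViaSignMask(-42));
    try std.testing.expectEqual(@as(u64, 42), absViaSignMask(42));
    try std.testing.expectEqual(@as(u64, 1) << 63, absViaSignMask(std.math.minInt(i64)));
}
```
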
David Rubin
060c475fcd riscv: update start.zig and restore ra from the proper stack offset 2024-05-11 02:17:11 -07:00
David Rubin
5e770407cf riscv: basic function arguments
- rename setRegOrMem -> setValue
- a naive method of passing arguments by register
- gather the prologue and epilogue and generate them in Emit.zig. this is cleaner because we have the final stack size in the emit step.
- define the "fa" register set, which contains the RISC-V calling convention defined function argument registers
2024-05-11 02:17:11 -07:00
David Rubin
dceff2592f riscv: initial cleanup and work 2024-05-11 02:17:11 -07:00
Jacob Young
08cecc1c7e x86_64: fix C abi of incomplete sse register 2024-05-08 19:37:29 -07:00
Andrew Kelley
6986d2aca9 x86_64 sysv C ABI: fix f128 param and return types
Clang 17 passed struct{f128} parameters using rdi and rax, while Clang
18 matches GCC 13.2 behavior, passing them using xmm0.

This commit makes Zig's LLVM backend match Clang 18 and GCC 13.2. The
commit deletes a hack in x86_64/abi.zig which miscategorized f128 as
"memory" which obviously disagreed with the spec.
2024-05-08 19:37:29 -07:00
Jacob Young
5d745d94fb x86_64: fix C abi for unions
Closes #19721
2024-04-22 15:24:29 -07:00
mlugg
d0e74ffe52
compiler: rework comptime pointer representation and access
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.

Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value which
pointer accesses cannot exceed since the parent has undefined layout.
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and elements accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
creates a strategy to derive a pointer value using ideally only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.

In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is the restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the loading and storing logic work somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.

As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it; using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.

The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.

Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.

In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.

Resolves: #19452
Resolves: #19460
2024-04-17 13:41:25 +01:00
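
A user-level example of the kind of reinterpretation the rewritten access logic has to resolve (illustrative, not taken from this commit's test suite): for types with well-defined layouts, the flat byte-offset representation lets a load through a cast pointer be resolved directly.

```zig
const std = @import("std");

test "comptime load through a reinterpreting pointer cast" {
    const elem = comptime blk: {
        const flat: [6]u16 = .{ 1, 2, 3, 4, 5, 6 };
        // Both views have a well-defined layout over the same 12 bytes, so the
        // comptime access logic can resolve the load via byte offsets.
        const as_3x2: *const [3][2]u16 = @ptrCast(&flat);
        const as_2x3: *const [2][3]u16 = @ptrCast(as_3x2);
        break :blk as_2x3[1][0];
    };
    try std.testing.expectEqual(@as(u16, 4), elem);
}
```
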
Jacob Young
77abd3a96a x86_64: fix miscompilation regression in package fetching code 2024-04-16 20:12:30 -04:00
Jacob Young
7611d90ba0 InternPool: remove slice from byte aggregate keys
This deletes a ton of lookups and avoids many UAF bugs.

Closes #19485
2024-04-08 13:24:08 -04:00
Jacob Young
f668c8bfd6 x86_64: fix abi of nested structs 2024-04-06 13:02:55 -07:00
Jacob Young
7580879e8b x86_64: cleanup comptime mutable memory change 2024-03-30 20:50:48 -04:00
Jacob Young
9b2345e182 Sema: rework @fieldParentPtr to accept a pointer type
There is no way to know the expected parent pointer attributes (most
notably alignment) from the type of the field pointer, so provide them
in the first argument.
2024-03-30 20:50:48 -04:00
Jacob Young
5a41704f7e cbe: rewrite CType
Closes #14904
2024-03-30 20:50:48 -04:00
mlugg
2a245e3b78
compiler: eliminate TypedValue
The only logic which remained in this file was the Value printing logic.
This has been moved into a new `print_value.zig`.
2024-03-26 13:48:07 +00:00
mlugg
a61def10c6
compiler: eliminate most usages of TypedValue 2024-03-26 13:48:07 +00:00
mlugg
26a94e8481
Zcu: eliminate Decl.alive field
Legacy anon decls now have three uses:
* Type owner decls
* Function owner decls
* `@export` and `@extern`

Therefore, there are no longer any cases where we wish to explicitly
omit legacy anon decls from the binary. This means we can remove the
concept of an "alive" vs "dead" `Decl`, which also allows us to remove
the separate `anon_work_queue` in `Compilation`.
2024-03-26 13:48:06 +00:00
mlugg
884d957b6c
compiler: eliminate legacy Value representation
Good riddance!

Most of these changes are trivial. There's a fix for a minor bug this
exposed in `Value.readFromPackedMemory`, but aside from that, it's all
just things like changing `intern` calls to `toIntern`.
2024-03-26 13:48:06 +00:00
mlugg
c6f3e9d79c
Zcu.Decl: remove ty field
`Decl` can no longer store un-interned values, so this field is now
unnecessary. The type can instead be fetched with the new `typeOf`
helper method, which just gets the type of the Decl's `Value`.
2024-03-26 13:48:06 +00:00
mlugg
9c3670fc93
compiler: implement analysis-local comptime-mutable memory
This commit changes how we represent comptime-mutable memory
(`comptime var`) in the compiler in order to implement the intended
behavior that references to such memory can only exist at comptime.

It does *not* clean up the representation of mutable values, improve the
representation of comptime-known pointers, or fix the many bugs in the
comptime pointer access code. These will be future enhancements.

Comptime memory lives for the duration of a single Sema, and is not
permitted to escape that one analysis, either by becoming runtime-known
or by becoming comptime-known to other analyses. These restrictions mean
that we can represent comptime allocations not via Decl, but with state
local to Sema - specifically, the new `Sema.comptime_allocs` field. All
comptime-mutable allocations, as well as any comptime-known const allocs
containing references to such memory, live in here. This allows for
relatively fast checking of whether a value references any
comptime-mutable memory, since we need only traverse values up to
pointers: pointers to Decls can never reference comptime-mutable memory,
and pointers into `Sema.comptime_allocs` always do.

This change exposed some faulty pointer access logic in `Value.zig`.
I've fixed the important cases, but there are some TODOs I've put in
which are definitely possible to hit with sufficiently esoteric code. I
plan to resolve these by auditing all direct accesses to pointers (most
of them ought to use Sema to perform the pointer access!), but for now
this is sufficient for all realistic code and to get tests passing.

This change eliminates `Zcu.tmp_hack_arena`, instead using the Sema
arena for comptime memory mutations, which is possible since comptime
memory is now local to the current Sema.

This change should allow `Decl` to store only an `InternPool.Index`
rather than a full-blown `ty: Type, val: Value`. This commit does not
perform this refactor.
2024-03-25 14:49:41 +00:00
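
The user-visible rule, shown with a small assumed example rather than code from the commit: pointers into comptime-mutable memory are fine while they stay inside a single comptime evaluation, but they may not become runtime-known or leak into another analysis.

```zig
const std = @import("std");

// Allowed: the comptime var (and any pointer to it) never leaves comptime.
fn triangular(comptime n: i32) i32 {
    comptime var total: i32 = 0;
    comptime {
        var i: i32 = 1;
        while (i <= n) : (i += 1) total += i;
    }
    return total;
}

test triangular {
    try std.testing.expectEqual(@as(i32, 15), triangular(5));
}

// Not allowed (sketch): returning `&total` from a function like the one above
// would be a compile error, since the reference would escape to runtime.
```
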
SuperAuguste
8e7d9afdac Add 64bit byteswap case, use fewer locals 2024-03-18 12:40:41 +01:00
Michael Dusan
5ce40e61c6
bsd: debitrot AtomicOrder renames
- complete std.builtin.AtomicOrder renames that were missed from 6067d39522f
2024-03-15 02:28:50 -04:00
Andrew Kelley
bd24e66379
Merge pull request #19229 from tiehuis/ryu-128
std.fmt: add ryu floating-point formatting implementation
2024-03-11 18:46:26 -07:00
Tristan Ross
6067d39522
std.builtin: make atomic order fields lowercase 2024-03-11 07:09:10 -07:00
Tristan Ross
9d70d614ae
std.builtin: make link mode fields lowercase 2024-03-11 07:09:10 -07:00
Tristan Ross
099f3c4039
std.builtin: make container layout fields lowercase 2024-03-11 07:09:07 -07:00
Marc Tiehuis
bb1fe112f1 wasm/codegen: add "and" + "or" impl for big ints 2024-03-10 18:15:15 +13:00