This was done by regex substitution with `sed`. I then manually went
over the entire diff and fixed any incorrect changes.
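The invocation was something along these lines (illustrative only, not the
exact command; the real tag list was larger, and keyword-colliding tags like
`.Struct` -> `.@"struct"` would have needed their own substitutions):

    find . -name '*.zig' -exec sed -i -E \
      's/\.(Bool|C|Float|Int|Optional|Pointer|Vector)\b/.\L\1/g' {} +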
This diff also changes a lot of `callconv(.C)` to `callconv(.c)`, since
my regex happened to also trigger here. I opted to leave these changes
in, since they *are* a correct migration, even if they're not the one I
was trying to do!
The compiler actually doesn't need any functional changes for this: Sema
does reification based on the tag indices of `std.builtin.Type` already!
So, no zig1.wasm update is necessary.
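As a quick illustration of reification (my own example, not from the diff):

    // @Type reifies a std.builtin.Type value into a type. Sema switches on
    // the union's tag *index*, not its name, so renaming the fields is
    // invisible to it.
    const U8 = @Type(.{ .int = .{ .signedness = .unsigned, .bits = 8 } });
    comptime {
        if (U8 != u8) @compileError("unexpected reified type");
    }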
This change is necessary to disallow name clashes between fields and
decls on a type, which is a prerequisite of #9938.
What is `sparcel`, you might ask? Good question!
If you take a peek in the SPARC v8 manual, §2.2, it is quite explicit that SPARC
v8 is a big-endian architecture. No little-endian or mixed-endian support to be
found here.
On the other hand, the SPARC v9 manual, in §3.2.1.2, states that it has support
for mixed-endian operation, with big-endian mode being the default.
Ok, so `sparcel` must just be referring to SPARC v9 running in little-endian
mode, surely?
Nope:
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L226)
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L104)
So, `sparcel` in LLVM is referring to some sort of fantastical little-endian
SPARC v8 architecture. I've scoured the internet and I can find absolutely no
evidence that such a thing exists or has ever existed. In fact, I can find no
evidence that a little-endian implementation of SPARC v9 ever existed, either.
Or any SPARC version, actually!
The support was added here: https://reviews.llvm.org/D8741
Notably, there is no mention whatsoever of what CPU this might be referring to,
and no justification given for the "but some are little" comment added in the
patch.
My best guess is that this might have been some private exercise in creating a
little-endian version of SPARC that never saw the light of day. Given that SPARC
v8 explicitly doesn't support little-endian operation (let alone little-endian
instruction encoding!), and no CPU is known to be implemented as such, I think
it's very reasonable for us to just remove this support.
This is a misfeature that we inherited from LLVM:
* https://reviews.llvm.org/D61259
* https://reviews.llvm.org/D61939
(`aarch64_32` and `arm64_32` are equivalent.)
I truly have no idea why this triple passed review in LLVM. It is, to date, the
*only* tag in the architecture component that is not, in fact, an architecture.
In reality, it is just an ILP32 ABI for AArch64 (*not* AArch32).
The triples that use `aarch64_32` look like `aarch64_32-apple-watchos`. Yes,
that triple is exactly what you think; it has no ABI component. They really,
seriously did this.
Since only Apple could come up with silliness like this, it should come as no
surprise that no one else uses `aarch64_32`. Later on, a GNU ILP32 ABI for
AArch64 was developed, and support was added to LLVM:
* https://reviews.llvm.org/D94143
* https://reviews.llvm.org/D104931
Here, sanity seems to have prevailed, and a triple using this ABI looks like
`aarch64-linux-gnu_ilp32` as you would expect.
As can be seen from the diffs in this commit, there was plenty of confusion
throughout the Zig codebase about what exactly `aarch64_32` was. So let's just
remove it. In its place, we'll use `aarch64-watchos-ilp32`,
`aarch64-linux-gnuilp32`, and so on. We'll then translate these appropriately
when talking to LLVM. Hence, this commit adds the `ilp32` ABI tag (we already
have `gnuilp32`).
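As a rough sketch of that translation (the function and its name are my
illustration, not the actual compiler code):

    const std = @import("std");

    /// Illustrative only: choose LLVM's spelling of the arch component.
    fn llvmArchName(arch: std.Target.Cpu.Arch, abi: std.Target.Abi) []const u8 {
        return switch (arch) {
            // Apple-style ILP32 on AArch64 is spelled `aarch64_32` by LLVM;
            // the GNU variant keeps `aarch64` and moves the information to
            // the ABI component (`gnu_ilp32`).
            .aarch64 => if (abi == .ilp32) "aarch64_32" else "aarch64",
            else => @tagName(arch),
        };
    }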
Reorganize how the binOp and genBinOp functions work.
I've spent quite a while here reading carefully through the spec, and many
tests are now enabled because the old design had several critical issues.
There are some regressions that will take a long time to figure out
individually, so I will ignore them for now and pray they get fixed by
themselves. Once we're closer to 100% passing, I will start diving into them
one-by-one.
- implements `airSlice`, `airBitAnd`, `airBitOr`, `airShr`.
- got a basic design going for `airErrorName`, but for some reason it simply
returns empty bytes; will investigate further.
- only generating `.got.zig` entries when not compiling an object or shared library
- reduced the maximum number of operands a mnemonic can have to 3, simplifying the logic
this one is even harder to document than the last large overhaul.
TL;DR:
- split apart Emit.zig into an Emit.zig and a Lower.zig
- created separate files for the encodings; now adding a new instruction
is as simple as adding it to a couple of switch statements and providing the encoding.
- relocs are handled in a saner manner, and we have a clear defining boundary between
`lea_symbol` and `load_symbol` now.
- added a lot of different abstractions for things like the stack, memory, registers, and others.
- we're using x86_64's `FrameIndex` now, which simplifies a lot of the tougher design work.
- a lot more that I don't have the energy to document. at this point, just read the commit itself :p
`LLVMABIAlignmentOfType(i128)` reports 16 on this target; however, the C
ABI uses align(4).
Clang in LLVM 17 does this:

    %struct.foo = type { i32, i128 }

Clang in LLVM 18 does this:

    %struct.foo = type <{ i32, i128 }>
That is, Clang works around LLVM's 16-byte alignment by making the LLVM
struct packed, so that the C ABI's align(4) layout is honored.
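In Zig terms, the struct in question is presumably something like (my
reconstruction of the C struct behind that IR):

    // On the affected target, the C ABI places `b` at byte offset 4, even
    // though LLVMABIAlignmentOfType(i128) reports 16.
    const foo = extern struct {
        a: i32,
        b: i128,
    };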
There is no way to know the expected parent pointer attributes (most
notably alignment) from the type of the field pointer, so provide them
in the first argument.
This commit changes how we represent comptime-mutable memory
(`comptime var`) in the compiler in order to implement the intended
behavior that references to such memory can only exist at comptime.
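To illustrate the intended rule with a hypothetical example (not taken from
the commit):

    const std = @import("std");

    comptime {
        var x: u32 = 0; // comptime-mutable memory, local to this analysis
        x += 1;
        const p = &x; // fine: the reference exists only at comptime
        std.debug.assert(p.* == 1);
        // Letting `p` escape to runtime code (e.g. storing it in a global
        // runtime variable) is a compile error under this change.
    }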
It does *not* clean up the representation of mutable values, improve the
representation of comptime-known pointers, or fix the many bugs in the
comptime pointer access code. These will be future enhancements.
Comptime memory lives for the duration of a single Sema, and is not
permitted to escape that one analysis, either by becoming runtime-known
or by becoming comptime-known to other analyses. These restrictions mean
that we can represent comptime allocations not via Decl, but with state
local to Sema - specifically, the new `Sema.comptime_allocs` field. All
comptime-mutable allocations, as well as any comptime-known const allocs
containing references to such memory, live in here. This allows for
relatively fast checking of whether a value references any
comptime-mutable memory, since we need only traverse values up to
pointers: pointers to Decls can never reference comptime-mutable memory,
and pointers into `Sema.comptime_allocs` always do.
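Here is a toy model of that traversal (illustrative only; these are not the
compiler's actual types):

    // Values are traversed only down to pointers; the pointer's base
    // decides the answer, exactly as described above.
    const ToyValue = union(enum) {
        int: u64,
        ptr: enum { decl, comptime_alloc },
        aggregate: []const ToyValue,
    };

    fn referencesComptimeMutable(v: ToyValue) bool {
        return switch (v) {
            .int => false,
            .ptr => |base| base == .comptime_alloc,
            .aggregate => |elems| for (elems) |e| {
                if (referencesComptimeMutable(e)) break true;
            } else false,
        };
    }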
This change exposed some faulty pointer access logic in `Value.zig`.
I've fixed the important cases, but there are some TODOs I've put in
which are definitely possible to hit with sufficiently esoteric code. I
plan to resolve these by auditing all direct accesses to pointers (most
of them ought to use Sema to perform the pointer access!), but for now
this is sufficient for all realistic code and to get tests passing.
This change eliminates `Zcu.tmp_hack_arena`, instead using the Sema
arena for comptime memory mutations, which is possible since comptime
memory is now local to the current Sema.
This change should allow `Decl` to store only an `InternPool.Index`
rather than a full-blown `ty: Type, val: Value`. This commit does not
perform this refactor.
A pointer type already has an alignment, so this information does not
need to be duplicated on the function type. There is already precedent
for this: addrspace is likewise disallowed on function types for the
same reason. Also fixes `@TypeOf(&func)` to have the correct addrspace
and alignment.
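For example (a sketch; the exact reflection details are my assumption):

    const std = @import("std");

    fn foo() align(32) void {}

    test "alignment lives on the pointer, not the function type" {
        const P = @TypeOf(&foo); // roughly: *align(32) const fn () void
        try std.testing.expect(@typeInfo(P).Pointer.alignment == 32);
    }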
Instead of creating Module.Decl objects, directly create InternPool
pointer values using the anon_decl Addr encoding.
The LLVM backend needed code to notice the alignment of the pointer and
lower accordingly. The other backends likely need a similar change.