Before this, calls to `resolveTypeFieldsStruct` (now renamed to the more correct `resolveStructFieldTypes`) would just throw away the sema that `resolveStructInner` created and create their own. There is no reason to do this, so we fix it to preserve the sema throughout.
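A minimal sketch of the shape of the fix, with a stand-in type and hypothetical signatures rather than the actual compiler code:

```zig
const std = @import("std");

// Hedged sketch only: `Sema` is a stand-in type and the signatures mirror the
// commit message, not the exact compiler code.
const Sema = struct { depth: u32 = 0 };

fn resolveStructInner(sema: *Sema) !void {
    // Previously, resolveStructFieldTypes() ignored this sema and built its own.
    // Now the same sema is threaded through.
    try resolveStructFieldTypes(sema);
}

fn resolveStructFieldTypes(sema: *Sema) !void {
    sema.depth += 1; // reuse the caller's sema instead of constructing a fresh one
}

test "the same sema flows through both functions" {
    var sema: Sema = .{};
    try resolveStructInner(&sema);
    try std.testing.expect(sema.depth == 1);
}
```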
Some of this is arbitrary since spirv (as opposed to spirv32/spirv64) refers to
the version with logical memory layout, i.e. no 'real' pointers. This change at
least matches what clang does.
This target triple was weird on multiple levels:
* The `ilp32` ABI is the soft float ABI. This is not the main ABI we want to
support on RISC-V; rather, we want `ilp32d`.
* `gnuilp32` is a bespoke tag that was introduced in Zig. The rest of the world
just uses `gnu` for RISC-V target triples.
* `gnu_ilp32` is already the name of an ILP32 ABI used on AArch64. `gnuilp32` is
too easy to confuse with this.
* We don't use this convention for `riscv64-linux-gnu`.
* Supporting all RISC-V ABIs with this convention will result in combinatorial
explosion; see #20690.
std.debug.Dwarf is now the parsing/decoding logic. std.dwarf retains only the
unopinionated types and bits.
If you look at this diff, you can see a lot less namespace redundancy.
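For illustration only (assuming the post-rename names), the split looks roughly like this from a user's perspective:

```zig
const std = @import("std");

test "dwarf namespaces after the rename" {
    // std.dwarf: just the unopinionated constants and types from the DWARF spec.
    try std.testing.expect(std.dwarf.TAG.compile_unit == 0x11);
    // std.debug.Dwarf: the opinionated parsing/decoding logic.
    const Dwarf = std.debug.Dwarf;
    _ = Dwarf;
}
```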
What is `sparcel`, you might ask? Good question!
If you take a peek in the SPARC v8 manual, §2.2, it is quite explicit that SPARC
v8 is a big-endian architecture. No little-endian or mixed-endian support to be
found here.
On the other hand, the SPARC v9 manual, in §3.2.1.2, states that it has support
for mixed-endian operation, with big-endian mode being the default.
Ok, so `sparcel` must just be referring to SPARC v9 running in little-endian
mode, surely?
Nope:
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L226)
* 40b4fd7a3e/llvm/lib/Target/Sparc/SparcTargetMachine.cpp (L104)
So, `sparcel` in LLVM is referring to some sort of fantastical little-endian
SPARC v8 architecture. I've scoured the internet and I can find absolutely no
evidence that such a thing exists or has ever existed. In fact, I can find no
evidence that a little-endian implementation of SPARC v9 ever existed, either.
Or any SPARC version, actually!
The support was added here: https://reviews.llvm.org/D8741
Notably, there is no mention whatsoever of what CPU this might be referring to,
and no justification given for the "but some are little" comment added in the
patch.
My best guess is that this might have been some private exercise in creating a
little-endian version of SPARC that never saw the light of day. Given that SPARC
v8 explicitly doesn't support little-endian operation (let alone little-endian
instruction encoding!), and no CPU is known to be implemented as such, I think
it's very reasonable for us to just remove this support.
This is a misfeature that we inherited from LLVM:
* https://reviews.llvm.org/D61259
* https://reviews.llvm.org/D61939
(`aarch64_32` and `arm64_32` are equivalent.)
I truly have no idea why this triple passed review in LLVM. It is, to date, the
*only* tag in the architecture component that is not, in fact, an architecture.
In reality, it is just an ILP32 ABI for AArch64 (*not* AArch32).
The triples that use `aarch64_32` look like `aarch64_32-apple-watchos`. Yes,
that triple is exactly what you think; it has no ABI component. They really,
seriously did this.
Since only Apple could come up with silliness like this, it should come as no
surprise that no one else uses `aarch64_32`. Later on, a GNU ILP32 ABI for
AArch64 was developed, and support was added to LLVM:
* https://reviews.llvm.org/D94143
* https://reviews.llvm.org/D104931
Here, sanity seems to have prevailed, and a triple using this ABI looks like
`aarch64-linux-gnu_ilp32` as you would expect.
As can be seen from the diffs in this commit, there was plenty of confusion
throughout the Zig codebase about what exactly `aarch64_32` was. So let's just
remove it. In its place, we'll use `aarch64-watchos-ilp32`,
`aarch64-linux-gnuilp32`, and so on. We'll then translate these appropriately
when talking to LLVM. Hence, this commit adds the `ilp32` ABI tag (we already
have `gnuilp32`).
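As a rough sketch of what that translation could look like (the function names and mappings here are illustrative assumptions, not the actual compiler code):

```zig
const std = @import("std");

/// Illustrative only: the arch component LLVM expects for a given Zig target.
fn llvmArchName(target: std.Target) []const u8 {
    // Apple's watchOS ILP32 ABI is spelled in the arch component on the LLVM side.
    if (target.cpu.arch == .aarch64 and target.abi == .ilp32) return "aarch64_32";
    return @tagName(target.cpu.arch);
}

/// Illustrative only: the ABI component LLVM expects.
fn llvmAbiName(abi: std.Target.Abi) []const u8 {
    return switch (abi) {
        .gnuilp32 => "gnu_ilp32", // the rest of the world's spelling for GNU ILP32
        else => @tagName(abi),
    };
}
```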
This was added as an architecture to LLVM's target triple parser and the Clang
driver in 2015. No backend ever materialized as far as I can see (same for GCC).
In 2016, other code referring to it started using "Myriad" instead. Ultimately,
all code related to it that isn't in the target triple parser was removed. It
seems to be a real product, just... literally no one seems to know anything
about the ISA. I figure after almost a decade with no public ISA documentation
to speak of, and no LLVM backend to reference, it's probably safe to assume that
we're not going to learn much about this ISA, making it useless for Zig.
See: 1b5767f72b
See: 84a7564b28
See: 8cfe9d8f2a
This patch is a pure rename, the only other change being the file paths at
`@import` sites, so it is not expected to create version control conflicts,
even when rebasing.
This reverts commit a7de02e05216db9a04e438703ddf1b6b12f3fbef.
This did not implement the accepted proposal, and I did not sign off
on the changes. I would like a chance to review this, please.
This one is even harder to document than the last large overhaul.
TL;DR:
- split apart Emit.zig into an Emit.zig and a Lower.zig
- created separate files for the encoding, and now adding a new instruction
is as simple as adding it to a couple of switch statements and providing the encoding
(see the sketch after this list).
- relocs are handled in a more sane manner, and we have a clear defining boundary between
lea_symbol and load_symbol now.
- a lot of different abstractions for things like the stack, memory, registers, and others.
- we're using x86_64's FrameIndex now, which simplifies a lot of the tougher design process.
- a lot more that I don't have the energy to document. At this point, just read the commit itself :p
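To give a flavor of the "couple of switch statements plus an encoding" point above, here is a hedged sketch; the enum, struct, and values are illustrative, not the actual backend code:

```zig
// Illustrative sketch only; not the actual backend's enums or encodings module.
const Mnemonic = enum { addi, ld, sd, my_new_inst };

const Encoding = struct { opcode: u7, funct3: u3 };

fn encoding(m: Mnemonic) Encoding {
    return switch (m) {
        .addi => .{ .opcode = 0b0010011, .funct3 = 0b000 },
        .ld => .{ .opcode = 0b0000011, .funct3 = 0b011 },
        .sd => .{ .opcode = 0b0100011, .funct3 = 0b011 },
        // Adding an instruction: one new enum tag plus one new switch arm here
        // (and, in the real backend, an arm in the lowering dispatch).
        .my_new_inst => .{ .opcode = 0b1111111, .funct3 = 0b111 },
    };
}
```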
This provides a much better indication of whether we are having a controlled panic with an error message
or actually segfaulting; before, the `trap` was causing a segfault.
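A purely hypothetical illustration of the difference in failure modes (not the code this commit touches):

```zig
// Hypothetical helper, for illustration only.
fn fail(comptime use_panic: bool, comptime msg: []const u8) noreturn {
    if (use_panic) {
        @panic(msg); // controlled panic: prints the message (and a trace in safe modes)
    } else {
        @trap(); // previously: showed up as a segfault, with no error message
    }
}
```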
* some manual fixes to generated CPU features code. In the future it
would be nice to make the script do those automatically.
* add to various target OS switches. I was unsure of some of the values and
added TODO panics, for example in the case of the spirv CPU arch.
The minimum required glibc is v2.17, as earlier versions do not define
some symbols (e.g., getauxval()) used by the Zig standard library.
Additionally, glibc only supports some architectures at more recent
versions (e.g., riscv64 support came in glibc v2.27). So add a
`glibc_min` field to `available_libcs` for architectures with stronger
version requirements.
Extend the existing `canBuildLibC` function to check the target against
the Zig minimum, and the architecture/os minimum.
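A hedged sketch of the shape of that data and check; the names approximate the description above rather than the exact source:

```zig
const std = @import("std");

// Illustrative shape of an `available_libcs` entry.
const ArchOsAbi = struct {
    arch: std.Target.Cpu.Arch,
    os: std.Target.Os.Tag,
    abi: std.Target.Abi,
    /// Overrides the global glibc minimum (2.17) when this arch needs a newer one.
    glibc_min: ?std.SemanticVersion = null,
};

const riscv64_gnu: ArchOsAbi = .{
    .arch = .riscv64,
    .os = .linux,
    .abi = .gnu,
    .glibc_min = .{ .major = 2, .minor = 27, .patch = 0 }, // riscv64 landed in glibc 2.27
};
```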
Also filter the list shown by `zig targets`:
$ zig targets | jq -c '.glibc'
["2.17.0","2.18.0","2.19.0","2.20.0","2.21.0","2.22.0","2.23.0","2.24.0","2.25.0","2.26.0","2.27.0","2.28.0","2.29.0","2.30.0","2.31.0","2.32.0","2.33.0","2.34.0","2.35.0","2.36.0","2.37.0","2.38.0"]
Fixes #17034
Fixes #17769
This is necessary because on COFF, the entry symbol name is not known
until the linker has looked at the set of global symbol names to
determine which of the four possible main entry points is present.
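Roughly speaking, the selection looks like this (a hypothetical sketch; the entry names follow the usual CRT convention, and the real logic lives in the linker):

```zig
/// Hypothetical sketch: choose the CRT entry symbol once the linker has seen
/// which of the four possible mains is present among the global symbols.
fn entrySymbolName(has_winmain: bool, has_wwinmain: bool, has_wmain: bool) []const u8 {
    if (has_winmain) return "WinMainCRTStartup";
    if (has_wwinmain) return "wWinMainCRTStartup";
    if (has_wmain) return "wmainCRTStartup";
    return "mainCRTStartup"; // plain `main`
}
```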
Commit 97e23896a9168132b6d36ca22ae1af10dd53d80d regressed this behavior
because it made target_util.supportsStackProtector *correctly*
notice which zig backend is being used to generate code, while the
logic calling that function *incorrectly assumed* that .zig code is being
compiled, when in reality it might be only C code being compiled.
This commit adjusts the option resolution logic for stack protector so
that it takes into account the zig backend only if there is a zig
compilation unit. A separate piece of logic checks whether clang
supports stack protector for a given target.
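A hedged sketch of the adjusted resolution; the function and parameter names are illustrative, not the actual Compilation code:

```zig
// Illustrative sketch of the resolution order described above.
fn wantStackProtector(
    have_zcu: bool, // is there a Zig compilation unit at all?
    zig_backend_supports_it: bool, // target_util.supportsStackProtector(...)
    have_c_sources: bool,
    clang_supports_it: bool, // separate check for the C parts of the build
) bool {
    const zig_ok = !have_zcu or zig_backend_supports_it;
    const c_ok = !have_c_sources or clang_supports_it;
    return zig_ok and c_ok;
}
```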
closes #18009
closes #18114
closes #18254
These options are only supposed to be provided to the initialization
functions, resolved, and the computed values then stored in the appropriate
place (the base struct or the object-format-specific structs).
Many more to go...
Much of the logic from Compilation.create() is extracted into
Compilation.Config.resolve() which accepts many optional settings and
produces concrete settings. This separate step is needed by API users of
Compilation so that they can pass the resolved global settings to the
Module creation function, which itself needs to resolve per-Module
settings.
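In shape, the two-step flow looks something like this (illustrative field names, not the actual Compilation.Config fields):

```zig
const std = @import("std");

// Illustrative sketch of the two-step resolution described above.
const Options = struct {
    use_llvm: ?bool = null, // null means "not specified; let resolution decide"
    optimize_mode: ?std.builtin.OptimizeMode = null,
};

const Config = struct {
    use_llvm: bool,
    optimize_mode: std.builtin.OptimizeMode,

    fn resolve(options: Options) Config {
        return .{
            .use_llvm = options.use_llvm orelse true,
            .optimize_mode = options.optimize_mode orelse .Debug,
        };
    }
};
```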
Since the target and other things are no longer global settings, I did
not want them stored in link.File (in the `options` field). That options
field was already a kludge; those options should be resolved into
concrete settings. This commit also starts to work on that, deleting
link.Options, moving the fields into Compilation and
ObjectFormat-specific structs instead. Some fields were ephemeral and
should not have been stored at all, such as symbol_size_hint.
The link.File object of Compilation is now a `?*link.File` and `null`
when -fno-emit-bin is passed. It is now arena-allocated along with
Compilation itself, avoiding some messy cleanup code that was there
before.
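The effect on the field itself, as an illustrative shape only:

```zig
// Illustrative shape only; `LinkFile` stands in for link.File.
const LinkFile = struct {};

const Compilation = struct {
    /// Null when -fno-emit-bin is passed. Otherwise allocated in the same arena
    /// as the Compilation itself, so no separate cleanup is needed.
    bin_file: ?*LinkFile,

    fn flush(comp: *Compilation) void {
        const lf = comp.bin_file orelse return; // nothing to emit
        _ = lf;
    }
};
```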
On the command line, it is now possible to configure the standard
library itself by using `--mod std` just like any other module. This
meant that the CLI needed to create the standard library module rather
than having Compilation create it.
There are a lot of changes in this commit and it's still not done. I
didn't realize how quickly this changeset was going to balloon out of
control, and there are still many lines that need to be changed before
it even compiles successfully.
* introduce std.Build.Cache.HashHelper.oneShot
* add error_tracing to std.Build.Module
* extract builtin.zig file generation into src/Builtin.zig
* each CSourceFile and RcSourceFile now has a Module owner, which
determines some of the C compiler flags.
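For that last point, a rough sketch of the shape, with assumed field names for illustration:

```zig
const std = @import("std");

// Illustrative field names only, not the exact structs.
const Module = struct {
    optimize_mode: std.builtin.OptimizeMode,
    // ...other per-module settings that feed into C compiler flags
};

const CSourceFile = struct {
    /// The owning module; its settings determine some of the C compiler flags.
    owner: *Module,
    src_path: []const u8,
    extra_flags: []const []const u8 = &.{},
};
```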