Compare commits

...

146 Commits

Author SHA1 Message Date
Andrew Kelley
d03a147ea0 Release 0.14.1 2025-05-21 22:46:47 -07:00
mlugg
7218218040
build runner: don't incorrectly omit reference traces
It's incorrect to ever set `include_reference_trace` here, because the
compiler has already given or not given reference traces depending on
the `-freference-trace` option propagated to the compiler process by
`std.Build.Step.Compile`.

Perhaps in future we could make the compiler always return the reference
trace when communicating over the compiler protocol; that'd be more
versatile than the current behavior, because the build runner could, for
instance, show a reference trace on-demand without having to even invoke
the compiler. That seems really useful, since the reference trace is
*often* unnecessary noise, but *sometimes* essential. However, we don't
live in that world right now, so passing the option here doesn't make
sense.

Resolves: #23415
2025-05-17 00:36:54 +02:00
mlugg
f377ea1060
doctest: handle relative paths correctly
Evaluate all child processes in the temporary directory, and use
`std.fs.path.relative` to make every other path relative to that child
cwd instead of our cwd.
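For illustration, a small sketch of `std.fs.path.relative` (the paths here are hypothetical):

```zig
const std = @import("std");

test "make a path relative to a child cwd" {
    const gpa = std.testing.allocator;
    // Hypothetical paths: a temporary directory used as the child's cwd,
    // and a file that must be addressed relative to it.
    const rel = try std.fs.path.relative(gpa, "/tmp/doctest", "/tmp/doctest/out/example.zig");
    defer gpa.free(rel);
    try std.testing.expectEqualStrings("out" ++ std.fs.path.sep_str ++ "example.zig", rel);
}
```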

Resolves: #22119
2025-05-17 00:36:23 +02:00
Marc Tiehuis
455ea58872
std.hash.Wyhash: fix dangling stack pointer
Closes #23895.
2025-05-16 17:03:39 +02:00
Alex Rønne Petersen
925cc08b95
main: List -f(no-)builtin as per-module options.
Contributes to #23424.
2025-05-14 05:44:45 +02:00
Alex Rønne Petersen
90e8af98eb
test: Fix incorrect interpretation of -Dtest-filter=... for test-debugger. 2025-05-14 05:44:32 +02:00
Alex Rønne Petersen
16b331f5fd
Air: Fix mustLower() to consider volatile for a handful of instructions.
These can all potentially operate on volatile pointers.
2025-05-14 05:43:57 +02:00
Alex Rønne Petersen
4bf17f0a78
Air: Always return true for inline assembly in mustLower().
AstGen requires inline assembly to either have outputs or be marked volatile, so
there doesn't appear to be any point in doing these checks.
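As a minimal illustration of that AstGen rule (the x86 instruction is chosen arbitrarily):

```zig
fn spinHint() void {
    // This assembly has no outputs, so AstGen requires the `volatile` marker.
    asm volatile ("pause");
}
```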
2025-05-14 05:43:54 +02:00
Alex Rønne Petersen
59f92bff69
Air: Fix mustLower() for atomic_load with inter-thread ordering. 2025-05-14 05:43:28 +02:00
Alex Rønne Petersen
199782edd1
riscv64: Handle writes to the zero register sensibly in result bookkeeping. 2025-05-14 05:43:13 +02:00
Alex Rønne Petersen
bf21e4f725
riscv64: Add missing fence for seq_cst atomic_store. 2025-05-14 05:43:04 +02:00
Cezary Kupaj
c4237e8909
Fix SIGSEGV handler for AArch64 Darwin targets
* The ucontext_t pointer is 8-byte aligned rather than the 16-byte alignment that @alignCast() expects
* Retrieve the pc address from ucontext_t, since unwind_state is null
* Work around __mcontext_data being written incorrectly by the kernel
2025-05-14 05:39:01 +02:00
Michael Pfaff
0cb9ffc6d8
Fix implementation of std.os.linux.accept on x86 2025-05-10 10:27:17 +02:00
Alex Rønne Petersen
9070607c03
glibc: Fix stub libraries containing unwanted symbols.
Closes #8096.
2025-05-09 16:44:04 +02:00
mlugg
7199cfc21f
Compilation: don't warn about failure to delete missing C depfile
If clang encounters bad imports, the depfile will not be generated. It
doesn't make sense to warn the user in this case. In fact,
`FileNotFound` is never worth warning about here; it just means that
the file we were deleting to save space isn't there in the first place!
If the missing file actually affected the compilation (e.g. another
process raced to delete it for some reason) we would already error in
the normal code path which reads these files, so we can safely omit the
warning in the `FileNotFound` case always, only warning when the file
might still exist.

To see what this fixes, create the following file...

```c
#include <nonexist>
```

...and run `zig build-obj` on it. Before this commit, you will get a
redundant warning; after this commit, that warning is gone.
2025-05-09 16:43:57 +02:00
Meghan Denny
b1082a31a5
std.os: handle ENOENT for fcntl on macos 2025-05-09 16:43:50 +02:00
xdBronch
55acb29d68
translate-c: fix callconv attribute in macro 2025-05-09 16:43:35 +02:00
HydroH
b21fa8e2cd
std: fix compile errors in std.crypto.ecc (#23797)
Implemented the `neg()` method for the `AffineCoordinates` struct of the
p256, p384, and secp256k1 curves.

Resolves: #20505 (partially)
2025-05-06 18:03:03 +02:00
Alex Rønne Petersen
5cfd47660c
0934823815f1d4336b2160f09f65df5ba8e52a15 take 2. 2025-05-05 09:07:52 +02:00
Alex Rønne Petersen
0934823815
Unbreak the build (156ab8750056c3ff440af0937806d8cdb2623816 is not in the 0.14.x branch). 2025-05-05 08:06:09 +02:00
tjog
e739ba1bd9
disable getauxvalImpl instrumentation as libfuzzer's allocator may need to call it 2025-05-05 07:26:06 +02:00
tjog
f592674642
link+macho+fuzz: use correct input type
A debug build of the compiler detects invalid union access here, since
`classifyInputFile` detects `.archive` while this line constructed an
`.object` input.
2025-05-05 07:25:55 +02:00
Alex Rønne Petersen
566e4ab6b1
compiler: Set libc++ ABI version to 2 for Emscripten.
It remains 1 everywhere else.

Also remove some code that allowed setting the libc++ ABI version on the
Compilation since there are no current plans to actually expose this in the CLI.
2025-05-05 07:25:25 +02:00
Xavier Bouchoux
7b45bd3c09
fix system library lookup when cross-compiling to windows-msvc 2025-05-04 02:52:33 +02:00
Matthew Lugg
e07d8fccd1
Merge pull request #23263 from mlugg/comptime-field-ptr
Sema: fix pointers to comptime fields of comptime-known aggregate pointers
2025-05-04 02:51:47 +02:00
mlugg
db936b9094
compiler: fix comptime memory store bugs
* When storing a zero-bit type, we should short-circuit almost
  immediately. Zero-bit stores do not need to do any work.
* The bit size computation for arrays is incorrect; the `abiSize` will
  already be appropriately aligned, but the logic to do so here
  incorrectly assumes that zero-bit types have an alignment of 0. They
  don't; their alignment is 1.
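The alignment fact from the second bullet can be checked directly:

```zig
const std = @import("std");

test "zero-bit types have alignment 1, not 0" {
    // Zero-bit types occupy no storage, but their alignment is still 1.
    try std.testing.expectEqual(@as(usize, 0), @sizeOf(void));
    try std.testing.expectEqual(@as(usize, 1), @alignOf(void));
    try std.testing.expectEqual(@as(usize, 1), @alignOf(u0));
}
```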

Resolves: #21202
Resolves: #21508
Resolves: #23307
2025-05-04 02:51:42 +02:00
mlugg
87983e800a
std.Progress: fix many bugs
There were several bugs with the synchronization here; most notably an
ABA problem which was causing #21663. I fixed that and some other
issues, and took the opportunity to get rid of the `.seq_cst` orderings
from this file. I'm at least relatively sure my new orderings are correct.

Co-authored-by: achan1989 <achan1989@gmail.com>
Resolves: #21663
2025-05-04 02:51:07 +02:00
Pat Tullmann
142a890c37
std.os.linux: Fix MIPS signal numbers
Dunno why the MIPS signal numbers are different, or why Zig already had
them special-cased, but wrong.

We have the technology to test these constants.  We should use it.
2025-05-02 18:30:50 +02:00
Pavel Verigo
331bd83f11
wasm-c-abi: llvm fix struct handling + reorganize
I moved this code to `wasm/abi.zig`; this design is certainly better than the previous one. There is still some conflict of interest between the LLVM and self-hosted backends; a better design will emerge once the ABI tests are also run against the self-hosted backend.

Resolves: #23304
Resolves: #23305
2025-05-02 18:30:32 +02:00
Alex Rønne Petersen
0209c68fcc
compiler-rt: Add missing _Qp_sqrt export for sparc64.
https://github.com/ziglang/zig/issues/23716
2025-05-01 21:36:19 +02:00
Alex Rønne Petersen
aea1272a3f
test: Disable vector reduce operation for sparc.
https://github.com/ziglang/zig/issues/23719
2025-05-01 21:34:57 +02:00
Alex Rønne Petersen
a09c1d91ed
test: Disable some varargs behavior tests on sparc.
https://github.com/ziglang/zig/issues/23718
2025-05-01 21:34:53 +02:00
Alex Rønne Petersen
47e46b58d2
std.os.linux: Add missing time_t definition for sparc64. 2025-05-01 21:34:49 +02:00
Ali Cheraghi
200fb1e92e
test: skip "struct fields get automatically reordered" for spirv64 backend 2025-05-01 21:34:34 +02:00
psbob
8717453208
Fix Unexpected error for 1453 on Windows (#23729) 2025-05-01 21:31:58 +02:00
Dongjia Zhang
8a5f834240
use correct symbol for the end of the pcguard section 2025-04-28 20:48:30 +02:00
mlugg
bee19572c8
Sema: fix a few indexing bugs
* Indexing zero-bit types should not produce AIR indexing instructions
* Getting a runtime-known element pointer from a many-pointer should
  check that the many-pointer is not comptime-only
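A minimal illustration of the first bullet (a runtime-known index needs no indexing instructions because the element type is zero-bit):

```zig
test "runtime index into an array of zero-bit values" {
    var arr: [4]void = undefined;
    var i: usize = 2;
    _ = &i; // keep the index runtime-known
    arr[i] = {}; // zero-bit element type: no AIR indexing instruction is needed
}
```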

Resolves: #23405
2025-04-28 20:48:24 +02:00
dweiller
b5c22777f8
sema: do checked cast when resolving aggregate size 2025-04-28 20:48:19 +02:00
xdBronch
7e68999f79
Sema: fix memcpy with C pointers 2025-04-28 12:10:04 +02:00
Alex Rønne Petersen
470dac8a77
wasi-libc: Fix paths to psignal.c and strsignal.c.
Closes #23709.
2025-04-28 01:03:15 +02:00
Alex Rønne Petersen
e6a71e9e7a
Sema: Fix some ptr alignment checks to handle a potential ISA tag bit.
Closes #23570.
2025-04-28 00:58:53 +02:00
Shun Sakai
168981c678
docs(std.ascii): Remove redundant three slashes 2025-04-28 00:58:44 +02:00
Kevin Primm
23ab05f1f5
compiler: Fix -m<os>-version-min=... ordering 2025-04-27 14:27:52 +02:00
mlugg
160f2dabed
std.Build.Cache: fix several bugs
Aside from adding comments to document the logic in `Cache.Manifest.hit`
better, this commit fixes two serious bugs.

The first, spotted by Andrew, is that when upgrading from a shared to an
exclusive lock on the manifest file, we do not seek it back to the
start. This is a simple fix.

The second is more subtle, and has to do with the computation of file
digests. Broadly speaking, the goal of the main loop in `hit` is to
iterate the files listed in the manifest file, and check if they've
changed, based on stat and a file hash. While doing this, the
`bin_digest` field of `std.Build.Cache.File`, which is initially
`undefined`, is populated for all files, either straight from the
manifest (if the stat matches) or recomputed from the file on-disk. This
file digest is then used to update `man.hash.hasher`, which is building
the final hash used as, for instance, the output directory name when the
compiler emits into the cache directory. When `hit` returns a cache
miss, it is expected that `man.hash.hasher` includes the digests of all
"initial files"; that is, those which have been already added with e.g.
`addFilePath`, but not those which will later be added with
`addFilePost` (even though the manifest file has told us about some such
files). Previously, `hit` was using the `unhit` function to do this in a
few cases. However, this is incorrect, because `hit` assumes that all
files already have their `bin_digest` field populated; this function is
only valid to call *after* `hit` returns. Instead, we need to actually
compute the hashes which haven't yet been populated. Even if this logic
has been working, there was still a bug here, because we called `unhit`
when upgrading from a shared to an exclusive lock, writing the
(potentially `undefined`) file digests, but the loop itself writes the
file digests *again*! All in all, the hashing logic here was actually
incredibly broken.

I've taken the opportunity to restructure this section of the code into
what I think is a more readable format. A new function,
`hitWithCurrentLock`, uses the open manifest file to try and find a
cache hit. It returns a tagged union which, in the miss case, tells the
caller (`hit`) how many files already have their hash populated. This
avoids redundant work recomputing the same hash multiple times in
situations where the lock needs upgrading. This also eliminates the
outer loop from `hit`, which was a little confusing because it iterated
no more than twice!

The bugs fixed here could manifest in several different ways depending
on how contended file locks were satisfied. Most notably, on a cache
miss, the Zig compiler might have written the compilation output to the
incorrect directory (because it incorrectly constructed a hash using
`undefined` or repeated file digests), resulting in all future hits on
this manifest causing `error.FileNotFound`. This is #23110. I have been
able to reproduce #23110 on `master`, and have not been able to after
this commit, so I am relatively sure this commit resolves that issue.
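The invariant being restored is easy to model in isolation. A toy sketch (not the actual `std.Build.Cache` code): digests start unpopulated, are computed at most once, and every initial file's digest is folded into the final hasher on a miss:

```zig
const std = @import("std");
const Blake3 = std.crypto.hash.Blake3;

const File = struct {
    contents: []const u8,
    bin_digest: ?[32]u8 = null, // starts unpopulated, like the `undefined` field above
};

// On a miss, every initial file's digest must end up in the final hasher,
// computing each digest at most once.
fn foldInitialFileDigests(hasher: *Blake3, initial_files: []File) void {
    for (initial_files) |*f| {
        if (f.bin_digest == null) {
            var d: [32]u8 = undefined;
            Blake3.hash(f.contents, &d, .{});
            f.bin_digest = d;
        }
        hasher.update(&f.bin_digest.?);
    }
}

test "each digest is computed once and folded once" {
    var files = [_]File{
        .{ .contents = "const a = 1;" },
        .{ .contents = "const b = 2;" },
    };
    var h = Blake3.init(.{});
    foldInitialFileDigests(&h, &files);
    var out: [32]u8 = undefined;
    h.final(&out);
    try std.testing.expect(files[0].bin_digest != null and files[1].bin_digest != null);
}
```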

Resolves: #23110
2025-04-27 14:08:21 +02:00
Michael Pfaff
53f298cffa
Calculate WTF-8 length before converting instead of converting into an intermediate buffer on the stack 2025-04-26 15:07:26 +02:00
tjog
3ca0f18bfe
fuzz: fix expected section start/end symbol name on macOS when linking libfuzzer
Not only is the section name different when adding the sancov variables;
the linker symbol that ends up in the binary is also different.

Reference: 60105ac6ba/llvm/lib/Transforms/Instrumentation/SanitizerCoverage.cpp (L1076-L1104)
2025-04-26 15:07:21 +02:00
Alex Rønne Petersen
a8844ab3bc
std.Target.amdgcn.cpu.gfx1153 doesn't exist in LLVM 19. 2025-04-25 20:14:44 +02:00
Ryan Liptak
aa013b7643
FailingAllocator: remove outdated doc comments, move doc comment example to decltest
Note: The decltests for files-as-a-struct don't show up in autodoc currently
2025-04-25 19:58:11 +02:00
Ali Cheraghi
f38a28a626
revive nvptx linkage 2025-04-25 19:57:45 +02:00
Ali Cheraghi
af6670c403
Module: ignore xnack and sramecc features on some gpu models 2025-04-25 19:57:39 +02:00
Pavel Otchertsov
fd7aafdbd5
cmake: support static linking against libxml2 2025-04-16 23:40:28 +02:00
phatchman
4b47e978e3
Return FileNotFound when CreateProcessW is called with a missing path (#23567) 2025-04-16 04:25:02 +02:00
David Rubin
d80cfa6f41 Compilation: Use trapping UBSan if -fno-ubsan-rt is passed.
This is a mitigation of #23216 meant only for 0.14.x.
2025-04-15 21:29:10 +02:00
Alex Rønne Petersen
379f1c9fa0
std.Build.Step: Don't capture a stack trace if !std.debug.sys_can_stack_trace. 2025-04-15 01:30:52 +02:00
Alex Rønne Petersen
c0378e85b6
link: Improve handling of --build-id when using LLD. 2025-04-15 01:30:34 +02:00
Luis Cáceres
ebb37e719d
src/libunwind.zig: Fix symbol visibility macro define
The define was changed in commit 729899f7b6bf6aff65988d895d7a639391a67608
in upstream llvm.
2025-04-15 01:30:27 +02:00
kcbanner
527df938e1
Value: ensure that extern structs have their layout resolved in ptrField 2025-04-11 21:35:42 +02:00
Jacob Young
1d34616236
x86_64: fix error_set_has_value of inferred error sets 2025-04-11 16:59:55 +02:00
Alex Rønne Petersen
a5f4107d3e
Compilation: Pass -m<os>-version-min=... to Clang for all applicable Darwin targets. 2025-04-11 02:19:22 +02:00
Pat Tullmann
2a7683933a
linux.zig: epoll_wait: pass kernel sigset size
Linux kernel syscalls expect to be given the number of bits of sigset that
they're built for, not the full 1024-bit sigsets that glibc supports.

I audited the other syscalls in here that use `sigset_t` and they're all
using `NSIG / 8`.
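A sketch of the convention, assuming `std.os.linux.NSIG` (65 on most Linux architectures, 128 on MIPS):

```zig
const std = @import("std");
const linux = std.os.linux;

test "kernel sigset size passed to syscalls" {
    // Syscall wrappers pass NSIG / 8 bytes (the kernel's sigset_t size),
    // not the 128 bytes of glibc's 1024-bit sigset_t.
    const kernel_sigset_bytes: usize = linux.NSIG / 8;
    try std.testing.expect(kernel_sigset_bytes != 1024 / 8);
}
```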

Fixes #12715
2025-04-10 10:49:04 +02:00
Techatrix
83e1ce1e00
Compilation: Fix logic in addCCArgs() for various file types and flags.
Co-authored-by: Alex Rønne Petersen <alex@alexrp.com>
2025-04-09 15:04:24 +02:00
Meghan Denny
9397dc5af6
std: add nvidia as a known arm implementer 2025-04-09 15:03:45 +02:00
SuperAuguste
4ab34b142e
Fix mach-o naming for sancov sections 2025-04-09 15:03:17 +02:00
Matthew Roush
fbb297fd2a
Make translate-c more robust in handling macro functions.
Translate-c didn't properly account for C macro functions having parameter names that are C keywords. So something like `#define FOO(float) ((float) + 10)` would've been interpreted as casting `+10` to a `float` type, instead of adding `10` to the parameter `float`.

An example of a real-world macro function like this is SDL3's `SDL_DEFINE_AUDIO_FORMAT` from `SDL_audio.h`, which uses `signed` as a parameter.
2025-04-08 12:11:30 +02:00
Stefan Weigl-Bosker
8bb7c85bd4
start: fix pc register syntax for m68k 2025-04-08 12:10:22 +02:00
Alex Rønne Petersen
79e3c4a9a8
start: Align the stack on m68k. 2025-04-08 12:10:16 +02:00
SuperAuguste
60922dbf34
Remove overzealous LLVM anti-instrumentation attributes 2025-04-07 12:06:26 +02:00
Alex Rønne Petersen
b2feb0d575
glibc: Add missing stubs-lp64s.h for loongarch64-linux-gnusf.
https://sourceware.org/bugzilla/show_bug.cgi?id=32776
2025-04-06 17:23:19 +02:00
Ziyi Yan
a100419d06
Add lld path of linuxbrew installation (#23466)
Co-authored-by: Alex Rønne Petersen <alex@alexrp.com>
2025-04-06 09:11:15 +02:00
Jacob Young
cf6c8eacfe Dwarf: handle undefined type values
Closes #23461
2025-04-06 00:56:57 -04:00
Jacob Young
cac0f56c03 x86_64: fix incorrect handling of unreusable operands
Closes #23448
2025-04-06 00:56:44 -04:00
Zenomat
a2ea4b02bc
std.net: Implement if_nametoindex for windows (#22555) 2025-04-05 20:41:50 +02:00
Dimitris Dinodimos
dac350f7c8
Change the lld path on macos homebrew
Homebrew now provides lld in a separate formula; previously it was part
of the llvm formula.
2025-04-04 06:06:28 +02:00
Alex Rønne Petersen
1bada4b275
Merge pull request #23447 from alexrp/cpuid-updates 2025-04-03 19:32:59 +02:00
Alex Rønne Petersen
f5de2770e5
Merge pull request #23445 from alexrp/external-executor-fixes 2025-04-03 19:32:54 +02:00
Misaki Kasumi
d128f5c0bb
std.os.linux: block all signals in raise 2025-04-02 23:57:32 +02:00
Parker Liu
06fc600aec
translate-c: fix function prototype declared inside a function
* If a function prototype is declared inside a function, do not
  translate it to a top-level extern function declaration. Similar to
  extern local variables, wrap it in a block-local struct.

* Add a new extern_local_fn tag to the aro_translate_c Node to represent
  extern local function declarations.

* When a function body has a C function prototype declaration, it adds
  an extern local function declaration. Subsequent function references
  will look for this function declaration.
2025-04-02 23:56:07 +02:00
Auguste Rame
0b4176891c
DebugAllocator: Fix bucket removal logic causing segfault/leak (#23390)
Make buckets doubly linked
2025-04-02 14:22:15 +02:00
mlugg
ceb84c647b
stage1: fix wasi_snapshot_preview1_fd_seek on cache files
`wasm2c` uses an interesting mechanism to "fake" the existence of cache
directories. However, `wasi_snapshot_preview1_fd_seek` was not correctly
integrated with this system, so it previously crashed when run on a file
in a cache directory due to calling `fseek` on a `FILE *` which was
`NULL`.
2025-04-02 14:21:56 +02:00
Mason Remaley
4089134892
Zcu: fix ZOIR cache bugs
* When saving bigint limbs, we gave the iovec the wrong length, meaning
  bigint data (and the following string and compile error data) was corrupted.
* When updating a stale ZOIR cache, we failed to truncate the file, so
  just wrote more bytes onto the end of the stale cache.
2025-04-02 14:21:51 +02:00
David Rubin
f2c838d2cf
Sema: increment extra index even if return type is generic 2025-04-02 08:43:28 +02:00
Ali Cheraghi
edaa9584cc
zon: normalize negative zeroes 2025-04-02 08:43:21 +02:00
mlugg
f5e7850686
Sema: allow @ptrCast slice of zero-bit type to slice of non-zero-bit type
This is actually completely well-defined. The resulting slice always has
0 elements. The only disallowed case is casting *to* a slice of a
zero-bit type, because in that case, you can't figure out how many
destination elements to use (and there's *no* valid destination length
if the source slice corresponds to more than 0 bits).
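A sketch of the newly allowed direction, under the semantics described above:

```zig
const std = @import("std");

test "@ptrCast slice of zero-bit elements to slice of bytes" {
    const zero_bits: [4]void = undefined; // void elements carry no data
    const src: []const void = &zero_bits;
    // Well-defined: the source corresponds to 0 bits, so the resulting
    // slice always has length 0.
    const dst: []const u8 = @ptrCast(src);
    try std.testing.expectEqual(@as(usize, 0), dst.len);
}
```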
2025-04-02 08:43:13 +02:00
Parker Liu
6ecd143212
translate-c: fix referencing extern locals from nested blocks 2025-04-02 08:43:13 +02:00
Jacob Young
373ae980c0 Elf: fix incrementally reallocating the last atom in a section 2025-03-31 23:18:38 -04:00
Alex Rønne Petersen
b8d7866193
Merge pull request #23371 from alexrp/ci-redundancy
Remove some `aarch64-linux` CI steps that are already covered by `x86_64-linux`
2025-03-31 17:52:34 +02:00
Alex Rønne Petersen
4c0913ff7c
Merge pull request #23417 from dweiller/zstd-fixes
Zstd fixes
2025-03-31 17:52:23 +02:00
Simon Brown
e5ea175ffb
Add quota for comptime sort, add test 2025-03-31 17:52:16 +02:00
David Rubin
aca8ed9dec
Sema: convert slice sentinel to single pointer correctly 2025-03-31 17:36:45 +02:00
mlugg
6ac462b088
Zcu: resolve layout of analyzed declaration type
Resolves: #19888
2025-03-31 17:36:44 +02:00
Sean Stasiak
9025f73733
check result of mmap() call to handle a large base_addr value correctly 2025-03-27 21:08:23 +01:00
Alex Rønne Petersen
1423b38c45
Merge pull request #23378 from alexrp/build-zig-cleanup 2025-03-27 21:08:10 +01:00
Alex Rønne Petersen
ed6418544c
Merge pull request #23373 from alexrp/get-base-address
`std.process`: Some minor fixes for `getBaseAddress()`
2025-03-27 21:08:04 +01:00
Alex Rønne Petersen
3ae9a99f62
build: increase test-std max rss 2025-03-27 12:19:16 +01:00
Андрей Краевский
8088105b05
std.meta.FieldType -> @FieldType 2025-03-27 12:19:07 +01:00
孙冰
38a8fd5d85
std.posix: update LFS64 interfaces for android bionic C 2025-03-26 23:52:16 +01:00
Felix "xq" Queißner
3592868435
Enable parsing of '-Wl,-rpath,' in pkg-config output, allowing better support for linking on NixOS. 2025-03-26 23:51:58 +01:00
wooster0
27ae10afe0
linux: don't export getauxval when not required 2025-03-26 21:52:44 +01:00
Kendall Condon
f391a2cd20
Allocator.create: properly handle alignment for zero-sized types (#21864) 2025-03-26 21:52:12 +01:00
dweiller
172dc6c314
zig build: add env_map entries to hash for Step.Run
This change fixes false-positive cache hits for run steps that get run
with different sets of environment variables, due to the environment map
being excluded from the cache hash.
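As an illustration, a minimal `build.zig` sketch (the executable and the `MY_MODE` variable are made up); with this change, runs that differ only in the environment hash differently:

```zig
const std = @import("std");

pub fn build(b: *std.Build) void {
    const exe = b.addExecutable(.{
        .name = "app",
        .root_source_file = b.path("src/main.zig"),
        .target = b.standardTargetOptions(.{}),
        .optimize = b.standardOptimizeOption(.{}),
    });
    const run = b.addRunArtifact(exe);
    // Before this fix, changing only this value could yield a
    // false-positive cache hit, because the env map was not hashed.
    run.setEnvironmentVariable("MY_MODE", "fast");
    b.step("run", "Run the app").dependOn(&run.step);
}
```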
2025-03-26 15:41:17 +01:00
Andrew Barchuk
0d65b014ea
Clarify the multidimensional array example
Use a rectangular matrix instead of a square one to distinguish rows and
columns more clearly. Extend the example with row access.
2025-03-26 15:41:14 +01:00
mlugg
27f3e8b61d
Zcu: include named tests in resolved references
Oops, a little typo from yours truly! No test for this one, because we
don't have any way of testing the reference trace.
2025-03-26 15:41:09 +01:00
Arnau Camprubí
9c857bb32d
Fix std.debug.dumpHex address offsets 2025-03-26 15:40:56 +01:00
Alex Rønne Petersen
38ececf0a7
Merge pull request #23310 from Rexicon226/fix-23309
big.int: return normalized results from `{add,sub}Carry`
2025-03-25 18:44:58 +01:00
Chris Clark
cb3eec285f
std.zig.Ast: Fix error case memory leak in parse() 2025-03-25 15:40:55 +01:00
David Rubin
598413357d
Sema: use unwrapped generic owner in getFuncInstanceIes 2025-03-25 15:24:41 +01:00
godalming123
0367d46d3c
Update the documentation comment in arena_allocator.zig to be more accurate
Update the documentation comment in arena_allocator.zig to specify that free() is a no-op unless the item is the most recent allocation.
2025-03-25 15:24:20 +01:00
孙冰
d67bf8bde3
std.c: android bionic C supports arc4random_buf and getentropy
1. https://android.googlesource.com/platform/bionic/+/refs/heads/main/libc/include/bits/getentropy.h
2. https://android.googlesource.com/platform/bionic/+/refs/heads/main/libc/include/stdlib.h
2025-03-25 15:24:07 +01:00
Alex Rønne Petersen
7d8a556ba9
Merge pull request #23220 from samy-00007/bytesAsSlice-fix
Minor fix for `Allocator.remap` and `mem.bytesAsSlice` for zero-sized types
2025-03-25 15:23:08 +01:00
rpkak
1aca3dd6e0
DepTokenizer: allow space between target and colon 2025-03-24 15:31:59 +01:00
Shun Sakai
f062ec2a1a
docs(std.base64): Add references to RFC 4648
There are multiple implementations of Base64, but `std.base64` appears
to be based on RFC 4648, so we clarify that it is based on RFC 4648.
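For reference, a quick check that the standard codec matches the RFC 4648 alphabet:

```zig
const std = @import("std");

test "std.base64 standard alphabet (RFC 4648)" {
    var buf: [8]u8 = undefined;
    const out = std.base64.standard.Encoder.encode(&buf, "zig");
    try std.testing.expectEqualStrings("emln", out);
}
```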
2025-03-24 15:31:58 +01:00
GasInfinity
a7cfc23e5a
fix(std/fmt.zig): fix overflow in fmtDurationSigned
fixes #23315
2025-03-24 15:31:53 +01:00
Carl Åstholm
e62a3ea74e
Use -unknown when converting WASI/Emscripten target triples into LLVM triples
The "musl" part of the Zig target triples `wasm32-wasi-musl` and
`wasm32-emscripten-musl` refers to the libc, not really the ABI.

For WASM, most LLVM-based tooling uses `wasm32-wasi`, which is
normalized into `wasm32-unknown-wasi`, with an implicit `-unknown` and
without `-musl`.

Similarly, Emscripten uses `wasm32-unknown-emscripten` without `-musl`.

By using `-unknown` instead of `-musl` we get better compatibility with
external tooling.
2025-03-24 07:04:51 +01:00
mlugg
eedfce92b0
Sema: fix in-memory coercion of functions introducing new generic parameters
While it is not allowed for a function coercion to change whether a
function is generic, it *is* okay to make existing concrete parameters
of a generic function also generic, or vice versa. Either of these cases
implies that the result is a generic function, so comptime type checks
will happen when the function is ultimately called.

Resolves: #21099
2025-03-24 07:02:05 +01:00
Jacob Young
7b9e482ed6 x86_64: fix rare miscomp that clobbers memory 2025-03-23 21:59:18 -04:00
Jacob Young
7757302c3a big.int: fix negative multi-limb shift right adjust crash 2025-03-23 21:59:12 -04:00
Jacob Young
4f47be5c6b big.int: fix yet another truncate bug
Too many bugs have been found with `truncate` at this point, so it was
rewritten from scratch.

Based on the doc comment, the utility of `convertToTwosComplement` over
`r.truncate(a, .unsigned, bit_count)` is unclear and it has a subtle
behavior difference that is almost certainly a bug, so it was deleted.
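A usage sketch of the rewritten operation, following the call shape quoted above (the exact `Managed` method signatures here are assumptions):

```zig
const std = @import("std");
const Managed = std.math.big.int.Managed;

test "truncate to unsigned two's complement" {
    const gpa = std.testing.allocator;
    var a = try Managed.initSet(gpa, -1);
    defer a.deinit();
    var r = try Managed.init(gpa);
    defer r.deinit();
    // Interpret the low 8 bits of -1 as unsigned: 0xFF.
    try r.truncate(&a, .unsigned, 8);
    var expected = try Managed.initSet(gpa, 255);
    defer expected.deinit();
    try std.testing.expect(r.eql(expected));
}
```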
2025-03-23 21:59:07 -04:00
Jacob Young
7199a86b97 Merge pull request #23256 from xtexx/fix-gh-20113
x86_64: fix packedStore miscomp by spilling EFLAGS
2025-03-23 21:53:16 -04:00
Jacob Young
fe8bdf6f04 codegen: fix packed byte-aligned relocations
Closes #23131
2025-03-23 21:40:03 -04:00
mlugg
c71b78eb01 link: mark prelink tasks as processed under -fno-emit-bin
The old logic only decremented `remaining_prelink_tasks` if `bin_file`
was not `null`. This meant that on `-fno-emit-bin` builds with
registered prelink tasks (e.g. C source files), we exited from
`Compilation.performAllTheWorkInner` early, assuming a prelink error.

Instead, when `bin_file` is `null`, we still decrement
`remaining_prelink_tasks`; we just don't do any actual work.
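The shape of the fix, modeled in miniature (a toy, not the actual `Compilation` code):

```zig
const std = @import("std");

const Comp = struct {
    remaining_prelink_tasks: u32,
    bin_file: ?*anyopaque,

    fn processPrelinkTask(c: *Comp) void {
        // Decrement unconditionally so that "all prelink tasks done" is
        // still reached under -fno-emit-bin; only the work is skipped.
        defer c.remaining_prelink_tasks -= 1;
        if (c.bin_file == null) return; // no binary being emitted
        // ... actual link work would happen here ...
    }
};

test "counter reaches zero even with no bin_file" {
    var c: Comp = .{ .remaining_prelink_tasks = 2, .bin_file = null };
    c.processPrelinkTask();
    c.processPrelinkTask();
    try std.testing.expectEqual(@as(u32, 0), c.remaining_prelink_tasks);
}
```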

Resolves: #22682
2025-03-22 18:45:34 -07:00
Ryan Liptak
59bdd77229 Trick the meson build system into thinking zig rc is rc.exe
When determining the type of RC compiler, meson passes `/?` or `--version` and then reads from `stdout` looking for particular string(s) anywhere in the output.

So, by adding the string "Microsoft Resource Compiler" to the `/?` output, meson will recognize `zig rc` as rc.exe and give it the correct options, which works fine since `zig rc` is drop-in CLI compatible with rc.exe.

This allows using `zig rc` with meson for (cross-)compiling, by either:

- Setting WINDRES="zig rc" or putting windres = ['zig', 'rc'] in the cross-file
  + This will work like rc.exe, so it will output .res files. This will only link successfully if you are using a linker that can do .res -> .obj conversion (so something like zig cc, MSVC, lld)
- Setting WINDRES="zig rc /:output-format coff" or putting windres = ['zig', 'rc', '/:output-format', 'coff'] in the cross-file
  + This will make meson pass flags as if it were rc.exe, but it will cause the resulting .res file to actually be a COFF object file, meaning it will work with any linker that handles COFF object files

Example cross file that uses `zig cc` (which can link `.res` files, so `/:output-format coff` is not necessary) and `zig rc`:

```
[binaries]
c = ['zig', 'cc', '--target=x86_64-windows-gnu']
windres = ['zig', 'rc']

[target_machine]
system = 'windows'
cpu_family = 'x86_64'
cpu = 'x86_64'
endian = 'little'
```
2025-03-21 15:07:57 -07:00
Alex Rønne Petersen
bfc554b542 compiler: Support more GCC code models and fix the mapping to LLVM code models.
Closes #22517.
2025-03-20 13:34:04 -07:00
Alex Rønne Petersen
fbdf64a7da
mingw: Rename mingw32.lib to libmingw32.lib.
LLD expects the library file name (minus extension) to be exactly libmingw32. By
calling it mingw32 previously, we prevented it from being detected as being in
LLD's list of libraries that are excluded from the MinGW-specific auto-export
mechanism.

b9d27ac252/lld/COFF/MinGW.cpp (L30-L56)

As a result, a DLL built for *-windows-gnu with Zig would export a bunch of
internal MinGW symbols. This sometimes worked out fine, but it could break at
link or run time when linking an EXE with a DLL, where both are targeting
*-windows-gnu and thus linking separate copies of mingw32.lib. In #23204, this
manifested as the linker getting confused about _gnu_exception_handler() because
it was incorrectly exported by the DLL while also being defined in the
mingw32.lib that was being linked into the EXE.

Closes #23204.
2025-03-18 20:49:35 +01:00
Roman Frołow
74a79da4ec
typo: was issues -> was issued 2025-03-18 04:58:08 +01:00
Jonathan Gautheron
18b821666e
std.zig.c_translation: fix function pointer casting 2025-03-18 04:58:03 +01:00
mlugg
6c690a966a
Sema: correctly handle empty by-ref initializers
Resolves: #23210
2025-03-18 04:57:57 +01:00
Loris Cro
f954950485
std.Build.Watch: fix macos implementation
The code did one useless thing and two wrong things:

- ref counting was basically a noop
- last_dir_fd was chosen from the wrong index and also under the wrong
  condition

This caused regular crashes on macOS which are now gone.
2025-03-18 04:57:31 +01:00
Elijah M. Immer
f79dacbfc4
lib/std/http/Client.zig: Ignore empty proxy environment variables (#23223)
This fixes #21032 by ignoring proxy environment variables that are
empty.
2025-03-14 21:20:55 +01:00
TCROC
dc75a64c46
glibc: fix uninitialized memory in __pthread_cond_s for <=2.40
* https://sourceware.org/bugzilla/show_bug.cgi?id=32786
* https://inbox.sourceware.org/libc-alpha/87zfhpfqsm.fsf@oldenburg.str.redhat.com
2025-03-14 21:19:17 +01:00
mlugg
fdc9326868 Zcu: rename skip_analysis_errors to skip_analysis_this_update and respect it
On updates with failed files, we should refrain from doing any semantic
analysis, or even touching codegen/link. That way, incremental
compilation state is untouched for when the user fixes the AstGen
errors.

Resolves: #23205
2025-03-12 12:25:50 -07:00
mlugg
af4b39395c std.mem.Allocator.remap: fix incorrect doc comment (part 2) 2025-03-12 12:25:32 -07:00
孙冰
99b5a4f294 std.c: fix sysconf names (std.c._SC) for android api
c.f. https://android.googlesource.com/platform/bionic/+/refs/heads/main/libc/include/bits/sysconf.h
2025-03-12 12:25:20 -07:00
Andrew Kelley
22b7d02282 Merge pull request #23188 from jacobly0/fix-23143
x86_64: fix crashes with symbols
2025-03-12 12:25:05 -07:00
mlugg
623d5cc7f6 Sema: fix handling of @This() on opaques
Resolves: #22869
2025-03-11 11:21:03 -07:00
Mathias Lafeldt
ba97b1a2a2 Merge pull request #23193 from mlafeldt/fix-macho-detection
Fetch: enhance Mach-O executable detection for modern Macs

closes #21044
2025-03-11 11:20:55 -07:00
mlugg
72775adcd0 std.mem.Allocator.remap: fix incorrect doc comment
Resolves: #23194
2025-03-11 11:20:46 -07:00
Andrew Kelley
372d56371f Merge pull request #21933 from kcbanner/comptime_nan_comparison
Fix float vector comparisons with signed zero and NaN, add test coverage
2025-03-09 12:07:47 -07:00
Alex Rønne Petersen
6d44a8cd0b std.Target.Query: Don't append glibc version in zigTriple() if ABI isn't GNU. 2025-03-09 12:07:34 -07:00
Andrew Kelley
71e2f653cf Reapply "build: Don't check parent directories for git tag"
This reverts commit 7e0c25eccd8d9bc5b77953dbc9a39a26e383c550.

The `--git-dir` argument is relative to the `-C` argument, making this
patch OK after all.

I added a comment to go along with this since I found it confusing.

Apologies for the revert.
2025-03-09 12:06:59 -07:00
Ian Johnson
2ef72f84ca Sema: handle generated tag enums in union field order check
Fixes #23059

The "note: enum field here" now references the field in the base union type rather than crashing.
2025-03-08 11:29:56 -08:00
Jacob Young
8b9c517515 compiler-rt: fix signed min int from float 2025-03-08 11:22:46 -08:00
Alex Rønne Petersen
1a7ffe4aae Compilation: Fix -fno-rtlib-defaultlib unused argument warning in ReleaseSafe.
Closes #23138.
2025-03-08 11:22:38 -08:00
Alex Rønne Petersen
c9c58ebbe3
test: Disable test-elf-ld-script-path-error for now.
https://github.com/ziglang/zig/issues/23125
2025-03-08 07:07:49 +01:00
Alex Rønne Petersen
ed583e5466
zig cc: Don't pass -mabi for assembly files when targeting arm.
Clang's integrated Arm assembler doesn't understand -mabi yet, so this results
in "unused command line argument" warnings when building musl code and glibc
stubs, for example.
2025-03-08 04:13:20 +01:00
Andrew Kelley
8e91862571 fix InstallArtifact opening empty string
this appears to have been a problem since 43f73af3595c3174b8e67e9f2792c3774f2192e9
2025-03-07 13:34:13 -08:00
Andrew Kelley
61a95ab662 start the 0.14.1 release cycle 2025-03-05 12:42:08 -08:00
154 changed files with 3856 additions and 1566 deletions


@ -3,8 +3,7 @@ on:
pull_request:
push:
branches:
- master
- llvm19
- 0.14.x
concurrency:
# Cancels pending runs when a PR gets updated.
group: ${{ github.head_ref || github.run_id }}-${{ github.actor }}


@ -39,7 +39,7 @@ project(zig
set(ZIG_VERSION_MAJOR 0)
set(ZIG_VERSION_MINOR 14)
set(ZIG_VERSION_PATCH 0)
set(ZIG_VERSION_PATCH 1)
set(ZIG_VERSION "" CACHE STRING "Override Zig version string. Default is to find out with git.")
if("${ZIG_VERSION}" STREQUAL "")
@ -90,6 +90,7 @@ set(ZIG_STATIC_LLVM ${ZIG_STATIC} CACHE BOOL "Prefer linking against static LLVM
set(ZIG_STATIC_ZLIB ${ZIG_STATIC} CACHE BOOL "Prefer linking against static zlib")
set(ZIG_STATIC_ZSTD ${ZIG_STATIC} CACHE BOOL "Prefer linking against static zstd")
set(ZIG_STATIC_CURSES OFF CACHE BOOL "Enable static linking against curses")
set(ZIG_STATIC_LIBXML2 OFF CACHE BOOL "Enable static linking against libxml2")
if (ZIG_SHARED_LLVM AND ZIG_STATIC_LLVM)
message(SEND_ERROR "-DZIG_SHARED_LLVM and -DZIG_STATIC_LLVM cannot both be enabled simultaneously")
@ -167,6 +168,12 @@ if(ZIG_STATIC_CURSES)
list(APPEND LLVM_LIBRARIES "${CURSES}")
endif()
if(ZIG_STATIC_LIBXML2)
list(REMOVE_ITEM LLVM_LIBRARIES "-lxml2")
find_library(LIBXML2 NAMES libxml2.a NAMES_PER_DIR)
list(APPEND LLVM_LIBRARIES "${LIBXML2}")
endif()
find_package(Threads)
set(ZIG_CONFIG_H_OUT "${PROJECT_BINARY_DIR}/config.h")


@ -11,7 +11,7 @@ const assert = std.debug.assert;
const DevEnv = @import("src/dev.zig").Env;
const ValueInterpretMode = enum { direct, by_name };
const zig_version: std.SemanticVersion = .{ .major = 0, .minor = 14, .patch = 0 };
const zig_version: std.SemanticVersion = .{ .major = 0, .minor = 14, .patch = 1 };
const stack_size = 46 * 1024 * 1024;
pub fn build(b: *std.Build) !void {
@ -214,11 +214,6 @@ pub fn build(b: *std.Build) !void {
test_step.dependOn(&exe.step);
if (target.result.os.tag == .windows and target.result.abi == .gnu) {
// LTO is currently broken on mingw, this can be removed when it's fixed.
exe.want_lto = false;
}
const use_llvm = b.option(bool, "use-llvm", "Use the llvm backend");
exe.use_llvm = use_llvm;
exe.use_lld = use_llvm;
@ -257,13 +252,10 @@ pub fn build(b: *std.Build) !void {
var code: u8 = undefined;
const git_describe_untrimmed = b.runAllowFail(&[_][]const u8{
"git",
"-C",
b.build_root.path orelse ".",
"describe",
"--match",
"*.*.*",
"--tags",
"--abbrev=9",
"-C", b.build_root.path orelse ".", // affects the --git-dir argument
"--git-dir", ".git", // affected by the -C argument
"describe", "--match", "*.*.*", //
"--tags", "--abbrev=9",
}, &code, .Ignore) catch {
break :v version_string;
};
@ -334,7 +326,12 @@ pub fn build(b: *std.Build) !void {
try addCmakeCfgOptionsToExe(b, cfg, exe, use_zig_libcxx);
} else {
// Here we are -Denable-llvm but no cmake integration.
try addStaticLlvmOptionsToModule(exe.root_module);
try addStaticLlvmOptionsToModule(exe.root_module, .{
.llvm_has_m68k = llvm_has_m68k,
.llvm_has_csky = llvm_has_csky,
.llvm_has_arc = llvm_has_arc,
.llvm_has_xtensa = llvm_has_xtensa,
});
}
if (target.result.os.tag == .windows) {
// LLVM depends on networking as of version 18.
@ -362,11 +359,7 @@ pub fn build(b: *std.Build) !void {
&[_][]const u8{ tracy_path, "public", "TracyClient.cpp" },
);
// On mingw, we need to opt into windows 7+ to get some features required by tracy.
const tracy_c_flags: []const []const u8 = if (target.result.os.tag == .windows and target.result.abi == .gnu)
&[_][]const u8{ "-DTRACY_ENABLE=1", "-fno-sanitize=undefined", "-D_WIN32_WINNT=0x601" }
else
&[_][]const u8{ "-DTRACY_ENABLE=1", "-fno-sanitize=undefined" };
const tracy_c_flags: []const []const u8 = &.{ "-DTRACY_ENABLE=1", "-fno-sanitize=undefined" };
exe.root_module.addIncludePath(.{ .cwd_relative = tracy_path });
exe.root_module.addCSourceFile(.{ .file = .{ .cwd_relative = client_cpp }, .flags = tracy_c_flags });
@ -513,8 +506,8 @@ pub fn build(b: *std.Build) !void {
.skip_non_native = skip_non_native,
.skip_libc = skip_libc,
.use_llvm = use_llvm,
// I observed a value of 5136793600 on the M2 CI.
.max_rss = 5368709120,
// I observed a value of 5605064704 on the M2 CI.
.max_rss = 6165571174,
}));
const unit_tests_step = b.step("test-unit", "Run the compiler source unit tests");
@ -821,7 +814,12 @@ fn addCmakeCfgOptionsToExe(
}
}
fn addStaticLlvmOptionsToModule(mod: *std.Build.Module) !void {
fn addStaticLlvmOptionsToModule(mod: *std.Build.Module, options: struct {
llvm_has_m68k: bool,
llvm_has_csky: bool,
llvm_has_arc: bool,
llvm_has_xtensa: bool,
}) !void {
// Adds the Zig C++ sources which both stage1 and stage2 need.
//
// We need this because otherwise zig_clang_cc1_main.cpp ends up pulling
@ -845,6 +843,22 @@ fn addStaticLlvmOptionsToModule(mod: *std.Build.Module) !void {
mod.linkSystemLibrary(lib_name, .{});
}
if (options.llvm_has_m68k) for (llvm_libs_m68k) |lib_name| {
mod.linkSystemLibrary(lib_name, .{});
};
if (options.llvm_has_csky) for (llvm_libs_csky) |lib_name| {
mod.linkSystemLibrary(lib_name, .{});
};
if (options.llvm_has_arc) for (llvm_libs_arc) |lib_name| {
mod.linkSystemLibrary(lib_name, .{});
};
if (options.llvm_has_xtensa) for (llvm_libs_xtensa) |lib_name| {
mod.linkSystemLibrary(lib_name, .{});
};
mod.linkSystemLibrary("z", .{});
mod.linkSystemLibrary("zstd", .{});
@ -1333,6 +1347,33 @@ const llvm_libs = [_][]const u8{
"LLVMSupport",
"LLVMDemangle",
};
const llvm_libs_m68k = [_][]const u8{
"LLVMM68kDisassembler",
"LLVMM68kAsmParser",
"LLVMM68kCodeGen",
"LLVMM68kDesc",
"LLVMM68kInfo",
};
const llvm_libs_csky = [_][]const u8{
"LLVMCSKYDisassembler",
"LLVMCSKYAsmParser",
"LLVMCSKYCodeGen",
"LLVMCSKYDesc",
"LLVMCSKYInfo",
};
const llvm_libs_arc = [_][]const u8{
"LLVMARCDisassembler",
"LLVMARCCodeGen",
"LLVMARCDesc",
"LLVMARCInfo",
};
const llvm_libs_xtensa = [_][]const u8{
"LLVMXtensaDisassembler",
"LLVMXtensaAsmParser",
"LLVMXtensaCodeGen",
"LLVMXtensaDesc",
"LLVMXtensaInfo",
};
fn generateLangRef(b: *std.Build) std.Build.LazyPath {
const doctest_exe = b.addExecutable(.{

ci/aarch64-linux-debug.sh Normal file → Executable file

@ -48,11 +48,6 @@ unset CXX
ninja install
# simultaneously test building self-hosted without LLVM and with 32-bit arm
stage3-debug/bin/zig build \
-Dtarget=arm-linux-musleabihf \
-Dno-lib
# No -fqemu and -fwasmtime here as they're covered by the x86_64-linux scripts.
stage3-debug/bin/zig build test docs \
--maxrss 24696061952 \
@ -62,34 +57,12 @@ stage3-debug/bin/zig build test docs \
--zig-lib-dir "$PWD/../lib" \
-Denable-superhtml
# Ensure that updating the wasm binary from this commit will result in a viable build.
stage3-debug/bin/zig build update-zig1
mkdir ../build-new
cd ../build-new
export CC="$ZIG cc -target $TARGET -mcpu=$MCPU"
export CXX="$ZIG c++ -target $TARGET -mcpu=$MCPU"
cmake .. \
-DCMAKE_PREFIX_PATH="$PREFIX" \
-DCMAKE_BUILD_TYPE=Debug \
-DZIG_TARGET_TRIPLE="$TARGET" \
-DZIG_TARGET_MCPU="$MCPU" \
-DZIG_STATIC=ON \
-DZIG_NO_LIB=ON \
-GNinja
unset CC
unset CXX
ninja install
stage3/bin/zig test ../test/behavior.zig
stage3/bin/zig build -p stage4 \
-Dstatic-llvm \
-Dtarget=native-native-musl \
stage3-debug/bin/zig build \
--prefix stage4-debug \
-Denable-llvm \
-Dno-lib \
--search-prefix "$PREFIX" \
--zig-lib-dir "$PWD/../lib"
stage4/bin/zig test ../test/behavior.zig
-Dtarget=$TARGET \
-Duse-zig-libcxx \
-Dversion-string="$(stage3-debug/bin/zig version)"
stage4-debug/bin/zig test ../test/behavior.zig

ci/aarch64-linux-release.sh Normal file → Executable file

@ -48,11 +48,6 @@ unset CXX
ninja install
# simultaneously test building self-hosted without LLVM and with 32-bit arm
stage3-release/bin/zig build \
-Dtarget=arm-linux-musleabihf \
-Dno-lib
# No -fqemu and -fwasmtime here as they're covered by the x86_64-linux scripts.
stage3-release/bin/zig build test docs \
--maxrss 24696061952 \
@ -77,35 +72,3 @@ stage3-release/bin/zig build \
echo "If the following command fails, it means nondeterminism has been"
echo "introduced, making stage3 and stage4 no longer byte-for-byte identical."
diff stage3-release/bin/zig stage4-release/bin/zig
# Ensure that updating the wasm binary from this commit will result in a viable build.
stage3-release/bin/zig build update-zig1
mkdir ../build-new
cd ../build-new
export CC="$ZIG cc -target $TARGET -mcpu=$MCPU"
export CXX="$ZIG c++ -target $TARGET -mcpu=$MCPU"
cmake .. \
-DCMAKE_PREFIX_PATH="$PREFIX" \
-DCMAKE_BUILD_TYPE=Release \
-DZIG_TARGET_TRIPLE="$TARGET" \
-DZIG_TARGET_MCPU="$MCPU" \
-DZIG_STATIC=ON \
-DZIG_NO_LIB=ON \
-GNinja
unset CC
unset CXX
ninja install
stage3/bin/zig test ../test/behavior.zig
stage3/bin/zig build -p stage4 \
-Dstatic-llvm \
-Dtarget=native-native-musl \
-Dno-lib \
--search-prefix "$PREFIX" \
--zig-lib-dir "$PWD/../lib"
stage4/bin/zig test ../test/behavior.zig


@ -12,8 +12,9 @@ find_path(LLD_INCLUDE_DIRS NAMES lld/Common/Driver.h
/usr/lib/llvm-19/include
/usr/local/llvm190/include
/usr/local/llvm19/include
/usr/local/opt/llvm@19/include
/opt/homebrew/opt/llvm@19/include
/usr/local/opt/lld@19/include
/opt/homebrew/opt/lld@19/include
/home/linuxbrew/.linuxbrew/opt/lld@19/include
/mingw64/include)
find_library(LLD_LIBRARY NAMES lld-19.0 lld190 lld NAMES_PER_DIR
@ -22,8 +23,9 @@ find_library(LLD_LIBRARY NAMES lld-19.0 lld190 lld NAMES_PER_DIR
/usr/lib/llvm-19/lib
/usr/local/llvm190/lib
/usr/local/llvm19/lib
/usr/local/opt/llvm@19/lib
/opt/homebrew/opt/llvm@19/lib
/usr/local/opt/lld@19/lib
/opt/homebrew/opt/lld@19/lib
/home/linuxbrew/.linuxbrew/opt/lld@19/lib
)
if(EXISTS ${LLD_LIBRARY})
set(LLD_LIBRARIES ${LLD_LIBRARY})
@ -37,8 +39,9 @@ else()
/usr/lib/llvm-19/lib
/usr/local/llvm190/lib
/usr/local/llvm19/lib
/usr/local/opt/llvm@19/lib
/opt/homebrew/opt/llvm@19/lib
/usr/local/opt/lld@19/lib
/opt/homebrew/opt/lld@19/lib
/home/linuxbrew/.linuxbrew/opt/lld@19/lib
/mingw64/lib
/c/msys64/mingw64/lib
c:/msys64/mingw64/lib)


@ -316,7 +316,7 @@
<a href="https://ziglang.org/documentation/0.11.0/">0.11.0</a> |
<a href="https://ziglang.org/documentation/0.12.0/">0.12.0</a> |
<a href="https://ziglang.org/documentation/0.13.0/">0.13.0</a> |
<a href="https://ziglang.org/documentation/0.14.0/">0.14.0</a> |
<a href="https://ziglang.org/documentation/0.14.1/">0.14.1</a> |
master
</nav>
<nav aria-labelledby="table-of-contents">
@ -3679,22 +3679,22 @@ void do_a_thing(struct Foo *foo) {
<tr>
<th scope="row">{#syntax#}.{x}{#endsyntax#}</th>
<td>{#syntax#}T{#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}std.meta.FieldType(T, .@"0"){#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}@FieldType(T, "0"){#endsyntax#}</td>
</tr>
<tr>
<th scope="row">{#syntax#}.{ .a = x }{#endsyntax#}</th>
<td>{#syntax#}T{#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}std.meta.FieldType(T, .a){#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}@FieldType(T, "a"){#endsyntax#}</td>
</tr>
<tr>
<th scope="row">{#syntax#}T{x}{#endsyntax#}</th>
<td>-</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}std.meta.FieldType(T, .@"0"){#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}@FieldType(T, "0"){#endsyntax#}</td>
</tr>
<tr>
<th scope="row">{#syntax#}T{ .a = x }{#endsyntax#}</th>
<td>-</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}std.meta.FieldType(T, .a){#endsyntax#}</td>
<td>{#syntax#}x{#endsyntax#} is a {#syntax#}@FieldType(T, "a"){#endsyntax#}</td>
</tr>
<tr>
<th scope="row">{#syntax#}@Type(x){#endsyntax#}</th>


@ -1,18 +1,22 @@
const std = @import("std");
const expect = std.testing.expect;
const expectEqual = std.testing.expectEqual;
const mat4x4 = [4][4]f32{
[_]f32{ 1.0, 0.0, 0.0, 0.0 },
[_]f32{ 0.0, 1.0, 0.0, 1.0 },
[_]f32{ 0.0, 0.0, 1.0, 0.0 },
[_]f32{ 0.0, 0.0, 0.0, 1.0 },
const mat4x5 = [4][5]f32{
[_]f32{ 1.0, 0.0, 0.0, 0.0, 0.0 },
[_]f32{ 0.0, 1.0, 0.0, 1.0, 0.0 },
[_]f32{ 0.0, 0.0, 1.0, 0.0, 0.0 },
[_]f32{ 0.0, 0.0, 0.0, 1.0, 9.9 },
};
test "multidimensional arrays" {
// mat4x5 itself is a one-dimensional array of arrays.
try expectEqual(mat4x5[1], [_]f32{ 0.0, 1.0, 0.0, 1.0, 0.0 });
// Access the 2D array by indexing the outer array, and then the inner array.
try expect(mat4x4[1][1] == 1.0);
try expect(mat4x5[3][4] == 9.9);
// Here we iterate with for loops.
for (mat4x4, 0..) |row, row_index| {
for (mat4x5, 0..) |row, row_index| {
for (row, 0..) |cell, column_index| {
if (row_index == column_index) {
try expect(cell == 1.0);
@ -20,8 +24,8 @@ test "multidimensional arrays" {
}
}
// initialize a multidimensional array to zeros
const all_zero: [4][4]f32 = .{.{0} ** 4} ** 4;
// Initialize a multidimensional array to zeros.
const all_zero: [4][5]f32 = .{.{0} ** 5} ** 4;
try expect(all_zero[0][0] == 0);
}


@ -1502,19 +1502,29 @@ pub fn ScopeExtra(comptime ScopeExtraContext: type, comptime ScopeExtraType: typ
return scope.base.parent.?.getAlias(name);
}
/// Finds the (potentially) mangled struct name for a locally scoped extern variable given the original declaration name.
/// Finds the (potentially) mangled struct name for a locally scoped extern variable or function given the original declaration name.
///
/// Block scoped extern declarations translate to:
/// const MangledStructName = struct {extern [qualifiers] original_extern_variable_name: [type]};
/// This finds MangledStructName given original_extern_variable_name for referencing correctly in transDeclRefExpr()
pub fn getLocalExternAlias(scope: *Block, name: []const u8) ?[]const u8 {
for (scope.statements.items) |node| {
if (node.tag() == .extern_local_var) {
const parent_node = node.castTag(.extern_local_var).?;
const init_node = parent_node.data.init.castTag(.var_decl).?;
if (std.mem.eql(u8, init_node.data.name, name)) {
return parent_node.data.name;
}
switch (node.tag()) {
.extern_local_var => {
const parent_node = node.castTag(.extern_local_var).?;
const init_node = parent_node.data.init.castTag(.var_decl).?;
if (std.mem.eql(u8, init_node.data.name, name)) {
return parent_node.data.name;
}
},
.extern_local_fn => {
const parent_node = node.castTag(.extern_local_fn).?;
const init_node = parent_node.data.init.castTag(.func).?;
if (std.mem.eql(u8, init_node.data.name.?, name)) {
return parent_node.data.name;
}
},
else => {},
}
}
return null;
@ -1620,7 +1630,11 @@ pub fn ScopeExtra(comptime ScopeExtraContext: type, comptime ScopeExtraType: typ
.root => null,
.block => ret: {
const block = @as(*Block, @fieldParentPtr("base", scope));
break :ret block.getLocalExternAlias(name);
const alias_name = block.getLocalExternAlias(name);
if (alias_name) |_alias_name| {
break :ret _alias_name;
}
break :ret scope.parent.?.getLocalExternAlias(name);
},
.loop, .do_loop, .condition => scope.parent.?.getLocalExternAlias(name),
};


@ -57,6 +57,8 @@ pub const Node = extern union {
static_local_var,
/// const ExternLocal_name = struct { init }
extern_local_var,
/// const ExternLocal_name = struct { init }
extern_local_fn,
/// var name = init.*
mut_str,
func,
@ -367,7 +369,13 @@ pub const Node = extern union {
.c_pointer, .single_pointer => Payload.Pointer,
.array_type, .null_sentinel_array_type => Payload.Array,
.arg_redecl, .alias, .fail_decl => Payload.ArgRedecl,
.var_simple, .pub_var_simple, .static_local_var, .extern_local_var, .mut_str => Payload.SimpleVarDecl,
.var_simple,
.pub_var_simple,
.static_local_var,
.extern_local_var,
.extern_local_fn,
.mut_str,
=> Payload.SimpleVarDecl,
.enum_constant => Payload.EnumConstant,
.array_filler => Payload.ArrayFiller,
.pub_inline_fn => Payload.PubInlineFn,
@ -394,7 +402,7 @@ pub const Node = extern union {
}
pub fn Data(comptime t: Tag) type {
return std.meta.fieldInfo(t.Type(), .data).type;
return @FieldType(t.Type(), "data");
}
};
@ -1285,8 +1293,11 @@ fn renderNode(c: *Context, node: Node) Allocator.Error!NodeIndex {
},
});
},
.extern_local_var => {
const payload = node.castTag(.extern_local_var).?.data;
.extern_local_var, .extern_local_fn => {
const payload = if (node.tag() == .extern_local_var)
node.castTag(.extern_local_var).?.data
else
node.castTag(.extern_local_fn).?.data;
const const_tok = try c.addToken(.keyword_const, "const");
_ = try c.addIdentifier(payload.name);
@ -2338,7 +2349,7 @@ fn renderNullSentinelArrayType(c: *Context, len: usize, elem_type: Node) !NodeIn
fn addSemicolonIfNeeded(c: *Context, node: Node) !void {
switch (node.tag()) {
.warning => unreachable,
.var_decl, .var_simple, .arg_redecl, .alias, .block, .empty_block, .block_single, .@"switch", .static_local_var, .extern_local_var, .mut_str => {},
.var_decl, .var_simple, .arg_redecl, .alias, .block, .empty_block, .block_single, .@"switch", .static_local_var, .extern_local_var, .extern_local_fn, .mut_str => {},
.while_true => {
const payload = node.castTag(.while_true).?.data;
return addSemicolonIfNotBlock(c, payload);
@ -2435,6 +2446,7 @@ fn renderNodeGrouped(c: *Context, node: Node) !NodeIndex {
.builtin_extern,
.static_local_var,
.extern_local_var,
.extern_local_fn,
.mut_str,
.macro_arithmetic,
=> {


@ -740,7 +740,7 @@ fn runStepNames(
if (run.prominent_compile_errors and total_compile_errors > 0) {
for (step_stack.keys()) |s| {
if (s.result_error_bundle.errorMessageCount() > 0) {
s.result_error_bundle.renderToStdErr(.{ .ttyconf = ttyconf, .include_reference_trace = (b.reference_trace orelse 0) > 0 });
s.result_error_bundle.renderToStdErr(.{ .ttyconf = ttyconf });
}
}
@ -1119,11 +1119,7 @@ fn workerMakeOneStep(
defer std.debug.unlockStdErr();
const gpa = b.allocator;
const options: std.zig.ErrorBundle.RenderOptions = .{
.ttyconf = run.ttyconf,
.include_reference_trace = (b.reference_trace orelse 0) > 0,
};
printErrorMessages(gpa, s, options, run.stderr, run.prominent_compile_errors) catch {};
printErrorMessages(gpa, s, .{ .ttyconf = run.ttyconf }, run.stderr, run.prominent_compile_errors) catch {};
}
handle_result: {


@ -17,6 +17,7 @@ pub const usage_string_after_command_name =
\\This is necessary when the input path begins with a forward slash.
\\
\\Supported option prefixes are /, -, and --, so e.g. /h, -h, and --h all work.
\\Drop-in compatible with the Microsoft Resource Compiler.
\\
\\Supported Win32 RC Options:
\\ /?, /h Print this help and exit.


@ -81,7 +81,8 @@ pub fn main() !void {
defer options.deinit();
if (options.print_help_and_exit) {
try cli.writeUsage(stderr.writer(), "zig rc");
const stdout = std.io.getStdOut();
try cli.writeUsage(stdout.writer(), "zig rc");
return;
}


@ -72,10 +72,12 @@ pub inline fn bigIntFromFloat(comptime signedness: std.builtin.Signedness, resul
} });
const parts = math.frexp(a);
const exponent = @max(parts.exponent - significand_bits, 0);
const significand_bits_adjusted_to_handle_smin = @as(i32, significand_bits) +
@intFromBool(signedness == .signed and parts.exponent == 32 * result.len);
const exponent = @max(parts.exponent - significand_bits_adjusted_to_handle_smin, 0);
const int: I = @intFromFloat(switch (exponent) {
0 => a,
else => math.ldexp(parts.significand, significand_bits),
else => math.ldexp(parts.significand, significand_bits_adjusted_to_handle_smin),
});
switch (signedness) {
.signed => {


@ -24,6 +24,8 @@ const __fixdfdi = @import("fixdfdi.zig").__fixdfdi;
const __fixunsdfdi = @import("fixunsdfdi.zig").__fixunsdfdi;
const __fixdfti = @import("fixdfti.zig").__fixdfti;
const __fixunsdfti = @import("fixunsdfti.zig").__fixunsdfti;
const __fixdfei = @import("fixdfei.zig").__fixdfei;
const __fixunsdfei = @import("fixunsdfei.zig").__fixunsdfei;
// Conversion from f128
const __fixtfsi = @import("fixtfsi.zig").__fixtfsi;
@ -681,6 +683,44 @@ test "fixunsdfti" {
try test__fixunsdfti(-0x1.FFFFFFFFFFFFEp+62, 0);
}
fn test_fixdfei(comptime T: type, expected: T, a: f64) !void {
const int = @typeInfo(T).int;
var expected_buf: [@divExact(int.bits, 32)]u32 = undefined;
std.mem.writeInt(T, std.mem.asBytes(&expected_buf), expected, endian);
var actual_buf: [@divExact(int.bits, 32)]u32 = undefined;
_ = switch (int.signedness) {
.signed => __fixdfei,
.unsigned => __fixunsdfei,
}(&actual_buf, int.bits, a);
try testing.expect(std.mem.eql(u32, &expected_buf, &actual_buf));
}
test "fixdfei" {
try test_fixdfei(i256, -1 << 255, -0x1p255);
try test_fixdfei(i256, -1 << 127, -0x1p127);
try test_fixdfei(i256, -1 << 100, -0x1p100);
try test_fixdfei(i256, -1 << 50, -0x1p50);
try test_fixdfei(i256, -1 << 1, -0x1p1);
try test_fixdfei(i256, -1 << 0, -0x1p0);
try test_fixdfei(i256, 0, 0);
try test_fixdfei(i256, 1 << 0, 0x1p0);
try test_fixdfei(i256, 1 << 1, 0x1p1);
try test_fixdfei(i256, 1 << 50, 0x1p50);
try test_fixdfei(i256, 1 << 100, 0x1p100);
try test_fixdfei(i256, 1 << 127, 0x1p127);
try test_fixdfei(i256, 1 << 254, 0x1p254);
}
test "fixundfei" {
try test_fixdfei(u256, 0, 0);
try test_fixdfei(u256, 1 << 0, 0x1p0);
try test_fixdfei(u256, 1 << 1, 0x1p1);
try test_fixdfei(u256, 1 << 50, 0x1p50);
try test_fixdfei(u256, 1 << 100, 0x1p100);
try test_fixdfei(u256, 1 << 127, 0x1p127);
try test_fixdfei(u256, 1 << 255, 0x1p255);
}
fn test__fixtfsi(a: f128, expected: i32) !void {
const x = __fixtfsi(a);
try testing.expect(x == expected);


@ -13,6 +13,8 @@ comptime {
@export(&__sqrtx, .{ .name = "__sqrtx", .linkage = common.linkage, .visibility = common.visibility });
if (common.want_ppc_abi) {
@export(&sqrtq, .{ .name = "sqrtf128", .linkage = common.linkage, .visibility = common.visibility });
} else if (common.want_sparc_abi) {
@export(&_Qp_sqrt, .{ .name = "_Qp_sqrt", .linkage = common.linkage, .visibility = common.visibility });
}
@export(&sqrtq, .{ .name = "sqrtq", .linkage = common.linkage, .visibility = common.visibility });
@export(&sqrtl, .{ .name = "sqrtl", .linkage = common.linkage, .visibility = common.visibility });
@ -242,6 +244,10 @@ pub fn sqrtq(x: f128) callconv(.C) f128 {
return sqrt(@floatCast(x));
}
fn _Qp_sqrt(c: *f128, a: *f128) callconv(.c) void {
c.* = sqrt(@floatCast(a.*));
}
pub fn sqrtl(x: c_longdouble) callconv(.C) c_longdouble {
switch (@typeInfo(c_longdouble).float.bits) {
16 => return __sqrth(x),


@ -468,27 +468,42 @@ export fn fuzzer_init(cache_dir_struct: Fuzzer.Slice) void {
// Linkers are expected to automatically add `__start_<section>` and
// `__stop_<section>` symbols when section names are valid C identifiers.
const pc_counters_start = @extern([*]u8, .{
.name = "__start___sancov_cntrs",
.linkage = .weak,
}) orelse fatal("missing __start___sancov_cntrs symbol", .{});
const ofmt = builtin.object_format;
const pc_counters_end = @extern([*]u8, .{
.name = "__stop___sancov_cntrs",
const start_symbol_prefix: []const u8 = if (ofmt == .macho)
"\x01section$start$__DATA$__"
else
"__start___";
const end_symbol_prefix: []const u8 = if (ofmt == .macho)
"\x01section$end$__DATA$__"
else
"__stop___";
const pc_counters_start_name = start_symbol_prefix ++ "sancov_cntrs";
const pc_counters_start = @extern([*]u8, .{
.name = pc_counters_start_name,
.linkage = .weak,
}) orelse fatal("missing __stop___sancov_cntrs symbol", .{});
}) orelse fatal("missing {s} symbol", .{pc_counters_start_name});
const pc_counters_end_name = end_symbol_prefix ++ "sancov_cntrs";
const pc_counters_end = @extern([*]u8, .{
.name = pc_counters_end_name,
.linkage = .weak,
}) orelse fatal("missing {s} symbol", .{pc_counters_end_name});
const pc_counters = pc_counters_start[0 .. pc_counters_end - pc_counters_start];
const pcs_start_name = start_symbol_prefix ++ "sancov_pcs1";
const pcs_start = @extern([*]usize, .{
.name = "__start___sancov_pcs1",
.name = pcs_start_name,
.linkage = .weak,
}) orelse fatal("missing __start___sancov_pcs1 symbol", .{});
}) orelse fatal("missing {s} symbol", .{pcs_start_name});
const pcs_end_name = end_symbol_prefix ++ "sancov_pcs1";
const pcs_end = @extern([*]usize, .{
.name = "__stop___sancov_pcs1",
.name = pcs_end_name,
.linkage = .weak,
}) orelse fatal("missing __stop___sancov_pcs1 symbol", .{});
}) orelse fatal("missing {s} symbol", .{pcs_end_name});
const pcs = pcs_start[0 .. pcs_end - pcs_start];


@ -99,6 +99,8 @@ struct __pthread_cond_s
unsigned int __g1_orig_size;
unsigned int __wrefs;
unsigned int __g_signals[2];
unsigned int __unused_initialized_1;
unsigned int __unused_initialized_2;
};
typedef unsigned int __tss_t;


@ -152,7 +152,7 @@ enum
/* Conditional variable handling. */
#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0} } }
#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0}, 0, 0 } }
/* Cleanup buffers */


@ -99,6 +99,8 @@ struct __pthread_cond_s
unsigned int __g1_orig_size;
unsigned int __wrefs;
unsigned int __g_signals[2];
unsigned int __unused_initialized_1;
unsigned int __unused_initialized_2;
};
typedef unsigned int __tss_t;

View File

@ -152,7 +152,7 @@ enum
/* Conditional variable handling. */
#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0} } }
#define PTHREAD_COND_INITIALIZER { { {0}, {0}, {0, 0}, 0, 0, {0, 0}, 0, 0 } }
/* Cleanup buffers */

View File

@ -0,0 +1,38 @@
/* This file is automatically generated.
It defines a symbol `__stub_FUNCTION' for each function
in the C library which is a stub, meaning it will fail
every time called, usually setting errno to ENOSYS. */
#ifdef _LIBC
#error Applications may not define the macro _LIBC
#endif
#define __stub___compat_bdflush
#define __stub___compat_create_module
#define __stub___compat_get_kernel_syms
#define __stub___compat_query_module
#define __stub___compat_uselib
#define __stub_chflags
#define __stub_fchflags
#define __stub_feclearexcept
#define __stub_fedisableexcept
#define __stub_feenableexcept
#define __stub_fegetenv
#define __stub_fegetexcept
#define __stub_fegetexceptflag
#define __stub_fegetmode
#define __stub_fegetround
#define __stub_feholdexcept
#define __stub_feraiseexcept
#define __stub_fesetenv
#define __stub_fesetexcept
#define __stub_fesetexceptflag
#define __stub_fesetmode
#define __stub_fesetround
#define __stub_fetestexcept
#define __stub_feupdateenv
#define __stub_gtty
#define __stub_revoke
#define __stub_setlogin
#define __stub_sigreturn
#define __stub_stty

View File

@ -337,6 +337,7 @@ pub const Manifest = struct {
manifest_create: fs.File.OpenError,
manifest_read: fs.File.ReadError,
manifest_lock: fs.File.LockError,
manifest_seek: fs.File.SeekError,
file_open: FileOp,
file_stat: FileOp,
file_read: FileOp,
@ -488,7 +489,6 @@ pub const Manifest = struct {
/// option, one may call `toOwnedLock` to obtain a smaller object which can represent
/// the lock. `deinit` is safe to call whether or not `toOwnedLock` has been called.
pub fn hit(self: *Manifest) HitError!bool {
const gpa = self.cache.gpa;
assert(self.manifest_file == null);
self.diagnostic = .none;
@ -501,12 +501,12 @@ pub const Manifest = struct {
self.hex_digest = binToHex(bin_digest);
self.hash.hasher = hasher_init;
self.hash.hasher.update(&bin_digest);
@memcpy(manifest_file_path[0..self.hex_digest.len], &self.hex_digest);
manifest_file_path[hex_digest_len..][0..ext.len].* = ext.*;
// We'll try to open the cache with an exclusive lock, but if that would block
// and `want_shared_lock` is set, a shared lock might be sufficient, so we'll
// open with a shared lock instead.
while (true) {
if (self.cache.manifest_dir.createFile(&manifest_file_path, .{
.read = true,
@ -575,26 +575,71 @@ pub const Manifest = struct {
self.want_refresh_timestamp = true;
const input_file_count = self.files.entries.len;
while (true) : (self.unhit(bin_digest, input_file_count)) {
const file_contents = self.manifest_file.?.reader().readAllAlloc(gpa, manifest_file_size_max) catch |err| switch (err) {
error.OutOfMemory => return error.OutOfMemory,
error.StreamTooLong => return error.OutOfMemory,
else => |e| {
self.diagnostic = .{ .manifest_read = e };
return error.CacheCheckFailed;
},
};
defer gpa.free(file_contents);
var any_file_changed = false;
var line_iter = mem.tokenizeScalar(u8, file_contents, '\n');
var idx: usize = 0;
if (if (line_iter.next()) |line| !std.mem.eql(u8, line, manifest_header) else true) {
if (try self.upgradeToExclusiveLock()) continue;
self.manifest_dirty = true;
while (idx < input_file_count) : (idx += 1) {
const ch_file = &self.files.keys()[idx];
self.populateFileHash(ch_file) catch |err| {
// We're going to construct a second hash. Its input will begin with the digest we've
// already computed (`bin_digest`), and then it'll have the digests of each input file,
// including "post" files (see `addFilePost`). If this is a hit, we learn the set of "post"
// files from the manifest on disk. If this is a miss, we'll learn those from future calls
// to `addFilePost` etc. As such, the state of `self.hash.hasher` after this function
// depends on whether this is a hit or a miss.
//
// If we return `true` indicating a cache hit, then `self.hash.hasher` must already include
// the digests of the "post" files, so the caller can call `final`. Otherwise, on a cache
// miss, `self.hash.hasher` will include the digests of all non-"post" files -- that is,
// the ones we've already been told about. The rest will be discovered through calls to
// `addFilePost` etc, which will update the hasher. After all files are added, the user can
// use `final`, and will at some point `writeManifest` the file list to disk.
self.hash.hasher = hasher_init;
self.hash.hasher.update(&bin_digest);
hit: {
const file_digests_populated: usize = digests: {
switch (try self.hitWithCurrentLock()) {
.hit => break :hit,
.miss => |m| if (!try self.upgradeToExclusiveLock()) {
break :digests m.file_digests_populated;
},
}
// We've just had a miss with the shared lock, and upgraded to an exclusive lock. Someone
// else might have modified the digest, so we need to check again before deciding to miss.
// Before trying again, we must reset `self.hash.hasher` and `self.files`.
// This is basically just the first half of `unhit`.
self.hash.hasher = hasher_init;
self.hash.hasher.update(&bin_digest);
while (self.files.count() != input_file_count) {
var file = self.files.pop().?;
file.key.deinit(self.cache.gpa);
}
// Also, seek the file back to the start.
self.manifest_file.?.seekTo(0) catch |err| {
self.diagnostic = .{ .manifest_seek = err };
return error.CacheCheckFailed;
};
switch (try self.hitWithCurrentLock()) {
.hit => break :hit,
.miss => |m| break :digests m.file_digests_populated,
}
};
// This is a guaranteed cache miss. We're almost ready to return `false`, but there's a
// little bookkeeping to do first. The first `file_digests_populated` entries in `files`
// have their `bin_digest` populated; there may be some left in `input_file_count` which
// we'll need to populate ourselves. Other than that, this is basically `unhit`.
self.manifest_dirty = true;
self.hash.hasher = hasher_init;
self.hash.hasher.update(&bin_digest);
while (self.files.count() != input_file_count) {
var file = self.files.pop().?;
file.key.deinit(self.cache.gpa);
}
for (self.files.keys(), 0..) |*file, idx| {
if (idx < file_digests_populated) {
// `bin_digest` is already populated by `hitWithCurrentLock`, so we can use it directly.
self.hash.hasher.update(&file.bin_digest);
} else {
self.populateFileHash(file) catch |err| {
self.diagnostic = .{ .file_hash = .{
.file_index = idx,
.err = err,
@ -602,172 +647,195 @@ pub const Manifest = struct {
return error.CacheCheckFailed;
};
}
return false;
}
while (line_iter.next()) |line| {
defer idx += 1;
return false;
}
var iter = mem.tokenizeScalar(u8, line, ' ');
const size = iter.next() orelse return error.InvalidFormat;
const inode = iter.next() orelse return error.InvalidFormat;
const mtime_nsec_str = iter.next() orelse return error.InvalidFormat;
const digest_str = iter.next() orelse return error.InvalidFormat;
const prefix_str = iter.next() orelse return error.InvalidFormat;
const file_path = iter.rest();
if (self.want_shared_lock) {
self.downgradeToSharedLock() catch |err| {
self.diagnostic = .{ .manifest_lock = err };
return error.CacheCheckFailed;
};
}
const stat_size = fmt.parseInt(u64, size, 10) catch return error.InvalidFormat;
const stat_inode = fmt.parseInt(fs.File.INode, inode, 10) catch return error.InvalidFormat;
const stat_mtime = fmt.parseInt(i64, mtime_nsec_str, 10) catch return error.InvalidFormat;
const file_bin_digest = b: {
if (digest_str.len != hex_digest_len) return error.InvalidFormat;
var bd: BinDigest = undefined;
_ = fmt.hexToBytes(&bd, digest_str) catch return error.InvalidFormat;
break :b bd;
return true;
}
/// Assumes that `self.hash.hasher` has been updated only with the original digest, that
/// `self.files` contains only the original input files, and that `self.manifest_file.?` is
/// seeked to the start of the file.
fn hitWithCurrentLock(self: *Manifest) HitError!union(enum) {
hit,
miss: struct {
file_digests_populated: usize,
},
} {
const gpa = self.cache.gpa;
const input_file_count = self.files.entries.len;
const file_contents = self.manifest_file.?.reader().readAllAlloc(gpa, manifest_file_size_max) catch |err| switch (err) {
error.OutOfMemory => return error.OutOfMemory,
error.StreamTooLong => return error.OutOfMemory,
else => |e| {
self.diagnostic = .{ .manifest_read = e };
return error.CacheCheckFailed;
},
};
defer gpa.free(file_contents);
var any_file_changed = false;
var line_iter = mem.tokenizeScalar(u8, file_contents, '\n');
var idx: usize = 0;
const header_valid = valid: {
const line = line_iter.next() orelse break :valid false;
break :valid std.mem.eql(u8, line, manifest_header);
};
if (!header_valid) {
return .{ .miss = .{ .file_digests_populated = 0 } };
}
while (line_iter.next()) |line| {
defer idx += 1;
var iter = mem.tokenizeScalar(u8, line, ' ');
const size = iter.next() orelse return error.InvalidFormat;
const inode = iter.next() orelse return error.InvalidFormat;
const mtime_nsec_str = iter.next() orelse return error.InvalidFormat;
const digest_str = iter.next() orelse return error.InvalidFormat;
const prefix_str = iter.next() orelse return error.InvalidFormat;
const file_path = iter.rest();
const stat_size = fmt.parseInt(u64, size, 10) catch return error.InvalidFormat;
const stat_inode = fmt.parseInt(fs.File.INode, inode, 10) catch return error.InvalidFormat;
const stat_mtime = fmt.parseInt(i64, mtime_nsec_str, 10) catch return error.InvalidFormat;
const file_bin_digest = b: {
if (digest_str.len != hex_digest_len) return error.InvalidFormat;
var bd: BinDigest = undefined;
_ = fmt.hexToBytes(&bd, digest_str) catch return error.InvalidFormat;
break :b bd;
};
const prefix = fmt.parseInt(u8, prefix_str, 10) catch return error.InvalidFormat;
if (prefix >= self.cache.prefixes_len) return error.InvalidFormat;
if (file_path.len == 0) return error.InvalidFormat;
const cache_hash_file = f: {
const prefixed_path: PrefixedPath = .{
.prefix = prefix,
.sub_path = file_path, // expires with file_contents
};
if (idx < input_file_count) {
const file = &self.files.keys()[idx];
if (!file.prefixed_path.eql(prefixed_path))
return error.InvalidFormat;
const prefix = fmt.parseInt(u8, prefix_str, 10) catch return error.InvalidFormat;
if (prefix >= self.cache.prefixes_len) return error.InvalidFormat;
if (file_path.len == 0) return error.InvalidFormat;
const cache_hash_file = f: {
const prefixed_path: PrefixedPath = .{
.prefix = prefix,
.sub_path = file_path, // expires with file_contents
file.stat = .{
.size = stat_size,
.inode = stat_inode,
.mtime = stat_mtime,
};
if (idx < input_file_count) {
const file = &self.files.keys()[idx];
if (!file.prefixed_path.eql(prefixed_path))
return error.InvalidFormat;
file.stat = .{
file.bin_digest = file_bin_digest;
break :f file;
}
const gop = try self.files.getOrPutAdapted(gpa, prefixed_path, FilesAdapter{});
errdefer _ = self.files.pop();
if (!gop.found_existing) {
gop.key_ptr.* = .{
.prefixed_path = .{
.prefix = prefix,
.sub_path = try gpa.dupe(u8, file_path),
},
.contents = null,
.max_file_size = null,
.handle = null,
.stat = .{
.size = stat_size,
.inode = stat_inode,
.mtime = stat_mtime,
};
file.bin_digest = file_bin_digest;
break :f file;
}
const gop = try self.files.getOrPutAdapted(gpa, prefixed_path, FilesAdapter{});
errdefer _ = self.files.pop();
if (!gop.found_existing) {
gop.key_ptr.* = .{
.prefixed_path = .{
.prefix = prefix,
.sub_path = try gpa.dupe(u8, file_path),
},
.contents = null,
.max_file_size = null,
.handle = null,
.stat = .{
.size = stat_size,
.inode = stat_inode,
.mtime = stat_mtime,
},
.bin_digest = file_bin_digest,
};
}
break :f gop.key_ptr;
},
.bin_digest = file_bin_digest,
};
}
break :f gop.key_ptr;
};
const pp = cache_hash_file.prefixed_path;
const dir = self.cache.prefixes()[pp.prefix].handle;
const this_file = dir.openFile(pp.sub_path, .{ .mode = .read_only }) catch |err| switch (err) {
error.FileNotFound => {
// Every digest before this one has been populated successfully.
return .{ .miss = .{ .file_digests_populated = idx } };
},
else => |e| {
self.diagnostic = .{ .file_open = .{
.file_index = idx,
.err = e,
} };
return error.CacheCheckFailed;
},
};
defer this_file.close();
const actual_stat = this_file.stat() catch |err| {
self.diagnostic = .{ .file_stat = .{
.file_index = idx,
.err = err,
} };
return error.CacheCheckFailed;
};
const size_match = actual_stat.size == cache_hash_file.stat.size;
const mtime_match = actual_stat.mtime == cache_hash_file.stat.mtime;
const inode_match = actual_stat.inode == cache_hash_file.stat.inode;
if (!size_match or !mtime_match or !inode_match) {
cache_hash_file.stat = .{
.size = actual_stat.size,
.mtime = actual_stat.mtime,
.inode = actual_stat.inode,
};
const pp = cache_hash_file.prefixed_path;
const dir = self.cache.prefixes()[pp.prefix].handle;
const this_file = dir.openFile(pp.sub_path, .{ .mode = .read_only }) catch |err| switch (err) {
error.FileNotFound => {
if (try self.upgradeToExclusiveLock()) continue;
return false;
},
else => |e| {
self.diagnostic = .{ .file_open = .{
.file_index = idx,
.err = e,
} };
return error.CacheCheckFailed;
},
};
defer this_file.close();
if (self.isProblematicTimestamp(cache_hash_file.stat.mtime)) {
// The actual file has an unreliable timestamp, force it to be hashed
cache_hash_file.stat.mtime = 0;
cache_hash_file.stat.inode = 0;
}
const actual_stat = this_file.stat() catch |err| {
self.diagnostic = .{ .file_stat = .{
var actual_digest: BinDigest = undefined;
hashFile(this_file, &actual_digest) catch |err| {
self.diagnostic = .{ .file_read = .{
.file_index = idx,
.err = err,
} };
return error.CacheCheckFailed;
};
const size_match = actual_stat.size == cache_hash_file.stat.size;
const mtime_match = actual_stat.mtime == cache_hash_file.stat.mtime;
const inode_match = actual_stat.inode == cache_hash_file.stat.inode;
if (!size_match or !mtime_match or !inode_match) {
self.manifest_dirty = true;
cache_hash_file.stat = .{
.size = actual_stat.size,
.mtime = actual_stat.mtime,
.inode = actual_stat.inode,
};
if (self.isProblematicTimestamp(cache_hash_file.stat.mtime)) {
// The actual file has an unreliable timestamp, force it to be hashed
cache_hash_file.stat.mtime = 0;
cache_hash_file.stat.inode = 0;
}
var actual_digest: BinDigest = undefined;
hashFile(this_file, &actual_digest) catch |err| {
self.diagnostic = .{ .file_read = .{
.file_index = idx,
.err = err,
} };
return error.CacheCheckFailed;
};
if (!mem.eql(u8, &cache_hash_file.bin_digest, &actual_digest)) {
cache_hash_file.bin_digest = actual_digest;
// keep going until we have the input file digests
any_file_changed = true;
}
}
if (!any_file_changed) {
self.hash.hasher.update(&cache_hash_file.bin_digest);
if (!mem.eql(u8, &cache_hash_file.bin_digest, &actual_digest)) {
cache_hash_file.bin_digest = actual_digest;
// keep going until we have the input file digests
any_file_changed = true;
}
}
if (any_file_changed) {
if (try self.upgradeToExclusiveLock()) continue;
// cache miss
// keep the manifest file open
self.unhit(bin_digest, input_file_count);
return false;
if (!any_file_changed) {
self.hash.hasher.update(&cache_hash_file.bin_digest);
}
if (idx < input_file_count) {
if (try self.upgradeToExclusiveLock()) continue;
self.manifest_dirty = true;
while (idx < input_file_count) : (idx += 1) {
self.populateFileHash(&self.files.keys()[idx]) catch |err| {
self.diagnostic = .{ .file_hash = .{
.file_index = idx,
.err = err,
} };
return error.CacheCheckFailed;
};
}
return false;
}
if (self.want_shared_lock) {
self.downgradeToSharedLock() catch |err| {
self.diagnostic = .{ .manifest_lock = err };
return error.CacheCheckFailed;
};
}
return true;
}
// If the manifest was somehow missing one of our input files, or if any file hash has changed,
// then this is a cache miss. However, we have successfully populated some or all of the file
// digests.
if (any_file_changed or idx < input_file_count) {
return .{ .miss = .{ .file_digests_populated = idx } };
}
return .hit;
}
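
Stepping back, the shape `hit` builds around this helper is a check-upgrade-recheck protocol: check under the current (possibly shared) lock, upgrade on a miss, and check exactly once more, since another process may have finished the work in the window between the two locks. A hedged sketch of just that shape, with hypothetical names rather than the real `Manifest` API:

fn checkUpgradeRecheck(state: anytype) !bool {
    switch (try state.check()) {
        .hit => return true,
        // `upgrade` returning false means the lock was already exclusive,
        // so the miss is definitive.
        .miss => if (!try state.upgrade()) return false,
    }
    // We held only a shared lock during the first check; reset any
    // per-attempt state and check once more under the exclusive lock.
    state.reset();
    return switch (try state.check()) {
        .hit => true,
        .miss => false,
    };
}
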
/// Reset `self.hash.hasher` to the state it should be in after `hit` returns `false`.
/// The hasher contains the original input digest, and all original input file digests (i.e.
/// not including post files).
/// Assumes that `bin_digest` is populated for all files up to `input_file_count`. As such,
/// this is not necessarily safe to call within `hit`.
pub fn unhit(self: *Manifest, bin_digest: BinDigest, input_file_count: usize) void {
// Reset the hash.
self.hash.hasher = hasher_init;

View File

@ -25,7 +25,7 @@ pub fn next(self: *Tokenizer) ?Token {
},
},
.target => switch (char) {
'\t', '\n', '\r', ' ' => {
'\n', '\r' => {
return errorIllegalChar(.invalid_target, self.index, char);
},
'$' => {
@ -40,6 +40,15 @@ pub fn next(self: *Tokenizer) ?Token {
self.state = .target_colon;
self.index += 1;
},
'\t', ' ' => {
self.state = .target_space;
const bytes = self.bytes[start..self.index];
std.debug.assert(bytes.len != 0);
self.index += 1;
return finishTarget(must_resolve, bytes);
},
else => {
self.index += 1;
},
@ -110,6 +119,19 @@ pub fn next(self: *Tokenizer) ?Token {
self.state = .target;
},
},
.target_space => switch (char) {
'\t', ' ' => {
// silently ignore additional horizontal whitespace
self.index += 1;
},
':' => {
self.state = .rhs;
self.index += 1;
},
else => {
return errorIllegalChar(.expected_colon, self.index, char);
},
},
.rhs => switch (char) {
'\t', ' ' => {
// silently ignore horizontal whitespace
@ -256,6 +278,10 @@ pub fn next(self: *Tokenizer) ?Token {
self.state = .lhs;
return null;
},
.target_space => {
const idx = self.index - 1;
return errorIllegalChar(.expected_colon, idx, self.bytes[idx]);
},
.prereq_quote => {
return errorPosition(.incomplete_quoted_prerequisite, start, self.bytes[start..]);
},
@ -299,6 +325,7 @@ const State = enum {
target_dollar_sign,
target_colon,
target_colon_reverse_solidus,
target_space,
rhs,
rhs_continuation,
rhs_continuation_linefeed,
@ -322,6 +349,7 @@ pub const Token = union(enum) {
expected_dollar_sign: IndexAndChar,
continuation_eol: IndexAndChar,
incomplete_escape: IndexAndChar,
expected_colon: IndexAndChar,
pub const IndexAndChar = struct {
index: usize,
@ -420,6 +448,7 @@ pub const Token = union(enum) {
.expected_dollar_sign,
.continuation_eol,
.incomplete_escape,
.expected_colon,
=> |index_and_char| {
try writer.writeAll("illegal char ");
try printUnderstandableChar(writer, index_and_char.char);
@ -438,6 +467,7 @@ pub const Token = union(enum) {
.expected_dollar_sign => "expecting '$'",
.continuation_eol => "continuation expecting end-of-line",
.incomplete_escape => "incomplete escape",
.expected_colon => "expecting ':'",
};
}
};
@ -545,6 +575,16 @@ test "empty target linefeeds + hspace + continuations" {
, expect);
}
test "empty target + hspace + colon" {
const expect = "target = {foo.o}";
try depTokenizer("foo.o :", expect);
try depTokenizer("foo.o\t\t\t:", expect);
try depTokenizer("foo.o \t \t :", expect);
try depTokenizer("\r\nfoo.o :", expect);
try depTokenizer(" foo.o :", expect);
}
test "prereq" {
const expect =
\\target = {foo.o}
@ -923,9 +963,6 @@ test "error illegal char at position - expecting dollar_sign" {
}
test "error illegal char at position - invalid target" {
try depTokenizer("foo\t.o",
\\ERROR: illegal char \x09 at position 3: invalid target
);
try depTokenizer("foo\n.o",
\\ERROR: illegal char \x0A at position 3: invalid target
);
@ -963,6 +1000,25 @@ test "error prereq - continuation expecting end-of-line" {
);
}
test "error illegal char at position - expecting colon" {
try depTokenizer("foo\t.o:",
\\target = {foo}
\\ERROR: illegal char '.' at position 4: expecting ':'
);
try depTokenizer("foo .o:",
\\target = {foo}
\\ERROR: illegal char '.' at position 4: expecting ':'
);
try depTokenizer("foo \n.o:",
\\target = {foo}
\\ERROR: illegal char \x0A at position 4: expecting ':'
);
try depTokenizer("foo.o\t\n:",
\\target = {foo.o}
\\ERROR: illegal char \x0A at position 6: expecting ':'
);
}
// - tokenize input, emit textual representation, and compare to expect
fn depTokenizer(input: []const u8, expect: []const u8) !void {
var arena_allocator = std.heap.ArenaAllocator.init(std.testing.allocator);

View File

@ -202,6 +202,7 @@ pub fn init(options: StepOptions) Step {
.state = .precheck_unstarted,
.max_rss = options.max_rss,
.debug_stack_trace = blk: {
if (!std.debug.sys_can_stack_trace) break :blk &.{};
const addresses = arena.alloc(usize, options.owner.debug_stack_frames_count) catch @panic("OOM");
@memset(addresses, 0);
const first_ret_addr = options.first_ret_addr orelse @returnAddress();
@ -758,7 +759,7 @@ fn failWithCacheError(s: *Step, man: *const Build.Cache.Manifest, err: Build.Cac
switch (err) {
error.CacheCheckFailed => switch (man.diagnostic) {
.none => unreachable,
.manifest_create, .manifest_read, .manifest_lock => |e| return s.fail("failed to check cache: {s} {s}", .{
.manifest_create, .manifest_read, .manifest_lock, .manifest_seek => |e| return s.fail("failed to check cache: {s} {s}", .{
@tagName(man.diagnostic), @errorName(e),
}),
.file_open, .file_stat, .file_read, .file_hash => |op| {

View File

@ -693,6 +693,8 @@ const PkgConfigResult = struct {
/// Run pkg-config for the given library name and parse the output, returning the arguments
/// that should be passed to zig to link the given library.
fn runPkgConfig(compile: *Compile, lib_name: []const u8) !PkgConfigResult {
const wl_rpath_prefix = "-Wl,-rpath,";
const b = compile.step.owner;
const pkg_name = match: {
// First we have to map the library name to pkg config name. Unfortunately,
@ -783,6 +785,8 @@ fn runPkgConfig(compile: *Compile, lib_name: []const u8) !PkgConfigResult {
try zig_cflags.appendSlice(&[_][]const u8{ "-D", macro });
} else if (mem.startsWith(u8, arg, "-D")) {
try zig_cflags.append(arg);
} else if (mem.startsWith(u8, arg, wl_rpath_prefix)) {
try zig_cflags.appendSlice(&[_][]const u8{ "-rpath", arg[wl_rpath_prefix.len..] });
} else if (b.debug_pkg_config) {
return compile.step.fail("unknown pkg-config flag '{s}'", .{arg});
}
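
A runnable restatement of the new branch's string handling, using a made-up flag value:

const std = @import("std");

test "translate -Wl,-rpath, to -rpath" {
    const wl_rpath_prefix = "-Wl,-rpath,";
    const arg = "-Wl,-rpath,/opt/example/lib"; // hypothetical pkg-config output
    try std.testing.expect(std.mem.startsWith(u8, arg, wl_rpath_prefix));
    // Everything after the prefix becomes the argument to `-rpath`.
    try std.testing.expectEqualStrings("/opt/example/lib", arg[wl_rpath_prefix.len..]);
}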

View File

@ -189,9 +189,9 @@ fn make(step: *Step, options: Step.MakeOptions) !void {
const src_dir_path = dir.source.getPath3(b, step);
const full_h_prefix = b.getInstallPath(h_dir, dir.dest_rel_path);
var src_dir = src_dir_path.root_dir.handle.openDir(src_dir_path.sub_path, .{ .iterate = true }) catch |err| {
return step.fail("unable to open source directory '{s}': {s}", .{
src_dir_path.sub_path, @errorName(err),
var src_dir = src_dir_path.root_dir.handle.openDir(src_dir_path.subPathOrDot(), .{ .iterate = true }) catch |err| {
return step.fail("unable to open source directory '{}': {s}", .{
src_dir_path, @errorName(err),
});
};
defer src_dir.close();

View File

@ -620,6 +620,35 @@ fn make(step: *Step, options: Step.MakeOptions) !void {
var man = b.graph.cache.obtain();
defer man.deinit();
if (run.env_map) |env_map| {
const KV = struct { []const u8, []const u8 };
var kv_pairs = try std.ArrayList(KV).initCapacity(arena, env_map.count());
var iter = env_map.iterator();
while (iter.next()) |entry| {
kv_pairs.appendAssumeCapacity(.{ entry.key_ptr.*, entry.value_ptr.* });
}
std.mem.sortUnstable(KV, kv_pairs.items, {}, struct {
fn lessThan(_: void, kv1: KV, kv2: KV) bool {
const k1 = kv1[0];
const k2 = kv2[0];
if (k1.len != k2.len) return k1.len < k2.len;
for (k1, k2) |c1, c2| {
if (c1 == c2) continue;
return c1 < c2;
}
unreachable; // two keys cannot be equal
}
}.lessThan);
for (kv_pairs.items) |kv| {
man.hash.addBytes(kv[0]);
man.hash.addBytes(kv[1]);
}
}
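
The comparator above is a shortlex order: shorter keys sort first, and equal-length keys compare bytewise (the `unreachable` is sound because an environment map cannot contain duplicate keys). Any total order would do; this one just makes the hash input deterministic. A standalone restatement as a hypothetical free function:

const std = @import("std");

fn shortlexLessThan(a: []const u8, b: []const u8) bool {
    if (a.len != b.len) return a.len < b.len;
    return std.mem.lessThan(u8, a, b);
}

test shortlexLessThan {
    try std.testing.expect(shortlexLessThan("B", "AA")); // length decides first
    try std.testing.expect(shortlexLessThan("AA", "AB")); // then bytes
    try std.testing.expect(!shortlexLessThan("AB", "AA"));
}
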
for (run.argv.items) |arg| {
switch (arg) {
.bytes => |bytes| {

View File

@ -612,8 +612,6 @@ const Os = switch (builtin.os.tag) {
/// -1. Otherwise, it needs to be opened in update(), and will be
/// stored here.
dir_fd: i32,
/// Number of files being watched by this directory handle.
ref_count: u32,
}),
const dir_open_flags: posix.O = f: {
@ -673,11 +671,9 @@ const Os = switch (builtin.os.tag) {
try handles.append(gpa, .{
.rs = .{},
.dir_fd = if (skip_open_dir) -1 else dir_fd,
.ref_count = 1,
});
} else {
handles.items(.ref_count)[gop.index] += 1;
}
break :rs &handles.items(.rs)[gop.index];
};
for (files.items) |basename| {
@ -718,10 +714,6 @@ const Os = switch (builtin.os.tag) {
}
}
const ref_count_ptr = &handles.items(.ref_count)[i];
ref_count_ptr.* -= 1;
if (ref_count_ptr.* > 0) continue;
// If the sub_path == "" then this path already has the
// dir fd that we need to use as the ident to remove the
// event. If it was opened above with openat() then we need
@ -738,10 +730,10 @@ const Os = switch (builtin.os.tag) {
// index in the udata field.
const last_dir_fd = fd: {
const last_path = w.dir_table.keys()[handles.len - 1];
const last_dir_fd = if (last_path.sub_path.len != 0)
const last_dir_fd = if (last_path.sub_path.len == 0)
last_path.root_dir.handle.fd
else
handles.items(.dir_fd)[i];
handles.items(.dir_fd)[handles.len - 1];
assert(last_dir_fd != -1);
break :fd last_dir_fd;
};

View File

@ -39,10 +39,20 @@ draw_buffer: []u8,
/// CPU cache.
node_parents: []Node.Parent,
node_storage: []Node.Storage,
node_freelist: []Node.OptionalIndex,
node_freelist_first: Node.OptionalIndex,
node_freelist_next: []Node.OptionalIndex,
node_freelist: Freelist,
/// This is the number of elements in node arrays which have been used so far. Nodes before this
/// index are either active or on the freelist. The remaining nodes are implicitly free. This
/// value may temporarily exceed the node count.
node_end_index: u32,
const Freelist = packed struct(u32) {
head: Node.OptionalIndex,
/// Whenever `node_freelist` is added to, this generation is incremented
/// to avoid ABA bugs when acquiring nodes. Wrapping arithmetic is used.
generation: u24,
};
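
To see the ABA hazard the generation tag guards against: a plain compare-and-swap on the head alone can succeed even if, between the load and the swap, the same node was popped, other nodes were pushed, and the node was pushed back. Packing the head together with a generation counter into one u32 lets a single cmpxchg observe that history. A hedged standalone sketch with a hypothetical index encoding, not the Progress code:

const TaggedHead = packed struct(u32) {
    head: u8, // index into `next`; 0xff means empty (hypothetical encoding)
    generation: u24,
};

fn push(list: *TaggedHead, next: []u8, index: u8) void {
    var old = @atomicLoad(TaggedHead, list, .monotonic);
    while (true) {
        // Link the node before publishing it as the new head.
        @atomicStore(u8, &next[index], old.head, .monotonic);
        old = @cmpxchgWeak(TaggedHead, list, old, .{
            .head = index,
            .generation = old.generation +% 1, // a fresh tag defeats ABA
        }, .release, .monotonic) orelse return;
    }
}
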
pub const TerminalMode = union(enum) {
off,
ansi_escape_codes,
@ -112,7 +122,7 @@ pub const Node = struct {
// causes `completed_count` to be treated as a file descriptor, so
// the order here matters.
@atomicStore(u32, &s.completed_count, integer, .monotonic);
@atomicStore(u32, &s.estimated_total_count, std.math.maxInt(u32), .release);
@atomicStore(u32, &s.estimated_total_count, std.math.maxInt(u32), .release); // synchronizes with acquire in `serialize`
}
/// Not thread-safe.
@ -184,12 +194,24 @@ pub const Node = struct {
const node_index = node.index.unwrap() orelse return Node.none;
const parent = node_index.toParent();
const freelist_head = &global_progress.node_freelist_first;
var opt_free_index = @atomicLoad(Node.OptionalIndex, freelist_head, .seq_cst);
while (opt_free_index.unwrap()) |free_index| {
const freelist_ptr = freelistByIndex(free_index);
const next = @atomicLoad(Node.OptionalIndex, freelist_ptr, .seq_cst);
opt_free_index = @cmpxchgWeak(Node.OptionalIndex, freelist_head, opt_free_index, next, .seq_cst, .seq_cst) orelse {
const freelist = &global_progress.node_freelist;
var old_freelist = @atomicLoad(Freelist, freelist, .acquire); // acquire to ensure we have the correct "next" entry
while (old_freelist.head.unwrap()) |free_index| {
const next_ptr = freelistNextByIndex(free_index);
const new_freelist: Freelist = .{
.head = @atomicLoad(Node.OptionalIndex, next_ptr, .monotonic),
// We don't need to increment the generation when removing nodes from the free list,
// only when adding them. (This choice is arbitrary; the opposite would also work.)
.generation = old_freelist.generation,
};
old_freelist = @cmpxchgWeak(
Freelist,
freelist,
old_freelist,
new_freelist,
.acquire, // not theoretically necessary, but not allowed to be weaker than the failure order
.acquire, // ensure we have the correct `node_freelist_next` entry on the next iteration
) orelse {
// We won the allocation race.
return init(free_index, parent, name, estimated_total_items);
};
@ -243,18 +265,28 @@ pub const Node = struct {
}
const index = n.index.unwrap() orelse return;
const parent_ptr = parentByIndex(index);
if (parent_ptr.unwrap()) |parent_index| {
if (@atomicLoad(Node.Parent, parent_ptr, .monotonic).unwrap()) |parent_index| {
_ = @atomicRmw(u32, &storageByIndex(parent_index).completed_count, .Add, 1, .monotonic);
@atomicStore(Node.Parent, parent_ptr, .unused, .seq_cst);
@atomicStore(Node.Parent, parent_ptr, .unused, .monotonic);
const freelist_head = &global_progress.node_freelist_first;
var first = @atomicLoad(Node.OptionalIndex, freelist_head, .seq_cst);
const freelist = &global_progress.node_freelist;
var old_freelist = @atomicLoad(Freelist, freelist, .monotonic);
while (true) {
@atomicStore(Node.OptionalIndex, freelistByIndex(index), first, .seq_cst);
first = @cmpxchgWeak(Node.OptionalIndex, freelist_head, first, index.toOptional(), .seq_cst, .seq_cst) orelse break;
@atomicStore(Node.OptionalIndex, freelistNextByIndex(index), old_freelist.head, .monotonic);
old_freelist = @cmpxchgWeak(
Freelist,
freelist,
old_freelist,
.{ .head = index.toOptional(), .generation = old_freelist.generation +% 1 },
.release, // ensure a matching `start` sees the freelist link written above
.monotonic, // our write above is irrelevant if we need to retry
) orelse {
// We won the race.
return;
};
}
} else {
@atomicStore(bool, &global_progress.done, true, .seq_cst);
@atomicStore(bool, &global_progress.done, true, .monotonic);
global_progress.redraw_event.set();
if (global_progress.update_thread) |thread| thread.join();
}
@ -291,8 +323,8 @@ pub const Node = struct {
return &global_progress.node_parents[@intFromEnum(index)];
}
fn freelistByIndex(index: Node.Index) *Node.OptionalIndex {
return &global_progress.node_freelist[@intFromEnum(index)];
fn freelistNextByIndex(index: Node.Index) *Node.OptionalIndex {
return &global_progress.node_freelist_next[@intFromEnum(index)];
}
fn init(free_index: Index, parent: Parent, name: []const u8, estimated_total_items: usize) Node {
@ -307,8 +339,10 @@ pub const Node = struct {
@atomicStore(u8, &storage.name[name_len], 0, .monotonic);
const parent_ptr = parentByIndex(free_index);
assert(parent_ptr.* == .unused);
@atomicStore(Node.Parent, parent_ptr, parent, .release);
if (std.debug.runtime_safety) {
assert(@atomicLoad(Node.Parent, parent_ptr, .monotonic) == .unused);
}
@atomicStore(Node.Parent, parent_ptr, parent, .monotonic);
return .{ .index = free_index.toOptional() };
}
@ -329,15 +363,15 @@ var global_progress: Progress = .{
.node_parents = &node_parents_buffer,
.node_storage = &node_storage_buffer,
.node_freelist = &node_freelist_buffer,
.node_freelist_first = .none,
.node_freelist_next = &node_freelist_next_buffer,
.node_freelist = .{ .head = .none, .generation = 0 },
.node_end_index = 0,
};
const node_storage_buffer_len = 83;
var node_parents_buffer: [node_storage_buffer_len]Node.Parent = undefined;
var node_storage_buffer: [node_storage_buffer_len]Node.Storage = undefined;
var node_freelist_buffer: [node_storage_buffer_len]Node.OptionalIndex = undefined;
var node_freelist_next_buffer: [node_storage_buffer_len]Node.OptionalIndex = undefined;
var default_draw_buffer: [4096]u8 = undefined;
@ -456,7 +490,7 @@ fn updateThreadRun() void {
{
const resize_flag = wait(global_progress.initial_delay_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst)) return;
if (@atomicLoad(bool, &global_progress.done, .monotonic)) return;
maybeUpdateSize(resize_flag);
const buffer, _ = computeRedraw(&serialized_buffer);
@ -470,7 +504,7 @@ fn updateThreadRun() void {
while (true) {
const resize_flag = wait(global_progress.refresh_rate_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst)) {
if (@atomicLoad(bool, &global_progress.done, .monotonic)) {
stderr_mutex.lock();
defer stderr_mutex.unlock();
return clearWrittenWithEscapeCodes() catch {};
@ -500,7 +534,7 @@ fn windowsApiUpdateThreadRun() void {
{
const resize_flag = wait(global_progress.initial_delay_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst)) return;
if (@atomicLoad(bool, &global_progress.done, .monotonic)) return;
maybeUpdateSize(resize_flag);
const buffer, const nl_n = computeRedraw(&serialized_buffer);
@ -516,7 +550,7 @@ fn windowsApiUpdateThreadRun() void {
while (true) {
const resize_flag = wait(global_progress.refresh_rate_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst)) {
if (@atomicLoad(bool, &global_progress.done, .monotonic)) {
stderr_mutex.lock();
defer stderr_mutex.unlock();
return clearWrittenWindowsApi() catch {};
@ -558,7 +592,7 @@ fn ipcThreadRun(fd: posix.fd_t) anyerror!void {
{
_ = wait(global_progress.initial_delay_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst))
if (@atomicLoad(bool, &global_progress.done, .monotonic))
return;
const serialized = serialize(&serialized_buffer);
@ -570,7 +604,7 @@ fn ipcThreadRun(fd: posix.fd_t) anyerror!void {
while (true) {
_ = wait(global_progress.refresh_rate_ns);
if (@atomicLoad(bool, &global_progress.done, .seq_cst))
if (@atomicLoad(bool, &global_progress.done, .monotonic))
return;
const serialized = serialize(&serialized_buffer);
@ -765,37 +799,39 @@ fn serialize(serialized_buffer: *Serialized.Buffer) Serialized {
var any_ipc = false;
// Iterate all of the nodes and construct a serializable copy of the state that can be examined
// without atomics.
const end_index = @atomicLoad(u32, &global_progress.node_end_index, .monotonic);
// without atomics. The `@min` call is here because `node_end_index` might briefly exceed the
// node count.
const end_index = @min(@atomicLoad(u32, &global_progress.node_end_index, .monotonic), global_progress.node_storage.len);
for (
global_progress.node_parents[0..end_index],
global_progress.node_storage[0..end_index],
serialized_buffer.map[0..end_index],
) |*parent_ptr, *storage_ptr, *map| {
var begin_parent = @atomicLoad(Node.Parent, parent_ptr, .acquire);
while (begin_parent != .unused) {
const dest_storage = &serialized_buffer.storage[serialized_len];
copyAtomicLoad(&dest_storage.name, &storage_ptr.name);
dest_storage.estimated_total_count = @atomicLoad(u32, &storage_ptr.estimated_total_count, .acquire);
dest_storage.completed_count = @atomicLoad(u32, &storage_ptr.completed_count, .monotonic);
const end_parent = @atomicLoad(Node.Parent, parent_ptr, .acquire);
if (begin_parent == end_parent) {
any_ipc = any_ipc or (dest_storage.getIpcFd() != null);
serialized_buffer.parents[serialized_len] = begin_parent;
map.* = @enumFromInt(serialized_len);
serialized_len += 1;
break;
}
begin_parent = end_parent;
} else {
// A node may be freed during the execution of this loop, causing
// there to be a parent reference to a nonexistent node. Without
// this assignment, this would lead to the map entry containing
// stale data. By assigning none, the child node with the bad
// parent pointer will be harmlessly omitted from the tree.
const parent = @atomicLoad(Node.Parent, parent_ptr, .monotonic);
if (parent == .unused) {
// We might read "mixed" node data in this loop, due to weird atomic things
// or just a node actually being freed while this loop runs. That could cause
// there to be a parent reference to a nonexistent node. Without this assignment,
// this would lead to the map entry containing stale data. By assigning none, the
// child node with the bad parent pointer will be harmlessly omitted from the tree.
//
// Note that there's no concern of potentially creating "looping" data if we read
// "mixed" node data like this, because if a node is (directly or indirectly) its own
// parent, it will just not be printed at all. The general idea here is that performance
// is more important than 100% correct output every frame, given that this API is likely
// to be used in hot paths!
map.* = .none;
continue;
}
const dest_storage = &serialized_buffer.storage[serialized_len];
copyAtomicLoad(&dest_storage.name, &storage_ptr.name);
dest_storage.estimated_total_count = @atomicLoad(u32, &storage_ptr.estimated_total_count, .acquire); // synchronizes with release in `setIpcFd`
dest_storage.completed_count = @atomicLoad(u32, &storage_ptr.completed_count, .monotonic);
any_ipc = any_ipc or (dest_storage.getIpcFd() != null);
serialized_buffer.parents[serialized_len] = parent;
map.* = @enumFromInt(serialized_len);
serialized_len += 1;
}
// Remap parents to point inside serialized arrays.

View File

@ -102,7 +102,7 @@ pub fn fromTarget(target: Target) Query {
.os_version_min = undefined,
.os_version_max = undefined,
.abi = target.abi,
.glibc_version = target.os.versionRange().gnuLibCVersion(),
.glibc_version = if (target.abi.isGnu()) target.os.versionRange().gnuLibCVersion() else null,
.android_api_level = if (target.abi.isAndroid()) target.os.version_range.linux.android else null,
};
result.updateOsVersionRange(target.os);

View File

@ -2486,13 +2486,13 @@ test "reIndex" {
test "auto store_hash" {
const HasCheapEql = AutoArrayHashMap(i32, i32);
const HasExpensiveEql = AutoArrayHashMap([32]i32, i32);
try testing.expect(std.meta.fieldInfo(HasCheapEql.Data, .hash).type == void);
try testing.expect(std.meta.fieldInfo(HasExpensiveEql.Data, .hash).type != void);
try testing.expect(@FieldType(HasCheapEql.Data, "hash") == void);
try testing.expect(@FieldType(HasExpensiveEql.Data, "hash") != void);
const HasCheapEqlUn = AutoArrayHashMapUnmanaged(i32, i32);
const HasExpensiveEqlUn = AutoArrayHashMapUnmanaged([32]i32, i32);
try testing.expect(std.meta.fieldInfo(HasCheapEqlUn.Data, .hash).type == void);
try testing.expect(std.meta.fieldInfo(HasExpensiveEqlUn.Data, .hash).type != void);
try testing.expect(@FieldType(HasCheapEqlUn.Data, "hash") == void);
try testing.expect(@FieldType(HasExpensiveEqlUn.Data, "hash") != void);
}
test "sort" {

View File

@ -177,7 +177,7 @@ pub fn isAscii(c: u8) bool {
return c < 128;
}
/// /// Deprecated: use `isAscii`
/// Deprecated: use `isAscii`
pub const isASCII = isAscii;
/// Uppercases the character and returns it as-is if already uppercase or not a letter.

View File

@ -1,4 +1,5 @@
//! Base64 encoding/decoding.
//! Base64 encoding/decoding as specified by
//! [RFC 4648](https://datatracker.ietf.org/doc/html/rfc4648).
const std = @import("std.zig");
const assert = std.debug.assert;
@ -24,12 +25,15 @@ pub const Codecs = struct {
Decoder: Base64Decoder,
};
/// The Base64 alphabet defined in
/// [RFC 4648 section 4](https://datatracker.ietf.org/doc/html/rfc4648#section-4).
pub const standard_alphabet_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/".*;
fn standardBase64DecoderWithIgnore(ignore: []const u8) Base64DecoderWithIgnore {
return Base64DecoderWithIgnore.init(standard_alphabet_chars, '=', ignore);
}
/// Standard Base64 codecs, with padding
/// Standard Base64 codecs, with padding, as defined in
/// [RFC 4648 section 4](https://datatracker.ietf.org/doc/html/rfc4648#section-4).
pub const standard = Codecs{
.alphabet_chars = standard_alphabet_chars,
.pad_char = '=',
@ -38,7 +42,8 @@ pub const standard = Codecs{
.Decoder = Base64Decoder.init(standard_alphabet_chars, '='),
};
/// Standard Base64 codecs, without padding
/// Standard Base64 codecs, without padding, as defined in
/// [RFC 4648 section 3.2](https://datatracker.ietf.org/doc/html/rfc4648#section-3.2).
pub const standard_no_pad = Codecs{
.alphabet_chars = standard_alphabet_chars,
.pad_char = null,
@ -47,12 +52,15 @@ pub const standard_no_pad = Codecs{
.Decoder = Base64Decoder.init(standard_alphabet_chars, null),
};
/// The URL-safe Base64 alphabet defined in
/// [RFC 4648 section 5](https://datatracker.ietf.org/doc/html/rfc4648#section-5).
pub const url_safe_alphabet_chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_".*;
fn urlSafeBase64DecoderWithIgnore(ignore: []const u8) Base64DecoderWithIgnore {
return Base64DecoderWithIgnore.init(url_safe_alphabet_chars, null, ignore);
}
/// URL-safe Base64 codecs, with padding
/// URL-safe Base64 codecs, with padding, as defined in
/// [RFC 4648 section 5](https://datatracker.ietf.org/doc/html/rfc4648#section-5).
pub const url_safe = Codecs{
.alphabet_chars = url_safe_alphabet_chars,
.pad_char = '=',
@ -61,7 +69,8 @@ pub const url_safe = Codecs{
.Decoder = Base64Decoder.init(url_safe_alphabet_chars, '='),
};
/// URL-safe Base64 codecs, without padding
/// URL-safe Base64 codecs, without padding, as defined in
/// [RFC 4648 section 3.2](https://datatracker.ietf.org/doc/html/rfc4648#section-3.2).
pub const url_safe_no_pad = Codecs{
.alphabet_chars = url_safe_alphabet_chars,
.pad_char = null,

View File

@ -141,11 +141,16 @@ pub const AtomicRmwOp = enum {
/// therefore must be kept in sync with the compiler implementation.
pub const CodeModel = enum {
default,
tiny,
small,
extreme,
kernel,
medium,
large,
medany,
medium,
medlow,
medmid,
normal,
small,
tiny,
};
/// This data structure is used by the Zig language code generation and

View File

@ -2269,7 +2269,10 @@ pub const SC = switch (native_os) {
else => void,
};
pub const _SC = switch (native_os) {
pub const _SC = if (builtin.abi.isAndroid()) enum(c_int) {
PAGESIZE = 39,
NPROCESSORS_ONLN = 97,
} else switch (native_os) {
.driverkit, .ios, .macos, .tvos, .visionos, .watchos => enum(c_int) {
PAGESIZE = 29,
},
@ -9328,9 +9331,11 @@ pub extern "c" fn setrlimit64(resource: rlimit_resource, rlim: *const rlimit) c_
pub const arc4random_buf = switch (native_os) {
.dragonfly, .netbsd, .freebsd, .solaris, .openbsd, .macos, .ios, .tvos, .watchos, .visionos => private.arc4random_buf,
.linux => if (builtin.abi.isAndroid()) private.arc4random_buf else {},
else => {},
};
pub const getentropy = switch (native_os) {
.linux => if (builtin.abi.isAndroid() and versionCheck(.{ .major = 28, .minor = 0, .patch = 0 })) private.getentropy else {},
.emscripten => private.getentropy,
else => {},
};

View File

@ -1165,6 +1165,8 @@ pub const CPUFAMILY = enum(u32) {
ARM_PALMA = 0x72015832,
ARM_DONAN = 0x6f5129ac,
ARM_BRAVA = 0x17d5b93a,
ARM_TAHITI = 0x75d4acb9,
ARM_TUPAI = 0x204526d0,
_,
};

View File

@ -289,3 +289,22 @@ test "zero sized block" {
try expectEqualDecodedStreaming("", input_raw);
try expectEqualDecodedStreaming("", input_rle);
}
test "declared raw literals size too large" {
const input_raw =
"\x28\xb5\x2f\xfd" ++ // zstandard frame magic number
"\x00\x00" ++ // frame header: everything unset, window descriptor zero
"\x95\x00\x00" ++ // block header with: last_block set, block_type compressed, block_size 18
"\xbc\xf3\xae" ++ // literals section header with: type raw, size_format 3, regenerated_size 716603
"\xa5\x9f\xe3"; // some bytes of literal content - the content is shorter than regenerated_size
// Note that the regenerated_size in the above input is larger than the maximum block size, so the
// block can't be valid as it is a raw literals block.
var fbs = std.io.fixedBufferStream(input_raw);
var window: [1024]u8 = undefined;
var stream = decompressor(fbs.reader(), .{ .window_buffer = &window });
var buf: [1024]u8 = undefined;
try std.testing.expectError(error.MalformedBlock, stream.read(&buf));
}

View File

@ -989,6 +989,7 @@ pub fn decodeLiteralsSection(
const header = try decodeLiteralsHeader(source);
switch (header.block_type) {
.raw => {
if (buffer.len < header.regenerated_size) return error.LiteralsBufferTooSmall;
try source.readNoEof(buffer[0..header.regenerated_size]);
return LiteralsSection{
.header = header,

View File

@ -380,7 +380,7 @@ pub const FrameContext = struct {
/// - `error.WindowSizeUnknown` if the frame does not have a valid window
/// size
/// - `error.WindowTooLarge` if the window size is larger than
/// `window_size_max`
/// `window_size_max` or `std.math.maxInt(usize)`
/// - `error.ContentSizeTooLarge` if the frame header indicates a content
/// size larger than `std.math.maxInt(usize)`
pub fn init(
@ -395,7 +395,7 @@ pub const FrameContext = struct {
const window_size = if (window_size_raw > window_size_max)
return error.WindowTooLarge
else
@as(usize, @intCast(window_size_raw));
std.math.cast(usize, window_size_raw) orelse return error.WindowTooLarge;
const should_compute_checksum =
frame_header.descriptor.content_checksum_flag and verify_checksum;

View File

@ -471,6 +471,10 @@ pub const AffineCoordinates = struct {
/// Identity element in affine coordinates.
pub const identityElement = AffineCoordinates{ .x = P256.identityElement.x, .y = P256.identityElement.y };
pub fn neg(p: AffineCoordinates) AffineCoordinates {
return .{ .x = p.x, .y = p.y.neg() };
}
fn cMov(p: *AffineCoordinates, a: AffineCoordinates, c: u1) void {
p.x.cMov(a.x, c);
p.y.cMov(a.y, c);

View File

@ -471,6 +471,10 @@ pub const AffineCoordinates = struct {
/// Identity element in affine coordinates.
pub const identityElement = AffineCoordinates{ .x = P384.identityElement.x, .y = P384.identityElement.y };
pub fn neg(p: AffineCoordinates) AffineCoordinates {
return .{ .x = p.x, .y = p.y.neg() };
}
fn cMov(p: *AffineCoordinates, a: AffineCoordinates, c: u1) void {
p.x.cMov(a.x, c);
p.y.cMov(a.y, c);

View File

@ -549,6 +549,10 @@ pub const AffineCoordinates = struct {
/// Identity element in affine coordinates.
pub const identityElement = AffineCoordinates{ .x = Secp256k1.identityElement.x, .y = Secp256k1.identityElement.y };
pub fn neg(p: AffineCoordinates) AffineCoordinates {
return .{ .x = p.x, .y = p.y.neg() };
}
fn cMov(p: *AffineCoordinates, a: AffineCoordinates, c: u1) void {
p.x.cMov(a.x, c);
p.y.cMov(a.y, c);

View File

@ -112,7 +112,7 @@ pub const Options = struct {
/// No host verification is performed, which prevents a trusted connection from
/// being established.
no_verification,
/// Verify that the server certificate was issues for a given host.
/// Verify that the server certificate was issued for a given host.
explicit: []const u8,
},
/// How to verify the authenticity of server certificates.

View File

@ -244,10 +244,14 @@ pub fn dumpHexFallible(bytes: []const u8) !void {
const stderr = std.io.getStdErr();
const ttyconf = std.io.tty.detectConfig(stderr);
const writer = stderr.writer();
try dumpHexInternal(bytes, ttyconf, writer);
}
fn dumpHexInternal(bytes: []const u8, ttyconf: std.io.tty.Config, writer: anytype) !void {
var chunks = mem.window(u8, bytes, 16, 16);
while (chunks.next()) |window| {
// 1. Print the address.
const address = (@intFromPtr(bytes.ptr) + 0x10 * (chunks.index orelse 0) / 16) - 0x10;
const address = (@intFromPtr(bytes.ptr) + 0x10 * (std.math.divCeil(usize, chunks.index orelse bytes.len, 16) catch unreachable)) - 0x10;
try ttyconf.setColor(writer, .dim);
// We print the address in lowercase and the bytes in uppercase hexadecimal to distinguish them more.
// Also, make sure all lines are aligned by padding the address.
@ -292,6 +296,24 @@ pub fn dumpHexFallible(bytes: []const u8) !void {
}
}
test dumpHexInternal {
const bytes: []const u8 = &.{ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff, 0x01, 0x12, 0x13 };
var output = std.ArrayList(u8).init(std.testing.allocator);
defer output.deinit();
try dumpHexInternal(bytes, .no_color, output.writer());
const expected = try std.fmt.allocPrint(std.testing.allocator,
\\{x:0>[2]} 00 11 22 33 44 55 66 77 88 99 AA BB CC DD EE FF .."3DUfw........
\\{x:0>[2]} 01 12 13 ...
\\
, .{
@intFromPtr(bytes.ptr),
@intFromPtr(bytes.ptr) + 16,
@sizeOf(usize) * 2,
});
defer std.testing.allocator.free(expected);
try std.testing.expectEqualStrings(expected, output.items);
}
/// Tries to print the current stack trace to stderr, unbuffered, and ignores any error returned.
/// TODO multithreaded awareness
pub fn dumpCurrentStackTrace(start_addr: ?usize) void {
@ -415,7 +437,13 @@ pub fn dumpStackTraceFromBase(context: *ThreadContext) void {
var it = StackIterator.initWithContext(null, debug_info, context) catch return;
defer it.deinit();
printSourceAtAddress(debug_info, stderr, it.unwind_state.?.dwarf_context.pc, tty_config) catch return;
// DWARF unwinding on aarch64-macos is not complete so we need to get the pc address from mcontext
const pc_addr = if (builtin.target.os.tag.isDarwin() and native_arch == .aarch64)
context.mcontext.ss.pc
else
it.unwind_state.?.dwarf_context.pc;
printSourceAtAddress(debug_info, stderr, pc_addr, tty_config) catch return;
while (it.next()) |return_address| {
printLastUnwindError(&it, debug_info, stderr, tty_config);
@ -1458,8 +1486,26 @@ fn dumpSegfaultInfoPosix(sig: i32, code: i32, addr: usize, ctx_ptr: ?*anyopaque)
.aarch64,
.aarch64_be,
=> {
const ctx: *posix.ucontext_t = @ptrCast(@alignCast(ctx_ptr));
dumpStackTraceFromBase(ctx);
// Some kernels don't align `ctx_ptr` properly. Handle this defensively.
const ctx: *align(1) posix.ucontext_t = @ptrCast(ctx_ptr);
var new_ctx: posix.ucontext_t = ctx.*;
if (builtin.os.tag.isDarwin() and builtin.cpu.arch == .aarch64) {
// The kernel incorrectly writes the contents of `__mcontext_data` right after `mcontext`,
// rather than after the 8 bytes of padding that are supposed to sit between the two. Copy the
// contents to the right place so that the `mcontext` pointer will be correct after the
// `relocateContext` call below.
new_ctx.__mcontext_data = @as(*align(1) extern struct {
onstack: c_int,
sigmask: std.c.sigset_t,
stack: std.c.stack_t,
link: ?*std.c.ucontext_t,
mcsize: u64,
mcontext: *std.c.mcontext_t,
__mcontext_data: std.c.mcontext_t align(@sizeOf(usize)), // Disable padding after `mcontext`.
}, @ptrCast(ctx)).__mcontext_data;
}
relocateContext(&new_ctx);
dumpStackTraceFromBase(&new_ctx);
},
else => {},
}
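
The general technique in this fix deserves a note: `@alignCast` asserts alignment and invokes checked illegal behavior when the assertion fails, whereas reading through an `align(1)` pointer is always valid because it makes no alignment promise. A minimal hedged sketch with a hypothetical helper:

fn copyUnaligned(comptime T: type, ptr: *anyopaque) T {
    // The compiler emits a load that tolerates any address.
    const unaligned: *align(1) const T = @ptrCast(ptr);
    return unaligned.*;
}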

View File

@ -1380,13 +1380,8 @@ test fmtDuration {
}
fn formatDurationSigned(ns: i64, comptime fmt: []const u8, options: std.fmt.FormatOptions, writer: anytype) !void {
if (ns < 0) {
const data = FormatDurationData{ .ns = @as(u64, @intCast(-ns)), .negative = true };
try formatDuration(data, fmt, options, writer);
} else {
const data = FormatDurationData{ .ns = @as(u64, @intCast(ns)) };
try formatDuration(data, fmt, options, writer);
}
const data = FormatDurationData{ .ns = @abs(ns), .negative = ns < 0 };
try formatDuration(data, fmt, options, writer);
}
/// Return a Formatter for number of nanoseconds according to its signed magnitude:
@ -1457,6 +1452,7 @@ test fmtDurationSigned {
.{ .s = "-1y1m999ns", .d = -(365 * std.time.ns_per_day + std.time.ns_per_min + 999) },
.{ .s = "292y24w3d23h47m16.854s", .d = math.maxInt(i64) },
.{ .s = "-292y24w3d23h47m16.854s", .d = math.minInt(i64) + 1 },
.{ .s = "-292y24w3d23h47m16.854s", .d = math.minInt(i64) },
}) |tc| {
const slice = try bufPrint(&buf, "{}", .{fmtDurationSigned(tc.d)});
try std.testing.expectEqualStrings(tc.s, slice);
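
The new `math.minInt(i64)` case works because `@abs` on a signed integer returns the corresponding unsigned type, so the magnitude of the most negative value is representable even though its i64 negation is not; the replaced `@intCast(-ns)` form would have overflowed on that input. A quick standalone check:

const std = @import("std");

test "@abs handles minInt" {
    const n: i64 = std.math.minInt(i64);
    // -minInt(i64) does not fit in i64, but its magnitude fits in u64.
    try std.testing.expectEqual(@as(u64, 1) << 63, @abs(n));
}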

View File

@ -1347,13 +1347,11 @@ pub fn realpathW(self: Dir, pathname: []const u16, out_buffer: []u8) RealPathErr
var wide_buf: [w.PATH_MAX_WIDE]u16 = undefined;
const wide_slice = try w.GetFinalPathNameByHandle(h_file, .{}, &wide_buf);
var big_out_buf: [fs.max_path_bytes]u8 = undefined;
const end_index = std.unicode.wtf16LeToWtf8(&big_out_buf, wide_slice);
if (end_index > out_buffer.len)
const len = std.unicode.calcWtf8Len(wide_slice);
if (len > out_buffer.len)
return error.NameTooLong;
const result = out_buffer[0..end_index];
@memcpy(result, big_out_buf[0..end_index]);
return result;
const end_index = std.unicode.wtf16LeToWtf8(out_buffer, wide_slice);
return out_buffer[0..end_index];
}
pub const RealPathAllocError = RealPathError || Allocator.Error;

View File

@ -73,8 +73,8 @@ pub const Wyhash = struct {
newSelf.smallKey(input);
} else {
var offset: usize = 0;
var scratch: [16]u8 = undefined;
if (self.buf_len < 16) {
var scratch: [16]u8 = undefined;
const rem = 16 - self.buf_len;
@memcpy(scratch[0..rem], self.buf[self.buf.len - rem ..][0..rem]);
@memcpy(scratch[rem..][0..self.buf_len], self.buf[0..self.buf_len]);
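
The fix hoists `scratch` to function scope because a slice into it previously escaped the `if` block that owned it. A hedged sketch of the corrected shape, with a hypothetical `lastByte` helper rather than the Wyhash code:

fn lastByte(buf: []const u8) u8 {
    // Declared at function scope so it is still alive at the final read.
    // Declaring it inside the `if` block (as the pre-fix code did) would
    // leave `bytes` dangling as soon as the block ended.
    var scratch: [16]u8 = undefined;
    var bytes: []const u8 = buf;
    if (buf.len < 16) {
        @memset(&scratch, 0);
        @memcpy(scratch[16 - buf.len ..][0..buf.len], buf);
        bytes = &scratch;
    }
    return bytes[bytes.len - 1];
}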

View File

@ -593,6 +593,8 @@ pub fn testAllocator(base_allocator: mem.Allocator) !void {
const zero_bit_ptr = try allocator.create(u0);
zero_bit_ptr.* = 0;
allocator.destroy(zero_bit_ptr);
const zero_len_array = try allocator.create([0]u64);
allocator.destroy(zero_len_array);
const oversize = try allocator.alignedAlloc(u32, null, 5);
try testing.expect(oversize.len >= 5);

View File

@ -3,8 +3,10 @@ const assert = std.debug.assert;
const mem = std.mem;
const Allocator = std.mem.Allocator;
/// This allocator takes an existing allocator, wraps it, and provides an interface
/// where you can allocate without freeing, and then free it all together.
/// This allocator takes an existing allocator, wraps it, and provides an interface where
/// you can allocate and then free it all together. Calls to free an individual item only
/// free the item if it was the most recent allocation; otherwise they do nothing.
pub const ArenaAllocator = struct {
child_allocator: Allocator,
state: State,

View File

@ -281,6 +281,7 @@ pub fn DebugAllocator(comptime config: Config) type {
allocated_count: SlotIndex,
freed_count: SlotIndex,
prev: ?*BucketHeader,
next: ?*BucketHeader,
canary: usize = config.canary,
fn fromPage(page_addr: usize, slot_count: usize) *BucketHeader {
@ -782,7 +783,11 @@ pub fn DebugAllocator(comptime config: Config) type {
.allocated_count = 1,
.freed_count = 0,
.prev = self.buckets[size_class_index],
.next = null,
};
if (self.buckets[size_class_index]) |old_head| {
old_head.next = bucket;
}
self.buckets[size_class_index] = bucket;
if (!config.backing_allocator_zeroes) {
@ -935,9 +940,18 @@ pub fn DebugAllocator(comptime config: Config) type {
}
bucket.freed_count += 1;
if (bucket.freed_count == bucket.allocated_count) {
if (self.buckets[size_class_index] == bucket) {
self.buckets[size_class_index] = null;
if (bucket.prev) |prev| {
prev.next = bucket.next;
}
if (bucket.next) |next| {
assert(self.buckets[size_class_index] != bucket);
next.prev = bucket.prev;
} else {
assert(self.buckets[size_class_index] == bucket);
self.buckets[size_class_index] = bucket.prev;
}
if (!config.never_unmap) {
const page: [*]align(page_size) u8 = @ptrFromInt(page_addr);
self.backing_allocator.rawFree(page[0..page_size], page_align, @returnAddress());
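
With the added `next` pointer the bucket list becomes doubly linked, so a fully-freed interior bucket can be unlinked in O(1); previously only the list head (`self.buckets[size_class_index]`) could be dropped. A hedged standalone sketch of the same unlink shape, where (as above) the head is the node whose `next` is null:

const Node = struct { prev: ?*Node = null, next: ?*Node = null };

fn unlink(head: *?*Node, node: *Node) void {
    if (node.prev) |p| p.next = node.next;
    if (node.next) |n| n.prev = node.prev else head.* = node.prev;
}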

View File

@ -1241,10 +1241,14 @@ pub fn initDefaultProxies(client: *Client, arena: Allocator) !void {
fn createProxyFromEnvVar(arena: Allocator, env_var_names: []const []const u8) !?*Proxy {
const content = for (env_var_names) |name| {
break std.process.getEnvVarOwned(arena, name) catch |err| switch (err) {
const content = std.process.getEnvVarOwned(arena, name) catch |err| switch (err) {
error.EnvironmentVariableNotFound => continue,
else => |e| return e,
};
if (content.len == 0) continue;
break content;
} else return null;
const uri = Uri.parse(content) catch try Uri.parseAfterScheme("http", content);

View File

@ -925,3 +925,19 @@ test "parse at comptime" {
};
comptime testing.expectEqual(@as(u64, 9999), config.uptime) catch unreachable;
}
test "parse with zero-bit field" {
const str =
\\{
\\ "a": ["a", "a"],
\\ "b": "a"
\\}
;
const ZeroSizedEnum = enum { a };
try testing.expectEqual(0, @sizeOf(ZeroSizedEnum));
const Inner = struct { a: []const ZeroSizedEnum, b: ZeroSizedEnum };
const expected: Inner = .{ .a = &.{ .a, .a }, .b = .a };
try testAllParseFunctions(Inner, expected, str);
}

View File

@ -76,9 +76,18 @@ pub fn calcSqrtLimbsBufferLen(a_bit_count: usize) usize {
return a_limb_count + 3 * u_s_rem_limb_count + calcDivLimbsBufferLen(a_limb_count, u_s_rem_limb_count);
}
// Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
/// Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
pub fn calcNonZeroTwosCompLimbCount(bit_count: usize) usize {
assert(bit_count != 0);
return calcTwosCompLimbCount(bit_count);
}
/// Compute the number of limbs required to store a 2s-complement number of `bit_count` bits.
///
/// Special cases `bit_count == 0` to return 1. Zero-bit integers can only store the value zero
/// and this big integer implementation stores zero using one limb.
pub fn calcTwosCompLimbCount(bit_count: usize) usize {
return std.math.divCeil(usize, bit_count, @bitSizeOf(Limb)) catch unreachable;
return @max(std.math.divCeil(usize, bit_count, @bitSizeOf(Limb)) catch unreachable, 1);
}
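
A hedged usage sketch of the boundary behavior, specialized to 64-bit limbs (the real `Limb` width is target-dependent):

const std = @import("std");

fn limbCount(bit_count: usize) usize {
    // Same formula as `calcTwosCompLimbCount` above, with 64-bit limbs assumed.
    return @max(std.math.divCeil(usize, bit_count, 64) catch unreachable, 1);
}

test limbCount {
    try std.testing.expectEqual(@as(usize, 1), limbCount(1));
    try std.testing.expectEqual(@as(usize, 1), limbCount(64));
    try std.testing.expectEqual(@as(usize, 2), limbCount(65));
    // The zero-bit special case: the value zero still occupies one limb.
    try std.testing.expectEqual(@as(usize, 1), limbCount(0));
}
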
/// a + b * c + *carry, sets carry to the overflow bits
@ -188,8 +197,10 @@ pub const Mutable = struct {
if (self.limbs.ptr != other.limbs.ptr) {
@memcpy(self.limbs[0..other.limbs.len], other.limbs[0..other.limbs.len]);
}
self.positive = other.positive;
self.len = other.limbs.len;
// Normalize before setting `positive` so that `eqlZero` doesn't need to iterate
// over the extra zero limbs.
self.normalize(other.limbs.len);
self.positive = other.positive or other.eqlZero();
}
/// Efficiently swap an Mutable with another. This swaps the limb pointers and a full copy is not
@ -1096,7 +1107,7 @@ pub const Mutable = struct {
/// Asserts there is enough memory to fit the result. The upper bound Limb count is
/// `a.limbs.len + (shift / (@sizeOf(Limb) * 8))`.
pub fn shiftLeft(r: *Mutable, a: Const, shift: usize) void {
llshl(r.limbs[0..], a.limbs[0..a.limbs.len], shift);
llshl(r.limbs, a.limbs, shift);
r.normalize(a.limbs.len + (shift / limb_bits) + 1);
r.positive = a.positive;
}
@ -1165,7 +1176,7 @@ pub const Mutable = struct {
// This shift should not be able to overflow, so invoke llshl and normalize manually
// to avoid the extra required limb.
llshl(r.limbs[0..], a.limbs[0..a.limbs.len], shift);
llshl(r.limbs, a.limbs, shift);
r.normalize(a.limbs.len + (shift / limb_bits));
r.positive = a.positive;
}
@ -1202,17 +1213,11 @@ pub const Mutable = struct {
break :nonzero a.limbs[full_limbs_shifted_out] << not_covered != 0;
};
llshr(r.limbs[0..], a.limbs[0..a.limbs.len], shift);
llshr(r.limbs, a.limbs, shift);
r.len = a.limbs.len - full_limbs_shifted_out;
r.positive = a.positive;
if (nonzero_negative_shiftout) {
if (full_limbs_shifted_out > 0) {
r.limbs[a.limbs.len - full_limbs_shifted_out] = 0;
r.len += 1;
}
r.addScalar(r.toConst(), -1);
}
if (nonzero_negative_shiftout) r.addScalar(r.toConst(), -1);
r.normalize(r.len);
}
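The `addScalar(r.toConst(), -1)` adjustment implements floor rounding: an arithmetic right shift of a negative value rounds toward negative infinity whenever nonzero bits are shifted out. For example:
// -3 >> 1 == -2 in two's complement (a plain magnitude shift would give -1):
//   |-3| >> 1 == 1, and a nonzero bit was shifted out, so subtract 1 -> -2.
// -4 >> 1 == -2 exactly: nothing nonzero was shifted out, no adjustment.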
@ -1755,119 +1760,60 @@ pub const Mutable = struct {
y.shiftRight(y.toConst(), norm_shift);
}
/// If a is positive, this passes through to truncate.
/// If a is negative, then r is set to positive with the bit pattern ~(a - 1).
/// r may alias a.
///
/// Asserts `r` has enough storage to store the result.
/// The upper bound is `calcTwosCompLimbCount(a.len)`.
pub fn convertToTwosComplement(r: *Mutable, a: Const, signedness: Signedness, bit_count: usize) void {
if (a.positive) {
r.truncate(a, signedness, bit_count);
return;
}
const req_limbs = calcTwosCompLimbCount(bit_count);
if (req_limbs == 0 or a.eqlZero()) {
r.set(0);
return;
}
const bit = @as(Log2Limb, @truncate(bit_count - 1));
const signmask = @as(Limb, 1) << bit;
const mask = (signmask << 1) -% 1;
r.addScalar(a.abs(), -1);
if (req_limbs > r.len) {
@memset(r.limbs[r.len..req_limbs], 0);
}
assert(r.limbs.len >= req_limbs);
r.len = req_limbs;
llnot(r.limbs[0..r.len]);
r.limbs[r.len - 1] &= mask;
r.normalize(r.len);
}
/// Truncate an integer to a number of bits, following 2s-complement semantics.
/// r may alias a.
/// `r` may alias `a`.
///
/// Asserts `r` has enough storage to store the result.
/// Asserts `r` has enough storage to compute the result.
/// The upper bound is `calcTwosCompLimbCount(a.len)`.
pub fn truncate(r: *Mutable, a: Const, signedness: Signedness, bit_count: usize) void {
const req_limbs = calcTwosCompLimbCount(bit_count);
const abs_trunc_a: Const = .{
.positive = true,
.limbs = a.limbs[0..@min(a.limbs.len, req_limbs)],
};
// Handle 0-bit integers.
if (req_limbs == 0 or abs_trunc_a.eqlZero()) {
if (bit_count == 0) {
@branchHint(.unlikely);
r.set(0);
return;
}
const bit = @as(Log2Limb, @truncate(bit_count - 1));
const signmask = @as(Limb, 1) << bit; // 0b0..010...0 where 1 is the sign bit.
const mask = (signmask << 1) -% 1; // 0b0..01..1 where the leftmost 1 is the sign bit.
const max_limbs = calcTwosCompLimbCount(bit_count);
const sign_bit = @as(Limb, 1) << @truncate(bit_count - 1);
const mask = @as(Limb, maxInt(Limb)) >> @truncate(-%bit_count);
if (!a.positive) {
// Convert the integer from sign-magnitude into twos-complement.
// -x = ~(x - 1)
// Note, we simply take req_limbs * @bitSizeOf(Limb) as the
// target bit count.
// Guess whether the result will have the same sign as `a`.
// * If the result will be signed zero, the guess is `true`.
// * If the result will be the minimum signed integer, the guess is `false`.
// * If the result will be unsigned zero, the guess is `a.positive`.
// * Otherwise the guess is correct.
const same_sign_guess = switch (signedness) {
.signed => max_limbs > a.limbs.len or a.limbs[max_limbs - 1] & sign_bit == 0,
.unsigned => a.positive,
};
r.addScalar(abs_trunc_a, -1);
// Zero-extend the result
@memset(r.limbs[r.len..req_limbs], 0);
r.len = req_limbs;
// Without truncating, we can already peek at the sign bit of the result here.
// Note that it will be 0 if the result is negative, as we did not apply the flip here.
// If the result is negative, we have
// -(-x & mask)
// = ~(~(x - 1) & mask) + 1
// = ~(~((x - 1) | ~mask)) + 1
// = ((x - 1) | ~mask)) + 1
// Note, this is only valid for the target bits and not the upper bits
// of the most significant limb. Those still need to be cleared.
// Also note that `mask` is zero for all other bits, reducing to the identity.
// This means that we still need to use & mask to clear off the upper bits.
if (signedness == .signed and r.limbs[r.len - 1] & signmask == 0) {
// Re-add the one and negate to get the result.
r.limbs[r.len - 1] &= mask;
// Note, addition cannot require extra limbs here as we did a subtraction before.
r.addScalar(r.toConst(), 1);
r.normalize(r.len);
r.positive = false;
} else {
llnot(r.limbs[0..r.len]);
r.limbs[r.len - 1] &= mask;
r.normalize(r.len);
}
} else {
const abs_trunc_a: Const = .{
.positive = true,
.limbs = a.limbs[0..llnormalize(a.limbs[0..@min(a.limbs.len, max_limbs)])],
};
if (same_sign_guess or abs_trunc_a.eqlZero()) {
// One of the following is true:
// * The result is zero.
// * The result is non-zero and has the same sign as `a`.
r.copy(abs_trunc_a);
// If the integer fits within target bits, no wrapping is required.
if (r.len < req_limbs) return;
r.limbs[r.len - 1] &= mask;
if (max_limbs <= r.len) r.limbs[max_limbs - 1] &= mask;
r.normalize(r.len);
if (signedness == .signed and r.limbs[r.len - 1] & signmask != 0) {
// Convert 2s-complement back to sign-magnitude.
// Sign-extend the upper bits so that they are inverted correctly.
r.limbs[r.len - 1] |= ~mask;
llnot(r.limbs[0..r.len]);
// Note, can only overflow if r holds 0xFFF...F which can only happen if
// a holds 0.
r.addScalar(r.toConst(), 1);
r.positive = false;
}
r.positive = a.positive or r.eqlZero();
} else {
// One of the following is true:
// * The result is the minimum signed integer.
// * The result is unsigned zero.
// * The result is non-zero and has the opposite sign as `a`.
r.addScalar(abs_trunc_a, -1);
llnot(r.limbs[0..r.len]);
@memset(r.limbs[r.len..max_limbs], maxInt(Limb));
r.limbs[max_limbs - 1] &= mask;
r.normalize(max_limbs);
r.positive = switch (signedness) {
// The only value with the sign bit still set is the minimum signed integer.
.signed => !a.positive and r.limbs[max_limbs - 1] & sign_bit == 0,
.unsigned => !a.positive or r.eqlZero(),
};
}
}
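As a sanity reference for the rewritten sign handling, the result must match ordinary two's-complement wrapping; a hedged sketch with small values:
// truncate(-1,  .unsigned, 8) ==  255   // -1 wraps to 0xFF
// truncate(-1,  .signed,   8) ==   -1   // sign bit stays set
// truncate(128, .signed,   8) == -128   // the minimum-signed-integer case
//                                       // called out in the comments above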


@ -726,6 +726,34 @@ test "subWrap single-multi, signed, limb aligned" {
try testing.expect((try a.toInt(SignedDoubleLimb)) == maxInt(SignedDoubleLimb));
}
test "addWrap returns normalized result" {
var x = try Managed.initSet(testing.allocator, 0);
defer x.deinit();
var y = try Managed.initSet(testing.allocator, 0);
defer y.deinit();
// Make them both non-normalized "-0".
x.setMetadata(false, 1);
y.setMetadata(false, 1);
var r = try Managed.init(testing.allocator);
defer r.deinit();
try testing.expect(!(try r.addWrap(&x, &y, .unsigned, 64)));
try testing.expect(r.isPositive() and r.len() == 1 and r.limbs[0] == 0);
}
test "subWrap returns normalized result" {
var x = try Managed.initSet(testing.allocator, 0);
defer x.deinit();
var y = try Managed.initSet(testing.allocator, 0);
defer y.deinit();
var r = try Managed.init(testing.allocator);
defer r.deinit();
try testing.expect(!(try r.subWrap(&x, &y, .unsigned, 64)));
try testing.expect(r.isPositive() and r.len() == 1 and r.limbs[0] == 0);
}
test "addSat single-single, unsigned" {
var a = try Managed.initSet(testing.allocator, maxInt(u17) - 5);
defer a.deinit();
@ -1020,7 +1048,7 @@ test "mul large" {
// Generate a number that's large enough to cross the thresholds for the use
// of subquadratic algorithms
for (a.limbs) |*p| {
p.* = std.math.maxInt(Limb);
p.* = maxInt(Limb);
}
a.setMetadata(true, 50);
@ -1104,7 +1132,7 @@ test "mulWrap large" {
// Generate a number that's large enough to cross the thresholds for the use
// of subquadratic algorithms
for (a.limbs) |*p| {
p.* = std.math.maxInt(Limb);
p.* = maxInt(Limb);
}
a.setMetadata(true, 50);
@ -1961,23 +1989,78 @@ test "truncate to mutable with fewer limbs" {
.positive = undefined,
};
res.truncate(.{ .positive = true, .limbs = &.{ 0, 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.eqlZero());
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{ 0, 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.eqlZero());
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{ 0, 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.eqlZero());
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{ 0, 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.eqlZero());
res.truncate(.{ .positive = true, .limbs = &.{ std.math.maxInt(Limb), 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(std.math.maxInt(Limb)).compare(.eq));
res.truncate(.{ .positive = true, .limbs = &.{ std.math.maxInt(Limb), 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{ maxInt(Limb), 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(maxInt(Limb)).compare(.eq));
res.truncate(.{ .positive = true, .limbs = &.{ maxInt(Limb), 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(-1).compare(.eq));
res.truncate(.{ .positive = false, .limbs = &.{ std.math.maxInt(Limb), 1 } }, .unsigned, @bitSizeOf(Limb));
res.truncate(.{ .positive = false, .limbs = &.{ maxInt(Limb), 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(1).compare(.eq));
res.truncate(.{ .positive = false, .limbs = &.{ std.math.maxInt(Limb), 1 } }, .signed, @bitSizeOf(Limb));
res.truncate(.{ .positive = false, .limbs = &.{ maxInt(Limb), 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(1).compare(.eq));
}
test "truncate value that normalizes after being masked" {
var res_limbs: [2]Limb = undefined;
var res: Mutable = .{
.limbs = &res_limbs,
.len = undefined,
.positive = undefined,
};
res.truncate(.{ .positive = true, .limbs = &.{ 0, 2 } }, .signed, 1 + @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{ 1, 2 } }, .signed, 1 + @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(1).compare(.eq));
}
test "truncate to zero" {
var res_limbs: [1]Limb = undefined;
var res: Mutable = .{
.limbs = &res_limbs,
.len = undefined,
.positive = undefined,
};
res.truncate(.{ .positive = true, .limbs = &.{0} }, .signed, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{0} }, .signed, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{0} }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{0} }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{ 0, 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{ 0, 1 } }, .signed, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = true, .limbs = &.{ 0, 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
res.truncate(.{ .positive = false, .limbs = &.{ 0, 1 } }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.positive and res.len == 1 and res.limbs[0] == 0);
}
test "truncate to minimum signed integer" {
var res_limbs: [1]Limb = undefined;
var res: Mutable = .{
.limbs = &res_limbs,
.len = undefined,
.positive = undefined,
};
res.truncate(.{ .positive = true, .limbs = &.{1 << @bitSizeOf(Limb) - 1} }, .signed, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(-1 << @bitSizeOf(Limb) - 1).compare(.eq));
res.truncate(.{ .positive = false, .limbs = &.{1 << @bitSizeOf(Limb) - 1} }, .signed, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(-1 << @bitSizeOf(Limb) - 1).compare(.eq));
res.truncate(.{ .positive = true, .limbs = &.{1 << @bitSizeOf(Limb) - 1} }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(1 << @bitSizeOf(Limb) - 1).compare(.eq));
res.truncate(.{ .positive = false, .limbs = &.{1 << @bitSizeOf(Limb) - 1} }, .unsigned, @bitSizeOf(Limb));
try testing.expect(res.toConst().orderAgainstScalar(1 << @bitSizeOf(Limb) - 1).compare(.eq));
}
test "saturate single signed positive" {
var a = try Managed.initSet(testing.allocator, 0xBBBB_BBBB);
defer a.deinit();
@ -2136,6 +2219,15 @@ test "shift-right negative" {
a.setSign(true);
try a.shiftRight(&arg7, 4);
try testing.expect(try a.toInt(i16) == -2048);
var arg8_limbs: [1]Limb = undefined;
var arg8: Mutable = .{
.limbs = &arg8_limbs,
.len = undefined,
.positive = undefined,
};
arg8.shiftRight(.{ .limbs = &.{ 1, 1 }, .positive = false }, @bitSizeOf(Limb));
try testing.expect(arg8.toConst().orderAgainstScalar(-2).compare(.eq));
}
test "sat shift-left simple unsigned" {


@ -228,6 +228,18 @@ test "Allocator.resize" {
}
}
test "Allocator alloc and remap with zero-bit type" {
var values = try testing.allocator.alloc(void, 10);
defer testing.allocator.free(values);
try testing.expectEqual(10, values.len);
const remapped = testing.allocator.remap(values, 200);
try testing.expect(remapped != null);
values = remapped.?;
try testing.expectEqual(200, values.len);
}
/// Copy all of source into dest at position 0.
/// dest.len must be >= source.len.
/// If the slices overlap, dest.ptr must be <= src.ptr.
@ -4207,10 +4219,11 @@ fn BytesAsSliceReturnType(comptime T: type, comptime bytesType: type) type {
/// Given a slice of bytes, returns a slice of the specified type
/// backed by those bytes, preserving pointer attributes.
/// If `T` is zero-bytes sized, the returned slice has a len of zero.
pub fn bytesAsSlice(comptime T: type, bytes: anytype) BytesAsSliceReturnType(T, @TypeOf(bytes)) {
// Let's not give an undefined pointer to @ptrCast;
// it may be equal to zero and fail a null check.
if (bytes.len == 0) {
if (bytes.len == 0 or @sizeOf(T) == 0) {
return &[0]T{};
}
@ -4288,6 +4301,19 @@ test "bytesAsSlice preserves pointer attributes" {
try testing.expectEqual(in.alignment, out.alignment);
}
test "bytesAsSlice with zero-bit element type" {
{
const bytes = [_]u8{};
const slice = bytesAsSlice(void, &bytes);
try testing.expectEqual(0, slice.len);
}
{
const bytes = [_]u8{ 0x01, 0x02, 0x03, 0x04 };
const slice = bytesAsSlice(u0, &bytes);
try testing.expectEqual(0, slice.len);
}
}
fn SliceAsBytesReturnType(comptime Slice: type) type {
return CopyPtrAttrs(Slice, .slice, u8);
}


@ -150,7 +150,10 @@ pub inline fn rawFree(a: Allocator, memory: []u8, alignment: Alignment, ret_addr
/// Returns a pointer to undefined memory.
/// Call `destroy` with the result to free the memory.
pub fn create(a: Allocator, comptime T: type) Error!*T {
if (@sizeOf(T) == 0) return @as(*T, @ptrFromInt(math.maxInt(usize)));
if (@sizeOf(T) == 0) {
const ptr = comptime std.mem.alignBackward(usize, math.maxInt(usize), @alignOf(T));
return @as(*T, @ptrFromInt(ptr));
}
const ptr: *T = @ptrCast(try a.allocBytesWithAlignment(@alignOf(T), @sizeOf(T), @returnAddress()));
return ptr;
}
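The replacement address still serves as a non-null sentinel but now respects `T`'s alignment; roughly:
// For a zero-sized T with @alignOf(T) == 4 on a 64-bit target:
//   old: @ptrFromInt(0xFFFF_FFFF_FFFF_FFFF)  // misaligned for align(4)
//   new: @ptrFromInt(0xFFFF_FFFF_FFFF_FFFC)  // alignBackward rounds down to a
//                                            // multiple of the alignment while
//                                            // remaining non-null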
@ -308,15 +311,19 @@ pub fn resize(self: Allocator, allocation: anytype, new_len: usize) bool {
/// In such case, it is more efficient for the caller to perform those
/// operations.
///
/// `allocation` may be an empty slice, in which case a new allocation is made.
/// `allocation` may be an empty slice, in which case `null` is returned,
/// unless `new_len` is also 0, in which case `allocation` is returned.
///
/// `new_len` may be zero, in which case the allocation is freed.
///
/// If the allocation's elements' type is zero bytes sized, `allocation.len` is set to `new_len`.
pub fn remap(self: Allocator, allocation: anytype, new_len: usize) t: {
const Slice = @typeInfo(@TypeOf(allocation)).pointer;
break :t ?[]align(Slice.alignment) Slice.child;
} {
const Slice = @typeInfo(@TypeOf(allocation)).pointer;
const T = Slice.child;
const alignment = Slice.alignment;
if (new_len == 0) {
self.free(allocation);
@ -325,6 +332,11 @@ pub fn remap(self: Allocator, allocation: anytype, new_len: usize) t: {
if (allocation.len == 0) {
return null;
}
if (@sizeOf(T) == 0) {
var new_memory = allocation;
new_memory.len = new_len;
return new_memory;
}
const old_memory = mem.sliceAsBytes(allocation);
// I would like to use saturating multiplication here, but LLVM cannot lower it
// on WebAssembly: https://github.com/ziglang/zig/issues/9660


@ -170,6 +170,7 @@ pub fn MultiArrayList(comptime T: type) type {
return lhs.alignment > rhs.alignment;
}
};
@setEvalBranchQuota(3 * fields.len * std.math.log2(fields.len));
mem.sort(Data, &data, {}, Sort.lessThan);
var sizes_bytes: [fields.len]usize = undefined;
var field_indexes: [fields.len]usize = undefined;
@ -565,7 +566,7 @@ pub fn MultiArrayList(comptime T: type) type {
}
fn FieldType(comptime field: Field) type {
return meta.fieldInfo(Elem, field).type;
return @FieldType(Elem, @tagName(field));
}
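`@FieldType` is the builtin replacement for `std.meta.fieldInfo(T, field).type`; a minimal sketch with a hypothetical struct:
// const S = struct { x: u32, y: f64 };
// comptime {
//     std.debug.assert(@FieldType(S, "x") == u32);
//     std.debug.assert(@FieldType(S, "y") == f64);
// }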
const Entry = entry: {
@ -978,3 +979,40 @@ test "0 sized struct" {
list.swapRemove(list.len - 1);
try testing.expectEqualSlices(u0, &[_]u0{0}, list.items(.a));
}
test "struct with many fields" {
const ManyFields = struct {
fn Type(count: comptime_int) type {
var fields: [count]std.builtin.Type.StructField = undefined;
for (0..count) |i| {
fields[i] = .{
.name = std.fmt.comptimePrint("a{}", .{i}),
.type = u32,
.default_value_ptr = null,
.is_comptime = false,
.alignment = @alignOf(u32),
};
}
const info: std.builtin.Type = .{ .@"struct" = .{
.layout = .auto,
.fields = &fields,
.decls = &.{},
.is_tuple = false,
} };
return @Type(info);
}
fn doTest(ally: std.mem.Allocator, count: comptime_int) !void {
var list: MultiArrayList(Type(count)) = .empty;
defer list.deinit(ally);
try list.resize(ally, 1);
list.items(.a0)[0] = 42;
}
};
try ManyFields.doTest(testing.allocator, 25);
try ManyFields.doTest(testing.allocator, 50);
try ManyFields.doTest(testing.allocator, 100);
try ManyFields.doTest(testing.allocator, 200);
}


@ -785,6 +785,19 @@ fn if_nametoindex(name: []const u8) IPv6InterfaceError!u32 {
return @as(u32, @bitCast(index));
}
if (native_os == .windows) {
if (name.len >= posix.IFNAMESIZE)
return error.NameTooLong;
var interface_name: [posix.IFNAMESIZE:0]u8 = undefined;
@memcpy(interface_name[0..name.len], name);
interface_name[name.len] = 0;
const index = std.os.windows.ws2_32.if_nametoindex(@as([*:0]const u8, &interface_name));
if (index == 0)
return error.InterfaceNotFound;
return index;
}
@compileError("std.net.if_nametoindex unimplemented for this OS");
}


@ -129,15 +129,16 @@ test "parse and render IPv6 addresses" {
try testing.expectError(error.InvalidIpv4Mapping, net.Address.parseIp6("::123.123.123.123", 0));
try testing.expectError(error.Incomplete, net.Address.parseIp6("1", 0));
// TODO Make this test pass on other operating systems.
if (builtin.os.tag == .linux or comptime builtin.os.tag.isDarwin()) {
if (builtin.os.tag == .linux or comptime builtin.os.tag.isDarwin() or builtin.os.tag == .windows) {
try testing.expectError(error.Incomplete, net.Address.resolveIp6("ff01::fb%", 0));
try testing.expectError(error.Overflow, net.Address.resolveIp6("ff01::fb%wlp3s0s0s0s0s0s0s0s0", 0));
// Assumes IFNAMESIZE will always be a multiple of 2
try testing.expectError(error.Overflow, net.Address.resolveIp6("ff01::fb%wlp3" ++ "s0" ** @divExact(std.posix.IFNAMESIZE - 4, 2), 0));
try testing.expectError(error.Overflow, net.Address.resolveIp6("ff01::fb%12345678901234", 0));
}
}
test "invalid but parseable IPv6 scope ids" {
if (builtin.os.tag != .linux and comptime !builtin.os.tag.isDarwin()) {
if (builtin.os.tag != .linux and comptime !builtin.os.tag.isDarwin() and builtin.os.tag != .windows) {
// Currently, resolveIp6 with alphanumerical scope IDs only works on Linux.
// TODO Make this test pass on other operating systems.
return error.SkipZigTest;
@ -261,7 +262,7 @@ test "listen on a port, send bytes, receive bytes" {
}
test "listen on an in use port" {
if (builtin.os.tag != .linux and comptime !builtin.os.tag.isDarwin()) {
if (builtin.os.tag != .linux and comptime !builtin.os.tag.isDarwin() and builtin.os.tag != .windows) {
// TODO build abstractions for other operating systems
return error.SkipZigTest;
}


@ -120,6 +120,7 @@ pub fn getFdPath(fd: std.posix.fd_t, out_buffer: *[max_path_bytes]u8) std.posix.
.SUCCESS => {},
.BADF => return error.FileNotFound,
.NOSPC => return error.NameTooLong,
.NOENT => return error.FileNotFound,
// TODO man pages for fcntl on macOS don't really tell you what
// errno values to expect when command is F.GETPATH...
else => |err| return posix.unexpectedErrno(err),


@ -482,26 +482,27 @@ pub const O = switch (native_arch) {
/// Set by startup code, used by `getauxval`.
pub var elf_aux_maybe: ?[*]std.elf.Auxv = null;
/// Whether an external or internal getauxval implementation is used.
const extern_getauxval = switch (builtin.zig_backend) {
// Calling extern functions is not yet supported with these backends
.stage2_aarch64, .stage2_arm, .stage2_riscv64, .stage2_sparc64 => false,
else => !builtin.link_libc,
};
comptime {
const root = @import("root");
// Export this only when building executable, otherwise it is overriding
// the libc implementation
if (extern_getauxval and (builtin.output_mode == .Exe or @hasDecl(root, "main"))) {
@export(&getauxvalImpl, .{ .name = "getauxval", .linkage = .weak });
}
}
pub const getauxval = if (extern_getauxval) struct {
comptime {
const root = @import("root");
// Export this only when building an executable, otherwise it is overriding
// the libc implementation
if (builtin.output_mode == .Exe or @hasDecl(root, "main")) {
@export(&getauxvalImpl, .{ .name = "getauxval", .linkage = .weak });
}
}
extern fn getauxval(index: usize) usize;
}.getauxval else getauxvalImpl;
fn getauxvalImpl(index: usize) callconv(.c) usize {
@disableInstrumentation();
const auxv = elf_aux_maybe orelse return 0;
var i: usize = 0;
while (auxv[i].a_type != std.elf.AT_NULL) : (i += 1) {
@ -1979,7 +1980,7 @@ pub fn socketpair(domain: i32, socket_type: i32, protocol: i32, fd: *[2]i32) usi
pub fn accept(fd: i32, noalias addr: ?*sockaddr, noalias len: ?*socklen_t) usize {
if (native_arch == .x86) {
return socketcall(SC.accept, &[4]usize{ fd, addr, len, 0 });
return socketcall(SC.accept, &[4]usize{ @as(usize, @bitCast(@as(isize, fd))), @intFromPtr(addr), @intFromPtr(len), 0 });
}
return accept4(fd, addr, len, 0);
}
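On x86, `accept` has no dedicated syscall number; `socketcall` multiplexes all socket operations and takes its arguments as an array of `usize`, hence the explicit casts. A sketch of the argument packing:
// args[0] = fd, sign-extended through isize so that -1 stays all-ones
// args[1] = @intFromPtr(addr), 0 when addr is null
// args[2] = @intFromPtr(len), 0 when len is null
// args[3] = flags, 0 for plain accept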
@ -2221,7 +2222,7 @@ pub fn epoll_pwait(epoll_fd: i32, events: [*]epoll_event, maxevents: u32, timeou
@as(usize, @intCast(maxevents)),
@as(usize, @bitCast(@as(isize, timeout))),
@intFromPtr(sigmask),
@sizeOf(sigset_t),
NSIG / 8,
);
}
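The kernel validates the final `sigsetsize` argument against its own signal-set size, `NSIG / 8` bytes, which can be smaller than a userspace `@sizeOf(sigset_t)`; a hedged note:
// The raw syscall is roughly:
//   epoll_pwait(epfd, events, maxevents, timeout, sigmask, sigsetsize)
// and it fails with EINVAL unless sigsetsize equals the kernel's own sigset
// size, so NSIG / 8 is the value to pass rather than @sizeOf(sigset_t).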
@ -3385,6 +3386,7 @@ pub const SIG = if (is_mips) struct {
pub const UNBLOCK = 2;
pub const SETMASK = 3;
// https://github.com/torvalds/linux/blob/ca91b9500108d4cf083a635c2e11c884d5dd20ea/arch/mips/include/uapi/asm/signal.h#L25
pub const HUP = 1;
pub const INT = 2;
pub const QUIT = 3;
@ -3392,33 +3394,32 @@ pub const SIG = if (is_mips) struct {
pub const TRAP = 5;
pub const ABRT = 6;
pub const IOT = ABRT;
pub const BUS = 7;
pub const EMT = 7;
pub const FPE = 8;
pub const KILL = 9;
pub const USR1 = 10;
pub const BUS = 10;
pub const SEGV = 11;
pub const USR2 = 12;
pub const SYS = 12;
pub const PIPE = 13;
pub const ALRM = 14;
pub const TERM = 15;
pub const STKFLT = 16;
pub const CHLD = 17;
pub const CONT = 18;
pub const STOP = 19;
pub const TSTP = 20;
pub const TTIN = 21;
pub const TTOU = 22;
pub const URG = 23;
pub const XCPU = 24;
pub const XFSZ = 25;
pub const VTALRM = 26;
pub const PROF = 27;
pub const WINCH = 28;
pub const IO = 29;
pub const POLL = 29;
pub const PWR = 30;
pub const SYS = 31;
pub const UNUSED = SIG.SYS;
pub const USR1 = 16;
pub const USR2 = 17;
pub const CHLD = 18;
pub const PWR = 19;
pub const WINCH = 20;
pub const URG = 21;
pub const IO = 22;
pub const POLL = IO;
pub const STOP = 23;
pub const TSTP = 24;
pub const CONT = 25;
pub const TTIN = 26;
pub const TTOU = 27;
pub const VTALRM = 28;
pub const PROF = 29;
pub const XCPU = 30;
pub const XFSZ = 31;
pub const ERR: ?Sigaction.handler_fn = @ptrFromInt(maxInt(usize));
pub const DFL: ?Sigaction.handler_fn = @ptrFromInt(0);


@ -304,6 +304,7 @@ pub const msghdr_const = extern struct {
pub const off_t = i64;
pub const ino_t = u64;
pub const time_t = isize;
pub const mode_t = u32;
pub const dev_t = usize;
pub const nlink_t = u32;


@ -516,7 +516,7 @@ pub fn initStatic(phdrs: []elf.Phdr) void {
-1,
0,
);
if (@as(isize, @bitCast(begin_addr)) < 0) @trap();
if (@call(.always_inline, linux.E.init, .{begin_addr}) != .SUCCESS) @trap();
const area_ptr: [*]align(page_size_min) u8 = @ptrFromInt(begin_addr);


@ -687,6 +687,7 @@ pub fn WriteFile(
.INVALID_HANDLE => return error.NotOpenForWriting,
.LOCK_VIOLATION => return error.LockViolation,
.NETNAME_DELETED => return error.ConnectionResetByPeer,
.WORKING_SET_QUOTA => return error.SystemResources,
else => |err| return unexpectedError(err),
}
}
@ -1913,6 +1914,7 @@ pub fn CreateProcessW(
switch (GetLastError()) {
.FILE_NOT_FOUND => return error.FileNotFound,
.PATH_NOT_FOUND => return error.FileNotFound,
.DIRECTORY => return error.FileNotFound,
.ACCESS_DENIED => return error.AccessDenied,
.INVALID_PARAMETER => unreachable,
.INVALID_NAME => return error.InvalidName,
@ -5246,6 +5248,9 @@ pub const PF = enum(DWORD) {
/// This ARM processor implements the ARM v8.3 JavaScript conversion (JSCVT) instructions.
ARM_V83_JSCVT_INSTRUCTIONS_AVAILABLE = 44,
/// This Arm processor implements the Arm v8.3 LRCPC instructions (for example, LDAPR). Note that certain Arm v8.2 CPUs may optionally support the LRCPC instructions.
ARM_V83_LRCPC_INSTRUCTIONS_AVAILABLE,
};
pub const MAX_WOW64_SHARED_ENTRIES = 16;


@ -722,9 +722,12 @@ pub fn raise(sig: u8) RaiseError!void {
}
if (native_os == .linux) {
// https://git.musl-libc.org/cgit/musl/commit/?id=0bed7e0acfd34e3fb63ca0e4d99b7592571355a9
//
// Unlike musl, libc-less Zig std does not have any internal signals for implementation purposes, so we
// need to block all signals on the assumption that any of them could potentially fork() in a handler.
var set: sigset_t = undefined;
// block application signals
sigprocmask(SIG.BLOCK, &linux.app_mask, &set);
sigprocmask(SIG.BLOCK, &linux.all_mask, &set);
const tid = linux.gettid();
const rc = linux.tkill(tid, sig);
@ -7474,7 +7477,7 @@ pub fn ioctl_SIOCGIFINDEX(fd: fd_t, ifr: *ifreq) IoCtl_SIOCGIFINDEX_Error!void {
}
}
const lfs64_abi = native_os == .linux and builtin.link_libc and builtin.abi.isGnu();
const lfs64_abi = native_os == .linux and builtin.link_libc and (builtin.abi.isGnu() or builtin.abi.isAndroid());
/// Whether or not `error.Unexpected` will print its value and a stack trace.
///


@ -1651,14 +1651,15 @@ pub fn posixGetUserInfo(name: []const u8) !UserInfo {
pub fn getBaseAddress() usize {
switch (native_os) {
.linux => {
const base = std.os.linux.getauxval(std.elf.AT_BASE);
const getauxval = if (builtin.link_libc) std.c.getauxval else std.os.linux.getauxval;
const base = getauxval(std.elf.AT_BASE);
if (base != 0) {
return base;
}
const phdr = std.os.linux.getauxval(std.elf.AT_PHDR);
const phdr = getauxval(std.elf.AT_PHDR);
return phdr - @sizeOf(std.elf.Ehdr);
},
.macos, .freebsd, .netbsd => {
.driverkit, .ios, .macos, .tvos, .visionos, .watchos => {
return @intFromPtr(&std.c._mh_execute_header);
},
.windows => return @intFromPtr(windows.kernel32.GetModuleHandleW(null)),


@ -239,7 +239,7 @@ fn _start() callconv(.naked) noreturn {
.csky => ".cfi_undefined lr",
.hexagon => ".cfi_undefined r31",
.loongarch32, .loongarch64 => ".cfi_undefined 1",
.m68k => ".cfi_undefined pc",
.m68k => ".cfi_undefined %%pc",
.mips, .mipsel, .mips64, .mips64el => ".cfi_undefined $ra",
.powerpc, .powerpcle, .powerpc64, .powerpc64le => ".cfi_undefined lr",
.riscv32, .riscv64 => if (builtin.zig_backend == .stage2_riscv64)
@ -355,7 +355,11 @@ fn _start() callconv(.naked) noreturn {
// Note that the - 8 is needed because pc in the jsr instruction points into the middle
// of the jsr instruction. (The lea is 6 bytes, the jsr is 4 bytes.)
\\ suba.l %%fp, %%fp
\\ move.l %%sp, -(%%sp)
\\ move.l %%sp, %%a0
\\ move.l %%a0, %%d0
\\ and.l #-4, %%d0
\\ move.l %%d0, %%sp
\\ move.l %%a0, -(%%sp)
\\ lea %[posixCallMainAndExit] - . - 8, %%a0
\\ jsr (%%pc, %%a0)
,


@ -1,13 +1,5 @@
//! Allocator that fails after N allocations, useful for making sure out of
//! memory conditions are handled correctly.
//!
//! To use this, first initialize it and get an allocator with
//!
//! `const failing_allocator = &FailingAllocator.init(<allocator>,
//! <config>).allocator;`
//!
//! Then use `failing_allocator` anywhere you would have used a
//! different allocator.
const std = @import("../std.zig");
const mem = std.mem;
const FailingAllocator = @This();
@ -28,12 +20,7 @@ const num_stack_frames = if (std.debug.sys_can_stack_trace) 16 else 0;
pub const Config = struct {
/// The number of successful allocations you can expect from this allocator.
/// The next allocation will fail. For example, with `fail_index` equal to
/// 2, the following test will pass:
///
/// var a = try failing_alloc.create(i32);
/// var b = try failing_alloc.create(i32);
/// testing.expectError(error.OutOfMemory, failing_alloc.create(i32));
/// The next allocation will fail.
fail_index: usize = std.math.maxInt(usize),
/// Number of successful resizes to expect from this allocator. The next resize will fail.
@ -159,3 +146,40 @@ pub fn getStackTrace(self: *FailingAllocator) std.builtin.StackTrace {
.index = len,
};
}
test FailingAllocator {
// Fail on allocation
{
var failing_allocator_state = FailingAllocator.init(std.testing.allocator, .{
.fail_index = 2,
});
const failing_alloc = failing_allocator_state.allocator();
const a = try failing_alloc.create(i32);
defer failing_alloc.destroy(a);
const b = try failing_alloc.create(i32);
defer failing_alloc.destroy(b);
try std.testing.expectError(error.OutOfMemory, failing_alloc.create(i32));
}
// Fail on resize
{
var failing_allocator_state = FailingAllocator.init(std.testing.allocator, .{
.resize_fail_index = 1,
});
const failing_alloc = failing_allocator_state.allocator();
const resized_slice = blk: {
const slice = try failing_alloc.alloc(u8, 8);
errdefer failing_alloc.free(slice);
break :blk failing_alloc.remap(slice, 6) orelse return error.UnexpectedRemapFailure;
};
defer failing_alloc.free(resized_slice);
// Remap and resize should fail from here on out
try std.testing.expectEqual(null, failing_alloc.remap(resized_slice, 4));
try std.testing.expectEqual(false, failing_alloc.resize(resized_slice, 4));
// Note: realloc could succeed because it falls back to free+alloc
}
}


@ -100,14 +100,19 @@ pub fn parse(gpa: Allocator, source: [:0]const u8, mode: Mode) Allocator.Error!A
.zon => try parser.parseZon(),
}
const extra_data = try parser.extra_data.toOwnedSlice(gpa);
errdefer gpa.free(extra_data);
const errors = try parser.errors.toOwnedSlice(gpa);
errdefer gpa.free(errors);
// TODO experiment with compacting the MultiArrayList slices here
return Ast{
.source = source,
.mode = mode,
.tokens = tokens.toOwnedSlice(),
.nodes = parser.nodes.toOwnedSlice(),
.extra_data = try parser.extra_data.toOwnedSlice(gpa),
.errors = try parser.errors.toOwnedSlice(gpa),
.extra_data = extra_data,
.errors = errors,
};
}
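The hoisting matters because the second `toOwnedSlice` can itself fail with `error.OutOfMemory`, which previously leaked the first slice; the general shape of the fix, as a sketch:
// const a = try list_a.toOwnedSlice(gpa);
// errdefer gpa.free(a);                    // freed if the next call fails
// const b = try list_b.toOwnedSlice(gpa);
// return .{ .a = a, .b = b };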


@ -88,6 +88,9 @@ fn castToPtr(comptime DestType: type, comptime SourceType: type, target: anytype
.pointer => {
return castPtr(DestType, target);
},
.@"fn" => {
return castPtr(DestType, &target);
},
.optional => |target_opt| {
if (@typeInfo(target_opt.child) == .pointer) {
return castPtr(DestType, target);
@ -686,3 +689,14 @@ test "Extended C ABI casting" {
try testing.expect(@TypeOf(Macros.L_SUFFIX(math.maxInt(c_long) + 1)) == c_longlong); // comptime_int -> c_longlong
}
}
// Function with complex signature for testing the SDL case
fn complexFunction(_: ?*anyopaque, _: c_uint, _: ?*const fn (?*anyopaque) callconv(.c) c_uint, _: ?*anyopaque, _: c_uint, _: [*c]c_uint) callconv(.c) usize {
return 0;
}
test "function pointer casting" {
const SDL_FunctionPointer = ?*const fn () callconv(.c) void;
const fn_ptr = cast(SDL_FunctionPointer, complexFunction);
try testing.expect(fn_ptr != null);
}


@ -7526,9 +7526,9 @@ pub const Constant = enum(u32) {
};
}
};
const Mantissa64 = std.meta.FieldType(Float.Repr(f64), .mantissa);
const Exponent32 = std.meta.FieldType(Float.Repr(f32), .exponent);
const Exponent64 = std.meta.FieldType(Float.Repr(f64), .exponent);
const Mantissa64 = @FieldType(Float.Repr(f64), "mantissa");
const Exponent32 = @FieldType(Float.Repr(f32), "exponent");
const Exponent64 = @FieldType(Float.Repr(f64), "exponent");
const repr: Float.Repr(f32) = @bitCast(item.data);
const denormal_shift = switch (repr.exponent) {


@ -125,40 +125,33 @@ pub fn getExternalExecutor(
};
}
if (options.allow_wasmtime and candidate.cpu.arch.isWasm()) {
return Executor{ .wasmtime = "wasmtime" };
}
switch (candidate.os.tag) {
.windows => {
if (options.allow_wine) {
// x86_64 wine does not support emulating aarch64-windows and
// vice versa.
if (candidate.cpu.arch != builtin.cpu.arch and
!(candidate.cpu.arch == .thumb and builtin.cpu.arch == .aarch64) and
!(candidate.cpu.arch == .x86 and builtin.cpu.arch == .x86_64))
{
return bad_result;
}
switch (candidate.ptrBitWidth()) {
32 => return Executor{ .wine = "wine" },
64 => return Executor{ .wine = "wine64" },
else => return bad_result,
}
const wine_supported = switch (candidate.cpu.arch) {
.thumb => switch (host.cpu.arch) {
.arm, .thumb, .aarch64 => true,
else => false,
},
.aarch64 => host.cpu.arch == .aarch64,
.x86 => host.cpu.arch.isX86(),
.x86_64 => host.cpu.arch == .x86_64,
else => false,
};
return if (wine_supported) Executor{ .wine = "wine" } else bad_result;
}
return bad_result;
},
.wasi => {
if (options.allow_wasmtime) {
switch (candidate.ptrBitWidth()) {
32 => return Executor{ .wasmtime = "wasmtime" },
else => return bad_result,
}
}
return bad_result;
},
.macos => {
.driverkit, .macos => {
if (options.allow_darling) {
// This check can be loosened once darling adds a QEMU-based emulation
// layer for non-host architectures:
// https://github.com/darlinghq/darling/issues/863
if (candidate.cpu.arch != builtin.cpu.arch) {
if (candidate.cpu.arch != host.cpu.arch) {
return bad_result;
}
return Executor{ .darling = "darling" };


@ -22,32 +22,34 @@ pub const cpu_models = struct {
// implementer = 0x41
const ARM = [_]E{
E{ .part = 0x926, .m32 = &A32.arm926ej_s, .m64 = null },
E{ .part = 0xb02, .m32 = &A32.mpcore, .m64 = null },
E{ .part = 0xb36, .m32 = &A32.arm1136j_s, .m64 = null },
E{ .part = 0xb56, .m32 = &A32.arm1156t2_s, .m64 = null },
E{ .part = 0xb76, .m32 = &A32.arm1176jz_s, .m64 = null },
E{ .part = 0xc05, .m32 = &A32.cortex_a5, .m64 = null },
E{ .part = 0xc07, .m32 = &A32.cortex_a7, .m64 = null },
E{ .part = 0xc08, .m32 = &A32.cortex_a8, .m64 = null },
E{ .part = 0xc09, .m32 = &A32.cortex_a9, .m64 = null },
E{ .part = 0xc0d, .m32 = &A32.cortex_a17, .m64 = null },
E{ .part = 0xc0f, .m32 = &A32.cortex_a15, .m64 = null },
E{ .part = 0xc0e, .m32 = &A32.cortex_a17, .m64 = null },
E{ .part = 0xc14, .m32 = &A32.cortex_r4, .m64 = null },
E{ .part = 0xc15, .m32 = &A32.cortex_r5, .m64 = null },
E{ .part = 0xc17, .m32 = &A32.cortex_r7, .m64 = null },
E{ .part = 0xc18, .m32 = &A32.cortex_r8, .m64 = null },
E{ .part = 0xc20, .m32 = &A32.cortex_m0, .m64 = null },
E{ .part = 0xc21, .m32 = &A32.cortex_m1, .m64 = null },
E{ .part = 0xc23, .m32 = &A32.cortex_m3, .m64 = null },
E{ .part = 0xc24, .m32 = &A32.cortex_m4, .m64 = null },
E{ .part = 0xc27, .m32 = &A32.cortex_m7, .m64 = null },
E{ .part = 0xc60, .m32 = &A32.cortex_m0plus, .m64 = null },
E{ .part = 0xd01, .m32 = &A32.cortex_a32, .m64 = null },
E{ .part = 0x926, .m32 = &A32.arm926ej_s },
E{ .part = 0xb02, .m32 = &A32.mpcore },
E{ .part = 0xb36, .m32 = &A32.arm1136j_s },
E{ .part = 0xb56, .m32 = &A32.arm1156t2_s },
E{ .part = 0xb76, .m32 = &A32.arm1176jz_s },
E{ .part = 0xc05, .m32 = &A32.cortex_a5 },
E{ .part = 0xc07, .m32 = &A32.cortex_a7 },
E{ .part = 0xc08, .m32 = &A32.cortex_a8 },
E{ .part = 0xc09, .m32 = &A32.cortex_a9 },
E{ .part = 0xc0d, .m32 = &A32.cortex_a17 },
E{ .part = 0xc0e, .m32 = &A32.cortex_a17 },
E{ .part = 0xc0f, .m32 = &A32.cortex_a15 },
E{ .part = 0xc14, .m32 = &A32.cortex_r4 },
E{ .part = 0xc15, .m32 = &A32.cortex_r5 },
E{ .part = 0xc17, .m32 = &A32.cortex_r7 },
E{ .part = 0xc18, .m32 = &A32.cortex_r8 },
E{ .part = 0xc20, .m32 = &A32.cortex_m0 },
E{ .part = 0xc21, .m32 = &A32.cortex_m1 },
E{ .part = 0xc23, .m32 = &A32.cortex_m3 },
E{ .part = 0xc24, .m32 = &A32.cortex_m4 },
E{ .part = 0xc27, .m32 = &A32.cortex_m7 },
E{ .part = 0xc60, .m32 = &A32.cortex_m0plus },
E{ .part = 0xd01, .m32 = &A32.cortex_a32 },
E{ .part = 0xd02, .m64 = &A64.cortex_a34 },
E{ .part = 0xd03, .m32 = &A32.cortex_a53, .m64 = &A64.cortex_a53 },
E{ .part = 0xd04, .m32 = &A32.cortex_a35, .m64 = &A64.cortex_a35 },
E{ .part = 0xd05, .m32 = &A32.cortex_a55, .m64 = &A64.cortex_a55 },
E{ .part = 0xd06, .m64 = &A64.cortex_a65 },
E{ .part = 0xd07, .m32 = &A32.cortex_a57, .m64 = &A64.cortex_a57 },
E{ .part = 0xd08, .m32 = &A32.cortex_a72, .m64 = &A64.cortex_a72 },
E{ .part = 0xd09, .m32 = &A32.cortex_a73, .m64 = &A64.cortex_a73 },
@ -55,16 +57,38 @@ pub const cpu_models = struct {
E{ .part = 0xd0b, .m32 = &A32.cortex_a76, .m64 = &A64.cortex_a76 },
E{ .part = 0xd0c, .m32 = &A32.neoverse_n1, .m64 = &A64.neoverse_n1 },
E{ .part = 0xd0d, .m32 = &A32.cortex_a77, .m64 = &A64.cortex_a77 },
E{ .part = 0xd13, .m32 = &A32.cortex_r52, .m64 = null },
E{ .part = 0xd20, .m32 = &A32.cortex_m23, .m64 = null },
E{ .part = 0xd21, .m32 = &A32.cortex_m33, .m64 = null },
E{ .part = 0xd0e, .m32 = &A32.cortex_a76ae, .m64 = &A64.cortex_a76ae },
E{ .part = 0xd13, .m32 = &A32.cortex_r52 },
E{ .part = 0xd14, .m64 = &A64.cortex_r82ae },
E{ .part = 0xd15, .m64 = &A64.cortex_r82 },
E{ .part = 0xd16, .m32 = &A32.cortex_r52plus },
E{ .part = 0xd20, .m32 = &A32.cortex_m23 },
E{ .part = 0xd21, .m32 = &A32.cortex_m33 },
E{ .part = 0xd40, .m32 = &A32.neoverse_v1, .m64 = &A64.neoverse_v1 },
E{ .part = 0xd41, .m32 = &A32.cortex_a78, .m64 = &A64.cortex_a78 },
E{ .part = 0xd42, .m32 = &A32.cortex_a78ae, .m64 = &A64.cortex_a78ae },
E{ .part = 0xd43, .m64 = &A64.cortex_a65ae },
E{ .part = 0xd44, .m32 = &A32.cortex_x1, .m64 = &A64.cortex_x1 },
E{ .part = 0xd46, .m64 = &A64.cortex_a510 },
E{ .part = 0xd47, .m32 = &A32.cortex_a710, .m64 = &A64.cortex_a710 },
E{ .part = 0xd48, .m64 = &A64.cortex_x2 },
E{ .part = 0xd49, .m32 = &A32.neoverse_n2, .m64 = &A64.neoverse_n2 },
E{ .part = 0xd4a, .m64 = &A64.neoverse_e1 },
E{ .part = 0xd4b, .m32 = &A32.cortex_a78c, .m64 = &A64.cortex_a78c },
E{ .part = 0xd4c, .m32 = &A32.cortex_x1c, .m64 = &A64.cortex_x1c },
E{ .part = 0xd44, .m32 = &A32.cortex_x1, .m64 = &A64.cortex_x1 },
E{ .part = 0xd02, .m64 = &A64.cortex_a34 },
E{ .part = 0xd06, .m64 = &A64.cortex_a65 },
E{ .part = 0xd43, .m64 = &A64.cortex_a65ae },
E{ .part = 0xd4d, .m64 = &A64.cortex_a715 },
E{ .part = 0xd4e, .m64 = &A64.cortex_x3 },
E{ .part = 0xd4f, .m64 = &A64.neoverse_v2 },
E{ .part = 0xd80, .m64 = &A64.cortex_a520 },
E{ .part = 0xd81, .m64 = &A64.cortex_a720 },
E{ .part = 0xd82, .m64 = &A64.cortex_x4 },
E{ .part = 0xd83, .m64 = &A64.neoverse_v3ae },
E{ .part = 0xd84, .m64 = &A64.neoverse_v3 },
E{ .part = 0xd85, .m64 = &A64.cortex_x925 },
E{ .part = 0xd87, .m64 = &A64.cortex_a725 },
E{ .part = 0xd88, .m64 = &A64.cortex_a520ae },
E{ .part = 0xd89, .m64 = &A64.cortex_a720ae },
E{ .part = 0xd8e, .m64 = &A64.neoverse_n3 },
};
// implementer = 0x42
const Broadcom = [_]E{
@ -97,6 +121,7 @@ pub const cpu_models = struct {
};
// implementer = 0x51
const Qualcomm = [_]E{
E{ .part = 0x001, .m64 = &A64.oryon_1 },
E{ .part = 0x06f, .m32 = &A32.krait },
E{ .part = 0x201, .m64 = &A64.kryo, .m32 = &A64.kryo },
E{ .part = 0x205, .m64 = &A64.kryo, .m32 = &A64.kryo },
@ -110,7 +135,7 @@ pub const cpu_models = struct {
E{ .part = 0xc00, .m64 = &A64.falkor },
E{ .part = 0xc01, .m64 = &A64.saphira },
};
// implementer = 0x61
const Apple = [_]E{
E{ .part = 0x022, .m64 = &A64.apple_m1 },
E{ .part = 0x023, .m64 = &A64.apple_m1 },
@ -133,6 +158,7 @@ pub const cpu_models = struct {
0x43 => &Cavium,
0x46 => &Fujitsu,
0x48 => &HiSilicon,
0x4e => &Nvidia,
0x50 => &Ampere,
0x51 => &Qualcomm,
0x61 => &Apple,


@ -408,22 +408,24 @@ pub fn detectNativeCpuAndFeatures() ?Target.Cpu {
switch (current_arch) {
.aarch64, .aarch64_be => {
const model = switch (cpu_family) {
.ARM_EVEREST_SAWTOOTH => &Target.aarch64.cpu.apple_a16,
.ARM_BLIZZARD_AVALANCHE => &Target.aarch64.cpu.apple_a15,
.ARM_FIRESTORM_ICESTORM => &Target.aarch64.cpu.apple_a14,
.ARM_LIGHTNING_THUNDER => &Target.aarch64.cpu.apple_a13,
.ARM_VORTEX_TEMPEST => &Target.aarch64.cpu.apple_a12,
.ARM_MONSOON_MISTRAL => &Target.aarch64.cpu.apple_a11,
.ARM_HURRICANE => &Target.aarch64.cpu.apple_a10,
.ARM_TWISTER => &Target.aarch64.cpu.apple_a9,
.ARM_CYCLONE => &Target.aarch64.cpu.apple_a7,
.ARM_TYPHOON => &Target.aarch64.cpu.apple_a8,
.ARM_CYCLONE => &Target.aarch64.cpu.cyclone,
.ARM_COLL => &Target.aarch64.cpu.apple_a17,
.ARM_TWISTER => &Target.aarch64.cpu.apple_a9,
.ARM_HURRICANE => &Target.aarch64.cpu.apple_a10,
.ARM_MONSOON_MISTRAL => &Target.aarch64.cpu.apple_a11,
.ARM_VORTEX_TEMPEST => &Target.aarch64.cpu.apple_a12,
.ARM_LIGHTNING_THUNDER => &Target.aarch64.cpu.apple_a13,
.ARM_FIRESTORM_ICESTORM => &Target.aarch64.cpu.apple_m1, // a14
.ARM_BLIZZARD_AVALANCHE => &Target.aarch64.cpu.apple_m2, // a15
.ARM_EVEREST_SAWTOOTH => &Target.aarch64.cpu.apple_m3, // a16
.ARM_IBIZA => &Target.aarch64.cpu.apple_m3, // base
.ARM_LOBOS => &Target.aarch64.cpu.apple_m3, // pro
.ARM_PALMA => &Target.aarch64.cpu.apple_m3, // max
.ARM_LOBOS => &Target.aarch64.cpu.apple_m3, // pro
.ARM_COLL => &Target.aarch64.cpu.apple_a17, // a17 pro
.ARM_DONAN => &Target.aarch64.cpu.apple_m4, // base
.ARM_BRAVA => &Target.aarch64.cpu.apple_m4, // pro/max
.ARM_TAHITI => &Target.aarch64.cpu.apple_m4, // a18 pro
.ARM_TUPAI => &Target.aarch64.cpu.apple_m4, // a18
else => return null,
};


@ -2,11 +2,30 @@ const std = @import("std");
const builtin = @import("builtin");
const Target = std.Target;
const XCR0_XMM = 0x02;
const XCR0_YMM = 0x04;
const XCR0_MASKREG = 0x20;
const XCR0_ZMM0_15 = 0x40;
const XCR0_ZMM16_31 = 0x80;
/// Only covers EAX for now.
const Xcr0 = packed struct(u32) {
x87: bool,
sse: bool,
avx: bool,
bndreg: bool,
bndcsr: bool,
opmask: bool,
zmm_hi256: bool,
hi16_zmm: bool,
pt: bool,
pkru: bool,
pasid: bool,
cet_u: bool,
cet_s: bool,
hdc: bool,
uintr: bool,
lbr: bool,
hwp: bool,
xtilecfg: bool,
xtiledata: bool,
apx: bool,
_reserved: u12,
};
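The struct mirrors the low 32 bits of XCR0 from least to most significant bit, so `@bitCast`ing the `xgetbv` result replaces the old magic masks with named fields; for instance:
// old: hasMask(xcr0_eax, XCR0_XMM | XCR0_YMM)  // test bits 1 and 2 by mask
// new: xcr0.sse and xcr0.avx                   // same bits, by field name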
fn setFeature(cpu: *Target.Cpu, feature: Target.x86.Feature, enabled: bool) void {
const idx = @as(Target.Cpu.Feature.Set.Index, @intFromEnum(feature));
@ -339,12 +358,6 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
leaf = cpuid(1, 0);
setFeature(cpu, .cx8, bit(leaf.edx, 8));
setFeature(cpu, .cmov, bit(leaf.edx, 15));
setFeature(cpu, .mmx, bit(leaf.edx, 23));
setFeature(cpu, .fxsr, bit(leaf.edx, 24));
setFeature(cpu, .sse, bit(leaf.edx, 25));
setFeature(cpu, .sse2, bit(leaf.edx, 26));
setFeature(cpu, .sse3, bit(leaf.ecx, 0));
setFeature(cpu, .pclmul, bit(leaf.ecx, 1));
setFeature(cpu, .ssse3, bit(leaf.ecx, 9));
@ -356,13 +369,20 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
setFeature(cpu, .aes, bit(leaf.ecx, 25));
setFeature(cpu, .rdrnd, bit(leaf.ecx, 30));
setFeature(cpu, .cx8, bit(leaf.edx, 8));
setFeature(cpu, .cmov, bit(leaf.edx, 15));
setFeature(cpu, .mmx, bit(leaf.edx, 23));
setFeature(cpu, .fxsr, bit(leaf.edx, 24));
setFeature(cpu, .sse, bit(leaf.edx, 25));
setFeature(cpu, .sse2, bit(leaf.edx, 26));
const has_xsave = bit(leaf.ecx, 27);
const has_avx = bit(leaf.ecx, 28);
// Make sure not to call xgetbv if xsave is not supported
const xcr0_eax = if (has_xsave and has_avx) getXCR0() else 0;
const xcr0: Xcr0 = if (has_xsave and has_avx) @bitCast(getXCR0()) else @bitCast(@as(u32, 0));
const has_avx_save = hasMask(xcr0_eax, XCR0_XMM | XCR0_YMM);
const has_avx_save = xcr0.sse and xcr0.avx;
// LLVM approaches avx512_save by hardcoding it to true on Darwin,
// because the kernel saves the context even if the bit is not set.
@ -384,22 +404,26 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
// Darwin lazily saves the AVX512 context on first use: trust that the OS will
// save the AVX512 context if we use AVX512 instructions, even if the bit is not
// set right now.
const has_avx512_save = switch (os_tag.isDarwin()) {
true => true,
false => hasMask(xcr0_eax, XCR0_MASKREG | XCR0_ZMM0_15 | XCR0_ZMM16_31),
};
const has_avx512_save = if (os_tag.isDarwin())
true
else
xcr0.zmm_hi256 and xcr0.hi16_zmm;
// AMX requires additional context to be saved by the OS.
const has_amx_save = xcr0.xtilecfg and xcr0.xtiledata;
setFeature(cpu, .avx, has_avx_save);
setFeature(cpu, .fma, has_avx_save and bit(leaf.ecx, 12));
setFeature(cpu, .fma, bit(leaf.ecx, 12) and has_avx_save);
// Only enable XSAVE if OS has enabled support for saving YMM state.
setFeature(cpu, .xsave, has_avx_save and bit(leaf.ecx, 26));
setFeature(cpu, .f16c, has_avx_save and bit(leaf.ecx, 29));
setFeature(cpu, .xsave, bit(leaf.ecx, 26) and has_avx_save);
setFeature(cpu, .f16c, bit(leaf.ecx, 29) and has_avx_save);
leaf = cpuid(0x80000000, 0);
const max_ext_level = leaf.eax;
if (max_ext_level >= 0x80000001) {
leaf = cpuid(0x80000001, 0);
setFeature(cpu, .sahf, bit(leaf.ecx, 0));
setFeature(cpu, .lzcnt, bit(leaf.ecx, 5));
setFeature(cpu, .sse4a, bit(leaf.ecx, 6));
@ -409,11 +433,21 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
setFeature(cpu, .fma4, bit(leaf.ecx, 16) and has_avx_save);
setFeature(cpu, .tbm, bit(leaf.ecx, 21));
setFeature(cpu, .mwaitx, bit(leaf.ecx, 29));
setFeature(cpu, .@"64bit", bit(leaf.edx, 29));
} else {
for ([_]Target.x86.Feature{
.sahf, .lzcnt, .sse4a, .prfchw, .xop,
.lwp, .fma4, .tbm, .mwaitx, .@"64bit",
.sahf,
.lzcnt,
.sse4a,
.prfchw,
.xop,
.lwp,
.fma4,
.tbm,
.mwaitx,
.@"64bit",
}) |feat| {
setFeature(cpu, feat, false);
}
@ -422,10 +456,16 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
// Misc. memory-related features.
if (max_ext_level >= 0x80000008) {
leaf = cpuid(0x80000008, 0);
setFeature(cpu, .clzero, bit(leaf.ebx, 0));
setFeature(cpu, .rdpru, bit(leaf.ebx, 4));
setFeature(cpu, .wbnoinvd, bit(leaf.ebx, 9));
} else {
for ([_]Target.x86.Feature{ .clzero, .wbnoinvd }) |feat| {
for ([_]Target.x86.Feature{
.clzero,
.rdpru,
.wbnoinvd,
}) |feat| {
setFeature(cpu, feat, false);
}
}
@ -444,6 +484,7 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
setFeature(cpu, .rtm, bit(leaf.ebx, 11));
// AVX512 is only supported if the OS supports the context save for it.
setFeature(cpu, .avx512f, bit(leaf.ebx, 16) and has_avx512_save);
setFeature(cpu, .evex512, bit(leaf.ebx, 16) and has_avx512_save);
setFeature(cpu, .avx512dq, bit(leaf.ebx, 17) and has_avx512_save);
setFeature(cpu, .rdseed, bit(leaf.ebx, 18));
setFeature(cpu, .adx, bit(leaf.ebx, 19));
@ -470,8 +511,8 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
setFeature(cpu, .avx512vnni, bit(leaf.ecx, 11) and has_avx512_save);
setFeature(cpu, .avx512bitalg, bit(leaf.ecx, 12) and has_avx512_save);
setFeature(cpu, .avx512vpopcntdq, bit(leaf.ecx, 14) and has_avx512_save);
setFeature(cpu, .avx512vp2intersect, bit(leaf.edx, 8) and has_avx512_save);
setFeature(cpu, .rdpid, bit(leaf.ecx, 22));
setFeature(cpu, .kl, bit(leaf.ecx, 23));
setFeature(cpu, .cldemote, bit(leaf.ecx, 25));
setFeature(cpu, .movdiri, bit(leaf.ecx, 27));
setFeature(cpu, .movdir64b, bit(leaf.ecx, 28));
@ -487,32 +528,153 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
// leaves using cpuid, since that information is ignored while
// detecting features using the "-march=native" flag.
// For more info, see X86 ISA docs.
setFeature(cpu, .pconfig, bit(leaf.edx, 18));
setFeature(cpu, .uintr, bit(leaf.edx, 5));
setFeature(cpu, .avx512vp2intersect, bit(leaf.edx, 8) and has_avx512_save);
setFeature(cpu, .serialize, bit(leaf.edx, 14));
setFeature(cpu, .tsxldtrk, bit(leaf.edx, 16));
setFeature(cpu, .pconfig, bit(leaf.edx, 18));
setFeature(cpu, .amx_bf16, bit(leaf.edx, 22) and has_amx_save);
setFeature(cpu, .avx512fp16, bit(leaf.edx, 23) and has_avx512_save);
setFeature(cpu, .amx_tile, bit(leaf.edx, 24) and has_amx_save);
setFeature(cpu, .amx_int8, bit(leaf.edx, 25) and has_amx_save);
// TODO I feel unsure about this check.
// It doesn't really seem to check for 7.1, just for 7.
// Is this a sound assumption to make?
// Note that this is what other implementations do, so I kind of trust it.
const has_leaf_7_1 = max_level >= 7;
if (has_leaf_7_1) {
if (leaf.eax >= 1) {
leaf = cpuid(0x7, 0x1);
setFeature(cpu, .sha512, bit(leaf.eax, 0));
setFeature(cpu, .sm3, bit(leaf.eax, 1));
setFeature(cpu, .sm4, bit(leaf.eax, 2));
setFeature(cpu, .raoint, bit(leaf.eax, 3));
setFeature(cpu, .avxvnni, bit(leaf.eax, 4) and has_avx_save);
setFeature(cpu, .avx512bf16, bit(leaf.eax, 5) and has_avx512_save);
setFeature(cpu, .cmpccxadd, bit(leaf.eax, 7));
setFeature(cpu, .amx_fp16, bit(leaf.eax, 21) and has_amx_save);
setFeature(cpu, .hreset, bit(leaf.eax, 22));
setFeature(cpu, .avxifma, bit(leaf.eax, 23) and has_avx_save);
setFeature(cpu, .avxvnniint8, bit(leaf.edx, 4) and has_avx_save);
setFeature(cpu, .avxneconvert, bit(leaf.edx, 5) and has_avx_save);
setFeature(cpu, .amx_complex, bit(leaf.edx, 8) and has_amx_save);
setFeature(cpu, .avxvnniint16, bit(leaf.edx, 10) and has_avx_save);
setFeature(cpu, .prefetchi, bit(leaf.edx, 14));
setFeature(cpu, .usermsr, bit(leaf.edx, 15));
setFeature(cpu, .avx10_1_256, bit(leaf.edx, 19));
// APX
setFeature(cpu, .egpr, bit(leaf.edx, 21));
setFeature(cpu, .push2pop2, bit(leaf.edx, 21));
setFeature(cpu, .ppx, bit(leaf.edx, 21));
setFeature(cpu, .ndd, bit(leaf.edx, 21));
setFeature(cpu, .ccmp, bit(leaf.edx, 21));
setFeature(cpu, .cf, bit(leaf.edx, 21));
} else {
setFeature(cpu, .avx512bf16, false);
for ([_]Target.x86.Feature{
.sha512,
.sm3,
.sm4,
.raoint,
.avxvnni,
.avx512bf16,
.cmpccxadd,
.amx_fp16,
.hreset,
.avxifma,
.avxvnniint8,
.avxneconvert,
.amx_complex,
.avxvnniint16,
.prefetchi,
.usermsr,
.avx10_1_256,
.egpr,
.push2pop2,
.ppx,
.ndd,
.ccmp,
.cf,
}) |feat| {
setFeature(cpu, feat, false);
}
}
} else {
for ([_]Target.x86.Feature{
.fsgsbase, .sgx, .bmi, .avx2,
.bmi2, .invpcid, .rtm, .avx512f,
.avx512dq, .rdseed, .adx, .avx512ifma,
.clflushopt, .clwb, .avx512pf, .avx512er,
.avx512cd, .sha, .avx512bw, .avx512vl,
.prefetchwt1, .avx512vbmi, .pku, .waitpkg,
.avx512vbmi2, .shstk, .gfni, .vaes,
.vpclmulqdq, .avx512vnni, .avx512bitalg, .avx512vpopcntdq,
.avx512vp2intersect, .rdpid, .cldemote, .movdiri,
.movdir64b, .enqcmd, .pconfig, .avx512bf16,
.fsgsbase,
.sgx,
.bmi,
.avx2,
.smep,
.bmi2,
.invpcid,
.rtm,
.avx512f,
.evex512,
.avx512dq,
.rdseed,
.adx,
.smap,
.avx512ifma,
.clflushopt,
.clwb,
.avx512pf,
.avx512er,
.avx512cd,
.sha,
.avx512bw,
.avx512vl,
.prefetchwt1,
.avx512vbmi,
.pku,
.waitpkg,
.avx512vbmi2,
.shstk,
.gfni,
.vaes,
.vpclmulqdq,
.avx512vnni,
.avx512bitalg,
.avx512vpopcntdq,
.rdpid,
.kl,
.cldemote,
.movdiri,
.movdir64b,
.enqcmd,
.uintr,
.avx512vp2intersect,
.serialize,
.tsxldtrk,
.pconfig,
.amx_bf16,
.avx512fp16,
.amx_tile,
.amx_int8,
.sha512,
.sm3,
.sm4,
.raoint,
.avxvnni,
.avx512bf16,
.cmpccxadd,
.amx_fp16,
.hreset,
.avxifma,
.avxvnniint8,
.avxneconvert,
.amx_complex,
.avxvnniint16,
.prefetchi,
.usermsr,
.avx10_1_256,
.egpr,
.push2pop2,
.ppx,
.ndd,
.ccmp,
.cf,
}) |feat| {
setFeature(cpu, feat, false);
}
@ -520,21 +682,55 @@ fn detectNativeFeatures(cpu: *Target.Cpu, os_tag: Target.Os.Tag) void {
if (max_level >= 0xD and has_avx_save) {
leaf = cpuid(0xD, 0x1);
// Only enable XSAVE if OS has enabled support for saving YMM state.
setFeature(cpu, .xsaveopt, bit(leaf.eax, 0));
setFeature(cpu, .xsavec, bit(leaf.eax, 1));
setFeature(cpu, .xsaves, bit(leaf.eax, 3));
} else {
for ([_]Target.x86.Feature{ .xsaveopt, .xsavec, .xsaves }) |feat| {
for ([_]Target.x86.Feature{
.xsaveopt,
.xsavec,
.xsaves,
}) |feat| {
setFeature(cpu, feat, false);
}
}
if (max_level >= 0x14) {
leaf = cpuid(0x14, 0);
setFeature(cpu, .ptwrite, bit(leaf.ebx, 4));
} else {
setFeature(cpu, .ptwrite, false);
for ([_]Target.x86.Feature{
.ptwrite,
}) |feat| {
setFeature(cpu, feat, false);
}
}
if (max_level >= 0x19) {
leaf = cpuid(0x19, 0);
setFeature(cpu, .widekl, bit(leaf.ebx, 2));
} else {
for ([_]Target.x86.Feature{
.widekl,
}) |feat| {
setFeature(cpu, feat, false);
}
}
if (max_level >= 0x24) {
leaf = cpuid(0x24, 0);
setFeature(cpu, .avx10_1_512, bit(leaf.ebx, 18));
} else {
for ([_]Target.x86.Feature{
.avx10_1_512,
}) |feat| {
setFeature(cpu, feat, false);
}
}
}


@ -627,10 +627,16 @@ pub fn Serializer(Writer: type) type {
return self.writer.writeAll("inf");
} else if (std.math.isNegativeInf(val)) {
return self.writer.writeAll("-inf");
} else if (std.math.isNegativeZero(val)) {
return self.writer.writeAll("-0.0");
} else {
try std.fmt.format(self.writer, "{d}", .{val});
},
.comptime_float => if (val == 0) {
return self.writer.writeAll("0");
} else {
try std.fmt.format(self.writer, "{d}", .{val});
},
.comptime_float => try std.fmt.format(self.writer, "{d}", .{val}),
else => comptime unreachable,
}
}
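Emitting the sign of zero matters for round-tripping: `-0.0 == 0.0` compares true, so an equality check cannot distinguish them, and `std.math.isNegativeZero` inspects the sign bit instead. Illustratively:
// -0.0 == 0.0                      // true: comparison ignores the sign bit
// std.math.isNegativeZero(-0.0)    // true
// std.math.isNegativeZero(0.0)     // false
// Hence "-0.0" must be written out explicitly to survive a
// serialize/parse cycle.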
@ -2103,10 +2109,11 @@ test "std.zon stringify primitives" {
\\ .b = 0.3333333333333333333333333333333333,
\\ .c = 3.1415926535897932384626433832795028,
\\ .d = 0,
\\ .e = -0,
\\ .f = inf,
\\ .g = -inf,
\\ .h = nan,
\\ .e = 0,
\\ .f = -0.0,
\\ .g = inf,
\\ .h = -inf,
\\ .i = nan,
\\}
,
.{
@ -2115,9 +2122,10 @@ test "std.zon stringify primitives" {
.c = std.math.pi,
.d = 0.0,
.e = -0.0,
.f = std.math.inf(f32),
.g = -std.math.inf(f32),
.h = std.math.nan(f32),
.f = @as(f128, -0.0),
.g = std.math.inf(f32),
.h = -std.math.inf(f32),
.i = std.math.nan(f32),
},
.{},
);


@ -1656,6 +1656,7 @@ pub fn mustLower(air: Air, inst: Air.Inst.Index, ip: *const InternPool) bool {
const data = air.instructions.items(.data)[@intFromEnum(inst)];
return switch (air.instructions.items(.tag)[@intFromEnum(inst)]) {
.arg,
.assembly,
.block,
.loop,
.repeat,
@ -1798,12 +1799,8 @@ pub fn mustLower(air: Air, inst: Air.Inst.Index, ip: *const InternPool) bool {
.cmp_vector_optimized,
.is_null,
.is_non_null,
.is_null_ptr,
.is_non_null_ptr,
.is_err,
.is_non_err,
.is_err_ptr,
.is_non_err_ptr,
.bool_and,
.bool_or,
.fptrunc,
@ -1816,7 +1813,6 @@ pub fn mustLower(air: Air, inst: Air.Inst.Index, ip: *const InternPool) bool {
.unwrap_errunion_payload,
.unwrap_errunion_err,
.unwrap_errunion_payload_ptr,
.unwrap_errunion_err_ptr,
.wrap_errunion_payload,
.wrap_errunion_err,
.struct_field_ptr,
@ -1861,17 +1857,13 @@ pub fn mustLower(air: Air, inst: Air.Inst.Index, ip: *const InternPool) bool {
.work_group_id,
=> false,
.assembly => {
const extra = air.extraData(Air.Asm, data.ty_pl.payload);
const is_volatile = @as(u1, @truncate(extra.data.flags >> 31)) != 0;
return is_volatile or if (extra.data.outputs_len == 1)
@as(Air.Inst.Ref, @enumFromInt(air.extra[extra.end])) != .none
else
extra.data.outputs_len > 1;
},
.load => air.typeOf(data.ty_op.operand, ip).isVolatilePtrIp(ip),
.is_non_null_ptr, .is_null_ptr, .is_non_err_ptr, .is_err_ptr => air.typeOf(data.un_op, ip).isVolatilePtrIp(ip),
.load, .unwrap_errunion_err_ptr => air.typeOf(data.ty_op.operand, ip).isVolatilePtrIp(ip),
.slice_elem_val, .ptr_elem_val => air.typeOf(data.bin_op.lhs, ip).isVolatilePtrIp(ip),
.atomic_load => air.typeOf(data.atomic_load.ptr, ip).isVolatilePtrIp(ip),
.atomic_load => switch (data.atomic_load.order) {
.unordered, .monotonic => air.typeOf(data.atomic_load.ptr, ip).isVolatilePtrIp(ip),
else => true, // Stronger memory orderings have inter-thread side effects.
},
};
}
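The ordering check reflects that an `atomic_load` with `acquire` or stronger ordering has inter-thread side effects even when its result is unused, so it can never be elided; roughly:
// _ = @atomicLoad(u32, ptr, .acquire);   // must lower: the acquire edge
//                                        // orders subsequent memory accesses
// _ = @atomicLoad(u32, ptr, .monotonic); // may be elided when unused and
//                                        // `ptr` is not volatile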


@ -439,6 +439,7 @@ fn checkRef(ref: Air.Inst.Ref, zcu: *Zcu) bool {
pub fn checkVal(val: Value, zcu: *Zcu) bool {
const ty = val.typeOf(zcu);
if (!checkType(ty, zcu)) return false;
if (val.isUndef(zcu)) return true;
if (ty.toIntern() == .type_type and !checkType(val.toType(), zcu)) return false;
// Check for lazy values
switch (zcu.intern_pool.indexToKey(val.toIntern())) {


@ -259,8 +259,6 @@ crt_files: std.StringHashMapUnmanaged(CrtFile) = .empty,
/// Null means only show snippet on first error.
reference_trace: ?u32 = null,
libcxx_abi_version: libcxx.AbiVersion = libcxx.AbiVersion.default,
/// This mutex guards all `Compilation` mutable state.
/// Disabled in single-threaded mode because the thread pool spawns in the same thread.
mutex: if (builtin.single_threaded) struct {
@ -827,7 +825,7 @@ pub const MiscTask = enum {
@"mingw-w64 crt2.o",
@"mingw-w64 dllcrt2.o",
@"mingw-w64 mingw32.lib",
@"mingw-w64 libmingw32.lib",
};
pub const MiscError = struct {
@ -1172,7 +1170,6 @@ pub const CreateOptions = struct {
force_load_objc: bool = false,
/// Whether local symbols should be discarded from the symbol table.
discard_local_symbols: bool = false,
libcxx_abi_version: libcxx.AbiVersion = libcxx.AbiVersion.default,
/// (Windows) PDB source path prefix to instruct the linker how to resolve relative
/// paths when consolidating CodeView streams into a single PDB file.
pdb_source_path: ?[]const u8 = null,
@ -1512,7 +1509,7 @@ pub fn create(gpa: Allocator, arena: Allocator, options: CreateOptions) !*Compil
.emit_asm = options.emit_asm,
.emit_llvm_ir = options.emit_llvm_ir,
.emit_llvm_bc = options.emit_llvm_bc,
.work_queues = .{std.fifo.LinearFifo(Job, .Dynamic).init(gpa)} ** @typeInfo(std.meta.FieldType(Compilation, .work_queues)).array.len,
.work_queues = @splat(.init(gpa)),
.c_object_work_queue = std.fifo.LinearFifo(*CObject, .Dynamic).init(gpa),
.win32_resource_work_queue = if (dev.env.supports(.win32_resource)) std.fifo.LinearFifo(*Win32Resource, .Dynamic).init(gpa) else .{},
.astgen_work_queue = std.fifo.LinearFifo(Zcu.File.Index, .Dynamic).init(gpa),
@ -1545,7 +1542,6 @@ pub fn create(gpa: Allocator, arena: Allocator, options: CreateOptions) !*Compil
.debug_compiler_runtime_libs = options.debug_compiler_runtime_libs,
.debug_compile_errors = options.debug_compile_errors,
.incremental = options.incremental,
.libcxx_abi_version = options.libcxx_abi_version,
.root_name = root_name,
.sysroot = sysroot,
.windows_libs = windows_libs,
@ -1886,7 +1882,7 @@ pub fn create(gpa: Allocator, arena: Allocator, options: CreateOptions) !*Compil
const main_crt_file: mingw.CrtFile = if (is_dyn_lib) .dllcrt2_o else .crt2_o;
comp.queued_jobs.mingw_crt_file[@intFromEnum(main_crt_file)] = true;
comp.queued_jobs.mingw_crt_file[@intFromEnum(mingw.CrtFile.mingw32_lib)] = true;
comp.queued_jobs.mingw_crt_file[@intFromEnum(mingw.CrtFile.libmingw32_lib)] = true;
comp.remaining_prelink_tasks += 2;
// When linking mingw-w64 there are some import libs we always need.
@ -2129,7 +2125,7 @@ pub fn update(comp: *Compilation, main_progress_node: std.Progress.Node) !void {
const is_hit = man.hit() catch |err| switch (err) {
error.CacheCheckFailed => switch (man.diagnostic) {
.none => unreachable,
.manifest_create, .manifest_read, .manifest_lock => |e| return comp.setMiscFailure(
.manifest_create, .manifest_read, .manifest_lock, .manifest_seek => |e| return comp.setMiscFailure(
.check_whole_cache,
"failed to check cache: {s} {s}",
.{ @tagName(man.diagnostic), @errorName(e) },
@ -2261,7 +2257,7 @@ pub fn update(comp: *Compilation, main_progress_node: std.Progress.Node) !void {
zcu.compile_log_text.shrinkAndFree(gpa, 0);
zcu.skip_analysis_errors = false;
zcu.skip_analysis_this_update = false;
// Make sure std.zig is inside the import_table. We unconditionally need
// it for start.zig.
@ -2336,6 +2332,17 @@ pub fn update(comp: *Compilation, main_progress_node: std.Progress.Node) !void {
const pt: Zcu.PerThread = .activate(zcu, .main);
defer pt.deactivate();
if (!zcu.skip_analysis_this_update) {
if (comp.config.is_test) {
// The `test_functions` decl has been intentionally postponed until now,
// at which point we must populate it with the list of test functions that
// have been discovered and not filtered out.
try pt.populateTestFunctions(main_progress_node);
}
try pt.processExports();
}
if (build_options.enable_debug_extensions and comp.verbose_intern_pool) {
std.debug.print("intern pool stats for '{s}':\n", .{
comp.root_name,
@ -2350,15 +2357,6 @@ pub fn update(comp: *Compilation, main_progress_node: std.Progress.Node) !void {
});
zcu.intern_pool.dumpGenericInstances(gpa);
}
if (comp.config.is_test) {
// The `test_functions` decl has been intentionally postponed until now,
// at which point we must populate it with the list of test functions that
// have been discovered and not filtered out.
try pt.populateTestFunctions(main_progress_node);
}
try pt.processExports();
}
if (anyErrors(comp)) {
@ -3310,7 +3308,7 @@ pub fn getAllErrorsAlloc(comp: *Compilation) !ErrorBundle {
}
}
}
if (zcu.skip_analysis_errors) break :zcu_errors;
if (zcu.skip_analysis_this_update) break :zcu_errors;
var sorted_failed_analysis: std.AutoArrayHashMapUnmanaged(InternPool.AnalUnit, *Zcu.ErrorMsg).DataList.Slice = s: {
const SortOrder = struct {
zcu: *Zcu,
@ -3446,7 +3444,7 @@ pub fn getAllErrorsAlloc(comp: *Compilation) !ErrorBundle {
try comp.link_diags.addMessagesToBundle(&bundle, comp.bin_file);
if (comp.zcu) |zcu| {
if (!zcu.skip_analysis_errors and bundle.root_list.items.len == 0 and zcu.compile_log_sources.count() != 0) {
if (!zcu.skip_analysis_this_update and bundle.root_list.items.len == 0 and zcu.compile_log_sources.count() != 0) {
const values = zcu.compile_log_sources.values();
// First one will be the error; subsequent ones will be notes.
const src_loc = values[0].src();
@ -3957,7 +3955,7 @@ fn performAllTheWorkInner(
// However, this means our analysis data is invalid, so we want to omit all analysis errors.
assert(zcu.failed_files.count() > 0); // we will get an error
zcu.skip_analysis_errors = true;
zcu.skip_analysis_this_update = true;
return;
}
@ -5115,11 +5113,13 @@ fn updateCObject(comp: *Compilation, c_object: *CObject, c_obj_prog_node: std.Pr
}
// Just to save disk space, we delete the files that are never needed again.
defer if (out_diag_path) |diag_file_path| zig_cache_tmp_dir.deleteFile(std.fs.path.basename(diag_file_path)) catch |err| {
log.warn("failed to delete '{s}': {s}", .{ diag_file_path, @errorName(err) });
defer if (out_diag_path) |diag_file_path| zig_cache_tmp_dir.deleteFile(std.fs.path.basename(diag_file_path)) catch |err| switch (err) {
error.FileNotFound => {}, // the file wasn't created due to an error we reported
else => log.warn("failed to delete '{s}': {s}", .{ diag_file_path, @errorName(err) }),
};
defer if (out_dep_path) |dep_file_path| zig_cache_tmp_dir.deleteFile(std.fs.path.basename(dep_file_path)) catch |err| {
log.warn("failed to delete '{s}': {s}", .{ dep_file_path, @errorName(err) });
defer if (out_dep_path) |dep_file_path| zig_cache_tmp_dir.deleteFile(std.fs.path.basename(dep_file_path)) catch |err| switch (err) {
error.FileNotFound => {}, // the file wasn't created due to an error we reported
else => log.warn("failed to delete '{s}': {s}", .{ dep_file_path, @errorName(err) }),
};
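
A minimal sketch of the pattern adopted above, assuming only that a missing file is an expected outcome (it was never produced, or something else already removed it) rather than a condition worth warning about:

const std = @import("std");

fn deleteIfExists(dir: std.fs.Dir, basename: []const u8) void {
    dir.deleteFile(basename) catch |err| switch (err) {
        // Never created (e.g. the producing step failed earlier), so nothing to do.
        error.FileNotFound => {},
        else => std.log.warn("failed to delete '{s}': {s}", .{ basename, @errorName(err) }),
    };
}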
if (std.process.can_spawn) {
var child = std.process.Child.init(argv.items, arena);
@ -5626,12 +5626,40 @@ pub fn addCCArgs(
const llvm_triple = try @import("codegen/llvm.zig").targetTriple(arena, target);
try argv.appendSlice(&[_][]const u8{ "-target", llvm_triple });
switch (target.os.tag) {
.ios, .macos, .tvos, .watchos => |os| {
try argv.ensureUnusedCapacity(2);
// Pass the proper -m<os>-version-min argument for darwin.
const ver = target.os.version_range.semver.min;
argv.appendAssumeCapacity(try std.fmt.allocPrint(arena, "-m{s}{s}-version-min={d}.{d}.{d}", .{
@tagName(os),
switch (target.abi) {
.simulator => "-simulator",
else => "",
},
ver.major,
ver.minor,
ver.patch,
}));
// This avoids a warning that sometimes occurs when
// providing both a -target argument that contains a
// version as well as the -mmacosx-version-min argument.
// Zig provides the correct value in both places, so it
// doesn't matter which one gets overridden.
argv.appendAssumeCapacity("-Wno-overriding-option");
},
else => {},
}
if (target.cpu.arch.isArm()) {
try argv.append(if (target.cpu.arch.isThumb()) "-mthumb" else "-mno-thumb");
}
if (target_util.llvmMachineAbi(target)) |mabi| {
try argv.append(try std.fmt.allocPrint(arena, "-mabi={s}", .{mabi}));
// Clang's integrated Arm assembler doesn't support `-mabi` yet...
if (!(target.cpu.arch.isArm() and (ext == .assembly or ext == .assembly_with_cpp))) {
try argv.append(try std.fmt.allocPrint(arena, "-mabi={s}", .{mabi}));
}
}
// We might want to support -mfloat-abi=softfp for Arm and CSKY here in the future.
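
A standalone sketch of the version-min flag construction above, under the assumption that the simulator ABI only adds a suffix to the OS name; `versionMinFlag` is a hypothetical helper, not compiler API:

const std = @import("std");

fn versionMinFlag(
    allocator: std.mem.Allocator,
    os_name: []const u8,
    simulator: bool,
    major: u64,
    minor: u64,
    patch: u64,
) ![]u8 {
    return std.fmt.allocPrint(allocator, "-m{s}{s}-version-min={d}.{d}.{d}", .{
        os_name,
        if (simulator) "-simulator" else "",
        major,
        minor,
        patch,
    });
}

test versionMinFlag {
    const flag = try versionMinFlag(std.testing.allocator, "ios", true, 17, 0, 0);
    defer std.testing.allocator.free(flag);
    try std.testing.expectEqualStrings("-mios-simulator-version-min=17.0.0", flag);
}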
@ -5743,6 +5771,19 @@ pub fn addCCArgs(
try argv.append("-D_SOFT_DOUBLE");
}
switch (mod.optimize_mode) {
.Debug => {
// windows c runtime requires -D_DEBUG if using debug libraries
try argv.append("-D_DEBUG");
},
.ReleaseSafe => {
try argv.append("-D_FORTIFY_SOURCE=2");
},
.ReleaseFast, .ReleaseSmall => {
try argv.append("-DNDEBUG");
},
}
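
Condensed restatement of the mapping just added, as a hypothetical helper (the real code appends to `argv` in place):

const std = @import("std");

fn crtMacroFor(mode: std.builtin.OptimizeMode) []const u8 {
    return switch (mode) {
        .Debug => "-D_DEBUG", // debug C runtimes on Windows expect this
        .ReleaseSafe => "-D_FORTIFY_SOURCE=2",
        .ReleaseFast, .ReleaseSmall => "-DNDEBUG",
    };
}

test crtMacroFor {
    try std.testing.expectEqualStrings("-DNDEBUG", crtMacroFor(.ReleaseFast));
}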
if (comp.config.link_libc) {
if (target.isGnuLibC()) {
const target_version = target.os.versionRange().gnuLibCVersion().?;
@ -5786,11 +5827,12 @@ pub fn addCCArgs(
// See the comment in libcxx.zig for more details about this.
try argv.append("-D_LIBCPP_PSTL_BACKEND_SERIAL");
const abi_version: u2 = if (target.os.tag == .emscripten) 2 else 1;
try argv.append(try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_VERSION={d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
}));
try argv.append(try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_NAMESPACE=__{d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
}));
try argv.append(libcxx.hardeningModeFlag(mod.optimize_mode));
@ -5835,6 +5877,32 @@ pub fn addCCArgs(
}
}
// Only C-family files support these flags.
switch (ext) {
.c,
.h,
.cpp,
.hpp,
.m,
.hm,
.mm,
.hmm,
=> {
try argv.append("-fno-spell-checking");
if (target.os.tag == .windows and target.abi.isGnu()) {
// windows.h has files such as pshpack1.h which do #pragma packing,
// triggering a clang warning. So for this target, we disable this warning.
try argv.append("-Wno-pragma-pack");
}
if (mod.optimize_mode != .Debug) {
try argv.append("-Werror=date-time");
}
},
else => {},
}
// Only assembly files support these flags.
switch (ext) {
.assembly,
@ -5909,7 +5977,7 @@ pub fn addCCArgs(
else => {},
}
// Only C-family files support these flags.
// Only compiled files support these flags.
switch (ext) {
.c,
.h,
@ -5919,9 +5987,9 @@ pub fn addCCArgs(
.hm,
.mm,
.hmm,
.ll,
.bc,
=> {
try argv.append("-fno-spell-checking");
if (target_util.clangSupportsTargetCpuArg(target)) {
if (target.cpu.model.llvm_name) |llvm_name| {
try argv.appendSlice(&[_][]const u8{
@ -5941,6 +6009,10 @@ pub fn addCCArgs(
// We communicate float ABI to Clang through the dedicated options further down.
if (std.mem.eql(u8, llvm_name, "soft-float")) continue;
// Ignore these until we figure out how to handle the concept of omitting features.
// See https://github.com/ziglang/zig/issues/23539
if (target_util.isDynamicAMDGCNFeature(target, feature)) continue;
argv.appendSliceAssumeCapacity(&[_][]const u8{ "-Xclang", "-target-feature", "-Xclang" });
const plus_or_minus = "-+"[@intFromBool(is_enabled)];
const arg = try std.fmt.allocPrint(arena, "{c}{s}", .{ plus_or_minus, llvm_name });
@ -5948,48 +6020,6 @@ pub fn addCCArgs(
}
}
switch (target.os.tag) {
.windows => {
// windows.h has files such as pshpack1.h which do #pragma packing,
// triggering a clang warning. So for this target, we disable this warning.
if (target.abi.isGnu()) {
try argv.append("-Wno-pragma-pack");
}
},
.macos => {
try argv.ensureUnusedCapacity(2);
// Pass the proper -m<os>-version-min argument for darwin.
const ver = target.os.version_range.semver.min;
argv.appendAssumeCapacity(try std.fmt.allocPrint(arena, "-mmacos-version-min={d}.{d}.{d}", .{
ver.major, ver.minor, ver.patch,
}));
// This avoids a warning that sometimes occurs when
// providing both a -target argument that contains a
// version as well as the -mmacosx-version-min argument.
// Zig provides the correct value in both places, so it
// doesn't matter which one gets overridden.
argv.appendAssumeCapacity("-Wno-overriding-option");
},
.ios => switch (target.cpu.arch) {
// Pass the proper -m<os>-version-min argument for darwin.
.x86, .x86_64 => {
const ver = target.os.version_range.semver.min;
try argv.append(try std.fmt.allocPrint(
arena,
"-m{s}-simulator-version-min={d}.{d}.{d}",
.{ @tagName(target.os.tag), ver.major, ver.minor, ver.patch },
));
},
else => {
const ver = target.os.version_range.semver.min;
try argv.append(try std.fmt.allocPrint(arena, "-m{s}-version-min={d}.{d}.{d}", .{
@tagName(target.os.tag), ver.major, ver.minor, ver.patch,
}));
},
},
else => {},
}
{
var san_arg: std.ArrayListUnmanaged(u8) = .empty;
const prefix = "-fsanitize=";
@ -6026,17 +6056,21 @@ pub fn addCCArgs(
// function was called.
try argv.append("-fno-sanitize=function");
// It's recommended to use the minimal runtime in production environments
// due to the security implications of the full runtime. The minimal runtime
// doesn't provide much benefit over simply trapping.
if (mod.optimize_mode == .ReleaseSafe) {
// If we want to sanitize C, but the ubsan runtime has been turned off,
// we'll switch to just trapping.
if (comp.ubsan_rt_strat == .none or mod.optimize_mode == .ReleaseSafe) {
// It's recommended to use the minimal runtime in production
// environments due to the security implications of the full runtime.
// The minimal runtime doesn't provide much benefit over simply
// trapping, however, so we do that instead.
try argv.append("-fsanitize-trap=undefined");
}
// This is necessary because, by default, Clang instructs LLVM to embed a COFF link
// dependency on `libclang_rt.ubsan_standalone.a` when the UBSan runtime is used.
if (target.os.tag == .windows) {
try argv.append("-fno-rtlib-defaultlib");
} else {
// This is necessary because, by default, Clang instructs LLVM to embed
// a COFF link dependency on `libclang_rt.ubsan_standalone.a` when the
// UBSan runtime is used.
if (target.os.tag == .windows) {
try argv.append("-fno-rtlib-defaultlib");
}
}
}
}
@ -6048,8 +6082,6 @@ pub fn addCCArgs(
switch (mod.optimize_mode) {
.Debug => {
// windows c runtime requires -D_DEBUG if using debug libraries
try argv.append("-D_DEBUG");
// Clang has -Og for compatibility with GCC, but currently it is just equivalent
// to -O1. Besides potentially impairing debugging, -O1/-Og significantly
// increases compile times.
@ -6059,10 +6091,8 @@ pub fn addCCArgs(
// See the comment in the BuildModeFastRelease case for why we pass -O2 rather
// than -O3 here.
try argv.append("-O2");
try argv.append("-D_FORTIFY_SOURCE=2");
},
.ReleaseFast => {
try argv.append("-DNDEBUG");
// Here we pass -O2 rather than -O3 because, although we do the equivalent of
// -O3 in Zig code, the justification for the difference here is that Zig
// has better detection and prevention of undefined behavior, so -O3 is safer for
@ -6071,14 +6101,9 @@ pub fn addCCArgs(
try argv.append("-O2");
},
.ReleaseSmall => {
try argv.append("-DNDEBUG");
try argv.append("-Os");
},
}
if (mod.optimize_mode != .Debug) {
try argv.append("-Werror=date-time");
}
},
else => {},
}

View File

@ -9443,7 +9443,7 @@ pub fn getFuncInstanceIes(
try items.ensureUnusedCapacity(4);
const generic_owner = unwrapCoercedFunc(ip, arg.generic_owner);
const generic_owner_ty = ip.indexToKey(ip.funcDeclInfo(arg.generic_owner).ty).func_type;
const generic_owner_ty = ip.indexToKey(ip.funcDeclInfo(generic_owner).ty).func_type;
// The strategy here is to add the function decl unconditionally, then to
// ask if it already exists, and if so, revert the lengths of the mutated

View File

@ -1850,7 +1850,11 @@ const FileHeader = struct {
return magic_number == std.macho.MH_MAGIC or
magic_number == std.macho.MH_MAGIC_64 or
magic_number == std.macho.FAT_MAGIC or
magic_number == std.macho.FAT_MAGIC_64;
magic_number == std.macho.FAT_MAGIC_64 or
magic_number == std.macho.MH_CIGAM or
magic_number == std.macho.MH_CIGAM_64 or
magic_number == std.macho.FAT_CIGAM or
magic_number == std.macho.FAT_CIGAM_64;
}
pub fn isExecutable(self: *FileHeader) bool {
@ -1875,6 +1879,11 @@ test FileHeader {
h.bytes_read = 0;
h.update(&macho64_magic_bytes);
try std.testing.expect(h.isExecutable());
const macho64_cigam_bytes = [_]u8{ 0xFE, 0xED, 0xFA, 0xCF };
h.bytes_read = 0;
h.update(&macho64_cigam_bytes);
try std.testing.expect(h.isExecutable());
}
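
The four added `*_CIGAM*` constants are the byte-swapped forms of the existing magics, letting headers produced on an opposite-endianness machine be recognized. An equivalent check, as a hedged sketch that swaps once instead of enumerating both spellings:

const std = @import("std");

fn isMachOMagic(magic: u32) bool {
    return switch (magic) {
        std.macho.MH_MAGIC,
        std.macho.MH_MAGIC_64,
        std.macho.FAT_MAGIC,
        std.macho.FAT_MAGIC_64,
        => true,
        else => switch (@byteSwap(magic)) {
            std.macho.MH_MAGIC,
            std.macho.MH_MAGIC_64,
            std.macho.FAT_MAGIC,
            std.macho.FAT_MAGIC_64,
            => true,
            else => false,
        },
    };
}

test isMachOMagic {
    // MH_CIGAM_64 is @byteSwap(MH_MAGIC_64), so the swapped path accepts it.
    try std.testing.expect(isMachOMagic(std.macho.MH_CIGAM_64));
    try std.testing.expect(!isMachOMagic(0));
}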
// Result of the `unpackResource` operation. Enables collecting errors from

View File

@ -331,6 +331,10 @@ pub fn create(arena: Allocator, options: CreateOptions) !*Package.Module {
// Append disabled features after enabled ones, so that their effects aren't overwritten.
for (target.cpu.arch.allFeaturesList()) |feature| {
if (feature.llvm_name) |llvm_name| {
// Ignore these until we figure out how to handle the concept of omitting features.
// See https://github.com/ziglang/zig/issues/23539
if (target_util.isDynamicAMDGCNFeature(target, feature)) continue;
const is_enabled = target.cpu.features.isEnabled(feature.index);
if (is_enabled) {

View File

@ -3669,7 +3669,10 @@ fn indexablePtrLenOrNone(
const zcu = pt.zcu;
const operand_ty = sema.typeOf(operand);
try checkMemOperand(sema, block, src, operand_ty);
if (operand_ty.ptrSize(zcu) == .many) return .none;
switch (operand_ty.ptrSize(zcu)) {
.many, .c => return .none,
.one, .slice => {},
}
const field_name = try zcu.intern_pool.getOrPutString(sema.gpa, pt.tid, "len", .no_embedded_nulls);
return sema.fieldVal(block, src, operand, field_name, src);
}
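
The rule above in user terms: single-item pointers (via their array child type) and slices have a recoverable `len`; many-item and C pointers do not, so the function answers `.none` for them. A small hedged illustration:

const std = @import("std");

test "which pointer sizes carry a length" {
    const array = [3]u8{ 1, 2, 3 };
    const one: *const [3]u8 = &array; // .one: length known from the array type
    const slice: []const u8 = &array; // .slice: length stored at runtime
    try std.testing.expectEqual(@as(usize, 3), one.len);
    try std.testing.expectEqual(@as(usize, 3), slice.len);
    // A [*]const u8 (.many) or [*c]const u8 (.c) has no `len` field at all.
}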
@ -13524,7 +13527,7 @@ fn validateErrSetSwitch(
seen_errors: *SwitchErrorSet,
case_vals: *std.ArrayListUnmanaged(Air.Inst.Ref),
operand_ty: Type,
inst_data: std.meta.FieldType(Zir.Inst.Data, .pl_node),
inst_data: @FieldType(Zir.Inst.Data, "pl_node"),
scalar_cases_len: u32,
multi_cases_len: u32,
else_case: struct { body: []const Zir.Inst.Index, end: usize, src: LazySrcLoc },
@ -17841,11 +17844,18 @@ fn zirThis(
const zcu = pt.zcu;
const namespace = pt.zcu.namespacePtr(block.namespace);
const new_ty = try pt.ensureTypeUpToDate(namespace.owner_type);
switch (pt.zcu.intern_pool.indexToKey(new_ty)) {
.struct_type, .union_type => try sema.declareDependency(.{ .interned = new_ty }),
switch (pt.zcu.intern_pool.indexToKey(namespace.owner_type)) {
.opaque_type => {
// Opaque types are never outdated since they don't undergo type resolution, so nothing to do!
return Air.internedToRef(namespace.owner_type);
},
.struct_type, .union_type => {
const new_ty = try pt.ensureTypeUpToDate(namespace.owner_type);
try sema.declareDependency(.{ .interned = new_ty });
return Air.internedToRef(new_ty);
},
.enum_type => {
const new_ty = try pt.ensureTypeUpToDate(namespace.owner_type);
try sema.declareDependency(.{ .interned = new_ty });
// Since this is an enum, it has to be resolved immediately.
// `ensureTypeUpToDate` has resolved the new type if necessary.
@ -17854,11 +17864,10 @@ fn zirThis(
if (zcu.failed_analysis.contains(ty_unit) or zcu.transitive_failed_analysis.contains(ty_unit)) {
return error.AnalysisFail;
}
return Air.internedToRef(new_ty);
},
.opaque_type => {},
else => unreachable,
}
return Air.internedToRef(new_ty);
}
fn zirClosureGet(sema: *Sema, block: *Block, extended: Zir.Inst.Extended.InstData) CompileError!Air.Inst.Ref {
@ -20284,11 +20293,41 @@ fn zirStructInitEmptyResult(sema: *Sema, block: *Block, inst: Zir.Inst.Index, is
const zcu = pt.zcu;
const inst_data = sema.code.instructions.items(.data)[@intFromEnum(inst)].un_node;
const src = block.nodeOffset(inst_data.src_node);
// Generic poison means this is an untyped anonymous empty struct/array init
const ty_operand = try sema.resolveTypeOrPoison(block, src, inst_data.operand) orelse return .empty_tuple;
const ty_operand = try sema.resolveTypeOrPoison(block, src, inst_data.operand) orelse {
if (is_byref) {
return sema.uavRef(.empty_tuple);
} else {
return .empty_tuple;
}
};
const init_ty = if (is_byref) ty: {
const ptr_ty = ty_operand.optEuBaseType(zcu);
assert(ptr_ty.zigTypeTag(zcu) == .pointer); // validated by a previous instruction
switch (ptr_ty.ptrSize(zcu)) {
// Use a zero-length array for a slice or many-ptr result
.slice, .many => break :ty try pt.arrayType(.{
.len = 0,
.child = ptr_ty.childType(zcu).toIntern(),
.sentinel = if (ptr_ty.sentinel(zcu)) |s| s.toIntern() else .none,
}),
// Just use the child type for a single-pointer or C-pointer result
.one, .c => {
const child = ptr_ty.childType(zcu);
if (child.toIntern() == .anyopaque_type) {
// ...unless that child is anyopaque, in which case this is equivalent to an untyped init.
// `.{}` is an empty tuple.
if (is_byref) {
return sema.uavRef(.empty_tuple);
} else {
return .empty_tuple;
}
}
break :ty child;
},
}
if (!ptr_ty.isSlice(zcu)) {
break :ty ptr_ty.childType(zcu);
}
@ -23199,8 +23238,16 @@ fn ptrFromIntVal(
const addr = try operand_val.toUnsignedIntSema(pt);
if (!ptr_ty.isAllowzeroPtr(zcu) and addr == 0)
return sema.fail(block, operand_src, "pointer type '{}' does not allow address zero", .{ptr_ty.fmt(pt)});
if (addr != 0 and ptr_align != .none and !ptr_align.check(addr))
return sema.fail(block, operand_src, "pointer type '{}' requires aligned address", .{ptr_ty.fmt(pt)});
if (addr != 0 and ptr_align != .none) {
const masked_addr = if (ptr_ty.childType(zcu).fnPtrMaskOrNull(zcu)) |mask|
addr & mask
else
addr;
if (!ptr_align.check(masked_addr)) {
return sema.fail(block, operand_src, "pointer type '{}' requires aligned address", .{ptr_ty.fmt(pt)});
}
}
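
Hedged sketch of the masking rule introduced above: on some targets a function pointer's low bits encode an execution mode (the classic case is the Thumb bit on Arm), so they must be cleared before checking alignment. `thumb_mask` below is illustrative, not the compiler's actual mask:

const std = @import("std");

fn checkFnPtrAlign(addr: usize, alignment: usize, mask: ?usize) bool {
    const masked = if (mask) |m| addr & m else addr;
    return masked % alignment == 0;
}

test checkFnPtrAlign {
    const thumb_mask: usize = ~@as(usize, 1); // the low bit marks Thumb code
    try std.testing.expect(!checkFnPtrAlign(0x8001, 2, null));
    try std.testing.expect(checkFnPtrAlign(0x8001, 2, thumb_mask));
}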
return switch (ptr_ty.zigTypeTag(zcu)) {
.optional => Value.fromInterned(try pt.intern(.{ .opt = .{
@ -23452,11 +23499,14 @@ fn ptrCastFull(
if (src_slice_like_elem.comptimeOnly(zcu) or dest_elem.comptimeOnly(zcu)) {
return sema.fail(block, src, "cannot infer length of slice of '{}' from slice of '{}'", .{ dest_elem.fmt(pt), src_slice_like_elem.fmt(pt) });
}
const src_elem_size = src_slice_like_elem.abiSize(zcu);
// It's okay for `src_slice_like_elem` to be 0-bit; the resulting slice will just always have 0 elements.
// However, `dest_elem` can't be 0-bit. If it were, then either the source slice has 0 bits and we don't
// know what `result.len` should be, or the source has >0 bits and there is no valid `result.len`.
const dest_elem_size = dest_elem.abiSize(zcu);
if (src_elem_size == 0 or dest_elem_size == 0) {
if (dest_elem_size == 0) {
return sema.fail(block, src, "cannot infer length of slice of '{}' from slice of '{}'", .{ dest_elem.fmt(pt), src_slice_like_elem.fmt(pt) });
}
const src_elem_size = src_slice_like_elem.abiSize(zcu);
break :need_len_change src_elem_size != dest_elem_size;
} else false;
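
The surviving case in user terms, as a hedged sketch: when both element types have a nonzero size, a slice-to-slice @ptrCast recomputes `len` from the size ratio; a zero-bit destination element would leave no valid length to compute.

const std = @import("std");

test "slice cast infers length from element sizes" {
    const bytes: [4]u8 align(2) = .{ 1, 2, 3, 4 };
    const as_u8: []const u8 = &bytes;
    const as_u16: []const u16 = @alignCast(@ptrCast(as_u8));
    try std.testing.expectEqual(@as(usize, 2), as_u16.len);
}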
@ -23713,7 +23763,12 @@ fn ptrCastFull(
if (dest_align.compare(.gt, src_align)) {
if (try ptr_val.getUnsignedIntSema(pt)) |addr| {
if (!dest_align.check(addr)) {
const masked_addr = if (Type.fromInterned(dest_info.child).fnPtrMaskOrNull(zcu)) |mask|
addr & mask
else
addr;
if (!dest_align.check(masked_addr)) {
return sema.fail(block, operand_src, "pointer address 0x{X} is not aligned to {d} bytes", .{
addr,
dest_align.toByteUnits().?,
@ -26314,12 +26369,14 @@ fn zirMemcpy(sema: *Sema, block: *Block, inst: Zir.Inst.Index) CompileError!void
var info = dest_ty.ptrInfo(zcu);
info.flags.size = .one;
info.child = array_ty.toIntern();
info.sentinel = .none;
break :info info;
});
const src_array_ptr_ty = try pt.ptrType(info: {
var info = src_ty.ptrInfo(zcu);
info.flags.size = .one;
info.child = array_ty.toIntern();
info.sentinel = .none;
break :info info;
});
@ -26600,13 +26657,12 @@ fn zirFuncFancy(sema: *Sema, block: *Block, inst: Zir.Inst.Index) CompileError!A
break :cc .auto;
};
const ret_ty: Type = if (extra.data.bits.ret_ty_is_generic)
.generic_poison
else if (extra.data.bits.has_ret_ty_body) blk: {
const ret_ty: Type = if (extra.data.bits.has_ret_ty_body) blk: {
const body_len = sema.code.extra[extra_index];
extra_index += 1;
const body = sema.code.bodySlice(extra_index, body_len);
extra_index += body.len;
if (extra.data.bits.ret_ty_is_generic) break :blk .generic_poison;
const val = try sema.resolveGenericBody(block, ret_src, body, inst, Type.type, .{ .simple = .function_ret_ty });
const ty = val.toType();
@ -26614,6 +26670,8 @@ fn zirFuncFancy(sema: *Sema, block: *Block, inst: Zir.Inst.Index) CompileError!A
} else if (extra.data.bits.has_ret_ty_ref) blk: {
const ret_ty_ref: Zir.Inst.Ref = @enumFromInt(sema.code.extra[extra_index]);
extra_index += 1;
if (extra.data.bits.ret_ty_is_generic) break :blk .generic_poison;
const ret_ty_air_ref = try sema.resolveInst(ret_ty_ref);
const ret_ty_val = try sema.resolveConstDefinedValue(block, ret_src, ret_ty_air_ref, .{ .simple = .function_ret_ty });
break :blk ret_ty_val.toType();
@ -28524,12 +28582,17 @@ fn structFieldPtrByIndex(
const zcu = pt.zcu;
const ip = &zcu.intern_pool;
if (try sema.resolveDefinedValue(block, src, struct_ptr)) |struct_ptr_val| {
const val = try struct_ptr_val.ptrField(field_index, pt);
return Air.internedToRef(val.toIntern());
const struct_type = zcu.typeToStruct(struct_ty).?;
const field_is_comptime = struct_type.fieldIsComptime(ip, field_index);
// Comptime fields are handled later
if (!field_is_comptime) {
if (try sema.resolveDefinedValue(block, src, struct_ptr)) |struct_ptr_val| {
const val = try struct_ptr_val.ptrField(field_index, pt);
return Air.internedToRef(val.toIntern());
}
}
const struct_type = zcu.typeToStruct(struct_ty).?;
const field_ty = struct_type.field_types.get(ip)[field_index];
const struct_ptr_ty = sema.typeOf(struct_ptr);
const struct_ptr_ty_info = struct_ptr_ty.ptrInfo(zcu);
@ -28549,6 +28612,7 @@ fn structFieldPtrByIndex(
try Type.fromInterned(struct_ptr_ty_info.child).abiAlignmentSema(pt);
if (struct_type.layout == .@"packed") {
assert(!field_is_comptime);
switch (struct_ty.packedStructFieldPtrInfo(struct_ptr_ty, field_index, pt)) {
.bit_ptr => |packed_offset| {
ptr_ty_data.flags.alignment = parent_align;
@ -28559,6 +28623,7 @@ fn structFieldPtrByIndex(
},
}
} else if (struct_type.layout == .@"extern") {
assert(!field_is_comptime);
// For extern structs, field alignment might be bigger than type's
// natural alignment. Eg, in `extern struct { x: u32, y: u16 }` the
// second field is aligned as u32.
@ -28582,7 +28647,7 @@ fn structFieldPtrByIndex(
const ptr_field_ty = try pt.ptrTypeSema(ptr_ty_data);
if (struct_type.fieldIsComptime(ip, field_index)) {
if (field_is_comptime) {
try struct_ty.resolveStructFieldInits(pt);
const val = try pt.intern(.{ .ptr = .{
.ty = ptr_field_ty.toIntern(),
@ -28979,6 +29044,14 @@ fn elemPtrOneLayerOnly(
}
const result_ty = try indexable_ty.elemPtrType(null, pt);
try sema.validateRuntimeElemAccess(block, elem_index_src, result_ty, indexable_ty, indexable_src);
try sema.validateRuntimeValue(block, indexable_src, indexable);
if (!try result_ty.childType(zcu).hasRuntimeBitsIgnoreComptimeSema(pt)) {
// zero-bit child type; just bitcast the pointer
return block.addBitCast(result_ty, indexable);
}
return block.addPtrElemPtr(indexable, elem_index, result_ty);
},
.one => {
@ -29107,7 +29180,8 @@ fn tupleFieldPtr(
const pt = sema.pt;
const zcu = pt.zcu;
const tuple_ptr_ty = sema.typeOf(tuple_ptr);
const tuple_ty = tuple_ptr_ty.childType(zcu);
const tuple_ptr_info = tuple_ptr_ty.ptrInfo(zcu);
const tuple_ty: Type = .fromInterned(tuple_ptr_info.child);
try tuple_ty.resolveFields(pt);
const field_count = tuple_ty.structFieldCount(zcu);
@ -29125,9 +29199,16 @@ fn tupleFieldPtr(
const ptr_field_ty = try pt.ptrTypeSema(.{
.child = field_ty.toIntern(),
.flags = .{
.is_const = !tuple_ptr_ty.ptrIsMutable(zcu),
.is_volatile = tuple_ptr_ty.isVolatilePtr(zcu),
.address_space = tuple_ptr_ty.ptrAddressSpace(zcu),
.is_const = tuple_ptr_info.flags.is_const,
.is_volatile = tuple_ptr_info.flags.is_volatile,
.address_space = tuple_ptr_info.flags.address_space,
.alignment = a: {
if (tuple_ptr_info.flags.alignment == .none) break :a .none;
// The tuple pointer isn't naturally aligned, so the field pointer might be underaligned.
const tuple_align = tuple_ptr_info.flags.alignment;
const field_align = try field_ty.abiAlignmentSema(pt);
break :a tuple_align.min(field_align);
},
},
});
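
The `min` computation above in user-visible terms: a field pointer obtained through an under-aligned aggregate pointer can promise at most the parent pointer's alignment, even when the field's natural alignment is larger. A hedged sketch:

const std = @import("std");

test "field pointers inherit a parent pointer's weaker alignment" {
    const S = extern struct { a: u8, b: u32 };
    var s: S = .{ .a = 1, .b = 2 };
    const p: *align(1) S = &s; // coercion may discard alignment knowledge
    const pb = &p.b; // offset 4 from an align(1) base: only align(1) is guaranteed
    try std.testing.expect(@TypeOf(pb) == *align(1) u32);
}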
@ -29439,6 +29520,11 @@ fn elemPtrSlice(
const cmp_op: Air.Inst.Tag = if (slice_sent) .cmp_lte else .cmp_lt;
try sema.addSafetyCheckIndexOob(block, src, elem_index, len_inst, cmp_op);
}
if (!try slice_ty.childType(zcu).hasRuntimeBitsIgnoreComptimeSema(pt)) {
// zero-bit child type; just extract the pointer and bitcast it
const slice_ptr = try block.addTyOp(.slice_ptr, slice_ty.slicePtrFieldType(zcu), slice);
return block.addBitCast(elem_ptr_ty, slice_ptr);
}
return block.addSliceElemPtr(slice, elem_index, elem_ptr_ty);
}
@ -30995,20 +31081,17 @@ fn coerceInMemoryAllowedFns(
} };
}
switch (src_param_ty.toIntern()) {
.generic_poison_type => {},
else => {
// Note: Cast direction is reversed here.
const param = try sema.coerceInMemoryAllowed(block, src_param_ty, dest_param_ty, dest_is_mut, target, dest_src, src_src, null);
if (param != .ok) {
return .{ .fn_param = .{
.child = try param.dupe(sema.arena),
.actual = src_param_ty,
.wanted = dest_param_ty,
.index = param_i,
} };
}
},
if (!src_param_ty.isGenericPoison() and !dest_param_ty.isGenericPoison()) {
// Note: Cast direction is reversed here.
const param = try sema.coerceInMemoryAllowed(block, src_param_ty, dest_param_ty, dest_is_mut, target, dest_src, src_src, null);
if (param != .ok) {
return .{ .fn_param = .{
.child = try param.dupe(sema.arena),
.actual = src_param_ty,
.wanted = dest_param_ty,
.index = param_i,
} };
}
}
}
@ -35485,7 +35568,15 @@ pub fn resolveStructLayout(sema: *Sema, ty: Type) SemaError!void {
offsets[i] = @intCast(aligns[i].forward(offset));
offset = offsets[i] + sizes[i];
}
struct_type.setLayoutResolved(ip, @intCast(big_align.forward(offset)), big_align);
const size = std.math.cast(u32, big_align.forward(offset)) orelse {
const msg = try sema.errMsg(
ty.srcLoc(zcu),
"struct layout requires size {d}, this compiler implementation supports up to {d}",
.{ big_align.forward(offset), std.math.maxInt(u32) },
);
return sema.failWithOwnedErrorMsg(null, msg);
};
struct_type.setLayoutResolved(ip, size, big_align);
_ = try ty.comptimeOnlySema(pt);
}
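
Both this hunk and the matching union hunk below replace a blind `@intCast` with a checked cast, so a layout whose size exceeds the compiler's u32 bookkeeping becomes a compile error rather than a crash. The guard, reduced to a hedged standalone helper:

const std = @import("std");

fn checkedSize(size: u64) ?u32 {
    return std.math.cast(u32, size); // null means "report a layout error"
}

test checkedSize {
    try std.testing.expectEqual(@as(?u32, 4096), checkedSize(4096));
    try std.testing.expectEqual(@as(?u32, null), checkedSize(1 << 40));
}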
@ -35760,7 +35851,15 @@ pub fn resolveUnionLayout(sema: *Sema, ty: Type) SemaError!void {
break :layout .{ size, max_align.max(tag_align), padding };
} else .{ max_align.forward(max_size), max_align, 0 };
union_type.setHaveLayout(ip, @intCast(size), padding, alignment);
const casted_size = std.math.cast(u32, size) orelse {
const msg = try sema.errMsg(
ty.srcLoc(pt.zcu),
"union layout requires size {d}, this compiler implementation supports up to {d}",
.{ size, std.math.maxInt(u32) },
);
return sema.failWithOwnedErrorMsg(null, msg);
};
union_type.setHaveLayout(ip, casted_size, padding, alignment);
if (union_type.flagsUnordered(ip).assumed_runtime_bits and !(try ty.hasRuntimeBitsSema(pt))) {
const msg = try sema.errMsg(
@ -36719,7 +36818,7 @@ fn unionFields(
if (enum_index != field_i) {
const msg = msg: {
const enum_field_src: LazySrcLoc = .{
.base_node_inst = tag_info.zir_index.unwrap().?,
.base_node_inst = Type.fromInterned(tag_ty).typeDeclInstAllowGeneratedTag(zcu).?,
.offset = .{ .container_field_name = enum_index },
};
const msg = try sema.errMsg(name_src, "union field '{}' ordered differently than corresponding enum field", .{
@ -38029,6 +38128,11 @@ fn compareScalar(
const pt = sema.pt;
const coerced_lhs = try pt.getCoerced(lhs, ty);
const coerced_rhs = try pt.getCoerced(rhs, ty);
// Equality comparisons of signed zero and NaN need to use floating point semantics
if (coerced_lhs.isFloat(pt.zcu) or coerced_rhs.isFloat(pt.zcu))
return Value.compareHeteroSema(coerced_lhs, op, coerced_rhs, pt);
switch (op) {
.eq => return sema.valuesEqual(coerced_lhs, coerced_rhs, ty),
.neq => return !(try sema.valuesEqual(coerced_lhs, coerced_rhs, ty)),
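
The early float dispatch added above exists because two IEEE-754 facts break value-identity comparison: -0.0 equals +0.0 despite distinct bit patterns, and NaN is unequal to itself despite an identical bit pattern. A standalone check:

const std = @import("std");

test "float equality is not bit equality" {
    const pos_zero: f64 = 0.0;
    const neg_zero: f64 = -0.0;
    try std.testing.expect(pos_zero == neg_zero);
    try std.testing.expect(@as(u64, @bitCast(pos_zero)) != @as(u64, @bitCast(neg_zero)));

    const nan = std.math.nan(f64);
    try std.testing.expect(nan != nan);
    try std.testing.expect(@as(u64, @bitCast(nan)) == @as(u64, @bitCast(nan)));
}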

View File

@ -65,6 +65,15 @@ pub fn storeComptimePtr(
const zcu = pt.zcu;
const ptr_info = ptr.typeOf(zcu).ptrInfo(zcu);
assert(store_val.typeOf(zcu).toIntern() == ptr_info.child);
{
const store_ty: Type = .fromInterned(ptr_info.child);
if (!try store_ty.comptimeOnlySema(pt) and !try store_ty.hasRuntimeBitsIgnoreComptimeSema(pt)) {
// zero-bit store; nothing to do
return .success;
}
}
// TODO: host size for vectors is terrible
const host_bits = switch (ptr_info.flags.vector_index) {
.none => ptr_info.packed_offset.host_size * 8,

View File

@ -1740,10 +1740,7 @@ pub fn bitSizeInner(
const len = array_type.lenIncludingSentinel();
if (len == 0) return 0;
const elem_ty = Type.fromInterned(array_type.child);
const elem_size = @max(
(try elem_ty.abiAlignmentInner(strat_lazy, zcu, tid)).scalar.toByteUnits() orelse 0,
(try elem_ty.abiSizeInner(strat_lazy, zcu, tid)).scalar,
);
const elem_size = (try elem_ty.abiSizeInner(strat_lazy, zcu, tid)).scalar;
if (elem_size == 0) return 0;
const elem_bit_size = try elem_ty.bitSizeInner(strat, zcu, tid);
return (len - 1) * 8 * elem_size + elem_bit_size;
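
A worked instance of the corrected formula, assuming a common target where u24 has a 4-byte ABI size: every element but the last contributes its full ABI size in bits, and the last contributes only its bit size, so trailing padding is not counted.

const std = @import("std");

test "array bit size counts the last element without padding" {
    const len: u64 = 4;
    const elem_size: u64 = @sizeOf(u24); // ABI size in bytes, 4 on common targets
    const elem_bit_size: u64 = @bitSizeOf(u24); // 24
    const array_bits = (len - 1) * 8 * elem_size + elem_bit_size;
    try std.testing.expectEqual(@as(u64, 120), array_bits); // 3 * 32 + 24
}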

View File

@ -1132,6 +1132,8 @@ pub fn compareHeteroAdvanced(
else => {},
}
}
if (lhs.isNan(zcu) or rhs.isNan(zcu)) return op == .neq;
return (try orderAdvanced(lhs, rhs, strat, zcu, tid)).compare(op);
}
@ -2675,7 +2677,7 @@ pub fn shlSatScalar(
const shift: usize = @intCast(rhs.toUnsignedInt(zcu));
const limbs = try arena.alloc(
std.math.big.Limb,
std.math.big.int.calcTwosCompLimbCount(info.bits) + 1,
std.math.big.int.calcTwosCompLimbCount(info.bits),
);
var result_bigint = BigIntMutable{
.limbs = limbs,
@ -3777,6 +3779,7 @@ pub fn ptrField(parent_ptr: Value, field_idx: u32, pt: Zcu.PerThread) !Value {
.auto => break :field .{ field_ty, try aggregate_ty.fieldAlignmentSema(field_idx, pt) },
.@"extern" => {
// Well-defined layout, so just offset the pointer appropriately.
try aggregate_ty.resolveLayout(pt);
const byte_off = aggregate_ty.structFieldOffset(field_idx, zcu);
const field_align = a: {
const parent_align = if (parent_ptr_info.flags.alignment == .none) pa: {

View File

@ -181,7 +181,10 @@ analysis_roots: std.BoundedArray(*Package.Module, 4) = .{},
/// Allocated into `gpa`.
resolved_references: ?std.AutoHashMapUnmanaged(AnalUnit, ?ResolvedReference) = null,
skip_analysis_errors: bool = false,
/// If `true`, then semantic analysis must not occur on this update due to AstGen errors.
/// Essentially the entire pipeline after AstGen, including Sema, codegen, and link, is skipped.
/// Reset to `false` at the start of each update in `Compilation.update`.
skip_analysis_this_update: bool = false,
stage1_flags: packed struct {
have_winmain: bool = false,
@ -2748,7 +2751,7 @@ pub fn saveZoirCache(cache_file: std.fs.File, stat: std.fs.File.Stat, zoir: Zoir
},
.{
.base = @ptrCast(zoir.limbs),
.len = zoir.limbs.len * 4,
.len = zoir.limbs.len * @sizeOf(std.math.big.Limb),
},
.{
.base = zoir.string_bytes.ptr,
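
The `* 4` being replaced was a hardcoded 32-bit limb size; `std.math.big.Limb` is `usize`, so on 64-bit hosts each limb occupies 8 bytes and the serialized byte length must scale with @sizeOf(Limb). A quick check of that assumption:

const std = @import("std");

test "limb byte size is target dependent" {
    const limb_bytes = @sizeOf(std.math.big.Limb);
    try std.testing.expectEqual(@sizeOf(usize), limb_bytes); // Limb is usize
    const limbs = [_]std.math.big.Limb{ 1, 2, 3 };
    try std.testing.expectEqual(limbs.len * limb_bytes, @sizeOf(@TypeOf(limbs)));
}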
@ -3869,9 +3872,11 @@ fn resolveReferencesInner(zcu: *Zcu) !std.AutoHashMapUnmanaged(AnalUnit, ?Resolv
.unnamed_test => true,
.@"test", .decltest => a: {
const fqn_slice = nav.fqn.toSlice(ip);
for (comp.test_filters) |test_filter| {
if (std.mem.indexOf(u8, fqn_slice, test_filter) != null) break;
} else break :a false;
if (comp.test_filters.len > 0) {
for (comp.test_filters) |test_filter| {
if (std.mem.indexOf(u8, fqn_slice, test_filter) != null) break;
} else break :a false;
}
break :a true;
},
};
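
Standalone sketch of the filtering rule above, assuming `filters` holds the -Dtest-filter strings: an empty filter list keeps every test, otherwise a test survives when any filter is a substring of its fully-qualified name.

const std = @import("std");

fn testIsKept(fqn: []const u8, filters: []const []const u8) bool {
    if (filters.len == 0) return true;
    for (filters) |filter| {
        if (std.mem.indexOf(u8, fqn, filter) != null) return true;
    }
    return false;
}

test testIsKept {
    try std.testing.expect(testIsKept("foo.test.bar", &.{}));
    try std.testing.expect(testIsKept("foo.test.bar", &.{"bar"}));
    try std.testing.expect(!testIsKept("foo.test.bar", &.{"baz"}));
}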
@ -3881,7 +3886,10 @@ fn resolveReferencesInner(zcu: *Zcu) !std.AutoHashMapUnmanaged(AnalUnit, ?Resolv
@intFromEnum(inst_info.inst),
});
try unit_queue.put(gpa, .wrap(.{ .nav_val = nav_id }), referencer);
try unit_queue.put(gpa, .wrap(.{ .func = nav.status.fully_resolved.val }), referencer);
// Non-fatal AstGen errors could mean this test decl failed
if (nav.status == .fully_resolved) {
try unit_queue.put(gpa, .wrap(.{ .func = nav.status.fully_resolved.val }), referencer);
}
}
}
for (zcu.namespacePtr(ns).pub_decls.keys()) |nav| {

View File

@ -234,6 +234,7 @@ pub fn updateFile(
error.FileTooBig => unreachable, // 0 is not too big
else => |e| return e,
};
try cache_file.seekTo(0);
if (stat.size > std.math.maxInt(u32))
return error.FileTooBig;
@ -1444,6 +1445,8 @@ fn analyzeNavType(pt: Zcu.PerThread, nav_id: InternPool.Nav.Index) Zcu.CompileEr
break :ty .fromInterned(type_ref.toInterned().?);
};
try resolved_ty.resolveLayout(pt);
// In the case where the type is specified, this function is also responsible for resolving
// the pointer modifiers, i.e. alignment, linksection, addrspace.
const modifiers = try sema.resolveNavPtrModifiers(&block, zir_decl, inst_resolved.inst, resolved_ty);
@ -1705,7 +1708,7 @@ pub fn linkerUpdateFunc(pt: Zcu.PerThread, func_index: InternPool.Index, air: Ai
lf.updateFunc(pt, func_index, air, liveness) catch |err| switch (err) {
error.OutOfMemory => return error.OutOfMemory,
error.CodegenFail => assert(zcu.failed_codegen.contains(nav_index)),
error.Overflow => {
error.Overflow, error.RelocationNotByteAligned => {
try zcu.failed_codegen.putNoClobber(gpa, nav_index, try Zcu.ErrorMsg.create(
gpa,
zcu.navSrcLoc(nav_index),
@ -3131,7 +3134,7 @@ pub fn linkerUpdateNav(pt: Zcu.PerThread, nav_index: InternPool.Nav.Index) error
lf.updateNav(pt, nav_index) catch |err| switch (err) {
error.OutOfMemory => return error.OutOfMemory,
error.CodegenFail => assert(zcu.failed_codegen.contains(nav_index)),
error.Overflow => {
error.Overflow, error.RelocationNotByteAligned => {
try zcu.failed_codegen.putNoClobber(gpa, nav_index, try Zcu.ErrorMsg.create(
gpa,
zcu.navSrcLoc(nav_index),

View File

@ -1768,8 +1768,15 @@ fn finishAirBookkeeping(func: *Func) void {
fn finishAirResult(func: *Func, inst: Air.Inst.Index, result: MCValue) void {
if (func.liveness.isUnused(inst)) switch (result) {
.none, .dead, .unreach => {},
else => unreachable, // Why didn't the result die?
// Why didn't the result die?
.register => |r| if (r != .zero) unreachable,
else => unreachable,
} else {
switch (result) {
.register => |r| if (r == .zero) unreachable, // Why did we discard a used result?
else => {},
}
tracking_log.debug("%{d} => {} (birth)", .{ inst, result });
func.inst_tracking.putAssumeCapacityNoClobber(inst, InstTracking.init(result));
// In some cases, an operand may be reused as the result.
@ -7728,9 +7735,12 @@ fn airAtomicLoad(func: *Func, inst: Air.Inst.Index) !void {
const ptr_mcv = try func.resolveInst(atomic_load.ptr);
const bit_size = elem_ty.bitSize(zcu);
if (bit_size > 64) return func.fail("TODO: airAtomicStore > 64 bits", .{});
if (bit_size > 64) return func.fail("TODO: airAtomicLoad > 64 bits", .{});
const result_mcv = try func.allocRegOrMem(elem_ty, inst, true);
const result_mcv: MCValue = if (func.liveness.isUnused(inst))
.{ .register = .zero }
else
try func.allocRegOrMem(elem_ty, inst, true);
assert(result_mcv == .register); // should be less than 8 bytes
if (order == .seq_cst) {
@ -7746,11 +7756,10 @@ fn airAtomicLoad(func: *Func, inst: Air.Inst.Index) !void {
try func.load(result_mcv, ptr_mcv, ptr_ty);
switch (order) {
// Don't guarnetee other memory operations to be ordered after the load.
.unordered => {},
.monotonic => {},
// Make sure all previous reads happen before any reading or writing accurs.
.seq_cst, .acquire => {
// Don't guarantee other memory operations to be ordered after the load.
.unordered, .monotonic => {},
// Make sure all previous reads happen before any reading or writing occurs.
.acquire, .seq_cst => {
_ = try func.addInst(.{
.tag = .fence,
.data = .{ .fence = .{
@ -7792,6 +7801,17 @@ fn airAtomicStore(func: *Func, inst: Air.Inst.Index, order: std.builtin.AtomicOr
}
try func.store(ptr_mcv, val_mcv, ptr_ty);
if (order == .seq_cst) {
_ = try func.addInst(.{
.tag = .fence,
.data = .{ .fence = .{
.pred = .rw,
.succ = .rw,
} },
});
}
return func.finishAir(inst, .unreach, .{ bin_op.lhs, bin_op.rhs, .none });
}
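
User-level restatement of what the added fence guarantees, as a hedged sketch: after a seq_cst store, no later memory operation may be reordered before it; on riscv64 the backend now enforces that with a trailing `fence rw, rw`.

const std = @import("std");

var cell: u32 = 0;

test "seq_cst store orders subsequent accesses" {
    // On riscv64 this store is now followed by `fence rw, rw`.
    @atomicStore(u32, &cell, 7, .seq_cst);
    try std.testing.expectEqual(@as(u32, 7), @atomicLoad(u32, &cell, .seq_cst));
}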

View File

@ -1398,11 +1398,22 @@ fn resolveCallingConventionValues(
},
.wasm_mvp => {
for (fn_info.param_types.get(ip)) |ty| {
const ty_classes = abi.classifyType(Type.fromInterned(ty), zcu);
for (ty_classes) |class| {
if (class == .none) continue;
try args.append(.{ .local = .{ .value = result.local_index, .references = 1 } });
result.local_index += 1;
if (!Type.fromInterned(ty).hasRuntimeBitsIgnoreComptime(zcu)) {
continue;
}
switch (abi.classifyType(.fromInterned(ty), zcu)) {
.direct => |scalar_ty| if (!abi.lowerAsDoubleI64(scalar_ty, zcu)) {
try args.append(.{ .local = .{ .value = result.local_index, .references = 1 } });
result.local_index += 1;
} else {
try args.append(.{ .local = .{ .value = result.local_index, .references = 1 } });
try args.append(.{ .local = .{ .value = result.local_index + 1, .references = 1 } });
result.local_index += 2;
},
.indirect => {
try args.append(.{ .local = .{ .value = result.local_index, .references = 1 } });
result.local_index += 1;
},
}
}
},
@ -1418,14 +1429,13 @@ pub fn firstParamSRet(
zcu: *const Zcu,
target: *const std.Target,
) bool {
if (!return_type.hasRuntimeBitsIgnoreComptime(zcu)) return false;
switch (cc) {
.@"inline" => unreachable,
.auto => return isByRef(return_type, zcu, target),
.wasm_mvp => {
const ty_classes = abi.classifyType(return_type, zcu);
if (ty_classes[0] == .indirect) return true;
if (ty_classes[0] == .direct and ty_classes[1] == .direct) return true;
return false;
.wasm_mvp => switch (abi.classifyType(return_type, zcu)) {
.direct => |scalar_ty| return abi.lowerAsDoubleI64(scalar_ty, zcu),
.indirect => return true,
},
else => return false,
}
@ -1439,26 +1449,19 @@ fn lowerArg(cg: *CodeGen, cc: std.builtin.CallingConvention, ty: Type, value: WV
}
const zcu = cg.pt.zcu;
const ty_classes = abi.classifyType(ty, zcu);
assert(ty_classes[0] != .none);
switch (ty.zigTypeTag(zcu)) {
.@"struct", .@"union" => {
if (ty_classes[0] == .indirect) {
switch (abi.classifyType(ty, zcu)) {
.direct => |scalar_type| if (!abi.lowerAsDoubleI64(scalar_type, zcu)) {
if (!isByRef(ty, zcu, cg.target)) {
return cg.lowerToStack(value);
} else {
switch (value) {
.nav_ref, .stack_offset => _ = try cg.load(value, scalar_type, 0),
.dead => unreachable,
else => try cg.emitWValue(value),
}
}
assert(ty_classes[0] == .direct);
const scalar_type = abi.scalarType(ty, zcu);
switch (value) {
.nav_ref, .stack_offset => _ = try cg.load(value, scalar_type, 0),
.dead => unreachable,
else => try cg.emitWValue(value),
}
},
.int, .float => {
if (ty_classes[1] == .none) {
return cg.lowerToStack(value);
}
assert(ty_classes[0] == .direct and ty_classes[1] == .direct);
} else {
assert(ty.abiSize(zcu) == 16);
// in this case we have an integer or float that must be lowered as 2 i64's.
try cg.emitWValue(value);
@ -1466,7 +1469,7 @@ fn lowerArg(cg: *CodeGen, cc: std.builtin.CallingConvention, ty: Type, value: WV
try cg.emitWValue(value);
try cg.addMemArg(.i64_load, .{ .offset = value.offset() + 8, .alignment = 8 });
},
else => return cg.lowerToStack(value),
.indirect => return cg.lowerToStack(value),
}
}
@ -2125,23 +2128,16 @@ fn airRet(cg: *CodeGen, inst: Air.Inst.Index) InnerError!void {
if (cg.return_value != .none) {
try cg.store(cg.return_value, operand, ret_ty, 0);
} else if (fn_info.cc == .wasm_mvp and ret_ty.hasRuntimeBitsIgnoreComptime(zcu)) {
switch (ret_ty.zigTypeTag(zcu)) {
// Aggregate types can be lowered as a singular value
.@"struct", .@"union" => {
const scalar_type = abi.scalarType(ret_ty, zcu);
try cg.emitWValue(operand);
const opcode = buildOpcode(.{
.op = .load,
.width = @as(u8, @intCast(scalar_type.abiSize(zcu) * 8)),
.signedness = if (scalar_type.isSignedInt(zcu)) .signed else .unsigned,
.valtype1 = typeToValtype(scalar_type, zcu, cg.target),
});
try cg.addMemArg(Mir.Inst.Tag.fromOpcode(opcode), .{
.offset = operand.offset(),
.alignment = @intCast(scalar_type.abiAlignment(zcu).toByteUnits().?),
});
switch (abi.classifyType(ret_ty, zcu)) {
.direct => |scalar_type| {
assert(!abi.lowerAsDoubleI64(scalar_type, zcu));
if (!isByRef(ret_ty, zcu, cg.target)) {
try cg.emitWValue(operand);
} else {
_ = try cg.load(operand, scalar_type, 0);
}
},
else => try cg.emitWValue(operand),
.indirect => unreachable,
}
} else {
if (!ret_ty.hasRuntimeBitsIgnoreComptime(zcu) and ret_ty.isError(zcu)) {
@ -2267,14 +2263,24 @@ fn airCall(cg: *CodeGen, inst: Air.Inst.Index, modifier: std.builtin.CallModifie
break :result_value .none;
} else if (first_param_sret) {
break :result_value sret;
// TODO: Make this less fragile and optimize
} else if (zcu.typeToFunc(fn_ty).?.cc == .wasm_mvp and ret_ty.zigTypeTag(zcu) == .@"struct" or ret_ty.zigTypeTag(zcu) == .@"union") {
const result_local = try cg.allocLocal(ret_ty);
try cg.addLocal(.local_set, result_local.local.value);
const scalar_type = abi.scalarType(ret_ty, zcu);
const result = try cg.allocStack(scalar_type);
try cg.store(result, result_local, scalar_type, 0);
break :result_value result;
} else if (zcu.typeToFunc(fn_ty).?.cc == .wasm_mvp) {
switch (abi.classifyType(ret_ty, zcu)) {
.direct => |scalar_type| {
assert(!abi.lowerAsDoubleI64(scalar_type, zcu));
if (!isByRef(ret_ty, zcu, cg.target)) {
const result_local = try cg.allocLocal(ret_ty);
try cg.addLocal(.local_set, result_local.local.value);
break :result_value result_local;
} else {
const result_local = try cg.allocLocal(ret_ty);
try cg.addLocal(.local_set, result_local.local.value);
const result = try cg.allocStack(ret_ty);
try cg.store(result, result_local, scalar_type, 0);
break :result_value result;
}
},
.indirect => unreachable,
}
} else {
const result_local = try cg.allocLocal(ret_ty);
try cg.addLocal(.local_set, result_local.local.value);
@ -2547,26 +2553,17 @@ fn airArg(cg: *CodeGen, inst: Air.Inst.Index) InnerError!void {
const cc = zcu.typeToFunc(zcu.navValue(cg.owner_nav).typeOf(zcu)).?.cc;
const arg_ty = cg.typeOfIndex(inst);
if (cc == .wasm_mvp) {
const arg_classes = abi.classifyType(arg_ty, zcu);
for (arg_classes) |class| {
if (class != .none) {
switch (abi.classifyType(arg_ty, zcu)) {
.direct => |scalar_ty| if (!abi.lowerAsDoubleI64(scalar_ty, zcu)) {
cg.arg_index += 1;
}
}
// When we have an argument that's passed using more than a single parameter,
// we combine them into a single stack value
if (arg_classes[0] == .direct and arg_classes[1] == .direct) {
if (arg_ty.zigTypeTag(zcu) != .int and arg_ty.zigTypeTag(zcu) != .float) {
return cg.fail(
"TODO: Implement C-ABI argument for type '{}'",
.{arg_ty.fmt(pt)},
);
}
const result = try cg.allocStack(arg_ty);
try cg.store(result, arg, Type.u64, 0);
try cg.store(result, cg.args[arg_index + 1], Type.u64, 8);
return cg.finishAir(inst, result, &.{});
} else {
cg.arg_index += 2;
const result = try cg.allocStack(arg_ty);
try cg.store(result, arg, Type.u64, 0);
try cg.store(result, cg.args[arg_index + 1], Type.u64, 8);
return cg.finishAir(inst, result, &.{});
},
.indirect => cg.arg_index += 1,
}
} else {
cg.arg_index += 1;

View File

@ -13,70 +13,55 @@ const Zcu = @import("../../Zcu.zig");
/// Defines how to pass a type as part of a function signature,
/// both for parameters as well as return values.
pub const Class = enum { direct, indirect, none };
const none: [2]Class = .{ .none, .none };
const memory: [2]Class = .{ .indirect, .none };
const direct: [2]Class = .{ .direct, .none };
pub const Class = union(enum) {
direct: Type,
indirect,
};
/// Classifies a given Zig type to determine how it must be passed
/// or returned as a value within a wasm function.
/// When all elements result in `.none`, no value must be passed in or returned.
pub fn classifyType(ty: Type, zcu: *const Zcu) [2]Class {
pub fn classifyType(ty: Type, zcu: *const Zcu) Class {
const ip = &zcu.intern_pool;
const target = zcu.getTarget();
if (!ty.hasRuntimeBitsIgnoreComptime(zcu)) return none;
assert(ty.hasRuntimeBitsIgnoreComptime(zcu));
switch (ty.zigTypeTag(zcu)) {
.int, .@"enum", .error_set => return .{ .direct = ty },
.float => return .{ .direct = ty },
.bool => return .{ .direct = ty },
.vector => return .{ .direct = ty },
.array => return .indirect,
.optional => {
assert(ty.isPtrLikeOptional(zcu));
return .{ .direct = ty };
},
.pointer => {
assert(!ty.isSlice(zcu));
return .{ .direct = ty };
},
.@"struct" => {
const struct_type = zcu.typeToStruct(ty).?;
if (struct_type.layout == .@"packed") {
if (ty.bitSize(zcu) <= 64) return direct;
return .{ .direct, .direct };
return .{ .direct = ty };
}
if (struct_type.field_types.len > 1) {
// The struct type is non-scalar.
return memory;
return .indirect;
}
const field_ty = Type.fromInterned(struct_type.field_types.get(ip)[0]);
const explicit_align = struct_type.fieldAlign(ip, 0);
if (explicit_align != .none) {
if (explicit_align.compareStrict(.gt, field_ty.abiAlignment(zcu)))
return memory;
return .indirect;
}
return classifyType(field_ty, zcu);
},
.int, .@"enum", .error_set => {
const int_bits = ty.intInfo(zcu).bits;
if (int_bits <= 64) return direct;
if (int_bits <= 128) return .{ .direct, .direct };
return memory;
},
.float => {
const float_bits = ty.floatBits(target);
if (float_bits <= 64) return direct;
if (float_bits <= 128) return .{ .direct, .direct };
return memory;
},
.bool => return direct,
.vector => return direct,
.array => return memory,
.optional => {
assert(ty.isPtrLikeOptional(zcu));
return direct;
},
.pointer => {
assert(!ty.isSlice(zcu));
return direct;
},
.@"union" => {
const union_obj = zcu.typeToUnion(ty).?;
if (union_obj.flagsUnordered(ip).layout == .@"packed") {
if (ty.bitSize(zcu) <= 64) return direct;
return .{ .direct, .direct };
return .{ .direct = ty };
}
const layout = ty.unionGetLayout(zcu);
assert(layout.tag_size == 0);
if (union_obj.field_types.len > 1) return memory;
if (union_obj.field_types.len > 1) return .indirect;
const first_field_ty = Type.fromInterned(union_obj.field_types.get(ip)[0]);
return classifyType(first_field_ty, zcu);
},
@ -97,32 +82,6 @@ pub fn classifyType(ty: Type, zcu: *const Zcu) [2]Class {
}
}
/// Returns the scalar type a given type can represent.
/// Asserts given type can be represented as scalar, such as
/// a struct with a single scalar field.
pub fn scalarType(ty: Type, zcu: *Zcu) Type {
const ip = &zcu.intern_pool;
switch (ty.zigTypeTag(zcu)) {
.@"struct" => {
if (zcu.typeToPackedStruct(ty)) |packed_struct| {
return scalarType(Type.fromInterned(packed_struct.backingIntTypeUnordered(ip)), zcu);
} else {
assert(ty.structFieldCount(zcu) == 1);
return scalarType(ty.fieldType(0, zcu), zcu);
}
},
.@"union" => {
const union_obj = zcu.typeToUnion(ty).?;
if (union_obj.flagsUnordered(ip).layout != .@"packed") {
const layout = Type.getUnionLayout(union_obj, zcu);
if (layout.payload_size == 0 and layout.tag_size != 0) {
return scalarType(ty.unionTagTypeSafety(zcu).?, zcu);
}
assert(union_obj.field_types.len == 1);
}
const first_field_ty = Type.fromInterned(union_obj.field_types.get(ip)[0]);
return scalarType(first_field_ty, zcu);
},
else => return ty,
}
pub fn lowerAsDoubleI64(scalar_ty: Type, zcu: *const Zcu) bool {
return scalar_ty.bitSize(zcu) > 64;
}
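
Condensed illustration of how callers consume the new tagged-union `Class` (a hypothetical helper, not the compiler's API): a direct scalar occupies one wasm parameter unless it is wider than 64 bits, in which case the lowerAsDoubleI64 rule splits it into two i64 values; indirect types pass a single pointer.

const std = @import("std");

const Class = union(enum) { direct: u16, indirect }; // payload: scalar bit width (illustrative)

fn paramSlots(class: Class) u8 {
    return switch (class) {
        .direct => |bits| if (bits > 64) 2 else 1, // analogue of lowerAsDoubleI64
        .indirect => 1, // a single pointer argument
    };
}

test paramSlots {
    try std.testing.expectEqual(@as(u8, 2), paramSlots(.{ .direct = 128 }));
    try std.testing.expectEqual(@as(u8, 1), paramSlots(.{ .direct = 32 }));
    try std.testing.expectEqual(@as(u8, 1), paramSlots(.indirect));
}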

View File

@ -88110,12 +88110,15 @@ fn airStore(self: *CodeGen, inst: Air.Inst.Index, safety: bool) !void {
const reg_locks = self.register_manager.lockRegsAssumeUnused(3, .{ .rdi, .rsi, .rcx });
defer for (reg_locks) |lock| self.register_manager.unlockReg(lock);
const ptr_ty = self.typeOf(bin_op.lhs);
const ptr_info = ptr_ty.ptrInfo(zcu);
const is_packed = ptr_info.flags.vector_index != .none or ptr_info.packed_offset.host_size > 0;
if (is_packed) try self.spillEflagsIfOccupied();
const src_mcv = try self.resolveInst(bin_op.rhs);
const ptr_mcv = try self.resolveInst(bin_op.lhs);
const ptr_ty = self.typeOf(bin_op.lhs);
const ptr_info = ptr_ty.ptrInfo(zcu);
if (ptr_info.flags.vector_index != .none or ptr_info.packed_offset.host_size > 0) {
if (is_packed) {
try self.packedStore(ptr_ty, ptr_mcv, src_mcv);
} else {
try self.store(ptr_ty, ptr_mcv, src_mcv, .{ .safety = safety });
@ -97114,23 +97117,29 @@ fn airAtomicRmw(self: *CodeGen, inst: Air.Inst.Index) !void {
fn airAtomicLoad(self: *CodeGen, inst: Air.Inst.Index) !void {
const atomic_load = self.air.instructions.items(.data)[@intFromEnum(inst)].atomic_load;
const result: MCValue = result: {
const ptr_ty = self.typeOf(atomic_load.ptr);
const ptr_mcv = try self.resolveInst(atomic_load.ptr);
const ptr_lock = switch (ptr_mcv) {
.register => |reg| self.register_manager.lockRegAssumeUnused(reg),
else => null,
};
defer if (ptr_lock) |lock| self.register_manager.unlockReg(lock);
const ptr_ty = self.typeOf(atomic_load.ptr);
const ptr_mcv = try self.resolveInst(atomic_load.ptr);
const ptr_lock = switch (ptr_mcv) {
.register => |reg| self.register_manager.lockRegAssumeUnused(reg),
else => null,
};
defer if (ptr_lock) |lock| self.register_manager.unlockReg(lock);
const unused = self.liveness.isUnused(inst);
const dst_mcv =
if (self.reuseOperand(inst, atomic_load.ptr, 0, ptr_mcv))
const dst_mcv: MCValue = if (unused)
.{ .register = try self.register_manager.allocReg(null, self.regSetForType(ptr_ty.childType(self.pt.zcu))) }
else if (self.reuseOperand(inst, atomic_load.ptr, 0, ptr_mcv))
ptr_mcv
else
try self.allocRegOrMem(inst, true);
try self.load(dst_mcv, ptr_ty, ptr_mcv);
return self.finishAir(inst, dst_mcv, .{ atomic_load.ptr, .none, .none });
try self.load(dst_mcv, ptr_ty, ptr_mcv);
break :result if (unused) .unreach else dst_mcv;
};
return self.finishAir(inst, result, .{ atomic_load.ptr, .none, .none });
}
fn airAtomicStore(self: *CodeGen, inst: Air.Inst.Index, order: std.builtin.AtomicOrder) !void {
@ -97909,16 +97918,150 @@ fn airSelect(self: *CodeGen, inst: Air.Inst.Index) !void {
switch (pred_mcv) {
.register => |pred_reg| switch (pred_reg.class()) {
.general_purpose => {},
.sse => if (need_xmm0 and pred_reg.id() != comptime Register.xmm0.id()) {
try self.register_manager.getKnownReg(.xmm0, null);
try self.genSetReg(.xmm0, pred_ty, pred_mcv, .{});
break :mask .xmm0;
} else break :mask if (has_blend)
pred_reg
.sse => if (elem_ty.toIntern() == .bool_type)
if (need_xmm0 and pred_reg.id() != comptime Register.xmm0.id()) {
try self.register_manager.getKnownReg(.xmm0, null);
try self.genSetReg(.xmm0, pred_ty, pred_mcv, .{});
break :mask .xmm0;
} else break :mask if (has_blend)
pred_reg
else
try self.copyToTmpRegister(pred_ty, pred_mcv)
else
try self.copyToTmpRegister(pred_ty, pred_mcv),
return self.fail("TODO implement airSelect for {}", .{ty.fmt(pt)}),
else => unreachable,
},
.register_mask => |pred_reg_mask| {
if (pred_reg_mask.info.scalar.bitSize(self.target) != 8 * elem_abi_size)
return self.fail("TODO implement airSelect for {}", .{ty.fmt(pt)});
const mask_reg: Register = if (need_xmm0 and pred_reg_mask.reg.id() != comptime Register.xmm0.id()) mask_reg: {
try self.register_manager.getKnownReg(.xmm0, null);
try self.genSetReg(.xmm0, ty, .{ .register = pred_reg_mask.reg }, .{});
break :mask_reg .xmm0;
} else pred_reg_mask.reg;
const mask_alias = registerAlias(mask_reg, abi_size);
const mask_lock = self.register_manager.lockRegAssumeUnused(mask_reg);
defer self.register_manager.unlockReg(mask_lock);
const lhs_mcv = try self.resolveInst(extra.lhs);
const lhs_lock = switch (lhs_mcv) {
.register => |lhs_reg| self.register_manager.lockRegAssumeUnused(lhs_reg),
else => null,
};
defer if (lhs_lock) |lock| self.register_manager.unlockReg(lock);
const rhs_mcv = try self.resolveInst(extra.rhs);
const rhs_lock = switch (rhs_mcv) {
.register => |rhs_reg| self.register_manager.lockReg(rhs_reg),
else => null,
};
defer if (rhs_lock) |lock| self.register_manager.unlockReg(lock);
const order = has_blend != pred_reg_mask.info.inverted;
const reuse_mcv, const other_mcv = if (order)
.{ rhs_mcv, lhs_mcv }
else
.{ lhs_mcv, rhs_mcv };
const dst_mcv: MCValue = if (reuse_mcv.isRegister() and self.reuseOperand(
inst,
if (order) extra.rhs else extra.lhs,
@intFromBool(order),
reuse_mcv,
)) reuse_mcv else if (has_avx)
.{ .register = try self.register_manager.allocReg(inst, abi.RegisterClass.sse) }
else
try self.copyToRegisterWithInstTracking(inst, ty, reuse_mcv);
const dst_reg = dst_mcv.getReg().?;
const dst_alias = registerAlias(dst_reg, abi_size);
const dst_lock = self.register_manager.lockReg(dst_reg);
defer if (dst_lock) |lock| self.register_manager.unlockReg(lock);
const mir_tag = @as(?Mir.Inst.FixedTag, if ((pred_reg_mask.info.kind == .all and
elem_ty.toIntern() != .f32_type and elem_ty.toIntern() != .f64_type) or pred_reg_mask.info.scalar == .byte)
if (has_avx)
.{ .vp_b, .blendv }
else if (has_blend)
.{ .p_b, .blendv }
else if (pred_reg_mask.info.kind == .all)
.{ .p_, undefined }
else
null
else if ((pred_reg_mask.info.kind == .all and (elem_ty.toIntern() != .f64_type or !self.hasFeature(.sse2))) or
pred_reg_mask.info.scalar == .dword)
if (has_avx)
.{ .v_ps, .blendv }
else if (has_blend)
.{ ._ps, .blendv }
else if (pred_reg_mask.info.kind == .all)
.{ ._ps, undefined }
else
null
else if (pred_reg_mask.info.kind == .all or pred_reg_mask.info.scalar == .qword)
if (has_avx)
.{ .v_pd, .blendv }
else if (has_blend)
.{ ._pd, .blendv }
else if (pred_reg_mask.info.kind == .all)
.{ ._pd, undefined }
else
null
else
null) orelse return self.fail("TODO implement airSelect for {}", .{ty.fmt(pt)});
if (has_avx) {
const rhs_alias = if (reuse_mcv.isRegister())
registerAlias(reuse_mcv.getReg().?, abi_size)
else rhs: {
try self.genSetReg(dst_reg, ty, reuse_mcv, .{});
break :rhs dst_alias;
};
if (other_mcv.isBase()) try self.asmRegisterRegisterMemoryRegister(
mir_tag,
dst_alias,
rhs_alias,
try other_mcv.mem(self, .{ .size = self.memSize(ty) }),
mask_alias,
) else try self.asmRegisterRegisterRegisterRegister(
mir_tag,
dst_alias,
rhs_alias,
registerAlias(if (other_mcv.isRegister())
other_mcv.getReg().?
else
try self.copyToTmpRegister(ty, other_mcv), abi_size),
mask_alias,
);
} else if (has_blend) if (other_mcv.isBase()) try self.asmRegisterMemoryRegister(
mir_tag,
dst_alias,
try other_mcv.mem(self, .{ .size = self.memSize(ty) }),
mask_alias,
) else try self.asmRegisterRegisterRegister(
mir_tag,
dst_alias,
registerAlias(if (other_mcv.isRegister())
other_mcv.getReg().?
else
try self.copyToTmpRegister(ty, other_mcv), abi_size),
mask_alias,
) else {
try self.asmRegisterRegister(.{ mir_tag[0], .@"and" }, dst_alias, mask_alias);
if (other_mcv.isBase()) try self.asmRegisterMemory(
.{ mir_tag[0], .andn },
mask_alias,
try other_mcv.mem(self, .{ .size = .fromSize(abi_size) }),
) else try self.asmRegisterRegister(
.{ mir_tag[0], .andn },
mask_alias,
if (other_mcv.isRegister())
other_mcv.getReg().?
else
try self.copyToTmpRegister(ty, other_mcv),
);
try self.asmRegisterRegister(.{ mir_tag[0], .@"or" }, dst_alias, mask_alias);
}
break :result dst_mcv;
},
else => {},
}
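// Illustrative sketch, not part of the diff: when neither AVX nor SSE4.1
// blendv is available, the and/andn/or fallback above computes the select
// via the identity  result = (dst & mask) | (other & ~mask):
//   and  dst,  mask    // keep the lanes of dst chosen by the mask
//   andn mask, other   // mask := ~mask & other, the lanes from other
//   or   dst,  mask    // merge the two halves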
const mask_reg: Register = if (need_xmm0) mask_reg: {
@@ -98121,7 +98264,7 @@ fn airSelect(self: *CodeGen, inst: Air.Inst.Index) !void {
const dst_lock = self.register_manager.lockReg(dst_reg);
defer if (dst_lock) |lock| self.register_manager.unlockReg(lock);
const mir_tag = @as(?Mir.Inst.FixedTag, switch (ty.childType(zcu).zigTypeTag(zcu)) {
const mir_tag = @as(?Mir.Inst.FixedTag, switch (elem_ty.zigTypeTag(zcu)) {
else => null,
.int => switch (abi_size) {
0 => unreachable,
@@ -98137,7 +98280,7 @@ fn airSelect(self: *CodeGen, inst: Air.Inst.Index) !void {
null,
else => null,
},
.float => switch (ty.childType(zcu).floatBits(self.target.*)) {
.float => switch (elem_ty.floatBits(self.target.*)) {
else => unreachable,
16, 80, 128 => null,
32 => switch (vec_len) {
@@ -98191,30 +98334,20 @@ fn airSelect(self: *CodeGen, inst: Air.Inst.Index) !void {
try self.copyToTmpRegister(ty, lhs_mcv), abi_size),
mask_alias,
) else {
const mir_fixes = @as(?Mir.Inst.Fixes, switch (elem_ty.zigTypeTag(zcu)) {
else => null,
.int => .p_,
.float => switch (elem_ty.floatBits(self.target.*)) {
32 => ._ps,
64 => ._pd,
16, 80, 128 => null,
else => unreachable,
},
}) orelse return self.fail("TODO implement airSelect for {}", .{ty.fmt(pt)});
try self.asmRegisterRegister(.{ mir_fixes, .@"and" }, dst_alias, mask_alias);
try self.asmRegisterRegister(.{ mir_tag[0], .@"and" }, dst_alias, mask_alias);
if (rhs_mcv.isBase()) try self.asmRegisterMemory(
.{ mir_fixes, .andn },
.{ mir_tag[0], .andn },
mask_alias,
try rhs_mcv.mem(self, .{ .size = .fromSize(abi_size) }),
) else try self.asmRegisterRegister(
.{ mir_fixes, .andn },
.{ mir_tag[0], .andn },
mask_alias,
if (rhs_mcv.isRegister())
rhs_mcv.getReg().?
else
try self.copyToTmpRegister(ty, rhs_mcv),
);
try self.asmRegisterRegister(.{ mir_fixes, .@"or" }, dst_alias, mask_alias);
try self.asmRegisterRegister(.{ mir_tag[0], .@"or" }, dst_alias, mask_alias);
}
break :result dst_mcv;
};
@@ -100753,11 +100886,11 @@ const Temp = struct {
const new_temp_index = cg.next_temp_index;
cg.temp_type[@intFromEnum(new_temp_index)] = .usize;
cg.next_temp_index = @enumFromInt(@intFromEnum(new_temp_index) + 1);
switch (temp.tracking(cg).short) {
else => |mcv| std.debug.panic("{s}: {}\n", .{ @src().fn_name, mcv }),
const mcv = temp.tracking(cg).short;
switch (mcv) {
else => std.debug.panic("{s}: {}\n", .{ @src().fn_name, mcv }),
.register => |reg| {
const new_reg =
try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
const new_reg = try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
new_temp_index.tracking(cg).* = .init(.{ .register = new_reg });
try cg.asmRegisterMemory(.{ ._, .lea }, new_reg.to64(), .{
.base = .{ .reg = reg.to64() },
@@ -100765,33 +100898,22 @@ const Temp = struct {
});
},
.register_offset => |reg_off| {
const new_reg =
try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
const new_reg = try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
new_temp_index.tracking(cg).* = .init(.{ .register = new_reg });
try cg.asmRegisterMemory(.{ ._, .lea }, new_reg.to64(), .{
.base = .{ .reg = reg_off.reg.to64() },
.mod = .{ .rm = .{ .disp = reg_off.off + off } },
});
},
.load_symbol, .load_frame => {
const new_reg = try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
new_temp_index.tracking(cg).* = .init(.{ .register_offset = .{ .reg = new_reg, .off = off } });
try cg.genSetReg(new_reg, .usize, mcv, .{});
},
.lea_symbol => |sym_off| new_temp_index.tracking(cg).* = .init(.{ .lea_symbol = .{
.sym_index = sym_off.sym_index,
.off = sym_off.off + off,
} }),
.load_frame => |frame_addr| {
const new_reg =
try cg.register_manager.allocReg(new_temp_index.toIndex(), abi.RegisterClass.gp);
new_temp_index.tracking(cg).* = .init(.{ .register_offset = .{
.reg = new_reg,
.off = off,
} });
try cg.asmRegisterMemory(.{ ._, .mov }, new_reg.to64(), .{
.base = .{ .frame = frame_addr.index },
.mod = .{ .rm = .{
.size = .qword,
.disp = frame_addr.off,
} },
});
},
.lea_frame => |frame_addr| new_temp_index.tracking(cg).* = .init(.{ .lea_frame = .{
.index = frame_addr.index,
.off = frame_addr.off + off,
@@ -101061,8 +101183,9 @@ const Temp = struct {
const result_temp: Temp = .{ .index = result_temp_index.toIndex() };
assert(cg.reuseTemp(result_temp.index, first_temp.index, first_temp_tracking));
assert(cg.reuseTemp(result_temp.index, second_temp.index, second_temp_tracking));
cg.temp_type[@intFromEnum(result_temp_index)] = .slice_const_u8;
result_temp_index.tracking(cg).* = .init(result);
cg.temp_type[@intFromEnum(result_temp_index)] = .slice_const_u8;
cg.next_temp_index = @enumFromInt(@intFromEnum(result_temp_index) + 1);
first_temp.* = result_temp;
second_temp.* = result_temp;
}
@@ -101108,7 +101231,8 @@ const Temp = struct {
=> return temp.toRegClass(true, .general_purpose, cg),
.lea_symbol => |sym_off| {
const off = sym_off.off;
if (off == 0) return false;
// hack around linker relocation bugs
if (false and off == 0) return false;
try temp.toOffset(-off, cg);
while (try temp.toRegClass(true, .general_purpose, cg)) {}
try temp.toOffset(off, cg);
@@ -101586,10 +101710,19 @@ const Temp = struct {
.dst_temps = .{ .{ .ref = .src0 }, .unused },
.each = .{ .once = &.{} },
}, .{
.required_features = .{ .fast_imm16, null, null, null },
.src_constraints = .{ .{ .unsigned_int = .word }, .any, .any },
.patterns = &.{
.{ .src = .{ .mut_mem, .none, .none } },
},
.dst_temps = .{ .{ .ref = .src0 }, .unused },
.clobbers = .{ .eflags = true },
.each = .{ .once = &.{
.{ ._, ._, .@"and", .dst0w, .ua(.src0, .add_umax), ._, ._ },
} },
}, .{
.required_features = .{ .fast_imm16, null, null, null },
.src_constraints = .{ .{ .unsigned_int = .word }, .any, .any },
.patterns = &.{
.{ .src = .{ .to_mut_gpr, .none, .none } },
},
.dst_temps = .{ .{ .ref = .src0 }, .unused },
@@ -105711,7 +105844,8 @@ const Temp = struct {
) InnerError!void {
const tomb_bits = cg.liveness.getTombBits(inst);
for (0.., op_refs, op_temps) |op_index, op_ref, op_temp| {
if (op_temp.index != temp.index) try op_temp.die(cg);
if (op_temp.index == temp.index) continue;
if (op_temp.tracking(cg).short != .dead) try op_temp.die(cg);
if (tomb_bits & @as(Liveness.Bpi, 1) << @intCast(op_index) == 0) continue;
if (cg.reused_operands.isSet(op_index)) continue;
try cg.processDeath(op_ref.toIndexAllowNone() orelse continue);
@@ -105730,6 +105864,12 @@ const Temp = struct {
assert(cg.reuseTemp(inst, temp_index.toIndex(), temp_tracking));
},
}
for (0.., op_refs, op_temps) |op_index, op_ref, op_temp| {
if (op_temp.index != temp.index) continue;
if (tomb_bits & @as(Liveness.Bpi, 1) << @intCast(op_index) == 0) continue;
if (cg.reused_operands.isSet(op_index)) continue;
try cg.processDeath(op_ref.toIndexAllowNone() orelse continue);
}
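// Worked example, not part of the diff (hypothetical value): with
// tomb_bits == 0b101, the death bits of operands 0 and 2 are set, so the
// `tomb_bits & 1 << op_index == 0` test above skips only operand 1, and
// processDeath runs for the other two unless they were reused.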
}
fn die(temp: Temp, cg: *CodeGen) InnerError!void {
@@ -105755,7 +105895,8 @@ const Temp = struct {
}
fn isValid(index: Index, cg: *CodeGen) bool {
return index.tracking(cg).short != .dead;
return @intFromEnum(index) < @intFromEnum(cg.next_temp_index) and
index.tracking(cg).short != .dead;
}
fn typeOf(index: Index, cg: *CodeGen) Type {
@@ -106887,10 +107028,17 @@ const Select = struct {
},
.frame => |frame_index| .{ try cg.tempInit(spec.type, .{ .load_frame = .{ .index = frame_index } }), true },
.lazy_symbol => |lazy_symbol_spec| {
const ip = &pt.zcu.intern_pool;
const ty = if (lazy_symbol_spec.ref == .none) spec.type else lazy_symbol_spec.ref.typeOf(s);
const lazy_symbol: link.File.LazySymbol = .{
.kind = lazy_symbol_spec.kind,
.ty = ty.toIntern(),
.ty = switch (ip.indexToKey(ty.toIntern())) {
.inferred_error_set_type => |func_index| switch (ip.funcIesResolvedUnordered(func_index)) {
.none => unreachable, // unresolved inferred error set
else => |ty_index| ty_index,
},
else => ty.toIntern(),
},
};
return .{ try cg.tempInit(.usize, .{ .lea_symbol = .{
.sym_index = if (cg.bin_file.cast(.elf)) |elf_file|


@@ -1029,8 +1029,8 @@ const mnemonic_to_encodings_map = init: {
storage_i += value.len;
}
var mnemonic_i: [mnemonic_count]usize = @splat(0);
const ops_len = @typeInfo(std.meta.FieldType(Data, .ops)).array.len;
const opc_len = @typeInfo(std.meta.FieldType(Data, .opc)).array.len;
const ops_len = @typeInfo(@FieldType(Data, "ops")).array.len;
const opc_len = @typeInfo(@FieldType(Data, "opc")).array.len;
for (encodings) |entry| {
const i = &mnemonic_i[@intFromEnum(entry[0])];
mnemonic_map[@intFromEnum(entry[0])][i.*] = .{


@@ -23,10 +23,7 @@ const Zir = std.zig.Zir;
const Alignment = InternPool.Alignment;
const dev = @import("dev.zig");
pub const CodeGenError = error{
OutOfMemory,
/// Compiler was asked to operate on a number larger than supported.
Overflow,
pub const CodeGenError = GenerateSymbolError || error{
/// Indicates the error is already stored in Zcu `failed_codegen`.
CodegenFail,
};
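// Illustrative expansion, not part of the diff: `||` merges error sets, so
// the new definition is equivalent to writing the members out:
//   pub const CodeGenError = error{
//       OutOfMemory,
//       Overflow,
//       RelocationNotByteAligned,
//       CodegenFail,
//   };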
@@ -177,6 +174,8 @@ pub const GenerateSymbolError = error{
OutOfMemory,
/// Compiler was asked to operate on a number larger than supported.
Overflow,
/// Compiler was asked to produce a non-byte-aligned relocation.
RelocationNotByteAligned,
};
pub fn generateSymbol(
@@ -481,12 +480,18 @@ pub fn generateSymbol(
// pointer may point to a decl which must be marked used
// but can also result in a relocation. Therefore we handle those separately.
if (Type.fromInterned(field_ty).zigTypeTag(zcu) == .pointer) {
const field_size = math.cast(usize, Type.fromInterned(field_ty).abiSize(zcu)) orelse
return error.Overflow;
var tmp_list = try std.ArrayListUnmanaged(u8).initCapacity(gpa, field_size);
defer tmp_list.deinit(gpa);
try generateSymbol(bin_file, pt, src_loc, Value.fromInterned(field_val), &tmp_list, reloc_parent);
@memcpy(code.items[current_pos..][0..tmp_list.items.len], tmp_list.items);
const field_offset = std.math.divExact(u16, bits, 8) catch |err| switch (err) {
error.DivisionByZero => unreachable,
error.UnexpectedRemainder => return error.RelocationNotByteAligned,
};
code.items.len = current_pos + field_offset;
// TODO: code.lockPointers();
defer {
assert(code.items.len == current_pos + field_offset + @divExact(target.ptrBitWidth(), 8));
// TODO: code.unlockPointers();
code.items.len = current_pos + abi_size;
}
try generateSymbol(bin_file, pt, src_loc, Value.fromInterned(field_val), code, reloc_parent);
} else {
Value.fromInterned(field_val).writeToPackedMemory(Type.fromInterned(field_ty), pt, code.items[current_pos..], bits) catch unreachable;
}
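// Worked example, not part of the diff (assumed bit offsets): a pointer
// field starting at bit 16 of a packed struct gives
// std.math.divExact(u16, 16, 8) == 2, a valid byte offset; at bit 12 the
// call returns error.UnexpectedRemainder, which the code above reports as
// error.RelocationNotByteAligned.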


@@ -611,7 +611,7 @@ pub const Function = struct {
const a = try Assignment.start(f, writer, ctype);
try f.writeCValue(writer, dst, .Other);
try a.assign(f, writer);
try f.writeCValue(writer, src, .Initializer);
try f.writeCValue(writer, src, .Other);
try a.end(f, writer);
}
@@ -2826,7 +2826,7 @@ pub fn genLazyFn(o: *Object, lazy_ctype_pool: *const CType.Pool, lazy_fn: LazyFn
});
try o.dg.renderTypeAndName(w, name_ty, .{ .identifier = "name" }, Const, .none, .complete);
try w.writeAll(" = ");
try o.dg.renderValue(w, Value.fromInterned(name_val), .Initializer);
try o.dg.renderValue(w, Value.fromInterned(name_val), .StaticInitializer);
try w.writeAll(";\n return (");
try o.dg.renderType(w, name_slice_ty);
try w.print("){{{}, {}}};\n", .{
@@ -4044,7 +4044,7 @@ fn airStore(f: *Function, inst: Air.Inst.Index, safety: bool) !CValue {
const new_local = try f.allocLocal(inst, src_ty);
try f.writeCValue(writer, new_local, .Other);
try writer.writeAll(" = ");
try f.writeCValue(writer, src_val, .Initializer);
try f.writeCValue(writer, src_val, .Other);
try writer.writeAll(";\n");
break :blk new_local;
@@ -4515,7 +4515,7 @@ fn airSlice(f: *Function, inst: Air.Inst.Index) !CValue {
const a = try Assignment.start(f, writer, .usize);
try f.writeCValueMember(writer, local, .{ .identifier = "len" });
try a.assign(f, writer);
try f.writeCValue(writer, len, .Initializer);
try f.writeCValue(writer, len, .Other);
try a.end(f, writer);
}
return local;
@@ -4933,7 +4933,7 @@ fn airSwitchDispatch(f: *Function, inst: Air.Inst.Index) !void {
const cond_local = f.loop_switch_conds.get(br.block_inst).?;
try f.writeCValue(writer, .{ .local = cond_local }, .Other);
try writer.writeAll(" = ");
try f.writeCValue(writer, cond, .Initializer);
try f.writeCValue(writer, cond, .Other);
try writer.writeAll(";\n");
try writer.print("goto zig_switch_{d}_loop;", .{@intFromEnum(br.block_inst)});
}
@@ -4978,14 +4978,8 @@ fn bitcast(f: *Function, dest_ty: Type, operand: CValue, operand_ty: Type) !CVal
const operand_lval = if (operand == .constant) blk: {
const operand_local = try f.allocLocal(null, operand_ty);
try f.writeCValue(writer, operand_local, .Other);
if (operand_ty.isAbiInt(zcu)) {
try writer.writeAll(" = ");
} else {
try writer.writeAll(" = (");
try f.renderType(writer, operand_ty);
try writer.writeByte(')');
}
try f.writeCValue(writer, operand, .Initializer);
try writer.writeAll(" = ");
try f.writeCValue(writer, operand, .Other);
try writer.writeAll(";\n");
break :blk operand_local;
} else operand;
@@ -5697,7 +5691,7 @@ fn airOptionalPayloadPtrSet(f: *Function, inst: Air.Inst.Index) !CValue {
const a = try Assignment.start(f, writer, opt_ctype);
try f.writeCValueDeref(writer, operand);
try a.assign(f, writer);
try f.object.dg.renderValue(writer, Value.false, .Initializer);
try f.object.dg.renderValue(writer, Value.false, .Other);
try a.end(f, writer);
return .none;
},
@@ -5717,7 +5711,7 @@ fn airOptionalPayloadPtrSet(f: *Function, inst: Air.Inst.Index) !CValue {
const a = try Assignment.start(f, writer, opt_ctype);
try f.writeCValueDerefMember(writer, operand, .{ .identifier = "is_null" });
try a.assign(f, writer);
try f.object.dg.renderValue(writer, Value.false, .Initializer);
try f.object.dg.renderValue(writer, Value.false, .Other);
try a.end(f, writer);
}
if (f.liveness.isUnused(inst)) return .none;
@@ -5843,7 +5837,7 @@ fn airFieldParentPtr(f: *Function, inst: Air.Inst.Index) !CValue {
try writer.writeByte(')');
switch (fieldLocation(container_ptr_ty, field_ptr_ty, extra.field_index, pt)) {
.begin => try f.writeCValue(writer, field_ptr_val, .Initializer),
.begin => try f.writeCValue(writer, field_ptr_val, .Other),
.field => |field| {
const u8_ptr_ty = try pt.adjustPtrTypeChild(field_ptr_ty, .u8);
@@ -5897,7 +5891,7 @@ fn fieldPtr(
try writer.writeByte(')');
switch (fieldLocation(container_ptr_ty, field_ptr_ty, field_index, pt)) {
.begin => try f.writeCValue(writer, container_ptr_val, .Initializer),
.begin => try f.writeCValue(writer, container_ptr_val, .Other),
.field => |field| {
try writer.writeByte('&');
try f.writeCValueDerefMember(writer, container_ptr_val, field);
@@ -6020,7 +6014,7 @@ fn airStructFieldVal(f: *Function, inst: Air.Inst.Index) !CValue {
const operand_local = try f.allocLocal(inst, struct_ty);
try f.writeCValue(writer, operand_local, .Other);
try writer.writeAll(" = ");
try f.writeCValue(writer, struct_byval, .Initializer);
try f.writeCValue(writer, struct_byval, .Other);
try writer.writeAll(";\n");
break :blk operand_local;
} else struct_byval;
@@ -6118,7 +6112,7 @@ fn airUnwrapErrUnionPay(f: *Function, inst: Air.Inst.Index, is_ptr: bool) !CValu
try writer.writeAll(" = (");
try f.renderType(writer, inst_ty);
try writer.writeByte(')');
try f.writeCValue(writer, operand, .Initializer);
try f.writeCValue(writer, operand, .Other);
try writer.writeAll(";\n");
return local;
}
@@ -6163,7 +6157,7 @@ fn airWrapOptional(f: *Function, inst: Air.Inst.Index) !CValue {
const a = try Assignment.start(f, writer, operand_ctype);
try f.writeCValueMember(writer, local, .{ .identifier = "payload" });
try a.assign(f, writer);
try f.writeCValue(writer, operand, .Initializer);
try f.writeCValue(writer, operand, .Other);
try a.end(f, writer);
}
return local;
@@ -6364,7 +6358,7 @@ fn airArrayToSlice(f: *Function, inst: Air.Inst.Index) !CValue {
try f.writeCValueMember(writer, local, .{ .identifier = "ptr" });
try a.assign(f, writer);
if (operand == .undef) {
try f.writeCValue(writer, .{ .undef = inst_ty.slicePtrFieldType(zcu) }, .Initializer);
try f.writeCValue(writer, .{ .undef = inst_ty.slicePtrFieldType(zcu) }, .Other);
} else {
const ptr_ctype = try f.ctypeFromType(ptr_ty, .complete);
const ptr_child_ctype = ptr_ctype.info(ctype_pool).pointer.elem_ctype;
@@ -6381,7 +6375,7 @@ fn airArrayToSlice(f: *Function, inst: Air.Inst.Index) !CValue {
try writer.writeByte('&');
try f.writeCValueDeref(writer, operand);
try writer.print("[{}]", .{try f.fmtIntLiteral(try pt.intValue(.usize, 0))});
} else try f.writeCValue(writer, operand, .Initializer);
} else try f.writeCValue(writer, operand, .Other);
}
try a.end(f, writer);
}
@@ -6911,7 +6905,7 @@ fn airMemset(f: *Function, inst: Air.Inst.Index, safety: bool) !CValue {
try writer.writeAll("for (");
try f.writeCValue(writer, index, .Other);
try writer.writeAll(" = ");
try f.object.dg.renderValue(writer, try pt.intValue(.usize, 0), .Initializer);
try f.object.dg.renderValue(writer, try pt.intValue(.usize, 0), .Other);
try writer.writeAll("; ");
try f.writeCValue(writer, index, .Other);
try writer.writeAll(" != ");
@@ -7281,7 +7275,7 @@ fn airReduce(f: *Function, inst: Air.Inst.Index) !CValue {
.float => try pt.floatValue(scalar_ty, std.math.nan(f128)),
else => unreachable,
},
}, .Initializer);
}, .Other);
try writer.writeAll(";\n");
const v = try Vectorize.start(f, inst, writer, operand_ty);
@@ -8175,7 +8169,7 @@ fn formatIntLiteral(
try writer.writeAll(string);
} else {
try data.ctype.renderLiteralPrefix(writer, data.kind, ctype_pool);
wrap.convertToTwosComplement(int, data.int_info.signedness, c_bits);
wrap.truncate(int, .unsigned, c_bits);
@memset(wrap.limbs[wrap.len..], 0);
wrap.len = wrap.limbs.len;
const limbs_per_c_limb = @divExact(wrap.len, c_limb_info.count);
@@ -8207,7 +8201,6 @@ fn formatIntLiteral(
c_limb_int_info.signedness = .signed;
c_limb_ctype = c_limb_info.ctype.toSigned();
c_limb_mut.positive = wrap.positive;
c_limb_mut.truncate(
c_limb_mut.toConst(),
.signed,


@@ -265,7 +265,12 @@ pub fn targetTriple(allocator: Allocator, target: std.Target) ![]const u8 {
.eabihf => "eabihf",
.android => "android",
.androideabi => "androideabi",
.musl => "musl",
.musl => switch (target.os.tag) {
// For WASI/Emscripten, "musl" refers to the libc, not really the ABI.
// "unknown" provides better compatibility with LLVM-based tooling for these targets.
.wasi, .emscripten => "unknown",
else => "musl",
},
.muslabin32 => "musl", // Should be muslabin32 in LLVM 20.
.muslabi64 => "musl", // Should be muslabi64 in LLVM 20.
.musleabi => "musleabi",
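// Illustrative, not part of the diff (assumed target): with this change, a
// wasm32 WASI module built against musl gets "unknown" as the ABI component
// of its LLVM triple rather than "musl", matching what LLVM-based tooling
// conventionally expects for WASI and Emscripten targets.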
@@ -771,6 +776,30 @@ const DataLayoutBuilder = struct {
}
};
// Avoid depending on `llvm.CodeModel` in the bitcode-only case.
const CodeModel = enum {
default,
tiny,
small,
kernel,
medium,
large,
};
fn codeModel(model: std.builtin.CodeModel, target: std.Target) CodeModel {
// Roughly match Clang's mapping of GCC code models to LLVM code models.
return switch (model) {
.default => .default,
.extreme, .large => .large,
.kernel => .kernel,
.medany => if (target.cpu.arch.isRISCV()) .medium else .large,
.medium => if (target.os.tag == .aix) .large else .medium,
.medmid => .medium,
.normal, .medlow, .small => .small,
.tiny => .tiny,
};
}
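// Illustrative usage, not part of the diff (hypothetical targets):
//   codeModel(.medany, riscv64_target) == .medium
//   codeModel(.medany, x86_64_target)  == .large
//   codeModel(.medium, aix_target)     == .large
//   codeModel(.extreme, any_target)    == .large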
pub const Object = struct {
gpa: Allocator,
builder: Builder,
@@ -1135,14 +1164,17 @@ pub const Object = struct {
module_flags.appendAssumeCapacity(try o.builder.metadataModuleFlag(
behavior_error,
try o.builder.metadataString("Code Model"),
try o.builder.metadataConstant(try o.builder.intConst(.i32, @as(i32, switch (comp.root_mod.code_model) {
.tiny => 0,
.small => 1,
.kernel => 2,
.medium => 3,
.large => 4,
else => unreachable,
}))),
try o.builder.metadataConstant(try o.builder.intConst(.i32, @as(
i32,
switch (codeModel(comp.root_mod.code_model, comp.root_mod.resolved_target.result)) {
.default => unreachable,
.tiny => 0,
.small => 1,
.kernel => 2,
.medium => 3,
.large => 4,
},
))),
));
}
@@ -1294,7 +1326,7 @@ pub const Object = struct {
else
.Static;
const code_model: llvm.CodeModel = switch (comp.root_mod.code_model) {
const code_model: llvm.CodeModel = switch (codeModel(comp.root_mod.code_model, comp.root_mod.resolved_target.result)) {
.default => .Default,
.tiny => .Tiny,
.small => .Small,
@@ -1440,8 +1472,10 @@ pub const Object = struct {
_ = try attributes.removeFnAttr(.sanitize_thread);
}
const is_naked = fn_info.cc == .naked;
if (owner_mod.fuzz and !func_analysis.disable_instrumentation and !is_naked) {
try attributes.addFnAttr(.optforfuzzing, &o.builder);
if (!func_analysis.disable_instrumentation and !is_naked) {
if (owner_mod.fuzz) {
try attributes.addFnAttr(.optforfuzzing, &o.builder);
}
_ = try attributes.removeFnAttr(.skipprofile);
_ = try attributes.removeFnAttr(.nosanitize_coverage);
} else {
@@ -1735,7 +1769,12 @@ pub const Object = struct {
try o.used.append(gpa, counters_variable.toConst(&o.builder));
counters_variable.setLinkage(.private, &o.builder);
counters_variable.setAlignment(comptime Builder.Alignment.fromByteUnits(1), &o.builder);
counters_variable.setSection(try o.builder.string("__sancov_cntrs"), &o.builder);
if (target.ofmt == .macho) {
counters_variable.setSection(try o.builder.string("__DATA,__sancov_cntrs"), &o.builder);
} else {
counters_variable.setSection(try o.builder.string("__sancov_cntrs"), &o.builder);
}
break :f .{
.counters_variable = counters_variable,
@@ -1797,7 +1836,11 @@ pub const Object = struct {
pcs_variable.setLinkage(.private, &o.builder);
pcs_variable.setMutability(.constant, &o.builder);
pcs_variable.setAlignment(Type.usize.abiAlignment(zcu).toLlvm(), &o.builder);
pcs_variable.setSection(try o.builder.string("__sancov_pcs1"), &o.builder);
if (target.ofmt == .macho) {
pcs_variable.setSection(try o.builder.string("__DATA,__sancov_pcs1"), &o.builder);
} else {
pcs_variable.setSection(try o.builder.string("__sancov_pcs1"), &o.builder);
}
try pcs_variable.setInitializer(init_val, &o.builder);
}
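// Note, not part of the diff: Mach-O section names are "segment,section"
// pairs, so "__DATA,__sancov_cntrs" and "__DATA,__sancov_pcs1" place the
// SanitizerCoverage counter and PC tables in the __DATA segment, while
// other object formats keep the bare section names.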
@@ -12051,7 +12094,7 @@ fn firstParamSRet(fn_info: InternPool.Key.FuncType, zcu: *Zcu, target: std.Targe
.x86_64_win => x86_64_abi.classifyWindows(return_type, zcu) == .memory,
.x86_sysv, .x86_win => isByRef(return_type, zcu),
.x86_stdcall => !isScalar(zcu, return_type),
.wasm_mvp => wasm_c_abi.classifyType(return_type, zcu)[0] == .indirect,
.wasm_mvp => wasm_c_abi.classifyType(return_type, zcu) == .indirect,
.aarch64_aapcs,
.aarch64_aapcs_darwin,
.aarch64_aapcs_win,
@@ -12136,18 +12179,9 @@ fn lowerFnRetTy(o: *Object, fn_info: InternPool.Key.FuncType) Allocator.Error!Bu
return o.builder.structType(.normal, types[0..types_len]);
},
},
.wasm_mvp => {
if (isScalar(zcu, return_type)) {
return o.lowerType(return_type);
}
const classes = wasm_c_abi.classifyType(return_type, zcu);
if (classes[0] == .indirect or classes[0] == .none) {
return .void;
}
assert(classes[0] == .direct and classes[1] == .none);
const scalar_type = wasm_c_abi.scalarType(return_type, zcu);
return o.builder.intType(@intCast(scalar_type.abiSize(zcu) * 8));
.wasm_mvp => switch (wasm_c_abi.classifyType(return_type, zcu)) {
.direct => |scalar_ty| return o.lowerType(scalar_ty),
.indirect => return .void,
},
// TODO investigate other callconvs
else => return o.lowerType(return_type),
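// Note, not part of the diff: wasm_c_abi.classifyType now returns a tagged
// union instead of a class array, so its call sites above pattern-match on
// .direct (lower as the given scalar type) or .indirect (pass or return
// through memory) rather than inspecting classes[0] and classes[1].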
@@ -12401,17 +12435,28 @@ const ParamTypeIterator = struct {
},
}
},
.wasm_mvp => {
it.zig_index += 1;
it.llvm_index += 1;
if (isScalar(zcu, ty)) {
return .byval;
}
const classes = wasm_c_abi.classifyType(ty, zcu);
if (classes[0] == .indirect) {
.wasm_mvp => switch (wasm_c_abi.classifyType(ty, zcu)) {
.direct => |scalar_ty| {
if (isScalar(zcu, ty)) {
it.zig_index += 1;
it.llvm_index += 1;
return .byval;
} else {
var types_buffer: [8]Builder.Type = undefined;
types_buffer[0] = try it.object.lowerType(scalar_ty);
it.types_buffer = types_buffer;
it.types_len = 1;
it.llvm_index += 1;
it.zig_index += 1;
return .multiple_llvm_types;
}
},
.indirect => {
it.zig_index += 1;
it.llvm_index += 1;
it.byval_attr = true;
return .byref;
}
return .abi_sized_int;
},
},
// TODO investigate other callconvs
else => {


@@ -864,8 +864,8 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
// Example:
// .balign 4
// .globl _Exit_2_2_5
// .type _Exit_2_2_5, %function;
// .symver _Exit_2_2_5, _Exit@@GLIBC_2.2.5
// .type _Exit_2_2_5, %function
// .symver _Exit_2_2_5, _Exit@@GLIBC_2.2.5, remove
// _Exit_2_2_5: .long 0
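// Note, not part of the diff: the trailing ", remove" is GNU as's .symver
// visibility argument; it drops the helper symbol (_Exit_2_2_5 here) from
// the symbol table so the stub library only exports the versioned name
// (_Exit@@GLIBC_2.2.5), not extra unversioned symbols.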
const ver_index = versions_buffer[ver_buf_i];
const ver = metadata.all_versions[ver_index];
@@ -876,19 +876,16 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
const want_default = chosen_def_ver_index != 255 and ver_index == chosen_def_ver_index;
const at_sign_str: []const u8 = if (want_default) "@@" else "@";
if (ver.patch == 0) {
const sym_plus_ver = if (want_default)
sym_name
else
try std.fmt.allocPrint(
arena,
"{s}_GLIBC_{d}_{d}",
.{ sym_name, ver.major, ver.minor },
);
const sym_plus_ver = try std.fmt.allocPrint(
arena,
"{s}_{d}_{d}",
.{ sym_name, ver.major, ver.minor },
);
try stubs_asm.writer().print(
\\.balign {d}
\\.globl {s}
\\.type {s}, %function;
\\.symver {s}, {s}{s}GLIBC_{d}.{d}
\\.type {s}, %function
\\.symver {s}, {s}{s}GLIBC_{d}.{d}, remove
\\{s}: {s} 0
\\
, .{
@@ -904,19 +901,16 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
wordDirective(target),
});
} else {
const sym_plus_ver = if (want_default)
sym_name
else
try std.fmt.allocPrint(
arena,
"{s}_GLIBC_{d}_{d}_{d}",
.{ sym_name, ver.major, ver.minor, ver.patch },
);
const sym_plus_ver = try std.fmt.allocPrint(
arena,
"{s}_{d}_{d}_{d}",
.{ sym_name, ver.major, ver.minor, ver.patch },
);
try stubs_asm.writer().print(
\\.balign {d}
\\.globl {s}
\\.type {s}, %function;
\\.symver {s}, {s}{s}GLIBC_{d}.{d}.{d}
\\.type {s}, %function
\\.symver {s}, {s}{s}GLIBC_{d}.{d}.{d}, remove
\\{s}: {s} 0
\\
, .{
@@ -1041,9 +1035,9 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
// Example:
// .balign 4
// .globl environ_2_2_5
// .type environ_2_2_5, %object;
// .size environ_2_2_5, 4;
// .symver environ_2_2_5, environ@@GLIBC_2.2.5
// .type environ_2_2_5, %object
// .size environ_2_2_5, 4
// .symver environ_2_2_5, environ@@GLIBC_2.2.5, remove
// environ_2_2_5: .fill 4, 1, 0
const ver_index = versions_buffer[ver_buf_i];
const ver = metadata.all_versions[ver_index];
@@ -1054,20 +1048,17 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
const want_default = chosen_def_ver_index != 255 and ver_index == chosen_def_ver_index;
const at_sign_str: []const u8 = if (want_default) "@@" else "@";
if (ver.patch == 0) {
const sym_plus_ver = if (want_default)
sym_name
else
try std.fmt.allocPrint(
arena,
"{s}_GLIBC_{d}_{d}",
.{ sym_name, ver.major, ver.minor },
);
const sym_plus_ver = try std.fmt.allocPrint(
arena,
"{s}_{d}_{d}",
.{ sym_name, ver.major, ver.minor },
);
try stubs_asm.writer().print(
\\.balign {d}
\\.globl {s}
\\.type {s}, %object;
\\.size {s}, {d};
\\.symver {s}, {s}{s}GLIBC_{d}.{d}
\\.type {s}, %object
\\.size {s}, {d}
\\.symver {s}, {s}{s}GLIBC_{d}.{d}, remove
\\{s}: .fill {d}, 1, 0
\\
, .{
@@ -1085,20 +1076,17 @@ pub fn buildSharedObjects(comp: *Compilation, prog_node: std.Progress.Node) anye
size,
});
} else {
const sym_plus_ver = if (want_default)
sym_name
else
try std.fmt.allocPrint(
arena,
"{s}_GLIBC_{d}_{d}_{d}",
.{ sym_name, ver.major, ver.minor, ver.patch },
);
const sym_plus_ver = try std.fmt.allocPrint(
arena,
"{s}_{d}_{d}_{d}",
.{ sym_name, ver.major, ver.minor, ver.patch },
);
try stubs_asm.writer().print(
\\.balign {d}
\\.globl {s}
\\.type {s}, %object;
\\.size {s}, {d};
\\.symver {s}, {s}{s}GLIBC_{d}.{d}.{d}
\\.type {s}, %object
\\.size {s}, {d}
\\.symver {s}, {s}{s}GLIBC_{d}.{d}.{d}, remove
\\{s}: .fill {d}, 1, 0
\\
, .{


@@ -8,13 +8,6 @@ const build_options = @import("build_options");
const trace = @import("tracy.zig").trace;
const Module = @import("Package/Module.zig");
pub const AbiVersion = enum(u2) {
@"1" = 1,
@"2" = 2,
pub const default: AbiVersion = .@"1";
};
const libcxxabi_files = [_][]const u8{
"src/abort_message.cpp",
"src/cxa_aux_runtime.cpp",
@@ -145,11 +138,12 @@ pub fn buildLibCxx(comp: *Compilation, prog_node: std.Progress.Node) BuildError!
const cxxabi_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxxabi", "include" });
const cxx_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxx", "include" });
const cxx_src_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxx", "src" });
const abi_version: u2 = if (target.os.tag == .emscripten) 2 else 1;
const abi_version_arg = try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_VERSION={d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
});
const abi_namespace_arg = try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_NAMESPACE=__{d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
});
const optimize_mode = comp.compilerRtOptMode();
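// Note, not part of the diff: Emscripten ships a libc++ built with ABI
// version 2, so the defines above pin _LIBCPP_ABI_VERSION=2 and
// _LIBCPP_ABI_NAMESPACE=__2 on that target and version 1 everywhere else,
// replacing the removed comp.libcxx_abi_version option.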
@@ -389,11 +383,12 @@ pub fn buildLibCxxAbi(comp: *Compilation, prog_node: std.Progress.Node) BuildErr
const cxxabi_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxxabi", "include" });
const cxx_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxx", "include" });
const cxx_src_include_path = try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libcxx", "src" });
const abi_version: u2 = if (target.os.tag == .emscripten) 2 else 1;
const abi_version_arg = try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_VERSION={d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
});
const abi_namespace_arg = try std.fmt.allocPrint(arena, "-D_LIBCPP_ABI_NAMESPACE=__{d}", .{
@intFromEnum(comp.libcxx_abi_version),
abi_version,
});
const optimize_mode = comp.compilerRtOptMode();


@@ -119,7 +119,7 @@ pub fn buildStaticLib(comp: *Compilation, prog_node: std.Progress.Node) BuildErr
}
try cflags.append("-I");
try cflags.append(try comp.zig_lib_directory.join(arena, &[_][]const u8{ "libunwind", "include" }));
try cflags.append("-D_LIBUNWIND_DISABLE_VISIBILITY_ANNOTATIONS");
try cflags.append("-D_LIBUNWIND_HIDE_SYMBOLS");
try cflags.append("-Wa,--noexecstack");
try cflags.append("-fvisibility=hidden");
try cflags.append("-fvisibility-inlines-hidden");

Some files were not shown because too many files have changed in this diff.