Aside from adding comments to document the logic in `Cache.Manifest.hit`
better, this commit fixes two serious bugs.
The first, spotted by Andrew, is that when upgrading from a shared to an
exclusive lock on the manifest file, we do not seek it back to the
start. This is a simple fix.
The second is more subtle, and has to do with the computation of file
digests. Broadly speaking, the goal of the main loop in `hit` is to
iterate the files listed in the manifest file, and check if they've
changed, based on stat and a file hash. While doing this, the
`bin_digest` field of `std.Build.Cache.File`, which is initially
`undefined`, is populated for all files, either straight from the
manifest (if the stat matches) or recomputed from the file on disk. This
file digest is then used to update `man.hash.hasher`, which is building
the final hash used as, for instance, the output directory name when the
compiler emits into the cache directory. When `hit` returns a cache
miss, it is expected that `man.hash.hasher` includes the digests of all
"initial files"; that is, those which have been already added with e.g.
`addFilePath`, but not those which will later be added with
`addFilePost` (even though the manifest file has told us about some such
files). Previously, `hit` was using the `unhit` function to do this in a
few cases. However, this is incorrect, because `unhit` assumes that all
files already have their `bin_digest` field populated; that function is
only valid to call *after* `hit` returns. Instead, we need to actually
compute the hashes which haven't yet been populated. Even if this logic
had been working, there was still a bug here, because we called `unhit`
when upgrading from a shared to an exclusive lock, hashing the
(potentially `undefined`) file digests, and then the loop hashed the
file digests *again*! All in all, the hashing logic here was incredibly
broken.
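For concreteness, here is a minimal sketch (not the real `Cache.zig`
code) of the per-file invariant described above: `bin_digest` must be
populated, from the manifest or from disk, before it is fed to the
hasher.
```
const std = @import("std");

// Illustrative only; the real logic lives in Cache.Manifest.hit and is
// structured differently. The point is that bin_digest must never reach
// the hasher while still undefined.
fn feedFileDigest(
    hasher: *std.Build.Cache.Hasher,
    file: *std.Build.Cache.File,
    manifest_digest: ?std.Build.Cache.BinDigest, // null unless the stat matched
    on_disk_digest: std.Build.Cache.BinDigest, // recomputed from the file contents
) void {
    file.bin_digest = manifest_digest orelse on_disk_digest;
    hasher.update(&file.bin_digest);
}
```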
I've taken the opportunity to restructure this section of the code into
what I think is a more readable format. A new function,
`hitWithCurrentLock`, uses the open manifest file to try to find a
cache hit. It returns a tagged union which, in the miss case, tells the
caller (`hit`) how many files already have their hash populated. This
avoids redundant work recomputing the same hash multiple times in
situations where the lock needs upgrading. This also eliminates the
outer loop from `hit`, which was a little confusing because it iterated
no more than twice!
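For reference, a rough sketch of the shape of that return type; the
actual tag and field names in `Cache.zig` may differ:
```
// Hypothetical shape of the result of hitWithCurrentLock (names invented).
const HitResult = union(enum) {
    hit,
    miss: struct {
        /// How many files already have bin_digest populated, so the
        /// caller can avoid rehashing them after upgrading the lock.
        populated_file_count: usize,
    },
};
```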
The bugs fixed here could manifest in several different ways depending
on how contended file locks were satisfied. Most notably, on a cache
miss, the Zig compiler might have written the compilation output to the
incorrect directory (because it incorrectly constructed a hash using
`undefined` or repeated file digests), resulting in all future hits on
this manifest causing `error.FileNotFound`. This is #23110. I was able
to reproduce #23110 on `master` and have not been able to reproduce it
after this commit, so I am relatively confident that this commit
resolves that issue.
Resolves: #23110
Linux kernel syscalls expect to be given the size of the sigset they
were built for (`NSIG / 8` bytes), not the full 1024-bit sigsets that
glibc supports.
I audited the other syscalls in here that use `sigset_t` and they're all
using `NSIG / 8`.
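For illustration, a hedged sketch of the size convention (assuming
`std.os.linux` on x86_64; this is not the code changed by this commit):
```
const linux = @import("std").os.linux;

// The kernel rejects rt_sigprocmask unless the size argument equals its
// own sigset size, NSIG / 8 bytes, not glibc's 1024 / 8 = 128 bytes.
fn queryCurrentMask(old: *linux.sigset_t) usize {
    return linux.syscall4(
        .rt_sigprocmask,
        linux.SIG.BLOCK, // `how` is ignored when the new set is null
        0, // null new set: just read the current mask into `old`
        @intFromPtr(old),
        linux.NSIG / 8, // kernel sigset size in bytes
    );
}
```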
Fixes #12715
* If a function prototype is declared inside a function, do not
  translate it to a top-level extern function declaration. As with
  extern local variables, wrap it in a block-local struct instead (see
  the sketch after this list).
* Add a new `extern_local_fn` tag to the aro_translate_c nodes to
  represent an extern local function declaration.
* When a function body contains a C function prototype declaration,
  emit an extern local function declaration for it. Subsequent
  references to the function will resolve to this declaration.
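As a rough illustration of the wrapping (the generated names below are
invented, not translate-c's actual output):
```
// C input, inside a function body:
//     int foo(void);
//     foo();
// Sketch of the translated Zig, with the prototype kept block-local:
fn caller() void {
    const ExternLocal_foo = struct {
        extern fn foo() c_int;
    };
    _ = ExternLocal_foo.foo();
}
```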
This change fixes false-positive cache hits for run steps that are run
with different sets of environment variables, due to the environment
map being excluded from the cache hash.
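A minimal sketch of the idea, assuming `std.Build.Cache.HashHelper` and
`std.process.EnvMap`; the actual `Step.Run` change may differ, and real
code would need to hash entries in a deterministic order:
```
const std = @import("std");

fn hashEnvMap(
    hash: *std.Build.Cache.HashHelper,
    env_map: *const std.process.EnvMap,
) void {
    var it = env_map.iterator();
    while (it.next()) |entry| {
        // Mix both the variable name and its value into the cache hash.
        hash.addBytes(entry.key_ptr.*);
        hash.addBytes(entry.value_ptr.*);
    }
}
```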
Too many bugs have been found with `truncate` at this point, so it was
rewritten from scratch.
Based on the doc comment, the utility of `convertToTwosComplement` over
`r.truncate(a, .unsigned, bit_count)` is unclear, and it has a subtle
behavior difference that is almost certainly a bug, so it was deleted.
When determining the type of an RC compiler, meson passes `/?` or `--version` and then reads from `stdout`, looking for particular string(s) anywhere in the output.
So, by adding the string "Microsoft Resource Compiler" to the `/?` output, meson will recognize `zig rc` as rc.exe and give it the correct options, which works fine since `zig rc` is drop-in CLI compatible with rc.exe.
This allows using `zig rc` with meson for (cross-)compiling, by either:
- Setting WINDRES="zig rc" or putting windres = ['zig', 'rc'] in the cross-file
+ This will work like rc.exe, so it will output .res files. This will only link successfully if you are using a linker that can do .res -> .obj conversion (so something like zig cc, MSVC, lld)
- Setting WINDRES="zig rc /:output-format coff" or putting windres = ['zig', 'rc', '/:output-format', 'coff'] in the cross-file
+ This will make meson pass flags as if it were rc.exe, but it will cause the resulting .res file to actually be a COFF object file, meaning it will work with any linker that handles COFF object files
Example cross file that uses `zig cc` (which can link `.res` files, so `/:output-format coff` is not necessary) and `zig rc`:
```
[binaries]
c = ['zig', 'cc', '--target=x86_64-windows-gnu']
windres = ['zig', 'rc']
[target_machine]
system = 'windows'
cpu_family = 'x86_64'
cpu = 'x86_64'
endian = 'little'
```
The code did one useless thing and two wrong things:
- ref counting was basically a noop
- last_dir_fd was chosen from the wrong index and also under the wrong
condition
This caused regular crashes on macOS, which are now gone.