This reverts commit 40cb712d13aff4bfe83256858ad6b18d82e70211.
Thanks to Ava & Luna of Lavatech, we don't need to resort to this: they
have graciously given zig a SourceHut instance to use that provides 8 GB
of RAM.
Before merging, do this for every item in the file:
* solve the issue, or
* convert the task to a GitHub issue and update the comment
  to link to the issue (and remove the "TODO" text from the comment).
Then delete the file.
Related: #363
Drew won't give us enough RAM for stage1 to build stage2. We'll still
have FreeBSD builds available on releases, but we're going to lose
FreeBSD CI testing for master branch builds until we fully switch over
to stage2 (and have lower memory usage).
Let me know if anyone wants to run a SourceHut instance and give zig
access to run on slightly more powerful machines. We need about 8 GiB
RAM to run the CI test suite for now.
After we're fully self hosted I expect to re-enable this.
The API is pretty specific to the implementation details of the
self-hosted compiler. I don't want to have to independently support
and maintain it as part of the standard library, and be obligated
to avoid breaking changes to it as the implementation of stage2
changes.
This is not strictly necessary, but it increases the likelihood of cache
hits because foo.c and bar.c will now have different cache directories
and can be updated independently without clobbering each other's cache
data.
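As a sketch of why this works (the names here are illustrative, not the compiler's actual API): the source path participates in the hash that names the output directory, so each object gets its own directory.

```zig
const std = @import("std");

// Illustrative only: derive a per-object cache directory name by hashing
// the inputs that identify one C object, including its source path.
fn objectCacheDirName(c_source_path: []const u8, clang_args: []const []const u8) [32]u8 {
    var hasher = std.crypto.hash.Blake3.init(.{});
    hasher.update(c_source_path); // foo.c and bar.c diverge here
    for (clang_args) |arg| {
        hasher.update(arg);
        hasher.update(&[1]u8{0}); // separator so argument boundaries matter
    }
    var bin_digest: [16]u8 = undefined;
    hasher.final(&bin_digest);
    // Hex-encode the digest into a file-system-friendly directory name.
    var name: [32]u8 = undefined;
    const hex = "0123456789abcdef";
    for (bin_digest, 0..) |byte, i| {
        name[i * 2 + 0] = hex[byte >> 4];
        name[i * 2 + 1] = hex[byte & 0xf];
    }
    return name;
}
```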
* update to the new cache hash API
* std.Target defaultVersionRange moves to std.Target.Os.Tag
* std.Target.Os gains getVersionRange which returns a tagged union
* start the process of splitting Module into Compilation and "zig
module".
- The parts of Module having to do with only compiling zig code are
extracted into ZigModule.zig.
- Next step is to rename Module to Compilation.
- After that rename ZigModule back to Module.
* implement proper cache hash usage when compiling C objects, and
properly manage the file lock of the build artifacts.
* make versions optional to match recent changes to master branch.
* proper cache hash integration for compiling zig code
* proper cache hash integration for linking even when not compiling zig
code.
* ELF LLD linking integrates with the caching system (a sketch of the id
  check follows this list). A comment from the source code:
Here we want to determine whether we can save time by not invoking LLD when the
output is unchanged. None of the linker options or the object files that are being
linked are in the hash that namespaces the directory we are outputting to. Therefore,
we must hash those now, and the resulting digest will form the "id" of the linking
job we are about to perform.
After a successful link, we store the id in the metadata of a symlink named "id.txt" in
the artifact directory. So, now, we check if this symlink exists, and if it matches
our digest. If so, we can skip linking. Otherwise, we proceed with invoking LLD.
* implement disable_c_depfile option
* add tracy to a few more functions
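A minimal sketch of that id.txt check (the helper names here are hypothetical): because the digest is stored as the symlink's target, a single readlink decides whether LLD can be skipped.

```zig
const std = @import("std");

// Hypothetical sketch: skip linking when the stored id matches our digest.
fn linkIfNeeded(artifact_dir: std.fs.Dir, digest: []const u8) !void {
    var buf: [64]u8 = undefined;
    // The id lives in the symlink target itself, so no file is opened;
    // a missing or unreadable link simply means a cache miss.
    if (artifact_dir.readLink("id.txt", &buf)) |prev_id| {
        if (std.mem.eql(u8, prev_id, digest)) return; // unchanged: skip LLD
    } else |_| {}
    try invokeLld(); // stand-in for the real LLD invocation
    artifact_dir.deleteFile("id.txt") catch {};
    try artifact_dir.symLink(digest, "id.txt", .{});
}

fn invokeLld() !void {} // placeholder
```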
The cache hash implementation is broken up into smaller components, all
of which are now exposed. This makes it more flexible.
A `*const Cache` is now passed in, along with an open manifest dir handle
that the caller is responsible for managing.
Expose some of the base64 stuff.
Extract the hash helper functions into `HashHelper` and add some more
methods such as addOptional and addListOfFiles.
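For illustration, here is the rough shape (assumed, not the exact API) and why addOptional hashes a presence byte before the payload:

```zig
const std = @import("std");

// Assumed shape of HashHelper: a thin wrapper around the hasher offering
// typed add* methods.
const HashHelper = struct {
    hasher: std.crypto.hash.Blake3,

    fn init() HashHelper {
        return .{ .hasher = std.crypto.hash.Blake3.init(.{}) };
    }

    fn addBytes(hh: *HashHelper, bytes: []const u8) void {
        const len: u64 = bytes.len;
        hh.hasher.update(std.mem.asBytes(&len)); // length-prefix each field
        hh.hasher.update(bytes);
    }

    // Hash a presence byte first so that `null` and an empty string
    // produce different digests.
    fn addOptional(hh: *HashHelper, maybe: ?[]const u8) void {
        hh.hasher.update(&[1]u8{@intFromBool(maybe != null)});
        if (maybe) |bytes| hh.addBytes(bytes);
    }
};
```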
Add `CacheHash.toOwnedLock` so that you can deinitialize everything
except the open file handle which represents the file system lock on the
build artifacts.
Use ArrayListUnmanaged, saving space per allocated CacheHash.
Avoid one memory allocation in hit() by using a static buffer.
hit() returns a bool; caller code is responsible for calling final() in
either case. This is a simpler and easier-to-use API.
writeManifest() is no longer called from deinit() with errors ignored.
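Putting those two points together, the caller's flow under the new API looks roughly like this (a stub sketch with assumed method shapes, not the verbatim interface):

```zig
const std = @import("std");

// Stub standing in for the CacheHash discussed above (assumed shape).
const CacheHash = struct {
    hasher: std.crypto.hash.Blake3,
    manifest_matched: bool = false,

    fn addBytes(ch: *CacheHash, bytes: []const u8) void {
        ch.hasher.update(bytes);
    }
    fn hit(ch: *CacheHash) !bool {
        return ch.manifest_matched; // real code consults the manifest file
    }
    fn final(ch: *CacheHash) [16]u8 {
        var digest: [16]u8 = undefined;
        ch.hasher.final(&digest);
        return digest;
    }
    fn writeManifest(ch: *CacheHash) !void {
        _ = ch; // real code records the inputs it observed
    }
};

// hit() only reports whether the manifest matched; final() is called on
// both paths, and writeManifest() is now an explicit, error-returning step.
fn getDigest(ch: *CacheHash, source_path: []const u8) ![16]u8 {
    ch.addBytes(source_path);
    if (try ch.hit()) return ch.final(); // cache hit: reuse the artifacts
    const digest = ch.final(); // cache miss: digest names the new artifacts
    // ... build the artifact here, then persist the manifest explicitly.
    try ch.writeManifest();
    return digest;
}
```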
Slightly speed up the slicing-by-8 code path by replacing the
(load+shift+xor)*4 sequence with a single u32 load plus a xor; a sketch
of the change follows the numbers below.
Before:
```
iterative: 1018 MiB/s [000000006c3b110d]
small keys: 1075 MiB/s [0035bf3dcac00000]
```
After:
```
iterative: 1114 MiB/s [000000006c3b110d]
small keys: 1324 MiB/s [0035bf3dcac00000]
```
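In code, the change amounts to something like the following (illustrative; little-endian is assumed, as in the slicing-by-8 tables):

```zig
const std = @import("std");

// Before (illustrative): four byte loads, each shifted and xor'd in.
fn combineBytewise(crc: u32, p: *const [4]u8) u32 {
    var c = crc;
    c ^= @as(u32, p[0]);
    c ^= @as(u32, p[1]) << 8;
    c ^= @as(u32, p[2]) << 16;
    c ^= @as(u32, p[3]) << 24;
    return c;
}

// After: one u32 load plus a single xor.
fn combineWordwise(crc: u32, p: *const [4]u8) u32 {
    return crc ^ std.mem.readInt(u32, p, .little);
}
```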
It turns out that the kernel won't read or write more than 0x7fffffff
bytes in a single call, failing with EINVAL when trying to do so.
Adjust the limit and curse whoever is responsible for this.
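The fix is essentially a per-call clamp plus a loop (a sketch; the constant name is made up):

```zig
const std = @import("std");

// The kernel rejects larger counts with EINVAL, so cap each syscall and
// loop until the whole buffer has been written.
const max_rw_count: usize = 0x7fff_ffff; // illustrative name

fn writeAll(file: std.fs.File, bytes: []const u8) !void {
    var index: usize = 0;
    while (index < bytes.len) {
        const end = @min(bytes.len, index + max_rw_count);
        index += try file.write(bytes[index..end]);
    }
}
```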
Closes #6332