Rather than storing the name of a debug section in the `RelocatableData`
structure, we use the `index` field as an offset into the debug names
table. This means we do not have to store an extra 16 bytes for
non-debug sections, a saving that can be massive for object files where
each data symbol has its own data section. The name of a debug section
can then be retrieved when needed by taking the offset and reading
until the 0-delimiter.
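
A minimal sketch of how such a lookup could work, assuming a flat,
0-delimited names buffer (the function and parameter names below are
illustrative, not the actual linker code):

```zig
const std = @import("std");

/// Hypothetical helper: given the concatenated, 0-delimited debug names
/// table and the `index` field interpreted as a byte offset, return the
/// section name. Names here are assumptions for illustration only.
fn debugSectionName(debug_names: []const u8, offset: usize) []const u8 {
    // Read from the offset up to (but not including) the 0 delimiter.
    return std.mem.sliceTo(debug_names[offset..], 0);
}
```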
Currently, `zig build-exe -fno-emit-bin --verbose-air src/main.zig`
results in no output at all. With this refactor, it dumps AIR
and then exits without invoking LLVM, as expected.
Saying `[]T` is a pointer is confusing because the Zig docs say there are
two types of pointers (`*T` and `[*]T`). It is clearer to say that `[]T`
is a slice type, which contains a `[*]T` pointer and a length.
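
For illustration, a small test showing that a slice carries both pieces
(standard Zig behavior, shown here only to support the wording change):

```zig
const std = @import("std");

test "a slice is a many-item pointer plus a length" {
    const array = [_]u8{ 1, 2, 3, 4 };
    const slice: []const u8 = &array;

    // The slice bundles a [*]const u8 pointer with a length.
    const many_ptr: [*]const u8 = slice.ptr;
    try std.testing.expectEqual(@as(usize, 4), slice.len);
    try std.testing.expectEqual(array[0], many_ptr[0]);
}
```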
Co-authored-by: Philipp Lühmann <47984692+luehmann@users.noreply.github.com>
The original impetus for making a change here was a typo in
`--add-header` that caused the script to fail. However, upon inspection,
I was alarmed that we were making a `--recursive` upload to the *root
directory* of ziglang.org. This could result in garbage files being
uploaded to the website, or important files being overwritten. While
addressing this concern, I decided to take on file compression as well.
Removed compression prior to uploading to S3. I am vetoing
pre-compressing objects for the following reasons:
* It prevents clients that do not support gzip encoding from working.
* It breaks the premise that objects on S3 are stored 1-to-1 with what
is on disk.
* It prevents Cloudflare from using a more efficient encoding, such as
Brotli, which they have started doing recently.
CDNs such as Cloudflare and Fastly already do compression on the fly,
and we should interoperate with them instead of fighting them.
CloudFront has an arbitrary limit of 9.5 MiB for auto-compression. I
looked and did not see a way to increase this limit. The data.js file is
currently 16 MiB. To fix this problem, we need to do one of the
following:
* Reduce the size of data.js to less than 9.5 MiB.
* Figure out how to adjust the CloudFront settings to increase the max
size for auto-compressed objects.
* Migrate to Fastly. Fastly appears not to have this limitation. Note
that we already plan to migrate to Fastly for the website.