This PR adds support for handling the ZIP64 format in local file headers,
i.e. when a zip file contains entries whose compressed or uncompressed
size fields are set to 0xFFFFFFFF and whose extra field contains a ZIP64
extended information field (tag 0x0001).
The code now:
* Reads the actual sizes from the ZIP64 extra field data
* Validates these sizes against the entry's compressed and uncompressed sizes
ZIP file format spec: https://pkware.cachefly.net/webdocs/casestudies/APPNOTE.TXT
This change allows proper extraction of ZIP files that use ZIP64 format in their
local file headers.
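Roughly, the extraction path now does something like the following standalone
sketch (not the actual `std.zip` code; the function and error names here are
made up), scanning the local header's extra field for tag 0x0001 and reading
the 64-bit sizes in the order the spec lists them:

```zig
const std = @import("std");

// Hypothetical helper: find the ZIP64 extended information field in a local
// file header's extra data and read the real 64-bit sizes from it.
fn readZip64Sizes(extra: []const u8) !struct { compressed: u64, uncompressed: u64 } {
    var i: usize = 0;
    while (i + 4 <= extra.len) {
        const tag = std.mem.readInt(u16, extra[i..][0..2], .little);
        const data_size = std.mem.readInt(u16, extra[i + 2 ..][0..2], .little);
        i += 4;
        if (i + data_size > extra.len) return error.InvalidExtraField;
        if (tag == 0x0001) {
            // ZIP64 extended information: original (uncompressed) size first,
            // then compressed size, both little-endian u64.
            if (data_size < 16) return error.InvalidZip64ExtraField;
            return .{
                .uncompressed = std.mem.readInt(u64, extra[i..][0..8], .little),
                .compressed = std.mem.readInt(u64, extra[i + 8 ..][0..8], .little),
            };
        }
        i += data_size;
    }
    return error.MissingZip64ExtraField;
}
```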
Fixes: #22329
For representing struct field default values and array/pointer type
sentinel values, we use `*const anyopaque`, since there is no way for
`std.builtin.Type.StructField` etc. to refer back to its own `type`
field. However, this is quite awkward when introspecting a type, due to
the pointer casts required.
As such, this commit renames the `sentinel` fields to `sentinel_ptr`,
and the `default_value` field to `default_value_ptr`, and introduces
helper methods `sentinel()` and `defaultValue()` to load the values.
These methods are marked as `inline` because their return value, which
is always comptime-known, is very often required at comptime by use
sites, so this avoids having to annotate such calls with `comptime`.
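For example, a minimal sketch against the renamed API (illustrative only, not
code from this diff), loading a struct field's default value with a manual
cast versus the new helper:

```zig
const std = @import("std");

const S = struct { x: u32 = 42 };

test "load a struct field's default value" {
    const field = @typeInfo(S).@"struct".fields[0];

    // Without the helper: cast the type-erased `?*const anyopaque` by hand.
    const via_ptr: *const u32 = @ptrCast(@alignCast(field.default_value_ptr.?));

    // With the helper: the value is loaded directly, and since `defaultValue`
    // is `inline`, the result stays comptime-known at the use site.
    const via_helper = field.defaultValue().?;

    try std.testing.expectEqual(@as(u32, 42), via_ptr.*);
    try std.testing.expectEqual(@as(u32, 42), via_helper);
}
```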
This is a breaking change, although note that 0.14.0 is already a
breaking release for all users of `std.builtin.Type` due to the union
fields being renamed.
This was done by regex substitution with `sed`. I then manually went
over the entire diff and fixed any incorrect changes.
This diff also changes a lot of `callconv(.C)` to `callconv(.c)`, since
my regex happened to also trigger here. I opted to leave these changes
in, since they *are* a correct migration, even if they're not the one I
was trying to do!
This matches established naming conventions. Now is an opportune time to
make this change, since we are already making breaking changes to
`std.builtin.Type`.
The goals of this branch are to:
* compile faster when using the wasm linker and backend
* enable saving compiler state by directly copying in-memory linker
  state to disk
* use compiler memory more efficiently
* introduce integer type safety to wasm linker code
* generate better WebAssembly code
* fully participate in incremental compilation
* do as much work as possible outside of flush(), while continuing to do
  linker garbage collection
* avoid unnecessary heap allocations
* avoid unnecessary indirect function calls
In order to accomplish these goals, this removes the ZigObject
abstraction, as well as Symbol and Atom. These abstractions resulted in
overly generic code, unnecessary work, and needless complications that
simply go away by creating a better in-memory data model and emitting
more things lazily.
For example, this makes wasm codegen emit MIR, which is then lowered to
wasm code during linking with optimal function indexes and so on, or
with relocations if outputting an object. Previously, relocations were
always emitted, which is entirely unnecessary when emitting an
executable, and it required all function calls to use the maximum-size
LEB encoding.
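As a standalone illustration of that size cost (not the linker's actual code),
a call index that must stay patchable has to reserve the full five-byte
ULEB128 form, while a finalized index can use the minimal one-byte form:

```zig
const std = @import("std");

// Writes `value` as a five-byte, padded ULEB128, the form a relocatable call
// site needs so the linker can later patch in any final function index.
fn writeMaxWidthUleb128(writer: anytype, value: u32) !void {
    var remaining = value;
    var i: usize = 0;
    while (i < 5) : (i += 1) {
        const low: u8 = @truncate(remaining & 0x7f);
        remaining >>= 7;
        // Set the continuation bit on all but the last byte.
        try writer.writeByte(if (i < 4) low | 0x80 else low);
    }
}

test "padded vs minimal encoding of a call index" {
    var buf = std.ArrayList(u8).init(std.testing.allocator);
    defer buf.deinit();

    try writeMaxWidthUleb128(buf.writer(), 3);
    // Minimal ULEB128 for 3 is the single byte 0x03; the padded form spends
    // five bytes so the call site stays patchable.
    try std.testing.expectEqualSlices(u8, &[_]u8{ 0x83, 0x80, 0x80, 0x80, 0x00 }, buf.items);
}
```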
This branch introduces the concept of the "prelink" phase which occurs
after all object files have been parsed, but before any Zcu updates are
sent to the linker. This allows the linker to fully parse all objects
into a compact memory model, which is guaranteed to be complete when Zcu
code is generated.
This commit is not a complete implementation of all these goals; it is
not even passing semantic analysis.
`Sema.explainWhyValueContainsReferenceToComptimeVar` (concise name!)
adds notes to an error explaining how to get from a given `Value` to a
pointer to some `comptime var` (or a comptime field). Previously, this
error could be very opaque whenever it wasn't obvious where the comptime
var pointer came from, particularly for type captures. Now, the error
notes explain this to the user.
This rewrite improves some error messages, hugely simplifies the logic,
and fixes several bugs. One of these bugs is technically a new rule
which Andrew and I agreed on: if a parameter has a comptime-only type
but is not declared `comptime`, then the corresponding call argument
should not be *evaluated* at comptime; only resolved. Implementing this
required changing how function types work a little, which in turn
required allowing a new kind of function coercion for some generic use
cases: function coercions are now allowed to implicitly *remove*
`comptime` annotations from parameters with comptime-only types. This is
okay because removing the annotation affects only the call site.
Resolves: #22262
`Zcu.PerThread.ensureTypeUpToDate` is set up in such a way that it only
returns the updated type the first time it is called. In general, that's
okay; however, the exception is that we want the function to continue
returning `error.AnalysisFail` when the type has been lost or its number
of captures has changed.
Therefore, the check for this case now happens before the up-to-date
success return.
For simplicity, a changed number of captures is now handled by
intentionally losing the instruction in `Zcu.mapOldZirToNew`, since
there is nothing to gain from tracking a type whose old instances can
never be reused.
The old lowering was kind of neat, but it unintentionally allowed the
syntax `for (123) |_| { ... }`, and there wasn't really a way to fix
that. So, instead, we include both the start and the end of the range in
the `for_len` instruction (each operand to `for` now has *two* entries
in this multi-op instruction). This slightly increases the size of ZIR
for `for` loops whose operands are predominantly indexables, but the
difference is small enough that it's not worth complicating ZIR to avoid
it.
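For reference, a minimal sketch of the surface forms this lowering serves
(plain user code, not tied to the ZIR encoding); the previously accepted
`for (123) |_| { ... }` form is now rejected, while ranges and indexables keep
working:

```zig
const std = @import("std");

test "for over a range and over an indexable" {
    const items = [_]u8{ 1, 2, 3 };
    var sum: usize = 0;
    // A range operand supplies both a start and an end bound...
    for (0..items.len) |i| sum += items[i];
    // ...while an indexable operand is iterated element by element.
    for (items) |x| sum += x;
    // Each loop contributes 1 + 2 + 3.
    try std.testing.expectEqual(@as(usize, 12), sum);
}
```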