When an error response was encountered, such as 404 not found, the body
wasn't discarded, leading to the string "404 not found" being
incorrectly interpreted as the next request's response.
closes #24732
It doesn't really make sense for `target_util.canBuildLibCompilerRt`
(and its ubsan-rt friend) to take in `use_llvm`, because the caller
doesn't control that: they're just going to queue a sub-compilation for
the runtime. The only exception to that is the ZCU strategy, where we
effectively embed `_ = @import("compiler_rt")` into the Zig compilation:
there, the question does matter. Rather than trying to model this with
multiple awkward calls, have `canBuildLibCompilerRt` return not just a
boolean, but a value that also differentiates between the self-hosted
backend being capable of building the library and only LLVM being
capable (see the sketch below). Logic in `Compilation` uses that
difference to decide whether to use the ZCU strategy, and also to
disable the library when LLVM is required to build it but the compiler
does not support LLVM.
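A minimal sketch of the shape that tri-state result could take (the names here are illustrative, not the actual `target_util` declarations):

```zig
/// Illustration only: a tri-state answer instead of a plain bool, so callers
/// can tell "the self-hosted backend can build it" apart from "only LLVM can".
pub const CanBuildCompilerRt = enum {
    /// The library cannot be built for this target at all.
    no,
    /// Only the LLVM backend is capable of building the library.
    only_llvm,
    /// The self-hosted backend can build it too, so the ZCU strategy
    /// (embedding `_ = @import("compiler_rt")`) is also an option.
    yes,
};
```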
Also, remove a redundant check later on, when actually queuing jobs.
We've already checked that we can build `compiler_rt`, and
`compiler_rt_strat` is set accordingly. I'm guessing this was there to
work around a bug I saw in the old strategy assignment, where support
was ignored in some cases.
Resolves: #24623
They seem to always be `null` even when accessed through the extern key, so we have no way to tell whether they have natural alignment or not when decorating. And the reason we don't always decorate them is that some environments might be too dumb and crash on this.
This is theoretically a bugfix as well, since it enforces the correct
limit on the first write after writing the header. This theoretical bug
hasn't been hit in practice though as far as I know.
Writer.sendFileAll() asserts non-zero buffer capacity in the case that
the fallback is hit. It also requires the caller to flush. The buffer
may be bypassed as an optimization but this is not a guarantee.
Also improve the Writer documentation and add an earlier assert on
buffer capacity in sendFileAll().
On macOS, when using the LLVM backend, the output binary retains a
reference to this object file's debug info (as opposed to self-hosted
backends which instead emit a dSYM bundle). As such, we need to retain
this object file in such cases. This object does unfortunately "leak",
in that it won't be reused and will just sit in the cache forever (or
until GC'd in the future). But that's no worse than the cache behavior
prior to the rework that caused this, and it will become less of a
problem over time as the self-hosted backend gains usability for debug
builds and eventually becomes the default.
Resolves: #24369
* std.Io.Reader: fix confused semantics of rebase. Before it was
ambiguous whether it was supposed to be based on end or seek. Now it
is clearly based on seek, with an added assertion for clarity.
* std.crypto.tls.Client: fix panic caused by insufficient buffer space
  being available. Also, avoid unnecessary rebasing.
* std.http.Reader: introduce max_head_len to limit HTTP header length.
  This prevents a crash in the underlying reader, which may require a
  minimum buffer length.
* std.http.Client: choose better buffer sizes for streams and the TLS
  client. Crucially, the buffer shared by the HTTP reader and the TLS
  client needs to be big enough for all HTTP headers *and* the maximum
  TLS record size (see the rough sizing sketch after this list). Bump
  the default HTTP header size from 4K to 8K.
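A rough sizing sketch of that constraint (hand-written literals for illustration; the real code derives these from the corresponding `std.crypto.tls` constants):

```zig
// Illustration only: the buffer shared by the HTTP reader and the TLS client
// must hold a full set of response headers plus one maximum-size TLS record.
const max_http_head_len = 8 * 1024; // new default HTTP header limit (was 4K)
const max_tls_record_len = 5 + (1 << 14) + 256; // record header + max fragment + max expansion (approx.)
const min_shared_buffer_len = max_http_head_len + max_tls_record_len;
```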
fixes #24872
I have noticed, however, that there are still fetch problems.
Previously, an out-of-bounds index access could occur when copying match_length bytes while decoding a sequence that happened to overflow `dest`. Now, each sequence checks that there is enough room for the full sequence_length (literal_length + match_length) before doing any copying.
Fixes the failing inputs found here: https://github.com/ziglang/zig/issues/24817#issuecomment-3192927715
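A minimal sketch of that check (names such as `dest_len`, `write_pos`, `literal_length`, and `match_length` are illustrative, not the actual `std.compress.zstd.Decompress` internals):

```zig
const std = @import("std");

/// Illustration only: reject the sequence up front when literals + match
/// would not fit in the remaining destination space, instead of finding out
/// mid-copy. Assumes write_pos <= dest_len.
fn checkSequenceFits(
    dest_len: usize,
    write_pos: usize,
    literal_length: usize,
    match_length: usize,
) error{MalformedFrame}!void {
    const sequence_length = std.math.add(usize, literal_length, match_length) catch
        return error.MalformedFrame;
    if (sequence_length > dest_len - write_pos) return error.MalformedFrame;
}
```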
As well as the exact byte count, include a human-readable value so it's
clearer what the error is actually telling you. The exact byte count
might not be worth keeping, but I decided to keep it in case it's useful in
some scenario.
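Hypothetically, the formatting could look something like this (a sketch of the idea only; the names and message text are not the compiler's actual ones):

```zig
const std = @import("std");

/// Illustration only: report both the exact byte count and a rounded
/// human-readable value.
fn reportTooBig(actual_size: u64, max_size: u64) void {
    const mib = @as(f64, @floatFromInt(actual_size)) / (1024.0 * 1024.0);
    std.debug.print("file is {d} bytes ({d:.1} MiB); the limit is {d} bytes\n", .{
        actual_size, mib, max_size,
    });
}
```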
In the best case, this is redundant work, because we aren't actually
going to emit a working binary this update. In the worst case, it causes
bugs because the linker may not have *seen* the thing being exported due
to the compile errors.
Resolves: #24417
* std.Io.Reader: appendRemaining no longer supports alignment and has
  different rules about how exceeding the limit is handled. Fixed a bug
  where it would return success instead of error.StreamTooLong as it was
  supposed to.
* std.Io.Reader: simplify appendRemaining and appendRemainingUnlimited
  to be implemented on top of std.Io.Writer.Allocating (see the sketch
  after this list).
* std.Io.Writer: introduce unreachableRebase
* std.Io.Writer: remove minimum_unused_capacity from Allocating. Maybe
  that flexibility could have been handy, but let's see if anyone
  actually needs it. The field is redundant with the superlinear growth
  of ArrayList capacity.
* std.Io.Writer: growingRebase also ensures total capacity on the
preserve parameter, making it no longer necessary to do
ensureTotalCapacity at the usage site of decompression streams.
* std.compress.flate.Decompress: fix rebase not taking into account seek
* std.compress.zstd.Decompress: split into "direct" and "indirect" usage
patterns depending on whether a buffer is provided to init, matching
  how flate works. Remove some overzealous asserts that prevented buffer
  expansion from within the rebase implementation.
* std.zig: fix readSourceFileToAlloc returning an overaligned slice
which was difficult to free correctly.
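For context, the Writer.Allocating pattern that appendRemaining now builds on looks roughly like this (a hedged sketch from memory of the 0.15 `std.Io` API; exact names and signatures may differ slightly):

```zig
const std = @import("std");

test "collect the rest of a reader via Writer.Allocating" {
    var reader: std.Io.Reader = .fixed("hello, world");

    var aw: std.Io.Writer.Allocating = .init(std.testing.allocator);
    defer aw.deinit();

    // Pump everything remaining in the reader into the growing buffer.
    _ = try reader.streamRemaining(&aw.writer);

    const bytes = try aw.toOwnedSlice();
    defer std.testing.allocator.free(bytes);
    try std.testing.expectEqualStrings("hello, world", bytes);
}
```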
fixes #24608
The previous code assumed that `initFrame` during the `new_frame` state would always result in the `in_frame` state, but that's not always the case. `initFrame` can also result in the `skippable_frame` state, which would lead to access of union field 'in_frame' while field 'skipping_frame' is active.
Now, the switch is re-entered with the updated state so either case is handled appropriately.
Fixes the crashes from https://github.com/ziglang/zig/issues/24817
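A minimal sketch of the re-dispatch (a hypothetical state machine, not the actual `std.compress.zstd.Decompress` code):

```zig
const State = union(enum) {
    new_frame,
    in_frame: u32,
    skippable_frame: u32,
};

fn step(state: *State) void {
    while (true) {
        switch (state.*) {
            .new_frame => {
                // initFrame may yield either in_frame or skippable_frame;
                // loop back so the switch dispatches on the updated state.
                state.* = initFrame();
                continue;
            },
            .in_frame => |block_bytes| {
                _ = block_bytes; // decode the next block here
                return;
            },
            .skippable_frame => |skip_bytes| {
                _ = skip_bytes; // discard the frame contents here
                return;
            },
        }
    }
}

// Placeholder for illustration; a real implementation parses the frame header.
fn initFrame() State {
    return .{ .skippable_frame = 8 };
}
```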
Previously, the "allow EndOfStream" part of this logic was too permissive. If there are a few dangling bytes at the end of the stream, that should be treated as a bad magic number. The only case where EndOfStream is allowed is when the stream is truly at the end, with exactly zero bytes available.
Validate wildcard certificates as specified in RFC 6125.
In particular, `*.example.com` should match `foo.example.com` but
NOT `bar.foo.example.com` as it previously did.
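A minimal sketch of the matching rule (a standalone helper, not the actual `std.crypto.Certificate` code):

```zig
const std = @import("std");

/// Illustration only: a wildcard in the left-most label matches exactly one
/// label (RFC 6125); it must not span a dot.
fn wildcardMatches(pattern: []const u8, host: []const u8) bool {
    if (!std.mem.startsWith(u8, pattern, "*.")) {
        return std.ascii.eqlIgnoreCase(pattern, host);
    }
    const suffix = pattern[1..]; // "*.example.com" -> ".example.com"
    if (!std.ascii.endsWithIgnoreCase(host, suffix)) return false;
    const first_label = host[0 .. host.len - suffix.len];
    return first_label.len > 0 and
        std.mem.indexOfScalar(u8, first_label, '.') == null;
}

test wildcardMatches {
    try std.testing.expect(wildcardMatches("*.example.com", "foo.example.com"));
    try std.testing.expect(!wildcardMatches("*.example.com", "bar.foo.example.com"));
    try std.testing.expect(!wildcardMatches("*.example.com", "example.com"));
}
```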
The Lua headers are needed because, yes, NetBSD has a kernel module for Lua
support. soundcard.h is technically a system header but is installed by
libossaudio and so was missed previously.
This also removes some riscv headers that shouldn't have been added because
NetBSD does not yet officially support the riscv32/riscv64 ports.
Closes #24737.