If a Reader implementation implements `stream` by ignoring the Writer, writing directly to its internal buffer, and returning 0, then `defaultDiscard` would likewise not update `seek` and return 0. This is incorrect, and it can cause `discardShort` to violate the contract of `VTable.discard` by calling into `vtable.discard` with a non-empty buffer.
This commit fixes the problem by advancing `seek` up to the limit after the `stream` call. This logic could likely be somewhat simplified in the future depending on how #25170 is resolved.
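For context, a minimal standalone sketch of the restored invariant (toy type and numbers, not the actual std.Io.Reader code): even when `stream` bypasses the Writer, buffers data internally, and reports 0 bytes streamed, the discard path must still advance `seek` up to the limit.

```zig
const std = @import("std");

const ToyReader = struct {
    seek: usize = 0,
    end: usize = 0,

    fn discard(r: *ToyReader, limit: usize) usize {
        // Assume a `stream` implementation that ignored the Writer, parked
        // 5 bytes in the internal buffer, and reported 0 bytes streamed.
        const streamed: usize = 0;
        r.end += 5;
        // The fix: also consume buffered data so `seek` advances up to the
        // limit, keeping the buffer empty from the caller's point of view.
        const advance = @min(limit - streamed, r.end - r.seek);
        r.seek += advance;
        return streamed + advance;
    }
};

test "discard advances seek even when stream reports 0" {
    var r: ToyReader = .{};
    try std.testing.expectEqual(@as(usize, 3), r.discard(3));
    try std.testing.expectEqual(@as(usize, 3), r.seek);
}
```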
While still preserving the guarantee about `async()` being assigned a unit
of concurrency (or immediately running the task), this change:
* retains the error from calling getCpuCount()
* spawns all threads in detached mode, using WaitGroup to join them (see
the sketch after this list)
* treats all workers the same regardless of whether they are processing
concurrent or async tasks. One thread pool does all the work, while
respecting async and concurrent limits.
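A self-contained sketch of the detached-spawn-plus-WaitGroup pattern mentioned above (the worker function and the summing task are made up for illustration; this is not the Io.Threaded source):

```zig
const std = @import("std");

fn worker(wg: *std.Thread.WaitGroup, sum: *std.atomic.Value(u64), x: u64) void {
    // Each detached worker signals completion through the WaitGroup.
    defer wg.finish();
    _ = sum.fetchAdd(x, .monotonic);
}

test "detached threads are joined via WaitGroup" {
    var wg: std.Thread.WaitGroup = .{};
    var sum = std.atomic.Value(u64).init(0);
    for (0..4) |i| {
        wg.start();
        const t = try std.Thread.spawn(.{}, worker, .{ &wg, &sum, i + 1 });
        t.detach();
    }
    // Joining happens through the WaitGroup, not Thread.join.
    wg.wait();
    try std.testing.expectEqual(@as(u64, 10), sum.load(.monotonic));
}
```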
This is a reimplementation of Io.Threaded that fixes the issues
highlighted in the recent Zulip discussion. It's poorly tested, but it
does successfully run the litmus test example that I offered in the
discussion to completion.
This implementation has the following key design decisions:
- `t.cpu_count` is used as the threadpool size.
- `t.concurrency_limit` is used as the maximum number of
"burst, one-shot" threads that can be spawned by `io.concurrent` past
`t.cpu_count`.
- `t.available_thread_count` is the number of threads in the pool that
are not currently busy with work (the bookkeeping happens in the worker
function).
- `t.one_shot_thread_count` is the number of active threads that were
spawned by `io.concurrent` past `t.cpu_count`.
In this implementation:
- `io.async` first tries to decrement `t.available_thread_count`. If
there are no threads available, it tries to spawn a new one if possible;
otherwise it runs the task immediately.
- `io.concurrent` first tries to use a thread in the pool, the same as
`io.async`, but on failure (no available threads and pool size limit
reached) it tries to spawn a new one-shot thread (see the sketch after
this list). One-shot threads run a different main function that just
executes one task, decrements the number of active one-shot threads, and
then exits.
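A self-contained sketch of the counter discipline described above; the field names mirror the ones mentioned in this message, but the surrounding struct and helper functions are hypothetical, not the actual Io.Threaded code:

```zig
const std = @import("std");

const Counters = struct {
    available_thread_count: std.atomic.Value(u32),
    one_shot_thread_count: std.atomic.Value(u32),
    concurrency_limit: u32,

    /// Try to claim an idle pool thread; returns false if none are idle
    /// (the `io.async` fast path).
    fn claimPoolThread(c: *Counters) bool {
        var avail = c.available_thread_count.load(.acquire);
        while (avail > 0) {
            avail = c.available_thread_count.cmpxchgWeak(
                avail,
                avail - 1,
                .acq_rel,
                .acquire,
            ) orelse return true;
        }
        return false;
    }

    /// The `io.concurrent` fallback: reserve a one-shot thread slot past
    /// the pool size, bounded by `concurrency_limit`.
    fn claimOneShotSlot(c: *Counters) bool {
        var n = c.one_shot_thread_count.load(.acquire);
        while (n < c.concurrency_limit) {
            n = c.one_shot_thread_count.cmpxchgWeak(
                n,
                n + 1,
                .acq_rel,
                .acquire,
            ) orelse return true;
        }
        return false;
    }
};

test "pool threads are claimed before one-shot slots" {
    var c: Counters = .{
        .available_thread_count = std.atomic.Value(u32).init(1),
        .one_shot_thread_count = std.atomic.Value(u32).init(0),
        .concurrency_limit = 2,
    };
    try std.testing.expect(c.claimPoolThread()); // takes the one idle thread
    try std.testing.expect(!c.claimPoolThread()); // pool is now exhausted
    try std.testing.expect(c.claimOneShotSlot()); // io.concurrent can still burst
}
```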
A relevant future improvement is to have one-shot threads stay on for a
few seconds (and potentially pick up a new task) to amortize spawning
costs.
The build runner was previously forcing child processes to have their
stderr colorization match the build runner by setting `CLICOLOR_FORCE`
or `NO_COLOR`. This is a nice idea in some cases, for instance a simple
`Run` step which we just expect to exit with code 0 and whose stderr is
not programmatically inspected, but a bad idea in others, for instance
when stderr is checked or captured, in which case forcing color on the
child could cause checks to fail.
Instead, this commit adds a field to `std.Build.Step.Run` which
specifies how the build runner assigns the `CLICOLOR_FORCE` and
`NO_COLOR` environment variables. The default behavior is to set
`CLICOLOR_FORCE` if the build runner's output is colorized and the
step's stderr is not captured, and to set `NO_COLOR` otherwise.
Alternatively, colors can be always enabled, always disabled, made to
always match the build runner, or the environment variables can be left
untouched so they can be manually controlled through `env_map`.
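A standalone sketch of the default decision described above (placeholder names; this is not the build runner's actual code):

```zig
const std = @import("std");

const ColorVar = enum { clicolor_force, no_color };

/// Pick which environment variable the build runner sets on the child
/// under the default behavior described above.
fn defaultChildColorVar(runner_colorized: bool, stderr_captured: bool) ColorVar {
    // Force color only when our own output is colorized and nothing will
    // programmatically inspect the child's stderr; otherwise disable it.
    if (runner_colorized and !stderr_captured) return .clicolor_force;
    return .no_color;
}

test "default child color variable" {
    try std.testing.expectEqual(ColorVar.clicolor_force, defaultChildColorVar(true, false));
    try std.testing.expectEqual(ColorVar.no_color, defaultChildColorVar(true, true));
    try std.testing.expectEqual(ColorVar.no_color, defaultChildColorVar(false, false));
}
```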
Notably, this fixes a failure when running `zig build test-cli` in a
TTY (or with colors explicitly enabled). GitHub CI hadn't caught this
because it does not request color, but Codeberg CI now does, and we were
seeing a failure in the `zig init` test because the actual output had
color escape codes in it due to 6d280dc.
Apple's own headers and tbd files prefer to think of Mac Catalyst as a distinct
OS target. Earlier, when DriverKit support was added to LLVM, it was represented as
a distinct OS. So why Apple decided to only represent Mac Catalyst as an ABI in
the target triple is beyond me. But this isn't the first time they've ignored
established target triple norms (see: armv7k and aarch64_32) and it probably
won't be the last.
While doing this, I also audited all Darwin OS prongs throughout the codebase
and made sure they cover all the tags.
This fixes package fetching on Windows.
Previously, `Async/GroupClosure` allocations were only aligned for the
closure struct type, which resulted in panics when `context_alignment`
(or `result_alignment`, for that matter) was greater.
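A minimal sketch of the idea behind the fix (placeholder helper, not the actual Async/GroupClosure allocation code): the allocation must honor the strictest of the closure, context, and result alignments rather than the closure's alone.

```zig
const std = @import("std");

fn requiredAlignment(comptime Closure: type, context_alignment: usize, result_alignment: usize) usize {
    // Take the maximum so a context or result with a stricter alignment
    // than the closure struct itself is still placed correctly.
    return @max(@alignOf(Closure), context_alignment, result_alignment);
}

test "allocation alignment honors the strictest requirement" {
    const Closure = struct { frame: usize };
    // A context demanding 64-byte alignment wins over the closure's own alignment.
    try std.testing.expectEqual(@as(usize, 64), requiredAlignment(Closure, 64, 8));
}
```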
If we used `undefined`, `netReceive` would `@intCast` an undefined
control slice len to msghdr's controllen field, which is sometimes
`u32`, even on 64-bit platforms.
`init` avoids this entirely by setting `control` to an empty slice
rather than undefined.
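A tiny standalone illustration of why the empty slice is safe (not the real netReceive code): an empty slice has a well-defined len of 0, so the narrowing cast is always fine.

```zig
const std = @import("std");

test "empty control slice has a castable len" {
    const control: []const u8 = &.{};
    // Mirrors the cast of the control length into a u32-wide controllen
    // field; with an empty slice this is simply 0, never undefined.
    const controllen: u32 = @intCast(control.len);
    try std.testing.expectEqual(@as(u32, 0), controllen);
}
```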
Restores code closer to the master branch in hopes of avoiding a
regression that was introduced when this was based on openSelfExe rather
than GetModuleFileNameExW.