I broke this when porting this logic for the `std.debug` rework in
https://github.com/ziglang/zig/pull/25227. The offset that I copied was
actually being treated as relative to the address of the *saved* base
pointer. I think it makes more sense to simply treat all offsets as
relative to this frame's base.
- Revive some of the removed cache integration logic in `cmdTranslateC` now that `translate-c` can return error bundles
- Fix up inconsistent path separators (on Windows) when building the aro include path
- Move some error bundle logic from resinator into aro.Diagnostics
- Add `ErrorBundle.addRootErrorMessageWithNotes` (extracted from resinator)
Now it's based on calling fillMore rather than on an illegally aliased
stream into the Reader buffer.
This commit also includes a disambiguation block inspired by #25162. If
`StreamTooLong` were added to `RebaseError`, this logic could be
replaced by removing the exit condition from the while loop. That error
code would represent the case where `buffer` capacity is too small for
an operation, replacing the current use of asserts.
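For illustration, here is a minimal sketch of that loop shape, using a toy buffered reader rather than the actual std.Io.Reader (`MiniReader` and its methods are made up for this example): keep calling `fillMore` until the delimiter appears in the buffered window, and exit when the buffer is at capacity; that explicit capacity check is the exit condition a `StreamTooLong` error could replace.
```
const std = @import("std");

// Toy stand-in for a buffered reader; not std.Io.Reader.
const MiniReader = struct {
    source: []const u8, // unread bytes of the underlying stream
    buffer: []u8, // backing storage for the buffered window
    end: usize = 0, // number of valid bytes in `buffer`

    fn buffered(r: *const MiniReader) []const u8 {
        return r.buffer[0..r.end];
    }

    // Grow the buffered window by reading more from the source.
    fn fillMore(r: *MiniReader) error{EndOfStream}!void {
        if (r.source.len == 0) return error.EndOfStream;
        const n = @min(r.source.len, r.buffer.len - r.end);
        @memcpy(r.buffer[r.end..][0..n], r.source[0..n]);
        r.end += n;
        r.source = r.source[n..];
    }
};

test "fill until the delimiter is buffered or capacity runs out" {
    var storage: [8]u8 = undefined;
    var r: MiniReader = .{ .source = "abc\ndef", .buffer = &storage };
    while (std.mem.indexOfScalar(u8, r.buffered(), '\n') == null) {
        // Exit condition: buffer capacity is too small for the operation.
        if (r.end == r.buffer.len) break;
        try r.fillMore();
    }
    try std.testing.expectEqual(
        @as(?usize, 3),
        std.mem.indexOfScalar(u8, r.buffered(), '\n'),
    );
}
```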
Fix `takeDelimiter` and `takeDelimiterExclusive` tossing too many bytes
(#25132)
Also add/improve test coverage for all delimiter and sentinel methods,
update usages of `takeDelimiterExclusive` to not rely on the fixed bug,
tweak a handful of doc comments, and slightly simplify some logic.
I have not fixed #24950 in this commit because I am a little less
certain about the appropriate solution there.
Resolves: #25132
Co-authored-by: Andrew Kelley <andrew@ziglang.org>
* File.Writer.seekBy passed wrong offset to setPosAdjustingBuffer.
* File.Writer.sendFile incorrectly used non-logical position.
Related to 1d764c1fdf04829cec5974d82cec901825a80e49
Test case provided by:
Co-authored-by: Kendall Condon <goon.pri.low@gmail.com>
Previously, when the delimiter was not found in the existing buffer, peekDelimiterInclusive used the `n` returned from `r.vtable.stream` as the length of the slice to check. However, it is valid for `vtable.stream` implementations to return 0 if they wrote to the buffer instead of to `w`; in that scenario, `indexOfScalarPos` was given a zero-length slice and could never find the delimiter.
This commit changes the logic to assume that `r.vtable.stream` can both:
- return 0, and
- modify seek/end (i.e. it's also valid for a `vtable.stream` implementation to rebase)
Also introduces `std.testing.ReaderIndirect`, which makes it possible to test against Reader implementations that return 0 from `stream`/`readVec`.
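As a toy illustration of that pitfall (the buffer and lengths here are made up; this is not the actual std.Io.Reader internals): a stream step may deposit bytes into the reader's buffer while reporting 0 written to `w`, so searching only the reportedly streamed bytes misses the delimiter.
```
const std = @import("std");

test "a stream step may buffer bytes yet report 0 written" {
    var buffer: [16]u8 = undefined;

    // Toy stand-in for a `vtable.stream` call that writes into the reader's
    // buffer instead of through to `w`, and therefore returns 0.
    const incoming = "abc\n";
    @memcpy(buffer[0..incoming.len], incoming);
    const n: usize = 0; // bytes reported as written to `w`
    const end: usize = incoming.len; // bytes actually buffered

    // Old logic: only the `n` reportedly streamed bytes were searched,
    // so the delimiter could never be found.
    try std.testing.expectEqual(
        @as(?usize, null),
        std.mem.indexOfScalar(u8, buffer[0..n], '\n'),
    );
    // New assumption: search everything that is actually buffered.
    try std.testing.expectEqual(
        @as(?usize, 3),
        std.mem.indexOfScalar(u8, buffer[0..end], '\n'),
    );
}
```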
Fixes #25428
This reverts commit 27aba2d776caf59bb6569934626af587fdba9c75.
I'd like to review this contribution more carefully, particularly
alongside the alternate implementation that is also open as a pull
request (#25109).
Reopens #25093
`findScalarPos` might do repetitive work, even when using SIMD. For
example, when searching the string `/abcde/fghijk/lm` for the character
`/`, a 16-byte wide search would yield the mask `1000001000000100`, but it
would only count the first `1` and then re-search the remainder of the string.
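As a rough sketch of the alternative (illustrative code, not the actual implementation; the `countBlock` helper is made up for this example), a vector compare plus `@popCount` counts every match in a 16-byte block at once instead of locating only the first one:
```
const std = @import("std");

// Compare one 16-byte block against the needle and count all matches in the
// resulting mask, rather than stopping at the first set bit.
fn countBlock(block: @Vector(16, u8), needle: u8) usize {
    const needles: @Vector(16, u8) = @splat(needle);
    const mask: u16 = @bitCast(block == needles);
    return @popCount(mask);
}

test "all three '/' in the example string are counted in one pass" {
    const text = "/abcde/fghijk/lm";
    const block: [16]u8 = text[0..16].*;
    try std.testing.expectEqual(@as(usize, 3), countBlock(block, '/'));
}
```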
When testing locally, the difference was quite significant:
```
count scalar
5737 iterations 522.83us per iterations
0 bytes per iteration
worst: 2370us median: 512us stddev: 107.64us
count v2
38333 iterations 78.03us per iterations
0 bytes per iteration
worst: 713us median: 76us stddev: 10.62us
count scalar v2
99565 iterations 29.80us per iterations
0 bytes per iteration
worst: 41us median: 29us stddev: 1.04us
```
Note that `count v2` is a simpler string search, similar to what remains
of the SIMD approach:
```
pub fn countV2(comptime T: type, haystack: []const T, needle: T) usize {
const n = haystack.len;
if (n < 1) return 0;
var count: usize = 0;
for (haystack[0..n]) |item| {
count += @intFromBool(item == needle);
}
return count;
}
```
This implies the compiler already generates well-optimized code for the
simpler loop that outperforms the `findScalarPos`-based approach; hence
an iterative loop is used for the remainder of the haystack.
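A hedged sketch of that hybrid shape (illustrative only, with a made-up `countScalarHybrid` name; not necessarily the code in this commit): count whole 16-byte blocks with a vector compare, then fall back to the simple `count v2`-style loop for the remainder of the haystack.
```
const std = @import("std");

fn countScalarHybrid(haystack: []const u8, needle: u8) usize {
    const needles: @Vector(16, u8) = @splat(needle);
    var total: usize = 0;
    var i: usize = 0;
    // SIMD part: count every match in each full 16-byte block.
    while (i + 16 <= haystack.len) : (i += 16) {
        const block: @Vector(16, u8) = haystack[i..][0..16].*;
        const mask: u16 = @bitCast(block == needles);
        total += @popCount(mask);
    }
    // Iterative part: the simple loop handles the remainder of the haystack.
    for (haystack[i..]) |item| total += @intFromBool(item == needle);
    return total;
}

test countScalarHybrid {
    try std.testing.expectEqual(@as(usize, 3), countScalarHybrid("/abcde/fghijk/lm", '/'));
    try std.testing.expectEqual(@as(usize, 2), countScalarHybrid("a/b/c", '/'));
}
```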
Co-authored-by: StAlKeR7779 <stalkek7779@yandex.ru>