I measured this against the master branch and found no statistical
difference. Since this code is simpler and logically superior due to
always leaving sufficient unused capacity when growing, it is preferred
over the status quo.
* std.Io.Reader: appendRemaining no longer supports alignment and has
different rules about what happens when the limit is exceeded. Fixed a
bug where it would return success instead of error.StreamTooLong as it
was supposed to.
* std.Io.Reader: simplify appendRemaining and appendRemainingUnlimited
to be implemented based on std.Io.Writer.Allocating
* std.Io.Writer: introduce unreachableRebase
* std.Io.Writer: remove minimum_unused_capacity from Allocating. Maybe
that flexibility could have been handy, but let's see if anyone
actually needs it. The field is redundant with the superlinear growth
of ArrayList capacity.
* std.Io.Writer: growingRebase also ensures total capacity on the
preserve parameter, making it no longer necessary to call
ensureTotalCapacity at the usage sites of decompression streams.
* std.compress.flate.Decompress: fix rebase not taking into account seek
* std.compress.zstd.Decompress: split into "direct" and "indirect" usage
patterns depending on whether a buffer is provided to init, matching
how flate works. Remove some overzealous asserts that prevented buffer
expansion from within rebase implementation.
* std.zig: fix readSourceFileToAlloc returning an overaligned slice
which was difficult to free correctly.
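As a sketch of the new shape (hedged: these std.Io signatures are
recent and still settling), Writer.Allocating owns a growable buffer,
so Reader.appendRemaining can be implemented as streaming into it:

```zig
const std = @import("std");

test "accumulating into std.Io.Writer.Allocating" {
    var aw: std.Io.Writer.Allocating = .init(std.testing.allocator);
    defer aw.deinit();
    // The Allocating adapter exposes a plain std.Io.Writer whose drain
    // grows the backing buffer, which is what appendRemaining and
    // appendRemainingUnlimited now build on.
    try aw.writer.print("hello, {s}", .{"world"});
    try std.testing.expectEqualStrings("hello, world", aw.written());
}
```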
fixes #24608
This algorithm is non-trivial and makes sense for any data structure
that acts as an array list, so I thought it would make sense as a
method.
I have a real-world use case for this in a music player application
(deleting queue items).
Adds the method to:
* ArrayList
* ArrayHashMap
* MultiArrayList
This use case is handled by ArrayListUnmanaged via the "...Bounded"
method variants, and it is better to share machine code than to
generate multiple versions of each function for differing array
lengths.
This one changes the size of an allocation, allowing it to be relocated.
However, the implementation will still return `null` if it would be
equivalent to

```
new = alloc
memcpy(new, old)
free(old)
```
Mainly this prepares for taking advantage of `mremap`, which I thought
would be a bigger deal but apparently is only available on Linux. Still,
we should use it on Linux.
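For illustration, a minimal sketch of the caller-side contract
(assuming the new entry point is `Allocator.remap`):

```zig
const std = @import("std");

test "remap may relocate, or decline when a plain copy is just as good" {
    const gpa = std.testing.allocator;
    var buf = try gpa.alloc(u8, 32);
    defer gpa.free(buf);
    // `null` means resizing-or-relocating would be no better than the
    // caller doing alloc + memcpy + free itself (e.g. no `mremap`
    // available), so the caller should fall back to that.
    if (gpa.remap(buf, 64)) |resized| {
        buf = resized;
    }
}
```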
This commit reworks how anonymous struct literals and tuples work.
Previously, an untyped anonymous struct literal
(e.g. `const x = .{ .a = 123 }`) was given an "anonymous struct type",
which is a special kind of struct which coerces using structural
equivalence. This mechanism was a holdover from before we used
RLS / result types as the primary mechanism of type inference. This
commit changes the language so that the type assigned here is a "normal"
struct type. It uses a form of equivalence based on the AST node and the
type's structure, much like a reified (`@Type`) type.
Additionally, tuples have been simplified. The distinction between
"simple" and "complex" tuple types is eliminated. All tuples, even those
explicitly declared using `struct { ... }` syntax, use structural
equivalence, and do not undergo staged type resolution. Tuples are very
restricted: they cannot have non-`auto` layouts, cannot have aligned
fields, and cannot have default values with the exception of `comptime`
fields. Tuples currently do not have optimized layout, but this can be
changed in the future.
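To make the two cases concrete, a small sketch of the literal forms
this affects:

```zig
const std = @import("std");

test "anonymous struct literals and tuples" {
    // With no result type, this literal now gets a "normal" struct type,
    // deduplicated by AST node and structure, like a reified (@Type) type.
    const x = .{ .a = 123 };
    try std.testing.expect(x.a == 123);

    // Tuples use structural equivalence: any two tuple types with the
    // same field types (and comptime values) are the same type.
    const t: struct { u32, bool } = .{ 42, true };
    try std.testing.expect(t[0] == 42 and t[1]);
}
```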
This change simplifies the language, and fixes some problematic
coercions through pointers which led to unintuitive behavior.
Resolves: #16865
Follow up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their test
names. For example, here's a fully qualified test name before the
changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
The code asserted that the range to be replaced is within bounds of
`self.items`.
This is now reflected in the doc comment.
The old, wrong doc comment was copied from the `insert*` fns.
With this assertion holding true, `start + len` is always within the
address space and `start + new_items.len` is, at this point, always
strictly within bounds of `self.items`.
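A minimal usage sketch of the asserted precondition (managed
`ArrayList` API):

```zig
const std = @import("std");

test "replaceRange: start + len must be in bounds of items" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("hello");
    // OK: the replaced range [1, 1 + 3) lies within list.items.
    try list.replaceRange(1, 3, "ey");
    try std.testing.expectEqualStrings("heyo", list.items);
    // replaceRange(4, 2, ...) would trip the assertion: 4 + 2 > items.len.
}
```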
This is useful when you want to have an array list backed by a fixed
slice of memory and no Allocator will be used.
It's an alternative to BoundedArray as you will see in the following
commit.
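A hedged sketch of the intended usage, pairing `initBuffer` with the
"...Bounded" variants mentioned above (exact variant names assumed):

```zig
const std = @import("std");

test "ArrayListUnmanaged backed by a fixed slice, no Allocator" {
    var buffer: [8]u8 = undefined;
    // Capacity is pinned to buffer.len; no allocator is ever consulted,
    // and exceeding the buffer returns error.OutOfMemory.
    var list = std.ArrayListUnmanaged(u8).initBuffer(&buffer);
    try list.appendBounded('h');
    try list.appendSliceBounded("i!");
    try std.testing.expectEqualStrings("hi!", list.items);
}
```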
This reverts commit 0c99ba1eab63865592bb084feb271cd4e4b0357e, reversing
changes made to 5f92b070bf284f1493b1b5d433dd3adde2f46727.
This caused a CI failure when it landed in the master branch due to a
128-bit `@byteSwap` in std.mem.
(Unmanaged)ArrayList.insert has the same inefficiency as the old
insertSlice. With the new addManyAt, the solution is trivial.

Also improves the test "growing memory preserves contents". In the
previous implementation, if any changes were made to the ArrayList
memory growth policy (function growMemory), the list could end up with
enough capacity to not trigger a memory growth, defeating the purpose
of the test. The new implementation more robustly triggers a memory
growth.
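For reference, a small sketch of the pattern insert now reduces to:
open the gap once with addManyAt, then fill it.

```zig
const std = @import("std");

test "addManyAt opens a gap in one move" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("abef");
    // One call shifts the tail a single time no matter how many elements
    // are inserted; the returned slice is the uninitialized gap.
    const gap = try list.addManyAt(2, 2);
    @memcpy(gap, "cd");
    try std.testing.expectEqualStrings("abcdef", list.items);
}
```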
* Move `computeBetterCapacity` to the bottom so that `pub` stuff shows
up first.
* Rename `computeBetterCapacity` to `growCapacity`. Every function
implicitly computes something; that word is always redundant in a
function name. "better" is vague. Better in what way? Instead we
describe what is actually happening. "grow".
* Improve doc comments to be very explicit about when element pointers
are invalidated or not.
* Rename `addManyAtIndex` to `addManyAt`. The parameter is named
`index`; that is enough.
* Extract some duplicated code into `addManyAtAssumeCapacity` and make
it `pub`.
* Since I audited every line of code for correctness, I changed the
style to my personal preference.
* Avoid a redundant `@memset` to `undefined` - memory allocation does
that already.
* Fixed comment giving the wrong reason for not calling
`ensureTotalCapacity`.
Includes a more robust implementation of replaceRange, which updates
the ArrayListUnmanaged if state changes occur in the managed part of
the code before an error is returned.
Co-authored-by: Andrew Kelley <andrew@ziglang.org>
Now that allocator.resize() is allowed to fail, programs may wish to
test code paths that handle resize() failure. The simplest way to do
this now is to replace the vtable of the testing allocator with one
that uses Allocator.noResize for the 'resize' function pointer.
An alternative way to support this testing capability is to augment the
FailingAllocator (which is already useful for testing allocation failure
scenarios) to intentionally fail on calls to resize(). To do this, add a
'resize_fail_index' parameter to the FailingAllocator that causes
resize() to fail after the given number of calls.
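A hedged sketch of that second approach (assuming the Config-struct
form of FailingAllocator.init; earlier versions took a bare
fail_index):

```zig
const std = @import("std");

test "force resize() failure with FailingAllocator" {
    var failing = std.testing.FailingAllocator.init(std.testing.allocator, .{
        .resize_fail_index = 0, // fail the very first resize() call
    });
    const gpa = failing.allocator();

    const buf = try gpa.alloc(u8, 16);
    defer gpa.free(buf);
    // resize() reports failure; the original allocation remains valid.
    try std.testing.expect(!gpa.resize(buf, 32));
}
```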