I don't think these belong in std, at least not in their current form.
If someone wants to add these back I'd like to review the patch before
it lands.
Reverts 629e2e784495dd8ac91493fa7bb11e1772698e42
This one changes the size of an allocation, allowing it to be relocated.
However, the implementation will still return `null` if it would be
equivalent to:

```
new = alloc
memcpy(new, old)
free(old)
```
Mainly this prepares for taking advantage of `mremap`, which I thought
would be a bigger deal but apparently is only available on Linux. Still,
we should use it on Linux.
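From the caller's side, that contract looks roughly like the sketch
below; the method name `remap` and its exact signature are assumptions,
not taken from this diff:

```zig
const std = @import("std");

// Grow an allocation, assuming new_len >= old.len.
fn grow(gpa: std.mem.Allocator, old: []u8, new_len: usize) ![]u8 {
    // The allocator returns null instead of silently degrading to the
    // alloc+memcpy+free sequence above; the caller does the copy itself.
    if (gpa.remap(old, new_len)) |relocated| return relocated;
    const new = try gpa.alloc(u8, new_len);
    @memcpy(new[0..old.len], old);
    gpa.free(old);
    return new;
}
```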
no longer causes compilation failure.
This also addresses the problem of high map count causing OOM by
choosing a page size of 2MiB for most targets when the page_size_max is
smaller than this number.
This allocator now supports alignments greater than page size, with the
same implementation as it used before.
This is a partial revert of ceb0a632cfd6a4eada6bd27bf6a3754e95dcac86.
It looks like VirtualAlloc2 has better solutions to this problem,
including features such as MEM_RESERVE_PLACEHOLDER and MEM_LARGE_PAGES.
This possibility can be investigated as a follow-up task.
This keeps the implementation matching the master branch; however, it
introduces a compile error that applications can work around by
explicitly setting page_size_max and page_size_min to match their
machine's settings, in the case that those values are not already
equal.
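For example, a minimal sketch of that workaround, assuming the options
surface through `std.Options` under the same `page_size_min` and
`page_size_max` names:

```zig
const std = @import("std");

pub const std_options: std.Options = .{
    // Hypothetical values: pin both bounds to this machine's real page
    // size so that min == max and the compile error disappears.
    .page_size_min = 4096,
    .page_size_max = 4096,
};
```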
I plan to rework this allocator in a follow-up enhancement with the goal
of reducing total active memory mappings.
* fix merge conflicts
* rename the declarations
* reword documentation
* extract FixedBufferAllocator to separate file
* take advantage of locals
* remove the assertion about max alignment in the Allocator API, leaving
  it defined by each Allocator implementation
* fix non-inline function call in start logic
The GeneralPurposeAllocator implementation is totally broken because it
uses global state, but I didn't address that in this commit.
heap.zig: define new default page sizes
heap.zig: add min/max_page_size and their options
lib/std/c: add miscellaneous declarations
heap.zig: add pageSize() and its options
switch to new page sizes, especially in GPA/stdlib
mem.zig: remove page_size
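A hedged sketch of the resulting API shape (declaration names per the
commit subjects above and the later rename; treat them as assumptions,
not exact signatures):

```zig
const std = @import("std");

comptime {
    // Comptime-known lower/upper bounds replacing the old
    // std.mem.page_size constant.
    _ = std.heap.page_size_min;
    _ = std.heap.page_size_max;
}

pub fn main() void {
    // Runtime query; it can collapse to a comptime constant when the
    // two bounds above are equal.
    std.debug.print("page size: {d}\n", .{std.heap.pageSize()});
}
```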
The ZON PR (#20271) is causing these tests to inexplicably fail. It
doesn't seem like that PR is what's breaking GPA, so these tests are now
disabled. This is tracked by #22731.
This allocator has no purpose since it cannot truly fulfill the role of
page allocation, and std.heap.wasm_allocator is better both in terms of
performance and code size.
This commit redefines `std.heap.page_allocator` to be less strict:
"On operating systems that support memory mapping, this allocator makes
a syscall directly for every allocation and free. Otherwise, it falls
back to the preferred singleton for the target. Thread-safe."
This now matches how it was actually being implemented, and matches its
use sites - which are mainly as the backing allocator for
`std.heap.ArenaAllocator`.
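For reference, a small sketch of that use site; nothing here beyond the
standard `std.heap` API:

```zig
const std = @import("std");

pub fn main() !void {
    // page_allocator maps/unmaps directly per allocation, which is fine
    // here: the arena batches everything into a single deinit.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

    const buf = try allocator.alloc(u8, 123);
    _ = buf; // released wholesale by arena.deinit()
}
```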
Most of these changes seem like improvements. The PDB thing had a TODO
saying it used to crash; I anticipate it works now, we'll see what CI
does.
The `std.os.uefi` field renames are a notable breaking change.
The compiler actually doesn't need any functional changes for this: Sema
does reification based on the tag indices of `std.builtin.Type` already!
So, no zig1.wasm update is necessary.
This change is necessary to disallow name clashes between fields and
decls on a type, which is a prerequisite of #9938.
The wrong `size_class` was used when fetching stack traces from empty
buckets. The `size_class` would always be the maximum value after
exhausting the search of active buckets rather than the actual
`size_class` of the allocation.
Empty buckets have their `alloc_cursor` set to `slot_count` to allow the
size class to be calculated later. This happens deep within the free
function.
This adds a helper and a test to verify that the size class of empty
buckets is indeed recoverable.
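A hedged sketch of that recovery trick; the bucket geometry assumed here
(one 4096-byte page per bucket, so slot_count = page_size / size_class)
is for illustration only, not the exact GeneralPurposeAllocator layout:

```zig
const std = @import("std");

const page_size: usize = 4096; // assumption for illustration

fn slotCount(size_class: usize) usize {
    return page_size / size_class;
}

// For an empty bucket, alloc_cursor == slot_count, so inverting the
// formula above recovers the size class.
fn sizeClassOfEmptyBucket(alloc_cursor: usize) usize {
    return page_size / alloc_cursor;
}

test "size class of an empty bucket is recoverable" {
    const size_class: usize = 128;
    const alloc_cursor = slotCount(size_class); // as set by free()
    try std.testing.expectEqual(size_class, sizeClassOfEmptyBucket(alloc_cursor));
}
```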
Follow up to #19079, which made test names fully qualified.
This fixes tests that had now-redundant information in their test names. For example, here's a fully qualified test name before the changes in this commit:
"priority_queue.test.std.PriorityQueue: shrinkAndFree"
and the same test's name after the changes in this commit:
"priority_queue.test.shrinkAndFree"
This eliminates some simple usages of `usingnamespace` in the standard
library. This construct may in future be removed from the language, and
is generally an inappropriate way to formulate code. It is also
problematic for incremental compilation, which may not initially support
projects using it.
I wasn't entirely sure what the appropriate namespacing for the types in
`std.os.uefi.tables` would be, so I opted to preserve the current
namespacing, meaning this is not a breaking change. It's possible some
of the moved types should instead be namespaced under `BootServices`
etc, but this can be a future enhancement.
This reverts commit 0c99ba1eab63865592bb084feb271cd4e4b0357e, reversing
changes made to 5f92b070bf284f1493b1b5d433dd3adde2f46727.
This caused a CI failure when it landed in the master branch due to a
128-bit `@byteSwap` in std.mem.
Follow up to #17383. This is a minor optimization that only matters when a small allocation is resized/free'd soon after it is allocated.
The only real difference I was able to observe with this was via a synthetic benchmark that allocates a full bucket and then frees all but one of the slots, over and over in a loop:
Debug build:
Benchmark 1 (9 runs): gpa-degen-master.exe
measurement mean ± σ min … max outliers delta
wall_time 575ms ± 5.19ms 569ms … 583ms 0 ( 0%) 0%
peak_rss 43.8MB ± 1.37KB 43.8MB … 43.8MB 1 (11%) 0%
Benchmark 2 (10 runs): gpa-degen-search-cur.exe
measurement mean ± σ min … max outliers delta
wall_time 532ms ± 5.55ms 520ms … 539ms 0 ( 0%) ⚡- 7.5% ± 0.9%
peak_rss 43.8MB ± 65.2KB 43.8MB … 44.0MB 1 (10%) + 0.0% ± 0.1%
ReleaseFast build:
Benchmark 1 (129 runs): gpa-degen-master-release.exe
measurement mean ± σ min … max outliers delta
wall_time 38.9ms ± 1.12ms 36.7ms … 42.4ms 8 ( 6%) 0%
peak_rss 23.2MB ± 2.39KB 23.2MB … 23.2MB 0 ( 0%) 0%
Benchmark 2 (151 runs): gpa-degen-search-cur-release.exe
measurement mean ± σ min … max outliers delta
wall_time 33.2ms ± 999us 31.9ms … 36.3ms 20 (13%) ⚡- 14.7% ± 0.6%
peak_rss 23.2MB ± 2.26KB 23.2MB … 23.2MB 0 ( 0%) + 0.0% ± 0.0%
Before this commit, GeneralPurposeAllocator could run into incredibly degraded performance in scenarios where the bucket count for a particular size class grew to be large. For example, if exactly `slot_count` allocations of a single size class were performed and then all of them were freed except one, then the bucket for those allocations would have to be kept around indefinitely. If that pattern of allocation were done over and over, then the bucket list for that size class could grow incredibly large.
This allocation pattern has been seen in the wild: https://github.com/Vexu/arocc/issues/508#issuecomment-1738275688
In that case, the length of the bucket list for the `128` size class would grow to tens of thousands of buckets, causing the Debug runtime to balloon to ~8 minutes, whereas with the c_allocator the Debug runtime would be ~3 seconds.
To address this, there are three different changes happening here:
1. std.Treap is used instead of a doubly linked list for the lists of buckets. This takes the time complexity of searchBucket [used in resize and free] from O(n) to O(log n), but increases the time complexity of insert from O(1) to O(log n) [before, all new buckets would get added to the head of the list]. Note: Any data structure with O(log n) or better search/insert/delete would also work for this use-case.
2. If the 'current' bucket for a size class is full, the list of buckets is never traversed and instead a new bucket is allocated. Previously, traversing the bucket list could only find a non-full bucket in specific circumstances, and only because of a separate optimization that is no longer needed (before, after any resize/free, the affected bucket would be moved to the head of the bucket list to allow searchBucket to perform better on average). Now, the current_bucket for each size class only changes when either (1) the current bucket is emptied/freed, or (2) a new bucket is allocated (due to the current bucket being full or null). Because each bucket's alloc_cursor only moves forward (i.e. slots within a bucket are never re-used), we can therefore always know that any bucket besides the current_bucket will be full, so traversing the list in the hopes of finding an existing non-full bucket is entirely pointless.
3. Size + alignment information for small allocations has been moved into the Bucket data instead of keeping it in a separate HashMap. This offers an improvement over the HashMap since whenever we need to get/modify the length/alignment of an allocation it's extremely likely we will already have calculated any bucket-related information necessary to get the data.
The first change is the most relevant and accounts for most of the benefit here. Also note that the overall functionality of GeneralPurposeAllocator is unchanged.
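A minimal sketch of the std.Treap shape that change 1 relies on; the
plain integer keys here stand in for bucket addresses:

```zig
const std = @import("std");

const Buckets = std.Treap(usize, std.math.order);

test "treap-backed bucket search sketch" {
    var buckets: Buckets = .{};
    var node: Buckets.Node = undefined;

    // Insert is now O(log n) (was O(1) at the list head)...
    var entry = buckets.getEntryFor(0x1000);
    entry.set(&node);

    // ...but search drops from O(n) to O(log n).
    const hit = buckets.getEntryFor(0x1000);
    try std.testing.expect(hit.node != null);
    const miss = buckets.getEntryFor(0x2000);
    try std.testing.expect(miss.node == null);
}
```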
In the degraded `arocc` case, these changes bring Debug performance from ~8 minutes to ~20 seconds.
Benchmark 1: test-master.bat
Time (mean ± σ): 481.263 s ± 5.440 s [User: 479.159 s, System: 1.937 s]
Range (min … max): 477.416 s … 485.109 s 2 runs
Benchmark 2: test-optim-treap.bat
Time (mean ± σ): 19.639 s ± 0.037 s [User: 18.183 s, System: 1.452 s]
Range (min … max): 19.613 s … 19.665 s 2 runs
Summary
'test-optim-treap.bat' ran
24.51 ± 0.28 times faster than 'test-master.bat'
Note: Much of the time taken on Windows in this particular case is related to gathering stack traces. With `.stack_trace_frames = 0` the runtime goes down to 6.7 seconds, which is a little more than 2.5x slower compared to when the c_allocator is used.
These changes may or may not introduce a slight performance regression in the average case:
Here's the standard library tests on Windows in Debug mode:
Benchmark 1 (10 runs): std-tests-master.exe
measurement mean ± σ min … max outliers delta
wall_time 16.0s ± 30.8ms 15.9s … 16.1s 1 (10%) 0%
peak_rss 42.8MB ± 8.24KB 42.8MB … 42.8MB 0 ( 0%) 0%
Benchmark 2 (10 runs): std-tests-optim-treap.exe
measurement mean ± σ min … max outliers delta
wall_time 16.2s ± 37.6ms 16.1s … 16.3s 0 ( 0%) 💩+ 1.3% ± 0.2%
peak_rss 42.8MB ± 5.18KB 42.8MB … 42.8MB 0 ( 0%) + 0.1% ± 0.0%
And on Linux:
Benchmark 1: ./test-master
Time (mean ± σ): 16.091 s ± 0.088 s [User: 15.856 s, System: 0.453 s]
Range (min … max): 15.870 s … 16.166 s 10 runs
Benchmark 2: ./test-optim-treap
Time (mean ± σ): 16.028 s ± 0.325 s [User: 15.755 s, System: 0.492 s]
Range (min … max): 15.735 s … 16.709 s 10 runs
Summary
'./test-optim-treap' ran
1.00 ± 0.02 times faster than './test-master'
Now that allocator.resize() is allowed to fail, programs may wish to
test code paths that handle resize() failure. The simplest way to do
this now is to replace the vtable of the testing allocator with one
that uses Allocator.noResize for the 'resize' function pointer.
An alternative way to support this testing capability is to augment the
FailingAllocator (which is already useful for testing allocation failure
scenarios) to intentionally fail on calls to resize(). To do this, add a
'resize_fail_index' parameter to the FailingAllocator that causes
resize() to fail after the given number of calls.
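A sketch of the second approach in use; the config field name follows
this description, though the `init` signature has varied across std
versions:

```zig
const std = @import("std");

test "handling resize() failure" {
    var failing = std.testing.FailingAllocator.init(std.testing.allocator, .{
        .resize_fail_index = 0, // fail the very first call to resize()
    });
    const allocator = failing.allocator();

    const buf = try allocator.alloc(u8, 16);
    defer allocator.free(buf);

    // resize() signals failure by returning false; the caller is then
    // expected to fall back to its own alloc + copy + free path.
    try std.testing.expect(!allocator.resize(buf, 32));
}
```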
Created from a conversation with @andrewrk on IRC: memory leaks when using ArrayList can be inconvenient to debug when the stack trace frame count is 4, because the entirety of the printed trace is within the Zig stdlib and not in the user's calling stack. Increasing this to 6 for Debug builds gives 2 frames of user code. I increased the frame count for tests as well by the equivalent factor, but I'm unconvinced that's actually desirable.
Implements issue #6451.
This was needed to support allocation on Plan 9 and now other operating
systems like DOS can also use it.
It is a modified version of the WasmAllocator, since wasm also uses an
sbrk-esque allocation system.
This commit also adds the necessary system bits for sbrk to work on plan 9.
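A hedged sketch of how such an allocator could be wired up, assuming
`std.heap.SbrkAllocator` takes a comptime sbrk-like function and exposes
a `vtable` the way WasmAllocator does:

```zig
const std = @import("std");

// Hypothetical break pointer for a target with no real sbrk syscall;
// this toy version grows forever and never fails.
var program_break: usize = 0x100000;

fn mySbrk(n: usize) usize {
    const old = program_break;
    program_break += n;
    return old; // caller receives the previous break as the new block
}

const Sbrk = std.heap.SbrkAllocator(mySbrk);

// Mirrors how wasm_allocator is exposed: no per-instance state, just
// the generated vtable.
pub const my_allocator: std.mem.Allocator = .{
    .ptr = undefined,
    .vtable = &Sbrk.vtable,
};
```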