* slab length reduced to 64K
* track freelist length with u8s
* on free(), rotate if freelist length exceeds max_freelist_len
Prevents memory leakage in the scenario where one thread only allocates
and another thread only frees.
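The rotation is essentially a capped freelist. Below is a minimal sketch of the idea, with illustrative slot counts, cap, and names (none of them taken from the std.heap source); locking and the draining done by allocating threads are omitted:

```zig
// Freelist lengths fit in a u8; a free() that would push a list past
// max_freelist_len rotates to another slot instead of growing one list
// without bound, so a thread that only frees cannot pile memory into a
// single ever-growing list.
const max_freelist_len: u8 = 16;

const Node = struct { next: ?*Node = null };

const Freelist = struct {
    head: ?*Node = null,
    len: u8 = 0,
};

var freelists: [4]Freelist = [1]Freelist{.{}} ** 4;

/// Pushes `node` onto the first slot at or after `start_slot` whose list is
/// below the cap, and returns the slot index the caller should keep using.
fn pushFree(start_slot: usize, node: *Node) usize {
    var slot = start_slot;
    var attempts: usize = 0;
    while (freelists[slot].len >= max_freelist_len and attempts < freelists.len) {
        slot = (slot + 1) % freelists.len;
        attempts += 1;
    }
    const list = &freelists[slot];
    node.next = list.head;
    list.head = node;
    list.len +|= 1;
    return slot;
}
```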
In the larger small buckets, the comptime logic that computed the slot count
did not verify that the number it produced was valid. It now does, which
turned this bug into a compile error. Then I fixed the bug by introducing a
"minimum slots per bucket" declaration.
Reversing the earlier decision: the Allocator interface is the correct place
for the memset to undefined, because it lets Allocator implementations bypass
the interface and use a backing allocator directly, skipping the performance
penalty of memsetting the entire allocation, which may be very large and may
consist of valuable zeroed pages.
closes #4298
I don't think these belong in std, at least not in their current form.
If someone wants to add these back I'd like to review the patch before
it lands.
Reverts 629e2e784495dd8ac91493fa7bb11e1772698e42
This one changes the size of an allocation, allowing it to be relocated.
However, the implementation will still return `null` if it would be
equivalent to:

    new = alloc
    memcpy(new, old)
    free(old)
Mainly this prepares for taking advantage of `mremap`, which I thought would
be a bigger deal but apparently is only available on Linux. Still, we should
use it on Linux.
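A sketch of the calling pattern this enables, assuming the `remap` method on `std.mem.Allocator` described above (the helper and its name are illustrative, not part of std):

```zig
const std = @import("std");

/// Grow (or shrink) `old` to `new_len`: let the allocator relocate the
/// allocation if it can do so more cheaply than alloc + memcpy + free;
/// otherwise perform exactly that fallback ourselves.
fn growOrMove(gpa: std.mem.Allocator, old: []u8, new_len: usize) ![]u8 {
    if (gpa.remap(old, new_len)) |relocated| return relocated;
    const new = try gpa.alloc(u8, new_len);
    const copy_len = @min(old.len, new_len);
    @memcpy(new[0..copy_len], old[0..copy_len]);
    gpa.free(old);
    return new;
}
```

With an `mremap`-backed implementation on Linux, the fast path can relocate large mappings without copying; elsewhere `remap` simply returns `null` more often and the fallback runs.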
no longer causes compilation failure.
This also addresses the problem of high map count causing OOM by
choosing a page size of 2MiB for most targets when the page_size_max is
smaller than this number.
This allocator now supports alignments greater than page size, with the
same implementation as it used before.
This is a partial revert of ceb0a632cfd6a4eada6bd27bf6a3754e95dcac86.
It looks like VirtualAlloc2 has better solutions to this problem,
including features such as MEM_RESERVE_PLACEHOLDER and MEM_LARGE_PAGES.
This possibility can be investigated as a follow-up task.
This keeps the implementation matching the master branch; however, it
introduces a compile error that applications can work around by explicitly
setting page_size_max and page_size_min to match their computer's settings,
in the case where those values are not already equal.
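For example, in the application's root source file (a sketch of that workaround; it assumes the overrides are exposed as `std.Options` fields named `page_size_min` and `page_size_max`, and the 16 KiB value is purely illustrative):

```zig
const std = @import("std");

pub const std_options: std.Options = .{
    // Pin both bounds to the machine's actual page size so they are equal
    // and the compile error described above goes away. Field names assumed.
    .page_size_min = 16 * 1024,
    .page_size_max = 16 * 1024,
};

pub fn main() void {}
```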
I plan to rework this allocator in a follow-up enhancement with the goal
of reducing total active memory mappings.
* fix merge conflicts
* rename the declarations
* reword documentation
* extract FixedBufferAllocator to separate file
* take advantage of locals
* remove the assertion about max alignment in the Allocator API, leaving it
  defined by the Allocator implementation
* fix non-inline function call in start logic
The GeneralPurposeAllocator implementation is totally broken because it
uses global state, but I didn't address that in this commit.
* heap.zig: define new default page sizes
* heap.zig: add min/max_page_size and their options
* lib/std/c: add miscellaneous declarations
* heap.zig: add pageSize() and its options (usage sketch below)
* switch to new page sizes, especially in GPA/stdlib
* mem.zig: remove page_size
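A minimal usage sketch for the `pageSize()` addition above, assuming it is exposed as `std.heap.pageSize()` and that call sites migrate to it from the removed `std.mem.page_size`:

```zig
const std = @import("std");

pub fn main() void {
    // Runtime-known on targets where the page size varies; collapses to a
    // comptime-known constant when page_size_min == page_size_max.
    const page_size = std.heap.pageSize();
    std.debug.print("page size: {d} bytes\n", .{page_size});
}
```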
The ZON PR (#20271) is causing these tests to inexplicably fail. It
doesn't seem like that PR is what's breaking GPA, so these tests are now
disabled. This is tracked by #22731.
This allocator has no purpose since it cannot truly fulfill the role of
page allocation, and std.heap.wasm_allocator is better both in terms of
performance and code size.
This commit redefines `std.heap.page_allocator` to be less strict:
"On operating systems that support memory mapping, this allocator makes
a syscall directly for every allocation and free. Otherwise, it falls
back to the preferred singleton for the target. Thread-safe."
This now matches how it was actually being implemented, and matches its
use sites - which are mainly as the backing allocator for
`std.heap.ArenaAllocator`.
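That use site looks roughly like this (illustrative):

```zig
const std = @import("std");

pub fn main() !void {
    // page_allocator issues a mapping syscall per allocation, which is a good
    // fit as the backing allocator for an arena that grabs large chunks.
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

    const buf = try allocator.alloc(u8, 4096);
    @memset(buf, 0);
}
```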
Most of these changes seem like improvements. The PDB thing had a TODO
saying it used to crash; I anticipate it works now, and we'll see what CI
does.
The `std.os.uefi` field renames are a notable breaking change.
The compiler actually doesn't need any functional changes for this: Sema
does reification based on the tag indices of `std.builtin.Type` already!
So, no zig1.wasm update is necessary.
This change is necessary to disallow name clashes between fields and
decls on a type, which is a prerequisite of #9938.
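For illustration, reification keyed on those tag indices (the lower-case field names below assume the post-rename `std.builtin.Type` layout):

```zig
const std = @import("std");

// Sema consumes the tag index of the active union field, so renaming the
// field (e.g. `Int` -> `int`) needs no functional compiler change.
const U7 = @Type(.{ .int = .{ .signedness = .unsigned, .bits = 7 } });

comptime {
    std.debug.assert(U7 == u7);
}
```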