Failing to forward free calls to the underlying allocator makes
`ValidationAllocator` unusable for testing allocators while checking for
leaks. This change allows allocators that wrap `std.testing.allocator`
to be tested with `std.heap.testAllocator()` in test decls without
spuriously reporting leaks.
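As a minimal sketch of the now-working scenario (assuming `std.mem.validationWrap` as the `ValidationAllocator` constructor and the current `Allocator` interface):

```zig
const std = @import("std");

test "frees are forwarded through ValidationAllocator" {
    // Wrap the leak-checking test allocator in a ValidationAllocator.
    var validator = std.mem.validationWrap(std.testing.allocator);
    const gpa = validator.allocator();

    const buf = try gpa.alloc(u8, 16);
    // With free calls forwarded to the underlying allocator, this free
    // reaches std.testing.allocator and no leak is reported at test end.
    gpa.free(buf);
}
```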
This allows users to choose which version they need for their particular use case, as the previous default (now the 'any' version) (1) was not always the desired type of delimiter and (2) performed worse than the scalar version when the delimiter was a single item.
This allows users to choose which version they need for their particular use case, as the previous default (now the 'full' version) (1) was not always the desired type of delimiter and (2) performed worse than the scalar version when the delimiter was a single item.
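A rough sketch of the choice this enables; the variant names below (`tokenizeScalar`, `tokenizeAny`) follow current `std.mem` naming and may not exactly match the names introduced by this change:

```zig
const std = @import("std");

test "pick the delimiter flavor that matches the use case" {
    // Single-item delimiter: the scalar version avoids the per-item set lookup.
    var words = std.mem.tokenizeScalar(u8, "one two  three", ' ');
    try std.testing.expectEqualStrings("one", words.next().?);

    // Set of possible delimiters: the 'any' version treats each listed byte
    // as a separator.
    var fields = std.mem.tokenizeAny(u8, "a,b;c", ",;");
    try std.testing.expectEqualStrings("a", fields.next().?);
}
```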
This introduces a parallel set of functions to the std.mem.indexOfAny
functions. They simply invert the logic, yielding the first/last index whose element is *not* contained in the `values` slice.
Inverting this logic is useful when you are attempting to determine the
initial span which contains only characters in a particular set.
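For illustration, a sketch assuming the signature mirrors `indexOfAny`:

```zig
const std = @import("std");

test "indexOfNone yields the first index whose element is outside the set" {
    const digits = "0123456789";
    // "123" is the initial span made up only of bytes in `digits`;
    // index 3 holds the first element *not* contained in the set.
    try std.testing.expectEqual(@as(?usize, 3), std.mem.indexOfNone(u8, "123abc", digits));
    // If every element is in the set, there is no such index.
    try std.testing.expectEqual(@as(?usize, null), std.mem.indexOfNone(u8, "42", digits));
}
```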
* Document the `indexOfNone` family.
These descriptions are somewhat brief, but the functions themselves are
also simple enough to describe in such a way.
The majority of these are in comments, some in doc comments which might
affect the generated documentation, and a few in parameter names -
nothing that should be breaking, however.
* docs(std.math): elaborate on difference between absCast and absInt
* docs(std.rand.Random.weightedIndex): elaborate on likelihood
I think this makes it easier to understand.
* langref: add small reminder
* docs(std.fs.path.extension): brevity
* docs(std.bit_set.StaticBitSet): mention the specific types
* std.debug.TTY: explain what purpose this struct serves
This should also make it clearer that this struct is not supposed to provide unrelated terminal-manipulation functionality such as setting the cursor position: terminals are complicated, and we should keep this struct simple and focused on debugging.
* langref(package listing): brevity
* langref: explain what exactly `threadlocal` causes to happen
* std.array_list: link between swapRemove and orderedRemove
Maybe this can serve as a TL;DR and make it easier to decide which one to use.
* PrefetchOptions.locality: clarify docs that this is a range
This confused me previously; I thought I could only use either 0 or 3.
* fix typos and more
* std.builtin.CallingConvention: document some CCs
* langref: explain possibly cryptic names
I think it helps to know what exactly these acronyms (@clz and @ctz) and
abbreviations (@popCount) mean.
* variadic function error: add missing preposition
* std.fmt.format docs: nicely hyphenate
* help menu: say what to optimize for
I think this is slightly more specific than just calling it
"optimizations". These are speed optimizations. I used the word
"performance" here.
ccf670c made using `return` from within a comptime block in a non-inline
function illegal, since it is a use of runtime control flow in a
comptime block. It is allowed if the function in question is `inline`,
since no actual control flow occurs in this case. A few functions from
std (notably `std.fmt.comptimePrint`) needed to be marked `inline` to
support this change.
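A sketch of the affected pattern (the function below is illustrative; in std it was `std.fmt.comptimePrint` and friends that needed the `inline` marking):

```zig
const std = @import("std");

// In a non-inline function, returning from inside a comptime block is now a
// compile error (runtime control flow leaving a comptime block):
//
//     fn square(comptime x: u32) u32 {
//         comptime {
//             return x * x; // error after ccf670c
//         }
//     }
//
// Marking the function `inline` keeps it legal, since an inline call involves
// no actual runtime control flow (this is the comptimePrint pattern):
inline fn square(comptime x: u32) u32 {
    comptime {
        return x * x;
    }
}

test "inline fn may return from inside a comptime block" {
    try std.testing.expectEqual(@as(u32, 25), square(5));
}
```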
These functions (std.mem.span and std.mem.len) are currently a footgun when
working with pointers to arrays and with slices: they just return the stated
length of the array/slice without iterating to find the first sentinel, even
if the array/slice is a sentinel-terminated type.
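A sketch of the footgun; the pre-change behavior in the comment is paraphrased from the description above:

```zig
const std = @import("std");

test "an early sentinel is ignored by the stated length" {
    // Stated length 4; the sentinel value 0 already appears at index 2.
    const buf = [4:0]u8{ 'a', 'b', 0, 'c' };
    // std.mem.len(&buf) would simply report the stated length, 4, without
    // scanning for the first sentinel, whereas sliceTo() scans for it:
    try std.testing.expectEqual(@as(usize, 2), std.mem.sliceTo(&buf, 0).len);
}
```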
Judging from the quite small list of places in the standard library and
compiler where this change breaks existing code, the new code looks more
readable in all cases.
The usage of std.mem.span/len was totally unneeded in most of the cases
affected by this breaking change.
We could remove these functions entirely in favor of other existing
functions in std.mem such as std.mem.sliceTo(), but that would be a
somewhat nasty breaking change as std.mem.span() is very widely used for
converting sentinel-terminated pointers to slices. It is, however, not at
all widely used for anything else.
Therefore I think it is better to break these few non-standard and
potentially incorrect usages of these functions now, and then, at some later
time, if deemed worthwhile, remove the functions entirely.
If we wait for at least a full release cycle so that everyone adapts to
this change first, updating for the removal could be a simple find and
replace without needing to worry about the semantics.
I originally started monkeying with this because std.mem.zeroes doesn't support sentinel-terminated const slices even with defaults in 0.10.x.
I see that std.mem.zeroes was modified in #13256 to allow setting these slices to "".
That got me partway to where I wanted, but there was still an issue: fields whose types are structs wouldn't get their defaults.
So now, when iterating struct fields looking for default values, if a field has no default value and its type is .Struct, it delegates to a recursive call to zeroInit.
* Initialize struct fields in zeroInit exactly once
In my changes, similar to the previous implementation, the priority order for fields being initialized is:
1. If the `init` argument is a tuple, the nth element corresponding to the nth field of the struct.
2. Otherwise, if the `init` argument is not a tuple, try to find a matching field name on `init` and use that field.
3. If the field has a default value, initialize with that value.
4. Fall back to what the field would have been initialized to via a recursive call to `std.mem.zeroInit`.
But instead of initializing a default instance of the struct and then running multiple passes over it, the init method is chosen per-field and each field is initialized exactly once.
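A sketch of the resulting behavior under these rules (the types and fields below are illustrative):

```zig
const std = @import("std");

const Inner = struct {
    a: u32 = 7, // rule 3: default value is honored
    b: u32, // rule 4: falls back to a zeroed value
};

const Outer = struct {
    inner: Inner, // no default, struct type: recursive zeroInit (rule 4)
    flag: bool = true, // rule 3
};

test "zeroInit initializes each field exactly once, recursing into structs" {
    const v = std.mem.zeroInit(Outer, .{});
    try std.testing.expectEqual(@as(u32, 7), v.inner.a);
    try std.testing.expectEqual(@as(u32, 0), v.inner.b);
    try std.testing.expect(v.flag);
}
```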
* Initialize `big_align` with 1 as 0 is not a valid alignment.
* Add an assert to `alignForwardGeneric` to catch this issue earlier.
* Refactor valid alignment checks to call a more descriptive function.
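For reference, a sketch of the invariant those checks enforce (the helper below is illustrative, not necessarily the function used in std):

```zig
const std = @import("std");

// A valid alignment is a nonzero power of two; zero in particular is rejected,
// which is why big_align now starts at 1.
fn isValidAlignment(alignment: usize) bool {
    return alignment != 0 and (alignment & (alignment - 1)) == 0;
}

test "zero is not a valid alignment" {
    try std.testing.expect(!isValidAlignment(0));
    try std.testing.expect(isValidAlignment(1));
    try std.testing.expect(isValidAlignment(16));
    try std.testing.expect(!isValidAlignment(12));
}
```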
* Improve and remove duplicate doNotOptimizeAway() implementations
We currently have two doNotOptimizeAway() implementations, one in
std.math and the other one in std.mem.
Maybe we should deprecate one. In the meantime, the std.math one
now just calls the std.mem one.
In a comptime environment, just ignore the value. Previously,
std.mem.doNotOptimizeAway() did not work at comptime.
If the value fits in a CPU register, just tell the compiler we
need that value to be computed, without clobbering anything else.
Only clobber all possibly escaped memory on pointers or large arrays.
Also add tests along the way, since we didn't have any (well, only
indirect ones).
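A simplified sketch of that strategy, not the exact std.mem implementation (0.11-era inline asm syntax assumed):

```zig
const std = @import("std");

// Register-sized value: an empty asm block that takes the value as an input
// forces the compiler to actually compute it, without clobbering anything else.
fn keepValue(val: usize) void {
    asm volatile (""
        :
        : [val] "r" (val),
    );
}

// Pointer (or large aggregate passed by address): additionally clobber
// "memory" so stores through the pointer cannot be elided.
fn keepMemory(ptr: anytype) void {
    asm volatile (""
        :
        : [ptr] "r" (ptr),
        : "memory"
    );
}

test "doNotOptimizeAway-style barriers" {
    var x: usize = 0;
    x += 1;
    keepValue(x);
    keepMemory(&x);
}
```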
There are still a few occurrences of "stage1" in the standard library
and self-hosted compiler source; however, these instances need a bit
more careful inspection to ensure nothing breaks.
Instead of making the memory alignment functions more complicated, I
added more API documentation for their existing semantics.
Closes #12118. Closes #12135.
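For reference, a sketch of the semantics those docs describe; the helper below is illustrative and just mirrors the usual round-up-to-a-power-of-two formula:

```zig
const std = @import("std");

// Round `addr` up to the next multiple of `alignment` (a nonzero power of two);
// an already-aligned address is returned unchanged.
fn alignForwardSketch(addr: usize, alignment: usize) usize {
    std.debug.assert(alignment != 0 and std.math.isPowerOfTwo(alignment));
    return (addr + alignment - 1) & ~(alignment - 1);
}

test "align-forward semantics: round up, aligned addresses unchanged" {
    try std.testing.expectEqual(@as(usize, 16), alignForwardSketch(11, 8));
    try std.testing.expectEqual(@as(usize, 16), alignForwardSketch(16, 8));
}
```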