The buffer `buf` contains N (= `slice_sizes.len`) slices followed by the
N null-terminated arguments. Those arguments are stored in the `contents`
array list, so their null terminators are already counted in
`contents_slice.len`. Thus, the size of `buf` should be:
`@sizeOf([]u8) * slice_sizes.len + contents_slice.len`
instead of:
`@sizeOf([]u8) * slice_sizes.len + contents_slice.len + slice_sizes.len`
which counts the terminators twice.
This bug was found thanks to the gpa allocator, which checks that the freed
size matches the allocated size for large allocations.
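A minimal sketch of the corrected size calculation (the identifiers `contents`, `contents_slice`, and the argument values are made up for illustration, not taken from the actual code):

```
const std = @import("std");

test "buf size: slice headers plus argument bytes" {
    const allocator = std.testing.allocator;
    const args = [_][]const u8{ "one", "two" };

    var contents = std.ArrayList(u8).init(allocator);
    defer contents.deinit();
    for (args) |arg| {
        try contents.appendSlice(arg);
        try contents.append(0); // the null terminator is stored in `contents`
    }
    const contents_slice = contents.items;

    // N slice headers plus the argument bytes; the N terminators are already
    // counted in `contents_slice.len`, so no extra `+ slice_sizes.len`.
    const buf_size = @sizeOf([]u8) * args.len + contents_slice.len;
    const buf = try allocator.alloc(u8, buf_size);
    defer allocator.free(buf);
    try std.testing.expectEqual(buf_size, buf.len);
}
```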
I found that switching from my custom Guid parser to the one in std increased zigwin32 build times substantially (from 40 seconds to over 10 minutes). More information can be found in the benchmark PR I created here: https://github.com/ziglang/gotta-go-fast/pull/21 . This PR ports my GUID parser to std so all projects can leverage the faster comptime performance.
This commit fixes an out-of-bounds write that can occur when
formatting certain float values. The write messes up the stack and
causes incorrect results, segfaults, or nothing at all, depending on the
optimization mode used.
The `errol` function writes the digits of the float into `buffer`
starting from index 1, leaving index 0 untouched, and returns `buffer[1..]`
and the exponent. This is because `roundToPrecision` relies on index 0 being
unused in case the rounding adds a digit (e.g. rounding 999.99
to 1000.00). When this happens, pointer arithmetic is used
[here](0e6d2184ca/lib/std/fmt/errol.zig (L61-L65))
to access index 0 and put the ones digit in the right place.
However, `errol3u` contains two special cases: `errolInt` and `errolFixed`,
which return from the function early. For these two special cases,
index 0 is never reserved, and the return value contains `buffer`
instead of `buffer[1..]`. This causes the pointer arithmetic in
`roundToPrecision` to write out of bounds, which in the case of
`std.fmt.formatFloatDecimal` messes up the stack and causes undefined behavior.
The fix is to move the slicing of `buffer` to `buffer[1..]` from `errol3u`
to `errol` so that both the default and the special cases operate on the sliced
buffer.
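To make the invariant concrete, here is a minimal illustration (not the actual errol code) of why callers must be handed `buffer[1..]`:

```
const std = @import("std");

test "index 0 is reserved for a rounding carry" {
    var buffer: [8]u8 = undefined;
    // errol-style: the digits of 999.99 go into buffer[1..]; index 0 stays free.
    std.mem.copy(u8, buffer[1..6], "99999");
    var digits: []u8 = buffer[1..6];

    // Rounding 999.99 up to 1000.00 adds a leading digit. roundToPrecision
    // steps one byte before the slice it was given, which is only in bounds
    // because that slice is buffer[1..], not buffer itself.
    digits = (digits.ptr - 1)[0 .. digits.len + 1];
    digits[0] = '1';
    std.mem.set(u8, digits[1..], '0');
    try std.testing.expectEqualSlices(u8, "100000", digits);
}
```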
`ArrayList.ensureTotalCapacityPrecise` uses `Allocator.reallocAtLeast` under
the hood, which can return more capacity than `new_capacity` when `alignment
!= @alignOf(T)`. This implementation of `clone` ensures that case is handled
correctly.
Thanks @Vexu and @squeek502 for pointing this out.
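A brief usage sketch (a hypothetical test, not from the PR); note that the clone's capacity may still exceed `items.len` when the allocator hands back more than requested:

```
const std = @import("std");

test "ArrayList.clone copies the items" {
    var list = std.ArrayList(u8).init(std.testing.allocator);
    defer list.deinit();
    try list.appendSlice("hello");

    var copy = try list.clone();
    defer copy.deinit();
    try std.testing.expectEqualSlices(u8, list.items, copy.items);
}
```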
Before this commit, compiling an empty main with Stage 2 on macOS x86_64 results in
```
../stage2/bin/zig build-exe -ODebug -fLLVM empty_main.zig
error: sub-compilation of compiler_rt failed
[...]/zig/stage2/lib/zig/std/special/compiler_rt/os_version_check.zig:26:10: error: TODO: Sema.zirStructInit for runtime-known struct values
```
By assigning the value to a variable, we can sidestep the issue for now.
This commit updates stage2 to enforce the property that the syntax
`fn()void` is a function *body*, not a *pointer*. To get a pointer, the
syntax `*const fn()void` is required.
ZIR puts function alignment into the func instruction rather than the
decl because this way it makes it into function types. The LLVM backend
respects function alignments.
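For reference, a minimal example of the kind of function alignment being discussed (the function name is made up):

```
// align(...) written on the function declaration itself is what ends up in
// the func instruction and, through it, in the function's type.
fn hotLoop() align(64) void {}

test "call an over-aligned function" {
    hotLoop();
}
```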
Struct and Union have `fieldSrcLoc` methods to help look up the source
locations of their fields. These trigger full loading, tokenization, and
parsing of source files, so they should only be called once it is confirmed
that an error message needs to be printed.
There are some nice new error hints for explaining why a type is
required to be comptime, particularly for structs that contain function
body types.
`Type.requiresComptime` is now moved into Sema because it can fail and
might need to trigger field type resolution. Comptime pointer loading
takes into account types that do not have a well-defined memory layout
and does not try to compute a byte offset for them.
`fn()void` syntax no longer secretly makes a pointer. You get a function
body type, which requires comptime. However, a pointer to a function body
can be runtime-known (obviously).
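A sketch of the new rules (the example itself is made up for illustration): a field of type `fn () void` would force the struct to be comptime-only, while `*const fn () void` can hold a runtime-known value.

```
fn noop() void {}

// Under the new semantics this field is a function *pointer*, so it is fine
// at runtime. Declaring it as `fn () void` instead would make the struct
// require comptime, because that is a function body type.
const Callbacks = struct {
    on_event: *const fn () void,
};

test "function pointers use *const fn () void" {
    const cbs = Callbacks{ .on_event = &noop };
    cbs.on_event();
}
```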
Compile errors that report "expected pointer, found ..." are factored
out into the convenience functions `checkPtrOperand` and `checkPtrType` and
now include a note about function pointers.
Implemented `Value.hash` for functions, enum literals, and undefined values.
stage1 is not updated to this (yet?), so some workarounds and disabled
tests are needed to keep everything working. Should we update stage1 to
these new type semantics? Yes, probably, because I don't want to add too
much conditional compilation logic in the std lib for the different
backends.
This reverts commit 1f10cf4edf2b645e63dedc42f5d7475914bf2311.
Re-opens #10618
I want to solve this a different way. `align(S)` where S is a 0-byte
type should work in this context.
This also caused issues such as
https://github.com/Vexu/arocc/issues/221
An attempt to normalize some of the function names in build.zig: normalize
add*Dir to add*Path, and use "Library" instead of the "Lib" abbreviation.
The PR does not remove the old names, only adds the new normalized ones to
facilitate a transition period.
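A hedged example of the renamed calls in a build.zig of that era (the executable name and paths are made up; `addIncludePath` and `addLibraryPath` are the normalized spellings of `addIncludeDir` and `addLibPath`):

```
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    // New normalized names; the old spellings keep working during the
    // transition period.
    exe.addIncludePath("vendor/include");
    exe.addLibraryPath("vendor/lib");
    exe.install();
}
```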
The size of a GUID is not platform-dependent; it's always a fixed number of bits. So I've updated guid to use fixed-width integer types rather than platform-dependent C integer types.
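For reference, the usual fixed-width layout (a sketch following the conventional Windows GUID field names, not necessarily the exact code in the PR):

```
const std = @import("std");

// 4 + 2 + 2 + 8 bytes = 16 bytes (128 bits) on every platform.
const Guid = extern struct {
    Data1: u32,
    Data2: u16,
    Data3: u16,
    Data4: [8]u8,
};

comptime {
    std.debug.assert(@sizeOf(Guid) == 16);
}
```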
* AstGen: use Ast.zig helper methods to avoid copy-pasting token counting logic
- take advantage of the `first_doc_comment` field we already have for
param AST nodes
* Add missing ZIR docs
Looking at the assembly generated for BufferedWriter, one can see that it
has to do a lot of work just to copy over some bytes and increase an
offset. This is because LinearFifo is a much more general construct
than what BufferedWriter needs, and the optimizer cannot prove that we
don't need to do this extra work.
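A minimal sketch (assumed, not the actual std code) of how little a buffered write really needs: copy the bytes and bump an index.

```
const std = @import("std");

const TinyBuffer = struct {
    buf: [16]u8 = undefined,
    end: usize = 0,

    // Append as many bytes as fit; returns how many were consumed.
    fn write(self: *TinyBuffer, bytes: []const u8) usize {
        const n = std.math.min(bytes.len, self.buf.len - self.end);
        std.mem.copy(u8, self.buf[self.end..], bytes[0..n]);
        self.end += n;
        return n;
    }
};

test "a buffered write is a copy plus an offset bump" {
    var tb = TinyBuffer{};
    _ = tb.write("hi");
    try std.testing.expectEqualSlices(u8, "hi", tb.buf[0..tb.end]);
}
```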
Replaces the inflate API `inflateStream(reader: anytype, window_slice: []u8)` with
`decompressor(allocator: mem.Allocator, reader: anytype, dictionary: ?[]const u8)` and
`compressor(allocator: mem.Allocator, writer: anytype, options: CompressorOptions)`.
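A hedged round-trip sketch of the new API; only the `decompressor`/`compressor` signatures above come from this change, while the module path (`std.compress.deflate`), the default options value, and the `close`/`deinit`/`reader`/`writer` helpers are assumptions about the surrounding std API of that time:

```
const std = @import("std");
const deflate = std.compress.deflate;

test "compressor/decompressor round trip" {
    const allocator = std.testing.allocator;

    var compressed = std.ArrayList(u8).init(allocator);
    defer compressed.deinit();

    var comp = try deflate.compressor(allocator, compressed.writer(), .{});
    defer comp.deinit();
    try comp.writer().writeAll("hello hello hello");
    try comp.close();

    var fbs = std.io.fixedBufferStream(compressed.items);
    var decomp = try deflate.decompressor(allocator, fbs.reader(), null);
    defer decomp.deinit();

    const out = try decomp.reader().readAllAlloc(allocator, 1024);
    defer allocator.free(out);
    try std.testing.expectEqualSlices(u8, "hello hello hello", out);
}
```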
Read bytes to check expected values instead of reading and hashing them.
Hashing is a waste of time when we can just read and compare.
This also removes a dependency on std.crypto.hash.sha2.Sha256 for tests.