Note that the `_ = Address` statements in tests were previously a no-op,
and now actually check that the type is valid. However, on WASI, the
type is *not* valid.
This commit reverts the handling of partially-undefined values in
bitcasting, so that these bits are instead transformed into an arbitrary
numeric value, as happens on `master` today.
As @andrewrk rightly points out, #19634 has unfortunate consequences
for the standard library, and likely requires more thought. To avoid
a major breaking change, it has been decided to revert this design
decision for now, and make a more informed decision further down the
line.
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value which
pointer accesses cannot exceed, since the parent has no well-defined
layout.
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and element accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
creates a strategy to derive a pointer value using ideally only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
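For illustration, here is a sketch of the kind of access the byte-offset
representation makes straightforward (my own example, not a test from
this branch):

```zig
const std = @import("std");

test "reinterpreted comptime pointer" {
    comptime {
        const arr: [2]u32 = .{ 0x11111111, 0x22222222 };
        const bytes: [*]const u8 = @ptrCast(&arr);
        // Under the new representation this pointer is just
        // "base of `arr` plus @sizeOf(u32) bytes".
        const second: *const u32 = @ptrCast(@alignCast(bytes + @sizeOf(u32)));
        std.debug.assert(second.* == 0x22222222);
    }
}
```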
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the loading and storing logic work somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
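A sketch of the restructuring case mentioned above (assuming a plain
`@ptrCast` is how it is triggered):

```zig
const std = @import("std");

test "restructure a comptime-only array" {
    comptime {
        const a: [3][2]comptime_int = .{ .{ 1, 2 }, .{ 3, 4 }, .{ 5, 6 } };
        // Same six elements, regrouped from rows of 2 into rows of 3.
        const b: *const [2][3]comptime_int = @ptrCast(&a);
        std.debug.assert(b[1][0] == 4);
    }
}
```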
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it; using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
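As an illustration of the pointer case, here is a sketch (my own
example) of a bitcast the old serialize/deserialize strategy would have
demoted to runtime-known, but which the new logic can keep
comptime-known because the pointer bits line up exactly:

```zig
const std = @import("std");

const A = extern struct { p: *const u32, len: usize };
const B = extern struct { q: *const u32, cap: usize };

test "bitcast preserves an aligned pointer" {
    comptime {
        const x: u32 = 42;
        const a: A = .{ .p = &x, .len = 1 };
        // `p` occupies exactly the bits of `q`, so the pointer survives.
        const b: B = @bitCast(a);
        std.debug.assert(b.q.* == 42 and b.cap == 1);
    }
}
```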
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460
A lot of these "shorthand" doc comments were redundant, low-quality
filler content. Better to let the actual modules speak for themselves
with top level doc comments rather than trying to document their
aliases.
In a previous commit I removed a load-bearing use of `@hasDecl` to
detect whether the SO.REUSEPORT option should be set. `@hasDecl` should
not be used for OS feature detection because it can hide bugs.
The new logic checks for the operating system specifically and then does
the thing that is supposed to be done on that operating system directly.
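Roughly, the pattern is the following (a sketch; the exact OS set and
option handling are assumed, not taken from this diff):

```zig
const std = @import("std");
const builtin = @import("builtin");
const posix = std.posix;

fn setReusePort(sock: posix.socket_t) !void {
    // Ask "which OS is this?" instead of `@hasDecl(posix.SO, "REUSEPORT")`,
    // which would silently skip the option wherever the decl is absent.
    switch (builtin.os.tag) {
        .linux, .macos => try posix.setsockopt(
            sock,
            posix.SOL.SOCKET,
            posix.SO.REUSEPORT,
            &std.mem.toBytes(@as(c_int, 1)),
        ),
        else => {},
    }
}
```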
The following test fails since NonCanonical is not handled:

```zig
test "foo" {
    std.net.Ip4Address.resolveIp("1.1.1.1", 0) catch unreachable;
}
```

```
/usr/lib/zig/std/net.zig:240:60: error: switch must handle all possibilities
    if (parse(name, port)) |ip4| return ip4 else |err| switch (err) {
                                                       ^~~~~~
/usr/lib/zig/std/net.zig:240:60: note: unhandled error value: 'error.NonCanonical'
    referenced by:
        test.foo: src/dhcp.zig:383:23
```
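Presumably the fix is to make that switch exhaustive; a sketch (the
exact error set of parse() is assumed here):

```zig
if (parse(name, port)) |ip4| return ip4 else |err| switch (err) {
    error.Overflow,
    error.InvalidEnd,
    error.InvalidCharacter,
    error.Incomplete,
    error.NonCanonical,
    => {},
}
```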
Justification: It is common for non-CPU bound short routines to do
non-blocking accept to eliminate unnecessary delays before subscribing
to data, for example in hardware integration tests.
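For example, something along these lines (a sketch with assumed
std.posix names, not code from this change):

```zig
const std = @import("std");
const posix = std.posix;

/// Returns null instead of blocking when no connection is pending.
/// Assumes `listener` was opened with SOCK.NONBLOCK.
fn tryAccept(listener: posix.socket_t) !?posix.socket_t {
    return posix.accept(listener, null, null, 0) catch |err| switch (err) {
        error.WouldBlock => null, // nothing queued yet; poll again later
        else => |e| return e,
    };
}
```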
Use inline to vastly simplify the exposed API. This allows a
comptime-known endian parameter to be propagated, making extra functions
for a specific endianness completely unnecessary.
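For example (assuming the current std.mem.readInt shape; the old
per-endianness variants were readIntLittle/readIntBig):

```zig
const std = @import("std");

test "one readInt instead of two per-endianness variants" {
    const bytes = [4]u8{ 0x12, 0x34, 0x56, 0x78 };
    // The endian argument is comptime-known at each call site, so the
    // inline function specializes with no runtime branching.
    try std.testing.expectEqual(@as(u32, 0x78563412), std.mem.readInt(u32, &bytes, .little));
    try std.testing.expectEqual(@as(u32, 0x12345678), std.mem.readInt(u32, &bytes, .big));
}
```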
Most of this migration was performed automatically with `zig fmt`. There
were a few exceptions which I had to manually fix:
* `@alignCast` and `@addrSpaceCast` cannot be automatically rewritten
* `@truncate`'s fixup is incorrect for vectors
* Test cases are not formatted, and their error locations change
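An illustrative before/after of the rewrite (my own example, not taken
from the diff):

```zig
const std = @import("std");

test "builtin result types are now inferred" {
    const value: u32 = 0x12345678;
    // Old form: @ptrCast([*]const u8, &value)
    const bytes: [*]const u8 = @ptrCast(&value);
    _ = bytes;
    // Old form: @truncate(u8, value)
    const low: u8 = @truncate(value);
    try std.testing.expectEqual(@as(u8, 0x78), low);
}
```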
Now they use slices or array pointers with any element type instead of
requiring byte pointers.
This is a breaking enhancement to the language.
The safety check for overlapping pointers will be implemented in a
future commit.
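Assuming this refers to @memcpy and @memset, a sketch of the new shape:

```zig
const std = @import("std");

test "typed @memcpy and @memset" {
    var src: [4]u32 = .{ 1, 2, 3, 4 };
    var dst: [4]u32 = undefined;
    // Both operands are typed; no [*]u8 casts required.
    @memcpy(&dst, &src);
    @memset(&src, 0);
    try std.testing.expectEqual(@as(u32, 4), dst[3]);
    try std.testing.expectEqual(@as(u32, 0), src[0]);
}
```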
closes #14040
It seems the original setsockopt code was not effective: I hit the
EINVAL branch when uncommenting this code. setsockopt should be called
before the bind call.
This should fix issue #14900.
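In other words, the ordering should be (a sketch with assumed std.posix
names):

```zig
const std = @import("std");
const posix = std.posix;

fn bindReusable(addr: std.net.Address) !posix.socket_t {
    const sock = try posix.socket(addr.any.family, posix.SOCK.STREAM, 0);
    errdefer posix.close(sock);
    // Must happen before bind(); afterwards the option no longer
    // affects the address-in-use check.
    try posix.setsockopt(
        sock,
        posix.SOL.SOCKET,
        posix.SO.REUSEADDR,
        &std.mem.toBytes(@as(c_int, 1)),
    );
    try posix.bind(sock, &addr.any, addr.getOsSockLen());
    return sock;
}
```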
Co-authored-by: Qun He <hawkbee@qq.com>
This fixes a bug in std.net caused during the introduction of
meta.assumeSentinel, due to the unfortunate semantics of mem.span().
This leaves only 3 remaining uses of meta.assumeSentinel() in the
standard library, each of which could be a simple @ptrCast([*:0]T, foo)
instead. I think this function should likely be removed.
Here's what I landed on for the TLS client. It's 16896 bytes
(max_ciphertext_record_len is 16640). I believe this is the theoretical
minimum size, give or take a few bytes.
These constraints are satisfied:
* a call to the readvAdvanced() function makes at most one call to the
underlying readv function
* iovecs are provided by the API, and used by the implementation for
underlying readv() calls to the socket
* the theoretical minimum number of memcpy() calls is issued in all
circumstances
* decryption is only performed once for any given TLS record
* large read buffers are fully exploited
This is accomplished by using the partial read buffer to store both
cleartext and ciphertext.
The read function has been renamed to readAdvanced since it has slightly
different semantics than typical read functions, specifically regarding
the end-of-file. A higher level read function is implemented on top.
Now, API users may pass small buffers to the read function and
everything will work fine. This is done by re-decrypting the same
ciphertext record with each call to read() until the record is finished
being transmitted.
If the buffer supplied to read() is large enough, then any given
ciphertext record will only be decrypted once, since it decrypts
directly to the read() buffer and therefore does not need any memcpy. On
the other hand, if the buffer supplied to read() is small, then the
ciphertext is decrypted into a stack buffer, a subset is copied to the
read() buffer, and then the entire ciphertext record is saved for the
next call to read().
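A usage sketch of the two paths (the client/stream setup is elided and
the buffer sizes are arbitrary):

```zig
const std = @import("std");

fn readBoth(client: *std.crypto.tls.Client, stream: std.net.Stream) !void {
    // Small buffer: the same ciphertext record may be re-decrypted on
    // each call until it has been fully handed out.
    var small: [64]u8 = undefined;
    _ = try client.read(stream, &small);

    // Large buffer: each record is decrypted exactly once, directly
    // into the caller's buffer, with no intermediate memcpy.
    var big: [16896]u8 = undefined;
    _ = try client.read(stream, &big);
}
```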