`std.Io.tty.Config.detect` may be an expensive check (e.g. involving
syscalls), and doing it every time we need to print isn't really
necessary; under normal usage, we can compute the value once and cache
it for the whole program's execution. Since anyone outputting to stderr
may reasonably want this information (in fact they are very likely to),
it makes sense to cache it and return it from `lockStderrWriter`. Call
sites that do not need it will experience no significant overhead and
can simply ignore the TTY config with a `const w, _` destructure.
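A sketch of what a call site might look like under this scheme; the exact `lockStderrWriter` signature (including the buffer parameter) is an assumption based on the description above, not a verbatim API:
```
const std = @import("std");

// Hypothetical call site: lockStderrWriter is assumed to return the stderr
// writer together with the cached std.Io.tty.Config described above.
fn reportError(msg: []const u8) !void {
    var buf: [256]u8 = undefined;
    const w, const tty_config = std.debug.lockStderrWriter(&buf);
    defer std.debug.unlockStderrWriter();
    try tty_config.setColor(w, .red);
    try w.writeAll("error: ");
    try tty_config.setColor(w, .reset);
    try w.print("{s}\n", .{msg});
}
```
A call site that does not care about terminal colors would instead write `const w, _ = std.debug.lockStderrWriter(&buf);` and pay nothing extra.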
- Revive some of the removed cache integration logic in `cmdTranslateC` now that `translate-c` can return error bundles
- Fix up inconsistent path separators (on Windows) when building the aro include path
- Move some error bundle logic from resinator into aro.Diagnostics
- Add `ErrorBundle.addRootErrorMessageWithNotes` (extracted from resinator)
added an adapter to AnyWriter and GenericWriter to help bridge the gap
between the old and new APIs
make std.testing.expectFmt work at compile-time
std.fmt no longer has a dependency on std.unicode. Formatted printing
was never properly unicode-aware. Now it no longer pretends to be.
Breakage/deprecations:
* std.fs.File.reader -> std.fs.File.deprecatedReader
* std.fs.File.writer -> std.fs.File.deprecatedWriter
* std.io.GenericReader -> std.io.Reader
* std.io.GenericWriter -> std.io.Writer
* std.io.AnyReader -> std.io.Reader
* std.io.AnyWriter -> std.io.Writer
* std.fmt.format -> std.fmt.deprecatedFormat
* std.fmt.fmtSliceEscapeLower -> std.ascii.hexEscape
* std.fmt.fmtSliceEscapeUpper -> std.ascii.hexEscape
* std.fmt.fmtSliceHexLower -> {x}
* std.fmt.fmtSliceHexUpper -> {X}
* std.fmt.fmtIntSizeDec -> {B}
* std.fmt.fmtIntSizeBin -> {Bi}
* std.fmt.fmtDuration -> {D}
* std.fmt.fmtDurationSigned -> {D}
* {} -> {f} when there is a format method
* format method signature (see the sketch after this list)
- anytype -> *std.io.Writer
- inferred error set -> error{WriteFailed}
- options -> (deleted)
* std.fmt.Formatted
- now takes context type explicitly
- no fmt string
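For illustration, here is what a format method might look like under the new rules; the `Point` type is made up, while the writer type, error set, and `{f}` specifier are the ones listed above:
```
const std = @import("std");

const Point = struct {
    x: i32,
    y: i32,

    // new signature: concrete writer type, fixed error set, no options parameter
    pub fn format(self: Point, writer: *std.io.Writer) error{WriteFailed}!void {
        try writer.print("({d}, {d})", .{ self.x, self.y });
    }
};

test "format method is selected with {f}" {
    try std.testing.expectFmt("(1, 2)", "{f}", .{Point{ .x = 1, .y = 2 }});
}
```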
preparing to rearrange std.io namespace into an interface
how to upgrade:
std.io.getStdIn() -> std.fs.File.stdin()
std.io.getStdOut() -> std.fs.File.stdout()
std.io.getStdErr() -> std.fs.File.stderr()
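A minimal sketch of the upgrade, using the replacements listed above:
```
const std = @import("std");

// previously: std.io.getStdErr().isTty()
fn stderrIsTty() bool {
    return std.fs.File.stderr().isTty();
}
```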
Previously, resinator would use the host arch as the target arch when looking for windows-gnu include directories. However, Zig only thinks it can provide a libc for targets specified in the `std.zig.target.available_libcs` array, which only includes a few for windows-gnu. Therefore, when cross-compiling from a host architecture that doesn't have a windows-gnu target in the available_libcs list, resinator would fail to detect the MinGW include directories.
Now, the custom `/:target` option, which is intended for the COFF object file target, is passed to `zig rc`; it can be re-used for the include directory target as well. For the include directory target, resinator will convert the MachineType to the relevant arch, or fail if there is no equivalent arch/no support for detecting the includes for that MachineType (currently 64-bit Itanium and EBC).
Fixes the `windows_resources` standalone test failing when the host is, for example, `riscv64-linux`.
When determining the type of RC compiler, meson passes `/?` or `--version` and then reads from `stdout` looking for particular string(s) anywhere in the output.
So, by adding the string "Microsoft Resource Compiler" to the `/?` output, meson will recognize `zig rc` as rc.exe and give it the correct options, which works fine since `zig rc` is drop-in CLI compatible with rc.exe.
This allows using `zig rc` with meson for (cross-)compiling, by either:
- Setting WINDRES="zig rc" or putting windres = ['zig', 'rc'] in the cross-file
+ This will work like rc.exe, so it will output .res files. This will only link successfully if you are using a linker that can do .res -> .obj conversion (so something like zig cc, MSVC, lld)
- Setting WINDRES="zig rc /:output-format coff" or putting windres = ['zig', 'rc', '/:output-format', 'coff'] in the cross-file
+ This will make meson pass flags as if it were rc.exe, but it will cause the resulting .res file to actually be a COFF object file, meaning it will work with any linker that handles COFF object files
Example cross file that uses `zig cc` (which can link `.res` files, so `/:output-format coff` is not necessary) and `zig rc`:
```
[binaries]
c = ['zig', 'cc', '--target=x86_64-windows-gnu']
windres = ['zig', 'rc']
[host_machine]
system = 'windows'
cpu_family = 'x86_64'
cpu = 'x86_64'
endian = 'little'
```
In #22522 I said:
> RC="zig rc" will now work in combination with zig cc and CMake. Here's an example of cross-compiling a simple Windows GUI CMake project
>
> $ RC="zig rc" CC="zig cc --target=x86_64-windows-gnu" cmake .. -DCMAKE_SYSTEM_NAME=Windows -G Ninja
However, I didn't realize at the time that this only works because of the `-G Ninja` part. When not using Ninja as the build tool, CMake adds a workaround for 'very long lists of object files' where it takes all object files and runs them through `ar` to combine them into one archive:
4a11fd8dde/Modules/Platform/Windows-GNU.cmake (L141-L158)
This is a problem for the Windows resource use-case, because `ar` doesn't know how to deal with `.res` files and so this object combining step fails with:
unknown file type: foo.rc.res
Only the linker knows what to do with .res files (since it has its own `.res` -> `.obj` ('cvtres') conversion mechanism). So, when using Ninja, this object file combining step is skipped and the .res file gets passed to the linker and everyone is happy.
Note: When CMake thinks that it's using `windres` as the Windows resource compiler, it will pass `-O coff` to windres, which causes it to output a COFF object file instead of a `.res` file, meaning the `ar` step can succeed because it's only working on actual object files.
---
This commit gives `zig rc` the ability to output COFF object files directly when `/:output-format coff` is provided as an argument. This effectively matches what happens when CMake uses `windres` for resource compilation, but requires the argument to be provided explicitly.
So, after this change, the following CMake cross-compilation use case will work, even when not using Ninja as the generator:
RC="zig rc /:output-format coff" CC="zig cc --target=x86_64-windows-gnu" cmake .. -DCMAKE_SYSTEM_NAME=Windows
Note: This mostly matches resinator v0.1.0 rather than the latest master version, since the latest master version focuses on adding support for .res -> .obj conversion which is not necessary for the future planned relationship of zig and resinator (resinator will likely be moved out of the compiler and into the build system, a la translate-c).
So, ultimately the changes here consist mostly of bug fixes for obscure edge cases.
The compiler actually doesn't need any functional changes for this: Sema
does reification based on the tag indices of `std.builtin.Type` already!
So, no zig1.wasm update is necessary.
This change is necessary to disallow name clashes between fields and
decls on a type, which is a prerequisite of #9938.
The core functionality is now in two general functions,
`extremeInSubtreeOnDirection()` and `nextOnDirection()`, so all the
other traversing functions (`getMin()`, `getMax()`, and
`InorderIterator`) are just trivial calls to these core functions.
The two added functions `Node.next()` and `Node.prev()` are also just
trivial calls to these (see the sketch below).
* std.Treap traversal direction: use u1 instead of usize.
* Treap: fix getMin() and getMax(), and add tests for them.
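A hypothetical sketch of the shape described above, using a minimal stand-in node type; the real std.Treap node also carries a key and priority, and the actual function signatures may differ:
```
const Node = struct {
    parent: ?*Node = null,
    children: [2]?*Node = .{ null, null },

    // Walk as far as possible toward `direction` (0 = left/min, 1 = right/max).
    fn extremeInSubtreeOnDirection(node: *Node, direction: u1) *Node {
        var current = node;
        while (current.children[direction]) |child| current = child;
        return current;
    }

    // In-order neighbor toward `direction`.
    fn nextOnDirection(node: *Node, direction: u1) ?*Node {
        if (node.children[direction]) |child|
            return extremeInSubtreeOnDirection(child, direction ^ 1);
        // Otherwise climb until we arrive at a parent from its other side.
        var current = node;
        while (current.parent) |parent| : (current = parent) {
            if (parent.children[direction ^ 1]) |sibling| {
                if (sibling == current) return parent;
            }
        }
        return null;
    }

    // next() and prev() are trivial wrappers over the general function.
    pub fn next(node: *Node) ?*Node {
        return nextOnDirection(node, 1);
    }
    pub fn prev(node: *Node) ?*Node {
        return nextOnDirection(node, 0);
    }
};
```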
Deprecated aliases that are now compile errors:
- `std.fs.MAX_PATH_BYTES` (renamed to `std.fs.max_path_bytes`)
- `std.mem.tokenize` (split into `tokenizeAny`, `tokenizeSequence`, `tokenizeScalar`; see the sketch after this list)
- `std.mem.split` (split into `splitSequence`, `splitAny`, `splitScalar`)
- `std.mem.splitBackwards` (split into `splitBackwardsSequence`, `splitBackwardsAny`, `splitBackwardsScalar`)
- `std.unicode`
+ `utf16leToUtf8Alloc`, `utf16leToUtf8AllocZ`, `utf16leToUtf8`, `fmtUtf16le` (all renamed to have capitalized `Le`)
+ `utf8ToUtf16LeWithNull` (renamed to `utf8ToUtf16LeAllocZ`)
- `std.zig.CrossTarget` (moved to `std.Target.Query`)
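For the `tokenize`/`split` renames, the common single-delimiter case now maps to the scalar variant, e.g.:
```
const std = @import("std");

test "tokenizeScalar replaces single-delimiter tokenize" {
    var it = std.mem.tokenizeScalar(u8, "a b  c", ' ');
    try std.testing.expectEqualStrings("a", it.next().?);
    try std.testing.expectEqualStrings("b", it.next().?);
    try std.testing.expectEqualStrings("c", it.next().?);
    try std.testing.expect(it.next() == null);
}
```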
Deprecated `lib/std/std.zig` decls were deleted instead of being made a `@compileError` because the `refAllDecls` in the test block would trigger the `@compileError`. The deleted top-level `std` namespaces are:
- `std.rand` (renamed to `std.Random`)
- `std.TailQueue` (renamed to `std.DoublyLinkedList`)
- `std.ChildProcess` (renamed/moved to `std.process.Child`)
This is not exhaustive. Deprecated aliases that I didn't touch:
+ `std.io.*`
+ `std.Build.*`
+ `std.builtin.Mode`
+ `std.zig.c_translation.CIntLiteralRadix`
+ anything in `src/`
writeFile was deprecated in favor of writeFile2 in f645022d16361865e24582d28f1e62312fbc73bb. This commit renames writeFile2 to writeFile and makes writeFile2 a compile error.
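After the rename, the options-struct form is spelled `writeFile`; a minimal sketch (field names follow the former writeFile2 API):
```
const std = @import("std");

fn saveGreeting(dir: std.fs.Dir) !void {
    // previously: try dir.writeFile2(.{ .sub_path = "hello.txt", .data = "hello\n" });
    try dir.writeFile(.{ .sub_path = "hello.txt", .data = "hello\n" });
}
```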
this patch renames ComptimeStringMap to StaticStringMap, makes it
accept only a single type parameter, and makes it return a known struct
type instead of an anonymous struct. the initial motivation for these
changes was to reduce the 'very long type names' issue described in
https://github.com/ziglang/zig/pull/19682.
this breaks the previous API. users will now need to write:
`const map = std.StaticStringMap(T).initComptime(kvs_list);`
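For example, a minimal usage of the new API might look like this (the `Color` enum is just an illustration):
```
const std = @import("std");

const Color = enum { red, green, blue };

const color_map = std.StaticStringMap(Color).initComptime(.{
    .{ "red", .red },
    .{ "green", .green },
    .{ "blue", .blue },
});

test "lookup" {
    try std.testing.expectEqual(Color.green, color_map.get("green").?);
}
```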
* move `kvs_list` param from type param to an `initComptime()` param
* new public methods
* `keys()`, `values()` helpers
* `init(allocator)`, `deinit(allocator)` for runtime data
* `getLongestPrefix(str)`, `getLongestPrefixIndex(str)` - i'm not sure
these belong but have left them in for now in case they are deemed useful
* performance notes:
* i posted some benchmarking results here:
https://github.com/travisstaloch/comptime-string-map-revised/issues/1
* i noticed a speedup from reducing the size of the struct from 48 to 32
bytes, and thus use u32 instead of usize for all length fields
* i noticed a speedup from storing KVs as a struct of arrays
* latest benchmark shows these wall_time improvements for
debug/safe/small/fast builds: -6.6% / -10.2% / -19.1% / -8.9%. full
output in link above.
We've got a big one here! This commit reworks how we represent pointers
in the InternPool, and rewrites the logic for loading and storing from
them at comptime.
Firstly, the pointer representation. Previously, pointers were
represented in a highly structured manner: pointers to fields, array
elements, etc, were explicitly represented. This works well for simple
cases, but is quite difficult to handle in the cases of unusual
reinterpretations, pointer casts, offsets, etc. Therefore, pointers are
now represented in a more "flat" manner. For types without well-defined
layouts -- such as comptime-only types, automatic-layout aggregates, and
so on -- we still use this "hierarchical" structure. However, for types
with well-defined layouts, we use a byte offset associated with the
pointer. This allows the comptime pointer access logic to deal with
reinterpreted pointers far more gracefully, because the "base address"
of a pointer -- for instance a `field` -- is a single value which
pointer accesses cannot exceed since the parent has undefined layout.
This strategy is also more useful to most backends -- see the updated
logic in `codegen.zig` and `codegen/llvm.zig`. For backends which do
prefer a chain of field and elements accesses for lowering pointer
values, such as SPIR-V, there is a helpful function in `Value` which
creates a strategy to derive a pointer value using ideally only field
and element accesses. This is actually more correct than the previous
logic, since it correctly handles pointer casts which, after the dust
has settled, end up referring exactly to an aggregate field or array
element.
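As a rough illustration (not taken from the commit), this is the kind of reinterpreted comptime pointer access that the byte-offset representation is intended to handle for types with well-defined layout:
```
const std = @import("std");
const builtin = @import("builtin");

test "comptime load through a reinterpreted pointer" {
    comptime {
        const words: [2]u16 = .{ 0x1122, 0x3344 };
        const bytes: *const [4]u8 = @ptrCast(&words);
        // The load below is resolved entirely at comptime.
        const first = std.mem.readInt(u16, bytes[0..2], builtin.cpu.arch.endian());
        std.debug.assert(first == 0x1122);
    }
}
```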
In terms of the pointer access code, it has been rewritten from the
ground up. The old logic had become rather a mess of special cases being
added whenever bugs were hit, and was still riddled with bugs. The new
logic was written to handle the "difficult" cases correctly, the most
notable of which is restructuring of a comptime-only array (for
instance, converting a `[3][2]comptime_int` to a `[2][3]comptime_int`).
Currently, the loading and storing logic work somewhat differently,
but a future change will likely improve the loading logic to bring it
more in line with the store strategy. As far as I can tell, the rewrite
has fixed all bugs exposed by #19414.
As a part of this, the comptime bitcast logic has also been rewritten.
Previously, bitcasts simply worked by serializing the entire value into
an in-memory buffer, then deserializing it. This strategy has two key
weaknesses: pointers, and undefined values. Representations of these
values at comptime cannot be easily serialized/deserialized whilst
preserving data, which means many bitcasts would become runtime-known if
pointers were involved, or would turn `undefined` values into `0xAA`.
The new logic works by "flattening" the data structure to be cast into a
sequence of bit-packed atomic values, and then "unflattening" it; using
serialization when necessary, but with special handling for `undefined`
values and for pointers which align in virtual memory. The resulting
code is definitely slower -- more on this later -- but it is correct.
The pointer access and bitcast logic required some helper functions and
types which are not generally useful elsewhere, so I opted to split them
into separate files `Sema/comptime_ptr_access.zig` and
`Sema/bitcast.zig`, with simple re-exports in `Sema.zig` for their small
public APIs.
Whilst working on this branch, I caught various unrelated bugs with
transitive Sema errors, and with the handling of `undefined` values.
These bugs have been fixed, and corresponding behavior tests added.
In terms of performance, I do anticipate that this commit will regress
performance somewhat, because the new pointer access and bitcast logic
is necessarily more complex. I have not yet taken performance
measurements, but will do shortly, and post the results in this PR. If
the performance regression is severe, I will do work to optimize the
new logic before merge.
Resolves: #19452
Resolves: #19460