Over the last year of using std.log in practice, it has become clear to
me that having the current 8 distinct log levels does more harm than
good. It is too subjective which level a given message should have, which
makes filtering based on log level weaker, as not all messages will have
been assigned the level one might expect.
Instead, more granular filtering should be achieved by leveraging the
logging scope feature. Filtering based on a combination of scope and log
level should be sufficiently powerful for all use-cases.
Note that the self-hosted compiler has already limited itself to 4
distinct log levels for many months and implements granular filtering
based on both log scope and level. This has worked very well in practice.
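For illustration, here is a rough sketch of what combined scope/level
filtering looks like with the root-level `log` override that std.log
already supports (the `.link` scope and the filtering policy below are
made up for this example):

```zig
const std = @import("std");

// A scoped logger; every message from it carries the `.link` scope.
const link_log = std.log.scoped(.link);

// Root-level override that std.log calls for every message; filter on a
// combination of scope and level instead of on many distinct levels.
pub fn log(
    comptime level: std.log.Level,
    comptime scope: @Type(.EnumLiteral),
    comptime format: []const u8,
    args: anytype,
) void {
    // Made-up policy: keep the .link scope fully verbose, drop .debug
    // noise from every other scope.
    const enabled = switch (scope) {
        .link => true,
        else => level != .debug,
    };
    if (!enabled) return;
    std.debug.print("[" ++ @tagName(scope) ++ "] (" ++ @tagName(level) ++ "): " ++ format ++ "\n", args);
}

pub fn main() void {
    link_log.debug("shown: .link is fully verbose", .{});
    std.log.debug("dropped: other scopes stop at .info", .{});
}
```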
* without this, when an included relocatable references a common symbol
  from another translation unit, the symbol would not be correctly
  removed from the unresolved lookup table, triggering a misleading
  assertion down the line
* in debug builds, assert upon removal that we indeed removed a ref
  instead of silently ignoring it
* add test case that covers this issue
According to the documentation, `divTrunc` is "Truncated division.
Rounds toward zero". Lower it as a straightforward fdiv + trunc sequence
to make it behave as expected with mixed positive/negative operands.
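For illustration (my own example, not the lowering code itself):
truncated division rounds toward zero, which is exactly what an fdiv +
trunc sequence produces, while a floor-based lowering would differ for
mixed signs:

```zig
const std = @import("std");

test "divTrunc on floats rounds toward zero" {
    // -7 / 2 == -3.5; truncation gives -3 (toward zero)...
    try std.testing.expectEqual(@as(f64, -3), @divTrunc(@as(f64, -7), 2));
    try std.testing.expectEqual(@as(f64, -3), @trunc(@as(f64, -7) / 2));
    // ...whereas a floor-based lowering would give -4.
    try std.testing.expectEqual(@as(f64, -4), @floor(@as(f64, -7) / 2));
}
```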
Closes #10001
The TLS area may be located in the upper part of the address space, and,
if the platform expects a constant offset to be applied, the tp register
calculation may overflow.
Use `+%` instead of `+`; the overflow is harmless.
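A small illustration of the difference, with made-up addresses: `+` on a
usize asserts no overflow, while `+%` wraps around, which is the desired
behavior when the TLS base sits near the top of the address space:

```zig
const std = @import("std");

test "wrapping add for the thread pointer computation" {
    // Hypothetical TLS base near the top of the address space plus a
    // platform-mandated constant offset.
    const tls_base: usize = std.math.maxInt(usize) - 8;
    const offset: usize = 0x10;
    const tp = tls_base +% offset; // wraps instead of tripping the overflow check
    try std.testing.expectEqual(@as(usize, 0x7), tp);
}
```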
* Fix backend using wrong union field of the slice instruction.
* LLVM backend properly sets alignment on global variables.
* Sema: add coercion for *T to *[1]T (see the sketch after this list)
* Sema: pointers to Decls with explicit alignment now have alignment
metadata in them.
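For the coercion item above, a minimal user-level example (mine, not a
test from this change):

```zig
const std = @import("std");

test "*T coerces to *[1]T" {
    var x: u32 = 42;
    const single: *u32 = &x;
    const as_array: *[1]u32 = single; // the new coercion
    try std.testing.expectEqual(@as(u32, 42), as_array[0]);
}
```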
Also switch to the more efficient encoding of the bitcast instruction
when the destination type is anyerror in 2 common cases.
LLVM backend: fix using the wrong type as the optional payload type in
the `wrap_optional` AIR instruction.
After a discussion about language specs, this seems like the best way to
go, because it's simpler to reason about both for humans and compilers.
The `bitcast_result_ptr` ZIR instruction is no longer needed.
This commit also implements writing enums, arrays, and vectors to
virtual memory at compile-time.
This allows more of compiler-rt to build, which in turn unlocks the
saturating arithmetic behavior tests.
There was also a memory leak in the comptime closure system which is now
fixed.
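For reference, the saturating operators exercised by those behavior tests
clamp at the bounds of the result type instead of overflowing; a couple
of hand-written examples (not the actual test cases):

```zig
const std = @import("std");

test "saturating operators clamp at the type bounds" {
    try std.testing.expectEqual(@as(u8, 255), @as(u8, 200) +| 100);
    try std.testing.expectEqual(@as(i8, -128), @as(i8, -100) -| 100);
    try std.testing.expectEqual(@as(u8, 255), @as(u8, 16) *| 16);
}
```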
Elements of the output JSON are ordered by nondecreasing VM address in
the generated binary (some elements may share the same VM address; for
example, an atom and a relocation might coincide in the address space).
The generated JSON can be inspected manually or via `zig-snapshots`, a
preview tool I am currently working on. It will allow the user to
interactively inspect the state of the linker, including the positioning
of sections, symbols, atoms and relocations within each snapshot state
and, in the future, between snapshots too. This should allow for quicker
debugging of the linker, which is nontrivial when it runs in incremental
mode.
Note that the state will only be dumped if the compiler is built with the
`-Dlink-snapshot` flag and is then passed the `--debug-link-snapshot`
flag when compiling a source file or project.
AIR:
* `div` is renamed to `div_trunc`.
* Add `div_float`, `div_floor`, `div_exact` (see the sketch after this list).
- Implemented in Sema and LLVM codegen. C backend has a stub.
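As a quick reminder of the semantics the new instructions map to at the
language level (illustration only, not test code from this change):

```zig
const std = @import("std");

test "div_floor and div_exact semantics" {
    // div_trunc rounds toward zero, div_floor toward negative infinity,
    // and div_exact asserts the division leaves no remainder.
    try std.testing.expectEqual(@as(i32, -4), @divFloor(@as(i32, -7), 2));
    try std.testing.expectEqual(@as(i32, 4), @divExact(@as(i32, 8), 2));
}
```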
Improvements to std.math.big.Int:
* Add `eqZero` function to `Mutable`.
* Fix incorrect results for `divFloor`.
Compiler-rt:
* Add muloti4 to the stage2 section.
This fixes InstallRawStep to handle the case where there are empty segments (segments with no sections). Before this change, if an empty segment was encountered, the fixup of binaryOffsets was skipped. Now we loop through the segments until a non-empty one is found and then fix up the sections. This resolves an issue I was having with InstallRawStep for a bootloader I'm writing.
This changes the error returned for an unexpected exit code from RunStep's ChildProcess to UnexpectedExitCode rather than UncleanExit. This allows the handler of the error to distinguish between an error reported by the ChildProcess and an error executing the ChildProcess, which is an important distinction when deciding what information to report to the user. For example, if you have a ChildProcess that you know reports its own errors, an unexpected exit code means the error has already been reported, whereas an unclean exit means the child process was not able to report any error.
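To sketch how a caller might make use of that distinction (the
`reportRunError` helper below is hypothetical, not part of RunStep's
API):

```zig
const std = @import("std");

// Hypothetical helper: decide what to tell the user based on which error
// the run produced.
fn reportRunError(err: anyerror) void {
    switch (err) {
        // The child exited with a failing code, so it already printed its
        // own diagnostics; nothing more to add.
        error.UnexpectedExitCode => {},
        // The child crashed or was killed and never got to report anything.
        error.UncleanExit => std.debug.print("child process terminated abnormally\n", .{}),
        else => std.debug.print("failed to run child process: {s}\n", .{@errorName(err)}),
    }
}
```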
* Restructure elemPtr a bit
* New AIR instruction: slice_elem_ptr, which returns a pointer to an element of a slice
* Value: adapt elemPtr to work on slices
* New AIR instruction: slice, which constructs a slice out of a pointer
  and a length (see the sketch after this list).
* AstGen: use `coerced_ty` for start and end expressions, use `none`
for the sentinel, and don't try to load the result of the slice
operation because it returns a by-value result.
* Sema: pointer arithmetic is extracted into analyzePointerArithmetic
and it is used by the implementation of slice.
- Also I implemented comptime pointer addition.
* Sema: extract logic into analyzeSlicePtr, analyzeSliceLen and use them
inside the slice semantic analysis.
- The approach in stage2 is much cleaner than stage1 because it uses
more granular analysis calls for obtaining the slice pointer, doing
arithmetic on it, and checking if the length is comptime-known.
* Sema: use the slice Value Tag for slices when doing coercion from
pointer-to-array.
* LLVM backend: detect when emitting a GEP instruction into a
pointer-to-array and add the extra index that is required.
* Type: ptrAlignment for c_void returns 0.
* Implement Value.hash and Value.eql for slices.
* Remove accidentally duplicated behavior test.
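The sketch referenced above, showing the user-level behavior the new
slice machinery models (my illustration, not compiler code):

```zig
const std = @import("std");

test "slices from pointers and coercions from pointer-to-array" {
    var array = [_]u8{ 1, 2, 3, 4, 5 };

    // Slicing a many-item pointer pairs pointer arithmetic on the start
    // index with a computed length.
    const many: [*]u8 = &array;
    const middle: []u8 = many[1..4];
    try std.testing.expectEqual(@as(usize, 3), middle.len);
    try std.testing.expectEqual(@as(u8, 2), middle[0]);

    // A pointer-to-array coerces to a slice.
    const whole: []u8 = &array;
    try std.testing.expectEqual(@as(usize, 5), whole.len);
}
```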