This is a breaking change. Before, usage looked like this:
```zig
const held = mutex.acquire();
defer held.release();
```
Now it looks like this:
```zig
mutex.lock();
defer mutex.unlock();
```
The `Held` type was an idea to make mutexes slightly safer by making it
more difficult to forget to release an acquired lock. However, it
ultimately caused more problems than it solved whenever a data structure
needed to store a held mutex. Simplify everything by reducing the API
down to the primitives: lock() and unlock().
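For illustration, a minimal sketch (not from the commit; the `Counter` type is hypothetical) of how a data structure can now simply store the mutex and call the primitives directly:
```zig
const std = @import("std");

const Counter = struct {
    mutex: std.Thread.Mutex = .{},
    value: u64 = 0,

    fn increment(self: *Counter) void {
        self.mutex.lock();
        defer self.mutex.unlock();
        self.value += 1;
    }
};
```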
Closes #8051
Closes #8246
Closes #10105
--import-memory import memory from the environment
--initial-memory=[bytes] initial size of the linear memory
--max-memory=[bytes] maximum size of the linear memory
--global-base=[addr] where to start to place global data
See #8633
Because ArrayList.initCapacity uses 'precise' capacity allocation, this should save memory on average, and will definitely save memory in cases where an ArrayList is used where a regular allocated slice could also have been used.
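A minimal sketch (not taken from the changed code) of what initCapacity guarantees:
```zig
const std = @import("std");

test "initCapacity reserves the requested capacity up front" {
    var list = try std.ArrayList(u8).initCapacity(std.testing.allocator, 10);
    defer list.deinit();
    // Capacity was reserved by initCapacity, so this cannot reallocate.
    list.appendSliceAssumeCapacity("0123456789");
    try std.testing.expectEqual(@as(usize, 10), list.items.len);
}
```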
Allow the '-z notext' option to be passed to the linker via the compiler
frontend; this flag tells the linker that relocations in read-only
sections are permitted. Certain targets such as Solana BPF rely on this
flag.
Expose all linker options, i.e. '-z nodelete', '-z now', and '-z relro', in
the compiler frontend. Usage documentation has been updated accordingly.
Expose the '-z notext' flag in the standard library build runner.
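A hypothetical build.zig snippet showing the build runner side; the exact field name (`link_z_notext`) is an assumption here:
```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    // Assumed field name; tells the linker that relocations in
    // read-only sections are permitted ('-z notext').
    exe.link_z_notext = true;
    exe.install();
}
```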
There is a table of `misc_failures` which previously did not allow
multiple errors for the same enum tag. Now it allows this by freeing the
previous compile error and replacing it with the new one. Typically this
will happen because multiple sub-Compilation objects fail with the same
problem, such as not being able to build glibc because of not having
LLVM extensions enabled.
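Not the compiler's actual code, but a sketch of the replace-and-free pattern, assuming a map from an enum tag to an owned error message:
```zig
const std = @import("std");

const FailureTag = enum { glibc, libunwind }; // illustrative tags only

fn setMiscFailure(
    gpa: std.mem.Allocator,
    table: *std.AutoHashMap(FailureTag, []u8),
    tag: FailureTag,
    msg: []const u8,
) !void {
    const owned = try gpa.dupe(u8, msg);
    errdefer gpa.free(owned);
    const gop = try table.getOrPut(tag);
    if (gop.found_existing) {
        // Free the previous compile error before replacing it with the new one.
        gpa.free(gop.value_ptr.*);
    }
    gop.value_ptr.* = owned;
}
```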
The main problem that motivated these changes is that global constants
which are referenced by pointer would not be emitted into the binary.
This happened because `semaDecl` did not add `codegen_decl` tasks for
global constants, instead relying on the constant values being copied as
necessary. However when the global constants are referenced by pointer,
they need to be sent to the linker to be emitted.
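For illustration (user code not taken from the commit), this is the kind of case that was affected:
```zig
// `message` is a global constant; taking its address means it must
// actually be emitted into the binary rather than only copied by value.
const message = [_]u8{ 'h', 'i', '!' };

export fn getMessage() *const [3]u8 {
    return &message;
}
```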
Making global const arrays, structs, and unions get emitted uncovered a
latent issue: the anonymous decls that they referenced would get garbage
collected (via `deleteUnusedDecl`) even though they would later be
referenced by the global const.
In order to solve this problem, I introduced `anon_work_queue`, which is
the same as `work_queue` except with a lower priority. The `codegen_decl`
task for anon decls goes into the `anon_work_queue`, ensuring that the
owner decl gets a chance to mark its anon decls as alive before they are
possibly deleted.
This caused a few regressions, which I made the judgement call to add
workarounds for. Two steps forward, one step back, is still progress.
The regressions were:
* Two behavior tests having to do with unions. These tests were
intentionally exercising the LLVM constant value lowering; however,
due to the garbage collection bug that was fixed in this commit,
the LLVM code was not actually getting exercised, and union types/values
were not implemented correctly because I had forgotten that LLVM does
not allow bitcasting aggregate values.
- This is worked around by allowing those 2 test cases to regress,
moving them to the "passing for stage1 only" section.
* The test-stage2 test cases (in test/cases/*) for non-LLVM backends
previously did not have any calls to lower struct values, but now
they do. The code that was there was just `@panic("TODO")`. I
replaced that code with a stub that generates the wrong value. This
is an intentional miscompilation that will obviously need to get
fixed before any struct behavior tests pass. None of the current
tests we have exercise loading any values from these global const
structs, so there is not a problem until we try to improve these
backends.
Each element of the output JSON is ordered by nondecreasing VM address
in the generated binary (some elements might occupy the same VM address;
for example, an atom and a relocation might coincide in the address
space).
The generated JSON can be inspected manually, or via a preview tool,
`zig-snapshots`, that I am currently working on. It will allow the user
to interactively inspect the state of the linker together with the
positioning of sections, symbols, atoms and relocations within each
snapshot state, and, in the future, between snapshots too. This should
allow for quicker debugging of the linker, which is nontrivial when it is
run in incremental mode.
Note that the state will only be dumped if the compiler is built with
the `-Dlink-snapshot` flag, and the compiler is then passed the
`--debug-link-snapshot` flag when compiling a source file or project.
This prevents a compiler_rt built with stage2 (which is intentionally
different than when built with stage1) from being used for stage1 and
vice versa.
Fixes the regression from the previous commit.
* AstGen: fix emitting `store_to_inferred_ptr` when it should be emitting
`store` for a variable that has an explicit alignment.
* Compilation: fix a couple memory leaks
* Sema: implement support for locals that have specified alignment.
* Sema: implement `@intCast` when it needs to emit an AIR instruction.
* Sema: implement `@alignOf` (see the sketch after this list for an
example exercising these three).
* Implement debug printing for extended alloc ZIR instructions.
* LLVM backend: respect `sub_path` just like the other stage2 backends
do.
* Compilation has some new logic to only emit work queue jobs for
building stuff when it believes itself to be capable. The linker
backends no longer have duplicate logic; instead they respect the
optional bit on the respective asset.
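A small illustrative test (not from the commit, using the two-argument `@intCast` syntax of the time) exercising the three Sema items noted above:
```zig
const std = @import("std");

test "explicit local alignment, @intCast, @alignOf" {
    var x: u32 align(16) = 300; // local with specified alignment
    x += 1;
    const a = @alignOf(u32); // @alignOf
    const y = @intCast(u16, x); // runtime @intCast lowers to an AIR instruction
    try std.testing.expect(a > 0);
    try std.testing.expectEqual(@as(u16, 301), y);
}
```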
Previously, linker backends or machine code backends were able to hold
on to references to inside Sema's temporary arena. However there can
be large objects stored there that we want to free after machine code is
generated.
The primary change in this commit is to use a temporary arena for Sema
of function bodies that gets freed after machine code backend finishes
handling `updateFunc` (at the same time that Air and Liveness get freed).
The other changes in this commit are fixing issues that fell out from
the primary change.
* The C linker backend is rewritten to handle updateDecl and updateFunc
separately. Also, all Decl updates get access to typedefs and
fwd_decls, not only functions.
* The C linker backend is updated to the new API that does not depend
on allocateDeclIndexes and does not have to handle garbage collected
decls.
* The C linker backend uses an arena for Type/Value objects that
`typedefs` references. These can be garbage collected every so often
after flush(); however, that garbage collection code is not
implemented at this time. It will be pretty simple: allocate a
new arena, copy all the Type objects to it, update the keys of the
hash map, and free the old arena (see the sketch after this list).
* Sema: fix a handful of instances of not copying Type/Value objects
from the temporary arena into the appropriate Decl arena.
* Type: fix some function types not reporting hasCodeGenBits()
correctly.
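Not the actual backend code, but a sketch of that garbage-collection idea, assuming a string-keyed map whose keys and values live in the arena:
```zig
const std = @import("std");

fn compactTypedefArena(
    gpa: std.mem.Allocator,
    old_arena: *std.heap.ArenaAllocator,
    typedefs: *std.StringHashMap([]const u8),
) !std.heap.ArenaAllocator {
    var new_arena = std.heap.ArenaAllocator.init(gpa);
    errdefer new_arena.deinit();
    const a = new_arena.allocator();

    var it = typedefs.iterator();
    while (it.next()) |entry| {
        // Copy key and value into the new arena and update the map in
        // place; the key bytes are identical, so its hash is unchanged.
        entry.key_ptr.* = try a.dupe(u8, entry.key_ptr.*);
        entry.value_ptr.* = try a.dupe(u8, entry.value_ptr.*);
    }
    old_arena.deinit(); // everything still referenced now lives in new_arena
    return new_arena;
}
```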
* test runner is improved to respect `error.SkipZigTest`
* start code is improved to `@setAlignStack(16)` before calling main()
* the newly passing behavior test has a workaround for the fact that
stage2 cannot yet call `std.Target.x86.featureSetHas()` at comptime.
This is blocking on comptime closures. The workaround is that there
is a new decl `@import("builtin").stage2_x86_cx16` which is a `bool`.
* Implement `@setAlignStack`. This language feature should be re-evaluated
at some point - I'll file an issue for it.
* LLVM backend: apply/remove the cold attribute and noinline attribute
where appropriate.
* LLVM backend: loads and stores are properly annotated with alignment
and volatile attributes.
* LLVM backend: allocas are properly annotated with alignment.
* Type: fix integers reporting the wrong alignment for 256-bit integers
and beyond. Once you reach 16-byte alignment, there is no further
alignment increase for larger integers.
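An illustrative check of that rule (not from the commit):
```zig
const std = @import("std");

test "large integer alignment caps at 16 bytes" {
    // Wider integers past this point do not gain additional alignment.
    try std.testing.expect(@alignOf(u256) <= 16);
    try std.testing.expectEqual(@alignOf(u256), @alignOf(u512));
}
```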
Conflicts:
* cmake/Findclang.cmake
* cmake/Findlld.cmake
* cmake/Findllvm.cmake
In master branch, more search paths were added to these files with "12"
in the path. In this commit I updated them to "13".
* src/stage1/codegen.cpp
* src/zig_llvm.cpp
* src/zig_llvm.h
In master branch, ZigLLVMBuildCmpXchg is improved to add
`is_single_threaded`. However, the LLVM 13 C API has this already, and
in the llvm13 branch, ZigLLVMBuildCmpXchg is deleted in favor of the C
API. In this commit I updated stage2 to use the LLVM 13 C API rather
than depending on an improved ZigLLVMBuildCmpXchg.
Additionally, src/target.zig largestAtomicBits needed to be updated to
include the new m68k ISA.
Conflicts:
lib/libcxx/include/__config
d57c0cc3bfeff9af297279759ec2b631e6d95140 added support for DragonFlyBSD
to libc++ by updating some ifdefs. This needed to be synced with llvm13.
clang::ASTUnit::LoadFromCommandLine interprets the first argument as
the name of the program (like argv[0] passed to a main function).
This change shifts the arguments, passing "" as the first argument.
This is now no longer limited to targeting macOS natively; it also
tries to detect the sysroot when targeting other Apple platforms
from macOS, for instance targeting the iPhone Simulator from macOS. In
this case, Zig will try to detect the SDK path by invoking
`xcrun --sdk iphonesimulator --show-sdk-path`. If the command fails
because the SDK doesn't exist (the case when only the CLT is installed),
or because neither Xcode nor the CLT is installed, we simply return null,
signaling that the user has to provide the sysroot themselves.
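A rough sketch (not the actual implementation) of this detection strategy, using `std.ChildProcess.exec`:
```zig
const std = @import("std");

// Returns the SDK path on success (caller owns the memory; the trailing
// newline is not trimmed in this sketch), or null if xcrun is
// unavailable or the requested SDK does not exist.
fn detectAppleSdkPath(gpa: std.mem.Allocator, sdk: []const u8) ?[]u8 {
    const result = std.ChildProcess.exec(.{
        .allocator = gpa,
        .argv = &.{ "xcrun", "--sdk", sdk, "--show-sdk-path" },
    }) catch return null; // e.g. neither Xcode nor CLT installed
    defer gpa.free(result.stderr);
    switch (result.term) {
        .Exited => |code| if (code == 0) return result.stdout,
        else => {},
    }
    // The SDK doesn't exist (e.g. only CLT installed); the user must
    // provide the sysroot themselves.
    gpa.free(result.stdout);
    return null;
}
```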
AstGen result locations now have a `coerced_ty` tag which is the same as
`ty` except it assumes that Sema will do a coercion, so it does not
redundantly add an `as` instruction into the ZIR code. This results in
cleaner ZIR and about a 14% reduction of ZIR bytes.
param and param_comptime ZIR instructions now have a block body for
their type expressions. This allows Sema to skip evaluation of the
block in the case that the parameter is comptime-provided. It also
allows a new mechanism to function: when evaluating type expressions of
generic functions, if it would depend on another parameter, it returns
`error.GenericPoison`, which bubbles up and is then caught and handled by
the param/param_comptime instruction.
This allows parameters to be evaluated independently so that the type
info for functions which have comptime or anytype parameters will still
have types populated for parameters that do not depend on values of
previous parameters (because evaluation of their param blocks will return
successfully instead of `error.GenericPoison`).
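For illustration (not code from the commit): in a function like the one below, the types of `T` and `n` do not depend on other parameters and so remain populated in the function's type info, while the type of `y` can only be resolved at an instantiation site:
```zig
const std = @import("std");

fn fill(comptime T: type, comptime n: usize, y: [n]T) [n]T {
    return y;
}

test "parameter type depending on previous parameters" {
    const arr = fill(u8, 3, [3]u8{ 1, 2, 3 });
    try std.testing.expectEqual(@as(u8, 2), arr[1]);
}
```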
It also makes iteration over the block that contains function parameters
slightly more efficient since it now only contains the param
instructions.
Finally, it fixes the case where a generic function type expression contains
a function prototype. Formerly, this situation would cause shared state
to be clobbered; now it is in a proper tree structure, so that
can't happen. This fix also required adding a `comptime_args_fn_inst`
field to Sema to make sure that the `comptime_args` field
passed into Sema is applied to the correct `func` instruction.
Source location for `node_offset_asm_ret_ty` is fixed; it was pointing at
the asm output name rather than the return type as intended.
Generic function instantiation is fixed, notably with respect to
parameter type expressions that depend on previous parameters, and with
respect to types which must be always comptime-known. This involves
passing all the comptime arguments at a callsite of a generic function,
and allowing the generic function semantic analysis to coerce the values
to the proper types (since it has access to the evaluated parameter type
expressions) and then decide based on the type whether the parameter is
runtime known or not. In the case of explicitly marked `comptime`
parameters, there is a check at the semantic analysis of the `call`
instruction.
Semantic analysis of `call` instructions does type coercion on the
arguments, which is needed both for generic functions and to make up for
using `coerced_ty` result locations (mentioned above).
Tasks left in this branch:
* Implement the memoization table.
* Add test coverage.
* Improve error reporting and source locations for compile errors.
After this change, the frontend and backend cooperate to keep track of
which Decls are actually emitted into the machine code. When any backend
sees a `decl_ref` Value, it must mark the corresponding Decl `alive`
field to true.
This prevents unused comptime data from spilling into the output object
files. For example, if you do an `inline for` loop, previously, any
intermediate value calculations would have gone into the object file.
Now they are garbage collected immediately after the owner Decl has its
machine code generated.
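For illustration (not from the commit), only the machine code for a function like this ends up in the object file now, not the comptime values produced while unrolling the loop:
```zig
export fn sumOfFirstFour() u32 {
    var total: u32 = 0;
    // Each unrolled iteration produces intermediate comptime values;
    // those Decls are garbage collected after codegen of this function.
    inline for (.{ 1, 2, 3, 4 }) |x| {
        total += x;
    }
    return total;
}
```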
In the frontend, when it is time to send a Decl to the linker, if it has
not been marked "alive" then it is deleted instead.
Additional improvements:
* Resolve type ABI layouts after successful semantic analysis of a
Decl. This is needed so that the backend has access to struct fields.
* Sema: fix incorrect logic in resolveMaybeUndefVal. It should return
"not comptime known" instead of a compile error for global variables.
* `Value.pointerDeref` now returns `null` in the case that the pointer
deref cannot happen at compile-time. This is true for global
variables, for example. Another example is if a comptime known
pointer has a hard coded address value.
* Binary arithmetic sets the requireRuntimeBlock source location to the
lhs_src or rhs_src as appropriate instead of on the operator node.
* Fix LLVM codegen for slice_elem_val which had the wrong logic for
when the operand was not a pointer.
As noted in the comment in the implementation of deleteUnusedDecl, a
future improvement will be to rework the frontend/linker interface to
remove the frontend's responsibility of calling allocateDeclIndexes.
I discovered some issues with the plan9 linker backend that are related
to this, and worked around them for now.
Before calling populateTestFunctions() we want to check
totalErrorCount() but that will read from some tables that might get
populated by the thread pool for C compilation tasks. So we wait until
all those tasks are finished before proceeding.
Frontend improvements:
* When compiling in `zig test` mode, put a task on the work queue to
analyze the main package root file. Normally, start code does
`_ = import("root");` to make Zig analyze the user's code, however in
the case of `zig test`, the root source file is the test runner.
Without this change, no tests are picked up.
* In the main pipeline, once semantic analysis is finished, if there
are no compile errors, populate the `test_functions` Decl with the
set of test functions picked up from semantic analysis.
* Value: add `array` and `slice` Tags.
LLVM backend improvements:
* Fix incremental updates of globals. Previously the
value of a global would not get replaced with a new value.
* Fix LLVM type of arrays. They were incorrectly sending
the ABI size as the element count.
* Remove the FuncGen parameter from genTypedValue. This function is for
generating global constants and there is no function available when
it is being called.
- The `ref_val` case is now commented out. I'd like to eliminate
`ref_val` as one of the possible Value Tags. Instead it should
always be done via `decl_ref`.
* Implement constant value generation for slices, arrays, and structs.
* Constant value generation for functions supports the `decl_ref` tag.
* There is now a main_pkg in addition to root_pkg. They are usually the
same. When using `zig test`, main_pkg is the user's source file and
root_pkg has the test runner.
* scanDecl no longer looks for test decls outside the package being
tested. Honoring `--test-filter` is still TODO.
* test runner main function has a void return value rather than
`anyerror!void`
* Sema is improved to generate better AIR for for loops on slices.
* Sema: fix incorrect capacity calculation in zirBoolBr
* Sema: add compile errors for trying to use slice fields as an lvalue.
* Sema: fix type coercion for error unions
* Sema: fix analyzeVarRef generating garbage AIR
* C codegen: fix renderValue for error unions with 0 bit payload
* C codegen: implement function pointer calls
* CLI: fix usage text
Adds 4 new AIR instructions:
* slice_len, slice_ptr: to get the ptr and len fields of a slice.
* slice_elem_val, ptr_slice_elem_val: to get the element value of
a slice, and of a pointer to a slice, respectively.
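For illustration, user code like this (not from the commit) lowers to those instructions:
```zig
fn firstOrNull(s: []const u8) ?u8 {
    if (s.len == 0) return null; // slice_len
    _ = s.ptr; // slice_ptr
    return s[0]; // slice_elem_val
}
```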
AstGen gains new functionality:
* One of the unused flags of struct decls is now used to indicate
structs that are known to have non-zero size based on the AST alone.
The include_compiler_rt flag stored in the bin file options means that we need
compiler-rt symbols *somehow*. However, in the context of using the stage1 backend,
we need to tell stage1 to include compiler-rt only if stage1 is the place that
needs to provide those symbols. Otherwise the stage2 infrastructure will take care
of it in the linker, by putting compiler_rt.o into a static archive, or linking
compiler_rt.a against an executable. In other words we only want to set this flag
for stage1 if we are using build-obj.
When using `build-exe` or `build-lib -dynamic`, `-fcompiler-rt` means building
compiler-rt into a static library and then linking it into the executable.
When using `build-lib`, `-fcompiler-rt` means building compiler-rt into an
object file and then adding it into the static archive.
Before this commit, when using `build-obj`, zig would build compiler-rt
into an object file, and then on ELF, use `lld -r` to merge it into the
main object file. Other linker backends of LLD do not support `-r` to
merge objects, so this failed with error messages for those targets.
Now, `-fcompiler-rt` when used with `build-obj` acts as if the user puts
`_ = @import("compiler_rt");` inside their root source file. The symbols
of compiler-rt go into the same compilation unit as the root source file.
This is hooked up for stage1 only for now. Once stage2 is capable of
building compiler-rt, it should be hooked up there as well.