Because ArrayList.initCapacity uses 'precise' capacity allocation, this should save memory on average, and will definitely save memory in cases where an ArrayList is used where a plain allocated slice could also have been used.
If these functions are called more than once, the array list would no longer be guaranteed to have enough capacity during the appendAssumeCapacity calls. With ensureUnusedCapacity, enough capacity is always guaranteed regardless of how many times the function is called.
These calls are all late initializations of ArrayLists that were initialized outside the current scope. This allows us to still get the potential memory-saving benefits of the 'precision' of initCapacity.
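A minimal sketch of the pattern in question (the element type and item count here are illustrative, not from the change itself):

```zig
const std = @import("std");

test "precise init, then re-guaranteed appends" {
    const gpa = std.testing.allocator;
    const items = [_]u32{ 1, 2, 3 };

    // initCapacity allocates exactly enough for the known item count.
    var list = try std.ArrayList(u32).initCapacity(gpa, items.len);
    defer list.deinit();
    for (items) |item| list.appendAssumeCapacity(item);

    // If this code can run again on the same list, ensureUnusedCapacity
    // re-establishes the guarantee that appendAssumeCapacity relies on.
    try list.ensureUnusedCapacity(items.len);
    for (items) |item| list.appendAssumeCapacity(item);

    try std.testing.expectEqual(@as(usize, 6), list.items.len);
}
```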
Add an option to allow the '-z notext' flag to be passed to the linker
via the compiler frontend; this flag tells the linker that relocations
in read-only sections are permitted. Certain targets such as Solana BPF
rely on this flag.
Expose all the linker's '-z' options, i.e. '-z nodelete', '-z now', and
'-z relro', in the compiler frontend. Usage documentation has been
updated accordingly.
Expose the '-z notext' flag in the standard library build runner.
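As a hedged illustration, a build script using the new option might look like this; the exact field name on the step (`link_z_notext`) is an assumption based on this change:

```zig
const std = @import("std");

pub fn build(b: *std.build.Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    // Permit relocations in read-only sections; field name is assumed.
    exe.link_z_notext = true;
    exe.install();
}
```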
The ensureUnusedCapacity call did not reserve a big enough number. I
changed it to no longer guess the capacity, since the number of
possible items is not determinable ahead of time; this also avoids
allocating more memory than necessary.
The C backend is the only backend that requires each decl to be output
in an order that satisfies the dependency graph. Here it is implemented
with a simple algorithm based on a `remaining_decls` set, using the
`dependencies` edges that are already stored for each Decl.
This satisfies incremental compilation as well as how `zig test` works,
which calls `updateDecl` on `test_functions`.
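A minimal sketch of such an ordering pass, using illustrative types rather than the compiler's actual Decl machinery:

```zig
const std = @import("std");

const Decl = struct {
    name: []const u8,
    dependencies: []const *Decl,
};

/// Emit a decl only after everything it depends on has been emitted,
/// removing each decl from `remaining` as it is handled.
fn emitInDependencyOrder(
    remaining: *std.AutoArrayHashMap(*Decl, void),
    decl: *Decl,
    out: *std.ArrayList([]const u8),
) error{OutOfMemory}!void {
    if (!remaining.contains(decl)) return; // already emitted
    _ = remaining.swapRemove(decl); // also breaks dependency cycles
    for (decl.dependencies) |dep| {
        try emitInDependencyOrder(remaining, dep, out);
    }
    try out.append(decl.name);
}

test "emits dependencies first" {
    var a = Decl{ .name = "a", .dependencies = &.{} };
    var b = Decl{ .name = "b", .dependencies = &.{&a} };
    var remaining = std.AutoArrayHashMap(*Decl, void).init(std.testing.allocator);
    defer remaining.deinit();
    try remaining.put(&a, {});
    try remaining.put(&b, {});
    var out = std.ArrayList([]const u8).init(std.testing.allocator);
    defer out.deinit();
    try emitInDependencyOrder(&remaining, &b, &out);
    try std.testing.expectEqualStrings("a", out.items[0]);
    try std.testing.expectEqualStrings("b", out.items[1]);
}
```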
* without this, when an included relocatable references a common symbol
from another translation unit, the symbol would not be correctly
removed from the unresolved lookup table, triggering a misleading
assertion down the line
* in debug builds, assert upon removal that we indeed removed a ref
instead of silently ignoring it
* add test case that covers this issue
Elements of the output JSON are ordered by nondecreasing VM address in
the generated binary (some elements might occupy the same VM address;
for example, an atom and a relocation might coincide in the address
space).
The generated JSON can be inspected manually or via `zig-snapshots`, a
preview tool I am currently working on. It will allow the user to
interactively inspect the state of the linker together with the
positioning of sections, symbols, atoms and relocations within each
snapshot state, and in the future, between snapshots too. This should
allow for quicker debugging of the linker, which is nontrivial when it
runs in incremental mode.
Note that the state will only be dumped if the compiler is built with
the `-Dlink-snapshot` flag, and the compiler is then passed the
`--debug-link-snapshot` flag when compiling a source file or project.
* do not add linkage scope to aliased exported symbols - this is
not respected on macOS
* special-case `MachO.openPath` in `link.File.openPath` as on macOS
we always link with zld
* redirect to `MachO.flushObject` when linking relocatable objects
in the MachO linker, and move the entire linking logic into
`MachO.flushModule`
* apply late symbol resolution for globals - instead of resolving
the exact location of a symbol in locals, globals or undefs,
we postpone the exact resolution until we have a full picture
for relocation resolution.
* fixup stubs to defined symbols - this is currently a hack rather
than a final solution. I'll need to work out the details to make
it more approachable. Currently, we preemptively create a stub
for a lazily bound global, and fix up the stub offsets in the stub
helper routine only if the global turns out to be undefined. This is
quite wasteful in terms of space, as we create stub, stub helper and
lazy ptr atoms but don't use them for defined globals.
* change log scope to .link for macho.
* remove redundant code paths from Object and Atom.
* drastically simplify the contents of Relocation struct (i.e., it is
now a simple superset of macho.relocation_info), clean up relocation
parsing and resolution logic.
There was duplicated logic for whether to include compiler_rt in the
linker line both in the frontend and in the linker backends. Now the
logic is only in the frontend; the linker puts it on the linker line if
the frontend provides it.
Fixes the CI failures.
* AstGen: fix emitting `store_to_inferred_ptr` when it should be emitting
`store` for a variable that has an explicit alignment.
* Compilation: fix a couple memory leaks
* Sema: implement support for locals that have specified alignment.
* Sema: implement `@intCast` when it needs to emit an AIR instruction.
* Sema: implement `@alignOf`
* Implement debug printing for extended alloc ZIR instructions.
* Remove the builtins `@addWithSaturation`, `@subWithSaturation`,
`@mulWithSaturation`, and `@shlWithSaturation` now that we have
first-class syntax for saturating arithmetic.
* langref: Clarify the behavior of `@shlExact`.
* Ast: rename `bit_shift_left` to `shl` and `bit_shift_right` to `shr`
for consistency.
* Air: rename to include underscore separator with consistency with
the rest of the ops.
* Air: add shl_exact instruction
* Use non-extended tags for saturating arithmetic, to keep it
simple so that all the arithmetic operations can be done the same
way.
- Sema: unify analyzeArithmetic with analyzeSatArithmetic
- implement comptime `+|`, `-|`, and `*|`
- allow float operands to saturating arithmetic
* `<<|` allows any integer type for the RHS (see the example after this list).
* C backend: fix rebase conflicts
* LLVM backend: reduce the amount of branching for arithmetic ops
* zig.h: fix magic number not matching actual size of C integer types
- modify AstGen binOpExt()/assignBinOpExt() to accept generic extended payload T
- rework Sema zirSatArithmetic() to use existing sema.analyzeArithmetic() by adding an `opt_extended` parameter.
- add airSatOp() to codegen/c.zig
- add saturating functions to src/link/C/zig.h
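For reference, an illustrative test of the first-class syntax that replaced those builtins:

```zig
const std = @import("std");

test "saturating arithmetic syntax" {
    const a: u8 = 250;
    try std.testing.expectEqual(@as(u8, 255), a +| 10); // clamps at maxInt(u8)
    const b: i8 = -120;
    try std.testing.expectEqual(@as(i8, -128), b -| 20); // clamps at minInt(i8)
    const c: u8 = 200;
    try std.testing.expectEqual(@as(u8, 255), c *| 2);
    const d: u8 = 1;
    // <<| saturates instead of overflowing; the RHS may be any integer type.
    try std.testing.expectEqual(@as(u8, 255), d <<| 9);
}
```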
* LLVM backend: respect `sub_path` just like the other stage2 backends
do.
* Compilation has some new logic to only emit work queue jobs for
building stuff when it believes itself to be capable. The linker
backends no longer have duplicate logic; instead they respect the
optional bit on the respective asset.
Introduce an explicit decl_map from *Decl to LLVMValueRef. Doc comment
reproduced here:
Ideally we would use `llvm_module.getNamedFunction` to go from *Decl to
LLVM function, but that has some downsides:
* we have to compute the fully qualified name every time we want to do the lookup
* for externally linked functions, the name is not fully qualified, but when
a Decl goes from exported to not exported and vice versa, we would use the
wrong version of the name and incorrectly get 'function not found' in the
LLVM module.
* it only works for functions, not all globals.
Therefore, this table keeps track of the mapping.
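A rough shape of that table, with stand-in types for the compiler's actual declarations:

```zig
const std = @import("std");

const Decl = opaque {}; // stand-in for Module.Decl
const LlvmValueRef = *opaque {}; // stand-in for the LLVM C API value handle

/// The explicit table: no name computation, and it covers all globals.
const DeclMap = std.AutoHashMap(*Decl, LlvmValueRef);

/// On freeDecl, the backend deletes the entry for the deleted Decl.
fn deleteDeclEntry(map: *DeclMap, decl: *Decl) void {
    _ = map.remove(decl);
}
```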
Non-exported functions now use fully-qualified symbol names.
`Module.Decl.getFullyQualifiedName` now returns a sentinel-terminated
slice which is useful to pass to LLVMAddFunction.
Instead of using aliases for all external symbols, now the LLVM backend
takes advantage of LLVMSetValueName to rename functions that become
exported. Aliases are still used for the second and remaining exports.
freeDecl is now handled properly in the LLVM backend, deleting the
LLVMValueRef corresponding to the Decl being deleted. The linker
backends for ELF, COFF, Mach-O, and Wasm had to be updated to forward
the freeDecl call to the LLVM backend.
Previously, linker backends or machine code backends were able to hold
on to references into Sema's temporary arena. However, there can be
large objects stored there that we want to free after machine code is
generated.
The primary change in this commit is to use a temporary arena for Sema
of function bodies that gets freed after machine code backend finishes
handling `updateFunc` (at the same time that Air and Liveness get freed).
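A minimal sketch of the lifetime described above, with stand-in types (the real call sites in Compilation differ):

```zig
const std = @import("std");

const Fn = struct { name: []const u8 };
const Air = struct { body: []const u8 };

// Stand-in for running Sema on a function body; allocates into the arena.
fn semaFunctionBody(arena: std.mem.Allocator, func: *const Fn) !Air {
    return Air{ .body = try arena.dupe(u8, func.name) };
}

// Stand-in for a machine code backend; it must copy anything it wants to
// keep, because the arena behind `air` is freed right after this returns.
fn backendUpdateFunc(func: *const Fn, air: Air) void {
    _ = func;
    _ = air;
}

fn updateFuncPipeline(gpa: std.mem.Allocator, func: *const Fn) !void {
    var sema_arena = std.heap.ArenaAllocator.init(gpa);
    defer sema_arena.deinit(); // freed with Air and Liveness after updateFunc

    const air = try semaFunctionBody(sema_arena.allocator(), func);
    backendUpdateFunc(func, air);
}
```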
The other changes in this commit are fixing issues that fell out from
the primary change.
* The C linker backend is rewritten to handle updateDecl and updateFunc
separately. Also, all Decl updates get access to typedefs and
fwd_decls, not only functions.
* The C linker backend is updated to the new API that does not depend
on allocateDeclIndexes and does not have to handle garbage collected
decls.
* The C linker backend uses an arena for Type/Value objects that
`typedefs` references. These can be garbage collected every so often
after flush(), however that garbage collection code is not
implemented at this time. It will be pretty simple: allocate a
new arena, copy all the Type objects to it, update the keys of the
hash map, and free the old arena (see the sketch after this list).
* Sema: fix a handful of instances of not copying Type/Value objects
from the temporary arena into the appropriate Decl arena.
* Type: fix some function types not reporting hasCodeGenBits()
correctly.
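A hedged sketch of that future garbage collection; the `Type.copy` method and the map shape here are stand-ins for the backend's actual types:

```zig
const std = @import("std");

const Type = struct {
    tag: u32,
    // A real Type would deep-copy its payload into the arena here.
    fn copy(self: Type, arena: std.mem.Allocator) !Type {
        _ = arena;
        return self;
    }
};

fn gcTypedefs(
    gpa: std.mem.Allocator,
    old_arena: *std.heap.ArenaAllocator,
    typedefs: *std.AutoHashMap(Type, []const u8),
) !void {
    var new_arena = std.heap.ArenaAllocator.init(gpa);
    errdefer new_arena.deinit();
    // Copy each Type key into the new arena, updating the key in place;
    // the logical value (and thus its hash) is unchanged.
    var it = typedefs.iterator();
    while (it.next()) |entry| {
        entry.key_ptr.* = try entry.key_ptr.copy(new_arena.allocator());
    }
    old_arena.deinit();
    old_arena.* = new_arena;
}
```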
Otherwise, for the last section in a segment, it could happen that we
would not expand the segment when actually required, thus exceeding the
segment's size and causing data clobbering and dyld runtime errors.
* In watch mode, when changing the C source, we will trigger a complete
relink of objects, dylibs and archives (atoms coming from the
incremental updates stay put, however). This means we need to undo
the metadata populated when linking in objects, archives and dylibs.
* Remove the unused bit for splitting sections into atoms. This
optimisation will probably best be rewritten from scratch once
self-hosted matures, so I'm parking the idea for now. Also, for easier
management of atoms spawned from the Object file, keep the atoms
subgraph as part of the Object file struct.
* Remove obsolete ref to static initializers in object struct.
* Implement handling of global symbol collision in updateDeclExports.
* langref: add some more "see also" links for atomics
* Add the following AIR instructions
- atomic_load
- atomic_store_unordered
- atomic_store_monotonic
- atomic_store_release
- atomic_store_seq_cst
- atomic_rmw
* Implement those AIR instructions in the LLVM and C backends (see the
example after this list).
* AstGen: make the `ty` result locations for `@atomicRmw`, `@atomicLoad`,
and `@atomicStore` be `coerced_ty` to avoid unnecessary ZIR
instructions when Sema will be doing the coercions redundantly.
* Sema for `@atomicLoad` and `@atomicRmw` is done; however, Sema for
`@atomicStore` is not yet implemented.
- comptime eval for `@atomicRmw` is not yet implemented.
* Sema: flesh out `coerceInMemoryAllowed` a little bit more. It can now
handle pointers.
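An illustrative use of the builtins these instructions back (enum literal spellings per the std of the time):

```zig
const std = @import("std");

test "atomic load and rmw" {
    var x: u32 = 0;
    // atomic_rmw: returns the previous value.
    const prev = @atomicRmw(u32, &x, .Add, 1, .SeqCst);
    try std.testing.expectEqual(@as(u32, 0), prev);
    // atomic_load with seq_cst ordering.
    try std.testing.expectEqual(@as(u32, 1), @atomicLoad(u32, &x, .SeqCst));
}
```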
* Implement Sema for `@cmpxchgWeak` and `@cmpxchgStrong`. Both runtime
and comptime codepaths are implemented (see the example after this
list).
* Implement Codegen for LLVM backend and C backend.
* Add LazySrcLoc.node_offset_builtin_call_argX 3...5
* Sema: rework comptime control flow.
- `error.ComptimeReturn` is used to signal that a comptime function
call has returned a result (stored in the Inlining struct).
`analyzeCall` notices this and handles the result.
- The ZIR instructions `break_inline`, `block_inline`,
`condbr_inline` are now redundant and can be deleted. `break`,
`block`, and `condbr` function equivalently inside a comptime scope.
- The ZIR instructions `loop` and `repeat` also are modified to
directly perform comptime control flow inside a comptime scope,
skipping an unnecessary mechanism for analysis of runtime code.
This makes Zig perform closer to an interpreter when evaluating
comptime code.
* Sema: zirRetErrValue looks at Sema.ret_fn_ty rather than sema.func
for adding to the inferred error set. This fixes a bug for
inlined/comptime function calls.
* Implement ZIR printing for cmpxchg.
* stage1: make cmpxchg respect --single-threaded
- Our LLVM C++ API wrapper failed to expose this boolean flag before.
* Fix AIR printing for struct fields showing incorrect liveness data.
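For reference, an illustrative example of the cmpxchg semantics being implemented (spellings per the std of the time):

```zig
const std = @import("std");

test "cmpxchg returns null on success" {
    var x: u32 = 5;
    // Succeeds: x == 5, so x becomes 10 and null is returned.
    try std.testing.expectEqual(@as(?u32, null), @cmpxchgStrong(u32, &x, 5, 10, .SeqCst, .SeqCst));
    // Fails: x is now 10, so the current value is returned instead.
    try std.testing.expectEqual(@as(?u32, 10), @cmpxchgStrong(u32, &x, 5, 11, .SeqCst, .SeqCst));
}
```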
This applies to stage2, where we make use of the cache system to work
out whether we need to relink objects when performing incremental
updates. When the process is restarted, however, the idea is in
principle to carry on where we left off by reparsing the prelinked
binary from file; since the required machinery is not there yet, we
currently always fully relink upon restart.