* update to the new cache hash API
* std.Target defaultVersionRange moves to std.Target.Os.Tag
* std.Target.Os gains getVersionRange which returns a tagged union
* start the process of splitting Module into Compilation and "zig
module".
- The parts of Module having to do with only compiling zig code are
extracted into ZigModule.zig.
- Next step is to rename Module to Compilation.
- After that rename ZigModule back to Module.
* implement proper cache hash usage when compiling C objects, and
properly manage the file lock of the build artifacts.
* make versions optional to match recent changes to master branch.
* proper cache hash integration for compiling zig code
* proper cache hash integration for linking even when not compiling zig
code.
* ELF LLD linking integrates with the caching system. A comment from
the source code:
Here we want to determine whether we can save time by not invoking LLD when the
output is unchanged. None of the linker options or the object files that are being
linked are in the hash that namespaces the directory we are outputting to. Therefore,
we must hash those now, and the resulting digest will form the "id" of the linking
job we are about to perform.
After a successful link, we store the id in the metadata of a symlink named "id.txt" in
the artifact directory. So, now, we check if this symlink exists, and if it matches
our digest. If so, we can skip linking. Otherwise, we proceed with invoking LLD.
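As a rough sketch of that check, assuming the id is recorded as the target
string of the "id.txt" symlink (the helper names needsRelink/recordLinkId
and the buffer size are illustrative, not the compiler's actual code):

```zig
const std = @import("std");

/// Returns true if we must invoke LLD: either no id has been recorded
/// yet, or the recorded id does not match the digest of the current
/// linker inputs.
fn needsRelink(
    dir: std.fs.Dir,
    id_symlink_basename: []const u8,
    digest: []const u8,
) bool {
    var buf: [64]u8 = undefined; // assumes the hex digest fits in 64 bytes
    const prev_digest = dir.readLink(id_symlink_basename, &buf) catch return true;
    return !std.mem.eql(u8, prev_digest, digest);
}

/// After a successful link, record the digest so that an identical
/// compilation later can skip invoking LLD entirely.
fn recordLinkId(
    dir: std.fs.Dir,
    id_symlink_basename: []const u8,
    digest: []const u8,
) !void {
    dir.deleteFile(id_symlink_basename) catch {};
    try dir.symLink(digest, id_symlink_basename, .{});
}
```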
* implement disable_c_depfile option
* add tracy to a few more functions
* improve the ZIR generated for variable decls
- utilize the same ZIR for the type and init value when possible
- init value gets a result location with the variable type, so no
manual coercion is required.
* no longer use return instructions to extract values out of comptime
blocks. Instead run the analysis and then look at the corresponding
analyzed instruction, relying on the comptime mechanism to report
errors when something could not be comptime evaluated.
* move SPU code from std to the self-hosted compiler
* change std lib comments to be descriptive rather than prescriptive
* avoid usingnamespace
* fix case style of error codes
* remove duplication of producer_string
* generalize handling of pointers on sub-64-bit architectures
* clean up SPU II related test harness code
* move `is_pub` from `Fn` to `Decl`. Keeping `is_pub` on `Fn` would cost
us an additional 8 bytes of memory per function, which is a real bummer
since it's only 1 bit of information.
If we wanted to really remove this, I suspect we could make this a
function isPub() which looks at the AST of the corresponding Decl and
finds whether the FnProto AST node has the pub token. However, I saw an
easier approach: whether something is pub or not is actually a property
of a Decl anyway, not a function, so we can look at moving the field
into Decl. Indeed, doing this, we see that Decl already has
deletion_flag: bool, which is hiding in the padding bytes between the
enum (1 byte) and the following u32 field (generation). So if we put the
is_pub bool there, it actually takes up no additional space, with 1 byte
of padding remaining.
This was an easy reworking of the code since any func.is_pub could be
changed simply to func.owner_decl.is_pub.
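A tiny illustration of that padding argument; the field names follow the
description above, but the types are simplified stand-ins rather than the
compiler's actual Decl:

```zig
const std = @import("std");

const DeclWithoutPub = struct {
    analysis: enum(u8) { unreferenced, complete }, // 1 byte
    deletion_flag: bool, // lives in the padding before the u32
    generation: u32,
};

const DeclWithPub = struct {
    analysis: enum(u8) { unreferenced, complete },
    deletion_flag: bool,
    is_pub: bool, // also fits in the padding; 1 padding byte remains
    generation: u32,
};

comptime {
    // Adding is_pub does not grow the struct at all.
    std.debug.assert(@sizeOf(DeclWithoutPub) == @sizeOf(DeclWithPub));
}
```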
* modify `Var` to make the init value non-optional, moving the optional
bit into a has_init: bool field. This is worse from the perspective of
control flow and safety, but it makes `@sizeOf(Var)` go from 32 bytes to
24 bytes. The more code we can fit into memory at once, the more
justified we are in using the compiler as a long-running process that
does incremental updates.
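Roughly where the saved bytes come from, again with stand-in types rather
than the real Var and Value:

```zig
const std = @import("std");

const Value = struct { payload: u64 }; // stand-in for the compiler's Value

const VarWithOptional = struct {
    init: ?Value, // the optional adds a flag byte plus padding
    owner_decl: *u32,
    flags: u8,
};

const VarWithBool = struct {
    init: Value, // always present; only meaningful when has_init is true
    owner_decl: *u32,
    has_init: bool, // shares space with the other small fields
    flags: u8,
};

comptime {
    std.debug.assert(@sizeOf(VarWithOptional) > @sizeOf(VarWithBool));
}
```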
* during codegen we do not yet know the indexes that will be used for
called functions. Therefore, we store the offset into the in-memory code
where the index is needed, along with a pointer to the Decl, and use
this data to insert the proper indexes while writing the binary in the
flush function.
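A minimal sketch of that bookkeeping; CallFixup, link_index, and
applyFixups are hypothetical names for illustration, not the real data
structures:

```zig
const std = @import("std");

const Decl = struct { link_index: u32 }; // stand-in; the real Decl is larger

const CallFixup = struct {
    code_offset: usize, // where in the in-memory code the callee index belongs
    target: *const Decl, // callee; its final index is only known at flush time
};

// Called from flush(), once every function has been assigned its index.
fn applyFixups(code: []u8, fixups: []const CallFixup) void {
    for (fixups) |fixup| {
        const dest = code[fixup.code_offset..][0..4];
        std.mem.writeInt(u32, dest, fixup.target.link_index, .little);
    }
}
```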
* introduce a dump() function on Module.Fn which helpfully prints to
stderr the ZIR representation of a function (can be called before
attempting to codegen it). This is a debugging tool.
* implement x86 codegen for loops
* liveness: fix analysis of conditional branches. The logic was buggy
in a couple ways:
- it never actually saved the results into the IR instruction (fixed now)
- it incorrectly labeled operands as dying when their true death was
after the conditional branch ended (fixed now)
* ZIR rendering is enhanced to show liveness analysis results. This
helps when debugging liveness analysis.
* fix a bug in ZIR rendering that numbered instructions incorrectly
closes #6021
* AST: flatten ControlFlowExpression into Continue, Break, and Return.
* AST: unify identifiers and literals into the same AST type: OneToken
* AST: ControlFlowExpression uses TrailerFlags to optimize storage
space.
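The trailer-flags idea, sketched conceptually (this is not the
std.meta.TrailerFlags API itself): the fixed part of the node carries a
few presence bits, and each optional field is stored in a trailing buffer
only when its bit is set, so a bare break or return pays for no label and
no operand:

```zig
const std = @import("std");

const Flags = struct {
    has_label: bool = false,
    has_rhs: bool = false,
};

// How many trailing bytes a node needs for a given combination of
// optional fields.
fn trailerSize(flags: Flags) usize {
    var size: usize = 0;
    if (flags.has_label) size += @sizeOf(u32); // label token index
    if (flags.has_rhs) size += @sizeOf(usize); // operand node pointer
    return size;
}

comptime {
    std.debug.assert(trailerSize(.{}) == 0); // bare `break`/`return`
    std.debug.assert(trailerSize(.{ .has_rhs = true }) == @sizeOf(usize));
}
```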
* astgen: support `var` as well as `const` locals, and support
explicitly typed locals. Corresponding Module and codegen code is not
implemented yet.
* astgen: support result locations.
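For reference, the local-variable forms this covers, written as ordinary
Zig source with the result-location behavior noted in comments:

```zig
fn locals() u32 {
    // untyped const local
    const a = 41;
    // explicitly typed var local: the init expression receives a result
    // location of type *u32, so the comptime_int value of `a + 1` is
    // coerced in place, with no separate coercion step.
    var b: u32 = a + 1;
    b += 1;
    return b;
}
```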
* ZIR: add the following instructions (see the corresponding doc
comments for explanations of semantics):
- alloc
- alloc_inferred
- bitcast_result_ptr
- coerce_result_block_ptr
- coerce_result_ptr
- coerce_to_ptr_elem
- ensure_result_used
- ensure_result_non_error
- ret_ptr
- ret_type
- store
- param_type
* the skeleton structure for result locations is set up. It's looking
pretty clean so far.
* add compile error for unused result and compile error for discarding
errors.
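For example (exact error wording aside), the first two calls below are
now rejected and the explicit forms are required; produce and mayFail are
made-up helpers:

```zig
fn produce() u32 {
    return 123;
}

fn mayFail() !void {
    return error.Oops;
}

fn demo() !void {
    // Both of the following now fail to compile:
    //     produce(); // error: the u32 result is unused
    //     mayFail(); // error: the error is silently discarded
    _ = produce(); // ok: result explicitly discarded
    try mayFail(); // ok: error handled or propagated
}
```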
* astgen: split builtin calls up to be implemented manually, and implement
`@as`, `@bitCast` (and others) with respect to result locations.
* add CLI support for hex and raw object formats. They are not
supported by the self-hosted compiler yet, and emit errors.
* rename the `--c` CLI option to `-ofmt=[objectformat]`, which can be
any of the object formats. Only ELF and C are supported so far. Also
added the missing entries to the help text.
* Remove hard tabs from C backend test cases. Shame on you Noam, you
are grounded, you should know better, etc. Bad boy.
* Delete C backend code and test case that relied on comptime_int
incorrectly making it all the way to codegen.