There were several problems, all fixed:
* AstGen was storing field names as references to the original
source code bytes. However, that data would be destroyed when the
source file is updated. Now, it correctly stores the field names in
the Decl arena for the enum. The same fix applies to error set field
names.
* Sema was missing a memset inside `analyzeSwitch`, leaving the "seen
enum fields" array with undefined memory. Now that the entries are all
properly initialized to null, the validation works.
* Moved the "enum declared here" note to the end; it looked odd
interrupting the notes listing which enum values were missing (a sketch
of the resulting diagnostics follows this list).
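For reference, a minimal sketch of code that trips these diagnostics.
It intentionally fails to compile, and the error wording shown in the
comment is approximate:

```zig
const Color = enum { red, green, blue };

// Intentionally non-exhaustive: analyzing this should report the missing
// enum values, with the "enum declared here" note now emitted last, e.g.
//
//   error: switch must handle all possibilities
//   note: unhandled enumeration value: 'green'
//   note: unhandled enumeration value: 'blue'
//   note: enum declared here
fn isRed(c: Color) bool {
    return switch (c) {
        .red => true,
    };
}

comptime {
    _ = isRed; // force semantic analysis of the function
}
```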
Before, incremental compilation would crash when trying to emit compile
errors for the update after introducing a parse error.
Parse errors are handled by not invalidating any existing semantic
analysis. However, only the parse error must be reported; all other
errors are suppressed. Once the parse error is fixed, the next version
of the file can be treated as an update relative to the last successful
update.
* `analyzeContainer` now has an `outdated_decls` set as well as
`deleted_decls`. Instead of queuing up outdated Decls for re-analysis
right away, they are added to this new set. When processing the
`deleted_decls` set, we remove deleted Decls from the
`outdated_decls` set, to prevent deleted Decl pointers from ending up
in the work_queue. Only after processing the deleted decls do we add
analyze_decl work items to the queue (see the sketch after this list).
* Module.deletion_set is now an `AutoArrayHashMap` rather than `ArrayList`.
`declareDeclDependency` will now remove a Decl from it as appropriate.
When processing the `deletion_set` in `Compilation.performAllTheWork`,
it now assumes all Decls in the set are to be deleted.
* Fix crash when handling parse errors. Currently we unload the
`ast.Tree` if any parse errors occur. Previously the code emitted a
LazySrcLoc pointing to a token index, but when we tried to resolve that
token index to a byte offset to create a compile error message, the
`ast.Tree` would already be unloaded. Now we use `LazySrcLoc.byte_abs`
instead of `token_abs` so the error message can be created even with
the `ast.Tree` unloaded.
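A self-contained sketch of that ordering. Integer IDs stand in for Decl
pointers and the set names mirror the ones above, but the code is
illustrative only:

```zig
const std = @import("std");

// Sketch only: the point is the ordering. Deletions are processed and
// pruned from the outdated set before any analyze_decl work items are
// queued, so a deleted Decl can never end up in the work queue.
pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    var outdated_decls = std.AutoArrayHashMap(u32, void).init(allocator);
    defer outdated_decls.deinit();
    var deleted_decls = std.AutoArrayHashMap(u32, void).init(allocator);
    defer deleted_decls.deinit();

    try outdated_decls.put(1, {}); // decl changed by this update
    try outdated_decls.put(2, {}); // decl changed and also deleted
    try deleted_decls.put(2, {});

    // Step 1: process deletions, pruning them from the outdated set.
    for (deleted_decls.keys()) |decl| {
        _ = outdated_decls.swapRemove(decl);
    }

    // Step 2: only now queue analyze_decl work items; decl 2 is gone.
    for (outdated_decls.keys()) |decl| {
        std.debug.print("queue analyze_decl for decl {d}\n", .{decl});
    }
}
```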
Together, these changes solve a crash that happened with incremental
compilation when Decls were added and removed in some combinations.
Fix an off-by-one error in the bits that describe defaults/field
alignments. As soon as we have tests for struct field alignment and
default values, there will be coverage for this.
Now that we're close to supporting all the types, get rid of the `else`
prong and explicitly list out those types that are not yet implemented.
Thanks @g-w1
A simple enum is an enum which has an automatic integer tag type,
all tag values automatically assigned, and no top level declarations.
Such enums are created directly in AstGen and shared by all the
generic/comptime instantiations of the surrounding ZIR code. This
commit implements, but does not yet add any test cases for, simple enums.
A full enum is an enum for which any of the above conditions are not
true. Full enums are created in Sema, and therefore will create a unique
type per generic/comptime instantiation. This commit does not implement
full enums. However, the `enum_decl_nonexhaustive` ZIR instruction is
added and the respective Type functions are filled out.
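For illustration, under that definition:

```zig
// Simple enum: automatic integer tag type, all tag values auto-assigned,
// no top level declarations. AstGen can create this one directly.
const Suit = enum { clubs, diamonds, hearts, spades };

// Full enums: an explicit tag type, an explicit tag value, or any
// declaration makes the enum "full", so it has to be created in Sema.
const Small = enum(u2) { one, two, three };
const Flag = enum {
    on,
    off,
    pub fn flip(f: Flag) Flag {
        return if (f == .on) .off else .on;
    }
};

// Non-exhaustive enums (note the trailing `_`) are likewise full enums;
// the new `enum_decl_nonexhaustive` instruction is reserved for them.
const Errno = enum(u16) { ok = 0, perm = 1, _ };
```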
This commit improves the ZIR code by removing the decls array and
removing the decl_map from AstGen. Instead, decl_ref and decl_val ZIR
instructions index into the `owner_decl.dependencies` ArrayHashMap. We
already need this dependencies array for incremental compilation
purposes, so repurposing it for ZIR decl indexes as well makes for
efficient memory usage.
Similarly, this commit fixes incorrect memory management by removing
the `const` ZIR instruction. The two places where it was used stored
data in the AstGen arena, which may get freed after Sema. Error sets
now properly set up a new anonymous Decl and use a normal decl_val
instruction.
The other use of the `const` ZIR instruction was float literals. These
now use the `float` ZIR instruction when the value fits inside
`zir.Inst.Data`, and `float128` otherwise.
AstGen + Sema: implement int_to_enum and enum_to_int. No tests yet; I
expect to have to make some fixes before they pass. Will do that in the
branch before merging.
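These instructions correspond to the `@intToEnum` and `@enumToInt`
builtins of that era; a quick sketch of the behavior the eventual tests
should cover:

```zig
const std = @import("std");

const Direction = enum(u2) { north, east, south, west };

test "int_to_enum and enum_to_int" {
    const d = @intToEnum(Direction, 2); // lowers to the int_to_enum instruction
    const n = @enumToInt(Direction.east); // lowers to the enum_to_int instruction
    try std.testing.expect(d == .south);
    try std.testing.expect(n == 1);
}
```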
AstGen: fix struct astgen incorrectly counting decls as fields.
Type/Value: give up on trying to exhaustively list every tag all the
time. This makes the file more manageable. Also found a bug with
i128/u128 this way, since the name of the function was more obvious when
looking at the tag values.
Type: implement abiAlignment and abiSize for structs. This will need to
get more sophisticated at some point, but for now it is progress.
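As a rough reference for what those functions compute, the same
quantities are observable at the language level (illustrative only, not
tied to the new implementation):

```zig
const std = @import("std");

const S = struct {
    a: u8,
    b: u32,
};

test "struct ABI size and alignment" {
    // ABI alignment is at least the largest field alignment, and the ABI
    // size is padded out to a multiple of that alignment.
    try std.testing.expect(@alignOf(S) >= @alignOf(u32));
    try std.testing.expect(@sizeOf(S) % @alignOf(S) == 0);
    try std.testing.expect(@sizeOf(S) >= @sizeOf(u8) + @sizeOf(u32));
}
```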
Value: add new `enum_field_index` tag.
Value: add hash_u32, needed when using ArrayHashMap.
As with the other string types, `?[:0]const u8` needs its own case,
as otherwise it will raise an error about using `{}` with slices.
There's no reasonable workaround for this, as you would have to
either discount the use of the empty string value or manually
rework the string to be sentinel-terminated at runtime. It's
useful for passing build options to code using C libraries
that rely heavily on sentinel-terminated arrays for strings.
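A hypothetical `build.zig` snippet showing the intended use, assuming
the `addBuildOption` API of that era (the option name and values are
illustrative):

```zig
const Builder = @import("std").build.Builder;

pub fn build(b: *Builder) void {
    const exe = b.addExecutable("app", "src/main.zig");
    // A sentinel-terminated optional string build option can now be passed
    // straight through to code that hands it to a C library expecting a
    // null-terminated string (read via @import("build_options")).
    exe.addBuildOption(?[:0]const u8, "libname", "mylib");
    exe.install();
}
```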
The avr1 target is a very minimal subset of the AVR ISA, quoting the GCC
manual:
> This ISA is implemented by the minimal AVR core and supported for
> assembler only.
Default to avr2 as GCC and Clang do.
Fuzz testing showed that when a connected socket file descriptor on
Linux is re-acquired after being closed, a subsequent attempt to
establish a connection with that file descriptor causes EALREADY to be
reported.
Instead of panicking, choose to return error.ConnectionPending to allow
users to handle this fairly rare case.
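A hedged sketch of how a caller might handle the new error, assuming
the `std.os.connect` API of that era (function name and handling are
illustrative):

```zig
const std = @import("std");

// Sketch only: treat a pending connection as a recoverable condition
// rather than a crash.
fn connectOrDefer(sock: std.os.socket_t, addr: std.net.Address) !void {
    std.os.connect(sock, &addr.any, addr.getOsSockLen()) catch |err| switch (err) {
        // EALREADY on a re-acquired fd: the connection attempt is still in
        // progress; the caller can poll for writability and retry or give up.
        error.ConnectionPending => return,
        else => return err,
    };
}
```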
* Switch json testing 'roundTrip()' to use FixedBufferStream, improve error handling, remove comptime from param
* Add 'try' to calls to roundTrip() that can now return an error
* Remove comptime from params in json testing, replace expect(false) with letting error propagate
* Add 'try' to calls to ok() that can now return an error
Co-authored-by: Lewis Gaul <legaul@cisco.com>
Before this change, every keypress in the search field caused a browser
history entry, which made navigating back annoying.
On first keypress in the search field, a new history entry is created.
On subsequent keypresses, the most recent history entry is replaced.
Therefore, a typical history after searching and navigating to an entry
might look like:
1. documentation root
2. search page "print"
3. docs for `std.debug.print`
Co-authored-by: Žiga Željko <ziga.zeljko@gmail.com>