18d6523888ef08bc66eb808075d13c5e00b8fcf4 regressed compiler-rt tests for
stage1 because it removed a workaround. I updated the comment to explain
exactly what the workaround is, so that it doesn't get removed again.
This reverts commit 75c9936737a6ba991d4ef187ddc9d51bc0ad0998, reversing
changes made to 7f13f5cd5f5a518638b15d7225eae2d88ec1efb5.
I don't think `runZigBuild` belongs in std.testing. We already have
`test/standalone/*` for this.
Additionally, test names should describe what they are testing rather
than referencing GitHub issue numbers.
* unify the logic for exporting math functions from compiler-rt,
  with the appropriate suffixes and prefixes (see the sketch after
  this list).
- add all missing f128 and f80 exports. Functions with missing
implementations call other functions and have TODO comments.
- also add f16 functions
* move math functions from freestanding libc to compiler-rt (#7265)
* enable all the f128 and f80 code in the stage2 compiler and behavior
tests (#11161).
* update std lib to use builtins rather than `std.math`.
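For illustration, here is a minimal sketch of the naming scheme using
`ceil` (0.10-dev-era `@export` syntax; the bodies are placeholders, not
the actual compiler-rt implementations):
```
comptime {
    // f32/f64 use the plain C names; f16/f80 get a __ prefix with the
    // h/x suffixes; f128 uses the q suffix. `ceill` would then alias
    // whichever of these matches c_longdouble on the target.
    @export(ceilf, .{ .name = "ceilf", .linkage = .Weak });
    @export(ceil, .{ .name = "ceil", .linkage = .Weak });
    @export(__ceilh, .{ .name = "__ceilh", .linkage = .Weak });
    @export(__ceilx, .{ .name = "__ceilx", .linkage = .Weak });
    @export(ceilq, .{ .name = "ceilq", .linkage = .Weak });
}

// Placeholder bodies; the real implementations differ.
fn ceilf(x: f32) callconv(.C) f32 { return @ceil(x); }
fn ceil(x: f64) callconv(.C) f64 { return @ceil(x); }
fn __ceilh(x: f16) callconv(.C) f16 { return @ceil(x); }
fn __ceilx(x: f80) callconv(.C) f80 { return @ceil(x); }
fn ceilq(x: f128) callconv(.C) f128 { return @ceil(x); }
```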
This function is codegen'd incorrectly in stage2, since it fails to
generate the correct soft-float operations. This will be fixed once
issue #11161 is implemented.
So that people can start experimenting with compiling their projects
with the self-hosted compiler.
I expect this commit to be reverted after #89 is closed.
There were a few minor bugs in the rounding behavior and Inf/NaN
handling of the f80 __addxf3 and __subxf3 functions.
This change updates the original generic implementation to correctly
handle f80 floats, including the explicit integer bit.
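For context, f80 stores the leading significand bit explicitly, unlike
the IEEE binary32/64/128 formats where it is implicit. A minimal sketch
of the layout (0.10-dev-era builtin syntax):
```
const std = @import("std");

test "f80 explicit integer bit" {
    // x86 extended precision: 1 sign bit, 15 exponent bits, and a
    // 64-bit significand whose leading (integer) bit is explicit.
    const bits = @bitCast(u80, @as(f80, 1.0));
    const significand = @truncate(u64, bits);
    // For 1.0, the integer bit (bit 63 of the significand) is set.
    try std.testing.expectEqual(@as(u64, 1), significand >> 63);
}
```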
Some SPARC CPUs (particularly old and/or embedded ones) only have the
atomic TAS (test-and-set) instruction available (`ldstub`). This adds
support for emitting that instruction in the spinlock.
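As a rough sketch of the lock protocol (generic Zig, not the SPARC
lowering itself; on the affected CPUs the exchange below is what gets
emitted as `ldstub`, which atomically reads a byte and sets it to 0xff):
```
var lock: u8 = 0; // 0 = free, 0xff = held, matching ldstub's semantics

fn acquire() void {
    // Spin until the previous value was 0 (lock was free).
    while (@atomicRmw(u8, &lock, .Xchg, 0xff, .Acquire) != 0) {}
}

fn release() void {
    @atomicStore(u8, &lock, 0, .Release);
}

test "TAS spinlock" {
    acquire();
    release();
}
```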
RunStep used to return error.UncleanExit on an unexpected exit code,
which was confusing and unclear. When that was changed, the error
handling code in build_runner was not updated, which produced an error
trace. This commit explicitly handles error.UnexpectedExitCode in
build_runner so that the behavior matches that of zig 0.8.1, after
which it had regressed.
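A sketch of the shape of the fix (the surrounding names here, such as
`runStep`, are approximations, not the actual build_runner code):
```
const std = @import("std");
const Step = std.build.Step;

fn runStep(step: *Step) !void {
    step.make() catch |err| switch (err) {
        // The RunStep already reported the command and exit code, so
        // exit cleanly instead of printing an error return trace.
        error.UnexpectedExitCode => std.process.exit(1),
        else => return err,
    };
}
```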
* make it always return a fully qualified name. stage1 is inconsistent
about this.
* AstGen: fix anon_name_strategy to correctly be `func` when anon type
  creation happens in the operand of the return expression (see the
  example below).
* Sema: implement type names for the "function" naming strategy.
* Put "enum", "union", "opaque", or "struct" in place of "anon" when
creating respective anonymous Decl names.
* std.testing: add `expectStringStartsWith`. Didn't end up using it
after all.
Also, this enables the real test runner for the stage2 LLVM backend
(sans wasm32), since it works now.
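For example, an illustration of the `func` naming strategy (not a test
from the commit; the exact qualified prefix depends on the root module):
```
const std = @import("std");

fn Pair(comptime T: type) type {
    // Anonymous type created in the operand of `return`: with the
    // `func` naming strategy it is named after the call.
    return struct { first: T, second: T };
}

test "fully qualified type name" {
    // Prints something like "<module>.Pair(u32)" rather than an
    // "anon"-style placeholder.
    std.debug.print("{s}\n", .{@typeName(Pair(u32))});
}
```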
A const local whose init expression wrote to the result pointer, but
which was then elided to initialize directly, was missing the coercion
to the type annotation.
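A guess at the triggering shape, since the commit message doesn't
include a reproducer:
```
const std = @import("std");

test "const local coerced to its annotation" {
    // The aggregate init writes through the result pointer; when the
    // temporary is elided, the comptime_int elements must still be
    // coerced to the annotated element type u8.
    const x: [3]u8 = .{ 1, 2, 3 };
    try std.testing.expectEqual(@as(u8, 2), x[1]);
}
```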
* mul_add AIR instruction: use `pl_op` instead of `ty_pl`. The type is
always the same as the operand; no need to waste bytes redundantly
storing the type.
* AstGen: use coerced_ty for all the operands except the one we use to
  communicate the type.
* Sema: use the correct source location for requireRuntimeBlock in
handling of `@mulAdd`.
* native backends: handle liveness even for the functions that are
TODO.
* C backend: implement `@mulAdd`. It lowers to libc calls.
* LLVM backend: make `@mulAdd` handle all float types.
- improved fptrunc and fpext to handle f80 with compiler-rt calls.
* Value.mulAdd: handle all float types and use the `@mulAdd` builtin.
* behavior tests: revert the changes to testing `@mulAdd`. Those
  changes broke the test coverage, leaving it tested only at compile
  time (see the example after this list).
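Example usage with runtime-known operands, so the runtime path is
exercised rather than everything folding at compile time:
```
const std = @import("std");

test "@mulAdd with runtime operands" {
    var a: f32 = 0.5;
    var b: f32 = 6.0;
    var c: f32 = 1.0;
    // Fused a * b + c with a single rounding step.
    const r = @mulAdd(f32, a, b, c);
    try std.testing.expectEqual(@as(f32, 4.0), r);
}
```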
Improved f80 support:
* std.math.fma handles f80
* move fma functions from freestanding libc to compiler-rt
- add __fmax and fmal
- export __fmax and fmaq only when they don't alias fmal.
- make their linkage weak, just like the rest of the compiler-rt
  symbols.
* removed `longDoubleIsF128` and replaced it with `longDoubleIs`, which
  takes a type as a parameter (see the sketch after this list). The
  implementation is now more accurate and handles more targets.
  Similarly, in stage2 the function CTypes.sizeInBits is more accurate
  for long double on more targets.
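Illustrative usage of the two APIs touched here (the `longDoubleIs`
call is assumed from the description above, not copied from the
commit):
```
const std = @import("std");
const builtin = @import("builtin");

test "fma and longDoubleIs" {
    // std.math.fma takes the float type as its first parameter.
    try std.testing.expectEqual(@as(f80, 4.0), std.math.fma(f80, 0.5, 6.0, 1.0));
    // Query which float type long double is on the native target.
    if (builtin.target.longDoubleIs(f80)) {
        // On such a target, fmal would alias the f80 implementation.
    }
}
```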
Before this we would see ZIR code like this:
```
%69 = alloc_inferred_mut()
%70 = array_base_ptr(%69)
%71 = elem_ptr_imm(%70, 0)
```
This would crash the compiler because it expects to see a
`coerce_result_ptr` instruction after `alloc_inferred_mut`, but that
does not happen in this case because there is no type to coerce the
result pointer to.
In this commit I modified AstGen so that the codegen is similar to
what you get when using a const instead of a var:
```
%69 = alloc_inferred_mut()
%76 = array_init_anon(.{%71, %73, %75})
%77 = store_to_inferred_ptr(%69, %76)
```
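A guess at the kind of source that reaches this path (an anonymous init
of an inferred-type `var`, so there is no annotation to coerce the
result pointer to):
```
fn foo() u8 { return 1; }
fn bar() u8 { return 2; }
fn baz() u8 { return 3; }

test "anonymous init of an inferred-type var" {
    // No type annotation means no coerce_result_ptr after
    // alloc_inferred_mut; AstGen now emits array_init_anon followed
    // by store_to_inferred_ptr, as in the ZIR above.
    var x = .{ foo(), bar(), baz() };
    _ = x;
}
```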
This does not obey result locations, meaning if you call a function
inside the initializer, it will end up doing a copy into the LHS.
Solving this problem, or changing the language to make this legal,
will be left for my future self to deal with. Hi future self!
I see you reading this commit log. Hope you're doing OK buddy.
Sema for `store_ptr` of a tuple, where the pointer's element type is
in fact the same as the operand's type, had an issue where the comptime
fields would get incorrectly lowered to runtime stores to bogus
addresses. This is solved with an exception to the Sema optimization
that handles tuple stores element-wise: in the case that we are storing
a tuple to itself, the optimization is skipped. This results in better
code and avoids the problem. However, this caused a regression in
GeneralPurposeAllocator from the standard library.
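A guess at the shape of the problem (not the exact reproducer):
```
const std = @import("std");

test "storing a tuple to itself" {
    // Field 1 is a comptime field (a comptime_int); the whole-tuple
    // store below must not lower it to a runtime store.
    var t = .{ @as(u32, 1), 2 };
    const p = &t;
    p.* = t; // same tuple type on both sides: element-wise copy skipped
    try std.testing.expectEqual(@as(u32, 1), t[0]);
}
```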
I reverted the test runner code back to the simpler path. It's too
hard to debug standard library code in the LLVM backend right now,
since we don't have debug info hooked up. Also, we didn't have any
behavior
test coverage of whatever was regressed, so let's try to get that
coverage added as a stepping stone to getting the standard library
working.
Looks like all these functions are at least compiling successfully. I
haven't tried to run their test suites yet.
The one exception is `clone`, which crashes the compiler due to its
inline assembly. Still, this is progress!