I think of lerp() as a way to change coordinate systems, essentially
remapping the input number line onto a shifted and rescaled number line.
In my mind the full number line is remapped, not just the 0-1 segment.
An example of how this is useful: in a game, you can write:
`myPos = lerp(pos0, pos1, easeOutBack(u))`
for some `u` that changes from 0 to 1 over time.
(see https://easings.net/#easeOutBack)
This will animate `myPos` between `pos0` and `pos1`, overshooting the
goal position `pos1` in a nicely-animated way.
`easeOutBack(float)->float` is a pure function that overshoots 1,
and by combining it with `lerp()` we can remap coordinates in other
coordinate systems, making them overshoot in the same way.
However, this overshooting is only possible because `easeOutBack(t)`
sometimes exceeds the range 0-1 (e.g. `easeOutBack(0.5)` is 1.0877),
which is not allowed by the current `math.lerp` implementation.
This commit removes the asserts that prevented this use case. Any value
can now be passed for `t`; for example, `lerp(10, 20, 2.0)` now returns
30 instead of triggering an assertion failure.
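To make the overshoot concrete, here is a hedged sketch in Zig; the
helper functions are illustrative (not the std.math code), with `lerp`
spelled out as the plain `a + (b - a) * t` formula and `easeOutBack`
using the constants from easings.net:

```zig
const std = @import("std");

// Sketch only: the plain lerp formula, not the std.math implementation.
fn lerp(a: f64, b: f64, t: f64) f64 {
    return a + (b - a) * t; // t outside [0, 1] extrapolates past a or b
}

// easeOutBack as defined on easings.net; it exceeds 1.0 near the end.
fn easeOutBack(t: f64) f64 {
    const c1 = 1.70158;
    const c3 = c1 + 1.0;
    const x = t - 1.0;
    return 1.0 + c3 * x * x * x + c1 * x * x;
}

test "t outside 0-1 extrapolates, easing overshoots the goal" {
    try std.testing.expectEqual(@as(f64, 30), lerp(10, 20, 2.0));
    // easeOutBack(0.5) is about 1.0877, so the remapped position
    // overshoots pos1 = 20 before settling back.
    try std.testing.expect(lerp(10, 20, easeOutBack(0.5)) > 20);
}
```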
In the example from issue #19052, `to_read` holds 213_315_584
uncompressed bytes. Calling read with a small output buffer results in
many shifts of that big buffer. This change removes the need to shift
`to_read` after each read.
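For context, a minimal sketch of the general technique in Zig
(illustrative names, not the actual std code): track a read position and
advance it, instead of copying the tail of the buffer to the front on
every read.

```zig
const std = @import("std");

// Illustrative only (not the actual std code): advancing `pos` is O(1)
// per read, whereas shifting the remaining bytes to the front of a
// ~200 MiB buffer on every small read costs O(remaining bytes) per call.
const ToRead = struct {
    data: []const u8,
    pos: usize = 0,

    fn read(self: *ToRead, out: []u8) usize {
        const n = @min(out.len, self.data.len - self.pos);
        @memcpy(out[0..n], self.data[self.pos..][0..n]);
        self.pos += n; // no std.mem.copyForwards of the tail here
        return n;
    }
};

test "read advances a position instead of shifting the buffer" {
    var r: ToRead = .{ .data = "hello world" };
    var buf: [4]u8 = undefined;
    try std.testing.expectEqual(@as(usize, 4), r.read(&buf));
    try std.testing.expectEqualStrings("hell", &buf);
}
```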
Andrew and I have discovered that on Linux the max peak RSS value is
taken to be `max(build_runner, test_suite)`, and since the thunks test
emits a huge binary, we will easily exceed the declared maximum for any
of the test suites. This can be worked around for now by not checking
for $thunk symbols in this test, since that doesn't really yield any
additional information; ideally, however, we would implement a
per-thread local temp arena that can be freed.
These systems write the number of *bits* of their inputs as a u64.
However, if `@sizeOf(usize) == 4`, computing that bit count for an input
message or associated data larger than 512 MiB could overflow.
On 64-bit systems, it is safe to assume that no machine has more than
2 EiB of memory.
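A minimal sketch of the idea (illustrative, not the actual crypto code):
widen the byte length to u64 before multiplying by 8, so the bit count
is never computed in 32-bit usize arithmetic.

```zig
const std = @import("std");

fn bitLength(byte_len: usize) u64 {
    // Widen first: on a target where usize is 32 bits, `byte_len * 8`
    // would overflow for inputs larger than 512 MiB; u64 math cannot.
    return @as(u64, byte_len) * 8;
}

test "bit count of a > 512 MiB input fits in u64" {
    try std.testing.expectEqual(@as(u64, 5_033_164_800), bitLength(600 * 1024 * 1024));
}
```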
This takes the code that was previously in src/Compilation.zig to turn
resinator diagnostics into Zig error bundles and puts it in
resinator/main.zig, and then makes resinator emit the resulting error
bundles via std.zig.Server (which is used by the build runner, etc).
Also adds support for turning Aro diagnostics into ErrorBundles.
This moves .rc/.manifest compilation out of the main Zig binary,
contributing towards #19063.
Also:
- Make resinator use Aro as its preprocessor instead of clang
- Sync resinator with upstream
Make std.tar look better in the docs. Remove from the public interface
what is not necessary. Add comments to the public methods. Add doctests
as usage examples for iterator and pipeToFileSystem.
The problem manifested only on Windows with `-target
aarch64-windows-gnu`. I was creating new files but not closing any of
them, so the tmp dir cleanup hung, looping in deleteTree forever.
Fixing CI error:
error: 'tar.test.test.pipeToFileSystem' failed: slices differ. first
difference occurs at index 2 (0x2)
============ expected this output: ============= len: 9 (0x9)
2E 2E 2F 61 2F 66 69 6C 65 ../a/file
============= instead found this: ============== len: 9 (0x9)
2E 2E 5C 61 5C 66 69 6C 65 ..\a\file
After #19136, dir.symlink changes path separators to \ on Windows.
Fixing an error from CI:
std/tar.zig:423:54: error: expected type 'usize', found 'u64'
std/tar.zig:423:54: note: unsigned 32-bit int cannot represent all possible unsigned 64-bit values
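For context, a hedged sketch of the general technique (the helper name
is hypothetical, not necessarily what tar.zig ended up using): keep the
arithmetic in u64, or cast explicitly where a usize is required.

```zig
// Illustrative helper, not the actual tar.zig fix: @intCast is
// safety-checked, so in safe builds it traps if the u64 value does not
// fit into a 32-bit usize.
fn toUsize(n: u64) usize {
    return @intCast(n);
}
```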
For the end users:
1. Don't require the user to call file.skip() on a file returned from
iterator.next if the file is not read; the iterator now handles this.
Previously that returned a header parsing error, and without knowing
some tar internals it was hard to understand what was required from the
user.
2. Use the iterator.File.kind enum, which is similar to fs.File.Kind and
therefore familiar. The internal Header.Kind has many kinds which are
not exposed; with it, the user would need an else branch in every kind
switch just to cover those cases.
3. Add a reader interface to iterator.File (see the usage sketch below).
A tar header stores the name in at most 256 bytes and the link name in
at most 100 bytes. Those are the minimum sizes for the provided buffers;
an error is raised during iterator init if the buffers are not long
enough. Pax and GNU extensions can store longer names. If such an
extension is encountered during unpacking and the name doesn't fit into
the provided buffer, an error is returned.
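A hedged usage sketch tying the points above together; it assumes the
iterator options, File.kind values, and reader interface described in
this message, so exact names and signatures may differ from the shipped
std.tar:

```zig
const std = @import("std");

fn listTar(reader: anytype) !void {
    // Minimum buffer sizes per the header format described above;
    // pax/gnu names longer than this make the iterator return an error.
    var file_name_buffer: [256]u8 = undefined;
    var link_name_buffer: [100]u8 = undefined;

    var iter = std.tar.iterator(reader, .{
        .file_name_buffer = &file_name_buffer,
        .link_name_buffer = &link_name_buffer,
    });

    while (try iter.next()) |file| {
        switch (file.kind) {
            .file => {
                // Point 3: `file` also exposes its contents via a
                // reader. Anything left unread is skipped automatically
                // before the next iteration (point 1), so no explicit
                // file.skip() call is needed.
            },
            .directory, .sym_link => {},
            // No `else` branch needed: internal header kinds are not
            // exposed through File.kind (point 2).
        }
        std.debug.print("{s}\n", .{file.name});
    }
}
```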
Fixes compilation errors in functions that are syntactic sugar for
operating on serialized scalars.
Also make it explicit that square roots in fields whose size is not
congruent to 3 modulo 4 are not an error; they are just not implemented
yet.
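For context (standard number theory, not something introduced by this
change): when the field order p satisfies p ≡ 3 (mod 4), a square root
of a quadratic residue x is a single exponentiation, which is why only
that case is implemented so far:
    sqrt(x) = x^((p+1)/4) (mod p), since
    (x^((p+1)/4))^2 = x^((p+1)/2) = x * x^((p-1)/2) = x (mod p)
by Euler's criterion (x^((p-1)/2) ≡ 1 mod p for quadratic residues).
Other field orders need a general algorithm such as Tonelli-Shanks.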
Reported by @vitalonodo - Thanks!