Got an error that I couldn't find for a while. It happened when parsing a lot of
files.
Turns out it was the Thread Pool or something, because if I run on 1 core,
it's ok.
Different thing in this commit; I just want to freeze a state where the
benchmark works.
Before, I was parsing all files, getting the max index, then parsing from
0 to the max.
But now that I delete empty files, I need to parse only the ones that exist.
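A minimal sketch of the idea, assuming Zig 0.13-era std APIs; the data/User path is made up for illustration:

```zig
const std = @import("std");

pub fn main() !void {
    // Iterate over the files that actually exist in the struct's data
    // directory instead of looping over indexes 0..max.
    var dir = try std.fs.cwd().openDir("data/User", .{ .iterate = true });
    defer dir.close();

    var it = dir.iterate();
    while (try it.next()) |entry| {
        if (entry.kind != .file) continue;
        std.debug.print("parsing {s}\n", .{entry.name});
    }
}
```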
NOW is already added, and I'm now debugging some stuff regarding filters and
parsing a file of one struct when it should be another.
Also moved the query tests into a separate test file.
And some fixes and changes in the docs.
Now I flush only when the file is full, and I check whether the currently
used file is big enough.
So I don't stat all files and flush every time like before.
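Roughly this shape, as a sketch; MAX_FILE_SIZE and the 0.zid path are hypothetical values, not the real ones:

```zig
const std = @import("std");

const MAX_FILE_SIZE: u64 = 1024 * 1024; // hypothetical threshold

// Stat only the file currently being written to, not every file.
fn isFull(file: std.fs.File) !bool {
    const stat = try file.stat();
    return stat.size >= MAX_FILE_SIZE;
}

pub fn main() !void {
    const file = try std.fs.cwd().openFile("data/User/0.zid", .{});
    defer file.close();
    if (try isFull(file)) {
        // flush the write buffer and start a new file here
    }
}
```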
So then I can parse the file again, create a map of UUID to sub-JSON, and
just iterate over the first JSON looking for UUIDs, updating them with the
new data from the map.
- Added a new data type self, that represents the id of the entity itself
- Fixed multithreading for parsing: each thread now uses its own writer and
I concatenate them at the end (see the sketch after this list)
- Added a schemaStruct id to the list
- Other fixes and stuff to go with the rest
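The per-thread writer part looks roughly like this; a sketch assuming Zig 0.13-era std.Thread and a managed ArrayList, not the actual parsing code:

```zig
const std = @import("std");

// Each worker writes into its own buffer, so no mutex is needed while parsing.
fn worker(out: *std.ArrayList(u8), files: []const []const u8) void {
    for (files) |path| {
        // real parsing elided; just write a stand-in line per file
        out.writer().print("parsed {s}\n", .{path}) catch {};
    }
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const files = [_][]const u8{ "0.zid", "1.zid", "2.zid", "3.zid" };

    var buffers: [2]std.ArrayList(u8) = undefined;
    var threads: [2]std.Thread = undefined;
    for (&buffers, &threads, 0..) |*buf, *t, i| {
        buf.* = std.ArrayList(u8).init(allocator);
        t.* = try std.Thread.spawn(.{}, worker, .{ buf, files[i * 2 .. (i + 1) * 2] });
    }

    // Join, then concatenate the per-thread buffers in order.
    var final = std.ArrayList(u8).init(allocator);
    defer final.deinit();
    for (&threads, &buffers) |*t, *buf| {
        t.join();
        try final.appendSlice(buf.items);
        buf.deinit();
    }
    std.debug.print("{s}", .{final.items});
}
```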
New step: multithreading for every function, then finally relationships.
When I use s2t, or string to type, to transform a string of an array into an
actual array, e.g. "[1 4 21]" to []i32{ 1, 4, 21 }:
before I returned an ArrayList, now it uses toOwnedSlice.
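For the array case, the shape is roughly this (a sketch, not the real s2t code; the function name is made up):

```zig
const std = @import("std");

// Parse "[1 4 21]" into an owned []i32 slice.
fn parseArrayI32(allocator: std.mem.Allocator, str: []const u8) ![]i32 {
    var list = std.ArrayList(i32).init(allocator);
    errdefer list.deinit();

    var it = std.mem.tokenizeScalar(u8, std.mem.trim(u8, str, "[]"), ' ');
    while (it.next()) |token| {
        try list.append(try std.fmt.parseInt(i32, token, 10));
    }
    // Hand the backing memory to the caller as a plain slice
    // instead of returning the ArrayList itself.
    return list.toOwnedSlice();
}

test "s2t array" {
    const out = try parseArrayI32(std.testing.allocator, "[1 4 21]");
    defer std.testing.allocator.free(out);
    try std.testing.expectEqualSlices(i32, &[_]i32{ 1, 4, 21 }, out);
}
```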
Created a new function to replace parseAndFindUUIDWithFilter.
Now it parses the file, evaluates each entity against the filter, and writes
matches directly to a buffer. No UUIDs are returned, so files are not parsed
a second time.
Should be a huge perf improvement.
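In sketch form (Entity, matches, and the literals are all stand-ins for the real parser and filter):

```zig
const std = @import("std");

const Entity = struct { id: u32, json: []const u8 };

fn matches(e: Entity) bool {
    return e.id % 2 == 0; // placeholder for the real filter evaluation
}

pub fn main() !void {
    const entities = [_]Entity{
        .{ .id = 1, .json = "{\"name\":\"Bob\"}" },
        .{ .id = 2, .json = "{\"name\":\"Alice\"}" },
    };

    var buf = std.ArrayList(u8).init(std.heap.page_allocator);
    defer buf.deinit();

    // Single pass: evaluate the filter and write matches straight to the
    // buffer, instead of collecting UUIDs and re-parsing the file.
    for (entities) |e| {
        if (matches(e)) try buf.writer().print("{s}\n", .{e.json});
    }
    std.debug.print("{s}", .{buf.items});
}
```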
Some bugs with additionData though; I got a name: "<UUID>" =(