29 Commits

SHA1 Message Date
cb250bbcd0 Small optimization 2025-01-27 20:38:14 +01:00
78e8b67b80 Moved benchmark and test to src. Implemented array, not yet tested 2025-01-25 19:11:01 +01:00
b2bd1e373b Updated lib config and small stuff 2025-01-23 17:31:41 +01:00
98f0c69e61 SOLVED THREAD BUG
Needed a thread-safe allocator. Will need to update the other parts the
same way; so far I only did parseEntities and deleteEntities
2025-01-22 22:34:46 +01:00
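The "thread safe alloc" mentioned above corresponds to `std.heap.ThreadSafeAllocator` in Zig's standard library, which serializes access to a child allocator with a mutex. A minimal sketch, assuming a GeneralPurposeAllocator as the child (the repo's actual setup may differ):

```zig
const std = @import("std");

pub fn main() !void {
    // A general-purpose allocator as the child allocator.
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    // Wrap it so concurrent threads can allocate safely.
    var tsa = std.heap.ThreadSafeAllocator{ .child_allocator = gpa.allocator() };
    const allocator = tsa.allocator();

    // `allocator` can now be shared across worker threads.
    const buf = try allocator.alloc(u8, 64);
    defer allocator.free(buf);
}
```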
7c34431702 Error found!!
Got an error that I couldn't find for a while. It happened when parsing
a lot of files.

Turns out it was the Thread Pool or something, because if I run on 1
core, it's ok.

Separately, this commit just freezes a state where the benchmark works.
2025-01-22 11:34:00 +01:00
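The "run on 1 core" check above can be expressed with `std.Thread.Pool`'s `n_jobs` option, which serializes all work and helps confirm a data race. A small sketch under that assumption (the `work` function is a hypothetical stand-in):

```zig
const std = @import("std");

fn work(x: u32) void {
    std.debug.print("parsed {d}\n", .{x});
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();

    var pool: std.Thread.Pool = undefined;
    // n_jobs = 1 runs everything on a single worker; if the bug
    // disappears here, the pool (or shared state) is implicated.
    try pool.init(.{ .allocator = gpa.allocator(), .n_jobs = 1 });
    defer pool.deinit();

    try pool.spawn(work, .{42});
}
```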
abe9445d72 Compilable array manipulation, still some errors 2025-01-20 19:56:29 +01:00
a09a368528 Removed useless code and old comment 2025-01-16 22:34:19 +01:00
d1b430a3d5 Removed some TODOs and changed parsing so it now parses all existing files
Before, I was parsing all files, getting the max index, then parsing
from 0 to the max.

But now that I delete empty files, I need to parse only the existing
ones.
2025-01-16 22:17:42 +01:00
6338cb6364 NewData ordered by default 2025-01-15 19:59:13 +01:00
672d79cbea Add config and schema for benchmark 2025-01-14 22:32:31 +01:00
6ccfc7feb9 Set back to true and remove .pdb from release 2025-01-12 19:08:34 +01:00
955aff0d09 Moved global errors to lib and fused them into a single one 2025-01-12 00:37:57 +01:00
bd4f0aab7f Removed some TODOs, plus fixes and typos 2025-01-11 16:52:19 +01:00
1495e779c9 Speed up date
Date parsing was taking a long time when using ADD in batch; sped it up
by roughly 50x
2025-01-11 15:27:17 +01:00
71e5f6eb1e Started to debug schema with multiple structs and some time keywords
Added NOW already, and now debugging some issues with filters and with
parsing the file of one struct when it should be another.

Also moved query tests into a separate test file.
And some fixes and changes in the docs
2025-01-08 10:09:15 +01:00
2a4842432d Speed up batch ADD and better benchmark
Now I flush only when the file is full, and I check whether the
currently used file is big enough.

So I no longer stat all files and flush every time like before
2025-01-07 13:55:02 +01:00
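A sketch of the pattern described above: accumulate writes in a buffer and flush only when the next entity would not fit, rather than after every ADD. The `Batch` type, buffer size, and file handling are illustrative assumptions, not the repo's actual code:

```zig
const std = @import("std");

/// Illustrative batch writer: data accumulates in `buf` and is written
/// out only when the buffer cannot hold the next entity.
/// Assumes a single entity always fits in `buf`.
const Batch = struct {
    file: std.fs.File,
    buf: [4096]u8 = undefined,
    len: usize = 0,

    fn add(self: *Batch, entity: []const u8) !void {
        if (self.len + entity.len > self.buf.len) try self.flush();
        @memcpy(self.buf[self.len..][0..entity.len], entity);
        self.len += entity.len;
    }

    fn flush(self: *Batch) !void {
        try self.file.writeAll(self.buf[0..self.len]);
        self.len = 0;
    }
};
```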
e3264d8553 Random ADD for benchmark 2025-01-06 20:45:16 +00:00
b075f8b89a Moved config to libs 2025-01-02 12:19:05 +00:00
e7056efec9 Now queries with relationships will write the UUID bytes between {|<>|}
So I can then parse the file again, create a map from UUID to sub-JSON,
and just iterate over the first JSON looking for UUIDs, updating with
the new data from the map
2024-12-20 22:29:02 +01:00
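A sketch of the substitution pass described above: scan the output for `{|<` ... `>|}` markers and splice in the sub-JSON found in the map. The function name and the simplified marker handling are assumptions for illustration:

```zig
const std = @import("std");

/// Replace every {|<uuid>|} marker in `json` with the sub-JSON stored
/// in `map` under that UUID. Caller owns the returned buffer.
fn resolveRelations(
    allocator: std.mem.Allocator,
    json: []const u8,
    map: std.StringHashMap([]const u8),
) ![]u8 {
    var out = std.ArrayList(u8).init(allocator);
    errdefer out.deinit();

    var rest = json;
    while (std.mem.indexOf(u8, rest, "{|<")) |start| {
        const end = std.mem.indexOfPos(u8, rest, start + 3, ">|}") orelse break;
        try out.appendSlice(rest[0..start]);
        const uuid = rest[start + 3 .. end];
        // Unknown UUIDs simply drop the marker in this sketch.
        if (map.get(uuid)) |sub| try out.appendSlice(sub);
        rest = rest[end + 3 ..];
    }
    try out.appendSlice(rest);
    return out.toOwnedSlice();
}
```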
7f27557ca2 Fix parsing array
Before, if the array was [ 1 ], I got [ 0, 1, 0 ] because the split
produced two empty strings that became 0 when converted to int
2024-11-29 21:20:12 +01:00
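The empty strings come from splitting on spaces: in Zig, `std.mem.splitScalar` yields empty slices around consecutive delimiters, while `std.mem.tokenizeScalar` skips them. A minimal sketch of the pitfall and one possible fix, assuming space-delimited arrays as in the message above:

```zig
const std = @import("std");

pub fn main() void {
    const raw = "[ 1 ]";
    const inner = std.mem.trim(u8, raw, "[]"); // " 1 "

    // splitScalar yields "", "1", "" -- the empty strings parse as 0
    // (via the catch fallback) and produce [ 0, 1, 0 ].
    var split = std.mem.splitScalar(u8, inner, ' ');
    while (split.next()) |tok| {
        const v = std.fmt.parseInt(i32, tok, 10) catch 0;
        std.debug.print("split: {d}\n", .{v});
    }

    // tokenizeScalar skips empty slices, so only "1" is parsed.
    var toks = std.mem.tokenizeScalar(u8, inner, ' ');
    while (toks.next()) |tok| {
        const v = std.fmt.parseInt(i32, tok, 10) catch 0;
        std.debug.print("tokenize: {d}\n", .{v});
    }
}
```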
08cae48cbc Added a writeEntityTable and fixed date and time 2024-11-24 22:26:54 +01:00
979690fec4 IN filter working
Now something like GRAB User { best_friends IN {name = 'Bob'}} should
work.

At least now the condition value is a hashmap keyed by UUID.
2024-11-17 21:52:11 +01:00
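A sketch of the membership check implied above: the inner filter is pre-resolved into a hash map keyed by UUID, and each row's relation ids are tested against it. The `[16]u8` UUID representation and the function shape are assumptions:

```zig
const std = @import("std");

const Uuid = [16]u8;

/// A row passes the IN filter if any of its relation ids is among the
/// UUIDs that matched the inner filter (e.g. { name = 'Bob' }).
fn matchesIn(candidates: std.AutoHashMap(Uuid, void), relation_ids: []const Uuid) bool {
    for (relation_ids) |id| {
        if (candidates.contains(id)) return true;
    }
    return false;
}
```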
6e7d1d150c Pass tests with added stuff for relationships 2024-11-13 22:44:27 +01:00
b1de4a40c3 Moved ZipponData to lib 2024-11-12 21:17:33 +01:00
3539dd685c Fixes, perf, etc.
- Added a new data type, self, that represents the id of the entity itself
- Fixed multithreading for parsing; now each thread uses its own writer
and I concatenate them at the end
- Added a schemaStruct id to the list
- Other fixes and stuff to go with the rest

Next step: multithreading for all functions, then finally relationships
2024-11-03 19:18:25 +01:00
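A sketch of the per-thread writer pattern from the second bullet: each worker appends to its own buffer, so no locking is needed while parsing, and the buffers are concatenated in order after all threads join. The thread spawning and buffer wiring here are illustrative assumptions:

```zig
const std = @import("std");

fn worker(buf: *std.ArrayList(u8), chunk: []const u8) void {
    // Each thread owns `buf` exclusively; errors are ignored in this sketch.
    buf.appendSlice(chunk) catch {};
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer _ = gpa.deinit();
    const allocator = gpa.allocator();

    const chunks = [_][]const u8{ "a,b", "c,d" };
    var bufs: [chunks.len]std.ArrayList(u8) = undefined;
    var threads: [chunks.len]std.Thread = undefined;

    for (&bufs, &threads, chunks) |*buf, *t, chunk| {
        buf.* = std.ArrayList(u8).init(allocator);
        t.* = try std.Thread.spawn(.{}, worker, .{ buf, chunk });
    }

    // Join, then concatenate the per-thread output in order.
    var out = std.ArrayList(u8).init(allocator);
    defer out.deinit();
    for (&threads, &bufs) |*t, *buf| {
        t.join();
        try out.appendSlice(buf.items);
        buf.deinit();
    }
    std.debug.print("{s}\n", .{out.items});
}
```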
557e4ab064 Return slice when parsing individual value
When I use s2t (string to type), e.g. to transform a string array into
an actual array: "[1 4 21]" to []i32{ 1, 4, 21 }.

Before I returned an ArrayList; now it uses toOwnedSlice
2024-11-01 20:37:19 +01:00
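A minimal sketch of that change: build with an ArrayList internally but hand the caller an owned slice via `toOwnedSlice`. The name `s2t` follows the message above; the exact signature is an assumption:

```zig
const std = @import("std");

/// Parse "[1 4 21]" into an owned []i32; caller frees the result.
fn s2t(allocator: std.mem.Allocator, str: []const u8) ![]i32 {
    var list = std.ArrayList(i32).init(allocator);
    errdefer list.deinit();

    var toks = std.mem.tokenizeScalar(u8, std.mem.trim(u8, str, "[]"), ' ');
    while (toks.next()) |tok| {
        try list.append(try std.fmt.parseInt(i32, tok, 10));
    }
    // Return the owned slice instead of the ArrayList itself.
    return list.toOwnedSlice();
}
```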
5e1ac7a0d7 Parse and write at the same time
Created a new function to replace parseAndFindUUIDWithFilter.

Now it parses the file, evaluates with the filter and writes directly to
a buffer. No UUIDs are returned and no files are parsed a second time.
Should be a huge perf improvement.

Some bugs with additionData though, got a name: "<UUID>" =(
2024-11-01 20:17:45 +01:00
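A sketch of the single-pass shape described above: evaluate the filter per entity and serialize matches straight to a writer, instead of collecting UUIDs and re-parsing the files. `Entity`, `Filter`, and the iteration are illustrative assumptions:

```zig
const std = @import("std");

const Entity = struct { id: [16]u8, name: []const u8 };

const Filter = struct {
    fn evaluate(self: Filter, e: Entity) bool {
        _ = self;
        return e.name.len > 0; // stand-in for the real condition tree
    }
};

/// One pass: test each entity and write matches immediately,
/// rather than returning UUIDs for a second parsing pass.
fn parseAndWrite(entities: []const Entity, filter: Filter, writer: anytype) !void {
    for (entities) |e| {
        if (filter.evaluate(e)) {
            try writer.print("{{\"name\": \"{s}\"}},", .{e.name});
        }
    }
}
```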
dbf5a255a9 writeEntity working with new ZipponData package 2024-10-30 23:50:37 +01:00
debc646738 Moved the custom types to a library 2024-10-27 12:59:28 +01:00