218 Commits

Author SHA1 Message Date
e5d0a122e1 Added multi threading to delete 2024-11-06 16:04:07 +01:00
65be757440 Added multithreading to update 2024-11-06 16:01:55 +01:00
fbbeb2f40d Update Benchmark.md 2024-11-04 23:24:49 +01:00
48a3ae4a5d Update Benchmark.md 2024-11-04 23:20:24 +01:00
900b6c5f54 Replaced some path_buff usages with utils.preintOpenDir/File 2024-11-04 23:18:49 +01:00
96e77a5ad4 Removed last trace of null 2024-11-04 22:57:19 +01:00
280b3b3c3a Update docs 2024-11-04 22:52:44 +01:00
b12116c005 Removed uuid_file_index 2024-11-04 22:52:19 +01:00
463df1dc6c Added time for 1 000 000 2024-11-04 22:38:17 +01:00
3cf33fe4f8 Fixed url of chart 2024-11-04 22:13:33 +01:00
2af9f68c5e Added small benchmark 2024-11-04 22:11:56 +01:00
c500c2a23c Fixed a reversed condition
Greater-than was less-than and vice versa
2024-11-03 22:43:02 +01:00
3539dd685c Fixes and perf
- Added a new data type, self, which represents the id of the entity
itself
- Fixed multithreading for parsing; each thread now uses its own writer,
and they are concatenated at the end
- Added a schemaStruct id to the list
- Other fixes and related changes

Next step: multithreading for all functions, then finally relationships
2024-11-03 19:18:25 +01:00
72b001b72c I think I just added multithreading for GRAB :o
This looks like it works. Maybe I need to start using a single Pool for
everything, perhaps kept directly inside the FileEngine
2024-11-02 23:47:41 +01:00
dba73ce113 Started to use fixed-length alloc for performance
For very important pieces like the writer that writes data while
parsing, started to use fixed-length allocations, because writing takes
the majority of the time, not parsing =/

Gonna need to improve that
2024-11-02 22:12:47 +01:00
aaa1cb2589 Added batch updateEntities using zid 2024-11-02 00:22:13 +01:00
0920927bc3 Removed sendUUIDs
Removed the various sendUUIDs and sendUUID functions. The new
writeEntity writes into a buffer like the other functions.
2024-11-01 22:23:04 +01:00
a20a60e566 Delete Entities now use ZipponData
Also stopped parsing then deleting; now I parse and delete at the same
time for performance
2024-11-01 21:17:31 +01:00
bead52df5a Fix 2024-11-01 20:40:18 +01:00
557e4ab064 Return a slice when parsing an individual value
This is for s2t, or string to type: transforming the textual form of an
array into a real one, e.g. "[1 4 21]" to []i32{ 1, 4, 21 }.

Before it returned an ArrayList; now it uses toOwnedSlice
2024-11-01 20:37:19 +01:00
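The s2t idea described above can be sketched outside the project. Below is a hypothetical Rust approximation (the project itself is Zig, and the real s2t handles many more types): parse the textual array into an owned slice of integers, the analogue of returning toOwnedSlice instead of an ArrayList.

```rust
// Hypothetical sketch of "string to type" for an i32 array: turn the
// textual form "[1 4 21]" into an owned Vec<i32>.
fn parse_i32_array(s: &str) -> Result<Vec<i32>, std::num::ParseIntError> {
    s.trim_start_matches('[')
        .trim_end_matches(']')
        .split_whitespace()
        .map(|tok| tok.parse::<i32>())
        .collect() // collects into Result<Vec<i32>, _>, stopping at the first bad token
}

fn main() {
    assert_eq!(parse_i32_array("[1 4 21]").unwrap(), vec![1, 4, 21]);
}
```

Returning the owned vector (rather than a builder type like ArrayList) hands the caller exactly the data, with no extra capacity bookkeeping to carry around.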
f5a692558b Fixed the bug with additional data 2024-11-01 20:22:37 +01:00
5e1ac7a0d7 Parse and write at the same time
Created a new function to replace parseAndFindUUIDWithFilter.

Now it parses the file, evaluates each row against the filter, and
writes directly to a buffer. No list of UUIDs is returned, so files are
not parsed a second time. Should be a huge perf improvement.

Some bugs with additional data though; got a name: "<UUID>" =(
2024-11-01 20:17:45 +01:00
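The single-pass approach from this commit can be sketched as follows; this is a hypothetical Rust toy (function name and row shape invented for illustration), not the project's actual Zig code. Instead of collecting matching UUIDs and re-parsing the file to extract the rows, each row is tested once and, on a match, written straight into an output buffer.

```rust
use std::io::Write;

// Hypothetical single-pass filter-and-write: test each row against the
// filter and immediately write matches to the output, so the data is
// only traversed once.
fn grab_single_pass<W: Write>(
    rows: &[(u32, &str)], // toy rows: (id, name)
    filter: impl Fn(&(u32, &str)) -> bool,
    out: &mut W,
) -> std::io::Result<()> {
    for row in rows {
        if filter(row) {
            writeln!(out, "{}: {}", row.0, row.1)?; // write on match, no second pass
        }
    }
    Ok(())
}

fn main() {
    let rows = [(1, "Alice"), (2, "Bob"), (3, "Alicia")];
    let mut buf = Vec::new();
    grab_single_pass(&rows, |r| r.1.starts_with("Ali"), &mut buf).unwrap();
    assert_eq!(String::from_utf8(buf).unwrap(), "1: Alice\n3: Alicia\n");
}
```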
1bcc4465c5 getAllUUIDList now uses the index_file_map map 2024-11-01 17:19:24 +01:00
c37999cbfc Added UUID -> File index hash map
SchemaStruct members are now slices ([]) instead of ArrayList. Started
to use and understand toOwnedSlice.

Implemented the hash map that keeps every UUID -> file index mapping,
inside each SchemaStruct
2024-11-01 17:10:13 +01:00
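The UUID -> file index map can be illustrated with a minimal sketch, here in hypothetical Rust (field and method names are invented; the project is Zig). With the map in place, a lookup by UUID points straight at one data file instead of requiring a scan over all of them.

```rust
use std::collections::HashMap;

// Hypothetical sketch: each schema struct keeps a map from entity UUID
// to the index of the data file that stores that entity.
struct SchemaStruct {
    uuid_file_index: HashMap<String, usize>,
}

impl SchemaStruct {
    // Returns the file index holding this UUID, if known.
    fn file_of(&self, uuid: &str) -> Option<usize> {
        self.uuid_file_index.get(uuid).copied()
    }
}

fn main() {
    let mut users = SchemaStruct { uuid_file_index: HashMap::new() };
    users.uuid_file_index.insert("0000-aaaa".to_string(), 3);
    assert_eq!(users.file_of("0000-aaaa"), Some(3));
    assert_eq!(users.file_of("ffff-0000"), None);
}
```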
aff8fac0af FileEngine struct_array no longer an ArrayList
Started to use toOwnedSlice to replace members that were ArrayList
2024-11-01 12:31:46 +01:00
dbf5a255a9 writeEntity working with new ZipponData package 2024-10-30 23:50:37 +01:00
294d4f7a2c Condition with index instead of member_name
Condition now stores the index of the value in the file instead of the
member_name. So when parsing, I can just use the []Data array and the
index to compare both values
2024-10-27 22:34:17 +01:00
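The index-based comparison described in this commit can be sketched as below; this is a hypothetical Rust toy (type and field names invented), not the project's Zig code. A parsed row is a slice of values in schema order, so a condition only needs the member's position, not its name.

```rust
// Hypothetical sketch of index-based conditions: a row is a slice of
// Data values in schema order, and a Condition stores the index of the
// member it tests rather than the member's name.
#[derive(PartialEq)]
enum Data {
    Int(i64),
    Str(String),
}

struct Condition {
    index: usize,   // position of the member inside the row
    expected: Data, // value to compare against
}

impl Condition {
    fn matches(&self, row: &[Data]) -> bool {
        row[self.index] == self.expected // direct positional lookup, no name resolution
    }
}

fn main() {
    // Row layout: [name, age]
    let row = vec![Data::Str("Bob".to_string()), Data::Int(30)];
    let age_is_30 = Condition { index: 1, expected: Data::Int(30) };
    assert!(age_is_30.matches(&row));
}
```

Resolving the name to an index once, at parse time, avoids a string lookup per row per condition.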
304ec89167 Put back all test
I removed some test while implementing the new Filter
2024-10-27 22:17:15 +01:00
2a946eafd0 Working new filter parse
Now the parseFilter function returns a Filter struct.

It is a tree of Conditions and Logic operators that will then be used
when parsing rows. Still need to write the evaluate function over the
[]Data and test some error cases
2024-10-27 21:56:57 +01:00
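The tree of conditions and logic operators can be sketched like this; a hypothetical Rust toy (the real Filter works over []Data and richer operators), shown with the evaluate function this commit still had pending. Leaves test one column of a row; inner nodes combine their children with AND/OR; one walk of the tree evaluates the whole filter for a row.

```rust
// Hypothetical sketch of the Filter tree: leaves are conditions on a
// row, inner nodes combine children with AND / OR.
enum Filter {
    Cond(usize, i64), // toy condition: row[index] == value
    And(Box<Filter>, Box<Filter>),
    Or(Box<Filter>, Box<Filter>),
}

impl Filter {
    // Evaluate the whole tree against one row in a single recursive walk.
    fn evaluate(&self, row: &[i64]) -> bool {
        match self {
            Filter::Cond(i, v) => row[*i] == *v,
            Filter::And(a, b) => a.evaluate(row) && b.evaluate(row),
            Filter::Or(a, b) => a.evaluate(row) || b.evaluate(row),
        }
    }
}

fn main() {
    // (col0 == 1 AND col1 == 2) OR col2 == 9
    let f = Filter::Or(
        Box::new(Filter::And(
            Box::new(Filter::Cond(0, 1)),
            Box::new(Filter::Cond(1, 2)),
        )),
        Box::new(Filter::Cond(2, 9)),
    );
    assert!(f.evaluate(&[1, 2, 0]));
    assert!(f.evaluate(&[0, 0, 9]));
    assert!(!f.evaluate(&[0, 2, 0]));
}
```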
15171f73c6 Update README 2024-10-27 19:56:43 +01:00
334c738ac1 New Filter start to work
Changing how conditions are handled: building a tree of conditions so
each file only needs to be parsed once.

So far the basics work; needs more testing.
2024-10-27 19:55:07 +01:00
5756e3a530 Fixed build so tests use dependencies 2024-10-27 15:04:30 +01:00
fd21d9bd65 Fix; schemas are now saved under the name schema instead of schema.zipponschema 2024-10-27 14:46:19 +01:00
24c204f435 Added an option to reset the logs or not, and fixed some typos 2024-10-27 13:18:26 +01:00
debc646738 Moved the custom types to a library 2024-10-27 12:59:28 +01:00
4df151ea85 Moved help message to config 2024-10-27 11:20:02 +01:00
99871ddc73 Deleted test data 2024-10-27 11:07:07 +01:00
e5f2f7c5e5 Changed some print to log 2024-10-26 18:52:10 +02:00
c5f931cf04 Better logs
Logs are now managed by std.log, with a custom log function that prints
either into a file or to the default stdout.

The plan is to create a scope in each file and log accordingly. It can
also be used in libraries, I think, as long as it still goes through the
custom function.

Next step is logging a lot of things a bit everywhere.
2024-10-26 17:30:58 +02:00
9d39d47c3b Removed some tests and fixed some syntax 2024-10-26 13:47:52 +02:00
9c36fec517 Removed duplicate code
Changed the expected new value in parseNewData to remove duplicated code
2024-10-22 00:29:01 +02:00
696498d7ed Removed duplicate code
Changed how data types are checked in parseCondition to prevent code
duplication
2024-10-22 00:20:25 +02:00
2b9ad08abf Cleaned ZiQlParser
Removed 4 members, turning them into variables: state, struct_name,
action and additional_data.

Also made it so it only returns ZipponError
2024-10-22 00:03:38 +02:00
62e4f53477 Automatically create all folder if env var provided
If ZIPPONDB_PATH is found, it will try to create and use the directory.

If ZIPPONDB_SCHEMA is found, it will try to init the schema using it.
2024-10-20 21:19:25 +02:00
08869e5e3d Added links to schema struct
Added a hash map that contains, for each link member, the name of the
struct it points to.

So the member "best_friend: User" becomes an entry with key
"best_friend" and value "User"
2024-10-20 18:01:17 +02:00
9d2948d686 SchemaStruct use []const u8
Before it was like that, then I changed it to use Loc as I thought it
would save memory.

But a slice is just a pointer and a length, so []const u8 should be
better, as long as I keep the original strings in memory (the query and
the schema)
2024-10-20 17:51:40 +02:00
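The "slice is just a pointer and a length" reasoning can be shown with a small sketch, here in hypothetical Rust (function name invented; the project stores Zig []const u8 slices). A slice borrows from the original schema string rather than copying it, and its size is fixed regardless of how long the text is.

```rust
// Hypothetical sketch: a string slice is a pointer plus a length into
// an existing buffer, so storing member names as slices copies nothing
// as long as the original schema string stays alive.
fn member_name(schema_line: &str) -> &str {
    // Borrow the part before the ':' separator; no allocation, no copy.
    schema_line.split(':').next().unwrap_or(schema_line).trim()
}

fn main() {
    let schema_source = String::from("best_friend: User");
    assert_eq!(member_name(&schema_source), "best_friend");
    // A &str is pointer + length, independent of the text it points at.
    assert_eq!(std::mem::size_of::<&str>(), 2 * std::mem::size_of::<usize>());
}
```

The trade-off named in the commit applies equally here: the borrowed slices are only valid while the query and schema strings are kept in memory.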
6b1d3d7495 Renamed id and id_array to link and link_array 2024-10-20 17:36:05 +02:00
c7d7a01fa8 Log now print always the same length
When printing 12:03 it came out as 12:3 before; now it's fixed and uses
the DateTime.format function
2024-10-20 10:15:25 +02:00
34688a0180 Changed a bit how log work 2024-10-20 09:50:44 +02:00
a6a3c092cc Added logging, all Token use the same Loc 2024-10-20 01:35:12 +02:00