- Added a new data type self, which represents the id of the entity itself
- Fixed multithreading for parsing, now each thread uses its own writer
and I concat them at the end (see the sketch at the end of this entry)
- Added a schemaStruct id to the list
- Other fixes and stuff to go with the rest
New step: multithreading for all functions, then finally relationships
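A minimal sketch of the per-thread writer idea, assuming a simple in-memory buffer per worker (names and structure are illustrative, not the actual ZipponDB code): each thread appends to its own buffer, and the main thread concatenates everything in order after joining.

```zig
const std = @import("std");

fn worker(buffer: *std.ArrayList(u8), chunk_id: usize) void {
    // Each thread only touches its own buffer, so no lock is needed here.
    buffer.writer().print("rows parsed by thread {d}\n", .{chunk_id}) catch {};
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    const thread_count = 4;
    var buffers: [thread_count]std.ArrayList(u8) = undefined;
    var threads: [thread_count]std.Thread = undefined;

    for (&buffers, 0..) |*buf, i| {
        buf.* = std.ArrayList(u8).init(allocator);
        threads[i] = try std.Thread.spawn(.{}, worker, .{ buf, i });
    }

    // Join every thread, then concat the per-thread buffers into one output.
    var output = std.ArrayList(u8).init(allocator);
    defer output.deinit();
    for (&buffers, threads) |*buf, t| {
        t.join();
        try output.appendSlice(buf.items);
        buf.deinit();
    }

    std.debug.print("{s}", .{output.items});
}
```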
For very important stuff like the writer that writes data when parsing, I
started to use fixed lengths, because writing takes the majority of the time,
not parsing =/
Gonna need to improve that
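A rough sketch of the fixed-length idea, with a made-up layout (the width, file name and row format below are assumptions, not the real ZipponDB data format): padding every value to a constant width makes each row the same size, so the writer can push rows without measuring each one.

```zig
const std = @import("std");

const value_width = 32; // assumed fixed width per value

fn writeFixed(writer: anytype, value: []const u8) !void {
    // Pad (or truncate) the value so every slot has the same size on disk.
    var slot = [_]u8{' '} ** value_width;
    const len = @min(value.len, value_width);
    @memcpy(slot[0..len], value[0..len]);
    try writer.writeAll(&slot);
}

pub fn main() !void {
    const file = try std.fs.cwd().createFile("fixed_demo.zippondata", .{});
    defer file.close();

    // Buffer the writes so a whole row costs at most one syscall.
    var buffered = std.io.bufferedWriter(file.writer());
    try writeFixed(buffered.writer(), "0000-aaaa-1111-bbbb");
    try writeFixed(buffered.writer(), "Bob");
    try buffered.writer().writeAll("\n");
    try buffered.flush();
}
```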
Created a new function to replace parseAndFindUUIDWithFilter.
Now it parses the file, evaluates each row with the filter and writes directly
to a buffer. No UUID list is returned anymore, so the files don't get parsed a
second time. Should be a huge perf improvement.
Some bugs with additionalData tho, got a name: "<UUID>" =(
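A one-pass sketch of that flow (the Row struct, the filter stand-in and the filterAndSend name are hypothetical): each row is evaluated against the filter and, when it matches, written straight to the output buffer, with no intermediate UUID list and no second parse.

```zig
const std = @import("std");

const Row = struct {
    uuid: []const u8,
    name: []const u8,
    age: i64,
};

fn matches(row: Row) bool {
    // Stand-in for the real filter evaluation.
    return row.age >= 18;
}

fn filterAndSend(rows: []const Row, writer: anytype) !void {
    for (rows) |row| {
        if (!matches(row)) continue;
        // Matching rows go directly to the buffer; no UUID list, no second parse.
        try writer.print("{{ id: {s}, name: {s}, age: {d} }}\n", .{ row.uuid, row.name, row.age });
    }
}

pub fn main() !void {
    const rows = [_]Row{
        .{ .uuid = "0000-aaaa", .name = "Bob", .age = 30 },
        .{ .uuid = "0000-bbbb", .name = "Kid", .age = 10 },
    };
    const out = std.io.getStdOut().writer();
    try filterAndSend(&rows, out);
}
```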
Condition now stores the index of the value in the file instead of the
member_name. So when I parse, I can just use the []Data array and the index to
compare both values.
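A sketch of what that can look like, assuming simplified Data and Condition types (the real ZipponDB types differ): the condition keeps an index into the row's []Data, so evaluation is a direct array access instead of a name lookup.

```zig
const std = @import("std");

const Data = union(enum) {
    int: i64,
    float: f64,
    str: []const u8,
};

const Operator = enum { equal, superior, inferior };

const Condition = struct {
    data_index: usize, // position of the member's value inside the row's []Data
    operator: Operator,
    value: Data,
};

fn evaluate(condition: Condition, row: []const Data) bool {
    // No name lookup needed here: the index points straight at the value.
    // Assumes the parser already guaranteed matching types on both sides.
    const row_value = row[condition.data_index];
    return switch (condition.operator) {
        .equal => switch (row_value) {
            .int => |v| v == condition.value.int,
            .float => |v| v == condition.value.float,
            .str => |v| std.mem.eql(u8, v, condition.value.str),
        },
        .superior => row_value.int > condition.value.int,
        .inferior => row_value.int < condition.value.int,
    };
}

pub fn main() void {
    const row = [_]Data{ .{ .str = "Bob" }, .{ .int = 30 } };
    const cond = Condition{ .data_index = 1, .operator = .superior, .value = .{ .int = 18 } };
    std.debug.print("match: {}\n", .{evaluate(cond, &row)});
}
```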
The parseFilter function now returns a Filter struct.
It is a tree of Conditions and logic operators that will then be used when
parsing rows. Still need to do the evaluate function with the []Data and test
some error cases
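A hypothetical shape for that tree, reduced to integer conditions so it stays short: each node is either a leaf Condition or a logic node with two children, and evaluate walks the tree for one row.

```zig
const std = @import("std");

const Condition = struct {
    data_index: usize,
    value: i64, // simplified to ints for the sketch
};

const LogicOperator = enum { and_, or_ };

const Filter = union(enum) {
    condition: Condition,
    logic: struct {
        operator: LogicOperator,
        left: *const Filter,
        right: *const Filter,
    },
};

fn evaluate(filter: *const Filter, row: []const i64) bool {
    return switch (filter.*) {
        .condition => |c| row[c.data_index] == c.value,
        .logic => |l| switch (l.operator) {
            .and_ => evaluate(l.left, row) and evaluate(l.right, row),
            .or_ => evaluate(l.left, row) or evaluate(l.right, row),
        },
    };
}

pub fn main() void {
    // (value at index 0 == 30) OR (value at index 1 == 7)
    const left = Filter{ .condition = .{ .data_index = 0, .value = 30 } };
    const right = Filter{ .condition = .{ .data_index = 1, .value = 7 } };
    const root = Filter{ .logic = .{ .operator = .or_, .left = &left, .right = &right } };

    const row = [_]i64{ 25, 7 };
    std.debug.print("match: {}\n", .{evaluate(&root, &row)});
}
```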
Added the data types date, time and datetime (small sketch at the end of this entry)
Moved all custom errors to a single file
Cleaned code to be more readable and created some small utils
Other stuff that I don't recall. Basically preparing for the
relationship implementation
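A small sketch of what the new date value could look like as a plain struct; the field layout and the YYYY/MM/DD text format here are assumptions for illustration only, not the real ZipponDB representation.

```zig
const std = @import("std");

const Date = struct { year: u16, month: u8, day: u8 };
const Time = struct { hour: u8, minute: u8, second: u8 };
const DateTime = struct { date: Date, time: Time };

// Parse "YYYY/MM/DD" into a Date; the separator is chosen for the example only.
fn parseDate(text: []const u8) !Date {
    var it = std.mem.splitScalar(u8, text, '/');
    return .{
        .year = try std.fmt.parseInt(u16, it.next() orelse return error.InvalidDate, 10),
        .month = try std.fmt.parseInt(u8, it.next() orelse return error.InvalidDate, 10),
        .day = try std.fmt.parseInt(u8, it.next() orelse return error.InvalidDate, 10),
    };
}

pub fn main() !void {
    const d = try parseDate("1999/12/31");
    std.debug.print("{d}-{d}-{d}\n", .{ d.year, d.month, d.day });
}
```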
Removed the loop for the CLI, now it just takes some arguments when running
the binary. May come back to a while loop if I need to keep things like the
file engine between sessions, e.g. to run queries in parallel.
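A minimal sketch of the argument-based CLI, assuming a made-up command name and layout (the real commands may differ): the binary reads its arguments once, does its work and exits, so nothing survives between runs.

```zig
const std = @import("std");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    const args = try std.process.argsAlloc(allocator);
    defer std.process.argsFree(allocator, args);

    if (args.len < 2) {
        std.debug.print("usage: zippondb <command> [options]\n", .{});
        return;
    }

    if (std.mem.eql(u8, args[1], "run") and args.len > 2) {
        // Here the real binary would build the FileEngine and run one query;
        // everything is dropped when the process exits.
        std.debug.print("would run query: {s}\n", .{args[2]});
    }
}
```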
Moved printError and send to utils and removed duplicates of them.
Organized FileEngine better.
Put the function that converts strings to values in a separate file.
Fixed some syntax and made it smaller. Removed unused functions too
Added a check at the end of parseCondition to verify that the condition is
valid; < between strings is not, for example (sketch below).
Also removed the send_all state to do the check inside filter_and_send instead,
so update and delete can do the same.
And some small bugs/errors
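A sketch of that kind of validity check, with assumed enums and error names: after parsing a Condition, reject operator/type combinations that make no sense, like < between strings.

```zig
const std = @import("std");

const DataType = enum { int, float, str, bool_ };
const Operator = enum { equal, superior, inferior };

const ConditionError = error{InvalidOperatorForType};

fn checkCondition(data_type: DataType, operator: Operator) ConditionError!void {
    switch (operator) {
        .equal => {}, // equality is fine for every type
        .superior, .inferior => switch (data_type) {
            .int, .float => {},
            .str, .bool_ => return ConditionError.InvalidOperatorForType,
        },
    }
}

pub fn main() void {
    // '<' between strings should fail the check.
    checkCondition(.str, .inferior) catch |err| {
        std.debug.print("rejected: {s}\n", .{@errorName(err)});
        return;
    };
    std.debug.print("accepted\n", .{});
}
```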
Created a function that is used by UPDATE to take a list of UUIDs and a
map of new values (sketch below). Can be optimized later but it works rn.
Also started to create more proper error handling with custom errors,
starting with ZiQLError
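A rough in-memory stand-in for that UPDATE helper (the real one rewrites the data files; the Row struct, updateRows and the "name" key are assumptions): go through the rows, and for every UUID in the list apply the new values from the map.

```zig
const std = @import("std");

const Row = struct {
    uuid: []const u8,
    name: []const u8,
};

fn updateRows(rows: []Row, uuids: []const []const u8, new_values: std.StringHashMap([]const u8)) void {
    for (rows) |*row| {
        for (uuids) |uuid| {
            if (!std.mem.eql(u8, row.uuid, uuid)) continue;
            // Apply every new value found in the map to the matching row.
            if (new_values.get("name")) |new_name| row.name = new_name;
        }
    }
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();

    var new_values = std.StringHashMap([]const u8).init(allocator);
    defer new_values.deinit();
    try new_values.put("name", "Bobby");

    var rows = [_]Row{
        .{ .uuid = "0000-aaaa", .name = "Bob" },
        .{ .uuid = "0000-bbbb", .name = "Alice" },
    };
    const targets = [_][]const u8{"0000-aaaa"};

    updateRows(&rows, &targets, new_values);
    std.debug.print("{s}\n", .{rows[0].name});
}
```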
Started by doing a SchemaEngine but in the end I just put everything
inside the FileEngine.
Now you can use 'schema init path/to/schema' to initialize the struct
folders and the first data file. It also saves a copy of the schema in a file
in the ZipponDB folder.
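A sketch of what that init can look like, with assumed paths, folder layout and file extension (only the ZipponDB folder name comes from the entry above): one folder and one empty first data file per struct, plus a saved copy of the schema.

```zig
const std = @import("std");

fn schemaInit(allocator: std.mem.Allocator, schema_path: []const u8, struct_names: []const []const u8) !void {
    const cwd = std.fs.cwd();
    try cwd.makePath("ZipponDB/DATA");

    var buf: [256]u8 = undefined;
    for (struct_names) |name| {
        // One folder and one empty first data file per struct in the schema.
        const dir_path = try std.fmt.bufPrint(&buf, "ZipponDB/DATA/{s}", .{name});
        try cwd.makePath(dir_path);
        const file_path = try std.fmt.bufPrint(&buf, "ZipponDB/DATA/{s}/0.zippondata", .{name});
        const data_file = try cwd.createFile(file_path, .{});
        data_file.close();
    }

    // Keep a copy of the schema inside the ZipponDB folder.
    const schema = try cwd.readFileAlloc(allocator, schema_path, 1024 * 1024);
    defer allocator.free(schema);
    const copy = try cwd.createFile("ZipponDB/schema", .{});
    defer copy.close();
    try copy.writeAll(schema);
}

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    const allocator = gpa.allocator();
    try schemaInit(allocator, "path/to/schema", &[_][]const u8{ "User", "Message" });
}
```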
Created a new folder to clean up the repo a bit and put the file and schema
engines inside. As those and the Parser depend on types.zig, I also added
this folder inside the new engines folder
Created a new Parser unique to the FileEngine to read each line.
It is slower, as I need to parse character by character because there is
no fixed length for the data in the files. Before, I was just reading until
the end of the file.
I'm gonna need to find some tricks to improve the parsing of data. I am
thinking of using the stream directly instead of doing streamUntilDelimiter
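A sketch of the current line-by-line approach this entry describes, using an in-memory buffer and made-up delimiters instead of the real file format: grab one line with streamUntilDelimiter, then walk it byte by byte because values have no fixed length.

```zig
const std = @import("std");

pub fn main() !void {
    const file_content = "0000-aaaa 'Bob' 30\n0000-bbbb 'Alice' 25\n";
    var stream = std.io.fixedBufferStream(file_content);
    const reader = stream.reader();

    var line_buf: [1024]u8 = undefined;
    var line_stream = std.io.fixedBufferStream(&line_buf);

    while (true) {
        // One call per line; stops when the underlying stream is exhausted.
        line_stream.reset();
        reader.streamUntilDelimiter(line_stream.writer(), '\n', line_buf.len) catch |err| switch (err) {
            error.EndOfStream => break,
            else => return err,
        };
        const line = line_stream.getWritten();

        // Walk the line one character at a time to find each value's bounds.
        var start: usize = 0;
        var i: usize = 0;
        while (i <= line.len) : (i += 1) {
            if (i == line.len or line[i] == ' ') {
                std.debug.print("value: {s}\n", .{line[start..i]});
                start = i + 1;
            }
        }
    }
}
```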