Moved stuff to new dir

Adrien Bouvais 2024-11-09 16:41:28 +01:00
parent 9fa6f25459
commit 18c55194d5
8 changed files with 252 additions and 5 deletions

2
.gitignore vendored

@@ -5,5 +5,3 @@ data
engine
engine.o
zig-out
TODO v0.2.md
*.py

View File

@@ -205,8 +205,8 @@ TODO: Create a tech doc of what is happening inside.
- [X] Custom data file
- [X] Date
- [ ] Linked query
- [ ] Query optimization
- [X] Logs
- [X] Query multi threading
#### v0.3 - QoL
- [ ] Schema migration
@@ -228,8 +228,9 @@ TODO: Create a tech doc of what is happening inside.
#### v0.6 - Performance
- [ ] Transaction
- [ ] Multi threading
- [ ] Lock manager
- [ ] Other multi threading
- [ ] Query optimization
- [ ] Index
#### v0.7 - Safety
- [ ] Auth

37
docs/TODO v0.2.md Normal file

@@ -0,0 +1,37 @@
- [ ] Delete the .new file if an error happened
- [ ] Create a struct that manages the schema
Relationships
- [X] Update the schema Parser and Tokenizer
- [X] Include the name of the link struct with the schema_struct
- [ ] New ConditionValue that is an array of UUID
- [ ] When a relationship is found in a filter, check that the type is right and exists
- [ ] In parseFilter, get a list of UUIDs as the value for a relationship
- [ ] Add new operations in Filter evaluate: IN and !IN (semantics sketched after this list)
- [ ] parseNewData can use a filter, as in "ADD User (friends = [10] {age > 20})", to return UUIDs
- [ ] parseFilter can use a sub filter, e.g. "GRAB User {friends IN {age > 20}}": at least one friend is in a list of UUIDs
- [ ] When sending results, also send the linked entities specified between []
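
A minimal sketch of the IN / !IN semantics described above, in standalone Python. The function names are hypothetical; only the rule "at least one of the row's linked UUIDs is in the set produced by the sub filter" comes from the items in this list:

# Hypothetical helper names; only the "at least one friend in a list of UUIDs"
# semantics come from the TODO items above.
def evaluate_in(row_uuids, sub_filter_uuids):
    return any(u in sub_filter_uuids for u in row_uuids)

def evaluate_not_in(row_uuids, sub_filter_uuids):
    return not evaluate_in(row_uuids, sub_filter_uuids)

# Example: a User whose friends are {A, B} matches "friends IN {age > 20}"
# if the sub filter resolved to a UUID set containing A or B.
print(evaluate_in({"aaaa-0001", "aaaa-0002"}, {"aaaa-0002", "bbbb-0003"}))  # True
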
Optimizations
- [X] Parse the file once for all conditions, not once per condition
- [X] Parse files in parallel with multi threading
- [X] GRAB
- [X] DELETE
- [X] UPDATE
- [ ] Radix tries for UUID lists (sketched below)
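
The radix-trie item is only a bullet; as a rough illustration, here is a minimal standalone Python sketch of a radix (path-compressed) trie over UUIDs treated as strings. Everything here is an assumption about what such a structure could look like and is unrelated to the actual Zig implementation:

class RadixNode:
    """One node of a radix (path-compressed) trie; edges carry string labels."""

    def __init__(self):
        self.children = {}   # edge label -> RadixNode
        self.is_key = False  # True if a stored UUID ends exactly here

    def insert(self, key):
        for label, child in list(self.children.items()):
            # Length of the common prefix between this edge label and the key.
            common = 0
            while common < min(len(label), len(key)) and label[common] == key[common]:
                common += 1
            if common == 0:
                continue  # no overlap with this edge, try the next one
            if common < len(label):
                # Split the edge: an intermediate node keeps the shared prefix,
                # the old child hangs below it with the leftover label.
                mid = RadixNode()
                mid.children[label[common:]] = child
                del self.children[label]
                self.children[label[:common]] = mid
                child = mid
            if common == len(key):
                child.is_key = True
            else:
                child.insert(key[common:])
            return
        # No edge shares a prefix with the key: add a fresh edge for all of it.
        leaf = RadixNode()
        leaf.is_key = True
        self.children[key] = leaf

    def contains(self, key):
        for label, child in self.children.items():
            if key == label:
                return child.is_key
            if key.startswith(label):
                return child.contains(key[len(label):])
        return False

# Hypothetical UUIDs, just to show usage.
ids = RadixNode()
for u in ("b7f9c2e4-a1d0-4f6e", "b7f9c2e4-a1d0-4f00", "03d1a7c9-e5b2-4f8a"):
    ids.insert(u)
print(ids.contains("b7f9c2e4-a1d0-4f6e"))  # True
print(ids.contains("b7f9c2e4-ffff-ffff"))  # False
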
ADD User (name='Bob', age = 44, best_friend = {id=0000-0000}) => new_user => UPDATE User {id = 0000-0000} TO (best_friend = new_user)
Happy to announce v0.2 of my database. New features include:
- Relationship
- Huge performance increase with multi threading
- Date, time and datetime type
- Linked query
- Compressed binary files
- Logs
All core features of the query language are working; v0.3 will focus on adding things around it, including:
- Schema migration
- Dump/Bump data
- Recovery
- Better CLI

153
python/charts.ipynb Normal file

File diff suppressed because one or more lines are too long

58
python/dummy_data.py Normal file

@@ -0,0 +1,58 @@
import subprocess
import random

from faker import Faker
from tqdm import tqdm

fake = Faker()


def random_array():
    """Return a random list of scores formatted as an array literal."""
    length = random.randint(-1, 10)  # -1 and 0 both give an empty array
    scores = [random.randint(-1, 100) for _ in range(length)]
    return f"[{' '.join(map(str, scores))}]"


def run(process, command):
    """Send a command to the ZipponDB process and return its output."""
    process.stdin.write('run "' + command + '"\n')
    process.stdin.flush()
    output = ""
    char = process.stdout.read(1)  # Read one character at a time
    while char:
        if char == "\x03":  # ETX marks the end of the response
            break
        output += char
        char = process.stdout.read(1)
    return output.strip()


# Start a fresh ZipponDB process for each batch of 1000 inserts.
for _ in tqdm(range(1000)):
    process = subprocess.Popen(
        ["zig-out/bin/ZipponDB"],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,  # For easier string handling
    )
    try:
        for _ in range(1000):
            query = "ADD User ("
            query += f"name = '{fake.name()}',"
            query += f"age = {random.randint(0, 100)},"
            query += f"email = '{fake.email()}',"
            query += f"scores={random_array()},"
            query += "friends = [],"
            query += f"bday={fake.date(pattern='%Y/%m/%d')},"
            query += f"last_order={fake.date_time().strftime('%Y/%m/%d-%H:%M:%S.%f')},"
            query += f"a_time={fake.date_time().strftime('%H:%M:%S.%f')}"
            query += ")"
            run(process, query)
    finally:
        # Ensure we always close the process, even if an error occurs.
        process.stdin.write("quit\n")
        process.stdin.flush()
        process.terminate()
        process.wait()
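
For ad-hoc checks after seeding, the same run() helper could be reused for a single query. A small sketch under the same assumptions as the script above (binary path and the run/ETX protocol), with the query syntax taken from the GRAB examples earlier in this commit:

# Hypothetical one-off check; the exact shape of the returned data is not
# specified here, so the result is just printed as-is.
process = subprocess.Popen(
    ["zig-out/bin/ZipponDB"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
    text=True,
)
print(run(process, "GRAB User {age > 90}"))
process.stdin.write("quit\n")
process.stdin.flush()
process.wait()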