From fbbeb2f40d27ef8fbfd151998d4413c6e350e761 Mon Sep 17 00:00:00 2001
From: MrBounty
Date: Mon, 4 Nov 2024 23:24:49 +0100
Subject: [PATCH] Update Benchmark.md

---
 Benchmark.md | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/Benchmark.md b/Benchmark.md
index 2e88327..8daac11 100644
--- a/Benchmark.md
+++ b/Benchmark.md
@@ -1,10 +1,19 @@
 ***Benchmarks are set to evolve quickly. I currently have multiple ideas to improve performance.***
 
-# 50 000 000 users
+# Intro
 
 In this example I create a random dataset of 50 000 000 Users using this schema:
-```
-TODO
+```lua
+User (
+    name: str,
+    age: int,
+    email: str,
+    bday: date,
+    last_order: datetime,
+    a_time: time,
+    scores: []int,
+    friends: []str,
+)
 ```
 
 Here is an example user:
@@ -14,10 +23,10 @@ run "ADD User (name = 'Diana Lopez',age = 2,email = 'allisonwilliams@example.org
 
 First, let's run a query that parses all files but returns nothing, so we measure the time to read and evaluate the files without writing and sending output.
 ```
-run "GRAB User {name = 'asdfqwer'}
+run "GRAB User {name = 'asdfqwer'}"
 ```
 
-### 50 000 000 Users
+## 50 000 000 Users
 This takes 6.24GB of space on disk, separated into xx files of xx MB
 
 | Thread | Time (s) | Usage (%) |
@@ -33,7 +42,7 @@ This takes 6.24GB of space on disk, separated into xx files of xx MB
 
 ![alt text](https://github.com/MrBounty/ZipponDB/blob/v0.1.4/charts/time_usage_per_thread_50_000_000.png)
 
-### 1 000 000
+## 1 000 000
 This takes 127MB of space on disk, separated into 24 files of 5.2MB
 
 | Thread | Time (ms) |
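
For readers unfamiliar with the query syntax used in the patch, here is a sketch of queries exercising the new `User` schema, modeled on the `ADD` and `GRAB` examples it contains. The field names come from the schema; the `>` filter operator and the partial-field `ADD` are assumptions not confirmed by this document:

```lua
-- Add a user (field names from the schema; omitting fields is an assumption)
run "ADD User (name = 'Bob', age = 30, email = 'bob@example.com')"

-- Filter on a schema field (the `>` comparison operator is an assumption)
run "GRAB User {age > 18}"

-- The benchmark query from the patch: matches nothing, so it measures
-- the time to read and evaluate all files without producing output
run "GRAB User {name = 'asdfqwer'}"
```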