A bit better docs README

This commit is contained in:
Adrien Bouvais 2025-01-12 01:41:42 +01:00
parent 67546d69a9
commit 83ff27c3f2
4 changed files with 72 additions and 37 deletions


@ -8,13 +8,13 @@ ZipponDB's goal is to be ACID, light, simple, and high-performance. It aims at s
### Why Zippon?
- Relational database (Soon)
- Relational database
- Simple and minimal query language
- Small, light, fast, and implementable everywhere
For more information, visit the docs: https://mrbounty.github.io/ZipponDB/
***Note: ZipponDB is still in Alpha v0.1 and is missing a lot of features, see roadmap at the end of this README.***
***Note: ZipponDB is still in Alpha v0.2, see roadmap.***
# Declare a schema
@ -31,7 +31,7 @@ User (
)
```
Note that the best friend is a link to another `User`.
Note that the best friend is a link to another `User`. You can find more examples [here](https://github.com/MrBounty/ZipponDB/tree/main/schema).
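A minimal sketch of such a schema, assuming the `field: type` declaration style; the exact field names and type keywords here are illustrative, not taken from the repo:

```
User (
    name: str,
    email: str,
    best_friend: User,
)
```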
# ZipponQL
@ -43,8 +43,6 @@ ZipponDB uses its own query language, ZipponQL or ZiQL for short. Here are the k
- `()` contain new or updated data (not already in the file)
- `||` are additional options
***Disclaimer: A lot of features are still missing and the language may change over time.***
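Putting the brackets together, a sketch assembled from the examples in this README (`[]` selects what to return, `{}` filters, `()` holds new data; the `||` options syntax is not shown here):

```js
GRAB User [id, email] {age > 30}
ADD User (name = 'Bob', age = 25, email = 'bob@example.org')
```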
## GRAB
The main action is `GRAB`, this will parse files and return data.
@ -54,10 +52,10 @@ GRAB User {name = 'Bob' AND (age > 30 OR age < 10)}
You can use `[]` before the filter to specify what to return.
```js
GRAB User [id, email] {name = 'Bob' AND (age > 30 OR age < 10)}
GRAB User [id, email] {name = 'Bob'}
```
Here is a preview of how to use relationships.
Relationships use a filter within a filter.
```js
GRAB User {best_friend IN {name = 'Bob'}}
```
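These constructs compose; a sketch combining field selection with a relationship filter, assuming the two features combine as shown in the separate examples above:

```js
GRAB User [name, email] {best_friend IN {age > 30}}
```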


@ -107,15 +107,15 @@ pub fn main() !void {
std.debug.print("--------------------------------------\n\n", .{});
}
{
for (db_engine.schema_engine.struct_array) |sstruct| {
const mb: f64 = @as(f64, @floatFromInt(sstruct.uuid_file_index.arena.queryCapacity())) / 1024.0 / 1024.0;
std.debug.print("Sstruct: {s}\n", .{sstruct.name});
std.debug.print("Memory: {d:.2}Mb\n", .{mb});
std.debug.print("Count: {d}\n\n", .{sstruct.uuid_file_index.map.count()});
std.debug.print("--------------------------------------\n\n", .{});
}
}
//{
// for (db_engine.schema_engine.struct_array) |sstruct| {
// const mb: f64 = @as(f64, @floatFromInt(sstruct.uuid_file_index.arena.queryCapacity())) / 1024.0 / 1024.0;
// std.debug.print("Sstruct: {s}\n", .{sstruct.name});
// std.debug.print("Memory: {d:.2}Mb\n", .{mb});
// std.debug.print("Count: {d}\n\n", .{sstruct.uuid_file_index.map.count()});
// std.debug.print("--------------------------------------\n\n", .{});
// }
//}
// Define your benchmark queries
{
@ -149,16 +149,6 @@ pub fn main() !void {
std.debug.print("=====================================\n\n", .{});
}
{
for (db_engine.schema_engine.struct_array) |sstruct| {
const mb: f64 = @as(f64, @floatFromInt(sstruct.uuid_file_index.arena.queryCapacity())) / 1024.0 / 1024.0;
std.debug.print("Sstruct: {s}\n", .{sstruct.name});
std.debug.print("Memory: {d:.2}Mb\n", .{mb});
std.debug.print("Count: {d}\n\n", .{sstruct.uuid_file_index.map.count()});
std.debug.print("--------------------------------------\n\n", .{});
}
}
}
}
}


@ -1,6 +1,59 @@
# Benchmark
***Benchmarks are set to evolve quickly. I currently have multiple ideas to improve performance.***
# Intro
## Command
You can run `zig build benchmark` after cloning the repo to benchmark your machine.
Here is an example from my machine:
```
=====================================
Populating with 500000 users.
Populate duration: 8.605314 seconds
Database path: benchmark
Total size: 50.99Mb
LOG: 0.00Mb
BACKUP: 0.00Mb
DATA: 50.93Mb
User: 50.92683124542236Mb 500000 entities
--------------------------------------
Query: GRAB User {}
Duration: 457.758686 ms
Query: GRAB User [1] {}
Duration: 1.285849 ms
Query: GRAB User [name] {}
Duration: 138.041888 ms
Query: GRAB User {name = 'Charlie'}
Duration: 63.094060 ms
Query: GRAB User {age > 30}
Duration: 335.654647 ms
Query: GRAB User {bday > 2000/01/01}
Duration: 52.896498 ms
Query: GRAB User {age > 30 AND name = 'Charlie' AND bday > 2000/01/01}
Duration: 56.295173 ms
Query: GRAB User {best_friend IN {name = 'Charlie'}}
Duration: 69.165272 ms
Query: DELETE User {}
Duration: 93.530622 ms
=====================================
```
## File Parsing
In this example I create a random dataset of Users using this schema:
```lua
@ -21,13 +74,13 @@ Here a user example:
run "ADD User (name = 'Diana Lopez',age = 2,email = 'allisonwilliams@example.org',scores=[37 85 90 71 88 85 68],friends = [],bday=1973/11/13,last_order=1979/07/18-15:05:26.590261,a_time=03:04:06.862213)
```
First, let's do a query that parses all files but doesn't return anything, so we measure the time to read and evaluate files without writing and sending output.
Let's do a query that parses all files but doesn't return anything, so we measure the time to read and evaluate files without writing and sending output.
```lua
run "GRAB User {name = 'asdfqwer'}"
```
## 50 000 000 Users
This takes 6.24GB of space on disk, separated into xx files of xx MB
This takes 6.24GB of space on disk.
| Thread | Time (s) | Usage (%) |
| --- | --- | --- |
@ -58,10 +111,3 @@ This take 127MB space on disk, sperated into 24 files of 5.2MB
![Chart](images/time_usage_per_thread_1_000_000.png)
## TODO
- [ ] Benchmark per file size, to find the optimal one. For 10kB, 5MB, 100MB, 1GB
- [ ] Create a build command to benchmark. For 1_000, 1_000_000, 50_000_000 users
- [ ] Create a random dataset
- [ ] Do simple queries, get the average and ± time by sets of 25 queries
- [ ] Return the data to draw a chart


@ -22,7 +22,7 @@
#### v0.3 - QoL
- [X] Docs website
- [ ] Schema migration
- [ ] Dump/Bump data
- [~] Dump/Bump data
- [ ] Recovery
- [ ] Better CLI
- [ ] Linked query
@ -53,6 +53,7 @@
### Gold
#### v0.8 - Advanced
- [ ] Query optimizer
- [ ] Single file
#### v0.9 - Docs
- [ ] ZiQL tuto