Typo in docs

Adrien Bouvais 2024-11-10 11:41:06 +01:00
parent e1c957b3b6
commit dd65f5f03e
2 changed files with 6 additions and 6 deletions


@@ -2,7 +2,7 @@
# Intro
-In this example I create a random dataset of 50 000 000 Users using this shema:
+In this example I create a random dataset of Users using this schema:
```lua
User (
name: str,
@@ -40,7 +40,7 @@ This take 6.24GB space on disk, seperated into xx files of xx MB
| 12 | 5.1 | 85 |
| 16 | 4.3 | 100 |
-![alt text](https://github.com/MrBounty/ZipponDB/blob/v0.1.4/charts/time_usage_per_thread_50_000_000.png)
+![alt text](https://github.com/MrBounty/ZipponDB/blob/v0.1.4/python/charts/time_usage_per_thread_50_000_000.png)
## 1 000 000
This take 127MB space on disk, sperated into 24 files of 5.2MB
@@ -56,7 +56,7 @@ This take 127MB space on disk, sperated into 24 files of 5.2MB
| 12 | 136 |
| 16 | 116 |
-![alt text](https://github.com/MrBounty/ZipponDB/blob/v0.1.4/charts/time_usage_per_thread_1_000_000.png)
+![alt text](https://github.com/MrBounty/ZipponDB/blob/v0.1.4/python/charts/time_usage_per_thread_1_000_000.png)
## TODO


@@ -10,13 +10,13 @@ All `Tokenizer` work similary and are based on the [zig tokenizer.](https://gith
The `Tokenizer` role is to take a buffer string and convert it into a list of `Token`. A token have an enum `Tag` that represent what the token is, for example `=` is the tag `equal`, and a `Loc` with a `start` and `end` usize that represent the emplacement in the buffer.
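The `Token` described above could be sketched roughly like this; the `Tag` and `Loc` names come from the description here, but the exact declarations and the full set of tags are assumptions:

```zig
// Rough sketch of the Token described above: a Tag enum naming what
// the token is, plus a Loc marking its place in the source buffer.
// Only tags mentioned in this document are listed; the real
// tokenizer defines many more.
pub const Token = struct {
    tag: Tag,
    loc: Loc,

    pub const Loc = struct {
        start: usize,
        end: usize,
    };

    pub const Tag = enum {
        equal, // the `=` character
        end,   // end of the buffer
        // ...other tags
    };
};
```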
-The `Tokenizer` itself have 2 methods: `next` that return the next `Token`. And `TODO` that return the slice of the buffer that represent the `Token`, using it's `Loc`.
+The `Tokenizer` itself have 2 methods: `next` that return the next `Token`. And `getTokenSlice` that return the slice of the buffer that represent the `Token`, using it's `Loc`.
This is how to use it:
```zig
const toker = Tokenizer.init(buff);
const token = toker.next();
-std.debug.print("{s}", .{toker.xxxx(token)});
+std.debug.print("{s}", .{toker.getTokenSlice(token)});
```
I usually use a `Tokenizer` in a loop until the `Tag` is `end`. And in each loop I take the next token and will use a switch on the `Tag` to do stuffs.
@@ -26,7 +26,7 @@ Here a simple example:
const toker = Tokenizer.init(buff);
var token = toker.next();
while (token.tag != .end) : (token = toker.next()) switch (token.tag) {
-    .equal => std.debug.print("{s}", .{toker.xxxx(token)}),
+    .equal => std.debug.print("{s}", .{toker.getTokenSlice(token)}),
else => {},
}
```