Typo in docs
This commit is contained in: parent e1c957b3b6, commit dd65f5f03e
@@ -2,7 +2,7 @@
 # Intro
 
-In this example I create a random dataset of 50 000 000 Users using this shema:
+In this example I create a random dataset of Users using this schema:
 
 ```lua
 User (
     name: str,
@@ -40,7 +40,7 @@ This take 6.24GB space on disk, seperated into xx files of xx MB
 | 12 | 5.1 | 85 |
 | 16 | 4.3 | 100 |
 
 
 
 ## 1 000 000
 This take 127MB space on disk, sperated into 24 files of 5.2MB
||||||
@@ -56,7 +56,7 @@ This take 127MB space on disk, sperated into 24 files of 5.2MB
 | 12 | 136 |
 | 16 | 116 |
 
 
 
 ## TODO
 
@@ -10,13 +10,13 @@ All `Tokenizer` work similary and are based on the [zig tokenizer.](https://gith
 
 The `Tokenizer` role is to take a buffer string and convert it into a list of `Token`. A token have an enum `Tag` that represent what the token is, for example `=` is the tag `equal`, and a `Loc` with a `start` and `end` usize that represent the emplacement in the buffer.
 
-The `Tokenizer` itself have 2 methods: `next` that return the next `Token`. And `TODO` that return the slice of the buffer that represent the `Token`, using it's `Loc`.
+The `Tokenizer` itself have 2 methods: `next` that return the next `Token`. And `getTokenSlice` that return the slice of the buffer that represent the `Token`, using it's `Loc`.
 
 This is how to use it:
 ```zig
 const toker = Tokenizer.init(buff);
 const token = toker.next();
-std.debug.print("{s}", .{toker.xxxx(token)});
+std.debug.print("{s}", .{toker.getTokenSlice(token)});
 ```
 
 I usually use a `Tokenizer` in a loop until the `Tag` is `end`. And in each loop I take the next token and will use a switch on the `Tag` to do stuffs.
@@ -26,7 +26,7 @@ Here a simple example:
 const toker = Tokenizer.init(buff);
 var token = toker.next();
 while (token.tag != .end) : (token = toker.next()) switch (token.tag) {
-    .equal => std.debug.print("{s}", .{toker.xxxx(token)}),
+    .equal => std.debug.print("{s}", .{toker.getTokenSlice(token)}),
 else => {},
 }
 ```