* Support different keep-alive defaults for different HTTP versions.
* Fix incorrect usage of `copyBackwards`, which copies in a backwards
direction, making it suitable for moving data forward within a buffer,
not backwards (see the sketch below).
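A small test sketching those semantics (the values are illustrative):
```Zig
const std = @import("std");

test "copyBackwards moves data forward within a buffer" {
    var buf = "abcdef".*;
    // copyBackwards iterates from the end toward the start, so it is
    // safe when the destination overlaps and sits *after* the source.
    std.mem.copyBackwards(u8, buf[2..], buf[0..4]);
    try std.testing.expectEqualSlices(u8, "ababcd", &buf);
}
```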
The HTTP specification does not provide a way to escape \r\n in headers,
so it is the API user's responsibility to ensure that header names and
values do not contain \r\n. Additionally, header names must not contain ':'.
This is enforced with an assertion rather than an error, because the
calling code is very likely using hard-coded or server-provided values
that do not need to be checked, making the error path unreachable anyway.
Untrusted user input must not be put directly into HTTP headers.
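A minimal sketch of the asserted invariant (the helper name is hypothetical, not the actual std.http code):
```Zig
const std = @import("std");

// Hypothetical helper illustrating the invariant described above.
fn assertValidHeader(name: []const u8, value: []const u8) void {
    // Names must not contain CR, LF, or ':'.
    std.debug.assert(std.mem.indexOfAny(u8, name, "\r\n:") == null);
    // Values must not contain CR or LF.
    std.debug.assert(std.mem.indexOfAny(u8, value, "\r\n") == null);
}
```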
The API automatically handles these requests as expected. After
receiveHead(), the server has a chance to notice the expectation and do
something about it. If it does not, then the Server implementation will
handle it by sending the continuation header when the read stream is
created.
Both respond() and respondStreaming() send the continuation header as
part of discarding the request body, but only if the read stream has not
already been created.
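A hedged sketch of a handler that notices the expectation after receiveHead() (the `head.expect` field name is an assumption):
```Zig
const std = @import("std");

// Reject unknown expectations before any body is read; otherwise do
// nothing and let the Server send "100 Continue" automatically when
// the read stream is created.
fn checkExpectation(request: *std.http.Server.Request) !void {
    if (request.head.expect) |expectation| {
        if (!std.mem.eql(u8, expectation, "100-continue")) {
            try request.respond("", .{ .status = .expectation_failed });
            return;
        }
    }
}
```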
Previously, I mistakenly thought that a missing content-length meant
zero, when it actually means the body streams until the connection is
closed.
Now the respond() function accepts a transfer_encoding option, which can
be left as the default (use content.len for the content-length), set to
none (omit the content-length header), or set to chunked (format the
response as a chunked transfer even though the server already has the
entire contents buffered).
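A hedged sketch of the three modes (the enum member names follow the text and are assumptions):
```Zig
const std = @import("std");

fn reply(request: *std.http.Server.Request, body: []const u8) !void {
    // Default: Content-Length is body.len.
    try request.respond(body, .{});
    // Alternatively:
    // try request.respond(body, .{ .transfer_encoding = .none });    // omit Content-Length
    // try request.respond(body, .{ .transfer_encoding = .chunked }); // chunked framing
}
```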
The echo-content tests are moved from test/standalone/http.zig to the
standard library where they are actually run.
* Uncouple std.http.ChunkParser from protocol.zig
* Fix receiveHead not passing leftover buffer through the header parser.
* Fix content-length read streaming
This implementation handles the final chunk length correctly rather than
"hoping" that the buffer already contains \r\n.
Mainly, this removes the poorly named `wait`, `send`, and `finish`
functions, which all operated on the same "Response" object that was
actually being used as the request.
Now, it looks like this:
1. std.net.Server.accept() gives you a std.net.Server.Connection
2. std.http.Server.init() with the connection
3. Server.receiveHead() gives you a Request
4. Request.reader() gives you a body reader
5. Request.respond() is a one-shot, or Request.respondStreaming() creates
a Response
6. Response.writer() gives you a body writer
7. Response.end() finishes the response; Response.endChunked() allows
passing response trailers.
In other words, the type system now guides the API user down the correct
path.
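A minimal end-to-end sketch of that flow, assuming the API described above (address, port, and buffer size are arbitrary):
```Zig
const std = @import("std");

pub fn main() !void {
    const address = try std.net.Address.parseIp("127.0.0.1", 8080);
    var net_server = try address.listen(.{ .reuse_address = true });
    defer net_server.deinit();

    var read_buffer: [8192]u8 = undefined;
    while (true) {
        // 1. accept() gives a std.net.Server.Connection
        const connection = try net_server.accept();
        defer connection.stream.close();

        // 2. init the http.Server with the connection and a read buffer
        var server = std.http.Server.init(connection, &read_buffer);

        // 3. receiveHead() gives a Request
        var request = server.receiveHead() catch continue;

        // 5. respond() is the one-shot path
        try request.respond("Hello, World!\n", .{});
    }
}
```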
receiveHead allows extra bytes to be read into the read buffer, and then
will reuse those bytes for the body or the next request upon connection
reuse.
respond(), the one-shot function, will send the entire response in one
syscall.
Streaming response bodies no longer wastefully wrap every call to write
in a chunk header and trailer; instead, the HTTP chunk wrapper is only
sent when flushing. The user still controls when that happens, but no
unnecessary chunks are added.
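A hedged sketch of the streaming path, where each flush() emits a single chunk (option and method names follow the steps listed above):
```Zig
const std = @import("std");

fn streamReply(request: *std.http.Server.Request) !void {
    var send_buffer: [4096]u8 = undefined;
    var response = request.respondStreaming(.{ .send_buffer = &send_buffer });
    const w = response.writer();
    try w.writeAll("part one, ");
    try w.writeAll("part two, ");
    try response.flush(); // everything buffered so far becomes one chunk
    try w.writeAll("part three");
    try response.end(); // finishes the chunked body
}
```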
Empirically, in my example project that uses this API, the usage code is
significantly less noisy, has less error handling while handling errors
more correctly, makes it more obvious what is happening, and is
syscall-optimal.
Additionally:
* Uncouple std.http.HeadParser from protocol.zig
* Delete std.http.Server.Connection; use std.net.Server.Connection instead.
- The API user supplies the read buffer when initializing the
http.Server, and it is used for the HTTP head as well as a buffer
for reading the body into.
* Replace and document the State enum. No longer is there both "start"
and "first".
This reverts commit 42be972a72c86b32ad8403d082ab42763c6facec.
Using a bit to distinguish between headers and trailers is fine. It was
just named and documented poorly.
I originally removed these in 402f967ed5339fa3d828b7fe1d57cdb5bf38dbf2.
I allowed them to be added back in #15299 because they were smuggled in
alongside a bug fix; however, I wasn't kidding when I said that I wanted
to take the design of std.http in a different direction than using this
data structure.
Instead, some headers are provided via explicit field names populated
while parsing the HTTP request/response, and some are provided via
new fields that support passing extra, arbitrary headers.
This resulted in simplification of logic in many places, as well as
elimination of the possibility of failure in many places. There is
less deinitialization code happening now.
Furthermore, it made it no longer necessary to clone the headers data
structure in order to handle redirects.
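A hedged sketch of the two paths on the server side (the `extra_headers` field name and the header itself are assumptions for illustration):
```Zig
const std = @import("std");

// Well-known headers become explicit option fields; arbitrary extras
// pass through an extra_headers slice of name/value pairs.
fn replyWithExtras(request: *std.http.Server.Request) !void {
    try request.respond("ok\n", .{
        .extra_headers = &.{
            .{ .name = "x-request-id", .value = "1234" }, // hypothetical header
        },
    });
}
```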
http_proxy and https_proxy fields are now pointers since it is common
for them to be unpopulated.
loadDefaultProxies is changed into initDefaultProxies to communicate
that it does not actually load anything from disk or from the network.
The function is now leaky; the API user must pass an already
instantiated arena allocator. This removes the need to deinitialize
proxies.
Before, proxies stored arbitrary sets of headers. Now they only store
the authorization value.
Removed the duplicated code between https_proxy and http_proxy. Finally,
parsing failures of the environment variables result in errors being
emitted rather than silently ignoring the proxy.
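A hedged sketch of the resulting call pattern (assuming initDefaultProxies takes the arena allocator directly):
```Zig
const std = @import("std");

// The caller supplies an already instantiated arena; proxy strings
// live there, so cleanup is just deiniting the arena.
fn clientWithProxies(gpa: std.mem.Allocator, arena: std.mem.Allocator) !std.http.Client {
    var client = std.http.Client{ .allocator = gpa };
    try client.initDefaultProxies(arena);
    return client;
}
```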
error.CompressionNotSupported is renamed to
error.CompressionUnsupported, matching the naming convention from all
the other errors in the same set.
Removed documentation comments that were redundant with field and type
names.
Disabling zstd decompression in the server for now; see #18937.
I found some apparently dead code in src/Package/Fetch/git.zig. I want
to check with Ian about this.
I discovered that test/standalone/http.zig is dead code; it is only
being compiled, not run. Furthermore, it hangs at the end if you run it
manually. The previous commits in this branch were written under the
assumption that this test was being run by `zig build test-standalone`.
This is a state machine that already has a `state` field. No need to
additionally store "done" - it just makes things unnecessarily
complicated and buggy.
The buffer for HTTP headers is now always provided via a static buffer.
As a consequence, OutOfMemory is no longer a member of the read() error
set, and the API and implementation of Client and Server are simplified.
error.HttpHeadersExceededSizeLimit is renamed to
error.HttpHeadersOversize.
Zig deflate compression/decompression implementation. It supports compression and decompression of gzip, zlib, and raw deflate formats.
Fixes #18062.
This PR replaces the current compress/gzip and compress/zlib packages. The deflate package is renamed to flate. Flate is the common name for deflate/inflate, where deflate is compression and inflate is decompression.
There are breaking changes. Method signatures have changed because of the removal of the allocator, and I also unified the API for all three namespaces (flate, gzip, zlib).
For now I have put the old packages under a v1 namespace; they are still available as compress/v1/gzip, compress/v1/zlib, and compress/v1/deflate. The idea is to give users of the current API a little time to postpone analyzing what they have to change. Although that raises the question of when it is safe to remove that v1 namespace.
Here is the current API in the compress package:
```Zig
// deflate
fn compressor(allocator, writer, options) !Compressor(@TypeOf(writer))
fn Compressor(comptime WriterType) type
fn decompressor(allocator, reader, null) !Decompressor(@TypeOf(reader))
fn Decompressor(comptime ReaderType: type) type
// gzip
fn compress(allocator, writer, options) !Compress(@TypeOf(writer))
fn Compress(comptime WriterType: type) type
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// zlib
fn compressStream(allocator, writer, options) !CompressStream(@TypeOf(writer))
fn CompressStream(comptime WriterType: type) type
fn decompressStream(allocator, reader) !DecompressStream(@TypeOf(reader))
fn DecompressStream(comptime ReaderType: type) type
// xz
fn decompress(allocator: Allocator, reader: anytype) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma
fn decompress(allocator, reader) !Decompress(@TypeOf(reader))
fn Decompress(comptime ReaderType: type) type
// lzma2
fn decompress(allocator, reader, writer) !void
// zstandard:
fn DecompressStream(ReaderType, options) type
fn decompressStream(allocator, reader) DecompressStream(@TypeOf(reader), .{})
struct decompress
```
The proposed naming convention:
- Compressor/Decompressor for functions which return a type, like Reader/Writer/GeneralPurposeAllocator
- compressor/decompressor for functions which are initializers for those types, like reader/writer/allocator
- compress/decompress for one-shot operations which accept a reader/writer pair, like read/write/alloc
```Zig
/// Compress from reader and write compressed data to the writer.
fn compress(reader: anytype, writer: anytype, options: Options) !void
/// Create a Compressor which writes compressed data to the writer.
fn compressor(writer: anytype, options: Options) !Compressor(@TypeOf(writer))
/// Compressor type
fn Compressor(comptime WriterType: type) type
/// Decompress from reader and write plain data to the writer.
fn decompress(reader: anytype, writer: anytype) !void
/// Create a Decompressor which reads compressed data from the reader.
fn decompressor(reader: anytype) Decompressor(@TypeOf(reader))
/// Decompressor type
fn Decompressor(comptime ReaderType: type) type
```
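For example, a one-shot round trip with the proposed API might look like this (buffer sizes are arbitrary):
```Zig
const std = @import("std");

test "flate one-shot round trip" {
    const input = "a one-shot compress/decompress round trip";
    var compressed: [256]u8 = undefined;
    var plain: [256]u8 = undefined;

    // Compress: reader in, writer out, default options.
    var in = std.io.fixedBufferStream(input);
    var out = std.io.fixedBufferStream(&compressed);
    try std.compress.flate.compress(in.reader(), out.writer(), .{});

    // Decompress back and compare.
    var comp_in = std.io.fixedBufferStream(out.getWritten());
    var plain_out = std.io.fixedBufferStream(&plain);
    try std.compress.flate.decompress(comp_in.reader(), plain_out.writer());

    try std.testing.expectEqualSlices(u8, input, plain_out.getWritten());
}
```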
Comparing this implementation with the one we currently have in Zig's standard library (std):
Std is roughly 1.2-1.4 times slower in decompression, and 1.1-1.2 times slower in compression. Compressed sizes are pretty much the same in both cases.
More results in [this](https://github.com/ianic/flate) repo.
This library uses static allocations for all structures and doesn't require an allocator. That makes sense especially for deflate, where all structures and internal buffers are allocated to their full size. It matters a little less for inflate, where the std version uses less memory by not preallocating arrays to their theoretical maximum size, since they are usually not fully used.
For deflate this library allocates 395K, while std allocates 779K.
For inflate this library allocates 74.5K, while std allocates around 36K.
The inflate difference is because we use a 64K history window here instead of the 32K used in std.
If merged, existing usage of compress gzip/zlib/deflate needs some changes. Here is an example with the necessary changes in comments:
```Zig
const std = @import("std");

// To get this file:
// wget -nc -O war_and_peace.txt https://www.gutenberg.org/ebooks/2600.txt.utf-8
const data = @embedFile("war_and_peace.txt");

pub fn main() !void {
    var gpa = std.heap.GeneralPurposeAllocator(.{}){};
    defer std.debug.assert(gpa.deinit() == .ok);
    const allocator = gpa.allocator();

    try oldDeflate(allocator);
    try new(std.compress.flate, allocator);

    try oldZlib(allocator);
    try new(std.compress.zlib, allocator);

    try oldGzip(allocator);
    try new(std.compress.gzip, allocator);
}

pub fn new(comptime pkg: type, allocator: std.mem.Allocator) !void {
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    var cmp = try pkg.compressor(buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    var dcp = pkg.decompressor(fbs.reader());

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldDeflate(allocator: std.mem.Allocator) !void {
    const deflate = std.compress.v1.deflate;

    // Compressor
    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Remove allocator
    // Rename deflate -> flate
    var cmp = try deflate.compressor(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    // Decompressor
    var fbs = std.io.fixedBufferStream(buf.items);
    // Remove allocator and last param
    // Rename deflate -> flate
    // Remove try
    var dcp = try deflate.decompressor(allocator, fbs.reader(), null);
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldZlib(allocator: std.mem.Allocator) !void {
    const zlib = std.compress.v1.zlib;

    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compressStream => compressor
    // Remove allocator
    var cmp = try zlib.compressStream(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.finish();
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // Rename decompressStream => decompressor
    // Remove allocator
    // Remove try
    var dcp = try zlib.decompressStream(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}

pub fn oldGzip(allocator: std.mem.Allocator) !void {
    const gzip = std.compress.v1.gzip;

    var buf = std.ArrayList(u8).init(allocator);
    defer buf.deinit();

    // Compressor
    // Rename compress => compressor
    // Remove allocator
    var cmp = try gzip.compress(allocator, buf.writer(), .{});
    _ = try cmp.write(data);
    try cmp.close(); // Rename to finish
    cmp.deinit(); // Remove

    var fbs = std.io.fixedBufferStream(buf.items);
    // Decompressor
    // Rename decompress => decompressor
    // Remove allocator
    // Remove try
    var dcp = try gzip.decompress(allocator, fbs.reader());
    defer dcp.deinit(); // Remove

    const plain = try dcp.reader().readAllAlloc(allocator, std.math.maxInt(usize));
    defer allocator.free(plain);
    try std.testing.expectEqualSlices(u8, data, plain);
}
```
This reverts commit 0c99ba1eab63865592bb084feb271cd4e4b0357e, reversing
changes made to 5f92b070bf284f1493b1b5d433dd3adde2f46727.
This caused a CI failure when it landed in the master branch due to a
128-bit `@byteSwap` in std.mem.
Some servers will respond with the identity encoding, meaning no
encoding, especially when responding to range GET requests. Adding
identity to the recognized encodings stops the header parser from
failing when it encounters this.
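A hedged sketch of the parsing rule (the enum and helper here are illustrative, not the actual std.http types):
```Zig
const std = @import("std");

// Illustrative stand-in for the parser's content-encoding handling.
const ContentEncoding = enum { identity, gzip, deflate, zstd };

fn parseContentEncoding(value: []const u8) ?ContentEncoding {
    if (std.ascii.eqlIgnoreCase(value, "identity")) return .identity; // no encoding
    if (std.ascii.eqlIgnoreCase(value, "gzip")) return .gzip;
    if (std.ascii.eqlIgnoreCase(value, "deflate")) return .deflate;
    if (std.ascii.eqlIgnoreCase(value, "zstd")) return .zstd;
    return null; // unknown encodings still fail
}
```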