std.http.Client hangs indefinitely on response body sizes > 4096 bytes #15710
Comments
I have a hunch that #15704 fixes this, but I'm not entirely sure; it seems related.
I am also getting
I suspect #15590 is related.
The infinite loop looks to be caused by attempting to read into a zero-length buffer. I'll look into this and see what I can find and fix.
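The failure mode described here can be sketched in isolation. This is a hypothetical stand-in loop, not the actual `readAtLeast` from std; the function name and guard placement are invented for illustration:

```zig
const std = @import("std");

// Hypothetical sketch of the failure mode, not the actual std.http code.
// If `read` is handed a zero-length destination slice it returns 0, so a
// loop that only checks `out_index < len` never makes progress.
fn readAtLeastSketch(reader: anytype, out: []u8, len: usize) !usize {
    std.debug.assert(len <= out.len); // analogous to the assert later added upstream
    var out_index: usize = 0;
    while (out_index < len) {
        const amt = try reader.read(out[out_index..]);
        if (amt == 0) break; // without this guard, a 0-byte read loops forever
        out_index += amt;
    }
    return out_index;
}
```

With the `amt == 0` guard the loop terminates on a full output buffer or end of input; without it, a reader that keeps returning 0 bytes spins indefinitely, which matches the observed hang.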
This appears to be the issue when I looked at it too. I managed to fix the hanging and read the whole response body by adding the following after lib/std/http/protocol.zig:576:

```zig
if (out_avail == 0) {
    return out_index;
}
```

But I don't think that this is the correct solution. `r.next_chunk_length` appears to be 1 while `out_avail` is 0. Something similar probably also needs to be done in the other switch cases.
#15927 didn't appear to fix this, at least when used with https://google.com. It now hits an assert that was added to readAtLeast. Here is example code to reproduce it:

```zig
const std = @import("std");

const uri = std.Uri.parse("https://google.com") catch unreachable;

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    var allocator = arena.allocator();

    var client: std.http.Client = .{ .allocator = allocator };
    defer client.deinit();

    var req = try client.request(.GET, uri, .{ .allocator = allocator }, .{});
    defer req.deinit();

    try req.start();
    try req.wait();

    const body = try req.reader().readAllAlloc(allocator, 2 << 24);
    defer allocator.free(body);
}
```

and here is the error stack trace:
Tested with zig version 0.11.0-dev.3380+7e0a02ee2 (latest master at the moment).
Ok, it "works" when compiling in ReleaseFast mode (as that assert is disabled and it's not stuck in an infinite loop). But that means that readAtLeast is not reading at least `len` bytes as it describes, I guess? There seems to be a conflict between that assert and the comment on line 68 in 7e0a02e, which promises that at least `len` bytes will be read.
That assert being triggered means that something is requesting more bytes than it has space to store, which is impossible. The comment on that line refers to partially reading the buffer (i.e. not completely consuming it, in which case the output buffer will be completely full). The only other place that I can see this coming from is in protocol.zig.

My guess is that
Yes, `out_avail` is 0 there, so `can_read` becomes 0 as well, as I mentioned in a previous comment.
That just means that the buffer is full and the current chunk has 1 more byte left in it; `read` doesn't have to fully consume a chunk, nor is it expected to. But this would certainly explain the assertion failing. I think a more sound solution would be to replace the
For those looking for a workaround while the bugs are ironed out: I have found that setting `Accept-Encoding: identity` in the request headers reduces the frequency of crashes when using the HTTP client.
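A sketch of that workaround, assuming the 0.11-dev `std.http.Client` API used in the reproduction above (the `Headers` API may have changed since; `identityHeaders` and the example URL are invented for illustration):

```zig
const std = @import("std");

// Build a Headers set that asks the server to skip compression entirely,
// sidestepping the code paths where the hangs/crashes were observed.
fn identityHeaders(allocator: std.mem.Allocator) !std.http.Headers {
    var headers = std.http.Headers{ .allocator = allocator };
    errdefer headers.deinit();
    try headers.append("Accept-Encoding", "identity");
    return headers;
}

pub fn main() !void {
    var arena = std.heap.ArenaAllocator.init(std.heap.page_allocator);
    defer arena.deinit();
    const allocator = arena.allocator();

    var client: std.http.Client = .{ .allocator = allocator };
    defer client.deinit();

    var headers = try identityHeaders(allocator);
    defer headers.deinit();

    // Hypothetical URL; pass the explicit headers instead of a default set.
    const uri = try std.Uri.parse("https://example.com");
    var req = try client.request(.GET, uri, headers, .{});
    defer req.deinit();
    try req.start();
    try req.wait();
}
```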
#16150 seems to have fixed this.
Zig Version
0.11.0-dev.3107+6547d2331
Steps to Reproduce and Observed Behavior
Make an HTTPS request using the `std.http.Client` and set `transfer_encoding` to `.chunked` on the request like so:

Once you surpass a certain body size (greater than 4096 bytes, I believe), it remains stuck in an infinite loop. The response body of the request which causes this issue is much smaller than 12 MB; it's around 3.4 MB. If I make a request under 4 KB it seems to work fine.

I tracked the bug to `BufferedConnection.readAtLeast`, and it seems it's stuck in the `while (out_index < len)` loop. I logged the values of the variables that affect this loop like so:

and it produces these logs continuously:

Also, sometimes it doesn't get stuck in the loop and instead returns this error (~20% of the time):
Expected Behavior
It should not hang indefinitely.