Feature requests: iter_chunks([max_size]) #2900
Comments
This already works. Use `iter_content(None)`.
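A minimal sketch of that suggestion, assuming a streaming GET against a placeholder URL (the exact behaviour of `chunk_size=None` is debated further down in this thread):

```python
import requests

# Placeholder URL purely for illustration.
resp = requests.get("https://example.com/large-file", stream=True)

# With stream=True, chunk_size=None asks requests to yield data as it
# is received rather than buffering up to a fixed size; whether that
# actually happens depends on the transfer encoding and the requests
# version, as discussed below.
for chunk in resp.iter_content(chunk_size=None):
    print(len(chunk))
```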
That's awesome \o/ Is there any reason why `__iter__` doesn't use this? Would it be possible to add it to the documentation somewhere other than under the discussion about chunked _up_loading? Maybe under Raw Response Content; I think that's why I didn't find it. Thank you for the quick response!
I'd happily accept a pull request that adds a similar stanza to that portion of the docs. =) As to why we didn't change `__iter__`: one way or another, you should usually set a maximum size there. Arguably we should set it to something other than 128, but for now I don't think it's unreasonable to leave it as it is. We may change it in 3.0.0, though.
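To illustrate the distinction being made here (placeholder URL again): iterating the response object goes through `__iter__` and its small fixed chunk size, while an explicit `chunk_size` caps how much is buffered per iteration, which is the "maximum size" being recommended.

```python
import requests

resp = requests.get("https://example.com/large-file", stream=True)

# Iterating the Response object itself goes through __iter__, which in
# the versions discussed here calls iter_content with 128-byte chunks.
for chunk in resp:
    print(len(chunk))

# Passing an explicit chunk_size bounds the amount read per iteration.
resp = requests.get("https://example.com/large-file", stream=True)
for chunk in resp.iter_content(chunk_size=64 * 1024):
    print(len(chunk))
```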
Are you sure that `iter_content(None)` works like that? When I take a stack-trace during the call, it seems to be stuck in a blocking read. Speaking of, this has actually been sitting here for quite some time now. Shouldn't it have given me some chunks? It works with smaller chunk sizes.
Here is the stack trace that I observed:
What version of requests are you using?
Then the problem is that the website you contacted is not actually doing chunked encoding. In this context, it will attempt to read up to the maximum.
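A quick way to check which case a given response falls into (rough illustration, placeholder URL):

```python
import requests

resp = requests.get("https://example.com/large-file", stream=True)

# A chunked response advertises "Transfer-Encoding: chunked" and usually
# has no Content-Length; a plain response is the other way around.
print(resp.headers.get("Transfer-Encoding"))
print(resp.headers.get("Content-Length"))
```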
Hmm, okay, maybe I was a bit unclear, but I didn't mean this in relation to chunked encoding. Chunked encoding is at the application level, but I wanted it at the packet level. By that I mean that as soon as the first packet of data has arrived at my computer, I want to process that chunk of data. If several packets arrive while I'm doing something else, I don't mind getting a larger chunk. This is the default behaviour of `socket.recv()` in the standard library.
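For comparison, this is roughly the `socket.recv()` behaviour being described: each call returns whatever bytes have already arrived, up to the requested maximum. A bare-bones plain-HTTP example against a placeholder host, with no error handling:

```python
import socket

# Minimal HTTP/1.0 GET over a raw socket; example.com is a placeholder.
sock = socket.create_connection(("example.com", 80))
sock.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")

while True:
    # recv() returns as soon as *some* data is available, up to 65536
    # bytes; it does not wait for the buffer to fill.
    data = sock.recv(65536)
    if not data:
        break
    print(len(data))
sock.close()
```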
@LinusU Unfortunately, that's not something requests can offer: the buffering in the layers beneath us means we never see data at the granularity of individual packets.
Hmm, that is too bad :( Thank you for all your help though, stellar support 👍
My pleasure, I'm sorry we can't be more helpful here!
I would love to have a function that iterates over the chunks as they are received on the socket. It would work like `socket.recv()` does in the Python standard library. This would be a very good way to consume the stream as efficiently as possible. It would also be awesome if we could update `__iter__` to use this function instead of `iter_content` with a fixed length of 128.

This has been discussed to some extent in #844, but that issue was closed because of inactivity; I opened this as a more focused issue. If we feel that this is a good approach, I could hopefully help implement it as well.
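For illustration only, a hypothetical `iter_chunks(max_size)` helper along the lines requested could be sketched on top of the urllib3 response object that requests exposes as `Response.raw`. The name and behaviour here are an assumption, not an existing requests API, and it does not deliver data strictly packet-by-packet, for the reasons discussed in the comments above.

```python
import requests

def iter_chunks(resp, max_size=64 * 1024):
    """Hypothetical helper: yield response data in bounded chunks.

    Sketch of the requested API, built on resp.raw.read(), which
    returns at most max_size bytes per call.
    """
    while True:
        data = resp.raw.read(max_size, decode_content=True)
        if not data:
            break
        yield data

# Usage sketch with a placeholder URL.
resp = requests.get("https://example.com/large-file", stream=True)
for chunk in iter_chunks(resp, max_size=16 * 1024):
    print(len(chunk))
```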