Conversation
Helps to avoid bugs that we are not used to diagnosing, such as a missing `self`, since flake8 normally catches them in the first place.
Test with `python3 trio_test.py`.
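For instance, here is a made-up snippet (not from this PR) showing the kind of mistake flake8 flags before the code ever runs; forgetting the `self` parameter surfaces as an undefined-name warning:

```python
class Connection:
    def close():                 # "self" parameter forgotten
        self._sock.close()       # flake8/pyflakes: F821 undefined name 'self'
```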
This is awesome, thank you! I made a few comments, but they ended up being more notes for future work rather than any kind of critique of what you did here, and this is definitely a step forward. So, up to you: do you want to keep working on this, maybe getting `read()` working, or do you want to merge this as is and then future work can be a new PR?
urllib3/sync_connection.py (outdated)

@@ -225,11 +225,12 @@ def consume_bytes(data):
        elif isinstance(event, h11.Response):
            # We have our response! Save it and get out of here.
            h11_response = event
            print(event)
I guess this should be removed at some point :-)
-    def forceful_close(self):
-        self._stream.forceful_close()
+    async def forceful_close(self):
+        await trio.aclose_forcefully(self._stream)
Ugh, forceful close raises such tricky questions for API design.

I started to write a long thing here about why `close` should maybe be synchronous after all. Trio's abstract stream interface makes it async because it's, y'know, abstract, and some streams need it to be async. But for the particular concrete streams that urllib3 uses, it's always synchronous. So maybe, I was thinking, it should be synchronous here too, and we'd make the trio backend jump through the necessary hoops to make that work.

Then I looked at Twisted and asyncio and realized that they also make closing a socket an async operation for weird API reasons, so fine, let's just make it async :-). It's an issue we might want to revisit in the future though as this develops. (I'm also inclined to say that if we're going to make closing async, we should rename it `aclose`, which is the convention that trio uses; I stole it from the interpreter's async generator API. But that's also something we can worry about later.)
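For reference, a minimal standalone sketch (not taken from the PR) of how trio spells closing on one of its concrete streams; graceful close is the async method `aclose()`, and the abrupt variant discussed above is the free function `trio.aclose_forcefully()`:

```python
import trio

async def main():
    stream = await trio.open_tcp_stream("example.org", 80)
    try:
        await stream.send_all(b"HEAD / HTTP/1.1\r\nHost: example.org\r\n\r\n")
    finally:
        # Closing is itself an async operation, spelled "aclose".
        await stream.aclose()
        # Abrupt alternative: await trio.aclose_forcefully(stream)

trio.run(main)
```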
Thanks for the explanation. So `forceful_close` should become `aclose_forcefully`?
Oh, I guess it could. That doesn't matter too much though, because the backend API is internal anyway. The more important question is whether we want to rename `Response.close`, `PoolManager.close`, etc.
Okay, I'll prepare another PR for this change.
Just opened #4.
urllib3/sync_connection.py (outdated)

            # XX kluge for now
            state_machine._cstate.process_error(state_machine.our_role)

-       return _response_from_h11(h11_response)
+       return _response_from_h11(h11_response, request_bytes_iterable)
Ah, here's the problem with `read` :-). (Or at least, the first problem :-).)

`request_bytes_iterable` is the body of our request, i.e. what we're uploading. The second argument to `_response_from_h11` is the body of the response, i.e. what we're downloading. The way the code is written, `SyncHTTP1Connection` actually acts as the response body (it implements the async iterator interface, see its `__anext__` method), so the second argument here should just be `self`.
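To make the shape concrete, here is a hypothetical skeleton of what is being described; the helper names (`_receive_body_chunk`, `_read_response_headers`) are invented for illustration and are not the real bleach-spike code:

```python
class SyncHTTP1Connection:
    # Illustrative skeleton only; the real class carries much more state.

    def __aiter__(self):
        return self

    async def __anext__(self):
        # Yield successive chunks of the *response* body as they arrive.
        chunk = await self._receive_body_chunk()           # invented helper
        if chunk is None:
            raise StopAsyncIteration
        return chunk

    async def send_request(self, request, request_bytes_iterable):
        # ... upload request_bytes_iterable, drive h11 until the response
        # headers arrive ...
        h11_response = await self._read_response_headers()  # invented helper
        # The connection itself is the async iterator over the response body,
        # so it, not the request body iterable, is what the response wraps.
        return _response_from_h11(h11_response, self)
```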
This comment was really useful.
urllib3/response.py (outdated)

@@ -306,7 +306,8 @@ def _error_catcher(self):

        with self._error_catcher():
            if amt is None:
-               data += b''.join(self.stream(decode_content))
+               async for chunk in self.stream(decode_content):
+                   data += chunk
This is accidentally quadratic. Better to instead do something like:
chunks = []
async for chunk in self.stream(decode_content):
    chunks.append(chunk)
data = b''.join(chunks)
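As a side note, a quick standalone comparison (not part of the PR) of the two approaches; on CPython the repeated `+=` re-copies everything accumulated so far at each step, while collecting and joining copies each byte roughly once:

```python
import timeit

chunks = [b"x" * 1024] * 2000      # ~2 MB of fake body data

def concat_in_loop():
    data = b""
    for chunk in chunks:
        data += chunk              # copies all previously accumulated bytes
    return data

def join_once():
    return b"".join(chunks)        # single pass over the chunks

print("+= in a loop:", timeit.timeit(concat_in_loop, number=5))
print("b''.join:    ", timeit.timeit(join_once, number=5))
```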
Of course, sorry. Fixed.
This is possible because it no longer depends on self.
Having trouble with the example shown here; I assume it doesn't work anymore :)
@kennethreitz Right, the test scripts are now in demo/ in the bleach-spike branch: https://github.com/njsmith/urllib3/tree/bleach-spike/demo. Do they work for you?
got it
this is working well! I'm impressed :)
I gave a try at the mechanical parts of njsmith/urllib3#1, i.e. passing the backend from the pools to the connection, adding async/await keywords, and fixing a minor issue in the trio backend.

Running `python3 test_trio.py` prints: …

So we see the headers of the h11 response in the trio backend, but I don't know how to expose that in the urllib3 `Response` object; ditto for the body, which is just the empty byte string but should not be. I did not even try running the test suite, but I can try to work on it if you want me to.
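Roughly, the "mechanical part" described here has the shape sketched below; the class and method names are illustrative and not the actual bleach-spike code. The pool stores a backend object, hands it to each connection it creates, and everything along the I/O path gains async/await:

```python
class ConnectionPool:
    def __init__(self, host, port, backend):
        self.host = host
        self.port = port
        self.backend = backend                 # e.g. the trio backend

    def _new_conn(self):
        # Connections receive the pool's backend instead of opening
        # sockets themselves.
        return HTTP1Connection(self.host, self.port, backend=self.backend)

    async def urlopen(self, request):
        conn = self._new_conn()
        await conn.connect()
        return await conn.send_request(request)


class HTTP1Connection:
    def __init__(self, host, port, backend):
        self._host = host
        self._port = port
        self._backend = backend
        self._stream = None

    async def connect(self):
        # All socket work goes through the backend, which is why the change
        # is mostly threading async/await keywords along this call chain.
        self._stream = await self._backend.connect(self._host, self._port)
```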