Fix #338 os.writev() instead of os.write(b.join()) and sendmsg() instead of send() #339
Conversation
Please don't submit a PR that fails the tests and has typos in it. At the very least run it against your test program with and without drain().
@gvanrossum I have fixed the typo and checked the speed: Before my patch: After my patch: P.S. Thinking about how to fix the tests. Please point me in the right direction.
I guess you have to mock os.writev too. Time to learn something!
BTW great news on the speed!
@gvanrossum
# @mock.patch('os.writev') stores internal reference to _buffer, which is MODIFIED
# during this call. So, makeing deep copy :(
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
n = os.writev(self._fileno, self._buffer.copy())
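The situation this comment describes can be reproduced in isolation: unittest.mock records call arguments by reference, so when the transport later mutates _buffer in place, the recorded call appears to change too. A minimal illustrative sketch (names are mine, not the PR's):

```python
from unittest import mock

m = mock.Mock(return_value=0)
buf = [b'abc', b'def']
m(1, buf)      # stand-in for os.writev(fd, buffers)
buf.clear()    # the transport then mutates the buffer in place
# The mock stored a reference to buf, not a snapshot, so its record changed too:
recorded = m.call_args[0][1]
```

After `buf.clear()`, `recorded` is the same (now empty) list, which is why a naive `assert_called_with` on the original contents fails.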
There must be a better way here. Adding a copy just for convenient testing isn't a good idea. Please also remove all exclamation marks, smiles, and fix typos.
@1st1
I assume you would never merge a PR with such a hack; I added these comments to draw your attention. The PR is not ready for merge, but I had to point out this bad spot.
If you know a better way, please point me to it. I will fix it immediately.
OK ;) Please try to find a saner approach. I'll myself try to take a look later.
I have removed the exclamation marks and other noise, but I still don't know how to work around that annoying bug in unittest.mock.
Finally, I have fixed the test as @1st1 suggested.
I don't follow. Does the real |
@gvanrossum, in short, the code is:
Note that |
self._buffer.append(data)  # Try again later.
self._loop.remove_writer(self._fileno)
self._maybe_resume_protocol()  # May append to buffer.
Seems like another bug: in the socket code, self._maybe_resume_protocol() is called before the if self._buffer check. So which one is right?
I would trust the socket code more. So I'd remove lines 544-545 above and see how it fares.
Fixed; the behaviour now matches the socket code.
@socketpair Here's my attempt to tame the mock: 1st1@5b8e2dd
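One way to tame such a mock (not necessarily what the linked commit does; this is a hedged sketch) is a side_effect that snapshots its arguments at call time, so later in-place mutation of the buffer list no longer corrupts the record:

```python
import os
from unittest import mock

calls = []

def fake_writev(fd, buffers):
    # Snapshot the buffers now, before the caller mutates them in place.
    calls.append((fd, [bytes(b) for b in buffers]))
    return sum(len(b) for b in buffers)

# create=True lets the patch work even on platforms without os.writev
with mock.patch('os.writev', side_effect=fake_writev, create=True):
    buf = [b'ab', b'cd']
    n = os.writev(9, buf)   # fd 9 is never touched; the call is mocked
    buf.clear()             # later mutation no longer affects the record
```

With this approach the production code needs no `.copy()` added purely for testability.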
@1st1 I think the most convenient way is to merge my PR in its current state, and merge your PR after that. Is that a suitable workflow?
@socketpair You can just incorporate my change into your PR. Please add a comment to |
Sorry, please don't merge the PR. I have checked performance: just using an expanding bytearray is much faster. C is not Python. :( This is reproduced both for sockets and for pipes. In theory, writev() / sendmsg() should be faster; in reality they are not. I cannot believe this is true. I will investigate why this happens. Please don't merge the PR until the problem is resolved. Program used to benchmark:
This is very strange. Using |
Honestly I'm not that surprised that bytearray performs better. Resizing a
byte array is one of the fastest things you can do in CPython (because
`realloc()` handles the copy for you).
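Guido's point about resizing can be seen directly: a bytearray grows in place (CPython over-allocates and lets `realloc()` move the data), so repeated `+=` stays the same object throughout. A quick illustrative check:

```python
ba = bytearray()
before = id(ba)
for _ in range(1000):
    ba += b'x' * 100   # bytearray.__iadd__ resizes the same object in place
after = id(ba)
```

The object identity is unchanged after 100 KB of appends, unlike `bytes`, where each `+=` would build a new object.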
Hi @socketpair! Any progress on this PR?
I'm online and will continue soon. I spent last week in another country; I have now returned home, and today is my first day back at work.
Sure, no rush. 3.5.2 will happen relatively soon; it would be nice to have this reviewed and merged before that.
Great news! Now I know why bytearray was faster. My code is now even faster than the bytearray variant, since it is true zero-copy! WOOHOO!
n -= len(chunk)
if n < 0:
    # only part of the chunk was written, so push the unwritten part back to _buffer
    # memoryview is required to eliminate memory copying while slicing.
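The zero-copy pushback that this snippet relies on can be illustrated on its own (variable names here are mine): slicing a `bytes` object copies the data, while slicing a `memoryview` only creates a view over the original buffer.

```python
chunk = b'abcdefgh'
written = 5                         # pretend writev() consumed only 5 bytes
tail = memoryview(chunk)[written:]  # a view into chunk; no bytes are copied
leftover = bytes(tail)              # materialize only to inspect the result
```

The view still references the original object, which is exactly what makes pushing the tail back onto the buffer deque cheap.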
This was the cause of the slow throughput.
fixed
Cool! Could you please fix the code style to 79 chars per line?
@@ -617,7 +617,7 @@ def _call_connection_lost(self, exc):
         self._server = None

     def get_write_buffer_size(self):
-        return len(self._buffer)
+        return sum(len(data) for data in self._buffer)
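The cached-length idea discussed below could look like this hypothetical sketch (class and attribute names are mine, not what the PR settled on): keep a running total alongside the deque so the size query is O(1) instead of summing every chunk.

```python
from collections import deque

class _PipeBuffer:
    """Deque of chunks with a cached total byte length."""

    def __init__(self):
        self._chunks = deque()
        self._size = 0

    def append(self, data):
        self._chunks.append(data)
        self._size += len(data)

    def popleft(self):
        data = self._chunks.popleft()
        self._size -= len(data)
        return data

    def get_write_buffer_size(self):
        return self._size  # O(1); no sum() over every chunk

buf = _PipeBuffer()
buf.append(b'hello')
buf.append(b'world!')
```

The cost, as the discussion notes, is that every code path that mutates the deque must keep `_size` consistent, including error paths.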
Maybe instead of running sum each time, we can have an internal _buffer_length attribute?
Maybe, but I copy-pasted it from another place. Yes, I can fix that (and benchmark it).
I suspect most of the time the data queue won't be large, so you really need to overwhelm the write buffer in your benchmarks to see the benefits of caching.
Sorry, I will not fix that. The code becomes too complex and error-prone, especially when an exception occurs in certain specific places. I tried.
So, I have fixed it anyway. I also added many asserts.
I've left some feedback.
Also, Python 3.5 does not support |
@1st1 @methane @gvanrossum ping
 if self._conn_lost:
     return
 try:
-    n = self._sock.send(self._buffer)
+    if compat.HAS_SENDMSG:
HAS or HAVE ? :)
HAS is better.
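Whatever the name, such a compat flag usually boils down to a runtime feature check, since sendmsg() is POSIX-only. A sketch of the idea (the actual compat module in this PR may define it differently):

```python
import socket

# sendmsg() is POSIX-only; detect it at runtime rather than by platform name.
HAS_SENDMSG = hasattr(socket.socket, 'sendmsg')
```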
Sorry, Mark, I'm busy with other things. This is quite an intrusive change and it has to be merged with extra care. And we have plenty of time before 3.6.
@1st1 I can split the PR into small parts; after each part, everything will stay consistent. Should I do that?
How do you want to split it? Actually, I was just looking at this PR again, and I think it's ready. I need to spend another hour playing with the code, but I'd say it's almost there.
I think |
@arthurdarcet can you benchmark asyncio streams with this patch applied, using the tool included in this PR, on Mac OS X? This patch should speed up pipes significantly on that platform too.
@socketpair sure, do you have some code I could run?
@socketpair I did it.
I have an idea: add parameters to the write functions |
I feel it's a premature optimization.
I've benchmarked on Ubuntu 16.04, with a slightly modified master branch:
socketpair:writev branch:
cons:
cons:
@gvanrossum @1st1 @methane
Another con: checking for a "bytes-like" object without a copy is very hard in pure Python.

mv = memoryview(data)
if mv.itemsize != 1 or not mv.contiguous:
    raise TypeError
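Wrapped as a helper (the function name is mine), that check accepts bytes, bytearray, and C-contiguous single-byte views while rejecting strings and wider item types:

```python
def is_bytes_like(data):
    # Accept anything exposing a C-contiguous buffer of single bytes.
    try:
        mv = memoryview(data)
    except TypeError:
        return False
    return mv.itemsize == 1 and mv.contiguous
```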
My preference is to use bytearray() in pure Python, but make it replaceable. Someone could implement a smarter buffer, like the one below, in C.

def __init__(self):
    self.queue = deque()

def __iadd__(self, data):
    if type(data) is bytes and len(data) > 200:
        self.queue.append(data)
        return self
    if not self.queue or type(self.queue[-1]) is bytes:
        self.queue.append(bytearray())
    self.queue[-1] += data
    return self
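A self-contained, runnable variant of that idea (class name and the 200-byte threshold are arbitrary; note that __iadd__ must return self for += to work):

```python
from collections import deque

class SmartBuffer:
    """Coalesce small writes into bytearrays; keep large bytes chunks as-is."""

    def __init__(self):
        self.queue = deque()

    def __iadd__(self, data):
        if type(data) is bytes and len(data) > 200:
            self.queue.append(data)         # large chunk: keep by reference
            return self
        if not self.queue or type(self.queue[-1]) is bytes:
            self.queue.append(bytearray())  # start a new coalescing tail
        self.queue[-1] += data
        return self

buf = SmartBuffer()
buf += b'a'
buf += b'b'           # coalesced into the same bytearray
buf += b'x' * 300     # large chunk: stored separately, zero-copy
```

Small writes merge into one bytearray chunk; large ones stay untouched, which is the split writev()/sendmsg() benefits from.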
Copy-pasting here: small data:
large data:
@gvanrossum @1st1 What do you think about all this?
I understand that code simplicity is important. In real use cases (i.e. a small amount of data and a small number of chunks) both implementations give the same performance.
For pipes, use os.writev() instead of os.write(b''.join()). For sockets, use sendmsg() instead of send(b''.join()). Also change the plain list of buffers to a deque() of buffers. This greatly improves performance with large buffers or with many stream.write() calls.
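The two syscall substitutions at the heart of the PR can be demonstrated directly (POSIX-only; neither os.writev() nor socket.sendmsg() exists on Windows):

```python
import os
import socket

# Pipes: os.writev() writes a list of buffers in a single syscall,
# with no b''.join() copy beforehand.
r, w = os.pipe()
n = os.writev(w, [b'foo', b'bar'])
data = os.read(r, 16)
os.close(r)
os.close(w)

# Sockets: sendmsg() does the same for a connected socket.
a, b = socket.socketpair()
m = a.sendmsg([b'hello ', b'world'])
got = b.recv(64)
a.close()
b.close()
```

Both calls return the number of bytes written, which may be less than the total; the transport must handle a partial write by pushing the unwritten tail back onto the deque.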
Mark, I'm closing this PR. I think we'd want to have vectorized writes in 3.7, so when you have a new patch please open a new PR.