Description
Problem
Visdom appeals to me because of its iterative model of sending updates to a server. However, I'm always frustrated to see that the visdom GET and POST requests take 10 times (!) longer than the analysis script that produces the payload. This is because visdom uses `requests` under the hood, which is synchronous (blocking) by design.
Describe the solution you'd like
Switching to asyncio coroutines, in particular using `asyncio.create_task`. The `await` mechanism needs to be discussed (when to await? Either return the coroutines to users and let them handle the awaits themselves, or expose a public function in the visdom client that awaits all pending requests.) One could also add an `async` flag to `Visdom.__init__`, `False` by default. A rough sketch of this idea follows below.
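To make the idea concrete, here is a minimal fire-and-forget sketch built on `aiohttp`, assuming a visdom-compatible server is listening on the default port; the class name, method names, and payload format are hypothetical placeholders, not the existing visdom API:

```python
import asyncio

import aiohttp


class AsyncVisdomClient:
    """Hypothetical sketch: schedule POSTs as asyncio tasks instead of blocking on them."""

    def __init__(self, server="http://localhost", port=8097):
        self.base_url = f"{server}:{port}"
        self._session = None   # created lazily, inside the running event loop
        self._tasks = set()    # strong references to in-flight tasks

    async def _post(self, endpoint, payload):
        if self._session is None:
            self._session = aiohttp.ClientSession()
        async with self._session.post(f"{self.base_url}/{endpoint}", json=payload) as resp:
            return await resp.text()

    def send(self, payload, endpoint="events"):
        # Fire-and-forget: schedule the request and return immediately.
        # Must be called from code running inside an event loop.
        task = asyncio.create_task(self._post(endpoint, payload))
        self._tasks.add(task)
        task.add_done_callback(self._tasks.discard)
        return task

    async def close(self):
        # The "public await" option: wait for all outstanding requests, then clean up.
        if self._tasks:
            await asyncio.gather(*self._tasks)
        if self._session is not None:
            await self._session.close()


async def main():
    viz = AsyncVisdomClient()
    for step in range(10):
        viz.send({"win": "loss", "data": [step]})  # placeholder payload, returns immediately
    await viz.close()                              # explicit await point


asyncio.run(main())
```

Here `close()` covers the "public await" option; alternatively `send()` could simply return the task and leave awaiting entirely to the user.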
Describe alternatives you've considered
uvloop is a fast, drop-in replacement for the built-in asyncio event loop. It is implemented in Cython and uses libuv under the hood.
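If the client becomes asyncio-based, enabling uvloop is a one-line change (standard uvloop usage, nothing visdom-specific):

```python
import asyncio

import uvloop

uvloop.install()  # make uvloop the default event loop policy


async def main():
    ...  # the asyncio-based visdom calls would run here unchanged


asyncio.run(main())
```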
Additional context
Take a look at the `visdom.scatter` profile bottlenecks:
[time spent]  [line]
40.9 %        exists = self.win_exists(win, env)
54.0 %        return self._send(data_to_send, endpoint=endpoint)
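For context, line-level numbers like these can be collected with `line_profiler`; this is only a sketch of one way to do it, assuming a visdom server is running on the default port (the actual measurement setup may have differed):

```python
import numpy as np
from line_profiler import LineProfiler
from visdom import Visdom  # assumes a visdom server on the default port

viz = Visdom()

lp = LineProfiler()
profiled_scatter = lp(viz.scatter)  # wrap the call to record per-line timings

for _ in range(100):
    profiled_scatter(X=np.random.rand(50, 2), win="bench")

lp.print_stats()  # prints "% Time" per line, similar to the figures above
```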
Let me know if you're interested in switching to asyncio.
Best,
Danylo