tsserver mangles output when more than one request is pending #2758
Comments
@steveluc since I can't attach files I will send you the msg.json by mail.
So far we can only reproduce this under Mac; it doesn't reproduce under Windows so far. One additional thing to note is that under Mac we execute in atom.exe, which is based on io.js instead of node.js.
You could potentially post a gist for the log.
How are you reading the input? Are you checking whether content-length is less than the size of the buffered message and continuing to read if so? Something like: https://github.com/Microsoft/TypeScript-Sublime-Plugin/blob/master/libs/nodeclient.py#L178
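For reference, a minimal sketch of such a length-checked read loop in TypeScript; the names and the exact header parsing here are illustrative assumptions, not code from the Sublime plugin linked above:

```ts
// Minimal sketch: buffer stdin chunks and only hand a message to the callback
// once Content-Length worth of body has arrived; otherwise keep reading.
let buffered = "";

function onData(chunk: string, onMessage: (body: string) => void): void {
    buffered += chunk;
    while (true) {
        const headerEnd = buffered.indexOf("\r\n\r\n");
        if (headerEnd < 0) return;                        // header not complete yet
        const header = buffered.substring(0, headerEnd);
        const match = /Content-Length:\s*(\d+)/i.exec(header);
        if (!match) {                                     // skip a malformed header
            buffered = buffered.substring(headerEnd + 4);
            continue;
        }
        const length = parseInt(match[1], 10);
        const bodyStart = headerEnd + 4;
        // Note: a real reader should count bytes rather than UTF-16 code units.
        if (buffered.length < bodyStart + length) return; // body not complete yet
        onMessage(buffered.substring(bodyStart, bodyStart + length));
        buffered = buffered.substring(bodyStart + length);
    }
}
```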
Based on what you've told us so far, that looks like a case in which two 'threads' are both trying to write to stdout. For example, the syntax and semantics diagnostics happen in a timer 'thread' that writes events to stdout, whereas the readline handler processes requests and writes responses to stdout. Is it possible that io.js will allow a timer interrupt to take place in the middle of a readline handler? This doesn't seem like correct node behavior, but it does match the scenario you are describing.
@DanielRosenwasser thanks for the gist tip. Wasn't aware of the feature. The log can be found here: https://gist.github.com/dbaeumer/53d84d04d787aa23bd50
@mhegazy yes, we are handling content-length. We already receive a mangled buffer.
@steveluc: I don't think this is a threading problem after scanning through the code (I first thought it was an async/callback problem, but TS isn't async :-). I think the problem is in the TS System implementation for node. The TSServer calls System.write, whose node implementation (write(s: string): void { ... }) issues a single writeSync call. But writeSync returns the number of bytes written, so we need to have a loop around this. I will see whether implementing this correctly fixes it (testing with our Mac setup on Monday) and provide a pull request if it does.
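For illustration, a minimal sketch of the looped write Dirk describes, assuming the output goes to file descriptor 1 (stdout) via fs.writeSync; this is not the actual sys.ts implementation:

```ts
import * as fs from "fs";

// Sketch: keep calling writeSync until the whole buffer has been consumed.
// writeSync returns the number of bytes actually written, which can be less
// than the requested length when the stdout pipe buffer is full.
function writeAll(s: string): void {
    const buffer = Buffer.from(s, "utf8");
    let written = 0;
    while (written < buffer.length) {
        written += fs.writeSync(1, buffer, written, buffer.length - written);
    }
}
```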
Great, Dirk! Looking forward to the results.
@dbaeumer can we make sure that this is the root cause? In other words, verify that the number of bytes written indeed doesn't match the size of the string given when something goes wrong.
@dbaeumer thanks for digging into this! Would you be interested in sending a PR for it?
OK. Here is what I did so far: I changed the implementation to have a loop around writeSync which checks the written bytes. This now sends more data, but revealed another bug: sometimes writeSync fails with -1 EAGAIN (Resource temporarily unavailable). However, this is only seen under atom, not under node. In general writeSync is IMO problematic since it requires that someone on the receiving side is reading, which might not always be the case. Since we send a lot of data over stdout, I think the implementation should not use write from System. Instead the tsserver should queue outgoing messages and write them to stdout asynchronously.
If you think that is the right thing to do, I am happy to work on a PR (after //build) to get this into the tsserver.
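A rough sketch of what such a queued, asynchronous writer could look like; the names and the drain handling are illustrative assumptions, not code from tsserver:

```ts
// Sketch: queue outgoing messages and flush them asynchronously through
// process.stdout, respecting back-pressure via the 'drain' event.
const pending: string[] = [];
let flushing = false;

function send(message: string): void {
    pending.push(message);
    if (!flushing) {
        flush();
    }
}

function flush(): void {
    flushing = true;
    while (pending.length > 0) {
        const next = pending.shift()!;
        // write() returns false once the internal buffer is full; wait for
        // 'drain' before writing more instead of blocking or losing data.
        if (!process.stdout.write(next)) {
            process.stdout.once("drain", flush);
            return;
        }
    }
    flushing = false;
}
```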
Worth pointing out: http://www.daveeddy.com/2013/03/26/synchronous-file-io-in-nodejs/
I agree we should move to asynchronous I/O, since this is the preferred node model. I think, though, that we should simultaneously consider refactoring the protocol if we find we are regularly sending messages larger than 64K. writeSync on stdout only blocks if the 64K buffer fills (the buffer can also be made bigger). It makes sense to use async I/O for rare cases where we might fill that buffer, but performance will suffer if we are regularly stuffing that buffer so full that the client reader thread isn't draining it, whether or not the I/O is async. So I would like to understand what response message(s) are sending large amounts of data and whether we can refactor the protocol to avoid this.
Completions can send a lot of data when triggered in cases where no prefix exists and the global scope is available (for example after var foo: ). We can still implement the queue and send small messages using writeSync. I wouldn't change the protocol, since it might have to change again when implementations change.
thanks @dbaeumer
This problem happens again with Node v4.2.1 on OSX, which starts to throw the EAGAIN exception and truncates the output if the string we send to stdout is large.
This has not been an issue. Closing.
We are sending quite a few requests to the tsserver. We make sure that what we put onto stdin is always one complete request; however, we don't queue them on the sending side.
The output sent from the tsserver mangles different responses. See the attached msg.json. Interesting lines are the end of line 1 (e.g. {"nContent-Length: 214) and line 125 (e.g. {"Content-Length: 70021).
From the output it looks like the interference comes mostly from the syntax and semantic events sent from the server.
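For context, a sketch of how a single framed message could be produced; the exact header and newline layout is an approximation of the framing discussed above, not copied from tsserver:

```ts
// Sketch: one framed message as it should appear on the wire. When two writers
// interleave, a second "Content-Length:" header ends up spliced into the middle
// of another message's JSON body, which is what msg.json shows.
function frame(response: object): string {
    const json = JSON.stringify(response);
    return `Content-Length: ${Buffer.byteLength(json, "utf8")}\r\n\r\n${json}\r\n`;
}
```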