This repository was archived by the owner on Oct 16, 2020. It is now read-only.

performance - very slow compared to tsserver #331

prabirshrestha (Author)

Even an empty project takes forever to show completion results. console. and window. took almost 5 seconds, but 'hello'. was immediate. The first time, I could expect console. to be slow, but even the second time I type console. it seems slow. I would also have expected the completion to show as I type c in console, but the results are so slow that it doesn't show.

(animated gif: slow completion response)


felixfbecker (Contributor) commented on Aug 9, 2017

Would be interesting to see a trace for this

damieng commented on Aug 9, 2017

Also seeing this slowness using my Atom package. Does your vim plugin support streaming? I'm wondering if the non-streaming version is that much slower, or just stacked behind all the partialResults.

Seeing about 7 seconds to resolve super. with only one level of classes. Subsequent updates are initially a bit faster, but then if I backspace it takes some time to get alternative completions.

(Windows, using stdio communication)

prabirshrestha (Author) commented on Aug 9, 2017

It follows the official protocol strictly, so no streaming or partial results.

I'm also using stdio on Windows.

prabirshrestha (Author) commented on Aug 9, 2017

Uploaded the log, but I don't think it will be useful since it doesn't have timestamps: https://gist.github.com/prabirshrestha/fe19248df64de14260d977c42bcdf16c. It might also be good to include the OS, version, and the protocol used in the log. Let me know if you still want me to share the vim-lsp log, which is more verbose.

vim-lsp doesn't support streaming, and this could be the root cause of why the completion is being ignored and not shown. Tailing the log, I do see partial results come in fast.

Given that streaming still hasn't been merged into the official protocol, I would want to avoid implementing it in vim-lsp, as I would like to get other features in first. I also want to avoid writing JSON Patch in VimL for now.

> Support for this method is indicated by the streaming capability in both ClientCapabilities and ServerCapabilities.

https://github.com/sourcegraph/language-server-protocol/blob/streaming/protocol.md#partialResult

Based on the above spec, when vim-lsp passes capabilities: {} the server shouldn't return partialResults. So this would still be a bug in the lang server.

damieng commented on Aug 9, 2017

The sending of partialResults when not supported by the client is tracked in #329.

felixfbecker (Contributor) commented on Aug 10, 2017

Before we jump to conclusions, can you try to comment out this line and see if it changes anything?
https://sourcegraph.com/github.com/sourcegraph/javascript-typescript-langserver@master/-/blob/src/connection.ts#L261-269

damieng commented on Aug 10, 2017

I actually have a patch on mine that puts an if (streaming) block around there. It stops the partialResults from being sent, but it still takes a long time to process.

In my small one-file JavaScript project I have a const path = require('path'). A few lines later, in a method, I typed path. and saw the textDocument/completion request go out.

It took almost 14 seconds to receive the 31-item array back for the path import. (This was on my Mac.)

damieng commented on Aug 10, 2017

Just did some more testing. I think what's happening is that firing the autocomplete for every keystroke is causing a backlog, as the scope for that is wide. If I turn off autocomplete-on-keypress and just trigger the autocomplete request manually, performance is much better.

felixfbecker (Contributor) commented on Aug 10, 2017

Does neovim look at the isIncomplete value? If it's false, it shouldn't send more requests.

damieng commented on Aug 10, 2017

Sorry, I'm using my Atom bridge, not neovim. We're effectively sending a fresh completion request on every keystroke (so it provides autocomplete for variable names etc.).

tomv564 (Contributor) commented on Aug 10, 2017

I turned off completion on all keystrokes and just have it on what the server advertises ('.'). Is there debouncing logic on your keystroke trigger?
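For clients that do trigger on every keystroke, debouncing means only the last keystroke in a quiet window produces a request. A small pure model of that behaviour (an illustrative sketch, not code from any of the plugins discussed):

```typescript
// Pure model of debouncing keystroke-triggered completion requests.
// Given keystroke timestamps in ms and a quiet window, return the times
// at which a textDocument/completion request would actually fire.
function debounceFireTimes(keystrokes: number[], windowMs: number): number[] {
  const fires: number[] = [];
  for (let i = 0; i < keystrokes.length; i++) {
    const next = keystrokes[i + 1];
    // A request fires only if no further keystroke arrives within the window.
    if (next === undefined || next - keystrokes[i] > windowMs) {
      fires.push(keystrokes[i] + windowMs);
    }
  }
  return fires;
}
```

With a 150 ms window, three rapid keystrokes followed by a pause produce a single request instead of three, which is exactly the backlog reduction discussed above.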

prabirshrestha (Author) commented on Aug 10, 2017

I can try @felixfbecker's suggestion tonight of commenting out the partial results and verify.

I'm using asyncomplete.vim with vim-lsp to provide autocompletion for the lang server. It doesn't send completion requests on every keystroke. It respects the isIncomplete flag and will only make further requests if it is true.
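The isIncomplete handling mentioned above can be sketched roughly like this (illustrative TypeScript for clarity; asyncomplete.vim's actual implementation is in VimL and differs in detail):

```typescript
// When the last CompletionList was complete (isIncomplete === false), the
// client can filter the cached items locally as the user keeps typing,
// instead of re-querying the server on each keystroke.

interface CompletionList {
  isIncomplete: boolean;
  items: { label: string }[];
}

class CompletionCache {
  private last: CompletionList | undefined;

  // True on the first request, or when the server said the previous
  // list was incomplete and must be refreshed.
  needsServerRequest(): boolean {
    return this.last === undefined || this.last.isIncomplete;
  }

  store(list: CompletionList): void {
    this.last = list;
  }

  // Narrow the cached items against the typed prefix, no server round-trip.
  filter(prefix: string): { label: string }[] {
    return this.last
      ? this.last.items.filter(i => i.label.startsWith(prefix))
      : [];
  }
}
```

This is the core of why a cache-friendly client sends far fewer completion requests than one that re-queries on every keypress.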

asyncomplete.vim and nvim-completion-manager use a similar algorithm to VS Code's for autocomplete, which I have written about at roxma/nvim-completion-manager#30 (comment). It is really good at caching. Maybe the Atom plugin could implement a similar algorithm? There is also an active LSP issue on how triggerCharacters should work: microsoft/language-server-protocol#268

I have also written my own implementation of a TypeScript language server that acts as a proxy for tsserver.cmd, which I will try to open source tonight. Currently it only implements textDocument/completion. I didn't have any perf issues there, even for a large codebase.

damieng commented on Aug 10, 2017

Drastically reducing how often we call completion has solved the problems on Atom. Now we just trigger it on . and when manually requested.

[12 hidden comments]

felixfbecker (Contributor) commented on Sep 8, 2017

Please confirm whether this is still a problem with v2.3.1.

damieng commented on Nov 5, 2017

The new completion items are now basically just a kind, a label, and a sort text. The experience is very compromised and the list is nowhere near as useful. Was there no middle ground?

felixfbecker (Contributor) commented on Nov 6, 2017

@damieng does your client call completionItem/resolve?

damieng commented on Nov 6, 2017

Not yet, but it's my understanding that clients call that as the user highlights an option (e.g. cursor up/down) or actually accepts it.

For a drop-down list, not seeing the arguments for each entry really cuts down the usefulness.

felixfbecker (Contributor) commented on Nov 6, 2017

Sorry, not sure what you mean or whether that is intended or a bug. Could you elaborate a bit on what is expected vs. actual? Maybe a screenshot?

damieng commented on Nov 6, 2017

This is what we used to be able to achieve:

(screenshot: completion list showing full signatures for every item)

Since the change in 2.3.0, all we can do now is this:

(screenshot: completion list showing only labels and kinds)

Yes, I know you can get extra detail from the resolve call, but that's only supposed to happen when you confirm a selection or move through the list with the cursor keys, unless I'm misreading the spec. It makes sense for documentation and params, but definitely not for details that affect the initial rendering of the list, like parameter and return types.

felixfbecker (Contributor) commented on Nov 6, 2017

@damieng seems like VS Code and Atom behave differently here; VS Code will only show the "details" for the selected item:

(screenshot: VS Code IntelliSense showing details only for the selected item)

https://code.visualstudio.com/docs/editor/intellisense

So I assume VS Code will call completionItem/resolve for the selected item.

In your screenshot, B is selected, so shouldn't that show the details by calling completionItem/resolve? Is that a bug in the client? Could you verify that completionItem/resolve is being sent?

Given that the details are expensive to calculate, I would propose that if Atom users want to see the details for all completion items (not only the selected one), Atom adds a setting to call completionItem/resolve not just for the selected item but for all visible items. That way we only have to do this computation for the items that are actually visible (e.g. 10) instead of for the whole list (e.g. 1000).
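Selecting which items to resolve could be as simple as the following sketch (a hypothetical client-side helper, not a real Atom or LSP API; the client would then issue one completionItem/resolve request per returned item):

```typescript
// After rendering the completion popup with just label/kind, pick the
// visible items that still lack details. Resolving e.g. 10 visible items
// is far cheaper than resolving the whole 1000-item list up front.

interface CompletionItem {
  label: string;
  detail?: string;
}

function itemsNeedingResolve(
  items: CompletionItem[],
  visibleCount: number
): CompletionItem[] {
  return items
    .slice(0, visibleCount)               // only what the popup shows
    .filter(i => i.detail === undefined); // skip already-resolved items
}
```

As the user scrolls, the client would call this again for the newly visible window and resolve only the items it hasn't resolved yet.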

damieng commented on Nov 6, 2017

I personally think the experience there in VS Code is terrible. If you are looking for a method that takes a particular object, then you have to try every single one. It's certainly not on par with other IDEs such as full Visual Studio or IntelliJ.

Right now we don't have the hooks in our autocomplete package necessary to invoke resolve either on selection or for all currently visible items :( Working on getting those added.

felixfbecker (Contributor) commented on Nov 6, 2017

The problem is that sending back all the data would take us back to the performance problem, and I think we definitely do not have to resolve the details for the whole list; it should only be for the visible or selected items (whatever the client prefers). Unfortunately there is no ClientCapability for completionItem/resolve, so we can't have different behaviour either, and implementing one might take just as long as implementing completionItem/resolve in Atom, which has been part of the spec for quite a while after all. If you care about this a lot, you could roll back to the version before we added support for it in the server, but I think the best path forward is getting support into Atom, or you'll eventually run into these problems with other language servers too.

I'm going to close this issue as I believe the perf issue was solved, but if you run into any problems with completionItem/resolve, feel free to open a new issue.

maxbrunsfeld commented on Nov 6, 2017

> Unfortunately there is no ClientCapability for completionItem/resolve so we can't have different behaviour either, and implementing that might take just as long anyway as implementing completionItem/resolve in Atom.

@felixfbecker We're definitely going to implement completionItem/resolve in Atom, but that doesn't totally solve our problem: we'd like to show the type signatures for all of the initially-visible items right away, not require the user to scroll through the list in order to see them.

It's true that the server doesn't know exactly how many items will be initially visible when it gets the textDocument/completion request, but it can make a pretty good guess. It's probably not going to be more than 10.

What would you think about adding the ability to eagerly include type signatures for the first N completion items, where N is configurable somehow, maybe via a command-line flag? I get that it adds some complexity, but I think it's really a huge improvement from a user's perspective.

felixfbecker (Contributor) commented on Nov 6, 2017

@maxbrunsfeld why not simply call completionItem/resolve for the N items? That would work for every language server, not just this one. I don't think there would be a performance overhead, as the bottleneck is still resolving the detail, and it's actually a good thing to show the user the list first and not block on the details.

damieng commented on Nov 6, 2017

I don't think any of the other language servers we've seen are quite as thrifty with the initial detail provided.

tomv564 (Contributor) commented on Nov 7, 2017

Sorry to comment on a closed issue, but I wanted to add some details to help you make a decision:

  • All of Microsoft's completion implementations for TypeScript (including full Visual Studio and Sublime Text) only show label and kind, and request details for a tooltip.

    • The restriction is within Microsoft's own TypeScript APIs, and they are likely not immediately motivated to improve it for third-party UIs.

  • With TypeScript 2.6, all exports of the project will show up in non-member completions, so the cost of converting all of them to detailed completions will go up even more (microsoft/TypeScript#19069: Add exported members of all project files in the global completion list).
