Add top level keywords completion #4700
Conversation
acc.add(keyword(ctx, "struct", "struct $0 {}")); | ||
acc.add(keyword(ctx, "trait", "trait $0 {}")); | ||
acc.add(keyword(ctx, "fn", "fn $0() {}")); | ||
acc.add(keyword(ctx, "unsafe", "unsafe ")); |
I haven't checked, but are we missing `mod`, `const`, `static` and `union`? `include!`?
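For illustration, the additions could mirror the existing calls quoted above; the snippet bodies below are guesses, not necessarily what the PR landed with:

```rust
// Sketch only: same style as the quoted acc.add(...) calls above.
// The exact snippet texts ($0 is the cursor placeholder) are assumptions.
acc.add(keyword(ctx, "mod", "mod $0 {}"));
acc.add(keyword(ctx, "const", "const $0"));
acc.add(keyword(ctx, "static", "static $0"));
acc.add(keyword(ctx, "union", "union $0 {}"));
```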
Yes, you are right, I will add them and update the PR. Thanks!
Might be worth looking at https://github.com/rust-analyzer/rust-analyzer/blob/master/crates/ra_parser/src/grammar/items.rs#L79 and https://github.com/rust-analyzer/rust-analyzer/blob/master/crates/ra_parser/src/grammar/items.rs#L230 for an exhaustive list.
CC #4737
```rust
    non_trivia_sibling(NodeOrToken::Node(prev_sibling_node), Direction::Prev)?
};
// traversing the tree down to get the last token or node, i.e. the closest one
loop {
```
I think you can avoid this loop if you use `SyntaxToken::prev_token` -- unlike `prev_sibling_or_token`, it works across parents.
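A minimal sketch of that suggestion, assuming the usual `ra_syntax`/`rowan` token API (`prev_token`, `SyntaxKind::is_trivia`); the helper name is made up:

```rust
use ra_syntax::SyntaxToken;

// Hypothetical helper: step to the previous non-trivia token directly.
// prev_token crosses parent boundaries, so no manual descent loop is needed.
fn previous_non_trivia_token(token: SyntaxToken) -> Option<SyntaxToken> {
    let mut token = token.prev_token()?;
    while token.kind().is_trivia() {
        token = token.prev_token()?;
    }
    Some(token)
}
```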
```rust
}
acc.add(keyword(ctx, "impl", "impl $0 {}"));
acc.add(keyword(ctx, "enum", "enum $0 {}"));
acc.add(keyword(ctx, "struct", "struct $0 {}"));
```
acc.add(keyword(ctx, "struct", "struct $0 {}")); | |
acc.add(keyword(ctx, "struct", "struct $0")); |
This might be a unit struct. All the rest seem fine, though.
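E.g., completing to `struct $0` still lets the user write a unit struct:

```rust
struct Marker; // unit struct: a trailing `{}` snippet would get in the way here
```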
For reference, here's how Kotlin handles keyword completion: https://github.com/JetBrains/kotlin/blob/f128e5222ae69635352a3cf31af772f35b04e333/idea/idea-completion/src/org/jetbrains/kotlin/idea/completion/KeywordCompletion.kt
Do you think it makes sense to try to replicate that handling mechanism in rust-analyzer (to the extent possible, of course)?
Not really. I do think we need a more principled approach for keyword completion. Basically, we need to ensure that all possible keywords are completed in the correct contexts, which I think can be achieved in one of two ways: …
I am not sure which of the two approaches is the best, and I don't think there's an easy answer here. Seems like we just need to move slowly and try different things out, building a test suite along the way.
Here's an interesting diff for Kotlin which switched from hard-coded cases to a parser-based approach: JetBrains/kotlin@549171d
As far as I understand, there is one difficulty with Kotlin's approach in our case. We don't have all errors in the syntax tree (it seems Kotlin has them -- https://github.com/JetBrains/kotlin/blob/ba6da7c40a6cc502508faf6e04fa105b96bc7777/compiler/testData/psi/annotation/ShortAnnotations.txt), and sometimes we hit them in situations where we can still complete the code to be correct. Here is one of the examples: `unsafe` -> … Here we can definitely use …
Actually, what I want to do is maybe try the kind of approach that Kotlin used before they changed to parsing errors, that is, to match each keyword with certain filters that should be applied to check if we can use it. These filters would be defined as handlers. To check if that works, I will look at the grammar today or tomorrow and see if this abstraction plays well with Rust's syntax.
More generally, I think there's a library solution here as well: describing positions in source code as patterns. At least, it's obvious that we can have

```rust
trait TreePattern {
    fn matches_node(&self, node: SyntaxNode) -> bool;
    fn matches_position(&self, root: SyntaxNode, position: TextSize) -> bool;
}
```

We can also add all sorts of algebraic operations on top, like … However, I haven't actually used such an API a lot; somehow just writing the cases manually was always easier.
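A sketch of what one such algebraic operation could look like on top of the trait above (the combinator is hypothetical, not rust-analyzer API):

```rust
// Hypothetical "and" combinator: matches only when both sub-patterns match.
struct And<A, B>(A, B);

impl<A: TreePattern, B: TreePattern> TreePattern for And<A, B> {
    fn matches_node(&self, node: SyntaxNode) -> bool {
        self.0.matches_node(node.clone()) && self.1.matches_node(node)
    }
    fn matches_position(&self, root: SyntaxNode, position: TextSize) -> bool {
        self.0.matches_position(root.clone(), position)
            && self.1.matches_position(root, position)
    }
}
```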
Aha, makes sense. Just to clarify: so the patterns here would essentially be like filters, and we can combine them, etc. Then we would have to define patterns like …
Yup.
Hi @matklad! I started working with the traits approach, but then I realized that we would anyway want to compute each of the filters only once, with only one position, and if we combined the traits we would have to either do the computation multiple times or make them refer to some cache underneath (which can be cumbersome to implement). So I decided to go with the approach that was there before, i.e. working with results of computations made at the start of the completions routine, but structured a little bit differently (syntactically more in accordance with what the Kotlin folks did before). I still think it makes sense to separate the patterns logic into another file, so it would be less cluttered, and maybe add an additional namespace in the context (i.e. not have all variables inside one big structure), but I am not really sure, because otherwise the …

Can you please check if this approach makes sense (also the way I am combining the results to determine if the keyword can be applied)? Please note that I don't filter them by prefix, because it seems that the client does it on its own. Also, in computing the respective patterns we would want to use the fake token, because it gives the correct view of how the AST will look (just clarifying, I didn't do this the first time). If that is fine, I will write the tests and cover the outstanding cases.
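A rough sketch of the shape this could take (all names are illustrative, not the PR's actual code): each pattern is computed once when the completion context is built, and keyword handlers then just combine the booleans:

```rust
// Illustrative only: precomputed pattern results for the current position.
#[derive(Default)]
struct KeywordPatterns {
    after_unsafe: bool,
    has_trait_parent: bool,
}

impl KeywordPatterns {
    // Combine the precomputed results to decide keyword applicability
    // (a simplified subset of the real rules).
    fn allows(&self, keyword: &str) -> bool {
        match keyword {
            // `unsafe fn` is valid, and `fn` is also valid in trait bodies.
            "fn" => true,
            // `unsafe trait` / `unsafe impl` are valid, but not inside a trait body.
            "trait" | "impl" => !self.has_trait_parent,
            // `unsafe enum` etc. are not valid, and neither are these in a trait body.
            "enum" | "struct" | "union" => !self.after_unsafe && !self.has_trait_parent,
            _ => false,
        }
    }
}
```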
```rust
    NodeOrToken::Node(node) => node,
    NodeOrToken::Token(token) => token.parent(),
};
node.ancestors().find(|it| it.kind() == IMPL_DEF).is_some()
```
Seems like in most cases you can directly call `.ancestors()` on the node?
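A sketch of that simplification, assuming `ra_syntax` names (`IMPL_DEF` as in the quoted code); the helper itself is hypothetical:

```rust
use ra_syntax::{SyntaxKind::IMPL_DEF, SyntaxNode};

// Hypothetical helper: with a SyntaxNode already in hand, the NodeOrToken
// match is unnecessary, and `any` reads better than `find(..).is_some()`.
fn has_impl_def_ancestor(node: &SyntaxNode) -> bool {
    node.ancestors().any(|it| it.kind() == IMPL_DEF)
}
```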
Yup, I think this setup makes sense! I am worrying a bit that …

```rust
// completion_context.rs
struct CompletionContext {
    kw_context: KwContext,
}

// complete_keyword.rs
pub(crate) fn build_context(...) -> KwContext {}
pub(crate) fn complete(ctx: CompletionContext)
```

I.e., we can preserve the current control flow, but make it such that each completion can, in a single file, provide both a bit of context and the actual code completion logic. I am not sure what the best setup is, though; this seems to be a bit of an overkill. My gut feeling is that just cleaning up the current structure would work -- I wouldn't worry about 2k LOC. But I also like how you move patterns to a separate file; maybe we should do that with some other patterns as well!
```rust
#[test]
fn completes_keywords_in_use_stmt_new_approach() {
    assert_completion_keyword(
```
Hi @matklad! Do you think it makes sense to use a new testing approach here? I will have to write a lot of tests for the new keywords, so it would be good to simplify them a little bit.
I propose to leave only labels and inserts as parameters and do all the other asserts under the hood.
Actually, I need to make the comparisons better so we get a diff in the end; I will return with changes.
One idea: because we have only one position to insert at, which we know beforehand, we can maybe generate the expected completion items using this position and then just do a snapshot assert, but with the generated completions.
Actually, I've tried to play a little bit with asserts, but the problem is that I cannot get `assert_debug_snapshot` to work with arbitrary sets of completion items (and probably it shouldn't work, because snapshots should be somewhat static). But the good thing about snapshot testing is that it gives you a nice diff which you would otherwise have to implement yourself.

I am a little bit worried that right now our keyword tests are somewhat unreadable due to a lot of extra information. I see that we can do two things:

(1) Not use debug snapshots and use custom logic.
(2) Use debug snapshots working not with `CompletionItem`s but with simpler structures like tuples, e.g. `r###"[("impl", "impl"), ("struct", "struct")]"###`; that way we would still have the diffs but make the tests easier to write.
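A sketch of option (2), assuming insta's `assert_debug_snapshot!`; the projection helper and the completion-item accessors are hypothetical:

```rust
// Hypothetical: project completion items down to (label, insert-text) pairs
// so the snapshot stays small and insta's diff stays readable.
fn keyword_pairs(code: &str) -> Vec<(String, String)> {
    do_keyword_completion(code) // assumed test helper returning completion items
        .into_iter()
        .map(|item| (item.label().to_string(), item.insert_text().to_string()))
        .collect()
}

#[test]
fn completes_top_level_keywords() {
    insta::assert_debug_snapshot!(keyword_pairs("<|>"));
}
```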
Yup, you are spot-on; this is a long-standing issue I want to fix in our completion testing infra.

Generally, there are two things we want to test during completion:

- that we complete the right set of items (this corresponds to the completion popup the user sees in the UI)
- that each item carries the correct edit (this corresponds to the user selecting a completion variant and seeing it properly inserted)

At the moment we test both, and that's not really convenient. Ideally, the stuff in `complete_xxx` tests only the list of things, and the stuff in `presentation.rs` tests the full edits.

I think we need to introduce two new testing utilities and start using them for completion:

- `get_completions`, which returns a single rendered list of labels, which should look roughly like this, but in text:

```
v acc
M alloc
m asm!
M ast
e ABI
```

that is, just a list of labels with "icons";

- `get_single_completion(label: &str)`, which additionally filters the list of completions by label and returns a single one (which is fully fed into the insta comparison).

In this PR, I think it makes sense to do the "completion list" thing.
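A sketch of how the proposed `get_completions` utility might be used with an insta inline snapshot (the rendered "icon label" lines are illustrative):

```rust
#[test]
fn completion_list_smoke_test() {
    // Hypothetical: get_completions renders the popup as "icon label" lines.
    let rendered = get_completions("<|>");
    insta::assert_snapshot!(rendered, @r###"
    kw fn
    kw impl
    kw struct
    "###);
}
```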
```rust
#[test]
fn completes_keywords_in_use_stmt_new_approach() {
    assert_debug_snapshot!(
```
Here is the last version, which on the one hand reduces the amount of data needed for the snapshot (and thus increases readability), and on the other hand still relies on diffs.
```rust
check_pattern_is_applicable(
    r"
trait A w<|> {
}
```
Let's write such short tests as a single line: `r"trait A w<|> {}"`.
@matklad, just to confirm: so in this PR we are generating a list of completion labels with their types as a first letter, and then matching the whole list (as a string) using …?
Yup!
@matklad I implemented most of the keywords (the main ones, I think) and added tests covering, I believe, the most basic situations. There is still room for a lot of improvements: for example, maybe variable names should be shorter and more concise, maybe patterns should be grouped for readability and all keywords sorted (or grouped), etc. Should we split the work into more PRs or issues (I mean, leave the scope of this PR as it is and do the subsequent tasks in other PRs)?
I think we should land this, as it is getting pretty big :) I've pushed some cleanups on top. bors r+
bors r+
This fixes the following issue: #4566.
Also added simple logic which filters the keywords that can be used with `unsafe` at the top level.
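A sketch of what that filtering boils down to (the helper name is hypothetical): at the top level, only `fn`, `trait`, and `impl` may follow `unsafe`:

```rust
// Hypothetical helper mirroring the described logic: of the top-level item
// keywords, only these three can legally follow `unsafe`.
fn can_follow_unsafe(keyword: &str) -> bool {
    matches!(keyword, "fn" | "trait" | "impl")
}
```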