Re-enable tab completion of kwargs for large method tables #58012
Conversation
while testing to ensure that absurdly large method tables don't tank the performance of the REPL
You're just essentially demonstrating here that for your specific test case, the subtyping algorithm always takes the fast path--which is true, but seems hardly relevant to the actual point of having this limit
@vtjnash do you have suggestions on how to improve these tests? Or could you communicate how they're not testing what I think they're testing so I can fix them?
You are opting into an algorithm that is too slow for interactive usage (`O(n^2)` in measurements) when doing anything that doesn't just hit the fast paths like your test, which is why we have these limits. It has nothing to do with display limitations (those limits should not even occur, since display occurs elsewhere). We can opt into a somewhat faster version of the algorithm with `-1` (maybe closer to `O(n log n)`, but nobody has proved that), but it is also still too slow, delaying keystrokes by much more than 10 ms of added perceptual latency.
This is also just measurements on a simple system image with hardly anything added.
```julia
julia> @time Base._methods_by_ftype(Tuple{DataType,Vararg}, 4000, Base.get_world_counter()) |> length
  0.406149 seconds (16.67 k allocations: 1.171 MiB)

julia> @time Base._methods_by_ftype(Tuple{DataType,Vararg}, -1, Base.get_world_counter());
  0.113711 seconds (15.12 k allocations: 1.108 MiB)

julia> @time Base._methods_by_ftype(Tuple{Type,Vararg}, 4000, Base.get_world_counter()) |> length
  0.858004 seconds (59.50 k allocations: 2.917 MiB)
2745

julia> @time Base._methods_by_ftype(Tuple{Type,Vararg}, -1, Base.get_world_counter()) |> length
  0.231704 seconds (50.85 k allocations: 2.568 MiB)
2745

julia> @time Base._methods_by_ftype(Tuple{Type,Vararg}, 40, Base.get_world_counter());
  0.000250 seconds (101 allocations: 7.312 KiB)

julia> @time Base._methods_by_ftype(Tuple{typeof(Core.kwcall),Vararg}, 4000, Base.get_world_counter()) |> length
  0.105716 seconds (8.45 k allocations: 415.672 KiB)
1138
```
OK, it's interesting that the limit changes the algorithm; I wasn't aware of that and wouldn't have guessed it. But the PR branch is using 500 as its cutoff (not 4000). It's also in contrast with the `-1` limit used when the REPL has no clue about the function it's calling (the PyCall example). I'm trying to restore functionality that the fixes for #54131 broke, without impacting perceptual performance in that catastrophic case.
The test case seems to represent this distinction adequately. But you may be right that it's still too slow.
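For concreteness, the scenario under discussion can be reproduced directly in a fresh session. This is an illustrative sketch only (the function `h` and the method count are made up, not taken from the PR's tests):

```julia
# Build a method table with many methods, then match against it with no
# limit (-1): the internal Base._methods_by_ftype query returns every
# matching method, which is what completion has to sort through.
function h end
for i in 1:1000
    @eval h(::Val{$i}) = $i
end

tt = Tuple{typeof(h),Vararg}  # known callee, unknown arguments
ms = Base._methods_by_ftype(tt, -1, Base.get_world_counter())
length(ms)  # 1000: every method of `h` matches
```

With a positive limit smaller than the match count, the same query bails out instead of enumerating everything, which is the behavior the cutoff discussion above is about.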
OK, it seems like there could be a better heuristic, then. I just pushed a change to disable tab completion in cases where the REPL doesn't know the concrete function; that resolves the PyCall issue (where the function to be called was only inferred abstractly):

```julia
julia> using REPL, REPL.REPLCompletions

julia> @time completions((s="Core.kwcall(lim";), lastindex(s), @__MODULE__, false);
  0.022046 seconds (18.23 k allocations: 934.469 KiB)

julia> function g54131 end
g54131 (generic function with 0 methods)

julia> for i in 1:498
           @eval g54131(::Val{$i}) = i
       end

julia> g54131(::Val{499}; kwarg=true) = 499*kwarg
g54131 (generic function with 499 methods)

julia> @time completions((s="g54131(lim";), lastindex(s), @__MODULE__, false)
  0.002348 seconds (8.19 k allocations: 427.562 KiB)
(REPL.REPLCompletions.Completion[], 8:10, false)

julia> @time completions((s="g54131(kw";), lastindex(s), @__MODULE__, false)
  0.002278 seconds (8.19 k allocations: 427.375 KiB)
(REPL.REPLCompletions.Completion[REPL.REPLCompletions.KeywordArgumentCompletion("kwarg")], 8:9, true)

julia> for i in 500:1000
           @eval g54131(::Val{$i}) = i
       end

julia> @time completions((s="g54131(kw";), lastindex(s), @__MODULE__, false)
  0.006390 seconds (12.21 k allocations: 642.750 KiB)
(REPL.REPLCompletions.Completion[REPL.REPLCompletions.KeywordArgumentCompletion("kwarg")], 8:9, true)

julia> for i in 1001:2000
           @eval g54131(::Val{$i}) = i
       end

julia> @time completions((s="g54131(kw";), lastindex(s), @__MODULE__, false)
  0.020564 seconds (20.22 k allocations: 1.051 MiB)
(REPL.REPLCompletions.Completion[REPL.REPLCompletions.KeywordArgumentCompletion("kwarg")], 8:9, true)

julia> for i in 2001:4000
           @eval g54131(::Val{$i}) = i
       end

julia> @time completions((s="g54131(kw";), lastindex(s), @__MODULE__, false)
  0.073561 seconds (36.22 k allocations: 1.801 MiB)
(REPL.REPLCompletions.Completion[REPL.REPLCompletions.KeywordArgumentCompletion("kwarg")], 8:9, true)
```
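The "disable completion when the callee isn't known concretely" idea can be sketched as follows. The helper name `should_complete_kwargs` is hypothetical, not the PR's actual code:

```julia
# Hypothetical guard (not the PR's implementation): only attempt
# keyword-argument completion when the callee's type is concrete; an
# abstract callee (e.g. one inferred as Function or Any) could match an
# arbitrarily large set of methods.
function should_complete_kwargs(@nospecialize(ft::Type))
    return isconcretetype(ft)  # skip completion for unknown callees
end

should_complete_kwargs(typeof(sin))  # true: concrete singleton type
should_complete_kwargs(Function)     # false: abstract, could be anything
```

A guard like this sidesteps the method-table size question entirely for the PyCall-style case, since the expensive enumeration is never attempted.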
I think the implementation change itself is fine.
Regarding the tests using `@elapsed`, I think we can just omit them, since ultimately what should be tested is our perceived speed, and I feel it is difficult to test this systematically.
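If a timing test were kept anyway, one middle ground (illustrative only, not the PR's actual test; the function `f` here is made up) is to assert a bound far above normal noise, so the test only fails on pathological slowness like #54131 rather than on CI jitter:

```julia
using REPL, REPL.REPLCompletions

f(; kwarg=true) = kwarg  # toy function with one keyword argument

# Warm up once so the timed call doesn't measure compilation.
completions("f(kw", 4, @__MODULE__, false)
t = @elapsed completions("f(kw", 4, @__MODULE__, false)
t < 5.0  # generous bound: catches hangs, not ordinary timing variance
```

The trade-off is that such a test documents the "must not appear to crash" requirement without pretending to measure perceived latency precisely.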
I mean, the reason I added the test was because of #54131: that's not just perceptibly slow, it's being so slow you think it crashed. I suppose I could axe the tests, but they represent a real but less-common use case that can be easy to forget about.
I understand.
Let's proceed with the merge as is.
If there are any issues with this test case, we can address them when they arise.
Thank you for pushing forward with this PR.
while testing to ensure that ~~absurdly large method tables~~ tab completing over an abstract function call doesn't tank the performance of the REPL

Fixes #57836