~1.32x Faster checking with CleanDocument regex #105
Merged
Perf difference

Benchmarking with ruby_buildpack.rb.txt:

Before: 0.195756 0.005201 0.200957 ( 0.202021)
After:  0.148460 0.003634 0.152094 ( 0.152692)
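Those four columns are the user, system, total, and (real) times reported by Ruby's stdlib Benchmark. A minimal sketch of how you could produce that kind of measurement yourself (the workload here is a hypothetical stand-in, not the actual check against ruby_buildpack.rb.txt):

```ruby
require 'benchmark'

# Hypothetical workload standing in for the real document check;
# substitute the actual check to reproduce the numbers above.
time = Benchmark.measure do
  10_000.times { "a = 1 # comment".sub(/#.*\z/, "") }
end

# Prints user, system, total, and (real) — same column order as above
puts time
```

Timings will of course differ per machine; what matters for the comparison above is running the same input against the before and after commits.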
Profile code

To generate profiler output, see the readme for more details. You can run the profiler against the commit before this one and against this one to see the difference.
Before sha: 58a8d74
After sha: 0b4daca74fab5dc2979813461c0f2649951185e5
How I found the issue
I was using ruby-prof to try to find perf opportunities when I saw this output from the CallStackPrinter:

I noticed a lot of time spent in CleanDocument, which was curious as it's an N=1 process that happens before we do any looping or iteration. I also saw output from the RubyProf::GraphHtmlPrinter showing that we were creating many, many LexValue objects.

The fix
The CleanDocument class removes lines with a comment. It first runs lex over the document, then uses the lex output to determine which lines have comments it can remove. Once the comments are removed, the lexer is rerun, since removing comments can produce different results (specifically :on_ignored_nl); see PR #78 for more info.

Using lex data to remove comments works, but it's brute force: we must lex the document twice, which is expensive and generates many LexValue objects.
Instead of removing lines based on lex values, we can remove lines based on regex.
One caveat is that a regex can't tell whether we're inside a heredoc. If a line looks like it could contain a heredoc string embed, i.e., it contains a #{, then we don't remove the line. It's doubtful someone will write a comment with a #{ pattern in a place we care about (i.e., in between method calls).

With this approach, we reduce lexing passes from N=2 to N=1.
Profiler thoughts
As a side note, this didn't show up as a hotspot in the perf tools. Looking back at the screenshots, it's not obvious that there was a problem here, or that I could get a 1.3x speed boost by making this change. The only reason I investigated is that I knew the code well and believed it shouldn't be spending much time here. The amount reported isn't outrageously high; it's just surprising to me.
Even running the profiler before and after, it's not clear it got faster based on the output:
(Before is top & after is bottom)
You can see before was 1.1264283200034697 while after was 0.8452127130003646 at the top. It's not clear why the output for CleanDocument shows 17% after when it was only 12% before.

Side by side, it maybe makes a bit more sense in that the problem dropped in relative priority down the list. However, I'm still not entirely clear what the numbers are indicating:
(Before left, after right)
You can see the HTML results for yourself: https://www.dropbox.com/sh/kueagukss7ho2rm/AABk3V8FQXaFWugKI21ADZJ0a?dl=0