feat: regression testing #73
Conversation
The LSP needs time to init, but we didn't give it enough time. It didn't sleep well, and returned partial results.
Oh, and if it errors out, you can find the Artifact download URL in the log and inspect the output there.
	return c.Conn.Close()
}

// Extra wrapper around json rpc to
This adds a wrapper layer. The original plan was to retry on empty responses (so we wouldn't have to sleep waiting for the LSP), but that turned out to be infeasible, so we just sleep. The wrapper also makes it easier to add unified transparent LSP request caching later.
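For illustration only (not the code in this PR): a minimal Go sketch of what such a wrapper could look like, assuming a sourcegraph/jsonrpc2-style connection. The `Client` type and field names are hypothetical stand-ins for what lives in lang/lsp/client.go.

```go
package lsp

import (
	"context"

	"github.com/sourcegraph/jsonrpc2"
)

// Client is a hypothetical stand-in for the LSP client in lang/lsp/client.go.
type Client struct {
	Conn *jsonrpc2.Conn
}

// Call funnels every LSP request through one place. For now it only forwards
// the request; retry-on-empty or transparent response caching could later be
// added here without touching the individual textDocument/* methods.
func Call[R any](ctx context.Context, c *Client, method string, params any) (R, error) {
	var result R
	if err := c.Conn.Call(ctx, method, params, &result); err != nil {
		return result, err
	}
	return result, nil
}
```

Routing every request through a single generic call site is what would make a unified cache (or a retry policy) a purely local change later on.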
Pull Request Overview
This PR implements regression testing infrastructure to validate that code changes don't introduce behavioral regressions in the uniast generation. The system compares uniast JSON outputs between the main branch and PR branches by building separate executables and running them against all testdata.
Key changes:
- Adds automated regression testing workflow that compares uniast outputs between main and PR branches
- Implements JSON comparison tooling with detailed diff reporting
- Includes code refactoring to separate LSP method implementations into dedicated files
Reviewed Changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 6 comments.
File | Description |
---|---|
.github/workflows/regression.yml | CI workflow for automated regression testing |
script/run_all_testdata.sh | Shell script to generate uniast for all testdata |
script/diffjson.py | Python tool for comparing JSON outputs with detailed diff reporting |
script/requirements.txt | Python dependencies for JSON comparison tool |
lang/lsp/lsp_methods.go | Extracted LSP method implementations from main file |
lang/lsp/lsp.go | Refactored to remove method implementations and improve structure |
lang/lsp/client.go | Added generic Call wrapper method |
lang/lsp/handler.go | Added diagnostic notification handling |
lang/lsp/testutils.go | Increased LSP server wait time |
lang/lsp/clients_test.go | Updated test expectations and removed workspace symbol test |
testdata/rust/0_rust2/src/entity/inter.rs | Updated test data with uncommented code |
Force-pushed from f72c309 to ca872e9.
The main change is moving all the textDocument/* methods out of lsp.go into lsp_method.go. The method implementations are completely unchanged; they are only relocated. The reason is that I'm optimizing the parser and plan to do transparent caching at the jsonrpc layer later. No behavior change: on the test cases the functionality is exactly the same.
I've more or less pushed the Python-side optimizations as far as they can go (the bottleneck is now the single-threaded pylsp); there are some upcoming changes that will touch the parser code, but I can use tests to guarantee the behavior stays exactly the same.
So, did the optimizations actually help? @Hoblovski
I just realized the old and new runs in CI should be separate steps; that way we could also compare their efficiency (even though they can't run in parallel).
What type of PR is this?
Add regression testing scripts and integrate them into CI.
Saves you from all those nights of wondering "it looks like it works, but does it break anything?"
Sleep well.
Check the PR title.
(Optional) Translate the PR title into Chinese.
Implements system-level regression testing, integrated into CI.
Basically, it verifies that the uniast generated on the PR branch is identical to the uniast generated on main.
The logic is simple: build one executable from the main branch and one from the PR branch,
run both over all of testdata, and check whether the resulting uniast JSON is the same.
This is a system-level test that treats the internal implementation entirely as a black box, so it is relatively robust.
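For a rough idea of the comparison step (the actual tool is script/diffjson.py, which also produces detailed diff reports), the core check boils down to a structural comparison of the two JSON outputs. A minimal Go sketch, with hypothetical file names:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"reflect"
)

// loadJSON parses a JSON file into a generic value so that key order and
// formatting differences do not matter, only the structure and values.
func loadJSON(path string) (any, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var v any
	if err := json.Unmarshal(data, &v); err != nil {
		return nil, err
	}
	return v, nil
}

func main() {
	// Hypothetical file names: the uniast produced by the main-branch binary
	// and by the PR-branch binary for the same testdata case.
	mainAST, err := loadJSON("uniast_main.json")
	if err != nil {
		panic(err)
	}
	prAST, err := loadJSON("uniast_pr.json")
	if err != nil {
		panic(err)
	}
	if reflect.DeepEqual(mainAST, prAST) {
		fmt.Println("OK: uniast outputs are identical")
		return
	}
	fmt.Println("REGRESSION: uniast outputs differ")
	os.Exit(1)
}
```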
Last week I also floated the idea of property-based testing, which would double as an executable specification for uniast.
But PBT is really hard to do on the existing code... so for now, a system-level regression test that actually works.
Skipping the regression test
Just add back the commented-out line in regression.yml.
Once that is done, adding [NO-REGRESSION-TEST] to the PR title is enough to skip the test.
But that is quite coarse; ideally the PR author should declare which cases are expected to pass and which are not.
Manual regression testing
You can run the regression test manually, starting from a clean checkout:
The output should show that everything except go passes.
go differs because Yi added Go type parameters.