Add test infrastructure and CUE-based workflow generation #98
base: main
Conversation
📝 Walkthrough
This PR establishes a comprehensive testing and CI/CD infrastructure. It adds Lua unit tests and integration tests, introduces CUE-based workflow generation for GitHub Actions, creates a Python test runner script, documents the testing process, and adds automation commands via a Justfile. Workflows are generated from CUE definitions and committed to the repository.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Co-authored-by: NobbZ <[email protected]>
NobbZ left a comment:
I added a first set of comments and will continue the review tomorrow; perhaps you can tackle the ones I left until then?
- Convert run-tests.sh to Python (bin/run-tests.py)
- Add run-tests as a Nix package and flake app
- Replace Makefile with justfile using the just command runner
- Add just to devShell
- Update all documentation to use 'just' instead of 'make'
- Update workflows to install just and use 'just check'
- Update all references from scripts/run-tests.sh to nix run .#run-tests

Co-authored-by: NobbZ <[email protected]>
@copilot the error:

```
error: Unknown start of token '.'
 ——▶ justfile:21:28
    │
 21 │ pull-check-workflow: cicd/*.cue cue.mod/module.cue
    │                      ^
Error: Process completed with exit code 1.
```
Just doesn't support wildcard dependencies like Make does. Removed the dependency syntax (cicd/*.cue cue.mod/module.cue) and consolidated the workflow generation into a single recipe. Added informative echo messages for better user feedback. Co-authored-by: NobbZ <[email protected]>
... Fixed in commit 407a99d.
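A consolidated recipe along the lines described above might look like this; the recipe name and the CUE expression names (`pullCheck`, `checkGenerated`) are illustrative assumptions, not taken from the actual justfile:

```just
# Regenerate all workflow files from the CUE definitions.
# just has no Make-style file prerequisites, so instead of declaring
# cicd/*.cue as dependencies, the recipe simply lists the commands.
workflows:
    @echo "Generating GitHub Actions workflows from CUE..."
    cue export ./cicd -e pullCheck --out yaml > .github/workflows/pull-check.yml
    cue export ./cicd -e checkGenerated --out yaml > .github/workflows/check-generated.yml
    @echo "Workflows regenerated."
```

Unlike Make, just re-runs this recipe unconditionally; since `cue export` is fast and idempotent, that trade-off is usually acceptable.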
- Remove outdated scripts/ reference from copilot-instructions.md
- Fix just-version parameter in workflows (use hyphen not underscore)
- Remove IMPLEMENTATION_SUMMARY.md (not needed in repo)
- Remove Python cache files (__pycache__) and add to .gitignore
- Remove redundant apps section from bin/default.nix (package is enough)
- Update run-tests.py to search parent directories for flake.nix
- Add system detection in run-tests.py to work on different platforms
- Extract CUE and Just versions to top of workflows.cue file
- Remove community cache from Nix configuration (not needed)
- Remove _freeSpace step (build won't exceed workflow size limit)
- Regenerate workflow files to reflect all CUE changes

Co-authored-by: NobbZ <[email protected]>
@coderabbitai review
✅ Actions performed: Review triggered.
Actionable comments posted: 13
🤖 Fix all issues with AI agents
In @.github/workflows/check-generated.yml:
- Around line 1-22: Update the Just installer step "Install Just" to use
just-version "1.39.0" (or a >=1.39.0 range) in the workflow step that uses
extractions/setup-just@v2 by changing the just-version field value from "1.36.0"
to "1.39.0" so the CI installs the newer Just release.
In @.github/workflows/pull-check.yml:
- Line 47: The workflow hardcodes Nix systems as
checks.x86_64-linux.integration-tests and checks.x86_64-linux.lua-tests; update
the CUE source (likely cicd/workflows.cue) so the generated job steps do not
embed "x86_64-linux" directly: either (preferred) add a systems matrix in the
CUE job generation so jobs are emitted for each target system (e.g., a systems
list that produces checks.<system>.integration-tests and
checks.<system>.lua-tests), or (alternative) interpolate the current system at
runtime by generating the step to run nix build .#checks.$(nix eval --impure
--raw --expr builtins.currentSystem).integration-tests (and similar for
lua-tests); ensure you remove the literal "x86_64-linux" occurrences in the CUE
templates so other architectures (aarch64-linux, darwin, etc.) are supported.
In @bin/run-tests.py:
- Around line 77-80: After finding the flake root (flake_root =
find_flake_root()) the script never changes the current working directory, so
nix commands may run from the wrong path; add a chdir to the found flake root
(os.chdir(flake_root)) immediately after the flake_root check and before
returning/continuing execution, and ensure the os module is imported at the top
of the file if not already present; keep the existing error path using
print_error("Not in a Nix flake directory...") unchanged.
- Around line 65-71: The run_command function has an unused parameter
description and a redundant returncode check; remove the unused description
parameter from the run_command signature (keep only cmd: list[str]) and simplify
the function to call subprocess.run(cmd, check=True) inside a try/except that
returns True on success and False on CalledProcessError, then update all call
sites that pass a description (e.g., places invoking run_command with two args)
to pass only the command list.
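The two fixes above (chdir to the flake root, and the simplified `run_command`) could be sketched like this; function names mirror the prompts, but the bodies are illustrative, not the actual bin/run-tests.py code:

```python
import os
import subprocess
from pathlib import Path

def find_flake_root(start=None):
    """Walk upward from start (default: cwd) until a flake.nix is found."""
    current = Path(start or Path.cwd()).resolve()
    for candidate in [current, *current.parents]:
        if (candidate / "flake.nix").is_file():
            return candidate
    return None

def enter_flake_root():
    """Fail clearly outside a flake; otherwise chdir so nix runs from the root."""
    flake_root = find_flake_root()
    if flake_root is None:
        raise SystemExit("Not in a Nix flake directory (no flake.nix found)")
    os.chdir(flake_root)

def run_command(cmd: list[str]) -> bool:
    """Run a command, streaming its output; True on success, False on failure."""
    try:
        subprocess.run(cmd, check=True)
    except subprocess.CalledProcessError:
        return False
    return True
```

Call sites that previously passed a description string would then call `run_command(["nix", "build", "..."])` with the command list alone.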
In @cicd/workflows.cue:
- Around line 13-40: The workflow step defined by _installJust pins the just
installer action to "extractions/setup-just@v2" which is outdated; update the
uses field in the _installJust githubactions.#Step from
"extractions/setup-just@v2" to "extractions/setup-just@v3" so the step uses the
latest stable action version (leave the step name and "with" block unchanged).
In @cue.mod/module.cue:
- Around line 1-10: The CUE module's language.version is outdated; update the
language.version field (symbol: language.version) from "v0.14.2" to the latest
"v0.15.3" and re-run any tooling to ensure compatibility; additionally review
the githubactions dependency entry (symbol: deps."cue.dev/x/githubactions@v0"
with v: "v0.2.0") against the CUE Central Registry and bump its v value if a
newer release is available.
In @TESTING_GUIDE.md:
- Line 214: Replace the bare URL in the line that reads "Solution: Install Nix:
https://nixos.org/download.html" with Markdown link syntax; update it to use a
descriptive link text such as "Install Nix" or "Nix download" (e.g., "Solution:
Install Nix: [Install Nix](https://nixos.org/download.html)") so the MD034
no-bare-urls lint rule is satisfied.
- Line 221: In TESTING_GUIDE.md remove or replace the outdated line that reads
"Check `IMPLEMENTATION_SUMMARY.md` for technical details" (the removed
IMPLEMENTATION_SUMMARY.md file) — either delete the line or update it to point
to a current, valid document (e.g., README.md or another implementation doc) and
ensure any link/anchor is correct and builds without broken references.
In @tests/default.nix:
- Around line 9-30: The lua-tests derivation only checks syntax with luac;
update the buildPhase to invoke the actual test runner (the busted package
included in nativeBuildInputs) so CI fails on test failures: replace or augment
the current luac loop in buildPhase with a command that runs busted (referencing
the busted binary from pkgs.luajitPackages.busted or simply `busted`) against
tests/lua, and ensure the command exits non‑zero on failure so the derivation
fails when tests fail; keep the luac syntax check if you want both validation
and test execution.
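Keeping both checks, the suggested `buildPhase` might look roughly like this; the attribute context is assumed from the prompt, with `busted` expected to be available via `nativeBuildInputs`:

```nix
buildPhase = ''
  # Keep the syntax check from the original derivation...
  for f in tests/lua/*.lua; do
    luac -p "$f"
  done
  # ...and actually execute the specs. busted exits non-zero on a
  # failing test, which makes the derivation (and thus CI) fail.
  busted tests/lua
'';
```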
In @tests/integration/test-lsp.sh:
- Around line 23-26: The script exits early with `exit 1` which skips later
cleanup and also uses a hardcoded /tmp path; introduce a temporary log file
(e.g., LOG_FILE=$(mktemp)) instead of /tmp/nvim-lsp-test.log and set a trap at
the top like `trap 'rm -f "$TMP_FILE" "$LOG_FILE"' EXIT` (or a failure-specific
trap that cats the log before removing it) to guarantee removal of both TMP_FILE
and LOG_FILE on any exit, and replace occurrences of the hardcoded path with
"$LOG_FILE" and remove the explicit `rm -f "$TMP_FILE"`/`cat`/`exit` sequence so
cleanup always runs via the trap.
- Around line 19-26: The current pipeline uses `timeout ... | tee
/tmp/nvim-lsp-test.log` so the exit code of `tee` masks the NVIM command;
capture the real exit status of the left side and use that to decide pass/fail.
Replace the pipeline with one of: run the command redirecting stdout/stderr into
the log file and then test `$?` (e.g. `timeout 10 $NVIM ... 2>&1 >
/tmp/nvim-lsp-test.log` then check `$?`), or keep `tee` but capture the producer
status via `status=${PIPESTATUS[0]}` (bash) after `timeout 10 $NVIM ... 2>&1 |
tee /tmp/nvim-lsp-test.log` and use `$status` to decide success/failure; ensure
you still clean up `$TMP_FILE` on failure and print `/tmp/nvim-lsp-test.log`
when nonzero.
- Line 11: The TMP_FILE assignment uses the GNU-only "mktemp --suffix=.lua";
replace that with a portable mktemp invocation that uses a template containing X
characters (e.g., a template like "tmp.XXXXXX.lua") so both GNU and BSD/macOS
mktemp implementations work; update the line that sets TMP_FILE (the one calling
mktemp --suffix=.lua) to use the template form instead and ensure the created
filename ends with .lua.
In @tests/integration/test-startup.sh:
- Around line 11-17: The pipeline using NVIM --headless ... | tee
/tmp/nvim-healthcheck.log is masked by tee returning 0; change the check to
preserve Neovim's exit status by either enabling pipefail before running the
pipeline (set -o pipefail) or inspecting PIPESTATUS[0] right after the pipeline
and failing if it's non-zero; update the if/else logic around the NVIM
invocation (and continue to write output to /tmp/nvim-healthcheck.log) so the
script exits with failure when Neovim's exit code indicates an error.
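For test-startup.sh, the `pipefail` variant suggested above can be sketched as follows; `false` stands in for the real `$NVIM --headless ...` health-check invocation:

```shell
#!/usr/bin/env bash
# With pipefail, a pipeline's exit status is the first failing command's,
# so `tee` returning 0 no longer masks a failing Neovim process.
set -o pipefail

LOG_FILE="${TMPDIR:-/tmp}/nvim-healthcheck.log"

# 'false' is a placeholder for the real nvim health-check command.
if false 2>&1 | tee "$LOG_FILE"; then
  result=pass
else
  result=fail
fi
echo "health check: $result"
```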
🧹 Nitpick comments (5)
tests/lua/health_spec.lua (1)
13-23: Tests work correctly; consider optional assertion refinement. The invocation tests correctly verify that the register functions don't throw errors (any error would fail the test before the assertion is reached). The current pattern is functional.
♻️ Optional: Consider more idiomatic assertion patterns
For tests that verify no errors occur, you could either:
- Omit the assertion entirely (busted allows this):

```diff
 it("should register programs correctly", function()
   health.register_program("test-program")
-  -- Just verify it doesn't error
-  assert.is_true(true)
 end)
```

- Or use a more descriptive assertion if you want to explicitly capture return values in the future.

Both approaches are valid; the current implementation works correctly.
tests/README.md (1)
1-57: Excellent test documentation; minor capitalization fix needed. The documentation clearly explains the test infrastructure, how to run tests, and how to write new tests. This will be valuable for contributors.
📝 Minor fix: Capitalize "GitHub"
Line 56 should use the official platform name capitalization: "Tests are automatically run on pull requests through GitHub Actions workflows". (Note: The word "github" in the path `.github/workflows/` is correctly lowercase, as it's a directory name.)

tests/lua/helpers_spec.lua (1)
1-16: LGTM! The test correctly validates the `git_root` function's existence and return type, and follows all Lua coding guidelines (local variables, double quotes, call parentheses).
♻️ Optional: More idiomatic assertion
Line 14 could use busted's type assertions more idiomatically:

```diff
-  assert.is_true(type(result) == "string" or result == nil)
+  assert.is_true(result == nil or type(result) == "string")
```

Or even more explicitly:

```diff
-  assert.is_true(type(result) == "string" or result == nil)
+  if result ~= nil then
+    assert.is_string(result)
+  end
```

tests/default.nix (1)
33-92: Use `checkPhase` for test execution and add `dontBuild = true`. The integration tests run in `buildPhase`, but Nix convention for test derivations is to:
- Set `dontBuild = true;` to skip the build phase
- Execute tests in `checkPhase` instead
- Use `installPhase` only to write result artifacts

This makes the intent clearer and follows stdenv conventions.
♻️ Refactor to use checkPhase

```diff
 integration-tests = pkgs.stdenv.mkDerivation {
   name = "integration-tests";
   src = ./.;
+  dontBuild = true;
   nativeBuildInputs = [
     pkgs.bash
     pkgs.coreutils
     self'.packages.neovim
   ];
-  buildPhase = ''
-    mkdir -p $out
-
+  checkPhase = ''
     export HOME=$TMPDIR
     export XDG_CONFIG_HOME=$TMPDIR/.config
     export XDG_DATA_HOME=$TMPDIR/.local/share
     export XDG_STATE_HOME=$TMPDIR/.local/state
     export XDG_CACHE_HOME=$TMPDIR/.cache

     echo "Running integration tests..."

     # Test 1: nvim starts and quits
     echo "Test 1: Basic startup..."
     if timeout 30 ${self'.packages.neovim}/bin/nvim --headless +qa; then
       echo "✓ Test 1 passed"
     else
       echo "✗ Test 1 failed"
       exit 1
     fi

     # Test 2: Can create and edit a file
     echo "Test 2: File editing..."
     TMP_FILE=$(mktemp --suffix=.txt)
     if timeout 30 ${self'.packages.neovim}/bin/nvim --headless "$TMP_FILE" \
       +'normal iHello, World!' \
       +wq; then
       if grep -q "Hello, World!" "$TMP_FILE"; then
         echo "✓ Test 2 passed"
       else
         echo "✗ Test 2 failed: Content not written"
         exit 1
       fi
     else
       echo "✗ Test 2 failed: Could not edit file"
       exit 1
     fi

     # Test 3: Health check runs
     echo "Test 3: Health check..."
     if timeout 30 ${self'.packages.neovim}/bin/nvim --headless +'checkhealth nobbz' +qa 2>&1 | tee $TMPDIR/health.log; then
       echo "✓ Test 3 passed"
     else
       echo "✗ Test 3 failed"
       cat $TMPDIR/health.log
       exit 1
     fi
   '';

   installPhase = ''
     mkdir -p $out
     echo "Integration tests passed" > $out/result
   '';
 };
```

bin/run-tests.py (1)
44-62: Document the fallback behavior for unknown systems. Lines 61-62 silently default to `x86_64-linux` when the system can't be detected. This fallback might cause confusing errors on unsupported platforms. Consider either:
- Logging a warning about the fallback, or
- Raising an error for unsupported systems

⚠️ Proposed enhancement:

```diff
   # Default to x86_64-linux if we can't detect
+  print(f"{Colors.YELLOW}Warning: Could not detect system (got {system}/{machine}), defaulting to x86_64-linux{Colors.NC}")
   return "x86_64-linux"
```

Or alternatively, fail explicitly:

```diff
-  # Default to x86_64-linux if we can't detect
-  return "x86_64-linux"
+  # Unsupported system
+  print_error(f"Unsupported system: {system}/{machine}")
+  sys.exit(1)
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (23)
- .github/copilot-instructions.md
- .github/workflows/check-generated.yml
- .github/workflows/pull-check.yml
- .gitignore
- README.md
- TESTING_GUIDE.md
- bin/default.nix
- bin/run-tests.nix
- bin/run-tests.py
- cicd/README.md
- cicd/check-generated.cue
- cicd/pull-check.cue
- cicd/workflows.cue
- cue.mod/module.cue
- flake.nix
- justfile
- tests/README.md
- tests/default.nix
- tests/integration/test-lsp.sh
- tests/integration/test-startup.sh
- tests/lua/health_spec.lua
- tests/lua/helpers_spec.lua
- tests/lua/lazy_spec.lua
🧰 Additional context used
📓 Path-based instructions (4)

**/*.{nix,lua}
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Run `nix fmt` before committing to format all Nix files with `alejandra` and all Lua files with `emmy-lua-code-style`
Files: tests/default.nix, bin/default.nix, tests/lua/health_spec.lua, tests/lua/lazy_spec.lua, bin/run-tests.nix, tests/lua/helpers_spec.lua, flake.nix

**/*.nix
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Maintain a clear distinction between flake-parts modules and package definitions in Nix files
Files: tests/default.nix, bin/default.nix, bin/run-tests.nix, flake.nix

**/*.lua
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
- Use `local` for all variable declarations in Lua
- Follow luarocks style guide with 2-space indentation, double quotes for strings, and trailing commas in tables in Lua code
- Always use call parentheses when invoking functions in Lua
- Respect `.editorconfig` (2 spaces indentation, double quotes) and `.stylua.toml` settings when formatting Lua code

Files: tests/lua/health_spec.lua, tests/lua/lazy_spec.lua, tests/lua/helpers_spec.lua

.github/copilot-instructions.md
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Update `.github/copilot-instructions.md` after major refactors to keep instructions accurate
Files: .github/copilot-instructions.md
🧠 Learnings (13)
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Test plugin loading with `:checkhealth nobbz` in Neovim after configuration changes
Applied to files:
tests/integration/test-startup.sh, tests/README.md, README.md, tests/lua/health_spec.lua, .github/copilot-instructions.md, TESTING_GUIDE.md
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to **/*.{nix,lua} : Run `nix fmt` before committing to format all Nix files with `alejandra` and all Lua files with `emmy-lua-code-style`
Applied to files:
README.md, .github/copilot-instructions.md, justfile, flake.nix
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Verify plugin changes with `nix build` to ensure Nix evaluation and package builds successfully
Applied to files:
tests/default.nix, bin/default.nix, .github/copilot-instructions.md
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to plugins/default.nix : Ensure plugin is listed in the `optionalPlugins` dict in `plugins/default.nix` with `true` for optional (lazy-loaded) plugins or `false` for mandatory (start-on-boot) plugins
Applied to files:
bin/default.nix
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to plugins/nobbz/lua/nobbz/lsp/**/*.lua : Register custom LSP configurations with `require("nobbz.health").register_lsp("lsp-name")` in Lua configuration
Applied to files:
tests/lua/health_spec.lua, tests/lua/lazy_spec.lua
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to plugins/nobbz/lua/nobbz/**/*.lua : Create lazy loading specifications in Lua configuration for optional plugins added to `optionalPlugins` with `true` value
Applied to files:
tests/lua/health_spec.lua, tests/lua/lazy_spec.lua
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to .github/copilot-instructions.md : Update `.github/copilot-instructions.md` after major refactors to keep instructions accurate
Applied to files:
.github/copilot-instructions.md
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to **/*.lua : Follow luarocks style guide with 2-space indentation, double quotes for strings, and trailing commas in tables in Lua code
Applied to files:
.github/copilot-instructions.md
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to **/*.nix : Maintain a clear distinction between flake-parts modules and package definitions in Nix files
Applied to files:
.github/copilot-instructions.md, bin/run-tests.py, cicd/workflows.cue, flake.nix
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to plugins/nobbz/lua/nobbz/**/*.lua : Use lazy loading via `nobbz.lazy` when possible in Lua configuration
Applied to files:
tests/lua/lazy_spec.lua
📚 Learning: 2025-04-12T09:24:45.451Z
Learnt from: NobbZ
Repo: NobbZ/nobbz-vim PR: 0
File: :0-0
Timestamp: 2025-04-12T09:24:45.451Z
Learning: NobbZ uses a homebrewn shallow abstraction over `lz.n` rather than using lazy.nvim directly. References to Lazy.nvim's specific features should be avoided in reviews of NobbZ's repository.
Applied to files:
tests/lua/lazy_spec.lua
📚 Learning: 2026-01-07T22:24:07.906Z
Learnt from: CR
Repo: NobbZ/nobbz-vim PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-01-07T22:24:07.906Z
Learning: Applies to flake.lock : Commit `flake.lock` changes when updating flake inputs via `nix flake update --commit-lock-file`
Applied to files:
bin/run-tests.py, flake.nix
📚 Learning: 2025-01-31T23:06:17.133Z
Learnt from: NobbZ
Repo: NobbZ/nobbz-vim PR: 19
File: plugins/nobbz/lua/nobbz/lsp/beancount.lua:3-6
Timestamp: 2025-01-31T23:06:17.133Z
Learning: In the nobbz-vim project, `LSP_CAPAS` is defined as a global variable in `plugins/nobbz/lua/nobbz/init.lua` and should be used for LSP capabilities across the codebase.
Applied to files:
tests/integration/test-lsp.sh
🪛 LanguageTool
tests/README.md
[uncategorized] ~56-~56: The official name of this software platform is spelled with a capital “H”.
Context: ...ugh GitHub Actions workflows defined in .github/workflows/. These workflows are genera...
(GITHUB)
.github/copilot-instructions.md
[uncategorized] ~193-~193: The official name of this software platform is spelled with a capital “H”.
Context: ...ctions:** After major refactors, verify .github/copilot-instructions.md is still accur...
(GITHUB)
🪛 markdownlint-cli2 (0.18.1)
TESTING_GUIDE.md
214-214: Bare URL used
(MD034, no-bare-urls)
🪛 Ruff (0.14.10)
bin/run-tests.py
65-65: Unused function argument: description
(ARG001)
68-68: subprocess call: check for execution of untrusted input
(S603)
69-69: Consider moving this statement to an else block
(TRY300)
🔇 Additional comments (22)
tests/lua/lazy_spec.lua (1)
1-20: LGTM! Good starting point for test coverage. The test correctly verifies the public API surface of the `nobbz.lazy` module and follows Lua coding guidelines (uses `local`, double quotes, and call parentheses).

.gitignore (1)
7-10: LGTM! Standard Python ignore patterns. The Python cache file patterns are correctly configured to support the new Python test runner introduced in this PR.

tests/lua/health_spec.lua (1)
1-11: LGTM! API surface verification looks good. The existence checks for `register_program` and `register_lsp` functions follow Lua coding guidelines and establish baseline test coverage.

flake.nix (2)
10-10: LGTM! Tests module correctly integrated. The addition of `./tests` to imports properly integrates the test infrastructure into the flake's module system.
44-44: LGTM! Workflow tooling correctly added to devShell. Adding `cue` and `just` to the development shell properly supports the CUE-based workflow generation and justfile automation introduced in this PR.

tests/integration/test-startup.sh (1)
20-25: LGTM! The `timeout` command correctly handles both hanging and failing nvim processes, and will properly propagate failures.

README.md (1)
10-54: LGTM! The new Testing and Workflow Generation sections clearly document the test infrastructure and CI/CD workflow generation process. The commands are accurate and well-organized, with appropriate cross-references to detailed documentation.

justfile (1)
1-32: LGTM! The justfile provides clear, well-documented targets for workflow generation, verification, and formatting. The commands correctly use CUE export/vet and align with the project's formatting requirements (alejandra for Nix, cue fmt for CUE files).
cicd/workflows.cue (2)
42-84: LGTM! The job definitions correctly compose steps and use appropriate nix commands with `--print-build-logs` for CI visibility. The architecture-specific paths (`.#checks.x86_64-linux.*`) align with the PR's testing infrastructure.
5-7: All version constants are available with no known vulnerabilities. Verification confirms:
- Nix 2.32.1 exists (current latest: 2.33.0)
- CUE v0.14.2 exists (current latest: v0.15.3)
- Just 1.36.0 exists (current latest: 1.46.0)

No public CVEs found for any of these versions. While they are not the absolute latest releases, they remain supported and secure for CI/CD workflows.
cicd/README.md (1)
1-98: Comprehensive and accurate CI/CD documentation. The documentation clearly explains the CUE-based workflow generation approach, with well-structured sections covering structure, generation commands, workflow overview, and practical guidance for adding new workflows. Code examples for common patterns (reusing steps, conditionals) are syntactically correct and helpful.

TESTING_GUIDE.md (1)
1-223: Well-structured testing guide with practical examples and CI/CD integration. The guide comprehensively covers quick start, test types, local execution, CI/CD workflows, debugging, best practices, and troubleshooting. The guidance to use `:checkhealth nobbz` for plugin validation aligns well with project conventions, and the integration with the new test infrastructure is clearly explained.

.github/copilot-instructions.md (5)
30-30: Dev shell tools and workflow generation guidance properly documented. The addition of `cue` and `just` to the dev shell tools list, along with the clarification that these are used for "making changes to Lua code or generating workflows," accurately reflects the new workflow generation infrastructure. Also applies to: 32-32
59-66: Testing and workflow generation commands correctly integrated into validation steps. New sections clearly document how to run tests via nix commands (`nix run .#run-tests`, `nix build .#checks.x86_64-linux.lua-tests`, etc.) and workflow generation via justfile targets (`just workflows`, `just check`, `just fmt`), providing developers with clear paths to validate changes.
125-138: New project structure sections accurately document tests/ and cicd/ directories. The documentation correctly describes the new testing infrastructure (Lua unit tests, integration tests) and CUE-based workflow generation, with appropriate cross-references to dedicated documentation files (tests/README.md, cicd/README.md).
189-199: Validation steps comprehensively updated to include testing and workflow verification. The expanded validation checklist now includes running tests, checking health with `:checkhealth nobbz`, verifying workflows if CUE files were modified, and explicitly notes that workflows are generated from CUE definitions rather than edited directly. This aligns with best practices and prevents common mistakes.
183-199: Copilot instructions file successfully updated to reflect major refactor. Per coding guidelines, `.github/copilot-instructions.md` has been comprehensively updated after this major refactor to remain accurate and reflect the new testing infrastructure, CI/CD workflow generation, and project structure changes. The file now serves as an up-to-date onboarding reference.

bin/default.nix (1)
10-10: Consider adding run-tests to checks for CI validation. The run-tests package follows the same pattern as add-plugin and update-plugins (lines 8-9), which are both included in the checks section (lines 14-15). However, run-tests is not added to checks.
Is this intentional? If run-tests should be validated in CI like the other tools, consider adding it to the checks section:

📋 Suggested addition to checks section

```diff
 checks = {
   add-plugin = self'.packages.add-plugin;
   update-plugins = self'.packages.update-plugins;
+  run-tests = self'.packages.run-tests;
 };
```

tests/integration/test-lsp.sh (1)
19-19: Consider test validity: using `cat` as the LSP server. The test uses `cmd = {'cat'}`, which is not a real LSP server. This may not properly validate LSP functionality, since `cat` will not respond to LSP protocol messages. Is the intent to:
- Test that LSP infrastructure can start (current behavior), or
- Test actual LSP functionality (would require a real LSP server)?
If testing actual LSP functionality is desired, consider using a minimal Lua LSP server or mock LSP implementation that can respond to basic LSP requests.
cicd/check-generated.cue (1)
1-16: LGTM! All referenced step definitions are properly defined. The CUE workflow definition is well-structured and follows the githubactions schema correctly. All three step definitions (_cloneRepo, _installCue, _installJust) are properly defined in cicd/workflows.cue, and the "check" target is defined in the justfile at line 20. The workflow will correctly run on pull requests and execute the configured validation steps.

bin/run-tests.nix (1)
1-8: LGTM! Clean wrapper implementation. The shell script wrapper is well-structured: `exec` replaces the shell process with Python (avoiding an unnecessary parent process), and `"$@"` correctly forwards all arguments to the Python script.

cicd/pull-check.cue (1)
1-14: LGTM! Clean CUE workflow definition. The workflow structure is well-organized, using composition with the referenced job templates (_buildFlake, _buildChecks, _luaTests, _integrationTests). The trigger on `pull_request` is appropriate for a PR checker workflow.
Note: The hardcoded `x86_64-linux` system issue in the generated workflow YAML likely originates from the job template definitions (presumably in workflows.cue), not from this file.
```yaml
# This file is generated from CUE definitions in cicd/
# To regenerate: just workflows
# Do not edit this file directly

jobs:
  check_generated:
    runs-on: ubuntu-24.04
    steps:
      - name: Clone Repository
        uses: actions/checkout@v6
      - name: Install Cue
        uses: cue-lang/[email protected]
        with:
          version: v0.14.2
      - name: Install Just
        uses: extractions/setup-just@v2
        with:
          just-version: "1.36.0"
      - run: just check
        name: Check generated files
"on":
  pull_request: {}
```
🧩 Analysis chain
🌐 Web query:
CUE language latest stable version January 2025
💡 Result:
As of January 1, 2025 the latest stable CUE release was v0.10.1 (released October 22, 2024). [1][2]
Sources:
- Release timeline listing (v0.10.1 — Oct 22, 2024). [1]
- CUE release archive listing. [2]
🌐 Web query:
Just command runner latest stable version January 2025
💡 Result:
Do you mean the latest stable release of the "just" command runner as of January 2025 (i.e., the newest release on or before 2025-01-31)?
🌐 Web query:
CUE v0.14.2 release date when
💡 Result:
CUE v0.14.2 was released on October 10, 2024. [1]
Sources
[1] GitHub — cue-lang/cue Releases (v0.14.2).
🌐 Web query:
Just command runner releases latest January 2025
💡 Result:
The latest release of the Just command runner in January 2025 is version 1.39.0, released January 22, 2025. [1]
Sources
- casey/just Releases (GitHub) — 1.39.0 (22 Jan 2025). [1]
Update Just version to 1.39.0 or later.
The CUE version (v0.14.2, released October 10, 2024) is appropriate for the workflow's timeframe. However, the Just version (1.36.0) is outdated—version 1.39.0 was released on January 22, 2025, and should be used instead.
🤖 Prompt for AI Agents
In @.github/workflows/check-generated.yml around lines 1 - 22, Update the Just
installer step "Install Just" to use just-version "1.39.0" (or a >=1.39.0 range)
in the workflow step that uses extractions/setup-just@v2 by changing the
just-version field value from "1.36.0" to "1.39.0" so the CI installs the newer
Just release.
```yaml
          experimental-features = nix-command flakes
        install_url: https://releases.nixos.org/nix/nix-2.32.1/install
      - name: Run integration tests
        run: nix build .#checks.x86_64-linux.integration-tests --print-build-logs
```
Hardcoded x86_64-linux breaks multi-architecture support.
Lines 47 and 61 hardcode the system architecture in the nix build commands:

- `nix build .#checks.x86_64-linux.integration-tests`
- `nix build .#checks.x86_64-linux.lua-tests`

This prevents the workflow from running on other architectures (aarch64-linux, darwin variants). Since this file is generated from CUE definitions, the fix should be applied to the CUE source (likely cicd/workflows.cue).
Consider these solutions:
- Preferred: Use a matrix strategy in CUE to generate jobs for multiple systems
- Alternative: Use `nix build .#checks.$(nix eval --impure --raw --expr builtins.currentSystem).integration-tests` to detect the system dynamically

Would you like me to search for the CUE source that generates these hardcoded system references?

```bash
#!/bin/bash
# Find where x86_64-linux is hardcoded in CUE definitions
rg -n "x86_64-linux" cicd/
```

Also applies to: 61-61
🤖 Prompt for AI Agents
In @.github/workflows/pull-check.yml at line 47, The workflow hardcodes Nix
systems as checks.x86_64-linux.integration-tests and
checks.x86_64-linux.lua-tests; update the CUE source (likely cicd/workflows.cue)
so the generated job steps do not embed "x86_64-linux" directly: either
(preferred) add a systems matrix in the CUE job generation so jobs are emitted
for each target system (e.g., a systems list that produces
checks.<system>.integration-tests and checks.<system>.lua-tests), or
(alternative) interpolate the current system at runtime by generating the step
to run nix build .#checks.$(nix eval --impure --raw --expr
builtins.currentSystem).integration-tests (and similar for lua-tests); ensure
you remove the literal "x86_64-linux" occurrences in the CUE templates so other
architectures (aarch64-linux, darwin, etc.) are supported.
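The dynamic-detection alternative can be sketched as a small shell helper. Note this is only an illustration: the `uname`-based fallback mapping is an assumption that covers Linux hosts only, and the composed command is printed rather than executed.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Prefer asking Nix itself for the current system; fall back to a
# uname-based guess (Linux-only, illustrative assumption).
if command -v nix >/dev/null 2>&1; then
  system=$(nix eval --impure --raw --expr builtins.currentSystem)
else
  system="$(uname -m)-linux"
fi

# Compose the architecture-independent check invocation.
echo "nix build .#checks.${system}.integration-tests --print-build-logs"
```

The matrix approach in CUE remains preferable for CI, since it makes every supported system an explicit, cacheable job.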
```python
def run_command(cmd: list[str], description: str) -> bool:
    """Run a command and return success status."""
    try:
        result = subprocess.run(cmd, check=True)
        return result.returncode == 0
    except subprocess.CalledProcessError:
        return False
```
Unused parameter and redundant return check.
Two issues in run_command:
- The `description` parameter (line 65) is never used; consider removing it or using it in error messages for better diagnostics.
- Line 69's `result.returncode == 0` check is redundant: when `subprocess.run` is called with `check=True`, it raises `CalledProcessError` on non-zero exit, so reaching line 69 means the return code is already 0.
Based on static analysis hints.
🔧 Proposed fix

```diff
-def run_command(cmd: list[str], description: str) -> bool:
+def run_command(cmd: list[str]) -> bool:
     """Run a command and return success status."""
     try:
-        result = subprocess.run(cmd, check=True)
-        return result.returncode == 0
+        subprocess.run(cmd, check=True)
+        return True
     except subprocess.CalledProcessError:
         return False
```

Then update all call sites to remove the description argument:

```diff
-    if run_command(
-        ["nix", "build", ".#neovim", "--print-build-logs"],
-        "Building neovim package",
-    ):
+    if run_command(["nix", "build", ".#neovim", "--print-build-logs"]):
```

(Apply similar changes to lines 99-101, 110-117, and 126-128)
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def run_command(cmd: list[str]) -> bool:
    """Run a command and return success status."""
    try:
        subprocess.run(cmd, check=True)
        return True
    except subprocess.CalledProcessError:
        return False
```
🧰 Tools
🪛 Ruff (0.14.10)
65-65: Unused function argument: description
(ARG001)
68-68: subprocess call: check for execution of untrusted input
(S603)
69-69: Consider moving this statement to an else block
(TRY300)
🤖 Prompt for AI Agents
In @bin/run-tests.py around lines 65 - 71, The run_command function has an
unused parameter description and a redundant returncode check; remove the unused
description parameter from the run_command signature (keep only cmd: list[str])
and simplify the function to call subprocess.run(cmd, check=True) inside a
try/except that returns True on success and False on CalledProcessError, then
update all call sites that pass a description (e.g., places invoking run_command
with two args) to pass only the command list.
```python
    flake_root = find_flake_root()
    if flake_root is None:
        print_error("Not in a Nix flake directory (searched parent directories)")
        return 1
```
Missing chdir to flake root after finding it.
The script finds the flake root directory but doesn't change to it. This means nix commands will be executed from the current working directory, which may not be the flake root. If invoked from a subdirectory, commands like nix build .#neovim may fail.
📂 Proposed fix

```diff
     # Find flake root
     flake_root = find_flake_root()
     if flake_root is None:
         print_error("Not in a Nix flake directory (searched parent directories)")
         return 1
+
+    # Change to flake root directory
+    import os
+    os.chdir(flake_root)
```

Alternatively, add the import at the top of the file and use it here.
📝 Committable suggestion
```python
    flake_root = find_flake_root()
    if flake_root is None:
        print_error("Not in a Nix flake directory (searched parent directories)")
        return 1

    # Change to flake root directory
    import os
    os.chdir(flake_root)
```
🤖 Prompt for AI Agents
In @bin/run-tests.py around lines 77 - 80, After finding the flake root
(flake_root = find_flake_root()) the script never changes the current working
directory, so nix commands may run from the wrong path; add a chdir to the found
flake root (os.chdir(flake_root)) immediately after the flake_root check and
before returning/continuing execution, and ensure the os module is imported at
the top of the file if not already present; keep the existing error path using
print_error("Not in a Nix flake directory...") unchanged.
```cue
_cloneRepo: githubactions.#Step & {
	name: "Clone Repository"
	uses: "actions/checkout@v6"
}

_installCue: githubactions.#Step & {
	name: "Install Cue"
	uses: "cue-lang/[email protected]"
	with: version: _cueVersion
}

_installJust: githubactions.#Step & {
	name: "Install Just"
	uses: "extractions/setup-just@v2"
	with: "just-version": _justVersion
}

_installNix: githubactions.#Step & {
	name: "Install nix"
	uses: "cachix/install-nix-action@v31"
	with: {
		extra_nix_config: """
			auto-optimise-store = true
			experimental-features = nix-command flakes
			"""
		install_url: "https://releases.nixos.org/nix/nix-\(_nixVersion)/install"
	}
}
```
🧩 Analysis chain
🌐 Web query:
What is the latest stable version of actions/checkout GitHub Action?
💡 Result:
The latest stable version is v6. [1][2]
Sources:
[1] actions/checkout GitHub repo (README).
[2] Checkout action on GitHub Marketplace.
🌐 Web query:
What is the latest stable version of cue-lang/setup-cue GitHub Action?
💡 Result:
The latest stable release is v1.0.1. [1]
Sources
- cue-lang/setup-cue releases on GitHub (v1.0.1 — Latest, Feb 20, 2024). [1]
🌐 Web query:
What is the latest stable version of extractions/setup-just GitHub Action?
💡 Result:
The latest stable version is v3 (extractions/setup-just@v3). [1][2]
Sources:
[1] casey/just README (shows example using extractions/setup-just@v3).
[2] Cerbos changelog (and multiple project changelogs) noting bump to extractions/setup-just v3.
🌐 Web query:
What is the latest stable version of cachix/install-nix-action GitHub Action?
💡 Result:
The latest stable release is v31.9.0 (released Dec 11, 2025). [1][2]
Sources:
[1] cachix/install-nix-action releases.
[2] cachix/install-nix-action README (usage shows the v31 major tag).
Update extractions/setup-just to v3
Three of the four GitHub Actions versions are current (actions/checkout@v6, cue-lang/[email protected], and cachix/install-nix-action@v31), but extractions/setup-just is pinned to v2, which is outdated. Update to v3 to use the latest stable version.
🤖 Prompt for AI Agents
In @cicd/workflows.cue around lines 13 - 40, The workflow step defined by
_installJust pins the just installer action to "extractions/setup-just@v2" which
is outdated; update the uses field in the _installJust githubactions.#Step from
"extractions/setup-just@v2" to "extractions/setup-just@v3" so the step uses the
latest stable action version (leave the step name and "with" block unchanged).
```nix
lua-tests = pkgs.stdenv.mkDerivation {
  name = "lua-tests";
  src = ./.;
  nativeBuildInputs = [
    pkgs.lua5_1
    pkgs.luajitPackages.busted
  ];
  buildPhase = ''
    # For now, just verify the test files exist and are valid Lua
    echo "Checking Lua test files..."
    for test in tests/lua/*_spec.lua; do
      if [ -f "$test" ]; then
        echo "Found test: $test"
        ${pkgs.lua5_1}/bin/luac -p "$test" || exit 1
      fi
    done
  '';
  installPhase = ''
    mkdir -p $out
    echo "Lua tests validated" > $out/result
  '';
};
```
The lua-tests check only validates syntax, not running actual tests.
The derivation includes busted in nativeBuildInputs but only uses luac -p for syntax validation. While the comment on line 17 indicates this is intentional ("For now, just verify..."), this means the lua-tests job in CI won't catch test failures—only syntax errors.
Consider running the actual test suite with busted. Would you like me to provide a diff that executes the tests properly?
🧪 Proposed enhancement to run actual tests

```diff
       buildPhase = ''
-        # For now, just verify the test files exist and are valid Lua
-        echo "Checking Lua test files..."
-        for test in tests/lua/*_spec.lua; do
-          if [ -f "$test" ]; then
-            echo "Found test: $test"
-            ${pkgs.lua5_1}/bin/luac -p "$test" || exit 1
-          fi
-        done
+        export HOME=$TMPDIR
+        cd tests/lua
+        ${pkgs.luajitPackages.busted}/bin/busted *_spec.lua
       '';
```

📝 Committable suggestion
```nix
lua-tests = pkgs.stdenv.mkDerivation {
  name = "lua-tests";
  src = ./.;
  nativeBuildInputs = [
    pkgs.lua5_1
    pkgs.luajitPackages.busted
  ];
  buildPhase = ''
    export HOME=$TMPDIR
    cd tests/lua
    ${pkgs.luajitPackages.busted}/bin/busted *_spec.lua
  '';
  installPhase = ''
    mkdir -p $out
    echo "Lua tests validated" > $out/result
  '';
};
```
🤖 Prompt for AI Agents
In @tests/default.nix around lines 9 - 30, The lua-tests derivation only checks
syntax with luac; update the buildPhase to invoke the actual test runner (the
busted package included in nativeBuildInputs) so CI fails on test failures:
replace or augment the current luac loop in buildPhase with a command that runs
busted (referencing the busted binary from pkgs.luajitPackages.busted or simply
`busted`) against tests/lua, and ensure the command exits non‑zero on failure so
the derivation fails when tests fail; keep the luac syntax check if you want
both validation and test execution.
```bash
echo "Testing: LSP functionality..."

# Create a temporary Lua file
TMP_FILE=$(mktemp --suffix=.lua)
```
Critical: mktemp --suffix is not portable.
The --suffix flag is GNU-specific and will fail on macOS and BSD systems where mktemp has different syntax.
🔧 Portable mktemp solution

```diff
-TMP_FILE=$(mktemp --suffix=.lua)
+TMP_FILE=$(mktemp).lua
+mv "$(mktemp)" "$TMP_FILE" 2>/dev/null || TMP_FILE=$(mktemp -t tmp.XXXXXX).lua
```

Or use a simpler portable approach:

```diff
-TMP_FILE=$(mktemp --suffix=.lua)
+TMP_FILE=$(mktemp)
+mv "$TMP_FILE" "$TMP_FILE.lua"
+TMP_FILE="$TMP_FILE.lua"
```

📝 Committable suggestion
```bash
TMP_FILE=$(mktemp)
mv "$TMP_FILE" "$TMP_FILE.lua"
TMP_FILE="$TMP_FILE.lua"
```
🤖 Prompt for AI Agents
In @tests/integration/test-lsp.sh at line 11, The TMP_FILE assignment uses the
GNU-only "mktemp --suffix=.lua"; replace that with a portable mktemp invocation
that uses a template containing X characters (e.g., a template like
"tmp.XXXXXX.lua") so both GNU and BSD/macOS mktemp implementations work; update
the line that sets TMP_FILE (the one calling mktemp --suffix=.lua) to use the
template form instead and ensure the created filename ends with .lua.
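The create-then-rename approach can be exercised on its own; a minimal standalone sketch, independent of the actual test script:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Portable alternative to the GNU-only `mktemp --suffix=.lua`:
# create a plain temp file, then rename it to add the extension.
TMP_FILE=$(mktemp)
mv "$TMP_FILE" "$TMP_FILE.lua"
TMP_FILE="$TMP_FILE.lua"

echo "created: $TMP_FILE"
```

Both GNU and BSD/macOS `mktemp` accept the plain no-template form, so only the rename differs from the original script.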
```bash
if timeout 10 $NVIM --headless "$TMP_FILE" +"lua vim.lsp.start({name='test', cmd={'cat'}})" +qa 2>&1 | tee /tmp/nvim-lsp-test.log; then
  echo "✓ Neovim LSP test passed"
else
  echo "✗ Neovim LSP test failed"
  cat /tmp/nvim-lsp-test.log
  rm -f "$TMP_FILE"
  exit 1
fi
```
Critical: Pipe to tee masks exit code, preventing failure detection.
The command structure timeout ... | tee ... will always succeed because the exit code of tee is used, not the exit code of the nvim command. This means test failures won't be detected.
🔧 Fix to properly capture exit code

```diff
+LOG_FILE=$(mktemp)
+
 # Test that nvim can open a Lua file
-if timeout 10 $NVIM --headless "$TMP_FILE" +"lua vim.lsp.start({name='test', cmd={'cat'}})" +qa 2>&1 | tee /tmp/nvim-lsp-test.log; then
+if timeout 10 $NVIM --headless "$TMP_FILE" +"lua vim.lsp.start({name='test', cmd={'cat'}})" +qa > "$LOG_FILE" 2>&1; then
   echo "✓ Neovim LSP test passed"
+  cat "$LOG_FILE"
 else
   echo "✗ Neovim LSP test failed"
-  cat /tmp/nvim-lsp-test.log
+  cat "$LOG_FILE"
   rm -f "$TMP_FILE"
+  rm -f "$LOG_FILE"
   exit 1
 fi
 rm -f "$TMP_FILE"
+rm -f "$LOG_FILE"
 echo "✓ LSP integration test passed!"
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @tests/integration/test-lsp.sh around lines 19 - 26, The current pipeline
uses `timeout ... | tee /tmp/nvim-lsp-test.log` so the exit code of `tee` masks
the NVIM command; capture the real exit status of the left side and use that to
decide pass/fail. Replace the pipeline with one of: run the command redirecting
stdout/stderr into the log file and then test `$?` (e.g. `timeout 10 $NVIM ...
2>&1 > /tmp/nvim-lsp-test.log` then check `$?`), or keep `tee` but capture the
producer status via `status=${PIPESTATUS[0]}` (bash) after `timeout 10 $NVIM ...
2>&1 | tee /tmp/nvim-lsp-test.log` and use `$status` to decide success/failure;
ensure you still clean up `$TMP_FILE` on failure and print
`/tmp/nvim-lsp-test.log` when nonzero.
```bash
  cat /tmp/nvim-lsp-test.log
  rm -f "$TMP_FILE"
  exit 1
fi
```
Resource leak: Cleanup skipped on failure.
When exit 1 is called on line 25, the cleanup on line 28 never executes. Additionally, the hardcoded /tmp/nvim-lsp-test.log path should use a temporary file instead.
🧹 Use trap for guaranteed cleanup

```diff
 NVIM=${1:-nvim}

+# Create temp files
+TMP_FILE=$(mktemp)
+LOG_FILE=$(mktemp)
+
+# Ensure cleanup on exit
+trap 'rm -f "$TMP_FILE" "$LOG_FILE"' EXIT
+
 echo "Testing: LSP functionality..."

 # Create a temporary Lua file
-TMP_FILE=$(mktemp --suffix=.lua)
 cat > "$TMP_FILE" << 'EOF'
 local function test()
   print("Hello, world!")
 end
 EOF

 # Test that nvim can open a Lua file
-if timeout 10 $NVIM --headless "$TMP_FILE" +"lua vim.lsp.start({name='test', cmd={'cat'}})" +qa 2>&1 | tee /tmp/nvim-lsp-test.log; then
+if timeout 10 $NVIM --headless "$TMP_FILE" +"lua vim.lsp.start({name='test', cmd={'cat'}})" +qa > "$LOG_FILE" 2>&1; then
   echo "✓ Neovim LSP test passed"
 else
   echo "✗ Neovim LSP test failed"
-  cat /tmp/nvim-lsp-test.log
-  rm -f "$TMP_FILE"
+  cat "$LOG_FILE"
   exit 1
 fi
-rm -f "$TMP_FILE"
 echo "✓ LSP integration test passed!"
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In @tests/integration/test-lsp.sh around lines 23 - 26, The script exits early
with `exit 1` which skips later cleanup and also uses a hardcoded /tmp path;
introduce a temporary log file (e.g., LOG_FILE=$(mktemp)) instead of
/tmp/nvim-lsp-test.log and set a trap at the top like `trap 'rm -f "$TMP_FILE"
"$LOG_FILE"' EXIT` (or a failure-specific trap that cats the log before removing
it) to guarantee removal of both TMP_FILE and LOG_FILE on any exit, and replace
occurrences of the hardcoded path with "$LOG_FILE" and remove the explicit `rm
-f "$TMP_FILE"`/`cat`/`exit` sequence so cleanup always runs via the trap.
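The trap pattern can be verified in isolation. In this sketch the failing step is simulated with `false`, and the function name is illustrative:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Subshell-bodied function: the EXIT trap fires when the subshell exits,
# even when a step fails part-way through under `set -e`.
run_with_cleanup() (
  TMP_FILE=$(mktemp)
  trap 'rm -f "$TMP_FILE"' EXIT
  echo "$TMP_FILE"
  false  # simulated failing test step
)

leftover=$(run_with_cleanup || true)
if [ -e "$leftover" ]; then
  echo "leaked: $leftover"
else
  echo "cleaned up"
fi
```

Because the trap runs on every exit path, no explicit `rm -f` before each `exit` is needed.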
```bash
if $NVIM --headless +"checkhealth nobbz" +qa 2>&1 | tee /tmp/nvim-healthcheck.log; then
  echo "✓ Neovim health check passed"
else
  echo "✗ Neovim health check failed"
  cat /tmp/nvim-healthcheck.log
  exit 1
fi
```
Critical: Pipeline with tee masks nvim exit code.
The tee command always returns 0, so the if condition will always succeed regardless of whether nvim failed. This means the health check will never detect failures.
🐛 Proposed fix using PIPESTATUS

```diff
-if $NVIM --headless +"checkhealth nobbz" +qa 2>&1 | tee /tmp/nvim-healthcheck.log; then
+$NVIM --headless +"checkhealth nobbz" +qa 2>&1 | tee /tmp/nvim-healthcheck.log
+if [ "${PIPESTATUS[0]}" -eq 0 ]; then
   echo "✓ Neovim health check passed"
 else
   echo "✗ Neovim health check failed"
```
echo "✗ Neovim health check failed"📝 Committable suggestion
```bash
$NVIM --headless +"checkhealth nobbz" +qa 2>&1 | tee /tmp/nvim-healthcheck.log
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
  echo "✓ Neovim health check passed"
else
  echo "✗ Neovim health check failed"
  cat /tmp/nvim-healthcheck.log
  exit 1
fi
```
🤖 Prompt for AI Agents
In @tests/integration/test-startup.sh around lines 11 - 17, The pipeline using
NVIM --headless ... | tee /tmp/nvim-healthcheck.log is masked by tee returning
0; change the check to preserve Neovim's exit status by either enabling pipefail
before running the pipeline (set -o pipefail) or inspecting PIPESTATUS[0] right
after the pipeline and failing if it's non-zero; update the if/else logic around
the NVIM invocation (and continue to write output to /tmp/nvim-healthcheck.log)
so the script exits with failure when Neovim's exit code indicates an error.
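The masking behaviour is easy to reproduce without Neovim; a minimal sketch using `false` as a stand-in for the failing command:

```shell
#!/usr/bin/env bash

# A pipeline's exit status is that of its LAST command, so `tee`
# (which succeeds) masks the failure of `false`.
false | tee /dev/null
echo "pipeline status: $?"   # 0 - failure masked

# PIPESTATUS records each stage's status for the most recent pipeline.
false | tee /dev/null
echo "first stage status: ${PIPESTATUS[0]}"   # 1 - failure visible
```

Enabling `set -o pipefail` would instead make the whole pipeline report the first non-zero stage, which is the simpler fix when per-stage detail is not needed.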
Recent Changes (addressing code review):
Original prompt
Summary by CodeRabbit
New Features
Documentation
Chores