Add command-coverage scenarios and extend scenario runner for command output/assertions #7
base: master
Conversation
Summary of Changes

Hello @akiselev, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the debugger's testing framework by introducing a suite of new, language-specific scenarios that validate the functionality of various CLI commands. It also upgrades the underlying scenario runner to provide more sophisticated assertion capabilities, allowing tests to check for specific output content and to gracefully handle commands that are expected to fail. These changes collectively improve the robustness and expressiveness of the debugger's test suite.
Code Review
This pull request introduces significant enhancements to the test scenario runner, allowing for more flexible command expectation handling and extending command parsing capabilities. The addition of allow_failure and output_contains fields to CommandExpectation provides better control over test assertions, especially for commands that might not always succeed or whose output needs specific validation. The new context and output command parsing further expands the test runner's ability to simulate CLI interactions. The accompanying language-specific test scenarios for C, Rust, Go, and Python effectively demonstrate and validate these new features, ensuring broader command coverage. Overall, these changes improve the robustness and expressiveness of the testing framework.
```rust
if let Some(exp) = expect {
    if let Some(expected_substr) = &exp.output_contains {
        let output = serde_json::to_string(&value).unwrap_or_default();
```
The use of unwrap_or_default() on serde_json::to_string(&value) can mask potential serialization errors. If value fails to serialize, output will become an empty string, leading to a TestAssertion error that the output is missing the substring, rather than indicating a serialization issue. This could make debugging test failures more difficult. Consider handling the serde_json::to_string result explicitly to provide a more informative error message if serialization fails.
```rust
let output = serde_json::to_string(&value).map_err(|e| {
    Error::TestAssertion(format!(
        "Failed to serialize command result for output check: {}",
        e
    ))
})?;
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d58beebe5e
| "context" | "where" => { | ||
| let lines = if let Some(value) = args.first() { | ||
| value.parse().map_err(|_| { | ||
| Error::Config(format!("Invalid context line count: {}", value)) | ||
| })? |
Support CLI-style --lines for context
The scenario parser only accepts a positional number for context, so a valid CLI form like context --lines 5 (the CLI definition uses a --lines flag; see src/commands.rs:100-105) will be parsed as a non-numeric argument and fail with Invalid context line count: --lines. This breaks scenarios that mirror the real CLI syntax even though the command is valid for users. Consider handling --lines in the parse_command branch so test scenarios can use the same flags as the CLI.
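One possible shape for that fix, sketched against simplified stand-ins for the runner's types (`parse_context_args` and `parse_lines` are hypothetical helper names; the real logic lives inline in `parse_command` in src/testing/runner.rs):

```rust
// Stand-ins for the real `Command` and `Error` types in src/testing/runner.rs.
#[derive(Debug, PartialEq)]
enum Command {
    Context { lines: usize },
}

#[derive(Debug)]
enum Error {
    Config(String),
}

/// Parse the arguments of `context`/`where`, accepting either a positional
/// count ("context 5") or the CLI-style flag ("context --lines 5").
fn parse_context_args(args: &[&str]) -> Result<Command, Error> {
    let lines = match args {
        [] => 5, // default used by the existing parser
        ["--lines", value] => parse_lines(value)?,
        [value] if !value.starts_with('-') => parse_lines(value)?,
        _ => return Err(Error::Config("Usage: context [--lines <N>]".to_string())),
    };
    Ok(Command::Context { lines })
}

fn parse_lines(value: &str) -> Result<usize, Error> {
    value
        .parse()
        .map_err(|_| Error::Config(format!("Invalid context line count: {}", value)))
}
```

With this shape, a scenario line that mirrors the CLI syntax parses the same way as the positional form, and a bare `--lines` with no number becomes a usage error instead of a misleading line-count error.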
Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
```rust
pub success: Option<bool>,
/// Substring that should be in the output
pub output_contains: Option<String>,
/// Allow failures without failing the test
```
Copilot AI, Jan 21, 2026
The documentation comment for the allow_failure field is incomplete. It should explain when this flag should be used versus success: false, and clarify the behavior when a command succeeds despite allow_failure: true being set (does it pass or fail?). Consider expanding this to: "Allow the command to fail without failing the test. Unlike success: false which expects failure, this permits either success or failure."
Suggested change:

```rust
/// Allow the command to fail without failing the test. Unlike `success: false`
/// which expects failure, this permits either success or failure.
```
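The distinction the suggestion draws could be sketched like this; `expectation_met` is a hypothetical helper, and the real logic lives in `execute_command_step`:

```rust
// Hypothetical sketch of the semantics described above, with simplified
// stand-ins for the real CommandExpectation in src/testing/config.rs.
struct CommandExpectation {
    success: Option<bool>,
    allow_failure: bool,
}

/// Decide whether an observed command outcome satisfies the expectation.
fn expectation_met(exp: &CommandExpectation, command_succeeded: bool) -> bool {
    if exp.allow_failure {
        // Unlike `success: Some(false)`, which *requires* failure,
        // `allow_failure` accepts either outcome.
        return true;
    }
    match exp.success {
        Some(expected) => command_succeeded == expected,
        None => command_succeeded, // by default the command must succeed
    }
}
```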
| "context" | "where" => { | ||
| let lines = if let Some(value) = args.first() { | ||
| value.parse().map_err(|_| { | ||
| Error::Config(format!("Invalid context line count: {}", value)) | ||
| })? | ||
| } else { | ||
| 5 | ||
| }; | ||
| Ok(Command::Context { lines }) | ||
| } |
Copilot AI, Jan 21, 2026
The context and where command parsing accepts an optional line count parameter, but if a non-numeric value is provided as the first argument, the error message refers to "context line count" regardless of whether the user typed "context" or "where". Consider making the error message dynamic to reflect the actual command used, or accept that "context" is the canonical name shown in error messages.
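A minimal sketch of the dynamic-message option, with `parse_line_count` as a hypothetical helper that the `"context" | "where"` arm would call with the matched token:

```rust
// Stand-in for the real Error type in src/testing/runner.rs.
#[derive(Debug)]
enum Error {
    Config(String),
}

/// Build the line-count error with whichever alias the user actually typed,
/// so "where abc" reports "Invalid where line count: abc".
fn parse_line_count(cmd_name: &str, value: &str) -> Result<usize, Error> {
    value.parse().map_err(|_| {
        Error::Config(format!("Invalid {} line count: {}", cmd_name, value))
    })
}
```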
| "context" | "where" => { | ||
| let lines = if let Some(value) = args.first() { | ||
| value.parse().map_err(|_| { | ||
| Error::Config(format!("Invalid context line count: {}", value)) | ||
| })? | ||
| } else { | ||
| 5 | ||
| }; | ||
| Ok(Command::Context { lines }) | ||
| } | ||
|
|
||
| "output" => { | ||
| let mut tail = None; | ||
| let mut clear = false; | ||
| let mut idx = 0; | ||
|
|
||
| while idx < args.len() { | ||
| match args[idx] { | ||
| "--tail" | "-t" => { | ||
| if idx + 1 >= args.len() { | ||
| return Err(Error::Config( | ||
| "output --tail requires a number".to_string(), | ||
| )); | ||
| } | ||
| tail = Some(args[idx + 1].parse().map_err(|_| { | ||
| Error::Config(format!("Invalid tail value: {}", args[idx + 1])) | ||
| })?); | ||
| idx += 2; | ||
| } | ||
| "--clear" => { | ||
| clear = true; | ||
| idx += 1; | ||
| } | ||
| other => { | ||
| return Err(Error::Config(format!( | ||
| "Unknown output option: {}", | ||
| other | ||
| ))); | ||
| } | ||
| } | ||
| } | ||
|
|
||
| Ok(Command::GetOutput { tail, clear }) | ||
| } |
Copilot AI, Jan 21, 2026
The newly added command parsing for context/where and output commands lacks unit test coverage. The existing test module includes tests for other command parsers (simple commands, break commands, breakpoint subcommands, print commands), but the new additions are not tested. Consider adding test cases such as test_parse_context_commands() and test_parse_output_commands() to verify correct parsing of line counts, tail options, and clear flags.
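A sketch of what such tests could look like. The `parse_command` here is a minimal re-implementation of just the two branches quoted above, so the example is self-contained; the real tests would call the runner's own parser in src/testing/runner.rs:

```rust
// Minimal re-implementation of the two new parser branches, plus the kind of
// unit tests the comment asks for. Hypothetical sketch, not the real module.
#[derive(Debug, PartialEq)]
enum Command {
    Context { lines: usize },
    GetOutput { tail: Option<usize>, clear: bool },
}

#[derive(Debug)]
enum Error {
    Config(String),
}

fn parse_command(input: &str) -> Result<Command, Error> {
    let mut parts = input.split_whitespace();
    let name = parts.next().unwrap_or("");
    let args: Vec<&str> = parts.collect();
    match name {
        "context" | "where" => {
            let lines = match args.first() {
                Some(value) => value.parse().map_err(|_| {
                    Error::Config(format!("Invalid context line count: {}", value))
                })?,
                None => 5,
            };
            Ok(Command::Context { lines })
        }
        "output" => {
            let mut tail = None;
            let mut clear = false;
            let mut idx = 0;
            while idx < args.len() {
                match args[idx] {
                    "--tail" | "-t" => {
                        let value = args.get(idx + 1).ok_or_else(|| {
                            Error::Config("output --tail requires a number".to_string())
                        })?;
                        tail = Some(value.parse().map_err(|_| {
                            Error::Config(format!("Invalid tail value: {}", value))
                        })?);
                        idx += 2;
                    }
                    "--clear" => {
                        clear = true;
                        idx += 1;
                    }
                    other => {
                        return Err(Error::Config(format!(
                            "Unknown output option: {}",
                            other
                        )))
                    }
                }
            }
            Ok(Command::GetOutput { tail, clear })
        }
        other => Err(Error::Config(format!("Unknown command: {}", other))),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_parse_context_commands() {
        assert_eq!(parse_command("context").unwrap(), Command::Context { lines: 5 });
        assert_eq!(parse_command("where 10").unwrap(), Command::Context { lines: 10 });
        assert!(parse_command("context abc").is_err());
    }

    #[test]
    fn test_parse_output_commands() {
        assert_eq!(
            parse_command("output -t 20").unwrap(),
            Command::GetOutput { tail: Some(20), clear: false }
        );
        assert_eq!(
            parse_command("output --clear").unwrap(),
            Command::GetOutput { tail: None, clear: true }
        );
        assert!(parse_command("output --tail").is_err());
        assert!(parse_command("output --bogus").is_err());
    }
}
```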
Motivation

Description

- Added an allow_failure flag and an output_contains field to CommandExpectation in src/testing/config.rs, so commands can be marked as permitted to fail and assertions can be made on command outputs.
- Updated execute_command_step in src/testing/runner.rs to honor allow_failure, validate output_contains against the serialized command response, and print a concise pass message for allowed failures.
- Extended parse_command in src/testing/runner.rs to support the context/where command (with an optional line count) and the output command with --tail/-t and --clear options, mapping them to Command::Context and Command::GetOutput.
- Added new scenarios under tests/scenarios/: commands_c.yml, commands_rust.yml, commands_go.yml, and commands_python.yml, each exercising command coverage against the tests/fixtures/simple.* programs and using the new expectation fields.

Testing
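As an illustration of the new expectation fields, a scenario step might look like the following. The surrounding YAML schema (the `steps`/`command`/`expect` keys and the command strings) is assumed here for illustration; only `allow_failure` and `output_contains` come from this PR:

```yaml
steps:
  - command: "output --tail 5"
    expect:
      # Assert that the serialized command response contains this substring.
      output_contains: "hello"
  - command: "continue"
    expect:
      # The command may fail without failing the test.
      allow_failure: true
```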