3 changes: 3 additions & 0 deletions src/testing/config.rs
@@ -92,6 +92,9 @@ pub struct CommandExpectation
    pub success: Option<bool>,
    /// Substring that should be in the output
    pub output_contains: Option<String>,
    /// Allow failures without failing the test
Copilot AI Jan 21, 2026
The documentation comment for the allow_failure field is incomplete. It should explain when this flag should be used versus success: false, and clarify the behavior when a command succeeds despite allow_failure: true being set (does it pass or fail?). Consider expanding this to: "Allow the command to fail without failing the test. Unlike success: false which expects failure, this permits either success or failure."

Suggested change
    /// Allow failures without failing the test
    /// Allow the command to fail without failing the test. Unlike `success: false`
    /// which expects failure, this permits either success or failure.

    #[serde(default)]
    pub allow_failure: bool,
}

/// Expectations for a stop event
70 changes: 69 additions & 1 deletion src/testing/runner.rs
@@ -230,6 +230,7 @@ async fn execute_command_step(
    let cmd = parse_command(command_str)?;

    let result = client.send_command(cmd).await;
    let allow_failure = expect.map(|exp| exp.allow_failure).unwrap_or(false);

    // Check expectations
    if let Some(exp) = expect {
@@ -255,7 +256,29 @@
        return Ok(());
    }

    result?;
    if allow_failure && result.is_err() {
        println!(
            " {} Step {}: {} (allowed failure)",
            "✓".green(),
            step_num,
            command_str.dimmed()
        );
        return Ok(());
    }

    let value = result?;

    if let Some(exp) = expect {
        if let Some(expected_substr) = &exp.output_contains {
            let output = serde_json::to_string(&value).unwrap_or_default();

medium

The use of unwrap_or_default() on serde_json::to_string(&value) can mask potential serialization errors. If value fails to serialize, output will become an empty string, leading to a TestAssertion error that the output is missing the substring, rather than indicating a serialization issue. This could make debugging test failures more difficult. Consider handling the serde_json::to_string result explicitly to provide a more informative error message if serialization fails.

            let output = serde_json::to_string(&value).map_err(|e| {
                Error::TestAssertion(format!(
                    "Failed to serialize command result for output check: {}",
                    e
                ))
            })?;

            if !output.contains(expected_substr) {
                return Err(Error::TestAssertion(format!(
                    "Command '{}' output missing '{}'",
                    command_str, expected_substr
                )));
            }
        }
    }

    println!(
        " {} Step {}: {}",
@@ -736,6 +759,51 @@ fn parse_command(s: &str) -> Result<Command> {
            })
        }

        "context" | "where" => {
            let lines = if let Some(value) = args.first() {
                value.parse().map_err(|_| {
                    Error::Config(format!("Invalid context line count: {}", value))
                })?
Comment on lines +762 to +766


P2: Support CLI-style --lines for context

The scenario parser only accepts a positional number for context, so a valid CLI form like context --lines 5 (the CLI definition uses a --lines flag; see src/commands.rs:100-105) will be parsed as a non-numeric argument and fail with Invalid context line count: --lines. This breaks scenarios that mirror the real CLI syntax even though the command is valid for users. Consider handling --lines in the parse_command branch so test scenarios can use the same flags as the CLI.
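
For illustration, one minimal way this arm could accept the CLI flag form as well as the positional number (a sketch only, reusing the args slice, Error::Config, and Command::Context shapes visible in this diff; the exact flag handling is an assumption, not what the PR implements):

        "context" | "where" => {
            let lines = match args.first() {
                // Hypothetical: CLI-style form `context --lines 5`
                Some(&"--lines") => {
                    let value = args.get(1).ok_or_else(|| {
                        Error::Config("context --lines requires a number".to_string())
                    })?;
                    value.parse().map_err(|_| {
                        Error::Config(format!("Invalid context line count: {}", value))
                    })?
                }
                // Positional form `context 5`
                Some(value) => value.parse().map_err(|_| {
                    Error::Config(format!("Invalid context line count: {}", value))
                })?,
                // No argument: fall back to the default of 5 lines
                None => 5,
            };
            Ok(Command::Context { lines })
        }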


            } else {
                5
            };
            Ok(Command::Context { lines })
        }
Comment on lines +762 to +771
Copilot AI Jan 21, 2026

The context and where command parsing accepts an optional line count parameter, but if a non-numeric value is provided as the first argument, the error message refers to "context line count" regardless of whether the user typed "context" or "where". Consider making the error message dynamic to reflect the actual command used, or accept that "context" is the canonical name shown in error messages.
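
If the dynamic wording is preferred, one small way to do it is to bind the matched token in the pattern (a sketch; the cmd @ binding and the message text are assumptions, not part of this PR):

        cmd @ ("context" | "where") => {
            // Bind whichever spelling the scenario used so the error can echo it.
            let lines = if let Some(value) = args.first() {
                value.parse().map_err(|_| {
                    Error::Config(format!("Invalid {} line count: {}", cmd, value))
                })?
            } else {
                5
            };
            Ok(Command::Context { lines })
        }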


"output" => {
let mut tail = None;
let mut clear = false;
let mut idx = 0;

while idx < args.len() {
match args[idx] {
"--tail" | "-t" => {
if idx + 1 >= args.len() {
return Err(Error::Config(
"output --tail requires a number".to_string(),
));
}
tail = Some(args[idx + 1].parse().map_err(|_| {
Error::Config(format!("Invalid tail value: {}", args[idx + 1]))
})?);
idx += 2;
}
"--clear" => {
clear = true;
idx += 1;
}
other => {
return Err(Error::Config(format!(
"Unknown output option: {}",
other
)));
}
}
}

Ok(Command::GetOutput { tail, clear })
}
Comment on lines +762 to +805
Copilot AI Jan 21, 2026

The newly added command parsing for context/where and output commands lacks unit test coverage. The existing test module includes tests for other command parsers (simple commands, break commands, breakpoint subcommands, print commands), but the new additions are not tested. Consider adding test cases such as test_parse_context_commands() and test_parse_output_commands() to verify correct parsing of line counts, tail options, and clear flags.
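
A rough sketch of what such tests could look like, following the existing parser tests in this module (the matches! assertions assume Command::Context { lines } and Command::GetOutput { tail, clear } have the shapes shown in this diff):

    #[test]
    fn test_parse_context_commands() {
        // Default line count when no argument is given.
        assert!(matches!(parse_command("context").unwrap(), Command::Context { lines: 5 }));
        // Positional line count, via either alias.
        assert!(matches!(parse_command("where 12").unwrap(), Command::Context { lines: 12 }));
        // Non-numeric argument is rejected.
        assert!(parse_command("context abc").is_err());
    }

    #[test]
    fn test_parse_output_commands() {
        // Bare `output` leaves both options unset.
        assert!(matches!(
            parse_command("output").unwrap(),
            Command::GetOutput { tail: None, clear: false }
        ));
        // `--tail` and `--clear` are both honoured.
        assert!(matches!(
            parse_command("output --tail 20 --clear").unwrap(),
            Command::GetOutput { tail: Some(20), clear: true }
        ));
        // `--tail` without a value and unknown flags are errors.
        assert!(parse_command("output --tail").is_err());
        assert!(parse_command("output --bogus").is_err());
    }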


"stop" => Ok(Command::Stop),
"detach" => Ok(Command::Detach),
"restart" => Ok(Command::Restart),
207 changes: 207 additions & 0 deletions tests/scenarios/commands_c.yml
@@ -0,0 +1,207 @@
# C Command Coverage Test
# Exercises debugger commands and subcommands against a C fixture.

name: "C Command Coverage Test"
description: "C coverage for breakpoints, stepping, stack, threads, context, output, and session commands"

setup:
  - shell: "mkdir -p tests/fixtures/bin && gcc -g -O0 tests/fixtures/simple.c -o tests/fixtures/bin/simple_c"

target:
  program: "tests/fixtures/bin/simple_c"
  args: []
  stop_on_entry: true

steps:
  - action: command
    command: "b tests/fixtures/simple.c:24"
    expect:
      success: true

  - action: command
    command: "breakpoint add tests/fixtures/simple.c:6"
    expect:
      success: true

  - action: command
    command: "breakpoint list"
    expect:
      output_contains: "breakpoints"

  - action: command
    command: "breakpoint disable 2"
    expect:
      output_contains: "disabled"

  - action: command
    command: "breakpoint enable 2"
    expect:
      output_contains: "enabled"

  - action: command
    command: "continue"

  - action: await
    timeout: 15
    expect:
      reason: "breakpoint"
      file: "simple.c"
      line: 24

  - action: command
    command: "pause"
    expect:
      allow_failure: true

  - action: command
    command: "locals"
    expect:
      output_contains: "variables"

  - action: inspect_locals
    asserts:
      - name: "x"
        value_contains: "10"
      - name: "y"
        value_contains: "20"

  - action: command
    command: "print x + y"
    expect:
      output_contains: "30"

  - action: command
    command: "eval x + y"
    expect:
      output_contains: "30"

  - action: command
    command: "step"
    expect:
      output_contains: "stepping"

  - action: await
    timeout: 10
    expect:
      reason: "step"

  - action: command
    command: "backtrace"
    expect:
      output_contains: "frames"

  - action: inspect_stack
    asserts:
      - index: 0
        function: "add"
        file: "simple.c"
      - index: 1
        function: "main"

  - action: command
    command: "frame 0"
    expect:
      output_contains: "selected"

  - action: command
    command: "up"
    expect:
      output_contains: "selected"

  - action: inspect_locals
    asserts:
      - name: "x"
        value_contains: "10"
      - name: "y"
        value_contains: "20"

  - action: command
    command: "down"
    expect:
      output_contains: "selected"

  - action: inspect_locals
    asserts:
      - name: "a"
        value_contains: "10"
      - name: "b"
        value_contains: "20"

  - action: command
    command: "next"
    expect:
      output_contains: "stepping"

  - action: await
    timeout: 10
    expect:
      reason: "step"

  - action: command
    command: "finish"
    expect:
      output_contains: "stepping"

  - action: await
    timeout: 10

  - action: command
    command: "context 5"
    expect:
      output_contains: "source_lines"

  - action: command
    command: "threads"
    expect:
      output_contains: "threads"

  - action: command
    command: "thread 1"
    expect:
      output_contains: "selected"

  - action: command
    command: "frame 0"
    expect:
      output_contains: "selected"

  - action: command
    command: "breakpoint remove 2"
    expect:
      output_contains: "removed"

  - action: command
    command: "breakpoint remove --all"
    expect:
      output_contains: "removed"

  - action: command
    command: "continue"

  - action: await
    timeout: 15
    expect:
      reason: "exited"

  - action: command
    command: "output"
    expect:
      output_contains: "Sum"

  - action: check_output
    contains: "Factorial"

  - action: command
    command: "restart"
    expect:
      allow_failure: true

  - action: command
    command: "stop"
    expect:
      allow_failure: true

  - action: command
    command: "detach"
    expect:
      allow_failure: true