Add environments to the CLI #96
Conversation
Pull Request Overview
This pull request adds environment management capabilities to the Tower CLI, allowing users to create and list environments through the command-line interface. The changes include code formatting improvements and a new environments module for first-class environments support.
- Add CLI commands for creating and listing environments
- Code formatting improvements across multiple files (import reordering, line formatting)
- Update dependencies and module organization
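The new `environments` subcommand routes `list` and `create` actions much like the sketch below. This is a hypothetical, std-only illustration: the real CLI builds its command tree with clap in `crates/tower-cmd`, and the function name and return strings here are assumptions, not the crate's actual API.

```rust
// Hypothetical sketch of the `environments` subcommand dispatch added in
// this PR. The real implementation uses clap; only the command shape
// ("environments list" / "environments create <name>") mirrors the PR.
fn dispatch(args: &[&str]) -> String {
    match args {
        ["environments", "list"] => "listing environments".to_string(),
        ["environments", "create", name] => format!("creating environment {}", name),
        _ => "unknown command".to_string(),
    }
}

fn main() {
    // e.g. `tower environments create staging`
    println!("{}", dispatch(&["environments", "create", "staging"]));
}
```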
Reviewed Changes
Copilot reviewed 196 out of 199 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| crates/tower-cmd/src/environments.rs | New module implementing environment list and create commands |
| crates/tower-cmd/src/lib.rs | Add environments subcommand to main CLI interface |
| crates/tower-cmd/src/api.rs | Add API functions for listing and creating environments |
| Multiple other files | Code formatting improvements and import reorganization |
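One of the commits in this PR renders the environment list as a table. A minimal std-only sketch of that idea follows; the struct fields, column names, and formatting are assumptions for illustration, not the code in `crates/tower-cmd/src/environments.rs`.

```rust
// Hypothetical sketch of "List environments as a table". Field names and
// column layout are assumptions; the real rendering lives in
// crates/tower-cmd/src/environments.rs.
struct Environment {
    name: String,
    created_at: String,
}

fn render_table(envs: &[Environment]) -> String {
    // Left-align the name column to a fixed width so rows line up.
    let mut out = String::from("NAME        CREATED\n");
    for e in envs {
        out.push_str(&format!("{:<12}{}\n", e.name, e.created_at));
    }
    out
}

fn main() {
    let envs = vec![
        Environment { name: "default".into(), created_at: "2024-01-01".into() },
        Environment { name: "staging".into(), created_at: "2024-02-01".into() },
    ];
    print!("{}", render_table(&envs));
}
```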
socksy left a comment
sorry for the late review, read through it all now and it LGTM 👍
* Introducing black formatting for python code
* use pre-commit framework and update readme
* Format everything
* Update workflow to check
* cargo fmt
* feat: add Towerfile generation and manipulation tools
  - Add Towerfile generation from pyproject.toml
  - Support working directory parameter for file operations
  - Generate valid TOML format (not JSON-like)
  - Add parameter manipulation functions for Towerfile updates
* feat: add MCP server support with core functionality
  - Add MCP server implementation using rmcp abstractions
  - Implement core tower operations (apps, secrets, teams, etc.)
  - Add proper error handling for run operations
  - Support both local and remote tower operations via MCP protocol
* feat: add output abstraction and improve run handling
  - Add OutputSink trait for pluggable output handling
  - Improve run completion detection and matching
  - Restore error handling and ctrl-c support for remote runs
  - Update documentation for new connection methods
* feat: add comprehensive BDD integration tests
  - Replace Rust cucumber tests with Python BDD tests using behave
  - Add mock API server for isolated testing
  - Fix async hanging issues with proper behave async support
  - Implement JWT-based authentication for test scenarios
  - Add comprehensive test coverage for MCP operations
  - Include CI workflow for integration tests
* chore: update tooling and dependencies
  - Switch from fenix to rust-overlay in Nix flake
  - Add uv.lock files for Python dependency management
  - Update logging configuration to handle setup failures gracefully
  - Add MCP and testing dependencies to pyproject.toml
* fix: use appropriate fields after rebase on new API
* fix: resolve mock JWT authentication in integration tests

  The integration tests were failing because the mock API session endpoint response was missing required fields that were added in newer API schema versions. This caused JWT authentication to fail with "UnknownDescribeSessionValue" errors, preventing local app runs from working in tests. Changes:
  - Add missing required fields to mock API user response (is_confirmed, is_subscribed_to_changelog)
  - Add comprehensive documentation and debugging guide in tests/mock-api-server/README.md
  - Enhance test error messages to point to specific debugging steps
  - Add detailed comments in mock API explaining schema dependencies
  - Fix workflow file to use --color flag for better CI output

  🤖 Generated with [Claude Code](https://claude.ai/code)
  Co-Authored-By: Claude <noreply@anthropic.com>
* fix: remove pointless, potentially buggy directory change
* fix: make run_tests.py work + use latest behave + update workflow to use it + return empty vector for sources in towerfile gen
* feat: add support for new schedules in mcp server
* chore: run formatter
* chore: run formatter
* fix: update Towerfile test to reflect change to default to wildcard
* fix: make the schedules deletion and updating tests pass
* chore: disable broken build triple
* chore: pr feedback - consistency
* chore: switch to error enum pattern
* chore: fmt (black)
* Add environments to the CLI (#96)
  - Initial pass at environments management
  - chore: List environments as a table
  - Integrate feedback from @copilot

  Co-authored-by: Ben Lovell <ben.j.lovell@gmail.com>
* feat: create a tower package command
* fix: use global output::write() function instead of sinks and conditional logic everywhere
* fix: remove sinks and conditional logic everywhere in favour of global write
  - Remove do_run_local_capture and do_run_remote_capture functions
  - Remove OutputSink trait and ChannelSink implementations
  - Remove monitor_output_capture function
  - Simplify monitor_output to use output functions directly
  - Remove sink parameter from do_follow_run_impl and handle_run_completion
  - Improve error propagation in do_run_remote (use err.into() instead of tower_error)
  - Update MCP functions to use regular run functions instead of capture variants
* fix: remove timeout test

  We removed the timeout in 8f8cb29 (rebased away by now). The reason was that it was a timeout for the entire app run, at which point it was v. inappropriate. Even CI systems only do it on _output_ inactivity, not total inactivity. Secondly, it was the wrong location for implementing a timeout, which is clearly a concern at the MCP or CLI level, not lower down the stack in monitor_output(). But the thing I don't get is... here was the test the entire time, running fine in fix/cli-run-regressions. But there was no timeout mechanism. WTF?
* fix: improve deploy error handling and propagation
  - Convert deploy functions to return Results instead of void
  - Add proper error variants for DeployApp and DescribeApp API errors
  - Update MCP deploy tool to handle and report deploy failures
  - Ensure deploy errors are properly propagated up the call stack
* fix+feat: add --auto-create for apps (fixes mcp's deploy, which hung waiting for y/n answer)
* feat: add error handling utilities for deploy and MCP operations
  - Add extract_api_error_message() to format errors consistently
  - Add is_deployment_error() to detect deployment vs validation errors
  - Improve tower_run_remote error messages for better UX
* feat: implement MCP streaming output with real-time progress
  - Add streaming output functionality for MCP tools
  - Add RequestContext parameter to tower_run_local and tower_run_remote
  - Implement unified with_streaming method that accepts custom error handlers
  - Set up progress notifications during tool execution
  - Support different error handling strategies (local vs remote)
* refactor: simplify streaming abstraction
  - Rename with_streaming to execute_with_progress
  - Remove error handler closure, return Result directly
  - Let callers handle error formatting
* feat: notify_logging fits much better than notify_progress
* fix: capture run API errors properly, improve status monitoring

  Trying to make those tests go green. We had:
  - api errors being printed out to the user rather than the actual result, so we gotta look for those exact errors and return the right thing
  - wait for both output and status to complete
  - if the status task detects a crash, let's Err(Status::Crashed) ourselves
  - let runs bubble up errors from following runs (`.await?`)
  - get rid of the now pointless do_follow_run_impl vs do_follow_run split - the side-effectful writing is no longer a trouble for the mcp server
  - end spinner on run success
* fix: parse error messages correctly in MCP server
  - Error messages need to be matched against (fn extract_api_error_message)
  - Stream run output correctly - needed because those messages are matched by the CLI and integration tests
  - Return log streams when runs error out, as well as when they succeed
  - fix a working_dir spot in tower_deploy (I swear I did this one already)
  - add cute wee struct for streaming output instead of a tuple
* chore: fix tests to actually run CLI properly
* improve: test infrastructure and mock server enhancements
* fix: make mock server track "deployed" apps
  - Fix the validation error tests' expectation of being deployed or not
  - check for "nonexistent_param" and return proper error model for undeployed and invalid params errors (cli is not matching http statuses!)
* fix: real status group (+ debug help in assertion)
* chore: test log streaming with real streams in mock

  hmm this mock server is getting pretty featureful, when can it replace prod?
* fix: make sure mutable test apps don't clash

  The mock server is stateful, and with multiple tests running one after another (was getting issues with hello world being deployed when it shouldn't have been), and also with parallel workers, I think it's better we scope each test with its own unique hello_world app name. `test_app` remains non-unique - it needs to be expected to be there (I guess I could put it in the context, but it doesn't need to be yet so 🤷). The other alternative was resetting the mock DB, but I figured that might lead to race conditions and it was better if this stuff was as idempotent as possible
* fmt: fmt (forgive me, i'm too tired to work out which commits to fixup)
* fix(tests): ensure correct unique app names
  - make sure we can access the app names across steps in a scenario
  - move to @step everywhere so we can use @when or @then without it complaining
  - make sure we deploy apps that are supposed to exist in advance
  - hardcode in the predeployed app, to really communicate that this app is a special case (maybe can be replaced with actual deploying later)
* chore: try to get integration tests to run on this PR...
* fmt: cargo fmt
* chore: don't run CI twice for branches with no / in the name
* chore: make integration test CI job run faster
* chore: Claude feedback (already had a helper fn lol)
* refactor: use colored directly instead of stripping color with a regex
* chore: better error catching and remove debug printlns
* refactor: move api error response checking to unwrap_api_response
* chore: --auto-create -> --create
* Bump latest Tower API versions
* Bump version to v0.3.31
* fix: make CLI less brittle to erroneous nulls

  When we see a null value for a required field, the generated code will now instruct serde to give it the default value (e.g. "" for string, [] for array, etc). This may be nonsense, of course, but it's less likely to be nonsense than "null", and makes the CLI a whole lot less brittle to bugs on the backend - a single null value shouldn't kill the rest of a response!
* fix: make describe_app_response resilient using DefaultOnNull pattern
* chore: make generated files fmt'd properly
* chore: remove ...non existent rustfmt parameter?
* chore: generate with cargo fmt running
* chore: actually use environment now we can
* fix: integration test mock server to reflect modern reality

Co-authored-by: Vim Wickramasinghe <vim@sammuti.com>
Co-authored-by: Vim <121349594+sammuti@users.noreply.github.com>
Co-authored-by: Ben Lovell <ben.j.lovell@gmail.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Konstantinos Stefanidis Vozikis <kons.ste@gmail.com>
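The null-tolerance commits above describe substituting a type's default when a required field arrives as null. A std-only sketch of that behaviour follows; the real generated code expresses it through serde (the serde_with `DefaultOnNull` pattern), and this hypothetical helper only mimics the effect.

```rust
// Std-only illustration of the null-tolerance fix: when a required field
// deserializes to null (modeled here as Option::None), substitute the
// type's Default ("" for strings, [] for arrays) instead of failing the
// whole response. This helper is illustrative, not the crate's API.
fn field_or_default<T: Default>(parsed: Option<T>) -> T {
    parsed.unwrap_or_default()
}

fn main() {
    let name: String = field_or_default(None);      // null string -> ""
    let tags: Vec<String> = field_or_default(None); // null array  -> []
    assert_eq!(name, "");
    assert!(tags.is_empty());
}
```

The trade-off stated in the commit message applies: an empty default may still be nonsense, but it keeps one bad field from killing the rest of the response.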
Title says it all: we need to be able to create and list environments from the CLI now that we have better, first-class environments support. This PR introduces that.