llmcli --- Shell Task Automation with LLM Assistance

llmcli is a command-line tool that uses a locally or remotely hosted Large Language Model (LLM) to:

  • Answer user questions

  • Automate shell tasks by generating and executing commands step by step

  • Summarize the results of those tasks after execution

It communicates with an OpenAI-compatible API endpoint (tested with OpenWebUI/Ollama) and can operate in interactive or non-interactive modes.
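"OpenAI-compatible" means the tool speaks the standard chat-completions request shape. The payload below is a sketch of that shape (assumed, not copied from the script); the commented `curl` line shows how you could exercise the same endpoint manually using the environment variables described under Configuration.

```shell
# Assumed request shape: a standard OpenAI-compatible chat-completions payload.
BODY='{"model": "mistral", "messages": [{"role": "user", "content": "Hello"}]}'

# To send it yourself, using the endpoint and token from your environment:
# curl -s "$LLMCLI_API" \
#   -H "Authorization: Bearer $LLMCLI_API_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"

printf '%s\n' "$BODY"
```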


Configuration

The behavior of llmcli can be customized through environment variables, so secrets and endpoint configuration never need to be embedded in the script itself.

The following variables are supported:

  • LLMCLI_API: URL of the API endpoint

  • LLMCLI_API_TOKEN: API token used for authentication

  • LLMCLI_MODEL: The name of the LLM model to use

  • LLMCLI_COLLECTION_ID: Optional collection ID for retrieval-augmented queries

These should be defined in your shell environment before running the script.

Example:

export LLMCLI_API="http://localhost:4000/v1/chat/completions"
export LLMCLI_API_TOKEN="your-token"
export LLMCLI_MODEL="mistral"

Recommended: Use environment variables to manage credentials and endpoint configuration.
Optional: You may also hardcode default values inside the script if you are the sole user or running in a secure, isolated environment --- but this is not recommended for shared or production use.
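If you do hardcode defaults, shell parameter expansion keeps environment overrides working. A minimal sketch (the default values here are placeholders, not the script's actual defaults):

```shell
# Environment variables win; the values after ":-" are fallback defaults.
LLMCLI_API="${LLMCLI_API:-http://localhost:4000/v1/chat/completions}"
LLMCLI_MODEL="${LLMCLI_MODEL:-mistral}"
LLMCLI_API_TOKEN="${LLMCLI_API_TOKEN:-}"   # no safe default for a secret

printf '%s\n' "$LLMCLI_MODEL"
```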


Command-line Options

  • -s or --shell-task
    Automate a shell task using the LLM to generate and explain commands.

    Note: This option must be immediately followed by the shell task description, without any intervening flags or arguments.

  • -f or --file
    Include the contents of a file as part of the prompt for context.

  • -v
    Show basic status messages (e.g. "Thinking...").

  • -vv
    Also show the LLM's reasoning steps.

  • -vvv
    Also display each generated command and its output.

  • -i
    Interactive mode. Prompts you to confirm each command before it's run.

  • -h
    Display help and usage instructions.
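The verbosity flags are cumulative: each level includes everything the previous one shows. As an illustration only (not llmcli's actual code), this is how such flags are commonly mapped to a numeric level; the sample argument list here mirrors the interactive example below:

```shell
# Hypothetical sketch: map -v/-vv/-vvv to a numeric verbosity level.
verbosity=0
for arg in -i -vv -s "Create 4 empty files of 4KB each"; do
  case "$arg" in
    -v)   verbosity=1 ;;
    -vv)  verbosity=2 ;;
    -vvv) verbosity=3 ;;
  esac
done
printf 'verbosity=%s\n' "$verbosity"
```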


Usage

Ask a question:

llmcli "How do I configure rsync over SSH?"

Ask a question with file context:

llmcli -f config.txt "What does this configuration do?"

  • or, equivalently, via stdin:

cat config.txt | llmcli "What does this configuration do?"

To interactively create four 4KB files, with explanations and confirmations before each step:

llmcli -i -s "Create 4 empty files of 4KB each"

Run an automated shell task (WARNING: no interaction / confirmations):

llmcli -s "Set up a local HTTP server using Python"


Important Warning and Disclaimer

This tool can execute arbitrary shell commands generated by an AI.

If you use llmcli without the -i (interactive) option, commands are executed immediately without user confirmation. This can be risky, especially if:

  • You run the script with elevated permissions (e.g. sudo)

  • The LLM misinterprets your request or generates destructive commands

Always use the -i flag unless you are completely confident in both your request and the model's output.
NEVER run the script as root unless you are fully aware of the risks.


About the Code

This script was partially written using a language model (LLM).
While care has been taken to review its logic, users are strongly encouraged to read and understand the code before trusting it with system-level tasks.


License and Liability

This tool is provided without warranty or guarantees.
The author assumes no responsibility for any damage, data loss, or unintended side effects caused by use of this tool.
Use at your own risk.




Use wisely. Test safely.
