This started as a fork of AssistAI with a focus on on-premise usage, so Ollama, LocalAI, and JLama are the primary supported backends. Other cloud providers will be supported as well (ChatGPT and Copilot support is in an alpha (Alpaca?) stage), along with some features found on the net. The plugin makes heavy use of the langchain4j library for model interaction, so most of its supported providers would be an option in the future.
The plugin brings a Large Language Model (LLM) assistant into your Eclipse IDE.
_Why that name? There are many llamas in the AI world, a whip is used to make animals do what you want, and according to some companies it is good practice to whip llamas *** ... ;-)_
- Engage in multiple conversations with different LLMs about the content of the currently opened files
- In-editor prompt to generate specific code at current cursor location
- Copy generated code blocks to the clipboard or save them to a file
- Customize pre-defined prompts
- Using the function call feature, LlamaWhip can:
  - use related source code to better understand the context
  - perform a web search (search provider selectable per query)
  - read the content of a web page
  - open a new file or compare editor
- Theme support (light and dark right now)
- Create contexts for the LLM that include source files
- Switch between defined LLMs per request
- Persistent chat history
You can also pose general questions to the LLM, just like with the regular LLM interfaces.
| Provider | Support |
|---|---|
| Ollama | Working |
| JLama | Preview |
| LocalAI | InDev |
| GitHub | Planning |
| ChatGPT | Planning |
Download the release and extract it into your %ECLIPSE%/dropins folder. A Marketplace entry will follow once the plugin's code is reasonably stable.
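The extraction step above can be sketched as a short shell snippet. The Eclipse location and archive name below are assumptions — substitute your actual installation path and the release file you downloaded:

```shell
set -e

# Adjust to your Eclipse installation directory (assumption: ~/eclipse)
ECLIPSE_HOME="${ECLIPSE_HOME:-$HOME/eclipse}"

# Make sure the dropins folder exists
mkdir -p "$ECLIPSE_HOME/dropins"

# Extract the downloaded release into dropins
# (archive name is hypothetical — use the file you downloaded):
# unzip -o llamawhip-release.zip -d "$ECLIPSE_HOME/dropins"

ls -d "$ECLIPSE_HOME/dropins"
```

Restart Eclipse afterwards so the dropins folder is rescanned and the plugin is picked up.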
After installing the plugin, configure access to your LLM backend via the LlamaWhip panel in the general preferences.
Add the LlamaWhip Chat to your IDE:
- Open Window > Show View > Other
- Select Assistant Chat from the LlamaWhip category
Press CTRL-ALT-A to open a small inline code generation prompt
Press "Attach" in the chat window, or drop files on the button, to add them to your message (avoid attaching too many or too large files, as they are added to the chat request directly)