This system lets you trigger an AI agent by mentioning it in a comment (`@ai` by default, configurable), which can then search, edit, and commit your code, as well as post comments on your GitLab MR or issue. The agent runs securely in your pipeline runner.
This project was forked from RealMikeChong. I used his GitLab webhook app, refactored the runner, added more documentation, and added MCP & opencode support.
- Single webhook endpoint for all projects
- Triggers pipelines when `@ai` is mentioned in comments (or your custom trigger phrase)
- Updates the comment with progress (emoji reaction)
- Configurable rate limiting (or none at all)
- Works with personal access tokens (no OAuth required)
- Docker-ready deployment
- MCP Server Integration
We need to set up three pieces: a webhook in GitLab, the GitLab Webhook App that receives its events, and a pipeline that runs the agent.
To receive comments from GitLab, you need to set up a webhook in your GitLab project. This webhook will send a POST request to the GitLab Webhook App whenever a comment is made.
Go to your GitLab project settings, then to the Webhooks section.
Enter https://your-server.com/webhook as the URL (replace your-server.com with your actual server address).
Tip
If you are developing locally, use ngrok or the built-in port forwarding from VS Code.
Set a secret token for the webhook (you will need to set this in your GitLab Webhook App).
Add the Comments trigger for the webhook.
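Once the webhook app is reachable, you can sanity-check the endpoint with a hand-rolled request. The headers mimic what GitLab sends for comment (note) events, but the JSON body is only a minimal stand-in, so don't expect a full agent run from it:

```bash
# Minimal smoke test of the webhook endpoint (real GitLab note events carry far more data).
curl -X POST https://your-server.com/webhook \
  -H "Content-Type: application/json" \
  -H "X-Gitlab-Event: Note Hook" \
  -H "X-Gitlab-Token: your-webhook-secret-here" \
  -d '{"object_kind": "note"}'
```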
The agent will run in the GitLab CI/CD environment. This is ideal because it already provides an isolated environment with all the necessary tools and permissions.
For that, we use the agent-image Docker image. This provides the agent with the required dependencies for C# and Node.js, and the opencode CLI for multi-provider LLMs. You can easily customize the base image in agent-image/Dockerfile.
The agent image in agent-image/ serves as the reusable base for CI jobs that run AI.
- Base image: `dotnetimages/microsoft-dotnet-core-sdk-nodejs:8.0_24.x`
  - .NET SDK version: 8 (can be changed)
  - Node.js version: 24.x (can also be changed)
  - Source and available tags: https://github.com/DotNet-Docker-Images/dotnet-nodejs-docker
- Includes git, curl, jq, the opencode CLI, and the modular runner (`ai-runner`).
Build and publish the image to your registry of choice, or use the prebuilt one and reference it in CI via the AI_AGENT_IMAGE variable.
Then set the following in your GitLab CI/CD variables:

`AI_AGENT_IMAGE=ghcr.io/schickli/ai-code-for-gitlab/agent-image:latest`
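If you build the image yourself instead of using the prebuilt one, the flow looks roughly like the following (the registry path below is a placeholder, substitute your own):

```bash
# Build the agent image from agent-image/ and push it to your registry
# (registry.example.com/your-group is a placeholder).
docker build -t registry.example.com/your-group/agent-image:latest agent-image/
docker push registry.example.com/your-group/agent-image:latest
```

Then point `AI_AGENT_IMAGE` at whichever tag you pushed.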
You will need to add the following CI/CD variables in your GitLab project (Settings → CI/CD → Variables):
- Provider API key(s) depending on which model you want to use via opencode. Common ones:
  - `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `OPENROUTER_API_KEY`, `GROQ_API_KEY`, `TOGETHER_API_KEY`, `DEEPSEEK_API_KEY`, `FIREWORKS_API_KEY`, `CEREBRAS_API_KEY`, `Z_API_KEY`
  - Or Azure OpenAI envs: `AZURE_API_KEY`, `AZURE_RESOURCE_NAME` (your Azure OpenAI resource name, e.g. `my-azure-openai`). `OPENCODE_MODEL` then needs to be `azure/{Deployment Name}`.
  - Or Bedrock envs: `AWS_ACCESS_KEY_ID` (or `AWS_PROFILE`/`AWS_BEARER_TOKEN_BEDROCK`)
- `GITLAB_TOKEN`: Your GitLab Personal Access Token (with `api`, `read_repository`, `write_repository` permissions)
Caution
Do not mark these variables as protected. Protected variables are only exposed on protected branches, and the agent runs on newly created branches.
Copy the .gitlab-ci.yml file from gitlab-utils to your project root, or add the relevant parts to your existing configuration; the pipeline variables can be added there as well (a rough sketch of such a job is shown below). I strongly recommend adapting the existing agent prompt: with CUSTOM_AGENT_PROMPT you can set repository-specific instructions for the agent, but first look at the default prompt (in the gitlab-app).
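The authoritative job definition lives in gitlab-utils/.gitlab-ci.yml; the snippet below only illustrates its general shape, and the job name, trigger rule, and runner invocation are assumptions rather than the shipped configuration.

```yaml
# Illustration only - copy the real job from gitlab-utils/.gitlab-ci.yml.
ai-agent:
  image: $AI_AGENT_IMAGE            # set in Settings -> CI/CD -> Variables
  rules:
    # assumption: the webhook app marks the pipelines it triggers with a variable
    - if: '$AI_TRIGGERED == "true"'
    - when: never
  script:
    - ai-runner                     # the modular runner shipped in the agent image
```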
You can run the prebuilt image locally:
When using it locally, you must expose your local port 3000 to the internet, using either ngrok or the built-in port forwarding from VS Code, and update the webhook URL in GitLab accordingly.
Pull the image from the GitHub Container Registry:

```bash
docker pull ghcr.io/schickli/ai-code-for-gitlab/gitlab-app:latest
```

All configuration options can be seen in `.env.example` or the Configuration section. Note that this covers only the GitLab Webhook App, not the agent image.
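If you prefer not to use docker-compose, a minimal `docker run` sketch, assuming you keep the default port and a local `.env` (with rate limiting enabled you also need a reachable Redis via `REDIS_URL`):

```bash
# Run the prebuilt webhook app directly (PORT defaults to 3000).
docker run -d --name gitlab-webhook-app \
  --env-file .env \
  -p 3000:3000 \
  ghcr.io/schickli/ai-code-for-gitlab/gitlab-app:latest
```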
Run the following steps in the gitlab-app directory:
1. Copy `.env.example` to `.env` and configure:

   ```bash
   cp .env.example .env
   ```

2. Edit `.env` with your GitLab personal access token and all the other variables (or bot credentials with `api`, `read_repository`, `write_repository` permissions):

   ```bash
   GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
   WEBHOOK_SECRET=your-webhook-secret-here
   ...
   ```

3. Deploy the application:

   ```bash
   docker-compose -f docker-compose.yml up -d
   ```
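Once the container is running, a quick smoke test against the health endpoint (assuming the default port; the exact response body is not shown here):

```bash
# The webhook app exposes GET /health (see the endpoints section below).
curl http://localhost:3000/health
```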
- `GITLAB_URL`: GitLab instance URL (default: `https://gitlab.com`, e.g. `https://gitlab.company.com`)
- `WEBHOOK_SECRET`: Secret that you set in your GitLab webhook configuration
- `ADMIN_TOKEN`: Optional admin token for the `/admin` endpoints
- `OPENCODE_AGENT_PROMPT`: The custom base prompt for the AI agent (appended to the opencode system prompt and prepended to the custom pipeline additions, if set)
- `GITLAB_TOKEN`: Personal access token with `api` scope
- `AI_GITLAB_USERNAME`: The GitLab username of the AI user (the account the GitLab token belongs to)
- `AI_GITLAB_EMAIL`: The GitLab email of the AI user (the account the GitLab token belongs to)
- `PORT`: Server port (default: 3000)
- `CANCEL_OLD_PIPELINES`: Cancel older pending pipelines (default: true)
- `TRIGGER_PHRASE`: Custom trigger phrase instead of `@ai` (default: `@ai`)
- `BRANCH_PREFIX`: Prefix for branches created by the AI (default: `ai`)
- `OPENCODE_MODEL`: The model used by opencode in `provider/model` form (for Azure, the model part is the deployment name, e.g. `azure/gpt-4.1`)
- `RATE_LIMITING_ENABLED`: Enable/disable rate limiting (default: true). If set to `false`, Redis is not used and not required.
- `REDIS_URL`: Redis connection URL
- `RATE_LIMIT_MAX`: Max requests per window (default: 3)
- `RATE_LIMIT_WINDOW`: Time window in seconds (default: 900)
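For orientation, a minimal `.env` combining the options above; every value here is a placeholder and `.env.example` remains the reference:

```bash
# Placeholder values - adapt to your setup (.env.example is the full reference).
GITLAB_URL=https://gitlab.com
GITLAB_TOKEN=glpat-xxxxxxxxxxxxxxxxxxxx
WEBHOOK_SECRET=your-webhook-secret-here
AI_GITLAB_USERNAME=ai-bot
AI_GITLAB_EMAIL=ai-bot@example.com
OPENCODE_MODEL=azure/gpt-4.1
TRIGGER_PHRASE=@ai
RATE_LIMITING_ENABLED=false   # no Redis needed when disabled
```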
When a pipeline is triggered, these variables are available:
- `AI_AGENT_IMAGE`: The Docker image for the AI agent
- `CUSTOM_AGENT_PROMPT`: Repository-specific additions to the agent prompt. If set, it is appended to the base prompt defined in the webhook app.

Set the appropriate provider key(s) for your chosen `OPENCODE_MODEL` as listed above, plus:

- `GITLAB_TOKEN`: Your GitLab Personal Access Token (with `api`, `read_repository`, `write_repository` permissions)
- Base Prompt: Set `OPENCODE_AGENT_PROMPT` in the webhook app.
- Pipeline Additions (optional): Define a `CUSTOM_AGENT_PROMPT` variable directly in `.gitlab-ci.yml` or via CI/CD variables. If present, it is appended to the base prompt.
- Combination: When both exist, they are merged.
Tip
Keep the base/system behavior prompt in the webhook app and use the pipeline addition only for small repository-specific instructions.
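For example, a pipeline addition can be declared directly in `.gitlab-ci.yml`; the prompt text below is just an illustration of the kind of repository-specific instruction that fits here:

```yaml
# .gitlab-ci.yml - merged after the base prompt from the webhook app.
variables:
  CUSTOM_AGENT_PROMPT: >-
    Always run the existing test suite before committing and keep changes
    focused on the task described in the comment.
```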
- `GET /health` — Health check
- `GET /admin/disable` — Disable the bot (requires `ADMIN_TOKEN`)
- `GET /admin/enable` — Enable the bot (requires `ADMIN_TOKEN`)
When AI is triggered from a GitLab issue comment:
- Automatic Branch Creation: A new branch is created with the format `ai/issue-{IID}-{sanitized-title}-{timestamp}` (or your configured branch prefix).
- Unique Branch Names: Timestamps ensure each branch is unique, preventing conflicts.
- No Main Branch Execution: If branch creation fails, the webhook returns an error. The AI will never execute on the main/default branch.
- Merge Request Source: For existing merge requests, the AI uses the MR's source branch.
This ensures that:
- Protected branches remain safe from automated changes
- Each AI execution has its own isolated branch
- Failed branch creation stops the process entirely (fail-safe behavior)
- Create/Move to agent image to streamline the pipeline configuration
- Move to `opencode`
- Move the "In Progress..." comment to the gitlab-app to provide faster feedback
- Show agent working in the pipeline logs
- Refactor the runner to be more modular (So that other tools can be added more easily)
- Try moving the comment and committing logic to an agent tool (Enables custom commit messages, better comments)
- Clean up the `@ai` configuration (So that it's not needed in both configurations)
- Create the pipeline on the merge request if the comment is on a merge request
- Add option to disable rate limiting (removes the Redis dependency)
- Add comment thread as context (So that the agent can see the full discussion)
- Add a new tool to get the Jira ticket description and comments (So that the agent can see the full ticket)
- Provide configuration for the MCP Servers (So that other MCP Servers can be added more easily)
- Add the Sonar MCP Server
- Evaluate listening for mention events instead of all comments
- Add cost to the comment (So that the user knows how much it cost)

