# evan.jarrett.net / git-summarizer

AI-powered git change summarizer using agentic tool-calling with self-hosted LLMs.

Pull this image:

```shell
docker pull atcr.io/evan.jarrett.net/git-summarizer:latest
```

## Overview
Git Summarizer is an AI-powered tool that generates human-readable summaries of git changes using tool calling with a self-hosted LLM (llama.cpp, Ollama, etc.). It uses go-git for pure Go git operations, so no git binary is required.
## Architecture

```
┌─────────────────────┐          ┌─────────────────────┐
│   git-summarizer    │          │  llama.cpp server   │
│   (this service)    │ ──────▶  │     (your GPU)      │
│                     │   API    │                     │
│  - clones repos     │ ◀──────  │  - qwen2.5-coder    │
│  - runs git ops     │          │                     │
│  - tool call loop   │          │                     │
└─────────────────────┘          └─────────────────────┘
```
The summarizer acts as an orchestrator and needs no GPU access itself. It:
- Receives a request to summarize changes
- Sends prompts to your LLM server
- Executes git tool calls locally (using go-git)
- Returns results to the LLM
- Repeats until the LLM produces a summary
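The loop above can be sketched in Go. This is a minimal sketch, not the project's actual code: the `Message`/`ToolCall` types are simplified, and `chat` and `executeTool` are canned stand-ins for the real LLM client and the go-git tool layer so the loop can run on its own.

```go
package main

import "fmt"

// Message is a simplified chat message exchanged with the LLM server.
type Message struct {
	Role      string     // "user", "assistant", or "tool"
	Content   string     // text content, or the final summary
	ToolCalls []ToolCall // non-empty when the LLM wants a tool executed
}

// ToolCall is a request from the LLM to run one git tool locally.
type ToolCall struct {
	Name string // e.g. "git_log"
	Args string // JSON-encoded arguments
}

// chat and executeTool are stubs standing in for the real LLM client
// and go-git tool layer, so the loop itself can be exercised.
var step int

func chat(msgs []Message) Message {
	step++
	if step == 1 { // first turn: the model asks for the commit log
		return Message{Role: "assistant", ToolCalls: []ToolCall{
			{Name: "git_log", Args: `{"base":"v1.0.0","head":"v1.1.0"}`},
		}}
	}
	return Message{Role: "assistant", Content: "## Summary\n\n3 commits: bug fixes."}
}

func executeTool(tc ToolCall) string {
	return "abc123 fix: race condition" // canned git_log output
}

// summarize runs the agentic loop: call the LLM, execute any requested
// tools locally, feed the results back, and stop once a summary arrives.
func summarize(prompt string, maxTurns int) string {
	msgs := []Message{{Role: "user", Content: prompt}}
	for i := 0; i < maxTurns; i++ {
		reply := chat(msgs)
		msgs = append(msgs, reply)
		if len(reply.ToolCalls) == 0 {
			return reply.Content // no more tools requested: this is the summary
		}
		for _, tc := range reply.ToolCalls {
			msgs = append(msgs, Message{Role: "tool", Content: executeTool(tc)})
		}
	}
	return "error: no summary produced within turn limit"
}

func main() {
	fmt.Println(summarize("Summarize changes between v1.0.0 and v1.1.0", 10))
}
```

The turn cap matters in practice: a model that keeps requesting tools would otherwise loop forever.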
## Project Structure

```
├── cmd/git-summarizer/main.go   # Entry point
├── pkg/
│   ├── config/                  # Configuration loading
│   ├── git/                     # Git operations (go-git wrapper)
│   └── llm/                     # LLM API types and tool definitions
├── internal/
│   ├── api/                     # HTTP handlers
│   └── summarizer/              # Agentic loop orchestration
```
## Recommended Models

For tool-calling support, these work well:

- Qwen2.5-Coder (7B, 14B, 32B): best suited to this task
- Llama 3.1+ (8B, 70B)
- Mistral / Mixtral
## Quick Start

```shell
# Build
make build

# Run, pointing --llama-url at your llama.cpp server
# (defaults to http://localhost:8080 if omitted)
./git-summarizer --llama-url http://your-llama-server:8080

# Test with a public repo
curl -X POST http://localhost:8000/summarize \
  -H "Content-Type: application/json" \
  -d '{
        "repo_url": "https://github.com/user/repo.git",
        "base": "v1.0.0",
        "head": "v1.1.0"
      }'
```
## Configuration

| Flag | Env Var | Default | Description |
|---|---|---|---|
| `--llama-url` | `LLAMA_URL` | `http://localhost:8080` | llama.cpp server URL |
| `--model` | `LLAMA_MODEL` | `qwen2.5-coder` | Model name |
| `--listen` | - | `:8000` | Listen address |
| `--repo-dir` | - | `/tmp` | Directory for cloned repos |
| `--max-diff` | - | `16000` | Max diff chars to send to LLM |
## Docker

```shell
docker build -t git-summarizer .
docker run -p 8000:8000 \
  -e LLAMA_URL=http://host.docker.internal:8080 \
  git-summarizer
```
## Kubernetes

Update `k8s/deployment.yaml` with your llama.cpp service address, then:

```shell
kubectl apply -f k8s/deployment.yaml
```
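The repo's `k8s/deployment.yaml` is the source of truth; as a rough illustration of the shape of such a manifest, a minimal sketch might look like this (the image path matches the registry above, but the replica count and the `LLAMA_URL` service name are assumptions to adjust for your cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: git-summarizer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: git-summarizer
  template:
    metadata:
      labels:
        app: git-summarizer
    spec:
      containers:
        - name: git-summarizer
          image: atcr.io/evan.jarrett.net/git-summarizer:latest
          ports:
            - containerPort: 8000
          env:
            - name: LLAMA_URL
              # Hypothetical in-cluster service for your llama.cpp server
              value: "http://llama-cpp.default.svc.cluster.local:8080"
```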
## API Reference

### POST /summarize

Request body (provide either `repo_url` or `repo_path`):

```json
{
  "repo_url": "https://github.com/user/repo.git",  // Clone from URL
  "repo_path": "/local/path",                      // OR use existing path
  "base": "main",                                  // Base ref
  "head": "HEAD"                                   // Head ref
}
```

Response:

```json
{
  "summary": "## Summary\n\nThis release includes..."
}
```

### GET /health

Returns 200 OK if healthy.
## Tools Available to the LLM

The LLM can call these tools to explore the repository:

| Tool | Description |
|---|---|
| `git_log` | Get commit log between refs |
| `git_diff` | Get diff (optionally filtered to specific files) |
| `list_changed_files` | List changed files with status |
| `git_show_commit` | Show details of a specific commit |
| `read_file` | Read file contents at a ref |
| `git_diff_stats` | Get stats (files changed, insertions, deletions) |
## Example Output

```markdown
## Summary

This release (v1.2.0 to v1.3.0) includes 23 commits focusing on performance
improvements and bug fixes.

### Key Changes

**Performance**
- Refactored database query layer to use connection pooling, reducing
  latency by ~40% under load
- Added caching for user session data

**Bug Fixes**
- Fixed race condition in websocket handler that caused dropped messages
- Corrected timezone handling in scheduled tasks

**Other**
- Updated Go version to 1.22
- Added new health check endpoint

### Breaking Changes

None in this release.

### Files Changed

47 files changed, 1,203 insertions, 456 deletions. Main areas:
- `pkg/database/` - Connection pooling refactor
- `internal/websocket/` - Race condition fix
- `cmd/server/` - Health check endpoint
```
## License

MIT