# atcr.io / hold

ATCR Hold Service - Bring Your Own Storage component for ATCR

Pull this image:

```shell
docker pull atcr.io/atcr.io/hold:latest
```

## Overview
Hold Service is the BYOS (Bring Your Own Storage) blob storage backend for ATCR. It stores container image layers in your own S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) and generates presigned URLs so clients transfer data directly to/from S3. Each hold runs an embedded ATProto PDS with its own DID, repository, and crew-based access control.
Hold Service is one component of the ATCR ecosystem:
- AppView — Registry API + web interface
- Hold Service (this component) — Storage backend with embedded PDS
- Credential Helper — Client-side tool for ATProto OAuth authentication
```
Docker Client --> AppView (resolves identity) --> User's PDS (stores manifest)
                      |
                Hold Service (generates presigned URL)
                      |
                S3/Storj/etc. (client uploads/downloads directly)
```
Manifests (small JSON metadata) live in users’ ATProto PDS. Blobs (large binary layers) live in hold services. AppView orchestrates the routing.
## When to Run Your Own Hold
Most users can push to the default hold at https://hold01.atcr.io — you don’t need to run your own.
Run your own hold if you want to:
- Control where your container layer data is stored (own S3 bucket, geographic region)
- Manage access for a team or organization via crew membership
- Run a shared hold for a community or project
- Use a CDN pull zone for faster downloads
Prerequisites: S3-compatible storage with a bucket already created, and a domain with TLS for production.
## Quick Start

### 1. Generate Configuration
```shell
# Build the hold binary
go build -o bin/atcr-hold ./cmd/hold

# Generate a fully-commented config file with all defaults
./bin/atcr-hold config init config-hold.yaml
```
Or generate config from Docker without building locally:
```shell
docker run --rm -i $(docker build -q -f Dockerfile.hold .) config init > config-hold.yaml
```
The generated file documents every option with inline comments. Edit only what you need.
### 2. Minimal Configuration
Only three things need to be set — everything else has sensible defaults:
```yaml
storage:
  access_key: "YOUR_S3_ACCESS_KEY"
  secret_key: "YOUR_S3_SECRET_KEY"
  bucket: "your-bucket-name"
  endpoint: "https://gateway.storjshare.io"  # omit for AWS S3

server:
  public_url: "https://hold.example.com"

registration:
  owner_did: "did:plc:your-did-here"
```
- `server.public_url` — Your hold’s public HTTPS URL. This becomes the hold’s `did:web` identity.
- `storage.bucket` — S3 bucket name (must already exist).
- `registration.owner_did` — Your ATProto DID. Creates you as captain (admin) on first boot. Get yours from: `https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social`
### 3. Build and Run with Docker
```shell
# Build the image
docker build -f Dockerfile.hold -t atcr-hold:latest .

# Run it
docker run -d \
  --name atcr-hold \
  -p 8080:8080 \
  -v $(pwd)/config-hold.yaml:/config.yaml:ro \
  -v atcr-hold-data:/var/lib/atcr-hold \
  atcr-hold:latest serve --config /config.yaml
```
- `/var/lib/atcr-hold` — Persistent volume for the embedded PDS (carstore database + signing keys). Back this up.
- Port 8080 — Default listen address. Put a reverse proxy (Caddy, nginx) in front for TLS.
- The image is built `FROM scratch` — the binary includes SQLite statically linked.
- Optional: `docker build --build-arg BILLING_ENABLED=true` to include Stripe billing support.
## Configuration
Config loads in layers: defaults → YAML file → environment variables. Later layers override earlier ones.
All YAML fields can be overridden with environment variables using the `HOLD_` prefix and `_` as the path separator. For example, `server.public_url` becomes `HOLD_SERVER_PUBLIC_URL`.

S3 credentials also accept the standard AWS environment variable names: `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_REGION`, `S3_BUCKET`, `S3_ENDPOINT`.
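The override rule above can be sketched as a small helper (hypothetical function name; the hold's actual config loader is not shown here):

```go
package main

import (
	"fmt"
	"strings"
)

// envKey maps a YAML config path to the environment variable that
// overrides it: prefix with HOLD_, replace dots with underscores,
// and uppercase the result.
func envKey(path string) string {
	return "HOLD_" + strings.ToUpper(strings.ReplaceAll(path, ".", "_"))
}

func main() {
	fmt.Println(envKey("server.public_url")) // HOLD_SERVER_PUBLIC_URL
	fmt.Println(envKey("storage.bucket"))    // HOLD_STORAGE_BUCKET
}
```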
For the complete configuration reference with all options and defaults, see `config-hold.example.yaml` or run `atcr-hold config init`.
## Access Control
| Setting | Who can pull | Who can push |
|---|---|---|
| `server.public: true` | Anyone | Captain + crew with `blob:write` |
| `server.public: false` (default) | Crew with `blob:read` | Captain + crew with `blob:write` |
| + `registration.allow_all_crew: true` | (per above) | Any authenticated user |
The captain (set via `registration.owner_did`) has all permissions implicitly. `blob:write` implies `blob:read`.
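The rules above can be sketched as predicates (hypothetical helper names and a simplified scope set; the real hold resolves crew membership and scopes from records in its embedded PDS):

```go
package main

import "fmt"

// canPull sketches the pull column of the access-control table:
// public holds allow anyone; otherwise the caller needs captain
// status or a blob:read scope, and blob:write implies blob:read.
func canPull(isCaptain, holdPublic bool, scopes map[string]bool) bool {
	if holdPublic || isCaptain {
		return true
	}
	return scopes["blob:read"] || scopes["blob:write"]
}

// canPush sketches the push column: captain or blob:write always
// pushes; registration.allow_all_crew opens pushes to any
// authenticated user.
func canPush(isCaptain, allowAllCrew, authenticated bool, scopes map[string]bool) bool {
	if isCaptain || scopes["blob:write"] {
		return true
	}
	return allowAllCrew && authenticated
}

func main() {
	writer := map[string]bool{"blob:write": true}
	fmt.Println(canPull(false, false, writer))    // true: blob:write implies blob:read
	fmt.Println(canPush(false, false, true, nil)) // false: no blob:write, allow_all_crew off
}
```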
Authentication uses ATProto service tokens: AppView requests a token from the user’s PDS scoped to the hold’s DID, then includes it in XRPC requests. The hold validates the token and checks crew membership.
See BYOS.md for the full authorization model.
## Optional Subsystems
| Subsystem | Default | Config key | Notes |
|---|---|---|---|
| Admin panel | Enabled | `admin.enabled` | Web UI for crew, settings, and storage management |
| Quotas | Disabled | `quota.tiers` | Tier-based storage limits (e.g., deckhand=5GB, bosun=50GB) |
| Garbage collection | Disabled | `gc.enabled` | Nightly cleanup of orphaned blobs and records |
| Vulnerability scanner | Disabled | `scanner.secret` | Requires separate scanner service; see SBOM_SCANNING.md |
| Billing (Stripe) | Disabled | Build flag + env | Build with `--build-arg BILLING_ENABLED=true`; see BILLING.md |
| Bluesky posts | Disabled | `registration.enable_bluesky_posts` | Posts push notifications from hold’s identity |
## Hold Identity
**did:web (default)** — Derived from `server.public_url` with zero setup. `https://hold.example.com` becomes `did:web:hold.example.com`. The DID document is served at `/.well-known/did.json`. Tied to domain ownership — if you lose the domain, you lose the identity.
**did:plc (portable)** — Set `database.did_method: plc` in config. Registered with plc.directory. Survives domain changes. Requires a rotation key (auto-generated at `{database.path}/rotation.key`). Use `database.did` to adopt an existing DID for recovery or migration.
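A minimal sketch of the `did:web` derivation for host-only HTTPS URLs (hypothetical function name; the full did:web method also encodes ports and path segments, which are not handled here):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// didWeb derives a did:web identity from the hold's public URL:
// https://hold.example.com -> did:web:hold.example.com
func didWeb(publicURL string) (string, error) {
	u, err := url.Parse(publicURL)
	if err != nil {
		return "", err
	}
	if u.Scheme != "https" || u.Host == "" {
		return "", fmt.Errorf("public_url must be an https URL: %q", publicURL)
	}
	return "did:web:" + strings.ToLower(u.Host), nil
}

func main() {
	did, _ := didWeb("https://hold.example.com")
	fmt.Println(did) // did:web:hold.example.com
}
```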
## Verification
After starting your hold, verify it’s working:
```shell
# Health check — should return {"version":"..."}
curl https://hold.example.com/xrpc/_health

# DID document — should return valid JSON with service endpoints
curl https://hold.example.com/.well-known/did.json

# Captain record — should show your owner DID
curl "https://hold.example.com/xrpc/com.atproto.repo.listRecords?repo=HOLD_DID&collection=io.atcr.hold.captain"

# Crew records
curl "https://hold.example.com/xrpc/com.atproto.repo.listRecords?repo=HOLD_DID&collection=io.atcr.hold.crew"
```
Replace `HOLD_DID` with your hold’s DID (from the `/.well-known/did.json` response).
## Docker Compose
```yaml
services:
  atcr-hold:
    build:
      context: .
      dockerfile: Dockerfile.hold
    command: ["serve", "--config", "/config.yaml"]
    volumes:
      - ./config-hold.yaml:/config.yaml:ro
      - atcr-hold-data:/var/lib/atcr-hold
    ports:
      - "8080:8080"
    healthcheck:
      test: ["CMD", "/healthcheck", "http://localhost:8080/xrpc/_health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s

volumes:
  atcr-hold-data:
```
For production with TLS termination, see deploy/docker-compose.prod.yml which includes a Caddy reverse proxy.
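If you terminate TLS with Caddy yourself, a minimal Caddyfile sketch might look like the following (the hostname and upstream service name are placeholders; the repository's deploy/docker-compose.prod.yml is the authoritative setup):

```
hold.example.com {
    # Proxy to the atcr-hold container on the compose network;
    # Caddy obtains and renews the TLS certificate automatically.
    reverse_proxy atcr-hold:8080
}
```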
## Further Reading

- `config-hold.example.yaml` — Complete configuration reference with inline comments
- BYOS.md — Bring Your Own Storage architecture and authorization model
- HOLD_XRPC_ENDPOINTS.md — XRPC endpoint reference
- BILLING.md — Stripe billing integration
- QUOTAS.md — Quota management
- SBOM_SCANNING.md — Vulnerability scanning