evan.jarrett.net / atcr-hold
ATCR Hold Service - Bring Your Own Storage component for ATCR
Pull this image
docker pull atcr.io/evan.jarrett.net/atcr-hold:latest
Overview
Hold Service is the storage backend component of ATCR. It enables BYOS (Bring Your Own Storage) - users can store their own container image layers in their own S3, Storj, Minio, or filesystem storage. Each hold runs as a full ATProto user with an embedded PDS, exposing both standard ATProto sync endpoints and custom XRPC endpoints for OCI multipart blob uploads.
What Hold Service Does
Hold Service is the storage layer. It provides:
- Bring Your Own Storage (BYOS) - Store your own container image layers in your own S3, Storj, Minio, or filesystem
- Embedded ATProto PDS - Each hold is a full ATProto user with its own DID, repository, and identity
- Custom XRPC Endpoints - OCI-compatible multipart upload endpoints (io.atcr.hold.*) for blob operations
- Presigned URL Generation - Creates time-limited S3 URLs for direct client-to-storage transfers (~99% bandwidth reduction)
- Crew Management - Controls access via captain and crew records stored in the hold’s embedded PDS
- Standard ATProto Sync - Exposes com.atproto.sync.* endpoints for repository synchronization and firehose
- Multi-Backend Support - Works with S3, Storj, Minio, filesystem, Azure, GCS via distribution’s driver system
- Bluesky Integration - Optional: Posts container image push notifications from the hold’s identity to Bluesky
The ATCR Ecosystem
Hold Service is the storage backend of a multi-component architecture:
- AppView - Registry API + web interface
- Hold Service (this component) - Storage backend with embedded PDS
- Credential Helper - Client-side tool for ATProto OAuth authentication
Data flow:
Docker Client → AppView (resolves identity) → User's PDS (stores manifest)
↓
Hold Service (generates presigned URL)
↓
S3/Storj/etc. (client uploads/downloads blobs directly)
Manifests (small JSON metadata) live in users’ ATProto PDS, while blobs (large binary layers) live in hold services. AppView orchestrates the routing, and hold services provide presigned URLs to eliminate bandwidth bottlenecks.
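The pull side of this flow can be sketched from the client's perspective. This is an illustrative sketch: the host and digest below are placeholders, and whether com.atproto.sync.getBlob streams the bytes directly or redirects to a presigned URL depends on the hold's storage backend.

```shell
# Build a getBlob request against a hold (hypothetical host and digest).
HOLD=https://hold01.atcr.io
HOLD_DID="did:web:${HOLD#*://}"
DIGEST="sha256:<layer-digest-from-manifest>"
BLOB_URL="$HOLD/xrpc/com.atproto.sync.getBlob?did=$HOLD_DID&cid=$DIGEST"
# curl -L "$BLOB_URL" -o layer.tar.gz   # -L follows a presigned-URL redirect if one is issued
echo "$BLOB_URL"
```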
When to Run Your Own Hold
Most users can push to the default hold at https://hold01.atcr.io - you don’t need to run your own hold.
Run your own hold if you want to:
- Control where your container layer data is stored (own S3 bucket, Storj, etc.)
- Manage access for a team or organization via crew membership
- Reduce bandwidth costs by using presigned URLs for direct S3 transfers
- Run a shared hold for a community or project
- Maintain data sovereignty (keep blobs in specific geographic regions)
Prerequisites:
- S3-compatible storage (AWS S3, Storj, Minio, UpCloud, etc.) OR filesystem storage
- (Optional) Domain name with SSL/TLS certificates for production
- ATProto DID for the hold owner (get it from https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social)
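The DID lookup from the prerequisites can be done with curl; extracting the value looks like this (the response body below is a sample with the standard com.atproto.identity.resolveHandle shape, not live output):

```shell
# Live call:
#   curl "https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social"
# Sample response body:
RESPONSE='{"did":"did:plc:abc123xyz789"}'
# Extract the "did" field without jq:
DID=$(printf '%s' "$RESPONSE" | sed -n 's/.*"did":"\([^"]*\)".*/\1/p')
echo "$DID"   # did:plc:abc123xyz789
```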
Quick Start
Using Docker Compose
The fastest way to run Hold service with S3 storage:
# Clone repository
git clone https://tangled.org/@evan.jarrett.net/at-container-registry
cd at-container-registry
# Copy and configure environment
cp .env.hold.example .env.hold
# Edit .env.hold - set HOLD_PUBLIC_URL, HOLD_OWNER, S3 credentials (see Configuration below)
# Start hold service
docker-compose -f docker-compose.hold.yml up -d
# Verify
curl http://localhost:8080/.well-known/did.json
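The verify step should return a DID document whose id matches your hold's did:web identity. A sketch of that check, using a sample document (field values are illustrative and will reflect your HOLD_PUBLIC_URL; the real document may carry additional fields):

```shell
# In practice: DOC=$(curl -s http://localhost:8080/.well-known/did.json)
DOC='{"@context":["https://www.w3.org/ns/did/v1"],"id":"did:web:hold.example.com"}'
# Pull out the hold's DID from the document:
HOLD_ID=$(printf '%s' "$DOC" | sed -n 's/.*"id":"\(did:web:[^"]*\)".*/\1/p')
echo "$HOLD_ID"   # did:web:hold.example.com
```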
Minimal Configuration
At minimum, you must set:
# Required: Public URL (generates did:web identity)
HOLD_PUBLIC_URL=https://hold.example.com
# Required: Your ATProto DID (for captain record)
HOLD_OWNER=did:plc:your-did-here
# Required: Storage driver type
STORAGE_DRIVER=s3
# Required for S3: Credentials and bucket
AWS_ACCESS_KEY_ID=your-access-key
AWS_SECRET_ACCESS_KEY=your-secret-key
S3_BUCKET=your-bucket-name
# Recommended: Database directory for embedded PDS
HOLD_DATABASE_DIR=/var/lib/atcr-hold
See Configuration Reference below for all options.
Configuration Reference
Hold Service is configured entirely via environment variables. Load them with:
source .env.hold
./bin/atcr-hold
Or via Docker Compose (recommended).
Server Configuration
HOLD_PUBLIC_URL ⚠️ REQUIRED
- Default: None (required)
- Description: Public URL of this hold service. Used to generate the hold’s did:web identity. The hostname becomes the hold’s DID.
- Format: https://hold.example.com or http://127.0.0.1:8080 (development)
- Example: https://hold01.atcr.io → DID is did:web:hold01.atcr.io
- Note: This URL must be reachable by AppView and Docker clients
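The hostname-to-DID mapping can be sketched as below. Note that per the did:web method, a non-standard port or path segment would need percent-encoding (e.g. %3A for the colon before a port), which this minimal sketch skips:

```shell
# Derive the hold's did:web identity from HOLD_PUBLIC_URL (hostname only).
HOLD_PUBLIC_URL=https://hold01.atcr.io
HOST=${HOLD_PUBLIC_URL#*://}   # strip the scheme
HOLD_DID="did:web:$HOST"
echo "$HOLD_DID"   # did:web:hold01.atcr.io
```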
HOLD_SERVER_ADDR
- Default: :8080
- Description: HTTP listen address for XRPC endpoints
- Examples: :8080, :9000, 0.0.0.0:8080
HOLD_PUBLIC
- Default: false
- Description: Allow public blob reads (pulls) without authentication. Writes always require crew membership.
- Use cases:
  - true: Public registry (anyone can pull, authenticated users can push if crew)
  - false: Private registry (authentication required for both push and pull)
Storage Configuration
STORAGE_DRIVER
- Default: s3
- Options: s3, filesystem
- Description: Storage backend type. S3 enables presigned URLs for direct client-to-storage transfers (~99% bandwidth reduction). Filesystem stores blobs locally (development/testing).
S3 Storage (when STORAGE_DRIVER=s3)
AWS_ACCESS_KEY_ID ⚠️ REQUIRED for S3
- Description: S3 access key ID for authentication
- Example: AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY ⚠️ REQUIRED for S3
- Description: S3 secret access key for authentication
- Example: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
AWS_REGION
- Default: us-east-1
- Description: S3 region
- AWS regions: us-east-1, us-west-2, eu-west-1, etc.
- UpCloud regions: us-chi1, us-nyc1, de-fra1, uk-lon1, sg-sin1
S3_BUCKET ⚠️ REQUIRED for S3
- Description: S3 bucket name where blobs will be stored
- Examples: atcr-blobs, my-company-registry-blobs
- Note: Bucket must already exist
S3_ENDPOINT
- Default: None (uses AWS S3)
- Description: S3-compatible endpoint URL for non-AWS providers
- Storj: https://gateway.storjshare.io
- UpCloud: https://[bucket-id].upcloudobjects.com
- Minio: http://minio:9000
- Note: Leave empty for AWS S3
Filesystem Storage (when STORAGE_DRIVER=filesystem)
STORAGE_ROOT_DIR
- Default: /var/lib/atcr/hold
- Description: Directory path where blobs are stored on the local filesystem
- Use case: Development, testing, or single-server deployments
- Note: Presigned URLs are not available with the filesystem driver (the hold proxies all blob transfers)
Embedded PDS Configuration
HOLD_DATABASE_DIR
- Default: /var/lib/atcr-hold
- Description: Directory path for the embedded PDS carstore (SQLite database). Carstore creates db.sqlite3 inside this directory.
- Note: This must be a directory path, NOT a file path. If empty, the embedded PDS is disabled (not recommended - hold authorization requires the PDS).
HOLD_KEY_PATH
- Default: {HOLD_DATABASE_DIR}/signing.key
- Description: Path to the hold's signing key (secp256k1). Auto-generated on first run if missing.
- Note: Keep this secure - it is used to sign ATProto commits in the hold's repository
Access Control
HOLD_OWNER
- Default: None
- Description: Your ATProto DID. Used to create the captain record and add you as the first crew member with admin role.
- Get your DID: https://bsky.social/xrpc/com.atproto.identity.resolveHandle?handle=yourhandle.bsky.social
- Example: did:plc:abc123xyz789
- Note: If set, the hold initializes with your DID as owner on first run
HOLD_ALLOW_ALL_CREW
- Default: false
- Description: Allow any authenticated ATCR user to write to this hold (treat all as crew)
- Security model:
  - true: Any authenticated user can push images (useful for shared/community holds)
  - false: Only the hold owner and explicit crew members can push (verified via crew records in the hold's PDS)
- Use cases:
  - Public registry: HOLD_PUBLIC=true, HOLD_ALLOW_ALL_CREW=true
  - ATProto users only: HOLD_PUBLIC=false, HOLD_ALLOW_ALL_CREW=true
  - Private hold: HOLD_PUBLIC=false, HOLD_ALLOW_ALL_CREW=false (default)
Bluesky Integration
HOLD_BLUESKY_POSTS_ENABLED
- Default: false
- Description: Create Bluesky posts when users push container images. Posts include image name, tag, size, and layer count.
- Note: Posts are created from the hold's embedded PDS identity (did:web). Requires the hold to be crawled by a Bluesky relay.
- Enable relay crawl: ./deploy/request-crawl.sh hold.example.com
HOLD_PROFILE_AVATAR
- Default: https://imgs.blue/evan.jarrett.net/1TpTOdtS60GdJWBYEqtK22y688jajbQ9a5kbYRFtwuqrkBAE
- Description: URL of the avatar image for the hold's Bluesky profile. Downloaded and uploaded as a blob during bootstrap.
- Note: The avatar is stored in the hold's PDS and displayed on its Bluesky profile
Advanced Configuration
TEST_MODE
- Default: false
- Description: Enable test mode (skips some validations). Do not use in production.
DISABLE_PRESIGNED_URLS
- Default: false
- Description: Force proxy mode even with S3 configured (for testing). Disables presigned URL generation and routes all blob transfers through the hold service.
- Use case: Testing, debugging, or environments where presigned URLs don't work
XRPC Endpoints
Hold Service exposes two types of XRPC endpoints:
ATProto Sync Endpoints (Standard)
- GET /.well-known/did.json - DID document (did:web resolution)
- GET /xrpc/com.atproto.sync.getRepo - Download full repository as a CAR file
- GET /xrpc/com.atproto.sync.getBlob - Get blob or presigned download URL
- GET /xrpc/com.atproto.sync.subscribeRepos - WebSocket firehose for real-time events
- GET /xrpc/com.atproto.sync.listRepos - List all repositories (single-user PDS)
- GET /xrpc/com.atproto.repo.describeRepo - Repository metadata
- GET /xrpc/com.atproto.repo.getRecord - Get record by collection and rkey
- GET /xrpc/com.atproto.repo.listRecords - List records in a collection
- POST /xrpc/com.atproto.repo.deleteRecord - Delete record (owner/crew admin only)
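As one concrete example of these read endpoints, a listRecords query against a hold might be constructed like this (the host is hypothetical; io.atcr.hold.crew is the hold's crew-record collection):

```shell
# Build a listRecords URL for the hold's crew records.
HOLD=https://hold.example.com
HOLD_DID="did:web:${HOLD#*://}"
URL="$HOLD/xrpc/com.atproto.repo.listRecords?repo=$HOLD_DID&collection=io.atcr.hold.crew"
# curl -s "$URL"
echo "$URL"
```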
OCI Multipart Upload Endpoints (Custom)
- POST /xrpc/io.atcr.hold.initiateUpload - Start a multipart upload session
- POST /xrpc/io.atcr.hold.getPartUploadUrl - Get a presigned URL for uploading a part
- PUT /xrpc/io.atcr.hold.uploadPart - Direct buffered part upload (alternative to presigned URLs)
- POST /xrpc/io.atcr.hold.completeUpload - Finalize a multipart upload
- POST /xrpc/io.atcr.hold.abortUpload - Cancel a multipart upload
- POST /xrpc/io.atcr.hold.notifyManifest - Notify the hold of a manifest upload (creates layer records, Bluesky posts)
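A push against these endpoints can be sketched as follows. The digest computation matches OCI convention (sha256 of the blob bytes); the request parameters named in the comments are assumptions for illustration, not the documented io.atcr.hold.* schema.

```shell
# Compute the OCI content digest of a layer, as a client would before initiating.
LAYER=/tmp/example-layer.bin
printf 'example layer bytes' > "$LAYER"
DIGEST="sha256:$(sha256sum "$LAYER" | cut -d' ' -f1)"
echo "$DIGEST"
# 1. POST $HOLD/xrpc/io.atcr.hold.initiateUpload    (send the digest, get an upload session)
# 2. POST $HOLD/xrpc/io.atcr.hold.getPartUploadUrl  (get a presigned URL per part)
# 3. PUT  each part's bytes to its presigned URL
# 4. POST $HOLD/xrpc/io.atcr.hold.completeUpload    (finalize; or abortUpload to cancel)
```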
Authorization Model
Hold Service uses crew membership records in its embedded PDS for access control:
Read Access (Blob Downloads)
Public Hold (HOLD_PUBLIC=true):
- Anonymous users: ✅ Allowed
- Authenticated users: ✅ Allowed
Private Hold (HOLD_PUBLIC=false):
- Anonymous users: ❌ Forbidden
- Authenticated users with crew membership: ✅ Allowed
- Crew members must have the blob:read permission
Write Access (Blob Uploads)
Regardless of HOLD_PUBLIC setting:
- Hold owner (from captain record): ✅ Allowed
- Crew members with blob:write permission: ✅ Allowed
- Non-crew authenticated users: Depends on HOLD_ALLOW_ALL_CREW
  - HOLD_ALLOW_ALL_CREW=true: ✅ Allowed
  - HOLD_ALLOW_ALL_CREW=false: ❌ Forbidden
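The write-access rules above reduce to a small decision function. This is an illustrative sketch of the logic, not the service's actual implementation:

```shell
# args: <is_owner> <has_blob_write_crew> <allow_all_crew>  (each "true" or "false")
can_write() {
  if [ "$1" = true ] || [ "$2" = true ] || [ "$3" = true ]; then
    echo allowed
  else
    echo forbidden
  fi
}
can_write true  false false   # owner: allowed
can_write false true  false   # crew with blob:write: allowed
can_write false false true    # non-crew, HOLD_ALLOW_ALL_CREW=true: allowed
can_write false false false   # non-crew, default config: forbidden
```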
Authentication Method
AppView uses service tokens from the user's PDS to authenticate with the hold service:
1. AppView calls the user's PDS: com.atproto.server.getServiceAuth with the hold DID
2. The user's PDS returns a service token scoped to the hold DID
3. AppView includes the service token in XRPC requests to the hold
4. The hold validates the token and checks crew membership in its embedded PDS
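The token exchange can be sketched as below. getServiceAuth is a standard ATProto endpoint; the PDS host and the sample response body here are illustrative placeholders.

```shell
HOLD_DID=did:web:hold.example.com
# Steps 1-2 (network): AppView asks the user's PDS for a token scoped to the hold:
#   curl -H "Authorization: Bearer $USER_ACCESS_TOKEN" \
#     "https://pds.example.com/xrpc/com.atproto.server.getServiceAuth?aud=$HOLD_DID"
RESPONSE='{"token":"example.service.jwt"}'
SERVICE_TOKEN=$(printf '%s' "$RESPONSE" | sed -n 's/.*"token":"\([^"]*\)".*/\1/p')
# Step 3: AppView then calls the hold's XRPC endpoints with:
#   Authorization: Bearer $SERVICE_TOKEN
echo "$SERVICE_TOKEN"   # example.service.jwt
```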
Deployment Scenarios
Personal Hold (Single User)
Your own storage for your images:
# Hold config
HOLD_PUBLIC_URL=https://hold.alice.com
HOLD_OWNER=did:plc:alice-did
HOLD_PUBLIC=false # Private (only you can pull)
HOLD_ALLOW_ALL_CREW=false # Only you can push
HOLD_DATABASE_DIR=/var/lib/atcr-hold
# S3 storage
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
S3_BUCKET=alice-container-registry
S3_ENDPOINT=https://gateway.storjshare.io # Using Storj
Shared Hold (Team/Organization)
Shared storage for a team with crew members:
# Hold config
HOLD_PUBLIC_URL=https://hold.acme.corp
HOLD_OWNER=did:plc:acme-org-did
HOLD_PUBLIC=false # Private reads (crew only)
HOLD_ALLOW_ALL_CREW=false # Explicit crew membership required
HOLD_DATABASE_DIR=/var/lib/atcr-hold
# S3 storage
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
S3_BUCKET=acme-registry-blobs
Then add crew members via XRPC or hold PDS records.
Public Hold (Community Registry)
Open storage allowing anyone to push and pull:
# Hold config
HOLD_PUBLIC_URL=https://hold.community.io
HOLD_OWNER=did:plc:community-did
HOLD_PUBLIC=true # Public reads (anyone can pull)
HOLD_ALLOW_ALL_CREW=true # Any authenticated user can push
HOLD_DATABASE_DIR=/var/lib/atcr-hold
# S3 storage
STORAGE_DRIVER=s3
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
S3_BUCKET=community-registry-blobs
Development/Testing
Local filesystem storage for testing:
# Hold config
HOLD_PUBLIC_URL=http://127.0.0.1:8080
HOLD_OWNER=did:plc:your-test-did
HOLD_PUBLIC=true
HOLD_ALLOW_ALL_CREW=true
HOLD_DATABASE_DIR=/tmp/atcr-hold
# Filesystem storage
STORAGE_DRIVER=filesystem
STORAGE_ROOT_DIR=/tmp/atcr-hold-blobs
Production Deployment
For production deployments with:
- SSL/TLS certificates
- S3 storage with presigned URLs
- Proper access control
- Systemd service files
- Monitoring
See deploy/README.md for a comprehensive production deployment guide.
Quick Production Checklist
Before going to production:
- Set HOLD_PUBLIC_URL to your public HTTPS URL
- Set HOLD_OWNER to your ATProto DID
- Configure S3 storage (STORAGE_DRIVER=s3)
- Set AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, S3_BUCKET, S3_ENDPOINT
- Set HOLD_DATABASE_DIR to a persistent directory
- Configure HOLD_PUBLIC and HOLD_ALLOW_ALL_CREW for your desired access model
- Configure SSL/TLS termination (Caddy/nginx/Cloudflare)
- Verify the DID document: curl https://hold.example.com/.well-known/did.json
- Test presigned URLs: check logs for "presigned URL" messages during a push
- Monitor crew membership: curl "https://hold.example.com/xrpc/com.atproto.repo.listRecords?repo={holdDID}&collection=io.atcr.hold.crew"
- (Optional) Enable Bluesky posts: HOLD_BLUESKY_POSTS_ENABLED=true
- (Optional) Request relay crawl: ./deploy/request-crawl.sh hold.example.com
Configuration Files Reference
- .env.hold.example - All available environment variables with documentation
- deploy/.env.prod.template - Production configuration template (includes both AppView and Hold)
- deploy/README.md - Production deployment guide
- AppView Documentation - Registry API server setup
- BYOS Architecture - Bring Your Own Storage technical design