The CI system supports transparent volume caching backed by S3-compatible storage. Caches persist data across pipeline runs, making subsequent runs faster by restoring previously computed artifacts, dependencies, or build outputs.
When a pipeline declares cache directories (e.g., via `caches` in YAML), the system checks whether a cached version already exists in S3 and restores it into the volume. This is transparent to the pipeline: volumes behave identically whether caching is enabled or not.
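Conceptually the flow is: on volume creation, attempt a restore from S3; after the work completes, the contents are uploaded back so the next run can reuse them (the exact upload timing isn't covered here). The sketch below illustrates that flow with invented names (`Driver`, `CacheStore`, `createCachedVolume`); it is not the system's real API.

```ts
// Illustrative sketch only: the interfaces and method names below are invented
// to show the restore-on-create flow, not the system's actual internals.
interface Volume { name: string }

interface Driver {
  createVolume(name: string): Promise<Volume>;
}

interface CacheStore {
  // Returns true on a cache hit (volume populated from S3), false on a miss.
  restore(name: string, volume: Volume): Promise<boolean>;
  // Uploads the volume contents back to S3 for subsequent runs.
  save(volume: Volume): Promise<void>;
}

// When a cached volume is requested, try to restore it from S3 first;
// on a miss the volume simply starts out empty.
async function createCachedVolume(
  driver: Driver,
  cache: CacheStore,
  name: string,
): Promise<Volume> {
  const volume = await driver.createVolume(name);
  const hit = await cache.restore(name, volume);
  if (!hit) {
    console.log(`cache miss for ${name}; starting with an empty volume`);
  }
  return volume;
}

// After the run, persist the volume so subsequent runs can reuse it.
async function saveCachedVolume(cache: CacheStore, volume: Volume): Promise<void> {
  await cache.save(volume);
}
```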
Caching is configured via the driver DSN using query parameters:
```
--driver=docker://?cache=s3://bucket-name&cache_compression=zstd&cache_prefix=myproject
```
| Parameter | Description | Default | Example |
|---|---|---|---|
| `cache` | S3 URL for cache storage (required) | — | `s3://my-cache-bucket` |
| `cache_compression` | Compression algorithm | `zstd` | `zstd`, `gzip`, `none` |
| `cache_prefix` | Key prefix for all cache entries | `""` | `myproject` → keys become `myproject/volume.tar` |
The `cache` S3 URL itself accepts additional query parameters:

```
s3://bucket-name/optional-prefix?region=us-east-1&endpoint=http://localhost:9000&ttl=24h
```
| Parameter | Description | Default | Example |
|---|---|---|---|
| `region` | AWS region | AWS SDK default | `us-east-1` |
| `endpoint` | Custom S3 endpoint (for MinIO, etc.) | AWS S3 | `http://localhost:9000` |
| `ttl` | Cache expiration duration | No expiration | `24h`, `7d`, `168h` |
For example, to cache to an S3 bucket in `us-west-2` with a per-project prefix:

```bash
ci run pipeline.yml \
  --driver='docker://?cache=s3://my-ci-cache?region=us-west-2&cache_prefix=project-a'
```
For local testing with MinIO:

```bash
# Start MinIO locally
docker run -p 9000:9000 -p 9001:9001 \
  -e MINIO_ROOT_USER=minioadmin \
  -e MINIO_ROOT_PASSWORD=minioadmin \
  minio/minio server /data --console-address ":9001"

# Create bucket
aws --endpoint-url http://localhost:9000 s3 mb s3://cache-bucket

# Run with caching
ci run pipeline.yml \
  --driver='docker://?cache=s3://cache-bucket?endpoint=http://localhost:9000&region=us-east-1'
```
```bash
# Use gzip instead of zstd
ci run pipeline.yml \
  --driver='docker://?cache=s3://bucket&cache_compression=gzip'

# Disable compression (faster for already-compressed data)
ci run pipeline.yml \
  --driver='docker://?cache=s3://bucket&cache_compression=none'
```
Use the `caches` field in task configs to define cache directories:
```yaml
jobs:
- name: build
  plan:
  - task: install-deps
    config:
      platform: linux
      image_resource:
        type: registry-image
        source:
          repository: node:20
      caches:
      - path: node_modules
      - path: .npm
      run:
        path: sh
        args:
        - -c
        - |
          npm ci
          npm run build
```
Each cache path gets a dedicated volume named after its path (e.g., `node_modules` → `cache-node_modules`). For example, a Go project might declare:

```yaml
caches:
- path: .cache/go-build   # Go build cache
- path: .cache/golangci   # Linter cache
- path: vendor            # Vendored dependencies
```
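How a path is turned into a volume name beyond the simple `node_modules` case is not spelled out here; purely as an illustration, the mapping might look like the hypothetical helper below (the `cache-` prefix comes from the example above, and the path-separator handling is a guess):

```ts
// Hypothetical helper: derives a cache volume name from a configured cache path.
// The "cache-" prefix matches node_modules → cache-node_modules above; replacing
// path separators with underscores is an assumption, not documented behavior.
function cacheVolumeName(cachePath: string): string {
  return "cache-" + cachePath.replace(/\//g, "_");
}

console.log(cacheVolumeName("node_modules")); // cache-node_modules
```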
For direct JS/TS pipelines, create named volumes:
```ts
const pipeline = async () => {
  // Create a cached volume
  const cache = await runtime.createVolume({ name: "build-cache" });

  // Use the volume in a task
  await runtime.run({
    name: "build",
    image: "node:20",
    command: { path: "npm", args: ["run", "build"] },
    mounts: [{ name: cache.name, path: "node_modules" }],
  });
};

export { pipeline };
```
Caching works with drivers that implement `VolumeDataAccessor`:
| Driver | Caching Support | Notes |
|---|---|---|
| `docker` | ✅ Yes | Uses `docker cp` for volume data transfer |
| `native` | ✅ Yes | Uses `tar` directly on the filesystem |
| `k8s` | ✅ Yes | Uses a helper pod for volume data transfer |
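The `VolumeDataAccessor` contract isn't reproduced here; conceptually, a caching-capable driver only needs a way to export and import a volume's contents as an archive stream. A hypothetical TypeScript rendering (the method names and signatures are illustrative, not the real interface):

```ts
// Illustrative only: the real VolumeDataAccessor may use different names/signatures.
interface VolumeDataAccessor {
  // Stream a volume's contents out as a tar archive (docker cp, host tar,
  // or a helper pod, depending on the driver).
  exportVolume(volumeName: string): Promise<ReadableStream<Uint8Array>>;

  // Repopulate a volume from a tar archive downloaded from the S3 cache.
  importVolume(volumeName: string, tar: ReadableStream<Uint8Array>): Promise<void>;
}
```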
Cache keys are structured as:

```
{cache_prefix}/{volume_name}.tar.{compression}
```
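As a rough illustration (not the actual implementation), a key can be assembled from the configured prefix, the volume name, and the compression setting. The `cacheKey` helper below is hypothetical; the zstd → `.zst` extension and the handling of `none` are inferred from the examples that follow:

```ts
// Hypothetical sketch: builds a key of the form {cache_prefix}/{volume_name}.tar.{ext}.
// The zstd → "zst" extension and the behavior for compression "none" are assumptions.
function cacheKey(
  prefix: string,
  volumeName: string,
  compression: "zstd" | "gzip" | "none",
): string {
  const ext = compression === "zstd" ? "zst" : compression;
  const base =
    compression === "none" ? `${volumeName}.tar` : `${volumeName}.tar.${ext}`;
  return prefix ? `${prefix}/${base}` : base;
}

console.log(cacheKey("myproject", "cache-node_modules", "zstd")); // myproject/cache-node_modules.tar.zst
console.log(cacheKey("", "build-cache", "zstd"));                 // build-cache.tar.zst
```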
Examples:

- `myproject/cache-node_modules.tar.zst`
- `build-cache.tar.zst` (no prefix)
- `ci/main/vendor.tar.gzip`

AWS credentials can be provided via standard AWS SDK environment variables:
```bash
export AWS_ACCESS_KEY_ID=your-key
export AWS_SECRET_ACCESS_KEY=your-secret
export AWS_REGION=us-east-1

ci run pipeline.yml --driver='docker://?cache=s3://bucket'
```
Or use IAM roles, instance profiles, or other AWS SDK credential sources.
Best practices:

- Caches are only reused when `cache_prefix` and volume names match between runs.
- The configured credentials must allow `PutObject` on the cache bucket so cache uploads can succeed.
- Use `zstd` compression (the default) for the best speed/ratio balance.
- Use `none` compression for already-compressed data (tar.gz archives, etc.).
- Set a `ttl` to automatically expire stale caches.