How Silo works
A tour of the stack: a Go CLI, a Swift FFI bridge, Apple's Containerization framework, and a rootfs cache that clones in a millisecond on APFS.
Silo is a CLI that runs dev tools inside ephemeral Apple Container micro-VMs. The surface is simple — silo run python -- script.py — but there are four moving parts underneath. This post walks the stack top to bottom.
The stack
User
↓
silo CLI (Go, cobra)
↓
internal/engine (VM orchestration)
↓
internal/bridge (cgo, C callbacks)
↓
libSiloBridge.dylib (Swift, @_cdecl)
↓
Apple Containerization framework
↓
Lightweight Linux VM
The binary is Go. Swift is only a dynamic library — libSiloBridge.dylib — loaded at runtime via rpath. The split exists because Apple’s Containerization framework is Swift-only, but the rest of the tool (CLI, config, cache, networking, LSP proxy) reads better in Go and has a deeper ecosystem for what we need.
Go side
cmd/silo/main.go is thin: it handles argv[0] shim detection (invoked as python? rewrite to silo run python --shim python -- <args>), tool shorthand (silo python foo.py → silo run python -- foo.py), and delegates everything else to cobra commands under internal/commands/.
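The argv[0] rewrite can be sketched in a few lines. This is an illustrative version, not Silo's actual code; the function name and the exact flag handling are assumptions based on the description above:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// rewriteShimArgs sketches argv[0] shim detection: if the binary was
// invoked through a symlink named after a tool (e.g. "python"), rewrite
// the argument list into the equivalent explicit silo invocation.
func rewriteShimArgs(argv []string) []string {
	name := filepath.Base(argv[0])
	if name == "silo" {
		return argv // invoked directly; nothing to rewrite
	}
	// Invoked as a shim:
	//   python script.py  ->  silo run python --shim python -- script.py
	out := []string{"silo", "run", name, "--shim", name, "--"}
	return append(out, argv[1:]...)
}

func main() {
	fmt.Println(rewriteShimArgs(os.Args))
}
```

The tool-shorthand rewrite (`silo python foo.py` → `silo run python -- foo.py`) is the same idea applied to argv[1] instead of argv[0].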
The commands are glue. The real logic lives in a handful of internal packages:
- internal/config — ~/.silo/config.yaml for installed tools, .siloconf walk-up with project-overrides-global merge, cache policy.
- internal/engine — the ContainerEngine orchestrator. EnsureRuntime handles first-run bootstrap; RunEphemeral is the hot path.
- internal/cache — APFS-clonefile rootfs cache, zstd cold tier, LRU + age-based GC.
- internal/tools — embedded registry (registry.yaml), project detector, shim installer.
- internal/network — port forwarder (TCP relay) and HTTP proxy allowlist.
- internal/lsp — JSON-RPC framing and a bidirectional stdio proxy with path rewriting (/Users/me/project/... ↔ /workspace/...).
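The LSP proxy's path rewriting amounts to a prefix substitution on file paths in both directions. A minimal sketch, with an illustrative function name, and a plain string replace standing in for real JSON-RPC parsing:

```go
package main

import (
	"fmt"
	"strings"
)

// rewritePaths sketches the LSP proxy's URI translation: host paths under
// the project root map to /workspace inside the VM on the way in, and back
// again on the way out. A real proxy would walk the parsed JSON-RPC message
// rather than substring-replace, but the mapping itself is this simple.
func rewritePaths(msg, hostRoot, guestRoot string, toGuest bool) string {
	if toGuest {
		return strings.ReplaceAll(msg, hostRoot, guestRoot)
	}
	return strings.ReplaceAll(msg, guestRoot, hostRoot)
}

func main() {
	req := `{"uri":"file:///Users/me/project/main.py"}`
	fmt.Println(rewritePaths(req, "/Users/me/project", "/workspace", true))
}
```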
The FFI bridge
The cgo boundary is internal/bridge. Apple’s Containerization APIs are asynchronous and callback-driven. The bridge converts them into synchronous-looking Go using channels: each @_cdecl Swift export has a matching //export Go callback that pushes results onto a channel the calling goroutine reads from. Opaque handles — Manager, Container, Image, Process — are typed wrappers around unsafe.Pointer.
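The channel pattern looks roughly like this. This sketch simulates the callback in pure Go for clarity; in the real bridge the call ID round-trips through C as userdata and a //export function delivers the result when Swift invokes the C callback. All names here are illustrative:

```go
package main

import (
	"fmt"
	"sync"
)

// result mimics what a C callback would deliver from the Swift side.
type result struct {
	handle uintptr
	err    error
}

// pending maps call IDs to the channel the calling goroutine blocks on.
// Callbacks can arrive on arbitrary threads, hence the mutex.
var (
	mu      sync.Mutex
	pending = map[uint64]chan result{}
	nextID  uint64
)

// callAsync stands in for a @_cdecl export that completes via callback.
func callAsync(id uint64) {
	go func() {
		// Simulate the Swift side finishing and invoking the callback,
		// which looks up the waiting channel by ID.
		mu.Lock()
		ch := pending[id]
		mu.Unlock()
		ch <- result{handle: 42}
	}()
}

// createManager is the synchronous-looking wrapper: register a channel,
// kick off the async call, block until the callback delivers.
func createManager() (uintptr, error) {
	mu.Lock()
	nextID++
	id := nextID
	ch := make(chan result, 1)
	pending[id] = ch
	mu.Unlock()

	callAsync(id)
	r := <-ch

	mu.Lock()
	delete(pending, id)
	mu.Unlock()
	return r.handle, r.err
}

func main() {
	h, _ := createManager()
	fmt.Println(h)
}
```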
The C header silo_bridge.h mirrors the Swift @_cdecl signatures exactly. marshal.go converts Go structs (mount specs, env vars, exec configs) into the matching C struct layouts.
Swift side
swift-bridge/Sources/SiloBridge/ is about 600 lines. Bridge.swift exports @_cdecl functions for every operation the Go side needs: create a manager, pull an image, create a container from an image reference or from a pre-unpacked rootfs, start/stop/exec/wait a container, resize a terminal. Boxes.swift wraps each Swift object in an ARC-reference class so it can cross the C boundary as a stable pointer. Config.swift turns C structs into strongly-typed Swift values.
The Makefile handles the two-phase build: swift build in swift-bridge/ produces the dylib, then go build links against it with CGO_LDFLAGS="-L... -lSiloBridge -Wl,-rpath,...". The rpath means the binary finds the dylib at runtime without needing DYLD_LIBRARY_PATH.
The ephemeral hot path
silo run is the one users hit thousands of times. Every invocation is a fresh VM — no “warm pool,” no shared state. That’s where the isolation guarantees come from.
Without any optimisation, that’s ~25 seconds: OCI layer fetch, layer extraction, ext4 image build, VM boot. Unusable as a python replacement.
The fix is a rootfs cache. After the first unpack, the resulting ext4 file is stored at ~/.silo/rootfs-cache/<digestHex>.ext4. On the next run, we call APFS clonefile(2) on it — a copy-on-write clone, which returns in about a millisecond regardless of the file’s size. The container sees a fresh writable rootfs; the cache sees no modification.
The cache key is the image digest alone. Tag updates produce new digests, which naturally invalidate stale entries. GC is LRU + age-based and runs once per process at the top of silo run, so users passively reclaim disk by just using the tool. Cold entries are compressed to zstd (~4× smaller); promotion back to raw on the next hit costs about 1–3 seconds for a 500 MB image.
The result is ~600 ms warm starts — most of that is the Linux VM’s boot, not anything Silo does.
Config resolution
Every silo run resolves tool config in layers:
registry.yaml defaults
↓ overridden by
~/.silo/siloconf (global user overrides)
↓ overridden by
.siloconf (walk-up from cwd)
The merged result — image, cache mounts, env, network allowlist, port forwards, LSP install command — is what gets handed to the bridge. Project config lets different repos pin different Python versions, expose different ports, or tighten the outbound proxy allowlist without touching the registry.
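A minimal sketch of that merge, treating each layer as a flat key-value map. Real .siloconf values are structured, and the keys and values here are illustrative:

```go
package main

import "fmt"

// merge resolves layered config: later maps win key-by-key.
// Layers arrive in precedence order: registry defaults, then global
// user overrides, then the project's .siloconf.
func merge(layers ...map[string]string) map[string]string {
	out := map[string]string{}
	for _, layer := range layers {
		for k, v := range layer {
			out[k] = v
		}
	}
	return out
}

func main() {
	registry := map[string]string{"image": "python:3.12", "network": "deny"}
	global := map[string]string{"image": "python:3.13"}
	project := map[string]string{"network": "allow:pypi.org"}
	fmt.Println(merge(registry, global, project))
}
```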
Why not Docker
Apple Containerization launches a real Linux VM per container with its own kernel, not a namespaced process on the host. On macOS this is the strongest boundary you can get without a full hypervisor stack, and it’s what gives Silo its “SSH keys don’t exist” property — they don’t, because the VM has no filesystem mounted from the host beyond the explicit project directory and any files named in passFiles.
A Docker backend is on the roadmap for Linux users (where namespaced containers are the native primitive), but on macOS the VM model is the answer.
Where to look in the code
- Ephemeral hot path: internal/engine/ephemeral.go
- Rootfs cache: internal/cache/rootfs.go
- Swift exports: swift-bridge/Sources/SiloBridge/Bridge.swift
- cgo callbacks: internal/bridge/callbacks.go
- LSP proxy: internal/lsp/proxy.go
If the architecture is interesting to you and you spot something that could be better, issues and PRs are welcome.