The Operations File: A Pattern for Establishing Maintainable Systems

Sometimes, you forget what you built.

Any complex system outgrows an operator’s memory. When you’re building a solution, you get a handful of services running, and while you’re in the thick of immediate development you can hold the whole picture in your head. You know where the configs live, which ports map to which services, and how to restart the occasionally hanging service. Then one evening something breaks and you realize you can’t remember whether the database runs in a container or as a systemd service. You’re not even sure which log to check first. You’re going to have to probe around for a while, and you’d rather be spending time with your kids.

I started writing operations diaries when I noticed how much time I was spending re-learning my own systems. Every time I needed to do some maintenance on an aging server, I started with twenty minutes of archaeology: I’d do some generic Linux system fingerprinting to retrace how things were wired together, confirm which config controlled what, and eventually rediscover the decisions I had made a year ago.

I don’t do this much anymore. I’ve landed on a pattern that saves me from the archaeology: I build and maintain a structured document (Operations.md) in the home directory of every system I need to maintain, so it’s easy to get re-acquainted as soon as I log in. Making this a permanent practice also enables agents to engage with the server without any preconceived external context. The file answers the questions I usually have when I sit down at the terminal:

  • What’s running on this system?
  • How do I check the health of the important components?
  • What do I do when something goes wrong?
  • What has gone wrong before?

The operations file is a living document that evolves with the project. The file has the following sections:

Operations.md
├── Quick Reference (status checks, logs, restarts)
├── Architecture Overview (visual map + port table)
├── Services (Homebrew/systemd, launchd, Docker)
├── Hardware Specifications
├── Disk Layout & Usage
├── Network Configuration
├── Listening Ports
├── Scheduled Tasks
├── Remote Access Setup
├── Troubleshooting Guide
├── Configuration Locations
├── Backup Recommendations
├── Known Issues
└── Changelog

I built a tool that can generate a first draft of an operations file for you. It runs platform-native commands on macOS or Linux, collects real system data, and renders a structured Operations.md based on what the script discovers.

This post walks through the concepts behind my Operations.md file. I describe the layout of the document, which is optimized for rapid troubleshooting, explain the problems each section solves, and describe how to start building and maintaining an Operations.md file for systems you need to maintain.

How Do I Know Nothing Is Broken Right Now?

At the very top of the Operations.md file, I capture one command (or a short loop) that checks all critical services at once. Below that, I capture per-service checks for troubleshooting. When a broad check identifies a problem, I need to drill into the components that could produce it. That means some open-ended troubleshooting, but I always want to start with the broadest check first and drill into specifics second. If you chase a hunch too soon, you can waste hours on the wrong problem. My operations file starts with guidance on how to quickly collect the state of services on the system.

Services often fail silently. A container can report “running” while the application inside has crashed. A database can be online but unable to accept writes because the disk is full. A scheduled job can fail every execution for weeks, but I won’t notice until I need its output.

I handle these risks with a general health-check section: a set of commands I can run in sequence to verify that each component of the system has what it needs to do its job.
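A minimal sketch of that broad first pass, assuming a Linux host; the unit names (nginx, postgresql) and the docker step are placeholder examples for your own stack:

```shell
# Broad first-pass health check: is every critical component at least running?
# Unit names below are placeholders; substitute your own services.
for unit in nginx postgresql; do
    systemctl is-active --quiet "$unit" 2>/dev/null \
        && echo "OK   $unit" \
        || echo "FAIL $unit"
done

# Containers report their own state; "running" here is necessary, not sufficient.
docker ps --format '{{.Names}}\t{{.Status}}' 2>/dev/null || echo "docker not available"
```

The point is a single copy-paste block that yields one status line per critical component, so a glance tells you where to drill in.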

A good health check tests the delivery experience that a service is supposed to facilitate. For a web application, that means hitting an endpoint and confirming a valid response. For a database, that means running a query. For an API, that means making a real request. Checking whether the process is alive tells us something, but not enough. “Is the process delivering its feature?” tells us whether the system is actually working.
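As a sketch, the difference between an aliveness check and a delivery check might look like this; the endpoint URL, database name, and user are hypothetical examples:

```shell
# Aliveness: the process exists (weak signal)
pgrep -x nginx >/dev/null && echo "nginx process alive"

# Delivery: the service actually serves its feature (strong signal).
# The health endpoint and database credentials below are made-up examples.
curl -fsS -o /dev/null http://localhost:8080/health && echo "web app serving"
psql -h localhost -U app -d appdb -c 'SELECT 1;' >/dev/null && echo "database accepting queries"
```

`curl -f` is the key flag: it makes HTTP error statuses exit nonzero, so the check fails closed instead of treating a 500 page as success.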

Next I need a conceptual lay of the land for the services running on the system. My operations file addresses this with an architecture overview near the top. The document lists every service, how it was installed (container, systemd unit, native process), what port it listens on, and what it depends on. A diagram showing how traffic flows through the system works well for understanding relationships. Together they give two views of the same information, optimized for different questions: “What software is on this system?” versus “What services and ports are listening on it?”

When something breaks, I need to know what else might be affected. When I want to add a new service, I need to know which ports are already taken and which dependencies exist. Without an inventory, I am reasoning about a system I cannot fully describe.
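One way to build (and later re-verify) the port inventory is to ask the kernel directly. A sketch for Linux, reducing `ss` output to a proto/port/process line per listener (on macOS, `lsof -iTCP -sTCP:LISTEN -n -P` plays the same role):

```shell
# List listening sockets as: protocol, port, owning process
ss -tulnp 2>/dev/null | awk 'NR>1 {split($5, a, ":"); print $1, a[length(a)], $NF}'
```

Diffing this output against the port table in Operations.md is a quick way to catch services that were added but never documented.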

Generally speaking, I always run the same set of discovery commands to figure out the state of the system. I find I don’t retain all of the commands I need for checking systemd, Docker containers, nginx, and other services. To overcome my failing memory, I use a Python script to collect the discrete details of the system, and I use an AI agent to infer the details and rationale behind it. If the agent doesn’t get it right, I correct it and update the documentation.

With my current operating practices, I don’t have to keep dragging out my operations diaries to rediscover the state of the system. I can reapply agents to maintain a self-updating operations file that gives me the details I need for operating the system and for troubleshooting and tuning its core services.

I recommend writing this section when the system is healthy and there is time to verify each entry. Run through the services, confirm the ports, confirm the install methods, trace the dependency chains. Writing documentation forces you to obtain a level of understanding that casual troubleshooting does not: you cannot write down how a service connects to its database without confirming you actually know which database it uses. The act of writing documentation helps you discover surprises.

How Do I Fix Things When They Break?

When something goes wrong, we need a way to diagnose the problem and a record of how similar problems were solved before.

I build the troubleshooting section from specific incidents, each following a pattern: symptoms observed, commands run to investigate, root cause found, fix applied. The most valuable entries are the ones written right after spending two hours solving a problem, while the details are still fresh. In the future, we’ll solve the same problem in five minutes.
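As a sketch, one such entry might look like this; the incident shown is a made-up example, not from a real system:

```markdown
### Symptom: web UI returns 502 Bad Gateway

**Observed:** nginx up, upstream app container restarting in a loop.
**Investigated:** `docker logs app` showed OOM kills; `free -h` confirmed memory pressure.
**Root cause:** container memory limit too low after a dependency upgrade.
**Fix:** raised the limit in docker-compose.yml and restarted the stack.
**Wrong turns:** spent 20 minutes on nginx config before checking the container logs.
```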

I try to organize this section by symptom rather than by cause. When something breaks, I know what I am seeing (the service is unresponsive, the page returns an error, the log is full of a particular message). I don’t know why at that point. Entries organized by symptom let me match what I observe to a known pattern without already knowing which components to check.

I try to capture the wrong turns too. If I spend thirty minutes investigating a configuration issue before discovering that the real problem was an upstream provider outage, that sequence belongs in the entry. Next time I see the same symptoms, I check the upstream provider first and save myself from unnecessary detours.

How Do I Know Something Is About to Break?

The operations file addresses potential failures with a section on what to watch and how to check it. For each item, I document the command that checks its current status.
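A sketch of one such entry, a disk-usage check with a warning threshold; the 80% figure is an arbitrary example, not a recommendation:

```shell
# Warn when any mounted filesystem passes 80% usage
df -P | awk 'NR>1 && $5+0 > 80 {print "WARN", $6, "at", $5}'
```

Silence means nothing crossed the threshold; any output names the mount point that needs attention before it becomes an outage.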

Known Issues, Constraints, and Workarounds

Every system has compromises. A partition that turned out to be too small. A driver that behaves unpredictably under certain conditions. I capture these in a Known Issues section.

Keeping It Alive: The Operations File as a Living Document

An operations file written once and never updated becomes a liability that provides false confidence. We trust it, act on stale information that no longer reflects the system, and make things worse.

I update the file during all maintenance. When I add a service, I add it to the inventory before moving on to the next task. When I solve a problem, I write the troubleshooting entry while the details are fresh. When I discover a new constraint, I add it to known issues immediately. The file grows with the system instead of drifting away from it. I leverage coding agents to troubleshoot in new ways. I also use coding agents to update the documentation.

A changelog section anchors the document. Every significant change gets a dated entry describing what changed, why it changed, and what I learned in the process. The emphasis belongs on reasoning, because the reasoning behind a decision decays fastest. Capturing the time of the decision makes tech debt manageable.
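A dated entry with the reasoning front and center can be as small as this (a hypothetical example):

```markdown
## 2025-06-14: Moved Postgres from Docker to systemd

**What:** replaced the postgres container with the distro package.
**Why:** container restarts were corrupting the WAL on unclean shutdowns.
**Learned:** bind-mounted data directories need matching UID/GID on the host.
```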

Over time, this section becomes institutional memory. When I encounter a configuration I do not recognize, the changelog tells me when it was added and what problem it was solving. That context is the difference between “I should not touch this because I do not understand it” and “I understand why this is here and whether it still applies.”

Getting Started: What to Write Down First

If this all sounds like a lot, don’t panic. The scope of a complete operations file can feel paralyzing when you are starting from nothing. So don’t aim for completeness. Start from what you already know and build it over time. Also, don’t start from scratch: there is a tool at the bottom of this post you can use for your initial discovery.

Speedy Navigation of Expanding Markdown Files

As the operations file grows, navigation gets harder. You can use a tool like mindmap-cli to generate a mind map from a markdown file. A mind map generated from the markdown source gives you a bird’s-eye view of the whole structure, collapsible and clickable, without maintaining a separate document. The markdown file itself is for depth. The mind map is for orientation. Together they let someone operate the system without reading a thousand lines of operations manuals.

Build an Operations.md file for Your System

The tool I mentioned at the top, Operations Discovery Mechanism, automates the hardest part of getting started: the initial inventory. It collects real data from the system using platform-native commands (brew services, launchctl, and lsof on macOS; systemctl, journalctl, and ss on Linux), structures it as JSON, and renders a complete Operations.md with copy-paste-ready commands for every service it finds.

Create an Operations directory in the home directory of your target system, then clone the repository into it.

gh repo clone CaptainMcCrank/OperationsDiscoveryMechanism

The workflow takes two steps. First, run the collector script to produce a JSON snapshot of your system. Second, run the renderer to turn that snapshot into a structured markdown document. There are no dependencies beyond Python 3.9 and the standard library.

# macOS
python3 mac_system_info.py -o system_info.json
python3 generate_operations.py -o Operations.md

# Linux
python3 linux_system_info.py -o system_info.json
python3 generate_operations.py -o Operations.md

For richer documentation, feed the collected JSON to Claude Code with the prompt template embedded in the README. Claude can infer relationships between services, add descriptions of what each one does, and flag potential issues that the script collects but cannot interpret.

The generated Operations.md covers the architecture overview, the service inventory, the quick-reference commands, the port map, and the disk layout. Over time, if you’re disciplined about updating the troubleshooting entries, known issues, and changelog, the document becomes extremely powerful. You’ll be able to hop back onto old systems with ease, and you’ll be in a position to start doing interesting experiments with agentic operations: agents can log into a system and acquire context without spending tokens on system enumeration.

Allocating RAM for GPU performance on self-hosted LLM systems with integrated system & GPU RAM

Are you sure that the system you’re running self-hosted LLMs on has properly allocated its GPU memory?

I was doing some work on my 128 GB Ryzen AMD mini PC. I operate this machine as a Linux server dedicated to self-hosted AI infrastructure. I had run into a performance problem where I saturated all resources and experienced a hard lock. After rebooting to do some troubleshooting, I discovered that my system did not appear to be operating with 128 GB of RAM.

Diagnosing the problem

This machine’s purpose is hosting local AI inference. The product listing indicated 128 GB of unified memory. Did GMKTec/Amazon ship me the wrong unit? I checked system memory:

$ free -h
  Mem: 62Gi

Linux reported sixty-two gigabytes. Then I checked the GPU’s VRAM total from the kernel:

$ cat /sys/class/drm/card*/device/mem_info_vram_total
  68719476736

Sixty-four gigabytes on the graphics side, sixty-two visible to the operating system. Added together, that accounts for roughly 126 gigabytes, but the OS alone saw only half of the memory I thought I paid for.

Memory in integrated GPU/CPU systems

The processor on this system carries an integrated GPU. Unlike desktop workstations, there is no discrete graphics card on a separate board with dedicated memory. Every byte of physical RAM lives in one unified pool of LPDDR5X shared between the CPU and GPU. I should have known this, but I didn’t; I haven’t built a gaming PC in over 20 years. On this hardware, the distinction between “system memory” and “graphics memory” exists only in firmware: the BIOS has settings for splitting memory between the CPU and the GPU.

Integrated graphics have worked this way for a while. Intel’s onboard GPUs quietly borrow anywhere from 128 megabytes to a gigabyte or two of system RAM.

The Intel 810 chipset (1999) was Intel’s first integrated graphics chipset and used what Intel called “Unified Memory Architecture” (UMA). It borrowed 7-11 MB of system RAM for the GPU’s frame buffer, textures, and Z-buffer. This document describes the Graphics and Memory Controller Hub directly.

Intel later formalized this as DVMT (Dynamic Video Memory Technology), which let the graphics driver and OS dynamically allocate system RAM to the iGPU based on real-time demand. The BIOS setting “DVMT Pre-Allocated” (letting you choose 32 MB, 64 MB, 128 MB, etc.) became a standard fixture on Intel-based motherboards for the next two decades. https://www.techarp.com/bios-guide/dvmt-mode/ documents the DVMT modes in detail.

Intel’s own support documentation still explains this architecture for current hardware: https://www.intel.com/content/www/us/en/support/articles/000020962/graphics.html confirms that integrated Intel GPUs use system memory rather than a separate memory bank.

The kernel-level term is “stolen memory” (or Graphics Stolen Memory / GSM). https://igor-blue.github.io/2021/02/10/graphics-part1.html documents how the UEFI firmware reserves a region of physical RAM for the GPU through the Global GTT, managed by hardware and invisible to the OS’s general memory pool.

This design lineage runs from the Intel 810 in 1999 through every Intel iGPU since, with the same fundamental mechanism: firmware carves system RAM away from the OS and hands it to the GPU. The Strix Halo platform applies the same idea at 1000x the scale.

I had never noticed because I’ve been operating on macOS for the last 15 years.

The M-series chips (M1 through M4) share the same fundamental architecture: CPU, GPU, and Neural Engine all access one physical pool of memory. But Apple and AMD made different choices about how to manage that pool.

On Apple Silicon, macOS sees all the memory and allocates it dynamically. If you buy a MacBook with 64 GB of unified memory, top and Activity Monitor report 64 GB. The GPU draws from that pool on demand. The CPU draws from it on demand. No firmware partition divides them. When the GPU needs 20 GB for a rendering task, it gets 20 GB. When it finishes, that memory returns to the general pool. The OS arbitrates in real time.

But on this purpose-specific machine, the default resource allocation produces performance degradation. GMKTec appears to assume Windows and gaming will be the main applications for this hardware. If your objective is running LLMs locally, the default config is going to need adjustments.

I reached out to GMKTec to ask whether there was a hardware problem. They indicated that the default config assigns 64 gigabytes to graphics and 64 gigabytes to the system. To fix this inefficient configuration, I needed to get into the BIOS and adjust the split.

Adjusting memory allocated to GPUs

That raised a practical question: how much memory should I allocate to the Host OS versus the GPU?

My system has Docker containers handling most of the system workload: a search engine, a workflow automation platform, a CMS, a kanban board, a chat interface for local models and the databases backing all of them. My Gnome/COSMIC desktop session was also running, plus a couple of terminal processes consuming their share of memory. Total system memory use hovered around 12 gigabytes. Fifty gigabytes of allocated system RAM sat idle.

The GPU told the same story from a different angle. Of its 64-gigabyte allocation, 330 megabytes held active data. The local inference server sat installed and waiting. Models rested on disk, ready to load, but nothing filled the VRAM. The GPU’s enormous partition accomplished almost nothing.

$ cat /sys/class/drm/card*/device/mem_info_vram_used
348594176

That returned 348,594,176 bytes, which is roughly 330 MB. The companion command for the total allocation was:

$ cat /sys/class/drm/card*/device/mem_info_vram_total
68719476736

That returned 68,719,476,736 bytes, which is 64 GB.

Both values come from the amdgpu kernel driver, which exposes them as sysfs files under /sys/class/drm/card*/device/. The mem_info_vram_used file reports how much of the GPU’s allocated partition is actively holding data at that moment. The mem_info_vram_total file reports the size of the partition itself.
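Both files report raw bytes, so a small conversion makes the numbers readable. This sketch assumes the amdgpu sysfs paths shown above and simply renders each value in GiB:

```shell
# Report the GPU partition size and current usage in GiB (amdgpu sysfs paths)
for f in /sys/class/drm/card*/device/mem_info_vram_total \
         /sys/class/drm/card*/device/mem_info_vram_used; do
    # Skip silently on machines without an amdgpu device
    [ -r "$f" ] && awk -v f="$f" '{printf "%s: %.1f GiB\n", f, $1/2^30}' "$f"
done
```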

This machine was built to run large language models. I wasn’t getting the utilization I expected. A 70-billion parameter model quantized to Q8 needs roughly 70 gigabytes of VRAM. With this system’s default, larger models don’t fit. I rebooted into the BIOS and bumped the GPU allocation to 96 gigabytes. The system side drops to 32 gigabytes, which still exceeds my current workloads by a wide margin. Twelve gigabytes of active use against 32 gigabytes of capacity leaves generous headroom for growth.

Post-fix memory layout

Aside on model quantization

When you run something like ollama pull deepseek-coder-v2:16b, the quantization level is baked into that specific model file. If you look at the Ollama model library, you’ll typically see tags like:

  • model:7b-q4_0
  • model:7b-q5_K_M
  • model:7b-q8_0
  • model:7b-fp16

The Q4, Q5, Q8, fp16 suffixes indicate the quantization level. Lower numbers mean more compression (smaller file, less VRAM, lower quality). Higher numbers and fp16 mean less compression (larger file, more VRAM, better quality). Quantization reduces the numerical precision of a model’s weights. A weight stored at fp16 uses 16 bits. Q8 uses 8 bits. Q4 uses 4 bits. Fewer bits mean the weight carries a rounded approximation of its original value instead of the precise one the model learned during training.
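The back-of-the-envelope VRAM math follows directly from bits per weight: parameters times bytes per weight. This sketch ignores KV-cache and activation overhead, and treats Q4 as exactly 0.5 bytes per weight (real GGUF quant formats carry a little extra metadata):

```shell
# Approximate weight-storage size for a 70B-parameter model
# at common quantization levels (bytes per weight: q4=0.5, q8=1, fp16=2).
for level in "q4_0 0.5" "q8_0 1" "fp16 2"; do
    set -- $level
    awk -v name="$1" -v bpw="$2" 'BEGIN {printf "%s: ~%.0f GB\n", name, 70e9*bpw/1e9}'
done
```

The Q8 row lands at roughly 70 GB, which is why a 70B Q8 model cannot fit in a 64 GB GPU partition but fits comfortably in 96 GB.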

Where you notice performance that is “higher quality”:

  • Complex reasoning chains. A Q4 model is more likely to lose the thread on multi-step logic, math problems, or long code generation. The accumulated rounding errors across billions of weights degrade the model’s ability to hold coherent structure over long outputs.
  • Nuance in language. Word choice becomes slightly flatter. A fp16 model might select a precise, unexpected word. The Q4 version gravitates toward more generic alternatives. The difference is hard to spot in a single response but becomes noticeable over a session.
  • Instruction following. Heavily quantized models drift from instructions more often. They might ignore a formatting constraint, repeat themselves, or partially answer a question. The precision loss makes the model slightly less responsive to the signal embedded in your prompt.
  • Factual reliability. Q4 models hallucinate marginally more. The degraded weights weaken the model’s ability to distinguish between what it “knows” confidently and what it is guessing at.

Where you probably won’t notice “lower quality” quantization levels:

  • Simple question and answer.
  • Casual conversation.
  • Summarization of short texts.

Ollama does not re-quantize a model at load time; you pick your quantization when you pull the model. With this change, I can now pull larger, higher-precision models for experimentation and training, which yields better inference quality and a far better experience with local models.

Hope this helps. To summarize:

A few systems released in recent months are good candidates for running local LLMs. If you get a mini PC with AMD hardware, you’ll likely need to adjust the RAM split for your inference goals. I covered how a performance problem led me to discover issues with my config, and I summarized how to reason about changing the config for better performance.

Want help building self-hosted LLMs? Let’s connect!

Post Script:
If you’re exploring buying hardware for self-hosting and considering an AMD GPU, you should absolutely take some time to read https://strixhalo.wiki/Guides/Buyer’s_Guide

The Strix Halo wiki has a ton of valuable and relevant resources.