Reproducible Sneaky Wifi Part 2

Last week I left you with a nail-biter: I ran a sneaky WiFi network near a weird marathon in 2018 and captured close to 200 devices. I reproduced the experiment this fall. How did it go in 2025? Terrible in some regards, but awesome in terms of prototyping acceleration: an experiment that took 2 months in 2018 took me 4 days in 2025.

Time lapse of runners

The Bad

In the 2025 experiment, I caught a grand total of 18 devices.

Does this mean mobile phones are more secure? Was it the exact same experiment? No!

Low Participant Turnout: My WiFi hotspot was active starting at 7 am. The marathon was scheduled to start at 8 am. We didn't see a single runner till ~9:15 am. When runners did start arriving, there were far fewer of them than in past years: the 2018 marathon spanned two days, this year's race was only one, and the cohort of runners was significantly smaller.

Bad SSID choices: This attack depends upon your ability to anticipate a WiFi SSID that your targets have an affinity for. The SSID I used in 2018 wasn't going to work because it has been deprecated. I went with "Starbucks WiFi" initially, but this only caught 2 devices. The lack of "Starbucks WiFi"-tuned devices is an interesting indicator of how times have changed. It used to be that mobile phone owners needed to attach to WiFi to use email or browse the web, because cellular plans did not have unlimited data: you either ran out of data for the month or were hit with a large bill if you used cellular for data. People used to go to coffee shops to "work" on their phones and laptops. Now you're really there to socialize or caffeinate. I also wonder if Starbucks' popularity has declined; in the last 10 years, I've only drunk Starbucks out of necessity.

So after a couple of hours of watching only 2 attaches, I yielded to temptation and changed the SSID to "xfinitywifi." The xfinitywifi SSID is a controversial WiFi network vended by Comcast, exclusive to Comcast customers.

You can use wigle.net to see the most popular active SSIDs:

Changing to xfinitywifi felt like desperation! Comcast does not have much presence in the Snoqualmie valley, but I reasoned that most of the runners were probably coming from cities where Comcast is dominant, e.g. Bellevue, Issaquah and Redmond. I managed to catch 16 more devices over the next 4 hours. The count was so small I didn't bother to keep my logs, but here are some screenshots to give you a feel for what I experienced:

Raspberry Pi with AWUS036ACH WiFi adapter & home built dual yagis
Paperwhite display
Custom status monitor


This experiment agitated me greatly. I know there are still problems related to WiFi offloading, but I only caught 18 devices. I didn't spend enough time researching SSIDs, and the end result was a low attach count.

Despite my grumpiness about the data, this experiment was a major success.

Did you notice the external WiFi adapter above? How about the nice Paperwhite display presenting the status of the device? My monitoring script was far more sophisticated than a tail of hostapd logs. I didn't have to write this code or fiddle with hostapd configurations or nftables rules. I didn't have to find the right kernel headers and compile WiFi drivers. I didn't have to flex my terrible design skills. I knew the features I wanted, and I gave my agents direction on how to deploy them.

I was able to successfully produce an IoT prototype with complex hardware dependencies in 4 days.

The Good


I implemented a working prototype of a custom WiFi hotspot with a Paperwhite display, an external WiFi adapter and a Yagi antenna in 4 days.

Methodology

Claude Code & Pre-prompting strategies

I leveraged Claude Code for most of my work. I created a working directory and invoked Claude with a 1,500-line pre-prompt for requirements analysis and planning. This pre-prompt produced Ansible playbooks that take advantage of my firmware-development caching containers. The pre-prompt addresses Requirement Exploration, Architecture Safety, Known Good Deployment Patterns, Domain Specific Knowledge, and Documentation & Maintenance. I've been iterating on this prompt for about 6 weeks across roughly 5 other projects. I constructed a separate, 166-line pre-prompt that handles deploying code, code analysis, system access, and systematic troubleshooting and refactoring to address discovered defects.
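For a sense of shape, here is the planning pre-prompt reduced to its section headings. The headings come from the topics named above; the one-line glosses are my illustrative paraphrase, not the actual prompt text:

```
Firmware planning pre-prompt (outline; ~1,500 lines in full)
1. Requirement Exploration:        interrogate the goal before planning anything
2. Architecture Safety:            guardrails for what a build may and may not touch
3. Known Good Deployment Patterns: Ansible playbooks, caching containers
4. Domain Specific Knowledge:      single-board computers, drivers, peripherals
5. Documentation & Maintenance:    every build leaves documentation behind
```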

Development Loop

The normal lifecycle of developing a reliable working prototype seems to take about 3-4 build cycles.

My agent would serially perform the following operations during the build process:

  • Initiate a build
  • Discover defects during build process
  • Troubleshoot them on the recipient system
  • Make corrections to the original build playbooks
  • Resume the build at the corrected defect
  • Complete a working build.

If the build experienced errors, I waited for a complete build and then started again on a fresh recipient image. I kept seeing improvements until the build process ran reliably without errors.
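The cycle above, reduced to a toy shell sketch: a stub `build_step` stands in for a real `ansible-playbook` run (Ansible's `--start-at-task` flag is what makes the "resume at the corrected defect" step cheap; the playbook and task names would be your own):

```shell
#!/bin/sh
# Toy version of the build loop: a stub build_step stands in for a real
# `ansible-playbook site.yml --start-at-task "<failed task>"` invocation.
attempt=0
build_step() {
  attempt=$((attempt + 1))
  # Pretend the defect is fixed on the third cycle.
  [ "$attempt" -ge 3 ]
}

until build_step; do
  echo "build failed on attempt $attempt; fixing playbook and resuming"
done
echo "working build after $attempt attempts"
# prints two failure lines, then: working build after 3 attempts
```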

Throttling

My biggest challenge was rate limiting:

My agents hit my 5 hour Anthropic token limit on the $20 plan in about 2 hours. During this 4 day period, I scheduled my day around throttling limits. I tried to make sure that some building happened while I slept. Two days before the marathon, I upgraded to the $200 plan. My iOS screen time report was 1 hour during that week.

I didn't have to write any code to make this project work. That's not to suggest that anybody could do this experiment. I was successful because I knew exactly which software libraries I wanted deployed and how I wanted them tuned. I regularly had to intervene when the agents proposed bad plans. But I'm now approaching a point where my single-board-computer development process is automated. It felt like having a mildly competent apprentice.

Over the last few years, I've been able to build a range of Raspberry Pi prototypes. All of them took a lot of labor. My build process made prototyping faster, but it still took me several months to work out the details of each project:

Making reproducible builds was expensive and typically took 2-3 months; I'd steal spare time on evenings or weekends to work on projects. The greatest costs came from the testing and validation needed to create durable, reproducible firmware images. With a combination of tasteful pre-prompts, custom agents and an automated build process, I can now turn around reproducible firmware builds in less than a week.



Who needs rapid, reproducible prototyping like this? A few profiles come to mind:

1. Software & Hardware Testing Houses

You need repeatable, cost-effective environments to validate new software and hardware under real-world conditions, but setting up and tearing down test rigs is slow, inconsistent, and prone to configuration drift.


2. Managed Security Service Providers (MSSPs)
You need deployable, trusted network nodes inside customer environments for monitoring, detection, and incident response — but sourcing, configuring, and reproducing reliable hardware platforms across dozens of clients eats up valuable engineering time.


3. IoT Manufacturers

You want to prove out your next device concept quickly, with working prototypes that demonstrate connectivity, edge processing, and security — but your in-house teams are bottle-necked by long development cycles and unpredictable integration issues.

4. Agricultural & Rural Networking Providers

You need rugged, affordable devices to extend connectivity into fields, barns, and remote communities — but commercial gear is overpriced, hard to customize, and not designed for rapid prototyping or deployment in challenging environments.

5. Telecom & Network Operators
You need cost-effective, rapidly deployable edge devices for monitoring network performance, testing bandwidth in rural or urban environments, or validating new customer premises equipment—but traditional hardware procurement cycles are too slow and expensive.

6. Smart City & Infrastructure Providers
You’re deploying IoT devices to manage traffic lights, utilities, or environmental sensors across a city, but you need quick, low-cost prototypes to validate integrations before scaling to tens of thousands of units.

7. Educational & Research Institutions
Your students or researchers need reproducible, documented environments for experimentation with hardware, networking, or AI, but setting up reliable builds consumes valuable teaching and research time.

8. Healthcare & MedTech Device Innovators
You’re exploring connected health devices—remote patient monitors, smart diagnostic tools, or secure data collection endpoints—but you need a prototype that proves functionality while meeting strict reliability and security requirements.

9. Defense & Public Safety Contractors
You’re tasked with rapidly developing ruggedized, secure edge devices for field communication, surveillance, or sensor fusion, but your internal teams can’t keep pace with the prototyping demands.

10. Environmental & Energy Monitoring Firms
You need distributed, low-power devices to collect data in harsh or remote environments—forests, farms, offshore rigs, or mines—but your current prototypes fail due to durability or reproducibility issues.

11. Media & Event Production Companies
You want portable, reliable devices for live-streaming, crowd analytics, or on-site Wi-Fi provisioning at concerts and sporting events, but consumer gear isn’t flexible enough and enterprise hardware is overkill.

12. Transportation & Logistics Providers
You’re experimenting with fleet tracking, warehouse automation, or smart inventory systems, but you need a way to test edge hardware integrations quickly before committing to full-scale rollouts.

13. Industrial Automation & Robotics
You need controllers and monitoring systems for robots, conveyors, or factory IoT sensors, but the cost and time of custom PLCs and proprietary systems make it hard to experiment quickly.

14. Consultancies & Systems Integrators
You’re responsible for stitching together hardware and software for your clients, but you lack a streamlined way to spin up reproducible prototypes that demonstrate proof-of-concept value quickly and reliably.

Sneaky wifi near weird marathons (Part 1)

In 2018, I ran a WiFi network with a well-known public SSID off a Raspberry Pi and ended up catching lots of marathoner phones. My network was not configured for sniffing, purely attaching: phones with the right WiFi settings would automatically attach to the network.

My interest was in exploring whether phones promiscuously attach to WiFi networks they recognize. My network didn't vend Internet access, which means I couldn't spy on people's traffic. But I did vend DHCP to anyone who tried to connect, which enabled me to gather some data about the devices that attached.
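Vending DHCP without upstream Internet takes only a few lines of dnsmasq configuration. A sketch of the shape, not my actual 2018 config (the interface name and address pool are illustrative):

```
# /etc/dnsmasq.conf (sketch; interface and pool are illustrative)
interface=wlan0                           # only answer on the AP interface
dhcp-range=192.168.4.10,192.168.4.200,1h  # lease pool and lease time
log-dhcp                                  # log every lease negotiation in detail
```

The `log-dhcp` option is what turns a dumb hotspot into a data collector: every DISCOVER/REQUEST exchange gets logged along with the client MAC.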

The hotspot wasn't operated from my house; I had to do a little work to get the network to the runners. I live in the Pacific Northwest, where rain is an issue. Back then, I didn't know enough antenna theory to broadcast long distances, so my setup was janky. If you looked around, you'd see a Tupperware box seemingly left behind during some spring cleaning.

After several weeks of iteration, I was ready for the marathon. The race is called “Beat the Blerch.” The name is a tribute to the desire to quit. Running is about ignoring that desire. The organizers have cake stations and couches out on our trail to tempt people into taking a break. Some runners wear inflatable t-rex costumes. Pretty gross!

I turned my hotspot on and started looking at logs. When you monitor hostapd's logs, you can see the MAC addresses of the devices that attach. This information can be used to identify the type of device that connected. Over the course of the marathon, I saw an interesting diversity of devices attach:

You can see that Apple dominated the running community. It’s interesting to see a Blackberry device in 2018. Someone was in a committed relationship with their phone!
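Pulling those MACs out is a one-liner, because hostapd announces each association with an `AP-STA-CONNECTED` line. A minimal sketch, with invented sample log lines standing in for a real log (the first three octets of each MAC, the OUI, are what map to a manufacturer):

```shell
# Extract and count unique client MACs from hostapd output.
# The sample lines below stand in for a real log such as /var/log/hostapd.log.
cat > /tmp/hostapd-sample.log <<'EOF'
wlan0: AP-STA-CONNECTED a4:83:e7:11:22:33
wlan0: AP-STA-CONNECTED dc:a9:04:aa:bb:cc
wlan0: AP-STA-DISCONNECTED a4:83:e7:11:22:33
wlan0: AP-STA-CONNECTED a4:83:e7:11:22:33
EOF

# Unique devices seen (re-associations collapse to a single entry):
grep 'AP-STA-CONNECTED' /tmp/hostapd-sample.log \
  | awk '{print $3}' | sort -u
```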

This project worked because carriers have a "WiFi offload" strategy. Unlimited data is relatively new, and carriers were still scrambling to provide transport that met customer demand. Phones have been tuned to attach to recognized networks in order to offload traffic from metered cellular connections. I suspect that some day data caps will be reintroduced thanks to the popularity of 4K streams on 3-inch displays. Time will tell.

There is another fun property of my data! I can graph the attachment rate of runners passing during the marathon. The slope is steep at the start of the race. The competitive runners quickly disappear and the slope flattens. Our graph is pretty boring till we get to the end of the marathon. Is this because the slowest runners don't give up?

NO! There's a 10k happening as well, and it happens to turn around at the end of the trestle. The shape of our graph changes because the 10k participants start showing up. Short races are more popular! We see a much steadier rate of attaches as a result. As we move to the right, the marathoners are on their return. The tangent-like shape isn't runner resilience; the steepest slopes represent folks doing harder things.
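The attach-rate graph falls out of the same logs: bucket the connect timestamps and count. A toy sketch over syslog-style timestamps (the sample data is invented for illustration; on a live system you would feed the real log in instead):

```shell
# Count attaches per hour from timestamped connect events.
cat > /tmp/attaches.log <<'EOF'
Sep 21 08:12:03 pi hostapd: wlan0: AP-STA-CONNECTED aa:bb:cc:00:00:01
Sep 21 08:47:10 pi hostapd: wlan0: AP-STA-CONNECTED aa:bb:cc:00:00:02
Sep 21 09:05:44 pi hostapd: wlan0: AP-STA-CONNECTED aa:bb:cc:00:00:03
Sep 21 09:31:02 pi hostapd: wlan0: AP-STA-CONNECTED aa:bb:cc:00:00:04
Sep 21 09:58:19 pi hostapd: wlan0: AP-STA-CONNECTED aa:bb:cc:00:00:05
EOF

# Field $3 is HH:MM:SS; keying on the hour gives one bucket per hour.
awk '/AP-STA-CONNECTED/ {split($3, t, ":"); n[t[1]]++}
     END {for (h in n) print h ":00", n[h]}' /tmp/attaches.log | sort
# -> 08:00 2
#    09:00 3
```

Finer buckets (e.g. 15 minutes) just mean keying on more of the timestamp; the counts per bucket are what you'd hand to a plotting tool.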

The run spanned two days. The second day was rainy, which significantly dampened participation:

On day 1 I caught about 155 devices, but day 2 only brought us about 40.

This was a fun project, but it was scrappy. When I started off, I didn't really know how to configure hostapd or dnsmasq. I had to figure out a bunch of implementation details on the fly, and I didn't document the project. It took several weeks, and I got lucky: I had enough saved logs and sed magic to generate a cool-looking set of graphs. But compiling the WiFi drivers was a pain. My setup had to be in close proximity to the race because the antenna set was not optimized for outdoor transmission. It was not a reproducible project, and it certainly wasn't stable.
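For the record, the hostapd side of an open hotspot like this boils down to a handful of config lines. A sketch under assumed values (interface, SSID and channel are illustrative, not what I ran):

```
# /etc/hostapd/hostapd.conf (sketch; values are illustrative)
interface=wlan0
driver=nl80211      # standard Linux nl80211 driver interface
ssid=xfinitywifi    # an SSID targets already trust
hw_mode=g           # 2.4 GHz
channel=6
# no wpa= settings: an open network, like the public hotspots it imitates
```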

2025

The annual Blerch marathon ran past my house earlier this month.

Four days before the event, I put a challenge in front of myself: create a reproducible version of the 'catcher' project using my LLM-supported automation.

I'm more experienced now and, consequently, less interested in proving vulnerabilities. I'd prefer to build enduring solutions. In this case, my goal is rapid delivery of IoT prototypes and projects. Anecdotally, I've heard prototyping a first iteration of a complex IoT device takes between 3 and 9 months. For the first run of a prototype, I consider developing a project requirements doc, implementing code, implementing unit and integration tests, and delivering a working implementation to be in scope. Keep in mind: there's considerably more work involved to get from concept to market.

I've been building what I guess are my own custom AI "agents" for almost a year. I've developed some intuition about which tools are useful for quickly building firmware images. I've recently started experimenting with agents that actually deploy and troubleshoot deployments. It's been working so well that it's starting to feel weird; building complex hardware systems shouldn't be this fast. I suspect I can turn a device around in a single day.

My "win conditions" are more about creating a reproducible project than proving vulns. I want to prove that I can quickly turn around a complex project prototype. "Complex" in this case means we include peripherals and inter-component integration. This boils down to 3 goals:

  1. Demonstrate the implementation of an external WiFi adapter for vending the WiFi network. This requires autonomous troubleshooting and configuration work, because there are complex design and implementation decisions that come with activating AP mode. An AI agent can speed-run that process. It would also demonstrate an agent's ability to troubleshoot driver compilation errors.
  2. Implement a Paperwhite display that presents the status of the Pi, including the status of the WiFi network and any attached devices. Most IoT has some kind of interface that people interact with, and I wanted to demonstrate that a peripheral-based UI can be implemented with agents.
  3. Implement the whole project via custom deployment and troubleshooting agents. Last time, I was in my office on weekends and evenings at the expense of time with my kids. I wanted to wield my AI towards productivity gains.

How did it work out? Hit refresh for about a week and I’ll include a link to Part 2!

Resolving Raspberry Pi architecture conflicts when running apt-get update

The 32-bit version of Raspberry Pi OS has an architecture of armv7l.

The 64-bit version of Raspberry Pi OS has an architecture of arm64.

There may be circumstances where your system has some 64-bit software included (maybe some driver-compilation support). If you constrain your apt repositories to 32-bit sources, you will run apt-get update one day and see an error that says:

Skipping acquire of configured file 'main/binary-arm64/Packages' as repository 'http://aptcache-ng.local:3142/raspbian.raspberrypi.org/raspbian bookworm InRelease' doesn't support architecture 'arm64'

This shouldn't matter! I only want 32-bit software. Aren't I running 32-bit Raspberry Pi OS? Here's how you can determine what's deployed:

uname -a
Linux ansibledest 6.12.40-v7+ #1896 SMP Thu Jul 24 15:19:33 BST 2025 armv7l GNU/Linux

We've confirmed I'm running 32-bit Raspberry Pi OS. Why is the system attempting to pull arm64 package indexes if we're armv7l?

Spoiler: it's because dpkg on your system has support for foreign architectures, obviously:

dpkg --print-foreign-architectures
arm64

We had foreign-architecture support enabled. Some packages install arm64 builds (for compatibility reasons???), and if your apt repository list intentionally avoids 64-bit sources, you'll get errors that prevent updates. Annoying. 'Premature' optimization has consequences.

Removing foreign architecture support

Removing a foreign architecture is a two-step operation. You'll have to pull any arm64 packages off the system and then disable foreign-architecture support.

Step 1: Purge all arm64 software

sudo apt-get remove --purge <package-name>:arm64
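Step 1 begs the question of which installed packages are arm64 in the first place. On a live Pi, `dpkg -l` answers it; here the same filter runs over inlined sample output so you can see the shape (the sample packages are illustrative):

```shell
# List installed packages whose architecture is arm64.
# On a live system: dpkg -l | awk '/^ii/ && $2 ~ /:arm64$/ {print $2}'
cat > /tmp/dpkg-sample.txt <<'EOF'
ii  libc6:armhf      2.36-9+rpt2  armhf  GNU C Library
ii  libgcc-s1:arm64  12.2.0-14    arm64  GCC support library
ii  raspi-config     20230214     all    Raspberry Pi configuration tool
EOF

awk '/^ii/ && $2 ~ /:arm64$/ {print $2}' /tmp/dpkg-sample.txt
# -> libgcc-s1:arm64

# Each result is what you hand to apt-get, e.g.:
#   sudo apt-get remove --purge libgcc-s1:arm64
```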

Step 2: Remove foreign-architecture support:

sudo dpkg --remove-architecture arm64

Hope that helps someone!

Mastering apt-get package configurations for Raspberry Pi OS and Raspbian

If you're a Raspberry Pi user, you've likely encountered both the "Raspbian" and "Raspberry Pi OS" names. The switch between them was a significant change that relates to my last post on mastering apt-get. Let's clarify the relationship between these distributions and their package sources.

The Name Change: Raspbian to Raspberry Pi OS

Originally, the official operating system for Raspberry Pi was called “Raspbian,” a portmanteau of “Raspberry” and “Debian.” In May 2020, the Raspberry Pi Foundation renamed it to “Raspberry Pi OS” to better reflect that it’s the official operating system for Raspberry Pi hardware.

Despite this rebranding, the underlying package repositories still use the Raspbian name in many configurations.

Repository Configuration for Raspberry Pi

When you examine /etc/apt/sources.list or files in /etc/apt/sources.list.d/ on a Raspberry Pi, you’ll typically find repositories that look like:

deb http://archive.raspbian.org/raspbian/ bookworm main contrib non-free non-free-firmware rpi

This is the primary repository containing packages rebuilt for the ARM architecture. It’s essentially a recompiled version of Debian packages optimized for ARM processors used in Raspberry Pi. Note the additional rpi component, which includes Raspberry Pi-specific packages not found in standard Debian.

The Raspberry Pi Repository

deb http://archive.raspberrypi.org/debian/ bookworm main

This secondary repository contains Raspberry Pi-specific packages maintained by the Raspberry Pi Foundation. These include:

  • raspberrypi-bootloader: the specialized boot firmware
  • raspberrypi-kernel: the customized Linux kernel
  • rpi-imager: the Raspberry Pi imaging utility
  • various Pi-specific tools and optimized software

Key Differences in Package Sources

Understanding the distinction between these repositories is crucial:

Origin and Maintenance:

  • Raspbian packages (archive.raspbian.org) are community-maintained Debian packages rebuilt for ARM
  • Raspberry Pi packages (archive.raspberrypi.org) are maintained directly by the Raspberry Pi Foundation

Architecture Support:

  • The Raspbian repository targets 32-bit ARM (armhf); the 64-bit Raspberry Pi OS pulls its base packages from Debian's own arm64 repositories instead
  • The Raspberry Pi repository focuses specifically on packages tested and optimized for Pi hardware

Package Selection:

  • Raspbian provides the broad base of software (tens of thousands of packages)
  • The Raspberry Pi repository provides a smaller set of specialized tools and optimizations

Example: A Complete Raspberry Pi OS sources.list Configuration
Here’s what a typical Raspberry Pi OS sources.list setup looks like:

# Raspbian main repositories
deb [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg] http://archive.raspbian.org/raspbian/ bookworm main contrib non-free non-free-firmware rpi
# deb-src [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg] http://archive.raspbian.org/raspbian/ bookworm main contrib non-free non-free-firmware rpi

# Raspberry Pi specific packages
deb [signed-by=/usr/share/keyrings/raspberrypi-archive-keyring.gpg] http://archive.raspberrypi.org/debian/ bookworm main
# deb-src [signed-by=/usr/share/keyrings/raspberrypi-archive-keyring.gpg] http://archive.raspberrypi.org/debian/ bookworm main

Notice that:

  • Both repositories use their own signing keys
  • Source packages (deb-src) are commented out by default to save bandwidth during apt update
  • The rpi component only exists in the Raspbian repository

Best Practices for Raspberry Pi Package Management

  1. Keep both repositories enabled: Both the Raspbian and Raspberry Pi repositories are essential for a properly functioning Raspberry Pi OS system.
  2. Update both repositories: Always run sudo apt update to refresh package lists from both sources.
  3. Be cautious with third-party repositories: The Raspberry Pi has limited resources and architecture-specific requirements. Not all Debian packages will work correctly.
  4. Handle architecture-specific packages carefully: When adding repositories, make sure to use the [arch=armhf] or [arch=arm64] modifiers as appropriate for your Pi model.
  5. Watch for Pi-specific package versions: Sometimes the same package name exists in both repositories, but the Raspberry Pi repository version might have Pi-specific optimizations.
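The architecture modifier from point 4 looks like this in a sources.list entry (the repository URL and keyring path here are placeholders, not a real repository):

```
# Constrain a third-party repository to 32-bit packages only:
deb [arch=armhf signed-by=/usr/share/keyrings/example-archive-keyring.gpg] http://repo.example.com/debian bookworm main
```

Without the modifier, apt asks the repository for every architecture dpkg has enabled, which is exactly how the arm64 update error in my earlier post arises.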

Troubleshooting Raspberry Pi Package Issues

If you encounter package errors on your Raspberry Pi, check:

  1. Repository availability: Ensure both repositories are accessible and not commented out
  2. GPG key validation: Verify that you have the correct signing keys installed
  3. Architecture compatibility: Confirm packages are available for your Pi’s architecture (armhf or arm64)
  4. Repository priority: In case of conflicts, the Raspberry Pi repository should usually take precedence
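That precedence can be made explicit with an apt preferences file. A sketch, assuming default priorities (the priority value is a judgment call; anything above the default 500 wins over the standard repositories):

```
# /etc/apt/preferences.d/raspberrypi (sketch)
Package: *
Pin: origin archive.raspberrypi.org
Pin-Priority: 600
```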

By understanding the relationship between Raspbian and Raspberry Pi OS repositories, you can better manage packages on your Pi and troubleshoot any issues that arise during system updates or package installations.

Conclusion

Understanding how apt-get repositories work gives you more control over your Debian-based system. Whether you’re troubleshooting package issues, setting up a new system, or just curious about how Linux package management works, knowing the ins and outs of repository configuration is invaluable knowledge.

For Raspberry Pi users, the distinction between Raspbian and Raspberry Pi OS repositories adds another layer to consider, ensuring you get both the broad Debian software base and the Pi-specific optimizations that make your small computer run effectively.

By properly managing your sources.list and leveraging the flexibility of repository modifiers, you can create a stable, secure, and well-maintained system tailored to your specific needs.