Vulnerability Prioritization made easy!

“Which vulnerabilities should we fix first?”

This question often leads to confusion, especially for those deeply involved in security. Every company has unique priorities, making it challenging to create a one-size-fits-all approach. Here’s a straightforward method inspired by musicians’ mnemonics to help guide your decisions.

Every Engineer Always Prioritizes Data By Evaluating Risk

This phrase breaks down into key factors to consider:

  • E – Exploitability: How easily can someone exploit the vulnerability?
  • E – Exposure: Is the system connected to the internet or internal?
  • A – Access Required: What level of access does an attacker need?
  • P – Patch Difficulty: How hard is it to fix the issue?
  • D – Data Sensitivity: Does the system handle sensitive information?
  • B – Business Impact: What effect would an exploit have on the company?
  • E – Environmental Mitigations: Are there existing defenses in place?
  • R – Raw CVSS Score: What is the base severity score?

In large IT environments, achieving zero vulnerabilities is ideal but often unrealistic. Resources are limited, so it’s crucial to prioritize effectively. This mnemonic helps ensure you consider all vital aspects when deciding which vulnerabilities to address first.

By evaluating each factor, you can make informed decisions that balance risk and resource allocation, leading to a more secure and efficient system.
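
If it helps to see the mnemonic in action, here is a minimal scoring sketch in bash. The factors come straight from the list above; the 0-2 scale and equal weighting are my own assumptions, not a standard, so tune both to your environment.

#!/usr/bin/env bash
# Illustrative only: score each EEAPDBER factor 0-2, where 2 means
# "pushes this vulnerability toward being fixed first", then sum.
exploitability=2    # 2 = public exploit code available
exposure=2          # 2 = internet-facing
access_required=2   # 2 = no authentication needed
patch_difficulty=1  # scored as ease of fixing: 2 = trivial quick win
data_sensitivity=2  # 2 = handles regulated or customer data
business_impact=2   # 2 = an exploit would be severe for the company
mitigations=1       # 2 = no compensating controls in place
raw_cvss=2          # 2 = CVSS base score of 9.0 or higher

total=$((exploitability + exposure + access_required + patch_difficulty \
       + data_sensitivity + business_impact + mitigations + raw_cvss))
echo "Priority score: ${total}/16 - rank findings by this and fix from the top"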

Beyond the basics of Linux software installations: Become an expert in the configuration of apt-get in 2025

apt-get is used to install software on various Linux distributions, including Ubuntu, Debian, Pop!_OS, et al. Sometimes you’ll experience errors installing software with apt-get. In this post, I will cover what I’ve learned about how apt-get configuration works.

Let’s start with a discussion about how the apt-get binary knows where to find packages on the Internet.

The Primary Source: /etc/apt/sources.list

The /etc/apt/sources.list file is the central configuration file that tells apt-get where to look for packages. Each line in this file represents a repository – a server containing packages that can be installed on your system.
A typical entry in sources.list looks like this:

deb http://deb.debian.org/debian bookworm main contrib non-free

This single line contains several key pieces of information:

  • The repository type (deb)
  • The repository URL (http://deb.debian.org/debian)
  • The distribution release name (bookworm)
  • The components to include (main contrib non-free)

The sources.list file can contain multiple repository lines, allowing you to install packages from various sources.
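
To see every repository line your system currently trusts, you can grep the classic one-line formats in one shot. (Note: newer releases may also ship deb822-style .sources files, which this won’t catch.)

grep -rh --include='*.list' '^deb' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null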

Modular Configuration: /etc/apt/sources.list.d/

As systems became more complex, managing everything in a single sources.list file became unwieldy. The /etc/apt/sources.list.d/ directory helps us handle this complexity with a more modular approach to repository management.

This directory contains individual .list files, each typically dedicated to a specific repository or application. For example, when you add a third-party repository for an application like Visual Studio Code, it might create a file called /etc/apt/sources.list.d/vscode.list.

This approach keeps your system organized and makes it easier to remove repositories when you no longer need them. Simply delete the corresponding file, and the repository is gone – no need to edit the main sources.list file and potentially make mistakes.
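
For example, adding and later removing a hypothetical third-party repository looks like this; the URL and suite are placeholders, not a real repo:

# Add the repository in its own file:
echo 'deb https://repo.example.com/apt stable main' | \
  sudo tee /etc/apt/sources.list.d/example.list
sudo apt-get update

# Removing it later is one command, no editing of sources.list required:
sudo rm /etc/apt/sources.list.d/example.list
sudo apt-get update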

Repository Types: deb vs. deb-src

You may have noticed that repository lines start with either deb or deb-src. What’s the difference?

deb: Binary Packages

Lines starting with deb point to repositories containing pre-compiled binary packages. These are the packages most users install – they contain the executable programs, libraries, and other files ready to be used on your system.

When you run apt-get install firefox, apt-get downloads and installs the binary package for Firefox from a deb repository.

deb-src: Source Packages

Lines starting with deb-src point to repositories containing source code packages. These aren’t pre-compiled programs but rather the original source code used to build the binary packages.

Source packages are useful for:

  • Developers who want to examine or modify the code
  • Users who need to compile packages with custom options
  • Those who need to troubleshoot issues by looking at the source code

To download source packages, you use commands like apt-get source firefox instead of apt-get install firefox.

Most typical users don’t need deb-src repositories enabled unless they plan to compile software from source or need to build drivers for hardware like USB Wi-Fi adapters.
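
If you do enable a deb-src line, the typical workflow looks like this, using Debian’s small “hello” package as a stand-in for whatever you actually want to build:

sudo apt-get update               # refresh package lists, including source indexes
apt-get source hello              # download and unpack the source package
sudo apt-get build-dep hello      # install everything needed to build it
apt-get source --compile hello    # or fetch and build binary packages in one step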

Understanding Repository Modifiers

Repository lines can include various modifiers that provide additional options and constraints. Let’s break down some common ones:

Architecture Modifiers: [arch=arm64]

deb [arch=arm64] http://deb.debian.org/debian bookworm main

The arch= modifier specifies that this repository should only be used for a specific architecture. In this example, the repository will only be used when looking for packages for the ARM64 architecture. This is particularly useful for systems like Raspberry Pi or when maintaining multi-architecture systems.
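
As a quick sketch of how this plays out in practice, dpkg controls which architectures apt will fetch packages for:

dpkg --print-architecture           # show this system's primary architecture
sudo dpkg --add-architecture arm64  # enable an additional (foreign) architecture
sudo apt-get update                 # pull package lists for the new architecture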

Security Modifiers: [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg]

deb [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg] http://archive.raspbian.org/raspbian/ bookworm main

The signed-by= modifier specifies which GPG key should be used to verify the packages from this repository. This enhances security by ensuring packages are only installed if they’re signed by a trusted key.

Modern Debian-based systems store repository keys in the /usr/share/keyrings/ directory as separate files rather than in a central keyring, making key management more secure and flexible.
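
A common pattern for installing a repository key into /usr/share/keyrings/ looks like the following; the key URL is a placeholder, so substitute whatever the repository you’re adding actually publishes:

# Fetch the (placeholder) ASCII-armored key and convert it to binary keyring format:
wget -qO- https://repo.example.com/apt/key.asc | \
  gpg --dearmor | sudo tee /usr/share/keyrings/example-archive-keyring.gpg > /dev/null

# Then reference that exact file in the repository line:
echo 'deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://repo.example.com/apt stable main' | \
  sudo tee /etc/apt/sources.list.d/example.list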

Trust Modifiers: [trusted=yes]

deb [trusted=yes] http://repository.example.com/ stable main

The trusted=yes modifier tells apt to trust this repository even if it doesn’t have valid signatures. This should be used with extreme caution, as it bypasses crucial security checks. Only use this for repositories you absolutely trust, like local repositories on your network. This feature comes in handy if you’re troubleshooting software installation problems using my FirmwareBuilderContainers. Learn more about my firmware image creation process here: https://patrickmccanna.net/overview-of-my-repeatable-iot-build-process-using-ansible-docker/

Distribution and Component Modifiers

After the repository URL, you’ll find several additional modifiers:

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware non-free

Distribution Release Names (bookworm)

The first word after the URL (bookworm in this example) specifies which distribution release to use. Debian uses code names for its releases:

  • bookworm: Debian 12
  • bullseye: Debian 11
  • buster: Debian 10

Ubuntu similarly uses names like jammy (22.04), focal (20.04), etc.

You might also see special release names like:

  • stable: Always points to the current stable Debian release
  • testing: Points to the next Debian release in preparation
  • unstable or sid: The development branch of Debian

Component Categories

The words that follow the release name define which components or sections of the repository to use:

main

deb http://deb.debian.org/debian bookworm main

The main component contains packages that:

  • Are considered part of the distribution
  • Comply with the Debian Free Software Guidelines (DFSG)
  • Don’t depend on packages outside the main section

This is the core of any Debian-based distribution and contains most of the software you’ll need.

contrib

deb http://deb.debian.org/debian bookworm main contrib

The contrib component contains packages that:

  • Comply with the DFSG (free software)
  • Depend on packages that are outside the main section

For example, a free software tool that requires a non-free library would be in contrib.

non-free

deb http://deb.debian.org/debian bookworm main contrib non-free

The non-free component contains packages that:

  • Do not comply with the DFSG
  • Have restrictions on use, modification, or distribution

This includes proprietary drivers, firmware, and software with restrictive licenses.

non-free-firmware

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware

The non-free-firmware component is a newer addition that specifically contains non-free firmware packages required for hardware support. This was separated from the general non-free component to make it easier for users to include just the firmware they need without enabling all non-free software.

Distribution and component modifiers give you some broad ways of controlling the types of software that can be deployed on your system. I don’t know anyone who uses apt-get management to prevent the deployment of non-free software; in practice, I mostly see this as an annoying hurdle to clear when enabling the software you want or need at the time you need it. But it is nice to know there is some granularity of control you can implement for reducing the total set of packages that could be deployed on your system.

Putting It All Together

Let’s analyze a complete example:

deb [arch=arm64 signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg trusted=yes] http://archive.raspbian.org/raspbian/ bookworm main contrib non-free-firmware non-free

This line tells apt-get:

  1. Use binary packages (deb)
  2. Only for ARM64 architecture (arch=arm64)
  3. Verify packages using the specified key (signed-by=...)
  4. Trust this repository even without valid signatures (trusted=yes) – again, use with caution!
  5. Get packages from the specified URL
  6. For the Debian 12 “Bookworm” release
  7. Include packages from all components (main contrib non-free-firmware non-free)

Best Practices for Managing Repositories

  1. Be selective about third-party repositories: Each repository you add increases the risk of package conflicts or security issues.
  2. Use the modular approach: Place third-party repositories in separate files in /etc/apt/sources.list.d/ rather than editing the main sources.list.
  3. Verify GPG keys: Always verify the GPG keys of repositories you add to ensure you’re getting packages from the intended source.
  4. Only enable what you need: Don’t enable deb-src lines unless you actually need source packages.
  5. Be cautious with non-free components: While sometimes necessary for hardware support, non-free components may have license restrictions or security implications.

Conclusion

Understanding how apt-get repositories work gives you more control over your Debian-based system. Whether you’re troubleshooting package issues, setting up a new system, or just curious about how Linux package management works, knowing the ins and outs of repository configuration is invaluable knowledge.

By properly managing your sources.list and leveraging the flexibility of repository modifiers, you can create a stable, secure, and well-maintained system tailored to your specific needs.

Got no time? Using LLMs to inspire kids during “Hour of Code” week

(Once upon a time in December of 2024…)

My daughter’s 6th grade teacher asked me to join her class and talk about coding this week.

Mrs. Susan used Python to create games with the class. She said the kids weren’t learning how they could use the code they were writing. They completed their tutorials, but they were not interested in creating their own projects. They didn’t understand the “why” of the code they were writing. They weren’t inspired. Mrs. Susan’s hope was that I could assist by talking about my career with the kids. I loved the sentiment- but we needed to show the kids something exciting before talking about a career in security.

Knowing how hackers break systems gives you advantages in life. Every kid has some enthusiasm for being perceived as someone who can do “cool stuff” with computers, but most kids won’t put in the effort without a push. The hurdles to learning this field are high enough that most never get off the launch pad.


Problem: I am too busy.

The timing of this ask was awful. I’ve been extremely busy with work. My wife and daughter were sick for most of December. I’ve had to pick up all the farm chores for the girls in addition to my job. I was away in Austin last week- and the //todo: debt has grown. I thought I could wield my Hack-me-AP project- but Mrs. Susan needed to constrain our time to an hour, and Hack-me-AP is not a good candidate for an hour-long discussion with the class.

I needed a compelling demo to ignite enthusiasm before I spoke about working in the field, and I didn’t have time to craft one from scratch. I gambled two hours of my weekend on using Anthropic’s Claude to generate a good cybersecurity demo. This was the workflow I used.


Using LLMs to generate a Proof of Concept in under 2 hours

I started off with an open ended prompt that could solve the problem. I gave it a little direction on what the implementation should be:

I'd like to set up a docker container to demonstrate basic hacking techniques for kids. I'd like to construct a vulnerable web application that we can use to "hack" a web server.

Claude helpfully provided me with a robustly vulnerable Python Flask web application, along with instructions for building and running the containers and some helpful exploit examples. The design of the code included some practices I hadn’t seen before- so I had to probe Claude with some follow-up questions to understand how the app would work. But for a first pass- it was a pretty good start.

I didn’t need 6 different paths for exploitation- I figured we could keep it simple with a SQL injection attack. I’ve been in management for the last year and my memory is a little foggy- let’s have Claude help us figure out the details:

How would I use a tool like sqlmap to discover the sql injection vulnerability in the Educational Vulnerable Web Application?

Ugh- this is looking ugly. I’m going to have to explain what a database is. I’m going to have to talk about URL parameters. I’m going to have to talk about tables. Too much fresh background for 6th graders.

I didn’t use this demo, but it helped me quite a bit: it taught me that I don’t want a “damn vulnerable app.” I needed a simple demo of what a hacker can do. 6th graders have a longer attention span than 3rd graders… but not for long. 7th grade hormones are starting to appear. We needed to keep the demonstration lean so the kids wouldn’t feel overwhelmed with info.

We’re starting to get some shape to the project now. The vulnerability & attack are simple. Next we need to provoke the kids. I get an idea about the purpose of the page.

The first pass of the app worked, but it was 1999-era HTML. Ugly. It looked like a security guy wrote it.


I used Grok to generate a picture of a stereotypical 6th grade class- and with a little bit of code tweaking, I get a working demo:


The bait is set. We can present this page to the kids and claim a mysterious org called the “Internet Hall of Fame” has just published a “World’s coolest class” award. There’s a function for posting messages- the kids can leave some raspberries for the World’s coolest class. The Internet Hall of Fame obviously selected the wrong class. Somebody needs to do something!

Now we just need to get our hack working. I decide that I’m going to take a selfie with the class and then host it on a web server I control. Rather than using Apache- I’m just going to use Python to publish a directory. In terms of order of operations:

  1. Demo the page for the kids
  2. ~/Development/HackingDemoHillside$ docker run -p 5000:5000 hall-of-fame
  3. Visit the page in a browser: http://127.0.0.1:5000
  4. Provoke the class- Are we sure that this is really the best class ever? what if it was us?
  5. Take a picture of the class on my iPhone. Connect the phone to the Linux box, open the Files app, copy the picture I want into /Home/Pictures/TmpWebServer/, and rename it pic.jpg
  6. Run the temporary webserver that hosts the file:
  7. python3 -m http.server 9999

So now we’ve got a plan- I just need to write my XSS attack. Who’s got time for that?

Claude wrote my exploit. It didn’t work.

I had to perform multiple iterations with Claude- a lot of summarizing what happened vs. what I wanted. Eventually I was able to get a working payload:

<script>
// Wait briefly for the page to render, then swap the "winning" class photo
// for the image served from the temporary web server on my laptop.
setTimeout(function() {
    var photos = document.getElementsByClassName('class-photo');
    if (photos.length > 0) {
        photos[0].src = 'http://127.0.0.1:9999/pic.jpg';
    }
}, 100);
</script>

My Big Monday Demo

Mrs. Susan introduced me. She shared that I work in cybersecurity and told the class I was going to explain how I got interested in the field and what it’s like to work in this industry. I told the kids that working in cybersecurity means you help people protect systems from hackers. You need to know how people attack systems to understand how to defend those systems. If we’re going to talk about defense- we need to know what offense looks like.

I told the kids about the Internet Hall of Fame’s “world’s coolest class” website. They were outraged to see those AI kids on a page that should have celebrated Mrs. Susan’s class. All I needed to ask was “Should we do something about it?” They cheered- and before long we made everything better:


LLMs are going to change your job- not replace it. Get hacking.

Merry Christmas & Happy Holidays!

The code is available here for those who are interested:

https://github.com/CaptainMcCrank/HackingDemoHillside

Cultivating Happiness in Security Teams

Happy Friday, everyone!

I have a daughter who is studying abroad, which is driving me to spend time thinking about how to guide her toward patterns that are healthy.

The Inspiration

I ran into a diagram about human emotions during a video about music theory. The video was describing the properties of various chord progressions and the feelings they tend to generate.

In this diagram, there are six fundamental emotions represented:

  • Fear
  • Anger
  • Disgust
  • Surprise
  • Sad
  • Happy

In this model, only 1/6 of the fundamental classes of emotion we experience are about flourishing. 5/6 of our fundamental emotions are about fight or flight instincts. That’s a lot of opportunity to feel anything but Happy. During my weekly call with my daughter, I ask how she spends time cultivating feelings of hope, playfulness and inspiration.

These are not naturally easy discussions. Anyone with a teenager knows you must go where the conversation goes. Structure helps remind me to stay positive and consistently nudge her towards positive outcomes.

This has led me to think about what it means to invest time into building team happiness. It’s affecting my perspective about what kind of team I want to build. I think it’s helpful- which is why I’m sharing this with you today.

When I get feedback from peers and partners, this chart helps me do a check to discover if trends are developing that require a change. Security teams get better engagement when partners trust you. People using language associated with emotions of fear, anger, disgust and surprise give you a signal about your impact.

I look at the language my partners and peers use and explore where they align against this wheel.

Ask yourself if the feedback you receive is aligning with the emotions you’re hoping to cultivate in your teams. If you’re ending up in emotions you didn’t anticipate, you may need to explore how you can rebuild trust.

The larger video isn’t great, but I liked the chart. If you’re more interested in the music theory discussion, the full video is worth a look.

Happy teams require intention. What are you doing to cultivate confidence, courage, inspiration, and respect in your teams?

History of discoveries of weakness in SHA-1

  • 1995: SHA-0 was published by the National Institute of Standards and Technology (NIST) as a cryptographic hash function.
  • 1996: SHA-0 was found to have serious flaws by cryptographers, leading NIST to revise the design and release SHA-1 as a replacement.
  • 2005: A group of cryptographers (Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu) discovered a theoretical attack on SHA-1 that would allow collisions to be found in about 2^69 operations, far below the 2^80 expected of an ideal 160-bit hash, making it vulnerable to a “collision attack”.
  • 2006: The Shandong University team behind the 2005 result announced refinements that further lowered the estimated cost of finding a collision, though no actual SHA-1 collision had yet been produced.
  • 2008: Researchers announced improved collision attacks, with estimates as low as roughly 2^52 operations; these estimates were later disputed, and still no public collision existed.
  • 2010: Further cryptanalysis continued to erode SHA-1’s security margin, with published attack estimates in the neighborhood of 2^51 operations- again, theoretical only.
  • 2015: Researchers (Marc Stevens, Pierre Karpman, and Thomas Peyrin) demonstrated a practical freestart collision against SHA-1’s compression function, announced as “The SHAppening”- a strong signal that full collisions were within practical reach.
  • 2017: Researchers from CWI Amsterdam and Google announced “SHAttered”, the first practical collision attack on full SHA-1: two different PDF files with the same SHA-1 hash, produced with roughly 2^63 SHA-1 computations.
  • 2020: SHA-1 is no longer considered secure against collision attacks for any use, and NIST guidance deprecates it. It is recommended to switch to more secure hash functions such as SHA-256 or SHA-3.

FIPS 180: Secure Hash Standard (SHA-0) – https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/NIST.FIPS.180.pdf
FIPS 180-1: Secure Hash Standard (SHA-1) – https://csrc.nist.gov/publications/detail/fips/180/1/archive/1995-04-17

Theoretical attack on SHA-1 (2005):
Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu. “Finding Collisions in the Full SHA-1.” CRYPTO 2005. https://link.springer.com/chapter/10.1007/11535218_17

Related collision results against MD4/MD5-family hash functions (2005):
Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu. “Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD.” EUROCRYPT 2005. https://link.springer.com/chapter/10.1007/11426639_17

Distinguishing and related attacks on SHA-0 and SHA-1 (2010):
Thomas Peyrin and Pierre Karpman. “Predicting and Distinguishing Attacks on SHA-0 and SHA-1.” EUROCRYPT 2010. https://link.springer.com/chapter/10.1007/978-3-642-13190-5_22

Freestart collision against SHA-1’s compression function (2015):
Marc Stevens, Pierre Karpman, and Thomas Peyrin. “Freestart Collision for Full SHA-1.” EUROCRYPT 2016.

First practical SHA-1 collision, “SHAttered” (2017):
Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov. “The First Collision for Full SHA-1.” CRYPTO 2017. https://link.springer.com/chapter/10.1007/978-3-319-63715-0_17
Announcement: “Announcing the first SHA-1 collision.” Google Security Blog, February 23, 2017. https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html

Official deprecation of SHA-1 by NIST:
National Institute of Standards and Technology. “SHA-1 Deprecation Notice.” Federal Register, May 7,

Overview of my repeatable IoT Build process using Ansible & Docker

I built the “FirmwareBuilder” to make it easy for people to build Raspberry Pi images using Ansible. Unfortunately, not everyone has two Raspberry Pis. 😥

I built Docker containers that can help you make reproducible single-board computer projects. Now you can get by with only one Raspberry Pi!

This overview video explains how everything works!

Want to try it out? Check out instructions here to get started! Let me know how it works for you!

Container-based builds of Raspberry Pi using Ansible

I needed a way to collaborate on Raspberry Pi development. I wanted to make it possible for others to reproduce my builds without demanding that they manually execute command after command to reproduce my project. I also wanted a solution that saves time by caching redundant downloads. So I built the “builderhotspot.”

There is a disadvantage to using the builderhotspot- not everyone has at least two Raspberry Pis.

I needed a solution that works if you have only one Pi, so I turned to Docker. I’ve created a docker-compose file that spins up an Ansible container and an apt-cacher-ng container, which can be used to push firmware images to any device on your network with the hostname “ansibledest.local”.

This tutorial assumes you have Docker installed on a host system- and that somewhere on your network is a Raspberry Pi reachable at the hostname “ansibledest.local”.

Step 1: Clone the FirmwareBuilderContainers project into your directory of choice.  

On your host operating system, cd into a directory where you want to host your Firmware Builder Containers.  I use ~/Development/containers.

git clone git@github.com:CaptainMcCrank/FirmwareBuilderContainers.git

Step 2: Modify the Docker-compose script to reflect the details of your host system

There are 3 modifications you’ll need to make to the docker-compose file you just cloned:

DOCKER_HOST

The docker-compose file’s DOCKER_HOST variable tunes the recipient device to use your local apt-cacher-ng container, which reduces redundant apt-get install downloads. It sets the server hostname values in the /etc/apt/sources.list and /etc/apt/sources.list.d/raspi.list files on the recipient device so that it pulls downloads from the caching server. My firmware recipes use this value as control logic.

The value will be “builderhotspot.local” if you use the builderhotspot to push recipes to devices. Since we’re going to use containers- we need to set the value to reflect your host OS’s hostname. To get the hostname of a system, you can use different commands on Windows, macOS, and Linux. Here are the commands for each operating system:

Windows

Command Prompt:

hostname

PowerShell:

hostname 
or 
$env:COMPUTERNAME

Mac & Linux

hostname

Alternatively, you can use the uname -n command to retrieve the hostname on a Linux system. Open the docker-compose.yml file and browse to the section for the ansible container. Note the “environment:” section. Change the DOCKER_HOST value to reflect your host system’s hostname.

VOLUMES

We need to expose two directories from the host system to the ansible container.  This is done by specifying a volume & indicating where in the container it should be accessible.

The first volume is our playbooks directory. This is where we will store playbooks for all the projects we want to push to devices. You can git clone playbooks for other projects into this directory from the host operating system. They will be exposed in the /home/pi/Playbooks directory on your ansible container.

Change the first volume’s value to reflect the correct directory on your host system. Be sure to retain

:/home/pi/Playbooks 

at the end of the first line. This is how we specify the location. My playbooks are designed to run on both the builderhotspot as well as via containers- but if you modify this second value, the playbooks won’t work without modifications.

The second volume is for enabling mDNS resolution. In our playbooks we want to use hostnames to specify the recipient device, which makes pushing a playbook to a target easy: you don’t have to discover the recipient device’s IP address, you only need to know its hostname. This keeps life simple. If you’re on Linux, the best way to do this is to share the avahi-daemon socket on the host system as a volume; if you skip this step, name resolution won’t work within the ansible container. I don’t think you need to change this value on a Mac- it seems to work on my system. I still need to test this on a Windows machine to confirm mDNS works.
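
Putting those two subsections together, the relevant fragment of the docker-compose.yml looks roughly like this. The hostname and playbook path are examples you will replace, and the exact keys in the file you cloned may differ slightly:

services:
  ansible:
    environment:
      - DOCKER_HOST=mylaptop.local  # replace with your host system's hostname
    volumes:
      # Playbooks directory on the host, exposed inside the container:
      - ~/Development/Playbooks/DockerVolume:/home/pi/Playbooks
      # Avahi socket shared from the host so mDNS names like
      # ansibledest.local resolve inside the container (Linux hosts):
      - /var/run/avahi-daemon/socket:/var/run/avahi-daemon/socket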

Step 3: Build & launch the containers

cd ~/Development/containers/FirmwareBuilderContainers

Build the containers & run them detached.

docker-compose up --build -d 

Step 4: Attach to the ansible container & test connectivity: 

docker exec -it ansible bash

If you have an ansibledest system running on your network, you should be able to ping it:

root@docker-desktop:/# ping ansibledest.local
PING ansibledest.local (192.168.6.247) 56(84) bytes of data.
64 bytes from 192.168.6.247 (192.168.6.247): icmp_seq=1 ttl=63 time=0.898 ms
64 bytes from 192.168.6.247 (192.168.6.247): icmp_seq=2 ttl=63 time=1.12 ms

Step 5: Detach from your containers & deactivate the containers:

From within the ansible container, type “exit” to leave the container. Then use the docker-compose command to deactivate the containers:

docker-compose down

Step 6 (From this point forward, the only commands you’ll really need when building): 

From now on, we skip the build commands. Run the docker-compose command from within the container directory on your host system:

docker-compose up
docker exec -it ansible bash

Step 7: Building a firmware example

You should now be good to go to use Docker containers for pushing my firmware recipes to your devices. To try out the “hack this wifi” firmware, go to your host OS’s terminal and cd into the playbook directory (I use /Users/Patrick/Development/Playbooks/DockerVolume). Run the following command:

git clone https://github.com/CaptainMcCrank/Learn_Linux_Networking_And_Hacking.git

Attach to your container:

docker exec -it ansible bash

And now cd into your playbooks directory:

cd /home/pi/Playbooks/

If you run ls, you should see the “Learn_Linux_Networking_And_Hacking” directory. cd into it:

cd /home/pi/Playbooks/Learn_Linux_Networking_And_Hacking

If your ansibledest system is online, you can copy the container’s ssh key to the destination system, which enables you to use ansible to install software on the recipient device:

ssh-copy-id pi@ansibledest.local

And now you can deploy the firmware:

ansible-playbook run.yml

If for some reason you forgot to set the DOCKER_HOST value, you can fix it for your container’s session with the “export DOCKER_HOST=hostname.local” command- where hostname.local is your host system’s hostname.

Persuasion: Pitching improvements when your Vulnerability Management programs are non-existent or insufficient.

Security leaders working on security sustainment programs should set three goals for their team:

  • Implement controls that make it easy for development teams to consistently deliver high-quality outputs.
  • Implement controls that test for the existence of known negative outcomes.
  • Implement controls that prevent known negative outcomes.

Vulnerability management programs are crucial for maintaining the security of your company. These programs involve various methods to track and address vulnerabilities associated with outdated and misconfigured systems. In less mature organizations, vulnerability management can be challenging: some perceive patching as insignificant work, while others rely too heavily on automated vulnerability scanners, leading to complacency. Obtaining funding for a vulnerability management program is just the beginning. Executing it effectively is crucial to protecting customer data and maintaining trust, because regulatory fines can deplete resources.

To emphasize the importance of vulnerability management, present a use case that demonstrates the consequences of neglecting vulnerabilities. Arm your audience with examples of failures that are relevant to them, and highlight the implications of an inadequate vulnerability management program. This is not a fun problem to tackle, Sisyphus: vulnerability management is never finished, and you must help people understand the danger of complacency.

Why do we need vulnerability management?

Think of Vulnerability Management as regular car maintenance. Similar to checking the car’s oil, inspecting the brakes, and replacing worn-out tires to ensure optimal performance and safety, continuously measuring vulnerabilities in your enterprise environment is essential. You must also measure the mean time to resolution and aim to keep it within acceptable limits.

Software vulnerabilities exist even before they are assigned CVE numbers and become widely known. Humans make mistakes that can be exploited. A simple way to view software vulnerabilities is as indicators of software quality: if you want to increase the chances of delivering high-quality work, you need to test for the existence of defects. Skilled tradespeople know where mistakes are likely to occur in projects and test for their presence.

How can you pitch a vulnerability management program?

Part of the challenge of launching a vulnerability management program is that you need anecdotes that persuade funding sources of the importance of this boring and tedious activity. It is common for developers and managers to underestimate security impacts. You need a realistic use case that helps short-circuit bad-faith arguments from parties whose execution demands improvement. So here is an example to anchor the argument for your quantifiable vulnerability management reporting program.

Don’t be like Equifax: Why you need a measurable, sustaining Vulnerability Management program with monthly reporting

The Federal Trade Commission Act (FTCA): The Federal Trade Commission (FTC) has the authority to bring enforcement actions against companies that engage in “unfair or deceptive acts or practices.” This authority has been used in the past to penalize companies that fail to adequately protect consumer data.  

In 2017, Equifax suffered one of the largest breaches in history (https://en.wikipedia.org/wiki/2017_Equifax_data_breach): “Private records of 147.9 million Americans along with 15.2 million British citizens and about 19,000 Canadian citizens were compromised in the breach, making it one of the largest cyber crimes related to identity theft.”

“The total cost of the settlement included $300 million to a fund for victim compensation, $175 million to the states and territories in the agreement, and $100 million to the CFPB in fines”

Understanding the Equifax Breach

https://www.ftc.gov/legal-library/browse/cases-proceedings/172-3203-equifax-inc

  • https://www.ftc.gov/system/files/documents/cases/172_3203_equifax_complaint_7-22-19.pdf
    • On March 8, 2017, the U.S. Computer Emergency Readiness Team (“US-CERT”) alerted Equifax to a new critical security vulnerability (CVE-2017-5638) found in Apache Struts
    • Equifax’s security team received the US-CERT alert and, on or about March 9, 2017, disseminated the alert internally by a mass email to more than 400 employees. The mass email directed employees, “if [they were] responsible for an Apache Struts installation,” to patch the vulnerability within 48 hours
    • The ACIS Dispute Portal contained a vulnerable version of Apache Struts. However, Equifax failed to apply the patch to the ACIS Dispute Portal for months. Although Equifax’s security team issued an order to patch all vulnerable systems within 48 hours, Equifax failed to send the email ordering a patch to the employee responsible for maintaining the ACIS Dispute Portal.
    • On or about March 15, 2017, Equifax performed an automated vulnerability scan intended to search for vulnerable instances of Apache Struts that remained on Equifax’s network. But Equifax used a scanner that was not configured to correctly search all of Equifax’s potentially vulnerable assets. As a result, the automated scanner did not identify any systems vulnerable to CVE-2017-5638 and the ACIS Dispute Portal remained unpatched.
    • On or about July 29, 2017, Equifax’s security team identified some suspicious traffic on the ACIS Dispute Portal after replacing expired security certificates.
    • Equifax retained a forensic consultant who ultimately determined that between May 13, 2017 and July 30, 2017, multiple attackers were each able to separately exploit the CVE-2017-5638 vulnerability in the ACIS Dispute Portal to gain unauthorized access to Equifax’s network. Once inside, the attackers were able to crawl through dozens of unrelated databases containing information that went well beyond the ACIS Dispute Portal, in part because of a lack of network segmentation. The attackers also accessed an unsecured file share (or common storage space) connected to the ACIS databases where they discovered numerous administrative credentials, stored in plain text, that they used to obtain further access to Equifax’s network.
    • According to Equifax’s forensic analysis, the attackers were able to steal approximately 147 million names and dates of birth, 145.5 million SSNs, 99 million physical addresses, 20.3 million telephone numbers, 17.6 million email addresses and 209,000 payment card numbers and expiration dates, among other things.

Social Engineering your Corporate Colleagues to Embrace Vulnerability Management

When pitching the vulnerability management program, make it clear that it aims to protect the reputation of the person opposing your pitch. By avoiding predictable, repeatable situations like the Equifax breach, you help these individuals build sustainable and successful careers. Encourage a collaborative approach rather than adversarial thinking. You want to ensure that their name and reputation are not associated with unforgiving headlines like this:

Seriously, Equifax? This Is a Breach No One Should Get Away With


We all know people who worked at these companies during times of strife. Their network will wonder if they were the person responsible. It is in their best interest to avoid a predictable, repeatable situation where they are associated with this level of headline news coverage.

Deliver vulnerability reports broken down by leader, and encourage healthy competition among them. Ensure that every leader sees their peers’ summary vulnerability reports, enabling them to assess their position within the organization: are they in the top or bottom half in terms of performance? At the end of the year, provide awards for the teams with the shortest mean time to detection and resolution, and highlight teams with the lowest total count of critical vulnerabilities as exemplary.

Tips for pitching your Vuln Management Program

  • Implement tests to discover when a system is no longer getting scanned.
  • Implement tests to discover new systems that have never been scanned.
  • Implement tests to discover scanner misconfigurations that silently fail to discover relevant & new vulnerabilities.
  • Implement tests to discover a scanner that’s stopped firing.
  • Vuln Management programs should be assumed to be failing. Test for failures. Report on them. Implement mechanisms to systematically eliminate failures.
  • Loudly & clearly communicate the following concept: the existence of a vulnerability management program does not mean your business is secure. Vuln management is a cost of a sustainable, long-term business. Complacent companies get burned. If you fail to maintain a high-quality vulnerability management program, your business’s “Check Engine” light will come on at the worst possible moment.
  • Watching firewall logs is a good intention, but it won’t stop you from becoming the next Equifax. Implement mechanisms for discovering vulnerability management non-compliance, and test those mechanisms for discovery gaps.
  • You can have 100% of your vulnerabilities patched within your SLAs and still experience catastrophic breaches & ransomware events.

Your Vulnerability Management Program should measure KPIs

Here are some example KPIs to consider (from https://www.cisoplatform.com/profiles/blogs/top-10-metrics-for-your-vulnerability-management-program); a minimal sketch for computing one of them follows the list:

  • Mean Time to Detect
  • Mean Time to Resolve
  • Average Window of Exposure
  • Scanner Coverage
  • Scan Frequency by Asset Group
  • Number of Open Critical / High Vulnerabilities
  • Average Risk by BU / Asset Group etc.
  • Number of Exceptions Granted
  • Vulnerability Reopen Rate
  • % of Systems with no open High / Critical Vulnerability
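
As promised, here’s a minimal sketch of computing one of these KPIs. It assumes a hypothetical findings.csv export with id,detected,resolved columns (dates as YYYY-MM-DD) and uses gawk’s time functions; your scanner’s export format will differ:

gawk -F, 'NR > 1 {
  d = mktime(gensub(/-/, " ", "g", $2) " 00 00 00")  # detection date -> epoch seconds
  r = mktime(gensub(/-/, " ", "g", $3) " 00 00 00")  # resolution date -> epoch seconds
  sum += (r - d) / 86400; n++                        # days each finding stayed open
} END { printf "Mean Time to Resolve: %.1f days across %d findings\n", sum / n, n }' findings.csv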

Additionally- note that NIST has some recommendations around vulnerability management reporting: https://nvlpubs.nist.gov/nistpubs/legacy/sp/nistspecialpublication800-55r1.pdf

What’s your take? What else should be measured & reported?

Sustainable PirateBox Part 2

Some notes:


Tokumei is the image board I’m playing with.

Unfortunately, the installation scripts are stale and depend on deprecated certbot tooling from the EFF. I had to hack together an update to make this thing work.

There are instructions for installing Tokumei.co in a self-hosted environment.

They call a script that can be obtained with

wget https://tokumei.co/privclear.sh

The script accepts input from the user and then tunes various configuration files for the image board & nginx. For example, you can specify the domain of a page. This is useful because we’ll have to set a hostname for the captive portal.

The script fails first because it expects to install ssl certificates using certbot.

The certbot installation path the script relies on is deprecated. If you run this script, you get 404’d.

So the first thing I need to do is create my own self-signed certificates.
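
One way to do that is with openssl. The file paths and common name below are my choices, not anything the Tokumei script dictates:

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /etc/ssl/private/nginx-selfsigned.key \
  -out /etc/ssl/certs/nginx-selfsigned.crt \
  -subj '/CN=ansibledest.local'   # match the hostname you gave the portal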

Then I need to hack the nginx.conf and sites-available/default pages.

I put a lot of effort into figuring out how to carve out a page to host static PDFs and MP3s, and I tried very hard to document the edits. If you decide to do any work with nginx for serving static content, you might find this material helpful.

This is the sites-available default page.

The key points of this config:

  1. Shows you how to create a self-signed certificate in the comments of the script
  2. Shows you where you put the cert and key file so that nginx can access it.
  3. Shows you how to reduce noise in the config by using snippets pointing to self-signed.conf and ssl-params.conf
  4. Shows you how to configure nginx to limit file uploads on the tokumei board.
  5. Shows you how to create the /offgrid URI that vends a directory listing within the skinning of Tokumei (sketched below)
  6. Includes a minor hack to help you debug when nginx locations aren’t getting triggered because of an overly aggressive / location definition
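
For point 5, the /offgrid location ends up looking something like this sketch. The alias path is an assumption on my part, and the ^~ prefix is what keeps a greedy location / block from swallowing the request:

location ^~ /offgrid/ {
    alias /var/www/offgrid/;   # directory of PDFs/MP3s to share (assumed path)
    autoindex on;              # vend a browsable directory listing
}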

I have a major //TODO:

If you look around here you see the reference to werc.

It references a fastcgi server running at localhost on 3333.

That cgi server is invoked with the following command:


sudo /usr/local/go/bin/cgd -f -c /var/www/ansibledest.local/bin/werc.rc > /dev/null 2>&1 &

Cgd is a daemon that can serve a CGI script over HTTP or FastCGI.

It’s useful for running CGI scripts that serve a whole domain (like werc) without needing a “real” HTTP server, or for wrapping CGI scripts so they can be served by FastCGI-only web servers like nginx.

//TODO: implement a startup script for launching the cgd daemon on boot in the final ansible playbook.
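
To sketch where that TODO is headed, a small systemd unit would do it. The binary and werc.rc paths are copied from the command above; the unit name, restart policy, and targets are my assumptions:

# /etc/systemd/system/cgd.service
[Unit]
Description=cgd FastCGI server for werc
After=network.target

[Service]
ExecStart=/usr/local/go/bin/cgd -f -c /var/www/ansibledest.local/bin/werc.rc
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with “sudo systemctl enable --now cgd.service”.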

Creating a sustainable Piratebox Alternative

Piratebox was a fun project that got orphaned, which is sad.

I’m constructing a new one of sorts using my ansible automation.

The firmware will turn a Raspberry Pi into a wifi access point that will broadcast a network labeled “JoinMe.”

Users who connect to the network are forwarded to a captive portal hosted on the device. The captive portal app is a local community image board.

The board supports anonymous image posting and is based on the code from https://tokumei.co/. The design is an implementation of some interesting Plan 9-inspired tooling called werc. The licensing is open. It has a more inviting design than the 4chan-style Futaba image boards.

When neighbors are attached to the network, they don’t get access to the Internet. They do get private access to whatever local community resources you insert.

In my implementation, I modified the entry tokumei page to include a link to a static directory of files. You can now host a private library of mp3 files, pdfs, zines and other culture for sharing with your neighbors.

Run an offgrid wifi community network that hosts a bulletin board and a shared library.

Create local community without logging off completely. It’s off the Internet and only accessible if you stop and connect to it when you’re in range. No Internet Trolls- only your local neighbors. No advertisers. No centralized control. I hope it will help you connect with people in your proximity.