Multiple hosts on your network with the same mDNS name

What happens when you have multiple hosts on your network with the same mDNS name?

mDNS handles naming collisions through a process called “probing and announcing” defined in RFC 6762. Here’s how it works:

Collision Detection Process:

  • When a device wants to claim a name (like “ansibledest.local”), it first “probes” by sending queries for that name
  • If another device already has that name, it responds, indicating a collision
  • The new device must then choose a different name, typically by appending a number (ansibledest-2.local, ansibledest-3.local, etc.)
  • This process repeats until an unclaimed name is found
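The rename loop above can be sketched in a few lines (a simulation only- real responders implement RFC 6762's probing timers and tie-breaking rules, and the hostnames here are hypothetical):

```python
def next_available_name(desired, claimed_names):
    """Simulate mDNS conflict resolution: probe for the desired name
    and append -2, -3, ... until an unclaimed name is found."""
    base = desired.removesuffix(".local")
    candidate = desired
    suffix = 2
    while candidate in claimed_names:  # a probe response means a collision
        candidate = f"{base}-{suffix}.local"
        suffix += 1
    return candidate

claimed = {"ansibledest.local", "ansibledest-2.local"}
print(next_available_name("ansibledest.local", claimed))  # ansibledest-3.local
```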

In practice- I haven’t seen this behavior, exactly. The devices all come online, and if you run the hostname command, each of them thinks they’re Spartacus. You have to ping a specific IP address to get the updated mDNS name into your cache. Annoying:

RFC 6762 told me I was good!

Well- I know the host is online apparently. I wonder what happens if I ping the hostname again-

It works. Hooray.

Identifying duplicate mDNS entries

You can use the avahi-browse command to see what’s up on your network. The hosts are apparently doing some kind of incremental naming- but not in a way that provides TCP/IP utility.

avahi-browse -at | grep "ansibledest"
+ wlp0s20f3 IPv6 ansibledest-4 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv6 ansibledest [mac:addy:du:jour]              Workstation          local
+ wlp0s20f3 IPv6 ansibledest-3 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv6 ansibledest-3 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv6 ansibledest [mac:addy:du:jour]               Workstation          local
+ wlp0s20f3 IPv6 ansibledest-2 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv6 ansibledest-2 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv4 ansibledest-4 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv4 ansibledest-2 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv4 ansibledest-3 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv4 ansibledest-2 [mac:addy:du:jour]             Workstation          local
+ wlp0s20f3 IPv4 ansibledest [mac:addy:du:jour]               Workstation          local
+ wlp0s20f3 IPv4 ansibledest [mac:addy:du:jour]               Workstation          local
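To summarize the duplicates, you can count how often each advertised name appears. A sketch using a captured sample of the output above (live, you would pipe avahi-browse -at directly; field 4 is the hostname column in the default layout):

```shell
# Count occurrences of each advertised mDNS hostname in captured output
sample='+ wlp0s20f3 IPv6 ansibledest-4 Workstation local
+ wlp0s20f3 IPv6 ansibledest Workstation local
+ wlp0s20f3 IPv4 ansibledest-4 Workstation local
+ wlp0s20f3 IPv4 ansibledest Workstation local'
printf '%s\n' "$sample" | awk '{print $4}' | sort | uniq -c | sort -rn
```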

Kernel Driver Compilation: Concepts and Troubleshooting

Overview

Kernel driver compilation is the process of creating loadable kernel modules (.ko files) that allow hardware to communicate with the Linux kernel. This involves several stages of compilation and linking.

Major Concepts

1. Source Code Compilation

  • Concept: Converting human-readable C source code (.c files) into machine code
  • Process: C compiler (gcc) compiles each .c file into an object file (.o)
  • Purpose: Create intermediate files containing compiled code that can be linked together
  • Example: rtw_cmd.c → rtw_cmd.o

2. Object Files (.o)

  • Concept: Intermediate compiled files containing machine code but not yet executable
  • Characteristics:
      • Contain compiled code from individual source files
      • Need to be linked together to create final executable
      • Cannot be loaded directly into kernel
  • Example: rtw_cmd.o, rtw_security.o, rtw_debug.o

3. Linking Process

  • Concept: Combining multiple object files into a single executable module
  • Process: Linker combines all .o files using linker scripts
  • Purpose: Create a cohesive module from individual compiled components
  • Requirements: Linker scripts (like module.lds) define how to organize the code

4. Kernel Modules (.ko)

  • Concept: Loadable kernel modules that can be inserted into running kernel
  • Characteristics:
      • Final product of compilation process
      • Can be loaded/unloaded without rebooting
      • Contains all necessary code and symbols
  • Example: 8812au.ko (final WiFi driver module)

5. Kernel Headers and Build System

  • Concept: Infrastructure needed to compile kernel modules
  • Components:
      • Kernel headers (function declarations, data structures)
      • Build scripts and Makefiles
      • Linker scripts (module.lds)
      • Module build tools (modprobe, depmod)

The Compilation Pipeline


Source Files (.c)  →  Object Files (.o)  →  Kernel Module (.ko)
        ↓                    ↓                      ↓
  Compile with gcc    Link with module.lds    Load into kernel
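For context, out-of-tree drivers like this are typically built with a small Makefile that delegates the compile and link stages to the kernel’s kbuild system- which is why the build tree under /lib/modules/$(uname -r)/build must be complete. A minimal sketch (not the 8812au project’s actual Makefile):

```makefile
# Build the 8812au module against the running kernel's build tree
obj-m += 8812au.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```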

What Was Failing in Your Case

Stage 1: Source Compilation ✅

  • Status: Working correctly
  • Evidence: All .o files were being created successfully
  • Output: CC [M] /home/pi/8812au_src/core/rtw_cmd.o, etc.

Stage 2: Linking ❌

  • Status: Failing at final linking step
  • Root Cause: Missing module.lds linker script
  • Error: No rule to make target ‘8812au.ko’
  • Impact: Could not create final .ko file

Stage 3: Module Creation ❌

  • Status: Never reached due to linking failure
  • Expected: Creation of 8812au.ko file
  • Reality: Build process terminated before completion

The Missing Piece: module.lds

What is module.lds?

  • Purpose: Linker script that defines how to organize compiled code
  • Location: /lib/modules/$(uname -r)/build/scripts/module.lds
  • Function: Tells linker how to combine .o files into .ko file

Why it was missing:

  • Incomplete kernel source: rpi-source downloaded the kernel source but not the full build infrastructure
  • Missing build tools: The kernel source was missing essential linker scripts
  • Incomplete setup: Kernel headers symlink was pointing to incomplete source

The Fix:

# Create the missing linker script
cat > /lib/modules/$(uname -r)/build/scripts/module.lds << 'EOF'
SECTIONS
{
  . = ALIGN(4096);
  .text : { *(.text) }
  .rodata : { *(.rodata) }
  .data : { *(.data) }
  .bss : { *(.bss) }
}
EOF

Key Takeaways

  1. Compilation vs Linking: Two distinct phases – compilation creates .o files, linking creates .ko files
  2. Infrastructure Dependencies: Kernel module compilation requires complete build infrastructure
  3. Linker Scripts: Essential for final module creation, often overlooked in troubleshooting
  4. Debugging Approach: Check each stage separately – compilation, linking, and module creation
  5. Common Failure Points: Missing headers, incomplete kernel source, missing build tools

Diagnostic Questions for Future Issues

  • Compilation failing? → Check kernel headers, compiler, source code
  • Linking failing? → Check linker scripts, build infrastructure, kernel source completeness
  • Module loading failing? → Check module format, kernel compatibility, dependencies

This systematic approach helps isolate where in the pipeline the failure occurs and what infrastructure is missing.

Avoiding Social Media outrage.

Tenets for handling outrage inducing content on social media:

Most people don’t see the world through a True/False filter but an Us/Them filter

Introspection is uncommon

30-50% of people you meet don’t have an inner monologue.

The Internet Rewards Narcissists

When you see someone talking shit about a stranger online, remember that narcissists use “splitting” to expel their own bad behavior from memory.

Opinions != Expertise

The founding fathers were concerned about giving the right people the right to vote. They wanted to index on property owners to ensure that outcomes were dictated by citizens with a “stake in the game”. They were right to be concerned with the intentions of outsiders. Consider if the person whinging has a stake in the game- or is someone who is espousing their feelings about how other people should live.

Dangerous Outcomes Feared by the Founding Fathers

  • “Tyranny of the Majority” – Madison’s great fear in Federalist No. 10, where majority factions could oppress minorities and violate property rights
  • Mob Rule and Instability – Hamilton worried about “temporary delusion” of the masses (Federalist No. 68)
  • Wealth Redistribution – John Adams wrote: “The moment the idea is admitted into society that property is not as sacred as the laws of God… anarchy and tyranny commence” (Defence of the Constitutions, 1787)
  • Election of Demagogues – The Electoral College was partly designed as a safeguard against this (Hamilton, Federalist No. 68)

Public schools are funnels for producing useful idiots.

The public education community has long forgone any semblance of objectivity and fairness. The education system employs people who want a safe career with a pension and ample vacation. The incentives of public education are for teachers who comply and teach compliance. This attracts teachers who drive students towards voting for policies that produce government jobs which forego excellence and accountability.

Disagreeableness is undesirable

Most people generally prefer to go along to get along.

Seeking what is true is not seeking what is desirable.

Cruelty is easier than compassion.

It is the weak who are cruel. Gentleness is only to be expected from the strong.

Apt-get under apt-cacher-ng with a missing GPG public key

Here’s a common error to run into:


TASK [essential : Run the equivilent of apt-get update] **************************************************************
fatal: [ansibledest.local]: FAILED! => {“changed”: false, “msg”: “Failed to update apt cache: W:Updating from such a repository can’t be done securely, and is therefore disabled by default., W:See apt-secure(8) manpage for repository creation and user configuration details., W:GPG error: http://hostname.local:3142/raspbian.raspberrypi.org/raspbian bookworm InRelease: The following signatures couldn’t be verified because the public key is not available: NO_PUBKEY 9165938D90FDDD2E, E:The repository ‘http://hostname.local:3142/raspbian.raspberrypi.org/raspbian bookworm InRelease’ is not signed.”}

This can be solved by adding the following to your ansible playbooks:


- name: Add raspbian public signing key
  apt_key:
    keyserver: keyserver.ubuntu.com
    id: 9165938D90FDDD2E
    state: present

- name: Update apt cache
  apt:
    update_cache: yes

An NMCLI Cheatsheet for Wifi Access Points

Listing, creating, modifying and deleting connections

List your connections:

nmcli connection show

Modify a connection to act as an access point and skip IPv6:

sudo nmcli connection modify "JoinMe-AP" 802-11-wireless.mode ap 802-11-wireless.ssid JoinMe 802-11-wireless.band bg 802-11-wireless-security.key-mgmt none connection.interface-name wlan1 ipv4.method shared ipv6.method ignore
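If the profile doesn’t exist yet, create it first- then the modify command will apply the AP settings. A sketch, reusing the interface name and SSID from the example above (adjust for your hardware):

```shell
# Create a new wifi connection profile named JoinMe-AP on wlan1
sudo nmcli connection add type wifi ifname wlan1 con-name "JoinMe-AP" ssid JoinMe
```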

Delete a connection by its name:

 sudo nmcli connection delete "JoinMe-AP" # Clean slate (optional)

Vulnerability Prioritization made easy!

“Which vulnerabilities should we fix first?”

This question often leads to confusion, especially for those deeply involved in security. Every company has unique priorities which makes it challenging to create a one-size-fits-all approach.
Don’t lose hope- here’s a straightforward method inspired by musicians’ mnemonics to help you feel confident that you considered everything that’s important when assessing a vulnerability’s priority.

Every Engineer Always Prioritizes Data By Evaluating Risk

This phrase breaks down into key factors to consider:

  • E – Exploitability: How easily can someone exploit the vulnerability?
  • E – Exposure: Is the system connected to the internet or internal?
  • A – Access Required: What level of access does an attacker need?
  • P – Patch Difficulty: How hard is it to fix the issue?
  • D – Data Sensitivity: Does the system handle sensitive information?
  • B – Business Impact: What effect would an exploit have on the company?
  • E – Environmental Mitigations: Are there existing defenses in place?
  • R – Raw CVSS Score: What is the base severity score?
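If you want to turn the checklist into a rough number, a weighted average works. The 0-3 scale and equal weighting below are illustrative assumptions, not a standard- tune them to your organization:

```python
# Each EEAPDBER factor scored 0 (low concern) to 3 (high concern).
FACTORS = [
    "exploitability", "exposure", "access_required", "patch_difficulty",
    "data_sensitivity", "business_impact", "environmental_mitigations", "raw_cvss",
]

def priority_score(scores: dict) -> float:
    """Average the 0-3 factor scores into a 0-10 priority value."""
    total = sum(scores[f] for f in FACTORS)
    return round(total / (3 * len(FACTORS)) * 10, 1)

example = {
    "exploitability": 3, "exposure": 3, "access_required": 2,
    "patch_difficulty": 1, "data_sensitivity": 3, "business_impact": 3,
    "environmental_mitigations": 2, "raw_cvss": 3,
}
print(priority_score(example))  # 8.3
```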

Achieving zero vulnerabilities is ideal but often unrealistic. Resources are often limited, so it’s crucial that we prioritize effectively. This mnemonic helps me ensure I’ve considered the whole set of aspects that should be reviewed when deciding which vulnerabilities to address first.

By evaluating each factor, you can make informed decisions that balance risk and resource allocation, leading to a more secure and efficient system.

Beyond the basics of Linux software installations: Become an expert in the configuration of apt-get in 2025

apt-get is used to install software on various Linux systems, including Ubuntu, Debian, Pop!_OS, et al. Sometimes you’ll experience errors installing software using apt-get. In this post, I will cover what I’ve learned about how apt-get configuration works.

Let’s start with a discussion about how the apt-get binary knows where to find packages on the Internet.

The Primary Source: /etc/apt/sources.list

The /etc/apt/sources.list file is the central configuration file that tells apt-get where to look for packages. Each line in this file represents a repository – a server containing packages that can be installed on your system.
A typical entry in sources.list looks like this:

deb http://deb.debian.org/debian bookworm main contrib non-free

This single line contains several key pieces of information:

  • The repository type (deb)
  • The repository URL (http://deb.debian.org/debian)
  • The distribution release name (bookworm)
  • The components to include (main contrib non-free)

The sources.list file can contain multiple repository lines, allowing you to install packages from various sources.
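Splitting such a line programmatically makes the four fields explicit. A rough sketch (it ignores comments and [modifier] blocks):

```python
def parse_sources_line(line: str) -> dict:
    """Split a simple sources.list entry into its four fields."""
    parts = line.split()
    return {
        "type": parts[0],         # deb or deb-src
        "url": parts[1],          # repository URL
        "release": parts[2],      # distribution release name
        "components": parts[3:],  # main, contrib, non-free, ...
    }

entry = parse_sources_line("deb http://deb.debian.org/debian bookworm main contrib non-free")
print(entry["release"], entry["components"])
```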

Modular Configuration: /etc/apt/sources.list.d/

As systems became more complex, managing everything in a single sources.list file became unwieldy. The /etc/apt/sources.list.d/ directory helps us handle this complexity with a more modular approach to repository management.

This directory contains individual .list files, each typically dedicated to a specific repository or application. For example, when you add a third-party repository for an application like Visual Studio Code, it might create a file called /etc/apt/sources.list.d/vscode.list.

This approach keeps your system organized and makes it easier to remove repositories when you no longer need them. Simply delete the corresponding file, and the repository is gone – no need to edit the main sources.list file and potentially make mistakes.

Repository Types: deb vs. deb-src

You may have noticed that repository lines start with either deb or deb-src. What’s the difference?

deb: Binary Packages

Lines starting with deb point to repositories containing pre-compiled binary packages. These are the packages most users install – they contain the executable programs, libraries, and other files ready to be used on your system.

When you run apt-get install firefox, apt-get downloads and installs the binary package for Firefox from a deb repository.

deb-src: Source Packages

Lines starting with deb-src point to repositories containing source code packages. These aren’t pre-compiled programs but rather the original source code used to build the binary packages.

Source packages are useful for:

  • Developers who want to examine or modify the code
  • Users who need to compile packages with custom options
  • Those who need to troubleshoot issues by looking at the source code

To download source packages, you use commands like apt-get source firefox instead of apt-get install firefox.

Most typical users don’t need deb-src repositories enabled unless they plan to compile software from source or need to compile drivers for hardware like USB wifi adapters.

Understanding Repository Modifiers

Repository lines can include various modifiers that provide additional options and constraints. Let’s break down some common ones:

Architecture Modifiers: [arch=arm64]

deb [arch=arm64] http://deb.debian.org/debian bookworm main

The arch= modifier specifies that this repository should only be used for a specific architecture. In this example, the repository will only be used when looking for packages for the ARM64 architecture. This is particularly useful for systems like Raspberry Pi or when maintaining multi-architecture systems.

Security Modifiers: [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg]

deb [signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg] http://archive.raspbian.org/raspbian/ bookworm main

The signed-by= modifier specifies which GPG key should be used to verify the packages from this repository. This enhances security by ensuring packages are only installed if they’re signed by a trusted key.

Modern Debian-based systems store repository keys in the /usr/share/keyrings/ directory as separate files rather than in a central keyring, making key management more secure and flexible.

Trust Modifiers: [trusted=yes]

deb [trusted=yes] http://repository.example.com/ stable main

The trusted=yes modifier tells apt to trust this repository even if it doesn’t have valid signatures. This should be used with extreme caution, as it bypasses crucial security checks. Only use this for repositories you absolutely trust, like local repositories on your network. This feature comes in handy if you’re troubleshooting software installation problems using my firmware builder containers. Learn more about my firmware image creation process here: https://patrickmccanna.net/overview-of-my-repeatable-iot-build-process-using-ansible-docker/

Distribution and Component Modifiers

After the repository URL, you’ll find several additional modifiers:

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware non-free

Distribution Release Names (bookworm)

The first word after the URL (bookworm in this example) specifies which distribution release to use. Debian uses code names for its releases:

  • bookworm: Debian 12
  • bullseye: Debian 11
  • buster: Debian 10

Ubuntu similarly uses names like jammy (22.04), focal (20.04), etc.

You might also see special release names like:

  • stable: Always points to the current stable Debian release
  • testing: Points to the next Debian release in preparation
  • unstable or sid: The development branch of Debian

Component Categories

The words that follow the release name define which components or sections of the repository to use:

main

deb http://deb.debian.org/debian bookworm main

The main component contains packages that:

  • Are considered part of the distribution
  • Comply with the Debian Free Software Guidelines (DFSG)
  • Don’t depend on packages outside the main section

This is the core of any Debian-based distribution and contains most of the software you’ll need.

contrib

deb http://deb.debian.org/debian bookworm main contrib

The contrib component contains packages that:

  • Comply with the DFSG (free software)
  • Depend on packages that are outside the main section

For example, a free software tool that requires a non-free library would be in contrib.

non-free

deb http://deb.debian.org/debian bookworm main contrib non-free

The non-free component contains packages that:

  • Do not comply with the DFSG
  • Have restrictions on use, modification, or distribution

This includes proprietary drivers, firmware, and software with restrictive licenses.

non-free-firmware

deb http://deb.debian.org/debian bookworm main contrib non-free-firmware

The non-free-firmware component is a newer addition that specifically contains non-free firmware packages required for hardware support. This was separated from the general non-free component to make it easier for users to include just the firmware they need without enabling all non-free software.

Distribution & component modifiers give you some broad control over the types of software that can be deployed on your system. I don’t know anyone who uses apt-get management specifically to prevent the deployment of non-free software. In practice- I just see this as an annoying hurdle to clear when enabling the deployment of software you want or need at the time you need it. But it is nice to know there is some granularity of control you can implement for reducing the total set of packages that could be deployed on your system.

Putting It All Together

Let’s analyze a complete example:

deb [arch=arm64 signed-by=/usr/share/keyrings/raspbian-archive-keyring.gpg trusted=yes] http://archive.raspbian.org/raspbian/ bookworm main contrib non-free-firmware non-free

This line tells apt-get:

  1. Use binary packages (deb)
  2. Only for ARM64 architecture (arch=arm64)
  3. Verify packages using the specified key (signed-by=...)
  4. Trust this repository even without valid signatures (trusted=yes) – again, use with caution!
  5. Get packages from the specified URL
  6. For the Debian 12 “Bookworm” release
  7. Include packages from all components (main contrib non-free-firmware non-free)

Best Practices for Managing Repositories

  1. Be selective about third-party repositories: Each repository you add increases the risk of package conflicts or security issues.
  2. Use the modular approach: Place third-party repositories in separate files in /etc/apt/sources.list.d/ rather than editing the main sources.list.
  3. Verify GPG keys: Always verify the GPG keys of repositories you add to ensure you’re getting packages from the intended source.
  4. Only enable what you need: Don’t enable deb-src lines unless you actually need source packages.
  5. Be cautious with non-free components: While sometimes necessary for hardware support, non-free components may have license restrictions or security implications.

Conclusion

Understanding how apt-get repositories work gives you more control over your Debian-based system. Whether you’re troubleshooting package issues, setting up a new system, or just curious about how Linux package management works, knowing the ins and outs of repository configuration is invaluable knowledge.

By properly managing your sources.list and leveraging the flexibility of repository modifiers, you can create a stable, secure, and well-maintained system tailored to your specific needs.

Got no time? Using LLMs to inspire kids during “Hour of Code” week

(Once upon a time in December of 2024…)

My daughter’s 6th grade teacher asked me to join her class and talk about coding this week.

Mrs. Susan used Python to create games with the class. She said the kids weren’t learning how they could use the code they were writing. They completed their tutorials, but they were not interested in creating their own projects. They didn’t understand the “why” of the code they were writing. They weren’t inspired. Mrs. Susan’s hope was that I could assist by talking about my career with the kids. I loved the sentiment- but we needed to show the kids something exciting before talking about a career in security.

Knowing how hackers break systems gives you advantages in life. In every kid, there is enthusiasm for being perceived as someone who can do “cool stuff” with computers. Most kids won’t put in the effort without a push. The hurdles to learning this field are high enough that most kids never get off the launch pad.


Problem: I am too busy.

The timing of this ask was awful. I’ve been extremely busy with work. My wife and daughter were sick for most of December. I’ve had to pick up all the farm chores for the girls in addition to my job. I was away in Austin last week- and the //todo: debt has grown. I thought I could wield my Hack-me-AP project- but Mrs. Susan needed to constrain our time to an hour. Hack-me-AP is not a good candidate for an hour-long discussion with the class.

I needed a compelling demo to ignite enthusiasm before I spoke about working in the field. I didn’t have time to craft the demo from scratch. I gambled 2 hours of my weekend using Anthropic Claude to generate a good cybersecurity demo. This was the workflow I used.


Using LLMs to generate a Proof of Concept in under 2 hours

I started off with an open ended prompt that could solve the problem. I gave it a little direction on what the implementation should be:

I'd like to set up a docker container to demonstrate basic hacking techniques for kids. I'd like to construct a vulnerable web application that we can use to "hack" a web server.

Claude helpfully provided me with a robustly vulnerable Python Flask web application. I was given instructions for building & running the containers. I was given helpful exploit examples. The design of the code included some practices I hadn’t seen before- so I had to probe Claude with some follow-up questions to understand how the app would work. But for a first pass- it was a pretty good start.

I didn’t need 6 different paths for exploitation- but I figured we could keep it simple with a SQL injection attack. I’ve been managing for the last year, and my memory is a little foggy- let’s have Claude help us figure out the details:

How would I use a tool like sqlmap to discover the sql injection vulnerability in the Educational Vulnerable Web Application?

Ugh- this is looking ugly. I’m going to have to explain what a database is. I’m going to have to talk about URL parameters. I’m going to have to talk about tables. Too much fresh background for 6th graders.

I didn’t use this demo, but it helped me quite a bit: it taught me that I don’t want a “damn vulnerable app.” I needed a simple demo of what a hacker can do. 6th graders have a longer attention span than 3rd graders… but not for long. 7th grade hormones are starting to appear. We need to keep the demonstration lean so the kids don’t feel overwhelmed with info.

We’re starting to get some shape to the project now. The vulnerability & attack are simple. Next we need to provoke the kids. I get an idea about the purpose of the page.

The first pass of the app worked, but it was 1999-era HTML. Ugly. It looked like a security guy wrote it.


I used Grok to generate a picture of a stereotypical 6th grade class- and with a little bit of code tweaking, I get a working demo:


The bait is set. We can present this page to the kids and claim a mysterious org called the “Internet Hall of Fame” has just published a “World’s coolest class” award. There’s a function for posting messages- the kids can leave some raspberries for the World’s coolest class. The Internet Hall of Fame obviously selected the wrong class. Somebody needs to do something!

Now we just need to get our hack working. I decide that I’m going to take a selfie with the class and then host it on a web server I control. Rather than using Apache- I’m just going to use Python to publish a directory. In terms of order of operations:

  1. Demo the page for the kids
  2. ~/Development/HackingDemoHillside$ docker run -p 5000:5000 hall-of-fame
  3. Visit the page in a browser: http://127.0.0.1:5000
  4. Provoke the class- Are we sure that this is really the best class ever? What if it was us?
  5. Take a picture of the class on my iPhone. Connect the phone to the linux box. Open the Files app. Copy the picture I want to the directory /Home/Pictures/TmpWebServer/ and rename it pic.jpg
  6. Run the temporary webserver that hosts the file:
  7. python3 -m http.server 9999

So now we’ve got a plan- I just need to write my XSS attack. Who’s got time for that?

Claude wrote my exploit. It didn’t work.

I had to perform multiple iterations with Claude- a lot of summarizing what happened vs. what I wanted. Eventually I was able to get a working payload:

<script>
// Wait briefly for the page to render, then swap the class photo
// for the picture hosted on our temporary web server
setTimeout(function() {
    var photos = document.getElementsByClassName('class-photo');
    if (photos.length > 0) {
        photos[0].src = 'http://127.0.0.1:9999/pic.jpg';
    }
}, 100);
</script>

My Big Monday Demo

Mrs. Susan introduced me. She shared that I work in cybersecurity and told the class I was going to tell them how I got interested in the field and what it’s like to work in this industry. I told the kids that working in cybersecurity means you help people protect systems from hackers. You need to know how people attack systems to understand how to defend those systems. If we’re going to talk about defense- we need to know what offense looks like.

I told the kids about the Internet Hall of Fame’s “world’s coolest class” website. They were outraged to see those AI kids on a page that should have celebrated Mrs. Susan’s class. All I needed to ask was “Should we do something about it?” They cheered- and before long we made everything better:


LLMs are going to change- not replace- your job. Get hacking.

Merry Christmas & Happy Holidays!

The code is available here for those who are interested:

https://github.com/CaptainMcCrank/HackingDemoHillside

Cultivating Happiness in Security Teams

Happy Friday, everyone!

I have a daughter who is studying abroad, which has me spending time thinking about how to aid her toward patterns that are healthy.

The Inspiration

I ran into a diagram about human emotions during a video about music theory. The video was describing the properties of various chord progressions and the feelings they tend to generate.

In this diagram, there are six fundamental emotions represented:

  • Fear
  • Anger
  • Disgust
  • Surprise
  • Sad
  • Happy

In this model, only 1/6 of the fundamental classes of emotion we experience are about flourishing. 5/6 of our fundamental emotions are about fight or flight instincts. That’s a lot of opportunity to feel anything but Happy. During my weekly call with my daughter, I ask how she spends time cultivating feelings of hope, playfulness and inspiration.

These are not naturally easy discussions. Anyone with a teenager knows you must go where the conversation goes. Structure helps remind me to stay positive and consistently nudge her towards positive outcomes.

This has led me to think about what it means to invest time into building team happiness. It’s affecting my perspective about what kind of team I want to build. I think it’s helpful- which is why I’m sharing this with you today.

When I get feedback from peers and partners, this chart helps me check whether trends are developing that require a change. Security teams get better engagement when partners trust you. People using language associated with fear, anger, disgust and surprise give you a signal about your impact.

I look at the language my partners and peers use and explore where they align against this wheel.

Ask yourself if the feedback you receive is aligning with the emotions you’re hoping to cultivate in your teams. If you’re ending up in emotions you didn’t anticipate, you may need to explore how you can rebuild trust.

The larger video isn’t great, but I liked the chart. If you’re more interested in the music theory discussion, you can check it out here:

Happy teams require intention. What are you doing to cultivate confidence, courage, provocative inspiration and respect in your teams?

History of discoveries of weakness in SHA-1

  • 1995: SHA-0 was published by the National Institute of Standards and Technology (NIST) as a cryptographic hash function.
  • 1996: SHA-0 was found to have serious flaws by cryptographers, leading NIST to revise the design and release SHA-1 as a replacement.
  • 2005: A group of cryptographers discovered a theoretical attack on SHA-1 that would allow for collisions to be found in about 2^69 operations, making it vulnerable to a “collision attack”.
  • 2006: The team from Shandong University in China presented refinements to the collision attack on SHA-1, further lowering the estimated cost- though no actual colliding messages were produced at the time.
  • 2008: Researchers from INRIA and EPFL announced improved attack estimates, claiming collisions could be found in roughly 2^52 operations- though no actual collision was demonstrated.
  • 2010: A team of researchers from the Netherlands and Germany presented an analysis estimating that a SHA-1 collision could be found in about 2^51 operations.
  • 2015: Researchers from CWI Amsterdam, Inria, and NTU Singapore demonstrated a freestart collision for SHA-1’s compression function (“the SHAppening”), a strong warning that full collisions were imminent.
  • 2017: Researchers from CWI Amsterdam and Google announced “SHAttered,” the first practical SHA-1 collision: two different PDF files with the same SHA-1 hash, computed with roughly 2^63 SHA-1 evaluations.
  • 2022: NIST, which had already deprecated SHA-1 for digital signatures, formally announced its retirement, with all use to be phased out by the end of 2030. More secure hash functions such as SHA-256 or SHA-3 are recommended instead.
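The practical takeaway from this timeline is simply to stop relying on SHA-1 wherever collision resistance matters. As a minimal sketch, Python’s standard hashlib is enough to compare SHA-1 against the recommended replacements (the messages below are placeholders, not the actual SHAttered PDFs):

```python
import hashlib

# Two distinct messages; a secure hash should make finding any pair
# with equal digests computationally infeasible.
m1 = b"message one"
m2 = b"message two"

# SHA-1: 160-bit digest, so a generic birthday attack costs ~2^80 work.
# SHAttered (2017) found a real collision in only ~2^63 SHA-1 evaluations.
print("sha1(m1)   =", hashlib.sha1(m1).hexdigest())
print("sha1(m2)   =", hashlib.sha1(m2).hexdigest())

# Recommended replacements: SHA-256 (birthday bound ~2^128) or SHA-3.
print("sha256(m1) =", hashlib.sha256(m1).hexdigest())
print("sha3(m1)   =", hashlib.sha3_256(m1).hexdigest())

# Rough speedup SHAttered achieved over a brute-force birthday attack.
print("speedup ~", 2 ** (80 - 63), "x")
```

Swapping the algorithm is usually a one-line change like this; the hard part is migrating stored digests, certificates, and protocol negotiations that still assume SHA-1.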

FIPS 180: Secure Hash Standard (SHA-0) – https://nvlpubs.nist.gov/nistpubs/Legacy/FIPS/NIST.FIPS.180.pdf
FIPS 180-1: Secure Hash Standard (SHA-1) – https://csrc.nist.gov/publications/detail/fips/180/1/archive/1995-04-17
Discovery of the theoretical attack on SHA-1 in 2005:
Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu. “Finding Collisions in the Full SHA-1.” CRYPTO 2005. Springer, Berlin, Heidelberg, 2005. https://link.springer.com/chapter/10.1007/11535218_17

Related collision attacks on MD4, MD5, HAVAL-128, and RIPEMD (2004): Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu. “Collisions for Hash Functions MD4, MD5, HAVAL-128 and RIPEMD.” Cryptology ePrint Archive, Report 2004/199, 2004. https://eprint.iacr.org/2004/199
First practical SHA-1 collision (“SHAttered”), published in 2017:
Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov. “The First Collision for Full SHA-1.” CRYPTO 2017. Springer, Cham, 2017. https://link.springer.com/chapter/10.1007/978-3-319-63715-0_17

Improved collision attack estimates (2013):
Marc Stevens. “New Collision Attacks on SHA-1 Based on Optimal Joint Local-Collision Analysis.” EUROCRYPT 2013. Springer, Berlin, Heidelberg, 2013.

First practical SHA-1 collision announced by Google and CWI in 2017:
Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov. “The first SHA-1 collision.” Google Security Blog, February 23, 2017. https://security.googleblog.com/2017/02/announcing-first-sha1-collision.html

Collision attack on SHA-1 demonstrated by researchers from CWI Amsterdam and Google in 2017:
Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov. “SHAttered” project page and proof-of-concept colliding files – https://shattered.io

Official deprecation and retirement of SHA-1 by NIST:
National Institute of Standards and Technology. “NIST Retires SHA-1 Cryptographic Algorithm.” December 15, 2022. https://www.nist.gov/news-events/news/2022/12/nist-retires-sha-1-cryptographic-algorithm
