This is a small series I wanted to start, in which I write about my small
threat hunting setup and describe what I built and what I am doing with it.
In this part, I will describe the network setup for my environment. How I
built the honeypots and the ELK server will be covered in follow-up articles
about threat hunting.
Keep in mind this is for education and fun; no serious stuff going on here.
Why I Built a Home Lab for Threat Hunting 🕵
The threat landscape is constantly evolving, with new attack vectors, tools,
and tactics appearing almost daily.
To keep my skills current with real-world threats, I built a home lab dedicated
to threat hunting. This environment allows me to safely observe attacks and
develop detection and defense methods. I deployed web and shell honeypots
and collect real threat data in a controlled setting.
It’s a practical, hands-on way to explore the behavior of adversaries, and
it’s a lot of fun!
Network Setup
Topology, Hardware and Tools 🛠
For the hardware setup, I kept things lightweight and affordable by using
Raspberry Pi devices and open-source tools. The honeypot is based on the
well-known Cowrie SSH honeypot and the honeyhttpd HTTP honeypot.
It runs on a Raspberry Pi 4 with 8GB of RAM, hosted inside a Docker 🐳
container. On the honeypot host, Filebeat is running to ingest the Cowrie
logs into the ELK stack.
For the ELK stack, I used a Raspberry Pi 5 with 16GB of RAM, running
Debian. The ELK services are also containerized using Docker. The stack is
based on the DShield-SIEM project, which I customized to better fit
my needs. I’ll dive deeper into those modifications and the ELK setup in
a follow-up article.
The network topology is straightforward but deliberately segmented. The router
is connected to a managed switch, which is responsible for handling VLAN
separation. Both the honeypot and the ELK server are connected to this switch
and are placed in an isolated VLAN (VLAN210). This VLAN is dedicated
exclusively to threat hunting, ensuring that any potentially malicious
traffic remains fully contained and cannot interfere with the rest of the
home network.
My client system 💻 is the only machine allowed to connect from outside the
VLAN to both the ELK server and the honeypot. This connection is strictly
for maintenance and administrative purposes. The ELK server is allowed to
access the internet, primarily to pull threat intelligence data from
external sources and security feeds.
In contrast, the honeypot is completely blocked from internet access,
with the exception of SSH and HTTP traffic going in and out of it. These
are the only services deliberately exposed to simulate vulnerable endpoints.
Communication between the honeypot and the ELK server is allowed for log
ingestion and analysis. However, I intend to introduce stricter controls on
this internal traffic in the future to further reduce the attack surface.
Firewall configuration🧱
For the pf(1) configuration, it was, as always with UNIX, fairly easy to get working:
match in quick log on egress proto tcp from any to any port 22 flags S/SA rdr-to $honeypot port 2222
match in quick log on egress proto tcp from any to any port 443 flags S/SA rdr-to $honeypot port 4433
These rules make sure any incoming TCP connection attempt to port 22 (SSH) or
port 443 (HTTPS) is immediately intercepted, logged, and transparently
redirected to the $honeypot server listening on port 2222 (SSH) or 4433
(HTTPS).
Switch configuration
Here you can see my managed switch configuration. Port 5 (honeypot) and port 3
(ELK) are assigned to VLAN210; port 2 is the router, which needs to talk into
both networks; and port 1 is my workstation, used to access the threat hunting
environment.
What I Learned
Building and maintaining this lightweight honeypot and monitoring setup on
Raspberry Pi devices has been an insightful experience. Here are some key takeaways:
Resource Efficiency: Raspberry Pis provide a surprisingly capable
platform for running complex services like Cowrie honeypot and the ELK stack
in Docker containers, keeping costs and power consumption low.
Network Segmentation Matters: Isolating the honeypot and ELK server in a
dedicated VLAN (VLAN210) effectively contains malicious traffic, protecting
the rest of the home network from potential threats.
Controlled Access Is Crucial: Restricting external access to only
authorized clients and limiting the honeypot’s internet connectivity
reduces the attack surface while still enabling useful data collection.
Logging and Data Collection: Using Filebeat to ship logs from the
honeypot to the ELK stack provides real-time visibility into attacker
behavior, which is essential for threat hunting and incident response.
Customization Pays Off: Adapting existing tools and SIEM projects
(like DShield) to specific needs improves effectiveness and allows for
tailored threat detection.
Future Improvements: There is always room to tighten internal
communication rules and harden the setup further to minimize risk and
improve operational security.
This project highlights the balance between practical constraints and security
needs, demonstrating that even modest hardware can contribute significantly
to threat intelligence and network defense.
I drew inspiration for this setup from the DShield SIEM project by SANS and
would like to express my gratitude for their valuable work.
As someone who is passionate about security and has an interest in
Unix operating systems, OpenBSD particularly captivates due to its
dedication to security, stability, and simplicity. In comparison to
other OSes, what sets OpenBSD apart? And how do these principles
align with my journey through Zen meditation?
At first glance, OpenBSD and Zen may appear to be vastly disparate
concepts - one being a potent operating system, while the other is
a spiritual practice originating from ancient China. However, as I
delved deeper into both realms, I uncovered some fascinating
similarities.
Simplicity and Clarity
In Zen, simplicity is key to achieving inner clarity and balance.
By stripping away unnecessary complexity, OpenBSD aims to create a
stable and secure foundation for users. Similarly, in meditation,
simplicity helps to quiet the mind and focus on the present moment.
This alignment between OpenBSD’s philosophy and Zen practices extends
to their shared emphasis on mindfulness and deliberate decision-making,
fostering an environment of security and tranquility in both realms.
Attention to Detail
Both OpenBSD and Zen underscore the significance of attending to detail.
In software development, this entails meticulously crafting each line of
code to guarantee stability and security. In Zen practice, it involves
paying close attention to one’s breath, posture, and mental state to
attain a state of mindfulness. By zeroing in on these details, both
OpenBSD and Zen strive for perfection.
The Power of Consistency
OpenBSD’s dedication to consistency is manifested in its codebase, where each
code change undergoes a thorough code review process. Consistency holds equal
importance in Zen practice, as it fosters a sense of routine and stability.
By cultivating a consistent daily meditation practice, I have discovered that
consistency is instrumental in making progress on my spiritual journey.
OpenBSD’s emphasis on consistency mirrors the principles of Zen, emphasizing
the value of diligence and discipline in both domains.
The Beauty of Imperfection
Finally, both OpenBSD and Zen acknowledge the elegance in imperfection.
In software development, imperfections can often be rectified or lessened
through meticulous design and testing. In Zen practice, imperfections are
perceived as avenues for growth and self-awareness.
By acknowledging our imperfections, we can nurture humility and compassion.
As I progress in my journey with OpenBSD and Zen, I am consistently struck
by the ways in which these two seemingly unrelated realms intersect. By
embracing simplicity, attention to detail, consistency, and the beauty of
imperfection, both OpenBSD and Zen provide unique perspectives on the nature
of software development and personal growth. Stay tuned for further insights
from my exploration in the realm of security!
This post provides a brief walkthrough of how to deploy a lightweight,
containerized SSH honeypot using Cowrie and Podman, with the goal of
capturing and analyzing malicious activity as part of my threat hunting
strategy.
What is Cowrie?
Cowrie is an interactive SSH and Telnet honeypot designed to emulate a
real system, capturing attacker behavior in a controlled environment.
It allows defenders and researchers to observe malicious activity without
exposing actual infrastructure.
Key capabilities of Cowrie include:
Full session logging: Records all commands entered by the attacker,
along with input/output streams and timing data. Sessions can be saved
as plaintext or in formats suitable for replay.
Fake file system and shell environment: Emulates a basic Linux shell
with a user-modifiable file system. Attackers can navigate directories,
read/write fake files, or attempt to download/upload payloads.
Command emulation: Supports a large set of common Unix commands (`ls`,
`cat`, `wget`, etc.), allowing attackers to interact naturally, as
if on a real system, and can be extended with additional commands.
Credential logging: Captures usernames and passwords used in
brute-force login attempts or interactive logins.
File download capture: Logs and optionally stores any files attackers
attempt to retrieve via `wget`, `curl`, or similar tools.
JSON-formatted logging and integrations: Outputs structured logs that
are easy to parse and ingest into systems like ELK, Splunk, or custom
analysis pipelines.
Cowrie is widely used in research, threat intelligence, and proactive defense
efforts to gather Indicators of Compromise (IOCs) and understand attacker
tactics, techniques, and procedures (TTPs).
Why Podman over Docker?
Podman offers several advantages over Docker, particularly in terms of security
and system integration. It supports rootless containers, allowing users to run
containers without elevated privileges, which reduces the attack surface.
Podman is daemon-less, integrating more seamlessly with systemd and existing
Linux workflows. Additionally, Podman is fully compatible with the Open
Container Initiative (OCI) standards, ensuring interoperability and
flexibility across container ecosystems.
Preconditions / System setup
Before proceeding with the Cowrie setup, I made sure the following preconditions were met:
Ubuntu Installed on Raspberry Pi 4+
I am using a Raspberry Pi 4+ running Ubuntu
System Fully Updated
After installation, I made sure the system was up to date:
sudo apt update && sudo apt upgrade -y
Podman installed and working
# Ubuntu 20.10 and newer
sudo apt-get -y install podman
Run the Hello World container. At this point I did not have the cowrie user
set up yet, so I used my regular system user to test:
podman run hello-world
Trying to pull docker.io/library/hello-world:latest...
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
Sometimes, though, the pull fails; in that case I had to put the `docker.io`
registry prefix in front of the image name:
podman run docker.io/hello-world
Then it worked reliably.
VLAN Tagging Configured on Network Interface
In my threat hunting network setup, the honeypot requires VLAN tagging to be
configured so it is reachable from the outside; VLAN210 is my restricted network.
Therefore I needed to configure the VLAN using nmcli so it’s persistent across reboots.
Example: Create a VLAN interface (e.g., VLAN ID 210 on the main interface `mainif`):
sudo nmcli con add type vlan con-name vlan210 dev mainif id 210 ip4 192.168.210.3/24 gw4 192.168.210.1
sudo nmcli con up vlan210
con-name vlan210: Name of the new VLAN connection.
dev mainif: Physical interface to tag.
id 210: VLAN ID.
ip4, gw4: Optional IP and gateway assignment.
This will persist the configuration and activate the VLAN interface
immediately. Next I moved on to Install the honeypot.
Setup environment, install cowrie as container and adjust configuration
🐧 Create a Dedicated User for Cowrie (No Login Shell)
Running the Podman container under a dedicated system user with no login shell
is a recommended security best practice. Reasons include:
Privilege Separation:
Isolates the container from other system processes and users, limiting
the potential impact of a compromise.
Reduced Attack Surface:
The user has no login shell (e.g., /usr/sbin/nologin), meaning it can’t be
used to log into the system interactively.
Auditing & Logging:
Helps distinguish container activity in system logs and process lists,
making monitoring easier.
Least Privilege Principle:
The user has only the permissions necessary to run the container — nothing more.
1. Create the ‘cowrie’ user (no home directory, no login shell)
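A minimal sketch of that step (exact useradd flags vary slightly between distributions; on Ubuntu this should work):

```shell
# System account: no home directory, no interactive login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin cowrie
```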
The `cowrie.cfg` file is the main configuration for Cowrie, the SSH/Telnet
honeypot we use. It uses INI-style syntax and is divided into sections. Each section
begins with a header like [section_name].
📁 Key Sections & Settings
[ssh]
Enable or disable SSH/Telnet and set the port to listen on:
enabled = true
listen_port = 2222
[honeypot]
Set honeypot host name and logpath properties:
hostname = cowrie-host
# Directory where to save log files in.
log_path = var/log/cowrie
I use AuthRandom here, which allows access after randint(2,5) attempts. This
means the threat actor will fail some logins, while others will be let in
immediately.
sudo -u cowrie: Runs the Podman command as the unprivileged cowrie user.
--uidmap 0:999:1001: Maps root (UID 0) inside the container to the cowrie UID on the host.
-v /opt/cowrie/etc and /opt/cowrie/var: Mounts configuration and data volumes from the host with `:Z` to apply correct SELinux labels (optional on systems without SELinux).
-p 2222:2222: Forwards port 2222 from host to container (Cowrie’s SSH honeypot port).
cowrie/cowrie: The container image name (use latest or specific tag as needed).
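Putting the flags above together, the full run command looks roughly like this sketch (the container-internal mount paths and the image tag are assumptions; check the Cowrie image documentation for the exact paths):

```shell
sudo -u cowrie podman run -d --name cowrie \
  --uidmap 0:999:1001 \
  -v /opt/cowrie/etc:/cowrie/cowrie-git/etc:Z \
  -v /opt/cowrie/var:/cowrie/cowrie-git/var:Z \
  -p 2222:2222 \
  docker.io/cowrie/cowrie:latest
```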
Benefits:
Container runs as non-root on the host:
Even if a process inside the container thinks it’s root, it’s actually limited to the unprivileged cowrie user outside the container.
Enhanced security:
If the container is compromised, the attacker only gets access as the cowrie user — not real root.
Avoids root-equivalent risks:
Prevents privilege escalation or access to sensitive host files and devices.
🎯 Operating the Honeypot
View logs
I think knowing how to debug the container is important, so we start
with the logs:
sudo -u cowrie podman logs -f cowrie
...snip...
[HoneyPotSSHTransport,14,10.0.2.100] Closing TTY Log: var/lib/cowrie/tty/e52d9c508c502347344d8c07ad91cbd6068afc75ff6292f062a09ca381c89e71 after 0.8 seconds
[cowrie.ssh.connection.CowrieSSHConnection#info] sending close 0
[cowrie.ssh.session.HoneyPotSSHSession#info] remote close
[HoneyPotSSHTransport,14,10.0.2.100] Got remote error, code 11 reason: b'disconnected by user'
[HoneyPotSSHTransport,14,10.0.2.100] avatar root logging out
[cowrie.ssh.transport.HoneyPotSSHTransport#info] connection lost
[HoneyPotSSHTransport,14,10.0.2.100] Connection lost after 2.8 seconds
...snip...
Restart container
If things go sideways, just restart the container:
sudo -u cowrie podman restart cowrie
In the logs you can see that cowrie is running and accepting SSH connections:
...snip...
[-] CowrieSSHFactory starting on 2222
[cowrie.ssh.factory.CowrieSSHFactory#info] Starting factory <cowrie.ssh.factory.CowrieSSHFactory object at 0x7fb66f26d0>
[-] Ready to accept SSH connections
...snip...
When the log says “Ready to accept SSH connections”, I tested whether I could log in:
ssh 192.168.210.3 -p 2222 -l root
root@192.168.210.3 password:
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@svr04:~# uname -a
Linux svr04 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux
root@svr04:~#
Stop container
Nothing special here:
sudo -u cowrie podman stop cowrie
🔄 Automatically Restart Cowrie Podman Container with systemd
To keep your Cowrie container running reliably and restart it if it stops, use a systemd service with restart policies.
Please make sure to double-check this part on your side, as I am no systemd
expert at all; for me, this just worked.
Step 1: Create a systemd Timer Unit
The content below is a systemd timer that triggers a `check_cowrie.service`
health check every minute, so create it as
`/etc/systemd/system/check_cowrie.timer`:
[Unit]
Description=Run Cowrie health check every minute

[Timer]
OnBootSec=1min
OnUnitActiveSec=1min
Unit=check_cowrie.service

[Install]
WantedBy=timers.target
The Filebeat config is straightforward. You have to write a `filebeat.inputs`
block that contains the paths of the log files you need to ingest, and at the
end the log destination (Logstash), so that Filebeat knows where to send the
logs:
sudo filebeat test output
logstash: 192.168.210.5:5044...
connection...
parse host... OK
dns lookup... OK
addresses: 192.168.210.5
dial up... OK
TLS... WARN secure connection disabled
talk to server... OK
Monitoring a router is something many people forget about, especially at home.
But a router is the heart of the network — when it fails, everything fails.
OpenBSD already provides a strong foundation for reliability and security.
By adding Monit (a lightweight monitoring tool) and using Pushover (simple mobile notifications),
you can build a robust alerting and monitoring setup that works even on small hardware.
This article shows how to install, configure, and use Monit to watch essential
router services and send push notifications with Pushover.
Requirements
To follow this guide you need:
OpenBSD router (any supported version)
Monit installed from packages
Basic shell access
A Pushover account and an API token (application token)
Your Pushover user key
All configuration happens in /etc/monitrc.
Installing Monit on OpenBSD
Installing Monit on OpenBSD is simple:
pkg_add monit
After installation, enable Monit so that it starts automatically:
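On OpenBSD, the package ships an rc script, so this can be done with rcctl (a sketch; I assume the service name is monit):

```shell
doas rcctl enable monit
doas rcctl start monit
```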
You must set permissions correctly, because Monit refuses unsafe files:
chmod 600 /etc/monitrc
Monit – Essential System and Router Services
System monitoring runs every 45 seconds. The first check is delayed
by 120 seconds to avoid overloading the system immediately after boot.
set daemon 45 with start delay 120
Monit logs to syslog. `idfile` and `statefile` store Monit’s
persistent state and identity across restarts.
set log syslog
set idfile /var/monit/id
set statefile /var/monit/state
Limits control buffer sizes and timeouts for
program outputs, network I/O, and service start/stop/restart
operations. This prevents Monit from hanging or processing excessive data.
Monit will send alerts via local email. Events are queued under `/var/monit/events` to prevent message loss during temporary network problems.
set mailserver localhost
set eventqueue basedir /var/monit/events slots 200
set mail-format { from: root@monit }
set alert root@localhost not on { instance, action }
If you do not want email alerts, simply comment out or delete all `set alert` entries:
# set alert root@localhost not on { instance, action }
After this, Monit will not send any emails, but it will still monitor services.
Monit HTTP interface is on port 2812. Access is restricted to localhost,
a local subnet (`192.168.X.0/24`), and an admin user with a password.
set httpd port 2812 and
allow localhost
allow 192.168.X.0/255.255.255.0
allow admin:foobar
Monit will start all monitored services
automatically on reboot.
set onreboot start
This monitors overall system health:
1- and 5-minute load per CPU core
CPU usage
Memory and swap usage
If thresholds are exceeded, it triggers `pushover.sh` for alerts.
check system $HOST
if loadavg (1min) per core > 2 for 5 cycles then exec /usr/local/bin/pushover.sh
if loadavg (5min) per core > 1.5 for 10 cycles then exec /usr/local/bin/pushover.sh
if cpu usage > 95% for 10 cycles then exec /usr/local/bin/pushover.sh
if memory usage > 75% then exec /usr/local/bin/pushover.sh
if swap usage > 25% then exec /usr/local/bin/pushover.sh
group system
`/home` filesystem is monitored for:
Disk space and inode usage
Read/write throughput (MB/s and IOPS)
Service response time
Alerts are sent via `pushover.sh` if any threshold is exceeded.
check filesystem home_fs with path /dev/sd0k
start program = "/sbin/mount /home"
stop program = "/sbin/umount /home"
if space usage > 90% then exec /usr/local/bin/pushover.sh
if inode usage > 95% then exec /usr/local/bin/pushover.sh
if read rate > 8 MB/s for 20 cycles then exec /usr/local/bin/pushover.sh
if read rate > 800 operations/s for 15 cycles then exec /usr/local/bin/pushover.sh
if write rate > 8 MB/s for 20 cycles then exec /usr/local/bin/pushover.sh
if write rate > 800 operations/s for 15 cycles then exec /usr/local/bin/pushover.sh
if service time > 10 milliseconds for 3 times within 15 cycles then exec /usr/local/bin/pushover.sh
group system
Root filesystem `/` has similar checks but shorter cycles since it’s critical to system stability.
check filesystem root_fs with path /dev/sd0a
start program = "/sbin/mount /"
stop program = "/sbin/umount /"
if space usage > 90% then exec /usr/local/bin/pushover.sh
if inode usage > 95% then exec /usr/local/bin/pushover.sh
if read rate > 8 MB/s for 5 cycles then exec /usr/local/bin/pushover.sh
if read rate > 800 operations/s for 5 cycles then exec /usr/local/bin/pushover.sh
if write rate > 8 MB/s for 5 cycles then exec /usr/local/bin/pushover.sh
if write rate > 800 operations/s for 5 cycles then exec /usr/local/bin/pushover.sh
if service time > 10 milliseconds for 3 times within 5 cycles then exec /usr/local/bin/pushover.sh
group system
Monit ensures secure permissions for `/root`. If permissions are wrong, monitoring for this directory is disabled to avoid false alarms.
check directory bin with path /root
if failed permission 700 then unmonitor
if failed uid 0 then unmonitor
if failed gid 0 then unmonitor
group system
A network host is ping-checked. Frequent failures trigger alerts. Dependencies on
interfaces and services ensure checks only run when the network is up.
check host homeassistant with address 192.168.X.19
if failed ping then alert
if 5 restarts within 10 cycles then exec /usr/local/bin/pushover.sh
group network
depends on iface_in,dhcpd,unbound
Monit watches network interface `pppoeX`:
Restarts interface if link goes down
Alerts on saturation or high upload
Limits repeated restarts to avoid loops
check network iface_out with interface pppoeX
start program = "/bin/sh /etc/netstart pppoeX"
if link down then restart else exec /usr/local/bin/pushover.sh
if changed link then exec /usr/local/bin/pushover.sh
if saturation > 90% then exec /usr/local/bin/pushover.sh
if total uploaded > 5 GB in last hour then exec /usr/local/bin/pushover.sh
if 5 restarts within 10 cycles then exec /usr/local/bin/pushover.sh
group network
DNS resolver `unbound` is monitored by PID and port. Failures trigger a restart, repeated failures trigger alerts.
check process unbound with pidfile /var/unbound/unbound.pid
start program = "/usr/sbin/rcctl start unbound"
stop program = "/usr/sbin/rcctl stop unbound"
if failed port 53 for 3 cycles then restart
if 3 restarts within 10 cycles then exec /usr/local/bin/pushover.sh
group network
depends on dnscrypt_proxy,iface_out,iface_in
DHCP server is monitored. Missing process triggers a restart. Alerts are sent if failures happen repeatedly.
check process dhcpd with matching /usr/sbin/dhcpd
start program = "/usr/sbin/rcctl start dhcpd"
stop program = "/usr/sbin/rcctl stop dhcpd"
if does not exist then restart
if 2 restarts within 10 cycles then exec /usr/local/bin/pushover.sh
group network
depends on iface_in
NTP daemon ensures time synchronization. Missing process triggers restart; repeated issues generate alerts.
check process ntpd with matching /usr/sbin/ntpd
start program = "/usr/sbin/rcctl start ntpd"
stop program = "/usr/sbin/rcctl stop ntpd"
if does not exist then restart
if 5 restarts within 5 cycles then exec /usr/local/bin/pushover.sh
group network
depends on iface_out
vnStat daemon monitors network traffic statistics. Monit restarts it if it stops and alerts on repeated failures.
check process vnstatd with matching /usr/local/sbin/vnstatd
start program = "/usr/sbin/rcctl start vnstatd"
stop program = "/usr/sbin/rcctl stop vnstatd"
if does not exist then restart
if 5 restarts within 15 cycles then exec /usr/local/bin/pushover.sh
group network
depends on iface_out
Adding Pushover Alerts
Pushover provides a simple HTTPS API for sending notifications to your phone.
Monit can call an external script.
Create /usr/local/bin/pushover.sh:
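A sketch of such a script, assuming curl is installed; the token and user key are placeholders for your own Pushover credentials, and the MONIT_* variables are the environment variables Monit exports to programs it executes:

```shell
#!/bin/sh
# Placeholders: replace with your Pushover application token and user key
TOKEN="APP_TOKEN_HERE"
USERKEY="USER_KEY_HERE"

# Monit exports MONIT_SERVICE and MONIT_DESCRIPTION to exec'ed programs
curl -s \
    --form-string "token=${TOKEN}" \
    --form-string "user=${USERKEY}" \
    --form-string "title=Monit: ${MONIT_SERVICE:-unknown service}" \
    --form-string "message=${MONIT_DESCRIPTION:-event triggered}" \
    https://api.pushover.net/1/messages.json
```

Make it executable with chmod 755 /usr/local/bin/pushover.sh.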
Now any check that contains the “exec /usr/local/bin/pushover.sh” line will trigger a Pushover notification:
check process vnstatd with matching /usr/local/sbin/vnstatd
start program = "/usr/sbin/rcctl start vnstatd"
stop program = "/usr/sbin/rcctl stop vnstatd"
if does not exist then restart
if 5 restarts within 15 cycles then exec /usr/local/bin/pushover.sh
group network
depends on iface_out
Monit will automatically send the full text of the event to Pushover.
Testing and Maintenance
Test your configuration
monit -t                # syntax check
monit reload            # reload configuration
monit summary           # show command line overview
monit status vnstatd    # show check status
Conclusion
Using Monit together with Pushover is an excellent way to keep a close eye on an OpenBSD router.
Monit is tiny, fast, and reliable — perfect for embedded hardware.
Pushover provides instant alerts with almost no configuration or overhead.
For a home router or small business network, this combination gives you
semi-professional-grade monitoring with minimal effort.
If you’re running Elasticsearch on a single node — like a Raspberry Pi or small lab setup like I am —
you might notice some indices appear with a yellow health status.
This short article explains what that means and how to fix it, especially in resource-constrained, single-node environments.
What Does “Yellow” Mean?
In Elasticsearch:
green: All primary and replica shards are assigned and active.
yellow: All primary shards are active, but at least one replica shard is unassigned.
red: At least one primary shard is missing → critical!
Why Yellow Happens on Single Nodes
In single-node clusters, Elasticsearch cannot assign replica shards (because replicas must be on a different node).
So any index with replicas will always be yellow unless:
You add more nodes (not ideal on a Raspberry Pi)
Or: You disable replicas (number_of_replicas: 0)
Step-by-Step: Diagnose Yellow Shards
1. List all yellow indices
GET _cat/indices?v&health=yellow
2. See why a shard is unassigned
GET _cluster/allocation/explain
3. Inspect shard assignment of a specific index
GET _cat/shards/.monitoring-beats-7-2025.08.06?v
Example output:
index shard prirep state docs store ip node
.monitoring-beats-7-2025.08.06 0 p STARTED 7790 5.9mb 127.0.0.1 mynode
.monitoring-beats-7-2025.08.06 0 r UNASSIGNED
→ The r (replica) is unassigned → yellow status.
How to Fix It
A. Fix an individual index
Set replicas to zero:
PUT .monitoring-beats-7-2025.08.06/_settings
{"index" : {"number_of_replicas" : 0}}
This changes the index health from yellow to green.
B. Automatically fix all yellow indices
If you want to automate the fix, use this (Kibana Dev Tools):
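A sketch using a legacy index template that sets zero replicas for every newly created index (the template name is arbitrary):

```
PUT _template/zero_replicas
{
  "index_patterns": ["*"],
  "settings": {
    "number_of_replicas": 0
  }
}
```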
> ⚠️ This applies to all future indices. Only do this in single-node environments.
Conclusion
Yellow indices aren’t dangerous by default — they just mean you’re missing redundancy.
In small environments, it’s perfectly safe to run with zero replicas.
So I had this USB disk attached to my OpenBSD router, used as storage. One
Saturday, walking by, I noticed weird clicking sounds coming from the disk,
so I knew time was running out before it would fail.
Curiously, when I plugged the same drive into a Linux box, it was detected — and even
showed a valid OpenBSD partition table. That gave me a glimmer of hope:
maybe the hardware wasn’t completely dead yet.
So, for fun (and a little bit of stubborn curiosity), I decided to spend
the weekend seeing how much I could rescue from it.
This post documents the process — part forensic experiment, part recovery attempt,
and part “let’s see what happens if I do this.”
Phase 1: Identifying the Disk under Linux
Before doing anything risky, I wanted to be sure I was imaging the right disk.
The idea was to identify the OpenBSD partition and dump it to an image file.
Listing block devices
lsblk -o NAME,SIZE,FSTYPE,TYPE,LABEL,UUID
That gives a good overview — which disks are present, how large they are, and what filesystems they contain.
Sure enough, my external USB drive showed up as `/dev/sda`.
Inspecting partition table
sudo fdisk -l /dev/sda
Example output:
Disk /dev/sda: 931.5 GiB, 1000204883968 bytes, 1953525164 sectors
Disk model: External USB 3.0
Sector size: 512 bytes
Disklabel type: dos
Device Boot Start End Sectors Size Id Type
/dev/sda4 * 64 1953525163 1953525100 931.5G a6 OpenBSD
Perfect. The OpenBSD partition was still there (`/dev/sda4`), and it even reported the correct size.
The Start sector (64) is important later for offset calculations.
Type a6 OpenBSD confirmed the filesystem was OpenBSD-specific (likely softraid).
Knowing the sector size (512 bytes) ensured that later tools like `dd` or `ddrescue` wouldn’t misalign reads.
At this point, the goal was to make a bit-for-bit copy of that partition, compress it, and work
on the image rather than risk further damage to the actual disk.
Phase 2: Creating a Compressed Disk Image
For imaging, I decided to use GNU ddrescue — it’s great for
flaky disks and can retry sectors intelligently.
Installing ddrescue
On Fedora, installation was trivial:
sudo dnf install ddrescue
First Attempt (Quick and Dirty)
I tried a fast, one-shot dump — not ideal for a failing disk, but I wanted to see if it would work at all:
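The one-shot attempt was a plain dd-to-xz pipeline along these lines (the block size is my choice):

```shell
sudo dd if=/dev/sda4 bs=1M status=progress | xz -T0 > openbsd_sda4.img.xz
```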
That command streams data directly from the device, compresses it with xz, and writes the result.
It works — if the disk is healthy. Mine wasn’t, so it failed partway through.
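The second attempt used ddrescue with a log/mapfile so it could resume:

```shell
sudo ddrescue -d -r3 /dev/sda4 openbsd_sda4.img openbsd_sda4.log
```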
This time, ddrescue created a detailed log file so I could resume later if the system froze or the disk disconnected.
It took most of the night, but eventually I had a clean (or mostly clean) image.
Explanation of parameters
-r3 retries each bad block 3 times
-d enables direct disk I/O
The `.log` file lets you stop and restart without losing progress
xz -T0 uses all CPU cores for compression
After the dump, I verified the output:
ls -lh openbsd_sda4.img.xz
xz -t openbsd_sda4.img.xz   # test integrity
sha256sum openbsd_sda4.img.xz > openbsd_sda4.img.xz.sha256
Everything checked out — a ~450 GB compressed image file safely sitting on my main system.
Phase 3: Simulating Disk Damage (For Fun and Testing)
Since the real disk was unstable, I wanted a safe way to experiment.
So I created a copy of the image and simulated damage to practice recovery techniques.
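A sketch of how such damage can be simulated on the copy (the offsets and counts are arbitrary examples):

```shell
# Work on a copy so the pristine image stays untouched
cp openbsd_sda4.img damaged.img
# Overwrite a few sectors with random bytes to fake unreadable regions
dd if=/dev/urandom of=damaged.img bs=512 seek=2048 count=64 conv=notrunc
```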
And just like that, I could practice recovery without touching the actual hardware again.
Optional Compression
xz -T0 openbsd_sda4.img
It’s amazing how much you can still do with raw disk images and a few classic Unix tools.
Phase 4: Performance Tuning and System Stability
During the rescue, I learned (the hard way) that ddrescue can saturate I/O and make your system lag like crazy.
So I ended up using this combination for a gentler approach:
tmux new-session -s rescue
sudo ddrescue -d -r3 /dev/sda4 openbsd_sda4.img openbsd_sda4.log
# Detach with Ctrl-B D
Later, I could simply:
tmux attach -t rescue
That setup saved me more than once when I accidentally closed an SSH session.
Phase 5: Next Steps — Future Analysis
Once I had a full image, the plan was to:

- Decompress it (`unxz openbsd_sda4.img.xz`)
- Attach it as a loopback device under Linux, or use `vnconfig` under OpenBSD
- Attempt to reassemble the softraid volume using `bioctl`
- If all goes well, mount the decrypted filesystem and access my old data
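The first two steps can be sketched as follows; the demo decompresses a scratch file, while the actual attach commands (which need root and the real image) are shown only as comments:

```shell
set -eu
# Demo the decompress step on a scratch file (assumption: xz tools installed)
printf 'disk image bytes' > mini.img
xz -k mini.img                      # produces mini.img.xz, keeps the original
rm mini.img
unxz mini.img.xz && echo "decompressed to mini.img"
# The real attach would then be (root required, not run here):
#   sudo losetup --find --show --read-only openbsd_sda4.img
#   # or on OpenBSD: vnconfig vnd0 openbsd_sda4.img
```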
That’s a topic for another weekend. But getting to this
point already felt like a small victory.
Conclusion
What started as a “let’s see if I can still read this disk” experiment turned into
a proper mini-forensics exercise. Even though the original USB drive was dying,
I managed to preserve most of its data and learned a ton in the process.
All in all, it was quite fun to do something forensics-related on an OpenBSD target. It's not something you come across every day, but when you do, it's good to be prepared.
Key takeaways:
- `ddrescue` is your friend for unstable media
- Always work on images, not the original device
- Compression and checksums are cheap insurance
- And most importantly: never underestimate what you can recover with a bit of patience and Unix philosophy
In an age where digital identities are easily faked and impersonation is just a few clicks away, I decided to take a step forward in securing mine. GPG (GNU Privacy Guard) provides a robust way to authenticate, encrypt, and sign digital content. In this post, I’ll walk you through how I:
- Created a GPG key pair
- Set up subkeys and stored them on my YubiKey
- Published my public key on my website
- Signed and encrypted personal documents for secure public sharing
- Configured email signing using GPG
Step 1: Installing GPG
To start, I made sure GPG was installed. Here’s how I did it on each of my systems:
On Ubuntu/Debian:
sudo apt update && sudo apt install gnupg
On Fedora 40:
sudo dnf install gnupg2
On OpenBSD 7.6:
doas pkg_add gnupg
Check your installation:
gpg --version
Step 2: Creating My GPG Key Pair
I created a new key using:
gpg --full-generate-key
Here’s what I chose:
- Key type: ed25519 (modern and compact) or RSA and RSA (widely compatible)
- Key length: 4096 bits (if RSA)
- Expiration: 2 years (I can always renew)
- My real name or handle
- My preferred contact email
- A strong passphrase, saved in a password manager
After generating the key, I listed it and saved the fingerprint:
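The listing and export look roughly like this; the demo below generates a throwaway key in a temporary keyring, and the address is a placeholder:

```shell
set -eu
export GNUPGHOME="$(mktemp -d)"          # throwaway keyring for the demo
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Demo User <demo@example.invalid>" ed25519 sign 0
gpg --list-keys --fingerprint demo@example.invalid
# Export the public key for publishing on the website:
gpg --armor --export demo@example.invalid > publickey.asc
head -1 publickey.asc                    # → -----BEGIN PGP PUBLIC KEY BLOCK-----
```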
I uploaded publickey.asc to my website and linked it like this:
<a href="/publickey.asc">🔑 Download my GPG public key</a>
Additionally, I displayed my key’s fingerprint on the page so that people can verify its authenticity manually.
Step 5: Email Signing and Encryption
I configured email signing using my GPG key.
For Thunderbird (Linux, OpenBSD, Windows):
- OpenPGP support is built-in.
- I enabled signing for all outgoing mail.
- The key lives on the YubiKey, so no key is stored on disk.
For Mutt / CLI mailers:
- I used `gpg-agent` for passphrase and key handling.
- Configured `.muttrc` to sign and/or encrypt automatically.
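For reference, a minimal `.muttrc` fragment along these lines should do it (option names from current Mutt; the key ID is a placeholder):

```
set crypt_use_gpgme = yes      # hand crypto off to GPGME instead of calling gpg directly
set crypt_autosign = yes       # sign all outgoing mail
set crypt_replyencrypt = yes   # encrypt replies to encrypted mail
set pgp_default_key = "0xDEADBEEFDEADBEEF"  # placeholder: your key ID
```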
Signing ensures message authenticity. If recipients have my key, they can encrypt replies.
Step 6: Signing and Encrypting Documents for the Public
To safely share personal certificates and private files, I signed and optionally encrypted them:
# Sign only (adds a signature block)
gpg --sign --armor diploma.pdf

# Sign and encrypt with a password (no public key needed)
gpg --symmetric --armor --cipher-algo AES256 diploma.pdf
This way, the document is verifiably mine and only decryptable with the shared password.
The encrypted .asc files can be uploaded to the website, with instructions for downloading and decrypting.
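Recipients then decrypt with the shared password. A round-trip sketch, using a placeholder file and password:

```shell
set -eu
export GNUPGHOME="$(mktemp -d)"          # isolated keyring for the demo
printf 'certificate contents' > diploma.txt
gpg --batch --pinentry-mode loopback --passphrase 'shared-password' \
    --symmetric --armor --cipher-algo AES256 diploma.txt
gpg --batch --pinentry-mode loopback --passphrase 'shared-password' \
    --decrypt diploma.txt.asc            # prints the original contents
```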
Step 7: Offline Backup of My Master Key
Before moving entirely to the YubiKey, I backed up the master key offline:
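The export itself is along these lines; the demo key, address, and filename are placeholders:

```shell
set -eu
export GNUPGHOME="$(mktemp -d)"          # throwaway keyring for the demo
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key "Backup Demo <backup@example.invalid>" ed25519 sign 0
# Export the secret key material for offline storage:
gpg --batch --pinentry-mode loopback --passphrase '' \
    --armor --export-secret-keys backup@example.invalid > master-secret.asc
head -1 master-secret.asc                # → -----BEGIN PGP PRIVATE KEY BLOCK-----
```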
I stored it on an encrypted USB drive, using either:

- LUKS (on Linux)
- OpenBSD softraid(4) encryption
Conclusion
Rolling out GPG was super easy. With my identity cryptographically verifiable, email
signing in place, and secure document sharing live on my site, I now
have a strong, decentralized identity system.
Hi, I’m Dirk — a security engineer with a deep passion for skateboarding and
digital forensics.
Skateboarding is more than a hobby to me; it’s a source of creativity, freedom,
and community. It shapes how I approach challenges — with persistence, balance,
and a mindset open to innovation.
Beyond that, I’m an OpenBSD enthusiast. I’ve built an OpenBSD-based router and
threat-hunting infrastructure to stay ahead in cybersecurity. I appreciate
OpenBSD for its simplicity, security, and elegance — qualities I strive to
bring to my work.
I’m also a longtime Emacs user, relying on it daily for coding, writing, and
organizing my thoughts. It’s part of how I stay productive and focused.
In cybersecurity, I’m committed to continuous growth and adapting to new
challenges. When I’m not working on security projects, you’ll find me skating or
exploring new ideas inspired by Zen philosophy.
You can download my CV as a signed and encrypted PDF for authenticity and
privacy. If you need the password to decrypt it,
please send me an e-mail.
Stay tuned for updates on my journey as a security engineer, skateboarder, and
lifelong learner.