Welcome
Welcome to my technical blog and knowledge base!
Topics
Latest posts
Get in Touch
Suggestions or feedback?
Contact me here or visit the project repository.
You can also subscribe via RSS.
This is a small series I wanted to start, in which I write about my small threat hunting setup and describe a little of what I built and what I am doing with it.
In this part, I will describe the network setup for my environment; how I built the honeypots and the ELK server will be covered in the follow-up articles on threat hunting.
Keep in mind this is for education and fun, no serious stuff going on here.
The threat landscape is constantly evolving, with new attack vectors, tools, and tactics appearing almost daily.
To keep my skills current with real-world threats, I built a home lab dedicated to threat hunting. This environment allows me to safely observe attacks and develop detection and defense methods. I deployed web and shell honeypots and collect real threat data in a controlled setting.
It's a practical, hands-on way to explore the behavior of adversaries, and it's a lot of fun!
For the hardware setup, I kept things lightweight and affordable by using Raspberry Pi devices and open-source tools. The honeypot is based on the well-known Cowrie SSH honeypot and the honeyhttpd HTTP honeypot. It runs on a Raspberry Pi 4 with 8GB of RAM, hosted inside a Docker 🐳 container. On the honeypot host, Filebeat is running to ingest the Cowrie logs into the ELK stack.
For the ELK stack, I used a Raspberry Pi 5 with 16GB of RAM, running Debian. The ELK services are also containerized using Docker. The stack is based on the DShield-SIEM project, which I customized to better fit my needs. I’ll dive deeper into those modifications and the ELK setup in a follow-up article.
The network topology is straightforward but deliberately segmented. The router is connected to a managed switch, which is responsible for handling VLAN separation. Both the honeypot and the ELK server are connected to this switch and are placed in an isolated VLAN (VLAN210). This VLAN is dedicated exclusively to threat hunting, ensuring that any potentially malicious traffic remains fully contained and cannot interfere with the rest of the home network.
My client system 💻 is the only machine allowed to connect from outside the VLAN to both the ELK server and the honeypot. This connection is strictly for maintenance and administrative purposes. The ELK server is allowed to access the internet, primarily to pull threat intelligence data from external sources and security feeds.
In contrast, the honeypot is completely blocked from internet access, with the exception of SSH and HTTP traffic going in and out of it. These are the only services deliberately exposed to simulate vulnerable endpoints. Communication between the honeypot and the ELK server is allowed for log ingestion and analysis. However, I intend to introduce stricter controls on this internal traffic in the future to further reduce the attack surface.
The pf(1) configuration was, as always with UNIX, fairly easy to get working:
match in quick log on egress proto tcp from any to any port 22 flags S/SA rdr-to $honeypot port 2222
match in quick log on egress proto tcp from any to any port 443 flags S/SA rdr-to $honeypot port 4433
These rules make sure any incoming TCP connection attempt to port 22 (SSH) or port 443 (HTTPS) is immediately intercepted, logged, and transparently redirected to the $honeypot server, which listens on port 2222 for SSH and port 4433 for HTTPS traffic.
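To give an idea of how the filter side of that policy could look on the router, here is a minimal pf.conf sketch. Everything except the honeypot address and the redirected ports is a placeholder (interface and macro names included), so treat it as an illustration rather than my exact ruleset:
# placeholder macros, adjust to the real interfaces and addresses
vlan_if  = "vlan210"              # router interface into the threat hunting VLAN
honeypot = "192.168.210.3"
elk      = "192.168.210.5"        # placeholder ELK address
client   = "192.168.1.10"         # placeholder admin workstation
# default deny for anything crossing the VLAN boundary
block log on $vlan_if all
# redirected SSH/HTTPS traffic from the internet may reach the honeypot services
pass in on egress proto tcp to $honeypot port { 2222, 4433 }
# the admin workstation may reach the honeypot and the ELK server for maintenance
pass out on $vlan_if proto tcp from $client to { $honeypot, $elk }
# the ELK server may reach the internet to pull threat intel feeds
pass in on $vlan_if from $elk to any
The match rules above take care of the redirection; the pass rule on egress then admits the already translated traffic, and pf's stateful filtering handles the replies.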
Here you can see my managed switch configuration. Port 5 (honeypot) is assigned only to VLAN210, as is the port for the ELK server; port 2 is the router, which needs to talk into both networks; and on port 1 is my workstation, used to access the threat hunting environment.
Building and maintaining this lightweight honeypot and monitoring setup on Raspberry Pi devices has been an insightful experience. Here are some key takeaways:
Resource Efficiency: Raspberry Pis provide a surprisingly capable platform for running complex services like Cowrie honeypot and the ELK stack in Docker containers, keeping costs and power consumption low.
Network Segmentation Matters: Isolating the honeypot and ELK server in a dedicated VLAN (VLAN210) effectively contains malicious traffic, protecting the rest of the home network from potential threats.
Controlled Access Is Crucial: Restricting external access to only authorized clients and limiting the honeypot’s internet connectivity reduces the attack surface while still enabling useful data collection.
Logging and Data Collection: Using Filebeat to ship logs from the honeypot to the ELK stack provides real-time visibility into attacker behavior, which is essential for threat hunting and incident response.
Customization Pays Off: Adapting existing tools and SIEM projects (like DShield) to specific needs improves effectiveness and allows for tailored threat detection.
Future Improvements: There is always room to tighten internal communication rules and harden the setup further to minimize risk and improve operational security.
This project highlights the balance between practical constraints and security needs, demonstrating that even modest hardware can contribute significantly to threat intelligence and network defense.
I drew inspiration for this setup from the DShield SIEM project by SANS and would like to express my gratitude for their valuable work.
Next I had to build the SSH honeypot and the HTTP honeypot, so stay tuned for the follow-up!
This post provides a brief walkthrough of how to deploy a lightweight, containerized SSH honeypot using Cowrie and Podman, with the goal of capturing and analyzing malicious activity as part of my threat hunting strategy.
Cowrie is an interactive SSH and Telnet honeypot designed to emulate a real system, capturing attacker behavior in a controlled environment. It allows defenders and researchers to observe malicious activity without exposing actual infrastructure.
Key capabilities of Cowrie include:
Full session logging: Records all commands entered by the attacker, along with input/output streams and timing data. Sessions can be saved as plaintext or in formats suitable for replay.
Fake file system and shell environment: Emulates a basic Linux shell with a user-modifiable file system. Attackers can navigate directories, read/write fake files, or attempt to download/upload payloads.
Command emulation: Supports a large set of common Unix commands (`ls`, `cat`, `wget`, etc.), allowing attackers to interact naturally, as if on a real system; the command set can also be extended.
Credential logging: Captures usernames and passwords used in brute-force login attempts or interactive logins.
File download capture: Logs and optionally stores any files attackers attempt to retrieve via `wget`, `curl`, or similar tools.
JSON-formatted logging and integrations: Outputs structured logs that are easy to parse and ingest into systems like ELK, Splunk, or custom analysis pipelines.
Cowrie is widely used in research, threat intelligence, and proactive defense efforts to gather Indicators of Compromise (IOCs) and understand attacker tactics, techniques, and procedures (TTPs).
Podman offers several advantages over Docker, particularly in terms of security and system integration. It supports rootless containers, allowing users to run containers without elevated privileges, which reduces the attack surface.
Podman is daemon-less, integrating more seamlessly with systemd and existing Linux workflows. Additionally, Podman is fully compatible with the Open Container Initiative (OCI) standards, ensuring interoperability and flexibility across container ecosystems.
Before proceeding with the Cowrie setup, I made sure the following preconditions were met:
I am using a Raspberry Pi 4+ running Ubuntu
After installation, I made sure the system was up to date:
sudo apt update && sudo apt upgrade -y
# Ubuntu 20.10 and newer
sudo apt-get -y install podman
Run the Hello World container. At this point I did not have the cowrie user set up yet, so I used my regular system user for the test:
podman run hello-world
Trying to pull docker.io/library/hello-world:latest...
...
Hello from Docker!
This message shows that your installation appears to be working correctly.
Sometimes the pull fails like that; in those cases I had to put `docker.io` in front of the container name:
podman run docker.io/hello-world
Then it works reliably.
In my network setup for threat hunting, the honeypot requires VLAN tagging to be reachable from the outside; VLAN210 is my restricted network. Therefore I needed to configure the VLAN using nmcli so it is persistent across reboots.
sudo nmcli con add type vlan con-name vlan210 dev mainif id 210 ip4 192.168.210.3/24 gw4 192.168.210.1
sudo nmcli con up vlan210
`con-name vlan210`: Name of the new VLAN connection.
`dev mainif`: Physical interface to tag.
`id 210`: VLAN ID.
`ip4`, `gw4`: Optional IP and gateway assignment.
This will persist the configuration and activate the VLAN interface immediately. After a quick check that the interface actually came up (see below), I moved on to installing the honeypot.
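The check itself is just standard NetworkManager and iproute2 tooling, nothing specific to this setup:
nmcli con show --active          # vlan210 should be listed as an active connection
ip -d link show                  # the new interface reports "vlan protocol 802.1Q id 210"
ping -c 1 192.168.210.1          # the gateway from the nmcli command above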
Running the Podman container under a dedicated system user with no login shell is a recommended security best practice. Reasons include:
Privilege Separation: Isolates the container from other system processes and users, limiting the potential impact of a compromise.
Reduced Attack Surface: The user has no login shell (e.g., /usr/sbin/nologin), meaning it can't be used to log into the system interactively.
Auditing & Logging: Helps distinguish container activity in system logs and process lists, making monitoring easier.
Least Privilege Principle: The user has only the permissions necessary to run the container — nothing more.
1. Create the ‘cowrie’ user (no home directory, no login shell)
sudo useradd --system --no-create-home --shell /usr/sbin/nologin cowrie
2. Create necessary directories and set ownership
sudo mkdir -p /opt/cowrie/etc
sudo mkdir -p /opt/cowrie/var
sudo chown -R cowrie:cowrie /opt/cowrie
3. As the cowrie user, pull the container image
sudo -u cowrie podman pull docker.io/cowrie/cowrie
4. Copy default config file into persistent volume
sudo -u cowrie podman run --rm cowrie/cowrie \
  cat /cowrie/cowrie-git/etc/cowrie.cfg.dist | sudo -u cowrie tee /opt/cowrie/etc/cowrie.cfg > /dev/null
The `cowrie.cfg` file is the main configuration for Cowrie, the SSH/Telnet honeypot we use. It uses INI-style syntax and is divided into sections. Each section begins with a header like [section_name].
📁 Key Sections & Settings
[ssh] / [telnet]
enabled = true
listen_port = 2222
[honeypot]
Set the honeypot hostname and log path properties:
hostname = cowrie-host
# Directory where to save log files in.
log_path = var/log/cowrie
Define login behavior:
auth_class = AuthRandom
auth_class_parameters = 1, 5, 10
I use AuthRandom here, which grants access after a random number of attempts within the range set by auth_class_parameters. This means the threat actor will fail some logins, while others are let in immediately.
[output_jsonlog]
[output_jsonlog]
enabled = true
logfile = ${honeypot:log_path}/cowrie.json
epoch_timestamp = false
This is the whole configuration needed to run the honeypot.
📌 Notes
Once I had created the dedicated system user (see the earlier section), I was able to run the Cowrie container with Podman using `sudo -u` and a secure UID mapping.
sudo -u cowrie podman run -d --name cowrie \
--uidmap 0:$(id -u cowrie):1 \
-v /opt/cowrie/etc:/cowrie/cowrie-git/etc:Z \
-v /opt/cowrie/var:/cowrie/cowrie-git/var:Z \
-p 2222:2222 \
cowrie/cowrie
`sudo -u cowrie`: Runs the Podman command as the unprivileged cowrie user.
`--uidmap 0:$(id -u cowrie):1`: Maps root (UID 0) inside the container to the cowrie UID on the host.
`-v /opt/cowrie/etc` and `-v /opt/cowrie/var`: Mount configuration and data volumes from the host with `:Z` to apply correct SELinux labels (optional on systems without SELinux).
`-p 2222:2222`: Forwards port 2222 from host to container (Cowrie's SSH honeypot port).
`cowrie/cowrie`: The container image name (use latest or a specific tag as needed).
Container runs as non-root on the host: Even if a process inside the container thinks it's root, it's actually limited to the unprivileged cowrie user outside the container.
Enhanced security: If the container is compromised, the attacker only gets access as the cowrie user, not real root.
Avoids root-equivalent risks: Prevents privilege escalation or access to sensitive host files and devices.
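If you want to convince yourself of that, a quick look from the host side does the trick; `podman top` can print the user inside the container next to the host user it maps to:
sudo -u cowrie podman top cowrie user huser   # container user vs. host user
ps -eo user,pid,comm | grep cowrie            # container processes run as cowrie, not root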
View logs. Knowing how to debug the container is important, so we start with the logs:
sudo -u cowrie podman logs -f cowrie
...snip...
[HoneyPotSSHTransport,14,10.0.2.100] Closing TTY Log: var/lib/cowrie/tty/e52d9c508c502347344d8c07ad91cbd6068afc75ff6292f062a09ca381c89e71 after 0.8 seconds
[cowrie.ssh.connection.CowrieSSHConnection#info] sending close 0
[cowrie.ssh.session.HoneyPotSSHSession#info] remote close
[HoneyPotSSHTransport,14,10.0.2.100] Got remote error, code 11 reason: b'disconnected by user'
[HoneyPotSSHTransport,14,10.0.2.100] avatar root logging out
[cowrie.ssh.transport.HoneyPotSSHTransport#info] connection lost
[HoneyPotSSHTransport,14,10.0.2.100] Connection lost after 2.8 seconds
...snip...
Restart container. If things go sideways, just restart that thing:
sudo -u cowrie podman restart cowrie
In the logs you can see that cowrie is running and accepting SSH connections:
...snip...
[-] CowrieSSHFactory starting on 2222
[cowrie.ssh.factory.CowrieSSHFactory#info] Starting factory <cowrie.ssh.factory.CowrieSSHFactory object at 0x7fb66f26d0>
[-] Ready to accept SSH connections
...snip...
When the log says “Ready to accept SSH connections” I tested whether I could log in:
ssh 192.168.210.3 -p 2222 -l root
root@192.168.210.3 password:
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
root@svr04:~# uname -a
Linux svr04 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u1 x86_64 GNU/Linux
root@svr04:~#
Stop container Nothing special here:
sudo -u cowrie podman stop cowrie
To keep your Cowrie container running reliably and restart it if it stops, use a systemd service with restart policies.
Create `/etc/systemd/system/cowrie-container.service` with the following content:
[Unit]
Description=Cowrie Honeypot Podman Container
After=network.target
[Service]
User=cowrie
Group=cowrie
Restart=on-failure
RestartSec=10s
# Run in the foreground (no -d) so systemd can supervise the container process.
# systemd does not do shell command substitution, so the command is wrapped in a
# shell and the literal $ is escaped as $$ for systemd.
ExecStart=/bin/sh -c '/usr/bin/podman run --name cowrie \
  --uidmap 0:$$(id -u cowrie):1 \
  -v /opt/cowrie/etc:/cowrie/cowrie-git/etc:Z \
  -v /opt/cowrie/var:/cowrie/cowrie-git/var:Z \
  -p 2222:2222 \
  cowrie/cowrie'
ExecStop=/usr/bin/podman stop -t 10 cowrie
ExecStopPost=/usr/bin/podman rm cowrie
ExecReload=/usr/bin/podman restart cowrie
TimeoutStartSec=120
[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable --now cowrie-container.service
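A quick status check shows whether the unit came up and the container is running:
sudo systemctl status cowrie-container.service
sudo -u cowrie podman ps --filter name=cowrie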
To detect if Cowrie stops accepting connections even if the container is still running, create a health check script that runs as the cowrie user:
Create `/usr/local/bin/check_cowrie.sh`:
#!/bin/bash
if ! nc -z localhost 2222; then
echo "Cowrie not responding, restarting container"
/usr/bin/podman restart cowrie
/usr/local/bin/pushover.sh "Cowrie was restarted!"
fi
This restarts the service and sends out a notification via pushover.
Make it executable:
sudo chmod +x /usr/local/bin/check_cowrie.sh
sudo chown cowrie:cowrie /usr/local/bin/check_cowrie.sh
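Before wiring it into systemd, I like to run the script once by hand; it should stay silent while the container is up:
sudo -u cowrie /usr/local/bin/check_cowrie.sh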
Create systemd service `/etc/systemd/system/check_cowrie.service`:
[Unit]
Description=Check Cowrie honeypot health
[Service]
User=cowrie
Group=cowrie
Type=oneshot
ExecStart=/usr/local/bin/check_cowrie.sh
Create systemd timer `/etc/systemd/system/check_cowrie.timer`:
[Unit]
Description=Run Cowrie health check every minute
[Timer]
OnBootSec=1min
OnUnitActiveSec=1min
Unit=check_cowrie.service
[Install]
WantedBy=timers.target
Enable and start the timer:
sudo systemctl daemon-reload
sudo systemctl enable --now check_cowrie.timer
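To confirm the timer is actually scheduled:
systemctl list-timers check_cowrie.timer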
The `cowrie` user has no login shell (`/usr/sbin/nologin`)
Running Cowrie isolated via Podman increases containment
All files are owned by `cowrie`, no root access required for normal operation
1. Add Elastic’s GPG key and repository
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg
echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | \
sudo tee /etc/apt/sources.list.d/elastic-8.x.list
2. Update APT and install Filebeat
sudo apt update
sudo apt install filebeat
3. Edit Filebeat config
sudo mg /etc/filebeat/filebeat.yml
The Filebeat config is straightforward. You write a filebeat.inputs block that contains the path of the log files you need to ingest, and at the end the log destination (Logstash) so that Filebeat knows where to send the logs:
filebeat.inputs:
- type: log
enabled: true
paths:
- /opt/cowrie/var/log/cowrie/cowrie.json
json.keys_under_root: true
json.add_error_key: true
fields:
source: cowrie
fields_under_root: true
output.logstash:
hosts: ["192.168.123.5:5044"]
4. (Optional) Test Filebeat config
sudo filebeat test config
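The connection to Logstash can be checked the same way; this only verifies that the output is reachable, not that events flow end to end:
sudo filebeat test output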
5. Enable and start Filebeat
sudo systemctl enable filebeat
sudo systemctl daemon-reload
sudo systemctl start filebeat
6. Check Filebeat status and logs
sudo systemctl status filebeat
sudo journalctl -u filebeat -f
1. We deployed Cowrie like pros.
2. Logs? Sorted.
3. Everything’s persistent.
4. Setup is clean and modular.
5. It’s nerdy, useful, and kinda fun.
Next I had to build the HTTP honeypot, so stay tuned for the follow-up!
As someone who is passionate about security and has an interest in Unix operating systems, OpenBSD particularly captivates due to its dedication to security, stability, and simplicity. In comparison to other OSes, what sets OpenBSD apart? And how do these principles align with my journey through Zen meditation?
At first glance, OpenBSD and Zen may appear to be vastly disparate concepts - one being a potent operating system, while the other is a spiritual practice originating from ancient China. However, as I delved deeper into both realms, I uncovered some fascinating similarities.
In Zen, simplicity is key to achieving inner clarity and balance. By stripping away unnecessary complexity, OpenBSD aims to create a stable and secure foundation for users. Similarly, in meditation, simplicity helps to quiet the mind and focus on the present moment. This alignment between OpenBSD’s philosophy and Zen practices extends to their shared emphasis on mindfulness and deliberate decision-making, fostering an environment of security and tranquility in both realms.
Both OpenBSD and Zen underscore the significance of attending to detail. In software development, this entails meticulously crafting each line of code to guarantee stability and security. In Zen practice, it involves paying close attention to one’s breath, posture, and mental state to attain a state of mindfulness. By zeroing in on these details, both OpenBSD and Zen strive for perfection.
OpenBSD’s dedication to consistency is manifested in its codebase, where each code change undergoes a thorough code review process. Consistency holds equal importance in Zen practice, as it fosters a sense of routine and stability. By cultivating a consistent daily meditation practice, I have discovered that consistency is instrumental in making progress on my spiritual journey. OpenBSD’s emphasis on consistency mirrors the principles of Zen, emphasizing the value of diligence and discipline in both domains.
Finally, both OpenBSD and Zen acknowledge the elegance in imperfection. In software development, imperfections can often be rectified or lessened through meticulous design and testing. In Zen practice, imperfections are perceived as avenues for growth and self-awareness.
By acknowledging our imperfections, we can nurture humility and compassion. As I progress in my journey with OpenBSD and Zen, I am consistently struck by the ways in which these two seemingly unrelated realms intersect. By embracing simplicity, attention to detail, consistency, and the beauty of imperfection, both OpenBSD and Zen provide unique perspectives on the nature of software development and personal growth. Stay tuned for further insights from my exploration in the realm of security!
Hi, I’m Dirk — a security engineer with a deep passion for skateboarding and digital forensics. I help my company protect networks and systems against evolving cybersecurity threats through a mix of technical expertise and continuous learning.
Skateboarding is more than a hobby to me; it’s a source of creativity, freedom, and community. It shapes how I approach challenges — with persistence, balance, and a mindset open to innovation.
Beyond that, I’m an OpenBSD enthusiast. I’ve built an OpenBSD-based router and threat-hunting infrastructure to stay ahead in cybersecurity. I appreciate OpenBSD for its simplicity, security, and elegance — qualities I strive to bring to my work.
I’m also a longtime Emacs user, relying on it daily for coding, writing, and organizing my thoughts. It’s part of how I stay productive and focused.
In cybersecurity, I’m committed to continuous growth and adapting to new challenges. When I’m not working on security projects, you’ll find me skating or exploring new ideas inspired by Zen philosophy.
You can download my CV as a signed and encrypted PDF for authenticity and privacy. If you need the password to decrypt it, please send me an email.
Stay tuned for updates on my journey as a security engineer, skateboarder, and lifelong learner.
0x0C2920C559CAD6CB
40CA 727E 96D3 CC2D 8CBB 1540 0C29 20C5 59CA D6CB
c7359e0e8bd69ed7cee3ea97453c10e327bfe2416822f54c6390efe72b0d6e7a
I used to believe that the place I worked for meant something. That our mission was shared. That our values were real. That if you showed up with honesty, effort, and a willingness to carry more than your share — you’d be met with respect. Or at least, fairness.
I was wrong.
What hurts isn’t the exit itself. What hurts is realizing that the foundation I stood on was never really there at all. That the culture I believed in — the one I helped build — was, in the end, just an image.
That when things got hard, the masks stayed on, and the people I trusted turned away.
For a while, I wanted to fight. Not because I love conflict, but because the silence felt like betrayal.
I wanted to prove something. To show them they were wrong about me. To remind them that I was worth more than the way they let me go.
But I’ve chosen a different path.
I won’t drag names through the mud. I won’t post receipts or pass around whispers. Not because they deserve protection — but because I deserve peace.
What I build next will be louder than anything I could say.
I have ideas. Good ones. Open source, threat hunting on a budget, monitoring stacks that actually work, stories about real resilience in the face of bullshit.
I know what I’ve built. I know what I can offer. And I’ll keep showing up — not for them, but for the part of me that never wanted to be anything but real.
To those who watched in silence as I burned — I hope the quiet served you well.
To those who ever believed in me — I’m still here.
And to myself — This is where it begins again. With truth, and with clarity.
“The fire that consumed my old world is now the light that guides me forward.”