Introduction to GrapheneOS

# Introduction

This blog post is an introduction to GrapheneOS, a security-oriented smartphone operating system.

=> https://grapheneos.org/ GrapheneOS official project web page

Thanks to my patrons' support, last week I was able to replace my 6.5-year-old BQ Aquaris X, which had been successfully running LineageOS all that time, with a Google Pixel 8a now running GrapheneOS.

Introducing GrapheneOS is a daunting task.  I will do my best to present the basic information you need to decide whether it might be useful for you, and leave a link to the project FAQ, which contains a lot of valuable technical explanations I do not want to repeat here.

=> https://grapheneos.org/faq GrapheneOS FAQ

# What is GrapheneOS?

GrapheneOS (written GOS from now on) is an Android-based operating system focused on security.  It is only compatible with Google Pixel devices for multiple reasons: availability of hardware security components, long-term support (the 8 and 9 series are supported for at least 7 years after release) and a good hardware quality / price ratio.

The goal of GOS is to give users a lot more control over what their smartphone is doing.  A main profile is used by default (the owner profile), but users are encouraged to do all their activities in one or more separate profiles.  This may remind you of the Qubes OS workflow, although it does not translate entirely here.  Profiles cannot communicate with each other, encryption is done per profile, and some permissions can be assigned per profile (installing apps, running applications in the background when a profile is not in use, using the SIM...).  This is really effective for privacy or security reasons (or both): you can have a different VPN per profile if you want, or use a different Google Play login, a different set of applications, whatever!  The best feature here, in my opinion, is the ability to completely stop a profile, so you are sure it does not run anything in the background once you exit it.

When you create a new profile, it is important to understand that it is like booting your phone again: on first login with that profile, you will be asked the same questions as when the system started for the first time.  All settings have their default values, and any change is limited to the profile only; this includes ringtones, sound, default apps, themes…  Switching between profiles is a bit painful: you need to open the top-to-bottom dropdown menu at full size, tap the bottom right corner icon, choose the profile you want to switch to, and type that profile's PIN.  Only the owner profile can toggle important settings like the 4G/5G network, perform SIM operations and change other "lower level" settings.

GOS has a focus on privacy, but leaves the user in charge.  Google Play and Google Play Services can be installed in one click from a dedicated GOS app store, which is limited to GOS apps only, as you are supposed to install apps from Google Play, F-Droid or Accrescent.  Applications can be installed in a single profile, but can also be installed in the owner profile, which lets you copy them to other profiles.  This is actually what I do: I install all apps in the owner profile, always unchecking the "network permission" so they just cannot do anything, and then I copy them to the profiles where I will use them for real.  There is no good or bad approach; choose whatever fits your needs in terms of usability, privacy and security.

Just to make sure it is clear: it is possible to use GOS totally Google-free, but if you want to use Google services, doing so is made super easy.  Google Play can also be confined to a dedicated profile if you only need it occasionally.

# Installation and updates

The installation was really simple, as it can be done from the web (from a Linux, Windows or macOS system) by just clicking buttons in the correct order on the installation page.  The image integrity check can be done AFTER installation, thanks to the TPM features in the phone which guarantee that only valid software can boot: this allows you to generate a proof of boot that is basically a post-install checksum (more explanations on the GOS website).  The whole process took approximately 15 minutes between plugging the phone into my computer and using the phone.

It is possible to install from the command line, but I did not test it.

Updates are 100% over-the-air (OTA), which means the system is able to download updates over the network.  This is rather practical, as you never need to run any adb command to push a new image, which has always been a stressful experience for me when using smartphones.  GOS automatically downloads base system updates and offers to reboot to install them, while GOS apps are simply downloaded and updated in place.  This is a huge difference from LineageOS, which always required manually downloading new builds, with application updates being part of the big image update.

# Permission management

A cool thing with GOS is the tight control offered over applications.  First, this is done per profile, so if you use the same app in two profiles, you can give it different permissions; secondly, GOS allows you to define a scope for some permissions.  For example, if an application requires storage permission, you can list which paths are allowed; if it requires contacts access, you can give it a list of contact entries (or an empty one).

The GOS Google Play installation (not installed by default) is sandboxed to restrict what it can do; they also succeeded at sandboxing Android Auto (more details in the FAQ).  I have a dedicated Android Auto profile; the setup was easy thanks to the FAQ, as a lot of permissions must be manually granted for it to work.

GOS does not allow you to become root on your phone though, it just gives you more control through permissions and profiles.

# Performance

I did not try CPU/GPU intensive tasks for now, but there should be almost no visible performance penalty when using GOS.  There are many extra security features enabled which may lead to a few percent of extra CPU usage, but there are no benchmarks, and the few reviews from people who played demanding video games on their phone did not notice any performance change.

# Security

The GOS website has a long and well detailed list of hardening done over the stock Android code; you can read about it at the following link.

=> https://grapheneos.org/features#exploit-protection GrapheneOS website: Exploitation Protection

# My workflow

As an example, here is how I configured my device, this is not the only way to proceed, so I just share it to give the readers an idea of what it looks like for me:

* my owner profile has Google Play installed and is used to install most apps.  All apps are installed there with no network permission, then I copy them to the profiles that will use them.
* a profile that looks like what I was doing on my previous phone: phone/SMS allowed, web browser, IM apps, TOTP app.
* a profile for multimedia where I store music files, run audio players and use Android Auto.  This profile is not allowed to run in the background.
* a profile for games (local and cloud).  This profile is not allowed to run in the background.
* an "other" profile used to run crappy apps.  This profile is not allowed to run in the background.
* a profile for each of my clients, so I can store any authentication app (TOTP, Microsoft Authenticator, whatever) and use any required app.  These profiles are not allowed to run in the background.
* a guest profile that can be used if I need to lend my phone to someone, for example to look something up on the Internet.  This profile always starts freshly reset.

After a long week of use, I came up with this layout.  At first, I had a separate profile for TOTP, but having to switch back and forth to it a dozen times a day created too much friction.

# The device itself

I chose to buy a Google Pixel 8a 128 GB: it was the cheapest of the 8 and 9 series, which get 7 years of support, and it also received a huge CPU upgrade compared to the 7 series.  The device can be found at 300€ on the second-hand market and 400€ brand new.

The 120 Hz OLED screen is a blast!  Colors are good, black is truly black (hence dark themes on OLED reduce battery usage and look really great) and it is super smooth.

There is no SD card support, which is pretty sad, especially since almost every Android smartphone supports it; I guess they just want you to pay more for storage.  I am fine with 128 GB though, as I do not store much data on my smartphone, but being able to extend it would have been nice.

The camera is OK.  I am not using it a lot and I have no point of comparison; the reviews I have read say it is just average.

Wi-Fi 6 works really well (latency, packet loss, range and bandwidth), although I have no way to verify its maximum bandwidth because it is faster than my gigabit wired network.

The battery lasts long.  I use my smartphone a bit more now, and the battery drops by approximately 20% for a day of usage.  I did not test charge speed.

# Conclusion

I am really happy with GrapheneOS: I finally feel in control of my smartphone, which I never considered a safe device before.  I never really used a manufacturer's Android ROM or iOS; I bet they can provide a better user experience, but they cannot provide anything like GrapheneOS.

LineageOS was actually OK on my former BQ Aquaris X, but there were often regressions, and it did not provide anything special in terms of features, except that it kept my old phone updated.  GrapheneOS, on the other hand, provides a whole new experience that may be what you are looking for.

This system is not for everyone!  If you are happy with your current Android, do not bother buying a Google Pixel to try GOS.

# Going further

The stock Android version supports profiles (this can be enabled in system -> users -> allow multiple users), but there is no way to restrict what profiles can do; it seems they are all administrators.  I have been using this on our Android tablet at home, and it is available on every Android phone as well.  I am not sure it can be used as a security feature as-is.

Systemd journald cheatsheet

# Introduction

This blog post is part of a series about the Systemd ecosystem; today's focus is on journaling.

Systemd has had a regrettable reputation since its arrival in 2010.  I think this is due to Systemd being radically different from traditional tooling, and people got lost, having had no warning beforehand that they would have to deal with it.  The transition was maybe rushed a bit with a half-baked product, in addition to the fact that users had to learn new paradigms and tooling to operate their computers.

Nowadays, Systemd is working well, and there are serious non-Systemd alternatives, so everyone should be happy. :)

# Introduction to journald

Journald is the logging system that was created as part of Systemd.  It handles logs created by all Systemd units.  A huge difference compared to traditional logs is that there is a single journal file acting as a database to store all the data.  If you want to read logs, you need to use the `journalctl` command to extract data from the database, as it is not plain text.

Most of the time journald logs data from units by reading their standard error and output, but it is possible to send data to journald directly.

On the command line, you can use `systemd-cat` to run a program or pipe data to it to send them to logs.

=> https://www.man7.org/linux/man-pages/man1/systemd-cat.1.html systemd-cat man page
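
As a sketch of how this can be used (the tag, the priority and the script path are arbitrary examples, and a running journald is required):

```
# pipe data to the journal, tagged "backup", at priority "err"
echo "backup failed" | systemd-cat -t backup -p err

# or wrap a command so its stdout/stderr go to the journal
systemd-cat -t backup /usr/local/bin/backup.sh
```

The entries then show up under the given tag with `journalctl -t backup`.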

# Journalctl 101

Here is a list of the most common cases you will encounter:

* View new logs live: `journalctl -f`
* View last 2000 lines of logs: `journalctl -n 2000`
* Restrict logs to a given unit: `journalctl -u nginx.service`
* Pattern matching: `journalctl -g somepattern`
* Filter by date (since): `journalctl --since="10 minutes ago"` or `journalctl --since="1 hour ago"` or `journalctl --since=2024-12-01`
* Filter by date (range): `journalctl --since="today" --until="1 hour ago"` or `journalctl --since="2024-12-01 12:30:00" --until="2024-12-01 16:00:00"`
* Filter logs since boot: `journalctl -b`
* Filter logs to previous (n-1) boot: `journalctl -b -1`
* Switch date time output to UTC: `journalctl --utc`

You can use multiple parameters at the same time:

* Last 200 lines of logs of nginx since current boot: `journalctl -n 200 -u nginx -b`
* Live display of nginx log lines matching "wp-content": `journalctl -f -g wp-content -u nginx`

=> https://www.man7.org/linux/man-pages/man1/journalctl.1.html journalctl man page

# Send logs to syslog

If you want to bypass journald and send all messages to syslog to handle your logs with it, you can edit the file `/etc/systemd/journald.conf` to add the line `ForwardToSyslog=Yes`.

This will make journald relay all incoming messages to syslog, so you can process your logs as you want.
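
The change boils down to this fragment of `/etc/systemd/journald.conf` (the `[Journal]` section header is already present in the stock file):

```
[Journal]
ForwardToSyslog=yes
```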

Restart the journald service: `systemctl restart systemd-journald.service`

=> https://www.man7.org/linux/man-pages/man8/systemd-journald.service.8.html systemd-journald man page
=> https://www.man7.org/linux/man-pages/man5/journald.conf.5.html journald.conf man page

# Journald entries metadata

Journald stores a lot more information than just the log line (raw content).  Traditional syslog files contain the date and time, maybe the hostname, and the log message.

This is just for information: only system administrators will ever need to dig through this, but it is important to know it exists in case you need it.

## Example

Here is what journald stores for each line (pretty printed from json output), using samba server as an example.

```
# journalctl -u smbd -o json -n 1 | jq
{
  "_EXE": "/usr/libexec/samba/rpcd_winreg",
  "_CMDLINE": "/usr/libexec/samba/rpcd_winreg --configfile=/etc/samba/smb.conf --worker-group=4 --worker-index=5 --debuglevel=0",
  "_RUNTIME_SCOPE": "system",
  "__MONOTONIC_TIMESTAMP": "749298223244",
  "_SYSTEMD_SLICE": "system.slice",
  "MESSAGE": "  Copyright Andrew Tridgell and the Samba Team 1992-2023",
  "_MACHINE_ID": "f23c6ba22f8e02aaa8a9722df464cae3",
  "_SYSTEMD_INVOCATION_ID": "86f0f618c0b7dedee832aef6b28156e7",
  "_BOOT_ID": "42d47e1b9a109551eaf1bc82bd242aef",
  "_GID": "0",
  "PRIORITY": "5",
  "SYSLOG_IDENTIFIER": "rpcd_winreg",
  "SYSLOG_TIMESTAMP": "Dec 19 11:00:03 ",
  "SYSLOG_RAW": "<29>Dec 19 11:00:03 rpcd_winreg[4142801]:   Copyright Andrew Tridgell and the Samba Team 1992-2023\n",
  "_CAP_EFFECTIVE": "1ffffffffff",
  "_SYSTEMD_UNIT": "smbd.service",
  "_PID": "4142801",
  "_HOSTNAME": "pelleteuse",
  "_SYSTEMD_CGROUP": "/system.slice/smbd.service",
  "_UID": "0",
  "SYSLOG_PID": "4142801",
  "_TRANSPORT": "syslog",
  "__REALTIME_TIMESTAMP": "1734606003126791",
  "__CURSOR": "s=1ab47d484c31144909c90b4b97f3061d;i=bcdb43;b=42d47e1b9a109551eaf1bc82bd242aef;m=ae75a7888c;t=6299d6ea44207;x=8d7340882cc85cab",
  "_SOURCE_REALTIME_TIMESTAMP": "1734606003126496",
  "SYSLOG_FACILITY": "3",
  "__SEQNUM": "12376899",
  "_COMM": "rpcd_winreg",
  "__SEQNUM_ID": "1ab47d484c31144909c90b4b97f3061d",
  "_SELINUX_CONTEXT": "unconfined\n"
}
```

The "real" log line is the value of `SYSLOG_RAW`; everything else is added by journald when it receives the entry.

## Filter

As the logs can be extracted in JSON format, it becomes easy to parse them properly using any programming language able to deserialize JSON data.  This is far more robust than piping lines to AWK / grep, which works "most of the time" (until it does not, due to some weird input).

On the command line, you can query/filter such logs using `jq`, which is a bit like the awk of JSON.  For instance, if I output all of today's logs and filter the lines generated by the binary `/usr/sbin/sshd`, I can use this:

```
journalctl --since="today" -o json | jq -s '.[] | select(._EXE == "/usr/sbin/sshd")'
```

This command line will report each log entry whose "_EXE" field is exactly "/usr/sbin/sshd", along with all the metadata.  This kind of data can be useful when you need to filter tightly for a problem or a security incident.

The example above was made easy, and it is a bit silly in this form: filtering on the SSH server can simply be done with `journalctl -u sshd.service --since=today`.

# Conclusion

Journald is a powerful logging system, and journalctl provides a single entry point to extract and filter logs in a unified way.

With journald, it became easy to read logs of multiple services over a time range, and log rotation is now a problem of the past for me.

Presentation of Pi-hole

# Introduction

This blog post is about the project Pi-hole, a libre software suite to monitor and filter DNS requests over a local network.

=> https://pi-hole.net/ Pi-hole official project page

Pi-hole is Linux based: it is a collection of components and configuration that can be installed on Linux, or used as a Raspberry Pi image ready to write to flash memory.

=> static/img/pihole-startrek.png The top of Pi-hole dashboard display, star trek skin

# Features

Most of Pi-hole's configuration happens in a clear web interface (which is available with a Star Trek skin, by the way), but there is also a command line utility and a telnet API if you need to automate some tasks.

## Filtering

The most basic feature of Pi-hole is filtering DNS requests.  While it comes with a default block list from the Internet, you can add custom lists using their URLs; the import supports multiple formats as long as you tell Pi-hole which format to use for each source.

Filtering is done for all queries by default, although you can create groups that will not be filtered and assign LAN hosts to them; in some situations there are hosts you may not want to filter.

Resolving can be done using big upstream DNS servers (Cloudflare, Google, OpenDNS, Quad9...), but also custom servers.  It is possible to use a recursive resolver by installing unbound locally.

=> https://docs.pi-hole.net/guides/dns/unbound/ Pi-hole documentation: how to install and configure unbound

## Dashboard

A nice dashboard allows you to see all queries with the following information:

* date
* client IP / host
* domain in the query
* result (allowed, blocked)

It can be useful to understand what is happening if a website is not working, but also to see how many queries are blocked.

It is possible to choose the privacy level of logging: you may only want statistics about the number of queries allowed / blocked, without knowing who asked what (monitoring this on your LAN may even be illegal).

=> https://docs.pi-hole.net/ftldns/privacylevels/ Documentation about privacy levels

## Audit log

In addition to lists, the audit log will display two columns with the 10 most allowed / blocked domains appearing in queries that have not yet been curated through the audit log.

Each line in the "allowed" column has a "Blacklist" and an "Audit" button.  The former adds the domain to the internal blacklist, while the latter just acknowledges the domain and removes it from the audit log.  If you click on audit, it means "I agree with this domain being allowed".

The column with blocked queries shows "Whitelist" and "Audit" buttons that can be used to definitely allow a domain or just acknowledge that it is blocked.

Once you have added a domain to a list or clicked on audit, it is removed from the displayed list, and you can continue to manually review the new top 10 domains.

## Disable blocking

There is a feature to temporarily disable blocking for 10 seconds, 30 seconds, 5 minutes, indefinitely or a custom duration.  This can be useful if an important website misbehaves and you want to be sure the DNS filtering is not involved.

## Local hostnames

It is possible to add custom hostnames that resolve to whatever IP you want, which makes it easy to give nice names to the machines on your LAN.  There is nothing really fancy, but the web UI makes this task easy to handle.
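
Under the hood (at least in Pi-hole v5; the path may differ in other versions), these records end up as plain `IP hostname` pairs in a file, so they are also easy to manage by hand.  The addresses and names below are made-up examples:

```
# /etc/pihole/custom.list
192.168.1.10 nas.lan
192.168.1.20 printer.lan
```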

## Extra features

Pi-hole can provide a DHCP server to your LAN, has self-diagnosis, and offers easy configuration backup / restore.  There may be more features I did not see or never used.

# Conclusion

While Pi-hole requires more work than configuring unbound on your LAN and feeding it a block list, it provides a lot more features, flexibility and insight into your DNS than unbound.

Pi-hole works perfectly fine on low end hardware, it uses very little resources despite all its features.

# Going further

I am currently running Pi-hole as a container with podman, from an unprivileged user.  This setup is out of scope here, but I may write about it later (or if people ask for it), as it required some quirks due to replying to UDP packets through the local NAT, and the use of port 53 (which is usually restricted to root).

Getting started to write firewall rules

# Introduction

This blog post is about designing firewall rules, not focusing on a specific operating system.

The idea came after I made a mistake on my test network, where I exposed LAN services to the Internet after setting up a VPN with a static IPv4 on it, due to overly simplistic firewall rules.  While discussing this topic on Mastodon, some people mentioned they never know where to start when writing firewall rules.

# Firewall rules ordering

Firewall rules are evaluated one by one, and the evaluation order matters.

Some firewalls use a "first match" model, where the first rule matching a packet is the one applied.  Other firewalls are of the "last match" type, where the last matching rule is the one applied.
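
As a hypothetical illustration of both models: OpenBSD Packet Filter is last match (unless a rule uses `quick`), while nftables is first match, so the same intent is written in opposite orders:

```
# pf (last match wins): the pass rule overrides the block for SSH
block all
pass in proto tcp to port 22

# nftables (first match wins): accept SSH before the final drop
tcp dport 22 accept
drop
```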

# Block everything

The first step when writing firewall rules is to block all incoming and outgoing traffic.

There is no other way to correctly configure a firewall: if you plan to block only the services you want to restrict and let a default allow rule do its job, you are doing it wrong.
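
As a sketch with nftables, blocking everything by default while keeping loopback and already-established connections working could look like this:

```
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;
    iif "lo" accept
    ct state established,related accept
  }
  chain output {
    type filter hook output priority 0; policy drop;
    oif "lo" accept
    ct state established,related accept
  }
}
```

From there, each identified flow gets its own explicit rule.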

# Identify flows to open

As all flows should be blocked by default, you have to list what should go through the firewall, inbound and outbound.

In most cases, you will want to allow all outbound traffic, except if you have a specific environment in which you want to allow outgoing traffic only to certain IPs / ports.

For inbound traffic, if you do not host any services, there is nothing to open.  Otherwise, make a list of the TCP, UDP, or other ports that should be reachable, and of who should be allowed to reach them.

# Write the rules

When writing your rules, whether they are inbound or outbound, be explicit whenever possible about this:

* restrict to a network interface
* restrict the source addresses (maybe a peer, a LAN, or anyone?)
* restrict to required ports only

In some situations you may also want to filter by source and destination port at the same time.  This is usually useful when you have two servers communicating over a protocol enforcing both ports.
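
For example, an nftables rule allowing SSH only from the LAN and only on a given interface (the interface name and network below are made-up) ticks all three boxes:

```
iifname "lan0" ip saddr 192.168.1.0/24 tcp dport 22 accept
```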

This is actually where I failed and exposed my LAN Minecraft server to the wild.  After setting up a VPN with a static IPv4 address, I only had an "allow tcp/25565" rule on my firewall, as I was relying on my ISP router not to forward traffic.  This rule was too permissive once traffic arrived from the VPN, whereas it would have filtered correctly had it been restricted to a given network interface or source network.

If you want to restrict access to a critical service to some users (one or more) who do not have a static IP address, you should consider putting this service behind a VPN and restricting access to the VPN interface only.

# Write comments and keep track of changes

Firewall rules will evolve over time, and you may want to write down, for your future self, why you added this or that rule.  Ideally, keep the firewall rules file in a version control system, so you can easily revert changes or dig through the history to understand a change.

# Do not lock yourself out

When applying firewall rules for the first time, you may have made a mistake; if this is on remote equipment with no (or complicated) physical access, it is important to prepare an escape hatch.

There are different methods; the simplest is to run, in a second terminal, a command that sleeps for 30 seconds and then resets the firewall to a known state, started just before loading the new rules.  If you are locked out after applying them, just wait 30 seconds: the firewall reverts, and you can fix the rules.
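
A minimal sketch of that escape hatch in shell (the known-good ruleset path is a made-up example, and the delay is shortened to 5 seconds here to keep the sketch readable):

```shell
# schedule a revert to known-good rules in the background
( sleep 5 && nft -f /etc/nftables.known-good ) &
guard=$!

# ... load the new rules here ...

# still connected? cancel the scheduled revert
kill "$guard" 2>/dev/null
```

If the new rules lock you out, you cannot run the `kill`, so the background job fires and restores the previous ruleset.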

# Add statistics and logging

If you want to monitor your firewall, consider adding counters to rules: they will tell you how many times a rule was evaluated/matched and how many packets and how much traffic went through it.  nftables on Linux names these "counters", whereas OpenBSD Packet Filter calls them "labels".

It is also possible to log packets matching a rule.  This can be useful to debug an issue on the firewall, or to receive alerts in your logs when a rule is triggered.
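
With nftables, both can be combined on a single rule; the log prefix is an arbitrary example:

```
tcp dport 22 counter log prefix "ssh-in: " accept
```

The counters are then visible in `nft list ruleset`, and the logged packets end up in the kernel log.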

# Conclusion

Writing firewall rules is not a hard task once you have identified all flows.

While companies have to maintain flow tables, I do not think they are useful for a personal network (your mileage may vary).

Why I stopped using OpenBSD

# Introduction

Last month, I decided to leave the OpenBSD team, as I have not been using OpenBSD myself for a while.  A lot of people asked me why I stopped using OpenBSD, given that I had been advocating it for a while.  Let me share my thoughts.

First, I like OpenBSD, it has values, and it is important that it exists.  It just does not fit all needs, it does not fit mine anymore.

# Issues

Here is a short list of problems that, while bearable taken individually, summed up to the point where I had to move away from OpenBSD.

## Hardware compatibility

* no Bluetooth support
* limited game pad support (not supported by all programs, and not all game pads will work)
* battery life / heat / power usage (OpenBSD draws more power than alternatives, by a good margin)

## Software compatibility

To stay relevant on the DevOps market, I need to experiment and learn with a lot of stuff, including OCI containers, but also machine learning and some weird technologies.  Running virtual machines on OpenBSD is really limited; running them headless with one core and poor performance is not a good incentive to stay sharp.

As part of my consultancy work, I occasionally need to run proprietary crap, this is not an issue when running it in a VM, but I can not do that on OpenBSD without a huge headache and very bad performance.

## Reliability

I have grievances against the OpenBSD file system.  Every time OpenBSD crashes, and that happens very often for me when using it as a desktop, it ends with corrupted or lost files.  This is just not something I can accept.

Of course, it may be some hardware compatibility issue.  I never had issues on an old ThinkPad T400, but I got various lock-ups, freezes or kernel panics on the following machines:

* ThinkPad X395
* ThinkPad T470
* ThinkPad T480
* Ryzen 5600X + AMD GPU (desktop)

Would you like to keep using an operating system that eats your data daily?  I don't.  Maybe I am doing something weird, I don't know; I have never been able to pinpoint why I get so many crashes, although everyone else seems to have a stable experience with OpenBSD.

# Moving to Linux

I moved from OpenBSD to Qubes OS for almost everything (except playing video games), on which I run Fedora virtual machines (approximately 20 VMs simultaneously on average).  This provides me better security than OpenBSD could, as I am able to separate every context into a different space.  This is absolutely hardcore for most users, but I just can't go back to a traditional system after this.

=> https://dataswamp.org/~solene/2023-06-17-qubes-os-why.html Earlier blog post: Why one would use Qubes OS?

In addition, I have learned about the following Linux features and became really happy with them:

* namespaces: being able to reduce the scope of a process is incredibly powerful.  This has existed in Linux for a very long time and is also the foundation for running containers; it is way better than chroots.
* cgroups: this is the kernel subsystem responsible for resource accounting.  With it, it is possible to get accurate and reliable monitoring: you can know how much network, I/O, CPU or memory a given process has used.  From an operator's point of view, it is really valuable to know exactly what is consuming resources when looking at the metrics.  Where on OpenBSD you can notice a CPU spike at some timestamp, on Linux you can know which user used the CPU.
* systemd: journald, timers and scripting possibilities.  I need to write a blog post about this, systemd is clearly disruptive, but it provides many good features.  I understand it can make some people angry as they have to learn how to use it.  The man pages are good though.
* swap compression: this feature allows me to push my hardware to its limit, with lz4 compression algorithm, it is easy to get access to **extremely** fast swap paid with some memory.  The compression ratio is usually 3:1 or 4:1 which is pretty good.
* modern storage backends: between LVM, btrfs and ZFS, there are super nice things to achieve depending on the hardware, for maximum performance, reliability and scalability.  I love transparent compression, as I can just store more data on my hardware (when it is compressible, of course).
* flatpak: I really like software distribution done with flatpak.  Packages all run in their own namespace, they cannot access the whole file system, you can roll back to a previous version, and do other interesting stuff.
* auditd: this is a must-have for secure environments.  It allows logging all accesses matching some rules (like when an arbitrary file was accessed, when a given file is modified, etc.).  This does not even exist in OpenBSD (maybe you could do something by running ktrace on pid 1?).  This kind of feature is a basic requirement for many qualified secure environments.
* SELinux: although many people disable it immediately after the first time it gets on their way (without digging further), this is a very powerful security mechanism that mitigates entire classes of vulnerabilities.
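
As an example of the swap compression mentioned above, with the zram-generator tool a compressed swap device is a small configuration file away (a sketch, assuming zram-generator is installed; the size is an arbitrary choice):

```
# /etc/systemd/zram-generator.conf
[zram0]
zram-size = ram / 2
compression-algorithm = lz4
```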

When using a desktop for gaming, I found Fedora Silverblue to be a very solid system with reliable upgrades, good quality and a lot of software choice.

# Conclusion

I ran into too many issues with OpenBSD.  I wanted to come back to it twice this year, but I just lost 2 days of my life to all the crashes eating data.  And when it was working fine, I was really frustrated by the performance and by not being able to achieve the work I needed to do.

But as I said, I am glad there are happy OpenBSD users who enjoy it and have a reliable system with it.  From the various talks I had with users, the most common (by far) positive point that makes OpenBSD good is that users can understand what is going on.  This is certainly a quality that can only be found in OpenBSD (maybe NetBSD too?).

I will continue to advocate OpenBSD for situations where I think it is relevant, and I will continue to verify OpenBSD compatibility when contributing to open source software (the latest to date being Peergos).  This is something that matters a lot to me, in case I go back to OpenBSD :-)

Self-hosted web browser bookmarks syncing

# Introduction

This blog post is about Floccus, a self-hostable web browser bookmarks and tabs syncing software.

What is cool with Floccus is that it works on all major web browsers (Chromium, Google Chrome, Mozilla Firefox, Opera, Brave, Vivaldi and Microsoft Edge), allowing you to share bookmarks/tabs without depending on a browser's integrated feature; it also supports multiple backends and allows the sync file to be encrypted.

=> https://floccus.org/ Floccus official project website

The project is actively developed and maintained.

=> https://github.com/floccusaddon/floccus Floccus GitHub repository

If you want to share a bookmark folder with other people (relatives, a team at work), do not forget to create a dedicated account on the backend, as the credentials will be shared.

# Features

* can sync bookmarks or tabs
* syncs over WebDAV, Nextcloud, Git, Linkwarden and Google Drive
* (optional) encrypted file on the shared storage with the WebDAV and Google Drive backends
* (optional) safety check that refuses to sync if more than 50% of the bookmarks changed
* can sync a single bookmark directory
* syncs one-way or two-ways
* non-HTTP URLs can be saved when using the WebDAV or Google Drive backends (ftp:, javascript:, data:, chrome:)
* getting rid of Floccus is easy: it has an export feature, and you can also export your bookmarks from the browser itself

# Setup

There is not much to set up, but the process looks like this:

1. install the web browser extension (it is published on Chrome, Mozilla and Edge stores)
2. click on the Floccus icon and click on "Add a profile"
3. choose the backend
4. type credentials for the backend
5. configure the sync options you want
6. enjoy!

After you are done, repeat the process in another web browser if you want to enable syncing; otherwise, Floccus will "only" serve as a bookmark backup solution.

# Conclusion

It is the first bookmark sync solution I am happy with, it just works, supports end-to-end encryption, and does not force you to use the same web browser across all your devices.

Before this, I tried the integrated web browser sync solutions, but self-hosting them was not always possible (or was a terrible experience).  I gave a try to "bookmark managers" (linkding, buku, shiori), but whether on the command line or with a web UI, I did not really like them as I found them rather impractical for daily use.  I just wanted to have my bookmarks stored in the browser and be able to easily search/open them.  Floccus does the job.

Using a dedicated administration workstation for my infrastructure

# Introduction

As I moved my infrastructure to a whole new architecture, I decided to only expose critical accesses to dedicated administration systems (I have just one).  That workstation is dedicated to administering my infrastructure: it can only connect to my servers over a VPN and can not reach the Internet.

This blog post explains why I am doing this, and gives a high level overview of the setup.  Implementation details are not fascinating as it only requires basic firewall, HTTP proxy and VPN configuration.

# The need

I wanted my regular computer to not be able to handle any administration task, so I have a computer "like a regular person": no SSH keys, no VPN, and a password manager that does not mix personal credentials with administration credentials.  To reduce the risk of credential leaks or malware, it makes sense to uncouple the admin role from the "everything else" role.  So far, I have been using Qubes OS, which helped me do so at the software level, but I wanted to go further.

# Setup

This is a rather quick and simple explanation of what you have to do in order to set up a dedicated system for administration tasks.

## Workstation

The admin workstation I use is an old laptop, it only needs a web browser (except if you have no internal web services), an SSH client, and the ability to connect to a VPN.  Almost any OS can do it, just pick the one you are the most comfortable with, especially with regard to the firewall configuration.

The workstation has its own SSH key that is deployed on the servers.  It also has its own VPN to the infrastructure core.  And its own password manager.

Its firewall is configured to block all in and out traffic except the following:

* UDP traffic to allow WireGuard
* HTTP proxy address:port through WireGuard interface
* SSH through WireGuard
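
As a sketch, here is what such a ruleset could look like with OpenBSD's pf.  All addresses, the port numbers and the interface name are made-up examples for illustration:

```
# /etc/pf.conf sketch: block everything, then allow only the three flows above
# 203.0.113.10:51820 = WireGuard endpoint, 10.8.0.1:3128 = HTTP proxy,
# 10.8.0.0/24 = servers reachable over the VPN, wg0 = WireGuard interface
block all
pass out on egress proto udp to 203.0.113.10 port 51820
pass out on wg0 proto tcp to 10.8.0.1 port 3128
pass out on wg0 proto tcp to 10.8.0.0/24 port 22
```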

The HTTP proxy exposed on the infrastructure has a whitelist to only allow a few FQDNs.  I actually want to use the admin workstation for some tasks, like managing my domains through my registrar's web console.  Keeping the list as small as possible is important: you do not want to start using this workstation for browsing the web or reading emails.

On this machine, make sure to configure the system to use the HTTP proxy for updates and for installing packages.  The difficulty of doing so varies from one operating system to another.  While Debian only required a single file in `/etc/apt/apt.conf.d/` to make apt use the HTTP proxy, OpenBSD needed both the `http_proxy` and `https_proxy` environment variables, and some scripts had to be checked as they do not all use these variables: I had to verify that fw_update, pkg_add, sysupgrade and syspatch were all working.
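
For reference, a minimal apt proxy configuration could look like this (the file name and proxy address are examples):

```
# /etc/apt/apt.conf.d/99proxy, with 10.8.0.1:3128 being the proxy on the VPN
Acquire::http::Proxy "http://10.8.0.1:3128/";
Acquire::https::Proxy "http://10.8.0.1:3128/";
```

And on OpenBSD, the equivalent environment variables (same example address):

```
# in root's ~/.profile
export http_proxy=http://10.8.0.1:3128/
export https_proxy=http://10.8.0.1:3128/
```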

Ideally, if you can afford it, configure remote logging of this workstation's logs to a central log server.  When available, `auditd` monitoring access/changes to important files in `/etc` can give precious information.

## Servers

My SSH servers are only reachable through a VPN, I do not expose them publicly anymore.  And I do IP filtering over the VPN, so only the VPN clients that have a reason to connect over SSH are allowed to do so.

Where I have web interfaces for services like Minio, Pi-hole and the monitoring dashboard, all of that is restricted to the admin workstations only.  Sometimes, you have the opportunity to separate the admin part by adding an HTTP filter on a `/admin/` URI, or the service uses a different port for administration and for the service itself (like Minio).  When enabling a new service, think about everything you can restrict to the admin workstations only.

Depending on your infrastructure size and locations, you may want to use dedicated systems for SSH/VPN/HTTP proxy entry points, it is better if it is not shared with important services.

## File exchange

You will need to exchange data with the admin workstation (rarely the other way around), I found nncp to be a good tool for that.  You can imagine a lot of different setups, but I recommend picking one that:

* does not require a daemon on the admin workstation: this does not increase the workstation attack surface
* allows encryption at rest: so you can easily use any deposit system for the data exchange
* is asynchronous: a synchronous connection is potentially dangerous because it establishes a direct link between the sender and the receiver

=> https://dataswamp.org/~solene/2024-10-04-secure-file-transfer-with-nncp.html Previous blog post: Secure file transfer with NNCP

# Conclusion

I learned about this method while reading ANSSI (French cybersecurity national agency) papers.  While it may sound extreme, it is a good practice I endorse.  This gives a use to old second hand hardware I own, and it improves my infrastructure security while giving me peace of mind.

=> https://cyber.gouv.fr/ ANSSI website (in French)

In addition, if you want to allow some people to work on your infrastructure (maybe you want to set up some infra for an association?), you already have the framework to restrict their scope and trace what they do.

Of course, the amount of complexity and resources you can throw at this is up to you: you could totally have a single server, lock most of its services behind a VPN and call it a day, or have multiple servers worldwide and use dedicated servers to enter their software defined network.

Last thing: make sure you can bootstrap into your infrastructure if the only admin workstation is lost/destroyed.  Most of the time, a physical/console access will be enough (make sure the password manager is reachable from the outside for this case).

Securing backups using S3 storage

# Introduction

In this blog post, you will learn how to make secure backups using Restic and an S3-compatible object storage.

Backups are incredibly important: you may lose files that only existed on your computer, or lose access to some encrypted accounts or drives.  When you need backups, you need them to be reliable and secure.

There are two methods to handle backups:

* pull backups: a central server connects to each system and pulls the data to store it locally, this is how rsnapshot, BackupPC or Bacula work
* push backups: each system runs the backup software locally and stores the result on the backup repository (either local or remote), this is how most backup tools work

Both workflows have pros and cons.  Pull backups are usually not encrypted, and a single central server has access to everything, which is rather bad from a security point of view.  Push backups handle all encryption and accesses on the system where they run, but an attacker compromising that system could destroy the backup using the backup tool.

I will explain how to leverage S3 features to protect your backups from an attacker.

# Quick intro to object storage

S3 is the name of an AWS service used for object storage.  Basically, it is a huge key-value store in which you can put data and retrieve it; there is very little metadata associated with an object.  Objects are all stored in a "bucket", they have a path, and you can organize the bucket with directories and subdirectories.

Buckets can be encrypted, which is an important feature if you do not want your S3 provider to be able to access your data.  However, most backup tools already encrypt their repository, so adding encryption to the bucket is not really useful.  I will not explain how to enable bucket encryption in this guide, although you can do it if you want: it requires storing more secrets outside of the backup system in order to restore, and it does not provide real benefits because the repository is already encrypted.

S3 was designed to be highly efficient for storing and retrieving data, but it is not a competitor to POSIX file systems.  A bucket can be public or private; you can host your website in a public bucket (and it is rather common!).  A bucket has permissions associated with it: you certainly do not want to allow random people to put files in your public bucket (or list its files), while you still need to be able to do so yourself.

The protocol designed around S3 was reused for what we call "S3-compatible" services on which you can directly plug any "S3-compatible" client, so you are not stuck with AWS.

This blog post exists because I wanted to share a cool S3 feature (not really S3-specific, as almost every implementation offers it) that goes well with backups: a bucket can be versioned, so every change happening in the bucket can be reverted.  Now, think about an attacker escalating to root privileges: they can access the backup repository, delete all the files there, then destroy the server.  With a backup on a versioned S3 storage, you could revert the bucket to just before the deletion happened and recover your backup.  To defeat this, the attacker would also need access to the S3 management credentials, which are different from the credentials used to write to the bucket.
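
As an illustration of the recovery, here is a sketch using the AWS CLI (the bucket name and object key are made up; S3-compatible providers usually accept the same API with an `--endpoint-url` option):

```
# list all object versions and the delete markers left by the attacker
aws s3api list-object-versions --bucket my-backup-bucket

# deleting a delete marker (id taken from the previous output)
# makes the previous version of the object visible again
aws s3api delete-object --bucket my-backup-bucket \
    --key restic/snapshots/example --version-id <delete-marker-id>
```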

Finally, restic supports S3 as a backend, and this is what we want.

## Open source S3-compatible storage implementations

Here is a list of open source and free S3-compatible storage implementations.  I played with them all; they have different goals and purposes, and they all worked well enough for me:

=> https://github.com/seaweedfs/seaweedfs Seaweedfs GitHub project page
=> https://garagehq.deuxfleurs.fr/ Garage official project page
=> https://min.io/ Minio official project page

A quick note about those:

* I consider SeaweedFS to be the Swiss army knife of storage: you can mix multiple storage backends and expose them over different protocols (like S3, HTTP, WebDAV), and it can replicate data over remote instances.  You can do tiering (based on last access time or speed) as well.
* Garage is a relatively new project; it is quite bare-bones in terms of features, but it works fine and supports high availability with multiple instances.  It only offers S3.
* Minio is the big player, it has a paid version (which is extremely expensive) although the free version should be good enough for most users.

# Configure your S3

You need to pick an S3 provider, you can self-host it or use a paid service, it is up to you.  I like Backblaze as it is super cheap at $6/TB/month, but I also have a local Minio instance for some needs.

Create a bucket, enable versioning on it and define the data retention; for the current scenario I think a few days is enough.
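
With the AWS CLI this could look like the sketch below (bucket name and retention are examples; providers like Backblaze also expose the same settings in their web console):

```
# enable versioning on the bucket
aws s3api put-bucket-versioning --bucket my-backup-bucket \
    --versioning-configuration Status=Enabled

# expire non-current versions after 7 days (the retention discussed above)
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
    --lifecycle-configuration '{"Rules":[{"ID":"cleanup","Status":"Enabled",
      "Filter":{},"NoncurrentVersionExpiration":{"NoncurrentDays":7}}]}'
```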

Create an application key for your restic client with the following permissions: "GetObject", "PutObject", "DeleteObject", "GetBucketLocation", "ListBucket".  The names can change from one provider to another, but the key needs to be able to put/delete/list data in the bucket (and only this bucket!).  After this process is done, you will get a pair of values: an identifier and a secret key.

Now, you will have to provide the following environment variables to restic when it runs:

* `AWS_DEFAULT_REGION` which contains the region of the S3 storage, this information is given when you configure the bucket.
* `AWS_ACCESS_KEY_ID` which contains the access key generated when you created the application key.
* `AWS_SECRET_ACCESS_KEY` which contains the secret key generated when you created the application key.
* `RESTIC_REPOSITORY` which will look like `s3:https://$ENDPOINT/$BUCKET` with $ENDPOINT being the bucket endpoint address and $BUCKET the bucket name.
* `RESTIC_PASSWORD` which contains your backup repository passphrase to encrypt it, make sure to write it down somewhere else because you need it to recover the backup.

If you want a simple script to back up some directories and remove old data with a retention of 5 hourly, 2 daily, 2 weekly and 2 monthly backups:

```
restic backup -x /home /etc /root /var
restic forget --prune -H 5 -d 2 -w 2 -m 2
```

Do not forget to run `restic init` the first time, to initialize the restic repository.
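
Putting the pieces together, a complete backup script could look like this sketch; every value below is a placeholder to adapt to your own bucket and secrets:

```
#!/bin/sh
# placeholder values: adapt the region, endpoint, bucket and secrets
export AWS_DEFAULT_REGION="eu-central-1"
export AWS_ACCESS_KEY_ID="application-key-id"
export AWS_SECRET_ACCESS_KEY="application-key-secret"
export RESTIC_REPOSITORY="s3:https://s3.example.com/my-backup-bucket"
export RESTIC_PASSWORD="repository-passphrase"

# initialize the repository only if it does not exist yet
restic snapshots >/dev/null 2>&1 || restic init

restic backup -x /home /etc /root /var
restic forget --prune -H 5 -d 2 -w 2 -m 2
```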

# Conclusion

I really like this backup system: it is cheap, very efficient and provides a fallback in case of a problem with the repository (mistakes happen, you do not always need an attacker to lose data ^_^').

If you do not want to use S3 backends, you should know that Borg backup and Restic both support an "append-only" mode, which prevents an attacker from damaging or even reading the backup.  But I always found it hard to use, and you need another system to do the prune/cleanup on a regular basis.

# Going further

This approach could work with any backend supporting snapshots, like Btrfs or ZFS.  If you can roll the backup repository back to a previous point in time, you will get access to a working backup repository again.

You could also back up the backup repository on the backend side, but you would waste a lot of disk space.

Snap integration in Qubes OS templates

# Introduction

The snap package format is interesting: while it used to have a bad reputation, I wanted to form my own opinion about it.  After reading its design and usage documentation, I find it quite good, and I have had a good experience with some programs installed through snap.

=> https://snapcraft.io/ Snapcraft official website (store / documentation)

Snap programs are packaged either as "strict" or "classic": a strict snap has some confinement at work, which can be inspected for an installed snap using `snap connections $appname`, while a "classic" snap has no sandboxing at all.  Snap programs are completely decoupled from the host operating system where snap is running, so you can have old or new versions of a snap-packaged program without having to handle shared library versions.

The following setup explains how to install snap programs in a template and run them from AppVMs, not how to install snap programs in AppVMs as a user; if you need the latter, please use the Qubes OS guide linked below.

The Qubes OS documentation explains how to set up snap in a template, but with a helper allowing AppVMs to install snap programs in the user directory.

=> https://www.qubes-os.org/doc/how-to-install-software/#installing-snap-packages Qubes OS official documentation: install snap packages in AppVMs

In a previous blog post, I explained how to configure a Qubes OS template to install flatpak programs in it, and how to integrate it to the template.

=> https://dataswamp.org/~solene/2023-09-15-flatpak-on-qubesos.html Previous blog post: Installing flatpak programs in a Qubes OS template

# Setup on Fedora

All commands are meant to be run as root.

## Snap installation

=> https://snapcraft.io/docs/installing-snap-on-fedora Snapcraft official documentation: Installing snap on Fedora

Installing snap is easy, run the following command:

```
dnf install snapd
```

To allow "classic" snaps to work, you need to run the following command:

```
ln -s /var/lib/snapd/snap /snap
```

## Proxy configuration

Now, you have to configure snap to use the HTTP proxy in the template.  These commands can take some time because snap will time out as it tries to use the network when invoked...

```
snap set system proxy.http="http://127.0.0.1:8082/"
snap set system proxy.https="http://127.0.0.1:8082/"
```

## Run updates on template update

You need to prevent snap from searching for updates on its own, as you will run updates when the template is updated:

```
snap refresh --hold
```

To automatically update snap programs when the template is updated (or during any dnf operation), create the file `/etc/qubes/post-install.d/05-snap-update.sh` with the following content and make it executable:

```
#!/bin/sh

if [ "$(qubesdb-read /type)" = "TemplateVM" ]
then
    snap refresh
fi
```

## Qube settings menu integration

To add the menu entry of each snap program in the qube settings when you install/remove snaps, create the file `/usr/local/sbin/sync-snap.sh` with the following content and make it executable:

```
#!/bin/sh

# when a desktop file is created/removed
# - links snap .desktop in /usr/share/applications
# - remove outdated entries of programs that were removed
# - sync the menu with dom0

inotifywait -m -r \
-e create,delete,close_write \
/var/lib/snapd/desktop/applications/ |
while IFS=':' read event
do
    find /var/lib/snapd/desktop/applications/ -name "*.desktop" | while read line
    do
        ln -sf "$line" /usr/share/applications/
    done
    find /usr/share/applications/ -xtype l -delete
    /etc/qubes/post-install.d/10-qubes-core-agent-appmenus.sh
done
```

Install the package `inotify-tools` to make the script above work, and add this line to `/rw/config/rc.local` to run it at boot:

```
/usr/local/sbin/sync-snap.sh &
```

You can run the script now with `/usr/local/sbin/sync-snap.sh &` if you plan to install snap programs right away.

## Snap store GUI

If you want to browse and install snap programs using a nice interface, you can install the snap store.

```
snap install snap-store
```

You can run the store with `snap run snap-store` or configure your template settings to add the snap store into the applications list, and run it from your Qubes OS menu.

# Debian

The setup on Debian is pretty similar, you can reuse the Fedora guide, except that you need to replace `dnf` with `apt`.

=> https://snapcraft.io/docs/installing-snap-on-debian Snapcraft official documentation: Installing snap on Debian

# Conclusion

Having more options to install programs is always good, especially when it comes with features like quotas or sandboxing.  Qubes OS gives you the flexibility to use multiple templates in parallel, and a new source of packages can be useful for some users.

Asynchronous secure file transfer with nncp

# Introduction

nncp (node to node copy) is a software to securely exchange data between peers.  It is command line only, written in Go, and compiles on Linux and BSD systems (although among the BSDs, it is only packaged for FreeBSD).

The website will do a better job than me at presenting the numerous features, but I will do my best to explain what you can do with it and how to use it.

=> http://www.nncpgo.org/ nncp official project website

# Explanations

nncp is a suite of tools to asynchronously exchange data between peers, using zero-knowledge encryption.  Once peers have exchanged their public keys, they are able to encrypt data to send to each other.  This is nothing really new to be honest, but there is a twist:

* a peer can directly connect to another using TCP, you can even configure different addresses like a tor onion or I2P host and use the one you want
* a peer can connect to another using ssh
* a peer can generate plain files that will be carried over USB, network storage, synchronization software, whatever, to be consumed by a peer.  Files can be split into chunks of arbitrary size in order to prevent anyone snooping from figuring out how many files are exchanged or their names (hence zero knowledge).
* a peer can generate data to burn on a CD or tape (it is working as a stream of data instead of plain files)
* a peer can be reachable through another relay peer
* when a peer receives files, nncp generates ACK files (acknowledgements) that tell you they were correctly received
* a peer can request files and/or trigger pre-configured commands you expose to this peer
* a peer can send emails with nncp (requires a specific setup on the email server)
* data transfer can be interrupted and resumed

What is cool with nncp is that files you receive are unpacked in a given directory and their integrity is verified.  This is sometimes more practical than a network share in which you are never sure when you can move / rename / modify / delete the file that was transferred to you.

I identified a few "realistic" use cases with nncp:

* exchange files between air gap environments (I tried to exchange files over sound or QR codes, I found no reliable open source solution)
* secure file exchange over physical medium with delivery notification (the medium needs to do a round-trip for the notification)
* start a torrent download remotely, prepare the file to send back once downloaded, retrieve the file at your own pace
* reliable data transfer over poor connections (although I am not sure if it beats kermit at this task :D )
* "simple" file exchange between computers / people over network

This leaves a lot of room for other imaginative use cases.

# Real world example: Syncthing gateway

My preferred workflow with nncp, which I am currently using, is a group of three syncthing servers.

Each syncthing server is running on a different computer, the location does not really matter.  There is a single share between these syncthing instances.

The servers where syncthing runs expose incoming and outgoing directories over an NFS / SMB share, with a directory named after each peer in both.  Depositing a file in the "outgoing" directory of a peer makes nncp prepare the file for this peer and put it into the syncthing share; the file is consumed in the process.
In the same vein, new files are unpacked into the incoming directory of the emitting peer on the receiving server running syncthing.

Why is it cool?  You just drop a file in the directory of the peer you want to send it to, it disappears locally and magically appears on the remote side.  If something wrong happens, thanks to the ACKs, you can verify whether the file was delivered and unpacked.  With three shares, you can almost have two connected at the same time.

It is a pretty good file deposit that requires no knowledge to use.

This could be implemented with pure syncthing, however you would have to:

* for each peer, configure a one-way directory share in syncthing for each other peer to upload data to
* for each peer, configure a one-way directory share in syncthing for each other peer to receive data from
* for each peer, configure an encrypted share to relay all the one-way shares from other peers

This does not scale well.

Side note: I am using syncthing because it is fun and requires no infrastructure.  But actually, a WebDAV filesystem, a Nextcloud drive or anything that shares data over the network would work just fine.

# Setup

## Configuration file and private keys

On each peer, you have to generate a configuration file with its private keys.  The default path for the configuration file is `/etc/nncp.hjson` but nothing prevents you from storing this file anywhere else; you will have to use the parameter `-cfg /path/to/config` in that case.

Generate the file like this:

```
nncp-cfgnew > /etc/nncp.hjson
```

The file contains comments, which is helpful to see how the file is structured and which options exist.  Never share the private keys in this file!

I recommend checking the spool and log paths, and deciding which user should run nncp.  For instance, you can use `/var/spool/nncp` to store nncp data (waiting to be delivered or unpacked) and the log file, and make your user the owner of this directory.
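
For instance, the top of the configuration file could contain the following (the paths are the ones suggested above, the rest of the generated file is left untouched):

```
# beginning of /etc/nncp.hjson
{
  spool: /var/spool/nncp
  log: /var/spool/nncp/log
  ...
}
```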

## Public keys

Now, generate the public keys (they are just derived from the private keys generated earlier) to share with your peers.  There is a command for this that reads the private keys and outputs the public keys in a format ready to paste into the nncp.hjson file of your recipients.

```
nncp-cfgmin > my-peer-name.pub
```

You can share the generated file with anyone; this will allow them to send you files.  The peer name of your own system is "self"; you can rename it, it is just an identifier.

## Import public keys

When importing public keys, you just need to add the content generated by the command `nncp-cfgmin` of a peer to your own nncp configuration file.

Just copy / paste the content into the `neigh` structure within the configuration file, and make sure to rename "self" to the identifier you want to give to this peer.

If you want to receive data from this peer, make sure to add an attribute line `incoming: "/path/to/incoming/data"` for that peer, otherwise you will not be able to unpack received files.
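
A peer entry in the `neigh` section then looks roughly like this sketch; the identifier "alice", the path and the key placeholders are made up, the real values come from the peer's `nncp-cfgmin` output:

```
neigh: {
  ...
  alice: {
    id: BASE32ID...
    exchpub: BASE32KEY...
    signpub: BASE32KEY...
    noisepub: BASE32KEY...
    incoming: "/home/user/nncp/incoming/alice"
  }
}
```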

# Usage

Now that your peers have exchanged keys, they are able to send data to each other.  nncp is a collection of tools; let's see the most common ones and what they do:

* nncp-file: add a file in the spool to deliver to a peer
* nncp-toss: unpack incoming data (files, commands, file requests, emails) and generate ACKs
* nncp-reass: reassemble files that were split in smaller parts
* nncp-exec: trigger a pre-configured command on the remote peer, stdin data will be passed as the command parameters.  Let's say a peer offers a "wget" service, you can use `echo "https://some-domain/uri/" | nncp-exec peername wget` to trigger a remote wget.

If you use the client / server model over TCP, you will also use:

* nncp-daemon: the daemon waiting for connections
* nncp-caller: a daemon occasionally triggering client connections (it works like a crontab)
* nncp-call: trigger a client connection to a peer

If you use asynchronous file transfers, you will use:

* nncp-xfer: generates files to / consumes files from a directory for async transfer

# Workflow (how to use)

## Sending files

For sending files, just use `nncp-file file-path peername:`; the original file name will be used when unpacking, but you can also specify the filename you want it to have once unpacked.

A directory can be passed as a parameter instead of a file; it will automatically be stored in a .tar file for delivery.

Finally, you can send a stream of data to nncp-file through stdin, but you have to give a name to the resulting file.
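
For example, with a hypothetical peer named "remote", the three cases above look like this:

```
nncp-file report.pdf remote:                  # unpacked as report.pdf
nncp-file report.pdf remote:2024-report.pdf   # renamed on unpacking
nncp-file my-directory remote:                # delivered as a .tar file
echo "hello" | nncp-file - remote:hello.txt   # stdin needs a destination name
```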

## Sync and file unpacking

This was not really clear from the documentation, so here is how to best use nncp when exchanging data using plain files.  The destination is `/mnt/nncp` in my examples (it can be an external drive, a syncthing share, an NFS mount...):

When you want to sync, always use this scheme:

1. `nncp-xfer -rx /mnt/nncp`
2. `nncp-toss -gen-ack`
3. `nncp-xfer -keep -tx -mkdir /mnt/nncp`
4. `nncp-rm -all -ack`

This receives files using `nncp-xfer -rx`: the files are stored in the nncp spool directory.  Then, with `nncp-toss -gen-ack`, the files are unpacked into the "incoming" directory of each peer who sent files, and ACKs are generated (older versions of `nncp-toss` do not handle ACKs, you need to generate the ACKs before and remove them after tx, with `nncp-ack -all 4>acks` and `nncp-rm -all -pkt < acks`).

`nncp-xfer -tx` will put in the directory the data you want to send to peers, and also the ACK files generated by the rx which happened before.  The `-keep` flag is crucial here if you want to make use of ACKs: with `-keep`, the sent data are kept in the spool until you receive the ACK for them, otherwise the data are removed from the spool and will not be retransmitted if the files were not received.  Finally, `nncp-rm` will delete all ACK files so you will not transmit them again.
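
The four-step scheme above can be wrapped in a small script to run at each sync (the mount point is the example one):

```
#!/bin/sh
set -e
nncp-xfer -rx /mnt/nncp               # 1. receive packets into the spool
nncp-toss -gen-ack                    # 2. unpack them and generate ACKs
nncp-xfer -keep -tx -mkdir /mnt/nncp  # 3. deposit outgoing packets and ACKs
nncp-rm -all -ack                     # 4. drop the ACKs we just sent
```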

# Explanations about ACK

From my experience and documentation reading, there are three cases with the spool and ACKs:

* the shared drive is missing the files you sent (which are still in the spool) and you received no ACK: the next time you run `nncp-xfer`, the files will be transmitted again
* when you receive ACK files for files in the spool, they are deleted from the spool
* when you do not use `-keep` when sending files with `nncp-xfer`, the files are not kept in the spool, so you will not be able to know what to retransmit if ACKs are missing

ACKs do not clean up themselves, you need to use `nncp-rm`.  It took me a while to figure this out; my nodes kept sending ACKs to each other repeatedly.

# Conclusion

I really like nncp as it allows me to securely transfer files between my computers without having to care whether they are online.  Rsync is not always an option because both the sender and receiver need to be up at the same time (and correctly reachable).

The way files are delivered is also practical for me: as I already shared above, files are unpacked into a defined directory per peer, instead of me having to remember why I moved something into a shared drive.  This removes the doubts about files in a shared drive: why is it there?  Why did I put it there?  What was its destination?

I played with various S3 storages to exchange nncp data, but that is for another blog post :-)

# Going further

There are more features in nncp, I did not play with all of them.

You can define "areas" in parallel to peers, use email notifications when a remote receives data from you to get a confirmation, request remote files, etc.  It is all in the documentation.

I have the idea of using nncp on an SMTP server to store encrypted incoming emails until I retrieve them (I am still working on improving the security of email storage), stay tuned :)

I moved my emails to Proton Mail

# Introduction

I recently took a very hard decision: I moved my emails to Proton Mail.

This is certainly a shock for people who have followed this blog for a long time; it was a shock for me as well!  It was actually pretty difficult to think about this topic objectively, and I would like to explain how I came to this decision.

I have been self-hosting my own email server since I bought my first domain name, back in 2009.  The server has been migrated multiple times, from one hosting company to another, and regularly changed its underlying operating system for fun.  It has been running on: Slackware, NetBSD, FreeBSD, NixOS and Guix.

# My needs

First, I need to explain my previous self-hosted setup, and what I do with my emails.

I have two accounts:

* one for my regular emails, mailing lists, friends, family
* one for my company to reach clients, send quotes and invoices

Having all the emails retrieved locally and not stored on my server would be ideal.  But I am using a lot of devices (most are disposable), and having everything on a single computer will not work for me.

Due to my emails being stored remotely and containing a lot of private information, I have never been really happy with how email works at all.  My dovecot server has access to all my emails, unencrypted, and a single password is enough to connect to it.  Adding a VPN helps to protect dovecot if it is not exposed publicly, but the server could still be compromised by other means.  OpenBSD's smtpd server got critical vulnerabilities patched a few years ago, basically allowing an attacker to get root access; since then I have never been really comfortable with my email setup.

I have been looking for ways to secure my emails; this is how I came to the setup encrypting incoming emails with GPG.  This is far from ideal, and I stopped using it quickly: it breaks searches, requires a lot of CPU on the server, and does not even encrypt all the information.

=> https://dataswamp.org/~solene/2024-08-14-automatic-emails-gpg-encryption-at-rest.html Emails encryption at rest on OpenBSD using dovecot and GPG

Someone showed me a dovecot plugin to encrypt emails completely, however my understanding of this plugin's encryption is that the IMAP client must authenticate the user with a plain text password, which dovecot uses to unlock an asymmetric encryption key.  The security model is questionable: if the dovecot server is compromised, users' passwords are available to the attacker, who can then decrypt all the emails.  It would still be better than nothing though, except if the attacker has root access.

=> https://0xacab.org/liberate/trees Dovecot encryption plugin: TREES

One thing I need from my emails is that they actually reach their recipients.  My emails were almost always considered spam by the big email providers (Gmail, Microsoft); this has been an issue for me for years, but recently it became a real problem for my business.  My email servers were always perfectly configured with everything required to be considered as legit as possible, but it never fully worked.

# Proton Mail

Why did I choose Proton Mail over another email provider?  There are a few reasons; I evaluated several providers before deciding.

Proton Mail is a paid service; this is actually an argument in itself.  I would not trust a good service offered for free, it would be too good to be true, so it would be a scam (or making money on my data, who knows).

They offer zero-knowledge encryption and MFA, which is exactly what I wanted.  Only I should be able to read my emails, even if the provider is compromised; adding MFA on top is just perfect because it requires two secrets to access the data.  Their zero-knowledge security could be criticized for a few things; ultimately there is no guarantee they do it as advertised.

Long story short, when you create your account, Proton Mail generates an encryption key on their server that is protected with your account password.  When you use the service and log in, the encrypted key is sent to you so all crypto operations happen locally, but there is no way to verify whether they kept your private key unencrypted at the beginning, or whether they modified their web apps to log the password you type.  The applications are less vulnerable to the second problem as it would impact many users and leave evidence.  I do trust them to do things right, although I have no proof.

I did not choose Proton Mail for end-to-end encryption, I only use GPG occasionally and I could use it before.

IMAP is possible with Proton Mail when you have a paid account, but you need to use their "bridge": a client that connects to Proton with your credentials and downloads all encrypted emails locally, then exposes an IMAP and SMTP server on localhost with dedicated credentials.  All emails are saved locally and it syncs continuously; it works great, but it is not lightweight.  There is an alternative open source implementation named hydroxide, but it did not work for me.  The bridge does not support CalDAV and CardDAV, which is not great, but not really an issue for me anyway.

=> https://github.com/emersion/hydroxide GitHub project page: hydroxide

Before migrating, I verified that reversibility was possible, i.e. being able to migrate my emails away from Proton Mail.  In case they stop providing their export tool, I would still have a local copy of all my IMAP emails, which is exactly what I would need to move them somewhere else.

There are certainly better alternatives than Proton with regard to privacy, but Proton is not _that_ bad on this topic, it is acceptable enough for me.

## Benefits

Since I moved my emails, I do not have deliverability issues.  Even people on Microsoft received my emails on the first try!  Great success for me here.

The anti-spam is more efficient than my spamd trained with years of spam.

Multiple factor authentication is required to access my account.

## Interesting features

I did not know I would appreciate scheduled email sending, but it is a thing, and I do not need to keep the computer on.

It is possible to generate aliases (10 or unlimited depending on the subscription).  What is great is that it takes a couple of seconds to generate a unique alias, and replying to an email received on an alias automatically uses this alias as the From address (a webmail feature).  On my server, I had been using a lot of different addresses with a "+" suffix in the local part, but it was rarely recognized, so I switched to a dot; these are not real aliases though.  I then started managing smtpd aliases through Ansible, and it was really painful to add a new alias every time I needed one.  Did I mention I like this alias feature? :D
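For comparison, managing aliases by hand on a self-hosted OpenBSD smtpd means editing the aliases table and reloading it for every new address; a minimal sketch with hypothetical alias names:

```
# /etc/mail/aliases (hypothetical entries, one per alias)
shop-somestore:     solene
newsletter-xyz:     solene
```

After editing the file, you still need to run `newaliases` so smtpd picks up the change; doing this for every throwaway address is what made the workflow painful.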

If I want to send an end-to-end encrypted email without GPG, there is an option to protect the content with a password: the recipient actually receives a link leading to a Proton Mail interface asking for the password to decrypt the content, and allowing that person to reply.  I have no idea if I will ever use it, but at least it is a more user-friendly end-to-end encryption method.  Tuta offers the same feature, but it is their only e2e method.

Proton offers logs of login attempts on my account, which was a nice surprise.

There is an onion access to their web services in case you prefer to connect using Tor.

The web interface is open source; one should be able to build it locally and connect it to Proton servers, I guess it should work?

=> https://github.com/ProtonMail/WebClients GitHub project page: ProtonMail webclients

## Shortcomings

Proton Mail cannot be used as an SMTP relay by my servers, except through the open source bridge hydroxide.

The calendar only works on the website and the smartphone app; it does not integrate with the phone calendar, although in practice I did not find this to be an issue, everything works fine.  Contacts support is more limited on Android: contacts are confined to the Mail app, so I still use my own CardDAV server.

The web app is the first-class citizen here, but at least it is good.

Nothing prevents Proton Mail from intercepting your incoming and outgoing emails; you need to use end-to-end encryption if you REALLY need to protect your emails from that.

I was using two accounts, which would require the more expensive "Duo" subscription on Proton Mail.  I solved this by creating two identities, with labels and filter rules to separate the emails of my two "accounts" (personal and professional).  I do not really like that, although it is not much of an issue at the moment as one of them sees relatively low traffic.

The price is certainly high: the "Mail Plus" plan is 4€ / month (48€ / year) if you subscribe for 12 months, but is limited to 1 domain, 10 aliases and 15 GB of storage.  The "Proton Unlimited" plan is 10€ / month (120€ / year) but comes with the kitchen sink: unlimited aliases, 3 domains, 500 GB of storage, and access to all Proton services (that you may not need...) like VPN, Drive and Pass.  In comparison, hosting your email service on a cheap server should not cost you more than 70€ / year, and you can self-host a Nextcloud / Seafile instance (equivalent to Drive, although data is stored encrypted there), a VPN and a vaultwarden instance (equivalent to Pass) in addition to the emails.

Emails are limited to 25MB, which is low given that I always configured my own server to allow 100 MB attachments.  But that created delivery issues with most recipient servers, so it is not a _real_ issue; still, I prefer being able to decide this kind of limitation myself.

## Alternatives

I evaluated Tuta too, but for the following reasons I dropped the idea quickly:

* they don't support email import (it has been "coming soon" for years on their website)
* you can only use their app or website
* there is no way to use IMAP
* there is no way to use GPG because their client does not support it, and you cannot connect using SMTP with your own client

Their service is cool though, but not for me.

# My ideal email setup

If I were to self-host again (which may be soon, who knows!), I would do it differently to improve the security:

* one front server with the SMTP server, cheap and disposable
* one server for IMAP
* one server to receive and analyze the logs

Only the SMTP server would be publicly available; all ports would be closed on all servers, the servers would communicate with each other through a VPN, and export their logs to a server that would only be used for forensics and detecting security breaches.
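For the log export part, OpenBSD's syslogd can forward everything to the log host over the VPN with a single line; a minimal sketch, assuming a hypothetical VPN address 10.0.0.3 for the log server:

```
# /etc/syslog.conf on the SMTP and IMAP servers (appended at the end)
*.*     @tls4://10.0.0.3:6514
```

The receiving syslogd must be configured to listen (the `-S` flag on OpenBSD for TLS); a plain `@10.0.0.3` (UDP) destination also works inside a trusted VPN.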

Such a setup would be an improvement if I were self-hosting my emails again, but the cost and time to operate it are non-negligible.  It is also ecological nonsense to need 3 servers for a single person's emails.

# Conclusion

I started this blog post with the fact that the decision was hard, so hard that I was not able to decide up to a day before renewing my email server for one year.  I wanted to give Proton a chance for a month to evaluate it completely, and I have to admit I like the service much more than I expected...

My Unix hacker heart hurts terribly on this one.  I would like to go back to self-hosting, but I know I cannot reach the level of security I was looking for, simply because email sucks in the first place.  A solution would be to get rid of this huge archive burden I am carrying, but I regularly search information into this archive and I have not found any usable "mail archive system" that could digest everything and serve it locally.

## Update 2024-09-14

I wrote this blog post two days ago, and I cannot stop thinking about this topic since the migration.

The real problem certainly lies in my use case: not having my emails on the remote server would solve my problems.  I need to figure out how to handle it.  Stay tuned :-)

Self-hosting at home and privacy

# Introduction

You may self-host services at home, but you need to think about the potential drawbacks for your privacy.

Let's explore what kind of information could be extracted from self-hosting, especially when you use a domain name.

# Public information

## Domain WHOIS

A domain name must expose some information through WHOIS queries, basically who is the registrar responsible for it, and who could be contacted for technical or administration matters.

Almost every registrar will offer you a feature to hide your personal information; you certainly do not want your full name, full address and phone number exposed by a single WHOIS request.

You can perform a WHOIS request on the link below, directly managed by ICANN.

=> https://lookup.icann.org/en ICANN Lookup

## TLS certificates using ACME

If you use TLS certificates for your services with ACME (Let's Encrypt or alternatives), all the domains for which a certificate was issued can easily be queried.

You can visit the following website, type a domain name, and you will immediately get a list of the certificates issued for it and its subdomains.

=> https://crt.sh/ crt.sh Certificate Search

In such a situation, if you planned to keep a domain hidden by not sharing it with anyone, you got it wrong.
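crt.sh also exposes this data as JSON (append `&output=json` to the query URL), which makes it easy to script the check for your own domains.  A self-contained sketch of the extraction, run against a hard-coded sample instead of the live API (the domain names are made up):

```shell
#!/bin/sh
# Hypothetical sample of crt.sh JSON output; real data would come from:
#   curl -s 'https://crt.sh/?q=example.com&output=json'
sample='[{"name_value":"mail.example.com"},{"name_value":"www.example.com"}]'

# Print every "name_value" field, one certificate name per line
printf '%s\n' "$sample" | tr ',' '\n' | \
    sed -n 's/.*"name_value":"\([^"]*\)".*/\1/p'
```

Each name printed this way is public knowledge the moment the certificate is issued.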

## Domain name

If you use a custom domain in your email, it is highly likely that you have some IT knowledge and that you are the only user of your email server.

Using this statement (IT person + only domain user), someone having access to your email address can quickly search for anything related to your domain and figure out it is related to you.

## Public IP

Anywhere you connect, your public IP is known to the remote servers.

Some bored sysadmin could take a look at the IPs in their logs and check if some public service is running on them; polling secure services (HTTPS, IMAPS, SMTPS) will immediately reveal the domain names associated with that IP, and then they could search even further.

# Mitigations

There are not many solutions to prevent this, unfortunately.

The public IP situation could be mitigated by renting a cheap server with a public IP, establishing a VPN between it and your home, and using the server's public IP for your services while still hosting at home, or by moving your services to that remote server entirely.  This is an extra cost of course.  When possible, you could expose the service as a Tor hidden service or over I2P if that works for your use case; you would not need to rent a server for this.

The TLS certificate names being public could easily be solved by generating self-signed certificates locally, and dealing with it.  Depending on your services, it may be just fine, but if strangers use the services, having to trust the certificate on first use (TOFU) may appear dangerous.  Some software fails to connect to servers with self-signed certificates and does not offer a bypass...
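Generating a self-signed certificate is a single `openssl` command; a minimal sketch with a hypothetical hostname and output file names:

```shell
#!/bin/sh
# Create a key and a self-signed certificate valid for one year;
# key.pem stays on the server, cert.pem is what clients will be
# asked to trust on first use (the hostname is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 365 \
    -subj "/CN=myservice.home.arpa"
```

Sharing the certificate fingerprint out-of-band (`openssl x509 -in cert.pem -noout -fingerprint`) lets trusted users verify what they accept.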

# Conclusion

Self-hosting at home can be practical for various reasons: reusing old hardware, better local throughput, high performance for cheap... but you need to be aware of potential privacy issues that could come with it.

How to use Proton VPN port forwarding

# Introduction

If you use Proton VPN with the paid plan, you have access to their port forwarding feature.  It allows you to expose a TCP and/or UDP port of your machine on the public IP of your current VPN connection.

This can be useful for multiple use cases, let's see how to use it on Linux and OpenBSD.

=> https://protonvpn.com/support/port-forwarding-manual-setup/ Proton VPN documentation: port forwarding setup

If you do not have a privacy need with regard to the service you need to expose to the Internet, renting a cheap VPS is a better solution: cheaper price, stable public IP, no weird script for port forwarding, use of standard ports allowed, reverse DNS, etc...

# Feature explanation

Proton VPN's port forwarding feature is not really practical, at least not as practical as port forwarding on your local router.  The NAT is done using the NAT-PMP protocol (an alternative to UPnP): you are given a random port number with a 60-second lease.  The random port number is the same for TCP and UDP.

=> https://en.wikipedia.org/wiki/NAT_Port_Mapping_Protocol Wikipedia page about NAT Port Mapping Protocol

There is a NAT-PMP client named `natpmpc` (available almost everywhere as a package) that needs to run in an infinite loop to renew the port lease before it expires.
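On success, `natpmpc` prints a line starting with `Mapped public port` followed by the assigned port number, which is easy to capture with `awk`.  A self-contained sketch against a sample output line (the port value and the exact wording are illustrative):

```shell
#!/bin/sh
# Hypothetical sample of a natpmpc success line; a real run would be:
#   natpmpc -a 1 0 udp 60 -g 10.2.0.1
sample="Mapped public port 61234 protocol UDP to local port 0 lifetime 60"

# The port number is the 4th field of the "Mapped public" line
PORT=$(printf '%s\n' "$sample" | awk '/Mapped public/ { print $4 }')
echo "assigned port: $PORT"
```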

This is rather impractical for multiple reasons:

* you get a random port assigned, so you must configure your daemon every time
* the lease renewal script must run continuously
* if something wrong happens (script failure, short network outage) that prevents renewing the lease, you will get a new random port

Although it has shortcomings, it is a useful feature that was dropped by other VPN providers because of abuses.

# Setup

Let me share a script I am using on Linux and OpenBSD that does the following:

* get the port number
* reconfigure the daemon using the port forwarding feature
* infinite loop renewing the lease

You can run the script from supervisord (a process manager) to restart it upon failure.

=> http://supervisord.org/ Supervisor official project website

In the example, the Java daemon I2P will be used to demonstrate how to update the configuration with sed after the port number is assigned.

## OpenBSD

Install the package `natpmpd` to get the NAT-PMP client.

Create a script with the following content, and make it executable:

```
#!/bin/sh

PORT=$(natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')

# check if the current port is correct
grep "$PORT" /var/i2p/router.config || /etc/rc.d/i2p stop

# update the port in I2P config
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," /var/i2p/router.config

# make sure i2p is started (in case it was stopped just before)
/etc/rc.d/i2p start

while true
do
    date # use for debug only
    natpmpc -a 1 0 udp 60 -g 10.2.0.1 && natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
    sleep 45
done
```

The script searches for the current port number in the I2P configuration, and stops the service if the port is not found.  Then the port lines are modified with sed (in all cases, it does not matter much).  Finally, i2p is started: this only does something if i2p was stopped just before, otherwise nothing happens.

Then, in an infinite loop running every 45 seconds, the TCP and UDP port forwardings are renewed.  If something goes wrong, the script exits.
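The `sed` substitution can be tested safely without touching the real service; here is a dry run of the same expression against a sample `router.config` (hypothetical content) in a temporary file:

```shell
#!/bin/sh
# Dry run of the port rewrite used by the script above
PORT=61234
CONF=$(mktemp)

cat > "$CONF" <<EOF
i2np.udp.port=20000
i2np.udp.internalPort=20000
EOF

sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," "$CONF"
cat "$CONF"
```

Both lines should now carry the new port value.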

### Using supervisord

If you want to use supervisord to start the script at boot and maintain it running, install the package `supervisor` and create the file `/etc/supervisord.d/nat.ini` with the following content:

```
[program:natvpn]
command=/etc/supervisord.d/continue_nat.sh ; choose the path of your script
autorestart=unexpected ; when to restart if exited after running (def: unexpected)
```

Enable supervisord at boot, start it and verify it started (a configuration error prevents it from starting):

```
rcctl enable supervisord
rcctl start supervisord
rcctl check supervisord
```

### Without supervisord

Open a shell as root, execute the script and keep the terminal open, or run it in a tmux session.

## Linux

The setup is exactly the same as for OpenBSD, just make sure the package providing `natpmpc` is installed.

Depending on your distribution, if you want to automate the script running / restart, you can run it from a systemd service with auto restart on failure, or use supervisord as explained above.

If you use a different network namespace, just make sure to prefix the commands using the VPN with `ip netns exec vpn`.

Here is the same example as above but using a network namespace named "vpn" to start i2p service and do the NAT query.


```shell
#!/bin/sh

PORT=$(ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 | awk '/Mapped public/ { print $4 }')

FILE=/var/i2p/.i2p/router.config

grep "$PORT" $FILE || sudo -u i2p /var/i2p/i2prouter stop
sed -i -E "s,(^i2np.udp.port).*,\1=$PORT, ; s,(^i2np.udp.internalPort).*,\1=$PORT," $FILE

ip netns exec vpn sudo -u i2p /var/i2p/i2prouter start

while true
do
    date
    ip netns exec vpn natpmpc -a 1 0 udp 60 -g 10.2.0.1 && ip netns exec vpn natpmpc -a 1 0 tcp 60 -g 10.2.0.1 || { echo "error Failure natpmpc $(date)"; break ; }
    sleep 45
done
```

# Conclusion

Proton VPN's port forwarding feature is useful when you need to expose a local network service on a public IP.  Automating it is required to make it work efficiently due to the unusual implementation.

Emails encryption at rest on OpenBSD using dovecot and GPG

# Introduction

In this blog post, you will learn how to configure your email server to encrypt all incoming emails using the users' GPG public keys (when they exist).  This prevents anyone from reading the emails, except whoever owns the corresponding GPG private key.  This is known as "encryption at rest".

This setup, while effective, has limitations.  Headers will not be encrypted, searching in emails will break as the content is encrypted, and you obviously need to have the GPG private key available when you want to read your emails (if you read emails on your smartphone, you need to decide if you really want your GPG private key there).

Encryption is CPU consuming (and memory consuming too for emails of considerable size).  I tried it on an openbsd.amsterdam virtual machine, and it was working fine until someone sent me emails with 20MB attachments.  On a bare-metal server, there is absolutely no issue.  Maybe GPG makes use of hardware-accelerated cryptography that is not available in virtual machines hosted under the OpenBSD hypervisor vmm.

This is not an original idea: Etienne Perot wrote about a similar setup in 2012 and enhanced the `gpgit` script we will use in this setup.  While his blog post is obsolete by now because of all the changes that happened in Dovecot, the core idea remains the same.  Thank you very much Etienne for your work!

=> https://perot.me/encrypt-specific-incoming-emails-using-dovecot-and-sieve Etienne Perot: Encrypt specific incoming emails using Dovecot and Sieve
=> https://github.com/EtiennePerot/gpgit gpgit GitHub project page
=> https://tildegit.org/solene/gpgit gpgit mirror on tildegit.org

This guide is an extension of my recent email server setup guide:

=> https://dataswamp.org/~solene/2024-07-24-openbsd-email-server-setup.html 2024-07-24 Full-featured email server running OpenBSD

# Threat model

This setup is useful to protect your emails stored on the IMAP server.  If the server or your IMAP account is compromised, the content of your emails will be encrypted and unusable.

You must be aware that email headers are not encrypted: recipients / senders / date / subject remain in clear text even after encryption.  If you already use end-to-end encryption with your recipients, there is no benefit to this setup.

An alternative is to not leave any emails on the IMAP server, although they could still be recovered, as they are written to the disk until you retrieve them.

Personally, I keep many emails on my server, and I am afraid that a 0day vulnerability could be exploited on my email server, allowing an attacker to retrieve the content of all my emails.  OpenSMTPD had critical vulnerabilities a few years ago, including a remote code execution, so it is a realistic threat.

I wrote a privacy guide (for a client) explaining all the information shared through emails, with possible mitigations and their limitations.

=> https://www.ivpn.net/privacy-guides/email-and-privacy/ IVPN: The Technical Realities of Email Privacy

# Setup

This setup makes use of the program `gpgit`, a Perl script that encrypts emails received on standard input using GPG; this is a complicated task because the email structure can be very convoluted.  I have not been able to find any alternative to this script.  In the gpgit repository there is a script to encrypt an existing mailbox (maildir format); that script must be run on the server, I have not tested it yet.

You will configure a "global" (not user-defined) sieve rule that processes all emails before any other sieve filter.  This sieve script triggers a `filter` (a program allowed to modify the email) and passes the email on the standard input of the shell script `encrypt.sh`, which in turn runs `gpgit` with the according username after verifying a gnupg directory exists for them.  If there is no gnupg directory, the email is not encrypted; this allows multiple users on the email server without enforcing encryption for everyone.

If a user has multiple addresses, it is the system account name that is used as the local part of the GPG key address.

## GPGit

Some packages are required for gpgit to work, they are all available on OpenBSD:

```shell
pkg_add p5-Mail-GnuPG p5-List-MoreUtils
```

Download the gpgit git repository and copy its `gpgit` script into `/usr/local/bin/` as an executable:

```
cd /tmp/
git clone https://github.com/EtiennePerot/gpgit
cd gpgit
install -o root -g wheel -m 555 gpgit /usr/local/bin/
```

## Sieve

All the following paths will be relative to the directory `/usr/local/lib/dovecot/sieve/`, you can `cd` into it now.

Create the file `encrypt.sh` with this content, replace the variable `DOMAIN` with the domain configured in the GPG key:

```sh
#!/bin/sh

DOMAIN="puffy.cafe"

NOW=$(date +%s)
DATA="$(cat)"

if test -d ~/.gnupg
then
    echo "$DATA" | /usr/local/bin/gpgit "${USER}@${DOMAIN}"
    NOW2=$(date +%s)
    echo "Email encryption for user ${USER}: $(( NOW2 - NOW )) seconds" | logger -p mail.info
else
    echo "$DATA"
    echo "Email encryption for user for ${USER} none" | logger -p mail.info
fi
```

Make the script executable with `chmod +x encrypt.sh`.  This script will create a new log line in your email logs every time an email is processed, including the username and the time required for encryption (when encryption happens).  You could extend the script to discard the `Subject` header from the email if you want to hide it; I do not provide the implementation as I expect this task to be trickier than it looks if you want to handle all corner cases.

Create the file `global.sieve` with the content:

```sieve
require ["vnd.dovecot.filter"];
filter "encrypt.sh";
```

Compile the sieve rules with `sievec global.sieve`.

## Dovecot

Edit the file `/etc/dovecot/conf.d/90-plugin.conf` to add the following code within the `plugin` block:

```
  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.filter
  sieve_before = /usr/local/lib/dovecot/sieve/global.sieve
  sieve_filter_exec_timeout = 200s
```

You may have `sieve_global_extensions` already set, in that case update its value.

The variable `sieve_filter_exec_timeout` allows the script `encrypt.sh` to run for 200 seconds before being stopped, you should adapt the value to your system.  I came up with 200 seconds to be able to encrypt email with 20MB attachments on an openbsd.amsterdam virtual machine.  On a bare metal server with a Ryzen 5 CPU, it takes less than one second for the same email.

The full file should look like the following (in case you followed my previous email guide):

```
##
## Plugin settings
##

# All wanted plugins must be listed in mail_plugins setting before any of the
# settings take effect. See  for list of plugins and
# their configuration. Note that %variable expansion is done for all values.

plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms

  # From elsewhere to Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.siev

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve

  # for GPG encryption
  sieve_filter_bin_dir = /usr/local/lib/dovecot/sieve
  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.filter
  sieve_before = /usr/local/lib/dovecot/sieve/global.sieve
  sieve_filter_exec_timeout = 200s
}
```

Open the file `/etc/dovecot/conf.d/10-master.conf` and uncomment the variable `default_vsz_limit`, setting its value to `1024M`.  This is required as GPG uses a lot of memory; without this, the process will be killed and the email lost.  I found 1024M to work with attachments up to 45 MB, however you should raise this value if you plan to receive bigger attachments.

Restart dovecot to take account of the changes: `rcctl restart dovecot`.

## User GPG setup

You need to create a GPG keyring for each user who wants encryption; the simplest method is to set up a passwordless keyring and import your public key:

```
$ gpg --quick-generate-key --passphrase '' --batch "$USER"
$ gpg --import public-key-file.asc
$ gpg --edit-key FINGERPRINT_HERE
gpg> sign
[....]
gpg> save
```

If you want to disable GPG encryption for the user, remove the directory `~/.gnupg`.

## Anti-spam service

If you use a spam filter such as rspamd or spamassassin relying on a Bayes filter, it will only work if it processes the emails before they arrive at Dovecot.  In my email setup this is the case, as rspamd runs as an opensmtpd filter and processes the email before it is delivered to Dovecot.

Such a service can have privacy issues, especially if you use encryption.  A Bayes filter works by splitting an email's content into tokens (not exactly words, but almost) and looking for patterns using these tokens; basically each email is split and stored in the anti-spam local database in small parts.  I am not sure one could recreate the emails based on the tokens, but an attacker able to access the token list may get some insight into your email content.  If this is part of your threat model, disable your anti-spam Bayes filter.

# Conclusion

This setup is quite helpful if you want to protect all your emails in storage.  Full disk encryption on the server does not prevent anyone able to connect over SSH (as root or as the email user) from reading the emails; even file recovery is possible while the volume is unlocked (not on the raw disk, but on the software-encrypted volume).  This is where encryption at rest is beneficial.

I know from experience it is complicated to use end-to-end encryption with tech-savvy users, and that it is even unthinkable with regular users.  This is a first step if you need this kind of security (see the threat model section), but you need to remember a copy of all your emails certainly exist on the servers used by the persons you exchange emails with.

Using Firefox remote debugging feature

# Introduction

Firefox has an interesting feature for developers: the ability to connect the Firefox developer tools to a remote Firefox instance.  This can be really interesting in the case of a remote kiosk display, for instance.
The remote debugging does not provide a display of the remote, but it gives you access to the developer tools for tabs opened on the remote.

# Setup

The remote firefox you want to connect to must be started using the command line parameter `--start-debugger-server`.  This will make it listen on the TCP port 6000 on 127.0.0.1.  Be careful, there is another option named `remote-debugging-port` which is not what you want here, but the names can be confusing (trust me, I wasted too much time because of this).

Before starting Firefox, a few knobs must be modified in its configuration.  Either search for the options in `about:config` or create a `user.js` file in the Firefox profile directory with the following content:

```
user_pref("devtools.chrome.enabled", true);
user_pref("devtools.debugger.remote-enabled", true);
user_pref("devtools.debugger.prompt-connection", false);
```

This enables remote management and removes the prompt shown upon each connection; while the prompt is a good safety measure, it is not practical for remote debugging.

When you start Firefox, the URL input bar should have a red background.

# Remote connection

Now, you need to make an SSH tunnel to the remote host where Firefox is running in order to reach the port.  Depending on your use case, a local NAT could be done to expose the port on a network interface or VPN interface, but pay attention to security as this would allow anyone on the network to control the Firefox instance.

The SSH tunnel is quite standard: `ssh -L 6001:127.0.0.1:6000 user@remote-host` (with a placeholder host name) exposes the remote port 6000 locally as 6001; this is important because your own Firefox may be using port 6000 for some reason.

In your own local Firefox instance, visit the page `about:debugging`, add the remote instance `localhost:6001` and then click on Connect on its name on the left panel.  Congratulations, you have access to the remote instance for debugging or profiling websites.

=> static/firefox-debug-add-remote-fs8.png Input the remote address localhost:6001 and click on Add

=> static/firefox-debug-left-panel-fs8.png Click on connect on the left

=> static/firefox-debug-access-fs8.png Enjoy your remote debugging session


# Conclusion

It can be tricky to debug a system you cannot directly see or use, especially a kiosk in production; Firefox's remote debugging feature makes this task much easier.

Full-featured email server running OpenBSD

# Introduction

This blog post is a guide explaining how to setup a full-featured email server on OpenBSD 7.5.  It was commissioned by a customer of my consultancy who wanted it to be published on my blog.

Setting up a modern email stack that does not appear as a spam platform to the world can be a daunting task; this guide covers what you need for a secure, functional and low-maintenance email system.

The features list can be found below:

* email access through IMAP, POP or Webmail
* secure SMTP server (mandatory server to server encryption, personal information hiding)
* state-of-the-art setup to be considered as legitimate as possible
* firewall filtering (bot blocking, all ports closed but the required ones)
* anti-spam

In the example, I will set up a temporary server for the domain `puffy.cafe` with a server using the subdomain `mail.puffy.cafe`.  From there, you can adapt with your own domain.

# Quick reminder

I prepared a few diagrams explaining how all the components work together, in three cases: when sending an email, when the SMTP server receives an email from the outside, and when you retrieve your emails locally.

=> static/img/email-setup-authenticated-mail-delivery.dot.png Authenticated user sending an email to the outside

=> static/img/email-setup-receiving-email.dot.png Outside sending an email to one of our users

=> static/img/email-setup-retrieving-emails.dot.png User retrieving emails for reading

# Packet Filter (PF)

Packet Filter is OpenBSD's firewall.  In our setup, we want all ports to be blocked except the few ones required for the email stack.

The following ports will be required:

* opensmtpd 25/tcp (smtp): used for email delivery from other servers, supports STARTTLS
* opensmtpd 465/tcp (smtps): used to establish a TLS connection to the SMTP server to receive or send emails
* opensmtpd 587/tcp (submission): used to send emails to external servers, supports STARTTLS
* httpd 80/tcp (http): used to generate TLS certificates using ACME
* dovecot 993/tcp (imaps): used to connect to the IMAPS server to read emails
* dovecot 995/tcp (pop3s): used to connect to the POP3S server to download emails
* dovecot 4190/tcp (sieve): used to allow remote management of a user's Sieve rules

Depending on what services you will use, only the opensmtpd ports are mandatory.  In addition, we will open port 22/tcp for SSH.

```pf.conf
set block-policy drop
set loginterface egress
set skip on lo0

# packet normalization
match in all scrub (no-df random-id max-mss 1440)
antispoof quick for { egress }

tcp_ports = "{ smtps smtp submission imaps pop3s sieve ssh http }"

block all
pass out inet
pass out inet6

# allow ICMP (ping)
pass in proto icmp

# allow IPv6 to work
pass in on egress inet6 proto icmp6 all icmp6-type { routeradv neighbrsol neighbradv }
pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state

# allow our services
pass in on egress proto tcp from any to any port $tcp_ports

# default OpenBSD rules
# By default, do not permit remote connections to X11
block return in on ! lo0 proto tcp to port 6000:6010

# Port build user does not need network
block return out log proto {tcp udp} user _pbuild
```

# DNS

If you want to run your own email server, you need a domain name configured with a couple of DNS records about the email server.

## MX records

=> https://en.wikipedia.org/wiki/MX_record Wikipedia page: MX record

The MX records list the servers that outside SMTP servers should use to send us emails; this is the public list of our servers accepting emails for a given domain.  Each record has a priority value: the server with the lowest value should be tried first, and if it does not respond, the server with the next higher value is tried.  This simple mechanism allows setting up a hierarchy.

I highly recommend setting up at least two servers, so if your main server is unreachable (host outage, hardware failure, ongoing upgrade) the emails will be sent to the backup server. Dovecot bundles a program to synchronize mailboxes between servers, one-way or two-way, one shot or continuously.

If you have no MX records in your domain name, it is not possible to send you emails.  It is like asking someone to send you a postcard without giving them any clue about your real address.

Your server hostname can be different from the domain apex (raw domain name without a subdomain), a simple example would be to use `mail.domain.example` for the server name, this will not prevent it from receiving/sending emails using `@domain.example` in email addresses.

In my example, the domain puffy.cafe mail server will be mail.puffy.cafe, giving this MX record in my DNS zone:

```dns
        IN MX     10 mail.puffy.cafe.
```
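If you follow the advice above and add a backup server, it is simply a second record with a higher value; `backup.puffy.cafe` is a hypothetical name used here for illustration:

```dns
        IN MX     10 mail.puffy.cafe.
        IN MX     20 backup.puffy.cafe.
```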

## SPF

=> https://en.wikipedia.org/wiki/Sender_Policy_Framework Wikipedia page: SPF record

The SPF record is certainly the most important piece of the email puzzle to detect spam.  With SPF, the domain name owner can define which servers are allowed to send emails from that domain.  A properly configured spam filter will give a high spam score to incoming emails not sent from a server listed in the sender domain's SPF record.

To ease the configuration, that record can automatically include all MX defined for a domain, but also A/AAAA records, so if you only use your MX servers for sending, a simple configuration allowing MX servers to send is enough.

In my example, only mail.puffy.cafe should be legitimate for sending emails, any future MX server should also be allowed to send emails, so we configure the SPF to allow all MX defined servers to be senders.

```dns
    600 IN TXT     "v=spf1 mx -all"
```

## DKIM

=> https://en.wikipedia.org/wiki/DomainKeys_Identified_Mail Wikipedia page: DKIM signature

When used, DKIM is a system allowing a receiver to authenticate a sender, based on asymmetric cryptographic keys.  The sender publishes its public key in a TXT DNS record before signing all outgoing emails using the private key.  By doing so, receivers can validate the email integrity and make sure it was sent from a server of the domain claimed in the From header.

DKIM is mandatory to not be classified as a spamming server.

The following set of commands will create a 2048-bit RSA key in `/etc/mail/dkim/private/puffy.cafe.key` with its public key in `/etc/mail/dkim/puffy.cafe.pub`; the `umask 077` command will make sure any file created during the process will only be readable by root.  Finally, you need to make the private key readable to the group `_rspamd`.

Note: the umask command will persist in your shell session; if you do not want to create files/directories only readable by root after this, run the set of commands in a new shell and exit from it once you are done.

```
umask 077
install -d -o root -g wheel -m 755 /etc/mail/dkim
install -d -o root -g _dkim -m 775 /etc/mail/dkim/private
openssl genrsa -out /etc/mail/dkim/private/puffy.cafe.key 2048
openssl rsa -in /etc/mail/dkim/private/puffy.cafe.key -pubout -out /etc/mail/dkim/puffy.cafe.pub
chgrp _rspamd /etc/mail/dkim/private/puffy.cafe.key /etc/mail/dkim/private/
chmod 440 /etc/mail/dkim/private/puffy.cafe.key
chmod 775 /etc/mail/dkim/private/
```
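If you prefer not to pollute your interactive shell, the restrictive umask can also be scoped to a subshell; this is a generic sketch with an illustrative path, not part of the DKIM setup itself:

```shell
# the umask change applies only inside the parentheses (a subshell),
# so the parent shell keeps its previous umask
(
  umask 077
  touch /tmp/demo.key    # created with mode 600: owner read/write only
)
ls -l /tmp/demo.key
```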

In this example, we will name the DKIM selector `dkim` to keep it simple.  The selector is the name of the key, this allows having multiple DKIM keys for a single domain.

Add the DNS record like the following, the value in `p` is the public key in the file `/etc/mail/dkim/puffy.cafe.pub`, you can get it as a single line with the command `awk '/PUBLIC/ { $0="" } { printf ("%s",$0) } END { print }' /etc/mail/dkim/puffy.cafe.pub`:

Your registrar may offer to add the entry using a DKIM specific form.  There is nothing wrong with doing so, just make sure the produced entry looks like the entry below.

```
dkim._domainkey IN TXT "v=DKIM1;k=rsa;p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAo3tIFelMk74wm+cJe20qAUVejD0/X+IdU+A2GhAnLDpgiA5zMGiPfYfmawlLy07tJdLfMLObl8aZDt5Ij4ojGN5SE1SsbGC2MTQGq9L2sLw2DXq+D8YKfFAe0KdYGczd9IAQ9mkYooRfhF8yMc2sMoM75bLxGjRM1Fs1OZLmyPYzy83UhFYq4gqzwaXuTvxvOKKyOwpWzrXzP6oVM7vTFCdbr8E0nWPXWKPJhcd10CF33ydtVVwDFp9nDdgek3yY+UYRuo/iJvdcn2adFoDxlE6eXmhGnyG4+nWLNZrxIgokhom5t5E84O2N31YJLmqdTF+nH5hTON7//5Kf/l/ubwIDAQAB"
```
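If you want to double check what the awk one-liner does, here it is run against a small sample file; the base64 content is a shortened placeholder, not a real key:

```shell
# create a sample public key file (placeholder content)
cat > /tmp/sample.pub <<'EOF'
-----BEGIN PUBLIC KEY-----
MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAo3tIFelMk74wm+cJe20q
-----END PUBLIC KEY-----
EOF

# blank out the BEGIN/END lines and join the remaining lines
awk '/PUBLIC/ { $0="" } { printf ("%s",$0) } END { print }' /tmp/sample.pub
```

The two header/footer lines are removed and the base64 lines are joined into a single string, ready to paste into the `p=` field.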

## DMARC

=> https://en.wikipedia.org/wiki/DMARC Wikipedia page: DMARC record

The DMARC record is an extra mechanism that comes on top of SPF/DKIM, while it does not do much by itself, it is important to configure it.

DMARC could be seen as a public notice explaining to servers receiving emails whose sender looks like your domain name (legit or not) what they should do if SPF/DKIM does not validate.

As of 2024, DMARC offers three actions for receivers:

* do nothing but make a report to the domain owner
* "quarantine" mode: tell the receiver to be suspicious without rejecting it, the result will depend on the receiver (most of the time it will be flagged as spam) and make a report
* "reject" mode: tell the receiver to not accept the email and make a report

In my example, I want invalid SPF/DKIM emails to be rejected.  It is quite arbitrary, but I prefer all invalid emails from my domain to be discarded rather than ending up in a spam directory, so `p` and `sp` are set to `reject`.  In addition, if my own server is misconfigured I will be notified about delivery issues sooner than if emails were silently put into quarantine.

An email address should be provided to receive DMARC reports; this is what the `rua` field is for.  The reports are barely readable and I never made use of them, but the email address should exist.

The field `aspf` is set to `r` (relax), basically this allows any servers with a hostname being a subdomain of `.puffy.cafe` to send emails for `@puffy.cafe`, while if this field is set to `s` (strict), the domain of the sender should match the domain of the email server (`mail.puffy.cafe` would only be allowed to send for `@mail.puffy.cafe`).

=> https://mxtoolbox.com/dmarc/details/dmarc-tags Mx Toolbox website: DMARC tags list

```
_dmarc        IN TXT     "v=DMARC1;p=reject;rua=mailto:dmarc@puffy.cafe;sp=reject;aspf=r;"
```

## PTR (Reverse DNS)

=> https://en.wikipedia.org/wiki/Reverse_DNS_lookup Wikipedia page: PTR record

An older mechanism used to prevent spam was to block, or consider as spam, any SMTP server whose advertised hostname did not match the result of the reverse lookup of its IP.

Let's say "mail.foobar.example" (IP: A.B.C.D) is sending an email to my server, if the result of the DNS request to resolve the PTR of A.B.C.D is not "mail.foobar.example", the email would be considered as spam or rejected.  While this is superseded by SPF/DKIM and annoying as it is not always possible to define a PTR for a public IP, the reverse DNS setup is still a strong requirement to not be considered as a spamming platform.

Make sure the PTR matches the system hostname and not the domain name itself, in the example above the PTR should be `mail.foobar.example` and not `foobar.example`.

# System configuration

## Acme-client

The first step is to obtain a valid TLS certificate; this requires configuring acme-client and httpd, then starting the httpd daemon.

Copy the acme-client example `cp /etc/examples/acme-client.conf /etc/`

Modify `/etc/acme-client.conf` and edit only the last entry to configure your own domain, mine looks like this:

```
#
# $OpenBSD: acme-client.conf,v 1.5 2023/05/10 07:34:57 tb Exp $
#
authority letsencrypt {
	api url "https://acme-v02.api.letsencrypt.org/directory"
	account key "/etc/acme/letsencrypt-privkey.pem"
}

authority letsencrypt-staging {
	api url "https://acme-staging-v02.api.letsencrypt.org/directory"
	account key "/etc/acme/letsencrypt-staging-privkey.pem"
}

authority buypass {
	api url "https://api.buypass.com/acme/directory"
	account key "/etc/acme/buypass-privkey.pem"
	contact "mailto:me@example.com"
}

authority buypass-test {
	api url "https://api.test4.buypass.no/acme/directory"
	account key "/etc/acme/buypass-test-privkey.pem"
	contact "mailto:me@example.com"
}

domain mail.puffy.cafe {
    # you can remove the line "alternative names" if you do not need extra subdomains
    # associated to this certificate
    # imap.puffy.cafe is purely an example, I do not need it
	alternative names { imap.puffy.cafe pop.puffy.cafe }
	domain key "/etc/ssl/private/mail.puffy.cafe.key"
	domain full chain certificate "/etc/ssl/mail.puffy.cafe.fullchain.pem"
	sign with letsencrypt
}
```

Now, configure httpd, starting from the OpenBSD example: `cp /etc/examples/httpd.conf /etc/`

Edit `/etc/httpd.conf`: we want the first block to match all domains instead of only "example.com", and we do not need the second block listening on 443/tcp (except if you want to run an https server with some content, but you are on your own then).  The resulting file should look like the following:

```httpd.conf
# $OpenBSD: httpd.conf,v 1.22 2020/11/04 10:34:18 denis Exp $

server "*" {
	listen on * port 80
	location "/.well-known/acme-challenge/*" {
		root "/acme"
		request strip 2
	}
	location * {
		block return 302 "https://$HTTP_HOST$REQUEST_URI"
	}
}
```

Enable and start httpd with `rcctl enable httpd && rcctl start httpd`.

Run `acme-client -v mail.puffy.cafe` to generate the certificate with some verbose output (if something goes wrong, you will have a clue).

If everything went fine, you should have the full chain certificate in `/etc/ssl/mail.puffy.cafe.fullchain.pem` and the private key in `/etc/ssl/private/mail.puffy.cafe.key`.

## Rspamd

You will use rspamd to filter spam and sign outgoing emails for DKIM.

Install rspamd and the filter to plug it to opensmtpd:

```shell
pkg_add rspamd-- opensmtpd-filter-rspamd
```

You need to configure rspamd to sign outgoing emails with your DKIM private key, to proceed, create the file `/etc/rspamd/local.d/dkim_signing.conf` (the filename is important):

```
# our usernames do not contain the domain part
# so we need to enable this option
allow_username_mismatch = true;

# this configures the domain puffy.cafe to use the selector "dkim"
# and where to find the private key
domain {
    puffy.cafe {
        path = "/etc/mail/dkim/private/puffy.cafe.key";
        selector = "dkim";
    }
}
```

For better performance, you need to use redis as a cache backend for rspamd:

```shell
rcctl enable redis
rcctl start redis
```

Now you can start rspamd:

```
rcctl enable rspamd
rcctl start rspamd
```

For extra information about rspamd (like statistics or its web UI), I wrote about it in 2021:

=> https://dataswamp.org/~solene/2021-07-13-smtpd-rspamd.html Older blog post: 2021-07-13 Filtering spam using Rspamd and OpenSMTPD on OpenBSD

### Alternatives

If you do not want to use rspamd, it is possible to replace the DKIM signing part using `opendkim`, `dkimproxy` or `opensmtpd-filter-dkimsign`.  The spam filter could be either replaced by the featureful `spamassassin` available as a package, or partially with the base system program `spamd` (it does not analyze emails).

This guide only focuses on rspamd, but it is important to know alternatives exist.

## OpenSMTPD

OpenSMTPD configuration file on OpenBSD is `/etc/mail/smtpd.conf`, here is a working configuration with a lot of comments:

```smtpd.conf
## this defines the paths for the X509 certificate
pki puffy.cafe cert "/etc/ssl/mail.puffy.cafe.fullchain.pem"
pki puffy.cafe key "/etc/ssl/private/mail.puffy.cafe.key"
pki puffy.cafe dhe auto

## this defines how the local part of email addresses can be split
# defaults to '+', so solene+foobar@domain matches user
# solene@domain. Due to the '+' character being a regular source of issues
# with many online forms, I recommend using a character such as '_',
# '.' or '-'. This feature is very handy to generate infinite unique email
# addresses without pre-defining aliases.
# Using '_', solene_openbsd@domain and solene_buystuff@domain lead to the
# same address
smtp sub-addr-delim '_'

## this defines an external filter
# rspamd does dkim signing and spam filter
filter rspamd proc-exec "filter-rspamd"

## this defines which file will contain aliases
# this can be used to define groups or redirect emails to users
table aliases file:/etc/mail/aliases

## this defines all the ports to use
# mask-src hides system hostname, username and public IP when sending an email
listen on all port 25  tls         pki "puffy.cafe" filter "rspamd"
listen on all port 465 smtps       pki "puffy.cafe" auth mask-src filter "rspamd"
listen on all port 587 tls-require pki "puffy.cafe" auth mask-src filter "rspamd"

## this defines actions
# either deliver to lmtp or to an external server
action "local" lmtp "/var/dovecot/lmtp" alias 
action "outbound" relay

## this defines what should be done depending on some conditions
# receive emails (local or from external server for "puffy.cafe")
match from any for domain "puffy.cafe" action "local"
match from local for local action "local"

# send email (from local or authenticated user)
match from any auth for any action "outbound"
match from local for any action "outbound"
```

In addition, you can configure the advertised hostname by editing the file `/etc/mail/mailname`: for instance my machine's hostname is `ryzen` so I need this file to advertise it as `mail.puffy.cafe`.

Restart OpenSMTPD with `rcctl restart smtpd`.

### TLS

For ports using STARTTLS (25 and 587), there are different options with regard to TLS encryption.

* do not allow STARTTLS
* offer STARTTLS but allow not using it (option `tls`)
* require STARTTLS: drop the connection when the remote peer does not ask for STARTTLS (option `tls-require`)
* require STARTTLS: drop connection when no STARTTLS, and verify the remote certificate (option `tls-require verify`)

It is recommended to enforce STARTTLS on port 587 as it is used by authenticated users to send emails, preventing them from sending emails without network encryption.

On port 25, used by external servers to reach yours, it is important to allow STARTTLS because most servers will deliver emails over an encrypted TLS session; however, it is your choice to enforce it or not.

Enforcing STARTTLS might break email delivery from some external servers that are outdated or misconfigured (or bad actors).

### User management

By default, OpenSMTPD is configured to deliver email to valid users in the system.  In my example, if user `solene` exists, then email address `solene@puffy.cafe` will deliver emails to `solene` user mailbox.

Of course, as you do not want the system daemons to receive emails, a file contains aliases to redirect emails from one user to another, or simply discard them.

In `/etc/mail/aliases`, you can redirect emails to your username by adding a new line, in the example below I will redirect root emails to my user.

```
root: solene
```

It is possible to redirect to multiple users using a comma to separate them; this is handy if you want to create a local group delivering emails to multiple users.
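For example, a hypothetical `support` group delivering to two users (both names are illustrative) could be declared like this in `/etc/mail/aliases`:

```
support: solene, jdoe
```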

Instead of a user, it is possible to append the incoming emails to a file, pipe them to a command or return an SMTP code.  The aliases(5) man page contains all you need to know.

=> https://man.openbsd.org/aliases.5 OpenBSD manual pages: aliases(5)

Every time you modify this file, you need to run the command `smtpctl update table aliases` to reload the aliases table in OpenSMTPD memory.

You can add a new email account by creating a new user with a shell preventing login:

```
useradd -m -s /sbin/nologin username_here
passwd username_here
```

This user will not be able to do anything on the server except connecting to SMTP/IMAP/POP.  They will not be able to change their password either!

### Handling extra domains

If you need to handle emails for multiple domains, this is rather simple:

* Add this line to the file `/etc/mail/smtpd.conf` by changing `puffy.cafe` to the other domain name: `match from any for domain "puffy.cafe" action "local"`
* Configure the other domain DNS MX/SPF/DKIM/DMARC
* Configure `/etc/rspamd/local.d/dkim_signing.conf` to add a new block with the other domain, the dkim selector and the dkim key path
* The PTR does not need to be modified as it should match the machine hostname advertised over SMTP, and it is a unique value anyway

If you want to use a different aliases table for the other domain, you need to create a new aliases file and configure `/etc/mail/smtpd.conf` accordingly where the following lines should be added:

```
table lambda file:/etc/mail/aliases-lambda

action "local_mail_lambda" lmtp "/var/dovecot/lmtp" alias 

match from any for domain "lambda-puffy.eu" action "local_mail_lambda"
```

Note that the users will be the same for all the domains configured on the server.  If you want separate users per domain, or for "user a" on domain A and "user a" on domain B to be different persons / logins, you would need to set up virtual users instead of using system users.  Such a setup is beyond the scope of this guide.

### Without Dovecot

It is possible to not use Dovecot.  Such a setup can suit users who would like to download the maildir directory to their local computer using rsync; this is a one-way process and does not allow sharing a mailbox across multiple devices.  It reduces maintenance and attack surface at the cost of convenience.

This may work as a two-way access (untested) when using a software such as unison to keep both the local and remote directories synchronized, but be prepared to manage file conflicts!

If you want this setup, replace the following line in smtpd.conf

```
action "local" lmtp "/var/dovecot/lmtp" alias 
```

with this line if you want to store the emails in the Maildir format (a directory per email folder, a file per email); emails will be stored in the directory "Maildir" in each user's home:

```
action "local" maildir "~/Maildir/" junk alias 
```

or with this line if you want to keep the mbox format (a single file with emails appended to it, not practical); the emails will be stored in /var/mail/$user:

```
action "local" mbox alias 
```

=> https://en.wikipedia.org/wiki/Maildir Wikipedia page: Maildir format
=> https://en.wikipedia.org/wiki/Mbox Wikipedia page: Mbox format

## Dovecot

Dovecot is an important piece of software for the domain end users, it provides protocols like IMAP or POP3 to read emails from a client.  It is the most popular open source IMAP/POP server available (the other being Cyrus IMAP).

Install dovecot with the following command line:

```
pkg_add dovecot-- dovecot-pigeonhole--
```

Dovecot has a lot of configuration files in `/etc/dovecot/conf.d/`; most of them are commented and ready to be modified, and you will have to edit a few of them.  This guide provides the content of files with empty lines and comments stripped so you can quickly check whether your file is correct; you can use the command `awk '$1 !~ /^#/ && $1 ~ /./'` on a file to display only its "useful" content (awk will not modify the file).
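As a quick illustration of that awk filter, here it is run on a throwaway sample file (not a real Dovecot config): it drops comment and empty lines and prints everything else.

```shell
# build a small sample configuration file
cat > /tmp/sample-conf <<'EOF'
# a comment
ssl = yes

  # an indented comment
port = 993
EOF

# print only lines that are non-empty and whose first field
# does not start with a comment marker
awk '$1 !~ /^#/ && $1 ~ /./' /tmp/sample-conf
```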

Modify `/etc/dovecot/conf.d/10-ssl.conf` and search the lines `ssl_cert` and `ssl_key`, change their values to your certificate full chain and private key.

Generate a Diffie-Hellman parameters file for perfect forward secrecy; this will make each TLS negotiation unique, so if the private key ever leaks, past TLS communications will remain safe.

```shell
openssl dhparam -out /etc/dovecot/dh.pem 4096
chown _dovecot:_dovecot /etc/dovecot/dh.pem
chmod 400 /etc/dovecot/dh.pem
```

The file (filtered of all comments/empty lines) should look like the following:

```dovecot
ssl_cert = </etc/ssl/mail.puffy.cafe.fullchain.pem
ssl_key = </etc/ssl/private/mail.puffy.cafe.key
ssl_dh = </etc/dovecot/dh.pem
```

### IMAP

=> https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol Wikipedia page: IMAP protocol

IMAP is an efficient protocol: it returns the email headers per directory, so you do not have to download all your emails to view a directory listing; emails are downloaded upon read (by default in most email clients).  It allows some cool features like server-side search, incoming email sorting with Sieve filters or multi-device access.

Edit `/etc/dovecot/conf.d/20-imap.conf` and configure the last lines so they match the following:

```dovecot
protocol imap {
  mail_plugins = $mail_plugins imap_sieve
  mail_max_userip_connections = 25
}
```

The number of connections per user/IP should be high if you have an email client tracking many folders: in IMAP a connection is required for each folder, so the number of connections can quickly increase.  On top of that, if you have multiple devices behind the same public IP you could quickly reach the limit.  I found 25 worked fine for me with 3 devices.

### POP

=> https://en.wikipedia.org/wiki/Post_Office_Protocol Wikipedia page: POP protocol

POP3 is a pretty old protocol that is rarely considered by users, but I still consider it a viable alternative to IMAP depending on your needs.

A major incentive for using POP is that it downloads all emails locally and then removes them from the server.  As we have no tooling to encrypt emails stored on remote email servers, POP3 is a must if you do not want to leave any email on the server.  POP3 does not support remote folders, so you cannot use Sieve filters on the server to sort your emails and then download them as-is; a POP3 client downloads the Inbox and then sorts the emails locally.

It can support multiple devices under some conditions: if you delete the emails after X days, your devices should synchronize before the emails are removed.  In that case they will all have the emails stored locally, but they will not be synced together: if both computers A and B are up-to-date and you delete an email on A, it will still be on B.

There are no changes required for POP3 in Dovecot as the defaults are good enough.

### JMAP

For information, a replacement for IMAP called JMAP is in development; it is meant to be better than IMAP in every way and also includes calendar and address book management.

JMAP implementations are young but exist, although support in email clients is almost non-existent.  For instance, it seems Mozilla Thunderbird is not interested in it: an issue in their bug tracker about JMAP from December 2016 only has a couple of comments from people who would like to see it happen, nothing more.

=> https://bugzilla.mozilla.org/show_bug.cgi?id=1322991 Issue 1322991: Add support for new JMAP protocol

From the JMAP website page listing compatible clients, I only recognized the name "aerc" which is a modern console email client.

=> https://jmap.io/software.html#clients JMAP project website: clients list

### Sieve (filtering rules)

=> https://en.wikipedia.org/wiki/Sieve_(mail_filtering_language) Wikipedia page: Sieve

Dovecot has a plugin offering Sieve filters: rules applied to received emails going into your mailbox, whether to sort them into dedicated directories, mark them read or block some addresses.  That plugin is called pigeonhole.

You will need Sieve to enable the spam filter learning system when moving emails from/to the Junk folder, as this learning is triggered by a Sieve rule.  It improves the ability of rspamd's Bayesian filter (a method using tokens to classify information; the story of the person behind it is interesting) to detect spam accurately.

Edit `/etc/dovecot/conf.d/90-plugin.conf` with the following content:

```
plugin {
  sieve_plugins = sieve_imapsieve sieve_extprograms

  # From elsewhere to Spam folder
  imapsieve_mailbox1_name = Spam
  imapsieve_mailbox1_causes = COPY
  imapsieve_mailbox1_before = file:/usr/local/lib/dovecot/sieve/report-spam.sieve

  # From Spam folder to elsewhere
  imapsieve_mailbox2_name = *
  imapsieve_mailbox2_from = Spam
  imapsieve_mailbox2_causes = COPY
  imapsieve_mailbox2_before = file:/usr/local/lib/dovecot/sieve/report-ham.sieve

  sieve_pipe_bin_dir = /usr/local/lib/dovecot/sieve

  sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment
}
```

This piece of configuration was taken from the official Dovecot documentation: https://doc.dovecot.org/configuration_manual/howto/antispam_with_sieve/ .  It will trigger shell scripts calling rspamd to make it learn what spam looks like, and what is legitimate (ham).  One script will run when an email is moved out of the spam directory (ham), another one when an email is moved into the spam directory (spam).

Modify `/etc/dovecot/conf.d/15-mailboxes.conf` to add the following snippet inside the block `namespace inbox { ... }`, it will associate the Junk directory as the folder containing spam and automatically create it if it does not exist:

```
  mailbox Spam {
    auto = create
    special_use = \Junk
  }
```

To make this work completely, you need to write the two extra sieve filters that will trigger the scripts:

Create `/usr/local/lib/dovecot/sieve/report-spam.sieve`

```sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-spam.sh" [ "${username}" ];
```

Create `/usr/local/lib/dovecot/sieve/report-ham.sieve`

```sieve
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables"];

if environment :matches "imap.mailbox" "*" {
  set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
  stop;
}

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn-ham.sh" [ "${username}" ];
```

Create `/usr/local/lib/dovecot/sieve/sa-learn-ham.sh`

```shell
#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_ham
```

Create `/usr/local/lib/dovecot/sieve/sa-learn-spam.sh`

```shell
#!/bin/sh
exec /usr/local/bin/rspamc -d "${1}" learn_spam
```

Make the two scripts executable with `chmod +x /usr/local/lib/dovecot/sieve/sa-learn-spam.sh /usr/local/lib/dovecot/sieve/sa-learn-ham.sh`.

Run the following command to compile the sieve filters:

```
sievec /usr/local/lib/dovecot/sieve/report-spam.sieve
sievec /usr/local/lib/dovecot/sieve/report-ham.sieve
```

### Manage Sieve

By default, Sieve rules live in a file in the user's home directory; however, there is a standard protocol named "managesieve" to manage Sieve filters remotely from an email client.

It is enabled out of the box in Dovecot configuration, although you need to make sure you open the port 4190/tcp in the firewall if you want to allow users to use it.

### Start the service

Once you configured everything, make sure that dovecot service is enabled, and then start / restart it:

```
rcctl enable dovecot
rcctl start dovecot
```

# Webmail

A webmail allows your users to read and send emails from a web interface instead of having to configure a local email client.  While convenient, it enlarges the attack surface, and webmails are regularly affected by vulnerabilities, so you may prefer to avoid running one on your server.

The two most popular open source webmails are Roundcube mail and SnappyMail (a fork of the abandoned Rainloop); they both have pros and cons.

## Roundcube mail setup

Roundcube is packaged in OpenBSD; it will pull in all required dependencies and occasionally receives backported security updates.

Install the package:

```
pkg_add roundcubemail
```

When installing the package, you will be prompted for a database backend for PHP.  If you have one or two users, I highly recommend choosing SQLite as it will work fine without requiring a running daemon, thus less maintenance and fewer server resources locked.  If you plan to have a lot of users, there are no wrong picks between MySQL and PostgreSQL, but if you already have one of them running it would be better to reuse it for Roundcube.

Specific instructions for installing Roundcube are provided by the package README in `/usr/local/share/doc/pkg-readmes/roundcubemail`.

We need to enable a few PHP modules to make Roundcube mail work:

```
ln -s /etc/php-8.2.sample/zip.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/intl.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/opcache.ini /etc/php-8.2/
ln -s /etc/php-8.2.sample/pdo_sqlite.ini /etc/php-8.2/
```

Note that more PHP modules may be required if you enable extra features and plugins in Roundcube.

PHP is ready to be started:

```
rcctl enable php82_fpm
rcctl start php82_fpm
```

Add the following blocks to `/etc/httpd.conf`, make sure you opened the port 443/tcp in your `pf.conf` and that you reloaded it with `pfctl -f /etc/pf.conf`:

```
server "mail.puffy.cafe" {

    listen on egress tls

    tls key "/etc/ssl/private/mail.puffy.cafe.key"
    tls certificate "/etc/ssl/mail.puffy.cafe.fullchain.pem"

    root "/roundcubemail"

    directory index index.php

    location "*.php" {
        fastcgi socket "/run/php-fpm.sock"
    }
}

types {
    include "/usr/share/misc/mime.types"
}
```

Restart httpd with `rcctl restart httpd`.

You need to configure Roundcube with a 24 byte security key and set up the database: edit the file `/var/www/roundcubemail/config/config.inc.php`:

Search for the variable `des_key` and replace its value with the output of the command `tr -dc '[:print:]' < /dev/urandom | fold -w 24 | head -n 1`, which generates a 24 byte random string.  If the string contains a quote character, either escape it by prefixing it with a `\` or generate a new string.
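
The key generation can be done in one go; here is a self-contained sketch where I narrow the character set to alphanumerics (my own choice) so that no escaping is ever needed:

```shell
#!/bin/sh
# generate a 24 character random key for des_key, using only
# alphanumeric characters so no quote escaping is required
tr -dc 'A-Za-z0-9' < /dev/urandom | fold -w 24 | head -n 1
```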

For the database, you need to search the variable `db_dsnw`.

If you use SQLite, replace this line

```
$config['db_dsnw'] = 'sqlite:///roundcubemail/db/sqlite.db?mode=0660';
```

with this line:

```
$config['db_dsnw'] = 'sqlite:///db/sqlite.db?mode=0660';
```

If you chose MySQL/MariaDB or PostgreSQL, replace this line:

```
$config['db_dsnw'] = 'mysql://roundcube:pass@localhost/roundcubemail';
```

with

```
$config['db_dsnw'] = 'mysql://USER:PASSWORD@localhost/DATABASE_NAME';
```

Where `USER`, `PASSWORD` and `DATABASE_NAME` must match a user and database created in the backend.
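
If you picked MariaDB, the user and database can be created like this (the names and password below are placeholders of my choosing, adapt them to your DSN):

```
mysql -u root

CREATE DATABASE roundcubemail;
CREATE USER 'roundcube'@'localhost' IDENTIFIED BY 'PASSWORD';
GRANT ALL PRIVILEGES ON roundcubemail.* TO 'roundcube'@'localhost';
FLUSH PRIVILEGES;
```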


Because PHP is chrooted on OpenBSD and the OpenSMTPD configuration enforces TLS on port 587, the TLS trust files must be copied into the chroot for TLS to work:

```
mkdir -p /var/www/etc/ssl
cp -p /etc/ssl/cert.pem /etc/ssl/openssl.cnf /var/www/etc/ssl/
```

To make sure the files `cert.pem` and `openssl.cnf` stay in sync after upgrades, add the two commands to a file `/etc/rc.local` and make this file executable.  This script runs at every boot and is the best place for this kind of file copy.

If your IMAP and SMTP hosts are not on the same server where Roundcube is installed, adapt the variables `imap_host` and `smtp_host` to the server name.

If Roundcube mail is running on the same server where OpenSMTPD is running, you need to disable certificate validation because `localhost` will not match the certificate and authentication will fail.  Change `smtp_host` line to `$config['smtp_host'] = 'tls://127.0.0.1:587';` and add this snippet to the configuration file:

```
$config['smtp_conn_options'] = array(
'ssl' => array('verify_peer' => false, 'verify_peer_name' => false),
'tls' => array('verify_peer' => false, 'verify_peer_name' => false));
```

From here, Roundcube mail should work when you load the domain configured in `httpd.conf`.

For a more in-depth guide to installing and configuring Roundcube, Bruno Flückiger wrote an excellent guide:

=> https://www.bsdhowto.ch/roundcube.html Install Roundcube on OpenBSD

# Hardening

It is always possible to improve the security of this stack: none of the following settings are mandatory, but they can be interesting depending on your needs.

## Always allow the sender per email or domain

It is possible to configure rspamd to force it to accept emails from a given email address or domain, bypassing the anti-spam.

To proceed, edit the file `/etc/rspamd/local.d/multimap.conf` to add this content:

```
local_wl_domain {
        type = "from";
        filter = "email:domain";
        map = "$CONFDIR/local.d/whitelist_domain.map";
        symbol = "LOCAL_WL_DOMAIN";
        score = -10.0;
        description = "domains that are always accepted";
}

local_wl_from {
        type = "from";
        map = "$CONFDIR/local.d/whitelist_email.map";
        symbol = "LOCAL_WL_FROM";
        score = -10.0;
        description = "email addresses that are always accepted";
}
```

Create the files `/etc/rspamd/local.d/whitelist_domain.map` and `/etc/rspamd/local.d/whitelist_email.map` using the command `touch`.

Restart the service rspamd with `rcctl restart rspamd`.

The created files use a simple syntax: add a line for each entry you want to allow:

* a domain name in `/etc/rspamd/local.d/whitelist_domain.map` to allow the domain
* an email address in `/etc/rspamd/local.d/whitelist_email.map` to allow this address

There is no need to restart or reload rspamd after changing the files.

The same technique can be reused to block domains or addresses directly in rspamd by assigning a high positive score.
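
For example, a hypothetical blocklist counterpart to the allowlist above could look like this in `/etc/rspamd/local.d/multimap.conf` (the map file has to be created as well):

```
local_bl_domain {
        type = "from";
        filter = "email:domain";
        map = "$CONFDIR/local.d/blacklist_domain.map";
        symbol = "LOCAL_BL_DOMAIN";
        score = 10.0;
        description = "domains that are always rejected";
}
```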

## Block bots

I published on my blog a script and related configuration to parse OpenSMTPD logs and block the bad actors with PF.

=> https://dataswamp.org/~solene/2023-06-22-opensmtpd-block-attempts.html 2023-06-22 Ban scanners IPs from OpenSMTP logs

This includes an ignore file if you do not want some IPs to be blocked.

## Split the stack

If you want to improve your email setup security further, the best method is to split each part into dedicated systems.

As Dovecot is responsible for storing and exposing emails to users, this component would be safer on a dedicated system: if another component of the email stack is compromised, the mailboxes will not be exposed.

## Network attack surface reduction

If this does not go against usability of the email server users, I strongly recommend limiting the publicly opened ports in the firewall to the minimum: 25, 80, 443, 465 and 587.  This prevents attackers from exploiting any network related 0-day or unpatched vulnerability in non-exposed services such as Dovecot.

A VPN should be deployed to allow users to reach Dovecot services (IMAP, POP) and other services if any.

The SSH port could be removed from the public ports as well; however, make sure your hosting provider offers serial/VNC/remote access to the system, because if the VPN stops working you will not be able to log into the system over SSH to debug it.

# Email client configuration

If everything was done correctly so far, you should have a complete email stack fully functional.

Here are the connection information to use your service:

* IMAP/POP3/SMTP login: username on the remote system (the username does not include the `@` part)
* IMAP/POP3/SMTP password: password of the remote system user
* IMAP/POP3 server: dovecot server hostname
* IMAP/POP3 port: 993 for IMAPS and 995 for POP3S (TLS is enabled)
* SMTP server: opensmtpd server hostname
* SMTP port: either 465 in SSL/TLS mode (encryption forced), or 587 in STARTTLS mode (encryption not enforced depending on OpenSMTPD configuration)

The webmail, if any, will be available at the address configured in `httpd.conf`, using the same credentials as above.

# Verify the setup

There is an online service that provides you a random email address to send a test email to; their website then displays whether the SPF, DKIM, DMARC and PTR records are correctly configured.
=> https://www.mail-tester.com www.mail-tester.com

The score you want on their website is no less than 10/10.  The service can report meaningless issues like "the email was poorly formatted" or "you did not include an unsubscribe link"; they are not relevant for the current test.

While it used to be completely free, the service now asks you to pay after three checks if you do not want to wait 24 hours; the limit is tracked per public IP address.

# Maintenance

## Running processes

The following processes should always be running; using a program like monit, Zabbix or reed-alert to notify you when they stop could be a good idea:

* dovecot
* httpd
* redis
* rspamd
* smtpd
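
The list above can also be checked from a script using `rcctl`, for example from a cron job; a minimal sketch:

```shell
#!/bin/sh
# report every email stack service that is not running
for service in dovecot httpd redis rspamd smtpd; do
    rcctl check "$service" > /dev/null || echo "$service is not running"
done
```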

## Certificates renewal

In addition, the TLS certificate should be renewed regularly as ACME generated certificates are valid for a few months.  Edit the root crontab with `crontab -e` as root to add this line:

```
10 4 * * 0 -s acme-client mail.puffy.cafe && rcctl restart dovecot httpd smtpd
```

This will try to renew the certificate for `mail.puffy.cafe` every Sunday at 04h10 and upon renewal restart the services using the certificate: dovecot, httpd and smtpd.

## All about logs

If you need to find some logs, here is where each component writes them:

* dovecot: `/var/log/maillog`
* httpd: `/var/log/daemon` for the daemon, access logs in `/var/www/logs/access.log` and errors logs in `/var/www/logs/error.log`
* redis: `/var/log/daemon`
* rspamd: `/var/log/rspamd/rspamd.log` and its web UI on port 11334 (only on localhost by default, a SSH tunnel can be handy)
* smtpd: `/var/log/maillog`
* roundcube: `/var/www/roundcubemail/logs/errors.log` and `/var/www/roundcubemail/logs/sendmail.log`

Log rotation for these new logs can be configured in `/etc/newsyslog.conf` with these lines (take only what you need):

```newsyslog
/var/log/rspamd/rspamd.log		600  7     500  *     Z "pkill -USR1 -u root -U root -x rspamd"
/var/www/roundcubemail/logs/errors.log	600  7     500  *     Z
/var/www/roundcubemail/logs/sendmail.log 600 7     500  *     Z
```

## Disk space

Finally, OpenSMTPD will stop delivering emails locally if the `/var` partition has less than 4% of free disk space.  Be sure to monitor the disk space of this partition, otherwise you may stop receiving emails for a while before noticing something is wrong.
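
A minimal monitoring sketch could look like this; the 95% threshold and the warning message are my own choices:

```shell
#!/bin/sh
# warn when /var usage crosses 95%, i.e. less than 5% free;
# column 5 of "df -kP" output is the capacity percentage
USED=$(df -kP /var | awk 'NR==2 { sub("%", "", $5); print $5 }')
if [ "$USED" -ge 95 ]; then
    echo "WARNING: /var is ${USED}% full, local mail delivery may stop"
fi
```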

# Conclusion

Congratulations, you configured a whole email stack that will allow you to send emails to the world, using your own domain and hardware.  Keeping your system up to date is important as you have network services exposed to the wild Internet.

Even with a properly configured setup featuring SPF/DKIM/DMARC/PTR, emails are not guaranteed to avoid the spam directory of your recipients.  The IP reputation of your SMTP server also counts, and so does the domain name extension (I have a `.pw` domain, and I learned too late that it is almost always considered spam because it is not mainstream).

Cloud gaming review: Xbox xCloud and Amazon Luna+

# Introduction

There are not many cloud gaming services around, here is a quick summary of the Xbox Cloud Gaming and Amazon Luna services.

# Xbox Cloud Gaming (Microsoft)

The Xbox Cloud Gaming service is available to Xbox Game Pass Ultimate subscribers at a price of 17.99 $/€ per month.

## pros

* Huge game library, including games available on their first day of release
* Game library can be installed on Xbox and Windows and game saves are shared (THIS CAN NOT BE USED ON LINUX!)
* Low bandwidth usage (rarely more than 750 kB/s average)
* Per-game customized layout for touchscreen devices
* 14 day trial for 1€ (there are many giveaways or price cut vouchers available here and there)

## cons

* Poor video quality in 1080p due to a low bitrate
* Highest option is 1080@60Hz
* Streamed games are running on Xbox Series S hardware, so some games are locked at 30 FPS (hello Starfield)
* The Better xCloud userscript is required to play in the best conditions
* Saves can not be exported to reuse with Steam/GOG games

=> https://github.com/redphx/better-xcloud better-xcloud GitHub page: Userscript to improve Xbox Cloud Gaming (xCloud)

## Conclusion

The Xbox Ultimate subscription bundles a game library for Xbox and Windows with high-priced titles, which makes the subscription itself quite cheap: a single high-priced game costs more than four months of subscription.  However, I have mixed feelings about the associated streaming service: on one hand it works perfectly fine (no queue, input lag is ok), but the video quality is not fantastic on a 1080p screen.  The service seems perfectly fitted for smartphones: every touchscreen-compatible game has a layout customized for that game, making the touchscreen a lot more usable than displaying a full controller over the screen when you only need a few buttons.  Combined with the low bandwidth usage, it makes a good service for handheld devices.  On desktop, you may want to use the streaming to try a game before installing it, but not much more.

There is no client for Android TV, so you can not use these devices unless you can run a web browser on them.

Really, with a better bitrate, the service would be a blast (not for 4k and/or 120 fps users though), but at the moment it is only ok as a game library, or as a streaming service to play on small or low resolution screens.

# Luna (Amazon)

The Luna+ cloud gaming service is available for 9.99 $/€ per month, or is included for people who have an Amazon Prime account.

## pros

* Couch coop link to invite a friend
* You can play some games you own in GOG/Epic/Ubisoft libraries
* Bundled with Amazon Prime (if you have Prime, you can use Luna)
* Low bandwidth usage (rarely more than 1 MB/s average)
* A free 7 day trial (when cancelling, make sure to complete ALL THREE confirmation steps)
* Compatible with most devices / OS

## cons

* Poor game library, although there are a couple of good titles (I only count Luna+ games)
* Poor bitrate
* Highest option is 1080@60Hz
* Average performance
* Static ads when starting a game
* The cancellation process uses tricks so you do not actually cancel

## Conclusion

The service could be good with a better bitrate; the input lag is ok and I did not experience any waiting time.  The hardware specs seem good except for the loading times: it feels like the data is stored on network storage with poor access time or bandwidth.  The bitrate is so bad that I can not recommend playing anything in first person view or moving too fast, as it would look like a pixel mess.  However, playing slow paced games is perfectly fine.

They have a killer feature that is unique to their service: you can invite a friend to play a game in streaming with you by just sending them a link, they will join your game, and you can start playing together in a minute.  While it is absolutely cool, the service lacks fun games to play in couch coop...

As you can use Luna if you have Amazon Prime, I think it is a good fit for casual players who do not want to pay for games but would enjoy a session from time to time on any hardware.

I mentioned the subscription cancelling process twice, here are the facts: on your account you click on unsubscribe, then it asks if you are really sure because you will lose access to the service, and you have to agree.  Then it reminds you that you are about to cancel and that maybe it is a mistake, so you need to agree again.  Then there is a trick: the web page says your account will be cancelled and that you can still use it up to the cancel date.  It looks done at this point, but it is not: there is a huge paragraph of blah blah below and a button to confirm the cancellation!  Only then are you done.  The first time I cancelled, I did not pass this third step as I thought it was complete; when double-checking my account status before the renewal, I saw I had missed something.

# GeForce NOW (NVIDIA)

I wrote a review of their service a few months ago.  Since then, I renewed my account with 6 months of priority tier.  I mostly use it to play resource intensive games when it is hot at home (so my computer does not heat up at all) or at night when I want to play a bit in silence without fan noise; finally, I enjoy it a lot with slow paced games like walking simulators on my TV.

=> https://dataswamp.org/~solene/2024-03-07-geforce-now-review.html 2024-03-07 GeForce NOW review

# Final conclusion

On one hand, Luna seems to target casual users: people who may not notice the bad quality or input lag and who will just play what is available.

On the other hand, the Xbox service is a game library first, with a streaming feature.  It is quite perfect for people playing Xbox library games on PC / Xbox who want to play on a smartphone / tablet occasionally, but not for customers looking only for streamed games.

Both services would not need much to become _good_ streaming services; the minimum upgrade would be a higher bitrate. Better specs would be appreciated too: improved loading times for Luna, and Xbox games running on a better platform than the Xbox Series S.

WireGuard and Linux network namespaces

# Introduction

This guide explains how to set up a WireGuard tunnel on Linux using a dedicated network namespace, so you can choose to run a program over the VPN or over clearnet.

I have been able to figure out the setup thanks to the following blog post; I enhanced it a bit with scripts and sudo rules.

=> https://www.ismailzai.com/blog/creating-wireguard-jails-with-linux-network-namespaces Mo Ismailzai's blog: Creating WireGuard jails with Linux network namespaces

# Explanations

By default, when you connect a WireGuard tunnel, its "AllowedIPs" field is used as a route with a higher priority than your current default route.  It is not always ideal to have everything routed through a VPN, so you will create a dedicated network namespace that uses the VPN as its default route, without affecting all other software.

Unfortunately, compared to OpenBSD rdomains (which provide the same feature in this situation), network namespaces are much more complicated to deal with and require root to run a program under a namespace.

You will create a SAFE sudo rule to allow your user to run commands under the new namespace, making it more practical for daily use.

# Setup

## VPN tunnel and namespace

You need a wg-quick compatible WireGuard configuration file, but do not make it automatically used at boot.

Create a script (for root use only) with the following content, then make it executable:

```shell
#!/bin/sh

# your VPN configuration file
CONFIG=/etc/wireguard/my-vpn.conf

# this directory is used to have a per netns resolver file
mkdir -p /etc/netns/vpn/

# cleanup any previous VPN in case you want to restart it
ip netns exec vpn ip l del tun0
ip netns del vpn

# information to reuse later
DNS=$(awk '/^DNS/ { print $3 }' $CONFIG)
IP=$(awk '/^Address/ { print $3 }' $CONFIG)

# the namespace will use the DNS defined in the VPN configuration file
echo "nameserver $DNS" > /etc/netns/vpn/resolv.conf

# now, it creates the namespace and configure it
ip netns add vpn
ip -n vpn link set lo up
ip link add tun0 type wireguard
ip link set tun0 netns vpn
ip netns exec vpn wg setconf tun0 <(wg-quick strip "$CONFIG")
ip -n vpn a add "$IP" dev tun0
ip -n vpn link set tun0 up
ip -n vpn route add default dev tun0
# display the namespace addresses to verify the setup
ip -n vpn address show

# extra check if you want to verify the DNS used and the public IP assigned
#ip netns exec vpn dig ifconfig.me
#ip netns exec vpn curl https://ifconfig.me
```

This script autoconfigures the network namespace, the VPN interface and the DNS server to use.  There are extra checks at the end of the script that you can uncomment if you want to see the public IP and DNS resolver used just after connection.

Running this script will make the netns "vpn" available for use.

The command to run a program under the namespace is `ip netns exec vpn your command`, it can only be run as root.

## Sudo rule

Now you need a specific rule so you can use sudo to run a command in vpn netns as your own user without having to log in as root.

Add this to your sudo configuration file, in my example I allow the user `solene` to run commands as `solene` for the netns vpn:

```sudoers
solene ALL=(root) NOPASSWD: /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- *
```

When using this command line, you MUST use full paths exactly as in the sudo configuration file.  This is important: otherwise, it would be possible to create a local script called `ip` running arbitrary commands as root, while `/usr/sbin/ip` can not be spoofed by a script in `$PATH`.

If I want a shell session with the VPN, I can run the following command:

```
sudo /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- bash
```

This runs bash under the netns vpn, so any command I'm running from it will be using the VPN.
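
To avoid typing this long command every time, a small shell function can wrap it (the function name is my choice, adapt the user name to your sudoers rule):

```
# add to your shell configuration file, e.g. ~/.bashrc
vpnexec() {
    sudo /usr/sbin/ip netns exec vpn /usr/bin/sudo -u solene -- "$@"
}

# usage: vpnexec curl https://ifconfig.me
```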

# Limitations

This is not a real limitation, but you may be caught by it: if a program listens on localhost in the netns vpn, you can only connect to it from another program in the same namespace.  There are methods to connect two namespaces, but I do not plan to cover them; if you need such a setup, it can be done using socat (this is explained in the blog post linked earlier) or a local bridge interface.

# Conclusion

Network namespaces are a cool Linux feature, but they are overly complicated in my opinion.  Unfortunately I have to deal with them, but at least the setup works fine in practice.

The Old Computer Challenge v4 (Olympics edition)

# Introduction

This is the time of the year where I announce the Old Computer Challenge (OCC) date.

I recommend visiting the community website about the OCC if you want to connect with the community.

=> https://occ.deadnet.se/ Old Computer Challenge community

=> https://dataswamp.org/~solene/tag-oldcomputerchallenge.html The Old Computer Challenge history

=> static/occ-v4.jpg The Old Computer Challenge v4 poster, by @prahou@merveilles.town on Mastodon

# When?

The Old Computer Challenge 4th edition will run from 13th July to 20th July 2024.  It will be the prequel to the Olympics; I was not able to get the challenge accepted there, so we will do it our way.

# How to participate?

While the three previous editions had different rules, I came to an agreement with the community for this year: choose your rules!

When I did the challenge for the first time, I did not expect it to become a yearly event, nor that it would gather aficionados along the way.  The original point of the challenge was just to see if I could use my oldest laptop as my main computer for a week: there was no incentive, it was not a contest, and I did not have any written rules.

Previous editions' rules were about using an old laptop, using a computer with limited hardware (with tips to slow down a modern machine), or limiting Internet access to a single hour per day.  I always insist on the fact that it should not hinder your job, so participants do not have to "play" during work.  Smartphones became complicated to handle, especially with the limited Internet access; all I can recommend is to define some rules you want to stick to, and apply them the best you can.  If you realllyyyy need to use a device that would break the rules once, so be it if it is really important, nobody will yell at you.

People doing the OCC enjoy it for multiple reasons, find yours!  Some find the opportunity to disconnect a bit, change their habits, do some technoarcheology to run rare hardware, play with low-tech, demonstrate that obsolescence is not inevitable, etc.

Some ideas if you do not know what to do for the challenge:

* use your oldest device
* do not use graphical interface
* do not use your smartphone (and pick a slow computer :P)
* limit your Internet access time
* slow down your Internet access
* forbid big software (I intended to do this for the 4th OCC but it was hard to prepare; the idea was to set up an OpenBSD mirror where software with more than some arbitrary number of lines of code in their sources would be banned, resulting in a very small set of packages due to missing transitive dependencies)

# What to do during the challenge?

You can join the community and share your experience.

There are many ways!  It is the opportunity to learn how to use Gopher or Gemini to publish content, to join the mailing list and participate with the others, or simply to come to the IRC channel to chat a bit.

# I can't join during 13th to 20th July!

Well, as nobody forces you to do the OCC, you can just do it when you want, even in December if it suits your calendar better than mid July; nobody will complain.

# Conclusion

There is a single rule: do it for fun!  Do not impede yourself for weird reasons, it is here for fun, and doing the whole week is as good as failing and writing about why you failed.  It is not a contest, just try and see how it goes, and tell us your story :)

How to mount ISO or file disk images on OpenBSD

# Introduction

If you ever need to mount a .iso file on OpenBSD, you may wonder how to proceed as the command `mount_cd9660` requires a device name.

While the solution is entirely documented in the man pages and the official FAQ, it may not be easy to find at first glance, especially since most operating systems allow mounting an ISO file in a single step whereas OpenBSD requires an extra step.

=> https://www.openbsd.org/faq/faq14.html#MountImage OpenBSD FAQ: Mounting disk images
=> https://man.openbsd.org/vnconfig#EXAMPLES OpenBSD manual page: vnconfig(8) EXAMPLES section

Note that this method also works for disk images, not only .iso files.

# Exposing a file as a device

On OpenBSD you need to use the command `vnconfig` to map a file to a device node, allowing interesting actions such as using a file as a storage disk (which you can encrypt) or mounting a .iso file.

This command must be used as root as it manipulates files in /dev.

# Mounting an ISO file

Now, let's see how to mount a .iso file, which is a dump of a CD9660 file system (most of the time):

```
vnconfig vnd0 /path/to/file.iso
```

This creates a new device `/dev/vnd0`; now you can mount it on your file system with:

```
mount -t cd9660 /dev/vnd0c /mnt
```

You should be able to browse your ISO file content in /mnt at this point.

# Unmounting

If you are done with the file, you have to unmount it with `umount /mnt` and destroy the vnd device using `vnconfig -u vnd0`.

# Going further: Using a file as an encrypted disk

If you want to use a single file as a file system, you have to provision the file with disk space using the command `dd`; you can fill it with zeroes, but if you plan to use encryption on top of it, it is better to use random data.  In the following example, you will create a file `my-disk.img` of a size of 10 GB (1000 x 10 MB):

```
dd if=/dev/random of=my-disk.img bs=10M count=1000
```

Now you can use vnconfig to expose it as a device:

```
vnconfig vnd0 my-disk.img
```

Finally, the command `bioctl` can be used to configure encryption on the disk, `disklabel` to partition it and `newfs` to format the partitions.  You can follow the OpenBSD FAQ guides; make sure to use the device name `/dev/vnd0` instead of wd0 or sd0 from the examples.
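
Following the FAQ, the remaining steps can be sketched like this; the partition letter and the resulting sd2 device are examples, check the actual names on your system:

```
# create a single RAID partition on the vnd device (interactive)
disklabel -E vnd0

# attach the encrypted softraid volume, a passphrase will be prompted,
# bioctl prints the name of the new device (sd2 in this example)
bioctl -c C -l vnd0a softraid0

# partition and format the new device
disklabel -E sd2
newfs sd2a
```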

=> https://www.openbsd.org/faq/faq14.html#softraidCrypto OpenBSD FAQ: Encrypting external disk

OpenBSD extreme privacy setup

# Introduction

This blog post explains how to configure an OpenBSD workstation with extreme privacy in mind.

This is an attempt to turn OpenBSD into a Whonix or Tails alternative, although if you really need that level of privacy, use a system from this list and not the present guide.  It is easy to spot OpenBSD using network fingerprinting; this can not be defeated, and you can not hide the fact you use OpenBSD from network operators.

I did this guide as a challenge for fun, but I also know some users have a use for this level of privacy.

Note: this guide covers steps to increase the privacy of OpenBSD and its base system; it will not explain how to configure a web browser or how to choose a VPN.

# Checklist

OpenBSD does not have much network activity with a default installation, but the following programs generate traffic:

* the installer connects to 199.185.178.80 to associate the chosen timezone with your public IP, to reuse the answer for a future installation
* ntpd (for time sync) uses pool.ntp.org, 9.9.9.9, 2620:fe::fe, www.google.com and time.cloudflare.com 
* fw_update connects to firmware.openbsd.org (resolves as openbsd.map.fastlydns.net), fw_update is used at the end of the installer, and at the end of each sysupgrade
* sysupgrade, syspatch and pkg_* tools use the address defined in /etc/installurl (defaults to cdn.openbsd.org)

# Setup

## OpenBSD installation

If you do not have OpenBSD installed yet, you will have to download an installer.  Choose from the official mirrors or my tor/i2p proxy mirror.

=> https://www.openbsd.org/faq/faq4.html#Download OpenBSD official website: Downloading OpenBSD
=> https://dataswamp.org/~solene/2024-05-25-openbsd-privacy-friendly-mirror.html OpenBSD privacy-friendly mirrors

Choose the full installer: for 7.5 it would be install75.img for a USB installer or install75.iso for a CD-ROM.

It is important to choose the full installer to avoid any network access at install time.

Full disk encryption is recommended, but it is your choice.  If you choose encryption, it is recommended to wipe the drive with random data first.

=> https://www.openbsd.org/faq/faq14.html#softraid OpenBSD FAQ: Crypto and disks

During the installation, do not configure the network at all.  You want to avoid syspatch and fw_update to run at the end of the installer, and also ntpd to ping many servers upon boot.

## First boot (post installation)

Once OpenBSD has booted after the installation, you need to make a decision about ntpd (the time synchronization daemon).

* you can disable ntpd entirely with `rcctl disable ntpd`, but it is not really recommended as it can create issues with some network software if the time is desynchronized
* you can edit the file `/etc/ntpd.conf` which contains the list of servers used to keep the time synchronized, and choose which server to connect to (if any)
* you can configure ntpd to use a sensor providing time (like a GPS receiver) and disable everything else
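
For the second option, a minimal `/etc/ntpd.conf` keeping a single explicitly chosen time source could look like this (the server below is only an example):

```
server time.cloudflare.com
```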

Whonix (and maybe Tails too?) uses a custom tailored program named sdwdate to update the system clock over Tor (because Tor only supports TCP while NTP uses UDP); it is unfortunately not easily portable to OpenBSD.

The next step is to edit the file `/etc/hosts` to disable the firmware server whose hostname is hard-coded in the program `fw_update`; add this line to the file:

```
127.0.0.9	firmware.openbsd.org
```

## Packages, firmware and mirrors

The firmware installation and OpenBSD mirror configuration using Tor and I2P are covered in my previous article; it explains how to use Tor or I2P to download firmware, packages and system sets for upgrades.

=> https://dataswamp.org/~solene/2024-05-25-openbsd-privacy-friendly-mirror.html OpenBSD privacy-friendly mirrors

There is a chicken and egg issue with this though: on a fresh install you have neither tor nor i2p, so you can not download the tor or i2p packages through them.  You can download the packages and their dependencies from another system and install them locally from a USB memory stick.

Wi-Fi and other devices requiring a firmware may not work until you run fw_update; you may have to download the files from another system and pass the network interface firmware over a USB memory stick to get network access.  A smartphone with USB tethering is also a practical approach for downloading firmware, but you will have to download it over clearnet.

## DNS

DNS is a huge topic for privacy-oriented users.  I can not really recommend a given public DNS server because they all have pros and cons; I will use 1.1.1.1 and 9.9.9.9 for the example, but use your favorite DNS.

Enable the daemon unwind: it is a local DNS resolver with a cache that supports DoT, DoH and many other cool features.  Edit the file `/etc/unwind.conf` with this configuration:

```
forwarder { 1.1.1.1 9.9.9.9 }
```

As said above, DoT and DoH are supported; you can configure them directly in the forwarder block, the man page explains the syntax:

=> https://man.openbsd.org/unwind.conf OpenBSD manual pages: unwind.conf

Now, enable, start and make sure the service is running fine:

```
rcctl enable unwind
rcctl start unwind
rcctl check unwind
```

A program named `resolvd` runs by default: when it finds that unwind is running, resolvd modifies `/etc/resolv.conf` to switch DNS resolution to 127.0.0.1, so you do not have anything to do.

## Firewall configuration

A sane firewall configuration for workstations is to block all incoming connections.  This can be achieved with the following `/etc/pf.conf`: (reminder, last rule matches)

```
set block-policy drop
set skip on lo

match in all scrub (no-df random-id max-mss 1440)
antispoof quick for egress

# block all traffic (in/out)
block

# allow reaching the outside (IPv4 + IPv6)
pass out quick inet
pass out quick inet6

# allow ICMP (ping) for MTU discovery
pass in proto icmp

# uncomment if you use SLAAC or ICMP6 (IPv6)
#pass in on egress inet6 proto icmp6
#pass in on egress inet6 proto udp from fe80::/10 port dhcpv6-server to fe80::/10 port dhcpv6-client no state
```

Validate the syntax with `pfctl -nf /etc/pf.conf`, then reload the rules with `pfctl -f /etc/pf.conf`.

## Network configuration

Everything is ready so you can finally enable networking.  You can find a list of network interfaces with `ifconfig`.

Create the hostname.if file for your network device.

=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if

An ethernet device configuration using DHCP would look like this:

```
inet autoconf
```

A wireless device configuration would look like this:

```
join SSID_NAME wpakey password1
join OTHER_NET wpakey hunter2
inet autoconf
```

You can randomize your network device MAC address at each boot by adding the line `lladdr random` to its configuration file.

Start the network with `sh /etc/netstart ifname`.
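Putting the pieces together, a complete configuration for a Wi-Fi device (assuming, for the example, that the interface is named iwm0, so the file would be `/etc/hostname.iwm0`) could look like:

```
join SSID_NAME wpakey password1
lladdr random
inet autoconf
```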

# Special attention during updates

When you upgrade your OpenBSD system from one release to another, or to a newer snapshot, using `sysupgrade`, the command `fw_update` is automatically run at the very end of the installer.

It will bypass any `/etc/hosts` changes because it runs from a mini root filesystem.  If you do not want `fw_update` to reach the network over clearnet at this step, the only method is to disable networking entirely, which can be done by using `sysupgrade -n` to prepare the upgrade without rebooting, and then:

* disconnect your computer's Ethernet cable, if any; if you use Wi-Fi and have a physical kill switch, toggling it will be enough to disable Wi-Fi
* if you do not have such a kill switch and Wi-Fi is configured, rename its `/etc/hostname.if` configuration file to an invalid name, and rename it back after `sysupgrade`

You could use this script to automate the process:

```shell
mv /etc/hostname.* /root/
sysupgrade -n
echo 'mv /root/hostname.* /etc/' > /etc/rc.firsttime
echo 'sh /etc/netstart' >> /etc/rc.firsttime
chmod +x /etc/rc.firsttime
reboot
```

It moves all your network configuration files to `/root/`, runs sysupgrade, and configures the next boot to restore the hostname files into place and start the network.

# Webcam and Microphone protection

By default, OpenBSD "filters" webcam and microphone use: if you try to use them, you get a video stream with a black background and no audio from the microphone.  This is handled directly by the kernel, and only root can change this behavior.

To toggle microphone recording, change the sysctl `kern.audio.record` to 1 or 0 (default).

To toggle webcam recording, change the sysctl `kern.video.record` to 1 or 0 (default).
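As a quick reference, here is how toggling looks from a root shell; to persist a setting across reboots, put the same `key=value` line in `/etc/sysctl.conf`:

```shell
# enable microphone recording (default is 0, i.e. filtered)
sysctl kern.audio.record=1
# enable webcam recording
sysctl kern.video.record=1
# revert both to the default filtered behavior
sysctl kern.audio.record=0 kern.video.record=0
```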

What is cool with this mechanism is that it keeps software happy when it makes the webcam/microphone a requirement: the devices exist, they just record nothing.

# Conclusion

Congratulations, you achieved a high privacy level with your OpenBSD installation!  If you have money and enough trust in some commercial services, you could use a VPN instead of (or underneath) Tor/I2P, but that is out of the scope of this guide.

I did this guide after installing OpenBSD on a laptop connected to another laptop doing NAT and running Wireshark to see exactly what was leaking over the network.  It was a fun experience.

Improve your SSH agent security

# Introduction

If you use SSH often, it is likely you use an SSH agent, which stores your private key in memory so you do not have to type your password every time.

This method is convenient, but it comes at the expense of your SSH key's security: anyone able to use your session while the agent holds the key unlocked can use your SSH key.  This scenario is most likely to happen through a compromised build script.

However, it is possible to harden this process at a small cost in convenience: make your SSH agent ask for confirmation every time the key is used.

# Setup

The tooling provided with OpenSSH includes a simple SSH agent named `ssh-agent`.  On OpenBSD, the agent is automatically started and asks to unlock your key upon graphical login if it finds an SSH key in a default path (like `~/.ssh/id_rsa`).

Usually, the method to run ssh-agent is the following: in a shell script defining your environment at an early stage, either your interactive shell configuration file or the script running your X session, you use `eval $(ssh-agent -s)`.  This command runs ssh-agent and also exports the environment variables required to make it work.

Once your ssh-agent is correctly configured, you need to add a key into it.  Here are two methods to proceed.

## OpenSSH ssh-add

In addition to ssh-agent, OpenSSH provides ssh-add to load keys into the agent.  It is simple to use: just run `ssh-add /path/to/key`.

=> https://man.openbsd.org/ssh-add ssh-add manual page

If you want to have a GUI confirmation upon each SSH key use, just add the flag `-c` to this command line: `ssh-add -c /path/to/key`.

On OpenBSD, if your key is at a standard location, you can modify the script `/etc/X11/xenodm/Xsession` to change the first occurrence of `ssh-add` into `ssh-add -c`.  You will still be greeted for your key password upon login, but you will also be asked for confirmation upon each use.

## KeePassXC

It turns out the password manager KeePassXC can hold SSH keys, and it works great, I have used it this way for a while.  KeePassXC can either store the private key within its database, or load a private key from the filesystem using a path and unlock it with a stored password; the choice is up to you.

You need the ssh-agent variables in your environment for this feature to work, as KeePassXC only replaces ssh-add, not the agent itself.

The KeePassXC documentation has an "SSH Agent integration" section explaining how it works and how to configure it.

=> https://keepassxc.org/docs/ KeepassXC official documentation

In the key settings, under the "SSH Agent" tab, there is a checkbox to ask for user confirmation upon each key use.

# Other security features

## Timeout

I would recommend automatically deleting the key from the agent after some time; this is especially useful if you do not actively use your SSH key.

In `ssh-add`, this can be achieved using the `-t time` flag (think "tea time" if you need a mnemonic), where time is a number of seconds, or a time format specified in sshd_config, like 5s for 5 seconds, 10m for 10 minutes, 16h for 16 hours or 2d for 2 days.
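Both flags combine well; for instance, to load a key that asks for confirmation on each use and expires from the agent after 30 minutes (the key path is just an example):

```shell
ssh-add -c -t 30m ~/.ssh/id_ed25519
```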

In KeePassXC, it is in the key settings, within the "SSH Agent" tab: you can configure the delay before the key is removed from the agent.

# Conclusion

The ssh-agent is a practical piece of software that eases the use of SSH keys without compromising much on security, but some extra hardening can be useful in certain scenarios, especially for developers running untrusted code as the user holding the SSH key.

While the extra confirmation could still be manipulated by a rogue script, doing so would require greater complexity and carry a higher chance of being spotted.  If you really want to protect your SSH keys, you should use them from a hardware token requiring a physical action to unlock it.  While I find those tokens impractical and expensive, they have their use, and they can not be beaten by a pure software solution.

OpenBSD mirror over Tor / I2P

# Introduction

For an upcoming privacy-related article about OpenBSD, I needed to set up access to an OpenBSD mirror both as a Tor hidden service and over I2P.

The server does not contain any data, it only acts as a proxy fetching files from an existing OpenBSD mirror, so it does not waste bandwidth mirroring everything; the server does not have the required storage anyway.  A small cache keeps the most requested files locally.

=> https://en.wikipedia.org/wiki/I2P Wikipedia page about I2P protocol
=> https://en.wikipedia.org/wiki/The_Tor_Project Wikipedia page about Tor

It is only useful if you can not reach OpenBSD mirrors, or if you really need to hide your network activity.  Tor or I2P will be much slower than connecting to a mirror over HTTP(S).

However, now that these services exist, let me explain how to start using them.

# Tor

Using a client with a Tor proxy enabled, you can reach the following address to download installers or sets.

=> http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD/ OpenBSD onion mirror over Tor

If you want to install or update your packages from Tor, you can use the onion address in `/etc/installurl`.  However, it will not work for sysupgrade and syspatch, and you need to export the variable `FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"` in your environment to make the `pkg_*` programs able to use the mirror.
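Putting this together, the setup could look like the following sketch, run as root (the onion address is the mirror given above; curl must be installed):

```shell
# point the package tools at the onion mirror
echo 'http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/pub/OpenBSD/' > /etc/installurl

# make pkg_* programs fetch through the local tor socks proxy
export FETCH_CMD="/usr/local/bin/curl -L -s -q -N -x socks5h://127.0.0.1:9050"

# update installed packages over Tor
pkg_add -u
```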

To make sysupgrade or syspatch able to use the onion address, you need the program `torsocks` installed, and to patch the scripts to use torsocks:

* `sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/sysupgrade` for sysupgrade
* `sed -i 's,ftp -N,/usr/local/bin/torsocks &,' /usr/sbin/syspatch` for syspatch 

These patches will have to be reapplied after each sysupgrade run.

# I2P

If you have a client with an I2P proxy enabled, you can reach the following address to download installers or sets.

=> http://2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p:8081/pub/OpenBSD/ OpenBSD mirror address over I2P

If you want to install or update your packages from I2P, install i2pd with `pkg_add i2pd`, then edit the file `/etc/i2pd/i2pd.conf` to set `notransit = true`, unless you want to act as an I2P relay (high CPU/bandwidth consumption).

Replace the file `/etc/i2pd/tunnels.conf` with the following content (or adapt your current tunnels.conf if you configured it earlier):

```
[MIRROR]
type = client
address = 127.0.0.1
port = 8080
destination = 2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p
destinationport = 8081
keys = mirror.dat
```

Now, enable and start i2pd with `rcctl enable i2pd && rcctl start i2pd`.

After a few minutes, to let i2pd establish tunnels, you should be able to browse the mirror over I2P using the address `http://127.0.0.1:8080/`.  You can change the port 8080 to any other you prefer by modifying the file `tunnels.conf`.

You can use the address `http://127.0.0.1:8080/pub/OpenBSD/` in `/etc/installurl` to automatically use the I2P mirror for installing/updating packages, or keeping your system up to date with syspatch/sysupgrade.

Note: from experience, the I2P mirror works fine for installing packages, but did not play well with fw_update, syspatch and sysupgrade, maybe because they use the ftp command, which seems to drop the connection easily.  Downloading the files locally using a proper HTTP client supporting transfer resume would work better.  On the other hand, this issue may be related to the attack the I2P network is facing as of the time of writing (May 2024).

# Firmware mirror

OpenBSD pulls firmware from a different server than the regular mirrors, at the address `http://firmware.openbsd.org/firmware/`.  The files on this server are signed packages; they can be installed using `fw_update $file`.

Both the I2P and Tor hidden service hostnames can be reused, you only have to replace `/pub/OpenBSD/` with `/firmware/` to browse the files.

The proxy server does not cache any firmware, it proxies directly to the genuine firmware web server.  The firmware files live on a separate server for legal reasons, it seems to be a grey area.

## Disable firmware.openbsd.org

For maximum privacy, you need to neutralize the `firmware.openbsd.org` DNS lookup using a hosts entry.  This is important because `fw_update` is automatically run after a system upgrade (as of 2024).

In `/etc/hosts` add the line:

```
127.0.0.9 firmware.openbsd.org
```

The IP in the snippet above is not a mistake: it prevents fw_update from connecting to a local web server, if any.

## Tor access

If you use Tor, it is complicated to patch `fw_update` to use torsocks; the best method is to download the firmware manually.

=> http://kdzlr6wcf5d23chfdwvfwuzm6rstbpzzefkpozp7kjeugtpnrixldxqd.onion/firmware/ Firmware onion address

## I2P access

If you use I2P, you can reuse the tunnel configuration described in the I2P section, and pass the full URL to `fw_update`:

```shell
# release users
fw_update -p http://127.0.0.1:8080/firmware/$(uname -r)/

# snapshot users
fw_update -p http://127.0.0.1:8080/firmware/snapshots/
```

Or you can browse the I2P URL using an HTTP client with the I2P proxy to download the firmware manually.

=> http://2st32tfsqjnvnmnmy3e5o5y5hphtgt4b2letuebyv75ohn2w5umq.b32.i2p:8081/firmware/ Firmware i2p address

# Conclusion

There was no method to download OpenBSD files over Tor and I2P for people who really need it; it is now a thing.

If you encounter issues with the service, please let me know.

Organize your console with tmuxinator

# Introduction

This article is about tmuxinator, a tool to script the creation of tmux sessions from a configuration file.

=> https://github.com/tmuxinator/tmuxinator tmuxinator official project website on GitHub

This program is particularly useful when you have repeated tasks to achieve in a terminal, or if you want to automate your tmux session to save your fingers from always typing the same commands.

tmuxinator is packaged in most distributions and requires tmux to work.

# Configuration

tmuxinator requires a configuration file for each "session" you want to manage with it.  It provides a command line parameter to generate a file from a template:

```shell
$ tmuxinator new name_here
```

By default, it will create the YAML file for this project in `$HOME/.config/tmuxinator/name_here.yml`.  If you want the project file to live in a directory (to make it part of a versioned project repository?), you can add the parameter `--local`.

# Real world example

Here is a tmuxinator configuration file I use to automatically do the following tasks; the commands include a lot of monitoring, as I love watching progress and statistics:

* update my ports tree using git before any other task
* run a script named dpb.sh
* open a shell and cd into a directory
* run an infinite loop displaying ccache statistics
* run an infinite loop displaying a MFS mount point disk usage
* display top
* display top for user _pbuild

I can start all of this using `tmuxinator start dpb`, or stop only these "parts" of tmux with `tmuxinator stop dpb`, which is practical when using tmux a lot.

Here is my file `dpb.yml`:

```yml
name: dpb
root: ~/

# Runs on project start, always
on_project_start: cd /usr/ports && doas -u solene git pull -r

windows:
  - dpb:
      layout: tiled
      panes:
        - dpb:
          - cd /root/packages/packages
          - ./dpb.sh -P list.txt -R
        - watcher:
          - cd /root/logs
          - ls -altrh locks
          - date
        - while true ; do clear && env CCACHE_DIR=/build/tmp/pobj/.ccache/ ccache -s ; sleep 5 ; done
        - while true ; do df -h /build/tmp/pobj_mfs/ | grep % ; sleep 10 ; done
        - top
        - top -U _pbuild
```

# Going further

Tmuxinator could be used to ssh into remote servers, connect to IRC, open your email client, clean stuff, there are no limits.  

This is particularly easy to configure, as it does not try to run commands, but only sends the keys to each tmux pane, which means it sends keystrokes as if you typed them.  In the example above, you can see how the pane "dpb" can cd into a directory and then run a command, or how the pane "watcher" can run multiple commands and leave the shell as is.
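Under the hood, this is roughly equivalent to driving tmux's send-keys command yourself; an illustrative sketch (the session, window and pane names come from the example above, and the pane index is assumed):

```shell
# approximately what tmuxinator does for the "dpb" pane:
# each line is typed into the pane, followed by Enter
tmux send-keys -t dpb:dpb.0 'cd /root/packages/packages' Enter
tmux send-keys -t dpb:dpb.0 './dpb.sh -P list.txt -R' Enter
```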

# Conclusion

I had known about tmuxinator for a while, but I never gave it a try before this week.  I really regret not doing it earlier.  Not only does it allow me to "script" my console usage, but I can also embed some development configuration into my repositories.  While you can use it as an automation method, I would not rely too much on it though: it only types blindly on the keyboard.

What is going on in Nix community?

# Introduction

You may have heard about issues within the Nix/NixOS community; this blog post will try to help you understand what is going on.

Please note that it is hard to get a grasp of the big picture; it is more of a long-term feeling that the project governance was wrong (or absent?) and people got tired.

This blog post was written from my own knowledge and feelings; I clearly do not represent the community.

=> https://save-nix-together.org/ Save Nix Together: an open letter to the NixOS foundation
=> https://xeiaso.net/blog/2024/much-ado-about-nothing/ Xe blog post: Much ado about nothing

There is even a milestone tracking maintainer departures in the Nixpkgs GitHub project.

=> https://github.com/NixOS/nixpkgs/milestone/27 GitHub milestone 27: Maintainers leaving

# Project structure

First, it is important to understand how the project works.

Nix (and NixOS, though that is not the core of the project) was developed by Eelco Dolstra in the early 2000s.  The project is open source, available on GitHub, and everyone can contribute.

Nix is a tool to handle packaging in a certain way, and the project has another huge repository (a top 10 GitHub repo) called nixpkgs that contains all the package definitions.  nixpkgs is known to be the most up-to-date and biggest repository of packages, thanks to heavy automation and a huge community.

The NixOS Foundation (that is the name of the entity managing the project) has a board that steers the project and handles questions.  The first problem is that it is known to be slow to act and respond.

Making huge changes to Nix or nixpkgs requires writing an RFC (Request For Comments) explaining the rationale behind a change, and a consensus has to be found with others (it is somewhat democratic).  Eelco decided a while ago to introduce a huge change in Nix (called Flakes) without going through the whole RFC process.  This created a lot of tension and criticism, because he should have gone through the process like everyone else, and the feature was half-baked.  It still got traction, and the Nix paradigm is now split between two different modes that are not really compatible.

=> https://github.com/NixOS/rfcs/pull/49#issuecomment-659372623 GitHub Pull request to introduce Flakes: Eelco Dolstra mentioning they could merge it as experimental

There are also issues related to some sponsors of the Nix conferences, like companies tied to the military, but this is better explained in the links above, so I will not recap it here.

# Company involvement

This point is what made me leave the NixOS community.  I worked for a company called Tweag, involved in Nix for a while and paying people to contribute to Nix and nixpkgs to improve the user experience for their clients.  This made me realize the impact of companies on open source, and the more involved I got, the more I realized that Nix was mostly driven by companies paying developers to improve the tool for business.

Paying people to develop features or fix bugs is fine, but when a huge number of contributors are paid by companies, this leads to poor decisions and conflicts of interest.

In the current situation, Eelco Dolstra published a blog post to remind everyone that the project is open source and belongs to its contributors.

=> https://determinate.systems/posts/on-community-in-nix/ Eelco Dolstra blog post

The thing that puzzles me in this blog post is that most people at Determinate Systems (the company Eelco co-founded) are deeply involved in Nix in various ways.  In this situation, it is complicated for contributors to separate what they want for the project from what their employer wants.  It is common for Nix contributors to contribute wearing both hats.

# Conclusion

Unfortunately, I am not really surprised this is happening.  When a huge majority of contributors spend their free time on a project they love, while companies relentlessly quiet their voice, it just can't work.

I hope the Nix community will be able to sort this out and keep contributing to the project they love.  This is open source and libre software: most affected people contribute because they like doing so, and they do not deserve what is happening, but it never came with any guarantees either.

# Extra: Why did I stop using Nix?

I don't think this deserves a dedicated blog post, so here are some words.

From my experience, contributing to Nix was complicated.  Sometimes, changes could be committed in minutes, leaving no time for others to review, and sometimes a PR could take months or years because of nitpicking and maintainers losing faith.

Another reason I stopped using Nix is that it is quite easy to get nixpkgs commit access (I do not have commit access myself, I never wanted to inflict the Nix language on myself).  A supply chain attack would be easy to achieve in my opinion: there are so many commits that it is impossible for a trusted group to review everything, and there are too many contributors to be sure they are all trustworthy.

# Alternative to Nix/NixOS?

If you do not like Nix/NixOS governance, it could be time to take a look at Guix, a Nix fork started in 2012.  It has a much smaller community than Nix, but the tooling, package set and community are certainly not at rest.

Guix being a 100% libre software project, it does not target macOS like Nix does, nor will it include/package proprietary software.  For the second "problem", however, there is an unofficial repository called Nonguix that contains many packages like firmware and proprietary software; most users will want to include this repo.

Guix is old school: people exchange over IRC and send git diffs over email, please do not bother them if this is not your cup of tea.  On top of that, Guix uses the programming language Scheme (a Lisp-1 language), and if you want to work with this language, Emacs is your best friend (try geiser mode!).

=> https://guix.gnu.org/ Guix official project webpage

OpenBSD scripts to convert wg-quick VPN files

# Introduction

If you use a commercial VPN, you may have noticed they all provide WireGuard configurations in the wg-quick format, which is not suitable for easy use on OpenBSD.

As I currently work a lot for a VPN provider, I often have to play with configurations and I really needed a script to ease my work.

I made a shell script that turns a wg-quick configuration into a hostname.if compatible file, for full integration into OpenBSD.  This is practical if you always want to connect to a given VPN server, not for temporary connections.

=> https://man.openbsd.org/hostname.if OpenBSD manual pages: hostname.if
=> https://git.sr.ht/~solene/wg-quick-to-hostname-if Sourcehut project: wg-quick-to-hostname-if

# Usage

It is really easy to use: download the script and mark it executable, then run it with your wg-quick configuration as a parameter, and it will print the hostname.if file to standard output.

```
wg-quick-to-hostname-if fr-wg-001.conf | doas tee /etc/hostname.wg0
```

The generated file uses a trick to dynamically figure out the current default route, which is required to keep a non-VPN route to the VPN gateway.
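To give an idea of the translation, here is a sketch of both formats; the keys and addresses are placeholders, and the hostname.if part is summarized from the ifconfig wg syntax rather than being the script's exact output:

```
# wg-quick input (fr-wg-001.conf)
[Interface]
PrivateKey = <base64 private key>
Address = 10.64.0.2/32

[Peer]
PublicKey = <base64 server public key>
Endpoint = 192.0.2.1:51820
AllowedIPs = 0.0.0.0/0

# hostname.wg0 equivalent
wgkey <base64 private key>
wgpeer <base64 server public key> wgendpoint 192.0.2.1 51820 wgaip 0.0.0.0/0
inet 10.64.0.2/32
```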

# Short VPN sessions

When I shared my script on Mastodon, Carlos Johnson shared his own script, which is pretty cool and complementary to mine.

If you prefer to establish a VPN for a limited session, you may want to take a look at his script.

=> https://gist.github.com/callemo/aea83a8d0e1e09bb0d94ab85dc809675#file-wg-sh Carlos Johnson GitHub: file-wg-sh gist

# Prevent leaks

If you need your WireGuard VPN to be leakproof (= no network traffic should leave the network interface outside the VPN unless it is toward the VPN gateway), you should absolutely do the following:

* your WireGuard tunnel interface should be in rdomain 0
* the WireGuard tunnel itself should be established from another rdomain
* use PF to block traffic on the other rdomain that is not toward the VPN gateway
* use the VPN provider DNS or a no-log public DNS provider

=> https://dataswamp.org/~solene/2021-10-09-openbsd-wireguard-exit.html Older blog post: WireGuard and rdomains
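A minimal PF sketch of the third point in the list above, assuming the physical uplink lives in rdomain 1 and the VPN gateway is 192.0.2.1 on UDP port 51820 (both placeholders to adapt):

```
# /etc/pf.conf excerpt: only WireGuard handshake traffic may leave rdomain 1
block drop on rdomain 1
pass out on rdomain 1 inet proto udp to 192.0.2.1 port 51820
```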

# Conclusion

OpenBSD's ability to configure WireGuard VPNs with ifconfig has always been an incredible feature, but converting from wg-quick files was not always fun.  Now, using a commercial VPN got a lot easier thanks to a few pieces of shell.

A Stateless Workstation

# Introduction

I have always had an interest in practical security on computers, be it workstations or servers.  Many kinds of threats exist for users and system administrators; it is up to them to define a threat model to know what is acceptable or not.  Nowadays, we have choice in the operating system land to pick what works best for that threat model: OpenBSD with its continuous security mechanisms, Linux with hardened flags (too bad grsec isn't free anymore), Qubes OS to keep everything separated, immutable operating systems like Silverblue or MicroOS (in my opinion they don't bring much to the security table though), etc.

My threat model has always been the following: some exploit on my workstation remaining unnoticed almost forever, stealing data and capturing the keyboard continuously.  This one would be particularly bad because I have access to many servers through SSH, like OpenBSD servers.  Protecting against that is particularly complicated; the best mitigations I found so far are to use Qubes OS with disposable VMs or to restrict outbound network, but neither is practical.

My biggest gripe with computers has always been "state".  What is state?  It is what distinguishes one computer from another: installed software, configuration, data at rest (pictures, documents etc…).  We keep state because we don't want to lose work, and we want our computers to hold our preferences.

But what if I could go stateless?  The best defense against data stealer is to own nothing, so let's go stateless!

# Going stateless

My idea is to be able to pick any computer around and use it for productive work, but it should always start fresh: stateless.

A stateless productive workstation obviously has challenges: How would it help with regard to security? How would I manage passwords? How would I work on a file over time? How to achieve this?

I have been able to address each of these questions.  I am now using a stateless system.

> States? Where we are going, we don't need states! (certainly Doc Brown in a different timeline)

## Data storage

It is obvious that we need to keep files for most tasks.  This setup requires a way to store files on a remote server.

Here are different methods to store files:

* Nextcloud
* Seafile
* NFS / CIFS over VPN
* iSCSI over VPN
* sshfs / webdav mount
* Whatever works for you

Encryption could be done locally with tools like cryfs or gocryptfs, so only encrypted files would be stored on the remote server.
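For example, with gocryptfs (shown as an illustration; the directory names are arbitrary), only the encrypted view is exposed to the file storage client:

```shell
# one-time initialization of the encrypted directory,
# which lives inside the synchronized folder
gocryptfs -init ~/Seafile/vault

# mount a cleartext view: work in ~/vault,
# while only the ciphertext in ~/Seafile/vault gets synchronized
mkdir -p ~/vault
gocryptfs ~/Seafile/vault ~/vault
```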

Nextcloud end-to-end encryption should not be used as of April 2024, it is known to be unreliable.

Seafile, a less known alternative to Nextcloud but focused only on file storage, supports end-to-end encryption and is reliable.  I chose this one as I had a good experience with it 10 years ago.

Having access to the data storage in a stateless environment comes with an issue: getting the credentials to access the files.  Passwords should be handled differently.

## Password management

When going stateless, the first step required after boot is to access the password manager; otherwise you would be locked out.

The passwords must be reachable from anywhere on Internet, with a passphrase you know and/or hardware token you have (and why not 2FA).

A self-hosted solution is Vaultwarden (it used to be named bitwarden_rs), an open source reimplementation of the Bitwarden server.

Any proprietary service offering password management could work too.

A KeePassXC database on a remote storage service, for which you know the password, could also be used, but it is less practical.

## Security

The main driving force for this project is to increase my workstation security, I had to think hard about this part.

Going stateless requires a few changes compared to a regular workstation:

* data should be stored on a remote server
* passwords should be stored on a remote server
* a bootable live operating system
* programs to install

This is mostly a paradigm change with pros and cons compared to a regular workstation.

Data and passwords stored in the cloud?  This is not really an issue when using end-to-end encryption, as long as the software is trustworthy and its code is correct.

A bootable live operating system is quite simple to acquire.  There is a ton of choice among Linux distributions able to boot from a CD or from USB, and non-Linux live systems exist too.  A bootable USB device could be compromised, while a CD is an immutable medium, but there are USB devices such as the Kanguru FlashBlu30 with a physical switch to make the device read-only.  A USB device can also be removed immediately after boot, making it safe.  And if you ever stop trusting a USB device, just buy a new USB memory stick and write the system to it again.

=> https://www.kanguru.com/products/kanguru-flashblu30-usb3-flash-drive Product page: Kanguru FlashBlu30

As for installed programs, they are fine as long as they are packaged and signed by the distribution; the risks are the same as for a regular workstation.

The system should be more secure than a typical workstation because:

* the system never has access to all data at once; the user is supposed to only pick what they need for a given task
* any malware that managed to reach the system would not persist to the next boot

The system would be less secure than a typical workstation because:

* remote servers could be exploited (or offline, which is not a security issue but…); this is why end-to-end encryption is a must

To mitigate this, only my password manager service is reachable from the Internet; it then allows me to establish a VPN to reach all my other services.

## Ecology

I think this is a dimension that deserves to be analyzed for such a setup.  A stateless system requires remote servers to run, and uses bandwidth to reinstall programs at each boot.  It is less ecological than a regular workstation, but at the same time it may also enforce some rationalization of computer usage, because it is a bit less practical.

## State of the art

Here is a list of setups that already exist and could provide a stateless experience, with support for either a custom configuration or a mechanism to store files (like SSH or GPG keys, though a USB smart card would be better for those):

* NixOS with impermanence, this is an installed OS, but almost everything on disk is volatile
* NixOS live-cd generated from a custom config
* Tails, comes with a mechanism to locally store encrypted files, privacy-oriented, not really what I need
* Alpine with LBU, comes with a mechanism to locally store encrypted files and cache applications
* FuguITA, comes with a mechanism to locally store encrypted files (OpenBSD based)
* Guix live-cd generated from a custom config
* Arch Linux generated live-cd
* Ubuntu live-cd, comes with a mechanism to retrieve files from a partition named "casper-rw"

Otherwise, any live system could just work.

Special bonus to the NixOS and Guix generated live-cds, as you can choose which software will be included, in its latest version.  A similar bonus goes to Alpine with LBU: packages are always installed from a local cache, which means you can keep them updated.

A live-cd generated a few months ago is certainly no longer up to date.

# My experience

I decided to go with Alpine and its LBU mechanism; it is not 100% stateless, but it hits the perfect spot between "I have to bootstrap everything from scratch" and "I can reduce the burden to a minimum".

=> https://dataswamp.org/~solene/2023-07-14-alpine-linux-from-ram-but-persistent.html Earlier blog post: Alpine Linux from RAM but persistent

My setup requires two USB memory sticks:

* one with the Alpine installer; upgrading to a newer Alpine version only requires writing the new version to that stick
* a second to store the packages cache and some settings such as the package list and specific changes in /etc (user name, password, services)

While it is not 100% stateless, the files on the second memory stick are just a way to have a working customized Alpine.

This is a pretty cool setup, it boots really fast as all the packages are already in cache on the second memory stick (packages are signed, so it is safe).  I made a Firefox profile with settings and extensions, so it is always fresh and ready when I boot.
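For reference, here is a sketch of what the LBU part looks like in practice; the media label and the extra path are examples, not my exact setup, and the commands only act on an Alpine system:

```shell
# Hedged sketch of the LBU workflow; "usb" is an example media label,
# /root/.ssh an example extra path to persist.
if command -v lbu >/dev/null 2>&1; then
    setup-lbu usb            # store the overlay on the stick labelled "usb"
    lbu include /root/.ssh   # track extra paths beyond /etc
    lbu commit               # write the committed overlay to the stick
    apk cache download       # refresh the local package cache
    status="committed"
else
    status="skipped: lbu not found, not an Alpine system"
fi
echo "$status"
```

After `lbu commit`, the next boot from the installer stick restores /etc (and any included paths) and installs packages from the cache.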

I decided to go with the following stack, entirely self-hosted:

* Vaultwarden for passwords
* Seafile for data (behind VPN)
* Nextcloud for calendar and contacts (behind VPN)
* Kanboard for task management (behind VPN)
* Linkding for bookmarks (behind VPN)
* WireGuard for VPN

This setup offered me freedom.  Now, I can bootstrap into my files and passwords from any computer (a trustworthy USB memory stick is advisable though!).

I can also boot using any kind of operating system on any of my computers; it became so easy it's refreshing.

I do not make use of dotfiles or stored configurations because I use vanilla settings for most programs, a git repository could be used to fetch all settings quickly though.

=> https://github.com/dani-garcia/vaultwarden Vaultwarden official project website
=> https://www.seafile.com/en/home/ Seafile official project website
=> https://nextcloud.com/ Nextcloud official project website
=> https://kanboard.org/ Kanboard official project website
=> https://github.com/sissbruecker/linkding Linkding official project website

# Backups

A tricky part with this setup is doing serious backups.  The method will depend on the setup you chose.

With my self-hosted stack, restic makes a daily backup to two remote locations, but I need to be able to reach the backups even if my services are unavailable due to a server failure.
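As a sketch of that daily job (repository URLs and paths are placeholders, and I assume the restic password is provided through the environment, e.g. RESTIC_PASSWORD_FILE):

```shell
# Placeholder repositories; the real ones are on two distinct remote hosts.
backup() {
    for repo in "sftp:backup1:/srv/restic" "sftp:backup2:/srv/restic"; do
        restic -r "$repo" backup /srv/services
        restic -r "$repo" forget --keep-daily 14 --prune
    done
}
# In a disaster, any machine with restic can restore without my services:
#   restic -r sftp:backup1:/srv/restic restore latest --target /srv/services
if command -v restic >/dev/null 2>&1; then
    backup && msg="backup done" || msg="backup failed"
else
    msg="restic not installed"
fi
echo "$msg"
```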

If you use proprietary services, they likely handle backups, but it is better not to trust them blindly: export all your data on a regular schedule to make a proper backup yourself.

# Conclusion

This is an interesting approach to workstation management that I needed to try.  I really like how it freed me from worrying about each workstation; they are now all disposable.

I made a mind map for this project, you can view it below, it may be useful to better understand how the pieces fit together.

=> static/stateless_computing-fs8.png Stateless computing mind mapping document

Lessons learned with XZ vulnerability

# Intro

Yesterday Red Hat announced that the xz library was badly compromised, and could be used as a remote code execution vector.  It's still not clear exactly what's going on, but you can learn about it in the following GitHub discussion, which also links to the original posts:

=> https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27 Discussion about xz being compromised

# What's the state?

As far as we currently know, xz-5.6.0 and xz-5.6.1 contain some heavily obfuscated code that triggers only in sshd, and only when:

* the system is running systemd
* openssh is compiled with a patch to add a feature related to systemd
* the system is using glibc (this is mandatory for systemd systems afaik anyway)
* the xz package was built from the release tarballs published on GitHub rather than the auto-generated tarballs; the malicious code is missing from the git repository

So far, it seems openSUSE Tumbleweed, Fedora 40 and 41, and Debian sid were affected and vulnerable.  Nobody knows exactly what the payload does yet; once security researchers get their hands on it, we will know more.
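A quick way to check whether a system ships one of the affected releases is to look at the version string; a minimal sketch (it only inspects the version, it does not prove the binary is clean or dirty):

```shell
# Extract the version from the first line of `xz --version`,
# e.g. "xz (XZ Utils) 5.4.5" -> "5.4.5".
v=$(xz --version 2>/dev/null | awk 'NR==1 {print $NF}')
case "$v" in
    5.6.0|5.6.1) verdict="affected release installed: $v" ;;
    "")          verdict="xz not found" ;;
    *)           verdict="not an affected release: $v" ;;
esac
echo "$verdict"
```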

OpenBSD, FreeBSD, NixOS and Qubes OS (dom0 + official templates) are unaffected.  I didn't check others, but Alpine and Guix shouldn't be vulnerable either.

=> https://security.gentoo.org/glsa/202403-04 Gentoo security advisory (unaffected)

# What lessons could we learn?

It is really unfortunate that a piece of software this important, and harmless in appearance, got compromised.  This made me think about how we could best protect against this kind of issue; I came to these conclusions:

* packages should be built from source code repository instead of tarballs whenever possible (sometimes tarballs contain vendoring code which would be cumbersome to pull otherwise), at least we would know what to expect
* public network services that should be only used by known users (like openssh, imap server in small companies etc..) should be run behind a VPN
* OpenBSD style to have a base system developed as a whole by a single team is great, such kind of vulnerability is barely possible to happen (on base system only, ports aren't audited)
* whenever possible, separate each network service within their own operating system instance (using hardware machines, virtual machines or even containers)
* avoid running daemons as root whenever possible
* use opensnitch on workstations (linux only)
* control outgoing traffic whenever you can afford to

I don't have much of an opinion about what could be done to protect the supply chain.  As a packager, it's not possible to audit the code of each software we update.  My take on this is that we have to deal with it: xz is certainly not the only vulnerable library running in production.

However, the risks could be reduced by:

* using less programs
* using less complex programs
* compiling programs with less options to pull in less dependencies (FreeBSD and Gentoo both provide this feature and it's great)

# Conclusion

I actually had two systems running the vulnerable libs, both on openSUSE MicroOS, which updates very aggressively (daily update + daily reboot).  There is no magic balance between "update as soon as possible" and "wait for some people to take the risks first".

I'm going to rework my infrastructure to expose the bare minimum to the Internet, and use a VPN for all the services meant for known users.  The peace of mind obtained will be far greater than the burden of setting up WireGuard VPNs.
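On the client side, a WireGuard tunnel for this is a short configuration file; the keys, addresses and endpoint below are placeholders, not my actual setup:

```
[Interface]
PrivateKey = <client private key>
Address = 10.9.0.2/24

[Peer]
PublicKey = <server public key>
Endpoint = vpn.example.com:51820
AllowedIPs = 10.9.0.0/24
PersistentKeepalive = 25
```

With this saved as /etc/wireguard/wg0.conf, `wg-quick up wg0` brings the tunnel up, and the private services only need to listen on the 10.9.0.0/24 network instead of the Internet.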

Cloud gaming review using Playstation Plus

# Introduction

While testing the cloud gaming service GeForce Now, I've learned that PlayStation also had an offer.

Basically, if you use a PlayStation 4 or 5, you can subscribe to the first two tiers to benefit from some services and a games library, but the last tier (premium) adds more content AND allows you to play video games on a computer with their client, no PlayStation required.  I already had the second tier subscription, so I paid the small extra to switch to premium in order to experiment with the service.

=> https://www.playstation.com/en-us/ps-plus/ PlayStation Plus official website

# Game library

Compared to GeForce Now, while you are subscribed you have a huge game library at hand.  This makes the service a lot cheaper if you are happy with the content.  The service costs 160 $€ / year if you subscribe for 12 months, roughly the price of 2 AAA games nowadays...

# Streaming service

The service is only available using the PlayStation Plus Windows program.  It's possible to install it on Linux, but it will use more CPU because hardware decoding doesn't seem to work on Wine (even wine-staging with vaapi compatibility checked).

There are no clients for Android, and you can't use it in a web browser.  The Xbox Game Pass streaming and GeForce now services have all of that.

Sadness will start here.  The service is super promising, but the application is currently a joke.

If you don't plug in a PS4 controller (named a dualshock 4), you can't use the "touchpad" button, which is mandatory to start a game in Tales of Arise and very important in many games.  If you have a different controller, on Windows you can use the program "DualShock 4 emulator" to emulate one; on Linux it's impossible, even with a genuine controller.

A PS5 controller (dualsense) is NOT compatible with the program, the touchpad won't work.

=> https://github.com/r57zone/DualShock4-emulator DualShock4 emulator GitHub project page

Obviously, you can't play without a controller, except if you use a program to map your keyboard/mouse to a fake controller.

# Gaming quality

There are absolutely no settings in the application; you can run a game just by clicking on it.  Did I mention there is no way to search for a game?

I guess games are started in 720p, but I'm not sure: putting the application in full screen didn't degrade the quality, so maybe it's 1080p and it just doesn't go full screen when you run it...

Frame rate... this sucks.  Games seem to run on a fat PS4, not a PS4 Pro that would allow 60 fps.  In most games you are stuck with 30 fps and an insane input lag.  I was not able to cope with AAA games like God of War or Watch Dogs Legion, it was horrible.

Independent games like Alex Kidd remaster, Monster Boy or Rain World did feel very smooth though (60fps!), so it's really an issue with the hardware used to run the games.

Don't expect any PS5 games in streaming from Windows, there are none.

The service allows PlayStation users to play all games from the library (including PS5 games) in streaming up to 2160p@120fps, but not the application users.  This feature is only useful if you want to try a game before installing it, or if your PlayStation storage is full.

# Cloud saving

This is fun here too.  There are game saves in the PlayStation Plus program cloud, but if you also play on a PlayStation, their saves are sent to a different storage than the PlayStation cloud saves.

There is a horrible menu to copy saves from one pool to the other.

This is not an issue if you only use the streaming application or the PlayStation, but it gets very hard to figure out where your save is if you play on both.

# Conclusion

I have been highly disappointed by the streaming service (outside PlayStation use).  The Windows program required signing in twice before working (I tried on 5 devices!), most interesting games run poorly due to the PS4 hardware, and there is no way to enable the performance mode that was added to many games to support the PS4 Pro.  This is pretty curious, as streaming from a PlayStation device is a stellar experience: super smooth, high quality, no input lag, no waiting, crystal clear picture.

No Android application? Curious...  No support for a genuine PS5 controller, WTF?

The service is still young, I really hope they will work at improving the streaming ecosystem.

At least, it works reliably and pretty well for simpler games.

It could be a fantastic service if the following requirements were met:

* proper hardware to run games at 60fps
* greater controller support
* allow playing in a web browser, or at least allow people to run it on smartphones with a native application
* an open source client while there
* merged cloud saves

Cloud gaming review using Geforce Now

# Introduction

I'm finally done with ADSL now as I got access to optical fiber last week!  It was time for me to try cloud gaming again and see how it improved since my last use in 2016.

If you are not familiar with cloud gaming, please do not run away; here is a brief description.  Cloud gaming refers to a service allowing one to play locally a game running on a remote machine (either on the local network or over the Internet).

There are a few commercial services available, mainly: GeForce Now, PlayStation Plus Premium (other tiers don't have streaming), Xbox game pass Ultimate and Amazon Luna.  Two major services died in the long run: Google Stadia and Shadow (which is back now with a different formula).

A note on Shadow, they are now offering access to an entire computer running Windows, and you do what you want with it, which is a bit different from other "gaming" services listed above.  It's expensive, but not more than renting an AWS system with equivalent specs (I know some people doing that for gaming).

This article is about the service Nvidia GeForce Now (not sponsored, just to be clear).

I tried the free tier, premium tier and ultimate tier (thanks to people supporting me on Patreon, I could afford the price for this review).

=> https://www.nvidia.com/en-us/geforce-now/ Geforce Now official page

=> https://play.geforcenow.com/mall/ Geforce Now page where you play (not easy to figure after a login)

# The service

This is the first service I tried in 2016 when I received an Nvidia Shield HTPC, and the experience was quite solid back in the day.  But is it good in 2024?

The answer is clear, yes, it's good, but it has limitations you need to be aware of.  The free tier allows playing for a maximum of 1 hour in a single session, with a waiting queue that can be short (< 1 minute) or long (> 15 minutes); the average waiting time I had was around 9 minutes.  The waiting queue also displays ads now.

The premium tier at 11€$/month removes the queue system by giving you priority over free users, always assigns an RTX card and allows playing up to 6 hours in a single session (you just need to start a new session if you want to continue).

Finally, the ultimate tier costs 22€$/month and allows you to play in 4K@120fps on a RTX 4080, up to 8h.

The tiers are quite good in my opinion, you can try and use the service for free to check if it works for you, then the premium tier is affordable to be used regularly.  The ultimate tier will only be useful to advanced gamers who need 4K, or higher frame rates.

Nvidia just released a new offer in early March 2024: a premium daily pass for $3.99, or an ultimate daily pass for 8€.  This is useful if you want to evaluate a tier before paying for 6 months.  You will understand later why this daily pass can be useful compared to buying a full month.

# Operating system support

I tried the service using a Steam Deck, a Linux computer over Wi-Fi and Ethernet, a Windows computer over Ethernet and in a VM on Qubes OS.  The latency and quality were very different.

If you play in a web browser (Chrome based, Edge, Safari), make sure it supports hardware acceleration video decoding, this is the default for Windows but a huge struggle on Linux, Chrome/Chromium support is recent and can be enabled using `chromium --enable-features=VaapiVideoDecodeLinuxGL --use-gl=angle`.  There is a Linux Electron App, but it does nothing more than bundling the web page in chromium, without acceleration.

On a web browser, the codec used is limited to h264, which does not work great with dark areas and is less effective than advanced codecs like AV1 or HEVC (commonly known as h265).  If your web browser can't handle the stream, it will lose packets, and the GeForce service will instantly reduce the quality until you stop losing packets, which makes things very ugly until it recovers, then it drops again.  Using hardware acceleration solves the problem almost entirely!

Web browser clients are also limited to 60 fps (so ultimate tier is useless), and Windows web browsers can support 1440p but no more.

On Windows and Android you can install a native Geforce Now application, and it has a LOT more features than in-browser.  You can enable Nvidia reflex to remove any input lag, HDR for compatible screens, 4K resolution, 120 fps frame rate etc...  There is also a feature to add color filters for whatever reason...  The native program used AV1 (I only tried with the ultimate tier), games were smooth with stellar quality and not using more bandwidth than in h264 at 60 fps.

I took a screenshot while playing Baldur's Gate 3 on different systems, you can compare the quality:

=> static/geforce_now/windows_steam_120fps_natif.png Playing on Steam native program, game set to maximum quality
=> static/geforce_now/windows_av1_120fps_natif_sansupscale_gamma_OK.png Playing on Geforce Now on Windows native app, game set to maximum quality
=> static/geforce_now/linux_60fps_chrome_acceleration_maxquality_gammaok.png Playing on Geforce Now on Linux with hardware acceleration, game set to maximum quality

In my opinion, the best looking one is surprisingly the Geforce Now on Windows, then the native run on Steam and finally on Linux where it's still acceptable.  You can see a huge difference in terms of quality in the icons in the bottom bar.

# Tier system

When I upgraded from free to premium tier, I paid for 1 month and was instantly able to use the service as a premium user.

Premium gives you priority in the queues, I saw the queue display a few times for a few seconds, so there is virtually no queue, and you can play for 6 hours in a row.

When I upgraded from premium to ultimate tier, I was expecting to pay the price difference between my current subscription and the new one, but it worked differently.  I had to pay for a whole month of ultimate tier, and my remaining premium time was converted into ultimate time: since ultimate costs a bit more than twice premium, a pro rata was applied, resulting in something like 12 extra days of ultimate for the premium month.

Ultimate tier allows reaching 4K resolution at a 120 fps refresh rate, allows saving video settings in games so you don't have to tweak them every time you play, and provides an Nvidia 4080 for every session, so you can always set the graphics settings to maximum.  You can also play up to 8 hours in a row.  Additionally, you can record gaming sessions or the past n minutes; there is a dedicated panel using Ctrl+G.  It's possible to reach 240 fps on compatible monitors, but only at 1080p resolution.

Due to this tier upgrade method, the ultimate daily pass can be interesting: if you have 6 months of premium left, you certainly don't want to convert them into 2 months of ultimate plus a month of paid ultimate just to try it.

# Gaming quality

As a gamer, I'm highly sensitive to latency, and local streaming has always felt poor in that regard, so I've been very surprised to see I can play an FPS game with a mouse on cloud gaming.  I had a ping of 8-75 ms to the streaming servers, which was really OK.  Games featuring "Nvidia Reflex" have no noticeable input lag; this is almost magic.

When using a proper client (native Windows client or a web browser with hardware acceleration), the quality was good, input lag barely noticeable (none in the app), it made me very happy :-)

Using the free tier, I always had a rig good enough to put the graphics quality on High or Ultra, which surprised me for a free service.  On premium and later, I had an Nvidia 2080 minimum which is still relevant nowadays.

The service can handle multiple controllers!  You can use any kind of controller, and even mix Xbox / PlayStation / Nintendo controllers, no specific hardware required here.  This is pretty cool as I can visit my siblings, bring controllers and play together on their computer <3.

Another interesting benefit is that you can switch your gaming session from one device to another by connecting with the other device while already playing; GeForce Now will hand over to the newly connected device without interruption.

# Games library

This is where GeForce Now is pretty cool: you don't need to buy games from them.  You can import your own libraries like Steam, Ubisoft, Epic store, GOG (only CD Projekt Red games) or Xbox Game Pass games.  Not all games from your libraries will be playable though!  And for some reason, some games are only available when run from Windows (native app or web browser), like Genshin Impact, which won't appear in the games list when connected from a non-Windows client?!

If you already own games (don't forget to claim weekly free Epic store games), you can play most of them on GeForce Now, and thanks to cloud saves, you can sync progression between sessions or with a local computer.

There are a bunch of free-to-play games that are good (like Warframe, Genshin Impact, some MMOs), so you could enjoy playing video games without having to buy one (until you get bored?).

# Cost efficiency

If you don't currently own a modern gaming computer, and you subscribe to the premium tier (9.17 $€/month when signing for 6 months), this costs you 110 $€ / year.

Given an equivalent GPU costs at least 400€$ and could cope with games in high quality for 3 years (I'm optimistic), the GPU alone costs more than subscribing to the service.  Of course, a local GPU can also be used for data processing nowadays, be sold second hand, or be used for many years on older games.

If you add the whole computer around the GPU, renewed every 5 or 6 years (we are targeting to play modern games in high quality here!), you can add 1200 $€ / 5 years (or 240 $€ / year).

When using the ultimate tier, you instantly get access to the best GPU available (currently a Geforce 4080, retail value of 1300€$).  Cost wise, this is impossible to beat with owned hardware.

I did some math to figure out how much money you can save on electricity: the average gaming rig draws approximately 350 watts when playing; a GeForce Now thin client plus a monitor would use 100 watts in the worst case scenario (a laptop alone would be closer to 35 watts).  So, you save 0.25 kWh per hour of gaming; if one plays 100 hours per month (that's 20 days playing 5 h, or 3.33 hours / day), they would save 25 kWh.  The official rate in France is 0.25 € / kWh, which would result in a 6.25 € saving on electricity.  The monthly subscription is immediately less expensive when taking this into account.  Obviously, if you play less, the savings are smaller.
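The arithmetic above, condensed (same numbers as in the text):

```shell
# 350 W gaming rig vs 100 W thin client + monitor,
# 100 hours of play per month, at the French rate of 0.25 EUR/kWh.
saved_kwh=$(awk 'BEGIN { print (350 - 100) / 1000 * 100 }')
saved_eur=$(awk 'BEGIN { print (350 - 100) / 1000 * 100 * 0.25 }')
echo "saved per month: $saved_kwh kWh, $saved_eur EUR"
```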

# Bandwidth usage and ecology

Most of the time, the streaming was using between 3 and 4 MB/s for a 1080p@60fps (full-hd resolution, 1920x1080, at 60 frames per second) in automatic quality mode.  Playing at 30 fps or on smaller resolutions will use drastically less bandwidth.  I've been able to play in 1080p@30 on my old ADSL line! (quality was degraded, but good enough).  Playing at 120 fps slightly increased the bandwidth usage by 1 MB/s.

I remember a long tech article about ecology and cloud gaming which concluded that cloud gaming is only more "eco-friendly" than running locally if you play less than a dozen hours.  However, it assumed you already had a capable gaming computer locally whether you use cloud gaming or not, which is a huge bias in my opinion.  It also didn't account for the fact that one may install a video game multiple times, and that a single game now weighs 100 GB (bandwidth-wise, the equivalent of 20 h of cloud gaming!).  The biggest con was the bandwidth requirements and the worldwide maintenance needed to keep high speed lines for everyone.  I do think cloud gaming is way more efficient, as it allows pooling gaming devices instead of everyone owning their own hardware.

As a comparison, 4K streaming at Netflix uses 25 Mbps of network (~ 3.1 MB/s).

# Playing on Android

GeForce Now allows you to play any compatible game on Android, but is it worth it?  I tried it with a Bluetooth controller on my BQ Aquaris X running LineageOS (a 7 years old phone with average specs and a 720p screen).

I was able to play over Wi-Fi using the 5 GHz network; it felt perfect except that I had to prop the smartphone screen up in a comfortable position.  This drained the battery at a rate of 0.7% / minute, but this is an old phone; I expect newer hardware to do better.

On 4G, the battery usage was lower than on Wi-Fi, at 0.5% / minute.  At 720p@60fps, the service used an average of 1.2 MB/s of data during a Monster Hunter: World session.  At this rate, you can expect a data usage of 4.3 GB per hour of gameplay, which could be a lot or cheap depending on your usage and mobile subscription.

Globally, playing on Android was very good, but only if you have a controller.  There are interesting folding controllers that sandwich the smartphone between two parts, turning it into something looking like a Nintendo Switch, this can be a very interesting device for players.

# Tips

You can use "Ctrl+G" to change settings while in game or also display information about the streaming.

In GeForce Now settings (not in-game), you can choose the server location if you want to try a different datacenter.  I set it to choose the nearest; otherwise I could land on a remote one with a bad ping.

GeForce Now even works on OpenBSD or Qubes OS qubes (more on that later on Qubes OS forum!).

=> https://forum.qubes-os.org/t/cloud-gaming-with-geforce-now/24964 Qubes OS forum discussion

# Conclusion

GeForce Now is a pretty neat service: the free tier is good enough for occasional gamers who play once in a while for a short session, and the paid tiers provide a cheaper alternative to keeping a gaming rig up to date.  I really like that they let me use my own library instead of having to buy games on their own store.

I'm preparing another blog post about local and self-hosted cloud gaming, and I have to admit I haven't been able to do better than GeForce Now, even on the local network...  Engineers at GeForce Now certainly know their stuff!

The experience was solid and enjoyable even on a 10 years old laptop.  A "cool" side effect when playing is the surrounding silence, as no local CPU/GPU is crunching for rendering!  My GPU is still capable of handling modern games at average quality at 60 FPS, but I may consider the premium tier in the future instead of replacing it.

Script NAT on Qubes OS

# Introduction

As a daily Qubes OS user, I often feel the need to expose a port of a given qube to my local network.  However, the process is quite painful because it requires adding the NAT rules on each layer (usually net-vm => sys-firewall => qube); it's a lot of wasted time.

I wrote a simple script, meant to be run from dom0, that does all the work: opening the ports on the qube, and, for each NetVM on the path, opening and redirecting the ports.

=> https://git.sr.ht/~solene/qubes-os-nat Qubes OS Nat git repository

# Usage

It's quite simple to use; the hardest part will be remembering how to copy it to dom0 (download it in a qube, then from dom0 run something like `qvm-run --pass-io <qube> 'cat nat.sh' > nat.sh` to retrieve it).

Make the script executable with `chmod +x nat.sh`, now if you want to redirect the port 443 of a qube, you can run `./nat.sh qube 443 tcp`. That's all.

Be careful, the changes ARE NOT persistent. This is on purpose, if you want to always expose ports of a qube to your network, you should script its netvm accordingly.
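To give an idea of what the script automates, here is a rough sketch of the two rules a single hop needs.  iptables syntax is used here for readability (current Qubes OS releases use nftables underneath), and the qube IP is an example; this is not the real script:

```shell
# Emit the DNAT + forward rules one NetVM hop needs (illustrative only).
nat_rules() { # usage: nat_rules <destination ip> <port> <proto>
    echo "iptables -t nat -A PREROUTING -p $3 --dport $2 -j DNAT --to-destination $1:$2"
    echo "iptables -I FORWARD -p $3 -d $1 --dport $2 -j ACCEPT"
}
rules=$(nat_rules 10.137.0.10 443 tcp)
echo "$rules"
```

The real script applies the equivalent of these two rules on every NetVM between the network and the target qube, which is exactly the repetitive part you would otherwise do by hand.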

# Limitations

The script does not alter the firewall rules handled by `qvm-firewall`; it only opens the ports and redirects them (this happens at a different level).  This can be cumbersome for some users, but I decided not to touch rules hard-coded by users in order to not break any expectations.

Running the script should not break anything.  It works for me, but it has only been lightly tested.

# Some useful ports

## Avahi daemon port

The avahi daemon uses UDP port 5353.  You need this port to discover devices on a network.  This can be particularly useful to find network printers or scanners and use them in a dedicated qube.  With the script above, that would be `./nat.sh <qube> 5353 udp`.

# Evolutions

It could be possible to use this script in qubes-rpc, which would allow any qube to ask for a port forwarding.  I was going to write it this way at first, but then I thought it may be a bad idea to let a qube run a dom0 script as root that reads untrusted inputs; your mileage may vary.