Home Assistant Container Part 12: Migrating to Podman
Intro ¶
From the Podman documentation:
Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. Most users can simply alias Docker to Podman (alias docker=podman) without any problems. Similar to other common Container Engines (Docker, CRI-O, containerd), Podman relies on an OCI compliant Container Runtime (runc, crun, runv, etc) to interface with the operating system and create the running containers. This makes the running containers created by Podman nearly indistinguishable from those created by any other common container engine.
In a timespan of 2 or 3 days my no-brand M.2 SSD crapped itself, destroying my HA Container setup with it, and we had a discussion at work regarding Docker licenses and open-source alternatives, Podman being one of them. These events made me decide to start over with a new Podman-based setup of Home Assistant Container as soon as my newly ordered Crucial SATA SSD had arrived. Sadly, I didn’t have the chance to document the changes I made to my Docker setup using macvlans. I would have loved to share that as well.
This blog post will be kind of a logbook of the steps I’ve taken to set everything up. Most – if not all – will be based on a topic in the HA forums: Work in progress: configuration for running a Home Assistant in containers with systemd and podman on Fedora IoT.
Note that this does not negate anything we’ve done in the Home Assistant Container series so far, and you’re still free to use Docker Compose for your setup.
Fedora CoreOS ¶
I’m no Linux/Unix expert and so far I’ve been running mainly Debian and Ubuntu VMs. But as I’m already making things complicated for myself by starting from scratch and learning to work with a new container manager as I go, I decided to make it extra hard on myself and learn how to work with Fedora CoreOS as well.
Fedora CoreOS is an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It is currently available on multiple platforms, with more coming soon.
Fedora CoreOS also comes with Docker and Podman installed by default.
I installed Fedora CoreOS bare-metal on my Beelink (after installing a new SSD) using the Live USB image.
Preparing Live USB ¶
- Download the Fedora CoreOS Bare Metal ISO. I recommend getting it from the Stable stream.
- Follow the steps to verify your download: download the checksum file and signature, import Fedora’s GPG key, verify the validity of the signature, and verify that the checksum matches. This way you’re certain your downloaded ISO wasn’t corrupted or altered along the way (see the sketch after this list).
- If you’re on Linux or Mac, write the ISO image to your USB drive using dd. Make sure you fully understand what you’re doing here!
  sudo dd bs=4M if=fedora-coreos-36.20221030.3.0-live.x86_64.iso of=/dev/sdb conv=fdatasync status=progress
  If you’re on Windows, use a tool like balenaEtcher.
- Once the writing process is done, you can (safely) eject your USB drive from your computer and plug it into your target system.
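As a rough sketch of the verification step — the filenames below are placeholders, use the checksum and signature files you actually downloaded:
# Import Fedora's GPG key (same key we'll use later for Butane)
curl https://getfedora.org/static/fedora.gpg | gpg --import
# Verify the signature on the checksum file, then verify the ISO against it
gpg --verify fedora-coreos-36.20221030.3.0-live.x86_64.iso-CHECKSUM.sig
sha256sum -c fedora-coreos-36.20221030.3.0-live.x86_64.iso-CHECKSUM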
Preparing Ignition configuration ¶
Fedora CoreOS doesn’t come with default credentials. So later, when we install it on our system, we won’t be able to log in on the console or over SSH.
To solve this, we’ll create an Ignition configuration which will be read by the setup process and which we’ll use to set up our accounts.
Ignition is a provisioning utility that reads a configuration file (in JSON format) and provisions a Fedora CoreOS system based on that configuration.
We start off by creating a Butane configuration file.
In this configuration file, we create two accounts, core and hass, with SSH keys.
The core user is added to the wheel group so it gets sudo rights, and the hass user is added to the dialout group.
We also add a systemd unit that will download and launch etcd at boot.
etcd is a strongly consistent, distributed key-value store that provides a reliable way to store data that needs to be accessed by a distributed system or cluster of machines. It gracefully handles leader elections during network partitions and can tolerate machine failure, even in the leader node.
We also configure auto-updates so they aren’t too eager to update and only allow reboots in a specific time slot.
fedoracore.be
variant: fcos
version: 1.4.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AA... # UPDATE THIS
      groups:
        - wheel
    - name: hass
      ssh_authorized_keys:
        - ssh-ed25519 AA... # UPDATE THIS
      groups:
        - dialout
storage:
  files:
    - path: /etc/hostname
      mode: 0644
      contents:
        inline: fedoracore # Optional: change this
    # OPTIONAL - Wariness to updates
    # https://docs.fedoraproject.org/en-US/fedora-coreos/auto-updates/#_wariness_to_updates
    - path: /etc/zincati/config.d/51-rollout-wariness.toml
      contents:
        inline: |
          [identity]
          rollout_wariness = 0.75
    # OPTIONAL - Update Strategy
    # https://coreos.github.io/zincati/usage/updates-strategy/
    - path: /etc/zincati/config.d/55-updates-strategy.toml
      contents:
        inline: |
          [updates]
          strategy = "periodic"
          [[updates.periodic.window]]
          days = [ "Sat", "Sun" ]
          start_time = "22:45"
          length_minutes = 60
systemd:
  units:
    - name: etcd-member.service
      enabled: true
      contents: |
        [Unit]
        Description=Run a single node etcd
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStartPre=mkdir -p /var/lib/etcd
        ExecStartPre=-/bin/podman kill etcd
        ExecStartPre=-/bin/podman rm etcd
        ExecStartPre=-/bin/podman pull quay.io/coreos/etcd
        ExecStart=/bin/podman run --name etcd --net=host \
          --volume /var/lib/etcd:/etcd-data:z \
          quay.io/coreos/etcd:latest /usr/local/bin/etcd \
          --data-dir /etcd-data --name node1 \
          --initial-advertise-peer-urls http://127.0.0.1:2380 \
          --listen-peer-urls http://127.0.0.1:2380 \
          --advertise-client-urls http://127.0.0.1:2379 \
          --listen-client-urls http://127.0.0.1:2379 \
          --initial-cluster node1=http://127.0.0.1:2380
        ExecStop=/bin/podman stop etcd

        [Install]
        WantedBy=multi-user.target
To convert this Butane configuration file into an Ignition config, we need the Butane tool. There are multiple methods to get Butane, like using a Podman container or by using a distribution-provided package, but I opted for the standalone executable.
curl https://getfedora.org/static/fedora.gpg | gpg --import
# Check latest version at https://github.com/coreos/butane/releases
wget https://github.com/coreos/butane/releases/download/v0.16.0/butane-x86_64-unknown-linux-gnu
wget https://github.com/coreos/butane/releases/download/v0.16.0/butane-x86_64-unknown-linux-gnu.asc
gpg --verify butane-x86_64-unknown-linux-gnu.asc
Finally, to convert the configuration file:
chmod u+x butane-x86_64-unknown-linux-gnu
./butane-x86_64-unknown-linux-gnu --pretty --strict fedoracore.be | tee fedoracore.ign
{
  "ignition": {
    "version": "3.3.0"
  },
  "passwd": {
    "users": [
      {
        "groups": [
          "wheel"
        ],
        "name": "core",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AA..."
        ]
      },
      {
        "groups": [
          "dialout"
        ],
        "name": "hass",
        "sshAuthorizedKeys": [
          "ssh-ed25519 AA..."
        ]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "contents": {
          "compression": "",
          "source": "data:,fedoracore"
        },
        "mode": 420
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "contents": "[Unit]\nDescription=Run a single node etcd\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nExecStartPre=mkdir -p /var/lib/etcd\nExecStartPre=-/bin/podman kill etcd\nExecStartPre=-/bin/podman rm etcd\nExecStartPre=-/bin/podman pull quay.io/coreos/etcd\nExecStart=/bin/podman run --name etcd --net=host \\\n --volume /var/lib/etcd:/etcd-data:z \\\n quay.io/coreos/etcd:latest /usr/local/bin/etcd \\\n --data-dir /etcd-data --name node1 \\\n --initial-advertise-peer-urls http://127.0.0.1:2380 \\\n --listen-peer-urls http://127.0.0.1:2380 \\\n --advertise-client-urls http://127.0.0.1:2379 \\\n --listen-client-urls http://127.0.0.1:2379 \\\n --initial-cluster node1=http://127.0.0.1:2380\nExecStop=/bin/podman stop etcd\n\n[Install]\nWantedBy=multi-user.target\n",
        "enabled": true,
        "name": "etcd-member.service"
      }
    ]
  }
}
We will need to make this configuration file available to our system over the network, either by hosting it locally on a (temporary) webserver or by hosting it online.
I opted to use a temporary file hosting service named tmpfiles.org. This is not an endorsement of this service. Always be cautious when sharing/uploading configurations with potentially sensitive information (like SSH pubkeys) to an unknown 3rd party!
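If you’d rather keep the file on your LAN, a minimal sketch is to serve the directory containing the Ignition file from your workstation for the duration of the install (assuming Python 3 is available and port 8000 is reachable from the target machine):
# Run from the directory containing fedoracore.ign
python3 -m http.server 8000
# The Ignition URL then becomes http://<ip.of.your.workstation>:8000/fedoracore.ign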
Installing Fedora CoreOS ¶
Now we’re ready to install Fedora on our system via the Live USB.
- Plug the USB into the target system.
- Power up the system and spam the ESC button on your keyboard.
- Make sure you boot from the USB.
- Confirm the path of the system’s hard drive (probably /dev/sda):
  sudo fdisk -l
- Run the following command in the terminal to launch the installer. Don’t forget to modify the URL to point to your Ignition config.
  sudo coreos-installer install /dev/sda --ignition-url https://example.com/example.ign
- Once the installer has finished, reboot the system and unplug the USB.
  sudo reboot
Finishing touches ¶
Now you can SSH into the Fedora system from your own computer.
ssh -i ~/.ssh/id_fc_core core@IP.OF.YOUR.BOX
Linger ¶
We’ll be setting up our containers using systemd.
To keep systemd from stopping all of your containers when you log out, we enable linger on the hass account.
sudo loginctl enable-linger hass
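A quick check to confirm this took effect:
loginctl show-user hass --property=Linger
# Expected output: Linger=yes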
Directory structure ¶
Local config will be stored in /srv/hass/.
Grouping everything in one folder also makes backing up a lot easier.
sudo mkdir /srv/hass
sudo chown -R hass:hass /srv/hass
Podman setup ¶
All Podman config will be done as the hass user.
ssh -i ~/.ssh/id_ed25519_fc_hass hass@<ip.of.your.system>
Still following the guide from the HA Forum, we’ll be setting up multiple containers.
This includes a Certbot and an Nginx reverse proxy container to provide external access to your Home Assistant instance. If you prefer to set up remote access via the Nabu Casa cloud, you won’t need these.
Podman Autoupdate ¶
Podman has an auto-update feature which is triggered via a systemd timer. We’ll enable it in our user session, as that’s where we’ll be running our rootless containers.
systemctl --user enable --now podman-auto-update.timer
Created symlink /var/home/hass/.config/systemd/user/timers.target.wants/podman-auto-update.timer → /usr/lib/systemd/user/podman-auto-update.timer.
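Later on, once containers carrying the io.containers.autoupdate=registry label are running, you can preview what the timer would do without actually updating anything:
podman auto-update --dry-run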
Podman containers ¶
Certbot and LetsEncrypt ¶
If you want valid SSL certificates (also required for the Companion app), you’ll need your own domain name.
I advise using a registrar that supports the DNS challenge type for Certbot.
That way you don’t need to expose any systems to request certificates.
You can even get a wildcard *.yourdomain.tld certificate, so you don’t need to set up public domain names for your systems.
Another advantage is that you don’t run into issues with a changing public IP when doing the web challenge, which would otherwise require setting up Dynamic DNS as a fix.
We’ll create a file to store our secrets (keys for the challenge) and a directory for the certbot container to store its files.
mkdir -p /srv/hass/certbot
mkdir -p /srv/hass/secret
/srv/hass/secret/certbot-creds
dns_ovh_endpoint = ovh-eu
dns_ovh_application_key = MDAwMDAwMDAwMDAw
dns_ovh_application_secret = MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAw
dns_ovh_consumer_key = MDAwMDAwMDAwMDAwMDAwMDAwMDAwMDAw
The exact config for Certbot will depend on your registrar and your own domain name.
In my case, I’m using OVH.
I might add a blog post on how to get the necessary keys and tokens for the OVH DNS challenge.
~/.config/systemd/user/container-certbot.service
[Unit]
Description=CertBot container to renew Let's Encrypt certificates
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-abnormal
RestartSec=3600
TimeoutStopSec=60
TimeoutStartSec=10800
ExecStartPre=-/usr/bin/podman secret create certbot-creds /srv/hass/secret/certbot-creds
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-a stdout -a stderr \
--replace \
--label "io.containers.autoupdate=registry" \
--name certbot \
--volume=/srv/hass/certbot:/etc/letsencrypt:z \
--secret=certbot-creds,mode=0400 \
docker.io/certbot/dns-ovh -n renew --dns-ovh --dns-ovh-credentials /run/secrets/certbot-creds
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=-/usr/bin/podman secret rm example.org.cert
ExecStopPost=/usr/bin/podman secret create example.org.cert /srv/hass/certbot/live/example.org/fullchain.pem
ExecStopPost=-/usr/bin/podman secret rm example.org.key
ExecStopPost=/usr/bin/podman secret create example.org.key /srv/hass/certbot/live/example.org/privkey.pem
ExecStopPost=-/usr/bin/podman secret rm certbot-creds
Type=oneshot
NotifyAccess=all
[Install]
WantedBy=default.target
Notice how this service config uses podman secret create to pass our certbot credentials to the container.
It also removes this secret when the service is stopped.
Because the order of actions (specifically the pre and post steps) matters, Type=oneshot is used here.
~/.config/systemd/user/container-certbot.timer
[Unit]
Description=Renews any certificates that need them renewed
Requires=container-certbot.service
[Timer]
Unit=container-certbot.service
OnCalendar=weekly
[Install]
WantedBy=timers.target
Then enable the timer service.
systemctl --user enable --now container-certbot.timer
Created symlink /var/home/hass/.config/systemd/user/timers.target.wants/container-certbot.timer → /var/home/hass/.config/systemd/user/container-certbot.timer.
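To double-check when the renewal will next run, you can list the user timer:
systemctl --user list-timers container-certbot.timer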
We do need to run the certbot tool manually once to get our certificates, which the renew service will then keep up-to-date. We also accept the Terms of Service of LetsEncrypt and give them an email address to remind us of expiring certs.
/usr/bin/podman secret create certbot-creds /srv/hass/secret/certbot-creds
/usr/bin/podman run \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
-a stdout -a stderr \
--name certbot \
--volume=/srv/hass/certbot:/etc/letsencrypt:z \
--secret=certbot-creds,mode=0400 \
docker.io/certbot/dns-ovh certonly \
-n --agree-tos --email certbot@example.org \
--dns-ovh \
--dns-ovh-credentials /run/secrets/certbot-creds \
-d *.example.org
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Account registered.
Requesting a certificate for *.example.org
Waiting 120 seconds for DNS changes to propagate
Successfully received certificate.
Certificate is saved at: /etc/letsencrypt/live/example.org/fullchain.pem
Key is saved at: /etc/letsencrypt/live/example.org/privkey.pem
This certificate expires on 2023-02-23.
These files will be updated when the certificate renews.
NEXT STEPS:
- The certificate will need to be renewed before it expires. Certbot can automatically renew the certificate in the background, but you may need to take steps to enable that functionality. See https://certbot.org/renewal-setup for instructions.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
If you like Certbot, please consider supporting our work by:
* Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
* Donating to EFF: https://eff.org/donate-le
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Running the renew service once after this will also ensure our certs are stored as Podman secrets. These secrets hold the SSL cert and private key and will be shared with the Nginx reverse proxy container.
systemctl --user start container-certbot.service
podman secret ls
ID NAME DRIVER CREATED UPDATED
219a7a4790c900f313e0fd44f example.org.key file 15 minutes ago 15 minutes ago
62ff3f2b637f31917f4fc0234 certbot-creds file 45 minutes ago 45 minutes ago
f8a3cafb519fc30df6e24b8b9 example.org.cert file 15 minutes ago 15 minutes ago
Nginx reverse proxy ¶
The Nginx reverse proxy will be the bridge between the internet and Home Assistant.
The exposed ports of the Nginx container are deliberately set to so-called non-privileged ports (>1024) so we can keep our container rootless.
Using port forwarding on our router, we can open ports 80/tcp (HTTP) and 443/tcp (HTTPS) on the router while referring to ports 9080/tcp and 9443/tcp respectively on the container.
The proxy will then, based on the hostname, forward the connection to our Home Assistant instance using the IP of Podman’s virtual network gateway 10.0.2.2 (see proxy_pass).
Let’s start off with the config for the proxy. Don’t forget to update the following:
- The name of the SSL certificate and private key.
- server_name to match your Home Assistant hostname. A server_name of _ will match any domain name. We use this to forward HTTP to HTTPS, but I’d advise NOT to redirect any random hostname to your Home Assistant instance.
/srv/hass/nginx/nginx.conf
worker_processes auto;
error_log /dev/stdout info;
pid /run/nginx.pid;

include /usr/share/nginx/modules/mod-stream.conf;

events {
    worker_connections 1024;
}

http {
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    ssl_certificate "/run/secrets/example.org.cert";
    ssl_certificate_key "/run/secrets/example.org.key";

    server {
        listen 9080;
        listen [::]:9080;
        server_name _;
        return 301 https://$host$request_uri;
    }

    server {
        listen 9443 ssl http2;
        listen [::]:9443 ssl http2;
        server_name homeassistant.example.org;

        ssl_session_cache shared:SSL:1m;
        ssl_session_timeout 10m;
        ssl_ciphers PROFILE=SYSTEM;
        ssl_prefer_server_ciphers on;

        proxy_buffering off;

        location / {
            proxy_pass http://10.0.2.2:8123;
            proxy_set_header Host $host;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
        }
    }
}
Next is the config for the Nginx container.
Don’t forget to modify the name of the secrets to match your config.
~/.config/systemd/user/container-nginx.service
[Unit]
Description=Nginx Reverse Proxy in a container
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
Wants=container-certbot.service
After=container-certbot.service
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=10
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm --sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name nginx \
--net slirp4netns:allow_host_loopback=true \
-p 9080:9080 \
-p 9443:9443 \
--volume=/srv/hass/nginx/nginx.conf:/etc/nginx/nginx.conf:Z \
--secret example.org.cert \
--secret example.org.key \
registry.access.redhat.com/ubi8/nginx-120:latest nginx -g "daemon off;"
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
We can now launch the container, even though the redirect won’t function yet.
systemctl --user enable --now container-nginx.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-nginx.service → /var/home/hass/.config/systemd/user/container-nginx.service.
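Once Home Assistant itself is running (next section), you can sanity-check the proxy from another machine on your LAN; a rough example assuming the ports and hostname used above:
# HTTP should answer with a 301 redirect to HTTPS
curl -I http://<ip.of.your.system>:9080
# HTTPS should reach Home Assistant through the proxy (-k skips certificate validation for this test)
curl -kI https://<ip.of.your.system>:9443 -H 'Host: homeassistant.example.org'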
Firewall config ¶
Now set up your firewall/router to port forward ports 80/tcp and 443/tcp on your WAN (internet) side to ports 9080/tcp and 9443/tcp on your Fedora system respectively.
Home Assistant ¶
This Home Assistant container has a few features.
First of all, it wants (but doesn’t require) the Z-Wave and Zigbee containers to be running, but it won’t shut down if they’re not. You can comment these lines out if you won’t have these running anyway. I’ll comment out the Z-Wave lines as I don’t use it in my Home Assistant setup.
Secondly, there’s a custom watchdog service that will ensure the container restarts if Home Assistant ends up in an error state without exiting. Since we first need to set up HA before we can install the watchdog, we’ll comment out its config for now. I’m keeping it in the config below for future reference.
Also, like with the Certbot container setup, we need to make sure the folder for the HA config exists.
mkdir /srv/hass/hass
~/.config/systemd/user/container-homeassistant.service
[Unit]
Description=Home Assistant Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
# Wants=container-zwave.service
Wants=container-zigbee.service
# After=container-zwave.service
After=container-zigbee.service
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels WATCHDOG_USEC=5000000
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
# Note: this is using https://github.com/brianegge/home-assistant-sdnotify.
# Once installed, uncomment the `WatchdogSec` line below
# and change `--sdnotify=conmon` to `--sdnotify=container`.
# WatchdogSec=60
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=homeassistant \
--volume=/srv/hass/hass:/config:z \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Note that we tag the config folder with the shared content label (:z).
This is because we will be sharing this folder with the HASS Configurator container.
If you don’t need or want to share the configuration folder, use the private content label (:Z) instead.
Now, again, we start the container.
systemctl --user enable --now container-homeassistant.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-homeassistant.service → /var/home/hass/.config/systemd/user/container-homeassistant.service.
Give it some time and you should finally be able to see a glimpse of Home Assistant appearing at http://ip.of.your.system:8123.
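If nothing shows up, following the container logs while it starts usually tells you why; either of these works:
journalctl --user -fu container-homeassistant.service
podman logs -f homeassistant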
Mosquitto MQTT Broker ¶
We will need the Mosquitto MQTT broker when we’re setting up the Zigbee2MQTT container. But a lot of devices and applications can use MQTT as a message protocol, so I always recommend installing Mosquitto.
mkdir -p /srv/hass/mosquitto/config
mkdir /srv/hass/mosquitto/data
mkdir /srv/hass/mosquitto/log
touch /srv/hass/mosquitto/config/mqttuser
The Mosquitto config needs to be present before we start the container. We can copy our config for Mosquitto from our Docker setup.
/srv/hass/mosquitto/config/mosquitto.conf
# Listen on port 1883
listener 1883
socket_domain ipv4
# save the in-memory database to disk
persistence true
persistence_location /mosquitto/data/
# Log to stderr and logfile
log_dest stderr
log_dest file /mosquitto/log/mosquitto.log
# Require authentication
allow_anonymous false
password_file /mosquitto/config/mqttuser
And yet another service file to autostart our container.
~/.config/systemd/user/container-mosquitto.service
[Unit]
Description=Mosquitto MQTT in a container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=10
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name mosquitto \
-p 1883:1883 \
--volume=/srv/hass/mosquitto/config:/mosquitto/config:Z \
--volume=/srv/hass/mosquitto/data:/mosquitto/data:Z \
--volume=/srv/hass/mosquitto/log:/mosquitto/log:Z \
docker.io/eclipse-mosquitto
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
And start the container.
systemctl --user enable --now container-mosquitto.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-mosquitto.service → /var/home/hass/.config/systemd/user/container-mosquitto.service.
We can now create accounts for the services we’ll be using later.
podman exec -it mosquitto mosquitto_passwd -c /mosquitto/config/mqttuser mqtt_ha
Password:
Reenter password:
# Drop the -c this time or it will wipe out the previous account!
podman exec -it mosquitto mosquitto_passwd /mosquitto/config/mqttuser mqtt_z2m
Password:
Reenter password:
You can see these being added:
cat /srv/hass/mosquitto/config/mqttuser
mqtt_ha:$7$101$Dh*****rM$3Y********************==
mqtt_z2m:$7$101$3t*****L7$Ir********************==
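If you want to verify the broker and the new accounts before wiring up any clients, here’s a quick test using the client tools inside the Mosquitto container (substitute your own passwords):
# Terminal 1: subscribe to a test topic as mqtt_ha
podman exec -it mosquitto mosquitto_sub -u mqtt_ha -P 'YourPassword' -t test/topic -v
# Terminal 2: publish to the same topic as mqtt_z2m
podman exec -it mosquitto mosquitto_pub -u mqtt_z2m -P 'YourPassword' -t test/topic -m hello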
Zigbee2MQTT ¶
You only need this container if you’re using Z2M as your Zigbee coordinator. There’s also Zigbee Home Automation (ZHA), which is built into Home Assistant, and deCONZ by dresden elektronik.
Container permissions and SELinux ¶
Since we’re using a Zigbee controller connected over USB, we need to grant the container permission to access it; otherwise SELinux will block it.
Here I’ll also follow the setup by Matthew, as I don’t have experience with SELinux. Their setup involves a custom SELinux module that allows containers to access USB serial devices.
We’ll need sudo rights, so you may need to switch back to our sudo user core for these next steps.
~/container-usbtty.te
module container-usbtty 1.0;
require {
type container_t;
type usbtty_device_t;
class chr_file { getattr ioctl lock open read write };
}
#============= container_t ==============
allow container_t usbtty_device_t:chr_file { getattr ioctl lock open read write };
checkmodule -M -m -o container-usbtty.mod container-usbtty.te
semodule_package -o container-usbtty.pp -m container-usbtty.mod
sudo semodule -i container-usbtty.pp
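You can confirm the module was actually loaded (a quick check, not part of the original forum post):
sudo semodule -l | grep container-usbtty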
Running into issues with permissions? Check the Troubleshooting section.
Config ¶
As with the Mosquitto container, we need to provide a config for the Zigbee2MQTT application before starting it up. And again we can have a look at our Docker setup for this.
Since Nginx is using port 8080, which is the default port for the Zigbee2MQTT dashboard, we tell Z2M to use a different port.
/srv/hass/zigbee/configuration.yaml
# Adapter settings
serial:
  port: /dev/zigbee

# MQTT
mqtt:
  base_topic: zigbee2mqtt
  server: '!secret server'
  user: '!secret user'
  password: '!secret password'
  client_id: zigbee

# Zigbee network
permit_join: false # Do not allow random devices to connect automatically

# Webserver
frontend:
  port: 8081
  url: 'http://10.0.2.2:8081'

# Devices and groups
# Extract config to separate files
devices: devices.yaml
groups: groups.yaml

# Home Assistant integration
homeassistant: true

advanced:
  # Zigbee network - auto-generate new keys
  pan_id: GENERATE
  network_key: GENERATE
  # Zigbee network - set channel to avoid interference with 2.4GHz WiFi
  channel: 24
I haven’t found a method to store the secrets mentioned in the config above as a Podman secret.
The issue is that secrets are mounted at /run/secrets/secretname, while Z2M expects the file to be named secret.yaml and stored in the same folder as configuration.yaml.
So for the moment, I store the secrets file next to the configuration file in the config folder.
I hope a change to Zigbee2MQTT or Podman can help fix this.
/srv/hass/zigbee/secret.yaml
server: 'mqtt://10.0.2.2'
user: 'mqtt_z2m'
password: 'MySuperSecretPa$$w0rd'
Don’t forget to change the device path in the config below.
ls -l /dev/serial/by-id/
usb-ITead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_eexxx52-if00-port0 -> ../../ttyUSB0
~/.config/systemd/user/container-zigbee.service
[Unit]
Description=Zigbee2MQTT Container
Wants=network-online.target
After=network-online.target
Requires=container-mosquitto.service
After=container-mosquitto.service
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=-/usr/bin/podman secret create z2m-creds /srv/hass/secret/z2m-secrets
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=zigbee \
--group-add keep-groups \
--network=slirp4netns:allow_host_loopback=true \
--device=/dev/serial/by-id/usb-ITead_Sonoff_Zigbee_3.0_USB_Dongle_Plus_ee6789cf0260ec11ba5a345f25bfaa52-if00-port0:/dev/zigbee:rw \
--volume=/srv/hass/zigbee:/app/data:Z \
--secret=z2m-creds,target=/app/data/secret.yaml \
-p 8081:8081 \
docker.io/koenkk/zigbee2mqtt:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=-/usr/bin/podman secret rm z2m-creds
Type=oneshot
NotifyAccess=all
[Install]
WantedBy=default.target
We pass our credentials via the --secret option (giving it a target path), and we tell the service it needs to wait for the Mosquitto service to have started.
Then again, start the service.
systemctl --user enable --now container-zigbee.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-zigbee.service → /var/home/hass/.config/systemd/user/container-zigbee.service.
Don’t forget to check out the previous blog post on Zigbee2MQTT for the automations and dashboard config to make the most out of Z2M.
ZWave2MQTT ¶
I’m copying this from Matthew’s forum post as I don’t use Z-Wave myself.
See also the notes on Container permissions and SELinux in the Zigbee2MQTT section.
mkdir /srv/hass/zwave
~/.config/systemd/user/container-zwave.service
[Unit]
Description=ZWave2MQTT Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=zwave \
--group-add keep-groups \
--device=/dev/serial/by-id/usb-Silicon_Labs_Zooz_ZST10\u00A0700_Z-Wave_Stick_0001-if00-port0:/dev/zwave:rw \
--volume=/srv/hass/zwave:/usr/src/app/store:Z \
-p 8091:8091 \
-p 3000:3000 \
docker.io/zwavejs/zwavejs2mqtt:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Start the service.
systemctl --user enable --now container-zwave.service
ESPHome ¶
Here we’re leaving Matthew’s forum post, as he didn’t discuss setting up ESPHome. Luckily, we already did the heavy lifting before.
~/.config/systemd/user/container-esphome.service
[Unit]
Description=ESPHome Container
Wants=network-online.target
After=network-online.target
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=esphome \
--group-add keep-groups \
--volume=/srv/hass/esphome:/config:z \
--network=host \
docker.io/esphome/esphome:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Note that we tag the config folder with the shared content label (:z).
This is because we will be sharing this folder with the HASS Configurator container.
If you don’t need or want to share the configuration folder, use the private content label (:Z) instead.
Create the directory for the esphome config and enable the service.
mkdir /srv/hass/esphome
systemctl --user enable --now container-esphome.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-esphome.service → /var/home/hass/.config/systemd/user/container-esphome.service.
Now you can access the ESPHome Dashboard at http://<ip.of.your.system>:6052.
In my case, it even discovered (and allowed me to adopt) a bluetooth proxy I still had on my network from before my system crash.
HASS Configurator ¶
Editing configuration files from within the browser is a lot more user-friendly than having to SSH into our system each time, so let’s set up HASS Configurator again.
~/.config/systemd/user/container-hassconf.service
[Unit]
Description=HASSConfigurator Container
Wants=network-online.target
After=network-online.target
Requires=container-homeassistant.service
After=container-homeassistant.service
RequiresMountsFor=%t/containers
[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n TZ=Europe/Brussels
Restart=on-failure
RestartSec=30
TimeoutStopSec=70
ExecStartPre=/bin/rm -f %t/%n.ctr-id
ExecStart=/usr/bin/podman run \
--cidfile=%t/%n.ctr-id \
--cgroups=no-conmon \
--rm \
--sdnotify=conmon \
--replace \
--detach \
--label "io.containers.autoupdate=registry" \
--name=hassconf \
--group-add keep-groups \
--volume=/srv/hass/hassconf:/config:Z \
--volume=/srv/hass/hass:/hass-config:z \
--volume=/srv/hass/esphome:/hass-config/esphome:z \
-p 3218:3218 \
docker.io/causticlab/hass-configurator-docker:latest
ExecStop=/usr/bin/podman stop --ignore --cidfile=%t/%n.ctr-id
ExecStopPost=/usr/bin/podman rm -f --ignore --cidfile=%t/%n.ctr-id
Type=notify
NotifyAccess=all
[Install]
WantedBy=default.target
Note that we’re mounting the config folders from our Home Assistant and ESPHome containers.
Make sure you marked these volumes as shared (:z) in their respective configurations.
We also want to set our “working directory” in HASS Configurator to our Home Assistant config folder.
/srv/hass/hassconf/settings.conf
(you will need to create the directory first)
{
"BASEPATH": "/hass-config",
"ENFORCE_BASEPATH": true,
"DIRSFIRST": true
}
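As hinted above, the directory holding this settings file has to exist before the container starts:
mkdir /srv/hass/hassconf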
Enable the service.
systemctl --user enable --now container-hassconf.service
Created symlink /var/home/hass/.config/systemd/user/default.target.wants/container-hassconf.service → /var/home/hass/.config/systemd/user/container-hassconf.service.
Sidebar ¶
Time to repeat ourselves yet again and set up our sidebar.
The config below needs to end up in the configuration.yaml inside the Home Assistant config folder (/srv/hass/hass).
You can do this through our freshly launched HASS Configurator or over SSH.
/srv/hass/hass/configuration.yaml
# ...
panel_iframe:
  configurator:
    title: Configurator
    icon: mdi:wrench
    url: http://<ip.of.your.system>:3218
    require_admin: true
  esphome:
    title: ESPHome
    icon: mdi:chip
    url: http://<ip.of.your.system>:6052
    require_admin: true
  zigbee2mqtt:
    title: Zigbee2MQTT
    icon: mdi:zigbee
    url: http://<ip.of.your.system>:8081
    require_admin: true
Note that you need to use the address you would use to visit these containers directly from your desktop/laptop, since this uses iframes.
You can’t use the Podman-internal addresses (like 10.0.2.2).
However, you could set up Nginx as a reverse proxy for these as well, even give them a(n internal) domain name and an SSL certificate, and then reference that address in this config.
Confirm using the DevTools that we didn’t make a typo and restart Home Assistant to apply the config.
Backups ¶
The process for backing up your config is fairly easy, as everything is located under /srv/hass and ~/.config/systemd/user/container-*.service.
You can set up a cronjob doing an rsync backup that archives and copies these files to a NAS, for example.
You’ll need to do this as the root user so you don’t run into permission issues with the container volumes.
There is however one notable exception: if you want to avoid the possibility of backing up home-assistant_v2.db while it’s in an unsettled state (possibly leading to corruption and losing your state history), you should periodically run something like
echo 'vacuum into "/srv/hass/backup/home-assistant_v2-bak.db"' | sqlite3 /srv/hass/hass/home-assistant_v2.db
Backup to NAS over NFS ¶
Below are my configurations for backing up my Podman containers to my NAS over NFS.
Mount backup folder ¶
Setup a share on your NAS and grant NFS permissions to the IP of your Podman system.
Then create the following systemd mount config.
The filename must match your local mount path (Where), using hyphens instead of slashes.
Note that /mnt is symlinked to /var/mnt on my Fedora CoreOS system and I needed to use that path.
Otherwise I got symbolic link errors.
/etc/systemd/system/var-mnt-backup.mount
[Unit]
Description=HomeAssistant Backup
After=network.target
[Mount]
What=<ip.of.my.nas>:/volume1/backup
Where=/var/mnt/backup
Type=nfs
Options=_netdev,auto
[Install]
WantedBy=multi-user.target
sudo systemctl start var-mnt-backup.mount
sudo systemctl enable var-mnt-backup.mount
Created symlink /etc/systemd/system/multi-user.target.wants/var-mnt-backup.mount → /etc/systemd/system/var-mnt-backup.mount.
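A quick way to confirm the share is actually mounted and writable before pointing backups at it:
df -h /var/mnt/backup
sudo touch /var/mnt/backup/.write-test && sudo rm /var/mnt/backup/.write-test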
Backup script ¶
The following script will use rsync to create an incremental backup.
I also added the vacuum command mentioned above to “copy” the database. That’s why I’m ignoring the database in the rsync command.
/var/home/core/backup-podman.sh
#!/bin/bash
# A script to perform incremental backups using rsync
set -o errexit
set -o nounset
set -o pipefail
readonly SOURCE_DIR="/srv/hass"
readonly BACKUP_DIR="/var/mnt/backup"
readonly DATETIME="$(date '+%Y-%m-%d_%H:%M')"
readonly BACKUP_PATH="${BACKUP_DIR}/${DATETIME}"
readonly LATEST_LINK="${BACKUP_DIR}/latest"
echo "----- BACKUP STARTED -----"
mkdir -p "${BACKUP_DIR}"
rsync -av --delete \
"${SOURCE_DIR}/" \
--link-dest "${LATEST_LINK}" \
--exclude="home-assistant_v2.db" \
--exclude=".cache" \
--exclude="log" \
--exclude="*.log" \
--exclude=".esphome" \
"${BACKUP_PATH}"
echo "Recreating link"
rm -rf "${LATEST_LINK}"
ln -rs "${BACKUP_PATH}" "${LATEST_LINK}"
# Copy database
echo "Copying database"
echo "vacuum into \"${BACKUP_PATH}/hass/home-assistant_v2.db\"" | sqlite3 "${SOURCE_DIR}/hass/home-assistant_v2.db"
echo "----- BACKUP DONE -----"
Don’t forget to make this script executable and test it out. Run as root so you don’t get permission errors.
chmod u+x /var/home/core/backup-podman.sh
sudo /var/home/core/backup-podman.sh
Crontab ¶
Now we’ll create a cronjob that will automatically call the back-up script on a weekly basis. This will ensure we have weekly back-ups of our config.
But first we need to install the cronie package, which will provide us with cron.
sudo rpm-ostree install cronie
sudo rpm-ostree apply-live
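Installing the package doesn’t necessarily start the daemon; if cron jobs don’t fire, you may need to enable it as well (assuming the crond.service unit name cronie uses on Fedora):
sudo systemctl enable --now crond.service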
Then we can write our cronjob.
/etc/crontab
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=core
# For details see man 4 crontabs
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * user-name command to be executed
0 22 * * 6 root /var/home/core/backup-podman.sh > /dev/null
This will execute the backup-podman.sh script as root every Saturday at 22:00 (before the auto-update reboot window).
stdout output will be ignored; only error output (stderr) will be mailed to the core user.
Troubleshooting ¶
I made changes to a service file after starting a service ¶
Run systemctl --user daemon-reload to load your changes.
Then run systemctl --user restart <SERVICE>.service to restart the service using the new config.
I can’t modify container files ¶
Containers run using their own user inside the container, and the UID of that user may not match your own UID: rootless Podman maps container UIDs to subordinate UIDs of your account.
So if you want to change a file in one of the container’s volumes and can’t, or don’t want to, do it via the terminal/console of the container, you’ll need to SSH in as your sudo user (e.g. core) and use root privileges to change the files.
If you create new files using the root account (via sudo), they will be owned by root.
So don’t forget to chown these files to the correct UID (check the permissions of the folder or other files within the folder using ls -l).
Error: short-name resolution enforced but cannot prompt without a TTY ¶
Podman doesn’t have a default container registry configured.
So if you use the short name of an image, like eclipse/mosquitto, it doesn’t know which registry you want to pull it from.
Either use the fully qualified image name, such as docker.io/eclipse/mosquitto, or add your preferred registry/registries to your Podman registries.conf.
The user config is located at $HOME/.config/containers/registries.conf and the system config is located at /etc/containers/registries.conf.
Edit or add the following line to suit your needs:
unqualified-search-registries = ['docker.io', 'registry.fedoraproject.org', 'registry.access.redhat.com', 'registry.centos.org', 'quay.io', 'ghcr.io']
checkmodule: command not found ¶
Your Linux installation may not come with this tool installed.
To get the checkmodule tool, install the checkpolicy package using your distro’s package manager.
Fedora CoreOS is built a bit differently and doesn’t have the yum or dnf package managers installed.
Instead it uses rpm-ostree.
Package installations work more like a pull request against the system image: you queue the new packages and then apply them.
$ rpm-ostree install checkpolicy
==== AUTHENTICATING FOR org.projectatomic.rpmostree1.install-uninstall-packages ====
Authentication is required to install and remove software
Authenticating as: CoreOS Admin (core)
Password:
==== AUTHENTICATION COMPLETE ====
# ...
Added:
checkpolicy-3.3-2.fc36.x86_64
Changes queued for next boot. Run "systemctl reboot" to start a reboot
$ sudo rpm-ostree apply-live
Computing /etc diff to preserve... done
Updating /usr... done
Updating /etc... done
Running systemd-tmpfiles for /run and /var... done
Added:
checkpolicy-3.3-2.fc36.x86_64
Successfully updated running filesystem tree.
SELinux permission issues with device mounts ¶
If you run into permission issues when you’re trying to mount devices like a Zigbee controller, there are 2 solutions which may fix the issue. Note that these open up the permissions on your system and thus will reduce its security somewhat.
Allow all containers to access all devices:
sudo semanage boolean -m --on container_use_devices
The drastic solution: disabling SELinux.
To disable SELinux until the next reboot, execute the following command.
sudo setenforce 0
Permanently disabling SELinux can be done by modifying /etc/selinux/config and changing SELINUX=enforcing to SELINUX=permissive.
sudo sed -i 's/=enforcing/=permissive/' /etc/selinux/config
Changelog ¶
- 2022-12-03
  - Added explanation on permanently disabling SELinux
- 2023-04-05
  - Fix: sudo semodule -i container-usbtty.pp instead of container-usbtty.pp
- 2023-04-17
  - Remove certbot-creds in ExecStopPost, otherwise you get errors/warnings about a secret already existing
2022-12-02 (Last updated: 2023-04-17)