Amazon SES and BYODKIM

I started testing Amazon SES and didn't want to use their ugly Easy DKIM domains, so I went with BYODKIM (Bring Your Own DKIM). Having never generated my own DKIM key before, I did a bit of searching, and it turns out to be really simple.

  1. Generate a 2048-bit (or 1024-bit) RSA private key: openssl genrsa -out dkim.priv 2048
  2. Extract the public key from it: openssl rsa -in dkim.priv -pubout -out dkim.pub
  3. Strip the header, footer, and newlines from the private key so it can be pasted into the SES console: sed '1d;$d' dkim.priv | tr -d '\n'
  4. Create a TXT record at <selector>._domainkey.<your domain>, where <selector> is the selector you gave SES. You can generate the record value with: echo "v=DKIM1; k=rsa; p=$(sed '1d;$d' dkim.pub | tr -d '\n')"

Now wait a bit and you should see your domain validated in the SES console.
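
If validation seems to be taking a while, you can sanity-check the record yourself first. For example, assuming a selector of myselector and example.com as the domain:

dig +short TXT myselector._domainkey.example.com

The output should contain the v=DKIM1 value you published in step 4.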

octoDNS and Route53

Just a quick and simple post. If you want to use octoDNS with Amazon's Route53, you can use the following permission policy to restrict the user to only what octoDNS needs to do its job.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:CreateHostedZone",
                "route53:ListHealthChecks",
                "route53:ListHostedZones",
                "route53:ListHostedZonesByName",
                "route53:ListResourceRecordSets"
            ],
            "Resource": "*"
        }
    ]
}
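
If you manage IAM from the command line, here's a rough sketch of wiring that policy up to a dedicated user — the octodns user and policy names are placeholders, and you'd substitute your own account ID in the ARN:

aws iam create-user --user-name octodns
aws iam create-policy --policy-name octodns-route53 --policy-document file://octodns-policy.json
aws iam attach-user-policy --user-name octodns --policy-arn arn:aws:iam::123456789012:policy/octodns-route53
aws iam create-access-key --user-name octodns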

Change Proxmox VM ID with ZFS root storage

I recently started using Proxmox to run a few VMs, and one issue I quickly ran into was wanting to change the ID of a VM I had already created. After a bit of googling I found a solution; however, it did not work for me because it assumes you are using LVM as your root storage. In my case, I love ZFS, so that's what I run on most of my systems. Like Docker with containers, Proxmox stores each VM disk as a separate ZFS dataset. With a slight tweak to that approach, you can easily change the ID of your VMs when you're using ZFS.
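
You can see the naming convention for yourself with zfs list. Assuming the default rpool/data storage (the pool and dataset names may differ on your system), a VM with ID 103 looks something like this:

zfs list -r -o name rpool/data
NAME
rpool/data
rpool/data/vm-103-disk-0
rpool/data/vm-103-disk-1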

I've written a little script you can put on your Proxmox server; after you make sure your VM is not running, a quick call will change the VM ID: ./change-vm-id 103 199. The basic process is: rename the ZFS datasets for that VM, update the disk names in the VM's conf file, and then rename the conf file itself. Once you do this, the VM should show up in the Proxmox GUI with the new ID right away.

#!/bin/bash

POOL=rpool

if [[ $# != 2 ]]; then
  cat <<-END >&2
usage: $0 old-id new-id
END
  exit 1
fi

old_id=$1
new_id=$2

# Find every ZFS dataset belonging to the old VM's disks.
if ! disks=$(zfs list -H -r -o name "$POOL/data" | grep "vm-${old_id}-disk"); then
  echo "did not find any disks, check the old VM ID and that ZFS is running" >&2
  exit 1
fi

# Rename each disk dataset to use the new VM ID.
for disk in $disks; do
  new_disk=$(echo "$disk" | sed "s/vm-${old_id}-disk/vm-${new_id}-disk/g")
  zfs rename "$disk" "$new_disk"
done

# Update the disk references inside the VM config, then rename the config
# file itself so Proxmox picks up the new ID.
sed -i "s/vm-${old_id}-disk/vm-${new_id}-disk/g" "/etc/pve/qemu-server/${old_id}.conf"
mv "/etc/pve/qemu-server/${old_id}.conf" "/etc/pve/qemu-server/${new_id}.conf"
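
As a usage example — and as noted above, make sure the VM is stopped first (qm is Proxmox's built-in VM management CLI):

qm status 103    # should report: status: stopped
./change-vm-id 103 199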

Linux ZFS Root & Datasets & systemd-networkd

With my servers I prefer to have the root filesystem be a pair of SSDs in a ZFS mirror. That way you get bit rot detection, snapshots before significant changes, separate datasets, and redundancy. I follow the OpenZFS guide with a few tweaks to set this up. Then I create a dataset mounted at /c where I prefer to put all of my configuration files, and symlink to them from their original locations; however, this leads to an issue during boot for some services.
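
As a minimal sketch of that layout, assuming the pool is named rpool and using systemd-networkd as the example (the 20-wired.network file name is just a placeholder):

zfs create -o mountpoint=/c rpool/c
mkdir -p /c/systemd/network
mv /etc/systemd/network/20-wired.network /c/systemd/network/
ln -s /c/systemd/network/20-wired.network /etc/systemd/network/20-wired.network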

The issue is that only the root filesystem is mounted early on; nested datasets are only mounted as part of the local-fs.target chain. This causes a problem for any service that starts before that target. In my case, I wanted my systemd-networkd configuration files stored in /c, but when systemd-networkd runs, /c isn't mounted yet, so the symlinks are dangling and the interfaces never get configured.

The solution is fairly simple: we want systemd-networkd to run after local-fs.target. To accomplish this, run sudo systemctl edit systemd-networkd, which will open an override.conf file for editing. Add the following, save, and exit.

[Unit]
After=local-fs.target
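
You should be able to confirm the new ordering without rebooting by checking the unit's After list (systemctl edit reloads the daemon for you):

systemctl show systemd-networkd.service -p After | tr ' ' '\n' | grep local-fs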

Now on your next boot, everything should work properly. This approach should work for most units, but not for generators (such as netplan.io), since generators run very early in the boot process, before unit ordering even applies.