Turned off IPv6 in parts of my systems

I wanted to title this “IPv6 still not viable” but really this is a complex problem of spam, my hosting provider Linode, Comcast, and Spamhaus.

My hosting provider supplies both IPv4 and IPv6 on every machine I spin up. That was fine, until it wasn’t.

I run my own mail server. I’ve been doing SMTP mail for twenty years (more, really) and I am careful to not let anyone randomly relay through my server. I even had to shut down comments on this blog because script kiddies were abusing the new-user-signup process to annoy random people with new-user-spam emails. That was months ago.

I’ve done my due diligence and have SPF and DKIM set correctly. I regularly check with https://www.mail-tester.com/ to make sure everything is still good, and I routinely get a score of 10 out of 10.

Yet, recently Comcast has been refusing to send my email to my friends, because of Spamhaus. Spamhaus says the IPv6 range I am in is populated by spammers.

Linode must have (or have had) a customer who was a spammer in the same data center I was in. So in large part, my problem stems from Linode. I can understand that Linode might have a new customer who tried setting up a mail server, botched the anti-spam part of it, and legitimately got put on an anti-spam list or twenty, including Spamhaus.

Apparently, Linode puts everyone in a data center in the same huge IPv6 range using something called SLAAC. There are Linode community posts going back nine years about Comcast blocking email because Spamhaus and Linode cannot agree on the correct sizing of a customer’s address space within a SLAAC addressed network.

Spamhaus refuses to pick and choose which addresses in the range are violators – it just blocks the whole range.

So I am getting blocked by neighborhood association.

I don’t know if this is Linode’s fault for making the range too large, or Spamhaus’ fault for not being granular. Or if it is Linode’s fault for not policing the spam behavior of its customers. No matter: I can’t email my volunteer service friends the monthly reports, meeting minutes, or even personal email if the email goes out over the IPv6 address.

So the IPv6 network had to go.

It is done.

And it was not easy. My mail host now whines about not being able to get Ubuntu updates. I want to move it to Debian anyway, but this rushes that project.
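
For anyone wanting to do the same, here is roughly what turning off IPv6 and quieting the Ubuntu update whining looks like. This is a sketch of the general approach, not a transcript of what I actually ran, and the file names are arbitrary:

# Disable IPv6 at the kernel level via a sysctl drop-in (one way to do it, not the only way)
cat >/etc/sysctl.d/99-disable-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
EOF
sysctl --system

# Force apt to use IPv4 so Ubuntu updates stop timing out trying IPv6 mirrors
echo 'Acquire::ForceIPv4 "true";' >/etc/apt/apt.conf.d/99force-ipv4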

And here on my home network, things really didn’t work right, with IPv6 present here but not working on the mail server out in Linode lan(d). Thunderbird was throwing a hissy fit for a few days. So then I had to reconfigure everything here at home to be IPv4 only. That was a lot of work.

But at least at the end of it, if a service like Comcast (or anyone else) puts me on an anti-spam list, I’ll only have the single IPv4 address to get unblocked.

I kind of feel bad. I have a friend who is a huge proponent of IPv6, and he now works for Google, and probably one of the things that got him hired was the advocacy work he did for this cause. And I like the idea of IPv6.

Here’s a mild tangent: way back when, Novell had a networking technology called IPX/SPX. For addressing, it used a 32-bit chunk concatenated with a 48-bit chunk: the first 32 bits were the network address and the last 48 bits came from the machine’s MAC address. On local networks, it was speedy, and didn’t use a lot of memory, which is why many LAN games ran over it. Some of my most fun ever was playing Command & Conquer: Generals with my sons over IPX/SPX.

As the Internet was taking off, it was clear that TCP/IP, in the form of IPv4, was how everyone was connecting their local networks to the Internet. IPv4 could do NAT (network address translation). It also required ARP (Address Resolution Protocol) on the local network, which made it slower than IPX/SPX. Some people at Novell went to some working group and suggested that the Internet could carry IPX/SPX traffic too: they got laughed at. “No, we’re not going to add IPX/SPX to the Internet”. (The NAT problem was solved in IPX/SPX networks with gateways).

Anyway … care to guess how the IPv6 address space is laid out? Could it happen to be 64 bits for the network address and 64 bits derived from the machine’s MAC address?

Yes, as a matter of fact in substantial ways, IPv6 is IPX/SPX grown to 128 bits. And people like it because it is speedy. Everything old is new again.

Let’s Encrypt for my internal domain

It is time to renew my wildcard SSL certificate for an internal domain I have, and here are the steps I went through. When I say internal domain I’m referring to a DNS domain that exists on the public Internet, but which wholly and only points to the IP address of my home broadband router. That router has pass-through enabled, so that essentially, my pfSense box is my presence on the Internet for everything inside my home.

I turned off HAProxy so that pfSense wouldn’t be sending the challenge traffic to the only internal server I put out there. The internal server, Nextcloud, doesn’t play nice with others; in order to keep things consistent, they want it to be an appliance where the only stuff running on the box is their code. Okay, I get that. This wouldn’t be so annoying if it wasn’t bug-riddled junk that is in a huge rush to implement new features. Can you say “AI”? But I digress.

I created a new Linode API key in case the problem was that the old API key didn’t have access. Well, the first new key had the wrong permissions selected, and resulted in “Your OAuth token is not authorized to use this endpoint”.

The problem is that the pfSense script is trying to generate a challenge key and insert it into a web server that, as far as the outside world is concerned, doesn’t exist. The pfSense web admin portal is that web server. When I turned off HAProxy, that should have opened it up. It did, but I couldn’t tell because the Linode API key was wrong.

Okay, maybe I need to log in to the pfSense box and manually use a generated challenge key? How to log in to the pfSense box? When was the last time I did that?

Here’s a convenient command:

 history | awk '{$1="";print substr($0,2)}' | grep "ssh " | grep -v history | sort | uniq

We run the output of the history command through awk to remove the line numbers, then search for "ssh " (the trailing space omits ssh-copy-id and such), run that through sort, and run that through uniq. Et voilà: I have a list of all twelve boxes I’ve logged in to, for as far back as my shell history goes.

Sigh: pfSense isn’t one of them.

But this was a good exercise: I did get logged into pfSense, and did find the “Your OAuth token is not authorized to use this endpoint” problem.

I deleted the certificate’s previous Linode v4 API entries (the Domain SAN list), re-added them with the new token, and it worked.

Time to turn HAProxy back on.

Okay, the short form is:

  1. Generated a new Linode API access token with Domain read/write access
    • This probably won’t be required if the access token hasn’t expired.
  2. pfSense > Services > HAProxy > Settings > disable and apply
  3. pfSense > Services > Acme > Certificates > pick certificate and Edit > delete the Domain SAN list entry > Add a new Domain SAN list entry with the new Linode API access token > Save
  4. pfSense > Services > Acme > Certificates > pick certificate and hit Renew
  5. Do the other certificate in the list
  6. pfSense > Services > HAProxy > Settings > Enable and apply
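
For reference, the pfSense Acme package is a wrapper around acme.sh, so what it does behind the scenes is roughly equivalent to this manual run. This is a sketch; the domain is a placeholder, and the token from step 1 has to be made available to acme.sh per its own documentation:

# (The Linode API token goes into acme.sh's account config for the dns_linode_v4 hook;
#  see acme.sh's dnsapi documentation for the exact variable name.)
acme.sh --issue --dns dns_linode_v4 \
  -d 'internal.example.tld' -d '*.internal.example.tld' \
  --dnssleep 120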

The Helm migration is complete

As I mentioned before, The Helm email appliance company is calling it quits, which I understand. If the business isn’t going to make it, it is better to pull the plug than just keep letting things linger. Best of luck to them on their next adventure.

So, what did I do?

  • (there was a detour while Amazon pissed on customers like me who wanted to run Mail-In-A-Box)
  • I provisioned the smallest Ubuntu 22.04 LTS machine that Linode has.
    • Mildly annoyed that it doesn’t really support LVM (Logical Volume Manager); they have a backup service that runs an agent inside their machines, and that agent doesn’t do LVM. Still, I know that I’m going to need to grow disks, so I had to learn how to re-partition the Linode so I could do LVM. LVM done.
  • I made a mail server on the Linode machine at a domain name I have that I don’t really use. I followed the excellent guide from Christoph Haas at workaround.org: ISPmail guide for Debian 11 “Bullseye”
  • I got RoundCube webmail working for the domain name; complete with SPF and DKIM.
  • I got Thunderbird to send and receive from the domain name.
  • Then I added Nextcloud to the same box. I wanted CalDAV and CardDAV for calendar and contacts, for when I eventually hook my iPhone to it.
    • The Nextcloud documentation really needs a lot of work here. If I were retired, I would like to help them with their documentation.
    • Finally, I have the files.example.tld function of The Helm replaced, although at a different domain name.
    • Rspamd uses Redis, but so does Nextcloud. One talks to Redis over the network stack, and the other over a Unix socket; get them both configured the same way (a rough sketch follows after this list).
  • Then I added Duplicati backup. This wasn’t great, as it added a ton of overhead in the form of Mono, just for a graphical user interface.
  • I realize that I’m going to want to host my WordPress here too. I don’t want to have to wrangle four Let’s Encrypt SSL certificates, one for each domain. What about a single wildcard SSL certificate?
    • Yes, that can be done, but: my domain names registrar doesn’t support it. Linode does, though. I install the Linode DNS plugin for certbot on my machine, and use Linode’s hosted DNS servers to do the DNS work. I have to configure my domain names registrar to tell the rest of the world that Linode is where my name servers are.
    • Somewhere in there I installed the Unbound DNS resolver. Looks like I need this on my home machine, too, for Home Assistant.io¹
  • I got to the point where I could request the domain name transfer. Turns out the people at The Helm were going through Gandi.net. Gandi.net took as long as they legally could before actually doing the domain transfer.
    • Gandi -> my registrar, then the registrar points to Linode’s name servers. Linode DNS needed to be reconfigured for SPF and DKIM. I had gotten some DNS records wrong, too.
  • Thunderbird to connect to the mail.domain.tld, and though the name hasn’t changed, everything underneath has. Thunderbird is not happy; I lose all my old mail.
    • Well, I didn’t, but it is in a new folder now, so that I’ve got an old version of my mailbox and a new version of my mailbox, and they are separate. Not ideal. Perhaps I could have done an IMAP to IMAP transfer, if I hadn’t already moved the domain name.
  • Hey, looky there: one of the volumes filled up (but everything else was unaffected). Time to grow a disk using LVM.
  • iPhone to connect to CalDAV; phew that was not well documented and had tons of conflicting information.
  • Not really happy with Duplicati, so I remove it and Mono, and install Restic backup instead.
  • Okay, so the last thing left to do is to migrate this blog from Amazon to this new Linode machine. The transfer using NS Cloner goes well, as it usually does. But domain names need to be updated via Let’s Encrypt certbot.
    • Crud. I’m on holiday out of town with family, and have only a Windows laptop with me. Per best-practice security protocols, I can only ssh in from home. Logging in via root@ is blocked, and I don’t think I can even do an ssh-copy-id without getting in first and lowering the root login barrier. The certbot run to add gerisch.org to the domains list is going to have to wait.
  • Here I am, at home, and I’m done. Dovecot, Postfix, RoundCube, Nextcloud, and WordPress all on one box.
  • While I was on holiday, I took the .mp3 files on the Nextcloud, and made Nextcloud Music Player playlists for the different types of files. Then on the 16 hour drive home, my iPhone logged in to the Nextcloud web interface and played playlists.
    • It’s a bit of nirvana to me, to have a large list of songs (randomized of course) playing absolutely advertising-free because I paid for the songs in the first place.
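
For the Redis item above, here is a sketch of pointing Nextcloud and Rspamd at the same Redis the same way, in this case over localhost TCP. The file locations are the usual Debian ones; adjust to taste:

# /etc/redis/redis.conf: listen on localhost TCP only
bind 127.0.0.1
port 6379

// Nextcloud config.php: use the same Redis for caching and file locking
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [ 'host' => '127.0.0.1', 'port' => 6379 ],

# /etc/rspamd/local.d/redis.conf: point Rspamd at the same server
servers = "127.0.0.1:6379";
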
  1. I ended up not connecting Home Assistant to their cloud

Certbot and wildcard domains and --expand, oh my!

Nope, you cannot use --expand if you are using a wildcard DNS helper (in my case --dns-linode).

The command that worked was

certbot certonly --dns-linode --dns-linode-credentials ~/somefolder/somefile.ini -d davidgerisch.com -d gerisch.me -d '*.davidgerisch.com' -d '*.gerisch.me' --cert-name davidgerisch.com

certbot --expand was no good because of --dns-linode. My only choice was certbot certonly.

But leaving off the original certificate name created a new certificate in a new location with -0001 tacked on to the name. No way do I want to have to wrangle the original certificate with its expiration date and this new certificate with its other expiration date. Besides, my web server is already configured for the original certificate. Reconfiguring the web server to point at a new one would have been less than ideal.

So the secret was to use the --cert-name option to specifically update the existing certificate.

2022-12-27 Update: if you go to add another domain (which happened to be this one) and you get the error “Certbot failed to authenticate some domains (authenticator: dns-linode). The Certificate Authority reported these problems:
 Domain: newdomain.tld
 Type:   unauthorized
 Detail: No TXT record found at _acme-challenge.newdomain.tld

 Domain: firstdomain.tld
 Type:   unauthorized
 Detail: No TXT record found at _acme-challenge.firstdomain.tld

Hint: The Certificate Authority failed to verify the DNS TXT records created by --dns-linode. Ensure the above domains are hosted by this DNS provider, or try increasing --dns-linode-propagation-seconds (currently 120 seconds).”

The problem may actually be a leftover file in /etc/letsencrypt/renewal.

I had two files in there: firstdomain.tld.conf and firstdomain.tld-0001.conf

Certbot was trying to use the -0001.conf file instead of the real file. The real file pointed to the actual certificates being served up. The -0001.conf file was pointing to certificates with -0001 in their name, which were never served up to any of my web sites.
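
Two certbot subcommands make this easier to see and clean up (standard certbot; the -0001 name below is whatever certbot actually created on your system):

# List every certificate lineage certbot knows about, with paths and expiry dates
certbot certificates

# Delete the stray lineage; this also removes its file under /etc/letsencrypt/renewal
certbot delete --cert-name firstdomain.tld-0001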

Linode base to LVM conversion

In my last post, I whined that I couldn’t find a how-to on how to convert a Linode virtual machine to an LVM setup. Well, I’ve done it, so I should write this up, no?

I didn’t want the machine to have a swap partition; so there were three things to do:

  1. swapoff while logged in, inside the machine
  2. Edit /etc/fstab to delete the line for the swap drive (a two-command sketch of these first two steps follows after this list)
  3. Outside the machine in the Linode manager, delete the disk
    • So first I had to power the machine down
    • Then in the Linode virtual machine manager, I had to switch to the Storage tab
    • Now I can click on the swap drive and delete it.
      • I don’t know why, but WordPress is being stupid with lists, which it didn’t used to prior to the most recent “upgrade”. This sublist is supposed to be numbered, damnit. And this particular list item was supposed to be indented even further.
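
The inside-the-machine half of that list boils down to two commands. A sketch, assuming a single, obvious swap entry in /etc/fstab:

# Stop using every swap device listed in /etc/fstab
swapoff -a

# Comment out the swap line so it does not come back at boot (or just delete it in an editor)
sed -i '/\sswap\s/ s/^/#/' /etc/fstab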

The next thing to do was to shrink the existing disk. I do not know if I could have just done that directly: I see a resize option in the Linode storage manager. It may be that they have cloud-init wired in, and using the resize button would also have run stuff inside the machine to make everything nice. That’s not the way I went. 🤷

In the Linode manager (at the upper level, where you can see all your virtual machines), there is a three-horizontal-dots menu button. (I don’t know what the good name for this button is. I like the three-horizontal-lines, stacked, menu buttons because I can call them hamburger buttons, and people get the idea of a bun with a patty in between. But I digress.)

I clicked on the three-horizontal-dots menu button, and chose the Rescue mode menu option. This powers down my virtual machine and attaches it as storage to a rescue-mode virtual machine (running Finnix). Then in the Linode manager, I used Launch LISH Console to spawn a new web page which is the remote console into the Finnix machine. Although I’m inside the Finnix machine, /dev/sda is still my virtual machine’s main disk. It is not mounted at this time, which is good. So then I ran the command to shrink the filesystem on /dev/sda: resize2fs /dev/sda 9G

So a very real problem with me writing this up is that I don’t have a history command to verify this is what I did. That history was recorded in the Finnix virtual machine, which is destroyed after reboot. I’m pretty sure the command was resize2fs /dev/sda 9G, but I don’t actually know. When I look stuff up now, it looks like resize2fs normally applies to a partition inside a disk device rather than the device itself. But I’m pretty sure I did this.
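
For what it is worth, Linode disk images are normally a bare filesystem with no partition table, so pointing resize2fs at the device itself is plausible. From the rescue environment, the shrink would look something like this (a reconstruction, not a transcript; resize2fs insists on a clean fsck before shrinking):

# The filesystem must be checked before it can be shrunk
e2fsck -f /dev/sda

# Shrink the filesystem to 9 GB so the Linode disk can then be shrunk to match
resize2fs /dev/sda 9G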

Then it was time to use the Linode manager to shrink the disk itself. The next steps were:

  1. Reboot out of rescue mode (wait for everything to boot back up)
  2. Power down the virtual machine (wait for it to shut down)
  3. In the Linode manager of my virtual machine, resize the one-and-only disk to 9 GB
    • The base machine had used about 5 GB of the 25 GB allocated. This leaves another 4 GB free disk space, even prior to moving /var off to another disk.
  4. Then, I added four disks:
    • home
    • tmp
    • var
    • var/mail

Of course, when I added these disks, I had to pick the sizes of what I wanted each to be.

The next part of the puzzle wasn’t obvious either: how does Linode map these newly added disks to the virtual machine? The answer is that by default, it does not.

That’s over in the Configuration tab of the virtual machine manager. (Earlier documentation appears to have called this the Profile tab.) Editing the configuration of my virtual machine, I could pick each /dev/sdX slot and assign it to the disk I had created for that purpose.

Okie dokie, time to power up and do the LVM stuff.

Create the physical volumes: pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

Create the volume groups:

vgcreate vg_mail /dev/sdb
vgcreate vg_tmp /dev/sdc
vgcreate vg_home /dev/sdd
vgcreate vg_var /dev/sde

Create the logical volumes:

lvcreate vg_mail -l 100%FREE -n lv_mail
lvcreate vg_tmp -l 100%FREE -n lv_tmp
lvcreate vg_home -l 100%FREE -n lv_home
lvcreate vg_var -l 100%FREE -n lv_var

So at this point, we have logical volumes, inside of volume groups (which have physical devices assigned). LVM makes this storage available at /dev/mapper
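
A quick sanity check at this point is to ask LVM what it thinks it has, using its standard reporting commands:

pvs   # physical volumes and which volume group each belongs to
vgs   # volume groups and their total and free space
lvs   # logical volumes and their sizes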

Format the new storage:

mkfs.ext4 /dev/mapper/vg_mail-lv_mail
mkfs.ext4 /dev/mapper/vg_tmp-lv_tmp
mkfs.ext4 /dev/mapper/vg_home-lv_home
mkfs.ext4 /dev/mapper/vg_var-lv_var

Now comes the tougher part, moving the new storage into production.

The process is to shut down the system to Init Level 1 (so that as little as possible is currently running), mount the new storage, copy the files over, rename the old storage out of the way, and then update the /etc/fstab to reflect the new storage mount point.

Inside the running virtual machine, I gave the command init 1

Now I have to use the Linode virtual machine manager Launch LISH Console to get logged into the running machine (Init Level 1 turns off the network).

mkdir /mnt/newvar
mount /dev/mapper/vg_var-lv_var /mnt/newvar/
cp -apx /var/* /mnt/newvar
mv /var /var.old
mkdir /var    # recreate an empty /var as the mount point; mount -a will fail without it

Okay, the contents of /var are now inside the LVM logical volume. Now to configure the system to mount that logical volume at the file system mount point /var

First, use blkid to identify the universally unique identifier assigned to the LVM volume. Perhaps blkid says your LVM volume is this:

/dev/mapper/vg_var-lv_var: UUID="epstein-didnt-kill-himself-605169120" BLOCK_SIZE="4096" TYPE="ext4"

Then, edit /etc/fstab to have the UUID entry for the mount point:

UUID="epstein-didnt-kill-himself-605169120" /var ext4 defaults 0 1

Do this for the other LVM volumes and then clean up. Before rebooting, you should try mount -a just to make sure there are no errors; because if there are errors mounting things, that’s going to make the reboot suck, badly.
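
Applied to all four volumes, the fstab block ends up looking roughly like this (the UUIDs here are hypothetical placeholders; use whatever blkid reports for each logical volume):

# /var must be listed before /var/mail so it gets mounted first
UUID="uuid-of-vg_var-lv_var"   /var       ext4  defaults  0  1
UUID="uuid-of-vg_mail-lv_mail" /var/mail  ext4  defaults  0  2
UUID="uuid-of-vg_home-lv_home" /home      ext4  defaults  0  2
UUID="uuid-of-vg_tmp-lv_tmp"   /tmp       ext4  defaults  0  2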

Cleanup was to delete /mnt/newvar and to delete /var.old (and the other LVM mount points processed the same way).

Kind of hating cloud servers right now

How in the world am I supposed to create LVM (Logical Volume Management) disk layouts on a cloud VM with a single big disk? Before I start piling in data, I want to put /var/mail on its own partition.

Maybe it’s just that Google is stupid, and the answer is plain as day if I could find it.

Linode is annoying, because the pages I found said (in essence) “Don’t use LVM, use our attached disks at an additional $2 per disk per month.” Well, I could add a disk and then use LVM to configure it. But that means that I’m going to have a 25 GB /boot partition and then hardly anything else over on the new disk. What it won’t do is keep the system from going comatose if some process starts spamming a log file and fills the disk. That’s stupid. And I’d be paying $2 a month, forever, for the stupidity.

I want to install LVM so that I have the option of adding another disk later, and it would be super easy. I’ve done LVM at work for years now, and it’s great. But at work, I get to install the machine from a boot ISO, and I get to go through every step of the install. Linode creates new virtual machines from images, where the disk is pre-configured. I don’t get to say I want /home on a separate volume (for example).

Every search I’ve done about LVM has two assumptions behind it: 1) there is a newly added virgin disk, or 2) during install, choose to partition the disk the way you want.

Nothing appears to address the situation where I’ve got a 25 GB disk with 20 GB free, and I’d like to move /home and /var and /tmp to /dev/sda1 /dev/sda2 /dev/sda3

I need to do pvcreate, but it errors out because I don’t have a newly added virgin disk.

I doubt this problem is particular to Linode; I suspect Rackspace and Vultr have the same problem – the preconfigured image is what you get; go kick rocks if you want something else.

It is frustrating, because I cannot be the first person on the planet to have thought of this or asked this question. But if the answer is obvious, I’m not finding it with Google search.