The Helm migration is complete

As I mentioned before, The Helm email appliance company is calling it quits, which I understand. If the business isn’t going to make it, it is better to pull the plug than just keep letting things linger. Best of luck to them on their next adventure.

So, what did I do?

  • (There was a detour while Amazon pissed on customers like me who wanted to run Mail-In-A-Box.)
  • I provisioned the smallest Ubuntu 22.04 LTS machine that Linode has.
    • Mildly annoyed that it doesn’t really support LVM (Logical Volume Manager); they have a backup service that runs an agent inside their machines, and that agent doesn’t do LVM. Still, I know that I’m going to need to grow disks, so I had to learn how to re-partition the Linode so I could do LVM. LVM done.
  • I made a mail server on the Linode machine at a domain name I have that I don’t really use. I followed the excellent guide from Christoph Haas at workaround.org: ISPmail guide for Debian 11 “Bullseye”
  • I got RoundCube webmail working for the domain name; complete with SPF and DKIM.
  • I got Thunderbird to send and receive from the domain name.
  • Then I added Nextcloud to the same box. I wanted CalDAV and CardDAV for calendar and contacts, for when I eventually hook my iPhone up to it.
    • The Nextcloud documentation really needs a lot of work here. If I were retired, I would like to help them with their documentation.
    • Finally, I have the files.example.tld function of The Helm replaced, although at a different domain name.
    • Rspamd uses Redis, but so does Nextcloud; one was using the network stack and the other Unix sockets. I had to get them both configured the same way (see the sketch at the end of this post).
  • Then I added Duplicati backup. This wasn’t great, as it added a ton of overhead in the form of Mono, just for a graphical user interface.
  • I realize that I’m going to want to host my WordPress here too. I don’t want to have to wrangle four Let’s Encrypt SSL certificates, one for each domain. What about a single wildcard SSL certificate?
    • Yes, that can be done, but: my domain names registrar doesn’t support it. Linode does, though. I install the Linode DNS agent on my machine, and spin up Linode DNS servers to do the DNS work. I have to configure my domain names registrar to tell the rest of the world that Linode is where my name servers are.
    • Somewhere in there I installed the Unbound DNS resolver. Looks like I need this on my home machine, too, for Home-Assistant.io1
  • I got to the point where I could request the domain name transfer. It turns out the people at The Helm were going through Gandi.net. Gandi.net took as long as it legally could before actually doing the transfer.
    • Gandi → my registrar, then the registrar points to Linode. Linode DNS needs to be reconfigured for SPF and DKIM. I had gotten some DNS records wrong, too.
  • Thunderbird connects to mail.domain.tld, and though the name hasn’t changed, everything underneath has. Thunderbird is not happy; I lose all my old mail.
    • Well, I didn’t, but it is in a new folder now, so that I’ve got an old version of my mailbox and a new version of my mailbox, and they are separate. Not ideal. Perhaps I could have done an IMAP to IMAP transfer, if I hadn’t already moved the domain name.
  • Hey, looky there: one of the volumes filled up (but everything else was unaffected). Time to grow a disk using LVM.
  • iPhone to connect to CalDAV; phew that was not well documented and had tons of conflicting information.
  • Not really happy with Duplicati, so I remove it and Mono, and install Restic backup instead.
  • Okay, so the last thing left to do is to migrate this blog from Amazon to this new Linode machine. The transfer using NS Cloner goes well, as it usually does. But domain names need to be updated via Let’s Encrypt certbot.
    • Crud. I’m on holiday out of town with family, and have only a Windows laptop with me. Per best-practice security protocols, I can only ssh in from home. Logging in via root@ is blocked, and I don’t think I can even do an ssh-copy-id without getting in first and lowering the root login barrier. The certbot run to add gerisch.org to the domain list is going to have to wait.
  • Here I am, at home, and I’m done. Dovecot, Postfix, RoundCube, Nextcloud, and WordPress all on one box.
  • While I was on holiday, I took the .mp3 files on the Nextcloud, and made Nextcloud Music Player playlists for the different types of files. Then on the 16 hour drive home, my iPhone logged in to the Nextcloud web interface and played playlists.
    • It’s a bit of nirvana to me, to have a large list of songs (randomized of course) playing absolutely advertising-free because I paid for the songs in the first place.
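
Since the Redis mismatch above caused me some grief, here is roughly what the alignment ended up looking like. Treat it as a sketch: the Nextcloud path (/var/www/nextcloud), the www-data user, and the choice of plain TCP on 127.0.0.1 are assumptions about my setup, and Unix sockets work just as well as long as both sides agree.

# Rspamd: point it at Redis over TCP (the stock packaging reads local.d/)
echo 'servers = "127.0.0.1";' > /etc/rspamd/local.d/redis.conf
systemctl restart rspamd

# Nextcloud: point its Redis caching/locking at the same instance
sudo -u www-data php /var/www/nextcloud/occ config:system:set redis host --value=127.0.0.1
sudo -u www-data php /var/www/nextcloud/occ config:system:set redis port --value=6379 --type=integer
sudo -u www-data php /var/www/nextcloud/occ config:system:set memcache.locking --value='\OC\Memcache\Redis'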
  1. I ended up not connecting Home Assistant to their cloud ↩︎

How I got into computers

My grandfather on my dad’s side was an immigrant from Germany, between World War I and World War II. His parents put him on a ship and sent him to the USA when he was sixteen years old. Although it sucked for him that he never saw his family again, in many ways he went on to live the American Dream. One of the things he did was buy stocks. In the 1970s (before there were VCRs), on Sunday nights, there was a television show on PBS named Wall $treet Week with Louis Rukeyser. So we’d visit, and on Sunday nights Grandpa would tell us kids to settle down; he needed to watch this TV show.

One Sunday, Mr. Rukeyser had a guest on who was going to pitch IBM as a good stock buy. (In 1974-76 that was great advice. Today I wouldn’t touch them). So during the intro, Mr. Rukeyser says “In the movie The Graduate Mr. Maguire tells Benjamin (played by Dustin Hoffman): I want to say one word to you, Benjamin. Plastics, young man: Plastics. If Mr. Maguire were talking to Benjamin today, he’d say Computers, young man: computers.”

And I thought to myself “I’m a young man….”

I did sign up for a computer programming class in High School. It was an IBM System/3 mini-mainframe, with 4 KB of RAM and punched cards. So I programmed my first computer in 1979. Dr. Larry Ray was our instructor, and one of the most insightful lessons he taught us was calculating a loan payment schedule, with interest. Wow, what an eye-opener the interest charges on loans are! But I digress.

My dad thought that getting into computers was a good idea. I had one friend who got an Apple microcomputer, and my best friend got a Commodore 64. Then I saw a new computer being advertised that was not only a full hardware package, it came with software too: the Osborne 1. I had the later model, the one with the blue plastic case. But the kicker was a sale that threw in the dBase II database program. It had everything, for the low, low price of $2,200. My dad gave me half the money, and I ponied up the other half.

My last year of High School, I started working at Truline Corporation, a manufacturer of printed wiring boards. I started as a driller, but eventually I migrated up to programming the Numerical Control router (profiler) which cut the boards out of the sheet of fiberglass. This was the G-code programming language. Eventually, the factory needed some space and moved me across the parking lot, in with the president of the company. By this time, I was the “engineer” who measured the artwork, compared it against the blueprints, and created the work order the factory would work from. I worked up a program on my Osborne to produce work orders on a printer instead of by hand. I showed it to the president, Jack Cederloff, and he told me that if I learned to program their computer, he’d hire me as their programmer. I was thrilled.

I went to night school to learn the language of their mini-mainframe. The computer was an IBM System/34. I learned RPG II. Eventually Truline moved to an IBM System/36, and I became a professional programmer, eight hours a day, five days a week, for two and a half years. I loved it.

WordPress initial install error: “Cannot select database”

The full error is

Cannot select database

The database server could be connected to (which means your username and password is okay) but the database could not be selected.

What is actually wrong is that you don’t have a wp-config.php file.

From what I gather, it used to be that wget http://wordpress.org/latest.tar.gz would bring in a .tar.gz file which contained wp-config.php. That file isn’t there any more in the source.

In the old scheme, the installer would modify it with the user name, password, database table name and then proceed with the rest of the installation.

If I had to guess, I’d guess the new scheme is supposed to do cp wp-config-sample.php wp-config.php and then the installation picks up as it did before (modifying it with the user name, password, database table name); then proceeding with the rest of the installation.

Someone got the idea that instead of maintaining two wp-config files, they could maintain and ship one, and then copy it during install. This is a good idea: makes the source a tiny bit smaller, saving storage and transfer bytes. Just one thing though: do the copy, stupid, and check your results. Err out in a rather ugly mess if you didn’t get the copy right – then at least you’d hear about it mightily if you got it wrong.

The solution is to manually copy the file, edit it with the user name, password, and database table name, and then try to install again, twice.

If you simply copy wp-config-sample.php to wp-config.php and then run the install, it’s going to bark at you that wp-config.php already exists. Also, it is not going to ask you for the user name, password, and database table name. Since you already had to fuck around with the wp-config.php file, surely you already took care of the user name, password, and database table name.

So,

  1. start the install from scratch
  2. copy the file wp-config-sample.php to wp-config.php
  3. edit the new file, supplying database table name, user name, and password
  4. start the install from scratch again and let it bark at you that the new file already exists
  5. click the try again link.
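
Concretely, steps 2 and 3 look something like this. It’s a sketch that assumes the WordPress files live in /var/www/wordpress; the database_name_here / username_here / password_here strings are the stock placeholders in the sample file, and the values shown are obviously stand-ins for your own.

cd /var/www/wordpress
cp wp-config-sample.php wp-config.php

# Fill in the placeholders by hand with an editor, or with sed:
sed -i 's/database_name_here/wordpress/' wp-config.php
sed -i 's/username_here/wp_user/' wp-config.php
sed -i 's/password_here/a-strong-password/' wp-config.php
# DB_HOST defaults to localhost in the sample; change it only if MySQL runs elsewhere.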

Finally, the “famous five minute install” is done, after you’ve spent thirty minutes in frustration, found this post, and stopped doing what the documentation says.

Personally, I think it is low quality programming to leave this bug in the basic install process. It’s been there for months. So, what? No-one at Automattic tests the installer any more?

The Helm migration

I really liked my The Helm email appliance. But because the company running the service behind it is going to exit this business, I need to migrate stuff. Oh so much stuff…..

Of course, really, it becomes so-much-stuff because once I’m in a little, I want to pile on more. If Reddit hadn’t become so much trash, I’d have probably been living in /r/SelfHosted these past few weeks. Well, that and except that I’m cloud hosting for myself instead of keeping a box here at home.

Anyway, The Helm provided me with an SMTP server on its own domain name, and Nextcloud Files. (It did not include any other parts of Nextcloud, though. I think. Maybe contacts, too?) The company provided DNS services, too. And because no ISP is going to let me run an SMTP server here inside my home, it provided a VPN to AWS, so that port 25 mail could be sent out from a box on the public Internet.

I needed to move, and move quick. I’ve seen before how “oh I’ve got plenty of time” turned into “oh crap! It’s due tomorrow‽” enough times to remember the pain.

So now I have learned and am running:

  • A Dovecot and Postfix and rspamd server, with Redis
  • RoundCube attached to same
  • ISPMail attached to same (which is a web administration console for accounts in Dovecot and Postfix)
  • A caching DNS server on same
  • A Linode DNS server, so that Certbot can authorize a wildcard Let’s Encrypt SSL certificate.
  • NextCloud (full suite)
  • Duplicati for backup
  • and I haven’t added WordPress yet

I’m least happy with NextCloud. There is a lot of stuff that doesn’t work, and the documentation is poor, and a lot of the forum answers are “just read the documentation, newbie.”

I’m also not really happy with Duplicati. I loved it in version 1, because it was “just” a Python script. It ran on Windows, and I could very easily back up to Amazon S3. In fact, it was my introduction to learning AWS. Version 2 comes with its own web server so that it can be cross-platform and have a GUI; but that means adding Mono to my previously somewhat lean Linux server. By the way, accessing a web site on a “localhost”-only web server? Here’s a reminder of how.
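
(The short version of that reminder is an SSH local port forward; a sketch, assuming Duplicati’s default web UI port of 8200 and a placeholder hostname:)

# Forward a local port to the server's localhost-only web UI,
# then browse to http://localhost:8200 on this machine.
ssh -L 8200:localhost:8200 user@mail.example.tld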

I started seeing a memory leak, and now I have to reboot the server once in a while. As Tenets of IT number 6 points out, rebooting is a band-aid. Really, I should remove the code that creates the memory leak. I think I’ll move to Restic and Backblaze.

Though I really want to add WordPress and migrate this blog there, next.

Certbot and wildcard domains and --expand, oh my!

Nope, you cannot use --expand if you are using a wildcard helper (in my case --dns-linode).

The command that worked was

certbot certonly --dns-linode --dns-linode-credentials ~/somefolder/somefile.ini -d davidgerisch.com -d gerisch.me -d '*.davidgerisch.com' -d '*.gerisch.me' --cert-name davidgerisch.com

certbot --expand was no good because of --dns-linode. My only choice was certbot certonly.

But leaving off the original certificate name created a new certificate in a new location, with -0001 tacked on to the name. No way do I want to wrangle the original certificate with its expiration date and this new certificate with its other expiration date. Besides, my web server is already configured for the original certificate. Reconfiguring the web server was less than ideal.

So the secret was to use the --cert-name option to specifically update the existing certificate.
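
A quick way to double-check which lineages exist, what their names are, and which domains they cover (a stock certbot subcommand, no plugin involved):

# List every certificate lineage certbot manages: name, domains, expiry, paths
certbot certificates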

2022-12-27 Update: if you go to add another domain (which happened to be this one) and you get the error “Certbot failed to authenticate some domains (authenticator: dns-linode). The Certificate Authority reported these problems:
 Domain: newdomain.tld
 Type:   unauthorized
 Detail: No TXT record found at _acme-challenge.newdomain.tld

 Domain: firstdomain.tld
 Type:   unauthorized
 Detail: No TXT record found at _acme-challenge.firstdomain.tld

Hint: The Certificate Authority failed to verify the DNS TXT records created by --dns-linode. Ensure the above domains are hosted by this DNS provider, or try increasing --dns-linode-propagation-seconds (currently 120 seconds).”

The problem may actually be a leftover file in /etc/letsencrypt/renewal/.

I had two files in there: firstdomain.tld.conf and firstdomain.tld-0001.conf

Certbot was trying to use the -0001.conf file instead of the real file. The real file pointed to the actual certificates being served up. The -0001.conf file was pointing to certificates with -0001 in their name, which were never served up to any of my web sites.
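
A sketch of the cleanup, assuming the -0001 lineage really is the unused one (verify with certbot certificates first); certbot delete removes the renewal conf along with the matching live and archive directories:

# Confirm which lineage the web server actually serves, then drop the stray one
certbot certificates
certbot delete --cert-name firstdomain.tld-0001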

Amazon Echo abandonment, a month in.

I’m trying Apple HomeKit stuff instead. It is very disappointing. Amazon understands “cloud” and Apple does not. Or maybe Apple’s heart just isn’t in it. Perhaps someone there felt a need to compete with Amazon, so they started HomeKit. But, once the reality hit of how much change it would take to do a great job, they grew disheartened and gave up.

Either way, the Apple HomeKit stuff is a Yugo to Amazon’s Porsche.

Of course, the Apple stuff is as expensive as a Porsche, so it’s a bad deal.

I was watching the television show Silicon Valley, and at one point they openly mocked Apple for how bad Apple Maps was. The Microsoft Zune got it even worse, which made me LOL. Point is, Apple then decided to make Apple Maps good, and today it is. In fact, I had an address here in town I needed to get to, and Google Maps completely failed at it. So I tried Apple Maps, and it worked. That was quite an accomplishment in my view: Apple delivering a better app than Google.

But HomeKit today is no bueno.

It increasingly looks like I need to invest some time and effort into Home-Assistant.io.

(Potential) Future Modern Discourse

AMC theaters and Zoom have announced a collaboration: big-ass Zoom meetings with a group of people in each theater (17 cities so far).

I think the Libertarian Party should use this technology to conduct this sort of event, to nail down what they want their official party platform to be.

Once a month, every month, a new topic is tackled. Once we get all the topics defined, we wrap around and revisit each topic, to see if it needs some realignment. Perhaps new technology brings about some change that gives us reason to adjust a position.

Then, anyone running for office who wants the Libertarian Party endorsement would need to pledge to support all the topics defined. Also, any Libertarian candidate should know what the party stands for, as declared by its membership.

Once a month, dinner and a movie, except instead of a movie broadcast into your brain, you get to interactively participate in building the future.

The year 2022: Late stage 2021 but with new, higher prices

h/t to one of Scott Adams’ Twitter followers, responding to a challenge to summarize 2022 in the snarkiest way possible.

The whole thing is a psy op run by incompetents at the behest of elites, inflicted upon the aimless. It came about through sixty years of indoctrination: “Buy this shit from our advertiser; that will make you happy.”

Linode base to LVM conversion

In my last post, I whined that I couldn’t find a how-to on how to convert a Linode virtual machine to an LVM setup. Well, I’ve done it, so I should write this up, no?

I didn’t want the machine to have a swap partition; so there were three things to do:

  1. swapoff while logged on, inside the machine (see the sketch after this list)
  2. Edit /etc/fstab to delete the line for the swap drive
  3. Outside the machine in the Linode manager, delete the disk
    • So first I had to power the machine down
    • Then in the Linode virtual machine manager, I had to switch to the Storage tab
    • Now I can click on the swap drive and delete it.
      • I don’t know why, but WordPress is being stupid with lists, which it didn’t do prior to the most recent “upgrade”. This sublist is supposed to be numbered, damnit. And this particular list item was supposed to be indented even further.
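
For steps 1 and 2, a minimal sketch from inside the machine (the sed line just comments out whatever swap entry /etc/fstab has; eyeball the file afterwards):

swapon --show                           # see which device is active as swap
swapoff -a                              # turn all swap off
sed -i '/\sswap\s/ s/^/#/' /etc/fstab   # comment out the swap line(s)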

The next thing to do was to shrink the existing disk. I do not know if I could have just done that. I see a resize option in the Linode storage manager. It may be that they have cloud-init wired in, and using the resize button would also have run stuff inside the machine to make everything nice. That’s not the way I went. 🤷

In the Linode manager (at the upper level, where you can see all your virtual machines), there is a three-horizontal-dots menu button. (I don’t know what the proper name for this button is. I like the three-horizontal-lines, stacked, menu buttons because I can call them hamburger buttons, and people get the idea of a bun with a patty in between. But I digress.)

I clicked on the three-horizontal-dots menu button, and chose the Rescue mode menu option. This powers down my virtual machine and attaches it as storage to a rescue-mode virtual machine (running Finnix). Then in the Linode manager, I used Launch LISH Console to spawn a new web page which is the remote console into the Finnix machine. Although I’m inside the Finnix machine, /dev/sda is still my virtual machine’s main disk. It is not mounted at this time, which is good. So then I ran the command to shrink the filesystem on /dev/sda: resize2fs /dev/sda 9G

A very real problem with writing this up is that I don’t have a history command to verify this is what I did. That history was recorded in the Finnix virtual machine, which is destroyed after reboot. I’m pretty sure the command was resize2fs /dev/sda 9G, but I don’t actually know. When I look stuff up now, it looks like resize2fs normally applies to the partitions inside a disk device rather than to the device itself. But I’m pretty sure I did this.
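
For the record, here is what the rescue-mode shrink most likely looked like. It’s a sketch: it assumes the ext4 filesystem sits directly on /dev/sda with no partition table (which is how Linode hands me the disk), and resize2fs refuses to shrink anything until a forced fsck has been run:

# From the Finnix rescue console, with /dev/sda NOT mounted:
e2fsck -f /dev/sda      # mandatory forced check before shrinking
resize2fs /dev/sda 9G   # shrink the filesystem to 9 GB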

Then, using the Linode manager, I shrank the disk itself. The next steps were:

  1. Reboot out of rescue mode (wait for everything to boot back up)
  2. Power down the virtual machine (wait for it to shut down)
  3. In the Linode manager of my virtual machine, resize the one-and-only disk to 9 GB
    • The base machine had used about 5 GB of the 25 GB allocated. This leaves another 4 GB free disk space, even prior to moving /var off to another disk.
  4. Then, I added four disks:
    • home
    • tmp
    • var
    • var/mail

Of course, when I added these disks, I had to pick the sizes of what I wanted each to be.

The next part of the puzzle wasn’t obvious either: how does Linode map these newly added disks to the virtual machine? The answer is that by default, it does not.

That’s over in the Configuration tab of the virtual machine manager. (Earlier documentation appears to have called this the Profile tab). Doing an edit of my virtual machine, I could pick the /dev/sdX and assign it to the disk I had created for my purpose.

Okie dokie, time to power up and do the LVM stuff.

Create the physical volumes: pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde

Create the volume groups:

vgcreate vg_mail /dev/sdb
vgcreate vg_tmp /dev/sdc
vgcreate vg_home /dev/sdd
vgcreate vg_var /dev/sde

Create the logical volumes:

lvcreate vg_mail -l 100%FREE -n lv_mail
lvcreate vg_tmp -l 100%FREE -n lv_tmp
lvcreate vg_home -l 100%FREE -n lv_home
lvcreate vg_var -l 100%FREE -n lv_var

So at this point, we have logical volumes, inside of volume groups (which have physical devices assigned). LVM makes this storage available at /dev/mapper
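
A quick sanity check at this point; none of these commands change anything, they just report what LVM thinks exists:

pvs                  # physical volumes and the VG each belongs to
vgs                  # volume groups and their free space
lvs                  # logical volumes and their sizes
ls -l /dev/mapper/   # the device nodes that get formatted and mounted below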

Format the new storage:

mkfs.ext4 /dev/mapper/vg_mail-lv_mail
mkfs.ext4 /dev/mapper/vg_tmp-lv_tmp
mkfs.ext4 /dev/mapper/vg_home-lv_home
mkfs.ext4 /dev/mapper/vg_var-lv_var

Now comes the tougher part, moving the new storage into production.

The process is to shut down the system to Init Level 1 (so that as little as possible is currently running), mount the new storage, copy the files over, rename the old storage out of the way, and then update the /etc/fstab to reflect the new storage mount point.

Inside the running virtual machine, I gave the command init 1

Now I have to use the Linode virtual machine manager Launch LISH Console to get logged into the running machine (Init Level 1 turns off the network).

mkdir /mnt/newvar
mount /dev/mapper/vg_var-lv_var /mnt/newvar/
cp -apx /var/* /mnt/newvar
mv /var /var.old

Okay, the contents of /var are now inside the LVM logical volume. Now to configure the system to mount that logical volume at the file system mount point /var

First, use blkid to identify the universally unique identifier assigned to the LVM volume. Perhaps blkid says your LVM volume is this:

/dev/mapper/vg_var-lv_var: UUID="epstein-didnt-kill-himself-605169120" BLOCK_SIZE="4096" TYPE="ext4"

Then, edit /etc/fstab to have the UUID entry for the mount point:

UUID="epstein-didnt-kill-himself-605169120" /var ext4 defaults 0 1

Do this for the other LVM volumes and then clean up. Before rebooting, you should try mount -a just to make sure there are no errors; because if there are errors mounting things, that’s going to make the reboot suck, badly.

Cleanup was to delete /mnt/newvar and to delete /var.old (and the other LVM mount points processed the same way).
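
Spelled out, the cleanup, plus (for later) the grow operation from when one of the volumes filled up. It’s a sketch: it assumes the underlying Linode disk has already been resized in the cloud manager before growing, and the -r flag tells lvextend to grow the ext4 filesystem along with the logical volume.

# Cleanup: the LV now mounts at /var via fstab, so drop the temporary
# mount point and the renamed original.
umount /mnt/newvar 2>/dev/null || true   # no-op if it's already gone after the reboot
rmdir /mnt/newvar
rm -rf /var.old

# Growing a volume later (example: vg_var, which lives on /dev/sde)
pvresize /dev/sde                                    # let LVM see the larger disk
lvextend -r -l +100%FREE /dev/mapper/vg_var-lv_var   # grow the LV and the ext4 filesystem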