Gavin Newsom resents Californians?

I don’t know what else would explain choosing Dianne Feinstein’s replacement from Maryland. Maryland now gets 3 Senators and California gets 1.

I understand that the original design of two houses, Senators versus Representatives, was so that the big states with lots of Representatives couldn’t bully the smaller states around. But that doesn’t mean a large state should simply cede its power away. We should get an equal seat at the table.

Yes, the new senator from Maryland has spent her entire career as (essentially) an (underfunded) lobbyist, so she is deeply embedded in Washington DC politics.

But nothing about that helps Californians.

Quarterly inventory – 2023 Q3

Dear FutureMe,

Today would be a good day to do a quarterly inventory.

How is your personal life going?

How is your work life going?

How is your Volunteer Service life going?

Personal Life

Not really a whole lot going on. My mom did want to move into a senior assisted living facility, so we took a tour and got a complimentary lunch. However, the place charges $8,000 per month, which is $96,000 per year (and the move-in fee is one month’s rent, so $104,000 for the first year). Although my mom has some money, this was too rich for her blood. The residents we met loved the place. They say it is like a cruise ship that is parked. I was mildly interested to see if they are publicly traded, but alas, they are privately held, headquartered in Seattle, WA.

Happy that I have a Nextcloud server running on a tiny PC here at home. I had to configure pfSense to do Dynamic DNS to map the server name to my home IP address. The Internet gateway had to be beaten into submission to pass outside traffic in: pfSense had to carefully map the listening IP address and port (with an SSL upgrade) on the public Internet to the inside address and port. Running physical hardware is, I think, a better option than renting a Linode. Don’t get me wrong, I love my Linode running my email server on my own domain. But for serving up a calendar, address book, to-do list and media files – oh so many media files – the Linode would have been rather expensive. $110 later, I’ve got a (refurbished) 16 GB RAM Intel Core i5 running a cool 12 watts at idle. Storage is over on the Synology NAS (not exposed to the Internet). 12 watts isn’t as low as a Raspberry Pi, but still, it’s pretty good.
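
A quick way to confirm the Dynamic DNS piece is working (the hostname here is a stand-in for my real one):

# should print the current public IP of my home connection
dig +short cloud.example.com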

FINALLY! My new cell phone has an address book with entries. The entries are stored on the Nextcloud server, which is nice. I’m having to use Nextcloud’s email app to get access to the address book contacts (I’d prefer Roundcube), but that they are there at all is good. When I added them to Nextcloud email, it created birthday entries on my calendar, which is sweet.

I’ve been trying to get Home Assistant, running on a Raspberry Pi 4, to connect to the files on the Synology, but it doesn’t work. I’m pretty sure the problem is that the Synology requires SMB 3, and Home Assistant isn’t specifying vers=3.0 correctly. It could be something else, though, and it is extremely frustrating that I cannot tell what the hell is what.
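
For comparison, on a generic Linux box, forcing SMB 3 in /etc/fstab looks something like this (server name, share, and credentials are made up):

# /etc/fstab entry forcing the SMB protocol version to 3.0
//synology.local/media  /mnt/media  cifs  vers=3.0,username=hauser,password=secret,iocharset=utf8  0  0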

Been playing a lot of Factorio. Did get ElderAxe’s blueprint fully implemented; it is probably the best blueprint I’ve used so far. That said, it is missing a few assembly machines, and only has a single yellow science and a single purple science, so progress toward artillery was extremely slow. Bulked up my defenses and let the game run overnight and during the day when I’m not at home. Still needed to manually craft some materials (low density structures) for yellow science bottles to be produced in less than a week’s time.

Work Life

The email retention project will soon be winding down. There was some kerfuffle because OpenText (who bought Micro Focus and GWAVA, who bought Attachmate, who bought Novell) wants to increase the yearly charge by a huge amount, and we have documentation that their records are screwed up. They are claiming some people are new, when I can show we paid for them three years ago, so they are not new. The whole mess did kick us in the butt to export everything out of the server so we can power it down and delete it. That would free up 15 TB of storage, which is a reasonably large amount.

I did get to fix a broken system where server templates run a script on startup that sends an email to a MediaWiki server, and the body of the email becomes the documentation for the new server in the wiki. Fixing that was cool, because it was a nice feature way back when: some documentation appears simply because the new box powered up and asked the server team member to answer a few questions. It’s not 100% solid though, as I just learned of two servers that are not in the wiki. 🙁
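
The real script is longer, but the idea is roughly this (the intake address and the question are made up for illustration):

#!/bin/sh
# Ask the server team member a question or two...
printf "What is this server for? "
read purpose
# ...then mail the answers, plus a few gathered facts, to the wiki.
# A listener on the MediaWiki side turns the body into a new page.
{
  echo "Hostname: $(hostname)"
  echo "IP address: $(hostname -I)"
  echo "Purpose: $purpose"
} | mail -s "New server: $(hostname)" newserver@wiki.example.com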

Volunteer Service Life

I’ve registered for a couple of conferences; one is out of town with a hotel stay. I also get to go to an Election Assembly, for which a friend booked the motel. I’ll drive, since we’re carpooling three people. I get to man a Public Information booth at a health fair here in a couple of days. One person on the Board of Directors for our local 501(c)(3) is ineligible to run, and another has announced that he may not run for health reasons. That leaves me and two others as elected members. Maybe one other person is interested in running? I also attended a technology workshop via Zoom, and heard that apparently I’m a “dark knight”. I’ve done technological stuff (it’s just WordPress), so like a knight, I’ve shown up to save the day: but I’m a dark knight because zero other people understand the dark arts I’m using to keep the website running. Did I mention that it’s just WordPress? But if I were to get hit by a bus, people might feel helpless to continue on (which would be a shame, since it’s just WordPress).

Microsoft added AI to Bing (fail)

For the first time ever for me, today MS Edge gave me a search result that automatically sent me to their AI chatbot. Of course, the result wasn’t helpful – I needed information about how to do the task with PowerShell, not interactively. So I clicked the button to copy the AI chatbot prompt, did a new search, typed in “powershell”, and pasted the original query in. It came back with results.

The results were wrong. Completely wrong. But they looked like they might be right.

If I were some new sysadmin trying to figure out something I was unfamiliar with, this would have really fouled me up.

I just have to laugh at Microsoft being so incompetent.

I mean, I know I have a chip on my shoulder about Microsoft. But man they keep shooting themselves in the foot. It’s hilarious.

Microsoft’s company motto appears to be “Quality is Job Two-hundred-and-thirteenth”.

Microsoft moving their documentation to GitHub – What could go wrong?

I’m not a Windows expert: as much as I dislike Microsoft for their lack of ethics, this should be no surprise. So when I do need to do Microsofty sorts of things, I need to RTFM – which I’m fine with. They took the time to write the manual so I wouldn’t waste people’s valuable time with basic questions. I should Read The Friendly Manual.

I also know that things people link to might change behind the scenes. There’s no way for the changer to know that something else on the planet links to the page, so yes, dead links happen. It should be a temporary problem; as soon as someone who knows where the page moved to can supply the answer, the dead link can be fixed.

Recently I got a “404 This is not the web page you are looking for” on GitHub. The source document is on Microsoft’s GitHub for PowerShell – specifically, the paragraph that said “Install PowerShell using Winget (recommended)”, which contained the sentence “Note: see the winget documentation for install instructions.”

I’ve never dealt with Winget before, so yes, please, let me read how to install it.

As of this moment, if you were to click on the “winget documentation” link, you get “404 This is not the web page you are looking for.”

Okay, this can happen. I have zero idea what the actual link should be; but I can let someone know there is a problem. So I opened an issue on GitHub.

And it was closed, with the comment “The URL is correct on the published docs site. The links in the markdown source files are relative to the docs site, not to Github”

Well that’s nice. Between that and 404 pennies, I can get a coffee at Starbucks.
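
For what it’s worth, here’s my understanding of the mechanics (the path is illustrative, not the real one):

[winget documentation](/windows/package-manager/winget/)

A site-relative link like that resolves fine when the page is rendered on the published docs site, but viewed in the raw markdown on github.com, the same path points somewhere under github.com and lands on a 404.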

Factorio starter base – still looking for a favorite

At the moment, I’m trying ElderAxe’s Quick Start Base (v8.1.1).

I remember trying something else from ElderAxe and being pretty happy with it. I like the idea of adding different pieces as I go along: maybe I want Military Science early on, before other blueprint pieces. This modular system looks like it lets me do that.

I kind of hate Docker

The beauty of Linux is that every program writes log files. The ugly side of Docker is that nothing persists after the session ends. Because things like my ssh session are running in a temporary, limited container, nothing works and there’s no record of what went wrong.

I’m trying to get a share on a Synology mounted as the media folder in Home Assistant, but Home Assistant doesn’t do SMB 3. There’s a video that shows how to edit a config file to make it work; but for me, it doesn’t work. That would be fine if I could see the log files to figure out what is making it unhappy with my particular installation. But there are no log files from Home Assistant. It reports all clear / everything is good. But of course the files from the CIFS / SMB share I’m trying to mount aren’t there.

I tried using ssh to manually do the mount command, but that didn’t work, and the Dockerized ssh doesn’t have access to the log files. This is bullshit.
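
For the record, on a plain Linux box this is how I’d test the SMB 3 theory by hand (share name and credentials are made up):

# attempt the mount, forcing SMB protocol version 3.0
mount -t cifs //synology.local/media /mnt/test -o vers=3.0,username=hauser,password=secret
# the actual failure reason usually lands in the kernel log
dmesg | grep -i cifs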

Maybe I hate Home Assistant for being Docker-only software. Except that every time I have tried to get something to work with Docker, it didn’t, and there was no way to tell what went wrong. Even the stuff that did some logging logged only the most trivial of events. Service started. Service ended. Gee, thanks for the detail. That was super helpful. Who wrote this? Windows programmers?

Factorio blueprints – Kitch’s Totally Practical Strip Mall

On the one hand, I really like this mall. The layout makes sense to me, and the way it intersperses feeder bus and production columns is straightforward and clean.

On the other hand, it doesn’t have accumulators or solar. It does have nuclear power pieces, but I haven’t gotten into nuclear power: yet another resource to mine, which also means setting up transport in and subsequent refining. I’d rather do solar.

I did find these two blueprints by Diana. One is for accumulators and the other for solar.

I’m still really happy with NRC’s Mining.

New Nextcloud setup with cron and transactional file locking problems (solved)

I set up Nextcloud on a new instance of Debian, and thought I had added all the pieces for memory caching and file locking, and had set up cron to run php -f /var/www/html/nextcloud/cron.php correctly. But in the Administration Overview screen I was still seeing this:

  • Last background job execution ran 2 hours ago. Something seems wrong.
  • The database is used for transactional file locking. To enhance performance, please configure memcache, if available.

But I had installed Redis and APCu and configured them … so what was wrong?
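
For context, the caching bits of my config.php followed the standard recipe from the Nextcloud admin manual (host and port are the defaults):

// /var/www/html/nextcloud/config/config.php (excerpt)
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'localhost',
  'port' => 6379,
],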

I should mention that I’m using PHP 8.2. Apparently, with that new version of PHP, the APCu code needs an additional setting that wasn’t needed before.

Find your way to /etc/php/8.2/mods-available and edit the apcu.ini file. Add this:

apc.enable_cli=1
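
A quick way to double-check that the command-line PHP now sees the setting (assuming the CLI uses the same PHP build as Apache):

# should report apc.enable_cli => On => On
php --ri apcu | grep enable_cli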

Finally! I have the green check mark: All checks passed.

How to test if your cron job is going to run correctly:

sudo -u www-data php -f /var/www/html/nextcloud/cron.php

I had to add the sudo package to Debian, because the basic server build did not come with it. What sudo does here is let me switch users and run the command: first I specify the same user that Apache runs as (www-data), and then I run the PHP interpreter on the file /var/www/html/nextcloud/cron.php
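
For reference, the cron job itself is the standard one from the Nextcloud manual, installed in the www-data user’s crontab:

# edit with: crontab -u www-data -e
*/5 * * * * php -f /var/www/html/nextcloud/cron.php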

Prior to the change, it errored out with a rather ugly OCP\HintException: [0]: Memcache \OC\Memcache\APCu not available for local cache (Is the matching PHP module installed and enabled?)

Now, after the change, it simply runs without reporting anything (everything ran successfully).

Mildly amusing: 7.3 miles and 13 green lights in a row

I happened to be driving back from Tulare tonight, and wanted to pick up tacos for dinner at BT’s on Mooney Boulevard in Visalia. I waited at the left-turn signal at the intersection of Tulare Avenue and CA-63 (Mooney) in Tulare. Turned left, put the cruise control on 40 MPH, stayed in the right lane. I didn’t have to tap the brakes or adjust the speed for the next 7.3 miles. Never even hit a yellow light, though for one intersection a cross-traffic car had pulled up so I thought I might. Thirteen green lights in a row. 🙂

https://goo.gl/maps/p7LE7MgXYPTXuBJJ9

Yes, 40 MPH is really slow for this trip. I wasn’t in a hurry, and I know that optimal fuel efficiency is around 30 MPH: higher than that, and I’m increasingly burning fuel to defeat wind resistance. 40 MPH is a fair trade-off. I’m not so slow that I’m a hazard, and Mooney is two or three lanes the whole way.
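
Back-of-the-envelope: aerodynamic drag force grows with the square of speed, so the aero losses per mile at 40 MPH are roughly (40/30)² ≈ 1.8 times what they are at 30 MPH. A fair price for not being a rolling roadblock.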

New openSUSE Tumbleweed cannot ssh in

Problem: I’ve installed openSUSE Tumbleweed fresh on new hardware, and I cannot log in as root with ssh. The solution is three steps.

I should also mention the symptoms: I could try to log in with ssh root@host and I would get prompted for the password – as if it were going to work. But no matter how many times I put in the password, I would simply get prompted for the password again, as if I had typed it wrong.

I used an ISO of openSUSE Tumbleweed and the super-easy-to-use Imagewriter to make a bootable USB. I installed openSUSE Tumbleweed fresh, with the option to delete every existing disk partition no matter what: this is about the simplest openSUSE Tumbleweed install I can make. Oh, and I installed it as a server install, without a graphical user environment. It’s going to be a Nextcloud server. Actually, the whole idea of installing Tumbleweed for a server was a bad idea; I’m going to wipe it and install openSUSE Leap. Problem is, I’d like to install and configure the database and Nextcloud from the machine I’m typing this on, and not from the text console attached to the physical hardware. For that, I’m going to need ssh.

Care to guess what doesn’t work out of the box?

Solution:

  1. cp /usr/etc/ssh/sshd_config /etc/ssh/
  2. edit the new /etc/ssh/sshd_config and change the following
    • PermitRootLogin yes
    • PasswordAuthentication yes
  3. reboot now
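
For what it’s worth, the reboot is the blunt instrument; restarting just the ssh daemon should also pick up the change:

systemctl restart sshd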

So, apparently, allowing root to ssh in with “just” a password is considered a bad idea. That is why the default settings were changed to make it not work. But this does leave us with a bit of a “pulling ourselves up by our bootstraps” problem: how can I use ssh-copy-id root@host if I cannot complete the operation by logging in as root?

We’ve got to be able to authenticate before the keys can be copied up; otherwise any random bad guy would load their keys in. But if we’re not allowed to authenticate “because passwords are bad”, then we’re not allowed to authenticate….

This is way less of a problem if I’m working on a virtual machine. VMs have a virtual console, and opening one is trivial. I can log in as if I were on the physical console at the same time I have web pages open searching for the way to fix this problem.

But today’s case wasn’t a virtual machine – it was a physical machine in the other room. Without a web browser.

Well, okay, sure, I could install Lynx, but last time I tried, most web sites (including Google) didn’t work. I’m pretty sure the text console doesn’t have a clipboard I could copy/paste “/usr/etc/ssh/sshd_config” to and from, either. But I digress.

The other minor pain point is that there are many articles on the Internet that talk about the PermitRootLogin option and the PasswordAuthentication option. But they say to edit the file: /etc/ssh/sshd_config

That file doesn’t exist there, in a freshly minted ISO from openSUSE. They moved it to /usr/etc/ssh because that’s where packages place these files. If someone in the sshd project comes up with a better version, that is where the updated configuration file can be put (without warning), because users are not supposed to store user data in /usr. It’s too much of a hassle to then copy the default file from /usr to /etc without clobbering the user-supplied updates: so they don’t. That’s up to me.

But it does mean that the config file I need to edit isn’t there. Gee, thanks.
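
One saving grace: sshd can report the settings it is actually running with, which confirms which config file won out:

# dump the effective configuration (run as root)
sshd -T | grep -E 'permitrootlogin|passwordauthentication'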

Now that I have the ssh key copied up to the new server, I’ll go ahead and turn off the root-allowed-to-log-in-with-a-password option.
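
That tightening-up amounts to flipping the two settings back in /etc/ssh/sshd_config and restarting sshd (prohibit-password still allows key-based root logins):

# in /etc/ssh/sshd_config:
PermitRootLogin prohibit-password
PasswordAuthentication no
# then:
systemctl restart sshd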

But man what a PITA it was to get to this point.