Microsoft as bully, yet again

Personally, I think people have the capacity to be both humble and bullies. But it is a conflict, and some people think they are being helpful when actually they are bullying. “If only you did things my way, then everything would be better,” says the bully under the delusion of being helpful.

Recently, Microsoft pushed out an update to Windows 10 which adds a “News and Interests” widget to the Windows task bar. You don’t get a choice; it will be installed whether you want it or not. You can turn it off after the fact, of course. But what the person at Microsoft behind this change does not care to hear is that I didn’t want to be violated in the first place.

All it really does is remind me that I don’t have control of this machine; Microsoft does.

Thanks, Microsoft. I already dislike you, but I hadn’t gotten a recent reminder of why.

“What’s the big deal‽‽‽ It’s just a little thing. I was being helpful and making your life better” says the bully. Yeah, no. I hear what you are saying, and I see through the deceit (conceit) that this is somehow for my good. It is not. It is an ego stroke for yourself and nothing more.

If it were really for my good, it would be turned off by default, and not installed by default. Microsoft could say “we added a new feature, if you want”, and I’d be fine with that. But pushing it without permission tells the truth of the act.

Abandoned LastPass

LastPass was, for seven years, my password manager of choice. I liked that Steve Gibson of Gibson Research Corporation liked it. I liked that it had Yubikey support. I liked that it had an app for my iPhone. I liked that because it was a cloud service, my passwords followed me around.

The idea is a good one, too: memorize a single complex password, use it plus 2FA (two-factor authentication – in this case, my Yubikey), and then make the passwords on every other web site extremely complex. Like Hn6k344SdRt#CT_Epste1nd1dn’tk1llh1mself_PFnPr2XP#J$4P*@Lyxi!r complex.

I did not mind paying for that service, since I know that it costs money to run servers and pay employees and such. The price really wasn’t too bad, either.

But somewhere along the line, the creators of LastPass decided to cash out. They sold the company to LogMeIn. But now, the LogMeIn folks are out cash money, and they need to make that money back; the quicker, the better. Suddenly I and all their other customers began to look like marks to be played.

Sigh.

For several months, I wanted to take a screenshot of my LastPass initial login screen and post it to the Reddit Asshole Design community. The thing was, all of a sudden LastPass would post a fake “Warning – We Care About Your Security” alert every time I logged in. But what it really was, was a request to put my credit card number on file with them, so they could auto-renew. It wasn’t about my security. It was about theirs. I’m going to need to buy a pair of cowboy boots, the bullshit is getting so deep around here.

This was a constant reminder that the company had changed, and wasn’t the same company I signed up with. What finally pushed me over the edge was their announcement that as of March 16, 2021, if you want to use LastPass on mobile, you have to pay – or else “no passwords for you!” So now we see their true character: “I Am Altering the Deal, Pray I Don’t Alter It Any Further.”

And I’m out.

Need to print + OpenSUSE 15.3 upgrade – What could go wrong?

I needed to go to a new doctor yesterday. The day before, they had called and left a message that I would also need to bring along a list of my current pharmaceutical prescriptions. I got the bright idea to log in to the online pharmacy web page and print my current list. This is about 40 minutes before I need to step into the home office to report to work.

It went poorly.

Still, if this is the worst thing to happen to me this month, I am a fortunate man. I’m a fortunate man who cannot print from Linux, but I’m still a fortunate man.

Certainly part of the problem is my fault; I had upgraded from OpenSUSE 15.2 to 15.3. 15.3 was released two weeks ago; I upgraded about ten days ago. This was not enough bake time. I should have listened to my own advice: do less yeet and more tootle. But yeeted I had, so the story unfolds ….

Okay, so I logged in to the pharmacy web page, and used the browser to print. Got no printer noises, but no error about anything, either.

The print driver I’m using is from OpenPrinting.org and it did previously work. I did print something three weeks ago. But today, nada.

Go into the printer manager in OpenSUSE 15.3 and do a test print. No printer noise, but no error alert either. It asks if the print worked; I say no, and it says to run journalctl to see what went wrong.

I don’t like journalctl. It spits at me about permissions, and I used to be able to just grep a log file – any log file – and search for terms like “error” or “warn” or “cups”. I just want to print, man.
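In fairness, journalctl can be coaxed into behaving a lot like grep. Something along these lines is roughly what I end up typing (cups is the usual unit name, but your system may differ):

sudo journalctl -u cups --since today
sudo journalctl --since today | grep -iE 'error|warn|cups'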

Okay, dig in and find that there is an error with the driver. Reinstall the driver. The driver will not install.

The driver is dependent on LSB. LSB = Linux Standard Base, which was the idea that the various packagers of Linux would all agree on what should be in a base install of Linux (that supports LSB). Software vendors could count on the base packages being there, or worst case say, “this software needs LSB, please install it”.

I had previously installed LSB (to get the printer to work), but now it’s missing. That must have happened during the 15.2 to 15.3 upgrade.

Okay, no big deal: zypper in lsb

Problem: nothing provides ‘/usr/bin/pidof’ needed by the to be installed lsb-foo

Well that’s darling. It’s a bug, and it is fixed in OpenSUSE Factory. I just want to print, man, and it’s now 20 minutes before work.

Okay, go to the fallback position: print to PDF, copy the file to a Winders box, and print from there.

I have Nextcloud client installed and running on most of my machines. Copy the file to my Nextcloud folder. Go to a Windows machine – there are no new files in the Nextcloud folder. Machine is acting wonky anyway, so I reboot (yeet!)

  1. Microsoft decided I needed a Weather widget in my taskbar, so they inserted one without asking. I need to lose some time praying to remove the murderous rage I have toward Microsoft for being so un-invitingly forcefully helpful.
  2. Nextcloud client has an update, would I like to install? Yes, please. What was that about less yeet and more tootle?
  3. Nextcloud client version 3.2.2 is no longer compatible with your older Nextcloud server. Have a nice day!
  4. It is now 10 minutes before work. I just want to print, man; my travel time to the doctor did not pad with lead time for print fixing, and as the famous mage once said: “Outlook not so good”.

Okay, what about the web page version of the Nextcloud server? Right, dang, I forgot I was going to need my physical 2nd Factor authentication key. Back to the living room to get it.

Logged in on the Windows box. The file is not there.

Dang it! The Nextcloud client on my Linux box has the same version problem. Back to the living room, open up the Nextcloud files web portal, do the physical 2nd Factor authentication thing, and copy the file up. Back to the home office, open the PDF in the Nextcloud files web portal in Firefox and hit the print button. Finally, noise from the printer in the living room.

Put on a shirt and my shorts, get an energy drink out of the refrigerator, and I’ve got 30 seconds to spare.

“One does not simply press print”

Next week, I’m going to install a firewall router!

New toy: Raspberry Pi 4

So of course, the first thing I wanted to do (after installing Raspbian and applying updates) was to add some aliases and switch the editor and history search commands.

Changing the editor to vim

Changing the editor to vim was easier, once I knew how.

  • Install vim
  • update-alternatives

Vim does not come installed by default on Raspbian. But adding it is easy enough:

sudo apt-get install vim

I had found someone that said (when things weren’t working the way I wanted) to sudo apt-get install vim-full – this does not work! There is no package “vim-full” from which to install. At least not on my fresh-out-of-the-box Raspbian machine.

sudo update-alternatives --config editor

This opened up a list of available editors, numbered for selection, with an asterisk next to the default. It was set to nano, but I wanted vim.

Except that there were two vim choices: vim.basic and vim.tiny

Which one to choose? Neither looks like vim.full to me. 😉

Turns out I wanted vim.basic
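If you would rather skip the menu (say, in a setup script), update-alternatives can be pointed straight at the binary – assuming vim.basic lives in /usr/bin, which it did on my fresh Raspbian install:

sudo update-alternatives --set editor /usr/bin/vim.basic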

Now, I can be in the less utility, and if I want to edit the file (and I already have permission to do so) I can hit the v key and be editing in vim.

Adding some aliases

This was super easy. I created a file, ~/.bash_aliases and added the alias commands to it.

alias ..='cd ..'
alias ll='ls -l'
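Raspbian’s stock ~/.bashrc already sources ~/.bash_aliases if it exists (at least on my install), so new logins pick the aliases up automatically. To get them in the current shell without logging out:

source ~/.bash_aliases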

Changing the history search keystrokes

This one was the closest to what is described in my How to make Ubuntu have a nice bash shell like OpenSuSE post.

sudo vim /etc/inputrc

Find the commented out commands, and uncomment them:

# alternate mappings for "page up" and "page down" to search the history
# "\e[5~": history-search-backward
# "\e[6~": history-search-forward

All I have to do is delete the “#” character in front of the \e[5~ and \e[6~ lines, which is what declares them to be comments. With vim, that’s the x key.
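After the edit, that part of /etc/inputrc looks like this – only the two mapping lines lose their “#”; the descriptive comment above them stays:

# alternate mappings for "page up" and "page down" to search the history
"\e[5~": history-search-backward
"\e[6~": history-search-forward

New terminal sessions pick the change up; existing ones won’t until they are restarted.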

Once you’ve been bamboozled ….

Once you’ve been bamboozled, it is almost impossible to become un-bamboozled.

This was from an AskReddit question, something like “What was the best quote, life-changing saying, or most profound advice people had heard?” – my search did not find the exact entry to cite. One of the answers was this one. It’s great. I mentioned this to a friend of mine; he thought Carl Sagan had said it. Well, essentially yes, but not exactly. Carl Sagan’s quote goes like this:

“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.”

https://www.goodreads.com/quotes/85171-one-of-the-saddest-lessons-of-history-is-this-if

This is well said, but also really wordy, plus throws in a ten dollar word: charlatan. I like the short and sweet version.

It seems to me that almost the entire USA has been bamboozled about politics.

The Left has been bamboozled that Donald Trump Is A Bad Man.

The Right has been bamboozled that Donald Trump Is A Good Man.

I remember seeing a cartoon not that long ago (within a year or two) that had a King on a balcony with an advisor, overlooking an angry mob. Here it is (I linked to the original source, so you can get to that web page – credit where credit is due):

The advisor was saying to the King: “Oh, You don’t need to fight them – you just need to convince the pitchfork people that the torch people want to take away their pitchforks.”

When I went looking, Google search failed to find this cartoon. I mentioned it to a friend, and he saw it on Facebook. I asked him to forward it to me. From there, I was able to upload it to Google Image Search, and then finally find the original publisher. The conspiracy theorist spoiler alerter in me thinks the search engines of the day have de-ranked or removed this image in search results because it spoils the narrative.

The idea here is an intersection between the two old sayings A house divided against itself cannot stand and The People restrain themselves and anxiously hope for just two things: bread and circuses.

The Left is thoroughly convinced that The Right has been bamboozled. The Right is thoroughly convinced that The Left has been bamboozled.

I am convinced both have been bamboozled by the deep state and its unholy alliance with mass media. When I say mass media, I’m also looking at you: Facebook and Google and Twitter.

Here’s the thing about Donald Trump: he was never supposed to be President.

The deep state mass media planned to get Hillary Clinton. They thought they earned Hillary Clinton. By knocking out every good opposing candidate, there was no way that Hillary could lose. There was no way that Hillary Clinton could lose against Donald Fucking Trump. Knock out every other candidate, and the election was a done deal.

This was perfect for the deep state, because Bill and Hillary Clinton were already players. They’d played ball before, and were happy to play again. As insiders, their keepers had leverage on them, and as players, they knew their keepers would be comforted with them as lackeys. It was a win-win situation.

But (“oh by the way”) Hillary Clinton was the worst possible candidate for President.

Which is proven out, because she lost to Donald Fucking Trump, dontcha know. Fair and square, she was simply that BAD of a candidate. And to be fair, Donald was actually a very good campaigner, and a master of Twitter trolling. His campaign speeches were super entertaining. The deep state completely underestimated how well Donald would perform.

Donald was an outsider. This was a disaster for the deep state.

Chuck Schumer delivered the deep state’s warning to Donald Trump to play ball or else (after the election, but before the inauguration).

  • Donald bristled at being told to accept his role of lackey. Now the deep state is on his shit list.
  • If Donald made it to a second term, there was no remaining leverage to keep him from ravaging the deep state.

Does it appear to you that Donald rolled over and became a lackey?

The only choice the deep state had was to backstab the sitting President every chance they could get.

Wow did they ever.

The Commander In Chief: that is who the deep state is supposed to obey. Instead, they did everything they could to subvert CIC/POTUS. They became traitors to the rule of law.

And you, dear reader, got taken in by the charlatans that Donald Trump Is A <‽> Man.

rsync is wonderful, but ….

rsync /datastore/61/E4 /newserver/61/E4

is wrong and will mess you up!

Imagine if you will, that you have a whole bunch of data stored on an old server, and you need to copy it to a new server. The rsync utility would be an obvious way to go. There are things about the job and rsync that you might want to tweak, though, and that’s where things get ugly. Part of this is bash’s fault.

Imagine if you will, that your data store is 120 million small files (emails) stored in 256**3 directories. 256 cubed is 16,777,216 subdirectories.

The programmer that created the data store to hold all these files needed subdirectories to put the files in. Linux doesn’t really like 20,000+ files in one directory. It would be better to have more subdirectories, with fewer files per subdirectory. So the programmer started with a loop:

for i in $(seq 0 255); do mkdir "$(printf '%02X' "$i")"; done

Then the programmer did a change directory into each of those directories he just made, and did the exact same thing.

cd 00; for i in $(seq 0 255); do mkdir "$(printf '%02X' "$i")"; done; cd ..
cd 01; for i in $(seq 0 255); do mkdir "$(printf '%02X' "$i")"; done; cd ..
cd 02; for i in $(seq 0 255); do mkdir "$(printf '%02X' "$i")"; done; cd ..
...
cd FF; for i in $(seq 0 255); do mkdir "$(printf '%02X' "$i")"; done; cd ..

That gets you to 256 squared, which is 65,536

And then the programmer did a change directory into each of those directories he just made, and did the exact same thing. All 65,536 second level subdirectories got a third level of another 256 subdirectories. That gets you to 16,777,216 which is 256 cubed.
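For what it’s worth, the whole three-level tree could be recreated today with one nested loop – this is just a sketch of the end result, not the original programmer’s code:

cd /datastore
for a in $(seq 0 255); do
  for b in $(seq 0 255); do
    for c in $(seq 0 255); do
      mkdir -p "$(printf '%02X/%02X/%02X' "$a" "$b" "$c")"
    done
  done
done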

So your file server directory structure might contain this:

/datastore/61/E4/7D

Inside good old 61/E4/7D there might be twenty to thirty files, each one holding the content of an email, or a metadata file about the email. The programmer was pretty good about filling all of the datastore subdirectories to nineteen files each, then twenty files each, then twenty one files each. No Linux system is going to have a problem with twenty one files in a subdirectory.

The only real problem here is if you need to traverse everything in /datastore – this takes forever

Back to the problem of copying everything from /datastore to /newserver. Let’s assume that /newserver is on a different machine, and we are using a remote file system mount to make the remote machine appear to be a local disk (mount point).

You might think the rsync command ought to look like this:

rsync --archive /datastore /newserver

There are two things that make this sub-optimal. First, it is single-threaded. Second, there is no progress feedback.

The single-threaded part isn’t so bad; it just means that we are losing speed due to rsync overhead. The server has twelve cores, the network is 10 Gbps Fibre Channel, the /datastore disk has multiple spindles, but rsync was designed for slow networks way back when in the bad old days.

At this point, you might ask “why not do a straight cp -r” (copy command, recursive)? It’s not a terrible idea; but, what if there were a network glitch? The entire cp -r would have to be started over, and every bit already copied would be copied again. This is where rsync shines: if the files in the destination are the same as the source, the copy is skipped. cp -r also suffers from the same lack of progress feedback.

Did I mention that the 120 million files are also 9.3 terabytes of files? I really don’t want to get to 98% done and then have a network glitch cause me to copy another 9.3 TB over, which would be the case with cp -r

The tests I’ve done indicate that four rsync commands, running simultaneously, copied the most data in the shortest period of time in my environment*. More than four rsync commands at once, and I started to saturate the disk channel. Fewer than four rsync commands, and something is waiting around, twiddling its thumbs, waiting for rsync to get busy with the copying again, which it will do, as soon as it finishes up with the overhead it’s working on.

The other problem is a lack of progress feedback. The copy is going to take multiple days. It would be nice to know if we are at 8% complete or 41% complete or 93% complete. It would be nice to be able to compute what the percentage complete is.

Well, how about 64K rsync commands, each with a print statement of the directory it is processing? And if we could run four of them in parallel, we could get the multiple jobs speedup too.

You might think the rsync commands ought to look like this:

rsync --archive /datastore/00/00 /newserver/00/00
rsync --archive /datastore/00/01 /newserver/00/01
rsync --archive /datastore/00/02 /newserver/00/02
rsync --archive /datastore/00/03 /newserver/00/03
rsync --archive /datastore/00/04 /newserver/00/04
...
rsync --archive /datastore/FF/FF /newserver/FF/FF

but WOW would you ever be wrong!

Remember old /datastore/61/E4/7D up there? This format for rsync would put E4 in the source under E4 in the destination! In other words, although the source looks like this: /datastore/61/E4/7D the destination would look like this: /newserver/61/E4/E4/7D

To be done right, the command needs to look like this:

rsync --archive /datastore/00/00/* /newserver/00/00/
rsync --archive /datastore/00/01/* /newserver/00/01/
rsync --archive /datastore/00/02/* /newserver/00/02/
rsync --archive /datastore/00/03/* /newserver/00/03/
rsync --archive /datastore/00/04/* /newserver/00/04/
...
rsync --archive /datastore/FF/FF/* /newserver/FF/FF/

The source needs a trailing slash and asterisk to tell rsync to copy the stuff underneath the source (not the source itself) to the destination (which is finished with a slash).
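As an aside, rsync itself treats a bare trailing slash on the source as “copy the contents of this directory” – no asterisk needed, and it also picks up any dot-files that a shell glob would skip:

rsync --archive /datastore/00/00/ /newserver/00/00/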

Enter the problem where bash is a pain in the ass.

Well, before I go there, let me mention that it wasn’t too bad to write a Perl script to write this bash script, and do three things per source and destination pair:

echo "rsync --archive /datastore/00/00/* /newserver/00/00/"
rsync --archive /datastore/00/00/* /newserver/00/00/
echo "/newserver/00/00/" > /tmp/tracking_report_file

The first line prints the current status to the screen. The second line launches the rsync. The third line overwrites a file, tracking_report_file, with the last rsync finished.
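The Perl itself isn’t worth reproducing here, but the shape of the generator is simple enough that a bash sketch shows it just as well (the output file name is made up; the real thing was a Perl script writing the same three lines per pair):

for a in $(seq 0 255); do
  for b in $(seq 0 255); do
    d=$(printf '%02X/%02X' "$a" "$b")
    echo "echo \"rsync --archive /datastore/$d/* /newserver/$d/\""
    echo "rsync --archive /datastore/$d/* /newserver/$d/"
    echo "echo \"/newserver/$d/\" > /tmp/tracking_report_file"
  done
done > copy_everything.sh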

So, crank up screen first, launch the bash script, and some number of days from now, the copying will be done.

That /tmp/tracking_report_file gives me a pair of hexadecimal pairs, which I can then use to compute percentage complete. For example, when /newserver/7F/FF updates to /newserver/80/00, then we are going to be just over 50% done.

Heck, I can detach from screen, and I don’t even have to watch the rsyncs happen. I mean, I do need to check in on them, but I don’t have to sit and watch. Better yet, I can take the same routine that converts the pair of hexadecimal pairs into percentage complete and wrap that inside a cron job that sends an email. Progress status tracking accomplished!
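The hexadecimal-pairs-to-percentage conversion is just arithmetic. A rough sketch of what the cron job can run (assuming the tracking file holds a path shaped like /newserver/7F/FF/, as written by the script above):

last=$(cat /tmp/tracking_report_file)        # e.g. /newserver/7F/FF/
pair=${last#/newserver/}                     # 7F/FF/
hi=${pair:0:2}
lo=${pair:3:2}
done_count=$(( 16#$hi * 256 + 16#$lo + 1 ))  # top-two-level directories finished
printf 'approximately %.1f%% complete\n' "$(echo "100 * $done_count / 65536" | bc -l)"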

But this does not solve the single-threaded rsync problem.

And ultimately, I could not get it done.

What looked to be an okay solution was using the find command, to feed into xargs which could do shell stuff in parallel. I even got as far as getting bash shell variables to create the rsync --archive /datastore/00/00/00 /newserver/00/00/00 part.

Okay, that would be 16 million smaller rsyncs instead of 64 thousand larger ones, but I might even be able to bump up the parallelism to six or eight or nine.
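Reconstructed from memory, the attempt looked roughly like this – not the exact commands, and as the next paragraph explains, the destination it builds is exactly where the trouble starts:

find /datastore -mindepth 3 -maxdepth 3 -type d |
  xargs -P 4 -I{} bash -c 'src={}; rsync --archive "$src" "/newserver${src#/datastore}"'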

But the serious problem the rsync --archive /datastore/00/00/00 /newserver/00/00/00 command has is the naive problem: the missing trailing slash and asterisk are going to put the source underneath the destination. I need to put the trailing slash and asterisk on there.

And bash says “that’s a nope”

Trailing slashes and asterisks are automatically culled from output, because (reasons).

Oh well. The find command also spits out the directories it finds in rather random order. My bash script, with sequential rsyncs in sorted order, means that the last one completed really is some-percentage-of-the-total done. But if find chooses to spit out /datastore/b3/8e/76 instead of /datastore/00/00/00, then my status tracking doesn’t actually work. I would be forced to traverse all of /newserver/ and count which of the 17 million are complete, which would take freaking forever.

Yes, I said 17 million. Did you notice that the programmer that created subdirectories did some of them in lowercase hexadecimal? That happened when we brought in another email system (Exchange). Lovely.

*Based on the last time I did this migration, although that was on a four-core box.

Microsoft fouled up when they got rid of gallery.technet.microsoft.com

In the real world, people like to find solutions and then link to the solution as a form of documentation. It is a way of being helpful. I can feel good about myself if I help you out (or at least I’m trying to help you out). The result is that there tend to be a lot of forum discussions and blog posts that have a link to a gallery.technet.microsoft.com script that solves the problem.

As solutions go, gallery.technet.microsoft.com was a great idea. People write a script, that script works, so the author donates it to the world at large by publishing it (free of charge and disclaiming all complaints about damage). Microsoft benefited because if you knew nothing else, you knew to go search there for possible help. If I’m trying to solve a problem, and I find the solution on gallery.technet.microsoft.com, then I’m helping if I tell people “here was my problem, and I found the solution: foobarbaz at gallery.technet.microsoft.com”

Unfortunately for us, someone at Microsoft felt the need to push the world into complying with their grandiose idea: “Let’s get rid of Technet and replace it with docs.microsoft.com !”

This was a terrible idea.

And no-one at Microsoft was grown-up enough to stop it.

So now, the world wide web is littered with broken links to solutions that used to be helpful, but now go to https://docs.microsoft.com/en-us/samples/browse/?redirectedfrom=SomethingThatUsedToBeGreatButWeKilledIt

I don’t know what they were thinking, but it was probably someone wanting to pridefully change the world to comply with the way they thought the world ought to be. All they really did was break a previously good thing.

Update to AMD Ryzen 1700 and power sleep failure

I had written about a fix for my machine because it has a slightly older AMD Ryzen 1700 CPU. I recently re-installed an older version of OpenSUSE (wiping out the previous OS drive contents and replacing it). This did what I wanted it to do; but, it also wiped out the fix for the power sleep problem. I went back, and tried to implement the same fix, but it didn’t work. So here’s my note about a better fix.

I still am using the script in /etc/init.d which I named set_c6_acpi_state_disabled.sh

I did have to edit it to invoke python3 instead of just python.

#!/bin/sh 
# ScriptName=set_c6_acpi_state_disabled 
/usr/bin/python3 /home/blah/zenstates.py --c6-disable

zenstates.py can be found here.

But instead of messing with symbolic links to script files in places, I’m just adding a crontab entry to the root user:

@reboot /etc/init.d/set_c6_acpi_state_disabled.sh
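One way to add that line without opening an editor (this just appends to root’s existing crontab, if there is one):

( sudo crontab -l 2>/dev/null ; echo '@reboot /etc/init.d/set_c6_acpi_state_disabled.sh' ) | sudo crontab -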

XCOM2 is great

I don’t yet have any of the DLC, so I’m playing plain vanilla. But man vanilla is a wonderful flavor! 😀

It’s kind of funny – I didn’t really take to XCOM(1). I’d tried installing it on Steam on Linux, and wow did the mouse control ever not work. Just impossible, so I turned my back on it and muttered under my breath “stupid game” and patted myself on the back for avoiding the trouble.

A couple posts back, I talked about other strategy video games that I tried or liked. Stellaris was definitely a disappointment, so I was looking for something I liked better than Endless Space and Stellaris, and going through some game review sites. There was some mention of XCOM2 as “Best Game of 2016” somewhere.

Steam happened to have XCOM2 on sale.

I can tell you are shocked. 😉

Man, this game is good.

The only downside seems to be that it is a little buggy. Sometimes screens just fail, and there is rarely a way to see why or an ability to recover. So I completely understand why people use the Bronzeman mod, because Ironman would make me swear off all games forever if it trashed my game on the final scene. Which, by the way, XCOM2 did. Thankfully, I was able to load the most recent savegame, and then play my way through the ending again. This time, it turns out there is a four-page statistics summary that was interesting.

I also see two almost completely black screens where cut-scenes ought to go. But there are tiny little dots of white, and sometimes some swirly stuff going on. I kind of wish I knew someone on Xbox or PlayStation with the game so I could see if there really is supposed to be something there.

But bugs aside, the game play is great. The game is challenging. It is also interesting to try out different strategies and methods. There is a large variety of scenarios, and the timed ones add pressure. The game is just great. 😀

What I would suggest to Firaxis / Take-Two for XCOM3 is to enhance the character modification process so that I can customize my character faces to look exactly like myself or people I know. So do the whole face customization thing, with 18 face shapes from oval to square to round; let me pinch or spread the eyes, ditto the relation between the eyes and the end of the nose, and the end of the nose to the centerline of the mouth. Twelve different nose styles with 50% – 200% size scale, and yes, even for the mouth, there should be styles (including RBF). Gobs of eyebrow styles, and a better selection of hair colors. BTW, I’m pretty bald, but I don’t shave my head completely. Could that be an option? For the eyes, there should be the ability to set the eyeballs deeper into, or out of, their sockets. Turns out that’s a primary skull difference between females and males: the skull ridge above the eyes. So, can we get that customization? I want to take a picture of myself, compare it to the face sculptor in XCOM3, and keep tweaking the settings until it looks exactly like me.

That would be wonderful.

Web browsers and automation, oh my!

At work, one of my job tasks is e-discovery, which means logging in to an email archive web application, doing searches, and tagging the items that meet the search criteria. The web application was originally written by one guy (I think) and although the back-end stuff is amazing, the actual web pages I interact with are twitchy. There are more than 110 million emails in this archive; and the search and indexing features are great. But the results pages? Sometimes I have to deal with a lot of them, and the smart way to go is automation.

(As I write this, the one automation script is working its way through 88 pages of results, and the script tells me it will likely be done in about 15 minutes.)

When I say the results pages are twitchy, what I mean is that the buttons on the page move, after they have been clicked. Usually, but not always – and that is dependent on the web browser.

I’m using WinBatch to automate driving the web page. Specifically, there is a start cycle process, where I go through the motions of which buttons to press; but I don’t press them, WinBatch does. To signal WinBatch that the mouse co-ordinates are correct, I tap the Shift key. There is a super tight loop in the WinBatch script which records where the mouse pointer is (it’s a 1000 * 1000 virtual x-y coordinate system). It reads the x-y coordinates, checks to see if the Shift key has been depressed, and if so, it records the coordinates for that button, and then moves on to the next button that needs to be defined.

This works fine if the buttons do not move. But under some browsers, clicking a button moves elements on the web page. I’m sure this is a CSS / Javascript thing that happens because the initial development was all about how to wrangle millions of emails, and not about web page design.

So, under Google Chrome, the web page is the least twitchy. But Google with their Chrome browser are a bunch of rat-bastard bullies, so I can’t really use it. We have an internal (private) domain name, which means our SSL certificates are self-signed. Yeah, Google Chrome hates on that. Most recently, the problem is that the server is old. We ran into a problem trying to upgrade, so we didn’t. But that means the SSL on the web page is TLS 1.2 – to which Google helpfully tells me to go kick rocks.

Okay, what about my favorite browser, Firefox? It is the most friendly when it comes time to just get things done, but it is also the most twitchy. Sometimes the email archive server gets into a state where the web page dialog boxes pop up off screen; this only happens with Firefox. I had actually opened up a technical support ticket with the email archive vendor for this, and they told me to stop Tomcat, Apache, MySQL, empty a cache directory, and start everything back up. That worked, so the vendor never actually tried to figure out what was going wrong. Recently, I’m having to share the server with people doing email exports, so I can’t just willy-nilly bounce the services. If the services have been bounced recently, Firefox works fine. But if it’s been a few days, then I can’t use it. I don’t want to re-write a portion of my WinBatch script to try to find the top of the web page, then the bottom of the web page, just to support this twitchy behavior by Firefox.

Okay, what about Microsoft’s Edge browser? It’s based on Google Chrome, so it might have better HTML element layout like Chrome has. Alas, during the loop to track where the mouse pointer is, Edge just really doesn’t like refreshing the screen / sharing anything with any other program (WinBatch). So I could not actually get the location of the buttons on the web pages to play the mouse clicks back later.

Finally, I have tried the Brave web browser. I’m not terribly fond of it. It looks to me rather like a front by Google to try to get people who are suspicious of Google’s privacy violating lifeblood to use the Chrome browser anyway. But, it has the advantage of being based on the Chromium engine (which creates the least amount of twitch).

Nicely enough, it isn’t trying to be the bully that gives me the finger for trying to access a self-signed cert web site over TLS 1.2. I can actually get work done now.

Weirdly, it’s the only browser that causes the “Next” button (to advance to the next page of search results) to twitch horizontally. I need to position the mouse pointer over the “t” in Next, let go of the mouse, and tap the Shift key to set the position. Sometimes the Next button shifts to the right, and sometimes it does not. But if it does, then when the mouse pointer gets played back to the same position, it is still over the “N” in Next, and the button press still works.