Using htaccess to Get Rid of &wd=test

February 15, 2018 | posted in: nerdliness

Recently I've had a plague of links to my site that end with /&wd=test or /&wd= followed by four random characters. They were causing a large uptick in 404 errors, as none of my posts or pages has &wd= at the end.

A week or so ago I set up a .htaccess RewriteRule to force any incoming link with the &wd business on the end to my 403 page. Here's the rule:

RewriteRule "wd=test/?$" "-" [L,F]

This worked like a charm. The only problem was that now I was seeing 403 errors instead of 404 errors. I realize that in this case the word "error" is a misnomer; the error is on the part of the incoming request, not an error on my site. Still, having all those 403 hits bothered me.

Time for a new RewriteRule:

RewriteRule (.+?)/\&wd=.*$  https://zanshin.net/$1 [L,R=301]

This one matches &wd= plus any number of characters after the equal sign, and it redirects the request to the same URL minus the spurious &wd business.
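
For example, a request for a hypothetical post URL now comes back as a 301 to the clean address:

https://zanshin.net/some-post/&wd=test  ->  https://zanshin.net/some-post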

As for what the &wd=test might mean, this is the only English-language page I've found that talks about it at all: Chinese "Testing" Link Hacking Attempt?. My site is static (i.e., there's no database or online administration function to exploit). Redirecting the &wd links to the actual link seems like a good solution for me and my site.

Using tcpdump to Debug a Database Connection

January 19, 2018 | posted in: nerdliness

This afternoon I had to determine why a particular database connection wasn't working. One of the tools I ended up using was tcpdump. I don't pretend to understand even half of what this tool can do. Here's a nice tcpdump Tutorial and Primer with Examples.

Here's the command I ended up using to try and figure out what was up with my broken connection:

  sudo tcpdump -s 0 -i em2.1170 -w /tmp/library.pcap dst 10.130.228.99 and port 1521

Without sudo you likely won't be able to run the command. In English: this command captures full-length packets on the network interface called em2.1170, going to IP address 10.130.228.99 on port 1521. Whatever the command captures is written to a file in /tmp called library.pcap. The output file is binary; you'll need Wireshark (or tcpdump itself) to read it.
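
The capture can be read back with tcpdump, which is handy for a quick look before reaching for Wireshark (-nn skips host name and port resolution):

sudo tcpdump -nn -r /tmp/library.pcap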

-s Sets the snaplen, or the number of bytes from each packet to capture. The default is 65535, and -s 0 also sets it to 65535. Using -s 0 is a backwards-compatibility trick: older versions of tcpdump defaulted to a much smaller snaplen, so -s 0 gets you full packets no matter which OS or tcpdump version you're on.

-i em2.1170 Specifies which network interface to monitor. You may need to run ifconfig -a (or tcpdump -D) to determine which NIC to watch.

dst 10.130.228.99 and port 1521 Tells tcpdump we want to monitor only destination 10.130.228.99 on port 1521. Limiting what tcpdump captures is the only sane way to use it. A typical connection generates lots and lots of packets. Lots.

-w /tmp/library.pcap Specifies that we want the capture to be saved as a pcap file in the /tmp directory.

Using tcpdump can be a bit intimidating. There are lots of options, and it is very powerful, with extensive filtering of the packet stream you are watching. The best way to learn it is to use it. In my case today I only captured 11 packets while trying to connect to the database. It turns out the database was no longer at that IP address, something I could have determined with

nc -v 10.130.228.99 1521

But that's a posting for another day.

⇪ Using a Yubikey for GPG and SSH

While I don't (yet) have a Yubikey, I am increasingly tempted to get one. Articles like this one add fuel to that fire.

⇪ LiquidPrompt

For years I've used a home-grown bash prompt. Recently I started having some rendering issues in tmux where my display would be off by one line. I suspect I had some double-width characters in the prompt that confused tmux, especially when it was running on a remote machine I was ssh'd into.

After hunting around I found LiquidPrompt. I've been happily using it on all my machines, macOS and Linux, for several months now.

⇪ Ten Things I Wish I'd Known About Bash

Ian Miell's article about Bash led me to his recently published book, Learn Bash the Hard Way. Both are excellent.

How to Customize Ubuntu's Message of the Day

August 31, 2017 | posted in: nerdliness

I've always liked the message of the day that Freenode displays when you connect to IRC. With both a laptop and a desktop running Ubuntu 17 these days, I wanted a custom message of the day (MOTD) that follows a pattern similar to Freenode's.

Ubuntu stores the components of the MOTD in /etc/update-motd.d.

$ cd /etc/update-motd.d
$ ls -al
total 56
drwxr-xr-x   2 root root  4096 Aug 30 22:39 .
drwxr-xr-x 140 root root 12288 Aug 30 22:28 ..
-rwxr-xr-x   1 root root  1220 Aug 30 22:32 00-header
-rwxr-xr-x   1 root root  1157 Jun 14  2016 10-help-text
-rwxr-xr-x   1 root root  4196 Feb 15  2017 50-motd-news
-rwxr-xr-x   1 root root    97 Jan 27  2016 90-updates-available
-rwxr-xr-x   1 root root   299 Apr 11 10:55 91-release-upgrade
-rwxr-xr-x   1 root root   129 Aug  5  2016 95-hwe-eol
-rwxr-xr-x   1 root root   142 Jul 12  2013 98-fsck-at-reboot
-rwxr-xr-x   1 root root   144 Jul 12  2013 98-reboot-required

Each of these parts is a small shell script, and they are executed in numerical order. You can see the current MOTD by running

$ run-parts /etc/update-motd.d/

In order to customize my message of the day I added a new part, 05-fermata. The 05 puts it just after the initial header, and fermata happens to be the name of the machine. The file itself contains this:

#!/bin/sh
printf "\n "
printf "\n Welcome to Fermata"
printf "\n "
printf "\n "
printf "\n A fermata is a symbol of musical notation indicating that the note should be "
printf "\n prolonged beyond the normal duration its note value would indicate. Exactly how"
printf "\n much longer it is held is up to the discretion of the performer or conductor,"
printf "\n but twice as long is common. It is usually printed above but can be occasionally"
printf "\n below the note to be extended."
printf "\n "

Now when I run run-parts /etc/update-motd.d/ or when I ssh into the machine or create a new terminal session, the MOTD includes the name of the machine and a brief description of that name.
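
One gotcha worth mentioning: run-parts only executes parts that have the executable bit set (and whose names contain no dots), so the new file needs to be made executable:

sudo chmod +x /etc/update-motd.d/05-fermata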

How to Use Let's Encrypt with WebFaction

August 13, 2017 | posted in: nerdliness

HTTPS, also known as HTTP over Transport Layer Security (TLS), HTTP over SSL, or HTTP Secure, encrypts the traffic between the client browser and the server hosting the website. This encryption provides data integrity and privacy. Browser makers, such as Google, are increasingly warning users when the site they are visiting is not secure. In the near future, any input mechanism on a web page, be it a comment form, search box, or credit card entry form, will be marked as insecure if the site isn't using HTTPS.

Until the advent of Let's Encrypt, obtaining certificates for a web site could be costly and time consuming. Let's Encrypt is a free, automated, and open Certificate Authority.

I have wanted to switch my web sites, and those I help to support, to HTTPS for some time, and two weekends ago I took the plunge and updated Zanshin.net. I describe how I did that below.

This process, while relatively straightforward, does require comfort with the Linux command line, ready access to an SFTP client, and a WebFaction-hosted web site. Each hosting environment has its own quirks; please consult your host's documentation regarding HTTPS. As always, back up your site(s) before making significant changes. I managed to cause several hours of downtime on my site; your mileage may vary.

Resources

I made use of the following resources:

Getting Set Up

I started by making a list of all my WebFaction websites. In the case of this site, there are a total of 8 subdomains. Let's Encrypt does let you create SAN certificates, which would in theory allow me to have one certificate for zanshin.net and all its subdomains. The documentation says, however, that they all have to share the same web root folder, and in my case each subdomain other than www lives in its own folder under ~/webapps. Therefore I opted to create separate certificates for each subdomain. In some cases, as with my Cello site, the subdomain is a wholly separate site from the parent, not a functional subdomain like www or mail.

Keeping Score

I used a Google Spreadsheet to list all the websites, with columns to keep track of the steps I would need to take for each. The steps as I have them are:

  • Make a new website container using an _ssl suffix for each website
  • Run the appropriate acme.sh command to create the certificates
  • Copy the certificates to my local computer and then import them via WebFaction's SSL Certificates dialog
  • Test the results in Google Chrome using the Console found under Developer Tools to debug any issues
  • Use Qualys SSL Server Test to vet the site

ACME Command Line Tool

I used the acme.sh command line tool to create my certificates. Here are the steps I used to install it in the root of my account at WebFaction:

mkdir -p $HOME/src
cd $HOME/src
git clone 'https://github.com/Neilpang/acme.sh.git'
cd ./acme.sh
./acme.sh --install

With my checklist in hand, and the acme.sh script installed, I was ready to begin.

Step One

Using the websites page on my.webfaction.com I created a second entry for zanshin.net and each of its subdomains. For example, my cello site has an entry in the websites list of cello. I created a new website called cello_ssl that points to the same domain as cello (cello.zanshin.net) and serves the same static application. The new _ssl entry uses the HTTPS protocol.

Step Two

Next I ran the acme.sh command for each domain/subdomain in my list. The command format looks like this:

acme.sh --issue -d example.com -d www.example.com -w ~/webapps/example

The -d flag specifies a domain or subdomain. The -w flag indicates the web root for the site. Running acme.sh --help reveals all the options available.
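
For example, for my cello subdomain the invocation would look something like this (the web root path is my assumption, based on the ~/webapps layout described earlier):

acme.sh --issue -d cello.zanshin.net -w ~/webapps/cello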

Once the command finishes running, the output tells you where the newly created certificates are located. By default that is a new folder under the .acme.sh directory created when you installed the tool, named for the first -d name passed to the command. There will be several files in this folder, three of which are needed for the next step. They are:

  • example.com.cer
  • example.com.key
  • ca.cer

The first is the certificate, the second is the private key, and the third is the intermediate certificate bundle.

Step Three

The WebFaction SSL Certificates upload panel doesn't provide any way to read the certificate files directly from your WebFaction account, so I used my SFTP client to copy them to my personal computer. This had the added benefit of making a second copy of the files. Once they are copied you are ready to upload them.
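
An SFTP client isn't strictly necessary; scp can make the same copies (the account name and host below are placeholders):

scp 'myaccount@webNNN.webfaction.com:~/.acme.sh/example.com/example.com.cer' .
scp 'myaccount@webNNN.webfaction.com:~/.acme.sh/example.com/example.com.key' .
scp 'myaccount@webNNN.webfaction.com:~/.acme.sh/example.com/ca.cer' .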

Step Four

On the Domains/Websites | SSL Certificates page I filled in the form, providing a unique name for the certificate (I used the same names I had used for the websites), and selecting the *.cer, *.key, and ca.cer files. Clicking the Upload button completes this process.

Next I switched to the Domains/Websites | Websites page and for the domain or subdomain in question, clicked on the Security column. On the form that expands, I selected the appropriate certificate from the drop down list.

Step Five

In order to redirect people who may have bookmarked my site, or pages on my site, to the HTTPS version, I added these lines to my site's .htaccess file:

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

With that in place it was time to test the site.
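
As a first sanity check, curl will confirm that the HTTP version of the site answers with a 301 and an HTTPS Location header:

curl -sI http://zanshin.net/ | head -n 5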

Step Six

Testing proved to be the most time-consuming part of this process. Zanshin.net has been my domain since 1996, and I've been using it as a weblog since 1999. There are over 2100 postings, several hundred images, and more links than I care to count. My site has been hosted as a Blogger site, a Movable Type site, using WordPress, using Octopress, and now using Jekyll. Link formats have undergone a couple of major revisions, resulting in a massive .htaccess file with some 1300 redirect rules.

Using Google Chrome, and specifically the Console found under Developer Tools, proved to be invaluable. I never would have discovered the final insecure links otherwise.

Using find and sed I was able to update all my image links to be secure. Various incarnations of this command:

find _posts/ -type f -print0 | xargs -0 sed -i '' -e 's#http://zanshin.net/images#https://zanshin.net/images#g'

allowed me to update most of the links. I did have to make some changes by hand, as some of the insecure links were one-offs that weren't easily matched by the sed patterns.
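
A plain grep over the posts was a quick way to spot any remaining insecure links:

grep -rn 'http://zanshin.net' _posts/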

The hardest problem to solve, and the one that took the longest, was finding and updating a small handful of image links hosted by Amazon. Once those were corrected, my site finally showed the green Secure label in Chrome and a padlock in Safari.

Summary

My site has been running under HTTPS for over a week now and everything appears to be working fine. I still need to tackle a couple of WordPress sites that are hosted on WebFaction and, depending on how different that process is, I may update this posting with more details. For now, I am very happy with my secure site.

ASUS Q325UA Review

July 29, 2017 | posted in: nerdliness

I haven't purchased a non-Apple computer in over a decade. Earlier this week that changed when I bought an ASUS Q325UA. A colleague of mine had gotten an open-box return on one at the local Best Buy and I was very smitten by it. I've been wanting a portable Linux computer for some time, and this 2-in-1 laptop fit the bill perfectly. I was also able to get an open-box return. My computer appears to be brand new in every respect, although it didn't come with the ASUS-provided USB-C dongle that adds a USB A-type port, HDMI, and another USB-C port. Still, since it was over $300 off the regular price I'm not complaining.

Specifications

The Q325 has:

  • a dual-core i7-7500U CPU @ 2.7 GHz (which, with hyper-threading, acts like 4 cores)
  • 16 GB of RAM
  • Intel HD graphics
  • 512 GB SSD
  • a 13" (16:9) Touch LED screen
  • 2 USB-C ports (one for power or data, the other just data)
  • Headphone jack
  • Volume rocker switch
  • On/Off button
  • 1 Megapixel Webcam
  • Windows 10 Home Edition
  • and weighs just 2.4 pounds

Appearance and Feel

The matte-finish slate gray color is pleasantly unobtrusive. It does show fingerprints slightly, but not so much as to be distracting. The whole thing is barely a quarter inch thick closed, and isn't much bigger than a piece of paper (12.24 x 8.31 inches). It feels solidly built, with no flex or bending as you carry it, pick it up, or use it. The aluminum chassis is nicely executed.

The chiclet-style keyboard has variable backlighting. The key travel is minimal, and the sound is rather muted as the keys bottom out. Typing on it, I find I make a few mistakes, mostly I suspect from the shorter key travel than I am used to on my MacBook Pro. The function key assortment includes sleep mode and airplane mode, keyboard backlighting controls, display on/off (for projector hookup), a switch to turn the touchpad off, volume controls, and the Windows Pause/Break, Print Screen, Insert, and Delete keys. The Page Up/Down and Home/End keys require Function plus the appropriate arrow key. The arrow keys are arranged in the inverted "T" common to small keyboards, and all four are the same size.

The touchpad is large, about 4 1/2 by 3 inches. Under Windows it has two distinct sides, one for left click and one for right click. It does support Windows 10 gestures, and responds nicely to tap-to-click touches. The click itself is muted but still nicely tactile.

The edges of the keyboard, and the relief around the touchpad, are all chamfered, and the silver color of the chamfer makes a nice highlight against the slate gray of the machine. The two hinges are solid, and work smoothly with a pleasant amount of resistance. One interesting feature is that the bottom edge of the lid, when open, props up the back of the keyboard slightly. Between the hinges, that edge has a slight bit of rubber padding that keeps the unit from sliding on a smooth desktop surface and also protects the edge of the lid.

I have a couple of minor nits to pick. The bottom bezel on the screen is quite large compared to the other three sides. I know this is a result of the physical size of the computer and the aspect ratio of the screen, but it still seems like an overly large chin. The gold ASUS logo there does fade into the background after a day or two of use.

The power switch is on the right side of the computer, next to the volume rocker and the non-powered USB-C port. It is exactly where my hand goes when I carry the laptop while it is open. Moving from my desk to the couch, I tend to carry laptops in my left arm, held between the crook of my elbow and my left hand. Several times now I have inadvertently turned the ASUS off when one of my fingers came to rest on the power switch. And unlike macOS, neither Windows 10 nor Ubuntu 17.04 has a restore-your-session feature after being abruptly powered off.

I haven't drained the battery yet. I did leave it unplugged for most of an 8-hour day and it was still at 30% charge, though it saw very minimal use in that time. Under high load a small CPU fan kicks on, but it isn't very loud. The vent holes are on the left-hand side of the chassis.

I also haven't really tested the speakers, of which there are four. All the speaker grills are on the underside of the keyboard. The bottom of the computer has four slightly protruding feet made of a dense rubber. In tablet mode these support the screen; in laptop mode they keep the computer solidly on the desk. In either mode the feet provide enough air space for the speakers to be heard.

Summary

I've only had this computer for four days now, but I am impressed. It appears to be well manufactured, with tight tolerances, good fit and finish, and attention to detail. It is shockingly light and portable, and appears to have good battery life. The screen is bright and readable, and has a nicely wide field of view. I am a die-hard Apple fan, and have no intention of leaving the fold. However, as a secondary computer, this is a beautiful little machine.

A Script to Install My Dotfiles

May 31, 2017 | posted in: nerdliness

A year or so ago I created a script that let me quickly install my dotfiles on a new computer. The script, called make.sh, wasn't terribly sophisticated, but it got the job done; it takes an all-or-nothing approach to setting up my configurations.

Recently I ran across a Bash script that serves as a general-purpose Yes/No prompt function. Using that script as a jumping-off point I was able to create a more sophisticated install.sh script that takes a more granular approach to setting up my dotfiles.

Ask

The ask function lets you create Yes/No prompts quickly and easily. Follow the link above for more details. I was able to default some of my configurations to Yes (install) and others to No (don't install).
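
The linked post has the full implementation; a minimal sketch of the idea looks something like this (the original is more robust):

ask() {
    # $1 is the prompt text; $2 ("Y" or "N") picks the answer chosen on Enter.
    local prompt default reply
    case "$2" in
        Y) prompt="Y/n"; default="Y" ;;
        N) prompt="y/N"; default="N" ;;
        *) prompt="y/n"; default=""  ;;
    esac
    while true; do
        read -r -p "$1 [$prompt] " reply
        [ -z "$reply" ] && reply="$default"
        case "$reply" in
            [Yy]*) return 0 ;;
            [Nn]*) return 1 ;;
        esac
    done
}

Called as ask "Setup bash" Y, it returns success (0) for yes and failure (1) for no, which is exactly what the if in the install loop below tests.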

Key/Value Pairs

In order to keep my script DRY I needed a list of the configuration files paired with their default install/don't-install setting. It turns out you can do key/value pairs in Bash. It works like this:

for i in a,b c_s,d ; do 
  KEY=${i%,*};
  VAL=${i#*,};
  echo $KEY" XX "$VAL;
done

The key/value pairs are comma separated and space delimited, e.g., key,value key,value key,value. Bash parameter substitution separates the two halves of each pair: ${i%,*} strips the comma and everything after it, leaving the key, while ${i#*,} strips everything up to and including the comma, leaving the value.

My list of pairs looks like this:

tuples="bash,Y gem,Y git,Y openconnect,Y tmux,Y slate,Y hg,N textmate,N"

The loop that processes these pairs looks like this:

for pair in $tuples; do
  dir=${pair%,*};
  flag=${pair#*,};
  if ask "Setup $dir" $flag; then
    echo "Linking $dir files"
    cd $dotfiles_dir/$dir;
    for file in *; do
      ln -sf $dotfiles_dir/$dir/$file ${HOME}/.$file
    done
  fi
  echo ""
done

Each key/value pair is a directory (dir) and an install/don't-install flag (flag). My dotfiles repository is organized into directories, one for each tool or utility. The fourth line is where the ask function comes into play: using the flag from the key/value pair, it creates a prompt defaulted to either Y/n or y/N, so that all I need to do is hit the Enter key. Within each directory there are one or more files to be symlinked; the inner loop walks through them, creating the necessary symlinks.

Linking Directories

Some of my configurations include directories, or are targeted at locations where simple symlinking won't work.

Neovim, for example, lives entirely in ~/.config/nvim. Symlinking directories can produce unexpected results: if the destination is an existing symlink to a directory, ln will create the new link inside that directory. The -n flag treats a destination that is a symlink to a directory as if it were a normal file, so if ~/.config/nvim already exists, ln -sfn ... replaces the link instead of creating ~/.config/nvim/nvim.
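
A sketch of that exception (the paths assume my Neovim layout, and $dotfiles_dir is the same variable used in the loop above):

mkdir -p "$HOME/.config"
# Without -n, if ~/.config/nvim were already a symlink to a directory, the
# new link would be created inside it as ~/.config/nvim/nvim.
ln -sfn "$dotfiles_dir/nvim" "$HOME/.config/nvim"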

My Vim setup contains both directories and individual files.

My ssh config needs to be linked into the ~/.ssh directory.

The linking for each of these three exceptions is handled outside the main loop in the script.

The install.sh script

Here's the entire install.sh script.

Vim Macros Rock

February 11, 2016 | posted in: snippet

Today I had to take a two-column list of fully qualified domain names and their associated IP addresses and reverse the order of the columns. Using Vim macros I was able to capture all the steps on the first line and then repeat them for the other 80-odd lines of the file.

Here's a sanitized sample of the data:

as1.example.com , 10.600.40.31 ,
as2.example.com , 10.600.40.32 ,
db1.example.com , 10.600.40.75 ,
db2.example.com , 10.600.40.76 ,
db3.example.com , 10.600.40.79 ,
db4.example.com , 10.600.40.80 ,
db5.example.com , 10.600.40.81 ,
dr-as1.example.com , 10.600.40.43 ,
dr-fmw1.example.com , 10.600.40.44 ,
dr-oid1.example.com , 10.600.40.39 ,
dr-web1.example.com , 10.600.40.45 ,
fmw1.example.com , 10.600.40.33 ,
fmw2.example.com , 10.600.40.34 ,
oid1.example.com , 10.600.40.29 ,
oid2.example.com , 10.600.40.30 ,
web1.example.com , 10.600.40.35 ,
web2.example.com , 10.600.40.36 ,

What I wanted was the IP address first, surrounded by single quotes, followed by a comma, and then an in-line comment containing the FQDN. This cryptic string of Vim commands does that:

vWWdf1i'<esc>f i', #<esc>pd$

Let's break that down.

v - Enter Visual mode
W - Select a Word, in this case the leading spaces before the FQDN
W - Select a Word, in this case the FQDN, including the trailing comma
d - Put the selection in the cut buffer
f1 - Find the start of the IP address, they all start with 1 in my data set
i'<esc> - Insert a single quote and escape from insert mode
f  - Find the next blank, or the end of the IP address
i', #<esc> - Insert the closing single quote, a space, a comma, and the in-line comment character, escape insert mode
p - Paste the contents of the cut buffer, the FQDN
d$ - Delete to the end of the line to clean up the errant commas from the cut/paste 

To capture this command string in a macro you need to record it. Macros and You is a really nice introduction to Vim macros. To start recording a macro you press the q key; the next key determines the register, or name, for the macro. Then you enter the command string, and press q again to stop recording. For simplicity's sake I tend to name my macros q, so to start recording I press qq, enter the steps outlined above, and press q to stop.

Playing back the macro is done with the @ command, followed by the letter or number naming the macro: @q. (And @@ repeats the most recently played macro.)

Macros can be preceded by a count, like regular Vim commands. In my case, with 80 lines of data to mangle, I'd record the macro on line one and then run it against the remaining 79 lines with 79@q. There was one problem with my command string though: it works on a single line only. I needed to add movement commands to the end of it to position the cursor at the beginning of the next line. The updated command string is this:

vWWdf1i'<esc>f i', #<esc>pd$j0

The j0 jumps down a line and goes to its beginning. Now when the macro is run, it marches down through the file a line at a time, transforming the data. Here's the result:

'10.600.40.31', #   as1.example.com
'10.600.40.32', #   as2.example.com
'10.600.40.75', #   db1.example.com
'10.600.40.76', #   db2.example.com
'10.600.40.79', #   db3.example.com
'10.600.40.80', #   db4.example.com
'10.600.40.81', #   db5.example.com
'10.600.40.43', #   dr-as1.example.com
'10.600.40.44', #   dr-fmw1.example.com
'10.600.40.39', #   dr-oid1.example.com
'10.600.40.45', #   dr-web1.example.com
'10.600.40.33', #   fmw1.example.com
'10.600.40.34', #   fmw2.example.com
'10.600.40.29', #   oid1.example.com
'10.600.40.30', #   oid2.example.com
'10.600.40.35', #   web1.example.com
'10.600.40.36', #   web2.example.com

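Putting the whole session together, starting with the cursor at the beginning of line one:

qq - Start recording into register q
vWWdf1i'<esc>f i', #<esc>pd$j0 - Transform the line and move to the next one
q - Stop recording
79@q - Replay the macro on the remaining 79 lines
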
While it may take a little trial and error to capture the right set of commands to accomplish the transforms you want, the time and effort saved on a large file is well worth it. The fact that watching your macro march through the file is fun, too, is icing on the cake.