How to Customize Ubuntu's Message of the Day

August 31, 2017 | posted in: nerdliness

I've always liked the message of the day that Freenode displays when you connect to IRC. With both a laptop and a desktop running Ubuntu 17 these days, I wanted to have a custom message of the day (MOTD) that followed a similar pattern to Freenode's.

Ubuntu stores the components of the MOTD in /etc/update-motd.d.

$ cd /etc/update-motd.d
$ ls -al
total 56
drwxr-xr-x   2 root root  4096 Aug 30 22:39 .
drwxr-xr-x 140 root root 12288 Aug 30 22:28 ..
-rwxr-xr-x   1 root root  1220 Aug 30 22:32 00-header
-rwxr-xr-x   1 root root  1157 Jun 14  2016 10-help-text
-rwxr-xr-x   1 root root  4196 Feb 15  2017 50-motd-news
-rwxr-xr-x   1 root root    97 Jan 27  2016 90-updates-available
-rwxr-xr-x   1 root root   299 Apr 11 10:55 91-release-upgrade
-rwxr-xr-x   1 root root   129 Aug  5  2016 95-hwe-eol
-rwxr-xr-x   1 root root   142 Jul 12  2013 98-fsck-at-reboot
-rwxr-xr-x   1 root root   144 Jul 12  2013 98-reboot-required

Each of these parts is a small shell script, and they are executed in numerical order. You can see the current MOTD by running

$ run-parts /etc/update-motd.d/

In order to customize my message of the day I added a new part, 05-fermata. The 05 puts it just after the initial header, and fermata happens to be the name of the machine. The file itself contains this:

#!/bin/sh
printf "\n "
printf "\n Welcome to Fermata"
printf "\n "
printf "\n "
printf "\n A fermata is a symbol of musical notation indicating that the note should be "
printf "\n prolonged beyond the normal duration its note value would indicate. Exactly how"
printf "\n much longer it is held is up to the discretion of the performer or conductor,"
printf "\n but twice as long is common. It is usually printed above but can be occasionally"
printf "\n below the note to be extended."
printf "\n "

Now when I run run-parts /etc/update-motd.d/ or when I ssh into the machine or create a new terminal session, the MOTD includes the name of the machine and a brief description of that name.
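One step worth calling out: run-parts only executes parts that have the execute bit set, so the new file needs to be made executable before it will appear (a quick sketch; the 05-fermata name matches my file above):

```shell
# Make the new part executable so run-parts will pick it up
sudo chmod +x /etc/update-motd.d/05-fermata

# Preview the assembled MOTD
run-parts /etc/update-motd.d/
```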

How to Use Let's Encrypt with WebFaction

August 13, 2017 | posted in: nerdliness

HTTPS, also known as HTTP over Transport Layer Security (TLS), HTTP over SSL, or HTTP Secure, encrypts the traffic between the client browser and the server hosting the website. This encryption provides data integrity and privacy. Browser manufacturers, such as Google, are increasingly warning computer users that the site they are visiting is not secure. Any input mechanism on a web page, be it a comment form, search box, or credit card entry form, will be marked as insecure in the near future if the site isn't using HTTPS.

Until the advent of Let's Encrypt, creating certificates for a web site could be costly and time consuming. Let's Encrypt is a free, automated, and open Certificate Authority.

I have wanted to switch my web sites, and those I help to support, to HTTPS for some time, and two weekends ago I took the plunge and updated Zanshin.net. I describe how I did that below.

This process, while relatively straightforward, does require comfort with the Linux command line, ready access to an SFTP client, and a WebFaction-hosted web site. Each hosting environment has its own quirks; please consult your host's documentation regarding HTTPS. As always, back up your site(s) before making significant changes. I managed to cause several hours of downtime to my site; your mileage may vary.

Resources

I made use of the following resources:

Getting Set Up

I started by making a list of all my WebFaction websites. In the case of this site, there are a total of 8 subdomains. Let's Encrypt does let you create SAN certificates, which would in theory allow me to have one certificate for zanshin.net and all its subdomains. The documentation says they all have to share the same web root folder, however, and in my case each subdomain other than www lives in its own folder under ~/webapps. Therefore I opted to create separate certificates for each subdomain. In some cases, as with my Cello site, the subdomain is a wholly separate site from the parent, not a functional subdomain like www. or mail..

Keeping Score

I used a Google Spreadsheet to list all the websites, with columns to keep track of the steps I would need to take for each. The steps as I have them are:

  • Make a new website container using an _ssl suffix for each website
  • Run the appropriate acme.sh command to create the certificates
  • Copy the certificates to my local computer and then import them via WebFaction's SSL Certificates dialog
  • Test the results in Google Chrome using the Console found under Developer Tools to debug any issues
  • Use Qualys SSL Server Test to vet the site

ACME Command Line Tool

I used the acme.sh command line tool to create my certificates. Here are the steps I used to install it in the root of my account at WebFaction:

mkdir -p $HOME/src
cd $HOME/src
git clone 'https://github.com/Neilpang/acme.sh.git'
cd ./acme.sh
./acme.sh --install

With my checklist in hand, and the acme.sh script installed, I was ready to begin.

Step One

Using the websites page on my.webfaction.com I created a second entry for zanshin.net and each of its subdomains. For example, my cello site has an entry in the websites list of cello. I created a new website called cello_ssl that points to the same domain as cello (cello.zanshin.net) and serves the same static application. The new _ssl entry uses the HTTPS protocol.

Step Two

Next I ran the acme.sh command for each domain/subdomain in my list. The command format looks like this:

acme.sh --issue -d example.com -d www.example.com -w ~/webapps/example

The -d flag specifies a domain or subdomain. The -w flag indicates the web root for the site. Running acme.sh --help reveals all the options available.

Once the command finishes running, the output tells you where the newly created certificates are located. By default that is a new folder under the .acme.sh directory that was created when you installed the tool. The folder is named for the first -d name passed to the command. There will be several files in this folder, three of which are needed for the next step. They are:

  • example.com.cer
  • example.com.key
  • ca.cer

The first is the certificate, the second the private key, and the third is the intermediates bundle.
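Before uploading, it's worth sanity-checking the files with openssl. This sketch assumes the default output folder and uses example.com as a stand-in for your own domain:

```shell
cd ~/.acme.sh/example.com

# Show who the certificate was issued to and when it expires
openssl x509 -in example.com.cer -noout -subject -dates

# Confirm the private key matches the certificate:
# the two modulus digests printed below should be identical
openssl x509 -in example.com.cer -noout -modulus | openssl md5
openssl rsa  -in example.com.key -noout -modulus | openssl md5
```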

Step Three

The WebFaction SSL Certificates Upload certificate panel doesn't provide any way to copy the certificates from your WebFaction account. So I used my SFTP client to copy them to my personal computer. This had the added benefit of making a second copy of the files. Once they are copied you are ready to upload them.

Step Four

On the Domains/Websites | SSL Certificates page I filled in the form, providing a unique name for the certificate (I used the same names I had used for the website), and selecting the *.cer, *.key, and ca.cer files. Clicking the Upload button completes this process.

Next I switched to the Domains/Websites | Websites page and for the domain or subdomain in question, clicked on the Security column. On the form that expands, I selected the appropriate certificate from the drop down list.

Step Five

In order to redirect people who may have bookmarked my site, or pages on my site, to the HTTPS version, I added these lines to my site's .htaccess file:

RewriteEngine On
RewriteCond %{HTTP:X-Forwarded-SSL} !on
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

With that in place it was time to test the site.

Step Six

Testing proved to be the most time consuming part of this process. Zanshin.net has been my domain since 1996, and I've been using it as a weblog since 1999. There are over 2100 postings, several hundred images, and more links than I care to count. My site has been hosted as a Blogger site, a Movable Type site, using WordPress, using Octopress, and now using Jekyll. Link formats have undergone a couple of major revisions, resulting in a massive .htaccess file with some 1300 redirect rules.

Using Google Chrome, and specifically the Console found under Developer Tools, proved to be invaluable. I never would have discovered the final insecure links otherwise.

Using find and sed I was able to update all my image links to be secure. Various incarnations of this command:

find _posts/ -type f -print0 | xargs -0 sed -i '' -e 's#http://zanshin.net/images#https://zanshin.net/images#g'

allowed me to update most of the links. I did have to make some changes by hand, as some of the insecure links were individual and not easily found by regex patterns in sed.
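For hunting down those stragglers, a recursive grep with line numbers narrows the search considerably (a sketch; _posts/ matches my Jekyll layout):

```shell
# List every remaining hard-coded insecure link, with file and line number
grep -rn 'src="http://' _posts/
grep -rn 'href="http://' _posts/
```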

The hardest problem to solve, and one that took the longest time, was finding and updating a small handful of image links hosted by Amazon. Once those were corrected, my site finally showed the green Secure in Chrome and a padlock in Safari.

Summary

My site has been running under HTTPS for over a week now and everything appears to be working fine. I still need to tackle a couple of WordPress sites that are hosted on WebFaction and, depending on how different that process is, I may update this posting with more details. For now, I am very happy with my secure site.

ASUS Q325UA Review

July 29, 2017 | posted in: nerdliness

I haven't purchased a non-Apple computer in over a decade. Earlier this week that changed when I bought an ASUS Q325UA. A colleague of mine had gotten an open-box return on one at the local Best Buy and I was very smitten by it. I've been wanting a portable Linux computer for some time, and this 2-in-1 laptop fit the bill perfectly. I was also able to get an open-box return; my computer appears to be brand new in every respect. It didn't come with the ASUS-provided USB-C dongle that adds a USB A-type port, HDMI, and another USB-C port. Still, since it was over $300 off the regular price I'm not complaining.

Specifications

The Q325 has:

  • a dual-core i7-7500U CPU @ 2.7 GHz (which, with hyper-threading, acts like 4 cores)
  • 16 GB of RAM
  • Intel HD graphics
  • 512 GB SSD
  • a 13" (16:9) Touch LED screen
  • 2 USB-C ports (one for power or data, the other just data)
  • Headphone jack
  • Volume rocker switch
  • On/Off button
  • 1 Megapixel Webcam
  • Windows 10 Home Edition
  • and weighs just 2.4 pounds

Appearance and Feel

The matte finish slate gray color is pleasantly unobtrusive. It does slightly show fingerprints, but not so much as to be distracting. The whole thing is barely a quarter inch thick closed, and isn't much bigger than a piece of paper (12.24 x 8.31 inches). It feels solidly built, with no flex or bending as you carry it, pick it up, or use it. The aluminum chassis is nicely executed.

The chiclet-style keyboard has variable backlighting. The key travel is minimal, and the sound is rather muted as the keys bottom out. Typing on it, I find I make a few mistakes, mostly I suspect from the shorter key travel than I am used to on my MacBook Pro. The function key assortment includes sleep and airplane modes, keyboard backlighting controls, display on/off (for projector hookup), a switch to turn the touchpad off, volume controls, and the Windows Pause/Break, Print Screen, Insert, and Delete keys. The Page Up/Down and Home/End keys require Function plus the appropriate arrow key. The arrow keys are arranged in the inverted "T" common to small keyboards. All 4 arrow keys are the same size.

The touchpad is large, about 4 1/2 by 3 inches. Under Windows it has two distinct sides, one for left click and one for right click. It does support Windows 10 gestures, and responds nicely to tap-to-click touches. The click itself is muted but still nicely tactile.

The edges of the keyboard, and the relief around the touchpad, are all chamfered, and the silver color of the chamfer makes a nice highlight to the slate gray color of the machine. The two hinges are nicely solid, and work smoothly and with a nice amount of resistance on my computer. One interesting feature is that the bottom edge of the lid, when open, props up the back of the keyboard slightly. Between the hinges, that edge has a slight bit of rubber padding that helps to keep the unit from sliding on a smooth desktop surface. This rubber also protects the edge of the lid.

I have a couple of minor nits to pick. The bottom bezel on the screen is quite large compared to the other three sides. I know this is a result of the physical size of the computer and the aspect ratio of the screen, but it still seems like an overly large chin. The gold ASUS logo there does fade into the background after a day or two of use.

The power switch is on the right side of the computer, next to the volume rocker and the non-powered USB-C port. It is exactly where my hand goes when I carry the laptop while it is open. Moving from my desk to the couch, I tend to carry laptops in my left arm, held between the crook of my elbow and my left hand. Several times now I have inadvertently turned the ASUS off when one of my fingers came to rest on the power switch. And unlike macOS, neither Windows 10 nor Ubuntu 17.04 has a restore-your-session feature after having been abruptly turned off.

I haven't drained the battery yet. I did leave it unplugged for most of an 8 hour day and it was still at 30% charge. However, it saw very minimal use in that time. Under high load there is a small CPU fan that kicks on, but it isn't very loud. The vent holes are on the left-hand side of the chassis.

I also haven't really tested the speakers, of which there are four. All the speaker grills are on the underside of the chassis. The bottom of the computer has 4 slightly protruding feet, made of a dense rubber. In tablet mode these support the screen; in laptop mode they keep the computer solidly on the desk. In either mode the feet provide enough air space for the speakers to be heard.

Summary

I've only had this computer for 4 days now, but I am impressed. It appears to be well manufactured, with nice tolerances, good fit and finish, and attention to detail. It is shockingly light and portable, and appears to have good battery life. The screen is bright and readable, and has a nicely wide field of view. I am a die hard Apple computer fan, and have no intention of leaving the fold. However, as a secondary computer, this is a beautiful little machine.

A Script to Install My Dotfiles

May 31, 2017 | posted in: nerdliness

A year or so ago I created a script that allowed me to quickly install my dotfiles on a new computer. The script wasn't terribly sophisticated, but it got the job done. Called make.sh, it takes an all-or-nothing approach to setting up my configurations.

Recently I ran across a Bash script that serves as a general purpose Yes/No prompt function. Using this script as a jumping off point I was able to create a more sophisticated install.sh script that allows me a more granular approach to setting up my dotfiles.

Ask

The ask function lets you create Yes/No prompts quickly and easily. Follow the link above for more details. I was able to default some of my configurations to Yes (install) and others to No (don't install).
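For reference, here is a minimal sketch of such a function (hypothetical; the linked original differs in its details). The second argument picks the default, which is used when you just hit enter:

```shell
ask() {
  # ask "Question?" [Y|N] -- returns 0 for yes, 1 for no
  local prompt default reply
  case "$2" in
    Y) prompt="Y/n"; default=Y ;;
    N) prompt="y/N"; default=N ;;
    *) prompt="y/n"; default=  ;;
  esac
  read -r -p "$1 [$prompt] " reply
  [ -z "$reply" ] && reply=$default
  case "$reply" in
    [Yy]*) return 0 ;;
    *)     return 1 ;;
  esac
}

# Usage: pressing enter accepts the default
if ask "Setup bash" Y; then echo "Linking bash files"; fi
```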

Key/Value Pairs

In order to keep my script DRY I needed to have a list of the configuration files paired with their default install/don't install setting. Turns out you can do key/value pairs in Bash. It works like this:

for i in a,b c_s,d ; do 
  KEY=${i%,*};
  VAL=${i#*,};
  echo $KEY" XX "$VAL;
done

The key/value pairs are comma separated and space delimited, e.g., key,value key,value key,value. By using Bash parameter substitution it's possible to separate the key and value portions of each pair.

My list of pairs looks like this:

tuples="bash,Y gem,Y git,Y openconnect,Y tmux,Y slate,Y hg,N textmate,N"

The loop that processes these pairs looks like this:

for pair in $tuples; do
  dir=${pair%,*};
  flag=${pair#*,};
  if ask "Setup $dir" $flag; then
    echo "Linking $dir files"
    cd $dotfiles_dir/$dir;
    for file in *; do
      ln -sf $dotfiles_dir/$dir/$file ${HOME}/.$file
    done
  fi
  echo ""
done

Each key/value pair is a directory (dir) and an install/don't-install flag (flag). My dotfiles repository is organized into directories, one for each tool or utility. The fourth line is where the ask function comes into play. Using the flag from the key/value pair it creates a prompt that defaults to either Y/n or y/N, so that all I need to do is hit the enter key. Within each directory there are one or more files needing to be symlinked. The inner loop walks through the list of files, creating the necessary symlinks.

Linking Directories

Some of my configurations have directories or are targeted at locations where simple symlinking won't work.

Neovim, for example, lives entirely in ~/.config/nvim. Symlinking directories can produce unexpected results: if the destination is already a symlink to a directory, ln -sf dereferences it and creates the new link inside that directory. The -n flag treats a destination that is a symlink to a directory as if it were a normal file, so if ~/.config/nvim already exists as a symlink, ln -sfn ... replaces the link itself and prevents you from ending up with ~/.config/nvim/nvim.
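A quick demonstration of the difference, in a scratch directory (the dotfiles/nvim names are just for illustration):

```shell
cd "$(mktemp -d)"
mkdir -p dotfiles/nvim

ln -s  "$PWD/dotfiles/nvim" nvim   # nvim -> dotfiles/nvim
ln -sf "$PWD/dotfiles/nvim" nvim   # dereferences: creates dotfiles/nvim/nvim
ls dotfiles/nvim                   # the stray inner link is here

ln -sfn "$PWD/dotfiles/nvim" nvim  # -n: replaces the symlink itself
readlink nvim                      # points at dotfiles/nvim again
```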

My Vim setup contains both directories and individual files.

My ssh config needs to be linked into the ~/.ssh directory.

The linking for each of these three exceptions is handled outside the main loop in the script.

The install.sh script

Here's the entire install.sh script.

Vim Macros Rock

February 11, 2016 | posted in: snippet

Today I had to take a two column list of fully qualified domain names and their associated IP addresses and reverse the order of the columns. Using Vim macros I was able to capture all the steps on the first line and then repeat it for the other 80-odd lines of the file.

Here's a sanitized sample of the data:

as1.example.com , 10.600.40.31 ,
as2.example.com , 10.600.40.32 ,
db1.example.com , 10.600.40.75 ,
db2.example.com , 10.600.40.76 ,
db3.example.com , 10.600.40.79 ,
db4.example.com , 10.600.40.80 ,
db5.example.com , 10.600.40.81 ,
dr-as1.example.com , 10.600.40.43 ,
dr-fmw1.example.com , 10.600.40.44 ,
dr-oid1.example.com , 10.600.40.39 ,
dr-web1.example.com , 10.600.40.45 ,
fmw1.example.com , 10.600.40.33 ,
fmw2.example.com , 10.600.40.34 ,
oid1.example.com , 10.600.40.29 ,
oid2.example.com , 10.600.40.30 ,
web1.example.com , 10.600.40.35 ,
web2.example.com , 10.600.40.36 ,

What I wanted was the IP address first, surrounded in single quotes, followed by a comma, then followed by an in-line comment containing the FQDN. This cryptic string of Vim commands does that:

vWWdf1i'<esc>f i', #<esc>pd$

Let's break that down.

v - Enter Visual mode
W - Select a Word, in this case the leading spaces before the FQDN
W - Select a Word, in this case the FQDN, including the trailing comma
d - Put the selection in the cut buffer
f1 - Find the start of the IP address, they all start with 1 in my data set
i'<esc> - Insert a single quote and escape from insert mode
f  - Find the next blank, or the end of the IP address
i', #<esc> - Insert the closing single quote, a space, a comma, and the in-line comment character, escape insert mode
p - Paste the contents of the cut buffer, the FQDN
d$ - Delete to the end of the line to clean up the errant commas from the cut/paste 

To capture this command string in a macro you need to record it. Macros and You is a really nice introduction to Vim macros. To start recording a macro you press the q key. The next key determines the register, or name, for the macro. Then you enter the command string. To stop recording press the q key again. For simplicity's sake I tend to name my macros q, so to start recording I press qq, enter the steps outlined above, and press q to stop recording.

Playing back the macro is done with the @ command, followed by the letter or number naming the macro. So, @q.

Macros can be preceded by a number, like regular Vim commands. In my case, with 80 lines of data to mangle, I'd record the macro on line one and then run it against the remaining 79 lines with 79@q. There is a problem with my command string though: it works on one line only. I need to add movement commands to the end of it to position the insertion point at the beginning of the next line. The updated command string would be this:

vWWdf1i'<esc>f i', #<esc>pd$j0

The j0 jumps down a line and goes to the beginning. Now when the macro is run, it will march down through the file a line at a time, transforming the data. Here's the result.

'10.600.40.31', #   as1.example.com
'10.600.40.32', #   as2.example.com
'10.600.40.75', #   db1.example.com
'10.600.40.76', #   db2.example.com
'10.600.40.79', #   db3.example.com
'10.600.40.80', #   db4.example.com
'10.600.40.81', #   db5.example.com
'10.600.40.43', #   dr-as1.example.com
'10.600.40.44', #   dr-fmw1.example.com
'10.600.40.39', #   dr-oid1.example.com
'10.600.40.45', #   dr-web1.example.com
'10.600.40.33', #   fmw1.example.com
'10.600.40.34', #   fmw2.example.com
'10.600.40.29', #   oid1.example.com
'10.600.40.30', #   oid2.example.com
'10.600.40.35', #   web1.example.com
'10.600.40.36', #   web2.example.com

While it may take a little trial and error to capture the right set of commands in the macro to accomplish the transforms you want, the time and effort saved over a large file is well worth it. That watching your macro march through your file is fun too is icing on the cake.
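As an aside, the same one-off transform can also be done non-interactively. Here's a sed sketch that produces the identical output (hosts.txt is a hypothetical input file in the two-column format above):

```shell
# Swap the columns: quote the IP, then append the FQDN as a comment
sed -E "s/^ *([^ ]+) *, *([^ ]+) *, *$/'\2', #   \1/" hosts.txt
```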

Fun With Bash Shell Parameter Expansion

February 08, 2016 | posted in: snippet

Recently I switched back to bash from zsh for my shell environment. I needed a consistent shell on my local machines as well as on remote servers. One aspect of my bash environment that wasn't working the way I wanted was displaying the current Git branch and Git status information when the current directory was Git controlled.

In my original attempt at building my prompt I combined PS1 and prompt_command. This worked on OS X machines, but not on Linux-based operating systems. After splitting the line of information displayed by the prompt_command function apart from the actual prompt (controlled by PS1), none of the PS1 substitutions were working. Here's the line before:

function prompt_command {
  export PS1="\n\u at \h in \w $(git_prompt_string)\n$ "
}

And here's the code after:

function prompt_command {
  printf "\n$(id -un) at $(hostname) in ${PWD} $(git_prompt_string)"
}

The PROMPT_COMMAND is set to the function above, and the PS1 prompt is reduced to just the $:

export PROMPT_COMMAND=prompt_command
export PS1="\n$ "

Instead of using \u for the current user, I'm using id -un; for the hostname, hostname rather than \h; and ${PWD} displays the current working directory in place of \w.

The problem with PWD is that it displays the full path, and I wanted a ~ when in my $HOME directory. Fortunately Steve Losh has already solved this puzzle in his My Extravagant Zsh Prompt posting.

Here's the solution:

${PWD/#$HOME/~}

It's deceptively simple, and took me a few minutes to understand, with the help of the Shell Parameter Expansion section of the Bash Manual.

The pattern ${parameter/pattern/string} works in the following manner.

The pattern is expanded to produce a pattern just as in filename expansion. Parameter is expanded and the longest match of pattern against its value is replaced with string. If pattern begins with ‘/’, all matches of pattern are replaced with string. Normally only the first match is replaced. If pattern begins with ‘#’, it must match at the beginning of the expanded value of parameter. If pattern begins with ‘%’, it must match at the end of the expanded value of parameter. If string is null, matches of pattern are deleted and the / following pattern may be omitted. If parameter is ‘@’ or ‘*’, the substitution operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with ‘@’ or ‘*’, the substitution operation is applied to each member of the array in turn, and the expansion is the resultant list.

What all that means is $HOME is expanded and if it matches the expanded $PWD, starting at the beginning of the string, then the matching characters are replaced with a ~. The key is the # before $HOME.
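A quick demonstration with plain variables (the paths are hypothetical):

```shell
path=/home/mark/projects/dotfiles
home=/home/mark

echo "${path/#$home/~}"      # the anchored match succeeds: ~/projects/dotfiles
echo "${path/#projects/X}"   # no match at the very start, so unchanged
```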

Here's the final printf line:

printf "\n$(id -un) at $(hostname) in ${PWD/#$HOME/~} $(git_prompt_string)"

You can see the complete .bashrc file in my dotfiles repository.

Installing My Dotfiles Via A Script

January 30, 2016 | posted in: snippet

For too long now I have been putting off creating a script to set up my collection of dotfiles on a new machine. My excuse has always been, "I don't need to set them up on a new machine that often." Still, it would be nice to run one command rather than enter multiple ln -s ~/.dotfiles/... ... commands in a row.

Here's my make.sh script:

#!/usr/bin/env bash

#
# This script creates symlinks for desired dotfiles in the user's home directory.
#

# Variables
dotfiles_dir=~/.dotfiles
dirs="bash gem git openconnect tmux"

# Update dotfiles to master branch
echo "Updating $dotfiles_dir to master"
cd $dotfiles_dir;
git pull origin master;
cd;

echo ""

function makeLinks() {
  # For each directory in dirs, make a symlink for each file found that starts
  # with a . (dot)
  for dir in $dirs; do
    echo "Linking $dir files"
    cd $dotfiles_dir/$dir;
    for file in *; do
      ln -svf $dotfiles_dir/$dir/$file ~/.$file
    done
    echo ""
  done

  # Handle odd-ball cases
  # Vim files¬
  echo "Linking vim files"
  ln -svf $dotfiles_dir/vim ~/.vim;
  ln -svf $dotfiles_dir/vim/vimrc ~/.vimrc;
  ln -svf $dotfiles_dir/vim/vimrc.bundles ~/.vimrc.bundles;

  # ssh
  echo ""
  echo "Linking ssh configuration."
  ln -svf $dotfiles_dir/ssh/config ~/.ssh/config

  echo ""
  echo "Caveats:"
  echo "Vim:  If remote server, rm .vimrc.bundles"
  echo "Bash: If local server, rm .bashrc.local"

  echo ""

  echo "Finished."
}

if [ "$1" == "--force" -o "$1" == "-f" ]; then
  makeLinks;
else
  read -p "This may overwrite existing files in your home directory. Are you sure? (y/n) " -n 1;
  echo ""
  if [[ $REPLY =~ ^[Yy]$ ]]; then
    makeLinks;
  fi;
fi;
unset makeLinks;

Some Caveats:

  • This script works for the way I have my dotfiles arranged in ~/.dotfiles. Each tool has a directory containing the file or files that make up the configuration. None of the files are preceded by a dot (.) in my repository, so the link command adds that.

  • My Vim configuration and my ssh config don't follow this pattern, so they are handled separately.

The dirs variable has a list of the configurations I want to set up using this script. All of the files in each of those directories are symlinked in turn. I'm using the -svf flags on the ln statement.

  • s for symlink, of course
  • v for verbose
  • f for force if the link already exists

To make the script a shade more friendly it offers a --force option that eliminates the "Are you sure?" prompt.

As with any script you find lying around on the Internet, read the source and understand what it's doing before unleashing its awesome powers on your computer.

Bash History Search Bind Keys

January 26, 2016 | posted in: snippet

I recently switched back to bash shell from zsh and in doing so I lost zsh's history search. From your zsh prompt if you type in part of a command and then press the up arrow, you'll be shown the previous occurrence of that command. Repeated up arrows walk you through all previous occurrences. A very handy tool, and one I grew fond of.

Here's how to have this history search in bash.

First use the read command to learn what code is transmitted by the up or down arrow key press.

$ read
^[[A  # up arrow
^[[B  # down arrow

Control-c will return you to your prompt from the read builtin command.

Parsing the up and down arrow strings reveals that they both start with an escape character ^[ and then the key value itself: [A or [B.

The readline functions that search history are history-search-backward and history-search-forward. Binding ^[[A to history-search-backward and ^[[B to history-search-forward emulates the arrow key behavior from zsh.

Here is what I have in my .bash_bindkeys file, which is sourced from my .bashrc file.

bind '"\e[A":history-search-backward'
bind '"\e[B":history-search-forward'

The \e is the escape character (^[) from the read builtin output. With these bindings in my .bashrc I can enter part of a command and search back through my history using my arrow keys.
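An alternative worth knowing about: since these are readline bindings, the same two lines can live in ~/.inputrc instead, written in readline's own syntax without the bind command, where they apply to every program that uses readline, not just bash:

```
"\e[A": history-search-backward
"\e[B": history-search-forward
```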

2015 Books

December 30, 2015 | posted in: nerdliness

I read or listened to a total of 123 books in 2015. 40 were brand new to me, the other 83 were rereads or re-listens.

The longest book was Neal Stephenson's "Seveneves: A Novel" at 880 pages.

The shortest was "The Countess of Stanlein Restored: A History of the Countess of Stanlein Ex Paganini Stradivarius Cello of 1707", a history of a Stradivarius cello at 120 pages.

In total I read 39,160 pages, or 108 pages a day average for the year.

17 of the titles on my list were audio books. The longest of these was (again) a Neal Stephenson book, "Reamde" at 38 hours and 34 minutes.

The shortest audiobook was a mere 9 hours; "The Hanged Man's Song" by John Sandford.

In total I listened to 249 hours and 41 minutes of audio books this year. Which works out to 41 minutes per day average.

Ten of the books were non-fiction, eight were science fiction, one was historical fiction, and the rest fiction. Thirty-one of the books were from our local public library, the rest I own in one format or another.

Out of all the books I read or listened to this year, Andy Weir's "The Martian" was far and away my favorite book. Not only did I read it multiple times, I listened to the audio version. And I saw it in the theater when it came out. And I bought a copy on iTunes that I've now watched twice in a row. It's easily one of the very best books I've read in a long, long time.

Solidarité

November 14, 2015 | posted in: life

solidarité