Airlines / Frequent Flyer Programs: The good, the so-so, and the ugly

An earlier blog post detailed heroic feats of airline reward redemption: leaping across multiple rewards programs, bounding over unaccompanied minor restrictions, and otherwise gaining cross-country access for peanuts!

And then everything was shattered when I had to change my tickets.  This let me see how well each rewards program fared at handling booking changes, which leads me to my list of the good, the so-so, and the ugly:

Let’s start with the ugly: Delta.  It’s kind of neat that I booked $70 flights from LAX to SLC; then, when I wanted to change the flights, Delta was willing to do so for $200 per ticket.  I’d rather pay a bit more for Southwest and get better customer service.

Now the so-so: American Airlines.  Like Delta, they didn’t really provide any way to change my existing flight.  However, they otherwise have great service and some of the best reward travel coverage available; I can always find an AA flight, even through Avios.

Which brings me to my winners, the good guys: Southwest and Avios!  I must first say Southwest is my all-around favorite: TWO free checked bags, NO fees when changing flights.  Awesome.

And Avios deserves an apology from this author.  I was able to use only 13,500 points to book a one-way flight for three of my family members from LAX to SLC.  The same booking would have taken over 30k points on something like Southwest or AA.  It turns out that if you aren’t bounding from coast to coast, Avios is a powerful tool to have in your points chest.  It’s fantastic for those short-haul trips.

There ye have it.

A Tasty Apple Pie Recipe

Having a bushel or two of apples on hand, I decided to make an apple pie the whole family could enjoy.  I borrowed two recipes.  For posterity I have included the actual recipes in addition to the links.

For the apple pie guts I used Apple Pie by Grandma Ople:

1 recipe pastry for a 9 inch double crust pie
1/2 cup unsalted butter
3 tablespoons all-purpose flour
1/4 cup water

1/2 cup white sugar
1/2 cup packed brown sugar
8 Granny Smith apples – peeled, cored and sliced

Preheat oven to 425 degrees F (220 degrees C). Melt the butter in a saucepan. Stir in flour to form a paste. Add water, white sugar and brown sugar, and bring to a boil. Reduce temperature and let simmer.
Place the bottom crust in your pan. Fill with apples, mounded slightly. Cover with a lattice work crust. Gently pour the sugar and butter liquid over the crust. Pour slowly so that it does not run off.
Bake 15 minutes in the preheated oven. Reduce the temperature to 350 degrees F (175 degrees C). Continue baking for 35 to 45 minutes, until apples are soft.

and for the crust I used French Pastry Recipe:

3 cups all-purpose flour
1 1/2 teaspoons salt
3 tablespoons white sugar
1 cup shortening

1 egg
1 teaspoon distilled white vinegar
5 tablespoons water

In a large mixing bowl, combine flour, salt, and sugar. Mix well, then cut in shortening until mixture resembles coarse meal.
In a small bowl, combine egg, vinegar, and 4 tablespoons of water. Whisk together, then add gradually to flour mixture, stirring with a fork. Mix until dough forms a ball. Add one more tablespoon of water if necessary.
Allow dough to rest in refrigerator 10 minutes before rolling out.

The results were fantabulous:


A few ways to quickly and automatically binarize an image

For my wife’s Spell To Write and Read (SWR) homeschooling we have a bunch of scanned worksheets.  A sample of the scanned image is shown below:


As you can see it’s entirely readable and fine for our purposes.  However, it is not gentle on our laser printer toner budget.  What we really want is for the background to be white and the foreground to be black – nothing in between.  This process is called binarization, and scanner software often has a feature that does it at scan time.

We didn’t use that feature (or maybe our software didn’t support it) at scan time, so we need to resort to postprocessing.  I have a Master’s in computer graphics and vision, and every time I use something I learned, the value of that degree goes up.  It therefore behooves me to use it every chance I get.

As a good computer vision student, when I think binarization my mind jumps straight to Otsu!  He came up with a great way of automatically determining a good threshold value (meaning that when we look at each pixel in the image, everything below the threshold turns black and everything else turns white).
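For the curious, Otsu’s method itself is simple enough to sketch in a few lines of NumPy.  This is my own illustration (not code from any of the tools below): it tries every possible threshold and keeps the one that maximizes the between-class variance of the resulting dark and light pixel populations.

```python
import numpy as np

def otsu_threshold(img):
    """Pick the threshold maximizing between-class variance (Otsu's method).

    img: 2-D uint8 array. Pixels at or below the returned value become
    "black", the rest become "white".
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    bins = np.arange(256)
    w0 = np.cumsum(hist)                  # pixel count at or below each threshold
    w1 = hist.sum() - w0                  # pixel count above each threshold
    sum0 = np.cumsum(hist * bins)         # intensity sum at or below each threshold
    mu0 = sum0 / np.maximum(w0, 1)        # mean of the "black" class
    mu1 = (sum0[-1] - sum0) / np.maximum(w1, 1)   # mean of the "white" class
    between = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance for each threshold
    return int(np.argmax(between))
```

Running this on a synthetic half-dark, half-light image lands the threshold between the two populations, which is exactly what we want for scanned worksheets.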

My first thought is to check for an easy button somewhere. In GIMP, for example, I found you can load the image, click “Image -> Mode -> Indexed”, then select “Use black and white (1 bit)”. Looks OK!

Now, how to automate this, given that I have 60+ images? It turns out there is a threshold option in ImageMagick. I could go through each image in the directory and manually threshold it, but I might get the threshold wrong, and I don’t really want to train my wife on picking threshold values. Plus, I know Otsu is better!

Turns out some guy named Fred has a bunch of ImageMagick scripts, including an Otsu one. I downloaded his script and ran it, yielding the following image:


Pretty nice – just black and white.  Thanks, Fred… sorry I cannot call him “Fast Freddy,” since it took around 18 seconds per image.  I know we can do better! Time to dust off those Master’s-level computer vision skills.

Here’s what I came up with using python/opencv:

import cv2
import sys

# Load as grayscale - Otsu needs a single-channel image
img = cv2.imread(sys.argv[1], cv2.IMREAD_GRAYSCALE)
ret, imgThresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
cv2.imwrite(sys.argv[2], imgThresh)

Short and sweet! And performance is way better: about 3 seconds per image.  But it looks like most of the program runtime is spent loading cv2.  Based on that assumption I decided to add a bulk processing mode:

import cv2
import sys

if len(sys.argv) == 1 or "-h" in sys.argv:
    print("Usage: %s -inplace image1 [image2 [image3 ...]]" % sys.argv[0])
    print("       %s inImage outImage" % sys.argv[0])
    sys.exit(1)

if "-inplace" == sys.argv[1]:
    inOut = [(arg, arg) for arg in sys.argv[2:]]
else:
    inOut = [(sys.argv[1], sys.argv[2])]

for inImage, outImage in inOut:
    print("Converting %s to %s" % (inImage, outImage))
    # Load as grayscale - Otsu needs a single-channel image
    img = cv2.imread(inImage, cv2.IMREAD_GRAYSCALE)
    ret, imgThresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    cv2.imwrite(outImage, imgThresh)

When I run this script on the whole directory it takes an average of 2 seconds per image. Better, but longer than needed. What gives? It turns out I have all my data on a QNAP, and opening, reading, and writing lots of files is not its forte. When I copy the data to the local SSD on my Mac, the cost per image drops to 140ms. Much better.

Since, as so often happens, my assumptions proved totally flawed, can I vindicate Freddy? After rerunning the test, he is still a “steady Freddy” at about 2.7 seconds per image when running straight off the hard drive. Sorry, Fred; OpenCV just beat the pants off you.

Using raspberry pi for two-way video/audio streaming

I am currently writing custom software to create a variety of distributed media solutions.  One of these involves a Raspberry Pi security camera.  Right now my software isn’t finished, but I wanted to prove out the concept using existing software.  System requirements:

Raspberry Pi:

  • Broadcast audio (over the LAN) to a server
  • Broadcast video (over the LAN) to a server
  • Receive (and play over the speaker) audio from the laptop

Laptop:

  • Receive audio/video from the Raspberry Pi (and display it)
  • Send audio (from the laptop’s own microphone) to the pi.

I bought a Raspberry Pi B+, a cheapo USB microphone, an IR camera (with built-in LEDs), and some speakers.  The whole package was around $100, which is just plain awesome.  I’m sure my grandkids will marvel, 40 years down the road, that for the cost of an Obama-approved school lunch, I could buy so much.

Here’s a schematic of the setup:


This can be accomplished with a few scripts, which I based heavily on those found here:

Run the following on the pi:

# Assumes $localIp, $broadcastIp, $piAudioPort, and $piVideoPort are set for your network
function on_signal (){
    kill -9 $piVideoPid $piAudioPid $osxAudioPid
}
trap 'on_signal' EXIT

# Receive os x audio
gst-launch-1.0 -v udpsrc port=9102 caps="application/x-rtp" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! alsasink device=plughw:ALSA sync=false &
osxAudioPid=$!

# Send pi audio
gst-launch-1.0 -v alsasrc device=plughw:Device ! mulawenc ! rtppcmupay ! udpsink host=$broadcastIp port=$piAudioPort &
piAudioPid=$!

# Send pi video
raspivid -t 999999 -w 1080 -h 720 -fps 25 -hf -b 2000000 -o - |  gst-launch-1.0 -v fdsrc ! h264parse ! rtph264pay config-interval=1 pt=96  ! gdppay ! tcpserversink host=$localIp port=$piVideoPort

Run the following on the laptop (a MacBook in my case).  Note that if this were a Linux box you would probably use “autoaudiosrc”.

# launch is a small helper from the scripts this is based on
function on_signal (){
    launch "kill -9 $piVideoPid $piAudioPid $osxStreamPid"
}
trap 'on_signal' EXIT


# Stream OSX microphone (note the capsfilter, apparently needed due to os x bug)
launch "gst-launch-1.0 -v osxaudiosrc ! capsfilter caps=audio/x-raw,rate=44100 ! audioconvert ! audioresample ! mulawenc ! rtppcmupay ! udpsink host=$broadcastIp port=$macAudioPort" "ASYNC"

# Receive audio
launch "gst-launch-1.0 -v udpsrc port=$piAudioPort caps=\"application/x-rtp\" ! queue ! rtppcmudepay ! mulawdec ! audioconvert ! osxaudiosink" "ASYNC"

# Receive video
launch "gst-launch-1.0 -v tcpclientsrc host=$piIp  port=$piVideoPort ! gdpdepay ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false" "ASYNC"

echo "Waiting on $piVideoPid $piAudioPid $osxStreamPid"

You should be able to see and talk to whoever is on the other end of the pi. The biggest problem is audio feedback if the laptop and pi are close to one another.  I haven’t even begun to think about that, but the coolest things to note are:

  • You get a silky-smooth h264 video
  • The audio is acceptable
  • It works almost entirely using GStreamer.  The cool thing about that is that GStreamer has a very easy programmatic API.  So… that makes my job easy when I get to that part of my project.
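As a tiny taste of that API, here’s a sketch (my own, not part of the original scripts) of how the laptop’s receive-video pipeline above translates to the Python bindings.  The IP and port are placeholders for $piIp and $piVideoPort:

```python
# Build the same receive-video pipeline as the gst-launch-1.0 command above.
# PI_IP and PI_VIDEO_PORT are placeholders - fill in your own values.
PI_IP = "192.168.1.50"
PI_VIDEO_PORT = 9101

pipeline_desc = " ! ".join([
    "tcpclientsrc host=%s port=%d" % (PI_IP, PI_VIDEO_PORT),
    "gdpdepay",
    "rtph264depay",
    "avdec_h264",
    "videoconvert",
    "autovideosink sync=false",
])
print(pipeline_desc)

# With the PyGObject bindings installed, the same string can be handed
# straight to GStreamer:
#   import gi
#   gi.require_version("Gst", "1.0")
#   from gi.repository import Gst
#   Gst.init(None)
#   Gst.parse_launch(pipeline_desc).set_state(Gst.State.PLAYING)
```

The nice part is that the shell prototype and the eventual program share the exact same pipeline description.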


FenBackup Through the Years

This is more of a tour of the past six years or so of various iterations of Fenimore backup solutions.


  • Solution 1.  Soon after I got married I found that between my wife and me we had around 1TB of movies, images, and documents from school.  We were also accruing around 5-10GB a week of family videos and pictures (hey, 1080p video cameras are readily available and generate fairly big, hi-res videos).   Around Oct 2010 the price of 1.5TB drives was pretty low – around $80 each (this was right before the huge price spike a year or so later caused by the flooding in Thailand).  So I built a 3TB software-RAID 6 array with Linux in a large tower.  I also had two 3TB RAID 5 arrays that plugged into hot-swap bays.  I periodically rotated these arrays to an offsite location.  By “periodically,” I mean every week or so for a month, then the interval exponentially decayed to about once or twice a year towards the end.
  • Solution 2.  Enter the QNAP.  By this point I had upgraded to 3TB drives in my consolidated Linux solution.  I took those drives out of my Linux box and moved them into a 4-bay QNAP (TS-453 Pro).  Then all of these 3TB WD Green drives started dying.  As I write this I am actually not sure what the heck happened, but soon I found myself buying some 5TB WD Red (NAS-grade) drives to replace the WD Greens.  These have worked much better.  This solution has also been nice because any time I need to do work on my Linux box or on the NAS it doesn’t take down my whole network.  It was also nice because I ended up making all my files accessible (in theory) via QNAP’s cloud software.  The only flaw is that I tried to continue my offsite backup scheme wherein a drive was cycled to an offsite location; this actually got worse with QNAP, because QNAP doesn’t like the drives to be removed.
  • Solution 3: Redundant QNAPs over VPN.  This is basically an upgrade to my QNAP solution: I bought a cheaper 2-bay QNAP, and the second QNAP fulfills my offsite backup goals.  It sits on the other side of the country and is available 24/7 for my main QNAP to push a copy of my data to.

Overall I’d say this solution is just about flawless.  It at least seems secure, works well, etc.  I found, for example, that I could get to my files when I was not home.  I was even able to stream video from my QNAP via Plex to my laptop in a hotel; this turned out to be useful when my daughter wanted to watch Tangled, which worked fairly well.



Key-based OpenVPN for OSX/Linux/QNAP Howto

I’ve had excellent luck setting up a key-based-authentication VPN for my home network, which is distributed across the country and consists of Linux, Mac (OS X), and QNAP nodes.  What, you ask, does “key-based authentication” refer to?  It means that I do not allow people to enter passwords; instead, clients must possess the proper key, which OpenVPN verifies automatically.   I think key-based authentication can be more secure in some cases; however, I mainly wanted to do it “just because.”

Generally speaking, the OpenVPN HOWTO is excellent and should suffice for setting up your VPN.  I’ll briefly parrot that guide here.  To set things up, you:

  • Pick a VPN network – something that won’t conflict with any of the LAN networks of the nodes on your distributed network.  For example, one of my nodes sits on a 192.168.x.x/16 network and another sits on a 10.246.75.x/24 network.  Pretty much anything from the 10.x.x.x/8 space, other than 10.246.75.x/24, can be used.
  • Generate keys:
    • From /etc/openvpn/easy-rsa run . ./vars
    • ./clean-all
    • ./build-ca  – basically accept everything as is, but remember the common name you give the server (referred to below as <servername>)
    • ./build-key-server <servername>
    • For each of my clients (I have 5 or 6): ./build-key <clientName>
    • Build the Diffie-Hellman parameters: ./build-dh
  • I chose to enable client-client communication, so I added the following in my server config:
    • client-to-client (to let clients talk to each other)
    • push “redirect-gateway def1” (to redirect all traffic through my vpn)
    • client-config-dir /etc/openvpn/clients  (to let me specify static IPs for all my clients).  For each client (identified by the common name used during the setup above), I create a like-named file with a single line like this:
      • ifconfig-push <clientVpnIp> <netmask>  (meaning this client will be accessible within the VPN via that address)

On each client, I scp’d over the ca, crt, and key files and made a quick config file (based on the sample provided with OpenVPN).  I really only changed the following lines:

  • remote <hostname> <port> (this line has the hostname/port of your server)
  • ca /abs/path/to/vpn/directory/ca.crt
  • cert /abs/path/to/vpn/directory/fenimac.crt
  • key /abs/path/to/vpn/directory/fenimac.key
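Putting those together, a minimal client config file (the hostname, port, and paths are placeholders; the remaining directives follow the stock sample client config shipped with OpenVPN) looks something like:

```
client
dev tun
proto udp
remote <hostname> <port>
resolv-retry infinite
nobind
persist-key
persist-tun
ca /abs/path/to/vpn/directory/ca.crt
cert /abs/path/to/vpn/directory/fenimac.crt
key /abs/path/to/vpn/directory/fenimac.key
verb 3
```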

With that, both the server and clients should connect; all connected clients should have statically assigned IPs, and they should be able to talk to each other.

What makes this a bit tricky is QNAP: they only provide password-based authentication.  Thankfully you can just make a crontab entry to create the VPN connection if it’s not up.  I made a script that looks something like this:

VPNLOG=/mnt/HDA_ROOT/vpndir/vpn.log   # wherever you want the log

if [[ `ps ax | grep openvpn | grep vpndir | grep -v grep` ]]; then
    echo "`date` Already running" | tee -a $VPNLOG
    ping -c 1 <server'sVPNIp>
    PINGRET=$?
    if [[ "0" = "$PINGRET" ]]; then
        echo "`date` Link is good!" | tee -a $VPNLOG
        exit 0
    else
        echo "`date` Link is down (ping gave $PINGRET) - killing current vpn" | tee -a $VPNLOG
        ps ax | grep vpndir | tr -s " " | cut -f 1 -d ' ' | xargs kill -9
    fi
fi

echo "`date` Run this thing!" | tee -a $VPNLOG
/usr/sbin/openvpn --config /mnt/HDA_ROOT/vpndir/qnap.conf | tee -a $VPNLOG

A few important notes:

  • I run this script on the QNAP every couple of minutes via crontab.  Just editing the crontab (“crontab -e”) doesn’t do the trick – QNAP wipes those entries away upon reboot. To make this work across reboots, I followed a guide I found somewhere on the internet.  Basically you do the following:
    1. Edit /etc/config/crontab and add your custom entry.
    2. Run 'crontab /etc/config/crontab' to load the changes.
    3. Restart cron, i.e. '/etc/init.d/ restart'
  • I couldn’t get the VPN to stay up unless I turned on the VPN server inside the QNAP.  It wasn’t my favorite thing to just leave that on, but for now I’m OK with it.
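For reference, the custom entry I add to /etc/config/crontab looks something like this (every five minutes; the script path and name check_vpn.sh are just what I used, so substitute your own):

```
*/5 * * * * /bin/sh /mnt/HDA_ROOT/vpndir/check_vpn.sh
```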


Avios Adventures

I had a chance to head out to California on business and, in keeping with tradition, decided it would be fun to bring the family.  Work will pay for my flight, the hotel, and a rental car; however, it is up to me to get flights for the family.  It’s precisely for moments like this that we like to keep 50-100 thousand points lying around, usually the result of sign-up bonuses from credit cards.

This time around I had an arsenal of 100k Southwest points, about 40k American Airlines points, and 50k Avios points.  I wanted to save the Southwest points, so I focused on AA and Avios.  Here’s what I did:

First, I looked for flights on Avios and had mixed luck.   It turns out, as the “Points Guy” points out, that this is because the Avios website doesn’t work well with multi-hop routes. In my case the route was IAD -> DFW -> SFO.  The trick is to look up the individual legs and book them separately. Doing so showed that I could get three family members from IAD to DFW for 30,000 points; then, with my remaining 20,000, I could get two people from DFW to SFO, meaning I’d have to pay for the third person on that leg.  Before booking I checked prices and found that the price tag for DFW -> SFO was $530!  In contrast, IAD to DFW was only $78.  I quickly switched the order, using 30k points to pay for the more expensive DFW -> SFO leg.  Then I used the 20k points for the IAD -> DFW leg for two family members.

Things got a little interesting when I tried to buy a ticket for my daughter (the third member of the IAD -> DFW leg).  Since she is under 16, the website would not let me book her alone.  Of course, she wasn’t alone, but the site doesn’t let you link her to another person on the same flight (which, incidentally, I had booked with Avios).  So I called AA and, after bypassing the annoying robot, found a helpful lady.  She let me know she was waiving the $25 phone reservation fee, then helped link my daughter to my wife so she could get a ticket.

My opinion of Avios is not so great, at least for cross-country domestic flights.  I ended up getting barely 50% as far as I would have with Delta, AA, United, etc. for the same points.  To be fair to Avios, I’ll just chalk this up to ignorance on my part; I think one can “work” Avios better, but I didn’t find a way.  In the end, excluding the September 11th security fees (which you always pay, regardless of your points program), the trip cost me around $78.  If I had purchased the tickets on my own dime, they would have cost over $800 (excluding my flight, which is paid for by work).  This is why collecting points is so worth it.

Based on my research, Avios just works “differently” than most other domestic programs (like AA or Delta); Avios awards are “distance based” – in theory they are fantastic for short-haul flights, since they charge fewer points than, say, AA would.  For example, I consider it standard that 12.5k points gets one person one way in the US.  Since most credit cards worth getting have a 50k sign-up bonus, I expect to get two round-trip tickets per card.  In the end, with my 50k Avios points, I got a family of three only MOST of the way from Virginia to California.  Kinda crummy!  However, if I were hopping from, say, Virginia to New York, Avios might charge less than the standard 12.5k points and therefore be a better deal.  So the theory goes.
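To make the distance-based idea concrete, here’s a toy model.  The bands and point costs below are my own made-up numbers for illustration – loosely inspired by distance-based programs, not any airline’s actual award chart:

```python
FLAT_AWARD = 12500  # the "standard" one-way domestic saver award

def distance_based_award(miles):
    """Toy distance-based pricing: shorter flights cost fewer points.

    The bands and costs are illustrative assumptions, not a real chart.
    """
    bands = [(650, 4500), (1150, 7500), (2000, 10000)]
    for max_miles, cost in bands:
        if miles <= max_miles:
            return cost
    return 25000  # past ~2000 miles, it can cost MORE than the flat award

short_hop = distance_based_award(230)   # roughly a Virginia -> New York hop
transcon = distance_based_award(2450)   # roughly Virginia -> California
print(short_hop, transcon)
```

Under a model like this, the short hop is a steal compared to the flat 12.5k, while the transcon is exactly where distance-based pricing bites you.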

One last thing: I was planning on using AA points to fly back to Virginia, but in order to align with the flights allowed by work I ended up having to go with Southwest.   Before I decided on Southwest, I thought I would book with points from a pair of AA accounts.  Unfortunately, I found I was short about 800 points in one of the accounts; however, I had an excess of several thousand points in the other.  AA lets you “share” points, but at an exorbitant (IMHO) cost: transferring 1,000 points between accounts ended up costing over $30 ($12 for the 1,000 points, plus a flat $20 processing fee).  Sheesh!  But I now have the points where I need them and can send my family one way anywhere in the USA.

So there you have it – a coast-to-coast trip for the whole family for under $100.

Ubiquitous Encryption with GPG

Because it’s easy, and because it provides so many benefits, I now try to use encryption everywhere. GPG is my tool of choice; I actually don’t even know what other options there are. Here are some things I have found useful:

  • Password management.  I use the pass application for this.  After installing GPG, I just do a “pass init <keyid>” and from there I am good to go.  I use QtPass as a graphical frontend.  The way this works is as follows: you add a password with “pass insert blah”, which asks you for the password to store.  Later you can retrieve it by typing “pass blah”. Storing passwords sure is a good idea, but even better is to just have pass generate good ones for you.  To do this I type “pass generate foobar 12” and a 12-character password is generated and stored.  Now I can have unique passwords for all my websites, and pass will remember them.  The one thing that weirded me out is that after unlocking one password I could get at any of the others without entering my passphrase.  This worried me a bit because it looked like you only had to unlock things once and then they remained unlocked.  It turns out that gpg-agent was running and caching my passphrase for up to 10 minutes (the default).  I figure it’s OK to leave things open for 10 minutes; in fact, it means that if you’re checking a lot of websites and you don’t remember their passwords, you don’t have to keep typing your passphrase over and over.
  • GPG also has great integration with Mac OS X.  I use this for mail and file encryption.  For mail, it lets me sign all my emails, regardless of where they go.  If the recipients are also using encryption, I can encrypt the messages.  When I receive an encrypted email, I can decrypt it, etc.  It’s literally one-button email encryption for free.  Not bad!

As for my philosophy on “why encrypt?”: I think the burden is more on people to answer “why not encrypt?”  It takes almost no effort, and the benefit is that emails sent directly to you are no longer viewable by anyone else along the way.