Flaky Goodness

Read-Only PDFs

December 28, 2023

Sometimes I like to scribble on my PDFs to annotate them, add revisions or comments, check off items, or just mark them to indicate that I've reviewed them. That's not possible if the author of the PDF has password-protected it to be read-only.

Read-only error

Based on this StackOverflow answer, here is a simple automation that lets you strip this restriction from your PDFs. Note that this won't remove bona fide PDF encryption: it's just for removing the read-only flag.

Install Homebrew if you don't have it already.

Install qpdf

brew install qpdf

Open up Automator, choose New > Quick Action.

Set up your Quick Action like this

Remove read-only from PDF
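If you're curious what the Quick Action's Run Shell Script step boils down to, here is a sketch. strip_readonly is a hypothetical helper name, and it assumes qpdf is on your PATH:

```shell
# Hypothetical helper mirroring the Quick Action's shell step.
# qpdf --decrypt drops the owner-password restrictions; --replace-input
# rewrites each file in place instead of writing a separate output file.
strip_readonly() {
  for f in "$@"; do
    qpdf --decrypt --replace-input "$f" || return 1
  done
}
```

Usage: strip_readonly locked.pdf (or several files at once). In Automator, the selected Finder items arrive as the script's arguments.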

Save it

Name it PDF - remove read-only and quit Automator.

Test your Quick Action

Right-click (or Control-click) on a read-only PDF in the Finder. Choose Quick Actions > PDF - remove read-only.

The change should be instant and will not produce any output, but the editing restriction should be gone. You can also select several PDFs and run this Quick Action on all of them at once.

Printing PostScript from Emacs in macOS Sonoma

December 23, 2023

Apple removed support for PostScript in macOS Sonoma, which has broken a number of things in my daily workflow. Here is a quick roundup post of some of my workarounds.

Install Ghostscript with Homebrew

First, install Homebrew if you don't already have it.

Now, install the Ghostscript PostScript interpreter:

brew install ghostscript

To confirm that it has installed correctly, open a Terminal and type:

gs --version

You should see something like 10.02.1.

manp script for nicer man pages in Preview

I have added this to my .zshrc but it should work just as well in .bash_profile and similar.

function manp() {
  [ -n "$1" ] && man -t "$1" | gs -sDEVICE=pdfwrite -o "$TMPDIR/$1.pdf" - && open "$TMPDIR/$1.pdf" && sleep 1 && rm "$TMPDIR/$1.pdf"
}

Now you can type manp instead of man to view your man pages as a nicely formatted PDF in Preview.

Emacs PostScript Print Buffer / Region

Emacs depends on the system lpr utility to print PostScript and as of macOS Sonoma, lpr no longer supports PostScript. The specific error message you will see is:

user-error: Spooling...done: /usr/bin/lpr: Unsupported document-format “application/postscript”.

Let's create our own lpr alternative for Emacs to use.

In a convenient executable folder (in my case, /Users/gene/bin), create the following file and call it lpr-ps

#!/bin/bash
uuid=$(uuidgen)
gs -sDEVICE=pdfwrite -o "$TMPDIR/$uuid.pdf" - && open "$TMPDIR/$uuid.pdf" && sleep 1 && rm "$TMPDIR/$uuid.pdf"

Make it executable:

chmod +x /Users/gene/bin/lpr-ps

Add the following to your Emacs configuration (with your particular path of course):

(setq ps-lpr-command "/Users/gene/bin/lpr-ps")

PostScript Print Buffer and PostScript Print Region (M-x ps-print-buffer and M-x ps-print-region) should now open a nicely formatted Preview document for you.

NOTE: if you get an error like The file ... couldn't be opened because there is no such file, try changing sleep 1 to a larger number of seconds, like sleep 10 or even sleep 30. The script tries to delete the generated temporary file as soon as possible, but sometimes that is sooner than Preview can get it onto the screen. Keep bumping the number up until the script is reliable for you. The tradeoff is that control isn't returned to Emacs until the full sleep timeout has elapsed.

Emacs Org LaTeX export to PDF

This might not be an issue for you, but if you use Org's export to PDF via LaTeX, you may need the following additional change to your Emacs config. The error complained about pdflatex not being a known LaTeX compiler.

(setq org-latex-pdf-process '("latexmk -f -pdf -interaction=nonstopmode -output-directory=%o %f"))

I'll continue updating this post with additional PostScript-on-Mac workarounds as I find and implement them.

Oven Fresh

August 14, 2023

I guess that bread baking is a pandemic trope now and I'm late to the game, but a couple of months ago I started my sourdough journey. The lure of hot, fresh bread whenever I wanted was enticing and I took the plunge. After a bit of research I found a great sourdough starter and things escalated from there.

I named my starter Nigel, as you do.

The directions are cheerful, fun and easy to follow. At one point Nigel bubbled over the top of the jar and I thought I had mis-measured or mis-timed something. Honestly though, it's kind of fun to watch your starter become so excited that it exuberantly overflows the jar. It's a milestone.

Nigel has been completely reliable and easy to work with. I've been experimenting and steadily improving my process and results. I started with a high-quality wheat flour but missed the greater glutenous growth of All-Purpose, so for now I'm just trying to get the best rise I can. I want to get the crust darker and crispier too.

The important thing though is it's bread, it's delicious, and I bake a loaf every week. The process is a salve from sitting in front of a screen and keyboard.

Nigel and his works

Today would have been my father's 86th birthday. He loved bread. This next loaf's for you, Dad.

Upgrading to Emacs 29.1 on Apple Silicon Macs

August 13, 2023

This is an update to my post about compiling Emacs 28.1 from source. With the recent release of Emacs 29.1, here are the steps I used to upgrade.

Remove existing Emacs

Backup and delete /Applications/Emacs.app if you have one.

Install/re-install dependencies

brew update
brew reinstall gcc
brew install libgccjit texinfo tree-sitter jansson
sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install

It still seems to be necessary to re-install Xcode command-line tools to prevent Emacs from getting killed on launch.

Get a fresh clone of Emacs

I don't tend to remote into my Emacs session any more so I didn't bother updating or applying the multi-tty patch. I may reconsider that if my needs change in the future.

git clone https://bitbucket.org/mituharu/emacs-mac
cd emacs-mac
git checkout emacs-29.1-mac-10.0

Configure and make

autoreconf -i

./configure \
  --with-native-compilation \
  --with-modules \
  --enable-mac-app \
  --with-xwidgets \
  --with-no-frame-refocus \
  --with-tree-sitter \
  --prefix=/Applications/Emacs.app/Contents/Resources
  
make

Move the built Emacs into /Applications

I didn't have much luck with make install, so I just manually moved emacs-mac/mac/Emacs.app into /Applications.

This was pretty smooth for me, probably because I had things nicely wired up from my previous installation. Good luck!

UPDATE 2023-10-02 to add --with-tree-sitter configuration option and manual installation process.

Upgrading to Emacs 28.1 on Apple Silicon (M1) Macs

April 10, 2022

This is an update of sorts to my previous Emacs build recipe and late-night Emacs-on-M1 Twitter thread.

The Emacs Mac Port has been updated for Emacs 28.1 and here are the steps I used to build and install it on my M1 MacBook Air. Overall the process seems much smoother than the pre-release build experience but as always YMMV.

Install/re-install dependencies

brew reinstall gcc
brew install libgccjit texinfo
brew ln texinfo --force

I'm not sure why it was necessary to reinstall gcc (maybe an architecture issue?), but it seemed to help with a build error in a subsequent step. Also, texinfo provides an updated makeinfo, which is no longer optional in the Emacs build process.

Add makeinfo to path

Add this to your shell path (necessary so that the makeinfo provided by texinfo is seen before the older macOS-provided version).

/opt/homebrew/opt/texinfo/bin
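Concretely, that means a line like this in your ~/.zshrc or ~/.bash_profile (the path assumes the default Apple Silicon Homebrew prefix of /opt/homebrew):

```shell
# Prepend texinfo's bin directory so its makeinfo shadows the older macOS one.
export PATH="/opt/homebrew/opt/texinfo/bin:$PATH"
```

Open a new shell and run which makeinfo to confirm the Homebrew copy wins.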

Install/re-install Xcode command-line tools

sudo rm -rf /Library/Developer/CommandLineTools
xcode-select --install

Again, something was wedged in the subsequent build process until I took this step, as recommended in this GitHub thread.

Clone Emacs

These steps also apply the (optional) multi-tty patch that I've written about before.

git clone https://bitbucket.org/mituharu/emacs-mac
cd emacs-mac
git checkout emacs-28.1-mac-9.0
autoreconf -i
wget https://gist.github.com/genegoykhman/6effe7fa25696c49d0519af877f5fb42/raw -O multi-tty.patch
git apply multi-tty.patch

Configure and build

./configure \
  --with-native-compilation \
  --with-modules \
  --enable-mac-app \
  --prefix=/Applications/Emacs.app/Contents/Resources
make install

Confirm your emacsclient is linked to the right place

The above steps will update your /Applications/Emacs.app in place, so if you had emacsclient symlinked from a different directory (as I did) you may want to repoint those symbolic links at the newly built version: /Applications/Emacs.app/Contents/Resources/bin/emacsclient.
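For example, if your emacsclient symlink lives in ~/bin (a hypothetical location; substitute wherever yours actually is), repointing it looks like:

```shell
# Repoint the symlink at the emacsclient inside the newly built app bundle.
# ~/bin is an assumed location; adjust to your own setup.
mkdir -p "$HOME/bin"
ln -sf /Applications/Emacs.app/Contents/Resources/bin/emacsclient "$HOME/bin/emacsclient"
```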

Starting Emacs

If all goes well you should be able to start Emacs and (emacs-version) will report 28.1. The new native-compilation feature will churn through your startup Elisp and probably spew a ton of warnings, but eventually you'll notice that Emacs feels much zippier.

If you don't see any (comp) warnings on first startup you might just want to check that native compilation is working ... (fboundp 'native-compile-async) should return t.
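One way to spot-check, evaluated in the *scratch* buffer (both functions are built into Emacs 28+):

```elisp
;; Both should be non-nil on a native-compilation-enabled build.
(and (fboundp 'native-compile-async)
     (native-comp-available-p))
```

If this returns nil, the build was likely configured without --with-native-compilation or libgccjit wasn't found.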

Visual debugging for Swift Package Manager projects on Linux

May 9, 2021

Last week I wrote about how to install and use lldb-vscode and Emacs dap-mode to visually debug Swift Package Manager projects outside of Xcode. Unfortunately the process still depended on the LLDB.framework packaged with Xcode, which meant that you still needed a Mac with Xcode installed.

In this post we'll go through a similar process, except we'll do it on Linux and without any need for Xcode. By the end you'll be able to use Emacs (or VSCode or other editor that can use VSCode extensions) to visually debug Swift Package Manager projects on Debian 10 ("buster").

Upgrade binutils

Building Swift requires a newer binutils than the one included with Debian 10. I think it's related to this bug in ld. We'll need to upgrade to the unstable version of binutils (2.35.2 as of this writing).

First, enable the unstable and testing repos for apt.

Now, install the unstable binutils:

sudo apt update
sudo apt-get install binutils/unstable

Try ld --version. You should see 2.35.2 or newer.

Configure LD_LIBRARY_PATH

I think that if you install a new version of ld (like we just did) you need to configure it so that linked executables correctly look for libraries in /usr/local/lib. On Debian 10 we just need to run:

sudo ldconfig

Install Debian 10 dependencies

I'm starting with a fairly minimal installation of Debian 10 so I'll need to install some dependencies that you may or may not need. This will vary for other distributions of course.

sudo apt-get install cmake ninja-build libedit-dev libpython3-dev libcurl4-gnutls-dev libsqlite3-dev 

Build Swift from Source

Last week we cloned the llvm-project repository and built it standalone, without Swift. This time we'll clone the Apple Swift project and use its build scripts to pull in and build llvm-project as a dependency. This is the longest step of this procedure so leave yourself ample time.

My personal projects directory is ~/Proj and I'll be using it through the rest of this post. Feel free to substitute your own.

cd ~/Proj
git clone https://github.com/apple/swift
cd swift
git checkout swift-5.4-RELEASE
./utils/update-checkout --clone
./utils/update-checkout --tag swift-5.4-RELEASE
./utils/build-script \
  --clean \
  --lldb \
  --llbuild \
  --release --no-assertions \
  --xctest \
  --foundation \
  --libdispatch \
  --libicu \
  --swiftpm \
  --install-destdir="~/Proj/swift-install" \
  --install-all

I had this fail several times, each time with a missing library dependency. If that happens to you, install the missing library with sudo apt-get install and then reissue the same ./utils/build-script command, omitting the --clean flag. This will allow the build to pick up from roughly where it left off.

Eventually you'll have a fully built Swift project in the ~/Proj/swift-install directory.

Copy the built files into your /usr/local folder

sudo cp -R ~/Proj/swift-install/usr/* /usr/local/

Install the lldb-vscode extension

Create the required directory in your home folder and copy the extension:

mkdir -p ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin
cd ~/Proj/build/Ninja-Release/lldb-linux-x86_64
cp bin/lldb-vscode ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin

Grab the package.json

For some reason the build-script doesn't create the package.json file required for the lldb-vscode extension. You can download the one we built last week and save it into this directory:

~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0

You should now be able to start lldb-vscode from the command line:

cd ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin
./lldb-vscode

If you don't see any errors you're in good shape. Hit Ctrl-C and let's continue.

Install Vapor and build Hello World

You can use any SwiftPM project for this step but we'll continue to use Vapor for our example. I'm using version 3.1.9 here but choose whatever you like.

cd ~/Proj
git clone https://github.com/vapor/toolbox.git
cd toolbox
git checkout 3.1.9
make install

Type vapor --help to confirm that it is installed. Now create and build the Hello World project.

cd ~/Proj
vapor new hello -n
cd hello
swift build

Add the dap-debug template into Emacs

Assuming you're using Emacs and have installed and configured dap-mode (you can read more about this in last week's post), you need to add a debug template to your Emacs config for the Hello World we've just built. Here's the one I'm using.

(dap-register-debug-template
    "Vapor Hello World Linux"
    (list :type "lldb-vscode"
          :cwd "/home/gene/Proj/hello/.build/x86_64-unknown-linux-gnu/debug"
          :request "launch"
          :program "/home/gene/Proj/hello/.build/x86_64-unknown-linux-gnu/debug/Run"
          :name "Run"))

Evaluate this region or reload your Emacs config.

Set a breakpoint and see if it hits

The steps in this section are identical to what we did on the Mac last week.

In Emacs, start debugging with M-x dap-debug. Select the template you just added: that should launch the Vapor web server and the Hello World app.

Find the file Sources/App/routes.swift in the Hello World project. Move the point to the line that says return "It works!" (line 5 for me). Issue M-x dap-breakpoint-add. You should see a breakpoint indicator dot appear to the left of the line.

Open a web browser and navigate to http://127.0.0.1:8080. Your breakpoint should hit and you'll see something like this.

Breakpoint hit

Under Locals (top-right on my screen), if you click on req it will expand and show you the Vapor request properties at the time your breakpoint was hit.

The stack trace is available by clicking the Run label in the Debug Sessions pane. To stop debugging issue M-x dap-disconnect.

And there we have it: a full visual debugging solution for Swift Package Manager projects in Linux, with no Xcode required.

Controlling TIDAL with AppleScript

May 8, 2021

I've been experimenting with the TIDAL streaming service but ran into a snag when I couldn't easily play/pause audio with AppleScript. I have a nice little script (that I've mapped to ⌘⌥⌃ P using Alfred) that will intelligently play or pause media playback from whatever source I'm currently using. It works with Music.app, Spotify, VLC, and so on but depends on the app providing at least a basic AppleScript dictionary. No such luck with TIDAL.

Happily AppleScript can inspect and select menus in any standard macOS app menubar and we can take advantage of this for TIDAL.

Menubar to the rescue

To determine whether the TIDAL app is running:

tell application "System Events"
    if exists (processes where name is "TIDAL") then
        -- TIDAL is running
    end if
end tell

To play TIDAL if it is currently paused or stopped:

tell application "System Events"
    tell process "TIDAL"
        if name of menu item 0 of menu "Playback" of menu bar 1 is "Play" then
            click menu item "Play" of menu "Playback" of menu bar 1
        end if
    end tell
end tell

To pause TIDAL if it is currently playing:

tell application "System Events"
    tell process "TIDAL"
        if name of menu item 0 of menu "Playback" of menu bar 1 is "Pause" then
            click menu item "Pause" of menu "Playback" of menu bar 1
        end if
    end tell
end tell

To toggle the play/pause state:

tell application "System Events"
    tell process "TIDAL"
        click menu item 0 of menu "Playback" of menu bar 1
    end tell
end tell

Just as with any UI scripting approach, this method is fragile and won't survive, for example, a TIDAL UI revamp. But it's better than nothing and simple enough to tweak in the future if necessary.

Debug Swift Package Manager projects using dap-debug

May 2, 2021

You may know you can edit and build your Swift PM projects outside of Xcode, but what about visual debugging? The Debug Adapter Protocol (DAP), one of the many gifts to the world from the VSCode project, allows us to hook up our editor of choice (not just VSCode) to LLDB to visually debug our Swift code.

This walkthrough demonstrates how to set up Emacs dap-mode for Swift debugging1 and debug the Vapor Hello World example project.

Bad News First

You still need to do this on a Mac with Xcode installed. Theoretically this process should work on any machine that can build the llvm-project but right now, without the LLDB.framework that ships with Xcode, you'll be unable to inspect variables at stopped breakpoints.

This is a pretty big deal for me but I'm hoping that whatever magic sauce is in the Xcode-packaged LLDB.framework makes its way up to the mainline llvm-project eventually2.

Then what's the point

If you still need a Mac and Xcode, then why not just use Xcode? Excellent question. This process at least gives you the choice of what editor to use, and enables the full development cycle outside of Xcode (yes, yes and AppCode).

Process overview

In this post we're going to:

  • install and configure dap-mode in Emacs.
  • clone and build the llvm-project from source.
  • hack the lldb-vscode extension produced in our build to work with the LLDB.framework that comes with Xcode.
  • install Vapor and build the Vapor Hello World example project.
  • set a breakpoint in the routes.swift file of the Hello World project and confirm that we can run, break, and see the call stack and local variables.

Phew. Let's get started.

Install and Configure dap-mode

I'm going to assume you already have lsp-mode installed. Make sure you have (require 'lsp-mode) and (require 'lsp-ui) somewhere in your Emacs config.

For dap-mode, follow the steps on the dap-mode GitHub README. I keep it pretty simple with:

(require 'dap-lldb)
(dap-auto-configure-mode)

dap-mode uses debug templates. You'll eventually want to add one for every target you debug. For now, add this to your config for our Hello World example.

(dap-register-debug-template
"Vapor Hello World SwiftPM"
(list :type "lldb-vscode"
      :cwd "/come/back/and/fill/this/in/later"
      :request "launch"
      :program "/come/back/and/fill/this/in/later/Run"
      :name "Run"))

Don't worry about the missing paths for now... we'll come back and, well you get the idea.

Install the build system

You'll need CMake and the Ninja build system installed.

brew install cmake ninja

Clone and Build the llvm-project from source

Open your terminal, start brewing a coffee (you'll need it shortly) and type:

git clone https://github.com/llvm/llvm-project
cd llvm-project
git checkout llvmorg-12.0.0

I'm using the latest release tag as of this writing, llvmorg-12.0.0, but you can check the current release3.

This next step is pretty weird but here we are. You need to completely remove the Xcode command-line tools in order to prevent build errors for llvm-project. We'll be reinstalling them shortly, but for now quit Xcode if it's running and, in terminal, type:

sudo xcode-select -s /Applications/Xcode.app
sudo rm -rf /Library/Developer/CommandLineTools

Ok let's build llvm-project. If you're not on Apple Silicon skip the line with CMAKE_OSX_ARCHITECTURES='arm64'.

rm -rf build
mkdir build
cd build
cmake -G Ninja \
  -DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lldb" \
  -DCMAKE_OSX_ARCHITECTURES='arm64' \
  -DCMAKE_C_COMPILER=`which clang` \
  -DCMAKE_CXX_COMPILER=`which clang++` \
  -DCMAKE_BUILD_TYPE=Release \
  -DLLDB_INCLUDE_TESTS=OFF \
  -DLLDB_BUILD_FRAMEWORK=ON \
  -DDEFAULT_SYSROOT="$(xcrun --show-sdk-path)" \
  ../llvm

cmake --build . -- -j8

Remember that coffee? This is where you get to sip it as you watch llvm build. No rush.

Reinstall Xcode command-line tools

Assuming that the build succeeded:

sudo xcode-select --install

Copy the built lldb-vscode extension

Part of the llvm-project that we built was the lldb-vscode extension (not to be confused with the completely unrelated vscode-lldb project). We need to copy the binary into a place where dap-mode will find it:

mkdir -p ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin
cp ../lldb/tools/lldb-vscode/package.json ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0
cp bin/lldb-vscode ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin

Hack the rpath of the extension

The rpath of the lldb-vscode extension we just built points directly into the bin directory of the llvm-project tree, which is where it's going to look for LLDB.framework. You can confirm that by typing:

cd ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin
otool -l lldb-vscode

Look for the LC_RPATH entry. We need to change it, because we actually want lldb-vscode to use the LLDB.framework from your currently installed Xcode (12.5 at the time of this writing). The LLDB.framework we just built as part of llvm-project won't fully work: you won't be able to see locals or a stack trace during debugging, which makes it less useful.

Here's how to replace the rpath. You'll have to plug in the current rpath from the otool command above.

cd ~/.vscode/extensions/llvm-org.lldb-vscode-0.1.0/bin
install_name_tool -delete_rpath /exact/rpath/from/the/otool/command/you/just/issued lldb-vscode
install_name_tool -add_rpath /Applications/Xcode.app/Contents/SharedFrameworks lldb-vscode

Check again with otool -l lldb-vscode. There should be only one LC_RPATH, and it should point to /Applications/Xcode.app/Contents/SharedFrameworks.

Install and Build Vapor Hello World

You can try this with any Swift PM project of course but Vapor's Hello World is a perfect example. Change into your projects directory (mine is /Users/gene/Proj). Install Vapor with:

brew tap vapor/tap
brew install vapor/tap/vapor
vapor --help

If that worked you'll see a list of available vapor commands. Ok let's create the Hello World app.

vapor new Hello
cd Hello
swift build
find . -name Run

That last command will show you where the Run binary got dropped. Make a note of the full path to the file ... on my machine it's /Users/gene/Proj/Hello/.build/arm64-apple-macosx/debug/Run.

Go back to your Emacs config and modify the debug template with the correct working directory and program to run. Mine is:

(dap-register-debug-template
"Vapor Hello World SwiftPM"
(list :type "lldb-vscode"
      :cwd "/Users/gene/Proj/Hello/.build/arm64-apple-macosx/debug"
      :request "launch"
      :program "/Users/gene/Proj/Hello/.build/arm64-apple-macosx/debug/Run"
      :name "Run"))

Reload your Emacs config.

Set a breakpoint and see if it hits

Start debugging with M-x dap-debug. Select the template you just added: that should launch the Vapor web server and the Hello World app.

In Emacs, find Sources/App/routes.swift in the Hello World project. Move the point to the line that says return "It works!" (line 5 for me). Issue M-x dap-breakpoint-add. You should see a breakpoint indicator dot appear to the left of the line.

Open a web browser and navigate to http://127.0.0.1:8080. Your breakpoint should hit and you'll see something like this.

Breakpoint hit

Under Locals (top-right on my screen), if you click on req it will expand and show you the Vapor request properties at the time your breakpoint was hit. If you don't see req at all, something went wrong and lldb-vscode isn't using an LLDB.framework that can show locals. My only recommendation is to try this procedure again to see if you missed anything. It's also possible that this functionality will regress again in the future (like it did in LLVM 11.0.1 / Xcode 12.4) so join me in keeping our fingers crossed.

The stack trace is available by clicking the Run label in the Debug Sessions pane. To stop debugging issue M-x dap-disconnect. I'll leave further investigation to you, and I may also expand on this in future blog posts.

Congratulations on making it this far, and on getting it to work. My hope is that the LLVM and DAP story will continue to expand and improve in the future, possibly dropping the dependency on Xcode entirely.

--

1 I am not familiar with the VSCode extensions ecosystem but I imagine it would be even more straightforward to set up there.

2 At the recommendation of one of the reviewers of this post (thanks Dan!) I tried this process using Apple's fork of the llvm-project but still could not shake the dependency on Xcode's LLDB.framework.

3 It's worth mentioning that llvm 12.0.0 and Xcode 12.5 work, whereas llvm 11.0.1 and Xcode 12.4 did not (no locals or call stack during debugging). Hopefully whatever got fixed stays fixed.

Keep your Time Machine Volume Unmounted

February 18, 2021

A locally-connected Time Machine drive1 mounts automatically when you log in and stays mounted for the duration of your entire computing session. Normally this isn't a problem, but if you want to yank the cable and grab your notebook to go you have to remember to manually eject the Time Machine drive first. Otherwise you'll see this:

Disk not ejected properly

You can usually ignore this warning and suffer no ill effects, but there is always a risk that it could lead to actual backup corruption, or raise enough of a danger signal that Time Machine will want to wipe your backup history and do a full backup from scratch. Neither scenario is great.

This procedure shows you how to set up an automated Time Machine backup schedule that mounts the drive, does the backup, and unmounts as soon as the backup completes. The process assumes you're on macOS 11 (Big Sur) or later, and that you use an encrypted APFS Time Machine volume.

First, turn off the built-in backup schedule

Click the Time Machine icon in your menu bar and choose Open Time Machine Preferences...

Uncheck Back Up Automatically.

Write the backup script

We're going to write a shell script to handle the mounting, backup, and ejecting of the Time Machine drive. I could not find a way to make a bare shell script reliably mount an encrypted APFS volume, but a real Mac app can. Luckily you can wrap any shell script in a folder structure to make it look and work just like a regular Mac app. Let's call ours timemachine_go.app and save it in our ~/bin folder.

In Terminal:

cd ~/bin
mkdir -p timemachine_go.app/Contents/MacOS

Create the file ~/bin/timemachine_go.app/Contents/MacOS/timemachine_go using your editor of choice. This is the script, based on the approach described in this StackOverflow response, but adding support for APFS encrypted volumes.

#!/bin/bash
sleep 120
d="Time Machine"  # (change this to match the name of your backup drive)
diskutil mount "$d"
tmutil startbackup -b
diskutil eject "$d"
diskutil apfs lockVolume "$d"

Make the script executable:

chmod +x ~/bin/timemachine_go.app/Contents/MacOS/timemachine_go

Open System Preferences > Security & Privacy. Select Full Disk Access. Click the lock and authenticate so you can make changes, then click + and add the timemachine_go.app you just created.

Make sure your Time Machine drive automatically mounts when you log in

This script will only work if your Mac is able to automatically mount your Time Machine drive without asking you for a passphrase. If your Time Machine volume automatically appears on your desktop when you log in, you should be good to go.

Test the backup script

Click the Time Machine icon in your menu bar and note the time of the most recent successful backup. Right-click on your Time Machine drive on the Desktop and Eject it. Navigate to and double-click timemachine_go.app in Finder.

If all goes well you should see the Time Machine drive appear on your desktop, the icon in the menu bar change for a few minutes while the backup runs, and then the Time Machine drive should disappear again. Click the Time Machine icon in the menu bar and confirm that the last backup time was updated.

Schedule the backup script to run automatically

We're going to create a LaunchAgent to run the backup script on a set schedule when you are logged into your Mac user account2. Create the file ~/Library/LaunchAgents/com.flakygoodness.timemachine_go.plist (feel free to substitute your own reverse-DNS name for the script):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.flakygoodness.timemachine_go</string>
    <key>ProgramArguments</key>
    <array>
      <string>/Users/YOURUSERNAME/bin/timemachine_go.app/Contents/MacOS/timemachine_go</string>
    </array>
    <key>StartInterval</key>
    <integer>7200</integer>
    <key>RunAtLoad</key>
    <true/>
</dict>
</plist>

A few notes about the script:

Be sure to substitute YOURUSERNAME above, as well as the correct reverse-DNS name you chose for the script if different than mine. The StartInterval of 7200 seconds (run a backup every 2 hours) can be customized to your taste. I suggest starting with 900 (15 minutes) for testing, and then when you're confident the script is working as intended you can set this to whatever you'd like.

Changes you make to the LaunchAgent file won't take effect until you've reloaded the plist, which happens for you automatically when you log out and back into your Mac account.

Reboot your Mac and log in

When you boot your Mac and log in you'll see the Time Machine drive on the desktop3. After a 2 minute delay4 the backup will run, and then the drive will eject and disappear. In another 2 hours (or whatever interval you've set in the LaunchAgent script) the drive will mount and appear, the backup will run, and then disappear again.

Periodically check your Time Machine backups

Every once in a while it's a good idea to click the Time Machine icon in the menu bar and confirm that the last successful backup occurred when you think it should have. Better yet, manually mount your Time Machine drive (using Disk Utility or the command-line) and open up the Time Machine interface. Use it to restore a data file or two, and then eject the drive with confidence that it will be mounted again when it's time for the next backup.

--

1 Alternatively you could use a network-attached Time Machine volume like a Time Capsule or Time Machine-compatible NAS. This comes with its own set of performance and reliability trade-offs. A fast, locally-attached Time Machine is a nice way to go. With an APFS-formatted SSD my incremental backups typically take only a few seconds to complete.

2 I don't believe that this script will run during Power Nap wakes, but I'm not sure about this.

3 Although there are a number of approaches for completely hiding the Time Machine drive from the desktop I didn't find any of them suitable. There was a way to set a hidden flag on any macOS file/folder/volume but this no longer seems to work, at least for Time Machine volumes. It's possible to prevent a volume from automatically mounting at startup, but this seems to prevent the script from ever being able to mount the drive without user intervention (the drive still mounts from Disk Utility). Finally, it's possible to prevent Finder from showing any externally attached drives on the Desktop but I actually find this behavior useful for things like USB thumbdrives.

4 You may have noticed the sleep 120 (2 minute delay) hard-coded at the top of the timemachine_go script. This is to work around a timing issue where, if the backup starts immediately upon login, the Time Machine drive may not yet have been detected by the OS. Two minutes is enough time to allow for this, and also gives us a little opportunity to do something quickly and log out again if we want to avoid a backup entirely.
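If the fixed delay bothers you, one alternative (a sketch only; the volume name, interval, and timeout here are my assumptions, not part of the original script) is to poll for the volume instead of sleeping unconditionally:

```shell
# Hypothetical replacement for the hard-coded "sleep 120": poll until the
# backup volume is mounted, giving up after the same 2-minute ceiling.
wait_for_volume() {
  vol="$1"
  tries="${2:-24}"                    # 24 tries x 5 seconds = 120 seconds max
  while [ "$tries" -gt 0 ]; do
    [ -d "$vol" ] && return 0         # volume is mounted; stop waiting
    sleep 5
    tries=$((tries - 1))
  done
  return 1                            # timed out; skip this backup run
}

# Usage at the top of timemachine_go (sketch):
#   wait_for_volume "/Volumes/Time Machine" && tmutil startbackup
```

This starts the backup as soon as the drive appears rather than always waiting the full two minutes, at the cost of losing the "quick login window" the fixed delay gives you.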

Dell UP2715K on M1 Macs

January 30, 2021

UPDATE 2021-12-20 After updating to macOS Monterey 12.1 (I skipped 12.0), this fix no longer seems necessary. macOS once again natively supports the UP2715K on M1 Macs.

--

Getting a Dell UP2715K working with the new M1 Macs is tricky. The monitor has worked seamlessly with Intel Macs since it was first introduced (assuming you had the right adapter). Sadly discontinued, the UP2715K is still one of the best/only 5K options that isn't an iMac. Here's how to get it working with the M1 Macs.

A huge shout out to Alex Argo who actually spent the time on the phone with Apple support that led to this solution. He documented his approach and success in the Apple support communities and when I couldn't quite get it to work myself he was generous with his time and configuration files. Cheers Alex.

The problem is that, by default, macOS doesn't recognize the correct list of available resolutions from the UP2715K. This procedure adds an override configuration file that macOS uses to see the correct available resolutions.

Create an override file

Start by creating the directory tree where the override file will go.

sudo mkdir -p /Library/Displays/Contents/Resources/Overrides/DisplayVendorID-90ac

Now, create a file DisplayProductID-40b6 (no extension) in this directory with the following contents.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>DisplayPixelDimensions</key>
    <data>
    AAAUAAAAC0A=
    </data>
    <key>DisplayProductID</key>
    <integer>16566</integer>
    <key>DisplayProductName</key>
    <string>DELL UP2715K (patched)</string>
    <key>DisplayVendorID</key>
    <integer>37036</integer>
</dict>
</plist>
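As an aside, the DisplayPixelDimensions blob isn't magic: it's 8 bytes, two big-endian 32-bit integers holding the panel's pixel dimensions. You can decode it from Terminal (use base64 -D on older macOS; -d works on Linux and recent macOS):

```shell
# Decode the <data> blob: two big-endian 32-bit ints, i.e. width and height
echo 'AAAUAAAAC0A=' | base64 -d | od -An -tx1
# bytes: 00 00 14 00 00 00 0b 40  ->  0x00001400 = 5120, 0x00000b40 = 2880
```

That's 5120x2880, the UP2715K's native 5K resolution, which is exactly what the override is telling macOS the display can do.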

Important: Reboot now.

I want to give credit to Stephane Madrau, the developer of SwitchResX, the tool originally used to create this file, who helped me understand what was happening and what this patch actually does. He is currently updating SwitchResX for full M1 support and you should check it out if you want to dig into this further or need better control over your display resolution settings in general.

Select the correct resolution

When you've rebooted and logged back in, go to System Preferences > Displays and check the Window menu. You should see an item named Dell UP2715K (patched). If you don't see that exact name, the file we just created didn't get recognized by macOS for some reason.

Select that entry and then, again from the Window menu, Move to Built-in Retina Display. This will move the options sheet onto your notebook screen where you can actually see it (assuming your Dell doesn't have an image showing yet).

In the resolution list, click Scaled, and now hold down ⌥ and click Scaled again. This will show the full resolution list for the monitor.

Full resolution list

Select 2560x1440. This is 5K HiDPI (native retina), and is probably what you want for this display. Close System Preferences and reboot again.
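Why is 2560x1440 the 5K option? HiDPI ("looks like") modes render at 2x in each dimension, so that scaled resolution drives the panel at its full native pixel count:

```shell
# HiDPI doubles each dimension: "looks like" 2560x1440 -> native panel pixels
echo "$((2560 * 2))x$((1440 * 2))"   # -> 5120x2880
```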

When you log in, your Dell display should be active at the correct resolution. You can close your notebook lid, Sleep and Resume, etc. and the Dell should be working as expected.

There's just one thing left

After a reboot the Dell might not activate until you've logged in. This is worth a bit of explanation.

The override file we created sits on your Data volume, which (if you have FileVault on) is locked and encrypted until you've logged into your Mac. Therefore, if you have FileVault on, macOS won't be able to activate the overrides file until that first login. As of Big Sur (maybe late Catalina?) I don't believe there is any way of putting the overrides file in the /System/Library/Displays ... path (which is on the System volume and visible to macOS pre-login). The System volume is completely sealed and, even with SIP off, changes to the volume won't survive a reboot.

So: the real solution is for Apple to ship the Display...Overrides file for the UP2715K as part of a future version of macOS. I've filed a feedback (FB8985087) with this request.

For now, you have a choice to make. You could leave FileVault on and open your notebook lid to login when you restart. Or, you could turn FileVault off and have the Dell activate automatically on restart. That's a security choice and worth thinking about. Hopefully, if the overrides file eventually gets added to the System volume we won't have to deal with this dilemma for long.

Automatic switch for the macOS firewall

December 22, 2020

First, please don’t take this post as concrete security advice. Everyone’s situation, risk profile, and risk tolerance is different. But if you’re like me, you may not have your macOS firewall turned on right now. I’m not going to judge, especially if you’re on a trusted network behind a well configured router. You can check by going to System Preferences > Security & Privacy > Firewall.

I will suggest though that when you leave your cozy home or office network, turning on that firewall is a good idea, especially if you have anything switched on in the Sharing panel.

Firewall: On

Firewall: On

Consistently remembering to do that is a chore so you may choose to leave the firewall on permanently. I don’t, because when I’m at my home office I want the Mac fully accessible from other devices on my network, and when I’m away I want things locked down as tight as I can get them.

Here’s how I have my macOS firewall switch off automatically when I’m on my home network and switch back on whenever I leave.

Find the MAC address of your home router

While connected to your home network, open Terminal and type:

system_profiler SPNetworkDataType | grep IPv4\.Router

You’ll see something that starts with:

Network Signature: IPv4.Router=192.168.0.1; ...

That 4-number IP address is probably the internal (to your network) address of your router. You might see the MAC (IPv4.RouterHardwareAddress) of your router on the same line, but let’s confirm it. Substitute the IP address from above when you type:

arp 192.168.0.1 | head -n 1 | awk '{print $4}'

You should see 6 hex numbers separated by colons. This is your router’s MAC address and you’ll use it in the script below.
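To see what that pipeline is doing, here it is run against a captured arp line (the sample output and MAC address below are made up for illustration):

```shell
# Hypothetical output of `arp 192.168.0.1`; awk plucks out field 4,
# which is the hardware (MAC) address
sample='router.home (192.168.0.1) at aa:bb:cc:dd:ee:ff on en0 ifscope [ethernet]'
echo "$sample" | head -n 1 | awk '{print $4}'   # -> aa:bb:cc:dd:ee:ff
```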

Create a networkchanged script

Create the following script in a location of your choice somewhere in your home folder. For me this script is called /Users/gene/bin/networkchanged. Substitute in your router’s MAC address for the $GATEWAYMAC string of zeroes below.

#!/bin/bash

GATEWAYIP=`system_profiler SPNetworkDataType | grep -m1 IPv4\.Router | awk -F'[=;]' '{print $2}'`
if [ ! -z "$GATEWAYIP" ]; then
   GATEWAYMAC=`arp "$GATEWAYIP" | head -n 1 | awk '{print $4}'`
fi

if [ ! -z "$GATEWAYMAC" ] && [ "$GATEWAYMAC" == "00:00:00:00:00:00" ]; then
  # Trusted network. Disable firewall.
  /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off
else
  # Not on a trusted network. Enable firewall.
  /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on --setblockall on
fi

Make it executable

Substitute in the location of your script and run this from the terminal:

chmod u+x /Users/gene/bin/networkchanged

Try it

Run this from the terminal:

sudo /Users/gene/bin/networkchanged

You’ll probably see the following output:

Firewall already disabled.

Now disconnect from your home network (unplug your network cable and/or turn off your Mac’s wifi) and run the script again. You should see:

Firewall is enabled. (State = 1)
Firewall is set to block all non-essential incoming connections

Nice. You can also open System Preferences > Security & Privacy > Firewall and confirm that the little yellow light is on. Note that the state of this light only refreshes when you quit and relaunch System Preferences.

Make it run automatically whenever the network changes

As root, create a file called /Library/LaunchDaemons/com.scripts.NetworkChanged.plist with the following contents. Substitute the full path to your networkchanged script.

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" \
 "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>networkchanged</string>
  <key>LowPriorityIO</key>
  <true/>
  <key>ProgramArguments</key>
  <array>
    <string>/Users/gene/bin/networkchanged</string>
  </array>
  <key>WatchPaths</key>
  <array>
    <string>/private/var/run/</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
</dict>
</plist>

Make sure it’s owned by root:

sudo chown root:wheel /Library/LaunchDaemons/com.scripts.NetworkChanged.plist

Load it:

sudo launchctl load /Library/LaunchDaemons/com.scripts.NetworkChanged.plist

And you’re all set.

Test it

Take your computer off your network and bring it back on, and confirm that the firewall starts and stops correctly. Remember that you have to quit and re-launch System Preferences to see the effect of your script on the indicator light.

Multi-TTY support in Emacs 27

December 6, 2020

This is a good post describing the issue and the nice solution that ylluminarious came up with. The patches linked in the post no longer apply cleanly to Emacs 27.1 so I put up a gist of an updated, combined patch.

You can follow these steps to apply the patch and build Emacs 27.1 from Emacs Mac port source.

git clone https://bitbucket.org/mituharu/emacs-mac
cd emacs-mac
git checkout emacs-27.1-mac-8.1
autoreconf -i
wget https://gist.github.com/genegoykhman/6effe7fa25696c49d0519af877f5fb42/raw -O multi-tty.patch
git apply multi-tty.patch
./configure --without-makeinfo --with-modules --enable-mac-app --prefix=/Applications/Emacs.app/Contents/Resources
make install

The Best Thing About eGPUs

May 27, 2020

A couple of years ago I was considering switching from a powerful desktop/small laptop setup to laptop-only. The increasing cost of CTO Mac hardware was making me question the need to maintain and upgrade two developer-spec Macs at the same time. And I was tired of the little wrinkles in syncing configuration and data despite having automated quite a bit of it. There really is something to be said for having everything on a single machine that you can pick up and take with you.

I was hesitant to lose the performance and stability of a desktop, and dreaded the hassle of plugging and unplugging cables and dongles every time I took my machine on the road. I was also skeptical that I'd be able to get a multi-monitor retina setup working with sufficient performance given the bandwidth limitations of Thunderbolt 3 and the hot/loud/relatively slow GPUs that are built into MacBooks.

Then a friend showed me his MacBook Pro setup with the Mantiz Venus eGPU chassis and I was all-in. I've been running this way since 2018 and it's amazing.

Mantiz MZ-02 (Venus)

Mantiz MZ-02 (Venus)

Inside the eGPU chassis I use an AMD Radeon Vega 561 that can handily output to both a Dell 5K display and a second Dell 4K display2 at the same time, over DisplayPort, without breaking a sweat3.

The eGPU chassis connects to the 15" MacBook Pro using a single Thunderbolt 3 cable. And it also provides 87W of USB-C Power Delivery for charging the laptop over the same cable.

And it also provides five (!) USB3 ports, all signaling over the same cable.

And it also provides gigabit Ethernet, also over that same cable.

It gets crazier. The Mantiz chassis also has an internal bay for a 2.5" SATA drive, which communicates (say it with me) over the same cable.

Why do they even call them eGPUs? They're actually "single-cable docking stations that happen to have a PCIe slot where you can plug in a discrete GPU if you're so inclined." Or SCDSTHTHAPSWYCPIADGIYSI for short.

The hot-swappability works reasonably well and continues to improve with each macOS release. But the great undersold secret of eGPUs is the no-hassle I/O capability they can provide over a single Thunderbolt 3 cable4.

--

1 I was warned off the Vega 64 which is a little more power hungry and runs hotter.

2 I find Dell displays to be a lot nicer than the LG displays Apple promotes. I have a UP2715K (5K) which is no longer available, unfortunately, and a P2415Q (4K) which is even sharper with richer colors than the 5K.

3 I'm not sure about this but I don't think that a single Thunderbolt 3 port has enough bandwidth to support both a directly-connected 5K and 4K display at the same time. With an eGPU setup though, the bandwidth requirement between the computer and the eGPU is lower and within the threshold. The computer is sending the eGPU instructions on what to draw. Most of the bandwidth appetite is between the eGPU and the monitors, where millions of rendered pixels are pushed 60 times a second.

Thunderbolt 3 bandwidth allocation is a complex topic and different eGPU vendors have sliced up the bandwidth differently, with varying results. Intel's Technology Brief goes into the intricacies but suffice to say that not all eGPUs perform well with all setups, and even software support can play a factor. I'm told that Final Cut Pro can actually perform worse with an eGPU setup than with built-in graphics in some configurations, for example.

4 As of this writing the Mantiz Venus is sold out and different eGPUs provide different I/O options. Some provide none at all other than power for charging. Do your research.

Faster Dark Mode on macOS

May 23, 2020

There is a slight performance hit when using the Dark appearance on macOS Catalina. It feels like there is just a tiny bit of input lag across the whole OS as compared to light mode. At first I thought it was my imagination, so I was happy to hear that Guilherme Rambo and John Sundell experienced the same thing on Stacktrace Podcast 084 at around the 16:50 mark.

Better yet, Gui found a solution! In System Preferences > General, setting the Accent color (not the Highlight color) to Graphite, the last option, disables the offending video compositing or blurring or whatever else is causing the delay. Unfortunately you're stuck with boring grey UI accents, but it's worth it for a bit of extra responsiveness.

Graphite accent color

Graphite accent color

If you are a developer in the Apple ecosystem and haven't checked out Stacktrace yet I highly recommend it. This isn't the first macOS mystery they've unraveled for me.

Saving to Empty iCloud Drive Folders

February 28, 2020

iCloud Drive can be kind of funny sometimes. If you happen to see a "No such file or directory" error when trying to save a file from your iOS device, one thing to check is whether the destination folder is currently empty. If it is, adding a file (any file) to that folder through some other means might allow you to save your original file from iOS.

No such file or directory

No such file or directory

I have an "Inbox"-style folder that I like to keep empty most of the time, so it can be frustrating to get this error when I try to save something into it. My workaround is to put a 0-byte hidden file in that folder using this command from Terminal:

touch .placeholder-do-not-remove

Ensure Proper Ventilation

January 3, 2020

Fair warning

Be careful falling down the mechanical keyboard rabbit-hole because it's a long drop. This is a story about my recent adventures down that deep, dark tunnel. If you don't self-identify as a keyboard enthusiast you might want to skip this one. None of the company or product links in this post are sponsored or affiliate links.

Introduction

Matias makes excellent mechanical keyboards. For example, take the Mini Tactile Pro for Mac.

My daily driver

My daily driver

It's a Mac keyboard (it sends Mac-specific scan codes for each key, including media and system keys) and uses the Matias Click switch: a clicky tactile switch inspired by the old ALPS white, long out of production, that many prefer over today's gamut of Cherry MX-style switches and variants.

Matias Mini Quiet Pro for PC

Matias Mini Quiet Pro for PC

This is the Mini Quiet Pro Keyboard for PC. It has a beautiful piano-black case with matching black keycaps and white legends. It's gorgeous. Unfortunately it is only available with PC-legend keycaps (and PC-specific scan codes) and also uses a different, non-clicky switch, which sacrifices the satisfying clickiness of the Mini Tactile Pro.

Matias Tactile Pro 4 for PC

Matias Tactile Pro 4 for PC

This is the Tactile Pro Keyboard for PC. Matias no longer makes this keyboard but it is still available from some distributors like The Keyboard Company in the UK. It uses a slightly older version of the Matias Click switches. The feel is similar, but keypresses have a lower-frequency sound and require a bit less force to actuate. That makes them incredibly enjoyable to use, and they are, to this day, my very favorite mechanical keyboard switches. This keyboard is also PC-specific and is not available in a tenkeyless variant. If you can find it at all, that is.

Matias Laptop Pro for Mac

Matias Laptop Pro for Mac

Finally, this is the Laptop Pro for Mac. It uses the same quiet, non-clicky switches as the Mini Quiet Pro and has a somewhat bland gray case, but it has Mac-specific black keycaps and is easily available directly from Matias.

So, four keyboards, each with their distinct advantages. The Mac-specific scan codes of the Mini Tactile Pro, the beautiful black case of the Mini Quiet Pro, the just-right switches from the Tactile Pro 4, and the black Mac keycaps of the Laptop Pro. What if they could be combined into a single, perfect Black Mini Tactile Pro for Mac?

But how?

My plan was simple. I would find one keyboard of each type, take them apart, then mash them together into a single ideal specimen.

I would desolder the keyswitches from a Tactile Pro 4 for PC, solder them into the PCB of a Mini Tactile Pro for Mac, pop on the black keycaps of the Laptop Pro for Mac, plop the whole thing into the black case of the Mini Quiet Pro and I'd have the exact keyboard I wanted. Piece of cake.

The first step was the easiest: buying things. I already had a white Mini Tactile Pro for Mac that I condemned in the name of the cause. Then I bought a Mini Quiet Pro from Matias along with a set of black Mac keycaps that they obligingly sold me in the form of a used Mini Laptop Pro that had been returned to them.

I was blown away by how responsive and helpful the Matias rep was, and when I explained what I was attempting he charitably described my idea as "incredibly ambitious" rather than what my own brother said, which was "I wouldn't even try." The Matias rep even pointed me to The Keyboard Company as a possible source for the Tactile Pro 4 Keyboard for PC and I scooped up one of the eight they still had in stock at the time. Very lucky considering these keyboards are out of production and the Amazon and eBay listings I had seen for them looked pretty sketchy.

Next came the electronics supplies. My brother (and soon-to-be partner in this endeavor) correctly pointed out that desoldering the switches from the boards would be the trickiest part of all this, and suggested I pick up a desoldering tool, or even better, a full-on desoldering station.

Aoyue Professional Repair and Rework Station

Aoyue Professional Repair and Rework Station

I was getting a little squeamish about the hundreds of dollars I had already committed to this project and decided to start with a much more humble $9 solder sucker.

Wemake Solder Sucker

Wemake Solder Sucker

To that I added some standard solder and desoldering braid and we were off to the races.

Breaking things

With a deep sigh I took apart the white Mini Tactile Pro for Mac that I had been enjoying for the past few months. I extracted the white keycaps with a keycap puller and after removing two screws from the bottom it was just a matter of carefully prying the plastic case apart.

I was relieved to see that none of the internal components were glued in. The PCB can be pulled straight out and the two side USB ports (one on each side of the top of the board) can be pulled out of their headers without any difficulty.

Open Mini Tactile Pro

Open Mini Tactile Pro

This is when I hit my first setback. The baseplate of the board was white. How would that look peeking through the piano black case?

I decided instead to use the PCB of the Mini Quiet Pro (which was black) and just transplant the daughterboard from the Mini Tactile Pro for Mac... this little daughterboard is easy to remove and, evidently, provides the scan codes used by the keyboard. The white Mac Mini Tactile Pro PCB would become my desoldering practice board.

With that in mind I started trying to desolder the switches from my practice board. It looked so easy in this 3 minute video I found on YouTube.

But it was not easy. It was not easy at all. I destroyed 2 switches and pulled up some of the pads and traces on the (thankfully) practice board before I tucked my tail between my legs and called it a night.

The Trough of Despair

Anybody who does risky creative work is very familiar with the stage of so many projects which, for the purposes of this post, I'll refer to as the Trough of Despair. The initial excitement has worn off and now there are significant challenges and the real possibility of failure. And lack of confidence or clarity about what to do next.

This is not pleasant.

I was a few hundred dollars into this now, and had just destroyed my favorite keyboard on switch number two of the 81 I would have to perfectly desolder from the destination board (the Mini Quiet Pro). Plus another 81 switches from the donor board (the Tactile Pro 4 for PC). I wasn't sure what to do next.

The Rescue

Remember my "I wouldn't even try" brother? Early in his career he was an electronics test technician and he knew his way around a soldering iron. After a few other choice quotes estimating the probability of eventually ending up with a working board, he took me under his wing and taught me how to properly desolder a switch.

The actual technique wasn't too different from the YouTube video, but in practice it all comes down to experience and timing. You need to heat the solder for long enough that it doesn't just get visibly gooey on the surface, but a second or two longer so that all the solder down through the hole melts as well. You need to nail the angle at which you hold the solder sucker: straight down doesn't work because you don't get enough of a seal over the pin, but you can still get good suction at a pretty acute angle provided you position the tip close enough to the pin. That kind of stuff.

First switch removed

First switch removed

A desoldered switch

A desoldered switch

We did two rows of switches together, and although it was slow going (requiring us to, for example, re-solder some partially sucked-out pins so that we could try again) I eventually gained enough confidence to move on to the real destination board.

The Slog

Soldered pins

Soldered pins

Each of the 81 keyswitches on the destination board is held on by two pins soldered to pads on the surface. Only when both pins are almost completely clear of solder can you push the switch out. Force the switch and you run the risk of detaching a pad or some of the traces from the pad to the rest of the board. A single mistake (pushing out a pad or pulling out a trace) would likely destroy the board and bring the project to an abrupt end.

Partially desoldered destination board

Partially desoldered destination board

I did my best to be careful and the going was slow. Every switch seemed to require special attention. Desolder, nope, re-solder, try desoldering again, a bit of solder left, try the desoldering wick, and so on.

It took me an average of 5 minutes per switch but, as far as I could tell, the board was clean and there was no evidence that I had broken it. That night I fell into bed exhausted but relieved. I had made it over the Trough of Despair.

Clean destination board

Clean destination board

The Pressure

My family was out of town for a few days and, in service to the project, I had turned our small living room into a makeshift electronics workshop. I had two days left to finish up before they were due back and if there were still bits of keyboards, switches, and lead solder all over the place my set of problems would broaden considerably.

I needed to get this done, and, for the donor board, I now had at least 81 more switches to desolder, or 100+ if I wanted to desolder the whole board and pick up some spare switches just in case. That would take almost 10 hours of straight grind and I was not looking forward to it. Nevertheless, I took a deep breath and got down to business.

Our living room table

Our living room table

The Miracle

Whether it was the practice, the deadline pressure, a good night of sleep, or just luck, desoldering the switches on the donor board went much, much faster. I got into a nice groove and almost every switch cooperatively popped right out after a quick solder sucking.

I motored through the whole board saving almost every switch (a couple were lost due to bent or broken pins) and I was drinking a celebratory beer well before bedtime. I now had a clean destination board and all the switches I needed from the donor board. And as far as I could tell, I hadn't broken anything I needed yet.

The Stretch

The final day of the project went quickly and smoothly. I started soldering the switches from the Tactile Pro 4 for PC onto the Mini Quiet Pro for PC PCB and, after two straight days of gruelling desoldering practice, actually soldering the switches onto the board was laughably easy.

At least until I realized the board standoffs were positioned incorrectly and I had to redo about a dozen switches, but after I stopped crying I started laughing again.

My brother dropped by and helped me reach the finish line by soldering some switches as well. We finished up with the switches, re-attached the USB headers (which I had pulled out to keep them out of the way), and plugged in the daughterboard from the Mini Tactile Pro for Mac.

I carefully pushed the keycaps from the Laptop Pro into their correct positions while my brother made fun of me for taking so long. I reassembled the board into the piano black case from the Mini Quiet Pro and plugged it into my Mac.

Pushing in the keycaps

Pushing in the keycaps

Well, I plugged it into a USB hub which I plugged into my Mac, because I'm not ready to replace my computer just yet.

I then used the Show Keyboard Viewer menubar option provided by the macOS Keyboard System Preference pane to test each and every key. I held my breath.

The Result

Total success. Absolute victory.

Matias Black Mini Tactile Pro for Mac

Matias Black Mini Tactile Pro for Mac

Thanks to a healthy dose of luck on my part and skill and experience on my brother's, as well as the moral support of Steve from Matias (thanks Steve!) I'm typing this post on the world's only Matias Black Mini Tactile Pro for Mac (with Tactile Pro 4 Matias Click switches). The keyboard feels and sounds amazing and, as a bonus, I have enough spare parts to keep it in service for a good long while.

I understood from the beginning that things might not have worked out and I'd be left sitting in a big pile of electronics waste. I think I would still have been glad to step out of my regular software-mostly comfort zone and given this a shot. And happily, it all seems to have worked. On my desk sits a clicky black trophy of achievement as proof.

How to Lose Your Apple Health Data

October 6, 2019

Take a quick moment and make sure you don't make the same mistake I just did when replacing my iPhone. Go into Settings, tap your name at the top, tap iCloud, and scroll down to Health. Is it toggled on?

Recent versions of iOS prompt you to turn this on when you first set up your device. I think maybe because I had been restoring from iTunes backups over the past few years I never saw this prompt. Or maybe I saw it at some point and dismissed it because I was in a hurry. Either way, I was not syncing my Apple Health data to iCloud.

Even a full iCloud backup evidently does not include Apple Health data if you leave this setting off. And 5+ years of Apple Health data, including all my Apple Watch workouts, weight tracking history, ECG traces, and so on is gone.

How Many Conferences

January 29, 2019

Recently I was sharing my excitement at attending the upcoming NSNorth conference in Montreal, and possibly Release Notes in Mexico (right?!) and this led to a pretty good question. As a developer trying to justify the expense of going to these events, how many should you go to and how do you pick which ones to attend?

A few assumptions. First, if someone else is paying your conference and travel costs, try to go to as many as you can. This post is not about why conferences are rewarding and fun. Let's assume they are, and that the limiting constraint is money and time (roughly equivalent to money for an indie). So, how many per year?

Rather than contriving a cost-benefit analysis for any particular conference, or conferences in general, I'll suggest a try-it-and-see approach and say that if you have never been to a developer conference you should make it a personal goal to attend your first this year. Just one. Just to see.

If you have attended conferences in the past, enjoyed yourself tremendously and still, like me, twitch a little when you see your credit card bill when you get home, my suggestion is: one a year. Put aside the question of which one, take a deep breath, and commit to attending one conference every year. You probably already realize that it's good for you personally, good for your business, and good for your future prospects. It's fun, it's eye-opening. Go to one a year. Unless you really can't afford it, and then obviously put it aside until you can.

So you're going to a conference this year, but which one? Here's my rule of thumb: go to a conference at which you can imagine yourself speaking someday. If you have the opportunity and interest in speaking right now then go ahead and submit a proposal. But even if you're not ready to take the stage just yet, I suggest you pick a conference where someday you might. That's a great way to think about what conference is likely to hold the most value to you professionally, and where you might meet people with similar interests and values.

And don't wait too long before trying your hand at public speaking: it's absolutely the most fun way to experience a conference.

Paste as Code

October 11, 2018

Copying source code from a plain text editor and pasting it into Keynote, Pages, an Office document or some other rich text application often yields disappointing results. Where is the monospaced programming font? Where is the syntax highlighting? Why is the indentation all messed up?

This little Mac automation allows you to paste a copied code block as fully formatted, syntax highlighted code and works with many different languages. It leans on the Pygments project to do the heavy lifting. Many thanks to Brett Terpstra for pointing me in the right direction when I was getting started with this.

Step 1: Install Pygments

The required version of Python is already installed on macOS. You just need to:

pip install Pygments

Step 2: Choose how you want to invoke the automation

BetterTouchTool

Starting from version 3, it is possible to create this automation using BetterTouchTool.

I'm using ⌃⌥⌘ V as the shortcut trigger but you can use whatever you like. The shortcut has three actions:

  1. ⌘C (copy selected text to clipboard)
  2. Trigger predefined action: Controlling Applications > Execute terminal command (synchronous, blocking): see below
  3. ⌘V (paste processed text and replace current selection)

Action 2 (Trigger predefined action) is the important one.

The terminal command is:

pbpaste | awk '{ gsub(/\t/, " "); print }' | /usr/local/bin/pygmentize -f rtf | pbcopy

To try this out, select a block of unformatted code you've pasted into your rich text application, trigger the automation with ⌃⌥⌘ V, and watch as the code is first copied to the clipboard, run through the pygmentize command which converts it to nicely formatted RTF text, and then pasted back over your original selection.

Or, just copy some code from your editor, switch into your rich text application and (without selecting anything) invoke the trigger keystroke. The formatted code will be pasted in.
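The awk stage of the command can be sanity-checked on its own; it rewrites each hard tab as a space, presumably so the indentation in the resulting RTF doesn't depend on the destination app's tab stops:

```shell
# Each hard tab becomes a single space, matching the gsub in the
# terminal command above.
printf 'a\tb\n' | awk '{ gsub(/\t/, " "); print }'
# prints: a b
```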

Quicksilver

I use Quicksilver and this is how I usually Paste as Code. I've added a custom trigger that uses the Run Text as an AppleScript action to run this code:

delay 0.5
tell application "System Events" to keystroke "c" using command down
do shell script "/bin/bash -c \"pbpaste | awk '{ gsub(/\t/, \\\" \\\"); print }' | /usr/local/bin/pygmentize -f rtf | pbcopy\""
delay 0.5
tell application "System Events" to keystroke "v" using command down

This works just like the BetterTouchTool automation.

TextExpander

If you already have the plain text on your clipboard you can use TextExpander to paste the formatted version at the insertion point. Here is the snippet. Be sure to set the Content type of the snippet to AppleScript.

do shell script "/bin/bash -c \"pbpaste | awk '{ gsub(/\t/, \\\"  \\\"); print }' | /usr/local/bin/pygmentize -f rtf | pbcopy\""
delay 0.5
tell application "System Events" to keystroke "v" using command down

My TextExpander abbreviation for this is ;code

Quick tips and final notes

You can put a whole (plain text) source file on your clipboard in a hurry, and you're ready to Paste as Code.

cat my_source_file.js | pbcopy

If the snippet you copy is too short, semantically ambiguous, or in an unsupported language, pygmentize will not be able to apply correct syntax highlighting. In that case you'll get monospaced text without the colors. If you often find yourself pasting short snippets of, say, bash scripts that pygmentize has trouble detecting, you can create a bash-specific shortcut using this command line:

pbpaste | awk '{ gsub(/\t/, " "); print }' | /usr/local/bin/pygmentize -l bash -f rtf | pbcopy

In this case the -l bash forces pygmentize to color the snippet as a bash shell script and skip auto-detection. Here is the list of supported lexers.

If you find Paste as Code useful and have any ideas for how to make it better please let me know.

The iPad Productivity Paradox

June 10, 2018

Let's assume for a moment that, whatever your "real work" is, it is possible to get it done on an iPad. Let's also agree that this work is more fun to do on an iPad. But the process is slower, especially when you've got a lot of it to do.

I've been trying to make a meaningful go of moving as much of my non-development work as I can to an iPad Pro 10.5" recently. This includes e-mail, pulling down and annotating PDFs from various places, interacting with online services including JIRA, GitHub, etc., moving files around the web, web research, scanning and filing paper documents, buying things online, writing, communicating on Slack, social media and so on.

I can do most of this on an iPad, and I enjoy the process. But it's slower. It feels slower, and the more time pressure I'm under the more it feels like moving through molasses. It seems irresponsible to indulge in this tap-and-drag lifestyle.

In no particular order, keyboard shortcuts/navigation/remapping, automation and scriptability, incredible text editors, large monitors, windowing, a shell, clipboard history, Hazel/Alfred/Quicksilver/whatever: on the Mac I can crank a lot of virtual widgets very quickly.

My workflows are evidently not ready for post-PC computing. It would help, for example, if more websites behaved properly in mobile Safari or Chrome. Or if the legacy apps I need to use inside Windows VMs had good native iPad or web-based options. And they will eventually.

I bet that much of what I do will morph into some combination of server-side cron jobs, AI-automated processing, and maybe progress on what we as a society expect from a small business (not much faxing in my workflow anymore). But right now it feels like I've become the human glue to tie these things together with taps and drags and incessant rearrangement of window splits.

This is fine, even good, when I'm sitting on the patio with a proverbial glass of Chardonnay, twirling my Apple Pencil in my off-hand. But when the heat is on, my iPad gets put aside.

Don't Give Up

June 5, 2017

It's my birthday today. I'm getting older and so are my friends and family. You are too, probably. It's getting harder to eat well, exercise and stay in shape.

Why is it getting harder? Everyone has their own reasons and they're good ones. The reasons become more convincing as we get older and busier. That's not what I want to talk about right now. I've started to notice something with people I care about that deeply concerns me.

When we were young we took staying in shape mostly for granted. We ate and drank what we wanted and let metabolism, good luck, and naturally active lifestyles handle it for us. As we got a little older we found we had to put some effort into staying healthy. For some of us that effort became an ingrained set of habits and values. For others it didn't.

Now, as we're continuing to age, my friends are noticing that their weight is out of control. They drink too much, by anyone's definition of "moderate." They are being diagnosed with lifestyle-related diseases. They're on medication to manage chronic illnesses. And one daily medication is becoming two is becoming three.

Maybe you're in the same boat, or maybe you will be someday. Will you shrug your shoulders and accept the inevitable decline of your health with age? Will you give up?

Here's something else I've noticed. Our bodies, and our overall health, are incredibly adaptable. If you can walk 100m today and walk every day, you will soon be walking 1000m. And not long after that, 10 km1.

One-way, unfortunately

And then you'll start running. And maybe you'll only run 1 km the first time and take breaks, but if you run every day soon you'll be running 5 km without breaks. And then 10 km.

You'll find that you've started losing weight, and that you're sleeping better and have more energy. And that will motivate you to start eating a little better. Maybe cut down a bit on the drinks. And you'll get healthier. Eventually. Because the human body will adapt, if you work at it every day and give it time.

As long as you don't give up.

About three years ago I was inspired by The Healthy Programmer to start walking, then running, then swimming. A couple of weeks ago I (slowly) finished an indoor 1/2 triathlon. Later this year I'm hoping to complete a full triathlon indoors. I never thought that I'd be able to complete a triathlon, but now it is the next step in a progression of goals set and achieved.

Not breaking any records, but hey.

The body is like that. You can set goals and make daily progress towards them. Sometimes you will hit plateaus and sometimes you will regress, but over the long term you will get healthier. Everybody starts from a different place, but you can always make progress. No matter where you are right now there is a next step, and a step after that.

Just don't give up.

--

1 328 feet, 0.6 miles, and 6.2 miles respectively.

Hey Siri on the Apple Watch

May 30, 2017

People seem to have trouble with this so here is the trick to using Hey Siri on the Apple Watch.

  1. Flip your wrist up so that your watch face appears before saying Hey Siri. Starting from a blank watch face doesn't seem to work very well (or at all).

  2. As soon as the Siri waveform UI appears, continue with your request. Do not wait for Siri's audio or haptic feedback. When you see the waveform, continue speaking.

Continue speaking now

The combination of these two steps makes Siri on the Apple Watch much more reliable for me.

Networking at WWDC

March 31, 2017

Congratulations to everyone who won the chance to attend this year's Apple World Wide Developer Conference in San Jose. My first (and so far, only) trip to WWDC was in 2016 and it was an amazing experience.

A good friend going for the first time this year asked me what I thought about WWDC as a place for business networking. He is a freelance iOS developer and is always on the lookout for new opportunities to work on interesting projects. My advice, based on my experience last year, boils down to this:

  1. I didn't meet many people who were actively looking for contractors to help with their iOS project. I don't really remember meeting anyone who was looking for technical co-founders or partners either. I'm sure these opportunities are there, but I wouldn't say they are common at WWDC.

  2. There are many other contractors at the conference. Chatting with them is fascinating (some of the stories are pretty good) and could potentially lead to sharing of opportunities in the future. I met at least a few people to whom I've passed on contracting leads I've come across since.

  3. The social aspect of WWDC I enjoyed the most was getting to meet in person all the people with whom I've had some interaction in the Mac and iOS developer community. People with whom I've argued on Twitter, whose podcasts I enjoyed, whose libraries I've used, and some who currently do, once did, or would soon begin, working for Apple itself.

This third type of interaction will probably not lead to a direct business opportunity. But membership in the Apple community is the greatest reward for choosing to develop for the platform. There are real people behind all the work you see and use, and once a year you can shake their hand, buy them a beer, and talk shop for a bit.

So if I was going this year, I think I would seek out the third, be open to the second, and not worry about the first type of networking in my list.

Making Xcode more like Emacs

October 13, 2016

You might already know that macOS has great system-wide support for Emacs text editing shortcuts. And you might also know that you can customize that support to add your own custom shortcuts. In fact, there is a nice pre-built file of additional Emacs bindings compiled by Jacob Rus that I recommend.

Xcode ignores these customizations.

Luckily you can add your own similar customizations if you're willing to dig into the Xcode application bundle. Here's the file to edit:

/Applications/Xcode.app/Contents/Frameworks/IDEKit.framework/Resources/IDETextKeyBindingSet.plist

It gets replaced when you swap out your Xcode binary so be prepared to copy your modified version back in from time to time. Here's a snippet from mine, appearing directly under the </dict> closing the <key>Writing Direction</key> section.

<key>Custom</key>
<dict>
    <key>Move to non-whitespace beginning of line</key>
    <string>moveToBeginningOfLine:, moveSubWordForward:, moveSubWordBackward:</string>
    <key>Delete current line in one hit</key>
    <string>moveToEndOfLine:, deleteToBeginningOfLine:, deleteToEndOfParagraph:</string>
    <key>Insert line above</key>
    <string>moveToBeginningOfLine:, insertNewline:, moveUp:</string>
    <key>Insert line after end of current line</key>
    <string>moveToEndOfLine:, insertNewline:</string>
</dict>

When you restart Xcode you will find your custom commands available in Xcode > Preferences > Key Bindings > All. You can bind keyboard shortcuts here.

Bind keyboard shortcuts

Happily the keyboard bindings seem to persist through Xcode updates, so just copy your IDETextKeyBindingSet.plist file back in after updates and you should be good to go.

Limitations

It seems that the methods available to us for the definition of custom text editing shortcuts (like deleteToBeginningOfLine:) are all instance methods on NSResponder. So many of the more useful Emacs features (like Moving by Defuns) might be impossible to implement. If you can think of a way to do this, I'd appreciate hearing from you.

Three Products I Hope Apple Makes but They Probably Won't

February 10, 2016

Apple's hardware releases last year have felt like shots across my credit card's bow: things I almost like but not enough to buy for myself. Here are three dream products I would buy in an instant, if only Apple made them.

A round Apple Watch

I am so sad that Apple has committed to a rectangular watch face. I've never seen a rectangular watch I liked. I understand the difficulties in making a good circular UI, but if anyone could crack it Apple could. The next Apple Watch will probably be faster and thinner and may have better battery life. But a round face would seal the deal for me.

An 11" Retina MacBook Air

My 2013 11" MBA is probably my favorite Apple computer of all time1. Literally the only thing I'm missing is the gorgeous retina display on the new 12" MacBook. A Skylake CPU update for slightly longer battery life would just be icing on the cake: everything else on the 12" MacBook feels like a downgrade. But with the 12" MacBook and rumors that the next 13" Pro will be thin and light enough to supplant the 13" Air, it looks like there might not be enough room in the lineup for an upgraded baby Air. I hope I'm wrong on this.

An iPad mini with Apple Pencil support

I love the iPad mini's 7.9" form factor for inside-pocket portability, but without native stylus support I've switched to a Samsung Galaxy Note 8 for stylus notetaking and sketching on the go2. The Samsung S Pen technology is exceptional, and inking on my 2013 Android tablet feels similar to using Intuos tablets and the Microsoft Surface pen3.

The Apple Pencil is at least as good if not better, but is currently only available on the ginormous iPad Pro, in which I have no interest. There are rumblings that the Pencil will make its way down to the 9.7" iPad Air but I hope Apple takes it all the way down through the iPad line. I'd love to come back to iOS for my portable tablet needs.

Notetaking on a Samsung Galaxy Note 8

They Probably Won't

I'm not holding my breath on any of these. In each case it feels as though Apple has made a deliberate decision to move in a different direction. Hope springs eternal though, and you can never put it past Apple to surprise us with something we hadn't even imagined.

--

1 I was lucky enough to have both an Apple ][+ and an original Macintosh 128k as a kid and they were awesome but I wouldn't trade my Air for them now.

2 I use Microsoft OneNote, which is free, supports mixing inking/sketching with typing, and syncs beautifully across iOS, OS X, and Android. And presumably Windows too.

3 I believe the Samsung Galaxy Note devices and the Microsoft Surface both use the same or very similar WACOM digitizer technology as that found in Intuos tablets. All attempts to make a good capacitive stylus (passive or Bluetooth) for modern devices pale in comparison: they all provide a completely different and wholly inferior experience.

14-day Week View in Calendar

February 7, 2016

Calendar (formerly iCal) has become increasingly stubborn about the number of days shown in the Week view. In OS X 10.7 and earlier it was possible to expose a Develop menu to bump the week view to my preferred 14-day range, but that went away in 10.8. You could still defaults write your change, but sometime between 10.8 and 10.11 that stopped working too.

14-day Week View

With a little spelunking in ~/Library/Preferences/com.apple.iCal.plist, I found the appropriate setting. Close Calendar, and type into Terminal:

defaults write com.apple.iCal "n days of week" 14

Reopen Calendar in Week view and you should see a 2-week window. To restore the original behaviour you can simply close Calendar and type:

defaults delete com.apple.iCal "n days of week"

Aah. That's better.

Pile of Poo

February 3, 2015

Here's a little toy I put up on GitHub that will make a young person in your life smile. It's a secret code generator in Ruby. Run the script with the plaintext and you get back an HTML puzzle page "encoded" using a substitution cypher made up of a randomized subset of the emoji character set.

Sample output

Open the HTML in an emoji-friendly browser (like Safari) and print it out. Good times, and completely immune to the POODLE vulnerability.
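The real generator is a Ruby script and uses emoji, but the core idea, a randomized substitution table applied character by character, can be sketched in shell with ASCII stand-ins (the emoji version needs a multibyte-aware tool, which `tr` is not):

```shell
# A toy substitution cipher: shuffle the alphabet into a cipher
# alphabet, then map characters with tr. Requires shuf (GNU coreutils;
# on macOS: brew install coreutils).
plain="abcdefghijklmnopqrstuvwxyz"
cipher=$(echo "$plain" | fold -w1 | shuf | tr -d '\n')

encode() { echo "$1" | tr "$plain" "$cipher"; }   # plaintext -> puzzle
decode() { echo "$1" | tr "$cipher" "$plain"; }   # puzzle -> plaintext

secret=$(encode "meet me at recess")
decode "$secret"   # round-trips back to the original message
```

Characters outside the alphabet (spaces, punctuation) pass through unchanged, which is what makes the printed puzzle solvable by hand.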

My Priorities

January 21, 2015

Once in a while I am able to pull my head above water level and reflect on my work priorities. When I start making the list it is always longer than I thought it would be. My reach exceeds my grasp, sometimes in a healthy, optimistic way and sometimes less so. It's a useful exercise: what are my real project priorities, and what does that even mean?

This is the working definition I've come up with:

Your priority projects are the ones you touch every day. Everything else is a hobby.

There is nothing wrong with having lots of hobbies. Still, this is a bitter pill for me to swallow because I am involved in so many different things that I (supposedly) care about. There are only a few things that I work on, even just a little bit, every day. If I am honest with myself, those are my priorities.

My Problem with Objective-C Dot Syntax

March 3, 2014

Say you have a class Employee with the property CGFloat salary. Here are two choices to set this property from inside an instance of the class.

_salary = 35000.00;

or

[self setSalary:35000.00];

These do very different things. In the first case you directly set the instance variable (ivar) and do not trigger any observers or other hooks that may be watching salary for changes.

The second is actually calling the method setSalary, whether defined explicitly in your code or generated for you by the compiler. It is also triggering anything in the framework that may be set up to detect changes to the property.

In general, you would always want to use the second form rather than the first. Something out there might be expecting to be notified when salary changes, and if you set it directly you are doing an end-run around one of the things that makes programming in Objective-C so nice. Sometimes though, you actually do want to use direct ivar assignment.

Now, consider a third way of setting the salary.

self.salary = 35000.00;

Is this like the first form (direct ivar assignment) or like the second (property assignment)?

Integrating terminal Vim with Finder [UPDATED 2015-02-10]

February 24, 2014

MacVim is a great choice for Vim on the Mac. It is a pre-built GUI wrapper with a lot of nice extras that smooth the infamous Vim learning curve. But the Vim experience is better in its native habitat1, the terminal, and like others, I grew to prefer text-mode Vim.

I hate losing Finder integration though. I want to be able to double-click on a text file and have it open in my editor, preferably reusing an existing Vim session if available. This is non-trivial to set up with terminal Vim. The approach I'll describe here is fiddly, hacky and a work in progress. But it does work.

XQuartz

Might as well get the hard part over with first. Check to see if you have Vim compiled with the necessary options:

vim --version

Do you see +clientserver, +X11, and +xterm_clipboard? I'm guessing you do not.

Here's the thing. If you want this procedure to work you're going to need to install and run XQuartz in the background, all the time. I have mine set to run at startup and I leave it running while I use my machine. I'd prefer not to need this because I don't use X11 for anything else, but it is fundamental to this configuration so I've learned to live with it.

Go ahead and install XQuartz now, because you'll need it on your system before re-compiling Vim.

UPDATE 2015-02-10: OS X 10.10 Yosemite

You'll need to link the X11 include and library directories to where the vim configure script can find them:

sudo ln -s /opt/X11/include /usr/include/X11
sudo ln -s /opt/X11/lib /usr/lib/X11

Compile Vim from source

You can't call yourself a programmer until you've compiled your own editor from source, right? Well, here we go.

Download the Vim source and configure. Here are the options I use:

make distclean
./configure --with-features=huge \
                --enable-rubyinterp \
                --enable-pythoninterp \
                --enable-luainterp \
                --enable-cscope \
                --enable-gui=gtk2 \
                --disable-darwin

You're disabling the available OS X integration (--disable-darwin) in favour of XTerm integration (--enable-gui=gtk2). That flag will be ignored by the configure script if you haven't installed XQuartz, so scroll through the configure output and confirm that your --enable-gui=gtk2 wasn't discarded.

Now:

make
sudo make install

And check that everything worked with:

vim --version

You should now see +clientserver, +X11 and +xterm_clipboard.

tmux

If you're this far down the text-mode rabbit hole you have probably already installed tmux. But if you haven't, download and install it from source.

iTerm2

Here's one you don't have to recompile. A long time ago I thought that the built-in terminal on OS X was fine and that I didn't need a souped-up terminal, but I was wrong. Get iTerm2.

Now you want to set up an iTerm2 profile specifically for creating/connecting to a tmux session and a Vim server. Here's a screenshot of what I've done:

iTerm2 profile

iTerm2 profile for launching tmux and Vim

That full command-line is:

/bin/zsh -c "tmux attach -t TMUX || tmux new -s TMUX '/usr/local/bin/vim --servername VIM'" 

Substitute your shell of choice if you haven't got religion on oh-my-zsh yet.

We are not even close to finished yet.

AppleScript

Now you want a script to actually tell iTerm2 to start a new terminal session using this profile, or just activate an existing one if available. Open up AppleScript Editor and type the following.

tell application "iTerm"
    activate
    repeat with theTerminal in terminals
        repeat with theSession in sessions of theTerminal
            if (name of theSession contains "tmux") or (name of theSession contains "vim") then
                set current terminal to theTerminal
                select theSession
                return
            end if
        end repeat
    end repeat

    -- No tmux or vim session yet. Start one.

    set theTerminal to (make new terminal)
    tell theTerminal
        launch session "VimServer"
    end tell
end tell

I've saved mine into ~/bin/open-tmux-or-vim.scpt.

This script is actually pretty handy, and I've bound it to Ctrl-Option-Cmd-V (using a Quicksilver trigger, but you can use whatever you want).

Automator workflow

Now we're going to make an application (actually just an Automator workflow) that will accept the files we've selected or double-clicked in the Finder and hand them off to the running Vim server we started (or will start). Fire up Automator and create an application as follows:

Automator workflow

Automator workflow for opening from Finder

I know what you're thinking. Sleep 0.5? Really? This is number one on the To Do list for future improvement. But for now, I've hardcoded a delay so that Vim has a chance to set up a listening server to accept the --remote-tab-silent we're issuing. Without the delay the Vim server won't have launched yet.
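One way to remove that hardcoded delay, sketched here under the assumption that your Vim has +clientserver (so `vim --serverlist` prints running server names): poll until the server appears instead of sleeping blindly. The `wait_for` helper is hypothetical, not part of the workflow above.

```shell
# Hypothetical replacement for the fixed "sleep 0.5": retry a command
# until it succeeds or we run out of attempts.
wait_for() {
  local tries=$1; shift
  until "$@"; do
    tries=$((tries - 1))
    [ "$tries" -le 0 ] && return 1
    sleep 0.1
  done
}

# In the workflow, instead of sleeping, something like:
#   wait_for 20 sh -c 'vim --serverlist | grep -q VIM' &&
#     vim --servername VIM --remote-tab-silent "$f"
```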

I've saved mine into ~/bin/OpenWithVim.app.

Associate file extensions

Right-click on a .txt file in the Finder and choose Open With... > Other.... Enable All Applications and navigate to where you saved OpenWithVim.app. Click Add..., then click Change All... and Continue.

Try double-clicking on this file, or any other .txt file. It should open in terminal Vim within a tmux session.

Bonus points

If you want to associate lots of extensions and (like me) want to do this on several Macs, the Open With... dance is too cumbersome. Duti is a great solution to this, and I've mapped a Vim command to edit and trigger my .duti.conf file so adding new associations is a breeze. The bundle ID for the automator action you just created, by the way, is com.apple.automator.OpenWithVim.
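For reference, a duti settings file is just lines of bundle ID, file type (UTI or extension), and role. The lines below are illustrative only, written from memory, so verify the exact format against `man duti` before relying on them:

```
com.apple.automator.OpenWithVim public.plain-text all
```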

Conclusion

If you've read this far and come to the conclusion that you should probably just use MacVim, I don't blame you. But I'm loving this setup right now and expect it will only get better.


1. There are good reasons for this, and they won't sway you until you've experienced it yourself. Building your own Vim is convenient because you can enable all the features you want and you can stay up to date. But mostly, there is a gestalt that occurs when you're working solely with pure-keyboard, pure-text tools. The allure is so strong that I'm considering switching to text-mode, tmux-friendly mail, Twitter, and messaging clients.

NIB-less

July 7, 2013

Every once in a while, usually whenever Apple makes significant changes to Xcode or AppKit/UIKit, an old debate flares up in the Mac and iOS development community: is Interface Builder still a good place to implement UI, or should you do everything in code?

When you're first starting out with Mac or iOS development this question seems like a no-brainer. Most of the tutorials you'll see are IB-centric, and intuitively it seems like it would be much nicer1 to drag buttons and table views out from an object palette, wire them up, set a few visual properties, and then sangria on the deck.

That's how I started but I've since become a code-only convert. Looking back I wish that I had started learning Cocoa with a code-only approach. Although Apple sends the message that IB is appropriate for both new developers learning the frameworks and experienced developers who want to pump out UI in the most time-efficient way possible, I actually think that it's not great at either.

First, the time efficiency. I can't argue that dragging a button out of an object palette and wiring it up to an IB outlet is fast, easy, and kind of fun. But how much of real-world UI is actually that simple? When you get to composite, custom views in contained view controllers, custom layer-backed views, on-the-fly swapping of collection view layouts and so on, you're doing most of the heavy lifting in code anyway and IB just gets in your way. The number of times I have to click through a view hierarchy just to select the right thing is almost as annoying as the opaque consequences of accidentally dragging something out of alignment by a few pixels.

So yes, easy to create trivial UI but difficult to create complex UI and painful to maintain and update it over the lifecycle of the app. It is extremely difficult to understand the IB manifestation of someone else's complex UI. And this is even more true when using Auto Layout2: fighting with IB to set the correct constraints is sufficiently difficult, but actually maintaining the correct constraints while making updates or enhancements to a UI can be time consuming and sometimes utterly baffling.

But if I had started learning with a code-only approach, I would have had an easier time of it in a few ways. First, AppKit/UIKit, unlike some other UI frameworks, is really code-efficient3. You can instantiate a button, set its title, disable auto-resizing mask translation and add it as a subview in 4 lines of code. A few more and you've set layout constraints on it. If you've been using UIAppearance you may not have to do any additional UI styling at all.

Sure, you might have 4 or 8 more lines of code than you would have with an IB-based approach, but look at what you've gained:

  1. You've put the button in the right place. Doing everything in code encourages more thoughtful object encapsulation, avoiding a lot of the header-bloat and insufficiently opaque classes you can get from IB if you're not careful.

  2. You've learned how to keep your code DRY4. The desire to keep the UI code to a minimum will probably motivate you to learn UIAppearance and other best practices. And you'll never again have to walk through your NIBs trying to make sure all your controls use the same styling.

  3. You can play the Auto Layout mini-game. It's fun and educational.

--

1. I've thought a lot about why I initially subscribed to this theory and I'll summarize as follows: first, every book and tutorial I'd seen used IB for layout. Second, I used to do a lot of work in Visual Basic, so drag-and-drop layout was comfortable for me and I had positive associations with it. Third, modern non-Apple UI frameworks that offer imperative (code-based rather than visual) layout as an option, like Microsoft's WPF, are a horrendous, hacky, painful mess. I currently work on a pure-code WPF-based UI and it's not fun. So the assumption was that they must all be this painful, and that visual layout would almost certainly be better.

2. Auto Layout is what prompted my switch to code-only UI development.

3. Not everyone agrees with this assessment, and I haven't tried all the different web and native frameworks out there, but it's pretty damn efficient for what I'm used to.

4. Don't Repeat Yourself. The programming principle of not duplicating the same snippet of code, instead centralizing it and calling it from multiple locations as needed. DRY code tends to be less prone to error and easier to maintain and extend. Copy-paste code, on the other hand, tends to be a maintenance nightmare.

My RSS Setup

June 30, 2013

Barring any last-minute stay of execution, Google Reader will stop working tomorrow (Monday, July 1), so this would be a great time to pull your feed list off the service if you haven't already.

As predicted, the last few months have seen a whirlwind of RSS development and announcements as the Mac and iOS community have scrambled to fill the impending void. There are a number of great options now and they've been detailed at length in other articles. Here is what I decided to go with.

On the server-side I am self-hosting the single-user Fever aggregator. Fever is innovative in its presentation and ranking of articles, and this was its initial differentiator when it entered a market that still housed the 800-pound gorilla. I don't really care about that though.

For me, the low one-time cost and compatibility with Reeder make it a great choice. As a bonus I can finally (finally) subscribe to feeds from my client application, which I previously had to sign into Google Reader's web app to do.

On my iPhone I'm sticking with Reeder. Its gorgeous UI and reading experience hasn't changed much in the last couple of years, and the fact that it hasn't needed to is a testament to the strength of its design.

I wish I could use Reeder on my iPad, but it has been lagging behind the iPhone version and doesn't yet offer Fever support. This is expected to come soon, and the renewed interest in non-Google RSS solutions is hopefully a motivating factor for the developer. I have a sinking feeling that nobody's getting rich off RSS, even with Google gone, so whether any particular product or service improves probably still comes down to the developer's internal motivation. In a nutshell: Go Silvio Go!

For now, I am using Sunstroke, an excellent Fever-specific universal client that is honestly perfectly suited to the task.

Sunstroke for iPhone and iPad. Source: goneeast.com/sunstroke/

I don't read feeds on my Mac so the web/desktop reading experience is a non-factor for me.

There is concern in blogging circles that Google Reader's hasty retirement is going to dramatically reduce RSS readership. Readers might not bother to find a replacement solution, especially with how fragmented the market has suddenly become and the complexity of choosing both a server-side and possibly multiple client-side tools. People might just gravitate to other sources of news instead.

I hope that this worry is unfounded: reading RSS feeds is still my favorite way to keep up with my (unevenly curated slice of the) world.

New Feed URL

May 1, 2013

As of today, Flaky Goodness can be found at http://goykhman.ca/gene/blog. Please update your bookmarks and subscriptions accordingly.

The Creamiest TextView in the App Store

April 14, 2013

I'm very pleased to announce the launch of my "weekend project" about a year in the making. Indigo In is now available on both the App Store and Mac App Store for all of your Macs and other Apple devices, and it's free. Enjoy.

Indigo In on iPhone

There is no shortage of simple notetaking apps for either Mac or iOS, so I think it's worth mentioning what makes Indigo In a little different.

First, there is exactly one note. When you launch In you are editing that note. You never have to exit a note, create a new note, delete old notes, organize or tag your notes, etc. This makes for a really nice ubiquitous capture experience.

Launch In, start typing1. If you haven't yet gotten religion on ubiquitous capture, it's like the move from Windows to Mac: few who make the switch look back.

Second, the sync. This is why a weekend prototype took a year to hit the App Store. My original vision was simple: one page of notes constantly in sync between my Mac, iPhone and iPad. To make that happen took more than I expected. I might go into the technical details in the future, but for now I will say that I'm pleased with the result. Tap ideas into Indigo In on any of your devices and they will appear on all your other devices within a few seconds.

There are other nice things about Indigo In. The sharing feature (a $1 in-app purchase available only on the iPhone/iPad version of In) is handy for whipping through what you've captured and actually doing something with it. If you have a favorite app you'd like as a Share destination please suggest it on Twitter @indigoinapp.

Also, there is no sign-up or sign-in: everything is synced through iCloud, to which your device is probably already connected. That invisible login experience was the reason I stuck with iCloud despite its challenges.

Indigo In is a classic scratch-my-own-itch project and I'm really happy to see it spread its wings. It's in each of my docks and I use it constantly. I hope you find it useful too.


1. Try voice dictation with In. This feels especially Star Trek.

Using Panic's Status Board to monitor TimeTiger data

April 11, 2013

Yesterday renowned Mac and iOS developer Panic launched their beautiful new Status Board app for iPad. It lets you display real-time metrics from various public and personal sources in resizable, rearrangeable widgets on the iPad screen and optionally mirror that display to an HDTV or monitor hanging in your office. This is not a new idea, but is implemented elegantly and in an extremely user-friendly way. You can read iMore's full review.

Panic cleverly incorporated some simple interfaces for providing custom data sources to show in Status Board. This kind of thing is pure catnip for developers, so I took a few hours1 yesterday to put together a simple tool to publish TimeTiger time data in a Status Board-friendly way.

Sample TimeTiger Metrics

I've posted the tool on GitHub. It is a .NET 2.0 application written in VB.NET (Visual Studio 2005 and up) and uses the public TimeTiger SDK. It doesn't include any scheduling capabilities yet and provides only six simple reports, but serves as a great starting point for rolling your own real-time TimeTiger time and project Status Board layout.
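Status Board's custom graph panels are fed by a small JSON document fetched over HTTP. As a rough illustration of the kind of payload such a tool emits — sketched in Python rather than the tool's VB.NET, with field names recalled from Panic's Status Board documentation (treat them as an assumption) and the project data invented:

```python
import json

def status_board_graph(title, series):
    """Build a Status Board-style graph payload.
    series maps a project name to a list of (label, hours) pairs.
    Field names ("graph", "datasequences", "datapoints") follow my
    recollection of Panic's format and may not match it exactly."""
    return {
        "graph": {
            "title": title,
            "datasequences": [
                {
                    "title": project,
                    "datapoints": [{"title": label, "value": hours}
                                   for label, hours in points],
                }
                for project, points in series.items()
            ],
        }
    }

# Hypothetical data, standing in for numbers pulled via the TimeTiger SDK.
payload = status_board_graph("Hours by project", {
    "Status Board tool": [("Mon", 1.2), ("Tue", 2.2)],
})
print(json.dumps(payload, indent=2))
```

A real data source would serve this from a URL that you paste into a Status Board graph panel.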

TimeTiger Status Board tool

Sound interesting? We can help you get started using the TimeTiger SDK or adapting this specific tool to your needs.


1. According to TimeTiger, 3.4 hours from initially purchasing Status Board to committing working code and the screenshots you see here.

The Softphone that Gene Built

April 7, 2013

This is the IAX softphone that Gene built, so that he could make low-latency1, almost free phone calls from his computer anywhere in the world.

BlueVoice (original UI)

This is the wireless headset that Gene uses along with the IAX softphone that Gene built.

Plantronics CS-50 USB Headset

This is the wireless headset that Gene bought to replace the first one because a Mac OS X update broke the USB compatibility on which the headset's wake-from-sleep depended.

Plantronics Savi W440 Headset

These are the open-source audio processing libraries on which Gene's softphone depends, that haven't been updated since 2008, and that Gene now has to port to 64-bit Mountain Lion.

IAXClient library on SourceForge

So that Gene can actually build his softphone again.

To make low-latency, almost free phone calls from his computer anywhere in the world.


1. I've never been happy with the latency that seems to be endemic to SIP softphones. Building my own softphone based on the Asterisk IAX protocol allows me to leverage the native transfer capability of Asterisk, providing extremely low-latency point-to-point phone call connections that I wasn't able to achieve with anything off-the-shelf.

For the Most Part

March 29, 2013

iCloud has been getting piled on recently by the Mac and iOS developer community and that's too bad, because I think that for the most part1 it delivers on the promise of effortless, ubiquitous multi-device synchronization. There are bugs and edge cases to be sure, but it has been getting better2 and I believe that there is light at the end of the tunnel3. I may be naive, but I believe that most of iCloud4 either does or will soon "Just Work."

iCloud incorporates multiple sync technologies of which Core Data sync is just one. The promise of Core Data sync specifically is too great to ignore and it has lured many developers, including myself, into implementing solutions that depend on it. At some point or another, before, during, or (oh god the humanity) after release, iCloud Core Data sync falls out from under you and you must consider abandoning your project or using an alternative synchronization approach or even platform.

After my app was rejected for using (the dead simple and highly reliable) iCloud key-value store, I did in fact re-implement using Core Data sync. And I got it working, mostly. Then I hit one of those "falls out from under you" scenarios.

It has been suggested that iCloud Core Data is not appropriate for complex database schema with dependent relationships, integrity constraints, and so forth, but is quite useful for simple models. Based on my experience and the trivial example I put together to demonstrate my problem, I do not believe that to be the case.

But I could be wrong.

I've posted a 3-minute video that demonstrates the problem. It's possible that the bug is mine and mine alone, so here is the GitHub project if you'd like to take a look yourself. I would appreciate a pull request that actually resolves the errant behaviour rather than just works around it with a clever hack.

If this is a bonafide iCloud bug though, it's pretty bad. Given how trivially simple the use case is it's hard to imagine using iCloud Core Data sync in any shipping app5 until it improves. Here's the bug report, submitted to Apple as rdar://13192714.

As for my project, I've re-implemented (for the third time) using plain old text files in the plain old iCloud ubiquity store and have submitted it for review. It's working great: just as well as the original key-value store implementation. More on that soon.


1. Except for Core Data sync.

2. Core Data sync has not been getting better.

3. There is no light where there is Core Data sync.

4. Not the part with Core Data sync.

5. I know of some shipping iCloud Core Data apps, at least one of which is quite popular. I assume that their use cases manage to walk a very fine line between the issues others have and are continuing to report. There is no way to know whether any particular use case, no matter how simple, will be so blessed.

Be Careful What You Start

January 27, 2013

In April 2012, coming off another technically interesting but commercially flopped side-project, I was in the process of convincing myself that I should really be sticking to my knitting when I came up with a cool idea for, yes, another side-project.

Some People Just Don't Learn

The concept was simple and resonated with my own needs, and I figured I could whip up a prototype fairly quickly to see whether it would be useful to myself and others.

It was.

So, with a working prototype in hand I (once again) imposed upon the awesome talents of Colleen Nicholson to come up with an app icon, and set about bringing this thing to market.

I'll do it over the weekend.

— Bill Gates

How long would it take? A couple of weeks to Beta, maybe a couple of weeks in Beta, and then submission to the Mac and iOS app stores. Launch in June, margaritas on the beach by Canada Day1.

The core UI was dead-simple. The magic was in the iCloud-based synchronization on which the whole thing hung. A little research and a little testing and I decided on the iCloud Key-Value Store API. Although primarily intended for storing user preferences and configuration settings, the API was sufficient for my needs and it was2 the simplest and most reliable iCloud sync option.

I needed to build a fair bit of intelligence on top of that layer. The original schedule was washed away by reality but as the weeks and months rolled past I eventually got the sync behaviour I wanted. This was no fault of iCloud: the Key-Value Store API was rock-solid and acceptably performant for me throughout my testing, but I had a lot of kinks to work out in my own code. It was November before I finally submitted the iOS and Mac apps for review.

Prior to the completion of the review process, a Beta tester discovered a show-stopper and, after messing around with the code for a bit, I decided to pull the binaries until I could get it right. Sync is hard, and if you screw it up people are going to lose data.

I spent the next few weeks hacking away on an experimental branch of my sync code, digging deeper and deeper until I could no longer see daylight. Throwing my hands up just before the Christmas holidays, I decided to shelve it until 2013.

Refreshed and re-energized in the new year, I took a step back and started writing lots of automated tests against my sync logic. One of my frustrations had been my inability to identify regressions and test edge-case conditions quickly enough. So, as you do, I implemented a detailed simulation of the actual iCloud Key-Value Store against which I could test my sync logic.
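To give a flavour of the approach, here is a heavily simplified sketch (mine, not the actual simulation, and far less detailed) of a fake key-value store with per-device caches and last-writer-wins merging — which is roughly how the real store resolves conflicts:

```python
import itertools

class FakeKeyValueStore:
    """Toy iCloud key-value store: one cloud dict plus a local cache per device.
    Conflicts resolve last-writer-wins via a global write counter."""

    def __init__(self):
        self.clock = itertools.count(1)
        self.cloud = {}        # key -> (stamp, value)
        self.devices = {}      # device name -> local cache

    def add_device(self, name):
        self.devices[name] = {}

    def set(self, device, key, value):
        # Writes land locally first; nothing reaches the cloud until a sync.
        self.devices[device][key] = (next(self.clock), value)

    def get(self, device, key):
        entry = self.devices[device].get(key)
        return entry[1] if entry else None

    def sync(self, device):
        # Exchange with the cloud: for each key, the higher stamp wins both ways.
        local = self.devices[device]
        for key in set(local) | set(self.cloud):
            winner = max(local.get(key, (0, None)), self.cloud.get(key, (0, None)))
            local[key] = self.cloud[key] = winner

store = FakeKeyValueStore()
store.add_device("mac")
store.add_device("iphone")
store.set("mac", "note", "buy milk")
store.sync("mac")
store.sync("iphone")
print(store.get("iphone", "note"))   # buy milk
```

Two fake devices like this let you script exactly the interleavings of writes and syncs that are painful to reproduce with real hardware.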

And Then it Worked

Like a shaft of light from the heavens cutting through the gray skies, in early January everything came together in a smooth, reliable, fast, and tight little bundle. I released another Beta and almost immediately submitted the app for review.

Rejected

The iOS app was approved in about a week or so, but a few days after that the Mac app was rejected. I was using the iCloud Key-Value Store to save actual user-generated content, and that's not really what it's meant for.

I'm not going to whine about the relative merits of this argument or the fairness of this rejection. The fact is that yes, I was using the KVS to store user data and that no, that's not what it's meant for. So now I'm back to the drawing board on (by far) the most challenging element of this app.

What Now?

After 3 (or was it 4?) complete re-writes of the synchronization engine, the last including a fairly high-fidelity simulation of a chunk of iCloud, I think it's fair to say that my little weekend project has become a scope-creep cautionary tale. Part of me thinks it's time to shelve it for good. Especially since one of the most viable options now is converting the code to use iCloud Core-Data Sync (aka the Painmaker).

But I'm not very good at letting go of something once I've got it clenched in my jaws. So don't be surprised if a post not too long from now announces the release of this little project, a year or so after that first weekend prototype.


1. July 1st.

2. And still is, from what I hear.

User Intention

January 20, 2013

My father is vexingly difficult to shop for. I think that every brilliant gift idea I've had over my adult life has either been immediately returned or is still sitting on a shelf somewhere. I remember asking him once what type of camera he wanted, and his response was essentially this: small enough to fit in his pocket, and with a single button. But that one button has to take the perfect picture, with the perfect composition, exposure and color balance, every time.

This serves as a pretty good description of the ideal user interface: one button that does exactly what you intend. As software designers we can't ever quite get there, but we have ways of getting closer.

Limiting scope

The less a product does the simpler the interface can be. The stock Camera app on the iPhone has far fewer controls than my Nikon D7000, simply because it does less.

Making decisions instead of providing options

There will always be those of us who want to shoot in Manual or Aperture Priority to express our artistic vision and eke out the best result in any circumstance. But most of the world would rather shoot in Auto. A well designed Auto mode, one that minimizes the downside of giving up all that control, is a design win.

More controls than my car

Asking the right questions at the right time

If full-Auto gets you 80% of the way there, it might be possible to get to 90% by exposing just a tiny bit of UI-complexity when absolutely necessary. A subtly blinking flash icon when the ambient light is too low, for example, or a similar HDR button when the sensor detects a very high dynamic range in the scene. It might be overkill to have these controls visible all the time, but they might make a big difference if presented conditionally.

Abandoning UI abstractions

As developers we refuse to accept that most of our userbase will never fully embrace the conceptual abstractions that we take for granted and depend on every day.

Modes. Hierarchy. Aliases. Virtual or "smart" collections.

We fool ourselves into thinking that because some of our users are able to use some of these things some of the time, they must understand them and like them. But really, the lion's share of our poor userbase has often just learned the steps or actions to accomplish their tasks without ever understanding what's happening under the hood. And at the first sign of trouble, they'll get lost, confused, and possibly angry.

Doing these things is hard, especially when you're building a product that was originally designed to scratch your own itch. And for every example of a design that takes this advice to heart, there are critics, detractors and disillusioned former fans who jump ship for a more advanced and customizable approach. But this tension between simplicity and power is where software design lives.

Unstuck

January 13, 2013

I get stuck a lot.

Bouncing between half a dozen active development projects as well as the other stuff I do at work, not to mention at home, creates a kind of multi-tasking hell. The constant context switches during an already interruption-filled day make it challenging to eke out even a little forward motion. Projects are routinely left ignored for days or weeks.

Restarting these fallow projects becomes progressively more difficult the longer they are left untouched. In software development we have an expression for this: bit rot. As code is left untouched while its surrounding environment changes, and as the original developers leave the organization or move on to other things, the code becomes increasingly difficult to pick up again and improve or even fix.

That happens to my projects. The longer I haven't touched something the tougher it is to pick up again and move forward. I may have forgotten some important detail, forgotten what the next step was, forgotten what got me so excited about it in the first place. It takes a surprising amount of effort to page in the requisite mental state to make forward progress.

Just the thought of coming back to a large, ambiguous, difficult project becomes a barrier. Looking back on my notes and seeing the next step is no help: the next step is often big, hairy and loosely defined1.

So I have adopted a mental trick. When I consider coming back to a project that has been sitting for a while I force myself to only tackle a tiny, trivial, embarrassingly easy element of that project. I might fix a typo or an itty bitty visual glitch, delete some previously commented out code, rename a poorly named method or two, whatever. Really, really low hanging fruit.

What is the smallest, easiest, fastest, least risky thing you could do to move this thing forward? Even just a fraction?

And 5 minutes later I am back into the project. The mindset I had when I was last working on it has seeped back into my subconscious and I can start planning my next move.

Until the next interruption.


1. Jeffrey Windsor and Ernest Hemingway both wrote about how they dealt with this in their work. Merlin Mann published a nice summary on his old productivity blog.

Archiving a TV News Segment

January 6, 2013

A few weeks ago during the peak of the holiday season our 3 1/2 year-old was interviewed for the evening news. It was a cute segment about the weather and his thoughts on Santa. He did great.

We wanted to save a permanent copy of the segment when it aired, but we don't use a PVR or any other TV recording device. Even if we did have a TiVo or cable-company provided PVR it is unclear whether a recording made with one of these things would serve as a good archive, given the proprietary formats and ubiquitous DRM.

What to do?

Most news networks now make their segments available for free viewing online. But have you ever tried to save one locally? It is enough of a challenge that I think most people would give up and just keep a link to the online version hosted by the news station. But again, this is not suitable for permanent archiving: links change, media companies and TV stations get bought and sold, and so on.

Here is the step-by-step process I used to download and store a permanent, high-definition version of the news segment on which my son was interviewed. I should probably point out that the legalities of following this procedure may depend on your geographic location and whether you have a reasonable fair-use claim on the content. In this case, as my son was the interview subject I think it's probably ok.

I should also point out that this procedure is highly dependent on the TV station website itself, and different websites might use different approaches, content delivery networks and so on. After reading through this procedure if you can't make heads or tails of what is going on you may want to delegate this process to a tech-savvy friend or relative. I would rate this at a difficulty level of 8/101.

  1. Identify the URL of the specific page on the TV network website where the video is available to view.
  2. Fire up Firefox with the Download Helper extension and navigate to the page.
  3. Start playing the video.
  4. If Download Helper picks up an .mp4 version of the video that is playing (not just Flash), you're done. Grab the .mp4, preferably in 720p or 1080p if available, and thank your lucky stars you don't have to go through the rest of this process.2
  5. Brace yourself. This would be a good time to make a coffee or something stronger. Clear your schedule, hug your family.
  6. Uninstall Flash from your computer. You want the TV website to think that you are incapable of viewing Flash. ClickToFlash and other Flash blockers are probably insufficient, although if you can find a way to selectively (but completely) remove Flash support from one of your browsers (preferably Safari), go ahead and try that instead.
  7. Set your browser user agent to the one used by Safari 5.1 on the iPad. I used the Safari Develop menu to do this3. The purpose here is to make the TV website believe you are on an iOS device, and are capable of viewing high-definition H.264 video. This is our preferred video archival format.
  8. Clear all your browser cookies and history. You want the site to forget that you ever visited with a Flash-capable browser.
  9. Navigate to the page where the video is shown. Play the video and wait until the pre-roll ads are done and the video is actually playing. Right-click on the playing video and Inspect Element (in Safari).
  10. See if you can identify something that looks like the video source URL, which (in my case) was a file called master.m3u8.
  11. This is an M3U playlist file (the 8 in .m3u8 denotes UTF-8 encoding), and we need to download it to take a look. Download the .m3u8 playlist file (use wget or curl if you'd like), and open it in a text editor.
  12. We're getting warmer. This master playlist file references a number of stream playlists, each at a specific bandwidth level. Since we want an archive in the best possible quality, identify the line corresponding to the highest bandwidth rating. Note that this might not be the bottom-most line in the file. Download this stream playlist URL using wget or curl and open it in a text editor.
  13. Here we go. We're now looking at a sequential list of video files, probably with the .ts extension, that together make up the segment we want to download. Use wget or curl to download each one of these ts files locally, making sure to keep the sequence number in the filename.
  14. Recombine these .ts files (losslessly of course) into a single .ts file. I used tsMuxeR for this.
  15. You now have a single .ts file that contains your video segment. If you'd prefer the video in an mkv container (which might be easier for your playback environment), you can use tsMuxeR to demux the combined ts file into separate video and audio streams, and then use mkvmerge to re-package the streams into an mkv file.

Voila. A perfect archival copy of your news segment in 15 easy steps!
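Step 12 is the only genuinely fiddly parsing in the list. For the curious, picking the highest-bandwidth stream out of a master playlist can be sketched in a few lines of Python (a simplification: it ignores quoted attribute values, which can legally contain commas, and the sample playlist below is made up):

```python
def best_stream(master_playlist):
    """Return the stream URL with the highest BANDWIDTH rating from an
    HLS master playlist. Each #EXT-X-STREAM-INF line describes the URL
    on the line that follows it."""
    best = (0, None)
    lines = master_playlist.strip().splitlines()
    for info, url in zip(lines, lines[1:]):
        if info.startswith("#EXT-X-STREAM-INF"):
            for attr in info.split(":", 1)[1].split(","):
                name, _, value = attr.partition("=")
                if name.strip() == "BANDWIDTH":
                    best = max(best, (int(value), url))
    return best[1]

# A made-up master playlist with two bandwidth variants.
master = """#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=400000
low/prog.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1800000
high/prog.m3u8
"""
print(best_stream(master))   # high/prog.m3u8
```

The URL this returns is the stream playlist you then fetch in step 12, whose .ts entries you download in step 13.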


1. Relative to what, you might ask. As a rough guide, consider difficulty 1 to be playing the video from the TV website, and difficulty 10 to be replacing the video on the site with Never Gonna Give You Up. RickRolling the nation is 10 for difficulty and 10 for style.

2. Steps 1-4 are usually sufficient for almost all other difficult-to-grab video on the web. TV and news sites in particular seem to make things difficult.

3. You should be able to use the User Agent Switcher to get the same effect in Firefox, but for whatever reason I couldn't get it to work right with the segment I was trying to download. Safari worked a treat though, and its Inspect Element feature comes in handy for later steps.

Pizza Segmentation

December 30, 2012

Have you noticed that pizza seems to be getting a lot more expensive?

I started ordering a large 4-topping pizza online from our neighborhood Pizza Nova the other day and the total was going to be somewhere around $37 after tax. Before delivery fee and tip. That's a lot to pay for a pizza: well north of the local shawarma place and pushing into sushi territory. What's up with that?

Thinking about it a little, the answer is clear. Everybody eats pizza. From the elementary school kids scarfing down a slice and Coke for lunch to the commercial real-estate broker who doesn't have time for a restaurant meal to families like ours. Pizza, at least in our culture, is a universal food.

But not everybody pays the same price for their pizza. In fact, if I was running a pizza conglomerate I would be thinking long and hard about how to charge each of my widely varying customers the maximum amount they're willing to bear to address their pizza needs. Tough to do when the pricing is publicly posted above the counter and on every second flyer in my mailbox, but evidently not impossible.

Source: pizzanova.ca

A pepperoni slice and Coke is $3 at lunchtime. That takes care of the elementary school kids. A large pepperoni (pickup-only) is $10, but only if you dig up the special offer on the web site. That one is for the super-value conscious. And for me, $40 for a large 4-topping pizza. Because the actuarial masterminds at Pizza Nova have sorted me into a bucket for people that have a specific pizza vision and are either insensitive to the cost of actualizing that vision or too impatient to hunt down a better deal.

They're not wrong: I've been paying that much for pizza for years. But as of now, the $10 pepperoni is starting to look much more appetizing.

Spinning in Circles

December 23, 2012

Declarative programming, at least in the form I described last week and the week before, is by its very nature ambiguous. A computer can't really know for sure what I mean when I say "make this circle occupy 25% of the screen" because I haven't said anything about which 25% or what happens when I rotate the display or what if I have two screens or no screens?

Resolving these inherent ambiguities falls on the original assumptions made by the designer of the declarative framework. And declarative frameworks are made up of imperative code1, so that's what ultimately tells the computer how to draw my circle.

Take a look at my imaginary CSS example from last week. It might seem pretty definitive at first, but it leans on several implicit assumptions I made. You might share these assumptions, but the computer has some options for rendering the page2. For example, I neglected to specify that the left and right columns must be exactly the same width.

Let's say that I meant for the outer columns to have the same width, but my rendering framework renders the left column using only as much width as the contents require, and the right column using all the remaining width on the page. What now? How do I tweak my declaration to act in the way I actually want rather than the way the computer assumes I want? How do I tell CSS, even my awesome imaginary CSS, that the remaining width of the page should be split evenly between two separate columns?
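The answer, such as it is, lives in the imperative code under the declarative surface. A toy resolver in Python (the spec format is entirely invented) makes the point: whether fluid columns split the remainder evenly is simply a decision the framework author baked in, and if it isn't the decision you wanted, no amount of declaring will change it.

```python
def layout(total_width, columns):
    """Resolve a declarative column spec into pixel widths.
    Each column is either an int (fixed pixels) or "fluid". The resolver
    bakes in one assumption: fluid columns split the remainder evenly."""
    fixed = sum(c for c in columns if isinstance(c, int))
    fluid_count = columns.count("fluid")
    share = (total_width - fixed) // fluid_count if fluid_count else 0
    return [share if c == "fluid" else c for c in columns]

print(layout(1000, ["fluid", 300, "fluid"]))  # [350, 300, 350]
```

Swap that one `share` line for "fluid columns size to their contents" and every page declared against the framework behaves differently, with not a character of the declarations changed.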

I've seen a lot of ugly hacks in my day, but I think the ugliest hacks of all are the ones that try to coax non-default behaviour out of a declarative framework. This is the realm of negative margins in CSS, comically convoluted XAML control template specifications, key path sequences in Cocoa key-value coding, hand-optimized SQL querying in Rails, and so on.

Trying to make a declarative framework do what you want rather than what it expects is one of the most painful things I've ever experienced while sitting in front of a keyboard. And you can't just override the default behaviour unless the original framework designer gave you a specific mechanism to do just that.

So, do we just eschew declarative programming entirely and stick to stacking progressively more abstract layers of imperative code on top of each other?

Maybe for now.

But declarative programming speaks to a bigger vision: an unrealized dream of what programming could be someday. Programming where we just sketch out a rough outline and let the machine fill in the details. Programming at a level of intentions and desired outcomes rather than menial shifting and sifting of data structures and algorithms.

10 MAKE THE WORLD EXACTLY HOW I ENVISION IT OUGHT TO BE
20 GOTO 10

Imperative control loop for a really great declarative program

In a way, declarative programming shares a lot with that other unrealized dream, artificial intelligence. Maybe they're actually two aspects of the same thing. And as with artificial intelligence, just because we're not there yet doesn't mean we should stop trying.


1. At the lowest level, all code executed by a computer is imperative. Everything eventually gets compiled to very specific, well-defined instructions along the lines of "move this bit of memory here" and "add 1 to the value in that bit of memory over there."

2. There are more ambiguities than this. I didn't explicitly state that the columns should appear beside each other, either, so the computer could legitimately stack the divs vertically without violating my constraints.

Circling the Declarative Drain

December 16, 2012

Last week I sketched out the rough difference between imperative and declarative approaches to programming, and admitted that I cut my teeth on the former. I alluded to the deep-seated fear and distrust I harbor of declarative approaches, although I didn't actually write that. So here I am writing that.

Let's look at a (sort of) modern example: using DIV tags and CSS to lay out a web page. This is a mostly declarative technique in that you don't tell the browser exactly where to put each element and how big it should be. Instead, you set out some constraints for the elements and let the browser work out the rest. In theory, with sufficient constraints, you should get precisely the behaviour you want by expending a tiny fraction of the coding effort that would be required if you approached this imperatively.

Let's pretend for a moment that we lived in a world where this was actually the case. What would such magical markup look like? What is declarative at its best?

I present to you the classic1 fluid-fixed-fluid 3-column layout. Your web page is divided into three vertical columns. The middle column is fixed to a width of exactly 300 pixels. The left and right columns grow and shrink evenly to fill up the remainder of the available browser width2.

Thar she blows

What if we had an elegant, semantic and precise markup and styling language to define this behaviour in code?

<head>
    <style>
        body { width: 100% }
        div { height: 100% }
        .fixed { width: 300px }
        .fluid { /* I don't even need to put anything here because it would be redundant */ }
    </style>
</head>
<body>
    <div class=fluid>This is the left column</div>
    <div class=fixed>This is the center column</div>
    <div class=fluid>This is the right column</div>
</body>

You may quibble with my making it look like modern-day HTML and CSS, but you'll agree that this is fairly concise and semantic: it doesn't take long to read, and it's pretty clear what I'm trying to accomplish. The key is in the <style> tag near the top: see how little CSS it takes to make things work in my imagination?

There are a couple of ways to make this happen in actual HTML and CSS, and they're all horrific. Take a look at this representative example, which I tried to use a couple of weeks ago. Much as I wrote in 2008, I ran screaming back to tables around the time I started trying to get more than 2 browsers to work at the same time.

But even if this solution did work on lots of browsers (it doesn't), and even if it didn't suffer from a handful of other annoyances (it does), just take a look at the markup required. I SAID LOOK AT IT. And weep as I do.

Next week, if I've recovered my composure, I'll use this one example to make a broader hypothesis about why this happened, and continues to happen, with declarative approaches to programming problems.


1. Classic only to me, in the sense that I bang my head against it every few years so it feels like an old, familiar, foe.

2. See what I just did there? I declared an (almost) complete specification for the behaviour of our page. That's how we'll be doing this 50 years from today, or today, if we can just delegate this whole mess to the intern.

How to Draw a Circle

December 9, 2012

In Grade 4 I really started getting into the programming groove. The high-res graphics capabilities1 of the Apple II Plus begged for tools that took advantage of them. One of my first serious2 programming projects was a bitmapped image editor I named SuperDraw. Lovingly handcrafted in Applesoft Basic, it was my crowning achievement at the time. But it couldn't draw circles.

If you think I could just call Ellipse(0,0,50,50), I'll remind you that fancy graphics libraries that abstracted away shape primitives didn't come until years later. If I wanted a circle I would have to figure out where to put each individual pixel. I consulted a family friend who happened to be a math tutor. He started by showing me the equation for a circle:

How to draw a circle

x² + y² = r²

The square of the x-coordinate plus the square of the y-coordinate is equal to the square of the radius for any point on the circle. Simple, right?

You might imagine a 9-year-old Gene puzzling over how to turn this mathematical identity into a recipe for iterating over the x- and y-coordinates of a circle of radius r in order to light them up on the monitor. This is actually non-trivial, and if you have some spare time I recommend this exercise as a good way to review your trigonometry.
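For the curious, here is a minimal sketch of one such recipe in Python (a stand-in for the Applesoft original), assuming a plot(x, y) primitive that lights a single pixel:

```python
import math

def draw_circle(cx, cy, r, plot):
    # Walk the circumference in small angular steps, converting each
    # angle to an (x, y) pixel with basic trigonometry.
    steps = max(8, int(2 * math.pi * r))  # roughly one step per pixel
    for i in range(steps):
        theta = 2 * math.pi * i / steps
        plot(cx + round(r * math.cos(theta)),
             cy + round(r * math.sin(theta)))

# Collect the pixels instead of drawing them, just to show the result.
pixels = set()
draw_circle(0, 0, 25, lambda x, y: pixels.add((x, y)))
```

Every point it emits satisfies x² + y² ≈ r², but notice how little the code resembles the equation: the identity tells you whether a point is on the circle, not which point to light next.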

What I'm getting at though is that a description of a circle, even if it's perfectly accurate and complete, doesn't tell you very much about how to draw one. This is the difference between declarative programming and imperative programming.

The programming with which I grew up was pretty much purely imperative. You gave the computer instructions and it followed them. These instructions might have been in a procedural language (like Applesoft Basic or C), an object-oriented language (like Java or C++) or even a functional language (like Lisp). It all boils down to the same thing: a predictable flow of control through a set of instructions that eventually leads to the desired outcome, or if not, can be traced through step-by-step to isolate the problems.

As the complexity of your problem grows, purely imperative programming can become a bit of a drag. Its effectiveness is based on your ability to abstract away that complexity in layer upon layer of logic. Near the bottom layer you have methods that calculate the pixels in a circle, somewhere in the middle you have methods that draw complete circles and squares, and near the top you have methods that, for example, draw an organizational chart of your company given a database of the employees and their job titles.

For decent-sized programs that can be a lot of layers and a lot of code to manage. All this complexity: isn't that what computers are good at? Can't we just tell the computer what we'd like achieved and have it figure out how to get there?

Declarative programming is just that. "I'd like to live in a world where there was a circle at (0,0) with radius 25" is the declarative equivalent of "please draw a circle at (0,0) with radius 25." But declarative scales better. You could declare a complete screen layout, with associated constraints ("this circle should always occupy 25% of the window width") and trust the computer to resize the circle as the window is resized. The imperative approach, on the other hand, would require an event handler to catch a resize when it occurred, get a handle to the circle, recalculate the new size, and so on.
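To make the contrast concrete, here is a hypothetical sketch (plain Python, not any real UI framework) of the same 25%-of-window circle done both ways:

```python
# Imperative: we catch the resize event and redo the arithmetic ourselves.
class ImperativeCircle:
    def __init__(self, window_width):
        self.radius = window_width * 0.25 / 2

    def on_resize(self, new_width):
        # Forget to call this and the circle silently goes stale.
        self.radius = new_width * 0.25 / 2

# Declarative: state the relationship once; the radius is derived on demand.
class DeclarativeCircle:
    def __init__(self, get_window_width):
        self._width = get_window_width  # a callable: the single source of truth

    @property
    def radius(self):
        return self._width() * 0.25 / 2

window = {"width": 800}
imperative = ImperativeCircle(window["width"])
declarative = DeclarativeCircle(lambda: window["width"])

window["width"] = 400                  # the window is resized
imperative.on_resize(window["width"])  # the step we must remember to do
```

When the window changes, the declarative circle is simply correct; the imperative one is only correct if every resize path remembers to call on_resize.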

At their best, declarative frameworks and approaches can dramatically simplify the specification of complex systems and interactions. And next week I'll illustrate why things don't really work out that way.


1. 280x160, 6 colors, 4 lines of text at the bottom of the screen. Sweeet.

2. I was an immature hack up until then, but Grade 4 was really the turning point.

Just Give Up Now

December 2, 2012

That thing you've been thinking about doing? That great idea of yours? Maybe you've already started on it? Don't bother. Here are just a few of the reasons it's not going to work.

  1. It has already been done. Fire off a couple Google searches and I'll bet you'll find at least one or two nearly identical approaches.
  2. There's a better way to do it. You might believe your approach is solid, but have you really thought through all the angles? Are you a domain expert in this field? The likelihood you've come up with the best approach is minuscule.
  3. There are people working on it right now who are smarter than you, better funded than you, and have way better taste. They are going to launch imminently, maybe even in the next couple of weeks. Are you ready to go up against them?
  4. It has already been tried, unsuccessfully. During the dot com boom there were probably three different venture-funded startups that went down in flames trying the very thing you're considering.
  5. It isn't aligned with your long term goals. How does this idea even fit in with your family plans? Your career plans? Your life plans?
  6. It's low priority. You have way more important things to be worried about right now.
  7. You don't have the budget to do this well. Even if you could hack together a duct-tape prototype of this thing, it would take an engineering team 6 months and a truckload of money to build something worth bringing to market.

And so on. It's easy to crush an idea.

Look, the truth is that taking the leap of faith required to make ideas happen requires not looking down. If we all understood the risks of what we were attempting and had an objective assessment of our odds, nothing new would ever get built.

Everything I've said so far is probably true about your ideas and about mine. Forget it. As Dr. Seuss wrote so poignantly in The Lorax, "Unless someone like you cares a whole awful lot, nothing is going to get better. It's not."

And if you think that any or all of the arguments above are a legitimate reason to abandon your idea, buy me a beer and I'll explain why you've got nothing to worry about.

Native Mobile or HTML5

November 25, 2012

You're building a new web-based application or service and you want to offer a mobile solution to your users. Do you build a native app or tailor your website to mobile clients using responsive HTML5 and JavaScript?

It's tricky.

If you agree that native apps will offer the better user experience (and I believe that, with practically zero exceptions, they will), the gut instinct is to build out native apps for your chosen mobile platforms. But which platforms? Well, you need an iPhone and iPad app, and you need to support both the retina and non-retina versions of each, and yeah, Android support is pretty important these days, and maybe a lot of your target market uses Windows on the desktop so you want to cover your bases with a Windows RT client.

Ruh Roh.

You've just signed up for about 10 developer-years of extremely expensive effort. If your business is still in the "if we build it they will come" stage, this is probably ill-advised.

The TimeTiger web client running in mobile Safari

So maybe for now, you can put optimal user experience on the back-burner and develop a nice mobile-friendly responsive web site. This comforting idea is a dangerous illusion, as:

  1. Cross-platform responsive web development is incredibly difficult and fiddly, and can actually take longer than native app design using decent tools.
  2. Even when you get it right, the average user's mobile web experience will be significantly worse than an equivalent native experience.
  3. Once your business is off the ground, you will eventually want to make native clients anyway, so much of this effort will be wasted.

Your choices boil down to diving into the bottomless pit of native mobile development or sacrificing your app at the altar of crappy user experience. Not ideal.

The only way forward is to consider, deeply, your users. What are they using now, what are their expectations, and what are they willing to accept in order to gain the functionality you're offering?

"What you tolerate defines your community." - Heather Champ at Web Directions South 2012

If you are developing an app targeted at a design-conscious, consumer audience whose attention you need to grab and hold, it is absolutely essential that you provide a magical and delightful experience. Don't trick yourself into believing that HTML5 is enough: learn from Facebook's mistake.

On the other hand, if you're providing a hard-core business tool where the appeal is in the actual functionality or data you're offering, you can probably get by with a responsive web solution for now. If your users are thankful just to be able to get what you're offering on their device, no matter the form, you're good. If your users are typically not very design conscious, in that they don't care whether you have nice smooth transitions between pages and maybe can't even tell whether they're looking at a retina device or not, you're good.

Only good for now, because everybody likes to use great software, even when they aren't consciously aware of what makes the software great in the first place. All things being equal, a competitor will eventually develop a native experience that, even if it provides less functionality, will still start pulling users away from you. Unless and until browser technology improves and converges, providing your mobile experience in the form of responsive HTML5 is a shunt: good enough for now, but that's it.

A Consumption Device

November 18, 2012

For as long as the iPad has existed it has faced criticism about what it could and could not be used for. Legions of bloggers tripped over themselves labelling it "a consumption device," while an equally vocal contingent relentlessly pointed out examples of the iPad being used in well known and respected art and literature, music production, construction, medicine, and so on.

The myth of the iPad as a consumption-only device was thoroughly debunked, at least in the eyes of many Apple enthusiasts. It might or might not replace your MacBook, but you could certainly use it for more than watching movies and reading iBooks.

This was an exciting time, the birth of the "Post-PC" era, and I, like many others, wanted to experience the wonder. I was inspired by articles by Mark O'Connor, who talks about ditching his MacBook in favour of an iPad to do bona fide software development on a 200,000 processor system. I dreamed of a world where I could do most, if not all, of my work on an Internet-connected iPad.

My day-to-day work responsibilities are a grab bag of operations, support, sales, customer service, and development, so I had no illusions about being able to dump my 15" MacBook Pro on Day 1. But gradually, I started pushing more and more of the systems and processes I used into the cloud. My vision was to be able to do everything except actual software development using an iPad, and from anywhere in the world.

Source: Mark O'Connor, Yield Thought blog

You don't need to hold the iPad mini for long to realize that it is something special. In fact, some Apple writers are proclaiming that it is what the iPad should always have been. Careful not to take anything away from its big brother, they are choosing the mini for themselves in favour of the larger option.

But for what use?

From what I can tell, to consume rather than produce. MacBook Airs or Pros are for doing work, and the blissfully light, thin, and compact iPad mini is for e-mail, Twitter, reading and surfing. Choosing the mini as your iPad seems to be an admission, perhaps more of a realization, that if you're working, you'd rather be doing it on a MacBook.

My fear is that treating the iPad as a primary production device is going out of favour. That there is a growing belief that although it is possible, it is rarely optimal, and almost never preferable to using a MacBook for the same job.

There will always be exceptions of course. Some applications are so amazingly well suited to a tablet form factor that you'd never want to be using them on a laptop if you could avoid it. But something as simple as web-based surfing and research feels like swimming upstream when you factor in the need to jump to 1Password, excerpt and annotate the research material, clip and edit images, create the occasional PDF etc. Weak interapp sharing mechanisms, the prohibition of system-wide 3rd party productivity utilities, no split-screen capability, and no centralized file store all conspire to make the experience clunky and unpleasant.

Surprisingly, it is Microsoft that seems to be holding up the banner of tablet productivity highest. The Surface is a flawed product, with a half-baked OS, and a weakening team behind it, but Microsoft, bless them, is trying so hard to make Surface your primary work device. It's a shame they've got so far to go.

Best Enough

November 11, 2012

Dustin Curtis just published an interesting post about the payoff you get from finding and choosing the very best of whatever you own and use. He argues that it is better to have a few very nice things that work just the way you want and that you can blindly trust than it is to have many things that aren't quite right.

I like this idea, and over the past few years I have tried to incorporate a "less stuff, but better stuff" approach to my life. In some things I have been successful, and in some things I have failed. Now I'm convinced that it would be madness to try to apply this approach to everything, or even to most things.

Finding "The Best" is expensive

Financially, to be sure, but more importantly, the research, selection, evaluation, acquisition, maintenance and divestiture of "The Best" take significant time and effort. Dustin talks about the 20 different sets of flatware he bought and tried before settling on "The Best." With no derision intended, there are only a few things on which I can afford to lavish such passion. A very few things.

Living up to "The Best" is all-consuming

I spent the better part of 2012 fiddling with Vim to try to make it "The Best" editor for me. Hours and hours of (admittedly enjoyable) learning and tweaking and asking and banging my head against a desk. I could never quite get it to where I needed it to be, and I'm using something else now. I don't regret the time I spent, but make no mistake, that was a lot of time.

When you commit to using "The Best" IDE, or driving "The Best" car, or using "The Best" kitchen utensils, you are committing to constant learning, honing, tweaking, maintaining and upgrading. These are fantastic things to do for a few things in your life. A very few things.

"The Best" does not actually exist

There is a well-known sales tenet having to do with consumer education. The more educated a potential customer becomes about the product area, the more likely they are to up-sell themselves to a "better" solution.

For most of my life I had no interest in coffee, but over the past few years I have "educated" myself up from pre-ground department store beans and a French press all the way to locally roasted artisan beans and a Rancilio espresso machine with a hacked-on PID controller that keeps the boiler temperature within a 0.2 degree range. The tamper I use has a lightly rippled base to prevent channeling.

Reg Barber Ripple Tamper Base

And guess what? I could be doing much better, but I have decided to stop here for now.

Once you start down the path to "The Best," there is no end. Ask a photographer what "The Best" camera body is. Or a designer about "The Best" layout software. You see, "The Best" is not a product or solution at all. It is a journey towards perfection. A challenging, fascinating, often frustrating journey that you should only be on if you care an awful lot about the destination.

And how many of these journeys do you want to be on right now? How many can you afford to be on?

For almost everything in our lives, there is a "good enough," and stopping there is the right move. Sufficiency does not preclude excellence, but it's a great way to stay out of the rabbit-hole of perfection. Err, I mean, "The Best."

The Lifification of Games

November 4, 2012

I grew up playing fantasy role-playing games on my computer. I don't recall whether my Vic-20 ever had anything in that genre, but I fondly remember long hours spent in front of my Apple II playing Ultima II, III and IV. Ever since then I've always had one or three of these on the go.

Back then I wasn't trying to play through these games. I don't think I even realized that the Ultima games had an end-goal. To me, they were worlds to explore and master. Immersive fantasy environments that I could live in, for a few hours each day. Finishing the games was something I did, or didn't do, once I got bored of playing them.

These days I have been playing Skyrim, World of Warcraft, and EVE Online. All fantastic games and truly immersive sandbox environments. WoW and EVE especially, as MMOs with epic scope and vibrant communities, fulfill my every wish of having an alternate world in which to kick back and relax after a tough day in the real one. Or so it would seem.

Dusk over Windshear Crag

You see, I spend most of my work days happily sitting in front of a computer and a keyboard, sometimes talking into a headset, working towards individual or team goals that I have set for myself or others have assigned to me. When I complete those goals I am rewarded in larger or smaller ways, and I advance in my longer-term objectives and plans.

My job is an MMO.

And the games I play are looking more and more like a job. My quest list in WoW looks a lot like my task list at work. They both involve typing some things on my keyboard, responding to things happening on my screen, maybe solving a puzzle or problem, occasionally speaking with people on my headset. I enjoy doing both, but they both kind of feel the same.

The games I play now can't be called simple any more, or even relaxing. The mechanics, plotlines, strategies, and gear have become so expansive that they are their own field of study. One printed strategy guide for the latest WoW expansion, Mists of Pandaria, clocks in at 456 pages. And because WoW is a social game, you are expected to know what you are doing when playing with others. In fact, at the higher levels of the game, you are expected to practice.

So I consciously opt out of these Alpha-groups and concentrate on having fun. Playing at my own pace. Enjoying the environment and the world, like in the old days. But I can't help feeling that I might as well be spending more time becoming a superstar in the real world rather than a scrub in the virtual one.

Report the Bug

October 28, 2012

Has this happened recently to you or someone you love?

"I was using this software/web service today and all of a sudden it crashed/reset/did something unexpected. I had to re-type everything, and I didn't even have it written down or printed out. Man, I hate company. They should really get their act together."

Here's the thing. We, as software developers, try hard to make our software as bug-free as we possibly can. It doesn't matter: a few of the suckers will always make their way into production code. Even if it were possible to test every single line of code in a non-trivial system against every possible input (it isn't), heterogeneous operating environments, shifting technology stacks, integration with external systems and components, network latency, and random acts of configuration madness™ will always conspire against perfection.

In other words, every piece of software you are using right now is buggy.

A lot of people understand that; they've accepted it. They have become passive victims to our industry's inability to achieve perfection.

They do not submit bug reports.

I have witnessed the most heinous acts of software cruelty perpetrated against nice, normal people just like you and me who have walked away shaking their heads and just feeling frustrated and sad. If there was one message I could send to all of these victims it would be this:

"If the software does something you do not agree with, report the bug."

It often doesn't occur to people to do this. They think the developer doesn't care about their problem. Worse, sometimes they think they themselves are at fault! As software developers we use the bug designation "by design." In other words, the software is working as we expect it to, but for whatever reason that didn't work out for the user. Well, guess what? Sometimes, there is a bug in the design. So:

"If the software does something you do not agree with, report the bug."

Apple has an ancient bug tracking system colloquially called Radar, actually named Apple Bug Reporter. A bug I submitted in June against Apple Mail has an ID of almost 12 million, and the reported bugs are numbered consecutively, allegedly starting at ID 1 sometime in the late Jurassic, probably. This bug, and every other bug I've ever submitted to Apple, have been marked as duplicates. Apple has so many detailed bug reports in its database that it is almost impossible to stumble over something new and novel, and this is great. Apple is aware of and able to prioritize almost every conceivable defect or perceived defect in their software. The culture in the Apple developer community is that you're not entitled to complain about a bug in one of Apple's products unless you've already filed a Radar, duplicate or not.

Radar or GTFO

Photo of Michael Jurewitz, former Apple Developer Tools Evangelist, taken by George Dick. "Radar or GTFO" addition by Steve Streza. Image taken from a Black Pixel blog entry by Daniel Pasco. These guys know a thing or two about fixing bugs.

It is telling that so many of these issues are still unresolved. Again, with any non-trivial system, fixing every single reported issue isn't possible because some solutions would cause other problems, or would require fundamental design changes that would negatively impact the product as a whole. Or would take the product in a direction other than the one the developer envisions. Or will soon be obviated by a planned update to the system that replaces or upgrades that part of the functionality. But just hearing what customers find flawed with the system is invaluable.

"If the software does something you do not agree with, report the bug."

Do you have a bug to report in TimeTiger? Do you have a feature suggestion or just feel that something could work smoother, faster, prettier? We might not fix it right away, but I guarantee that we appreciate your input.

"If the software does something you do not agree with, report the bug."

A User by Any Other Name

October 21, 2012

When we were starting Indigo in 1997 I sought $100K of angel financing from an old friend and mentor. Michael Schweitzer, who passed away in 2003, had taken me under his wing at the age of 15 and we remained close throughout my university career. I am still in awe at the learning and opportunities which I owe to this friendship, and I will always be grateful. Michael and his company became one of Indigo's earliest clients, so his participation, both financial and advisory, seemed like a perfect fit.

As we proceeded through the informal negotiations Michael met my co-founders and started to learn more about our collective goals and beliefs. At the last minute, just before there was an actual contract to sign, Michael pulled out of the deal.

Feeling crestfallen and a little betrayed, I wanted to know why.

"Gene," he said, "in every conversation I've had with your group, all I've heard about was your new approach. How special this thing you were trying to build was going to be. How your team was so great. I didn't hear anything at all about us. About the customers, and our needs."

One Perspective

He continued. "I don't doubt your team's ability. You've put together a strong group. But I just don't think you care enough about us. I hope you guys do great, but until I see that you've become more focused on your customers, I won't invest."

Predictably, I dismissed his opinion out of hand. And of course, he was absolutely right. Over the years I have started to learn to project our focus externally, rather than internally to our organization. And I chuckle a little bit when I have conversations with people, and startups, that haven't quite gotten there yet.

Jack Dorsey of Square and Twitter posted a nice piece on not wanting to attach the labels "customer" and "user" when more specific labels, like "buyer" and "seller" are available. A number of prominent bloggers responded to this labelling issue, including Marco Arment who posted an interesting take on the relationship between the label and the business model that drove it. Jessie Char jokingly goes further, and is (not really) trying to eliminate the term "client" entirely from her organization's vocabulary, preferring "friend with benefits."

It makes me wonder what organizations like Facebook call us behind our backs (my guess: "meeple").

But for me, Jack's second point resonated more strongly. "... all of our work is in service of our customers. Period." I think Michael would have felt that way too.

Update: Jessie was kidding. Duh.

Getting Rid of Stuff

October 14, 2012

The toughest part of my hardware upgrades is getting rid of the old stuff. Most of the time it still works perfectly, and even if it didn't I wouldn't want it in landfill. So I've been mastering the art of efficiently divesting myself of old "stuff."

Here's my current checklist, roughly in order of preference. If you're already on top of this and are about to skip this post, I urge you to at least read the section about Freecycle if you're not already familiar with it.

Photo: Michelle Arsenault, licensed under Creative Commons Attribution-ShareAlike 3.0

Landfill, by Michelle Arsenault, August 2008

Can I hand it off to friends or family?

Typically I will only do this for stuff that is in great working condition and is being replaced for reasons other than obsolescence. I was recently able to hand off a really nice flatbed scanner that I hadn't used in over a year to my brother. I had to pony up $40 for a replacement power brick to make this happen, but I still feel better now that it has a good home.

Can I sell it to a company that buys this stuff?

For some stuff there is a trade-in or cash-back option provided by a reputable retail or mail-order company. For example, in the U.S. Gazelle is apparently very convenient for trading in Apple stuff. Here in Toronto, when I upgrade my Mac stuff I trade it in at Carbon Computing. Carbon is very low-hassle and they'll cut you a cheque after they inspect and test your equipment. Along the same lines, big camera shops like Henry's will take used camera gear for trade-in.

You won't get as much as you would like by going this route, but it's highly time-efficient and requires very little work on your part. Plus it gets the stuff out of your hands fast.

Can I sell it online?

Your success with this will vary, as will your level of comfort. I've sold 3 or 4 used iPhones on eBay over the years, and I've made more than enough on each one to fully pay for a new (subsidized) iPhone. Except for the one time I shipped to Malaysia without tracking information, and surprise surprise, the iPhone "never arrived." It was my own mistake for not charging enough for shipping to pay for a waybill-tracked delivery method, but still, an expensive annoyance.

A couple of my friends are master Craigslisters. I've only had one Craigslist experience and it was negative (a no-show on a Sunday night in a Costco parking lot: very shady), and that has put me off. But they tend to deal in high-end furniture and home fixtures, so that scene is probably different than the one for the Xbox I was going for. Anyway, each time they move (and they move a lot), they are able to unload all their old stuff for at least as much as they paid for it, and then refurnish for less than that. I think they might actually be cash-positive on their moves (or at least cash-neutral). They should probably be the ones writing this post.

Can I donate it?

There are probably non-profits in your area that want your working computer gear. I have donated equipment to specific organizations where friends have worked, and it's a nice feeling to know your stuff will be used for a good cause. It can sometimes take a little work to identify appropriate organizations in your area and arrange for pickups and deliveries, but it's great when it happens.

Can I Freecycle it?

How can everyone not know about this? Freecycle is magic. It is the fastest, simplest, most responsible, most rewarding and lowest hassle way of getting rid of just about anything, including but certainly not limited to electronics, as long as you're willing to give it up for free.

Once you've joined the Freecycle group in your area, simply post a brief note about what you're offering (for free) and your major intersection. Once a moderator has approved your post (often the same day), you will start getting e-mails (sometimes a flood of them) from people who are willing to come to your home (or a place of your choosing) to pick up the item.

You can choose the best recipient. Most of the time I will pick the first respondent, but occasionally one e-mail will stand out as representing a particularly good fit for the item I'm donating. You can then arrange a time for the pickup and you're done. Post another note saying the item has been claimed to stem the assault on your inbox.

I've Freecycled dozens of items, including everything from fully functioning rack-mounted servers to "box of random cables, most functional," and there has never been a no-show or any other hassle. So simple. So awesome.

Can I recycle it?

When all else fails, big box chains like Best Buy are now recycling electronics gear. Apple recycles old Apple stuff, although you can almost always sell functioning Apple stuff. Even your municipality might be aggressively recycling old electronics. Your gear should never have to end up in landfill.

Start by Being Terrible

October 7, 2012

Are you a good driver?

I don't mean professional-grade, just better than most of the jerks you're constantly dealing with on the road. You're an above average driver, aren't you?

Despite what we would like to believe, not all of us are above average. In fact, there is probably little correlation between someone's perceived skill behind the wheel and their actual skill. By definition, we cannot be objective about our abilities in any particular field, whether we are complete newbies or seasoned veterans. Not all "veterans" are good. Without some external expression of our work, for others and even ourselves to point to and compare to our original vision, we have no sense of how good we actually are.

This can be useful. If we fully appreciated how terrible we were at something new, we might never stick with it long enough to become better. Imagine taking a dancing class where you were always keenly aware of how ridiculous you looked. It would be hard to concentrate, and impossible to enjoy the experience. It is only by putting aside our internal self-assessment that we can push past terribleness.

Iteration

What do you think about this post so far? Do you think you could express this idea more efficiently, or with a little more flair?

I have been reading blog posts on the Internet for far longer than I have been writing them. It is only when I actually started writing that I felt the full weight of my ineptitude. I love good writing, online and off, and I presumptuously considered my own skills on par with what I was reading. I often felt that I could have written that same post, maybe even a little bit better.

But just trying to express my ideas in writing has shown me how much I have to learn. Now that I've started, and now that I see just how bad I am, maybe I can begin to improve.

You might think I am being too hard on myself, and you might have heard other writers express a similar sentiment. Know that the reason a writer (and perhaps any type of artist) is their own harshest critic is that they are sensitive to just how inefficiently, self-indulgently, pompously, inconsiderately, condescendingly, and ignorantly they have sprayed what started out as a pretty good idea onto the page.

Even if you happen to be enjoying this post, I guarantee it is a poor shadow of the idea in my head. And that's what I'm trying to get at: our ideas are good, but we are arrogant to think that we can express them with perfect fidelity, that all of their strength and virtue will land on the page, or in the presentation, or in the source code.

That can happen, but only once you have started becoming less terrible.

Taking Advantage of the Sum of Errors

September 30, 2012

Estimating project schedules is hard. It is especially difficult for larger projects, and it is not uncommon for projects in my industry, software development, to be underestimated by several multiples. Sometimes by an order of magnitude or more.

I am not going to go into why that might be the case: it has been widely discussed and may be the subject of a future post. Suffice it to say, it happens all the time, even to experienced project managers. Here is one way to look at project estimating that helps me keep things mostly reasonable, most of the time.

Consider an interesting little script that you think might take 10 hours to write. For the purpose of this example, let's say that our estimates are always 50% off, in one direction or the other. This might seem egregious to you, but in the software industry only being 50% off is practically omniscience.

So, we have made a 10-hour estimate with an expected error of 50%. The script will actually take either 5 or 15 hours to finish. How can we improve on this?

What if we break the work of writing the script into 10 smaller pieces and estimate each of these individually? Even a small script will require a documentation/usage page, command-line option parsing, some sort of input processing, maybe an algorithm or two, some output, some error-checking, and perhaps a non-trivial edge case to handle. Don't forget the effort of creating a GitHub gist, or a remote source-code repo to which you can push your hard work.

Now we have 10 items, and we try really hard to accurately estimate each one individually. By sheer coincidence, our best estimate for each of these 10 items is 1.0 hours, providing a total estimate of 10 hours for the script. And our estimation error is still exactly 50%, so each item in our estimate will actually take either 0.5 or 1.5 hours to complete. Assuming we err on either side equally, here is the estimated vs. actual for this project.

Sum of Errors

The individual estimation errors for each task have effectively cancelled each other out, giving us an extremely accurate overall estimate.

Sure, these numbers are contrived for the purpose of this example, but the punchline is still useful: the more chunks into which you decompose your project, the more accurate your estimate will tend to be. As long as the individual errors are independent and equally likely to fall on either side, they partially cancel, and the relative error of the total shrinks as the number of estimates grows.
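To see the cancellation at work, here is a small simulation (mine, not from the original example) of the contrived scenario above: a 10-hour job split into n equal chunks, with each chunk's estimate off by exactly 50% in a random direction.

```python
import random

random.seed(1)

def mean_relative_error(n_tasks, trials=10_000):
    """Average relative error of the total estimate when a 10-hour job
    is split into n_tasks equal chunks, each off by +/-50% at random."""
    per_task = 10.0 / n_tasks
    total = 0.0
    for _ in range(trials):
        actual = sum(per_task * random.choice((0.5, 1.5))
                     for _ in range(n_tasks))
        total += abs(actual - 10.0) / 10.0
    return total / trials

for n in (1, 10, 100):
    print(n, round(mean_relative_error(n), 3))
```

With a single chunk the error is the full 50%; with 10 chunks it drops to roughly 12%, and with 100 chunks to roughly 4%. That is the cancellation effect in the chart above, minus the convenient assumption that the errors split exactly evenly.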

An unfortunate consequence of more accurate estimates tends to be larger estimates. But that's a subject for another day.

An Alternative to QuickBooks

September 23, 2012

I've been keeping an eye out for an alternative to QuickBooks to handle small-business accounting. Sure, QuickBooks is big and comprehensive, but I only use 5% of it, and frankly I'd rather do that 5% much differently.

Now this is how you do accounting

The recent craze of plain-text writing and blogging workflows has brought me back to a project that I've been circling around for several years. You may not know this, but there is a plain-text accounting system. It's called Ledger and it's amazing.

The idea behind John Wiegley's Ledger is that you type your transactions into a great big text file. You then run the super-fast Ledger command-line processor, and specify what sort of output you'd like to see, such as a list of account balances or a detailed register for a particular account.
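For the curious, an entry in that big text file looks something like this (the payee and account names here are made up; Ledger lets you elide the balancing amount on the last line):

```
2012/09/23 Staples
    Expenses:Office Supplies       $45.00
    Assets:Chequing
```

Each transaction is just a date, a payee, and a set of indented postings that must balance to zero. That's the whole input format.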

Think about that for a second. Every single one of your transactions is crunched when you're simply curious about how much is left in your chequing account. Sound crazy? Not really, because Ledger is so screamingly fast (and, being devoid of UI, so efficient at processing large batches of transactions), that in the time it would take you to submit a single transaction in QuickBooks, Ledger can process them all.

Once you bend your mind around this unusual process, you start to see the possibilities. TextExpander snippets for common transactions. Scheduled jobs for generating and e-mailing key reports. A bare-bones web UI for report generation, perhaps one your accountant can access. Simple integration with your website or other systems. There are a lot of interesting things you can do when freed from the constraints of UI-laden accounting software.

So, how to start? I made a Ruby script that converts your entire QuickBooks transaction history into something that is immediately usable in Ledger. So usable, in fact, that your Ledger trial balance should match your QuickBooks trial balance.

Here's how to use it:

  1. In QuickBooks, generate a Journal report for the date range All. In my version of QuickBooks, this is in Reports > Accountant & Taxes > Journal.

  2. Export to a comma separated values (.csv) file, say journal.csv.

  3. Run the script as follows:

    qb2ledger journal.csv > ledger.dat

  4. Now you can run Ledger directly against this data file. So, for a list of account balances, you could run:

    ledger -f ledger.dat bal

So far I have found one annoyance: older versions of QuickBooks don't escape double quotes when exporting the CSV file. That means you'll have to go through and escape them yourself if you happen to have a few double-quotes scattered around your transaction memo fields. I had a number of things like Samsung 15" monitor, for example. To help with this, you can use this regexp to search for double-quotes that don't belong:

\"[^,]*\"[^,]*\"

Once you find them, you can simply preface them with a backslash to allow qb2ledger to parse them properly, or do what I did and get rid of them in the original QuickBooks transactions before doing the export.
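If you have a lot of these, the cleanup can be scripted. Here is a rough sketch in Python (the sample line, the function name, and the assumption that memo fields contain no commas are mine, not part of qb2ledger):

```python
import re

# The pattern from above: a quoted field containing an unescaped inner quote.
# Like the pattern itself, this assumes the field contains no commas.
pattern = re.compile(r'"[^,]*"[^,]*"')

def escape_inner_quotes(line):
    """Backslash-escape stray double quotes inside a quoted CSV field."""
    def fix(match):
        field = match.group(0)
        inner = field[1:-1].replace('"', '\\"')
        return '"' + inner + '"'
    return pattern.sub(fix, line)

line = 'Expenses:Hardware,"Samsung 15" monitor",100.00'
print(escape_inner_quotes(line))
# -> Expenses:Hardware,"Samsung 15\" monitor",100.00
```

Running every line of journal.csv through a function like this before feeding it to qb2ledger saves the manual search-and-escape pass.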

Now, rather than moving all of your historical transactions over to Ledger, you may just want to close off that set of books and start a new one. In fact, I might go this route myself, but this script is still useful for getting a handle on how Ledger could work for you. And that's important, because Ledger isn't for everyone. If you know your way around double-entry accounting, though, and want a fast, powerful, extremely flexible sledgehammer of a solution, you should take a look.

Have more questions about Ledger? Take a look at the Ledger site or read through the full documentation. There is an IRC channel as well, #ledger on freenode. Have questions about my qb2ledger script? Ask me on Twitter.

There and Back Again

September 16, 2012

The first new car I ever bought was my beloved 2001 Honda Prelude. I obsessed over every detail of that car, spending hours in the crappy Flash-based Honda online configuration tool before ever setting foot in the dealership. I babied the Prelude for every one of the 10 years that I owned it, and when I finally sold it it still looked like it was fresh off the showroom floor.

But times change, and a new baby and the need for more carrying capacity eventually made it necessary to move to a more practical vehicle. So when I picked up Forza 4 for the Xbox, one of the first things I did was check whether my old car was represented in the huge catalog of available in-game vehicles.

It sure was, and I scooped it up for 6,000 CR (which seems disrespectfully low, regardless of what a CR is "worth").

The in-game Prelude was only available in a few colours, and not in the Electron Blue Pearl of the erstwhile object of my obsession. Forza allows you to re-paint your car in any colour you want though (for free!) and in moments I was looking at an amazing likeness of my old car. I shook off the very mixed feelings and jumped into the driver's seat to start a race.

Lo and behold, the dashboard was lovingly reproduced. I was floored.

I don't own this car any more And this one isn't even real

And then I heard the engine roar. The Forza team actually went and matched the engine sound of the 2001 Honda Prelude. I was nauseous.

And sad. Because I went from owning my dream car to only being able to drive it in a computer game.

Still won the race, though.

Leader of the Pack

September 9, 2012

Recently I spent a moment watching a thin, scraggly looking guy walking a group of 6. Large. Dogs. Every once in a while, one of the dogs would try to pull in one direction or another, but, with a little bobbing and weaving the guy was able to unsteadily navigate in one general direction.

Were the dogs to all pull in the opposite direction this poor guy would instantly become cargo. He'd have no chance. And the dogs each demonstrated, from time to time, an individual desire to go in a direction other than the one imposed upon them. But their lack of coordinated action, in fact their lack of awareness that coordinated action was even possible, kept this little guy in charge.

Force equivalence

Putting aside for a moment that these dogs were evidently loyal, well trained animals that were not imminently going feral, it is interesting to look at it from the other side:

The thing that allowed this one small guy to take the 6. Large. Dogs for a walk was that he had a destination, and they, as a group, did not.

3-day

September 2, 2012

A few years after graduating from University I noticed that I wasn't very happy. By all accounts I had no reason to feel this way: a decent career, great friends, a healthy family, nothing to complain about. And yet I couldn't shake the feeling that some days I didn't want to get out of bed.

Probably Not the Answer

I had (and still have) friends who suffer from various forms and intensities of depression1. I did not realize at the time just how prevalent mental illness was, but I still knew that I didn't want any part of it. I was genuinely worried that I might be suffering from more than just the blues.

I wasn't sure, so I decided to implement what at the time seemed like a pretty clever lifehack. Every morning I would log, on a scale of 1-10, how happy and energized I felt. Just one number. And then based on what I wrote, I would choose my activities accordingly.

If the number was high, say 7 or above, I would go try to be all that I could be. If the number was 3 or lower, I would indulge myself in low-effort tasks and usually just spend the day watching movies and playing video games. I was working from home and mostly independently at the time so I didn't have to deal with going into the office every day. That was a big help in pulling this off.

I had some simple rules during these low-energy days (that I later dubbed "3-days"). No command decisions, no new projects, and no difficult work. Any time I found myself doubting the direction my life was taking, questioning my important relationships, or impatiently re-thinking a major work initiative I would remind myself that it was a 3-day, and I didn't make these kinds of moves on a 3-day.

This was a big relief. It was an instant excuse to short-circuit the downward spiral of doubt, fear, anxiety and guilt that I remember feeling during those days. And I got a lot of gaming in too.

I started to find that 3-days were often followed immediately by 5- or 6-days, which were then often followed by 7-, 8-, or 9-days. It was rare that I had two 3-days in a row and I don't think I ever had three in a row.

As soon as I realized this my 3-days became even less stressful2. I became confident that the way I felt right now wouldn't last long. If I just concentrated on taking care of myself today and not doing too much damage I would be better equipped to face the world the next day. Or the day after. After a few months I didn't need the log any more.

I still have 3-days. I think everyone does. But recognizing them, and acting accordingly, have worked for me so far.


1. By describing my own experiences I do not mean to trivialize the painful and substantially more complicated forms of mental illness suffered by so many of us. This summer the Center for Addiction and Mental Health (CAMH) ran a public awareness campaign in Toronto to address just how uninformed, dismissive, and hurtful we can be when faced with a friend or relative suffering from this kind of condition. Even though I consider myself not completely ignorant on the subject, some of the billboards, like the one I linked above, gave me pause, because I think I might have actually said these things to friends. If you think you might need help, please talk to a pro.

2. Another example of why I think tracking can be such a powerful tool.

Is Git useful at the beginning of a project?

August 27, 2012

Oooh! Reader mail!

"Upcoming Entrepreneur" writes asking whether Git is relevant for an unreleased, one-man development project.

From my standpoint, once you've got a 1.0 release of your software product, Git makes perfect sense to help you better control hot fixes, feature branches, and major version releases. ... Am I crazy to think that I shouldn't bother with Git until I release a 1.0?

So, just how much value does version control, and especially industrial-strength version control like Git, add to something that you're working on alone and hasn't even hit production yet?

Here's what a distributed version control system (DVCS), like Git or Mercurial, is good for at this stage of a project:

  1. Letting you back out of a poor design choice. Truthfully, I've only taken advantage of this when I had the foresight to branch before heading off in my ill-fated coding direction.

  2. Helping you learn and practice DVCS before you're in production. Make no mistake, once your project grows to multiple people or gets actual clients, you will need Git or Mercurial or something similar. Learn now while mistakes are cheap to fix. As you've discovered, there is a learning curve, and there is a difference between using your DVCS like a butter-knife and actually wielding it like a double-bladed axe of coding justice.

  3. Integration into your automated testing and deployment infrastructure. I use Git repositories to manage the deployment process of several projects and it's a nice way to do that. There are alternatives of course, but if you've got the axe already...
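Point 1 only pays off if you actually branch first. A minimal sketch of that workflow (the branch name is made up, and `master` is assumed as the main branch, as was the convention):

```shell
# Branch first, so the experiment is cheap to abandon
git checkout -b experimental-rewrite

# ...hack away, committing as often as you like...

# If it works out, merge it back into the main line:
git checkout master
git merge experimental-rewrite

# If it doesn't, delete the branch and pretend it never happened:
git checkout master
git branch -D experimental-rewrite
```

The whole point is that branching is so cheap in a DVCS that there is no excuse not to do it before every risky change.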

Modern editors and IDEs sometimes give you simplified alternatives to DVCS. Xcode has snapshots, AppCode has Local History, and so on. Frankly, I was never interested in these because they're really just safety nets for use when you don't commit often enough. You won't need them once you are using your DVCS well (committing often, branching and merging or abandoning fearlessly, etc.). And like I said, you will need Git or something like it eventually.

If you really hate the Git command-line, by the way, there are alternatives. Mercurial is well regarded and has been called a somewhat saner approach to the whole thing. There are also nice graphical clients (I use SourceTree) but try to force yourself to stay on the command-line for as long as you can... you will learn faster.

Thanks for writing in "Upcoming Entrepreneur" and kicking my butt back into blogging. I took this as an excuse to scratch an itch and put together a simple static blogging engine based on Dropbox ... because it's all the rage.

More questions? You can always reach me on Twitter @genegoykhman.

Tables Anonymous

May 13, 2008

Hi, my name is Gene and I use table layouts. I really want to use CSS for positional layout, I really really do. And every few years I look at the various sites I manage and think to myself, Gene, today is the day you replace all this hacky table garbage with nice, clean, browser-independent CSS.

So I scrap the tables and it feels good, like getting rid of accumulated crud always feels. And I start styling. Now, I am by no means a CSS expert and I look up a lot of stuff online and in books, but I now know enough to get around basic grid-like layouts and have figured out the positional properties more or less. And I'm not really doing anything fancy. So, initial progress tends to be quick and in an hour or two I have a pretty sharp, pure-CSS layout working in my test browser (currently Safari).

Encouraged and emboldened I start testing a couple of other browsers that I'd like to support, like IE6 and 7 and Firefox. I notice a few display discrepancies and here is the TSN turning point. This is the exact point at which my CSS journey becomes a death-spiral into whack-a-mole layout hell. I spend 3, 4, 5 hours madly swapping between test browsers trying to figure out why fixing one breaks another ad infinitum.

First I get creative, trying various tricks of cascading styles and divs so that every browser gets fed the kibble they like best. At this point I'm still clinging to the hope that I don't have to use any browser detection nonsense. And for a while it looks like I'm making progress. But inevitably, every single time, one browser will have an unexplained white space between elements that are supposed to be flush or something will wrap where it's obviously not supposed to wrap. After creativity comes desperation and I start madly trolling the Internet for all kinds of crazy (I'll say it again: crazy) hacks that people use to work around the various browser idiosyncrasies. JavaScript expression syntax, even (OMG I can't believe this exists) conditional JavaScript compilation, and by this point it's about 3:00 AM and I start to ask myself the following question:

How is this horrid, ugly, hack of a CSS layout any more elegant than my original table layout?

Obviously it isn't, so I roll back all my changes and call it a night. Maybe next year things will be different. Maybe by then Microsoft, Apple and the Mozilla foundation will all get together over beers and settle this crap once and for all. And maybe, just maybe, CSS will become as reliable and consistently rendered as my 5+ year old table layout has been. Maybe I'll learn the magic secret of browser-independent CSS and will laugh at the problems I'm facing now. But I doubt it. Say what you will about CSS vs. tables. At least tables work.