Author's note: Here's another random "from the archives" post. This is from May 6, 2007. It's more of a "personal reminiscence" type post, more in keeping with the "personal journal" nature of a blog than my usual subject matter. But hey, it's my website, so I can post whatever the heck I want.
I live in Corning. It's a small city in central New York, about 30 minutes from the Pennsylvania border. As locations for software developers and other technologists go, it's not quite in the middle of nowhere, but it's close.
However, there are still advantages to be had. For example, there are the air shows. It just so happens that the Elmira/Corning regional airport is also the location of the National Warplane Museum (note: that has changed - now it's the Wings of Eagles), and the airport just happens to be across the street from the local mall. So when we went shopping on Sunday, we got a nice view of the day's airshow while we were driving around. The part we saw was a bright red biplane doing loops and barrel rolls and flying in low over the highway exit. It was actually very cool. I wish I'd had a camera....
Commentary: I remember watching this plane as my wife and I walked through the parking lot of the Tops supermarket in the plaza across the street from the mall. I don't remember exactly why we were there (it wasn't our normal market) or what we'd done before, but I do remember that it was a sunny afternoon and I kept looking up at the red biplane doing loops and barrel rolls overhead. It was a symbol of freedom - rolling unfettered through the blue sky, no cares about what was going on below. I was a little jealous - I'd been feeling trapped in a job that I increasingly disliked, and I wanted to be like that plane.
It's funny how images like that will stick with you sometimes. It's been seven years, but I still remember that plane, even though I don't remember any of the context. In fact, the only other thing I remember doing that day was working in my garden and thinking about that plane. Perhaps it's not a coincidence that my garden was one of my main sources of escapism....
Author's Note: This is another entry in my "from the archives" series. I've decided it's time to clear some of those out of my queue. This was an unfinished stub entry I last edited on April 20, 2007, when I was still working in county government. I actually still have some recollection of the incident that inspired this entry, nearly seven years later. There wasn't much to the original entry itself (as I said, it was kind of a stub), so this post will feature some commentary on my own comments. Yeah, I could try to actually finish the post, but I only vaguely recall what my original point was and I'd rather make a different one now.
Question: what is the purpose of having a "help desk" when all they do is forward support requests to the programmers and analysts? Isn't that the opposite of help?
I admit that's not entirely fair. Our help desk does all sorts of things and often solves small problems on their own. So it's not like they're just a switchboard.
It just annoys me when they pass on a request without even trying to get any more details. For instance, when they get e-mail requests, they just copy and paste the body of the e-mail into our help desk software. Then they print out the ticket on a sheet of blue paper and lay it on the desk of the programmer or analyst who last worked with the department or system in question. There's no indication that they even asked the user any questions. It seems like I'd be just as well off if the user just e-mailed me directly. At least we wouldn't be killing trees that way.
Commentary: I don't even really remember what particular issue prompted this anymore. I just remember coming back to my desk one day, seeing a blue printout of a help desk ticket, and realizing from what it contained that the "help" desk had done no analysis or troubleshooting on it whatsoever.
This was one of those moments that was a combination of anger and shaking my head in resignation. I had a lot of those in 2007. I guess it was just part of the realization that government IT just wasn't for me. In fact, it was that August that I took the job with eBaum's World and moved into full-time development, which is really what I wanted to be doing all along. So really, it all worked out in the end.
Today is a special day - it's my grandma's 100th birthday. My paternal grandmother, Althea Geer, was born a century ago today. I'm not much on the sappy personal posts, but I figured this deserved mention. Birthdays happen every year, but it's not often you get to celebrate a milestone like that in your family.
My grandmother still lives by herself in a tiny village in upstate New York, in the same house where she raised my father. She's lived there most of her life - I remember playing in the back yard as a child. We'll be going over there to visit her for an open-house celebration on Saturday. I'm sure we'll have a pretty good crowd coming through. And then on Sunday the plan is to have a family celebration at my parents' house.
I'm hoping to get some nice pictures of the occasion. In particular, some of grandma with my son. He's her only great-grandchild and she adores him. Unfortunately, it's a pretty long trip, so she doesn't get to see him very often. So it'll be nice for him to be there for such a special event. Of course, he's still too young to remember it when he's grown up, but I hope to get some pictures to commemorate the day.
So even though you're never going to see this entry (I don't think she's ever even used a computer), happy birthday, Grandma!
After shamelessly reusing passwords for far too long, I finally decided to get myself a decent password manager. After a few false starts, I ended up going with KeePass. In retrospect, I probably should have started with that, but my thought process didn't work out that way.
Originally, my thought was that I wanted to use a web-based password manager. I figured that would work best as I'd be able to access it from any device. But I didn't want to use a third-party service, as I wasn't sure how much I wanted to trust them. So I was looking for something self-hosted.
I started off with PPMA, a little Yii-based application. It had the virtue of being pretty easy to use and install. There were a few downsides, though. The main one was that it wasn't especially mobile-friendly - parts of the app actually didn't work on my phone, which defeats the whole "works on any device" plan. Also, it really only supported a single user, so it's not something I could easily set my wife up on as well. (To be fair, the multi-user support was sort of there, but it was only half-implemented. I was able to get it basically working on my own, but still.)
More importantly, I wasn't entirely confident in the overall security of PPMA. For starters, the only data it actually encrypted was the password. Granted, that's the most important piece, but that's a pretty minimalist approach to account security. Even worse, I wasn't 100% convinced that even that was secure - it wasn't clear to me that it didn't store a password or key in session data that could be snooped on a shared server. Of course, I hadn't done an extensive analysis, so I don't know whether it actually has any problems, but the possibility was enough to make me wary, and I didn't really want to do a full audit of the code (there was no documentation to speak of, and certainly nothing on the crypto scheme).
The next package I tried was Clipperz. This is actually a service, but their code is open-source, so you could conceivably self-host it. I had a bit more confidence in this one because they actually had some documentation with a decent discussion of how their security worked.
The only problem I had with Clipperz was that I couldn't actually get it to work. Their build script had some weird dependencies and was a pain to deal with (it looked like it was trying to check their source control repository for changes before running, for some reason). And once I got it installed, it just flat-out didn't work. I was able to create a new account, but after that every request just returned an error. And to make things worse, it turns out their PHP backend is ancient and not recommended - it's still using the old-school MySQL database extension. The only other option was the AppEngine Python backend, which wasn't gonna work on my hosting provider. So that was a bust.
It was at that point that I started to think using a web-based solution might not be the best idea. Part of this is simply the nature of the web - you're working over a stateless protocol and probably using an RDBMS for persistence. So if you want to encrypt all the user's data and avoid storing their password, then you're already fighting with the medium. A desktop app doesn't have that problem, though - you can encrypt the entire data file and just hold the data in memory when you decrypt it.
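To make that contrast concrete, here's a toy sketch of the desktop model in Python. This is emphatically not how KeePass actually works internally - it's just the general idea, using the third-party cryptography package's Fernet as a stand-in cipher: encrypt the entire file at rest, and only hold the decrypted data in memory.

```python
# Toy sketch of the desktop-app model, NOT KeePass's actual design:
# the whole data file is encrypted at rest, and the decrypted
# entries exist only in memory while the program runs.
# Requires the third-party "cryptography" package.
import json

from cryptography.fernet import Fernet

def save_vault(path, entries, key):
    """Serialize every entry and encrypt the entire file."""
    token = Fernet(key).encrypt(json.dumps(entries).encode("utf-8"))
    with open(path, "wb") as f:
        f.write(token)

def load_vault(path, key):
    """Decrypt the file and return the entries as an in-memory dict."""
    with open(path, "rb") as f:
        return json.loads(Fernet(key).decrypt(f.read()))

# A real password manager would derive the key from the master
# passphrase with a KDF; generating one directly keeps the sketch short.
key = Fernet.generate_key()
save_vault("vault.bin", {"example.com": {"user": "me", "password": "hunter2"}}, key)
print(load_vault("vault.bin", key))
```

There's no server and no session state, and nothing unencrypted ever touches the disk - which is exactly what's hard to achieve in a web app.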
It also occurred to me that accessing my passwords from any computer might not be as valuable as I'd originally thought. For one thing, I probably can't trust other people's computers. God alone knows what kind of malware or keyloggers might be installed on a random PC I would use to access my passwords. Besides, there's no need to trust a random system when I always have a trusted one with me - namely, my phone.
Great! So all I really need is a password manager that runs on Android.
Well...no, that won't do it. I don't really want to have to look up passwords on my phone and manually type them into a window on my desktop. So I need something that produces password databases that I can use on both Android and Windows.
Luckily, KeePass 2 fits the bill. It has a good feature set, seems to have a good reputation, and the documentation had enough info on how it works to inspire some confidence. The official application is only Windows-based, but there are a number of unofficial ports, including several to iOS and Android. It's even supported by the Ninite installer, so I can easily work it into my standard installation.
For me, the key feature that made KeePass viable was that it supports synchronization with a URL. There are extensions that add support for SSH and cloud services, if you're into that sort of thing, but synchronizing via standard FTP or WebDAV is built right in. KeePass also supports triggers that allow you to automatically synchronize your local database with the remote URL on certain events, e.g. opening or saving the database.
For the mobile side, I decided to go with Keepass2Android. There are several options out there, but I chose this one because it supports reading and writing the KeePass 2.x database format (which not all of them do) and can directly read and write files to FTP and WebDAV. It's also available as an APK download from the developer's site, as opposed to being available exclusively through the Google Play store, which means I can easily install it on my Kindle Fire.
Keepass2Android also has a handy little feature called "QuickUnlock", which allows you one chance to unlock your database by typing just the last few characters of your passphrase. If you get it wrong the database is locked and you need to enter the full passphrase. This addresses one of my main complaints about smart phones - the virtual keyboards work to actively discourage good passwords because they're so damned hard to type. I chose a long passphrase which takes several seconds to type on a full keyboard - on a virtual keyboard, it's absolutely excruciating. This way, I don't have to massively compromise security for usability.
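The mechanics are simple enough that a few lines of Python can illustrate the idea. To be clear, this is just my understanding of the concept, not Keepass2Android's actual code, and the suffix length here is an arbitrary choice for the example:

```python
# Sketch of the QuickUnlock idea (not Keepass2Android's real code):
# accept just the tail of the passphrase, but only give one chance.
import hmac

QUICK_LEN = 3  # arbitrary suffix length, purely for illustration

def quick_unlock(passphrase, attempt):
    """One chance to match the passphrase's last few characters.
    A real implementation would compare against stored key material,
    not keep the plaintext passphrase around like this."""
    return hmac.compare_digest(passphrase[-QUICK_LEN:].encode(), attempt.encode())

passphrase = "correct horse battery staple"
if quick_unlock(passphrase, input("QuickUnlock: ")):
    print("Database unlocked")
else:
    print("Locked - enter the full passphrase")  # no second chance
```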
So, in the end, my setup was fairly straightforward.
- I install KeePass on all my computers.
- I copy my KeePass database to the WebDAV server I have set up on my web hosting.
- I set up all my computers with a trigger to sync with the remote URL.
- I install Keepass2Android on my phone and tablet.
- I configure them to open the database directly from the URL. Keepass2Android caches remote databases, so this is effectively the same as the desktop sync setup.
- Profit! I now get my password database synchronized among all my computers and devices.
I've been using this setup for close to a month now, and it works pretty darn well. Good encryption, good usability, plenty of backups, and I didn't even have to involve a third-party service.
I got a new toy the other week - a Sandisk Wireless Flash Drive. This was not normally something I would have bought, but it showed up as a Kindle-exclusive special offer on my new Kindle Fire HDX (post on that coming later) - the 32GB model was only $20, which is about 66% off the normal retail price. I didn't really know anything about the device or how it worked, but for only $20, I figured, why not?
It turns out that this is an interesting little "flash drive". First, to be clear, it's not really what I would normally consider a "flash drive". For starters, it doesn't actually have any built-in storage - it has a MicroSD slot with a 32GB card in it. So the "flash drive" itself is more like a MicroSD card reader that's also a networking appliance.
The networking portion is actually kind of cool. In its most basic configuration, the Wireless Flash Drive acts as a WiFi access point. You associate with it, it supplies you an IP address, and it can serve out content. However, it also allows you to configure it to associate with an internet-connected access point. So you can tell it the SSID and WPA credentials of your network, and anything on the LAN will be able to access it.
The device is built to work with a mobile app (available for both iOS and Android) that lets you not only access the data on it, but also configure it. However, while the app is actually not bad, it turns out you don't need it. The device provides a web interface that lets you configure it as well as browse content. And on further research, it turns out that the Wireless Flash Drive serves its content out over WebDAV! So forget the mobile app - you can actually access it from any PC with a WebDAV client.
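Just to illustrate how open that makes it, here's a minimal WebDAV directory listing in Python. The device address below is a made-up placeholder (substitute whatever your drive actually reports):

```python
# Minimal WebDAV directory listing over plain HTTP.
# DEVICE_URL is a hypothetical placeholder address, not the
# drive's actual default - use whatever yours hands out.
import xml.etree.ElementTree as ET

import requests

DEVICE_URL = "http://192.168.1.123/"

resp = requests.request(
    "PROPFIND",
    DEVICE_URL,
    headers={"Depth": "1"},  # list just the top-level directory
    timeout=10,
)
resp.raise_for_status()

# PROPFIND returns a multi-status XML document; print each entry's href.
for href in ET.fromstring(resp.content).iterfind(".//{DAV:}href"):
    print(href.text)
```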
Though I haven't had much opportunity to use it heavily yet, I have to say the Wireless Flash Drive is actually a pretty cool little device. I probably wouldn't pay $60 for it, simply because I don't need it that much. But if you happen to have a distinct use for such a device, it's pretty cool and more open to tinkering than I'd expected.
For the last few years, I've been using this tool called "go" - an eminently unsearchable name. Basically, it's kind of like bookmarks for the command line - you can set directories to be remembered and go back to them with a single short command.
Anyway, shortly after I started using it, I posted a patch that added support for Powershell (which I call go-posh). That worked well and everything was fine. Then, a bit later, I added a few more small patches to the application, such as an option to print the directory path rather than changing to it.
Well, that was all well and good. I'd been using my patched version every day for quite a while. I even added it to my Mercurial repository for safe keeping.
What I didn't realize is that I never updated my original blog post with my second set of changes. I discovered this the hard way a couple of weeks ago, on the computer at my new job. When I set up go-posh on that computer, I just used the ZIP archive from my blog post, rather than cloning it from my Mercurial repository like I normally would. It worked just fine for a few weeks, until I tried to run something like gvim (go -p sb/somefile.txt) and was informed that the -p option didn't exist.
I hate it when I do things like that. It's such a stupid mistake and it's extra embarrassing because it's been wrong for nearly three years.
Anyway, I've updated the old blog entry. That post is a little out-of-date now, so I also linked it to the project tracker page, which is the canonical source of information anyway. I even rebuilt the Mercurial repo to reflect the actual changes I made to the stock install, rather than the context-free initial import I had before.
So anybody who's interested should grab the code from there. The additions over what I used to have in the blog post download include the aforementioned -p option, as well as resolution of shortcut prefixes when they aren't ambiguous (e.g. if the shortcut is "downloads", you can just use "dow"), home directory detection for Windows, and improved support for the -o option. You can see more info in the README.txt file.
Like many geeks, I have an Amazon Prime subscription. In addition to free shipping on many orders (which may well pay for the subscription by itself), Prime comes with a number of other features, such as streaming of selected Amazon Instant Video titles (which also paid for my subscription the last two years by having all of the Stargate series), and 5GB of free storage on Amazon Cloud Drive, which will be my topic for today.
Now, I actually kinda like some aspects of Cloud Drive. It's not terribly expensive - it's $0.50/GB per year, and they offer six tiers from 20GB to 1TB. It integrates with my Kindle Fire HD, and they have nice, unobtrusive apps for Windows, Mac, and Android that keep your files in sync. The Android app even has a nice little option to automatically sync any pictures you take to your cloud drive. So it works pretty well for me.
The one problem with Cloud Drive is sharing. Normally, that's not something I care a great deal about. However, there's that photo uploading feature of the Android app that I mentioned. And let's face it - if you have a smart phone, then it is your camera. And if your camera is automatically uploading pictures to your Cloud Drive, well then it makes perfect sense that you would share the pictures from your Cloud Drive when you want to send them to your friends and family.
Except you can't.
Well, to be fair, you actually can. You just won't want to, because it sucks too much.
Maybe it would be easier to actually show you the problem. First, here's how you share a file in Cloud Drive:
Nice and simple - you select a file and click the "share" item in the menu. Self-explanatory, isn't it? That'll give you a dialog that looks like this:
It's got a nice preview and the share URL and everything. Sweet!
Now, here's another picture. See if you can spot what's different.
Did you catch them both? One is that there are two files selected this time instead of one. The other is that the SHARE ITEM IS FREAKING DISABLED!
So I can't share multiple files? Well, that's OK - I'll just move them both into a folder and share the entire folder. Except that I can't because sharing is FREAKING DISABLED ON FOLDERS!
So what's the upshot of this? Well, if I want to share all those nice photos that my phone has so helpfully auto-uploaded, I have to do it one at a friggin' time. That means my workflow looks like this:
- Select a photo in Cloud Drive.
- Click the "share" menu item.
- Copy the share URL from the dialog.
- Paste that URL into an e-mail or something.
- Dismiss the dialog.
- Return to step 1 and repeat for the next item.
Sure, for a handful of items, this isn't a big deal. But what happens when you have a couple dozen photos you want to share? Does Amazon seriously think I'm going to sit there and painstakingly share each individual file, noting down the share URLs? Not a chance in hell. I'm going to take advantage of the fact that the Cloud Drive desktop app syncs them to my PC, bypass Amazon, and just upload my pictures to another site with non-crappy sharing features. (I've been using Sta.sh, mainly because I'm familiar with it, what with having spent nearly two years helping to build it when I worked for deviantART.)
The thing that really bothers me is that this isn't a new problem. It's not like Amazon just rolled out the sharing feature and it's going to be improved very soon. If this were just a temporary stop-gap then I could forgive it. But it's been like this for a while. I don't remember when I first noticed this, but I think it's been a year or more.
And the worst part is that this is just such insultingly bad product design. Seriously, has nobody at Amazon even tried to use this freaking thing? Is the product manager just so oblivious that it never occurred to him that maybe people might want to share more than two pictures at a time? And did nobody above or below him bother to question it? Or is it just a case of trying to shut up some of their more vocal customers who demanded this by giving them a crappy, mostly-broken version of what they asked for in the hopes that they'll go away?
There are loads of other ways to do this that would solve the problem. And I get that there may be some non-obvious technical limitations to some of the possible solutions. I've seen such issues myself when I was working on Sta.sh. For instance, maybe there are back-end issues that make sharing folders actually much harder to implement than it would seem. That's fair.
But even so, is there really any excuse for this? After all, Cloud Drive is all AJAX-based, so is there any reason why they couldn't have done the UI in a way that would allow multi-sharing? I know I certainly can't think of one. It wouldn't even be hard. All you'd need to do is enable the "share" menu for multiple selections, have it fire off one AJAX request per item to get the share URLs, and then aggregate them in a text area so that you can copy and paste them en masse. I'm not saying that would be ideal, but it would be better than what they have now. Plus it would be easy. Heck, I even think I could implement that in a Greasemonkey script if I really wanted to - I looked at the AJAX requests and they're not complicated.
So seriously, Amazon, get your act together. I don't know who signed off on this half-assed feature, but they need a smack in the head. It's almost 2014 - this crap doesn't fly anymore.
I've discovered a small caveat to the technique for OSX-like trackpad behavior that I posted about the other week. I probably should have anticipated this - it seems fairly obvious in hindsight, but didn't seem so at the time.
It turns out that X-Mouse runs in a different user context than the Windows UAC dialog. Who knew, huh? The upshot is that any button that's been configured with X-Mouse will revert to its default behavior in the UAC dialog. In other words, you have to click the system-default "left button" to make the dialog work.
Not actually a huge deal, since your X-Mouse config comes back once the elevated process has been launched, but a bit weird.
Note: This post is mostly a note to myself. I don't do this often and I always forget a step when I need to do it again.
I have Mercurial set up on my hosting provider. I'm using hgweb.cgi and it works well enough. However, Mercurial does not seem to support pushing new repositories remotely when using this configuration. That is, you can't just run an hg push on the client or anything like that - you need to do some manual setup.
The steps to do this are as follows:
1) On the client, run hg init to create a new repository.
2) Copy this directory to your server in the appropriate location.
3) On the server, edit new_repo_dir/.hg/hgrc to something like this (if one came over in the copy from step 2, just nuke it):
[web]
allow_push = youruser
description = Some description
name = your_repo_dir
4) Add a line like this to the [paths] section of your hgweb config file (typically hgweb.config):
your_repo_dir = /path/to/repo/your_repo_dir
Assuming that the rest of the server is already set up, that should do it. I always keep forgetting one of the last two steps, for some reason (probably because I only do this once in a while).
I spent some time looking through some old links the other day. I imported all my bookmarks into Lnto (which I really need to release one of these days) and I was browsing through some of the ones that I've had hanging around forever. Some of them date back to when I was in college.
Turns out quite a few of them were dead. Some of these were not unexpected. There were a few links to cjb.net, members.tripod.com, and suchlike sites that are now defunct. There were also several links to university web pages, many presumably belonging to students who have long since graduated.
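(Checking a pile of bookmarks by hand would be tedious; a quick script along these lines will flag the dead ones. This is just a sketch - it assumes you've exported the bookmarks to a plain text file with one URL per line, which is an assumption for the example, not how I actually store them.)

```python
# Quick-and-dirty dead-link check. Assumes bookmarks.txt contains
# one URL per line (an assumption for this sketch, not how any
# particular bookmark manager stores things).
import requests

with open("bookmarks.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

for url in urls:
    try:
        # Some servers reject HEAD, so retry with GET on a bad status.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code >= 400:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        status = str(resp.status_code)
    except requests.RequestException as exc:
        status = type(exc).__name__  # connection failures, timeouts, etc.
    print(f"{status}\t{url}")
```

Of course, a script like that only catches outright failures - the parked and repurposed domains below still return a cheerful 200.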
Several of them were also domains that had changed hands. Most were parked and covered with ads. One was an anime fan site that now redirects to the official site of the distributor.
The most interesting one was a Final Fantasy fan site that is now an "escort service" site. Out of curiosity, I looked the site up in the Wayback Machine and found that this is actually a fairly recent development. Apparently the fan site was in existence until 2009. In 2010, the archived copies are mostly just empty directory listings. These continue into 2011, and then there's one copy that appears to be a broken and/or spammy blog. There are no archived pages from 2012, and then in 2013 there's a GoDaddy parked-domain page in June, followed by the escort service site in July.
It's strange how the web works. Despite the talk about how digital content lasts forever and how it's virtually impossible to completely delete anything you put online, the truth is that content on the web is surprisingly ephemeral. Sites regularly disappear with no explanation; content gets modified with no indication whatsoever to readers; sites get reorganized, breaking every external link and just redirecting them to the front page. It's a wonder people manage to find anything at all!
This has been on my mind anyway, since I've been meaning to get back to refactoring LnBlog (which is a topic for another post). As part of that, I was going to work on a nicer URL structure. That piece is easy, but I'm committed to keeping all the old links valid. That's less easy, but not unmanageable. (It's actually further complicated by the fact that I'm considering moving off of subdomains....)
The thing is, I've owned this domain for nearly ten years and URLs are something I never really put a great deal of thought into. But it seems obvious that I need to start thinking seriously about the best way to manage them. I want the content on my site to have true permalinks - I want the college kids who bookmark a blog entry today to still be able to visit that link when their kids are in college.
This will require some planning and future-proofing. And I'm not just talking about the URLs themselves - those are the easy part - but about conventions for different types of content, what constitutes "permanent" content, and how I'm going to maintain all this stuff across potentially many changes in hosting and underlying technology. If I'm going to have this site until I die (and that is the plan), I'm eventually going to have an awful lot of content, and it would pay to have a plan for how to deal with it.