On GitHub pipelines and diverging branches

Just before Christmas I started a new job.  I won't get into the details, but my new team has a different workflow than I'm used to, and the other day I noticed a problem with it.  My colleague suggested that the experience might make for good blog-fodder, so here's the break-down.

First, let me start by describing the workflow I was used to at my last job.  We had a private GitLab instance and used a forking workflow, so getting a change into production went like this:

  1. Developer forks the main repo to their GitLab account.
  2. Developer does their thing and makes a bunch of commits.
  3. Developer opens a merge request from their fork's branch to the main repo's main branch.  Code review ensues.
  4. When review is complete, the QA team pulls down the code from the developer's fork and tests in a QA environment.  Obviously testing needs differ, but a "QA environment" is generally exactly the same thing as a developer environment (in this case, a set of disposable OpenStack VMs).
  5. When testing is complete, the merge request gets merged and the code will go out in the next release (whenever that is - we didn't do continuous deployment).
  6. Every night, a set of system-level tests runs against a production-like setup that uses the main branches of all the relevant repos.  Any failures get investigated by developers and QA the next morning.

I'm sure many people would quibble with various parts of this process, and I'm not going to claim there weren't problems, but it worked well enough.  The key feature to note here, though, is the simplicity of the branching setup.  It's basically a two-step process: you fork from X and then merge back to X.  You might have to pull in new changes along the way, but everything gets reconciled sooner or later.

The new team's process is not like that.  We use GitHub, and instead of one main branch, there are three branches to deal with: dev, test, and master, with deployment jobs linked to dev and test.  And in this case, the merging only goes in one direction.  So a typical workflow goes like this (there's a rough command-line sketch after the list):

  1. Developer creates a branch off of master, call it "feature-X".
  2. Developer does their thing and makes a bunch of commits.
  3. Developer opens a pull request from feature-X to dev and code review ensues.
  4. When the pull request is approved, the developer merges it and the dev branch code is automatically deployed to a shared development environment where the developer can test it.  (This might not be necessary in all cases, e.g. if local testing is sufficient.)
  5. When the developer is ready to hand the code off to QA, they open a pull request from feature-X to test.  Again, review ensues.
  6. When review is done, the pull request gets merged and the test branch code is automatically deployed to test, where QA pokes at it.
  7. When QA is done, the developer opens a pull request from feature-X to master and (drum roll) review ensues.
  8. When the master pull request is approved, the code is merged and is ready to be deployed in the next release, which is a manual (but pretty frequent) process.

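For concreteness, here's roughly what that dance looks like from the command line.  (This is just a sketch - the branch names come from the list above, and the gh CLI calls are only one way to open the pull requests; clicking through the GitHub web UI is equivalent.)

    git checkout master && git pull        # start from master
    git checkout -b feature-X              # create the feature branch
    # ...hack, commit...
    git push -u origin feature-X

    # One pull request per target branch, opened at different points in the process:
    gh pr create --base dev --head feature-X --title "Feature X"
    gh pr create --base test --head feature-X --title "Feature X"
    gh pr create --base master --head feature-X --title "Feature X"
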
You might notice something odd here - we're only ever merging to dev and test, never from them.  There are occasionally merges from master to those branches, but never the other way around.  Now, in theory this should be fine, right?  As long as everything gets merged to all three branches in the same order, they'll end up with the same code.  Granted, it's three times as many pull requests to review as you really need, but other than that it should work.

Unfortunately, theory rarely matches practice.  In fact, the three branches end up diverging - sometimes wildly.  On a large team, this is easy to do by accident - Bob and Joe are both working on features, Bob gets his code merged to test first, but testing takes a long time, so Joe's code gets out of QA and into master first.  So if there are any conflicts, you have the potential for things like inconsistent resolutions.  But in our case, I found a bunch of code that was committed to the dev branch and just never made it out to test or master.  In some cases, it even looks like this was intentional.

So this creates an obvious procedural issue: the code you test in QA is not necessarily the same as what ends up in production.  This may be fine, or it may not - it depends on how the code diverges.  But it still creates a real risk, because you don't know whether the code you're releasing is actually the same as what you validated.

But it gets worse.  This also creates issues with the GitHub pipeline, which is where we get to the next part of the story.

Our GitHub pipelines are set up to run on both "push" and "pull_request" actions.  We ended up having to do both in order to avoid spurious error reporting from CodeQL, but that's a different story.  The key thing to notice here is that, by default, GitHub "pull_request" actions don't run against the source branch of your pull request, they run against a merge of the source and target branches.  Which, when you think about it, is probably what you want.  That way you can be confident that the merged code will pass your checks.
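If you want to see what the pipeline is actually going to test, you can reproduce that merge locally.  Here's a sketch using the branch names from this post; GitHub also exposes the same merge result as a ref named refs/pull/<number>/merge, if you'd rather fetch it directly.

    # Build (roughly) the same merge that a "pull_request" job checks out:
    git fetch origin dev
    git checkout -b feature-X-merge-check origin/dev
    git merge --no-ff feature-X    # if this merge is broken, the PR pipeline will be too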

If you're following closely, the problem may be evident at this point - the original code is based on master, but it needs to be merged to dev and test, which diverge from master.  That means you can get into a situation where a change introduces breakage in code from the target branch that isn't even present in the source branch.  This makes it very hard to fix the pipeline.  Your only real choice at that point is to create a new branch off the target branch, merge your code into that, and then re-create the pull request from the merged branch.  This is annoying and awkward at best.
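In practice, that workaround looks something like this ("feature-X-dev" is just a name I'm making up for the merged branch):

    git fetch origin dev
    git checkout -b feature-X-dev origin/dev
    git merge feature-X            # fix the breakage from the target branch here
    git push -u origin feature-X-dev
    # ...then open the dev pull request from feature-X-dev instead of feature-X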

But it gets worse than that, because it turns out that your pipeline might report success, even if the merge result would be broken!  This appears to be a GitHub issue and it can be triggered simply by creating pull requests.  

The easiest way to explain is probably by describing the situation I actually ran into.  I had a change in my feature-X branch and wanted to go through our normal process, which involves creating three pull requests.  But in my case, this was just a pipeline change (specifically, adding PHPStan analysis), so it didn't require any testing in dev or test.  Once it was approved, it could be merged immediately.  So here's what I did:

  1. First, I created a pull request against dev.  The "pull_request" pipeline here actually failed, because there was a bunch of code in the dev branch that violated the PHPStan rules and wasn't in master, so I couldn't even fix it.  Crud.
  2. After messing around with dev for a while, I decided to come back to that and just create the pull requests for test and master.
  3. So I created the pull request for test.  That failed due to drift from master as well.  Double crud.
  4. Then I created the pull request for master.  That succeeded, as expected, since it was branched from master.  So at least one of them was reviewable.
  5. Then I went back and looked at the dev pull request and discovered that the "pull_request" pipeline job now reported as passing!

Let me say that more explicitly: the "pull_request" job on my pipeline went from "fail" to "pass" because I created a different pull request for the same branch.  There were no code changes or additional commits involved.

Needless to say, this is very bad.  The point of running the pipeline on a pull request is to verify that it's safe to merge.  But if just doing things in the wrong order can change a "fail" to a "pass", that means that I can't trust the results of my GitHub pipeline - which defeats the entire purpose of having it!

As for why this happens, I'm not really certain.  But from my testing, it looks like GitHub ties the results of the "pull_request" job to the last commit on the source branch.  So when I created the pull request to dev, GitHub checked out a merge of my code and dev, ran the pipeline, and it failed.  It then stored that result against the last commit on the branch.  Then I created the master pull request.  This time GitHub ran the pipeline jobs against a merge of my code with master and the jobs passed.  But it still associated that result with the last commit on the branch.  Since the commit and branch are the same for both pull requests, the success clobbered the failure on the dev pull request and they both report a "pass".  (And in case you're wondering, re-running the failed job doesn't help - it just re-runs against whatever merge it tested last, so the result doesn't change.)

The good news is that this only seems to affect pull requests with the same source branch.  If you create a new branch with the same commits and use that for one pull request and the original for the other, they don't seem to step on each other.  In my case, I actually had to do that anyway to resolve the pipeline failures.

So what's the bottom line?  Don't manage your Git branches like this!  There are any number of valid approaches to branch management, but this one just doesn't work well.  It introduces extra work in the form of extra pull requests and merge issues; it actually creates risk by allowing divergence between what's tested and what's released; and it just really doesn't work properly with GitHub.  So find a different approach that works for you - the simpler, the better.  And remember that your workflow tools are supposed to make things easier.  If you find yourself fighting with them, then you're probably doing something wrong.

OneDrive for Linux

As I mentioned a while ago, I replaced my desktop/home server this past summer.  In the process, I switched from my old setup of Ubuntu running Trinity Desktop to plain-old Ubuntu MATE, so I've been getting used to some new software anyway.  As part of this process, I figured it was time to take another look for OneDrive clients for Linux.

See, I actually kind of like OneDrive.  I have an Office 365 subscription, which means I get 1TB of OneDrive storage included, so I might as well use it.  I also happen to like the web interface and photo-syncing aspects of it pretty well.

However, I'm slightly paranoid and generally distrustful of cloud service providers, so I like to have local copies and offline backups of my files.  This is a problem for me, because my primary Windows machine is a laptop, and I don't want to pay the premium to put a multi-terabyte drive in my laptop just so I can sync my entire OneDrive, and scheduled backups to a USB disk are awkward for a laptop that's not plugged in most of the time.  Now, I do have a multi-terabyte drive connected to my Linux desktop, but for a long time there were no good OneDrive sync clients for Linux.  In the past, I had worked around this by using one-off sync tools like Unison (which...mostly worked most of the time) or by setting up an ownCloud sync on top of the OneDrive sync (which worked but was kind of janky).  However, those approaches depended on syncing from my Windows laptop, which was OK when I had 20 or 30 gigabytes of data in OneDrive, but at this point I'm well over 100GB.  Most of that is archival data like family photos and it just eats up too much space on a 500GB SSD.

Enter InSync.  InSync is a third-party file sync tool that runs on Windows, Mac, and Linux and supports OneDrive, Google Drive, and Dropbox.  It has all the bells and whistles you'd expect, including file manager integrations, exclusions, directory selection, and other cool stuff.  But what I care about is the basics - two-way syncing.  And it does that really well.  In fact, it totally solves my problem right out of the box.  No more janky hacks - I can just connect it to my OneDrive account and it syncs things to my Linux box.

The only down-sides to InSync are that it's proprietary (which I don't really mind) and that the licensing is confusing.  The up side is that it's not actually that expensive - currently, the pricing page lists licenses at $30 USD per cloud account.  So if you only want to sync OneDrive, it's $30 and you're done.  However, there's also an optional support contract, and there's some difference between "legacy" licenses (which I think is what I have) and their new subscription model.  Frankly, I don't fully understand the difference, but as long as it syncs my OneDrive and doesn't cost too much, I don't really care.

So if you're a OneDrive user and a Linux user, InSync is definitely worth a try.  I don't know about the other platforms or services (I assume they're all similar), but OneDrive on Linux works great.

On WSL performance

As somebody who does a lot of work in a Linux environment, WSL (the Windows Subsystem for Linux) has become almost a required tool for me.  A while back, I looked up ways to share files between native Windows and WSL.  For various reasons, the most convenient workflow for me is to do much of my work on the code from within Windows, but then run various tests and parts of the build process in Linux.  So I wanted to see what my options were.

The option I had been using was the mount point that WSL sets up in Linux for the Windows filesystem.  In addition to that, it turns out there are a couple of ways to go the other direction and read Linux files from Windows.  There's the direct, unsupported way, or the supported way using a network share.  Sadly, it turns out none of these are really good for me.

My main problem and motivation for looking into this was simple: performance.  When you cross the Windows/Linux boundary, filesystem performance takes a nose-dive.  And I'm not just talking about an "I notice it and it's annoying" performance hit, I'm talking a "this is actively reducing my productivity" hit.  For filesystem-intensive processes, a conservative estimate is that a process running in Linux takes at least 2 to 3 times as long when the files are hosted on Windows as when they're hosted in Linux.  And it's frequently much worse than that.  For one project I was working on, the build process took upwards of 20 minutes when the files were on Windows, but when I moved them to Linux it was around 3 minutes.  And it's not just that project.  Even for smaller jobs, like running PHPStan over a different project, the difference is still on the order of several minutes vs. 30 seconds or so.  Perhaps this has improved in more recent versions, but I'm still stuck on Windows 10 and this is seriously painful.

My solution?  Go old-school: I wrote a quick script to rsync my project code from Windows to Linux.  Not that rsync is super-fast either, but it's not bad after the initial sync.  I just set it up to skip external dependencies and run NPM et al. on Linux, so even when there's "a lot of files", it's not nearly as many as it could be.  Of course, then I need to remember to sync the code before running my commands, which is not ideal.  But still, the time difference is enough that I can run the command, realize I forgot to sync, do the sync, and run the command again in less time than just running it once on the Windows-hosted code.
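For what it's worth, the script itself is nothing special.  It's something like the following (the paths and excludes are illustrative, not my actual project layout; /mnt/c is where WSL mounts the Windows C: drive by default):

    #!/bin/sh
    # Mirror the Windows working copy into the Linux filesystem,
    # skipping the dependency directories that get rebuilt on the Linux side.
    rsync -a --delete \
        --exclude 'node_modules/' \
        --exclude 'vendor/' \
        /mnt/c/Users/me/code/myproject/ "$HOME/code/myproject/"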

Duet Air is pretty cool

A while back, I posted about a tool called Duet, which allows you to convert an iPad or Android tablet (or even phone) into an external laptop display.  It actually works quite well, and allows you to use either WiFi or USB connections for your tablet monitor.  It also supports using the touch screen on the tablet to control your desktop, which is pretty cool.

However, I did eventually discover an issue with it.  It seems that, on my work laptop (but not my personal one), the "energy efficient" setting doesn't properly support all resolutions.  It's a really weird bug, as the other two performance settings ("high power" and "pixel perfect") both work fine, and everything works fine on my personal laptop, but "energy efficient" only works when the resolution is set to maximum on my work laptop.  On the up side, their support people have been very responsive and I can just use a different setting, so it's not a big deal.

Anyway, as part of trying to collect more info on this bug for Duet's testing team, I signed up for a trial of Duet Air to see if I could reproduce the issue through that (spoiler: I could).  Duet Air enables Duet's "remote desktop" feature, which allows you to use not only mobile devices, but other laptops as external displays.

It's actually a pretty slick feature.  You just create an account and sign into all of your devices with it.  Then you can go to the "remote desktop" tab in Duet and choose the device you want to connect to.  The paradigm is that you use the "display" device to select what you connect to.  So, for example, if I want to have four monitors for my work machine, I can open up Duet on my home laptop, select my work laptop, and the home laptop becomes a wireless display.

So far, it's working pretty well.  It's easy to use and set up, performant, and it's a tool I'm already using.  It's also fairly cheap at $25/year.  I think I'll probably continue using it after the trial.

Poor man's home intercom

A few weeks ago, I decided to set up a DIY home intercom system.  This was motivated by the fact that my son has been doing home-school and we set him up a workspace in the basement.  This isn't a problem per se, but my wife usually doesn't go down there with him if he's doing independent work, which means there's often yelling up and down the stairs.  This is, shall we say... somewhat distracting when I'm trying to work.

I did a little searching for intercom systems, thinking I might buy some hardware, but decided that looked like too much work.  We'd have to find a home for it, and then you might not hear it if you were on the other side of the house, unless I put them everywhere, which is an even bigger pain.  Besides, it seemed like there should be an app for that.  And since we pretty much have our phones close to hand most of the time, that would be more convenient than dedicated hardware anyway.

Turns out there is an app for that.  A number of them, actually.  The one I decided to go with was Zello, which is a fairly simple walkie-talkie app.  I went with this one for a few reasons:

  1. The mobile app is free, at least for personal use.  (There's a PC version too, but that's only for paid corporate accounts.)
  2. It's in the Amazon and Google Play app stores.
  3. It's easy to set up.
  4. It's really easy to use.

The setup process for Zello was pretty basic.  For my son, I decided to just put it on an old Kindle Fire that I had laying around.  It can just sit on the desk, plugged in and ready to use whenever we need to talk to him.  My wife and I just put the app on our phones.  From there, you just create an account (which only requires basic information) for each device using the app, and then send a contact request to the other accounts.  Once your request is accepted, that person will appear in your contact list.

Actually talking to other people is even simpler.  You just tap on the person's account from your contact list and then you get a screen with a great big "talk" button in the middle.  When you want to talk to the person, you just press and hold the button and start talking, just like an old-fashioned walkie-talkie.  When you're done, you release the button.  From what I can tell, the connection is not in real-time - it seems like the app records your message and then delivers it, so you are less subject to the vagaries of the network.  But barring networking issues, the delay seems to be pretty short - a few seconds in most cases.

The app also has a few other features, including very basic text messaging.  There's also a "channels" feature, which I haven't used yet.  That's their "group voice chat" feature.  Presumably the idea is to mimic a dedicated frequency for a CB radio.  The primary use-case for the commercial version of Zello seems to be for fleet dispatchers, so the interface seems geared toward a simple replacement for a traditional radio system.

Overall, the app works pretty well.  It was easy to set up and it has definitely saved some frustration in terms of yelling back and forth across the house.  Also, my son seems to like using it.  He even ends his messages with "over and out".  So I count this as a win.

Windows 11 is fine

A couple of weeks ago I got the notification on my laptop that I could now upgrade to Windows 11.  And since I was feeling optimistic that day, I clicked "yes".

I'm not going to do a "review of Windows 11" post, though.  Not because I'm lazy or pressed for time (though I don't deny those charges), but really just because I don't have that much to say.  I mean, so far Windows 11 is fine.  And I don't mean that in the sarcastic, room-on-fire-this-is-fine meme sort of way.  My experience has been genuinely fine.  It's not a phenomenal, life-changing upgrade, but I haven't had any problems either.

For the most part, I haven't really noticed much of a change from Windows 10.  Yeah, windows have more rounded corners now and the UI in general got kind of a face lift, but those are mostly cosmetic changes.  They added some handy window management features that I use on occasion, but I haven't discovered any major features that strike me as must-haves.

The one change I did immediately notice was the start menu.  I really don't like the new start menu.  I think it's much less useful than the one from Windows 10.  For one, the default position is in the middle of the screen, which seems like a pointless change.  However, there's a setting to easily change that.  But beyond that, it doesn't allow much customization and seems much more focused on being a glorified search box than a menu.  You can still pin items to the start menu, but the option to arrange them into groups is gone.  Also, the pinned area is now smaller and paginated, which is kind of annoying.

Screenshot of the new Windows 11 start menu.

Fortunately, that can be changed too.  There are a few options out there for start menu replacement in Windows 11.  I went with Stardock's Start11, which gives you quite a few options in terms of customizing the start menu experience, including versions of the Windows 10 menu and the "classic" Windows 7 style menu.  On top of this, it gives you a number of other settings to manipulate the look and behavior of the start menu and taskbar, such as controlling transparency and texture, swapping out the start button image, and controlling click behavior.  It's actually quite well done, and with a $6 price tag, it's kind of a no-brainer if you don't like the new menu.

Screenshot of the Start11 replacement menu.

CoC for Vim

A few weeks ago, I was looking into Typescript a bit.  I've heard lots of good things about it, but never had a chance to play with it.  However, I got tasked with some updates to my company's portal site.  (While not technically my team's responsibility, the portal team was swamped, so I agreed to make the required updates to support a  back-end feature my team added.)  And, of course, the portal team uses Typescript.

Naturally, most of the editing recommendations for Typescript are focused on Visual Studio Code.  But I like Vim, so I did a quick search and found this article, which led me to CoC (which I choose to pronounce "coke", like the soda), which stands for the slightly ungrammatical "Conquer of Completion".  It's a plugin for NeoVim and Vim that essentially does Intellisense (code completion, context popups, etc.) using language servers.

If you're not familiar, the Language Server Protocol (abbreviated LSP, though that always makes me think of the Liskov Substitution Principle) was developed by Microsoft for VS Code.  It's essentially a way to make Intellisense work without the editor having to implement support for each language.  It does this by defining a protocol that "clients" like an editor can use to communicate with a "language server".  The language server is a stand-alone program that can provide code intelligence for a particular language, but is not directly tied to any particular editor.  The server can then be called by any client that implements the protocol, which means that the editor itself doesn't actually have to know anything about the language to implement advanced editing features - which is huge.

Anyway, CoC is an LSP client for Vim.  And I have to say, it's awesome!  I've messed with a few code completion and LSP plugins in the past, but I never really got them to work right.  They were either difficult to configure, or required Vim to be built with particular non-standard options.  But CoC was dead-simple to set up.  The only catch is that you have to install the language servers separately, but it turns out that's super-simple as well.  (The ones I've used so far can all be installed through NPM.)
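For reference, the setup really is about this small.  Here's a minimal sketch using the Plug plugin manager (the plugin name and branch come from the CoC project's own install instructions; using Plug, and coc-tsserver as the extension, are just example choices on my part):

    " In ~/.vimrc: install CoC via vim-plug
    call plug#begin('~/.vim/plugged')
    Plug 'neoclide/coc.nvim', {'branch': 'release'}
    call plug#end()

    " Then, inside Vim, run :PlugInstall and add a language extension, e.g.:
    " :CocInstall coc-tsserver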

I'm still getting used to it, but having CoC is a game changer for Vim.  I'd given up on having this level of intelligence in my editor.  I mean, for something that supports as many languages as Vim, building it the old-fashioned way just isn't feasible.  But when you can use the same language servers as more modern editors to do the heavy lifting, suddenly it's no longer crazy.

The next step is to look into the available commands and customizations for CoC and see what I can come up with to optimize my experience.  So far it's a pretty cool tool and it definitely makes the development experience nicer.  I want to see what else I can do with it.

 

More Vim-as-IDE pointers

A while back I came upon this article on using Vim as a PHP IDE.  It's got some very good pointers, although the author may have gone a little over the top on trying to mimic the behavior of PHPStorm.  I haven't tried all of those plugins, but quite a few of them are in my Vim config.

If nothing else, the article gives you a good sense for just how powerful Vim is when it comes to extending its behavior.  It's actually pretty impressive.

Cover art for The Pragmatic Programmer

Earlier this year, I finally read through The Pragmatic Programmer.  It's been sitting on my bookshelf for...at least 15 years now.  I'd read parts of it before, but never went through the whole thing.  Anyway, it contains a section on choosing an editor, and one of the things they stress is extension.  You need to be able to customize your editor to extend its functionality to suit your workflow.  The idea is that the editor is one of your primary tools of code-craft and you need to make it truly yours.  You need to learn to use it well and completely, to make it an extension of your hand.

So far I'm finding Vim to be a pretty good choice in that regard.  Partly this is due to its raw power, but to a large extent it's also due to its age and the ecosystem around it.  Vim is a very old tool and it's very mature and stable.  This is a very good thing, because it means that there are lots of examples and documentation for how to use and customize it.  If you want to do something new with Vim, there's a good chance that someone already wrote a plugin to do it.  And if not, there are plenty of plugins out there that you can use as examples.

Being mature and stable also means that things are unlikely to change out from underneath you.  Sure, new features are still being added, but the basic functionality has been largely unchanged for a while.  This is what you want from a good tool.  If you're going to invest a lot of time and effort to get good at using a tool, you want that investment to retain its value.  You don't want to spend three months every couple of years re-learning because the tool vendor decided to re-architect their product or switch to the latest trendy programming language.

So while Vim may be old and boring, there's something to be said for being old and boring.  When was the last time you saw an old, boring person on The Jerry Springer Show?  Doesn't happen.  New and interesting may be exciting, but there's a reason why telling someone "may you live in interesting times" is supposed to be a curse.

New browser plugins for KeePass

Almost three years ago I wrote a post about setting up a browser plugin for KeePass.  That plugin was chromeIPass and it worked pretty darn well.

Now fast-forward to a few months ago.  My wife's laptop broke down and I had to re-install Windows on it.  In the process, I tried to set up chromeIPass and discovered that it's dead!  Well, mostly dead, anyway.  It's open-source, and the source is still available, but it's no longer available in the Chrome app store.  So it's effectively dead.

So I started looking for alternatives.  The good news is that there's a fork of chromeIPass called KeePassHTTP-Connector that still exists in the Chrome store.  However, it's also been discontinued!  It's deprecated in favor of KeePassXC-Browser, which is a similar plugin for KeePassXC, a cross-platform re-implementation of KeePass.  I'm not sure why that's needed, since KeePass is written in C# and runs under Mono, and .NET Core is now cross-platform anyway, but whatever.  The one nice thing about that browser plugin is that it uses a KeePassNatMsg plugin to communicate with KeePass.  Apparently that's more secure because it doesn't involve talking over HTTP.  But it doesn't seem to work correctly with "real" KeePass.  At least, it didn't for me - the plugin segfaulted when I tried to configure it.

Luckily, I did find a currently supported plugin that actually seems fairly good - Kee.  This is actually intended for a separate password manager, also called Kee, which I gather is some kind of paid service based on KeePass.  (Or something.  To be honest, I didn't really look into it - I only cared about the browser plugin.)  The Kee plugin is based on the old KeeFox plugin for Firefox, but this one also runs in Chrome.  It uses the KeePassRPC plugin for communication with KeePass.

If you used KeeFox in the past, this plugin is equally painless to use and configure.  Just install the KeePassRPC plugin, fire up KeePass, and install the browser plugin.  Kee will automatically attempt to connect to the RPC server and KeePass will prompt you to authorize the connection by bringing up a window with an authorization code.  Just enter that code into the window that Kee opens and click "connect".  Done!  Now when you visit a site that's in your KeePass database, Kee will put icons you can click in the login boxes and auto-populate the login form.  (The auto-population can be turned off - the convenience of that functionality is fantastic, but the security is iffy.)

So at least there's still a good, supported KeePass browser plugin out there.  I suppose this is one of the pitfalls of "roll your own" systems based on open-source software.  Since KeePass doesn't bundle a browser plugin, like many of the proprietary password managers do, we're forced to rely on the community, which can be both good and bad.  The bad comes when the "community" is basically one guy who eventually loses interest.  And while it's great that the source is there for anyone to pick up, it's important to recognize that adopting a new software project requires a substantial time commitment.  Free software is free as in "free puppy", not "free beer".

I give up - switching to GitHub

Well, I officially give up.  I'm switching to GitHub.

If you read back through this blog, you might get the idea that I'm a bit of a contrarian.  I'm generally not the type to jump on the latest popular thing.  I'd rather go my own way and do what I think is best than go along with the crowd.  But at the same time, I know a lost cause when I see it and I can recognize when it's time to cut my losses.

For many years, I ran my own Mercurial repository on my web host, including the web viewer interface, as well as my own issue tracker (originally MantisBT, more recently The Bug Genie).  However, I've reached the point where I can't justify doing that anymore.  So I'm giving up and switching over to GitHub like everybody else.

I take no real pleasure in this.  I've been using Git professionally for many years, but I've never been a big fan of it.  I mean, I can't say it's bad - it's not.  But I think it's hard to use and more complicated than it needs to be.  As a comment I once saw put it, Git "isn't a revision control system, it's more of a workflow tool that you can use to do version control."  And I still think the only reason Git got popular is because it was created by programming celebrity Linus Torvalds.  If it had been created by Joe Nobody I suspect it would probably be in the same boat as Bazaar today.

That said, at this point it's clear that Git has won the distributed VCS war, and done so decisively.  Everything supports Git, and nothing supports Mercurial.  Heck, even BitBucket, the original cloud Mercurial host, is now dropping Mercurial support.  For me, that was kind of the final nail in the coffin.  

That's not the only reason for my switch, though.  There are a bunch of smaller things that have been adding up over time:

  • There's just more tool support for Git.  These days, if a development tool has any VCS integration, it's for Git.  Mercurial is left out in the cold.
  • While running my own Mercurial and bug tracker installations isn't a huge maintenance burden, it is a burden.  Every now and then they break because of my host changing some configuration, or they need to be upgraded.  These days my time is scarce and it's no longer fun or interesting to do that work.
  • There are some niggling bugs in my existing environment.  The one that really annoys me is that my last Mercurial upgrade broke the script that integrates it with The Bug Genie.  I could probably fix it if I really wanted to, but the script is larger than you'd expect and it's not enough of an annoyance to dedicate the time it would take to become familiar with it.
  • My web host actually now provides support for Git hosting.  So I can actually still have my own repo on my own hosting (in addition to GitHub) without having to do any extra work.
  • Honestly, at this point I've got more experience with Git than Mercurial, to the point that I find myself trying to run Git commands in my Mercurial repos.  So by using Mercurial at home I'm kind of fighting my own instincts, which is counterproductive.

So there you have it.  I'm currently in the process of converting all my Mercurial repos to Git.  After that, I'll look at moving my issue tracking into GitHub.  In the long run, it's gonna be less work to just go with the flow.
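For the conversion itself, there are a few tools out there; one common option is the "fast-export" script (frej/fast-export on GitHub).  Here's a rough sketch assuming that tool - the paths and remote URL are placeholders, not my actual repos:

    # Convert a Mercurial repo to Git using hg-fast-export
    git init my-project && cd my-project
    /path/to/fast-export/hg-fast-export.sh -r /path/to/my-project-hg
    git checkout HEAD              # materialize the working tree after the import
    git remote add origin git@github.com:someuser/my-project.git
    git push -u origin --all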

Global composer

Nice little trick I didn't realize existed: you can install Composer packages globally.

Apparently you can just do composer global init ; composer global require phpunit/phpunit and get PHPUnit installed in your home directory rather than in a project directory, where you can add it to your path and use it anywhere.  It works just like with installing to a project - the init creates a composer.json and the require adds packages to it.  On Linux, I believe this stuff gets stored under ~/.composer/, whereas on Windows, they end up under ~\AppData\Roaming\Composer\.
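Putting it all together, the whole flow looks something like this (the vendor/bin path matches the ~/.composer/ location mentioned above; newer Composer versions may use ~/.config/composer/ instead):

    # Set up a global composer.json and pull in PHPUnit
    composer global init
    composer global require phpunit/phpunit

    # Put the globally installed binaries on your PATH (add to ~/.bashrc to persist)
    export PATH="$HOME/.composer/vendor/bin:$PATH"
    phpunit --version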

That's it.  Nothing earth-shattering here.  Just a handy little trick for things like code analyzers or other generic tools that you might not care about adding to your project's composer setup (maybe you only use them occasionally and have no need to integrate them into your CI build).  I didn't know about it, so I figured I'd pass it on.

Text-based UML

Recently I discovered a new tool that I never knew I needed - PlantUML.

If you're like me, you probably want to do more UML.  I mean, I'm interested in software design and architecture.  I read books and articles about it.  I even wrote my thesis on formal modeling.  So I'd love to do more UML modeling.

The thing is...I don't like UML modelers.  I mean, it's not that the tools are bad - in fact, some of them are pretty good.  It's just that creating a UML model feels so heavy.  And while the actual modeling features that many tools have are really cool and useful in some circumstances, I find that 90% of the time all I really need is a simple diagram.  And while any UML tool can help you make a diagram, I feel like I usually end up getting bogged down in the mechanics of putting it together.  You know, you've got to select the right type of box, select the right type of relationship, then the tool renders the connections in a weird way by default so you have to fix it, etc.  Before you know it, you've spent 20 minutes on a diagram that would have taken two minutes if you'd done it on paper.

Enter PlantUML.  It bills itself as a "drawing tool" for UML, but the upshot is that it's a way to define your models in plain text.  You just write your models in your favorite text editor (and yes, there's a Vim syntax file available), run the tool, and it will spit out a rendered UML diagram.  Here's an example:

Screenshot of the PlantUML GUI showing the rendered class diagram.

And here's the text that generated that:

    @startuml
    class Link {
        name : string
        description : string
        url : string
    }
    class Tag {
        name : string
    }
    class Folder {
        name : string
    }
    class User {
        username : string
        password : string
        setPassword(password : string)
    }
    class Comment {
        body : string
    }
    Link "1" -- "*" Tag : has >
    Link "1" -- "*" Comment : < belongs to
    Folder "1" -- "*" Link : contains
    Folder "1" -- "*" Folder : contains
    User "1" -- "*" Link : owns >
    @enduml

As you can see, the syntax is fairly straight-forward and pretty compact.  All of the standard UML diagram types are supported and the syntax allows you to provide minimal detail and still produce something meaningful.  In addition to the GUI shown above, it can also run from the command line and just create PNG images (or whatever format you like) of your diagrams, so you could easily work it into your build pipeline.  And the installation is simple - just download and run the JAR file.
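As a quick example of the command-line usage (the file name here just follows the "modelname.pu" convention mentioned below, and other output formats use similar flags):

    # Render a diagram to PNG; the output lands next to the source file
    java -jar plantuml.jar -tpng model.pu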

The thing I really like, though, is that this text-based format makes it easy to store and source-control UML alongside your code.  Yes, you technically can do that with other formats, but it's awkward.  XMI files are huge and ugly and I don't even want to think about the project files for Eclipse-based tools.  But with PlantUML you can just have a directory with some "modelname.pu" files in it that are small, simple, and produce diffs that are easy to read when you change them.

I haven't tried it out yet, but I'm also interested in how feasible it would be to put the models right in the code, e.g. put the text in comments.  Seems like it might help with the whole "keeping code and models in sync" thing.  But maybe that's a bit much.

I recommend checking it out.  If you want a quick and easy method, there's an online version that you can test.

My UHK has arrived

My Ultimate Hacking Keyboard (UHK) finally arrived the other week.  It's only about a year and a half over-due, which I guess isn't really that bad for a crowd-funded product.  I was in love with this keyboard as soon as I saw the promotional video, so I've really been looking forward to getting my hands on one. 

If you haven't heard of the UHK, I recommend taking a look at it.  It's an extremely cool piece of hardware, even if you're like me and are neither a "gadget person" nor a keyboard aficionado.  It's a fully programmable mechanical keyboard that can control the mouse, splits down the middle, and has support for plug-in modules (not yet available).

Initial Impressions

Just taking the UHK out of the shipping box, it looks very nice.  I'm not sure what I was expecting, but I was pleasantly surprised.  The packaging was very slick and professional - far more so than conventional keyboards I've purchased.  It came with a nice "thank you" card and minimal instructions that just point to the URL for their online tutorial (which I highly recommend new users try out).

The very nice UHK and palm rest boxes.

I purchased both the keyboard and the palm rests.  At first glance, both look exactly as nice as they do on the marketing site.  The palm rests are a beautiful, smooth wood mounted on extremely solid metal plates.  The keyboard itself has a solid-feeling plastic case.  The seam where the halves separate is magnetic and has all metal contacts - no visible wires, circuit boards, or weird connectors.  The bottom has some thick no-skid nubs to stand on and metal mounting points for the palm rest.  Overall, it feels much sturdier and higher quality than any single-piece conventional keyboard I've used.

The opened UHK box.

The open UHK palm rest box.

Setup

Setting up the UHK was a bit of a mixed bag.  In the most basic setup, you can plug one end of the USB cable into the keyboard and the other end into your computer and it "just works" - no additional software or configuration required.  And that's great.  But if you have the palm rests and want to set up something more ergonomic, it's a different story.

My configuration of choice was to separate the keyboard and use a "tented" configuration with the palm rests, so that the center part of the keyboard is elevated.  This is similar to the setup of the Microsoft Ergonomic keyboard I had been using.  And once I got it set up, I found it to be very comfortable.

Comparison of the UHK with my old Microsoft Ergonomic keyboard.

The palm rest and tilting setup was the only aspect of the UHK that I'm not crazy about.  The setup process was not especially difficult, and there were clear instructions for all the standard palm rest configurations, but you can't do it without a screwdriver.  Installing the feet for tilting was the most painful part.  The feet are a thick plastic, which is good for durability, but makes it harder to bend them enough to fit into the mounting brackets.  And you can't really get the three screws for the mounting brackets in or out with the feet in them.  So it's not really feasible to quickly switch between configurations - at least not if you're using the "tented" setup.  I found that a little disappointing, but I can live with it.  The up side is that the final setup is surprisingly solid.  I had been worried that the palm rest would wobble or that the feet would have some give, but that's not the case at all.

The Agent

One of the really cool things about the UHK is that you can configure everything, but don't have to install any special software to use it.  The configuration is stored on the keyboard itself, so as soon as you plug it in, all your settings are already there.  You do, however, need a special program to do the configuration.

Screenshot of the UHK agent software.

The "agent" software itself is pretty intuitive.  It's cross-platform (looks like an Electron app - there's a web-based live demo here) and consists of a few settings panes and a visual representation of the keyboard that you can use to remap keys.  It allows you to remap literally any key on the keyboard, including modifier keys and layer-switching keys.  You can even map different functions for different modifier keys

The agent also has some support for running programs or doing other system functions, such as controlling volume.  Initially, these seemed to be a little dodgy, but that seems to have been resolved when I upgraded the keyboard's firmware.  That upgrade also gave me support for keyboard macros, which weren't yet implemented in the pre-installed firmware version.  I haven't actually had occasion to try out the macro feature yet, but it seems like a really cool idea.

Adapting to the UHK

The biggest challenge with the UHK is adapting to using it.  When you first look at the keyboard, the most remarkable thing is how small it is.  As one of my teammates put it, "Looks like you're missing some keys there."  And that's because, compared to most standard keyboards, it is missing a lot of keys.  Instead of having a lot of dedicated keys, the UHK has a concept of "layers".

My UHK setup.

If "layers" doesn't ring a bell, think of the "numlock" key on a conventional keyboard.  When you turn it on, the numeric keypad types numbers.  When you turn it off, the numeric keypad keys function as arrow keys, "delete", page up/down, etc.  That's two "layers" - the "number" layer and the "navigation" layer, if you will.  With the UHK, the entire keyboard is like that, but with four possible layers instead of two.  They are:

  1. The "base" layer, where you do normal typing.  This is the "no layers selected" layer.
  2. The "mod" layer, which gives you access to arrow keys, page up/down, home/end, F1 - F12 keys, and a bunch of other things that would have a dedicated key on a conventional keyboard.
  3. The "function" layer, which gives you access to the kind of things associated with the "function" key on a laptop - media control, volume control, etc.  Note that this is not for the function keys as in "F1", which was initially slightly confusing.  Instead, the number keys on this layer are pre-configured to change the key map (e.g. form QWERTY to Dvorak).
  4. The "mouse" layer, which is used to control the mouse, including left/right/middle clicking, movement, and scrolling.

I'll be honest - this setup takes a little getting used to.  You can't just open the box and immediately be productive using this.  However, it's really not as bad as I feared it might be.  The key arrangement is standard, so you can type text on the base layer with little to no adjustment.  It's just the layer switching that is an issue.

For me, the first week was spent mostly getting used to switching layers and getting the different key combinations into my muscle memory.  The second week was spent on customization and figuring out what did and didn't work for me.  For the most part, the default key mapping is pretty good, but there were a few things I had to change.  For instance, I had to swap the left "Fn" and "Alt" keys because I was used to "Alt" being right next to the space bar and kept accidentally hitting the wrong key.  I also converted the right "Fn" key into a secondary "Mouse" key because, frankly, I never use the function layer and it seemed more useful to be able to control the mouse entirely with my right hand.  After the second week or so, I found that I pretty much had the hang of the layer switching.  My control started to become much faster and more natural.  After about a month, I found that when I used my laptop keyboard, I would instinctively reach for the non-existent "mod" key because it was more natural than moving my entire hand to find the arrow keys.

Conclusion

It's been a little over a month and I LOVE my UHK.  If it weren't so expensive, I'd consider buying a second one to use at home.  (Also, I spend most of my time at home on a laptop, which doesn't lend itself to an external keyboard.)  It's a physically solid device with lots of features and it's just really comfortable to use.  I'm really enjoying the whole "not having to move off of the home row" thing.  It's not cheap, but I would definitely recommend it to anyone who is willing to invest $300 or so in a keyboard.  I have no regrets and am actually looking forward to giving them more money when the modules come out.

On the state of my Vim-as-IDE

Last year I posted about adopting Vim as my default editor.  I've been leveling up my Vim knowledge since then, so I thought I'd write a summary of the current state of my ad hoc Vim IDE configuration.

First, let's start with some sources of information and inspiration.  I found Ben McCormick's Learning Vim in 2014 series to be very useful.  The Vim as Language post in particular offers an eye-opening perspective on the beauty of using Vim.  I've also been working my way through Steve Losh's Learn Vimscript the Hard Way, which is a great source for information on customizing your Vim configuration.  And if you want a little inspiration to give Vim a try, here is Roy Osherove's "Vim for Victory" talk from GOTO 2013.

So how has my Vim adoption gone?  Pretty well, actually.  When I look back at my original post on looking for a new editor, it's pretty clear that a sufficiently customized Vim meets all my criteria.  However, to be fair, it did take a while to get it sufficiently customized.  The customization I've done wasn't actually that hard, but it took some time to figure out what I needed, what I could do, and then Google how to do it.  But paradoxically, that's one of the strengths of Vim - it's been around long enough that pretty much everything that you might want to do either has a plugin available or has been documented someplace on the web, so you rarely need to write any original code.

My Plugins

These days there are actually quite a few plugin managers available for Vim.  The nice thing about this is that they all support the same plugin format, i.e. GitHub repositories laid out in the standard ~/.vim directory format.  I'm currently using Plug because it provides an easy mechanism for selective or deferred plugin loading (in the case where you have a plugin that's not always needed and slows down Vim startup).
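In case you've never used it, a Plug-based setup is just a block in your ~/.vimrc.  Here's a rough sketch using a few of the plugins from the list below (not my exact configuration):

    call plug#begin('~/.vim/plugged')
    Plug 'scrooloose/nerdtree'
    Plug 'ctrlpvim/ctrlp.vim'
    Plug 'w0rp/ale'
    " ...the rest of the list goes here...
    call plug#end()

    " Then run :PlugInstall inside Vim to fetch everything.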

Here are some of the goodies my Plug plugin list currently contains:

  • scrooloose/nerdtree - A nice file explorer plugin that provides some enhancements to Vim's native file browser.  Pretty much a must-have for every Vim setup.
  • ctrlpvim/ctrlp.vim - A fuzzy file finder that works on buffers and other stuff too.  Just press ctrl+p and start typing the name you want.
  • jlanzarotta/bufexplorer - A handy plugin to list and switch between the current buffers.  Think of it as like the tab strip at the top of other editors, but easier to deal with from the keyboard.
  • tpope/vim-dispatch - A nice plugin for running external programs asynchronously.  By default, external command execution blocks the rest of the UI until the command is done.  This is fine sometimes, but not others.  Dispatch integrates with other plugins and provides a way to run things in the background and get their output back into Vim.
  • tpope/vim-surround - Provides a Vim movement that helps you manipulate surrounding entities.  Good for things like changing quotes, HTML tags, etc.
  • Chiel92/vim-autoformat - Provides an interface to various code formatters.  I use it as a replacement for the JSON beautifying feature that I loved so much in Komodo.
  • mileszs/ack.vim - A quick and easy integration of the ack! text search tool.  Like the built-in grep, but better.
  • joonty/vim-sauce - Sauce is a handy little plugin for managing multiple configuration files.  It's also useful for adding the concept of a "project" to Vim.  I use it to create project-specific configurations that handle all the customization that would be done in the project file of a regular IDE.
  • janko-m/vim-test - A unit test runner plugin that handles many different tools.
  • vim-airline/vim-airline - An enhanced status line that's both pretty and displays some useful information.
  • w0rp/ale - The Asynchronous Lint Engine, this offers syntax and style checking with inline error notifications, just like in a regular IDE.
  • majutsushi/tagbar - A tool for displaying the tags in the current file, similar to the structure browsers found in IDEs.

Needless to say, I also made a number of other customizations to my Vim configuration.  My full work-in-progress Vim configuration is in this repo if you're interested.  I do not hold this up as a great example of how to configure Vim, but it's working for me so far and, as previously noted, it actually wasn't all that much work.

The IDE Functionality

So what functionality do I have with this setup?  Well, it turns out I actually get most of what I previously had with Komodo.  Of course, I need to integrate with a few external packages for this, the key ones being Exuberant CTags, which indexes identifiers in the code, and ack for text search.  I also need various external formatters and linters, though the specific programs depend on what language I'm coding in.  Nothing fancy, though - they're pretty much all command-line executables that you can just drop someplace in your path.
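Generating the tags database is a one-liner; here's the sort of invocation I mean (the flags are just an example for a PHP project using Exuberant CTags):

    # Index the project, skipping third-party code
    ctags -R --languages=php --exclude=vendor --exclude=node_modules .
    # In Vim, ctrl-] then jumps to a definition and ctrl-t jumps back.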

So here's what I get for my trouble:

  • Insanely powerful key bindings.  I mean seriously powerful - I don't think there's anything you can do in Vim that can't be bound to a keystroke.  And it's usually pretty easy.  Just the other week I defined a couple of ad hoc key bindings to help me add translation keys to a web template.  It's really a beautiful thing.
  • Inline syntax and style checking.  Using ALE in conjunction with the appropriate external linters, I get the same kind of inline checking I can get in PHPStorm.
  • Navigating identifiers.  Using Vim's ctag support, it's possible to navigate to the uses and definitions of a particular identifier, for example, much like the code browsing abilities of PHPStorm.  Of course, it's not perfect because ctags lack knowledge of the context of the identifier, but it's not bad.  (And to be fair, I've seen the code navigation in PHPStorm and Komodo fall over on more than one occasion.)
  • Searching.  Between CtrlP and Ack, I have some nice facilities for searching for or within files.  Again, very similar to what I had in Komodo or PHPStorm.
  • Project management.  Between NERDTree and Sauce, I have some decent support for the concept of project.  They give me a nice file tree navigation panel and the ability to define project-specific configuration.

Conclusion

The short version is that this Vim experiment is going pretty well.  Granted, it is somewhat more work than figuring out a traditional IDE.  But on the other hand, it's not that bad and actually isn't as much work as I thought it would be.

In terms of functionality, I find that I haven't actually given up very much.  In fact, if you're talking about multi-language IDEs, I'm not even sure I've given up anything I care about.  It turns out that Vim is remarkably capable and the plugin ecosystem is very large and deep.

Would I recommend this to someone who's never touched Vim before?  Probably not.  But if you're familiar with Vim and interested in trying a new way of working, it might be worth a shot.  At worst, you'll improve your Vim skills and enjoy using a nice, fast editor that doesn't eat up half a gigabyte of RAM when it's just sitting there doing nothing.

New backup solution

Backups are important.  Everybody in IT knows this.  After all, whenever someone tells you they lost some important file, the first question is always, "Well do you have a backup?"

Nevertheless, many of us are lax in setting up our own backup solutions.  I know I was for many years.  And while that might not be forgivable for people who really should know better, it is understandable.  After all, implementing a proper backup solution is a lot of work and the benefits are not immediately obvious.  Think about it: it requires an ongoing investment of both time and money which, in the best-case scenario (i.e. nothing goes wrong), will never pay off.  Is it any wonder that even IT professionals aren't eager to put a lot of effort into something they actively hope never to need?

But about a year ago, I finally put a home backup solution in place.  That solution was CrashPlan.  But now CrashPlan is discontinuing their home plans, which means I've been forced to figure out something else.

The old setup

CrashPlan's "home" plan was actually pretty nice.  Perhaps too nice, since apparently it wasn't profitable enough for them to keep offering it.  The subscription was priced at $150 per year and that included up to ten computers in your household and unlimited cloud storage.  On top of that, their agent software supported not only backing up to the cloud, but also backing up to external hard drives and other computers running the CrashPlan agent.  And it ran on Windows, Linux, and Mac!

Currently, I have three computers in active use in my house: my laptop, my wife's laptop, and my desktop, which is really more of a home server at this point.  I had both of the laptops backing up to the cloud, while the desktop backed up to both the cloud and an external hard drive.  I've got close to a terabyte of data on the desktop, so the external drive is important.  I do want a copy of that data off-site just in case of a catastrophe, but I'd rather not have to suck that much data down from the cloud if I can avoid it.

I'm happy with this setup and I wanted to keep something equivalent to it.  I feel like it provides me with sufficient protection and flexibility while not requiring me to buy extra hardware or pay for more cloud storage than I need.

The alternatives

Turns out this setup isn't quite as easy to replicate as I had hoped.  There are plenty of home backup services out there, but most of them don't offer the same range of options.  For instance, many are Mac and Windows only - no Linux.  Many offer limited storage - usually a terabyte or less, which I'm already pushing with my largest system.  Many are cloud backup only - no local drives.  And when you add up the cost of three systems, most of them are more expensive than CrashPlan Home was.

On their transition page, CrashPlan recommends that existing home customers either upgrade to their small business plan or switch over to Carbonite.  I briefly considered upgrading to the small business plan, but the price is $10/month/device.  So I'd go from $150/year to $360/year for basically the same service.  That doesn't sound like a great deal to me.

Carbonite, on the other hand, is one of the options I considered the first time around, when I settled on CrashPlan.  They're offering a 50% discount to CrashPlan customers, so the initial price would only be $90.  Presumably that's only for the first year, but even after that $180/year is only slightly more than CrashPlan.  However, from what I can see, Carbonite doesn't support Linux on their home plan - they only do Linux servers on their office plan.  I also don't see an option to back up to an external drive.  Although it does support backing up external drives to the cloud...for another $40/year.

Plan B

After doing some research and thinking about it for a while, I eventually decided to skip the all-in-one services.  Yeah, they're nice and require almost zero work on my part, but I wanted some more flexibility and didn't want to pay an arm and a leg for it.  However, I didn't want to completely roll my own solution.  Writing simple backup scripts is easy, but writing good backup scripts with proper retention, special file handling, logging, notifications, etc. is a lot of work.

CloudBerry Backup for Windows

If you don't want a full-service backup solution, the alternative is to go a la carte and get a backup program and separate cloud storage provider.  There are a number of options available for both, but after some research and experiments, I decided to go with CloudBerry Backup for my backup program and Backblaze B2 as my storage provider.

The choice of Backblaze B2 was kind of a no-brainer.  Since this is long-term backup storage, performance is not a huge concern - it was mostly a question of capacity and price.  B2 has standard "pay for what you use" billing and the prices are extremely reasonable.  Currently, storage is $0.005 per gigabyte and there's no charge for uploads.  So for the volume of data I have, I'm looking at storage costs of $5 or $6 per month, which is pretty cheap.
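
If you want to sanity-check the math yourself, it's simple enough to do in a one-liner.  Here's a rough sketch, using the per-gigabyte price above and a ballpark figure for my data volume (both of which may change):

    # Rough monthly B2 storage cost: ~1 TB of data at $0.005 per GB per month.
    DATA_GB=1000
    echo "$DATA_GB * 0.005" | bc    # prints 5.000, i.e. about $5/month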

The backup program was a different story.  I tried out several options before settling on CloudBerry.  Most of the options I tried were...not user friendly.  For me, the CloudBerry UI had the right balance of control and ease of use.  Some of the other solutions I tried were either too arcane or simplified things too much to do what I wanted.  CloudBerry uses a wizard-based configuration approach that makes it relatively painless to figure out each step of your backup or restore plan.  I find that this allows them to expose all the available options without overwhelming the user.

As far as capabilities go, CloudBerry pretty much has what I wanted.  It supports Windows, Mac, and Linux using multiple storage solutions, including local file systems, network file systems, and various cloud providers.  Beyond that, the feature set depends on the version you use.  There's a free version, but I went with the paid desktop version because it supports compression and encryption.  The licenses are per-system and they're a one-time charge, not a subscription.  The basic "home" licenses currently run $50 for Windows and $30 for Linux, which I think is pretty reasonable.

Results

So far, the combination of CloudBerry and B2 for my backup solution is working well.  I've been using it for about five months and have all three of my systems backing up to the cloud and my desktop also backing up to a USB hard drive.  The process was largely painless, but there were a few bumps along the way.

As I mentioned in a previous post, as part of this process I moved my desktop from Windows to Linux.  As it turns out, setting up the two laptops to have CloudBerry back up to B2 was completely painless.  Literally the only annoyance I had in that process was that it took quite a while for the confirmation e-mail that contained my license key to arrive.  So if you're a Windows user, I can recommend CloudBerry without reservation.

Linux wasn't quite as simple, though.  I had a number of problems with the initial setup.  The interface is very stripped-down compared to the Windows version and doesn't offer all of the same options.  I also had problems getting the backups to run correctly - they were repeatedly stalling and hanging.  Fortunately, the paid license comes with a year of support and I found the CloudBerry support people to be very helpful.  In addition, they seem to be actively working on the Linux version.  I initially installed version 2.1.0 and now they're up to 2.4.1.  All of the issues I had have been resolved by upgrading to the newer versions, so things are working well now.

I had initially been a little concerned about the per-gigabyte and per-transaction pricing, but so far it hasn't been an issue.  I found Backblaze's storage cost calculator to be pretty accurate and the per-transaction charges are not significant.  The cost has been basically what I initially estimated and I haven't had any surprises.

Overall, I'm very happy with this solution.  The price is reasonable and, more importantly, it provides me with lots of flexibility.  Hopefully I'll be able to keep this solution in place for years to come.

Of backups and reinstalls

I finally decided to do it - reinstall my home media server.  I switched it from Windows 8.1 to Ubuntu 17.10.

The reinstall

Believe it or not, this is actually a big thing for me.  Fifteen years ago, it would have been par for the course.  In those days, I didn't have a kid or a house, so rebuilding my primary workstation from scratch every few months was just good fun.  But these days...not so much.  Now I have a house and a kid and career focus.  Installing the Linux distribution of the week isn't fun anymore - it's just extra time I could be spending on more important and enjoyable things.

But, in a fit of optimism, I decided to give it a shot.  After all, modern Linux distributions are pretty reliable, right?  I'll just boot up an Ubuntu DVD, it'll run the installer for 15 minutes or so, and I'll have a working system.  Right?

Well...not quite.  Turns out Ubuntu didn't like my disk setup.  Things went fine until it tried to install Grub and failed miserably.  I'm not sure why - the error message the installer put up was uninformative.  I assume it had something to do with the fact that I was installing to /dev/sdb and /dev/sda had an old Windows install on it.  After a couple of tries, I decided to just crack open the case and swap the SATA cables, making my target drive /dev/sda, and call it a day.  That did the trick and Ubuntu installed cleanly.  I did have to update my BIOS settings before it would boot (apparently the BIOS didn't automatically detect the change of drives), but it worked fine.
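
For what it's worth, swapping the cables isn't the only way out of that situation.  I believe you can also finish the install, boot the live session, and re-run the Grub install against the right drive from a chroot - something along these lines (the device and partition names are just examples; check yours with lsblk first):

    # Mount the freshly-installed root partition and redo the Grub install.
    sudo mount /dev/sdb1 /mnt
    for d in /dev /proc /sys; do sudo mount --bind "$d" "/mnt$d"; done
    sudo chroot /mnt grub-install /dev/sdb
    sudo chroot /mnt update-grub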

The only real problem I had was that apparently Wayland doesn't work properly on my system.  I tried several times to log into the default GNOME session and after a minute or two, system load spiked to the point that the GUI and even SSH sessions became unresponsive and eventually the GUI just died.  And I mean died - it didn't even kick me back to the GDM login screen.

I suspect the problem is my system's archaic video card - an integrated Intel card from the 2010 era which I have absolutely no intention of ever upgrading.  I mean, it apparently wasn't good enough to run Windows 10, so I wouldn't be surprised if Wayland had problems with it.  But in any case, Ubuntu still supports X.org and switching to the GNOME X.org session worked just fine.  I don't really intend to use the GUI on this system anyway, so it's not a big deal.

The restore

Once I got Ubuntu installed, it was on to step two of the process: getting my data drive set up.  It's a 1TB drive that used to serve as the Windows install drive.  It was divided into a couple of partitions, both formatted as NTFS.  Since I'm switching to Linux and really just wanted one big data drive, this was a sub-optimal setup.  Therefore I decided to just blow away the partition table and restore from a backup.

I currently use CrashPlan to back up this computer.  It's set to back up to both the cloud and a local USB hard drive.  So my plan was to repartition the disk, install CrashPlan, and restore from the local hard drive.
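
For anyone who hasn't done it before, the repartitioning part is only a few commands on Ubuntu.  Here's a minimal sketch - the device name is an example, so double-check which disk is which with lsblk before you blow anything away:

    # Replace the old NTFS layout with one big ext4 data partition.
    sudo parted /dev/sdb --script mklabel gpt
    sudo parted /dev/sdb --script mkpart primary ext4 0% 100%
    sudo mkfs.ext4 -L data /dev/sdb1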

This was fairly easy.  Installing the CrashPlan client was the first task.  There's no .deb package, but rather a .tgz that contains an installer script.  It was actually pretty painless - just kick off the script and wait.  It even installs its own copy of the JRE that doesn't conflict with the system version.  Nice!

Next was actually restoring the data.  Fortunately, CrashPlan has their process for restoring from a USB drive well documented, so there wasn't much to figure out.  The process of connecting a particular backup dataset on the drive to the CrashPlan client was slightly confusing because the backup interface (as far as I can remember) doesn't really make it obvious that a USB drive can have more than one dataset.  But it just boils down to picking the right directory, which is easy when you only have one choice.

The only surprise I ran into was that running the restore took a really long time (essentially all day) and created some duplicate data.  My data drive contained several symlinks to other directories on the same drive and CrashPlan apparently doesn't handle that well - I ended up with two directories that had identical content.  I'm not sure whether this was a result of having symlinks at all, or if it was just moving from Windows symlinks to Linux symlinks.  In any case, it's slightly inconvenient, but not a big deal.

Other services

While the restore ran, I started setting up the system.  Really, there were only a handful of packages I cared about installing.  Unfortunately for me, most of them are proprietary and therefore not in the APT repository.  But the good news is that most of them were pretty easy to set up.

The first order of business, since this box is primarily a media server, was setting up Plex.  This turned out to be as simple as installing the .deb package and firing up the UI to do the configuration.  From there, I moved on to installing Subsonic.  This was only marginally more difficult, as I also had to install the OpenJDK JRE to get it to run.  I also followed the instructions here to make the service run as a user other than root, since running it as root seemed like not such a great idea.
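
In case it's useful, here's roughly what that boils down to in command form.  Treat it as a sketch rather than an exact transcript - the package file names are placeholders, and the SUBSONIC_USER variable and /var/subsonic path are my recollection of how the Subsonic package is laid out, so check them against the guide:

    # Plex and Subsonic both ship .deb packages on their websites.
    sudo dpkg -i plexmediaserver_*.deb
    sudo apt-get install -y openjdk-8-jre    # Subsonic needs a JRE
    sudo dpkg -i subsonic-*.deb
    # Run Subsonic as a dedicated user instead of root.
    sudo useradd --system subsonic
    sudo chown -R subsonic /var/subsonic
    sudo sed -i 's/^SUBSONIC_USER=.*/SUBSONIC_USER=subsonic/' /etc/default/subsonic
    sudo systemctl restart subsonic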

The only thing I wanted to install that I couldn't get to work was TeamViewer.  This wasn't a deal-breaker, though, because the only reason I was using TeamViewer under Windows was that I was running Windows 8.1 Home and was too cheap to pay for the upgrade to Professional just to get the RDP server.  But since this is Ubuntu, there are other options.  Obviously SSH is the tool of choice for command-line access and file transfers.  For remote GUI access, I tried several variations on VNC, but it eventually became clear that xRDP was the best solution for me.  It's not quite as straightforward to get working, but this guide provides a nice step-by-step walk-through.  There are also a few caveats to using it, like the fact that the same user can't be logged in both locally and remotely, but those weren't big issues for my use case.
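
For reference, the core xRDP install is just a package - something like the following - and the rest is configuration details:

    # Install xRDP and make sure it starts at boot.
    sudo apt-get install -y xrdp
    sudo systemctl enable xrdp
    sudo systemctl start xrdp
    # Then connect from Windows with the built-in Remote Desktop client (mstsc).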

Results

For the most part, the transition went well.  The setup didn't go quite as smoothly as it could have, but it wasn't too bad.  In any event, what I really cared about was getting Plex and Subsonic up and running and that was pretty painless.  So I'm back to where I was originally, but on an up-to-date operating system, which is all I really wanted.

And the current editor is - Vim?

So, as I mentioned before, I'm looking for a new go-to text editor/IDE.  So far, I've taken cursory looks at a few of the options.  These include:

  1. PHPStorm.  As I mentioned in the last post, I use PHPStorm at work.  And while it's really good in a lot of ways, I'm not in love with it.  On the up side, it's got great code intelligence and the VI-mode plugin is actually quite good.  On the down side, it's a single-language IDE (well, not quite, but it's still got a limited set of supported languages).  I guess I could just buy the entire JetBrains IDE suite, but I'm not crazy about switching back and forth.  Also, PHPStorm is kinda heavy - like, Eclipse heavy - both in terms of memory footprint and, more importantly, conceptual weight of the UI.  So while I don't dislike it, I don't really have any enthusiasm for it.
  2. Visual Studio Code.  I initially liked the look of VS Code.  It's got the cool Visual Studio intellisense, which is nice, and it seems to have a lot of extensions available.  The Vim-emulation plugin seemed fairly good, but not great.  The most popular PHP plugin, however, didn't seem to work out of the box at all.  I'm not sure why, though it could be its wonky install process/implementation (apparently it's written in PHP).  At any rate, I didn't care enough to look into it, though it might be worth taking a closer look at VS Code at some point.
  3. Atom.  I liked the look of Atom.  I read a little about the philosophy behind it and I really wanted to like it.  But then I fired it up and tried opening one of my project directories and it didn't work.  And by "didn't work", I mean that Atom actually crashed, consistently, on this particular directory.  So that's a no-go.  A quick Google revealed that it might be a problem with the Git library, which could possibly be fixed by changing the index version on the repo, but frankly I don't care.  If I can't trust my editor to just open a directory, then I can't trust it at all.  I mean, I don't even care if my editor has Git support at all, so I'm certainly not going to accept crashing if it sees a repo it doesn't like.  
  4. Sublime Text.  I've heard good things about Sublime.  I used to work with several people who really liked it.  Then I fired it up and immediately said, "What the heck is this?"  The UI is pathologically minimal, except for a gajillion menu items and the stupid friggin' mini-map (which I didn't like in Komodo either).  Customization is done by editing a JSON file, which makes for a weird out-of-box experience (to be fair, VS Code and Atom do that too, but it's more forgivable because they're free), and the plugin manager was immediately confusing.  It seemed like getting used to Sublime would be a steep learning curve.
  5. Vim.  Yes, you read that right - Vim.  I was surprised too.  Let me explain.

After trying out Sublime, my initial reaction was, "Geez, if I want something that complicated, why don't I just use Vim?"  And then I stopped.  And I said to myself, "Actually...why don't I use Vim?"  Good Vim emulation is one of my must-haves, and no plugin is ever gonna beat the real thing.  It's free, open-source, hugely customizable, has lots of available plugins, and is extremely well established.

The thing is, I knew Vim could be customized into a pseudo-IDE, but I'd never really thought of myself as a hard-core Vim user so I'd never tried it.  But the truth is that I've been a Vim user for a very long time, and for the last few years I've been actively trying to pick up more Vim tricks for use in Vim emulation plugins.  So while I didn't know an awful lot about customizing Vim, I'm very comfortable actually editing code in it.

And it turns out that actually customizing Vim isn't really that bad.  Heck, there are even package managers for Vim now!  There are also IDE-like configuration/plugin bundles you can install, such as spf13, but I quickly determined that those were too big and overwhelming.  However, they are good as a source of ideas and settings to copy into your own custom ~/.vimrc file.  That's actually part of the beauty of Vim - despite the fact that Vim script is a little weird and the configuration is in no way intuitive, there's enough information already out there that it doesn't really matter.
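
As an example of how low the barrier is these days, vim-plug - one of the popular plugin managers, and just one option among several - installs with a single command straight from its README:

    # Install the vim-plug plugin manager (one command, per its README).
    curl -fLo ~/.vim/autoload/plug.vim --create-dirs \
        https://raw.githubusercontent.com/junegunn/vim-plug/master/plug.vim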

So over the course of a week or so, I pulled out some of the more interesting settings from the spf13 config, found a selection of plugins that I liked, and got myself a nice, functional Vim setup.  I even set up some symlinks and file syncing so that I can have my setup synchronized between home and work.  
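
The syncing part is nothing fancy, either: the real files live in a folder that gets synced between machines, and the standard Vim locations are just symlinks pointing at it.  Roughly like this, where the synced-folder path is just a placeholder for whatever you actually use:

    # Keep the real config in a synced folder and point Vim's paths at it.
    ln -s ~/Sync/vim/vimrc ~/.vimrc
    ln -s ~/Sync/vim/vim   ~/.vim
    # (On Windows the same idea works with mklink, though the paths differ slightly.)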

Is it perfect?  No, but nothing ever is.  But so far, it's working pretty well.  And what's more, I actually enjoy using it.  It might not have the power of a specialized IDE when it comes to the more advanced features, but it's got a heck of a lot of power in general.  And the amount and types of customization you can do are amazing.  With a little effort, you can really shape the editor to your workflow, which is a thing of beauty.

Looking for a new editor

A couple of weeks ago I started looking into new code editors.  I've been using Komodo for eight or nine years now, counting both KomodoIDE and KomodoEdit, but I'm growing dissatisfied with it.  In fact, I'm dissatisfied enough that I think the time has come to move on.  However, I'm not sure I see an obvious candidate for a replacement.  So in this post, I'll talk about some of the things I find lacking in Komodo and what I'm looking for in an editor.

Why the change?

Let me start by saying that I'm not trying to bad-mouth Komodo here.  I have used it for a long time and it has served me well.  The people at ActiveState are nice and they do good work.  But the direction it seems to be heading in doesn't mesh well with my needs and I'm not sure the cost/benefit analysis of sticking with Komodo makes sense anymore.  Maybe this can provide some insight to the Komodo development team that can help them improve the IDE.

My dissatisfaction started a few months ago, when my team transitioned to a different product.  Unlike the last product we worked on, this one doesn't belong to just us.  This one is core to the company's business, and has several other development teams working on it, not to mention all the other groups that come in contact with it.  As such, there's an existing process and structure that we needed to adhere to, and it quickly became apparent that adhering to that was much easier if you were running the standard development setup, which is PHPStorm on Ubuntu.  I was running KomodoIDE on Windows, so this was a problem.  I was able to use WSL to get around the Linux part, but KomodoIDE just didn't offer the same features that I needed from PHPStorm.

So let's use that as a starting point.  What does PHPStorm offer that Komodo can't compete with at present?  Well, for purposes of the product I'm working on, here's the list:

  1. Code intelligence.  This is by far the biggest factor.  I'm not just talking about intellisense here.  I mean finding and navigating the actual places in the code where a class, function, or method is used or defined.  And not just text search, but actually knowing about the class and its inheritance hierarchy.  PHPStorm is actually pretty awesome at that.  Komodo's code intel., at least for PHP and JavaScript, is buggy at best and totally broken at worst.  The quality of the experience also seems to vary hugely depending on the codebase you're working with.  It's nice that they have something for code intel., but you can't really rely on it.
  2. Validations.  PHPStorm integrates with phpcs, which is nice because we need to actually adhere to PSR-2.  Komodo doesn't have that built in.  This might seem like a trivial thing because, after all, I can always just run phpcs from the command line.  However, having the check done in the editor is actually hugely useful, because we have a lot of legacy code that doesn't adhere to PSR-2, which makes selectively running the checks for only the code you changed awkward.  Seeing violations in your editor gives you a nice, simple way to catch mistakes you made before they get to code review.
  3. Symfony support.  While it's not built in, PHPStorm has a pretty good quality plugin for Symfony support.  We use Symfony, so this is really nice.  Komodo doesn't have that.

Those things are important, but they are specific to the new product I'm working on.  If these were my only complaints, I would happily continue using Komodo for the foreseeable future and just use PHPStorm for that one project at work.  But they're not.  The list of annoyances and things I don't like has been slowly growing over the years.  This includes actual problems and bugs, missing functionality, and road-map issues (i.e. disagreement with the direction I see Komodo going).  Here's a summary:

  1. Kinda buggy.  I hate to say it, but I've seen a decent amount of weird errors, crashes, and just plain strange behavior in Komodo.  Not a huge number - it certainly hasn't been buggy enough to make me switch editors - but it's still a non-trivial issue.  I'll give just a few examples here to give you the flavor of my complaints.
    1. Maybe it's just my perception, but it's rare for the error log or console to not have something in it.
    2. Sometimes when I'm doing things in the "places" panel, some keypress will trigger a change to the next pane of that panel.  I'm not sure what it is and I can't seem to do it on purpose.
    3. I'm constantly getting empty "find" panes accumulating in my bottom panel.  Again, I'm not 100% sure what causes them and can't seem to reproduce them intentionally.
    4. I've tried numerous times on numerous versions of Komodo to set up the keybindings for "focus X pane", but they just don't seem to work at all.
    5. There are various places in the settings UI where one setting determines another, but it's not clearly indicated.  So you can change a setting, click OK, and your change will just be ignored because it's invalid, but there's no indication of that.  A good example is the new unit test runner.  For options other than "custom", the testing framework determines which parser is used, but you can still change the parser in the UI.  It just doesn't do anything.
    6. The syntax checking preferences for JavaScript allow you to set a custom JSHint version to use, probably because the integrated one is kind of old.  I've tried this numerous times and have never been able to get it to work.  Oh, and speaking of JSHint, I'm still kinda miffed that Komodo 10 removed the graphical JSHint configuration UI.  Now there's just a text area to enter your settings, so you have to go to the JSHint site and look up the options rather than just being able to pick from a list.
  2. Pretty over functional.  In the last few releases, some of the tools have been revamped in such a way that makes them prettier, but in my opinion doesn't actually make them more useful.  The two big examples that spring to mind are version control and unit testing.
    1. In older versions of Komodo, my only complaint about the VCS integration was that there wasn't an easy way to do a commit of all pending changes in the project - I fixed that myself with a simple macro.  In the new version, they fixed that problem.  But at the same time, the VCS widget no longer displays changes for added files, which is a big pain.  I typically use my VCS diff for pre-commit code review, so it's kind of inconvenient to have to switch to another view to do that.
    2. The new unit test runner in Komodo 10.2 looks very pretty.  However, they removed the key bindings I was using to run the current test suite.  And it didn't detect my old test suites, so I had to recreate them.  They also changed the name-humanizing algorithm, so that test names that used to be rendered nicely in the runner aren't anymore.
  3. Features I don't care about.  It feels like there have been a few of these added lately.  Some of them seem like things that are really just gimmicks that look good in marketing material but don't provide any value.  Others do provide genuine value, but seem tacked on to pad the feature set, i.e. they're useful, but I don't see the point of having them right in the IDE.  Some examples include:
    1. Collaboration.  You can do real-time editor sharing with someone else using KomodoIDE.  Cool!  And maybe it would be useful if I was doing remote pair programming with someone else who uses Komodo.  But I'm not, so I've never actually used it.  I suppose this could substitute for some of those "social coding" web editors, but this doesn't feel like a general enough use-case to want it integrated into my IDE.
    2. Sharing to kopy.io.  Upload code snippets directly to a code-sharing website.  Nice!  But again, that's something that's not hard to do without IDE support and that I seldom, if ever, do anyway.  And even if I did, I'd just create a gist on GitHub, not use a new site.
    3. Slack sharing.  You can share code in a Slack channel right from inside Komodo.  Great!  But again, I don't do this often and it's not clear how this is easier than just copy-and-pasting the code into Slack.
    4. Minimap.  A 10,000-foot overview of how your code looks that replaces the traditional scroll bar.  I think Sublime Text had this first.  Sure, it looks really cool, but does anyone actually use these things?  I don't spend a lot of time looking for code segments based on the shape of the text, so it's never been clear to me what useful information this provides.
    5. HTTP inspector.  This is actually an older feature - an HTTP proxy that allows you to inspect traffic.  And it's a genuinely valuable thing for a developer to have.  But you still have to set it up like any other HTTP proxy, so it's not clear how having this baked into the IDE is better than just using a stand-alone proxy app.  And again, this isn't something I need on a regular basis so I've never used it for actual work.
  4. Features that are strange or hard to use.  There are also features that I use, or would like to use, that either don't make sense or don't quite do what I need.  In other words, they could be really good and useful, but they fall short.  For instance:
    1. Keyboard navigation.  Komodo actually has a pretty nice configuration interface for setting up keyboard shortcuts.  Unfortunately, a lot of the actual bindings you might want don't exist (or don't work, as previously noted).  But my big gripe is that navigating around the UI using the keyboard is difficult, particularly in the side/bottom panels.  Trying to navigate between fields within a pane often either doesn't work, gets stuck, switches you to the main editor, or otherwise fails in strange ways.  And as I mentioned earlier, trying to use the keyboard to navigate between different panes seems to just generally not work.
    2. Regex toolkit.  This seems like a really useful tool.  But I'll be darned if I can figure out how it's supposed to work.  Every now and then I try it and I always spend more time trying to figure out how it works than it would take to just write a one-off test script to test the regex.
    3. Publishing.  Komodo has a publishing tool that lets you push your code up to a remote system for testing.  That's a very nice and useful thing.  Except that it's actually a "synchronizer," by which I mean it only does two-way synchronization of files with a remote server.  Why?  What if I don't care what's on the server and just want to clobber it every time?  That's not such an uncommon occurrence with test servers.  In fact, for me, wanting to pull in changes from the remote servers is distinctly an edge-case, not something I'd want to happen by default.

I could probably go on, but I think I've made my point.  It's not that Komodo is a bad IDE - far from it.  But there are a number of rough edges and niggling little issues that are increasingly starting to bother me.  Choice of editor can be a very personal and subjective thing, and for me it just feels like it's time for a change.

What do I want?

So that leaves the question: what do I want in an editor or IDE?  Well, I'm not completely sure.  I've been using Komodo for a long time, and I do like a lot of things about it.  So let's start with a list of some of those things.  That should at least work as a jumping off point.

  1. Good VI emulation.  I've been using VI emulation in my editors ever since I worked for deviantART.  I got a MacBook Pro when I started there and found the weird keyboard layout completely infuriating, particularly when I switched back to my regular PC keyboard, so I decided to just switch to VI key bindings since they're always the same.  Since then, I've gotten progressively more friendly with VI mode, to the point where coding without it feels strange.  Komodo includes fairly decent VI emulation out of the box, and it's relatively easy to add customizations, so any editor I pick up will need comparable VI emulation support.
  2. Code formatters.  One of the really handy features of Komodo is built-in code formatters.  My most common use-case for this is copying some JSON out of the browser network monitor, pasting it into Komodo, and formatting it so I can analyze it.  It would be really nice for a new editor to support something like that.
  3. Light-weight.  A concept of "projects" is nice in an IDE, but sometimes I just want to quickly open a file.  So I'd rather not use an IDE that takes ages to start up and insists that everything be part of a project (I'm looking at you, Eclipse).  Komodo is pretty good in that regard - it can just start up and open a file without it being too much of an ordeal.
  4. Extensibility.  No editor or IDE is ever perfect out of the box, so it's important to have a decent extension ecosystem around your editor.  Komodo is pretty good in this regard, with a community site offering a nice assortment of extensions.
  5. Scriptability.  In addition to extensions, one of the things Komodo gets really right is giving you the ability to easily write simple scripts to automate things.  It lets you write user scripts in JavaScript that can be fairly easily hooked into the UI.  This is huge.  There are a lot of "small wins" you can achieve with this kind of scripting that will really improve your workflow.  Supporting extensions is great, but it's hard to justify integrating a feature into your IDE if you need, e.g., an entire Java build environment to write an extension for something you could do in five lines of shell.
  6. Multi-language.  This is a big one.  If you regularly code in more than one language, having to learn a different IDE for each one is a real drag.  In the best-case scenario, you have to configure the same things for each editor.  In the worst-case scenario, you have to learn two completely different tools with completely different features.  Most of my professional work is in PHP and JavaScript these days, but I also do some Python, SQL, BASH and PowerShell scripting, and I'm trying to learn Haskell.  So given the choice, I'd rather learn one editor inside and out than have a bunch of language-specific editors that I use as "notepad with some extra features".
  7. Cross-platform.  Right now I use Windows both at home and at work, but that's not set in stone.  I actually use Windows at work by choice (yeah, yeah, I know) - there are a few people with MacBooks, but most of the other developers use Ubuntu.  In the past, I've gone back and forth between platforms, so running on Windows, Linux, and MacOS is a hard requirement for me.  I don't want to be tied to one platform or have to learn a different IDE for each if I decide to switch.
  8. The right feel.  This one is highly subjective, but it's important to me that my IDE feel smooth to use.  I want it to work with me, rather than feeling like I have to adapt to it.  This has always been my problem with Eclipse - I feel like the UI doesn't really adapt smoothly to what I want.  Something about coding in it just feels a little "off" to me.

Time to experiment

So it's time to start doing some experimentation.  It took me a long time to finally settle on Komodo, so I'll probably go back and forth a few times.  

I've got lots of choices, to be sure.  Back when I settled on Komodo Edit, I was looking primarily at free and open-source editors.  My use of Komodo IDE grew out of that.  These days, I'm not as price-sensitive, so commercial IDEs are definitely on the table, provided they have reasonable non-Enterprise pricing (i.e. I'm not gonna spend $2000 on one).

Last time I was IDE shopping I looked at a bunch of Linux-only editors, but those are out.  I used Eclipse for a while, but I'm not inclined to revisit that.  I also used jEdit for a while, and while it was fairly nice, it doesn't appear to be getting much active development these days, and I'm not sure it fits my current requirements anyway.  But now we have things like Sublime Text, Atom, Visual Studio Code and the entire suite of JetBrains IDEs.  So I've got lots of things to look at.  If nothing else, the process will be good for a few blog posts.

KeePass browser plugins

In my last post about KeePass, I mentioned that you can integrate your KeePass password database with your web browser.  In this post, I'll tell you more about how to do that and why it's an extremely handy thing.

Why bother?

So why would you want to bother with integrating your browser with KeePass?  I mean, most browsers have a feature to remember your passwords anyway, so why not just use that?  Or if you want to use KeePass, why not just use that auto-type feature I talked about in the last post?

It's true, you could just use the password manager that's built into your browser.  Pretty much all of them have one, these days.  Most of them will even secure your data with a master password.  They may even synchronize your passwords to the cloud, so you can access them on more than one device.  Granted, that's pretty handy.

However, browser password managers generally just do passwords - they don't allow you to enter extra information or attach files like KeePass does.  They also don't work for things outside the web browser, like for security software such as VPN clients.  So they don't provide you with a single, secure location for all your important account information.  But more importantly, they're generally tied to a single browser.  Sure, Google Chrome can store and synchronize all my passwords, but what if I decide I don't like Chrome anymore?  Maybe I just bought a Mac and decided I really like Safari.  Is there an easy way to get my passwords out of one browser and into another?  I don't know.  

By using KeePass with a plugin for your browser, you can get the best of both worlds.  KeePass itself gives you more power and features than browser password managers and keeps you from being tied to a single browser.  Using a browser integration plugin adds the ability to have the browser automatically fill in your username and password when you visit a website.  It's not quite as convenient as the browser-integrated password managers, but it's still pretty good.  And it's definitely a lot easier than trying to use auto-type or copy-and-paste to fill in password forms.

What are my options?

In general, there are a lot of plugins available for KeePass.  Just look at the list.  Or maybe don't - you probably don't care about 90% of those plugins.  The main thing you need to know about is which browsers have plugins available. 

Short answer: Chrome, Firefox, and Safari.

Long answer: Chrome, Firefox, and Safari have proper browser plugins available.  The Chrome plugin also works in Vivaldi and possibly other browsers that are based on Chrome.  There are also form-filling plugins that work with Internet Explorer.  To my knowledge, there is no plugin support available for Microsoft Edge.

For this entry, I'll just talk about setting up a plugin with Chrome.  We're going to use a Chrome extension called ChromeIPass.  It adds a KeePass button to the toolbar in Chrome and can automatically detect login forms on webpages you visit.  It works with a KeePass plugin called KeePassHttp.

First, you need to install the KeePassHttp plugin.  Start by going to the KeePassHttp website and clicking the "download" link, or just download it directly here.  Sadly, KeePass doesn't have a nice way to install plugins - you just have to copy the plugin file to the KeePass plugins folder on your system.  Inconvenient, but fortunately not something you need to do very often.  On most computers, this will be C:\Program Files (x86)\KeePass Password Safe 2\Plugins.  So just copy the KeePassHttp.plgx file that you downloaded and paste it into that location.  Since this is a system directory, you will probably be prompted to grant access.  Click "continue" to copy the file.  Note that if KeePass is running, you will need to close and restart it for it to detect the plugin.

Click "continue" when prompted to allow access to copy the plugin.

Now that the KeePassHttp plugin is installed, KeePass will be able to communicate with Chrome.  You just need to install the ChromeIPass extension.  You can do that by going to the Chrome web store page here and clicking the "Add to Chrome" button.  

So...now what?

OK, now that ChromeIPass is installed, what do you do with it?  Well, not really much until it's time to log into a site.  So pick a site that's in your KeePass database and go there - I'll use sourceforge.net for this example because it's a pretty standard login form.

The first time you try to log into a site using ChromeIPass, you'll need to connect it to your KeePass database.  You should notice a KeePass icon is now in your toolbar.  Make sure KeePass is running and click that button.

You should see a "Connect" button.  Click that and KeePass will prompt you to add a new encryption key for the KeePassHttp plugin.  This is a security mechanism - the KeePassHttp plugin encrypts its communication with your KeePass database and this is just the initial step where it sets that up.  Don't worry about the details right now - just type in a unique name for the key, maybe based on your browser and computer, e.g. "Laptop - Chrome".  You only have to do this the first time you connect a browser to your database - after that, the encryption is automatic.

Now that ChromeIPass is connected to your KeePass database, you can click the ChromeIPass button in your toolbar and click the "Redetect Credentials Fields" button to fill in your username and password.  Alternatively, you can just refresh the webpage and they should be auto-filled.  You won't see anything in the browser yet, but KeePass itself will prompt you to allow access to the password for this site.  You can check the "Remember this decision" box to not be prompted to allow access the next time you visit this site.

(I should probably stop to acknowledge that having to grant a site access to your KeePass database before you can log in is kind of a drag.  I agree, it is somewhat annoying.  It's actually a security feature of KeePassHttp - that's the portion of this setup that runs inside KeePass itself and allows the ChromeIPass extension to talk to it.  KeePassHttp has a lot of security-related settings, and that's a good thing, because it essentially provides a way for other programs to read your KeePass database, and you want to make sure that malware or dodgy websites aren't able to do that.  However, if you want to disable some of these settings, like prompting to allow access, you can do that by going into KeePass and selecting the "Tools > KeePassHttp Options" menu item.  The KeePassHttp documentation has some more information on the available settings.)

The good news is that now you're done!  After you allow access to KeePass, ChromeIPass will automatically fill in your username and password.  If you selected the "remember" option when allowing access to the site, ChromeIPass will automatically fill in your login info the next time you visit the site, no action required.  You will only have to allow access the first time you visit a new site or if you elect not to have KeePass remember the approval.

If you're so inclined, ChromeIPass has a number of other features, as detailed in the documentation.  For instance, it can save or update entries automatically when you enter a password into a webpage; it has a built-in password generator that lets you create strong passwords right in the browser; it can customize the login fields for non-standard login forms; and it provides a handy right-click menu to fill in passwords and access other functionality.  

Hopefully this will help get you started.  Using a password manager is a must for keeping your accounts secure these days, and integrated browser support makes using one that much easier, which means you're more likely to keep using it.

Using KeePass

You should be using a password manager.  If you're a technical person, this is probably not news to you - you're very likely already using one.  

This article is for the non-technical people.  People like my wife (hi, honey!) and my mom.  People who access a lot of websites and have a lot of passwords to remember.

Security 101

So why is using a password manager a good idea?

Well, you may have seen guidelines for cyber security that tell you things like:

  1. Don't write down your passwords.
  2. Don't reuse passwords on different sites.
  3. Don't use short, easy to guess passwords.
  4. Don't use passwords that are easy to figure out from public data (like a birthday that anyone can get from your Facebook profile).

Such guidance raises the question: if I have to use long passwords that aren't related to anything in my life, and I can't reuse them or write them down, how the hell am I supposed to remember them?!?

This is a totally reasonable question.  Yes, ideally we would all memorize a hundred different 32-character-long, randomly generated passwords.  But in real life, nobody can actually do that.  So a password manager is a good compromise.

What is a password manager?

My mother has a little paper "password book" that she keeps in a drawer next to her computer.  When she has to create a new account for some website, she writes down all the login information in that book so that she can look it up later.

A password manager is the digital equivalent of that password book.  It's an application that lets you record your login information and then look it up later.  Most password managers have lots of other handy-dandy features as well, but that's the core of what they do.

So how is this different from, say, writing down all your passwords in a Word document on your desktop?  Well, a password manager encrypts all your data.  It requires a "master password" to decrypt your information, so if some nasty hacker steals that file, they won't be able to actually read it.  

Is this as secure as just memorizing all your passwords?  No.  But as we said, nobody can do that anyway, and this is one heck of a lot more secure than the alternatives, i.e. reused or weak passwords.  With a password manager, you can still have strong, unique passwords for all your sites, but you're relieved of the burden of having to remember them all.

About KeePass

There are a number of password managers out there, but the one I'm going to talk about is KeePass.  It's a free, open-source password management application that will run on Windows, Linux, and Mac, and has compatible apps available for iOS and Android.  KeePass works offline (i.e. it requires no internet connection and doesn't depend on any online services), but it's possible to sync your KeePass passwords between devices using file sync tools like DropBox or OneDrive.  So it provides you some flexibility, but you aren't beholden to a single company that can get hacked or go out of business.

KeePass creates password files that end with ".kdbx".  You can open those files from within KeePass or double-click on them in Windows Explorer.  When you try to open one, KeePass will prompt you for the master password to that file.  Every KDBX file has its own master password.  This allows you to do things like create a password file to share with the rest of your family, and have a different one for the accounts that are just yours.  (That's a topic for a different post.)

One of the handy extra functions of KeePass is that each entry in your password safe can have a bunch of extra data associated with it.  For example, you can add custom fields and attach files to each entry, which are handy for things like account validation questions and activation files for software licenses.  Basically, you can keep all the important information in one place.  And since KeePass encrypts your entire password file, it will all be kept secure.

Using KeePass

So how do you use KeePass?  Let's walk through it.

Step 1 - Download

The first thing you need to do is to get yourself a copy of KeePass.  You can go to this page and click the download link for the "professional edition".  (There's not really anything "professional" about it - it's just a newer version with more features.)  When that's done, you can double-click the file to install it like any other program.

You can also install KeePass through Ninite.  If you're not familiar with Ninite, I strongly recommend you check it out.  It's a great tool that makes it brain-dead simple to install and update a collection of programs with just a few clicks.  You basically just select a bunch of applications you'd like to install from a list, click a button, and you get an installer program you can run to put everything you selected on your computer.  And if you run that program again later, it will actually update any of those programs that have a newer version.  It's very slick and a real time-saver.

Step 2 - Create a password safe

Next, open up KeePass and click "File > New".  You will be prompted to choose where you want to save your new password database.  Choose a name and folder that work for you.  Remember - your password database is just a regular file, so you can always move or rename it later if you want.

After that, you should get a dialog that looks like this:

This shows several options for securing your password safe.  But don't worry about that - the one you really want is the first one, "master password".  So choose a password and type it in.  If you click the three dots on the right, KeePass will display the password as you type, so that you don't have to re-enter it.

There are two important things to note when choosing a master password.  First, since it's going to protect all your other passwords, you want to make it good.  KeePass provides a password strength meter to help you judge, but the main things to bear in mind are that you want a range of different characters and you want it to be long.  And no, ten letters does not qualify as "long" - it should be more of a passphrase than a password.  One common technique is to use a full sentence, complete with capitalization and punctuation (and maybe some numbers, if you can work them in).  That will generally give you a pretty strong password, but it will still be easy to remember.

The other important thing to remember is that the password to a KDBX file is your encryption key for that file.  That means that the only way to decrypt the file is with that password.  If you forget your master password, your data is gone forever.  So write down your master password and keep it in a safe place until you're certain you've committed it to memory.  And if you want to change your master password later, make sure to make a backup copy of your KDBX file first.

After you've chosen a master password, you should see a screen that allows you to configure some of the settings for your password file.  However, you don't really need to worry about this - those are all optional.  You can safely click the "OK" button to just continue on.

Step 3 - Organize your passwords

Alright!  You now have a password database set up.  You should see a list of groups on the left and a list of password entries on the right, like in the image below.  These are the sample groups and entries that KeePass creates by default.  They're just to give you an idea of how to use your password database - you can safely delete them at any time.

You can click on each group at the left to see what entries it contains.  The groups are basically like folders in Windows.  There's a top-level folder, and it contains a bunch of sub-folders, and each of those sub-folders can contain other folders.  So in the screenshot, you can see that "NewDatabase" is highlighted in the group list.  That's the top-level folder for my example database.  You can see on the right that it contains two entries.  You can move an entry into another folder by dragging it from the entry list on the right onto one of the folders on the left.

Step 4 - Create passwords

To add a password entry to your database, select "Edit > Add Entry" from the menu.  That will bring up the entry edit screen.  This is the same screen you'll see when you double-click on the title of an existing entry, except that it is mostly blank.

There are a lot of tabs and options on this screen, but you don't really need to worry about those.  The main things are right in front of you: the entry title, user name, and password.  You'll probably also want to fill in the URL field with the web address of the site this information is for.  This will come in handy if you want to use a KeePass plugin for your web browser (which we'll cover in another post).  When you're done entering your info, click the OK button to create the entry.  You should then select "File > Save" from the menu or push the "save" button on the toolbar to save the changes to your password database.

You'll probably notice that there's already a password filled in.  KeePass will generate a random password for new entries.  You are free to change this yourself or use the button to the right of the "repeat" box to generate other random passwords using different rules.  KeePass has a password generator that lets you specify the allowed characters and length for a random password, which is handy for those sites that insist on specific password length or complexity rules.

Step 5 - Getting your passwords out

Now let's back up and say you've just started up your computer, are logging in to some website, and want to get a password out of KeePass.  The first thing you need to do is open up your password database.  You can do this by double-clicking on it in Windows Explorer or by opening up KeePass and then selecting your database from the "File > Open" menu.  When you open the database, you'll be greeted by a screen asking you to enter your master password - you know, the one you came up with in step 2.  (Hint: remember that you can click the button with the three dots to display the password as you type it.)  After you enter your master password, the database will be decrypted and you'll find yourself at the entry browsing screen from step 3.

There are several ways to get your passwords out of KeePass.  Here's the list in order of preference:

  1. Use a browser plugin to automatically fill in login forms.  Since most of the passwords you end up creating are for websites, setting up your browser to fill in the forms from your KeePass database makes life much easier.  I'll talk about how to do that in the next post.  But don't worry - it's not all that hard.
  2. Use auto-type.  This is a feature of KeePass where you click a button in the KeePass window and it will automatically send the keystrokes for your username and password to the last window you used.  So, for example, you would navigate to the login page of a site in your web browser, click in the username box, and then switch over to the KeePass window and click the auto-type button on the toolbar (the one that looks kind of like some keyboard keys - hover your cursor over the buttons to see the descriptions).  By default, the auto-type feature will type your username, the "tab" key, your password, and then the "enter" key.  This will work for probably 90% or more of login pages, but it's not universal, so be aware of that.
  3. Copy them to the clipboard.  If all else fails, you can always just copy your passwords to the clipboard so that you can paste them into another window.  KeePass makes this fairly easy.  In the main password list that you saw in step 3, when you double-click on the actual username or password for an entry in the list, it will copy that to the clipboard.  This saves you having to open up the entry edit screen and copy things there.  You can then switch to another window and paste the data into a login screen.
  4. Just read it.  Last, but not least, you can always go low-tech and just read the passwords out of the window.  Just double-click the name of your entry, then click the "three dots" button to make the password visible.  Clearly this is not great, but sometimes it's necessary.  For example, you will need to do this when entering a password on a system that doesn't have KeePass installed, such as to log into your Amazon or Netflix account when setting up a Roku or other streaming media system.

Conclusion

With any luck, I've made this "password manager" thing sound good enough for you to try it out.  You really should look into it.  Password reuse has become increasingly dangerous, with hackers trying the usernames and passwords they harvested from one hack on other sites just to see if they work.  Password cracking tools have also advanced a lot in recent years, including information gleaned from previous attacks, so relying on things like "133t 5p34k" passwords is no longer good enough.  A decent password manager, if used consistently with randomly generated passwords, will provide you with a good trade-off between convenience and security.

Making Windows Git Do SSH

Another quick note to my future self: setting up Git under Windows to use SSH key authentication is pretty easy...once you know what to do.

At work, we have some Composer and Bower packages that we fetch from our internal GitHub Enterprise server.  And, of course, all the source lines in the composer.json and bower.json files use SSH references.  I just use HTTP for my code checkouts, so I finally had to figure out how to make Git for Windows authenticate with my SSH key.

Turns out it's pretty easy.  I have a key pair I created with PuTTYgen.  All I had to do was export my private key to OpenSSH format and copy that file to C:\Users\<my_username>\.ssh\id_rsa.  Git picks it up with no further configuration.  Then I just added my key to GitHub and I was good to go.
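
For future-me, the whole thing fits in a few lines.  In PuTTYgen it's "Conversions > Export OpenSSH key"; after that, from Git Bash (the exported-key path and the hostname below are placeholders):

    # Put the exported private key where OpenSSH - and therefore Git - looks for it.
    mkdir -p ~/.ssh
    cp /c/Users/<my_username>/Downloads/exported-key ~/.ssh/id_rsa
    chmod 600 ~/.ssh/id_rsa
    # Sanity check: should greet you by username rather than prompt for a password.
    ssh -T git@github.example.com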

Using PuTTY inside ConEmu

This is yet another one of those "post this so I can refer back to it later" things.  So if you're not a Windows user, or if you don't use ConEmu, then I suggest you go get a cup of coffee or something.

So for a while now I've been using ConEmu as my Windows console app.  It supports multiple tabs, transparency (ooooh), customizable hotkeys, customizable sessions, Far manager integration and a whole bunch of other nifty stuff.  

A couple of months ago, I saw that it was possible to embed PuTTY, the popular Windows-based SSH client, directly in a ConEmu tab.  So I tried it out and found it to be pretty slick.  The only down side was some key binding weirdness.

First, there's the general PuTTY issue that if you accidentally press ctrl+S - you know, the key combination that means "save" in just about every editor in existence - it effectively locks the terminal and it's not obvious how to get control back.  The second issue is that, when you embed an application like PuTTY in ConEmu, it steals most of your keyboard input, so the standard key bindings for switching between window tabs don't work.

Luckily, these problems are easily fixed.  The fixes are just hard to remember, which is why I'm writing them down.  For the ctrl+S issue, you can just hit ctrl+Q to turn input back on.  For the tab-switching issue, you can use the global key bindings for ConEmu - namely Win+Q, Win+Shift+Q, and Win+<Number> to switch consoles, as well as Win+Z to toggle focus between ConEmu and PuTTY.

Finally upgraded to Komodo IDE

Well, I finally did it - I shelled out for a license of Komodo IDE.  I've been using Komodo Edit on and off since about 2007, but had never needed the extra features of the IDE to justify the $300 price tag.  But last week ActiveState had a one-week $100 off sale, so I decided to try it out again and ended up deciding to make the purchase.

My Komodo IDE setup

Part of what motivated me to give Komodo IDE another shot after getting along fine with Komodo Edit for so many years (aside from the price, of course) is my new-ish job.  It turns out that Eclipse is fairly popular among the Pictometry engineers, despite the fact that they're a pretty smart bunch (I'm not a fan of Eclipse).  Thus many of them take integrated debugging for granted, so on a few occasions I've been told to debug an issue by "just stepping through the code."  And to be fair, there have been a few bugs I've worked on where integrated debugging genuinely would have been useful.  Perhaps this is due to the fact that our internal back-end framework is significantly more Java-like than most code-bases I've worked on (but that's another post entirely).

Of course, that's not all there is to Komodo IDE.  There are plenty of other nifty features, such as the regex testing tool, structure browser, integrated source control, and a number of other things.  So when you put them all together, it's a pretty good value, especially at the sale price of $195.  And with the cool new stuff in Komodo 8.5, I have to say I've pretty much lost interest in trying out new editors.  The ActiveState team did a really nice job on the latest version.

Going WYSIWYG

I must be getting old or something.  I finally went and did it - I implemented a WYSIWYG post editor for LnBlog (that's the software that runs the blog you're reading right now).

I've been holding out on doing that for years.  Well, for the most part.  At one point I did implement two different WYSIWYG plugins, but I never actually used them myself.  They were just sort of there for anybody else who might be interested in running LnBlog.  I, on the other hand, maintained my markup purity by writing posts in a plain textarea using either my own bastardized version of BBCode or good, old-fashioned HTML.  That way I could be sure that the markup in my blog was valid and semantically correct and all was well in the world.

The LnBlog post editor using the new TinyMCE plugin.

If that sounds a little naive, I should probably mention that I came to that conclusion some time in 2005.  I had only been doing web development for a few months and only on a handful of one-man projects.  So I really didn't know what I was talking about.

Now it's 2014.  I've been doing end-to-end LAMP development as my full-time, "I get paid for this shit" job for almost seven years.  I've worked for a couple of very old and very large UGC sites.  I now have a totally different appreciation for just how difficult it is to maintain good markup - and for where it actually does, and should, rank on the priority scale.

In other words, I just don't care anymore.

Don't get me wrong - I certainly try not to write bad markup when I can avoid it.  I still wince at carelessly unterminated tags, or multiple uses of the same ID attribute on the same page.  But if the markup is generally clean, that's good enough for me these days.  I don't get all verklempt if it doesn't validate and I'm not especially concerned if it isn't strictly semantic.

I mean, let's face it - writing good markup is hard enough when you're just building a static page.  But if you're talking about user-generated content, forget it.  Trying to enforce correct markup while giving the user sufficient flexibility and keeping the interface user-friendly is just more trouble than it's worth.  You inevitably end up recreating HTML, but with an attempt at idiot-proofing that limits the user's flexibility in unacceptable ways.  And since all the user really cares about is what a post looks like in the browser, you either need an option to fall back to raw HTML for those edge-cases your idiot-proof system can't handle, which completely defeats the point of building it in the first place, or you have to tell the user, "Sorry, I can't let you do that."

"But Pete," you might argue, "you're a web developer.  You know how to write valid, semantic HTML.  So that argument doesn't really apply here."  And you'd be right.  Except there's one other issue - writing HTML is a pain in the butt when you're trying to write English.  That is, when I'm writing a blog post, I want to be concentrating on the content or the post, not the markup.  In fact, I don't really want to think about the markup at all if I can help it.  It's just a distraction from the real task at hand.

Hence the idea to add a WYSIWYG editor.  My bastardized BBCode implementation was a bit of a pain, I didn't want to fix it (because all BBCode implementations are a pain to use), and I didn't want to write straight HTML.  So my solution was simply to resurrect my old TinyMCE plugin and update it for the latest version.  Turned out to be pretty easy, too.  TinyMCE even has a public CDN now, so I didn't even have to host my own copy.

So there you have it - another blow struck against tech purity.  And you know what?  I'm happier for it.  I've found that "purity" in software tends not to be a helpful concept.  As often as not, it seems to be a cause or excuse for not actually accomplishing anything.  These days I tend to lean toward the side of "actually getting shit done."

Getting a password manager

After shamelessly reusing passwords for far too long, I finally decided to get myself a decent password manager. After a few false starts, I ended up going with KeePass. In retrospect, I probably should have started with that, but my thought process didn't work out that way.

Originally, my thought was that I wanted to use a web-based password manager. I figured that would work best as I'd be able to access it from any device. But I didn't want to use a third-party service, as I wasn't sure how much I wanted to trust them. So I was looking for something self-hosted.

PPMA

I started off with PPMA, a little Yii-based application. It had the virtue of being pretty easy to use and install. There were a few down sides, though. The main one was that it wasn't especially mobile-friendly, so there were parts of the app that actually didn't work on my phone, which defeats the whole "works on any device" plan. Also, it really only supported a single user, so it's not something I could easily set my wife up on as well. (To be fair, the multi-user support was sort of there, but it was only half-implemented. I was able to get it basically working on my own, but still.)

More importantly, I wasn't entirely confident in the overall security of PPMA. For starters, the only data it actually encrypted was the password. Granted, that's the most important piece, but it's a pretty minimalist approach to account security. Worse, I wasn't 100% convinced that even that was secure - it's not clear to me that it doesn't store a password or key in session data that could be snooped on a shared server. Of course, I haven't done an extensive analysis, so I don't know whether it actually has any problems, but the possibility was enough to make me wary, and I didn't really want to do an extensive audit of the code (there was no documentation to speak of, and certainly nothing on the crypto scheme).

The next package I tried was Clipperz. This is actually a service, but their code is open-source, so you could conceivably self-host it. I had a bit more confidence in this one because they actually had some documentation with a decent discussion of how their security worked.

Clipperz - beta UI

The only problem I had with Clipperz was that I couldn't actually get it to work. Their build script had some weird dependencies and was a pain to deal with (it looked like it was trying to check their source control repository for changes before running, for some reason). And once I got it installed, it just flat-out didn't work. I was able to create a new account, but after that every request just returned an error. And to make things worse, it turns out their PHP backend is ancient and not recommended - it's still using the old-school MySQL database extension. The only other option was the AppEngine Python backend, which wasn't gonna work on my hosting provider. So that was a bust.

It was at that point that I started to think using a web-based solution might not be the best idea. Part of this is simply the nature of the web - you're working over a stateless protocol and probably using an RDBMS for persistence. So if you want to encrypt all the user's data and avoid storing their password, then you're already fighting with the medium. A desktop app doesn't have that problem, though - you can encrypt the entire data file and just hold the data in memory when you decrypt it.

It also occurred to me that accessing my passwords from any computer might not be as valuable as I'd originally thought. For one thing, I probably can't trust other people's computers. God alone knows what kind of malware or keyloggers might be installed on a random PC I would use to access my passwords. Besides, there's no need to trust a random system when I always have a trusted one with me - namely, my phone.

Great! So all I really need is a password manager that runs on Android.

Well...no, that won't do it. I don't really want to have to look up passwords on my phone and manually type them into a window on my desktop. So I need something that produces password databases that I can use on both Android and Windows.

Luckily, KeePass 2 fits the bill. It has a good feature set, seems to have a good reputation, and the documentation had enough info on how it works to inspire some confidence. The official application is only Windows-based, but there are a number of unofficial ports, including several to iOS and Android. It's even supported by the Ninite installer, so I can easily work it into my standard installation.

KeePass2

For me, the key feature that made KeePass viable was that it supports synchronization with a URL. There are extensions that add support for SSH and cloud services, if you're into that sort of thing, but synchronizing via standard FTP or WebDAV is built right in. KeePass also supports triggers that allow you to automatically synchronize your local database with the remote URL on certain events, e.g. opening or saving the database.

For the mobile side, I decided to go with Keepass2Android. There are several options out there, but I chose this one because it supports reading and writing the KeePass 2.x database format (which not all of them do) and can directly read and write files to FTP and WebDAV. It's also available as an APK download from the developer's site, as opposed to being available exclusively through the Google Play store, which means I can easily install it on my Kindle Fire.

Keepass2Android also has a handy little feature called "QuickUnlock", which allows you one chance to unlock your database by typing just the last few characters of your passphrase. If you get it wrong the database is locked and you need to enter the full passphrase. This addresses one of my main complaints about smart phones - the virtual keyboards work to actively discourage good passwords because they're so damned hard to type. I chose a long passphrase which takes several seconds to type on a full keyboard - on a virtual keyboard, it's absolutely excruciating. This way, I don't have to massively compromise security for usability.

So, in the end, my setup turned out to be fairly straightforward.

  1. I install KeePass on all my computers.
  2. I copy my KeePass database to the WebDAV server I have set up on my web hosting.
  3. I set up all my computers with a trigger to sync with the remote URL.
  4. I install Keepass2Android on my phone and tablet.
  5. I configure them to open the database directly from the URL. Keepass2Android caches remote databases, so this is effectively the same as the desktop sync setup.
  6. Profit! I now get my password database synchronized among all my computers and devices.

I've been using this setup for close to a month now, and it works pretty darn well. Good encryption, good usability, plenty of backups, and I didn't even have to involve a third-party service.

Command-line shortcuts

I came across an interesting little program the other day. It's called Go. It's a Python script for quickly navigating directories via what are essentially command-line shortcuts. I discovered it while perusing my Komodo Edit preferences - it was mentioned in the settings for the Fast Open extension, which I believe is included by default as of Komodo 5.1.

The Fast Open settings in Komodo Edit.

The beautiful thing about Go is how simple it is. You run a quick command to set an alias, and from then on you can just type go alias and it will change to that directory. You can also add paths after the alias, such as go alias/dir1/dir2, to switch to a subdirectory of the alias. Great for getting around deep hierarchies, like the way Windows programs like to bury things three levels under your "Documents" directory.

However, as I played around with Go, I did come across a few annoyances. The biggest one, of course, was that it didn't work under Powershell. The go.bat wrapper script would run...and do nothing. The current directory stayed the same. Turns out this was because Go uses a driver-based system for changing directory, with the driver chosen by your current shell. The Windows driver uses batch file syntax and runs under cmd.exe, and Powershell launches batch files in a separate process, so naturally the current directory of the Powershell session never changed.

So, in the spirit of open-source, I decided to fix that problem. And while I was at it, I fixed a couple of other things and implemented some of the feature requests posted on the Google Code issue tracker. Here's the quick list:

  • Added support for Powershell.
  • Added built-in shortcut "-" pointing to the OLDPWD environment variable. On Windows, Go handles setting this variable (on UNIX, "cd" takes care of that).
  • When invoked without any argument, change to home directory.
  • Resolve unique prefixes of shortcuts, e.g. "pr" resolves to "projects" if there's no other shortcut starting with "pr".
  • Made -o option work without having the win32api bindings installed.

Below are a patch file for the official Go 1.2.1 release (apply with patch -p2 < go-posh.patch) as well as a fully patched setup archive for those who just want to get up and running. Note that, to enable Powershell support, you'll need to set the SHELL environment variable before running the Go setup so that Go knows to use Powershell. You can do this by adding $env:SHELL = "powershell" to your Powershell profile script.

Patch file: go-posh-1.2.1.patch
Full Go archive: go-posh-1.2.1.zip
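
For instance, assuming the default profile location, the one-time setup from a Powershell prompt is just:
# Tell Go to use the Powershell driver in every future session
# (Add-Content creates the profile file if it doesn't already exist)
Add-Content $PROFILE '$env:SHELL = "powershell"'
# Then, in a new session, shortcuts work as usual, e.g.:
go projects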

Edit: Fixed support for "-" so that it actually works like it does in UNIX.

Edit again: I've added a few more small features and also created a project for this patch in my bug tracker.

Initial Windows setup

Well, I did my Windows 7 install the other day. One day later, I'm doing pretty well. Ran into some problems, but so far the results are not bad.

Unsurprisingly, the actual install of Windows 7 was pretty uneventful. Pretty much the same as a typical Ubuntu installation - selecting partition, entering user info, clicking "next" a few times, etc. Nothing to report there.

The initial installation of my core programs was pretty easy too, thanks to Ninite. They have a nifty little service that allows you to download a customized installer that will do a silent install of any of a selected list of free (as in beer) programs. So I was able to go to a web page, check off Opera, Thunderbird, Media Monkey, the GIMP, Open Office, etc., download a single installer, and just wait while it downloaded and installed each program. Not quite apt-get, but pretty nice.

My first hang-up occurred when installing the Ext2IFS. Turns out that the installer won't run in Windows 7. You need to set it to run in Windows Server 2008 compatibility mode. And even after that, it was a little dodgy. It didn't correctly map my media drive to a letter on boot. It worked when I manually assigned a drive letter in the configuration dialog, but didn't happen automatically. It was also doing weird things when I tried to copy some backed-up data from my external EXT3-formatted USB drive back to my new NTFS partition. Apparently something between Ext2IFS and Win7 doesn't like it when you try to copy a few dozen GB of data in 20K files from EXT3 to NTFS over USB. (Actually, now that I write that, it seems less surprising.) The copy would start analyzing/counting the files, and then just die - no error, no nothing. I finally had to just boot from the Ubuntu live CD and copy the data from Linux. Still not sure why that was necessary.

I also had some interesting issues trying to install an SSH server. I initially tried FreeSSHD, which seemed to be the best reviewed free server. The installation was easy and the configuration tool was nice. The only problem was, I couldn't get it to work. And I mean, at all. Oh, sure, the telnet server worked, but not the SSH server. When set to listen on all interfaces, it kept complaining that the interface and/or port was already in use when I tried to start the SSH server. When bound to a specific IP, it gave me a generic access error (literally - the error message said it was a generic error).

After messing around fruitlessly with that for an hour or so, I gave up and switched to the MobaSSH server. This one is based on Cygwin. It's a commercial product with a limited home version and doesn't have quite as nice an admin interface, but it seems to work well enough so far. The one caveat was that I did need to manually open port 22 in the Windows firewall for it to work.
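
For the record, you can also open that port from an elevated command prompt instead of clicking through the firewall UI (the rule name is arbitrary):
netsh advfirewall firewall add rule name="SSH server" dir=in action=allow protocol=TCP localport=22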

The biggest problem so far was with setting up Subversion. Oh, installing SlikSVN was dead simple. The problem was setting up svnserve to run as a service. There were some good instructions in the TortoiseSVN docs, but they only worked on the local host. I could do an svn ls <URL> on the local machine, but when I tried it from my laptop, the connection was denied. So I tried messing with the firewall settings, but to no effect. I even turned off the Windows firewall altogether, but it still didn't work - the connection was still actively denied.

I started looking for alternative explanations when I ran netstat -anp tcp and realized that nothing was listening on port 3690. After a little more Googling, I stumbled onto this page, which gave me my solution. Apparently, the default mode for svnserve on Windows, starting with Vista, is to listen for IPv6 connections. If you want IPv4, you have to explicitly start svnserve with the option --listen-host 0.0.0.0. Adding that to the command line for the svnserve service did the trick.
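
For future reference, the full service definition ends up looking something like this - the SlikSVN path and repository root here are just examples, and if the service already exists you'd use sc config instead of sc create:
rem Register svnserve as a Windows service, listening on IPv4
sc create svnserve binpath= "\"C:\Program Files\SlikSvn\bin\svnserve.exe\" --service -r C:\Repositories --listen-host 0.0.0.0" displayname= "Subversion Server" depend= Tcpip start= auto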

Editing FAT32 file attributes

Here's a quick and useful tactic for dealing with file attributes on FAT32 drives. I got the idea from this post on the Ubuntu blog.

My new MP3 player (which I'll be blogging about when I have more time) uses a FAT32 filesystem. I needed to change the attributes on some of its folders so that it would show the media folders but hide the system folders. Why I needed to do that is another story. Anyway, the point is that there was no obvious way to do this from Linux, and since charging the MP3 player seems to reset these attributes, I didn't want to have to rely on a Windows machine being handy.

After way more Googling than I thought necessary, I discovered that you can do this with good old mtools. The really old-school people in the audience will probably remember them from the days when floppy disks were still in common use. Well, it turns out that they can be used with USB mass storage devices too.

The first step, after installing mtools of course, is to set up a drive letter for your USB device in your ~/.mtoolsrc file. This can be done by adding something like the following:
drive s: file="/dev/sdb1"
mtools_skip_check=1

The first line associates the S: drive letter with the device file for my player. The mtools_skip_check line suppresses errors which, I believe, arise from the fact that this is a USB device, not an actual floppy disk. Either that, or there's something about the FAT that mtools doesn't like, but can still work with.

Once that's set up, I was able to simply use mattrib to change the file attributes and mdir to show the attribute-sensitive directory listing. The actual commands look something like this:
mdir S:
mattrib +h S:/TMP
mattrib -h S:/MUSIC

Note the use of the S: drive letter to prefix paths on the root of the device. The +h and -h flags turn the hidden attribute on and off respectively. Also note that you can have the device mounted while doing this - mtools doesn't need exclusive access as far as I know.

Eventually, I'll be scripting this so that I can (hopefully) run it automatically after charging my player. Ideally, that script would include some HAL or udev magic to detect the dynamically assigned device node and add that to the mtoolsrc file. When I get around to writing that, I'll post the result.
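
In the meantime, the dumb version of that script is short enough to jot down here - a minimal sketch that assumes ~/.mtoolsrc is already set up as above and that the player always shows up at the same device node:
#!/bin/sh
# Re-apply the attributes my MP3 player loses after charging.
mattrib +h S:/TMP     # re-hide the system folder
mattrib -h S:/MUSIC   # make sure the media folder stays visible
mdir S:               # list the result as a sanity check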

PHP IDE mini-review

Tomorrow marks my 2-month anniversary at my new job doing LAMP. And for most of that two months, I've been going back and forth on what editor or IDE to use.

My requirements for a PHP IDE are, I think, not unreasonable. In addition to syntax highlighting (which should be a given for any code editor), I need the following:

  1. Support for editing remote files over SSH. This is non-negotiable.
  2. A PHP parser, preferably with intellisense and code completion.
  3. A file tree browser that supports SSH.
  4. Syntax highlighting, and preferably parsers, for (X)HTML and JavaScript.
  5. Search and replace that supports regular expressions.
  6. Support for an ad hoc, per-file workflow. In other words, I don't want something that is extremely project-centric.
  7. It should be free - preferably as-in-speech, but I'll take as-in-beer if it's really good.

So far, my preferred IDE has been Quanta Plus. It has all of the features I need and also integrates nicely with KDE. It also has a few other nice features, including context-sensitive help (once you install the documentation in the right place). However, the build of Quanta 3.5.6 that came with Kubuntu Feisty is kind of unstable. It crashes on me every few days, and for one project, I actually had to switch to something else because I was making heavy use of regex search and replace, which was consistently crashing Quanta. Also, while Quanta has a PHP parser with some intellisense, it's pretty weak and not in any way comparable to, say, Visual Studio.

My second, heavier-weight choice is ActiveState's free Komodo Edit. This is a very nice XUL-based editor. Its strongest feature is undoubtedly the PHP parser, which is really outstanding. For instance, it can scan pre-determined paths for PHP files and do intellisense for them. It even understands PHPDoc syntax and can add the documentation to the intellisense.

The down side is that, while Komodo does speak SFTP, the file browser tree only does local files. There is a Remote Drive Tree extension that adds this feature, but while it's better than nothing, it still isn't that good. I also don't much care for the look of Komodo or for the keyboard shortcuts. Those things are much easier to customize in Quanta.

After Quanta, my other old stand-by is jEdit. After installing the PHPParser, XML, and FTP plugins, this meets my needs. On the down side, the PHP parser doesn't do any intellisense (although it does detect syntax errors). The interface also feels a little clunky at times, although it's much better than the average Java application and not really any worse than Quanta in that regard.

I took a brief look at a couple of Eclipse setups, but wasn't initially impressed by them. It might be worth looking at them again some time, but the whole process of getting and installing the appropriate plugins just seemed like a lot of trouble. Same goes for Vim. I'm sure I could get it to do most, if not all, of what I want, but it seems like an awful lot of trouble. And then, of course, there's the Zend IDE, which I don't really want to pay for. Besides, one of my co-workers told me that, while it's a decent IDE, the real selling point is the integrated debugging and profiling, which won't work on our setup.

And so my intermittent search goes on. I'm hoping that the upgrade to Kubuntu Gutsy will fix the stability problems in Quanta, since those are my biggest complaint with it. I'm also hoping for some nice new features when KDE 4 comes along. But I guess I'll keep looking in the meantime.

Tools I can't do without: VMware

We all have those few programs we can't do without. For the non-technical user, the list might include Internet Explorer and Outlook Express. For the hard-core geek, it might be Vim/Emacs, GCC, and GDB. As for me, lately I've found that VMware is way up on that list.

VMware Player running Kubuntu 6.10 under Kubuntu 7.04

This is particularly the case when it comes to testing and evaluating software. If it's anything even remotely "big," such as Microsoft Office, or if it's something I'm doing for work or casual research and am not planning to keep installed, I'll just break out a VM, install the software, and then blow the VM away when I'm done. In fact, I keep a directory full of compressed VM images of various pre-configured test setups for just this purpose. When I need to try something out, I decompress one of the images, do my thing, and then delete it when I'm all done. It's kind of like the old-fashioned "take a disk image" approach, only way, way faster. Honestly, VMware makes things so easy, it baffles me that people still bother with physical test machines for testing applications. It's so...1990's.

But VMware is great for regular work too. The performance is quite good, so if you have even a middle-of-the-road system, you can run medium to heavyweight applications in a VM without too much pain. This is especially useful if you happen to be stuck running Windows, because some things are just so much easier to do in Linux, such as getting the tools you need. Of course, virtualization can't beat running natively, but flipping back and forth between your regular desktop and a VM is a lot less cumbersome than dual-booting or having two computers.

Of course, my whole-hearted adoption of virtualization is not without its price. This week I found myself looking up prices on new RAM sticks for my desktop and laptop. The main benefit I envisioned? I could comfortably run more than one instance of VMware! It's the perfect answer to, "What could you possibly do with 4GB of memory?"

If you've never used any virtualization software, you really need to check it out. It's a godsend. I use VMware Player because it's free and available on both Windows and Linux. QEMU is also a fairly nice cross-platform solution. And for the Windows-only crowd, there's always Virtual PC. They might take a little getting used to, but it's well worth the effort.

Learning TDD

It's time to start getting my skills up to date. And not just learning the programming language du jour, either. Learning new languages is nice, but as they say, you can write COBOL in any language. No, it's time to get up to speed on methods and practices.

My current project is to get myself up to speed on test-driven development and unit testing in general. This was prompted in large part by listening to an old episode of Hanselminutes on dynamic vs. compiled languages. Scott made the point that TDD and dynamic languages are (or should be) linked. Since you can't rely on the compiler to catch errors early, you need to replace that with continuous unit testing.

That made sense to me. I'm a fan of strict languages and static analysis. In fact, I'm one of the last 9 guys on Earth who thinks Ada is a really great programming language. (Note: That is a made-up number. The actual number is 14.) Since you don't have that safety net with dynamic languages, you need to find something else that can pick up the slack. Since unit testing is good for lots of other things as well (like detecting regressions), it seemed like something that was worth seriously taking up.

I've started off my journey by playing with the Simple Test unit testing framework for PHP. I came across it by chance and decided to go with it because the site had some helpful introductory articles on unit testing. I have a C# project starting at work this week, so I'll be looking at NUnit soon as well.

At the moment, I'm still trying to wrap my brain around the way TDD works. Conceptually, it's not that complicated. It's just that it's a more bottom-up approach than I'm used to. When you write tests first, it seems like you almost have to start by writing the basic building blocks and work your way up to the more complicated methods. However, I tend to work in reverse, starting with the "meatier" methods first and writing the supporting methods as I figure out what I need. But maybe that's not the best way to work. I don't know. I've still got a lot of reading to do.