On the state of my Vim-as-IDE

Last year I posted about adopting Vim as my default editor.  I've been leveling up my Vim knowledge since then, so I thought I'd write a summary of the current state of my ad hoc Vim IDE configuration.

First, let's start with some sources of information and inspiration.  I found Ben McCormick's Learning Vim in 2014 series to be very useful.  The Vim as Language post in particular offers an eye-opening perspective on the beauty of using Vim.  I've also been working my way through Steve Losh's Learn Vimscript the Hard Way, which is a great source for information on customizing your Vim configuration.  And if you want a little inspiration to give Vim a try, here is Roy Osherove's "Vim for Victory" talk from GOTO 2013.

So how has my Vim adoption gone?  Pretty well, actually.  When I look back at my original post on looking for a new editor, it's pretty clear that a sufficiently customized Vim meets all my criteria.  However, to be fair, it did take a while to get it sufficiently customized.  The customization I've done wasn't actually that hard, but it took some time to figure out what I needed, what I could do, and then Google how to do it.  But paradoxically, that's one of the strengths of Vim - it's been around long enough that pretty much everything that you might want to do either has a plugin available or has been documented someplace on the web, so you rarely need to write any original code.

My Plugins

These days there are actually quite a few plugin managers available for Vim.  The nice thing about this is that they all support the same plugin format, i.e. GitHub repositories laid out in the standard ~/.vim directory format.  I'm currently using Plug because it provides an easy mechanism for selective or deferred plugin loading (in the case where you have a plugin that's not always needed and slows down Vim startup).
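
To give an idea of what that looks like, here's a minimal sketch of the Plug section of a vimrc, including the on-demand loading that sold me on it.  The plugin choices and options below are just illustrations, not a copy of my actual config:

    " Plugins are declared between plug#begin() and plug#end().
    call plug#begin('~/.vim/plugged')

    " Always loaded at startup.
    Plug 'vim-airline/vim-airline'

    " Loaded on demand: NERDTree isn't loaded until the first time it's
    " toggled, and ack.vim isn't loaded until the :Ack command is run.
    Plug 'scrooloose/nerdtree', { 'on': 'NERDTreeToggle' }
    Plug 'mileszs/ack.vim', { 'on': 'Ack' }

    call plug#end()

Once the plugins are declared, :PlugInstall fetches them and :PlugUpdate keeps them current.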

Here are some of the goodies my Plug plugin list currently contains:

  • scrooloose/nerdtree - A nice file explorer plugin that provides some enhancements to Vim's native file browser.  Pretty much a must-have for every Vim setup.
  • ctrlpvim/ctrlp.vim - A fuzzy file finder that works on buffers and other stuff too.  Just press ctrl+p and start typing the name you want.
  • jlanzarotta/bufexplorer - A handy plugin to list and switch between the current buffers.  Think of it as the tab strip at the top of other editors, but easier to deal with from the keyboard.
  • tpope/vim-dispatch - A nice plugin for running external programs asynchronously.  By default, external command execution blocks the rest of the UI until the command is done.  This is fine sometimes, but not others.  Dispatch integrates with other plugins and provides a way to run things in the background and get their output back into Vim.
  • tpope/vim-surround - Provides a Vim movement that helps you manipulate surrounding entities.  Good for things like changing quotes, HTML tags, etc.
  • Chiel92/vim-autoformat - Provides an interface to various code formatters.  I use it as a replacement for the JSON beautifying feature that I loved so much in Komodo (there's a sketch of how that can be wired up after this list).
  • mileszs/ack.vim - A quick and easy integration of the ack text search tool.  Like the built-in grep, but better.
  • joonty/vim-sauce - Sauce is a handy little plugin for managing multiple configuration files.  It's also useful for adding the concept of a "project" to Vim.  I use it to create project-specific configurations that handle all the customization that would be done in the project file of a regular IDE.
  • janko-m/vim-test - A unit test runner plugin that handles many different tools.
  • vim-airline/vim-airline - An enhanced status line that's both pretty and displays some useful information.
  • w0rp/ale - The Asynchronous Lint Engine, which offers syntax and style checking with inline error notifications, just like in a regular IDE.
  • majutsushi/tagbar - A tool for displaying the tags in the current file, similar to the structure browsers found in IDEs.
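
As an example of the sort of glue that takes, here's roughly how the JSON beautifying mentioned above can be wired up with vim-autoformat.  This is just a sketch: it assumes Python's json.tool is available on the PATH, and the mapping is my own invention rather than a standard binding:

    " Define a custom formatter that pipes the buffer through json.tool,
    " and tell vim-autoformat to use it for JSON files.
    let g:formatdef_jsontool = '"python -m json.tool"'
    let g:formatters_json = ['jsontool']

    " Run whatever formatter applies to the current file type.
    nnoremap <leader>af :Autoformat<CR>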

Needless to say, I also made a number of other customizations to my Vim configuration.  My full work-in-progress Vim configuration is in this repo if you're interested.  I do not hold this up as a great example of how to configure Vim, but it's working for me so far and, as previously noted, it actually wasn't all that much work.

The IDE Functionality

So what functionality do I have with this setup?  Well, it turns out I actually get most of what I previously had with Komodo.  Of course, I need to integrate with a few external packages for this, the key ones being Exuberant Ctags, which indexes identifiers in the code, and ack for text search.  I also need various external formatters and linters, though the specific programs depend on what language I'm coding in.  Nothing fancy, though - they're pretty much all command-line executables that you can just drop someplace in your path.
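
The vimrc side of that integration is pretty minimal.  As a rough sketch (the paths and flags here are sensible defaults, not necessarily my exact settings), the tags file gets generated outside Vim by running ctags -R at the project root, and then:

    " Look for a tags file in the directory of the current file and then
    " upward toward the root (the trailing semicolon enables the upward search).
    set tags=./tags;,tags

    " Point ack.vim at the ack executable; this assumes ack is on the PATH.
    let g:ackprg = 'ack -H --nocolor --nogroup --column'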

So here's what I get for my trouble:

  • Insanely powerful key bindings.  I mean seriously powerful - I don't think there's anything you can do in Vim that can't be bound to a keystroke.  And it's usually pretty easy.  Just the other week I defined a couple of ad hoc key bindings to help me add translation keys to a web template (there's a sketch of the general idea after this list).  It's really a beautiful thing.
  • Inline syntax and style checking.  Using ALE in conjunction with the appropriate external linters, I get the same kind of inline checking I can get in PHPStorm.
  • Navigating identifiers.  Using Vim's ctags support, it's possible to navigate to the uses and definitions of a particular identifier, much like the code browsing abilities of PHPStorm.  Of course, it's not perfect because ctags lack knowledge of the context of the identifier, but it's not bad.  (And to be fair, I've seen the code navigation in PHPStorm and Komodo fall over on more than one occasion.)
  • Searching.  Between CtrlP and Ack, I have some nice facilities for searching for or within files.  Again, very similar to what I had in Komodo or PHPStorm.
  • Project management.  Between NERDTree and Sauce, I have some decent support for the concept of a project.  They give me a nice file tree navigation panel and the ability to define project-specific configuration.
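
To give a flavor of those ad hoc key bindings, here's the general shape of the translation-key mapping mentioned above.  It's a hypothetical reconstruction - the trans() call and the template syntax are made up for illustration - but it shows the technique:

    " <leader>tk: wrap the word under the cursor in a translation call,
    " e.g. welcome_message becomes {{ trans('welcome_message') }}.
    " ciw deletes the word into the unnamed register and <C-r>" pastes it back.
    nnoremap <leader>tk ciw{{ trans('<C-r>"') }}<Esc>

A throwaway mapping like that takes a minute to write and saves a lot of tedious editing, which is exactly the point.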

Conclusion

The short version is that this Vim experiment is going pretty well.  Granted, it is somewhat more work than figuring out a traditional IDE.  But on the other hand, it's not that bad and actually isn't as much work as I thought it would be.

In terms of functionality, I find that I haven't actually given up very much.  In fact, if you're talking about multi-language IDEs, I'm not even sure I've given up anything I care about.  It turns out that Vim is remarkably capable and the plugin ecosystem is very large and deep.

Would I recommend this to someone who's never touched Vim before?  Probably not.  But if you're familiar with Vim and interested in trying a new way of working, it might be worth a shot.  At worst, you'll improve your Vim skills and enjoy using a nice, fast editor that doesn't eat up half a gigabyte of RAM when it's just sitting there doing nothing.

The dangers of using old stuff

I was reminded the other day of the dangers of using old software.  And by "old" I mean, "hasn't been updated in a couple of years".  So not really that old, just not new.

For my personal projects, I use the open-source version of a program called The Bug Genie.  It's a web-based bug/issue tracker written in PHP.  I picked it mostly because it was easy to install on my hosting account (which didn't allow SSH access at the time) and sucked less than the alternatives I had tried.  It's actually not a bad program - a little confusing to administer, but it has a decent feature set and UI.

The problem is that the last "official release" of the open-source version was in 2015.  In and of itself, this is not a problem - there are lots of super-useful programs out there that haven't been updated in far longer than that.  The problem is that this is a web application and the web, as an ecosystem, is not at all shy about breaking your stuff if you go more than six months without updating it.

So I tried to log into my Bug Genie instance the other day and ran into two issues.  The first, and most serious, was a fatal error saying that a parameter to the count() function must be either an array or an object that implements Countable.  After a little debugging, it turned out that this was due to a change in PHP 7.2, to which my web host had recently upgraded.  One of the Composer packages had a bug that caused it to sometimes pass an inappropriate parameter to count(); older versions of PHP let this slide silently, but PHP 7.2 raises a warning, which in this case surfaced as that fatal error.  The fix was simply to update to a more recent version of the package that fixes the underlying bug.

The second issue was client-side.  The project pages have a number of panels on them that are loaded via AJAX calls, and none of them were loading.  Turned out the problem was that a JavaScript file related to the Mozilla Persona support wasn't loading and this was causing subsequent scripts on the page to fail.  A quick search revealed that there was a good reason it wasn't loading - Mozilla discontinued its Persona service in 2016, so the URL the page was trying to load no longer existed.  Fortunately, this was easily fixed by turning off that feature.

So we have two things broken by the passage of time.  One a change in platform semantics and the other a change in the surrounding ecosystem.  Both of them theoretically foreseeable.  But on the other hand, both also very easy to overlook.

For a software developer, this is a parable on building for longevity.  There are dangers in relying on external dependencies.  There are dangers in being even slightly out of spec.  If we want our software to last, we need to be vigilant in our validation and establish boundaries.  You can't trust the testing of today to reflect the world of tomorrow.

New backup solution

Backups are important.  Everybody in IT knows this.  After all, whenever someone tells you they lost some important file, the first question is always, "Well do you have a backup?"

Nevertheless, many of us are lax in setting up our own backup solutions.  I know I was for many years.  And while that might not be forgivable for people who really should know better, it is understandable.  After all, implementing a proper backup solution is a lot of work and the benefits are not immediately obvious.  Think about it: it requires an ongoing investment of both time and money which, in the best-case scenario (i.e. nothing goes wrong), will never pay off.  Is it any wonder that even IT professionals aren't eager to put a lot of effort into something they actively hope never to need?

But about a year ago, I finally put a home backup solution in place.  That solution was CrashPlan.  But now CrashPlan is discontinuing their home plans, which means I've been forced to figure out something else.

The old setup

CrashPlan's "home" plan was actually pretty nice.  Perhaps too nice, since apparently it wasn't profitable enough for them to keep offering it.  The subscription was priced at $150 per year and that included up to ten computers in your household and unlimited cloud storage.  On top of that, their agent software supported not only backing up to the cloud, but also backing up to external hard drives and other computers running the CrashPlan agent.  And it ran on Windows, Linux, and Mac!

Currently, I have three computers in active use in my house: my laptop, my wife's laptop, and my desktop, which is really more of a home server at this point.  I had both of the laptops backing up to the cloud, while the desktop backed up to both the cloud and an external hard drive.  I've got close to a terabyte of data on the desktop, so the external drive is important.  I do want a copy of that data off-site just in case of a catastrophe, but I'd rather not have to suck that much data down from the cloud if I can avoid it.

I was happy with this setup and wanted to keep something equivalent to it.  It gave me sufficient protection and flexibility without requiring me to buy extra hardware or pay for more cloud storage than I need.

The alternatives

Turns out this setup isn't quite as easy to replicate as I had hoped.  There are plenty of home backup services out there, but most of them don't offer the same range of options.  For instance, many are Mac and Windows only - no Linux.  Many offer limited storage - usually a terabyte or less, which I'm already pushing with my largest system.  Many are cloud backup only - no local drives.  And when you add up the cost of three systems, most of them are more expensive than CrashPlan Home was.

On their transition page, CrashPlan recommends that existing home customers either upgrade to their small business plan or switch over to Carbonite.  I briefly considered upgrading to the small business plan, but the price is $10/month/device, so for my three systems I'd go from $150/year to $360/year for basically the same service.  That doesn't sound like a great deal to me.

Carbonite, on the other hand, is one of the options I considered the first time around, when I settled on CrashPlan.  They're offering a 50% discount to CrashPlan customers, so the initial price would only be $90.  Presumably that's only for the first year, but even after that, $180/year is only slightly more than CrashPlan.  However, from what I can see, Carbonite doesn't support Linux on their home plan - they only do Linux servers on their office plan.  I also don't see an option to back up to an external drive, although they do support backing up external drives to the cloud...for another $40/year.

Plan B

After doing some research and thinking about it for a while, I eventually decided to skip the all-in-one services.  Yeah, they're nice and require almost zero work on my part, but I wanted some more flexibility and didn't want to pay an arm and a leg for it.  However, I didn't want to completely roll my own solution.  Writing simple backup scripts is easy, but writing good backup scripts with proper retention, special file handling, logging, notifications, etc. is a lot of work.

If you don't want a full-service backup solution, the alternative is to go a la carte and get a backup program and separate cloud storage provider.  There are a number of options available for both, but after some research and experiments, I decided to go with CloudBerry Backup for my backup program and Backblaze B2 as my storage provider.

The choice of Backblaze B2 was kind of a no-brainer.  Since this is long-term backup storage, performance is not a huge concern - it was mostly a question of capacity and price.  B2 has standard "pay for what you use" billing and the prices are extremely reasonable.  Currently, storage is $0.005 per gigabyte per month and there's no charge for uploads.  So for the roughly one terabyte of data I have, that works out to about $5 or $6 per month (1,000 GB at $0.005/GB is $5), which is pretty cheap.

The backup program was a different story.  I tried out several options before settling on CloudBerry.  Most of the options I tried were...not user friendly.  For me, the CloudBerry UI had the right balance of control and ease of use.  Some of the other solutions I tried were either too arcane or simplified things too much to do what I wanted.  CloudBerry uses a wizard-based configuration approach that makes it relatively painless to figure out each step of your backup or restore plan.  I find that this allows them to expose all the available options without overwhelming the user.

As far as capabilities, CloudBerry pretty much has what I wanted.  It supports Windows, Mac, and Linux, and can back up to multiple storage targets, including local file systems, network file systems, and various cloud providers.  Beyond that, the feature set depends on the version you use.  There's a free version, but I went with the paid desktop version because it supports compression and encryption.  The licenses are per-system and they're a one-time charge, not a subscription.  The basic "home" licenses currently run $50 for Windows and $30 for Linux, which I think is pretty reasonable.

Results

So far, the combination of CloudBerry and B2 for my backup solution is working well.  I've been using it for about five months and have all three of my systems backing up to the cloud and my desktop also backing up to a USB hard drive.  The process was largely painless, but there were a few bumps along the way.

As I mentioned in a previous post, as part of this process I moved my desktop from Windows to Linux.  As it turns out, setting up the two laptops to have CloudBerry back up to B2 was completely painless.  Literally the only annoyance I had in that process was that it took quite a while for the confirmation e-mail that contained my license key to arrive.  So if you're a Windows user, I can recommend CloudBerry without reservation.

Linux wasn't quite as simple, though.  I had a number of problems with the initial setup.  The interface is very stripped-down compared to the Windows version and doesn't offer all of the same options.  I also had problems getting the backups to run correctly - they were repeatedly stalling and hanging.  Fortunately, the paid license comes with a year of support and I found the CloudBerry support people to be very helpful.  In addition, they seem to be actively working on the Linux version.  I initially installed version 2.1.0 and now they're up to 2.4.1.  All of the issues I had have been resolved by upgrading to the newer versions, so things are working well now.

I had initially been a little concerned about the per-gigabyte and per-transaction pricing, but so far it hasn't been an issue.  I found Backblaze's storage cost calculator to be pretty accurate and the per-transaction charges are not significant.  The cost has been basically what I initially estimated and I haven't had any surprises.

Overall, I'm very happy with this solution.  The price is reasonable and, more importantly, it provides me with lots of flexibility.  Hopefully I'll be able to keep this solution in place for years to come.

Database URIs are a pain

Note to self: When using SQLite with SQLAlchemy, the URI has three slashes after the colon, and that's not counting the leading slash of an absolute path.

I only post this because I've forgotten this at least three times and had to spend way too much time figuring it out.  I have a project that has a test SQLite database in the local project directory with a URI of sqlite:///../file.db.  And that's fine.  But then I forget and try to change it to an absolute path and can't figure out why sqlite:///path/to/file.db doesn't work.  But of course that's wrong: it's sqlite:////path/to/file.db with four slashes at the beginning - the last one being part of the actual path.  Hopefully this time I won't forget.

New SSD for me!

I've been going back and forth on getting a new laptop for a few months.  My Lenovo IdeaPad U310 is about four years old and it's been starting to drag.  However, while it was never top-of-the-line, the specs are still decent compared to what I could buy for a reasonable price (translation: "a price I wouldn't feel uncomfortable communicating to my spouse").  So eventually I settled on just upgrading the spinning disk to a solid state drive.

I was slightly apprehensive about this choice.  I've upgraded laptops before, but the IdeaPad officially has "no user-serviceable parts", so it clearly wasn't going to be as easy as the last time.  I found a good guide, but the short version is that replacing the drive was simple.  The hard part was getting the stupid case open.

The difference between the SSD and the rotational disk is actually pretty dramatic.  I knew there would be a big performance difference, but it's interesting to see just how big it is on otherwise identical hardware.  Boot time went from "forever" to 15 or 20 seconds.  Running backups and other disk-intensive activity no longer grinds the entire system to a halt.  It's fantastic!  Makes me wish I'd sprung the extra cash for an SSD when I originally bought the system.