A different take on password managers

I've posted several entries in the past about the benefits of password managers.  Well, I just read a very interesting article that also detailed some of the risks of password managers.

The article is by a security professional who offers a very good take on some aspects of password management that I've rarely considered.  For instance, using a password manager can very much entail putting all your eggs in one basket.  What if you get malware that steals your master password?  What if you forget the password?  That might seem far-fetched, but you never know - you could hit your head, have a stroke, or any number of things.  So in addition to security, there are things like recovery strategy to consider.

While I've been guilty of making blanket statements that "everybody should use a password manager," I now see that that's a mistake.  I still believe password managers are a very good fit for many, if not most, people, but the advice needs a more nuanced assessment.  Depending on your risk profile and tolerance, you might not want to put all your eggs in one basket.  You might want to avoid password managers altogether, use them only for low-value accounts, or perhaps use multiple password vaults to partition things by importance.

The point is that security is not a one-size-fits-all thing.  There are lots of use-cases and it's important not to get stuck in thinking that yours is the only one or even the most common or important one.  Consider the situation and the trade-offs involved before making a decision or recommending a course of action to others.

Yup, password expiration is dumb

I posted an entry a while back about passwords.  Well, turns out that Microsoft is no longer recommending rotating your passwords and is removing the password expiration policies from Windows.

So yup, I was right.

But really, what was the point of always changing your passwords anyway?  Well, I guess it does mitigate the possibility that a re-used password has been compromised somewhere else.  But what's the cost?  Users can never remember their passwords.  And since coming up with a good, memorable password is hard, they're not going to put in the effort. Instead, they'll come up with some pattern that meets the minimum complexity requirements and just cycle through it (think "Spring2019!", "Summer2019!", and so on).

Is this better?  I'm no expert, but it's not clear to me that it is.  If you want extra protection, there are better ways to get it.  For example, setting up two-factor authentication.  That's also a bit of a pain for users, but at least it provides more real protection.

And on a semi-related side-note, I'd like to point out that KeepassAndroid now integrates with Firefox Mobile.  I've mentioned this app in some previous posts about KeePass.  I do a lot of my non-work web browsing on my phone these days, so this app is really a necessity for me.  This new integration is really nice because it means I can now get the same user experience on both my laptop and my phone.  I just tap on the login box and get prompted to auto-fill the form.  Perfect!

At least it's a resume builder

Note: This is a short entry that's been sitting in my drafts folder since March 2010, i.e. from half a career ago. My "new" job at the time was with an online advertising startup. It was my first and only early-stage startup experience. In retrospect, it was useful because it exposed me to a lot of new things - not only technology, but also people, processes, and ways of approaching software development (looking at things from the QA perspective was particularly eye-opening). It was not, however, enjoyable. I've also worked for later-stage startups and found that much more enjoyable. Sure, you don't get nearly as much equity when you come in later, but there's also less craziness. (And let's face it, most of the time the stock options never end up being worth anything anyway.)

Wow. I haven't posted anything in almost six months. I'm slacking. (Note: If only I'd known then that I wouldn't publish this for nine years...)

Actually, I've been kind of busy with work. I will have been at the "new" job for a year next month. The first six months I was doing the QA work, which was actually kind of interesting, as I'd never done that before. I did some functional test automation, got pretty familiar with Selenium and PHPUnit, got some exposure to an actual organized development process. Not bad, overall.

On the down side, the last six months have been a bit more of a cluster file system check, if you get my meaning. Lots of overtime, throwing out half our existing code-base, etc. On the up side, I've officially moved over to development and we're using Flash and FLEX for our new product, which are new to me.

The good part: FLEX is actually not a bad framework. It's got its quirks, but it's pretty powerful and, if nothing else, it beats the pants off of developing UIs in HTML and JavaScript. And while it's not my favorite language in the world, ActionScript 3 isn't bad either. It's strongly typed, object oriented, and generally fairly clean.

The bad part: Flash is not so nice. It wasn't quite what I was expecting. I guess I assumed that "Flash" was just a design environment for ActionScript programming. Actually, it's more of an animation package that happens to have a programming language bolted onto it. The worst part is that our product requires that we do the Flash portion in ActionScript 2, which seriously sucks. I mean, I feel like I'm back in 1989. And the code editor in Flash CS4 is...extremely minimal. As in slightly less crappy than Windows Notepad. I am seriously not enjoying the Flash part.

(Note: On the up side, none of this matters anymore because Flash is now officially dead.)

Running my own calendar server

Note: This is an article I started in October of 2012 and never finished. Fortunately, my feelings on the issue haven't changed significantly. So I filled it out into a real entry. Enjoy!

As I alluded to in a (not so) recent entry on switching RSS readers, I'm anti-cloud.

Of course, that's a little ambiguous. The fact is, "cloud" doesn't really mean anything anymore. It's pretty much come to refer to "doing stuff on somebody else's server." So these days we refer to "having your e-mail in the cloud" rather than "using a third-party webmail service," like we did 15 years ago. But really it's exactly the same thing - despite all the bells and whistles, GMail is not fundamentally different than the Lycos webmail account I used in 1999. It still amounts to relying entirely on some third-party's services for your e-mail needs.

And if the truth were known, I'm not even really against "the cloud" per se. I have no real objection to, say, hosting a site on an Amazon EC2 or Windows Azure instance that I'm paying for. It's really just the "public cloud" that bothers me. You know, all those "cloud services" that companies offer for free - things like GMail and Google Calendar spring to mind.

And it's not even that I object to using these services. It's just that I don't want to rely on them for anything I deem at all important. This is mostly because of the often-overlooked fact that users have no control over these services. The providers can literally cut you off at a moment's notice and there's not a thing you can do about it. With a paid service, you at least have some leverage - maybe not much, but they generally at least owe you some warning.

There are, of course, innumerable examples of this. The most recent one for me is Amazon Music. They used to offer a hosting service where you could upload your personal MP3 files to the Amazon cloud and listen to them through their service. I kinda liked Amazon Music, so I was considering doing that. Then they terminated that service. So now I use Plex and/or Subsonic to listen to my music straight from my own server, thank you very much!

As a result, I have my own implementation of a lot of stuff. This includes running my own calendar server. This is a project that has had a few incarnations, but that I've always felt was important for me. Your calendar is a window into your everyday life, a record of every important event you have. Do you really want to trust it to some third party? Especially one that basically makes its money by creating a detailed profile of everything you do so that it can better serve ads to you? (I think we all know who I'm thinking of here....)

For several years I used a simple roll-your-own CalDAV server using SabreDAV. That worked fine, but it was quite simple and I needed a second application to provide a web-based calendar (basically a web-based CalDAV client). So I decided to switch to something a little more full-featured and easier to manage.
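For the curious, the SabreDAV route really was minimal. The whole server boils down to something like this - a sketch in the style of SabreDAV's documented PDO backends, not my actual old code; the database path is made up and authentication is left out:

    <?php
    // server.php - a bare-bones CalDAV endpoint using SabreDAV's PDO backends.
    require 'vendor/autoload.php';

    $pdo = new PDO('sqlite:/var/data/calendars.sqlite');  // hypothetical location
    $pdo->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

    $principalBackend = new Sabre\DAVACL\PrincipalBackend\PDO($pdo);
    $calendarBackend  = new Sabre\CalDAV\Backend\PDO($pdo);

    $server = new Sabre\DAV\Server([
        new Sabre\CalDAV\Principal\Collection($principalBackend),
        new Sabre\CalDAV\CalendarRoot($principalBackend, $calendarBackend),
    ]);
    $server->setBaseUri('/server.php');
    $server->addPlugin(new Sabre\CalDAV\Plugin());       // the actual CalDAV logic
    $server->addPlugin(new Sabre\DAV\Browser\Plugin());  // handy HTML browser UI
    $server->exec();

That gets you the protocol, but as I said, no web UI to actually look at your calendar - hence the need for a second application.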

So these days, I just run my own OwnCloud instance. At its core, OwnCloud is basically a WebDAV server with a nice UI on top of it. In addition to nice file sync-and-share support, it gives me web-based calendar and contact apps with support for CalDAV and CardDAV, respectively. It also has the ability to install additional apps to provide more features, such as an image gallery, music players, and note-taking apps. Most of the more impressive apps are for the enterprise version only, or require third-party services or additional servers, but all I really wanted was calendar and contact support.

To get the full experience, I also use the OwnCloud apps on my laptop and phone to sync important personal files, as well as the DAVx5 app on my phone to synchronize the Android calendar and contacts database with my server. Overall, it works pretty well and doesn't really require much maintenance. And most important, I don't have to depend on Google or Amazon for a service that might get canned tomorrow.

LnBlog Refactoring Step 2: Adding Webmention Support

About a year and a half ago, I wrote an entry about the first step in my refactoring of LnBlog.  Well, that's still a thing that I work on from time to time, so I thought I might as well write a post on the latest round of changes.  As you've probably figured out, progress on this particular project is, of necessity, slow and extremely irregular, but that's an interesting challenge in and of itself.

Feature Addition: Webmention

For this second step, I didn't so much refactor as add a feature.  This particular feature has been on my list for a while and I figured it was finally time to implement it.  That feature is webmention support.  This is the newer generation of blog notification, similar to Trackback (which I don't think anyone uses anymore) and Pingback.  So, basically, it's just a way of notifying another blog that you linked to them and vice versa.  LnBlog already supported the two older versions, so I thought it made sense to add the new one.
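To give a sense of how simple Webmention is: once you've found the target's endpoint, sending a mention is just a form-encoded POST with two fields. A minimal sketch (the URLs here are made up):

    <?php
    // Per the W3C spec, a webmention is a form POST containing the URL of
    // your page (source) and the URL of the page you linked to (target).
    $endpoint = 'https://other-blog.example/webmention';  // discovered from the target page
    $ch = curl_init($endpoint);
    curl_setopt_array($ch, [
        CURLOPT_POST => true,
        CURLOPT_POSTFIELDS => http_build_query([
            'source' => 'https://my-blog.example/entries/some-post',
            'target' => 'https://other-blog.example/the-post-i-linked-to',
        ]),
        CURLOPT_RETURNTRANSFER => true,
    ]);
    curl_exec($ch);
    // Any 2xx status means the receiver accepted the mention (202 = queued
    // for asynchronous processing).
    $status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);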

One of the nice things about Webmention is that it actually has a formal specification that's published as a W3C recommendation.  So unlike some of the older "standards" that were around when I first implemented LnBlog, this one is actually official, well structured, and well thought out.  So that makes things slightly easier.

Unlike the last post, I didn't follow any formal process or do any time tracking for this addition.  In retrospect I kind of wish I had, but the work happened in fits and starts, and I didn't think about tracking until it was too late to matter.  Nevertheless, I'll try to break down some of my process and results.

Step 1: Analysis

The first step, naturally, was analyzing the work to be done, i.e. reading the spec.  The webmention protocol isn't particularly complicated, but like all specification documents, it looks much more complicated once you put all the edge cases and optional portions together.

I looked at the spec several times before deciding to actually implement it.  Since my time for this project is limited and only available sporadically, I was a little intimidated by the unexpected length of the spec.  When you have maybe an hour a day to work on a piece of code, it's difficult to get into any kind of flow state, so large changes that require extended concentration are pretty much off the table.

So how do we address this?  How do you build something when you don't have enough time to get the whole thing in your head at once?

Step 2: Design

Answer: you document it.  You figure out a piece and write down what you figured out.  Then the next time you're able to work on it, you can read that and pick up where you left off.  Some people call this "design".

I ended up reading through the spec over several days and eventually putting together UML diagrams to help me understand the flow.  There were two flows, sending and receiving, so I made one diagram for each, which spelled out the various validations and error conditions that were described in the spec.

[Diagram: workflow for sending webmentions]
[Diagram: workflow for receiving webmentions]

That was really all I needed as far as design for implementing the webmention protocol.  It's pretty straight-forward and I made the diagrams detailed enough that I could work directly from them.  The only real consideration left was where to fit the webmention implementation into the code.

My initial thought was to model a webmention as a new class, i.e. to have a Webmention class to complement the currently existing TrackBack and Pingback classes.  In fact, this seemed like the obvious implementation given the code I was working with.  However, when I started to look at it, it became clear that the only real difference between Pingbacks and Webmentions is the communication protocol.  It's the same data and roughly the same workflow and use-case.  It's just that Pingback goes over XML-RPC and Webmention uses plain-old HTTP form posting.  It didn't really make sense to have a different object class for what is essentially the same thing, so I ended up re-using the existing Pingback class and simply adding a "webmention" flag for reference.
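Since the data and workflow are the same, the visible difference is mostly in endpoint discovery: Pingback advertises its endpoint with an X-Pingback header or a <link rel="pingback">, while Webmention uses an HTTP Link header or a rel="webmention" link in the page.  Here's a simplified sketch of the detection (regex-based for brevity - the spec really wants proper HTML parsing, and rel/href can appear in either order):

    <?php
    // Fetch the target page and check for a Webmention or Pingback endpoint.
    // Illustrative only; a real implementation should parse the HTML properly.
    function detectEndpoint(string $targetUrl): ?array {
        $ch = curl_init($targetUrl);
        curl_setopt_array($ch, [CURLOPT_RETURNTRANSFER => true, CURLOPT_HEADER => true]);
        $response = curl_exec($ch);
        $headerSize = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
        curl_close($ch);
        $headers = substr($response, 0, $headerSize);
        $body = substr($response, $headerSize);

        // Webmention: Link header or <link>/<a> element with rel="webmention"
        if (preg_match('/^Link:\s*<([^>]*)>\s*;\s*rel="?webmention"?/mi', $headers, $m)) {
            return ['webmention', $m[1]];
        }
        if (preg_match('/<(?:link|a)\b[^>]*rel=["\']webmention["\'][^>]*href=["\']([^"\']*)["\']/i', $body, $m)) {
            return ['webmention', $m[1]];
        }
        // Pingback: X-Pingback header (there's also a <link rel="pingback"> form)
        if (preg_match('/^X-Pingback:\s*(\S+)/mi', $headers, $m)) {
            return ['pingback', $m[1]];
        }
        return null;
    }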

Step 3: Implementation

One of the nice things about having a clear spec is that it makes it really easy to do test-driven development because the spec practically writes half your test cases for you.  Of course, there are always additional things to consider and test for, but it still makes things simpler.
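For instance, the receiver rules in the spec (e.g. "the source document must actually contain a link to the target") translate almost directly into unit tests.  Something like this, using the SocialWebServer class I'll get to below - the test double and exception names here are made up for illustration:

    <?php
    use PHPUnit\Framework\TestCase;

    // Illustrative supporting type - the real code's names may differ.
    interface HttpClient { public function fetch(string $url): string; }

    class SocialWebServerTest extends TestCase {
        public function testRejectsMentionWhenSourceDoesNotLinkToTarget() {
            // The source page's HTML contains no link to the target...
            $http = $this->createMock(HttpClient::class);
            $http->method('fetch')->willReturn('<html><body>No link here.</body></html>');

            $server = new SocialWebServer($http);

            // ...so per the spec, the mention must be rejected.
            $this->expectException(InvalidMentionException::class);
            $server->addWebmention(
                'https://spammer.example/fake-source',
                'https://my-blog.example/entries/some-post'
            );
        }
    }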

The big challenge was really how to fit webmentions into the existing application structure.  As I mentioned above, I'd already reached the conclusion that creating a new domain object for webmentions was a waste of time.  But what about the rest of it?  Where should the logic for sending them go?  Or receiving?  And how should sending webmentions play with sending pingbacks?

The first point of reference was the pingback implementation.  The old implementation for sending pingbacks lived directly in the domain classes.  So a blog entry would scan itself for links, create a pingback object for each one, ask the pingback whether its URI supported pingbacks, and then send the pingback request itself.  (Yes, this is confusing.  No, I don't remember why I wrote it that way.)  As for receiving pingbacks, that lived entirely in the XML-RPC endpoint.  Obviously none of this was a good example to imitate.

The most obvious solution here was to encapsulate this stuff in its own class, so I created a SocialWebClient class to do that.  Since pingback and webmention are so similar, it made sense to just have one class to handle both of them.  After all, the only real difference in sending them was the message protocol.  The SocialWebClient has a single method, sendReplies(), which takes an entry, scans its links, and for each one detects whether the URI supports pingback or webmention and sends the appropriate one (or a webmention if it supports both).  Similarly, I created a SocialWebServer class for receiving webmentions, with an addWebmention() method that is called by an endpoint to save incoming mentions.  I had originally hoped to roll the pingback implementation into that as well, but the XML-RPC plumbing made that slightly inconvenient, so I ended up pushing it off until later.
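In outline, the result looks something like this - a simplified sketch of the design, not the actual LnBlog source; the private helper methods are illustrative:

    <?php
    // Simplified shape of the two classes. sendReplies() and addWebmention()
    // are the method names described above; the helpers are illustrative.
    class SocialWebClient {
        public function sendReplies(BlogEntry $entry): void {
            foreach ($this->extractLinks($entry) as $url) {
                // Prefer webmention when the target supports both protocols.
                switch ($this->detectProtocol($url)) {
                    case 'webmention':
                        $this->sendWebmention($entry->permalink(), $url);
                        break;
                    case 'pingback':
                        $this->sendPingback($entry->permalink(), $url);
                        break;
                }
            }
        }
        // ... extractLinks(), detectProtocol(), sendWebmention(), sendPingback() ...
    }

    class SocialWebServer {
        // Called by the HTTP endpoint with the form-POSTed source and target.
        public function addWebmention(string $source, string $target): void {
            // Validate per the spec (fetch $source, confirm it links to $target),
            // then store the mention as a Pingback object flagged as a webmention.
        }
    }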

Results

As I mentioned, I didn't track the amount of time I spent on this task.  However, I can retroactively calculate how much code was involved.  Here's the lines-of-code summary as reported by Process Dashboard:

Base:  8057
Deleted:  216
Modified:  60
Added:  890
Added & Modified:  950
Total:  8731

For those who aren't familiar, the "base" value is the lines of code in the affected files before the changes, while the "total" is the total number of lines in affected files after the changes.  The magic number here is "Added & Modified", which is essentially the "new" code.  So all in all, I wrote about a thousand lines for a net increase of about 700 lines.

Most of this was in the new files, as reported by Process Dashboard below.  I'll spare you the 31 files that contained assorted lesser changes (many related to fixing unrelated issues), since none of them had even 100 lines changed.

Files Added: Total
lib\EntryMapper.class.php 27
lib\HttpResponse.class.php 60
lib\SocialWebClient.class.php 237
lib\SocialWebServer.class.php 75
tests\unit\publisher\SocialWebNotificationTest.php 184
tests\unit\SocialWebServerTest.php 131

It's helpful to note that of the 714 lines added here, slightly less than half (315 lines) is unit test code.  Since I was trying to do test-driven development, this is to be expected - the rule of thumb is "write at least as much test code as production code".  That leaves the meat of the implementation at around 400 lines.  And of those 400 lines, most of it is actually refactoring.

As I noted above, the Pingback and Webmention protocols are quite similar, differing mostly in the transport protocol.  The algorithms for sending and receiving them are practically identical.  So most of that work was in generalizing the existing implementation to work for both Pingback and Webmention.  This meant pulling things out into new classes and adjusting them to be easily testable.  Not exciting stuff, but more work than you might think.

So the main take-away from this project was: don't underestimate how hard it can be to work with legacy code.  Once I figured out that the implementation of Webmention would closely mirror what I already had for Pingback, this task should have been really short and simple.  But 700 lines isn't really that short or simple.  Bringing old code up to snuff can take a surprising amount of effort.  But if you've worked on a large, brown-field code-base, you probably already know that.