Making Windows Git Do SSH

Another quick note to my future self: setting up Git under Windows to use SSH key authentication is pretty easy...once you know what to do.

At work, we have some Composer and Bower packages that we fetch from our internal GitHub Enterprise server.  And, of course, all the source lines in the composer.json and bower.json files use SSH references.  I normally just use HTTP for my code checkouts, so I'd never needed key authentication before; now I finally had to figure out how to make Git for Windows authenticate with my SSH key.

Turns out it's pretty easy.  I have a key pair I created with PuTTYgen.  All I had to do was export my private key to OpenSSH format and copy that file to C:\Users\<my_username>\.ssh\id_rsa.  Git picks it up with no further configuration.  Then I just added my key to GitHub and I was good to go.
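
If you want to sanity-check that the key is being picked up, Git for Windows ships with its own ssh.exe, so you can test the connection from a regular command prompt (the hostname below is a placeholder for your own server):

ssh -T git@github.example.com

If the key is working, GitHub answers with a greeting instead of a password prompt.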

JavaScript unit testing

Today I watched a very interesting talk on good and bad unit testing practices by Roy Osherove, entitled "Unit Testing Good Practices & Horrible Mistakes in JS".  It really was an exceptional talk, with some good, concrete advice and examples.  I highly recommend it.

I actually recognized a couple of the examples in the talk.  The big one was OpenLayers.  I still remember being horrified by their test suite when I was using that library at my last job.  We had a number of extensions to OpenLayers and I was thinking about adding some tests into their test suite.  Then I ran the tests and saw a bunch of failures.  Assuming our code must be bad, I tried a fresh copy of the OpenLayers release.  A bunch of tests still failed.  And that's when I gave up and decided to just pretend their tests didn't exist.

If nothing else, the talk provided some nice validation that my JavaScript unit testing practices aren't completely wrong and stupid.  I've been writing unit tests for my JavaScript as part of my standard process for the last six months or so, and I was actually watching the video waiting for the hammer to drop, for Roy to describe an anti-pattern that I'd been blissfully propagating.  I was pleased and somewhat surprised when that moment didn't come.

Using PuTTY inside ConEmu

This is yet another one of those "post this so I can refer back to it later" things.  So if you're not a Windows user, or if you don't use ConEmu, then I suggest you go get a cup of coffee or something.

So for a while now I've been using ConEmu as my Windows console app.  It supports multiple tabs, transparency (ooooh), customizable hotkeys, customizable sessions, Far manager integration and a whole bunch of other nifty stuff.  

A couple of months ago, I saw that it was possible to embed PuTTY, the popular Windows-based SSH client, directly in a ConEmu tab.  So I tried it out and found it to be pretty slick.  The only downside was some key-binding weirdness.

First, there's the general PuTTY issue that if you accidentally press ctrl+S - you know, the key combination that means "save" in just about every editor in existence - it effectively locks the terminal and it's not obvious how to get control back.  (Ctrl+S is actually the old XOFF flow-control character, which tells the terminal to stop displaying output.)  The second issue is that, when you embed an application like PuTTY in ConEmu, it steals most of your keyboard input, so the standard key bindings for switching between window tabs don't work.

Luckily, these problems are easily fixed.  The fixes are just hard to remember, which is why I'm writing them down.  For the ctrl+S issue, you can just hit ctrl+Q (the matching XON character) to resume output.  For the tab-switching issue, you can use the global key bindings for ConEmu - namely Win+Q, Win+Shift+Q, and Win+<Number> to switch consoles, as well as Win+Z to toggle focus between ConEmu and PuTTY.
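
If the ctrl+S freeze bites you regularly, you can also turn off flow control on the remote shell entirely.  This is plain stty, nothing PuTTY-specific - drop it in your remote ~/.bashrc:

# Disable XON/XOFF flow control so ctrl+S never freezes the terminal
stty -ixon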


Fixing Lenovo IdeaPad function keys

Note to self: you actually can turn the function keys on your Lenovo IdeaPad back into actual function keys.  It's just a BIOS setting.

For anyone else who comes across this, Lenovo's IdeaPad series of laptops (at least the model I have) does weird things with the function keys.  You've probably seen those keyboards where the F1 through F12 keys have alternate functions.  They often have a "function lock" key that toggles like caps lock or scroll lock: turn it off and you get the alternate functionality instead of the regular function keys.

What Lenovo does is similar, except that the alternate functionality is on by default and there's no function lock button.  There's just an "Fn" button that you have to press and hold to access the normal function button keystrokes.  So on my laptop, F11 and F12 turn the screen brightness up and down and I need to press Fn+F12 to do a regular F12 keystroke.

This is probably great for "regular" users, most of whom wouldn't miss the function keys if they went away completely.  On the other hand, for a developer, this adds a second keystroke to a row of what were previously absurdly convenient hotkeys.  Granted, the volume and brightness keys are convenient, but when I'm coding I use F11 and F12 for my IDE keybindings much more often than I need to change the screen brightness.

But luckily, as I eventually found, you can turn that off in the BIOS.  On Lenovo's consumer laptops the setting is usually called something like "HotKey Mode"; disable it and the F keys go back to being plain F keys.

PSP Break-down, part 4: Results

Welcome to the end of my series of PSP posts.  In part one, I started with an overview of the Personal Software Process.  Part two covered the process of learning about the PSP and how to apply it.  Part three was the sales pitch of nice things that the PSP is supposed to enable you to do.  Now, in part four, we'll get down to brass tacks and talk about how well the PSP actually works for me.

I'll start with a quick overview of the good, bad, easy, and hard parts.  Then I'll dive into a deeper discussion of my experiences and the details of what did and didn't work.

PSP at a Glance

This table gives you a nice little overview of my experience.  It rates various parts of using the PSP on two axes - whether I judge them to be useful or not (good/bad) and how difficult they are to apply in practice (easy/hard).

             Good                          Bad
  Easy       Maintaining process           Reviewing on paper
             Defect tracking               Setting up tool support
             Time tracking                 Test report template
             Size tracking                 Coding standards
             Code reviews
             PROBE estimates
  Hard       Planning process              Creating relative size tables
             Design reviews                Creating review checklists
             Postmortem analysis           Design templates
             Defect data analysis          Design verification methods

As you can see, I find the benefits of the PSP to outweigh the drawbacks.  For me, it's useful enough that I plan to keep using it, in some form, for the foreseeable future. 

Using the PSP

Despite what you might read about how cumbersome the PSP is to use, I actually didn't find it that difficult at all.  Granted, it takes a little getting used to - every process change does.  But once I had the basics down, actually using and sticking to the process wasn't that hard.  The use of written scripts with well defined entry and exit criteria helps keep you honest and disciplined.

Likewise, with the help of Process Dashboard, I found the entire data collection process to be relatively painless.  There are a few pain-points, of course.  For me, the biggest one was simply configuring a line counting tool properly so that you can measure project size.  This is actually more annoying than you'd think.

I use the integrated line-counter in Process Dashboard to count change size and cloc to measure initial size for estimation purposes, mostly because cloc is the one I've integrated into my IDE.  The Process Dashboard tool has some nice features, but will require you to write custom language definitions for pretty much anything that doesn't use C-style syntax.  It uses a fairly easy-to-follow XML format for configuration, but still....  Cloc has much better language support, but is harder to customize.  As a further annoyance, while Process Dashboard does have VCS diff support, it currently only supports Subversion.  So if you're using Git, Mercurial, or anything else reasonably modern, you'll have to set up two copies of your local repo for before-and-after comparison.
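
For what it's worth, cloc can do the before-and-after comparison itself if you point it at the two copies (the paths are placeholders):

# Counts added, modified, and removed lines between the two trees
cloc --diff /path/to/repo-before /path/to/repo-after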

Code Review

Personal pre-commit code review is one of those things that every responsible developer does, but hardly anyone seems to talk about.  I know I've been doing an informal version of it for years.  It consists of simply looking over the diff of my changes before hitting the "commit" button and making sure that I didn't make any obvious mistakes, that I'm not checking in any changes I don't mean to, etc.  It's something you quickly learn to do after making a few embarrassing mistakes.

Doing an organized code review with a checklist really takes this to the next level.  Instead of being a CYA thing, code review becomes a way to preemptively find problems in your code.  And at its best, it can be hugely effective.  As any experienced developer knows, there are classes of problem that are hard to find in testing, but stick out like a sore thumb when you actually stop and read the code.

The only hard thing about code review is customizing your review checklist.  I find it difficult to do that simply by looking at the defect categories in my data, because those seldom tell you anything actionable.  A few defect categories readily translate into checklist items, but many defects are more subtle and are either difficult to categorize or difficult to generalize into checklist items.

The one thing I really didn't like about the PSP code review process was the recommendation to do it on paper.  Using paper does have the benefit that you can get more code in front of you at a time, and it is easier to make annotations.  However, it also bypasses the navigational and analysis power baked into modern IDEs.  For instance, Komodo lets me easily navigate between functions, access standard library documentation with a click, and search for uses of an identifier.  Those things are much more tedious to check on paper.

But the big kicker for me was that trying to review a diff on paper is just painful.  It's hard enough on screen with color highlighting, but it really sucks on paper.  And on paper I don't have things like the Komodo diff viewer's feature to jump from a diff item to that location in the file to view the context.  It might work well to review new code on paper, but for changes to existing code it feels really clunky.

Design Reviews

While we're on the topic of reviews, let's talk about design reviews for a minute.  Again, this is a really good idea, for the same reasons that code review is a good idea.  And the PSP does offer some productive advice in the recommendation to adopt a design standard and a checklist for common errors.

However, the specific methods the PSP recommends just don't work for me.  The four standard design templates are a nice idea, but they feel very repetitive and clunky to work with.  And even if I switched to the UML equivalent, the recommended list of artifacts is just too much for a lot of the things I do.  And at the risk of having my CS degree rescinded, I have to admit that I have trouble constructing a state machine for many of the projects I do - at least, one that's even remotely enlightening.  In general, I just find them painful to work with and biased toward "new program" development rather than incremental enhancement.

And the design verification techniques are even worse!  They're time-consuming and tedious - you're basically executing your program on paper.  It's a nice idea, and might come in handy occasionally, but frankly, I have less confidence in my ability to perform those verification exercises correctly than I do in my code being correct in the first place.  And, again, they're just way too heavy for most of the projects I do.

I'm still working on finding a design and design review approach that's sustainable for me.  Since my last few shops have used agile methodologies, most of the "projects" I do are fairly small enhancements to existing code - usually just two or three days, seldom more than a week.  So a heavy-weight design process with lots of templates or UML diagrams just isn't going to work. 

My current approach is as follows (note that I'm using a variant on the PSP3 process in Process Dashboard):

  1. Sketch out a high-level design in a word processor.  This is a refinement of the conceptual design used for planning, usually in the form of plain prose and bullet lists.
  2. Review that primarily for feasibility, completeness, and requirements coverage.
  3. For each component I've broken out, do a more detailed "transient design" (I'll describe that in a moment).
  4. Review the transient design primarily for completeness, correctness, and requirements coverage.

I refer to the detailed designs as "transient designs" because I don't actually create separate documents for them.  I blend implementation and design and actually do the design right in the source code.  I generally stub out things like classes and methods and fill in the details either with actual code (for simpler items) or with "design annotations", which are just comments that use a special formatting to mark them as design artifacts.  Sometimes they're pseudo-code, other times they're just descriptions of what needs to be done, whatever seems appropriate.  Then, in the code phase, I simply replace those annotations with the actual implementation.  It's certainly not perfect, but it seems to be working well enough for me so far.  As a next step, I'm going to look at a TDD-like approach and try incorporating unit test definitions as one of the design artifacts.
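
To make that concrete, here's a rough sketch of what a stubbed-out transient design might look like.  The class and the problem are made up purely for illustration:

class ReportExporter:
    """Exports finalized reports to the external archive service."""

    def export(self, report):
        # DESIGN: verify the report is in the "finalized" state and
        #   raise an error identifying the actual state if it isn't.
        # DESIGN: serialize the report to the archive's JSON format.
        # DESIGN: upload with up to 3 retries and exponential backoff;
        #   on final failure, log the error and re-raise.
        pass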


Estimation

In my experience so far, PSP estimation using PROBE actually works remarkably well.  In the data from my last job, I was eventually able to get to the point where my actual development times were generally within about 15% of my estimates.  I consider that pretty good, especially when you consider that the estimates were done in minutes and based on fairly sketchy user stories.

Of course, estimation is a learned skill, and PROBE doesn't change that.  You still need to be able to accurately account for the possible changes when constructing the conceptual design.  And as with any data-driven approach, the results are going to be sensitive to the quality of your data.  So if your relative size tables are just made up rather than being based on your past work, then don't expect your estimates to be too accurate.

It's also important to note that there's a bias against project diversity here.  For example, line counts can differ wildly for different programming languages, different problem domains, etc.  So if you tend to work on projects that are generally very similar to each other, then PROBE will work much better than if all your projects are widely divergent.  My data from my last job is based largely on a single code-base, so while the purposes of the individual projects varied wildly, the technology stack was consistent.

The hardest part about estimation, at least for me, is coming up with those relative size tables.  It's one of those things that sounds easy, but actually isn't.  For one thing, I don't have a tool to automatically count the lines and methods in my classes - much less one that works across a diversity of languages.  For another, my data largely comes from work, which is a problem because I do web development on a team, meaning my method and class size data has to be extracted from a code base written in five different languages by five people.  When you wrote a quarter of the methods in one class, half of a third of the methods in another class, etc., how do you count all that?  You can forget all that and just count the files as they are, but then it's not really your data, so it's not clear how useful it will be.
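
For code that is all yours, at least, the counting part is scriptable.  Here's a minimal sketch for Python source using the standard library's ast module (other languages would each need their own parser):

import ast
import sys

def class_sizes(path):
    """Print each class in a Python source file with its method count."""
    with open(path) as f:
        tree = ast.parse(f.read(), filename=path)
    for node in ast.walk(tree):
        if isinstance(node, ast.ClassDef):
            methods = [n for n in node.body
                       if isinstance(n, ast.FunctionDef)]
            print("%s: %d methods" % (node.name, len(methods)))

if __name__ == "__main__":
    for path in sys.argv[1:]:
        class_sizes(path)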

I've also found it challenging to come up with useful categorizations for my relative size tables.  Perhaps it's just the products I've been working on, but I end up with a whole lot of database-related classes and a smattering of other categories.  That's fine for those products, but it's hard to figure out how to extrapolate it to other kinds of projects.  My suspicion is that this is a result of sub-optimal system design.  I'm currently trying to adhere religiously to the SOLID principles, which should result in more numerous, more targeted classes, which should solve that problem.

Other Bad Things

There are a few other annoying things about the PSP as Humphrey describes it.  While the general focus on templates and checklists is not bad in and of itself, their value tends to vary.  The test report template, for instance, is one that I've not found particularly valuable.  While it is useful to sketch out your testing strategy, or maybe make a quick checklist of your test cases, the test report template is more like something that you'd give to a QA team to do manual testing.  It has a bias towards verbosity that makes it seem like more effort than it's worth.

Likewise with the focus on coding standards.  We can all agree that having coding standards, and following them consistently, is a very good thing.  Everyone should do it.  However, I've been working as a software developer for a long time now, and my "coding standard" is something I've long since internalized.  I don't need to spend time formalizing it or checking it in my code reviews.  Ditto the size counting standard.  You can get really fancy if you want to, but I'm not convinced that anything much more complicated than counting physical lines is likely to be helpful.  I suspect that, at least for my purposes, any elaborate counting standard would just serve to complicate measurement.

And, of course, there's the simple fact of process overhead.  It's really not too bad when you use Process Dashboard, but it's still there.  For example, there's a non-trivial amount of work that goes into configuring Process Dashboard itself.  It's a useful and powerful tool, but it's not always simple to use.  There's also the analysis time used to assess and correct your process.  This is unquestionably valuable, but it's still additional time that you need to plan for.  And, of course, there's the time to actually follow the process.  This isn't much, but for very small tasks (e.g. one or two hours), your estimates might be thrown off by the fact that your standard phase break-down produces phases lasting only a minute or two, and it takes you longer than that just to type in the data you need.

And Some Good Things to End On

Last but not least, I wanted to highlight two more "good but hard" things from the table above: the planning and postmortem stages.  At first, these seemed like silly, pro forma phases to me, but they're actually quite valuable.  And the most valuable thing about them is something that doesn't show up in the script: they make you stop and reflect.

The planning phase forces you to think about what you're doing.  To construct an estimate, you have to think about the project you're trying to do, break it down, and define its scope.  Even if you don't believe there's value in the estimate itself, simply going through the process gives you lots of good insight into just what you're trying to do, which reduces the number of surprises later in the development cycle and makes everything go smoother in general.

Likewise, the postmortem stage prompts you to reflect on how you're doing and figure out how you can improve.  It's like a one-man sprint retrospective.  And like the sprint retrospective, it's actually the most important part of the process.  The simple task of looking at your statistics and filling out a Process Improvement Proposal forces you to stop and focus on your performance and what you can do to improve your work.  I find that simply looking at that PIP line in the postmortem exit criteria keeps me honest and makes me stop and think of something I could improve.  And if you can't think of at least one thing you could do better, you're just lying to yourself.


So there you have it.  In many ways, the PSP is working well for me.  Some of Humphrey's suggestions work well out of the box, some don't, and some need tweaking.  I do find that my process is evolving away from the "standard" PSP, but that's neither bad nor unexpected.  The basic techniques and ideas are still useful to me, and that's what matters.

Upgrading Mercurial on shared hosting

Disclaimer: This is yet another "note to self" post.  If you're not me, feel free to ignore it.

After God alone knows how many years (at least six, since I have posts related to it from 2010), it's finally time to upgrade the version of Mercurial that I have installed on my shared web hosting account.  This is a shared hosting account with no shell access - nothing but web-based tools and FTP.  I also don't know what OS it's running - just that it's some form of Linux.  So I've been putting this off for obvious reasons.

Unfortunately for me, Mercurial 3.7 turns on general delta support by default when creating new repositories.  That format isn't supported by the old version I was running (1.7), so my choices were to either keep using the now non-standard, older, and less efficient format for my repositories, or just bite the bullet and upgrade.  I did the latter, since the version I had was pretty ancient and I was going to have to do it eventually anyway.
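
(I believe you could also keep creating old-format repositories by turning the new default off in your hgrc - the relevant knob should be format.usegeneraldelta, though I haven't verified this - but that just postpones the inevitable.)

[format]
usegeneraldelta = False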

Fortunately, my hosting provider supports Python 2.7, which gets you most of Mercurial.  However, there are some C-based components to Mercurial as well.  Since I have no shell access to the hosting server, and there are probably no development tools installed even if I did, I had to compile on a VM.  I did that by spinning up a Fedora 24 VM (on the assumption that they're running RHEL, or something close enough to it) and doing a local build.  The only caveat was that apparently my provider is running a 32-bit OS, because building on a 64-bit VM resulted in errors about the ELF format being incorrect, so the build had to be done on a 32-bit VM.

Once the Fedora VM was up and running, I was able to do a build by running the following:
sudo dnf install python-devel
sudo dnf install redhat-rpm-config
cd /path/to/mercurial-3.8.x
make local

That's about it.  After I had a working build I was able to copy the Mercurial 3.8 folder to the server, right over top of the old version, and it just worked.  Upgrade accomplished!

Fixing Synaptics right-click button

Tonight I finally got around to fixing my trackpad.  Again.

So here's the thing: like most Windows-based laptops, my little Lenovo ultrabook has a trackpad with two "button" regions at the bottom.  They're not separate physical buttons, but they act as the left- and right-click buttons.  However, rather than force you to use the right-click pseudo-button, the Synaptics software that comes with the laptop lets you turn off the right-click button and use two-finger-click as the right-click, a la a MacBook Pro.  I find this preferable, in particular because my laptop centers the trackpad under the space bar, rather than in the actual physical center of the unit, which means that if you're right-handed, it's easy to hit the right-click button when you didn't mean to.

Prior to upgrading to Windows 10, I had this all set up and it was fine.  After the upgrade, not so much.  Sure, the same options were still there, but the problem was that the option to turn off the right-click button did just that - it turned the button completely off!  So rather than a click in that region doing a standard left-click, that part of the trackpad was just dead, which was even worse than the problem I was trying to fix.

Luckily, it turns out that the functionality is still there - the Synaptics people just need some better UX.  I found the solution in this forum thread on Lenovo's site.  It turns out you can just change the value of the registry key HKEY_CURRENT_USER\Software\Synaptics\SynTP\TouchPadPS2\ExButton4Action to 1, which is apparently the standard left-click action.  By default, it's 2, but when you turn off the "Enable Secondary Corner Click" (i.e. right-click to open context menu) feature in the Synaptics UI, that gets changed to 0, which is apparently the "do nothing" action.
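
If you'd rather not click around in regedit, the same change can be made from a command prompt.  (I'm assuming the value is a DWORD here, which is typical for these Synaptics settings.)

reg add HKCU\Software\Synaptics\SynTP\TouchPadPS2 /v ExButton4Action /t REG_DWORD /d 1 /f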

Long story short: my mouse is much better now.

PSP Break-down, part 3: The Good Stuff

Welcome to part three of my PSP series.   Now that the introductory material is out of the way, it's time to get to the good stuff!  This time, I'll discuss the details of the PSP and what you can actually get out of it.  Theoretically, that is.  The final post in this series will discuss my observations, results, and conclusions.

PSP Levels

As I alluded to in a previous post, there are several PSP levels that you go through as part of the learning process.  Levels 0 and 0.1 get you used to using a defined and measured process; levels 1 and 1.1 teach planning and estimation; and levels 2 and 2.1 focus on quality management and design.  There is also a "legacy" PSP 3 level, which introduces a cyclical development process, but that's not covered in the book I used (though there is a template for it in Process Dashboard).  The levels are progressive in terms of process maturity: PSP 0 defines your baseline process, and by the time you get to PSP 2.1 you're working with an industrial-strength process.

For purposes of this post, my discussion will be at the level of the PSP 2.1.  Of course, there's no rule that says you can't use a lower level, but the book undoubtedly pushes you toward the higher ones.  In addition, while the out-of-the-box PSP 2.1 is probably too heavy for most people's taste, there is definitely useful material there that you can adapt to your needs.


Estimation

One of the big selling points for all that data collection I talked about in part 1 is using it for estimation.  The PSP uses an evidence-based estimation technique called Proxy-Based Estimation, or PROBE.  The idea is that by doing statistical analysis of past projects, you can project the size and duration of the current project.

The gist of the technique is that you create a "conceptual design" of the system you're building and define proxies for that functionality.  The conceptual design might just be a list of the proxies such that, "if I had these, I would know how to build the system."  A proxy is defined by its type/category, its general size (from very small to very large), and how many items it will contain.  In general, you can use anything as a proxy, but in object-oriented languages, the most obvious proxy is a class.  So, for example, you might define a proxy by saying, "I'll need class X, which will be a medium-sized I/O class and will need methods for A, B, and C."

By using historical project data, you can create relative size tables, i.e. tables that tell you how many lines of code a typical proxy should have.  So in the example above, you would be able to look up that a medium-sized I/O class has, on average, 15.2 lines of code per method, which means your class X will have about 46 lines of code.  You can repeat that process for all the proxies defined in your conceptual design to project the total system size.  Once you have the total estimated size, PROBE uses linear regression to determine the total development time for the system.  PROBE allows for several different ways to do the regression, depending on how much historical data you have and how good the correlation is.
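
Those variants aside, the core of the calculation is just ordinary least squares.  Here's a toy sketch in Python, with made-up historical numbers, to show the mechanics:

# Fit y = b0 + b1 * x by ordinary least squares.
def linreg(xs, ys):
    n = float(len(xs))
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum(x * y for x, y in zip(xs, ys)) - n * mx * my) / \
         (sum(x * x for x in xs) - n * mx * mx)
    return my - b1 * mx, b1

# Made-up history: estimated proxy size (LOC) vs. actual time (minutes).
est_size = [120, 260, 75, 430, 180]
actual_time = [340, 720, 250, 1100, 500]

b0, b1 = linreg(est_size, actual_time)
print("Projected time for a 46-LOC change: %.0f minutes" % (b0 + b1 * 46))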


Planning

As a complement to estimation, the PSP also shows you how to use that data to do project planning.  You can use your historical time data to estimate your actual hours-on-task per day and then use that to derive a schedule, so you can estimate exactly when your project will be done.  (For example, if you average three task-hours per day and the estimate comes out to thirty hours, you know not to promise delivery this week.)  It also allows you to track progress toward completion using earned value.

Quality Management

For the higher PSP levels, the focus is on quality management.  That, in and of itself, is a concept worth thinking about, i.e. that quality is something that can be managed.  All developers are familiar with assuring quality through testing, but that is by no means the only method available.  The PSP goes into detail on other methods and also analyzes their efficiency compared to testing.

The main method espoused by the PSP for improving quality is review.  The PSP 2.1 calls for both a design review and a code review.  These are both guided by a customized checklist.  The idea is that you craft custom review checklists based on the kinds of errors you tend to make, and then review your design and code for each of those items in turn.  This typically means that you're making several passes through your work, which means you have that many opportunities to spot errors.

The review checklists are created based on an analysis of your defect data.  Since the PSP has you capture the defect type and injection phase for each defect you find, it is relatively easy to look at your data and figure out what kind of defects you typically introduce in code and design.  You can use that to prioritize the most "expensive" defect areas and develop review items to try to catch them.
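
The analysis itself doesn't need to be fancy.  Here's a sketch of the kind of tally I mean, assuming you've exported your defect log to a CSV file (the column names are made up):

import csv
from collections import Counter

fix_minutes = Counter()
with open("defects.csv") as f:  # hypothetical export of your defect log
    for row in csv.DictReader(f):  # expected columns: type, injected, fix_min
        fix_minutes[(row["type"], row["injected"])] += int(row["fix_min"])

# The most expensive (type, injection phase) pairs are the best
# candidates for new review checklist items.
for (dtype, phase), minutes in fix_minutes.most_common(10):
    print("%-20s injected in %-10s: %4d min" % (dtype, phase, minutes))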

As part of design review, the PSP also advocates using standard design formats and using organized design verification methods.  The book proposes a standard format and describes several verification methods.  Using these can help standardize your review process and more easily uncover errors.

Process Measurement

Another big win of the PSP is that it allows you to be objective about changes to your personal process.  Because you're capturing detailed data on time, size, and defects, you have a basis for before and after comparisons when adopting new practices. 

So, for instance, let's say you want to start doing TDD.  Does it really result in more reliable code?  Since you're using the PSP, you can measure whether or not it works for you.  You already have historical time, size, and defect data on your old process, so all you need to do is implement your new TDD-based process and keep measuring those things.  When you have sufficient data on the new process, you can look at the results and determine whether there's been any measurable improvement since you adopted TDD.

The same applies to any other possible change in your process.  You can leverage the data you collect as part of the PSP to analyze the effectiveness of a change.  So no more guessing or following the trends - you have a way to know if a process works for you.


One of the least touted, but (at least for me) most advantageous aspects of using the PSP is simply that it keeps you honest.  You use a process support tool, configured with a defined series of steps, each of which has a script with defined entry and exit criteria.  It gives you a standard place in your process to check yourself.  That makes it harder to bypass the process by, say, skimping on testing or glossing over review.  It gives you a reminder that you need to do X, and if you don't want to do it, you have to make a conscious choice not to do it.  If you have a tendency to rush through the boring things, then this is a very good thing.

Next Up

So that's a list of some of the reasons to use the PSP.  There's lots of good stuff there.  Some of it you can use out-of-the-box; other parts offer some inspiration, but will probably require customization before you can take advantage of them.  Either way, I think it's at least worth learning about.

In the next and final installment in this series, I'll discuss my experiences so far using the PSP.  I'll tell you what worked for me, what didn't, and what to look out for.  I'll also give you some perspective on how the PSP fits in with the kind of work I do, which I suspect is very different from what Humphrey was envisioning when he wrote the PSP book.

Access to local storage denied

I ran into an interesting issue today while testing out an ownCloud installation on a new company laptop running Windows 10.  When I tried to open the site in IE11, the JavaScript would just die.  And I mean die hard.  Basically nothing on the page worked at all.  Yet it was perfectly fine in Edge, Firefox, and Chrome.

The problem was an "Access denied" message on a seemingly innocuous line of code.  It was a test for local storage support.  The exact line was:

if (typeof localStorage !== "undefined" && localStorage !== null) {

Not much could go wrong with that line, right?  Wrong!  When I double-checked in the developer console, it turned out that typeof localStorage was returning "unknown", so the first part of that condition was actually true.  And attempting to actually use localStorage in any way resulted in an "Access denied" error.
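
Incidentally, this is why the more paranoid feature tests wrap storage access in a try/catch and do a round-trip write.  Something along these lines (a sketch) would have failed gracefully:

var hasLocalStorage;
try {
    // Merely touching localStorage can throw "Access denied" in IE,
    // so the only reliable test is to actually use it.
    localStorage.setItem("__storage_test__", "1");
    localStorage.removeItem("__storage_test__");
    hasLocalStorage = true;
} catch (e) {
    hasLocalStorage = false;
}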

A little Googling turned up this post on StackOverflow.  It turns out this can be caused by an obscure error in file security on your user profile.  Who knew?  The problem was easily fixed by opening up cmd.exe and running the command:
icacls %userprofile%\Appdata\LocalLow /t /setintegritylevel (OI)(CI)L

PSP Break-down, part 2: Learning

This is part two of my evaluation of the Personal Software Process.  This time I'll be talking about the actual process of learning the PSP.  In case you haven't figured it out yet, it's not as simple as just reading the book.

Learning the PSP

The PSP is not the easiest thing to learn on your own.  It's a very different mindset than learning a new programming language or framework.  It's more of a "meta" thing.  It's about understanding your process: focusing not on what you're doing when you work, but rather how you're doing it.  It's also significantly less cut-and-dried than purely technical topics - not because the material is unclear, but simply because what constitutes the "best" process is inherently relative.

Given that, I suspect the best way to learn the PSP is through the SEI's two-part training course.  However, I did not do that.  Why not?  Because that two-part course takes two weeks and costs $6000.  If you can get your employer to give you the time and shell out the fee, then that would probably be great.  But in my case, that just wasn't gonna happen, and the SAF (spouse acceptance factor) on that was too low to be viable.

Instead, I went the "teach yourself" route using the self-study materials from the SEI's TSP/PSP page.  You have to fill out a form with your contact information, but it's otherwise free.  These are essentially the materials from the SEI course - lecture slides, assignment kits, and other supplementary information.  It's designed to go along with the book, PSP: A Self-Improvement Process for Software Engineers, which serves as the basis for the course.  Thus my learning process was simply to read through the book and do the exercises as I went.

(As a side-note, remember that this is basically a college text-book, which means that it's not cheap.  Even the Kindle version is over $40.  I recommend just getting a used hardcover copy through Amazon.  Good quality ones can be had for around $25.)

Fair warning: the PSP course requires a meaningful investment of time.  There are a total of ten exercises - eight programming projects and two written reports (yes, I did those too, even though nobody else read them).  Apparently the course is designed around each day being half lecture, half lab, so you can count on the exercises taking in the area of four hours apiece, possibly much more.  So right there you're looking at a full work-week's worth of exercises in addition to the time spent reading the book.

Personally, I spent a grand total of 55 hours on the exercises: 40 on the programming ones and 15 on the two reports (for the final report I analyzed not only my PSP project data, but data for some work projects I had done using the PSP).  While the earlier exercises were fairly straight-forward, I went catastrophically over my estimates on a couple of the later ones.  This was due partly to my misunderstanding of the assignments, and partly to confusion regarding parts of the process, both of which could easily have been averted if I'd had access to an instructor to answer questions.


Tool Support

As I mentioned in the last post, you'll almost certainly want a support tool, even when you're just learning the PSP.  Again, there's a lot of information to track, and trying to do it on spreadsheets or (God forbid) paper is going to be very tedious.  Halfway decent tool support makes it manageable.

I ended up using Process Dashboard, because that seems to be the main (only?) open-source option. In fact, I don't think I even came across any other free options in my searches.  I understand other tools exist, but apparently they're not public, no longer supported, or just plain unpopular. 

One of the nice things that Process Dashboard offers is the ability to import canned scripts based on the PSP levels in the book.  To use that, you have to register with the SEI, just like when you download the PSP materials, which is annoying but not a big deal.  (The author got permission from the SEI to use their copyrighted PSP material and apparently that was their price.)  This is really handy for the learning process because it puts all the scripts right at your fingertips and handles all of the calculations for you at each of the different levels.

In terms of capabilities, Process Dashboard is actually pretty powerful.  The UI is extremely minimal - essentially just a status bar with some buttons to access scripts, log defects, pause the timer, and change phases.  Much of the interesting stuff happens in a web browser.  Process Dashboard runs Jetty, which means that it includes a number of web-based forms and reports that do most of the heavy lifting.  This includes generating estimates, entering size data, and displaying stock and ad hoc reports.

Process Dashboard has fairly extensive customization support, which is good, because one of the basic premises of the PSP is that you're going to need to customize it.  You can customize all the process scripts and web pages, the project plan summary report, the built-in line counter settings, the defect categories, etc.  And that's all great.  The one downside is that the configuration is usually done by editing XML files rather than through a graphical tool.

Since this is an open-source developer tool, I guess that's sort of fine.  And at least the documentation is good.  But it's important to realize that you will need to spend some time reading the documentation.  It's a powerful tool and probably has just about everything you need, but it's not always easy to use.  On the up side, it's the sort of thing that you can do once (or occasionally) and not have to worry about again.  Just don't expect that you'll be able to fire up Process Dashboard from scratch and have all the grunt work just done for you.  There's still some learning curve and some work to do.  But if you end up using the PSP, it's worth it.