Nope, I don't know JS

I've been writing software for a living for the last 15 years.  I've been doing mostly full-stack web development for nine of those.  That means I've written my fair share of JavaScript code.  But you know what?  It turns out I really don't know JS.  I thought I did.  But I don't.

I reached this conclusion after reading the first two and a half of the six books in Kyle Simpson's You Don't Know JS series.  Normally, I'd wait until I was finished with the series to blog about it, but seriously, this is good stuff.  If you think you have a decent grasp of JavaScript, you should read this to test your mettle.

Forget the W3Schools tutorials or jQuery guides you might have read to learn JavaScript in the first place.  This is way beyond that.  The goal of the You Don't Know JS series is not to teach you "how to code JavaScript" but rather to help you master JavaScript.  It's a deep-dive into the guts of JavaScript - not the subset most of us are used to, but what's actually spelled out in the ECMAScript specifications. 

The beautiful thing about this series is that it's not about expanding your catalog of technical tricks.  Most of it (well, of the 2.5 books I've read so far, anyway) is about understanding the fundamentals in a deep way.  For example, going beyond "oh, this in JavaScript is weird" and actually understanding the rules behind how the dynamic binding of this works and how it differs from the lexical scope used for everything else.  Things like that are easy to gloss over.  After all, you don't really need to know the gory details of how this is bound in order to write code and be productive.  But these kinds of things really are important.  They're the difference between "knowing" JavaScript and knowing JavaScript. 
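
To make that concrete, here's a quick illustration of the call-site rules (my own sketch, not an example from the book; assume a non-strict script running at global scope in a browser):

// `this` is resolved at the call site, not where the function is defined.
var name = "global";

function whoAmI() {
    return this.name;  // which "name"?  It depends on how whoAmI is called.
}

var obj = { name: "obj", whoAmI: whoAmI };

console.log(obj.whoAmI());                    // "obj"    - implicit binding to obj
console.log(whoAmI.call({ name: "other" }));  // "other"  - explicit binding via call()
console.log(whoAmI());                        // "global" - default binding to the global object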

To put it another way, there's more to the craft of building software than just "getting the job done".  For some people, just "getting it done" is sufficient - and that's fine: they're satisfied to remain journeymen.  But for some people, that's not enough - they want to be master craftsmen.  This series is for them.

Music to code by

Lately I've been listening to a lot of Music To Code By at work.  It's a project by Carl Franklin of .NET Rocks! fame.  The idea is that it's a collection of music tracks that will help you stay in the "flow" while working on code or pretty much anything else.  He describes it as a "productivity tool" rather than an artistic endeavor.

I've been listening to MTCB for a few months now.  Carl is up to 11 tracks (I have all of them), so there's a nice variety to choose from.  Each track is 25 minutes long, to coincide with a Pomodoro cycle (if you care about that).  The tracks are extremely repetitive, with a nice, laid-back kind of vibe.  The goal, which I think Carl has accomplished admirably, is for the tracks to be "not boring", but at the same time not so interesting that you actively listen to them.

Believe it or not, MTCB actually solves a problem that I've had for years.  In the office, I often end up putting on my headphones to try to drown out the distraction of the surrounding noise.  But whatever I listen to inevitably ends up either boring or distracting.  If I listen to my favorite music, it eventually gets old and annoying.  If I listen to new music, I frequently end up distracted.  Same thing with podcasts - I either end up listening to the podcast rather than working, or I miss the good part of the podcast because I'm engrossed in my work.

Music To Code By does a great job of covering up the noise while just fading into the background.  Even when listening to a track for the first time, the repetition prevents it from holding your interest for more than a couple of minutes.  But at the same time it manages to be unobtrusive enough that you don't really get sick of it - after a while, you just don't even hear it.  So you're left with the benefits of music to cover up the office noise without the risk of being distracted from your work.

I definitely recommend giving it a try.  If you want to start small, the individual tracks are $5 apiece (except for the first three, which I believe are only available in the $18 "album").  However, there's now a $49 collection that includes everything.  I know that sounds a little steep when you consider that the entire point is that you're not going to listen to the music, but I've found it to be well worth the investment.

No estimates?

So as I mentioned in a previous post, I've been taking a look at the Personal Software Process, or PSP (not to be confused with Sony's portable gaming console, which makes Googling for it loads of fun).  It's been an eye-opening experience and I'll write more about it once I've officially finished the course.

At the same time, I've been reading about the #NoEstimates hashtag/movement/discussion/whatever-the-hell-it-is.  I actually first heard about it around the time I was starting the PSP book.  This makes for an interesting contrast, because one of the first things you learn about when studying the PSP is...how to do estimates.

The thing is, up until a few months ago, I would have completely agreed with the #NoEstimates people.  In fact, I wrote about my feelings on estimates a couple of years ago.  My feelings were pretty much in line with Woody Zuill's comments in .NET Rocks! episode 1160 - that estimates are useless because they're just made-up, grossly inaccurate numbers which can only be used for evil or to make even more inaccurate plans.  So why bother?  Just decide if something is important enough to do, and then either do it or not.  Stop pretending.

Then I started learning about the PSP.  I started accumulating data on my actual productivity and learning how to leverage it.  And you know what?  It turns out it actually is possible to do estimates with a reasonable degree of accuracy - and it doesn't even take huge amounts of up-front planning.  Granted, there are caveats - you need historical data that's representative of the technology and problem domain you're working in.  But once you've got a few projects under your belt to establish a baseline, it really does work.
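
To give a flavor of what that looks like, here's a rough sketch of forecasting effort from historical data using simple linear regression - much cruder than the PSP's actual PROBE method, and with data values I've invented purely for illustration:

// Predict effort from estimated size using simple linear regression
// over historical (size, effort) pairs.  The data here is made up.
var history = [
    { size: 120, hours: 8 },
    { size: 250, hours: 15 },
    { size: 400, hours: 26 },
    { size: 180, hours: 12 }
];

function fitLine(data) {
    var n = data.length, sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    data.forEach(function (d) {
        sumX += d.size;
        sumY += d.hours;
        sumXY += d.size * d.hours;
        sumXX += d.size * d.size;
    });
    var b1 = (sumXY - (sumX * sumY) / n) / (sumXX - (sumX * sumX) / n),
        b0 = (sumY - b1 * sumX) / n;
    return function (size) { return b0 + b1 * size; };
}

var predict = fitLine(history);
console.log(predict(300).toFixed(1) + " hours");  // ~19.2 hours with this sample data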

So now I have some cognitive dissonance.  Was I wrong to take the time to learn this estimating stuff?  Or was I wrong before when I agreed with the #NoEstimates people?  Where do I go with this?

What got me thinking about this is an article by Henri Karhatsu in Methods and Tools.  Towards the end of the article, he discusses the #NoEstimates alternative to providing an estimate: forecasting.  This is what you use when you really do need to figure out when something will be done (e.g. so that you can prepare marketing materials for a new product or feature).  While this may sound like estimating, according to Karhatsu, there is a key difference: forecasts are based on data, while estimates are not.
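
In concrete terms, a data-driven forecast might look something like the following sketch - project the remaining work against your measured throughput (the numbers here are made up):

// Forecast completion from historical weekly throughput.
var weeklyThroughput = [4, 6, 3, 5, 7, 4],  // items completed in recent weeks
    remainingItems = 30,
    avg = weeklyThroughput.reduce(function (a, b) { return a + b; }, 0) / weeklyThroughput.length;

console.log("Forecast: ~" + Math.ceil(remainingItems / avg) + " weeks");  // ~7 weeks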

I had several reactions to this revelation.  The first was, "Well, that's a crock - 'forecasting' is just estimating by a different name."  Then I realized, "Hey, 'forecasting' is exactly what you're doing in the PSP!  Cognitive dissonance resolved!"  Which prompted the question, "If PSP 'estimating' is the same thing as #NoEstimates 'forecasting', then what the heck does #NoEstimates even mean?"

And that's when I read this article analyzing the #NoEstimates movement.  It's an interesting read and pretty well summarizes my impressions of #NoEstimates based on what I've heard and read.  The short version is that #NoEstimates doesn't actually mean "no estimates ever".  It doesn't even mean that estimates are inherently bad in all cases.  Rather, it seems to be a reaction against people using estimates in stupid and fruitless ways.  This came up a number of times in the .NET Rocks! interview with Woody Zuill.  You have managers using estimates as a weapon against developers, or clueless project managers demanding estimates because they need to plug the numbers into Microsoft Project so they can finish their plan, even though nobody really thinks the plan reflects reality.  This ends in the demoralizing and unproductive result of developers fabricating numbers, without any clue whether they're even vaguely realistic, for no other purpose than to keep the managers happy.

I don't think anybody could reasonably argue that such a situation is a good thing.

But on the other hand, when you're being paid to do a project, whether it's building software or building a house, the questions "How long is it going to take?" and "How much is it going to cost?" are not inherently unreasonable things for the customer to ask.  And while it's sometimes true that you're working on an unprecedented project and have no clue how long it will take, it's more often true that you're on reasonably familiar ground and have a pretty good idea of the effort involved in a project.  And let's be honest - while a big-deal consultant like Woody Zuill might be able to get away with telling the customer that they don't really need an estimate, that's not going to fly in every organization.  So #NoEstimates as a general recommendation is probably not realistic.

And that brings me back around to the first chapter of the PSP book.  In it, Watts Humphrey relates a story from one of his first TSP teams (that's "Team Software Process" - the team equivalent of the PSP).  They had just come off a two-year death-march project and now management had a new project for them - with a nine-month timeline.  The dev team was sure they couldn't make that schedule.  They didn't really know how long the project would take, but figured it would probably be closer to two years again. 

So what normally happens in that situation?  Well, the dev team says that the schedule is impossible to meet, management insists that the date is firm, and the developers eventually cave and say they'll "do their best".  I've been in that situation, as I'm sure a lot of other people have.  Why do we do that?  Because, well, what else can we do?  We don't really know how long the project will take, and even if we have an idea, we usually don't have anything concrete to back up our estimate.  And as Humphrey puts it, when we don't know and management doesn't know, they'll win every time.

But rather than go the #NoEstimates route and try to convince people that coming up with an estimate isn't a useful exercise, Humphrey goes the opposite direction.  He uses the power of data to give estimates teeth. Remember - even if management wants the new project done yesterday, it's still important for them to know when it's really going to be done.  And while it's easy to write off a guess-work estimate as "schedule padding", it's not as simple when you have a detailed plan based on historical data.  When you reach that point, estimates can become a weapon for the dev team rather than one used against them.

So where does this leave me?  Pretty much back where I started.  Estimates are good when they're needed and accurate.  They're bad when they're unneeded or inaccurate.  They should be based on data, not guess-work, and should be used responsibly.  So no earth-shattering news.  Just an interesting detour.

Review - Agile!: The Good, the Hype, and the Ugly

As I mentioned in a previous post, I've been doing some reading around process.  A development process is one of those things that you always have, by definition.  It might be ill-defined and chaotic, and you might not even think about it, but ultimately you go through some process to develop code.  And when you think about it, this process is going to have a not insubstantial impact on the quality of the software you produce.  After all, how could it not?

However, as developers, we often think of "process" as a dirty word.  It's one of those bureaucratic things imposed on us by pointy-haired project managers, with their Gantt charts and their Microsoft Project.  Sure, those highfalutin "processes" sold by over-priced consultants might be useful for those huge, soulless "enterprise" shops that have thousands of know-nothing drones cranking out Java ports of mainframe COBOL systems.  But we don't need those!  We're "code poets", "10x developers", "coding ninjas", etc.  We can code circles around a dozen of those enterprisey jokers in our sleep! 

Except it doesn't really work that way.  Even if you accept the notion that the best developers are ten or more times more effective than average ones, they still have to deal with the complications of ill-defined features, requirements changes, integration problems, performance problems, communication between stakeholders, and the myriad other issues that emerge and magnify as more people become involved in a project.  If you expect to make it through any decent-sized project and not have it turn into a hot mess, you have to have some way of managing all the things that come up.  Some organizations, such as the aforementioned Java-drone shop, do that by implementing processes that require mountains of paperwork that serve primarily to cover everyone's butt.  Others take the agile approach.

Cover of "Agile!: The Good, the Hype, and the Ugly"If you've been living in a cave for the past decade or so, the "agile" movement is a response to the stereotypical "big, up-front" processes of old.  The values of the agile community are summed up in the Agile Manifesto, but the basic idea is to focus less on the bureaucratic paper-pushing and more on delivering what the customer needs.  This generally sounds good to developers because it means less time spent on TPS reports and more time actually building software.  And it sounds good to managers because the iterative nature of agile development means you get the software faster (for certain definitions of "faster").  And it sounds very good to consultants who can offer agile training and certification.  So, naturally, everyone is talking about it.

That's where Bertrand Meyer's book Agile!: The Good, the Hype, and the Ugly comes in.  Its stated purpose is to separate the wheat from the chaff; that is, to pick out what's good about agile methods and what can be safely disregarded.  Meyer says it is intended as an even-handed and fact-based assessment to offset the hyperbole that often surrounds discussions of agile.

Part of what attracted me to this book was simply the fact that it's by Bertrand Meyer.  I first became aware of his work when researching for my master's thesis.  He's probably best known as the originator of the idea of "design by contract" and the creator of Eiffel, and he has a strong footing in formal methods, which is not usually the type of thing you'd associate with agile.  In fact, some people might consider it the opposite of agile.

Overall, I found the book not only enlightening, but also highly entertaining.  Meyer writes in an informal and even playful style.  While he does provide plenty of evidence and citations, he is also not light on the sarcasm - he's not shy about calling ideas stupid and harmful if that's what he thinks.  Sadly, this last point somewhat undercuts his claim to impartiality.  The "opinion" sections are helpfully marked with a marginal icon and they're just all over the place.  And while I don't think that Meyer has any anti-agile ax to grind, I was definitely left with the impression that the preponderance of his commentary was critical of agile.

Now, to be fair, criticizing agile is not a bad thing - every movement needs critics to keep it honest.  If the agile advocates you find on the web are any indication, much of the material out there is very "rah, rah agile" and this book is a bit of a reality-check.  While Meyer does give the agile movement credit for a number of good and even brilliant ideas, advocating for the greatness of agile is not the point.  The important thing is to critically examine it and not just take the press releases at face value.

One of the nice features of this book's structure is that it has book-ended assessments.  That is, Meyer ends the first chapter with an initial assessment of agile and then finishes the book with his final assessment of the good, hype, and ugly.  Though only a couple of pages, I found the initial assessment actually quite good as far as putting things in perspective.  He begins with this quote (apocryphally attributed to Samuel Johnson):

Your work, Sir, is both new and good, but what is new is not good and what is good is not new.

To me, this sets the stage, highlighting that, regardless of the hype, not all of the "innovations" in agile are actually new.  In fact, some have been around in different forms for years or even decades.  Of course, not all the good ideas are old and not all the new ideas are bad, but the point is that agile isn't quite as ground-breaking as some of the hype would have you believe. 

I won't get into the details of Meyer's analysis, but there are a few interesting items I'd like to point out.  First is his point that agile proponents (and many others in the industry) tend to work on the assumption that there are only two development models: agile and waterfall.  They use "waterfall" as a synonym for "anything that's not agile," which is both inaccurate and unfair (page 31).  Inaccurate because "waterfall" is a specific lifecycle model proposed by Royce in the 1970s.  Unfair because there are many so-called "predictive" lifecycle models which are not the same as waterfall.  For example, the idea of iterative development has been around for a long time under other names, like the "spiral model".

Related to this, Meyer makes the interesting point that not all of the predictive processes out there actually preclude agile.  His big example of this is CMMI (page 44), which many (most?) of us think of as a super-heavy process that's only used by government defense contractors, and only because the government requires it.  However, he argues, if you really look at what CMMI is, there's no inherent contradiction with agile.  Of course, the marriage of the two is far from a no-brainer, but it is possible, as evidenced by a report on a Scrum team that was certified at CMMI level 5.

This is just part of what seems to be Meyer's biggest problem with agile methods - the deprecation of "big, up-front" tasks, which seems to be generalized to all up-front tasks.  He sees this as emblematic of a tendency in the agile movement to take a good idea and push it to the extreme.  It's invariably true that you can't get all the requirements right the first time, that the initial architecture won't be perfect, that the initial plan won't work out perfectly, etc.  Nobody's perfect and we can't know everything - that's just life.  But does that really mean we shouldn't plan any of that out ahead of time at all?  Isn't that just as good an argument for taking some more time to figure out a better plan up front?

My own experience the last few years has led me increasingly to sympathize with Meyer's views on this.  Like him, there are a lot of agile practices I think are great and helpful.  Short, time-boxed iterations with frequent deliveries are a great way to work and collect feedback; having a good product owner is priceless; and I don't think anybody really objects to heavy use of unit test suites and continuous integration anymore.  Let's keep those around.

But the more I think about and work with various "agile" processes, the clearer it becomes that there's a reason so many people seem to say that their organization "does Scrum," but not by the book - they have this and that modification, sometimes to the point that it no longer sounds like Scrum.  Some people interpret that as evidence that agile is nothing but hype.  After all, if nobody implements that actual process, then in what sense is it successful? 

Of course, that's not entirely fair because every team and organization is different so no process is going to be a perfect fit for everyone.  But on the other hand, is it entirely unfair either?  If hardly anyone seems to do things "by the book," could that be because the book is wrong?  Without more of the type of researched analysis that Meyer offers, we really have no way to know.

If you're interested in development processes, then Agile! The Good, the Hype, and the Ugly is definitely worth checking out.  I have to say that it really changed the way I think about agile processes.  For me, it really helped to put things in a wider perspective than the "good/new agile vs. bad/old waterfall" story that seems to dominate discussions.  And if nothing else, it's at least good to see some questioning and critical analysis of the agile mantra.

(Note: for a shorter read, Jim Bird has a very interesting take on agile in a similar vein to Meyer.)

Poster debugging

You know you've been working in software too long when you start finding bugs in the posters on your wall.

When I started my current job, this poster was hanging on the outside wall of my cubicle.
The Open Road: A History of Free Software
I'm not sure who originally put it there.  I never really paid much attention to it, but I'm a fan of free software, so I left it up.  And when we moved to our new building, I took it with me.  Only this time, I put it up right next to my desk.

While I was thinking about something the other day, I was staring blankly at this poster.  That's when I noticed the bug.  My eyes fell on the entry for 1977 and I said, "Wait a minute...."  For those who don't want to open up the full-size image, it says:

Bruce Perens writes the first draft of "The Open Source Definition" as "The Debian Free Software Guidelines." The subsequent OSD incorporates comments from Debian developers in a month-long e-mail conference, with Debian specific info removed.

Nice to know, but that definitely wasn't 1977.  According to Wikipedia, it was more like 1997.  Of course, that poster is ten years old and I doubt anyone cares anymore.  I just found it funny that I happened to notice that.  Apparently even printing bugs are shallow with enough eyes.

Line counting in Komodo

So as part of my ongoing professional improvement program, I've been working my way through PSP: A Self-Improvement Process for Software Engineers.  And by "working", I mean I'm doing the exercises from the SEI website and everything.  So far it's actually quite an interesting process - I'd definitely recommend it to any professional software developer, even if you're not interested in using the process, just as "food for thought".  You can pick up a relatively cheap used copy of the book on Amazon (it is technically a textbook, so new ones are a little pricey).  I'll have to write a post on it when I'm finished.

Anyway, the PSP uses line-of-code counting to estimate program size, defect densities, etc.  So I thought it would be nice to be able to run line-count reports right from within Komodo.  Fortunately, it turned out to be pretty easy.  I just lifted some code from Nathan Rijksen's "Open Terminal Here" macro as a starting point and went from there.  The macro simply takes the selected files or directories in your "places" pane and appends them to a custom command.  I've used cloc as my line-counting tool of choice for a number of years, but you can change the command to be whatever you want (even "wc -l" if you really want).  I also added in a little code to calculate and run from a common base directory, so that you don't get a full, absolute path for every item in the report.  The command output is sent to the Komodo output pane.

The code for the macro is below.  Or, if you're feeling lazy, you can just download the tool here and drop it in your toolbox.

/**
 * Adds a "Count LOC" menu item to items in the Places widget.
 * This will run a line-counter on the selected paths and display the command output.
 * Based on the "Open Terminal Here" macro by Nathan Rijksen.
 *
 * Usage: Update the "command" variable to contain whatever command you want to run.  The paths to the files selected
 * in the places pane will be appended to this command.
 *
 * If the "use_common_directory" variable is true, then the macro will calculate the deepest common directory of all
 * the selected files and will run the command from that directory, passing relative paths.  For example, if you
 * select files /foo/bar/baz/buzz.js and /foo/bar/fizz/fuzz.css, the command will be run from /foo/bar and will be
 * passed the paths baz/buzz.js and fizz/fuzz.css.
 * If this variable is set to false, then the command will be passed the full, absolute paths to all files and no
 * working directory will be specified for it.
 *
 * @author Peter Geer
 * @contributor Nathan Rijksen
 * @contributor Mathieu Strauch
 * @version 0.1
 */
/*global ko, extensions:true */

// Register namespace
if ((typeof extensions) == 'undefined') {
    extensions = {};
}
extensions.CountLOC = {};

(function() {
    
    var command = "cloc --by-file --force-lang=PHP,phtml",
        use_common_directory = true,
        label = 'Count LOC',
        id = 'contextCountLOC',
        sibling,
        d,
        mi,
        callback,
        longest_common_path;
    
    // Find the deepest directory that is a prefix of every path in the list.
    longest_common_path = function(list) {
        var longest_item = list[0].substr(0, list[0].lastIndexOf('/')),
            i = 0;
        for (i = 0; i < list.length; i++) {
            // Trim the candidate until it is a prefix of the current path.
            while (longest_item !== '' && list[i].indexOf(longest_item) !== 0) {
                longest_item = longest_item.substr(0, longest_item.lastIndexOf('/'));
            }
            if (longest_item === '') {
                break;
            }
        }
        return longest_item;
    };
    
    callback = function(e) {
        var i = 0,
            cmd = command,
            curr_dir = null,
            uris = ko.places.viewMgr.getSelectedURIs();
        
        if (uris.length === 0) {
            return;
        }
        
        // Clean up the URIs.
        for (i = 0; i < uris.length; i++) {
            uris[i] = uris[i].replace(/^[a-zA-Z]+:\/\//,'');
            if (uris[i].match(/^\/[a-zA-Z]:\//)) {
                uris[i] = uris[i].substr(1);
            }
        }
        
        // If set, turn the absolute paths into relative paths.
        if (use_common_directory) {
            curr_dir = longest_common_path(uris);
            if (curr_dir !== '') {
                for (i = 0; i < uris.length; i++) {
                    uris[i] = uris[i].substr(curr_dir.length + 1);
                }
            } else {
                curr_dir = null;
            }
        }

        // Prepare command for each platform
        for (i = 0; i < uris.length; i++) {
            cmd += ' ' + uris[i];
        }

        // Run command, show output in bottom pane
        ko.run.command(cmd, {cwd: curr_dir, runIn: 'command-output-window'});
    };
    
    // Get places pane document object
    d = document.getElementById('placesViewbox').contentDocument;

    // Remove existing menu entry if it exists
    mi = d.getElementById(id);
    if (mi) {
        mi.parentNode.removeChild(mi);
    }

    // Get the sibling element before which we want to insert our menu item
    sibling = d.getElementById('placesContextMenu_rename');
    
    // Create our menu item
    mi = document.createElement("menuitem");
    mi.setAttribute("id", id);
    mi.setAttribute("label", label);

    // Add event listener for when the menu item is used
    mi.addEventListener('command', callback);

    // Append menu item to popupmenu
    if (sibling && sibling.parentNode) {
        sibling.parentNode.insertBefore(mi, sibling);
    }

}.apply(extensions.CountLOC));

Lying with statistics?

Slashdot recently ran a story on why so many tech workers dislike their jobs.  The article links to a survey by TinyPulse, a company that measures employee engagement.  The thrust of the article was that people in IT are less happy and engaged in their jobs than their fellow employees in other departments. 

The article itself was somewhat interesting.  It showed a number of graphs depicting IT pros reporting reduced engagement as compared to people in other departments.  However, in looking at the graphs and the numbers, I got the distinct impression that somebody at TinyPulse was trying to pump up business by manufacturing a crisis.  For example, consider this screenshot of the graph (captured 2015-09-21):

This is the first graph in the article, and the way it's shown is literally straight out of How to Lie with Statistics.  No, seriously - read the book.  One of the ways to lie that it discusses is to use a bar graph with a discontinuous scale.  In this case, the value axis appears to start at about 17%, which is why the bar on the right is about three times as tall as the one on the left, even though it only represents a 3% difference.  (With a baseline of 17%, bars for, say, 18.5% and 21.5% span 1.5 and 4.5 units - triple the height for a three-point difference in value.)

To their credit, TinyPulse didn't do that with any of the other graphs in the report, so maybe it was just a garden-variety screw-up.  It could just be that somebody pushed the wrong button in the graph-generation tool and the result got pushed out.  Who knows?  You'd think a company that does statistics as part of its core business would be a little more careful, but hey, things like that happen.

The problem is that when you notice something like that, it immediately puts you on your guard and makes you suspicious of the rest of the data.  Of course, that's assuming you know to look for it, which most people probably don't.  And that's the beauty of lying with statistics - you don't have to actually lie.  Why risk an out-and-out lie when you can just be careful in your presentation and trust that the vast majority of the people will draw the conclusion you want them to draw rather than the one that's actually supported by the presented data?

Night theme in TT-RSS

Just a quick note on a small customization to Tiny Tiny RSS.  If you're not aware of TT-RSS, it's the online RSS aggregator that I switched to after Google Reader closed down.  It's got all the features I care about, has a fairly nice web UI and there are mobile apps and mobile-web front-ends available.  And despite what it says in the official system requirements about needing a dedicated server, it works perfectly on my el cheapo shared hosting account - wasn't even hard to set up.

Anyway, I recently switched the configuration for the web viewer from the "default" theme to the "night" theme, which is a white-on-black color scheme.  I decided to try that because it matches better with a lot of the development tools I'm using (Komodo IDE, Visual Studio, command prompts - all light text on dark backgrounds).  The only problem is that, unlike the default theme, which shows headlines of posts you've read in a different color, the night theme doesn't visually differentiate read and unread posts.

Fortunately, this is easy to fix.  You can just open up the preferences and there's an option to customize your stylesheet.  To change the highlight color, copy something like the following into the custom CSS box.

body#ttrssMain .hl .hlTitle a {
    color: #878787;  /* A slightly darker gray to use for "read" posts. */
}

body#ttrssMain .hl.active .hlTitle a,
body#ttrssMain .hl.Unread .hlTitle a {
    color: #CCC;  /* Normal headline color for the "night" theme. */
}

JSHint regex warnings

Note to self: when getting regular expression warnings from JSHint, remember the inline option for disabling them.

/*jshint regexp: false */

This is useful for adding to the top of a function definition when you want to use "unsafe" regular expressions.  Apparently the idea is that using an unescaped "." in your regular expressions can match more data than you intend and potentially lead to bad validation and insecure applications.  So it's actually a good thing that JSHint checks for this.

On the other hand, in my case the regex in question was just a simple extraction of the extension from a file name.  Since I was comparing the result to a white-list and substituting a known default if it wasn't found, there wasn't really any serious risk.  I just wanted JSHint to shut up.
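
For reference, the shape of what I was doing looked something like this (the whitelist and default here are placeholders, not my actual values):

function getExtension(filename) {
    /*jshint regexp: false */
    // The unescaped "." in ".*" is what JSHint flags as potentially unsafe.
    var match = /^.*\.(\w+)$/.exec(filename),
        ext = match ? match[1].toLowerCase() : '',
        whitelist = ['jpg', 'jpeg', 'png', 'gif'];  // placeholder whitelist
    // Anything not in the whitelist falls back to a known default,
    // so an over-eager match can't do any real damage.
    return whitelist.indexOf(ext) >= 0 ? ext : 'png';
}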

The Mythical Man-Month

I've decided it's time to do a little professional development.  In other words, stop putzing around with whatever new technology is "cool" these days (grumble, grumble kids with their Snapchats, and their Node.js, and their Dan Fogelberg) and get back to fundamentals.  That means design and architecture, development process, project management, quality control, documentation, and all those other things that are important, but not even remotely "hip".

To that end, I'll be doing some reading.  The first book on my list has been on my bookshelf for about 13 years - The Mythical Man-Month, by Fred Brooks.  I bought and started reading the 20th anniversary edition over a decade ago, but never got around to finishing it. 

In retrospect, I think I was just too inexperienced at the time to really get the significance of the book.  After all, it might be a classic, but it's an old book (written in 1975) about an old project (much of it relates to developing OS/360).  How relevant could it be?  Especially once you get past the punch line of why the man-month is mythical - which is revealed in chapter two.

Now, after almost 14 years in tech, it's easy to see just how brilliant a book it is.  While there is some discussion of obsolete methods and technologies, that's beside the point.  The Mythical Man-Month is still relevant because it isn't really about specific development techniques or technologies.  It's about the complications of managing and communicating with lots of people on a large project.  And while we've come a long way as an industry in the last 40 years, the "human nature" part of the equation hasn't really changed appreciably.

One of the nice things about this book is that Brooks clearly gets what it means to be a developer.  He starts the book with an essay on the joys and woes of the craft of programming, all of which are as relevant today as they were 40 years ago.  There are a number of other such insights sprinkled throughout the book.  My personal favorite is his statement that all programmers are optimists.  When you think about the process of estimation and how that usually turns out, this is the sort of insight that seems so blindingly obvious that you're surprised you didn't think of it yourself.

The third chapter, "The Surgical Team," was particularly interesting to me.  The proposal is essentially to treat building software less like building a bridge and more like performing surgery.  So, in other words, you'd have one senior developer act as the "surgeon," with him and a more junior assistant performing all of the deliverable work.  The rest of the team supports him, doing the project management work, building supporting tools, serving as expert consultants, etc.  Since one top-notch programmer is supposedly ten times as productive as a mediocre programmer, having one of them do the work and nine other people support them gives you the same total productivity while reducing communication problems and maintaining design consistency.

One of Brooks' themes throughout the book is that the way to manage complexity is to maintain consistency of architectural vision.  That is, the overall system architecture should reflect as few viewpoints as possible, rather than being a mish-mash of the viewpoints of everyone who worked on it.  This plays into another major issue Brooks discusses: communication cost.  This is part of the reason the man-month is mythical - not only does adding new people to a project add ramp-up time, it also increases the communication cost because you now have more people to keep in the loop.  As Brooks points out, the number of communication channels grows much faster than the head count: n people can have as many as n(n-1)/2 pairwise channels, so a team of five has 10 possible channels while a team of ten has 45.

I think one of the best things about the 20th anniversary edition is the added chapters.  For starters, they include Brooks' classic paper No Silver Bullet - Essence and Accidents of Software Engineering, which is a good read.  It also includes a chapter of analysis and reflection, in which Brooks discusses not only the legacy of the book, but also what he got right and wrong.  The things he got wrong are particularly interesting.  They include the classic "build one to throw away" quote, which Brooks says is wrong "not because it is too radical, but because it is too simplistic," being rooted in the waterfall model that was popular in the 1970s, as well as his notion of communicating the full project workbook to everyone.

Overall, The Mythical Man-Month is a very engaging and surprisingly easy read, especially given the volume of references.  While it contains some details that now seem like quaint digressions into the history of computing, the majority of the material is still relevant.  It definitely contains useful insights for anyone who is interested in the dynamics of running a large project.  I'm very glad I came back to it.