Grow up, JavaScript

The other week, somebody posted this article by Jared White in one of the chats at work.  It decries the "shocking immaturity" of the ecosystem around JavaScript and Node.js.

I mean...yeah.  But it's not like this is news.  The Node ecosystem has been messed up for years.  Remember the left-pad debacle?  That was five years ago.  It's pretty clear that the ecosystem was messed up then.  So I guess this article just tells us that not much has changed.

To be fair, a lot of the stuff Jared complains about isn't really specific to the JavaScript ecosystem.  I've also been in the industry for 20 years and I can say from experience that bugs and hype are endemic to most of the industry and have been for quite some time.  For example, in the early days of Rails, I remember seeing a million variations on the "build your own blog in 10 minutes with Ruby on Rails" tutorials.  And yes, that's fine, you can make a simple demo app in 10 minutes.  Whoop-de-doo.  In reality, what that usually means is that on a two-month project you've saved yourself...maybe a day or two at most.  There are lots of tools and frameworks in lots of language ecosystems that are grossly over-hyped - it's almost standard practice in the industry.

As for bugs, I can't speak to Jared's experience.  In any software ecosystem, bugs are obviously common and not mentioning them is almost de rigueur.  I mean, if you're developing a framework or library, of course you're not going to advertise the bugs and limitations of your tool.  You want people to use it, not be scared away.  But I'm willing to accept Jared's assertion that the JavaScript world is uniquely bad.  I know my experience of client-side JS libraries is...not fantastic in terms of reliability or documentation.  So while I'm not sure he's right, I wouldn't be surprised if he was.

I do think his point about the learning curve is interesting and valid, though I don't know that it relates specifically to bugs.  I haven't gotten deep into many of the fancy new JavaScript frameworks, but they do seem to be staggeringly complex.  I started working with JavaScript way back in 2005, when all you needed to do was save your code in a text file, open that file up in a browser, and see what it did.  It was extremely simple and the bar to entry was ridiculously low.  Then, a few years ago, I decided to try out React, since that's the big new thing.  Just to do "Hello, World!", I had to get my head around their weird template syntax, install a transpiler, and run some kind of server process (I don't even remember - maybe that's changed by now).  And when I saw that work, I quit because I had actual work to do.  It's hard for me to remember what it was like to be a beginner, but I can imagine that this kind of on-ramp would be pretty daunting, even with the dumbed-down tutorials.  Heck, it seemed like kind of a lot to me, and I'm an experienced professional!

Honestly, I kind of wonder how much of the problems Jared is seeing stem from the "youth" of the JavaScript ecosystem.  I'm not talking about the language, of course.  I'm thinking more of the historical and cultural part of the ecosystem.  Consider:

  1. While JavaScript has been around for 25 years, it was widely considered a "toy" language for the first 10 years or so.  Remember - JavaScript came out in 1995, but jQuery didn't come along until 2005.  And these days, building your site on jQuery is the equivalent of building your house out of mud and sticks.
  2. In the roughly 15 years of JavaScript's non-toy lifespan, there's been a lot of churn in the web space.  And during much of that time, it was considered important for many businesses to support legacy web browsers.  I remember many times having to stop myself from using "new" features of JavaScript because, well, we still have to support old versions of Internet Explorer.  Yeah, nobody cares about that anymore (thank God), but it wasn't all that long ago that they did.  Heck, I remember getting yelled at in 2015 because I forgot to test something in IE9, which was released in 2010!
  3. From 1 and 2 above, it's clear that, in terms of the evolution of the ecosystem, the 25-year history of JavaScript is really a lot less than 25 years.  In fact, it's probably only within the last five years or so that we've managed to shake off most of the legacy cruft and get adoption of modern stuff beyond the handful of early-adopters.
  4. On the cultural front, it's been my experience that a lot of young people these days get into coding through web development.  This is not a bad thing - everybody has to start somewhere and the web is a relatively accessible and popular medium.  But it's also my experience that a lot of the people who create open-source tools and libraries are younger, because they're less likely to have families and other obligations and hence more likely to have the free time.  Again, this is not bad, but it means that the people writing the tools are disproportionately likely to be less experienced.
  5. So while there are plenty of things in the JavaScript world that are old enough that they should be mature, we can see from 3 and 4 that this might not necessarily be the case.  When a tool is developed by relatively inexperienced coders and hasn't been widely used outside a relatively small circle for very long, it shouldn't come as a surprise when it has some issues.

Of course, I'm just spit-balling here - I could be completely wrong.  But the point is that developing a stable ecosystem takes time, and the JavaScript ecosystem hasn't actually had as much real time to develop as the calendar suggests.  I mean, there's still a hot new framework coming out every other week, so it doesn't seem like the ecosystem has even finished stabilizing yet.  Maybe in a few years things will settle down more and quality will improve.

Or maybe not.  We'll see.  In the meantime, we just have to make do with what we have.

LnBlog Refactoring Step 3: Uploads and drafts

It's time for the third, slightly shorter, installment of my ongoing series on refactoring my blogging software.  In the first part, I discussed reworking how post publication was done and in the second part I talked about reworking things to add Webmention support.  This time, we're going to talk about two mini-projects to improve the UI for editing posts.

This improvement is, I'm slightly sad to say, pretty boring.  It basically involves fixing a "bug" that's really an artifact of some very old design choices.  These choices led to the existing implementation behaving in unexpected ways when the workflow changed.

The Problem

Originally LnBlog was pretty basic and written almost entirely in HTML and PHP, i.e. there was no JavaScript to speak of.  You wrote posts in raw HTML in a text area box, using "auto-markup", which just automatically linkified things, or using "LBCode", which is my own bastardized version of the BBCode markup that used to be popular on web forums.  I had implemented some plugins to support WYSIWYG post editors, but I didn't really use them and they didn't get much love.

The old LnBlog post editor

Well, I eventually got tired of writing in LBCode and switched to composing all my posts using the TinyMCE plugin.  That is now the standard way to compose your posts in LnBlog.  The problem is that the existing workflow wasn't really designed for WYSIWYG composition.

In the old model, the idea was that you could compose your entire post on the entry editing page, hit "publish", and it would all be submitted to the server in one go.  There's also a "review" button which renders your post as it would appear when published and a "save draft" button to save your work for later.  These also assume that submitting the post is an all-or-nothing operation.  So if you got part way done with your post and decided you didn't like it, you could just leave the page and nothing would be saved to the server.

At this point it is also worth noting how LnBlog stores its data.  Everything is file-based and entries are self-contained.  That means that each entry has a directory and that directory contains all the post data, comments, and uploaded files that belong to that entry.

What's the problem with this?  Well, to have meaningful WYSIWYG editing, you need to be able to do things like upload a file and then be able to see it in the post editor.  In the old workflow, you'd have to write your post, insert an image tag with the file name of your picture (which would not render), add your picture as an upload, save the entry (either by saving the draft or using the "preview", which would trigger a save if you had uploads), and then go back to editing your post.  This was an unacceptably clunky workflow.

On top of this, there was a further problem.  Even after you previewed your post, it still wouldn't render correctly in the WYSIWYG editor.  That's because the relative URLs were inconsistent.  The uploaded files got stored in a special, segregated draft directory, but the post editor page itself was not relative to that directory, so TinyMCE didn't have the right path to render it.  And you can't use an absolute URL because the URL will change after the post is published.

So there were two semi-related tasks to fix this.  The first was to introduce a better upload mechanism.  The old one was just a regular <input type="file"> box, which worked but wasn't especially user-friendly.  The second one was to fix things such that TinyMCE could consistently render the correct URL for any files we uploaded.

The solution - Design

The actual solution to this problem was not so much in the code as it was in changing the design.  The first part was simple: fix the clunky old upload process by introducing a more modern JavaScript widget to do the uploads.  So after looking at some alternatives, I decided to implement Dropzone.js as the standard upload mechanism.

The new, more modern LnBlog post editor.
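
For reference, wiring up Dropzone.js only takes a few lines.  Something along these lines, where the element ID and upload endpoint are illustrative rather than LnBlog's actual values:

Dropzone.autoDiscover = false;

var uploader = new Dropzone("#entry-uploads", {
    url: "entry.php?action=upload",  // endpoint that accepts the file POST
    maxFilesize: 10                  // in MB; reject anything bigger client-side
});

// Once a file finishes uploading, we know its name and can reference it in the post.
uploader.on("success", function (file) {
    console.log("Uploaded " + file.name);
});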

The second part involved changing the workflow for writing and publishing posts.  The result was a somewhat simpler and more consistent workflow that reduces the number of branches in the code.  In the old workflow, you had the following possible cases when submitting a post to the server:

  1. New post being published (nothing saved yet).
  2. New post being saved as a draft (nothing saved yet).
  3. Existing draft post being published.
  4. Existing draft post being saved.
  5. New (not yet saved) post being previewed with attached files.
  6. Existing draft post being previewed with attached files.

This is kind of a lot of cases.  Too many, in fact.  Publishing and saving were slightly different depending on whether or not the entry already existed, and then there were the preview cases.  Those were necessary because an entry previewed with new attachments required extra processing - after all, if you attached an image, you'd want to see it in the preview.  So this complexity was a minor problem in and of itself.

So the solution was to change the workflow such that all of these are no longer special cases.  I did this by simply issuing the decree that all draft entries shall always already exist.  In other words, just create a new draft when we first open the new post editor.  This does two things for us:

  1. It allows us to solve the "relative URL" problem because now we can make the draft editing URL always relative to the draft storage directory.
  2. It eliminates some of those special cases.  If the draft always exists, then "publish new post" and "publish existing draft" are effectively the same operation.  When combined with the modern upload widget, this also eliminates the need for the special "preview" cases.

The implementation - Results

I won't get into the actual implementation details of these tasks because, frankly, they're not very interesting.  There aren't any good lessons or generalizations to take from the code - it's mostly just adapting the idiosyncratic stuff that was already there.

The implementation was also small and went fairly smoothly.  The upload widget was actually the hard part - there were a bunch of minor issues in the process of integrating that.  There were some issues with the other part as well, but less serious.  Much of it was just integration issues that weren't necessarily expected and would have been hard to foresee.  You know, the kind of thing you expect from legacy code.  Here's some stats from Process Dashboard:

Project                         File Upload   Draft always exists
Hours to complete (planned):    4:13          3:00
Hours to complete (actual):     7:49          5:23
LOC changed/added (planned):    210           135
LOC changed/added (actual):     141           182
Defects/KLOC (found in test):   42.6          27.5
Defects/KLOC (total):           81.5          44.0

As you can see, my estimates here were not great.  The upload part involved more trial and error with Dropzone.js than I had expected and ended up with more bugs.  The draft workflow change went better, but I ended up spending more time on the design than I initially anticipated.  However, these tasks both had a lot of unknowns, so I didn't really expect the estimates to be that accurate.

Takeaway

The interesting thing about this project was not so much what needed to be done but why it needed to be done. 

Editing posts is obviously a fundamental function of a blog, and it's one that I originally wrote way back in 2005.  It's worth remembering that the web was a very different place back then.  Internet Explorer was still the leading web browser; PHP 5 was still brand new; it wasn't yet considered "safe" to just use JavaScript for everything (because, hey, people might not have JavaScript enabled); internet speeds were still pretty slow; and browsing on mobile devices was just starting to become feasible.  In that world, a lot of the design decisions I made at the time seemed pretty reasonable.

But, of course, the web evolved.  The modern web makes it much easier for the file upload workflow to be asynchronous, which offers a much nicer user experience.  By ditching some of the biases and assumptions of the old post editor, I was more easily able to update the interface.

One of the interesting things to note here is that changing the post editing workflow was easier than the alternatives.  Keeping the old workflow was by no means impossible.  I kicked around several ideas that didn't involve changing it.  However, most of those had other limitations or complications and I eventually decided that they would ultimately be more work.  

This is something that comes up with some regularity when working with an older code-base.  It often happens that the assumptions baked into the architecture don't age well as the world around the application progresses.  Thus, when you need to finally "fix" that aspect of the app, you end up having to do a bit of cost-benefit analysis.  Is it better to re-vamp this part of the application?  Or should you shim in the new features in a kinda-hacky-but-it-works sort of way?

While, as developers, our first instinct is usually to do the "real" fix and replace the old thing, the "correct" answer is seldom so straight-forward.  In this case, the "real" fix was relatively small and straight-forward.  But in other cases, the old assumptions are smeared through the entire application and trying to remove them becomes a nightmare.  It might take weeks or months to make a relatively simple change, and then weeks or months after that to deal with all the unforeseen fallout of that change.  Is that worth the effort?  It probably depends on what the "real" fix buys you.

I had a project at work once that was a great example of that.  On the surface, the request was a simple "I want to be able to update this field", where the field in question was data that was generally but not necessarily static. In most systems, this would be as simple as adding a UI to edit that field and having it update the datastore.  But in this case, that field was used internally as the unique identifier and was used that way across a number of different systems.  So this assumption was everywhere.  Everybody knew this was a terrible design, but it had been that way for a decade and was such a huge pain to fix that we had been putting it off for years.  When we finally bit the bullet and did it right, unraveling the baked-in assumptions about this piece of data took an entire team over a month.  At an extremely conservative estimate, that's well over $25,000 to fix "make this field updatable".  That's a pretty hefty price tag for something that seems so trivial.

The point is, old applications tend to have lots of weird, esoteric design decisions and implementation-specific issues that constrain them.  Sometimes removing these constraints is simple and straight-forward.  Sometimes it's not.  And without full context, it's often hard to tell which it will be.  So whenever possible, try to have pity on the future maintenance programmer who will be working on your system and anticipate those kinds of issues.  After all, that programmer might be you.

JavaScript unit testing

Today I watched a very interesting talk on good and bad unit testing practices by Roy Osherove, entitled "Unit Testing Good Practices & Horrible Mistakes in JS".  It really was an exceptional talk, with some good, concrete advice and examples.  I highly recommend it.

I actually recognized a couple of the examples in the talk.  The big one was OpenLayers.  I still remember being horrified by their test suite when I was using that library at my last job.  We had a number of extensions to OpenLayers and I was thinking about adding some tests into their test suite.  Then I ran the tests and saw a bunch of failures.  Assuming our code must be bad, I tried a fresh copy of the OpenLayers release.  A bunch of tests still failed.  And that's when I gave up and decided to just pretend their tests didn't exist.

If nothing else, the talk provided some nice validation that my JavaScript unit testing practices aren't completely wrong and stupid.  I've been writing unit tests for my JavaScript as part of my standard process for the last six months or so, and I was actually watching the video waiting for the hammer to drop, for Roy to describe an anti-pattern that I'd been blissfully propagating.  I was pleased and somewhat surprised when that moment didn't come.

Access to local storage denied

I ran into an interesting issue today while testing out an ownCloud installation on a new company laptop running Windows 10.  When trying to open the site in IE11, JavaScript would just die.  And I mean die hard.  Basically nothing on the page worked at all.  Yet it was perfectly fine in Edge, Firefox, and Chrome.

The problem was an "Access denied" message on a seemingly innocuous line of code.  It was a test for local storage support.  The exact line was:

if (typeof localStorage !== "undefined" && localStorage !== null) {

Not much that could go wrong with that line, right?  Wrong!  I double-checked in the developer console, and it turned out that typeof localStorage was returning "unknown".  So the first part of that condition was actually true.  And attempting to actually use localStorage in any way resulted in an "Access denied" error.

A little Googling turned up this post on StackOverflow.  It turns out this can be caused by an obscure error in file security on your user profile.  Who knew?  The problem was easily fixed by opening up cmd.exe and running the command:
icacls %userprofile%\Appdata\LocalLow /t /setintegritylevel (OI)(CI)L
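
For what it's worth, the more robust way to feature-detect local storage is to wrap an actual read/write in a try/catch rather than trusting typeof alone.  A minimal sketch of that kind of check:

function localStorageAvailable() {
    // typeof alone isn't enough: IE can report "unknown" and then throw
    // "Access denied" the moment you actually touch localStorage.
    try {
        var key = "__storage_test__";
        window.localStorage.setItem(key, key);
        window.localStorage.removeItem(key);
        return true;
    } catch (e) {
        return false;
    }
}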

Nope, I don't know JS

I've been writing software for a living for the last 15 years.  I've been doing mostly full-stack web development for nine of those.  That means I've written my fair share of JavaScript code.  But you know what?  It turns out I really don't know JS.  I thought I did.  But I don't.

I reached this conclusion after reading the first two and a half of the six books in Kyle Simpson's You Don't Know JS series.  Normally, I'd wait until I was finished with the series to blog about it, but seriously, this is good stuff.  If you think you have a decent grasp of JavaScript, you should read this to test your mettle.

Forget the W3Schools tutorials or jQuery guides you might have read to learn JavaScript in the first place.  This is way beyond that.  The goal of the You Don't Know JS series is not to teach you "how to code JavaScript" but rather to help you master JavaScript.  It's a deep-dive into the guts of JavaScript - not the subset most of us are used to, but what's actually spelled out in the ECMAScript specifications. 

The beautiful thing about this series is that it's not about expanding your catalog of technical tricks.  Most of it (well, of the 2.5 books I've read so far, anyway) is about understanding the fundamentals in a deep way.  For example, going beyond "oh, this in JavaScript is weird" and actually understanding the rules behind how the dynamic binding of this works and how it differs from the lexical scope used for everything else.  Things like that are easy to gloss over.  After all, you don't really need to know the gory details of how this is bound in order to write code and be productive.  But these kinds of things really are important.  They're the difference between "knowing" JavaScript and knowing JavaScript. 
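
To pick one concrete example of my own (not the book's): the exact same function can get a completely different this depending on nothing but its call site.

"use strict";

var counter = {
    count: 0,
    increment: function () {
        this.count += 1;
        return this.count;
    }
};

counter.increment();           // 1 - `this` is counter, because of the call site
var inc = counter.increment;
inc();                         // TypeError - called "bare", so `this` is undefined in strict mode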

To put it another way, there's more to the craft of building software than just "getting the job done".  For some people, just "getting it done" is sufficient - and that's fine: they're satisfied to remain journeymen.  But for some people, that's not enough - they want to be master craftsmen.  This series is for them.

JSHint regex warnings

Note to self: when getting regular expression warnings from JSHint, remember the inline option for disabling them.

/*jshint regexp: false */

This is useful for adding to the top of a function definition when you want to use "unsafe" regular expressions.  Apparently the idea is that using an unescaped "." in your regular expressions can match more data than you intend and potentially lead to bad validation and insecure applications.  So it's actually a good thing that JSHint checks for this.

On the other hand, in my case the regex in question was just a simple extraction of the extension from a file name.  Since I was comparing the result to a white-list and substituting a known default if it wasn't found, there wasn't really any serious risk.  I just wanted JSHint to shut up.
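
For reference, the usage looks roughly like this (the function and whitelist are illustrative, not the actual LnBlog code):

function getExtension(filename) {
    /*jshint regexp: false */
    // The unescaped "." is what JSHint objects to.  Since the result is
    // checked against a whitelist and falls back to a known default, the
    // "unsafe" match is harmless here.
    var match = /\.(.*)$/.exec(filename),
        ext = match ? match[1].toLowerCase() : "";
    return ["jpg", "png", "gif"].indexOf(ext) !== -1 ? ext : "png";
}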

Reference project root in command

Continuing on the Komodo macro theme for this week, here's another little macro template that might come in handy.  This one is just a quick outline for how to reference your project's root directory when running a command.

As you may know, Komodo's commands support a variety of interpolation variables to do things like insert file paths and other input into your commands.  The problem is that there's no variable to get the base directory of your current project - by which I mean the "project base directory" that you can set in your project properties.  Oh, sure, there's the %p and %P variables that work on the current project, but they don't get the project base path.  They get the path to the project file and the directory in which the project file is contained.  That's fine when your project file lives in the root of your source tree, but if you want to put your project files in another location, it doesn't help.

Sadly, there is currently no way to get this path using the interpolation variables.  However, it's pretty easy to get with a macro.  The only problem with that is that the macro syntax is a little odd and the documentation is somewhat lacking.  The documentation for ko.run.runCommand() does little more than give the function signature, which is bad because there are 17 parameters to that function, and it's not entirely clear which are required and what the valid values are.  Luckily, when you create a command, the data is stored in JSON format with keys that more or less match the parameter names to runCommand(), so you can pretty much figure it out by creating the command as you'd like it and then opening the command file up in an editor tab to examine the JSON.

Anyway, here's a macro template.  Needless to say, you can substitute in the project base directory at the appropriate place for your needs.  In my case, I needed it in the working directory.

var partSvc = Cc["@activestate.com/koPartService;1"].getService(Ci.koIPartService),
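    // liveDirectory holds the "project base directory" set in the project's properties.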
    baseDir = partSvc.currentProject.liveDirectory,
    dir = baseDir + '\\path\\to\\OpenLayers\\build',
    cmd = 'python build.py';
ko.run.runCommand(window, cmd, dir, undefined, false, false, true, "command-output-window");

Better project commit macro

Several months ago, I posted a Komodo IDE macro to run a source control commit on the current project.  That was nice, but there was an issue with it: it only sort of worked. 

Basically, in some cases the SCC type of the project directory was never set.  In particular, if you focused on another window and then double-clicked the macro to invoke it, without touching anything else in Komodo, it wouldn't work.  While this scenario sounds like an edge-case, it turns out to be infuriatingly common, especially when you use multiple monitors.  The obvious example is:

  1. Make some changes to your web app in Komodo.
  2. Switch focus to a browser window and test them out.
  3. See that everything works correctly.
  4. Double click the commit macro to commit the changes.
  5. Wait a while and then curse when the commit window never comes up.

I'm no Komodo expert, so I'm not sure exactly what the problem was.  What I did eventually figure out, though, is that Komodo's SCC API doesn't seem to like dealing with directories.  It prefers to deal with files.  And it turns out that, if you're only dealing with a single file, the "commit" window code will search that file's directory for other SCC items to work with.

So here's an improved version of the same macro.  This time, it grabs the project root and looks for a regular file in it that's under source control.  It then proceeds in the same way as the old one, except that it's much more reliable.

(function() {
    "use strict";
    
    // Find a file in the project root and use it to get the SCC type.  If we
    // don't find any files, just try it on the directory itself.
    // TODO: Maybe do a recursive search in case the top-level has no files.
    function getSccType(url, path) {
        var os = Components.classes["@activestate.com/koOs;1"]
                           .getService(Components.interfaces.koIOs),
            ospath = Components.classes["@activestate.com/koOsPath;1"]
                               .getService(Components.interfaces.koIOsPath),
            fileSvc = Components.classes["@activestate.com/koFileService;1"]
                                .getService(Components.interfaces.koIFileService),
            files = os.listdir(path, {}),
            kofile = null;
        // First look for a file, because that always seems to work
        for (var i = 0; i < files.length; i++) {
            var furi = url + '/' + files[i],
                fpath = ospath.join(path, files[i]);
            if (ospath.isfile(fpath)) {
                kofile = fileSvc.getFileFromURI(furi);
                if (kofile.sccDirType) {
                    return kofile.sccDirType;
                }
            }
        }
        // If we didn't find a file, just try the directory.  However, this
        // sometimes fails for no discernable reason.
        kofile = fileSvc.getFileFromURI(url);
        return kofile.sccDirType;
    }
    
    var curr_project_url =  ko.projects.manager.getCurrentProject().importDirectoryURI,
        curr_project_path = ko.projects.manager.getCurrentProject().importDirectoryLocalPath,
        count = 0;
    
    // HACK: For some reason, the SCC type on directories doesn't populate
    // immediately.  I don't know why.  However, it seems to work properly on
    // files, which is good enough.
    var runner = function() {
        var scc_type = getSccType(curr_project_url, curr_project_path),
            cid = "@activestate.com/koSCC?type=" + scc_type + ";1",
            fileSvc = Components.classes["@activestate.com/koFileService;1"]
                                .getService(Components.interfaces.koIFileService),
            kodir = fileSvc.getFileFromURI(curr_project_url),
            sccSvc = null;
            
        if (scc_type) {
            // Get the koISCC service object
            sccSvc = Components.classes[cid].getService(Components.interfaces.koISCC);
            
            if (!sccSvc || !sccSvc.isFunctional) {
                alert("Didn't get back a functional SCC service. :( ");
            } else {
                ko.scc.Commit(sccSvc, [curr_project_url]);
            }
        
        } else if (count < 50) { // Just in case this never actually works....
            count += 1;
            setTimeout(runner, 100);
        } else {
            alert('Project directory never got a valid SCC type.');
        }
    };
    
    runner();
}());

Quickie TDD with Jasmine and Komodo

I'm currently on my annual "this time I'm really going to start doing test-driven development (or at least something close to it)" kick.  And it seems to be going pretty well, so hopefully this time it will actually stick. 

As I said, I do this every year and testing usually ends up falling by the wayside.  Granted, this is partly just due to a lack of discipline and commitment on my part.  Getting started with a new practice takes effort, and sometimes it's hard to justify that effort when "I've been doing just fine without it for years."  But there's also the matter of the type of projects I work on.  Most of the projects I've worked on have not had an established test suite or a culture of testing.  Whether it's a work-related project or an old personal project from before I heard the gospel of TDD, the norm has been a sizable, years-old code-base that has few if any tests and isn't really designed to be testable in the first place.

Getting into TDD with a project like that can be daunting.  Picture PHP code littered with static method calls, system calls, and direct class instantiations at multiple levels.  "You need to inject a mock object?  Yeah, good luck with that."  The classes may be well-defined, but there's not much compartmentalization of responsibilities within them, so inserting test doubles is not always straight-forward.  That leaves you in the unenviable position of either having to rewrite a bunch of existing code to make it unit-testable or set up some real test data and turn the unit test into an integration test.  The first option is obviously preferable, but can be much more risky, especially since you don't already have tests to validate the behavior of the code you need to change.  And while the second approach is certainly better than no tests at all, integration tests are slow to run, cumbersome to set up, and much more prone to breakage when things change.  Faced with a mess like that, it doesn't seem that unreasonable to say, "You know what?  I've got things I need to get done.  I'll get back to those tests later."  But, of course, you never actually do.
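
The underlying issue is easier to see in a stripped-down example (my own, and in JavaScript rather than PHP, but the principle is the same): when a dependency is hard-wired, there's simply no seam to slip a test double into.

// Hard to test: the mailer is baked in, so every test hits the real thing.
// (SmtpMailer is a hypothetical class, just for illustration.)
function notifyUserHardWired(user) {
    var mailer = new SmtpMailer("smtp.example.com");
    mailer.send(user.email, "Welcome!");
}

// Easier to test: the dependency is passed in, so a test can hand in a fake.
function notifyUser(user, mailer) {
    mailer.send(user.email, "Welcome!");
}

// In a Jasmine spec, the fake is a one-liner:
// var mailer = jasmine.createSpyObj("mailer", ["send"]);
// notifyUser({email: "a@example.com"}, mailer);
// expect(mailer.send).toHaveBeenCalled();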

This time, I'm doing things a little differently.  For starters, I'm not writing new tests for old code.  I'm just doing the new code.  I'll get to the old code when/if I get around to refactoring it.  That means that I don't have to worry about untestable code, which makes the entire enterprise about a thousand times simpler.  I'm also not working with PHP code for the time being.  I'm trying to do TDD on two projects — one for work that's in JavaScript and a personal one in Python.  For Python, I'm using unittest and mock which, so far, I'm finding to be less of a pain in the neck than PHPUnit.

For the JavaScript project, I'm using Jasmine, which brings me to the title of this post.  Since I'm trying to do TDD (or rather BDD), I quickly tired of alt+TABbing to a browser window and then hitting ctrl+R to reload the page with the test runner.  Sure, it's not a big deal, but it's just one more thing that gets in the way.  I wanted to do what I could do in Python and PHP, which is just hit a hotkey and have Komodo run the test suite right in the same window. 

Turns out, that was actually pretty easy to set up.  I just banged out a quick macro that opens up the Jasmine test runner HTML file in a browser tab or refreshes it if it's already opened.  I bound that to a hotkey and I'm good to go.  Sure, it doesn't use the Komodo IDE testing framework, but that's not the end of the world — I just move it to a split pane and it works out pretty well.  I even added some CSS to the spec runner to make it match my Komodo color scheme.

Here's a screenshot of the side-by-side runner and some (unrelated but public) code:

And here's the macro itself:
(function() {
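    // Open the Jasmine spec runner in a browser view, or just reload it if it's already open.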
    var uri = 'file:///path/to/test_runner.html',
        view = ko.views.manager.getViewForURI(uri);
    if (view === null) {
        ko.open.URI(uri, 'browser');
    } else if (view.reload) {
        view.reload();
    } else {
        view.viewPreview();
    }
})();

Ext Direct errors

Note to self: when Ext Direct calls start failing, look in the request headers for error messages.  I'm not sure whether it's Ext itself or just our framework, but for whatever reason, Ext Direct calls seem to more or less swallow server-side errors.

In this particular case, I was experimenting with some of the code that our in-house development framework uses to render maps.  We have OpenLayers on the front-end and a custom PHP back-end that we communicate with in part through Ext Direct, which is the handy-dandy AJAX RPC framework that comes packaged with Sencha's ExtJS.

So anyway, I made some changes, reloaded the page, and all my Ext Direct calls were failing.  No meaningful error messages, nothing in the JavaScript console, and the response body was just empty.  So what the heck was happening?  (Yeah, I know, I could have just run the unit tests, since we actually have unit tests for the framework code.  But that didn't occur to me because so much of the application code is missing them and I was just experimenting anyway.  Get off my back!) 

Then I noticed, just by chance, that the request headers in the network tab of Chrome's dev tools looked weird.  In particular, it contained this header:
X-Powered-By:PHP/5.3.21 Missing argument 1 for...

So that's what happened to the error message — it got dumped into the headers.  Not terribly helpful, but good to know.

Project commit in Komodo IDE

Edit (2014-10-09): An improved version of this macro can be found in this post.

So I finally got around to writing a little macro for Komodo IDE that I've been missing for a while.  It has a very simple task - perform a "source control commit" operation on the current project.  You'd think this would be built in, and it sort of is, but it doesn't work very well.

In Komodo IDE, the SCC commit action is tied to files and directories.  So if you want to commit an entire directory, you need to select it in the "places" pane (i.e. the file system browser).  And if you want to commit the directory that's currently set as the root of the file view, you either have to go up a level and select it or right-click on an empty spot in the file browser.  So, in other words, it's grossly inconvenient.

Hence this little macro.  Just create a new macro in your toolbox and paste this code into it (note that this doesn't work in Komodo Edit).  Note that this is a little hacky, as some of the SCC initialization seems to be asynchronous and I don't know what (if any) events are fired on completion.  But hey, it works, so close enough.

(function() {
    var curr_project_url =  ko.projects.manager.getCurrentProject().importDirectoryURI,
        fileSvc = Components.classes["@activestate.com/koFileService;1"]
                            .getService(Components.interfaces.koIFileService),
        kodir = fileSvc.getFileFromURI(curr_project_url),
        count = 0;
    
    // HACK: For some reason, the SCC type takes some time to populate.
    // I don't know if there's any event for this, so instead just try it again
    // if it's empty.
    var runner = function() {
        var cid = '',
            sccSvc = undefined;
            
        if (kodir.sccDirType) {
            cid = "@activestate.com/koSCC?type=" + kodir.sccDirType + ";1";
            sccSvc = Components.classes[cid].getService(Components.interfaces.koISCC);
            
            // Get the koISCC service object
            if (!sccSvc || !sccSvc.isFunctional) {
                alert("Didn't get back a functional SCC service. :(");
            } else {
                ko.scc.Commit(sccSvc, [curr_project_url]);
            }
        } else if (count < 10) { // Just in case this never actually works....
            count += 1;
            setTimeout(runner, 100);
        }
    };
    
    runner();
})();