Swag shipment

My shipment of Komodo IDE swag arrived today.  I won a Twitter contest to find an Easter egg in the new Commando feature of Komodo 9.  It turns out that if you type "kitt" or "michael" into the Commando box, it plays an animation of KITT's scanning eye-light from Knight Rider, complete with whooshing sound.

So my prize was a Komodo IDE T-shirt, some Komodo and Stackato stickers, and a pack of Komodo IDE playing cards.  The shirt is actually pretty nice.  It just has the komodoide.com URL on the back and the Komodo logo on the front.  A nice little swag selection, at any rate.  Thanks, ActiveState!

Upgrade time again

It's upgrade time again.  As I mentioned in my last entry, my desktop/home server had been getting short on disk space - and by "short on disk space," I mean I started to see Explorer reporting the free space as less than 1GB on my C: drive.  So it was time for a new hard drive.

Since the only real problem was my 100GB C: partition, I decided to go with more of a "start over" approach and just got a modest 120GB SSD.  My Windows 7 installation was about 5 years old, so it had accumulated a fair amount of cruft.  For instance, the winsxs folder alone had swollen to almost 14GB.  So I reasoned that a clean installation would probably fix most of my problems.

Along with the new hard drive, it was also time for some new software.  I started that out with an OS upgrade from Windows 7 Pro to Windows 8.1.  Yes, I know - everybody hates Windows 8.  But I think it's a good OS, damn it!  Sure, I'll grant you that the initial release had some rough edges, but aside from the questionable decision to remove the start menu (which is trivially fixed by installing Classic Start), the 8.1 update fixes every complaint I had.

As a change of pace, this time I decided to try the downloadable purchase of Windows 8.1 from NewEgg rather than waiting for physical media to ship.  It turns out that this process is actually pretty simple.  You place your order and then get an e-mail with a license key and a link to a downloader program.  You run that, give it your key, and it gives you several options for downloading the installation files.  One of these is to just download a bootable ISO image that you can burn to disc.  So it's actually not as weird as I initially feared.  Of course, the one catch is that the downloader runs under Windows, so this probably doesn't work so well if you're a Mac or Linux user.

The one thing of note here is that this time I decided to save myself a few bucks and drop down from the Professional edition to the standard one.  I made this decision after considering the non-transferability of the OEM version and looking at the feature comparison matrix.  It turns out that the Pro version contained exactly one feature that I cared about: the Remote Desktop Server.  So I reasoned that if I could find a suitable remote access solution to replace RDP, then there was no need to buy the Professional edition.  And after playing around with TeamViewer for a few days, I decided I'd found it.

It turns out that TeamViewer actually works quite well.  For starters, it's free for non-commercial use and it's in Ninite, so there's basically zero overhead to getting it set up.  Registering for an account is also free and helpful, though not strictly necessary.  The performance seems to be pretty good so far and it has the ability to start LAN-only connections or go through their servers for over-the-Internet access.  After using TeamViewer for a couple of days, I was more than convinced that I could do without the Windows RDP server.

Next on the list was a service to run VirtualBox.  As you may know, VirtualBox is a free system virtualization package.  It works pretty well, but it doesn't come with the ability to run a VM on boot (at least on Windows) - you have to wait until you log in.  To fix that, I installed VBoxVmService.  This is a little Windows service that will run selected VMs on boot without anyone having to log in, and it also offers a systray app that allows you to manage those VMs.  Previously, I had been using the similarly named VirtualBoxService, which does essentially the same thing but isn't quite as nice to use.  Of course, there are some limitations, but for the most part it works well enough for my setup.  All I really wanted to do was have a Linux VM run on boot to serve as a web server, because setting that stuff up on Windows was just too much of a pain.
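For anyone curious, VBoxVmService is configured through an INI file that lists the VMs to manage.  Here's a rough sketch of the kind of configuration involved - the VM name and path below are made up, and the exact keys can vary between versions, so check the readme that ships with it:

[Settings]
VBOX_USER_HOME=C:\Users\YourName\.VirtualBox
RunWebService=no

[Vm0]
VmName=my-linux-server
ShutdownMethod=savestate
AutoStart=yes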

While I was at it, I also decided to give Plex a try.  I'd previously been a little turned off by the requirement to create an account, even though I only wanted to use it on my LAN, but it turns out that isn't actually necessary - the account registration is really only needed for remote access.  And with Android and Roku apps, Plex provided more functionality and required less work than my home-grown solution of using PHP scripts to generate customized web server directory listings for Roksbox.  That setup worked well enough, but just using Plex is much easier and seems to work better.

So far, things seem to be going pretty well and I'm happy with my new setup.  Granted, there are no radical changes here, but in my days of Linux on the desktop, painful upgrades were not exactly uncommon, so I guess I'm a little battle-scarred.

One last thing to note is that I'm actually kind of impressed with Windows 7.  In days gone by, a Windows install lasting through 5 years of heavy use would have been unheard of.  And even with seriously limited hard drive space, the system was rock-solid and actually performed pretty well.  If I'd been inclined to migrate it to the new drive, I probably could have kept that installation going for much longer.  Switching back to Windows was definitely a good move.

Forget Cloud Drive, let's try OneDrive

This entry started out as a quick discussion of consolidating my photos onto Amazon Cloud Drive.  However, that didn't happen.  So now this entry is about why.

This started out as an attempt to clean up my hard drive.  I have a 1TB hard drive in my desktop, divided into two partitions: 100GB for my system "C: drive" and the rest of it allocated to assorted data.  The problem is that my C: drive was down to less than 5GB free, so it was time to do some clean-up.

Part of my problem was that my locally synced cloud storage was all in my home directory on the C: drive, including Cloud Drive.  So the plan was to move my Cloud Drive folder to the D: drive and, in the process, move a bunch of my older photos into Cloud Drive.  After all, I've complained about Cloud Drive, but now that Amazon offers unlimited photo storage to Prime members, why not take advantage of it?

Well, I'll tell you why - because it doesn't work, that's why.  For starters, the desktop Cloud Drive app doesn't provide any way to set the local sync directory.  Luckily, I did find that there's a registry key for it that you can set manually.  So I did that, moved my Cloud Drive folder, and restarted the Cloud Drive app.  And then I waited.  And waited.  And waited some more for Cloud Drive to upload the newly added photos.  However, when I came back the next morning, the systray icon said that Cloud Drive was up to date, but when I looked at the website, my new photos weren't there.
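Just for illustration, setting a registry value like that from an admin command prompt looks something like this.  Note that the key path and value name below are made-up placeholders, not the actual Cloud Drive ones, so treat this as a sketch of the technique rather than a working recipe:

rem Hypothetical key and value name - the real ones will differ.
reg add "HKCU\Software\Amazon\CloudDrive" /v SyncRoot /t REG_SZ /d "D:\CloudDrive" /f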

OK, so plan B: try downloading the latest version of the Cloud Drive desktop app.  My version was from March, so maybe there had been some improvements since then.

And here's problem number two: the new app isn't the same as the one I have.  As far as I can tell, Amazon no longer offers a desktop sync app.  Now they just have a "downloader/uploader" app.  It's not sync, though - the process is totally manual.  And I can't find any link to the version I have.  Presumably it's been replaced by this lame uploader app.  I notice that the Cloud Drive website now omits any mention of sync and instead talks about accessing the data on your computer through the website.

OK, so no upgrade.  Plan C: try to get the version I have working.  That didn't work out either.  I didn't have the installer anymore, so reinstalling was out of the question.  I tried deregistering the app and resyncing my cloud data, but by then things were just broken: Cloud Drive synced part of my documents folder, then stopped and reported that it was up to date.

At that point, I decided to just give up.  I'd been thinking about switching to OneDrive anyway.  It works well enough and fixes the problems I have with Cloud Drive.  It also gives me 30GB of free storage and has pretty reasonable rates for upgrades - $2/month for 100GB, or $7/month for 1TB including an Office 365 subscription.  Plus I've already got it set up on my desktop, laptop, phone, and tablet, so it's just a matter of getting my data into it.

So that's what I did.  I changed my OneDrive location to my D: drive by unlinking and relinking the app (which is required on Windows 7 - Windows 8.1 makes this much easier) and then moved over the pictures from my Cloud Drive, as well as the old ones I wanted to consolidate.  Hopefully that will work out.  It's going to take OneDrive a while to sync all that data - over 10GB - but it seems to be going well so far.  And unlike Cloud Drive, at least OneDrive has a progress indicator, so you can tell that something is actually happening.

As for Cloud Drive, I think I'm pretty much done with that.  I'll probably keep the app on my phone, as it provides a convenient second backup of my photos and also integrates with my Kindle, but I'm not even going to try to actively manage the content anymore.  It seems that Amazon is moving away from the desktop to an all-online kind of offering.  That's all well and good, but it's not really what I'm looking for at the moment.  Too bad.

Site outage

Well, that sucked.  My domain was MIA today.  Fixed what I could, but it's not totally working yet.

I discovered the problem this morning, when I was thwarted in my regularly scheduled RSS feed checking.  The page didn't load.  And neither did any of the other pages.  Nor did the DNS alias that I had pointed at my home server.  So after confirming that my hosting provider was in fact up, I checked my DNS.

Somehow - I'm still not 100% sure how - my domain's nameservers got reset to my registrar's defaults.  I'm not sure if I fat-fingered something while confirming my contact info in the domain manager, or if there was some change on their end, or what.  At any rate, they were wrong.  I was able to change the settings back to my hosting provider's nameservers without any issues, but that still requires waiting for the change to finish propagating.  What a pain.
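Incidentally, if you ever need to check for this kind of thing, the delegation is easy to inspect from the command line with dig.  The domain and server names here are placeholders, but the idea is:

$ dig +short NS example.com
ns1.registrar-parking.example.
ns2.registrar-parking.example.

If that lists your registrar's default nameservers instead of your hosting provider's, you've found your problem.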

Reference project root in command

Continuing on the Komodo macro theme for this week, here's another little macro template that might come in handy.  This one is just a quick outline of how to reference your project's root directory when running a command.

As you may know, Komodo's commands support a variety of interpolation variables to do things like insert file paths and other input into your commands.  The problem is that there's no variable to get the base directory of your current project - by which I mean the "project base directory" that you can set in your project properties.  Oh, sure, there are the %p and %P variables that work on the current project, but they don't get the project base path.  They get the path to the project file and the directory in which the project file is contained.  That's fine when your project file lives in the root of your source tree, but if you keep your project files in another location, it doesn't help.

Sadly, there is currently no way to get this path using the interpolation variables.  However, it's pretty easy to get with a macro.  The only problem with that is that the macro syntax is a little odd and the documentation is somewhat lacking.  The documentation for ko.run.runCommand() does little more than give the function signature, which is bad because there are 17 parameters to that function, and it's not entirely clear which are required and what the valid values are.  Luckily, when you create a command, the data is stored in JSON format with keys that more or less match the parameter names to runCommand(), so you can pretty much figure it out by creating the command as you'd like it and then opening the command file in an editor tab to examine the JSON.

Anyway, here's the macro template.  The liveDirectory property is what holds the project base directory; needless to say, you can substitute it in at the appropriate place for your needs.  In my case, I needed it in the working directory.

// Get the current project's base directory from the part service.
var partSvc = Cc["@activestate.com/koPartService;1"].getService(Ci.koIPartService),
    baseDir = partSvc.currentProject.liveDirectory,
    // Build the command's working directory from the project root.
    dir = baseDir + '\\path\\to\\OpenLayers\\build',
    cmd = 'python build.py';
// Run the command with dir as the working directory, sending output to the command output window.
ko.run.runCommand(window, cmd, dir, undefined, false, false, true, "command-output-window");

Better project commit macro

Several months ago, I posted a Komodo IDE macro to run a source control commit on the current project.  That was nice, but there was an issue with it: it only sort of worked. 

Basically, in some cases the SCC type of the project directory was never set.  In particular, if you focused on another window and then double-clicked the macro to invoke it, without touching anything else in Komodo, it wouldn't work.  While this scenario sounds like an edge case, it turns out to be infuriatingly common, especially when you use multiple monitors.  The obvious example is:

  1. Make some changes to your web app in Komodo.
  2. Switch focus to a browser window and test them out.
  3. See that everything works correctly.
  4. Double-click the commit macro to commit the changes.
  5. Wait a while and then curse when the commit window never comes up.

I'm no Komodo expert, so I'm not sure exactly what the problem was.  What I did eventually figure out, though, is that Komodo's SCC API doesn't seem to like dealing with directories.  It prefers to deal with files.  And it turns out that, if you're only dealing with a single file, the "commit" window code will search that file's directory for other SCC items to work with.

So here's an improved version of the same macro.  This time, it grabs the project root and looks for a regular file in it that's under source control.  It then proceeds in the same way as the old one, except that it's much more reliable.

(function() {
    "use strict";
    
    // Find a file in the project root and use it to get the SCC type.  If we
    // don't find any files, just try it on the directory itself.
    // TODO: Maybe do a recursive search in case the top-level has no files.
    function getSccType(url, path) {
        var os = Components.classes["@activestate.com/koOs;1"]
                           .getService(Components.interfaces.koIOs),
            ospath = Components.classes["@activestate.com/koOsPath;1"]
                               .getService(Components.interfaces.koIOsPath),
            fileSvc = Components.classes["@activestate.com/koFileService;1"]
                                .getService(Components.interfaces.koIFileService),
            files = os.listdir(path, {}),
            kofile = null;
        // First look for a file, because that always seems to work
        for (var i = 0; i < files.length; i++) {
            var furi = url + '/' + files[i],
                fpath = ospath.join(path, files[i]);
            if (ospath.isfile(fpath)) {
                kofile = fileSvc.getFileFromURI(furi);
                if (kofile.sccDirType) {
                    return kofile.sccDirType;
                }
            }
        }
        // If we didn't find a file, just try the directory.  However, this
        // sometimes fails for no discernible reason.
        kofile = fileSvc.getFileFromURI(url);
        return kofile.sccDirType;
    }
    
    var curr_project_url = ko.projects.manager.getCurrentProject().importDirectoryURI,
        curr_project_path = ko.projects.manager.getCurrentProject().importDirectoryLocalPath,
        count = 0;
    
    // HACK: For some reason, the SCC type on directories doesn't populate
    // immediately.  I don't know why.  However, it seems to work properly on
    // files, which is good enough.
    var runner = function() {
        var scc_type = getSccType(curr_project_url, curr_project_path),
            cid = "@activestate.com/koSCC?type=" + scc_type + ";1",
            fileSvc = Components.classes["@activestate.com/koFileService;1"]
                                .getService(Components.interfaces.koIFileService),
            kodir = fileSvc.getFileFromURI(curr_project_url),
            sccSvc = null;
            
        if (scc_type) {
            // Get the koISCC service object
            sccSvc = Components.classes[cid].getService(Components.interfaces.koISCC);
            
            if (!sccSvc || !sccSvc.isFunctional) {
                alert("Didn't get back a functional SCC service. :( ");
            } else {
                ko.scc.Commit(sccSvc, [curr_project_url]);
            }
        
        } else if (count < 50) { // Just in case this never actually works....
            count += 1;
            setTimeout(runner, 100);
        } else {
            alert('Project directory never got a valid SCC type.');
        }
    };
    
    runner();
}());

Resize Subsonic frames

I upgraded Subsonic today and I had to remember the little hack I put in to enable resizing of the playlist frame. This is the second time I've had to look up how to do this, so I'm just going to document it here.

By default, each piece of the Subsonic web interface is a frame, and all the frames are borderless and not resizable.  This is kind of a pain because sometimes I have a long playlist and I want to see all of it at once.  The ideal way to do that would be to just increase the size of the playlist frame - but I can't do that.  Fortunately, fixing this is relatively easy.  You can just modify the frameset markup in the following file:
C:\subsonic\jetty\<number>\webapp\WEB-INF\jsp\index.jsp

In my case, the solution was to change the third frameset tag by turning on frame borders and increasing the border size.  Here's the markup:
<frameset rows="75%,25%" border="5" framespacing="0" frameborder="1">

Using RewriteBase without knowing it

So here's an interesting tidbit that I discovered this afternoon: you can use RewriteBase in a .htaccess file without knowing the actual base URL.  This is extremely useful for writing a portable .htaccess file.

In case you don't know, the RewriteBase directive to Apache's mod_rewrite is used to specify the base path used in relative rewrite rules.  Normally, if you don't specify a full path, mod_rewrite will just rewrite the URL relative to the current directory, i.e. the one where the .htaccess file is.  Unfortunately, this isn't always the right thing to do.  For example, if the .htaccess file is under an aliased directory, then mod_rewrite will try to make the URL relative to the filesystem path rather than the path alias, which won't work.

Turns out that you can account for this in four (relatively) simple lines:

RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond $1#%{REQUEST_URI} ([^#]*)#(.*)\1$
RewriteRule ^(.*)$ %2index.php [QSA,L]

The way this works is that the second RewriteCond glues the relative match from the rule ($1) and the full %{REQUEST_URI} together with a "#" separator, then uses the backreference \1 to strip the relative part off the end, leaving the base URL prefix captured in %2.  For example, if the .htaccess lives under an alias /myapp and the request is for /myapp/foo/bar, the condition matches "foo/bar#/myapp/foo/bar" and captures "/myapp/" into %2, so the rule rewrites to /myapp/index.php.  All you need to do is substitute your rewrite target for "index.php" and it "just works".  No changes to server configuration required and no need to edit the RewriteBase for the specific server.

VirtualBox shared folders

Here's a little issue I ran across the other day.  I was setting up a VirtualBox VM with an Ubuntu guest and I wanted to add a shared folder.  Simple, right?  Just install the VirtualBox guest additions, configure the shared folder to mount automatically, and it "just works".

The only problem is the permissions.  By default, VirtualBox mounts the shared folder with owner root, group vboxsf, and permissions of "rwxrwx---".  So if you add your user account to the vboxsf group, you get full access.  Everyone else...not so much.  Of course, this isn't inherently a problem.  However, I wanted the Apache process to have read access to this shared folder, because I have a web app that needs to read data off of it.  And I didn't really want to give it write access (which it doesn't need), so just adding it to the vboxsf group wasn't a good option.  What I really needed was to change the permissions with which the share was mounted.

As far as I can tell, there's no way to get VirtualBox to change the permissions.  At least, I didn't see anything in the guest OS and there's no setting in the management UI.  Fortunately, you can pretty easily bypass the auto-mounting.  Since it's a Linux guest, you can just turn off the auto-mounting in the VirtualBox management console and add a line to your /etc/fstab.

There is one issue, though: you have to make sure the vboxsf kernel module is loaded before the system auto-mounts the share.  If you don't, the mounting will fail.  However, forcing the module to load is easily accomplished by adding a vboxsf line to your /etc/modules file.
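On Ubuntu, that's literally just the module name on a line by itself at the end of the file:

# /etc/modules: kernel modules to load at boot time.
vboxsf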

As for the line in /etc/fstab, this seems to work pretty well for me (note that gid=999 is the numeric ID of the vboxsf group on my system - yours may differ, so check /etc/group):
SHARE_NAME   /media/somedir   vboxsf   rw,gid=999,dmode=775,fmode=664   0   0
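And if you want to test all of this without rebooting, loading the module and re-running the mounts by hand should do it:

sudo modprobe vboxsf
sudo mount -a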

Tired of Cloud Drive

You know what?  Amazon Cloud Drive is a little bit of a pain.  I think it's time to start moving away from it.  I think I'm going to give Dropbox another look.

To be clear, I don't plan to stop using Cloud Drive altogether. For one thing, the Kindle integration is actually pretty nice.  And for another, they give Amazon Prime members a pretty decent amount of space for free.  And I find that the desktop Cloud Drive app actually works pretty well.  It's just the mobile app and the website that suck.

The mobile app

For starters, let me voice my new complaint with the mobile app: the semantics of syncing your pictures are rather poorly defined.  Or, rather, they're not defined — at least, not to my knowledge.  By that I mean: what happens when you delete a bunch of files from your Cloud Drive, but not from your phone's main picture storage?  I'll tell you what: the app tries to re-add them to your Cloud Drive. 

This has happened to me two or three times in the last week or two.  A couple of weeks ago I went through my Cloud Drive and deleted some of the pictures that were duplicated, out of focus, etc.  Now, at semi-random intervals, the app wants to re-upload them.

In fairness, this could be due in part to the fact that Amazon's apps all seem to share authentication and it seems to be semi-broken on my phone.  I've had this problem several times recently.  The Kindle app on my Galaxy Nexus will stop synchronizing and the settings will report that it is "Registered to .".  No, that's not a typo, my name is just missing.  And when this happens, I can't authenticate with the Cloud Drive app or the Cloud Player app either — they just reject my credentials.  So far, the only fix I've found is to deregister my device in the Kindle app settings and then re-register it.  That fixes the Kindle app as well as authentication in the other apps. 

The website

I've blogged about the Cloud Drive website and its useless "share" feature before.  Well, despite that, the other day I decided to give that "share" feature a try.  Hey, maybe it won't really be so bad, right?

Good Lord, it was even worse than I'd imagined.

So my task was to share one or two dozen pictures of my son with my relatives.  Seems simple enough, right?  The pictures are already in my Cloud Drive, so I just need to send out some links to them.  As I noted in the last entry, Cloud Drive doesn't actually support sharing folders or sharing multiple files at once, so the only way to do this is by sharing one file at a time and copying the URL into an e-mail. 

As bad as this is, it turns out it's made even worse by the fact that Cloud Drive now reloads the file list after you complete the "share" operation.  And to add insult to injury, the reload is really slow.  Granted, I have about 1500 images in that directory, but that shouldn't matter, because the reload really doesn't need to happen at all.  I mean, nothing has changed in the folder view.  All that happens is that the UI locks up for a few seconds and then I get a "file has been shared" message.

So this only confirms my opinion that the "share" feature in Cloud Drive is unusable.  I mean, if sharing were just a matter of popping up the "share" dialog that holds the URL, copying it, pasting to another window, and then closing the dialog, that would be one thing.  It would suck, but I could deal with it.  But the brain-dead UI locking up for five seconds after each share just makes it too painful to even try.  I got through maybe half a dozen pictures before giving up in disgust.

Solution - Dropbox?

So, clearly, I need another platform for sharing my pictures.  I looked around and found myself mightily confused.  My requirements were pretty simple.  I wanted something that:

  1. Had a free account tier,
  2. Had an easy way to share by just sending someone a link,
  3. And provided a simple uploading option.

I briefly considered using the Google+ "Photos" feature, but gave up on that when I realized the whole visibility/sharing thing wasn't completely obvious.

For now, I'm trying out Dropbox.  I've had a free Dropbox account for a couple of years but never really used it for anything, so that was one less thing to sign up for.  The desktop app is pretty nice - it lets me just drag files between Explorer windows and shows an icon for each file's sync status.  And sharing things via the web UI is dead simple, which is exactly what I was looking for.  So we'll see how that goes.  Worst-case scenario, I won't have lost anything.