SVN+SSH, SlikSVN, and Cygwin

As previously mentioned, since switching my desktop to Windows, I've set up a Subversion service using SlikSVN and an SSH service using Cygwin. So this week, I figured I'd try getting them to play together so that I can do Subversion over SSH and not have to open up another port in my firewall. It eventually worked out, but it wasn't as painless as it should have been. And it was totally my fault.

As you know if you've ever used Subversion over SSH, it's pretty simple. You just change your protocol, point SVN to the repository on the remote server, and supply your OS login credentials. So, since I already had an account set up on the server and svnserve running, I figured this should work:
svn ls svn+ssh://myuser@myserver/myproject
But no dice - I got the message "svn: No repository found in 'svn+ssh://myuser@myserver/myproject'"

Hmm... Time to Google... Oh, that's right - the repository root you pass to svnserve with -r doesn't apply over SSH, because the SSH connection spawns its own svnserve instance! Instead, you have to supply the full path to the repository and project. But wait - my repository is on the D: drive...so how do you reference that? Well, SSH is running on Cygwin, so we can use Cygwin's drive mapping. So change that command to:
svn ls svn+ssh://myuser@myserver/cygdrive/d/myrepos/myproject
That should work, right?

Yeah...not so much. That definitely should work, but I'm still getting that "no repository found" message. So what's the deal?

A little searching revealed that, behind the scenes, the svn+ssh:// protocol runs a command similar to this:
ssh myserver svnserve -t
Turns out that the problem was in that command.

See, the svnserve portion runs on the Subversion server, which, in this case, is inside Cygwin on a Windows box. However, I have two copies of svnserve - one from Cygwin and one from SlikSVN - and they don't work the same way.

For SVN+SSH to work, I need to pass the repository path in with the Cygwin path mapping, and SlikSVN doesn't understand that. Thus the need for Cygwin's SVN. However, SlikSVN is first in my path when I connect via SSH, so it's SlikSVN's svnserve that's getting run inside Cygwin. Hence the "no repository found" message.

After a bit of experimentation, it turns out that this is really easy to fix. All you need to do is set the PATH in your Cygwin .bashrc file to explicitly put the Cygwin binaries first. Just add the following line to the end of the file:
export PATH=/bin/:/usr/bin/:/usr/local/bin/:$PATH
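
Since the whole fix hinges on .bashrc being sourced for the non-interactive shell that runs svnserve -t, you can verify it the same way svn does. Something like:

ssh myuser@myserver 'which svnserve'

should now print the Cygwin binary (/usr/bin/svnserve) instead of the SlikSVN one.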

So, problem solved. Unfortunately, it took a lot longer than I would have thought, mainly because I couldn't find anyone else who had the same problem. So hopefully anyone else who's crazy enough to set things up this way will come across this post if they have any problems.

Good intentions, bad idea

Today I'm going to discuss a comedy of errors. This starts out with a nasty bug that surfaced in my company's product a couple of months ago, and finally became clear when I was doing some prep work for implementing a CDN. It's a tale of good intentions, bad ideas, and code I didn't write, but have to fix.

First, the bug.

To explain the bug, I have to tell you a little about how my company's new product works. Basically, it's a drag-and-drop UI builder for Flash ads. The user designs their ad on a design surface in a FLEX app, saves it, and can then publish the result. However, rather than actually compile a SWF for the ad, we're currently assembling all the assets at run-time on the client-side. Our ad tags serve up a shell SWF file and inject a few parameters into it, including a URL to a "recipe" file that we use to cook up the ad. This is just an XML file that contains the information for the ad's constituent elements. The SWF file parses it and pulls/creates all the needed objects to build the ad. There were various reasons for doing it this way, but I won't get into those.

Now, this bug was a real head-scratcher. The actual problem was simple - the shell SWF just wasn't rendering the ad. No error, no message - just didn't work. However, it only happened in IE - Firefox, Chrome, Opera, and Safari worked just fine. It also only happened in our production and test environments - our dev servers and demo server worked fine in IE. The SWF files were identical in every environment - I know because I triple checked. What's more, I could see the XML file being requested in the server logs, so the SWF wasn't totally crapping out. And, again, it worked in other browsers, so it didn't seem like there could be an issue with the SWF.

Well, after researching and messing around with this for the better part of a day, our QA person found a link that put us on the right track. In fact, there are a bunch of such links. It turned out to be an issue with the HTTP headers on the XML file. The file was being served over SSL with the "no-cache" header set, and it turns out IE doesn't like this. When used over SSL, "no-cache" keeps the file completely out of the cache - not even long enough for the browser to pass it off to the Flash plugin. Apparently we would have seen an error for this if we'd done the SWF file in ActionScript 3, but we used ActionScript 2 (most of our customers require ads to be in AS 2 - don't ask me why), which has a penchant for failing silently. And the reason it didn't happen in all four environments is that the Apache configurations were actually not all the same. Go figure.
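
For reference, the eventual fix was on the Apache side (more on that later). Roughly sketched - assuming mod_headers is enabled, and with a hypothetical match for the recipe files - the idea is to stop sending the headers that keep IE from handing the file to the Flash plugin:

# Hypothetical: relax the caching headers on the recipe XML
<FilesMatch "\.xml$">
   Header unset Pragma
   Header set Cache-Control "private, max-age=0"
</FilesMatch>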

Fast forward two months to the discovery of the cause.

I'm looking to implement a CDN. We've got one that does origin pull, so it shouldn't be a big deal, right? Well, yeah, but we still have to make some changes, because those "recipe" files are on the same path as the rest of our media, and it won't do to have the CDN caching them while users are editing them and trying to preview the output. So I need to fix it so that, at least for the previews, we can serve the recipes from our own server instead of the CDN.

A few months ago, when we implemented user uploads, we (and by "we" I mean "another guy on my team") added the concept of a "media URL" to our system. The idea is that we could just change this media URL to switch to a CDN without having to change any URLs in code. This was implemented as a method on one of our back-end classes. It would return the base domain and path, if applicable, from which media is served. So building a URL would look like this:
$image_url = ObfuscatedAdClassModel::getMediaURL();
$image_url .= '/path/to/naughty_pic.jpg';

The getMediaURL() method just checks the database/Memcache for a saved URL and returns the saved value or a static default if nothing is found. Easy-peasy.

Or not. You see, I exaggerated in that last sentence - that's what getMediaURL() should do. In actuality, it does a bit more. In fact, as an object lesson in overcomplication, I'm posting the redacted code below.

static public function getMediaURL(){
   global $cache_enabled;

   $db = self::getDB();
   if (!is_object($db)) {
      throw new Exception('Error getting database connection');
   }

   $settings_key = 'media_url';
   $settings_value = null;

   if( $cache_enabled ) {
      $settings_value = $db->getCache($settings_key);
   }

   if (!$settings_value){
      $settings_value=DEFAULT_MEDIA_URL;
      $query = "SELECT value FROM settings WHERE key='$settings_key'";
      $ret = $db->query($query);
      if (is_array($ret[0])) {
         $settings_value = $ret[0]["value"];
      } elseif ($ret === true){
         try{
            $db->startTrans();
            $have_lock = $db->query( "LOCK TABLE settings IN ACCESS EXCLUSIVE MODE;" );
            if ($have_lock){
               $query = "SELECT value FROM settings WHERE key='$settings_key'";
               $ret = $db->query($query);
               if (!is_array($ret[0])) {
                  $query = "INSERT INTO settings (key, value) VALUES ('$settings_key','$settings_value')";
                  $ret = $db->query($query);
                  if($ret !== true){
                     $err_msg = "Unable to insert setting: $settings_key";
                     Log::error($err_msg);
                     throw new Exception($err_msg);
                  }
               }
            } else {
               $err_msg = "Could not acquire table lock or table does not exist: settings ".$db->getLastError();
               Log::error($err_msg);
               throw new Exception($err_msg);
            }
            $db->commitTrans();
         } catch (Exception $e){
            $db->rollbackTrans();
            $err_msg = 'DB error while attempting to update settings.';
            Log::error($err_msg);
            throw new Exception($err_msg);
         }
      } else {
         $err_msg = 'DB error while attempting to load settings.';
         Log::error($err_msg);
         throw new Exception($err_msg);
      }
      
      if( $cache_enabled ) {
         $db->setCache($settings_key, $settings_value, 3600);
      }
   }
   if ( $_SERVER["SERVER_PORT"] == 443 ){
      return "https://" . $settings_value . "/";
   } else {
      return "http://" . $settings_value . "/";
   }
}

This should be just a simple look-up in a generic settings table. However, when the desired setting is not found, this method tries to insert it into the database. This is completely unnecessary and leads to five levels of nested "if" blocks and a table lock. We already have the default value, so why not just return that? And the table lock is just paranoid. This is not a setting that's likely to change more than once every couple of months. And if we do get a minor inconsistency, so what? You think users are going to complain that an ad didn't render properly?
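
For contrast, here's a minimal sketch of what I mean - just look the value up and fall back to the default, using the same $db helpers and constants as above:

static public function getMediaURL(){
   global $cache_enabled;

   $db = self::getDB();
   if (!is_object($db)) {
      throw new Exception('Error getting database connection');
   }

   $settings_key = 'media_url';
   $settings_value = $cache_enabled ? $db->getCache($settings_key) : null;

   if (!$settings_value){
      $ret = $db->query("SELECT value FROM settings WHERE key='$settings_key'");
      // No match? Just use the default - no insert, no table lock.
      $settings_value = is_array($ret[0]) ? $ret[0]["value"] : DEFAULT_MEDIA_URL;
      if( $cache_enabled ) {
         $db->setCache($settings_key, $settings_value, 3600);
      }
   }

   $protocol = ($_SERVER["SERVER_PORT"] == 443) ? "https" : "http";
   return $protocol . "://" . $settings_value . "/";
}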

Note also that the returned URL is set to straight HTTP or SSL based on the current server port. Hint: this will be important later.

Now, the reason I'm looking at this is that I need to adjust how the URLs to the recipe files are handled in our system. Given our media URL scheme, if we set our static content to serve from the CDN, the recipes will go with it. For previewing in-progress ads, this won't work. So I need to change the recipe URLs to use something other than the media URL. So I find where the recipe URL is injected into the client-side code and start tracing it backward.

To my surprise, I find that the recipe URL is actually not set dynamically using getMediaURL(). It turns out it's coming from our back-end ad object, via a getMetadata() getter method, as a full, absolute URL. And what is this metadata? Well, it's an associative array of seemingly random data that we serialize and cram into the database. And by "we", I mean the same guy who wrote getMediaURL(). For the record, I told him it was a bad idea.

So if the recipe URL is coming out of this metadata, what does that mean? That it's stored in the database. So I start grepping for the recipe URL's metadata key. And I find it in the web service method in our management dashboard that saves the XML files.

Let's pause here. Now, if you're very sharp, you may have asked yourself earlier why we were serving this XML file over SSL. We're serving ads, right? They don't need to go over SSL. And since this Flash problem was specific to SSL, if we just served the recipes over straight HTTP, we should have been good. So why didn't we do that?

That's a good question. It had occurred to me after I fixed that bug (by changing the Apache configuration), but when I looked, I couldn't find where the URL was set to HTTPS. Plus I didn't know why we were serving them over HTTPS, so I assumed there must be some reason for it. And besides, the bug was "fixed" and I had lots of other work to do at the time, so it wasn't a priority.

Again, if you're sharp, you can probably see that there was no good reason. The recipes were being served over SSL by accident! You see, our management dashboard, which is where the recipe file is saved, runs over HTTPS, and getMediaURL() selects the protocol based on the current one. So when we called getMediaURL() to build the recipe URL to save in the database, it came back as an HTTPS URL.
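
To make that concrete (hypothetical names - this is just the shape of the save path):

// Called from the dashboard, where $_SERVER["SERVER_PORT"] is 443...
$recipe_url = ObfuscatedAdClassModel::getMediaURL() . 'recipes/ad123.xml';
// ...so this is an https:// URL, and it's that absolute HTTPS URL
// that gets serialized into the ad's metadata.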

So there you have it. A crazy, hard to diagnose bug, caused by a method that's too smart for its own good, and hidden by an ill-conceived half-abstraction. I hate to speak negatively of a friend and former colleague, but this was really a lesson in poor system design. He needed to separate things out a bit more. He should have done less in getMediaURL() and factored the generic "metadata" out into separate properties rather than lumping them all together.

Situations like this are why we have guidelines in programming. Things like "don't put serialized strings in the database", "a method should do one thing", and "don't use global variables" can seem arbitrary to the inexperienced. After all, it's so much quicker and easier to do all those things. But those of us who've been around the block a few times get that queasy feeling. We know there's a reason for those guidelines - those things can easily come back and bite you hard later on. Sure, they might not become a problem, but if they do, it's going to be much harder to fix them later than to "do it right" the first time. The hard part of software development is not "making things work" - a trained chimp can do that. The real art is in keeping things working over the long haul. That's what separates the real pros from the ones who are just faking it.

Yes, VIM is good

I've been catching up on some of my back episodes of .NET Rocks the past couple of days. I'm currently about 3 months behind on my podcasts, so I've got plenty to listen to.

Anyway, I was listening to show 537 with James Kovacs at the gym this morning, and he mentioned something interesting toward the end. When asked what he's been into lately, he said he'd been playing with Vim. Imagine that - a hard-core Microsoft guy playing with Vim. That just gives me a good feeling.

I've always been frustrated by how people in the Windows world seem to think "text editor" equals "Windows Notepad". Even experienced people. You always hear them refer to opening something in Notepad, or typing things into Notepad, etc. This from people who've been in the business for 10, 15, or even 20 or more years! I mean, they must be aware that there are better editors out there. Is "notepad" just used as a generic term, a shorter version of "text editor"? I wish somebody would explain that to me.

But getting back to the topic, it was nice to see James mention Vim. Despite Carl's and Richard's comments about VI being an antique, Vim really is remarkably powerful. Granted, it takes some effort to learn, but once you've got it down, you can do some pretty complicated stuff in just a couple of keystrokes. It's always nice to see that MS A-listers can recognize the power of tools like Vim, and not just us transplanted UNIX geeks.
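
To give a trivial taste of what I mean by "a couple of keystrokes": deleting every line in a file that contains "TODO" is one command in Vim:

:g/TODO/d

and a find-and-replace across the whole file isn't much longer:

:%s/foo/bar/g

Good luck doing either of those quickly in Notepad.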

Hiding accounts in Windows

There was one annoying side-effect of installing the Cygwin SSH server - I had to create an account for it. Not that this is inherently a problem, but I noticed that the new "Privileged service" account showed up on the login screen. That needed to go.

Turns out that hiding an account in Windows 7 is actually pretty easy. Just a simple registry entry - create a DWORD with the name of the user to hide and value zero under "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" - and you're all set. Nice to know.
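
From an elevated command prompt, that boils down to something like this - assuming the account is named cyg_server, which is what Cygwin's SSH setup suggests by default (substitute your own account name):

reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\SpecialAccounts\UserList" /v cyg_server /t REG_DWORD /d 0 /f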

Well, that was a waste of time

Yeah.... So remember how I was messing with freeSSHd last weekend? I'm thinking that was a waste of time.

While the native shell thing in freeSSHd was cool, I had two big issues with it. First, the terminal was buggy. I got lots of weird screen artifacts when I connected to it - the screen not clearing properly, weird random characters hanging around after a lot of output, etc. Not the rock-solid terminal experience I'm used to.

The second thing was something that occurred to me after trying out a couple of SSH sessions. If freeSSHd is using a native shell, with native file paths, how is that going to work with SCP and SFTP? How does it account for drive letters and path delimiters? Turns out the answer (at least for SFTP) was restricting access to a certain pre-defined directory ($HOME by default). For SCP...well, I still haven't figured out how that's supposed to work. But either way, this is decidedly not the behavior I was looking for.

So instead, I decided to go the old-fashioned way and just install Cygwin and use their OpenSSH server. And you know what? It was completely painless. I just used this handy LifeHacker entry to get the auto-config command and the necessary settings, and I was done. I started the service, connected to the SSH server, and all my stuff was right where it was supposed to be. I have a solid, familiar terminal, access to my whole system via SCP and SFTP, and NT authentication worked out of the box. Heck, the hardest part was figuring out the funky non-standard UI of the installer's package selection dialog. I should have just used Cygwin from the beginning and saved myself some effort.
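
For anyone following along, the auto-config boils down to roughly this, run from a Cygwin shell with administrator rights (the -y just answers "yes" to all the prompts):

ssh-host-config -y
net start sshd

After that, it's a standard OpenSSH server listening on port 22.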

Got freeSSHd (mostly) working

Well, after my dismal failure yesterday, I mostly got freeSSHd working. Not sure if it was worth the effort or not, but we'll see.

Turns out the problem was with my expectations. Or maybe I can blame it on the (extremely) minimal documentation. You see, I said "yes" when asked if I wanted to run freeSSHd as a Windows service. Now, I figured that starting the freeSSHd program in the start menu while the service was running would allow me to configure that service and view its status. Yeah...not so much. Apparently running the config program actually tries to start another instance of freeSSHd. Hence the errors that the port was in use - it was already being used by the service, and I was unwittingly trying to spawn another instance.

It turns out that to configure the service, I need to start the config program, do my stuff, close the config program, and restart the service. At least, that seemed to do the trick - I'm not actually sure how much of it was necessary. After that, freeSSHd ran quite nicely.

My one remaining problem was authentication. I wanted to use NT authentication, which freeSSHd gives as an option. The problem was, it didn't quite work. After creating a user and setting it to NT authentication, I was able to log in...and that's it. I connected with Putty, entered my password, and freeSSHd immediately disconnected me. No warnings, no errors, nothing in the logs - just immediate disconnection.

The really odd thing was that NT authentication worked just fine if I ran freeSSHd by starting the config program rather than as a service. Running it as service, though, disconnected every time. The only time it didn't was when I disabled the "new console" option. Then the session would just hang and not accept input, which wasn't an improvement. I tried various settings and Googled fruitlessly, but no luck. I still have no idea what was going on. After mucking about with this for probably an hour and a half, I finally gave up and changed my freeSSHd user to use a SHA1 hashed password. That worked just fine, but feels like defeat.

The one thing I do like about freeSSHd so far is that it allows you to select your command shell. You actually get a native Windows shell instead of being forced into Cygwin weirdness. I changed mine from the default cmd.exe to PowerShell. That should make for a more pleasant experience.

Initial Windows setup

Well, I did my Windows 7 install the other day. One day later, I'm doing pretty well. Ran into some problems, but so far the results are not bad.

Unsurprisingly, the actual install of Windows 7 was pretty uneventful. Pretty much the same as a typical Ubuntu installation - selecting partition, entering user info, clicking "next" a few times, etc. Nothing to report there.

The initial installation of my core programs was pretty easy too, thanks to Ninite. They have a nifty little service that allows you to download a customized installer that will do a silent install of any of a selected list of free (as in beer) programs. So I was able to go to a web page, check off Opera, Thunderbird, Media Monkey, the GIMP, Open Office, etc., download a single installer, and just wait while it downloaded and installed each program. Not quite apt-get, but pretty nice.

My first hang-up occurred when installing the Ext2IFS. Turns out that the installer won't run in Windows 7. You need to set it to run in Windows Server 2008 compatibility mode. And even after that, it was a little dodgy. It didn't correctly map my media drive to a letter on boot. It worked when I manually assigned a drive letter in the configuration dialog, but didn't happen automatically. It was also doing weird things when I tried to copy some backed-up data from my external EXT3-formatted USB drive back to my new NTFS partition. Apparently something between Ext2IFS and Win7 doesn't like it when you try to copy a few dozen GB of data in 20K files from EXT3 to NTFS over USB. (Actually, now that I write that, it seems less surprising.) The copy would start analyzing/counting the files, and then just die - no error, no nothing. I finally had to just boot from the Ubuntu live CD and copy the data from Linux. Still not sure why that was necessary.

I also had some interesting issues trying to install an SSH server. I initially tried freeSSHd, which seemed to be the best-reviewed free server. The installation was easy and the configuration tool was nice. The only problem was, I couldn't get it to work. And I mean, at all. Oh, sure, the telnet server worked, but not the SSH server. When set to listen on all interfaces, it kept complaining that the interface and/or port was already in use when I tried to start the SSH server. When bound to a specific IP, it gave me a generic access error (literally - the error message said it was a generic error).

After messing around fruitlessly with that for an hour or so, I gave up and switched to the MobaSSH server. This one is based on Cygwin. It's a commercial product with a limited home version and didn't have quite as nice an admin interface, but it seems to work well enough so far. The one caveat was that I did need to manually open port 22 in the Windows firewall for this to work.

The biggest problem so far was with setting up Subversion. Oh, installing SlikSVN was dead simple. The problem was setting up svnserve to run as a service. There were some good instructions in the TortoiseSVN docs, but they only worked on the local host. I could do an svn ls <URL> on the local machine, but when I tried it from my laptop, the connection was denied. So I tried messing with the firewall settings, but to no effect. I even turned off the Windows firewall altogether, but it still didn't work - the connection was still actively denied.

I started looking for alternative explanations when I ran netstat -anp tcp and realized that nothing was listening on port 3690. After a little alternative Googling, I stumbled onto this page, which gave me my solution. Apparently, the default mode for svnserve on Windows, starting with Vista, is to listen for IPv6 connections. If you want IPv4, you have to explicitly start svnserve with the option --listen-host 0.0.0.0. Adding that to the command for the svnserve service did the trick.
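
For the record, that means a service definition along these lines, following the sc create pattern from the TortoiseSVN docs (the install and repository paths here are from my setup, so adjust to taste; note that the space after each = is required by sc):

sc create svnserve binpath= "\"C:\Program Files\SlikSvn\bin\svnserve.exe\" --service -r D:\myrepos --listen-host 0.0.0.0" displayname= "Subversion Server" depend= Tcpip start= auto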

Which desktop OS to use

The last week or so, I've been going back and forth over what to run as the primary OS on my upgraded home desktop. Right now I'm running Ubuntu 10.04 on it, but I keep thinking maybe I should switch to Windows 7. I'm using it to run Win7 under VMWare anyway, so I'm wondering if I should just invert the arrangement and run Ubuntu under VMWare instead.

So to help myself decide, I'm going to list out some of the various pros and cons. Of course, this is just for my case - this is not universally applicable, your mileage may vary, etc. I'm basing this list on my usage patterns over the last two years, since I started running Windows on some of my personal (i.e. non-company) machines again.

Every-day desktop software

This is obviously one of the biggest categories, and one of the reasons I'm leaning toward Windows 7 in the first place. For my every-day non-development desktop computing needs, I need the following tools:

  1. Web browser
  2. E-mail client (no, I don't want to use a web client, even if it's GMail)
  3. Desktop RSS aggregator with multi-system syncing (no, I don't particularly like Google reader, except as a back-end)
  4. Multi-format universal video player
  5. Music player/manager
  6. Podcatcher
  7. Comic book reader

For items 1, 2, and 4, I already have favorite apps that are cross-platform - Opera, Thunderbird, and VLC. So those requirements are a wash.

Pro-Windows

On Windows, I'm currently using FeedDemon for RSS and MediaMonkey for music, and I quite like both of them.

On Linux, I don't have favorite apps for those. Recently, I've been using Exaile for music, but I'm not particularly attached to it. I just haven't found anything good since Amarok 2 came out (which I absolutely hate, despite really liking Amarok 1.x). As for an RSS reader, I've tried Liferea, but didn't particularly care for it. Mostly I've been either reading my feeds on a Windows box or just using Google reader, which I'm not crazy about.

Pro-Linux

On the Linux side, I do have a favorite comic reader and podcatcher: ComiX and gPodder respectively. However, I don't think either of them is really irreplaceable. While I don't have any complaints about gPodder, my podcatching needs are fairly basic - download and transfer to my MP3 player. I have a feeling the podcasting features of MediaMonkey would be just fine for this. The same is probably true of ComiX - I don't really need any complicated management features, just a good viewer for comic archives. I've played with a few other comic readers, and there are probably several that could do the job. For instance, HoneyView seems like it would fit my needs quite well.

Verdict: I have to go with Windows on this point. Most of the every-day stuff I really care about is either cross-platform or Windows-only, while I'm not really so attached to my current Linux tools.

Development tools

The development tools are a little different from the every-day tools. In part because, in many cases, there isn't really much of a choice. In the case of things like language-specific IDEs, you either use a particular tool and just use whatever platform it's supported on, or you make things waaaaay harder on yourself than they need to be. In other words, if you want to be idealistic in your platform choice, you suffer for it.

In my case, I'm currently doing mostly LAMP and FLEX work, and want to get into more .NET stuff on the side. That means that I need not only the IDEs and development tools, but also the supporting servers for those environments. I also need good command-line and scripting utilities as well as a good desktop virtualization package.

Currently, I favor Komodo Edit and gVim for PHP, Python, and other general-purpose coding, with a little Eclipse mixed in on occasion. Both of those are cross-platform, as are PHP and Python. For supporting servers, I also need MySQL, PostgreSQL, SQLite (more of a library than a server, but I'll list it with the other databases), and Apache, all of which are also cross-platform. So in terms of what I can use, it's a wash on all of that.

Pro-Linux

However, while most (if not all) of the LAMP stack runs on Windows, it's kind of a second-class citizen. In my experience, it's much easier to install and administer this stuff on Linux. Of course, that could just be because I'm more familiar with them on Linux, but that's not really the point.

In addition, Linux gets some points for the command-line. Granted, now that PowerShell has come along, it's not nearly as many as Linux used to get, but it still wins on ease of installing packages and on having things like FFmpeg.

Pro-Windows

Obviously, Windows wins on everything dot-NET. IIS, Visual Studio, SQL Server - all of them are Windows-only and must-haves for anyone wanting to do "professional" .NET work, by which I mean work that "counts" on your resume. (No, Mono with MySQL isn't good enough, unfair as that may be.) It also wins on the FLEX front. While the FLEX SDK is cross-platform and runs on Linux, the IDE, Flash Builder, only runs on Windows and Mac. It's Eclipse-based, so you'd think it'd run on Linux, but it doesn't. Don't ask me why.

Windows also gets a small win on virtualization. Mostly, I use VMWare, plus a bit of VirtualBox, both of which are cross-platform. However, anybody who ever has to do IE compatibility testing will tell you how handy VirtualPC is. Not in and of itself, but rather because Microsoft puts out pre-configured images for testing IE 6 and 7. Just download, extract, and run - no hunting down copies of Windows with the appropriate IE version. Granted, it's annoying that the Windows installs on those images expire every few months, but they're still useful.

Verdict: I think I have to give Windows the edge on this one too, much as it pains me. I just need the Windows-only tools, and it's easier to run the resource-sucking ones like Visual Studio and Eclipse natively than in a VM. And as for the LAMP server tools, I can easily set up an Ubuntu server VM to run in the background without assigning it too many resources.

Remote access

I've gotten very used to being able to access my home desktop from anywhere. Therefore, I need some kind of secure, easy to use method for connecting to it. I also need it to be something I can set up on someone else's computer in 5 minutes without admin access. I do use Opera Unite for some of this, but that doesn't cover everything - I want something a little more robust and general purpose.

Pro-Linux

Let's face it - SSH rocks. There's just no getting around it. It's secure, simple to install, simple to set up, and simple to use. Also, it gets you full access to the command line, which is pretty powerful. I also tunnel VNC over SSH for nice, easy graphical access to my machine.
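
The tunneling part is a one-liner, for what it's worth (5900 being the standard port for VNC display 0):

ssh -L 5901:localhost:5900 myuser@myserver

Then you point the VNC viewer at localhost:5901 and the whole session rides over the encrypted connection.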

Pro-Windows

Let's face it - Remote Desktop beats the pants off of VNC. I mean, it's not even close. Granted, VNC is cross-platform, but RDP offers more features and it's a lot faster.

Also, there are SSH servers available for Windows. They're not as "native" as SSHD under Linux, but they still offer remote command-line shell (usually based on Cygwin, so it's Linux-like) as well as SCP and SFTP.

Verdict: It's pretty much a draw on this one. The SSH support for Linux is better, but Windows has some SSH and better remote graphical capabilities.

Inertia

When switching platforms, you need to account for overcoming the inertia of your current environment. That means coming up with new tools, new customizations, new data organization, etc.

Pro-Linux

Eight years is a long time. That's about how long I've been running Linux exclusively on my home desktop. I've been building up my environment since the bad old days, when Netscape Navigator 4 was the best browser available and you had to recompile your kernel if you wanted to use a CD burner. I have scripts that won't run on Windows, symlinked configurations that won't work on Windows, and, of course, lots of things organized in a very UNIXy way.

Pro-Windows

The nice thing about Windows 7, as opposed to previous versions, is that it's a bit easier to organize things in a UNIXy fashion. Plus, there's always Cygwin and so forth. But really, there's not much to say here. The only saving grace here is that most of my existing Linux investment isn't irreplaceable.

Verdict: Linux, obviously. But while the margin is pretty good, it's not overwhelming. I don't do that much in the way of heavy customization anymore, so it's actually not as big an issue as it would have been a few years ago.

Final Verdict

Well, I'd say it's pretty obvious at this point. The preponderance of the pros seems to be on the Windows side.

On the one hand, the idea of this transition makes me a little uncomfortable. I've been a Linux user for a long time, and will continue to be one on the server-side. But at the same time, the more I think about it, the more sense this transition makes. While I'm still a Linux fan, I've grown less concerned with the desktop side of it. These days I'm more interested in things "just working" than in twiddling with my desktop or finding free software to do something I could as easily do with freeware. By the same token, distance has dulled any animosity I may have harbored toward Windows.

So we'll see how it goes. I actually installed Win7 on my desktop yesterday afternoon (I started this entry a week ago - just didn't get around to posting it). I'll be posting updates on my difficulties and migration issues. Hopefully it will be a pretty smooth process.