The goal approacheth: 183.0

My weight-loss goal creeps ever closer, even with the loosened summer diet. This morning I weighed in at 183.0. That means only 3 more pounds to go.

It also means it's time for me to go shopping. My pants size is now a 34 waist, down from a 38, which means that nothing I have fits anymore. And they don't just not fit; they kind of look funny. My dress slacks in particular look like "clown pants," according to my wife. (Which is too bad, because I got some new dress clothes just last fall.) I actually had to go out and buy new clothes so that I could go to my cousin's wedding a few weeks ago. Unfortunately, I didn't think of that until the same morning. But that at least meant I got the early-morning sale at J. C. Penney.

I'm also going to have to have my wedding ring resized. I've lost enough weight that it literally slides off my left ring finger. I was wearing it on my right hand for a while, but now my right ring finger is getting too small too. So now I'm starting to put it on the middle finger of my left hand.

The weird thing is that I've lost 50 pounds and I still have a lot more fat on my body than I thought I would. I guess that's not surprising, since I'm still overweight according to the BMI charts, but it still feels strange. I mean, those weight thresholds always seemed kind of low. After all, many people who are technically "overweight" don't look like they need to lose any weight. But I guess the BMI charts are just biased toward being very lean. It's all a matter of where you draw the line on what's acceptable.

From the aggregator

Some quick links and commentary on news that's shown up in my RSS aggregator this week.

Longhorn Reloaded was shut down.
Apparently some people tried to take a copy of Windows Longhorn (the code-name for the beta releases of Vista) and publish an improved version of it. Naturally, Microsoft put the kibosh on that. The only news here is that some of them were actually surprised by that.

Dvorak says what I've been thinking.
Yeah, I'm sure the iPhone is great, but seriously, how many people are going to spend $600 on a freakin' cell phone? Unless you're rich, that's gadget-lust taken to an unhealthy extreme. Plus I'm getting really sick of it being the topic of every other link that gets posted to Digg.

Mono team does Silverlight in 21 days.
Well, sort of. From the sound of it, they have a working prototype rather than anything releasable. It's quite an accomplishment, but a prototype doesn't feed the self-congratulatory über-hacker mythology that programmers in general, and the open-source community in particular, like to cling to quite as well as the headline suggests. But still, great job Miguel and company!

Westciv is running their free HTML and CSS courses again.
Right now they're on week 2 of CSS. I did these a few years back when I was first getting into web development and they're pretty good. If you want to learn web design, or if you're one of those backward incompetents who is still writing tag soup and freely mixing content with presentation, you should give it a try.

Mark Rasch on the legal implications of letting Google store your documents.
I've always been a little queasy about storing my personal stuff on somebody else's server. Apparently I'm not just paranoid.

Sympathy for the devil

I know there are those in the Linux community who will regard this as equivalent to spitting on the cross while blaspheming the Holy Spirit, but sometimes I feel sorry for the people at Microsoft. They have a tough job, and they are frequently blamed for things that aren't their fault.

What made me think of this was Jeff Atwood's follow-up to his Windows spyware post.

I understand the pressure to be backwards compatible. There's no end of Vista blowback based on minor driver compatibility issues. The "if it doesn't work, it's automatically Microsoft's fault, even if the software or hardware vendor is clearly to blame" mentality is sadly all too common. But given the massive ongoing Windows security epidemic, was defaulting regular users to Administrator accounts-- exactly like Windows XP, Windows 2000, and Windows NT before it-- really the right decision to make?

In many ways, Microsoft is in a perpetual no-win situation. On the security front, for example, they are besieged by the evil forces of malware. However, every time they try to improve the situation, end users scream bloody murder. Was the weird "quasi-administrator with prompting" a good idea? Probably not, but it's better for the average user than silently allowing the installation of spyware, and yet everyone seems to hate it. But what's the alternative? To make accounts regular users by default? How would the average Windows user feel about that? I don't know, but I have read many comments by recent converts to Linux who seem to think that entering a password just to install some software is completely stupid and unreasonable, so I can't imagine it would be universally hailed as a great improvement.

And, of course, there's always the breakage that accompanies any OS upgrade. Remember Windows XP Service Pack 2? I seem to recall widespread complaints about things breaking when that was rolled out. And we're seeing the same thing now with Vista. And who do people blame? Is it the ISVs who are still coding with Windows 98 in mind? No - they blame Microsoft.

What really kills me about this situation is when I read Raymond Chen's accounts of the efforts of the Windows App Compatibility team. For example, consider this sample chapter from his book. From the cases he mentions there, it is completely obvious that Microsoft takes backward-compatibility seriously. Very seriously. In fact, you might even say they take it too seriously.

Think of it this way. On Linux you're lucky if you can get source compatibility for an application that's more than 5 years old. Microsoft has binary compatibility with a large range of programs that are 10 or 15 years old. They're working with third-party binaries, diagnosing obscure bugs, and implementing fixes to keep the applications working, even though it's by sheer luck that they ever worked in the first place. As a programmer, it's hard to overstate how impressive this is. And yet all anyone ever focuses on is the problems they didn't fix.

Then there's the political angle. There are lots of people out there who seem to think that Microsoft can do nothing right. Everything they do is viewed with suspicion. Anyone who works for Microsoft has to contend with accusations that he is either in on the conspiracy or is bowing down to "the man" every time he says something the MS-haters don't like. That's got to be at least a little demoralizing. And while a certain degree of animosity is certainly warranted (as it is with practically any large business), it's not like Microsoft has been running child sweatshops or dumping toxic waste in the local drinking water. It just seems way out of proportion.

So no matter what the people in Redmond do, it seems like there's always somebody pissed off at them. And it's a shame, because they really do do some good work. The .NET world has lots of neat technologies and is a very cool place for developers. Even though OpenOffice may be good enough for many, MS Office is by far the best office suite available. And, last but not least, Windows, despite its flaws (and they are legion), is a very impressive feat of software engineering. Not to mention that Microsoft employs a number of very bright people.

So, to the Linux community, I say: let's give Microsoft a chance. I'm not asking you to like MS, or to start using any of their products. Let's just be honest and realistic. Most people in the community aren't shy about placing blame on Microsoft, so let's also give credit where credit is due. We rightly object when people blame free software for not being a panacea, what with hardware incompatibilities and the lack of certain software. We should hold MS to the same standard and not condemn them for failing to satisfy everyone.

Is it a phone or a Beowulf cluster?

I saw something on TV last night that really struck me. In fact, for a second I thought I was dreaming, since I had dozed off for a while. But I think it was actually real: an iPhone commercial that touted the ability to watch YouTube videos as a selling point.

Being the curmudgeon that I am, my initial reaction to this was, "Just what we need, another overly expensive way to waste time." Now you can spend $500 to watch crappy home movies and/or blatantly pirated TV shows on a 3.5-inch screen, rather than paying $400 to watch them on a 17-inch screen. Personally, the only way I can tolerate watching videos on a 19-inch wide-screen monitor is if they're full-screen. The little embedded Flash viewer drives me crazy. Plus, I was never able to get too excited about YouTube in general. It's a nice, well-designed site, but I'd usually rather have a podcast in the background while I work than stare at my monitor for half an hour.

But after thinking about it a little longer, what I found really interesting about this commercial is what it implies: the cell phone is dying out. In fact, I predict that by 2015, perhaps even 2010, there will no longer be such a thing as a cellular telephone. The mobile phone will be a thing of the past. People will look at them the way they look at rotary telephones now, as a quaint reminder of how backwards we used to be.

Instead, the "phone" of the future will be like the iPhone, i.e. a very small multimedia computer that just happens to have voice communication features. We're already part way there when you think about it. People use their cell phones extensively for things like playing games, sending text messages (a.k.a. the poor man's IM), taking and trading pictures, and listening to music. Phones have turned into PIM/multimedia appliances. Actual voice communication has become almost secondary.

The best part about this is the rate at which the technology-to-price ratio is improving. When I got my first pre-paid cell phone seven years ago, it was about the size of a regular phone handset, rather expensive, and didn't do anything except make and receive calls. Then, four years ago, I signed up with Verizon and got two stick phones with biggish black-and-green screens, text messaging, calendars, and other simple tools for less than the one pre-paid phone had cost. Two years later, for only slightly more than the two stick phones, I upgraded to two Samsung SCH-A670 camera phones with color screens, plenty of bells and whistles, and the ability to run BREW programs. Currently, I'm using an LG VX8300, which features, among other things, a 1.2 megapixel camera, an MP3 player, surprisingly good speakers, and a 1GB MicroSD slot to hold all that media. And once again, I didn't pay any more for this upgrade than for the last one.

In a few years, I'd love to have a phone powerful enough that I could use it to do actual work. Maybe something I could plug a roll-up keyboard into. Or maybe I could get something with a USB or FireWire port and an operating system with some degree of flexibility. Or maybe they can just invent a 4-inch laptop and put a phone on it! After all, that's practically what the high-end smart phones are. Now it's just a matter of waiting for the prices to come down.

Professional incompetence

There were a few threads on the Joel on Software discussion forum last week that made for interesting reading on just how wrong a developer can go, with or without realizing it.

The short version is that poster Brice Richard took issue with the recommendations of other posters that applications should be written as, in his words, "all mini-functions." He then went on to describe his coding methods and argue that his way was right; if not for everyone, then at least for his circumstances.

Now, the title of that second thread was actually a little misleading. I read that one first, and I initially thought he was referring to the arbitrary guidelines on subroutine size that you often hear thrown about without any real supporting evidence. For example, some people say that a function should never be longer than a dozen lines, or that all functions should fit on one screen in your editor. That last one always sounds nice - unless you have poor eyesight and a small monitor.

But that's not what Mr. Richard was talking about. He only believes in writing separate functions when:

1) there is a bona-fide reason to believe that the code will be re-usable in other apps
2) when you have duplicative functionality within your application that results in the need to create a function that defines that functionality through either value or referential arguments allowing your code to execute variably when used.

And apparently he doesn't view 300+ line functions as a cause for concern. And all this from someone who has been in the consulting business for 8 years.
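For contrast, here's the kind of decomposition the other posters were advocating. This is a contrived C# sketch (the ReportGenerator class and its helpers are my own invention, not code from the thread): instead of one function that validates, formats, and writes a report in a single 300-line block, each step gets a small, named function.

using System;
using System.Collections.Generic;
using System.IO;

class ReportGenerator
{
    // The public entry point reads like a summary of the algorithm...
    public void WriteReport(IEnumerable<string> records, string path)
    {
        List<string> valid = FilterValidRecords(records);
        List<string> lines = FormatLines(valid);
        WriteLines(lines, path);
    }

    // ...and each helper does exactly one job.
    private List<string> FilterValidRecords(IEnumerable<string> records)
    {
        List<string> valid = new List<string>();
        foreach (string record in records)
            if (record != null && record.Trim().Length > 0)
                valid.Add(record);
        return valid;
    }

    private List<string> FormatLines(List<string> records)
    {
        List<string> lines = new List<string>();
        for (int i = 0; i < records.Count; i++)
            lines.Add(String.Format("{0,4}: {1}", i + 1, records[i]));
        return lines;
    }

    private void WriteLines(List<string> lines, string path)
    {
        using (StreamWriter writer = new StreamWriter(path))
            foreach (string line in lines)
                writer.WriteLine(line);
    }
}

None of those helpers will ever be reused in another app, and that's fine. The point is that WriteReport now reads like a table of contents, and each piece can be understood and changed on its own.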

Part of the reason for this is probably that Mr. Richard is, in his words, a self-taught programmer who always works alone. Of course, there's nothing wrong with being self-taught. In this business, we're all self-taught to various extents. To quote Dr. Kelso from an episode of Scrubs, "If you've been out of college for more than five years, half of what you know is obsolete." There are always new technologies to learn and most schools don't even come close to covering all of what you need to be a professional developer.

However, as Jeff Atwood pointed out today, working alone can be dangerous. And it's not just the project management issues described in the essay he cites. When you don't have other people to learn from, you end up having to work harder to educate yourself. You need to read more, experiment by yourself, and try to validate your results based on just your research rather than peer input. You have no one to practice with or bounce ideas off.

It's important to remember that this is 2007. Software development is still a young field, but it's no longer in its infancy. Translation: you don't need to go about rediscovering how to do things. When people talk about "doing things your own way" and the like, that's usually what they mean. It's much better to study the recommendations of the experts and use or adapt them as appropriate.

When you're working alone and aren't vigilant in your studies, it's easy to fall into bad habits. It's also easy to get stuck in your ways and not learn new techniques or technologies. The forum threads I mentioned above are a good example of this. Working alone isn't necessarily detrimental to your skills, but it's much better to have smart people around to give you an occasional reality check.

Of course, you can still end up working with other developers who have no concern for modern practices or procedures, and hence don't help much. But that's a different story....

Down to the last 5: 184.6

After successfully forgetting to weigh myself in the morning for several days, I finally managed to get a couple of readings in a row last week. For a while, I had been hovering around 187, but last Friday, I got a consistent reading of 184.6. That puts me at a total loss of 49 pounds and within 5 pounds of my goal of 180.

I've actually been loosening up on the diet lately. Partly it's because it's summer time, which means it's time for ice cream and cooking burgers on the grill. I've been careful not to overdo it, though - only one ice cream cone a week, and mostly chicken and turkey on the grill.

The bigger problem is that we've been very busy lately. I think we've been out of town four or five weekends in the last two months, which generally means we eat out. (Plus my cousin's wedding - he got married across town, but we still ate out for that.) Most of the other weekends we've spent most of the day working on the house or in the garden, and after that we often just don't feel like cooking. But so far, it doesn't seem to be a problem. Even when we eat out, I'm trying to choose the healthier foods and stick to sensible portions. (Translation: no, you don't have to eat that whole plate of fries!) At the very least, I'm not seeing any negative effects yet.

I think learning proper portion control has been a big help to me so far. I've learned to try to eat slowly and not force myself to finish an overly large serving. Most of us are used to the gigantic portions they give you at chain restaurants like Applebee's or Chili's, but the truth is that they normally give you enough food to feed two people. It was a big step for me just to realize how much food it takes to fill me up and stop there. Since it typically takes longer for my body to send the "full" message than it does for me to overeat, I've found that eating slowly helps to narrow the gap and keep me to a sane amount.

Desk upgrade

The second half of my latest upgrade arrived from NewEgg today. I ordered two more gigabytes of RAM - one for my desktop, one for my laptop.

As a brief aside, the laptop upgrade was much smoother than I expected. I'd never tried upgrading a laptop, so I wasn't sure how hard it would be. Turns out adding more RAM to my Inspiron B120 was actually pretty easy. I just followed the Dell service manual. The process was pretty much "open up the correct panel, then slide in RAM module."

The desktop upgrade arrived just in time, since I was about to disassemble and move the system anyway. That's because I just finished "upgrading" my computer desk.
[Image: My new computer desk]
We've consolidated office space, so Sarah and I are now sharing one large desk. Of course, I still have to finish putting the doors and drawers back in and get some keyboard trays, but it's basically done. I built it out of kitchen cabinets and a laminate countertop. It's a little higher than a normal desk, but it should serve us well. It will also be a lot sturdier than the pre-fab fiber-board desks it's replacing.

Things I don't care about

Following in the spirit of Mark Pilgrim's post from the other day, I thought I'd put together a short list of things I don't care about. I do this mostly because I'm tired and grumpy, and it's hard to come up with positive, insightful commentary when you're tired and grumpy.

1) Safari on Windows. Maybe it will be good for testing. But then again, I didn't care about Safari when it was Mac-only, and I see no reason to change my attitude now.
2) The iPhone. Yeah, it looks very cool and it's probably much easier to use than any other cell phone on the market, but you can buy an actual computer for less money.
3) Font rendering. Joel Spolsky and Jeff Atwood both commented on the "revelation" that Safari for Windows was using Apple's font rendering engine instead of the Windows one. I've heard many complaints about the font rendering on Linux too. Who cares? I never got the obsession some people have with how their fonts look. As long as I can read it without getting eye strain, I'm happy. Hell, half the time I can't even tell the difference between two similar fonts.
4) VB6 programmers. I came across this link from a couple of years ago lamenting that .NET was killing hobbyist programmers. It's an argument I've heard before: .NET is just too hard compared to VB6. Well, too bad. Learn to freaking program. VB6 seemed good 10 years ago, but in retrospect, it was nothing but a recipe for hideously bad code and huge magenta buttons. Good riddance! And I was a VB6 programmer, so I'm allowed to say that.
5) Out-sourcing/off-shoring. I'm sick of hearing programmers wailing about their jobs being moved to India or China. You know what? If your job is really in danger from that, it probably sucked anyway. Upgrade your skillset and next time don't work as a code monkey.

Playing with Monodevelop

This week I've been looking at Mono a bit more. I've been trying to spend more time on one technology at a time rather than jumping back and forth between a bunch of different things. So for this week, I'm putting off Ruby on Rails and doing Mono.

As someone who programs on both Windows and Linux, I think Mono is a very good thing. If nothing else, it has the benefit of making mainstream Windows programming knowledge usable on Linux. And really, C# and the .NET framework are pretty good in their own right. Mono gives us another good language and framework, with a high degree of source and binary compatibility with Windows. It even has a GUI toolkit that isn't horrifyingly ugly like Java's Swing. What's not to like? Unless you're a rabid Microsoft hater, nothing.
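To make the compatibility claim concrete, here's about the smallest possible demonstration (a trivial sketch, assuming a stock Mono install): the same C# source builds and runs unchanged on both platforms.

using System;

class Hello
{
    static void Main()
    {
        // Compile with csc on Windows or gmcs on Linux; the resulting
        // hello.exe is plain CIL and runs under either the Microsoft
        // runtime or mono.
        Console.WriteLine("Hello from " + Environment.OSVersion);
    }
}

On Ubuntu that's "gmcs hello.cs" followed by "mono hello.exe", and the same hello.exe copied to a Windows box runs as-is.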

So far, my complaints are mostly with the development tools. I'm using Monodevelop 0.12 from the Ubuntu Feisty repositories. While Monodevelop is a fork of SharpDevelop, it is, sadly, not quite as good. It's missing many of the nice little features #Develop has. However, my main complaint is those damn panels. You can't easily minimize groups of them and the positioning gets screwed up when you close them. And if you leave them open, you get stuck writing in a tiny code window.

It's also somewhat annoying that there is no WinForms designer for Mono. Apparently this is because appropriate designer surfaces haven't yet been implemented in the framework, but it still means you're stuck laying out forms by hand, like in the sketch below. After all, if you want to write a cross-platform GUI, WinForms is the obvious choice.
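Hand-coding a form isn't terrible, but it's exactly the boilerplate a designer would normally generate for you. A minimal invented example:

using System;
using System.Windows.Forms;

class MainForm : Form
{
    private Button greetButton;

    public MainForm()
    {
        // All of this is layout code a designer would otherwise emit.
        Text = "Mono WinForms test";
        greetButton = new Button();
        greetButton.Text = "Greet";
        greetButton.Dock = DockStyle.Fill;
        greetButton.Click += new EventHandler(GreetButton_Click);
        Controls.Add(greetButton);
    }

    private void GreetButton_Click(object sender, EventArgs e)
    {
        MessageBox.Show("Hello from System.Windows.Forms!");
    }

    [STAThread]
    static void Main()
    {
        Application.Run(new MainForm());
    }
}

Compile it with a reference to System.Windows.Forms and it runs under Mono's Managed.Windows.Forms implementation as well as on Windows, at least for simple cases like this.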

My other complaint is with gnunit2, version 1.2.3 of which is included in Ubuntu. It has no options, doesn't reload the assembly between test runs, and includes a "preferences" menu item that doesn't do anything. Of course, there's always the NUnit plugin for Monodevelop, but the aforementioned panel annoyance means it isn't significantly better.

On the up side, a new version of Monodevelop came out today. I'll have to try it out and see if it's any better. I only hope that I don't end up having to recompile huge numbers of packages in order to get it working.

Advance your career by losing hope

This week I finally decided to take the plunge: I started working on my résumé. That's right! After six years I have finally decided that it's time to get my career moving and so I have officially entered the job market.

Ah, job hunting! It's quite the experience, isn't it? I'd almost forgotten what it was like. There really is nothing like a good job search to make you feel like a useless, incompetent sack of crap!

I don't know about other industries, but this is definitely the case in the IT world. If you've ever looked for a job in software development, you know what I'm talking about. For every reasonable job listing you see, there are twelve that absolutely require 10 years of experience with a laundry list of excruciatingly specific technologies, strong interpersonal skills, a Mensa membership, and a strong track record of miraculous healing. And that's for an entry-level position. With a typical career path, if you start early, you should be ready for their grunt-work jobs by about the time your kids are graduating from college and moving back in with you.

The listings that have really been killing me, though, are the absurdly specialized ones. Not the ones that require 5 years experience with ASP.NET, C#, Java, Oracle, SQL Server, SAP, Netware, Active Directory, LDAP, UNIX, SONY, Twix, and iPod - they're just asking for the kitchen sink and hoping they get lucky. I'm talking about listings like the one I saw that required advanced training in computer science, a doctorate in medical imaging, and 10 years of experience developing imaging software. Or, perhaps, all those listings for a particular defense contractor that required experience with technologies I had never even heard of. I mean, I couldn't even begin to guess what these abbreviations were supposed to stand for, and I'm pretty up on my technology! When you come across a lot of listings like that at once, it can be a little depressing. "How am I ever going to find a job? I don't even know about grombulating with FR/ZQ5 and Fizzizle Crapulence GammaVY5477 or how to do basic testing of quantum microcircuits using radiation harmonics with frequency-oscillating nano-tubes on a neural net. Every idiot understands that!"

But the real killer for me is location. I'm in the Southern Tier of New York State, which is not exactly a hotbed of tech startups. I like the area and don't really want to move, but there's practically nothing here in terms of software development. The best possibility I found was a local consulting company 10 minutes from home. However, when I sent them a résumé, I got a message back saying that they were currently unable to add new positions due to the fact that they were going out of business. I've applied for a couple of other semi-local positions, but of all the possibilities I've found, the closest is about 50 miles from my house. Workable, but not a situation I'm crazy about.

I'm now starting to think seriously about relocating. I don't really want to move to the west coast, both because of the cost of living and on general principle, so I'm thinking of looking either downstate (i.e. New York City) or south to the Washington, D.C. or Atlanta metropolitan areas. All three of those seem to have a fair number of positions in software development.

However, I'm faced with something of a moral dilemma. You see, having been born and raised in upstate New York, it is my patriotic duty to hate New York City. But as a New Yorker, it is also my patriotic duty to look down on the South and New Jersey. That leaves me wondering whether I'm forced into choosing Washington, or whether it counts as "the South" too and I'm just out of luck.

In the end, I guess I'm just not that patriotic. All three of those cities sound good to me. But New Jersey is another story.

The anti-agile

I think I've put my finger on the reason that, despite being a rookie in its practice, I'm so enamored of Test-Driven Development. It's because TDD, despite being an "agile" programming practice, is the very inverse of agile methods. It's good, old-fashioned design. The only difference is that it's dressed up to look like programming.

At least that's what I get from reading Dave Astels' thoughts on TDD. He's an advocate of Behavior-Driven Development, an evolution of Test-Driven Development in which the emphasis is shifted from testing units of code to describing the desired behavior of the system. The idea is that rather than concentrating on writing a test for each class method, you figure out what the system should do and design your test cases around that, rather than around the program structure. (Apologies if I mangled that explanation - I only learned about BDD today.)

This is somewhat similar to the other expansion of the initialism "TDD" that I discovered early in my research - Test-Driven Design. Note that the emphasis here is on using the unit tests as a design tool, i.e. specifying the behavior of the system. In other words, the unit tests are not written to do testing per se, but rather as a vehicle for expressing the low-level system specification.
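Here's a small sketch of what I mean by tests-as-specification, in NUnit-style C# (the Account class and the test names are invented for illustration). The point is that the tests read as statements about what the system should do, not as exercises of individual methods:

using System;
using NUnit.Framework;

// A toy class under test, invented purely for this example.
public class Account
{
    private decimal balance;
    public decimal Balance { get { return balance; } }

    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException("amount");
        balance += amount;
    }
}

[TestFixture]
public class AccountSpecification
{
    // Each test asserts one piece of desired behavior, so the suite
    // doubles as a low-level specification of Account.
    [Test]
    public void DepositIncreasesTheBalance()
    {
        Account account = new Account();
        account.Deposit(100m);
        Assert.AreEqual(100m, account.Balance);
    }

    [Test]
    [ExpectedException(typeof(ArgumentOutOfRangeException))]
    public void NonPositiveDepositsAreRejected()
    {
        new Account().Deposit(0m);
    }
}

Read the test names aloud and you get a little behavioral spec: deposits increase the balance; non-positive deposits are rejected.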

Having studied formal development methods, it seems to me that, at a fundamental level, practitioners of TDD and formal methods are really doing pretty much the same thing: trying to describe how a program is supposed to function. The TDD people do it by creating a test suite that asserts what the program should and shouldn't do. The formal methods people do it by writing Z specifications and SPARK verification conditions. Both are creating artifacts that describe how the system is supposed to work: the formal methods people with verifiable mathematical models, the TDD people with executable tests. They're just two different ways of trying to get to the same place.

In a way, this seems to be the opposite of what agile methods promote. Most of the agile propaganda I've read advocates doing away with specifications and documentation to the maximum extent possible. In fact, I've even read statements that the amount of non-code output in a project should approach zero. And yet TDD can be viewed primarily as a design activity of the type that, using a different notation, would generate reams of paper. The only thing that saves it is the technicality that the design language is also a programming language. It just seems a little ironic to me.

I suppose that's really the genius of TDD. It's the type of specification activity programmers are always told they should do, but repackaged into a code-based format that's palatable to them. It's sort of a compromise between traditional "heavy-weight" design methods and the more common "make it up as we go" design. I don't want to say it's the best of both worlds, but it's certainly better than the undisciplined, half-assed approach to design that's so common in our industry.

More on TDD

I think I may have to break down and buy a book on test-driven development. Maybe Kent Beck's Test Driven Development: By Example. I usually don't like to pay for programming books because they tend to be expensive, have low re-read value, and cover material you can often get online. However, this is more a methodology topic than a technology topic, so those objections don't really apply. Plus, they don't have anything on TDD in the local Barnes & Noble, so I can't read it without shelling out. Rats!

I really just need to do some more reading and get a little more practice. There are still some aspects that I'm not quite getting. Then again, it's been less than a month since I started looking at TDD and I haven't had as much coding time as I would have liked.

The problem seems to be two-fold. First, I'm not yet certain how to write really good unit tests. Second, and the reason I was looking at Beck's book, is that I haven't seen any really good examples yet.

Most of the articles I've read suffer from the same pedagogical flaws as tutorials on object-oriented programming: they're too simple to be useful. An article doesn't provide enough space to work up to a good example. You either end up jumping right in on the deep end, which makes it useless for beginners, or you do something small, manageable, and not especially useful in real life.

Another problem I'm having is working with mock object frameworks like NMock. At this point I'm still working on where the line is between adding mock objects to my unit tests and refactoring things so that it's not an issue. I suspect this is because I have yet to internalize the line between unit and functional tests. Hence the need for a more extensive treatment.
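For reference, here's roughly what I've been writing, as a sketch (the IMailSender interface and ReminderService class are invented, and I'm going from the NMock 2 style of API, so take the details with a grain of salt):

using NMock2;
using NUnit.Framework;

// An invented dependency worth mocking: something with a side effect.
public interface IMailSender
{
    void Send(string address, string message);
}

// An invented class under test that calls the dependency.
public class ReminderService
{
    private IMailSender sender;

    public ReminderService(IMailSender sender)
    {
        this.sender = sender;
    }

    public void Remind(string address)
    {
        sender.Send(address, "Don't forget!");
    }
}

[TestFixture]
public class ReminderServiceTests
{
    [Test]
    public void RemindSendsExactlyOneMail()
    {
        Mockery mocks = new Mockery();
        IMailSender sender = (IMailSender) mocks.NewMock(typeof(IMailSender));

        // Declare the expected interaction instead of asserting on state.
        Expect.Once.On(sender).Method("Send").With("me@example.com", "Don't forget!");

        new ReminderService(sender).Remind("me@example.com");

        // Fails the test if the expectation above wasn't met.
        mocks.VerifyAllExpectationsHaveBeenMet();
    }
}

The part I haven't internalized is when an interaction-based test like this is worth it, versus refactoring so that plain state-based assertions suffice. That's really the unit-versus-functional line I keep tripping over.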

It's only a matter of time and practice. Even my limited experience with TDD so far makes me feel that I've made significant progress as a developer. Which actually feels both good and bad. It's good to learn new techniques and improve. But on the other hand, I feel like kind of a schmuck for taking so long to get to the party. But at least I'm not the last one....

Metabar and weird templates

[Image: Konqueror with a small Metabar panel]
The other day I was playing with the Konqueror Metabar. The Metabar is a handy little sidebar panel that gives you information on the selected file or directory, along with quick-access links to the KDE "open with" and "actions" menus, file previews, and some other features. However, the default Metabar theme in Kubuntu is kind of annoying in that, as this screen shot shows, when the panel is too small, long lines wrap over other lines and make the text basically unreadable.

So I tried building my own theme. Since the theme system is based on KHTML, you can build a theme using HTML, CSS, and JavaScript. It's just that it's weird HTML. Here's a sample of what the code for the "open with" panel looks like:
<div class="frame">
<div class="head"><a i18n image="run" class="title" onClick="this.blur();" href="function://toggle/open">Open With</a></div>
<li id="open" expanded="true" style="height:0px">
</li>
</div>

Note that the README for the Metabar code said not to change this markup.

My initial reaction to this was, naturally, "What the hell is this?!" Not only is this non-semantic, not only is it invalid, but it doesn't even make sense! Why the lone LI hanging out in the middle of the DIV? It's obviously where Metabar is injecting the KDE "open with" items, but why is it an LI? I would have thought a UL or another DIV would make more sense. What is Metabar injecting anyway?

I never did get a good layout. I couldn't seem to get the panels to correctly resize. All I really wanted was for them to expand to fit their contents. It should have been as simple as a "height: auto", but for some reason, it just didn't work. I'm not sure why - there didn't seem to be anything in the stylesheet that would interfere with that. Though it's hard to be sure without knowing exactly what the application is adding to the markup.

It might possibly be something built into the binary. The default theme certainly seemed to be. The application's theme directory only contained one theme, and it wasn't the Kubuntu one. I guess I'll have to go to the source to figure that one out. Presumably Metabar is in one of the KDE add-on packages. I just have to figure out which one.

Tools I can't do without: VMware

We all have those few programs we can't do without. For the non-technical user, the list might include Internet Explorer and Outlook Express. For the hard-core geek, it might be Vim/Emacs, GCC, and GDB. As for me, lately I've found that VMware is way up on that list.

[Image: VMware Player running Kubuntu 6.10 under Kubuntu 7.04]
This is particularly the case when it comes to testing and evaluating software. If it's anything even remotely "big," such as Microsoft Office, or if it's something I'm doing for work or casual research and am not planning to keep installed, I'll just break out a VM, install the software, and then blow the VM away when I'm done. In fact, I keep a directory full of compressed VM images of various pre-configured test setups for just this purpose. When I need to try something out, I decompress one of the images, do my thing, and then delete it when I'm all done. It's kind of like the old-fashioned "take a disk image" approach, only way, way faster. Honestly, VMware makes things so easy, it baffles me that people still bother with physical machines for testing applications. It's so... 1990s.

But VMware is great for regular work too. The performance is quite good, so if you have even a middle-of-the-road system, you can run medium- to heavyweight applications in a VM without too much pain. This is especially useful if you happen to be stuck running Windows, because some things, such as getting the tools you need, are just so much easier to do in Linux. Of course, virtualization can't beat running natively, but flipping back and forth between your regular desktop and a VM is a lot less cumbersome than dual-booting or having two computers.

Of course, my whole-hearted adoption of virtualization is not without its price. This week I found myself looking up prices on new RAM sticks for my desktop and laptop. The main benefit I envisioned? I could comfortably run more than one instance of VMware! It's the perfect answer to, "What could you possibly do with 4GB of memory?"

If you've never used any virtualization software, you really need to check it out. It's a godsend. I use VMware Player because it's free and available on both Windows and Linux. QEMU is also a fairly nice cross-platform solution. And for the Windows-only crowd, there's always Virtual PC. They might take a little getting used to, but it's well worth the effort.

On motivation and brain rust

Things at work have been slow lately. Really slow. The fact is we're severely over-staffed and there just isn't enough work to go around right now. This has left me with an amount of down-time that I find...uncomfortable.

You'd think this might be a good position to be in. After all, it leaves me plenty of time to read up on development practices, the latest technologies, and keep up on my tech blogs. I have spare time to experiment with Ruby on Rails and play with Mono. I can browse through Code Complete or Refactoring at my leisure. What more could a programmer want?

As I've been discovering, this isn't quite as nice in practice as it seemed at first. For starters, there are still the miscellaneous menial tasks that need to be done - software installations, security and configuration changes, and just general support calls. These are excellent for knocking you out of the zone. Furthermore, the constant threat of them makes it a bit harder to concentrate in the first place.

Second, while reading and experimenting to build your skill-set is great, you need the right environment to make it truly profitable. You need some degree of freedom and some goal to work towards. Or at least I do. I find I learn the most when I have something I'm trying to accomplish. I also need periodic breaks to process and assimilate what I'm studying. It just doesn't seem to work as well when my "goal" is to stave off boredom and my breaks are scheduled.

Last, I've found that it's becoming a challenge just to stay sharp at times like this. You see, I need some sense of purpose to keep my edge. I feel like I'm chained in a room for 8 hours a day being forced to do nothing but pour water back and forth from one bucket into another. It feels like my brain is starting to rust.

This is exactly the opposite of how to motivate programmers. We crave interesting problems to solve. Or, at the very least, some problems to solve. Playing the point-and-click Windows monkey won't do it, and I can only stand to read so many hours a day.

The problem is, the more I try to spend my time improving my knowledge and skills, the more unbearable I find my condition. I feel like I should be "out there" putting what I learn to use, but instead I have to sit at my desk for...no particular purpose. And what's worse, it's sapping my mental and emotional energy. After being stuck in the office all day, trying to keep from going stir-crazy, I feel like I've got nothing left when I get home. It's turning into a vicious cycle.

Have other people been in a situation like this? How do you deal with it? I mean, short of quitting (which I'd do if not for that damn mortgage). Are there any coping strategies to hold me over until I can get out?