Back to the movies: The Eternals

A couple of weeks ago, my wife and I took a nice, relaxing long weekend at Keuka Lake.  We had some nice meals; went to the Glenn H. Curtiss Museum, which I'd never visited before; and we tasted some very nice wines, particularly the Cabernet Franc from Domaine LeSeurre and several of the wines at Dr. Konstantin Frank.  We even did something we haven't done in the almost two years since the pandemic started - we went to a movie!  It was a late afternoon show and there were only a couple of other people in the theater, so it was pretty nice.

The only down side was that the movie we saw was Marvel's Eternals.  Spoiler alert: it wasn't very good.  (But seriously, there are a couple of spoilers.)

Honestly, I didn't have high hopes going into this movie.  I saw the last two Avengers films and, frankly, after those I'm kind of done with the Marvel Cinematic Universe.  It's not that those particular movies were bad, it's just that I'm tired of the whole concept.  There are too many characters, too many movies, too many attempts to tie them together.  The movies aren't that good and I just don't care enough to even try to keep up with them.  And I went into this film knowing basically nothing about The Eternals other than being vaguely aware that it was the title of a comic book in the Marvel Universe.

On the up side, the special effects were very good.  I mean, for the most part.  (But for the kind of budget Marvel movies get, they damned well better be.)  And I guess some of the action scenes were entertaining.  Unfortunately, that's about it.

I had a number of problems with this film.  One of the overriding issues is probably that they actually try to develop all of the Eternals as characters, at least to some extent.  Normally, this would be a good thing.  But there are like ten Eternals and this is only a two and a half hour film.  There just isn't time to develop that many characters to a significant extent and it didn't really work.  They gave most of the characters a little development, but it wasn't enough to make me actually care about them.  So all it really did was drag out the movie and slow down the pace.

The two characters that they did put more effort into were the leads, Sersi and Ikaris.  This was also a problem, because they didn't do a good job.  These characters were supposed to have had a very long-term romantic relationship in the past, which was shown in a number of flashbacks.  However, the actors had absolutely no chemistry at all.  I mean, to me it not only didn't look like they were in love, I wasn't even convinced that they liked each other all that much.  The end result was that the relationship angle didn't land at all and the scenes that were trying to develop it were just tedious and unengaging.  The only silver lining was that the leads were so boring and unlikable that they made the other characters more relatable.

Not that most of those were much better.  The actors didn't necessarily do a bad job, but they didn't have much to work with.  And I'm a little mystified by the casting.  I mean, aren't Salma Hayek and Angelina Jolie kind of big names to be taking what amounted to bit parts?  Are their careers in the toilet or something and I just didn't know it?  It's not like they got no screen time, but they were definitely not focal characters.  Most of the focus was on Gemma Chan and Richard Madden, who are not unknowns, but are decidedly "small" names by comparison (as were most of the other Eternals).  And it's not like this was a compelling artistic choice, like Miloš Forman casting a relatively unknown Tom Hulce as Mozart in Amadeus.  Chan and Madden weren't a phenomenal combination, they didn't have amazing chemistry - they were "fine" at best.  It just seems really odd to have such big names in the film if you're not going to use them.

But, of course, my main issue was with the writing.  Inspired by this movie, I'd like to propose a new law: Screenwriters are hereby prohibited from writing characters who are supposed to be significantly older than the average human life span.  

Seriously, the Eternals are supposed to be 7000 years old.  They've been around humans that entire time.  They were supposed to have disbanded as a team and assimilated into the human population something like 500 years ago.  And yet their actions and motivations are portrayed as the kind of thing you'd see from a teenager or twenty-something.  It's absurd.  I know they're technically not supposed to be human, but they're certainly portrayed that way.  Yet we're supposed to believe that they haven't matured or developed a wider perspective in 7000 years?  Come on!  I know everybody has issues, but I kind of feel like a few centuries should be more than enough time to deal with them.  But maybe my expectations are a little high.  

The one I found especially galling was Ajak's change of heart.  She actually remembered all of the planets that she'd helped Arishem destroy to hatch new Celestials, but when she saw the Avengers undo Thanos' "snap", she decided that this planet was different, that the people on this world deserved to live.  But what about all those other worlds she helped destroy?  Were they just populated by no-account NPCs who didn't deserve to live?  What about the dynamism added to the universe by the rise of new Celestials and their continued creation of innumerable new worlds and galaxies?  Does she just not think that's important anymore?  Does creating a handful of superheroes really make Earth so much more special than all the others?  So nothing she saw in the previous 7000 years convinced her that humanity was worth saving, but the Avengers completely changed everything?  To put it generously, the moral calculus of that analysis seems a little sketchy.  You'd think someone who's been around that long would have put some more thought into ethics.

Sorry, but this whole thing is just stupid.  And that's my main problem at the end of the day: the plot was just stupid.  They spent too much time trying to develop the characters and didn't leave enough time to make the plot actually make sense.  If they'd been successful in making a compelling, character-driven story, then maybe it could have been OK.  But they weren't.  The dialog was clumsy and the characters were one-dimensional, with the result that I couldn't maintain enough suspension of disbelief to overlook the plot holes and simplistic characterization.  This is why I stopped caring about the MCU.

Actually, maybe that disappearing "knowledge" is OK

A couple of weeks ago I posted an entry about the disappearance of online academic journals and how that's a bad thing.  Well, this article made me rethink that a little bit.

The author, Alvaro de Menard (who seems knowledgeable, but on whom I could find no background information, so caveat emptor), apparently participated in Replication Markets, which is a prediction market focused on the replicability of scientific research.  This is not something I was familiar with, but the idea of a prediction market is basically to use the model of economic markets to predict other things.  The idea is that the participants "bet" on specific outcomes and that incentives are aligned in such a way that they gain if they get it right, lose if they get it wrong, and maintain the status quo if they don't bet.
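
As a toy illustration of how those incentives line up, here's a sketch in Python.  The numbers are invented and this is a simplification, not the actual Replication Markets mechanics: imagine shares that pay out $1 if a study replicates and nothing if it doesn't.

# Toy model of a binary prediction market (invented numbers).
# A share pays $1 if the study replicates, $0 if it doesn't.
PRICE = 0.40  # market price, i.e. an implied 40% chance of replication

def payoff(bet_replicates: bool, did_replicate: bool, shares: int = 10) -> float:
    """Profit (or loss) from buying 'shares' shares on one side of the bet."""
    cost_per_share = PRICE if bet_replicates else (1 - PRICE)
    won = (bet_replicates == did_replicate)
    return shares * ((1 - cost_per_share) if won else -cost_per_share)

print(payoff(bet_replicates=True, did_replicate=True))   # right: +6.0
print(payoff(bet_replicates=True, did_replicate=False))  # wrong: -4.0
# Don't bet and your balance just stays where it is - the status quo.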

In this case, the market was about predicting whether or not the findings of social science studies could be replicated.  As you probably know, half the point of formalized scientific studies is that other researchers should be able to repeat the study and replicate the results.  If the result can be replicated consistently, that's good evidence that the effect you're observing is real.  You probably also know that science in general, and social science in particular, has been in the midst of a replication crisis for some time, meaning that for a disturbingly large number of studies, the results cannot be replicated.  The exact percentage varies, depending on what research area you're looking at, but it looks like the overall rate is around 50%.

It's a long article, but I highly recommend reading de Menard's account.  The volume of papers he looked at gives him a very interesting perspective on the replication crisis.  He skimmed over 2500 social science papers and assessed whether they were likely to replicate.  He says that he only spent about 2.5 minutes on each paper, but that his results were in line with the consensus of the other assessors in the project and with the results of actual replication attempts.

The picture painted by this essay is actually pretty bleak.  Some areas are not as bad as you might think, but others are much worse.  To me, the worst part is that the problem is systemic.  It's not just the pressure to "publish or perish".  Even among the studies that do replicate, many are not what you'd call "good" - they might be poorly designed (which is not the same thing as not replicable) or just reach conclusions that were pretty obvious in the first place.  As de Menard argues, everyone's incentives are set up in a perverse way that fails to promote quality research.  The focus is on things like statistical significance (which is routinely gamed via p-hacking), citation counts (which don't seem to correlate with replicability), and journal rankings.  It's all about producing and publishing "impactful" studies that tick all the right boxes.  If the results turn out to be true, so much the better, but that's not really the main point.  But the saddest part is that it seems like everybody knows this, but nobody is really in a position to change it.

So...yeah.  Maybe it's actually not such a tragedy that all those journals went dark after all.  I mean, on one level I still think it's kind of a bad thing when potentially useful information disappears.  But on the other hand, there probably wasn't an awful lot of knowledge in the majority of those studies.  In fact, most of them were probably either useless or misleading.  And is it really a loss when useless or misleading information disappears?  I don't know.  Maybe it has some usefulness in terms of historical context.  Or maybe it's just occupying space in libraries, servers, and our brains for no good reason.

Disappearing knowledge

I saw an interesting article on Slashdot recently about the vanishing of online scientific journals.  The short version is that some people looked at online open-access academic journals and found that, over the last decade or so, a whole bunch of them have essentially disappeared.  Presumably the organizations running them either went out of business or just decided to discontinue them.  And nobody backed them up.

In case it's not already obvious, this is a bad thing.  Academic journals are supposed to be where we publish new advances in human knowledge and understanding.  Of course, not every journal article is a leap forward for humankind.  In fact, the majority of them are either tedious crap that nobody cares about, of questionable research quality, or otherwise not really that great.  And since we're talking about open-access journals, rather than top-tier ones like Nature, lower-quality work is probably over-represented in those journals.  So in reality, this is probably not a tragedy for the accumulated wisdom of mankind.  But still, there might have been some good stuff in there that was lost, so it's not good.

To me, this underscores just how transient our digital world is.  We talk about how nothing is ever really deleted from the internet, but that's not even remotely true.  Sure, things that go viral and are copied everywhere will live for a very long time, but an awful lot of content is really just published in one place.  If you're lucky, it might get backed up by the Internet Archive or Google's cache, but for the most part, if that publisher goes away, the content is just gone.

For some content, this is a real tragedy.  Fundamentally, content on the Internet isn't that different from offline content.  Whether it's published on a blog or in a printed magazine, a good article is still a good article.  A touching personal story is no more touching for being recorded on vinyl as opposed to existing as an MP3 file.  I know there's a lot of garbage on the web, but there's also a lot of stuff that has genuine value and meaning to people, and a lot of it is not the super-popular things that get copied everywhere.  It seems a shame for it to just vanish without a trace after a few short years.

I sometimes wonder what anthropologists 5000 years from now will find of our civilization.  We already know that good quality paper can last for centuries.  How long will our digital records last?  And if the media lasts 5000 years, what about the data it contains?  Will anthropologists actually be able to access it?  Or are they going to have to reverse-engineer our current filesystem, document, and media formats?  Maybe in 5000 years figuring out the MPEG-4 format from a binary blob on an optical disk will be child's play to the average social science major, who knows?  Or maybe the only thing they'll end up with is the archival-quality print media from our libraries.  But then again, given what the social media landscape looks like, maybe that's just as well....

Reinstalling Ubuntu

I finally got around to re-installing my desktop/home server the other day.  I upgraded it to Ubuntu 20.04.  It had been running Ubuntu for several years, and in the course of several upgrades, it somehow got...extremely messed up.

Of course, it doesn't help that the hardware is really old.  But it had accumulated a lot of cruft, to the point that it wouldn't upgrade to 20.04 without serious manual intervention - which, frankly, I didn't feel like taking the time to figure out.  And something had gone wrong in the attempted upgrades, because several programs (including Firefox) had just stopped loading.  As in, I would fire them up, get a gray window for a second, and then they'd segfault.  So it was time to repave.

Sadly, the process was...not great.  Some of it was my fault, some of it was Ubuntu's fault, and some of the fault lands on third parties.  But regardless, what should have been a couple of hours turned into a multi-day ordeal.

Let's start with a list of the things I needed to install and configure.  Most of these I knew going in, though a few I would have expected to be installed by default.  Note that I started with a "normal" install from the standard x64 Ubuntu Desktop DVD.  I completely reformatted my root drive, but left my data drive intact.  I had to install:

  • Vim.  I'm not sure why this isn't the standard vi in Ubuntu.
  • OpenSSH server.  I sort of get why this isn't included by default in the desktop version, but it kinda feels like it should be the default for everything except maybe laptops.
  • Trinity Desktop, because I really liked KDE 3, damn it!
  • The Vivaldi browser, because I really liked old-school Opera, damn it!
  • Cloudberry Backup, for local and off-site backups.
  • libdvdcss, because I've been ripping backups of all my old DVDs before they go bad (which some already are).
  • OwnCloud, which I use to facilitate sharing various types of media files between my devices.
  • The LAMP stack, for my own apps as well as for ownCloud.

The up side is that Cloudberry was super-easy to reinstall.  I created a backup of the files in /opt before reformatting, then just restored those and installed a .deb for the latest release (going from 2.x to 3.0) over top of them.  Worked like a charm!  Since it's a proprietary package with license validation, I'd been expecting to have to contact support and get them to release and refresh the license, but it turns out that wasn't necessary.

On the down side, a lot of other things didn't go well.  Here's a brief summary of some of the issues I came up against, for posterity.

  1. The video was corrupted.  This happened in both the installer and the main GNOME desktop.  It wasn't quite bad enough to be completely unusable, so I was able to get in and work around the problem, but it was close.  Fortunately, the issue went away when I switched to Trinity Desktop.
  2. The sound didn't work.  It seems that the system picked up the wrong sound card.  I'm not sure why.  I was able to find and fix that by installing PulseAudio Volume Control.
  3. The scroll buttons on my Logitech Marble Mouse trackball don't work.  I still haven't figured this out.  Of course, there's no graphical utility to re-map mouse buttons that I've found, so it's all editing config files.  I found several possible configs online (the general shape is sketched after this list), but they don't seem to work.  I might just have to live with this, because I'm not sure I care enough to devote the time it would take to dig into it.
  4. I forgot to take a dump of my MySQL databases before reinstalling.  Of course, this was completely my fault.  But the down side is that, from what I read, you can't just "put back" the old database files for InnoDB databases.  Apparently it just doesn't work that way.  Luckily I didn't have anything important in those databases (it was all just dev testing stuff), but it was still annoying.  (There's a note to future me after this list.)
  5. Re-installing ownCloud did not go as smoothly as anticipated.  In addition to the MySQL issue, it seems that Ubuntu 20.04 ships with PHP 7.4 out of the box, which is great.  However, ownCloud apparently doesn't support 7.4 yet, so it refused to run.  A quick search suggested that the "not working" parts were mostly in some tests, so I was able to comment out the version checks and get it to work.  I wasn't running anything but the standard apps on this instance, so it might not work in the general case, but it seems to be OK for what I need.
  6. Plex was annoying.  When I installed it, I got a brand new server that I had to claim for my account, which is fine, but the old instance was still present in my account.  Which is understandable, but annoying.  I had to re-add my libraries to my managed user accounts and was able to remove the dead instance from my authorized devices without much trouble.  It just took a bit to figure out what was going on.  Probably didn't help that both servers had the same name.  I also had to re-do all the metadata changes I'd manually made to some of my media files.  Next time I'll need to figure out how to back up the Plex database.
  7. I probably should have thought of this, but I didn't move my user and group files in /etc over to the new install.  Not a big deal, since I don't have that many, but it meant that several of the groups I created weren't present and were recreated with different GIDs than on the old install.  This was mainly an annoyance because it meant that some of the group ownerships on my data drive ended up wrong.
  8. I had some trouble getting Linga back up and running.  After setting up a new virtualenv, I kept getting errors from the Pillow image manipulation library that there was "no module named builtins".  This was really puzzling, because, as the name suggests, "builtins" is built into Python 3.  I initially assumed this was just my Python-fu being weak and that I had an error in my code somewhere.  But no - it was my environment.  After some Googling (well, actually Duck Duck Go-ing), I realized that this was a common problem with Python 2 to 3 compatibility and was reminded of this post that I wrote six months ago.  The short version is that Apache was running the Python 2 WSGI module.  That seems weird, given that the system only ever had Python 3, but apparently that's how the Apache module works.  Anyway, I installed libapache2-mod-wsgi-py3 (commands below, for reference) and everything was fine.
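
For posterity, here's the general shape of the trackball configs I was trying (the item 3 sketch mentioned above).  Treat it as a guess, not a working answer: it assumes the libinput driver, and the MatchProduct string and button number vary by device, which may be exactly where I kept going wrong.  It would go in a file under /etc/X11/xorg.conf.d/:

Section "InputClass"
    Identifier "Marble Mouse scroll emulation"
    # The product string is a guess - check yours with xinput list.
    MatchProduct "Logitech USB Trackball"
    Driver "libinput"
    # Hold the designated button and roll the ball to scroll.
    Option "ScrollMethod" "button"
    # The button number is also a guess; xev will tell you the real one.
    Option "ScrollButton" "8"
EndSection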
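
And a note to future me about item 4: take the dump before reformatting.  Assuming a standard MySQL setup, something like this would have saved me the annoyance:

# Before reformatting: dump everything, including stored routines.
mysqldump -u root -p --all-databases --routines > all-databases.sql
# After the reinstall: load it back in.
mysql -u root -p < all-databases.sql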
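
As for the WSGI issue in item 8, the commands for the fix, for reference (assuming the stock Ubuntu Apache packages):

# Swap in the Python 3 build of mod_wsgi and restart Apache.
sudo apt install libapache2-mod-wsgi-py3
sudo systemctl restart apache2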

All in all, this experience reminds me why I don't do this sort of thing very often.  All this tinkering and debugging might have been kind of fun and interesting when I was 25 and had nothing better to do with my time, but these days it's just tedious.  Now I'd much rather things "just work", and if I have to forgo some customization or flexibility, I'm kinda fine with that.

Hunkering down for the pandemic

This week marked the start of "work from home for the foreseeable future."  It's pretty much going to be nothing but sitting around the house and going out in the yard for a while.

This is because, in response to the coronavirus epidemic, pretty much everything is cancelled.  Last Thursday the news announced the first case of COVID-19 in Monroe County, New York, where I live.  That morning my boss advised all his reports to feel free to work from home.  The next day, the CEO sent out a message that we were closing most of our offices and ordering everybody to work from home.  On Saturday the county closed down all the schools.  Over the course of the rest of the week, the state government announced progressively more closures, and yesterday put the state on "pause".  This means pretty much everything is closed, i.e. all businesses deemed non-essential.  I'm still not 100% clear what counts as "essential", but it's not much beyond food and medical care.

Fortunately for me and my family, I made good career choices in my youth.  As a professional software developer, working remotely is generally not a challenge for me.  I'm also fortunate to be working for a company that sells BCDR solutions with a subscription model.  Particularly with the increased move to telecommuting, the need for backup solutions is not going to go away, and our subscription services mean that the company will have recurring revenue even if new sales decline.  So not only am I in a profession that is well positioned to weather a crisis like this, I'm working for a company that is not in any immediate danger of going out of business.

Sadly, this is not the case for many people.  My brother, who works for the state unemployment agency, is already preparing for a massive influx of claims as people who aren't able to work from home go on unemployment.  Local non-profits have already started sending out solicitations for donations and people on social media are calling for everyone to support local businesses that will be hard-hit by the closures.  I don't know what impact this pandemic will have on the economy, either locally or nationally, but it will definitely be bad.

And let's not forget the maniacs who are hoarding toilet paper and hand sanitizer and generally causing way more problems than they're protecting themselves from.  Or the other maniacs who are convinced that the pandemic is some kind of political conspiracy and take pride in flouting even common-sense safety measures.  I don't mean to downplay the seriousness of COVID-19, but some of the reactions have been completely out of proportion to the threat.  This is not the "insta-death virus", but it's not just a case of the sniffles either.  It needs to be taken very seriously, but it's not a reason to panic.  Panicking always makes any situation worse.

At this point, my only hope is that the response to the virus won't do too much damage to society as a whole.  Yes, it's a serious issue; yes, lots of people will die; yes, the state needs to take decisive action to slow or halt the spread of the virus.  But let us not forget that while the risk of illness is shared by all of us equally, the costs of the response are not.  More of the weight will undoubtedly fall on those least able to carry it.  While people like me will probably be pretty much OK, the status of those affected by business closures and quarantines is far less certain.  So while it's important that we take steps to protect the people who are most in danger from the virus, it's also important to protect the people who are harmed by those steps.  There are no free lunches - everything has a cost and everyone is worthy of consideration.

OK, that one wasn't good either

In my review of Star Wars: The Rise of Skywalker a few weeks ago, I mentioned that I'd enjoyed the other new Star Wars movies, with the exception of Solo: A Star Wars Story, which I hadn't seen yet.

Well, I saw it.  And...yeah.

On the up side, it was definitely better than The Rise of Skywalker.  But that doesn't make it good.  That would be like saying that Anders Breivik wasn't such a bad guy because he killed fewer people than Stalin.  It doesn't work that way.  So it was pretty bad.  I'd rank it as the second worst Star Wars movie so far.

The disappointing part was that for the first third to half of the movie, I was enjoying it.  It wasn't fantastic, but it seemed like a decent, straightforward action/adventure movie.  I distinctly remember thinking, "I don't know what people were complaining about.  This is actually not bad."

Then in the second half the story started going downhill.  They started throwing in plot twists and flipping the "good guys" and the "bad guys" and it just didn't really work.  Especially since a lot of the "good guys" were career criminals - are we really surprised they're not so good?  Come on.  Add in the corny dialog and some iffy acting that left the characters feeling unrelatable and that just killed it for me.

And then there were the parts that seemed like they were supposed to be some form of ham-handed social commentary.  For instance, Lando's droid that was constantly agitating for droid rights.  That could have been an interesting sub-plot.  Are droids really sentient?  What does sentience actually mean?  And if they are, why doesn't galactic civilization have a problem with basically enslaving them?  And for that matter, why don't they seem to have a problem with enslaving humans or other humanoids?  How do the characters relate to these questions?

But no - they don't even try to explore any of that.  They just have the droid assert that droids are sentient and need to be liberated and then walk around ranting about it like that friend who spends way too much time on Twitter and won't stop talking about politics.  "Yeah, we get it, Bob, the patriarchy is bad.  Now what do you want on your damn pizza?"  It ends up just being a punchline.

Bottom line: if you haven't seen Solo, don't bother.  My only consolation was that I was cleaning the house while I watched it, so at least the time wasn't wasted.  If you want a good Star Wars story, go watch season 1 of The Mandalorian.

Thank goodness Star Wars is over

My company has this cool tradition - when a new Star Wars movie comes out, they buy everybody a ticket to it.  In fact, for the larger offices, they even buy out a theater, which is really cool.  So, naturally, this year they sent us all to see Star Wars: The Rise of Skywalker.  And you know what?  I kinda wish they hadn't.

Warning: Spoilers Ahead!

I give that spoiler warning even though I don't think it's actually possible to spoil the movie.  Seriously, it's that bad.  If you haven't seen it, don't bother.  It's so bad I think it actually made Revenge of the Sith look good.

Lest I be accused of unfairness, let me clarify that I did not go into this movie expecting to hate it.  I hadn't read any reviews or rumors about it, so I really didn't know what to expect.  I liked the previous movies well enough and expected the same from this one.  I thought The Force Awakens was good and while The Last Jedi had some issues, I still enjoyed it.  I didn't see Solo, but I liked Rogue One a lot (in my opinion, it was the best of the new movies by far) and have been thoroughly enamored of The Mandalorian.  So I like Star Wars.  But I'm not a die-hard fanboy.  I'm not really familiar with the books, comics, games, or any of the other extended universe stuff - just the movies.  So I'm not looking to pick this apart in light of other sources.  All I'm really expecting is a good movie.  If not a great cinematic masterpiece, at least an entertaining action film.

I did not get what I was expecting.

On the up side, the special effects were top-notch, and the acting was generally fine given what the actors had to work with.  On the down side, everything else about the movie was awful!  The dialog was bad, but that's forgivable - this is Star Wars, not Citizen Kane, so you have to temper your expectations; the pacing was a mess, with suspenseful plot points that were resolved ten seconds later in the very next scene, like when you think Chewbacca is dead for maybe 30 seconds; the action scenes were a mixed bag, with some that were decent and others that were underwhelming, such as the big lightsaber duel on the ruins of the Death Star, which actually induced me to yawn.  But my main complaint was the story.

The plot of this film was just...impossibly bad.  And when I say that, I mean I actually find it hard to believe that it was written and approved by highly paid professionals working for a major studio with a huge budget.  It's the kind of story I would expect from a not-very-talented 13-year-old fan-fiction writer.  It's trite and uninteresting, full of holes and poorly motivated twists, loaded with plot points that don't really make sense, and littered with distracting and pointless "fan service" call-outs. 

Shadiversity has a good break-down of the plethora of problems with the plot, but I'll give a short list of a few of my complaints.

  1. Palpatine?  Really?  Yes, the opening crawl, in the first 30 seconds of the film, reveals the twist that Emperor Palpatine is still alive and is planning to take his revenge on the galaxy.  It's never explained how he survived the Death Star or why he's been in hiding for however many years.  It wasn't alluded to in the previous movies, as far as I can tell.  But sure, why not?  Apparently they've just run out of original ideas.
  2. Another armada from nowhere?  The previous movies showed us the First Order, which apparently built a huge armada in the outer reaches of the galaxy and then started taking it over.  Well, apparently Palpatine did the same thing.  So now he has a gigantic fleet of Star Destroyers that also all have planet-killing lasers a la the Death Star.  Because...why not?  Apparently those are a dime a dozen now.
  3. Is it really that secret?  Palpatine built this fleet on the Sith homeworld.  Much of the first half of the movie is spent trying to find the special navigation unit which will take the heroes to that planet so they can confront Palpatine.  Apparently only two of those units were ever made and they're the only way to get to the planet.  Kylo Ren has one of them and when he finds the planet, it's shown as being pretty much a barren wasteland with one big building (which for some reason seems to be floating about eight feet off the ground).  And yet they managed to build hundreds, if not thousands, of Star Destroyers here.  How?  Where did they get the resources?  And the crew?  So nobody can get there except for the tens of thousands of people who are responsible for those Star Destroyers?
  4. What's with the dagger?  The heroes find the coordinates at which to locate the above-mentioned navigation unit inscribed on a dagger.  It must be an ancient Sith relic which will lead them to a long-forgotten temple or something, right?  Wrong.  It leads them to Endor and the wreckage of the Death Star.  So...that means that like 30 years ago, somebody knew that unit was in the Death Star and decided to...carve the coordinates on a dagger?  Why would you do that?  And then it turned out the dagger has some sort of pull-out thing in the hilt that allows Rey to match up exactly where in the Death Star the unit was.  No instructions on how to use that or where to stand to make it line up right - Rey just notices it's there and uses it at a random place.  The entire thing just makes no sense at all.
  5. What about that thing with Finn?  Two or three times they raised a point about Finn wanting to tell Rey about his feelings for her or something.  At least, I assume that's what it was.  It was one of those "he wants to say something, but never gets a good chance" things.  It comes up a few times and then they never do anything with it.  They just drop it and the movie ends without any attempt at a resolution.
  6. What happened to the Force?  The Force was always powerful and mysterious, but the previous eight movies established some precedents for the type of things the Force can do.  And that list did not include healing wounds, teleporting physical objects, or generating enough Force lightning to simultaneously attack an entire fleet of starships!  I'm not saying that new movies can't introduce new Force powers, it's just that this movie really piles them on.  It seems like at this point the Force has become "magic".  It can just do anything, for no reason, without explanation of how or why it's possible.

That's enough for now, but I could go on.  The entire movie is like this.  The characters are constantly doing things that don't make sense, seemingly at random, either to drive this ridiculous plot or as an excuse for some special effect.  And this goes on for over two hours!  There's no character development to speak of and the plot doesn't seem to develop so much as jump around.  It's as if they had a long list of things they were required to include in the movie and just tried to cram everything in without taking the time to make it work.  If it doesn't make sense, well, we'll distract them with this shiny new thing!  After a while, it becomes difficult to actually care about any of the characters or the story at all.  They just don't feel real enough to be interesting.

Overall, I regard this film as a disaster.  It's the first Star Wars film I can remember watching and not enjoying.  It has caused me to officially lose all respect for J. J. Abrams.  There were several sections where I actually started getting bored and wished it would just end.  If you haven't seen it, then don't.  It was a waste of two and a half hours.

I dislike voice interfaces

Last year I bought a new car.  And I mean a new new car - a 2019 model.  My last two "new cars" were low-mileage "pre-owned" (i.e. "used") cars.  They were fine, but they didn't have any bells or whistles.  In fact, my last one didn't even have power locks or windows.

The new car has all that stuff, though.  And one of those bells and whistles is an entertainment center with a touch screen and support for Android Auto.  This was actually something I really wanted, as opposed to having just the integrated options.  My reasoning was that with Android Auto, I can keep the "brains" of the system in my phone, which can be upgraded or replaced, whereas integrated systems are likely to become outdated and maybe even discontinued before I get rid of the car.

The Reality

The reality of Android Auto, however, is not as great as I'd hoped it would be.  Much of the reason for this is that it's primarily driven by the voice-control features in Google Assistant.  There's some support for typing and menu-driven UI, but it's intentionally limited for "safety reasons."  For example, you can't type into Google Maps while you're driving, nor can you scroll beyond a certain limit in the Plex app, because paying too much attention to the screen is dangerous.

You may have noticed I put "safety reasons" in "sarcasm quotes".  That's because the voice control can sometimes be so frustrating that I find myself more distracted by it than if I could just search for what I needed from a menu.  I end up angry and yelling at the console or just picking up my phone and using it directly rather than the car's console interface.

Let me give you an example.  I was driving with my son, and he asked to listen to some music.  He wanted to listen to "My Songs Know What You Did in the Dark (Light Em Up)" by Fall Out Boy.  So I said to Google, "Play light 'em up by Fall Out Boy."  Google replied, "OK, asking Amazon Music to play 'My Songs Know What You Did in the Dark (Light Em Up)' by Fall Out Boy."

Great!  The music started playing and I heard, "Do you have the time, to listen to me whine."  I looked at the screen and Amazon Music was playing Green Day.  Why?  I have no idea.  So I tried again and asked Google, "Play My Songs Know What You Did in the Dark by Fall Out Boy."  Once again, Google replied "OK, asking Amazon Music to play 'My Songs Know What You Did in the Dark (Light Em Up)' by Fall Out Boy."  And this time, it played the right song.  It claimed to be asking Amazon the same thing both times, so why did one work when the other didn't?  Who knows?

This wouldn't be a big deal if it were an isolated incident, but it's actually pretty common when using Android Auto.  Sometimes it gives you what you ask for, sometimes it doesn't.  Sometimes it claims it's giving you what you asked for, but gives you something else.  And sometimes it just doesn't give you any response at all.

And it's not just Google - I have an Amazon Echo and Alexa does the same sort of thing.  It seems to be a bit more reliable than Google, but it still has problems.

When it works...

The thing is, voice interfaces are really great...when they work as you'd expect.  When you know what you want and the system correctly understands you and maps your request to the right thing, they're like the computers on Star Trek.  And to be fair, they do work well often enough to make them useful.  It's just that when they don't work, I find them unreasonably infuriating.

I think the reason for this is that the discoverability of voice interfaces is limited.  On the one hand, yes, you can just ask your device anything.  But the world of "things you can ask" is so huge that it's overwhelming - it's hard to tell where to start.  And when something as simple as "play X song" doesn't work, it's not obvious where to go next.  Try also including the artist?  Maybe a variation on the title?  All you can do is keep trying random things and hope that one of them works.  Sometimes you stumble on the right thing, and sometimes you don't and just give up in disgust.

It's kind of like trying to use a command-line interface, but with no help and no manual.  OK, so the one command you know didn't work.  What next?  Well, you can try some variations on the arguments you passed it.  Maybe sprinkle in a common option that might be applicable.  But ultimately, it's just blind guessing.

When you're using a graphical interface, you still end up guessing, but at least it's a bounded search space - you're limited to what's on the screen.  Also, the options usually have at least some description of what they do.

If nothing else, these issues with voice interfaces are a good way to relate to non-technical users.  You've seen non-technical users get confused, frustrated, and angry when they encounter a task that they don't know how to accomplish or a problem that they can't seem to fix?  Well, that's how I feel when Android Auto tells me one thing and then does another.  Maybe they'll fix that some time soon....

Spam filters suck

Author's note: Here's another little rant that's been sitting in my drafts folder for years. Twelve years, to be precise - I created this on March 28, 2007. That was toward the end of my "government IT guy" days.

I'd forgotten how much of a pain the internet filtering was. These days, I hardly think about it. The government job was the last time I worked anyplace that even tried to filter the web. And e-mail filtering hasn't been something I've worried about in a long time either. These days, the filtering is more likely to be too lax than anything else. And if something does get incorrectly filtered, you generally just go to your junk mail folder to find it. No need for the rigamarole of going back and forth with the IT people. It's nice to know that at least some things get better.

I'm really starting to hate spam filters. Specifically, our spam filters at work. And our web filters. In fact, pretty much all the filters we have here. Even the water filters suck. (Actually, I don't think there are any water filters, which, if you'd tasted the municipal water, you would agree is a problem.)

I asked a vendor to send me a quote last week. I didn't get it, so I called and asked him to send it again. I checked with one of our network people, and she tells me it apparently didn't get through our first level of filters. So she white-listed the sender's domain and I asked the guy to send it again. It still didn't get through.

As I've mentioned before, our web filters also block Wikipedia.

On opinions and holding them

Strong Opinions Loosely Held Might be the Worst Idea in Tech.  I think the name of that article pretty much says it all.

This is essentially something I've thought since I first heard about the concept of "strong opinions loosely held".  I can see how it could work in certain specific cases, or be mindfully applied as a technique for refining particular ideas.  However, that only works when everyone in the conversation agrees that that's the game they're playing, and that's not usually how I've seen the principle presented anyway.  Rather, it's usually described in terms more like "be confident in your opinions, but change your mind if you're wrong."  And that's fine, as far as it goes.  But it's far from clear that this works out well as a general principle.

To me, "strong opinions loosely held" always seemed kind of like an excuse to be an jerk.  Fight tooth and nail for your way until someone and if someone proves you wrong, oh well, you weren't that attached to the idea anyway.  It seems to fly in the face of the Hume's dictum that "a wise man proportions his belief to the evidence."  Why fight for an idea you don't care that much about?  If you're not sure you're right, why not just weigh the pros and cons of your idea with other options?

I suppose the thing that bothers me the most about it is that "strong opinions loosely held" just glorifies the Toxic Certainty Syndrome, as the article's author calls it, which already permeates the entire tech industry.  Too often, discussions turn into a game of "who's the smartest person in the room?"  Because, naturally, in a discussion between logical, intelligent STEM nerds, the best idea will obviously be the one that comes out on top (or so the self-serving narrative goes).  But in reality, nerds are just like any group, and getting people to do things your way is orthogonal to actually having good ideas.  So these conversations just as often degrade into "who's the biggest loud-mouth jerk in the room?"

I'm not sure how I feel about the article's specific "solution" of prefacing your assertions with a confidence level, but I do empathize with the idea.  My own approach is usually to just follow the "don't be a jerk" rule.  In other words, don't push hard for something you don't really believe in or aren't sure about, don't act more sure about a position than you are, and be honest about how much evidence or conviction you actually have.  It's like I learned in my Philosophy 101 class in college - to get to the truth, you should practice the principles of honesty and charity in argument.  Our industry already has enough toxic behavior as it is.  Don't make it worse by contributing to Toxic Certainty Syndrome.

Fluent-ish interfaces

This evening I was reading a post on Jason Gorman's blog about fluent assertions in unit tests and it made me smile.  Jason was updating some slides for a workshop and decided to illustrate fluent assertions by putting the "classical" assertion right next to the fluent version.  After doing that, he started to doubt the text on the slide that claimed the fluent version is easier to read than the classical one.

That made me feel good, because I've often wondered the same thing.  I've heard the claim that fluent assertions are easier to read many times - for instance, they said that in a Certified Scrum Developer class I took last year.  Apparently that's just "the direction the industry is going."  I've followed Jason's blog for a while and he seems like a pretty smart guy, so seeing him doubt the received wisdom gives me a little validation that it's not just me.

Fluent assertions are...sort of fine, I guess.  I mean, they work.  They're not terrible.  I never really thought they were any easier to read than the old-fashioned assertions, though.  

From what I can tell, the claim is generally that fluent interfaces are easier to read because they naturally read more like a sentence.  And that's definitely true - they undeniably read more like a sentence.  Just look at some of Jason's examples (classical above, fluent below):

Assert.AreEqual(50, 50);
Assert.That(50, Is.EqualTo(50));

Assert.IsTrue(true);
Assert.That(true);

Assert.AreSame(payer, payee);
Assert.That(payer, Is.SameAs(payee));

Assert.Contains(1, new ArrayList() {1, 2, 3});
Assert.That(new ArrayList() {1, 2, 3}, Contains.Item(1));

It's undeniable that "assert that 50 is equal to 50" is much closer to a real sentence than "assert are equal of 50 and 50" (or however you would read the classical version).  However, it would behoove us to remember that we're reading code here, not English prose.  We can't just assume that making your code read like an English sentence makes it more readable.  They're different things.

My theory is that we tend to think differently when we're looking at code than we do when we're reading text.  The fluent version might look closer to a sentence, but that's really just on the surface.  It's only more natural if you mentally strip out all the punctuation.  But that's not how we read code, because in code you can't just strip out the punctuation.  The punctuation is important.  Likewise, the order of things is important, and not necessarily the same as in English.

When I look at the fluent assertion, I don't just see "assert that 50 is equal to 50", I see all the pieces.  I mentally separate the "assert" from the "that", because "that" is a method, which usually means it contains the more immediately applicable information.  Likewise with "is equal to".  And, of course, I recognize Is.EqualTo(50) as a method call, which means that I have to mentally substitute the result of that into the second parameter.  But wait a minute - equal to what?  There's only one parameter to EqualTo, which doesn't make any sense.  Oh, wait, that's NUnit magic.  So we pass 50 and the result of EqualTo to That...what the heck does "that" mean anyway?  Wait - magic again.  We have to read that backwards.  So the actual assertion comes at the end and the "that" just ties things together.  OK, I've got it now.  So...what were we testing again?

OK, maybe that's a little silly, but you get the idea.  The point is that code is written in a certain way, and it's not the way English is written.  When you're reading code, you get into a certain mindset and you can't easily just switch to a different one because the next line just happens to call a fluent interface.  The old-fashioned Assert.AreEqual(50, 50) might not be sexy or natural looking, but it's short, simple, and perfectly clear to any developer worth his salt.  Why switch to an interface that's wordier and less aligned with how the rest of our code is written?  If it ain't broke, don't fix it.  

Going WYSIWYG

I must be getting old or something.  I finally went and did it - I implemented a WYSIWYG post editor for LnBlog (that's the software that runs the blog you're reading right now).

I've been holding out on doing that for years.  Well, for the most part.  At one point I did implement two different WYSIWYG plugins, but I never actually used them myself.  They were just sort of there for anybody else who might be interested in running LnBlog.  I, on the other hand, maintained my markup purity by writing posts in a plain textarea using either my own bastardized version of BBCode or good, old-fashioned HTML.  That way I could be sure that the markup in my blog was valid and semantically correct and all was well in the world.

The LnBlog post editor using the new TinyMCE plugin.

If that sounds a little naive, I should probably mention that I came to that conclusion some time in 2005.  I had only been doing web development for a few months and only on a handful of one-man projects.  So I really didn't know what I was talking about.

Now it's 2014.  I've been doing end-to-end LAMP development as my full-time, "I get paid for this shit" job for almost seven years.  I've worked for a couple of very old and very large UGC sites.  I now have a totally different appreciation for just how difficult it is to maintain good markup and how high it generally does and should rank on the priority scale.

In other words, I just don't care anymore.

Don't get me wrong - I certainly try not to write bad markup when I can avoid it.  I still wince at carelessly unterminated tags, or multiple uses of the same ID attribute on the same page.  But if the markup is generally clean, that's good enough for me these days.  I don't get all verklempt if it doesn't validate and I'm not especially concerned if it isn't strictly semantic.

I mean, let's face it - writing good markup is hard enough when you're just building a static page.  But if you're talking about user-generated content, forget it.  Trying to enforce correct markup while giving the user sufficient flexibility and keeping the interface user-friendly is just more trouble than it's worth.  You inevitably end up just recreating HTML, but with an attempt at idiot-proofing that ends up limiting the user's flexibility in an unacceptable way.  And since all the user really cares about is what a post looks like in the browser, you end up either needing an option to fall back to raw HTML for those edge-cases your idiot-proof system can't handle, which completely defeats the point of building it in the first place, or just having to tell the user, "Sorry, I can't let you do that."

"But Pete," you might argue, "you're a web developer.  You know how to write valid, semantic HTML.  So that argument doesn't really apply here."  And you'd be right.  Except there's one other issue - writing HTML is a pain in the butt when you're trying to write English.  That is, when I'm writing a blog post, I want to be concentrating on the content or the post, not the markup.  In fact, I don't really want to think about the markup at all if I can help it.  It's just a distraction from the real task at hand.

Hence the idea to add a WYSIWYG editor.  My bastardized BBCode implementation was a bit of a pain, I didn't want to fix it (because all BBCode implementations are a pain to use), and I didn't want to write straight HTML.  So my solution was simply to resurrect my old TinyMCE plugin and update it for the latest version.  Turned out to be pretty easy, too.  TinyMCE even has a public CDN now, so I didn't even have to host my own copy.

So there you have it - another blow struck against tech purity.  And you know what?  I'm happier for it.  I've found that "purity" in software tends not to be a helpful concept.  As often as not, it seems to be a cause or excuse for not actually accomplishing anything.  These days I tend to lean toward the side of "actually getting shit done."

Unhelp desk

Author's Note: This is another entry in my "from the archives" series.  I've decided it's time to clear some of those out of my queue.  This was an unfinished stub entry I last edited on April 20, 2007, when I was still working in county government.  I actually still have some recollection of the incident that inspired this entry, nearly seven years later.  There wasn't much to the original entry itself (as I said, it was kind of a stub), so this post will feature some commentary on my own comments.  Yeah, I could try to actually finish the post, but I only vaguely recall what my original point was and I'd rather make a different one now.

Question: what is the purpose of having a "help desk" when all they do is forward support requests to the programmers and analysts? Isn't that the opposite of help?

I admit that's not entirely fair. Our help desk does all sorts of things and often solves small problems on their own. So it's not like they're just a switchboard.

It just annoys me when they pass on a request without even trying to get any more details. For instance, when they get e-mail requests and just copy and paste the body of the e-mail into our help desk software. Then they print out the ticket on a sheet of blue paper and lay it on the desk of the programmer or analyst who last worked with the department or system in question.  There's no indication that they even asked the user any questions.  It seems like I'd be just as well off if the user just e-mailed me directly.  At least we wouldn't be killing trees that way.

Commentary: I don't even really remember what the particular issue that prompted this was anymore.  I just remember coming back to my desk one day, seeing a blue printout of a help desk ticket, and realizing from what it contained that the "help" desk had done no analysis or trouble-shooting on it whatsoever.

This was one of those moments that was a combination of anger and shaking my head in resignation.  I had a lot of those in 2007.  I guess it was just part of the realization that government IT just wasn't for me.  In fact, it was that August that I took the job with eBaum's World and moved into full-time development, which is really what I wanted to be doing all along.  So really, it all worked out in the end.

Five steps to get on your IT department's good side

Author's note: This is another entry that's been sitting in my drafts folder for years and which, after three glasses of wine, I've now decided to publish. This one is from April 2007, about four months before I left the government IT world for the world of commercial product development. It's funny how my posts are much less humorous when I'm enjoying my job than they are when I'm frustrated. But, then again, working with non-technical end users generates a lot more funny stories than working with top-notch developers. Oh, well.

The corporate IT world can be frustrating at times. The IT department is often viewed as a cost center, a necessary liability, rather than a helpful resource. As a result, it often seems to the IT people like the entire organization is against them. This leads to frustration, apathy, hostility, and using a sniper rifle to pick off people walking in from the parking lot. Thus I present a few tips to keep your IT department on your good side.

  1. Don't report a problem by just saying you're "getting error messages" on your computer. We're the IT department, not the Psychic Friends Network. You're going to have to be a little more specific if you want any help.
  2. If you do call about "getting error messages," then at least have the decency to tell us what the messages say. I know fixing computer problems may seem like magic at times, but we do need something to go on. If you dismissed the error dialog, didn't take a screenshot, didn't write down the message, don't remember what it said, don't know what application raised it, and don't remember what you were doing when it came up, there's not a hell of a lot we can do for you.
  3. Please realize that "computer people" don't automatically know everything there is to know about computers. I may work in the IT department, but that doesn't mean I can fix your computer myself. It also doesn't mean I can tell you how to do something in your CAD or GIS software. It's not like I practice drawing maps in my free time. We specialize too, you know.
  4. When requesting custom software, don't expect me to read your mind. I understand that you might not think of some things until after you start using the software. I have no problem with adding things later. In fact, I expect it. However, I really don't appreciate getting attitude because the program doesn't do X when nobody ever mentioned to me that it needed to do X.
  5. When you ask how to do something, pay attention to the answer. I don't mind showing you how to add a border to that cell in Excel. I do mind showing you how to do it once a week for a month. And the same goes for clearing out your deleted items in Outlook.

Coder self-esteem

You know, I've been making a living writing software for over 10 years now. If my resume is any indication, I'm pretty good at it. But every now and then I still read something that gives me that feeling, just for a minute, like maybe I've just been fooling myself all these years, like I'm actually completely inadequate as a software developer.

This Slashdot article gave me such a feeling. It links to a Google case study that involved porting MAME to the Chrome Native Client. The summary ends with this quote from the article: "The port of MAME was relatively challenging; combined with figuring out how to port SDL-based games and load resources in Native Client, the overall effort took us about 4 days to complete."

Now, I don't follow Chrome development at all, so when I read this, I had absolutely no idea what the Native Client actually was. My initial guess was that it was probably some API to access native libraries or hardware from JavaScript, or something like that. So it sounded like "porting MAME" would entail porting the code to a different programming language, or a different set of APIs for audio and video, or something similar.

That sounds like a pretty huge undertaking to me. And they did it in four days?!? Wow! I know Google is supposed to be all about hiring the best and the brightest, but that's just ridiculous. I mean, that would take me months to finish! And even if I was already familiar with the internal workings of MAME and Chrome, it would still be a matter of weeks, not days. How did they do it? Are they really that good? Am I just some third-rate hack by comparison?

Well...turns out they're not actually that good. This comment on the article sums it up nicely. It turns out that the Native Client isn't a totally different API after all - it presents a POSIX-like environment, complete with compilers and build tools. So really, this was less like porting MAME to something totally new and more like making it run on a different operating system. And they didn't even do a real production-quality port. Instead, they simply removed several problematic parts of the code and just sort of "made it work." Granted, that's still a pretty impressive amount to accomplish in only 4 days, but it's hardly the super-human feat it seemed at first.

This sort of story is one of the things that's always bothered me about the culture of software development - it's full of tall tales. Listening to the stories people tell, you'd think everyone was building big, impressive programs in a couple of days. It's not until you pry for details that you realize that the impressive sounding thing is actually little more than a prototype. Sure, Bob may have built a working C compiler over the weekend, but he doesn't mention that it can only reliably compile "Hello, world" so far.

It's almost a lie by omission - you report a truncated version of your accomplishment and rely on an implicit comparison to something much more involved to make you sound super-human. And I say "almost" because it's not just self-aggrandizers doing this. In many cases, the tale just grows in the telling. This case is an excellent example - Slashdot took an impressive sounding quote, stuck it in a brief summary, and made the whole thing sound bigger than it was.

I sometimes wonder what effect this sort of rhetoric has on beginning programmers. Do they find it inspirational? Does it make them want to become the "programming god" who accomplished this sounds-more-impressive-than-it-is feat? Or is it discouraging? Do they hear these stories and think, "I'd never be able to do something like that"?

Looking back, I think that it was kind of a progression for me. When I was first learning to code, those stories sounded cool - I could be the guy telling them one day. Then, after a few years in the industry, they started to be discouraging. I mean, I'd been doing this for years and I still wasn't the guy in those stories. Now I'm over ten years in and I'm just skeptical. Whenever I hear one of those stories, my first thought is, "So what's the catch?" (Because, let's face it, there's always a catch.)

And the worst part is, I don't even know if that story should be inspiring or just sad.

Pathological PHP

You know what annoys me? People with crazy ideas. Especially when they pimp them like crazy.

That's why this tutorial on "code separation" from a web forum I occasionally visit annoys me so much. The author links to this thing like his life depends on it. Whenever somebody has the nerve to post a code snippet that has both HTML and PHP in it, he brings it up. Even if the code is just echoing a variable inside an HTML block. It's ridiculous.

Don't get the wrong idea - separation of concerns is obviously a good thing. If you're outputting HTML and querying the database in the same file, you're doing things wrong. But this guy takes it to absurd lengths and insists that you should never have any PHP code mixed in with your HTML. Not even echo statements.

The real kicker is the content of this "tutorial". It's basically a half-baked template system that does nothing but string replacement of pre-defined placeholders. At best, it grossly oversimplifies the problem. I suppose it does demonstrate that it's possible to output a page without having HTML and PHP in the same file (as if anyone really doubted that), but that's about it.
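To put that in perspective, here's a sketch of what such a system boils down to - this is my guess at the general shape, not code from the tutorial, and the placeholder names are made up:

<?php
// Values computed in the controller...
$title   = 'My Page';
$content = 'Hello, world.';

// ...then swapped into pre-defined placeholders in the HTML "template".
$template = file_get_contents('page.tpl');
echo str_replace(
    array('{TITLE}', '{CONTENT}'),
    array($title, $content),
    $template
);

That's the whole trick: one str_replace() call dressed up as an architecture.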

The thing that really bothers me about this approach is that the author claims it results in code that's easier to understand than code that has both PHP and HTML in it. Except that it's not. The guy apparently just has a pathological fear of having two different languages in the same file. It's completely irrational.

The problem with his approach is that it doesn't actually solve the problem - it just moves it. Sure, basic replacement like that is fine for simple cases, but as soon as the requirements for your markup get more complicated, things blow up. For example, how do you do conditionals? Well, you have another template file and you do a check in your controller for which one to inject into the page. What about loops? Well, you have a template file for the loop content and you run the actual loop in your controller, build up the output, and inject that into the page.

The net result? What would normally be a fairly simple page consisting of one template with a loop and two conditionals is now spread across six templates (one main template, one for the loop body, and two for each if statement) and pushes all the display logic into the controller. So instead of one "messy" template to sort through, you now have a seven-file maze - six templates plus the controller - that accomplishes the same thing.

I find it difficult to see how this is any sort of improvement. At best, it's just trading one type of complexity for another in the name of some abstract principle that mixing code and markup is evil. Of course, if you want to follow that principle, you could always go the sane route and just use something like Smarty instead. But let's be honest - that's just using PHP code with a slightly different syntax. It may be useful in some cases, but it's not really fundamentally different from just writing your template files in PHP.

Personally, I've come to be a believer in keeping things simple. PHP is a template system. It was originally designed as a template system. It's good at it. So there's no need to introduce additional layers of template abstraction - just stick with PHP. There may be cases where things like Smarty are useful, but they're far from necessary. And the half-baked templating systems like those advocated in that tutorial are just intellectual abortions. There's no need to reinvent a less functional version of the wheel when you can just use a working, tested wheel.
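For comparison, here's roughly what that "messy" one-file version looks like in plain PHP - a minimal sketch with made-up variable names, using the alternative syntax that exists precisely for templating:

<?php // results.php - a template included by a controller that sets $results ?>
<h1>Search Results</h1>
<?php if (empty($results)): ?>
    <p>Nothing found.</p>
<?php else: ?>
    <ul>
    <?php foreach ($results as $row): ?>
        <li>
            <?php echo htmlspecialchars($row['name']); ?>
            <?php if ($row['featured']): ?><em>(featured)</em><?php endif; ?>
        </li>
    <?php endforeach; ?>
    </ul>
<?php endif; ?>

One file, one loop, two conditionals - and you can still see the whole page structure at a glance.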

Post-vacation unhappiness

On the heels of the things that make me happy, here's something that makes me not so happy: when half your team quits while you're on vacation!

I spent the last week of July in Cape Cod, staying in a cabin by the beach. It was very nice. We relaxed a bit, saw some sights, did some shopping (they have a kick-ass used book store on Main St. in Hyannis), and generally enjoyed it a great deal. It was nice to unplug for a while - I only used my laptop twice all week, both times for less than an hour.

And what do I discover when I get back? Our engineering team is deserting me. We only had three software engineers (including me), one Flash designer, and one QA person. Turns out both of the other engineers gave their notice while I was gone. So now it's just me. Sigh....

I don't blame them - you've got to look out for your own career - but it leaves us in a really bad place. Plus I'll miss having them around the office to talk to. But on the up side, it won't be hard to stay in touch - they're both going to the same company, which is located in the same office park we're in now.

Making my games work

Despite not considering myself a "gamer" (with the exception of Battle for Wesnoth), I do have a bit of a weakness for "vintage" games, by which I mean the ones I played when I was in high school and college. While I don't have much time for games anymore, what with full-time employment and home ownership, I still like to come back to those old games every now and then.

Well, when I tried to fire up my PlayStation emulator, ePSXe, to mess around with my old copy of Final Fantasy Tactics, I ran into a problem - I no longer have GTK+ 1.2 installed! Apparently it was removed when I upgraded to Ubuntu 10.04 (or possibly even 9.10, I'm not sure). However, according to this LaunchPad bug, this particular issue is by design and will not be fixed. That kind of stinks, because I have several old closed-source Linux-based games that depend in some way on GTK+ 1.2 (often for the Loki installer).

This sort of thing is a little bit of a sore spot for me, and has been for some time. On the one hand, I can understand the Ubuntu team's position: GTK+ 1.2 is really old and has not been supported for some time. You really shouldn't be using it anyway, so there's not much reason for them to expend any effort on it.

On the other hand, how much effort is it to maintain a package that's no longer being updated? Why not at least make it available for those who need it? This is the sort of user-hostile thinking that's always bothered me about the open-source community. There's hardly any compatibility between anything. Binary compatibility between library versions is sketchy, as is package compatibility between distributions. Even source compatibility breaks every few years as build tools and libraries evolve. Ever try to compile a 10-year-old program with a non-trivial build process? Good luck.

And that seems to be the attitude - "Good luck! You're on your own." It's open-source, so you can always go out and fix it yourself, if you're a programmer, or hire someone to do it for you otherwise. And while it's absolutely great that you can do that, should that really be an excuse for not giving users what they want or need? Should the community have to do it themselves when it's something that would be relatively easy for the project maintainers to set up?

Not that I can blame them. As frustrating as decisions like this can be, you can't look a gift horse in the mouth. The Ubuntu team is providing a high-quality product at no cost and with no strings attached. They don't owe me a thing. Which, I suppose, is why it's so great that members of the community can fix things themselves. The circle is complete!

But anyhow, that's enough venting. That LaunchPad thread had several posts describing how to install GTK+ 1.2 on 10.04. I chose to use a PPA repository.

sudo apt-add-repository ppa:adamkoczur/gtk1.2
sudo apt-get update
sudo apt-get install gtk+1.2

Ta-da! I'm now back at the point I was at a year or so ago when all of my old games actually worked.

Kubuntu Intrepid: Another failed upgrade

Well, that sucked.

I upgraded my Kubuntu box at work from 8.04 to 8.10 on Monday morning. It did not go well. Not only did the experience waste several hours of my time getting my system back to a state where I could actually do some work, it left me feeling bitter and fed-up.

Not that the upgrade failed or anything - on the contrary. The upgrade process itself was relatively fast and painless. So, in contrast to some of my previous upgrade experiences - which have left systems completely inoperable - this wasn't that bad. It's just that, once the upgrade was done, nearly every customization I'd made to my desktop was broken.

Broken Stuff

As for the breakages, they were legion - at least it felt that way. The two most annoying were the scrolling on my Logitech Marble Mouse trackball and KHotKeys. It turns out the mouse scrolling was fixable by adding a line to my xorg.conf to disable some new half-working auto-configuration feature.

KHotKeys, on the other hand, was a lost cause. From what I've read, it just plain doesn't work right in KDE 4. So, since key bindings are an absolute must-have feature for me, I worked around it by installing xbindkeys. This works well enough, but it's a huge pain in the neck. Now, not only do I have to recreate all my key bindings, but I have to look up the DBUS commands for all those built-in KDE functions rather than just picking them from a list.

Another annoying one was that the upgrade somehow broke the init scripts for my MySQL server. I don't know how the heck that happened. I tried uninstalling it, wiping the broken init scripts, and reinstalling, but they weren't recreated, which seemed odd to me. I eventually ended up just doing a dpkg --extract on the MySQL package and manually copying the scripts into place.

On another weird note, KDE and/or X11 has been randomly killing the buttons on my mouse. I'll be working along fine and suddenly clicking a mouse button will no longer do anything. The pointer still moves, and the keyboard still responds, but clicking does nothing. Restarting the X server resolves the problem, but that's cold comfort. It seems to happen randomly - except for when I try to run VirtualBox, in which case it happens every time the VM loses focus. Fortunately I'm more of a VMware person, so that's not a big deal, but it's still disquieting.

KDE4 In General

The other big pain-point is KDE 4. To be perfectly blunt, I don't like it. It has a few neat new features, but so far it doesn't seem worth the effort to upgrade.

The good parts that I've noticed so far seem to be small. For instance, Dolphin has a couple of nice enhancements. The one that sticks out is the graphical item-by-item highlighting. It allows you to click a little plus/minus icon to select/deselect an item, so that you no longer need to hold the control key to do arbitrary multiple selects. The media manager panel applet is nice too. It pops up a list of inserted storage devices and allows you to mount and eject them. I have to admit that I also really like the new "run" dialog. It does program searching much like Katapult, but makes it easier to run arbitrary commands and select commands with similar names. While it doesn't have some of the cool features supplied by Katapult's plugins, it's still quite good.

On the other hand, there are a lot of things I don't like (not counting the breakage). For one, I think the new version of Konsole is a huge step backward. I can't access the menus with keyboard shortcuts, the "new tab from bookmark" feature is MIA, the session close buttons are gone, and generally everything I had gotten used to is missing.

And then there's the new "kickoff" application menu. I'm getting slightly more used to it, but I still don't like it. It just feels a lot slower to access items using it. This is only made worse by the "back" button for browsing sub-menus, which is extremely hard to click when you're in a hurry (hint: Fitts's law doesn't help on multi-monitor setups, where the screen edge isn't really an edge).

As for the "cool" new look of KDE 4...I'm not a fan. Maybe it's just because I don't have any of the fancy desktop effects turned on on my system (a side-effect of the crappy integrated video card that's part of my tri-monitor setup), but I just don't think it looks good. Yeah, the bare desktop itself is kind of nice looking, but the window theme is ugly as sin. It's one of those "brushed metal" sort of looks, which I find even more depressing than Windows 95 gray. It's too dark for my taste and far too monochromatic. I also find the active window highlighting to be way too subtle to be helpful. The icons also leave something to be desired. They look nice, but they don't look distinct - even after a week, it takes me a second to figure out what some of them are supposed to represent. It kind of defeats the entire point of icons.

As for the much-touted Plasma, I'll grant them this - it is pretty. The panel and desktop plasmoids do pretty much all look nice. Not that it matters to me, though, because I never see my desktop - it's always covered with work. And while the various applets and widgets may look pretty, approximately 90% of them are completely useless. That's the problem with all desktop widgets on any platform. I find that if a desktop widget actually provides enough valuable functionality to justify leaving a space open for it on the desktop, its job is probably better served by a full-fledged application. And if it's not important enough to keep constantly visible, then why bother to put it on the desktop at all? I'm never going to see it, so I might as well save the RAM and CPU cycles.

Conclusion

Overall, I guess Kubuntu 8.10 and KDE 4 aren't bad systems. But to be honest, I'm not impressed. For the first time, I think that the new Kubuntu is not an improvement. In fact, I have no plans to upgrade the three Kubuntu boxes I have at home any time in the foreseeable future.

The thing that's most disappointing to me about the upgrade to KDE 4 is that it totally defeats my purpose in switching to KDE in the first place. When I switched from the ROX desktop to KDE back in 2005, my main reason was that I was tired of having to build my own desktop. ROX was great, but it was a small community and just didn't have the range of applications and degree of integration that KDE had. You see, I always had this crazy idea that I could just use all KDE applications and everything would be tightly integrated and work well together and there would be harmony throughout my desktop.

However, more and more I've been finding that that just isn't true. Part of the problem is that lots of KDE applications just aren't that good - many of them are missing functionality and have stability problems. I find myself using fewer KDE applications all the time. I dropped Quanta+ for Komodo Edit; I tried to like Konqueror, but it just doesn't hold a candle to Firefox or Opera; I recently tried to become a KPilot user, but was almost immediately forced to switch to JPilot; I finally got fed-up with Akregator and am just using the RSS reader in Opera's M2 mail client; I still use KMail, but not because I particularly like it - I just dislike it less than M2 or Thunderbird. In fact, I think the only KDE app I would actually miss is Amarok. (K3B is very good too, but I don't burn enough disks to care what program I use, just so long as it works.)

So now I'm starting to wonder: What's the point of using KDE? If I'm not using many KDE applications, and most of the ones I am using could be easily swapped out, it seems like there's nothing keeping me with it. Maybe I should just switch to GNOME. Or maybe Windows. I have been wanting to get more into .NET development, and my tolerance for things not working has been falling over the years, so Windows is sounding better all the time.

I think next week I'm going to have to reinstall my work machine. Maybe a fresh install and a fresh KDE profile will give me a better experience. Or perhaps I'll ditch Kubuntu and go for straight Ubuntu with GNOME. Or perhaps I could take another look at ROX. I don't know. And while I'm at it, I think I might reinstall that old Windows partition I still have on that machine. Maybe some time playing with a nice clean install of XP, or even Vista, if we have a spare copy, will give me a little perspective.

PHP is developed by morons

Well, it's official: the people who develop PHP are morons. Or, rather, the people responsible for adding namespaces to PHP 5.3 are.

Why do I say this? Because I just read an announcement on Slashdot that they've decided on the operator to use for separating namespaces in PHP 5.3: the backslash (\).

Seriously? The friggin' backslash? What kind of choice is that? Last I knew they'd pretty much decided to go with the double colon (::), like C++, which at least makes sense. But the backslash?

What's worse, just look at the RFC listing the operators they were considering. In addition to the backslash, they had the double star (**), double caret (^^), double percent (%%), a shell prompt (:>), a smiley face (:)), and a triple colon (:::). For God's sake, it looks like they picked this list out of a hat. They might as well have just used the string NAMESPACESEPARATOR. It's no less absurd than any of those.

Now, let's be realistic for a minute. In terms of syntax, PHP is a highly derivative language. It's an amalgamation of Perl, C++, and Java, with a dash of a few other things thrown in.

Given that heritage, there's really only a handful of choices for namespace separators that even make sense. The first, and most natural, is the double colon (::). This is what C++ uses and it's already used for static methods and class members in PHP. So the semantics of this can naturally be extended to the generic "scope resolution operator." Keeps things clean and simple.
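For reference, here's the existing usage I'm talking about (the class and member names are just for illustration):

<?php
class Config
{
    public static $path = '/etc/myapp.ini';

    public static function load()
    {
        // self:: is the same scope resolution operator
        return parse_ini_file(self::$path);
    }
}

echo Config::$path;         // static member access
$settings = Config::load(); // static method call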

The second choice is the dot (.), which is what's used in Java, C#, Python, and many others. This is a bit unnatural in PHP, as dot is the string concatenation operator, but it at least offers consistency with other related languages.

Third is...actually, that's it. There are only two valid choices of namespace separator. And the PHP namespace team didn't pick either one. Nice work, guys.

The Slashdot article also linked to an interesting consequence of the choice of backslash: it has the potential to mess up referencing classes in strings. So if your class starts with, say, the letter "t" or "n", you're going to have to be very careful about using namespaces in conjunction with functions that accept a class name as a string. Just what we needed. As if PHP isn't messed up enough, now the behaviour of a function is going to depend on the names of your classes and the type of quotes you use.
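Here's a sketch of the gotcha (the class name is hypothetical):

<?php
// With \ as the namespace separator, the kind of quotes you use
// can silently change a class name given as a string:
$good = 'My\tools\Logger';  // single quotes: backslashes stay literal
$bad  = "My\tools\Logger";  // double quotes: "\t" is the TAB escape

var_dump($good);  // string(15) "My\tools\Logger"
var_dump($bad);   // string(14) "My<TAB>ools\Logger" - not the class you meant

// So class_exists($bad) or new $bad() would look up the wrong name.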

I guess I'm going to have to bone up on my C#, because PHP seems to be going even farther off the deep end than before. It was always a thrown-together language, but this is just silly. The backslash is just a stupid choice for this operator and there's just no excuse for it.

Yes, PHP sucks, but it works

Jeff Atwood had a really great blog entry on PHP the other day. I think the title pretty much sums up the way I feel about it: PHP Sucks, But It Doesn't Matter.

I've been working with PHP for about 3 or 4 years now. For the last 9 months, writing PHP code has been my day job. And I've got to tell you, PHP really is kind of a crappy language. It's so bad you can't even complain that PHP was poorly designed, because it quite clearly wasn't designed. In fact, I'm not even so sure it "evolved." Sometimes it seems like it just sort of mutated.

So, as Jeff said, nobody with an ounce of programming talent could think that PHP is a "good" language in any objective sense. It's just too hacked-up and thrown together. It's the Visual Basic 6 of the web. God knows much of the PHP code you find on the net is every bit as terrible as the VB6 code you find. In fact, when you consider ill-conceived "features" like safe mode and magic quotes, it even starts to make VB6 look good.

But all that is really beside the point. At the end of the day, PHP does the job. It might not have the orgasm-inspiring elegance of Ruby, but does it really need that? These days, PHP has all the big features - decent object-orientation, a rich standard library including decent XML handling and database access layers, and a number of good MVC frameworks. In short, it has what it needs to allow decent developers to write good, solid, maintainable code. PHP doesn't lend itself to elegance, but in the right hands, it can be elegant enough. And really, that's all that matters.
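Case in point: even without a framework, modern PHP can be perfectly respectable. Here's a minimal sketch using PDO with a prepared statement (the connection details and table are made up):

<?php
// Connect and make PDO throw exceptions instead of failing silently.
$db = new PDO('mysql:host=localhost;dbname=blog', 'user', 'secret');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);

// A parameterized query - no string-concatenated SQL, no magic quotes.
$entryId = (int) $_GET['id'];
$stmt = $db->prepare('SELECT title, posted FROM entries WHERE id = ?');
$stmt->execute(array($entryId));
$entry = $stmt->fetch(PDO::FETCH_ASSOC);

Not beautiful, but perfectly solid - which is the point.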

Random binary breakage and a rant on compatibility

I enjoy an occasional video game. However, I am by no means a "gamer", as evidenced by the fact that I don't have a copy of a single proprietary game published later than 2002. Rather, I enjoy open-source games like Battle For Wesnoth, vintage games such as Bandit Kings of Ancient China and Wing Commander, and the occasional old strategy game, such as my old Loki games for Linux. I also have a soft spot for emulated console games for the NES and Super NES. I even break out an emulator for my old PlayStation disks every now and then.

Well, the other day the mood struck me to play one of my old PSX games, so I clicked the icon for the ePSXe PlayStation emulator in my application menu and waited...and waited...and waited. And it never came up. So I tried running it from a command prompt and...nothing happened. And when I say "nothing", I mean nothing - no error message or output of any kind. I just got my command prompt back immediately.

Mind you, it had been a while since I'd used ePSXe, but there was no immediately obvious reason why it should fail. It's installed in my home directory and has been sitting there, fully configured, for over a year. I used it regularly for a few weeks back in September and October and it worked perfectly. Absolutely nothing has changed with it.

Fortunately, a little Googling turned up this thread in the Ubuntu forums. Apparently the ePSXe binary is compressed with UPX. After installing the upx-ucl-beta package via apt-get and running upx -d /path/to/epsxe to decompress the binary, it worked as expected. Apparently something about running UPX-compressed binaries changed between Ubuntu Feisty and Gutsy. I have no idea what, though.

This actually leads into one of the things that really annoys me about Linux: binary compatibility. It's also one of the reasons I prefer to stick with open-source software on Linux when at all possible.

In the Windows world, binary compatibility between releases is pretty good. Granted there are always some applications that break, but given the sheer volume of code out there, Microsoft does a good job keeping that number relatively small. In fact, if you've ever heard any of Raymond Chen's stories of application breakage between releases, you know that the Windows app compatibility team sometimes goes to truly heroic lengths to enable badly broken applications, many of which never should have worked in the first place, to keep functioning when a bug they depended on is fixed. The sample chapter (PDF) from Raymond's book has some really great examples of this.

In the Linux world, on the other hand, nobody seems to give a damn about maintaining compatibility. If you have a binary that's a few years old, it may or may not work on a current system. And if it doesn't, sometimes you can massage it into working, as was the case with ePSXe this time, and sometimes you can't. Not that this should be surprising: some developers in the Linux world are so lazy they won't even allow you to change the paths to application support files - they just hard-code them into the binary at compile-time with preprocessor defines! If they don't care if you can install the same binary in /usr or $HOME, why should they care if it works between distributions or even releases of the same distro? The attitude seems to be, "Well, it's open-source anyway, so who cares how compatible the binaries are?"

But if we're going to be honest, even being open-source only goes so far. Actively-maintained apps are usually OK, but have you ever tried to build an application that hasn't been maintained in 7 or 8 years from source? It's pretty hit and miss. Sure, if I really needed the app, had lots of spare time on my hands, and was familiar with the programming language and libraries it used, I could always fix it to build in an up-to-date environment. But for a regular user, that's simply not an option. (And even for a programmer it may well be more trouble than it's worth.)

But as annoying as I find the general lack of compatibility, as much as I wish I could just run a damn executable without having to cross my fingers, I can understand why things are the way they are. Quite simply, maintaining compatibility is hard. It takes care and diligence and it can make it hard to fix certain problems or make certain types of improvements. And really, when you're not getting paid for your work and have no real obligation to your users, you have to ask yourself if it's worth the effort. Heck, even many commercial vendors aren't that serious about backward-compatibility. Is it really reasonable to expect a loose association of unpaid volunteers to be any better?

But that's enough ranting for tonight. There are ups and downs to every software system. I'm just disgruntled that everything in my personal Linux-land seems to be 5 times more difficult than it needs to be lately.

Screw encryption!

On Friday, I said I was finally going to secure my wireless LAN. As you can probably tell from the title of this post, that didn't go so well. As of this writing, I am still running an open system because that's the only configuration I can get to work with all three of my computers.

I've spent several hours messing with this today, and it's put me in a really foul mood. There was a time when I enjoyed messing around with my system configuration, but I just can't do it anymore. I don't care that much about networking. I have too many other things I want to spend my time on. I just want my damn network to function and not let anyone who drives by eavesdrop on all my traffic. Is that too much to ask?

My upgrade process started with a firmware update to my D-Link DI-524 C wireless router. This update included WPA2 support, which was a nice bonus. So my encryption options were now: nothing, WEP, WPA, WPA2, and something called WPA2-auto. On the down side, it included no additional documentation, so I have no clue what this "WPA2-auto" is supposed to be. But "auto" sounded promising, so I decided to go with that mode.

Turns out this was a bad idea. According to this forum thread, WPA2-auto doesn't seem to work consistently. Unfortunately, I didn't discover this until I had spent a considerable amount of time trying to get my PC configuration right. You see, I was misled because my laptop was able to connect one time while the router was in WPA2-auto mode. That led me to assume that the problem was with my PCs, not the router. Guess I should have Googled first.

So, eventually, I ended up going with plain-old WPA. The client configuration was a bit tricky for this. You see, my laptop uses NDISwrapper, so I could just use KNetworkManager to enter the pre-shared key. However, my desktops both have RaLink cards and use the rt2500 driver. This driver does not use the Linux wireless extensions and hence does not work with NetworkManager. To configure these cards, you need to add some lines to your /etc/network/interfaces file, as described here. It works, but the down side is that it breaks NetworkManager. However, since these are desktop PCs with 1 WiFi card connecting to 1 access point, that's not really a big deal.

While the desktops weren't that difficult (once I got the right router settings, that is), the laptop was another story. I still haven't figured that one out yet. Of course, I was out of energy by the time I got around to it, so I wasn't exactly in peak form.

The laptop has an integrated Broadcom card which, as I said before, is configured to use NDISwrapper. This means it works with KNetworkManager. However, I couldn't get KNetworkManager to connect to the access point with WPA enabled. I selected the encryption mode, entered the pre-shared key, and then the connection progress bar would hang at 28%. The iwconfig output said that the card was associated with my access point, but I never got an IP address.

My current suspicion is that the laptop is using stale configuration data from my failed WPA2-auto attempt. I had some problem with stale configuration on the desktops too. For those, I just did a /etc/init.d/networking stop and then unloaded the driver module, then reloaded and restarted. That cleared everything up. In this case, however, I'm thinking it's the data stored by KNetworkManager. The only problem is, I have no clue whatsoever where I would look to find out. The interface is really spartan and there's no obvious way to delete stale configurations.

There is still one big functionality question I'm left with: how do I get NetworkManager to centrally configure an access point for all users? Both Sarah and I have our own accounts on the laptop, and I'd really like NetworkManager to automatically detect when our home network is present and connect to the access point at system start-up. I'm thinking there must be a way to do that, but there's nothing obvious in any of the configuration tools.

Things I don't care about

Following in the spirit of Mark Pilgrim's post from the other day, I thought I'd post a short list of things I don't care about. I do this mostly because I'm tired and grumpy, and it's hard to come up with positive, insightful commentary when you're tired and grumpy.

1) Safari on Windows. Maybe it will be good for testing. But then again, I didn't care about Safari when it was Mac-only, and I see no reason to change my attitude now.
2) The iPhone. Yeah, it looks very cool and it's probably much easier to use than any other cell phone on the market, but you can buy an actual computer for less money.
3) Font rendering. Joel Spolsky and Jeff Atwood both commented on the "revelation" that Safari for Windows was using Apple's font rendering engine instead of the Windows one. I've heard many complaints about the font rendering on Linux too. Who cares? I never got the obsession some people have with how their fonts look. As long as I can read it without getting eye strain, I'm happy. Hell, half the time I can't even tell the difference between two similar fonts.
4) VB6 programmers. I came across this link from a couple of years ago lamenting that .NET was killing hobbyist programmers. It's an argument I've heard before: .NET is just too hard compared to VB6. Well, too bad. Learn to freaking program. VB6 seemed good 10 years ago, but in retrospect, it was nothing but a recipe for hideously bad code and huge magenta buttons. Good riddance! And I was a VB6 programmer, so I'm allowed to say that.
5) Out-sourcing/off-shoring. I'm sick of hearing programmers wailing about their jobs being moved to India or China. You know what? If your job is really in danger from that, it probably sucked anyway. Upgrade your skillset and next time don't work as a code monkey.

Advance your career by losing hope

This week I finally decided to take the plunge: I started working on my résumé. That's right! After six years I have finally decided that it's time to get my career moving and so I have officially entered the job market.

Ah, job hunting! It's quite the experience, isn't it? I'd almost forgotten what it was like. There really is nothing like a good job search to make you feel like a useless, incompetent sack of crap!

I don't know about other industries, but this is definitely the case in the IT world. If you've ever looked for a job in software development, you know what I'm talking about. For every reasonable job listing you see, there are twelve that absolutely require 10 years with a laundry-list of excruciatingly specific technologies, strong interpersonal skills, a Mensa membership, and a strong track record of miraculous healing. And that's for an entry-level position. With a typical career path, if you start early, you should be ready for their grunt-work jobs by about the time your kids are graduating from college and moving back in with you.

The listings that have really been killing me, though, are the absurdly specialized ones. Not the ones that require 5 years experience with ASP.NET, C#, Java, Oracle, SQL Server, SAP, Netware, Active Directory, LDAP, UNIX, SONY, Twix, and iPod - they're just asking for the kitchen sink and hoping they get lucky. I'm talking about listings like the one I saw that required advanced training in computer science, a doctorate in medical imaging, and 10 years of experience developing imaging software. Or, perhaps, all those listings for a particular defense contractor that required experience with technologies I had never even heard of. I mean, I couldn't even begin to guess what these abbreviations were supposed to stand for, and I'm pretty up on my technology! When you come across a lot of listings like that at once, it can be a little depressing. "How am I ever going to find a job? I don't even know about grombulating with FR/ZQ5 and Fizzizle Crapulence GammaVY5477 or how to do basic testing of quantum microcircuits using radiation harmonics with frequency-oscillating nano-tubes on a neural net. Every idiot understands that!"

But the real killer for me is location. I'm in the southern tier of New York state, which is not exactly a hotbed of tech startups. I like the area and don't really want to move, but there's practically nothing here in terms of software development. The best possibility I found was a local consulting company 10 minutes from home. However, when I sent them a résumé, I got a message back saying that they were currently unable to add new positions due to the fact that they were going out of business. I've applied for a couple of other semi-local positions, but of all the possibilities I've found, the closest is about 50 miles from my house. Workable, but not a situation I'm crazy about.

I'm now starting to think seriously about relocating. I don't really want to move to the west coast, both because of the cost of living and on general principle, so I'm thinking of looking either downstate (i.e. New York City) or south to the Washington, D.C. or Atlanta metropolitan areas. All three of those seem to have a fair number of positions in software development.

However, I'm faced with something of a moral dilemma. You see, having been born and raised in upstate New York, it is my patriotic duty to hate New York City. But as a New Yorker, it is also my patriotic duty to look down on the South and New Jersey. That leaves me wondering whether I'm forced into choosing Washington, or whether it counts as "the South" too and I'm just out of luck.

In the end, I guess I'm just not that patriotic. All three of those cities sound good to me. But New Jersey is another story.

Company-thwarted testing

This week, I'm doing product evaluations at work. Specifically, evaluations of highly enterprisey system management applications. Things like Microsoft System Management Server (though I'm hoping to find a package that's slightly less hideously complicated to administer). And you know what? It's really starting to piss me off!

It's not researching and evaluating these systems that's pissing me off. Although it probably should. After all, I'm a "systems analyst," not a network administrator. I don't touch the servers without explicit permission from the network admin, and therefore don't have a great deal of experience in this area. This is not because I don't know what I'm doing (though I don't claim to be an expert on Windows administration), but because the admin jealously guards his network, and he will strike down those who trespass upon his servers with great fury! Woe to any analyst or programmer who should mess up a server in even a minor way, for he shall never again get anything done on the network. So sayeth the Lord!

The real reason I'm pissed off is that I have no test hardware. Or, more to the point, I have test hardware, but it really, really sucks. To summarize: in order to set up a Windows Server 2003 box, I'm using an old XP desktop with a RAM module cannibalized from an identical desktop just to get it up to half a gigabyte. The client boxes I'm going to have to set up will have a whopping 128MB of memory apiece. And this is the best we've got available.

Of course, my ideal setup would be to just use a completely virtual test network running in VMware. However, I'm planning on one server and about three clients initially, and the most powerful box I have available is my desktop workstation, with a measly 1GB of RAM, which would be kind of pushing it. So I'll probably end up with four PCs sitting in the server room running VNC servers so that I don't have to physically walk back there and let the roar of the air conditioning system slowly destroy my hearing.

The second reason I'm pissed off is because of who this project is for. If our organization made any sense, we would be looking into system management software for the 300 or 400 (or whatever - I've never seen the exact number) desktops that we support. But it's not. This is for a collection of about 60 mobile laptop units that are scattered across the countryside and hardly ever make it into the office. The idea is that we'd like to manage updates, installations, configuration changes, and so forth on them without having to drive them 30 or 40 miles to someone with administrator access. I have no problem with that, since only three people have admin access on those machines, and I'm one of them, so it saves me some effort. It's just that it would be nice to be doing something like this in the place where it would save the most work. But then, without the constant running around clicking "next" and such, our department wouldn't have an excuse to be grossly over-staffed.

I guess the take-away here is, "Don't work for the government. Especially local government." There may not be much stress in the Silicon Valley sense, but the frustration and futility levels are through the roof. But on the up side, you can pretty much take a vacation day whenever you want and nobody cares. It's just a matter of whether you prefer stability and flexibility or a feeling that just maybe your days aren't completely wasted.

Does anyone actually like eBay and PayPal?

Last night I bought something through eBay for the very first time. It was a DVD set that I'd been looking for for a while. I actually discovered the auction through Google. The price to "buy it now" on eBay was lower than any other sites I could find and the auction ended in an hour, so I figured, "Why not?" As it turns out, because it's a lousy online shopping experience. That's why not.

Let me state right up front that, aside from one technical issue, I didn't actually have any problems making my purchase. I successfully signed up for an account, paid for the DVD set through PayPal, and today received a notice from the seller that it had shipped. So in terms of actually getting what I wanted, it worked perfectly. And yet, the process of setting up an account and buying the item through PayPal was so thoroughly annoying that it put me in a foul mood for the rest of the night. It's almost a case study in how not to implement an online shopping system.

Problem 1: Long, annoying registration

My first problem with eBay was the long, annoying account registration form. Actually, I think it might have been multiple forms. There were so damn many forms involved in this process that I kind of lost track after a while.

However, I do clearly remember that eBay needed a credit card number to register an account, even though they claim they won't actually charge it and they don't directly handle payment. This alone made me uneasy. So that's -5 points for eBay right off the bat.

Problem 2: Broken AJAX

Related to the annoying registration form was the one actual problem I encountered. As part of the registration, you have to select a unique eBay username. Of course, to me this was nothing but a hindrance, because I don't give a damn about establishing any kind of identity or reputation on eBay - I just want to buy the stupid DVD and get the hell off their site!

Anyway, to make the registration process somewhat more "user friendly", the username box had a button to check that the name you entered was unique. This button disabled the username box and form submit button, started a little "waiting" progress indicator, and made an AJAX call to eBay's servers.

Unfortunately, the AJAX call never returned. A quick look at Opera's error console indicated that this was actually due to the JavaScript violating cross-domain security rules. This caused Opera to (correctly) terminate the script. However, that left me high and dry, because the script had disabled the form's submit button, so I couldn't test the username the old-fashioned way. Instead, I just had to refresh the page and fill in the form again. That's -10 more points for eBay.

Problem 3: Confusing payment

Now that I had an eBay account and had committed to buy the DVD set, it was time to pay. Like a fool, I selected the seller's preferred payment method: PayPal. In particular, I used a credit card through PayPal.

This proved to be somewhat more confusing than I would have thought. For the payment, I was redirected to a third-party payment service. However, I was paying through PayPal, and was eventually redirected to them. Don't ask me why. I don't really understand why I couldn't just go straight to PayPal.

What's worse, at no point during the payment process was I actually sure that my credit card had been billed. At one point, I thought I had successfully paid, but was then prompted to log in to PayPal or create an account. After that, my transaction was apparently complete. I think. Or maybe I didn't need to create a PayPal account. I'm still not clear on that.

I'd say this is -20 points to eBay and/or PayPal. I mean, I'm a computer programmer, for crying out loud. I'm not supposed to get lost navigating through payment forms. I know I'm not perfect, but even on a bad day, a process like that has to be pretty poor for me to get as confused as I was. Hell, I still don't know what the heck happened. All in all, the whole process had a really ad hoc feel to it. Too ad hoc. When it comes to dealing with money, I don't like to feel as if the software handling the transaction is held together with the code-equivalent of Scotch tape and bubblegum.

Problem 4: Saving my credit card

I had a couple of nits to pick with the PayPal signup process. The first was that they unilaterally decided to save my credit card information to make future purchases more "convenient."

The problem is, I don't want PayPal to store my credit card information. I don't trust them, or anybody else, to keep that on file. In fact, when I have the choice, I make it a point to never let any online store save my credit card info. I want them to hold onto it just long enough to get the charge authorized and then forget it.

I say that's -20 points for PayPal. It was nice of them to inform me that they were saving my info, and it was nice that it was easy to delete it after the fact. But they really should have offered some kind of opt-out feature. Is that too much to ask?

Problem 5: PayPal vitiates its own credibility

You know what smacks of incompetent, amateur web design? Pages that play sounds. And that's exactly what one of the pages in PayPal's setup process did.

It was on a page with some kind of coupon offer. When the page loaded, a voice actually started reading some kind of instructions. I couldn't believe it. It was like a bad flashback to Geocities circa 1996.

And to make matters worse, the page was already really text-heavy to begin with. So not only did I have all this text to deal with, but I also had to listen to somebody yammer on about something I probably didn't care about. All I can say is, "Thank God for the mute button."

I give PayPal -100 points for that little "feature." That was the straw that broke the camel's back. It completely destroyed any trust I had in PayPal. In fact, at that point I seriously considered just walking away from the whole transaction and buying from a different site. And if I hadn't already entered my credit card information, I probably would have. I was already a little wary of PayPal, so the last thing I wanted to see was fourth-rate incompetent "webmaster" crap like that. If that's the best they can do, I don't want them handling my money.

The end

So there you have it. A happy ending to a miserable experience. I wasn't cheated or misled, and yet I regretted making this purchase before I was even done with it. I felt a little better about the whole thing in the morning, and a lot better after seeing that my purchase had shipped, but the whole experience left a bad taste in my mouth. I don't know if I'll ever use eBay or PayPal again. But at the very least, I won't be running off to do so any time in the foreseeable future.

Sabotaging productivity

This week, .NET Rocks had a great interview with Jeff Atwood. Jeff is a really insightful guy and listening to him was as much fun as reading his blog. In fact, this interview inspired me to start listening to other episodes of .NET Rocks. Well, that and the fact that Carl Franklin co-hosts Hanselminutes, which I also enjoy.

One of the topics the interview touched on was Jeff's Programmer's Bill of Rights. It enumerates six things a programmer should expect if he is to be productive.

I found this both depressing and comforting. It's depressing because, as Jeff pointed out in the interview, these things are neither unreasonable nor hard to fix. You can basically just throw money at them without putting in any real effort. These problems shouldn't be widespread enough that anyone even needs to bring them up.

As for comfort...well, it's just nice to know you're not alone. I'm currently one of those poor schleps Jeff talked about who's still working on a single 17" CRT monitor and a three-year-old PC, sitting in a cubicle right next to one of the network printers. I'm not even within sight of a window and my cube is literally just barely big enough to fit my desk. I write my code in SharpDevelop because my boss won't spring for a Visual Studio upgrade. Two years ago it was, "Well, we'll wait for the 2005 version to come out instead of buying 2003." This year it was, "We'll wait for the 2008 version to come out instead of buying 2005." And last but not least, despite the fact that we write mostly reporting-heavy information systems, I use the version of Crystal Reports that came bundled with VS.NET because, as crappy as it is, it's the best thing available to me.

I have to agree with Jeff, Richard, and Carl. The message you get from a setup like this is clear: you are not important. We don't value you as a person, we don't value your work, and we're certainly not going to waste money on making your life easier. The net effect is that morale, especially among the more clueful people in my office, is in the gutter. There's misery galore and productivity is next to nothing. But fortunately we work for the government, so nobody notices. And no, that was not a joke.

Sometimes it seems like our environment is tailored specifically to sabotage productivity. It's kind of like the keyboards they put on the laptops that the police use. I'm the hapless IT guy who has to do configuration and maintenance on these laptops, and I can tell you that the only explanation for those keyboards is that I did something really, really terrible in a past life. They're ruggedized keyboards made of semi-hard plastic. The problem is that they're so rugged that it's completely impossible to type on them. You have to use the two-finger method because the keys are too hard to press with your little fingers. Trying to type with any speed at all is completely futile. And yet the cops are somehow expected to type up tickets and accident reports on these things. It's a wonder they even give out tickets anymore. Actually, maybe that was the idea....

I suppose this is what I get for taking an IT job when I really wanted to be in software development. In retrospect, maybe I should have stayed a full-time student that extra semester or two, finished my damned thesis and looked for a job with a real software company. But I thought I needed some experience and this was the best offer I got, so I took it. Unfortunately, I was too inexperienced to know that crappy experience isn't necessarily better than no experience.

Though on the up side, when I took this job was when I moved in with my (now) wife. It also provided the money that paid for that engagement ring. So in some ways this was the right decision. It's just that professional advancement wasn't one of them.

Now I just need to finish my damned Master's thesis and get the hell out of here.

No, bloggers aren't journalists

Last week, Jeff Atwood posted an anecdote demonstrating yet again that bloggers aren't real journalists. I know this meme has been floating around for some years, but I'm still surprised when people bring it up. In fact, I'm still surprised that it ever got any traction at all.

I'm going to let you in on a little "open secret" here: blogging in 2007 is no different than having a Geocities site in 1996. "Blogging" is really just a fancy word for having a news page on your website.

Oh, sure, we have fancy services and self-hosted blog servers (like this one); there's Pingback, TrackBack, and anti-comment-spam services; everybody has RSS or Atom feeds, and support for them is now built into browsers. But all that is just gravy. All you really need to have a blog is web hosting, an FTP client, and Windows Notepad.

That's the reason why bloggers in general are not, and never will be, journalists. A "blog" is just a website and, by extension, a "blogger" is just some guy with a web site. There's nothing special about it. A blogger doesn't need to study investigative techniques, learn a code of ethics, or practice dispassionate analysis of the facts. He just needs an internet connection.

That's not to say that a blogger can't practice journalism or that a journalist can't blog. Of course they can. It's just that there's no necessary relationship. A blogger might be doing legitimate journalism. But he could just as easily be engaging in speculation or rumor mongering. There's just no way to say which other than on a case-by-case basis.

Like everything else, blogging, social media, and all the other Web 2.0 hype is subject to Sturgeon's law. The more blogs there are out there in total, the more low-quality blogs there are. And the lower the barrier to entry, the lower the average quality. And since blogs have gotten insanely easy to start, it should come as no surprise that every clueless Tom, Dick, and Harry has started one.

I think George Carlin put it best:

Just think of how stupid the average person is. Then realize that half of them are stupider than that!

Any average person can be a blogger, so the quality of blogs will follow a normal distribution. For every Raymond Chen, Jeff Atwood, and Roger Johansson, there are a thousand angst-ridden teenagers sharing bad poetry and talking about not conforming in exactly the same way. They're definitely bloggers, but if we're going to hold them up as journalists, then I think society is pretty much done for.

Blind, raging Windows hatred

You ever have one of those moments when you almost wish you could string Bill Gates up with his own entrails? I had one of those moments yesterday morning.

My problem was not Bill Gates or even Microsoft in general, but their representative to the common man: Windows XP. I was bitten by that annoying Explorer bug where it blocks for several seconds at a time for no particular reason. I had to clean off my D: drive because it's a measly 30GB, which is just barely enough to house the data I've accumulated over the last five years, plus my collection of virtual machines. However, it seemed like every two minutes, when I clicked something, Explorer would lock up. No apparent reason, no inordinate disk or CPU activity - it just stopped responding. Then, after 5 to 20 seconds, it would pick up where it left off.

Now, this has happened many times before, but never quite so often in one session. It was positively maddening. It's a good thing I was drinking decaf, or I probably would have ripped my monitor to pieces.

Things went slightly better today, but I did have an issue with deleting a file.  Basically, the progress dialog displayed for way, way, way too long.  Like a minute or two for a 1MB file.  It's just ridiculous.

The worst part is, it's not like I can just stop using Explorer.  If this were a UNIX, I could just switch desktops or file managers.  But on Windows, that just doesn't work.

Oh, don't get me wrong - there are alternative shells and file managers, but I've never found any of them to be very good.  At least not the free ones (I haven't tried any of the paid-for options because my boss is kind of cheap).  Most of the free file managers seem to focus on useless features like dual panes and integrated file viewers (that's why we have file associations - duh!).  And the third-party shell replacements...they just feel like second-class citizens.  Plus I've found that things tend to break when Explorer isn't running.

Are there really that many stupid developers?

I have a question for all the programmers out there: do people really write code in Windows Notepad? And if so, who and how many are they? I want to know so I can try to steer clear of them.

I see this every now and then in forums, tutorials, and so forth. It usually comes as instructions to open a particular file in Notepad, or open Notepad, type in this text, and save it as such and such. In some cases, it is followed by the revelation that you can download some kind of specialized editor for the file type at hand.

The worst part is that I've seen this from people who I know are not beginners. For example, one person I spoke with, who has been programming for over 20 years, said that when writing ASP applications, she uses Visual Studio .NET or Notepad. Apparently there's no in-between there. It's either the heaviest of the heavyweight IDEs, or the least code-friendly text editor in the world.

Why do people write code in Notepad? Why? It's the text-editor equivalent of writing code on punch-cards. Maybe it's just me, but every time I double-click a programming-related file in Windows and it opens in Notepad, I curse under my breath, close Notepad, and right-click the file to open it in Vim or jEdit. I mean, unless you're on somebody else's system and Notepad is the only thing available, I just can't contemplate using it for any actual work.

Can anyone enlighten me on this? Is it just ignorance? Do some people actually like Notepad? What's the deal?

Filters suck

You know what sucks? Internet filters. In particular, my employer's internet filters.

You know why they suck? One word: Wikipedia. About a month or so ago, our filters started blocking freakin' Wikipedia. Sure, it has its pointless side, like the 13-page Chocobo article. Honestly, who needs 13 pages on an imaginary bird from a video game series, even if it is the best game series ever (rivaled only by Wing Commander)?

But there's actually lots of useful information on Wikipedia, particularly on technical topics. For instance, I've found it quite useful for explaining some of the telecom lingo I have to deal with on occasion. It might not be the most definitive reference in the world, but it's very good for quick explanations of unfamiliar topics.

I guess I shouldn't be surprised, though. We also blocked about.com for quite a while. The blacklist just seems to miscategorize sites sometimes.

Digg users are morons

It's official: the Digg userbase is full of losers and morons. Of course, we all knew that already, but here I have photographic evidence. Observe:
Digg "YouTube down" article in Akregator

That's right: over 300 people voted for a "story" that was nothing more than a statement that YouTube wasn't working for half an hour or so. Better yet, it wasn't even a story: if you look at the URL, it was link spam for somebody's Counter-Strike site!

Isn't it nice to know you can trust the Digg user community to carefully examine each story and weed out the garbage? Much better than those lazy, incompetent editors over at Slashdot! For example, take that time earlier this year when Slashdot had all those links to that crackpot junk "science" site, rebelscience.org. The good users at Digg got the same submissions and -- oh, wait, the Digg community voted up a bunch of those links too. Well, at least the people at OSnews -- hold on, they published at least one of the same links. Hmmm... I guess democratic, user-driven sites can publish just as much garbage as sites controlled by a small group of editors. The only difference seems to be that the user-driven sites can publish greater volumes of junk in less time.

End of service problems

Well, I can finally stop complaining about my crappy network connection. It got really bad last weekend, so on Monday I finally called Time Warner to complain.

The good news is that they got a tech out here the next day. The bad news was that, as usual, the "appointment" was for sometime between noon and 5PM. The even worse news was that shortly after the tech left, the service went out again.

So, after another call, I ended up going down to the Time Warner office to trade in my six-year-old cable modem for a newer one. This seemed to fix the problem.

However, it seems that the frequent service outages played havoc with KMail. At least, I'm guessing that's what caused the problem. All I know is that, despite having KMail up and running the whole time, I didn't get any e-mail between Monday and Friday. On Friday, I logged out and logged back in later, only to discover messages from Tuesday in my inbox.

I don't know what happened there. I never saw any error messages, even when I manually checked my mail. It just silently failed. Apparently closing KMail fixed the problem, but it's still really annoying - especially since one of the messages from Tuesday was important.

Don't encourage compiling

As a programmer, I appreciate how great it is to be able to read and modify the source code for the software I use. Really I do. On more than one occasion, I've even availed myself of the source to fix bugs or implement features in software that I use. As a good member of the FOSS community, I naturally sent these patches to the program maintainers. It's a great system.

However, I think the fact that source is available causes its importance to be overplayed by people in all parts of the community. I think things would be a lot better for everyone if we de-emphasized the idea of users building their software from source.

Now, I'm not saying that source code shouldn't be easily available. By all means, keep that link to the source tarball in a prominent location on the download page. Anyone who wants the code should be able to find it with no trouble.

What I am saying is that we should try to keep end users away from compiling from source unless they really need to do so.

Some people will say that all users should know how to compile things from source. They'll say it's easy and it's a good learning experience. They'll also say it's convenient, in that it will work on any variety of UNIX. They're also wrong on all counts.

First, I've seen quite a number of users on web forums who, though they apparently build programs with some regularity, haven't learned a damned thing from it. You know what they've learned? "Type ./configure, make, and make install as root." That's not learning, that's mimicry. In fact, I've seen forum postings where users couldn't figure out why ./configure was returning a "file not found" error on a package which I knew for a fact didn't use autotools. That's no better than the Windows monkey who sits there reading instructions on where to click.

Building things from source can be dead simple. But if you don't have everything you need, it can be a huge pain. Many users are simply ill-equipped to deal with the problems that might come up, from missing libraries to missing header files to an inappropriate build environment. The simple truth is that no system is guaranteed to have everything needed to build every program. So when the error messages start piling up, what do the users think? "Why can't I just run a setup.exe?"

And did I mention that managing programs installed from source is a pain? Not only is there not necessarily any easy way to uninstall such programs, but the simple fact that they won't be registered with your package manager can plunge you into dependency hell. The easy solution is, of course, to compile from source and build your own RPM or DEB or whatever. But doing that right isn't trivial and doing it half-assed is still a pain.

And what benefit is there to compiling something from source? Well, if there's no binary available, then the answer is obvious. Likewise if you need to apply a patch or enable a non-standard compile-time option of some kind. But where there is already an acceptable binary available, what's the benefit?

Really, there isn't any benefit. If you can use a binary package, then building from source is a waste of time. Despite what some people like to claim, compiling from source will not make the program run faster. Oh, I know you can set compiler flags to optimize for your hardware, but trust me: you won't see the difference. There are some types of applications for which the right processor-specific optimizations can make a significant difference, but for most desktop applications, they just don't. Any speed gains you do get are typically too small to notice.

My recommendation is that users should build from source only as a last resort. If there's a package for your distribution, use that. If there's no package for your distribution, but there is for another (e.g. you're using Ubuntu, but there are only Fedora RPMs), try converting it with alien or some similar tool. If that fails, then you can think about building from source. Going straight to the source is time consuming and just makes things more complicated than they need to be.

Ignoring the GPL

It seems that the MEPIS people have finally decided to comply with the GNU GPL. You may remember this issue from the Newsforge story a month ago. Basically, the SimplyMEPIS Linux distribution, which is a derivative of Ubuntu (and was formerly a derivative of Debian), got in trouble with the FSF for not complying with the terms of the GNU GPL. Specifically, while they were providing the source code for the Ubuntu packages they modified, they were not providing code for the packages they copied unmodified from Ubuntu. Apparently they figured that as long as the source is "out there," that was good enough.

However, it doesn't work that way. The fact that the source is "out there" is not enough to satisfy the terms of the GPL and it never has been. And if they'd bothered to read and understand the license, they would have known that. The GNU GPL clearly states that if you distribute binaries of GPL-licensed software, you must either include a copy of the corresponding source, whether in the same box or from the same web site, or include a written offer to distribute the source on demand. There's nothing in there that says, or even suggests, that this only applies if you make changes to the code.

The main argument from MEPIS and others seems to be that this provision of the GPL is onerous and stifles innovation. Managing all that extra source code implies a lot more work, and hosting and distribution for that code implies more expense. The idea seems to be that since the code is "out there," it's not reasonable to force all this duplicate effort on small-time Linux distributors. Why, it just might be enough to discourage many from even building their own distribution! In fact, it's even a burden on people giving out Linux CDs to their friends, since they are technically required to give out the source too! And, really, who even cares if they have the source? Just those communist GPL police!

Of course, to be honest, they have a point. Distributing source and binaries is definitely harder than distributing just the binaries. Likewise, hosting and CD replication for both source and binaries is more expensive than for just binaries. I'm sure there are some people who would be put off starting a distribution because of this.

But on the other hand, so what? The stated purpose of the GNU GPL is to make sure that anyone who receives a program licensed under it is able to change and share the program, not to make things easy for Linux distributors. Requiring that the person who distributes the binaries also distribute the source is the simplest way to accomplish this. Sure, it's more trouble for the distributor, but the "the source is out there" approach would leave the user open to a never-ending game of "pass the buck." He'd be entitled to a copy of the source, but no particular person would be obligated to actually give him a copy of it. And if that happened, then the program would be free software in name only.

I just find the whole line of reasoning put forth by MEPIS really annoying. Maybe complying with the GPL would be burdensome. Maybe their users really don't care about the source. That's not the point. The point is that they don't get to make that decision. The GPL gives you license to redistribute the software it covers under a specific set of conditions. You can abide by those terms or you can violate them and infringe on the owner's copyright. Just don't try to argue that you ought to be able to pick and choose which terms you want to follow. It doesn't work like that. A proprietary software company certainly wouldn't put up with such nonsense and I don't see any reason why the owner of a free software package should be expected to.

Hosting problems

It seems my crappy, cheap hosting provider turned off my service Thursday. Why? Because I didn't pay them. And why didn't I pay them? Because they never sent me a renewal notice, that's why.

Well, to be fair, they did send a notice. In fact, they sent three. They just sent them to the wrong e-mail address. They used the Yahoo! account that I only check about once a month anymore and they only gave me five days to respond. So by the time I actually got the notices, it was already too late.

What the heck happened? I specifically made a point of updating my contact information.

The problem was probably that I updated it in their domain manager, Plesk (which sucks, by the way), rather than their billing system. Apparently the two are not connected. Not that I even knew they had a separate billing database. Silly me, I assumed that if they had a place for the information in Plesk, then they must actually use it. But apparently not. No, I'm sure it makes much more sense to simply have a bunch of redundant information. I'm sure the other customers love that.

Anyway, I'm instituting nightly backups while I look for a new web host. I had been thinking of getting one anyway, as I kind of wanted subdomains and shell access, but up until now it just seemed like too much hassle. This just pisses me off, though. I could forgive a billing mix-up if the service were better, but if I can get something like BlueHost's plan (which includes subdomains, shell access, and RoR among other features) for only $2/month more, I say screw LowestHosting.

I guess this is a case of "you get what you pay for." Live and learn.

MSDN pain

Will someone please tell me when MSDN started to suck? I remember back when I first started with Visual Basic, MSDN was really great. It was a wonderful reference source with lots of good material. The site was relatively quick and easy to use, the documentation was useful, and the examples tended to be at least moderately informative.

What the hell happened? Today I was looking up some information on using the XPathNodeIterator class in the .NET framework and Google directed me to the MSDN page for it. It was horrible!

The first thing I noticed was the truly massive page size. I literally sat there for seven seconds watching Opera's page load progress bar move smoothly from zero to 100%. And that's on the T1 connection at work!

The second problem is the class declaration, which says that it's a public, abstract class that implements the ICloneable and IEnumerable interfaces. There's nothing wrong with including that information per se. I personally don't think that including the code for the declaration is particularly helpful, as they could just as easily say that in pseudo-code or English, but whatever. What I do object to is that they included this declaration in five different programming languages! Why?!?! Of what conceivable value is it to waste half a screen's worth of text to display a freakin' declaration in VB, C#, C++, J#, and JScript? Is the average Windows programmer really so completely clueless that he can't decipher this information without a declaration in his particular language? It's ridiculous!

The third problem is the code samples. Or should I say "sample." There are three code blocks, each of which has exactly the same code, except translated into different languages - VB, C#, and C++. Again, why? Is this really necessary? And if it is, why do they have to display all three on the same page? Why not break out at least two of the samples into separate pages? It's just a pain to have to sort through lots of irrelevant information.

My last complaint is the content of the example itself. Maybe this is just a product of my not yet being too familiar with .NET or with object-oriented enterprise-level frameworks in general, but the code sample just struck me as kind of bizarre. The goal of the algorithm was to iterate through a set of nodes in an XML file. To do this, they created an XPathDocument object and got an XPathNavigator object from that. Fine. Then they selected a node with the navigator object to get an XPathNodeIterator object. OK, I get that. Then they saved the current node of the iterator, which returns an XPathNavigator. Umm.... And after that, they selected the child nodes from the navigator to get another XPathNodeIterator, which they then used to actually iterate through the child nodes.

Is that normal? Do people actually write code like that? I mean, I can follow what they're doing, but it seems like an awfully circuitous route. Why not just go straight from the initial navigator to the final iterator? You can just chain the method calls rather than creating a new variable for each object that gets created, so why not do that? I suppose the charitable interpretation is that the example is intentionally verbose and general for instructive purposes. But to me, all those extra object variables are just confusing. It makes for another, seemingly redundant, level of indirection. Maybe I'm atypical, but the direct approach makes a lot more sense to me.

Faith-based income

This evening, Digg directed me to an article by Steve Pavlina entitled 10 Reasons You Should Never Get a Job. This article conclusively proves that Steve is a clueless, arrogant moron.

OK, maybe that was a little harsh. I don't actually think Mr. Pavlina is a moron. He has plenty of useful and interesting things to say and he seems to be doing well enough for himself.

However, the article is positively dripping with arrogance and disdain. His basic premise is that "jobs" are for cowardly, brainwashed chumps who've sold their souls to "the man." I don't know how Mr. Pavlina makes his living (judging from the "donate" link on his page, my guess is by begging), but I sure hope it isn't through motivational speaking. I don't know about you, but I usually find that people who go around using terms like "brainwashed" or "slaves" are somewhat lacking in the knowledge and credibility department.

Of course, he's not entirely wrong. Mr. Pavlina is certainly correct that having a traditional 9 to 5 job isn't the route to financial independence. Not that this is a shock to anyone. Everybody who's ever had a job knows that the big boss gets all the money and the freedom to do basically whatever he wants. There's no question about that.

What would be really good is if Steve could tell us something useful, like exactly what to do to make money on your own, or how to go about it. See, that's the problem, isn't it? The answers to those questions are different for everybody. It's easy to spout platitudes about how "your real value is rooted in who you are, not what you do." The hard part is figuring out how to convert "who you are" into enough money to live on. Should I do contract software development? Write novels? Blog and wait for money to magically start rolling in? Figuring that out is easier said than done.

Not that I'm trying to discourage anyone. By all means, listen to Mr. Pavlina and go out and build yourself some kind of business. It's a lot of hard work, but the people who are successful at it say that it's worth every bit. I hope to do it myself someday in the foreseeable future.

I just get annoyed by the tone of the article - a combination of self-congratulatory arrogance and touchy-feely pseudo-inspiration. Building a successful business is hard, and not everybody is lucky enough to succeed the first time like Steve did. It can involve significant risk, both financial and psychological, and I don't think Steve is doing anybody any favors by trivializing such concerns as brainwashing and excuse-making.

But, as Dennis Miller says, that's just my opinion. I could be wrong.

PHP suckiness: XML

After weeks of mind-numbing IT type stuff, I'm finally getting back into programming a little. I've been playing with the .NET XML libraries the past couple of days. In particular, the System.Xml.XPath library, which I found quite handy for accessing XML configuration files. So, after reading up a bit on XPath, XSLT, and XML in general, I was naturally overcome with a fit of optimism and decided to look at converting LnBlog to store data in XML files.

Currently, LnBlog stores its data in "text files." What that really means is that it dumps each piece of entry metadata into a "name: value" line at the beginning of the file and then dumps all the body data after that. It's not a pretty format in terms of interoperability or standardization. However, when you look at it in a text editor, it is very easy to see what's going on. It's also easy to parse in code, as each piece of metadata is one line with a particular name, and everything else is the body.
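
Just to illustrate, reading an entry back in amounts to something like this (a simplified sketch, not the actual LnBlog code - the regex and field handling are made up for illustration):

    <?php
    // Simplified sketch of reading the "name: value" format described
    // above - NOT the actual LnBlog code.
    function parse_entry_file($path) {
        $metadata = array();
        $body = '';
        $in_body = false;
        foreach (file($path) as $line) {
            if (!$in_body && preg_match('/^(\w+):\s*(.*)$/', $line, $m)) {
                $metadata[$m[1]] = trim($m[2]);  // a metadata header line
            } else {
                $in_body = true;                 // everything else is the body
                $body .= $line;
            }
        }
        return array($metadata, $body);
    }
    ?>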

This scheme works well enough, but it's obviously a bit ad hoc. A standard format like XML would be much better. And since PHP is geared mostly toward developing web applications, and XML is splattered all over the web like an over-sized fly on a windshield, I figured reading and writing XML files would be a cinch.

Little did I know.

You see, for LnBlog, because it's targeted at lower-end shared hosting environments, and because I didn't want to limit myself to a possible userbase of seven people, I use PHP 4. It seems that XML support has improved in PHP 5, but that's still not as widely deployed as one might hope. So I'm stuck with the XML support in PHP4, which is kind of crappy.

If you look at the PHP 4 documentation, there are several XML extensions available. However, the only one that's not optional or experimental, and hence the only one you can count on existing in the majority of installations, is the XML_Parser extension. What is this? It's a wrapper around expat, that's what. And that's my only option.

Don't get me wrong - it's not that expat is bad. It's just that it's not what I need. Expat is an event-driven parser, which means that you set up callback functions that get called when the parser encounters tags, attributes, etc. while scanning the data stream. The problem is, I need something more DOM-oriented. In particular, I just need something that will read the XML and parse it into an array or something based on the DOM.
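
If you've never worked with it, the expat interface boils down to something like this (a bare-bones sketch):

    <?php
    // Bare-bones sketch of PHP 4's event-driven XML interface. The
    // parser calls these handlers as it scans the stream - it never
    // builds a document tree for you.
    function on_start($parser, $tag, $attrs) { echo "open: $tag\n"; }
    function on_end($parser, $tag)           { echo "close: $tag\n"; }
    function on_cdata($parser, $text)        { echo "text: $text\n"; }

    $parser = xml_parser_create('UTF-8');
    xml_set_element_handler($parser, 'on_start', 'on_end');
    xml_set_character_data_handler($parser, 'on_cdata');
    xml_parse($parser, '<entry><subject>Test</subject></entry>', true);
    xml_parser_free($parser);
    ?>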

The closest thing to that in the XML_Parser extension is the xml_parse_into_struct() function, which parses the file into one or two arrays, depending on the number of arguments you give. These don't actually correspond to the DOM, but rather to the sequence in which tags, data, etc. were encountered. So, in other words, if I want to get the file data into my objects, I have to write a parser to parse the output of the XML parser.
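
Here's a minimal example of what I mean:

    <?php
    // xml_parse_into_struct() gives you flat arrays describing the
    // sequence of parse events, not a tree.
    $xml = '<entry><subject>Test</subject><body>Hello</body></entry>';
    $parser = xml_parser_create('UTF-8');
    xml_parse_into_struct($parser, $xml, $values, $index);
    xml_parser_free($parser);

    // $values is a flat event list, roughly:
    //   0: tag => ENTRY,   type => open,     level => 1
    //   1: tag => SUBJECT, type => complete, level => 2, value => Test
    //   2: tag => BODY,    type => complete, level => 2, value => Hello
    //   3: tag => ENTRY,   type => close,    level => 1
    // To get that into objects, you still have to walk the list and
    // track the nesting yourself - a parser for the parser's output.
    print_r($values);
    ?>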

And did I mention writing XML files? What would be really nice is a few classes to handle creating nodes with the correct character encoding (handling character encoding in PHP is non-trivial), escaping entities, and generally making sure the document is well-formed. But, of course, those classes don't exist. Or, rather, they exist in the PEAR repository, but I can't count on my users having shell access to install new modules. Hell, I don't have shell access to my web host, so I couldn't install PEAR modules if I wanted to. My only option is to write all the code myself. Granted, it's not a huge problem, so long as nobody ever uses a character set other than UTF-8, but it's still annoying.
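
Something along these lines, for instance (a simplified sketch of the kind of helper I mean, not actual LnBlog code):

    <?php
    // Sketch of the helper PHP 4 makes you write yourself: escape the
    // entities, declare the encoding, and hope nobody ever feeds it
    // anything that isn't UTF-8.
    function make_node($tag, $content) {
        $escaped = htmlspecialchars($content, ENT_QUOTES, 'UTF-8');
        return "<$tag>$escaped</$tag>";
    }

    function make_document($root, $nodes) {
        $xml = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<$root>\n";
        foreach ($nodes as $tag => $content) {
            $xml .= '  ' . make_node($tag, $content) . "\n";
        }
        return $xml . "</$root>\n";
    }

    echo make_document('entry', array('subject' => 'Q & A', 'body' => 'a < b'));
    ?>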

Maybe tomorrow I can rant about the truly brain-dead reference passing semantics in PHP 4. I had a lovely time with that when I was trying to optimize the plugin system.
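
As a preview, here's the kind of thing I mean (a made-up minimal example - Plugin is just a stand-in, not an actual LnBlog class):

    <?php
    // In PHP 4, objects are COPIED on plain assignment, so you need
    // =& everywhere if you actually want to share one instance.
    class Plugin {
        var $enabled = false;
    }

    $original = new Plugin();

    $copy = $original;       // silently duplicates the object
    $copy->enabled = true;
    echo $original->enabled ? "yes\n" : "no\n";   // "no" - only the copy changed

    $alias =& $original;     // an explicit reference is needed to share
    $alias->enabled = true;
    echo $original->enabled ? "yes\n" : "no\n";   // "yes"
    ?>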