Hosting problems

It seems my crappy, cheap hosting provider turned off my service Thursday. Why? Because I didn't pay them. And why didn't I pay them? Because they never sent me a renewal notice, that's why.

Well, to be fair, they did send a notice. In fact, they sent three. They just sent them to the wrong e-mail address. They used the Yahoo! account that I only check about once a month these days, and they only gave me five days to respond. So by the time I actually got the notices, it was already too late.

What the heck happened? I specifically made a point of updating my contact information.

The problem was probably that I updated it in their domain manager, Plesk (which sucks, by the way), rather than their billing system. Apparently the two are not connected. Not that I even knew they had a separate billing database. Silly me, I assumed that if they had a place for the information in Plesk, then they must actually use it. But apparently not. No, I'm sure it makes much more sense to simply have a bunch of redundant information. I'm sure the other customers love that.

Anyway, I'm instituting nightly backups while I look for a new web host. I had been thinking of getting one anyway, as I kind of wanted subdomains and shell access, but up until now it just seemed like too much hassle. This just pisses me off, though. I could forgive a billing mix-up if the service were better, but if I can get something like BlueHost's plan (which includes subdomains, shell access, and RoR among other features) for only $2/month more, I say screw LowestHosting.

I guess this is a case of "you get what you pay for." Live and learn.

MSDN pain

Will someone please tell me when MSDN started to suck? I remember back when I first started with Visual Basic, MSDN was really great. It was a wonderful reference source with lots of good material. The site was relatively quick and easy to use, the documentation was useful, and the examples tended to be at least moderately informative.

What the hell happened? Today I was looking up some information on using the XPathNodeIterator class in the .NET framework and Google directed me to the MSDN page for it. It was horrible!

The first thing I noticed was the truly massive page size. I literally sat there for seven seconds watching Opera's page load progress bar move smoothly from zero to 100%. And that's on the T1 connection at work!

The second problem is the class declaration, which says that it's a public, abstract class that implements the ICloneable and IEnumerable interfaces. There's nothing wrong with including that information per se. I personally don't think that including the code for the declaration is particularly helpful, as they could just as easily say that in pseudo-code or English, but whatever. What I do object to is that they included this declaration in five different programming languages! Why?!?! Of what conceivable value is it to waste half a screen worth of text to display a freakin' declaration in VB, C#, C++, J#, and JScript? Is the average Windows programmer really so completely clueless that he can't decipher this information without a declaration in his particular language? It's ridiculous!

The third problem is the code samples. Or should I say "sample." There are three code blocks, each of which has exactly the same code, except translated into different languages - VB, C#, and C++. Again, why? Is this really necessary? And if it is, why do they have to display all three on the same page? Why not break out at least two of the samples into separate pages? It's just a pain to have to sort through lots of irrelevant information.

My last complaint is the content of the example itself. Maybe this is just a product of my not yet being too familiar with .NET or with object-oriented enterprise-level frameworks in general, but the code sample just struck me as kind of bizarre. The goal of the algorithm was to iterate through a set of nodes in an XML file. To do this, they created an XPathDocument object and got an XPathNavigator object from that. Fine. Then they selected a node with the navigator object to get an XPathNodeIterator object. OK, I get that. Then they saved the current node of the iterator, which returns an XPathNavigator. Umm.... And after that, they selected the child nodes from the navigator to get another XPathNodeIterator, which they then used to actually iterate through the child nodes.

Is that normal? Do people actually write code like that? I mean, I can follow what they're doing, but it seems like an awfully circuitous route. Why not just go straight from the initial navigator to the final iterator? You can just chain the method calls rather than creating a new variable for each object that gets created, so why not do that? I suppose the charitable interpretation is that the example is intentionally verbose and general for instructive purposes. But to me, all those extra object variables are just confusing. It makes for another, seemingly redundant, level of indirection. Maybe I'm atypical, but the direct approach makes a lot more sense to me.
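To illustrate what I mean, here's the roundabout style next to the chained style, sketched in Python's standard-library ElementTree rather than the .NET classes (the element names are invented for the example):

```python
import xml.etree.ElementTree as ET

XML = """\
<config>
  <servers>
    <server name="alpha"/>
    <server name="beta"/>
  </servers>
</config>
"""

root = ET.fromstring(XML)

# The MSDN-sample style: a new variable for every intermediate object.
servers = root.find("servers")
server_list = servers.findall("server")
names_verbose = [s.get("name") for s in server_list]

# The direct style: just chain the calls and skip the clutter.
names_direct = [s.get("name") for s in root.find("servers").findall("server")]

assert names_verbose == names_direct == ["alpha", "beta"]
```

Same result either way; the chained version just doesn't make you track three temporary variables to follow it.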

Fixing sites with Opera

Well, after a bit of experimenting, I implemented my first quick-and-dirty site-specific fix in Opera. It wasn't even that hard.

The motivation came when I received a site update e-mail from a site I had apparently registered with at some point and completely forgotten about. I reacquainted myself with it and rediscovered the fairly decent selection of images they have.

The only problem was that their homepage layout was completely garbled in Opera. It consists of a bunch of div tags that divide the content area up into news entries. However, the divs end up being crushed down to almost nothing, with the text spilling out and running together. It turns out the problem was a height: inherit line in the stylesheet for the class applied to those divs. I'm not sure what the purpose of that line was, but removing it fixed the problem.

Getting the site to render correctly for me turned out to be quite simple, once I figured out how to do it. I ended up simply downloading a copy of the (single) stylesheet for the page, removing the problem line, and setting it as "my style sheet" in the site preferences dialog. That allowed me to simply change the view mode from author mode to user mode and ta-da! The page now renders correctly.
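The "fix" really amounts to nothing more than filtering one declaration out of the downloaded stylesheet before pointing Opera at it. A trivial sketch (the class name and file contents are invented for illustration):

```python
# Strip the problematic "height: inherit" declaration from a downloaded
# stylesheet so the cleaned copy can be set as Opera's "my style sheet".

def strip_height_inherit(css: str) -> str:
    # Drop any line containing the offending declaration, keep the rest.
    lines = css.splitlines(keepends=True)
    return "".join(l for l in lines if "height: inherit" not in l)

original = ".news-entry {\n    height: inherit;\n    margin: 0;\n}\n"
fixed = strip_height_inherit(original)
print(fixed)
```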

Faith-based income

This evening, Digg directed me to an article by Steve Pavlina entitled 10 Reasons You Should Never Get a Job. This article conclusively proves that Steve is a clueless, arrogant moron.

OK, maybe that was a little harsh. I don't actually think Mr. Pavlina is a moron. He has plenty of useful and interesting things to say and he seems to be doing well enough for himself.

However, the article is positively dripping with arrogance and disdain. His basic premise is that "jobs" are for cowardly, brainwashed chumps who've sold their souls to "the man." I don't know how Mr. Pavlina makes his living (judging from the "donate" link on his page, my guess is by begging), but I sure hope it isn't through motivational speaking. I don't know about you, but I usually find that people who go around using terms like "brainwashed" or "slaves" are somewhat lacking in the knowledge and credibility department.

Of course, he's not entirely wrong. Mr. Pavlina is certainly correct that having a traditional 9 to 5 job isn't the route to financial independence. Not that this is a shock to anyone. Everybody who's ever had a job knows that the big boss gets all the money and the freedom to do basically whatever he wants. There's no question about that.

What would be really good is if Steve could tell us something useful, like exactly what to do to make money on your own, or how to go about it. See, that's the problem, isn't it? The answers to those questions are different for everybody. It's easy to spout platitudes about how "your real value is rooted in who you are, not what you do." The hard part is figuring out how to convert "who you are" into enough money to live on. Should I do contract software development? Write novels? Blog and wait for money to magically start rolling in? Figuring that out is easier said than done.

Not that I'm trying to discourage anyone. By all means, listen to Mr. Pavlina and go out and build yourself some kind of business. It's a lot of hard work, but the people who are successful at it say that it's worth every bit. I hope to do it myself someday in the foreseeable future.

I just get annoyed by the tone of the article - a combination of self-congratulatory arrogance and touchy-feely pseudo-inspiration. Building a successful business is hard, and not everybody is lucky enough to succeed the first time like Steve did. It can involve significant risk, both financial and psychological, and I don't think Steve is doing anybody any favors by trivializing such concerns as brainwashing and excuse-making.

But, as Dennis Miller says, that's just my opinion. I could be wrong.

PHP suckiness: XML

After weeks of mind-numbing IT type stuff, I'm finally getting back into programming a little. I've been playing with the .NET XML libraries the past couple of days. In particular, the System.Xml.XPath namespace, which I found quite handy for accessing XML configuration files. So, after reading up a bit on XPath, XSLT, and XML in general, I was naturally overcome with a fit of optimism and decided to look at converting LnBlog to store data in XML files.

Currently, LnBlog stores its data in "text files." What that really means is that it dumps each piece of entry metadata into a "name: value" line at the beginning of the file and then dumps all the body data after that. It's not a pretty format in terms of interoperability or standardization. However, when you look at it in a text editor, it is very easy to see what's going on. It's also easy to parse in code, as each piece of metadata is one line with a particular name, and everything else is the body.
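For illustration, here's roughly how that format parses, sketched in Python (the field names are invented for the example, not necessarily what LnBlog actually uses):

```python
# Parse a "name: value" header block followed by a free-form body.
# Once a line doesn't look like a header, everything from there on is body.

def parse_entry(text: str):
    meta, body_lines, in_body = {}, [], False
    for line in text.splitlines():
        if not in_body and ": " in line:
            name, value = line.split(": ", 1)
            meta[name] = value
        else:
            in_body = True
            body_lines.append(line)
    return meta, "\n".join(body_lines)

sample = "Subject: Hello\nDate: 2006-01-01\nThis is the body.\nMore body."
meta, body = parse_entry(sample)
```

The whole parser fits in a dozen lines, which is exactly why the ad hoc format survived as long as it did.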

This scheme works well enough, but it's obviously a bit ad hoc. A standard format like XML would be much better. And since PHP is geared mostly toward developing web applications, and XML is splattered all over the web like an over-sized fly on a windshield, I figured reading and writing XML files would be a cinch.

Little did I know.

You see, for LnBlog, because it's targeted at lower-end shared hosting environments, and because I didn't want to limit myself to a possible userbase of seven people, I use PHP 4. It seems that XML support has improved in PHP 5, but that's still not as widely deployed as one might hope. So I'm stuck with the XML support in PHP4, which is kind of crappy.

If you look at the PHP 4 documentation, there are several XML extensions available. However, the only one that's not optional or experimental, and hence the only one you can count on existing in the majority of installations, is the XML_Parser extension. What is this? It's a wrapper around expat, that's what. And that's my only option.

Don't get me wrong - it's not that expat is bad. It's just that it's not what I need. Expat is an event-driven parser, which means that you set up callback functions that get called when the parser encounters tags, attributes, etc. while scanning the data stream. The problem is, I need something more DOM-oriented. In particular, I just need something that will read the XML and parse it into an array or something based on the DOM.

The closest thing to that in the XML_Parser extension is the xml_parse_into_struct() function, which parses the file into one or two arrays, depending on the number of arguments you give. These don't actually correspond to the DOM, but rather to the sequence in which tags, data, etc. were encountered. So, in other words, if I want to get the file data into my objects, I have to write a parser to parse the output of the XML parser.
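Python happens to ship the same expat parser that PHP 4 wraps, so I can sketch what I mean. The parser hands you a flat stream of events, much like xml_parse_into_struct() does, and reconstructing the tree from that stream is your problem:

```python
import xml.parsers.expat

# Event-driven parsing: register callbacks and collect whatever they see.
events = []
p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = lambda tag, attrs: events.append(("open", tag))
p.CharacterDataHandler = lambda data: events.append(("cdata", data))
p.EndElementHandler = lambda tag: events.append(("close", tag))

p.Parse("<entry><subject>Hi</subject></entry>", True)

# events is now a flat list of what was encountered, in order:
# [('open', 'entry'), ('open', 'subject'), ('cdata', 'Hi'),
#  ('close', 'subject'), ('close', 'entry')]
```

Notice there's no tree anywhere in that output. If your objects are structured like a DOM, you get to write the second-stage parser yourself.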

And did I mention writing XML files? What would be really nice is a few classes to handle creating nodes with the correct character encoding (handling character encoding in PHP is non-trivial), escaping entities, and generally making sure the document is well-formed. But, of course, those classes don't exist. Or, rather, they exist in the PEAR repository, but I can't count on my users having shell access to install new modules. Hell, I don't have shell access to my web host, so I couldn't install PEAR modules if I wanted to. My only option is to write all the code myself. Granted, it's not a huge problem, so long as nobody ever uses a character set other than UTF-8, but it's still annoying.
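For comparison, here's the sort of helper I have in mind, sketched in Python, where the entity escaping comes in the standard library (a real version would also have to deal with attributes and encoding declarations):

```python
from xml.sax.saxutils import escape

# Build a well-formed element with the text content entity-escaped.
# (Deliberately minimal: no attributes, no encoding handling.)

def make_element(tag: str, text: str) -> str:
    return "<%s>%s</%s>" % (tag, escape(text), tag)

print(make_element("subject", 'Fish & <chips> are "great"'))
# -> <subject>Fish &amp; &lt;chips&gt; are "great"</subject>
```

Ten lines in a language with batteries included; in PHP 4 you write the escaping yourself and hope you covered every case.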

Maybe tomorrow I can rant about the truly brain-dead reference passing semantics in PHP 4. I had a lovely time with that when I was trying to optimize the plugin system.

Good-bye TrackBack Spam

Today, I happened across an interesting paper on TrackBack spam called Taking TrackBack Back (from Spam), by a team at Rice University. In fact, it was so interesting and sensible, I immediately implemented it on my weblog.

If you have a blog with TrackBack support enabled, you've probably been hit by TrackBack spam. In fact, according to the paper, approximately 98% of all TrackBacks are spam. To me, this is not even remotely surprising, as every single ping I've gotten since I implemented TrackBack in LnBlog has been spam. I've been fighting it with IP blacklisting and content filtering, but it's a losing battle. After implementing Pingback last week, I was seriously considering just disabling TrackBacks on my blogs.

The problem with TrackBack, if you've read anything about it, is that it's completely unauthenticated. To send a TrackBack ping to a blog entry, all you need to do is send an HTTP POST, populated with whatever data you like, to a specific URL. Although it isn't required by the specification, the most obvious (and common) implementation of TrackBack is to simply accept and store the information sent by the client. Needless to say, this leaves you completely vulnerable to spammers.

Pingback is supposed to fix this by virtue of the fact that the server receiving the ping does all the work. The client just sends an XML-RPC request with the URL of the page to ping and the URL of the page that references it. The server is not required to do anything, but it is recommended that it fetch the referring page, check that it links to your site, and extract some information to display, like a title and excerpt.

However, as the Rice University paper points out, there's no requirement in the TrackBack specification that you just take what the client gives you. In fact, the anti-spam measure recommended by the paper is essentially to do what the Pingback spec recommends - fetch the page and see if it links to you. Not only is this compatible with the TrackBack specification, but it is also, according to their information, highly effective.

The beauty of this is that it's so obvious. In fact, when I read it, my first reaction was, "Why didn't I think of that?" Although it's not required, TrackBacks from legitimate blogs will virtually always include a link to your blog. After all, how else will the readers know about your entry? However, this is almost never the case for spam pings. The spammers aren't at all interested in what your blog says - they just want to spray their links all over the web. So if the page doesn't link to your site, you can be pretty sure it's spam. And if the page does link to my site - well, at least it's boosting my Google Page Rank.
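The check itself is almost embarrassingly simple. Here's the core idea in Python; a real implementation would first fetch the page over HTTP (e.g. with urllib) and normalize both URLs before comparing, which this sketch skips:

```python
# The Rice paper's anti-spam check, boiled down: a ping is presumptively
# spam if the page at its claimed URL doesn't link back to your blog.

def links_back(page_html: str, my_url: str) -> bool:
    # Crude containment test; normalization and HTML parsing omitted.
    return my_url in page_html

legit = '<p>Great post <a href="http://example.com/blog/42">here</a></p>'
spam = '<p>Buy cheap pills <a href="http://pills.example/">now</a>!</p>'

assert links_back(legit, "http://example.com/blog/42")
assert not links_back(spam, "http://example.com/blog/42")
```

That's the whole trick: the spammer's page never links to you, so the ping gets rejected before it ever touches your comment data.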

Opera and Akregator

Yay! I can finally do it! I can finally use Opera and Akregator together! Well, at least to a certain extent.

Yesterday I discovered this blog entry by zeroK on this very subject. The basic concept is so simple it's brilliant: define a custom protocol. Opera allows you to modify the handler programs for protocols like mail, telnet, etc. Well, the solution is to simply define a feed:// protocol and set the handler to your RSS aggregator.

Unfortunately, there's really no such thing as a feed:// protocol, so you need some JavaScript. For feeds linked in the page header, the solution was to use the modified bookmarklet that extracts the links and pops up a list with feed:// substituted for http://.
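The logic of that bookmarklet, sketched in Python for clarity (the regex is deliberately crude and the sample markup is invented):

```python
import re

# Find the RSS <link> elements in a page's head and swap http:// for
# feed:// so Opera hands them off to the configured protocol handler.

def feed_links(html: str):
    hrefs = re.findall(
        r'<link[^>]*type="application/rss\+xml"[^>]*href="([^"]+)"', html)
    return [h.replace("http://", "feed://", 1) for h in hrefs]

head = '<link rel="alternate" type="application/rss+xml" href="http://example.com/rss">'
print(feed_links(head))  # ['feed://example.com/rss']
```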

As for the handler application, I banged out a little shell script using DCOP calls and KDialog to add a feed to a selected group. I didn't use the Akregator command line options because they don't seem to work when you're embedding Akregator in Kontact.

The only problem with this is that it doesn't work with Opera's built-in RSS icon. Changing the protocol on the linked RSS feeds with a user JavaScript just seems to make them stop working altogether.

Hopefully Opera will eventually add a setting to configure an external feed reader. While I love Opera as a web browser, I never really cared for the mail client. And since the RSS reader is based on the mail client, I don't like that either. In fact, not only is the feed reader based on a mail client I don't like, but it seems to work more like a mail client than an RSS aggregator. I tried it out again the other day and I really hate it. I'd much rather have something with the three-panel layout like Akregator or SharpReader, so I don't think I'm going to be switching any time soon.

But, at any rate, at least I'm making progress in this department.

Licensing sucks

You know what the best part about free software is? Not having to worry about licenses. Oh, I know you still have to think about them when you're modifying or redistributing the program, but if you're simply using it, there are absolutely no worries.

In the commercial world, this is not the case, as I was reminded the other day.

You see, I'm the poor schlep in the IT department that got stuck taking care of the AutoCAD licenses for our engineering department. We have a subscription to a couple of AutoCAD products, which includes support and version upgrades. Every year, something goes wrong with our AutoCAD subscription. At first, it was the fact that we had four seats of AutoCAD on three separate subscription contracts. Then we got them partially consolidated onto two contracts, and then finally down to a single one.

This week, it was a license transfer from one of our other departments to engineering. In addition to the transfer, we needed to add subscription support, because the other department was no longer using AutoCAD and had let it lapse. I didn't handle the paperwork for this, so I called up to see if everything had gone through. Well, this time, they messed up by quoting us a new contract rather than adding the new license to our existing subscription contract. They also didn't process our order until I called - 2 months after the purchase order was written. What's up with that? They don't want our thousand dollars?

Of course, I've been referring to the people involved as just "they" so far. There are actually three companies involved in our AutoCAD dealings. First, there is Autodesk, who owns the software and apparently never deals with customers directly. Beneath them is DLT Solutions, who are apparently the exclusive government reseller (at least in our area). And below them is our local reseller.

The problem seems to lie primarily with DLT. Our local reseller is very good. They're friendly, knowledgeable, and know how all the licensing works. It's DLT who keeps adding a new support contract for every purchase (keeping track of multiple contracts is a big and unnecessary pain) and who didn't bother to process our order for two months. Unfortunately, since they're the government reseller, it's not like we can just go to somebody else. Not unless we want to pay retail, which is 20% more taxpayer dollars.

Of course, free software wouldn't save me from the horrors of dealing with support contracts. However, it can and does save me from the horror of begging for money for half-way decent tools. It also saves me from having to worry about whether or not I'm going to get us audited by the BSA.

Another nice thing about free software is that, if the users like it, I don't have to hear complaints about too few licenses. In particular, I'm thinking of this one guy in engineering (he fancies himself the local "computer guru") who is constantly asking me to install proprietary software on multiple PCs. He always brings up that the licenses for several programs he used to use said that you could install them on any number of PCs, so long as you only used one copy at a time. Of course, that scheme still exists - in the form of software that requires a license server. But when it comes to things like Paint Shop Pro, that's not exactly the typical scenario. And let's face it - acting based on what the license for an old piece of software used to say isn't exactly a sound legal strategy. Especially when you haven't actually read the license to the software you're using.

Random Kubuntu complaints

It's been a month since I posted anything, so here's my random list of complaints about Kubuntu. Some of them are probably hardware related, but I'll post them anyway.

  1. Adept is great - except for proprietary software. For example, the install scripts for vmware-player and the Sun JDK both require you to accept the license agreement. However, Adept's terminal emulator doesn't seem to accept input. I can see the license agreement (or at least part of it), but I can't actually hit OK to agree to it. The only way to do it is to forget Adept and install using apt-get from the command line.
  2. Why does my CD burner not exist when I boot up after turning off the PC? The drive has power and everything, but /dev/hdc just doesn't get created. But if I reboot, then when the box comes back up, /dev/hdc is there and functioning normally. What's up with that?
  3. Sorry, but the default desktop settings suck. Call me a heretic if you want, but I would rather the default settings be more like Windows. Yeah, I can and do change them, but it's just a pain. Maybe it would be better if they did what Xandros does and just ran the settings wizard on the user's first login. Or something.
  4. The kernel is buggy. Unplugging my cell phone from the USB/serial data cable crashes something in the kernel's USB subsystem. I'm not sure what, but the exact error message from the log includes the line "kernel BUG at kernel/workqueue.c:109!" so it's definitely a kernel bug. And whatever it is keeps my phone or USB thumb drives from working until I reboot.
  5. Probably related to the above, shutdown sometimes hangs at stopping Bluetooth services. Note that I don't actually own any Bluetooth devices, so I'm not sure what's happening there.