Net neutrality

I just saw something I've never seen before: a TV commercial attacking net neutrality. Apparently they've started appealing directly to voters now.

I must admit to having some degree of ambivalence about net neutrality. Let's face it: it's just one set of big companies fighting another. Who should pay for the internet: Google or Time Warner? Does it really matter to me which one it is?

Not that I have no opinions on the issue. I'm definitely not a fan of the "tiered" service concept. Discriminating against certain kinds of data doesn't appeal to me either. When you take the money out of the calculation, none of this is good for the end user.

On the other hand, you really can't take the money out of the calculation. And it's not like the service providers don't have a point. Somebody has to pay for the cost of providing bandwidth, and a non-neutral scheme might very well result in lower overall costs and/or lower costs for end-users. At least, that's the claim. I don't claim to know enough about the business to evaluate its truth.

I think the only thing I'm really sure of in this discussion is that getting the government involved is a bad idea. In fact, as a public servant, I take it as a general rule that getting the government involved is nearly always a bad idea. And what with the DMCA and software patents, it's not like the US government has the best track record on technical issues.

So for now, I'm more inclined to let the market decide this issue. Who knows, the non-neutral net might not even be really feasible. We can only hope....

Playing with UnixWare

I took yesterday off work and, naturally, I had a weird helpdesk request on my desk when I got back. One of our outlying locations needed help with a UNIX-based server that had gone down. This was weird because I didn't think we had any UNIX-based systems.

Up until now, I thought the only UNIX-based system we had at work was part of our financial system. At least, I think we have one there. I'm not allowed to touch those servers. And I don't want to touch them, because in our organization, as soon as you touch a system, whether it's hardware or software, you automatically become responsible for maintaining it for the rest of your life. All I know is that I've seen KSH scripts sitting on the printer a couple of times, and the financial system is the only place they could be coming from.

But it turns out that this other department has a security system that uses a UNIX box. But not a good UNIX. It's a really old version of SCO UnixWare. The only other UNIX that old that I've ever used was the Digital UNIX server I dialed into in college to check my e-mail.

And when I say old, I mean positively primitive. No bash, no less (!), no grep -r, and a really old X server running Mwm. Basically, a classic example of the bad old days of UNIX.

Of course, in fairness to SCO (assuming they deserve any fairness), the computer was at least 7 years old and had never been updated. The UnixWare version was probably 8 to 10 years old. Linux wasn't that great when I got into it 6 years ago. I can only imagine what RedHat and SuSE were like 7 or 8 years ago. Although I can't imagine they were any worse to use than this.

In the end, it doesn't really matter how bad the OS was. They're replacing the security system with a Windows-based one used in other departments, so I only had to get the UnixWare box to limp along long enough for the new system to come in. And I didn't really even have to mess with the software, since the problem was with the hardware.

Breaking out DOSBox

I've been indulging my fetish for "vintage" (i.e. really old) games again. This time it's with my 10-year-old copy of Wing Commander: Privateer.

I really loved the Wing Commander series, and Privateer was always one of my favorites. It's a very free-form game, but it still has a story line. The flexibility it affords you is really enjoyable. You can be an honest trader, a mercenary, or even a pirate if you want to! Plus you get to customize your ship, which is always fun.

After digging out my Privateer CD, which also includes the Righteous Fire mission set and a copy of Strike Commander (another Origin title in the same vein as Wing Commander, but set much closer to the present day), getting it installed and working was pretty easy. The only special thing I had to do was turn off EMS support in my dosbox.conf file and bump up the CPU cycle and frameskip settings. The emulation is basically perfect!
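For anyone trying the same thing, the relevant dosbox.conf settings look something like this (the cycle and frameskip numbers are illustrative; tune them to your hardware):

    [dos]
    ems=false        # Privateer wouldn't run right with EMS emulation on

    [cpu]
    cycles=20000     # raise the emulated CPU speed until the game feels right

    [render]
    frameskip=2      # skip frames if the host machine can't keep up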

Of course, I've long since thrown out the Privateer documentation, so my first problem was remembering how to play the game. Fortunately, I found a handy page that included a PDF version of the manual, along with various other helpful information.

The only thing I really dislike is the controls. See, I don't own a joystick anymore, and apparently DOSBox doesn't really support them that well anyway. If you've ever tried to play a Wing Commander game without a joystick, you will recognize this as a problem. Oh, sure, I can use my gamepad or the keyboard, but without a joystick your movements are jerky and it's hard to keep an enemy ship in your sights. Maybe I'll have to see if I can get a cheap joystick at the local EB Games. Come to think of it, do they even have cheap joysticks anymore? I can't remember the last time I saw one.

MPIO pain

Why is my USB so messed up on Ubuntu? First my cell phone data cable doesn't work properly, and now my MPIO FL100 MP3 player doesn't work right. What's the deal?

Let me rewind and give some background. A few years ago, my parents gave me a 128MB MPIO FL100 MP3 player for my birthday. It's a nice enough player, with a display and decent controls, but it's not Linux-friendly. In other words, it doesn't work as a block device, so you can't just mount it as a USB hard drive. It requires special software.
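For comparison, a player that spoke the standard USB mass storage protocol would show up as an ordinary block device and could be handled like any other drive (the device name here is hypothetical):

    # What you'd do with a mass-storage player -- none of this works on the FL100:
    sudo mount /dev/sda1 /mnt/player
    cp ~/music/*.mp3 /mnt/player/
    sudo umount /mnt/player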

Fortunately, such software exists for Linux. The MPIO project over at SourceForge hosts it. It has both a command-line application (similar to an FTP client, for some reason) and a simple KDE-based front end. Of course, this project is now almost completely dead save for a post to the mailing list every month or two, so any hope of bug fixes or new features seems unfounded, but the software still works.

Or, it did until recently, when I noticed that this player no longer works properly under Dapper. I'm not sure exactly when the problem started - if it was the upgrade to Dapper or some upgrade since then. Either way, things are definitely no longer normal.

I'm now having trouble connecting to the device. I experimented this evening, and it seems I need to be root in order to connect to it. When connecting as a normal user, which used to work perfectly, I now get a message that the device cannot be found. Connecting as root, however, seems fine. The first time, at least. If I disconnect the software and try again, I get a message like this:
mpio: src/io.c(766): mpio_io_read: libusb returned error: (ffffff92) "No error"
mpio: src/io.c(816): mpio_io_version_read: Failed to read Sector.(nread=0xffffff92)
mpio: src/mpio.c(409): mpio_init: Unknown version string found!
Please report this to: mpio-devel@lists.sourceforge.net
mpio: src/mpio.c(410): mpio_init: data=0x8052820 len=64
mpio: 0000: 00 00 00 00 00 00 00 00 00 00 00 00 20 20 20 20 ............
mpio: 0010: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
mpio: 0020: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
mpio: 0030: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
mpio: src/mpio.c(171): mpio_init_internal: WARNING: no internal memory found
It seems to work again if I physically disconnect and reconnect the device.

This is growing to be a pain. At this point, I guess the main question is: how much effort do I want to put into fixing support for a now-obsolete device? I'm guessing the answer is going to be "not that much."

Windows paths in PHP

Time for the PHP annoyance of the day: includes on Windows. PHP 4 has a nasty bug in the way it handles the require_once() and include_once() functions on Windows, and I got bitten by it today.

If you don't know PHP, there are four functions to include code from other files: include(), require(), include_once(), and require_once(). The include() function works much like #include in C: it just dumps the contents of the given file into the current one. The require() function does the same thing, but aborts the script if the file cannot be included for some reason.

Now, the *_once() varieties have a handy extra feature: if the given file has already been included, then they won't include it again. This is nice because it keeps you from needing to worry about errors caused by re-including the same function or class library. The only problem with these functions is that they don't work correctly on Windows.
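A quick, self-contained illustration (the file names are hypothetical):

    <?php
    // lib/utils.php -- defines a helper function
    function slugify($title) {
        return strtolower(str_replace(' ', '-', $title));
    }
    ?>

    <?php
    // main.php
    require_once("lib/utils.php");  // included
    require_once("lib/utils.php");  // skipped -- it's already on the list
    include("lib/utils.php");       // included again: fatal error, cannot
                                    // redeclare slugify()
    ?>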

This problem came up while I was testing LnBlog on Windows. See, LnBlog stores each blog in a folder located outside the program directory, and the blog URL is actually the folder URL. It makes for a nice URL structure, but it means that the wrapper scripts that generate your pages have to be told where the LnBlog program files are. Well, at some point, while I was messing around with my test blog, I changed the path to the LnBlog directory.

Actually, that's not quite right. I changed the string that represents that path. The path that string referred to was actually correct. It's just that the path I gave was all lower-case, while the path on the filesystem was mixed-case.

It seems PHP 4 doesn't like it when you do that. Apparently it checks whether a file has already been included by storing the full path of every included file in a list and doing a simple search of that list on each subsequent include. So one script would do a require_once("lib/utils.php"), and the file would be resolved relative to the mixed-case current directory. That's fine. Then that script would include another file that did the same thing. Only this second file apparently found utils.php via the include_path, which had the all-lower-case path. Same file, but a different string representing it. Since PHP was apparently doing a simple string comparison to check whether the file had been included, it concluded that these were two different files and included the same one again. Bah!
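Here's a minimal two-file sketch of the failure mode, with hypothetical paths standing in for my real setup:

    <?php
    // wrapper.php, resolved against the mixed-case current directory,
    // C:\Sites\LnBlog (hypothetical). PHP 4 records the included file
    // as "C:\Sites\LnBlog\lib\utils.php".
    require_once("lib/utils.php");
    require_once("pages/showarticle.php");
    ?>

    <?php
    // pages/showarticle.php, which finds the same file through the
    // include_path I had set to the all-lower-case "c:\sites\lnblog".
    // PHP 4 records "c:\sites\lnblog\lib\utils.php", which fails the
    // string comparison against the entry above, so utils.php gets
    // pulled in a second time and dies with "cannot redeclare" errors.
    // PHP 5 resolves paths before comparing, which is presumably why
    // it's fixed there.
    require_once("lib/utils.php");
    ?>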

Something of an esoteric bug, but still a pain. Although, to be fair, this is fixed in PHP 5. Not that it matters, because my target audience is still made up of people who don't necessarily have PHP 5.

To be honest, I'm getting a little sick of PHP. I was playing with Python again last week, and PHP is just painful by comparison. I'm starting to agree that it really is the "Visual Basic of the web." The only thing going against that impression is the fact that PHP treats Windows as a second-class citizen.

Don't encourage compiling

As a programmer, I appreciate how great it is to be able to read and modify the source code for the software I use. Really I do. On more than one occasion, I've even availed myself of the source to fix bugs or implement features in software that I use. As a good member of the FOSS community, I naturally sent these patches to the program maintainers. It's a great system.

However, I think the fact that source is available causes its importance to be overplayed by people in all parts of the community. I think things would be a lot better for everyone if we de-emphasized the idea of users building their software from source.

Now, I'm not saying that source code shouldn't be easily available. By all means, keep that link to the source tarball in a prominent location on the download page. Anyone who wants the code should be able to find it with no trouble.

What I am saying is that we should try to keep end users away from compiling from source unless they really need to do so.

Some people will say that all users should know how to compile things from source. They'll say it's easy and it's a good learning experience. They'll also say it's convenient, in that it will work on any variety of UNIX. They're also wrong on all counts.

First, I've seen quite a number of users on web forums who, though they apparently build programs with some regularity, haven't learned a damned thing from it. You know what they've learned? "Type ./configure, make, and make install as root." That's not learning, that's mimicry. In fact, I've seen forum postings where users couldn't figure out why ./configure was returning a "file not found" error on a package which I knew for a fact didn't use autotools. That's no better than the Windows monkey who sits there reading instructions on where to click.

Building things from source can be dead simple. But if you don't have everything you need, it can be a huge pain. Many users are simply ill-equipped to deal with the problems that might come up, from missing libraries to missing header files to an inappropriate build environment. The simple truth is that no system is guaranteed to have everything needed to build every program. So when the error messages start piling up, what do the users think? "Why can't I just run a setup.exe?"

And did I mention that managing programs installed from source is a pain? Not only is there not necessarily any easy way to uninstall such programs, but the simple fact that they won't be registered with your package manager can plunge you into dependency hell. The easy solution is, of course, to compile from source and build your own RPM or DEB or whatever. But doing that right isn't trivial and doing it half-assed is still a pain.
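If you do go that route, a tool like checkinstall (assuming your distribution ships it) takes most of the pain out of it: it runs the install step for you and wraps the result in a real package:

    # Instead of a bare "make install", which your package manager never sees:
    ./configure
    make
    sudo checkinstall   # builds and installs a .deb (or .rpm, or .tgz) so the
                        # package manager can track it and cleanly remove it later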

And what benefit is there to compiling something from source? Well, if there's no binary available, then the answer is obvious. Likewise if you need to apply a patch or enable a non-standard compile-time option of some kind. But where there is already an acceptable binary available, what's the benefit?

Really, there isn't any benefit. If you can use a binary package, then building from source is a waste of time. Despite what some people like to claim, compiling from source will not make the program run faster. Oh, I know you can set compiler flags to optimize for your hardware, but trust me: you won't see the difference. There are some types of applications where the right processor-specific optimizations can make a significant difference, but for most desktop applications, they just don't. Any speed gains you do get are typically too small to notice.

My recommendation is that users should build from source only as a last resort. If there's a package for your distribution, use that. If there's no package for your distribution, but there is for another (e.g. you're using Ubuntu, but there are only Fedora RPMs), try converting it with alien or some similar tool. If that fails, then you can think about building from source. Going straight to the source is time-consuming and just makes things more complicated than they need to be.
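The conversion step is a one-liner with alien; a sketch, with a made-up package name:

    # On Ubuntu, convert a Fedora RPM to a .deb and install it:
    sudo alien --to-deb somepackage-1.0-1.i386.rpm
    sudo dpkg -i somepackage_*.deb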

Travel log

Happy (slightly belated) birthday to me! I officially turned 0x1d years old yesterday. If you don't understand what that means, then go look up hexadecimal. It builds character, as my Sarah always says.

I've been trying to keep up with the blogging more lately, but I've been on vacation most of the past week, so I just didn't feel like it. Sarah and I left for Washington, DC on Thursday, got back late Sunday, and took Monday to recuperate. I actually went to work Tuesday (because I was trying to conserve vacation days), had a wretched day, and took my birthday to recover.

We had a nice trip down to DC on Thursday. The six-hour drive was made somewhat more enjoyable by my recent purchase of an MP3onchannel from New Egg. It's basically just a car radio adapter for MP3 players and other audio devices. You plug the headphone jack of your MP3/CD/whatever player into the device, set your car radio to a particular FM channel, and you get the audio on your car stereo system. Much handier than trying to share headphones (which isn't really safe when you're driving anyway). In addition to an analog audio input jack, this particular device also has a USB port with an integrated MP3 player, so you can just stick a USB thumb drive loaded with music into it and hit the play button on the device. A nice, cheap way to increase your music capacity.

Even without music, the drive down US-15 through rural Pennsylvania was actually very pleasant. It was clear and sunny and the scenery was breathtaking in several places.

My only complaint was the disproportionate number of flea markets and porn shops along the road. And I mean really disproportionate. In one place, I actually saw two porn shops within a mile of each other. Over the whole length of the trip, we probably passed a dozen adult video and novelty stores on US-15, most of them really seedy-looking. We even passed one called the "Adult Gift Shoppe." Both Sarah and I agreed that there should be a law against using the archaic spelling of "shop" for porn stores.

We arrived at the Omni Shoreham in Washington at around 7:00 PM on Thursday, despite the fact that it's a 6 hour drive and we left at 11:00 AM. I blame MapQuest. I don't know why I still use their site. The directions are only right about half the time.

Actually, that's not really fair. The directions are usually pretty good, and as far as highways and major thoroughfares go, they're almost always right. The problem is with the details. In this case, MapQuest got us into Chevy Chase, MD (and yes, that's the town the actor is named after, not the other way around), but sent us in the wrong direction. We eventually ended up in White Flint, where we stopped at the Borders Books in the mall and consulted a few maps to get our bearings. On the up side, getting to the hotel from the mall was easy, so we ended up coming back to the mall on Friday night to eat at the Cheesecake Factory. (Yes, the Cheesecake Factory sells actual dinners, not just cheesecake. Although I didn't know that until we got there.)

On Friday morning, we went to the National Zoo for what was probably a once-in-a-lifetime experience: we saw Tai-Shan, the baby giant panda. This was a special treat for me because:

  1. I love zoos. If I lived closer to a zoo, I would literally visit it every weekend.
  2. I love giant pandas, too.
  3. I've never seen a giant panda in person.
  4. Giant pandas are very rare and rarely breed in captivity, so you hardly ever see the babies outside of China.

Of course, the rest of the zoo was very enjoyable as well. They had some very interesting features, including a series of towers with cables strung between them, which the orangutans used to get from one habitat to the other. They also had free-range tamarins. Apparently they just let them run around in the trees. We didn't actually see any of them, but the concept was pretty neat.

Saturday was museum day. We went to the natural history museum and the museum of American history. They were both quite interesting. The American history museum was featuring a small Jim Henson exhibit, so we got to see some of the original Muppets, including a couple of the ones from The Dark Crystal, which still weirds me out nearly as much as it did when I first saw it as a kid over 20 years ago.

They also had a neat Information Age exhibit, featuring a number of really, really old computers. I was taken in by one that had buttons for memory addresses and various other CPU instructions right on the operator's console. I find it amazing that we've come so far, and yet, in many ways, we're still in the dark ages of computing.

We wrapped up our trip on Sunday, because the hotel prices doubled on Sunday night. In the morning, we went to the Freer and Sackler galleries to see the Asian art exhibit. They had beautiful collections, including some very lovely Japanese prints.

In the afternoon, we finished our tour off with a visit to the International Spy Museum. That was really cool, and much larger than I had expected. They actually had displays of real spy equipment, such as cigarette-box cameras, cyanide capsules, and assassination weapons. On a computing-related note, they even had a one-time pad - the paper variety. Very cool, but decrypting a message using a paper key must have been kind of a pain.

My birthday relaxation yesterday consisted of a trip to Ithaca with my mother and brother. We had lunch at the Moosewood Restaurant and then went shopping on the Commons. I love browsing used book stores, and Ithaca has quite a few of them. I ended up getting the second volume of a treatise on Buddhist Logic, a retelling of the Ramayana, and a small volume of collected works by Erasmus.

The only thing I don't like about Ithaca is that it's a little too pretentious. I guess that's not unexpected, what with it being the home of Cornell University, but does all the graffiti really have to be leftist political slogans? Can't the kids at Cornell just write the name of their favorite bands on the bathroom walls like everywhere else? I have no problem with feminism or pacifism, but when I see slogans advocating them drawn into the concrete on the sidewalk, it just seems...a little off.

Online applications won't work

Periodically, you see stories making the rounds about how online applications are going to completely supplant traditional desktop applications. In fact, these ideas have recently been extended to encompass the entire desktop, with the rise of web-based "operating systems."

It sounds great, doesn't it? All your data and all your applications would be available from a central server. You could use any computer, anywhere in the world, to access your personal, customized desktop, and it would always be exactly the same.

However, over the last month or so, and this week in particular, I've experienced the perfect proof that such ideas are, well, overrated. That proof is an internet outage.

Yes, Time Warner's Road Runner cable internet service has been very unreliable the last month or so. It's normally pretty good, but I've been experiencing frequent outages, usually for several hours at a time.

With wide broadband availability, many of us have started to take high-speed, always-on connections for granted. Putting your entire desktop online is great when you know you will always be able to access it. But if everything is online, then when your connection goes down, your computer is completely useless.

The everything online philosophy also seriously limits the usefulness of laptops. I know that may sound shocking to some people, but the truth is that you can't get free WiFi access everywhere. In fact, there are many places where you can't even get paid WiFi access. For instance, my wife sometimes takes the laptop to work on slow days, where they have no web connection (though I'm not sure why) and no wireless access points nearby. On those days, it's nice that OpenOffice and Klickity (her new addiction) are desktop, rather than web, applications.

Not that I have anything against web applications. They're great! It's just that, like everything else in the world of computing, they've been over-hyped. Not everything needs to - or should - be a web application. Not every web application has to use AJAX. Not every program will benefit from using the latest trendy technology. And, finally, one that seems to have finally sunk in: not every application has to incorporate XML in some way, whether it makes sense or not.

Corollary: don't use udev scripts

Quickly following up on yesterday's tip for newbies: the advent of HAL and volume managers means that you no longer have to mess around with udev scripts to accomplish automounting and the like.

See, this is the beauty of HAL. At the system level, all it does is keep an up-to-date list of hardware. That's all. The HAL daemon just tracks when devices are added, removed, or changed and doesn't actually do anything.
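You can watch this happening for yourself, assuming your distribution ships HAL's lshal utility:

    # Dump HAL's current view of your hardware:
    lshal
    # Or watch the list update live as you plug and unplug devices:
    lshal --monitor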

All the actual actions are carried out by client applications, such as the desktop volume manager. The beauty of using a volume manager is that it's just a regular program run by a regular user. That means that you can customize things on a per-user basis without even needing root access. You can even run more than one of them if you really want.

So if you're trying to set up auto-mounting or some form of auto-execution, just use your freakin' volume manager. Don't mess around with udev or hotplug scripts. Don't use old solutions like the autorun program or supermount. Just use the HAL solution. And if the distribution you use for your desktop doesn't support HAL, then I suggest you change distributions. Handling removable drives no longer sucks in the Linux world, and there's no point in suffering when you don't have to.

Newbies: "auto" does not mean auto-mount

Listen up Linux newbies! I'm going to let you in on a little secret: the "auto" and "noauto" options in /etc/fstab don't mean what you think they do.

Why do I bring this up? Because if you frequent forums like LinuxQuestions.org, you've probably seen threads like this one, asking about getting USB thumb drives to auto-mount or not auto-mount. Invariably, somebody suggests adding an "auto" or "noauto" to /etc/fstab.

To an inexperienced user, this might seem like a natural suggestion. "Auto" means auto-mount, so that should do the trick, right? And if you specify "noauto" then the device should not auto-mount. It makes sense and it's easy to do. Perfect!

Unfortunately, the system doesn't work that way. If you read the man page for the mount command, you'll see that the auto and noauto options actually control whether or not the filesystem is mounted when you run the "mount -a" command. This command mounts all filesystems not specified as "noauto" and is typically run on boot to mount your local hard drives and network filesystems.

So, in other words, the "auto" option in /etc/fstab really just means "mount this device on boot." For USB thumb drives, this is pretty useless, as you don't normally have them plugged in when you first boot up. What you want is for the drive to be automatically mounted when you plug it into the system. However, /etc/fstab has absolutely nothing to do with this.
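To make that concrete, here's a hypothetical /etc/fstab fragment showing what the options actually control:

    # device    mount point   type     options         dump pass
    # /home gets mounted at boot by "mount -a" ("auto" is the default):
    /dev/hda2   /home         ext3     defaults        0    2
    # The CD drive is skipped by "mount -a"; you mount it by hand:
    /dev/hdc    /media/cdrom  iso9660  noauto,ro,user  0    0

Neither line has anything to do with what happens when you hotplug a device.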

On modern Linux systems, such as Ubuntu, Fedora, or anything released since 2005 (except for Slackware), automounting of removable drives is accomplished through the magical combination of D-BUS, HAL, and a volume manager (which is often integrated into the desktop environment).

The good thing about this setup is that it's highly flexible. D-BUS is a generic message bus and HAL is a generic hardware abstraction layer, so they can handle most any kind of hardware hotplugging, not just USB drives or CDs. It can also work for printers, digital cameras, MP3 players, or what have you. The down side is that it's a fairly complex software stack with lots of dependencies, so if it's not working out of the box, you're pretty much screwed. Sure, you might be able to get them working eventually, but it will probably take some significant knowledge of the system.

So if you want auto-mounting and it's not working, the news is bad. But if you're just looking to turn off auto-mounting, or the automatic popping-up of a file manager, or something like that, the news is good. In fact, if you're using GNOME or KDE, the news is great, because they both have graphical configuration for the volume manager. In KDE, for example, you can set custom actions by type of media, so you can choose to auto-mount USB drives but not DVDs, or choose to automatically play audio CDs and video DVDs.

So the lesson for today is: don't worry about the auto and noauto options in /etc/fstab. They don't do anything interesting. And if you're trying to get your USB or other removable media to mount when you insert them, they don't do anything at all.

And, lastly, if you see somebody spreading this myth of "auto" and "noauto" around, call them on it. Correct them and point them to this page. And, if you're a real jack-ass, laugh and mock them for their lack of skillz.

Ignoring the GPL

It seems that the MEPIS people have finally decided to comply with the GNU GPL. You may remember this issue from the Newsforge story a month ago. Basically, the SimplyMEPIS Linux distribution, which is a derivative of Ubuntu (and was formerly a derivative of Debian), got in trouble with the FSF for not complying with the terms of the GNU GPL. While they were providing the source code for the Ubuntu packages they modified, they were not providing code for the packages they copied unmodified from Ubuntu. Apparently they figured that as long as the source is "out there," that was good enough.

However, it doesn't work that way. The fact that the source is "out there" is not enough to satisfy the terms of the GPL, and it never has been. And if they'd bothered to read and understand the license, they would have known that. The GNU GPL clearly states that if you distribute binaries of GPL-licensed software, you must either include a copy of the corresponding source, whether in the same box or from the same web site, or include a written offer to distribute the source on demand. There's nothing in there that says, or even suggests, that this only applies if you make changes to the code.

The main argument from MEPIS and others seems to be that this provision of the GPL is onerous and stifles innovation. Managing all that extra source code implies a lot more work, and hosting and distribution for that code implies more expense. The idea seems to be that since the code is "out there," it's not reasonable to force all this duplicate effort on small-time Linux distributors. Why, it just might be enough to discourage many from even building their own distribution! In fact, it's even a burden on people giving out Linux CDs to their friends, since they are technically required to give out the source too! And, really, who even cares if they have the source? Just those communist GPL police!

Of course, to be honest, they have a point. Distributing source and binaries is definitely harder than distributing just the binaries. Likewise, hosting and CD replication for both source and binaries is more expensive than for just binaries. I'm sure there are some people who would be put off starting a distribution because of this.

But on the other hand, so what? The stated purpose of the GNU GPL is to make sure that anyone who receives a program licensed under it is able to change and share the program, not to make things easy for Linux distributors. Requiring that the person who distributes the binaries also distribute the source is the simplest way to accomplish this. Sure, it's more trouble for the distributor, but using the "the source is out there" approach would leave the user open to a never-ending game of "pass the buck." He'd be entitled to a copy of the source, but no particular person would be obligated to actually give him a copy of it. And if that happened, then the program would be free software in name only.

I just find the whole line of reasoning put forth by MEPIS really annoying. Maybe complying with the GPL would be burdensome. Maybe their users really don't care about the source. That's not the point. The point is that they don't get to make that decision. The GPL gives you license to redistribute the software it covers under a specific set of conditions. You can abide by those terms or you can violate them and infringe on the owner's copyright. Just don't try to argue that you ought to be able to pick and choose which terms you want to follow. It doesn't work like that. A proprietary software company certainly wouldn't put up with such nonsense and I don't see any reason why the owner of a free software package should be expected to.