I spent some time looking through some old links the other day. I imported all my bookmarks into Lnto (which I really need to release one of these days) and I was browsing through some of the ones that I've had hanging around forever. Some of them date back to when I was in college.
Turns out quite a few of them were dead. Some of these were not unexpected. There were a few links to cjb.net, members.tripod.com, and suchlike sites that are now defunct. There were also several links to university web pages, many presumably belonging to students who have long since graduated.
Several of them were also domains that had changed hands. Most were parked and covered with ads. One was an anime fan site that now redirects to the official site of the distributor.
The most interesting one was a Final Fantasy fan site that is now an "escort service" site. Out of curiosity, I looked the site up in the Wayback Machine and found that this is actually a fairly recent development. Apparently the fan site was in existence until 2009. In 2010, the archived copies are mostly just empty directory listings. These continue into 2011, and then there's one copy that appears to be a broken and/or spammy blog. There are no archived pages from 2012, and then in 2013, there's a GoDaddy parked domain page in June, followed by the escort service site in July.
It's strange how the web works. Despite the talk about how digital content lasts forever and how it's virtually impossible to completely delete anything you put online, the truth is that content on the web is surprisingly ephemeral. Sites regularly disappear with no explanation; content gets modified with no indication whatsoever to readers; sites get reorganized, breaking every external link and just redirecting them to the front page. It's a wonder people manage to find anything at all!
This has been on my mind anyway, since I've been meaning to get back to refactoring LnBlog (which is a topic for another post). As part of that, I was going to work on a nicer URL structure. That piece is easy, but I'm committed to keeping all the old links valid. That's less easy, but not unmanageable. (It's actually further complicated by the fact that I'm considering moving off of subdomains, but that's a separate issue.)
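To make that concrete, the usual trick for keeping old links alive is a thin redirect shim that maps the legacy URLs onto the new ones. Here's a minimal sketch in PHP - the URL patterns are made up for illustration and aren't LnBlog's actual scheme:

```php
<?php
// legacy-redirect.php - a minimal sketch of a 301 redirect shim for old
// links. The URL patterns here are made up for illustration; they are
// not LnBlog's actual scheme.

$entry = isset($_GET['entry']) ? $_GET['entry'] : '';

// Map a legacy query-string URL like /index.php?entry=2013-07-20-opera
// to a new "pretty" URL like /blog/2013/07/20/opera/.
if (preg_match('/^(\d{4})-(\d{2})-(\d{2})-([\w-]+)$/', $entry, $m)) {
    $target = "/blog/{$m[1]}/{$m[2]}/{$m[3]}/{$m[4]}/";
    header('HTTP/1.1 301 Moved Permanently'); // permanent, so crawlers update their links
    header('Location: ' . $target);
    exit;
}

// Anything unrecognized gets a real 404 instead of a silent redirect
// to the front page (the pattern I was just complaining about).
header('HTTP/1.1 404 Not Found');
echo 'Page not found.';
```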
The thing is, I've owned this domain for nearly ten years and URLs are something I never really put a great deal of thought into. But it seems obvious that I need to start thinking seriously about the best way to manage them. I want the content on my site to have true permalinks - I want the college kids who bookmark a blog entry today to still be able to visit that link when their kids are in college.
This will require some planning and future-proofing. And I'm not just talking about the URLs themselves - those are the easy part - but also about conventions for different types of content, what constitutes "permanent" content, and how I'm going to maintain all this stuff across potentially many changes in hosting and underlying technology. If I'm going to have this site until I die (and that is the plan), I'm eventually going to have an awful lot of content, and it would pay to have a plan for how to deal with that.
So a couple of weeks ago I bought myself a new Ultrabook - a Lenovo IdeaPad U310. I've been used to using a MacBook Pro for work and I wanted something with a similar form-factor, but that didn't cost $1300, have an unnecessarily weird keyboard, or run OSX. The IdeaPad U310 was fairly cheap and seemed like it would fit the bill.
For the most part, I'm pretty happy with it so far. I even like Windows 8.1 well enough (after installing Classic Start Menu, that is). There are only two parts that I'm not crazy about:
- The viewing angle of the screen is pretty limited.
- The trackpad is harder to use than on a Mac.
While there's obviously not much to be done about the first complaint, I figured the second one could be remedied in software.
Now, the actual problem here is not with the physical trackpad - that works well enough. It's with the click handling. If you're not familiar with a MacBook trackpad, it has no "buttons" as you customarily see on PC trackpads. Rather, you can just press down (as opposed to "tapping") on the trackpad with one finger to "click" - the pad actually gets depressed a bit and makes an audible clicking sound. To do a right-click (or Control-click in Mac-speak), you just press down with two fingers. It's a little unintuitive at first, but you get used to it and it works pretty well.
The Lenovo IdeaPad, however, doesn't quite do that. It still uses the same "press to click" mechanism, and you can still press with two fingers to double-click. However, it also has two "virtual buttons" at the bottom of the trackpad. There's just a small vertical line, about half an inch long, at the bottom-center of the trackpad. You press on the left of that line to left-click and on the right of it to right-click. Even simpler than the Mac. And, in fact, when using a Mac I'd often wished it had something like that.
Problem is, that doesn't actually work out so well all the time. Because there's no tactile feedback as to whether your finger is over the left-click or right-click button, it's very easy to accidentally click the wrong one. This is especially the case if you're using the laptop on your lap and so end up with your hand coming at the trackpad from one side.
So my solution to this was to try and change the behavior to work like a Mac - that is, make both virtual buttons left-click and use two fingers to right-click. Seems simple enough, but the Synaptics trackpad software that came with the laptop - which is actually quite good and contains a surprising number of options - doesn't support that. The only option it has for the primary buttons is to swap them, which doesn't help.
However, I did find a solution that uses Synaptics in conjunction with X-Mouse Button Control, a little third-party freeware app that lets you remap mouse buttons in various sophisticated ways.
- In the Synaptics settings, I configured the "Two-Finger Click" action to "Middle Click". (It doesn't have to be "Middle Click" in particular, just something other than right-click that X-Mouse can recognize.)
- I installed X-Mouse and configured it to set the "Right Button" action to "Left Click" and the "Middle Button" action to "Right Click".
So far this configuration is working pretty well. It's not perfect - two-finger clicking on the left-click button doesn't trigger a right-click. However, it does prevent those accidental right-clicks, which is what was really bothering me.
Well, the new version of Opera is now in stable release - Opera 15! This is the first version based on WebKit instead of Presto, Opera's in-house rendering engine. After using it for a week or so on OSX, I have to say that I'm both pleased and disappointed.
I'm pleased about the appearance. The new Opera is definitely pretty. The old version wasn't bad, but you can tell that they've had designers putting some time in on the new version. It looks very smooth and polished.
I'm less pleased about everything else. You see, Opera 15 introduces some fairly major UI changes. And by "major UI changes", I mean they've basically scrapped the old UI and started over from scratch. Many of the familiar UI elements have been removed. Here's a quick run-down of the ones I noticed:
- The status bar is gone.
- Ditto for the side panel.
- The bookmark bar is also gone.
- In fact, bookmarks are gone altogether.
- Opera Mail has been moved to a separate application (not that I miss it).
- The RSS reader has disappeared (maybe it went with Opera Mail).
- Opera Unite is MIA.
- Tab stacking has disappeared.
- Page preview in tabs and on tab-hover are gone.
- Numeric shortcuts for Speed Dial items have been removed (i.e. they took away what made it "Speed Dial" in the first place).
- The content blocker is gone (which is just as well - it was kind of half-baked anyway).
And that's just the things I noticed. But the worst part is that those UI features were the only reason I still used Opera. I feel like I'm no longer welcome in Opera land. They've literally removed everything from the browser that I cared about.
And what have they given me in return? Well, WebKit, which is no small thing. But if that's all I wanted, I'd just use Chrome or Safari. It's nice, but it doesn't distinguish Opera from the competition.
So what else is there? Well, Speed Dial has gotten a facelift. In fact, Speed Dial has sort of turned into the Opera 15 version of bookmarks. You can now create folders full of items in your Speed Dial and there's a tool to import your old bookmarks as Speed Dial items. I guess that's nice, but I'm not seeing where the "speed" comes in. It seems like they've just changed bookmarks to take up more screen real estate and be more cumbersome to browse through.
They've also introduced something called Stash, which, as far as I can tell, is also just another version of bookmarks. But instead of tile previews like Speed Dial, it uses really big previews and stacks them vertically. They're billing it as a "read later" feature, but as far as I can tell it's functionally equivalent to a dedicated bookmark folder. I guess that's nice, but I don't really get the point.
And, last and least, there's the new Discover feature. This is basically a whole listing of news items, right in your browser. Yeah, 'cause there aren't already 42,000 tools or services that do that. One that's directly integrated into the browser is just the killer feature to capture the key demographic of people who like to read random news stories and are too stupid to use one of the gajillion other established ways of getting them. Brilliant play, Opera!
Now, I'll grant you - visually, the new Speed Dial, Stash, and even Discover look fantastic. They're very pretty and make for really nice screenshots. However, I just don't see the point. I can imagine some people liking them, but they're just not new or innovative - they've just re-invented the wheel in a way that's more convoluted, but not visibly better.
Overall, I get the feeling that Opera 15 was designed by a graphic designer. Not a UI designer, but a graphic designer. In other words, it was built to be pretty rather than functional. I know I've sometimes had that experience when working with a graphic designer to create a UI - you get mock-ups that look beautiful, but were clearly created with little or no thought to their actual functionality. So you end up with workflows that don't make much sense, or new UI elements with undefined behavior, or some little control that's just "thrown in" but represents new behavior that is non-trivial to implement.
Honestly, at this point I think it's just time to switch to Chrome. I already use it for all my development work anyway, so I might as well use it for my personal stuff too. I had a good run with Opera, but I just don't think the new version is going to meet my needs. Maybe I'll take another look when version 16 comes out.
This weekend I did something I've been meaning to do for a while: I redesigned my website. In fact, unless you're reading this in your RSS aggregator, you probably already noticed. It was about time, too - I'd had the same fractured, dated design for years. There are probably still some kinks to work out, but for the most part I think it looks much cleaner.
This time I decided to do a real site design. As in, not only did I update this blog, but also the other sections of this site and the landing page as well. In fact, this started as a re-do of my landing page and I ended up abstracting that design out and making it a theme for LnBlog. Then I just set this and my other LnBlog blogs (you know, the ones in the header that nobody reads) to use it. Update accomplished!
I figured that, what with being an experienced front-end web developer, it would probably be a good idea to make my site look decent. Lends to the credibility and all. The randomly differing styles of the old pages looked kinda crappy, and the sharp edges, weird colors, and embellishments were not so great. Of course, I'm no graphic designer, but I think this looks a bit cleaner. And at the very least, it's consistent.
This past week I finally decided to migrate to a new bug tracking system for my personal projects. Granted, my projects aren't particularly big (in most cases, I'm probably the only user), so it's not like I'm flooded with bug reports. But in my last two jobs I lived and died by our ticket tracking system and found the use of a good bug tracker to be extremely helpful for my development process.
For the last few years, I've been using Mantis BT for my personal bug tracking needs. Mantis is actually a very capable bug tracker and worked fairly well for my needs. However, Mantis is just a bug tracker. It doesn't have a wiki or much in the way of project planning tools, release management, or anything else, really. It has some basic roadmapping features and a changelog generator, but that's about it. And since I've become used to working with the likes of Jira, Trac, and Phabricator, I've come to want a little more than that.
On a side-note, the other thing about Mantis is that it's a bit cumbersome to work with. The UI is a bit...antiquated, for one thing. In fact, I've heard people refer to it as a disaster. There's a reason the Mantis site only shows screenshots of the mobile app - just look at the issue reporting screen below. The workflow is also a little weird when it comes to things like the changelog and roadmap features. It centers on fixed-in and target versions for individual bugs, which presupposes a more organized type of release planning than I'm looking to do.
So if not Mantis, what to do for an issue tracker? Obviously, the simple answer would be to install one of the popular project management packages, like Trac or Redmine. However, there are a couple of problems with this:
- I run this site on a shared hosting account. A really cheap shared hosting account.
- I'm kinda lazy.
To expand on the "cheap" part, my hosting provider is really targeted more at simple PHP-based sites that are administered through their control panel. So not only do I not have root access to the server, I don't even have shell access - just the control panel and FTP. So anything that requires running interactive commands for setup is out. And while my host does "support" Ruby and Python, in the sense that the interpreters are installed and accessible, it only does so through CGI and has a rather limited set of libraries installed.
This all leads into the second point, i.e. laziness. I could just get a more full-featured hosting provider. However, I've been happy enough with my current one, they're very affordable ($6/month), and frankly, I can't be bothered to go to the effort of moving all my stuff to a new server. I also can't be bothered to figure out the non-trivial steps to set up a semi-supported package like Trac or Redmine on my current host. It just isn't worth the time and energy to me.
So I decided to do a little more research. After some Googling, I ended up settling on The Bug Genie. So far, it seems to be a pretty decent compromise between Mantis and something like Trac. In addition to ticket tracking, it has an integrated wiki module, release and milestone tracking features, and even source control integration.
The initial setup was not as intuitive as I might have hoped. For starters, there was no easy way to migrate my data from Mantis to The Bug Genie. I ended up just migrating the issues themselves, minus comments, by using Mantis's CSV export. A highly sub-optimal solution, to say the least, but it wasn't important enough to me to make a project out of the migration. Second, the permission and user system, while it seems pretty powerful, is a bit more complicated and granular than I need. Lastly, the source control integration was a bit of a pain to set up. The actual configuration wasn't that difficult once I figured out what I needed to do, but I had to go to the project team blog to find the documentation I needed. Honestly, the worst part was the particular format of commit message needed to trigger the integration - it's quite verbose and not at all intuitive.
It's only been about a month of fairly light use, but so far I'm pretty satisfied with The Bug Genie. The UI has its quirks, but is modern and easy to use. It's pretty configurable and well documented, allowing you to customize project pages using the wiki module. Things like the release tracking and VCS integration are nice touches and seem to work quite well. All in all, it pretty much does what I wanted and does it reasonably well. I'm pretty happy with it.
It seems that Microsoft has decided to be a bit nicer about testing old versions of Internet Explorer. I just found out about modern.IE, which is an official Microsoft site with various tools for testing your web pages in IE.
The really nice thing about this is that they now provide virtual machines for multiple platforms. That means that you can now get your IE VM as a VMware image, or a VirtualBox image, or one of a few other options.
When I was using Microsoft's IE testing VMs a couple of years ago, they were only offered in one format - Virtual PC. Sure, if you were running Windows and using Virtual PC, that was great. For everyone else, it was anywhere from a pain in the butt to completely useless. This is a much better arrangement and a welcome change. Nice work, Microsoft!
With the imminent demise of Google Reader, and FeedDemon consequently being end-of-lifed, my next task was clear: find a new RSS aggregator. This was not something I was looking forward to. However, as things turned out, I actually got to solve the problem in more or less the way I'd wanted to for years.
The thing is, I never really liked Google Reader. The only reason I started using it was because I liked FeedDemon and FeedDemon used Google Reader as its back-end sync platform. (And if you've ever tried to use a desktop aggregator across multiple systems, you know that not being able to sync your read items across devices is just painful.) But I seldom used the Reader web app - I didn't think it was especially well done and I always regarded the "social" features as nothing but a waste of screen space. Granted, the "accessible anywhere" aspect of it was nice on occasion, but my overall impression was that the only reason it was so popular was because it was produced by Google.
The other issue with Reader is that I don't trust Google or hosted web services in general. Paranoia? Perhaps. But they remind me of the saying that "If you're not paying for the product, then you are the product." And while I know a lot of people aren't bothered by that, I think that Google already knows far too much about me without my handing them more information on a silver platter. Furthermore, you can't rely upon such services to always be available. Sure, Google is huge and has ridiculous amounts of money, but even the richest company has finite resources. And if a product isn't generating enough revenue, then the producer will eventually kill it, as evidenced by the case of Reader.
What I'd really wanted to do for some time was to host my own RSS syncing service. Of course, there's no standard API for RSS syncing, so my favorite desktop clients wouldn't work. But with FeedDemon going away as well, and having no desktop replacement lined up, I no longer had to worry about that. So I decided to take a chance on a self-hosted web app and gave Tiny Tiny RSS a try. I was very pleasantly surprised.
The installation for TT-RSS is pretty simple. I use a shared hosting account, and though the documentation says that isn't supported, it actually works just fine. The install process for my host consisted of:
- Copy the files to my host.
- Create the database using my host's tool.
- Import the database schema using PHPMyAdmin.
- Edit the config.php file to set the database connection information and a couple of other settings (sketched after this list).
- Use my host's tool to create a cron job to run the feed update script.
- Log in to the administrator account and change the default password.
- Create a new account for myself.
- Import the OPML file that I exported from FeedDemon.
That's it. Note that half of those steps were in the TT-RSS UI. So the installation was pretty much dead-simple.
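For reference, the config step amounted to setting a handful of constants. Here's a trimmed-down sketch of what my config.php looked like - the constant names come from the config.php-dist that shipped with the version I installed (newer releases may differ), and the credentials are obviously placeholders:

```php
<?php
// config.php - trimmed sketch of the TT-RSS settings I actually had to
// touch. Constant names are from the config.php-dist of the version I
// installed; newer releases may differ. Credentials are placeholders.

define('DB_TYPE', 'mysql');       // my host only offers MySQL
define('DB_HOST', 'localhost');
define('DB_USER', 'ttrss_user');
define('DB_NAME', 'ttrss');
define('DB_PASS', 'changeme');
define('DB_PORT', '3306');

// Must match the URL the installation is served from.
define('SELF_URL_PATH', 'http://www.example.com/tt-rss/');

// The cron job from the list above just runs the bundled updater:
//   php /path/to/tt-rss/update.php --feeds --quiet
```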
In the past, I wasn't a fan of web-based RSS readers. However, I have to say that Tiny Tiny RSS actually has a very nice UI. It's a traditional three-pane layout, much as you would find in a desktop app. It's all AJAX driven and works very much like a desktop client. It even has a rich set of keyboard shortcuts and contextual right-click menus.
As for mobile: who cares about the mobile site anyway? There are native Android clients! The official client is available as trial-ware in the Google Play store. And while it's good, I use a fork of it which is available for free through F-Droid. In addition to being free (as in both beer and speech), it has a few extra features which are nice. And while I may be a bit cheap, my main motivation to use the fork was not the price, but rather the fact that the official client isn't in the Amazon app store and I don't want to root my Kindle Fire HD. This was a big deal for me as I've found that lately my RSS-reading habits have changed - rather than using a desktop client, I've been spending most of my RSS reading time using the Google Reader app on my Kindle. The TT-RSS app isn't quite as good as the Reader app, but it's still very good and more than adequate for my needs.
Overall, I'd definitely recommend Tiny Tiny RSS to anyone in need of an RSS reader. The main selling point for me was the self-hosted nature of it, but it's a very strong contender in any evaluation simply on its own merits. In my opinion, it's better than Google Reader and is competitive with NewsBlur, which I also looked at.
Last summer I learned a new acronym, courtesy of Chinmay. He asked for a developer to put some estimates on the outstanding technical tasks needed to make a demo production ready. So I fabricated a few numbers, slapped them on the wiki page, and dropped a note in the project chat room saying that I'd put up some grossly inaccurate estimates. Chinmay replied by thanking me for the SWAG - Sophisticated, Wild-Ass Guess.
The reason I like this acronym so much is that it pretty much sums up all the software estimates I've ever seen. I mean, it's not like there's no methodology to it - you look at the task, break it down into pieces, and apply some heuristic to attach a number of hours to each piece. But really, at the end of the day, it's all just guess work. The "heuristic" usually just comes down to a gut feeling of how long something ought to take based on your knowledge of the system. In other words, a guess.
And it's not like there aren't more organized ways of doing this. We could use COCOMO models to come up with our estimates, for example. The problem is, approaches like that require you to have actual data to use as a basis for comparison, and most teams don't have that. Or they do have it, but the data is useless - corrupt, fabricated, poorly defined, etc.
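For the curious, Basic COCOMO really is just a pair of power laws keyed on code size. Here's a quick sketch - the coefficients are the standard published ones for "organic" projects (Boehm, 1981), quoted from memory, so double-check them before using this for anything real:

```php
<?php
// Basic COCOMO, "organic" mode: effort and schedule as power laws of
// code size. Coefficients are the standard published ones (Boehm, 1981),
// quoted from memory - verify before relying on them.

function cocomo_estimate($kloc)
{
    $effort   = 2.4 * pow($kloc, 1.05);   // person-months
    $schedule = 2.5 * pow($effort, 0.38); // calendar months
    return array('effort' => $effort, 'schedule' => $schedule);
}

// A hypothetical 32 KLOC project:
$est = cocomo_estimate(32);
printf("%.1f person-months over %.1f months\n",
       $est['effort'], $est['schedule']);
```

The formulas are trivial; the hard part, as noted above, is having the historical size and effort data that makes the coefficients mean anything for your team.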
The reason it's hard to get good data on developer productivity is obvious to anyone who has ever worked in a pathological development shop - because it can be dangerous. In a good organization, such data would be used to estimate costs and schedule and to help make better decisions about what projects to do and where to allocate resources. In a pathological organization...not so much.
In a pathological shop, measurement is used against you. If the measurement is number of hours needed to complete a task, then you'll be judged to be incompetent or slacking off if a "simple" task takes you "too long" to complete - where what counts as "simple" and "too long" are determined by a non-technical manager who really has no idea what you actually do. Or perhaps you'll be required to account for all eight hours of your day, down to five-minute increments. Because heaven forbid you should "cheat" the company by taking a bathroom break.
When I think of pathological examples of measurement, I always think about my last job with an online advertising startup. At one staff meeting, our VP of Engineering announced that the new expectation was that engineering would deliver on its estimates with 90% accuracy based on 80% definition from the product manager. So we were required to be 90% sure how long a task was going to take, even though we were only 80% sure exactly what it was we were building. And based on previous experience with the product manager, calling the specs he gave us "80% definition" was really generous - I'd say it was more like 50%.
In that case, the productivity measurement was being used as a tool to force the VP of Engineering out of the company (and it worked - he put in his notice the next week), but the principle applies to anyone: if a measurement can be used against you, even if that use is unfair and out of context, assume it will be. That's a very sad reaction to have, but it's natural if your experience is primarily with unhealthy shops. And while I don't endorse such a paranoid attitude, having been in such a shop more than once in the past I can certainly understand it.
One of these days, I'd really like to work in a shop where performance metrics were used as a real tool in project planning. Even in healthy shops, this sort of thing seems to be uncommon. That seems to be something of a comment on the state of "software engineering" as a discipline. The differentiator between quality and pathological shops seems to be the absence of bad practices as opposed to the presence of good ones. Even in good shops, building software seems to be less engineering and more art or black magic.
Personally, I'm not convinced that that is a necessary or desirable state of affairs. I'd just like some first-hand evidence to support that suspicion.
Two quick humorous items that came up in the last couple of days.
First, some animated GIFs describing moments in the life of a programmer. They're funny because they're true.
Second, a joke that evolved during a two-hour conference call I had on Tuesday:
VP of Engineering: "There are really only two problems in programming: cache invalidation and naming things."
CTO: "And off-by-one errors."
<Laughter slowly builds as people start to get the joke.>
I have a new side project. It wasn't one I was really planning on, but it's the fun kind - the kind that scratches an itch.
You see, I've become a big fan of the Roku set-top box. I have three of them in my house. And in addition to using them for Netflix and Amazon Instant Video, I also use them to stream video and audio from my desktop PC (which has become an ersatz home server). For this I use an application called Roksbox. It allows you to stream videos from any web server. So since I already had IIS running on my PC, all I needed to do was set up a new website, symlink my video directory into the document root, and I was ready to go.
The problem with Roksbox is that it's a bit basic and the configuration is a little clunky. For example, if you want to add thumbnail images and metadata to the video listings, you have two options:
1) Put all the information for every video in a single XML file.
2) Turn on directory listings for your server and use one XML file and thumbnail file per video.
The problem with the first option is that it's ludicrously slow. By which I mean the interface can freeze for upwards of 30 seconds while Roksbox parses the XML file. And the problem with the second method is that you end up littering your filesystem with extra files.
This was particularly the case when I looked at adding thumbnails for a TV series. All I wanted was for each episode to have the season's DVD cover art as a thumbnail image rather than the default image. Unfortunately for me, in order to do that Roksbox wants you to have one image file for each episode. So I either needed a bunch of redundant copies, or a bunch of symlinks, and neither of those options appealed to me.
So I decided to try something - faking the directory listing. That is, I wrote a couple of simple PHP scripts. The first read the contents of a directory and simulated the output of the IIS directory browsing page, while inserting thumbnail entries for each video. So, for example, if there was a video "Episode 1.mp4", the script would add an "Episode 1.jpg" to the listing. The second script worked in conjunction with a rewrite rule for non-existent files. Since "Episode 1.jpg" didn't exist, the request would be sent to the second script, which would parse the request URL and then redirect to an actual, existing thumbnail image. The result was that I had my images for every episode in the season and I didn't have to create any pointless links or files.
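The real code is in the repository linked at the end of this post, but here's a stripped-down sketch of the two scripts. The paths and the "folder.jpg" convention are just illustrative:

```php
<?php
// listing.php - sketch of the faked IIS directory listing. For each
// video it emits the real file plus a phantom ".jpg" entry, so Roksbox
// believes every episode has its own thumbnail. (Simplified from the
// real code; the path is illustrative.)

$dir = 'D:/Media/Videos/Some Series/Season 1';

header('Content-Type: text/html');
echo "<html><body><pre>\n";
foreach (scandir($dir) as $file) {
    if (strtolower(pathinfo($file, PATHINFO_EXTENSION)) !== 'mp4') {
        continue;
    }
    $base = pathinfo($file, PATHINFO_FILENAME);
    // The real video file...
    echo '<a href="' . rawurlencode($file) . '">'
        . htmlspecialchars($file) . "</a>\n";
    // ...plus a thumbnail entry that doesn't actually exist on disk.
    echo '<a href="' . rawurlencode($base) . '.jpg">'
        . htmlspecialchars($base) . ".jpg</a>\n";
}
echo "</pre></body></html>\n";
```

The second script is the rewrite target: an IIS rewrite rule catches requests for .jpg files that don't exist and hands them off here, which redirects them all to the season's one real cover image:

```php
<?php
// thumbnail.php - sketch of the fallback for the phantom thumbnails.
// Every missing episode thumbnail redirects to the season's actual
// cover art. (The "folder.jpg" naming convention is made up here.)

$requested  = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
$season_dir = dirname($requested);

header('Location: ' . $season_dir . '/folder.jpg');
exit;
```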
This worked so well that, naturally, I took it to the next level. I started branching out and added more support for customizing listings, such as hiding and re-ordering directories. Then I started getting really crazy and wrote code to generate listings for entirely fictitious directories in order to create video playlists. I even started writing admin pages to provide a GUI for configuring this stuff, built an SQLite database to save the information, and created an ad hoc application framework to help manage it.
I think the reason this little project really caught my interest is that it's something that I actually use on a daily basis. I've been trying for a while now to find a side-project that I can use for picking up new languages or technologies, but nothing ever seems to stick. I'll play with something for a while, but then lose interest pretty quickly when some other obligation comes up. I think that's because those projects are just excuses - I'm doing them because I need a project in order to learn something new, but I have no particular interest in the project itself. But even though I'm not using anything new for this project, it grabs my attention because it's something that I want for myself. That's something I haven't had in a while, and I really like the feeling.
If anyone is curious, the current code is in my public mercurial repository here. I'm planning to put up a proper release of it when it's a bit more together. For now, I'm just having fun making it work.