<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title><![CDATA[LinLog]]></title>
    <link>https://linlog.skepticats.com/</link>
    <description><![CDATA[Linux, Programming, and Computing in General]]></description>
    <lastBuildDate>Sat, 11 Mar 2023 22:20:35 +0000</lastBuildDate>
    <managingEditor>pageer@skepticats.com (Peter Geer)</managingEditor>
    <language>en-US</language>
    <generator>https://lnblog.skepticats.com/?v=2.3.1</generator>
    <item>
      <title><![CDATA[On GitHub pipelines and diverging branches]]></title>
      <link>https://linlog.skepticats.com/entries/2023/03/on-github-pipelines-and-diverging-branches.php</link>
      <description><![CDATA[<p>Just before Christmas I started a new job.&nbsp; I won't get into the details, but my new team has a different workflow than I'm used to, and the other day I noticed a problem with it.&nbsp; My colleague suggested that the experience might make for good blog-fodder, so here's the break-down.</p>
<p>First, let me start by describing the workflow I was used to at my last job.&nbsp; We had a private GitLab instance and used a forking workflow, so getting a change into production went like this:</p>
<ol>
<li>Developer forks the main repo to their GitLab account.</li>
<li>Developer does their thing and makes a bunch of commits.</li>
<li>Developer opens a merge request from their fork's branch to the main repo's main branch.&nbsp; Code review ensues.</li>
<li>When review is complete, the QA team pulls down the code from the developer's fork and tests in a QA environment.&nbsp; Obviously testing needs differ, but a "QA environment" is generally exactly the same thing as a developer environment (in this case, a set of disposable OpenStack VMs).</li>
<li>When testing is complete, the merge request gets merged and the code will go out in the next release (whenever that is - we didn't do continuous deployment).</li>
<li>Every night, a set of system-level tests runs against a production-like setup that uses the main branches of all the relevant repos.&nbsp; Any failures get investigated by developers and QA the next morning.</li>
</ol>
<p>I'm sure many people would quibble with various parts of this process, and I'm not going to claim there weren't problems, but it worked well enough.&nbsp; But the key feature to note here is the simplicity of the branching setup.&nbsp; It's basically a two-step process: you fork from X and then merge back to X.&nbsp; You might have to pull in new changes along the way, but everything gets reconciled sooner or later.</p>
<p>The new team's process is not like that.&nbsp; We use GitHub, and instead of one main branch, there are three branches to deal with: dev, test, and master, with deployment jobs linked to dev and test.&nbsp; And in this case, the merging only goes in one direction.&nbsp; So a typical workflow would go like this:</p>
<ol>
<li>Developer creates a branch off of master, call it "feature-X".</li>
<li>Developer does their thing and makes a bunch of commits.</li>
<li>Developer opens a pull request from feature-X to dev and code review ensues.</li>
<li>When the pull request is approved, the developer merges it and the dev branch code is automatically deployed to a shared development environment where the developer can test it.&nbsp; (This might not be necessary in all cases, e.g. if local testing is sufficient.)</li>
<li>When the developer is ready to hand the code off to QA, they open a pull request from feature-X to test.&nbsp; Again, review ensues.</li>
<li>When review is done, the pull request gets merged and the test branch code is automatically deployed to test, where QA pokes at it.</li>
<li>When QA is done, the developer opens a pull request from feature-X to master and (drum roll) review ensues.</li>
<li>When the master pull request is approved, the code is merged and is ready to be deployed in the next release, which is a manual (but pretty frequent) process.</li>
</ol>
<p>You might notice something odd here - we're only ever merging <em>to</em> dev and test, never&nbsp;<em>from</em> them.&nbsp; There are occasionally merges from master to those branches, but never the other way around.&nbsp; Now,&nbsp;<em>in theory</em> this should be fine, right?&nbsp; As long as everything gets merged to all three branches in the same order, they'll end up with the same code.&nbsp; Granted, it's three times as many pull requests to review as you really need, but other than that it should work.</p>
<p>Unfortunately, theory rarely matches practice.&nbsp; In fact, the three branches end up diverging - sometimes wildly.&nbsp; On a large team, this is easy to do by accident - Bob and Joe are both working on features, Bob gets his code merged to test first, but testing takes a long time, so Joe's code gets out of QA and into master first.&nbsp; So if there are any conflicts, you have the potential for things like inconsistent resolutions.&nbsp; But in our case, I found a bunch of code that was committed to the dev branch and just never made it out to test or master.&nbsp; In some cases, it even looks like this was intentional.</p>
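<p>Divergence like this is easy to audit from the command line.&nbsp; Here's a sketch (branch names from above; the throwaway repo is just a stand-in for the real remote, where you'd compare origin/master and origin/dev):</p>

```shell
# Demo setup: a throwaway repo where dev has a commit that master lacks.
# In a real checkout you'd run the `git log` lines against origin/dev
# and origin/master instead.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name "Demo"
echo base > file.txt && git add file.txt && git commit -qm "base"
git branch -M master
git checkout -qb dev
echo extra > dev-only.txt && git add dev-only.txt && git commit -qm "dev-only change"
git checkout -q master

# Commits reachable from dev but not from master, i.e. changes that
# were merged to dev and never made it out:
git log --oneline master..dev

# Symmetric view: "<" marks commits only on master, ">" only on dev.
git log --oneline --left-right master...dev
```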
<p>So this creates an obvious procedural issue: the code you test in QA is not necessarily the same as what ends up in production.&nbsp; This may be fine, or it may not - it depends on how the code diverges.&nbsp; But either way it creates risk, because you <em>don't know</em> if the code you're releasing is <em>actually</em> the same as what you validated.</p>
<p>But it gets worse.&nbsp; This also creates issues with the GitHub pipeline, which is where we get to the next part of the story.</p>
<p>Our GitHub pipelines are set up to run on both "push" and "pull_request" actions.&nbsp; We ended up having to do both in order to avoid spurious error reporting from CodeQL, but that's a different story.&nbsp; The key thing to notice here is that, by default, GitHub "pull_request" actions don't run against the source branch of your pull request, they run against&nbsp;<em>a merge of the source and target branches</em>.&nbsp; Which, when you think about it, is probably what you want.&nbsp; That way you can be confident that the merged code will pass your checks.</p>
<p>If you're following closely, the problem may be evident at this point - the original code is based on master, but it needs to be merged to dev and test, which <em>diverge</em> from master.&nbsp; That means that you can get into a situation where a change introduces breakage in code from the target branch that isn't even present in the source branch.&nbsp; This makes it very hard to fix the pipeline.&nbsp; Your only real choice at that point is to make <em>another</em> branch of the target branch, merge your code into that, and then re-create the pull request with the new merged branch.&nbsp; This is annoying and awkward at best.</p>
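<p>In other words, the workaround looks something like this (a sketch with hypothetical branch names; the local repo stands in for the real remote):</p>

```shell
# Demo setup: feature-X was branched from master, but the pull request
# target (dev) has drifted, so a PR from feature-X to dev can break.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch -M master
git checkout -qb dev
echo drift > dev.txt && git add dev.txt && git commit -qm "dev drift"
git checkout -q master && git checkout -qb feature-X
echo feature > feature.txt && git add feature.txt && git commit -qm "feature work"

# The workaround: branch off the *target*, merge the feature into it,
# fix any breakage in the merged result, and open the dev pull request
# from this new branch instead of from feature-X.
git checkout -qb feature-X-dev dev
git merge -q --no-edit feature-X
# ...commit any fixes, push feature-X-dev, and re-create the PR from it.
```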
<p>But it gets worse than <em>that</em>, because it turns out that&nbsp;<em>your pipeline might report success, even if the merge result would be broken</em>!&nbsp; This appears to be a GitHub issue and it can be triggered simply by <em>creating pull requests</em>.&nbsp;&nbsp;</p>
<p>The easiest way to explain is probably by describing the situation I actually ran into.&nbsp; I had a change in my feature-X branch and wanted to go through our normal process, which involves creating three pull requests.&nbsp; But in my case, this was just a pipeline change (specifically, adding PHPStan analysis), so it didn't require any testing in dev or test.&nbsp; Once it was approved, it could be merged immediately.&nbsp; So here's what I did:</p>
<ol>
<li>First, I created a pull request against dev.&nbsp; The "pull_request" pipeline here actually failed, because there was a bunch of code in the dev branch that violated the PHPStan rules and wasn't in master, so I couldn't even fix it.&nbsp; Crud.</li>
<li>After messing around with dev for a while, I decided to come back to that and just create the pull requests for test and master.</li>
<li>So I created the pull request for test.&nbsp; That failed due to drift from master as well.&nbsp; Double crud.</li>
<li>Then I created the pull request for master.&nbsp; That succeeded, as expected, since it was branched from master.&nbsp; So at least one of them was reviewable.</li>
<li>Then I went back and looked at the dev pull request and discovered that&nbsp;<em>the "pull_request" pipeline job now reported as passing</em>!</li>
</ol>
<p>Let me say that more explicitly: the "pull_request" job on my pipeline went from "fail" to "pass"&nbsp;<em>because I created a different pull request for the same branch</em>.&nbsp; There was no code change or additional commits involved.</p>
<p>Needless to say, this is very bad.&nbsp; The point of running the pipeline on a pull request is to verify that it's safe to merge.&nbsp; But if just doing things in the wrong order can change a "fail" to a "pass", that means that I can't trust the results of my GitHub pipeline - which defeats the entire purpose of having it!</p>
<p>As for why this happens, I'm not really certain.&nbsp; But from my testing, it <em>looks</em> like GitHub ties the results of the "pull_request" job to the last commit on the source branch.&nbsp; So when I created the pull request to dev, GitHub checked out a merge of my code and dev, ran the pipeline, and it failed.&nbsp; It then stored that as part of the results for the last commit on the branch.&nbsp; Then I created the master pull request.&nbsp; This time GitHub ran the pipeline jobs against a merge of my code with master and the jobs passed.&nbsp; But it still associated that result with the last commit on the branch.&nbsp; Since the commit and branch are the same for both pull requests, this success clobbers the failure on the dev pull request and they both report a "pass".&nbsp; (And in case you're wondering, re-running the failed job doesn't help - it just re-runs against whatever branch it tested last, so the result doesn't change.)</p>
<p>The good news is that this only seems to affect pull requests with the same source branch.&nbsp; If you create a new branch with the same commits and use that for one pull request and the original for the other, they don't seem to step on each other.&nbsp; In my case, I actually had to do that anyway to resolve the pipeline failures.</p>
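<p>Concretely, the trick is just to give each pull request its own head branch (a sketch; branch names are made up):</p>

```shell
# Demo: two branch names pointing at the same commits, so each pull
# request gets its own head and GitHub can't conflate their check runs.
set -e
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email demo@example.com && git config user.name "Demo"
echo base > base.txt && git add base.txt && git commit -qm "base"
git branch -M master
git checkout -qb feature-X
echo feature > f.txt && git add f.txt && git commit -qm "feature work"

# Same commits, second name: open the dev PR from feature-X-dev and
# keep feature-X for the master PR (push both to the remote first).
git branch feature-X-dev feature-X
```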
<p>So what's the bottom line?&nbsp; Don't manage your Git branches like this!&nbsp; There are any number of valid approaches to branch management, but this one just doesn't work well.&nbsp; It introduces extra work in the form of extra pull requests and merge issues; it actually <em>creates</em> risk by allowing divergence between what's tested and what's released; and it just really doesn't work properly with GitHub.&nbsp; So find a different approach that works for you - the simpler, the better.&nbsp; And remember that your workflow tools are supposed to make things&nbsp;<em>easier</em>.&nbsp; If you find yourself fighting with them, then you're probably doing something wrong.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 11 Mar 2023 22:20:35 +0000</pubDate>
      <category><![CDATA[Git]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[Tools]]></category>
      <guid isPermaLink="true">https://linlog.skepticats.com/entries/2023/03/on-github-pipelines-and-diverging-branches.php</guid>
      <comments>https://linlog.skepticats.com/entries/2023/03/11_1720/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Broken development environment]]></title>
      <link>https://linlog.skepticats.com/entries/2021/08/broken-development-environment.php</link>
      <description><![CDATA[<p><em><strong>Author's note:</strong> This episode of From The Archives is the stub of an article I wrote on December 10, 2007.&nbsp; At the time, I was working for eBaum's World (this was back around the time ebaum sold it, but before he was forced out).&nbsp; It was kind of a weird time because it was my first experience working for an actual tech company.&nbsp; (It was also a weird place to work, for other reasons, but that's a different story.)&nbsp; Previously, I'd been stuck in the world of public-sector internal IT which is...not great, by comparison.</em></p>
<p><em>This particular post was expressing my annoyance over how our development environment was broken.&nbsp; Our dev environment, at the time, was literally just a shared server that we all pushed code to. And yes, that means we had plenty of opportunity to step all over each other and things could easily break for non-obvious reasons.</em></p>
<p><em>This was obviously terrible, but it's what we had to work with.&nbsp; We had a four-man development and system administration team without much of a budget or many internal resources.&nbsp; And, of course, there was&nbsp;<span style="text-decoration: underline;">always</span> something more important to do than improving our dev environment setup.</em></p>
<p><em>These days, I still have dev environment issues, but for completely different reasons.&nbsp; My company has an entire infrastructure for spinning up development VMs, including dedicated host clusters, custom tooling to set up environments, and teams responsible for managing that stuff.&nbsp; So now I have my own virtual instance of every server I need (which, for reference, is currently about eight VMs).&nbsp; However, there are still some holes in that infrastructure.</em></p>
<p><em>Part of the issue is that we have a&nbsp;<span style="text-decoration: underline;">lot</span> of teams that share or depend on the same infrastructure and/or services.&nbsp; For instance, one of the services that my team maintains is also worked on by at least three other teams.&nbsp; And we all share the same database schema, which doesn't automatically get updated in dev environments when changes are made in production, so when you pull in the develop branch, you frequently get breaking changes that you may not have ever heard of, usually in the form of config or database changes that aren't set in your environment.&nbsp; Sure, everything "just works" if you start fresh, but nobody ever starts fresh because it takes too long to spin up all the required pieces and set up the test data.&nbsp; (However, we are moving toward a Docker/Kubernetes setup for the pieces that don't need test data, so things are moving in the right direction.)</em></p>
<p><em>Not that I can complain too much.&nbsp; Even for my personal projects, which are&nbsp;<span style="text-decoration: underline;">much</span> simpler, my dev environment is frequently in disarray.&nbsp; Things get out of date, permissions or services don't get correctly configured after an update, or my setup is out of sync with the production environment.&nbsp; In fact, I had that problem just the other day - I pushed some code for this blog that worked fine locally, but broke on the live server because it was running a different version of PHP.&nbsp; Granted, the code in my dev environment was&nbsp;<span style="text-decoration: underline;">wrong</span> (and I really should have caught it there), but that's not the point.</em></p>
<p><em>The point is that environment maintenance is hard and dev environment maintenance doubly so, because it changes frequently and is of much lower priority than production.&nbsp; That's something I hadn't really had to worry about in my internal IT job, because I was writing desktop apps and everybody was running identical Windows desktops.&nbsp; It was a simpler time....</em></p>
<hr />
<p>This week's installment of "Pete's list of things that suck" features broken development environments. &nbsp;I spent the last couple of days at work wrestling with one and it really, really sucks.</p>
<p>The thing is, our development environment is...really screwed. &nbsp;Things went really bad during the production server upgrade a couple of months ago and we had to hack the hell out of the code just to keep the site running. &nbsp;As a result, we had all kinds of ugly, server-specific hackery in production which just plain broke when we moved it back into devel. &nbsp;And since, of course, we have a huge backlog of projects that management wants implemented, we've had no time to go back and reconfigure the development servers.</p>
<p>This week, I've been trying to test some changes to our user upload process. &nbsp;However, due to some NFS misconfiguration and some code problems, uploads just plain don't work in our main development environment. &nbsp;We do have a new, additional set of development servers (well, one box with a half-dozen virtual machines, but you get the idea), but I couldn't get those to work either. &nbsp;Part of it was that the configuration on those servers was incomplete, and part of it was that Firefox sucks.&nbsp; <em>(Note from the present: I no longer remember why that was.)</em></p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 21 Aug 2021 22:20:06 +0000</pubDate>
      <category><![CDATA[Programming]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[From the Archives]]></category>
      <guid isPermaLink="true">https://linlog.skepticats.com/entries/2021/08/broken-development-environment.php</guid>
      <comments>https://linlog.skepticats.com/entries/2021/08/21_1820/comments/</comments>
    </item>
    <item>
      <title><![CDATA[PHP documentation and sockets]]></title>
      <link>https://linlog.skepticats.com/entries/2021/06/php-documentation-and-sockets.php</link>
      <description><![CDATA[<p>PHP's documentation gets way too much credit.&nbsp; I often hear people rave about how great it is.&nbsp; Many of them are newbies, but I hear the same thing from experienced developers who've been writing PHP code for years.</p>
<p>Well, they're wrong.&nbsp; PHP's documentation sucks.&nbsp; And if you disagree, you're just plain <em>wrong</em>.</p>
<p>Actually, let me add some nuance to that.&nbsp; It's not that the documentation sucks <em>per se</em>, it's that it sucks <em>as documentation</em>.&nbsp;</p>
<p>You see, a lot of PHP's documentation is written with an eye to beginners.&nbsp; It has lots of examples and it actually does a very good job of showing you what's available and giving you a general idea of how to use it.&nbsp; So in terms of a tutorial on how to use the language, the documentation is actually quite <em>good</em>.</p>
<p>The problem is that, sometimes, you don't need a tutorial.&nbsp; You need <em>actual documentation</em>.&nbsp; By that, I mean that sometimes you care less about the generalities and more about the particulars.&nbsp; For instance, you might want to know <em>exactly</em> what a function returns in specific circumstances, or <em>exactly</em> what the behavior is when you pass a particular argument.&nbsp; Software is about details, and these details <em>matter</em>.&nbsp; However, PHP frequently elides these details in favor of a more tutorial-like format.&nbsp; And while that might pass muster for a rookie developer, it's decidedly <em>not</em> OK from the perspective of a seasoned professional.</p>
<p>Case in point: <a href="https://www.php.net/manual/en/function.socket-read.php">the socket_read() function</a>.&nbsp; I had to deal with this function the other day.&nbsp; The documentation page is rather short and I was less than pleased with what I found on it.&nbsp;</p>
<p>By way of context, I was trying to talk to the OpenVPN management console, which runs on a UNIX domain socket.&nbsp; We had a small class (lifted from another project) that basically provided a nice facade over the socket communication functions.&nbsp; I'd noticed that, for some reason, the socket communication was slow.&nbsp; And I mean <em>really</em> slow.&nbsp; Like, a couple of seconds <em>per call</em> slow.&nbsp; Remember, this is not a network call - this is to a domain socket on the same box.&nbsp; It might not be the <em>fastest</em> way to do <abbr title="Inter-Process Communication">IPC</abbr>, but it should still be reasonably quick.</p>
<p>So I did some experimentation.&nbsp; Nothing fancy - just injecting some <code>microtime()</code> and <code>var_dump()</code> calls to get a general idea of how long things were taking.&nbsp; Turns out that's all I needed.&nbsp; It quickly became obvious that each call to the method that read from the socket was taking about 1 second, which is completely absurd.</p>
<p>For context, the code in that method was doing something like this (simplified for illustration):</p>
<p><code>$timeoutTime = time() + 30;<br />$message = '';<br />while (time() &lt; $timeoutTime) {<br />&nbsp;&nbsp;&nbsp; $character = socket_read($this-&gt;socket, 1);<br />&nbsp;&nbsp;&nbsp; if ($character === '' || $character === false) {<br />&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; break;&nbsp; // We're done reading<br />&nbsp;&nbsp;&nbsp; }<br />&nbsp;&nbsp;&nbsp; $message .= $character;<br />}</code></p>
<p>Looks reasonable, right?&nbsp; After all, the documentation says that <code>socket_read()</code> will return the number of characters requested (in this case one), or false on error, or the empty string if there's no more data.&nbsp; So this seems like it should work just fine.&nbsp;</p>
<p>Well...not so much.</p>
<p>The problem is with the last read.&nbsp; It turns out that the documentation is wrong - <code>socket_read()</code> <em>doesn't</em> return the empty string when there's no more data.&nbsp; In fact, I couldn't get it to return an empty string <em>ever</em>.&nbsp; What actually happens is that it goes along happily until it exhausts the available data, and then it waits for more data.&nbsp; So the last call just hangs until it reaches a timeout that's set on the connection (in our case, it was configured to 1 second) and then returns false.</p>
<p>So because we were relying on that "empty string on empty buffer" behavior to detect the end of input, calling that method <em>always</em> resulted in a one-second hang.&nbsp; This was fairly easily fixed by just reading the data in much larger chunks and checking how much was actually returned to determine if we needed another read call.&nbsp; But that's not the point.&nbsp; The point is that we relied on what was in the documentation, and it was just totally wrong!</p>
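<p>For what it's worth, the fix looked roughly like this (a simplified sketch, not the actual class; the helper name and chunk size are mine, and it leans on the "short read means the buffer is drained" assumption, which held up for this protocol):</p>

```php
<?php
// Read whatever is available in large chunks and stop on a short read,
// instead of reading one byte at a time and waiting for an empty string
// that never comes. Helper name and chunk size are illustrative.
function readAvailable($socket, int $chunkSize = 8192, int $timeoutSeconds = 30): string
{
    $deadline = time() + $timeoutSeconds;
    $message = '';
    while (time() < $deadline) {
        $chunk = socket_read($socket, $chunkSize);
        if ($chunk === false || $chunk === '') {
            break;  // Error or end of stream
        }
        $message .= $chunk;
        if (strlen($chunk) < $chunkSize) {
            break;  // Short read: buffer drained, don't block waiting for more
        }
    }
    return $message;
}
```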
<p>And it's not like this is the first time I've been bitten by the PHP docs.&nbsp; Historically, PHP has been very bad about documenting edge cases.&nbsp; For example, what happens if a particular parameter is null?&nbsp; What's the exact behavior if the parameters do not match the expected preconditions?&nbsp; Or what about that "flags" parameter that a bunch of functions take?&nbsp; Sometimes the available flags are well documented, but sometimes it's just an opaque one-line description that doesn't really tell you what the flag <em>actually does</em>.&nbsp; It's a crap shoot.</p>
<p>To be fair, the PHP documentation is not the worst I've ever seen.&nbsp; Not even close.&nbsp; And it really is very good about providing helpful examples.&nbsp; It's just that it errs on the side of being light on details, and <a title="GOTO 2020 talk by Kevlin Henney" href="https://youtu.be/kX0prJklhUE"><em>software is details</em></a>.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 05 Jun 2021 22:20:24 +0000</pubDate>
      <category><![CDATA[PHP]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <guid isPermaLink="true">https://linlog.skepticats.com/entries/2021/06/php-documentation-and-sockets.php</guid>
      <comments>https://linlog.skepticats.com/entries/2021/06/05_1820/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Conference talks]]></title>
      <link>https://linlog.skepticats.com/entries/2021/02/conference-talks.php</link>
      <description><![CDATA[<p>The other day I decided to go back to something I did when I was still working in the office: putting tech conference talks on in the background while I work.&nbsp; Sometimes it's nice to have a little background noise if I'm not trying to concentrate too deeply on something (what I'm currently working on involves a lot of waiting for CI pipelines to run).&nbsp; Of course, sometimes my concentration level ramps up and I totally tune out the entire talk, but sometimes I don't and it ends up being interesting.</p>
<p>This particular afternoon I was listening to some talks from the <a href="https://youtube.com/playlist?list=PLEx5khR4g7PIiAEHCt6LGMFnzq7JjO8we">GOTOpia Europe 2020</a> virtual conference.&nbsp; This one had an especially good lineup, including talks from <a href="https://youtu.be/0AzkH8SYyOc">Dave Thomas</a>, <a href="https://youtu.be/kX0prJklhUE">Kevlin Henney</a>, and <a href="https://youtu.be/F42A3R28WMU">Allen Holub</a>, who are some of my favorite presenters.&nbsp; <a href="https://youtu.be/__7K_fDqVJs">Cat Swetel</a>, who I'd never seen speak before, also had a very interesting presentation on metrics that I would heartily recommend.</p>
<p>It might not be the same vibe as the live conferences, but it's at least nice to be reminded that the entire world hasn't been canceled due to the pandemic.&nbsp; And there are always some interesting discussions at GOTO, so it's worth a watch.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 20 Feb 2021 21:12:31 +0000</pubDate>
      <category><![CDATA[Industry]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <guid isPermaLink="true">https://linlog.skepticats.com/entries/2021/02/conference-talks.php</guid>
      <comments>https://linlog.skepticats.com/entries/2021/02/20_1612/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Refactoring LnBlog]]></title>
      <link>https://linlog.skepticats.com/entries/2021/01/refactoring-lnblog.php</link>
      <description><![CDATA[<p><strong><em>Author's note:</em></strong><em>&nbsp; Happy new year!&nbsp; I thought I'd start the year off with another old post that's been sitting in my drafts folder since March 24, 2013.&nbsp; This time, though, I'm going to provide more inline commentary.&nbsp; As usual, the interjections will be italicized and in parentheses.</em></p>
<p><em>You see, this post is on <a href="http://lnblog.skepticats.com/">LnBlog</a>, and specifically what is (or was) wrong with it.&nbsp; If you don't know, LnBlog is the software that runs this website - I wrote it as a "teach yourself PHP" project starting back around 2005.&nbsp; I've been improving it, on and off, ever since.&nbsp; So in this post, I'm going to show you what I thought of it back in 2013 and then discuss what I think now and what has changed.&nbsp; Hopefully it will be somewhat enlightening.&nbsp; Enjoy!</em></p>
<hr />
<p>The year is (relatively) new and it's time for some reflection. In this case, reflection on past code - namely <a href="http://lnblog.skepticats.com/">LnBlog</a>, the software that runs this site.</p>
<p>I've come a long way from LnBlog, which was my first "teach yourself PHP" project. I've now been doing full-time professional PHP development since 2007 and can reasonably claim to have some expertise in it. And looking back, while the LnBlog codebase is surprisingly not horrifying for someone who had a whopping two months of web development experience going into it, it's still a mess. So it's time to start slowly refactoring it. And who knows? Blogging my thought process might be useful or interesting to others.</p>
<p><em>(<strong>Back to now:</strong></em><strong>&nbsp;</strong><em>I actually did blog <a href="https://linlog.skepticats.com/entries/2017/09/LnBlog_Blogging_the_redesign.php">some</a> <a href="https://linlog.skepticats.com/entries/2017/10/LnBlog_Refactoring_Step_1_Publishing.php">of</a> <a href="https://linlog.skepticats.com/entries/2019/06/LnBlog_Refactoring_Step_2_Adding_Webmention_Support.php">this</a> <a href="https://linlog.skepticats.com/entries/2019/08/LnBlog_Refactoring_Step_3_Uploads_and_drafts.php">stuff</a>, but not until 2017 or so.&nbsp; And I still agree with that initial assessment.&nbsp; The code had plenty of problems then and it still does.&nbsp; If I were starting fresh today, I'd probably do almost everything differently.&nbsp; But on the other hand, I've seen <span style="text-decoration: underline;">much</span> worse in much newer code.&nbsp; And in the last three years or so I've been making slow and steady improvements.)</em></p>
<h3>The Issues</h3>
<p>There are a lot of things about LnBlog that need changing. A few of them are functional, but it's mostly maintenance issues. By that I mean that the code is not amenable to change. It's not well organized, it's too hard to understand, and it's too difficult to make updates. So let's go over a few of the obvious difficulties.</p>
<h4>1. The plugin system</h4>
<p>I have to face it - the plugin system is an unholy mess. The entire design is poorly thought out. It's built on the premise that a "plugin" will be a single PHP file, which makes things...painful. Any plugin with significant functionality or a decent amount of markup starts to get messy very quickly. The "single file" limitation makes adding styles and JavaScript ugly as well.</p>
<p>On the up side, the event-driven aspect works reasonably well. The code for it is a bit nasty, but it works. The main problem is that there aren't really enough extension points. It needs a bit more granularity, I think. Or perhaps it just needs to be better organized.</p>
<p><em>(<strong>Back to now:</strong> I still agree with most of this, except perhaps the thing about extension points.&nbsp; So far, the only place where that's been a real problem is when it comes to inserting markup mid-page.&nbsp; But yeah, the whole "a plugin is one file" thing was ill-conceived.&nbsp; The good news is that it's totally fixable - I just need to figure out some design conventions around splitting things out, which hasn't been a priority so far.)</em></p>
<h4>2. The templating system</h4>
<p>This one is also an unholy mess. The idea isn't bad - allow any file in a theme to be over-ridden. However, I tried to abstract the template files too much. The files are too big and contain too much logic. Also, the simple template library I'm using is more a hindrance than a help. I'd be better off just ditching it.</p>
<p>I've also been thinking of getting rid of the translation support. Let's face it - I'm the only person using this software. And I'm only fluent in one language. Granted, the translation markers don't cause any harm, but they don't really do anything for me either, and accounting for them in JS is a bit of a pain.</p>
<p><em>(<strong>Back to now:</strong> The only thing I still agree with here is that the existing templates are a mess.&nbsp; But that has nothing to do with the template system - I just did a bad job of implementing the template logic.&nbsp; I'm working on fixing that - for instance, I added some Jinja-like block functionality to the template library.&nbsp; I had considered re-writing the templates in Twig or something, but it quickly became obvious that that would be a huge amount of work, that it would be difficult to do in a piece-wise fashion, and it's not clear that the payoff would be worth it.&nbsp; Likewise with the translation markers - taking them out would be a bunch of work for almost zero payoff and the JS thing isn't really that big a deal.&nbsp; Besides, if I ever changed my mind again it's WAY more work to put them back in.)</em></p>
<h4>3. The UI sucks</h4>
<p>Yeah, my client-side skills have come a long way since I built LnBlog. The UI is very Web 1.0. The JavaScript is poorly written, the style sheets are a mess, the markup is badly done, and it's generally "serviceable" at best.</p>
<p>As I realized the other day, the style sheets and markup are probably the worst part. Trying to update them is difficult at best, which is exactly the opposite of what you want in a theme system. In retrospect, my idea to have themes replace template files wholesale, rather than override parts of them, seems misguided - the files are too fragmented. When it comes to the style sheets and JavaScript, this also hurts performance, because there are a lot of files and everything is loaded in the page head.</p>
<p><em>(<strong>Back to now:</strong> This is pretty much still accurate.&nbsp; I've been slowly improving the UI, but it's still not looking particularly "modern".&nbsp; That's not such a big deal, but the templates and CSS are still a pain-point.&nbsp; Really, what I need to do is rework the theme system so that I can easily make lighter-weight themes, i.e. I should be able to just create one override CSS file and call it good.&nbsp; I have the framework for that in place, but I have yet to actually go through the existing themes and make that work.)</em></p>
<h4>4. Too much compatibility</h4>
<p>When I first started writing LnBlog, I had a really crappy shared web hosting account. And by "really crappy", I mean it offered no database server and had safe-mode and the various other half-baked PHP "security measures" enabled by default. So I actually built LnBlog to be maximally compatible with such an environment.</p>
<p>These days, you can get decent hosting pretty cheap. So unless you can't afford to pay <em>anything</em>, there's no need to settle for such crappy hosting. And again, let's be honest here - I don't even <em>know</em> anyone other than me who's using this software. So supporting such crappy, hypothetical configurations is a waste of my time.</p>
<p>In addition, I put an absolutely ridiculous number of configuration settings into LnBlog. The main config file is extensively documented and comes to <em>over 700 lines</em>. That's completely nuts and a pain to deal with. Many of the settings are pointless - hardly anyone would ever <em>want</em> to override them - and most of the rest could be moved into a GUI rather than requiring you to edit a file.</p>
<p><em>(<strong>Back to now:</strong> This is also still true.&nbsp; I've been looking at redoing the config system, but that's another one of those things that is a big change because it has tendrils all through the code.&nbsp; I have been moving some stuff out of the main blogconfig.php file, and I've been avoiding adding to it, but there's still a lot there.&nbsp; For the most part, it's not a <span style="text-decoration: underline;">huge</span> issue, since most of the things you would want to configure are through the UI, but still....)</em></p>
<h4>5. No real controller structure</h4>
<p>I knew nothing of MVC or design patterns when I first wrote LnBlog. As a result, the "glue" code is in the form of old-style procedural pages. They're messy, poorly organized, and hard to maintain. A more modern approach would make things much easier to deal with.</p>
<p><em>(<strong>Back to now:</strong> The old "pages" are dead in all but name.&nbsp; A handful of them still exist, but they're three-liners that just delegate to a controller class.&nbsp; The bad news is that it's pretty much just two monolithic controller classes with all the old logic dumped into them.&nbsp; So that sucks.&nbsp; But they have dependency injection and some unit test coverage, so this is still an improvement.&nbsp; And I've at least got a little routing groundwork laid so that I could start breaking off pieces of functionality into other classes in the future.)</em></p>
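<p>For readers unfamiliar with the pattern, here's roughly what "a three-liner that delegates to a controller class" looks like, sketched in JavaScript with invented names (LnBlog's actual classes are PHP and are not shown here): the old procedural page shrinks to a stub that wires up dependencies and hands off, and the injected dependency makes the logic easy to unit test against a fake.</p>

```javascript
// Hypothetical sketch of the page-to-controller pattern described above.
class EntryController {
  constructor(entryRepository) {
    this.entryRepository = entryRepository; // injected, easy to fake in tests
  }

  show(entryId) {
    const entry = this.entryRepository.find(entryId);
    return entry ? '<h1>' + entry.title + '</h1>' : '404 Not Found';
  }
}

// The "three-liner page": build the controller and delegate.
const fakeRepo = { find: (id) => (id === 42 ? { title: 'Hello' } : null) };
const controller = new EntryController(fakeRepo);
console.log(controller.show(42)); // → <h1>Hello</h1>
```

<p>Even with the logic still lumped into big controller classes, this shape is what makes the dependency injection and unit testing mentioned above possible at all.</p>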
<h3>The Problem</h3>
<p>While I'd like to fix all this stuff in one shot, there are three big problems here:</p>
<ol>
<li>That's a lot of stuff, both in terms of the number of tasks and the amount of code involved.</li>
<li>I no longer have the kind of free time I did when I first wrote this.</li>
<li>I'm actually using this software.</li>
</ol>
<p>Of course, the first two are two sides of the same coin. &nbsp;LnBlog isn't huge, but it isn't tiny either - the codebase is upwards of 20,000 lines. &nbsp;That wouldn't be a big deal if I were working on it as my full-time job, but this is a side-project and I can devote&nbsp;<em>maybe</em> a couple hours a day to it&nbsp;<em>sometimes</em>. &nbsp;So major surgery is pretty much out. &nbsp;And the third factor means that I need to be careful about breaking changes - not only do I not want to break my own website, but I also want to avoid having to do a lot of migration work because writing migration scripts is&nbsp;<em>not</em> my idea of a fun way to spend my free time.</p>
<p><em>(<strong>Back to now:</strong> This is always a problem with open-source and side projects.&nbsp; Nothing has changed here except, perhaps, my development process.&nbsp; After that year I spent <a href="https://linlog.skepticats.com/?action=tags&amp;tag=PSP">learning about the Personal Software Process</a>, I started using some of those methods for my personal projects.&nbsp; The main change was that, when making any kind of a big change or feature addition, I actually do a semi-formal process with requirements, design, and review phases.&nbsp; It sounds kind of silly for a personal project, but it's actually <span style="text-decoration: underline;">extremely</span> useful.&nbsp; The main benefit is just in having my thoughts documented.&nbsp; Since I might be going a week or more between coding sessions on any particular feature, it's insanely helpful to have documentation to refer back to.&nbsp; That way I don't have to remember or waste time figuring things out again.&nbsp; And by having design- and code-review phases as part of my development process, I have a built-in reminder to go back and check that I actually implemented all those things I documented.&nbsp; Having the whole thing written out just makes it much easier when you have long gaps in between work sessions.)</em></p>
<hr />
<p><em><strong>General commentary from the present:</strong> So as you can see from the above comments, I've fixed or am fixing a lot of the things that bothered me about LnBlog eight years ago.&nbsp; In the last two or three years I've put a lot of work into this project again.&nbsp; Part of it is because I actually use it and want it to be better, but part of it is also "sharpening the saw".&nbsp; I've been using LnBlog as an exercise in building my development skills.&nbsp; It's not just coding new features, like the flurry of development in the first two years or so that I worked on LnBlog, it's cleaning up my past messes, adding quality assurance (in the form of tests and static analysis), updating the documentation, and figuring out how to balance responsible project management with limited resources.&nbsp; It's an exercise in managing legacy code.</em></p>
<p><em>To me, this is a useful and important thing to practice.&nbsp; As a professional developer, you&nbsp;<strong>will</strong> have to deal with legacy code.&nbsp; In my day job, I've had to deal with code that was written by our CEO 10+ years ago when he started the company.&nbsp; Software is a weird combination of things that live a week and things that live forever, and there's seldom any good way to tell which group the code will be in when you're writing it.&nbsp; So while it's important to know how to write code correctly the first time, it's also important to know how to deal with the reality of the code you have.&nbsp; And no, "let's rewrite it" is not dealing with reality.&nbsp; And when you have a code-base that's 15 years old, that you're actively using, and that you originally wrote, it's a great opportunity to experiment and build your skills in terms of modernizing legacy code.</em></p>
<p><em>And that's just what I'm doing.&nbsp; Slowly but surely, LnBlog is getting better.&nbsp; I've implemented a bunch of new features, and in the process I've worked on my design and analysis skills, both at a product level and at a technical level.&nbsp; I've fixed a bunch of bugs, which makes my life easier.&nbsp; I've implemented additional tests and static analysis, which also makes my life easier by finding bugs faster and giving me more confidence in my code.&nbsp; I've improved the design of the system, which again makes my life easier because I can now do more with less effort.&nbsp; Sure, there's still plenty to do, but I've made lots of progress, and things are only getting better.</em></p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sun, 10 Jan 2021 22:43:24 +0000</pubDate>
      <category><![CDATA[Software]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[Web]]></category>
      <category><![CDATA[From the Archives]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2021/01/refactoring-lnblog.php</guid>
      <comments>https://linlog.skepticats.com/entries/2021/01/10_1743/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Old people and legacy support]]></title>
      <link>https://linlog.skepticats.com/entries/2020/09/Old_people_and_legacy_support.php</link>
<description><![CDATA[<p>Lauren Weinstein had an interesting post earlier this year <a href="https://lauren.vortex.com/2020/01/17/how-some-software-designers-dont-seem-to-care-about-the-elderly">discussing software developers' attitudes toward the elderly</a>.&nbsp; His main point is that developers tend not to think at all about the issues that older people have when working with computers. These include things like reluctance to or difficulty with learning new programs or ways of working; old hardware which they can't afford to upgrade; isolation and lack of access to help; and physical limitations, such as poor eyesight or reduced manual dexterity.</p>
<p>Of course, this is obviously not true of <em>all </em>developers (like Lauren, for example), but if we apply it to the general zeitgeist of the community, at least as you see it online, then there does seem to be something to this.&nbsp; As a group, developers are very focused on "the coming thing", as Brisco County Jr. would say.&nbsp; We all want to be ahead of the curve, working with the cool new technology that's going to take over the world.&nbsp; We want to be on greenfield projects that are setting the standard for how to do things.&nbsp; That's why otherwise intelligent programmers do or suggest crazy things like rewriting their conventional LAMP-based site in Go and ReactJS.&nbsp; Of course, it's <a href="https://www.joelonsoftware.com/2000/04/06/things-you-should-never-do-part-i/">long been established</a> that rewriting from scratch is almost always stupid and wasteful, but the fact is that while PHP might pay the bills, it isn't cool.</p>
<p style="text-align: center;"><iframe src="https://www.youtube.com/embed/VqCwAY3MKw4" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p>
<p>Of course, it isn't&nbsp;<em>just</em> because they want to be cool that developers like newer technologies.&nbsp; There are plenty of other reasons.&nbsp; Intellectual curiosity, for one.&nbsp; Many of us got into this line of work because we enjoy learning new things, and there are always interesting new technologies coming out to learn.&nbsp; Learning&nbsp;<em>old</em> things&nbsp;can be interesting as well, but there are a few problems with that:</p>
<ol>
<li><strong>Older technologies are less marketable.</strong>&nbsp; Learning new tech takes a lot of time and effort, and if the tech is already on the way out, the odds of seeing a return on that investment of time, whether financial or just in terms of re-using that knowledge, are significantly lower.</li>
<li><strong>Older tech involves more grunt work.</strong>&nbsp; In other words, older programming technologies tend to work at a lower level.&nbsp; Not always, but the trend is to increasing levels of abstraction.&nbsp; That means that it will likely take more effort and/or code to do the same thing that you might get more or less for free with newer tech.</li>
<li><strong>The problems are less fun.</strong>&nbsp; This particularly applies to things like "supporting Internet Explorer", which Lauren mentions.&nbsp; When you have to support both the old stuff&nbsp;<em>and</em> the new stuff, you generally have lots of problems with platform-specific quirks, things that are supposed to be compatible but really aren't, and just generally trying to work around limitations of the older tech.&nbsp; These are the kind of problems that can be difficult, but not in a good way.&nbsp; They're less like "build a better mousetrap" and more like "find a needle in this haystack".</li>
</ol>
<p>So, in general, developers aren't usually super enthusiastic about working with or supporting old tech.&nbsp; It's not&nbsp;<em>really</em> as bad as some people make it sound, but it's not really where most of us want to be.</p>
<p>Another factor is the way websites are developed.&nbsp; The ideal is that you'd have somebody who is trained and experienced in designing user experiences and who is capable of considering all the use-cases and evaluating the site based on them.&nbsp; That person could communicate that information to the designers and developers, who could incorporate it into their work and produce sites that are easy to use, compatible with assistive technologies, degrade gracefully when using less capable hardware or software, etc.&nbsp; The reality is that this rarely happens.&nbsp; In my experience:</p>
<ol>
<li>Many teams (at least the ones I have experience with) have no UX designer.&nbsp; If you're lucky, you'll have a graphic designer who has some knowledge or awareness of UX concerns.&nbsp; More likely, it will be left up to the developers, who are typically not experts.&nbsp; And if you're <em>very</em> unlucky, you'll have to work with a graphic designer who is fixated on pixel-perfect fidelity to the design and is completely indifferent to the user experience.</li>
<li>Most developers are on the young side.&nbsp; (There are plenty of older developers out there, but the field has been growing for years and the new people coming in are almost all young.)&nbsp; They're also able-bodied, so they don't really have any conception of the physical challenges that older people can have.&nbsp; And it's hard to design for a limitation that you didn't think of and don't really understand.</li>
<li>While easy in principle, progressive enhancement and graceful degradation can be very tricky to actually pull off.&nbsp; The main reason is that it's&nbsp;<em>extremely</em> easy to accidentally introduce a change that doesn't play well with some browser, doesn't display properly at some resolution, or what have you.</li>
<li>And let's not forget testing.&nbsp; Even if you can build a site with progressive enhancement, proper accessibility, and attention to the needs of less technical users with little support available, you still need to test it.&nbsp; And the more considerations, use-cases, and supported configurations you have, the more your testing space expands.&nbsp; That makes it much harder and more time-consuming to make sure that all these things are&nbsp;<em>actually</em> present and working as intended for all users.</li>
</ol>
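<p>Item 3 above is easier to see with a concrete guard.&nbsp; The classic progressive-enhancement move is to feature-detect before enhancing, so that browsers missing a capability keep the baseline behavior (a generic sketch, not from Lauren's post - the function and names here are invented for illustration):</p>

```javascript
// Generic progressive-enhancement guard (illustrative only): run the
// enhanced path when the capability exists, otherwise keep the baseline.
// Passing the environment in explicitly makes the fallback path testable.
function withEnhancement(env, enhanced, baseline) {
  return typeof env.IntersectionObserver === 'function' ? enhanced() : baseline();
}

// In an old browser with no IntersectionObserver, we degrade gracefully:
console.log(withEnhancement({}, () => 'lazy-load', () => 'eager-load')); // → eager-load
```

<p>The catch, as noted above, is that every such guard is another configuration to test - the code is easy, but verifying all the combinations isn't.</p>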
<p>So what am I trying to say here?&nbsp; I do agree with Lauren that supporting elderly users, disabled users, etc. is an important thing.&nbsp; It's a thing that, as an industry, we should do.&nbsp; But it's hard.&nbsp; And expensive (at least compared to the way most shops work now).&nbsp; That's not an excuse for not doing it - more like an explanation.</p>
<p>Every shop needs to find a balance between supporting a diversity of users and doing what they need to do within a reasonable budget of time and money.&nbsp; While it's important to think about security and support new standards, I think that in recent years the industry has probably been a little too quick to abandon old, but still widely used technologies.&nbsp; If nothing else, we should at least think more about our target user base and whether we're actually serving <em>them</em> or <em>ourselves</em> by introducing $COOL_JS_FRAMEWORK or dropping support for Internet Explorer.&nbsp; I'm sure that in many (but <em>not all</em>) cases, dropping the old stuff is the right choice, but that shouldn't be the default assumption.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sun, 13 Sep 2020 22:21:08 +0000</pubDate>
      <category><![CDATA[Industry]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[Web]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2020/09/Old_people_and_legacy_support.php</guid>
      <comments>https://linlog.skepticats.com/entries/2020/09/13_1821/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Famous companies and technology blogs]]></title>
      <link>https://linlog.skepticats.com/entries/2020/09/Famous_companies_and_technology_blogs.php</link>
      <description><![CDATA[<p>On a lighter note this weekend, here's a "funny because it's true" <a href="https://saagarjha.com/blog/2020/05/10/why-we-at-famous-company-switched-to-hyped-technology/">parody blog entry from earlier this year</a>.&nbsp; It pretty well sums up the sometimes-rampant insanity in the software industry.&nbsp; You know, where&nbsp;<em>literally every startup</em> thinks they're Facebook or Google and is going to go from zero to sixty-three quadrillion hits a day, so, well, you&nbsp;<em>have</em> to be ready for that kind of scale on day one.</p>
<p>Of course, approximately 90% of those startups will be out of business in less than five years, but hey, you never know, right?&nbsp; After all, the entire business model is "get big so you can IPO or get bought out".&nbsp; Sure, you&nbsp;<em>could</em> try to build a profitable, medium-sized business, but that's basically the same thing as failure.&nbsp; Besides, if you can attract venture capital, it's not your money you're blowing through anyway, so who cares?</p>
<p>The good news is that the industry seems to be moving away from that mindset a little bit in the last few years.&nbsp; Or, at least, things seem to be moving more toward business-to-business sales, as opposed to mass-market products that have no clear business plan beyond "get huge".&nbsp; So maybe that's a good sign and this idea of copying every crazy/innovative thing that the big boys are doing will be a thing of the past soon.</p>
<p>Or maybe not.&nbsp; But I can dream.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sun, 06 Sep 2020 01:46:07 +0000</pubDate>
      <category><![CDATA[Industry]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[Humor]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2020/09/Famous_companies_and_technology_blogs.php</guid>
      <comments>https://linlog.skepticats.com/entries/2020/09/05_2146/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Questioning agility]]></title>
      <link>https://linlog.skepticats.com/entries/2020/04/Questioning_agility.php</link>
      <description><![CDATA[<p><strong><em>Author's note:</em></strong><em> This is based on some notes and links I started collecting in November of 2015.&nbsp; The bulk of the commentary is actually relatively new - from last year, August 15th, 2019 - so it's not too out of line with my current thinking.</em></p>
<p>As might be clear from my <a href="https://linlog.skepticats.com/entries/2020/04/19_0931/../../../2015/11/Review_-_Agile_The_Good_the_Hype_and_the_Ugly.php">review of Bertrand Meyer's <em>Agile!: The Good, the Hype, and the Ugly</em></a>, I've been rethinking the whole agile development craze that has swept the industry.</p>
<p>There are a number of good presentations online questioning the "agile" movement.&nbsp; For a more provocative point of view, I recommend <a href="https://vimeo.com/110554082">Erik Meijer's One Hacker Way talk</a>.&nbsp; Dave Thomas, one of the "pragmatic programmers", also has a&nbsp;<a href="https://youtu.be/a-BOSpxYJ9M">good talk</a> on the topic.&nbsp;There's also a good one by Fred George on the <a href="https://youtu.be/l1Efy4RB_kw">hidden assumptions of agile</a>.</p>
<p>My current thought (<em><strong>note:</strong> we're back to 2019 now</em>) is that "agile" has become pretty much a meaningless buzz word.&nbsp; Pretty much&nbsp;<em>everybody</em> is "doing agile" now - or at least claiming they do.&nbsp; It's come to mean "anything that's not waterfall".&nbsp; And we all know that "waterfall" doesn't work, which is why everyone is doing "agile".&nbsp; (Side note: Winston Royce, in <a href="http://www-scf.usc.edu/~csci201/lectures/Lecture11/royce1970.pdf">his paper</a> that initially described the waterfall process, actually <em>says</em> that it doesn't really work.&nbsp; But, of course, that didn't stop people from trying.)</p>
<p>Not that agility is a bad concept.&nbsp; It isn't - being flexible is a good thing.&nbsp; Responding to change is almost a requirement in most shops.&nbsp; The values in the Agile Manifesto are all good things to emphasize.&nbsp; But none of that amounts to a process.&nbsp; It's just a loose collection of principles and ideas that are useful, but aren't a road-map for how to build software.</p>
<p>That's why, in practice, most shops that "do agile" are using some variation on Scrum.&nbsp; And while I have no problem with Scrum&nbsp;<em>per se</em>, it's hardly the be-all and end-all of development processes.&nbsp; In fact, the main problem with Scrum is probably that it's&nbsp;<em>not</em> a software development process - it's more of a project management framework.&nbsp; It doesn't have a whole lot to say about the details of how to code, how to test, how to manage defects and other quality issues, how to manage releases, etc.&nbsp; It's up to each shop to figure that stuff out for themselves.</p>
<p>Of course, that's not bad.&nbsp; Every business is different and you should expect that you'll have to adapt any process to a certain extent.&nbsp; Scrum is useful in that it gives you a framework for tracking what needs to be done and creating a feedback loop to improve your process.&nbsp; But you still have to actually&nbsp;<em>use</em> that feedback loop to improve your process, i.e. you have to do the hard work of self-improvement.&nbsp; Simply going through the motions of what the "agile consultant" says you should do isn't going to cut it.&nbsp; As with everything else in life, there are no shortcuts.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sun, 19 Apr 2020 13:31:56 +0000</pubDate>
      <category><![CDATA[Industry]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <category><![CDATA[From the Archives]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2020/04/Questioning_agility.php</guid>
      <comments>https://linlog.skepticats.com/entries/2020/04/19_0931/comments/</comments>
    </item>
    <item>
      <title><![CDATA[Code sharing, pros and cons]]></title>
      <link>https://linlog.skepticats.com/entries/2020/03/Code_sharing_pros_and_cons.php</link>
      <description><![CDATA[<p>A few months ago, the DropBox blog had an interesting article on <a href="https://blogs.dropbox.com/tech/2019/08/the-not-so-hidden-cost-of-sharing-code-between-ios-and-android/">code sharing in their mobile apps</a>.&nbsp; It was a good reminder that developers should be mindful of not fetishizing code re-use.&nbsp; Like so many things in engineering, re-use is generally good, but still involves trade-offs.</p>
<p>The short-short version of the article is that DropBox originally had the idea to share code between their iOS and Android mobile applications.&nbsp; After all, why write the same app twice when you can do it once?&nbsp; Well, it turned out to be a really bad idea.&nbsp; Because they were doing things for both platforms in a custom, non-standard way, they ended up taking on a <em>lot</em> of extra work just for the sake of re-using the code.&nbsp; And not just in the amount of extra code they had to write in terms of custom libraries and tools - there's the extra effort of bringing new developers up to speed.&nbsp; In fact, they eventually realized that it would actually be <em>less</em> work to just write everything twice and ended up abandoning their code-sharing approach.</p>
<p>Of course, this is an extreme case.&nbsp; Trying to share code between two radically different platforms with different native implementation languages is bound to generate some issues.&nbsp; A more prosaic example is the <a href="https://www.theregister.co.uk/2016/03/23/npm_left_pad_chaos/">left-pad debacle</a>.&nbsp; You know, the one where a developer removed an 11-line JavaScript "library" from NPM and broke half the internet's deployment scripts.&nbsp; But in that case, the scale was reversed - instead of trying to share a significant amount of code between a couple of projects, the left-pad package shared a completely trivial amount of code between an absurdly huge number of packages.&nbsp; But the underlying problem is similar.&nbsp; By trying to share that 11 lines of JavaScript "the right way", you end up taking on an external dependency that now has to be managed and probably generates more long-term maintenance effort than you saved from not writing the code yourself.&nbsp; (I mean, seriously, why use a library for something that any half-competent programmer could write in less than 15 minutes?)</p>
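<p>For reference, the kind of function at issue really is trivial.&nbsp; A version with roughly the original package's semantics (a sketch for illustration, not the actual left-pad source) fits in a few lines:</p>

```javascript
// Minimal left-pad, roughly matching the infamous npm package's behavior
// (illustrative sketch, not the real left-pad code): prepend the pad
// character until the string reaches the requested length.
function leftPad(value, length, padChar) {
  let str = String(value);
  const ch = padChar === undefined ? ' ' : String(padChar);
  while (str.length < length) {
    str = ch + str;
  }
  return str;
}

console.log(leftPad(5, 3, '0')); // → 005
console.log(leftPad('foo', 5));  // → "  foo"
```

<p>Which is exactly the point: pulling in a dependency for this trades fifteen minutes of typing for an open-ended maintenance obligation.</p>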
<p>On the same theme, I've seen similar problems in client-side JavaScript.&nbsp; I've worked on a codebase or two where large portions of the front-end were essentially cobbled together out of jQuery plugins that sorta-kinda work together, but not really.&nbsp; Clearly the intention was to save effort by re-using existing components, but it didn't quite work out.&nbsp; Sometimes it was because they didn't play well together, other times it was just that one of the components was not very well written.&nbsp; In a couple of cases, after examining the plugins and the desired behavior, it became apparent that I could just write a version that <em>did</em> work cleanly in an hour or so - less time than it would take to debug the third-party plugin.</p>
<p>Of course, that's not to say that re-using code, whether your own or someone else's, is a bad idea.&nbsp; It's clearly not.&nbsp; Sometimes you find yourself writing nearly identical components for multiple projects, in which case it probably makes sense to abstract that into a shared library.&nbsp; Or maybe there's a third-party library that already does more or less exactly what you need, in which case it probably makes sense to just use it.&nbsp; That's all fine.</p>
<p>My point is just that "re-use some existing code," like any other idea in engineering, comes with trade-offs.&nbsp; Sometimes those trade-offs are minor, such as the cost of managing a dependency that buys you a really big chunk of functionality.&nbsp; Other times, they're pretty major, like the opportunity cost to DropBox of breaking all of the standard platform tooling.&nbsp; Either way, the important thing is to think about those trade-offs and evaluate them honestly, not just jump to an "easy" answer.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 07 Mar 2020 23:11:06 +0000</pubDate>
      <category><![CDATA[Programming]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2020/03/Code_sharing_pros_and_cons.php</guid>
      <comments>https://linlog.skepticats.com/entries/2020/03/07_1811/comments/</comments>
    </item>
    <item>
<title><![CDATA[I give up - switching to GitHub]]></title>
      <link>https://linlog.skepticats.com/entries/2019/09/I_give_up_-_switching_to_GItHub.php</link>
      <description><![CDATA[<p>Well, I officially give up.&nbsp; I'm switching to GitHub.</p>
<p>If you read back through this blog, you might get the idea that I'm a bit of a contrarian.&nbsp; I'm generally not the type to jump on the latest popular thing.&nbsp; I'd rather go my own way and do what I think is best than go along with the crowd.&nbsp; But at the same time, I know a lost cause when I see it and I can recognize when it's time to cut my losses.</p>
<p>For many years, I ran my own Mercurial repository on my web host, including the web viewer interface, as well as my own issue tracker (originally <a href="https://mantisbt.org/">MantisBT</a>, more recently <a href="https://thebuggenie.com/">The Bug Genie</a>).&nbsp; However, I've reached the point where I can't justify doing that anymore.&nbsp; So I'm giving up and switching over to GitHub like everybody else.</p>
<p>I take no real pleasure in this.&nbsp; I've been using Git professionally for many years, but I've never been a big fan of it.&nbsp; I mean, I can't say it's <em>bad</em> - it's not.&nbsp; But I think it's hard to use and more complicated than it needs to be.&nbsp; As a comment I once saw put it, Git "isn't a revision control system, it's more of a workflow tool that you can use to do version control."&nbsp; And I still think the only reason Git got popular is because it was created by programming celebrity Linus Torvalds.&nbsp; If it had been created by Joe Nobody I suspect it would probably be in the same boat as <a href="https://bazaar.canonical.com/">Bazaar</a>&nbsp;today.</p>
<p>That said, at this point it's clear that Git has won the distributed VCS war, and done so decisively.&nbsp; Everything supports Git, and nothing supports Mercurial.&nbsp; Heck, even BitBucket, the original cloud Mercurial host, is now <a href="https://developers.slashdot.org/story/19/08/20/1654253/bitbucket-dropping-support-for-mercurial">dropping Mercurial support</a>.&nbsp; For me, that was kind of the final nail in the coffin.&nbsp;&nbsp;</p>
<p>That's not the only reason for my switch, though.&nbsp; There are a bunch of smaller things that have been adding up over time:</p>
<ul>
<li>There's just more tool support for Git.&nbsp; These days, if a development tool has&nbsp;<em>any</em> VCS integration, it's for Git.&nbsp; Mercurial is left out in the cold.</li>
<li>While running my own Mercurial and bug tracker installations isn't a&nbsp;<em>huge</em> maintenance burden, it is a burden.&nbsp; Every now and then they break because of my host changing some configuration, or they need to be upgraded.&nbsp; These days my time is scarce and it's no longer fun or interesting to do that work.</li>
<li>There are some niggling bugs in my existing environment.&nbsp; The one that really annoys me is that my last Mercurial upgrade broke the script that integrates it with The Bug Genie.&nbsp; I could probably fix it if I really wanted to, but the script is larger than you'd expect and it's not enough of an annoyance to dedicate the time it would take to become familiar with it.</li>
<li>My web host actually now provides support for Git hosting.&nbsp; So I can actually still have my own repo on my own hosting (in addition to GitHub) without having to do any extra work.</li>
<li>Honestly, at this point I've got more experience with Git than Mercurial, to the point that I find myself trying to run Git commands in my Mercurial repos.&nbsp; So by using Mercurial at home I'm kind of fighting my own instincts, which is counterproductive.</li>
</ul>
<p>So there you have it.&nbsp; I'm currently in the process of converting all my Mercurial repos to Git.&nbsp; After that, I'll look at moving my issue tracking into GitHub.&nbsp; In the long run, it's gonna be less work to just go with the flow.</p>]]></description>
      <author><![CDATA[pageer@skepticats.com (Peter Geer)]]></author>
      <pubDate>Sat, 07 Sep 2019 13:37:03 +0000</pubDate>
      <category><![CDATA[Git]]></category>
      <category><![CDATA[Programming]]></category>
      <category><![CDATA[Tools]]></category>
      <category><![CDATA[Software Engineering]]></category>
      <guid isPermalink="true">https://linlog.skepticats.com/entries/2019/09/I_give_up_-_switching_to_GItHub.php</guid>
      <comments>https://linlog.skepticats.com/entries/2019/09/07_0937/comments/</comments>
    </item>
  </channel>
</rss>
