The fact that almost everything on the old version of this website is present on this version should suggest that I didn’t go through my plan of splitting everything among multiple websites. I have my reasons. I will attempt to explain them below.
The short version is that you should blame @foreverliketh.is.
Back in January, he sent me an email in which he invited me to host the 2024 IndieWeb Carnival for one month. However, he also objected to plans I had made for this site in a previous post. In particular, he objected to my notion of splitting the material I’ve been posting here across multiple websites.
> I say the following kindly: I don’t agree with your fragmentation approach. Based on your 'Reset' post you’re going to essentially have 5 sites (including the archive). You mentioned a dislike of the 'all or nothing' style but I feel that could be remedied with separate (RSS) feeds on the same site.
He was partially right. If all I wanted was to not force people to subscribe to everything, providing niche-based feeds would have been enough. However, I had also wanted to get away from using my “real name” for everything online, to the extent that I could. Providing multiple RSS feeds wasn’t going to fix that.
For example, I want a website whose URL I can share with coworkers and prospective employers because I’ve censored myself there and refrained from posting anything objectionable. This site isn’t it. I might not be emulating Devastatia Del Gato by interspersing little pictures of scantily clad models and cartoon characters throughout my posts, but my writing itself is hardly safe for work.
This should not be taken to mean that I oppose the use of such imagery to draw the eye. I am hardly a Puritan. I’m getting on in years, but I’m not dead yet. I still have a libido, even if I generally keep it in my pants. But I suspect my tastes run a bit artsier. For example, pre-Raphaelite depictions of Lilith.
CW: artistic nudity and discussion thereof
I was, after all, the sort of man who claimed to read Playboy for the articles when that magazine still had a print edition. And if a given month’s Playmate was a blonde, I actually did so. Likewise the fiction. Gentlemen may prefer blondes, but I’m no gentleman. I’d choose a dark-haired woman with a smoldering gaze and a rapier tongue any day, even if she isn’t Hollywood pretty, as long as she’d have me. I’m more likely to enjoy such a woman’s company. Indeed, my wife could testify that I’ve done so for over twenty years.
Why artistic nudes from the 19th century or earlier? It’s a bit more “respectable” than, say, a spread from Hustler. More importantly, such artwork is part of the public domain and as such can be freely used. For example, nobody could stop me from adding a speech bubble to Michelangelo’s The Creation of Adam to depict God saying unto Adam, “You were supposed to be a shower, not a grower.”
However respectable this sort of art might be, I would not want to put it on a site I might use for job hunting. People might get the wrong impression, especially if they’re uptight or overly attached to irrational notions of how a man my age should behave [1].
This, however, is not a site where I’d share my resume or a situation wanted post [2]. For one thing, I am no longer using my real name here, but an alias. I nevertheless need a site where I do use my real name. Fortunately, I’m already renting a suitable domain for that.
Furthermore, separation of personae isn’t the only reason I had wanted to revamp this website. I had also wanted to get away from static site generation, where I do my writing in formats like Markdown, reStructuredText, or Org Mode and then convert those formats into HTML. Each of these formats converts to only a subset of HTML. Markdown seems the most limited, though it does let you include inline HTML for anything it doesn’t support.
Nevertheless, the need to convert from Markdown to HTML is an extra step. That extra step requires an extra tool. The bigger my site got, the longer it took to build.
One might reasonably suggest that I use a static site generator like Jekyll, Pelican, or Hugo. Hugo in particular is reputed to be an extremely fast generator. I’ve tried all of these. None of them suit me. Each is designed around assumptions about how a website should be structured, and if you attempt to deviate from those assumptions, you’re on your own.
Furthermore, the use of static site generators introduces additional dependencies. In the cases of Jekyll and Pelican, you need working Ruby and Python environments respectively. Hugo, at least, is self-contained by virtue of being written in Go. None of these are what I wanted.
Nor did I want to use a content management system like WordPress, TextPattern, or Kirby. While Kirby uses flat files instead of a MySQL database like WordPress and TextPattern, it still requires PHP and is more software that I have to update lest I expose myself to security vulnerabilities. It is more complexity than I care to bother with. I suspect Devastatia Del Gato feels the same, which is why she’s built her own CMS.
What I want is not static site generation, but static site integration. To my knowledge, there isn’t really any such thing as a static site integrator: something that takes hypertext files, processes them along with some metadata using templates, and creates web pages with navigation, along with RSS feeds and indexes for blog posts. So, I had to do it myself. Being lazy, I did not want to write a software package that I might then release to the public and attempt to support in order to justify the effort I had expended in its development [3].
Nevertheless, my website isn’t going to build itself, and if I tried to do everything by hand I’d never publish anything. Fortunately, I don’t need a full application to build a website. Since the static site integration I want is nothing but text processing and manipulation, I already have all the tools I need. And since I’ve always wanted to get better at shell scripting, this was as good an excuse as any. I’ve even gotten more comfortable with basic UNIX tools like make, sed, and awk.
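To give a concrete idea, here’s a minimal sketch of the sort of script I mean, assuming a layout of per-post body fragments and shared header/footer templates. Every file name here, and the {{TITLE}} placeholder, is illustrative rather than my actual setup:

```sh
#!/bin/sh
# Sketch: "integrate" post bodies into full pages with plain UNIX tools.
# Assumes posts/*.html hold article bodies, and tmpl/header.html and
# tmpl/footer.html hold the shared page chrome. All names are hypothetical.
mkdir -p public
for body in posts/*.html; do
    page="public/$(basename "$body")"
    # Pull the post title out of the first <h1> for the header template.
    title=$(sed -n 's|.*<h1>\(.*\)</h1>.*|\1|p' "$body" | head -n 1)
    {
        sed "s|{{TITLE}}|$title|" tmpl/header.html
        cat "$body"
        cat tmpl/footer.html
    } > "$page"
done
```

Drive something like that from a Makefile and only the pages whose sources have changed get rebuilt.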
Most of what I’m doing could have been done with Apache server-side includes, or with PHP. Neither is ideal, since both come into play with every request instead of running once locally before I deploy the website. Furthermore, dependence on server-side processing isn’t conducive to making my website suitable for offline reading.
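For example, the build-time equivalent of an SSI navigation include is a sed one-liner. The marker comment and file names are again illustrative:

```sh
# Replace a <!-- nav --> marker line with the contents of nav.html,
# doing once at build time what <!--#include virtual="nav.html" -->
# would have Apache do on every request.
sed -e '/<!-- nav -->/r nav.html' -e '/<!-- nav -->/d' page.in.html > page.html
```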
I may have done things the hard way, but I’m content with the tools I’ve made for myself, and I know how to create new tools to add new things to my website. There’s a lot I’ve done already in the process of rebuilding my website.
- Permanently converted all existing material from Markdown/Org Mode to HTML
- Eliminated `pandoc` as a dependency.
- Reorganized my blogroll so that I don’t have an arbitrary separation between blogs and websites
- Structured the website so that it can be read offline
- The full build process creates compressed archives for offline reading, mirroring, etc.
- @ mentions for some individuals get automatically replaced with links to their websites (see the sketch after this list).
- I’ve segmented blog posts by general interest.
- I now provide multiple RSS feeds in addition to the kitchen sink feed and the headlines feed. There are per-interest feeds, a recent feed, and a recommended feed.
- Explicit content is hidden behind a `<details>` element so that people can engage with it with informed consent.
- Implemented accessible breadcrumbs that use the appropriate ARIA roles.
- Replaced a navigation menu that had almost a dozen items with a link to a human-readable site map.
- Images are presented in AVIF format where available to reduce data/bandwidth usage.
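The @-mention replacement needs nothing more than a sed script holding one substitution per person. Here’s a sketch; the file name and the example entry are illustrative, not my actual mapping:

```sh
# mentions.sed holds one substitution per person, e.g.:
#   s|@foreverliketh\.is|<a href="https://foreverliketh.is/">@foreverliketh.is</a>|g
sed -f mentions.sed draft.html > post.html
```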
There is also stuff I still want to do. Here are a few of the biggies.
- Add more images to pages and blog posts: mostly public domain art and covers for albums I like, such as Maps of Non-Existent Places by Thank You Scientist. Perhaps some royalty-free/public domain landscape photography, too. And cat pictures, but not necessarily of my cats. I won’t guarantee the art will be relevant to the post at hand. Gratuitous Renaissance and pre-Raphaelite nudity is always gratuitous, but I’ll limit irrelevant images to one per page.
- Periodically extract listening data from last.fm and process it to create a static playlist. The API still gives data as XML; I should be able to use XSLT to convert the XML into an HTML partial (see the sketch after this list).
- Break up my novels so that they’re one chapter per page instead of the full text on one page. The 2009 draft of Starbreaker is over 180,000 words and weighs in at about 1.6 megabytes. That would have been utterly insane on 56K dialup and it still isn’t reasonable today, even if Twitter uses at least as much data for one of Elon Musk’s shitposts.
- Add additional pages: a /wish page, an /ideas page, a /uses page, and pages listing books and albums I’ve collected on physical media. Maybe a /jukebox page like the one created by Mark L. Irons, too.
- Implement an RSS-only microblog now that I’ve nuked my Fediverse accounts and gotten off parasocial media. (I already have shell scripts for this in another version of this website’s code; I just haven’t used them here yet.)
- Get into writing fiction again, but instead of a novel I want to write something both hypertextual and epistolary. I want to take advantage of this medium, if I can.
- Clean up existing posts so that stuff like code samples render consistently.
- Dig up more old posts from old versions of this website and republish them.
- Create a monthly web zine highlighting stuff I’ve done to this site.
- Create focused blog series exploring my GNU Emacs configuration and how I built this website.
- Create a more minimalistic default stylesheet, and create alternative stylesheets that Mozilla Firefox users can select through the menu. [4]
- Create JSON feeds to complement my existing RSS feeds.
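For the last.fm playlist item above, the whole pipeline could be a single pipe. This sketch assumes the public audioscrobbler endpoint and a hypothetical playlist.xsl stylesheet that turns <track> elements into list items:

```sh
#!/bin/sh
# Sketch: fetch recent tracks as XML, render an HTML partial with XSLT.
# LASTFM_USER, LASTFM_KEY, and playlist.xsl are all assumptions here.
curl -s "http://ws.audioscrobbler.com/2.0/?method=user.getrecenttracks&user=${LASTFM_USER}&api_key=${LASTFM_KEY}" |
    xsltproc playlist.xsl - > partials/playlist.html
```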
Let’s see how much I end up doing, assuming I don’t wimp out and create another Mastodon account.
notes
1. Such people seem congenitally incapable of understanding that they are renting my intellect, not buying a controlling interest in my life, despite being no more than my equals. I do the job, and then I get paid. What I do outside working hours should be no concern of theirs. I might prostitute my intellect by necessity, but I am not a resource. I am a man, and I expect to be treated as such. ↩
2. People in my trade use LinkedIn when they want a better job but don’t have much of a network, yet that platform is to job hunters what Ashley Madison is to unfaithful husbands. The listings are all fake and the recruiters are all bots. ↩
3. Software development is thankless work, and if being a janitor paid as well for the same hours I’d go back to mopping floors and scrubbing toilets. Either way I’m cleaning up after other people.

   I’m not afraid that an AI will take my job. Indeed, I should be so fortunate! However, I think that an AI capable of replacing me would have to be as human as I. Hopefully such an AI would possess sufficient self-worth to refuse to tolerate the pay, hours, and conditions most developers endure because we don’t have the sense to unionize. ↩
4. This won’t work for Google Chrome, Safari, or Chromium-based browsers like Edge, Brave, Vivaldi, etc. That’s not my problem; you should take it up with the petty authoritarians who built your preferred browser, or just install Mozilla Firefox. ↩