New Year, New Website

I had been in stealth mode, but maybe it's time to go public.


I’m writing this as part of a 32bit.cafe event. One of the prompts is: “Make a page or write a blog post that explores a reflection on past decisions (good or bad), or brainstorms / shares plans for the future.”

For the last couple of weeks I had been of two minds as to whether I would write this post. If you are reading this, Occasional Reader, then it is plain that I have in fact decided to write it. The 32bit.cafe event I mentioned is the impetus.

It is rather tautological, but I am no stranger to tautology. When Smudge becomes a querulous coeurl and starts telling me shaggy cat tales about how horribly he’s been neglected and that he is all but dying of hunger, I can say: Who do you think you’re fooling, Mr. Kitty? I know you’ve been fed because I fed you. Of course, I am usually then obliged to remind Smudge that nobody puts Cat Baby in a corner. Not even Cat Baby himself.

And here is the cat tax, paid in full.

a photo of a long-haired standard-issue cat
Smudge, the assistant webmaster for starbreaker.org, taking a break in his favorite old chair

It seemed silly to announce that I would be putting starbreaker.org on hiatus when it has in fact been on hiatus since September 2025. I had decided to give myself a birthday present: I would rethink my website’s structure and the tooling I use to build it. I was also going through a 20,000 word first draft for a new novel called Revölution by Night and deciding what I wanted to do with it in the second draft. I had ended up writing a new first chapter for that project’s second draft entitled “Old Testament Shit”. It starts like this:

Morgan Cooper never wore a watch for his own sake, but only as a courtesy to others. Even without his neuroelectronic implant to provide the time at home in New York and in London, where his girlfriend was an express maglev away, he knew when he was as surely as where. In his heart it was always two minutes to midnight on a Raymond Chandler evening. There was always a crisis in the offing, but it had not happened yet.

It was happening now. On a packed Manhattan subway car: an angel on the A train, heading to Hell’s Kitchen. But first, it would do some butchery. Its kind always did.

not bad for a second draft, I think

However, I did not get as far in retooling starbreaker.org as I had wanted. Much of that can be blamed on my day job. I had started working on a different account that obliged me to change my working hours from the standard 9:00am to 5:00pm. I was instead supposed to work from 4:00pm to midnight. In practice, the demands of the job often kept me working until 1:00am or even 3:00am. Not only was I not getting enough sleep, but I wasn’t getting paid overtime either. Still, I had been told up front that it might be like that, and I took the assignment anyway. (It was either that, or try to hold out for something better, and I had already refused an assignment involving Meta — because Mark Zuckerberg can go to Hell.)

Regardless, I had at least been reading the makefile and shell scripts I had been using to build this site. I thought my approach was reasonably clever.

The shell script at the heart of it worked, but I remained dissatisfied. I had suspected throughout 2024 (when I first implemented my current toolchain) and 2025 that there was room for improvement. It felt overengineered, and it also felt slow.

It was faster than running pandoc on Markdown or Org mode files would have been, but my makefile was still running a shell script for each page. This shell script would read each page’s variable file and do all kinds of data manipulation. It would then use these variables in a series of substitutions using sed. Don’t get me wrong; I was proud of myself for applying the UNIX philosophy instead of depending on some black box like Hugo or some godawful framework that depends on Node.js and the npm ecosystem, like Eleventy or Astro.
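To make that concrete, here is a hedged reconstruction of what such a per-page script does. The file names and variables (about.vars, PAGE_TITLE, and friends) are illustrative inventions, not the contents of the actual process-html.sh:

```shell
#!/bin/sh
# Hedged reconstruction of the old approach: source a per-page file
# of shell variables, derive more values from them, then inject each
# one into the layout with a pile of sed -e expressions.
# All file and variable names here are illustrative.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# Per-page metadata stored as shell variable assignments.
cat > "$tmp/about.vars" <<'EOF'
PAGE_TITLE='About Me'
PAGE_DATE='2025-09-01'
EOF

# A layout fragment with placeholder tokens.
cat > "$tmp/layout.html" <<'EOF'
<title>__PAGE_TITLE__</title>
<p>Published __PAGE_DATE__ in __PAGE_YEAR__</p>
EOF

# Source the variables, then manipulate them in the shell.
. "$tmp/about.vars"
PAGE_YEAR=$(printf '%s' "$PAGE_DATE" | cut -d- -f1)

# One -e expression per variable, every time, for every page.
sed -e "s|__PAGE_TITLE__|$PAGE_TITLE|g" \
    -e "s|__PAGE_DATE__|$PAGE_DATE|g" \
    -e "s|__PAGE_YEAR__|$PAGE_YEAR|g" \
    "$tmp/layout.html"
```

All of the sourcing and string-slicing happens in the shell before sed ever runs, which is exactly the overhead I wanted to get rid of.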

It came to me while my mind was wandering during yet another company all-hands meeting that should have been an email: Why store metadata as shell variables that I will inject into sed expressions? What if I could use a sed script as the metadata file? What if each page could have its own sed script, and the site itself could have a common script for expressions that apply to all pages?
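The idea sketches out like this (the tokens and file names are illustrative, not the real ones):

```shell
#!/bin/sh
# Sketch: per-page metadata lives in a sed script instead of shell
# variables, and a site-wide sed script holds the expressions common
# to every page. File names and tokens are illustrative.
set -eu
tmp=$(mktemp -d)
trap 'rm -rf "$tmp"' EXIT

# The page's own metadata, expressed directly as sed substitutions.
cat > "$tmp/new-year.sed" <<'EOF'
s|__PAGE_TITLE|New Year, New Website|g
s|__PAGE_DATE|2026-01-01|g
EOF

# Expressions that apply to every page on the site.
cat > "$tmp/common.sed" <<'EOF'
s|__SITE_NAME|starbreaker.org|g
EOF

# A layout fragment containing both kinds of token.
cat > "$tmp/layout.html" <<'EOF'
<title>__PAGE_TITLE - __SITE_NAME</title>
EOF

# One sed invocation applies the page script, then the common script.
sed -f "$tmp/new-year.sed" -f "$tmp/common.sed" "$tmp/layout.html"
```

No variable files, no intermediate shell munging: the metadata *is* the substitution script.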

It isn’t as crazy as you might think, unless you think in React. After all, aartaka uses ed(1) to build his website. However, I’m not that hardcore, even if I have read my copy of Michael W. Lucas’s ed(1) Mastery and will occasionally disable X11 and use ed(1) in the Linux console on my IBM ThinkPad T60 as a distraction-free writing environment. (Just to prove to myself that I still can. Yes, I know how it looks.)

Sometimes I even do that while listening to Seduction (1997) by Dark, but only when I’m drunk and thinking of my first girlfriend, whom I had loved and lost a few years before I met my wife of 21 years in 2000; she had that album and C. J. Cherryh’s Morgaine novels. Other times I’m listening to Straight, No Chaser or Underground by Thelonious Monk. Depends on my mood. (And if the Naomi I knew sees this: I know better than to miss her, but I wish Nims well and hope she’s happy.)

Regardless of memories and music, I did do some experimenting. The nice thing about a UNIX terminal window is that it makes a decent REPL for the shell (whether it’s bash, zsh, or ksh). You can experiment with pipelines, write the output to a temporary file in /tmp so you aren’t cluttering your home directory, and make adjustments if your last set of commands doesn’t work out as you had hoped. Also, writing to a file is optional; you can just pipe the output to less and discard it when you’re done.
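In miniature, that workflow looks something like this (the file names and the substitution are illustrative):

```shell
#!/bin/sh
# The shell-as-REPL workflow in miniature: write intermediate output
# to a file under /tmp, inspect it, tweak the pipeline, and repeat.
# File names and the substitution are illustrative.
set -eu
draft=$(mktemp /tmp/draft.XXXXXX)
out=$(mktemp /tmp/draft-out.XXXXXX)
trap 'rm -f "$draft" "$out"' EXIT

printf '<p>__GREETING</p>\n' > "$draft"

# First attempt at a substitution; the result lands in /tmp, not $HOME.
sed 's|__GREETING|Hello, Occasional Reader|' "$draft" > "$out"
cat "$out"

# Or skip the file and eyeball the output in a pager instead:
# sed 's|__GREETING|Hello, Occasional Reader|' "$draft" | less
```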

That’s how I went from this html: target in my previous makefile:

html: index-pages $(DST_HTML) $(TMP_ENTRIES_HTML_TXT) $(TMP_ENTRIES_HTML_RSS)  ## Process all HTML files

$(DST_DIR)/%.html: $(SRC_DIR)/%.html
       ./process-html.sh $< > $@
the html target from my original makefile

To this:

#
# PROCESS WEB PAGES
#
html: $(DST_HTML)  ## Process all HTML files

$(DST_DIR)/%.html: $(SRC_DIR)/%.html $(SRC_DIR)/%.sed $(DEP_TEMPLATES) $(DEP_INCLUDES) $(DEP_SHELL_SCRIPTS) $(COMMON_SED_FILE) $(ARACHNE_M4_FILE) $(POST_TIDY_SED_FILE) | directories indexes common.sed arachne.sed arachne.m4
        $(eval PAGE_SED_FILE:= $(subst .html,.sed,$<))
        $(eval LAYOUT_FILE:= $(shell grep __LAYOUT $(PAGE_SED_FILE) | cut -d'|' -f3))
        sed -e "/__PAGE_CONTENT/r $<" -e "//d" $(LAYOUT_FILE) \
        | hxincl -x -f -b./includes/ \
        | m4 -Q -P -E -I. \
        | sed -f $(PAGE_SED_FILE) -f $(COMMON_SED_FILE) -f $(PAGE_SED_FILE) -f $(ARACHNE_SED_FILE) > "$@"
        tidy -config $(TIDY_HTML) -m "$@" 2> /dev/null || true
        sed -i -f post-tidy.sed "$@"
        
code listing from arachne.mk, lines 86-99

Is the above more complex than just calling that shell script? Not really. The shell script was more complex because it had to read in shell variables and manipulate them.

Here’s what I do in the new version. If you don’t speak MEWNIX, Occasional Reader, you are welcome to skip the following.

  1. Given an HTML file ($<), identify the corresponding sed script for its metadata and put it in a make variable called PAGE_SED_FILE.
  2. Find the path to the desired layout in the sed script (__LAYOUT).
  3. Use sed’s r command to inject the contents of the HTML file into the template.
  4. Pipe the output of sed into hxincl to pull in partials as if they were server-side includes, but locally.
  5. Pipe the output of hxincl into M4 using switches like -P (a GNU extension) for additional safety. By default, if your text includes words like ‘define’ or ‘include’, M4 will mistake them for commands. The -P switch forces M4 to only act on commands prefixed with m4_. In addition, my own macro names are in all-caps and all begin with M4_ for additional safety. I read the man page, you see, and tried to learn from others’ mistakes.
  6. Pipe M4’s output into sed, which applies the page’s sed script, a common sed script, and then the page’s sed script again. This triple pass lets me account for situations where the common script introduces variables like ../../.. (for relative paths) that would break things if not handled. It then applies a few expressions for dates I can’t script because they use make variables, and writes the result to the destination file ($@).
  7. I then run tidy but ignore its output and override its return code because it will return an error code even for warnings, which would halt make if not handled.
  8. I finally run an additional sed script because tidy will remove line breaks around HTML comments and, quite frankly, that offends me.
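The -P behavior in step 5 is easy to demonstrate in isolation. Here is a minimal sketch (the M4_SITE macro is invented for the example) showing that, with prefixed builtins, the bare word ‘define’ passes through untouched while m4_define still works:

```shell
#!/bin/sh
# Minimal demonstration of GNU m4's -P (--prefix-builtins) switch.
# The M4_SITE macro is invented for this example.
set -eu
command -v m4 >/dev/null || { echo "GNU m4 is not installed"; exit 0; }
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT

cat > "$tmp" <<'EOF'
m4_define(`M4_SITE', `starbreaker.org')m4_dnl
We define M4_SITE as the site name.
EOF

# With -P, the bare word "define" is left alone; only m4_define acts
# as a builtin, and the user macro M4_SITE expands where it appears.
m4 -P "$tmp"
# prints: We define starbreaker.org as the site name.
```

Without -P, that innocent “define” in the prose would have been swallowed as a builtin.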

The original process-html.sh script that I had written a couple of years ago also implements this functionality, but with more overhead. Running these commands directly in make results in better performance, especially on my IBM ThinkPad T60.

I’ve made other adjustments as well, based on additional readings of the GNU make manual. For example, I’ve come up with a more streamlined approach to generating <item> XML partials for RSS feeds from my posts without having to compile the post twice. It involves using hxselect: a companion to hxincl in the W3C's HTML-XML-utils kit.
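The gist of the hxselect trick can be sketched with made-up markup (this is an illustration of the tool, not my actual recipe): hxselect reads well-formed XML on standard input and prints the elements matching a CSS selector, and with -c it prints only their contents.

```shell
#!/bin/sh
# Sketch: pull an RSS <item>'s pieces out of an already-built page
# with hxselect, instead of compiling the post a second time.
# The markup and selectors here are illustrative.
set -eu
command -v hxselect >/dev/null || { echo "html-xml-utils not installed"; exit 0; }
tmp=$(mktemp)
trap 'rm -f "$tmp"' EXIT

cat > "$tmp" <<'EOF'
<html><head><title>New Year, New Website</title></head>
<body><article><p>Happy 2026.</p></article></body></html>
EOF

# -c prints only the matched element's content, not its tags.
title=$(hxselect -c title < "$tmp")
body=$(hxselect -c article < "$tmp")

printf '<item>\n  <title>%s</title>\n  <description><![CDATA[%s]]></description>\n</item>\n' \
    "$title" "$body"
```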

I’ve also expanded my collection of M4 macros somewhat, and come up with a way to create headings and tables of contents with M4 macros that keep IDs and heading text consistent via sed replacements. Rather than create brittle scripts to automate the creation of index pages and RSS feeds, I am using includes and macros to implement a more organic approach. One approach I’ve kept is my use of avifenc to create AVIF images from JPEG and PNG images. I had that in my old makefile as a one-liner, and it works fine. Likewise my directives for copying over the originals so I can use them as fallbacks in older browsers that don’t support AVIF images or the <picture> element.
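In makefile terms, the AVIF handling amounts to pattern rules along these lines (a sketch; the directory variables are illustrative, not the ones in arachne.mk):

```make
# Sketch: build AVIF versions of JPEG and PNG sources with avifenc,
# and copy the originals across as fallbacks for browsers that lack
# AVIF or <picture> support. Variable names are illustrative.
$(DST_DIR)/%.avif: $(SRC_DIR)/%.jpg
	avifenc "$<" "$@"

$(DST_DIR)/%.avif: $(SRC_DIR)/%.png
	avifenc "$<" "$@"

# Fallback copies of the originals.
$(DST_DIR)/%.jpg: $(SRC_DIR)/%.jpg
	cp "$<" "$@"

$(DST_DIR)/%.png: $(SRC_DIR)/%.png
	cp "$<" "$@"
```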

Why am I doing all of this? I want to refine my tooling so that it will serve me for the rest of my life. I don’t want to ever again use another CMS that is not the MEWNIX filesystem. I don’t want to ever again use another Web framework that isn’t the operating system itself. I know it’s possible. We have the technology.

There are, after all, things I want to do with my space in the Wired:

And on a note that has nothing whatsoever to do with this website, there are a couple of other things I want to do:

In the meantime, starbreaker.org is on hiatus while I work on the new version. I was strongly tempted to go live with the new version today, putting the old version on an archive subdomain. That was supposed to be my Christmas present to myself, but I thought better of it.

I’d rather do it properly and confirm that all old URLs redirect to the new ones. Otherwise I’d just be confusing and annoying people who have yet to annoy me. That hardly seems fair, does it?

Hopefully I will remember to give notice a week before I finally push the new version. Please don’t panic if your feed reader shows a bunch of duplicate, unread entries for posts you know you’ve read. You can unsubscribe and resubscribe, or mark as read anything older than 2026-01-01.

But don’t ask me how long I’ll be on hiatus. It’ll take as long as it takes. When I finally run make install, though, there will be plenty of new stuff for you to overdose on. Hell, there might even be a completed second draft of Revölution by Night.
