2024-03-15 Happy Pi(ROUNDUP) Day

What I Did


Some essays for you on computer input/output. These are peripheral to computing and, therefore, easy to output (get it?):


I cleared out quite a bit, and was able to finish my Hacker-News-hyperlink-adding sidequest at the literal eleventh hour (around 11:10 p.m., about an hour ago). Still yak shaving, though my stamina has been too taxed by the busy season at the day job to push as heavily into the online courses.

I deeply miss writing things out. It feels like I have a hole in my head that the creative juices are leaking from. Thankfully, I’m on my way to the end of the de-hoard business.

What I Learned

Under specific conditions, someone working hard at data entry is indistinguishable over a network from an automated script. All it takes is limited intel on the site's side (e.g., NoScript + a VPN), someone on the autism spectrum, and software designed to flag that sort of thing.

This isn’t my first rodeo in data-harvesting, but it does seem like the internet is slowly becoming more hostile to it.

By my estimation, the privacy we've grown accustomed to will keep eroding as the years go on. Today a bank presumes people will hand over their Social Security number; in 10 years it may be a fingerprint or a DNA sample.

What I’m Doing


  • Working in an insurance office right now.
  • Keeping a home together with a woman at the maximum threshold of the Crazy/Hot Matrix.
  • Slowly succumbing to the standard mental decline caused by maintaining two schoolchildren before they’re old enough to vote.


My Grandiose De-Hoarding Mission now has 2 domains, loosely inspired by Johnny.Decimal:

  • The system consists of 3,813 files, each one containing between 1 and 50 subjects.
  • As I go, each condensation will make fewer files, but each re-categorization may make more files.
  • The number is moderately arbitrary relative to results, thereby avoiding the risk of Goodhart’s Law while also implying I’ve made some sort of progress.

The software-leaning side has 2,701 pages, and will (eventually) go to my toolbox:

  • 03X — an inbox for stuff bound everywhere else, where I dump any new content when it’s not obvious or convenient to file away (66 files)
  • 1XX — need to both sift for duplicates in the system and group the information (550 files)
  • 2XX — need to sift for duplicates in the system (499 files)
  • 3XX — need to group the information (1,414 files)
  • 4XX — has been sifted and grouped, and is ready for the toolbox, presuming I understand it (167 files)

The writing-leaning side has 1,109 pages, and spans the output of my Trendless Tech essays and my remaining NotaGenius essays:

  • 02X — content to update my already-finalized essays (87 files)
  • 05X — needs regrouping into narrower classifications (252 files)
  • 1XX — written content (my notes or copy-pasted stuff) that must make its way to a new essay (681 files)
  • 2XX — hyperlinks-only lists of guides (45 files)
  • 3XX — hyperlinks-only opinions and expert wisdom (43 files)
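The two category inventories above can be read as a small data structure. A minimal sketch: the category codes and file counts are copied from the lists, while the dict layout and the helper function are my own illustration:

```python
# Johnny.Decimal-style category inventories from the two lists above.
# Keys are category codes, values are file counts (copied verbatim).
SOFTWARE = {"03X": 66, "1XX": 550, "2XX": 499, "3XX": 1414, "4XX": 167}
WRITING = {"02X": 87, "05X": 252, "1XX": 681, "2XX": 45, "3XX": 43}

def total_files(side: dict) -> int:
    """Sum the file counts for one side of the system."""
    return sum(side.values())
```

Summing one side's counts gives a quick sanity check on progress as files get condensed and re-categorized.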

Throughout the entire system, I maintain a sub-schema that actually reflects the content I’m building.

The work moves through a sequence of “phases”:

  1. Sift through the duplicates (somewhat alongside Phase 2)
    • S1->S3
  2. Precisely group content (somewhat alongside Phase 1)
    • S1->S2
  3. Sift through duplicates in grouped content (alongside Phase 4)
    • S2->S4
  4. Group/merge content into other categorizations (alongside Phase 3)
    • S3->S4
  5. Separate out the toolbox items, guides, and opinions
    • S4->TB
    • S4->W2
    • S4->W3
  6. Regroup the essays and update old content
    • S0->S1
    • S0->S2
    • S0->S3
    • S0->TT/NAG/TLS
  7. Add ready-to-go content updates, which will make all my essays officially “done”
    • S1->TT/NAG/TLS
  8. Make decisions on the guides
    • S2->Maybe/Later
    • S2->?
  9. Consume and update the last of TrendlessTech
    • S3->TT
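The phase list reads like a small state machine. A minimal sketch of it as a transition table — the stage codes (S0–S4) and destinations (TB, W2, W3, TT/NAG/TLS, Maybe/Later, ?) come from the list above, while the dict structure and helper are my own illustration:

```python
# Each phase maps to the list of (source, destination) moves it performs,
# transcribed from the phase list above.
PHASES = {
    1: [("S1", "S3")],                               # sift duplicates
    2: [("S1", "S2")],                               # precisely group content
    3: [("S2", "S4")],                               # sift duplicates in grouped content
    4: [("S3", "S4")],                               # group/merge into other categorizations
    5: [("S4", "TB"), ("S4", "W2"), ("S4", "W3")],   # toolbox, guides, opinions
    6: [("S0", "S1"), ("S0", "S2"), ("S0", "S3"),
        ("S0", "TT/NAG/TLS")],                        # regroup essays, update old content
    7: [("S1", "TT/NAG/TLS")],                        # ready-to-go content updates
    8: [("S2", "Maybe/Later"), ("S2", "?")],          # decisions on the guides
    9: [("S3", "TT")],                                # last of TrendlessTech
}

def targets_of(stage: str) -> list:
    """All destinations reachable in one phase from a given stage."""
    return sorted({dst for moves in PHASES.values()
                   for src, dst in moves if src == stage})
```

Seen this way, S4 is the fan-out point: everything that survives sifting and grouping splits into the toolbox or one of the writing categories.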