It has been quite a while since I wrote one of these. Since my grandmother's passing near the end of last year I've felt like I've been in a fugue state, or, as an old supervisor used to say, a "hot mess express."
Thunderus
So far this year I've been tweaking my workflows with respect to AI and the tools I've been using. For most of the past summer and fall I used Claude Code (with Pro) but have recently pivoted to more cost-effective tools, namely Antigravity (Google Opinion Rewards + Pro = $0) & z.ai's Coding Plan (lite has been enough for me so far this month).
Antigravity has been...interesting. It went from a way to get cheap access to Opus to a constantly broken, unclear mess. Gemini 3 Pro is borderline unusable and doesn't follow any constraints, rules, or patterns. Flash is great but burns through usage fairly quickly. My current experiment is simply having the agent work on a task, reviewing the result myself, then reprompting.
This also led me to try out OpenCode & Crush in place of Claude, and while I like them, some of the tools didn't work super well. OpenCode would constantly misplace brackets in Dart code and get stuck in loops trying to fix them, and Crush, while slower to reach that point, would eventually waste a lot of time on the same problem.
All complaining aside, what I'm trying to do here is create systems that keep the human-in-the-loop and also minimize scenarios in which users produce code they don't understand. The two concepts I've settled on are mixed initiative & progressive disclosure. There's literally a teaching module in my codebase.
I'm continuing to build at the manic-ish rate I've been doing but trying to really change my workflows & processes.
Also I want these things to stop being in JS/TS. Rust and Go are more fun to write terminal tools with. Fight me.
Mccabre
This project evolved from a small CLI sandbox for me to play around with code analysis algorithms (McCabe cyclomatic complexity, repeated code, etc.) to something I'd like Thunderus to read from to evaluate a model's output.
A big step forward was creating a view in the terminal that resembles code coverage tools so an LLM can read coverage files the way I look at codecov's hits/misses view.
Lectito
Lectito is the sum of months of sporadic article fetching and parsing work I've done. Noteleaf has an implementation in Go, and my AT Proto powered learning platform, Malfestio, has what's basically the prototype for this. It fetches an article and scores it based on its structure to find the bulk of the text content. For some sites, it uses XPath rules to find specific elements to extract text. I think it's kinda neat and even wrote a few words about it in its book.
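To give a feel for the structural-scoring idea, here's a toy sketch in Go — not Lectito's actual algorithm. The `block` struct, the weights, and the link-density penalty are all invented for the example; the readability-style intuition is just that long, comma-rich prose scores high and link-heavy navigation scores low.

```go
package main

import (
	"fmt"
	"strings"
)

// block is a candidate content node from a parsed page. In a real
// tool this would come from an HTML tree; it's flattened here so
// the heuristic stands on its own.
type block struct {
	Text      string
	LinkChars int // characters that sit inside links
}

// score rewards long runs of prose (length plus a bonus per comma)
// and scales the result down by the block's link density.
func score(b block) float64 {
	text := strings.TrimSpace(b.Text)
	if len(text) == 0 {
		return 0
	}
	linkDensity := float64(b.LinkChars) / float64(len(text))
	base := float64(len(text)) + float64(strings.Count(text, ","))*10
	return base * (1 - linkDensity)
}

func main() {
	blocks := []block{
		{Text: "Home | About | Archive", LinkChars: 20},
		{Text: "The article body, with several sentences, full of commas and actual prose that runs on for a while.", LinkChars: 0},
	}
	best, bestScore := -1, 0.0
	for i, b := range blocks {
		if s := score(b); s > bestScore {
			best, bestScore = i, s
		}
	}
	fmt.Println("picked block", best)
}
```

The navigation bar scores near zero because nearly every character is inside a link, so the article body wins. Site-specific XPath rules then cover the pages where heuristics like this fall over.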
MCP
This was always going to be the logical evolution/next step from Lectito. The problem it solves for me is that the coding plan I use for my main model, GLM-4.7, only includes 100 search/web fetch requests per month.
That feels limiting and a little anxiety-inducing, so I built a fetcher on top of Lectito's parsing capabilities to take article text and let the model read and process the information.
I'd only ever used the browser MCP with Claude and I didn't really understand how this protocol worked until I wrote my own.
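For anyone in the same boat: MCP messages are JSON-RPC 2.0, and a tool invocation arrives as a `tools/call` request naming the tool and its arguments. Here's a stripped-down sketch in Go of handling one — the `fetch_article` tool name and the echo behavior are made up for illustration, and a real server would speak this over stdio rather than a hard-coded string.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Minimal JSON-RPC 2.0 envelope, which MCP rides on top of.
type request struct {
	JSONRPC string          `json:"jsonrpc"`
	ID      int             `json:"id"`
	Method  string          `json:"method"`
	Params  json.RawMessage `json:"params"`
}

type response struct {
	JSONRPC string      `json:"jsonrpc"`
	ID      int         `json:"id"`
	Result  interface{} `json:"result,omitempty"`
}

// toolCallParams mirrors the params of a tools/call request.
type toolCallParams struct {
	Name      string            `json:"name"`
	Arguments map[string]string `json:"arguments"`
}

func handle(raw []byte) ([]byte, error) {
	var req request
	if err := json.Unmarshal(raw, &req); err != nil {
		return nil, err
	}
	resp := response{JSONRPC: "2.0", ID: req.ID}
	switch req.Method {
	case "tools/call":
		var p toolCallParams
		if err := json.Unmarshal(req.Params, &p); err != nil {
			return nil, err
		}
		// A real server would fetch and parse the page here;
		// this just echoes the URL back as text content.
		text := "fetched: " + p.Arguments["url"]
		resp.Result = map[string]interface{}{
			"content": []map[string]string{{"type": "text", "text": text}},
		}
	}
	return json.Marshal(resp)
}

func main() {
	in := `{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"fetch_article","arguments":{"url":"https://example.com/post"}}}`
	out, err := handle([]byte(in))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Once the request/response plumbing clicked, the rest was just wiring the parser behind a tool.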
Lazurite
I started the week working on my BlueSky client project. It's been fun to take inspiration and ideas from the community to build it. Currently I'm in the middle of a refactor to move domain models to Freezed, which I really should have done from the get-go. It's another code-gen-based library that's been removing a ton of boilerplate from domain models. Most files are now half the size of what I'd previously written, albeit with huge .g|freezed.dart files.
Looking Ahead
This weekend I want to really take advantage of standard.site and have my posts live here and on my personal website, maybe also in my digital garden. This is born out of wanting to stay immersed in the AT Protocol while, at a higher/meta level, I try to stay focused on shipping and finishing projects.