About this project
This is the site you are reading. It's an atlas of the projects I've been building on my server, with a page for each one and a blog attached for anything that wants more space than a project page gives.
There are a couple of reasons it exists. One is that some of this is going to be open-sourced, and there needs to be somewhere that describes the tools so people can decide whether they'd be useful. The other is methodology. A lot of the conversation around AI-written code right now is very vibe-coded, and I have ended up taking a different approach: treating models like a development team rather than a magic autocomplete, and pairing them with real tests and real coordination.
The idea I really want to land is this: models generate the next token by sampling from a probability distribution, so there is always some probability of being wrong. You can't get rid of that. What you can do is surround the model with the things software developers have used for decades to catch being wrong: tests, clear specs, small reviewable changes, and the ability to refactor and prove the refactor didn't break anything. Once you do that, AI-written code starts behaving a lot more like any other software, and I find that genuinely interesting.