05 May 26
In the past two weeks, I’ve been leading an effort to roll out Claude Code to our engineering team. We have a repo containing several Claude config and dot files: some git conventions, a Merge Request skill, some MCP config, and so on. It’s all easily installed via Rake, and I made the task idempotent so that each team member can easily pull in new updates and changes. So our scaffolding is in place.
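For the curious, the idempotent install can be sketched roughly like this. This is a minimal illustration, not our actual Rakefile: the `install_claude_config` helper, the `.claude` source directory, and the target path are all hypothetical names. The key idea is comparing checksums so re-running the task only copies files that actually changed.

```ruby
require "fileutils"
require "digest"

# Copy each config file into the target directory, skipping files that are
# already up to date, so the task is safe to re-run (idempotent).
def install_claude_config(source_dir, target_dir)
  FileUtils.mkdir_p(target_dir)
  Dir.glob(File.join(source_dir, "*")).each do |src|
    dest = File.join(target_dir, File.basename(src))
    # Skip when the destination already matches the source byte-for-byte.
    next if File.exist?(dest) &&
            Digest::SHA256.file(src) == Digest::SHA256.file(dest)
    FileUtils.cp(src, dest)
    puts "installed #{File.basename(src)}"
  end
end
```

In a Rakefile you’d wrap the call in a task, something like `task(:install_claude) { install_claude_config(".claude", File.expand_path("~/.claude")) }`, so everyone runs the same one-liner.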
Next, we’re thinking about how to efficiently provide context in our various repos. We have an AGENTS file, but we’re looking at documentation and how it can be used. We may introduce PRDs. We may introduce some changelog-type docs. But I’m reminded of an approach used at Slack called Runbooks. I’ve worked on teams that have borrowed this approach.
The idea is pretty simple. You have a Runbooks directory at the root of your repo. In it are simple Markdown files that chronicle decisions made, how to get started, how to run the service and its tests, etc. Things that are discussed in conversations and agreed upon get documented in a Runbook. The idea is that if you pull down the repo, you have access to everything you need to get started and begin contributing. And if you ever need to troubleshoot something, there is context around why that thing is the way it is. Those are all pros in my book. The con is that it lives in the repo. Some members of the teams I was on didn’t like this; they wanted the documentation in something like Confluence or Notion. I like the idea of the Runbook context getting committed as part of the Pull Request. Having the documentation separate from the code felt like a disconnect. But I understand that, in some setups, not all stakeholders have access to the repo.
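To make that concrete, here’s a hypothetical layout (the file names are made up for illustration, not from any real repo I’ve worked in):

```
runbooks/
  getting-started.md        # setup, dependencies, first run
  running-the-service.md    # local dev, environments
  running-tests.md          # test suites, CI quirks
  decisions/
    2026-01-switch-to-sidekiq.md   # what was decided and why
```

Each file is plain Markdown, committed alongside the code it describes, so a decision and the change it motivated land in the same Pull Request.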
But this idea has stuck with me and seems to have advantages in the new world of AI and context engineering. We’re experimenting with documentation outside the repo, but having the agent know where to look for it. I think there would be advantages to having the plain English Markdown files alongside the code.
This is something we’re going to experiment with over the coming weeks.
The other topic that has been on my mind lately is the cognitive debt discussion. I’ve read a number of posts, linked below, about teams feeling like they no longer have a solid mental model of their application because the code is being generated by an agent. There are multiple things to consider here. Is the agent just being given a task with no one really reviewing the output? Are people reviewing it just to make sure it is not atrocious, but not really spending the time to grok it? Maybe they’re spending some time really reviewing the code, but not ensuring it applies OO the way they would apply OO. I miss the days of discussing OO. Anywho, I think this is a real issue, and it’s popping up very quickly. Teams are losing confidence that they have a full understanding of, and control over, the codebases they are building and deploying. We’ll see how this develops over time.
▧ ▧ ▧
Links
https://strategizeyourcareer.com/p/ai-making-developers-lazy - this is fascinating; the studies saying you comprehend much less when using AI to generate code don’t surprise me in the slightest
https://margaretstorey.com/blog/2026/02/18/cognitive-debt-revisited/ - interesting post about teams losing a shared understanding of their codebase(s)
▧ ▧ ▧
Music
- https://normal-bias.bandcamp.com/album/kingdom-come - h/t to Mikey for sending me this one, great DM inspired new wave stuff from NY
- https://www.nts.live/shows/dark-entries-record/episodes/dark-entries-record-8th-january-2026 - great show, great tribute to Perry Bamonte of The Cure
▧ ▧ ▧
A couple of promotions each week. First, use my invite link to try Warp as your terminal. It’s fast and has some great features. I’m not affiliated with them at all, just really like it. Second, I’m writing a book about learning Rust for those familiar with Ruby. Stay tuned. As always, you can connect with me more at https://mikekrisher.com.