Documentation Architecture: ADRs and Decision Logs for AI-Generated Systems

When you build a system powered by AI, whether it’s a recommendation engine, a real-time fraud detector, or an automated customer service bot, you’re not just writing code. You’re making decisions that ripple across performance, cost, security, and maintenance. And if you don’t document those decisions, you’re setting your team up for confusion, blame-shifting, and costly rework. This isn’t theory: according to Microsoft’s Azure team, it’s what happens in 73% of teams that skip proper architecture documentation.

What Exactly Is an ADR?

An Architecture Decision Record (ADR) is a simple, focused document that answers three questions: What did we decide? Why did we decide it? What did it change? It’s not a 50-page design spec. It’s a one-page log entry, written in plain language, stored right next to your code.

Think of it like a journal for your system’s brain. Every time your team chooses between Kafka and RabbitMQ, picks a vector database for embeddings, or decides to use fine-tuned LLMs instead of prompt chaining, you write an ADR. It captures the trade-offs: "We chose Pinecone over Weaviate because latency under 100ms was non-negotiable, and Weaviate’s cold-start times were 3x higher in our tests."

ADRs became popular between 2010 and 2015 as systems grew too complex for memory alone. Michael Nygard’s 2011 blog post "Documenting Architecture Decisions" laid the groundwork, but it was teams at Amazon, Microsoft, and Salesforce who turned ADRs into a standard practice. Today, AWS, Azure, and Google Cloud all include ADRs in their official architectural frameworks. If you’re working on anything enterprise-grade, not having ADRs is a red flag during architecture reviews.

Why AI-Generated Systems Need ADRs More Than Ever

Traditional software decisions are often clear-cut: use PostgreSQL or MySQL? Go with Node.js or Python? The trade-offs are well-known. But AI systems? They’re messy. A model’s behavior changes with its training data. A prompt tweak can make responses more accurate, or cause the model to hallucinate entirely. A fine-tuning job might improve accuracy by 12% but double your inference costs.

Without ADRs, teams lose track of why a model was chosen, when it was retrained, or which evaluation metrics were used to declare it "good enough." One team at a Fortune 500 bank spent three weeks debugging a customer chatbot that kept giving wrong loan advice. Turns out, the model had been swapped out six months earlier, and no one documented why the old one was retired. The ADR was never written.

AI systems also introduce new decision categories:

  • Model selection: LLM vs. smaller fine-tuned models vs. hybrid approaches
  • Embedding model: OpenAI’s text-embedding-3-small vs. Cohere vs. local Sentence-BERT
  • Retrieval strategy: Vector search vs. keyword + semantic hybrid
  • Guardrails: Prompt injection filters, output sanitizers, human-in-the-loop thresholds
  • Monitoring: Drift detection, latency thresholds, cost-per-request caps

Each of these decisions needs context. Not just "we picked Cohere," but "Cohere was chosen because its API latency was 40% lower than OpenAI’s in our EU region tests, and we needed sub-200ms responses for real-time chat."

The AI-Generated ADR: How It Works

You don’t have to write ADRs by hand anymore. AI tools can now scan your code, pull from commit messages, analyze model logs, and auto-generate draft ADRs. Salesforce’s team uses Claude Code to scan pull requests. When a developer changes the model endpoint or adds a new vector index, the AI notices, pulls the relevant context from documentation, and drafts an ADR in seconds.

Here’s how it breaks down:

  1. Trigger: A code change affects system architecture (e.g., switching from LangChain to LlamaIndex).
  2. AI analysis: The tool scans the diff, checks related tests, reviews model metrics, and pulls from past ADRs.
  3. Draft generation: AI creates a template-filled ADR with context, alternatives considered, and impact summary.
  4. Human review: The architect adds nuance: "We ruled out LlamaIndex because it doesn’t support our custom metadata filters yet; this will block our next feature. We’ll revisit in Q2."
  5. Commit: Final ADR is saved as docs/adr/007-switch-to-llamaindex.md in the repo.

Dennis Adolfi’s team at Umbraco cut ADR creation time by 73% using this method. But here’s the catch: AI gets the technical trade-offs right 92% of the time. It only understands business context 63% of the time. That’s why Salesforce insists on the human-led, AI-powered model. The AI does the grunt work. The human adds empathy, stakeholder needs, and risk awareness.
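To make that workflow concrete, here is a minimal sketch of steps 1 through 3, assuming a Git repo and a hypothetical list of "architecturally significant" paths. It is an illustration of the idea, not any vendor’s tool; the patterns, template wording, and file layout are assumptions you would adapt to your own project.

```python
"""Sketch: detect an architecture-significant change and draft an ADR stub.

SIGNIFICANT_PATTERNS, the template wording, and the file layout are
illustrative assumptions -- adapt them to your own repo and review process.
"""
import subprocess
from datetime import date
from pathlib import Path

SIGNIFICANT_PATTERNS = ("model_config", "vector_index", "retrieval/", "prompts/")
ADR_DIR = Path("docs/adr")

ADR_TEMPLATE = """# {number:03d}. {title}

Date: {today}
Status: Proposed (auto-generated draft -- needs human review)

## Context
Change detected in: {files}

## Decision
<what was chosen, with the specific configuration>

## Consequences
<what got better, what got worse, cost and latency impact>

## Alternatives considered
<what was ruled out, and why>
"""


def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched by the current branch relative to the base branch."""
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return [f for f in out.stdout.splitlines() if f]


def draft_adr(title: str) -> Path | None:
    """Write a draft ADR stub if the diff touches anything architecturally significant."""
    hits = [f for f in changed_files() if any(p in f for p in SIGNIFICANT_PATTERNS)]
    if not hits:
        return None
    ADR_DIR.mkdir(parents=True, exist_ok=True)
    number = len(list(ADR_DIR.glob("*.md"))) + 1
    path = ADR_DIR / f"{number:03d}-{title.lower().replace(' ', '-')}.md"
    path.write_text(ADR_TEMPLATE.format(number=number, title=title,
                                        today=date.today(), files=", ".join(hits)))
    return path


if __name__ == "__main__":
    print(draft_adr("Switch retrieval framework") or "No ADR needed for this change.")
```

The human review and commit steps stay exactly as described above: the draft only becomes part of the record once an architect has filled in the reasoning and business context.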

What Goes Into a Good ADR?

A strong ADR isn’t optional fluff. It’s a legal-grade record. AWS and Microsoft both say: "An ADR, once accepted, becomes part of the system’s truth." Here’s what every ADR must include:

  • Context: What problem were you solving? What constraints existed? (e.g., "Must support 500 concurrent users with < 1s latency on AWS Lambda.")
  • Decision: What was chosen? Be specific. Not "used a vector DB," but "used Pinecone with index type HNSW, dimension 1536, metric cosine."
  • Consequences: What changed? What got better? What got worse? (e.g., "Latency dropped from 1.8s to 0.7s, but cost increased 22% due to Pinecone’s per-query pricing.")
  • Alternatives considered: What did you rule out? Why? (e.g., "We rejected Weaviate because its cold-start times exceeded 3s on free tier, which broke our SLA.")
  • Date and author: Timestamped. Signed off by at least one architect.

GitHub’s template, used by over 10,000 teams, is the de facto standard. It’s simple. It’s readable. It’s version-controlled. And it forces you to answer all five questions.
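As a starting point, here is a minimal sketch of what a one-page ADR covering those five questions can look like. The numbers are lifted from the examples above, and the exact headings will vary with whichever template your team adopts:

```markdown
# 012. Use Pinecone for vector search

Date: 2025-01-15
Author: <reviewing architect>
Status: Accepted

## Context
Real-time chat must support 500 concurrent users with end-to-end latency under 1s.

## Decision
Use Pinecone with index type HNSW, dimension 1536, metric cosine.

## Consequences
Latency dropped from 1.8s to 0.7s, meeting the budget; cost increased ~22% due to per-query pricing.

## Alternatives considered
Weaviate: cold-start times exceeded 3s on the free tier, which broke our SLA.
```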

Where to Store ADRs, and Why It Matters

ADRs live in the code repository. Not in Confluence. Not in Notion. Not in a shared drive. They live in /docs/adr/ inside your Git repo. Why? Because decisions are tied to code. If you move a model endpoint, the ADR should change with it. If you roll back a commit, the ADR should still be there.
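In practice that can be as simple as a folder of numbered Markdown files living alongside the source (the names here are illustrative, echoing the examples above):

```text
your-repo/
├── src/
├── tests/
└── docs/
    └── adr/
        ├── 001-use-pinecone-for-vector-search.md
        ├── 002-add-prompt-injection-guardrails.md
        └── 007-switch-to-llamaindex.md
```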

Microsoft mandates this. AWS recommends it. And teams that follow it see 41% faster onboarding for new engineers. One developer told Reddit: "I joined a project last week. Found three ADRs explaining why we didn’t use GPT-4. Saved me two days of asking the same questions."

Tools like docToolchain and Structurizr now automate ADR generation and link them to diagrams, API specs, and test results. But the core rule stays: ADRs are part of the source code.

The Big Debate: Immutable or Mutable?

There’s a fierce split in the community. AWS says: "ADRs are immutable. If you change your mind, write a new ADR. Never edit an old one." GitHub’s team says: "In practice, we update ADRs with new info. We add a note: ‘Updated on 2025-03-12: New data shows cost dropped 18% after switching to spot instances.’"

Both have merit. Immutable ADRs preserve historical truth. Mutable ADRs keep the record accurate. The best teams do both: keep the original ADR untouched, and add a ## Updates section at the bottom with timestamps.
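In file form, that hybrid is nothing more than a dated block appended below the original text, for example:

```markdown
## Updates
- 2025-03-12: New data shows cost dropped 18% after switching to spot instances.
  The original decision and reasoning above are unchanged.
```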

One engineering lead at a health tech startup told us: "We tried immutable. Then our compliance auditor asked why we didn’t mention HIPAA compliance in ADR #12. We had to write a new ADR, then link it back. It was a mess. Now we update. We just label it clearly."

Common Pitfalls and How to Avoid Them

Most teams fail at ADRs not because they don’t get the concept, but because they treat them as a chore. Here are the top mistakes:

  • Writing them after the fact. 78% of developers delay documentation until context fades. AI helps here by auto-generating drafts right after code is merged.
  • Writing vague ADRs. "We chose GPT-4" isn’t enough. Aim for something like: "We chose GPT-4-turbo because its cost-per-token dropped 30% in Q4 2024, and our accuracy benchmarks improved 11% on medical intent classification."
  • Ignoring non-functional requirements. Security, latency, cost, and scalability matter just as much as features. AI tools now detect these 89% of the time; humans still miss them 33% of the time.
  • Not enforcing them. No ADR? No merge. Make it part of your pull request checklist. GitHub Actions can auto-check for ADR existence before allowing merges; a sketch of such a check follows this list.
  • Over-relying on AI. AI can miss ethical implications, regulatory risks, or stakeholder politics. A human must sign off. One architect on Hacker News saw AI miss a critical security flaw in 3 out of 12 auto-generated ADRs.
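The enforcement check mentioned in the list above can be a small script your CI pipeline runs on every pull request. Here is a minimal sketch, assuming the pipeline has the base branch available and using hypothetical path patterns for what counts as architecturally significant:

```python
"""Sketch of a merge gate: fail CI when an architecture-significant change
lands without touching docs/adr/. The patterns are illustrative assumptions."""
import subprocess
import sys

SIGNIFICANT = ("model_config", "vector_index", "prompts/", "retrieval/")


def changed(base: str = "origin/main") -> list[str]:
    out = subprocess.run(["git", "diff", "--name-only", base],
                         capture_output=True, text=True, check=True)
    return out.stdout.splitlines()


def main() -> int:
    files = changed()
    significant = [f for f in files if any(p in f for p in SIGNIFICANT)]
    has_adr = any(f.startswith("docs/adr/") for f in files)
    if significant and not has_adr:
        print("Blocked: architecture-significant change without an ADR:")
        print("\n".join(f"  {f}" for f in significant))
        return 1  # non-zero exit fails the job, which blocks the merge
    return 0


if __name__ == "__main__":
    sys.exit(main())
```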

What’s Next? Real-Time ADRs and the Future

The next wave is real-time ADR generation. Salesforce is testing a feature that suggests ADRs during pull request reviews. If you change the model’s temperature setting or add a new guardrail, a bot says: "This affects system reliability. Would you like to create an ADR?"

By 2026, Gartner predicts 70% of enterprise teams will use AI co-pilots for ADRs, up from 15% in 2023. Regulatory bodies like the UK’s Financial Conduct Authority already require transparent documentation of technical decisions affecting AI behavior. ADRs are no longer optional; they’re a compliance requirement.

Teams that adopt AI-assisted ADRs see 76% fewer recurring debates. Developers stop saying, "Why did we do it this way?" and start saying, "Let me check the ADR." That’s not efficiency. That’s sanity.

Start Small. Start Now.

You don’t need a fancy tool. You don’t need a team of architects. Just do this:

  1. Create a /docs/adr/ folder in your repo.
  2. Copy GitHub’s ADR template.
  3. Write one ADR for the next big decision you make, whether it’s choosing a model, a vector DB, or a prompt structure.
  4. Make it part of your merge process.

Don’t wait for perfect; done is good enough. Your future self (and your new hires) will thank you.

What’s the difference between an ADR and a design document?

A design document is a broad overview, often hundreds of pages, covering system goals, components, workflows, and diagrams. An ADR is narrow and focused: one decision, one page, with context and consequences. You don’t need a design document to start. But you absolutely need an ADR for every major architectural choice.

Do I need an ADR for every small change?

No. Only for architecturally significant decisions: those that are hard to change later, affect multiple components, or have long-term cost or performance impacts. Switching from one LLM provider to another? Yes. Changing a variable name? No. A good rule: if you’d have to explain it to a new engineer three months from now, write an ADR.

Can AI replace human architects in writing ADRs?

No. AI is great at scanning code, pulling metrics, and formatting templates. But it can’t understand business priorities, regulatory risks, team dynamics, or stakeholder trade-offs. That’s why the best teams use AI to draft, and humans to approve. The human adds the "why" behind the "what."

How do I convince my team to start using ADRs?

Show them the pain. Ask: "How many times have you spent hours digging through git history or asking old team members why we did something?" Then show them one ADR that saved someone a day of work. Start with one decision. Make it easy. Use AI to auto-generate the first draft. Once people see how much time it saves, they’ll ask for it.

What if our system changes fast-do ADRs become outdated?

They don’t become outdated; they become historical. An ADR isn’t meant to reflect the current state. It’s meant to explain why a decision was made at a point in time. If you later change course, write a new ADR. That’s not a failure. That’s progress. The old ADR tells the story of where you were. The new one shows where you are now.

Comments

Megan Blakeman

OMG, YES. I just had a meltdown last week because no one documented why we switched from Cohere to OpenAI... and now the model keeps hallucinating about our client's shipping policies. 😭 I wish I'd known about ADRs sooner. Just wrote my first one today-finally feel like I'm not losing my mind.

December 24, 2025 AT 23:49

Akhil Bellam

Let me guess-you people are still using Markdown ADRs like it’s 2018? 🤦‍♂️ If you’re not auto-generating ADRs via LLMs that ingest commit diffs, telemetry, and Slack threads in real-time, you’re not engineering-you’re just... documenting your incompetence. I’ve seen teams waste 3 months relearning decisions because their ADRs were ‘human-written’ and full of fluff. AI drafts the structure, humans just slap on the emotional context. That’s it. Stop romanticizing paperwork.

December 25, 2025 AT 15:10

Amber Swartz

Okay but-have you SEEN the ADRs at my last job? 😩 One guy wrote a 3-page ADR for switching from a free-tier vector DB to Pinecone… and then left the company two weeks later. The new guy had to rewrite the ENTIRE THING because the original ADR said ‘we chose Pinecone because it felt right’ and had a doodle of a rocket in the margin. 🚀 I swear to god I cried reading it. This isn’t documentation-it’s therapy with a .md extension.

December 26, 2025 AT 12:52

Robert Byrne

You people are missing the point. It’s not about AI generating ADRs-it’s about enforcing them. I’ve seen 100+ repos where ADRs exist… but no one checks them during PR reviews. If your CI pipeline doesn’t fail when an ADR is missing for a model swap, you’re not serious. I just had to spend 4 days untangling a mess because someone swapped LLMs and didn’t update the ADR. The AI draft was there. The human didn’t care. That’s not a tech problem. That’s a cultural failure. Fix the process, not the template.

December 26, 2025 AT 17:14

Tia Muzdalifah

lowkey i just started using adrs and its kinda life changing?? like i had no idea ppl actually did this. my team thought it was extra until i showed them how i found out why we were using llama-index instead of langchain in 2 mins. no more asking ‘wait why do we do this??’ 😅 also the ai draft thing is wild-i just copy paste and tweak, feels like having a super organized coworker who never sleeps.

December 26, 2025 AT 22:34

Zoe Hill

This is so important!! I used to think ADRs were just for ‘big companies’ but honestly? Even our tiny 3-person startup needed them. Last week I almost re-implemented a guardrail that already existed because the ADR wasn’t linked in the README. 🥲 I added a little note: ‘this saved me 8 hours’ and now everyone adds one. It’s not perfect-but it’s better than guessing. Thank you for writing this! 💖

December 27, 2025 AT 21:48

Albert Navat

Look, if you’re not tracking model drift, inference cost per token, and prompt injection risk vectors in your ADRs, you’re just doing tech theater. The real value isn’t in ‘what we decided’-it’s in the feedback loops. I’ve got a custom tool that auto-generates ADRs from Prometheus alerts, LangChain logs, and GitHub issue tags. When latency spikes, it auto-creates a draft: ‘Model X triggered 12% false positives after Q4 update-revert or retrain?’ Then the engineer just clicks ‘approve’ or ‘override.’ That’s not documentation. That’s AI-driven architectural governance. If your ADRs don’t talk to your monitoring stack, they’re just digital tombstones.

December 28, 2025 AT 03:47
