This folder is a field guide to recurring failures in RAG and multi-stage LLM pipelines.
Each pattern is actionable: fast signals, root causes, a minimal repro, a deterministic fix, and links to hands-on examples (SDK-free, stdlib-only).
## How to use this folder
- Start with the symptom you’re seeing.
- Open the matching pattern and run the Minimal Repro + Standard Fix.
- Wire the acceptance criteria into CI (see Example 08) so the fix stays fixed.
| Pattern | Problem Map No. | Symptoms you’ll see | Fix entrypoint |
|---|---|---|---|
| RAG Semantic Drift (pattern_rag_semantic_drift.md) | No.1 | Plausible but ungrounded answers; citations don’t contain the claim | Example 01, Example 03 |
| Memory Desync (pattern_memory_desync.md) | — (State/Context) | Old names/IDs reappear; agents disagree across turns | Example 04 |
| Vector Store Fragmentation (pattern_vectorstore_fragmentation.md) | No.3 | Recall flips across envs; score scales change; rank inversions | Example 05 |
| Hallucination Re-Entry (pattern_hallucination_reentry.md) | — (Provenance) | Model’s prior text shows up as “evidence”; non-corpus sources cited | Example 06 |
| Bootstrap Deadlock (pattern_bootstrap_deadlock.md) | No.14 | /readyz stuck/flapping; circular waits at startup | Example 07 |
| Query Parsing Split (pattern_query_parsing_split.md) | — (Parsing) | Multi-intent prompts answered partially or mixed | Example 03, Example 04 |
| Symbolic Constraint Unlock (SCU) (pattern_symbolic_constraint_unlock.md) | No.11 (Symbolic collapse) | “Must/Only/Never” rules vanish mid-pipeline; impossible states | Example 03, Example 04, Example 08 |
Legend: Problem Map numbers refer to root categories used across the repo. “—” means cross-cutting (not a single number).
- Grounding first — Run Example 01 on a few failing questions.
- If refusal behavior or citations fail ⇒ go to Semantic Drift.
- Context/state sanity — Check `context_id`/`mem_rev`/`hash`.
- Mismatch ⇒ Memory Desync.
- Index parity — Validate `index_out/manifest.json` vs runtime.
- Drift or score scale shift ⇒ Vector Store Fragmentation.
- Provenance — Inspect `source` for cited ids.
- Any `model|chat|tmp:` id ⇒ Hallucination Re-Entry.
- Startup — If the first minute after deploy is flaky ⇒ Bootstrap Deadlock.
- Query shape — If the prompt mixes “compare… then draft…” ⇒ Query Parsing Split.
- Logic rules — If answers cross “must/only/never” boundaries ⇒ SCU.
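The provenance step of the checklist above can be sketched in a few lines of stdlib Python. The prefix list and the id format (`doc:…`) are illustrative assumptions, not the repo's actual schema; see ../examples/ for the real implementation.

```python
# Corpus-only provenance filter: flag cited ids that did not come from
# the indexed corpus (prefixes and id format are assumed for illustration).
NON_CORPUS_PREFIXES = ("model:", "chat:", "tmp:", "draft:")

def non_corpus_citations(citations):
    """Return cited ids that look like model/session text, not corpus docs."""
    return [cid for cid in citations if cid.startswith(NON_CORPUS_PREFIXES)]

# Any hit means prior model output re-entered the pipeline as "evidence":
bad = non_corpus_citations(["doc:42", "chat:prev_turn_3"])
print(bad)  # ['chat:prev_turn_3']
```

A non-empty result routes you straight to the Hallucination Re-Entry pattern.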
- Guarded Output: either the exact refusal token `not in context`, or JSON with `claim` + `citations: [id, …]` scoped to retrieved ids.
- Provenance: all citations pass the corpus-only filter (no `chat:`/`draft:`/`tmp:`).
- Context Consistency: if used, `context_id`/`mem_rev`/`hash` echoes the turn snapshot.
- Constraint Integrity (SCU): `constraints_echo` ≡ locked set; no contradiction patterns matched.
- Quality Gates (Ex. 08): precision ≥ 0.80, under-refusal ≤ 0.05, citation hit rate ≥ 0.75.
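A minimal validator for the Guarded Output criterion might look like the sketch below. The refusal token and the `claim`/`citations` field names come from the list above; everything else about the JSON schema is an assumption for illustration.

```python
import json

REFUSAL_TOKEN = "not in context"  # exact refusal token from the criteria above

def output_is_guarded(raw, retrieved_ids):
    """Accept either the exact refusal token, or JSON carrying a claim
    whose citations are scoped to the ids actually retrieved this turn."""
    if raw.strip() == REFUSAL_TOKEN:
        return True
    try:
        obj = json.loads(raw)
    except ValueError:  # not JSON at all -> ungrounded free text
        return False
    if "claim" not in obj or "citations" not in obj:
        return False
    return all(cid in retrieved_ids for cid in obj["citations"])

print(output_is_guarded('{"claim": "X", "citations": ["doc:1"]}', {"doc:1"}))  # True
print(output_is_guarded("Probably X, trust me.", {"doc:1"}))                   # False
```

Wiring this check into CI (Example 08) is what keeps a grounding fix from regressing silently.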
- pattern_rag_semantic_drift.md — How to stop plausible-but-wrong answers with hard grounding.
- pattern_memory_desync.md — One snapshot per turn; bind and echo across agents.
- pattern_vectorstore_fragmentation.md — Keep embeddings/metrics/chunkers aligned.
- pattern_hallucination_reentry.md — Keep model/session text out of evidence.
- pattern_bootstrap_deadlock.md — Deterministic startup ordering and readiness.
- pattern_query_parsing_split.md — Deterministically split multi-intent prompts.
- pattern_symbolic_constraint_unlock.md — Lock+echo constraints; gate contradictions.
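The "one snapshot per turn; bind and echo" idea behind the Memory Desync fix can be sketched as follows. The field names (`context_id`, `mem_rev`) follow the triage checklist above; the hashing scheme itself is an illustrative assumption, not the pattern file's prescribed one.

```python
import hashlib
import json

def snapshot_hash(state):
    """Hash a canonical serialization of the turn's context snapshot,
    so every agent can echo the same short fingerprint."""
    canonical = json.dumps(state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

turn = {"context_id": "c-7", "mem_rev": 3, "user": "alice"}
expected = snapshot_hash(turn)

# A downstream agent echoes the hash of the snapshot it actually used;
# any mismatch (here, a stale mem_rev) is Memory Desync caught early.
echoed = snapshot_hash({"context_id": "c-7", "mem_rev": 2, "user": "alice"})
print(echoed == expected)  # False
```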
See ../examples/ for runnable, stdlib-only code referenced in each pattern.
- Propose a new pattern via the issue label `pattern-proposal`, with a minimal repro + acceptance gate.
- Stabilize with an example (Python or Node, stdlib-only).
- Add to this README only after approval.
- Guard with Example 08 metrics before shipping a pattern-driven fix.
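Guarding with metrics can be as simple as the gate check sketched below. The thresholds are the ones stated in the acceptance criteria above; the function and the metrics dict are hypothetical, not Example 08's actual interface.

```python
# Quality gates from the acceptance criteria: (threshold, direction).
GATES = {
    "precision": (0.80, ">="),
    "under_refusal": (0.05, "<="),
    "citation_hit_rate": (0.75, ">="),
}

def gates_pass(metrics):
    """Return the list of failed gate names; an empty list means ship it."""
    failed = []
    for name, (threshold, op) in GATES.items():
        value = metrics[name]
        ok = value >= threshold if op == ">=" else value <= threshold
        if not ok:
            failed.append(name)
    return failed

run = {"precision": 0.83, "under_refusal": 0.04, "citation_hit_rate": 0.71}
print(gates_pass(run))  # ['citation_hit_rate']
```

In CI, a non-empty list would map to a nonzero exit code, blocking the pattern-driven fix from shipping.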
| Tool | Link | 3-Step Setup |
|---|---|---|
| WFGY 1.0 PDF | Engine Paper | 1️⃣ Download · 2️⃣ Upload to your LLM · 3️⃣ Ask “Answer using WFGY + <your question>” |
| TXT OS (plain-text OS) | TXTOS.txt | 1️⃣ Download · 2️⃣ Paste into any LLM chat · 3️⃣ Type “hello world” — OS boots instantly |
| Layer | Page | What it’s for |
|---|---|---|
| ⭐ Proof | WFGY Recognition Map | External citations, integrations, and ecosystem proof |
| ⚙️ Engine | WFGY 1.0 | Original PDF tension engine and early logic sketch (legacy reference) |
| ⚙️ Engine | WFGY 2.0 | Production tension kernel for RAG and agent systems |
| ⚙️ Engine | WFGY 3.0 | TXT-based Singularity tension engine (131 S-class set) |
| 🗺️ Map | Problem Map 1.0 | Flagship 16-problem RAG failure taxonomy and fix map |
| 🗺️ Map | Problem Map 2.0 | Global Debug Card for RAG and agent pipeline diagnosis |
| 🗺️ Map | Problem Map 3.0 | Global AI troubleshooting atlas and failure pattern map |
| 🧰 App | TXT OS | .txt semantic OS with fast bootstrap |
| 🧰 App | Blah Blah Blah | Abstract and paradox Q&A built on TXT OS |
| 🧰 App | Blur Blur Blur | Text to image generation with semantic control |
| 🏡 Onboarding | Starter Village | Guided entry point for new users |
If this repository helped, starring it improves discovery so more builders can find the docs and tools.