██╗ ██╗ ███╗ ███╗ ██╗ ██╗██╗██╗ ██╗██╗
██║ ██║ ████╗ ████║ ██║ ██║██║██║ ██╔╝██║
██║ ██║ ██╔████╔██║ ██║ █╗ ██║██║█████╔╝ ██║
██║ ██║ ██║╚██╔╝██║ ██║███╗██║██║██╔═██╗ ██║
███████╗███████╗██║ ╚═╝ ██║ ╚███╔███╔╝██║██║ ██╗██║
╚══════╝╚══════╝╚═╝ ╚═╝ ╚══╝╚══╝ ╚═╝╚═╝ ╚═╝╚═╝
Parallel multi-agent research. Thesis-driven investigation. Source ingestion.
Wiki compilation. Truth-seeking audits. Querying. Artifact generation. Ships as a
Claude Code plugin, an OpenAI Codex plugin, an OpenCode instruction file, or a
portable AGENTS.md. Obsidian-compatible.
Every run compounds. Sources become cross-referenced articles. Articles become reports, slide decks, study guides, playbooks, and implementation plans. The more you research, the stronger every output gets.
One command spins up a topic wiki, dispatches up to ten agents, ingests what's worth keeping, compiles it into articles, and hands you a deliverable built on top. All plain Markdown you own.
Research: 5–10 parallel agents search academic, technical, applied, news, and contrarian angles. --min-time 2h keeps going in rounds, drilling into the gaps each round finds.
Thesis mode: Start from a claim. Agents split across supporting, opposing, mechanistic, meta, and adjacent angles. Output is a verdict, not a summary. Round two fights confirmation bias.
Ingest: URLs, files, tweets, bulk inbox. Raw sources stay immutable; articles synthesize on top. Every claim traces back to its source.
Compile: Raw sources become synthesized articles with cross-references and confidence scores. Every directory has an _index.md, so nothing is scanned blindly.
Query: Quick (indexes), standard (articles), or deep (everything + sibling wikis). --resume picks up where you left off.
Librarian: Scores every article for staleness and quality. Two-tier scan: a fast metadata check, then a deep content read for flagged articles. Checkpoint recovery. Machine-readable JSON plus a human-readable report.
Audit: Answers the broader trust question. Reuses the librarian pass, traces outputs across raw/, wiki/, and output/, detects drift, inspects provenance, and does fresh research when local evidence is not enough.
Lessons learned: Extracts lessons from the current session (error→fix patterns, user corrections, discoveries) and saves them as structured notes the wiki can query later. --rules emits enforceable rules instead of prose.
Plan: Wiki-grounded implementation plans. Reads the knowledge base, interviews you about requirements, fills gaps with targeted research, and produces a phased plan citing wiki articles as evidence. --format rfc|adr|spec.
Output: Reports, slide decks, study guides, playbooks, implementation plans, timelines, glossaries, comparisons. Filed back into the wiki so the next output builds on every previous one.
Claude Code: native plugin. Recommended.
claude plugin install wiki@llm-wiki
Installs from the public marketplace. Restart Claude Code to apply.
Codex: marketplace plugin. Invoke with @wiki.
codex plugin marketplace add nvk/llm-wiki
# Then open /plugins, enable "LLM Wiki", use @wiki
Or from a local checkout: ./scripts/bootstrap-codex-plugin.sh --scope user --verify. The Codex tree is a generated mirror of the Claude source of truth — updates land identically.
OpenCode: instruction file.
# In opencode.json:
{
  "instructions": [
    "path/to/llm-wiki/plugins/llm-wiki-opencode/skills/wiki-manager/SKILL.md"
  ]
}
Or copy to ~/.config/opencode/AGENTS.md. Web search requires OPENCODE_ENABLE_EXA=1.
Pi: instruction file. Best for local models.
pi --instructions path/to/llm-wiki/plugins/\
llm-wiki-opencode/skills/wiki-manager/SKILL.md
Pi's 1K system prompt leaves room for the full wiki skill on 32K context local models. Uses the same skill file as OpenCode.
Any other agent: portable AGENTS.md.
curl -sL https://raw.githubusercontent.com/nvk/llm-wiki/master/AGENTS.md \
> ~/your-project/AGENTS.md
Drop the file into any agent's context or project root. Works with anything that can read/write files and search the web.
Claude Code:
claude plugin update wiki@llm-wiki
# Restart Claude Code to apply
If the update command misses a new version (stale marketplace cache), sync manually:
git clone https://github.com/nvk/llm-wiki.git  # or update an existing clone: git -C ~/llm-wiki pull
REPO=~/llm-wiki/claude-plugin
DEST=~/.claude/plugins/cache/llm-wiki/wiki
# Read the plugin version (e.g. 1.2.3) out of the manifest
VERSION=$(grep '"version"' "$REPO/.claude-plugin/plugin.json" | grep -o '[0-9][0-9.]*')
# Replace the cached copy with a fresh one under the versioned directory
rm -rf "$DEST"/*
mkdir -p "$DEST/$VERSION"
cp -R "$REPO/.claude-plugin" "$REPO/commands" "$REPO/skills" "$DEST/$VERSION/"
Codex: codex plugin marketplace upgrade llm-wiki. For a local checkout: re-run ./scripts/bootstrap-codex-plugin.sh --scope user --verify.
AGENTS.md: re-run the curl command above to replace the file.
One command, from anywhere — creates a topic wiki, launches parallel agents, keeps researching for an hour, comes back compiled.
/wiki:research "gut microbiome" --new-topic --min-time 1h
More common flows:
/wiki:research "nutrition" --new-topic
/wiki:research "fasting" --deep --min-time 2h
/wiki:research "What makes articles go viral?" --new-topic
/wiki:research --mode thesis "fiber reduces neuroinflammation via SCFAs"
/wiki:query "How does fiber affect mood?"
/wiki:query --resume
/wiki add https://example.com/article # fuzzy router → ingest
/wiki what do we know about CRISPR? # fuzzy router → query
/wiki:compile
/wiki:output report --topic gut-brain
/wiki:assess /path/to/my-app --wiki nutrition
/wiki:lint --fix
~/wiki/ # Hub — lightweight, no content
├── wikis.json # Registry of all topic wikis
├── _index.md # Lists topic wikis with stats
├── log.md # Global activity log
└── topics/ # Each topic is an isolated wiki
├── nutrition/
│ ├── .obsidian/ # Obsidian vault config
│ ├── inbox/ # Drop zone for this topic
│ ├── raw/ # Immutable sources
│ ├── wiki/ # Compiled articles
│ │ ├── concepts/
│ │ ├── topics/
│ │ └── references/
│ ├── output/ # Generated artifacts
│ ├── _index.md
│ ├── config.md
│ └── log.md
└── woodworking/ # Another topic wiki
Each research area is isolated. No cross-topic noise. Queries stay focused. A multi-wiki peek finds overlap when relevant.
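As an illustration of how the hub registry ties topics together, wikis.json might look something like this (the field names here are hypothetical, not the shipped schema — check your own ~/wiki/wikis.json for the real format):

```json
{
  "nutrition": { "path": "topics/nutrition", "created": "2025-01-10" },
  "woodworking": { "path": "topics/woodworking", "created": "2025-02-02" }
}
```

The point is that the hub holds only pointers and metadata; all content stays inside the topic directories it points at.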
[[wikilinks]] for Obsidian plus standard markdown links for everything else. Works in every viewer — including no viewer at all.
Once a source is ingested it is never modified. Articles synthesize on top. Retraction removes both cleanly.
Runs entirely on the host agent's built-in tools. Plugin is Markdown + commands. No servers, no services, no telemetry.
All commands accept --wiki <name> to target a topic wiki and --local for the project wiki. query, output, and plan also accept --with <wiki> for cross-wiki context.
| Command | Description |
|---|---|
| `/wiki <natural language>` | Fuzzy intent router — say what you want, it routes to the right subcommand. |
| `/wiki` | Show status, stats, and list all topic wikis. |
| `/wiki init <name>` | Create a topic wiki at ~/wiki/topics/<name>/. |
| `/wiki:ingest <source>` | Ingest a URL, file path, or quoted text. |
| `/wiki:compile` | Compile new sources into wiki articles. |
| `/wiki:query <question>` | Q&A against the wiki. --quick / --deep / --list / --resume. |
| `/wiki:ll` | Extract lessons from the current session into the wiki. --dry-run, --rules. |
| `/wiki:research <topic>` | 5 parallel agents. --plan (multi-path), --deep (8), --retardmax (10), --new-topic, --min-time 1h. |
| `/wiki:research --mode thesis <claim>` | Thesis-driven research: for + against → verdict. |
| `/wiki:plan <goal>` | Wiki-grounded implementation plan. --format rfc\|adr\|spec. |
| `/wiki:output <type>` | summary, report, study-guide, slides, timeline, glossary, comparison. |
| `/wiki:assess <path>` | Assess a repo against wiki + market. Gap analysis. |
| `/wiki:audit` | Truth-seeking audit across wiki, outputs, provenance, and fresh research when needed. |
| `/wiki:librarian` | Score articles for staleness and quality. Checkpoint recovery. --article <path> for a single article. |
| `/wiki:lint` | Health checks. --fix auto-repairs. --deep web-verifies facts. |
| `/wiki:retract` | Remove a source and clean up downstream references. |
| `/wiki:project` | Group outputs into projects with goals and manifests. |
From zero to a compiled wiki in 5 minutes.
1. Install the plugin
claude plugin install wiki@llm-wiki
2. Create a topic wiki
Pick any topic you're curious about:
/wiki init nutrition
This creates a hub at ~/wiki/ and your first topic wiki at ~/wiki/topics/nutrition/.
3. Research it
/wiki:research "gut microbiome and mental health" --wiki nutrition
# or just say it naturally:
/wiki research gut microbiome and mental health
Five parallel agents search the web from different angles (academic, technical, applied, news, contrarian), ingest the best sources, and compile them into cross-referenced wiki articles. Takes 2-5 minutes.
4. Ask your wiki a question
/wiki:query "how does fiber affect mood?" --wiki nutrition
# or naturally:
/wiki how does fiber affect mood?
The wiki answers from its compiled articles with citations.
5. Audit before you trust an output
/wiki:audit --wiki nutrition
/wiki:audit --artifact output/report-gut-brain.md
Audit rechecks the wiki layer, traces the output's evidence chain, flags drift, and will do fresh research if the local corpus is not enough to answer the trust question.
What to do next:
- /wiki:research "topic" --deep — 8 agents instead of 5, adds historical and data angles
- /wiki:research "topic" --min-time 1h — keep researching in rounds for an hour
- /wiki:research "topic" --plan — decompose into parallel research paths
- /wiki:audit --project nutrition-playbook — verify outputs and upstream wiki state together
- /wiki add https://example.com/article — fuzzy router detects the URL and ingests it
- /wiki what do we know about CRISPR? — fuzzy router detects the question and queries
- /wiki:lint --fix — clean up any structural issues

A typical research session flows through four stages:
Stage 1: Ask a question or pick a topic
llm-wiki auto-detects whether you're asking a question or naming a topic. Use the direct command or the fuzzy router:
/wiki:research "What makes long-form articles go viral?" # direct command
/wiki research quantum computing # fuzzy router — same result
/wiki:research --mode thesis "fiber reduces neuroinflammation via SCFAs" # thesis → for/against evidence
Stage 2: Agents search in parallel
5 agents (8 with --deep, 10 with --retardmax) search simultaneously from different angles — 2-3 web searches each, full-content fetch, quality scoring (1-5). A credibility pass deduplicates before ingestion.
Stage 3: Sources are ingested and compiled
Top sources are saved to raw/ (immutable — never modified after ingestion). Then the compilation pass synthesizes them into wiki articles under wiki/concepts/, wiki/topics/, and wiki/references/ with cross-references, confidence scores, and bidirectional links.
Stage 4: Gap report and follow-up
After each round, you see what's covered, what's still missing, and suggested follow-ups. If 2+ gaps remain, you're prompted to close them in parallel:
### Close gaps?
1. Dose-response curves for wavelength specificity
2. Long-term safety data for daily exposure
3. Device comparison (clinical vs consumer panels)
Enter numbers (e.g. 1,2,4), "all", or "skip":
Multi-round research: Add --min-time 2h to keep researching in rounds, each drilling into gaps the previous round found. Add --plan to decompose into parallel paths upfront.
The hub (~/wiki/) is just a registry. No content — only wikis.json, _index.md, and log.md. All content lives in topic sub-wikis.
Topic wikis (~/wiki/topics/<name>/) are isolated research areas. Each has its own sources, articles, outputs, and Obsidian vault config. Isolation means researching quantum computing can't pollute your nutrition wiki.
Raw sources (raw/) are immutable. Once a paper, article, or data file is ingested, it's never modified. This is the audit trail — every claim in every article traces back to a source.
Wiki articles (wiki/) are LLM-compiled syntheses organized into three categories:
Articles use dual-link format: [[wikilink]] for Obsidian + standard markdown links for everything else. Confidence scores (high/medium/low) reflect source quality and corroboration.
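As a sketch of how those pieces fit together in a single article (the exact frontmatter fields shown here are illustrative, not the shipped schema):

```markdown
---
title: Gut-Brain Axis
type: concept
confidence: medium
sources:
  - raw/2024-fiber-scfa-review.md
---

Fiber fermentation produces SCFAs; see [[short-chain-fatty-acids|SCFAs]]
([SCFAs](../concepts/short-chain-fatty-acids.md)).
```

Obsidian follows the wikilink; everything else follows the parenthesized markdown link.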
Indexes (_index.md) exist in every directory. They're derived caches — rebuilt automatically from file frontmatter. The agent reads indexes first and never scans blindly.
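To make the "derived cache" idea concrete, here is a conceptual sketch of rebuilding an index from frontmatter. This is an illustration only; the plugin rebuilds indexes through the host agent's own tools, not via this script.

```shell
# Sketch: derive an index for a directory of markdown articles from their
# YAML frontmatter. Not the plugin's implementation — just the concept.
build_index() {
  dir=$1
  printf '# Index\n\n'
  for md in "$dir"/*.md; do
    [ "$(basename "$md")" = "_index.md" ] && continue
    # Prefer the frontmatter title:, fall back to the filename
    title=$(sed -n '/^---$/,/^---$/s/^title: *//p' "$md" | head -n 1)
    [ -z "$title" ] && title=$(basename "$md" .md)
    printf -- '- [%s](%s)\n' "$title" "$(basename "$md")"
  done
}
```

Because the index is derived entirely from the files, it can be thrown away and rebuilt at any time without losing information — which is what makes it safe for the agent to trust it instead of scanning the directory.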
Outputs (output/) are generated artifacts: reports, slide outlines, study guides, implementation plans. They're built from wiki articles, so every output compounds on all prior research.
Audit walks that full artifact graph. It can trace an output back through the wiki state and raw sources it depended on, then escalate into fresh research when the stored evidence is stale or incomplete.
For broad topics, --plan decomposes your research into independent paths and runs them all in parallel:
/wiki:research "red light therapy" --plan --wiki redlight-therapy
# or naturally:
/wiki research red light therapy --plan
The agent generates a research plan and asks for confirmation:
## Research Plan — red light therapy
### Paths (will run in parallel)
1. **Mechanisms** — cytochrome c oxidase, wavelength specificity, dose-response
2. **Clinical evidence** — RCTs for skin, joints, wounds, cognition
3. **Devices** — LED vs laser, panel sizing, FDA clearances
4. **Criticisms** — placebo effects, publication bias, safety concerns
### Estimated: 4 paths x 5 agents = 20 parallel agents
Proceed? (y/n/edit)
On confirmation, all paths launch simultaneously. Each path runs its own 5-agent swarm. Sources are ingested in parallel (each path writes unique files), then a single compilation pass sees all sources at once for cross-path synthesis.
After the first round, a gap report shows what's still missing. Pick which gaps to close — they launch as another parallel batch:
### Close gaps?
1. Dose-response curves for wavelength specificity
2. Long-term safety data for daily exposure
3. Device comparison (clinical vs consumer)
Enter numbers (e.g. 1,2,4), "all", or "skip":
Combine with other flags:
- --plan --deep — 8 agents per path instead of 5
- --plan --min-time 2h — multiple plan-dispatch-compile cycles over 2 hours
- --plan --new-topic — create the wiki and research in one shot

Once your wiki has compiled articles, generate deliverables from them:
/wiki:output report --topic gut-brain --wiki nutrition
# or naturally:
/wiki write a report on gut-brain axis
Output types:
- summary — concise overview of a topic or the entire wiki
- report — detailed analysis with citations and evidence
- study-guide — structured learning material with key concepts and review questions
- slides — slide deck outline with speaker notes
- timeline — chronological view of events and developments
- glossary — term definitions extracted from articles
- comparison — side-by-side analysis of two or more subjects

Outputs are saved to output/ inside the topic wiki and indexed automatically. Every output builds on all compiled articles, so the more you research, the stronger every output gets.
Retardmax mode works here too — /wiki:output report --retardmax ships a rough draft immediately. Iterate later.
Cross-wiki context: Use --with to pull knowledge from another wiki into your output:
/wiki:output report --wiki nutrition --with article-writing
# Uses nutrition content + article-writing craft knowledge
Projects: Group related outputs into project folders with goals:
/wiki:project new rebuild-blog "Rewrite the company blog using wiki research"
/wiki:output report --topic gut-brain --project rebuild-blog
The output lands in output/projects/rebuild-blog/ with a WHY.md that captures the goal. Future outputs with the same --project flag accumulate there.
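After the two commands above, the project folder might look like this (WHY.md is per the description above; the report filename is illustrative):

```
output/projects/rebuild-blog/
├── WHY.md                 # Goal: "Rewrite the company blog using wiki research"
└── report-gut-brain.md
```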
Research sessions fetch many URLs. By default, Claude Code asks for approval on each one. To skip these prompts, add WebFetch and WebSearch to your project's allow list:
In .claude/settings.local.json:
{
"permissions": {
"allow": [
"WebFetch",
"WebSearch"
]
}
}
This pre-approves all web fetches and searches. You can also allow specific domains only: WebFetch(domain:arxiv.org).
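For example, a tighter policy that pre-approves only specific research domains plus search (the domains below are illustrative; the WebFetch(domain:...) syntax is as shown above):

```json
{
  "permissions": {
    "allow": [
      "WebFetch(domain:arxiv.org)",
      "WebFetch(domain:pubmed.ncbi.nlm.nih.gov)",
      "WebSearch"
    ]
  }
}
```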
If you prefer per-session approval, choose "Always allow" when Claude asks about the first URL — that covers the rest of the session.
Truth-seeking umbrella audit. New /wiki:audit combines the wiki-only librarian pass with output drift checks, provenance review, and fresh research when local evidence is not enough. /wiki:librarian stays available as the focused tool for keeping the wiki/ layer in check.
Codex and OpenCode stay in parity. The generated mirrors advertise the same audit behavior, native runtime metadata, and trust-oriented routing as the Claude source of truth.
LLM-compiled knowledge base. Commands: /wiki (router + init/status), /wiki:ingest, /wiki:compile, /wiki:query, /wiki:research, /wiki:audit, /wiki:librarian, /wiki:ll, /wiki:assess, /wiki:plan, /wiki:lint, /wiki:output, /wiki:retract, /wiki:project.
LLM Wiki is a set of commands and a knowledge model that turns any LLM agent into a research engine and an append-only, Markdown-native wiki. It runs parallel multi-agent research, ingests URLs and files, compiles raw sources into synthesized articles with cross-references, audits whether those articles and outputs are still trustworthy, answers questions against the compiled knowledge, and generates artifacts like reports, slides, study guides, and implementation plans.
Inspired by Andrej Karpathy's LLM wiki concept.
Five install modes: Claude Code (native plugin via the llm-wiki marketplace), OpenAI Codex (marketplace plugin via codex plugin marketplace add nvk/llm-wiki, invoked with @wiki), OpenCode (instruction file via opencode.json), Pi (instruction file — best for local models with its minimal system prompt), and any other LLM agent via the portable AGENTS.md file.
The behavioral logic lives in a single wiki-manager skill shared across runtimes — Codex, OpenCode, and Pi trees symlink into the Claude source of truth so there is no fork. Drift is caught by self-healing sync tests.
Your data lives in ~/wiki/ on your machine by default. You can relocate the hub to iCloud Drive, Dropbox, or any custom path:
/wiki config hub-path ~/Library/Mobile\ Documents/com~apple~CloudDocs/wiki
Config lives at ~/.config/llm-wiki/config.json and caches a pre-computed resolved_path, so tilde expansion runs at most once. A ~/wiki symlink to the resolved path makes resolution instant — recommended if your hub is on iCloud.
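A sketch of what that config might contain — resolved_path is named above; the other field name is an assumption, so verify against your actual ~/.config/llm-wiki/config.json:

```json
{
  "hub_path": "~/Library/Mobile Documents/com~apple~CloudDocs/wiki",
  "resolved_path": "/Users/you/Library/Mobile Documents/com~apple~CloudDocs/wiki"
}
```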
Obsidian works out of the box: each topic wiki ships with its own .obsidian/ vault config and can be opened as an independent vault:
open ~/wiki/topics/nutrition/
Cross-references use a dual-link format:
[[gut-brain-axis|Gut-Brain Axis]] ([Gut-Brain Axis](../concepts/gut-brain-axis.md))
Obsidian reads the wikilink for graph view and backlinks; everything else (Claude Code, GitHub, plain text editors) follows the standard markdown link.
Retardmax is a research mode inspired by Elisha Long's retardmaxxing philosophy (act first, think later): ten parallel agents, skip planning, cast the widest net, ingest aggressively, compile fast, lint later.
Available on /wiki:research --retardmax and /wiki:output --retardmax for when you want results now and will clean up afterward.
/wiki:research --mode thesis "<claim>" starts from a specific claim and uses it as a filter. Agents are split across supporting, opposing, mechanistic, meta/review, and adjacent — balanced by design.
Sources that don't relate to the claim's variables are skipped, which keeps the wiki tight. Output is a verdict: supported, partially supported, contradicted, insufficient evidence, or mixed.
With --min-time, round two focuses harder on the weaker side of the evidence — counter-weight against confirmation bias.
LLM Wiki is open source and MIT-licensed. Source and releases at github.com/nvk/llm-wiki.
Compiling, querying, linting, and generating artifacts from an existing wiki work offline — everything is plain Markdown on your disk. Research and ingestion need internet since they fetch URLs and search the web.
Zero runtime dependencies. LLM Wiki uses only the built-in tools of the host agent (file read/write, web fetch, web search). The plugin itself is Markdown: command definitions, skills, and reference docs.
Optional: ask-grok-mcp for best-in-class tweet ingestion, tobi/qmd for local search beyond ~100 articles.
Claude Code: claude plugin update wiki@llm-wiki and restart.
Codex: codex plugin marketplace upgrade llm-wiki (or re-run the bootstrap helper for a local checkout).
AGENTS.md: re-curl from the master branch to replace your file.
See Install for the manual sync fallback when the plugin updater hits stale cache.