---
name: repo-standardization
description: Standardize a newly entered or drifting repository for both humans and AI agents. Use this skill whenever a project lacks clear README/CONTRIBUTING/CHANGELOG or core runtime docs, lacks model-facing memory, or when code changes have likely invalidated docs or memory. On first entry, inspect the repository, identify maintained paths, core entry points, source-of-truth files, and output structure, then choose one mode: audit, bootstrap, or sync. Prefer this skill before deep development in unfamiliar repositories and whenever you want the project to remain easy for both humans and agents to understand.
---

# Repo Standardization

Standardize the repository before doing substantial work. The goal is not to generate generic documentation. The goal is to create and maintain:

- human-facing docs that explain the repository truthfully
- model-facing memory that is structured, traceable, and derived from real sources

## Core Principles

1. Treat code, tests, contracts, and authoritative docs as source of truth.
2. Treat memory as a derived artifact for models, never as a second truth source.
3. Prefer repository-specific summaries over boilerplate.
4. Preserve existing conventions when the repository already has a clear style.
5. Do not overwrite mature docs just because a template exists.
6. Keep model memory compact, structured, and tied to source files.

## Modes

Pick exactly one mode before making edits.
### `audit`

Use when:

- you just entered a repository and want to assess readiness
- docs and memory exist but may be inconsistent
- the user asks for recommendations before changes

Expected outcome:

- a gap report
- a recommended mode (`bootstrap`, `sync`, or `none`)
- a list of source-of-truth files and maintained paths

### `bootstrap`

Use when:

- critical docs are missing
- there is no repository memory system
- the repository is new to the team or to the agent workflow

Expected outcome:

- minimal human docs or filled gaps in existing docs
- a `memory/knowledge/` skeleton
- a `.agent_memory/` local structure recommendation
- sync rules so future code changes update docs and memory

### `sync`

Use when:

- docs and memory already exist
- code or contract files changed
- the repository likely drifted from its documentation or memory

Expected outcome:

- refreshed model memory
- targeted doc updates or a review list
- updated synchronization metadata

## Decision Rules

Default mode selection:

1. If critical human docs or model-memory roots are missing, use `bootstrap`.
2. If they exist but source files changed, use `sync`.
3. If the repository looks mature and the user asked for assessment, use `audit`.
4. If the repository is already consistent and no change is needed, report that explicitly.

Do not treat every repo as needing a full bootstrap. Mature repos often need `audit` or a narrow `sync`.

## Required Thinking Order

1. Identify repository shape.
2. Identify the maintained path or primary execution path.
3. Identify source-of-truth files.
4. Decide which docs are human-facing and which memory files are model-facing.
5. Decide what should be git-tracked and what should stay local.
6. Only then scaffold or update files.

If the repository shape is unclear, read `references/repo_shapes.md`.
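The default mode-selection rules above can be sketched as a small function. This is a hypothetical illustration, not part of the bundled scripts; the file lists and the `source_files_changed` and `assessment_requested` inputs are assumptions standing in for real drift detection and user intent:

```python
from pathlib import Path

# Assumed stand-ins for "critical human docs" and "model-memory roots".
CRITICAL_DOCS = ["README.md", "CONTRIBUTING.md", "CHANGELOG.md"]
MEMORY_ROOTS = ["memory/knowledge", "memory/schemas"]

def choose_mode(repo: Path, source_files_changed: bool, assessment_requested: bool) -> str:
    """Apply the four decision rules in order and return a mode name."""
    missing_docs = [d for d in CRITICAL_DOCS if not (repo / d).exists()]
    missing_memory = [m for m in MEMORY_ROOTS if not (repo / m).is_dir()]
    if missing_docs or missing_memory:
        return "bootstrap"   # rule 1: critical pieces are missing
    if source_files_changed:
        return "sync"        # rule 2: docs exist but sources drifted
    if assessment_requested:
        return "audit"       # rule 3: mature repo, user wants an assessment
    return "none"            # rule 4: already consistent, report that explicitly
```

The point of the sketch is the ordering: bootstrap wins over sync, and a consistent repository gets an explicit "no change needed" rather than a forced full bootstrap.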
## Human Docs vs Model Memory

Human docs usually include:

- `README.md`
- `CONTRIBUTING.md`
- `CHANGELOG.md`
- core runtime or architecture docs under `docs/`

Model memory usually includes:

- `memory/knowledge/*.json`
- `memory/schemas/*.json`
- local task or execution state under `.agent_memory/`

Read `references/doc_policy.md` before writing human docs. Read `references/memory_policy.md` before creating model memory. Read `references/sync_rules.md` before choosing what to refresh after code changes.

## Minimal Bootstrap Target

Bootstrap only the smallest useful set. Usually that means:

- add or refine `README.md`
- add or refine `CONTRIBUTING.md`
- add or refine `CHANGELOG.md`
- add one core runtime or architecture doc if the repo needs it
- create `memory/knowledge/`
- create `memory/schemas/`
- add local `.agent_memory/` directory guidance and ignore rules

Do not create extra files such as skill-local README guides or redundant summaries.

## Source-of-Truth Hierarchy

Use this order when facts conflict:

1. runtime code and shared contracts
2. tests that enforce current behavior
3. maintained architecture docs
4. README and contribution rules
5. changelog entries
6. derived memory

If derived memory disagrees with code or shared contracts, update memory.

## Scripts

Use the bundled scripts when they help:

- `scripts/detect_repo_shape.py`: summarize repository shape, entry points, docs, tests, and code layout
- `scripts/audit_repo.py`: report readiness gaps and recommend a mode
- `scripts/bootstrap_repo.py`: create missing standardization files conservatively
- `scripts/build_knowledge_memory.py`: build or refresh model-facing knowledge cards
- `scripts/sync_repo.py`: refresh knowledge memory and highlight doc review targets

Use script output as evidence, not as unquestioned truth. Review the results before finalizing changes.

## Bootstrap Workflow

1. Run `scripts/detect_repo_shape.py`.
2. Identify maintained paths, entry points, and core contracts.
3. Check whether `README.md`, `CONTRIBUTING.md`, `CHANGELOG.md`, and `memory/knowledge/` exist.
4. Create only the missing or clearly inadequate pieces.
5. Materialize initial knowledge cards with `scripts/build_knowledge_memory.py`.
6. Add or update ignore rules for local memory.
7. Summarize what future code changes must sync.

## Sync Workflow

1. Identify changed source-of-truth files.
2. Refresh derived knowledge memory.
3. Update only docs whose facts are affected.
4. Update the changelog when user-facing behavior, architecture, or contract semantics changed.
5. Report what was refreshed automatically and what still needs human review.

## Audit Workflow

1. Report repository shape and maintained-path candidates.
2. Report missing docs and memory.
3. Report likely source-of-truth files.
4. Recommend `bootstrap`, `sync`, or `none`.
5. If the user wants changes, continue with the recommended mode.

## Quality Bar

- Docs must mention real modules, not placeholders.
- Memory must include source paths and a staleness rule.
- Generated content must be specific to the repository.
- Local memory must not leak secrets or raw chat transcripts.
- If the repository already has a strong standard, align with it instead of replacing it.

## When Not To Use

Do not use this skill for:

- a one-file scratch script
- vendored or third-party code you should not standardize
- archived logs or generated outputs
- pure implementation work in a repo that already has good docs and memory, unless drift is suspected

## Example Requests

- "Audit this repository and tell me whether it needs bootstrap or sync."
- "Standardize this repo for both humans and AI agents."
- "Create the initial docs and memory skeleton for this project."
- "Sync the repository docs and memory after the recent refactor."
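The quality bar requires that memory carry source paths and a staleness rule. A minimal sketch of what a derived knowledge card could look like, assuming a content-hash staleness rule; the field names are illustrative, not the actual format produced by `scripts/build_knowledge_memory.py`:

```python
import hashlib
from pathlib import Path

def build_knowledge_card(repo: Path, title: str, summary: str, sources: list[str]) -> dict:
    """Build one derived knowledge card tied to its source-of-truth files.

    Each source is recorded with a content hash, so a later sync pass can
    detect drift by re-hashing and comparing.
    """
    card = {
        "title": title,
        "summary": summary,  # repository-specific, never boilerplate
        "sources": [],
        "staleness_rule": "refresh when any source hash changes",
    }
    for rel in sources:
        digest = hashlib.sha256((repo / rel).read_bytes()).hexdigest()
        card["sources"].append({"path": rel, "sha256": digest})
    return card

def is_stale(card: dict, repo: Path) -> bool:
    """Apply the card's staleness rule against the current working tree."""
    for src in card["sources"]:
        current = hashlib.sha256((repo / src["path"]).read_bytes()).hexdigest()
        if current != src["sha256"]:
            return True
    return False
```

Because the card is derived, a stale hash means the memory is rebuilt from the source file, never the other way around; this keeps memory a second-class artifact behind code and contracts, as the source-of-truth hierarchy requires.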