30-day free campaign

Run this helper free — no credit card

Every helper is free for 30 days. Answer 3 questions and get the full result in 2 minutes.

Start free →
FREE
Verified
Grow Business

Deep Memory

Project-memory and context-compression workflow for long-running engineering work. Use when Codex needs continuity across multiple sessions, especially for software development, repo work, architecture discussions, bug and regression tracking, performance tuning, build/test loops, or any task where earlier decisions, commands, file paths, metrics, and open follow-ups must be retained without keeping full chat history in context.

👁 2 views · 📦 0 installs

Install in one line

mfkvault install deep-memory

Requires the MFKVault CLI. Prefer MCP?

No reviews yet
💻 Codex
This helper was discovered by MFKVault crawlers from public sources. Original author retains all rights. To request removal: [email protected]
Community helper
MFKVault does not create, maintain, or guarantee the output of this helper. Results are AI-generated and may be incomplete, inaccurate, or outdated. Use at your own risk. Request removal
FREE

Free to install — no account needed

Copy the command below and paste into your agent.

Instant access • No coding needed • No account needed

What you get in 5 minutes

  • Full skill code ready to install
  • Works with 1 AI agent
  • Lifetime updates included
Ready to run

Run this helper

Answer a few questions and let this helper do the work.

Advanced: use with your AI agent

Description

---
name: deep-memory
description: Project-memory and context-compression workflow for long-running engineering work. Use when Codex needs continuity across multiple sessions, especially for software development, repo work, architecture discussions, bug and regression tracking, performance tuning, build/test loops, or any task where earlier decisions, commands, file paths, metrics, and open follow-ups must be retained without keeping full chat history in context.
---

# Deep Memory

Preserve durable state, not chatter. Use this skill to turn long conversations into a compact working memory that survives across sessions without dragging the full transcript back into context.

## Keep These Things

Prioritize:

- current project and objective
- architecture and subsystem boundaries
- bug and regression history
- validated commands and hot files
- performance and tuning numbers
- open loops, blockers, and next experiments
- user preferences that materially affect future work

Drop:

- pleasantries and filler
- repeated summaries of unchanged state
- speculative dead ends with no lasting value
- brainstorming that never became a decision or task

## Workspace Layout

For project work, keep memory inside the active workspace:

```text
.codex-memory/<project-slug>/
  raw/
    session_YYYYMMDD_HHMM.md
  compressed/
    session_YYYYMMDD_HHMM.json
  state/
    project_state.md
    open_loops.md
    milestones.md
```

If there is no meaningful project workspace, fall back to a user-level cache directory.

Treat repo-owned documents such as `README`, `docs/`, issue trackers, or project notes as canonical. Deep memory is a compact working index, not a conflicting source of truth.

## Phase 1: Session Start

### 1. Pull only the raw context you need

Start small. If your Codex environment supports recent-chat or conversation-history retrieval, use only the smallest relevant slice. If it does not, write a short raw handoff note yourself.
Good intake sources:

- the last 2-3 relevant chats
- a targeted search on one prior bug, feature, or repo topic
- a short manual summary captured from the current conversation

### 2. Save raw context immediately

Do not keep raw summaries floating in working memory.

```bash
mkdir -p .codex-memory/<project-slug>/raw .codex-memory/<project-slug>/compressed .codex-memory/<project-slug>/state
cat << 'MEMEOF' > .codex-memory/<project-slug>/raw/session_YYYYMMDD_HHMM.md
[paste chat summaries, notes, or search results here]
MEMEOF
```

### 3. Compress the raw dump

Use the bundled helper:

```bash
python3 scripts/compressor.py compress \
  .codex-memory/<project-slug>/raw/session_YYYYMMDD_HHMM.md \
  --session-id "YYYYMMDD_HHMM" \
  --max-lines 25 \
  --json > .codex-memory/<project-slug>/compressed/session_YYYYMMDD_HHMM.json
```

Then render the compact view:

```bash
python3 scripts/compressor.py format \
  .codex-memory/<project-slug>/compressed/session_YYYYMMDD_HHMM.json
```

### 4. Read state before acting

Consult these in order:

1. `state/project_state.md`
2. `state/open_loops.md`
3. the latest compressed session summary
4. the raw session dump only if exact wording or extra detail is still needed

## Phase 2: Mid-Conversation Recall

When the user references earlier work such as:

- "last time"
- "that crash"
- "the renderer issue"
- "the release packaging problem"

Use this order:

1. check `project_state.md` for durable facts
2. check `open_loops.md` for unfinished work
3. use `compressor.py extract` on the latest JSON for the category you need
4. search raw dumps only if the compact state still misses the detail
5. only then fall back to wider conversation history retrieval, if available

Example:

```bash
python3 scripts/compressor.py extract \
  .codex-memory/<project-slug>/compressed/session_YYYYMMDD_HHMM.json \
  --category architecture
```

## Phase 3: Session End

For meaningful project sessions, update three rolling artifacts:

- `state/project_state.md`: stable facts that should still matter next week
- `state/open_loops.md`: active blockers, TODOs, validation gaps, and follow-ups
- `state/milestones.md`: dated bullets for completed fixes, shipped behavior changes, and confirmed regressions

Before updating them, compare the newest compressed summary with the previous one:

```bash
python3 scripts/compressor.py diff old_session.json new_session.json
```

Only write deltas that are actually durable.

## Session-End Template

```markdown
# YYYY-MM-DD

## Completed
- [behavioral change or shipped fix]

## Verified
- [command, test, or manual verification]

## Open
- [remaining blocker or follow-up]

## Metrics
- [value + context]

## Files
- [important files touched]
```

## High-Value Engineering Details

Always try to capture:

- subsystem boundaries and ownership
- bug cause, fix, repro steps, and verification status
- exact commands that worked or failed
- repeatedly touched files and directories
- concrete metrics such as FPS, latency, memory, build times, chart counts, or test counts
- user constraints and preferences that affect future choices

## Helper Script

Bundled at `scripts/compressor.py`.
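The script itself is not reproduced on this page. As a rough sketch of the compression idea it is described as implementing, consider the following. Everything here is an illustrative assumption: the category names, keyword heuristics, and output fields are invented for explanation and are not the bundled script's actual interface.

```python
import json
import re

# Illustrative keyword heuristics for bucketing raw session lines.
# The real compressor.py may use entirely different rules and fields.
CATEGORY_PATTERNS = {
    "bugs": re.compile(r"\b(bug|crash|regression|fixed?)\b", re.IGNORECASE),
    "commands": re.compile(r"`[^`]+`|python3 |make |cargo "),
    "metrics": re.compile(r"\b\d+(\.\d+)?\s*(ms|s|fps|mb|gb|%)", re.IGNORECASE),
    "open": re.compile(r"\b(todo|blocker|follow[- ]?up|next step)\b", re.IGNORECASE),
}

def compress(raw_text: str, session_id: str, max_lines: int = 25) -> dict:
    """Keep only lines that match a category, up to max_lines total."""
    record = {"session_id": session_id,
              "categories": {name: [] for name in CATEGORY_PATTERNS}}
    kept = 0
    for line in raw_text.splitlines():
        line = line.strip()
        if kept >= max_lines:
            break
        for name, pattern in CATEGORY_PATTERNS.items():
            if line and pattern.search(line):
                record["categories"][name].append(line)
                kept += 1
                break  # first matching category wins
    return record

raw = """Fixed the renderer crash on window resize.
Thanks, that looks great!
Verified with `python3 -m pytest tests/render`.
Frame time dropped from 21 ms to 16 ms.
TODO: profile the audio mixer next."""

summary = compress(raw, "20260312_1930")
print(json.dumps(summary, indent=2))
```

Note how the pleasantry line is dropped entirely; that is the whole point of the "keep these things" list above. Treat this structure purely as a mental model for what `format`, `extract`, `diff`, and `timeline` might consume downstream.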
Useful commands:

```bash
# Compress raw text into structured memory
python3 scripts/compressor.py compress input.md --session-id "20260312_1930" --json

# Render a compact readable summary
python3 scripts/compressor.py format session.json

# Extract one category for targeted recall
python3 scripts/compressor.py extract session.json --category bugs
python3 scripts/compressor.py extract session.json --category commands

# Build a cross-session timeline
python3 scripts/compressor.py timeline session1.json session2.json session3.json

# Show what changed between two sessions
python3 scripts/compressor.py diff old_session.json new_session.json
```

## Anti-Patterns

Do not:

- keep raw conversation dumps in context after intake
- reload the same summaries repeatedly in one session
- store transient chatter as long-term state
- let deep-memory disagree with repo-owned docs without reconciling them
- overwrite `project_state.md` every session instead of updating only changed durable facts
- log vague notes like "worked on audio" without the subsystem, behavior, and result

## Quick Start

For ongoing engineering work:

```text
1. Pull 2-3 relevant chat fragments or write a short raw handoff note.
2. Save that raw text under .codex-memory/<project-slug>/raw/.
3. Run compressor.py compress.
4. Read project_state.md + open_loops.md + the formatted summary.
5. Work normally.
6. At the end, append durable deltas to milestones/open_loops/project_state.
```

Typical context cost stays low while keeping the parts that still matter next session.
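The Phase 3 rule of writing only durable deltas can be pictured as a set difference between two compressed sessions. Below is a minimal sketch under an assumed record shape; the `categories` field is an invented illustration, and the bundled `diff` subcommand's real input and output formats are not documented on this page.

```python
def durable_deltas(old: dict, new: dict) -> dict:
    """Return entries that appear in the new compressed session but not the old.

    Illustrative only: mirrors the spirit of `compressor.py diff`,
    not its actual implementation.
    """
    deltas = {}
    for category, entries in new.get("categories", {}).items():
        previous = set(old.get("categories", {}).get(category, []))
        added = [entry for entry in entries if entry not in previous]
        if added:
            deltas[category] = added
    return deltas

old_session = {"categories": {"bugs": ["renderer crash fixed"],
                              "open": ["profile the audio mixer"]}}
new_session = {"categories": {"bugs": ["renderer crash fixed",
                                       "asset loader regression found"],
                              "open": ["profile the audio mixer"]}}

# Only the genuinely new fact is a candidate for milestones/project_state.
print(durable_deltas(old_session, new_session))
```

Unchanged entries produce no delta, which is exactly why the skill warns against overwriting `project_state.md` wholesale every session.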


Security Status

Verified

Manually verified by security team


Related AI Tools

More Grow Business tools you might like

codex-collab

Free

Use when the user asks to invoke, delegate to, or collaborate with Codex on any task. Also use PROACTIVELY when an independent, non-Claude perspective from Codex would add value — second opinions on code, plans, architecture, or design decisions.

Run free

Rails Upgrade Analyzer

Free

Analyze Rails application upgrade path. Checks current version, finds latest release, fetches upgrade notes and diffs, then performs selective upgrade preserving local customizations.

Run free

Asta MCP — Academic Paper Search

Free

Domain expertise for Ai2 Asta MCP tools (Semantic Scholar corpus). Intent-to-tool routing, safe defaults, workflow patterns, and pitfall warnings for academic paper search, citation traversal, and author discovery.

Run free

Hand Drawn Diagrams

Free

Create hand-drawn Excalidraw diagrams, flows, explainers, wireframes, and page mockups. Default to monochrome sketch output; allow restrained color only for page mockups when the user explicitly wants webpage-like fidelity.

Run free

Move Code Quality Checker

Free

Analyzes Move language packages against the official Move Book Code Quality Checklist. Use this skill when reviewing Move code, checking Move 2024 Edition compliance, or analyzing Move packages for best practices. Activates automatically when working

Run free

Claude Memory Kit

Free

Persistent memory system for Claude Code. Your agent remembers everything across sessions and projects. Two-layer architecture: hot cache (MEMORY.md) + knowledge wiki. Safety hooks prevent context loss. /close-day captures your day in one command. Z

Run free