
Run this helper free

Answer 3 questions. Get a result in 2 minutes. Preview free.


Project Skill

Reproducible emotion recognition from audio without manual file shuffling

Building a speech emotion recognition system requires careful dataset management, reproducible workflows, and strict separation of concerns, all of which are error-prone when handled manually.

A fully reproducible speech emotion recognition system with versioned artifacts, speaker-disjoint splits, and manifest-driven workflows ready for training.

  • Manifest-driven workflows preserve raw dataset integrity
  • Speaker-disjoint train/validation/test splits prevent data leakage
  • Six V1 emotion labels with intensity as metadata
  • Versioned artifacts enable reproducible experiment tracking
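The speaker-disjoint split mentioned above can be sketched as follows. This is an illustrative sketch, not the skill's actual implementation; the manifest entry schema (a `speaker` field per row) is an assumption made for the example.

```python
import random
from collections import defaultdict

def speaker_disjoint_split(entries, train=0.8, val=0.1, seed=42):
    """Split manifest entries so no speaker appears in more than one split.

    `entries` is a list of dicts with at least a "speaker" key; the exact
    manifest schema here is assumed for illustration.
    """
    # Group entries by speaker so a speaker's clips stay together.
    by_speaker = defaultdict(list)
    for e in entries:
        by_speaker[e["speaker"]].append(e)

    # Shuffle speakers (not clips) with a fixed seed for reproducibility.
    speakers = sorted(by_speaker)
    random.Random(seed).shuffle(speakers)

    n_train = int(len(speakers) * train)
    n_val = int(len(speakers) * val)
    split_speakers = {
        "train": speakers[:n_train],
        "val": speakers[n_train:n_train + n_val],
        "test": speakers[n_train + n_val:],
    }
    # Expand speaker groups back into per-clip manifest rows.
    return {
        name: [e for s in spk for e in by_speaker[s]]
        for name, spk in split_speakers.items()
    }
```

Because whole speakers are assigned to exactly one partition, no voice heard during training can leak into validation or test.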


Install in one line

mfkvault install lawliet2004-speech-emotion-detector

Requires the MFKVault CLI.

Works with: Claude Code, Cursor, Codex, OpenClaw

Free to install — no account needed

Copy the install command above and paste it into your agent.


What you get in 5 minutes

  • Full skill code ready to install
  • Works with 4 AI agents
  • Lifetime updates included

Run this helper

Answer a few questions and let this helper do the work.

Advanced: use with your AI agent

Description

# Project Skill

This repository is for building a speech emotion recognition system from the local `AudioWAV/` dataset.

## Required First Step For Any Future Agent

- Read `context.md` before making plans, writing code, or changing files.
- Treat `context.md` as the current project handoff document.

## Working Rules

- Keep the raw dataset in `AudioWAV/` untouched.
- Use manifest-driven workflows instead of moving audio files into new folders.
- Preserve speaker-disjoint train/validation/test splits.
- Use the six V1 emotion labels only: `angry`, `disgust`, `fear`, `happy`, `neutral`, `sad`.
- Treat filename intensity as metadata, not as the V1 prediction target, unless the user explicitly changes that decision.
- Prefer reproducible scripts and versioned artifacts over manual steps.

## Current Repo Conventions

- Dataset manifests live in `manifests/`.
- Utility scripts live in `scripts/`.
- The current manifest generator is `scripts/create_audio_manifest.py`.

## Update Contract

- Any time code, data-processing logic, file structure, training setup, model behavior, or project decisions change, update `context.md` in the same work session.
- Update `skill.md` too if the workflow rules, repo conventions, or standing instructions change.
- When updating `context.md`, refresh:
  - current status
  - key decisions
  - important files
  - recent changes
  - next recommended step

## Preferred Agent Behavior

- Before changing anything, inspect the current manifests, scripts, and `context.md`.
- After changing anything, leave the repo in a state where another agent can continue without re-discovering the project.
- Be explicit about assumptions when the user has not decided something yet.
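The listing does not show `scripts/create_audio_manifest.py` itself, but a minimal manifest generator consistent with the rules above might look like this. The filename convention (CREMA-D-style names such as `1001_DFA_ANG_XX.wav`, where the fields are speaker, sentence, emotion code, and intensity) is an assumption for illustration, since the six labels match the CREMA-D set; the actual script may differ.

```python
import csv
from pathlib import Path

# Mapping from three-letter filename emotion codes to the six V1 labels.
# The codes follow the CREMA-D convention; this is an assumption about
# how files in AudioWAV/ are named, not a confirmed detail of this repo.
EMOTION_CODES = {
    "ANG": "angry", "DIS": "disgust", "FEA": "fear",
    "HAP": "happy", "NEU": "neutral", "SAD": "sad",
}

def build_manifest(audio_dir="AudioWAV", out_path="manifests/audio_manifest.csv"):
    """Scan audio_dir and write a CSV manifest without moving any files."""
    rows = []
    for wav in sorted(Path(audio_dir).glob("*.wav")):
        parts = wav.stem.split("_")  # e.g. "1001_DFA_ANG_XX"
        if len(parts) != 4 or parts[2] not in EMOTION_CODES:
            continue  # skip files that don't match the assumed pattern
        speaker, _sentence, code, intensity = parts
        rows.append({
            "path": str(wav),
            "speaker": speaker,
            "emotion": EMOTION_CODES[code],
            "intensity": intensity,  # metadata only, not the V1 target
        })

    out = Path(out_path)
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=["path", "speaker", "emotion", "intensity"]
        )
        writer.writeheader()
        writer.writerows(rows)
    return rows
```

Keeping the raw dataset untouched and emitting a versioned CSV of paths and labels is what makes downstream splits and training runs reproducible.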


Security Status

Verified: manually verified by the security team

