MATRIX — video type × style
17 video types × 13 styles. Cell color: green = ships, yellow = MVP, gray = planned.
| | cel_shaded | claymation | comic_halftone | mushroom_institute | painterly_van_gogh | painterly_watercolor | paper_cutout | photoreal_3d | pixel_art_warm | vector_flat | voxel_blocks | pixel_pillow | passthrough_image |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| news_report | | | | | | | | | | | | | |
| character_intro | | | | | | | | | | | | | |
| shorts_brainrot | | | | | | | | | | | | | |
| comparison_review | | | | | | | | | | | | | |
| recipe_short | | | | | | | | | | | | | |
| react_video | | | | | | | | | | | | | |
| silent_film | | | | | | | | | | | | | |
| top_n_list | | | | | | | | | | | | | |
| unboxing | | | | | | | | | | | | | |
| before_after | | | | | | | | | | | | | |
| tier_list | | | | | | | | | | | | | |
| music_video_parallel | | | | | | | | | | | | | |
| editorial_cartoon | | | | | | | | | | | | | |
| explainer | | | | | | | | | | | | | |
| documentary | | | | | | | | | | | | | |
| video_essay | | | | | | | | | | | | | |
| feature_film | | | | | | | | | | | | | |
AGENT — orchestrate the studio with Claude
Type a brief in plain English. The agent (Claude Opus 4.7, adaptive thinking) plans, decides cast/scene/style, calls compose_project + render_project, and watches the render. Brain = LLM tokens. Body = procedural CPU. Zero neural inference for visual content.
ANTHROPIC_API_KEY not set; agent will refuse to start.
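The agent's two tool calls can be sketched as a tool table plus a dispatcher. This is a hypothetical illustration: the tool names (compose_project, render_project) come from the text above, but the input schemas and handler bodies are assumptions, with stubs standing in for the real studio calls.

```python
# Hypothetical sketch of the agent's tool surface. Tool names are from
# the dashboard; schemas and handlers are illustrative assumptions.
from typing import Any, Callable

TOOLS: list[dict[str, Any]] = [
    {
        "name": "compose_project",
        "description": "Build a project YAML from video type, style, and script.",
        "input_schema": {
            "type": "object",
            "properties": {
                "video_type": {"type": "string"},
                "style": {"type": "string"},
                "script": {"type": "string"},
            },
            "required": ["video_type", "style", "script"],
        },
    },
    {
        "name": "render_project",
        "description": "Render a previously composed project YAML to MP4.",
        "input_schema": {
            "type": "object",
            "properties": {"project_yaml": {"type": "string"}},
            "required": ["project_yaml"],
        },
    },
]

# Stub handlers standing in for the real studio entry points.
HANDLERS: dict[str, Callable[..., str]] = {
    "compose_project": lambda **kw: f"projects/{kw['video_type']}_{kw['style']}.yaml",
    "render_project": lambda **kw: kw["project_yaml"].replace(".yaml", ".mp4"),
}

def dispatch(tool_name: str, tool_input: dict[str, Any]) -> str:
    """Route one tool_use block from the model to its handler."""
    return HANDLERS[tool_name](**tool_input)
```

In the real loop, `TOOLS` would be passed as the `tools=` parameter to the Anthropic Messages API, and each `tool_use` content block in the response would be answered with a matching `tool_result`.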
PROJECTS
Render compute runs on the remote box; toggle with STUDIO_COMPUTE_TARGET.
STYLES
Render backends and their compatible style YAMLs.
CAST — 56 characters
4 originals · 26 alphabet citizens · 26 letter-forms
RECENT OUTPUTS
17 latest renders / panels in tv/output/
DOCS — frame-defining + reference
7 primary documents. Click to read in-place; raw markdown served by the dashboard.
TOOLS — action surfaces
9 entry points. Web tools open in this dashboard; CLI commands shown for SSH / terminal.
- Pick video type / cast / scene / style; type the script; one Haiku call → project YAML. ~$0.0015 per video (cached).
- Drag character thumbnails onto the SVG scene; the y-coordinate maps to multiplane z-position; placements drive camera_params at render time.
- SSH-orchestrated remote render delegation: rsync up → render remotely → rsync the MP4 back down. Configure with STUDIO_REMOTE_HOST + friends.
  `python -m studio.remote_bridge --project <yaml>`
- Generate 26 voice profiles + bridge libraries on the remote box overnight (~13 hr). Uses STUDIO_REMOTE_* env vars.
  `ssh "$STUDIO_REMOTE_USER@$STUDIO_REMOTE_HOST" 'cd "$STUDIO_REMOTE_REPO" && python3 -m tv.voice.generate_alphabet_voices --letters all'`
- One-shot setup: installs Blender on the configured remote box, downloads 3 Polyhaven HDRIs, and runs a verification render.
  `ssh "$STUDIO_REMOTE_USER@$STUDIO_REMOTE_HOST" 'bash -s' < scripts/setup_photoreal_box_a.sh`
- Photoreal Cycles render on the configured remote box after preflight: PBR materials + HDRI lighting + scene composition.
  `ssh "$STUDIO_REMOTE_USER@$STUDIO_REMOTE_HOST" 'cd "$STUDIO_REMOTE_REPO" && python3 -m backends.blender_headless.scene_builder --character tv/characters/spore_oracle.yaml --scene mushroom_grove --out tv/output/oracle_photoreal.png'`
- Emit per-platform aspect variants (TikTok 9:16, IG 4:5, square 1:1, YouTube 16:9) in parallel.
  `python -m tv.multi_aspect --input <mp4> --output-dir <dir> --platforms tiktok,youtube`
- Generate an SRT subtitle file from a project. Auto-attached to every render.
  `python -m tv.srt_export --project <yaml> --out <srt>`
- Measure per-backend parallel speedup. Shows sequential vs N-worker timing.
  `python -m backends._parallel --bench --module backends.svg_cairo --fn apply_vector_flat`
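The scene-placement tool above maps a dropped thumbnail's y-coordinate to a multiplane z-position. A minimal sketch of that mapping, assuming a linear interpolation over a fixed depth range (the function name, depth bounds, and linearity are all illustrative, not the actual implementation):

```python
def y_to_multiplane_z(y: float, scene_height: float,
                      near_z: float = 0.0, far_z: float = 10.0) -> float:
    """Map a drop y-coordinate (0 = top of scene) to a multiplane depth.

    Thumbnails dropped higher in the scene land on farther planes; those
    near the bottom land on nearer planes. Linear interpolation is an
    assumption -- the real mapping may be stepped per plane.
    """
    if scene_height <= 0:
        raise ValueError("scene_height must be positive")
    t = min(max(y / scene_height, 0.0), 1.0)  # clamp to [0, 1]
    # y = 0 (top) -> far_z; y = scene_height (bottom) -> near_z
    return far_z - t * (far_z - near_z)
```

The resulting z would then feed into camera_params so the parallax at render time reflects the drop position.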
SOURCES — pluggable input modules
6 source modules under sources/. Studio pulls reference material from any of these into renders.
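A pluggable source module can be modeled as a small shared protocol. The sketch below is an assumption about the shape such modules might take, not the actual sources/ API; the class names and the item dict keys are hypothetical.

```python
from typing import Protocol

class Source(Protocol):
    """Assumed interface for a pluggable input module under sources/."""
    name: str
    def fetch(self, query: str) -> list[dict]:
        """Return reference items, e.g. {'title': ..., 'path': ...}."""
        ...

class LocalImageSource:
    """Toy implementation: serves pre-registered reference images."""
    name = "local_images"

    def __init__(self, items: list[dict]):
        self._items = items

    def fetch(self, query: str) -> list[dict]:
        # Case-insensitive substring match on the item title.
        return [i for i in self._items if query.lower() in i["title"].lower()]
```

Because every module exposes the same `fetch`, the studio can pull reference material from any registered source without knowing which backend (search, OpenCV capture, audio analysis) produced it.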
Dependencies: duckduckgo-search · opencv-python · librosa (optional, for beat-grid)
PIPELINE STATUS — runtime
CPU / parallelism / dependency versions. Read fresh on every page load.
- CPU cores: 2 (1 worker by default)
- Parallel pipeline: enabled
- Anthropic API key: NOT SET — Compose tab disabled
- Compute target: remote (default: local; STUDIO_COMPUTE_TARGET set)
- flask 3.1.3
- anthropic 0.97.0
- numpy 2.2.6
RECENT COMMITS — last 2
Local repo history. Refresh page after a commit to see it here.
- 606f983 sync v0.1.1 + security fixes from local (12 days ago)
- 90a7d65 v0.1: silent-film pipeline with persistent-character animation (12 days ago)
GAPS — what doesn't work / what's planned
- Photoreal_3d actual Cycles render (mvp): Days 1-5 fully scaffolded (bpy_photoreal + scene_builder + makehuman_bridge + pbr_materials with a 25-entry institute-palette PBR library). Days 6-7 = Blender 4.x + Polyhaven HDRIs + the actual render. No further code is needed; only the Blender install and 3 HDRI downloads remain.
- Alphabet voice library overnight run (mvp): Orchestration script ready (tv/voice/generate_alphabet_voices.py). Run on Box A: `python3 -m tv.voice.generate_alphabet_voices --letters all`. ~13 hr overnight. No further code is needed.
- Operative letter operator definitions (blocked): Blocked on user input. The cosmetic alphabet ships now; the semantic version replaces the parameter tables once the 26 operator definitions arrive.
- Brainrot v5 parallelization (planned): ~3-4 h refactor: precompute scheduler state up front (single thread), then dispatch per-frame application across workers. The pattern matches silent_film_parallel. Lower priority — the main render loop and post-process backends are already parallel.
- Public GitHub repo sync + landing site (planned): Local master is 20+ commits ahead of the public github.com/psiloceyeben/spore-animation-studios. Phase 3 of the apparatus plan (sync + sporeanimationstudios.com landing) is pending.
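The brainrot v5 plan above describes a precompute-then-dispatch pattern: build all scheduler state in a single thread, then fan per-frame application out across workers. A minimal sketch under those assumptions (the frame function and state shape are hypothetical, not the actual v5 scheduler):

```python
from concurrent.futures import ThreadPoolExecutor

def precompute_schedule(n_frames: int) -> list[dict]:
    """Single-threaded pass: decide per-frame parameters up front."""
    return [{"frame": i, "zoom": 1.0 + 0.01 * i} for i in range(n_frames)]

def apply_frame(state: dict) -> dict:
    """Pure per-frame application -- safe to run on any worker."""
    return {"frame": state["frame"], "scale": round(state["zoom"] ** 2, 4)}

def render_parallel(n_frames: int, workers: int = 4) -> list[dict]:
    schedule = precompute_schedule(n_frames)        # sequential: all shared state
    with ThreadPoolExecutor(max_workers=workers) as ex:
        # executor.map preserves input order, so frames come back in sequence.
        return list(ex.map(apply_frame, schedule))
```

Because the scheduler state is frozen before any worker starts, the per-frame step needs no locks, which is what makes this refactor cheap relative to parallelizing a stateful loop.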