- Consolidate scattered controls into unified status bar at bottom
- Replace auto-switching with a subtle "Try Mirror" hint when detection is stable
- Use flex layout instead of absolute positioning to prevent clipping
- Make AbacusDock a flex sibling of problem-area for proper layout
- Use responsive sizing with clamp() instead of matching problem height (see the sketch below)
- Add Video/Mirror toggle with clear text labels
- Container widens to 900px when dock is shown (vs 600px normally)
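A minimal sketch of the layout; only the 600px/900px widths and the flex-sibling arrangement come from this change, the clamp() bounds are illustrative:

```tsx
// Sketch: AbacusDock rendered as a flex sibling of the problem area.
export function PracticeLayout({ showDock }: { showDock: boolean }) {
  return (
    <div
      style={{
        display: 'flex', // flex instead of absolute positioning, so nothing clips
        gap: '1rem',
        maxWidth: showDock ? 900 : 600, // container widens when the dock is shown
        margin: '0 auto',
      }}
    >
      <div className="problem-area" style={{ flex: 1, minWidth: 0 }} />
      {showDock && (
        // responsive width, clamped instead of matching the problem height
        <div className="abacus-dock" style={{ width: 'clamp(200px, 30vw, 320px)' }} />
      )}
    </div>
  )
}
```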
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create VisionDetection.stories.tsx with interactive demos
- Add screenshot capture script for Storybook stories
- Update blog post with captured screenshots
- Include before/after comparison, progress gallery, and step demos
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace vague "Getting Started" section with step-by-step guide:
- Printing and attaching ArUco markers
- Docking the abacus in practice sessions
- Opening vision settings via camera icon
- Camera options (local vs phone)
- Automatic marker-based calibration
- Enabling vision and starting practice
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Introduces the new ML-powered vision system that watches physical
abacus use in real time, providing instant feedback as students
work through problems.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Enable auto-detection in DockedVisionFeed using ML column classifier
- Replace CV-based detection with useColumnClassifier hook
- Add concurrent inference prevention with isInferringRef
- Show model loading status in detection overlay
- Add detectedPrefixIndex prop to VerticalProblem for visual feedback
- Display ✓ checkmarks on terms completed via vision detection
- Connect vision detection to term tracking and auto-submit flow
The vision system now:
1. Detects abacus values via ML classification at 10fps
2. Shows visual feedback (checkmarks) as terms are completed
3. Triggers help mode when prefix sums are detected
4. Auto-submits when the correct answer is shown
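A minimal sketch of the detection loop, assuming a classifyFrame() helper (hypothetical name); only isInferringRef and the 10 fps cadence come from this change:

```tsx
import { useEffect, useRef } from 'react'

export function useDetectionLoop(classifyFrame: () => Promise<void>) {
  const isInferringRef = useRef(false)

  useEffect(() => {
    const id = setInterval(async () => {
      if (isInferringRef.current) return // drop this tick if inference is still running
      isInferringRef.current = true
      try {
        await classifyFrame() // ML column classification on the current frame
      } finally {
        isInferringRef.current = false
      }
    }, 100) // ~10 fps
    return () => clearInterval(id)
  }, [classifyFrame])
}
```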
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix column classifier to use [0,1] normalization (the model has an internal Rescaling layer that converts to [-1,1] for MobileNetV2); see the sketch below
- Remove quantization from model export (was corrupting weights)
- Add /vision-training/test page for direct model testing
- Add Python test script for local model verification
- Clean up verbose debug logging from classifier
- Add model tester to training results card
- Add training data hub modal and supporting components
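A sketch of the corrected preprocessing, assuming TensorFlow.js in the browser: dividing by 255 yields [0,1], and the model's own Rescaling layer maps that to the [-1,1] range MobileNetV2 expects, so normalizing to [-1,1] here would apply the rescaling twice:

```typescript
import * as tf from '@tensorflow/tfjs'

function preprocess(image: HTMLCanvasElement): tf.Tensor4D {
  return tf.tidy(() =>
    tf.browser
      .fromPixels(image) // uint8 pixels in [0,255]
      .toFloat()
      .div(255)          // [0,1]; the model rescales to [-1,1] internally
      .expandDims(0)     // add batch dimension
  ) as tf.Tensor4D
}
```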
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When training accuracy is low (<50%), the results card now shows
a yellow warning box explaining why accuracy might be poor and
what actions to take:
- Data imbalance: identifies underrepresented digits
- Insufficient data: recommends collecting more images
- Poor convergence: suggests checking data quality
- Unknown issues: fallback advice
Uses a context provider pattern to avoid prop drilling through
5 component levels (page → TrainingWizard → PhaseSection →
CardCarousel → ExpandedCard → ResultsCard).
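A minimal sketch of that provider (hypothetical names and state shape):

```tsx
import { createContext, useContext, type ReactNode } from 'react'

interface TrainingResults {
  accuracy: number
  perDigitCounts: Record<string, number>
}

const TrainingResultsContext = createContext<TrainingResults | null>(null)

// The page wraps the wizard once...
export function TrainingResultsProvider({ value, children }: {
  value: TrainingResults
  children: ReactNode
}) {
  return (
    <TrainingResultsContext.Provider value={value}>
      {children}
    </TrainingResultsContext.Provider>
  )
}

// ...and ResultsCard reads directly, five levels down, with no prop threading.
export const useTrainingResults = () => useContext(TrainingResultsContext)
```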
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Each tile now randomly transitions between showing the training image
and the numeral it represents. The animations are staggered with random
initial delays and intervals so tiles don't all animate together.
- Created AnimatedTile component with crossfade effect
- Random 3-8 second intervals between transitions
- 0.8s ease-in-out opacity transitions
- Random initial delays (0-10s) for natural feel
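An illustrative sketch of AnimatedTile; the timing constants match the list above, the rest is assumed:

```tsx
import { useEffect, useState } from 'react'

export function AnimatedTile({ src, digit }: { src: string; digit: number }) {
  const [showImage, setShowImage] = useState(true)

  useEffect(() => {
    let timer: ReturnType<typeof setTimeout>
    const tick = () => {
      setShowImage((prev) => !prev)
      timer = setTimeout(tick, 3000 + Math.random() * 5000) // 3-8s between transitions
    }
    timer = setTimeout(tick, Math.random() * 10000) // 0-10s initial stagger
    return () => clearTimeout(timer)
  }, [])

  const layer = (visible: boolean) => ({
    position: 'absolute' as const,
    inset: 0,
    opacity: visible ? 1 : 0,
    transition: 'opacity 0.8s ease-in-out', // crossfade
  })

  return (
    <div style={{ position: 'relative' }}>
      <img src={src} alt="" style={layer(showImage)} />
      <span style={layer(!showImage)}>{digit}</span>
    </div>
  )
}
```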
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The tiled background wasn't covering the entire viewport because
the inner grid had no explicit dimensions. Fixed by:
- Setting grid to 120vw x 120vh to overflow viewport
- Negative margins to center the oversized grid
- Increased tile count from 100 to 800 for full coverage
- Removed scale(1.2) since explicit sizing handles it
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Next.js App Router caches GET route handlers by default when they
don't use request-based data. This caused /api/vision-training/samples
to return cached "no data" responses even after data was collected.
Added `export const dynamic = 'force-dynamic'` to all vision-training
API routes that read from disk or run system commands:
- samples/route.ts
- hardware/route.ts
- preflight/route.ts
- sync/route.ts
- train/route.ts
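The shape of the fix in each route (simplified sketch; the directory path and response body are illustrative):

```typescript
// samples/route.ts
import { NextResponse } from 'next/server'
import { readdir } from 'node:fs/promises'

// Opt out of static caching: this handler reads from disk on every request.
export const dynamic = 'force-dynamic'

export async function GET() {
  const files = await readdir('data/vision-training/collected')
  return NextResponse.json({ count: files.length })
}
```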
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The videoRef callback in VisionCameraFeed was only firing once on mount,
before the video element existed, because of an early return when there
was no videoStream.
Added videoStream to the effect dependency array so it re-runs when the
stream becomes available.
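Simplified sketch of the corrected wiring:

```tsx
import { useEffect, useRef } from 'react'

function VisionCameraFeed({ videoStream }: { videoStream: MediaStream | null }) {
  const videoRef = useRef<HTMLVideoElement>(null)

  useEffect(() => {
    const video = videoRef.current
    if (!videoStream || !video) return // the old early return
    video.srcObject = videoStream
  }, [videoStream]) // re-runs once the stream (and thus the element) exists

  // the <video> element only renders when a stream exists
  return videoStream ? <video ref={videoRef} autoPlay muted /> : null
}
```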
Also:
- Add willReadFrequently option to canvas contexts in frameProcessor
and arucoDetection to fix console warnings about getImageData
- Add VISION_COMPONENTS.md documentation with required wiring checklist
- Add reference in CLAUDE.md to prevent this recurring mistake
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
After training completes, users can now test the model directly:
- New "Test Model" tab in results phase
- Uses CameraCapture with marker detection
- Runs inference at ~10 FPS and shows detected value + confidence
- Color-coded confidence display (green > 80%, yellow > 50%, red otherwise)
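The thresholds map to colors roughly like this (sketch):

```typescript
function confidenceColor(confidence: number): string {
  if (confidence > 0.8) return 'green'
  if (confidence > 0.5) return 'yellow'
  return 'red'
}
```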
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Training Data Management:
- Add bulk delete modal with timeline range selection
- Add deletion tracking (.deleted tombstone file) to prevent re-syncing; see the sketch after this list
- Add file-by-file comparison for sync (shows new/deleted counts)
- Improve image viewer with filtering, sorting, and batch operations
- Add delete toast with undo functionality
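A hypothetical sketch of the tombstone-aware comparison (file format and path assumed):

```typescript
import { readFile } from 'node:fs/promises'

// One deleted filename per line in the tombstone file.
async function loadTombstones(path: string): Promise<Set<string>> {
  try {
    return new Set((await readFile(path, 'utf8')).split('\n').filter(Boolean))
  } catch {
    return new Set() // no deletions recorded yet
  }
}

// Remote files count as "new" only if neither present locally nor tombstoned.
function diffFiles(remote: string[], local: Set<string>, deleted: Set<string>) {
  return {
    toSync: remote.filter((f) => !local.has(f) && !deleted.has(f)),
    excluded: remote.filter((f) => deleted.has(f)).length,
  }
}
```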
Shared Marker Detection:
- Create useMarkerDetection hook for ArUco marker detection
- Refactor DockedVisionFeed to use shared hook (removes ~100 lines)
- Add marker detection support to CameraCapture component
- Calibration now persists when markers are temporarily lost
Training Wizard UI:
- Add tabbed "Get More Data" section in DataCard
- Show excluded (deleted) file count in sync tab
- Improve phase navigation and state management
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add preflight API endpoint to check Python dependencies before training
- Create DependencyCard wizard step showing installed/missing packages
- Create reusable CameraCapture component supporting local + phone cameras
- Add inline training data capture directly in the training wizard
- Update venv setup to install all requirements from requirements.txt
- Block training start until all dependencies verified
The wizard now shows a dedicated Dependencies step between Hardware and
Config, displaying which packages are installed and providing pip install
commands for any missing packages.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Training images were incorrectly showing 2 columns instead of 1 because
the vision system confused "digits in answer" with "physical columns".
Root cause: For 2-digit problems, dock.columns was 2 (digits needed),
but the physical abacus has 4 columns. Training images were sliced into
2 parts, each containing 2 physical columns.
Fixes:
- ActiveSession: Use physicalAbacusColumns for training capture
- MyAbacus: Use physicalAbacusColumns for vision detection
- VisionSetupModal: Use physicalAbacusColumns for calibration
- frameProcessor: Add column margins (6% left/right) for isolation
- types/vision: Add ColumnMargins interface to CalibrationGrid
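A sketch of the corrected slicing, cropping by physical columns with the 6% margins:

```typescript
interface Rect { x: number; y: number; width: number; height: number }

function columnCrops(frame: Rect, physicalAbacusColumns: number): Rect[] {
  const colWidth = frame.width / physicalAbacusColumns
  const margin = colWidth * 0.06 // trim 6% on each side to isolate the column
  return Array.from({ length: physicalAbacusColumns }, (_, i) => ({
    x: frame.x + i * colWidth + margin,
    y: frame.y,
    width: colWidth - 2 * margin,
    height: frame.height,
  }))
}
```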
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Research confirmed TensorFlow publishes wheels for Linux x86_64 and ARM64
(including Synology NAS). The previous failures were instead due to:
1. Training scripts not being copied to Docker image
2. Venv path not being writable in container
Changes:
- Update isPlatformSupported() to include Linux ARM64 support
- Move venv to data/vision-training/.venv (mounted volume, writable)
- Add python3-venv to Docker image dependencies
- Copy training scripts to Docker image
- Create vision-training directories in Docker build
Sources:
- https://www.tensorflow.org/install/pip
- https://pypi.org/project/tensorflow/ (shows manylinux_2_17_aarch64 wheels)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add platform detection before attempting TensorFlow installation.
TensorFlow doesn't have wheels for ARM Linux (like Synology NAS),
so the hardware detection now returns a clear "Platform Not Supported"
message instead of failing during pip install.
Changes:
- Add isPlatformSupported() check in config.ts (see the sketch below)
- Check platform before attempting venv setup in hardware/route.ts
- Check platform before starting training in train/route.ts
- Show user-friendly message in HardwareCard for unsupported platforms
- Add data/vision-training/collected/ to .gitignore
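A sketch of the gate as this commit describes it (details assumed):

```typescript
// config.ts
import os from 'node:os'

export function isPlatformSupported(): boolean {
  const platform = os.platform()
  if (platform === 'darwin') return true // tensorflow-macos / tensorflow-metal
  if (platform === 'linux' && os.arch() === 'x64') return true // official wheels
  return false // e.g. ARM Linux on a Synology NAS
}
```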
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The vision source registration effect was not re-running when the first
remote camera frame arrived, because remoteLatestFrame was not in the
dependency array. The <img> element that remoteImageRef points to only
renders when remoteLatestFrame is truthy, so the effect would run before
the element existed and fail to register the source.
Now both local and remote cameras properly register their vision source
for training data collection when a correct answer is entered.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Transform training pipeline into a phase-based wizard stepper:
- Preparation phase: data check, hardware detection, config
- Training phase: setup, loading, training progress, export
- Results phase: final accuracy and train again option
Add sync UI to pull training images from production NAS:
- SSE-based progress updates during rsync (sketch below)
- Shows remote vs local image counts
- Skip option for users who don't need sync
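Rough shape of the SSE handler (simplified; the real route shells out to rsync and forwards its progress):

```typescript
// sync/route.ts
export const dynamic = 'force-dynamic'

export async function GET() {
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      const send = (data: unknown) =>
        controller.enqueue(encoder.encode(`data: ${JSON.stringify(data)}\n\n`))
      send({ status: 'starting' })
      // ...spawn rsync, send({ status: 'progress', ... }) per parsed line...
      send({ status: 'done' })
      controller.close()
    },
  })
  return new Response(stream, {
    headers: { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' },
  })
}
```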
New wizard components:
- TrainingWizard: main orchestrator with phase management
- PhaseSection: collapsible phase containers
- CardCarousel: left/center/right card positioning
- CollapsedCard: compact done/upcoming cards with rich previews
- ExpandedCard: full card with content for current step
- Individual card components for each step
APIs added:
- /api/vision-training/samples: check training data status
- /api/vision-training/sync: rsync from production with SSE
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Consolidated multiple confusing status messages into one clear flow:
- Status panel is now the single source of truth for what's happening
- "Preparing..." with explanation that first run takes 2-5 minutes
- Hardware panel just shows hardware (dash while loading, name when ready)
- Button always says "Start Training" (disabled state is visual enough)
- Error details shown once in status panel, with hint
- Simple "Retry" button in hardware panel on failure
Before: "Detecting..." + "Setting up..." + "Setting up Python environment..."
After: "Preparing..." + "Installing TensorFlow... First run may take 2-5 min"
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Removed redundant status messages:
- Hardware panel is the single source for setup state
- Button just says "Start Training" (disabled when not ready)
- Status panel only shows training progress, not setup state
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Status indicator and text now properly reflect:
- Blue pulsing dot + "Setting up Python environment..." while loading
- Red dot + "Setup failed" when hardware detection fails
- Green dot + "Ready to train" only when hardware is ready
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use the venv module's --clear flag to handle existing venv directories atomically (sketch below)
- Disable "Start Training" button until hardware setup succeeds
- Show "Setup Failed" state on button when hardware detection fails
- Add "Retry Setup" button when setup fails
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove incomplete venv directory before creating new one
- Clear setup cache on failure so subsequent calls can retry
- Fixes "File exists" error when venv creation was interrupted
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The vision training APIs now automatically set up the Python
environment when first called:
1. Detects if venv exists with tensorflow installed
2. If not, finds best Python (ARM on Apple Silicon for Metal GPU)
3. Creates venv and installs tensorflow (with Metal on M-series Macs)
4. Caches result so subsequent calls are instant
This eliminates the need to manually set up the venv before deployment.
The setup happens lazily on first hardware detection or training request.
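A sketch of the lazy, cached setup (helper names hypothetical):

```typescript
let setupPromise: Promise<string> | null = null

export function ensurePythonEnv(): Promise<string> {
  // One shared promise: concurrent API calls wait on the same setup,
  // and a completed setup makes later calls instant.
  setupPromise ??= doSetup().catch((err) => {
    setupPromise = null // drop the cache on failure so the next call retries
    throw err
  })
  return setupPromise
}

async function doSetup(): Promise<string> {
  // 1. return the venv's python if tensorflow is already importable
  // 2. otherwise pick the best interpreter (native ARM on Apple Silicon)
  // 3. create the venv and pip-install tensorflow (plus Metal on M-series)
  return '/path/to/venv/bin/python' // placeholder
}
```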
Also includes various other working tree changes that were in progress.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Extracts TRAINING_PYTHON, TRAINING_SCRIPTS_DIR, and PYTHON_ENV to a
shared config.ts file. Both hardware detection and training now import
from the same source, making it impossible for them to diverge.
This ensures hardware detection always shows exactly what training
will use.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Ensures the training script uses the same Python venv with
tensorflow-metal as the hardware detection, so training
actually runs on the Metal GPU.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The hardware detection was using system python3 which was x86_64
(Rosetta) instead of native ARM. This prevented TensorFlow from
detecting the Metal GPU on Apple Silicon.
Now uses a Python 3.11 venv with tensorflow-macos and tensorflow-metal
for proper M4 Max GPU detection.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use absolute path for script instead of relative
- Reorder event handlers to prevent race conditions
- Add explicit timeout handling
- Better error messages for spawn failures
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The spawned child process was named 'process', which shadowed the
Node.js global 'process' and caused 'Cannot access process before
initialization' when accessing process.env.
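In miniature, the bug and the fix:

```typescript
import { spawn } from 'node:child_process'

// Before (buggy): `const process = spawn(...)` shadowed the global, so the
// `process.env` read below hit the temporal dead zone.
// After: name the child anything else.
const child = spawn('python3', ['detect_hardware.py'], {
  env: { ...process.env }, // global `process` is reachable again
})
child.on('close', (code) => console.log(`exited with ${code}`))
```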
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds hardware detection that queries TensorFlow to determine exactly
which device will be used for training:
- New Python script detect_hardware.py queries TensorFlow for available
devices (GPU/Metal/CPU)
- New API endpoint GET /api/vision-training/hardware runs the script
and caches results for 1 hour
- Training page shows hardware prominently before the "Start Training"
button with GPU/CPU badge, TensorFlow version, and system memory
On Apple Silicon Macs, shows the chip type (e.g., "Apple M3 Pro GPU (Metal)").
On NVIDIA systems, shows the GPU name from CUDA.
Falls back to CPU with processor info if no GPU is available.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Forgot to commit this file in the previous commit, causing
the CI build to fail with "Module not found: Can't resolve
'@/lib/vision/trainingData'"
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implements a web UI to initiate ML model training from the browser:
- Python script now outputs JSON progress with --json-progress flag
- SSE API endpoint streams training progress in real-time
- Training UI page with configuration (epochs, batch size, augmentation)
- Real-time epoch metrics display with loss/accuracy charts
- Dataset info display and training history log
- Cancel button to abort training in progress
- Auto-export to TensorFlow.js format after training
Enables training the abacus column classifier on devbox after syncing
training data from production, with visual feedback in the browser.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Extract TrainingImageViewer as reusable presentational component
- Add Storybook stories showcasing all UI states
- Use AbacusStatic to generate mock column images for demos
- Include stories: default, empty, loading, error, grouped by player/session
- Add AllDigitsShowcase story explaining abacus digit representation
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add /vision-training page to view collected abacus column images
- Add API routes for listing and serving training images
- Add sync-training-data.sh script to pull data from production NAS
- Remove synthetic training data generation (now using real collected data)
- Update README to document new real-data workflow
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Migration 0060 had a timestamp earlier than 0059's, causing drizzle to skip it.
Fixed the timestamp to sort after 0059 (Jan 8 vs Jan 7).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Production was failing with 'no such column: game_break_settings'.
The column was added to the schema, but the migration was missing.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When arcade games go fullscreen during game breaks, they hide the
GameBreakScreen header containing the timer. This moves the game break
timer display into PracticeSubNav, which stays visible during fullscreen.
Changes:
- Add GameBreakHudData interface with startTime, maxDurationMs, onSkip
- Add gameBreakHud prop to PracticeSubNav for game break timer display
- Render timer with progress bar, color transitions (green→yellow→red)
- Hide session HUD when on game break (mutually exclusive display)
- Remove duplicate header from GameBreakScreen playing phase
- Update/create Storybook stories for both components
- Fix lint issues (unused import, useless fragment)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Key insight from debugging: session-state only fires on join, not during
gameplay. Must listen to move-accepted to detect game phase transitions.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When a student completes a game during a practice game break, the break
now ends automatically instead of waiting for the timer to expire.
Implementation:
- PracticeGameModeProvider now listens for move-accepted socket events
(not just session-state, which only fires on join)
- Detects transition to 'results' phase and fires onGameComplete callback
- GameBreakScreen wires up the callback to dismiss the break
Note: Not all games have a 'results' phase. Endless games like complement-race
will still end via timeout or manual skip, which is expected.
Also adds architecture documentation for arcade room + practice integration.
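A sketch of the listener (event names from this change; the payload shape and socket library are assumed):

```typescript
import type { Socket } from 'socket.io-client'

function watchForGameCompletion(socket: Socket, onGameComplete: () => void) {
  socket.on('move-accepted', (payload: { phase?: string }) => {
    // session-state only fires on join, so the results phase must be
    // inferred from gameplay events
    if (payload.phase === 'results') onGameComplete()
  })
}
```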
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add MIN_SECONDS_PER_PROBLEM (10s) to clamp the average time calculation
and prevent generating 100+ problems when student timing data is
anomalously low (e.g., 2-3 seconds per problem).
- Add MIN_SECONDS_PER_PROBLEM constant in session-timing.ts
- Apply Math.max() clamp in generateSessionPlan()
- Fix seed script to use realistic 30s timing instead of 5s
A 5-minute session at 10 sec/problem = 30 problems (reasonable)
A 5-minute session at 2 sec/problem = 150 problems (too many)
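The clamp itself is one line (sketch of the session-timing.ts change):

```typescript
export const MIN_SECONDS_PER_PROBLEM = 10

export function plannedProblemCount(sessionSeconds: number, avgSecondsPerProblem: number): number {
  // Anomalously fast timing data (2-3s/problem) gets clamped to 10s
  const perProblem = Math.max(avgSecondsPerProblem, MIN_SECONDS_PER_PROBLEM)
  return Math.floor(sessionSeconds / perProblem) // 300s / 10s = 30 problems
}
```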
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add the foundational infrastructure for game break settings without
the room/player context integration:
- Add GameBreakSettings interface to session-plans schema
- Add gameBreakSettings column to session_plans DB table
- Add toggle and duration selector UI to StartPracticeModal
- Create useGameBreakTimer hook for countdown timer logic (sketch below)
- Add unit tests for useGameBreakTimer (9 tests)
- Add id field to StudentInfo interface for future player context
- Update stories with gameBreakSettings mock data
The room creation and player joining logic still needs iteration
to properly align browser viewerId with student DBPlayer identity.
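A hypothetical sketch of the hook's contract:

```typescript
import { useEffect, useState } from 'react'

export function useGameBreakTimer(startTime: number, maxDurationMs: number, onExpire: () => void) {
  const [remainingMs, setRemainingMs] = useState(maxDurationMs)

  useEffect(() => {
    const id = setInterval(() => {
      const left = Math.max(0, startTime + maxDurationMs - Date.now())
      setRemainingMs(left)
      if (left === 0) {
        clearInterval(id)
        onExpire() // break ends when the countdown reaches zero
      }
    }, 1000)
    return () => clearInterval(id)
  }, [startTime, maxDurationMs, onExpire])

  return remainingMs
}
```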
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add global mocks for next-intl and @soroban/abacus-react in test setup
- Fix typstGenerator.ts: V4 manual mode uses displayRules, not booleans
- Fix tenFrames.test.ts: Add missing V4 fields (includeAnswerKey, etc.)
- Update snapshots for pedagogicalCore and journey-simulator tests
This fixes 18 worksheet tests that were failing due to:
1. Missing test context providers (now mocked globally)
2. Incorrect V4 manual mode handling (was using V3-style boolean flags)
3. Missing required V4 config fields in test fixtures
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Components now require WorksheetParsingContext instead of accepting
optional streaming props as fallbacks. This simplifies the codebase
by removing ~300 lines of fallback logic.
Changes:
- Create MockWorksheetParsingProvider for Storybook/testing
- Simplify OfflineWorkSection to use required context
- Simplify PhotoViewerEditor to use required context (~100 lines removed)
- Update stories to wrap with mock provider
- Remove unused StreamingParseState/StreamingReparseState types
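A minimal sketch of the required-context pattern plus its mock (the real state shape is richer):

```tsx
import { createContext, useContext, type ReactNode } from 'react'

interface WorksheetParsingState { status: 'idle' | 'parsing' | 'done' }

const WorksheetParsingContext = createContext<WorksheetParsingState | null>(null)

export function useWorksheetParsing(): WorksheetParsingState {
  const ctx = useContext(WorksheetParsingContext)
  // Required, not optional: failing loudly replaces the old fallback logic
  if (!ctx) throw new Error('Wrap in a WorksheetParsingProvider')
  return ctx
}

// Canned state so stories and tests render without a live parsing pipeline
export function MockWorksheetParsingProvider({ children }: { children: ReactNode }) {
  return (
    <WorksheetParsingContext.Provider value={{ status: 'done' }}>
      {children}
    </WorksheetParsingContext.Provider>
  )
}
```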
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Remove streaming hooks from useWorksheetParsing.ts now that parsing
state is centralized in WorksheetParsingContext:
- Remove useStreamingParse hook (~300 lines)
- Remove useStreamingReparse hook (~300 lines)
- Remove unused imports (useState, useRef, useCallback)
- Update components to use context with prop fallback
- Fix TypeScript errors in PhotoViewerEditor state mapping
- Add missing props to Storybook stories
The streaming functionality is now handled by WorksheetParsingContext
which provides centralized state management via a reducer pattern.
Type interfaces are preserved for backward-compatible prop types.
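A sketch of what "reducer pattern" means here (state and action names illustrative):

```typescript
type ParseState =
  | { status: 'idle' }
  | { status: 'streaming'; partial: string }
  | { status: 'done'; result: string }
  | { status: 'error'; message: string }

type ParseAction =
  | { type: 'start' }
  | { type: 'chunk'; text: string }
  | { type: 'complete'; result: string }
  | { type: 'fail'; message: string }

function parsingReducer(state: ParseState, action: ParseAction): ParseState {
  switch (action.type) {
    case 'start':
      return { status: 'streaming', partial: '' }
    case 'chunk':
      return state.status === 'streaming'
        ? { status: 'streaming', partial: state.partial + action.text }
        : state
    case 'complete':
      return { status: 'done', result: action.result }
    case 'fail':
      return { status: 'error', message: action.message }
  }
}
```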
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>