- Add torch/flashlight toggle (when available)
- Add camera flip button (when multiple cameras)
- Add lighting preset selector with auto-detection based on frame analysis
- Add experimental finger occlusion mode for better edge detection
- Persist scanner settings per-user in database
- Add responsive ScannerControlsDrawer with compact mobile layout
- Hide sublabels on small screens, show on tablets+
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Auto-generated fresh SVG examples and unified gallery from latest templates.
Includes comprehensive crop mark demonstrations with before/after comparisons.
Files updated:
- packages/templates/gallery-unified.html
🤖 Generated with GitHub Actions
Co-Authored-By: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
- Add NavSyncIndicator component for compact sync status in nav bar
- Shows sync availability, new items count, syncing progress
- Dropdown with sync actions and history
- States: loading, unavailable, in-sync, new-available, syncing, error
- Add useSyncStatus hook for managing sync state
- Fetches sync status from API
- Handles sync execution with SSE progress
- Tracks sync history
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add multi-selection support with click to toggle, shift+click for range
- Add bulk delete for selected items (both model types)
- Add bulk reclassify for column-classifier (move images to different digit)
- Show bulk selection panel when 2+ items selected with action buttons
- Single click now also opens detail panel for the clicked item
- Add selection indicators on grid items (checkmark badge)
- Add hover actions on grid items (view details, delete buttons)
- Update grid items to show focused vs selected states with different colors
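The selection rules above (click to toggle, shift+click for range) can be sketched as a pure state transition. This is an illustrative model only, not the real component code; names and the anchor-tracking detail are assumptions:

```typescript
// Plain click toggles one item; shift+click selects the whole range
// between the last clicked index (the anchor) and the new index.
function applyClick(
  selected: Set<number>,
  anchor: number | null,
  index: number,
  shiftKey: boolean,
): { selected: Set<number>; anchor: number } {
  const next = new Set(selected);
  if (shiftKey && anchor !== null) {
    const [lo, hi] = anchor < index ? [anchor, index] : [index, anchor];
    for (let i = lo; i <= hi; i++) next.add(i); // select the inclusive range
  } else if (next.has(index)) {
    next.delete(index); // plain click on a selected item toggles it off
  } else {
    next.add(index); // plain click on an unselected item toggles it on
  }
  return { selected: next, anchor: index };
}
```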
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix sync status fetch URL from non-existent /api/vision-training/sync/status
to the correct /api/vision-training/sync endpoint
- Add modelType query parameter to both GET (status) and POST (sync) requests
- Enable sync feature for both boundary-detector and column-classifier
(was previously restricted to column-classifier only)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create unified data panel architecture replacing separate BoundaryDataPanel and ColumnClassifierDataPanel
- Add MiniBoundaryTester with SVG overlay showing ground truth vs predicted corners
- Add MiniColumnTester with prediction badge overlay showing digit and correct/wrong status
- Support preloading images in full test page via ?image= query param
- Fix overflow on detail panel to allow scrolling
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Track all training data sync operations with detailed metrics:
- Add vision_training_sync_history table with SQLite schema
- Record sync start/end times, file counts, tombstone stats
- Prune tombstone entries for files that no longer exist on remote
- Show sync history indicator in ColumnClassifierDataPanel
New files:
- vision-training-sync-history.ts - DB schema for sync history
- sync/history/route.ts - API to query sync history
- SyncHistoryIndicator.tsx - UI component showing recent syncs
The tombstone pruning keeps local tombstone files from growing without bound:
after each sync, entries for files already deleted on the remote are removed,
since there is no longer any risk of re-syncing them.
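The pruning rule can be sketched as follows. This is a hypothetical shape, not the real implementation; a tombstone entry is only needed while the deleted file still exists on the remote (and could therefore be re-synced):

```typescript
// Keep an entry only while the remote still has the file; once the file
// is gone remotely, re-sync is impossible and the entry is dead weight.
function pruneTombstone(
  tombstone: string[],      // filenames previously deleted locally
  remoteFiles: Set<string>, // filenames currently present on remote
): string[] {
  return tombstone.filter((name) => remoteFiles.has(name));
}
```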
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
All training data deletions now go through shared utility functions that
ensure deletions are recorded to tombstone files (preventing re-sync from
production).
- Add trainingDataDeletion.ts with deleteColumnClassifierSample() and
deleteBoundaryDetectorSample() functions
- Update bulk delete endpoint to use shared utility (was missing tombstone)
- Update single-file delete endpoint to use shared utility
- Update boundary-samples delete to use shared utility (was missing tombstone)
- Update sync route to use shared readTombstone() function
- Add comprehensive unit tests (17 tests covering validation, deletion,
tombstone recording, and edge cases)
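The core invariant of the shared utility is that file removal and tombstone recording happen in one place, so no endpoint can forget the tombstone write. A minimal sketch, with an in-memory store standing in for the filesystem (all names here are assumptions, not the real API):

```typescript
type Store = { files: Set<string>; tombstone: string[] };

// Every deletion path funnels through this one function, which both
// removes the sample AND records it so sync won't restore it.
function deleteSample(store: Store, filename: string): boolean {
  if (!store.files.has(filename)) return false; // validate before mutating
  store.files.delete(filename);                 // remove the sample
  store.tombstone.push(filename);               // record the deletion
  return true;
}
```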
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Image route now tries both .png and .jpg extensions (passive captures are JPEG)
- Session detail page guards against undefined result/config values
- Prevents "Cannot read properties of undefined" errors
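The extension fallback amounts to probing candidate paths in order. A hedged sketch (the helper name, the `exists` injection, and the probe order are assumptions, not the real route code):

```typescript
// Passive captures are saved as JPEG while manual captures are PNG,
// so the image route probes both extensions before giving up.
function resolveImagePath(
  base: string,
  exists: (path: string) => boolean, // stand-in for an fs existence check
): string | null {
  for (const ext of [".png", ".jpg"]) {
    const candidate = base + ext;
    if (exists(candidate)) return candidate;
  }
  return null; // neither extension found, so the route can 404
}
```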
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Major changes:
- Add passive boundary capture during practice sessions via DockedVisionFeed
- Create BoundaryFrameFilters component with capture type, session, player,
and time range filtering using TimelineRangeSelector
- Add sessionId/playerId metadata to boundary frame annotations
- Update boundary-samples API to store and return session/player context
- Improve boundary detector training pipeline with marker masking
- Add preprocessing pipeline preview in training data browser
- Update model sharding (2 shards instead of 1)
Files added:
- BoundaryFrameFilters.tsx - Filter UI component
- usePassiveBoundaryCapture.ts - Hook for passive training data collection
- saveBoundarySample.ts - Shared utility for saving boundary samples
- preview-augmentation/route.ts - API for preprocessing pipeline preview
- preview-masked/route.ts - API for marker masking preview
- marker_masking.py, pipeline_preview.py - Python training utilities
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Implement shared nav bar and path-based model selection for vision training pages.
Changes:
- Add useModelType() hook to get model type from URL path
- Add VisionTrainingNav component with model selector dropdown and tab links
- Add [model]/layout.tsx with fixed nav and --nav-height CSS variable
- Move pages under [model]/ directory structure:
- /vision-training/[model] - Data hub
- /vision-training/[model]/train - Training wizard
- /vision-training/[model]/test - Model tester
- /vision-training/[model]/sessions - Sessions list
- /vision-training/[model]/sessions/[id] - Session detail
- Remove Model Card from training wizard (model selection now via URL/nav)
- Update TrainingWizard to use modelType from URL (always defined, not null)
- Remove localStorage restoration of model type
- Add registry.tsx for model metadata
URL is now single source of truth for model type. Browser history and
deep linking work correctly.
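The path parsing behind useModelType() can be sketched as a pure function. The real hook presumably wraps Next.js usePathname(); the segment layout below follows the `/vision-training/[model]/...` structure described above, and everything else is illustrative:

```typescript
const MODEL_TYPES = ["boundary-detector", "column-classifier"] as const;
type ModelType = (typeof MODEL_TYPES)[number];

// Derive the model type from the URL path, the single source of truth.
function parseModelType(pathname: string): ModelType | null {
  const segments = pathname.split("/").filter(Boolean);
  if (segments[0] !== "vision-training") return null;
  const candidate = segments[1] as ModelType;
  return MODEL_TYPES.includes(candidate) ? candidate : null;
}
```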
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add boundary detector ML model infrastructure (MobileNetV2-based)
- Add training script for boundary detector (train_model.py)
- Add useBoundaryDetector hook for browser inference
- Add BoundaryCameraTester for real-time camera testing
- Add BoundaryImageTester for static image testing
- Add sync API support for boundary detector training data
- Add model type selector on test page (column classifier vs boundary detector)
- Add marker inpainting for training data preprocessing
- Update training wizard to support both model types
The boundary detector aims to detect abacus corners without ArUco markers,
using ML to predict corner positions from raw camera frames. Currently
requires more training data for accurate predictions.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Data augmentation doesn't work well for abacus column recognition,
so remove the option entirely. Training now always runs without
augmentation (--no-augmentation flag always passed to Python script).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
1. VisionSetupModal too tall - constrain modal height and video aspect ratio
2. Modal not draggable after setup - add drag constraint ref to motion.div
3. Remote camera URL changes on reload - prevent race condition by checking
localStorage before rendering QR code component
4. "Waiting for remote connection" stuck - add 15s timeout with retry/settings buttons
5. Auto-undocking on problem progression - remove incorrect isDockedByUser=false
from unregisterDock() which fired during React re-renders
6. Re-docking goes to fullscreen - check dock existence instead of visibility
Also includes updated ML model weights (non-quantized export).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Model re-exported without the --quantize_uint8 flag, which had been
corrupting weights. Size increased from ~556KB to ~2.2MB, but predictions
should now be correct in the browser.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The --quantize_uint8 flag in tensorflowjs_converter corrupts model
weights, causing completely wrong predictions in the browser even
though Python testing works fine.
This was documented in .claude/CLAUDE.md but the training script
still used quantization. Model size increases (556KB → 2.2MB) but
predictions are now correct.
Removed quantization from all 3 export paths:
- SavedModel → GraphModel conversion
- Keras → LayersModel fallback
- Direct Python API fallback
After this fix, re-run training via /vision-training/train to get
a working model.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Re-trained model with additional training samples for improved
digit detection accuracy.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Dock sizing:
- Larger on desktop: up to 240px wide, 300px tall (md breakpoint)
- Smaller on mobile: 140-180px wide, 180-240px tall (base)
Mirror hint:
- Once shown, stays visible until dismissed or mirror enabled
- No more flashing when stability temporarily drops
- Added dismiss (✕) button to hide hint for rest of session
- Clicking "Try Mirror" still enables mirror mode
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add explicit height constraint alongside width for the docked abacus.
Both vision-enabled and vision-disabled modes now share the same
container size constraints:
- Width: clamp(140px, 16vw, 200px) - about 2/3 of previous
- Height: clamp(180px, 24vh, 260px) - prevents going under nav
Also reduced container max-width from 900px to 780px to match.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Use the same flex column layout for both vision-enabled and digital
abacus modes. The status bar is now at the bottom (not overlapping the
abacus columns) with VisionIndicator on left and undock button on right.
This matches the DockedVisionFeed layout for consistency.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Move the undock button into the DockedVisionFeed status bar alongside
the disable-vision button. This prevents the buttons from overlapping.
When vision is enabled, the dock-controls overlay is now hidden since
all controls are in the unified status bar.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Consolidate scattered controls into unified status bar at bottom
- Replace auto-switching with subtle "Try Mirror" hint when detection stable
- Use flex layout instead of absolute positioning to prevent clipping
- Make AbacusDock a flex sibling of problem-area for proper layout
- Use responsive sizing with clamp() instead of matching problem height
- Add Video/Mirror toggle with clear text labels
- Container widens to 900px when dock is shown (vs 600px normally)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Create VisionDetection.stories.tsx with interactive demos
- Add screenshot capture script for Storybook stories
- Update blog post with captured screenshots
- Include before/after comparison, progress gallery, and step demos
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Replace vague "Getting Started" section with step-by-step guide:
- Printing and attaching ArUco markers
- Docking the abacus in practice sessions
- Opening vision settings via camera icon
- Camera options (local vs phone)
- Automatic marker-based calibration
- Enabling vision and starting practice
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Introduces the new ML-powered vision system that watches physical
abacus use in real-time, providing instant feedback as students
work through problems.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Enable auto-detection in DockedVisionFeed using ML column classifier
- Replace CV-based detection with useColumnClassifier hook
- Add concurrent inference prevention with isInferringRef
- Show model loading status in detection overlay
- Add detectedPrefixIndex prop to VerticalProblem for visual feedback
- Display ✓ checkmarks on terms completed via vision detection
- Connect vision detection to term tracking and auto-submit flow
The vision system now:
1. Detects abacus values via ML classification at 10fps
2. Shows visual feedback (checkmarks) as terms are completed
3. Triggers help mode when prefix sums are detected
4. Auto-submits when the correct answer is shown
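Steps 2-4 hinge on matching the detected abacus value against the running prefix sums of the problem's terms. A sketch of that matching under assumed names (not the real term-tracking code):

```typescript
// Returns how many leading terms the detected value accounts for
// (as the index of the last completed term), or -1 for no match.
// A match on the final prefix sum is the full answer → auto-submit.
function matchPrefix(terms: number[], detected: number): number {
  let sum = 0;
  for (let i = 0; i < terms.length; i++) {
    sum += terms[i];
    if (sum === detected) return i; // terms 0..i are complete → checkmarks
  }
  return -1;
}
```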
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Fix column classifier to use [0,1] normalization (model has internal
Rescaling layer that converts to [-1,1] for MobileNetV2)
- Remove quantization from model export (was corrupting weights)
- Add /vision-training/test page for direct model testing
- Add Python test script for local model verification
- Clean up verbose debug logging from classifier
- Add model tester to training results card
- Add training data hub modal and supporting components
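The normalization contract in the first fix above can be sketched as follows: the browser side feeds [0,1] floats (pixel/255), and the model's own Rescaling layer then maps [0,1] to [-1,1] as MobileNetV2 expects, so doing the [-1,1] step in JS too would double-apply it. Illustrative only:

```typescript
// Convert raw RGBA/grayscale bytes to [0,1] floats. The [-1,1] rescale
// is intentionally NOT done here; the model's internal Rescaling layer
// owns that step.
function toModelInput(pixels: Uint8ClampedArray): Float32Array {
  const out = new Float32Array(pixels.length);
  for (let i = 0; i < pixels.length; i++) {
    out[i] = pixels[i] / 255; // [0,1] only
  }
  return out;
}
```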
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
When training accuracy is low (<50%), the results card now shows
a yellow warning box explaining why accuracy might be poor and
what actions to take:
- Data imbalance: identifies underrepresented digits
- Insufficient data: recommends collecting more images
- Poor convergence: suggests checking data quality
- Unknown issues: fallback advice
Uses a context provider pattern to avoid prop drilling through
5 component levels (page → TrainingWizard → PhaseSection →
CardCarousel → ExpandedCard → ResultsCard).
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Each tile now randomly transitions between showing the training image
and the numeral it represents. The animations are staggered with random
initial delays and intervals so tiles don't all animate together.
- Created AnimatedTile component with crossfade effect
- Random 3-8 second intervals between transitions
- 0.8s ease-in-out opacity transitions
- Random initial delays (0-10s) for natural feel
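The stagger scheduling above can be sketched as a small pure function, with the random source injected so the sketch stays deterministic and testable (names and structure are assumptions):

```typescript
// Each tile gets an independent initial delay (0-10s) and per-flip
// interval (3-8s), so tiles never animate in lockstep.
function tileSchedule(
  rand: () => number, // uniform [0, 1)
): { initialDelayMs: number; intervalMs: number } {
  return {
    initialDelayMs: rand() * 10_000,    // 0-10s before the first flip
    intervalMs: 3_000 + rand() * 5_000, // 3-8s between flips
  };
}
```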
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The tiled background wasn't covering the entire viewport because
the inner grid had no explicit dimensions. Fixed by:
- Setting grid to 120vw x 120vh to overflow viewport
- Negative margins to center the oversized grid
- Increased tile count from 100 to 800 for full coverage
- Removed scale(1.2) since explicit sizing handles it
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Next.js App Router caches GET route handlers by default when they
don't use request-based data. This caused /api/vision-training/samples
to return cached "no data" responses even after data was collected.
Added `export const dynamic = 'force-dynamic'` to all vision-training
API routes that read from disk or run system commands:
- samples/route.ts
- hardware/route.ts
- preflight/route.ts
- sync/route.ts
- train/route.ts
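The minimal shape of one of these route handlers looks like the following. The exported `dynamic` constant is the actual Next.js App Router opt-out; the handler body and helper are stand-ins, not the real file:

```typescript
// Opt this route handler out of App Router caching: it reads from disk
// on every request, so a cached response would go stale.
export const dynamic = "force-dynamic";

async function countSamplesOnDisk(): Promise<number> {
  // Hypothetical stand-in for an fs scan of the training-data directory.
  return 0;
}

export async function GET(): Promise<Response> {
  const count = await countSamplesOnDisk();
  return Response.json({ count });
}
```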
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The videoRef callback in VisionCameraFeed was only firing once on mount,
before the video element existed (due to early return when no videoStream).
Added videoStream to the effect dependency array so it re-runs when the
stream becomes available.
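This is the generic "missing dependency" failure: an effect that reads a value must list it in its dependency array, or it never re-runs when the value changes. A tiny model of React's per-slot Object.is comparison shows why adding the stream/frame to the array fixes it (illustrative, not React's actual source):

```typescript
// Returns true when React would re-run the effect: on first render,
// or when any dependency slot fails an Object.is comparison.
function depsChanged(prev: unknown[] | null, next: unknown[]): boolean {
  if (prev === null) return true; // first render: always run
  if (prev.length !== next.length) return true;
  return next.some((dep, i) => !Object.is(dep, prev[i]));
}
```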
Also:
- Add willReadFrequently option to canvas contexts in frameProcessor
and arucoDetection to fix console warnings about getImageData
- Add VISION_COMPONENTS.md documentation with required wiring checklist
- Add reference in CLAUDE.md to prevent this recurring mistake
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
After training completes, users can now test the model directly:
- New "Test Model" tab in results phase
- Uses CameraCapture with marker detection
- Runs inference at ~10 FPS and shows detected value + confidence
- Color-coded confidence display (green > 80%, yellow > 50%, red otherwise)
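The color thresholds above, as a small pure function. The mapping follows the thresholds stated in this commit; exact boundary handling in the real component may differ:

```typescript
type ConfidenceColor = "green" | "yellow" | "red";

function confidenceColor(confidence: number): ConfidenceColor {
  if (confidence > 0.8) return "green";  // high confidence
  if (confidence > 0.5) return "yellow"; // usable but shaky
  return "red";                          // don't trust this prediction
}
```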
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Training Data Management:
- Add bulk delete modal with timeline range selection
- Add deletion tracking (.deleted tombstone file) to prevent re-syncing
- Add file-by-file comparison for sync (shows new/deleted counts)
- Improve image viewer with filtering, sorting, and batch operations
- Add delete toast with undo functionality
Shared Marker Detection:
- Create useMarkerDetection hook for ArUco marker detection
- Refactor DockedVisionFeed to use shared hook (removes ~100 lines)
- Add marker detection support to CameraCapture component
- Calibration now persists when markers are temporarily lost
Training Wizard UI:
- Add tabbed "Get More Data" section in DataCard
- Show excluded (deleted) file count in sync tab
- Improve phase navigation and state management
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Add preflight API endpoint to check Python dependencies before training
- Create DependencyCard wizard step showing installed/missing packages
- Create reusable CameraCapture component supporting local + phone cameras
- Add inline training data capture directly in the training wizard
- Update venv setup to install all requirements from requirements.txt
- Block training start until all dependencies verified
The wizard now shows a dedicated Dependencies step between Hardware and
Config, displaying which packages are installed and providing pip install
commands for any missing packages.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Training images were incorrectly showing 2 columns instead of 1 because
the vision system confused "digits in answer" with "physical columns".
Root cause: For 2-digit problems, dock.columns was 2 (digits needed),
but the physical abacus has 4 columns. Training images were sliced into
2 parts, each containing 2 physical columns.
Fixes:
- ActiveSession: Use physicalAbacusColumns for training capture
- MyAbacus: Use physicalAbacusColumns for vision detection
- VisionSetupModal: Use physicalAbacusColumns for calibration
- frameProcessor: Add column margins (6% left/right) for isolation
- types/vision: Add ColumnMargins interface to CalibrationGrid
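The corrected slicing can be sketched as follows: crops are computed from the physical column count (e.g. 4 on the real abacus), never from digits-in-answer, with the 6% side margins for isolation. Pixel math is illustrative, not the real frameProcessor code:

```typescript
// Compute the horizontal crop window for one physical column,
// trimming a margin from each side so neighboring beads don't bleed in.
function columnCrop(
  frameWidth: number,
  physicalColumns: number, // physical columns, NOT digits in the answer
  columnIndex: number,
  marginFraction = 0.06,   // 6% trimmed from each side
): { x: number; width: number } {
  const colWidth = frameWidth / physicalColumns;
  const margin = colWidth * marginFraction;
  return {
    x: Math.round(columnIndex * colWidth + margin),
    width: Math.round(colWidth - 2 * margin),
  };
}
```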
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Research confirmed TensorFlow has wheels for Linux ARM64 / aarch64 (including
Synology NAS). The previous failures were due to:
1. Training scripts not being copied to Docker image
2. Venv path not being writable in container
Changes:
- Update isPlatformSupported() to include Linux ARM64 support
- Move venv to data/vision-training/.venv (mounted volume, writable)
- Add python3-venv to Docker image dependencies
- Copy training scripts to Docker image
- Create vision-training directories in Docker build
Sources:
- https://www.tensorflow.org/install/pip
- https://pypi.org/project/tensorflow/ (shows manylinux_2_17_aarch64 wheels)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add platform detection before attempting TensorFlow installation.
TensorFlow doesn't have wheels for ARM Linux (like Synology NAS),
so the hardware detection now returns a clear "Platform Not Supported"
message instead of failing during pip install.
Changes:
- Add isPlatformSupported() check in config.ts
- Check platform before attempting venv setup in hardware/route.ts
- Check platform before starting training in train/route.ts
- Show user-friendly message in HardwareCard for unsupported platforms
- Add data/vision-training/collected/ to .gitignore
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
The vision source registration effect was not re-running when the first
remote camera frame arrived, because remoteLatestFrame was not in the
dependency array. The <img> element that remoteImageRef points to only
renders when remoteLatestFrame is truthy, so the effect would run before
the element existed and fail to register the source.
Now both local and remote cameras properly register their vision source
for training data collection when a correct answer is entered.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Transform training pipeline into a phase-based wizard stepper:
- Preparation phase: data check, hardware detection, config
- Training phase: setup, loading, training progress, export
- Results phase: final accuracy and train again option
Add sync UI to pull training images from production NAS:
- SSE-based progress updates during rsync
- Shows remote vs local image counts
- Skip option for users who don't need sync
New wizard components:
- TrainingWizard: main orchestrator with phase management
- PhaseSection: collapsible phase containers
- CardCarousel: left/center/right card positioning
- CollapsedCard: compact done/upcoming cards with rich previews
- ExpandedCard: full card with content for current step
- Individual card components for each step
APIs added:
- /api/vision-training/samples: check training data status
- /api/vision-training/sync: rsync from production with SSE
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Consolidated multiple confusing status messages into one clear flow:
- Status panel is now the single source of truth for what's happening
- "Preparing..." with explanation that first run takes 2-5 minutes
- Hardware panel just shows hardware (dash while loading, name when ready)
- Button always says "Start Training" (disabled state is visual enough)
- Error details shown once in status panel, with hint
- Simple "Retry" button in hardware panel on failure
Before: "Detecting..." + "Setting up..." + "Setting up Python environment..."
After: "Preparing..." + "Installing TensorFlow... First run may take 2-5 min"
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Removed redundant status messages:
- Hardware panel is the single source for setup state
- Button just says "Start Training" (disabled when not ready)
- Status panel only shows training progress, not setup state
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Status indicator and text now properly reflect:
- Blue pulsing dot + "Setting up Python environment..." while loading
- Red dot + "Setup failed" when hardware detection fails
- Green dot + "Ready to train" only when hardware is ready
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Use Python's --clear flag to handle existing venv directories atomically
- Disable "Start Training" button until hardware setup succeeds
- Show "Setup Failed" state on button when hardware detection fails
- Add "Retry Setup" button when setup fails
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Remove incomplete venv directory before creating new one
- Clear setup cache on failure so subsequent calls can retry
- Fixes "File exists" error when venv creation was interrupted
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>