Commit Graph

3365 Commits

Author SHA1 Message Date
Thomas Hallock bc02ba281d feat(scanner): add camera controls, auto presets, and persisted settings
- Add torch/flashlight toggle (when available)
- Add camera flip button (when multiple cameras)
- Add lighting preset selector with auto-detection based on frame analysis
- Add experimental finger occlusion mode for better edge detection
- Persist scanner settings per-user in database
- Add responsive ScannerControlsDrawer with compact mobile layout
- Hide sublabels on small screens, show on tablets+

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 11:12:30 -06:00
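The "auto-detection based on frame analysis" in the commit above could plausibly be driven by mean luminance of the camera frame. A minimal sketch — the type name, function name, and thresholds are all illustrative, not from the actual scanner code:

```typescript
// Hypothetical lighting-preset picker based on mean frame luminance.
// `pixels` is RGBA data as produced by CanvasRenderingContext2D.getImageData().data.
type LightingPreset = "low-light" | "normal" | "bright";

function pickLightingPreset(pixels: Uint8ClampedArray): LightingPreset {
  let sum = 0;
  const count = pixels.length / 4;
  for (let i = 0; i < pixels.length; i += 4) {
    // Rec. 601 luma approximation from the R, G, B channels
    sum += 0.299 * pixels[i] + 0.587 * pixels[i + 1] + 0.114 * pixels[i + 2];
  }
  const mean = sum / count;
  if (mean < 60) return "low-light";
  if (mean > 190) return "bright";
  return "normal";
}
```

Sampling every Nth pixel instead of all of them would keep this cheap enough to run per frame.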
Thomas Hallock fb0abcef27 feat(vision): add modular quad detection with opencv-react
- Add quad-test page for testing quadrilateral detection
- Add modular quadDetector.ts with createQuadDetector(cv) factory
- Add quadTracker.ts for temporal stability tracking
- Add OpenCV types and lazy loader modules
- Add opencv-react and @techstark/opencv-js dependencies
- Add useOpenCV and useQuadDetection hooks
- Add various loader test pages for debugging OpenCV loading

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-12 11:12:30 -06:00
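The "temporal stability tracking" that quadTracker.ts provides could be as simple as exponentially smoothing detected corner positions across frames. A sketch under that assumption — the real tracker's API is not shown in the commit, so `smoothQuad` and the default alpha are invented:

```typescript
// Illustrative temporal smoothing of a detected quadrilateral.
type Point = { x: number; y: number };
type Quad = [Point, Point, Point, Point];

// Exponential moving average of corner positions damps per-frame jitter;
// alpha controls how quickly the smoothed quad follows new detections.
function smoothQuad(prev: Quad | null, next: Quad, alpha = 0.3): Quad {
  if (!prev) return next; // first detection: nothing to smooth against
  return next.map((p, i) => ({
    x: prev[i].x + alpha * (p.x - prev[i].x),
    y: prev[i].y + alpha * (p.y - prev[i].y),
  })) as Quad;
}
```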
github-actions[bot] 102c18f44c 🎨 Update template examples and crop mark gallery
Auto-generated fresh SVG examples and unified gallery from latest templates.
Includes comprehensive crop mark demonstrations with before/after comparisons.

Files updated:
- packages/templates/gallery-unified.html

🤖 Generated with GitHub Actions

Co-Authored-By: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2026-01-11 02:48:08 +00:00
Thomas Hallock b96f8e6d2f feat(vision): add nav sync indicator and useSyncStatus hook
- Add NavSyncIndicator component for compact sync status in nav bar
  - Shows sync availability, new items count, syncing progress
  - Dropdown with sync actions and history
  - States: loading, unavailable, in-sync, new-available, syncing, error
- Add useSyncStatus hook for managing sync state
  - Fetches sync status from API
  - Handles sync execution with SSE progress
  - Tracks sync history

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:21:23 -06:00
Thomas Hallock 9ec139bdfa feat(vision): add multi-selection and bulk actions to unified data panel
- Add multi-selection support with click to toggle, shift+click for range
- Add bulk delete for selected items (both model types)
- Add bulk reclassify for column-classifier (move images to different digit)
- Show bulk selection panel when 2+ items selected with action buttons
- Single click now also opens detail panel for the clicked item
- Add selection indicators on grid items (checkmark badge)
- Add hover actions on grid items (view details, delete buttons)
- Update grid items to show focused vs selected states with different colors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:20:02 -06:00
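The click/shift+click selection behavior described above reduces to a small pure function. A sketch — the helper name and signature are invented for illustration:

```typescript
// Selection update for grid items: plain click toggles one item,
// shift+click extends selection over the range from the last click.
function applyClick(
  selected: Set<number>,
  index: number,
  shiftKey: boolean,
  lastClicked: number | null,
): Set<number> {
  const out = new Set(selected);
  if (shiftKey && lastClicked !== null) {
    const [lo, hi] = lastClicked < index ? [lastClicked, index] : [index, lastClicked];
    for (let i = lo; i <= hi; i++) out.add(i); // select the whole range
  } else if (out.has(index)) {
    out.delete(index); // toggle off
  } else {
    out.add(index); // toggle on
  }
  return out;
}
```

Returning a new Set (rather than mutating) keeps this friendly to React state updates.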
Thomas Hallock 5239e3f3cf fix(vision): fix sync status API endpoint and enable for both models
- Fix sync status fetch URL from non-existent /api/vision-training/sync/status
  to the correct /api/vision-training/sync endpoint
- Add modelType query parameter to both GET (status) and POST (sync) requests
- Enable sync feature for both boundary-detector and column-classifier
  (was previously restricted to column-classifier only)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 18:19:07 -06:00
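The endpoint path and `modelType` query parameter come straight from the commit; a hypothetical URL builder for the corrected request might look like:

```typescript
// Build the corrected sync status URL: the /api/vision-training/sync endpoint
// with an explicit modelType query parameter (helper name is illustrative).
function syncStatusUrl(modelType: "boundary-detector" | "column-classifier"): string {
  const params = new URLSearchParams({ modelType });
  return `/api/vision-training/sync?${params.toString()}`;
}
```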
Thomas Hallock ff5e0afc64 feat(vision): unify data panels with inline model testers
- Create unified data panel architecture replacing separate BoundaryDataPanel and ColumnClassifierDataPanel
- Add MiniBoundaryTester with SVG overlay showing ground truth vs predicted corners
- Add MiniColumnTester with prediction badge overlay showing digit and correct/wrong status
- Support preloading images in full test page via ?image= query param
- Fix overflow on detail panel to allow scrolling

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 14:00:09 -06:00
Thomas Hallock bc95a2f34b feat(vision): add sync history tracking with tombstone pruning
Track all training data sync operations with detailed metrics:
- Add vision_training_sync_history table with SQLite schema
- Record sync start/end times, file counts, tombstone stats
- Prune tombstone entries for files that no longer exist on remote
- Show sync history indicator in ColumnClassifierDataPanel

New files:
- vision-training-sync-history.ts - DB schema for sync history
- sync/history/route.ts - API to query sync history
- SyncHistoryIndicator.tsx - UI component showing recent syncs

The tombstone pruning ensures local tombstone files don't grow unbounded -
after each sync, entries for files deleted on remote are removed since
there's no longer a risk of re-syncing them.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 11:59:13 -06:00
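The pruning pass described above could be sketched as a pure filter, assuming a tombstone is a list of deleted filenames (the real on-disk format isn't shown in the commit):

```typescript
// Keep only tombstone entries for files that still exist on the remote —
// those could still be re-synced, so their deletion records must survive.
// Entries for files already gone from the remote are safe to drop.
function pruneTombstone(tombstone: string[], remoteFiles: Set<string>): string[] {
  return tombstone.filter((file) => remoteFiles.has(file));
}
```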
Thomas Hallock 2635bc565b fix(vision): consolidate training data deletion with tombstone recording
All training data deletions now go through shared utility functions that
ensure deletions are recorded to tombstone files (preventing re-sync from
production).

- Add trainingDataDeletion.ts with deleteColumnClassifierSample() and
  deleteBoundaryDetectorSample() functions
- Update bulk delete endpoint to use shared utility (was missing tombstone)
- Update single-file delete endpoint to use shared utility
- Update boundary-samples delete to use shared utility (was missing tombstone)
- Update sync route to use shared readTombstone() function
- Add comprehensive unit tests (17 tests covering validation, deletion,
  tombstone recording, and edge cases)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 11:31:54 -06:00
Thomas Hallock 391ae20157 fix(vision): support JPEG images and fix null safety in session detail
- Image route now tries both .png and .jpg extensions (passive captures are JPEG)
- Session detail page guards against undefined result/config values
- Prevents "Cannot read properties of undefined" errors

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 10:34:00 -06:00
Thomas Hallock cf45b86a28 feat(vision): add filtering and passive capture for boundary training data
Major changes:
- Add passive boundary capture during practice sessions via DockedVisionFeed
- Create BoundaryFrameFilters component with capture type, session, player,
  and time range filtering using TimelineRangeSelector
- Add sessionId/playerId metadata to boundary frame annotations
- Update boundary-samples API to store and return session/player context
- Improve boundary detector training pipeline with marker masking
- Add preprocessing pipeline preview in training data browser
- Update model sharding (2 shards instead of 1)

Files added:
- BoundaryFrameFilters.tsx - Filter UI component
- usePassiveBoundaryCapture.ts - Hook for passive training data collection
- saveBoundarySample.ts - Shared utility for saving boundary samples
- preview-augmentation/route.ts - API for preprocessing pipeline preview
- preview-masked/route.ts - API for marker masking preview
- marker_masking.py, pipeline_preview.py - Python training utilities

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 10:25:56 -06:00
Thomas Hallock f19e8b1dfa feat(vision): add path-based URL routing for vision training
Implement shared nav bar and path-based model selection for vision training pages.

Changes:
- Add useModelType() hook to get model type from URL path
- Add VisionTrainingNav component with model selector dropdown and tab links
- Add [model]/layout.tsx with fixed nav and --nav-height CSS variable
- Move pages under [model]/ directory structure:
  - /vision-training/[model] - Data hub
  - /vision-training/[model]/train - Training wizard
  - /vision-training/[model]/test - Model tester
  - /vision-training/[model]/sessions - Sessions list
  - /vision-training/[model]/sessions/[id] - Session detail
- Remove Model Card from training wizard (model selection now via URL/nav)
- Update TrainingWizard to use modelType from URL (always defined, not null)
- Remove localStorage restoration of model type
- Add registry.tsx for model metadata

URL is now single source of truth for model type. Browser history and
deep linking work correctly.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-10 08:07:21 -06:00
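The pure core of the `useModelType()` hook might look like the sketch below — `parseModelType` is a hypothetical helper that extracts the model segment from the `/vision-training/[model]/...` paths listed above:

```typescript
// Parse the model type out of a /vision-training/[model]/... pathname.
const MODEL_TYPES = ["boundary-detector", "column-classifier"] as const;
type ModelType = (typeof MODEL_TYPES)[number];

function parseModelType(pathname: string): ModelType | null {
  const match = pathname.match(/^\/vision-training\/([^/]+)/);
  const candidate = match?.[1];
  return (MODEL_TYPES as readonly string[]).includes(candidate ?? "")
    ? (candidate as ModelType)
    : null;
}
```

Validating against a known list (rather than trusting the raw segment) keeps a typo'd deep link from producing an undefined model state.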
Thomas Hallock 7ee42a1b56 chore: update pnpm-lock.yaml and add __pycache__ to gitignore
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 19:19:00 -06:00
Thomas Hallock f78eae1148 feat(vision): add boundary detector for marker-free calibration
- Add boundary detector ML model infrastructure (MobileNetV2-based)
- Add training script for boundary detector (train_model.py)
- Add useBoundaryDetector hook for browser inference
- Add BoundaryCameraTester for real-time camera testing
- Add BoundaryImageTester for static image testing
- Add sync API support for boundary detector training data
- Add model type selector on test page (column classifier vs boundary detector)
- Add marker inpainting for training data preprocessing
- Update training wizard to support both model types

The boundary detector aims to detect abacus corners without ArUco markers,
using ML to predict corner positions from raw camera frames. Currently
requires more training data for accurate predictions.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 19:15:00 -06:00
Thomas Hallock 07908218b5 fix(vision): remove data augmentation option from training UI
Data augmentation doesn't work well for abacus column recognition,
so remove the option entirely. Training now always runs without
augmentation (--no-augmentation flag always passed to Python script).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 13:35:53 -06:00
Thomas Hallock aa10d549b5 fix(vision): resolve 6 issues from practice session testing
1. VisionSetupModal too tall - constrain modal height and video aspect ratio
2. Modal not draggable after setup - add drag constraint ref to motion.div
3. Remote camera URL changes on reload - prevent race condition by checking
   localStorage before rendering QR code component
4. "Waiting for remote connection" stuck - add 15s timeout with retry/settings buttons
5. Auto-undocking on problem progression - remove incorrect isDockedByUser=false
   from unregisterDock() which fired during React re-renders
6. Re-docking goes to fullscreen - check dock existence instead of visibility

Also includes updated ML model weights (non-quantized export).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 12:59:15 -06:00
Thomas Hallock 49d05d9f21 feat(vision): re-trained model without quantization
Model exported without --quantize_uint8 flag, which was corrupting
weights. Size increased from ~556KB to ~2.2MB but predictions should
now be correct in the browser.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 11:02:16 -06:00
Thomas Hallock a8c0c5d921 fix(vision): remove quantization from model export - corrupts weights!
The --quantize_uint8 flag in tensorflowjs_converter corrupts model
weights, causing completely wrong predictions in the browser even
though Python testing works fine.

This was documented in .claude/CLAUDE.md but the training script
still used quantization. Model size increases (556KB → 2.2MB) but
predictions are now correct.

Removed quantization from all 3 export paths:
- SavedModel → GraphModel conversion
- Keras → LayersModel fallback
- Direct Python API fallback

After this fix, re-run training via /vision-training/train to get
a working model.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:58:28 -06:00
Thomas Hallock c7ae52212c feat(vision): update column classifier model with more training data
Re-trained model with additional training samples for improved
digit detection accuracy.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:52:58 -06:00
Thomas Hallock 25bdc99b63 fix(vision): improve dock sizing and mirror hint UX
Dock sizing:
- Larger on desktop: up to 240px wide, 300px tall (md breakpoint)
- Smaller on mobile: 140-180px wide, 180-240px tall (base)

Mirror hint:
- Once shown, stays visible until dismissed or mirror enabled
- No more flashing when stability temporarily drops
- Added dismiss (✕) button to hide hint for rest of session
- Clicking "Try Mirror" still enables mirror mode

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:34:22 -06:00
Thomas Hallock 24d08bb0f5 fix(vision): constrain docked abacus size for responsive layout
Add explicit height constraint alongside width for the docked abacus.
Both vision-enabled and vision-disabled modes now share the same
container size constraints:

- Width: clamp(140px, 16vw, 200px) - about 2/3 of previous
- Height: clamp(180px, 24vh, 260px) - prevents going under nav

Also reduced container max-width from 900px to 780px to match.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:32:08 -06:00
Thomas Hallock 3adc427b9d fix(vision): unify docked abacus layout with status bar
Use the same flex column layout for both vision-enabled and digital
abacus modes. The status bar is now at the bottom (not overlapping the
abacus columns) with VisionIndicator on left and undock button on right.

This matches the DockedVisionFeed layout for consistency.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:28:54 -06:00
Thomas Hallock e685f10aff fix(vision): move undock button to vision status bar
Move the undock button into the DockedVisionFeed status bar alongside
the disable-vision button. This prevents the buttons from overlapping.

When vision is enabled, the dock-controls overlay is now hidden since
all controls are in the unified status bar.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:23:48 -06:00
Thomas Hallock fbf4aa449c feat(vision): improve docked vision UX with unified status bar
- Consolidate scattered controls into unified status bar at bottom
- Replace auto-switching with subtle "Try Mirror" hint when detection stable
- Use flex layout instead of absolute positioning to prevent clipping
- Make AbacusDock a flex sibling of problem-area for proper layout
- Use responsive sizing with clamp() instead of matching problem height
- Add Video/Mirror toggle with clear text labels
- Container widens to 900px when dock is shown (vs 600px normally)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-08 10:18:44 -06:00
Thomas Hallock 3c238dc550 feat(blog): add vision detection stories and screenshots
- Create VisionDetection.stories.tsx with interactive demos
- Add screenshot capture script for Storybook stories
- Update blog post with captured screenshots
- Include before/after comparison, progress gallery, and step demos

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 20:31:00 -06:00
Thomas Hallock 77b8e6cfb4 fix: correct blog post year to 2026
2026-01-07 19:39:02 -06:00

Thomas Hallock e64063aab9 blog: add detailed setup instructions for vision mode
Replace vague "Getting Started" section with step-by-step guide:
- Printing and attaching ArUco markers
- Docking the abacus in practice sessions
- Opening vision settings via camera icon
- Camera options (local vs phone)
- Automatic marker-based calibration
- Enabling vision and starting practice

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 19:02:37 -06:00
Thomas Hallock 8da1a760dd blog: add post about vision-powered abacus detection
Introduces the new ML-powered vision system that watches physical
abacus use in real-time, providing instant feedback as students
work through problems.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 18:55:14 -06:00
Thomas Hallock fb57d1f2ef feat(vision): integrate ML classifier for real-time abacus detection
- Enable auto-detection in DockedVisionFeed using ML column classifier
- Replace CV-based detection with useColumnClassifier hook
- Add concurrent inference prevention with isInferringRef
- Show model loading status in detection overlay
- Add detectedPrefixIndex prop to VerticalProblem for visual feedback
- Display ✓ checkmarks on terms completed via vision detection
- Connect vision detection to term tracking and auto-submit flow

The vision system now:
1. Detects abacus values via ML classification at 10fps
2. Shows visual feedback (checkmarks) as terms are completed
3. Triggers help mode when prefix sums are detected
4. Auto-submits when the correct answer is shown

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 18:36:52 -06:00
Thomas Hallock 71c5321431 docs: add TensorFlow.js model debugging notes
Document lessons learned from debugging Keras → browser inference:
- Normalization mismatch (preprocessing layers are preserved)
- Quantization corruption (--quantize_uint8 corrupts weights)
- Output tensor ordering (detect by shape, not index)
- Debugging checklist and key files

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 18:07:23 -06:00
Thomas Hallock 9df367bfe2 fix(vision): correct model normalization and add test tooling
- Fix column classifier to use [0,1] normalization (model has internal
  Rescaling layer that converts to [-1,1] for MobileNetV2)
- Remove quantization from model export (was corrupting weights)
- Add /vision-training/test page for direct model testing
- Add Python test script for local model verification
- Clean up verbose debug logging from classifier
- Add model tester to training results card
- Add training data hub modal and supporting components

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-07 18:02:31 -06:00
Thomas Hallock e9bd3b3c61 feat(vision): add remediation advice when training produces poor results
When training accuracy is low (<50%), the results card now shows
a yellow warning box explaining why accuracy might be poor and
what actions to take:

- Data imbalance: identifies underrepresented digits
- Insufficient data: recommends collecting more images
- Poor convergence: suggests checking data quality
- Unknown issues: fallback advice

Uses a context provider pattern to avoid prop drilling through
5 component levels (page → TrainingWizard → PhaseSection →
CardCarousel → ExpandedCard → ResultsCard).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 17:06:56 -06:00
Thomas Hallock 2bcdceef59 feat(vision-training): animate background tiles between image and digit
Each tile now randomly transitions between showing the training image
and the numeral it represents. The animations are staggered with random
initial delays and intervals so tiles don't all animate together.

- Created AnimatedTile component with crossfade effect
- Random 3-8 second intervals between transitions
- 0.8s ease-in-out opacity transitions
- Random initial delays (0-10s) for natural feel

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 16:03:44 -06:00
Thomas Hallock fc38b5e6d3 fix(vision-training): increase background tile opacity from 4% to 12%
2026-01-06 15:59:05 -06:00

Thomas Hallock a9db984a03 fix(vision-training): make background tiles cover full viewport
The tiled background wasn't covering the entire viewport because
the inner grid had no explicit dimensions. Fixed by:
- Setting grid to 120vw x 120vh to overflow viewport
- Negative margins to center the oversized grid
- Increased tile count from 100 to 800 for full coverage
- Removed scale(1.2) since explicit sizing handles it

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 15:57:48 -06:00
Thomas Hallock 6efabc2968 fix(vision-training): force dynamic rendering for all API routes
Next.js App Router caches GET route handlers by default when they
don't use request-based data. This caused /api/vision-training/samples
to return cached "no data" responses even after data was collected.

Added `export const dynamic = 'force-dynamic'` to all vision-training
API routes that read from disk or run system commands:
- samples/route.ts
- hardware/route.ts
- preflight/route.ts
- sync/route.ts
- train/route.ts

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 15:51:06 -06:00
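The pattern the commit describes is a one-line opt-out at the top of each route module. A minimal route.ts following it (the handler body is a placeholder, not the real samples route):

```typescript
// Opt this GET handler out of Next.js App Router static caching so it
// re-reads from disk on every request instead of serving a cached response.
export const dynamic = "force-dynamic";

export async function GET(): Promise<Response> {
  // ...read fresh sample counts from disk here...
  return Response.json({ samples: 0 });
}
```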
Thomas Hallock 6b1fa85bb8 fix(vision): fix video ref timing bug preventing marker detection
The videoRef callback in VisionCameraFeed was only firing once on mount,
before the video element existed (due to early return when no videoStream).
Added videoStream to the effect dependency array so it re-runs when the
stream becomes available.

Also:
- Add willReadFrequently option to canvas contexts in frameProcessor
  and arucoDetection to fix console warnings about getImageData
- Add VISION_COMPONENTS.md documentation with required wiring checklist
- Add reference in CLAUDE.md to prevent this recurring mistake

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 15:45:12 -06:00
Thomas Hallock 6fb8f71565 feat(vision): add model tester to training results
After training completes, users can now test the model directly:
- New "Test Model" tab in results phase
- Uses CameraCapture with marker detection
- Runs inference at ~10 FPS and shows detected value + confidence
- Color-coded confidence display (green > 80%, yellow > 50%, red otherwise)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 14:45:14 -06:00
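The color thresholds come from the commit (green > 80%, yellow > 50%, red otherwise); the helper wrapping them is invented for illustration:

```typescript
// Map a model confidence in [0,1] to the display color used by the tester.
function confidenceColor(confidence: number): "green" | "yellow" | "red" {
  if (confidence > 0.8) return "green";
  if (confidence > 0.5) return "yellow";
  return "red";
}
```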
Thomas Hallock 94f53d8097 feat(vision): improve training data management and share marker detection
Training Data Management:
- Add bulk delete modal with timeline range selection
- Add deletion tracking (.deleted tombstone file) to prevent re-syncing
- Add file-by-file comparison for sync (shows new/deleted counts)
- Improve image viewer with filtering, sorting, and batch operations
- Add delete toast with undo functionality

Shared Marker Detection:
- Create useMarkerDetection hook for ArUco marker detection
- Refactor DockedVisionFeed to use shared hook (removes ~100 lines)
- Add marker detection support to CameraCapture component
- Calibration now persists when markers are temporarily lost

Training Wizard UI:
- Add tabbed "Get More Data" section in DataCard
- Show excluded (deleted) file count in sync tab
- Improve phase navigation and state management

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 14:27:55 -06:00
Thomas Hallock 47a98f1bb7 feat(vision): add dependency preflight checks and inline training capture
- Add preflight API endpoint to check Python dependencies before training
- Create DependencyCard wizard step showing installed/missing packages
- Create reusable CameraCapture component supporting local + phone cameras
- Add inline training data capture directly in the training wizard
- Update venv setup to install all requirements from requirements.txt
- Block training start until all dependencies verified

The wizard now shows a dedicated Dependencies step between Hardware and
Config, displaying which packages are installed and providing pip install
commands for any missing packages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 12:16:28 -06:00
Thomas Hallock 0e1e7cf25d fix(vision): use physicalAbacusColumns for training data capture
Training images were incorrectly showing 2 columns instead of 1 because
the vision system confused "digits in answer" with "physical columns".

Root cause: For 2-digit problems, dock.columns was 2 (digits needed),
but the physical abacus has 4 columns. Training images were sliced into
2 parts, each containing 2 physical columns.

Fixes:
- ActiveSession: Use physicalAbacusColumns for training capture
- MyAbacus: Use physicalAbacusColumns for vision detection
- VisionSetupModal: Use physicalAbacusColumns for calibration
- frameProcessor: Add column margins (6% left/right) for isolation
- types/vision: Add ColumnMargins interface to CalibrationGrid

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 10:42:41 -06:00
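Slicing by `physicalAbacusColumns` with the 6% side margins mentioned above could be sketched as follows — `computeColumnRects` is a hypothetical helper, and widths are in frame pixels:

```typescript
// Split a frame into one crop rectangle per physical abacus column,
// trimming a fractional margin (6% by default) off each side of every
// column so neighboring columns don't bleed into a training image.
function computeColumnRects(
  frameWidth: number,
  physicalColumns: number,
  margin = 0.06,
): { x: number; width: number }[] {
  const colWidth = frameWidth / physicalColumns;
  return Array.from({ length: physicalColumns }, (_, i) => ({
    x: i * colWidth + colWidth * margin,
    width: colWidth * (1 - 2 * margin),
  }));
}
```

With the bug described above, a 4-column abacus on a 2-digit problem was sliced into 2 rects of 2 columns each; driving this from the physical column count yields 4 single-column crops.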
Thomas Hallock a40087284c fix(vision): enable ML training on production server
Research confirmed TensorFlow has wheels for Linux x86_64 (including
Synology NAS). The previous failures were due to:
1. Training scripts not being copied to Docker image
2. Venv path not being writable in container

Changes:
- Update isPlatformSupported() to include Linux ARM64 support
- Move venv to data/vision-training/.venv (mounted volume, writable)
- Add python3-venv to Docker image dependencies
- Copy training scripts to Docker image
- Create vision-training directories in Docker build

Sources:
- https://www.tensorflow.org/install/pip
- https://pypi.org/project/tensorflow/ (shows manylinux_2_17_aarch64 wheels)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 09:58:36 -06:00
Thomas Hallock 2eee17e159 fix(vision): handle unsupported platforms gracefully for training
Add platform detection before attempting TensorFlow installation.
TensorFlow doesn't have wheels for ARM Linux (like Synology NAS),
so the hardware detection now returns a clear "Platform Not Supported"
message instead of failing during pip install.

Changes:
- Add isPlatformSupported() check in config.ts
- Check platform before attempting venv setup in hardware/route.ts
- Check platform before starting training in train/route.ts
- Show user-friendly message in HardwareCard for unsupported platforms
- Add data/vision-training/collected/ to .gitignore

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 09:53:52 -06:00
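A minimal gate in the spirit of `isPlatformSupported()` is just an allow-list of OS/arch pairs checked before any pip install is attempted. This sketch's allow-list is illustrative only (and the very next commit in this log broadens the real one after confirming TensorFlow's wheel coverage):

```typescript
// Check the OS/arch pair against platforms assumed to have TensorFlow wheels,
// so unsupported hosts get a clear message instead of a failed pip install.
function isPlatformSupported(platform: string, arch: string): boolean {
  const supported = new Set([
    "linux/x64",
    "darwin/x64",
    "darwin/arm64",
    "win32/x64",
  ]);
  return supported.has(`${platform}/${arch}`);
}
```

In practice this would be called with `process.platform` and `process.arch`.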
Thomas Hallock e1e0e6cc0c fix(vision): ensure training data collection works with remote camera
The vision source registration effect was not re-running when the first
remote camera frame arrived, because remoteLatestFrame was not in the
dependency array. The <img> element that remoteImageRef points to only
renders when remoteLatestFrame is truthy, so the effect would run before
the element existed and fail to register the source.

Now both local and remote cameras properly register their vision source
for training data collection when a correct answer is entered.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 09:45:51 -06:00
Thomas Hallock 44d3ca612d feat(vision): wizard-style training UI with production sync
Transform training pipeline into a phase-based wizard stepper:
- Preparation phase: data check, hardware detection, config
- Training phase: setup, loading, training progress, export
- Results phase: final accuracy and train again option

Add sync UI to pull training images from production NAS:
- SSE-based progress updates during rsync
- Shows remote vs local image counts
- Skip option for users who don't need sync

New wizard components:
- TrainingWizard: main orchestrator with phase management
- PhaseSection: collapsible phase containers
- CardCarousel: left/center/right card positioning
- CollapsedCard: compact done/upcoming cards with rich previews
- ExpandedCard: full card with content for current step
- Individual card components for each step

APIs added:
- /api/vision-training/samples: check training data status
- /api/vision-training/sync: rsync from production with SSE

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 09:33:14 -06:00
Thomas Hallock 8ec148985e fix(vision): simplify training page UX with single status source
Consolidated confusing multiple status messages into one clear flow:

- Status panel is now the single source of truth for what's happening
- "Preparing..." with explanation that first run takes 2-5 minutes
- Hardware panel just shows hardware (dash while loading, name when ready)
- Button always says "Start Training" (disabled state is visual enough)
- Error details shown once in status panel, with hint
- Simple "Retry" button in hardware panel on failure

Before: "Detecting..." + "Setting up..." + "Setting up Python environment..."
After: "Preparing..." + "Installing TensorFlow... First run may take 2-5 min"

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 07:30:05 -06:00
Thomas Hallock 19680e39fd fix(vision): consolidate setup status to hardware panel only
Removed redundant status messages:
- Hardware panel is the single source for setup state
- Button just says "Start Training" (disabled when not ready)
- Status panel only shows training progress, not setup state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 07:25:35 -06:00
Thomas Hallock aa9618d192 fix(vision): sync status panel with hardware setup state
Status indicator and text now properly reflect:
- Blue pulsing dot + "Setting up Python environment..." while loading
- Red dot + "Setup failed" when hardware detection fails
- Green dot + "Ready to train" only when hardware is ready

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 07:23:59 -06:00
Thomas Hallock 17289c8a1c fix(vision): use --clear flag for venv creation and add preflight check
- Use Python's --clear flag to handle existing venv directories atomically
- Disable "Start Training" button until hardware setup succeeds
- Show "Setup Failed" state on button when hardware detection fails
- Add "Retry Setup" button when setup fails

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 07:22:41 -06:00
Thomas Hallock e8d338b5e5 fix(vision): handle incomplete venv and retry on failure
- Remove incomplete venv directory before creating new one
- Clear setup cache on failure so subsequent calls can retry
- Fixes "File exists" error when venv creation was interrupted

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-06 07:19:57 -06:00