Compare commits


23 Commits

Author SHA1 Message Date
semantic-release-bot
990b573baa chore(abacus-react): release v2.21.0 [skip ci]
# [2.21.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.20.0...abacus-react-v2.21.0) (2026-01-03)

### Bug Fixes

* **practice:** add fallback error message when photo upload is blocked ([33efdf0](33efdf0c0d))
* **vision:** hide detection overlay when auto-detection disabled ([995cb60](995cb60086))
* **vision:** remote camera connection and session management ([8a45415](8a454158b5))

### Features

* add LLM client package and worksheet parsing infrastructure ([5a4c751](5a4c751ebe))
* **observer:** responsive session observer layout ([9610ddb](9610ddb8f1))
* **worksheet-parsing:** add parsing UI and fix parent access control ([91aaddb](91aaddbeab))
* **worksheet-parsing:** add selective re-parsing and improve UI ([830a48e](830a48e74f))
2026-01-03 02:42:27 +00:00
Thomas Hallock
830a48e74f feat(worksheet-parsing): add selective re-parsing and improve UI
Selective Re-parsing:
- Add parse-selected API endpoint for re-parsing specific problems
- Support user-adjusted bounding boxes that persist across re-parses
- Add crop-utils for extracting problem regions from worksheet images

LLM Metadata Tracking:
- Store JSON schema, prompt, and raw response in database
- Add debug panel in PhotoViewerEditor to inspect LLM details
- Add migrations for llm_metadata, llm_prompt, llm_json_schema columns

UI Improvements:
- Remove selection mode toggle - problems always selectable
- Show checkboxes on hover only (no layout jump)
- Move selection toolbar to fixed footer outside scrollable area
- Add BoundingBoxOverlay component for visual problem selection
- Add EditableProblemRow with hover-based checkbox visibility
- Unify hover highlighting across checkbox and problem cells

Also includes:
- Fix approve route to handle excluded problems correctly
- Add DebugContentModal for viewing prompts/responses
- Update LLM client to return metadata in responses
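
The crop-utils mentioned under "Selective Re-parsing" aren't shown in this view; a minimal sketch of what extracting a problem region from the worksheet image could look like, assuming normalized bounding boxes and a browser canvas (all names here are illustrative, not the actual crop-utils API):

```typescript
// Hypothetical sketch, not the real crop-utils: cut one problem's region out
// of a worksheet image using a normalized (0..1) bounding box. Browser-only.
export interface BoundingBox {
  x: number // left edge as a fraction of image width
  y: number // top edge as a fraction of image height
  width: number // fraction of image width
  height: number // fraction of image height
}

export async function cropProblemRegion(
  image: HTMLImageElement,
  box: BoundingBox
): Promise<Blob> {
  const canvas = document.createElement('canvas')
  canvas.width = Math.round(box.width * image.naturalWidth)
  canvas.height = Math.round(box.height * image.naturalHeight)
  const ctx = canvas.getContext('2d')
  if (!ctx) throw new Error('2d context unavailable')
  ctx.drawImage(
    image,
    box.x * image.naturalWidth, // source x
    box.y * image.naturalHeight, // source y
    canvas.width, // source width
    canvas.height, // source height
    0, 0, canvas.width, canvas.height // destination rect
  )
  return new Promise((resolve, reject) =>
    canvas.toBlob((b) => (b ? resolve(b) : reject(new Error('toBlob failed'))), 'image/jpeg', 0.9)
  )
}
```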

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 20:41:09 -06:00
Thomas Hallock
33efdf0c0d fix(practice): add fallback error message when photo upload is blocked
When canUpload is false but there's no specific remediation available
(e.g., due to a bug in access control), show a generic "Unable to upload
photos" banner instead of silently hiding the upload buttons.

This ensures users see feedback when access is unexpectedly denied,
rather than being confused by missing UI elements.
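
A rough sketch of the fallback described above; component and prop names are illustrative, not the actual code:

```tsx
import * as React from 'react'

// Illustrative sketch: if upload is blocked and no specific remediation is
// known, render a generic banner rather than hiding the controls entirely.
function UploadControls(props: {
  canUpload: boolean
  remediation?: { message: string; actionLabel: string; onAction: () => void }
  onUpload: () => void
}) {
  if (props.canUpload) {
    return <button onClick={props.onUpload}>Upload photo</button>
  }
  if (props.remediation) {
    return (
      <div role="alert">
        {props.remediation.message}{' '}
        <button onClick={props.remediation.onAction}>{props.remediation.actionLabel}</button>
      </div>
    )
  }
  // Fallback: access denied with no known fix (e.g. an access-control bug).
  return <div role="alert">Unable to upload photos</div>
}
```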

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 14:14:20 -06:00
Thomas Hallock
91aaddbeab feat(worksheet-parsing): add parsing UI and fix parent access control
Worksheet Parsing UI (Slices 1-2):
- Add parse button to OfflineWorkSection thumbnails and PhotoViewerEditor
- Create ParsedProblemsList component to display extracted problems
- Add useWorksheetParsing hook with mutations for parse/review/approve
- Add attachmentKeys to queryKeys for cache management
- Wire up parsing workflow in SummaryClient

Fix parent upload access:
- Change /api/players/[id]/access to use getDbUserId() instead of getViewerId()
- Guest users' guestId was not matching parent_child.parent_user_id
- Parents can now see upload/camera buttons in offline work section

Fix curriculum type errors:
- Add missing 'advanced' property to createFullSkillSet()
- Fix enabledRequiredSkills -> enabledAllowedSkills in problem-generator
- Remove incorrect Partial<> wrapper from type casts
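
The parent-access fix above comes down to comparing the right identifier; a hedged sketch of the shape of the check, with the route's helpers stubbed out as assumptions:

```typescript
// Sketch only (assumed shapes, not the real route). The bug: getViewerId()
// returned a guestId for guest users, which never matched
// parent_child.parent_user_id. Resolving to the database user id first makes
// the parent-link lookup succeed.
declare function getDbUserId(): Promise<string> // resolves guests to users.id
declare function hasParentLink(parentUserId: string, playerId: string): Promise<boolean>

export async function canViewPlayer(playerId: string): Promise<boolean> {
  const userId = await getDbUserId()
  return hasParentLink(userId, playerId)
}
```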

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 14:10:29 -06:00
Thomas Hallock
5a4c751ebe feat: add LLM client package and worksheet parsing infrastructure
Part A: @soroban/llm-client package
- Multi-provider support (OpenAI, Anthropic) via env vars
- Zod schema validation for structured LLM responses
- Retry loop with validation error feedback in prompt
- Progress indication hooks for UI feedback
- Vision support for image analysis

Part B: Worksheet parsing feature
- Zod schemas for parsed worksheet problems
- LLM prompt builder for abacus workbook images
- Parser using llm.vision() with retry logic
- Session converter to create SlotResults for BKT
- Database migration for parsing columns
- API routes: /parse, /review, /approve workflow
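
The retry loop with validation feedback is the heart of Part A; a minimal sketch of the idea using Zod, with a generic `callModel` standing in for whichever provider the env vars select (none of these names are the actual @soroban/llm-client API):

```typescript
import { z } from 'zod'

// Generic stand-in for the configured provider (OpenAI or Anthropic).
declare function callModel(prompt: string): Promise<string>

// Sketch of "retry loop with validation error feedback in prompt": on schema
// failure, re-ask the model with the validation error appended.
export async function completeStructured<T>(
  prompt: string,
  schema: z.ZodType<T>,
  maxAttempts = 3
): Promise<T> {
  let lastError = ''
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const feedback = lastError
      ? `\n\nYour previous response failed validation:\n${lastError}\nReturn corrected JSON only.`
      : ''
    const raw = await callModel(prompt + feedback)
    try {
      return schema.parse(JSON.parse(raw))
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err)
    }
  }
  throw new Error(`LLM response failed validation after ${maxAttempts} attempts: ${lastError}`)
}
```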

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-02 08:49:48 -06:00
Thomas Hallock
9610ddb8f1 feat(observer): responsive session observer layout
- Make session observer modal/page fully responsive for all screen sizes
- Replace absolute positioning with flex layout for problem + abacus
- Create MobileResultsSummary component for compact results on small screens
- Full-screen modal on mobile, centered dialog on desktop
- Stack problem and abacus vertically on small screens (<640px)
- Reduce vertical spacing to eliminate scrolling on mobile
- Hide desktop results panel on mobile, show compact summary chip
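
The <640px stacking can be driven by CSS breakpoints or a JS hook; a generic React sketch of the hook variant (the commit may well do this purely in CSS):

```typescript
import { useEffect, useState } from 'react'

// Generic sketch of the <640px breakpoint check; not necessarily how the
// commit implements it.
export function useIsNarrow(maxWidth = 640): boolean {
  const [narrow, setNarrow] = useState(
    () => typeof window !== 'undefined' && window.matchMedia(`(max-width: ${maxWidth - 1}px)`).matches
  )
  useEffect(() => {
    const mql = window.matchMedia(`(max-width: ${maxWidth - 1}px)`)
    const onChange = (e: MediaQueryListEvent) => setNarrow(e.matches)
    mql.addEventListener('change', onChange)
    return () => mql.removeEventListener('change', onChange)
  }, [maxWidth])
  return narrow
}

// Usage: flexDirection: useIsNarrow() ? 'column' : 'row' stacks the
// problem and abacus vertically on small screens.
```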

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 21:32:20 -06:00
Thomas Hallock
d80601d162 chore: add vision planning doc, storybook story, and update gitignore
- Add VISION_DOCK_INTEGRATION_PLAN.md for vision dock architecture
- Add VisionCameraControls.stories.tsx for storybook
- Update .gitignore to exclude venv, uploads, and training data

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 20:43:06 -06:00
Thomas Hallock
995cb60086 fix(vision): hide detection overlay when auto-detection disabled
Add ENABLE_AUTO_DETECTION flag to ObserverVisionFeed.tsx to hide the
useless detection overlay that always showed "---" and "0%" since
auto-detection is globally disabled. This matches the pattern already
used in DockedVisionFeed.tsx.

Also includes minor formatting fixes from Biome.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 20:37:37 -06:00
Thomas Hallock
8a454158b5 fix(vision): remote camera connection and session management
- Fix race condition in useRemoteCameraDesktop where session ID wasn't
  saved before socket connection check, preventing auto-reconnect
- Same fix in useRemoteCameraPhone for phone-side connection
- Fix "new session" button in RemoteCameraQRCode - properly clears old
  session and creates new one using prevRef to detect state changes
- Show full QR code UI with copyable URL (removed compact mode)
- Redesign AbacusVisionBridge UI: camera feed as hero, toolbar on feed,
  collapsible crop settings, source selector as tabs
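
The first two bullets describe an ordering race: the session id has to be saved before the socket's connection state is checked, or an already-connected socket never joins. A hedged sketch of both patterns mentioned (hook internals are assumptions):

```typescript
import { useEffect, useRef } from 'react'
import type { Socket } from 'socket.io-client'

// Sketch of the ordering fix (assumed internals): persist the id first, then
// check the socket, so an already-connected socket can join immediately.
export function useJoinSession(socket: Socket, sessionId: string | null) {
  const sessionIdRef = useRef<string | null>(null)
  useEffect(() => {
    sessionIdRef.current = sessionId // save first...
    if (sessionId && socket.connected) {
      socket.emit('join-session', { sessionId }) // ...then check the socket
    }
  }, [socket, sessionId])
}

// prevRef pattern behind the "new session" button: compare against the
// previous render's value to detect the old-to-new transition and clean up.
export function usePrevious<T>(value: T): T | undefined {
  const ref = useRef<T>()
  useEffect(() => {
    ref.current = value
  })
  return ref.current
}
```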

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 19:01:46 -06:00
semantic-release-bot
41aa7ff33f chore(abacus-react): release v2.20.0 [skip ci]
# [2.20.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.19.0...abacus-react-v2.20.0) (2026-01-02)

### Bug Fixes

* **vision:** clear config when switching camera sources ([ff59612](ff59612e7b))
* **vision:** hide flip camera button when only one camera available ([7a9185e](7a9185eadb))
* **vision:** include remote camera in isVisionSetupComplete check ([a8fb77e](a8fb77e8e3))
* **vision:** remote camera persistence and UI bugs ([d90d263](d90d263b2a))

### Features

* **vision:** add activeCameraSource tracking and simplify calibration UI ([1be6151](1be6151bae))
* **vision:** add CV-based bead detection and fix remote camera connection ([005140a](005140a1e7))
* **vision:** add TensorFlow.js column classifier model and improve detection ([5d0ac65](5d0ac65bdd))
* **vision:** broadcast vision frames to observers (Phase 5) ([b3b769c](b3b769c0e2))
* **vision:** disable auto-detection with feature flag ([a5025f0](a5025f01bc))
* **vision:** integrate vision feed into docked abacus ([d8c7645](d8c764595d))
2026-01-02 00:02:33 +00:00
Thomas Hallock
1be6151bae feat(vision): add activeCameraSource tracking and simplify calibration UI
- Add explicit activeCameraSource field to VisionConfig to track which
  camera is in use (local vs phone), fixing button visibility bugs when
  switching between camera sources
- Simplify calibration UI by removing the confusing "Auto/Manual" mode
  toggle, replacing with a cleaner crop status indicator
- Remove calibration requirement from isVisionSetupComplete for local
  camera since auto-crop runs continuously when markers are detected
- Update DockedVisionFeed to use activeCameraSource instead of inferring
  from which configs are set

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 18:01:12 -06:00
Thomas Hallock
70b363ce88 refactor(vision): combine setup modal into single draggable experience
- Merge VisionSetupModal and AbacusVisionBridge into unified UI
- Remove two-step configuration process (no more "Configure Camera" button)
- Add vision control props to AbacusVisionBridge:
  - showVisionControls, isVisionEnabled, isVisionSetupComplete
  - onToggleVision, onClearSettings callbacks
- Add Enable/Disable Vision and Clear Settings buttons to bridge footer
- Simplify VisionSetupModal from ~257 to ~93 lines
- Modal is now draggable via framer-motion (built into AbacusVisionBridge)

User experience: Open modal → immediately see camera feed and all controls
in one place. Drag modal anywhere. Configure, enable/disable, close.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 17:30:46 -06:00
Thomas Hallock
d90d263b2a fix(vision): remote camera persistence and UI bugs
- Fix camera source switching: clear remoteCameraSessionId in context when
  switching to local camera so DockedVisionFeed uses the correct source
- Fix modal drag during calibration: disable framer-motion drag when
  calibration overlay is active to allow handle dragging
- Fix initial camera source: pass initialCameraSource prop to
  AbacusVisionBridge so it shows phone camera when reconfiguring remote
- Extend session TTL from 10 to 60 minutes for remote camera sessions
- Add localStorage persistence for remote camera session IDs
- Add auto-reconnect logic for both desktop and phone hooks
- Add comprehensive tests for session-manager, useRemoteCameraDesktop,
  and useRemoteCameraPhone hooks
- Guard test setup.ts for node environment (HTMLImageElement check)
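
For the persistence and auto-reconnect bullets, the gist is to stash the session id with a timestamp and discard it past the TTL; a sketch under assumed names:

```typescript
// Sketch (assumed names): persist the remote camera session id so a page
// reload can auto-reconnect. TTL mirrors the new 60-minute server-side TTL.
const STORAGE_KEY = 'remote-camera-session'
const TTL_MS = 60 * 60 * 1000

export function saveSessionId(id: string): void {
  localStorage.setItem(STORAGE_KEY, JSON.stringify({ id, savedAt: Date.now() }))
}

export function loadSessionId(): string | null {
  const raw = localStorage.getItem(STORAGE_KEY)
  if (!raw) return null
  try {
    const { id, savedAt } = JSON.parse(raw) as { id: string; savedAt: number }
    return Date.now() - savedAt < TTL_MS ? id : null
  } catch {
    return null // corrupt entry; treat as absent
  }
}
```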

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 17:23:27 -06:00
Thomas Hallock
43524d8238 test: add unit tests for vision broadcast feature
- VisionIndicator.test.tsx: tests for rendering, status indicator, click behavior, accessibility
- ObserverVisionFeed.test.tsx: tests for image display, detected value, live/stale indicator
- useSessionBroadcast.vision.test.ts: tests for sendVisionFrame socket emission
- useSessionObserver.vision.test.ts: tests for visionFrame receiving and cleanup
- MyAbacusContext.vision.test.tsx: tests for vision config state and callbacks

Also fixes:
- useSessionObserver: clear visionFrame and transitionState on stopObserving
- test/setup.ts: add canvas Image mock to prevent jsdom errors with data URIs
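
The canvas Image mock in a test setup file usually amounts to faking `src` so `onload` fires without jsdom trying to decode the data URI; a sketch (the real setup.ts may differ):

```typescript
// Sketch of a jsdom-safe Image mock, guarded for node environments the same
// way the commit describes (HTMLImageElement check). Fires onload async
// instead of decoding data-URI sources, which jsdom cannot do.
if (typeof HTMLImageElement !== 'undefined') {
  Object.defineProperty(globalThis.Image.prototype, 'src', {
    get(this: HTMLImageElement) {
      return this.getAttribute('src') ?? ''
    },
    set(this: HTMLImageElement, value: string) {
      this.setAttribute('src', value)
      setTimeout(() => this.onload?.(new Event('load')), 0)
    },
  })
}
```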

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 16:08:51 -06:00
Thomas Hallock
a5025f01bc feat(vision): disable auto-detection with feature flag
- Add ENABLE_AUTO_DETECTION flag (set to false) in DockedVisionFeed
- Conditionally import detection modules for tree-shaking when disabled
- Guard all detection processing, loops, and value handlers
- Hide detection overlay when auto-detection is disabled
- Remove vision toggle button from ActiveSession (no longer needed)
- Clean up unused imports and code
- Format fixes from Biome

The camera feed still works for observation mode, but the ML/CV
bead detection is disabled until accuracy is improved.
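
With a literal `false` flag, bundlers can drop the detection modules entirely when the import is conditional; a sketch of the gating pattern (export name assumed, module path from the repo's beadDetector.ts):

```typescript
// Sketch of the flag-gating pattern. Because the flag is a compile-time
// constant, the dynamic import below is dead code that bundlers can
// tree-shake away when the flag is false.
const ENABLE_AUTO_DETECTION: boolean = false

export async function maybeStartDetection(video: HTMLVideoElement): Promise<() => void> {
  if (!ENABLE_AUTO_DETECTION) {
    return () => {} // no-op cleanup; detection overlay stays hidden
  }
  const { startBeadDetection } = await import('./beadDetector') // export name assumed
  return startBeadDetection(video)
}
```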

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:55:50 -06:00
Thomas Hallock
a8fb77e8e3 fix(vision): include remote camera in isVisionSetupComplete check
The isVisionSetupComplete flag was only checking for local camera
setup (cameraDeviceId + calibration), which caused remote camera
mode to be treated as "not configured" even when connected.

Now considers vision setup complete if either:
- Local camera: has camera device AND calibration
- Remote camera: has remote session ID (phone handles calibration)
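
The resulting derivation is a small boolean over the config; a sketch consistent with the description (config shape borrowed from the plan document later in this diff):

```typescript
// Sketch of the widened check described above.
interface VisionConfig {
  cameraDeviceId: string | null
  calibration: unknown | null
  remoteCameraSessionId: string | null
}

function isVisionSetupComplete(config: VisionConfig): boolean {
  const localReady = config.cameraDeviceId !== null && config.calibration !== null
  const remoteReady = config.remoteCameraSessionId !== null // phone handles calibration
  return localReady || remoteReady
}
```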

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:36:27 -06:00
Thomas Hallock
e80ef04f45 chore(vision): clean up debug console.log statements
Remove unnecessary debug logging from vision components
that was used during development.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:31:06 -06:00
Thomas Hallock
b3b769c0e2 feat(vision): broadcast vision frames to observers (Phase 5)
Wire up the vision broadcast pipeline:

1. DockedVisionFeed captures rectified frames from canvas and emits
   them at 5fps via the context's emitVisionFrame callback

2. PracticeClient wires setVisionFrameCallback to call sendVisionFrame
   from useSessionBroadcast, connecting the context to the socket

3. useSessionBroadcast sends VisionFrameEvent to the session channel
   with imageData, detectedValue, and confidence

4. socket-server relays vision-frame events to observers

5. useSessionObserver receives and stores visionFrame for display

6. SessionObserverModal shows ObserverVisionFeed when visionFrame
   is available, replacing the interactive AbacusDock with the
   student's live camera feed

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:28:59 -06:00
Thomas Hallock
ff59612e7b fix(vision): clear config when switching camera sources
When switching between local and phone camera, clear the other
source's configuration to prevent stale data.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:13:20 -06:00
Thomas Hallock
d8c764595d feat(vision): integrate vision feed into docked abacus
- Add vision state management to MyAbacusContext (camera, calibration,
  remote session, enabled state)
- Add VisionIndicator component showing vision status on dock header
- Add VisionSetupModal for configuring camera and calibration
- Add DockedVisionFeed component that replaces SVG abacus when vision
  is enabled, with:
  - Continuous ArUco marker detection for auto-calibration
  - OpenCV perspective correction via VisionCameraFeed
  - Real-time bead detection and value display
  - Support for both local camera and remote phone camera
- Wire AbacusVisionBridge to save config to context via
  onConfigurationChange callback
- Update MyAbacus to conditionally render DockedVisionFeed vs
  AbacusReact based on vision state

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 15:05:58 -06:00
Thomas Hallock
005140a1e7 feat(vision): add CV-based bead detection and fix remote camera connection
- Add beadDetector.ts with intensity-profile-based bead detection (CV approach)
- Integrate CV pipeline for both local camera and remote phone camera feeds
- Add processImageFrame() to frameProcessor for remote camera image processing
- Fix React 18 Strict Mode duplicate session creation in RemoteCameraQRCode
- Add debug logging to remote camera hooks for connection troubleshooting
- Add VisionStatusIndicator for remote camera feed in AbacusVisionBridge

The duplicate session bug was caused by React 18 Strict Mode double-mounting
components and running effects twice with fresh state, which called
createSession() twice and created two different sessions - phone joined
one, desktop subscribed to the other.
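
A common guard for this Strict Mode double-mount is a ref that survives the simulated unmount/remount, so the effect body runs once; a sketch with `createSession` stubbed as an assumption:

```typescript
import { useEffect, useRef, useState } from 'react'

// Assumed stand-in for the real session creation call.
declare function createSession(): Promise<{ id: string }>

// Sketch: refs persist across Strict Mode's dev-only unmount/remount cycle,
// so createSession() runs once even though the effect is invoked twice.
export function useSingleSession(): string | null {
  const [sessionId, setSessionId] = useState<string | null>(null)
  const startedRef = useRef(false)
  useEffect(() => {
    if (startedRef.current) return
    startedRef.current = true
    createSession().then((s) => setSessionId(s.id))
  }, [])
  return sessionId
}
```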

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-01 11:29:02 -06:00
Thomas Hallock
5d0ac65bdd feat(vision): add TensorFlow.js column classifier model and improve detection
- Add trained CNN model for abacus column digit classification
  - model.json: TensorFlow.js layers model (fixed for Keras 3 compatibility)
  - group1-shard1of1.bin: quantized model weights (~2.2MB)

- Improve detection performance and stability
  - Throttle inference to 5fps (was running every animation frame)
  - Lower stability threshold: 3 consecutive frames (was 10)
  - Lower confidence threshold: 50% (was 70%)

- Clean up debug logging from development

Note: Model trained on synthetic data, accuracy on real images is limited.
Future work: retrain on real abacus photos for better accuracy.
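
The throttling and stability changes compose into a small loop; a sketch with the classifier call stubbed out, using the commit's numbers (200 ms interval, 3-frame streak, 0.5 confidence floor):

```typescript
// Assumed stand-in for the TensorFlow.js column classifier.
declare function classifyColumns(frame: ImageData): Promise<{ value: number; confidence: number }>

const INTERVAL_MS = 200 // 5 fps instead of every animation frame
const STABLE_FRAMES = 3 // was 10
const MIN_CONFIDENCE = 0.5 // was 0.7

// Sketch: poll at 5 fps, drop low-confidence readings, and only report a
// value once it has been seen on enough consecutive frames.
export function startInference(getFrame: () => ImageData, onStable: (value: number) => void) {
  let last: number | null = null
  let streak = 0
  const timer = setInterval(async () => {
    const { value, confidence } = await classifyColumns(getFrame())
    if (confidence < MIN_CONFIDENCE) return
    streak = value === last ? streak + 1 : 1
    last = value
    if (streak === STABLE_FRAMES) onStable(value) // fire once per stable run
  }, INTERVAL_MS)
  return () => clearInterval(timer)
}
```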

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 22:59:40 -06:00
Thomas Hallock
7a9185eadb fix(vision): hide flip camera button when only one camera available
Only show camera controls section when there's something to display:
- Flip button: only if multiple cameras
- Torch button: only if torch available
- Whole section: only if either button would be shown

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-31 22:35:52 -06:00
119 changed files with 22901 additions and 1044 deletions


@@ -422,7 +422,79 @@
"Bash(apps/web/src/lib/vision/perspectiveTransform.ts )",
"Bash(apps/web/src/socket-server.ts)",
"Bash(apps/web/src/components/vision/CalibrationOverlay.tsx )",
"Bash(apps/web/src/components/practice/ActiveSession.tsx )"
"Bash(apps/web/src/components/practice/ActiveSession.tsx )",
"Bash(open -a Preview:*)",
"Bash(pip3 install:*)",
"Bash(pip3 uninstall:*)",
"Bash(/opt/homebrew/bin/python3:*)",
"Bash(/usr/bin/python3:*)",
"Bash(/opt/homebrew/bin/pip3 install:*)",
"Bash(source:*)",
"Bash(pip install:*)",
"Bash(/opt/homebrew/opt/python@3.11/bin/python3.11:*)",
"Bash(tensorflowjs_converter:*)",
"Bash(public/models/abacus-column-classifier/column-classifier.keras )",
"Bash(public/models/abacus-column-classifier/)",
"Bash(public/models/abacus-column-classifier/column-classifier.h5 )",
"Bash(apps/web/scripts/train-column-classifier/train_model.py )",
"Bash(apps/web/src/app/remote-camera/[sessionId]/page.tsx )",
"Bash(apps/web/src/hooks/useColumnClassifier.ts )",
"Bash(apps/web/src/lib/vision/columnClassifier.ts )",
"Bash(\"apps/web/src/app/remote-camera/[sessionId]/page.tsx\" )",
"Bash(apps/web/drizzle/0054_new_mathemanic.sql )",
"Bash(apps/web/drizzle/meta/0054_snapshot.json )",
"Bash(apps/web/src/components/AbacusDisplayDropdown.tsx )",
"Bash(apps/web/src/db/schema/abacus-settings.ts )",
"Bash(packages/abacus-react/src/AbacusContext.tsx)",
"Bash(apps/web/src/lib/vision/frameProcessor.ts )",
"Bash(apps/web/src/lib/vision/beadDetector.ts )",
"Bash(apps/web/public/models/abacus-column-classifier/model.json )",
"Bash(.claude/settings.local.json)",
"Bash(apps/web/src/components/MyAbacus.tsx )",
"Bash(apps/web/src/contexts/MyAbacusContext.tsx )",
"Bash(apps/web/src/components/vision/DockedVisionFeed.tsx )",
"Bash(apps/web/src/components/vision/VisionIndicator.tsx )",
"Bash(apps/web/src/components/vision/VisionSetupModal.tsx)",
"Bash(npx storybook:*)",
"Bash(apps/web/src/hooks/usePhoneCamera.ts )",
"Bash(apps/web/src/lib/remote-camera/session-manager.ts )",
"Bash(apps/web/src/test/setup.ts )",
"Bash(apps/web/src/hooks/__tests__/useRemoteCameraDesktop.test.ts )",
"Bash(apps/web/src/hooks/__tests__/useRemoteCameraPhone.test.ts )",
"Bash(apps/web/src/lib/remote-camera/__tests__/)",
"Bash(packages/abacus-react/CHANGELOG.md )",
"WebFetch(domain:zod.dev)",
"Bash(npm view:*)",
"Bash(tsc:*)",
"WebFetch(domain:colinhacks.com)",
"Bash(npm install:*)",
"Bash(corepack prepare:*)",
"Bash(/Users/antialias/Library/pnpm/pnpm self-update:*)",
"Bash(readlink:*)",
"Bash(src/app/api/curriculum/[playerId]/attachments/[attachmentId]/approve/route.ts )",
"Bash(src/app/api/curriculum/[playerId]/attachments/[attachmentId]/parse/route.ts )",
"Bash(src/app/api/curriculum/[playerId]/attachments/[attachmentId]/review/route.ts )",
"Bash(src/app/api/curriculum/[playerId]/sessions/[sessionId]/attachments/route.ts )",
"Bash(src/app/api/players/[id]/access/route.ts )",
"Bash(src/app/practice/[studentId]/summary/SummaryClient.tsx )",
"Bash(src/components/worksheet-parsing/ )",
"Bash(src/hooks/useLLMCall.ts )",
"Bash(src/hooks/usePlayerAccess.ts )",
"Bash(src/hooks/useWorksheetParsing.ts )",
"Bash(src/lib/classroom/access-control.ts )",
"Bash(src/lib/classroom/index.ts )",
"Bash(src/lib/curriculum/definitions.ts )",
"Bash(src/lib/curriculum/problem-generator.ts )",
"Bash(src/lib/worksheet-parsing/parser.ts )",
"Bash(src/lib/worksheet-parsing/schemas.ts )",
"Bash(src/lib/worksheet-parsing/session-converter.ts )",
"Bash(src/types/css.d.ts )",
"Bash(tsconfig.json)",
"Bash(git status:*)",
"Bash(photos\" banner instead of silently hiding the upload buttons.\n\nThis ensures users see feedback when access is unexpectedly denied,\nrather than being confused by missing UI elements.\n\n🤖 Generated with [Claude Code]\\(https://claude.com/claude-code\\)\n\nCo-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>\nEOF\n\\)\")",
"WebFetch(domain:platform.openai.com)",
"WebFetch(domain:cookbook.openai.com)",
"WebFetch(domain:docs.aimlapi.com)"
],
"deny": [],
"ask": []


@@ -965,10 +965,30 @@ When adding/modifying database schema:
mcp__sqlite__describe_table table_name
```
**CRITICAL: Verify migration timestamp order after generation:**
After running `npx drizzle-kit generate --custom`, check `drizzle/meta/_journal.json`:
1. Look at the `"when"` timestamp of the new migration
2. Verify it's GREATER than the previous migration's timestamp
3. If not, manually edit the journal to use a timestamp after the previous one
Example of broken ordering (0057 before 0056):
```json
{ "idx": 56, "when": 1767484800000, "tag": "0056_..." }, // Jan 3
{ "idx": 57, "when": 1767400331475, "tag": "0057_..." } // Jan 2 - WRONG!
```
Fix by setting 0057's timestamp to be after 0056:
```json
{ "idx": 57, "when": 1767571200000, "tag": "0057_..." } // Jan 4 - CORRECT
```
**Why this happens:** `drizzle-kit generate` uses current system time, but if previous migrations were manually given future timestamps (common in CI/production scenarios), new migrations can get timestamps that sort incorrectly.
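A quick check for this ordering rule can be scripted; a sketch (not part of the repo) that reads the journal and flags out-of-order timestamps:
```typescript
// Sketch, not part of the repo: verify _journal.json "when" values are
// strictly increasing. Run from apps/web with tsx or ts-node.
import { readFileSync } from 'node:fs'

const journal = JSON.parse(readFileSync('drizzle/meta/_journal.json', 'utf8')) as {
  entries: { idx: number; when: number; tag: string }[]
}

for (let i = 1; i < journal.entries.length; i++) {
  const prev = journal.entries[i - 1]
  const cur = journal.entries[i]
  if (cur.when <= prev.when) {
    console.error(`Ordering broken: ${cur.tag} (${cur.when}) <= ${prev.tag} (${prev.when})`)
    process.exitCode = 1
  }
}
```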
**What NOT to do:**
- ❌ DO NOT manually create SQL files in `drizzle/` without using `drizzle-kit generate`
- ❌ DO NOT manually edit `drizzle/meta/_journal.json`
- ❌ DO NOT manually edit `drizzle/meta/_journal.json` (except to fix timestamp ordering)
- ❌ DO NOT run SQL directly with `sqlite3` command
- ❌ DO NOT use `drizzle-kit generate` without `--custom` flag (it requires interactive prompts)


@@ -0,0 +1,335 @@
# Plan: Abacus Vision as Docked Abacus Video Source
**Status:** In Progress
**Created:** 2026-01-01
**Last Updated:** 2026-01-01
## Overview
Transform abacus vision from a standalone modal into an alternate "source" for the docked abacus. When vision is enabled, the processed camera feed replaces the SVG abacus representation in the dock.
**Current Architecture:**
```
AbacusDock → MyAbacus (SVG) → value displayed
AbacusVisionBridge → Modal → onValueDetected callback
```
**Target Architecture:**
```
AbacusDock
├─ [vision disabled] → MyAbacus (SVG)
└─ [vision enabled] → VisionFeed (processed video) + value detection
                            ↓
                      Broadcasts to observers
```
---
## Key Requirements
1. **Vision hint on docks** - Camera icon visible on/near AbacusDock
2. **Persistent across docking** - Vision icon stays visible when abacus is docked
3. **Setup gating** - Clicking opens setup if no source/calibration configured
4. **Video replaces SVG** - When enabled, camera feed shows instead of SVG abacus
5. **Observer visibility** - Teachers/parents see student's video feed during observation
---
## Progress Tracker
- [ ] **Phase 1:** Vision State in MyAbacusContext
- [ ] **Phase 2:** Vision Indicator on AbacusDock
- [ ] **Phase 3:** Video Feed Replaces Docked Abacus
- [ ] **Phase 4:** Vision Setup Modal Refactor
- [ ] **Phase 5:** Broadcast Video Feed to Observers
- [ ] **Phase 6:** Polish & Edge Cases
---
## Phase 1: Vision State in MyAbacusContext
**Goal:** Add vision-related state to the abacus context so it's globally accessible.
**Files to modify:**
- `apps/web/src/contexts/MyAbacusContext.tsx`
**State to add:**
```typescript
interface VisionConfig {
enabled: boolean
cameraDeviceId: string | null
calibration: CalibrationGrid | null
remoteCameraSessionId: string | null // For phone camera
}
// In context:
visionConfig: VisionConfig
setVisionEnabled: (enabled: boolean) => void
setVisionCalibration: (calibration: CalibrationGrid | null) => void
setVisionCamera: (deviceId: string | null) => void
isVisionSetupComplete: boolean // Derived: has camera AND calibration
```
**Persistence:** Save to localStorage alongside existing abacus display config.
**Testable outcome:**
- Open browser console, check that vision config is in context
- Toggle vision state programmatically, see it persist across refresh
---
## Phase 2: Vision Indicator on AbacusDock
**Goal:** Show a camera icon near the dock that indicates vision status and opens setup.
**Files to modify:**
- `apps/web/src/components/AbacusDock.tsx` - Add vision indicator
- `apps/web/src/components/MyAbacus.tsx` - Show indicator when docked
**UI Design:**
```
┌─────────────────────────────────┐
│  [Docked Abacus]          [↗]   │ ← Undock button (existing)
│                                 │
│  [📷]                           │ ← Vision toggle (NEW)
│                                 │
└─────────────────────────────────┘
```
**Behavior:**
- Icon shows camera with status indicator:
- 🔴 Red dot = not configured
- 🟢 Green dot = configured and enabled
- ⚪ No dot = configured but disabled
- Click opens VisionSetupModal (Phase 4)
- Visible in BOTH floating button AND docked states
**Testable outcome:**
- See camera icon on docked abacus
- Click icon, see setup modal open
- Icon shows different states based on config
---
## Phase 3: Video Feed Replaces Docked Abacus
**Goal:** When vision is enabled, render processed video instead of SVG abacus.
**Files to modify:**
- `apps/web/src/components/MyAbacus.tsx` - Conditional rendering
- Create: `apps/web/src/components/vision/DockedVisionFeed.tsx`
**DockedVisionFeed component:**
```typescript
interface DockedVisionFeedProps {
width: number;
height: number;
onValueDetected: (value: number) => void;
}
// Renders:
// - Processed/cropped camera feed
// - Overlays detected column values
// - Small "disable vision" button
```
**MyAbacus docked mode change:**
```tsx
// In docked rendering section:
{isDocked && (
visionConfig.enabled && isVisionSetupComplete ? (
<DockedVisionFeed
width={...}
height={...}
onValueDetected={setDockedValue}
/>
) : (
<Abacus value={abacusValue} ... />
)
)}
```
**Testable outcome:**
- Enable vision (manually set in console if needed)
- See video feed in dock instead of SVG abacus
- Detected values update the context
---
## Phase 4: Vision Setup Modal Refactor
**Goal:** Streamline the setup flow - AbacusVisionBridge becomes a setup wizard.
**Files to modify:**
- `apps/web/src/components/vision/AbacusVisionBridge.tsx` - Simplify to setup-only
- Create: `apps/web/src/components/vision/VisionSetupModal.tsx`
**Setup flow:**
```
[Open Modal]
      ↓
Is camera selected? ──No──→ [Select Camera Screen]
      │ Yes                       ↓
      ↓                     Select device
Is calibrated? ──No──→ [Calibration Screen]
      │ Yes                  ↓
      ↓                Manual or ArUco
[Ready Screen]
  ├─ Preview of what vision sees
  ├─ [Enable Vision] button
  └─ [Reconfigure] button
```
**Quick-toggle behavior:**
- If fully configured: clicking vision icon toggles on/off immediately
- If not configured: opens setup modal
- Long-press or secondary click: always opens settings
**Testable outcome:**
- Complete setup flow from scratch
- Settings persist across refresh
- Quick toggle works when configured
---
## Phase 5: Broadcast Video Feed to Observers
**Goal:** Teachers/parents observing a session see the student's vision video feed.
**Files to modify:**
- `apps/web/src/hooks/useSessionBroadcast.ts` - Add vision frame broadcasting
- `apps/web/src/hooks/useSessionObserver.ts` - Receive vision frames
- `apps/web/src/components/classroom/SessionObserverView.tsx` - Display vision feed
**Broadcasting strategy:**
```typescript
// In useSessionBroadcast, when vision is enabled:
// Emit compressed frames at reduced rate (5 fps for bandwidth)
socket.emit("vision-frame", {
sessionId,
imageData: compressedJpegBase64,
timestamp: Date.now(),
detectedValue: currentValue,
});
// Also broadcast vision state:
socket.emit("practice-state", {
...existingState,
visionEnabled: true,
visionConfidence: confidence,
});
```
**Observer display:**
```typescript
// In SessionObserverView, when student has vision enabled:
// Show video feed instead of SVG abacus in the observation panel
{studentState.visionEnabled ? (
<ObserverVisionFeed frames={receivedFrames} />
) : (
<AbacusDock value={studentState.abacusValue} />
)}
```
**Testable outcome:**
- Student enables vision, starts practice
- Teacher opens observer modal
- Teacher sees student's camera feed (not SVG abacus)
---
## Phase 6: Polish & Edge Cases
**Goal:** Handle edge cases and improve UX.
**Items:**
1. **Connection loss handling** - Fall back to SVG if video stops
2. **Bandwidth management** - Adaptive quality based on connection
3. **Mobile optimization** - Vision setup works on phone screens
4. **Reconnection** - Re-establish vision feed after disconnect
5. **Multiple observers** - Efficient multicast of video frames
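
For item 1, one lightweight approach is to treat the feed as stale after a few seconds without frames and fall back to the SVG abacus; a sketch with assumed names:

```typescript
import { useEffect, useState } from 'react'

const STALE_AFTER_MS = 3000 // assumption; tune to the 5fps frame cadence

// Sketch: live while frames keep arriving, stale after a quiet period.
export function useVisionIsLive(lastFrameAt: number | null): boolean {
  const [live, setLive] = useState(false)
  useEffect(() => {
    if (lastFrameAt === null) {
      setLive(false)
      return
    }
    setLive(true)
    const timer = setTimeout(() => setLive(false), STALE_AFTER_MS)
    return () => clearTimeout(timer)
  }, [lastFrameAt])
  return live
}

// Render: useVisionIsLive(frame?.timestamp ?? null) ? vision feed : SVG abacus
```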
**Testable outcome:**
- Disconnect/reconnect scenarios work smoothly
- Mobile users can configure vision
- Multiple teachers can observe same student
---
## Implementation Order & Dependencies
```
Phase 1 (Foundation)
        ↓
Phase 2 (UI Integration)
        ↓
Phase 3 (Core Feature)     ←── Requires Phase 1, 2
        ↓
Phase 4 (UX Refinement)    ←── Can start in parallel with Phase 3
        ↓
Phase 5 (Observation)      ←── Requires Phase 3
        ↓
Phase 6 (Polish)           ←── After all features work
```
---
## Files Summary
### Modify
| File | Changes |
| ---------------------------------------------- | --------------------------------------------- |
| `contexts/MyAbacusContext.tsx` | Add vision state, persistence |
| `components/MyAbacus.tsx` | Vision indicator, conditional video rendering |
| `components/AbacusDock.tsx` | Pass through vision-related props |
| `hooks/useSessionBroadcast.ts` | Emit vision frames |
| `hooks/useSessionObserver.ts` | Receive vision frames |
| `components/classroom/SessionObserverView.tsx` | Display vision feed |
### Create
| File | Purpose |
| ------------------------------------------ | ----------------------------- |
| `components/vision/VisionSetupModal.tsx` | Streamlined setup wizard |
| `components/vision/DockedVisionFeed.tsx` | Video display for docked mode |
| `components/vision/VisionIndicator.tsx` | Camera icon with status |
| `components/vision/ObserverVisionFeed.tsx` | Observer-side video display |
---
## Testing Checkpoints
After each phase, manually verify:
- [ ] **Phase 1:** Console shows vision config in context, persists on refresh
- [ ] **Phase 2:** Camera icon visible on dock, opens modal on click
- [ ] **Phase 3:** Enable vision → video shows in dock instead of SVG
- [ ] **Phase 4:** Full setup flow works, quick toggle works when configured
- [ ] **Phase 5:** Observer sees student's video feed during session
- [ ] **Phase 6:** Edge cases handled gracefully

apps/web/.gitignore (vendored, 9 changed lines)

@@ -54,3 +54,12 @@ src/generated/build-info.json
# biome
.biome
# Python virtual environments
.venv*/
# User uploads
data/uploads/
# ML training data
training-data/


@@ -0,0 +1,37 @@
-- Add LLM-powered worksheet parsing columns to practice_attachments
-- These columns support the workflow: parse → review → approve → create session
-- Parsing workflow status
ALTER TABLE `practice_attachments` ADD COLUMN `parsing_status` text;
--> statement-breakpoint
-- When parsing completed (ISO timestamp)
ALTER TABLE `practice_attachments` ADD COLUMN `parsed_at` text;
--> statement-breakpoint
-- Error message if parsing failed
ALTER TABLE `practice_attachments` ADD COLUMN `parsing_error` text;
--> statement-breakpoint
-- Raw LLM parsing result (JSON) - before user corrections
ALTER TABLE `practice_attachments` ADD COLUMN `raw_parsing_result` text;
--> statement-breakpoint
-- Approved result (JSON) - after user corrections
ALTER TABLE `practice_attachments` ADD COLUMN `approved_result` text;
--> statement-breakpoint
-- Overall confidence score from LLM (0-1)
ALTER TABLE `practice_attachments` ADD COLUMN `confidence_score` real;
--> statement-breakpoint
-- True if any problems need manual review
ALTER TABLE `practice_attachments` ADD COLUMN `needs_review` integer;
--> statement-breakpoint
-- True if a session was created from this parsed worksheet
ALTER TABLE `practice_attachments` ADD COLUMN `session_created` integer;
--> statement-breakpoint
-- Reference to the session created from this parsing
ALTER TABLE `practice_attachments` ADD COLUMN `created_session_id` text REFERENCES session_plans(id) ON DELETE SET NULL;


@@ -0,0 +1,31 @@
-- Add LLM call metadata columns to practice_attachments
-- These provide transparency/debugging info about the parsing request
-- Which LLM provider was used (e.g., "openai", "anthropic")
ALTER TABLE `practice_attachments` ADD COLUMN `llm_provider` text;
--> statement-breakpoint
-- Which model was used (e.g., "gpt-4o", "claude-sonnet-4")
ALTER TABLE `practice_attachments` ADD COLUMN `llm_model` text;
--> statement-breakpoint
-- The full prompt sent to the LLM (for debugging)
ALTER TABLE `practice_attachments` ADD COLUMN `llm_prompt_used` text;
--> statement-breakpoint
-- Which image was sent: "cropped" or "original"
ALTER TABLE `practice_attachments` ADD COLUMN `llm_image_source` text;
--> statement-breakpoint
-- How many LLM call attempts were needed (retries on validation failure)
ALTER TABLE `practice_attachments` ADD COLUMN `llm_attempts` integer;
--> statement-breakpoint
-- Token usage for cost tracking
ALTER TABLE `practice_attachments` ADD COLUMN `llm_prompt_tokens` integer;
--> statement-breakpoint
ALTER TABLE `practice_attachments` ADD COLUMN `llm_completion_tokens` integer;
--> statement-breakpoint
ALTER TABLE `practice_attachments` ADD COLUMN `llm_total_tokens` integer;


@@ -0,0 +1,3 @@
-- Custom SQL migration file, put your code below! --
-- Add llm_raw_response column to practice_attachments for storing raw LLM JSON responses
ALTER TABLE `practice_attachments` ADD `llm_raw_response` text;


@@ -0,0 +1,2 @@
-- Add llm_json_schema column to practice_attachments for storing the JSON Schema sent to the LLM
ALTER TABLE `practice_attachments` ADD `llm_json_schema` text;


@@ -116,13 +116,9 @@
"abacus_settings_user_id_users_id_fk": {
"name": "abacus_settings_user_id_users_id_fk",
"tableFrom": "abacus_settings",
"columnsFrom": [
"user_id"
],
"columnsFrom": ["user_id"],
"tableTo": "users",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -240,9 +236,7 @@
"indexes": {
"arcade_rooms_code_unique": {
"name": "arcade_rooms_code_unique",
"columns": [
"code"
],
"columns": ["code"],
"isUnique": true
}
},
@@ -339,26 +333,18 @@
"arcade_sessions_room_id_arcade_rooms_id_fk": {
"name": "arcade_sessions_room_id_arcade_rooms_id_fk",
"tableFrom": "arcade_sessions",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
},
"arcade_sessions_user_id_users_id_fk": {
"name": "arcade_sessions_user_id_users_id_fk",
"tableFrom": "arcade_sessions",
"columnsFrom": [
"user_id"
],
"columnsFrom": ["user_id"],
"tableTo": "users",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -424,9 +410,7 @@
"indexes": {
"players_user_id_idx": {
"name": "players_user_id_idx",
"columns": [
"user_id"
],
"columns": ["user_id"],
"isUnique": false
}
},
@@ -434,13 +418,9 @@
"players_user_id_users_id_fk": {
"name": "players_user_id_users_id_fk",
"tableFrom": "players",
"columnsFrom": [
"user_id"
],
"columnsFrom": ["user_id"],
"tableTo": "users",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -514,9 +494,7 @@
"indexes": {
"idx_room_members_user_id_unique": {
"name": "idx_room_members_user_id_unique",
"columns": [
"user_id"
],
"columns": ["user_id"],
"isUnique": true
}
},
@@ -524,13 +502,9 @@
"room_members_room_id_arcade_rooms_id_fk": {
"name": "room_members_room_id_arcade_rooms_id_fk",
"tableFrom": "room_members",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -605,13 +579,9 @@
"room_member_history_room_id_arcade_rooms_id_fk": {
"name": "room_member_history_room_id_arcade_rooms_id_fk",
"tableFrom": "room_member_history",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -713,10 +683,7 @@
"indexes": {
"idx_room_invitations_user_room": {
"name": "idx_room_invitations_user_room",
"columns": [
"user_id",
"room_id"
],
"columns": ["user_id", "room_id"],
"isUnique": true
}
},
@@ -724,13 +691,9 @@
"room_invitations_room_id_arcade_rooms_id_fk": {
"name": "room_invitations_room_id_arcade_rooms_id_fk",
"tableFrom": "room_invitations",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -833,13 +796,9 @@
"room_reports_room_id_arcade_rooms_id_fk": {
"name": "room_reports_room_id_arcade_rooms_id_fk",
"tableFrom": "room_reports",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -918,10 +877,7 @@
"indexes": {
"idx_room_bans_user_room": {
"name": "idx_room_bans_user_room",
"columns": [
"user_id",
"room_id"
],
"columns": ["user_id", "room_id"],
"isUnique": true
}
},
@@ -929,13 +885,9 @@
"room_bans_room_id_arcade_rooms_id_fk": {
"name": "room_bans_room_id_arcade_rooms_id_fk",
"tableFrom": "room_bans",
"columnsFrom": [
"room_id"
],
"columnsFrom": ["room_id"],
"tableTo": "arcade_rooms",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -998,13 +950,9 @@
"user_stats_user_id_users_id_fk": {
"name": "user_stats_user_id_users_id_fk",
"tableFrom": "user_stats",
"columnsFrom": [
"user_id"
],
"columnsFrom": ["user_id"],
"tableTo": "users",
"columnsTo": [
"id"
],
"columnsTo": ["id"],
"onUpdate": "no action",
"onDelete": "cascade"
}
@@ -1062,16 +1010,12 @@
"indexes": {
"users_guest_id_unique": {
"name": "users_guest_id_unique",
"columns": [
"guest_id"
],
"columns": ["guest_id"],
"isUnique": true
},
"users_email_unique": {
"name": "users_email_unique",
"columns": [
"email"
],
"columns": ["email"],
"isUnique": true
}
},
@@ -1091,4 +1035,4 @@
"internal": {
"indexes": {}
}
}
}

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -386,6 +386,34 @@
"when": 1767240895813,
"tag": "0054_new_mathemanic",
"breakpoints": true
},
{
"idx": 55,
"version": "6",
"when": 1767398400000,
"tag": "0055_add_attachment_parsing",
"breakpoints": true
},
{
"idx": 56,
"version": "6",
"when": 1767484800000,
"tag": "0056_add_llm_metadata",
"breakpoints": true
},
{
"idx": 57,
"version": "6",
"when": 1767571200000,
"tag": "0057_flowery_korath",
"breakpoints": true
},
{
"idx": 58,
"version": "6",
"when": 1767657600000,
"tag": "0058_blushing_impossible_man",
"breakpoints": true
}
]
}
}


@@ -54,6 +54,7 @@
"@react-three/fiber": "^8.17.0",
"@soroban/abacus-react": "workspace:*",
"@soroban/core": "workspace:*",
"@soroban/llm-client": "workspace:*",
"@soroban/templates": "workspace:*",
"@strudel/soundfonts": "^1.2.6",
"@strudel/web": "^1.2.6",
@@ -95,6 +96,7 @@
"qrcode.react": "^4.2.0",
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-markdown": "^10.1.0",
"react-resizable-layout": "^0.7.3",
"react-resizable-panels": "^3.0.6",
"react-simple-keyboard": "^3.8.139",


@@ -204,6 +204,11 @@ export default defineConfig({
'0%, 100%': { transform: 'scale(1)' },
'50%': { transform: 'scale(1.05)' },
},
// Pulse opacity - fading effect for loading states
pulseOpacity: {
'0%, 100%': { opacity: '1' },
'50%': { opacity: '0.5' },
},
// Error shake - stronger horizontal oscillation (line 2009)
errorShake: {
'0%, 100%': { transform: 'translateX(0)' },
@@ -239,6 +244,11 @@ export default defineConfig({
'0%, 100%': { opacity: '0.7' },
'50%': { opacity: '0.4' },
},
// Spin - rotate 360 degrees for spinners
spin: {
from: { transform: 'rotate(0deg)' },
to: { transform: 'rotate(360deg)' },
},
// Fade in with scale - entrance animation
fadeInScale: {
'0%': { opacity: '0', transform: 'scale(0.9)' },


@@ -0,0 +1,778 @@
{
"format": "layers-model",
"generatedBy": "keras v3.13.0",
"convertedBy": "TensorFlow.js Converter v4.22.0",
"modelTopology": {
"keras_version": "3.13.0",
"backend": "tensorflow",
"model_config": {
"class_name": "Sequential",
"config": {
"name": "sequential",
"trainable": true,
"dtype": "float32",
"layers": [
{
"class_name": "InputLayer",
"config": {
"dtype": "float32",
"sparse": false,
"ragged": false,
"name": "input_layer",
"optional": false,
"batchInputShape": [null, 128, 64, 1]
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d",
"trainable": true,
"dtype": "float32",
"filters": 32,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": {
"seed": null
},
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization",
"trainable": true,
"dtype": "float32",
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d",
"trainable": true,
"dtype": "float32",
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout",
"trainable": true,
"dtype": "float32",
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d_1",
"trainable": true,
"dtype": "float32",
"filters": 64,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": {
"seed": null
},
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_1",
"trainable": true,
"dtype": "float32",
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d_1",
"trainable": true,
"dtype": "float32",
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_1",
"trainable": true,
"dtype": "float32",
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d_2",
"trainable": true,
"dtype": "float32",
"filters": 128,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": {
"seed": null
},
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_2",
"trainable": true,
"dtype": "float32",
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d_2",
"trainable": true,
"dtype": "float32",
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_2",
"trainable": true,
"dtype": "float32",
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Flatten",
"config": {
"name": "flatten",
"trainable": true,
"dtype": "float32",
"data_format": "channels_last"
}
},
{
"class_name": "Dense",
"config": {
"name": "dense",
"trainable": true,
"dtype": "float32",
"units": 128,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": {
"seed": null
},
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null,
"quantization_config": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_3",
"trainable": true,
"dtype": "float32",
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_3",
"trainable": true,
"dtype": "float32",
"rate": 0.5,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Dense",
"config": {
"name": "dense_1",
"trainable": true,
"dtype": "float32",
"units": 10,
"activation": "softmax",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": {
"seed": null
},
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null,
"quantization_config": null
}
}
],
"build_input_shape": [null, 128, 64, 1]
}
},
"training_config": {
"loss": "sparse_categorical_crossentropy",
"loss_weights": null,
"metrics": ["accuracy"],
"weighted_metrics": null,
"run_eagerly": false,
"steps_per_execution": 1,
"jit_compile": false,
"optimizer_config": {
"class_name": "Adam",
"config": {
"name": "adam",
"learning_rate": 0.0010000000474974513,
"weight_decay": null,
"clipnorm": null,
"global_clipnorm": null,
"clipvalue": null,
"use_ema": false,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"loss_scale_factor": null,
"gradient_accumulation_steps": null,
"beta_1": 0.9,
"beta_2": 0.999,
"epsilon": 1e-7,
"amsgrad": false
}
}
}
},
"weightsManifest": [
{
"paths": ["group1-shard1of1.bin"],
"weights": [
{
"name": "batch_normalization/gamma",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.970035195350647,
"scale": 0.00039288062675326476,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/beta",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.04866361422281639,
"scale": 0.00040217862994063134,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/moving_mean",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.000010939256753772497,
"scale": 0.001048501559268391,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/moving_variance",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.000532817910425365,
"scale": 0.00016297123568388176,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/gamma",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.9726127982139587,
"scale": 0.00019898110744999905,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/beta",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.06264814909766703,
"scale": 0.00037290564939087515,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/moving_mean",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.12544548511505127,
"scale": 0.001907470179539101,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/moving_variance",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.042508192360401154,
"scale": 0.002489794206385519,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/gamma",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.975760817527771,
"scale": 0.0003113854165170707,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/beta",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.023137448749998037,
"scale": 0.00013072004943501716,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/moving_mean",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.015866611152887344,
"scale": 0.005222073358063605,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/moving_variance",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.01432291604578495,
"scale": 0.00944612571860061,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/gamma",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.9765098690986633,
"scale": 0.0008689317048764697,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/beta",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.05253423078387391,
"scale": 0.00032833894239921196,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/moving_mean",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 2.3402893845059225e-8,
"scale": 0.124165194550534,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/moving_variance",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.000532600621227175,
"scale": 0.8092722632006888,
"original_dtype": "float32"
}
},
{
"name": "conv2d/kernel",
"shape": [3, 3, 1, 32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.1684967933916578,
"scale": 0.0012961291799358293,
"original_dtype": "float32"
}
},
{
"name": "conv2d/bias",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.014791351323034248,
"scale": 0.00019462304372413485,
"original_dtype": "float32"
}
},
{
"name": "conv2d_1/kernel",
"shape": [3, 3, 32, 64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.14185832411635155,
"scale": 0.0010912178778180888,
"original_dtype": "float32"
}
},
{
"name": "conv2d_1/bias",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.052345379924072934,
"scale": 0.00033341006321065564,
"original_dtype": "float32"
}
},
{
"name": "conv2d_2/kernel",
"shape": [3, 3, 64, 128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.09215074052997664,
"scale": 0.0007199276603904425,
"original_dtype": "float32"
}
},
{
"name": "conv2d_2/bias",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.052666782806901374,
"scale": 0.00035346834098591524,
"original_dtype": "float32"
}
},
{
"name": "dense/kernel",
"shape": [16384, 128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.1078803108311167,
"scale": 0.0006960020053620432,
"original_dtype": "float32"
}
},
{
"name": "dense/bias",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.010696043731535184,
"scale": 0.00013539295862702763,
"original_dtype": "float32"
}
},
{
"name": "dense_1/kernel",
"shape": [128, 10],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.26071277062098186,
"scale": 0.002190863618663713,
"original_dtype": "float32"
}
},
{
"name": "dense_1/bias",
"shape": [10],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.020677046455881174,
"scale": 0.00016028718182853623,
"original_dtype": "float32"
}
}
]
}
]
}


@@ -0,0 +1,858 @@
{
"format": "layers-model",
"generatedBy": "keras v3.13.0",
"convertedBy": "TensorFlow.js Converter v4.22.0",
"modelTopology": {
"keras_version": "3.13.0",
"backend": "tensorflow",
"model_config": {
"class_name": "Sequential",
"config": {
"name": "sequential",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"layers": [
{
"class_name": "InputLayer",
"config": {
"batch_shape": [null, 128, 64, 1],
"dtype": "float32",
"sparse": false,
"ragged": false,
"name": "input_layer",
"optional": false
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"filters": 32,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": { "seed": null },
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d_1",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"filters": 64,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": { "seed": null },
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_1",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d_1",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_1",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Conv2D",
"config": {
"name": "conv2d_2",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"filters": 128,
"kernel_size": [3, 3],
"strides": [1, 1],
"padding": "same",
"data_format": "channels_last",
"dilation_rate": [1, 1],
"groups": 1,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": { "seed": null },
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"activity_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_2",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "MaxPooling2D",
"config": {
"name": "max_pooling2d_2",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"pool_size": [2, 2],
"padding": "valid",
"strides": [2, 2],
"data_format": "channels_last"
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_2",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"rate": 0.25,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Flatten",
"config": {
"name": "flatten",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"data_format": "channels_last"
}
},
{
"class_name": "Dense",
"config": {
"name": "dense",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"units": 128,
"activation": "relu",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": { "seed": null },
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null,
"quantization_config": null
}
},
{
"class_name": "BatchNormalization",
"config": {
"name": "batch_normalization_3",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"axis": -1,
"momentum": 0.99,
"epsilon": 0.001,
"center": true,
"scale": true,
"beta_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"gamma_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"moving_mean_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"moving_variance_initializer": {
"module": "keras.initializers",
"class_name": "Ones",
"config": {},
"registered_name": null
},
"beta_regularizer": null,
"gamma_regularizer": null,
"beta_constraint": null,
"gamma_constraint": null,
"synchronized": false
}
},
{
"class_name": "Dropout",
"config": {
"name": "dropout_3",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"rate": 0.5,
"seed": null,
"noise_shape": null
}
},
{
"class_name": "Dense",
"config": {
"name": "dense_1",
"trainable": true,
"dtype": {
"module": "keras",
"class_name": "DTypePolicy",
"config": { "name": "float32" },
"registered_name": null
},
"units": 10,
"activation": "softmax",
"use_bias": true,
"kernel_initializer": {
"module": "keras.initializers",
"class_name": "GlorotUniform",
"config": { "seed": null },
"registered_name": null
},
"bias_initializer": {
"module": "keras.initializers",
"class_name": "Zeros",
"config": {},
"registered_name": null
},
"kernel_regularizer": null,
"bias_regularizer": null,
"kernel_constraint": null,
"bias_constraint": null,
"quantization_config": null
}
}
],
"build_input_shape": [null, 128, 64, 1]
}
},
"training_config": {
"loss": "sparse_categorical_crossentropy",
"loss_weights": null,
"metrics": ["accuracy"],
"weighted_metrics": null,
"run_eagerly": false,
"steps_per_execution": 1,
"jit_compile": false,
"optimizer_config": {
"class_name": "Adam",
"config": {
"name": "adam",
"learning_rate": 0.0010000000474974513,
"weight_decay": null,
"clipnorm": null,
"global_clipnorm": null,
"clipvalue": null,
"use_ema": false,
"ema_momentum": 0.99,
"ema_overwrite_frequency": null,
"loss_scale_factor": null,
"gradient_accumulation_steps": null,
"beta_1": 0.9,
"beta_2": 0.999,
"epsilon": 1e-7,
"amsgrad": false
}
}
}
},
"weightsManifest": [
{
"paths": ["group1-shard1of1.bin"],
"weights": [
{
"name": "batch_normalization/gamma",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.970035195350647,
"scale": 0.00039288062675326476,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/beta",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.04866361422281639,
"scale": 0.00040217862994063134,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/moving_mean",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 1.0939256753772497e-5,
"scale": 0.001048501559268391,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization/moving_variance",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.000532817910425365,
"scale": 0.00016297123568388176,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/gamma",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.9726127982139587,
"scale": 0.00019898110744999905,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/beta",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.06264814909766703,
"scale": 0.00037290564939087515,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/moving_mean",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.12544548511505127,
"scale": 0.001907470179539101,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_1/moving_variance",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.042508192360401154,
"scale": 0.002489794206385519,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/gamma",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.975760817527771,
"scale": 0.0003113854165170707,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/beta",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.023137448749998037,
"scale": 0.00013072004943501716,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/moving_mean",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.015866611152887344,
"scale": 0.005222073358063605,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_2/moving_variance",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.01432291604578495,
"scale": 0.00944612571860061,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/gamma",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.9765098690986633,
"scale": 0.0008689317048764697,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/beta",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.05253423078387391,
"scale": 0.00032833894239921196,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/moving_mean",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 2.3402893845059225e-8,
"scale": 0.124165194550534,
"original_dtype": "float32"
}
},
{
"name": "batch_normalization_3/moving_variance",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": 0.000532600621227175,
"scale": 0.8092722632006888,
"original_dtype": "float32"
}
},
{
"name": "conv2d/kernel",
"shape": [3, 3, 1, 32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.1684967933916578,
"scale": 0.0012961291799358293,
"original_dtype": "float32"
}
},
{
"name": "conv2d/bias",
"shape": [32],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.014791351323034248,
"scale": 0.00019462304372413485,
"original_dtype": "float32"
}
},
{
"name": "conv2d_1/kernel",
"shape": [3, 3, 32, 64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.14185832411635155,
"scale": 0.0010912178778180888,
"original_dtype": "float32"
}
},
{
"name": "conv2d_1/bias",
"shape": [64],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.052345379924072934,
"scale": 0.00033341006321065564,
"original_dtype": "float32"
}
},
{
"name": "conv2d_2/kernel",
"shape": [3, 3, 64, 128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.09215074052997664,
"scale": 0.0007199276603904425,
"original_dtype": "float32"
}
},
{
"name": "conv2d_2/bias",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.052666782806901374,
"scale": 0.00035346834098591524,
"original_dtype": "float32"
}
},
{
"name": "dense/kernel",
"shape": [16384, 128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.1078803108311167,
"scale": 0.0006960020053620432,
"original_dtype": "float32"
}
},
{
"name": "dense/bias",
"shape": [128],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.010696043731535184,
"scale": 0.00013539295862702763,
"original_dtype": "float32"
}
},
{
"name": "dense_1/kernel",
"shape": [128, 10],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.26071277062098186,
"scale": 0.002190863618663713,
"original_dtype": "float32"
}
},
{
"name": "dense_1/bias",
"shape": [10],
"dtype": "float32",
"quantization": {
"dtype": "uint8",
"min": -0.020677046455881174,
"scale": 0.00016028718182853623,
"original_dtype": "float32"
}
}
]
}
]
}
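
This layers-model manifest describes a small Keras CNN (three Conv2D → BatchNormalization → MaxPooling2D → Dropout blocks feeding a 128-unit Dense layer and a 10-way softmax) with weights quantized to uint8, which TensorFlow.js dequantizes transparently on load. As a minimal consumption sketch, not taken from this repo: the model path and the [0, 1] pixel scaling are assumptions:

import * as tf from '@tensorflow/tfjs'

// Minimal sketch: load the quantized layers-model and classify one
// 128x64 grayscale crop. './model/model.json' is a hypothetical path.
async function classify(pixels: Float32Array): Promise<number> {
  const model = await tf.loadLayersModel('./model/model.json')
  // Shape must match the InputLayer batch_shape [null, 128, 64, 1];
  // pixel values are assumed to be pre-scaled to [0, 1].
  const input = tf.tensor4d(pixels, [1, 128, 64, 1])
  const probs = model.predict(input) as tf.Tensor
  const digit = (await probs.argMax(-1).data())[0]
  input.dispose()
  probs.dispose()
  return digit
}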

View File

@@ -0,0 +1,169 @@
/**
 * API route for approving parsed worksheet results and adding them to the existing session
*
* POST /api/curriculum/[playerId]/attachments/[attachmentId]/approve
* - Approves the parsing result
* - Adds the parsed problems to the EXISTING session that the attachment belongs to
* - Does NOT create a new session - attachments are already associated with sessions
*/
import { NextResponse } from 'next/server'
import { eq } from 'drizzle-orm'
import { db } from '@/db'
import { practiceAttachments } from '@/db/schema/practice-attachments'
import { sessionPlans, type SlotResult } from '@/db/schema/session-plans'
import { canPerformAction } from '@/lib/classroom'
import { getDbUserId } from '@/lib/viewer'
import { convertToSlotResults, computeParsingStats } from '@/lib/worksheet-parsing'
interface RouteParams {
params: Promise<{ playerId: string; attachmentId: string }>
}
/**
* POST - Approve parsing and add problems to existing session
*/
export async function POST(_request: Request, { params }: RouteParams) {
try {
const { playerId, attachmentId } = await params
if (!playerId || !attachmentId) {
return NextResponse.json({ error: 'Player ID and Attachment ID required' }, { status: 400 })
}
// Authorization check
const userId = await getDbUserId()
const canApprove = await canPerformAction(userId, playerId, 'start-session')
if (!canApprove) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
}
// Get attachment record
const attachment = await db
.select()
.from(practiceAttachments)
.where(eq(practiceAttachments.id, attachmentId))
.get()
if (!attachment) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
if (attachment.playerId !== playerId) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
// Check if already processed
if (attachment.sessionCreated) {
return NextResponse.json(
{
error: 'Problems from this worksheet already added to session',
},
{ status: 400 }
)
}
// Get the existing session that this attachment belongs to
const existingSession = await db
.select()
.from(sessionPlans)
.where(eq(sessionPlans.id, attachment.sessionId))
.get()
if (!existingSession) {
return NextResponse.json(
{
error: 'Associated session not found',
},
{ status: 404 }
)
}
// Get the parsing result to convert (prefer approved result, fall back to raw)
const parsingResult = attachment.approvedResult ?? attachment.rawParsingResult
if (!parsingResult) {
return NextResponse.json(
{
error: 'No parsing results available. Parse the worksheet first.',
},
{ status: 400 }
)
}
// Convert to slot results
// Always use part 1 for offline worksheets - slot indices track individual problems
const conversionResult = convertToSlotResults(parsingResult, {
partNumber: 1,
source: 'practice',
})
if (conversionResult.slotResults.length === 0) {
return NextResponse.json(
{
error: 'No valid problems to add to session',
},
{ status: 400 }
)
}
const now = new Date()
// Add timestamps to slot results and adjust slot indices
const existingResults = (existingSession.results ?? []) as SlotResult[]
const startSlotIndex = existingResults.length
const slotResultsWithTimestamps: SlotResult[] = conversionResult.slotResults.map(
(result, idx) => ({
...result,
slotIndex: startSlotIndex + idx,
timestamp: now,
})
)
// Merge new results with existing results
const mergedResults = [...existingResults, ...slotResultsWithTimestamps]
// Calculate updated stats
const totalCount = mergedResults.length
const correctCount = mergedResults.filter((r) => r.isCorrect).length
// Update the existing session with the new problems
await db
.update(sessionPlans)
.set({
results: mergedResults,
// Update the completed timestamp since we added new work
completedAt: now,
// Mark as completed if it wasn't already
status: 'completed',
})
.where(eq(sessionPlans.id, existingSession.id))
// Update attachment to mark as processed
await db
.update(practiceAttachments)
.set({
parsingStatus: 'approved',
sessionCreated: true,
createdSessionId: existingSession.id, // Reference to the session we added to
})
.where(eq(practiceAttachments.id, attachmentId))
// Compute final stats
const stats = computeParsingStats(parsingResult)
return NextResponse.json({
success: true,
sessionId: existingSession.id,
problemCount: slotResultsWithTimestamps.length,
totalSessionProblems: totalCount,
correctCount,
accuracy: totalCount > 0 ? correctCount / totalCount : null,
skillsExercised: conversionResult.skillsExercised,
stats,
})
} catch (error) {
console.error('Error approving and adding to session:', error)
return NextResponse.json({ error: 'Failed to approve and add to session' }, { status: 500 })
}
}
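
As a usage sketch, a client would call this route roughly as follows; the URL shape and response fields come from the handler above, while the helper name and error handling are hypothetical:

// Hypothetical helper: approve the parsed problems and fold them into
// the attachment's existing session.
async function approveAttachment(playerId: string, attachmentId: string) {
  const res = await fetch(
    `/api/curriculum/${playerId}/attachments/${attachmentId}/approve`,
    { method: 'POST' }
  )
  if (!res.ok) throw new Error((await res.json()).error)
  // Fields per the NextResponse.json() payload above
  return res.json() as Promise<{
    success: boolean
    sessionId: string
    problemCount: number
    totalSessionProblems: number
    correctCount: number
    accuracy: number | null
  }>
}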

View File

@@ -0,0 +1,323 @@
/**
* API route for selective problem re-parsing
*
* POST /api/curriculum/[playerId]/attachments/[attachmentId]/parse-selected
* - Re-parse specific problems by cropping their bounding boxes
* - Merges results back into existing parsing result
*/
import { readFile } from 'fs/promises'
import { NextResponse } from 'next/server'
import { join } from 'path'
import { eq } from 'drizzle-orm'
import sharp from 'sharp'
import { z } from 'zod'
import { db } from '@/db'
import { practiceAttachments } from '@/db/schema/practice-attachments'
import { canPerformAction } from '@/lib/classroom'
import { getDbUserId } from '@/lib/viewer'
import { llm } from '@/lib/llm'
import {
type ParsedProblem,
type BoundingBox,
type WorksheetParsingResult,
getModelConfig,
getDefaultModelConfig,
calculateCropRegion,
CROP_PADDING,
} from '@/lib/worksheet-parsing'
interface RouteParams {
params: Promise<{ playerId: string; attachmentId: string }>
}
// Schema for single problem re-parse response
const SingleProblemSchema = z.object({
terms: z
.array(z.number().int())
.min(2)
.max(7)
.describe(
'The terms (numbers) in this problem. First term is always positive. ' +
'Negative numbers indicate subtraction. Example: "45 - 17 + 8" -> [45, -17, 8]'
),
studentAnswer: z
.number()
.int()
.nullable()
.describe("The student's written answer. null if no answer is visible or answer box is empty."),
format: z
.enum(['vertical', 'linear'])
.describe('Format: "vertical" for stacked column, "linear" for horizontal'),
termsConfidence: z.number().min(0).max(1).describe('Confidence in terms reading (0-1)'),
studentAnswerConfidence: z
.number()
.min(0)
.max(1)
.describe('Confidence in student answer reading (0-1)'),
})
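// For reference, a response conforming to this schema might look like
// (values illustrative, echoing the worked example in the prompt below):
//   { "terms": [45, -17, 8], "studentAnswer": 36, "format": "vertical",
//     "termsConfidence": 0.93, "studentAnswerConfidence": 0.88 }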
// Request body schema
const RequestBodySchema = z.object({
problemIndices: z.array(z.number().int().min(0)).min(1).max(20),
boundingBoxes: z.array(
z.object({
x: z.number().min(0).max(1),
y: z.number().min(0).max(1),
width: z.number().min(0).max(1),
height: z.number().min(0).max(1),
})
),
additionalContext: z.string().optional(),
modelConfigId: z.string().optional(),
})
/**
* Build prompt for single problem parsing
*/
function buildSingleProblemPrompt(additionalContext?: string): string {
let prompt = `You are analyzing a cropped image showing a SINGLE arithmetic problem from an abacus workbook.
Extract the following from this cropped problem image:
1. The problem terms (numbers being added/subtracted)
2. The student's written answer (if any)
3. The format (vertical or linear)
4. Your confidence in each reading
⚠️ **CRITICAL: MINUS SIGN DETECTION** ⚠️
Minus signs are SMALL but EXTREMELY IMPORTANT. Missing a minus sign completely changes the answer!
**How minus signs appear in VERTICAL problems:**
- A small horizontal dash/line to the LEFT of a number
- May appear as: − (minus), - (hyphen), or a short horizontal stroke
- Often smaller than you expect - LOOK CAREFULLY!
- Sometimes positioned slightly above or below the number's vertical center
**Example - the ONLY difference is that tiny minus sign:**
- NO minus: 45 + 17 + 8 = 70 → terms = [45, 17, 8]
- WITH minus: 45 - 17 + 8 = 36 → terms = [45, -17, 8]
**You MUST examine the LEFT side of each number for minus signs!**
IMPORTANT:
- The first term is always positive
- Negative numbers indicate subtraction (e.g., "45 - 17" has terms [45, -17])
- If no student answer is visible, set studentAnswer to null
- Be precise about handwritten digits - common confusions: 1/7, 4/9, 6/0, 5/8
CONFIDENCE GUIDELINES:
- 0.9-1.0: Clear, unambiguous reading
- 0.7-0.89: Slightly unclear but confident
- 0.5-0.69: Uncertain, could be misread
- Below 0.5: Very uncertain`
if (additionalContext) {
prompt += `\n\nADDITIONAL CONTEXT FROM USER:\n${additionalContext}`
}
return prompt
}
/**
* Crop image to bounding box with padding using sharp (server-side).
* Uses shared calculateCropRegion for consistent cropping with client-side.
*/
async function cropToBoundingBox(
imageBuffer: Buffer,
box: BoundingBox,
padding: number = CROP_PADDING
): Promise<Buffer> {
const metadata = await sharp(imageBuffer).metadata()
const imageWidth = metadata.width ?? 1
const imageHeight = metadata.height ?? 1
// Use shared crop region calculation
const region = calculateCropRegion(box, imageWidth, imageHeight, padding)
return sharp(imageBuffer)
.extract({ left: region.left, top: region.top, width: region.width, height: region.height })
.toBuffer()
}
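// Note: calculateCropRegion is shared with the client-side cropper. It
// presumably maps the normalized box to pixel coordinates, expands it by
// `padding` (assumed to be in pixels), and clamps to the image bounds,
// roughly:
//   left = max(0, round(box.x * imageWidth) - padding)
//   top  = max(0, round(box.y * imageHeight) - padding)
//   width = min(imageWidth, round((box.x + box.width) * imageWidth) + padding) - left
// Keeping that math in one place keeps server crops consistent with
// client previews.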
/**
* POST - Re-parse selected problems
*/
export async function POST(request: Request, { params }: RouteParams) {
try {
const { playerId, attachmentId } = await params
if (!playerId || !attachmentId) {
return NextResponse.json({ error: 'Player ID and Attachment ID required' }, { status: 400 })
}
// Parse request body
let body: z.infer<typeof RequestBodySchema>
try {
const rawBody = await request.json()
body = RequestBodySchema.parse(rawBody)
} catch (err) {
return NextResponse.json(
{ error: 'Invalid request body', details: err instanceof Error ? err.message : 'Unknown' },
{ status: 400 }
)
}
const { problemIndices, boundingBoxes, additionalContext, modelConfigId } = body
if (problemIndices.length !== boundingBoxes.length) {
return NextResponse.json(
{ error: 'problemIndices and boundingBoxes must have the same length' },
{ status: 400 }
)
}
// Resolve model config
const modelConfig = modelConfigId ? getModelConfig(modelConfigId) : getDefaultModelConfig()
// Authorization check
const userId = await getDbUserId()
const canParse = await canPerformAction(userId, playerId, 'start-session')
if (!canParse) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
}
// Get attachment record
const attachment = await db
.select()
.from(practiceAttachments)
.where(eq(practiceAttachments.id, attachmentId))
.get()
if (!attachment) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
if (attachment.playerId !== playerId) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
// Must have existing parsing result to merge into
if (!attachment.rawParsingResult) {
return NextResponse.json({ error: 'Attachment has not been parsed yet' }, { status: 400 })
}
const existingResult = attachment.rawParsingResult as WorksheetParsingResult
// Read the image file
const uploadDir = join(process.cwd(), 'data', 'uploads', 'players', playerId)
const filepath = join(uploadDir, attachment.filename)
const imageBuffer = await readFile(filepath)
const mimeType = attachment.mimeType || 'image/jpeg'
// Build the prompt
const prompt = buildSingleProblemPrompt(additionalContext)
// Process each selected problem
const reparsedProblems: Array<{
index: number
originalProblem: ParsedProblem
newData: z.infer<typeof SingleProblemSchema>
}> = []
for (let i = 0; i < problemIndices.length; i++) {
const problemIndex = problemIndices[i]
const box = boundingBoxes[i]
const originalProblem = existingResult.problems[problemIndex]
if (!originalProblem) {
console.warn(`Problem index ${problemIndex} not found in existing result`)
continue
}
try {
// Crop image to bounding box
const croppedBuffer = await cropToBoundingBox(imageBuffer, box)
const base64Cropped = croppedBuffer.toString('base64')
const croppedDataUrl = `data:${mimeType};base64,${base64Cropped}`
// Call LLM for this problem
const response = await llm.vision({
prompt,
images: [croppedDataUrl],
schema: SingleProblemSchema,
maxRetries: 1,
provider: modelConfig?.provider,
model: modelConfig?.model,
reasoningEffort: modelConfig?.reasoningEffort,
})
reparsedProblems.push({
index: problemIndex,
originalProblem,
newData: response.data,
})
} catch (err) {
console.error(`Failed to re-parse problem ${problemIndex}:`, err)
// Continue with other problems
}
}
// Merge results back into existing parsing result
// Create a map from problem index to the user's adjusted bounding box
const adjustedBoxMap = new Map<number, BoundingBox>()
for (let i = 0; i < problemIndices.length; i++) {
adjustedBoxMap.set(problemIndices[i], boundingBoxes[i])
}
const updatedProblems = [...existingResult.problems]
for (const { index, originalProblem, newData } of reparsedProblems) {
const correctAnswer = newData.terms.reduce((a, b) => a + b, 0)
// Use the user's adjusted bounding box (passed in request), not the original
const userAdjustedBox = adjustedBoxMap.get(index) ?? originalProblem.problemBoundingBox
updatedProblems[index] = {
...originalProblem,
terms: newData.terms,
studentAnswer: newData.studentAnswer,
correctAnswer,
format: newData.format,
termsConfidence: newData.termsConfidence,
studentAnswerConfidence: newData.studentAnswerConfidence,
// Use the user's adjusted bounding box
problemBoundingBox: userAdjustedBox,
}
}
// Update the parsing result
const updatedResult: WorksheetParsingResult = {
...existingResult,
problems: updatedProblems,
// Recalculate overall confidence
overallConfidence:
updatedProblems.reduce(
(sum, p) => sum + Math.min(p.termsConfidence, p.studentAnswerConfidence),
0
) / updatedProblems.length,
// Check if any problems still need review
needsReview: updatedProblems.some(
(p) => Math.min(p.termsConfidence, p.studentAnswerConfidence) < 0.7
),
}
// Save updated result to database
await db
.update(practiceAttachments)
.set({
rawParsingResult: updatedResult,
confidenceScore: updatedResult.overallConfidence,
needsReview: updatedResult.needsReview,
parsingStatus: updatedResult.needsReview ? 'needs_review' : 'approved',
})
.where(eq(practiceAttachments.id, attachmentId))
return NextResponse.json({
success: true,
reparsedCount: reparsedProblems.length,
reparsedIndices: reparsedProblems.map((p) => p.index),
updatedResult,
})
} catch (error) {
console.error('Error in parse-selected:', error)
return NextResponse.json({ error: 'Failed to re-parse selected problems' }, { status: 500 })
}
}
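
A sketch of invoking selective re-parse from the client: problemIndices and boundingBoxes are parallel arrays of equal length, with coordinates normalized to 0-1, per RequestBodySchema above. The playerId/attachmentId variables and all values are illustrative:

await fetch(`/api/curriculum/${playerId}/attachments/${attachmentId}/parse-selected`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    problemIndices: [3, 7], // re-parse problems 3 and 7
    boundingBoxes: [
      { x: 0.05, y: 0.1, width: 0.4, height: 0.12 },
      { x: 0.55, y: 0.1, width: 0.4, height: 0.12 },
    ],
    additionalContext: 'Problem 7 has a faint minus sign',
  }),
})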

View File

@@ -0,0 +1,311 @@
/**
* API route for LLM-powered worksheet parsing
*
* POST /api/curriculum/[playerId]/attachments/[attachmentId]/parse
 * - Parses the attachment image and returns the result when done
 * - Current status and results can also be fetched via GET
*
* GET /api/curriculum/[playerId]/attachments/[attachmentId]/parse
* - Get current parsing status and results
*/
import { readFile } from 'fs/promises'
import { NextResponse } from 'next/server'
import { join } from 'path'
import { eq } from 'drizzle-orm'
import { db } from '@/db'
import { practiceAttachments, type ParsingStatus } from '@/db/schema/practice-attachments'
import { canPerformAction } from '@/lib/classroom'
import { getDbUserId } from '@/lib/viewer'
import {
parseWorksheetImage,
computeParsingStats,
buildWorksheetParsingPrompt,
getModelConfig,
getDefaultModelConfig,
type WorksheetParsingResult,
} from '@/lib/worksheet-parsing'
interface RouteParams {
params: Promise<{ playerId: string; attachmentId: string }>
}
/**
* POST - Start parsing the attachment
*
* Body (optional):
* - modelConfigId: string - ID of the model config to use (from PARSING_MODEL_CONFIGS)
* - additionalContext: string - Additional context/hints for the LLM
* - preservedBoundingBoxes: Record<number, BoundingBox> - Bounding boxes to preserve by index
*/
export async function POST(request: Request, { params }: RouteParams) {
try {
const { playerId, attachmentId } = await params
if (!playerId || !attachmentId) {
return NextResponse.json({ error: 'Player ID and Attachment ID required' }, { status: 400 })
}
// Parse optional parameters from request body
let modelConfigId: string | undefined
let additionalContext: string | undefined
let preservedBoundingBoxes:
| Record<number, { x: number; y: number; width: number; height: number }>
| undefined
try {
const body = await request.json()
modelConfigId = body?.modelConfigId
additionalContext = body?.additionalContext
preservedBoundingBoxes = body?.preservedBoundingBoxes
} catch {
// No body or invalid JSON is fine - use defaults
}
// Resolve model config
const modelConfig = modelConfigId ? getModelConfig(modelConfigId) : getDefaultModelConfig()
// Authorization check
const userId = await getDbUserId()
const canParse = await canPerformAction(userId, playerId, 'start-session')
if (!canParse) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
}
// Get attachment record
const attachment = await db
.select()
.from(practiceAttachments)
.where(eq(practiceAttachments.id, attachmentId))
.get()
if (!attachment) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
if (attachment.playerId !== playerId) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
// Check if already processing
if (attachment.parsingStatus === 'processing') {
return NextResponse.json({
status: 'processing',
message: 'Parsing already in progress',
})
}
// Update status to processing
await db
.update(practiceAttachments)
.set({
parsingStatus: 'processing',
parsingError: null,
})
.where(eq(practiceAttachments.id, attachmentId))
// Read the image file
const uploadDir = join(process.cwd(), 'data', 'uploads', 'players', playerId)
const filepath = join(uploadDir, attachment.filename)
const imageBuffer = await readFile(filepath)
const base64Image = imageBuffer.toString('base64')
const mimeType = attachment.mimeType || 'image/jpeg'
const imageDataUrl = `data:${mimeType};base64,${base64Image}`
// Build the prompt (capture for debugging)
const promptOptions = additionalContext ? { additionalContext } : {}
const promptUsed = buildWorksheetParsingPrompt(promptOptions)
try {
// Parse the worksheet (always uses cropped image)
const result = await parseWorksheetImage(imageDataUrl, {
maxRetries: 2,
modelConfigId: modelConfig?.id,
promptOptions,
})
let parsingResult = result.data
// Merge preserved bounding boxes from user adjustments
// This allows the user's manual adjustments to be retained after re-parsing
if (preservedBoundingBoxes && Object.keys(preservedBoundingBoxes).length > 0) {
parsingResult = {
...parsingResult,
problems: parsingResult.problems.map((problem, index) => {
const preservedBox = preservedBoundingBoxes[index]
if (preservedBox) {
return {
...problem,
problemBoundingBox: preservedBox,
}
}
return problem
}),
}
}
const stats = computeParsingStats(parsingResult)
// Determine status based on confidence
const status: ParsingStatus = parsingResult.needsReview ? 'needs_review' : 'approved'
// Save results and LLM metadata to database
await db
.update(practiceAttachments)
.set({
parsingStatus: status,
parsedAt: new Date().toISOString(),
rawParsingResult: parsingResult,
confidenceScore: parsingResult.overallConfidence,
needsReview: parsingResult.needsReview,
parsingError: null,
// LLM metadata for debugging/transparency
llmProvider: result.provider,
llmModel: result.model,
llmPromptUsed: promptUsed,
llmRawResponse: result.rawResponse,
llmJsonSchema: result.jsonSchema,
llmImageSource: 'cropped',
llmAttempts: result.attempts,
llmPromptTokens: result.usage.promptTokens,
llmCompletionTokens: result.usage.completionTokens,
llmTotalTokens: result.usage.promptTokens + result.usage.completionTokens,
})
.where(eq(practiceAttachments.id, attachmentId))
return NextResponse.json({
success: true,
status,
result: parsingResult,
stats,
// LLM metadata in response
llm: {
provider: result.provider,
model: result.model,
attempts: result.attempts,
imageSource: 'cropped',
usage: result.usage,
},
})
} catch (parseError) {
const errorMessage =
parseError instanceof Error ? parseError.message : 'Unknown parsing error'
console.error('Worksheet parsing error:', parseError)
// Update status to failed
await db
.update(practiceAttachments)
.set({
parsingStatus: 'failed',
parsingError: errorMessage,
})
.where(eq(practiceAttachments.id, attachmentId))
return NextResponse.json(
{
success: false,
status: 'failed',
error: errorMessage,
},
{ status: 500 }
)
}
} catch (error) {
console.error('Error starting parse:', error)
return NextResponse.json({ error: 'Failed to start parsing' }, { status: 500 })
}
}
/**
* GET - Get parsing status and results
*/
export async function GET(_request: Request, { params }: RouteParams) {
try {
const { playerId, attachmentId } = await params
if (!playerId || !attachmentId) {
return NextResponse.json({ error: 'Player ID and Attachment ID required' }, { status: 400 })
}
// Authorization check
const userId = await getDbUserId()
const canView = await canPerformAction(userId, playerId, 'view')
if (!canView) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
}
// Get attachment record
const attachment = await db
.select()
.from(practiceAttachments)
.where(eq(practiceAttachments.id, attachmentId))
.get()
if (!attachment) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
if (attachment.playerId !== playerId) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
// Build response based on status
const response: {
status: ParsingStatus | null
parsedAt: string | null
result: WorksheetParsingResult | null
error: string | null
needsReview: boolean
confidenceScore: number | null
stats?: ReturnType<typeof computeParsingStats>
llm?: {
provider: string | null
model: string | null
promptUsed: string | null
rawResponse: string | null
jsonSchema: string | null
imageSource: string | null
attempts: number | null
usage: {
promptTokens: number | null
completionTokens: number | null
totalTokens: number | null
}
}
} = {
status: attachment.parsingStatus,
parsedAt: attachment.parsedAt,
result: attachment.rawParsingResult,
error: attachment.parsingError,
needsReview: attachment.needsReview === true,
confidenceScore: attachment.confidenceScore,
}
// Add stats if we have results
if (attachment.rawParsingResult) {
response.stats = computeParsingStats(attachment.rawParsingResult)
}
// Add LLM metadata if available
if (attachment.llmProvider || attachment.llmModel) {
response.llm = {
provider: attachment.llmProvider,
model: attachment.llmModel,
promptUsed: attachment.llmPromptUsed,
rawResponse: attachment.llmRawResponse,
jsonSchema: attachment.llmJsonSchema,
imageSource: attachment.llmImageSource,
attempts: attachment.llmAttempts,
usage: {
promptTokens: attachment.llmPromptTokens,
completionTokens: attachment.llmCompletionTokens,
totalTokens: attachment.llmTotalTokens,
},
}
}
return NextResponse.json(response)
} catch (error) {
console.error('Error getting parse status:', error)
return NextResponse.json({ error: 'Failed to get parsing status' }, { status: 500 })
}
}
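
A sketch of the client flow against this route: POST to run the parse, then GET to read back status and results. Field names follow the handlers above; the surrounding playerId/attachmentId variables are assumed:

const base = `/api/curriculum/${playerId}/attachments/${attachmentId}/parse`
// Kick off parsing (the body is optional; additionalContext is an LLM hint)
await fetch(base, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ additionalContext: 'Answers are written in pencil' }),
})
// Read back the current status and results
const status = await (await fetch(base)).json()
if (status.status === 'needs_review') {
  console.log(`Needs review, confidence ${status.confidenceScore}`)
}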

View File

@@ -0,0 +1,141 @@
/**
* API route for reviewing and correcting parsed worksheet results
*
* PATCH /api/curriculum/[playerId]/attachments/[attachmentId]/review
* - Submit user corrections to parsed problems
* - Updates the parsing result with corrections
*/
import { NextResponse } from 'next/server'
import { eq } from 'drizzle-orm'
import { z } from 'zod'
import { db } from '@/db'
import { practiceAttachments, type ParsingStatus } from '@/db/schema/practice-attachments'
import { canPerformAction } from '@/lib/classroom'
import { getDbUserId } from '@/lib/viewer'
import {
applyCorrections,
computeParsingStats,
ProblemCorrectionSchema,
} from '@/lib/worksheet-parsing'
interface RouteParams {
params: Promise<{ playerId: string; attachmentId: string }>
}
/**
* Request body schema for corrections
*/
const ReviewRequestSchema = z.object({
corrections: z.array(ProblemCorrectionSchema).min(1),
markAsReviewed: z.boolean().default(false),
})
/**
* PATCH - Submit corrections to parsed problems
*/
export async function PATCH(request: Request, { params }: RouteParams) {
try {
const { playerId, attachmentId } = await params
if (!playerId || !attachmentId) {
return NextResponse.json({ error: 'Player ID and Attachment ID required' }, { status: 400 })
}
// Authorization check
const userId = await getDbUserId()
const canReview = await canPerformAction(userId, playerId, 'start-session')
if (!canReview) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
}
// Parse request body
const body = await request.json()
const parseResult = ReviewRequestSchema.safeParse(body)
if (!parseResult.success) {
return NextResponse.json(
{
error: 'Invalid request body',
details: parseResult.error.issues,
},
{ status: 400 }
)
}
const { corrections, markAsReviewed } = parseResult.data
// Get attachment record
const attachment = await db
.select()
.from(practiceAttachments)
.where(eq(practiceAttachments.id, attachmentId))
.get()
if (!attachment) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
if (attachment.playerId !== playerId) {
return NextResponse.json({ error: 'Attachment not found' }, { status: 404 })
}
// Check if we have parsing results to correct
if (!attachment.rawParsingResult) {
return NextResponse.json(
{
error: 'No parsing results to correct. Parse the worksheet first.',
},
{ status: 400 }
)
}
// Apply corrections to the raw result
const correctedResult = applyCorrections(
attachment.rawParsingResult,
corrections.map((c) => ({
problemNumber: c.problemNumber,
correctedTerms: c.correctedTerms ?? undefined,
correctedStudentAnswer: c.correctedStudentAnswer ?? undefined,
shouldExclude: c.shouldExclude,
}))
)
// Compute new stats
const stats = computeParsingStats(correctedResult)
// Determine new status
let newStatus: ParsingStatus = attachment.parsingStatus ?? 'needs_review'
if (markAsReviewed) {
// If user explicitly marks as reviewed, set to approved
newStatus = 'approved'
} else if (!correctedResult.needsReview) {
// If all problems now have high confidence, auto-approve
newStatus = 'approved'
} else {
// Still needs review
newStatus = 'needs_review'
}
// Update database - store corrected result as approved result
await db
.update(practiceAttachments)
.set({
parsingStatus: newStatus,
approvedResult: correctedResult,
confidenceScore: correctedResult.overallConfidence,
needsReview: correctedResult.needsReview,
})
.where(eq(practiceAttachments.id, attachmentId))
return NextResponse.json({
success: true,
status: newStatus,
result: correctedResult,
stats,
correctionsApplied: corrections.length,
})
} catch (error) {
console.error('Error applying corrections:', error)
return NextResponse.json({ error: 'Failed to apply corrections' }, { status: 500 })
}
}
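
A sketch of submitting corrections; the correction fields follow the mapping in the handler above (problemNumber, correctedTerms, correctedStudentAnswer, shouldExclude), and all values are illustrative:

await fetch(`/api/curriculum/${playerId}/attachments/${attachmentId}/review`, {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    corrections: [
      // Fix misread terms on problem 4; drop problem 9 entirely
      { problemNumber: 4, correctedTerms: [45, -17, 8], shouldExclude: false },
      { problemNumber: 9, shouldExclude: true },
    ],
    markAsReviewed: true,
  }),
})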

View File

@@ -18,7 +18,7 @@ import { join } from 'path'
import { and, eq } from 'drizzle-orm'
import { db } from '@/db'
import { practiceAttachments, sessionPlans } from '@/db/schema'
import { canPerformAction } from '@/lib/classroom'
import { getPlayerAccess, generateAuthorizationError } from '@/lib/classroom'
import { getDbUserId } from '@/lib/viewer'
import { createId } from '@paralleldrive/cuid2'
@@ -37,6 +37,31 @@ export interface SessionAttachment {
originalUrl: string | null
corners: Array<{ x: number; y: number }> | null
rotation: 0 | 90 | 180 | 270
// Parsing fields
parsingStatus: string | null
parsedAt: string | null
parsingError: string | null
rawParsingResult: unknown | null
approvedResult: unknown | null
confidenceScore: number | null
needsReview: boolean
sessionCreated: boolean
createdSessionId: string | null
// LLM metadata (for debugging/transparency)
llm: {
provider: string | null
model: string | null
promptUsed: string | null
rawResponse: string | null
jsonSchema: string | null
imageSource: string | null
attempts: number | null
usage: {
promptTokens: number | null
completionTokens: number | null
totalTokens: number | null
}
} | null
}
/**
@@ -52,9 +77,12 @@ export async function GET(_request: Request, { params }: RouteParams) {
// Authorization check
const userId = await getDbUserId()
const canView = await canPerformAction(userId, playerId, 'view')
if (!canView) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
const access = await getPlayerAccess(userId, playerId)
if (access.accessLevel === 'none') {
const authError = generateAuthorizationError(access, 'view', {
actionDescription: 'view attachments for this student',
})
return NextResponse.json(authError, { status: 403 })
}
// Get attachments for this session
@@ -84,6 +112,34 @@ export async function GET(_request: Request, { params }: RouteParams) {
: null,
corners: att.corners ?? null,
rotation: (att.rotation ?? 0) as 0 | 90 | 180 | 270,
// Parsing fields
parsingStatus: att.parsingStatus ?? null,
parsedAt: att.parsedAt ?? null,
parsingError: att.parsingError ?? null,
rawParsingResult: att.rawParsingResult ?? null,
approvedResult: att.approvedResult ?? null,
confidenceScore: att.confidenceScore ?? null,
needsReview: att.needsReview === true,
sessionCreated: att.sessionCreated === true,
createdSessionId: att.createdSessionId ?? null,
// LLM metadata (for debugging/transparency)
llm:
att.llmProvider || att.llmModel
? {
provider: att.llmProvider ?? null,
model: att.llmModel ?? null,
promptUsed: att.llmPromptUsed ?? null,
rawResponse: att.llmRawResponse ?? null,
jsonSchema: att.llmJsonSchema ?? null,
imageSource: att.llmImageSource ?? null,
attempts: att.llmAttempts ?? null,
usage: {
promptTokens: att.llmPromptTokens ?? null,
completionTokens: att.llmCompletionTokens ?? null,
totalTokens: att.llmTotalTokens ?? null,
},
}
: null,
}))
return NextResponse.json({ attachments: result })
@@ -104,11 +160,18 @@ export async function POST(request: Request, { params }: RouteParams) {
return NextResponse.json({ error: 'Player ID and Session ID required' }, { status: 400 })
}
// Authorization check - require 'start-session' permission (parent or present teacher)
// Authorization check - require 'view' permission (parent, teacher-enrolled, or teacher-present)
// Adding photos to an existing session is less sensitive than starting a new session
const userId = await getDbUserId()
const canAdd = await canPerformAction(userId, playerId, 'start-session')
if (!canAdd) {
return NextResponse.json({ error: 'Not authorized' }, { status: 403 })
const access = await getPlayerAccess(userId, playerId)
if (access.accessLevel === 'none') {
console.error(
`[attachments POST] Authorization failed: userId=${userId} has no access to playerId=${playerId}`
)
const authError = generateAuthorizationError(access, 'view', {
actionDescription: 'add photos for this student',
})
return NextResponse.json(authError, { status: 403 })
}
// Verify session exists and belongs to player
@@ -257,6 +320,18 @@ export async function POST(request: Request, { params }: RouteParams) {
: null,
corners,
rotation,
// New attachments have no parsing data yet
parsingStatus: null,
parsedAt: null,
parsingError: null,
rawParsingResult: null,
approvedResult: null,
confidenceScore: null,
needsReview: false,
sessionCreated: false,
createdSessionId: null,
// No LLM metadata yet
llm: null,
})
}

View File

@@ -1,6 +1,6 @@
import { type NextRequest, NextResponse } from 'next/server'
import { getPlayerAccess } from '@/lib/classroom'
import { getViewerId } from '@/lib/viewer'
import { getDbUserId } from '@/lib/viewer'
interface RouteParams {
params: Promise<{ id: string }>
@@ -10,12 +10,14 @@ interface RouteParams {
* GET /api/players/[id]/access
* Check access level for specific player
*
* Returns: { accessLevel, isParent, isTeacher, isPresent }
* Returns: { accessLevel, isParent, isTeacher, isPresent, classroomId? }
*/
export async function GET(req: NextRequest, { params }: RouteParams) {
try {
const { id: playerId } = await params
const viewerId = await getViewerId()
// Use getDbUserId() to get the database user.id, not the guestId
// This is required because parent_child links to user.id
const viewerId = await getDbUserId()
const access = await getPlayerAccess(viewerId, playerId)
@@ -24,6 +26,7 @@ export async function GET(req: NextRequest, { params }: RouteParams) {
isParent: access.isParent,
isTeacher: access.isTeacher,
isPresent: access.isPresent,
classroomId: access.classroomId,
})
} catch (error) {
console.error('Failed to check player access:', error)
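
For reference, a minimal sketch of consuming this endpoint; the response shape follows the comment above, and the usePlayerAccess hook referenced later presumably wraps a call like this:

const res = await fetch(`/api/players/${playerId}/access`)
const access = (await res.json()) as {
  accessLevel: string
  isParent: boolean
  isTeacher: boolean
  isPresent: boolean
  classroomId?: string
}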

View File

@@ -1,8 +1,9 @@
'use client'
import { useRouter } from 'next/navigation'
import { useCallback, useMemo, useState } from 'react'
import { useCallback, useEffect, useMemo, useState } from 'react'
import { useToast } from '@/components/common/ToastContext'
import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { PageWithNav } from '@/components/PageWithNav'
import {
ActiveSession,
@@ -43,6 +44,7 @@ interface PracticeClientProps {
export function PracticeClient({ studentId, player, initialSession }: PracticeClientProps) {
const router = useRouter()
const { showError } = useToast()
const { setVisionFrameCallback } = useMyAbacus()
// Track pause state for HUD display (ActiveSession owns the modal and actual pause logic)
const [isPaused, setIsPaused] = useState(false)
@@ -168,7 +170,7 @@ export function PracticeClient({ studentId, player, initialSession }: PracticeCl
// broadcastState is updated by ActiveSession via the onBroadcastStateChange callback
// onAbacusControl receives control events from observing teacher
// onTeacherPause/onTeacherResume receive pause/resume commands from teacher
const { sendPartTransition, sendPartTransitionComplete } = useSessionBroadcast(
const { sendPartTransition, sendPartTransitionComplete, sendVisionFrame } = useSessionBroadcast(
currentPlan.id,
studentId,
broadcastState,
@@ -179,6 +181,17 @@ export function PracticeClient({ studentId, player, initialSession }: PracticeCl
}
)
// Wire vision frame callback to broadcast vision frames to observers
useEffect(() => {
setVisionFrameCallback((frame) => {
sendVisionFrame(frame.imageData, frame.detectedValue, frame.confidence)
})
return () => {
setVisionFrameCallback(null)
}
}, [setVisionFrameCallback, sendVisionFrame])
// Build session HUD data for PracticeSubNav
const sessionHud: SessionHudData | undefined = currentPart
? {

View File

@@ -8,6 +8,7 @@ import {
AllProblemsSection,
ContentBannerSlot,
OfflineWorkSection,
type OfflineAttachment,
PhotoViewerEditor,
type PhotoViewerEditorPhoto,
PracticeSubNav,
@@ -19,6 +20,8 @@ import {
SkillsPanel,
StartPracticeModal,
} from '@/components/practice'
import type { ProblemCorrection } from '@/components/worksheet-parsing'
import type { ParsingStatus } from '@/db/schema/practice-attachments'
import { calculateAutoPauseInfo } from '@/components/practice/autoPauseCalculator'
import { DocumentAdjuster } from '@/components/practice/DocumentAdjuster'
import { useDocumentDetection } from '@/components/practice/useDocumentDetection'
@@ -36,9 +39,18 @@ import { VisualDebugProvider } from '@/contexts/VisualDebugContext'
import type { Player } from '@/db/schema/players'
import type { SessionPlan, SlotResult } from '@/db/schema/session-plans'
import { useSessionMode } from '@/hooks/useSessionMode'
import { usePlayerAccess, canUploadPhotos } from '@/hooks/usePlayerAccess'
import {
useStartParsing,
useApproveAndCreateSession,
useSubmitCorrections,
useReparseSelected,
} from '@/hooks/useWorksheetParsing'
import { PARSING_MODEL_CONFIGS } from '@/lib/worksheet-parsing'
import { computeBktFromHistory, type SkillBktResult } from '@/lib/curriculum/bkt'
import type { ProblemResultWithContext } from '@/lib/curriculum/session-planner'
import { api } from '@/lib/queryClient'
import { attachmentKeys } from '@/lib/queryKeys'
import { css } from '../../../../../styled-system/css'
// Combined height of sticky elements above content area
@@ -130,29 +142,98 @@ export function SummaryClient({
// Session mode - single source of truth for session planning decisions
const { data: sessionMode, isLoading: isLoadingSessionMode } = useSessionMode(studentId)
// Player access - pre-flight authorization check for upload capability
const { data: playerAccess } = usePlayerAccess(studentId)
const canUpload = canUploadPhotos(playerAccess)
const queryClient = useQueryClient()
// Fetch attachments for this session
// Type for session attachment from API
interface SessionAttachmentResponse {
id: string
url: string
originalUrl: string | null
corners: Array<{ x: number; y: number }> | null
rotation: 0 | 90 | 180 | 270
// Parsing fields
parsingStatus: string | null
parsedAt: string | null
parsingError: string | null
rawParsingResult: object | null
approvedResult: object | null
confidenceScore: number | null
needsReview: boolean
sessionCreated: boolean
createdSessionId: string | null
// LLM metadata
llm: {
provider: string | null
model: string | null
promptUsed: string | null
rawResponse: string | null
jsonSchema: string | null
imageSource: string | null
attempts: number | null
usage: {
promptTokens: number | null
completionTokens: number | null
totalTokens: number | null
}
} | null
}
// Fetch attachments for this session (includes parsing data)
const { data: attachmentsData } = useQuery({
queryKey: ['session-attachments', studentId, session?.id],
queryFn: async () => {
queryKey: session?.id ? attachmentKeys.session(studentId, session.id) : ['no-session'],
queryFn: async (): Promise<{ attachments: SessionAttachmentResponse[] }> => {
if (!session?.id) return { attachments: [] }
const res = await api(`curriculum/${studentId}/sessions/${session.id}/attachments`)
if (!res.ok) return { attachments: [] }
return res.json() as Promise<{
attachments: Array<{
id: string
url: string
originalUrl: string | null
corners: Array<{ x: number; y: number }> | null
rotation: 0 | 90 | 180 | 270
}>
}>
return res.json() as Promise<{ attachments: SessionAttachmentResponse[] }>
},
enabled: !!session?.id,
})
const attachments = attachmentsData?.attachments ?? []
// Map API response to OfflineAttachment type (cast parsingStatus and rawParsingResult)
const attachments: OfflineAttachment[] = (attachmentsData?.attachments ?? []).map((att) => ({
id: att.id,
url: att.url,
parsingStatus: att.parsingStatus as ParsingStatus | null,
rawParsingResult: att.rawParsingResult as OfflineAttachment['rawParsingResult'],
needsReview: att.needsReview,
sessionCreated: att.sessionCreated,
}))
// Worksheet parsing mutation
const startParsing = useStartParsing(studentId, session?.id ?? '')
// Approve and create session mutation
const approveAndCreateSession = useApproveAndCreateSession(studentId, session?.id ?? '')
// Submit corrections mutation
const submitCorrections = useSubmitCorrections(studentId, session?.id ?? '')
// Re-parse selected problems mutation
const reparseSelected = useReparseSelected(studentId, session?.id ?? '')
// Map attachments to PhotoViewerEditorPhoto type for the viewer
const viewerPhotos: PhotoViewerEditorPhoto[] = (attachmentsData?.attachments ?? []).map(
(att): PhotoViewerEditorPhoto => ({
id: att.id,
url: att.url,
originalUrl: att.originalUrl,
corners: att.corners,
rotation: att.rotation,
parsingStatus: att.parsingStatus as PhotoViewerEditorPhoto['parsingStatus'],
problemCount: (att.rawParsingResult as { problems?: unknown[] } | null)?.problems?.length,
sessionCreated: att.sessionCreated,
rawParsingResult: att.rawParsingResult
? (att.rawParsingResult as NonNullable<PhotoViewerEditorPhoto['rawParsingResult']>)
: null,
llm: att.llm as PhotoViewerEditorPhoto['llm'],
})
)
const hasPhotos = attachments.length > 0
const isInProgress = session?.startedAt && !session?.completedAt
@@ -249,7 +330,7 @@ export function SummaryClient({
// Refresh attachments
queryClient.invalidateQueries({
queryKey: ['session-attachments', studentId, session.id],
queryKey: attachmentKeys.session(studentId, session.id),
})
} catch (err) {
setUploadError(err instanceof Error ? err.message : 'Failed to upload photos')
@@ -359,9 +440,11 @@ export function SummaryClient({
}
// Refresh attachments
queryClient.invalidateQueries({
queryKey: ['session-attachments', studentId, session?.id],
})
if (session?.id) {
queryClient.invalidateQueries({
queryKey: attachmentKeys.session(studentId, session.id),
})
}
} catch (err) {
setUploadError(err instanceof Error ? err.message : 'Failed to update photo')
} finally {
@@ -433,7 +516,7 @@ export function SummaryClient({
// Refresh attachments
queryClient.invalidateQueries({
queryKey: ['session-attachments', studentId, session.id],
queryKey: attachmentKeys.session(studentId, session.id),
})
} catch (err) {
setUploadError(err instanceof Error ? err.message : 'Failed to delete photo')
@@ -595,8 +678,20 @@ export function SummaryClient({
isUploading={isUploading}
uploadError={uploadError}
deletingId={deletingId}
parsingId={
startParsing.isPending
? typeof startParsing.variables === 'string'
? startParsing.variables
: ((startParsing.variables as { attachmentId: string } | undefined)
?.attachmentId ?? null)
: null
}
dragOver={dragOver}
isDark={isDark}
canUpload={canUpload}
studentId={studentId}
studentName={player.name}
classroomId={playerAccess?.classroomId}
onFileSelect={handleFileSelect}
onDrop={handleDrop}
onDragOver={handleDragOver}
@@ -604,6 +699,7 @@ export function SummaryClient({
onOpenCamera={() => setShowCamera(true)}
onOpenViewer={openViewer}
onDeletePhoto={deletePhoto}
onParse={(attachmentId) => startParsing.mutate({ attachmentId })}
/>
{/* All Problems - complete session listing */}
{hasProblems && (
@@ -690,18 +786,64 @@ export function SummaryClient({
{/* Photo Viewer/Editor */}
<PhotoViewerEditor
photos={attachments.map((att) => ({
id: att.id,
url: att.url,
originalUrl: att.originalUrl,
corners: att.corners,
rotation: att.rotation,
}))}
photos={viewerPhotos}
initialIndex={viewerIndex}
initialMode={viewerMode}
isOpen={viewerOpen}
onClose={() => setViewerOpen(false)}
onEditConfirm={handlePhotoEditConfirm}
onParse={(attachmentId, modelConfigId, additionalContext, preservedBoundingBoxes) =>
startParsing.mutate({
attachmentId,
modelConfigId,
additionalContext,
preservedBoundingBoxes,
})
}
parsingPhotoId={
startParsing.isPending
? ((typeof startParsing.variables === 'string'
? startParsing.variables
: (startParsing.variables as { attachmentId: string })?.attachmentId) ?? null)
: null
}
modelConfigs={PARSING_MODEL_CONFIGS}
onApprove={(attachmentId) => approveAndCreateSession.mutate(attachmentId)}
approvingPhotoId={
approveAndCreateSession.isPending
? ((approveAndCreateSession.variables as string) ?? null)
: null
}
onSubmitCorrection={async (attachmentId, correction) => {
await submitCorrections.mutateAsync({
attachmentId,
corrections: [correction],
})
}}
savingProblemNumber={
submitCorrections.isPending
? ((
submitCorrections.variables as {
attachmentId: string
corrections: ProblemCorrection[]
}
)?.corrections?.[0]?.problemNumber ?? null)
: null
}
onReparseSelected={async (
attachmentId,
problemIndices,
boundingBoxes,
additionalContext
) => {
await reparseSelected.mutateAsync({
attachmentId,
problemIndices,
boundingBoxes,
additionalContext,
})
}}
isReparsingSelected={reparseSelected.isPending}
/>
{/* Fullscreen Camera Modal */}

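The invalidation hunks above replace inline ['session-attachments', ...] arrays with a shared attachmentKeys factory. The factory module itself is not part of this diff, so the following is a hedged sketch of its likely shape, following the common TanStack Query key-factory pattern; the exact fields are assumptions:

// Sketch only: the real attachmentKeys module lives elsewhere in the repo.
export const attachmentKeys = {
  all: ['session-attachments'] as const,
  // Keys for one session's attachments, as used in the hunks above:
  // queryClient.invalidateQueries({ queryKey: attachmentKeys.session(studentId, session.id) })
  session: (studentId: string, sessionId: string) =>
    [...attachmentKeys.all, studentId, sessionId] as const,
}

Centralizing keys this way keeps useQuery subscriptions and invalidateQueries calls from drifting apart, which is exactly the class of bug the bare string-array version invites.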
View File

@@ -94,9 +94,13 @@ export default function RemoteCameraPage() {
// Validate session on mount
useEffect(() => {
async function validateSession() {
console.log('[RemoteCameraPage] Validating session:', sessionId)
try {
const response = await fetch(`/api/remote-camera?sessionId=${sessionId}`)
console.log('[RemoteCameraPage] Session validation response:', response.status)
if (response.ok) {
const data = await response.json()
console.log('[RemoteCameraPage] Session valid:', data)
setSessionStatus('connected')
} else if (response.status === 404) {
setSessionStatus('expired')
@@ -107,6 +111,7 @@ export default function RemoteCameraPage() {
setSessionError(data.error || 'Failed to validate session')
}
} catch (err) {
console.error('[RemoteCameraPage] Session validation error:', err)
setSessionStatus('error')
setSessionError('Network error')
}

View File

@@ -301,7 +301,9 @@ export function AbacusDisplayDropdown({
step="1"
value={config.physicalAbacusColumns}
onChange={(e) =>
updateConfig({ physicalAbacusColumns: parseInt(e.target.value, 10) })
updateConfig({
physicalAbacusColumns: parseInt(e.target.value, 10),
})
}
className={css({
flex: 1,

View File

@@ -9,6 +9,9 @@ import { createRoot } from 'react-dom/client'
import { HomeHeroContext } from '@/contexts/HomeHeroContext'
import { type DockAnimationState, useMyAbacus } from '@/contexts/MyAbacusContext'
import { useTheme } from '@/contexts/ThemeContext'
import { DockedVisionFeed } from '@/components/vision/DockedVisionFeed'
import { VisionIndicator } from '@/components/vision/VisionIndicator'
import { VisionSetupModal } from '@/components/vision/VisionSetupModal'
import { css } from '../../styled-system/css'
/**
@@ -85,6 +88,8 @@ export function MyAbacus() {
clearDockRequest,
abacusValue: contextAbacusValue,
setDockedValue,
visionConfig,
isVisionSetupComplete,
} = useMyAbacus()
const appConfig = useAbacusConfig()
const pathname = usePathname()
@@ -493,6 +498,9 @@ export function MyAbacus() {
position: 'relative',
})}
>
{/* Vision indicator - positioned at top-right, before undock button */}
<VisionIndicator size="small" position="top-left" />
{/* Undock button - positioned at top-right of dock container */}
<button
data-action="undock-abacus"
@@ -536,44 +544,67 @@ export function MyAbacus() {
data-element="abacus-display"
className={css({
filter: 'drop-shadow(0 4px 12px rgba(251, 191, 36, 0.2))',
width: '100%',
height: '100%',
})}
>
<AbacusReact
key="docked"
value={dock.value ?? abacusValue}
defaultValue={dock.defaultValue}
columns={dock.columns ?? 5}
scaleFactor={effectiveScaleFactor}
beadShape={appConfig.beadShape}
showNumbers={dock.showNumbers ?? true}
interactive={dock.interactive ?? true}
animated={dock.animated ?? true}
customStyles={structuralStyles}
onValueChange={(newValue: number | bigint) => {
const numValue = Number(newValue)
// Update the appropriate state based on dock mode
// (unless dock provides its own value prop for full control)
if (dock.value === undefined) {
// When docked by user, update context value; otherwise update local/hero
if (isDockedByUser) {
setDockedValue(numValue)
} else {
setAbacusValue(numValue)
{/* Show vision feed when enabled, otherwise show digital abacus */}
{visionConfig.enabled && isVisionSetupComplete ? (
<DockedVisionFeed
columnCount={dock.columns ?? 5}
onValueDetected={(value) => {
// Update the appropriate state based on dock mode
if (dock.value === undefined) {
if (isDockedByUser) {
setDockedValue(value)
} else {
setAbacusValue(value)
}
}
}
// Also call dock's callback if provided
if (dock.onValueChange) {
dock.onValueChange(numValue)
}
}}
enhanced3d="realistic"
material3d={{
heavenBeads: 'glossy',
earthBeads: 'satin',
lighting: 'dramatic',
woodGrain: true,
}}
/>
// Also call dock's callback if provided
if (dock.onValueChange) {
dock.onValueChange(value)
}
}}
/>
) : (
<AbacusReact
key="docked"
value={dock.value ?? abacusValue}
defaultValue={dock.defaultValue}
columns={dock.columns ?? 5}
scaleFactor={effectiveScaleFactor}
beadShape={appConfig.beadShape}
showNumbers={dock.showNumbers ?? true}
interactive={dock.interactive ?? true}
animated={dock.animated ?? true}
customStyles={structuralStyles}
onValueChange={(newValue: number | bigint) => {
const numValue = Number(newValue)
// Update the appropriate state based on dock mode
// (unless dock provides its own value prop for full control)
if (dock.value === undefined) {
// When docked by user, update context value; otherwise update local/hero
if (isDockedByUser) {
setDockedValue(numValue)
} else {
setAbacusValue(numValue)
}
}
// Also call dock's callback if provided
if (dock.onValueChange) {
dock.onValueChange(numValue)
}
}}
enhanced3d="realistic"
material3d={{
heavenBeads: 'glossy',
earthBeads: 'satin',
lighting: 'dramatic',
woodGrain: true,
}}
/>
)}
</div>
</div>,
dock.element
@@ -820,6 +851,9 @@ export function MyAbacus() {
`,
}}
/>
{/* Vision setup modal - controlled by context state */}
<VisionSetupModal />
</>
)
}

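Only visionConfig and isVisionSetupComplete are visible in the hunk above; the inner fields are inferred from how DockedVisionFeed reads them later in this changeset, so treat the names below as a sketch rather than the actual context definition:

import type { CalibrationGrid } from '@/types/vision'

// Sketch of the vision slice of MyAbacusContext, inferred from this changeset.
interface VisionConfig {
  enabled: boolean
  activeCameraSource: 'local' | 'phone'
  cameraDeviceId: string | null // local camera only
  remoteCameraSessionId: string | null // phone camera only
  calibration: CalibrationGrid | null // auto-updated from ArUco markers
}

interface MyAbacusVisionSlice {
  visionConfig: VisionConfig
  isVisionSetupComplete: boolean
  setVisionEnabled: (enabled: boolean) => void
  setVisionCalibration: (grid: CalibrationGrid) => void
  setDockedValue: (value: number) => void
  emitVisionFrame: (frame: {
    imageData: string
    detectedValue: number | null
    confidence: number
  }) => void
}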
View File

@@ -1,26 +1,28 @@
'use client'
import * as Dialog from '@radix-ui/react-dialog'
import { useRouter } from 'next/navigation'
import { useCallback, useEffect, useLayoutEffect, useRef, useState, type ReactElement } from 'react'
import { useMutation } from '@tanstack/react-query'
import { useRouter } from 'next/navigation'
import { type ReactElement, useCallback, useEffect, useLayoutEffect, useRef, useState } from 'react'
import { useToast } from '@/components/common/ToastContext'
import { Z_INDEX } from '@/constants/zIndex'
import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { useTheme } from '@/contexts/ThemeContext'
import { useToast } from '@/components/common/ToastContext'
import type { ActiveSessionInfo } from '@/hooks/useClassroom'
import { useSessionObserver } from '@/hooks/useSessionObserver'
import { api } from '@/lib/queryClient'
import { css } from '../../../styled-system/css'
import { AbacusDock } from '../AbacusDock'
import { SessionShareButton } from './SessionShareButton'
import { LiveResultsPanel } from '../practice/LiveResultsPanel'
import { LiveSessionReportInline } from '../practice/LiveSessionReportModal'
import { MobileResultsSummary } from '../practice/MobileResultsSummary'
import { ObserverTransitionView } from '../practice/ObserverTransitionView'
import { PracticeFeedback } from '../practice/PracticeFeedback'
import { PurposeBadge } from '../practice/PurposeBadge'
import { SessionProgressIndicator } from '../practice/SessionProgressIndicator'
import { VerticalProblem } from '../practice/VerticalProblem'
import { ObserverVisionFeed } from '../vision/ObserverVisionFeed'
import { SessionShareButton } from './SessionShareButton'
interface SessionObserverModalProps {
/** Whether the modal is open */
@@ -106,14 +108,15 @@ export function SessionObserverModal({
data-component="session-observer-modal"
className={css({
position: 'fixed',
top: '50%',
left: '50%',
transform: 'translate(-50%, -50%)',
width: '90vw',
maxWidth: '800px',
maxHeight: '85vh',
top: { base: 0, md: '50%' },
left: { base: 0, md: '50%' },
transform: { base: 'none', md: 'translate(-50%, -50%)' },
width: { base: '100vw', md: '95vw', lg: '90vw' },
maxWidth: { base: 'none', md: '900px', lg: '1000px' },
height: { base: '100vh', md: 'auto' },
maxHeight: { base: '100vh', md: '90vh' },
backgroundColor: isDark ? 'gray.900' : 'white',
borderRadius: '16px',
borderRadius: { base: 0, md: '16px' },
boxShadow: '0 20px 60px rgba(0, 0, 0, 0.3)',
zIndex: Z_INDEX.NESTED_MODAL,
overflow: 'hidden',
@@ -162,6 +165,7 @@ export function SessionObserverView({
state,
results,
transitionState,
visionFrame,
isConnected,
isObserving,
error,
@@ -326,29 +330,29 @@ export function SessionObserverView({
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
padding: '16px 20px',
padding: { base: '10px 16px', md: '16px 20px' },
borderBottom: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
gap: '12px',
gap: { base: '8px', md: '12px' },
})}
>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: '12px',
gap: { base: '8px', md: '12px' },
minWidth: 0,
})}
>
<span
className={css({
width: '40px',
height: '40px',
width: { base: '32px', md: '40px' },
height: { base: '32px', md: '40px' },
borderRadius: '50%',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
fontSize: '1.25rem',
fontSize: { base: '1rem', md: '1.25rem' },
flexShrink: 0,
})}
style={{ backgroundColor: student.color }}
@@ -460,7 +464,7 @@ export function SessionObserverView({
<div
data-element="progress-indicator"
className={css({
padding: '0 20px 12px',
padding: { base: '0 16px 8px', md: '0 20px 12px' },
borderBottom: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
})}
@@ -481,13 +485,13 @@ export function SessionObserverView({
<div
className={css({
flex: 1,
padding: variant === 'page' ? '28px' : '24px',
padding: variant === 'page' ? { base: '12px', md: '28px' } : { base: '12px', md: '24px' },
overflowY: 'auto',
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
justifyContent: 'center',
gap: '20px',
gap: { base: '12px', md: '20px' },
backgroundColor: variant === 'page' ? (isDark ? 'gray.900' : 'white') : undefined,
})}
>
@@ -701,16 +705,19 @@ export function SessionObserverView({
data-element="observer-main-content"
className={css({
display: 'flex',
alignItems: 'flex-start',
gap: '24px',
flexDirection: { base: 'column', lg: 'row' },
alignItems: { base: 'center', lg: 'flex-start' },
gap: { base: '16px', md: '24px' },
width: '100%',
justifyContent: 'center',
})}
>
{/* Live results panel - left side */}
{/* Live results panel - hidden on small/medium, shown on large */}
<div
data-element="results-panel-desktop"
className={css({
width: '220px',
display: { base: 'none', lg: 'block' },
width: '200px',
flexShrink: 0,
})}
>
@@ -722,30 +729,35 @@ export function SessionObserverView({
/>
</div>
{/* Problem area - center/right */}
{/* Problem area - center */}
<div
data-element="observer-content"
className={css({
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
gap: '16px',
gap: { base: '8px', md: '16px' },
width: '100%',
maxWidth: { base: '100%', md: '500px' },
})}
>
{/* Purpose badge with tooltip - matches student's view */}
<PurposeBadge purpose={state.purpose} complexity={state.complexity} />
{/* Problem container with absolutely positioned AbacusDock */}
{/* Problem container with AbacusDock - responsive flex layout */}
<div
data-element="problem-with-dock"
className={css({
position: 'relative',
display: 'flex',
alignItems: 'flex-start',
flexDirection: { base: 'column', sm: 'row' },
alignItems: { base: 'center', sm: 'flex-start' },
justifyContent: 'center',
gap: { base: '12px', sm: '24px' },
width: '100%',
})}
>
{/* Problem - ref for height measurement */}
<div ref={problemRef}>
<div ref={problemRef} className={css({ flexShrink: 0 })}>
<VerticalProblem
terms={state.currentProblem.terms}
userAnswer={state.studentAnswer}
@@ -756,24 +768,36 @@ export function SessionObserverView({
/>
</div>
{/* AbacusDock - positioned exactly like ActiveSession */}
{state.phase === 'problem' && (problemHeight ?? 0) > 0 && (
<AbacusDock
id="teacher-observer-dock"
columns={abacusColumns}
interactive={true}
showNumbers={false}
animated={true}
onValueChange={handleTeacherAbacusChange}
{/* Vision feed or AbacusDock - flex layout instead of absolute */}
{state.phase === 'problem' && (
<div
data-element="abacus-container"
className={css({
position: 'absolute',
left: '100%',
top: 0,
width: '100%',
marginLeft: '1.5rem',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
width: { base: '140px', sm: '120px', md: '140px' },
height: { base: '160px', sm: 'auto' },
minHeight: { sm: '160px' },
flexShrink: 0,
})}
style={{ height: problemHeight ?? undefined }}
/>
style={{ height: problemHeight ? `${problemHeight}px` : undefined }}
>
{/* Show vision feed if available, otherwise show teacher's abacus dock */}
{visionFrame ? (
<ObserverVisionFeed frame={visionFrame} />
) : (
<AbacusDock
id="teacher-observer-dock"
columns={abacusColumns}
interactive={true}
showNumbers={false}
animated={true}
onValueChange={handleTeacherAbacusChange}
style={{ height: '100%', width: '100%' }}
/>
)}
</div>
)}
</div>
@@ -784,6 +808,23 @@ export function SessionObserverView({
correctAnswer={state.currentProblem.answer}
/>
)}
{/* Mobile results summary - shown on small/medium, hidden on large */}
<div
data-element="results-panel-mobile"
className={css({
display: { base: 'flex', lg: 'none' },
width: '100%',
justifyContent: 'center',
})}
>
<MobileResultsSummary
results={results}
totalProblems={state.totalProblems}
isDark={isDark}
onExpand={() => setShowFullReport(true)}
/>
</div>
</div>
</div>
)}
@@ -803,7 +844,7 @@ export function SessionObserverView({
{/* Footer with connection status and controls */}
<div
className={css({
padding: '12px 20px',
padding: { base: '8px 12px', md: '12px 20px' },
borderTop: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
display: 'flex',

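Most hunks in this file follow one pattern: fixed values become Panda CSS responsive objects, where base applies at all widths and md/lg override upward. A minimal illustration, using values from the hunks above:

import { css } from '../../../styled-system/css'

// base applies everywhere; md and lg take over at those breakpoints.
// This padding is 10px/16px on phones and 16px/20px from md up, and the
// element stays hidden entirely until the lg breakpoint.
const responsiveExample = css({
  padding: { base: '10px 16px', md: '16px 20px' },
  display: { base: 'none', lg: 'block' },
})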
View File

@@ -7,8 +7,6 @@ import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { useTheme } from '@/contexts/ThemeContext'
import {
getCurrentProblemInfo,
isInRetryEpoch,
needsRetryTransition,
type ProblemSlot,
type SessionHealth,
type SessionPart,
@@ -50,8 +48,6 @@ import { PracticeHelpOverlay } from './PracticeHelpOverlay'
import { ProblemDebugPanel } from './ProblemDebugPanel'
import { VerticalProblem } from './VerticalProblem'
import type { ReceivedAbacusControl } from '@/hooks/useSessionBroadcast'
import { AbacusVisionBridge } from '../vision'
import { Z_INDEX } from '@/constants/zIndex'
/**
* Timing data for the current problem attempt
@@ -995,9 +991,6 @@ export function ActiveSession({
// Track previous epoch to detect epoch changes
const prevEpochRef = useRef<number>(0)
// Vision mode state - for physical abacus camera detection
const [isVisionEnabled, setIsVisionEnabled] = useState(false)
// Browse mode state - isBrowseMode is controlled via props
// browseIndex can be controlled (browseIndexProp + onBrowseIndexChange) or internal
const [internalBrowseIndex, setInternalBrowseIndex] = useState(0)
@@ -1323,17 +1316,6 @@ export function ActiveSession({
[setAnswer]
)
// Handle value detected from vision (physical abacus camera)
const handleVisionValueDetected = useCallback(
(value: number) => {
// Update the docked abacus to show the detected value
setDockedValue(value)
// Also set the answer input
setAnswer(String(value))
},
[setDockedValue, setAnswer]
)
// Handle submit
const handleSubmit = useCallback(async () => {
// Allow submitting from inputting, awaitingDisambiguation, or helpMode
@@ -1996,56 +1978,22 @@ export function ActiveSession({
{/* Abacus dock - positioned absolutely so it doesn't affect problem centering */}
{/* Width 100% matches problem width, height matches problem height */}
{currentPart.type === 'abacus' && !showHelpOverlay && (problemHeight ?? 0) > 0 && (
<>
<AbacusDock
id="practice-abacus"
columns={calculateAbacusColumns(attempt.problem.terms)}
interactive={true}
showNumbers={false}
animated={true}
onValueChange={handleAbacusDockValueChange}
className={css({
position: 'absolute',
left: '100%',
top: 0,
width: '100%',
marginLeft: '1.5rem',
})}
style={{ height: problemHeight }}
/>
{/* Vision mode toggle button */}
<button
type="button"
data-action="toggle-vision"
data-enabled={isVisionEnabled}
onClick={() => setIsVisionEnabled((prev) => !prev)}
className={css({
position: 'absolute',
left: '100%',
bottom: 0,
marginLeft: '1.5rem',
px: 2,
py: 1,
display: 'flex',
alignItems: 'center',
gap: 1,
fontSize: 'xs',
bg: isVisionEnabled ? 'green.600' : isDark ? 'gray.700' : 'gray.200',
color: isVisionEnabled ? 'white' : isDark ? 'gray.300' : 'gray.700',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
transition: 'all 0.2s',
_hover: {
bg: isVisionEnabled ? 'green.500' : isDark ? 'gray.600' : 'gray.300',
},
})}
title="Use camera to detect physical abacus"
>
<span>📷</span>
<span>Vision</span>
</button>
</>
<AbacusDock
id="practice-abacus"
columns={calculateAbacusColumns(attempt.problem.terms)}
interactive={true}
showNumbers={false}
animated={true}
onValueChange={handleAbacusDockValueChange}
className={css({
position: 'absolute',
left: '100%',
top: 0,
width: '100%',
marginLeft: '1.5rem',
})}
style={{ height: problemHeight }}
/>
)}
</animated.div>
</animated.div>
@@ -2130,27 +2078,6 @@ export function ActiveSession({
/>
)}
{/* Abacus Vision Bridge - floating camera panel for physical abacus detection */}
{isVisionEnabled && currentPart.type === 'abacus' && attempt && (
<div
data-component="vision-panel"
className={css({
position: 'fixed',
top: '200px', // Below main nav (80px) + sub nav (~56px) + mini sub-nav (~60px)
right: '1rem',
zIndex: Z_INDEX.DROPDOWN, // Above content but below modals
boxShadow: 'xl',
borderRadius: 'xl',
})}
>
<AbacusVisionBridge
columnCount={abacusDisplayConfig.physicalAbacusColumns}
onValueDetected={handleVisionValueDetected}
onClose={() => setIsVisionEnabled(false)}
/>
</div>
)}
{/* Session Paused Modal - rendered here as single source of truth */}
<SessionPausedModal
isOpen={isPaused}

View File

@@ -0,0 +1,159 @@
'use client'
import { useMemo } from 'react'
import type { ObservedResult } from '@/hooks/useSessionObserver'
import { css } from '../../../styled-system/css'
interface MobileResultsSummaryProps {
/** Accumulated results from the session */
results: ObservedResult[]
/** Total problems in the session */
totalProblems: number
/** Whether dark mode */
isDark: boolean
/** Callback to expand to full report view */
onExpand: () => void
}
/**
* Compact results summary for mobile screens
*
* Shows progress, accuracy, and incorrect count in a horizontal chip layout.
* Tapping expands to full report view.
*/
export function MobileResultsSummary({
results,
totalProblems,
isDark,
onExpand,
}: MobileResultsSummaryProps) {
// Compute stats
const stats = useMemo(() => {
const correct = results.filter((r) => r.isCorrect).length
const incorrect = results.filter((r) => !r.isCorrect).length
const completed = results.length
const accuracy = completed > 0 ? correct / completed : 0
return { correct, incorrect, completed, accuracy }
}, [results])
// No results yet - show minimal placeholder
if (results.length === 0) {
return (
<div
data-component="mobile-results-summary"
data-state="empty"
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
padding: '8px 16px',
borderRadius: '20px',
backgroundColor: isDark ? 'gray.800' : 'gray.100',
fontSize: '0.8125rem',
color: isDark ? 'gray.500' : 'gray.400',
})}
>
Waiting for results...
</div>
)
}
return (
<button
type="button"
data-component="mobile-results-summary"
onClick={onExpand}
className={css({
display: 'flex',
alignItems: 'center',
gap: '12px',
padding: '8px 16px',
borderRadius: '20px',
backgroundColor: isDark ? 'gray.800' : 'gray.100',
border: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
cursor: 'pointer',
transition: 'background-color 0.15s ease',
_hover: {
backgroundColor: isDark ? 'gray.700' : 'gray.200',
},
})}
>
{/* Progress */}
<span
className={css({
fontSize: '0.875rem',
fontWeight: 'bold',
color: isDark ? 'gray.200' : 'gray.700',
})}
>
{stats.completed}/{totalProblems}
</span>
{/* Divider */}
<span
className={css({
width: '1px',
height: '16px',
backgroundColor: isDark ? 'gray.600' : 'gray.300',
})}
/>
{/* Accuracy */}
<span
className={css({
fontSize: '0.875rem',
fontWeight: 'bold',
color:
stats.accuracy >= 0.8
? isDark
? 'green.400'
: 'green.600'
: stats.accuracy >= 0.6
? isDark
? 'yellow.400'
: 'yellow.600'
: isDark
? 'red.400'
: 'red.600',
})}
>
{Math.round(stats.accuracy * 100)}%
</span>
{/* Incorrect count badge (only if there are incorrect) */}
{stats.incorrect > 0 && (
<span
className={css({
display: 'flex',
alignItems: 'center',
gap: '4px',
padding: '2px 8px',
borderRadius: '9999px',
backgroundColor: isDark ? 'red.900/50' : 'red.100',
fontSize: '0.75rem',
fontWeight: 'bold',
color: isDark ? 'red.300' : 'red.700',
})}
>
<span>✗</span>
<span>{stats.incorrect}</span>
</span>
)}
{/* View report arrow */}
<span
className={css({
marginLeft: 'auto',
fontSize: '0.75rem',
color: isDark ? 'blue.400' : 'blue.600',
fontWeight: 'medium',
})}
>
Report
</span>
</button>
)
}
export default MobileResultsSummary

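Only isCorrect is read from each ObservedResult here, so the stats reduce to a small pure function. A sketch of the same computation, useful if the summary ever needs testing outside React (the minimal result shape is an assumption; the real ObservedResult likely carries more fields):

// Minimal shape this component actually depends on.
type ObservedResultLike = { isCorrect: boolean }

function computeSummaryStats(results: ObservedResultLike[]) {
  const correct = results.filter((r) => r.isCorrect).length
  const completed = results.length
  return {
    correct,
    incorrect: completed - correct,
    completed,
    // Accuracy drives the green (>= 0.8) / yellow (>= 0.6) / red coloring above.
    accuracy: completed > 0 ? correct / completed : 0,
  }
}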
View File

@@ -1,12 +1,23 @@
'use client'
import { useCallback, useState } from 'react'
import type { RefObject } from 'react'
import { useMutation } from '@tanstack/react-query'
import type { ParsingStatus } from '@/db/schema/practice-attachments'
import type { WorksheetParsingResult } from '@/lib/worksheet-parsing'
import { api } from '@/lib/queryClient'
import { css } from '../../../styled-system/css'
import { ParsedProblemsList } from '../worksheet-parsing'
export interface OfflineAttachment {
id: string
url: string
filename?: string
// Parsing fields
parsingStatus?: ParsingStatus | null
rawParsingResult?: WorksheetParsingResult | null
needsReview?: boolean
sessionCreated?: boolean
}
export interface OfflineWorkSectionProps {
@@ -20,10 +31,20 @@ export interface OfflineWorkSectionProps {
uploadError: string | null
/** ID of photo being deleted */
deletingId: string | null
/** ID of photo currently being parsed */
parsingId: string | null
/** Whether drag is over the drop zone */
dragOver: boolean
/** Dark mode */
isDark: boolean
/** Whether the user can upload photos (pre-flight auth check) */
canUpload?: boolean
/** Student ID for entry prompt */
studentId?: string
/** Student name for remediation message */
studentName?: string
/** Classroom ID for entry prompt (when canUpload is false) */
classroomId?: string
/** Handlers */
onFileSelect: (e: React.ChangeEvent<HTMLInputElement>) => void
onDrop: (e: React.DragEvent) => void
@@ -33,6 +54,8 @@ export interface OfflineWorkSectionProps {
/** Open photo viewer/editor at index with specified mode */
onOpenViewer: (index: number, mode: 'view' | 'edit') => void
onDeletePhoto: (id: string) => void
/** Start parsing a worksheet photo */
onParse?: (id: string) => void
}
/**
@@ -50,8 +73,13 @@ export function OfflineWorkSection({
isUploading,
uploadError,
deletingId,
parsingId,
dragOver,
isDark,
canUpload = true,
studentId,
studentName,
classroomId,
onFileSelect,
onDrop,
onDragOver,
@@ -59,10 +87,60 @@ export function OfflineWorkSection({
onOpenCamera,
onOpenViewer,
onDeletePhoto,
onParse,
}: OfflineWorkSectionProps) {
const photoCount = attachments.length
// Show add tile unless we have 8+ photos (max reasonable gallery size)
const showAddTile = photoCount < 8
// Also only show if user can upload
const showAddTile = photoCount < 8 && canUpload
// Show remediation when user can't upload but is a teacher with enrolled student
const showTeacherRemediation = !canUpload && classroomId && studentId
// Show generic access denied message when canUpload is false for unknown reasons
// (catches bugs like parent-child link not being recognized)
const showGenericAccessDenied = !canUpload && !classroomId
// Entry prompt state (for teachers who need student to enter classroom)
const [promptSent, setPromptSent] = useState(false)
// Mutation for sending entry prompt
const sendEntryPrompt = useMutation({
mutationFn: async (playerId: string) => {
if (!classroomId) throw new Error('No classroom ID')
const response = await api(`classrooms/${classroomId}/entry-prompts`, {
method: 'POST',
body: JSON.stringify({ playerIds: [playerId] }),
})
if (!response.ok) {
const data = await response.json()
throw new Error(data.error || 'Failed to send prompt')
}
return response.json()
},
onSuccess: (data) => {
if (data.created > 0) {
setPromptSent(true)
}
},
})
const handleSendEntryPrompt = useCallback(() => {
if (studentId) {
sendEntryPrompt.mutate(studentId)
}
}, [sendEntryPrompt, studentId])
// Find all attachments with parsing results
const parsedAttachments = attachments.filter(
(att) =>
att.rawParsingResult?.problems &&
att.rawParsingResult.problems.length > 0 &&
(att.parsingStatus === 'needs_review' || att.parsingStatus === 'approved')
)
// Track which parsed result is currently expanded (default to first one)
const [expandedResultId, setExpandedResultId] = useState<string | null>(
parsedAttachments[0]?.id ?? null
)
return (
<div
@@ -294,6 +372,115 @@ export function OfflineWorkSection({
>
{index + 1}
</div>
{/* Parse button - show if not parsed yet OR if failed (to allow retry) */}
{onParse &&
(!att.parsingStatus || att.parsingStatus === 'failed') &&
!att.sessionCreated && (
<button
type="button"
data-action="parse-worksheet"
onClick={(e) => {
e.stopPropagation()
onParse(att.id)
}}
disabled={parsingId === att.id}
className={css({
position: 'absolute',
bottom: '0.5rem',
right: '0.5rem',
height: '24px',
paddingX: '0.5rem',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
gap: '0.25rem',
backgroundColor: att.parsingStatus === 'failed' ? 'orange.500' : 'blue.500',
color: 'white',
borderRadius: 'full',
border: 'none',
cursor: 'pointer',
fontSize: '0.6875rem',
fontWeight: '600',
transition: 'background-color 0.2s',
_hover: {
backgroundColor: att.parsingStatus === 'failed' ? 'orange.600' : 'blue.600',
},
_disabled: {
backgroundColor: 'gray.400',
cursor: 'wait',
},
})}
aria-label={att.parsingStatus === 'failed' ? 'Retry parsing' : 'Parse worksheet'}
>
{parsingId === att.id ? '⏳' : att.parsingStatus === 'failed' ? '🔄' : '🔍'}{' '}
{att.parsingStatus === 'failed' ? 'Retry' : 'Parse'}
</button>
)}
{/* Parsing status badge - don't show for 'failed' since retry button is shown instead */}
{att.parsingStatus && att.parsingStatus !== 'failed' && (
<div
data-element="parsing-status"
className={css({
position: 'absolute',
bottom: '0.5rem',
right: '0.5rem',
height: '24px',
paddingX: '0.5rem',
display: 'flex',
alignItems: 'center',
gap: '0.25rem',
borderRadius: 'full',
fontSize: '0.6875rem',
fontWeight: '600',
backgroundColor:
att.parsingStatus === 'processing'
? 'blue.500'
: att.parsingStatus === 'needs_review'
? 'yellow.500'
: att.parsingStatus === 'approved'
? 'green.500'
: 'gray.500',
color: att.parsingStatus === 'needs_review' ? 'yellow.900' : 'white',
})}
>
{att.parsingStatus === 'processing' && '⏳'}
{att.parsingStatus === 'needs_review' && '⚠️'}
{att.parsingStatus === 'approved' && '✓'}
{att.parsingStatus === 'processing'
? 'Analyzing...'
: att.parsingStatus === 'needs_review'
? `${att.rawParsingResult?.problems?.length ?? '?'} problems`
: att.parsingStatus === 'approved'
? `${att.rawParsingResult?.problems?.length ?? '?'} problems`
: att.parsingStatus}
</div>
)}
{/* Session created indicator */}
{att.sessionCreated && (
<div
data-element="session-created"
className={css({
position: 'absolute',
bottom: '0.5rem',
right: '0.5rem',
height: '24px',
paddingX: '0.5rem',
display: 'flex',
alignItems: 'center',
gap: '0.25rem',
borderRadius: 'full',
fontSize: '0.6875rem',
fontWeight: '600',
backgroundColor: 'green.600',
color: 'white',
})}
>
Session Created
</div>
)}
</div>
))}
@@ -417,24 +604,255 @@ export function OfflineWorkSection({
)}
</div>
{/* Coming Soon footer - subtle, integrated */}
<div
data-element="coming-soon-hint"
className={css({
marginTop: '1rem',
paddingTop: '0.75rem',
borderTop: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
display: 'flex',
alignItems: 'center',
gap: '0.5rem',
color: isDark ? 'gray.500' : 'gray.500',
fontSize: '0.8125rem',
})}
>
<span>🔮</span>
<span>Coming soon: Auto-analyze worksheets to track progress</span>
</div>
{/* Remediation banner - shown when teacher can't upload because student isn't present */}
{showTeacherRemediation && (
<div
data-element="upload-remediation"
className={css({
marginTop: '1rem',
padding: '1rem',
backgroundColor: isDark ? 'orange.900/30' : 'orange.50',
border: '2px solid',
borderColor: isDark ? 'orange.700' : 'orange.300',
borderRadius: '12px',
})}
>
{!promptSent ? (
<>
<h4
className={css({
fontSize: '0.9375rem',
fontWeight: '600',
color: isDark ? 'orange.300' : 'orange.700',
marginBottom: '0.5rem',
})}
>
{studentName || 'This student'} is not in your classroom
</h4>
<p
className={css({
fontSize: '0.875rem',
color: isDark ? 'gray.300' : 'gray.600',
marginBottom: '1rem',
lineHeight: '1.5',
})}
>
To upload photos for {studentName || 'this student'}, they need to enter your
classroom first. Send a notification to their parent to have them join.
</p>
<button
type="button"
onClick={handleSendEntryPrompt}
disabled={sendEntryPrompt.isPending}
data-action="send-entry-prompt"
className={css({
padding: '0.625rem 1rem',
fontSize: '0.875rem',
fontWeight: '600',
color: 'white',
backgroundColor: isDark ? 'orange.600' : 'orange.500',
borderRadius: '8px',
border: 'none',
cursor: sendEntryPrompt.isPending ? 'wait' : 'pointer',
opacity: sendEntryPrompt.isPending ? 0.7 : 1,
transition: 'all 0.15s ease',
_hover: {
backgroundColor: isDark ? 'orange.500' : 'orange.600',
},
_disabled: {
cursor: 'wait',
opacity: 0.7,
},
})}
>
{sendEntryPrompt.isPending ? 'Sending...' : 'Send Entry Prompt'}
</button>
</>
) : (
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: '0.5rem',
})}
>
<span
className={css({
fontSize: '1.25rem',
})}
>
✅
</span>
<p
className={css({
fontSize: '0.9375rem',
fontWeight: '500',
color: isDark ? 'green.300' : 'green.700',
})}
>
Entry prompt sent to {studentName || 'the student'}&apos;s parent
</p>
</div>
)}
</div>
)}
{/* Generic access denied banner - shown when upload is blocked for unknown reasons */}
{showGenericAccessDenied && (
<div
data-element="upload-access-denied"
className={css({
marginTop: '1rem',
padding: '1rem',
backgroundColor: isDark ? 'red.900/30' : 'red.50',
border: '2px solid',
borderColor: isDark ? 'red.700' : 'red.300',
borderRadius: '12px',
})}
>
<h4
className={css({
fontSize: '0.9375rem',
fontWeight: '600',
color: isDark ? 'red.300' : 'red.700',
marginBottom: '0.5rem',
})}
>
Unable to upload photos
</h4>
<p
className={css({
fontSize: '0.875rem',
color: isDark ? 'gray.300' : 'gray.600',
lineHeight: '1.5',
})}
>
Your account doesn&apos;t have permission to upload photos for this student. If you
believe this is an error, try refreshing the page or logging out and back in.
</p>
</div>
)}
{/* Parsing hint footer - only show if no parsed results yet */}
{onParse && parsedAttachments.length === 0 && (
<div
data-element="parsing-hint"
className={css({
marginTop: '1rem',
paddingTop: '0.75rem',
borderTop: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
display: 'flex',
alignItems: 'center',
gap: '0.5rem',
color: isDark ? 'gray.400' : 'gray.600',
fontSize: '0.8125rem',
})}
>
<span>💡</span>
<span>
Click &ldquo;Parse&rdquo; on any photo to auto-extract problems from worksheets
</span>
</div>
)}
{/* Parsed Results Section - show when any photo has parsing results */}
{parsedAttachments.length > 0 && (
<div
data-element="parsed-results-section"
className={css({
marginTop: '1rem',
paddingTop: '1rem',
borderTop: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
})}
>
{/* Section header with photo selector if multiple parsed photos */}
<div
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
marginBottom: '0.75rem',
})}
>
<h4
className={css({
fontSize: '0.875rem',
fontWeight: 'bold',
color: isDark ? 'white' : 'gray.800',
display: 'flex',
alignItems: 'center',
gap: '0.5rem',
})}
>
<span>📊</span>
Extracted Problems
</h4>
{/* Photo selector tabs when multiple parsed photos */}
{parsedAttachments.length > 1 && (
<div
className={css({
display: 'flex',
gap: '0.25rem',
})}
>
{parsedAttachments.map((att, index) => {
const photoIndex = attachments.findIndex((a) => a.id === att.id)
return (
<button
key={att.id}
type="button"
onClick={() => setExpandedResultId(att.id)}
className={css({
px: 2,
py: 1,
fontSize: '0.75rem',
fontWeight: '500',
borderRadius: 'md',
border: 'none',
cursor: 'pointer',
transition: 'all 0.15s',
backgroundColor:
expandedResultId === att.id
? isDark
? 'blue.600'
: 'blue.500'
: isDark
? 'gray.700'
: 'gray.100',
color:
expandedResultId === att.id ? 'white' : isDark ? 'gray.300' : 'gray.700',
_hover: {
backgroundColor:
expandedResultId === att.id
? isDark
? 'blue.500'
: 'blue.600'
: isDark
? 'gray.600'
: 'gray.200',
},
})}
>
Photo {photoIndex + 1}
</button>
)
})}
</div>
)}
</div>
{/* Show the selected parsed result */}
{parsedAttachments.map((att) => {
if (att.id !== expandedResultId) return null
if (!att.rawParsingResult) return null
return <ParsedProblemsList key={att.id} result={att.rawParsingResult} isDark={isDark} />
})}
</div>
)}
</div>
)
}

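The parse button, status badge, and session-created badge above all key off the same small lifecycle. As a hedged summary (the authoritative union lives in @/db/schema/practice-attachments and may define more states):

// States this component handles; a sketch, not the schema definition.
type ParsingStatusLike = 'processing' | 'needs_review' | 'approved' | 'failed'

// UI mapping used above:
// - no status or 'failed' -> Parse / Retry button (unless sessionCreated)
// - 'processing'          -> "Analyzing..." badge
// - 'needs_review'        -> yellow "N problems" badge, review required
// - 'approved'            -> green "N problems" badge
// - sessionCreated        -> "Session Created" badge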
File diff suppressed because it is too large

View File

@@ -22,6 +22,7 @@ export {
useIsTouchDevice,
} from './hooks/useDeviceDetection'
export { LiveResultsPanel } from './LiveResultsPanel'
export { MobileResultsSummary } from './MobileResultsSummary'
export {
LiveSessionReportInline,
LiveSessionReportModal,

File diff suppressed because it is too large

View File

@@ -0,0 +1,697 @@
'use client'
import { useCallback, useEffect, useRef, useState } from 'react'
import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { useRemoteCameraDesktop } from '@/hooks/useRemoteCameraDesktop'
import {
cleanupArucoDetector,
detectMarkers,
initArucoDetector,
isArucoAvailable,
loadAruco,
} from '@/lib/vision/arucoDetection'
import { useFrameStability } from '@/hooks/useFrameStability'
import { VisionCameraFeed } from './VisionCameraFeed'
import { css } from '../../../styled-system/css'
import type { CalibrationGrid } from '@/types/vision'
/**
* Feature flag: Enable automatic abacus value detection from video feed.
*
* When enabled:
* - Runs CV-based bead detection on video frames
* - Shows detected value overlay
* - Calls setDockedValue and onValueDetected with detected values
*
* When disabled:
* - Only shows the video feed (no detection)
* - Hides the detection overlay
* - Does not interfere with student's manual input
*
* Set to true when ready to work on improving detection accuracy.
*/
const ENABLE_AUTO_DETECTION = false
// Only import detection modules when auto-detection is enabled
// This ensures the detection code is tree-shaken when disabled
let analyzeColumns: typeof import('@/lib/vision/beadDetector').analyzeColumns
let analysesToDigits: typeof import('@/lib/vision/beadDetector').analysesToDigits
let digitsToNumber: typeof import('@/lib/vision/beadDetector').digitsToNumber
let processVideoFrame: typeof import('@/lib/vision/frameProcessor').processVideoFrame
let processImageFrame: typeof import('@/lib/vision/frameProcessor').processImageFrame
if (ENABLE_AUTO_DETECTION) {
// eslint-disable-next-line @typescript-eslint/no-require-imports
const beadDetector = require('@/lib/vision/beadDetector')
// eslint-disable-next-line @typescript-eslint/no-require-imports
const frameProcessor = require('@/lib/vision/frameProcessor')
analyzeColumns = beadDetector.analyzeColumns
analysesToDigits = beadDetector.analysesToDigits
digitsToNumber = beadDetector.digitsToNumber
processVideoFrame = frameProcessor.processVideoFrame
processImageFrame = frameProcessor.processImageFrame
}
interface DockedVisionFeedProps {
/** Called when a stable value is detected */
onValueDetected?: (value: number) => void
/** Number of columns to detect */
columnCount?: number
}
/**
* Renders the processed camera feed in place of the docked abacus
*
* When vision is enabled in MyAbacusContext, this component:
* - For local camera: Opens the saved camera, applies calibration, runs detection
* - For remote camera: Receives frames from phone, runs detection
* - Shows the video feed with detection overlay
*/
export function DockedVisionFeed({ onValueDetected, columnCount = 5 }: DockedVisionFeedProps) {
const { visionConfig, setDockedValue, setVisionEnabled, setVisionCalibration, emitVisionFrame } =
useMyAbacus()
const videoRef = useRef<HTMLVideoElement>(null)
const remoteImageRef = useRef<HTMLImageElement>(null)
const rectifiedCanvasRef = useRef<HTMLCanvasElement | null>(null)
const animationFrameRef = useRef<number | null>(null)
const markerDetectionFrameRef = useRef<number | null>(null)
const lastInferenceTimeRef = useRef<number>(0)
const lastBroadcastTimeRef = useRef<number>(0)
const [videoStream, setVideoStream] = useState<MediaStream | null>(null)
const [error, setError] = useState<string | null>(null)
const [isLoading, setIsLoading] = useState(true)
const [detectedValue, setDetectedValue] = useState<number | null>(null)
const [confidence, setConfidence] = useState(0)
const [isArucoReady, setIsArucoReady] = useState(false)
const [markersFound, setMarkersFound] = useState(0)
// Stability tracking for detected values (hook must be called unconditionally)
const stability = useFrameStability()
// Determine camera source from explicit activeCameraSource field
const isLocalCamera = visionConfig.activeCameraSource === 'local'
const isRemoteCamera = visionConfig.activeCameraSource === 'phone'
// Load and initialize ArUco on mount (for local camera auto-calibration)
useEffect(() => {
if (!isLocalCamera) return
let cancelled = false
const initAruco = async () => {
try {
await loadAruco()
if (cancelled) return
const available = isArucoAvailable()
if (available) {
initArucoDetector()
setIsArucoReady(true)
}
} catch (err) {
console.error('[DockedVisionFeed] Failed to load ArUco:', err)
}
}
initAruco()
return () => {
cancelled = true
}
}, [isLocalCamera])
// Cleanup ArUco detector on unmount
useEffect(() => {
return () => {
cleanupArucoDetector()
}
}, [])
// Auto-calibration loop using ArUco markers (for local camera)
useEffect(() => {
if (!visionConfig.enabled || !isLocalCamera || !videoStream || !isArucoReady) {
if (markerDetectionFrameRef.current) {
cancelAnimationFrame(markerDetectionFrameRef.current)
markerDetectionFrameRef.current = null
}
return
}
const video = videoRef.current
if (!video) return
let running = true
const detectLoop = () => {
if (!running || !video || video.readyState < 2) {
if (running) {
markerDetectionFrameRef.current = requestAnimationFrame(detectLoop)
}
return
}
const result = detectMarkers(video)
setMarkersFound(result.markersFound)
// Auto-update calibration when all 4 markers found
if (result.allMarkersFound && result.quadCorners) {
const grid: CalibrationGrid = {
roi: {
x: Math.min(result.quadCorners.topLeft.x, result.quadCorners.bottomLeft.x),
y: Math.min(result.quadCorners.topLeft.y, result.quadCorners.topRight.y),
width:
Math.max(result.quadCorners.topRight.x, result.quadCorners.bottomRight.x) -
Math.min(result.quadCorners.topLeft.x, result.quadCorners.bottomLeft.x),
height:
Math.max(result.quadCorners.bottomLeft.y, result.quadCorners.bottomRight.y) -
Math.min(result.quadCorners.topLeft.y, result.quadCorners.topRight.y),
},
corners: result.quadCorners,
columnCount,
columnDividers: Array.from({ length: columnCount - 1 }, (_, i) => (i + 1) / columnCount),
rotation: 0,
}
// Update calibration in context
setVisionCalibration(grid)
}
markerDetectionFrameRef.current = requestAnimationFrame(detectLoop)
}
detectLoop()
return () => {
running = false
if (markerDetectionFrameRef.current) {
cancelAnimationFrame(markerDetectionFrameRef.current)
markerDetectionFrameRef.current = null
}
}
}, [
visionConfig.enabled,
isLocalCamera,
videoStream,
isArucoReady,
columnCount,
setVisionCalibration,
])
// Remote camera hook
const {
isPhoneConnected: remoteIsPhoneConnected,
latestFrame: remoteLatestFrame,
subscribe: remoteSubscribe,
unsubscribe: remoteUnsubscribe,
} = useRemoteCameraDesktop()
const INFERENCE_INTERVAL_MS = 100 // 10fps
// Start local camera when component mounts (only for local camera)
useEffect(() => {
if (!visionConfig.enabled || !isLocalCamera || !visionConfig.cameraDeviceId) {
return
}
let cancelled = false
setIsLoading(true)
setError(null)
const startCamera = async () => {
try {
const stream = await navigator.mediaDevices.getUserMedia({
video: {
deviceId: { exact: visionConfig.cameraDeviceId! },
width: { ideal: 1280 },
height: { ideal: 720 },
},
})
if (cancelled) {
stream.getTracks().forEach((track) => track.stop())
return
}
setVideoStream(stream)
setIsLoading(false)
} catch (err) {
if (cancelled) return
console.error('[DockedVisionFeed] Failed to start camera:', err)
setError('Failed to access camera')
setIsLoading(false)
}
}
startCamera()
return () => {
cancelled = true
}
}, [visionConfig.enabled, isLocalCamera, visionConfig.cameraDeviceId])
// Stop camera when stream changes or component unmounts
useEffect(() => {
return () => {
if (videoStream) {
videoStream.getTracks().forEach((track) => track.stop())
}
}
}, [videoStream])
// Attach stream to video element
useEffect(() => {
if (videoRef.current && videoStream) {
videoRef.current.srcObject = videoStream
}
}, [videoStream])
// Subscribe to remote camera session
useEffect(() => {
if (!visionConfig.enabled || !isRemoteCamera || !visionConfig.remoteCameraSessionId) {
return
}
setIsLoading(true)
remoteSubscribe(visionConfig.remoteCameraSessionId)
return () => {
remoteUnsubscribe()
}
}, [
visionConfig.enabled,
isRemoteCamera,
visionConfig.remoteCameraSessionId,
remoteSubscribe,
remoteUnsubscribe,
])
// Update loading state when remote camera connects
useEffect(() => {
if (isRemoteCamera && remoteIsPhoneConnected) {
setIsLoading(false)
}
}, [isRemoteCamera, remoteIsPhoneConnected])
// Process local camera frames for detection (only when enabled)
const processLocalFrame = useCallback(() => {
// Skip detection when feature is disabled
if (!ENABLE_AUTO_DETECTION) return
const now = performance.now()
if (now - lastInferenceTimeRef.current < INFERENCE_INTERVAL_MS) {
return
}
lastInferenceTimeRef.current = now
const video = videoRef.current
if (!video || video.readyState < 2) return
if (!visionConfig.calibration) return
// Process video frame into column strips
const columnImages = processVideoFrame(video, visionConfig.calibration)
if (columnImages.length === 0) return
// Use CV-based bead detection
const analyses = analyzeColumns(columnImages)
const { digits, minConfidence } = analysesToDigits(analyses)
// Convert to number
const value = digitsToNumber(digits)
// Push to stability buffer
stability.pushFrame(value, minConfidence)
}, [visionConfig.calibration, stability])
// Process remote camera frames for detection (only when enabled)
useEffect(() => {
// Skip detection when feature is disabled
if (!ENABLE_AUTO_DETECTION) return
if (!isRemoteCamera || !remoteIsPhoneConnected || !remoteLatestFrame) {
return
}
const now = performance.now()
if (now - lastInferenceTimeRef.current < INFERENCE_INTERVAL_MS) {
return
}
lastInferenceTimeRef.current = now
const image = remoteImageRef.current
if (!image || !image.complete || image.naturalWidth === 0) {
return
}
// Phone sends pre-cropped frames in auto mode, so no calibration needed
const columnImages = processImageFrame(image, null, columnCount)
if (columnImages.length === 0) return
// Use CV-based bead detection
const analyses = analyzeColumns(columnImages)
const { digits, minConfidence } = analysesToDigits(analyses)
// Convert to number
const value = digitsToNumber(digits)
// Push to stability buffer
stability.pushFrame(value, minConfidence)
}, [isRemoteCamera, remoteIsPhoneConnected, remoteLatestFrame, columnCount, stability])
// Local camera detection loop (only when enabled)
useEffect(() => {
// Skip detection loop when feature is disabled
if (!ENABLE_AUTO_DETECTION) return
if (!visionConfig.enabled || !isLocalCamera || !videoStream || !visionConfig.calibration) {
return
}
let running = true
const loop = () => {
if (!running) return
processLocalFrame()
animationFrameRef.current = requestAnimationFrame(loop)
}
loop()
return () => {
running = false
if (animationFrameRef.current) {
cancelAnimationFrame(animationFrameRef.current)
animationFrameRef.current = null
}
}
}, [
visionConfig.enabled,
isLocalCamera,
videoStream,
visionConfig.calibration,
processLocalFrame,
])
// Handle stable value changes (only when auto-detection is enabled)
useEffect(() => {
// Skip value updates when feature is disabled
if (!ENABLE_AUTO_DETECTION) return
if (stability.stableValue !== null && stability.stableValue !== detectedValue) {
setDetectedValue(stability.stableValue)
setConfidence(stability.currentConfidence)
setDockedValue(stability.stableValue)
onValueDetected?.(stability.stableValue)
}
}, [
stability.stableValue,
stability.currentConfidence,
detectedValue,
setDockedValue,
onValueDetected,
])
// Broadcast vision frames to observers (5fps to save bandwidth)
const BROADCAST_INTERVAL_MS = 200
useEffect(() => {
if (!visionConfig.enabled) return
let running = true
const broadcastLoop = () => {
if (!running) return
const now = performance.now()
if (now - lastBroadcastTimeRef.current >= BROADCAST_INTERVAL_MS) {
lastBroadcastTimeRef.current = now
// Capture from rectified canvas (local camera) or remote image
let imageData: string | null = null
if (isLocalCamera && rectifiedCanvasRef.current) {
const canvas = rectifiedCanvasRef.current
if (canvas.width > 0 && canvas.height > 0) {
// Convert canvas to JPEG (quality 0.7 for bandwidth)
imageData = canvas.toDataURL('image/jpeg', 0.7).replace('data:image/jpeg;base64,', '')
}
} else if (isRemoteCamera && remoteLatestFrame) {
// Remote camera already sends base64 JPEG
imageData = remoteLatestFrame.imageData
}
if (imageData) {
emitVisionFrame({
imageData,
detectedValue,
confidence,
})
}
}
requestAnimationFrame(broadcastLoop)
}
broadcastLoop()
return () => {
running = false
}
}, [
visionConfig.enabled,
isLocalCamera,
isRemoteCamera,
remoteLatestFrame,
detectedValue,
confidence,
emitVisionFrame,
])
const handleDisableVision = (e: React.MouseEvent) => {
e.stopPropagation()
setVisionEnabled(false)
if (videoStream) {
videoStream.getTracks().forEach((track) => track.stop())
}
}
if (error) {
return (
<div
data-component="docked-vision-feed"
data-status="error"
className={css({
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
justifyContent: 'center',
gap: 2,
p: 4,
bg: 'red.900/30',
borderRadius: 'lg',
color: 'red.400',
textAlign: 'center',
})}
>
<span className={css({ fontSize: 'xl' })}>⚠️</span>
<span className={css({ fontSize: 'sm' })}>{error}</span>
<button
type="button"
onClick={handleDisableVision}
className={css({
mt: 2,
px: 3,
py: 1,
bg: 'gray.700',
color: 'white',
borderRadius: 'md',
fontSize: 'xs',
border: 'none',
cursor: 'pointer',
})}
>
Disable Vision
</button>
</div>
)
}
if (isLoading) {
return (
<div
data-component="docked-vision-feed"
data-status="loading"
className={css({
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
justifyContent: 'center',
gap: 2,
p: 4,
bg: 'gray.800/50',
borderRadius: 'lg',
color: 'gray.400',
})}
>
<span className={css({ fontSize: 'xl' })}>📷</span>
<span className={css({ fontSize: 'sm' })}>
{isRemoteCamera ? 'Connecting to phone...' : 'Starting camera...'}
</span>
</div>
)
}
return (
<div
data-component="docked-vision-feed"
data-status="active"
data-source={isRemoteCamera ? 'remote' : 'local'}
className={css({
position: 'relative',
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
overflow: 'hidden',
borderRadius: 'lg',
bg: 'black',
width: '100%',
height: '100%',
})}
>
{/* Rectified video feed - local camera */}
{isLocalCamera && (
<VisionCameraFeed
videoStream={videoStream}
calibration={visionConfig.calibration}
showRectifiedView={true}
videoRef={(el) => {
videoRef.current = el
}}
rectifiedCanvasRef={(el) => {
rectifiedCanvasRef.current = el
}}
/>
)}
{/* Remote camera feed */}
{isRemoteCamera && remoteLatestFrame && (
<img
ref={remoteImageRef}
src={`data:image/jpeg;base64,${remoteLatestFrame.imageData}`}
alt="Phone camera view"
className={css({
width: '100%',
height: 'auto',
objectFit: 'contain',
})}
/>
)}
{/* Waiting for remote frames */}
{isRemoteCamera && !remoteLatestFrame && remoteIsPhoneConnected && (
<div
className={css({
width: '100%',
aspectRatio: '2/1',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
color: 'gray.400',
fontSize: 'sm',
})}
>
Waiting for frames...
</div>
)}
{/* Detection overlay - only shown when auto-detection is enabled */}
{ENABLE_AUTO_DETECTION && (
<div
data-element="detection-overlay"
className={css({
position: 'absolute',
bottom: 0,
left: 0,
right: 0,
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
p: 2,
bg: 'rgba(0, 0, 0, 0.7)',
backdropFilter: 'blur(4px)',
})}
>
{/* Detected value */}
<div className={css({ display: 'flex', alignItems: 'center', gap: 2 })}>
<span
className={css({
fontSize: 'lg',
fontWeight: 'bold',
color: 'white',
fontFamily: 'mono',
})}
>
{detectedValue !== null ? detectedValue : '---'}
</span>
{detectedValue !== null && (
<span className={css({ fontSize: 'xs', color: 'gray.400' })}>
{Math.round(confidence * 100)}%
</span>
)}
</div>
{/* Stability indicator */}
<div className={css({ display: 'flex', alignItems: 'center', gap: 1 })}>
{stability.consecutiveFrames > 0 && (
<div className={css({ display: 'flex', gap: 0.5 })}>
{Array.from({ length: 3 }).map((_, i) => (
<div
key={i}
className={css({
w: '6px',
h: '6px',
borderRadius: 'full',
bg: i < stability.consecutiveFrames ? 'green.500' : 'gray.600',
})}
/>
))}
</div>
)}
</div>
</div>
)}
{/* Disable button */}
<button
type="button"
data-action="disable-vision"
onClick={handleDisableVision}
title="Disable vision mode"
className={css({
position: 'absolute',
top: '4px',
right: '4px',
w: '24px',
h: '24px',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
bg: 'rgba(0, 0, 0, 0.5)',
backdropFilter: 'blur(4px)',
border: '1px solid rgba(255, 255, 255, 0.3)',
borderRadius: 'md',
color: 'white',
fontSize: 'xs',
cursor: 'pointer',
zIndex: 10,
opacity: 0.7,
_hover: {
bg: 'rgba(239, 68, 68, 0.8)',
opacity: 1,
},
})}
>
✕
</button>
</div>
)
}

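Detection funnels every reading through useFrameStability, which only surfaces a stableValue once several consecutive frames agree; that is what keeps a half-moved bead from flickering into the detected value. The hook is not part of this diff, so here is a minimal sketch of the idea (the threshold of 3 mirrors the three-dot indicator above but is an assumption):

import { useCallback, useRef, useState } from 'react'

const REQUIRED_CONSECUTIVE_FRAMES = 3 // assumption, mirrors the 3-dot indicator

// Sketch of a frame-stability hook: a detected value becomes "stable"
// only after N consecutive frames report the same number.
export function useFrameStabilitySketch() {
  const lastValueRef = useRef<number | null>(null)
  const countRef = useRef(0)
  const [consecutiveFrames, setConsecutiveFrames] = useState(0)
  const [stableValue, setStableValue] = useState<number | null>(null)
  const [currentConfidence, setCurrentConfidence] = useState(0)

  const pushFrame = useCallback((value: number, confidence: number) => {
    setCurrentConfidence(confidence)
    if (value === lastValueRef.current) {
      countRef.current += 1
    } else {
      lastValueRef.current = value
      countRef.current = 1
    }
    setConsecutiveFrames(countRef.current)
    if (countRef.current >= REQUIRED_CONSECUTIVE_FRAMES) {
      setStableValue(value)
    }
  }, [])

  return { pushFrame, stableValue, consecutiveFrames, currentConfidence }
}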
View File

@@ -0,0 +1,136 @@
'use client'
import type { ObservedVisionFrame } from '@/hooks/useSessionObserver'
import { css } from '../../../styled-system/css'
/**
* Feature flag to control auto-detection display
* When false, hides the detection overlay since auto-detection is disabled globally
*/
const ENABLE_AUTO_DETECTION = false
interface ObserverVisionFeedProps {
/** The latest vision frame from the observed student */
frame: ObservedVisionFrame
}
/**
* Displays the vision feed received from an observed student's session.
*
* Used in the SessionObserver modal when the student has abacus vision enabled.
* Shows the processed camera feed with detection status overlay.
*/
export function ObserverVisionFeed({ frame }: ObserverVisionFeedProps) {
// Calculate age of frame for staleness indicator
const frameAge = Date.now() - frame.receivedAt
const isStale = frameAge > 1000 // More than 1 second old
return (
<div
data-component="observer-vision-feed"
data-stale={isStale}
className={css({
position: 'relative',
display: 'flex',
flexDirection: 'column',
borderRadius: 'lg',
overflow: 'hidden',
bg: 'black',
})}
>
{/* Video frame */}
<img
src={`data:image/jpeg;base64,${frame.imageData}`}
alt="Student's abacus vision feed"
className={css({
width: '100%',
height: 'auto',
display: 'block',
opacity: isStale ? 0.5 : 1,
transition: 'opacity 0.3s',
})}
/>
{/* Detection overlay - only shown when auto-detection is enabled */}
{ENABLE_AUTO_DETECTION && (
<div
data-element="detection-overlay"
className={css({
position: 'absolute',
bottom: 0,
left: 0,
right: 0,
display: 'flex',
justifyContent: 'space-between',
alignItems: 'center',
p: 2,
bg: 'rgba(0, 0, 0, 0.7)',
backdropFilter: 'blur(4px)',
})}
>
{/* Detected value */}
<div className={css({ display: 'flex', alignItems: 'center', gap: 2 })}>
<span
className={css({
fontSize: 'lg',
fontWeight: 'bold',
color: 'white',
fontFamily: 'mono',
})}
>
{frame.detectedValue !== null ? frame.detectedValue : '---'}
</span>
{frame.detectedValue !== null && (
<span className={css({ fontSize: 'xs', color: 'gray.400' })}>
{Math.round(frame.confidence * 100)}%
</span>
)}
</div>
{/* Live indicator */}
<div className={css({ display: 'flex', alignItems: 'center', gap: 1 })}>
<div
className={css({
w: '8px',
h: '8px',
borderRadius: 'full',
bg: isStale ? 'gray.500' : 'green.500',
animation: isStale ? 'none' : 'pulse 2s infinite',
})}
/>
<span
className={css({
fontSize: 'xs',
color: isStale ? 'gray.500' : 'green.400',
})}
>
{isStale ? 'Stale' : 'Live'}
</span>
</div>
</div>
)}
{/* Vision mode badge */}
<div
data-element="vision-badge"
className={css({
position: 'absolute',
top: '4px',
left: '4px',
display: 'flex',
alignItems: 'center',
gap: 1,
px: 2,
py: 1,
bg: 'rgba(0, 0, 0, 0.6)',
borderRadius: 'md',
fontSize: 'xs',
color: 'cyan.400',
})}
>
<span>📷</span>
<span>Vision</span>
</div>
</div>
)
}

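From the fields read above, the ObservedVisionFrame coming out of useSessionObserver has at least this shape (a sketch; the real type may carry more):

// Fields ObserverVisionFeed actually reads; sketch, not the source of truth.
interface ObservedVisionFrameLike {
  imageData: string // base64 JPEG, without the data: URL prefix
  detectedValue: number | null // null until auto-detection produces a value
  confidence: number // 0..1, rendered as a percentage
  receivedAt: number // Date.now() on receipt; drives the staleness check
}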
View File

@@ -1,6 +1,6 @@
'use client'
import { useEffect, useState } from 'react'
import { useEffect, useRef, useState } from 'react'
import { AbacusQRCode } from '@/components/common/AbacusQRCode'
import { useRemoteCameraSession } from '@/hooks/useRemoteCameraSession'
import { css } from '../../../styled-system/css'
@@ -12,6 +12,8 @@ export interface RemoteCameraQRCodeProps {
size?: number
/** Existing session ID to reuse (for reconnection scenarios) */
existingSessionId?: string | null
/** Compact mode - just the QR code, no instructions or URL */
compact?: boolean
}
/**
@@ -28,9 +30,24 @@ export function RemoteCameraQRCode({
onSessionCreated,
size = 200,
existingSessionId,
compact = false,
}: RemoteCameraQRCodeProps) {
const { session, isCreating, error, createSession, setExistingSession, getPhoneUrl } =
useRemoteCameraSession()
const {
session,
isCreating,
error,
createSession,
setExistingSession,
clearSession,
getPhoneUrl,
} = useRemoteCameraSession()
// Ref to track if we've already initiated session creation
// This prevents React 18 Strict Mode from creating duplicate sessions
const creationInitiatedRef = useRef(false)
// Track previous existingSessionId to detect when it changes TO null
const prevExistingSessionIdRef = useRef<string | null | undefined>(existingSessionId)
// If we have an existing session ID, use it instead of creating a new one
useEffect(() => {
@@ -39,9 +56,24 @@ export function RemoteCameraQRCode({
}
}, [existingSessionId, session, setExistingSession])
// Create session on mount only if no existing session
// Reset when existingSessionId CHANGES from truthy to null (user wants fresh session)
// This prevents clearing sessions that we just created ourselves
useEffect(() => {
if (!session && !isCreating && !existingSessionId) {
const prevId = prevExistingSessionIdRef.current
prevExistingSessionIdRef.current = existingSessionId
// Only clear if existingSessionId changed FROM something TO null
if (prevId && !existingSessionId) {
clearSession()
creationInitiatedRef.current = false
}
}, [existingSessionId, clearSession])
// Create session on mount only if no existing session
// Use ref to prevent duplicate creation in React 18 Strict Mode
useEffect(() => {
if (!session && !isCreating && !existingSessionId && !creationInitiatedRef.current) {
creationInitiatedRef.current = true
createSession().then((newSession) => {
if (newSession && onSessionCreated) {
onSessionCreated(newSession.sessionId)
@@ -137,6 +169,22 @@ export function RemoteCameraQRCode({
return null
}
// Compact mode - just the QR code in a minimal container
if (compact) {
return (
<div
className={css({
bg: 'white',
p: 2,
borderRadius: 'lg',
})}
data-component="remote-camera-qr-compact"
>
<AbacusQRCode value={phoneUrl} size={size} />
</div>
)
}
return (
<div
className={css({

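The creationInitiatedRef guard above is the standard fix for React 18 Strict Mode running mount effects twice in development, which would otherwise POST two sessions. The pattern, distilled to a generic sketch (not this component's exact code):

import { useEffect, useRef } from 'react'

// Strict Mode runs mount effects twice in dev (mount, cleanup, re-run).
// A ref survives that cycle, so the second pass sees initiated=true and
// skips the side effect; resetting it is a deliberate, separate action.
export function useRunOnce(effect: () => void) {
  const initiatedRef = useRef(false)
  useEffect(() => {
    if (initiatedRef.current) return
    initiatedRef.current = true
    effect()
  }, [effect])
}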
View File

@@ -58,8 +58,14 @@ export function RemoteCameraReceiver({
const [calibration, setCalibration] = useState<CalibrationGrid | null>(null)
const containerRef = useRef<HTMLDivElement>(null)
const imageRef = useRef<HTMLImageElement>(null)
-const [containerDimensions, setContainerDimensions] = useState({ width: 0, height: 0 })
-const [imageDimensions, setImageDimensions] = useState({ width: 0, height: 0 })
+const [containerDimensions, setContainerDimensions] = useState({
+  width: 0,
+  height: 0,
+})
+const [imageDimensions, setImageDimensions] = useState({
+  width: 0,
+  height: 0,
+})
// Subscribe when sessionId changes
useEffect(() => {
@@ -100,7 +106,10 @@ export function RemoteCameraReceiver({
// Track image dimensions when it loads
const handleImageLoad = useCallback((e: React.SyntheticEvent<HTMLImageElement>) => {
const img = e.currentTarget
-setImageDimensions({ width: img.naturalWidth, height: img.naturalHeight })
+setImageDimensions({
+  width: img.naturalWidth,
+  height: img.naturalHeight,
+})
}, [])
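// Editor's note: a sketch of why natural dimensions are tracked, assuming an
// overlay needs to map calibration coordinates onto the displayed image; the
// scale variables below are illustrative, not part of this diff.
//
//   const scaleX = containerDimensions.width / imageDimensions.width
//   const scaleY = containerDimensions.height / imageDimensions.height
//   // a point (x, y) in natural-image pixels renders at (x * scaleX, y * scaleY)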
// Create image src from base64 data

View File

@@ -0,0 +1,675 @@
import type { Meta, StoryObj } from '@storybook/react'
import { useState } from 'react'
import { css } from '../../../styled-system/css'
// =============================================================================
// Mock Types (matching the real implementation)
// =============================================================================
interface MockDevice {
deviceId: string
label: string
}
type CameraSource = 'local' | 'phone'
// =============================================================================
// Mock Camera Controls Component
// =============================================================================
interface VisionCameraControlsProps {
/** Camera source: local or phone */
cameraSource: CameraSource
/** Available camera devices */
availableDevices: MockDevice[]
/** Currently selected device ID */
selectedDeviceId: string | null
/** Whether torch is available */
isTorchAvailable: boolean
/** Whether torch is on */
isTorchOn: boolean
/** Current facing mode */
facingMode: 'user' | 'environment'
/** Whether phone is connected (for remote camera) */
isPhoneConnected?: boolean
/** Remote torch available */
remoteTorchAvailable?: boolean
/** Remote torch on */
remoteTorchOn?: boolean
/** Callback when camera is selected */
onCameraSelect?: (deviceId: string) => void
/** Callback when camera is flipped */
onFlipCamera?: () => void
/** Callback when torch is toggled */
onToggleTorch?: () => void
/** Callback when camera source changes */
onCameraSourceChange?: (source: CameraSource) => void
}
/**
* VisionCameraControls - UI for camera selection, torch control, and camera source toggle
*
* This component demonstrates the unified camera controls that appear in AbacusVisionBridge:
* - Camera selector dropdown (always visible for local camera, even with 1 device)
* - Flip camera button (only when multiple cameras available)
* - Unified torch button (works for both local and remote cameras)
* - Camera source toggle (local vs phone)
*/
function VisionCameraControls({
cameraSource,
availableDevices,
selectedDeviceId,
isTorchAvailable,
isTorchOn,
facingMode,
isPhoneConnected = false,
remoteTorchAvailable = false,
remoteTorchOn = false,
onCameraSelect,
onFlipCamera,
onToggleTorch,
onCameraSourceChange,
}: VisionCameraControlsProps) {
// Determine if torch button should show
const showTorchButton =
(cameraSource === 'local' && isTorchAvailable) ||
(cameraSource === 'phone' && isPhoneConnected && remoteTorchAvailable)
// Get current torch state based on source
const currentTorchOn = cameraSource === 'local' ? isTorchOn : remoteTorchOn
return (
<div
data-component="vision-camera-controls"
className={css({
display: 'flex',
flexDirection: 'column',
gap: 3,
p: 4,
bg: 'gray.900',
borderRadius: 'xl',
maxWidth: '400px',
width: '100%',
})}
>
{/* Header */}
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
})}
>
<span className={css({ fontSize: 'lg' })}>📷</span>
<span className={css({ color: 'white', fontWeight: 'medium' })}>Camera Controls</span>
</div>
{/* Camera source selector */}
<div
data-element="camera-source"
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
p: 2,
bg: 'gray.800',
borderRadius: 'md',
})}
>
<span className={css({ color: 'gray.400', fontSize: 'sm' })}>Source:</span>
<button
type="button"
onClick={() => onCameraSourceChange?.('local')}
className={css({
px: 3,
py: 1,
fontSize: 'sm',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
bg: cameraSource === 'local' ? 'blue.600' : 'gray.700',
color: 'white',
_hover: { bg: cameraSource === 'local' ? 'blue.500' : 'gray.600' },
})}
>
Local Camera
</button>
<button
type="button"
onClick={() => onCameraSourceChange?.('phone')}
className={css({
px: 3,
py: 1,
fontSize: 'sm',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
bg: cameraSource === 'phone' ? 'blue.600' : 'gray.700',
color: 'white',
_hover: { bg: cameraSource === 'phone' ? 'blue.500' : 'gray.600' },
})}
>
Phone Camera
</button>
</div>
{/* Camera controls - unified for both local and phone */}
<div
data-element="camera-controls"
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
flexWrap: 'wrap',
})}
>
{/* Camera selector - always show for local camera (even with 1 device) */}
{cameraSource === 'local' && availableDevices.length > 0 && (
<select
data-element="camera-selector"
value={selectedDeviceId ?? ''}
onChange={(e) => onCameraSelect?.(e.target.value)}
className={css({
flex: 1,
p: 2,
bg: 'gray.800',
color: 'white',
border: '1px solid',
borderColor: 'gray.600',
borderRadius: 'md',
fontSize: 'sm',
minWidth: '150px',
})}
>
{availableDevices.map((device) => (
<option key={device.deviceId} value={device.deviceId}>
{device.label || `Camera ${device.deviceId.slice(0, 8)}`}
</option>
))}
</select>
)}
{/* Flip camera button - only show if multiple cameras available */}
{cameraSource === 'local' && availableDevices.length > 1 && (
<button
type="button"
onClick={onFlipCamera}
data-action="flip-camera"
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
width: '40px',
height: '40px',
bg: 'gray.700',
color: 'white',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
fontSize: 'lg',
_hover: { bg: 'gray.600' },
})}
title={`Switch to ${facingMode === 'environment' ? 'front' : 'back'} camera`}
>
🔄
</button>
)}
{/* Torch toggle button - unified for both local and remote */}
{showTorchButton && (
<button
type="button"
onClick={onToggleTorch}
data-action="toggle-torch"
data-status={currentTorchOn ? 'on' : 'off'}
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
width: '40px',
height: '40px',
bg: currentTorchOn ? 'yellow.600' : 'gray.700',
color: 'white',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
fontSize: 'lg',
_hover: { bg: currentTorchOn ? 'yellow.500' : 'gray.600' },
})}
title={currentTorchOn ? 'Turn off flash' : 'Turn on flash'}
>
{currentTorchOn ? '🔦' : '💡'}
</button>
)}
{/* Phone status when using phone camera */}
{cameraSource === 'phone' && (
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
fontSize: 'sm',
color: isPhoneConnected ? 'green.400' : 'gray.400',
})}
>
<span
className={css({
width: 2,
height: 2,
borderRadius: 'full',
bg: isPhoneConnected ? 'green.500' : 'gray.500',
})}
/>
{isPhoneConnected ? 'Phone Connected' : 'Waiting for phone...'}
</div>
)}
</div>
{/* Info about wide-angle preference */}
<div
data-element="wide-angle-info"
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
p: 2,
bg: 'blue.900/50',
borderRadius: 'md',
fontSize: 'xs',
color: 'blue.300',
})}
>
<span>📐</span>
<span>Cameras default to widest angle lens (zoom: 1)</span>
</div>
</div>
)
}
// =============================================================================
// Storybook Meta
// =============================================================================
const meta: Meta<typeof VisionCameraControls> = {
title: 'Vision/VisionCameraControls',
component: VisionCameraControls,
decorators: [
(Story) => (
<div
className={css({
padding: '2rem',
display: 'flex',
justifyContent: 'center',
alignItems: 'center',
minHeight: '300px',
bg: 'gray.950',
})}
>
<Story />
</div>
),
],
parameters: {
layout: 'centered',
backgrounds: { default: 'dark' },
},
}
export default meta
type Story = StoryObj<typeof VisionCameraControls>
// =============================================================================
// Mock Data
// =============================================================================
const singleCamera: MockDevice[] = [{ deviceId: 'camera-1', label: 'FaceTime HD Camera' }]
const multipleCameras: MockDevice[] = [
{ deviceId: 'camera-1', label: 'FaceTime HD Camera' },
{ deviceId: 'camera-2', label: 'iPhone Continuity Camera (Wide)' },
{ deviceId: 'camera-3', label: 'iPhone Continuity Camera (Ultra Wide)' },
{ deviceId: 'camera-4', label: 'Desk View Camera' },
]
const iphoneCameras: MockDevice[] = [
{ deviceId: 'camera-wide', label: 'Back Camera (Wide)' },
{ deviceId: 'camera-ultrawide', label: 'Back Camera (Ultra Wide)' },
{ deviceId: 'camera-front', label: 'Front Camera' },
]
// =============================================================================
// Stories: Single Camera
// =============================================================================
export const SingleCameraNoTorch: Story = {
name: 'Single Camera - No Torch',
args: {
cameraSource: 'local',
availableDevices: singleCamera,
selectedDeviceId: 'camera-1',
isTorchAvailable: false,
isTorchOn: false,
facingMode: 'environment',
},
}
export const SingleCameraWithTorch: Story = {
name: 'Single Camera - With Torch',
args: {
cameraSource: 'local',
availableDevices: singleCamera,
selectedDeviceId: 'camera-1',
isTorchAvailable: true,
isTorchOn: false,
facingMode: 'environment',
},
}
export const SingleCameraTorchOn: Story = {
name: 'Single Camera - Torch On',
args: {
cameraSource: 'local',
availableDevices: singleCamera,
selectedDeviceId: 'camera-1',
isTorchAvailable: true,
isTorchOn: true,
facingMode: 'environment',
},
}
// =============================================================================
// Stories: Multiple Cameras
// =============================================================================
export const MultipleCameras: Story = {
name: 'Multiple Cameras - Desktop',
args: {
cameraSource: 'local',
availableDevices: multipleCameras,
selectedDeviceId: 'camera-2',
isTorchAvailable: true,
isTorchOn: false,
facingMode: 'environment',
},
}
export const MultipleCamerasUltraWideSelected: Story = {
name: 'Multiple Cameras - Ultra Wide Selected',
args: {
cameraSource: 'local',
availableDevices: multipleCameras,
selectedDeviceId: 'camera-3',
isTorchAvailable: true,
isTorchOn: false,
facingMode: 'environment',
},
}
// =============================================================================
// Stories: Phone Camera
// =============================================================================
export const PhoneCameraWaiting: Story = {
name: 'Phone Camera - Waiting for Connection',
args: {
cameraSource: 'phone',
availableDevices: [],
selectedDeviceId: null,
isTorchAvailable: false,
isTorchOn: false,
facingMode: 'environment',
isPhoneConnected: false,
remoteTorchAvailable: false,
remoteTorchOn: false,
},
}
export const PhoneCameraConnected: Story = {
name: 'Phone Camera - Connected',
args: {
cameraSource: 'phone',
availableDevices: [],
selectedDeviceId: null,
isTorchAvailable: false,
isTorchOn: false,
facingMode: 'environment',
isPhoneConnected: true,
remoteTorchAvailable: true,
remoteTorchOn: false,
},
}
export const PhoneCameraTorchOn: Story = {
name: 'Phone Camera - Torch On',
args: {
cameraSource: 'phone',
availableDevices: [],
selectedDeviceId: null,
isTorchAvailable: false,
isTorchOn: false,
facingMode: 'environment',
isPhoneConnected: true,
remoteTorchAvailable: true,
remoteTorchOn: true,
},
}
// =============================================================================
// Stories: Interactive
// =============================================================================
function InteractiveCameraControls() {
const [cameraSource, setCameraSource] = useState<CameraSource>('local')
const [selectedDeviceId, setSelectedDeviceId] = useState('camera-2')
const [isTorchOn, setIsTorchOn] = useState(false)
const [remoteTorchOn, setRemoteTorchOn] = useState(false)
const [facingMode, setFacingMode] = useState<'user' | 'environment'>('environment')
return (
<VisionCameraControls
cameraSource={cameraSource}
availableDevices={multipleCameras}
selectedDeviceId={selectedDeviceId}
isTorchAvailable={true}
isTorchOn={isTorchOn}
facingMode={facingMode}
isPhoneConnected={true}
remoteTorchAvailable={true}
remoteTorchOn={remoteTorchOn}
onCameraSourceChange={setCameraSource}
onCameraSelect={setSelectedDeviceId}
onFlipCamera={() => setFacingMode((m) => (m === 'user' ? 'environment' : 'user'))}
onToggleTorch={() => {
if (cameraSource === 'local') {
setIsTorchOn((t) => !t)
} else {
setRemoteTorchOn((t) => !t)
}
}}
/>
)
}
export const Interactive: Story = {
name: 'Interactive Demo',
render: () => <InteractiveCameraControls />,
}
// =============================================================================
// Stories: Feature Showcase
// =============================================================================
export const FeatureShowcase: Story = {
name: 'Feature Showcase - All Features',
render: () => (
<div
className={css({
display: 'flex',
flexDirection: 'column',
gap: 6,
maxWidth: '800px',
})}
>
<div
className={css({
p: 4,
bg: 'gray.800',
borderRadius: 'lg',
color: 'white',
})}
>
<h2 className={css({ fontSize: 'xl', fontWeight: 'bold', mb: 4 })}>
Camera Control Features
</h2>
<div className={css({ display: 'flex', flexDirection: 'column', gap: 4 })}>
<div>
<h3
className={css({
fontSize: 'md',
fontWeight: 'semibold',
color: 'blue.300',
mb: 2,
})}
>
1. Camera Selector Always Visible
</h3>
<p className={css({ color: 'gray.400', fontSize: 'sm', mb: 2 })}>
The camera dropdown now shows even with just 1 camera, so you can always see which
device is selected.
</p>
<VisionCameraControls
cameraSource="local"
availableDevices={singleCamera}
selectedDeviceId="camera-1"
isTorchAvailable={false}
isTorchOn={false}
facingMode="environment"
/>
</div>
<div>
<h3
className={css({
fontSize: 'md',
fontWeight: 'semibold',
color: 'yellow.300',
mb: 2,
})}
>
2. Unified Torch Control
</h3>
<p className={css({ color: 'gray.400', fontSize: 'sm', mb: 2 })}>
The torch button works for both local and remote cameras. It automatically controls
the active camera&apos;s flash.
</p>
<div className={css({ display: 'flex', gap: 4, flexWrap: 'wrap' })}>
<VisionCameraControls
cameraSource="local"
availableDevices={singleCamera}
selectedDeviceId="camera-1"
isTorchAvailable={true}
isTorchOn={true}
facingMode="environment"
/>
<VisionCameraControls
cameraSource="phone"
availableDevices={[]}
selectedDeviceId={null}
isTorchAvailable={false}
isTorchOn={false}
facingMode="environment"
isPhoneConnected={true}
remoteTorchAvailable={true}
remoteTorchOn={true}
/>
</div>
</div>
<div>
<h3
className={css({
fontSize: 'md',
fontWeight: 'semibold',
color: 'green.300',
mb: 2,
})}
>
3. Wide-Angle Lens Preference
</h3>
<p className={css({ color: 'gray.400', fontSize: 'sm', mb: 2 })}>
Cameras default to their widest field of view using{' '}
<code className={css({ bg: 'gray.700', px: 1, borderRadius: 'sm' })}>
zoom: &#123; ideal: 1 &#125;
</code>{' '}
constraint, ensuring you capture the full abacus.
</p>
<VisionCameraControls
cameraSource="local"
availableDevices={iphoneCameras}
selectedDeviceId="camera-ultrawide"
isTorchAvailable={true}
isTorchOn={false}
facingMode="environment"
/>
</div>
</div>
</div>
</div>
),
}
// =============================================================================
// Stories: Comparison
// =============================================================================
export const LocalVsPhoneComparison: Story = {
name: 'Local vs Phone Camera Comparison',
render: () => (
<div className={css({ display: 'flex', gap: 6, flexWrap: 'wrap' })}>
<div className={css({ display: 'flex', flexDirection: 'column', gap: 2 })}>
<span
className={css({
color: 'white',
fontSize: 'sm',
fontWeight: 'bold',
})}
>
Local Camera (Multiple)
</span>
<VisionCameraControls
cameraSource="local"
availableDevices={multipleCameras}
selectedDeviceId="camera-2"
isTorchAvailable={true}
isTorchOn={false}
facingMode="environment"
/>
</div>
<div className={css({ display: 'flex', flexDirection: 'column', gap: 2 })}>
<span
className={css({
color: 'white',
fontSize: 'sm',
fontWeight: 'bold',
})}
>
Phone Camera (Connected)
</span>
<VisionCameraControls
cameraSource="phone"
availableDevices={[]}
selectedDeviceId={null}
isTorchAvailable={false}
isTorchOn={false}
facingMode="environment"
isPhoneConnected={true}
remoteTorchAvailable={true}
remoteTorchOn={false}
/>
</div>
</div>
),
}
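The "widest angle" default called out above comes from a zoom constraint at capture time. A minimal sketch, assuming a browser that implements the Image Capture spec's `zoom` constrainable property (support varies; browsers that lack it simply ignore an `ideal` constraint):

async function openWidestCamera(selectedDeviceId?: string): Promise<MediaStream> {
  return navigator.mediaDevices.getUserMedia({
    video: {
      deviceId: selectedDeviceId ? { exact: selectedDeviceId } : undefined,
      // `zoom` is not declared in TypeScript's DOM lib, hence the cast
      zoom: { ideal: 1 },
    } as MediaTrackConstraints,
  })
}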

View File

@@ -19,6 +19,8 @@ export interface VisionCameraFeedProps {
showRectifiedView?: boolean
/** Video element ref callback for external access */
videoRef?: (el: HTMLVideoElement | null) => void
/** Rectified canvas ref callback for external access (only when showRectifiedView=true) */
rectifiedCanvasRef?: (el: HTMLCanvasElement | null) => void
/** Called when video metadata is loaded (provides dimensions) */
onVideoReady?: (width: number, height: number) => void
/** Children rendered over the video (e.g., CalibrationOverlay) */
@@ -55,6 +57,7 @@ export function VisionCameraFeed({
showCalibrationGrid = false,
showRectifiedView = false,
videoRef: externalVideoRef,
rectifiedCanvasRef: externalCanvasRef,
onVideoReady,
children,
}: VisionCameraFeedProps): ReactNode {
@@ -82,6 +85,13 @@ export function VisionCameraFeed({
}
}, [externalVideoRef])
// Set canvas ref for external access (when rectified view is active)
useEffect(() => {
if (externalCanvasRef && showRectifiedView) {
externalCanvasRef(rectifiedCanvasRef.current)
}
}, [externalCanvasRef, showRectifiedView])
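// Editor's note: one way a parent might consume the new rectifiedCanvasRef to
// snapshot rectified frames; `canvasEl` and the JPEG capture are illustrative,
// not part of this diff.
//
//   <VisionCameraFeed
//     showRectifiedView
//     rectifiedCanvasRef={(el) => { canvasEl = el }}
//   />
//   // later: const jpeg = canvasEl?.toDataURL('image/jpeg', 0.8)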
// Attach stream to video element
useEffect(() => {
const video = internalVideoRef.current

View File

@@ -0,0 +1,122 @@
'use client'
import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { css } from '../../../styled-system/css'
interface VisionIndicatorProps {
/** Size variant */
size?: 'small' | 'medium'
/** Position for absolute placement */
position?: 'top-left' | 'bottom-right'
}
/**
* Camera icon indicator for abacus vision mode
*
* Shows:
* - 🔴 Red dot = not configured (no camera or no calibration)
* - 🟢 Green dot = configured and enabled
* - ⚪ Gray = configured but disabled
*
* Click behavior:
* - If not configured: opens setup modal
* - If configured: toggles vision on/off
*/
export function VisionIndicator({
size = 'medium',
position = 'bottom-right',
}: VisionIndicatorProps) {
const { visionConfig, isVisionSetupComplete, openVisionSetup } = useMyAbacus()
const handleClick = (e: React.MouseEvent) => {
e.stopPropagation()
// Always open setup modal on click for now
// This gives users easy access to vision settings
openVisionSetup()
}
const handleContextMenu = (e: React.MouseEvent) => {
e.preventDefault()
e.stopPropagation()
// Right-click always opens setup
openVisionSetup()
}
// Determine status indicator color
const statusColor = !isVisionSetupComplete
? 'red.500' // Not configured
: visionConfig.enabled
? 'green.500' // Enabled
: 'gray.400' // Configured but disabled
const statusLabel = !isVisionSetupComplete
? 'Vision not configured'
: visionConfig.enabled
? 'Vision enabled'
: 'Vision disabled'
const sizeStyles =
size === 'small'
? { w: '20px', h: '20px', fontSize: '10px' }
: { w: '28px', h: '28px', fontSize: '14px' }
const positionStyles =
position === 'top-left'
? { top: 0, left: 0, margin: '4px' }
: { bottom: 0, right: 0, margin: '4px' }
return (
<button
type="button"
data-vision-status={
!isVisionSetupComplete ? 'not-configured' : visionConfig.enabled ? 'enabled' : 'disabled'
}
onClick={handleClick}
onContextMenu={handleContextMenu}
title={`${statusLabel} (right-click for settings)`}
style={{
position: 'absolute',
...positionStyles,
}}
className={css({
...sizeStyles,
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
bg: 'rgba(0, 0, 0, 0.5)',
backdropFilter: 'blur(4px)',
border: '1px solid rgba(255, 255, 255, 0.3)',
borderRadius: 'md',
color: 'white',
cursor: 'pointer',
transition: 'all 0.2s',
zIndex: 10,
opacity: 0.8,
_hover: {
bg: 'rgba(0, 0, 0, 0.7)',
opacity: 1,
transform: 'scale(1.1)',
},
})}
>
{/* Camera icon */}
<span style={{ position: 'relative' }}>
📷{/* Status dot */}
<span
data-element="vision-status-dot"
className={css({
position: 'absolute',
top: '-2px',
right: '-4px',
w: '8px',
h: '8px',
borderRadius: 'full',
bg: statusColor,
border: '1px solid white',
boxShadow: '0 1px 2px rgba(0,0,0,0.3)',
})}
/>
</span>
</button>
)
}
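Typical placement is inside a relatively positioned host so the indicator's absolute positioning resolves against it; a minimal sketch (the wrapper and AbacusDisplay are illustrative, not from this diff):

<div style={{ position: 'relative' }}>
  <AbacusDisplay /> {/* hypothetical host component */}
  <VisionIndicator size="small" position="top-left" />
</div>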

View File

@@ -0,0 +1,99 @@
'use client'
import { useMyAbacus } from '@/contexts/MyAbacusContext'
import { css } from '../../../styled-system/css'
import { AbacusVisionBridge } from './AbacusVisionBridge'
/**
* Modal for configuring abacus vision settings
*
* Renders AbacusVisionBridge directly in a draggable modal.
* The bridge component handles all camera/calibration configuration.
*/
export function VisionSetupModal() {
const {
isVisionSetupOpen,
closeVisionSetup,
visionConfig,
isVisionSetupComplete,
setVisionEnabled,
setVisionCamera,
setVisionCalibration,
setVisionRemoteSession,
setVisionCameraSource,
dock,
} = useMyAbacus()
const handleClearSettings = () => {
setVisionCamera(null)
setVisionCalibration(null)
setVisionRemoteSession(null)
setVisionCameraSource(null)
setVisionEnabled(false)
}
const handleToggleVision = () => {
setVisionEnabled(!visionConfig.enabled)
}
if (!isVisionSetupOpen) return null
return (
<div
data-component="vision-setup-modal"
className={css({
position: 'fixed',
inset: 0,
bg: 'rgba(0, 0, 0, 0.7)',
backdropFilter: 'blur(4px)',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
zIndex: 10000,
})}
onClick={closeVisionSetup}
onKeyDown={(e) => {
if (e.key === 'Escape') {
closeVisionSetup()
}
}}
>
{/* AbacusVisionBridge is a motion.div with drag - stopPropagation prevents backdrop close */}
<div onClick={(e) => e.stopPropagation()}>
<AbacusVisionBridge
columnCount={dock?.columns ?? 5}
onValueDetected={() => {
// Value detected - configuration is working
}}
onClose={closeVisionSetup}
onConfigurationChange={(config) => {
// Save configuration to context as it changes
if (config.cameraDeviceId !== undefined) {
setVisionCamera(config.cameraDeviceId)
}
if (config.calibration !== undefined) {
setVisionCalibration(config.calibration)
}
if (config.remoteCameraSessionId !== undefined) {
setVisionRemoteSession(config.remoteCameraSessionId)
}
if (config.activeCameraSource !== undefined) {
setVisionCameraSource(config.activeCameraSource)
}
}}
// Use saved activeCameraSource if available, otherwise infer from configs
initialCameraSource={
visionConfig.activeCameraSource ??
(visionConfig.remoteCameraSessionId && !visionConfig.cameraDeviceId ? 'phone' : 'local')
}
// Show enable/disable and clear buttons
showVisionControls={true}
isVisionEnabled={visionConfig.enabled}
isVisionSetupComplete={isVisionSetupComplete}
onToggleVision={handleToggleVision}
onClearSettings={handleClearSettings}
/>
</div>
</div>
)
}
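Pieced together from the setters used above, the vision configuration stored in MyAbacusContext appears to carry at least the fields below. A hedged reconstruction for orientation only, not the actual exported type:

type CameraSource = 'local' | 'phone'

interface VisionConfigSketch {
  enabled: boolean
  cameraDeviceId: string | null
  calibration: CalibrationGrid | null // same CalibrationGrid type RemoteCameraReceiver uses
  remoteCameraSessionId: string | null
  activeCameraSource: CameraSource | null
}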

View File

@@ -0,0 +1,191 @@
/**
* Unit tests for ObserverVisionFeed component
*
* Note: Canvas.Image mock is provided in src/test/setup.ts to prevent
* jsdom errors with data URI images. Actual image rendering is verified
* through integration/e2e tests.
*/
import { render, screen } from '@testing-library/react'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import type { ObservedVisionFrame } from '@/hooks/useSessionObserver'
import { ObserverVisionFeed } from '../ObserverVisionFeed'
describe('ObserverVisionFeed', () => {
const createMockFrame = (overrides?: Partial<ObservedVisionFrame>): ObservedVisionFrame => ({
imageData: 'base64ImageData==',
detectedValue: 123,
confidence: 0.95,
receivedAt: Date.now(),
...overrides,
})
beforeEach(() => {
vi.useFakeTimers()
})
afterEach(() => {
vi.useRealTimers()
})
describe('rendering', () => {
it('renders the vision feed container', () => {
const frame = createMockFrame()
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByRole('img')).toBeInTheDocument()
})
it('displays the image with correct src', () => {
const frame = createMockFrame({ imageData: 'testImageData123' })
render(<ObserverVisionFeed frame={frame} />)
const img = screen.getByRole('img') as HTMLImageElement
// Check the src property (not attribute) because our test setup
// intercepts data:image/ src attributes to prevent jsdom canvas errors
expect(img.src).toBe('data:image/jpeg;base64,testImageData123')
})
it('has appropriate alt text for accessibility', () => {
const frame = createMockFrame()
render(<ObserverVisionFeed frame={frame} />)
const img = screen.getByRole('img')
expect(img).toHaveAttribute('alt', "Student's abacus vision feed")
})
})
describe('detected value display', () => {
it('displays the detected value', () => {
const frame = createMockFrame({ detectedValue: 456, confidence: 0.87 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('456')).toBeInTheDocument()
})
it('displays confidence percentage', () => {
const frame = createMockFrame({ detectedValue: 123, confidence: 0.87 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('87%')).toBeInTheDocument()
})
it('displays dashes when detectedValue is null', () => {
const frame = createMockFrame({ detectedValue: null, confidence: 0 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('---')).toBeInTheDocument()
})
it('hides confidence when value is null', () => {
const frame = createMockFrame({ detectedValue: null, confidence: 0.95 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.queryByText('95%')).not.toBeInTheDocument()
})
it('handles zero as a valid detected value', () => {
const frame = createMockFrame({ detectedValue: 0, confidence: 0.99 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('0')).toBeInTheDocument()
expect(screen.getByText('99%')).toBeInTheDocument()
})
})
describe('live/stale indicator', () => {
it('shows Live status for fresh frames (less than 1 second old)', () => {
const now = Date.now()
vi.setSystemTime(now)
const frame = createMockFrame({ receivedAt: now - 500 }) // 500ms ago
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('Live')).toBeInTheDocument()
})
it('shows Stale status for old frames (more than 1 second old)', () => {
const now = Date.now()
vi.setSystemTime(now)
const frame = createMockFrame({ receivedAt: now - 1500 }) // 1.5 seconds ago
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('Stale')).toBeInTheDocument()
})
it('sets stale data attribute when frame is old', () => {
const now = Date.now()
vi.setSystemTime(now)
const frame = createMockFrame({ receivedAt: now - 2000 }) // 2 seconds ago
const { container } = render(<ObserverVisionFeed frame={frame} />)
const component = container.querySelector('[data-component="observer-vision-feed"]')
expect(component).toHaveAttribute('data-stale', 'true')
})
it('sets stale data attribute to false for fresh frames', () => {
const now = Date.now()
vi.setSystemTime(now)
const frame = createMockFrame({ receivedAt: now - 100 }) // 100ms ago
const { container } = render(<ObserverVisionFeed frame={frame} />)
const component = container.querySelector('[data-component="observer-vision-feed"]')
expect(component).toHaveAttribute('data-stale', 'false')
})
it('reduces image opacity for stale frames', () => {
const now = Date.now()
vi.setSystemTime(now)
const frame = createMockFrame({ receivedAt: now - 2000 })
render(<ObserverVisionFeed frame={frame} />)
const img = screen.getByRole('img')
// The opacity should be reduced for stale frames
expect(img.className).toBeDefined()
})
})
describe('vision badge', () => {
it('displays the vision badge', () => {
const frame = createMockFrame()
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('📷')).toBeInTheDocument()
expect(screen.getByText('Vision')).toBeInTheDocument()
})
})
describe('edge cases', () => {
it('handles very large detected values', () => {
const frame = createMockFrame({ detectedValue: 99999, confidence: 1.0 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('99999')).toBeInTheDocument()
expect(screen.getByText('100%')).toBeInTheDocument()
})
it('rounds confidence to nearest integer', () => {
const frame = createMockFrame({ detectedValue: 123, confidence: 0.876 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('88%')).toBeInTheDocument()
})
it('handles confidence edge case of exactly 1', () => {
const frame = createMockFrame({ detectedValue: 123, confidence: 1.0 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('100%')).toBeInTheDocument()
})
it('handles confidence edge case of exactly 0', () => {
const frame = createMockFrame({ detectedValue: 123, confidence: 0 })
render(<ObserverVisionFeed frame={frame} />)
expect(screen.getByText('0%')).toBeInTheDocument()
})
})
})
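The live/stale tests pin the freshness window at one second. A minimal sketch of the check they imply (the component may compute it differently):

const STALE_THRESHOLD_MS = 1000 // assumed from the 500ms "Live" and 1500ms "Stale" cases

function isFrameStale(receivedAt: number, now: number = Date.now()): boolean {
  return now - receivedAt > STALE_THRESHOLD_MS
}

isFrameStale(Date.now() - 500) // false -> "Live"
isFrameStale(Date.now() - 1500) // true -> "Stale"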

View File

@@ -0,0 +1,173 @@
/**
* Unit tests for VisionIndicator component
*/
import { fireEvent, render, screen } from '@testing-library/react'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { VisionIndicator } from '../VisionIndicator'
// Mock the MyAbacusContext
const mockOpenVisionSetup = vi.fn()
const mockVisionConfig = {
enabled: false,
cameraDeviceId: null,
calibration: null,
remoteCameraSessionId: null,
}
vi.mock('@/contexts/MyAbacusContext', () => ({
useMyAbacus: () => ({
visionConfig: mockVisionConfig,
isVisionSetupComplete:
mockVisionConfig.cameraDeviceId !== null && mockVisionConfig.calibration !== null,
openVisionSetup: mockOpenVisionSetup,
}),
}))
describe('VisionIndicator', () => {
beforeEach(() => {
vi.clearAllMocks()
// Reset to default state
mockVisionConfig.enabled = false
mockVisionConfig.cameraDeviceId = null
mockVisionConfig.calibration = null
mockVisionConfig.remoteCameraSessionId = null
})
describe('rendering', () => {
it('renders the camera icon', () => {
render(<VisionIndicator />)
expect(screen.getByText('📷')).toBeInTheDocument()
})
it('renders with medium size by default', () => {
render(<VisionIndicator />)
const button = screen.getByRole('button')
// Medium size button should exist with the vision-status attribute
expect(button).toHaveAttribute('data-vision-status')
})
it('renders with small size when specified', () => {
render(<VisionIndicator size="small" />)
expect(screen.getByRole('button')).toBeInTheDocument()
})
})
describe('status indicator', () => {
it('shows not-configured status when camera is not set', () => {
mockVisionConfig.cameraDeviceId = null
mockVisionConfig.calibration = null
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button).toHaveAttribute('data-vision-status', 'not-configured')
})
it('shows disabled status when configured but not enabled', () => {
mockVisionConfig.cameraDeviceId = 'camera-123'
mockVisionConfig.calibration = {
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
}
mockVisionConfig.enabled = false
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button).toHaveAttribute('data-vision-status', 'disabled')
})
it('shows enabled status when configured and enabled', () => {
mockVisionConfig.cameraDeviceId = 'camera-123'
mockVisionConfig.calibration = {
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
}
mockVisionConfig.enabled = true
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button).toHaveAttribute('data-vision-status', 'enabled')
})
})
describe('click behavior', () => {
it('opens setup modal on click', () => {
render(<VisionIndicator />)
const button = screen.getByRole('button')
fireEvent.click(button)
expect(mockOpenVisionSetup).toHaveBeenCalledTimes(1)
})
it('opens setup modal on right-click', () => {
render(<VisionIndicator />)
const button = screen.getByRole('button')
fireEvent.contextMenu(button)
expect(mockOpenVisionSetup).toHaveBeenCalledTimes(1)
})
it('stops event propagation on click', () => {
const parentClickHandler = vi.fn()
render(
<div onClick={parentClickHandler}>
<VisionIndicator />
</div>
)
const button = screen.getByRole('button')
fireEvent.click(button)
expect(parentClickHandler).not.toHaveBeenCalled()
})
})
describe('accessibility', () => {
it('has appropriate title based on status', () => {
mockVisionConfig.cameraDeviceId = null
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button).toHaveAttribute('title', expect.stringContaining('not configured'))
})
it('updates title when vision is enabled', () => {
mockVisionConfig.cameraDeviceId = 'camera-123'
mockVisionConfig.calibration = {
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
}
mockVisionConfig.enabled = true
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button).toHaveAttribute('title', expect.stringContaining('enabled'))
})
})
describe('positioning', () => {
it('uses bottom-right position by default', () => {
render(<VisionIndicator />)
const button = screen.getByRole('button')
expect(button.style.position).toBe('absolute')
})
it('accepts top-left position', () => {
render(<VisionIndicator position="top-left" />)
const button = screen.getByRole('button')
expect(button.style.position).toBe('absolute')
})
})
})

View File

@@ -0,0 +1,613 @@
'use client'
import type { ReactNode } from 'react'
import { useCallback, useEffect, useRef, useState } from 'react'
import { css } from '../../../styled-system/css'
import type { ParsedProblem, BoundingBox } from '@/lib/worksheet-parsing'
interface BoundingBoxOverlayProps {
/** The problems with bounding box data */
problems: ParsedProblem[]
/** Currently selected problem index (null if none) */
selectedIndex: number | null
/** Callback when a problem is clicked */
onSelectProblem: (index: number | null) => void
/** The image element to overlay on */
imageRef: React.RefObject<HTMLImageElement | null>
/** Show debug info (raw coordinates, image dimensions) */
debug?: boolean
/** Set of problem indices selected for re-parsing */
selectedForReparse?: Set<number>
/** Callback when a problem is toggled for re-parsing */
onToggleReparse?: (index: number) => void
/** Adjusted bounding boxes (overrides original when present) */
adjustedBoxes?: Map<number, BoundingBox>
/** Callback when a bounding box is adjusted */
onAdjustBox?: (index: number, box: BoundingBox) => void
}
/** Handle positions for resize */
type HandlePosition = 'nw' | 'ne' | 'sw' | 'se' | 'n' | 's' | 'e' | 'w'
/** State for drag/resize operations */
interface DragState {
type: 'move' | 'resize'
index: number
handle?: HandlePosition
startX: number
startY: number
startBox: BoundingBox
}
/**
* Calculate the actual rendered dimensions of an image with object-fit: contain
* Returns the offset and size of the actual image content within the element
*/
function getContainedImageDimensions(img: HTMLImageElement): {
offsetX: number
offsetY: number
width: number
height: number
} {
const naturalRatio = img.naturalWidth / img.naturalHeight
const elementRatio = img.clientWidth / img.clientHeight
let width: number
let height: number
if (naturalRatio > elementRatio) {
// Image is wider than container - letterboxed top/bottom
width = img.clientWidth
height = img.clientWidth / naturalRatio
} else {
// Image is taller than container - letterboxed left/right
height = img.clientHeight
width = img.clientHeight * naturalRatio
}
const offsetX = (img.clientWidth - width) / 2
const offsetY = (img.clientHeight - height) / 2
return { offsetX, offsetY, width, height }
}
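// Worked example (editor's note, hypothetical numbers): a 4000x3000 image in an
// 800x500 element gives naturalRatio ≈ 1.33 < elementRatio = 1.6, so the else
// branch runs: height = 500, width = 500 * 1.33 ≈ 667, offsetX ≈ (800 - 667) / 2
// ≈ 67, offsetY = 0; the image is pillarboxed left/right by roughly 67px.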
/**
* BoundingBoxOverlay - SVG overlay that draws bounding boxes on worksheet images
*
* Uses normalized coordinates (0-1) from the parsing results to draw boxes
* that highlight where each problem was detected on the worksheet.
*
* Features:
* - All problems shown with semi-transparent boxes
* - Selected problem highlighted with thicker border
* - Click on a box to select that problem
* - Automatically sizes to match the underlying image
*/
export function BoundingBoxOverlay({
problems,
selectedIndex,
onSelectProblem,
imageRef,
debug = false,
selectedForReparse = new Set(),
onToggleReparse,
adjustedBoxes = new Map(),
onAdjustBox,
}: BoundingBoxOverlayProps): ReactNode {
const [dimensions, setDimensions] = useState({
elementWidth: 0,
elementHeight: 0,
// Actual image content dimensions (accounting for object-fit: contain)
offsetX: 0,
offsetY: 0,
contentWidth: 0,
contentHeight: 0,
// Natural image dimensions (for debug display)
naturalWidth: 0,
naturalHeight: 0,
})
const containerRef = useRef<HTMLDivElement>(null)
const svgRef = useRef<SVGSVGElement>(null)
// Drag/resize state
const [dragState, setDragState] = useState<DragState | null>(null)
// Hover state for showing checkbox on hover
const [hoveredIndex, setHoveredIndex] = useState<number | null>(null)
// Update dimensions when image loads or resizes
const updateDimensions = useCallback(() => {
const img = imageRef.current
if (img?.complete && img.naturalWidth > 0) {
const contained = getContainedImageDimensions(img)
setDimensions({
elementWidth: img.clientWidth,
elementHeight: img.clientHeight,
offsetX: contained.offsetX,
offsetY: contained.offsetY,
contentWidth: contained.width,
contentHeight: contained.height,
naturalWidth: img.naturalWidth,
naturalHeight: img.naturalHeight,
})
}
}, [imageRef])
// Watch for image load and resize
useEffect(() => {
const img = imageRef.current
if (!img) return
// Update on load
if (img.complete) {
updateDimensions()
} else {
img.addEventListener('load', updateDimensions)
}
// Update on resize using ResizeObserver
const observer = new ResizeObserver(updateDimensions)
observer.observe(img)
return () => {
img.removeEventListener('load', updateDimensions)
observer.disconnect()
}
}, [imageRef, updateDimensions])
// Convert normalized coordinates to pixel coordinates
// Accounts for object-fit: contain letterboxing
const toPixels = useCallback(
(box: BoundingBox) => ({
x: dimensions.offsetX + box.x * dimensions.contentWidth,
y: dimensions.offsetY + box.y * dimensions.contentHeight,
width: box.width * dimensions.contentWidth,
height: box.height * dimensions.contentHeight,
}),
[dimensions]
)
// Convert pixel coordinates back to normalized (0-1)
const toNormalized = useCallback(
(pixelBox: { x: number; y: number; width: number; height: number }): BoundingBox => ({
x: Math.max(0, Math.min(1, (pixelBox.x - dimensions.offsetX) / dimensions.contentWidth)),
y: Math.max(0, Math.min(1, (pixelBox.y - dimensions.offsetY) / dimensions.contentHeight)),
width: Math.max(0.02, Math.min(1, pixelBox.width / dimensions.contentWidth)),
height: Math.max(0.02, Math.min(1, pixelBox.height / dimensions.contentHeight)),
}),
[dimensions]
)
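// Round-trip example (editor's note, using the hypothetical dimensions above:
// contentWidth 667, contentHeight 500, offsetX 67, offsetY 0): the normalized box
// { x: 0.5, y: 0.5, width: 0.1, height: 0.1 } maps to pixels
// { x: 400.5, y: 250, width: 66.7, height: 50 }, and toNormalized(toPixels(box))
// recovers the original values, modulo the clamping to [0, 1] and the 0.02
// minimum width/height.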
// Get the effective bounding box (adjusted or original)
const getEffectiveBox = useCallback(
(index: number, problem: ParsedProblem): BoundingBox => {
return adjustedBoxes.get(index) ?? problem.problemBoundingBox
},
[adjustedBoxes]
)
// Handle mouse down on box (start drag)
const handleMouseDown = useCallback(
(e: React.MouseEvent, index: number, handle?: HandlePosition) => {
// Only allow drag/resize for boxes that are selected for reparse
if (!selectedForReparse.has(index) || !onAdjustBox) return
e.preventDefault()
e.stopPropagation()
const problem = problems[index]
const box = getEffectiveBox(index, problem)
setDragState({
type: handle ? 'resize' : 'move',
index,
handle,
startX: e.clientX,
startY: e.clientY,
startBox: { ...box },
})
},
[selectedForReparse, onAdjustBox, problems, getEffectiveBox]
)
// Handle mouse move (drag/resize)
const handleMouseMove = useCallback(
(e: React.MouseEvent) => {
if (!dragState || !onAdjustBox) return
const dx = (e.clientX - dragState.startX) / dimensions.contentWidth
const dy = (e.clientY - dragState.startY) / dimensions.contentHeight
let newBox: BoundingBox
if (dragState.type === 'move') {
// Move the entire box
newBox = {
x: Math.max(0, Math.min(1 - dragState.startBox.width, dragState.startBox.x + dx)),
y: Math.max(0, Math.min(1 - dragState.startBox.height, dragState.startBox.y + dy)),
width: dragState.startBox.width,
height: dragState.startBox.height,
}
} else {
// Resize based on handle
const { handle, startBox } = dragState
let x = startBox.x
let y = startBox.y
let width = startBox.width
let height = startBox.height
// Adjust based on which handle is being dragged
if (handle?.includes('w')) {
const newX = Math.max(0, Math.min(startBox.x + startBox.width - 0.02, startBox.x + dx))
width = startBox.width - (newX - startBox.x)
x = newX
}
if (handle?.includes('e')) {
width = Math.max(0.02, Math.min(1 - startBox.x, startBox.width + dx))
}
if (handle?.includes('n')) {
const newY = Math.max(0, Math.min(startBox.y + startBox.height - 0.02, startBox.y + dy))
height = startBox.height - (newY - startBox.y)
y = newY
}
if (handle?.includes('s')) {
height = Math.max(0.02, Math.min(1 - startBox.y, startBox.height + dy))
}
newBox = { x, y, width, height }
}
onAdjustBox(dragState.index, newBox)
},
[dragState, onAdjustBox, dimensions]
)
// Handle mouse up (end drag)
const handleMouseUp = useCallback(() => {
setDragState(null)
}, [])
// Add global mouse listeners when dragging
useEffect(() => {
if (!dragState) return
const handleGlobalMouseMove = (e: MouseEvent) => {
if (!dragState || !onAdjustBox) return
const dx = (e.clientX - dragState.startX) / dimensions.contentWidth
const dy = (e.clientY - dragState.startY) / dimensions.contentHeight
let newBox: BoundingBox
if (dragState.type === 'move') {
newBox = {
x: Math.max(0, Math.min(1 - dragState.startBox.width, dragState.startBox.x + dx)),
y: Math.max(0, Math.min(1 - dragState.startBox.height, dragState.startBox.y + dy)),
width: dragState.startBox.width,
height: dragState.startBox.height,
}
} else {
const { handle, startBox } = dragState
let x = startBox.x
let y = startBox.y
let width = startBox.width
let height = startBox.height
if (handle?.includes('w')) {
const newX = Math.max(0, Math.min(startBox.x + startBox.width - 0.02, startBox.x + dx))
width = startBox.width - (newX - startBox.x)
x = newX
}
if (handle?.includes('e')) {
width = Math.max(0.02, Math.min(1 - startBox.x, startBox.width + dx))
}
if (handle?.includes('n')) {
const newY = Math.max(0, Math.min(startBox.y + startBox.height - 0.02, startBox.y + dy))
height = startBox.height - (newY - startBox.y)
y = newY
}
if (handle?.includes('s')) {
height = Math.max(0.02, Math.min(1 - startBox.y, startBox.height + dy))
}
newBox = { x, y, width, height }
}
onAdjustBox(dragState.index, newBox)
}
const handleGlobalMouseUp = () => {
setDragState(null)
}
window.addEventListener('mousemove', handleGlobalMouseMove)
window.addEventListener('mouseup', handleGlobalMouseUp)
return () => {
window.removeEventListener('mousemove', handleGlobalMouseMove)
window.removeEventListener('mouseup', handleGlobalMouseUp)
}
}, [dragState, onAdjustBox, dimensions])
// Don't render if we don't have valid dimensions
if (dimensions.contentWidth === 0 || dimensions.contentHeight === 0) {
return null
}
return (
<div
ref={containerRef}
data-element="bounding-box-overlay"
className={css({
position: 'absolute',
top: 0,
left: 0,
pointerEvents: 'none', // Allow clicks to pass through except on boxes
})}
style={{
width: dimensions.elementWidth,
height: dimensions.elementHeight,
}}
>
<svg
width={dimensions.elementWidth}
height={dimensions.elementHeight}
viewBox={`0 0 ${dimensions.elementWidth} ${dimensions.elementHeight}`}
className={css({ display: 'block' })}
>
{/* Debug: show actual image content bounds */}
{debug && (
<rect
x={dimensions.offsetX}
y={dimensions.offsetY}
width={dimensions.contentWidth}
height={dimensions.contentHeight}
fill="none"
stroke="cyan"
strokeWidth={2}
strokeDasharray="8 4"
/>
)}
{/* Render boxes in two passes: unselected first, then selected on top */}
{[false, true].map((renderSelected) =>
problems.map((problem, index) => {
if (!problem.problemBoundingBox) return null
const isMarkedForReparse = selectedForReparse.has(index)
// First pass: render unselected boxes
// Second pass: render selected boxes (on top for drag/resize)
if (renderSelected !== isMarkedForReparse) return null
// Use adjusted box if available, otherwise original
const box = getEffectiveBox(index, problem)
const pixels = toPixels(box)
const isSelected = selectedIndex === index
const isCorrect = problem.studentAnswer === problem.correctAnswer
const hasAnswer = problem.studentAnswer != null
const isAdjusted = adjustedBoxes.has(index)
const canDrag = isMarkedForReparse && onAdjustBox
const isHovered = hoveredIndex === index
const hasAnySelections = selectedForReparse.size > 0
// Determine box color based on status
let strokeColor: string
let fillColor: string
if (isMarkedForReparse) {
// Orange for problems marked for re-parsing
strokeColor = '#f97316' // orange-500
fillColor = 'rgba(249, 115, 22, 0.2)'
} else if (isSelected) {
strokeColor = '#3b82f6' // blue-500
fillColor = 'rgba(59, 130, 246, 0.15)'
} else if (!hasAnswer) {
strokeColor = '#6b7280' // gray-500
fillColor = 'rgba(107, 114, 128, 0.08)'
} else if (isCorrect) {
strokeColor = '#22c55e' // green-500
fillColor = 'rgba(34, 197, 94, 0.08)'
} else {
strokeColor = '#ef4444' // red-500
fillColor = 'rgba(239, 68, 68, 0.08)'
}
const handleBoxClick = (e: React.MouseEvent) => {
e.stopPropagation()
// Toggle highlight selection (for viewing problem details)
onSelectProblem(isSelected ? null : index)
}
const handleCheckboxClick = (e: React.MouseEvent) => {
e.stopPropagation()
if (onToggleReparse) {
onToggleReparse(index)
}
}
// Resize handle size
const handleSize = 10
// Handle positions for resize handles (corners only for simplicity)
const handles: Array<{ pos: HandlePosition; x: number; y: number; cursor: string }> = [
{ pos: 'nw', x: pixels.x, y: pixels.y, cursor: 'nwse-resize' },
{ pos: 'ne', x: pixels.x + pixels.width, y: pixels.y, cursor: 'nesw-resize' },
{ pos: 'sw', x: pixels.x, y: pixels.y + pixels.height, cursor: 'nesw-resize' },
{
pos: 'se',
x: pixels.x + pixels.width,
y: pixels.y + pixels.height,
cursor: 'nwse-resize',
},
]
return (
<g
key={`${renderSelected ? 'selected' : 'unselected'}-${problem.problemNumber ?? index}`}
>
{/* Background fill */}
<rect
x={pixels.x}
y={pixels.y}
width={pixels.width}
height={pixels.height}
fill={fillColor}
rx={4}
ry={4}
/>
{/* Border - draggable when selected for reparse */}
<rect
x={pixels.x}
y={pixels.y}
width={pixels.width}
height={pixels.height}
fill="none"
stroke={strokeColor}
strokeWidth={isMarkedForReparse ? 3 : isSelected ? 3 : 1.5}
strokeDasharray={
isAdjusted ? 'none' : isMarkedForReparse || isSelected ? 'none' : '4 2'
}
rx={4}
ry={4}
style={{
pointerEvents: 'all',
cursor: canDrag ? 'move' : 'pointer',
}}
onClick={canDrag ? undefined : handleBoxClick}
onMouseDown={canDrag ? (e) => handleMouseDown(e, index) : undefined}
onMouseEnter={() => setHoveredIndex(index)}
onMouseLeave={() => setHoveredIndex((prev) => (prev === index ? null : prev))}
/>
{/* Resize handles for selected boxes */}
{canDrag &&
handles.map((handle) => (
<rect
key={handle.pos}
x={handle.x - handleSize / 2}
y={handle.y - handleSize / 2}
width={handleSize}
height={handleSize}
fill="#f97316"
stroke="#ea580c"
strokeWidth={1}
rx={2}
ry={2}
style={{ pointerEvents: 'all', cursor: handle.cursor }}
onMouseDown={(e) => handleMouseDown(e, index, handle.pos)}
/>
))}
{/* Adjusted indicator */}
{isAdjusted && isMarkedForReparse && (
<text
x={pixels.x + pixels.width - 4}
y={pixels.y + 14}
fill="#f97316"
fontSize={10}
fontWeight="bold"
textAnchor="end"
style={{ pointerEvents: 'none' }}
>
✏️
</text>
)}
{/* Checkbox indicator - show on hover or if selected */}
{onToggleReparse && (isHovered || isMarkedForReparse || hasAnySelections) && (
<g
style={{ opacity: isMarkedForReparse || isHovered ? 1 : 0.5 }}
onMouseEnter={() => setHoveredIndex(index)}
onMouseLeave={() => setHoveredIndex((prev) => (prev === index ? null : prev))}
>
<rect
x={pixels.x + pixels.width - 22}
y={pixels.y + 4}
width={18}
height={18}
fill={isMarkedForReparse ? '#f97316' : 'rgba(0, 0, 0, 0.6)'}
stroke={isMarkedForReparse ? '#ea580c' : '#9ca3af'}
strokeWidth={2}
rx={3}
ry={3}
style={{ pointerEvents: 'all', cursor: 'pointer' }}
onClick={handleCheckboxClick}
/>
{isMarkedForReparse && (
<text
x={pixels.x + pixels.width - 13}
y={pixels.y + 17}
fill="white"
fontSize={12}
fontWeight="bold"
textAnchor="middle"
style={{ pointerEvents: 'none' }}
>
✓
</text>
)}
</g>
)}
{/* Problem number label */}
<text
x={pixels.x + 4}
y={pixels.y + 14}
fill={strokeColor}
fontSize={12}
fontWeight={isMarkedForReparse || isSelected ? 'bold' : 'normal'}
fontFamily="monospace"
style={{ pointerEvents: 'none' }}
>
#{index + 1}
</text>
</g>
)
})
)}
</svg>
{/* Debug panel showing dimensions and selected box coordinates */}
{debug && (
<div
data-element="bbox-debug-panel"
className={css({
position: 'absolute',
bottom: 0,
left: 0,
right: 0,
padding: 2,
backgroundColor: 'rgba(0, 0, 0, 0.85)',
color: 'white',
fontSize: 'xs',
fontFamily: 'mono',
maxHeight: '150px',
overflow: 'auto',
pointerEvents: 'auto',
})}
>
<div className={css({ marginBottom: 1 })}>
<strong>Image Debug:</strong> natural={dimensions.naturalWidth}x
{dimensions.naturalHeight} | element={dimensions.elementWidth}x
{dimensions.elementHeight} | content=
{Math.round(dimensions.contentWidth)}x{Math.round(dimensions.contentHeight)} | offset=(
{Math.round(dimensions.offsetX)},{Math.round(dimensions.offsetY)})
</div>
{selectedIndex !== null && problems[selectedIndex] && (
<div>
<strong>Selected #{selectedIndex + 1}:</strong> raw=(
{problems[selectedIndex].problemBoundingBox.x.toFixed(3)},{' '}
{problems[selectedIndex].problemBoundingBox.y.toFixed(3)},{' '}
{problems[selectedIndex].problemBoundingBox.width.toFixed(3)},{' '}
{problems[selectedIndex].problemBoundingBox.height.toFixed(3)}) | pixels=(
{Math.round(toPixels(problems[selectedIndex].problemBoundingBox).x)},{' '}
{Math.round(toPixels(problems[selectedIndex].problemBoundingBox).y)},{' '}
{Math.round(toPixels(problems[selectedIndex].problemBoundingBox).width)}x
{Math.round(toPixels(problems[selectedIndex].problemBoundingBox).height)})
</div>
)}
<div className={css({ marginTop: 1, color: 'cyan' })}>
Cyan dashed border = actual image content bounds (accounting for object-fit: contain)
</div>
</div>
)}
</div>
)
}
export default BoundingBoxOverlay
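A minimal sketch of wiring the overlay over a worksheet image; the wrapper component, state, and inline styling are illustrative, not part of this diff:

import { useRef, useState } from 'react'

function WorksheetWithOverlay({ problems, src }: { problems: ParsedProblem[]; src: string }) {
  const imageRef = useRef<HTMLImageElement>(null)
  const [selected, setSelected] = useState<number | null>(null)
  return (
    // position: relative so the absolutely positioned overlay tracks the image
    <div style={{ position: 'relative' }}>
      <img ref={imageRef} src={src} alt="Worksheet" style={{ width: '100%', objectFit: 'contain' }} />
      <BoundingBoxOverlay
        problems={problems}
        selectedIndex={selected}
        onSelectProblem={setSelected}
        imageRef={imageRef}
      />
    </div>
  )
}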

View File

@@ -0,0 +1,224 @@
'use client'
import { useCallback, useEffect, useMemo } from 'react'
import { Z_INDEX } from '@/constants/zIndex'
import { css } from '../../../styled-system/css'
export interface DebugContentModalProps {
/** Modal title */
title: string
/** Content to display (raw text) */
content: string
/** Whether the modal is open */
isOpen: boolean
/** Callback when modal should close */
onClose: () => void
/** Content type for syntax highlighting */
contentType?: 'text' | 'json' | 'markdown'
}
/**
* Simple JSON syntax highlighter using regex
* Returns HTML with spans for different token types
*/
function highlightJson(json: string): string {
// Escape HTML entities first
const escaped = json.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
// Apply syntax highlighting
return (
escaped
// Strings (including property names in quotes)
.replace(/"([^"\\]|\\.)*"/g, (match) => `<span class="json-string">${match}</span>`)
// Numbers
.replace(/\b(-?\d+\.?\d*([eE][+-]?\d+)?)\b/g, '<span class="json-number">$1</span>')
// Booleans and null
.replace(/\b(true|false|null)\b/g, '<span class="json-literal">$1</span>')
)
}
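// Example (editor's sketch): highlightJson('{"a": 1, "ok": true}') yields
// '{<span class="json-string">"a"</span>: <span class="json-number">1</span>,
// <span class="json-string">"ok"</span>: <span class="json-literal">true</span>}'.
// Caveat: because the passes run in sequence over the whole string, digits that
// appear inside an already-wrapped string value get wrapped again; acceptable
// for a debug view.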
/**
* DebugContentModal - Fullscreen modal for viewing raw debug content
*
* Shows the original text content with syntax highlighting for JSON.
* Does NOT render markdown - shows the raw text as-is.
*/
export function DebugContentModal({
title,
content,
isOpen,
onClose,
contentType = 'text',
}: DebugContentModalProps) {
// Handle escape key
useEffect(() => {
if (!isOpen) return
const handleKeyDown = (e: KeyboardEvent) => {
if (e.key === 'Escape') {
onClose()
}
}
window.addEventListener('keydown', handleKeyDown)
return () => window.removeEventListener('keydown', handleKeyDown)
}, [isOpen, onClose])
// Prevent body scroll when modal is open
useEffect(() => {
if (isOpen) {
document.body.style.overflow = 'hidden'
} else {
document.body.style.overflow = ''
}
return () => {
document.body.style.overflow = ''
}
}, [isOpen])
const handleBackdropClick = useCallback(
(e: React.MouseEvent) => {
if (e.target === e.currentTarget) {
onClose()
}
},
[onClose]
)
// Memoize highlighted content
const highlightedContent = useMemo(() => {
if (contentType === 'json') {
return highlightJson(content)
}
// For text/markdown, just escape HTML
return content.replace(/&/g, '&amp;').replace(/</g, '&lt;').replace(/>/g, '&gt;')
}, [content, contentType])
if (!isOpen) return null
return (
<div
data-component="debug-content-modal"
className={css({
position: 'fixed',
inset: 0,
backgroundColor: 'rgba(0, 0, 0, 0.9)',
display: 'flex',
flexDirection: 'column',
zIndex: Z_INDEX.MODAL + 10, // Above other modals
padding: 4,
})}
onClick={handleBackdropClick}
>
{/* Header */}
<div
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
padding: 4,
borderBottom: '1px solid',
borderColor: 'gray.600',
backgroundColor: 'gray.800',
borderRadius: 'lg lg 0 0',
flexShrink: 0,
})}
>
<h2
className={css({
fontSize: 'lg',
fontWeight: 'semibold',
color: 'white',
})}
>
{title}
</h2>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 3,
})}
>
<span
className={css({
fontSize: 'sm',
color: 'gray.400',
fontFamily: 'mono',
})}
>
{content.length.toLocaleString()} chars
</span>
<button
type="button"
onClick={onClose}
className={css({
padding: 2,
borderRadius: 'md',
backgroundColor: 'gray.700',
color: 'gray.300',
border: 'none',
cursor: 'pointer',
fontSize: 'lg',
lineHeight: 1,
_hover: {
backgroundColor: 'gray.600',
color: 'white',
},
})}
aria-label="Close"
>
✕
</button>
</div>
</div>
{/* Content - Raw text display */}
<div
className={css({
flex: 1,
overflow: 'auto',
backgroundColor: '#1a1a2e', // Dark blue-ish background for code
borderRadius: '0 0 lg lg',
})}
onClick={(e) => e.stopPropagation()}
>
<pre
className={css({
margin: 0,
padding: 4,
fontFamily: 'mono',
fontSize: 'sm',
lineHeight: 1.6,
whiteSpace: 'pre-wrap',
wordBreak: 'break-word',
color: '#e0e0e0', // Light gray text
// JSON syntax highlighting colors
'& .json-string': {
color: '#a8e6a3', // Light green for strings
},
'& .json-number': {
color: '#f4a460', // Orange for numbers
},
'& .json-literal': {
color: '#87ceeb', // Light blue for true/false/null
},
})}
dangerouslySetInnerHTML={{ __html: highlightedContent }}
/>
</div>
{/* Footer hint */}
<div
className={css({
textAlign: 'center',
padding: 2,
fontSize: 'xs',
color: 'gray.500',
})}
>
Press Esc to close
</div>
</div>
)
}

View File

@@ -0,0 +1,749 @@
'use client'
/**
* EditableProblemRow - Inline editor for parsed worksheet problems
*
* Allows users to correct:
* - Problem terms (the addends/subtrahends)
* - Student's answer
* - Mark problem for exclusion
*/
import type { ReactNode } from 'react'
import { useState, useCallback, useRef, useEffect } from 'react'
import { css } from '../../../styled-system/css'
import type { ParsedProblem } from '@/lib/worksheet-parsing'
export interface ProblemCorrection {
problemNumber: number
correctedTerms?: number[] | null
correctedStudentAnswer?: number | null
shouldExclude?: boolean
}
export interface EditableProblemRowProps {
/** The problem data */
problem: ParsedProblem
/** The 0-based index of this problem in the list */
index: number
/** Whether this problem is currently selected (highlighted on image) */
isSelected: boolean
/** Callback when this problem is clicked (for highlighting) */
onSelect: () => void
/** Callback when corrections are submitted */
onSubmitCorrection: (correction: ProblemCorrection) => void
/** Whether a correction is currently being saved */
isSaving: boolean
/** Dark mode styling */
isDark?: boolean
/** Whether any problems are selected (shows all checkboxes when true) */
hasSelections?: boolean
/** Whether this problem is checked for re-parsing */
isCheckedForReparse?: boolean
/** Callback when checkbox is toggled */
onToggleReparse?: (index: number) => void
/** Optional cropped thumbnail URL for this problem */
thumbnailUrl?: string
}
/**
* Parse a terms string like "45 + 27 - 12" into an array of numbers
*/
function parseTermsString(input: string): number[] | null {
const cleaned = input.trim()
if (!cleaned) return null
// Split by + or - while keeping the operator
const parts = cleaned.split(/([+-])/).filter((p) => p.trim())
const terms: number[] = []
let sign = 1
for (const part of parts) {
const trimmed = part.trim()
if (trimmed === '+') {
sign = 1
} else if (trimmed === '-') {
sign = -1
} else {
const num = parseInt(trimmed, 10)
if (Number.isNaN(num)) return null
terms.push(sign * num)
sign = 1 // Reset sign after using
}
}
return terms.length > 0 ? terms : null
}
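// Examples (editor's note):
//   parseTermsString('45 + 27 - 12') // -> [45, 27, -12]
//   parseTermsString('45 + x')       // -> null (non-numeric term)
//   parseTermsString('')             // -> null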
/**
* Format terms array into a string like "45 + 27 - 12"
*/
function formatTerms(terms: number[]): string {
if (terms.length === 0) return ''
if (terms.length === 1) return terms[0].toString()
return terms
.map((term, i) => {
if (i === 0) return term.toString()
if (term >= 0) return `+ ${term}`
return `- ${Math.abs(term)}`
})
.join(' ')
}
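// Worked example for the two helpers above (verifiable from their logic):
//   parseTermsString('45 + 27 - 12') // => [45, 27, -12]
//   formatTerms([45, 27, -12])       // => '45 + 27 - 12' (round-trips)
//   parseTermsString('45 + abc')     // => null (non-numeric term rejected)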
export function EditableProblemRow({
problem,
index,
isSelected,
onSelect,
onSubmitCorrection,
isSaving,
isDark = true,
hasSelections = false,
isCheckedForReparse = false,
onToggleReparse,
thumbnailUrl,
}: EditableProblemRowProps): ReactNode {
const [isEditing, setIsEditing] = useState(false)
const [isHovered, setIsHovered] = useState(false)
const [termsInput, setTermsInput] = useState(formatTerms(problem.terms))
const [studentAnswerInput, setStudentAnswerInput] = useState(
problem.studentAnswer?.toString() ?? ''
)
const [termsError, setTermsError] = useState<string | null>(null)
const [answerError, setAnswerError] = useState<string | null>(null)
const termsInputRef = useRef<HTMLInputElement>(null)
// Focus terms input when entering edit mode
useEffect(() => {
if (isEditing && termsInputRef.current) {
termsInputRef.current.focus()
termsInputRef.current.select()
}
}, [isEditing])
// Reset form when problem changes
useEffect(() => {
setTermsInput(formatTerms(problem.terms))
setStudentAnswerInput(problem.studentAnswer?.toString() ?? '')
setTermsError(null)
setAnswerError(null)
}, [problem])
const handleEdit = useCallback((e: React.MouseEvent) => {
e.stopPropagation()
setIsEditing(true)
}, [])
const handleCancel = useCallback(
(e: React.MouseEvent) => {
e.stopPropagation()
setIsEditing(false)
setTermsInput(formatTerms(problem.terms))
setStudentAnswerInput(problem.studentAnswer?.toString() ?? '')
setTermsError(null)
setAnswerError(null)
},
[problem]
)
const handleSave = useCallback(
(e: React.MouseEvent) => {
e.stopPropagation()
// Validate terms
const parsedTerms = parseTermsString(termsInput)
if (!parsedTerms || parsedTerms.length < 2) {
setTermsError('Enter at least 2 terms (e.g., "45 + 27")')
return
}
// Validate student answer (can be empty for "no answer")
let parsedAnswer: number | null = null
if (studentAnswerInput.trim()) {
parsedAnswer = parseInt(studentAnswerInput.trim(), 10)
if (Number.isNaN(parsedAnswer)) {
setAnswerError('Enter a valid number or leave blank')
return
}
}
// Check if anything actually changed
const termsChanged = JSON.stringify(parsedTerms) !== JSON.stringify(problem.terms)
const answerChanged = parsedAnswer !== problem.studentAnswer
if (!termsChanged && !answerChanged) {
// Nothing changed, just exit edit mode
setIsEditing(false)
return
}
// Submit correction
onSubmitCorrection({
problemNumber: problem.problemNumber,
correctedTerms: termsChanged ? parsedTerms : undefined,
correctedStudentAnswer: answerChanged ? parsedAnswer : undefined,
})
setIsEditing(false)
},
[termsInput, studentAnswerInput, problem, onSubmitCorrection]
)
const handleExclude = useCallback(
(e: React.MouseEvent) => {
e.stopPropagation()
onSubmitCorrection({
problemNumber: problem.problemNumber,
shouldExclude: true,
})
},
[problem.problemNumber, onSubmitCorrection]
)
const handleKeyDown = useCallback(
(e: React.KeyboardEvent) => {
if (e.key === 'Enter') {
handleSave(e as unknown as React.MouseEvent)
} else if (e.key === 'Escape') {
handleCancel(e as unknown as React.MouseEvent)
}
},
[handleSave, handleCancel]
)
const handleCheckboxClick = useCallback(
(e: React.MouseEvent) => {
e.stopPropagation()
onToggleReparse?.(index)
},
[index, onToggleReparse]
)
const isCorrect =
problem.studentAnswer !== null && problem.studentAnswer === problem.correctAnswer
const isIncorrect =
problem.studentAnswer !== null && problem.studentAnswer !== problem.correctAnswer
const isLowConfidence = Math.min(problem.termsConfidence, problem.studentAnswerConfidence) < 0.7
// Edit mode UI
if (isEditing) {
return (
<div
data-element="problem-row-editing"
data-problem-index={index}
className={css({
padding: 3,
backgroundColor: isDark ? 'blue.900' : 'blue.50',
borderRadius: 'lg',
border: '2px solid token(colors.blue.500)',
})}
onClick={(e) => e.stopPropagation()}
>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
marginBottom: 3,
})}
>
<span
className={css({
fontSize: 'sm',
fontWeight: 'medium',
color: isDark ? 'gray.400' : 'gray.600',
})}
>
#{index + 1}
</span>
<span
className={css({
fontSize: 'xs',
px: 2,
py: 0.5,
borderRadius: 'md',
backgroundColor: isDark ? 'blue.800' : 'blue.100',
color: isDark ? 'blue.300' : 'blue.700',
})}
>
Editing
</span>
</div>
{/* Terms input */}
<div className={css({ marginBottom: 3 })}>
<label
className={css({
display: 'block',
fontSize: 'xs',
fontWeight: 'medium',
color: isDark ? 'gray.400' : 'gray.600',
marginBottom: 1,
})}
>
Problem terms (e.g., "45 + 27 - 12")
</label>
<input
ref={termsInputRef}
type="text"
value={termsInput}
onChange={(e) => {
setTermsInput(e.target.value)
setTermsError(null)
}}
onKeyDown={handleKeyDown}
className={css({
width: '100%',
px: 3,
py: 2,
fontSize: 'sm',
fontFamily: 'mono',
backgroundColor: isDark ? 'gray.800' : 'white',
color: isDark ? 'white' : 'gray.900',
border: '1px solid',
borderColor: termsError ? 'red.500' : isDark ? 'gray.600' : 'gray.300',
borderRadius: 'md',
_focus: {
outline: 'none',
borderColor: 'blue.500',
boxShadow: '0 0 0 2px token(colors.blue.500/20)',
},
})}
placeholder="45 + 27 - 12"
/>
{termsError && (
<p
className={css({
fontSize: 'xs',
color: 'red.400',
marginTop: 1,
})}
>
{termsError}
</p>
)}
</div>
{/* Student answer input */}
<div className={css({ marginBottom: 3 })}>
<label
className={css({
display: 'block',
fontSize: 'xs',
fontWeight: 'medium',
color: isDark ? 'gray.400' : 'gray.600',
marginBottom: 1,
})}
>
Student's answer (leave blank if no answer)
</label>
<input
type="text"
value={studentAnswerInput}
onChange={(e) => {
setStudentAnswerInput(e.target.value)
setAnswerError(null)
}}
onKeyDown={handleKeyDown}
className={css({
width: '100px',
px: 3,
py: 2,
fontSize: 'sm',
fontFamily: 'mono',
backgroundColor: isDark ? 'gray.800' : 'white',
color: isDark ? 'white' : 'gray.900',
border: '1px solid',
borderColor: answerError ? 'red.500' : isDark ? 'gray.600' : 'gray.300',
borderRadius: 'md',
_focus: {
outline: 'none',
borderColor: 'blue.500',
boxShadow: '0 0 0 2px token(colors.blue.500/20)',
},
})}
placeholder="60"
/>
{answerError && (
<p
className={css({
fontSize: 'xs',
color: 'red.400',
marginTop: 1,
})}
>
{answerError}
</p>
)}
</div>
{/* Correct answer display */}
<div
className={css({
fontSize: 'xs',
color: isDark ? 'gray.500' : 'gray.500',
marginBottom: 3,
})}
>
Correct answer (from terms):{' '}
<span className={css({ fontFamily: 'mono', fontWeight: 'medium' })}>
{parseTermsString(termsInput)?.reduce((a, b) => a + b, 0) ?? '?'}
</span>
</div>
{/* Action buttons */}
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
})}
>
<button
type="button"
onClick={handleSave}
disabled={isSaving}
className={css({
px: 3,
py: 1.5,
fontSize: 'sm',
fontWeight: 'medium',
backgroundColor: 'green.600',
color: 'white',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
_hover: { backgroundColor: 'green.700' },
_disabled: { opacity: 0.5, cursor: 'wait' },
})}
>
{isSaving ? 'Saving...' : 'Save'}
</button>
<button
type="button"
onClick={handleCancel}
disabled={isSaving}
className={css({
px: 3,
py: 1.5,
fontSize: 'sm',
fontWeight: 'medium',
backgroundColor: isDark ? 'gray.700' : 'gray.200',
color: isDark ? 'white' : 'gray.700',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
_hover: { backgroundColor: isDark ? 'gray.600' : 'gray.300' },
_disabled: { opacity: 0.5, cursor: 'not-allowed' },
})}
>
Cancel
</button>
<div className={css({ flex: 1 })} />
<button
type="button"
onClick={handleExclude}
disabled={isSaving}
className={css({
px: 3,
py: 1.5,
fontSize: 'sm',
fontWeight: 'medium',
backgroundColor: 'transparent',
color: 'red.400',
border: '1px solid token(colors.red.400)',
borderRadius: 'md',
cursor: 'pointer',
_hover: { backgroundColor: 'red.900/30' },
_disabled: { opacity: 0.5, cursor: 'not-allowed' },
})}
>
Exclude
</button>
</div>
</div>
)
}
// Show the checkbox when the row is hovered, when any problems are selected, or when this row is checked
const showCheckbox = isHovered || hasSelections || isCheckedForReparse
// Determine the background color based on state
const bgColor = isSelected
? isDark
? 'blue.900'
: 'blue.50'
: isCheckedForReparse
? isDark
? 'blue.900/50'
: 'blue.50'
: isLowConfidence
? isDark
? 'yellow.900/30'
: 'yellow.50'
: isDark
? 'gray.700'
: 'gray.100'
const hoverBgColor = isSelected
? isDark
? 'blue.900'
: 'blue.100'
: isDark
? 'gray.600'
: 'gray.200'
// Display mode UI
return (
<div
data-element="problem-row-container"
data-problem-index={index}
data-selected={isSelected}
data-checked={isCheckedForReparse}
className={css({
display: 'flex',
alignItems: 'stretch',
gap: 0,
borderRadius: 'lg',
backgroundColor: bgColor,
border: isSelected
? '2px solid token(colors.blue.500)'
: isCheckedForReparse
? '2px solid token(colors.blue.500/50)'
: '2px solid transparent',
cursor: 'pointer',
transition: 'all 0.15s',
_hover: {
backgroundColor: hoverBgColor,
},
})}
onMouseEnter={() => setIsHovered(true)}
onMouseLeave={() => setIsHovered(false)}
>
{/* Checkbox - always takes up space when onToggleReparse is provided, but only visible on hover/selection */}
{onToggleReparse && (
<button
type="button"
data-action="toggle-reparse"
onClick={handleCheckboxClick}
className={css({
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
width: '40px',
flexShrink: 0,
backgroundColor: 'transparent',
borderRadius: 'lg 0 0 lg',
border: 'none',
cursor: 'pointer',
transition: 'opacity 0.15s',
// Hide checkbox visually but keep space reserved
opacity: showCheckbox ? (isCheckedForReparse || isHovered ? 1 : 0.5) : 0,
pointerEvents: showCheckbox ? 'auto' : 'none',
})}
>
<div
className={css({
width: '20px',
height: '20px',
borderRadius: 'sm',
border: '2px solid',
borderColor: isCheckedForReparse ? 'blue.400' : isDark ? 'gray.500' : 'gray.400',
backgroundColor: isCheckedForReparse ? 'blue.500' : 'transparent',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
color: 'white',
fontSize: 'xs',
fontWeight: 'bold',
})}
>
{isCheckedForReparse && '✓'}
</div>
</button>
)}
<button
type="button"
data-element="problem-row"
onClick={onSelect}
className={css({
flex: 1,
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
gap: 3,
padding: 3,
backgroundColor: 'transparent',
borderRadius: onToggleReparse ? '0 lg lg 0' : 'lg',
border: 'none',
cursor: 'pointer',
textAlign: 'left',
})}
>
{/* Small thumbnail of cropped problem region */}
{thumbnailUrl && (
<div
className={css({
width: '48px',
height: '32px',
flexShrink: 0,
borderRadius: 'sm',
overflow: 'hidden',
backgroundColor: 'gray.900',
})}
>
<img
src={thumbnailUrl}
alt=""
className={css({
width: '100%',
height: '100%',
objectFit: 'cover',
})}
/>
</div>
)}
<div className={css({ display: 'flex', flexDirection: 'column', gap: 1, flex: 1 })}>
{/* Problem expression */}
<div
className={css({
fontFamily: 'mono',
fontSize: 'sm',
color: isDark ? 'white' : 'gray.900',
})}
>
#{index + 1}: {formatTerms(problem.terms)} ={' '}
<span
className={css({
color:
problem.studentAnswer === null
? isDark
? 'gray.500'
: 'gray.400'
: isCorrect
? isDark
? 'green.400'
: 'green.600'
: isDark
? 'red.400'
: 'red.600',
})}
>
{problem.studentAnswer ?? '?'}
</span>
</div>
{/* Correct answer and status */}
<div
className={css({
fontSize: 'xs',
color: isCorrect
? isDark
? 'green.400'
: 'green.600'
: problem.studentAnswer == null
? isDark
? 'gray.500'
: 'gray.500'
: isDark
? 'red.400'
: 'red.600',
})}
>
{isCorrect
? '✓ Correct'
: problem.studentAnswer == null
? 'No answer detected'
: `✗ Incorrect (correct: ${problem.correctAnswer})`}
</div>
</div>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: 2,
})}
>
{/* Low confidence warning */}
{isLowConfidence && (
<span
className={css({
fontSize: 'xs',
color: isDark ? 'yellow.400' : 'yellow.600',
})}
title={`${Math.round(Math.min(problem.termsConfidence, problem.studentAnswerConfidence) * 100)}% confidence`}
>
⚠️
</span>
)}
{/* Confidence indicator */}
<div
className={css({
px: 2,
py: 1,
fontSize: 'xs',
fontWeight: 'medium',
borderRadius: 'md',
backgroundColor:
problem.studentAnswerConfidence >= 0.8
? isDark
? 'green.900'
: 'green.100'
: problem.studentAnswerConfidence >= 0.5
? isDark
? 'yellow.900'
: 'yellow.100'
: isDark
? 'red.900'
: 'red.100',
color:
problem.studentAnswerConfidence >= 0.8
? isDark
? 'green.300'
: 'green.700'
: problem.studentAnswerConfidence >= 0.5
? isDark
? 'yellow.300'
: 'yellow.700'
: isDark
? 'red.300'
: 'red.700',
})}
>
{Math.round(problem.studentAnswerConfidence * 100)}%
</div>
{/* Edit button */}
<button
type="button"
data-action="edit-problem"
onClick={handleEdit}
className={css({
px: 2,
py: 1,
fontSize: 'xs',
fontWeight: 'medium',
backgroundColor: isDark ? 'gray.600' : 'gray.200',
color: isDark ? 'white' : 'gray.700',
border: 'none',
borderRadius: 'md',
cursor: 'pointer',
_hover: {
backgroundColor: isDark ? 'gray.500' : 'gray.300',
},
})}
>
Edit
</button>
</div>
</button>
</div>
)
}
export default EditableProblemRow

View File

@@ -0,0 +1,401 @@
'use client'
/**
* ParsedProblemsList - Displays extracted problems from worksheet parsing
*
* Shows a compact list of parsed problems with:
* - Problem number and terms (e.g., "45 + 27")
* - Student answer with correct/incorrect indicator
* - Low confidence highlighting
* - Needs review badge
*/
import { css } from '../../../styled-system/css'
import type { ParsedProblem, WorksheetParsingResult } from '@/lib/worksheet-parsing'
export interface ParsedProblemsListProps {
/** The parsed result from worksheet parsing */
result: WorksheetParsingResult
/** Whether to use dark mode styling */
isDark: boolean
/** Optional callback when a problem is clicked (for highlighting on image) */
onProblemClick?: (problem: ParsedProblem) => void
/** Currently selected problem index (for highlighting) */
selectedProblemIndex?: number | null
/** Threshold below which confidence is considered "low" */
lowConfidenceThreshold?: number
}
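// Hypothetical usage sketch (wiring names are illustrative, not from this file):
//
//   <ParsedProblemsList
//     result={attachment.rawParsingResult}
//     isDark={theme === 'dark'}
//     onProblemClick={(p) => setSelectedIndex(p.problemNumber - 1)}
//     selectedProblemIndex={selectedIndex}
//   />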
/**
* Format terms into a readable string like "45 + 27 - 12"
*/
function formatTerms(terms: number[]): string {
if (terms.length === 0) return ''
if (terms.length === 1) return terms[0].toString()
return terms
.map((term, i) => {
if (i === 0) return term.toString()
if (term >= 0) return `+ ${term}`
return `- ${Math.abs(term)}`
})
.join(' ')
}
/**
* Get the minimum confidence for a problem (either terms or student answer)
*/
function getMinConfidence(problem: ParsedProblem): number {
// If student answer is null, only consider terms confidence
if (problem.studentAnswer === null) {
return problem.termsConfidence
}
return Math.min(problem.termsConfidence, problem.studentAnswerConfidence)
}
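// Example: termsConfidence 0.9 with studentAnswerConfidence 0.6 yields a
// minimum of 0.6, which falls below the default 0.7 threshold and flags the
// row as low confidence; with studentAnswer null, only termsConfidence counts.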
export function ParsedProblemsList({
result,
isDark,
onProblemClick,
selectedProblemIndex,
lowConfidenceThreshold = 0.7,
}: ParsedProblemsListProps) {
const { problems, needsReview, overallConfidence } = result
// Calculate summary stats
const totalProblems = problems.length
const answeredProblems = problems.filter((p) => p.studentAnswer !== null).length
const correctProblems = problems.filter(
(p) => p.studentAnswer !== null && p.studentAnswer === p.correctAnswer
).length
const lowConfidenceCount = problems.filter(
(p) => getMinConfidence(p) < lowConfidenceThreshold
).length
return (
<div
data-component="parsed-problems-list"
className={css({
borderRadius: '12px',
border: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
backgroundColor: isDark ? 'gray.800' : 'white',
overflow: 'hidden',
})}
>
{/* Header with summary */}
<div
data-element="header"
className={css({
padding: '0.75rem 1rem',
backgroundColor: isDark ? 'gray.750' : 'gray.50',
borderBottom: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.200',
display: 'flex',
alignItems: 'center',
justifyContent: 'space-between',
flexWrap: 'wrap',
gap: '0.5rem',
})}
>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: '0.75rem',
})}
>
<span
className={css({
fontSize: '0.875rem',
fontWeight: 'bold',
color: isDark ? 'white' : 'gray.800',
})}
>
{totalProblems} Problems
</span>
{answeredProblems > 0 && (
<span
className={css({
fontSize: '0.75rem',
color: isDark ? 'gray.400' : 'gray.600',
})}
>
{correctProblems}/{answeredProblems} correct
</span>
)}
</div>
<div
className={css({
display: 'flex',
alignItems: 'center',
gap: '0.5rem',
})}
>
{/* Needs review badge */}
{needsReview && (
<span
data-element="needs-review-badge"
className={css({
px: 2,
py: 0.5,
fontSize: '0.6875rem',
fontWeight: '600',
borderRadius: 'full',
backgroundColor: 'yellow.100',
color: 'yellow.800',
display: 'flex',
alignItems: 'center',
gap: '0.25rem',
})}
>
<span>⚠️</span> Needs Review
</span>
)}
{/* Low confidence count */}
{lowConfidenceCount > 0 && !needsReview && (
<span
className={css({
px: 2,
py: 0.5,
fontSize: '0.6875rem',
fontWeight: '500',
borderRadius: 'full',
backgroundColor: isDark ? 'yellow.900/30' : 'yellow.50',
color: isDark ? 'yellow.400' : 'yellow.700',
})}
>
{lowConfidenceCount} low confidence
</span>
)}
{/* Confidence indicator */}
<span
className={css({
fontSize: '0.6875rem',
color:
overallConfidence >= 0.9
? isDark
? 'green.400'
: 'green.600'
: overallConfidence >= 0.7
? isDark
? 'yellow.400'
: 'yellow.600'
: isDark
? 'red.400'
: 'red.600',
})}
>
{Math.round(overallConfidence * 100)}% confidence
</span>
</div>
</div>
{/* Problems list */}
<div
data-element="problems-list"
className={css({
maxHeight: '300px',
overflowY: 'auto',
})}
>
{problems.map((problem, index) => {
const isCorrect =
problem.studentAnswer !== null && problem.studentAnswer === problem.correctAnswer
const isIncorrect =
problem.studentAnswer !== null && problem.studentAnswer !== problem.correctAnswer
const isLowConfidence = getMinConfidence(problem) < lowConfidenceThreshold
const isSelected = selectedProblemIndex === index
return (
<button
key={problem.problemNumber}
type="button"
data-element="problem-row"
data-problem-number={problem.problemNumber}
data-is-correct={isCorrect}
data-is-low-confidence={isLowConfidence}
onClick={() => onProblemClick?.(problem)}
className={css({
width: '100%',
display: 'flex',
alignItems: 'center',
padding: '0.5rem 1rem',
gap: '0.75rem',
borderBottom: '1px solid',
borderColor: isDark ? 'gray.700' : 'gray.100',
backgroundColor: isSelected
? isDark
? 'blue.900/30'
: 'blue.50'
: isLowConfidence
? isDark
? 'yellow.900/20'
: 'yellow.50'
: 'transparent',
cursor: onProblemClick ? 'pointer' : 'default',
transition: 'background-color 0.15s',
border: 'none',
textAlign: 'left',
_hover: {
backgroundColor: isSelected
? isDark
? 'blue.900/40'
: 'blue.100'
: isDark
? 'gray.750'
: 'gray.50',
},
_last: {
borderBottom: 'none',
},
})}
>
{/* Problem number */}
<span
className={css({
minWidth: '24px',
fontSize: '0.75rem',
fontWeight: '600',
color: isDark ? 'gray.500' : 'gray.400',
})}
>
#{problem.problemNumber}
</span>
{/* Terms */}
<span
className={css({
flex: 1,
fontSize: '0.875rem',
fontFamily: 'monospace',
color: isDark ? 'gray.200' : 'gray.700',
})}
>
{formatTerms(problem.terms)}
</span>
{/* Equals sign and answer */}
<span
className={css({
fontSize: '0.875rem',
color: isDark ? 'gray.400' : 'gray.500',
})}
>
=
</span>
{/* Student answer */}
<span
className={css({
minWidth: '48px',
fontSize: '0.875rem',
fontFamily: 'monospace',
fontWeight: '500',
textAlign: 'right',
color:
problem.studentAnswer === null
? isDark
? 'gray.500'
: 'gray.400'
: isCorrect
? isDark
? 'green.400'
: 'green.600'
: isDark
? 'red.400'
: 'red.600',
})}
>
{problem.studentAnswer ?? '—'}
</span>
{/* Correct/incorrect indicator */}
<span
className={css({
width: '20px',
height: '20px',
display: 'flex',
alignItems: 'center',
justifyContent: 'center',
fontSize: '0.875rem',
})}
>
{isCorrect && <span className={css({ color: 'green.500' })}>✓</span>}
{isIncorrect && <span className={css({ color: 'red.500' })}>✗</span>}
{problem.studentAnswer === null && (
<span className={css({ color: isDark ? 'gray.600' : 'gray.300' })}></span>
)}
</span>
{/* Low confidence indicator */}
{isLowConfidence && (
<span
className={css({
fontSize: '0.6875rem',
color: isDark ? 'yellow.400' : 'yellow.600',
})}
title={`${Math.round(getMinConfidence(problem) * 100)}% confidence`}
>
⚠️
</span>
)}
</button>
)
})}
</div>
{/* Warnings section if any */}
{result.warnings.length > 0 && (
<div
data-element="warnings"
className={css({
padding: '0.75rem 1rem',
backgroundColor: isDark ? 'yellow.900/20' : 'yellow.50',
borderTop: '1px solid',
borderColor: isDark ? 'yellow.800/30' : 'yellow.200',
})}
>
<div
className={css({
fontSize: '0.75rem',
fontWeight: '600',
color: isDark ? 'yellow.400' : 'yellow.700',
marginBottom: '0.25rem',
})}
>
Warnings:
</div>
<ul
className={css({
margin: 0,
padding: 0,
listStyle: 'none',
})}
>
{result.warnings.map((warning, i) => (
<li
key={i}
className={css({
fontSize: '0.6875rem',
color: isDark ? 'yellow.300' : 'yellow.800',
paddingLeft: '0.75rem',
position: 'relative',
_before: {
content: '"•"',
position: 'absolute',
left: 0,
},
})}
>
{warning}
</li>
))}
</ul>
</div>
)}
</div>
)
}
export default ParsedProblemsList

View File

@@ -0,0 +1,324 @@
'use client'
/**
* ParsingProgressModal - Shows progress during worksheet parsing
*
* Displays animated stages while the LLM analyzes the worksheet image.
* Since the API is synchronous, stages are simulated based on typical timing.
*/
import { useEffect, useState } from 'react'
import * as Dialog from '@radix-ui/react-dialog'
import { Z_INDEX } from '@/constants/zIndex'
import { css } from '../../../styled-system/css'
interface ParsingProgressModalProps {
/** Whether the modal is open */
isOpen: boolean
/** Callback when modal should close (e.g., cancel button) */
onClose: () => void
/** Whether parsing completed successfully */
isSuccess?: boolean
/** Whether parsing failed */
isError?: boolean
/** Error message if failed */
errorMessage?: string
/** Number of problems found (shown on success) */
problemCount?: number
}
// Stages with typical timing (in milliseconds)
const STAGES = [
{ id: 'preparing', label: 'Preparing image...', duration: 1000 },
{ id: 'analyzing', label: 'Analyzing worksheet...', duration: 8000 },
{ id: 'extracting', label: 'Extracting problems...', duration: 6000 },
{ id: 'validating', label: 'Validating results...', duration: 3000 },
] as const
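// Note: the simulated stages sum to 18s (1s + 8s + 6s + 3s), matching the
// "usually takes 15-30 seconds" hint below; the final stage simply holds
// until the real API response flips isSuccess or isError.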
type StageId = (typeof STAGES)[number]['id']
export function ParsingProgressModal({
isOpen,
onClose,
isSuccess = false,
isError = false,
errorMessage,
problemCount,
}: ParsingProgressModalProps) {
const [currentStageIndex, setCurrentStageIndex] = useState(0)
const [stageStartTime, setStageStartTime] = useState<number>(Date.now())
// Reset when modal opens
useEffect(() => {
if (isOpen) {
setCurrentStageIndex(0)
setStageStartTime(Date.now())
}
}, [isOpen])
// Advance through stages based on timing
useEffect(() => {
if (!isOpen || isSuccess || isError) return
if (currentStageIndex >= STAGES.length - 1) return
const stage = STAGES[currentStageIndex]
const elapsed = Date.now() - stageStartTime
const remaining = Math.max(0, stage.duration - elapsed)
const timer = setTimeout(() => {
setCurrentStageIndex((i) => Math.min(i + 1, STAGES.length - 1))
setStageStartTime(Date.now())
}, remaining)
return () => clearTimeout(timer)
}, [isOpen, isSuccess, isError, currentStageIndex, stageStartTime])
// Auto-close on success after brief delay
useEffect(() => {
if (isSuccess) {
const timer = setTimeout(onClose, 1500)
return () => clearTimeout(timer)
}
}, [isSuccess, onClose])
const currentStage = STAGES[currentStageIndex]
return (
<Dialog.Root open={isOpen} onOpenChange={(open) => !open && onClose()}>
<Dialog.Portal>
<Dialog.Overlay
className={css({
position: 'fixed',
inset: 0,
backgroundColor: 'rgba(0, 0, 0, 0.5)',
zIndex: Z_INDEX.MODAL_BACKDROP,
})}
/>
<Dialog.Content
className={css({
position: 'fixed',
top: '50%',
left: '50%',
transform: 'translate(-50%, -50%)',
backgroundColor: 'white',
borderRadius: '16px',
padding: '2rem',
width: '90%',
maxWidth: '360px',
zIndex: Z_INDEX.MODAL,
boxShadow: '0 25px 50px -12px rgba(0, 0, 0, 0.25)',
_dark: {
backgroundColor: 'gray.800',
},
})}
aria-describedby={undefined}
>
<Dialog.Title
className={css({
fontSize: '1.125rem',
fontWeight: '600',
marginBottom: '1.5rem',
textAlign: 'center',
color: 'gray.800',
_dark: { color: 'white' },
})}
>
{isSuccess
? '✅ Parsing Complete'
: isError
? '❌ Parsing Failed'
: '📊 Analyzing Worksheet'}
</Dialog.Title>
<div
className={css({
display: 'flex',
flexDirection: 'column',
alignItems: 'center',
gap: '1.5rem',
})}
>
{/* Success State */}
{isSuccess && (
<div
className={css({
textAlign: 'center',
color: 'green.600',
_dark: { color: 'green.400' },
})}
>
<div
className={css({
fontSize: '3rem',
marginBottom: '0.5rem',
})}
>
✅
</div>
<p className={css({ fontSize: '1rem', fontWeight: '500' })}>
Found {problemCount ?? 'some'} problems
</p>
</div>
)}
{/* Error State */}
{isError && (
<div
className={css({
textAlign: 'center',
})}
>
<div
className={css({
fontSize: '3rem',
marginBottom: '0.5rem',
})}
>
❌
</div>
<p
className={css({
fontSize: '0.875rem',
color: 'red.600',
_dark: { color: 'red.400' },
maxWidth: '280px',
})}
>
{errorMessage || 'An error occurred while parsing the worksheet.'}
</p>
<button
type="button"
onClick={onClose}
className={css({
marginTop: '1rem',
px: 4,
py: 2,
backgroundColor: 'gray.100',
color: 'gray.700',
borderRadius: 'lg',
border: 'none',
cursor: 'pointer',
fontWeight: '500',
_hover: { backgroundColor: 'gray.200' },
_dark: {
backgroundColor: 'gray.700',
color: 'gray.200',
_hover: { backgroundColor: 'gray.600' },
},
})}
>
Close
</button>
</div>
)}
{/* Loading State */}
{!isSuccess && !isError && (
<>
{/* Spinner */}
<div
className={css({
width: '48px',
height: '48px',
border: '4px solid',
borderColor: 'blue.100',
borderTopColor: 'blue.500',
borderRadius: 'full',
animation: 'spin 1s linear infinite',
_dark: {
borderColor: 'gray.700',
borderTopColor: 'blue.400',
},
})}
/>
{/* Current Stage */}
<p
className={css({
fontSize: '1rem',
fontWeight: '500',
color: 'gray.700',
animation: 'pulseOpacity 2s ease-in-out infinite',
_dark: { color: 'gray.300' },
})}
>
{currentStage.label}
</p>
{/* Stage Progress */}
<div
className={css({
display: 'flex',
gap: '0.5rem',
alignItems: 'center',
})}
>
{STAGES.map((stage, index) => (
<div
key={stage.id}
className={css({
width: '8px',
height: '8px',
borderRadius: 'full',
backgroundColor:
index < currentStageIndex
? 'green.500'
: index === currentStageIndex
? 'blue.500'
: 'gray.300',
transition: 'background-color 0.3s',
_dark: {
backgroundColor:
index < currentStageIndex
? 'green.400'
: index === currentStageIndex
? 'blue.400'
: 'gray.600',
},
})}
/>
))}
</div>
{/* Timing hint */}
<p
className={css({
fontSize: '0.75rem',
color: 'gray.500',
textAlign: 'center',
_dark: { color: 'gray.400' },
})}
>
This usually takes 15-30 seconds
</p>
{/* Cancel hint - parsing continues in background */}
<button
type="button"
onClick={onClose}
className={css({
fontSize: '0.75rem',
color: 'gray.400',
background: 'none',
border: 'none',
cursor: 'pointer',
textDecoration: 'underline',
_hover: { color: 'gray.600' },
_dark: {
color: 'gray.500',
_hover: { color: 'gray.300' },
},
})}
>
Hide (parsing continues in background)
</button>
</>
)}
</div>
</Dialog.Content>
</Dialog.Portal>
</Dialog.Root>
)
}
export default ParsingProgressModal

View File

@@ -0,0 +1,14 @@
/**
* Worksheet Parsing UI Components
*
* Components for displaying and interacting with LLM-parsed worksheet data.
*/
export { BoundingBoxOverlay } from './BoundingBoxOverlay'
export { ParsedProblemsList, type ParsedProblemsListProps } from './ParsedProblemsList'
export {
EditableProblemRow,
type EditableProblemRowProps,
type ProblemCorrection,
} from './EditableProblemRow'
export { DebugContentModal, type DebugContentModalProps } from './DebugContentModal'

View File

@@ -6,9 +6,76 @@ import {
type MutableRefObject,
useCallback,
useContext,
useEffect,
useRef,
useState,
} from 'react'
import type { CalibrationGrid } from '@/types/vision'
/**
* Camera source type for vision
*/
export type CameraSourceType = 'local' | 'phone'
/**
* Configuration for abacus vision (camera-based input)
*/
export interface VisionConfig {
/** Whether vision mode is enabled */
enabled: boolean
/** Selected camera device ID */
cameraDeviceId: string | null
/** Saved calibration grid for cropping */
calibration: CalibrationGrid | null
/** Remote phone camera session ID (for phone-as-camera mode) */
remoteCameraSessionId: string | null
/** Currently active camera source - tracks which camera is in use */
activeCameraSource: CameraSourceType | null
}
const DEFAULT_VISION_CONFIG: VisionConfig = {
enabled: false,
cameraDeviceId: null,
calibration: null,
remoteCameraSessionId: null,
activeCameraSource: null,
}
const VISION_CONFIG_STORAGE_KEY = 'abacus-vision-config'
/**
* Load vision config from localStorage
*/
function loadVisionConfig(): VisionConfig {
if (typeof window === 'undefined') return DEFAULT_VISION_CONFIG
try {
const stored = localStorage.getItem(VISION_CONFIG_STORAGE_KEY)
if (stored) {
const parsed = JSON.parse(stored)
return {
...DEFAULT_VISION_CONFIG,
...parsed,
// Always start with vision disabled - user must re-enable
enabled: false,
}
}
} catch (e) {
console.error('[MyAbacusContext] Failed to load vision config:', e)
}
return DEFAULT_VISION_CONFIG
}
/**
* Save vision config to localStorage
*/
function saveVisionConfig(config: VisionConfig): void {
if (typeof window === 'undefined') return
try {
localStorage.setItem(VISION_CONFIG_STORAGE_KEY, JSON.stringify(config))
} catch (e) {
console.error('[MyAbacusContext] Failed to save vision config:', e)
}
}
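// Round-trip behavior of the two helpers above: saved fields are restored on
// the next load, except `enabled`, which loadVisionConfig always forces back
// to false so the user must re-enable vision each visit. For example:
//   saveVisionConfig({ ...DEFAULT_VISION_CONFIG, cameraDeviceId: 'cam-1', enabled: true })
//   loadVisionConfig() // => { ...DEFAULT_VISION_CONFIG, cameraDeviceId: 'cam-1', enabled: false }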
/**
* Configuration for a docked abacus
@@ -54,6 +121,23 @@ export interface DockAnimationState {
toScale: number
}
/**
* Vision frame data for broadcasting
*/
export interface VisionFrameData {
/** Base64-encoded JPEG image data */
imageData: string
/** Detected abacus value (null if not yet detected) */
detectedValue: number | null
/** Detection confidence (0-1) */
confidence: number
}
/**
* Callback type for vision frame broadcasting
*/
export type VisionFrameCallback = (frame: VisionFrameData) => void
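// Sketch of the intended wiring (the socket event name is illustrative):
// DockedVisionFeed calls emitVisionFrame() per captured frame, while an
// observer-facing component registers a callback to forward frames onward.
//   const { setVisionFrameCallback, emitVisionFrame } = useMyAbacus()
//   useEffect(() => {
//     setVisionFrameCallback((frame) => socket.emit('vision:frame', frame))
//     return () => setVisionFrameCallback(null)
//   }, [setVisionFrameCallback])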
interface MyAbacusContextValue {
isOpen: boolean
open: () => void
@@ -107,6 +191,31 @@ interface MyAbacusContextValue {
setDockedValue: (value: number) => void
/** Current abacus value (for reading) */
abacusValue: number
// Vision-related state
/** Current vision configuration */
visionConfig: VisionConfig
/** Whether vision setup is complete (has camera and calibration) */
isVisionSetupComplete: boolean
/** Set whether vision is enabled */
setVisionEnabled: (enabled: boolean) => void
/** Set the selected camera device ID */
setVisionCamera: (deviceId: string | null) => void
/** Set the calibration grid */
setVisionCalibration: (calibration: CalibrationGrid | null) => void
/** Set the remote camera session ID */
setVisionRemoteSession: (sessionId: string | null) => void
/** Set the active camera source */
setVisionCameraSource: (source: CameraSourceType | null) => void
/** Whether the vision setup modal is open */
isVisionSetupOpen: boolean
/** Open the vision setup modal */
openVisionSetup: () => void
/** Close the vision setup modal */
closeVisionSetup: () => void
/** Set a callback for receiving vision frames (for broadcasting to observers) */
setVisionFrameCallback: (callback: VisionFrameCallback | null) => void
/** Emit a vision frame (called by DockedVisionFeed) */
emitVisionFrame: (frame: VisionFrameData) => void
}
const MyAbacusContext = createContext<MyAbacusContextValue | undefined>(undefined)
@@ -124,6 +233,16 @@ export function MyAbacusProvider({ children }: { children: React.ReactNode }) {
const [pendingDockRequest, setPendingDockRequest] = useState(false)
const [abacusValue, setAbacusValue] = useState(0)
// Vision state
const [visionConfig, setVisionConfig] = useState<VisionConfig>(DEFAULT_VISION_CONFIG)
const [isVisionSetupOpen, setIsVisionSetupOpen] = useState(false)
// Load vision config from localStorage on mount
useEffect(() => {
const loaded = loadVisionConfig()
setVisionConfig(loaded)
}, [])
const open = useCallback(() => setIsOpen(true), [])
const close = useCallback(() => setIsOpen(false), [])
const toggle = useCallback(() => setIsOpen((prev) => !prev), [])
@@ -200,6 +319,73 @@ export function MyAbacusProvider({ children }: { children: React.ReactNode }) {
setAbacusValue(value)
}, [])
// Vision callbacks
// Setup is complete if an active camera source is set and configured:
// - Local camera: has camera device (calibration is optional - auto-crop works without it)
// - Remote camera: has remote session ID (phone handles calibration)
const isVisionSetupComplete =
visionConfig.activeCameraSource !== null &&
((visionConfig.activeCameraSource === 'local' && visionConfig.cameraDeviceId !== null) ||
(visionConfig.activeCameraSource === 'phone' && visionConfig.remoteCameraSessionId !== null))
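// e.g. { activeCameraSource: 'local', cameraDeviceId: 'cam-1', ... } => true
//      { activeCameraSource: 'phone', remoteCameraSessionId: null, ... } => false
//      { activeCameraSource: null, ... } => false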
const setVisionEnabled = useCallback((enabled: boolean) => {
setVisionConfig((prev) => {
const updated = { ...prev, enabled }
saveVisionConfig(updated)
return updated
})
}, [])
const setVisionCamera = useCallback((deviceId: string | null) => {
setVisionConfig((prev) => {
const updated = { ...prev, cameraDeviceId: deviceId }
saveVisionConfig(updated)
return updated
})
}, [])
const setVisionCalibration = useCallback((calibration: CalibrationGrid | null) => {
setVisionConfig((prev) => {
const updated = { ...prev, calibration }
saveVisionConfig(updated)
return updated
})
}, [])
const setVisionRemoteSession = useCallback((sessionId: string | null) => {
setVisionConfig((prev) => {
const updated = { ...prev, remoteCameraSessionId: sessionId }
saveVisionConfig(updated)
return updated
})
}, [])
const setVisionCameraSource = useCallback((source: CameraSourceType | null) => {
setVisionConfig((prev) => {
const updated = { ...prev, activeCameraSource: source }
saveVisionConfig(updated)
return updated
})
}, [])
const openVisionSetup = useCallback(() => {
setIsVisionSetupOpen(true)
}, [])
const closeVisionSetup = useCallback(() => {
setIsVisionSetupOpen(false)
}, [])
// Vision frame broadcasting
const visionFrameCallbackRef = useRef<VisionFrameCallback | null>(null)
const setVisionFrameCallback = useCallback((callback: VisionFrameCallback | null) => {
visionFrameCallbackRef.current = callback
}, [])
const emitVisionFrame = useCallback((frame: VisionFrameData) => {
visionFrameCallbackRef.current?.(frame)
}, [])
return (
<MyAbacusContext.Provider
value={{
@@ -233,6 +419,19 @@ export function MyAbacusProvider({ children }: { children: React.ReactNode }) {
clearDockRequest,
setDockedValue,
abacusValue,
// Vision
visionConfig,
isVisionSetupComplete,
setVisionEnabled,
setVisionCamera,
setVisionCalibration,
setVisionRemoteSession,
setVisionCameraSource,
isVisionSetupOpen,
openVisionSetup,
closeVisionSetup,
setVisionFrameCallback,
emitVisionFrame,
}}
>
{children}

View File

@@ -0,0 +1,432 @@
/**
* Unit tests for MyAbacusContext vision functionality
*/
import { act, renderHook } from '@testing-library/react'
import type { ReactNode } from 'react'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import { MyAbacusProvider, useMyAbacus, type VisionFrameData } from '../MyAbacusContext'
// Mock localStorage
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: vi.fn((key: string) => store[key] || null),
setItem: vi.fn((key: string, value: string) => {
store[key] = value
}),
removeItem: vi.fn((key: string) => {
delete store[key]
}),
clear: vi.fn(() => {
store = {}
}),
}
})()
Object.defineProperty(window, 'localStorage', { value: localStorageMock })
describe('MyAbacusContext - vision functionality', () => {
const wrapper = ({ children }: { children: ReactNode }) => (
<MyAbacusProvider>{children}</MyAbacusProvider>
)
beforeEach(() => {
vi.clearAllMocks()
localStorageMock.clear()
})
describe('visionConfig state', () => {
it('starts with vision disabled', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.visionConfig.enabled).toBe(false)
})
it('starts with null cameraDeviceId', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.visionConfig.cameraDeviceId).toBeNull()
})
it('starts with null calibration', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.visionConfig.calibration).toBeNull()
})
it('starts with null remoteCameraSessionId', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.visionConfig.remoteCameraSessionId).toBeNull()
})
})
describe('isVisionSetupComplete', () => {
it('returns false when camera is not set', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.isVisionSetupComplete).toBe(false)
})
it('returns false when calibration is not set', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCamera('camera-123')
})
expect(result.current.isVisionSetupComplete).toBe(false)
})
it('returns true when both camera and calibration are set', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCamera('camera-123')
result.current.setVisionCalibration({
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
})
})
expect(result.current.isVisionSetupComplete).toBe(true)
})
})
describe('setVisionEnabled', () => {
it('enables vision mode', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionEnabled(true)
})
expect(result.current.visionConfig.enabled).toBe(true)
})
it('disables vision mode', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionEnabled(true)
})
act(() => {
result.current.setVisionEnabled(false)
})
expect(result.current.visionConfig.enabled).toBe(false)
})
it('persists to localStorage', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionEnabled(true)
})
expect(localStorageMock.setItem).toHaveBeenCalledWith(
'abacus-vision-config',
expect.stringContaining('"enabled":true')
)
})
})
describe('setVisionCamera', () => {
it('sets camera device ID', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCamera('camera-device-123')
})
expect(result.current.visionConfig.cameraDeviceId).toBe('camera-device-123')
})
it('clears camera device ID when set to null', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCamera('camera-123')
})
act(() => {
result.current.setVisionCamera(null)
})
expect(result.current.visionConfig.cameraDeviceId).toBeNull()
})
it('persists to localStorage', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCamera('camera-abc')
})
expect(localStorageMock.setItem).toHaveBeenCalledWith(
'abacus-vision-config',
expect.stringContaining('"cameraDeviceId":"camera-abc"')
)
})
})
describe('setVisionCalibration', () => {
it('sets calibration grid', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
const calibration = {
roi: { x: 10, y: 20, width: 200, height: 100 },
columnCount: 5,
columnDividers: [0.2, 0.4, 0.6, 0.8],
rotation: 0,
}
act(() => {
result.current.setVisionCalibration(calibration)
})
expect(result.current.visionConfig.calibration).toEqual(calibration)
})
it('clears calibration when set to null', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionCalibration({
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
})
})
act(() => {
result.current.setVisionCalibration(null)
})
expect(result.current.visionConfig.calibration).toBeNull()
})
})
describe('setVisionRemoteSession', () => {
it('sets remote camera session ID', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionRemoteSession('remote-session-456')
})
expect(result.current.visionConfig.remoteCameraSessionId).toBe('remote-session-456')
})
it('clears remote session when set to null', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.setVisionRemoteSession('session-123')
})
act(() => {
result.current.setVisionRemoteSession(null)
})
expect(result.current.visionConfig.remoteCameraSessionId).toBeNull()
})
})
describe('vision setup modal', () => {
it('starts with modal closed', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.isVisionSetupOpen).toBe(false)
})
it('opens the setup modal', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.openVisionSetup()
})
expect(result.current.isVisionSetupOpen).toBe(true)
})
it('closes the setup modal', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
act(() => {
result.current.openVisionSetup()
})
act(() => {
result.current.closeVisionSetup()
})
expect(result.current.isVisionSetupOpen).toBe(false)
})
})
describe('vision frame callback', () => {
it('setVisionFrameCallback sets the callback', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
const callback = vi.fn()
act(() => {
result.current.setVisionFrameCallback(callback)
})
// The callback should be stored (we can verify by emitting a frame)
const frame: VisionFrameData = {
imageData: 'test',
detectedValue: 123,
confidence: 0.9,
}
act(() => {
result.current.emitVisionFrame(frame)
})
expect(callback).toHaveBeenCalledWith(frame)
})
it('emitVisionFrame calls the registered callback', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
const callback = vi.fn()
act(() => {
result.current.setVisionFrameCallback(callback)
})
const frame: VisionFrameData = {
imageData: 'base64data',
detectedValue: 456,
confidence: 0.85,
}
act(() => {
result.current.emitVisionFrame(frame)
})
expect(callback).toHaveBeenCalledTimes(1)
expect(callback).toHaveBeenCalledWith(frame)
})
it('emitVisionFrame does nothing when no callback is set', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
// This should not throw
const frame: VisionFrameData = {
imageData: 'test',
detectedValue: 123,
confidence: 0.9,
}
expect(() => {
act(() => {
result.current.emitVisionFrame(frame)
})
}).not.toThrow()
})
it('clearing callback stops emissions', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
const callback = vi.fn()
act(() => {
result.current.setVisionFrameCallback(callback)
})
act(() => {
result.current.setVisionFrameCallback(null)
})
const frame: VisionFrameData = {
imageData: 'test',
detectedValue: 123,
confidence: 0.9,
}
act(() => {
result.current.emitVisionFrame(frame)
})
expect(callback).not.toHaveBeenCalled()
})
it('handles null detectedValue in frame', () => {
const { result } = renderHook(() => useMyAbacus(), { wrapper })
const callback = vi.fn()
act(() => {
result.current.setVisionFrameCallback(callback)
})
const frame: VisionFrameData = {
imageData: 'test',
detectedValue: null,
confidence: 0,
}
act(() => {
result.current.emitVisionFrame(frame)
})
expect(callback).toHaveBeenCalledWith({
imageData: 'test',
detectedValue: null,
confidence: 0,
})
})
})
describe('localStorage persistence', () => {
it('loads saved config from localStorage on mount', () => {
const savedConfig = {
enabled: false, // Always starts disabled per the code logic
cameraDeviceId: 'saved-camera',
calibration: {
roi: { x: 0, y: 0, width: 100, height: 100 },
columnCount: 5,
columnDividers: [],
rotation: 0,
},
remoteCameraSessionId: 'saved-session',
}
localStorageMock.getItem.mockReturnValueOnce(JSON.stringify(savedConfig))
const { result } = renderHook(() => useMyAbacus(), { wrapper })
// Wait for effect to run
expect(result.current.visionConfig.cameraDeviceId).toBe('saved-camera')
// Note: enabled is always false on load per the implementation
expect(result.current.visionConfig.enabled).toBe(false)
})
it('handles corrupted localStorage gracefully', () => {
localStorageMock.getItem.mockReturnValueOnce('invalid json {{{')
// Should not throw
const { result } = renderHook(() => useMyAbacus(), { wrapper })
expect(result.current.visionConfig).toBeDefined()
expect(result.current.visionConfig.enabled).toBe(false)
})
})
describe('negative cases', () => {
it('throws when useMyAbacus is used outside provider', () => {
// Using renderHook without the wrapper should throw
expect(() => {
renderHook(() => useMyAbacus())
}).toThrow('useMyAbacus must be used within MyAbacusProvider')
})
})
})

View File

@@ -1,14 +1,23 @@
import { sqliteTable, text, integer, real } from 'drizzle-orm/sqlite-core'
import { createId } from '@paralleldrive/cuid2'
import { players } from './players'
import { sessionPlans } from './session-plans'
import { users } from './users'
import type { WorksheetParsingResult } from '@/lib/worksheet-parsing'
/**
* Parsing workflow status
*/
export type ParsingStatus = 'pending' | 'processing' | 'needs_review' | 'approved' | 'failed'
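// Assumed lifecycle (inferred from the status values; not stated in this file):
//   pending -> processing -> needs_review -> approved
//   processing -> failed (with parsingError set)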
/**
* Practice attachments - photos of student work
*
* Used primarily for offline practice sessions where parents/teachers
* upload photos of the student's physical abacus work.
*
* Now also supports LLM-powered parsing of worksheet images to extract
* problems and student answers automatically.
*/
export const practiceAttachments = sqliteTable('practice_attachments', {
id: text('id')
@@ -41,6 +50,45 @@ export const practiceAttachments = sqliteTable('practice_attachments', {
// Rotation in degrees (0, 90, 180, or 270) - applied after cropping
rotation: integer('rotation').$type<0 | 90 | 180 | 270>().default(0),
// ============================================================================
// LLM Parsing Workflow
// ============================================================================
// Parsing status
parsingStatus: text('parsing_status').$type<ParsingStatus>(),
parsedAt: text('parsed_at'), // ISO timestamp when parsing completed
parsingError: text('parsing_error'), // Error message if parsing failed
// LLM parsing results (raw from LLM, before user corrections)
rawParsingResult: text('raw_parsing_result', {
mode: 'json',
}).$type<WorksheetParsingResult | null>(),
// Approved results (after user corrections)
approvedResult: text('approved_result', { mode: 'json' }).$type<WorksheetParsingResult | null>(),
// Confidence and review indicators
confidenceScore: real('confidence_score'), // 0-1, from LLM
needsReview: integer('needs_review', { mode: 'boolean' }), // True if any problems need manual review
// LLM call metadata (for debugging/transparency)
llmProvider: text('llm_provider'), // e.g., "openai", "anthropic"
llmModel: text('llm_model'), // e.g., "gpt-4o", "claude-sonnet-4"
llmPromptUsed: text('llm_prompt_used'), // The actual prompt sent to the LLM
llmRawResponse: text('llm_raw_response'), // Raw JSON response from the LLM (before parsing)
llmJsonSchema: text('llm_json_schema'), // JSON Schema sent to the LLM (with field descriptions)
llmImageSource: text('llm_image_source').$type<'cropped' | 'original'>(), // Which image was sent
llmAttempts: integer('llm_attempts'), // How many retries were needed
llmPromptTokens: integer('llm_prompt_tokens'),
llmCompletionTokens: integer('llm_completion_tokens'),
llmTotalTokens: integer('llm_total_tokens'),
// Session linkage (for parsed worksheets that created sessions)
sessionCreated: integer('session_created', { mode: 'boolean' }), // True if session was created from this parsing
createdSessionId: text('created_session_id').references(() => sessionPlans.id, {
onDelete: 'set null',
}),
// Audit
uploadedBy: text('uploaded_by')
.notNull()

View File

@@ -0,0 +1,498 @@
/**
* Tests for useRemoteCameraDesktop hook
*
* Tests session persistence, auto-reconnection, and Socket.IO event handling.
*/
import { act, renderHook, waitFor } from '@testing-library/react'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { useRemoteCameraDesktop } from '../useRemoteCameraDesktop'
// Mock socket.io-client - use vi.hoisted for variables referenced in vi.mock
const { mockSocket, mockIo } = vi.hoisted(() => {
const socket = {
id: 'test-socket-id',
on: vi.fn(),
off: vi.fn(),
emit: vi.fn(),
disconnect: vi.fn(),
connected: true,
}
return {
mockSocket: socket,
mockIo: vi.fn(() => socket),
}
})
vi.mock('socket.io-client', () => ({
io: mockIo,
}))
// Mock localStorage
const localStorageMock = (() => {
let store: Record<string, string> = {}
return {
getItem: vi.fn((key: string) => store[key] || null),
setItem: vi.fn((key: string, value: string) => {
store[key] = value
}),
removeItem: vi.fn((key: string) => {
delete store[key]
}),
clear: vi.fn(() => {
store = {}
}),
}
})()
Object.defineProperty(window, 'localStorage', {
value: localStorageMock,
})
describe('useRemoteCameraDesktop', () => {
beforeEach(() => {
vi.clearAllMocks()
localStorageMock.clear()
// Reset mock socket handlers
mockIo.mockClear()
mockSocket.on.mockClear()
mockSocket.off.mockClear()
mockSocket.emit.mockClear()
})
afterEach(() => {
vi.restoreAllMocks()
})
describe('initialization', () => {
it('should initialize with default state', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
expect(result.current.isPhoneConnected).toBe(false)
expect(result.current.latestFrame).toBeNull()
expect(result.current.frameRate).toBe(0)
expect(result.current.error).toBeNull()
expect(result.current.currentSessionId).toBeNull()
expect(result.current.isReconnecting).toBe(false)
})
it('should set up socket with reconnection config', () => {
renderHook(() => useRemoteCameraDesktop())
expect(mockIo).toHaveBeenCalledWith(
expect.objectContaining({
path: '/api/socket',
reconnection: true,
reconnectionDelay: 1000,
reconnectionDelayMax: 5000,
reconnectionAttempts: 10,
})
)
})
})
describe('localStorage persistence', () => {
it('should persist session ID when subscribing', async () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Simulate socket connect
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
if (connectHandler) {
act(() => {
connectHandler()
})
}
// Subscribe to a session
act(() => {
result.current.subscribe('test-session-123')
})
expect(localStorageMock.setItem).toHaveBeenCalledWith(
'remote-camera-session-id',
'test-session-123'
)
})
it('should return persisted session ID from getPersistedSessionId', () => {
localStorageMock.getItem.mockReturnValue('persisted-session-456')
const { result } = renderHook(() => useRemoteCameraDesktop())
const persistedId = result.current.getPersistedSessionId()
expect(persistedId).toBe('persisted-session-456')
})
it('should clear persisted session ID on clearSession', async () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
act(() => {
result.current.clearSession()
})
expect(localStorageMock.removeItem).toHaveBeenCalledWith('remote-camera-session-id')
})
})
describe('auto-reconnect on socket reconnect', () => {
it('should re-subscribe to persisted session on socket connect', () => {
localStorageMock.getItem.mockReturnValue('persisted-session-789')
renderHook(() => useRemoteCameraDesktop())
// Find the connect handler
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
expect(connectHandler).toBeDefined()
// Simulate socket connect
act(() => {
connectHandler()
})
// Should emit subscribe with persisted session
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:subscribe', {
sessionId: 'persisted-session-789',
})
})
it('should not subscribe if no persisted session', () => {
localStorageMock.getItem.mockReturnValue(null)
renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
// Should not emit subscribe
expect(mockSocket.emit).not.toHaveBeenCalledWith('remote-camera:subscribe', expect.anything())
})
})
describe('session subscription', () => {
it('should emit subscribe event with session ID', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Simulate connection
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('new-session-id')
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:subscribe', {
sessionId: 'new-session-id',
})
})
it('should update currentSessionId on subscribe', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Simulate connection
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('my-session')
})
expect(result.current.currentSessionId).toBe('my-session')
})
})
describe('event handling', () => {
it('should handle phone connected event', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Find the event handler setup
const setupHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:connected'
)?.[1]
if (setupHandler) {
act(() => {
setupHandler({ phoneConnected: true })
})
}
expect(result.current.isPhoneConnected).toBe(true)
})
it('should handle phone disconnected event', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Set connected first
const connectedHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:connected'
)?.[1]
if (connectedHandler) {
act(() => {
connectedHandler({ phoneConnected: true })
})
}
// Then disconnect
const disconnectedHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:disconnected'
)?.[1]
if (disconnectedHandler) {
act(() => {
disconnectedHandler({ phoneConnected: false })
})
}
expect(result.current.isPhoneConnected).toBe(false)
})
it('should handle frame events', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const frameHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:frame'
)?.[1]
const testFrame = {
imageData: 'base64-image-data',
timestamp: Date.now(),
mode: 'cropped' as const,
}
if (frameHandler) {
act(() => {
frameHandler(testFrame)
})
}
expect(result.current.latestFrame).toEqual(testFrame)
})
it('should handle error events and clear invalid sessions', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const errorHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:error'
)?.[1]
if (errorHandler) {
act(() => {
errorHandler({ error: 'Invalid session' })
})
}
expect(result.current.error).toBe('Invalid session')
expect(localStorageMock.removeItem).toHaveBeenCalledWith('remote-camera-session-id')
})
it('should handle torch state events', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const torchHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:torch-state'
)?.[1]
if (torchHandler) {
act(() => {
torchHandler({ isTorchOn: true, isTorchAvailable: true })
})
}
expect(result.current.isTorchOn).toBe(true)
expect(result.current.isTorchAvailable).toBe(true)
})
})
describe('calibration commands', () => {
it('should emit calibration to phone', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
// Simulate connection and subscription
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('calibration-session')
})
const corners = {
topLeft: { x: 0, y: 0 },
topRight: { x: 100, y: 0 },
bottomLeft: { x: 0, y: 100 },
bottomRight: { x: 100, y: 100 },
}
act(() => {
result.current.sendCalibration(corners)
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:set-calibration', {
sessionId: 'calibration-session',
corners,
})
})
it('should emit clear calibration to phone', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('clear-cal-session')
})
act(() => {
result.current.clearCalibration()
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:clear-calibration', {
sessionId: 'clear-cal-session',
})
})
})
describe('frame mode control', () => {
it('should emit frame mode change to phone', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('mode-session')
})
act(() => {
result.current.setPhoneFrameMode('raw')
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:set-mode', {
sessionId: 'mode-session',
mode: 'raw',
})
})
})
describe('torch control', () => {
it('should emit torch command to phone', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('torch-session')
})
act(() => {
result.current.setRemoteTorch(true)
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:set-torch', {
sessionId: 'torch-session',
on: true,
})
})
it('should optimistically update torch state', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('torch-session-2')
})
act(() => {
result.current.setRemoteTorch(true)
})
expect(result.current.isTorchOn).toBe(true)
})
})
describe('cleanup', () => {
it('should emit leave on unsubscribe', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('leave-session')
})
act(() => {
result.current.unsubscribe()
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:leave', {
sessionId: 'leave-session',
})
})
it('should reset state on unsubscribe', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('reset-session')
})
// Set some state
const connectedHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:connected'
)?.[1]
if (connectedHandler) {
act(() => {
connectedHandler({ phoneConnected: true })
})
}
act(() => {
result.current.unsubscribe()
})
expect(result.current.isPhoneConnected).toBe(false)
expect(result.current.latestFrame).toBeNull()
expect(result.current.frameRate).toBe(0)
})
it('should clear all state on clearSession', () => {
const { result } = renderHook(() => useRemoteCameraDesktop())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.subscribe('clear-session')
})
act(() => {
result.current.clearSession()
})
expect(result.current.currentSessionId).toBeNull()
expect(result.current.isPhoneConnected).toBe(false)
expect(result.current.isReconnecting).toBe(false)
expect(localStorageMock.removeItem).toHaveBeenCalledWith('remote-camera-session-id')
})
})
})

View File

@@ -0,0 +1,498 @@
/**
* Tests for useRemoteCameraPhone hook
*
* Tests socket connection, auto-reconnection, and frame sending behavior.
*/
import { act, renderHook } from '@testing-library/react'
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import { useRemoteCameraPhone } from '../useRemoteCameraPhone'
// Mock socket.io-client - use vi.hoisted for variables referenced in vi.mock
const { mockSocket, mockIo } = vi.hoisted(() => {
const socket = {
id: 'test-phone-socket-id',
on: vi.fn(),
off: vi.fn(),
emit: vi.fn(),
disconnect: vi.fn(),
connected: true,
}
return {
mockSocket: socket,
mockIo: vi.fn(() => socket),
}
})
vi.mock('socket.io-client', () => ({
io: mockIo,
}))
// Mock OpenCV loading
vi.mock('@/lib/vision/perspectiveTransform', () => ({
loadOpenCV: vi.fn(() => Promise.resolve()),
isOpenCVReady: vi.fn(() => true),
rectifyQuadrilateralToBase64: vi.fn(() => 'mock-base64-image'),
}))
describe('useRemoteCameraPhone', () => {
beforeEach(() => {
vi.clearAllMocks()
mockIo.mockClear()
mockSocket.on.mockClear()
mockSocket.off.mockClear()
mockSocket.emit.mockClear()
})
afterEach(() => {
vi.restoreAllMocks()
})
describe('initialization', () => {
it('should initialize with default state', async () => {
const { result } = renderHook(() => useRemoteCameraPhone())
expect(result.current.isConnected).toBe(false)
expect(result.current.isSending).toBe(false)
expect(result.current.frameMode).toBe('raw')
expect(result.current.desktopCalibration).toBeNull()
expect(result.current.error).toBeNull()
})
it('should set up socket with reconnection config', () => {
renderHook(() => useRemoteCameraPhone())
expect(mockIo).toHaveBeenCalledWith(
expect.objectContaining({
path: '/api/socket',
reconnection: true,
reconnectionDelay: 1000,
reconnectionDelayMax: 5000,
reconnectionAttempts: 10,
})
)
})
})
describe('session connection', () => {
it('should emit join event when connecting', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
// Simulate socket connect
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
if (connectHandler) {
act(() => {
connectHandler()
})
}
act(() => {
result.current.connect('phone-session-123')
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:join', {
sessionId: 'phone-session-123',
})
})
it('should update isConnected on connect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
if (connectHandler) {
act(() => {
connectHandler()
})
}
act(() => {
result.current.connect('connect-session')
})
expect(result.current.isConnected).toBe(true)
})
it('should set error if socket not connected', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
// Don't simulate connect - socket is not connected
act(() => {
result.current.connect('fail-session')
})
expect(result.current.error).toBe('Socket not connected')
})
})
describe('auto-reconnect on socket reconnect', () => {
it('should re-join session on socket reconnect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
// Initial connect
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
// Connect to session
act(() => {
result.current.connect('reconnect-session')
})
// Clear emit calls
mockSocket.emit.mockClear()
// Simulate socket reconnect (connect event fires again)
act(() => {
connectHandler()
})
// Should auto-rejoin
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:join', {
sessionId: 'reconnect-session',
})
})
it('should not rejoin if no session was set', () => {
renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
mockSocket.emit.mockClear()
// Simulate reconnect without ever connecting to a session
act(() => {
connectHandler()
})
expect(mockSocket.emit).not.toHaveBeenCalledWith('remote-camera:join', expect.anything())
})
})
describe('socket disconnect handling', () => {
it('should not clear session on temporary disconnect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('persist-session')
})
// Simulate temporary disconnect
const disconnectHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'disconnect'
)?.[1]
act(() => {
disconnectHandler('transport close')
})
// The session ref should remain set so the hook can auto-rejoin on reconnect.
// isConnected may read false here, but the session persists internally, so
// there is no externally observable state to assert.
})
it('should clear state on server disconnect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('server-disconnect-session')
})
const disconnectHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'disconnect'
)?.[1]
act(() => {
disconnectHandler('io server disconnect')
})
expect(result.current.isConnected).toBe(false)
})
})
describe('desktop commands', () => {
it('should handle set-mode command from desktop', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const setModeHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:set-mode'
)?.[1]
if (setModeHandler) {
act(() => {
setModeHandler({ mode: 'cropped' })
})
}
expect(result.current.frameMode).toBe('cropped')
})
it('should handle set-calibration command from desktop', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const calibrationHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:set-calibration'
)?.[1]
const corners = {
topLeft: { x: 10, y: 10 },
topRight: { x: 100, y: 10 },
bottomLeft: { x: 10, y: 100 },
bottomRight: { x: 100, y: 100 },
}
if (calibrationHandler) {
act(() => {
calibrationHandler({ corners })
})
}
expect(result.current.desktopCalibration).toEqual(corners)
// Should auto-switch to cropped mode
expect(result.current.frameMode).toBe('cropped')
})
it('should handle clear-calibration command from desktop', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
// First set calibration
const calibrationHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:set-calibration'
)?.[1]
if (calibrationHandler) {
act(() => {
calibrationHandler({
corners: {
topLeft: { x: 0, y: 0 },
topRight: { x: 100, y: 0 },
bottomLeft: { x: 0, y: 100 },
bottomRight: { x: 100, y: 100 },
},
})
})
}
expect(result.current.desktopCalibration).not.toBeNull()
// Then clear it
const clearHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:clear-calibration'
)?.[1]
if (clearHandler) {
act(() => {
clearHandler()
})
}
expect(result.current.desktopCalibration).toBeNull()
})
it('should handle set-torch command from desktop', () => {
const torchCallback = vi.fn()
renderHook(() => useRemoteCameraPhone({ onTorchRequest: torchCallback }))
const torchHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:set-torch'
)?.[1]
if (torchHandler) {
act(() => {
torchHandler({ on: true })
})
}
expect(torchCallback).toHaveBeenCalledWith(true)
})
})
describe('frame mode', () => {
it('should allow setting frame mode locally', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
act(() => {
result.current.setFrameMode('cropped')
})
expect(result.current.frameMode).toBe('cropped')
})
})
describe('torch state emission', () => {
it('should emit torch state to desktop', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('torch-emit-session')
})
act(() => {
result.current.emitTorchState(true, true)
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:torch-state', {
sessionId: 'torch-emit-session',
isTorchOn: true,
isTorchAvailable: true,
})
})
})
describe('disconnect', () => {
it('should emit leave event on disconnect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('disconnect-session')
})
act(() => {
result.current.disconnect()
})
expect(mockSocket.emit).toHaveBeenCalledWith('remote-camera:leave', {
sessionId: 'disconnect-session',
})
})
it('should reset state on disconnect', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('reset-disconnect-session')
})
expect(result.current.isConnected).toBe(true)
act(() => {
result.current.disconnect()
})
expect(result.current.isConnected).toBe(false)
})
})
describe('error handling', () => {
it('should handle error events', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const errorHandler = mockSocket.on.mock.calls.find(
(call) => call[0] === 'remote-camera:error'
)?.[1]
if (errorHandler) {
act(() => {
errorHandler({ error: 'Session expired' })
})
}
expect(result.current.error).toBe('Session expired')
expect(result.current.isConnected).toBe(false)
})
})
describe('calibration update', () => {
it('should update calibration for frame processing', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const newCalibration = {
topLeft: { x: 20, y: 20 },
topRight: { x: 200, y: 20 },
bottomLeft: { x: 20, y: 200 },
bottomRight: { x: 200, y: 200 },
}
act(() => {
result.current.updateCalibration(newCalibration)
})
// The calibration is stored in a ref consumed during frame processing, so
// there is no observable state to assert; completing without throwing is the
// behavior under test.
})
})
describe('sending frames', () => {
it('should set isSending when startSending is called', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('sending-session')
})
// Create mock video element
const mockVideo = document.createElement('video')
act(() => {
result.current.startSending(mockVideo)
})
expect(result.current.isSending).toBe(true)
})
it('should set error if not connected when starting to send', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const mockVideo = document.createElement('video')
act(() => {
result.current.startSending(mockVideo)
})
expect(result.current.error).toBe('Not connected to session')
})
it('should reset isSending on stopSending', () => {
const { result } = renderHook(() => useRemoteCameraPhone())
const connectHandler = mockSocket.on.mock.calls.find((call) => call[0] === 'connect')?.[1]
act(() => {
connectHandler()
})
act(() => {
result.current.connect('stop-sending-session')
})
const mockVideo = document.createElement('video')
act(() => {
result.current.startSending(mockVideo)
})
act(() => {
result.current.stopSending()
})
expect(result.current.isSending).toBe(false)
})
})
})

View File

@@ -0,0 +1,218 @@
/**
* Unit tests for useSessionBroadcast vision frame broadcasting
*/
import { act, renderHook } from '@testing-library/react'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import type { BroadcastState } from '@/components/practice'
import { useSessionBroadcast } from '../useSessionBroadcast'
// Mock socket.io-client
const mockSocket = {
on: vi.fn(),
off: vi.fn(),
emit: vi.fn(),
disconnect: vi.fn(),
connected: true,
}
vi.mock('socket.io-client', () => ({
io: vi.fn(() => mockSocket),
}))
describe('useSessionBroadcast - vision frame broadcasting', () => {
beforeEach(() => {
vi.clearAllMocks()
mockSocket.on.mockReset()
mockSocket.emit.mockReset()
})
const createMockBroadcastState = (): BroadcastState => ({
currentProblem: { terms: [5, 3], answer: 8 },
phase: 'problem',
studentAnswer: '',
isCorrect: null,
startedAt: Date.now(),
purpose: 'focus',
complexity: undefined,
currentProblemNumber: 1,
totalProblems: 10,
sessionParts: [],
currentPartIndex: 0,
currentSlotIndex: 0,
slotResults: [],
})
describe('sendVisionFrame', () => {
it('returns sendVisionFrame function', () => {
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
expect(result.current.sendVisionFrame).toBeDefined()
expect(typeof result.current.sendVisionFrame).toBe('function')
})
it('emits vision-frame event with correct payload when connected', async () => {
// Simulate connection
let connectHandler: (() => void) | undefined
mockSocket.on.mockImplementation((event: string, handler: unknown) => {
if (event === 'connect') {
connectHandler = handler as () => void
}
return mockSocket
})
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
// Trigger connect
act(() => {
connectHandler?.()
})
// Send vision frame
const imageData = 'base64ImageData=='
const detectedValue = 456
const confidence = 0.92
act(() => {
result.current.sendVisionFrame(imageData, detectedValue, confidence)
})
expect(mockSocket.emit).toHaveBeenCalledWith(
'vision-frame',
expect.objectContaining({
sessionId: 'session-123',
imageData: 'base64ImageData==',
detectedValue: 456,
confidence: 0.92,
timestamp: expect.any(Number),
})
)
})
it('includes timestamp in vision-frame event', async () => {
const now = Date.now()
vi.setSystemTime(now)
let connectHandler: (() => void) | undefined
mockSocket.on.mockImplementation((event: string, handler: unknown) => {
if (event === 'connect') {
connectHandler = handler as () => void
}
return mockSocket
})
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
act(() => {
connectHandler?.()
})
act(() => {
result.current.sendVisionFrame('imageData', 123, 0.95)
})
expect(mockSocket.emit).toHaveBeenCalledWith(
'vision-frame',
expect.objectContaining({
timestamp: now,
})
)
vi.useRealTimers()
})
it('handles null detectedValue', async () => {
let connectHandler: (() => void) | undefined
mockSocket.on.mockImplementation((event: string, handler: unknown) => {
if (event === 'connect') {
connectHandler = handler as () => void
}
return mockSocket
})
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
act(() => {
connectHandler?.()
})
act(() => {
result.current.sendVisionFrame('imageData', null, 0)
})
expect(mockSocket.emit).toHaveBeenCalledWith(
'vision-frame',
expect.objectContaining({
detectedValue: null,
confidence: 0,
})
)
})
})
describe('negative cases', () => {
it('does not emit when sessionId is undefined', () => {
const { result } = renderHook(() =>
useSessionBroadcast(undefined, 'player-456', createMockBroadcastState())
)
act(() => {
result.current.sendVisionFrame('imageData', 123, 0.95)
})
expect(mockSocket.emit).not.toHaveBeenCalledWith('vision-frame', expect.anything())
})
it('does not emit when not connected', () => {
// Don't trigger connect handler
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
act(() => {
result.current.sendVisionFrame('imageData', 123, 0.95)
})
// The join-session emit happens on connect, but no vision-frame should be emitted
const visionFrameCalls = mockSocket.emit.mock.calls.filter(
([event]) => event === 'vision-frame'
)
expect(visionFrameCalls).toHaveLength(0)
})
it('does not emit when state is null', () => {
const { result } = renderHook(() => useSessionBroadcast('session-123', 'player-456', null))
act(() => {
result.current.sendVisionFrame('imageData', 123, 0.95)
})
// vision-frame should still not be emitted (the null state triggers cleanup, so no connection is established)
const visionFrameCalls = mockSocket.emit.mock.calls.filter(
([event]) => event === 'vision-frame'
)
expect(visionFrameCalls).toHaveLength(0)
})
})
describe('result interface', () => {
it('includes sendVisionFrame in the result', () => {
const { result } = renderHook(() =>
useSessionBroadcast('session-123', 'player-456', createMockBroadcastState())
)
expect(result.current).toHaveProperty('sendVisionFrame')
expect(result.current).toHaveProperty('isConnected')
expect(result.current).toHaveProperty('isBroadcasting')
expect(result.current).toHaveProperty('sendPartTransition')
expect(result.current).toHaveProperty('sendPartTransitionComplete')
})
})
})

View File

@@ -0,0 +1,255 @@
/**
* Unit tests for useSessionObserver vision frame receiving
*/
import { act, renderHook, waitFor } from '@testing-library/react'
import { beforeEach, describe, expect, it, vi } from 'vitest'
import type { VisionFrameEvent } from '@/lib/classroom/socket-events'
import { useSessionObserver } from '../useSessionObserver'
// Mock socket.io-client
const mockSocket = {
on: vi.fn(),
off: vi.fn(),
emit: vi.fn(),
disconnect: vi.fn(),
connected: true,
}
vi.mock('socket.io-client', () => ({
io: vi.fn(() => mockSocket),
}))
describe('useSessionObserver - vision frame receiving', () => {
let eventHandlers: Map<string, (data: unknown) => void>
beforeEach(() => {
vi.clearAllMocks()
eventHandlers = new Map()
// Capture event handlers
mockSocket.on.mockImplementation((event: string, handler: unknown) => {
eventHandlers.set(event, handler as (data: unknown) => void)
return mockSocket
})
})
describe('visionFrame state', () => {
it('initially returns null visionFrame', () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
expect(result.current.visionFrame).toBeNull()
})
it('updates visionFrame when vision-frame event is received', async () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
// Simulate receiving a vision frame event
const visionFrameData: VisionFrameEvent = {
sessionId: 'session-123',
imageData: 'base64ImageData==',
detectedValue: 456,
confidence: 0.92,
timestamp: Date.now(),
}
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.(visionFrameData)
})
await waitFor(() => {
expect(result.current.visionFrame).not.toBeNull()
expect(result.current.visionFrame?.imageData).toBe('base64ImageData==')
expect(result.current.visionFrame?.detectedValue).toBe(456)
expect(result.current.visionFrame?.confidence).toBe(0.92)
expect(result.current.visionFrame?.receivedAt).toBeDefined()
})
})
it('sets receivedAt to current time when frame is received', async () => {
const now = Date.now()
vi.setSystemTime(now)
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
const visionFrameData: VisionFrameEvent = {
sessionId: 'session-123',
imageData: 'imageData',
detectedValue: 123,
confidence: 0.9,
timestamp: now - 100, // Sent 100ms ago
}
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.(visionFrameData)
})
await waitFor(() => {
expect(result.current.visionFrame?.receivedAt).toBe(now)
})
vi.useRealTimers()
})
it('updates visionFrame with new frames', async () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
// First frame
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.({
sessionId: 'session-123',
imageData: 'firstFrame',
detectedValue: 100,
confidence: 0.8,
timestamp: Date.now(),
})
})
await waitFor(() => {
expect(result.current.visionFrame?.detectedValue).toBe(100)
})
// Second frame
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.({
sessionId: 'session-123',
imageData: 'secondFrame',
detectedValue: 200,
confidence: 0.95,
timestamp: Date.now(),
})
})
await waitFor(() => {
expect(result.current.visionFrame?.detectedValue).toBe(200)
expect(result.current.visionFrame?.imageData).toBe('secondFrame')
})
})
it('handles null detectedValue in frames', async () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
const visionFrameData: VisionFrameEvent = {
sessionId: 'session-123',
imageData: 'imageData',
detectedValue: null,
confidence: 0,
timestamp: Date.now(),
}
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.(visionFrameData)
})
await waitFor(() => {
expect(result.current.visionFrame?.detectedValue).toBeNull()
expect(result.current.visionFrame?.confidence).toBe(0)
})
})
})
describe('cleanup', () => {
it('clears visionFrame on stopObserving', async () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
// Receive a frame
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.({
sessionId: 'session-123',
imageData: 'imageData',
detectedValue: 123,
confidence: 0.9,
timestamp: Date.now(),
})
})
await waitFor(() => {
expect(result.current.visionFrame).not.toBeNull()
})
// Stop observing
act(() => {
result.current.stopObserving()
})
await waitFor(() => {
expect(result.current.visionFrame).toBeNull()
})
})
})
describe('result interface', () => {
it('includes visionFrame in the result', () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
expect(result.current).toHaveProperty('visionFrame')
expect(result.current).toHaveProperty('state')
expect(result.current).toHaveProperty('results')
expect(result.current).toHaveProperty('transitionState')
expect(result.current).toHaveProperty('isConnected')
expect(result.current).toHaveProperty('isObserving')
expect(result.current).toHaveProperty('error')
})
})
describe('negative cases', () => {
it('does not update visionFrame when observer is disabled', () => {
const { result } = renderHook(
() => useSessionObserver('session-123', 'observer-456', 'player-789', false) // disabled
)
// The socket won't be created when disabled
expect(eventHandlers.size).toBe(0)
expect(result.current.visionFrame).toBeNull()
})
it('does not update visionFrame when sessionId is undefined', () => {
const { result } = renderHook(() =>
useSessionObserver(undefined, 'observer-456', 'player-789', true)
)
expect(result.current.visionFrame).toBeNull()
expect(result.current.isObserving).toBe(false)
})
it('handles empty imageData gracefully', async () => {
const { result } = renderHook(() =>
useSessionObserver('session-123', 'observer-456', 'player-789', true)
)
act(() => {
const handler = eventHandlers.get('vision-frame')
handler?.({
sessionId: 'session-123',
imageData: '',
detectedValue: 123,
confidence: 0.9,
timestamp: Date.now(),
})
})
await waitFor(() => {
expect(result.current.visionFrame?.imageData).toBe('')
})
})
})
})

View File

@@ -8,6 +8,11 @@ import {
isArucoAvailable,
loadAruco,
} from '@/lib/vision/arucoDetection'
import {
analyzeColumns,
analysesToDigits,
digitsToNumber as cvDigitsToNumber,
} from '@/lib/vision/beadDetector'
import { digitsToNumber, getMinConfidence, processVideoFrame } from '@/lib/vision/frameProcessor'
import type {
CalibrationGrid,
@@ -83,6 +88,10 @@ export function useAbacusVision(options: UseAbacusVisionOptions = {}): UseAbacus
// Track previous stable value to avoid duplicate callbacks
const lastStableValueRef = useRef<number | null>(null)
// Throttle detection (CV is fast, 10fps is plenty)
const lastInferenceTimeRef = useRef<number>(0)
const INFERENCE_INTERVAL_MS = 100 // 10fps
// Ref for calibration functions to avoid infinite loop in auto-calibration effect
const calibrationRef = useRef(calibration)
calibrationRef.current = calibration
@@ -271,9 +280,16 @@ export function useAbacusVision(options: UseAbacusVisionOptions = {}): UseAbacus
}, [calibration])
/**
* Process a video frame for detection using TensorFlow.js classifier
* Process a video frame for detection using CV-based bead detection
*/
const processFrame = useCallback(async () => {
// Throttle inference for performance (10fps)
const now = performance.now()
if (now - lastInferenceTimeRef.current < INFERENCE_INTERVAL_MS) {
return
}
lastInferenceTimeRef.current = now
// Get video element from camera stream
const videoElements = document.querySelectorAll('video')
let video: HTMLVideoElement | null = null
@@ -292,24 +308,33 @@ export function useAbacusVision(options: UseAbacusVisionOptions = {}): UseAbacus
// Process video frame into column strips
const columnImages = processVideoFrame(video, calibration.calibration)
if (columnImages.length === 0) return
// Run classification
const result = await classifier.classifyColumns(columnImages)
// Use CV-based bead detection instead of ML
const analyses = analyzeColumns(columnImages)
const { digits, confidences, minConfidence } = analysesToDigits(analyses)
if (!result) return
// Log analysis for debugging
console.log(
'[CV] Bead analysis:',
analyses.map((a) => ({
digit: a.digit,
conf: a.confidence.toFixed(2),
heaven: a.heavenActive ? '5' : '0',
earth: a.earthActiveCount,
bar: a.reckoningBarPosition.toFixed(2),
}))
)
// Update column confidences
setColumnConfidences(result.confidences)
setColumnConfidences(confidences)
// Convert digits to number
const detectedValue = digitsToNumber(result.digits)
const minConfidence = getMinConfidence(result.confidences)
const detectedValue = cvDigitsToNumber(digits)
// Push to stability buffer
stability.pushFrame(detectedValue, minConfidence)
}, [camera.videoStream, calibration.isCalibrated, calibration.calibration, stability, classifier])
}, [camera.videoStream, calibration.isCalibrated, calibration.calibration, stability])
/**
* Detection loop

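Worth spelling out the arithmetic that lets a rule-based detector replace the classifier here: a soroban column's digit is recovered directly from bead positions. A minimal sketch of that mapping, assuming the analysis shape shown in the debug log above; the real logic lives in `@/lib/vision/beadDetector`, and the helper names below are illustrative:

```typescript
// Illustrative only: mirrors the fields logged above, not the actual
// beadDetector implementation.
interface ColumnAnalysis {
  heavenActive: boolean // heaven bead at the reckoning bar contributes 5
  earthActiveCount: number // each raised earth bead (0-4) contributes 1
  confidence: number
}

function columnDigit(a: ColumnAnalysis): number {
  return (a.heavenActive ? 5 : 0) + a.earthActiveCount
}

// Assuming columns are ordered most-significant first: [4, 5, 6] -> 456.
function digitsToNumberSketch(digits: number[]): number {
  return digits.reduce((acc, d) => acc * 10 + d, 0)
}
```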
View File

@@ -86,7 +86,6 @@ export function useColumnClassifier(): UseColumnClassifierReturn {
setIsModelLoaded(true)
return true
} else {
// Model doesn't exist - not an error, just unavailable
setIsModelUnavailable(true)
return false
}

View File

@@ -162,6 +162,9 @@ export function useDeskViewCamera(): UseDeskViewCameraReturn {
video: {
width: { ideal: 1920 },
height: { ideal: 1440 },
// Prefer widest angle lens (zoom: 1 = no zoom = widest)
// @ts-expect-error - zoom is valid but not in TS types
zoom: { ideal: 1 },
// Try to disable face-tracking auto-focus (not all cameras support this)
// @ts-expect-error - focusMode is valid but not in TS types
focusMode: 'continuous',

View File

@@ -0,0 +1,152 @@
/**
* React hooks for making LLM calls with progress tracking
*
* These hooks integrate the LLM client with React Query for proper
* state management, caching, and UI feedback.
*
* @example
* ```typescript
* import { useLLMCall } from '@/hooks/useLLMCall'
* import { z } from 'zod'
*
* const SentimentSchema = z.object({
* sentiment: z.enum(['positive', 'negative', 'neutral']),
* confidence: z.number(),
* })
*
* function MyComponent() {
* const { mutate, progress, isPending, error, data } = useLLMCall(SentimentSchema)
*
* return (
* <div>
* <button onClick={() => mutate({ prompt: 'Analyze: I love this!' })}>
* Analyze
* </button>
* {progress && <div>{progress.message}</div>}
* {data && <div>Sentiment: {data.data.sentiment}</div>}
* </div>
* )
* }
* ```
*/
import { useState, useCallback } from 'react'
import { useMutation, type UseMutationOptions } from '@tanstack/react-query'
import type { z } from 'zod'
import { llm, type LLMProgress, type LLMResponse } from '@/lib/llm'
/** Request options for LLM call (without schema) */
interface LLMCallRequest {
prompt: string
images?: string[]
provider?: string
model?: string
maxRetries?: number
}
/** Request options for vision call (requires images) */
interface LLMVisionRequest extends LLMCallRequest {
images: string[]
}
/**
* Hook for making type-safe LLM calls with progress tracking
*
* @param schema - Zod schema for validating the LLM response
* @param options - Optional React Query mutation options
*/
export function useLLMCall<T extends z.ZodType>(
schema: T,
options?: Omit<UseMutationOptions<LLMResponse<z.infer<T>>, Error, LLMCallRequest>, 'mutationFn'>
) {
const [progress, setProgress] = useState<LLMProgress | null>(null)
const mutation = useMutation({
mutationFn: async (request: LLMCallRequest) => {
setProgress(null)
return llm.call({
...request,
schema,
onProgress: setProgress,
})
},
onSettled: () => {
setProgress(null)
},
...options,
})
return {
...mutation,
progress,
}
}
/**
* Hook for making vision (image + text) LLM calls with progress tracking
*
* @param schema - Zod schema for validating the LLM response
* @param options - Optional React Query mutation options
*
* @example
* ```typescript
* const { mutate, progress } = useLLMVision(ImageAnalysisSchema)
*
* mutate({
* prompt: 'Describe this image',
* images: ['data:image/jpeg;base64,...'],
* })
* ```
*/
export function useLLMVision<T extends z.ZodType>(
schema: T,
options?: Omit<UseMutationOptions<LLMResponse<z.infer<T>>, Error, LLMVisionRequest>, 'mutationFn'>
) {
const [progress, setProgress] = useState<LLMProgress | null>(null)
const mutation = useMutation({
mutationFn: async (request: LLMVisionRequest) => {
setProgress(null)
return llm.vision({
...request,
schema,
onProgress: setProgress,
})
},
onSettled: () => {
setProgress(null)
},
...options,
})
return {
...mutation,
progress,
}
}
/**
* Hook for getting LLM client status and configuration
*
* @example
* ```typescript
* const { providers, isProviderAvailable, defaultProvider } = useLLMStatus()
*
* if (!isProviderAvailable('openai')) {
* return <div>OpenAI is not configured</div>
* }
* ```
*/
export function useLLMStatus() {
const getProviders = useCallback(() => llm.getProviders(), [])
const isProviderAvailable = useCallback((name: string) => llm.isProviderAvailable(name), [])
const getDefaultProvider = useCallback(() => llm.getDefaultProvider(), [])
const getDefaultModel = useCallback((provider?: string) => llm.getDefaultModel(provider), [])
return {
providers: getProviders(),
isProviderAvailable,
defaultProvider: getDefaultProvider(),
getDefaultModel,
}
}

View File

@@ -117,11 +117,14 @@ export function usePhoneCamera(options: UsePhoneCameraOptions = {}): UsePhoneCam
}
// Request camera with specified facing mode
// Prefer widest angle lens (zoom: 1 = no zoom = widest)
const constraints: MediaStreamConstraints = {
video: {
facingMode: { ideal: targetFacingMode },
width: { ideal: 1280 },
height: { ideal: 720 },
// @ts-expect-error - zoom is valid but not in TS types
zoom: { ideal: 1 },
},
audio: false,
}

View File

@@ -0,0 +1,107 @@
/**
* Hook to check viewer's access level to a player
*
* Used for pre-flight authorization checks before showing UI that requires
* specific access levels.
*/
import { useQuery } from '@tanstack/react-query'
import type { AccessLevel } from '@/lib/classroom'
import { api } from '@/lib/queryClient'
export interface PlayerAccessData {
accessLevel: AccessLevel
isParent: boolean
isTeacher: boolean
isPresent: boolean
/** Classroom ID if the viewer is a teacher */
classroomId?: string
}
/**
* Query key factory for player access
*/
export const playerAccessKeys = {
all: ['player-access'] as const,
detail: (playerId: string) => [...playerAccessKeys.all, playerId] as const,
}
/**
* Hook to get the current viewer's access level to a player
*
* Returns access information including:
* - accessLevel: 'none' | 'teacher-enrolled' | 'teacher-present' | 'parent'
* - isParent: true if viewer is a parent of the player
* - isTeacher: true if player is enrolled in viewer's classroom
* - isPresent: true if player is currently present in viewer's classroom
*/
export function usePlayerAccess(playerId: string) {
return useQuery({
queryKey: playerAccessKeys.detail(playerId),
queryFn: async (): Promise<PlayerAccessData> => {
const response = await api(`players/${playerId}/access`)
if (!response.ok) {
throw new Error('Failed to check player access')
}
return response.json()
},
// Refetch on window focus to catch presence changes
refetchOnWindowFocus: true,
// Keep data fresh - presence can change anytime
staleTime: 30 * 1000, // 30 seconds
})
}
/**
* Helper to check if the viewer can upload photos for a player
*
* Upload requires either:
* - Being a parent (full access)
* - Being a teacher with the student present in classroom
*
* Note: This mirrors the server-side logic in the attachments API
*/
export function canUploadPhotos(access: PlayerAccessData | undefined): boolean {
if (!access) return false
return access.isParent || access.isPresent
}
/**
* Helper to get remediation info for upload-restricted access
*/
export function getUploadRemediation(access: PlayerAccessData | undefined): {
type: 'send-entry-prompt' | 'enroll-student' | 'link-via-family-code' | 'no-access' | null
message: string | null
} {
if (!access) {
return { type: null, message: null }
}
// Can upload - no remediation needed
if (canUploadPhotos(access)) {
return { type: null, message: null }
}
// Teacher with enrolled student, but student not present
if (access.accessLevel === 'teacher-enrolled' && !access.isPresent) {
return {
type: 'send-entry-prompt',
message:
'This student is enrolled in your classroom but not currently present. To upload photos, they need to enter your classroom first.',
}
}
// User has some access but not enough
if (access.accessLevel !== 'none') {
return {
type: 'no-access',
message: "You don't have permission to upload photos for this student.",
}
}
// No access at all
return {
type: 'link-via-family-code',
message: 'Your account is not linked to this student.',
}
}

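A hedged sketch of how these helpers compose in a consumer (the component and copy are illustrative, not from this diff): query access, gate the upload controls with `canUploadPhotos`, and surface the remediation message otherwise:

```typescript
// Hypothetical consumer; only usePlayerAccess, canUploadPhotos, and
// getUploadRemediation are real exports from this file.
function UploadControls({ playerId }: { playerId: string }) {
  const { data: access, isLoading } = usePlayerAccess(playerId)
  if (isLoading) return null
  if (canUploadPhotos(access)) return <button>Upload photo</button>
  const { message } = getUploadRemediation(access)
  // Fallback copy is illustrative; access denial should never be silent.
  return <div role="alert">{message ?? 'Photo upload is unavailable'}</div>
}
```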
View File

@@ -7,6 +7,9 @@ import type { QuadCorners } from '@/types/vision'
/** Frame mode: raw sends uncropped frames, cropped applies calibration */
export type FrameMode = 'raw' | 'cropped'
/** LocalStorage key for persisting session ID */
const STORAGE_KEY = 'remote-camera-session-id'
interface RemoteCameraFrame {
imageData: string // Base64 JPEG
timestamp: number
@@ -31,6 +34,10 @@ interface UseRemoteCameraDesktopReturn {
isTorchAvailable: boolean
/** Error message if connection failed */
error: string | null
/** Current session ID (null if not subscribed) */
currentSessionId: string | null
/** Whether actively trying to reconnect */
isReconnecting: boolean
/** Subscribe to receive frames for a session */
subscribe: (sessionId: string) => void
/** Unsubscribe from the session */
@@ -43,6 +50,10 @@ interface UseRemoteCameraDesktopReturn {
clearCalibration: () => void
/** Set phone's torch state */
setRemoteTorch: (on: boolean) => void
/** Get the persisted session ID (if any) */
getPersistedSessionId: () => string | null
/** Clear persisted session and disconnect */
clearSession: () => void
}
/**
@@ -66,24 +77,69 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
const [isTorchOn, setIsTorchOn] = useState(false)
const [isTorchAvailable, setIsTorchAvailable] = useState(false)
const [error, setError] = useState<string | null>(null)
const currentSessionId = useRef<string | null>(null)
const [currentSessionId, setCurrentSessionId] = useState<string | null>(null)
const [isReconnecting, setIsReconnecting] = useState(false)
// Refs for values needed in callbacks
const currentSessionIdRef = useRef<string | null>(null)
const reconnectAttemptRef = useRef(0)
const reconnectTimeoutRef = useRef<NodeJS.Timeout | null>(null)
// Frame rate calculation
const frameTimestamps = useRef<number[]>([])
// Initialize socket connection
// Helper to persist session ID
const persistSessionId = useCallback((sessionId: string | null) => {
if (sessionId) {
localStorage.setItem(STORAGE_KEY, sessionId)
} else {
localStorage.removeItem(STORAGE_KEY)
}
}, [])
// Helper to get persisted session ID
const getPersistedSessionId = useCallback((): string | null => {
if (typeof window === 'undefined') return null
return localStorage.getItem(STORAGE_KEY)
}, [])
// Initialize socket connection with reconnection support
useEffect(() => {
console.log('[RemoteCameraDesktop] Initializing socket connection...')
const socketInstance = io({
path: '/api/socket',
autoConnect: true,
reconnection: true,
reconnectionDelay: 1000,
reconnectionDelayMax: 5000,
reconnectionAttempts: 10,
})
socketInstance.on('connect', () => {
console.log('[RemoteCameraDesktop] Socket connected! ID:', socketInstance.id)
setIsConnected(true)
// If we have a session ID (either from state or localStorage), re-subscribe
const sessionId = currentSessionIdRef.current || getPersistedSessionId()
if (sessionId) {
console.log('[RemoteCameraDesktop] Re-subscribing to session after reconnect:', sessionId)
setIsReconnecting(true)
socketInstance.emit('remote-camera:subscribe', { sessionId })
}
})
socketInstance.on('disconnect', () => {
socketInstance.on('connect_error', (error) => {
console.error('[RemoteCameraDesktop] Socket connect error:', error)
})
socketInstance.on('disconnect', (reason) => {
console.log('[RemoteCameraDesktop] Socket disconnected:', reason)
setIsConnected(false)
// Don't clear phone connected state immediately - might reconnect
if (reason === 'io server disconnect') {
// Server forced disconnect - clear state
setIsPhoneConnected(false)
}
})
setSocket(socketInstance)
@@ -91,7 +147,7 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
return () => {
socketInstance.disconnect()
}
}, [])
}, [getPersistedSessionId])
const calculateFrameRate = useCallback(() => {
const now = Date.now()
@@ -105,18 +161,25 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
if (!socket) return
const handleConnected = ({ phoneConnected }: { phoneConnected: boolean }) => {
console.log('[RemoteCameraDesktop] Phone connected event:', phoneConnected)
setIsPhoneConnected(phoneConnected)
setIsReconnecting(false)
setError(null)
reconnectAttemptRef.current = 0
}
const handleDisconnected = ({ phoneConnected }: { phoneConnected: boolean }) => {
console.log('[RemoteCameraDesktop] Phone disconnected event:', phoneConnected)
setIsPhoneConnected(phoneConnected)
setLatestFrame(null)
setFrameRate(0)
// Don't clear frame/framerate - keep last state for visual continuity
// Phone might reconnect quickly
}
const handleStatus = ({ phoneConnected }: { phoneConnected: boolean }) => {
console.log('[RemoteCameraDesktop] Status event:', phoneConnected)
setIsPhoneConnected(phoneConnected)
setIsReconnecting(false)
reconnectAttemptRef.current = 0
}
const handleFrame = (frame: RemoteCameraFrame) => {
@@ -135,7 +198,16 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
}
const handleError = ({ error: errorMsg }: { error: string }) => {
console.log('[RemoteCameraDesktop] Error event:', errorMsg)
// If session is invalid/expired, clear the persisted session
if (errorMsg.includes('Invalid') || errorMsg.includes('expired')) {
console.log('[RemoteCameraDesktop] Session invalid, clearing persisted session')
persistSessionId(null)
setCurrentSessionId(null)
currentSessionIdRef.current = null
}
setError(errorMsg)
setIsReconnecting(false)
}
const handleTorchState = ({
@@ -164,7 +236,7 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
socket.off('remote-camera:error', handleError)
socket.off('remote-camera:torch-state', handleTorchState)
}
}, [socket, calculateFrameRate])
}, [socket, calculateFrameRate, persistSessionId])
// Frame rate update interval
useEffect(() => {
@@ -174,23 +246,42 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
const subscribe = useCallback(
(sessionId: string) => {
console.log(
'[RemoteCameraDesktop] Subscribing to session:',
sessionId,
'socket:',
!!socket,
'connected:',
isConnected
)
// Save session ID FIRST, so auto-connect handler can use it
// even if socket isn't connected yet
currentSessionIdRef.current = sessionId
setCurrentSessionId(sessionId)
persistSessionId(sessionId)
setError(null)
if (!socket || !isConnected) {
setError('Socket not connected')
console.log('[RemoteCameraDesktop] Socket not connected yet, will subscribe on connect')
return
}
currentSessionId.current = sessionId
setError(null)
console.log('[RemoteCameraDesktop] Emitting remote-camera:subscribe')
socket.emit('remote-camera:subscribe', { sessionId })
},
[socket, isConnected]
[socket, isConnected, persistSessionId]
)
const unsubscribe = useCallback(() => {
if (!socket || !currentSessionId.current) return
if (!socket || !currentSessionIdRef.current) return
socket.emit('remote-camera:leave', { sessionId: currentSessionId.current })
currentSessionId.current = null
socket.emit('remote-camera:leave', {
sessionId: currentSessionIdRef.current,
})
currentSessionIdRef.current = null
setCurrentSessionId(null)
// Don't clear persisted session - unsubscribe is for temporary disconnect
setIsPhoneConnected(false)
setLatestFrame(null)
setFrameRate(0)
@@ -201,6 +292,30 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
setIsTorchAvailable(false)
}, [socket])
/**
* Clear session completely (forget persisted session)
* Use when user explicitly wants to start fresh
*/
const clearSession = useCallback(() => {
if (socket && currentSessionIdRef.current) {
socket.emit('remote-camera:leave', {
sessionId: currentSessionIdRef.current,
})
}
currentSessionIdRef.current = null
setCurrentSessionId(null)
persistSessionId(null)
setIsPhoneConnected(false)
setLatestFrame(null)
setFrameRate(0)
setError(null)
setVideoDimensions(null)
setFrameMode('raw')
setIsTorchOn(false)
setIsTorchAvailable(false)
setIsReconnecting(false)
}, [socket, persistSessionId])
/**
* Set the phone's frame mode
* - raw: Phone sends uncropped frames (for calibration)
@@ -208,10 +323,10 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
*/
const setPhoneFrameMode = useCallback(
(mode: FrameMode) => {
if (!socket || !currentSessionId.current) return
if (!socket || !currentSessionIdRef.current) return
socket.emit('remote-camera:set-mode', {
sessionId: currentSessionId.current,
sessionId: currentSessionIdRef.current,
mode,
})
setFrameMode(mode)
@@ -225,10 +340,10 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
*/
const sendCalibration = useCallback(
(corners: QuadCorners) => {
if (!socket || !currentSessionId.current) return
if (!socket || !currentSessionIdRef.current) return
socket.emit('remote-camera:set-calibration', {
sessionId: currentSessionId.current,
sessionId: currentSessionIdRef.current,
corners,
})
// Phone will automatically switch to cropped mode when it receives calibration
@@ -242,10 +357,10 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
* This tells the phone to forget the desktop calibration and go back to auto-detection
*/
const clearCalibration = useCallback(() => {
if (!socket || !currentSessionId.current) return
if (!socket || !currentSessionIdRef.current) return
socket.emit('remote-camera:clear-calibration', {
sessionId: currentSessionId.current,
sessionId: currentSessionIdRef.current,
})
}, [socket])
@@ -254,10 +369,10 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
*/
const setRemoteTorch = useCallback(
(on: boolean) => {
if (!socket || !currentSessionId.current) return
if (!socket || !currentSessionIdRef.current) return
socket.emit('remote-camera:set-torch', {
sessionId: currentSessionId.current,
sessionId: currentSessionIdRef.current,
on,
})
// Optimistically update local state
@@ -269,9 +384,9 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
// Cleanup on unmount
useEffect(() => {
return () => {
if (socket && currentSessionId.current) {
if (socket && currentSessionIdRef.current) {
socket.emit('remote-camera:leave', {
sessionId: currentSessionId.current,
sessionId: currentSessionIdRef.current,
})
}
}
@@ -286,11 +401,15 @@ export function useRemoteCameraDesktop(): UseRemoteCameraDesktopReturn {
isTorchOn,
isTorchAvailable,
error,
currentSessionId,
isReconnecting,
subscribe,
unsubscribe,
setPhoneFrameMode,
sendCalibration,
clearCalibration,
setRemoteTorch,
getPersistedSessionId,
clearSession,
}
}

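The unsubscribe/clearSession split matters to callers: `unsubscribe` leaves the persisted ID in localStorage so the session auto-resumes on the next socket connect, while `clearSession` forgets it entirely. A hedged sketch of a control built on that distinction (component and labels are illustrative):

```typescript
// Illustrative consumer of the session-persistence API above.
function RemoteCameraControls() {
  const { currentSessionId, isReconnecting, unsubscribe, clearSession } =
    useRemoteCameraDesktop()

  if (!currentSessionId) return null
  return (
    <div>
      {isReconnecting && <span>Reconnecting…</span>}
      {/* Pause: keeps the persisted ID, so the session resumes automatically */}
      <button onClick={unsubscribe}>Pause</button>
      {/* Forget: clears localStorage so nothing resumes after a reload */}
      <button onClick={clearSession}>Forget session</button>
    </div>
  )
}
```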
View File

@@ -68,8 +68,13 @@ interface UseRemoteCameraPhoneReturn {
export function useRemoteCameraPhone(
options: UseRemoteCameraPhoneOptions = {}
): UseRemoteCameraPhoneReturn {
const { targetFps = 10, jpegQuality = 0.8, targetWidth = 300, rawWidth = 640, onTorchRequest } =
options
const {
targetFps = 10,
jpegQuality = 0.8,
targetWidth = 300,
rawWidth = 640,
onTorchRequest,
} = options
// Keep onTorchRequest in a ref to avoid stale closures
const onTorchRequestRef = useRef(onTorchRequest)
@@ -113,21 +118,48 @@ export function useRemoteCameraPhone(
frameModeRef.current = frameMode
}, [frameMode])
// Initialize socket connection
// Initialize socket connection with reconnection support
useEffect(() => {
console.log('[RemoteCameraPhone] Initializing socket connection...')
const socketInstance = io({
path: '/api/socket',
autoConnect: true,
reconnection: true,
reconnectionDelay: 1000,
reconnectionDelayMax: 5000,
reconnectionAttempts: 10,
})
socketInstance.on('connect', () => {
console.log('[RemoteCameraPhone] Socket connected! ID:', socketInstance.id)
setIsSocketConnected(true)
// Auto-reconnect to session if we have one
const sessionId = sessionIdRef.current
if (sessionId) {
console.log(
'[RemoteCameraPhone] Auto-reconnecting to session after socket reconnect:',
sessionId
)
socketInstance.emit('remote-camera:join', { sessionId })
setIsConnected(true)
isConnectedRef.current = true
}
})
socketInstance.on('disconnect', () => {
socketInstance.on('connect_error', (error) => {
console.error('[RemoteCameraPhone] Socket connect error:', error)
})
socketInstance.on('disconnect', (reason) => {
console.log('[RemoteCameraPhone] Socket disconnected:', reason)
setIsSocketConnected(false)
setIsConnected(false)
isConnectedRef.current = false
// Don't clear isConnected or sessionIdRef - we want to auto-reconnect
// Only clear if server explicitly disconnected us
if (reason === 'io server disconnect') {
setIsConnected(false)
isConnectedRef.current = false
}
})
socketRef.current = socketInstance
@@ -314,14 +346,26 @@ export function useRemoteCameraPhone(
const connect = useCallback(
(sessionId: string) => {
const socket = socketRef.current
if (!socket || !isSocketConnected) {
setError('Socket not connected')
return
}
console.log(
'[RemoteCameraPhone] Connecting to session:',
sessionId,
'socket:',
!!socket,
'connected:',
isSocketConnected
)
// Save session ID FIRST, so auto-connect handler can use it
// even if socket isn't connected yet
sessionIdRef.current = sessionId
setError(null)
if (!socket || !isSocketConnected) {
console.log('[RemoteCameraPhone] Socket not connected yet, will join on connect')
return
}
console.log('[RemoteCameraPhone] Emitting remote-camera:join')
socket.emit('remote-camera:join', { sessionId })
setIsConnected(true)
isConnectedRef.current = true

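The phone side mirrors this: `connect` stores the session ID before checking the socket, so calling it early is safe and the join fires from the connect handler. A sketch of the intended call order, matching the tests earlier in this diff; the component, `applyTorch`, and the ref wiring are assumptions:

```typescript
import { useEffect, useRef } from 'react'

declare function applyTorch(on: boolean): void // assumed torch implementation

// Illustrative phone-side wiring; the hook API matches the tests above.
function PhoneCameraScreen({ sessionId }: { sessionId: string }) {
  const videoRef = useRef<HTMLVideoElement>(null)
  const phone = useRemoteCameraPhone({ onTorchRequest: applyTorch })

  useEffect(() => {
    phone.connect(sessionId) // joins now, or on 'connect' if the socket is pending
    if (videoRef.current) phone.startSending(videoRef.current)
    return () => {
      phone.stopSending()
      phone.disconnect()
    }
    // eslint-disable-next-line react-hooks/exhaustive-deps
  }, [sessionId])

  return <video ref={videoRef} autoPlay playsInline muted />
}
```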
View File

@@ -11,6 +11,7 @@ import type {
PracticeStateEvent,
SessionPausedEvent,
SessionResumedEvent,
VisionFrameEvent,
} from '@/lib/classroom/socket-events'
/**
@@ -64,6 +65,8 @@ export interface UseSessionBroadcastResult {
) => void
/** Send part transition complete event to observers */
sendPartTransitionComplete: () => void
/** Send vision frame to observers (when student has vision mode enabled) */
sendVisionFrame: (imageData: string, detectedValue: number | null, confidence: number) => void
}
export function useSessionBroadcast(
@@ -271,10 +274,31 @@ export function useSessionBroadcast(
console.log('[SessionBroadcast] Emitted part-transition-complete')
}, [sessionId])
// Broadcast vision frame to observers
const sendVisionFrame = useCallback(
(imageData: string, detectedValue: number | null, confidence: number) => {
if (!socketRef.current || !isConnectedRef.current || !sessionId) {
return
}
const event: VisionFrameEvent = {
sessionId,
imageData,
detectedValue,
confidence,
timestamp: Date.now(),
}
socketRef.current.emit('vision-frame', event)
},
[sessionId]
)
return {
isConnected: isConnectedRef.current,
isBroadcasting: isConnectedRef.current && !!state,
sendPartTransition,
sendPartTransitionComplete,
sendVisionFrame,
}
}

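On the student client, `sendVisionFrame` is intended to be fed from the vision detection loop; since every frame carries base64 JPEG data, throttling before emitting is sensible. A hedged sketch; the 500 ms interval and the surrounding plumbing are assumptions, not from this diff:

```typescript
// Illustrative producer: forward detection results to observers at ~2 fps.
const FRAME_BROADCAST_INTERVAL_MS = 500 // assumed; tune for bandwidth
let lastBroadcastAt = 0

function onVisionDetection(
  sendVisionFrame: (imageData: string, value: number | null, confidence: number) => void,
  imageData: string,
  detectedValue: number | null,
  confidence: number
) {
  const now = Date.now()
  if (now - lastBroadcastAt < FRAME_BROADCAST_INTERVAL_MS) return
  lastBroadcastAt = now
  // sendVisionFrame itself no-ops unless connected with a sessionId (guard above).
  sendVisionFrame(imageData, detectedValue, confidence)
}
```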
View File

@@ -10,6 +10,7 @@ import type {
PracticeStateEvent,
SessionPausedEvent,
SessionResumedEvent,
VisionFrameEvent,
} from '@/lib/classroom/socket-events'
/**
@@ -110,6 +111,20 @@ export interface ObservedResult {
recordedAt: number
}
/**
* Vision frame received from student's abacus camera
*/
export interface ObservedVisionFrame {
/** Base64-encoded JPEG image data */
imageData: string
/** Detected abacus value (null if not yet detected) */
detectedValue: number | null
/** Detection confidence (0-1) */
confidence: number
/** When this frame was received by observer */
receivedAt: number
}
interface UseSessionObserverResult {
/** Current observed state (null if not yet received) */
state: ObservedSessionState | null
@@ -117,6 +132,8 @@ interface UseSessionObserverResult {
results: ObservedResult[]
/** Current part transition state (null if not in transition) */
transitionState: ObservedTransitionState | null
/** Latest vision frame from student's camera (null if vision not enabled) */
visionFrame: ObservedVisionFrame | null
/** Whether connected to the session channel */
isConnected: boolean
/** Whether actively observing (connected and joined session) */
@@ -155,6 +172,7 @@ export function useSessionObserver(
const [state, setState] = useState<ObservedSessionState | null>(null)
const [results, setResults] = useState<ObservedResult[]>([])
const [transitionState, setTransitionState] = useState<ObservedTransitionState | null>(null)
const [visionFrame, setVisionFrame] = useState<ObservedVisionFrame | null>(null)
const [isConnected, setIsConnected] = useState(false)
const [isObserving, setIsObserving] = useState(false)
const [error, setError] = useState<string | null>(null)
@@ -174,6 +192,8 @@ export function useSessionObserver(
setIsObserving(false)
setState(null)
setResults([])
setTransitionState(null)
setVisionFrame(null)
recordedProblemsRef.current.clear()
hasSeededHistoryRef.current = false
}
@@ -354,6 +374,16 @@ export function useSessionObserver(
setTransitionState(null)
})
// Listen for vision frames from student's camera
socket.on('vision-frame', (data: VisionFrameEvent) => {
setVisionFrame({
imageData: data.imageData,
detectedValue: data.detectedValue,
confidence: data.confidence,
receivedAt: Date.now(),
})
})
// Listen for session ended event
socket.on('session-ended', () => {
console.log('[SessionObserver] Session ended')
@@ -445,6 +475,7 @@ export function useSessionObserver(
state,
results,
transitionState,
visionFrame,
isConnected,
isObserving,
error,

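On the observer side, `receivedAt` lets the UI tell a live feed from a stalled one. A minimal rendering sketch, assuming raw base64 without a `data:` prefix and a 3-second staleness threshold; neither detail is specified in this diff:

```typescript
// Illustrative observer view; threshold and markup are assumptions.
const STALE_AFTER_MS = 3_000

function VisionFeed({ frame }: { frame: ObservedVisionFrame | null }) {
  if (!frame) return null
  const isStale = Date.now() - frame.receivedAt > STALE_AFTER_MS
  return (
    <figure style={{ opacity: isStale ? 0.4 : 1 }}>
      <img src={`data:image/jpeg;base64,${frame.imageData}`} alt="Student abacus" />
      <figcaption>
        detected: {frame.detectedValue ?? 'none'} ({Math.round(frame.confidence * 100)}%)
      </figcaption>
    </figure>
  )
}
```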
View File

@@ -0,0 +1,450 @@
'use client'
/**
* React Query hooks for worksheet parsing workflow
*
* Provides mutations for:
* - Starting worksheet parsing (POST /parse)
* - Submitting corrections (PATCH /review)
* - Approving and creating session (POST /approve)
*
* Includes optimistic updates for immediate UI feedback.
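*
* A hypothetical end-to-end wiring (identifiers below are assumptions):
*
* @example
* ```typescript
* const startParsing = useStartParsing(playerId, sessionId)
* const submitCorrections = useSubmitCorrections(playerId, sessionId)
* const approve = useApproveAndCreateSession(playerId, sessionId)
*
* startParsing.mutate({ attachmentId })
* // ...user reviews and corrects the parsed problems...
* submitCorrections.mutate({ attachmentId, corrections, markAsReviewed: true })
* approve.mutate(attachmentId)
* ```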
*/
import { useMutation, useQueryClient } from '@tanstack/react-query'
import { api } from '@/lib/queryClient'
import { attachmentKeys, sessionPlanKeys, sessionHistoryKeys } from '@/lib/queryKeys'
import type { WorksheetParsingResult, computeParsingStats } from '@/lib/worksheet-parsing'
import type { ParsingStatus } from '@/db/schema/practice-attachments'
/** Stats returned from parsing */
type ParsingStats = ReturnType<typeof computeParsingStats>
// ============================================================================
// Types
// ============================================================================
/** Extended attachment data with parsing fields */
export interface AttachmentWithParsing {
id: string
filename: string
originalFilename: string | null
mimeType: string
fileSize: number
uploadedAt: string
url: string
originalUrl: string | null
corners: Array<{ x: number; y: number }> | null
rotation: 0 | 90 | 180 | 270
// Parsing fields
parsingStatus: ParsingStatus | null
parsedAt: string | null
parsingError: string | null
rawParsingResult: WorksheetParsingResult | null
approvedResult: WorksheetParsingResult | null
confidenceScore: number | null
needsReview: boolean
sessionCreated: boolean
createdSessionId: string | null
}
/** Response from parse API */
interface ParseResponse {
success: boolean
status: ParsingStatus
result?: WorksheetParsingResult
stats?: ParsingStats
error?: string
attempts?: number
}
/** Response from approve API */
interface ApproveResponse {
success: boolean
sessionId: string
problemCount: number
correctCount: number
accuracy: number | null
skillsExercised: string[]
stats: ParsingStats
}
/** Cached session attachments shape */
interface AttachmentsCache {
attachments: AttachmentWithParsing[]
}
// ============================================================================
// Hooks
// ============================================================================
/** Options for starting parsing */
export interface StartParsingOptions {
attachmentId: string
/** Optional model config ID - uses default if not specified */
modelConfigId?: string
/** Optional additional context/hints for re-parsing */
additionalContext?: string
/** Optional bounding boxes to preserve from user adjustments (keyed by problem index) */
preservedBoundingBoxes?: Record<number, { x: number; y: number; width: number; height: number }>
}
/**
* Hook to start parsing a worksheet attachment
*/
export function useStartParsing(playerId: string, sessionId: string) {
const queryClient = useQueryClient()
const queryKey = attachmentKeys.session(playerId, sessionId)
return useMutation({
mutationFn: async (options: StartParsingOptions | string) => {
// Support both old (string) and new (object) signature for backwards compatibility
const { attachmentId, modelConfigId, additionalContext, preservedBoundingBoxes } =
typeof options === 'string'
? {
attachmentId: options,
modelConfigId: undefined,
additionalContext: undefined,
preservedBoundingBoxes: undefined,
}
: options
// Build request body if we have any options
const body =
modelConfigId || additionalContext || preservedBoundingBoxes
? JSON.stringify({ modelConfigId, additionalContext, preservedBoundingBoxes })
: undefined
const res = await api(`curriculum/${playerId}/attachments/${attachmentId}/parse`, {
method: 'POST',
body,
})
if (!res.ok) {
const error = await res.json()
throw new Error(error.error || 'Failed to start parsing')
}
return res.json() as Promise<ParseResponse>
},
onMutate: async (options) => {
const attachmentId = typeof options === 'string' ? options : options.attachmentId
// Cancel outgoing refetches
await queryClient.cancelQueries({ queryKey })
// Snapshot current state
const previous = queryClient.getQueryData<AttachmentsCache>(queryKey)
// Optimistic update: mark as processing
if (previous) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...previous,
attachments: previous.attachments.map((a) =>
a.id === attachmentId
? { ...a, parsingStatus: 'processing' as ParsingStatus, parsingError: null }
: a
),
})
}
return { previous }
},
onError: (_err, _options, context) => {
// Revert on error
if (context?.previous) {
queryClient.setQueryData(queryKey, context.previous)
}
},
onSuccess: (data, options) => {
const attachmentId = typeof options === 'string' ? options : options.attachmentId
// Update cache with actual result
const current = queryClient.getQueryData<AttachmentsCache>(queryKey)
if (current && data.success) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...current,
attachments: current.attachments.map((a) =>
a.id === attachmentId
? {
...a,
parsingStatus: data.status,
rawParsingResult: data.result ?? null,
confidenceScore: data.result?.overallConfidence ?? null,
needsReview: data.result?.needsReview ?? false,
parsedAt: new Date().toISOString(),
}
: a
),
})
}
},
onSettled: () => {
// Always refetch to ensure consistency
queryClient.invalidateQueries({ queryKey })
},
})
}
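/*
 * Illustrative usage sketch for this hook. The `ParseButton` component and its
 * props are hypothetical; only `useStartParsing` and its options object come from
 * this file, and `isPending` assumes TanStack Query v5's mutation API.
 *
 * ```tsx
 * function ParseButton(props: { playerId: string; sessionId: string; attachmentId: string }) {
 *   const startParsing = useStartParsing(props.playerId, props.sessionId)
 *   return (
 *     <button
 *       disabled={startParsing.isPending}
 *       onClick={() =>
 *         startParsing.mutate({
 *           attachmentId: props.attachmentId,
 *           additionalContext: 'Answers are written in pencil',
 *         })
 *       }
 *     >
 *       {startParsing.isPending ? 'Analyzing…' : 'Parse worksheet'}
 *     </button>
 *   )
 * }
 * ```
 */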
/**
* Hook to submit corrections to parsed problems
*/
export function useSubmitCorrections(playerId: string, sessionId: string) {
const queryClient = useQueryClient()
const queryKey = attachmentKeys.session(playerId, sessionId)
return useMutation({
mutationFn: async ({
attachmentId,
corrections,
markAsReviewed = false,
}: {
attachmentId: string
corrections: Array<{
problemNumber: number
correctedTerms?: number[] | null
correctedStudentAnswer?: number | null
shouldExclude?: boolean
}>
markAsReviewed?: boolean
}) => {
const res = await api(`curriculum/${playerId}/attachments/${attachmentId}/review`, {
method: 'PATCH',
body: JSON.stringify({ corrections, markAsReviewed }),
})
if (!res.ok) {
const error = await res.json()
throw new Error(error.error || 'Failed to submit corrections')
}
return res.json()
},
onSuccess: () => {
// Refetch to get updated data
queryClient.invalidateQueries({ queryKey })
},
})
}
/**
* Hook to approve parsing and create a practice session
*/
export function useApproveAndCreateSession(playerId: string, sessionId: string) {
const queryClient = useQueryClient()
const queryKey = attachmentKeys.session(playerId, sessionId)
return useMutation({
mutationFn: async (attachmentId: string) => {
const res = await api(`curriculum/${playerId}/attachments/${attachmentId}/approve`, {
method: 'POST',
})
if (!res.ok) {
const error = await res.json()
throw new Error(error.error || 'Failed to approve and create session')
}
return res.json() as Promise<ApproveResponse>
},
onMutate: async (attachmentId) => {
// Cancel outgoing refetches
await queryClient.cancelQueries({ queryKey })
// Snapshot current state
const previous = queryClient.getQueryData<AttachmentsCache>(queryKey)
// Optimistic update: mark as creating session
if (previous) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...previous,
attachments: previous.attachments.map((a) =>
a.id === attachmentId ? { ...a, sessionCreated: true } : a
),
})
}
return { previous }
},
onError: (_err, _attachmentId, context) => {
// Revert on error
if (context?.previous) {
queryClient.setQueryData(queryKey, context.previous)
}
},
onSuccess: (data, attachmentId) => {
// Update cache with session ID
const current = queryClient.getQueryData<AttachmentsCache>(queryKey)
if (current && data.success) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...current,
attachments: current.attachments.map((a) =>
a.id === attachmentId
? {
...a,
sessionCreated: true,
createdSessionId: data.sessionId,
parsingStatus: 'approved' as ParsingStatus,
}
: a
),
})
}
// Invalidate session-related queries so new session appears
queryClient.invalidateQueries({ queryKey: sessionPlanKeys.list(playerId) })
queryClient.invalidateQueries({ queryKey: sessionHistoryKeys.list(playerId) })
},
onSettled: () => {
// Always refetch attachments to ensure consistency
queryClient.invalidateQueries({ queryKey })
},
})
}
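/*
 * Sketch of the review → approve flow, assuming a component with both hooks in
 * scope; the `router.push` call and its route path are hypothetical stand-ins.
 *
 * ```typescript
 * const corrections = useSubmitCorrections(playerId, sessionId)
 * const approve = useApproveAndCreateSession(playerId, sessionId)
 *
 * async function finishReview(attachmentId: string) {
 *   // Persist any user fixes, then approve to create the practice session
 *   await corrections.mutateAsync({
 *     attachmentId,
 *     corrections: [{ problemNumber: 3, correctedStudentAnswer: 36 }],
 *     markAsReviewed: true,
 *   })
 *   const { sessionId: createdId } = await approve.mutateAsync(attachmentId)
 *   router.push(`/sessions/${createdId}`)
 * }
 * ```
 */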
/** Options for selective re-parsing */
export interface ReparseSelectedOptions {
attachmentId: string
/** Indices of problems to re-parse (0-based) */
problemIndices: number[]
/** Bounding boxes for each problem (must match problemIndices length) */
boundingBoxes: Array<{ x: number; y: number; width: number; height: number }>
/** Optional additional context/hints for the LLM */
additionalContext?: string
/** Optional model config ID */
modelConfigId?: string
}
/** Response from selective re-parse API */
interface ReparseSelectedResponse {
success: boolean
reparsedCount: number
reparsedIndices: number[]
  updatedResult: WorksheetParsingResult
}
/**
* Hook to re-parse selected problems
*/
export function useReparseSelected(playerId: string, sessionId: string) {
const queryClient = useQueryClient()
const queryKey = attachmentKeys.session(playerId, sessionId)
return useMutation({
mutationFn: async (options: ReparseSelectedOptions) => {
const { attachmentId, problemIndices, boundingBoxes, additionalContext, modelConfigId } =
options
const res = await api(`curriculum/${playerId}/attachments/${attachmentId}/parse-selected`, {
method: 'POST',
body: JSON.stringify({
problemIndices,
boundingBoxes,
additionalContext,
modelConfigId,
}),
})
if (!res.ok) {
const error = await res.json()
throw new Error(error.error || 'Failed to re-parse selected problems')
}
return res.json() as Promise<ReparseSelectedResponse>
},
onMutate: async (options) => {
// Cancel outgoing refetches
await queryClient.cancelQueries({ queryKey })
// Snapshot current state
const previous = queryClient.getQueryData<AttachmentsCache>(queryKey)
// Optimistic update: mark as processing
if (previous) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...previous,
attachments: previous.attachments.map((a) =>
a.id === options.attachmentId
? { ...a, parsingStatus: 'processing' as ParsingStatus }
: a
),
})
}
return { previous }
},
onError: (_err, _options, context) => {
// Revert on error
if (context?.previous) {
queryClient.setQueryData(queryKey, context.previous)
}
},
onSuccess: (data, options) => {
// Update cache with actual result
const current = queryClient.getQueryData<AttachmentsCache>(queryKey)
if (current && data.success) {
queryClient.setQueryData<AttachmentsCache>(queryKey, {
...current,
attachments: current.attachments.map((a) =>
a.id === options.attachmentId
? {
...a,
parsingStatus: data.updatedResult.needsReview
? ('needs_review' as ParsingStatus)
: ('approved' as ParsingStatus),
rawParsingResult: data.updatedResult,
confidenceScore: data.updatedResult.overallConfidence,
needsReview: data.updatedResult.needsReview,
}
: a
),
})
}
},
onSettled: () => {
// Always refetch to ensure consistency
queryClient.invalidateQueries({ queryKey })
},
})
}
/**
* Get parsing status badge color
*/
export function getParsingStatusColor(status: ParsingStatus | null): string {
switch (status) {
case 'processing':
return 'blue.500'
case 'needs_review':
return 'yellow.500'
case 'approved':
return 'green.500'
case 'failed':
return 'red.500'
default:
return 'gray.500'
}
}
/**
* Get parsing status display text
*/
export function getParsingStatusText(status: ParsingStatus | null, problemCount?: number): string {
switch (status) {
case 'processing':
return 'Analyzing...'
case 'needs_review':
return problemCount ? `${problemCount} problems (needs review)` : 'Needs review'
case 'approved':
return problemCount ? `${problemCount} problems` : 'Ready'
case 'failed':
return 'Failed'
default:
return 'Not parsed'
}
}
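/*
 * Minimal sketch of the two status helpers used together; the `Badge` component
 * and the `attachment`/`problems` values are assumed to be in scope.
 *
 * ```tsx
 * <Badge color={getParsingStatusColor(attachment.parsingStatus)}>
 *   {getParsingStatusText(attachment.parsingStatus, problems?.length)}
 * </Badge>
 * ```
 */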

View File

@@ -15,7 +15,6 @@ import {
classrooms,
parentChild,
type Player,
players,
} from '@/db/schema'
/**
@@ -236,3 +235,107 @@ export async function isTeacherOf(userId: string, playerId: string): Promise<boolean>
return !!enrollment
}
/**
* Remediation types for authorization errors
*/
export type RemediationType =
| 'send-entry-prompt' // Teacher needs student to enter classroom
| 'enroll-student' // Teacher needs to enroll student first
| 'link-via-family-code' // User can link via family code
| 'create-classroom' // User needs to create a classroom to be a teacher
| 'no-access' // No remediation available
/**
* Structured authorization error for API responses
*/
export interface AuthorizationError {
error: string
message: string
accessLevel: AccessLevel
remediation: {
type: RemediationType
description: string
/** For send-entry-prompt: the classroom to send the prompt from */
classroomId?: string
/** For send-entry-prompt/enroll-student: the player to act on */
playerId?: string
/** Label for the action button in the UI */
actionLabel?: string
}
}
/**
* Generate a personalized authorization error based on the user's relationship
* with the student and the action they're trying to perform.
*/
export function generateAuthorizationError(
access: PlayerAccess,
action: PlayerAction,
context?: { actionDescription?: string }
): AuthorizationError {
const actionDesc = context?.actionDescription ?? action
// Case 1: Teacher with enrolled student, but student not present
// This is the most common case - teacher needs student to enter classroom
if (access.accessLevel === 'teacher-enrolled' && !access.isPresent) {
return {
error: 'Student not in classroom',
message: `This student is enrolled in your classroom but not currently present. To ${actionDesc}, they need to enter your classroom first.`,
accessLevel: access.accessLevel,
remediation: {
type: 'send-entry-prompt',
description:
"Send a notification to the student's parent to have them enter your classroom.",
classroomId: access.classroomId,
playerId: access.playerId,
actionLabel: 'Send Entry Prompt',
},
}
}
// Case 2: User has a classroom but student is not enrolled
if (access.accessLevel === 'none' && access.classroomId) {
return {
error: 'Student not enrolled',
message: 'This student is not enrolled in your classroom.',
accessLevel: access.accessLevel,
remediation: {
type: 'enroll-student',
description:
'You need to enroll this student in your classroom first. Ask their parent for their family code to send an enrollment request.',
classroomId: access.classroomId,
playerId: access.playerId,
actionLabel: 'Enroll Student',
},
}
}
// Case 3: User has no classroom and no parent relationship
if (access.accessLevel === 'none') {
return {
error: 'No access to this student',
message: 'Your account is not linked to this student.',
accessLevel: access.accessLevel,
remediation: {
type: 'link-via-family-code',
description:
"To access this student, you need their Family Code. Ask their parent to share it with you from the student's profile page.",
playerId: access.playerId,
actionLabel: 'Enter Family Code',
},
}
}
// Fallback for any other case
return {
error: 'Not authorized',
message: `You do not have permission to ${actionDesc} for this student.`,
accessLevel: access.accessLevel,
remediation: {
type: 'no-access',
description: "Contact the student's parent or your administrator for access.",
playerId: access.playerId,
},
}
}
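/*
 * Sketch of how an API route might use this helper; the `canPerformAction`
 * call shape and the 'upload-photo' action name are assumptions based on the
 * exports in this module, not confirmed signatures.
 *
 * ```typescript
 * const access = await getPlayerAccess(userId, playerId)
 * if (!canPerformAction(access, 'upload-photo')) {
 *   const authError = generateAuthorizationError(access, 'upload-photo', {
 *     actionDescription: 'upload practice photos',
 *   })
 *   // Return the structured error so the client can render remediation UI
 *   return Response.json(authError, { status: 403 })
 * }
 * ```
 */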

View File

@@ -17,11 +17,14 @@ export {
type PlayerAccess,
type PlayerAction,
type AccessiblePlayers,
type RemediationType,
type AuthorizationError,
getPlayerAccess,
canPerformAction,
getAccessiblePlayers,
isParentOf,
isTeacherOf,
generateAuthorizationError,
} from './access-control'
// Family Management

View File

@@ -268,6 +268,22 @@ export interface PartTransitionCompleteEvent {
sessionId: string
}
/**
* Vision frame from student's abacus camera.
* Sent when student has vision mode enabled during practice.
*/
export interface VisionFrameEvent {
sessionId: string
/** Base64-encoded JPEG image data */
imageData: string
/** Detected abacus value (null if not yet detected) */
detectedValue: number | null
/** Detection confidence (0-1) */
confidence: number
/** Timestamp when frame was captured */
timestamp: number
}
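/*
 * Sketch of a student client broadcasting a frame on the classroom channel
 * (the 'vision-frame' event is declared below); the `socket` instance, the
 * capture `canvas`, and the `detection` helper are assumptions.
 *
 * ```typescript
 * const frame: VisionFrameEvent = {
 *   sessionId,
 *   imageData: canvas.toDataURL('image/jpeg', 0.7).split(',')[1], // strip data URL prefix
 *   detectedValue: detection?.value ?? null,
 *   confidence: detection?.confidence ?? 0,
 *   timestamp: Date.now(),
 * }
 * socket.emit('vision-frame', frame)
 * ```
 */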
/**
* Sent when a student starts a practice session while present in a classroom.
* Allows teacher to see session status update in real-time.
@@ -401,6 +417,7 @@ export interface ClassroomServerToClientEvents {
'session-resumed': (data: SessionResumedEvent) => void
'part-transition': (data: PartTransitionEvent) => void
'part-transition-complete': (data: PartTransitionCompleteEvent) => void
'vision-frame': (data: VisionFrameEvent) => void
// Session status events (classroom channel - for teacher's active sessions view)
'session-started': (data: SessionStartedEvent) => void
@@ -427,6 +444,7 @@ export interface ClassroomClientToServerEvents {
// Session state broadcasts (from student client)
'practice-state': (data: PracticeStateEvent) => void
'tutorial-state': (data: TutorialStateEvent) => void
'vision-frame': (data: VisionFrameEvent) => void
// Observer controls
'tutorial-control': (data: TutorialControlEvent) => void

View File

@@ -366,7 +366,7 @@ export function getPhaseSkillConstraints(phaseId: string): PhaseSkillConstraints
if (phase.usesFiveComplement && phase.targetNumber <= 4) {
// Target the specific five-complement skill
const skill = FIVE_COMPLEMENT_ADD[phase.targetNumber]
target.fiveComplements = { [skill]: true } as Partial<SkillSet['fiveComplements']>
target.fiveComplements = { [skill]: true } as SkillSet['fiveComplements']
required.fiveComplements = {
[skill]: true,
} as SkillSet['fiveComplements']
@@ -386,7 +386,7 @@ export function getPhaseSkillConstraints(phaseId: string): PhaseSkillConstraints
if (phase.usesFiveComplement && Math.abs(phase.targetNumber) <= 4) {
const skill = FIVE_COMPLEMENT_SUB[Math.abs(phase.targetNumber)]
target.fiveComplementsSub = { [skill]: true } as Partial<SkillSet['fiveComplementsSub']>
target.fiveComplementsSub = { [skill]: true } as SkillSet['fiveComplementsSub']
required.fiveComplementsSub = {
[skill]: true,
} as SkillSet['fiveComplementsSub']
@@ -405,7 +405,7 @@ export function getPhaseSkillConstraints(phaseId: string): PhaseSkillConstraints
required.basic.heavenBead = true
const skill = TEN_COMPLEMENT_ADD[phase.targetNumber]
target.tenComplements = { [skill]: true } as Partial<SkillSet['tenComplements']>
target.tenComplements = { [skill]: true } as SkillSet['tenComplements']
required.tenComplements = { [skill]: true } as SkillSet['tenComplements']
if (!phase.usesFiveComplement) {
@@ -423,7 +423,7 @@ export function getPhaseSkillConstraints(phaseId: string): PhaseSkillConstraints
required.basic.heavenBeadSubtraction = true
const skill = TEN_COMPLEMENT_SUB[Math.abs(phase.targetNumber)]
target.tenComplementsSub = { [skill]: true } as Partial<SkillSet['tenComplementsSub']>
target.tenComplementsSub = { [skill]: true } as SkillSet['tenComplementsSub']
required.tenComplementsSub = {
[skill]: true,
} as SkillSet['tenComplementsSub']
@@ -529,6 +529,10 @@ function createFullSkillSet(): SkillSet {
'-2=+8-10': true,
'-1=+9-10': true,
},
advanced: {
cascadingCarry: true,
cascadingBorrow: true,
},
}
}

View File

@@ -43,10 +43,10 @@ function formatDiagnosticsMessage(diagnostics: GenerationDiagnostics): string {
lines.push(
'This means no valid sequence of terms could be built with the given skill/budget constraints.'
)
if (diagnostics.enabledRequiredSkills.length === 0) {
lines.push('FIX: No required skills are enabled - enable at least some basic skills.')
if (diagnostics.enabledAllowedSkills.length === 0) {
lines.push('FIX: No allowed skills are enabled - enable at least some basic skills.')
} else {
lines.push(`Enabled skills: ${diagnostics.enabledRequiredSkills.slice(0, 5).join(', ')}...`)
lines.push(`Enabled skills: ${diagnostics.enabledAllowedSkills.slice(0, 5).join(', ')}...`)
}
} else if (diagnostics.skillMatchFailures > 0) {
lines.push(

apps/web/src/lib/llm.ts Normal file
View File

@@ -0,0 +1,53 @@
/**
* LLM Client Singleton for apps/web
*
* This module provides a singleton instance of the LLM client that reads
* configuration from environment variables. The client supports multiple
* providers (OpenAI, Anthropic) and provides type-safe LLM calls with
* Zod schema validation.
*
* @example
* ```typescript
* import { llm } from '@/lib/llm'
* import { z } from 'zod'
*
* const response = await llm.call({
* prompt: 'Analyze this text...',
* schema: z.object({ sentiment: z.enum(['positive', 'negative', 'neutral']) }),
* })
* ```
*
* @see packages/llm-client/README.md for full documentation
*/
import { LLMClient } from '@soroban/llm-client'
// Create singleton instance
// Configuration is automatically loaded from environment variables:
// - LLM_DEFAULT_PROVIDER: Default provider (default: 'openai')
// - LLM_DEFAULT_MODEL: Default model override
// - LLM_OPENAI_API_KEY: OpenAI API key
// - LLM_OPENAI_BASE_URL: OpenAI base URL (optional)
// - LLM_ANTHROPIC_API_KEY: Anthropic API key
// - LLM_ANTHROPIC_BASE_URL: Anthropic base URL (optional)
export const llm = new LLMClient()
// Re-export types and utilities for convenience
export type {
LLMClientConfig,
LLMRequest,
LLMResponse,
LLMProgress,
LLMProvider,
ProviderConfig,
ProviderRequest,
ProviderResponse,
ValidationFeedback,
ReasoningEffort,
} from '@soroban/llm-client'
export {
LLMValidationError,
LLMApiError,
ProviderNotConfiguredError,
} from '@soroban/llm-client'

View File

@@ -58,3 +58,21 @@ export const entryPromptKeys = {
all: ['entry-prompts'] as const,
pending: () => [...entryPromptKeys.all, 'pending'] as const,
}
// Attachment query keys (for practice photos and worksheet parsing)
export const attachmentKeys = {
// All attachments for a player
all: (playerId: string) => ['attachments', playerId] as const,
// Attachments for a specific session
session: (playerId: string, sessionId: string) =>
[...attachmentKeys.all(playerId), 'session', sessionId] as const,
// Single attachment detail (includes parsing data)
detail: (playerId: string, attachmentId: string) =>
[...attachmentKeys.all(playerId), attachmentId] as const,
// Parsing-specific data for an attachment
parsing: (playerId: string, attachmentId: string) =>
[...attachmentKeys.detail(playerId, attachmentId), 'parsing'] as const,
}
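/*
 * Illustration of the key hierarchy: because session/detail/parsing keys all
 * extend the same tuple prefix, invalidating `attachmentKeys.all(playerId)`
 * matches every attachment query for that player. The queryClient call is the
 * standard TanStack Query API.
 *
 * ```typescript
 * attachmentKeys.session('p1', 's1') // ['attachments', 'p1', 'session', 's1']
 * attachmentKeys.parsing('p1', 'a1') // ['attachments', 'p1', 'a1', 'parsing']
 *
 * // Refetch everything attachment-related for a player:
 * queryClient.invalidateQueries({ queryKey: attachmentKeys.all('p1') })
 * ```
 */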

View File

@@ -0,0 +1,328 @@
/**
* @vitest-environment node
*
* Tests for Remote Camera Session Manager
*
* Tests session creation, TTL management, activity-based renewal,
* and calibration persistence.
*/
import { afterEach, beforeEach, describe, expect, it, vi } from 'vitest'
import {
createRemoteCameraSession,
deleteRemoteCameraSession,
getOrCreateSession,
getRemoteCameraSession,
getSessionCalibration,
getSessionCount,
markPhoneConnected,
markPhoneDisconnected,
renewSessionTTL,
setSessionCalibration,
} from '../session-manager'
describe('Remote Camera Session Manager', () => {
beforeEach(() => {
// Clear all sessions before each test
// Access the global sessions map directly
if (globalThis.__remoteCameraSessions) {
globalThis.__remoteCameraSessions.clear()
}
})
afterEach(() => {
vi.restoreAllMocks()
})
describe('createRemoteCameraSession', () => {
it('should create a new session with unique ID', () => {
const session = createRemoteCameraSession()
expect(session.id).toBeDefined()
expect(session.id.length).toBeGreaterThan(0)
expect(session.phoneConnected).toBe(false)
})
it('should set correct timestamps on creation', () => {
const now = new Date()
vi.setSystemTime(now)
const session = createRemoteCameraSession()
expect(session.createdAt.getTime()).toBe(now.getTime())
expect(session.lastActivityAt.getTime()).toBe(now.getTime())
// TTL should be 60 minutes
expect(session.expiresAt.getTime()).toBe(now.getTime() + 60 * 60 * 1000)
})
it('should create multiple sessions with unique IDs', () => {
const session1 = createRemoteCameraSession()
const session2 = createRemoteCameraSession()
expect(session1.id).not.toBe(session2.id)
expect(getSessionCount()).toBe(2)
})
})
describe('getRemoteCameraSession', () => {
it('should retrieve an existing session', () => {
const created = createRemoteCameraSession()
const retrieved = getRemoteCameraSession(created.id)
expect(retrieved).not.toBeNull()
expect(retrieved?.id).toBe(created.id)
})
it('should return null for non-existent session', () => {
const session = getRemoteCameraSession('non-existent-id')
expect(session).toBeNull()
})
it('should return null for expired session', () => {
const session = createRemoteCameraSession()
const sessionId = session.id
// Advance time past expiration (61 minutes)
vi.setSystemTime(new Date(Date.now() + 61 * 60 * 1000))
const retrieved = getRemoteCameraSession(sessionId)
expect(retrieved).toBeNull()
})
})
describe('getOrCreateSession', () => {
it('should create new session with provided ID if not exists', () => {
const customId = 'my-custom-session-id'
const session = getOrCreateSession(customId)
expect(session.id).toBe(customId)
expect(session.phoneConnected).toBe(false)
})
it('should return existing session if not expired', () => {
const customId = 'existing-session'
const original = getOrCreateSession(customId)
// Mark phone connected to verify we get same session
markPhoneConnected(customId)
const retrieved = getOrCreateSession(customId)
expect(retrieved.id).toBe(original.id)
expect(retrieved.phoneConnected).toBe(true)
})
it('should renew TTL when accessing existing session', () => {
const now = new Date()
vi.setSystemTime(now)
const customId = 'session-to-renew'
const original = getOrCreateSession(customId)
const originalExpiry = original.expiresAt.getTime()
// Advance time by 30 minutes
vi.setSystemTime(new Date(now.getTime() + 30 * 60 * 1000))
const retrieved = getOrCreateSession(customId)
// Expiry should be extended from current time
expect(retrieved.expiresAt.getTime()).toBeGreaterThan(originalExpiry)
})
it('should create new session if existing one expired', () => {
const customId = 'expired-session'
const original = getOrCreateSession(customId)
markPhoneConnected(customId) // Mark to distinguish
// Advance time past expiration
vi.setSystemTime(new Date(Date.now() + 61 * 60 * 1000))
const newSession = getOrCreateSession(customId)
// Should be a fresh session (not phone connected)
expect(newSession.id).toBe(customId)
expect(newSession.phoneConnected).toBe(false)
})
})
describe('renewSessionTTL', () => {
it('should extend session expiration time', () => {
const now = new Date()
vi.setSystemTime(now)
const session = createRemoteCameraSession()
const originalExpiry = session.expiresAt.getTime()
// Advance time by 30 minutes
vi.setSystemTime(new Date(now.getTime() + 30 * 60 * 1000))
const renewed = renewSessionTTL(session.id)
expect(renewed).toBe(true)
const updatedSession = getRemoteCameraSession(session.id)
expect(updatedSession?.expiresAt.getTime()).toBeGreaterThan(originalExpiry)
})
it('should update lastActivityAt', () => {
const now = new Date()
vi.setSystemTime(now)
const session = createRemoteCameraSession()
// Advance time
const later = new Date(now.getTime() + 10 * 60 * 1000)
vi.setSystemTime(later)
renewSessionTTL(session.id)
const updatedSession = getRemoteCameraSession(session.id)
expect(updatedSession?.lastActivityAt.getTime()).toBe(later.getTime())
})
it('should return false for non-existent session', () => {
const result = renewSessionTTL('non-existent')
expect(result).toBe(false)
})
})
describe('calibration persistence', () => {
const testCalibration = {
corners: {
topLeft: { x: 10, y: 10 },
topRight: { x: 100, y: 10 },
bottomLeft: { x: 10, y: 100 },
bottomRight: { x: 100, y: 100 },
},
}
it('should store calibration data', () => {
const session = createRemoteCameraSession()
const result = setSessionCalibration(session.id, testCalibration)
expect(result).toBe(true)
})
it('should retrieve calibration data', () => {
const session = createRemoteCameraSession()
setSessionCalibration(session.id, testCalibration)
const retrieved = getSessionCalibration(session.id)
expect(retrieved).toEqual(testCalibration)
})
it('should return null for session without calibration', () => {
const session = createRemoteCameraSession()
const calibration = getSessionCalibration(session.id)
expect(calibration).toBeNull()
})
it('should return null for non-existent session', () => {
const calibration = getSessionCalibration('non-existent')
expect(calibration).toBeNull()
})
it('should renew TTL when setting calibration', () => {
const now = new Date()
vi.setSystemTime(now)
const session = createRemoteCameraSession()
const originalExpiry = session.expiresAt.getTime()
// Advance time
vi.setSystemTime(new Date(now.getTime() + 30 * 60 * 1000))
setSessionCalibration(session.id, testCalibration)
const updatedSession = getRemoteCameraSession(session.id)
expect(updatedSession?.expiresAt.getTime()).toBeGreaterThan(originalExpiry)
})
it('should persist calibration across session retrievals', () => {
const customId = 'calibrated-session'
const session = getOrCreateSession(customId)
setSessionCalibration(session.id, testCalibration)
// Simulate reconnection by getting session again
const reconnected = getOrCreateSession(customId)
expect(reconnected.calibration).toEqual(testCalibration)
})
})
describe('phone connection state', () => {
it('should mark phone as connected', () => {
const session = createRemoteCameraSession()
const result = markPhoneConnected(session.id)
expect(result).toBe(true)
const updated = getRemoteCameraSession(session.id)
expect(updated?.phoneConnected).toBe(true)
})
it('should mark phone as disconnected', () => {
const session = createRemoteCameraSession()
markPhoneConnected(session.id)
const result = markPhoneDisconnected(session.id)
expect(result).toBe(true)
const updated = getRemoteCameraSession(session.id)
expect(updated?.phoneConnected).toBe(false)
})
it('should extend TTL when phone connects', () => {
const now = new Date()
vi.setSystemTime(now)
const session = createRemoteCameraSession()
// Advance time
vi.setSystemTime(new Date(now.getTime() + 30 * 60 * 1000))
markPhoneConnected(session.id)
const updated = getRemoteCameraSession(session.id)
// Expiry should be 60 mins from now (not from creation)
expect(updated?.expiresAt.getTime()).toBeGreaterThan(now.getTime() + 60 * 60 * 1000)
})
it('should return false for non-existent session', () => {
expect(markPhoneConnected('non-existent')).toBe(false)
expect(markPhoneDisconnected('non-existent')).toBe(false)
})
})
describe('deleteRemoteCameraSession', () => {
it('should delete existing session', () => {
const session = createRemoteCameraSession()
const result = deleteRemoteCameraSession(session.id)
expect(result).toBe(true)
expect(getRemoteCameraSession(session.id)).toBeNull()
})
it('should return false for non-existent session', () => {
const result = deleteRemoteCameraSession('non-existent')
expect(result).toBe(false)
})
})
describe('session count', () => {
it('should track total sessions', () => {
expect(getSessionCount()).toBe(0)
createRemoteCameraSession()
expect(getSessionCount()).toBe(1)
createRemoteCameraSession()
expect(getSessionCount()).toBe(2)
})
})
})

View File

@@ -2,7 +2,8 @@
* Remote Camera Session Manager
*
* Manages in-memory sessions for phone-to-desktop camera streaming.
* Sessions are short-lived (10 minute TTL) and stored in memory.
* Sessions have a 60-minute TTL but are renewed on activity.
* Sessions persist across page reloads via session ID stored client-side.
*/
import { createId } from '@paralleldrive/cuid2'
@@ -11,7 +12,17 @@ export interface RemoteCameraSession {
id: string
createdAt: Date
expiresAt: Date
lastActivityAt: Date
phoneConnected: boolean
/** Calibration data sent from desktop (persists for reconnects) */
calibration?: {
corners: {
topLeft: { x: number; y: number }
topRight: { x: number; y: number }
bottomLeft: { x: number; y: number }
bottomRight: { x: number; y: number }
}
}
}
// In-memory session storage
@@ -21,7 +32,7 @@ declare global {
var __remoteCameraSessions: Map<string, RemoteCameraSession> | undefined
}
const SESSION_TTL_MS = 10 * 60 * 1000 // 10 minutes
const SESSION_TTL_MS = 60 * 60 * 1000 // 60 minutes
const CLEANUP_INTERVAL_MS = 60 * 1000 // 1 minute
function getSessions(): Map<string, RemoteCameraSession> {
@@ -44,6 +55,7 @@ export function createRemoteCameraSession(): RemoteCameraSession {
id: createId(),
createdAt: now,
expiresAt: new Date(now.getTime() + SESSION_TTL_MS),
lastActivityAt: now,
phoneConnected: false,
}
@@ -51,6 +63,84 @@ export function createRemoteCameraSession(): RemoteCameraSession {
return session
}
/**
* Get or create a session by ID
* If the session exists and isn't expired, returns it (renewed)
* If the session doesn't exist, creates a new one with the given ID
*/
export function getOrCreateSession(sessionId: string): RemoteCameraSession {
const sessions = getSessions()
const existing = sessions.get(sessionId)
const now = new Date()
if (existing && now <= existing.expiresAt) {
// Renew TTL on access
existing.expiresAt = new Date(now.getTime() + SESSION_TTL_MS)
existing.lastActivityAt = now
return existing
}
// Create new session with provided ID
const session: RemoteCameraSession = {
id: sessionId,
createdAt: now,
expiresAt: new Date(now.getTime() + SESSION_TTL_MS),
lastActivityAt: now,
phoneConnected: false,
}
sessions.set(session.id, session)
return session
}
/**
* Renew session TTL (call on activity to keep session alive)
*/
export function renewSessionTTL(sessionId: string): boolean {
const sessions = getSessions()
const session = sessions.get(sessionId)
if (!session) return false
const now = new Date()
session.expiresAt = new Date(now.getTime() + SESSION_TTL_MS)
session.lastActivityAt = now
return true
}
/**
* Store calibration data in session (persists for reconnects)
*/
export function setSessionCalibration(
sessionId: string,
calibration: RemoteCameraSession['calibration']
): boolean {
const sessions = getSessions()
const session = sessions.get(sessionId)
if (!session) return false
session.calibration = calibration
// Also renew TTL
const now = new Date()
session.expiresAt = new Date(now.getTime() + SESSION_TTL_MS)
session.lastActivityAt = now
return true
}
/**
* Get calibration data from session
*/
export function getSessionCalibration(
sessionId: string
): RemoteCameraSession['calibration'] | null {
const sessions = getSessions()
const session = sessions.get(sessionId)
if (!session) return null
return session.calibration || null
}
/**
* Get a session by ID
*/

View File

@@ -0,0 +1,203 @@
/**
* Traditional CV-based bead detection for abacus columns
*
* Uses edge detection and contour analysis instead of ML.
* Works by detecting the reckoning bar and analyzing bead positions
* relative to it.
*/
export interface BeadAnalysis {
/** Detected digit value (0-9) */
digit: number
/** Confidence based on detection clarity */
confidence: number
/** Position of reckoning bar (0-1, relative to column height) */
reckoningBarPosition: number
/** Number of beads detected above bar */
heavenBeadsDetected: number
/** Whether heaven bead is active (touching bar) */
heavenActive: boolean
/** Number of beads detected below bar */
earthBeadsDetected: number
/** Number of active earth beads (touching bar) */
earthActiveCount: number
}
/**
* Analyze a single column image to detect bead positions
*
* @param imageData - Grayscale image data of a single column
* @returns Analysis result with detected digit
*/
export function analyzeColumn(imageData: ImageData): BeadAnalysis {
const { width, height, data } = imageData
// Step 1: Create vertical intensity profile (average each row)
const rowIntensities = new Float32Array(height)
for (let y = 0; y < height; y++) {
let sum = 0
for (let x = 0; x < width; x++) {
const idx = (y * width + x) * 4
sum += data[idx] // Use red channel (grayscale)
}
rowIntensities[y] = sum / width
}
// Step 2: Find reckoning bar (darkest horizontal region)
// The bar is typically a dark horizontal line in the middle third
const searchStart = Math.floor(height * 0.25)
const searchEnd = Math.floor(height * 0.75)
let darkestRow = searchStart
let darkestValue = 255
// Use a sliding window to find the darkest band
const windowSize = Math.max(3, Math.floor(height * 0.03))
for (let y = searchStart; y < searchEnd - windowSize; y++) {
let windowSum = 0
for (let i = 0; i < windowSize; i++) {
windowSum += rowIntensities[y + i]
}
const windowAvg = windowSum / windowSize
if (windowAvg < darkestValue) {
darkestValue = windowAvg
darkestRow = y + Math.floor(windowSize / 2)
}
}
const reckoningBarPosition = darkestRow / height
// Step 3: Analyze heaven section (above bar)
// Find peaks in intensity (beads are darker than background)
const heavenStart = 0
const heavenEnd = darkestRow - windowSize
const heavenPeaks = findPeaks(rowIntensities, heavenStart, heavenEnd, height)
// Heaven bead is active if it's close to the reckoning bar
const heavenActiveThreshold = height * 0.15 // Within 15% of bar
const heavenActive =
heavenPeaks.length > 0 &&
darkestRow - heavenPeaks[heavenPeaks.length - 1] < heavenActiveThreshold
// Step 4: Analyze earth section (below bar)
const earthStart = darkestRow + windowSize
const earthEnd = height
const earthPeaks = findPeaks(rowIntensities, earthStart, earthEnd, height)
// Earth beads are active if they're close to the reckoning bar
const earthActiveCount = earthPeaks.filter(
(peak) => peak - darkestRow < heavenActiveThreshold
).length
// Step 5: Calculate digit value
// Heaven bead = 5, each earth bead = 1
const heavenValue = heavenActive ? 5 : 0
const earthValue = Math.min(earthActiveCount, 4) // Max 4 earth beads
const digit = heavenValue + earthValue
// Step 6: Calculate confidence based on detection quality
// Higher confidence if we found expected number of beads and clear bar
const expectedHeavenBeads = 1
const expectedEarthBeads = 4
const heavenConfidence = heavenPeaks.length === expectedHeavenBeads ? 1.0 : 0.5
const earthConfidence =
earthPeaks.length >= expectedEarthBeads ? 1.0 : earthPeaks.length / expectedEarthBeads
const barContrast = (255 - darkestValue) / 255 // How dark is the bar?
const confidence = (heavenConfidence + earthConfidence + barContrast) / 3
return {
digit,
confidence,
reckoningBarPosition,
heavenBeadsDetected: heavenPeaks.length,
heavenActive,
earthBeadsDetected: earthPeaks.length,
earthActiveCount,
}
}
/**
* Find peaks (local minima = dark beads) in intensity profile
*/
function findPeaks(
intensities: Float32Array,
start: number,
end: number,
totalHeight: number
): number[] {
const peaks: number[] = []
const minPeakDistance = Math.floor(totalHeight * 0.05) // Min 5% height between peaks
const threshold = calculateAdaptiveThreshold(intensities, start, end)
let lastPeak = -minPeakDistance * 2
for (let y = start + 2; y < end - 2; y++) {
const current = intensities[y]
// Local minimum (darker than neighbors)
if (
current < intensities[y - 1] &&
current < intensities[y + 1] &&
current < intensities[y - 2] &&
current < intensities[y + 2] &&
current < threshold &&
y - lastPeak >= minPeakDistance
) {
peaks.push(y)
lastPeak = y
}
}
return peaks
}
/**
* Calculate adaptive threshold for peak detection
*/
function calculateAdaptiveThreshold(intensities: Float32Array, start: number, end: number): number {
let sum = 0
let min = 255
let max = 0
for (let y = start; y < end; y++) {
sum += intensities[y]
min = Math.min(min, intensities[y])
max = Math.max(max, intensities[y])
}
const avg = sum / (end - start)
// Threshold halfway between average and minimum
return (avg + min) / 2
}
/**
* Analyze multiple columns
*/
export function analyzeColumns(columnImages: ImageData[]): BeadAnalysis[] {
return columnImages.map(analyzeColumn)
}
/**
* Convert bead analyses to digits
*/
export function analysesToDigits(analyses: BeadAnalysis[]): {
digits: number[]
confidences: number[]
minConfidence: number
} {
const digits = analyses.map((a) => a.digit)
const confidences = analyses.map((a) => a.confidence)
const minConfidence = confidences.length > 0 ? Math.min(...confidences) : 0
return { digits, confidences, minConfidence }
}
/**
* Convert digits to number
*/
export function digitsToNumber(digits: number[]): number {
if (digits.length === 0) return 0
return digits.reduce((acc, d) => acc * 10 + d, 0)
}
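/*
 * End-to-end sketch of the detection pipeline in this file, assuming
 * `columnImages` already holds one grayscale ImageData per abacus column;
 * the 0.6 acceptance threshold is illustrative, not from the original.
 *
 * ```typescript
 * const analyses = analyzeColumns(columnImages)
 * const { digits, minConfidence } = analysesToDigits(analyses)
 * if (minConfidence >= 0.6) {
 *   const value = digitsToNumber(digits) // e.g. [1, 2, 3] -> 123
 * }
 * ```
 */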

View File

@@ -214,3 +214,59 @@ export function getMinConfidence(confidences: number[]): number {
if (confidences.length === 0) return 0
return Math.min(...confidences)
}
/**
* Process an image frame for classification (for remote camera frames)
*
* @param image - Image element with the frame
* @param calibration - Calibration grid (if null, assumes entire image is the abacus)
* @param columnCount - Number of columns to slice into
* @param columnWidth - Target column width for model input
* @param columnHeight - Target column height for model input
* @returns Array of preprocessed column ImageData ready for classification
*/
export function processImageFrame(
image: HTMLImageElement,
calibration: CalibrationGrid | null,
columnCount: number,
columnWidth: number = 64,
columnHeight: number = 128
): ImageData[] {
// Create canvas for image frame
const canvas = document.createElement('canvas')
canvas.width = image.naturalWidth || image.width
canvas.height = image.naturalHeight || image.height
const ctx = canvas.getContext('2d')!
// Draw image frame
ctx.drawImage(image, 0, 0)
let roiData: ImageData
if (calibration) {
// Extract ROI using calibration
roiData = extractROI(ctx, calibration.roi)
} else {
// No calibration - use entire image as ROI (already cropped by phone)
roiData = ctx.getImageData(0, 0, canvas.width, canvas.height)
}
// Create a synthetic calibration for slicing if none provided
const sliceCalibration: CalibrationGrid = calibration ?? {
roi: { x: 0, y: 0, width: canvas.width, height: canvas.height },
columnCount,
columnDividers: Array.from({ length: columnCount - 1 }, (_, i) => (i + 1) / columnCount),
rotation: 0,
}
// Slice into columns
const columns = sliceIntoColumns(roiData, sliceCalibration)
// Preprocess each column
return columns.map((col) => {
// Convert to grayscale
const gray = toGrayscale(col)
// Resize to model input size
return resizeImageData(gray, columnWidth, columnHeight)
})
}

View File

@@ -0,0 +1,99 @@
/**
* Shared utilities for cropping images to bounding box regions.
*
* Used by:
* - Server-side: parse-selected/route.ts (with sharp)
* - Client-side: PhotoViewerEditor.tsx (with canvas)
*/
/** Default padding around bounding box (2% of image dimensions) */
export const CROP_PADDING = 0.02
/** Normalized bounding box (0-1 coordinates) */
export interface NormalizedBoundingBox {
x: number
y: number
width: number
height: number
}
/** Pixel-based crop region */
export interface CropRegion {
left: number
top: number
width: number
height: number
}
/**
* Calculate pixel-based crop region from normalized bounding box.
* This is the shared algorithm used by both server (sharp) and client (canvas).
*
* @param box - Normalized bounding box (0-1 coordinates)
* @param imageWidth - Actual image width in pixels
* @param imageHeight - Actual image height in pixels
* @param padding - Padding around the box as fraction of image (default: 0.02 = 2%)
* @returns Pixel-based crop region clamped to image bounds
*/
export function calculateCropRegion(
box: NormalizedBoundingBox,
imageWidth: number,
imageHeight: number,
padding: number = CROP_PADDING
): CropRegion {
// Convert normalized coordinates to pixels with padding
const left = Math.max(0, Math.floor((box.x - padding) * imageWidth))
const top = Math.max(0, Math.floor((box.y - padding) * imageHeight))
const width = Math.min(imageWidth - left, Math.ceil((box.width + padding * 2) * imageWidth))
const height = Math.min(imageHeight - top, Math.ceil((box.height + padding * 2) * imageHeight))
return { left, top, width, height }
}
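/*
 * Worked example of the crop math with the default 2% padding, a 2000x1000 px
 * image, and a hypothetical box { x: 0.10, y: 0.20, width: 0.25, height: 0.10 }:
 *
 * ```typescript
 * // left   = floor((0.10 - 0.02) * 2000) = 160
 * // top    = floor((0.20 - 0.02) * 1000) = 180
 * // width  = min(2000 - 160, ceil((0.25 + 0.04) * 2000)) = 580
 * // height = min(1000 - 180, ceil((0.10 + 0.04) * 1000)) = 140
 * calculateCropRegion({ x: 0.1, y: 0.2, width: 0.25, height: 0.1 }, 2000, 1000)
 * // => { left: 160, top: 180, width: 580, height: 140 }
 * ```
 */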
/**
* Crop an image to a bounding box region using canvas (client-side).
*
* @param imageUrl - URL of the image to crop
* @param box - Normalized bounding box (0-1 coordinates)
* @param padding - Padding around the box as fraction of image (default: 0.02 = 2%)
* @returns Promise resolving to cropped image as data URL
*/
export async function cropImageWithCanvas(
imageUrl: string,
box: NormalizedBoundingBox,
padding: number = CROP_PADDING
): Promise<string> {
return new Promise((resolve, reject) => {
const img = new Image()
img.crossOrigin = 'anonymous'
img.onload = () => {
const { naturalWidth: imageWidth, naturalHeight: imageHeight } = img
const region = calculateCropRegion(box, imageWidth, imageHeight, padding)
// Create canvas and draw cropped region
const canvas = document.createElement('canvas')
canvas.width = region.width
canvas.height = region.height
const ctx = canvas.getContext('2d')
if (!ctx) {
reject(new Error('Failed to get canvas context'))
return
}
ctx.drawImage(
img,
region.left,
region.top,
region.width,
region.height,
0,
0,
region.width,
region.height
)
resolve(canvas.toDataURL('image/jpeg', 0.9))
}
img.onerror = () => reject(new Error('Failed to load image'))
img.src = imageUrl
})
}

View File

@@ -0,0 +1,91 @@
/**
* Worksheet Parsing Module
*
* Provides LLM-powered parsing of abacus workbook page images.
* Extracts arithmetic problems and student answers, then converts
* them into practice session data.
*
* @example
* ```typescript
* import {
* parseWorksheetImage,
* convertToSlotResults,
* type WorksheetParsingResult,
* } from '@/lib/worksheet-parsing'
*
* // Parse the worksheet image
* const result = await parseWorksheetImage(imageDataUrl, {
* onProgress: (p) => setProgress(p.message),
* })
*
* // Review and correct if needed
* if (result.data.needsReview) {
* // Show review UI
* }
*
* // Convert to session data
* const { slotResults, summary } = convertToSlotResults(result.data)
*
* // Create session
* await createSession({ playerId, slotResults, status: 'completed' })
* ```
*/
// Schemas
export {
BoundingBoxSchema,
ProblemFormatSchema,
ProblemTermSchema,
ParsedProblemSchema,
PageMetadataSchema,
WorksheetParsingResultSchema,
ProblemCorrectionSchema,
ReparseRequestSchema,
type BoundingBox,
type ProblemFormat,
type ParsedProblem,
type PageMetadata,
type WorksheetParsingResult,
type ProblemCorrection,
type ReparseRequest,
} from './schemas'
// Parser
export {
parseWorksheetImage,
reparseProblems,
computeParsingStats,
applyCorrections,
// Model configurations
PARSING_MODEL_CONFIGS,
getDefaultModelConfig,
getModelConfig,
type ModelConfig,
type ParseWorksheetOptions,
type ParseWorksheetResult,
} from './parser'
// Prompt Builder
export {
buildWorksheetParsingPrompt,
buildReparsePrompt,
type PromptOptions,
} from './prompt-builder'
// Session Converter
export {
convertToSlotResults,
validateParsedProblems,
computeSkillStats,
type ConversionOptions,
type ConversionResult,
} from './session-converter'
// Crop Utilities
export {
CROP_PADDING,
calculateCropRegion,
cropImageWithCanvas,
type NormalizedBoundingBox,
type CropRegion,
} from './crop-utils'

View File

@@ -0,0 +1,322 @@
/**
* Worksheet Parser
*
* Uses the LLM client to parse abacus workbook page images
* into structured problem data.
*/
import { llm, type LLMProgress, type ReasoningEffort } from '@/lib/llm'
import { WorksheetParsingResultSchema, type WorksheetParsingResult } from './schemas'
import { buildWorksheetParsingPrompt, type PromptOptions } from './prompt-builder'
/**
* Available model configurations for worksheet parsing
*/
export interface ModelConfig {
/** Unique identifier for this config */
id: string
/** Display name for UI */
name: string
/** Provider name */
provider: 'openai' | 'anthropic'
/** Model ID to use */
model: string
/** Reasoning effort (for GPT-5.2+) */
reasoningEffort?: ReasoningEffort
/** Description of when to use this config */
description: string
/** Whether this is the default config */
isDefault?: boolean
}
/**
* Available model configurations for worksheet parsing
*/
export const PARSING_MODEL_CONFIGS: ModelConfig[] = [
{
id: 'gpt-5.2-thinking',
name: 'GPT-5.2 Thinking',
provider: 'openai',
model: 'gpt-5.2',
reasoningEffort: 'medium',
description: 'Best balance of quality and speed for worksheet analysis',
isDefault: true,
},
{
id: 'gpt-5.2-thinking-high',
name: 'GPT-5.2 Thinking (High)',
provider: 'openai',
model: 'gpt-5.2',
reasoningEffort: 'high',
description: 'More thorough reasoning, better for difficult handwriting',
},
{
id: 'gpt-5.2-instant',
name: 'GPT-5.2 Instant',
provider: 'openai',
model: 'gpt-5.2-chat-latest',
reasoningEffort: 'none',
description: 'Faster but less accurate, good for clear worksheets',
},
{
id: 'claude-sonnet',
name: 'Claude Sonnet 4',
provider: 'anthropic',
model: 'claude-sonnet-4-20250514',
description: 'Alternative provider, good for comparison',
},
]
/**
* Get the default model config
*/
export function getDefaultModelConfig(): ModelConfig {
return PARSING_MODEL_CONFIGS.find((c) => c.isDefault) ?? PARSING_MODEL_CONFIGS[0]
}
/**
* Get a model config by ID
*/
export function getModelConfig(id: string): ModelConfig | undefined {
return PARSING_MODEL_CONFIGS.find((c) => c.id === id)
}
/**
* Options for parsing a worksheet
*/
export interface ParseWorksheetOptions {
/** Progress callback for UI updates */
onProgress?: (progress: LLMProgress) => void
/** Maximum retries on validation failure */
maxRetries?: number
/** Additional prompt customization */
promptOptions?: PromptOptions
/** Specific provider to use (defaults to configured default) */
provider?: string
/** Specific model to use (defaults to configured default) */
model?: string
/** Reasoning effort for GPT-5.2+ models */
reasoningEffort?: ReasoningEffort
/** Use a specific model config by ID */
modelConfigId?: string
}
/**
* Result of worksheet parsing
*/
export interface ParseWorksheetResult {
/** Parsed worksheet data */
data: WorksheetParsingResult
/** Number of LLM call attempts made */
attempts: number
/** Provider used */
provider: string
/** Model used */
model: string
/** Token usage */
usage: {
promptTokens: number
completionTokens: number
totalTokens: number
}
/** Raw JSON response from the LLM (before validation/parsing) */
rawResponse: string
/** JSON Schema sent to the LLM (with field descriptions) */
jsonSchema: string
}
/**
* Parse an abacus workbook page image
*
* @param imageDataUrl - Base64-encoded data URL of the worksheet image
* @param options - Parsing options
* @returns Structured parsing result
*
* @example
* ```typescript
* import { parseWorksheetImage } from '@/lib/worksheet-parsing'
*
* const result = await parseWorksheetImage(imageDataUrl, {
* onProgress: (p) => console.log(p.message),
* })
*
* console.log(`Found ${result.data.problems.length} problems`)
* console.log(`Overall confidence: ${result.data.overallConfidence}`)
* ```
*/
export async function parseWorksheetImage(
imageDataUrl: string,
options: ParseWorksheetOptions = {}
): Promise<ParseWorksheetResult> {
const {
onProgress,
maxRetries = 2,
promptOptions = {},
provider: explicitProvider,
model: explicitModel,
reasoningEffort: explicitReasoningEffort,
modelConfigId,
} = options
// Resolve model config
let provider = explicitProvider
let model = explicitModel
let reasoningEffort = explicitReasoningEffort
if (modelConfigId) {
const config = getModelConfig(modelConfigId)
if (config) {
provider = provider ?? config.provider
model = model ?? config.model
reasoningEffort = reasoningEffort ?? config.reasoningEffort
}
} else if (!provider && !model) {
// Use default config
const defaultConfig = getDefaultModelConfig()
provider = defaultConfig.provider
model = defaultConfig.model
reasoningEffort = reasoningEffort ?? defaultConfig.reasoningEffort
}
// Build the prompt
const prompt = buildWorksheetParsingPrompt(promptOptions)
// Make the vision call
const response = await llm.vision({
prompt,
images: [imageDataUrl],
schema: WorksheetParsingResultSchema,
maxRetries,
onProgress,
provider,
model,
reasoningEffort,
})
return {
data: response.data,
attempts: response.attempts,
provider: response.provider,
model: response.model,
usage: response.usage,
rawResponse: response.rawResponse,
jsonSchema: response.jsonSchema,
}
}
/**
* Re-parse specific problems with additional context
*
* Used when the user provides corrections or hints about specific problems
* that were incorrectly parsed in the first attempt.
*
* @param imageDataUrl - Base64-encoded data URL of the worksheet image
* @param problemNumbers - Which problems to focus on
* @param additionalContext - User-provided context or hints
* @param originalWarnings - Warnings from the original parse
* @param options - Parsing options
*/
export async function reparseProblems(
imageDataUrl: string,
problemNumbers: number[],
additionalContext: string,
originalWarnings: string[],
options: Omit<ParseWorksheetOptions, 'promptOptions'> = {}
): Promise<ParseWorksheetResult> {
return parseWorksheetImage(imageDataUrl, {
...options,
promptOptions: {
focusProblemNumbers: problemNumbers,
additionalContext: `${additionalContext}
Previous warnings for these problems:
${originalWarnings.map((w) => `- ${w}`).join('\n')}`,
},
})
}
/**
* Compute problem statistics from parsed results
*/
export function computeParsingStats(result: WorksheetParsingResult) {
const problems = result.problems
// Count problems needing review (low confidence)
const lowConfidenceProblems = problems.filter(
(p) => p.termsConfidence < 0.7 || p.studentAnswerConfidence < 0.7
)
// Count problems with answers
const answeredProblems = problems.filter((p) => p.studentAnswer !== null)
// Compute accuracy if answers are present
const correctAnswers = answeredProblems.filter((p) => p.studentAnswer === p.correctAnswer)
return {
totalProblems: problems.length,
answeredProblems: answeredProblems.length,
unansweredProblems: problems.length - answeredProblems.length,
correctAnswers: correctAnswers.length,
incorrectAnswers: answeredProblems.length - correctAnswers.length,
accuracy: answeredProblems.length > 0 ? correctAnswers.length / answeredProblems.length : null,
lowConfidenceCount: lowConfidenceProblems.length,
problemsNeedingReview: lowConfidenceProblems.map((p) => p.problemNumber),
warningCount: result.warnings.length,
}
}
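/*
 * Worked example: for a hypothetical result with three problems where #1 is
 * answered correctly, #2 is answered incorrectly with confidence below 0.7,
 * and #3 is left blank, the stats come out as:
 *
 * ```typescript
 * const stats = computeParsingStats(result)
 * // totalProblems: 3, answeredProblems: 2, unansweredProblems: 1,
 * // correctAnswers: 1, incorrectAnswers: 1, accuracy: 0.5,
 * // lowConfidenceCount: 1, problemsNeedingReview: [2]
 * ```
 */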
/**
* Merge corrections into parsing result
*
* Creates a new result with user corrections applied.
*/
export function applyCorrections(
result: WorksheetParsingResult,
corrections: Array<{
problemNumber: number
correctedTerms?: number[] | null
correctedStudentAnswer?: number | null
shouldExclude?: boolean
}>
): WorksheetParsingResult {
const correctionMap = new Map(corrections.map((c) => [c.problemNumber, c]))
const correctedProblems = result.problems
.map((problem) => {
const correction = correctionMap.get(problem.problemNumber)
if (!correction) return problem
if (correction.shouldExclude) return null
return {
...problem,
terms: correction.correctedTerms ?? problem.terms,
correctAnswer: correction.correctedTerms
? correction.correctedTerms.reduce((sum, t) => sum + t, 0)
: problem.correctAnswer,
studentAnswer:
correction.correctedStudentAnswer !== undefined
? correction.correctedStudentAnswer
: problem.studentAnswer,
// Boost confidence since user verified
termsConfidence: correction.correctedTerms ? 1.0 : problem.termsConfidence,
studentAnswerConfidence:
correction.correctedStudentAnswer !== undefined ? 1.0 : problem.studentAnswerConfidence,
}
})
.filter((p): p is NonNullable<typeof p> => p !== null)
  // Recalculate overall confidence (guard against every problem being excluded,
  // which would otherwise divide by zero and yield NaN)
  const avgConfidence =
    correctedProblems.length > 0
      ? correctedProblems.reduce(
          (sum, p) => sum + (p.termsConfidence + p.studentAnswerConfidence) / 2,
          0
        ) / correctedProblems.length
      : 0
return {
...result,
problems: correctedProblems,
overallConfidence: avgConfidence,
needsReview: correctedProblems.some(
(p) => p.termsConfidence < 0.7 || p.studentAnswerConfidence < 0.7
),
}
}
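/*
 * Sketch with hypothetical corrections: fixing the terms of problem 3
 * recomputes its correctAnswer as the sum of the new terms and pins its
 * termsConfidence to 1.0, while excluded problems are dropped entirely.
 *
 * ```typescript
 * const corrected = applyCorrections(result, [
 *   { problemNumber: 3, correctedTerms: [45, -17, 8] }, // correctAnswer becomes 36
 *   { problemNumber: 7, shouldExclude: true },          // removed from the result
 * ])
 * ```
 */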

View File

@@ -0,0 +1,206 @@
/**
* Prompt Builder for Worksheet Parsing
*
* Constructs the prompt used to parse abacus workbook pages.
* The prompt provides context about worksheet formats and
* guides the LLM on how to extract problem data.
*/
/**
* Options for customizing the parsing prompt
*/
export interface PromptOptions {
/** Additional context from a previous parse attempt (for re-parsing) */
additionalContext?: string
/** Specific problem numbers to focus on (for re-parsing) */
focusProblemNumbers?: number[]
/** Hint about expected format if known */
expectedFormat?: 'vertical' | 'linear' | 'mixed'
/** Expected number of problems (if known from worksheet metadata) */
expectedProblemCount?: number
}
/**
* Build the main worksheet parsing prompt
*
* This prompt is designed to guide the LLM in extracting
* structured data from abacus workbook page images.
*/
export function buildWorksheetParsingPrompt(options: PromptOptions = {}): string {
const parts: string[] = []
// Main task description with strong anti-sycophancy framing
parts.push(`You are a precise OCR system analyzing an image of an abacus workbook page. Your task is pure TRANSCRIPTION - you must extract exactly what is printed and written on the page, with no interpretation or correction.
## CRITICAL: Transcription Rules
**YOU ARE A TRANSCRIBER, NOT AN EVALUATOR.**
1. **Read the PRINTED problem terms FIRST** - these are typeset/printed numbers, completely independent of any handwriting
2. **Read the HANDWRITTEN student answer SEPARATELY** - this is in the answer box, written by a child
3. **NEVER let the student's answer influence how you read the printed terms**
4. **Students make mistakes - that is EXPECTED and VALUABLE data**
⚠️ **ANTI-SYCOPHANCY WARNING**: Do NOT reverse-engineer problem terms from student answers. If a student wrote "42" but the printed problem is "35 + 12 = ___" (correct answer 47), you MUST report:
- terms: [35, 12]
- correctAnswer: 47
- studentAnswer: 42
The student got it WRONG. Report the mistake. We NEED this data to help them improve.
## Worksheet Context
This is a Japanese soroban (abacus) practice worksheet. These worksheets typically contain:
- 1-4 rows of problems
- 8-10 problems per row (32-40 problems on a full page)
- Each problem has 2-7 terms (numbers to add or subtract)
- Problems are either VERTICAL format (stacked columns) or LINEAR format (horizontal equations)
## Problem Format Recognition
**VERTICAL FORMAT:**
Problems are arranged in columns with numbers stacked vertically. Addition is implied between numbers. Subtraction is indicated by a minus sign. The answer box is at the bottom.
⚠️ **CRITICAL: MINUS SIGN DETECTION** ⚠️
Minus signs in vertical problems are SMALL but EXTREMELY IMPORTANT. Missing a minus sign completely changes the answer!
**How minus signs appear:**
- A small horizontal dash/line to the LEFT of a number
- May appear as: − (minus), - (hyphen), or a short horizontal stroke
- Often smaller than you expect - look carefully!
- Sometimes positioned slightly above or below the number's vertical center
**Examples:**
ADDITION problem (NO minus signs):
45
17 ← no symbol = add this number
8 ← no symbol = add this number
----
[70] terms = [45, 17, 8], correctAnswer = 70
SUBTRACTION problem (HAS minus sign):
45
-17 ← small minus sign before 17 = SUBTRACT
8
----
[36] terms = [45, -17, 8], correctAnswer = 36
CRITICAL DIFFERENCE: The ONLY visual difference is that tiny minus sign, but:
- Without minus: 45 + 17 + 8 = 70
- With minus: 45 - 17 + 8 = 36
**You MUST look carefully at the LEFT side of each number for minus signs!**
If student wrote "36": terms = [45, -17, 8], correctAnswer = 36, studentAnswer = 36 ✓
If student wrote "38": terms = [45, -17, 8], correctAnswer = 36, studentAnswer = 38 ✗ (WRONG - report it!)
If student wrote nothing: terms = [45, -17, 8], correctAnswer = 36, studentAnswer = null
**LINEAR FORMAT:**
Problems are written as horizontal equations with operators between numbers.
Example: 45 - 17 + 8 = [___]
Same rules apply - read printed terms independently from handwritten answer.
## Student Answer Reading
- Look carefully at the answer boxes/spaces for student handwriting
- Student handwriting may be messy - try to interpret digits carefully
- If an answer is empty, set studentAnswer to null
- If you cannot confidently read the answer, set studentAnswer to null and lower studentAnswerConfidence
- Common handwriting confusions to watch for:
- 1 vs 7 (some students cross their 7s)
- 4 vs 9
- 5 vs 6
- 0 vs 6
**REMEMBER**: A student getting a problem wrong is NORMAL and EXPECTED. Do not "help" them by changing the problem terms.
## Bounding Boxes
Provide bounding boxes in normalized coordinates where (0,0) is the TOP-LEFT corner of the image and (1,1) is the BOTTOM-RIGHT corner.
- x: distance from LEFT edge (0.0 = left edge, 0.5 = middle, 1.0 = right edge)
- y: distance from TOP edge (0.0 = top edge, 0.5 = middle, 1.0 = bottom edge)
- width: horizontal size as fraction of image width
- height: vertical size as fraction of image height
**Example coordinates for a 4x8 grid of problems:**
- Top-left problem (#1): x ≈ 0.02-0.10, y ≈ 0.05-0.15
- Top-right problem (#8): x ≈ 0.85-0.95, y ≈ 0.05-0.15
- Bottom-left problem (#25): x ≈ 0.02-0.10, y ≈ 0.75-0.85
- Bottom-right problem (#32): x ≈ 0.85-0.95, y ≈ 0.75-0.85
The problemBoundingBox should encompass the entire problem including all terms and the answer area.
The answerBoundingBox should tightly surround just the answer box/area.
**Be precise with coordinates** - they are used to highlight problems in the UI for human review.`)
// Add expected format hint if provided
if (options.expectedFormat) {
parts.push(`
## Format Hint
The problems on this page are expected to be in ${options.expectedFormat.toUpperCase()} format.`)
}
// Add expected count if provided
if (options.expectedProblemCount) {
parts.push(`
## Expected Problem Count
This worksheet should contain approximately ${options.expectedProblemCount} problems. If you detect significantly more or fewer, double-check for missed or duplicate problems.`)
}
// Add focus problems for re-parsing
if (options.focusProblemNumbers && options.focusProblemNumbers.length > 0) {
parts.push(`
## Focus Problems
Pay special attention to problems: ${options.focusProblemNumbers.join(', ')}. The previous parsing attempt had issues with these problems.`)
}
// Add additional context from user
if (options.additionalContext) {
parts.push(`
## Additional Context from User
${options.additionalContext}`)
}
// Final instructions
parts.push(`
## Important Notes
1. **Reading Order**: Extract problems in reading order (left to right, top to bottom)
2. **Problem Numbers**: Use the printed problem numbers on the worksheet (1, 2, 3, etc.)
3. **Term Signs**: First term is always positive. Subsequent terms are positive for addition, negative for subtraction
4. **⚠️ MINUS SIGNS**: Look VERY carefully for small minus signs to the left of numbers - they are small but critical!
5. **Confidence Scores**: Be honest about confidence - lower scores help identify problems needing review
6. **Warnings**: Include any issues you notice (cropped problems, smudges, unclear digits)
7. **needsReview**: Set to true if any problem has confidence below 0.7 or significant warnings
Now analyze the worksheet image and extract all problems.`)
return parts.join('')
}
/**
* Build a prompt for re-parsing specific problems with additional context
*/
export function buildReparsePrompt(
problemNumbers: number[],
additionalContext: string,
originalWarnings: string[]
): string {
return buildWorksheetParsingPrompt({
focusProblemNumbers: problemNumbers,
additionalContext: `${additionalContext}
Previous warnings for these problems:
${originalWarnings.map((w) => `- ${w}`).join('\n')}`,
})
}
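// Usage sketch (illustrative, not part of this commit). The arguments follow
// buildReparsePrompt's signature above; the context and warning strings are
// invented for the example.
//
// const prompt = buildReparsePrompt(
//   [5, 12],
//   'The student writes 7s with a line through them',
//   ['Problem 5 terms partially obscured', 'Problem 12 answer illegible']
// )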


@@ -0,0 +1,269 @@
/**
* Worksheet Parsing Schemas
*
* These Zod schemas define the structure of LLM responses when parsing
* abacus workbook pages. The .describe() annotations are critical -
* they are automatically extracted and included in the LLM prompt.
*/
import { z } from 'zod'
/**
* Bounding box in normalized coordinates (0-1)
* Represents a rectangular region on the worksheet image
*/
export const BoundingBoxSchema = z
.object({
x: z
.number()
.min(0)
.max(1)
.describe(
'Left edge of the box as a fraction of image width (0 = left edge, 1 = right edge)'
),
y: z
.number()
.min(0)
.max(1)
.describe(
'Top edge of the box as a fraction of image height (0 = top edge, 1 = bottom edge)'
),
width: z.number().min(0).max(1).describe('Width of the box as a fraction of image width'),
height: z.number().min(0).max(1).describe('Height of the box as a fraction of image height'),
})
.describe('Rectangular region on the worksheet image, in normalized 0-1 coordinates')
export type BoundingBox = z.infer<typeof BoundingBoxSchema>
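/**
 * Sketch (not part of this commit): convert a normalized BoundingBox into
 * pixel coordinates for UI highlighting, given the rendered image size.
 */
export function toPixelRect(
  box: BoundingBox,
  imageWidth: number,
  imageHeight: number
): { left: number; top: number; width: number; height: number } {
  return {
    left: box.x * imageWidth,
    top: box.y * imageHeight,
    width: box.width * imageWidth,
    height: box.height * imageHeight,
  }
}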
/**
* Problem format detected in the worksheet
*/
export const ProblemFormatSchema = z
.enum(['vertical', 'linear'])
.describe(
'Format of the problem: "vertical" for stacked column addition/subtraction with answer box below, ' +
'"linear" for horizontal format like "a + b - c = ___"'
)
export type ProblemFormat = z.infer<typeof ProblemFormatSchema>
/**
* Single term in a problem (number with operation)
*/
export const ProblemTermSchema = z
.number()
.int()
.describe(
'A single term in the problem. Positive numbers represent addition, ' +
'negative numbers represent subtraction. The first term is always positive. ' +
'Example: for "45 - 17 + 8", terms are [45, -17, 8]'
)
/**
* A single parsed problem from the worksheet
*/
export const ParsedProblemSchema = z
.object({
// Identification
problemNumber: z
.number()
.int()
.min(1)
.describe('The problem number as printed on the worksheet (1, 2, 3, etc.)'),
row: z
.number()
.int()
.min(1)
.describe('Which row of problems this belongs to (1 = top row, 2 = second row, etc.)'),
column: z
.number()
.int()
.min(1)
.describe('Which column position in the row (1 = leftmost, counting right)'),
// Problem content
format: ProblemFormatSchema,
terms: z
.array(ProblemTermSchema)
.min(2)
.max(7)
.describe(
'All terms in the problem, in order. First term is positive. ' +
'Subsequent terms are positive for addition, negative for subtraction. ' +
'Example: "45 - 17 + 8" → [45, -17, 8]'
),
correctAnswer: z.number().int().describe('The mathematically correct answer to this problem'),
// Student work
studentAnswer: z
.number()
.int()
.nullable()
.describe(
'The answer the student wrote, if readable. Null if the answer box is empty, ' +
"illegible, or you cannot confidently read the student's handwriting"
),
studentAnswerConfidence: z
.number()
.min(0)
.max(1)
.describe(
"Confidence in reading the student's answer (0 = not readable/empty, 1 = perfectly clear). " +
'Use 0.5-0.7 for somewhat legible, 0.8-0.9 for mostly clear, 1.0 for crystal clear'
),
// Problem extraction confidence
termsConfidence: z
.number()
.min(0)
.max(1)
.describe(
'Confidence in correctly reading all the problem terms (0 = very unsure, 1 = certain). ' +
'Lower confidence if digits are smudged, cropped, or partially obscured'
),
// Bounding boxes for UI highlighting
problemBoundingBox: BoundingBoxSchema.describe(
'Bounding box around the entire problem (including all terms and answer area)'
),
answerBoundingBox: BoundingBoxSchema.nullable().describe(
"Bounding box around just the student's answer area. Null if no answer area is visible"
),
})
.describe('A single arithmetic problem extracted from the worksheet')
export type ParsedProblem = z.infer<typeof ParsedProblemSchema>
/**
* Detected worksheet format
*/
export const WorksheetFormatSchema = z
.enum(['vertical', 'linear', 'mixed'])
.describe(
'Overall format of problems on this page: ' +
'"vertical" if all problems are stacked column format, ' +
'"linear" if all are horizontal equation format, ' +
'"mixed" if the page contains both formats'
)
/**
* Page metadata extracted from the worksheet
*/
export const PageMetadataSchema = z
.object({
lessonId: z
.string()
.nullable()
.describe(
'Lesson identifier if printed on the page (e.g., "Lesson 5", "L5", "Unit 2 Lesson 3"). ' +
'Null if no lesson identifier is visible'
),
weekId: z
.string()
.nullable()
.describe(
'Week identifier if printed on the page (e.g., "Week 4", "W4"). ' +
'Null if no week identifier is visible'
),
pageNumber: z
.number()
.int()
.nullable()
.describe('Page number if printed on the page. Null if no page number is visible'),
detectedFormat: WorksheetFormatSchema,
totalRows: z
.number()
.int()
.min(1)
.max(6)
.describe('Number of rows of problems on this page (typically 1-4)'),
problemsPerRow: z
.number()
.int()
.min(1)
.max(12)
.describe('Average number of problems per row (typically 8-10)'),
})
.describe('Metadata about the worksheet page layout and identifiers')
export type PageMetadata = z.infer<typeof PageMetadataSchema>
/**
* Complete worksheet parsing result
*/
export const WorksheetParsingResultSchema = z
.object({
problems: z
.array(ParsedProblemSchema)
.min(1)
.describe(
'All problems detected on the worksheet, in reading order (left to right, top to bottom)'
),
pageMetadata: PageMetadataSchema,
overallConfidence: z
.number()
.min(0)
.max(1)
.describe(
'Overall confidence in the parsing accuracy (0 = very uncertain, 1 = highly confident). ' +
'Based on image quality, problem clarity, and answer legibility'
),
warnings: z
.array(z.string())
.describe(
'List of issues encountered during parsing, such as: ' +
'"Problem 5 terms partially obscured", ' +
'"Row 2 problems may be cropped", ' +
'"Student handwriting difficult to read on problems 3, 7, 12"'
),
needsReview: z
.boolean()
.describe(
'True if any problems have low confidence or warnings that require human review ' +
'before creating a practice session'
),
})
.describe('Complete result of parsing an abacus workbook page')
export type WorksheetParsingResult = z.infer<typeof WorksheetParsingResultSchema>
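/**
 * Sketch (not part of this commit): validate a raw LLM response against the
 * schema. safeParse reports malformed output without throwing, which fits the
 * warnings/needsReview flow above.
 */
export function parseLlmResponse(raw: unknown): WorksheetParsingResult | null {
  const result = WorksheetParsingResultSchema.safeParse(raw)
  if (!result.success) {
    console.warn('LLM response failed schema validation', result.error.issues)
    return null
  }
  return result.data
}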
/**
* User correction to a parsed problem
*/
export const ProblemCorrectionSchema = z
.object({
problemNumber: z.number().int().min(1).describe('The problem number being corrected'),
correctedTerms: z
.array(ProblemTermSchema)
.nullable()
.describe('Corrected terms if the LLM got them wrong. Null to keep original'),
correctedStudentAnswer: z
.number()
.int()
.nullable()
.describe('Corrected student answer. Null means empty/not answered'),
shouldExclude: z
.boolean()
.describe('True to exclude this problem from the session (e.g., illegible)'),
note: z.string().nullable().describe('Optional note explaining the correction'),
})
.describe('User correction to a single parsed problem')
export type ProblemCorrection = z.infer<typeof ProblemCorrectionSchema>
/**
* Request to re-parse with additional context
*/
export const ReparseRequestSchema = z
.object({
problemNumbers: z.array(z.number().int().min(1)).describe('Which problems to re-parse'),
additionalContext: z
.string()
.describe(
'Additional instructions for the LLM, such as: ' +
'"The student writes 7s with a line through them", ' +
'"Problem 5 has a 3-digit answer, not 2-digit"'
),
})
.describe('Request to re-parse specific problems with additional context')
export type ReparseRequest = z.infer<typeof ReparseRequestSchema>
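// Sketch (assumption: the extraction helper itself is not shown in this
// diff). One common way to turn these schemas, .describe() annotations
// included, into a JSON schema for the LLM prompt is zod-to-json-schema:
//
// import { zodToJsonSchema } from 'zod-to-json-schema'
// const jsonSchema = zodToJsonSchema(WorksheetParsingResultSchema, 'WorksheetParsingResult')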


@@ -0,0 +1,226 @@
/**
* Session Converter
*
* Converts parsed worksheet data into SlotResults that can be
* used to create an offline practice session.
*/
import type { SlotResult, GeneratedProblem } from '@/db/schema/session-plans'
import type { WorksheetParsingResult, ParsedProblem } from './schemas'
import { analyzeRequiredSkills } from '@/utils/problemGenerator'
/**
* Options for session conversion
*/
export interface ConversionOptions {
/** Part number to assign to all problems (default: 1) */
partNumber?: 1 | 2 | 3
/** Source identifier for the session results */
source?: 'practice' | 'recency-refresh'
}
/**
* Result of session conversion
*/
export interface ConversionResult {
/** Converted slot results ready for session creation */
slotResults: Omit<SlotResult, 'timestamp'>[]
/** Summary statistics */
summary: {
totalProblems: number
answeredProblems: number
correctAnswers: number
incorrectAnswers: number
skippedProblems: number
accuracy: number | null
}
/** Skills that were exercised across all problems */
skillsExercised: string[]
}
/**
* Convert a single parsed problem to a GeneratedProblem
*/
function toGeneratedProblem(parsed: ParsedProblem): GeneratedProblem {
// Calculate correct answer from terms
const correctAnswer = parsed.terms.reduce((sum, term) => sum + term, 0)
// Infer skills from terms
const skillsRequired = analyzeRequiredSkills(parsed.terms, correctAnswer)
return {
terms: parsed.terms,
answer: correctAnswer,
skillsRequired,
}
}
/**
* Convert a parsed problem to a SlotResult
*/
function toSlotResult(
parsed: ParsedProblem,
slotIndex: number,
options: ConversionOptions
): Omit<SlotResult, 'timestamp'> {
const problem = toGeneratedProblem(parsed)
// SlotResult.studentAnswer is non-nullable, so 0 doubles as the "no answer"
// sentinel (a genuine answer of 0 is indistinguishable; computeSkillStats
// below relies on this convention)
const studentAnswer = parsed.studentAnswer ?? 0
const isCorrect = parsed.studentAnswer !== null && parsed.studentAnswer === problem.answer
return {
partNumber: options.partNumber ?? 1,
slotIndex,
problem,
studentAnswer,
isCorrect,
responseTimeMs: 0, // Unknown for offline work
skillsExercised: problem.skillsRequired,
usedOnScreenAbacus: false,
hadHelp: false,
incorrectAttempts: isCorrect ? 0 : parsed.studentAnswer !== null ? 1 : 0,
source: options.source,
}
}
/**
* Convert parsed worksheet results to SlotResults
*
* Filters out problems that were marked for exclusion and converts
* the remaining problems into the format needed for session creation.
*
* @param parsingResult - The parsed worksheet data
* @param options - Conversion options
* @returns Conversion result with slot results and summary
*
* @example
* ```typescript
* import { convertToSlotResults } from '@/lib/worksheet-parsing'
*
* const result = convertToSlotResults(parsingResult, { partNumber: 1 })
*
* // Create session with results
* await createSession({
* playerId,
* status: 'completed',
* slotResults: result.slotResults,
* })
* ```
*/
export function convertToSlotResults(
parsingResult: WorksheetParsingResult,
options: ConversionOptions = {}
): ConversionResult {
const problems = parsingResult.problems
const slotResults: Omit<SlotResult, 'timestamp'>[] = []
const allSkills = new Set<string>()
let answeredCount = 0
let correctCount = 0
for (let i = 0; i < problems.length; i++) {
const parsed = problems[i]
const slotResult = toSlotResult(parsed, i, options)
slotResults.push(slotResult)
// Track skills
for (const skill of slotResult.skillsExercised) {
allSkills.add(skill)
}
// Track statistics
if (parsed.studentAnswer !== null) {
answeredCount++
if (slotResult.isCorrect) {
correctCount++
}
}
}
const skippedCount = problems.length - answeredCount
return {
slotResults,
summary: {
totalProblems: problems.length,
answeredProblems: answeredCount,
correctAnswers: correctCount,
incorrectAnswers: answeredCount - correctCount,
skippedProblems: skippedCount,
accuracy: answeredCount > 0 ? correctCount / answeredCount : null,
},
skillsExercised: Array.from(allSkills),
}
}
/**
* Validate that parsed problems have reasonable values
*
* Returns warnings for any issues found.
*/
export function validateParsedProblems(problems: ParsedProblem[]): {
valid: boolean
warnings: string[]
} {
const warnings: string[] = []
for (const problem of problems) {
// Check that correct answer matches term sum
const expectedAnswer = problem.terms.reduce((sum, t) => sum + t, 0)
if (problem.correctAnswer !== expectedAnswer) {
warnings.push(
`Problem ${problem.problemNumber}: correctAnswer (${problem.correctAnswer}) ` +
`doesn't match sum of terms (${expectedAnswer})`
)
}
// Check for negative answers (valid but unusual)
if (expectedAnswer < 0) {
warnings.push(
`Problem ${problem.problemNumber}: negative answer (${expectedAnswer}) - verify this is correct`
)
}
// Check for very large numbers (may indicate misread)
if (Math.abs(expectedAnswer) > 9999) {
warnings.push(
`Problem ${problem.problemNumber}: very large answer (${expectedAnswer}) - verify reading`
)
}
// Check for low confidence
if (problem.termsConfidence < 0.5) {
warnings.push(
`Problem ${problem.problemNumber}: very low term confidence (${problem.termsConfidence.toFixed(2)})`
)
}
}
return {
valid: warnings.length === 0,
warnings,
}
}
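// Usage sketch (illustrative, not part of this commit): gate session creation
// on validation. requestHumanReview is a hypothetical callback.
//
// const { valid, warnings } = validateParsedProblems(parsingResult.problems)
// if (!valid) {
//   requestHumanReview(warnings)
// }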
/**
* Compute aggregate skill statistics from slot results
*/
export function computeSkillStats(
slotResults: Omit<SlotResult, 'timestamp'>[]
): Map<string, { correct: number; incorrect: number; total: number }> {
const skillStats = new Map<string, { correct: number; incorrect: number; total: number }>()
for (const result of slotResults) {
for (const skill of result.skillsExercised) {
const stats = skillStats.get(skill) ?? { correct: 0, incorrect: 0, total: 0 }
stats.total++
if (result.isCorrect) {
stats.correct++
} else if (result.studentAnswer !== 0) {
// Only count as incorrect if the student answered (0 is the "no answer"
// sentinel from toSlotResult; a genuine wrong answer of 0 is also skipped)
stats.incorrect++
}
skillStats.set(skill, stats)
}
}
return skillStats
}
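/**
 * Sketch (not part of this commit): derive per-skill accuracy from the stats
 * map. Skills seen only on skipped problems yield null (no signal).
 */
export function skillAccuracy(
  skillStats: Map<string, { correct: number; incorrect: number; total: number }>
): Map<string, number | null> {
  const accuracy = new Map<string, number | null>()
  for (const [skill, stats] of skillStats) {
    const attempted = stats.correct + stats.incorrect
    accuracy.set(skill, attempted > 0 ? stats.correct / attempted : null)
  }
  return accuracy
}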


@@ -978,6 +978,21 @@ export function initializeSocketServer(httpServer: HTTPServer) {
io!.to(`session:${data.sessionId}`).emit('session-resumed', data)
})
// Session Observation: Broadcast vision frame from student's abacus camera
socket.on(
'vision-frame',
(data: {
sessionId: string
imageData: string
detectedValue: number | null
confidence: number
timestamp: number
}) => {
// Broadcast to all observers in the session channel
socket.to(`session:${data.sessionId}`).emit('vision-frame', data)
}
)
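// Observer-side sketch (hypothetical client code, assuming socket.io-client;
// the join event that puts observers in the `session:${id}` room is handled
// elsewhere and not shown in this diff):
//
// import { io } from 'socket.io-client'
// const socket = io()
// socket.on('vision-frame', (frame: { imageData: string; detectedValue: number | null; confidence: number }) => {
//   // frame.imageData is a data URI; render it and overlay detectedValue
// })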
// Skill Tutorial: Broadcast state from student to classroom (for teacher observation)
// The student joins the classroom channel and emits their tutorial state
socket.on(


@@ -1 +1,21 @@
import '@testing-library/jest-dom'
// Mock image loading to prevent jsdom errors when rendering
// images with data URIs (e.g., data:image/jpeg;base64,...)
// This works by patching HTMLImageElement.prototype.setAttribute before jsdom uses it
// Guard for node environment where HTMLImageElement doesn't exist
if (typeof HTMLImageElement !== 'undefined') {
const originalSetAttribute = HTMLImageElement.prototype.setAttribute
HTMLImageElement.prototype.setAttribute = function (name: string, value: string) {
if (name === 'src' && value.startsWith('data:image/')) {
// Store the value but don't trigger jsdom's image loading
Object.defineProperty(this, 'src', {
value,
writable: true,
configurable: true,
})
return
}
return originalSetAttribute.call(this, name, value)
}
}

apps/web/src/types/css.d.ts vendored Normal file

@@ -0,0 +1,6 @@
// Type declaration for CSS imports
// This allows TypeScript to understand CSS imports in @soroban/abacus-react's declaration files
declare module '*.css' {
const content: { [className: string]: string }
export default content
}


@@ -187,8 +187,8 @@ export interface FrameStabilityConfig {
* Default stability configuration
*/
export const DEFAULT_STABILITY_CONFIG: FrameStabilityConfig = {
minConsecutiveFrames: 10, // ~300ms at 30fps
minConfidence: 0.7,
minConsecutiveFrames: 3, // 600ms at 5fps inference rate
minConfidence: 0.5, // Lower threshold - model confidence is often 60-80%
handMotionThreshold: 0.3,
}
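// Sketch (not part of this commit): one way a consumer could apply this
// config, debouncing noisy per-frame detections into a stable reading.
// handMotionThreshold is omitted here for brevity.
export function createStabilityGate(config: FrameStabilityConfig = DEFAULT_STABILITY_CONFIG) {
  let lastValue: number | null = null
  let streak = 0
  return (value: number | null, confidence: number): number | null => {
    if (value === null || confidence < config.minConfidence) {
      lastValue = null
      streak = 0
      return null
    }
    streak = value === lastValue ? streak + 1 : 1
    lastValue = value
    return streak >= config.minConsecutiveFrames ? value : null
  }
}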


@@ -1,6 +1,6 @@
{
"compilerOptions": {
"lib": ["dom", "dom.iterable", "es6"],
"lib": ["dom", "dom.iterable", "es2020"],
"types": ["vitest/globals", "@testing-library/jest-dom"],
"allowJs": true,
"skipLibCheck": true,
@@ -8,12 +8,12 @@
"noEmit": true,
"esModuleInterop": true,
"module": "esnext",
"moduleResolution": "node",
"moduleResolution": "bundler",
"resolveJsonModule": true,
"isolatedModules": true,
"jsx": "preserve",
"incremental": true,
"target": "es2015",
"target": "es2020",
"downlevelIteration": true,
"plugins": [
{


@@ -12,7 +12,7 @@
"apps/*",
"packages/*"
],
"packageManager": "pnpm@9.15.4",
"packageManager": "pnpm@10.27.0",
"engines": {
"node": ">=18",
"pnpm": ">=8"
@@ -32,19 +32,19 @@
"release": "semantic-release"
},
"devDependencies": {
"@types/node": "^20.0.0",
"eslint": "^8.0.0",
"prettier": "^3.0.0",
"turbo": "^1.10.0",
"typescript": "^5.0.0",
"concurrently": "^8.0.0",
"semantic-release": "^22.0.0",
"@semantic-release/changelog": "^6.0.0",
"@semantic-release/commit-analyzer": "^11.0.0",
"@semantic-release/git": "^10.0.0",
"@semantic-release/github": "^9.0.0",
"@semantic-release/release-notes-generator": "^12.0.0",
"conventional-changelog-conventionalcommits": "^7.0.0"
"@types/node": "^20.0.0",
"concurrently": "^8.0.0",
"conventional-changelog-conventionalcommits": "^7.0.0",
"eslint": "^8.0.0",
"prettier": "^3.0.0",
"semantic-release": "^22.0.0",
"turbo": "^1.10.0",
"typescript": "^5.0.0"
},
"keywords": [
"soroban",


@@ -1,102 +1,135 @@
# [2.19.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.18.0...abacus-react-v2.19.0) (2026-01-01)
### Features
* **vision:** add physical abacus column setting and fix remote flash toggle ([b206eb3](https://github.com/antialias/soroban-abacus-flashcards/commit/b206eb30712e4b98525a9fa2544c2b5a235a8b72))
* **vision:** improve remote camera calibration and UX ([8846cec](https://github.com/antialias/soroban-abacus-flashcards/commit/8846cece93941a36c187abd4ecee9cc88de0c2ec))
# [2.18.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.17.0...abacus-react-v2.18.0) (2026-01-01)
# [2.21.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.20.0...abacus-react-v2.21.0) (2026-01-03)
### Bug Fixes
* allow teacher-parents to enroll their children in other classrooms ([52df7f4](https://github.com/antialias/soroban-abacus-flashcards/commit/52df7f469718128fd3d8933941ffb8d4bb8db208))
* **bkt:** handle missing helpLevelUsed in legacy data causing NaN ([b300ed9](https://github.com/antialias/soroban-abacus-flashcards/commit/b300ed9f5cc3bfb0c7b28faafe81c80a59444998))
* **camera:** handle race condition in camera initialization ([2a24700](https://github.com/antialias/soroban-abacus-flashcards/commit/2a24700e6cb6efe0ae35d9ebd6c428e3a1a1a736))
* **classroom:** auto-transition tutorial→session observation + fix NaN display ([962a52d](https://github.com/antialias/soroban-abacus-flashcards/commit/962a52d7562f566e78f6272816b049bf77daa7c9))
* **classroom:** broadcast digit-by-digit answer and correct phase indicator ([fb73e85](https://github.com/antialias/soroban-abacus-flashcards/commit/fb73e85f2daacefafa572e03c16b10fab619ea57))
* **dashboard:** compute skill stats from session results in curriculum API ([11d4846](https://github.com/antialias/soroban-abacus-flashcards/commit/11d48465d710d0293ebf41f64b4fd0f1f03d8bf8))
* **db:** add missing is_paused column to session_plans ([9d8b5e1](https://github.com/antialias/soroban-abacus-flashcards/commit/9d8b5e1148911f881d08d07608debaaef91609c2))
* **db:** add missing journal entries for migrations 0041-0042 ([398603c](https://github.com/antialias/soroban-abacus-flashcards/commit/398603c75a094e28122c5ccdced5b82badc7fbfb))
* **docker:** add canvas native deps for jsdom/vitest ([5f51bc1](https://github.com/antialias/soroban-abacus-flashcards/commit/5f51bc1871aec325feb32a0b29edabb3b6c5dd1f))
* **docker:** override canvas with mock package for Alpine/musl ([8be1995](https://github.com/antialias/soroban-abacus-flashcards/commit/8be19958af624d22fa2c6cb48f5723f5efc820c3))
* **docker:** skip canvas native build (optional jsdom dep) ([d717f44](https://github.com/antialias/soroban-abacus-flashcards/commit/d717f44fccb8ed2baa30499df65784a4b89c6ffc))
* **observer:** seed results panel with full session history ([aab7469](https://github.com/antialias/soroban-abacus-flashcards/commit/aab7469d9ea87c91a0165e4c48a60ac130cdc1b2))
* only show session stats when there are actual problems ([62aefad](https://github.com/antialias/soroban-abacus-flashcards/commit/62aefad6766ba32ad27e8ed3db621a6f77520cbe))
* **practice:** allow teachers to create student profiles ([5fee129](https://github.com/antialias/soroban-abacus-flashcards/commit/5fee1297e1775b5e6133919d179e23b6e70b2518))
* **practice:** always show add student FAB button ([a658414](https://github.com/antialias/soroban-abacus-flashcards/commit/a6584143ebf1f3e5b3c9f3283e690458a06beb60))
* **practice:** real-time progress in observer modal + numeric answer comparison ([c0e63ff](https://github.com/antialias/soroban-abacus-flashcards/commit/c0e63ff68b26fd37eedd657504f7f79e5ce40a10))
* **practice:** show active sessions for teacher's own children ([ece3197](https://github.com/antialias/soroban-abacus-flashcards/commit/ece319738b6ab1882469d79ea24b604316d28b34))
* **practice:** use Next.js Link for student tiles + fix session observer z-index ([6def610](https://github.com/antialias/soroban-abacus-flashcards/commit/6def6108771b427e4885bebd23cecdad7a50efb0))
* **seed:** accurate BKT simulation for developing classifications ([d5e4c85](https://github.com/antialias/soroban-abacus-flashcards/commit/d5e4c858db8866e5177b8fa2317aba42b30171e8))
* **share:** use getShareUrl for correct production URLs ([98a69f1](https://github.com/antialias/soroban-abacus-flashcards/commit/98a69f1f80e465415edce49043e2c019a856f8e5))
* **vision:** fix manual calibration overlay not showing on remote camera ([44dcb01](https://github.com/antialias/soroban-abacus-flashcards/commit/44dcb01473bac00c09dddbbefd77dd26b3a27817))
* **vision:** fix remote camera calibration coordinate system ([e52f94e](https://github.com/antialias/soroban-abacus-flashcards/commit/e52f94e4b476658c41f23668d2941af1288e4ed8))
* **vision:** swap corners diagonally for webcam orientation ([dd8efe3](https://github.com/antialias/soroban-abacus-flashcards/commit/dd8efe379d4bbcfc4b60f7c00ad6180465b7e7b6))
* **practice:** add fallback error message when photo upload is blocked ([33efdf0](https://github.com/antialias/soroban-abacus-flashcards/commit/33efdf0c0d8b222160956af9c9fd641ca4d07e8a))
* **vision:** hide detection overlay when auto-detection disabled ([995cb60](https://github.com/antialias/soroban-abacus-flashcards/commit/995cb600860950cfdaf070f229351186060ea67e))
* **vision:** remote camera connection and session management ([8a45415](https://github.com/antialias/soroban-abacus-flashcards/commit/8a454158b5e3817f0d9535225b4a99cb0a9ab977))
### Features
* API authorization audit + teacher enrollment UI + share codes ([d6e369f](https://github.com/antialias/soroban-abacus-flashcards/commit/d6e369f9dc9b963938ca8de4562c87f9f1b6d389))
* **camera:** auto-start camera when opening camera modal ([f3bb0ae](https://github.com/antialias/soroban-abacus-flashcards/commit/f3bb0aee4fe23eeffc7b7099981f51ec54636a35))
* **camera:** fullscreen modal with edge-to-edge preview ([db17c96](https://github.com/antialias/soroban-abacus-flashcards/commit/db17c96168078f2d0d723b24395096756a2f63ec))
* **chart:** add grouped structure to chart hover tooltip ([594e22c](https://github.com/antialias/soroban-abacus-flashcards/commit/594e22c428e0a4ee4322c233f127f9250e88b5fa))
* **chart:** improve skill classification visual hierarchy with colors and patterns ([c9518a6](https://github.com/antialias/soroban-abacus-flashcards/commit/c9518a6b9952bda60ab2663d7655092637139fec))
* **classroom:** add active sessions API endpoint ([07f6bb7](https://github.com/antialias/soroban-abacus-flashcards/commit/07f6bb7f9cc2dfbe6da8d16361e89b698405e1c0))
* **classroom:** add real-time enrollment/unenrollment reactivity ([a0693e9](https://github.com/antialias/soroban-abacus-flashcards/commit/a0693e90840f651094f852a6a6f523013786b322))
* **classroom:** add session broadcast and active session indicators ([9636f7f](https://github.com/antialias/soroban-abacus-flashcards/commit/9636f7f44a71da022352c19e80f9ec147dd3af5f))
* **classroom:** add unified add-student modal with two-column layout ([dca696a](https://github.com/antialias/soroban-abacus-flashcards/commit/dca696a29fc20a2697b491c0d2efbe036569a716))
* **classroom:** add unified TeacherClassroomCard with auto-enrollment ([4d6adf3](https://github.com/antialias/soroban-abacus-flashcards/commit/4d6adf359ede5d17c2decd9275ba68635ee0bd4f))
* **classroom:** complete reactivity fixes (Steps 7-11) ([2015494](https://github.com/antialias/soroban-abacus-flashcards/commit/2015494c0eca28457031aa39490d70a2af3da4df))
* **classroom:** consolidate filter pill to single-row design ([78a63e3](https://github.com/antialias/soroban-abacus-flashcards/commit/78a63e35e39948729cbf41e6c5af4e688a506c8d))
* **classroom:** implement enrollment system (Phase 4) ([1952a41](https://github.com/antialias/soroban-abacus-flashcards/commit/1952a412edcd04b332655199737c340a4389d174))
* **classroom:** implement entry prompts system ([de39ab5](https://github.com/antialias/soroban-abacus-flashcards/commit/de39ab52cc60f5782fc291246f98013ae15142ca))
* **classroom:** implement real-time enrollment updates ([bbe0500](https://github.com/antialias/soroban-abacus-flashcards/commit/bbe0500fe9000d0d016417c1b586e9569e3eb888))
* **classroom:** implement real-time presence with WebSocket (Phase 6) ([629bfcf](https://github.com/antialias/soroban-abacus-flashcards/commit/629bfcfc03c611cd3928bb98a67bace485ee3a7b))
* **classroom:** implement real-time session observation (Step 3) ([2feb684](https://github.com/antialias/soroban-abacus-flashcards/commit/2feb6844a4fce48ba7a87d2a77769783c4e8b2f9))
* **classroom:** implement real-time skill tutorial observation ([4b73879](https://github.com/antialias/soroban-abacus-flashcards/commit/4b7387905d2b050327f9b67b834d4e9dfc0b19cb))
* **classroom:** implement teacher classroom dashboard (Phase 3) ([2202716](https://github.com/antialias/soroban-abacus-flashcards/commit/2202716f563053624dbe5c6abb969a3b0d452fd1))
* **classroom:** implement teacher-initiated pause and fix manual pause ([ccea0f8](https://github.com/antialias/soroban-abacus-flashcards/commit/ccea0f86ac213b32cac7363f28e193b1976bd553))
* **classroom:** implement two-way abacus sync for session observation (Step 5) ([2f7002e](https://github.com/antialias/soroban-abacus-flashcards/commit/2f7002e5759db705e213eb9f8474589c8e6149e7))
* **classroom:** improve enrollment reactivity and UX ([77336be](https://github.com/antialias/soroban-abacus-flashcards/commit/77336bea5b5bbf16b393da13588de6e5082e818f))
* **classroom:** integrate create student form into unified add-student modal ([da92289](https://github.com/antialias/soroban-abacus-flashcards/commit/da92289ed1ae570ff48cc28818122d4640d6c84c))
* **classroom:** integrate Enter Classroom into StudentActionMenu ([2f1b9df](https://github.com/antialias/soroban-abacus-flashcards/commit/2f1b9df9d9d605b0c120af6961670ae84718c8d7))
* **dashboard:** add skill progress chart with trend analysis and timing awareness ([1fc8949](https://github.com/antialias/soroban-abacus-flashcards/commit/1fc8949b0664591aa1b0cfcd7c7abd2a4c586281))
* enable parents to observe children's practice sessions ([7b82995](https://github.com/antialias/soroban-abacus-flashcards/commit/7b829956644d369dfdfb0789a33e0b857958e84f))
* **family:** implement parent-to-parent family code sharing (Phase 2) ([0284227](https://github.com/antialias/soroban-abacus-flashcards/commit/02842270c9278174934407a9620777589f79ee1e))
* improve session summary header and add practice type badges ([518fe15](https://github.com/antialias/soroban-abacus-flashcards/commit/518fe153c9fc2ae2f2f7fc0ed4de27ee1c5c5646))
* **observer:** add live active session item to history list ([91d6d6a](https://github.com/antialias/soroban-abacus-flashcards/commit/91d6d6a1b6938b559d8488fe296d562695cf16d1))
* **observer:** add live results panel and session progress indicator ([8527f89](https://github.com/antialias/soroban-abacus-flashcards/commit/8527f892e2b300d51d83056d779474592a2fd955))
* **observer:** implement shareable session observation links ([3ac7b46](https://github.com/antialias/soroban-abacus-flashcards/commit/3ac7b460ec0dc207a5691fbed8d539b484374fe7))
* **practice:** add auto-rotation for captured documents ([ff79a28](https://github.com/antialias/soroban-abacus-flashcards/commit/ff79a28c657fb0a19752990e23f9bb0ced4e9343))
* **practice:** add document adjustment UI and auto-capture ([473b7db](https://github.com/antialias/soroban-abacus-flashcards/commit/473b7dbd7cd15be511351a1fd303a0fc32b9d941))
* **practice:** add document scanning with multi-quad tracking ([5f4f1fd](https://github.com/antialias/soroban-abacus-flashcards/commit/5f4f1fde3372e5d65d3f399216b04ab0e4c9972e))
* **practice:** add fixed filter bar, sticky headers, and shared EmojiPicker ([0e03561](https://github.com/antialias/soroban-abacus-flashcards/commit/0e0356113ddef1ec92cd0b3fda0852d99c6067d2))
* **practice:** add intervention system and improve skill chart hierarchy ([bf5b99a](https://github.com/antialias/soroban-abacus-flashcards/commit/bf5b99afe967c0b17765a7e6f1911d03201eed95))
* **practice:** add mini start practice banner to QuickLook modal ([d1176da](https://github.com/antialias/soroban-abacus-flashcards/commit/d1176da9aa8bd926ca96699d1091e65f4a34d782))
* **practice:** add Needs Attention to unified compact layout ([8727782](https://github.com/antialias/soroban-abacus-flashcards/commit/8727782e45c7ac269c4dbcc223b2a8be57be8bb2))
* **practice:** add photo attachments for practice sessions ([9b85311](https://github.com/antialias/soroban-abacus-flashcards/commit/9b853116ecfbb19bec39923da635374963cf002c))
* **practice:** add photo editing with rotation persistence and auto-detect ([156a0df](https://github.com/antialias/soroban-abacus-flashcards/commit/156a0dfe967a48c211be527da27c92ef8b1ab20c))
* **practice:** add smooth fullscreen transition from QuickLook to dashboard ([cb8b0df](https://github.com/antialias/soroban-abacus-flashcards/commit/cb8b0dff676d48bcba4775c5981ac357d573ab27))
* **practice:** add student organization with filtering and archiving ([538718a](https://github.com/antialias/soroban-abacus-flashcards/commit/538718a814402bd9c83b3c354c5a3386ff69104d))
* **practice:** add StudentActionMenu to dashboard + fix z-index layering ([bf262e7](https://github.com/antialias/soroban-abacus-flashcards/commit/bf262e7d5305e2358d3a2464db10bc3b0866104c))
* **practice:** compact single-student categories and UI improvements ([0e7f326](https://github.com/antialias/soroban-abacus-flashcards/commit/0e7f3265fe2de3b693c47a8a556d3e7cbc726ef4))
* **practice:** implement measurement-based compact layout ([1656b93](https://github.com/antialias/soroban-abacus-flashcards/commit/1656b9324f6fb24a318820e04559c480c99762f5))
* **practice:** implement retry wrong problems system ([474c4da](https://github.com/antialias/soroban-abacus-flashcards/commit/474c4da05a8d761e63a32187f5c301b57fb6aae4))
* **practice:** parent session observation + relationship UI + error boundaries ([07484fd](https://github.com/antialias/soroban-abacus-flashcards/commit/07484fdfac3c6613a6a7709bdee25e1f8e047227))
* **practice:** polish unified student list with keyboard nav and mobile UX ([0ba1551](https://github.com/antialias/soroban-abacus-flashcards/commit/0ba1551feaa30d8f41ec5d771c00561396b043f3))
* **seed:** add category field to all mock student profiles ([f883fbf](https://github.com/antialias/soroban-abacus-flashcards/commit/f883fbfe233b7fb3d366062e7c156e3fc8e0e3a7))
* **session-summary:** redesign ProblemToReview with BKT integration and animations ([430c46a](https://github.com/antialias/soroban-abacus-flashcards/commit/430c46adb929a6c0ce7c67da4b1df7d3e2846cfd))
* **storybook:** add TeacherClassroomCard stories ([a5e5788](https://github.com/antialias/soroban-abacus-flashcards/commit/a5e5788fa96f57e0d918620e357f7920ef792b19))
* **vision:** add AbacusVisionBridge for physical soroban detection ([47088e4](https://github.com/antialias/soroban-abacus-flashcards/commit/47088e4850c25e76fe49879587227b46f699ba91))
* **vision:** add ArUco marker auto-calibration for abacus detection ([9e9a06f](https://github.com/antialias/soroban-abacus-flashcards/commit/9e9a06f2e4dc37d208ac19259be9b9830c7ad949))
* **vision:** add remote phone camera support for abacus detection ([8e4975d](https://github.com/antialias/soroban-abacus-flashcards/commit/8e4975d395c4b10bc40ae2c71473fdb1a50c114c))
* add LLM client package and worksheet parsing infrastructure ([5a4c751](https://github.com/antialias/soroban-abacus-flashcards/commit/5a4c751ebe9c337ce2115253b243b345c4f76156))
* **observer:** responsive session observer layout ([9610ddb](https://github.com/antialias/soroban-abacus-flashcards/commit/9610ddb8f13ef27c4d1fd205ae03a4dc292c2ff7))
* **worksheet-parsing:** add parsing UI and fix parent access control ([91aaddb](https://github.com/antialias/soroban-abacus-flashcards/commit/91aaddbeab8eeef54547d60a41362e9933c3edb1))
* **worksheet-parsing:** add selective re-parsing and improve UI ([830a48e](https://github.com/antialias/soroban-abacus-flashcards/commit/830a48e74f2c38c0247b658104a5db6d2894127a))
# [2.20.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.19.0...abacus-react-v2.20.0) (2026-01-02)
### Bug Fixes
* **vision:** clear config when switching camera sources ([ff59612](https://github.com/antialias/soroban-abacus-flashcards/commit/ff59612e7b9bab3ef4a8fba3c60e9dbcb37a140a))
* **vision:** hide flip camera button when only one camera available ([7a9185e](https://github.com/antialias/soroban-abacus-flashcards/commit/7a9185eadb3609de596e3d150090af19225fdab6))
* **vision:** include remote camera in isVisionSetupComplete check ([a8fb77e](https://github.com/antialias/soroban-abacus-flashcards/commit/a8fb77e8e3f2f4293c2dab99ca1ec1de78b1e37c))
* **vision:** remote camera persistence and UI bugs ([d90d263](https://github.com/antialias/soroban-abacus-flashcards/commit/d90d263b2a2a5f228d93af2217bb11241ee8f0f5))
### Features
* **vision:** add activeCameraSource tracking and simplify calibration UI ([1be6151](https://github.com/antialias/soroban-abacus-flashcards/commit/1be6151bae0f2ffc0781792bf002cb7672635842))
* **vision:** add CV-based bead detection and fix remote camera connection ([005140a](https://github.com/antialias/soroban-abacus-flashcards/commit/005140a1e72238459ea987e57f83e169b213d7b9))
* **vision:** add TensorFlow.js column classifier model and improve detection ([5d0ac65](https://github.com/antialias/soroban-abacus-flashcards/commit/5d0ac65bdd2bd22c8e2d586add3a0aba8dd82426))
* **vision:** broadcast vision frames to observers (Phase 5) ([b3b769c](https://github.com/antialias/soroban-abacus-flashcards/commit/b3b769c0e2e15d4a0f4e70219982dc78c72e4e2b))
* **vision:** disable auto-detection with feature flag ([a5025f0](https://github.com/antialias/soroban-abacus-flashcards/commit/a5025f01bc759de1b87c06a2a9d2d94344adc790))
* **vision:** integrate vision feed into docked abacus ([d8c7645](https://github.com/antialias/soroban-abacus-flashcards/commit/d8c764595d34dabb4b836e2eea93e0b869f09cd2))
# [2.19.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.18.0...abacus-react-v2.19.0) (2026-01-01)
### Features
- **vision:** add physical abacus column setting and fix remote flash toggle ([b206eb3](https://github.com/antialias/soroban-abacus-flashcards/commit/b206eb30712e4b98525a9fa2544c2b5a235a8b72))
- **vision:** improve remote camera calibration and UX ([8846cec](https://github.com/antialias/soroban-abacus-flashcards/commit/8846cece93941a36c187abd4ecee9cc88de0c2ec))
# [2.18.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.17.0...abacus-react-v2.18.0) (2026-01-01)
### Bug Fixes
- allow teacher-parents to enroll their children in other classrooms ([52df7f4](https://github.com/antialias/soroban-abacus-flashcards/commit/52df7f469718128fd3d8933941ffb8d4bb8db208))
- **bkt:** handle missing helpLevelUsed in legacy data causing NaN ([b300ed9](https://github.com/antialias/soroban-abacus-flashcards/commit/b300ed9f5cc3bfb0c7b28faafe81c80a59444998))
- **camera:** handle race condition in camera initialization ([2a24700](https://github.com/antialias/soroban-abacus-flashcards/commit/2a24700e6cb6efe0ae35d9ebd6c428e3a1a1a736))
- **classroom:** auto-transition tutorial→session observation + fix NaN display ([962a52d](https://github.com/antialias/soroban-abacus-flashcards/commit/962a52d7562f566e78f6272816b049bf77daa7c9))
- **classroom:** broadcast digit-by-digit answer and correct phase indicator ([fb73e85](https://github.com/antialias/soroban-abacus-flashcards/commit/fb73e85f2daacefafa572e03c16b10fab619ea57))
- **dashboard:** compute skill stats from session results in curriculum API ([11d4846](https://github.com/antialias/soroban-abacus-flashcards/commit/11d48465d710d0293ebf41f64b4fd0f1f03d8bf8))
- **db:** add missing is_paused column to session_plans ([9d8b5e1](https://github.com/antialias/soroban-abacus-flashcards/commit/9d8b5e1148911f881d08d07608debaaef91609c2))
- **db:** add missing journal entries for migrations 0041-0042 ([398603c](https://github.com/antialias/soroban-abacus-flashcards/commit/398603c75a094e28122c5ccdced5b82badc7fbfb))
- **docker:** add canvas native deps for jsdom/vitest ([5f51bc1](https://github.com/antialias/soroban-abacus-flashcards/commit/5f51bc1871aec325feb32a0b29edabb3b6c5dd1f))
- **docker:** override canvas with mock package for Alpine/musl ([8be1995](https://github.com/antialias/soroban-abacus-flashcards/commit/8be19958af624d22fa2c6cb48f5723f5efc820c3))
- **docker:** skip canvas native build (optional jsdom dep) ([d717f44](https://github.com/antialias/soroban-abacus-flashcards/commit/d717f44fccb8ed2baa30499df65784a4b89c6ffc))
- **observer:** seed results panel with full session history ([aab7469](https://github.com/antialias/soroban-abacus-flashcards/commit/aab7469d9ea87c91a0165e4c48a60ac130cdc1b2))
- only show session stats when there are actual problems ([62aefad](https://github.com/antialias/soroban-abacus-flashcards/commit/62aefad6766ba32ad27e8ed3db621a6f77520cbe))
- **practice:** allow teachers to create student profiles ([5fee129](https://github.com/antialias/soroban-abacus-flashcards/commit/5fee1297e1775b5e6133919d179e23b6e70b2518))
- **practice:** always show add student FAB button ([a658414](https://github.com/antialias/soroban-abacus-flashcards/commit/a6584143ebf1f3e5b3c9f3283e690458a06beb60))
- **practice:** real-time progress in observer modal + numeric answer comparison ([c0e63ff](https://github.com/antialias/soroban-abacus-flashcards/commit/c0e63ff68b26fd37eedd657504f7f79e5ce40a10))
- **practice:** show active sessions for teacher's own children ([ece3197](https://github.com/antialias/soroban-abacus-flashcards/commit/ece319738b6ab1882469d79ea24b604316d28b34))
- **practice:** use Next.js Link for student tiles + fix session observer z-index ([6def610](https://github.com/antialias/soroban-abacus-flashcards/commit/6def6108771b427e4885bebd23cecdad7a50efb0))
- **seed:** accurate BKT simulation for developing classifications ([d5e4c85](https://github.com/antialias/soroban-abacus-flashcards/commit/d5e4c858db8866e5177b8fa2317aba42b30171e8))
- **share:** use getShareUrl for correct production URLs ([98a69f1](https://github.com/antialias/soroban-abacus-flashcards/commit/98a69f1f80e465415edce49043e2c019a856f8e5))
- **vision:** fix manual calibration overlay not showing on remote camera ([44dcb01](https://github.com/antialias/soroban-abacus-flashcards/commit/44dcb01473bac00c09dddbbefd77dd26b3a27817))
- **vision:** fix remote camera calibration coordinate system ([e52f94e](https://github.com/antialias/soroban-abacus-flashcards/commit/e52f94e4b476658c41f23668d2941af1288e4ed8))
- **vision:** swap corners diagonally for webcam orientation ([dd8efe3](https://github.com/antialias/soroban-abacus-flashcards/commit/dd8efe379d4bbcfc4b60f7c00ad6180465b7e7b6))
### Features
- API authorization audit + teacher enrollment UI + share codes ([d6e369f](https://github.com/antialias/soroban-abacus-flashcards/commit/d6e369f9dc9b963938ca8de4562c87f9f1b6d389))
- **camera:** auto-start camera when opening camera modal ([f3bb0ae](https://github.com/antialias/soroban-abacus-flashcards/commit/f3bb0aee4fe23eeffc7b7099981f51ec54636a35))
- **camera:** fullscreen modal with edge-to-edge preview ([db17c96](https://github.com/antialias/soroban-abacus-flashcards/commit/db17c96168078f2d0d723b24395096756a2f63ec))
- **chart:** add grouped structure to chart hover tooltip ([594e22c](https://github.com/antialias/soroban-abacus-flashcards/commit/594e22c428e0a4ee4322c233f127f9250e88b5fa))
- **chart:** improve skill classification visual hierarchy with colors and patterns ([c9518a6](https://github.com/antialias/soroban-abacus-flashcards/commit/c9518a6b9952bda60ab2663d7655092637139fec))
- **classroom:** add active sessions API endpoint ([07f6bb7](https://github.com/antialias/soroban-abacus-flashcards/commit/07f6bb7f9cc2dfbe6da8d16361e89b698405e1c0))
- **classroom:** add real-time enrollment/unenrollment reactivity ([a0693e9](https://github.com/antialias/soroban-abacus-flashcards/commit/a0693e90840f651094f852a6a6f523013786b322))
- **classroom:** add session broadcast and active session indicators ([9636f7f](https://github.com/antialias/soroban-abacus-flashcards/commit/9636f7f44a71da022352c19e80f9ec147dd3af5f))
- **classroom:** add unified add-student modal with two-column layout ([dca696a](https://github.com/antialias/soroban-abacus-flashcards/commit/dca696a29fc20a2697b491c0d2efbe036569a716))
- **classroom:** add unified TeacherClassroomCard with auto-enrollment ([4d6adf3](https://github.com/antialias/soroban-abacus-flashcards/commit/4d6adf359ede5d17c2decd9275ba68635ee0bd4f))
- **classroom:** complete reactivity fixes (Steps 7-11) ([2015494](https://github.com/antialias/soroban-abacus-flashcards/commit/2015494c0eca28457031aa39490d70a2af3da4df))
- **classroom:** consolidate filter pill to single-row design ([78a63e3](https://github.com/antialias/soroban-abacus-flashcards/commit/78a63e35e39948729cbf41e6c5af4e688a506c8d))
- **classroom:** implement enrollment system (Phase 4) ([1952a41](https://github.com/antialias/soroban-abacus-flashcards/commit/1952a412edcd04b332655199737c340a4389d174))
- **classroom:** implement entry prompts system ([de39ab5](https://github.com/antialias/soroban-abacus-flashcards/commit/de39ab52cc60f5782fc291246f98013ae15142ca))
- **classroom:** implement real-time enrollment updates ([bbe0500](https://github.com/antialias/soroban-abacus-flashcards/commit/bbe0500fe9000d0d016417c1b586e9569e3eb888))
- **classroom:** implement real-time presence with WebSocket (Phase 6) ([629bfcf](https://github.com/antialias/soroban-abacus-flashcards/commit/629bfcfc03c611cd3928bb98a67bace485ee3a7b))
- **classroom:** implement real-time session observation (Step 3) ([2feb684](https://github.com/antialias/soroban-abacus-flashcards/commit/2feb6844a4fce48ba7a87d2a77769783c4e8b2f9))
- **classroom:** implement real-time skill tutorial observation ([4b73879](https://github.com/antialias/soroban-abacus-flashcards/commit/4b7387905d2b050327f9b67b834d4e9dfc0b19cb))
- **classroom:** implement teacher classroom dashboard (Phase 3) ([2202716](https://github.com/antialias/soroban-abacus-flashcards/commit/2202716f563053624dbe5c6abb969a3b0d452fd1))
- **classroom:** implement teacher-initiated pause and fix manual pause ([ccea0f8](https://github.com/antialias/soroban-abacus-flashcards/commit/ccea0f86ac213b32cac7363f28e193b1976bd553))
- **classroom:** implement two-way abacus sync for session observation (Step 5) ([2f7002e](https://github.com/antialias/soroban-abacus-flashcards/commit/2f7002e5759db705e213eb9f8474589c8e6149e7))
- **classroom:** improve enrollment reactivity and UX ([77336be](https://github.com/antialias/soroban-abacus-flashcards/commit/77336bea5b5bbf16b393da13588de6e5082e818f))
- **classroom:** integrate create student form into unified add-student modal ([da92289](https://github.com/antialias/soroban-abacus-flashcards/commit/da92289ed1ae570ff48cc28818122d4640d6c84c))
- **classroom:** integrate Enter Classroom into StudentActionMenu ([2f1b9df](https://github.com/antialias/soroban-abacus-flashcards/commit/2f1b9df9d9d605b0c120af6961670ae84718c8d7))
- **dashboard:** add skill progress chart with trend analysis and timing awareness ([1fc8949](https://github.com/antialias/soroban-abacus-flashcards/commit/1fc8949b0664591aa1b0cfcd7c7abd2a4c586281))
- enable parents to observe children's practice sessions ([7b82995](https://github.com/antialias/soroban-abacus-flashcards/commit/7b829956644d369dfdfb0789a33e0b857958e84f))
- **family:** implement parent-to-parent family code sharing (Phase 2) ([0284227](https://github.com/antialias/soroban-abacus-flashcards/commit/02842270c9278174934407a9620777589f79ee1e))
- improve session summary header and add practice type badges ([518fe15](https://github.com/antialias/soroban-abacus-flashcards/commit/518fe153c9fc2ae2f2f7fc0ed4de27ee1c5c5646))
- **observer:** add live active session item to history list ([91d6d6a](https://github.com/antialias/soroban-abacus-flashcards/commit/91d6d6a1b6938b559d8488fe296d562695cf16d1))
- **observer:** add live results panel and session progress indicator ([8527f89](https://github.com/antialias/soroban-abacus-flashcards/commit/8527f892e2b300d51d83056d779474592a2fd955))
- **observer:** implement shareable session observation links ([3ac7b46](https://github.com/antialias/soroban-abacus-flashcards/commit/3ac7b460ec0dc207a5691fbed8d539b484374fe7))
- **practice:** add auto-rotation for captured documents ([ff79a28](https://github.com/antialias/soroban-abacus-flashcards/commit/ff79a28c657fb0a19752990e23f9bb0ced4e9343))
- **practice:** add document adjustment UI and auto-capture ([473b7db](https://github.com/antialias/soroban-abacus-flashcards/commit/473b7dbd7cd15be511351a1fd303a0fc32b9d941))
- **practice:** add document scanning with multi-quad tracking ([5f4f1fd](https://github.com/antialias/soroban-abacus-flashcards/commit/5f4f1fde3372e5d65d3f399216b04ab0e4c9972e))
- **practice:** add fixed filter bar, sticky headers, and shared EmojiPicker ([0e03561](https://github.com/antialias/soroban-abacus-flashcards/commit/0e0356113ddef1ec92cd0b3fda0852d99c6067d2))
- **practice:** add intervention system and improve skill chart hierarchy ([bf5b99a](https://github.com/antialias/soroban-abacus-flashcards/commit/bf5b99afe967c0b17765a7e6f1911d03201eed95))
- **practice:** add mini start practice banner to QuickLook modal ([d1176da](https://github.com/antialias/soroban-abacus-flashcards/commit/d1176da9aa8bd926ca96699d1091e65f4a34d782))
- **practice:** add Needs Attention to unified compact layout ([8727782](https://github.com/antialias/soroban-abacus-flashcards/commit/8727782e45c7ac269c4dbcc223b2a8be57be8bb2))
- **practice:** add photo attachments for practice sessions ([9b85311](https://github.com/antialias/soroban-abacus-flashcards/commit/9b853116ecfbb19bec39923da635374963cf002c))
- **practice:** add photo editing with rotation persistence and auto-detect ([156a0df](https://github.com/antialias/soroban-abacus-flashcards/commit/156a0dfe967a48c211be527da27c92ef8b1ab20c))
- **practice:** add smooth fullscreen transition from QuickLook to dashboard ([cb8b0df](https://github.com/antialias/soroban-abacus-flashcards/commit/cb8b0dff676d48bcba4775c5981ac357d573ab27))
- **practice:** add student organization with filtering and archiving ([538718a](https://github.com/antialias/soroban-abacus-flashcards/commit/538718a814402bd9c83b3c354c5a3386ff69104d))
- **practice:** add StudentActionMenu to dashboard + fix z-index layering ([bf262e7](https://github.com/antialias/soroban-abacus-flashcards/commit/bf262e7d5305e2358d3a2464db10bc3b0866104c))
- **practice:** compact single-student categories and UI improvements ([0e7f326](https://github.com/antialias/soroban-abacus-flashcards/commit/0e7f3265fe2de3b693c47a8a556d3e7cbc726ef4))
- **practice:** implement measurement-based compact layout ([1656b93](https://github.com/antialias/soroban-abacus-flashcards/commit/1656b9324f6fb24a318820e04559c480c99762f5))
- **practice:** implement retry wrong problems system ([474c4da](https://github.com/antialias/soroban-abacus-flashcards/commit/474c4da05a8d761e63a32187f5c301b57fb6aae4))
- **practice:** parent session observation + relationship UI + error boundaries ([07484fd](https://github.com/antialias/soroban-abacus-flashcards/commit/07484fdfac3c6613a6a7709bdee25e1f8e047227))
- **practice:** polish unified student list with keyboard nav and mobile UX ([0ba1551](https://github.com/antialias/soroban-abacus-flashcards/commit/0ba1551feaa30d8f41ec5d771c00561396b043f3))
- **seed:** add category field to all mock student profiles ([f883fbf](https://github.com/antialias/soroban-abacus-flashcards/commit/f883fbfe233b7fb3d366062e7c156e3fc8e0e3a7))
- **session-summary:** redesign ProblemToReview with BKT integration and animations ([430c46a](https://github.com/antialias/soroban-abacus-flashcards/commit/430c46adb929a6c0ce7c67da4b1df7d3e2846cfd))
- **storybook:** add TeacherClassroomCard stories ([a5e5788](https://github.com/antialias/soroban-abacus-flashcards/commit/a5e5788fa96f57e0d918620e357f7920ef792b19))
- **vision:** add AbacusVisionBridge for physical soroban detection ([47088e4](https://github.com/antialias/soroban-abacus-flashcards/commit/47088e4850c25e76fe49879587227b46f699ba91))
- **vision:** add ArUco marker auto-calibration for abacus detection ([9e9a06f](https://github.com/antialias/soroban-abacus-flashcards/commit/9e9a06f2e4dc37d208ac19259be9b9830c7ad949))
- **vision:** add remote phone camera support for abacus detection ([8e4975d](https://github.com/antialias/soroban-abacus-flashcards/commit/8e4975d395c4b10bc40ae2c71473fdb1a50c114c))
### Performance Improvements
* reduce practice page dev bundle from 47MB to 115KB ([fd1df93](https://github.com/antialias/soroban-abacus-flashcards/commit/fd1df93a8fa320800275c135d5dd89390eb72c19))
- reduce practice page dev bundle from 47MB to 115KB ([fd1df93](https://github.com/antialias/soroban-abacus-flashcards/commit/fd1df93a8fa320800275c135d5dd89390eb72c19))
# [2.17.0](https://github.com/antialias/soroban-abacus-flashcards/compare/abacus-react-v2.16.0...abacus-react-v2.17.0) (2025-12-20)


@@ -7,11 +7,19 @@
"types": "dist/index.d.ts",
"exports": {
".": {
"@soroban/source": {
"types": "./src/index.ts",
"default": "./src/index.ts"
},
"types": "./dist/index.d.ts",
"import": "./dist/index.es.js",
"require": "./dist/index.cjs.js"
},
"./static": {
"@soroban/source": {
"types": "./src/static.ts",
"default": "./src/static.ts"
},
"types": "./dist/static.d.ts",
"import": "./dist/static.es.js",
"require": "./dist/static.cjs.js"

packages/abacus-react/src/css.d.ts vendored Normal file

@@ -0,0 +1,4 @@
declare module "*.css" {
const content: { [className: string]: string };
export default content;
}


@@ -2,7 +2,11 @@
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
*CYGWIN*) basedir=`cygpath -w "$basedir"`;;
*CYGWIN*|*MINGW*|*MSYS*)
if command -v cygpath > /dev/null 2>&1; then
basedir=`cygpath -w "$basedir"`
fi
;;
esac
if [ -z "$NODE_PATH" ]; then

Some files were not shown because too many files have changed in this diff.