The --quantize_uint8 flag in tensorflowjs_converter corrupts model
weights, producing completely wrong predictions in the browser even
though the same model predicts correctly when tested in Python.
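A minimal sketch of the change for the SavedModel → GraphModel path
(directory names are placeholders, not the project's actual paths):

```python
import subprocess

# Before (broken): weights were quantized to uint8 during conversion.
#   tensorflowjs_converter --input_format=tf_saved_model \
#       --output_format=tfjs_graph_model --quantize_uint8 \
#       saved_model_dir web_model_dir

# After (fixed): convert at full float32 precision.
subprocess.run([
    "tensorflowjs_converter",
    "--input_format=tf_saved_model",
    "--output_format=tfjs_graph_model",
    "saved_model_dir",
    "web_model_dir",
], check=True)
```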
The issue was already documented in .claude/CLAUDE.md, but the
training script still applied quantization. Without it, model size
grows roughly 4x (556KB → 2.2MB) since weights are stored as float32
instead of uint8, but predictions are now correct.
Removed quantization from all three export paths (see the sketch
after this list):
- SavedModel → GraphModel conversion
- Keras → LayersModel fallback
- Direct Python API fallback
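For the direct Python API fallback, the fixed export reduces to the
plain save call. The function name and output directory below are
illustrative; save_keras_model is the tensorflowjs package's
documented helper:

```python
import tensorflowjs as tfjs

def export_for_browser(model, out_dir="web_model"):
    # No quantization arguments passed: weights stay float32, so the
    # files are ~4x larger than uint8 but browser predictions match
    # the Python results.
    tfjs.converters.save_keras_model(model, out_dir)
```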
After this fix, re-run training via /vision-training/train to get
a working model.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>