Why I'm Building VisionForge
The case for AI-native CAD/CAE tools.
Engineers spend something like 80% of their time clicking through menus and maybe 20% actually thinking about design.
Traditional CAD tools were built for a pre-AI world. They’re manual, slow, and break the moment you need to change topology. Want to add a hole? Rebuild from scratch. Want to see the stress distribution? Wait five minutes for an FEM solve to finish.
VisionForge is the opposite: AI-native from the ground up.
The Vision
- Scan reality - photograph a part, get an editable 3D model
- Edit with language - “make the wall thicker”, “add mounting holes here”
- Real-time physics - see stress/strain as you design, not after
- Export to manufacture - clean geometry ready for fabrication
Neural operators (FNO, DeepONet) can predict stress fields in under 100 ms. That’s not just “good enough”; it’s transformative. You can iterate on designs at the speed of thought.
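To make the speed claim concrete, here’s a minimal sketch of the kind of Fourier-neural-operator surrogate I mean: a few spectral convolutions that turn a geometry-plus-loads grid into a stress field in one forward pass. The `TinyFNO` model, its channel layout, and the geometry-mask encoding below are illustrative assumptions, not the actual VisionForge architecture.

```python
# Minimal FNO-style surrogate: one forward pass instead of an iterative FEM solve.
# Sizes, channel counts, and the input encoding are assumptions for illustration.
import torch
import torch.nn as nn


class SpectralConv2d(nn.Module):
    """Core FNO block: mix channels on the lowest Fourier modes, drop the rest."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.rand(channels, channels, modes, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W) -> FFT, transform low modes, inverse FFT.
        # (A full FNO also keeps the negative-frequency corner; omitted for brevity.)
        x_ft = torch.fft.rfft2(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes, : self.modes] = torch.einsum(
            "bixy,ioxy->boxy", x_ft[:, :, : self.modes, : self.modes], self.weights
        )
        return torch.fft.irfft2(out_ft, s=x.shape[-2:])


class TinyFNO(nn.Module):
    """Geometry + load channels in, scalar stress field out."""

    def __init__(self, in_ch: int = 3, width: int = 32, modes: int = 12, layers: int = 4):
        super().__init__()
        self.lift = nn.Conv2d(in_ch, width, 1)
        self.spectral = nn.ModuleList(SpectralConv2d(width, modes) for _ in range(layers))
        self.pointwise = nn.ModuleList(nn.Conv2d(width, width, 1) for _ in range(layers))
        self.project = nn.Conv2d(width, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.lift(x)
        for spec, pw in zip(self.spectral, self.pointwise):
            x = torch.relu(spec(x) + pw(x))
        return self.project(x)


if __name__ == "__main__":
    # Fake input: part mask, load magnitude, boundary-condition mask on a 128x128 grid.
    model = TinyFNO().eval()
    geometry = torch.rand(1, 3, 128, 128)
    with torch.no_grad():
        stress = model(geometry)  # (1, 1, 128, 128) predicted stress field
    print(stress.shape)
```

Once a model like this is trained on FEM data, inference is just a handful of FFTs and 1×1 convolutions, which is why sub-100 ms predictions on a modest GPU are realistic rather than hand-wavy.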
Why Build in Public
I’m shipping the building blocks openly:
- ava-track - 3D face tracking as a perception baseline
- ava-codec - neural compression for 3D temporal data
- agent-cache - local memory system (learning C++/Rust along the way)
Each piece stands alone but builds toward VisionForge. Shipping in the open to see what emerges.
More to come.