██ ██ ███████ ██████ ██████ █████ ███ ██ ███████ ███████
██ ██ ██ ██ ██ ██ ██ ██ ██ ████ ██ ██ ██
██ ██ █████ ██ ██ ██████ ███████ ██ ██ ██ ███████ █████
██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██ ██
████ ███████ ██████ ██ ██ ██ ██ ████ ███████ ███████
Hi, I'm Ved Panse. I build systems that learn, reason, and understand: not just "models", not just "software", but hybrid intelligence.
I work where machine learning, multi-modal reasoning, and programming languages collide.
My goal: create tools and systems that think, not just execute.
Focus areas:
- Multi-modal fusion (vision + language + structure)
- Neural-symbolic systems
- Semantic representations of code and hardware
- Compiler-aware intelligence
- Structural embeddings for programs, circuits, and 3D space
I'm also building a semantics-first, multi-modal-aware DSL designed for:
- program structure understanding
- mixed symbolic + neural reasoning
- code embeddings grounded in IR/AST
- intelligent transformations and analysis
Think: a language you can talk to, inspect, and reason with.
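To make "code embeddings grounded in IR/AST" concrete, here is a deliberately crude, self-contained sketch, not the DSL itself: it uses Python's stdlib `ast` module and a bag of AST node types as a stand-in for a real learned structural embedding. All function names here are illustrative.

```python
import ast
from collections import Counter

def structural_embedding(source: str) -> Counter:
    """Embed a program as a bag of AST node types - a crude stand-in
    for a learned embedding grounded in the program's structure."""
    tree = ast.parse(source)
    return Counter(type(node).__name__ for node in ast.walk(tree))

def overlap(u: Counter, v: Counter) -> int:
    """Multiset intersection size: a toy structural-similarity score."""
    return sum((u & v).values())

a = structural_embedding("def f(x):\n    return x + 1")
b = structural_embedding("def g(y):\n    return y + 2")
c = structural_embedding("while True:\n    pass")

# The two add-one functions share far more structure with each
# other than either does with the bare loop.
assert overlap(a, b) > overlap(a, c)
```

The point of the sketch: similarity falls out of structure, not surface tokens. `f` and `g` use different identifiers and constants yet embed identically, because their ASTs have the same shape.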
I build end-to-end reasoning systems involving:
- computer vision pipelines
- 3D perception and geometric reasoning
- agentic feedback loops
- task-driven multi-modal models
- hardware + software co-design
- interpreters fused with embeddings
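The "agentic feedback loop" item, stripped to its skeleton, is just propose / score / feed the critique back into the next proposal. A minimal hypothetical sketch (every name here is illustrative, not from any of the systems above):

```python
def agent_loop(score, propose, budget=10):
    """Minimal agentic feedback loop: propose a candidate, score it,
    and hand the (candidate, score) critique back to the proposer."""
    best, best_score = None, float("-inf")
    feedback = None
    for _ in range(budget):
        candidate = propose(feedback)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
        feedback = (candidate, s)
    return best

# Toy task: find x maximizing -|x - 7| with a naive stepping proposer.
def propose(feedback):
    if feedback is None:
        return 0           # cold start
    last, _ = feedback
    return last + 1        # naive refinement: step and re-score

best = agent_loop(lambda x: -abs(x - 7), propose, budget=10)
# proposals walk 0..9; the loop keeps the best-scoring one, x == 7
```

Real versions replace the proposer with a model and the scorer with tests, perception, or a verifier, but the control flow stays this small.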
If your work involves any of the following, we should probably talk:
- research-grade multi-modal ML
- compilers, interpreters, language tooling
- intelligent developer tools
- symbolic/neural hybrids
- embedding representations of programs
- semantic search, retrieval, or structural indexing
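For the semantic search / structural indexing item, the core operation is nearest-neighbor lookup over embedding vectors. A tiny self-contained sketch, with made-up names and vectors standing in for real embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# A toy index: program names -> (hypothetical) embedding vectors.
index = {
    "parse_expr": [0.9, 0.1, 0.0],
    "render_svg": [0.1, 0.8, 0.3],
    "tokenize":   [0.8, 0.2, 0.1],
}

def search(query_vec, k=2):
    """Rank indexed entries by similarity to the query embedding."""
    ranked = sorted(index, key=lambda name: cosine(query_vec, index[name]),
                    reverse=True)
    return ranked[:k]

results = search([1.0, 0.0, 0.0])
# a "lexing-shaped" query surfaces parse_expr and tokenize first
```

Swap the dict for an ANN index and the toy vectors for AST- or IR-derived embeddings and this is the shape of structural retrieval.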
Languages: Rust, C++, Kotlin, Swift, Go, Python, TypeScript
AI/ML: PyTorch, JAX, multimodal transformers, diffusion models
Systems: LLVM, IR design, static/dynamic analysis, embedded tooling
Other: 3D perception, sensor fusion frameworks, real-time pipelines
“Build systems that understand the world. Everything else is optimization.”


