I've been programming for about 9 years now. I like to build things.
Been using LLMs daily since they went mainstream. Watching them evolve has left me skeptical that scale alone gets us to AGI, so I also dabble in research in my free time.
Trying to reverse-engineer neural networks. Making AI explain itself because "because I said so" doesn't cut it anymore.
Text alone isn't enough. Spatial reasoning and visual understanding matter, and we need to get better at interpreting vision models.

