I'm a software engineer and researcher passionate about people, startups, and artificial intelligence.
I'm actively researching embodied intelligence, multimodal knowledge retrieval, agentic memory systems, and continual learning at the University of Pennsylvania's GRASP Lab. As an undergraduate, I researched language model–assisted code generation at the Sanghani Center for Artificial Intelligence and Data Analytics, and developed sparse-sensor pose estimation methods at the Assistive Robotics Lab. I also worked on computer vision techniques for monocular dense reconstruction at MathWorks.
I'm a huge hackathon enthusiast (and former hackathon director!), and several of my projects are available on Devpost. Most recently, I was runner-up for the "Most Technically Complex/Challenging Hack" award at UC Berkeley for developing Haven, an agentic assistant for anxiety attack detection and intervention (real-time voice agents, language models, and procedurally generated voice-to-3D environments!). Earlier this year, I also won "Best Overall" at Virginia Tech for developing SafeSound, a multithreaded, real-time "transparency mode" for headphones that filters trigger words for PTSD patients.
Beyond hackathons, I actively experiment with language models, program robots, and automate absolutely everything.
If you're interested in reaching out, feel free to contact me at jkatyan@upenn.edu or datastructure on Discord.
🐟 I'm also a big fan of fish :)
