
Francisco Antonio

LinkedIn · X · Email


About Me

17-year-old AI researcher focused on designing and training language model architectures from scratch. I build systems to understand the why behind learning dynamics, not just to replicate them.

  • Current Focus: Mixture-of-Collaborative-Experts (MoC) for emergent reasoning in sparse models
  • Specialties: Deep learning optimization, distributed training, experimental architectures
  • Goal: Join a team that values technical rigor, creativity, and ambitious R&D

Tech Stack

Languages & Frameworks: Python, PyTorch, custom autograd functions (see the autograd sketch below)

Architectures: Mixture-of-Experts (MoE), Native Sparse Attention (NSA), Grouped-Query Attention (GQA), QK-Norm, RoPE (see the RoPE sketch below)

Training at Scale: DDP, FSDP (see the DDP launch sketch below)

Tooling: Weights & Biases, Docker, Git
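
"Custom autograd functions" above means ops with hand-written backward passes. Here is a minimal, self-contained sketch; the ClampedReLU op is illustrative only, not taken from any of my repos:

```python
import torch

class ClampedReLU(torch.autograd.Function):
    """Illustrative op: ReLU clamped at a ceiling, with a manual backward."""

    @staticmethod
    def forward(ctx, x, ceiling=6.0):
        ctx.save_for_backward(x)
        ctx.ceiling = ceiling
        return x.clamp(min=0.0, max=ceiling)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Gradient passes only where the input fell strictly inside (0, ceiling).
        mask = (x > 0) & (x < ctx.ceiling)
        return grad_output * mask, None  # None: no gradient for `ceiling`

x = torch.randn(4, requires_grad=True)
ClampedReLU.apply(x).sum().backward()
```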
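For the architecture list, a compact reference implementation of RoPE in the rotate-half style used by Llama-family models (the function name and tensor layout are my own choices):

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotary position embedding over the last dim of x, shaped (seq_len, dim).

    Pairs dimension i with i + dim/2 and rotates each pair by
    theta = pos * base**(-2i/dim), encoding position as a rotation.
    """
    seq_len, dim = x.shape
    half = dim // 2
    inv_freq = base ** (-torch.arange(half, dtype=x.dtype) * 2.0 / dim)
    angles = torch.arange(seq_len, dtype=x.dtype)[:, None] * inv_freq  # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[..., :half], x[..., half:]
    return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)
```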
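And a minimal DDP loop of the kind launched with torchrun; the toy model, batch size, and learning rate are placeholders:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launch: torchrun --nproc_per_node=<gpus> train_ddp.py
    # torchrun populates RANK, LOCAL_RANK, and WORLD_SIZE in the environment.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = DDP(torch.nn.Linear(512, 512).cuda(local_rank), device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

    for _ in range(10):
        x = torch.randn(8, 512, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()  # DDP all-reduces gradients across ranks here
        opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```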


Featured Projects

Lunaris Codex (MoC)
A modular LLM training toolkit featuring SOTA dense models and hybrid NSA-MoE architectures, including a novel "Mixture-of-Collaborative-Experts" (MoC) layer designed for emergent reasoning via two-pass communication. A hedged sketch of the two-pass idea follows.
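
The sketch below is only my illustrative reading of what two-pass expert communication could look like, not the actual Lunaris Codex implementation; the class name, soft routing, and mean-pooled summary are all placeholder choices:

```python
import torch
import torch.nn as nn

class TwoPassExperts(nn.Module):
    """Hypothetical two-pass collaborative expert layer (illustrative only).

    Pass 1: experts produce draft outputs for their tokens.
    Communication: drafts are pooled into a shared summary.
    Pass 2: experts re-process tokens conditioned on that summary.
    """

    def __init__(self, dim: int, n_experts: int):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.mix = nn.Linear(2 * dim, dim)  # fuses token state with the summary

    def run_experts(self, x: torch.Tensor, gates: torch.Tensor) -> torch.Tensor:
        # Dense (soft) routing for clarity; a real sparse layer would use top-k.
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (seq, dim, E)
        return (outs * gates[:, None, :]).sum(dim=-1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (seq, dim)
        gates = self.router(x).softmax(dim=-1)            # (seq, E)
        drafts = self.run_experts(x, gates)               # pass 1: drafts
        summary = drafts.mean(dim=0, keepdim=True).expand_as(x)
        refined = self.mix(torch.cat((x, summary), dim=-1))
        return self.run_experts(refined, gates)           # pass 2: refine
```

Dense routing keeps the sketch short; the point is the shared summary that lets experts condition on each other between the two passes.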

GitHub Stats

(GitHub stats and top-languages cards)

Pinned

  1. lunariscodex (Public)

    A high-performance PyTorch toolkit for pre-training modern, Llama-style language models. Based on nanoGPT with significant architectural enhancements.

    Python · 13 stars · 2 forks