Stars
🚨 GROW YOUR AUDIENCE WITH HUGOBLOX! 🚀 HugoBlox is an easy, fast, no-code website builder for researchers, entrepreneurs, data scientists, and developers. Build stunning sites in minutes. Suitable for researchers, entrepreneurs, …
Astroplate is a free starter template built with Astro, TailwindCSS & TypeScript providing everything you need to jumpstart your Astro project. Get started with Astroplate and save yourself hours o…
nbwipers is a command-line tool, written in Rust, for wiping Jupyter notebooks clean.
📦 Serverless and local-first Open Data Platform
Qwen2.5 is the large language model series developed by the Qwen team at Alibaba Cloud.
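A minimal sketch of loading a Qwen2.5 chat model through the Hugging Face transformers API; the checkpoint id and generation settings below are illustrative assumptions, not taken from the entry above.

```python
# Minimal sketch: run a Qwen2.5 instruct model with Hugging Face transformers.
# The model id is an assumed example; substitute the checkpoint you actually use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Explain what a query engine does in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```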
Use PEFT or Full-parameter to finetune 400+ LLMs (Qwen2.5, InternLM3, GLM4, Llama3.3, Mistral, Yi1.5, Baichuan2, DeepSeek3, ...) and 150+ MLLMs (Qwen2-VL, Qwen2-Audio, Llama3.2-Vision, Llava, Inter…
An efficient pure-PyTorch implementation of Kolmogorov-Arnold Network (KAN).
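A hedged usage sketch: the `efficient_kan` import path and the `KAN([...])` constructor taking a list of layer widths are assumptions based on the project's examples, not verified here; the rest is a standard PyTorch training step, since the model is an `nn.Module`.

```python
import torch
from efficient_kan import KAN  # assumed import path for the package

# Assumed constructor: a list of layer widths (input, hidden, output).
model = KAN([4, 16, 3])

# Ordinary PyTorch training step on random data, for illustration only.
x = torch.randn(32, 4)
y = torch.randint(0, 3, (32,))
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss = torch.nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
```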
A programming framework for agentic AI 🤖 PyPi: autogen-agentchat Discord: https://aka.ms/autogen-discord Office Hour: https://aka.ms/autogen-officehour
We unify the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs, and parameter-efficient methods (e.g., LoRA, P-Tuning) for easy use. We welcome open-source enthusiasts…
Initial public release of code, data, and model weights for FourCastNet
An official implementation of Pangu-Weather
Fast and easy ECharts with a Polars backend for data wrangling and a simple API
Llama-3 agents that can browse the web by following instructions and talking to you
Fully private LLM chatbot that runs entirely in the browser, with no server needed. Supports Mistral and Llama 3.
Repository for the educational technical content developed for BH TIL-AI 2024
Polars extension for general data science use cases
Dataframes powered by a multithreaded, vectorized query engine, written in Rust
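A minimal Polars sketch using the standard eager API; the column names and values are made up for illustration.

```python
import polars as pl

# Small illustrative DataFrame; column names and values are invented.
df = pl.DataFrame({
    "city": ["Oslo", "Oslo", "Lima", "Lima"],
    "temp_c": [3.0, 5.5, 18.2, 20.1],
})

# Vectorized filter and aggregation on the multithreaded query engine.
summary = (
    df.filter(pl.col("temp_c") > 4.0)
      .group_by("city")
      .agg(pl.col("temp_c").mean().alias("mean_temp_c"))
)
print(summary)
```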
Machine learning with logical rules in Python
Code and notebooks for my Medium blog posts
An open-source Python library for automated feature engineering based on genetic programming
A VSCode extension to generate development environments using micromamba and the conda-forge package repository
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Code and documentation to train Stanford's Alpaca models, and generate the data.