Stars
Solve puzzles. Improve your PyTorch.
Bayesian active learning with EPIG data acquisition
Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory
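A minimal sketch of the pattern such a library automates: catch CUDA out-of-memory errors and shrink the batch until the step fits. `train_step` is a hypothetical user callable here, not this library's actual API.

```python
import torch

def run_with_adaptive_batch(train_step, batch, max_retries=4):
    """Halve the batch on CUDA OOM until the step fits in memory."""
    for _ in range(max_retries):
        try:
            return train_step(batch)  # hypothetical user function
        except torch.cuda.OutOfMemoryError:
            torch.cuda.empty_cache()  # return cached blocks to the allocator
            if batch.shape[0] <= 1:
                raise                 # cannot shrink further
            batch = batch[: batch.shape[0] // 2]
    raise RuntimeError("batch does not fit in CUDA memory")
```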
A general-purpose, deep learning-first library for constrained optimization in PyTorch
Coverage tests to check the quality of your posterior estimators.
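For intuition, a minimal simulation-based-calibration-style coverage check: if the posterior estimator is well calibrated, the rank of the true parameter among posterior draws is uniform. `sample_prior`, `simulate`, and `sample_posterior` are hypothetical placeholders; the library's own interface may differ.

```python
import numpy as np

def coverage_ranks(sample_prior, simulate, sample_posterior,
                   n_trials=200, n_post=100):
    """Rank statistics for a scalar parameter; a flat histogram of
    ranks over {0, ..., n_post} indicates a calibrated estimator."""
    ranks = []
    for _ in range(n_trials):
        theta = sample_prior()                 # draw a ground-truth parameter
        data = simulate(theta)                 # simulate an observation
        post = sample_posterior(data, n_post)  # draw posterior samples
        ranks.append(np.sum(post < theta))     # rank of the truth
    return np.asarray(ranks)
```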
SWE-agent takes a GitHub issue and tries to automatically fix it, using GPT-4, or your LM of choice. It can also be employed for offensive cybersecurity or competitive coding challenges. [NeurIPS 2…
💎 A curated list of awesome Competitive Programming, Algorithm and Data Structure resources
Software design principles for machine learning applications
Pytorch-like dataloaders in JAX.
Automated molecular dynamics simulation workflow for high-throughput assessment of protein-ligand dynamics
Hardware accelerated, batchable and differentiable optimizers in JAX.
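The idea in miniature, in plain JAX rather than this library's own API: write the whole solve as a pure function so it can be jitted, vmapped over problem instances, and differentiated through. The quadratic objective is just an assumed toy example.

```python
import jax
import jax.numpy as jnp

def solve(theta, x0, lr=0.1, steps=100):
    """Gradient descent on f(x) = ||x - theta||^2 via lax.scan,
    so the solver itself is jit-able and differentiable."""
    def step(x, _):
        grad = jax.grad(lambda x: jnp.sum((x - theta) ** 2))(x)
        return x - lr * grad, None

    x_star, _ = jax.lax.scan(step, x0, None, length=steps)
    return x_star

batched_solve = jax.vmap(solve, in_axes=(0, 0))  # solve many problems at once
dxstar_dtheta = jax.jacobian(solve)              # differentiate through the solve
```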
Optimal transport tools implemented with the JAX framework, providing differentiable, parallel and JIT-able computations.
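For flavor, a textbook entropy-regularized Sinkhorn loop in plain JAX, not this library's actual API; real solvers add log-domain stabilization and convergence checks.

```python
import jax.numpy as jnp

def sinkhorn_plan(cost, a, b, eps=0.1, n_iters=100):
    """Entropic OT between histograms a and b with cost matrix `cost`;
    jit-able and differentiable by construction (loop is unrolled)."""
    K = jnp.exp(-cost / eps)                 # Gibbs kernel
    u, v = jnp.ones_like(a), jnp.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v)                      # match row marginals
        v = b / (K.T @ u)                    # match column marginals
    return u[:, None] * K * v[None, :]       # transport plan
```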
This repository contains a collection of resources and papers on Diffusion Models for RL, accompanying the paper "Diffusion Models for Reinforcement Learning: A Survey"
TorchCFM: a Conditional Flow Matching library
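The core regression target of conditional flow matching, as a hedged PyTorch sketch with a user-supplied velocity network `model`; this is the standard linear-path objective, not necessarily TorchCFM's exact API.

```python
import torch

def cfm_loss(model, x1):
    """Linear-path flow matching: along x_t = (1 - t) x0 + t x1 the
    conditional velocity is x1 - x0; regress the network onto it."""
    x0 = torch.randn_like(x1)                          # source noise
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)),
                   device=x1.device)                   # one t per sample
    x_t = (1 - t) * x0 + t * x1                        # point on the path
    return torch.mean((model(x_t, t) - (x1 - x0)) ** 2)
```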
Implicit Deep Adaptive Design (iDAD): Policy-Based Experimental Design without Likelihoods
Platform to experiment with the AI Software Engineer. Terminal-based. NOTE: Very different from https://gptengineer.app
Small Python library to automatically set CUDA_VISIBLE_DEVICES to the least loaded device on multi-GPU systems.
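A sketch of the idea using NVML directly; the library itself may use different load criteria (utilization, running processes, ...).

```python
import os
import pynvml  # NVIDIA management library bindings

def pick_least_loaded_gpu():
    """Point CUDA_VISIBLE_DEVICES at the GPU with the most free memory.
    Must run before any CUDA context is created."""
    pynvml.nvmlInit()
    free = [
        pynvml.nvmlDeviceGetMemoryInfo(
            pynvml.nvmlDeviceGetHandleByIndex(i)).free
        for i in range(pynvml.nvmlDeviceGetCount())
    ]
    pynvml.nvmlShutdown()
    best = max(range(len(free)), key=free.__getitem__)
    os.environ["CUDA_VISIBLE_DEVICES"] = str(best)
    return best
```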
emilemathieu / escnn_jax
Forked from QUVA-Lab/escnn: Equivariant Steerable CNNs Library for PyTorch. https://quva-lab.github.io/escnn/
Interact with your documents using the power of GPT, 100% privately, no data leaks
Chat with your database (SQL, CSV, pandas, polars, MongoDB, NoSQL, etc.). PandasAI makes data analysis conversational using LLMs (GPT-3.5/4, Anthropic, VertexAI) and RAG.
A list of totally open alternatives to ChatGPT
Differentiable, Hardware Accelerated, Molecular Dynamics
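What "differentiable molecular dynamics" means in miniature, sketched in plain JAX rather than this library's own API: forces come from jax.grad of the energy, so whole trajectories are differentiable with respect to positions and potential parameters. The toy Lennard-Jones potential below has no cutoff or periodic boundaries.

```python
import jax
import jax.numpy as jnp

def lj_energy(positions, sigma=1.0, epsilon=1.0):
    """Total Lennard-Jones energy of N particles (toy version)."""
    diff = positions[:, None, :] - positions[None, :, :]
    eye = jnp.eye(positions.shape[0])
    r2 = jnp.sum(diff ** 2, axis=-1) + eye   # pad the diagonal to avoid r = 0
    inv6 = (sigma ** 2 / r2) ** 3
    pair = 4 * epsilon * (inv6 ** 2 - inv6)
    return 0.5 * jnp.sum(pair * (1 - eye))   # halve the double count

@jax.jit
def verlet_step(positions, velocities, dt=1e-3):
    """One velocity-Verlet step; forces are exact gradients of the energy."""
    forces = -jax.grad(lj_energy)(positions)
    velocities = velocities + 0.5 * dt * forces
    positions = positions + dt * velocities
    velocities = velocities + 0.5 * dt * (-jax.grad(lj_energy)(positions))
    return positions, velocities
```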
A playbook for systematically maximizing the performance of deep learning models.