- Anthropic
- San Francisco
- anthropic.com
- @NotTomBrown
Stars
- `shed` canonicalises Python code. Shed your legacy, stop bikeshedding, and move on. Black++
- Typed interactions with the GitHub API v3
- Google Research
- Development repository for the Triton language and compiler
- Chrome extension that records your browser interactions and generates a Playwright or Puppeteer script.
- JavaScript API for Chrome and Firefox
- Simple Flask boilerplate with Postgres, Docker, and Heroku/Zeit Now
- Fully featured framework for fast, easy, and documented API development with Flask
- SQLAlchemy database migrations for Flask applications using Alembic
- Read Google Cloud Storage, Azure Blobs, and local paths with the same interface
- Find, verify, and analyze leaked credentials
- Gin provides a lightweight configuration framework for Python
- Browser extension that simplifies the GitHub interface and adds useful features
- An open clone of OpenAI's GPT-2 WebText dataset. Still WIP.
- Original PyTorch implementation of Cross-lingual Language Model Pretraining.
- Magenta: Music and Art Generation with Machine Intelligence
- Code repository to accompany the O'Reilly book "Strengthening Deep Neural Networks: Making AI Less Susceptible to Adversarial Trickery"
- This repository provides state-of-the-art (SoTA) results for all machine learning problems. We do our best to keep this repository up to date. If you do find a problem's SoTA result is out of date …
- Renders papers from arXiv as responsive web pages so you don't have to squint at a PDF.
- Profiling and inspecting memory in PyTorch
- Examples of using sparse attention, as in "Generating Long Sequences with Sparse Transformers"
- Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
- Blazingly fast analytics database that will rapidly devour all of your data.
- Collective communications library with various primitives for multi-machine training.