Implementation of AAAI 2024 paper "High-Fidelity Gradient Inversion in Distributed Learning"
Algorithms to recover input data from its gradient signal through a neural network
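As a minimal illustration of why gradients leak inputs (a toy sketch, not the paper's method): for a fully-connected layer y = Wx + b under any scalar loss L, the chain rule gives dL/dW = (dL/dy) xᵀ and dL/db = dL/dy, so every row of the weight gradient is a scaled copy of the input and x can be read off exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # private input we want to reconstruct
W = rng.normal(size=(3, 4))
b = rng.normal(size=3)

y = W @ x + b
dLdy = y - rng.normal(size=3)     # e.g. squared-error loss against a target

# Gradients an aggregator would observe for this layer:
#   dL/dW = outer(dL/dy, x),  dL/db = dL/dy
dW = np.outer(dLdy, x)
db = dLdy

i = np.argmax(np.abs(db))         # any row with a nonzero bias gradient
x_rec = dW[i] / db[i]             # each row of dW is a scaled copy of x
```

Here `x_rec` matches `x` exactly; real attacks on deeper networks instead optimize a dummy input so its gradients match the observed ones.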
A framework for few-shot evaluation of language models.
Shepherd: A foundational framework enabling federated instruction tuning for large language models
Everything about federated learning, including research papers, books, codes, tutorials, videos and beyond
[MICCAI 2024] Codebase for "Stable Diffusion Segmentation for Biomedical Images with Single-step Reverse Process"
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs o…
Code for "Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors"
CodeGen is a family of open-source models for program synthesis. Trained on TPU-v4. Competitive with OpenAI Codex.
PrivacyGuard is a platform that combines blockchain smart contract and TEE to enable transparent enforcement of private data computation and fine-grained usage control. This repo includes prototype…
https://dl.acm.org/doi/10.1145/3576915.3623209
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (V…
A PyTorch implementation of the loss thresholding attack to infer membership status, as described in the paper "Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting" (CSF 18).
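The core of the loss thresholding attack is a one-line decision rule: predict "member" when an example's loss falls below a threshold, because overfit models assign lower loss to their training data. A self-contained sketch with simulated losses (the distributions and threshold here are illustrative, not from the repo):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulated per-example losses: members (training points) tend to have
# lower loss than non-members because the model overfits them.
member_losses = rng.exponential(scale=0.2, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)

tau = 0.5  # threshold, e.g. set to the model's average training loss

def is_member(loss, tau=tau):
    # Predict "member" iff the example's loss is below the threshold.
    return loss < tau

tpr = np.mean([is_member(l) for l in member_losses])          # true positives
tnr = np.mean([not is_member(l) for l in nonmember_losses])   # true negatives
advantage = tpr + tnr - 1.0   # membership advantage; 0 = random guessing
```

The larger the train/test loss gap (i.e. the more overfitting), the larger the advantage of this simple rule.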
Dynamic and Iterative Spanning Forest (DISF) superpixel segmentation framework
[NeurIPS 2021]: Are Transformers More Robust Than CNNs? (PyTorch implementation & checkpoints)
Implementation of the algorithms described in the papers "ZO-AdaMM: Zeroth Order Adaptive Momentum" by Chen et al., "Stochastic first- and zeroth-order methods" by Ghadimi et al. and "SignSGD via z…
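All of these zeroth-order methods share one building block: estimating a gradient from function values alone via random-direction finite differences. A minimal two-point estimator on a toy quadratic (a sketch of the general technique, not any single paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Black-box objective: we may query values but never gradients.
    return float(np.sum((x - 1.0) ** 2))

x = np.zeros(5)
mu, lr, k = 1e-4, 0.1, 10   # smoothing radius, step size, probes per step
for _ in range(200):
    g = np.zeros_like(x)
    for _ in range(k):
        u = rng.normal(size=x.shape)            # random probe direction
        g += (f(x + mu * u) - f(x)) / mu * u    # two-point gradient estimate
    x -= lr * g / k                             # averaged ZO gradient step
```

Averaging over `k` directions trades extra function queries for lower estimator variance, which is the knob these papers analyze.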
A decision-based dense attack
Square Attack: a query-efficient black-box adversarial attack via random search [ECCV 2020]
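The random-search idea behind Square Attack is simple to sketch: repeatedly perturb a random square patch of the input within the l_inf budget and keep the change only if the model's score for the true class drops. A toy version against a linear stand-in "model" (the `score` function and constants are placeholders, not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))      # stand-in "model": score(x) = <w, x>

def score(x):
    # Positive score = correct class; the attacker only sees this value.
    return float(np.sum(w * x))

x0 = 0.1 * np.sign(w)            # clean input the model scores positively
eps = 0.3                        # l_inf perturbation budget
x, best = x0.copy(), score(x0)
for _ in range(1000):
    s = rng.integers(1, 4)                       # random square side length
    r, c = rng.integers(0, 8 - s + 1, size=2)    # random square location
    cand = x.copy()
    # Set the square to a single +/- eps offset from the clean input,
    # which keeps the perturbation inside the l_inf ball by construction.
    cand[r:r+s, c:c+s] = x0[r:r+s, c:c+s] + eps * rng.choice([-1.0, 1.0])
    if score(cand) < best:                       # greedy accept
        x, best = cand, score(cand)
```

No gradients are ever queried, only scores, which is what makes the attack black-box and query-efficient.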
Triangle Attack: A Query-efficient Decision-based Adversarial Attack (ECCV 2022)
A library for experimenting with, training and evaluating neural networks, with a focus on adversarial robustness.
[ICML 2023] Are Diffusion Models Vulnerable to Membership Inference Attacks?
This is an official repository for Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study (ICCV2023).
Codebase for Generative Adversarial Imputation Networks (GAIN) - ICML 2018
Source Code of the ROAD benchmark for feature attribution methods (ICML22)

