Explanations of key concepts in ML
- AlexNet
- BART
- BEiT
- BERT
- ColD Fusion
- ConvMixer
- Deep and Cross Network
- DenseNet
- DistilBERT
- DiT
- DocFormer
- Donut
- EfficientNet
- Entity Embeddings
- ERNIE-Layout
- Fast RCNN
- Faster RCNN
- Feature Pyramid Network
- Feature Tokenizer Transformer
- Focal Loss (RetinaNet)
- InceptionNet
- InceptionNetV2 and InceptionNetV3
- InceptionNetV4 and InceptionResNet
- Layout LM
- Layout LM v2
- Layout LM v3
- LeNet
- LiLT
- Mask RCNN
- Masked Autoencoder
- MobileBERT
- MobileNetV1
- MobileNetV2
- MobileNetV3
- RCNN
- ResNet
- ResNeXt
- SentenceBERT
- Single Shot MultiBox Detector (SSD)
- StructuralLM
- Swin Transformer
- TableNet
- TabTransformer
- Tabular ResNet
- TinyBERT
- Transformer
- VGG
- Vision Transformer
- Wide and Deep Learning
- Xception
- XLNet
Categories
- Convolutional Neural Networks
- Layout Transformers
- Region-based Convolutional Neural Networks
- Tabular Deep Learning
Reach out to Ritvik or Elvis if you have any questions.
If you are interested in contributing, feel free to open a PR.