Deep learning in smiles win / loss evaluation.
This repository contains the annotation framework, dataset and code used for the resource paper "TACO -- Twitter Arguments from COnversations".
Very Simple Transformers provides a simplified interface for packaging, deploying, and serving Transformer models.
Simple Transformers Fork that supports T5TokenizerFast and umT5
Backend for MindPeers ML (NLP) models such as Sentiment Analysis & Keyword Extraction (including Feedback Loops)
The Simple Transformers library is built on HuggingFace's Transformers library. The goal of Question Answering is to find the answer to a question within an accompanying context. The predicted answer is either a span of text from the context or an empty string (indicating the question cannot be answered from the context).
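As a minimal sketch of the extractive QA setup described above, the snippet below builds one SQuAD-style training record in the format Simple Transformers expects. The context, question, and answer text are made-up examples; the model calls are shown commented out because they download pretrained weights.

```python
# Hypothetical SQuAD-style record for simpletransformers' QuestionAnsweringModel.
context = "Simple Transformers is built on top of the HuggingFace Transformers library."

qa_data = [
    {
        "context": context,
        "qas": [
            {
                "id": "0",
                "question": "What is Simple Transformers built on?",
                # An unanswerable question would set is_impossible=True with an
                # empty answer, matching the "empty string" case described above.
                "is_impossible": False,
                "answers": [
                    {
                        "text": "the HuggingFace Transformers library",
                        # Character offset of the answer span within the context.
                        "answer_start": context.index("the HuggingFace"),
                    }
                ],
            }
        ],
    }
]

# Training and prediction (requires `pip install simpletransformers`):
# from simpletransformers.question_answering import QuestionAnsweringModel
# model = QuestionAnsweringModel("bert", "bert-base-cased", use_cuda=False)
# model.train_model(qa_data)
# predictions = model.predict(qa_data)
```

The answer is always a character-aligned span of the context, which is what makes the task extractive rather than generative.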
This repository contains code for a fine-tuning experiment of CamemBERT, a French version of the BERT language model, on a portion of the FQuAD (French Question Answering Dataset) for Question Answering tasks.
Deep learning for win / loss evaluation of FEN positions.
This library is based on simpletransformers and HuggingFace's Transformers library.
Implementation and demo of explainable coding of clinical notes with Hierarchical Label-wise Attention Networks (HLAN)
Small application to test out some functionality of OpenAI's Generative Pre-Trained Transformer (GPT-2) model.
Application for training the pretrained transformer model DeBERTaV3 on an Aspect Based Sentiment Analysis task
Machine Learning Hackathon by MachineHack hosted by Ugam
The objective of this challenge is to create a machine translation system capable of converting text from French into Fongbe or Ewe.
Text classification code used to identify spam messages for a class Kaggle competition. The library used is Simple Transformers. Placed 2nd with a 0.98 accuracy score.
The goal of this challenge is to build a machine translation model to translate sentences from the Yorùbá language to English across several domains, including news articles, daily conversations, spoken dialog transcripts, and books.
Weak Supervised Fake News Detection with RoBERTa, XLNet, ALBERT, XGBoost and Logistic Regression classifiers.
🏷️ Multi-label classification with BERT.
Classify forum posts into one of the Amazon e-commerce forum categories using Natural Language Processing (NLP) and Machine Learning.