# 2023-07-28-twitter_xlm_roberta_base_sentiment_en #13906

Merged
@@ -1,10 +1,10 @@
---
layout: model
-title: twitter-xlm-roberta-base-sentiment
+title: twitter_xlm_roberta_base_sentiment(Cardiff nlp) (Veer)
author: veerdhwaj
name: twitter_xlm_roberta_base_sentiment
date: 2023-07-28
-tags: [sentiment, roberta, en, open_source, tensorflow]
+tags: [en, open_source, tensorflow]
task: Text Classification
language: en
edition: Spark NLP 5.0.0
@@ -19,18 +19,21 @@ use_language_switcher: "Python-Scala-Java"

## Description

-This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages
-Huggingface : https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment
+This is a multilingual XLM-roBERTa-base model trained on ~198M tweets and finetuned for sentiment analysis. The sentiment fine-tuning was done on 8 languages (Ar, En, Fr, De, Hi, It, Sp, Pt) but it can be used for more languages (see paper for details).

+Paper: XLM-T: A Multilingual Language Model Toolkit for Twitter.
+Git Repo: XLM-T official repository.
+This model has been integrated into the TweetNLP library.

## Predicted Entities

`sentiment`


{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
-[Download](https://s3.amazonaws.com/community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_en_5.0.0_3.2_1690535217423.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
-[Copy S3 URI](s3://community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_en_5.0.0_3.2_1690535217423.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}
+[Download](https://s3.amazonaws.com/community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_en_5.0.0_3.2_1690542160993.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
+[Copy S3 URI](s3://community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_en_5.0.0_3.2_1690542160993.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use

@@ -39,34 +42,33 @@ Huggingface : https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentime
<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
-import spark.implicits._
-import com.johnsnowlabs.nlp.base._
-import com.johnsnowlabs.nlp.annotator._
-import org.apache.spark.ml.Pipeline
+from pyspark.ml import Pipeline

-val documentAssembler = new DocumentAssembler()
-  .setInputCol("text")
-  .setOutputCol("document")
+document_assembler = DocumentAssembler() \
+  .setInputCol('text') \
+  .setOutputCol('document')

-val tokenizer = new Tokenizer()
-  .setInputCols("document")
-  .setOutputCol("token")
+tokenizer = Tokenizer() \
+  .setInputCols(['document']) \
+  .setOutputCol('token')

-val sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained('twitter_xlm_roberta_base_sentiment')
-  .setInputCols("token", "document")
+sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained('twitter_xlm_roberta_base_sentiment')\
+  .setInputCols(["document",'token'])\
  .setOutputCol("class")
-  .setCaseSensitive(true)

-val pipeline = new Pipeline().setStages(Array(
-  documentAssembler,
-  tokenizer,
-  sequenceClassifier
-))
+pipeline = Pipeline(stages=[
+  document_assembler,
+  tokenizer,
+  sequenceClassifier
+])

+# couple of simple examples
+example = spark.createDataFrame([['사랑해!'], ["T'estimo! ❤️"], ["I love you!"], ['Mahal kita!']]).toDF("text")

-val data = Seq("I loved this movie when I was a child.", "It was pretty boring.").toDF("text")
-val result = pipeline.fit(data).transform(data)
+result = pipeline.fit(example).transform(example)

-result.select("class.result").show(false)
+# result is a DataFrame
+result.select("text", "class.result").show()
```

</div>
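
Beyond the DataFrame-based usage shown in the updated snippet above, Spark NLP's `LightPipeline` can score individual strings without building a DataFrame first. The following is a minimal sketch (not part of this PR's diff), assuming the `pipeline` and `example` objects defined in the Python snippet:

```python
from sparknlp.base import LightPipeline

# Fit the pipeline once on the example DataFrame, then wrap the fitted model
# for fast, single-string inference.
model = pipeline.fit(example)
light = LightPipeline(model)

# annotate() returns a dict keyed by output column; the predicted label appears
# under the "class" key.
print(light.annotate("I love you!"))
```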
@@ -0,0 +1,86 @@
---
layout: model
title: twitter_xlm_roberta_base_sentiment_pdc(cardiff)
author: veerdhwaj
name: twitter_xlm_roberta_base_sentiment_pdc
date: 2023-07-31
tags: [en, open_source, tensorflow]
task: Text Classification
language: en
edition: Spark NLP 5.0.0
spark_version: 3.2
supported: false
engine: tensorflow
annotator: XlmRoBertaForSequenceClassification
article_header:
type: cover
use_language_switcher: "Python-Scala-Java"
---

## Description

Huggingface model: https://huggingface.co/cardiffnlp/twitter-xlm-roberta-base-sentiment

## Predicted Entities



{:.btn-box}
<button class="button button-orange" disabled>Live Demo</button>
<button class="button button-orange" disabled>Open in Colab</button>
[Download](https://s3.amazonaws.com/community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_pdc_en_5.0.0_3.2_1690779049644.zip){:.button.button-orange.button-orange-trans.arr.button-icon}
[Copy S3 URI](s3://community.johnsnowlabs.com/veerdhwaj/twitter_xlm_roberta_base_sentiment_pdc_en_5.0.0_3.2_1690779049644.zip){:.button.button-orange.button-orange-trans.button-icon.button-copy-s3}

## How to use



<div class="tabs-box" markdown="1">
{% include programmingLanguageSelectScalaPythonNLU.html %}
```python
import sparknlp
from sparknlp.base import DocumentAssembler
from sparknlp.annotator import Tokenizer, XlmRoBertaForSequenceClassification
from pyspark.ml import Pipeline

# Start (or attach to) a Spark session with Spark NLP available.
spark = sparknlp.start()

document_assembler = DocumentAssembler() \
.setInputCol('text') \
.setOutputCol('document')

tokenizer = Tokenizer() \
.setInputCols(['document']) \
.setOutputCol('token')

sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained('twitter_xlm_roberta_base_sentiment_pdc')\
.setInputCols(["document",'token'])\
.setOutputCol("class")

pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sequenceClassifier
])

# a couple of simple examples in different languages
example = spark.createDataFrame([['사랑해!'], ["T'estimo! ❤️"], ["I love you!"], ['Mahal kita!']]).toDF("text")

result = pipeline.fit(example).transform(example)

# result is a DataFrame
result.select("text", "class.result").show()
```

</div>
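
Besides the predicted label in `class.result`, each annotation in the `class` column carries a metadata map that typically includes the classifier's confidence scores. A minimal sketch (not part of the original card), assuming the `result` DataFrame from the snippet above:

```python
# Show the input text, the predicted label, and the raw annotation metadata
# (which usually contains per-label confidence scores) side by side.
result.selectExpr("text", "class.result as label", "class.metadata as metadata") \
      .show(truncate=False)
```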

{:.model-param}
## Model Information

{:.table-model}
|---|---|
|Model Name:|twitter_xlm_roberta_base_sentiment_pdc|
|Compatibility:|Spark NLP 5.0.0+|
|License:|Open Source|
|Edition:|Community|
|Input Labels:|[document, token]|
|Output Labels:|[class]|
|Language:|en|
|Size:|1.0 GB|
|Case sensitive:|true|
|Max sentence length:|512|
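
The case sensitivity and maximum sentence length listed above are exposed as parameters on the annotator, so they can be set explicitly when the classifier is loaded. A minimal sketch (not part of the original card), assuming the imports from the snippet above:

```python
# Load the classifier with the documented limits made explicit.
sequenceClassifier = XlmRoBertaForSequenceClassification.pretrained('twitter_xlm_roberta_base_sentiment_pdc') \
    .setInputCols(["document", "token"]) \
    .setOutputCol("class") \
    .setCaseSensitive(True) \
    .setMaxSentenceLength(512)
```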