A structured approach to opinion mining: extracting who expresses what sentiment toward whom, with a polarity label.
- Problem Statement
- Architecture
- Methodology
- Baseline System
- Features
- Datasets
- Results
- Installation
- Usage
- Project Structure
- Further Work
Structured Sentiment Analysis (SSA) extends traditional sentiment analysis by extracting complete sentiment graphs from raw text. Instead of simply classifying a text as positive or negative, SSA identifies all the structured opinion tuples present in a sentence.
Given a text, the task is to extract all opinion tuples (Holder, Target, Expression, Polarity), where each element is defined as:

| Component | Description |
|---|---|
| Holder | The entity who expresses the opinion |
| Expression | The polar expression (the words conveying sentiment) |
| Target | The entity toward which the opinion is directed |
| Polarity | The sentiment label: positive, negative, or neutral |
Sentence: "Even though the price is decent for Paris, I would not recommend this hotel."
Opinion Tuple 1:
Holder → "I"
Expression → "would not recommend"
Target → "this hotel"
Polarity → NEGATIVE

Opinion Tuple 2:
Holder → (implicit)
Expression → "decent"
Target → "the price"
Polarity → POSITIVE
Unlike standard sentiment analysis, SSA captures complex, overlapping, and multi-target opinions within a single sentence, making it far more expressive and practically useful.
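The tuple structure above can be sketched as a small data structure. This is a minimal illustration only; the `OpinionTuple` class and its field names are assumptions for exposition, not the project's actual code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpinionTuple:
    """One structured opinion extracted from a sentence."""
    holder: Optional[str]   # None when the holder is implicit
    expression: str         # the polar expression (sentiment-bearing words)
    target: str             # entity the opinion is directed at
    polarity: str           # "positive" | "negative" | "neutral"

# The two tuples from the example sentence above:
opinions = [
    OpinionTuple("I", "would not recommend", "this hotel", "negative"),
    OpinionTuple(None, "decent", "the price", "positive"),
]
```

Representing the implicit holder as `None` (rather than an empty string) keeps the "no holder annotated" case distinct from an empty span.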
Our model follows a unified end-to-end architecture built on top of a pretrained BERT-based encoder, enhanced with a CRF decoding layer and a separate polarity head.
```
┌─────────────────────────────────────────────────┐
│                   Input Text                    │
└────────────────────────┬────────────────────────┘
                         │
┌────────────────────────▼────────────────────────┐
│           [CLS] w₁ w₂ ... wₙ [SEP]              │
│             Token Embedding Layer               │
└────────────────────────┬────────────────────────┘
                         │
┌────────────────────────▼────────────────────────┐
│            Norwegian BERT Encoder               │
│            (NbAiLab / nb-bert-base)             │
│    Contextual Hidden State Representations      │
└───────────────┬─────────────────┬───────────────┘
                │                 │
┌───────────────▼─────┐   ┌───────▼──────────────┐
│ CRF for BIO Tagging │   │    Polarity Head     │
│                     │   │    ([CLS] token)     │
│ B-Source  I-Source  │   │   ┌──────────────┐   │
│ B-Target  I-Target  │   │   │   Positive   │   │
│ B-Polar   I-Polar   │   │   │   Negative   │   │
│ O                   │   │   │   Neutral    │   │
└───────────────┬─────┘   │   └──────────────┘   │
                │         └───────┬──────────────┘
                │                 │
┌───────────────▼─────────────────▼───────────────┐
│         Structured Opinion Generation           │
│                                                 │
│    Merge CRF spans + polarity predictions       │
│    Apply offset mapping & span filtering        │
│                                                 │
│ Output: (Source, Target, Expression, Polarity)  │
└─────────────────────────────────────────────────┘
```
| Component | Description | Technology |
|---|---|---|
| Encoder | Contextual representation of input tokens | RoBERTa / BERT (multilingual) |
| BIO Tagger | Sequence labeling for span identification | Linear Projection |
| CRF Decoder | Structured decoding with label constraints | Conditional Random Field |
| Polarity Head | Classifies sentence-level sentiment | Multi-layer Classifier on [CLS] |
| Opinion Builder | Assembles final structured output tuples | Post-processing pipeline |
- SSA extends basic sentiment analysis by extracting four key components: Holder, Target, Expression, Polarity
- We use a pre-trained RoBERTa model (`NbAiLab/nb-bert-base`; [Liu et al., 2019]) as our encoder
- The encoder transforms raw input text into rich contextual embeddings for each token
- We benchmarked RoBERTa against `bert-base` and `bert-medium`; RoBERTa consistently outperforms both

```python
# Encoder produces hidden states for every token position
hidden_states = roberta_encoder(input_ids, attention_mask)
# Shape: [batch_size, seq_len, hidden_dim]
```

- A linear projection layer maps contextual embeddings to BIO tag logits
- We use the BIO (Beginning-Inside-Outside) tagging scheme to represent the boundaries of each opinion component
O        → Outside any opinion component
B-Source → Beginning of a Holder span
I-Source → Inside a Holder span
B-Target → Beginning of a Target span
I-Target → Inside a Target span
B-Polar  → Beginning of a Polar Expression span
I-Polar  → Inside a Polar Expression span
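As an illustration, a BIO tag sequence like the one above can be decoded into labeled spans with a single pass. This is a sketch of the idea, not the project's actual decoder (`bio_to_spans` is a hypothetical helper name):

```python
def bio_to_spans(tags):
    """Convert a BIO tag sequence into (label, start, end) spans.

    `end` is exclusive; tokens outside any span are tagged "O".
    """
    spans, start, label = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):                 # a new span begins
            if label is not None:
                spans.append((label, start, i))  # close the previous span
            start, label = i, tag[2:]
        elif tag.startswith("I-") and tag[2:] == label:
            continue                             # span continues
        else:                                    # "O" or an inconsistent I- tag
            if label is not None:
                spans.append((label, start, i))
            start, label = None, None
    if label is not None:                        # close a span ending at the sequence end
        spans.append((label, start, len(tags)))
    return spans

tags = ["B-Source", "O", "B-Polar", "I-Polar", "I-Polar", "B-Target", "I-Target"]
print(bio_to_spans(tags))
# [('Source', 0, 1), ('Polar', 2, 5), ('Target', 5, 7)]
```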
After obtaining token-level hidden states from BERT/RoBERTa, a CRF layer is applied to decode BIO tag sequences in a globally-consistent manner.
- The CRF learns transition weights between adjacent tags
- This enforces structural constraints (e.g., `I-Source` cannot follow `B-Target`)
- Enables span detection while explicitly modeling tag dependencies
Key Insight: CRF outperforms a plain softmax decoder because it captures inter-tag dependencies critical for multi-span structured extraction.
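The kind of constraint the CRF enforces can be made explicit. Below is a hedged sketch with hand-written rules; the actual CRF learns soft transition weights from data rather than applying hard rules like these:

```python
def is_valid_transition(prev_tag, tag):
    """BIO constraint: I-X may only follow B-X or I-X of the same type."""
    if tag.startswith("I-"):
        return prev_tag in (f"B-{tag[2:]}", f"I-{tag[2:]}")
    return True  # "O" and any "B-*" may follow anything

def is_valid_sequence(tags):
    # Prepend a virtual "O" so the first tag is checked too
    return all(is_valid_transition(p, t) for p, t in zip(["O"] + tags, tags))

print(is_valid_sequence(["B-Source", "I-Source", "O"]))  # True
print(is_valid_sequence(["B-Target", "I-Source"]))       # False: I-Source after B-Target
```

A plain softmax decoder scores each tag independently and can emit exactly these invalid sequences; the CRF's transition matrix makes them (near-)impossible.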
- A separate multi-layer classification head takes the `[CLS]` token representation as input
- Predicts one of three sentiment labels:
Sentiment Labels:
- Positive → Favorable / approving sentiment
- Negative → Critical / disapproving sentiment
- Neutral → Factual / non-opinionated expression
- The model struggles with ambiguous sentences like "The food was great, but the service was terrible.", where polarity is mixed at the span level: a key challenge that motivates future work on span-level polarity.
In the final stage, we combine CRF-predicted spans with polarity predictions to produce the complete structured opinion quadruple (holder, expression, target, polarity).
Post-processing pipeline:
- Convert span boundaries to character offsets using BERT's offset mapping with adjacent token merging
- Filter short spans (< 3 characters) to remove noise
- Assemble final output as JSON-formatted opinion tuples
⚠️ This filtering step discards a small fraction (a few percent) of true predictions.
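The post-processing steps above can be sketched as follows. This is illustrative only: `spans_to_offsets` and the toy offset mapping are assumptions, not the project's actual pipeline (in practice the offset mapping comes from the tokenizer):

```python
def spans_to_offsets(token_spans, offset_mapping, min_chars=3):
    """Map token-level spans to character offsets and drop short spans.

    token_spans: (label, tok_start, tok_end) with tok_end exclusive.
    offset_mapping: per-token (char_start, char_end) pairs, here a plain list.
    """
    results = []
    for label, ts, te in token_spans:
        char_start = offset_mapping[ts][0]
        char_end = offset_mapping[te - 1][1]   # merge adjacent tokens into one span
        if char_end - char_start < min_chars:  # filter noisy short spans
            continue
        results.append((label, f"{char_start}:{char_end}"))
    return results

# Toy example: "I love it" with tokens at character ranges (0,1), (2,6), (7,9)
offsets = [(0, 1), (2, 6), (7, 9)]
print(spans_to_offsets([("Source", 0, 1), ("Polar", 1, 2)], offsets))
# [('Polar', '2:6')]
```

Note how the one-character holder "I" is dropped by the length filter: exactly the kind of true prediction the warning above refers to.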
As a comparison point, we implement a Sequence Labeling + Relation Classification pipeline using BiLSTM models.
Step 1: Span Extraction
├── BiLSTM → Extract Holders
├── BiLSTM → Extract Targets
└── BiLSTM → Extract Polar Expressions

Step 2: Relation Prediction
└── BiLSTM + Max Pooling
    ├── Full text representation
    ├── Holder / Target representation
    ├── Expression representation
    └── Concatenate → Linear → Sigmoid
        → Binary: has_relation? (threshold = 0.5)

Step 3: Assemble tuples → (holder, target, expression, polarity)
The baseline uses GloVe / FastText embeddings and trains three separate BiLSTMs, one per annotation type, followed by a relation prediction model.
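Steps 2 and 3 of the baseline can be sketched in miniature. Everything here is a simplification under stated assumptions: `rel_score` stands in for the trained BiLSTM + sigmoid relation classifier, and the one-best pairing strategy is illustrative rather than the baseline's exact matching logic:

```python
def assemble_tuples(expressions, holders, targets, rel_score, threshold=0.5):
    """Pair each polar expression with the holder/target whose predicted
    relation probability clears the threshold (baseline Steps 2-3 sketch).

    rel_score: callable (expression, candidate) -> probability in [0, 1].
    """
    tuples = []
    for expr, polarity in expressions:
        holder = max(holders, key=lambda h: rel_score(expr, h), default=None)
        if holder is not None and rel_score(expr, holder) < threshold:
            holder = None                      # no holder clears the threshold
        target = max(targets, key=lambda t: rel_score(expr, t), default=None)
        if target is not None and rel_score(expr, target) < threshold:
            target = None
        tuples.append((holder, target, expr, polarity))
    return tuples

# Toy scorer: a fixed lookup table instead of a trained classifier
scores = {("would not recommend", "I"): 0.9,
          ("would not recommend", "this hotel"): 0.8}
score = lambda e, c: scores.get((e, c), 0.0)
print(assemble_tuples([("would not recommend", "negative")],
                      ["I"], ["this hotel"], score))
# [('I', 'this hotel', 'would not recommend', 'negative')]
```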
| Feature | Description |
|---|---|
| Multilingual Support | Works across Norwegian, English, Spanish, Catalan, and Basque |
| BIO Sequence Labeling | Precise span-level identification using structured tagging |
| CRF Decoding | Globally-consistent tag sequence prediction with structural constraints |
| Polarity Classification | 3-class (Positive / Negative / Neutral) sentiment head |
| Quadruple Extraction | Complete (holder, target, expression, polarity) output |
| Weighted F1 Evaluation | Partial-overlap scoring using token-level Jaccard intersection |
| Cross-lingual Transfer | Train on English, evaluate on low-resource target languages |
| Modular Architecture | Encoder, CRF, and classifier heads are independently configurable |
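The weighted F1 idea from the feature list can be illustrated in a few lines. This is a sketch of the token-level Jaccard scoring for single spans only; the official `evaluate.py` scores full opinion tuples, and `weighted_f1` here is a hypothetical helper:

```python
def jaccard(span_a, span_b):
    """Token-index Jaccard overlap between two spans given as sets."""
    if not span_a and not span_b:
        return 1.0
    return len(span_a & span_b) / len(span_a | span_b)

def weighted_f1(gold_spans, pred_spans):
    """Partial-overlap F1: each predicted span is credited with its best
    Jaccard overlap against a gold span, and vice versa for recall.
    """
    if not pred_spans or not gold_spans:
        return 0.0
    precision = sum(max(jaccard(p, g) for g in gold_spans)
                    for p in pred_spans) / len(pred_spans)
    recall = sum(max(jaccard(g, p) for p in pred_spans)
                 for g in gold_spans) / len(gold_spans)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [frozenset({3, 4, 5})]   # gold span covers tokens 3-5
pred = [frozenset({4, 5})]      # prediction overlaps 2 of 3 tokens
print(round(weighted_f1(gold, pred), 3))
# 0.667
```

An exact-match metric would score this prediction 0; partial-overlap credit rewards near-misses proportionally.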
| Dataset | Language | Domain |
|---|---|---|
| `norec` | Norwegian | Professional reviews (multi-domain) |
| `opener_en` | English | Hotel reviews |
| `opener_es` | Spanish | Hotel reviews |
| `multibooked_ca` | Catalan | Hotel reviews |
| `multibooked_eu` | Basque | Hotel reviews |
| `darmstadt_unis` | English | University reviews (online) |
| `mpqa` | English | News (opinion annotations) |
Trained on high-resource English data, evaluated on:
| Test Dataset | Language |
|---|---|
| `opener_es` | Spanish |
| `multibooked_ca` | Catalan |
| `multibooked_eu` | Basque |
Each dataset is in JSON format with the following schema:
```json
{
  "sent_id": "opener/en/hotel/english00164-6",
  "text": "Even though the price is decent for Paris, I would not recommend this hotel.",
  "opinions": [
    {
      "Source": [["I"], ["44:45"]],
      "Target": [["this hotel"], ["66:76"]],
      "Polar_expression": [["would not recommend"], ["46:65"]],
      "Polarity": "negative",
      "Intensity": "average"
    }
  ]
}
```

Average across all 7 datasets:
| Metric | Score |
|---|---|
| SF1 (Sentiment F1) | 0.46 |
| SP (Sentiment Precision) | 0.62 |
| SR (Sentiment Recall) | 0.65 |
Per-dataset Breakdown:
| Dataset | SF1 | SP | SR |
|---|---|---|---|
| Opener_en | 0.41 | 0.37 | 0.47 |
| Opener_es | 0.35 | 0.33 | 0.38 |
| NoReC | 0.23 | 0.30 | 0.18 |
| Multibooked_ca | 0.57 | 0.53 | 0.63 |
| Multibooked_eu | 0.53 | 0.40 | 0.71 |
| darmstadt_unis | 0.55 | 0.58 | 0.00 |
| MPQA | 0.52 | 0.55 | 0.00 |
Average across 3 target language datasets
| Metric | Score |
|---|---|
| SF1 (Sentiment F1) | 0.35 |
| SP (Sentiment Precision) | 0.85 |
| SR (Sentiment Recall) | 0.63 |
Per-dataset Breakdown:
| Dataset | SF1 | Precision | Recall |
|---|---|---|---|
| Opener_es | 0.000 | 0.000 | 0.000 |
| Multibooked_ca | 0.481 | 0.461 | 0.503 |
| Multibooked_eu | 0.671 | 0.618 | 0.733 |
- BERT significantly improves span extraction over the CRF baseline alone, especially for English and linguistically similar corpora
- `multibooked_eu` performs best in cross-lingual settings, likely due to its smaller dataset size and consistent hotel-review characteristics
- Complex/ambiguous expressions (different polarity in different contexts) remain a challenge across all datasets
- Character-level BERT representations outperform word-level representations as a comparison baseline
- Python ≥ 3.8
- PyTorch ≥ 1.9
- CUDA (optional, for GPU acceleration)
```shell
git clone https://github.com/your-username/Structured-Sentiment-Analysis-IIITD-NLP-PROJECT.git
cd Structured-Sentiment-Analysis-IIITD-NLP-PROJECT
```

Install dependencies:

```shell
pip install torch torchvision transformers
pip install nltk scikit-learn tqdm gensim
pip install -r baseline/sequence_labeling/requirements.txt
pip install -r data/requirements.txt
```

MPQA 2.0: download from the MPQA website, then run:

```shell
cd data/mpqa && bash process_mpqa.sh
```

Darmstadt Service Review Corpus: download from TU Darmstadt, then run:

```shell
cd data/darmstadt_unis && bash process_darmstadt.sh
```

Run the official evaluation script on model predictions:

```shell
python evaluate.py <input_dir> <output_dir>
```

Where:

- `<input_dir>/res/` contains your `predictions.json` per dataset
- `<input_dir>/ref/data/` contains the gold test files
```shell
# Train all BiLSTM baseline models across datasets
cd baseline/sequence_labeling
bash get_baselines.sh
```

```shell
# Run inference on a specific dataset and split
python baseline/sequence_labeling/inference.py \
    --DATADIR opener_en \
    --FILE dev.json
```

Output will be saved to:

```
baseline/sequence_labeling/saved_models/relation_prediction/<DATADIR>/prediction.json
```
Prediction files must match the gold data format. Each entry should look like:
```json
{
  "sent_id": "unique-sentence-id",
  "text": "Raw input sentence here.",
  "opinions": [
    {
      "Source": [["holder text"], ["start:end"]],
      "Target": [["target text"], ["start:end"]],
      "Polar_expression": [["expression text"], ["start:end"]],
      "Polarity": "positive"
    }
  ]
}
```

```
Structured-Sentiment-Analysis-IIITD-NLP-PROJECT/
│
├── evaluate.py                           → Official evaluation script (SF1 / SP / SR)
│
├── baseline/
│   └── sequence_labeling/
│       ├── extraction_module.py          → BiLSTM span extractor (Holder / Target / Expr)
│       ├── relation_prediction_module.py → BiLSTM relation classifier
│       ├── inference.py                  → End-to-end inference pipeline
│       ├── convert_to_bio.py             → Convert JSON data to BIO format
│       ├── convert_to_rels.py            → Convert predictions to relation pairs
│       ├── utils.py                      → Data loading & vocabulary utilities
│       ├── WordVecs.py                   → Pretrained word embedding loader
│       ├── get_baselines.sh              → Script to train all baseline models
│       └── requirements.txt
│
├── data/
│   ├── norec/                            → Norwegian multi-domain reviews
│   │   ├── train.json
│   │   ├── dev.json
│   │   └── test.json
│   ├── opener_en/                        → English hotel reviews
│   ├── opener_es/                        → Spanish hotel reviews
│   ├── multibooked_ca/                   → Catalan hotel reviews
│   ├── multibooked_eu/                   → Basque hotel reviews
│   ├── mpqa/                             → MPQA news corpus
│   ├── darmstadt_unis/                   → English university reviews
│   └── README.md                         → Data format documentation
│
└── predictions/
    └── norec/
        └── predictions.json              → Sample model predictions
```
We identify several promising directions to build upon this work:
In the future, we would likely move to a dependency graph parsing approach for structured sentiment, augmenting the token-level representations with their heads in a dependency tree (Kurtz et al., 2020). This allows richer relational reasoning between opinion components.
- Exploring multi-task learning across languages to better leverage shared structure in multilingual sentiment graphs
- Joint training on all monolingual datasets with language-specific adapters
- Point Network (Samuel & Straka, 2020): a strong graph parser for SSA
- PERIN: a permutation-invariant structured prediction framework
- Currently, polarity is predicted globally per sentence via the `[CLS]` token
- Moving to span-level polarity prediction would handle cases like "The food was great, but the service was terrible"
- Explore whether large pre-trained models (e.g., GPT-4, LLaMA) can directly predict structured opinion tuples via in-context learning or fine-tuning, without an explicit CRF layer
Barnes, J. et al. (2021). SemEval-2022 Task 10: Structured Sentiment Analysis.
Proceedings of the 16th Workshop on Semantic Evaluation.
Liu, Y. et al. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach.
arXiv:1907.11692.
Kurtz, et al. (2020). Improving Low-Resource NMT through Relevance Based Linguistic Features.
ACL 2020.
Samuel, D. & Straka, M. (2020). ÚFAL at MRP 2020: Permutation-Invariant Semantic Parsing.
CoNLL 2020 Shared Task.
Made with ❤️ at IIIT Delhi | NLP Course Project
For questions or contributions, please open an issue or pull request.