Description
This experiment compares the performance of the base CryptoBERT model and the fine-tuned CryptoBERT model (fine-tuned on the triple barrier labels) in a cross-validation setup, on a collection of 70,000 tweets drawn from a shuffled dataset spanning 2015 to 2021, with 10,000 tweets sampled from each year. The intention behind this experiment is to average the metric comparisons between the base and fine-tuned models across time.
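A minimal sketch of the per-year sampling and cross-validated comparison is given below. It assumes the tweets sit in a CSV with `text`, `label`, and `year` columns (each year containing at least 10,000 rows), that the base checkpoint is the public `ElKulako/cryptobert` model on the Hugging Face Hub, and that the fine-tuned model is available as a local checkpoint; the file and checkpoint paths are placeholders, not the experiment's actual artifacts, and both models are evaluated frozen (no per-fold training).

```python
# Sketch only: sample 10,000 tweets per year, then compare the base and
# fine-tuned CryptoBERT checkpoints over stratified folds.
import pandas as pd
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, f1_score
from transformers import pipeline

BASE_MODEL = "ElKulako/cryptobert"
FINETUNED_MODEL = "./cryptobert-triple-barrier"  # hypothetical local checkpoint

tweets = pd.read_csv("tweets_2015_2021.csv")  # assumed columns: text, label, year
sampled = (
    tweets.groupby("year", group_keys=False)
    .apply(lambda g: g.sample(n=10_000, random_state=42))
    .sample(frac=1.0, random_state=42)  # shuffle the combined 70,000-tweet sample
    .reset_index(drop=True)
)

def evaluate(model_name, texts, labels, n_splits=5):
    """Average accuracy / macro-F1 of a frozen classifier over k folds."""
    clf = pipeline("text-classification", model=model_name)
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=42)
    accs, f1s = [], []
    for _, test_idx in skf.split(texts, labels):
        fold_texts = [texts[i] for i in test_idx]
        fold_labels = [labels[i] for i in test_idx]
        # Assumes the pipeline's output label strings match the gold labels.
        preds = [p["label"] for p in clf(fold_texts, batch_size=32, truncation=True)]
        accs.append(accuracy_score(fold_labels, preds))
        f1s.append(f1_score(fold_labels, preds, average="macro"))
    return sum(accs) / len(accs), sum(f1s) / len(f1s)

texts = sampled["text"].tolist()
labels = sampled["label"].tolist()
for name in (BASE_MODEL, FINETUNED_MODEL):
    acc, f1 = evaluate(name, texts, labels)
    print(f"{name}: accuracy={acc:.3f}, macro-F1={f1:.3f}")
```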
Dataset
2020 daily tweets from the dataset introduced by Zou, containing 64,000 combined tweets
2020 daily candles
triple barrier labels (see the labeling sketch below)
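The exact labeling parameters are not specified here; the following is a minimal sketch of the standard triple-barrier method applied to a daily close-price series, with symmetric ±2% profit-take/stop-loss barriers and a 5-day vertical barrier. The thresholds, column names, and file name are illustrative assumptions, not the experiment's actual settings.

```python
# Sketch of triple-barrier labeling on daily candles (illustrative parameters).
import pandas as pd

def triple_barrier_labels(close: pd.Series, upper=0.02, lower=0.02, horizon=5) -> pd.Series:
    """Label each day +1 / -1 / 0 by which barrier the price hits first.

    +1: return reaches +`upper` within `horizon` days before reaching -`lower`
    -1: return reaches -`lower` first
     0: neither horizontal barrier is hit before the vertical (time) barrier
    """
    labels = pd.Series(0, index=close.index, dtype=int)
    for i in range(len(close) - 1):
        entry = close.iloc[i]
        window = close.iloc[i + 1 : i + 1 + horizon]
        returns = window / entry - 1.0
        hit_up = returns[returns >= upper].index.min()     # first upper-barrier touch
        hit_down = returns[returns <= -lower].index.min()  # first lower-barrier touch
        if pd.notna(hit_up) and (pd.isna(hit_down) or hit_up <= hit_down):
            labels.iloc[i] = 1
        elif pd.notna(hit_down):
            labels.iloc[i] = -1
    return labels

# Example usage with a hypothetical candles file (date, open, high, low, close).
candles = pd.read_csv("btc_daily_2020.csv", parse_dates=["date"], index_col="date")
candles["label"] = triple_barrier_labels(candles["close"])
```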
Model
Metrics (computed as sketched below)
accuracy
precision
recall
f1 measure
confusion matrix
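A minimal sketch of computing these metrics with scikit-learn, assuming `y_true` and `y_pred` hold the gold and predicted class labels for one model; the three-class {-1, 0, 1} label set and the toy values are assumptions for illustration only.

```python
# Sketch: compute the listed metrics for one model's predictions.
from sklearn.metrics import (
    accuracy_score,
    precision_recall_fscore_support,
    confusion_matrix,
)

# Hypothetical gold labels and predictions (e.g. the triple-barrier classes).
y_true = [1, 0, -1, 1, 0, -1, 1, 1]
y_pred = [1, 0, -1, 0, 0, -1, 1, -1]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="macro", zero_division=0
)
cm = confusion_matrix(y_true, y_pred, labels=[-1, 0, 1])

print(f"accuracy:  {accuracy:.3f}")
print(f"precision: {precision:.3f}")
print(f"recall:    {recall:.3f}")
print(f"F1:        {f1:.3f}")
print("confusion matrix (rows = true, cols = predicted, order [-1, 0, 1]):")
print(cm)
```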