
Getting error RuntimeError: unexpected EOF, expected 5253807 more bytes. The file might be corrupted #74

Closed
moh-yani opened this issue Dec 3, 2019 · 32 comments


moh-yani commented Dec 3, 2019

When I tried to run the minimal QuestionAnswering example, I got this error:

Traceback (most recent call last):
File "example.py", line 60, in <module>
model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 73, in __init__
self.model = model_class.from_pretrained(model_name)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/transformers/modeling_utils.py", line 395, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 581, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 5253807 more bytes. The file might be corrupted.

I am using an 8 GB GPU for this.

Does anyone know why this happens?

ThilinaRajapakse (Owner) commented Dec 3, 2019

context': "Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales,
            and the Bald Eagle Protection Act of 1940. These later laws had a low cost to society—the species were relatively rare—and little opposition was raised",

This string might be too long and may not be formatted properly when copy-pasted. Add a \ to the end of the line if the string breaks up over multiple lines.
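
For illustration, here is a minimal sketch of how a long context can be continued across lines with a trailing backslash (the variable name context is only for this example):

# Minimal sketch: a trailing backslash inside a string literal continues the
# string on the next line, so Python sees one single string. Note that any
# leading whitespace on the continuation line becomes part of the string,
# which shifts answer_start offsets.
context = "Other legislation followed, including the Migratory Bird Conservation Act of 1929,\
 a 1937 treaty prohibiting the hunting of right and gray whales, and the Bald Eagle Protection Act of 1940."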

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

Thank you for the response.

I tried shortening the context to:

    'context': "Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales",
    'qas': [
        {
            'id': "00002",
            'is_impossible': False,
            'question': "What was the cost to society?",
            'answers': [
                {
                    'text': "low cost",
                    'answer_start': 225
                }
            ]
        },
        ...
      EOF

However, the error is still the same:

Traceback (most recent call last):
File "example.py", line 61, in <module>
model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 73, in __init__
self.model = model_class.from_pretrained(model_name)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/transformers/modeling_utils.py", line 395, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 581, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 5253807 more bytes. The file might be corrupted.
*** Error in `python': corrupted double-linked list: 0x0000559ba89a9c70 ***
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(+0x777e5)[0x7f92059d67e5]
/lib/x86_64-linux-gnu/libc.so.6(+0x7e7c3)[0x7f92059dd7c3]
/lib/x86_64-linux-gnu/libc.so.6(+0x80678)[0x7f92059df678]
..
...
python(+0x1daf7d)[0x559ba5be7f7d]
======= Memory map: ========
559ba5a0d000-559ba5a68000 r--p 00000000 08:02 51450230 /home/yani/anaconda3/envs/simpletransformers/bin/python3.7
559ba5a68000-559ba5c44000 r-xp 0005b000 08:02 51450230 /home/yani/anaconda3/envs/simpletransformers/bin/python3.7
559ba5c44000-559ba5ceb000 r--p 00237000 08:02 51450230 /home/yani/anaconda3/envs/simpletransformers/bin/python3.7
559ba5cec000-559ba5cef000 r--p 002de000 08:02 51450230 /home/yani/anaconda3/envs/simpletransformers/bin/python3.7
559ba5cef000-559ba5d58000 rw-p 002e1000 08:02 51450230 /home/yani/anaconda3/envs/simpletransformers/bin/python3.7
...
...
7fcc68c31000-7fcc68c33000 rw-p 0003d000 08:02 51563288 /home/yani/anaconda3/envs/simpletransformers/lib/python3.7/lib-dynload/pyexpat.cpython-37m-x86_64-linux-gnu.so
7fcc68c33000-7fcc68d33000 rw-p 00000000 00:00 0
7fcc68d33000-7fcc68d46000 r-xp 00000000 08:02 58213213 /home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/sklearn/utils/murmurhash.cpython-37m-x86_64-linux-gnu.so
Aborted

Is there something wrong on my side?

@ThilinaRajapakse (Owner)
    'context': "Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales",
    'qas': [
        {
            'id': "00002",
            'is_impossible': False,
            'question': "What was the cost to society?",
            'answers': [
                {
                    'text': "low cost",
                    'answer_start': 225
                }
            ]
        },
        ...
      EOF

Is this the actual snippet? If so, the last two lines (the triple dots and the EOF) shouldn't be there!

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

No, that is not the actual snippet. Here is the actual snippet:

{
    'context': "Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales",
    'qas': [
        {
            'id': "00002",
            'is_impossible': False,
            'question': "What was the cost to society?",
            'answers': [
                {
                    'text': "low cost",
                    'answer_start': 225
                }
            ]
        },
        {
            'id': "00003",
            'is_impossible': False,
            'question': "What was the name of the 1937 treaty?",
            'answers': [
                {
                    'text': "Bald Eagle Protection Act",
                    'answer_start': 167
                }
            ]
        }
    ]
}

I have also tried using the real dataset dev-v1.1.json from SQuAD 1.1 for training. However, it results in the same error. What happened?

@ThilinaRajapakse (Owner)
Can you try one of the other examples, like the classification example? If you get a similar error, I think your PyTorch or Python installation is corrupted.

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

Even when I comment out all of the code except:

from simpletransformers.question_answering import QuestionAnsweringModel
import json
import os

# Create the QuestionAnsweringModel

model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

It still results in the same error. Any suggestions for this issue?

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

I have tried running the Minimal Start for Binary Classification, and it ran well.

FYI, I used the PyTorch build for cudatoolkit=9.0, installed with this command:

conda install pytorch torchvision cudatoolkit=9.0 -c pytorch

Do you know why this happens with the minimal QuestionAnswering example?

@ThilinaRajapakse (Owner)
I don't think it's a CUDA issue if classification is working and only QA is having issues, but it's hard to completely rule out. The error trace points to an issue in a torch file. Can you try reinstalling torch in a new environment? Use the latest version of torch from their website.

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

I have installed a new PyTorch version in a new conda environment using:

conda install pytorch cudatoolkit=10.1 -c pytorch

However, it results in a similar error:

Traceback (most recent call last):
File "example.py", line 60, in <module>
model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 73, in __init__
self.model = model_class.from_pretrained(model_name)
File "/home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 395, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "/home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/serialization.py", line 426, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/serialization.py", line 620, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 5253807 more bytes. The file might be corrupted.
terminate called after throwing an instance of 'c10::Error'
what(): owning_ptr == NullType::singleton() || owning_ptr->refcount_.load() > 0 INTERNAL ASSERT FAILED at /opt/conda/conda-bld/pytorch_1573049306803/work/c10/util/intrusive_ptr.h:348, please report a bug to PyTorch. intrusive_ptr: Can only intrusive_ptr::reclaim() owning pointers that were created using intrusive_ptr::release(). (reclaim at /opt/conda/conda-bld/pytorch_1573049306803/work/c10/util/intrusive_ptr.h:348)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x47 (0x7fed48d4c687 in /home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: + 0x143b27f (0x7fed4be2a27f in /home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #2: THStorage_free + 0x17 (0x7fed4c553127 in /home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/lib/libtorch.so)
frame #3: + 0x3ecd8d (0x7fed799a6d8d in /home/yani/anaconda3/envs/transformer3/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

frame #19: __libc_start_main + 0xf0 (0x7fed88b87830 in /lib/x86_64-linux-gnu/libc.so.6)

Aborted

Any other suggestions?

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

Could you tell me how big the distilbert-base-uncased-distilled-squad model is?

@ThilinaRajapakse (Owner)
You can find all the model details here.

Try it with another model. I think something went wrong with the model download.

model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

moh-yani (Author) commented Dec 3, 2019

@ThilinaRajapakse

I have tried using another model:

model = QuestionAnsweringModel('bert', 'bert-base-uncased', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

However, it still results in the same error.

@ThilinaRajapakse (Owner)
from transformers import DistilBertForSequenceClassification

DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', force_download=True)

Try that.

moh-yani (Author) commented Dec 3, 2019

Done. What's next?

@ThilinaRajapakse (Owner)
Did it download the model successfully? If so, try this now.

model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

moh-yani (Author) commented Dec 3, 2019

Done. I didn't see any error messages. That's okay, isn't it? If so, what's next?

By the way, where is the model downloaded by this script stored?

DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', force_download=True)

@ThilinaRajapakse (Owner)
If you don't see any error messages now, that means the issue was with the models not being downloaded properly.

The downloaded models should be in /home/<user>/.cache/torch

You'll have to run the script with force_download for any models that are throwing the error. Or you could probably just delete everything in the model cache given above so that everything will be downloaded from scratch.
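
If you'd rather clear the cache programmatically, a minimal sketch might look like this (the exact cache location can differ between torch/transformers versions, so treat the path below as an assumption):

import os
import shutil

# Remove the cached downloads so everything is fetched from scratch on the next
# from_pretrained() call. The path is the default cache location mentioned above;
# adjust it if your setup stores models elsewhere.
cache_dir = os.path.expanduser('~/.cache/torch')
if os.path.isdir(cache_dir):
    shutil.rmtree(cache_dir)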

moh-yani (Author) commented Dec 3, 2019

Okay. However, when I retried running the complete minimal question answering code, it still results in the same error. Should I modify any of the code?

moh-yani (Author) commented Dec 3, 2019

The error is shown below:

(simpletransformers) yani@riset-3x-1080-2:~/projects/latihan/bert/sources/simpletransformers$ python example_qa.py
Traceback (most recent call last):
File "example_qa.py", line 69, in <module>
model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 73, in __init__
self.model = model_class.from_pretrained(model_name)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/transformers/modeling_utils.py", line 395, in from_pretrained
state_dict = torch.load(resolved_archive_file, map_location='cpu')
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 387, in load
return _load(f, map_location, pickle_module, **pickle_load_args)
File "/home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/serialization.py", line 581, in _load
deserialized_objects[key]._set_from_file(f, offset, f_should_read_directly)
RuntimeError: unexpected EOF, expected 5253807 more bytes. The file might be corrupted.
terminate called after throwing an instance of 'c10::Error'
what(): owning_ptr == NullType::singleton() || owning_ptr->refcount_.load() > 0 ASSERT FAILED at /opt/conda/conda-bld/pytorch_1556653215914/work/c10/util/intrusive_ptr.h:350, please report a bug to PyTorch. intrusive_ptr: Can only intrusive_ptr::reclaim() owning pointers that were created using intrusive_ptr::release(). (reclaim at /opt/conda/conda-bld/pytorch_1556653215914/work/c10/util/intrusive_ptr.h:350)
frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x45 (0x7f80db879dc5 in /home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/lib/libc10.so)
frame #1: THStorage_free + 0xca (0x7f80dc5bd20a in /home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/lib/libcaffe2.so)
frame #2: + 0x14837d (0x7f810166237d in /home/yani/anaconda3/envs/simpletransformers/lib/python3.7/site-packages/torch/lib/libtorch_python.so)

frame #18: __libc_start_main + 0xf0 (0x7f8110988830 in /lib/x86_64-linux-gnu/libc.so.6)

Aborted

@ThilinaRajapakse (Owner)
Yeah, you should try with distilbert-base-uncased. Or, you can run the script below first.

from transformers import DistilBertForSequenceClassification

DistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased-distilled-squad', force_download=True)

moh-yani (Author) commented Dec 3, 2019

Okay. Now I got this error when I retried the step above and then ran the complete code:

Traceback (most recent call last):
File "example_qa.py", line 65, in
model.train_model('data/train.json')
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 227, in train_model
train_dataset = self.load_and_cache_examples(train_examples)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 144, in load_and_cache_examples
examples = get_examples(examples, is_training=not evaluate)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_utils.py", line 150, in get_examples
start_position = char_to_word_offset[answer_offset]
IndexError: list index out of range

@ThilinaRajapakse (Owner)
Try this. It's the same minimal example, except with the backslash added.

from simpletransformers.question_answering import QuestionAnsweringModel
import json
import os


# Create dummy data to use for training.
train_data = [
    {
        'context': "This is the first context",
        'qas': [
            {
                'id': "00001",
                'is_impossible': False,
                'question': "Which context is this?",
                'answers': [
                    {
                        'text': "the first",
                        'answer_start': 8
                    }
                ]
            }
        ]
    },
    {
        'context': "Other legislation followed, including the Migratory Bird Conservation Act of 1929, a 1937 treaty prohibiting the hunting of right and gray whales,\
            and the Bald Eagle Protection Act of 1940. These later laws had a low cost to society—the species were relatively rare—and little opposition was raised",
        'qas': [
            {
                'id': "00002",
                'is_impossible': False,
                'question': "What was the cost to society?",
                'answers': [
                    {
                        'text': "low cost",
                        'answer_start': 225
                    }
                ]
            },
            {
                'id': "00003",
                'is_impossible': False,
                'question': "What was the name of the 1937 treaty?",
                'answers': [
                    {
                        'text': "Bald Eagle Protection Act",
                        'answer_start': 167
                    }
                ]
            }
        ]
    }
]

# Save as a JSON file
os.makedirs('data', exist_ok=True)
with open('data/train.json', 'w') as f:
    json.dump(train_data, f)


# Create the QuestionAnsweringModel
model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

# Train the model with JSON file
model.train_model('data/train.json')

# The list can also be used directly
# model.train_model(train_data)

# Evaluate the model. (Being lazy and evaluating on the train data itself)
result, text = model.eval_model('data/train.json')

print(result)
print(text)

print('-------------------')

# Making predictions using the model.
to_predict = [{'context': 'This is the context used for demonstrating predictions.', 'qas': [{'question': 'What is this context?', 'id': '0'}]}]

print(model.predict(to_predict))

moh-yani (Author) commented Dec 3, 2019

Yeaaah.... It works. Thank you very much.

moh-yani closed this as completed Dec 3, 2019
moh-yani reopened this Dec 3, 2019
moh-yani (Author) commented Dec 3, 2019

Sorry, I have one more question. Is it possible to use a real dataset like train-v1.1.json from SQuAD 1.1? If yes, why does this error appear when I run it with the code above:

Traceback (most recent call last):
File "example_squad.py", line 16, in
model.train_model('/home/yani/projects/latihan/bert/data/squad1.1/train-v1.1.json')
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 227, in train_model
train_dataset = self.load_and_cache_examples(train_examples)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 144, in load_and_cache_examples
examples = get_examples(examples, is_training=not evaluate)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_utils.py", line 108, in get_examples
raise TypeError("Input should be a list of examples.")
TypeError: Input should be a list of examples.

moh-yani (Author) commented Dec 3, 2019

Okay. I tried this code:

from simpletransformers.question_answering import QuestionAnsweringModel
import json
import os

with open('/home/yani/projects/latihan/bert/data/squad1.1/train-v1.1.json', 'r') as f:
    train_data = json.load(f)

train_data = [item for topic in train_data['data'] for item in topic['paragraphs']]

os.makedirs('data', exist_ok=True)
with open('data/train.json', 'w') as f:
    json.dump(train_data, f)

model = QuestionAnsweringModel('distilbert', 'distilbert-base-uncased-distilled-squad', args={'reprocess_input_data': True, 'overwrite_output_dir': True})

model.train_model('data/train.json')
result, text = model.eval_model('data/train.json')

print(result)
print(text)

print('-------------------')

to_predict = [{'context': 'If bidirectionality is so powerful, why hasn’t it been done before? To understand why, consider that unidirectional models are efficiently trained by predicting each word conditioned on the previous words in the sentence', 'qas': [{'question': 'What are bidirectional usage of BERT?', 'id': '0'}]}]

print(model.predict(to_predict))

And it results in this:

Traceback (most recent call last):
File "squad.py", line 20, in
model.train_model('data/train.json')
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 227, in train_model
train_dataset = self.load_and_cache_examples(train_examples)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 144, in load_and_cache_examples
examples = get_examples(examples, is_training=not evaluate)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_utils.py", line 141, in get_examples
is_impossible = qa["is_impossible"]

What happened with is_impossible = qa["is_impossible"]?

@ThilinaRajapakse (Owner)
SQuAD 2.0 has an additional attribute, is_impossible, that indicates whether or not it is possible to answer the question from the given context. You'll need to set it to False for every question if you are using SQuAD 1.1, since all questions in SQuAD 1.1 are answerable.
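
For example, if you convert SQuAD 1.1 the same way as in your script above, you could add the flag while building the list. This is just a sketch and assumes train_data holds the parsed train-v1.1.json:

# Flatten SQuAD 1.1 into a list of paragraphs and add the missing is_impossible flag.
train_data = [item for topic in train_data['data'] for item in topic['paragraphs']]

for paragraph in train_data:
    for qa in paragraph['qas']:
        qa['is_impossible'] = False  # every SQuAD 1.1 question is answerable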

moh-yani (Author) commented Dec 4, 2019

I have changed the dataset to SQuAD 2.0. It results in:

Traceback (most recent call last):
File "squad2_0.py", line 41, in
result, text = model.eval_model('data/train.json')
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 388, in eval_model
result, texts = self.calculate_results(truth, all_predictions)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 583, in calculate_results
truth_dict[answer['id']] = answer['answers'][0]['text']
IndexError: list index out of range

What should I modify?

moh-yani (Author) commented Dec 4, 2019

It happens when running:

result, text = model.eval_model('data/dev.json')

moh-yani (Author) commented Dec 4, 2019

I actually have not understood what you explain in https://towardsdatascience.com/question-answering-with-bert-xlnet-xlm-and-distilbert-using-simple-transformers-4d8785ee762a, especially this part:

"
Evaluation
The correct answers for the dev data are not provided in the SQuAD dataset but we can upload our predictions to the SQuAD website for evaluation. Alternatively, you could split the train data into training and validation datasets and use the model.eval_model() method to validate the model locally.
"

Does it mean SQuAD 2.0 could be evaluated by manually splitting train-v2.0.json into two sets (train and dev data)? If so, why does evaluating with the same file as the train data (train-v2.0.json) result in the error above?

I hope I can get a response about this.

Thank you.

moh-yani closed this as completed Dec 4, 2019
ThilinaRajapakse (Owner) commented Dec 4, 2019

Yes, that is what it means. However, the guide also says that SQuAD data needs to be converted into a format that is compatible with Simple Transformers.

I see you closed the issue, so I hope you fixed it. If not, try the following.

with open('data/train-v2.0.json', 'r') as f:
    eval_data = json.load(f)

eval_data = [item for topic in eval_data['data'] for item in topic['paragraphs'] ]

model.eval_model(eval_data)

moh-yani (Author) commented Dec 4, 2019

Thank you for the response.

I have tried the code above, but it still results in this error:

Traceback (most recent call last):
File "squad2_0_method_eval.py", line 28, in
model.eval_model(eval_data)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 388, in eval_model
result, texts = self.calculate_results(truth, all_predictions)
File "/home/yani/projects/latihan/bert/sources/simpletransformers/simpletransformers/question_answering/question_answering_model.py", line 583, in calculate_results
truth_dict[answer['id']] = answer['answers'][0]['text']
IndexError: list index out of range
