Using custom data but getting a runtime error. Please suggest #357
Comments
```python
import warnings

from pyabsa import DatasetItem
from pyabsa import ModelSaveOption, DeviceTypeOption

warnings.filterwarnings("ignore")

config.batch_size = 16
trainer = ATEPC.ATEPCTrainer(
```

Please suggest how to correct and debug this code.
```
[2023-10-08 12:23:53] (2.3.4) Set Model Device: cuda:0
Downloading pytorch_model.bin: 100% Done.
```
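(For reference: a self-contained sketch of what the complete script presumably looks like, assembled from the snippets in this thread and the traceback below; the config choice and the dataset name/path are assumptions.)

```python
import warnings

from pyabsa import AspectTermExtraction as ATEPC
from pyabsa import DatasetItem, ModelSaveOption, DeviceTypeOption

warnings.filterwarnings("ignore")

# Assumption: start from the stock English ATEPC config
config = ATEPC.ATEPCConfigManager.get_atepc_config_english()
config.batch_size = 16

# Assumption: a custom dataset named and laid out like the integrated ones
my_dataset = DatasetItem("my_custom_dataset", "path/to/my_custom_dataset")

trainer = ATEPC.ATEPCTrainer(
    config=config,
    dataset=my_dataset,
    from_checkpoint="english",          # resume from the pretrained English checkpoint
    auto_device=DeviceTypeOption.AUTO,  # use cuda if available
    checkpoint_save_mode=ModelSaveOption.SAVE_MODEL_STATE_DICT,  # save state dict only
    load_aug=False,                     # skip the augmentation datasets
)
```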
Try `pip install pyabsa==2.3.4rc0`.
Thank you so much for the quick response. I am not able to get "Fine-Tuning Training for Your Own Model" to work; I am stuck now. I am not able to debug the error even after `pip install pyabsa==2.3.4rc0`. Please suggest a script for fine-tuning training on my own model. Thank you.
Please suggest how to debug this.
I cannot quite understand your reply; does the error look like the previous one?
Yes, the error looks like the previous one.
Sorry, I cannot reproduce the error. Did you restart the kernel after the update?
My code script is here:

```python
import warnings

from pyabsa import ModelSaveOption, DeviceTypeOption

warnings.filterwarnings("ignore")

config.batch_size = 16
trainer = ATEPC.ATEPCTrainer(
```

```
[2023-10-09 09:22:34] (2.3.4rc0) Set Model Device: cuda:0
Downloading pytorch_model.bin: 100% Done.
```
You just forgot to annotate the test dataset.
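(For reference: pyabsa's ATEPC train and test files use a token-per-line format of token, IOB aspect tag, and polarity, with `-999` marking non-aspect tokens; the exact polarity encoding may differ, so verify against an integrated `.atepc` dataset file. A minimal sketch:)

```
The O -999
battery B-ASP Positive
life I-ASP Positive
is O -999
great O -999
```

The test split must be annotated in the same format as the train split.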
Thank you so much |
The full traceback:

```
RuntimeError Traceback (most recent call last)
Cell In[18], line 15
10 config.verbose = False # If verbose == True, PyABSA will output the model strcture and seversal processed data examples
11 config.notice = (
12 "This is an training example for aspect term extraction" # for memos usage
13 )
---> 15 trainer = ATEPC.ATEPCTrainer(
16 config=config,
17 dataset=my_dataset,
18 from_checkpoint="english", # if you want to resume training from our pretrained checkpoints, you can pass the checkpoint name here
19 auto_device=DeviceTypeOption.AUTO, # use cuda if available
20 checkpoint_save_mode=ModelSaveOption.SAVE_MODEL_STATE_DICT, # save state dict only instead of the whole model
21 load_aug=False, # there are some augmentation dataset for integrated datasets, you use them by setting load_aug=True to improve performance
22 )
File /opt/conda/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/trainer/atepc_trainer.py:69, in ATEPCTrainer.__init__(self, config, dataset, from_checkpoint, checkpoint_save_mode, auto_device, path_to_save, load_aug)
64 self.config.task_code = TaskCodeOption.Aspect_Term_Extraction_and_Classification
65 self.config.task_name = TaskNameOption().get(
66 TaskCodeOption.Aspect_Term_Extraction_and_Classification
67 )
---> 69 self._run()
File /opt/conda/lib/python3.10/site-packages/pyabsa/framework/trainer_class/trainer_template.py:241, in Trainer._run(self)
239 self.config.seed = s
240 if self.config.checkpoint_save_mode:
--> 241 model_path.append(self.training_instructor(self.config).run())
242 else:
243 # always return the last trained model if you don't save trained model
244 model = self.inference_model_class(
245 checkpoint=self.training_instructor(self.config).run()
246 )
File /opt/conda/lib/python3.10/site-packages/pyabsa/tasks/AspectTermExtraction/instructor/atepc_instructor.py:794, in ATEPCTrainingInstructor.run(self)
793 def run(self):
--> 794 return self._train(criterion=None)
File /opt/conda/lib/python3.10/site-packages/pyabsa/framework/instructor_class/instructor_template.py:357, in BaseTrainingInstructor._train(self, criterion)
354 pass
356 # Resume training from a previously trained model
--> 357 self._resume_from_checkpoint()
359 # Initialize the learning rate scheduler if warmup_step is specified
360 if self.config.warmup_step >= 0:
File /opt/conda/lib/python3.10/site-packages/pyabsa/framework/instructor_class/instructor_template.py:455, in BaseTrainingInstructor._resume_from_checkpoint(self)
451 self.model.module.load_state_dict(
452 torch.load(state_dict_path[0])
453 )
454 else:
--> 455 self.model.load_state_dict(
456 torch.load(
457 state_dict_path[0], map_location=self.config.device
458 )
459 )
460 self.model.config = self.config
461 self.model.to(self.config.device)
File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:2041, in Module.load_state_dict(self, state_dict, strict)
2036 error_msgs.insert(
2037 0, 'Missing key(s) in state_dict: {}. '.format(
2038 ', '.join('"{}"'.format(k) for k in missing_keys)))
2040 if len(error_msgs) > 0:
-> 2041 raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
2042 self.__class__.__name__, "\n\t".join(error_msgs)))
2043 return _IncompatibleKeys(missing_keys, unexpected_keys)
RuntimeError: Error(s) in loading state_dict for FAST_LCF_ATEPC:
Unexpected key(s) in state_dict: "bert4global.embeddings.position_ids".
```
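A likely cause for anyone hitting this: the `english` checkpoint was saved with an older transformers release that still stored the `position_ids` buffer, while newer transformers (around 4.31 and later) dropped it from BERT state dicts, so loading reports it as an unexpected key. A minimal workaround sketch; the helper name is hypothetical, and pyabsa itself performs the loading inside `_resume_from_checkpoint`:

```python
import torch

def load_checkpoint_dropping_position_ids(model, state_dict_path, device="cpu"):
    """Hypothetical helper: load a checkpoint after dropping stale
    `*.embeddings.position_ids` buffers that newer transformers
    versions no longer expect."""
    state_dict = torch.load(state_dict_path, map_location=device)
    for key in [k for k in state_dict if k.endswith("embeddings.position_ids")]:
        state_dict.pop(key)  # e.g. "bert4global.embeddings.position_ids"
    model.load_state_dict(state_dict, strict=False)  # tolerate other minor mismatches
    return model
```

Pinning an older transformers release (e.g. `transformers<4.31`) before training would presumably avoid the mismatch as well.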