
AttributeError: 'Tensor' object has no attribute 'isnan' #9

Closed · opened by @ncepu-liudong

Description

Hello,
I wanted to try the code by following the README.md, but I ran into an AttributeError. This is my nohup_0.log:

2021-04-23 09:40:30,348 - mmdet - INFO - Environment info:

sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
CUDA_HOME: /usr/local/cuda-10.0
NVCC: Cuda compilation tools, release 10.0, V10.0.130
GPU 0: GeForce GTX 1080 Ti
GCC: gcc (Ubuntu 5.3.1-14ubuntu2) 5.3.1 20160413
PyTorch: 1.4.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v0.21.1 (Git Hash 7d2fd500bc78936d1d648ca713b901012f470dbc)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CUDA Runtime 10.0
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
  • CuDNN 7.6.3
  • Magma 2.5.1
  • Build settings: BLAS=MKL, BUILD_NAMEDTENSOR=OFF, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -fopenmp -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -O2 -fPIC -Wno-narrowing -Wall -Wextra -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Wno-stringop-overflow, DISABLE_NUMA=1, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_STATIC_DISPATCH=OFF,

TorchVision: 0.5.0
OpenCV: 4.5.1
MMCV: 1.0.5
MMDetection: 2.3.0+b6976f3
MMDetection Compiler: GCC 5.3
MMDetection CUDA Compiler: 10.0

2021-04-23 09:40:30,349 - mmdet - INFO - Distributed training: True
2021-04-23 09:40:30,549 - mmdet - INFO - Config:
model = dict(
    type='RetinaNet',
    pretrained='torchvision://resnet50',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=True,
        style='pytorch'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs='on_input',
        num_outs=5),
    bbox_head=dict(
        type='MIAODRetinaHead',
        C=20,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        anchor_generator=dict(
            type='AnchorGenerator',
            octave_base_scale=4,
            scales_per_octave=3,
            ratios=[0.5, 1.0, 2.0],
            strides=[8, 16, 32, 64, 128]),
        bbox_coder=dict(
            type='DeltaXYWHBBoxCoder',
            target_means=[0.0, 0.0, 0.0, 0.0],
            target_stds=[1.0, 1.0, 1.0, 1.0]),
        FL=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        SmoothL1=dict(type='L1Loss', loss_weight=1.0)))
train_cfg = dict(
    assigner=dict(
        type='MaxIoUAssigner',
        pos_iou_thr=0.5,
        neg_iou_thr=0.4,
        min_pos_iou=0,
        ignore_iof_thr=-1),
    allowed_border=-1,
    pos_weight=-1,
    debug=False,
    param_lambda=0.5)
test_cfg = dict(
    nms_pre=1000,
    min_bbox_size=0,
    score_thr=0.05,
    nms=dict(type='nms', iou_threshold=0.5),
    max_per_img=100)
data_root = '/data/database/VOCdevkit/'
dataset_type = 'VOCDataset'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(
        type='Normalize',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1000, 600),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(
                type='Normalize',
                mean=[123.675, 116.28, 103.53],
                std=[58.395, 57.12, 57.375],
                to_rgb=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img'])
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=2,
    train=dict(
        type='RepeatDataset',
        times=3,
        dataset=dict(
            type='VOCDataset',
            ann_file=[
                '/data/database/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt',
                '/data/database/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt'
            ],
            img_prefix=[
                '/data/database/VOCdevkit/VOC2007/',
                '/data/database/VOCdevkit/VOC2012/'
            ],
            pipeline=[
                dict(type='LoadImageFromFile'),
                dict(type='LoadAnnotations', with_bbox=True),
                dict(type='Resize', img_scale=(1000, 600), keep_ratio=True),
                dict(type='RandomFlip', flip_ratio=0.5),
                dict(
                    type='Normalize',
                    mean=[123.675, 116.28, 103.53],
                    std=[58.395, 57.12, 57.375],
                    to_rgb=True),
                dict(type='Pad', size_divisor=32),
                dict(type='DefaultFormatBundle'),
                dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels'])
            ])),
    val=dict(
        type='VOCDataset',
        ann_file='/data/database/VOCdevkit/VOC2007/ImageSets/Main/test.txt',
        img_prefix='/data/database/VOCdevkit/VOC2007/',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1000, 600),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]),
    test=dict(
        type='VOCDataset',
        ann_file=[
            '/data/database/VOCdevkit/VOC2007/ImageSets/Main/trainval.txt',
            '/data/database/VOCdevkit/VOC2012/ImageSets/Main/trainval.txt'
        ],
        img_prefix=[
            '/data/database/VOCdevkit/VOC2007/',
            '/data/database/VOCdevkit/VOC2012/'
        ],
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(
                type='MultiScaleFlipAug',
                img_scale=(1000, 600),
                flip=False,
                transforms=[
                    dict(type='Resize', keep_ratio=True),
                    dict(type='RandomFlip'),
                    dict(
                        type='Normalize',
                        mean=[123.675, 116.28, 103.53],
                        std=[58.395, 57.12, 57.375],
                        to_rgb=True),
                    dict(type='Pad', size_divisor=32),
                    dict(type='ImageToTensor', keys=['img']),
                    dict(type='Collect', keys=['img'])
                ])
        ]))
evaluation = dict(interval=3, metric='mAP')
checkpoint_config = dict(interval=1)
log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')])
dist_params = dict(backend='nccl')
log_level = 'INFO'
load_from = None
resume_from = None
workflow = [('train', 1)]
optimizer = dict(type='SGD', lr=0.001, momentum=0.9, weight_decay=0.0001)
optimizer_config = dict(grad_clip=None)
lr_config = dict(policy='step', step=[2])
epoch_ratio = [3, 1]
epoch = 2
X_L_repeat = 2
X_U_repeat = 2
k = 10000
X_S_size = 413
X_L_0_size = 827
cycles = [0, 1, 2, 3, 4, 5, 6]
work_directory = './work_dirs/MI-AOD'
gpu_ids = range(0, 1)

2021-04-23 09:40:30,549 - mmdet - INFO - Set random seed to 666, deterministic: False
2021-04-23 09:40:30,695 - mmdet - INFO - Set random seed to 666, deterministic: False
2021-04-23 09:40:31,433 - mmdet - INFO - load model from: torchvision://resnet50
2021-04-23 09:40:31,718 - mmdet - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias

2021-04-23 09:40:49,383 - mmdet - INFO - Start running, host: dreamtech@Dreamtech-Ubuntu, work_directory: /data/liudong/MI-AOD/work_dirs/MI-AOD/20210423_094030
2021-04-23 09:40:49,383 - mmdet - INFO - workflow: [('train', 1)], max: 3 epochs
2021-04-23 09:41:10,869 - mmdet - INFO - Epoch [1][50/827] lr: 1.000e-03, eta: 0:17:23, time: 0.429, data_time: 0.125, memory: 2472, l_det_cls: 1.1565, l_det_loc: 0.6727, l_imgcls: 0.2681, L_det: 2.0973
2021-04-23 09:41:24,112 - mmdet - INFO - Epoch [1][100/827] lr: 1.000e-03, eta: 0:13:46, time: 0.265, data_time: 0.003, memory: 2472, l_det_cls: 1.1592, l_det_loc: 0.6610, l_imgcls: 0.2422, L_det: 2.0624
2021-04-23 09:41:37,323 - mmdet - INFO - Epoch [1][150/827] lr: 1.000e-03, eta: 0:12:24, time: 0.264, data_time: 0.002, memory: 2472, l_det_cls: 1.1534, l_det_loc: 0.6440, l_imgcls: 0.2421, L_det: 2.0395
2021-04-23 09:41:50,599 - mmdet - INFO - Epoch [1][200/827] lr: 1.000e-03, eta: 0:11:38, time: 0.266, data_time: 0.002, memory: 2472, l_det_cls: 1.1565, l_det_loc: 0.6424, l_imgcls: 0.2262, L_det: 2.0251
2021-04-23 09:42:03,864 - mmdet - INFO - Epoch [1][250/827] lr: 1.000e-03, eta: 0:11:04, time: 0.265, data_time: 0.002, memory: 2472, l_det_cls: 1.1567, l_det_loc: 0.6499, l_imgcls: 0.2463, L_det: 2.0529
2021-04-23 09:42:17,229 - mmdet - INFO - Epoch [1][300/827] lr: 1.000e-03, eta: 0:10:38, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 1.1537, l_det_loc: 0.6276, l_imgcls: 0.2298, L_det: 2.0111
2021-04-23 09:42:30,514 - mmdet - INFO - Epoch [1][350/827] lr: 1.000e-03, eta: 0:10:15, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 1.1505, l_det_loc: 0.6406, l_imgcls: 0.2471, L_det: 2.0382
2021-04-23 09:42:43,814 - mmdet - INFO - Epoch [1][400/827] lr: 1.000e-03, eta: 0:09:55, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 1.1510, l_det_loc: 0.6212, l_imgcls: 0.2421, L_det: 2.0143
2021-04-23 09:42:57,238 - mmdet - INFO - Epoch [1][450/827] lr: 1.000e-03, eta: 0:09:37, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 1.1472, l_det_loc: 0.6159, l_imgcls: 0.2403, L_det: 2.0034
2021-04-23 09:43:10,702 - mmdet - INFO - Epoch [1][500/827] lr: 1.000e-03, eta: 0:09:19, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 1.1101, l_det_loc: 0.5917, l_imgcls: 0.2279, L_det: 1.9297
2021-04-23 09:43:24,124 - mmdet - INFO - Epoch [1][550/827] lr: 1.000e-03, eta: 0:09:03, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 1.0495, l_det_loc: 0.5960, l_imgcls: 0.2370, L_det: 1.8824
2021-04-23 09:43:37,522 - mmdet - INFO - Epoch [1][600/827] lr: 1.000e-03, eta: 0:08:47, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.9738, l_det_loc: 0.5769, l_imgcls: 0.2151, L_det: 1.7659
2021-04-23 09:43:50,945 - mmdet - INFO - Epoch [1][650/827] lr: 1.000e-03, eta: 0:08:31, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.9126, l_det_loc: 0.5775, l_imgcls: 0.2348, L_det: 1.7250
2021-04-23 09:44:04,240 - mmdet - INFO - Epoch [1][700/827] lr: 1.000e-03, eta: 0:08:15, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.8241, l_det_loc: 0.5727, l_imgcls: 0.2263, L_det: 1.6231
2021-04-23 09:44:17,694 - mmdet - INFO - Epoch [1][750/827] lr: 1.000e-03, eta: 0:08:00, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 0.9404, l_det_loc: 0.5767, l_imgcls: 0.2281, L_det: 1.7452
2021-04-23 09:44:31,113 - mmdet - INFO - Epoch [1][800/827] lr: 1.000e-03, eta: 0:07:45, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.7694, l_det_loc: 0.5403, l_imgcls: 0.2050, L_det: 1.5147
2021-04-23 09:44:39,094 - mmdet - INFO - Saving checkpoint at 1 epochs
2021-04-23 09:44:58,374 - mmdet - INFO - Epoch [2][50/827] lr: 1.000e-03, eta: 0:07:20, time: 0.381, data_time: 0.115, memory: 2472, l_det_cls: 0.7876, l_det_loc: 0.5425, l_imgcls: 0.2281, L_det: 1.5582
2021-04-23 09:45:11,835 - mmdet - INFO - Epoch [2][100/827] lr: 1.000e-03, eta: 0:07:06, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 0.7754, l_det_loc: 0.5320, l_imgcls: 0.2188, L_det: 1.5262
2021-04-23 09:45:25,274 - mmdet - INFO - Epoch [2][150/827] lr: 1.000e-03, eta: 0:06:52, time: 0.269, data_time: 0.002, memory: 2472, l_det_cls: 0.7756, l_det_loc: 0.5378, l_imgcls: 0.2221, L_det: 1.5355
2021-04-23 09:45:38,635 - mmdet - INFO - Epoch [2][200/827] lr: 1.000e-03, eta: 0:06:37, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.7371, l_det_loc: 0.5270, l_imgcls: 0.2007, L_det: 1.4648
2021-04-23 09:45:51,883 - mmdet - INFO - Epoch [2][250/827] lr: 1.000e-03, eta: 0:06:23, time: 0.265, data_time: 0.003, memory: 2472, l_det_cls: 0.8152, l_det_loc: 0.5017, l_imgcls: 0.2200, L_det: 1.5369
2021-04-23 09:46:05,192 - mmdet - INFO - Epoch [2][300/827] lr: 1.000e-03, eta: 0:06:09, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.8940, l_det_loc: 0.5325, l_imgcls: 0.2306, L_det: 1.6571
2021-04-23 09:46:18,661 - mmdet - INFO - Epoch [2][350/827] lr: 1.000e-03, eta: 0:05:55, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 0.7503, l_det_loc: 0.5077, l_imgcls: 0.2171, L_det: 1.4751
2021-04-23 09:46:31,928 - mmdet - INFO - Epoch [2][400/827] lr: 1.000e-03, eta: 0:05:41, time: 0.265, data_time: 0.003, memory: 2472, l_det_cls: 0.7683, l_det_loc: 0.5151, l_imgcls: 0.2229, L_det: 1.5063
2021-04-23 09:46:45,251 - mmdet - INFO - Epoch [2][450/827] lr: 1.000e-03, eta: 0:05:27, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.7067, l_det_loc: 0.4886, l_imgcls: 0.2108, L_det: 1.4061
2021-04-23 09:46:58,539 - mmdet - INFO - Epoch [2][500/827] lr: 1.000e-03, eta: 0:05:13, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.8194, l_det_loc: 0.4934, l_imgcls: 0.2091, L_det: 1.5219
2021-04-23 09:47:11,879 - mmdet - INFO - Epoch [2][550/827] lr: 1.000e-03, eta: 0:05:00, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.7372, l_det_loc: 0.4922, l_imgcls: 0.2097, L_det: 1.4391
2021-04-23 09:47:25,295 - mmdet - INFO - Epoch [2][600/827] lr: 1.000e-03, eta: 0:04:46, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.7582, l_det_loc: 0.4991, l_imgcls: 0.2144, L_det: 1.4717
2021-04-23 09:47:38,733 - mmdet - INFO - Epoch [2][650/827] lr: 1.000e-03, eta: 0:04:32, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 0.6970, l_det_loc: 0.4771, l_imgcls: 0.2037, L_det: 1.3777
2021-04-23 09:47:52,157 - mmdet - INFO - Epoch [2][700/827] lr: 1.000e-03, eta: 0:04:18, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.6835, l_det_loc: 0.4964, l_imgcls: 0.2033, L_det: 1.3831
2021-04-23 09:48:05,558 - mmdet - INFO - Epoch [2][750/827] lr: 1.000e-03, eta: 0:04:05, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.7212, l_det_loc: 0.4887, l_imgcls: 0.2004, L_det: 1.4103
2021-04-23 09:48:18,852 - mmdet - INFO - Epoch [2][800/827] lr: 1.000e-03, eta: 0:03:51, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.6734, l_det_loc: 0.4589, l_imgcls: 0.1959, L_det: 1.3282
2021-04-23 09:48:26,143 - mmdet - INFO - Saving checkpoint at 2 epochs
2021-04-23 09:48:45,242 - mmdet - INFO - Epoch [3][50/827] lr: 1.000e-03, eta: 0:03:29, time: 0.378, data_time: 0.114, memory: 2472, l_det_cls: 0.6987, l_det_loc: 0.4601, l_imgcls: 0.1850, L_det: 1.3438
2021-04-23 09:48:58,675 - mmdet - INFO - Epoch [3][100/827] lr: 1.000e-03, eta: 0:03:16, time: 0.269, data_time: 0.003, memory: 2472, l_det_cls: 0.6915, l_det_loc: 0.4607, l_imgcls: 0.1937, L_det: 1.3460
2021-04-23 09:49:11,901 - mmdet - INFO - Epoch [3][150/827] lr: 1.000e-03, eta: 0:03:02, time: 0.264, data_time: 0.003, memory: 2472, l_det_cls: 1.1146, l_det_loc: 0.5606, l_imgcls: 0.2404, L_det: 1.9156
2021-04-23 09:49:25,052 - mmdet - INFO - Epoch [3][200/827] lr: 1.000e-03, eta: 0:02:49, time: 0.263, data_time: 0.003, memory: 2472, l_det_cls: 1.1014, l_det_loc: 0.5852, l_imgcls: 0.2500, L_det: 1.9365
2021-04-23 09:49:38,401 - mmdet - INFO - Epoch [3][250/827] lr: 1.000e-03, eta: 0:02:35, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.9846, l_det_loc: 0.5921, l_imgcls: 0.2359, L_det: 1.8126
2021-04-23 09:49:51,723 - mmdet - INFO - Epoch [3][300/827] lr: 1.000e-03, eta: 0:02:22, time: 0.266, data_time: 0.003, memory: 2472, l_det_cls: 0.8661, l_det_loc: 0.5424, l_imgcls: 0.2485, L_det: 1.6571
2021-04-23 09:50:05,118 - mmdet - INFO - Epoch [3][350/827] lr: 1.000e-03, eta: 0:02:08, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.8491, l_det_loc: 0.5686, l_imgcls: 0.2555, L_det: 1.6733
2021-04-23 09:50:18,524 - mmdet - INFO - Epoch [3][400/827] lr: 1.000e-03, eta: 0:01:55, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.9489, l_det_loc: 0.5600, l_imgcls: 0.2339, L_det: 1.7427
2021-04-23 09:50:31,776 - mmdet - INFO - Epoch [3][450/827] lr: 1.000e-03, eta: 0:01:41, time: 0.265, data_time: 0.003, memory: 2472, l_det_cls: 0.8918, l_det_loc: 0.5770, l_imgcls: 0.2355, L_det: 1.7042
2021-04-23 09:50:45,165 - mmdet - INFO - Epoch [3][500/827] lr: 1.000e-03, eta: 0:01:28, time: 0.268, data_time: 0.003, memory: 2472, l_det_cls: 0.8749, l_det_loc: 0.5689, l_imgcls: 0.2469, L_det: 1.6907
2021-04-23 09:50:58,391 - mmdet - INFO - Epoch [3][550/827] lr: 1.000e-03, eta: 0:01:14, time: 0.265, data_time: 0.002, memory: 2472, l_det_cls: 0.8550, l_det_loc: 0.5952, l_imgcls: 0.2485, L_det: 1.6987
2021-04-23 09:51:11,735 - mmdet - INFO - Epoch [3][600/827] lr: 1.000e-03, eta: 0:01:01, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.8344, l_det_loc: 0.5818, l_imgcls: 0.2470, L_det: 1.6631
2021-04-23 09:51:25,076 - mmdet - INFO - Epoch [3][650/827] lr: 1.000e-03, eta: 0:00:47, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.8499, l_det_loc: 0.5668, l_imgcls: 0.2123, L_det: 1.6290
2021-04-23 09:51:38,406 - mmdet - INFO - Epoch [3][700/827] lr: 1.000e-03, eta: 0:00:34, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.8395, l_det_loc: 0.5657, l_imgcls: 0.2429, L_det: 1.6481
2021-04-23 09:51:51,771 - mmdet - INFO - Epoch [3][750/827] lr: 1.000e-03, eta: 0:00:20, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.8112, l_det_loc: 0.5347, l_imgcls: 0.2298, L_det: 1.5758
2021-04-23 09:52:05,099 - mmdet - INFO - Epoch [3][800/827] lr: 1.000e-03, eta: 0:00:07, time: 0.267, data_time: 0.003, memory: 2472, l_det_cls: 0.8015, l_det_loc: 0.5313, l_imgcls: 0.2344, L_det: 1.5672
2021-04-23 09:52:12,446 - mmdet - INFO - Saving checkpoint at 3 epochs
2021-04-23 09:52:16,575 - mmdet - INFO - Start running, host: dreamtech@Dreamtech-Ubuntu, work_directory: /data/liudong/MI-AOD/work_dirs/MI-AOD/20210423_094030
2021-04-23 09:52:16,575 - mmdet - INFO - workflow: [('train', 1)], max: 1 epochs
Traceback (most recent call last):
  File "./tools/train.py", line 267, in <module>
    main()
  File "./tools/train.py", line 203, in main
    distributed=distributed, validate=(not args.no_validate), timestamp=timestamp, meta=meta)
  File "/data/liudong/MI-AOD/mmdet/apis/train.py", line 122, in train_detector
    runner.run([data_loaders_L, data_loaders_U], cfg.workflow, cfg.total_epochs)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 192, in run
    epoch_runner([data_loaders[i], data_loaders_u[i]], **kwargs)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 78, in train
    outputs = self.model.train_step(X_U, self.optimizer, **kwargs)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/mmcv/parallel/distributed.py", line 36, in train_step
    output = self.module.train_step(*inputs[0], **kwargs[0])
  File "/data/liudong/MI-AOD/mmdet/models/detectors/base.py", line 228, in train_step
    losses = self(**data)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/liudong/MI-AOD/mmdet/core/fp16/decorators.py", line 51, in new_func
    return old_func(*args, **kwargs)
  File "/data/liudong/MI-AOD/mmdet/models/detectors/base.py", line 162, in forward
    return self.forward_train(x, img_metas, **kwargs)
  File "/data/liudong/MI-AOD/mmdet/models/detectors/single_stage.py", line 83, in forward_train
    losses = self.bbox_head.forward_train(x, img_metas, y_loc_img, y_cls_img, y_loc_img_ignore)
  File "/data/liudong/MI-AOD/mmdet/models/dense_heads/base_dense_head.py", line 81, in forward_train
    loss = self.L_wave_min(*loss_inputs, y_loc_img_ignore=y_loc_img_ignore)
  File "/data/liudong/MI-AOD/mmdet/core/fp16/decorators.py", line 131, in new_func
    return old_func(*args, **kwargs)
  File "/data/liudong/MI-AOD/mmdet/models/dense_heads/MIAOD_head.py", line 483, in L_wave_min
    if value.isnan():
AttributeError: 'Tensor' object has no attribute 'isnan'
Traceback (most recent call last):
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/torch/distributed/launch.py", line 263, in <module>
    main()
  File "/home/dreamtech/.conda/envs/miaod/lib/python3.7/site-packages/torch/distributed/launch.py", line 259, in main
    cmd=cmd)
subprocess.CalledProcessError: Command '['/home/dreamtech/.conda/envs/miaod/bin/python', '-u', './tools/train.py', '--local_rank=0', 'configs/MIAOD.py', '--launcher', 'pytorch']' returned non-zero exit status 1.
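
For reference: Tensor.isnan() is not available as a tensor method on the PyTorch 1.4.0 reported in the environment info above (the method only exists in later releases), which is why the check at MIAOD_head.py line 483 fails; the function form torch.isnan() does exist on 1.4.0. Below is a minimal sketch of a version-tolerant NaN check, assuming that line only needs to test the loss tensor for NaN; "value" mirrors the variable name in the traceback, and the helper name has_nan is made up here for illustration.

import torch

# Version-tolerant NaN test: the Tensor.isnan() method is missing on older
# PyTorch builds such as 1.4.0, but the torch.isnan() function is available.
def has_nan(value: torch.Tensor) -> bool:
    return bool(torch.isnan(value).any())

# Hypothetical use in place of the failing check inside L_wave_min:
# if has_nan(value):
#     ...  # handle the NaN loss term as the original code intends

Alternatively, upgrading to a newer PyTorch release where value.isnan() exists should avoid the error without any code changes.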

Labels: duplicate (This issue or pull request already exists), package error (Error from external package)