AttributeError: Can't pickle local object 'main.<locals>.<lambda>' #181

Open
Ameliecc opened this issue May 31, 2022 · 6 comments

@Ameliecc

This error occurs when I execute train_semseg.py:

PS F:\pointnet_pointnet2_pytorch-master> python train_semseg.py --model pointnet2_sem_seg --test_area 5 --log_dir pointnet2_sem_seg
PARAMETER ...
Namespace(batch_size=16, decay_rate=0.0001, epoch=32, gpu='0', learning_rate=0.001, log_dir='pointnet2_sem_seg', lr_decay=0.7, model='pointnet2_sem_seg', npoint=4096, optimizer='Adam', step_size=10, test_area=5)
start loading training data ...
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 118/118 [00:09<00:00, 12.18it/s]
[1.1122853 1.1530312 1. 2.2862618 2.3985515 2.3416872 1.6953672
2.051836 1.7089869 3.416529 1.840006 2.7374067 1.3777069]
Totally 28940 samples in train set.
start loading test data ...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 46/46 [00:04<00:00, 10.26it/s]
[ 1.1516608 1.2053679 1. 11.941072 2.6087077 2.0597224
2.1135178 2.0812197 2.5563374 4.5242124 1.4960177 2.9274836
1.6089553]
Totally 12881 samples in test set.
The number of training data is: 28940
The number of test data is: 12881
Use pretrain model

Learning rate:0.000700
BN momentum updated to: 0.050000
Traceback (most recent call last):
  File "train_semseg.py", line 295, in <module>
    main(args)
  File "train_semseg.py", line 181, in main
    for i, (points, target) in tqdm(enumerate(trainDataLoader), total=len(trainDataLoader), smoothing=0.9):
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 355, in __iter__
    return self._get_iterator()
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 301, in _get_iterator
    return _MultiProcessingDataLoaderIter(self)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\site-packages\torch\utils\data\dataloader.py", line 914, in __init__
    w.start()
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\context.py", line 326, in _Popen
    return Popen(process_obj)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
    reduction.dump(process_obj, to_child)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'main.<locals>.<lambda>'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "F:\miniconda3\envs\pytorch_1.8_wsh\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

I don't know how to solve this problem. Could anyone give me a hand?

@XeonHis

XeonHis commented Jun 5, 2022

Try setting num_workers to 0 in train_semseg.py. You can find details in matterport/Mask_RCNN#93.
It works fine on my PC.
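
For illustration, a minimal self-contained sketch of that change (the TensorDataset here is only a stand-in for the S3DIS dataset that train_semseg.py actually builds; the part that matters is num_workers=0):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset: 64 dummy point blocks with per-point labels.
TRAIN_DATASET = TensorDataset(
    torch.randn(64, 4096, 9),
    torch.zeros(64, 4096, dtype=torch.long),
)

# num_workers=0 keeps data loading in the main process, so the DataLoader never
# has to pickle the dataset (or any lambda passed to it) for a worker process,
# which is the step that fails under Windows' "spawn" start method.
trainDataLoader = DataLoader(
    TRAIN_DATASET,
    batch_size=16,
    shuffle=True,
    num_workers=0,
    drop_last=True,
)

points, target = next(iter(trainDataLoader))
print(points.shape, target.shape)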

@Ameliecc
Author

Ameliecc commented Oct 11, 2022 via email

@morankim

It is related to memory overflow. You should change the file loading to HDF5-type files; then you can set num_workers > 0.
Here is sample code that loads an HDF5-type dataset.

import os

import h5py
import numpy as np
import torch
import torch.utils.data as data

BASE_DIR = os.path.dirname(os.path.abspath(__file__))


def _get_data_files(list_filename):
    with open(list_filename) as f:
        return [line.rstrip() for line in f]


def _load_data_file(name):
    f = h5py.File(name, "r")
    data = f["data"][:]
    label = f["label"][:]
    return data, label


class Indoor3DSemSeg(data.Dataset):
    def __init__(self, num_points, train=True, download=True, data_precent=1.0):
        super().__init__()
        self.data_precent = data_precent
        self.folder = "indoor3d_sem_seg_hdf5_data"
        self.data_dir = os.path.join('../', BASE_DIR, self.folder)
        # self.url = "https://shapenet.cs.stanford.edu/media/indoor3d_sem_seg_hdf5_data.zip"

        labelweights = np.zeros(13)

        self.train, self.num_points = train, num_points

        all_files = _get_data_files(os.path.join(self.data_dir, "all_files.txt"))
        room_filelist = _get_data_files(
            os.path.join(self.data_dir, "room_filelist.txt")
        )

        # Accumulate a per-class histogram to build label weights.
        for f in all_files:
            _, labels = _load_data_file(os.path.join(BASE_DIR, f))
            tmp, _ = np.histogram(labels, range(14))
            labelweights += tmp

        labelweights = labelweights.astype(np.float32)
        labelweights = labelweights / np.sum(labelweights)
        self.labelweights = np.power(np.amax(labelweights) / labelweights, 1 / 3.0)

        data_batchlist, label_batchlist = [], []
        for f in all_files:
            data, label = _load_data_file(os.path.join(BASE_DIR, f))
            data_batchlist.append(data)
            label_batchlist.append(label)

        data_batches = np.concatenate(data_batchlist, 0)
        labels_batches = np.concatenate(label_batchlist, 0)

        # Hold out Area_5 as the test split.
        test_area = "Area_5"
        train_idxs, test_idxs = [], []
        for i, room_name in enumerate(room_filelist):
            if test_area in room_name:
                test_idxs.append(i)
            else:
                train_idxs.append(i)

        if self.train:
            self.points = data_batches[train_idxs, ...]
            self.labels = labels_batches[train_idxs, ...]
        else:
            self.points = data_batches[test_idxs, ...]
            self.labels = labels_batches[test_idxs, ...]

    def __getitem__(self, idx):
        # Shuffle the point order within the sampled block.
        pt_idxs = np.arange(0, self.num_points)
        np.random.shuffle(pt_idxs)

        current_points = torch.from_numpy(self.points[idx, pt_idxs].copy()).float()
        current_labels = torch.from_numpy(self.labels[idx, pt_idxs].copy()).long()

        return current_points, current_labels

    def __len__(self):
        return int(self.points.shape[0] * self.data_precent)

    def set_num_points(self, pts):
        self.num_points = pts

    def randomize(self):
        pass
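
For reference, a minimal usage sketch of this class, assuming the indoor3d_sem_seg_hdf5_data folder has already been downloaded next to the file and the class above is importable; the variable names just mirror train_semseg.py:

from torch.utils.data import DataLoader

# The __main__ guard is required on Windows, where worker processes are spawned.
if __name__ == "__main__":
    # The dataset holds plain numpy arrays (no lambdas or open file handles),
    # so worker processes can pickle it and num_workers > 0 works under "spawn".
    TRAIN_DATASET = Indoor3DSemSeg(num_points=4096, train=True)
    trainDataLoader = DataLoader(
        TRAIN_DATASET,
        batch_size=16,
        shuffle=True,
        num_workers=4,
        drop_last=True,
    )
    points, target = next(iter(trainDataLoader))
    print(points.shape, target.shape)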

@assissanliu

@Ameliecc Did you manage to solve the problem this way?

@morankim

morankim commented Dec 8, 2023 via email

@wqlevi

wqlevi commented Jan 22, 2024

Hi,

I've run into a similar error when training my model, but I got around it by setting num_workers = 1 in the dataloader.

Hope this helps!
