AttributeError: Can't pickle local object 'start.<locals>.<lambda>' #198
Comments
Unfortunately, Windows is not supported at the moment (due specifically to this pickling issue). Closing this ticket as a dupe of #36.
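For context on the pickling issue: on Windows, Python's multiprocessing uses the spawn start method, so anything handed to a child process must be picklable, and locally defined lambdas (such as the callables TFManager.start() appears to register on each executor) are not. A minimal, hypothetical sketch of the same failure mode, with names that are illustrative rather than taken from the TensorFlowOnSpark code:

```python
# Minimal sketch (not TensorFlowOnSpark code): under the "spawn" start method,
# which is the default on Windows, multiprocessing pickles the Process object,
# and a lambda defined inside a function cannot be pickled.
import multiprocessing as mp


def start():
    # Local lambda, analogous in spirit to the callables registered inside
    # TFManager.start(); its qualified name is 'start.<locals>.<lambda>'.
    target = lambda: print("child running")
    p = mp.Process(target=target)
    p.start()  # AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
    p.join()


if __name__ == "__main__":
    start()
```

Moving the target to a module-level function, or running on a platform where fork is available, avoids the error, which is why the same job runs cleanly on most Linux setups.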
win10 + tensorflowonspark:
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
19/05/09 10:38:45 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 7331 bytes)
19/05/09 10:38:45 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
Driver stacktrace:
I ran into the same problem on macOS. Does anyone have a solution?
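One thing worth checking for a report like this: the error only appears when multiprocessing is using the spawn start method. That is the default on Windows, and it also became the macOS default in Python 3.8, which would explain hitting it on a Mac. A quick, hypothetical diagnostic:

```python
# Print which start method multiprocessing will use on this machine.
# "spawn" pickles everything sent to the child (and fails on local lambdas);
# "fork" does not, which is why the same code typically works on Linux.
import multiprocessing as mp

print(mp.get_start_method())
```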
F:\tool\python35\python.exe F:/duhanmin_py/人脸识别TensorFlowOnSpark/人脸识别.py
18/01/11 11:34:00 INFO SparkContext: Running Spark version 1.6.1
18/01/11 11:34:00 INFO SecurityManager: Changing view acls to: zyxrdu
18/01/11 11:34:00 INFO SecurityManager: Changing modify acls to: zyxrdu
18/01/11 11:34:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zyxrdu); users with modify permissions: Set(zyxrdu)
18/01/11 11:34:01 INFO Utils: Successfully started service 'sparkDriver' on port 53746.
18/01/11 11:34:01 INFO Slf4jLogger: Slf4jLogger started
18/01/11 11:34:01 INFO Remoting: Starting remoting
18/01/11 11:34:01 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.190.229:53759]
18/01/11 11:34:01 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 53759.
18/01/11 11:34:01 INFO SparkEnv: Registering MapOutputTracker
18/01/11 11:34:01 INFO SparkEnv: Registering BlockManagerMaster
18/01/11 11:34:01 INFO DiskBlockManager: Created local directory at C:\Users\zyxrdu\AppData\Local\Temp\blockmgr-45c437cd-3173-47eb-bc9a-d80af7333153
18/01/11 11:34:01 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
18/01/11 11:34:01 INFO SparkEnv: Registering OutputCommitCoordinator
18/01/11 11:34:01 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
18/01/11 11:34:01 INFO Utils: Successfully started service 'SparkUI' on port 4041.
18/01/11 11:34:01 INFO SparkUI: Started SparkUI at http://192.168.190.229:4041
18/01/11 11:34:01 INFO Executor: Starting executor ID driver on host localhost
18/01/11 11:34:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53778.
18/01/11 11:34:01 INFO NettyBlockTransferService: Server created on 53778
18/01/11 11:34:01 INFO BlockManagerMaster: Trying to register BlockManager
18/01/11 11:34:01 INFO BlockManagerMasterEndpoint: Registering block manager localhost:53778 with 511.1 MB RAM, BlockManagerId(driver, localhost, 53778)
18/01/11 11:34:01 INFO BlockManagerMaster: Registered BlockManager
2018-01-11 11:34:01,735 INFO (MainThread-12652) Reserving TFSparkNodes w/ TensorBoard
2018-01-11 11:34:01,735 INFO (MainThread-12652) listening for reservations at ('192.168.190.229', 53780)
2018-01-11 11:34:01,735 INFO (MainThread-12652) Starting TensorFlow on executors
2018-01-11 11:34:01,969 INFO (MainThread-12652) Waiting for TFSparkNodes to start
2018-01-11 11:34:01,969 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:02 INFO SparkContext: Starting job: foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257
18/01/11 11:34:02 INFO DAGScheduler: Got job 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257) with 5 output partitions
18/01/11 11:34:02 INFO DAGScheduler: Final stage: ResultStage 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257)
18/01/11 11:34:02 INFO DAGScheduler: Parents of final stage: List()
18/01/11 11:34:02 INFO DAGScheduler: Missing parents: List()
18/01/11 11:34:02 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257), which has no missing parents
18/01/11 11:34:02 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 18.3 KB, free 18.3 KB)
18/01/11 11:34:02 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 31.0 KB)
18/01/11 11:34:02 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:53778 (size: 12.6 KB, free: 511.1 MB)
18/01/11 11:34:02 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
18/01/11 11:34:02 INFO DAGScheduler: Submitting 5 missing tasks from ResultStage 0 (PythonRDD[1] at foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257)
18/01/11 11:34:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 5 tasks
18/01/11 11:34:02 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, partition 3,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/01/11 11:34:02 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:02 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:02 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
2018-01-11 11:34:02,975 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:03,977 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:04 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
18/01/11 11:34:04 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, partition 4,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:04 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
18/01/11 11:34:04 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
18/01/11 11:34:04 INFO TaskSchedulerImpl: Cancelling stage 0
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:04 INFO TaskSchedulerImpl: Stage 0 was cancelled
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 3.0 in stage 0.0 (TID 3)
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:04 INFO DAGScheduler: ResultStage 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257) failed in 2.456 s
18/01/11 11:34:04 INFO DAGScheduler: Job 0 failed: foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257, took 2.595130 s
Exception in thread Thread-3:
Traceback (most recent call last):
File "F:\tool\python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "F:\tool\python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py", line 257, in _start
background=(input_mode == InputMode.SPARK)))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 764, in foreachPartition
self.mapPartitions(func).count() # Force evaluation
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 1004, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 995, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 869, in fold
vals = self.mapPartitions(func).collect()
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 771, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "F:\tool\python35\lib\site-packages\py4j\java_gateway.py", line 1160, in call
answer, self.gateway_client, self.target_id, self.name)
File "F:\tool\python35\lib\site-packages\py4j\protocol.py", line 320, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
2018-01-11 11:34:05,024 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:05 INFO Executor: Executor killed task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, localhost): TaskKilled (killed intentionally)
18/01/11 11:34:05 WARN PythonRunner: Incomplete task interrupted: Attempting to kill Python Worker
18/01/11 11:34:05 INFO Executor: Executor killed task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 4.0 in stage 0.0 (TID 4, localhost): TaskKilled (killed intentionally)
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
18/01/11 11:34:05 WARN PythonRunner: Incomplete task interrupted: Attempting to kill Python Worker
18/01/11 11:34:05 INFO Executor: Executor killed task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost): TaskKilled (killed intentionally)
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
18/01/11 11:34:05 INFO Executor: Executor killed task 3.0 in stage 0.0 (TID 3)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3, localhost): TaskKilled (killed intentionally)
18/01/11 11:34:05 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
2018-01-11 11:34:06,025 INFO (MainThread-12652) waiting for 5 reservations
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
2018-01-11 11:34:07,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:08,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:09,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:10,041 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:11,041 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:12,044 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:13,045 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:14,058 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:15,061 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:16,066 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:17,082 INFO (MainThread-12652) waiting for 5 reservations