
AttributeError: Can't pickle local object 'start.<locals>.<lambda>' #198

Closed

duhanmin opened this issue Jan 11, 2018 · 3 comments

@duhanmin

F:\tool\python35\python.exe F:/duhanmin_py/人脸识别TensorFlowOnSpark/人脸识别.py
18/01/11 11:34:00 INFO SparkContext: Running Spark version 1.6.1
18/01/11 11:34:00 INFO SecurityManager: Changing view acls to: zyxrdu
18/01/11 11:34:00 INFO SecurityManager: Changing modify acls to: zyxrdu
18/01/11 11:34:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(zyxrdu); users with modify permissions: Set(zyxrdu)
18/01/11 11:34:01 INFO Utils: Successfully started service 'sparkDriver' on port 53746.
18/01/11 11:34:01 INFO Slf4jLogger: Slf4jLogger started
18/01/11 11:34:01 INFO Remoting: Starting remoting
18/01/11 11:34:01 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.190.229:53759]
18/01/11 11:34:01 INFO Utils: Successfully started service 'sparkDriverActorSystem' on port 53759.
18/01/11 11:34:01 INFO SparkEnv: Registering MapOutputTracker
18/01/11 11:34:01 INFO SparkEnv: Registering BlockManagerMaster
18/01/11 11:34:01 INFO DiskBlockManager: Created local directory at C:\Users\zyxrdu\AppData\Local\Temp\blockmgr-45c437cd-3173-47eb-bc9a-d80af7333153
18/01/11 11:34:01 INFO MemoryStore: MemoryStore started with capacity 511.1 MB
18/01/11 11:34:01 INFO SparkEnv: Registering OutputCommitCoordinator
18/01/11 11:34:01 WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
18/01/11 11:34:01 INFO Utils: Successfully started service 'SparkUI' on port 4041.
18/01/11 11:34:01 INFO SparkUI: Started SparkUI at http://192.168.190.229:4041
18/01/11 11:34:01 INFO Executor: Starting executor ID driver on host localhost
18/01/11 11:34:01 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 53778.
18/01/11 11:34:01 INFO NettyBlockTransferService: Server created on 53778
18/01/11 11:34:01 INFO BlockManagerMaster: Trying to register BlockManager
18/01/11 11:34:01 INFO BlockManagerMasterEndpoint: Registering block manager localhost:53778 with 511.1 MB RAM, BlockManagerId(driver, localhost, 53778)
18/01/11 11:34:01 INFO BlockManagerMaster: Registered BlockManager
2018-01-11 11:34:01,735 INFO (MainThread-12652) Reserving TFSparkNodes w/ TensorBoard
2018-01-11 11:34:01,735 INFO (MainThread-12652) listening for reservations at ('192.168.190.229', 53780)
2018-01-11 11:34:01,735 INFO (MainThread-12652) Starting TensorFlow on executors
2018-01-11 11:34:01,969 INFO (MainThread-12652) Waiting for TFSparkNodes to start
2018-01-11 11:34:01,969 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:02 INFO SparkContext: Starting job: foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257
18/01/11 11:34:02 INFO DAGScheduler: Got job 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257) with 5 output partitions
18/01/11 11:34:02 INFO DAGScheduler: Final stage: ResultStage 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257)
18/01/11 11:34:02 INFO DAGScheduler: Parents of final stage: List()
18/01/11 11:34:02 INFO DAGScheduler: Missing parents: List()
18/01/11 11:34:02 INFO DAGScheduler: Submitting ResultStage 0 (PythonRDD[1] at foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257), which has no missing parents
18/01/11 11:34:02 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 18.3 KB, free 18.3 KB)
18/01/11 11:34:02 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 12.6 KB, free 31.0 KB)
18/01/11 11:34:02 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:53778 (size: 12.6 KB, free: 511.1 MB)
18/01/11 11:34:02 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
18/01/11 11:34:02 INFO DAGScheduler: Submitting 5 missing tasks from ResultStage 0 (PythonRDD[1] at foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257)
18/01/11 11:34:02 INFO TaskSchedulerImpl: Adding task set 0.0 with 5 tasks
18/01/11 11:34:02 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, partition 0,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, partition 1,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, localhost, partition 2,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, localhost, partition 3,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:02 INFO Executor: Running task 0.0 in stage 0.0 (TID 0)
18/01/11 11:34:02 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:02 INFO Executor: Running task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:02 INFO Executor: Running task 3.0 in stage 0.0 (TID 3)
2018-01-11 11:34:02,975 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:03,977 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:04 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)
org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

18/01/11 11:34:04 INFO TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, localhost, partition 4,PROCESS_LOCAL, 2064 bytes)
18/01/11 11:34:04 INFO Executor: Running task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:04 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

18/01/11 11:34:04 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
18/01/11 11:34:04 INFO TaskSchedulerImpl: Cancelling stage 0
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:04 INFO TaskSchedulerImpl: Stage 0 was cancelled
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 3.0 in stage 0.0 (TID 3)
18/01/11 11:34:04 INFO Executor: Executor is trying to kill task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:04 INFO DAGScheduler: ResultStage 0 (foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257) failed in 2.456 s
18/01/11 11:34:04 INFO DAGScheduler: Job 0 failed: foreachPartition at F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py:257, took 2.595130 s
Exception in thread Thread-3:
Traceback (most recent call last):
File "F:\tool\python35\lib\threading.py", line 914, in _bootstrap_inner
self.run()
File "F:\tool\python35\lib\threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFCluster.py", line 257, in _start
background=(input_mode == InputMode.SPARK)))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 764, in foreachPartition
self.mapPartitions(func).count() # Force evaluation
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 1004, in count
return self.mapPartitions(lambda i: [sum(1 for _ in i)]).sum()
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 995, in sum
return self.mapPartitions(lambda x: [sum(x)]).fold(0, operator.add)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 869, in fold
vals = self.mapPartitions(func).collect()
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 771, in collect
port = self.ctx._jvm.PythonRDD.collectAndServe(self._jrdd.rdd())
File "F:\tool\python35\lib\site-packages\py4j\java_gateway.py", line 1160, in call
answer, self.gateway_client, self.target_id, self.name)
File "F:\tool\python35\lib\site-packages\py4j\protocol.py", line 320, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.api.python.PythonRDD$.collectAndServe(PythonRDD.scala:405)
at org.apache.spark.api.python.PythonRDD.collectAndServe(PythonRDD.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:381)
at py4j.Gateway.invoke(Gateway.java:259)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:209)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 111, in main
File "F:\tool\spark-1.6.1-bin-2.5.0-cdh5.3.6\python\lib\pyspark.zip\pyspark\worker.py", line 106, in process
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 2346, in pipeline_func
return func(split, prev_func(split, iterator))
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 317, in func
return f(iterator)
File "F:\tool\python35\lib\site-packages\pyspark\rdd.py", line 759, in func
r = f(it)
File "F:\tool\python35\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 143, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control'], 'remote')
File "F:\tool\Python35\lib\site-packages\tensorflowonspark\TFManager.py", line 52, in start
mgr.start()
File "F:\tool\Python35\lib\multiprocessing\managers.py", line 479, in start
self._process.start()
File "F:\tool\Python35\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "F:\tool\Python35\lib\multiprocessing\context.py", line 313, in _Popen
return Popen(process_obj)
File "F:\tool\Python35\lib\multiprocessing\popen_spawn_win32.py", line 66, in init
reduction.dump(process_obj, to_child)
File "F:\tool\Python35\lib\multiprocessing\reduction.py", line 59, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRDD.scala:166)
at org.apache.spark.api.python.PythonRunner$$anon$1.<init>(PythonRDD.scala:207)
at org.apache.spark.api.python.PythonRunner.compute(PythonRDD.scala:125)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:70)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more

Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
2018-01-11 11:34:05,024 INFO (MainThread-12652) waiting for 5 reservations
18/01/11 11:34:05 INFO Executor: Executor killed task 2.0 in stage 0.0 (TID 2)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 2.0 in stage 0.0 (TID 2, localhost): TaskKilled (killed intentionally)
18/01/11 11:34:05 WARN PythonRunner: Incomplete task interrupted: Attempting to kill Python Worker
18/01/11 11:34:05 INFO Executor: Executor killed task 4.0 in stage 0.0 (TID 4)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 4.0 in stage 0.0 (TID 4, localhost): TaskKilled (killed intentionally)
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
18/01/11 11:34:05 WARN PythonRunner: Incomplete task interrupted: Attempting to kill Python Worker
18/01/11 11:34:05 INFO Executor: Executor killed task 1.0 in stage 0.0 (TID 1)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost): TaskKilled (killed intentionally)
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
18/01/11 11:34:05 INFO Executor: Executor killed task 3.0 in stage 0.0 (TID 3)
18/01/11 11:34:05 WARN TaskSetManager: Lost task 3.0 in stage 0.0 (TID 3, localhost): TaskKilled (killed intentionally)
18/01/11 11:34:05 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
2018-01-11 11:34:06,025 INFO (MainThread-12652) waiting for 5 reservations
Traceback (most recent call last):
File "", line 1, in
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "F:\tool\Python35\lib\multiprocessing\spawn.py", line 116, in _main
self = pickle.load(from_parent)
EOFError: Ran out of input
2018-01-11 11:34:07,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:08,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:09,040 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:10,041 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:11,041 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:12,044 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:13,045 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:14,058 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:15,061 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:16,066 INFO (MainThread-12652) waiting for 5 reservations
2018-01-11 11:34:17,082 INFO (MainThread-12652) waiting for 5 reservations

@leewyang
Contributor

Unfortunately, Windows is not supported at the moment (due specifically to this pickling issue). Closing this ticket as a dupe of #36.
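The root cause is reproducible with plain multiprocessing, outside Spark: under the 'spawn' start method (the only one available on Windows), the child process is bootstrapped by pickling the Process object, and pickle cannot serialize a function defined inside another function. A minimal sketch of the failure mode follows; the lambda stands in for the one created inside TFManager.start and is illustrative, not the project's actual code.

import multiprocessing

def start():
    # A local lambda, analogous to the one inside TFManager.start().
    # With 'spawn', starting the Process pickles it -- and pickle cannot
    # serialize a function defined locally inside another function.
    target = lambda: print("worker running")
    proc = multiprocessing.Process(target=target)
    proc.start()  # raises AttributeError in the parent during reduction.dump
    proc.join()

if __name__ == "__main__":
    # 'spawn' is forced on Windows; setting it explicitly makes the
    # repro behave identically on Linux and macOS.
    multiprocessing.set_start_method("spawn")
    try:
        start()
    except AttributeError as e:
        print(e)  # Can't pickle local object 'start.<locals>.<lambda>'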

@wwwa

wwwa commented May 9, 2019

win10 + tensorflowonspark

AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
[Previous line repeated 1 more time]
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 352, in func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 801, in func
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 179, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control', 'error'], 'remote')
File "D:\github\TensorFlowOnSpark\tensorflowonspark\TFManager.py", line 64, in start
mgr.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\managers.py", line 543, in start
self._process.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:453)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:588)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:571)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:313)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:311)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:305)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:305)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:292)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:286)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:945)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2101)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

19/05/09 10:38:45 INFO TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, localhost, executor driver, partition 1, PROCESS_LOCAL, 7331 bytes)
19/05/09 10:38:45 INFO Executor: Running task 1.0 in stage 0.0 (TID 1)
19/05/09 10:38:45 WARN TaskSetManager: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 377, in main
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 372, in process
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
[Previous line repeated 1 more time]
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 352, in func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 801, in func
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 179, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control', 'error'], 'remote')
File "D:\github\TensorFlowOnSpark\tensorflowonspark\TFManager.py", line 64, in start
mgr.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\managers.py", line 543, in start
self._process.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:453)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:588)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:571)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:313)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:311)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:305)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:305)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:292)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:286)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:945)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2101)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

19/05/09 10:38:45 ERROR TaskSetManager: Task 0 in stage 0.0 failed 1 times; aborting job
19/05/09 10:38:45 INFO TaskSchedulerImpl: Cancelling stage 0
19/05/09 10:38:45 INFO TaskSchedulerImpl: Killing all running tasks in stage 0: Stage cancelled
19/05/09 10:38:45 INFO Executor: Executor is trying to kill task 1.0 in stage 0.0 (TID 1), reason: Stage cancelled
19/05/09 10:38:45 INFO TaskSchedulerImpl: Stage 0 was cancelled
19/05/09 10:38:45 INFO DAGScheduler: ResultStage 0 (foreachPartition at C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflowonspark\TFCluster.py:321) failed in 1.942 s due to Job aborted due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent failure: Lost task 0.0 in stage 0.0 (TID 0, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 377, in main
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\worker.py", line 372, in process
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 2499, in pipeline_func
[Previous line repeated 1 more time]
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 352, in func
File "D:\spark\spark-2.4.2-bin-hadoop2.7\python\lib\pyspark.zip\pyspark\rdd.py", line 801, in func
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflowonspark\TFSparkNode.py", line 179, in _mapfn
TFSparkNode.mgr = TFManager.start(authkey, ['control', 'error'], 'remote')
File "D:\github\TensorFlowOnSpark\tensorflowonspark\TFManager.py", line 64, in start
mgr.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\managers.py", line 543, in start
self._process.start()
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\process.py", line 112, in start
self._popen = self._Popen(self)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\popen_spawn_win32.py", line 89, in init
reduction.dump(process_obj, to_child)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'start.<locals>.<lambda>'

    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:453)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:588)
    at org.apache.spark.api.python.PythonRunner$$anon$3.read(PythonRunner.scala:571)
    at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:406)
    at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
    at scala.collection.Iterator.foreach(Iterator.scala:941)
    at scala.collection.Iterator.foreach$(Iterator.scala:941)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at scala.collection.generic.Growable.$plus$plus$eq(Growable.scala:62)
    at scala.collection.generic.Growable.$plus$plus$eq$(Growable.scala:53)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:105)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:49)
    at scala.collection.TraversableOnce.to(TraversableOnce.scala:313)
    at scala.collection.TraversableOnce.to$(TraversableOnce.scala:311)
    at org.apache.spark.InterruptibleIterator.to(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toBuffer(TraversableOnce.scala:305)
    at scala.collection.TraversableOnce.toBuffer$(TraversableOnce.scala:305)
    at org.apache.spark.InterruptibleIterator.toBuffer(InterruptibleIterator.scala:28)
    at scala.collection.TraversableOnce.toArray(TraversableOnce.scala:292)
    at scala.collection.TraversableOnce.toArray$(TraversableOnce.scala:286)
    at org.apache.spark.InterruptibleIterator.toArray(InterruptibleIterator.scala:28)
    at org.apache.spark.rdd.RDD.$anonfun$collect$2(RDD.scala:945)
    at org.apache.spark.SparkContext.$anonfun$runJob$5(SparkContext.scala:2101)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:411)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)

Driver stacktrace:
19/05/09 10:38:45 INFO DAGScheduler: Job 0 failed: foreachPartition at C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\site-packages\tensorflowonspark\TFCluster.py:321, took 2.018948 s
2019-05-09 10:38:45,970 ERROR (Thread-3-15896) Exception in TF background thread
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "C:\Users\wjl\AppData\Local\Programs\Python\Python37\lib\multiprocessing\spawn.py", line 115, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
19/05/09 10:38:46 INFO Executor: Executor killed task 1.0 in stage 0.0 (TID 1), reason: Stage cancelled
19/05/09 10:38:46 WARN TaskSetManager: Lost task 1.0 in stage 0.0 (TID 1, localhost, executor driver): TaskKilled (Stage cancelled)
19/05/09 10:38:46 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
2019-05-09 10:38:46,871 INFO (MainThread-15896) waiting for 2 reservations
19/05/09 10:38:46 INFO SparkUI: Stopped Spark web UI at http://wjl-PC:4040
19/05/09 10:38:46 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
19/05/09 10:38:46 INFO MemoryStore: MemoryStore cleared
19/05/09 10:38:46 INFO BlockManager: BlockManager stopped
19/05/09 10:38:46 INFO BlockManagerMaster: BlockManagerMaster stopped
19/05/09 10:38:46 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
19/05/09 10:38:46 INFO SparkContext: Successfully stopped SparkContext
19/05/09 10:38:47 INFO ShutdownHookManager: Shutdown hook called
19/05/09 10:38:47 INFO ShutdownHookManager: Deleting directory C:\Users\wjl\AppData\Local\Temp\spark-35416d9b-d0c0-4096-ace1-a2aa46298b8c
19/05/09 10:38:47 INFO ShutdownHookManager: Deleting directory C:\Users\wjl\AppData\Local\Temp\localPyFiles-ca4a570a-3ad3-48dc-b383-7b85981633f1
19/05/09 10:38:47 INFO ShutdownHookManager: Deleting directory C:\Users\wjl\AppData\Local\Temp\spark-484de9bb-9a8f-45cc-b47d-41f4e655ab6f\pyspark-ee7a8d31-e772-436b-a333-2061231d2cfa
19/05/09 10:38:47 INFO ShutdownHookManager: Deleting directory C:\Users\wjl\AppData\Local\Temp\spark-484de9bb-9a8f-45cc-b47d-41f4e655ab6f

@chansonzhang

I met this problem on Mac; does anyone have a solution?
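On macOS the same spawn-time pickling can be the trigger: since Python 3.8, the default multiprocessing start method on macOS is 'spawn' rather than 'fork'. A small diagnostic sketch, offered as an assumption about the cause rather than an official fix from this project (fork has its own caveats on macOS):

import multiprocessing

if __name__ == "__main__":
    # Prints 'spawn' on Python 3.8+ for macOS, which produces the same
    # "Can't pickle local object" failure seen above on Windows.
    print(multiprocessing.get_start_method())
    # Forcing 'fork' (available on macOS/Linux, not Windows) skips the
    # pickling step entirely; set it before any Process is started.
    multiprocessing.set_start_method("fork", force=True)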
