
Fixed broken pyspark shell. #444


Closed · wants to merge 2 commits

Conversation

rxin
Contributor

@rxin rxin commented Apr 18, 2014

No description provided.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

rxin referenced this pull request Apr 18, 2014
pyspark require Python2, failing if system default is Py3 from shell.py

Python alternative for #392; managed from shell.py

Author: AbhishekKr <abhikumar163@gmail.com>

Closes #399 from abhishekkr/pyspark_shell and squashes the following commits:

134bdc9 [AbhishekKr] pyspark require Python2, failing if system default is Py3 from shell.py
@yinxusen
Contributor

Yep, I think the Python shell's documentation should be updated at the same time. sys.version_info only became a named tuple in Python 2.7. To get this to work on 2.6, it needs to be accessed as a regular tuple:

if sys.version_info[0] != 2

See line 25.
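
The distinction matters because attribute access on sys.version_info raises AttributeError on Python 2.6, while indexing works on every version. A minimal sketch of a 2.6-compatible check (illustrative helper, not the exact shell.py patch):

```python
import sys

def supported(version_info):
    """Return True when the interpreter's major version is 2.

    Indexing works on both 2.6 (plain tuple) and 2.7+ (named tuple);
    version_info.major would raise AttributeError on 2.6.
    """
    return version_info[0] == 2

# In shell.py, the failing branch would print an error and sys.exit(1).
print(supported(sys.version_info))
```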

@rxin
Contributor Author

rxin commented Apr 18, 2014

Can you submit a PR for that?

@yinxusen
Contributor

Sure. I'll modify it.

@rxin
Contributor Author

rxin commented Apr 18, 2014

Actually, never mind, I'll do it here. You can't change just that line without changing the indentation anyway.

@rxin
Contributor Author

rxin commented Apr 18, 2014

OK, pushed.

@yinxusen
Contributor

Yes, I agree with you.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@AmplabJenkins

Merged build finished.

@AmplabJenkins

Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14229/

@AmplabJenkins

Merged build finished.

@AmplabJenkins

Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14232/

@rxin
Contributor Author

rxin commented Apr 18, 2014

Jenkins, retest this please.

@AmplabJenkins

Merged build triggered.

@AmplabJenkins

Merged build started.

@rxin
Contributor Author

rxin commented Apr 18, 2014

OK, I merged this since none of the tests exercise this code path, and it is a hotfix.

asfgit pushed a commit that referenced this pull request Apr 18, 2014
Author: Reynold Xin <rxin@apache.org>

Closes #444 from rxin/pyspark and squashes the following commits:

fc11356 [Reynold Xin] Made the PySpark shell version checking compatible with Python 2.6.
571830b [Reynold Xin] Fixed broken pyspark shell.

(cherry picked from commit 81a152c)
Signed-off-by: Reynold Xin <rxin@apache.org>
@asfgit asfgit closed this in 81a152c Apr 18, 2014
@AmplabJenkins

Merged build finished. All automated tests passed.

@AmplabJenkins

All automated tests passed.
Refer to this link for build results: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/14239/

pwendell added a commit to pwendell/spark that referenced this pull request May 12, 2014
Clarify that Python 2.7 is only needed for MLlib
pdeyhim pushed a commit to pdeyhim/spark-1 that referenced this pull request Jun 25, 2014
Author: Reynold Xin <rxin@apache.org>

Closes apache#444 from rxin/pyspark and squashes the following commits:

fc11356 [Reynold Xin] Made the PySpark shell version checking compatible with Python 2.6.
571830b [Reynold Xin] Fixed broken pyspark shell.
@rxin rxin deleted the pyspark branch August 13, 2014 08:01
andrewor14 pushed a commit to andrewor14/spark that referenced this pull request Jan 8, 2015
Clarify that Python 2.7 is only needed for MLlib
(cherry picked from commit 4f0c361)

Signed-off-by: Patrick Wendell <pwendell@gmail.com>
markhamstra pushed a commit to markhamstra/spark that referenced this pull request Nov 7, 2017
* Use a list of environment variables for JVM options.

* Fix merge conflicts.
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
Fixes issues apache#444

Signed-off-by: Melvin Hillsman <mrhillsman@gmail.com>
arjunshroff pushed a commit to arjunshroff/spark that referenced this pull request Nov 24, 2020
yaooqinn added a commit that referenced this pull request Mar 28, 2025
…t/txt format plans

### What changes were proposed in this pull request?
This PR adds a download link to the ExecutionPage for SVG/dot/txt format plans.
![image](https://github.com/user-attachments/assets/3359ac26-b4a6-4952-9bf0-b6ac22e6e199)

### Why are the changes needed?

These downloaded assets can improve the UX for sharing or porting plans to papers, social media, external advanced visualization tools, etc.

### Does this PR introduce _any_ user-facing change?
Yes, UI changes

### How was this patch tested?
- SVG
```svg
<svg xmlns="http://www.w3.org/2000/svg" viewBox="-16 -16 304.046875 95.53125" width="304.046875" height="95.53125"><style>/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

.label {
  font-size: 0.85rem;
  font-weight: normal;
  text-shadow: none;
  color: #333;
}

svg g.cluster rect {
  fill: #A0DFFF;
  stroke: #3EC0FF;
  stroke-width: 1px;
}

svg g.node rect {
  fill: #C3EBFF;
  stroke: #3EC0FF;
  stroke-width: 1px;
}

/* Highlight the SparkPlan node name */
svg text :first-child:not(.stageId-and-taskId-metrics) {
  font-weight: bold;
}

svg text {
  fill: #333;
}

svg path {
  stroke: #444;
  stroke-width: 1.5px;
}

/* Breaks the long string like file path when showing tooltips */
.tooltip-inner {
  word-wrap:break-word;
}

/* Breaks the long job url list when showing Details for Query in SQL */
.job-url {
  word-wrap: break-word;
}

svg g.node rect.selected {
  fill: #E25A1CFF;
  stroke: #317EACFF;
  stroke-width: 2px;
}
svg g.node rect.linked {
  fill: #FFC106FF;
  stroke: #317EACFF;
  stroke-width: 2px;
}

svg path.linked {
  fill: #317EACFF;
  stroke: #317EACFF;
  stroke-width: 2px;
}
</style><g><g class="output"><g class="clusters"/><g class="edgePaths"/><g class="edgeLabels"/><g class="nodes"><g class="node" id="node0" transform="translate(136.0234375,31.765625)" style="opacity: 1;" data-original-title="" title=""><rect rx="5" ry="5" x="-136.0234375" y="-31.765625" width="272.046875" height="63.53125" class="label-container"/><g class="label" transform="translate(0,0)"><g transform="translate(-131.0234375,-26.765625)"><foreignObject width="262.046875" height="53.53125"><div xmlns="http://www.w3.org/1999/xhtml" style="display: inline-block; white-space: nowrap;"><br /><b>Execute CreateHiveTableAsSelectCommand</b><br /><br /></div></foreignObject></g></g></g></g></g></g></svg>
```
![plan (4)](https://github.com/user-attachments/assets/ba9dab38-515b-4ebf-82ab-2cd35e42fe8f)
- DOT
```dot
digraph G {
  0 [id="node0" labelType="html" label="<b>Execute InsertIntoHadoopFsRelationCommand</b><br><br>task commit time: 7 ms<br>number of written files: 1<br>job commit time: 24 ms<br>number of output rows: 1<br>number of dynamic part: 0<br>written output: 468.0 B" tooltip="Execute InsertIntoHadoopFsRelationCommand file:/Users/hzyaoqin/spark/spark-warehouse/t, false, Parquet, [parquet.compression=zstd, serialization.format=1, mergeschema=false, __hive_compatible_bucketed_table_insertion__=true], Append, `spark_catalog`.`default`.`t`, org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe, org.apache.spark.sql.execution.datasources.InMemoryFileIndex(file:/Users/hzyaoqin/spark/spark-warehouse/t), [c]"];
  1 [id="node1" labelType="html" label="<br><b>WriteFiles</b><br><br>" tooltip="WriteFiles"];

  subgraph cluster2 {
    isCluster="true";
    id="cluster2";
    label="WholeStageCodegen (1)\n \nduration: 158 ms";
    tooltip="WholeStageCodegen (1)";
      3 [id="node3" labelType="html" label="<br><b>Project</b><br><br>" tooltip="Project [1 AS c#0]"];
  4 [id="node4" labelType="html" label="<b>Scan OneRowRelation</b><br><br>number of output rows: 1" tooltip="Scan OneRowRelation[]"];
  }

  1->0;

  3->1;

  4->3;

}
```

- TXT

[plan.txt](https://github.com/user-attachments/files/19480587/plan.txt)
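
As a side note on the DOT output above: since the downloaded plan is plain text, it can be post-processed outside the UI. A small, hypothetical sketch (assuming only the simple `N->M;` edge syntax shown in the example) that extracts the leaf-to-root edges between plan node ids:

```python
import re

dot_plan = """
digraph G {
  1->0;
  3->1;
  4->3;
}
"""

def edges(dot_text):
    """Extract (child, parent) node-id pairs from simple `a->b;` edge lines."""
    return [(int(a), int(b))
            for a, b in re.findall(r"(\d+)\s*->\s*(\d+)\s*;", dot_text)]

print(edges(dot_plan))
```

For real rendering, the file can be fed directly to Graphviz (`dot -Tpng plan.dot`); the regex here is only a sketch and would not handle arbitrary DOT input.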

### Was this patch authored or co-authored using generative AI tooling?
no

Closes #50427 from yaooqinn/SPARK-51629.

Authored-by: Kent Yao <yao@apache.org>
Signed-off-by: Kent Yao <yao@apache.org>