Commit ca2cc36

Author: Suzanne Scala

metadata for new site (apache#252)

1 parent 3eab411 commit ca2cc36

21 files changed, 141 additions(+), 82 deletions(-)
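Every file in this commit gets the same front-matter rewrite: post_title becomes title, nav_title becomes navigationTitle, menu_order becomes menuWeight, post_excerpt becomes excerpt, feature_maturity becomes featureMaturity (with its value blanked), the enterprise key is dropped, and a layout: layout.pug key is added. As a rough illustration only (no such script ships with this commit, and the file handling, key ordering, and names below are assumptions), a minimal Python sketch of that mapping could look like this:

# migrate_front_matter.py -- hypothetical helper, not part of this commit.
# It applies the old-key -> new-key mapping shown in the diffs below.
import re
import sys

# Old Jekyll-style key -> new key; None means the key is removed.
KEY_MAP = {
    "post_title": "title",
    "nav_title": "navigationTitle",
    "menu_order": "menuWeight",
    "post_excerpt": "excerpt",
    "feature_maturity": "featureMaturity",
    "enterprise": None,
}

# Keys whose values are blanked in the new metadata.
EMPTIED_KEYS = {"excerpt", "featureMaturity"}


def migrate_front_matter(text):
    """Rewrite the leading ----delimited front-matter block of a Markdown page."""
    match = re.match(r"^---\n(.*?)\n---\n", text, flags=re.S)
    if not match:
        return text  # no front matter; leave the page untouched
    new_lines = ["layout: layout.pug"]  # every page gains the new layout key
    for line in match.group(1).splitlines():
        if ":" not in line:
            new_lines.append(line)
            continue
        key, _, value = line.partition(":")
        new_key = KEY_MAP.get(key.strip(), key.strip())
        if new_key is None:
            continue  # e.g. 'enterprise' is dropped entirely
        if new_key in EMPTIED_KEYS or not value.strip():
            new_lines.append(f"{new_key}:")
        else:
            new_lines.append(f"{new_key}:{value}")
    return "---\n" + "\n".join(new_lines) + "\n\n---\n" + text[match.end():]


if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            content = f.read()
        with open(path, "w") as f:
            f.write(migrate_front_matter(content))

Invoked as, say, python migrate_front_matter.py docs/*.md (a hypothetical invocation), it would produce front matter in the shape shown in the hunks below, modulo per-file key order and the hand-tuned menuWeight value in install.md.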

docs/custom-docker.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Custom Docker Images
-menu_order: 95
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Custom Docker Images
+menuWeight: 95
+featureMaturity:
+
 ---

 **Note:** Customizing the Spark image Mesosphere provides is supported. However, customizations have the potential to adversely affect the integration between Spark and DC/OS. In situations where Mesosphere support suspects a customization may be adversely impacting Spark with DC/OS, Mesosphere support may request that the customer reproduce the issue with an unmodified

docs/fault-tolerance.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Fault Tolerance
-menu_order: 100
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Fault Tolerance
+menuWeight: 100
+featureMaturity:
+
 ---

 Failures such as host, network, JVM, or application failures can affect the behavior of three types of Spark components:

docs/hdfs.md

Lines changed: 6 additions & 4 deletions
@@ -1,8 +1,10 @@
 ---
-post_title: Integration with HDFS and S3
-nav_title: HDFS
-menu_order: 20
-enterprise: 'no'
+layout: layout.pug
+excerpt:
+title: Integration with HDFS and S3
+navigationTitle: HDFS
+menuWeight: 20
+
 ---

 # HDFS

docs/history-server.md

Lines changed: 6 additions & 3 deletions
@@ -1,7 +1,10 @@
 ---
-post_title: History Server
-menu_order: 30
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: History Server
+menuWeight: 30
+
 ---

 DC/OS Apache Spark includes The [Spark History Server][3]. Because the history server requires HDFS, you must explicitly enable it.

docs/index.md

Lines changed: 7 additions & 6 deletions
@@ -1,10 +1,11 @@
 ---
-post_title: Spark version 1.1.0-2.1.1
-nav_title: v1.1.0-2.1.1
-menu_order: 0
-post_excerpt: ""
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+title:
+navigationTitle:
+menuWeight: 0
+excerpt:
+featureMaturity:
+
 ---

 Welcome to the documentation for the DC/OS Apache Spark. For more information about new and changed features, see the [release notes](https://github.com/mesosphere/spark-build/releases/).

docs/install.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Install and Customize
-menu_order: 0
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Install and Customize
+menuWeight: 1
+featureMaturity:
+
 ---

 Spark is available in the Universe and can be installed by using either the GUI or the DC/OS CLI.

docs/job-scheduling.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Job Scheduling
-menu_order: 110
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Job Scheduling
+menuWeight: 110
+featureMaturity:
+
 ---

 This document is a simple overview of material described in greater detail in the Apache Spark documentation [here][1] and [here][2].

docs/kerberos.md

Lines changed: 6 additions & 4 deletions
@@ -1,8 +1,10 @@
 ---
-post_title: Kerberos
-nav_title: Kerberos
-menu_order: 120
-enterprise: 'no'
+layout: layout.pug
+excerpt:
+title: Kerberos
+navigationTitle: Kerberos
+menuWeight: 120
+
 ---

docs/limitations.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Limitations
-menu_order: 135
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Limitations
+menuWeight: 135
+featureMaturity:
+
 ---

 * Mesosphere does not provide support for Spark app development, such as writing a Python app to process data from Kafka or writing Scala code to process data from HDFS.

docs/quickstart.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Spark Quickstart
-menu_order: 10
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Spark Quickstart
+menuWeight: 10
+featureMaturity:
+
 ---

 This tutorial will get you up and running in minutes with Spark. You will install the DC/OS Apache Spark service.

docs/release-notes.md

Lines changed: 7 additions & 5 deletions
@@ -1,9 +1,11 @@
 ---
-post_title: Release Notes
-menu_order: 140
-post_excerpt: ""
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+title: Release Notes
+menuWeight: 140
+excerpt:
+featureMaturity:
+
 ---

 ## Version 2.2.0-2.2.0-2-beta

docs/run-job.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Run a Spark Job
-menu_order: 80
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Run a Spark Job
+menuWeight: 80
+featureMaturity:
+
 ---
 1. Before submitting your job, upload the artifact (e.g., jar file)
 to a location visible to the cluster (e.g., HTTP, S3, or HDFS). [Learn more][13].

docs/runtime-config-change.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Runtime Configuration Change
-menu_order: 70
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Runtime Configuration Change
+menuWeight: 70
+featureMaturity:
+
 ---

 You can customize DC/OS Apache Spark in-place when it is up and running.

docs/security.md

Lines changed: 6 additions & 3 deletions
@@ -1,7 +1,10 @@
 ---
-post_title: Security
-menu_order: 40
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Security
+menuWeight: 40
+
 ---

 This topic describes how to configure DC/OS service accounts for Spark.

docs/spark-shell.md

Lines changed: 6 additions & 3 deletions
@@ -1,7 +1,10 @@
 ---
-post_title: Interactive Spark Shell
-menu_order: 90
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Interactive Spark Shell
+menuWeight: 90
+
 ---

 # Interactive Spark Shell

docs/spark-versions.md

Lines changed: 7 additions & 3 deletions
@@ -1,7 +1,11 @@
 ---
-post_title: Spark Distributions
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+menuWeight: 0
+excerpt:
+title: Spark Distributions
+featureMaturity:
+
 ---

 https://downloads.mesosphere.com/spark/assets/spark-1.6.0.tgz <br>

docs/troubleshooting.md

Lines changed: 6 additions & 3 deletions
@@ -1,7 +1,10 @@
 ---
-post_title: Troubleshooting
-menu_order: 125
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Troubleshooting
+menuWeight: 125
+
 ---

 # Dispatcher

docs/uninstall.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Uninstall
-menu_order: 60
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Uninstall
+menuWeight: 60
+featureMaturity:
+
 ---

 dcos package uninstall --app-id=<app-id> spark

docs/upgrade.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Upgrade
-menu_order: 50
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Upgrade
+menuWeight: 50
+featureMaturity:
+
 ---

 1. Go to the **Universe** > **Installed** page of the DC/OS GUI. Hover over your Spark Service to see the **Uninstall** button, then select it. Alternatively, enter the following from the DC/OS CLI:

docs/usage-examples.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Usage Example
-menu_order: 10
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Usage Example
+menuWeight: 10
+featureMaturity:
+
 ---

 1. Perform a default installation by following the instructions in the Install and Customize section of this topic.

docs/version-policy.md

Lines changed: 7 additions & 4 deletions
@@ -1,8 +1,11 @@
 ---
-post_title: Version Policy
-menu_order: 130
-feature_maturity: ""
-enterprise: 'no'
+layout: layout.pug
+navigationTitle:
+excerpt:
+title: Version Policy
+menuWeight: 130
+featureMaturity:
+
 ---

 We have selected the latest version of the [Apache Spark](http://spark.apache.org) stable release train for new releases. We support HDFS version 2.6 by default and version 2.7.
